Subscribe: Global Moxie - Blog

Big Medium - Full Feed

Big Medium is the design studio of Josh Clark and friends. We build forward-looking experiences for connected devices, mobile apps, and responsive websites.

Last Build Date: Wed, 14 Mar 2018 16:55:02 UT


The Juvet Agenda

Mon, 30 Oct 2017 16:12:43 UT

I had the privilege last month of joining 19 other designers, researchers, and writers to consider the future (both near and far) of artificial intelligence and machine learning. We headed into the woods—to the Juvet nature retreat in Norway—for several days of hard thinking. Under the northern lights, we considered the challenges and opportunities that AI presents for society, for business, for our craft—and for all of us individually.

Answers were elusive, but questions were plenty. We decided to share those questions, and the result is the Juvet Agenda. The agenda lays out the urgent themes surrounding AI and presents a set of provocations for teasing out a future we want to live in:

Artificial intelligence? It’s complicated. It’s the here and now of hyper-efficient algorithms, but it’s also the heady possibility of sentient systems. It might be history’s greatest opportunity or its worst existential threat — or maybe it will only optimize what we’ve already got. Whatever it is and whatever it might become, the thing is moving too fast for any of us to sit still. AI demands that we rethink our methods, our business models, maybe even our cultures.

In September 2017, 20 designers, urbanists, researchers, writers, and futurists gathered at the Juvet nature retreat among the fjords and forests of Norway. We came together to consider AI from a humanist perspective, to step outside the engineering perspective that dominates the field. Could we sort out AI’s contradictions? Could we describe its trajectory? Could we come to any conclusions?

Across three intense days the group captured ideas, played games, drew diagrams, and snapped photos. In the end, we arrived at more questions than answers — and Big Questions at that. These are not topics we can or should address alone, so we share them here.

Together these questions ask how we can shape AI for a world we want to live in. If we don’t decide for ourselves what that world looks like, the technology will decide for us. The future should not be self-driving; let’s steer the course together.

Stop Pretending You Really Know What AI Is

Sat, 09 Sep 2017 10:28:22 UT

“Artificial intelligence” is broadly used in everything from science fiction to the marketing of mundane consumer goods, and it no longer has much practical meaning, bemoans John Pavlus at Quartz. He surveys practitioners about what the phrase does and doesn’t mean:

It’s just a suitcase word enclosing a foggy constellation of “things”—plural—that do have real definitions and edges to them. All the other stuff you hear about—machine learning, deep learning, neural networks, what have you—are much more precise names for the various scientific, mathematical, and engineering methods that people employ within the field of AI.

But what’s so terrible about using the phrase “artificial intelligence” to enclose all that confusing detail—especially for all us non-PhDs? The words “artificial” and “intelligent” sound soothingly commonsensical when put together. But in practice, the phrase has an uncanny almost-meaning that sucks adjacent ideas and images into its orbit and spaghettifies them.

Me, I prefer to use “machine learning” for most of the algorithmic software I see and work with, but “AI” is definitely a convenient (if overused) shorthand.

AI Guesses Whether You're Gay or Straight from a Photo

Sat, 09 Sep 2017 09:43:35 UT

Well this seems ominous. The Guardian reports:

Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.

The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

The Tools I Use

Wed, 30 Aug 2017 20:14:04 UT

I’ve been getting outsized joy from An Event Apart’s series The Tools We Use, in which my favorite web designers and developers share their most cherished tools and gewgaws. I love learning about the props, rituals, and machinery that we lean into to make things happen. I’m a voyeur of work habits, always peeking behind the curtain of other people’s personal productivity setups.

Fair’s fair; it’s my turn to reciprocate. My entry in the series just went online:

I’m both a creature of routine and a captive to my own muscle memory. This makes me incredibly (often ridiculously) loyal to my tools, and it takes a lot for me to pitch one overboard for a new one.

I mean, look, I’ve used the same brand of pen religiously for over a decade. I’ve inhabited the same software for 25 years (hi there, BBEdit). When I do adopt new software it’s often based on the same metaphors and keyboard shortcuts as familiar apps (Sketch works an awful lot like Keynote) so that I don’t have to do heavy context switching among apps.

In other words, my tools bend to the way I work and think, instead of the reverse. This makes me conservative about jumping onto the latest and greatest, but it also lets me focus on the task at hand instead of learning new settings, features and workflows.

Related: I tend to use Apple’s stock Mac software: Safari, Mail, iWork, Calendar… Individually, they’re not all best in class, but as a set they fit hand in glove and of course enjoy special integration with the operating system. It’s a suite of apps that talk easily to each other across devices, too, reducing friction and letting me get to work.

Here’s my kit.

Check out the full post for the 37 tools I can’t live without.

Designing with Artificial Intelligence: The Dinosaur on a Surfboard

Tue, 29 Aug 2017 20:56:20 UT

It was a real treat to chat with one of my design heroes, Jeff Veen, as we mulled over the role of designers in a world of algorithms and machine learning. Jeff interviewed me for his podcast Presentable, a must-listen series of conversations about designing the future. Tune in to the episode: Designing with Artificial Intelligence: The Dinosaur on a Surfboard.

And yes, we do indeed talk about dinosaurs on surfboards, as well as using AI in your everyday work, the one-true-answer problem, comically evil search suggestions, the approaching golden age of AI mashups, the battle against bias, and how we might approach the training of algorithms as UX research at massive scale.

All of this is in the service of exploring the substantial design problems that have so far gone unaddressed in machine-generated content and interaction. There’s much hard and interesting work to be done here by designers. As I said to Jeff, a new kind of content requires a new kind of presentation:

It is “intelligence,” but it’s not ours. These machines think in a different way. … They’re weird. The machine logic is weird and surprising. One of the things I’m finding as we begin to design for data-generated content and interaction is that much of the work is actually just trying to anticipate the weirdness that comes out of these machines. […]

The presentation of the data-generated content is just as important as the algorithm. We’ve made incredible engineering gains in how machine learning and artificial intelligence work, especially with deep learning almost month to month—it’s these incredible leaps. The algorithms are becoming more and more accurate, and yet the presentation of them is busted.
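To make that concrete, here’s a tiny sketch of how a presentation layer might hedge a machine’s answer instead of asserting one true answer. The function name, labels, and thresholds are my own illustration, not anything from the episode:

```python
def present_prediction(predictions, high=0.85, low=0.5):
    """Choose a presentation style based on model confidence.

    predictions: list of (label, confidence) pairs, sorted descending.
    Instead of always stating one true answer, hedge the language
    when the machine is unsure, and fall back to offering choices.
    """
    label, confidence = predictions[0]
    if confidence >= high:
        return f"{label}"                       # confident: state it plainly
    if confidence >= low:
        return f"Possibly {label.lower()}"      # hedge the claim
    options = ", ".join(l for l, _ in predictions[:3])
    return f"Not sure. Could be: {options}"     # surface alternatives instead

print(present_prediction([("Golden retriever", 0.93)]))
print(present_prediction([("Muffin", 0.62), ("Chihuahua", 0.31)]))
```

The design choice is the point: the model’s number never reaches the user raw; the interface translates confidence into tone.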

Listen in to hear us talk about techniques, design principles, and even concepts of social justice that we can put to use today as we work with this new design material.

Four SXSW Panels for Your Consideration

Wed, 16 Aug 2017 19:35:53 UT

I proposed a solo talk for the SXSW Interactive Festival and, wow, was also flattered to be invited to two other panel proposals on top of that. My favorite person Liza Kindred also proposed a talk on Mindful Technology, which is both timely and actionable. Help me usher these panels into the bright, bright Texas sunlight. Voting closes August 25. I’d be mighty obliged if you’d vote up these four proposals at the SXSW PanelPicker:

Josh Clark: Design in the Era of the Algorithm

I’m super-excited to expand on the techniques and design principles I outlined earlier this year in my long-form essay about designing with machine learning. Data-driven content and interfaces are at the core of every single emerging consumer technology right now. I’ve got a crisp point of view about design’s role and responsibility in this developing future:

Designers have an urgent role to play in crafting the emerging generation of AI interfaces. This hour explores a rich set of examples—both entertaining and sobering—that unearth 10 design principles for creating responsible machine-learning applications. Learn to use AI as design material in your everyday work. Anticipate the weird, unexpected or incorrect conclusions the machines sometimes deliver. Above all, scrub data for bias to create a respectful and inclusive world we all want to live in.

Questions answered:

I’m a designer, not a data scientist. What’s my role in AI?
How can I start using machine learning in my everyday work, starting today?
How can design set better expectations and confidence in the accuracy and perspective of AI interfaces? (We’re still bad at this.)
What are the ethical responsibilities—and actionable tasks—to root out data bias and create inclusive services? (We’re bad at this, too.)

Vote it up!
Liza Kindred: Mindful Technology

My astonishing wife Liza Kindred brought down the house at this year’s Interaction conference when she debuted her framework of design principles for mindful technology. How do we bend technology to our lives instead of the reverse? What are our responsibilities as designers to help focus attention instead of trap it? Friends, I’m the luckiest person on the planet that I get to be inspired by Liza every day. You’ll do the SXSW audience a big favor by treating them to even an hour of her insights:

Staying connected does matter–but to each other, not our devices. Instead of views and attention grabs, let’s design for purpose, calm & compassion. Instead of engagement with interfaces, let’s design to engage with what we truly care about. Instead of simply building more tech, let’s build more human connection. With a host of practical examples, Liza Kindred shares a framework for how we can make and use new technologies that create real insight, joy, and utility–while still getting the job done.

Questions answered:

What is the role of mindfulness in a technology-saturated world, and how are some of the world’s biggest businesses already putting it to practical use?
As tech settles into the most intimate spaces of our lives, homes, & bodies, what are our responsibilities for creating respectful experiences?
What are the practical principles for & examples of creating (and using!) technologies that are respectful, calm, purposeful, & humane?

Vote it up!

Panel: Designing Our Robot Overlords

In a few short weeks, I’m headed deep into the northern forests of Norway to huddle with a group of intimidatingly smart people to consider our future alongside robots and algorithms. My friend Andy Budd of Clearleft organized this retreat and also had the good idea to share what comes of it at SXSW. I’m all atingle to join in this conversation with Andy[...]

The Pop-Up Employer: Build a Team, Do the Job, Say Goodbye

Wed, 02 Aug 2017 21:03:46 UT

Big Medium is what my friend and collaborator Dan Mall calls a design collaborative. Dan runs his studio Superfriendly the same way I run Big Medium: rather than carry a full-time staff, we both spin up bespoke teams from a tight-knit network of well-known domain experts. Those teams are carefully chosen to meet the specific demands of each project. It’s a very human, very personal way to source project teams.

And so I was both intrigued and skeptical to read about an automated system designed to do just that at a far larger scale. Noam Scheiber reporting for The New York Times:

True Story was a case study in what two Stanford professors call “flash organizations” — ephemeral setups to execute a single, complex project in ways traditionally associated with corporations, nonprofit groups or governments. […]

And, in fact, intermediaries are already springing up across industries like software and pharmaceuticals to assemble such organizations. They rely heavily on data and algorithms to determine which workers are best suited to one another, and also on decidedly lower-tech innovations, like middle management. […]

“One of our animating goals for the project was, would it be possible for someone to summon an entire organization for something you wanted to do with just a click?” Mr. Bernstein said.

The fascinating question here is how systems might develop algorithmic proxies for the measures of trust, experience, and quality that weave the fabric of our professional networks. But even more intriguing: how might such models help to connect underrepresented groups with work they might otherwise never have access to? For that matter, how might those models introduce me to designers outside my circle who could bring more diverse perspectives into my own work?

The BBQ and the Errant Butler

Wed, 02 Aug 2017 20:31:56 UT

Marek Pawlowski shares a tale of a dinner party taken hostage by a boorish Alexa hell-bent on selling the guests music.

Amid the flashy marketing campaigns and rapid technological advances surrounding virtual assistants like Alexa, Cortana and Siri, few end users seem willing to question how the motivation of their creators is likely to affect the overall experience. Amazon has done much to make Alexa smart, cheap and useful. However, it has done so in service of an over-arching purpose: retailing. Of course, Google, Microsoft and Apple have ulterior motives for their own assistants, but it should come as no surprise that Alexa is easily sidetracked by her desire to sell you things.

Toy Story Lessons for the Internet of Things

Wed, 02 Aug 2017 20:24:32 UT

Dan Gärdenfors ponders how to handle “leadership conflicts” in IoT devices:

In future smart homes, many interactions will be complex and involve combinations of different devices. People will need to know not only what goes on but also why. For example, when smart lights, blinds and indoor climate systems adjust automatically, home owners should be able to know what triggered it. Was it weather forecast data or the behaviour of people at home that made the thermostat lower the temperature? Which device made the decision and told the others to react? Especially when things don’t end up the way we want them to, smart objects need to communicate more, not less.

As we introduce more sensors, services, and smart gadgets into our life, some of them will inevitably collide. Which one “wins”? And how do we as users see the winner (or even understand that there was a conflict in the first place)?

UX design gets complicated when you introduce multiple triggers from multiple opinionated systems. And of course all those opinionated systems should bow to the most important opinion of all: the user’s. But even that is complicated in a smart-home environment where there are multiple users who have changing needs, desires, and contexts throughout the day. Fun!
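As a sketch of how a smart-home hub might arbitrate those collisions while keeping a “why” trail for the user, consider something like this. The trigger sources, actions, and priority scheme are my own illustration, not from Gärdenfors’s piece:

```python
from dataclasses import dataclass

@dataclass
class Trigger:
    source: str      # e.g. "weather-forecast", "occupancy-sensor", "manual"
    action: str      # e.g. "thermostat:18C"
    priority: int    # higher wins; direct user input should rank highest

def resolve(triggers):
    """Pick a winning trigger and keep an explanation trail.

    Returns (winning_action, explanation) so the home can answer
    'why did the temperature change?' rather than change it silently.
    """
    winner = max(triggers, key=lambda t: t.priority)
    losers = [t.source for t in triggers if t is not winner]
    why = f"{winner.action} because of {winner.source}"
    if losers:
        why += f" (overrode: {', '.join(losers)})"
    return winner.action, why
```

The explanation string is the interesting part: surfacing the losing triggers is what lets the user see that a conflict happened at all.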

Politics Are a Design Constraint

Wed, 02 Aug 2017 20:07:07 UT

Designers, if you believe that politics don’t belong at work, guess what: your work is itself political. Software channels behavior, and that means that it’s freighted with values.

Ask yourself: as a designer, what are the behaviors I’m shaping, for what audience, to what end, and for whose benefit? Those questions point up the fact that software is ideological. The least you can do is own that fact and make sure that your software’s politics line up with your own. John Warren Hanawalt explains why:

Designers have a professional responsibility to consider what impact their work has—whether the project is explicitly “political” or not. Design can empower or disenfranchise people through the layout of ballots or UX of social network privacy settings.

Whose voices are amplified or excluded by the platforms we build, who profits from or is exploited by the service apps we code, whether we have created space for self-expression or avenues for abuse: these are all political design considerations because they decide who is represented, who can participate and at what cost, and who has power. […]

If you’re a socially conscious designer, you don’t need to quit your job; you need to do it. That means designing solutions that benefit people without marginalizing or harming others. When your boss or client asks you to do something that might do harm, you have to say no. And if you see unethical behavior happening in other areas of your company, fight for something better. If you find a problem, you have a problem. Good thing solving problems is your job.

AI First—with UX

Wed, 02 Aug 2017 19:08:23 UT

When mobile exploded a decade ago, many of us wrestled with designing for the new context of freshly portable interfaces. In fact, we often became blinded by that context, assuming that mobile interfaces should be optimized strictly for on-the-go users: we overdialed on location-based interactions, short attention spans, micro-tasks. The “lite” mobile version ruled.

It turned out that the physical contexts of mobile gadgets—device and environment—were largely red herrings. The notion of a single “mobile context” was a myth that distracted from the more meaningful range of “softer” contexts these devices introduced by unchaining us from the desktop. The truth was that we now had to design for a huge swath of temporal, behavioral, emotional, and social contexts. When digital interfaces can penetrate any moment of our lives, the designer can no longer assume any single context in which they will be used.

This already challenging contextual landscape is even more complicated for predictive AI assistants that constantly run in the background looking for moments to provide just-in-time info. How much do they need to know about current context to judge the right moment to interrupt with (hopefully) useful information? In an essay for O’Reilly, Mike Loukides explores that question, concluding that it’s less a concern of algorithm design than of UX design:

What’s the experience I want in being “assisted”? How is that experience designed? A design that requires me to expend more effort to take advantage of the assistant’s capabilities is a step backward.

The design problem becomes more complex when we think about how assistance is delivered. Norvig’s “reminders” are frequently delivered in the form of asynchronous notifications. That’s a problem: with many applications running on every device, users are subjected to a constant cacophony of notifications. Will AI be smart enough to know which notifications are actually wanted, and which are just annoyances?
A reminder to buy milk? That’s one thing. But on any day, there are probably a dozen or so things I need, or could possibly use, if I have time to go to the store. You and I probably don’t want reminders about all of them. And when do we want these reminders? When we’re driving by a supermarket, on the way to the aforementioned doctor’s appointment? Or would it just order it from Amazon? If so, does it need your permission? Those are all UX questions, not AI questions.

We’ve made lots of fast progress in just the last few years—months, even—in crafting remarkably accurate algorithms. We’re still getting started, though, in crafting the experiences we wrap around them. There’s lots of work to be done right now by designers, including UX research at unprecedented scale, to understand how to put machine learning to use as design material. I have ideas and design principles about how to get started. In the meantime, I really like the way Mike frames the problem:

In a future where humans and computers are increasingly in the loop together, understanding context is essential. But the context problem isn’t solved by more AI. The context is the user experience. What we really need to understand, and what we’ve been learning all too slowly for the past 30 years, is that technology is the easy part.

O’Reilly Media | AI first—with UX [...]
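One way to picture Mike’s “which notifications are wanted” question as a UX policy rather than an AI problem is a simple relevance gate between the model and the user. The signals, weights, and threshold here are entirely hypothetical:

```python
def should_notify(reminder, context, threshold=0.6):
    """Decide whether a reminder earns an interruption right now.

    This layers 'when to interrupt' as a UX policy on top of whatever
    model produced the reminder: context signals raise or lower the
    score, and only high scores break through as a notification.
    """
    score = reminder.get("importance", 0.5)
    if context.get("near_store") and reminder.get("needs_store"):
        score += 0.3                      # timely: the user can act on it now
    if context.get("driving"):
        score -= 0.4                      # unsafe moment to interrupt
    if context.get("free_minutes", 0) < 10:
        score -= 0.2                      # no time to act on it anyway
    return score >= threshold

milk = {"importance": 0.5, "needs_store": True}
print(should_notify(milk, {"near_store": True, "free_minutes": 30}))
print(should_notify(milk, {"near_store": True, "driving": True, "free_minutes": 30}))
```

Even a crude gate like this makes the design stance explicit: the interruption itself is a designed experience, not a side effect of the algorithm.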

Making Software with Casual Intelligence

Wed, 02 Aug 2017 17:45:02 UT

The most broadly impactful technologies tend to be the ones that become mundane—cheap, expected, part of the fabric of everyday life. We absorb them into our lives, their presence assumed, their costs negligible. Electricity, phones, televisions, internet, refrigeration, remote controls, power windows—once-remarkable technologies that now quietly improve our lives.

That’s why the aspects of machine learning that excite me most right now are the small and mundane interventions that designers and developers can deploy today in everyday projects. As I wrote in Design in the Era of the Algorithm, there are so many excellent (and free!) machine-learning APIs just waiting to be integrated into our digital products. Machine learning is the new design material, and it’s ready today, even for the most modest product features.
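To show how little code such an integration can take, here’s roughly the shape of a call to a hosted image-tagging API. The endpoint, payload, and response fields are placeholders for the general pattern most vision APIs follow, not any specific vendor’s interface; check your provider’s docs for the real shapes:

```python
import json
import urllib.request

def filter_tags(result, min_confidence=0.7):
    """Keep only the labels the model is reasonably sure about."""
    return [t["label"] for t in result.get("tags", [])
            if t.get("confidence", 0) >= min_confidence]

def tag_image(image_url, api_url, api_key):
    """POST an image URL to a hosted tagging API and return its labels.

    The JSON shapes here are illustrative; most hosted vision APIs
    follow roughly this request/response pattern.
    """
    payload = json.dumps({"url": image_url}).encode("utf-8")
    req = urllib.request.Request(
        api_url,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return filter_tags(json.load(resp))
```

The confidence filter is where design judgment enters: deciding how sure the machine must be before its guess reaches the user is a product decision, not a modeling one.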

All of this reminds me of an essay my friend Evan Prodromou wrote last year about making software with casual intelligence. It’s a wonderful call to action for designers and developers to start integrating machine learning into everyday design projects.

Programmers in the next decade are going to make huge strides in applying artificial intelligence techniques to software development. But those advances aren’t all going to be in moonshot projects like self-driving cars and voice-operated services. They’re going to be billions of incremental intelligent updates to our interfaces and back-end systems.

I call this _casual intelligence_ — making everything we do a little smarter, and making all of our software that much easier and more useful. It’s casual because it makes the user’s experience less stressful, calmer, more leisurely. It’s also casual because the developer or designer doesn’t think twice about using AI techniques. Intelligence becomes part of the practice of software creation.

Evan touches on one of the most intriguing implications of designing data-driven interfaces. When machines generate both content and interaction, they will often create experiences that designers didn’t imagine (both for better and for worse). The designer’s role may evolve into one of corralling the experience in broad directions, rather than down narrow paths. (See conversational interfaces and open-ended, Alexa/Siri-style interactions, for example.)

Designers need to stop thinking in terms of either-or interfaces — either we do it this way, or we do it that way. Casual intelligence lets interfaces become _and-also_ — different users have different experiences. Some users will have experiences never dreamed of in your wireframes — and those may be the best ones of all.

In the AI Age, “Being Smart” Will Mean Something Completely Different

Wed, 02 Aug 2017 16:55:21 UT

As machines become better than people at so many things, the natural question is what’s left for humans—and indeed what makes us human in the first place? Or more practically: what is the future of work for humans if machines are smarter than us in so many ways? Writing for Harvard Business Review, Ed Hess suggests that the answer is in shifting the meaning of human smarts away from information recall, pattern-matching, fast learning—and even accuracy.

What is needed is a new definition of being smart, one that promotes higher levels of human thinking and emotional engagement. The new smart will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating, and learning. Quantity is replaced by quality. And that shift will enable us to focus on the hard work of taking our cognitive and emotional skills to a much higher level.

We will spend more time training to be open-minded and learning to update our beliefs in response to new data. We will practice adjusting after our mistakes, and we will invest more in the skills traditionally associated with emotional intelligence. The new smart will be about trying to overcome the two big inhibitors of critical thinking and team collaboration: our ego and our fears. Doing so will make it easier to perceive reality as it is, rather than as we wish it to be. In short, we will embrace humility. That is how we humans will add value in a world of smart technology.

Designing for Touch—in French! And Chinese!

Thu, 20 Jul 2017 18:26:29 UT

Designing for Touch is now available in Chinese, French, and the original English.

I always feel a giddy cosmopolitan flush when one of my books is published in a new language. (Hey, if I can’t manage a cosmopolitan swagger, at least my books can look the part.)

And so I’m feeling especially je ne sais quoi about the publication of Designing for Touch in France and—new this summer—in China. Many thanks to translators Charles Robert and Zou Zheng (aka “C7210”) for their remarkable work on these two editions.

You can snap up any of these editions at the links below:

Designing for Touch explores all the ways that touchscreen UX goes way beyond making buttons bigger for fat fingers. Designers have to revisit—and in many cases chuck out—the common solutions of the last thirty years of traditional interface design. In this book, you’ll discover entirely new methods, including design patterns, touchscreen metrics, ergonomic guidelines, and interaction metaphors that you can use in your websites and apps right now. The future is in your hands.

The Design System and Your Future Self

Thu, 20 Jul 2017 14:57:24 UT

I had a great time talking design systems this week with Anna Debenham and Brad Frost on their Style Guide podcast.

Topics included: how to adapt design systems to an organization’s workflow and culture; the humility required to build wonderfully boring design systems; and the way that design systems will evolve to capture emerging interfaces.

All of these points circle the most important role of design systems: easing future work. Here’s what I had to say about that:

I do have a focus on designing for what’s next and how do we prepare organizations and products for what appear to be the emerging technologies that are likely to be really important in the next year or two. I do think that a really important thing about design systems and pattern libraries is being kind to your future self. It’s creating documentation, so that for the next project, my colleague or me, I’ll have all of this stuff at hand. … The way that I think of design system work writ large is that it is a container of institutional knowledge; it is a collection of solved problems and an example of the best of what a company does when it comes to design. …

It takes less and less imagination now to see that things like speech in particular, but also some artificial intelligence aspects, are going to be coming into these [projects], that our interactions are going to go well beyond keyboard and mouse, as we’ve already seen with touchscreens. … What that means is that we’ll have an explosion of design best practices for each of these new channels, and we have to get a lot better at documenting all of them. And that’s what pattern libraries and design systems are great at. …

What is emerging with all of this design system work is just how many disciplines and kinds of brains and kinds of creativity go into creating large and complex design solutions.

Listen to the whole podcast or read the transcript. Or hey why wait: just subscribe to the Style Guide podcast (iTunes or feed) so that you don’t miss an episode. Anna and Brad are doing great work to surface the best ideas and techniques for pattern libraries, style guides, and design systems. So good.

Does your company need a design system to bring order to a disconnected set of apps and websites? Or to capture emerging best practices for the latest interfaces? Big Medium can help with workshops, executive sessions, or a full-blown design engagement. Get in touch.