Published: Wed, 12 Oct 2016 00:00:00 +0000
Last Build Date: Wed, 12 Oct 2016 00:00:00 +0000
Copyright: LukeW Ideation + Design
Wed, 12 Oct 2016 00:00:00 +0000
Conversions@Google published a complete video recording of my three-hour seminar this month (Oct 2016 in Dublin) on creating obvious designs for mobile and beyond.
In part one I pull back the curtain on a significant design change for a large-scale mobile app and discuss the in-depth thinking and process that went into it.
In part two I answer audience questions and cover responsive web designs, native applications, form conversions, touch gestures, navigation, cross-platform consistency and more.
Thanks to the Conversions@Google team for making these sessions available to all.
Thu, 11 Feb 2016 00:00:00 +0000
In her Bug Fixes & Minor Improvements, Writ Large presentation at Webstock 2016 in Wellington, New Zealand, Anna Pickard shared the thinking behind Slack's app release notes and communication with their customers. Here are my notes from her talk:
Thu, 11 Feb 2016 00:00:00 +0000
In his The Shape of Things presentation at Webstock 2016 in Wellington, New Zealand, Tom Coates shared his latest thoughts on designing for the Internet of Things. Here are my notes from his talk:
Wed, 10 Feb 2016 00:00:00 +0000

In his The Map & The Territory presentation at Webstock 2016 in Wellington, New Zealand, Ethan Marcotte shared his thoughts about the changing definition of the Web and his philosophy for making broadly accessible experiences. Here are my notes from his talk:

In 1807, New York's population was clustered in the bottom part of the island, where rivers, overcrowding, and disease ran rampant. Civic planning was needed, and John Randel Jr. was assigned to re-plan the streets. He developed a new, structured map for New York that defined the city's famous street plan as we know it today. The system was designed to create regularity. Randel's map was attempting to define a new territory: what a territory could or should be.

Maps capture our understanding of a space. They make us aware of our surroundings and make things feel a little less foreign.

We've been focused on a relatively small portion of the Web: a few desktops & laptops. Five years ago, our view of the Web was much more limited. We were overly concerned with a few fixed-width views of our layouts. The three main ingredients of a responsive design are a fluid grid, flexible images, and media queries. This is an attempt to embrace the fluidity of the Web and design across device boundaries. This simple recipe has blossomed into lots of amazing examples. Notable examples include An Event Apart, Currys, Expedia, Coop, Disney, Time, The Guardian, BBC News, and more. We're drawing a new map and marking it with new sites, both responsive and device-specific. But this map is far from complete.

Toward a New Map

Many sites have been having success with responsive design. The Boston Globe and several e-commerce sites have published statistics about the positive impact responsive redesigns have had. The Teehan+Lax and Oakley Moto sites (the latter originally launched at 84MB) are very beautiful designs but very large in terms of file size, from 6MB to 21MB.
Responsive design has often been criticized for being too heavy (not performant enough). But the truth is most websites are much too heavy for today's reality. Internet.org, designed to get people better access to the Web in developing markets, weighed 4.3MB. The Apple Mac Pro website is 33.4MB, and it's not responsive. In 2009, the average size of a Web site was 320KB. Today in 2015, it is 2.2MB. Every two years, we've doubled in size, and bandwidth has not kept up with that Moore's-law-like growth.

The map is not the territory. It can't capture all the detail of the complexity of what's around us. We may be doing the same for the Web. Our map of the Web is made through a vision of the Western world. We've mistaken the map for the territory.

We think mobile devices are always on, always connected, and uniquely yours. But this isn't true around the world. 9.2M people live in 300 square kilometers in Dhaka, Bangladesh. There has been a 900% increase in mobile Internet access in these developing cities. The next wave of urbanization is happening in these areas. More people in the world have mobile devices than access to toilets and latrines. In Africa, 60% have mobile devices (700M). That's more than have access to clean water. Mobile devices are shared across multiple people in many developing nations, though this is slowly changing as smartphones get cheaper.

What's happening in these developing areas is emblematic of what's happening across the world. A large portion of the world is coming online now, on less capable devices and networks than we are used to. Are we ready for this new reality? The Web doesn't evolve in one straight line. Is our work prepared for the new face of the Web: low-powered devices and poor access? Is this the new normal? How does it change how we design & build for the Web?

Randel's map of New York was attempting to define a new territory: what a territory could or should be. He was trying to find a balance between emerging growth and entrenched interests.
Maps are traditionally used to represent exist[...]
Tue, 02 Feb 2016 00:00:00 +0000

In her presentation at Google today, Stephanie Rieger shared how commerce is evolving (and in many cases innovating) in emerging Internet markets. Here are my notes from her talk:

Exploring the Just-in-Time Social Web

The Web was first conceived about 25 years ago from mostly European origins. For the following 15 years, most of the Internet's users and companies came from developed countries. The people who founded the Web were the ones who built it and consumed it. Today the situation is quite different. Internet penetration is nearing saturation in developed economies, but fast-growing companies are emerging from places like China, India, and Russia. There are close to 3 billion people who have yet to come online, and thanks to cheap mobile devices, their barriers to entry are quickly falling.

A New Kind of Web

So the Internet that the next billion people "discover" isn't quite like "ours". For example, people in Kuwait are selling sheep on Instagram. This creates a lot of ad-hoc businesses, which provide a glimpse of a new, digital, mobile-fueled informal economy. "Informal" businesses are relatively ad-hoc: they use chat for negotiations and Facebook pages as storefronts. These services don't offer a great experience, but they are "good enough". They balance reach, effort, functionality, and adaptability to local circumstances.

Most of these businesses get their largest growth from the countryside, where brick-and-mortar shops are under-developed. But these businesses reach cities (very big, high density, lots of middle class) as well. Opening enough stores to serve 700 million urban residents is very expensive; mobile fills this gap. In China, shopping on mobile is the primary way of buying things. Chinese commerce is 90% through online marketplaces, vs. 76% through online merchants in the US. So in China, sellers and buyers find themselves on large e-commerce platforms like TMall.
Even large brands have big digital storefronts on TMall. Taobao is like eBay in China but very social, as vendors aren't limited to selling things; they also sell services like travel planning. Running a shop on Taobao is a national pastime, like a second job or hobby, for tens of thousands of Chinese. Digital commerce services in China are a mix of Amazon, eBay, and PayPal, with a dash of Google. They've evolved differently. Alibaba is bringing digital services to rural communities by identifying people who can order for villages: "We give them a computer and they start taking orders for the whole village." The Chinese model is moving to other countries, like Jumia in Africa, which is trying to leapfrog brick-and-mortar commerce with online selling.

Starting with Social

With millions of vendors, how do you find stuff? Online shopping neighborhoods are created through social media, where people can curate lists. These sites get a cut of each transaction. Part of the reason these services work is that they feed off a cycle of mobile and social media adoption. Developing markets are going straight to social. When people come online, they just jump into social networks as their onramp to the Internet. The most popular social media services are platforms. WeChat is a great example, integrating wallet, blogging, RSS, messaging, and more. There are thousands of apps inside WeChat; you can book doctors, talk to stores, etc. WeChat addresses daily and hourly needs, not monthly active users. There is no Web in WeChat; it's all inside the platform. Ironically, though, many people reach WeChat mini-apps through Web pages and QR codes.

Payments

In sub-Saharan Africa, 1/4 of adults have accounts at formal financial institutions. Less than 15% of Indonesians have a credit card. As a result, they use mobile banking services. In many developing countries, you can pay for online services with cash-on-delivery.
With such giant marketplaces, counterfeiting creates trust concerns between buyers and [...]
Mon, 25 Jan 2016 00:00:00 +0000
Smartphones are always with us and (as a result) used all the time. But time spent on our mobile screens doesn't just increase with portability, it grows with screen size as well.
Nearly four years ago, an analysis of Web browser page views on a variety of tablets showed that as screen size decreased people's use of the Web dropped from an average of 125 page views on 10" tablets to an average of 79 page views on 5" tablets.
Turns out the opposite is true as well. As screen size increases, so does people's activity on the device... to a point. Looking at Android data usage, bigger screens mean more Web browsing, social networking, and communicating up until about 6".
One reason for this increased activity may be that bigger smartphones are now the norm. It took 5 years for the average smartphone to go from 3" to 4" but only 2 more years to get to 5" (by the end of 2014).
Another reason may be that larger phones take time away from other devices like tablets. Reading app Pocket found that users with 4.7" phones read 19% less on their tablets during the week and 27% less over the weekend. Those with a 5.5" phone were on their tablets 31% less during the week and 36% less over the weekend.
Screen size alone, however, can also lead to increased use. Pocket saw that people with a 4.7" phone opened 33% more articles and videos than they did with a 4" screen, and those with a 5.5" screen opened 65% more items than they did with a smaller phone.
More recent data from Flurry tells the same story. Time spent on 5" to 6.9" phones has grown 334% while only increasing 85% on 3.5" to 4.9" phones.
In other words: the bigger your portable screen, the more you're going to use it.
Wed, 13 Jan 2016 00:00:00 +0000
Given how few new apps people download on mobile and the steep drop-off within apps, getting Day 1 retention right is critical.
Given the cost and effort required to get people to download your app, Day 1 retention is a critical metric.
Day 1 Retention is the fraction of people that return to your app one day after install. If you don't get Day 1 retention right, the rest really doesn’t matter.
So how do you get Day 1 retention right? Spend [a lot of] time working through the first time experience for your apps. How can you get people to come back the next day? Get that over 50% and then you can tackle the next metric...
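The metric defined above is simple to compute. Here's a minimal sketch, assuming hypothetical install and app-open logs (real analytics pipelines will differ):

```python
from datetime import date, timedelta

def day1_retention(installs, opens):
    """Fraction of installers who open the app again exactly one day after install.

    installs: dict of user_id -> install date
    opens: dict of user_id -> set of dates the user opened the app
    (Both structures are illustrative, not from any real analytics API.)
    """
    if not installs:
        return 0.0
    retained = sum(
        1 for user, installed in installs.items()
        if installed + timedelta(days=1) in opens.get(user, set())
    )
    return retained / len(installs)

installs = {"a": date(2016, 1, 13), "b": date(2016, 1, 13), "c": date(2016, 1, 14)}
opens = {"a": {date(2016, 1, 14)}, "c": {date(2016, 1, 17)}}
print(day1_retention(installs, opens))  # only "a" came back the next day -> 1/3
```

A value over 0.5 would meet the 50% bar suggested above.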
Thu, 07 Jan 2016 00:00:00 +0000

While the mobile opportunity has been clear for some time, how best to tackle it remains a subject of debate. In particular, when building software for mobile, should we invest in Web-based solutions or native apps? Yes...

The tremendous growth of mobile has created an unprecedented opportunity to provide content and services to nearly everyone on planet Earth through software. Native mobile apps dominate time spent on mobile devices. Looking a bit deeper, most of these devices are smartphones, and just a few apps really dominate time spent on your phone, dwarfing mobile Web engagement. But mobile audience growth is driven by mobile Web properties, which are actually bigger and growing faster than native apps.

In other words, the Web is for audience reach and native apps are for rich experiences. Both are strategic. Both are valuable. So when it comes to mobile, it's not Web vs. native. It's both.

Whenever I make this point, someone inevitably asks "what about Web browsers within native apps?" Is there a large amount of time spent on Web properties within embedded Web browsers inside the world's most popular native mobile apps?[...]
Thu, 10 Dec 2015 00:00:00 +0000
At the Glance Conference in San Francisco, CA, I had the pleasure of hearing Bob Moesta, an originator of the Jobs-To-Be-Done methodology, talk through the process: how it works and why.
Thu, 10 Dec 2015 00:00:00 +0000
In his Glanceable Interactions presentation at the 2015 Glance Conference in San Francisco, CA, Josh Clark talked through design considerations for the Apple Watch. Here are my notes from his talk:
Fri, 24 Jul 2015 00:00:00 +0000

After wearing my Apple Watch daily for the past two-plus months, I've found myself wishing for a simpler interaction model for moving between content and apps. Here's what I'd propose and why.

The vast majority of my interactions with the Apple Watch involve notifications. I get a light tap on my wrist, raise my arm, and get a bit of useful information. This awareness of what's happening helps me stay connected to the real world and is the primary reason I enjoy wearing a smartwatch. Thanks to "long look" notifications, I'm also able to take action on some notifications, mostly to triage things quickly. Much more rarely do I engage with Glances, and even less with apps. This creates a hierarchy of use: notifications, Glances, apps.

Unfortunately, the Apple Watch's interaction model doesn't echo this hierarchy. Instead, there's a lot of emphasis on apps, which seems to create more UI than is necessary on your wrist. Consider, for starters, all the ways to access an app on the Apple Watch (not counting the special case of the Friends app). This structure makes the times when I do use apps needlessly complex. As an example, the common use case of listening to music while working out requires moving between watch faces, Glances, and apps. There's also an app-only way to accomplish the same task but with different results. So switching tasks using Glances requires a trip to the watch face and a swipe up, but switching tasks using apps requires double-tapping the digital crown. All this is made more confusing when you compare the Music Glance and app: which is which, and why does using each result in different OS-level behavior?

Looking at the interaction model of the iPhone suggests an alternative. Why not echo that (now familiar) structure on the Apple Watch? Comparing the current Apple Watch interaction model with this change illustrates the simplification.
In the redesign, there's one layer for home screens (filled with apps and/or complications), one for notifications, and one for currently running apps. Glances are replaced with the ability to scroll through active apps. But that doesn't mean "glance-able content" goes away. In fact, each app could have a "glance-like" home screen as its default (it could even be the Glance) but also display its last used screen when appropriate (like after you've recently interacted with the app). With this model, switching between two active apps is trivial: just swipe up from the bottom to move between them, with no confusion between Glances and apps. To return to the watch face, simply press the home button.

Android Wear has had a similar interaction model from the start. Just swipe up from the bottom of the watch face to scroll through active (relevant) apps, then tap on any app to go deeper. This approach also allows you to scroll through the content within each app easily. Want to go deeper? Simply tap a card to scroll within it, or swipe across to access other features in the app. When wearing an Android Wear smartwatch, I found myself keeping up with more than I do when wearing the Apple Watch. A simple scroll up on Wear would give me the latest content from several apps, ordered by relevance.

In their current state, Glances on the Apple Watch don't give me that lightweight way of staying on top of the information I care about. Their inclusion in the Apple Watch interaction model seems, instead, to complicate moving between tasks (and apps). Perhaps this is simply an artifact of what's possible with the first version of Apple Watch, given the current long loading times for apps and Glances. If so, Apple's interaction model may well be a good fit when performance on the Watch improves. Or perhaps third [...]
Fri, 17 Jul 2015 00:00:00 +0000

All too often, mobile forms make use of dropdown menus for input when simpler or more appropriate controls would work better. Here are several alternatives to dropdowns to consider in your designs and why.

Expectations Impact Conversion

No one likes filling in forms. And the longer or more complicated a form seems, the less likely we are to jump in and start filling in the blanks, especially on small screens with imprecise inputs (like our fingers). While there are two extra fields in the "painful" version above, the primary difference between these two flight booking forms is how they ask questions. One makes use of dropdown menus for nearly every question asked; the other uses the most appropriate input control for each question. Interacting with dropdown menus on mobile and the desktop is a multi-step process, often requiring more effort than necessary: first tap the control, then scroll (usually more than once), find & select your target, and finally move on. We can do better.

Steppers

Stepper controls increase or decrease a value by a constant amount and are great for making small adjustments. When testing mobile flight booking forms, we found people preferred steppers for selecting the number of passengers. No dropdown menu required, especially since there's a maximum of 8 travelers allowed and the vast majority select 1-2 travelers. When working with steppers, simpler is generally better. Changing the basic design of a stepper control too much makes its function less clear. That's true of any input control, really.

Radio Groups

Radio groups, or segmented controls, are a set of closely related but mutually exclusive choices. When comparing mobile flight booking forms, we found radio groups worked really well for selecting class of travel.

Additional Controls

Steppers and radio groups aren't the only controls that can be used in place of dropdown menus. Switches support two simple, diametrically opposed choices.
Sliders allow you to select fine-grained values from an allowed range. When starting with a dropdown-heavy form, look at each question and consider whether any of these controls is a more appropriate way of getting an answer. Button inputs expose the options that would otherwise be hidden in a dropdown and make selecting them a single-tap vs. multi-tap action. In some cases, multiple dropdown menus can be condensed into one input control. The flight booking example I labeled "painful" above made use of six dropdowns to collect travel dates. In our mobile flight booking research, we found a single input control for travel dates worked much better. From six dropdowns to a single date picker. Now that's progress.

As a Last Resort

All these alternatives to dropdown menus don't mean you should never use them in a user interface design. Well-designed forms make use of the most appropriate input control for each question they ask. Sometimes that's a stepper, a radio group, or even a dropdown menu. But because they are hard to navigate, hide options by default, don't support hierarchies, and only enable selection, not editing, dropdowns shouldn't be the first UI control you reach for. In today's software designs, they often are. So instead, consider other input controls first and save the dropdown as a last resort.

For More...

For a deeper look at this topic and lots more about mobile form design, check out my video presentation on Mobile Inputs.[...]
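The stepper behavior described earlier (constant increments clamped to an allowed range, like the 8-traveler maximum in the flight booking example) reduces to a few lines of logic. A minimal sketch; the function name and default bounds are illustrative:

```python
def step(value, delta, minimum=1, maximum=8):
    """Adjust a stepper's value by a constant amount, clamped to its
    allowed range (bounds here are illustrative: 1 to 8 travelers)."""
    return max(minimum, min(maximum, value + delta))

print(step(2, +1))  # 3
print(step(8, +1))  # 8 -- tapping "+" at the maximum is a no-op
print(step(1, -1))  # 1 -- tapping "-" at the minimum is a no-op
```

Keeping the control this simple is part of the point: its entire state is one number and two bounded buttons.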
Tue, 19 May 2015 00:00:00 +0000
Conversions@Google published a complete video recording of my two-and-a-half-hour seminar last month (April 2015 in Dublin) on designing for mobile and across devices.
In part one I walk through the growth of connected screens and how to design software that works well across a wide variety of devices from mobile to TVs and beyond. In particular I focus on output (what can show up on a screen), input (what people can put into our screens), and posture (how people interact with the output and input of a screen).
In part two I answer audience questions and cover screen-less devices, responsive web designs vs. native applications, form conversions, touch gestures, navigation, cross-platform consistency and more.
Thanks to the Conversions@Google team for making these sessions available to all.
Mon, 18 May 2015 00:00:00 +0000

On PCs (or smartphones, or TVs), computer operating systems have a lot of user interface elements and conventions in common. So it's really the differences in each OS that define their unique personalities. Wearables are no different, as my recent comparisons between Android Wear and Apple Watch help illustrate.

Both operating systems contain an extensive but "hidden" interface layer. That is, there are lots of controls and features available, but they require knowledge of touch gestures, audio commands, or hardware to operate. In practice, I've personally found it hard to keep all these invisible interface commands memorized (especially across devices).

When starting from a watch face with touch, Android Wear mostly maintains a continuous vertical scroll path. Apple Watch's scroll flow, on the other hand, varies between vertical and horizontal swipe interactions. When I first began designing touch interfaces, I'd make UIs that required people to switch between vertical and horizontal scrolling often, sometimes even in the same task. Over the years, I've come to appreciate the importance of scroll flow: a singular, simple gesture that makes an app's core value easily accessible. Which may be why it took me a while to get used to Apple's multi-directional model.

Another key difference between Apple Watch and Android Wear is how useful information is organized. Apple allows you to set a specific order for these "glances". Once set, that order stays consistent whenever you access them. Android Wear, instead, displays these "cards" based on which are most relevant at any given time or place. Clearly both approaches (consistency and context) have benefits, and both seem to reflect the strengths of the companies behind each OS. Android Wear's focus on context is also reflected in the display of information on the watch face. Rather than showing when a meeting on your calendar will begin, Wear displays the time left until the meeting starts.
A small but useful difference in making "timely, glance-able information" work. Both operating systems offer multiple ways to access apps and information, including touch gestures, voice commands, and hardware controls. In practice, however, I find myself relying on heads-up glance-able information frequently and on voice- or touch-activated actions much more rarely. On the wrist, awareness (glance-able, contextually relevant updates/info) and triage (lightweight actions in response) work well. Initiating tasks happens a lot less often and usually requires more time focused on the watch than feels necessary. In other words, actions, not apps, seem to make for better wrist-based experiences. At least for now...[...]
Thu, 30 Apr 2015 00:00:00 +0000
Hang around software companies long enough and you'll certainly hear someone proclaim "the best interface is invisible." While this adage seems inevitable, today's device ecosystem makes it clear we may not be there yet.
When there's no graphical user interface (icons, labels, etc.) in a product to guide us, our memory becomes the UI. That is, we need to remember the hidden voice and gesture commands that operate our devices. And these controls are likely to differ per device making the task even harder.
Consider the number of gesture interfaces on today's smartwatch home screens. A swipe in any direction, a tap, or a long press each triggers a different command. Even after months of use, I still find myself forgetting about hidden gestures in these UIs. Perhaps Apple's 80+ page guide to their watch is telling: it takes a lot of learning to operate a hidden interface.