Subscribe: A List Apart
http://www.alistapart.com/rss.xml

A List Apart: The Full Feed



Articles for people who make web sites.



Published: 2018-04-26T13:02:00+00:00

 



The Illusion of Control in Web Design

2018-04-26T13:02:00+00:00

We all want to build robust and engaging web experiences. We scrutinize every detail of an interaction. We spend hours getting the animation swing just right. We refactor our JavaScript to shave tiny fractions of a second off load times. We control absolutely everything we can, but the harsh reality is that we control less than we think. Last week, two events reminded us, yet again, of how right Douglas Crockford was when he declared the web “the most hostile software engineering environment imaginable.” Both were serious enough to take down an entire site—actually hundreds of entire sites, as it turned out. And both were avoidable. By understanding what we control (and what we don’t), we can build resilient, engaging products for our users.

What happened?

The first of these incidents involved the launch of Chrome 66. With that release, Google implemented a security patch with serious implications for folks who weren’t paying attention. You might recall that quite a few questionable SSL certificates issued by Symantec Corporation’s PKI began to surface early last year. Apparently, Symantec had subcontracted the creation of certificates without providing a whole lot of oversight. Long story short, the Chrome team decided the best course of action with respect to these potentially bogus (and security-threatening) SSL certificates was to set an “end of life” for accepting them as secure. They set Chrome 66 as the cutoff.

So, when Chrome 66 rolled out (an automatic, transparent update for pretty much everyone), suddenly any site running HTTPS on one of these certificates would no longer be considered secure. That’s a major problem if the certificate in question is for our primary domain, but it’s also a problem if it’s for a CDN we’re using. You see, my server may be running on a valid SSL certificate, but if I have my assets—images, CSS, JavaScript—hosted on a CDN that is not secure, browsers will block those resources. It’s like CSS Naked Day all over again.

To be completely honest, I wasn’t really paying attention to this until Michael Spellacy looped me in on Twitter. Two hundred of his employer’s sites were instantly reduced to plain old semantic HTML. No CSS. No images. No JavaScript.

The second incident was actually quite similar in that it also involved SSL, and specifically the expiration of an SSL certificate being used by jQuery’s CDN. If a site relied on that CDN to serve an HTTPS-hosted version of jQuery, their users wouldn’t have received it. And if that site was dependent on jQuery to be usable … well, ouch! For what it’s worth, this isn’t the first time incidents like these have occurred. Only a few short years ago, Sky Broadband’s parental filter dramatically miscategorized the jQuery CDN as a source of malware. With that designation in place, they spent the better part of a day blocking all requests for resources on that domain, affecting nearly all of their customers.

It can be easy to shrug off news like this. Surely we’d make smarter implementation decisions if we were in charge. We’d certainly have included a local copy of jQuery like the good Boilerplate tells us to. The thing is, even with that extra bit of protection in place, we’re falling for one of the most attractive fallacies when it comes to building for the web: that we have control.

Lost in transit?

There are some things we do control on the web, but they may be fewer than you think.
As a solo dev or team lead, we have considerable control over the HTML, CSS, and JavaScript code that ultimately constructs our sites. Same goes for the tools we use and the hosting solutions we’ve chosen. Of course, that control lessens on large teams or when others are calling the shots, though in those situations we still have an awareness of the coding conventions, tooling, and hosting environment we’re working with. Once our carefully crafted code leaves our servers, however, all bets are off. First off, we don’t—at least in the vast majority of cases—control the network our code travers[...]
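As an aside, the local-fallback pattern the Boilerplate popularized looks roughly like this (a sketch; the CDN URL, version, and local path are illustrative):

<script src="https://code.jquery.com/jquery-3.3.1.min.js"></script>
<script>
  // If the CDN copy was blocked or failed to load, window.jQuery is undefined,
  // so write a script tag pointing at a copy served from our own domain.
  window.jQuery || document.write('<script src="/js/vendor/jquery-3.3.1.min.js"><\/script>');
</script>

It only guards against a failed script download, but that is exactly the failure mode both incidents above produced.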



Working with External User Researchers: Part II

2018-04-17T13:10:00+00:00

In the first installment of the Working with External User Researchers series, we explored the reasons why you might hire a user researcher on contract and helpful things to consider in choosing one. This time, we talk about getting the actual work done.

You’ve hired a user researcher for your project. Congrats! On paper, this person (or team of people) has everything you need and more. You might think the hardest part of your project is complete and that you can be more hands off at this point. But the real work hasn’t started yet. Hiring the researcher is just the beginning of your journey.

Let’s recap what we mean by an external user researcher. Hiring a contract external user researcher means that a person or team is brought on for the duration of a contract to conduct research. This situation is most commonly found in: organizations without researchers on staff; organizations whose research staff is maxed out; and organizations that need special expertise. In other words, external user researchers exist to help you gain insight from your users when hiring one full-time is not an option. Check out Part I to learn more about how to find external user researchers, the types of projects that will get you the most value for your money, writing a request for proposal, and finally, negotiating payment.

Working together

Remember why you hired an external researcher

No project or work relationship is perfect. Before we delve into more specific guidelines on how to work well together, remember the reasons why you decided to hire an external researcher (and this specific one) for your project. Keeping them in mind as you work together will help you keep your priorities straight.

External researchers are great for bringing in a fresh, objective perspective

You could ask your full-time designer who also has research skills to wear the research hat. This isn’t uncommon. But a designer won’t have the same depth and breadth of expertise as a dedicated researcher. In addition, they will probably end up researching their own design work, which will make it very difficult for them to remain unbiased. Product managers sometimes like to be proactive and conduct some form of guerrilla user research themselves, but this is an even riskier idea. They usually aren’t trained on how to ask non-leading questions, for example, so they tend to only hear feedback that validates their ideas. It isn’t a secret—but it’s well worth remembering—that research participants tend to be more comfortable sharing critical feedback with someone who doesn’t work for the product that is being tested.

The real work begins

In our experience, the most important work starts once a researcher is hired. Here are some key considerations in setting them and your own project team up for success.

Be smart about the initial brain dump

Do share background materials that provide important context and prevent redundant work from being done. It’s likely that some insight is already known on a topic that will be researched, so it’s important to share this knowledge with your researcher so they can focus on new areas of inquiry. Provide things such as report templates to ensure that the researcher presents their learnings in a way that’s consistent with your organization’s unique culture. While you’re at it, consider showing them where to find documentation or tutorials about your product, or specific industry jargon.
Make sure people know who they are

Conduct a project kick-off meeting with the external researcher and your internal stakeholders. Influence is often partially a factor of trust and relationships, and for this reason it’s sometimes easy for internal stakeholders to question or brush aside projects conducted by research consultants, especially if they disagree with research insights and recommendations. (Who is this person I don’t know trying to tell me what is best for my product?)

Conduct a kick-off meeting with the broader team

A great way to prevent this potential push[...]



Going Offline

2018-04-10T13:10:00+00:00

A note from the editors: We’re excited to share Chapter 1 of Going Offline by Jeremy Keith, available this month from A Book Apart.

Businesses are built on the web. Without the web, Twitter couldn’t exist. Facebook couldn’t exist. And not just businesses—Wikipedia couldn’t exist. Your favorite blog couldn’t exist without the web. The web doesn’t favor any one kind of use. It’s been deliberately designed to accommodate many and varied activities.

Just as many wonderful things are built upon the web, the web itself is built upon the internet. Though we often use the terms web and internet interchangeably, the World Wide Web is just one application that uses the internet as its plumbing. Email, for instance, is another. Like the web, the internet was designed to allow all kinds of services to be built on top of it.

The internet is a network of networks, all of them agreeing to use the same protocols to shuttle packets of data around. Those packets are transmitted down fiber-optic cables across the ocean floor, bounced around with Wi-Fi or radio signals, or beamed from satellites in freakin’ space. As long as these networks are working, the web is working. But sometimes networks go bad. Mobile networks have a tendency to get flaky once you’re on a train or in other situations where you’re, y’know, mobile. Wi-Fi networks work fine until you try to use one in a hotel room (their natural enemy). When the network fails, the web fails. That’s just the way it is, and there’s nothing we can do about it. Until now.

Weaving the Web

For as long as I can remember, the World Wide Web has had an inferiority complex. Back in the ’90s, it was outshone by CD-ROMs (ask your parents). They had video, audio, and a richness that the web couldn’t match. But they lacked links—you couldn’t link from something in one CD-ROM to something in another CD-ROM. They faded away. The web grew.

Later, the web technologies of HTML, CSS, and JavaScript were found wanting when compared to the whiz-bang beauty of Flash. Again, Flash movies were much richer than regular web pages. But they were also black boxes. The Flash format seemed superior to the open standards of the web, and yet the very openness of those standards made the web an unstoppable force. Flash—under the control of just one company—faded away. The web grew.

These days it’s native apps that make the web look like an underachiever. Like Flash, they’re under the control of individual companies instead of being a shared resource like the web. Like Flash, they demonstrate all sorts of capabilities that the web lacks, such as access to device APIs and, crucially, the ability to work even when there’s no network connection.

The history of the web starts to sound like an endless retelling of the fable of the tortoise and the hare. CD-ROMs, Flash, and native apps outshine the web in the short term, but the web always seems to win the day somehow. Each of those technologies proved very useful for the expansion of web standards. In a way, Flash was like the R&D department for HTML, CSS, and JavaScript. Smooth animations, embedded video, and other great features first saw the light of day in Flash. Having shown their usefulness, they later appeared in web standards. The same thing is happening with native apps. Access to device features like the camera and the accelerometer is beginning to show up in web browsers. Most exciting of all, we’re finally getting the ability for a website to continue working even when the network isn’t available.
Service Workers

The technology that makes this bewitching offline sorcery possible is a browser feature called service workers. You might have heard of them. You might have heard that they’re something to do with JavaScript, and technically they are…but conceptually they’re very different from other kinds of scripts. Usually when you’re writing some JavaScript that’s going to run in a web browser, it’s all related to the document currently[...]
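For context, registering a service worker takes only a few lines of JavaScript (a minimal sketch, not the book’s own example; the script path is illustrative):

// Feature-detect first: browsers without support simply skip this block.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js')
    .then(function (registration) {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch(function (error) {
      console.error('Service worker registration failed:', error);
    });
}

The script at /service-worker.js then runs in the background, separate from any one page, which is what lets it intercept network requests and serve cached responses when the network is down.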



Planning for Everything

2018-04-03T13:22:00+00:00

A note from the editors: We’re pleased to share an excerpt from Chapter 7 (“Reflecting”) of Planning for Everything: The Design of Paths and Goals by Peter Morville, available now from Semantic Studios.

Once upon a time, there was a happy family. Every night at dinner, mom, dad, and two girls who still believed in Santa played a game. The rules are simple. Tell three stories about your day, two true, one false, and see who can detect the fib. Today I saw a lady walk a rabbit on a leash. Today I found a tooth in the kitchen. Today I forgot my underwear. The family ate, laughed, and learned together, and lied happily ever after.

There’s truth in the tale. It’s mostly not false. We did play this game, for years, and it was fun. We loved to stun and bewilder each other, yet the big surprise was insight. In reflecting on my day, I was often amazed by oddities already lost. If not for the intentional search for anomaly, I’d have erased these standard deviations from memory. The misfits we find, we rarely recall.

We observe a tiny bit of reality. We understand and remember even less. Unlike most machines, our memory is selective and purposeful. Goals and beliefs define what we notice and store. To mental maps we add places we predict we’ll need to visit later. It’s not about the past. The intent of memory is to plan.

In reflecting we look back to go forward. We search the past for truths and insights to shift the future. I’m not speaking of nostalgia, though we are all borne back ceaselessly and want what we think we had. My aim is redirection. In reflecting on inconvenient truths, I hope to change not only paths but goals.

Figure 7-1. Reflection changes direction.

We all have times for reflection. Alone in the shower or on a walk, we retrace the steps of a day. Together at lunch for work or over family dinner, we share memories and missteps. Some of us reflect more rigorously than others. Given time, it shows.

People who as a matter of habit extract underlying principles or rules from new experiences are more successful learners than those who take their experiences at face value, failing to infer lessons that can be applied later in similar situations.1

In Agile, the sprint retrospective offers a collaborative context for reflection. Every two to four weeks, at the end of a sprint, the team meets for an hour or so to look back. Focal questions include: 1) what went well? 2) what went wrong? 3) how might we improve? In reflecting on the plan, execution, and results, the team explores surprises, conflicts, roadblocks, and lessons.

In addition to conventional analysis, a retrospective creates an opportunity for double loop learning. To edit planned actions based on feedback is normal, but revising assumptions, goals, values, methods, or metrics may effect change more profound. A team able to expand the frame may hack their habits, beliefs, and environment to be better prepared to succeed and learn.

Figure 7-2. Double loop learning.

Retrospectives allow for constructive feedback to drive team learning and bonding, but that’s what makes them hard. We may lack courage to be honest, and often people can’t handle the truth. Our filters are as powerful as they are idiosyncratic, which means we’re all blind men touching a tortoise, or is it a tree or an elephant? It hurts to reconcile different perceptions of reality, so all too often we simply shut up and shut down.

Search for Truth

To seek truth together requires a culture of humility and respect. We are all deeply flawed and valuable.
We must all speak and listen. Ideas we don’t implement may lead to those we do. Errors we find aren’t about fault, since our intent is a future fix. And counterfactuals merit no more confidence than predictions, as we never know what would have happened if. Reflection is more fruitful if we know our own minds, but that is harder than we think. An imperfect ability to predict actions of sentient beings is a product of ev[...]



Meeting Design

2018-03-29T13:13:00+00:00

A note from the editors: We’re pleased to share an excerpt from Chapter 2 (“The Design Constraint of All Meetings”) of Meeting Design: For Managers, Makers, and Everyone by Kevin Hoffman, available now from Two Waves.

Jane is a “do it right, or I’ll do it myself” kind of person. She leads marketing, customer service, and information technology teams for a small airline that operates between islands of the Caribbean. Her work relies heavily on “reservation management system” (RMS) software, which is due for an upgrade. She convenes a Monday morning meeting to discuss an upgrade with the leadership from each of her three teams. The goal of this meeting is to identify key points for a proposal to upgrade the outdated software.

Jane begins by reviewing the new software’s advantages. She then goes around the room, engaging each team’s representatives in an open discussion. They capture how this software should alleviate current pain points; someone from marketing takes notes on a laptop, as is their tradition. The meeting lasts nearly three hours, which is a lot longer than expected, because they frequently loop back to earlier topics as people forget what was said. It concludes with a single follow-up action item: the director of each department will provide her with two lists for the upgrade proposal. First, a list of cost savings, and second, a list of timesaving outcomes. Each list is due back to Jane by the end of the week.

The first team’s list is done early but not organized clearly. The second list provides far too much detail to absorb quickly, so Jane puts their work aside to summarize later. By the end of the following Monday, there’s no list from the third team—it turns out they thought she meant the following Friday. Out of frustration, Jane calls another meeting to address the problems with the work she received, which range from “not quite right” to “not done at all.” Based on this pace, her upgrade proposal is going to be finished two weeks later than planned.

What went wrong? The plan seemed perfectly clear to Jane, but each team remembered their marching orders differently, if they remembered them at all. Jane could have a meeting experience that helps her team form more accurate memories. But for that meeting to happen, she needs to understand where those memories are formed in her team and how to form them more clearly.

Better Meetings Make Better Memories

If people are the one ingredient that all meetings have in common, there is one design constraint they all bring: their capacity to remember the discussion. That capacity lives in the human brain. The brain shapes everything believed to be true about the world. On the one hand, it is a powerful computer that can be trained to memorize thousands of numbers in random sequences.1 But brains are also easily deceived, swayed by illusions and pre-existing biases. Those things show up in meetings as your instincts. Instincts vary greatly based on differences in the amount and type of previous experience. The paradox of ability and deceive-ability creates a weird mix of unpredictable behavior in meetings. It’s no wonder that they feel awkward.

What is known about how memory works in the brain is constantly evolving. To cover that in even a little detail is beyond the scope of this book, so this chapter is not meant to be an exhaustive look at human memory. However, there are a few interesting theories that will help you be more strategic about how you use meetings to support forming actionable memories.
Your Memory in Meetings

The brain’s job in meetings is to accept inputs (things we see, hear, and touch) and store them as memory, and then to apply those absorbed ideas in discussion (things we say and make). See Figure 2.1.

FIGURE 2.1 The human brain has a diverse set of inputs that contribute to your memories.

Neuroscience has identified four theoretical stages of memory, which include sensory, working,[...]



Designing for Research

2018-03-20T13:09:00+00:00

If you’ve spent enough time developing for the web, this piece of feedback has landed in your inbox since time immemorial: “This photo looks blurry. Can we replace it with a better version?”

Every time this feedback reaches me, I’m inclined to question it: “What about the photo looks bad to you, and can you tell me why?” That’s a somewhat unfair question to counter with. The complaint is rooted in a subjective perception of image quality, which in turn is influenced by many factors. Some are technical, such as the export quality of the image or the compression method (often lossy, as is the case with JPEG-encoded photos). Others are more intuitive or perceptual, such as the content of the image and how compression artifacts mingle within it. Perhaps even performance plays a role we’re not entirely aware of.

Fielding this kind of feedback for many years eventually led me to design and develop an image quality survey, which was my first go at building a research project on the web. I started with twenty-five photos shot by a professional photographer. With them, I generated a large pool of images at various quality levels and sizes. Images were served randomly from this pool to users who were asked to rate what they thought about their quality.

Results from the first round were interesting, but not entirely clear: users seemed to have a tendency to overestimate the actual quality of images, and poor performance appeared to have a negative impact on perceptions of image quality, but this couldn’t be stated conclusively. A number of UX and technical issues made it necessary to implement important improvements and conduct a second round of research. Rather than spinning my wheels trying to extract conclusions from the first-round results, I decided it would be best to improve the survey as much as possible, and conduct another round of research to get better data. This article chronicles how I first built the survey, and then how I subsequently listened to user feedback to improve it.

Defining the research

Of the subjects within web performance, image optimization is especially vast. There’s a wide array of formats, encodings, and optimization tools, all of which are designed to make images small enough for web use while maintaining reasonable visual quality. Striking the balance between speed and quality is really what image optimization is all about.

This balance between performance and visual quality prompted me to consider how people perceive image quality. Lossy image quality, in particular. Eventually, this train of thought led to a series of questions spurring the design and development of an image quality perception survey. The idea of the survey is that users are providing subjective assessments on quality. This is done by asking participants to rate images without an objective reference for what’s “perfect.” This is, after all, how people view images in situ.

A word on surveys

Any time we want to quantify user behavior, it’s inevitable that a survey is at least considered, if not ultimately chosen to gather data from a group of people. After all, surveys are perfect when your goal is to get something measurable. However, the survey is a seductively dangerous tool, as Erika Hall cautions. They’re easy to make and conduct, and are routinely abused in their dissemination. They’re not great tools for assessing past behavior. They’re just as bad (if not worse) at predicting future behavior.
For example, the 1–10 scale often employed by customer satisfaction surveys doesn’t really say much of anything about how satisfied customers actually are or how likely they’ll be to buy a product in the future. The unfortunate reality, however, is that in lieu of my lording over hundreds of participants in person, the survey is the only truly practical tool I have to measure how people perceive image quali[...]
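To give a sense of the mechanics, serving random variants from a pool might look something like this (a hypothetical sketch; the file-naming scheme and quality levels are my assumptions, not the survey’s actual code):

// Hypothetical pool: 25 source photos, each exported at several JPEG quality levels.
const PHOTO_COUNT = 25;
const QUALITY_LEVELS = [25, 50, 75, 90];

function randomImageUrl() {
  const photo = Math.floor(Math.random() * PHOTO_COUNT) + 1;
  const quality = QUALITY_LEVELS[Math.floor(Math.random() * QUALITY_LEVELS.length)];
  // e.g., "/survey/images/photo-12-q75.jpg"
  return '/survey/images/photo-' + photo + '-q' + quality + '.jpg';
}

Random assignment like this is what lets each rating be compared across quality levels without participants knowing which variant they saw.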



Conversational Design

2018-03-15T14:30:00+00:00

A note from the editors: We’re pleased to share an excerpt from Chapter 1 of Erika Hall’s new book, Conversational Design, available now from A Book Apart.

Texting is how we talk now. We talk by tapping tiny messages on touchscreens—we message using SMS via mobile data networks, or through apps like Facebook Messenger or WhatsApp.

In 2015, the Pew Research Center found that 64% of American adults owned a smartphone of some kind, up from 35% in 2011. We still refer to these personal, pocket-sized computers as phones, but “Phone” is now just one of many communication apps we neglect in favor of texting. Texting is the most widely used mobile data service in America. And in the wider world, four billion people have mobile phones, so four billion people have access to SMS or other messaging apps. For some, dictating messages into a wristwatch offers an appealing alternative to placing a call.

The popularity of texting can be partially explained by the medium’s ability to offer the easy give-and-take of conversation without requiring continuous attention. Texting feels like direct human connection, made even more captivating by unpredictable lag and irregular breaks. Any typing is incidental because the experience of texting barely resembles “writing,” a term that carries associations of considered composition. In his TED talk, Columbia University linguist John McWhorter called texting “fingered conversation”—terminology I find awkward, but accurate. The physical act—typing—isn’t what defines the form or its conventions. Technology is breaking down our traditional categories of communication.

By the numbers, texting is the most compelling computer-human interaction going. When we text, we become immersed and forget our exchanges are computer-mediated at all. We can learn a lot about digital design from the inescapable draw of these bite-sized interactions, specifically the use of language.

What Texting Teaches Us

Texting is a telling example of what makes computer-mediated interaction interesting. The reasons people are compelled to attend to their text messages—even at risk to their own health and safety—aren’t high production values, so-called rich media, or the complexity of the feature set. Texting, and other forms of social media, tap into something very primitive in the human brain. These systems offer always-available social connection. The brevity and unpredictability of the messages themselves trigger the release of dopamine that motivates seeking behavior and keeps people coming back for more.

What makes interactions interesting may start on a screen, but the really interesting stuff happens in the mind. And language is a critical part of that. Our conscious minds are made of language, so it’s easy to perceive the messages you read not just as words but as the thoughts of another mingled with your own. Loneliness seems impossible with so many voices in your head. With minimal visual embellishment, texts can deliver personality, pathos, humor, and narrative. This is apparent in “Texts from Dog,” which, as the title indicates, is a series of imagined text exchanges between a man and his dog (Fig 1.1). With just a few words, and some considered capitalization, Joe Butcher (writing as October Jones) creates a vivid picture of the relationship between a neurotic canine and his weary owner.

Fig 1.1: “Texts from Dog” shows how lively a simple text exchange can be.
Using words is key to connecting with other humans online, just as it is in the so-called “real world.” Imbuing interfaces with the attributes of conversation can be powerful. I’m far from the first person to suggest this. However, as computers mediate more and more relationships, including customer relationships, anyone thinking about digital product[...]



A DIY Web Accessibility Blueprint

2018-03-13T13:18:00+00:00

The summer of 2017 marked a monumental victory for the millions of Americans living with a disability. On June 13th, a Southern District of Florida judge ruled that Winn-Dixie’s inaccessible website violated Title III of the Americans with Disabilities Act. This case marks the first web accessibility trial under the ADA, which was passed into law in 1990.

Despite spending more than $7 million to revamp its website in 2016, Winn-Dixie neglected to include design considerations for users with disabilities. Some of the features that were added include online prescription refills, digital coupons, rewards card integration, and a store locator function. However, it appears that inclusivity didn’t make the cut. Because Winn-Dixie’s new website wasn’t developed to WCAG 2.0 standards, the new features it boasted were in effect only available to sighted, able-bodied users.

When Florida resident Juan Carlos Gil, who is legally blind, visited the Winn-Dixie website to refill his prescriptions, he found it to be almost completely inaccessible using the same screen reader software he uses to access hundreds of other sites. Juan stated in his original complaint that he “felt as if another door had been slammed in his face.” But Juan wasn’t alone. Intentionally or not, Winn-Dixie was denying an entire group of people access to their new website and, in turn, each of the time-saving features it had to offer.

What makes this case unique is that it marks the first time in history in which a public accommodations case went to trial, meaning the judge ruled the website to be a “place of public accommodation” under the ADA and therefore subject to ADA regulations. Since there are no specific ADA regulations regarding the internet, Judge Scola decided the adoption of the Web Content Accessibility Guidelines (WCAG) 2.0 Level AA to be appropriate. (Thanks to the hard work of the Web Accessibility Initiative (WAI) at the W3C, WCAG 2.0 has found widespread adoption throughout the globe, either as law or policy.)

Learning to have empathy

Anyone with a product subscription service (think diapers, razors, or pet food) knows the feeling of gratitude that accompanies the delivery of a much-needed product that arrives just in the nick of time. Imagine how much more grateful you’d be for this service if you, for whatever reason, were unable to drive and lived hours from the nearest store. It’s a service that would greatly improve your life. But now imagine that the service gets overhauled and redesigned in such a way that it is only usable by people who own cars. You’d probably be pretty upset.

This subscription service example is hypothetical, yet in the United States, despite federal web accessibility requirements instituted to protect the rights of disabled Americans, this sort of discrimination happens frequently. In fact, anyone assuming the Winn-Dixie case was an isolated incident would be wrong. Web accessibility lawsuits are rising in number; they increased 37% from 2015 to 2016. While some of these may be what’s known as “drive-by lawsuits,” many of them represent plaintiffs like Juan Gil who simply want equal rights. Scott Dinin, Juan’s attorney, explained, “We’re not suing for damages. We’re only suing them to follow the laws that have been in this nation for twenty-seven years.”

For this reason and many others, now is the best time to take a proactive approach to web accessibility. In this article I’ll help you create a blueprint for getting your website up to snuff.
The accessibility blueprint

If you’ll be dealing with remediation, I won’t sugarcoat it: successfully meeting web accessibility standards is a big undertaking, one that is achieved only when every page of a site adheres to all the guidelines you are attempting to comply with. As I mentioned earlier, those guidelines are usually WCA[...]
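To give a flavor of what remediation work often involves, here are two of the most common fixes (hypothetical markup of my own; the success criteria cited are from WCAG 2.0):

<!-- Text alternatives for meaningful images (WCAG 1.1.1) -->
<img src="store-locator-map.png" alt="Map of store locations near you">

<!-- Labels programmatically associated with form controls (WCAG 1.3.1, 3.3.2) -->
<label for="zip">ZIP code</label>
<input type="text" id="zip" name="zip">

Small as they look, fixes like these have to be applied consistently across every page for a site to meet the standard.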



We Write CSS Like We Did in the 90s, and Yes, It’s Silly

2018-03-06T13:47:00+00:00

As web developers, we marvel at technology. We enjoy the many tools that help with our work: multipurpose editors, frameworks, libraries, polyfills and shims, content management systems, preprocessors, build and deployment tools, development consoles, production monitors—the list goes on. Our delight in these tools is so strong that no one questions whether a small website actually requires any of them. Tool obesity is the new WYSIWYG—the web developers who can’t do without their frameworks and preprocessors are no better than our peers from the 1990s who couldn’t do without FrontPage or Dreamweaver.

It is true that these tools have improved our lives as developers in many ways. At the same time, they have perhaps also prevented us from improving our basic skills. I want to talk about one of those skills: the craft of writing CSS. Not of using CSS preprocessors or postprocessors, but of writing CSS itself. Why? Because CSS is second in importance only to HTML in web development, and because no one needs processors to build a site or app.

Most of all, I want to talk about this because when it comes to writing CSS, it often seems that we have learned nothing since the 1990s. We still write CSS the natural way, with no advances in sorting declarations or selectors and no improvements in writing DRY CSS. Instead, many developers argue fiercely about each of these topics. Others simply dig in their heels and refuse to change. And a third cohort protests even the discussion of these topics. I don’t care that developers do this. But I do care about our craft. And I care that we, as a profession, are ignoring simple ways to improve our work. Let’s talk about this more after the code break.

Here’s unsorted, unoptimized CSS from Amazon in 2003:

.serif { font-family: times, serif; font-size: small; }
.sans { font-family: verdana, arial, helvetica, sans-serif; font-size: small; }
.small { font-family: verdana, arial, helvetica, sans-serif; font-size: x-small; }
.h1 { font-family: verdana, arial, helvetica, sans-serif; color: #CC6600; font-size: small; }
.h3color { font-family: verdana, arial, helvetica, sans-serif; color: #CC6600; font-size: x-small; }
.tiny { font-family: verdana, arial, helvetica, sans-serif; font-size: xx-small; }
.listprice { font-family: arial, verdana, sans-serif; text-decoration: line-through; font-size: x-small; }
.price { font-family: verdana, arial, helvetica, sans-serif; color: #990000; font-size: x-small; }
.attention { background-color: #FFFFD5; }

And here’s CSS from contemporary Amazon:

.a-box { display: block; border-radius: 4px; border: 1px #ddd solid; background-color: #fff; }
.a-box .a-box-inner { border-radius: 4px; position: relative; padding: 14px 18px; }
.a-box-thumbnail { display: inline-block; }
.a-box-thumbnail .a-box-inner { padding: 0 !important; }
.a-box-thumbnail .a-box-inner img { border-radius: 4px; }
.a-box-title { overflow: hidden; }
.a-box-title .a-box-inner { overflow: hidden; padding: 12px 18px 11px; background: #f0f0f0; }

Just as in 2003, the CSS is unsorted and unoptimized. Did we learn anything over the past 15 years? Is this really the best CSS we can write? Let’s look at three areas where I believe we can easily improve the way we do our work: declaration sorting, selector sorting, and declaration repetition.

Declaration sorting

The 90s web developer, if they wrote CSS at all, wrote CSS as it occurred to them. Without sense or order—with no direction whatsoever. The same was true of last decade’s developer.
The same is true of today’s developer, whether novice or expert:

.foo {
  font: arial, sans-serif;
  background: #abc;
  margin: 1em;
  text-align: center;
  letter-spacing: 1px;
  -x-yaddayadda: yes;
}

The only difference between now and[...]
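To make the contrast concrete, here is the same hypothetical rule with its declarations sorted alphabetically (a sketch; alphabetical ordering is one common convention, not necessarily the only one the author would endorse):

.foo {
  background: #abc;
  font: arial, sans-serif;
  letter-spacing: 1px;
  margin: 1em;
  text-align: center;
  -x-yaddayadda: yes; /* prefixed properties sorted by their unprefixed name */
}

The point of any fixed order is that it is mechanical: anyone on the team can find a declaration, and diffs stop churning on arbitrary placement.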



Owning the Role of the Front-End Developer

2018-02-27T13:30:00+00:00

When I started working as a web developer in 2009, I spent most of my time crafting HTML/CSS layouts from design comps. My work was the final step of a linear process in which designers, clients, and other stakeholders made virtually all of the decisions. Whether I was working for an agency or as a freelancer, there was no room for a developer’s input on client work other than when we were called on to answer specific technical questions. Most of the time I would be asked to confirm whether it was possible to achieve a simple feature, such as adding a content slider or adapting an image loaded from a CMS.

In the ensuing years, as front-end development became increasingly challenging, developers’ skills began to evolve, leading to more frustration. Many organizations, including the ones I worked for, followed a traditional waterfall approach that kept us in the dark until the project was ready to be coded. Everything would fall into our laps, often behind schedule, with no room for us to add our two cents. Even though we were often highly esteemed by our teammates, there still wasn’t a chance for us to contribute to projects at the beginning of the process. Every time we shared an idea or flagged a problem, it was already too late.

Almost a decade later, we’ve come a long way as front-end developers. After years of putting in the hard work required to become better professionals and have a bigger impact on projects, many developers are now able to occupy a more fulfilling version of the role. But there’s still work to be done: unfortunately, some front-end developers with amazing skills are still limited to basic PSD-to-HTML work. Others find themselves in a better position within their team, but are still pushing for a more prominent role where their ideas can be fostered. Although I’m proud to believe I’m part of the group that evolved with the role, I continue to fight for our seat at the table. I hope sharing my experience will help others fighting with me.

My road to earning a seat at the table

My role began to shift the day I watched an inspiring talk by Seth Godin, which helped me realize I had the power to start making changes that would make my work more fulfilling. With his recommendation to demand responsibility whether you work for a boss or a client, Godin gave me the push I needed. I wasn’t expecting to make any big leaps—just enough to feel like I was headed in the right direction.

Taking small steps within a small team

My first chance to test the waters was ideal. I had recently partnered with a small design studio and we were a team of five. Since I’d always been open about my soft spot for great design, it wasn’t hard to sell them on the idea of having me get a bit more involved with the design process and start giving technical feedback before comps were presented to clients. The results were amazing and had a positive impact on everybody’s work. I started getting design hand-offs that I both approved of from a technical point of view and had a more personal connection with. For their part, the designers happily noticed that the websites we launched were more accurate representations of the comps they had handed off. My next step was to get involved with every single project from day one. I started to tag along to initial client meetings, even before any contracts had been signed.
I started flagging things that could turn the development phase into a nightmare; at the same time I was able to throw around some ideas about new technologies I’d been experimenting with. After a few months, I started feeling that my skills were finally having an impact on my team’s projects. I was satisfied with my role within the team, but I knew it wouldn’t last forever. Eventually it was time for me to embark on a journey that [...]



Discovery on a Budget: Part II

2018-02-13T15:00:00+00:00

Welcome to the second installment of the “Discovery on a Budget” series, in which we explore how to conduct effective discovery research when there is no existing data to comb through, no stakeholders to interview, and no slush fund to draw upon. In part 1 of this series, we discussed how it is helpful to articulate what you know (and what you assume) in the form of a problem hypothesis. We also covered strategies for conducting one of the most affordable and effective research methods: user interviews. In part 2 we will discuss when it’s beneficial to introduce a second, competing problem hypothesis to test against the first. We will also discuss the benefits of launching a “fake-door” and how to conduct an A/B test when you have little to no traffic.

A quick recap

In part 1 I conducted the first round of discovery research for my budget-conscious (and fictitious!) startup, Candor Network. The original goal for Candor Network was to provide a non-addictive social media platform that users would pay for directly. I articulated that goal in the form of a problem hypothesis:

Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.

Also in part 1, I took extra care to document the assumptions that went into creating this hypothesis. They were:

Users feel that social media sites like Facebook are addictive.
Users don’t like to be addicted to social media.
Users would be willing to pay for a non-addictive Facebook replacement.

For the first round of research, I chose to conduct user interviews because it is a research method that is adaptable, effective, and—above all—affordable. I recruited participants from Facebook, taking care to document the bias of using a convenience sampling method. I carefully crafted my interview protocol, and used a number of strategies to keep my participants talking. Now it is time to review the data and analyze the results.

Analyze the data

When we conduct discovery research, we look for data that can help us either affirm or reject the assumptions we made in our problem hypothesis. Regardless of what research method you choose, it’s critical that you set aside the time to objectively review and analyze the results. In practice, analyzing interview data involves creating transcriptions of the interviews and then reading them many, many times. Each time you read through the transcripts, you highlight and label sentences or sections that seem relevant or important to your research question. You can use products like NVivo, HyperRESEARCH, or any other qualitative analysis tool to help facilitate this process. Or, if you are on a pretty strict budget, you can simply use Google Sheets to keep track of relevant sections in one column and labels in another.

Screenshot of my interview analysis in Google Sheets

For my project, I specifically looked for data that would show whether my participants felt Facebook was addicting and whether that was a bad thing, and if they’d be willing to pay for an alternative. Here’s how that analysis played out:

Assumption 1: Users feel that social media sites like Facebook are addictive

“Facebook has a weird, hypnotizing effect on my brain. I keep scrolling and scrolling and then I like wake up and think, ‘where have I been?
why am I spending my time on this?’”
—interview participant

Overwhelmingly, my data affirms this assumption. All of my participants (eleven out of eleven) mentioned Facebook being addictive in some way.

Assumption 2: Use[...]



My Accessibility Journey: What I’ve Learned So Far

2018-02-06T15:00:00+00:00

Last year I gave a talk about CSS and accessibility at the stahlstadt meetup in Linz, Austria. Afterward, an attendee asked why I was interested in accessibility: Did I or someone in my life have a disability? I’m used to answering this question—to which the answer is no—because I get it all the time. A lot of people seem to assume that a personal connection is the only reason someone would care about accessibility.

This is a problem. For the web to be truly accessible, everyone who makes websites needs to care about accessibility. We tend to use our own abilities as a baseline when we’re designing and building websites. Instead, we need to keep in mind our diverse users and their diverse abilities to make sure we’re creating inclusive products that aren’t just designed for a specific range of people.

Another reason we all should think about accessibility is that it makes us better at our jobs. In 2016 I took part in 10k Apart, a competition held by Microsoft and An Event Apart. The objective was to build a compelling web experience that worked without JavaScript and could be delivered in 10 kB. On top of that, the site had to be accessible. At the time, I knew about some accessibility basics like using semantic HTML, providing descriptions for images, and hiding content visually. But there was still a lot to learn. As I dug deeper, I realized that there was far more to accessibility than I had ever imagined, and that making accessible sites basically means doing a great job as a developer (or as a designer, project manager, or writer).

Accessibility is exciting

Web accessibility is not about a certain technology. It’s not about writing the most sophisticated code or finding the most clever solution to a problem; it’s about users and whether they’re able to use our products. The focus on users is the main reason why I’m specializing in accessibility rather than solely in animation, performance, JavaScript frameworks, or WebVR. Focusing on users means I have to keep up with pretty much every web discipline, because users will load a page, deal with markup in some way, use a design, read text, control a JavaScript component, see animation, walk through a process, and navigate. What all those things have in common is that they’re performed by someone in front of a device. What makes them exciting is that we don’t know which device it will be, or which operating system or browser. We also don’t know how our app or site will be used, who will use it, how fast their internet connection will be, or how powerful their device will be.

Making accessible sites forces you to engage with all of these variables—and pushes you, in the process, to do a great job as a developer. For me, making accessible sites means making fast, resilient sites with great UX that are fun and easy to use even in conditions that aren’t ideal. I know, that sounds daunting. The good news, though, is that I’ve spent the last year focusing on some of those things, and I’ve learned several important lessons that I’m happy to share.

1. Accessibility is a broad concept

Many people, like me pre-2016, think making your site accessible is synonymous with making it accessible to people who use screen readers. That’s certainly hugely important, but it’s only one part of the puzzle. Accessibility means access for everyone:

If your site takes ten seconds to load on a mobile connection, it’s not accessible.
If your site is only optimized for one browser, it’s not accessible.
If the content on your site is difficult to understand, your site isn’t accessible. It doesn’t matter who’s using your website or when, where, and how they’re doing it. Wh[...]



Design Like a Teacher

2018-02-01T15:00:00+00:00

In 2014, the clinic where I served as head of communications and digital strategy switched to a new online patient portal, a change that was mandated by the electronic health record (EHR) system we used. The company that provides the EHR system held several meetings for the COO and me to learn the new tool, and provided materials to give to patients to help them register for and use the new portal.

As the sole person at my clinic working on any aspect of user experience, I knew the importance of knowing the audience when implementing an initiative like the patient portal. So I was skeptical of the materials provided to the patients, which assumed a lot of knowledge on their part and focused on the cool features of the portal rather than on why patients would actually want to use it. By the time the phone rang for the fifth time on the first day of the transition, I knew my suspicion that the patient portal had gone wrong in the user experience stage was warranted. Patients were getting stuck during every phase of the process—from wondering why they should use the portal to registering for and using it.

My response was to ask patients what they had tried so far and where they were getting stuck. Then I would try to explain why they might want to use the portal. Sometimes I lost patients completely; they just refused to sign up. They had a bad user experience trying to understand how a portal fit into their mental model of receiving healthcare, and I had a terrible user experience trying to learn what I needed to do to guide patients through the migration. To borrow a phrase from Dave Platt, the lead instructor of the UX Engineering course I currently help teach, the “hassle budget” was extremely high.

I realized three important things in leading this migration:

1. When people get stuck, their frustration prevents them from providing information up front. They start off with “I’m stuck” and don’t offer additional feedback until you pull it out of them. (If you felt a tremor just then, that was every IT support desk employee in the universe nodding emphatically.)
2. In trying to get them unstuck, I had to employ skills that drew on my work outside of UX. There was no choice; I had a mandate to reach an adoption rate of at least 60%.
3. The overarching goal was really to help these patients learn to do something different from what they were used to, whether that was dealing with a new interface or dealing with an interface for the first time at all.

Considering these three realizations led me to a single, urgent conclusion that has transformed my UX practice: user experience is really a way of defining and planning what we want a user to learn, so we also need to think about our own work as how to teach. It follows, then, that user experience toolkits need to include developing a teaching mindset. But what does that mean? And what’s the benefit? Let’s use this patient portal story and the three realizations above as a framework for considering this.

Helping users get unstuck

Research on teaching and learning has produced two concepts that can help explain why people struggle to get unstuck and what to do about it: 1) cognitive load and 2) the zone of proximal development. Much like you work your muscles through weight resistance to develop physical strength, you work your brain through cognitive load to develop mental strength—to learn. There are three kinds of cognitive load: intrinsic, germane, and extraneous.
This type of cognitive load ...    is responsible for ...
Intrinsic                          Actual learning of the material
Germane                            Building that new information into a more permanent memory store [...]



CSS: The Definitive Guide, 4th Edition

2018-01-30T13:48:00+00:00

A note from the editors: We’re pleased to share an excerpt from Chapter 19 (“Filters, Blending, Clipping, and Masking”) of CSS: The Definitive Guide, 4th Edition by Eric Meyer and Estelle Weyl, available now from O’Reilly.

In addition to filtering, CSS offers the ability to determine how elements are composited together. Take, for example, two elements that partially overlap due to positioning. We’re used to the one in front obscuring the one behind. This is sometimes called simple alpha compositing, in that you can see whatever is behind the element as long as some (or all) of it has alpha channel values less than 1. Think of, for example, how you can see the background through an element with opacity: 0.5, or in the areas of a PNG or GIF87a that are set to be transparent.

But if you’re familiar with image-editing programs like Photoshop or GIMP, you know that image layers which overlap can be blended together in a variety of ways. CSS has gained the same ability. There are two blending strategies in CSS (at least as of late 2017): blending entire elements with whatever is behind them, and blending together the background layers of a single element.

Blending Elements

In situations where elements overlap, it’s possible to change how they blend together with the property mix-blend-mode.

mix-blend-mode
Values: normal | multiply | screen | overlay | darken | lighten | color-dodge | color-burn | hard-light | soft-light | difference | exclusion | hue | saturation | color | luminosity
Initial value: normal
Applies to: All elements
Computed value: As declared
Inherited: No
Animatable: No

The way the CSS specification puts this is: “defines the formula that must be used to mix the colors with the backdrop.” That is to say, the element is blended with whatever is behind it (the “backdrop”), whether that’s pieces of another element, or just the background of its parent element. The default, normal, means that the element’s pixels are shown as is, without any mixing with the backdrop, except where the alpha channel is less than 1. This is the “simple alpha compositing” mentioned previously. It’s what we’re all used to, which is why it’s the default value. A few examples are shown in Figure 19-6.

Figure 19-6. Simple alpha channel blending

For the rest of the mix-blend-mode keywords, I’ve grouped them into a few categories. Let’s also nail down a few definitions:

The foreground is the element that has mix-blend-mode applied to it.
The backdrop is whatever is behind that element. This can be other elements, the background of the parent element, and so on.
A pixel component is the color component of a given pixel: R, G, and B.

If it helps, think of the foreground and backdrop as images that are layered atop one another in an image-editing program. With mix-blend-mode, you can change the blend mode applied to the top image (the foreground).

Darken, Lighten, Difference, and Exclusion

These blend modes might be called simple-math modes—they achieve their effect by directly comparing values in some way, or using simple addition and subtraction to modify pixels:

darken: Each pixel in the foreground is compared with the corresponding pixel in the backdrop, and for each of the R, G, and B values (the pixel components), the smaller of the two is kept. Thus, if the foreground pixel has a value corresponding to rgb(91,164,22) and the backdrop pixel is rgb(102,104,255), the resulting pixel will be rgb(91,104,22).
lighten: This blend is the inverse of darken: when comparing the R, G, and B components of a foreground pixel and its corresponding backdrop pixel, the larger of the two values is kept. Thus, if the foreground pixel has a value co[...]
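In practice, applying a blend mode is a one-line declaration. A quick usage sketch (not the book’s own example; the class name is illustrative):

/* Blend an overlapping element into whatever sits behind it */
.foreground {
  mix-blend-mode: darken; /* per pixel component, the smaller value wins */
}

/* With lighten, the larger value wins instead: the same pair of pixels,
   rgb(91,164,22) over rgb(102,104,255), would produce rgb(102,164,255). */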



The King vs. Pawn Game of UI Design

2018-01-23T13:33:00+00:00

If you want to improve your UI design skills, have you tried looking at chess? I know it sounds contrived, but hear me out. I’m going to take a concept from chess and use it to build a toolkit of UI design strategies. By the end, we’ll have covered color, typography, lighting and shadows, and more. But it all starts with rooks and pawns.

I want you to think back to the first time you ever played chess (if you’ve never played chess, humor me for a second—and no biggie; you will still understand this article). If your experience was anything like mine, your friend set up the board like this:

And you got your explanation of all the pieces. This one’s a pawn and it moves like this, and this one is a rook and it moves like this, but the knight goes like this or this—still with me?—and the bishop moves diagonally, and the king can only do this, but the queen is your best piece, like a combo of the rook and the bishop. OK, want to play?

This is probably the most common way of explaining chess, and it’s enough to make me hate board games forever. I don’t want to sit through an arbitrary lecture. I want to play.

One particular chess player happens to agree with me. His name is Josh Waitzkin, and he’s actually pretty good. Not only at chess (where he’s a grandmaster), but also at Tai Chi Push Hands (he’s a world champion) and Brazilian Jiu Jitsu (he’s the first black belt under 5x world champion Marcelo Garcia). Now he trains financiers to go from the top 1% to the top .01% in their profession. Point is: this dude knows a lot about getting good at stuff. Now here’s the crazy part. When Josh teaches you chess, the board looks like this:

King vs. King and Pawn

Whoa. Compared to what we saw above, this is stupidly simple. And, if you know how to play chess, it’s even more mind-blowing that someone would start teaching with this board. In the actual game of chess, you never see a board like this. Someone would have won long ago. This is the chess equivalent of a street fight where both guys break every bone in their body, dislocate both their arms, can hardly see out of their swollen eyes, yet continue to fight for another half-hour. What gives?

Here’s Josh’s thinking: when you strip the game down to its core, everything you learn is a universal principle. That sounds pretty lofty, but I think it makes sense when you consider it. There are lots of things on a fully-loaded board to distract a beginning chess player, but everything you start learning in a king-pawn situation is fundamentally important to chess: using two pieces to apply pressure together; which spaces are “hot”; and the difference between driving for a checkmate and a draw.

Are you wondering if I’m ever going to start talking about design? Glad you asked.

The simplest possible scenario

What if, instead of trying to design an entire page with dozens of elements (nav, text, input controls, a logo, etc.), we consciously started by designing the simplest thing possible? We deliberately limit the playing field to one, tiny thing and see what we learn? Let’s try. What is the simplest possible element? I vote that it’s a button.

This is the most basic, default button I could muster. It’s Helvetica (default font) with a 16px font size (pretty default) on a plain, Sketch-default-blue rectangle. It’s 40px tall (nice, round number) and has 20px of horizontal padding on each side.
So yeah, I’ve already made a bunch of design decisions, but can we agree I basically just used default values instead of making decisions for principled, design-related reasons? Now let’s start playing with this button. What properties are modi[...]



Mental Illness in the Web Industry

2018-01-18T13:57:00+00:00

The picture of the tortured artist has endured for centuries: creative geniuses who struggle with their metaphorical demons and don’t relate to life the same way as most people. Today, we know some of this can be attributed to mental illness: depression, anxiety, bipolar disorder, and many others. We have modern stories about this and plenty of anecdotal information that fuels the popular belief in a link between creativity and mental illness. But science has also started asking questions about that link. A recent study has suggested that creative professionals may be more genetically predisposed to mental illness.

In the web industry, whether designer, dev, copywriter, or anything else, we’re often creative professionals. The numbers suggest that mental illness hits the web industry especially hard. Our industry has made great strides in compassionate discussion of disability, with a focus on accessibility and events like Blue Beanie Day. But even though we’re having meaningful conversations and we’re seeing progress, issues related to diversity, inclusion, and sexual harassment are still a major problem for our industry. Understanding and acceptance of mental health issues is an area that needs growth and attention just like many others.

When it comes to mental health, we aren’t quite as understanding as we think we are. According to a study published by the Centers for Disease Control and Prevention, 57% of the general population believes that society at large is caring and sympathetic toward people with mental illness, but only 25% of people with mental health symptoms believe the same thing. Society is less understanding and sympathetic regarding mental illness than it thinks it is. Where’s the disconnect?

What does it look like in our industry?

It’s usually not negligence or ill will on anybody’s part. It has a lot more to do with people just not understanding the prevalence and reality of mental illness in the workplace. We need to begin discussing mental illness as we do any other personal challenge that people face. This article is no substitute for a well-designed scientific study or a doctor’s advice, and it’s not trying to declare truths about mental illness in the industry. And it certainly does not intend to lump together or equalize any and all mental health issues, illnesses, or conditions. But we do suspect that plenty of people in the industry struggle with their mental health at some point or another, and we just don’t seem to talk about it. That silence is surprising given the sense of community that web professionals have been proud of for decades.

We reached out to a few people in our industry who were willing to share their unique stories to bring light to what mental health looks like for them in the workplace. Whether you have your own struggles with mental health issues or just want to understand those who do, these stories are a great place to start the conversation.

Meet the contributors

Gerry: I’ve been designing websites since the late ’90s, starting out in UI design, evolving into an IA, and now in a UX leadership role. Over my career, I’ve contributed to many high-profile projects, organized local UX events, and done so in spite of my personal roadblocks.

Brandon Gregory: I’ve been working in the web industry since 2006, first as a designer, then as a developer, then as a manager/technical leader. I’m also a staff member and regular contributor at A List Apart.
I was diagnosed with bipolar disorder in 2002 and almost failed out of college because of it, although I now live a mostly normal life with a [...]



Working with External User Researchers: Part I

2018-01-16T15:00:00+00:00

You’ve got an idea or perhaps some rough sketches, or you have a fully formed product nearing launch. Or maybe you’ve launched it already. Regardless of where you are in the product lifecycle, you know you need to get input from users. You have a few sound options to get this input: use a full-time user researcher or contract out the work (or maybe a combination of both). Between the three of us, we’ve run a user research agency, hired external researchers, and worked as freelancers. Through our different perspectives, we hope to provide some helpful considerations.

Should you hire an external user researcher?

First things first: in this article, we focus on contracting external user researchers, meaning that a person or team is brought on for the duration of a contract to conduct the research. Here are the most common situations where we find this type of role:

Organizations without researchers on staff: It would be great if companies validated their work with users during every iteration. But unfortunately, in real-world projects, user research happens at less frequent intervals, meaning there might not be enough work to justify hiring a full-time researcher. For this reason, it sometimes makes sense to use external people as needed.

Organizations whose research staff is maxed out: In other cases, particularly with large companies, there may already be user researchers on the payroll. Sometimes these researchers are specific to a particular effort, and other times the researchers themselves function as internal consultants, helping out with research across multiple projects. Either way, there is a finite amount of research staff, and sometimes the staff gets overbooked. These companies may then pull in additional contract-based researchers to independently run particular projects or to provide support to full-time researchers.

Organizations that need special expertise: Even if a company does have user researchers on staff and those researchers have time, there may be specialized kinds of user research for which an external contract-based researcher is brought on. For example, they may want to do research with representative users who regularly use screen readers, so they bring in an accessibility expert who also has user research skills. Or they might need a researcher with special quantitative skills for a certain project.

Why hire an external researcher vs. other options?

Designers as researchers: You could hire a full-time designer who also has research skills. But a designer usually won’t have the same level of research expertise as a dedicated researcher. Additionally, they may end up researching their own designs, making it extremely difficult to moderate test sessions without any form of bias.

Product managers as researchers: While it’s common for enthusiastic product managers to want to conduct their own guerrilla user research, this is often a bad idea. Product managers tend to hear feedback that validates their ideas, and most often they aren’t trained to ask non-leading questions.

Temporary roles: You could also bring on a researcher in a staff augmentation role, meaning someone who works for you full-time for an extended period of time but who is not considered a full-time employee. This can be a bit harder to justify. For example, there may be legal requirements you’d have to satisfy if you directly contract an individual. Or you could find someone through a staffing agency: fewer legal hurdles, but likely far pricier.
If these options don’t sound like a good fit for your needs, hiring an external user researcher on a proj[...]



No More FAQs: Create Purposeful Information for a More Effective User Experience

2018-01-11T15:00:00+00:00

It’s normal for your website users to have recurring questions and need quick access to specific information to complete … whatever it is they came looking for. Many companies still opt for the ubiquitous FAQ (frequently asked/anticipated questions) format to address some or even all information needs. But FAQs often miss the mark because people don’t realize that creating effective user information—even when using the apparently simple question/answer format—is complex and requires careful planning. As a technical writer and now information architect, I’ve worked to upend this mediocre approach to web content for more than a decade, and here’s what I’ve learned: instead of defaulting to an unstructured FAQ, invest in information that’s built around a comprehensive content strategy specifically designed to meet user and company goals. We call it purposeful information.

The problem with FAQs

Because of the internet’s Usenet heritage—discussion boards where regular contributors would produce FAQs so they didn’t have to repeat information for newbies—a lot of early websites started out by providing all information via FAQs. Well, the ’80s called, and they want their style back! Unfortunately, content in this simple format can often be attractive to organizations, as it’s “easy” to produce without the need to engage professional writers or comprehensively work on information architecture (IA) and content strategy. So, like zombies in a horror film, and with the same level of intellectual rigor, FAQs continue to pop up all over the web.

The trouble is, this approach to documentation-by-FAQ has problems, and the information is about as far from being purposeful as it’s possible to get. For example, when companies and organizations resort to documentation-by-FAQ, it’s often the only place certain information exists, yet users are unlikely to spend the time required to figure that out. Conversely, if information is duplicated, it’s easy for website content to get out of sync. The FAQ page can also be a dumping ground for any information a company needs to put on the website, regardless of the topic. Worse, the page’s format and structure can increase confusion and cognitive load, while including obviously invented questions and overt marketing language can result in losing users’ trust quickly.

Looking at each issue in more detail:

Duplicate and contradictory information: Even on small websites, it can be hard to maintain information. On large sites with multiple authors and an unclear content strategy, information can get out of sync quickly, resulting in duplicate or even contradictory content. I once purchased food online from a company after reading in their FAQ—the content that came up most often when searching for allergy information—that the product didn’t contain nuts. However, on receiving the product and reading the label, I realized the FAQ information was incorrect, and I was able to obtain a refund. An information architecture (IA) strategy that includes clear pathways to key content not only better supports user information needs that drive purchases, but also reduces company risk. If you do have to put information in multiple locations, consider using an object-oriented content management system (CMS) so content is reused, not duplicated. (Our company open-sourced one called Fae.)

Lack of discernible content order: Humans want information to be ordered in ways they can understand, whether it’s alphabetical, time-based, or by order of operation, importance, or even frequency.
The question format can[...]



Why Mutation Can Be Scary

2018-01-09T15:00:00+00:00

A note from the editors: This article contains sample lessons from Learn JavaScript, a course that helps you learn JavaScript to build real-world components from scratch.

To mutate means to change in form or nature. Something that’s mutable can be changed, while something that’s immutable cannot be changed. To understand mutation, think of the X-Men. In X-Men, people can suddenly gain powers. The problem is, you don’t know when these powers will emerge. Imagine your friend turns blue and grows fur all of a sudden; that’d be scary, wouldn’t it?

In JavaScript, the same problem with mutation applies. If your code is mutable, you might change (and break) something without knowing.

Objects are mutable in JavaScript

In JavaScript, you can add properties to an object. When you do so after instantiating it, the object is changed permanently. It mutates, like how an X-Men member mutates when they gain powers. In the example below, the variable egg mutates once you add the isBroken property to it. We say that objects (like egg) are mutable (have the ability to mutate).

const egg = { name: "Humpty Dumpty" };
egg.isBroken = false;

console.log(egg);
// {
//   name: "Humpty Dumpty",
//   isBroken: false
// }

Mutation is pretty normal in JavaScript. You use it all the time. Here’s when mutation becomes scary. Let’s say you create a constant variable called newEgg and assign egg to it. Then you want to change the name of newEgg to something else.

const egg = { name: "Humpty Dumpty" };
const newEgg = egg;

newEgg.name = "Errr ... Not Humpty Dumpty";

When you change (mutate) newEgg, did you know egg gets mutated automatically?

console.log(egg);
// {
//   name: "Errr ... Not Humpty Dumpty"
// }

The example above illustrates why mutation can be scary—when you change one piece of your code, another piece can change somewhere else without your knowing. As a result, you’ll get bugs that are hard to track and fix. This weird behavior happens because objects are passed by reference in JavaScript.

Objects are passed by reference in JavaScript

To understand what “passed by reference” means, first you have to understand that each object has a unique identity in JavaScript. When you assign an object to a variable, you link the variable to the identity of the object (that is, you pass it by reference) rather than assigning the variable the object’s value directly. This is why when you compare two different objects, you get false even if the objects have the same value.

console.log({} === {}); // false

When you assign egg to newEgg, newEgg points to the same object as egg. Since egg and newEgg are the same thing, when you change newEgg, egg gets changed automatically.

console.log(egg === newEgg); // true

Unfortunately, you don’t want egg to change along with newEgg most of the time, since it causes your code to break when you least expect it. So how do you prevent objects from mutating? Before you understand how to prevent objects from mutating, you need to know what’s immutable in JavaScript.

Primitives are immutable in JavaScript

In JavaScript, primitives (String, Number, Boolean, Null, Undefined, and Symbol) are immutable; you cannot change the structure (add properties or methods) of a primitive. Nothing will happen even if you try to add properties to a primitive.

const egg = "Humpty Dumpty";
egg.isBroken = false;

console.log(egg); // Humpty Dumpty
console.log(egg.isBroken); // undefined

const doesn’t grant immutability

Many people think that variables declared with co[...]
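As a minimal sketch of the usual answer to that “how do you prevent objects from mutating?” question (not necessarily how the full lesson continues): copy the object instead of pointing a second variable at it. The spread syntax below performs a shallow copy, so deeply nested objects would still share references; Object.assign({}, egg) behaves the same way.

const egg = { name: "Humpty Dumpty" };

// Copy egg's properties into a brand-new object instead of sharing a reference.
const newEgg = { ...egg };
newEgg.name = "Errr ... Not Humpty Dumpty";

console.log(egg.name);       // "Humpty Dumpty" (unchanged this time)
console.log(newEgg.name);    // "Errr ... Not Humpty Dumpty"
console.log(egg === newEgg); // false: two distinct identities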



Discovery on a Budget: Part I

2018-01-04T15:00:00+00:00

If you crack open any design textbook, you’ll see some depiction of the design cycle: discover, ideate, create, evaluate, and repeat. Whenever we bring on a new client or start working on a new feature, we start at the top of the wheel with discover (or discovery). It is the time in the project when we define what problem we are trying to solve and what our first approach to solving it should be.

Ye olde design cycle

We commonly talk about discovery at the start of a sprint cycle at an established business, where there are things like budgets, product teams, and existing customers. The discovery process may include interviewing stakeholders or poring over existing user data. And we always exit the discovery phase with some sort of idea to move forward with.

However, discovery is inherently different when you work at a nonprofit, startup, or fledgling small business. It may be a design team of one (you), with zero dollars to spend, and only a handful of people aware the business even exists. There are no clients to interview and no existing data to examine. This may also be the case at large businesses when they want to test the waters on a new direction without overcommitting (or overspending). Whenever you are constrained on budget, data, and stakeholders, you need to be flexible and crafty in how you conduct discovery research. But you can’t skimp on rigor and thoroughness. If the idea you exit the discovery phase with isn’t any good, your big launch could turn out to be a business-ending flop.

In this article I’ll take you through a discovery research cycle, but apply it towards a (fictitious) startup idea. I’ll introduce strategies for conducting discovery research with no budget, existing user data, or resources to speak of. And I’ll show how the research shapes the business going forward.

Write up the problem hypothesis

An awful lot of ink (virtual or otherwise) has been spent proclaiming that we should all “fall in love with the problem, not the solution.” And it has been ink well spent. When it comes to product building, a problem-focused philosophy is the cornerstone of any user-centric business. But how, exactly, do you know when you have a problem worth solving? If you work at a large, established business, you may have user feedback and data pointing you, like flashing arrows on a well-marked road, toward a problem worth solving. However, if you are launching a startup, or work at a larger business venturing into new territory, it can be more like hiking through the woods and searching for the next blaze mark on the trail. Your ideas are likely based on personal experiences and gut instincts.

When your ideas are based on personal experiences, assumptions, and instincts, it’s important to realize they need a higher-than-average level of tire-kicking. You need to evaluate the question “Do I have a problem worth solving?” with a higher level of rigor than you would at a company with budget to spare and a wealth of existing data. You need to take all of your ideas and assumptions and examine them thoroughly. And the best way to examine your ideas and categorize your assumptions is with a hypothesis. As the dictionary describes it, a hypothesis is “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.” That also serves as a good description of why we do discovery research in the first place. We may have an idea that there is a problem worth solving, but we don’t yet know the scope or critical det[...]