
Adactio: Journal

The online journal of Jeremy Keith, an author and web developer living and working in Brighton, England.


Links from a talk

Wed, 21 Mar 2018 16:56:28 GMT

In two weeks’ time, I’ll be in Seattle for An Event Apart. I’ll be giving a brand new talk. The title is The Way Of The Web (although perhaps a more accurate title would be The Layers Of The Web). Here’s the description:

Do you ever get overwhelmed by the ever-changing nature of web design and development? Exhausting, isn’t it? How are you supposed to know which technologies and tools you should invest your time in? Will they stick around or will you just have to relearn everything in another few months? Join Jeremy as he takes a tour of the past, present, and future of working on the web. From the building blocks of HTML, CSS, and JavaScript through to frameworks and libraries right up to the latest and greatest Progressive Web Apps, this talk will examine our collective assumptions with a critical eye. By learning from the past, we can make sensible design decisions today to build the web of tomorrow.

There’s a direct evolutionary line from my previous talks—Resilience and Evaluating Technology—to this new one. (Spoiler: everything I talk about is in some way related to progressive enhancement …even if I never use the words “progressive” or “enhancement” in the talks.)

I’ve been preparing this new talk for months. It started with a mind map—an A3 sheet of paper with disconnected thoughts, like something from the scene in the crime movie where they enter the lair of the serial killer and find a crazy wall. Then I set it aside and began procrastinating. But it was the good kind of procrastinating, right? I mean, I had made a start and all those thoughts were now bubbling around in my head. Eventually I forced myself to put things in some sort of order and started creating slides. That’s the beginning of the horrible process of bouncing between thinking “this is pretty good!” and “this is absolute crap!” To be honest, I never actually know if a talk is any good until I give it in front of an audience (practice runs at work are great for getting feedback but they’re not the same as doing the talk for real).

Anyway, I think the talk is ready to roll. If you see me giving this talk and you’re interested in diving deeper into the topics raised, I’ve gathered together some of the sources I used.

Further Reading:
  Pace Layering: How Complex Systems Learn and Keep Learning by Stewart Brand
  The Human Computer’s Dreams Of The Future (PDF) by Ida Rhodes
  3D Glasses On Reality by Kim Stanley Robinson
  The Rule of Least Power by Tim Berners-Lee and Noah Mendelsohn
  Everything Easy Is Hard Again by Frank Chimero
  Over-engineering is under-engineering by Baldur Bjarnason
  The Burden of Precision by Daniel Eden
  The work I like by Ethan Marcotte
  A Sound Of Thunder (PDF) by Ray Bradbury

Related posts:
  02018-03-06 Minimal viable service worker
  02017-12-23 Ubiquity and consistency
  02017-11-02 The dConstruct Audio Archive works offline
  02017-03-15 Progressive Web App questions
  02017-01-11 Making Resilient Web Design work offline
  02016-10-18 Choice
  02015-11-15 Home Screen

Progressive Web Apps:
  Huffduffer
  Trivago
  Resilient Web Design
  Adactio
  The dConstruct Archive

Books:
  Make It So by Nathan Shedroff and Christopher Noessel
  How Buildings Learn by Stewart Brand
  Time Travel by James Gleick

Films:
  Plan Nine From Outer Space, 1959, directed by Ed Wood
  2001: A Space Odyssey, 1968, directed by Stanley Kubrick
  Blade Runner, 1982, directed by Ridley Scott
  Brazil, 1985, directed by Terry Gilliam
  Southland Tales, 2006, directed by Richard Kelly

[...]

A workshop on building for resilience

Fri, 09 Mar 2018 16:51:28 GMT

In February, I tried out a new workshop two times—once at Webstock in New Zealand, and once in Hong Kong. The workshop is called The Progressive Web: Building for Resilience. Here’s an excerpt from the blurb:

This workshop will show you how to think in a progressive way that works with the grain of the web. Together we’ll peel back the layers of the web and build upwards, creating experiences that work for everyone while making the best of cutting-edge browser technologies. From URL design to Progressive Web Apps, this journey will cover each stage of technological advancement.

Basically, it’s the workshop version of Resilient Web Design. If that book is the theory, this workshop is the practice.

Tim recently posted his tips for running workshops and there’s a lot in there that resonates with me. Like Tim, I’ve become less and less reliant on slides. In fact, this workshop—like my workshop on evaluating technology—has no slides. Instead it’s all about the exercises and going with the flow.

After starting with a warm-up, I canvass the room to see if there are any specific topics, tools or technologies that people are particularly interested in covering. I’ll note those (on post-its slapped on the wall) for reference throughout the day, to try to make sure that those particular things are touched on at some point. Then I start with a thought experiment…

First of all, I get everyone to call out websites, services and apps that they use almost every day: Twitter, Facebook, Gmail, Slack, Google Docs, and so on. Those all get documented on the wall. Then it’s time to ask of each product, “What is the core functionality?” The idea here is to get beneath the surface-level verbs like swiping, tapping and dragging to get to the real purpose of a service: buying, selling, sharing, reading, writing, collaborating, and so on.

At this point I inform the attendees that the year is 1995. And now we’re going to build these services using the technology of this time. This is a playful way of getting answers to the question “What’s the simplest technology to enable the core functionality?” It’s mostly forms, links, and lots of heavy lifting on the server.

Then the real fun begins. “Enhance!” Moving forward in time, we get to add styles, we add interactivity with JavaScript, then Ajax, and then we get to really have fun with technologies like web sockets, geolocation, local storage, right the way up to service workers, notifications, and background sync. And the beauty of it all is that, if any of those technologies aren’t supported in a particular browser or device, the core functionality is still available.

Next, we apply this layered mindset to a new service. I split the attendees into groups, and each of them gets a procedurally-generated startup idea …generated by shuffling some cards. This is an exercise I first tried when I was teaching in Porto: I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

The first few exercises are good creative fun: come up with a name, then a logo, then a business model. Then it’s time to build. It starts with URL design. Then it’s content prioritisation (for a representative URL). Then it’s layout (sketching!). The enhancements have begun. “How might this URL benefit from Ajax?” “How might this URL benefit from geolocation?” “How might this URL benefit from offline storage?” “How might this URL benefit from a service worker?”

At this point, we’ve applied the layered, progressive approach at the scale of an entire service, and at the scale of an individual URL. Finally, we apply the same approach at t[...]
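As a concrete illustration of that layering principle—my own sketch, not part of the workshop materials—each enhancement is wrapped in a feature test, so a browser that lacks a capability simply ignores it and the core, server-rendered functionality keeps working (showNearbyResults is an invented helper):

  // Hypothetical sketch of layered enhancement via feature detection.
  if ('geolocation' in navigator) {
    navigator.geolocation.getCurrentPosition(position => {
      showNearbyResults(position.coords); // invented helper: enhance with location-aware content
    });
  }

  if ('serviceWorker' in navigator) {
    // Offline support is a bonus layer, not a requirement.
    navigator.serviceWorker.register('/serviceworker');
  }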

Minimal viable service worker

Tue, 06 Mar 2018 12:12:48 GMT

I really, really like service workers. They’re one of those technologies that have such clear benefits to users that it seems like a no-brainer to add a service worker to just about any website.

The thing is, every website is different, so the service worker strategy for every website needs to be different too. Still, I was wondering if it would be possible to create a service worker script that would work for most websites. Here’s the script I came up with. The logic works like this:

  If there’s a request for an HTML page, fetch it from the network and store a copy in a cache (but if the network request fails, try looking in the cache instead).
  For any other files, look for a copy in the cache first but meanwhile fetch a fresh version from the network to update the cache (and if there’s no existing version in the cache, fetch the file from the network and store a copy of it in the cache).

So HTML files are served network-first, while all other files are served cache-first, but in both cases a fresh copy is always put in the cache. The idea is that HTML content will always be fresh (unless there’s a problem with the network), while all other content—images, style sheets, scripts—might be slightly stale, but gets refreshed with every request.

My original attempt was riddled with errors. Jake came to my rescue and we revised the script into something that actually worked. In the process, my misunderstanding of how await works led Jake to write a great blog post on await vs return vs return await.

I got there in the end and the script seems solid enough. It’s a fairly simplistic strategy that could work for quite a few sites, but it has some issues…

Service workers don’t perform any automatic cleanup of caches—that’s up to you to do (usually during the activate event). This script doesn’t do any cleanup so the cache might grow and grow and grow. For that reason, I think the script is best suited for fairly small sites.

The strategy also assumes that a file will either be fetched from the network or the cache. There’s no contingency for when both attempts fail. So there’s no fallback offline page, for example.

I decided to test it in the wild, but I expanded it slightly to fix the fallback issue. The version on the Ampersand 2018 website includes a worst-case-scenario option to show a custom offline page that has been pre-cached. (By the way, if you haven’t got a ticket for Ampersand yet, get a ticket now—it’s going to be a superb day of web typography nerdery.)

Anyway, this fairly basic script seems to be delivering some good performance improvements. If you’ve got a site that you think would benefit from this network/caching strategy, and it’s served over HTTPS, then:

  1. feel free to download the script or copy and paste it into a file called serviceworker,
  2. put that file in the root directory of your website, and
  3. add this in a script element at the bottom of your HTML pages:

  if (navigator.serviceWorker && !navigator.serviceWorker.controller) {
    navigator.serviceWorker.register('/serviceworker');
  }

You can also use the script as a starting point. You might find issues specific to your particular website. That’s okay—you can tweak and adjust the script to suit your needs.

If this minimal service worker script proves in any way useful to you, thank Jake. [...]
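For reference, here’s a minimal sketch of the strategy described above. It’s my own reconstruction from the prose—not the actual script from the post—and the cache name is an arbitrary choice for illustration:

  // Sketch: network-first for HTML, cache-first (with refresh) for everything else.
  const cacheName = 'files'; // arbitrary name for this example

  addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.method !== 'GET') return;

    // Helper: fetch from the network and put a fresh copy in the cache.
    const fetchAndCache = () =>
      fetch(request).then(responseFromFetch => {
        const copy = responseFromFetch.clone();
        caches.open(cacheName).then(cache => cache.put(request, copy));
        return responseFromFetch;
      });

    if ((request.headers.get('Accept') || '').includes('text/html')) {
      // HTML pages: network first, falling back to the cache.
      fetchEvent.respondWith(
        fetchAndCache().catch(() => caches.match(request))
      );
    } else {
      // Everything else: serve from the cache first, but kick off a
      // network fetch either way so the cache gets refreshed.
      fetchEvent.respondWith(
        caches.match(request).then(responseFromCache => {
          const fetchPromise = fetchAndCache();
          if (responseFromCache) {
            fetchPromise.catch(() => {}); // ignore background refresh failures
            return responseFromCache;
          }
          return fetchPromise;
        })
      );
    }
  });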

Just change it

Fri, 02 Mar 2018 19:22:14 GMT

Amber and I often have meta conversations about the nature of learning and teaching. We swap books and share ideas and experiences whenever we’re trying to learn something or trying to teach something. A topic that comes up again and again is the idea of “the curse of knowledge”—it’s the focus of Steven Pinker’s book The Sense Of Style. That’s when the author/teacher can’t remember what it’s like not to know something, which makes for a frustrating reading/learning experience.

This is one of the reasons why I encourage people to blog about stuff as they’re learning it; not when they’ve internalised it. The perspective that comes with being in the moment of figuring something out is invaluable to others. I honestly think that most explanatory books shouldn’t be written by experts—the “curse of knowledge” can become almost insurmountable.

I often think about this when I’m reading through the installation instructions for frameworks, libraries, and other web technologies. I find myself put off by documentation that assumes I’ve got a certain level of pre-existing knowledge. But now instead of letting it get me down, I use it as an opportunity to try and bridge that gap.

The brilliant Safia Abdalla wrote a post a while back called How do I get started contributing to open source? I definitely don’t have the programming chops to contribute much to a codebase, but I thoroughly agree with Safia’s observation:

If you’re interested in contributing to open source to improve your communication and empathy skills, you’re definitely making the right call. A lot of open source tools could definitely benefit from improvements in the documentation, accessibility, and evangelism departments.

What really jumps out at me is when instructions use words like “simply” or “just”. I’m with Brad:

“Just” makes me feel like an idiot. “Just” presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources. “Just” is a dangerous word.

But rather than letting that feeling overwhelm me, I now try to fix the text. Here are a few examples of changes I’ve suggested, usually via pull requests on GitHub repos:

They all have different codebases in different programming languages, but they’re all intended for humans, so having clear and kind documentation is a shared goal.

I like suggesting these kinds of changes. That initial feeling of frustration I get from reading the documentation gets turned into a warm fuzzy feeling from lending a helping hand.

Faraway February

Thu, 01 Mar 2018 10:15:14 GMT

For the shortest month of the year, February managed to pack a lot in. I was away for most of the month. I had the great honour of being asked back to speak at Webstock in New Zealand this year—they even asked me to open the show!

I had no intention of going straight to New Zealand and then turning around to get on the first flight back, so I made sure to stretch the trip out (which also helps to mitigate the inevitable jet lag). Jessica and I went to Hong Kong first, stayed there for a few nights, then went on to Sydney for a while (and caught up with Charlotte while we were out there), before finally making our way to Wellington. Then, after Webstock was all wrapped up, we retraced the same route in reverse. Many flat whites, dumplings, and rays of sunshine later, we arrived back in the UK.

As well as giving the opening keynote at Webstock, I did a full-day workshop, and I also ran a workshop in Hong Kong on the way back. So technically it was a work trip, but I am extremely fortunate that I get to go on adventures like this and still get to call it work.

Offline itineraries with service workers

Wed, 28 Feb 2018 17:55:17 GMT

The Trivago website is a progressive web app. That means it

  1. is served over HTTPS,
  2. has a web app manifest JSON file, and it
  3. has a service worker script.

The service worker provides an opportunity for a nice bit of fun branding—if you lose your internet connection, the site provides a neat little maze game you can play. Cute!

That’s a fairly simple example of how service workers can enhance the user experience when the dreaded offline situation arises. But it strikes me that the travel industry is the perfect place to imagine other opportunities for offline enhancements.

Travel sites often provide itineraries—think airlines, trains, or hotels. The itineraries consist of places, times, and contact information. This is exactly the kind of information that you might find yourself trying to retrieve in an emergency situation, like maybe in a cab on the way to the airport or train station. Perhaps you’re stuck in traffic, in a tunnel. Or maybe you don’t have a data plan for the country you’re currently in. Either way, wouldn’t it be great if you could hit the website for your airline or hotel and get your itinerary, even if you’re offline?

Alright, let’s think this through…

Let’s assume that an individual itinerary has its own URL. That URL is a web page of information, mostly text, with perhaps an image or two (like a map). Now when you make your booking, let’s have the service worker cache that URL (and its assets) for offline access.
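As a sketch of what that could look like (the URL, asset, and cache name are invented for illustration), the booking confirmation page could pre-cache the itinerary with the Cache API:

  // Hypothetical sketch: pre-cache an itinerary page and its assets
  // from the booking confirmation page.
  function cacheItinerary(itineraryURL, assetURLs) {
    return caches.open('itineraries')
      .then(cache => cache.addAll([itineraryURL, ...assetURLs]));
  }

  cacheItinerary('/itineraries/abc123', ['/assets/maps/abc123.png']);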

Hmm …but there’s a good chance that the device you make the booking on is not the same device that you’d have with you out and about. Because caches are local to the browser, that’s a problem.

Okay, but most of these kinds of sites have some kind of log-in mechanism. So we could update the log-in flow a bit: when a user logs in, check to see if they have any itineraries assigned to them, and if they do, fire off an event to the service worker (using postMessage) to cache the URLs of the itineraries.
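That flow might look something like this sketch (the action name, cache name, and URLs are all hypothetical):

  // In the page, after a successful log-in:
  navigator.serviceWorker.ready.then(registration => {
    registration.active.postMessage({
      action: 'cacheItineraries',
      urls: ['/itineraries/abc123', '/itineraries/def456']
    });
  });

  // In the service worker: listen for the message and cache the URLs.
  addEventListener('message', messageEvent => {
    if (messageEvent.data.action === 'cacheItineraries') {
      caches.open('itineraries')
        .then(cache => cache.addAll(messageEvent.data.urls));
    }
  });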

Now that the itineraries are cached, the final step is to create a custom offline page. As well as the usual “Sorry, the internet’s down” message, we can say “Sorry, the internet’s down …but here are your itineraries”. (This is kind of like the pattern you see on blogs like mine, Ethan’s, or Mike’s—a custom offline page that lists cached URLs of articles you’ve previously visited).
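Sketching that out (with a hypothetical /offline URL and list element): the service worker falls back to the pre-cached offline page when both the network and the cache come up empty, and a script on that page lists whatever itineraries are in the cache:

  // In the service worker: fall back to a pre-cached offline page
  // when an HTML page can't be fetched or found in the cache.
  addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if ((request.headers.get('Accept') || '').includes('text/html')) {
      fetchEvent.respondWith(
        fetch(request).catch(() =>
          caches.match(request)
            .then(cached => cached || caches.match('/offline'))
        )
      );
    }
  });

  // In a script on the /offline page: list any cached itineraries.
  caches.open('itineraries')
    .then(cache => cache.keys())
    .then(requests => {
      const list = document.querySelector('ul'); // hypothetical list element
      requests.forEach(request => {
        const item = document.createElement('li');
        const link = document.createElement('a');
        link.href = request.url;
        link.textContent = request.url;
        item.appendChild(link);
        list.appendChild(item);
      });
    });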

That’s just one pattern off the top of my head. It’s fun to imagine the different ways that service workers could be used to enhance the experience of just about any site, but they seem particularly relevant to travel sites—dodgy internet connections and travelling go hand-in-hand. At Clearleft, we’ve been working with quite a few travel-related clients lately so that’s why these scenarios are on my mind: booking holidays, flights, and so on. But, as I’ve said before and I’ll say again, every website can benefit from becoming a progressive web app.

Ends and means

Mon, 26 Feb 2018 18:02:27 GMT

The latest edition of the excellent History Of The Web newsletter is called The Day(s) The Web Fought Back. It recounts the first time that websites stood up against bad legislation in the form of the Communications Decency Act (CDA), and goes on to recount the even more effective use of blackout protests against SOPA and PIPA.

I remember feeling very heartened to see Wikipedia, Google and others take a stand on January 18th, 2012. But I also remember feeling uneasy. In this particular case, companies were lobbying for a cause I agreed with. But what if they were lobbying for a cause I didn’t agree with? Large corporations using their power to influence politics seems like a very bad idea. Isn’t it still a bad idea, even if I happen to agree with the cause?

Cloudflare quite rightly kicked The Daily Stormer off their roster of customers. Then the CEO of Cloudflare quite rightly wrote this in a company-wide memo:

Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

There’s an uncomfortable tension here. When do the ends justify the means? Isn’t the whole point of having principles that they hold true even in the direst circumstances? Why even claim that corporations shouldn’t influence politics if you’re going to make an exception for net neutrality? Why even claim that free speech is sacrosanct if you make an exception for nazi scum?

Those two examples are pretty extreme and I can easily justify the exceptions to myself. Net neutrality is too important. Stopping fascism is too important. But where do I draw the line? At what point does something become “too important?”

There are more subtle examples of corporations wielding their power. Google are constantly using their monopoly position in search and browser marketshare to exert influence over website-builders. In theory, that’s bad. But in practice, I find myself agreeing with specific instances. Prioritising mobile-friendly sites? Sounds good to me. Penalising intrusive ads? Again, that seems okey-dokey to me. But surely that’s not the point. So what if I happen to agree with the ends being pursued? The fact that a company the size and power of Google is using their monopoly for any influence is worrying, regardless of whether I agree with the specific instances. But I kept my mouth shut.

Now I see Google abusing their monopoly again, this time with AMP. They may call the preferential treatment of Google-hosted AMP-formatted pages a “carrot”, but let’s be honest, it’s an abuse of power, plain and simple.

By the way, I have no doubt that the engineers working on AMP have the best of intentions. We are all pursuing the same ends. We all want a faster web. But we disagree on the means. If Google search results gave preferential treatment to any fast web pages, that would be fine. But by only giving preferential treatment to pages written in a format that they created, and hosted on their own servers, they are effectively forcing everyone to use AMP. I know for a fact that there are plenty of publications who are producing AMP content, not because they are sold on the benefits of the technology, but because they feel strong-armed into doing it in order to compete.

If the ends justify the means, then it’s easy to write off Google’s abuse of power. Those well-intentioned AMP engineers honestly think that they have the best interests of the web at heart:

We were worried about the web not existing anymore due to native apps and walled gardens killing it off. We wanted to make the web competitive. We saw a sense of urgency and thus we decided to build on the extensible web to build AMP instead of waiting for standards and browsers and websites to catch up. I stand behind this process. I’m a practical guy.

There’s real hubris and audacity in thinking that one company should be able to tackl[...]

Global Diversity CFP Day—Brighton edition

Thu, 01 Feb 2018 16:42:21 GMT

There are enough middle-aged straight white men like me speaking at conferences. That’s why the Global Diversity Call-For-Proposals Day is happening this Saturday, February 3rd.

The purpose is two-fold. One is to encourage a diverse range of people to submit talk proposals to conferences. The other is to help with the specifics—coming up with ideas, writing a good title and abstract, preparing the presentation, and all that.

Julie is organising the Brighton edition. Clearleft are providing the venue—68 Middle Street. I’ll be on hand to facilitate. Rosa and Dot will be doing the real work, mentoring the attendees.

If you’ve ever thought about submitting a talk proposal to a conference but just don’t know where to start, or if you’re just interested in the idea, please do come along on Saturday. It starts at 11am and will be all wrapped up by 3pm.

See you there!

GDPR and Google Analytics

Mon, 29 Jan 2018 12:55:35 GMT

Enforcement of the European Union’s General Data Protection Regulation is coming very, very soon. Look busy. This regulation is not limited to companies based in the EU—it applies to any service anywhere in the world that can be used by citizens of the EU. It’s less about data protection and more like a user’s bill of rights. That’s good.

Cennydd has written a techie’s rough guide to GDPR. The Open Data Institute’s Jeni Tennison wrote down her thoughts on how it could change data portability in particular. While she welcomes GDPR, she has some misgivings. Blaine—who really needs to get a blog—shared his concerns in the form of the online equivalent of interpretive dance …a Twitter thread (it’s called a thread because it inevitably gets all tangled, and it’s easy to break):

It’s increasingly looking like GDPR is a massive scaled-up version of the idiotic and horrifically mis-managed “cookie law”. — Blaine Cook (@blaine) January 28, 2018

The interesting thing about the so-called “cookie law” is that it makes no mention of cookies whatsoever. It doesn’t list any specific technology. Instead it states that any means of tracking or identifying users across websites requires disclosure. So if you’re setting a cookie just to manage state—so that users can log in, or keep items in a shopping basket—the legislation doesn’t apply. But as soon as your site allows a third party to set a cookie, it’s banner time.

Google Analytics is a classic example of a third-party service that uses cookies to track people across domains. That’s pretty much why it exists. We, as site owners, get to use this incredibly powerful tool, and all we have to do in return is add one little snippet of JavaScript to our pages. In doing so, we’re allowing a third party to read or write a cookie from their domain.

Before Google Analytics, Google—the search engine business—was able to identify and track what users were searching for, and which search results they clicked on. But as soon as the user left, the trail went cold. By creating an enormously useful analytics product that only required site owners to add a single line of JavaScript, Google—the online advertising business—gained the ability to keep track of users across most of the web, whether they were on a site owned by Google or not.

Under the old “cookie law”, using a third-party cookie-setting service like that meant you had to inform any of your users who were citizens of the EU. With GDPR, that changes. Now you have to get consent. A dismissible little overlay isn’t going to cut it any more. Implied consent isn’t enough.

Now this situation raises an interesting question. Who’s responsible for getting consent? Is it the site owner or the third party whose script is the conduit for the tracking?

In the first scenario, you’d need to wait for an explicit agreement from a visitor to your site before triggering the Google Analytics functionality. Suddenly it’s not as simple as adding a single line of JavaScript to your site.

In the second scenario, you don’t do anything differently than before—you just add that single line of JavaScript. But now that script would need to launch the interface for getting consent before doing any tracking. Google Analytics would go from being something invisible to something that directly impacts the user experience of your site.

I’m just using Google Analytics as an example here because it’s so widespread. This also applies to third-party sharing buttons—Twitter, Facebook, etc.—and of course, advertising. In the case of advertising, it gets even thornier because quite often, the site owner has no idea which third party is about to do the tracking. Many, many sites use intermediary services (y’know, ‘cause bloate[...]
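As a rough illustration of that first scenario (a sketch only—getConsent and the property ID are placeholders, and this is not a compliance recommendation), the analytics snippet would be gated behind an explicit opt-in:

  // Hypothetical sketch: only load Google Analytics after explicit consent.
  // getConsent() stands in for whatever consent interface the site provides.
  function loadAnalytics() {
    window.ga = window.ga || function () {
      (ga.q = ga.q || []).push(arguments);
    };
    ga.l = +new Date();
    ga('create', 'UA-XXXXXXXX-X', 'auto'); // placeholder property ID
    ga('send', 'pageview');
    const script = document.createElement('script');
    script.async = true;
    script.src = 'https://www.google-analytics.com/analytics.js';
    document.head.appendChild(script);
  }

  getConsent().then(granted => { // hypothetical: resolves true only on explicit opt-in
    if (granted) {
      loadAnalytics();
    }
  });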