

Avoiding Incremental Rendering in Hybrid Desktop Apps


I previously described how adding native popovers and modal dialogs to Quip’s hybrid desktop apps helped them to blend in and avoid the “uncanny valley” of web-based apps that don’t quite feel right. Another area that we focused on was the experience of creating a new window, especially during application startup. This is the first impression for a user, and thus informs how they will perceive the rest of the app. In theory launching should be fast, since not much happens when a new window is created:

  1. An NSWindow is instantiated.
  2. A WebView is added to the window.
  3. The web view is directed to load the HTML file from the app’s bundle. That file serves as the “shell” for the HTML, JavaScript and CSS that are used for the UI.
  4. Once a signal is received from the JS that it has been initialized, the native side instructs it to render the desktop (or other initial object, when restoring the previous application state).

Here’s a short screen recording showing the application launch sequence for a small account: End-to-end it doesn’t feel too slow, but there’s a lot of flashing and incremental rendering of the UI, which definitely feels “webby” as opposed to native. To make it a bit easier to understand what is going on, here’s a version that has been slowed down¹ by a factor of 3. The visual progression can be broken down into 4 stages:

  1. Blank window appears
  2. Basic app “chrome” appears (without any user data)
  3. Data and images begin to appear
  4. Rendering is complete

In a native app, none of the intermediate stages are visible, which gives window creation much more of a “snappy” feeling. Incremental rendering is normally desirable on the web when waiting on resources that require a network round-trip. In this case the overall time is short enough that intermediate states which only appear for a few frames (such as the one shown below) are distracting rather than giving a sense of progress.

To avoid showing an entirely blank window (stage 1), instead of showing the window immediately, we modified the new window sequence to keep the window hidden until the ReactDOM.render() call for the application’s root view finished. This does mean that there is a slightly longer delay between the app being launched and the window appearing. However, since other things are happening (dock icon bouncing, menu bar changing), and the delay is on the order of one to two hundred milliseconds, it's not noticeable.

The initial rendered view was very empty since it didn’t have any of the data for the user’s desktop or inbox. On the web we “seed” this data in the initial HTML response to avoid the extra network round-trip to fetch it. We had assumed that loading data from the local database was so fast that such an optimization was unnecessary, and that it could instead be loaded on demand like everything else. It was indeed quite fast, but it still took tens of milliseconds, which led to frames that appeared incomplete. Once we added the same “seeding” capability to the desktop app (the request to render the initial object also included all the data necessary for the view), almost everything appeared at once, skipping over most of stage 3.

The reason why I said almost everything appeared at once is that some images were still taking a few extra frames to show up. The odd part was that these images were local static assets, and thus should have been readily available (e.g. the folder and sidebar icons in the screenshot above).
Further, we inline the images as data: URIs into our main stylesheet (an optimization originally meant for the website, but carried over to the desktop app since it didn’t seem like it would hurt). Thus loading of the images should not involve any more I/O once the stylesheet was loaded and parsed. Evidently that was not the case — even when using data: URIs there was an asynchronous “fetch” and decompress step for rendering the image. When experimenting with creating additional windows, we noticed that in those cases the images do appear instantaneously. We th[...]
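As a reference for the window-hiding change described above, here is a minimal sketch of the JS side of that handshake; the RootView component and the window.nativeBridge.notifyInitialRenderDone() hook are hypothetical stand-ins, not Quip's actual bridge API:

    // Signal the native side once the initial render is done so it can show
    // the (until now hidden) NSWindow. RootView and nativeBridge are
    // placeholder names for this sketch.
    function renderInitialObject(rootViewProps) {
        ReactDOM.render(
            React.createElement(RootView, rootViewProps),
            document.getElementById("root"),
            function() {
                // ReactDOM.render's callback fires after the component tree
                // has been mounted, so the window can now be made visible
                // without exposing a blank or partially rendered UI.
                window.nativeBridge.notifyInitialRenderDone();
            });
    }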

Grafting Local Static Resources onto Production


tl;dr: Using the webRequest Chrome extension API it is possible to “graft” development/localhost JavaScript and CSS assets on a production web service, thus allowing rapid debugging iteration against real production data sets. Demo site and extension.

During the summer of 2015 I was investigating an annoying bug in Quip where our message list would not stay “bottom-anchored” in some circumstances¹. Unfortunately I was only able to trigger it on our live production site, not on my local development setup. Though Chrome’s developer tools are quite nice, I did not have the necessary ability to rapidly iterate on the code in order to further investigate the bug. I had in the past pushed alternate builds to our staging site to debug such production-only issues, but that would still take several minutes to see the results of every change.

My next thought was that I could instead try to reproduce the bug in our soon-to-be-released desktop app. The app can use local (minimally processed) JavaScript while running against production data. Unfortunately the bug did not manifest itself in our Mac app. I chalked this up to rendering engine differences (the bug was only visible in Chrome, and our Mac app uses a WebKit-backed WebView). I then tried our Windows app (which uses the same rendering engine as Chrome via the Chromium Embedded Framework), but it didn’t happen there either. I was forced to conclude that the bug was due to some specific behavior in our website when running against production data, not something in the shared React-based UI.

As I was wishing for a way to use JavaScript and CSS from my laptop with production data (for security reasons my local Quip server cannot connect to the production databases) I remembered that Gmail used to have exactly such a mode. As I recall it, you could start a local CaribouGmail server, go to your (work) Gmail instance and append a special URL parameter that would cause the JavaScript from the local server to be requested instead². With most of Gmail’s behavior being driven by the client-side JavaScript (with the server serving as an API endpoint) this meant that it was possible to try out pretty complex changes on your own data without having to “deploy” them.

I considered adding this mode to Quip, but that seemed scary, security-wise, since it was effectively intentional cross-site scripting. It also would have meant waiting for the next day’s production push (and I wanted to solve the problem as soon as possible). However, it then occurred to me that I didn’t actually need to have the server change its behavior; I could instead write a Chrome extension which (via the webRequest API) would “graft” the local JavaScript and CSS files from my local server onto the production site when loaded in my browser. I had hoped that the extension could modify the HTML that is initially served and replace the JavaScript and CSS URLs, but it turns out the webRequest API cannot modify the HTTP response body. What did work was to intercept the JavaScript and CSS requests before they were sent to our CDN and redirect them to paths on my local server. Chrome would initially flag this as being insecure (since we use HTTPS in production, and the redirected URLs were over plain HTTP), but it is possible to convince it to load the resources anyway.
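A minimal sketch of that kind of redirect, assuming a hypothetical CDN host and local server port (a real extension also needs the webRequest and webRequestBlocking permissions, plus host permissions, in its manifest):

    // background.js: redirect production script/stylesheet requests to a
    // local development server. cdn.example.com and localhost:8080 are
    // placeholders, not Quip's actual hosts.
    chrome.webRequest.onBeforeRequest.addListener(
        function(details) {
            var path = new URL(details.url).pathname;
            // Serve the same path from the local server instead of the CDN.
            return {redirectUrl: "http://localhost:8080" + path};
        },
        {
            urls: ["*://cdn.example.com/*"],
            types: ["script", "stylesheet"]
        },
        ["blocking"]);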
Once I had the necessary tooling and ability to iterate quickly, fixing the bug that prompted all this was pretty straightforward (it was caused by the “mount point” system that we used to incrementally migrate our website to React, but that’s a whole other blog post). Since then it’s come in handy in debugging other hard-to-recreate problems, and for measuring JavaScript performance against more realistic data. It did briefly break when we added a Content Security Policy (CSP) — since we were loading scripts from an unknown domain the browser was correctly blocking the “grafted” response. However, the webRequest API also allows the extension to edit the response headers, thus it was st[...]
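A sketch of loosening the Content-Security-Policy response header so that the grafted localhost resources remain allowed; the host pattern and localhost origin are placeholders, and a real version would append the origin to the specific script-src/style-src directives rather than to the whole policy string:

    // Relax the CSP header so resources from the local server are allowed.
    // www.example.com and localhost:8080 are placeholders.
    chrome.webRequest.onHeadersReceived.addListener(
        function(details) {
            var headers = details.responseHeaders.map(function(header) {
                if (header.name.toLowerCase() === "content-security-policy") {
                    // Simplification: a real version would modify the
                    // script-src/style-src directives individually.
                    return {name: header.name,
                            value: header.value + " http://localhost:8080"};
                }
                return header;
            });
            return {responseHeaders: headers};
        },
        {urls: ["*://www.example.com/*"], types: ["main_frame"]},
        ["blocking", "responseHeaders"]);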

Multiple Windows in Hybrid React Desktop Apps


Quip’s desktop apps are hybrid apps: both the Windows and Mac apps are composed of a React-based web UI talking to our C++ Syncer library, along with some platform-specific glue code. While this allows us to support two additional platforms with a small team, we knew that architecting the apps in this way would run the risk of not “fitting in” with other Mac OS X or Windows applications. To see if we could mitigate this problem, we started to enumerate what makes an app feel like “just a web view.” There were obvious things like responsiveness, working offline, and using platform conventions (e.g. a menu bar on the Mac). After thinking about it more, we also observed that native apps often have multiple windows, whereas web apps are bound to a single browser tab. This wasn’t just a matter of multiple windows showing Quip documents (something that did not seem too difficult to accomplish), but also all of the child windows that a native app would have (especially popovers and sheets on Mac OS X). We did have equivalents in our React “parts” library that we used in our website, but visually they didn’t match the native versions, and they felt very much bound by the enclosing rectangle of the web view. This was especially apparent with popovers that were triggered near the edge of the web view; one would expect them to “spill out” of the window but instead they would awkwardly position themselves “inward” to avoid getting clipped by the edges. This triggered an “uncanny valley” effect where the illusion of a native app would be broken.

All of this was very much on my mind about a year ago as I was building up the foundations of our desktop apps. Though seemingly just a “polish” kind of detail, it seemed like supporting child windows would require some design trade-offs that would be easier to make early on, rather than retrofit later. My first thought was that we should define all of the popover and dialog content in a very declarative manner, thus allowing us to later change how they were rendered. For modal dialogs this worked pretty well, since their content tended to be pretty simple. For example, a deletion confirmation dialog would be represented as:

    parts.spec.newPanel()
        .addSection(parts.spec.newTitleSection(_("Delete Chat Room")))
        .addSection(parts.spec.newSection()
            .addMessage(_("If you delete this chat room, you will…")))
        .addSection(parts.spec.newSection()
            .addDefaultButton(_("Hide"), function() { ... })
            .addButton(_("Delete for all %(count)s people", {"count": count}),
                function() { ... })
            .addCancelButton(_("Cancel"), function() {}))

It was then a pretty straightforward mapping to generate an NSAlert instance that would fit right in on a Mac app (we support multiple modal dialog “runners”: one that creates a React version and another that passes it off to the platform native code). However, once we began to implement more and more popovers, the sheer variety of the kinds of controls used within them became apparent. In addition to all these widgets, popovers also often had custom validation logic (e.g. the “Create” button in the “New Folder” popovers is grayed out while the title is empty) and dynamic updates (rows representing users show presence). A declarative system would need to be quite complex in order to support all of these concepts.
Furthermore, implementing each control multiple times was going to slow down development significantly, and the benefits seemed minimal (for example there are no platform conventions to follow for how to show a user row — that is a Quip-specific concept). Since we already had a separation between the content of popovers and their frames (indeed, any React-based popover could be shown in a modal dialog instead), I wondered if it was possible to show the contents in a separate web view contained within an NSPopover instead. That way the frame would be native, and it coul[...]
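As an illustration of the dialog “runner” split mentioned above, here is a minimal sketch; the function and bridge names (showModalDialog, nativeBridge.showNativeDialog, parts.showReactDialog, panelSpec.serialize) are invented for this example rather than Quip's actual API:

    // The same declarative panel spec can be rendered with the React parts
    // library or handed to the native layer, which builds an NSAlert from it.
    // All names below are illustrative placeholders.
    function showModalDialog(panelSpec) {
        if (window.nativeBridge && window.nativeBridge.supportsNativeDialogs) {
            // Serialize the title, sections, buttons and callback IDs and let
            // the platform glue code construct the native dialog.
            window.nativeBridge.showNativeDialog(panelSpec.serialize());
        } else {
            // Fall back to the React-rendered dialog used on the website.
            parts.showReactDialog(panelSpec);
        }
    }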

WKWebView Communication Latency Revisited


Earlier this year I posted about WKWebView communication latency and how it’s not quite as good as UIWebView (when using WKScriptMessageHandler, the officially sanctioned mechanism). There have been a few developments in this area, so an update seems warranted.

Things appear to be promising for iOS 9: In late June Mark Lam fixed ScriptMessageHandlerDelegate::didPostMessage to reuse JSContext instances across calls. Now that I had an Apple engineer to appeal to, I filed a bug asking for the same fix to be applied to [WKWebView evaluateJavaScript:completionHandler:], to help with execution round trips. Mark kindly obliged, and the fix was picked up (merged?) for iOS 9 beta 4.

Earlier, in April, Ted Suzman emailed me to ask if I’d considered modifying document.title to send data from the JS side (since it’s mirrored on the native side as the title property on the WKWebView, which can be observed via KVO). I implemented this approach and it seemed to work quite well. Ted also suggested trying location.replace (instead of setting location.hash), which (though it should be equivalent) ends up being slightly faster for both UIWebView and WKWebView (implementation).

Ted’s messages got me thinking about other WKWebView properties that could be manipulated from the JS side, and so I took a closer look at the delegate protocols. The webView:runJavaScriptAlertPanelWithMessage:initiatedByFrame:completionHandler: method on WKUIDelegate caught my eye. We don’t use window.alert() in Quip, but this seemed like it would provide a way of getting a string from JS to native with minimal overhead (as previously mentioned, Quip encodes all JS ↔ native communications as strings already, so we don’t want the web view to do any other serialization for us). Better yet, webView:runJavaScriptTextInputPanelWithPrompt:defaultText:initiatedByFrame:completionHandler: (which maps to window.prompt()) allows the native side to return a value to the JS side. I implemented these two approaches too.

Here are the results (in milliseconds) from testing the various communication mechanisms using my test bed. Tests were run on iPad Air 2’s, one running iOS 8.4 and another running iOS 9.0 beta 5.

    Method / OS                     iOS 8.4    iOS 9.0 beta 5
    UIWebView
      location.hash                 0.26       0.28
      location.replace              0.18       0.18
    WKWebView
      WKScriptMessageHandler        2.94       0.63
      location.hash                 0.69       0.58
      location.replace              0.46       0.51
      document.title                0.57       0.63
      window.alert()                0.42       0.46
      window.prompt()               0.37       0.45
    JS execution round-trip
      UIWebView                     0.17       0.16
      WKWebView                     2.60       0.39

Quip is still using UIWebView since we’re still supporting iOS 7 (and supporting both web views did not seem like it would be worth the complexity). However, once iOS 9 is released we will most likely drop iOS 7 support, so it’s good to know that switching to WKWebView will not pose an unreasonable latency burden (though it remains to be seen if the selective swizzling and subview spelunking that we do will carry over).[...]
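For reference, a minimal sketch of the JS side of the two property-mirroring mechanisms described above; the encoding is a placeholder rather than Quip's actual string protocol:

    // document.title is mirrored on the native side as WKWebView's title
    // property, which can be observed via KVO.
    function sendViaTitle(message) {
        document.title = encodeURIComponent(message);
    }

    // location.replace triggers the navigation delegate without adding a
    // history entry, and measured slightly faster than setting location.hash.
    function sendViaLocationReplace(message) {
        location.replace("#" + encodeURIComponent(message));
    }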

Teaching the Closure Compiler About React


tl;dr: react-closure-compiler is a project that contains a custom Closure Compiler pass that understands React concepts like components, elements and mixins. It allows you to get type-aware checks within your components and compile React itself alongside your code with full minification.

Late last year, Quip started a gradual migration to React for our web UI (incidentally the chat features that were launched recently represent the first major functionality to be done entirely using React). When I started my research into the feasibility of using React, one of my questions was “Does it work with the Closure Compiler?” At Quip we rely heavily on it not just for minification, but also for type annotations to make refactorings less scary and code more self-documenting¹, and for its many warnings to prevent other gotchas in JavaScript development. The tidbits that I found were encouraging, though a bit sparse:

  1. An externs file with type declarations for most of React's API²
  2. A Quora post by Pete Hunt (a React core contributor) describing React as “closure compiler compatible”
  3. React's documentation about refs, which mentions making sure to quote refs annotated via string attributes³

In general I got the impression that it was certainly possible to use React with the Closure Compiler, but that not a lot of people were, and thus I would be off the beaten path⁴.

My first attempt was to add React (the unminified version) as a source input along with a simple “hello world” component⁵. The rationale behind doing it this way was that, if React was to be a core library, it should be included in the main JavaScript bundle that we serve to our users, instead of being a separate file. It also wouldn't need an externs file, since the compiler was aware of it. Finally, since it was going to be minified with the rest of our code, I could use the non-minified version as the input, and get better error messages. I was then greeted by hundreds of errors and warnings which broadly fell into three categories:

  1. “illegal use of unknown JSDoc tag providesModule” and similar warnings about JSDoc tags that the React source uses that the Closure Compiler didn't understand
  2. “variable React is undeclared”, indicating that the Closure Compiler did not realize what symbols React exported, most likely because the module wrapper that it uses is a bit convoluted, and thus it's not obvious that the exported symbols are in the global scope
  3. “dangerous use of the global this object” within my component methods, since the Closure Compiler did not realize that the functions within the spec passed to React.createClass were going to be run as methods on the component instance

Since I was still in a prototyping stage with React, I looked into the most minimal set of changes I could do to deal with these issues. For 2, adding the externs file to our list helped, since the compiler now knew that there was a React symbol and its many properties. This did seem somewhat wrong, since the React source was not actually external, and it was in fact safe to (globally) rename createClass and other methods, but it did quieten those errors. For 1 and 3 I wrote a small custom warnings guard that ignored all “errors” in the React source itself and the “dangerous use of global this” warning in .jsx files. Once I did all that, the code compiled, and appeared to run fine with all the other warnings and optimizations that we had. However, a few days later, as I was working on a more complex component, I ran into another error.
Given:

    var Comp = React.createClass({
        render: function() {...},
        someComponentMethod: function() {...}
    });
    var compInstance = React.render(React.createElement(Comp), ...);
    compInstance.someComponentMethod();

I was told that someComponentMethod was not a known property on compInstance (which was of type React.ReactComponent — per the externs file). This once again boiled down to the compiler not understand[...]
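As an illustration of what the custom pass is meant to enable, here is a sketch of the kind of annotated component it aims to support; the component and method names are just examples:

    var Comp = React.createClass({
        render: function() {
            return React.createElement("div", null, "Hello");
        },
        /**
         * @param {number} count
         * @return {string}
         */
        someComponentMethod: function(count) {
            // With the custom pass, the compiler knows this function runs as
            // a method on Comp instances, so `this` is not flagged.
            return this.props.label + ": " + count;
        }
    });

    var compInstance = React.render(
        React.createElement(Comp), document.getElementById("container"));
    // The pass types compInstance as an instance of Comp (not the generic
    // React.ReactComponent type from the externs), so this call and its
    // argument types can be checked.
    compInstance.someComponentMethod(42);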

WKWebView Communication Latency


One of the many exciting things about the modern WebKit API is that it has an officially sanctioned communication mechanism for doing JavaScript-to-native communication. Hacks should no longer be necessary to get data back out of the WKWebView. It is a simple matter of registering a WKScriptMessageHandler on the native side and then invoking it with window.webkit.messageHandlers.<name>.postMessage on the JS side.

After a heads-up from Jordan that messaging latency with the new web view was not that great, I extended the test bed I had developed to measure UIWebView communication mechanisms to also measure this setup. I got results that mirrored Jordan's experiences — on an iPad mini 2 (A7), UIWebView's best officially supported communication mechanism (changing location.hash¹) took 0.44ms, while the same round trip on a WKWebView took 3.63ms. I was expecting somewhat higher latency, since there is a cross-process IPC involved now, but not this much higher. Curious as to what was going on, I pointed a profiler at the test harness, and got the following results:

    Running Time       Symbol Name
    4147.0ms   94.1%   Main Thread
                         10 stack frames elided
    2165.0ms   49.1%     IPC::Connection::dispatchOneMessage()
    2164.0ms   49.1%       IPC::Connection::dispatchMessage(…)
    2161.0ms   49.0%         WebKit::WebProcessProxy::didReceiveMessage(…)
    2160.0ms   49.0%           IPC::MessageReceiverMap::dispatchMessage(…)
    2137.0ms   48.4%             void IPC::handleMessage<…>
    2137.0ms   48.4%               WebKit::WebUserContentControllerProxy::didPostMessage(…)
    2132.0ms   48.3%                 ScriptMessageHandlerDelegate::didPostMessage(…)
    1274.0ms   28.9%                   -[JSContext initWithVirtualMachine:]
     769.0ms   17.4%                   -[JSContext init]
      30.0ms    0.6%                   -[BenchmarkViewController userContentController:didReceiveScriptMessage:]
      16.0ms    0.3%                   -[JSValue toObject]

It looks like nearly half the time is spent in JSContext² initializers. As previously mentioned, the WKWebView API is open source, so it's actually possible to read the source and see what's going on. If we look at the ScriptMessageHandlerDelegate::didPostMessage source we can see that indeed for every postMessage() call on the JS side a new JSContext is initialized to own the deserialized value. Even when not doing any postMessage calls on the JS side, and just measuring WKWebView's evaluateJavaScript:completionHandler: latency, there was still lots of time spent in JSContext initialization. This turned out to be because the completion handler also results in a JSContext being created in order to own the deserialized value. Thankfully this only happens if a completion handler is specified and there is a result; in other cases the function exits early.

After thinking about this some more, it occurred to me that the old location.hash mechanism could still work with a WKWebView. By adding a WKNavigationDelegate to the web view, it's possible to observe location changes on the native side. I implemented this approach and was pleasantly surprised to see that it took 0.95ms. This was almost 4x faster than the officially sanctioned mechanism (albeit still about twice as slow as the equivalent on a UIWebView, but that is presumably explained by the IPC overhead). I then wondered if using location.hash to communicate in both directions (changing the location on the native side, listening for hashchange events on the JS side) would be even better (to bypass more of the JS execution machinery), but that approach ended up being slower (for both UIWebView and WKWebView) since it involved more delegate invocations.
Putting all this together, here are the results (in milliseconds, averaged over 100 round trips) using these mechanisms on various devices (all running iOS 8.1) using my test be[...]
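A minimal sketch of the JS side of that location.hash fallback; the “qcmd:” prefix is a placeholder rather than Quip's actual message encoding:

    // The native side watches for the navigation via a WKNavigationDelegate
    // (or UIWebViewDelegate) callback and decodes the fragment.
    var messageCounter = 0;

    function postToNative(message) {
        // The counter ensures repeated identical messages still change the
        // hash and therefore still trigger a delegate callback.
        location.hash = "qcmd:" + (messageCounter++) + ":" +
            encodeURIComponent(message);
    }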

RetroGit

tl;dr: RetroGit is a simple tool that sends you a daily (or weekly) digest of your GitHub commits from years past. Use it as a nostalgia trip or to remind you of TODOs that you never quite got around to cleaning up. Think of it as Timehop for your codebase.

It's now been a bit more than two years since I joined Quip. I recall a sense of liberation the first few months as we were working in a very small, very new codebase. Compared with the much older and larger projects at Google, experimentation was expected, technical debt was non-existent, and in any case it seemed quite likely that almost everything would be rewritten before any real users saw it¹. It was also possible to skim every commit and generally have a sense that you could keep the whole project in your head. As time passed, more and more code was written, prototypes were replaced with “productionized” systems and whole new areas that I was less familiar with (e.g. Android) were added. After about a year, I started to have the experience, familiar to any developer working on a large codebase for a while, of running blame on a file and being surprised by seeing my own name next to foreign-looking lines of code.

Generally, it seemed like the codebase was still manageable when working in a single area. Problems with keeping it all in my head appeared when doing context switches: working on tables for a month, switching to annotations for a couple of months, and then trying to get back into tables. By that point tables had been “swapped out” and it all felt a bit alien. Extrapolating from that, it seemed like coming back to a module a year later would effectively mean starting from scratch. I wondered if I could build a tool to help me keep more of the codebase “paged in”. I've been a fan of Timehop for a while, back to the days when they were known as 4SquareAnd7YearsAgo. Besides the nostalgia factor, it did seem like periodic reminders of places I've gone to helped to keep those memories fresher. Since Quip uses GitHub for our codebase (and I had also migrated all my projects there a couple of years ago), it seemed like it would be possible to build a Timehop-like service for my past commits via their API. I had also wanted to try building something with Go², and this seemed like a good fit. Between go-github and goauth2, the “boring” bits would be taken care of. App Engine's Go runtime also made it easy to deploy my code, and it didn't seem like this would be a very resource-intensive app (famous last words).

I started experimenting over Fourth of July weekend, and by working on it for a few hours a week I had it emailing me my daily digests by the end of the month. At this point I ran into what Akshay described as the “eh, it works well enough” trough, where it was harder to find the motivation to clean up the site so that others could use it too. But eventually it did reach a “1.0” state, including a name change, ending up with RetroGit. The code ended up being quite straightforward, though I'm sure I have quite a ways to go before writing idiomatic Go. The site employs a design similar to Tweet Digest, where it doesn't store any data beyond an OAuth token, and instead just makes the necessary API calls on the fly to get the commits from years past. The GitHub API behaved as advertised — the only tricky bit was how to handle my aforementioned migrated repositories. Their creation dates were 2011-2012, but they had commits going back much further.
I didn't want to “probe” the interval going back indefinitely, just in case there were commits from that year — in theory someone could import some very old repositories into GitHub³. I ended up using the statistics endpoint to determine when the first commit for a user was in a repository, and persisting that as a “vintage” timestamp. I'm not entirely happy with the visual design — I[...]

HTML Munging My Way To a React Conf Ticket


Like many others, I was excited to see that Facebook is putting on a conference for the React community. Tickets were being released in three waves, and so for the last three Fridays I have been trying to get one. The first Friday I did not even manage to see an order form. The next week I got as far as choosing a quantity, before being told that tickets were sold out when pushing the “Submit” button. Today was going to be my last chance, so I enlisted some coworkers to the cause — if any of them managed to get an order form the plan was that I would come by their desk and fill it out with my details. At 12 o'clock I struck out, but Bret and Casey both managed to run the gauntlet and make it to the order screen. However, once I tried to submit I was greeted with an error. Based on Twitter, I was not the only one.

Looking at the Dev Tools console showed that a bunch of URLs were failing to load from CloudFront, with pathnames like custom-tickets and custom-tickets.css. I assumed that some supporting resources were missing, hence the form was not entirely populated¹. After checking that those URLs didn't load while tethered to my phone (in case our office network was banned for DDoS-like behavior), I decided to spelunk through the code and see if I could inject the missing form fields by hand. I found some promising-looking JavaScript of the form:

    submitPaymentForm({
        number: $('.card-number').val(),
        cvc: $('.card-cvc').val(),
        exp_month: $('.card-expiry-month').val(),
        exp_year: $('.card-expiry-year').val(),
        name: $('.cardholder-name').val(),
        address_zip: $('.card-zipcode').val()
    });

I therefore injected some DOM nodes with the appropriate class names and values and tried resubmitting. Unfortunately, I got the same error message. When I looked at the submitPaymentForm implementation, I could see that the input parameter was not actually used:

    function submitPaymentForm(fields) {
        var $form = $("#billing-info-form");
        warnLeave = false;
        $form.get(0).submit();
    }

I looked at the form fields that had loaded, and they had complex names like order[TicketOrder][email]. It seemed like it would be difficult to guess the names of the missing ones (I checked the network request and they were not being submitted at all). I then had the idea of finding another Splash event order form, and seeing if I could get the valid form fields from there. I eventually ended up on the ticket page for a music video release party that had a working credit card form. Excited, I copied the form fields into the React order page that I still had up, filled them out, and pressed “Submit”. There was a small bump where it thought that the expiration date field was required and not provided, but I bypassed that client-side check and got a promising-looking spinner that indicated that the order was being processed. I was all set to be done, when the confirmation page finally loaded, and instead of being for the React conference, it was for the video release party. Evidently starting the order in a second tab had clobbered some kind of cookie or other client-side state from the React tab.

I had thus burned the order page on Bret's computer. However, Casey had been dutifully keeping the session on his computer alive, so I switched to that one. There I went through the same process, but opened the “donor” order page in a Chrome incognito window.
With bated breath I pressed the order button, and was relieved to see a confirmation page for React, and to see this email arrive shortly after. Possibly not quite as epic as my last attempt at working around broken service providers, but it still made for an exciting excuse to be late for lunch. And if anyone wants to go to a music video release party tonight in Brooklyn, I have a ticket². I now see that other Splash order forms have the same network error, so it may have been [...]

Two Hard Things


Inspired by Phil Karlton's (possibly apocryphal) Two Hard Things, here's what I struggle with on a regular basis:

  1. Getting information X from system A to system B without violating the N layers of abstraction in between.
  2. Naming X such that the name is concise, unique (i.e. greppable) and has the right connotations to someone new to the codebase (or myself, a year later).

Gmail's HTML Tag Whitelist


I couldn't find a comprehensive list of the HTML tags that Gmail's sanitizer allows through, so I wrote one up.


The Modern WebKit API is Open Source


When doing web development (or really any development on a complex enough platform), sometimes the best course of action for understanding puzzling behavior is to read the source. It's therefore been fortunate that Chrome/Blink, WebKit (though not Safari) and Firefox/Gecko are all open source (and often the source is easily searchable too). One of the exceptions has been mobile WebKit when accessed via a UIWebView on iOS. Though it's based on the same components (WebCore, JavaScriptCore, WebKit API layer) as its desktop counterpart, there is enough mobile-specific behavior (e.g. interaction with auto-complete and the on-screen keyboard) that source access would come in handy. Apple would periodically do code dumps, but those only included the WebCore and JavaScriptCore components¹ and in any case there hasn't been one since iOS 6.1.

At WWDC, as part of iOS 8, Apple announced a modern WebKit API that would be unified between the Mac and iOS. Much of the (positive) reaction has been about the new API giving third-party apps access to faster, JITed, JavaScript execution. However, just as important to me is the fact that the implementation of the new API is open source. Besides browsing around the source tree, it's also possible to track its development more closely, via an RSS feed of commits. However, there are no guarantees that just because something is available in the trunk repository it will also be available on the (presumed) branch that iOS 8 is being worked on. For example, [WKWebView evaluateJavaScript:completionHandler:] was added on June 10, but it didn't show up in iOS 8 until beta 3, released on July 7 (beta 2 was released on June 17). More recent changes, such as the ability to control selection granularity (added on June 26), have yet to show up. There don't seem to be any (header) changes² that live purely in the iOS 8 SDK, so I'm inclined to believe that (at least at this stage) there's not much on-branch development, which is encouraging. Many thanks to Anders, Benjamin, Mitz and all the other Apple engineers for doing all of this in the open.

Update on July 21, 2014: The selection granularity API has shown up in beta 4, which was released today.

  1. IANAL, but my understanding is that WebCore and JavaScriptCore are LGPL-licensed (due to their KHTML heritage) and so modifications in shipping software have to be distributed as source, while WebKit is BSD-licensed, and therefore doesn't have that requirement.
  2. Modulo some munging done as part of the release process.

Using ASan with iOS Applications


I've written up a quick guide for getting ASan (Address Sanitizer) working with iOS apps. This is the kind of thing I would have put directly into this blog in the past, but:

  1. Blogger's editor is not pleasant to use — I usually end up editing the HTML directly, especially for posts with code blocks. Not that Quip doesn't have bugs, but at least they're our bugs.
  2. Quip has public sharing now, so in theory that doc should be just as accessible (and indexable) as a regular post.

However, I still like the idea of this blog being a centralized repository of everything that I've written, hence this "stub" post.


Adding Keyboard Shortcuts For Inspecting iOS Apps and Web Pages in Safari


Back in iOS 6 Apple added the ability to remotely inspect pages in mobile Safari and UIWebViews. While I'm very grateful for that capability, the fact that it's buried in a submenu in Safari's “Develop” menu means that I have to navigate a maze with a mouse every time I relaunch the app. I decided to investigate adding a way of triggering the inspector via a keyboard shortcut.

My first thought was that I could add a keyboard shortcut via OS X's built-in support. After all, “mobile.html” is just another menu item. Unfortunately, while that worked if I opened the “Develop” menu at least once, it didn't on a cold start of Safari. I'm guessing that the contents of the menu are generated dynamically (and lazily), and thus there isn't a “mobile.html” item initially for the keyboard shortcut system to hook into.

Inspired by a similar BBEdit script, I then decided to experiment with AppleScript and the System Events UI automation framework. After cursing at AppleScript for a while (can't wait for JavaScript for Automation), I ended up with:

    tell application "Safari" to activate
    tell application "System Events" to ¬
        click menu item "mobile.html" of menu ¬
        "iPhone Simulator" of menu item "iPhone Simulator" of menu ¬
        "Develop" of menu bar item "Develop" of menu bar 1 of process "Safari"

That seemed to work reliably; now it was just a matter of binding it to a keyboard shortcut. There are apps like FastScripts that provide this capability, but to make the script more portable, I wanted a way that didn't depend on third-party software. It turned out that Automator can be used to do this, albeit in a somewhat convoluted fashion:

  1. Launch Automator
  2. Create a new “Service” workflow
  3. Add a “Run AppleScript” action¹
  4. Change the setting at the top of the window to “Service receives no input in any application”
  5. Replace the (* Your script goes here *) placeholder with the script above (your workflow should end up looking like this)
  6. Save the service as “Inspect Simulator”

I wanted to attach a keyboard shortcut to this service when either Safari or the simulator were running, but not in other apps. I therefore then went to the “App Shortcuts” keyboard preferences pane (pictured above) and added shortcuts for that menu item in both apps (to add shortcuts for the simulator, you need to select the “Other…” option in the menu and select it from /Applications/). One final gotcha is that the first time the script is run in either app, you will get a “The action 'Run AppleScript' encountered an error.” dialog. Immediately behind that dialog is another, saying that the app would like to control this computer using accessibility features. You'll need to open the Security & Privacy preferences pane and enable Safari's (and the simulator's) accessibility permissions.

  1. Not to be confused with the “Execute AppleScript” action, which is a Remote Desktop one — I did that and was puzzled by the “no computers” error message for a good while.

Per-Package Method Counts for Android's DEX Format


Quip's Android app recently ran into the Android DEX/Dalvik 64K method limit. We suspected that this was due to code generated by the Protocol Buffer compiler¹, but we wanted to get more specific numbers, to both understand the situation better and track our progress. As a starting point, we figured per-package method counts would give us what we needed.

The Android SDK ships with a dexdump tool that disassembles .dex (or .apk) files and dumps certain information out of them. Running it with the -f flag generated a method_ids_size line that showed that we were indeed precariously close to the limit. dexdump also supports XML output and per-class output of methods, so it seemed like a straightforward task to group methods and classes by package. However, once I actually processed its output, I got a much lower number than expected (when I did a sanity check to add up all the per-package counts). It turned out that the XML output is hardcoded to only output public classes and methods. I then held my nose and rewrote the script to instead parse dexdump's text format. Unfortunately, even then there was some undercounting — not as significant, but I was missing a few thousand methods. I looked at the counts for a few classes, and nothing seemed to be missing, so this was perplexing. After some more digging, it turned out that the limit counts referenced methods too, not just those defined in the DEX file. Therefore iterating over the methods defined in each class was missing most of the android.* methods that we were calling.

Mohammad then pointed me at a script that used the smali/baksmali assembler/disassembler to generate per-package counts. However, when I ran it, it seemed to overcount. Looking into it a bit more, it looked like the script disassembled the .apk, re-assembled it to generate a .dex per package, and then ran dexdump on each one. However, this meant that referenced methods were counted by each package that used them, thus the overall count would include them more than once.

I briefly considered modifying dexdump to extract the information that I needed, but it didn't seem like a fun codebase to work in; besides being in C++ it had lots of dependencies into the rest of the Android tree. Looking around for other DEX format parsers turned up smali's, dexinfo, dexinsight, dexterity, dexlib and a few others. All seemed to require a bit more effort to build and understand than I was willing to put in late on a Friday night. However, after browsing around through the Android tree more, I came across the dexdeps tool². It is designed for separating referenced and defined methods (and classes), but its DEX file parser looked simple enough to modify to extract the data that I was interested in. Better yet, it had no other dependencies, and looked straightforward to build. Sure enough, it was pretty easy to modify it to create a per-package method counting tool. After a few more commits, I ended up with a dex-method-counts tool that can be pointed at an APK (or DEX file) and provide a package hierarchy tree-view of defined and referenced method counts. The README has a few more details, including a few flags that I've found useful when looking at protocol buffer compiler-generated code.

As for how we solved our actual method count limit problem, we've so far managed to stave off doom by refactoring our .proto files to include fewer messages in our Java build (we were picking up some that were for other platforms or server use only). That is, nothing too crazy yet.
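The real tool is Java code built on dexdeps' DEX parser, but the per-package grouping step it performs is simple; here is an illustrative sketch (in JavaScript, over an assumed flat list of fully qualified method names, not the tool's actual input format):

    // Count methods per package, given names like
    // "android.view.View.setOnClickListener". Purely illustrative; the
    // actual dex-method-counts tool walks the DEX method_ids table.
    function countMethodsByPackage(methodNames) {
        var counts = {};
        methodNames.forEach(function(name) {
            // Drop the class and method name, keeping the package prefix.
            var packageName = name.split(".").slice(0, -2).join(".");
            counts[packageName] = (counts[packageName] || 0) + 1;
        });
        return counts;
    }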
  1. For others in this situation, Square's Wire library may be an alternative.
  2. Somewhat amusingly, this is not the only Java-based DEX parser in the Android source tree, there is also dex-tools in the Comp[...]

Getting ALL your data out of Quip


No, Quip is not shutting down. But we did just launch an API, so I thought I would take my experience with doing data export to write a backup tool that exports as much data as can be obtained via the API into a local folder. It's missing a few things (images most notably), but you do end up with a folder with all your documents (rendered to HTML) and conversations.

The tool is one of the samples¹ in our API repository, feel free to give it a spin. Pull requests are also welcome.

  1. The Webhook one is also mine. Party like it's 2009!

Saving The Day For (A Few) Veronica Mars Fans


Yesterday was the release day of the Veronica Mars movie. As a Kickstarter backer, Ann got a digital copy of the movie. For reasons that I'm sure were not entirely technical, it was only available via Flixster/UltraViolet¹, so getting access to it involved registering for a new account and jumping through some hoops. To actually download the movie for offline viewing, Flixster said it needed a “Flixster Desktop” client app. It was served as a ~29 MB .zip file, so it seemed like a straightforward download. I noticed that I was only getting ~30K/second download speeds, but I wasn't in a hurry, so I let it run. The download finished, but with only a ~21MB file that was malformed when I tried to expand it. I figured the WiFi of the hotel that we were staying at was somehow messing with the connection, so I tried again while tethered to my phone. I was still getting similarly slow download speeds, and the “completed” download was still too small. Since 30K/second was definitely under the expected tethered LTE throughput, I began to suspect Flixster's servers as the root cause. It certainly seemed plausible given that the file was served from a domain that did not seem like a CDN domain². I guess ~60,000 fans were enough to DDoS it; from reading a Reddit thread, it seemed like I was not the only one.

The truncated downloads were of slightly different sizes, but it seemed like they finished in similar amounts of time, so I decided to be more scientific and timed the next attempt. It finished in exactly 10 minutes. My hypothesis was now that Flixster's server (or some intermediary) was terminating connections after 10 minutes, regardless of what was being transferred or what state it was in. Chrome's download manager has a Pause/Resume link, so my next thought was to use it to break up the download into two smaller chunks. After getting the first 10 MB, I paused the download, disconnected the laptop from WiFi (to make sure the connection would not be reused) and then reconnected and tried resuming. Unfortunately, the download never restarted. I did a HEAD request on the file, and since the response headers did not include an Accept-Ranges header, I assumed that the server just didn't support resumable downloads, and that this path was a dead end.

After spending a few minutes trying to find a mirror of the app on sketchy download sites, a vague memory of Chrome's download manager not actually supporting HTTP range requests came to me. I did some quick tests with curl and saw that if I issued requests with --range parameters I got different results back. So it seemed like despite the lack of Accept-Ranges headers, the server (Apache fronted by Varnish) did in fact support range requests³. I therefore downloaded the file in two chunks by using --range 0-10000000 and --range 10000000- and concatenated them with cat. Somewhat surprisingly, the resulting zip file was well-formed and expanded correctly. I put a copy of the file in my Dropbox account and shared it on the Reddit thread, where it seemed to have helped a few others. Of course, by the end of all this, I was more excited about having successfully downloaded the client app than getting or watching the movie itself⁴.

  1. As opposed to, say, iTunes or Amazon.
  2. Now that I check the download page a day later, it seems to be served from a different domain, so I guess Flixster realized what was going on and fixed the problem.
  3. A closer reading of section 14.5 of the HTTP 1.1 spec showed that servers MAY respond with Accept-Ranges: bytes, but are not required to.
  4. Downloading the actual movie worked fine, I assume it was distributed over a CDN and thus wa[...]

Finding Messages Explicitly Marked as Spam in Gmail


tl;dr: Search Gmail for “is:spam -label:^os” to find messages that you manually marked as spam (as opposed to ones that Gmail automatically marked for you).

Gmail recently had a bug where some emails were accidentally moved to the trash or marked as spam. Google “encouraged” users that might have been affected to check their trash and spam folders for any messages that didn't belong. Since I get a lot of spam (one of the perks of having the same email address since 1996), I didn't relish the thought of going through thousands of messages to see if any of them were mislabeled¹. I figured that Gmail must keep track of which messages were explicitly marked as spam by the user versus ones that it automatically classifies (though I get a lot of spam, almost all of it is caught by Gmail's filters).

Gmail (like Google Reader) keeps track of per-message state via internal system labels. For example, others have discovered that Gmail's Smart Labels are represented as ^smartlabel_type labels while Superstars uses names like ^ss_sy. Indeed, if you try to use a caret in a label name, Gmail says that it is not allowed. It therefore seemed like a reasonable assumption that there was a system label that would tell us how a message came to be marked as spam. The problem was to figure out what it was called. Thinking back to Reader (where all label operations went through an edit-tag HTTP API call, which listed the labels to be added or removed), I figured I would see what the request was when marking a message as spam. Unfortunately, it looked like Gmail's requests were of a slightly higher abstraction level, where marking a message as spam would send a request with an act=sp parameter (while marking as read uses act=rd, and so on).

I then figured I should look at the HTTP response when loading the spam folder. There appeared to be a bunch of system label names associated with each message. One that I explicitly marked as spam had the labels:

    "^a", "^ad_1391126400000", "^all", "^bsm", "^clu_group", "^clu_unim",
    "^cob-processed-gmr", "^cob_pevent", "^oc_group", "^os_group", "^s",
    "^smartlabel_group", "^u"

Meanwhile, another that had been automatically marked as spam used:

    "^ad_1391126400000", "^all", "^bsm", "^clu_notification",
    "^cob-processed-gmr", "^oc_notification", "^os", "^os_notification",
    "^s", "^smartlabel_notification", "^u"

^s was present on all of them, and indeed doing a search for label:^s shows all spam messages (and the UI rewrites the search to in:spam). Others could also be puzzled out based on name, for example ^u is for unread messages. The more mysterious ones like ^cob_pevent I figured I could ignore².

After looking at a bunch of messages, both automatically and manually marked as spam, ^os stood out. It only seemed to be present on messages that Gmail itself had decided were spam. Doing the search is:spam -label:^os seemed to show only messages that I had marked as spam. Indeed, each of the messages in the result displayed the header: “Why is this message in Spam? You clicked 'Report spam' for this message.” Thus I was able to go through the much shorter list and see if any were mistakenly marked (they weren't).

Seeing the plethora of labels that were present on all messages, I got curious what other internal labels there were.
Between examining HTTP responses, looking through Gmail's JavaScript for strings that start with ^ and a simple dictionary attack for two-letter names, here's some others that I've found (those that are marked as “unknown” are ones that match some messages in my account, but with no apparent pattern): ^a: archived conversations ^b: chat transc[...]

JavaScript Array Sorting Performance Puzzler


I recently fixed a small performance problem in Quip. Since it ended up being yet another tidbit in my mile-long list of factoids that are necessary to keep in your head when doing web development, I thought I would write up the process. To implement search-as-you-type for contacts, documents, etc., Quip maintains a sorted array of index terms. Occasionally, this index needs to be updated on the client (for example, when a new contact is added). I noticed that the index updates were taking longer than expected, to the point where they were impacting typing latency if they happened at the wrong time.

As initially implemented, the index update involved making the additions and modifications to the array, and then re-sorting it. It was the sort operation that was taking a while: up to a few hundred milliseconds in the case of an index with 10,000 terms. Given that in the grand scheme of things that is not a very big array, and the sort was pure computation (with no need to touch the DOM or otherwise do anything expensive), that was surprising. To be a bit more specific, the array that was being re-sorted was of the form:

    [
        ["termA", ["id1", "id2", ...]],
        ["termB", ["id7", "id9", ...]],
        ...
    ]

I've created a jsPerf test case that simulates it (array1 in the setup code). On my machine that test runs at 4.23 runs/second, which works out to 236ms, which lines up well with what I was seeing within the Quip codebase. My first guess was that the fact that the array was nearly in sorted order already was somehow triggering some pathological behavior in v8's sort implementation. I tested this guess by shuffling the array (array in the jsPerf test case), but sorting time was not affected. From reading the source this makes sense — for large arrays v8 uses Quicksort where the pivot point is picked as the median of the first, middle and last elements, thus it should still have O(N log N) running time regardless of the input's ordering.

I then wondered if the fact that the array members were arrays themselves was a factor. I created a simplified test case (array3 in the jsPerf test case) where the term was used as the array member directly, instead of being the first element in a sub-array. This resulted in significantly higher speeds (163 runs/second or 6.1ms). That was a bit surprising, since the effective thing being compared should have been the same (the terms were all unique, thus the list of IDs should not be a factor in the sorting). I then decided to read the documentation a bit more carefully, which said “If [a comparator] is not supplied, elements are sorted by converting them to strings and comparing strings in lexicographic order.”¹ Though that behavior did result in a correct sort order, it did mean that each comparison involved the stringification of each of the sub-arrays, which meant a lot of needless memory allocations and computations. Changing the sort to use a very simple comparator (function (a, b) { return a[0] < b[0] ? -1 : (a[0] > b[0] ? 1 : 0); }, see “array1 with comparator” on jsPerf) resulted in 302 runs/second, or 3.3ms. This was more in line with my expectations. Mystery solved.

Though as it turned out, in Quip's specific case, this optimization ended up not being needed, since I removed the need for the re-sorting altogether. It was instead more efficient to do an in-place modification or insertion in the right spot into the array. The correct index can be determined via binary search (i.e.
O(log N)) and the insertion may involve O(N) operations to move the other array elements around, but that's still better than the[...]
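A minimal sketch of that in-place insertion, assuming entries of the [term, ids] form shown above:

    // Insert a [term, ids] entry into the already-sorted index without
    // re-sorting the whole array.
    function insertTerm(index, entry) {
        var low = 0;
        var high = index.length;
        // Binary search for the insertion point, comparing on the term.
        while (low < high) {
            var mid = (low + high) >> 1;
            if (index[mid][0] < entry[0]) {
                low = mid + 1;
            } else {
                high = mid;
            }
        }
        // splice() is O(N) element moves in the worst case, but avoids the
        // O(N log N) comparator calls (and stringification) of a full sort.
        index.splice(low, 0, entry);
    }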

A Faster UIWebView Communication Mechanism


tl;dr: Use location.hash (instead of location.href or the src attribute of iframes) to do fast synthetic navigations that trigger a UIWebViewDelegate's webView:shouldStartLoadWithRequest:navigationType: method.

As previously mentioned, Quip's editor on iOS is implemented using a UIWebView that wraps a contentEditable area. The editor needs to communicate with the containing native layer for both big (document data in and out) and small (update toolbar state, accept or dismiss auto-corrections, etc.) things. While UIWebView provides an officially sanctioned mechanism for getting data into it (stringByEvaluatingJavaScriptFromString¹), there is no counterpart for getting data out. The most commonly used workaround is to have the JavaScript code trigger a navigation to a synthetic URL that encodes the data, intercept it via the webView:shouldStartLoadWithRequest:navigationType: delegate method and then extract the data out of the request's URL².

The workaround did allow us to communicate back to the native Objective-C code, but it seemed to be higher latency than I would expect, especially on lower-end devices like the iPhone 4 (where it was several milliseconds). I decided to poke around and see what happened between the synthetic URL navigation happening and the delegate method being invoked. Getting a stack from the native side didn't prove helpful, since the delegate method was invoked via NSInvocation with not much else on the stack beyond the event loop scaffolding. However, that did provide a hint that the delegate method was being invoked after some spins of the event loop, which perhaps explained the delays.

On the JavaScript side, we were triggering the navigation by setting the location.href property. By starting at the WebKit implementation of that setter, we end up in DOMWindow::setLocation, which in turn uses NavigationScheduler::scheduleLocationChange³. As the name “scheduler” suggests, this class requests navigations to happen sometime in the future. In the case of explicit location changes, a delay of 0 is used. However, 0 doesn't mean “immediately”: a timer is still installed, and WebKit waits for it to fire. That involves at least one spin of the event loop, which may be a few milliseconds on a low-end device.

I decided to look through the WebKit source to see if there were other JavaScript-accessible ways to trigger navigations that didn't go through NavigationScheduler. Some searching turned up the HTMLAnchorElement::handleClick method, which invoked FrameLoader::urlSelected directly (FrameLoader being the main entrypoint into WebKit's URL loading). In turn, the anchor handleClick method can be directly invoked from the JavaScript side by dispatching a click event (most easily done via the click() method). Thus it seemed like an alternate approach would be to create a dummy link node, set its href attribute to the synthetic URL, and simulate a click on it. More work than just setting the location.href property, but perhaps it would be faster since it would avoid spinning the event loop.

Once I got that all hooked up, I could indeed see that everything was now running slightly faster, and synchronously too — here's a stack trace showing native-to-JS-to-native communication:

    #0: TestBed`-[BenchmarkViewController endIteration:]
    #1: TestBed`-[BenchmarkViewController webView:shouldStartLoadWithRequest:navigationType:]
    #2: UIKit`-[UIWebView webView:decidePolicyForNavigationAction:request:frame:decisionListener:]
    ...
    #17: WebCore`WebCore::FrameLoader::urlSelected(...)
    ...
    #23: WebCore`WebCore::jsHTMLElemen[...]
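A sketch of the JS side of that synthetic-click approach; the “qcmd:” URL scheme is a placeholder rather than Quip's actual message encoding:

    // A dummy link whose click handling goes through
    // FrameLoader::urlSelected directly, skipping the NavigationScheduler
    // timer that location.href changes go through.
    var dummyLink = document.createElement("a");
    document.body.appendChild(dummyLink);

    function postToNativeViaClick(message) {
        dummyLink.href = "qcmd:" + encodeURIComponent(message);
        // Dispatching the click synchronously invokes the
        // webView:shouldStartLoadWithRequest:navigationType: delegate method.
        dummyLink.click();
    }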

Programmatically accepting keyboard auto-corrections on iOS


tl;dr: To programmatically accept keyboard auto-corrections on iOS, call reloadInputViews on the first (current) UIResponder.

Quip's document editor is implemented via contentEditable. This is true not just for the desktop web version, but also for the iOS and Android apps. So far, this has been a good way of getting basic editing behavior from the browser for “free” on all platforms while also having enough control to customize it for Quip's specific needs. One area where mobile editing behavior differs from the desktop is in the interaction with the auto-correction mechanisms that on-screen keyboards have. Normally auto-corrections are transparent to web content, but Quip needs to override the behavior of some key events, most notably for the return key. Since the return key also accepts the auto-correction, we needed a way to accept the auto-correction without actually letting the key event be processed by the contentEditable layer or the browser in general¹.

Kevin did some research into programmatically accepting auto-corrections, and it turned out that this could be done by temporarily swapping the firstResponder. He implemented this (though most of the editor is in JavaScript, we do judiciously punch holes² to the native side where needed) and all was well. However, a few months later, when we started to test Quip with the iOS 7 betas, we noticed that accepting auto-corrections no longer worked. Kevin went once more unto the breach. He observed that the next/previous form element buttons that iOS places above web keyboards (that we normally hide) also had the side effect of accepting auto-corrections. He thus implemented an alternate mechanism on iOS 7 that simulated advancing to a dummy form element and then going back.

Once the initial iOS 7 release was out the door and we had some time to regroup (and I had a train ride that I could dedicate to this), I thought I would look more into this problem, to see if I could understand what was happening better. The goal was to stop having two divergent code paths, and ideally find a mechanism with fewer side effects (switching the firstResponder resulted in the keyboard being detached from the UIWebView, which would sometimes affect its scroll offset). The first step was to better understand how the iOS 6 and 7 mechanisms worked. Stepping through them with a debugger seemed tedious, but I guessed that a notification would be sent as part of the accept happening. I therefore added a listener that logged all notifications:

    [NSNotificationCenter.defaultCenter addObserverForName:nil
                                                    object:nil
                                                     queue:nil
                                                usingBlock:^(NSNotification *notification) {
        NSLog(@"notification: %@, info: %@", notification.name, notification.userInfo);
    }];

This logged a lot of other unrelated notifications, but there was something that looked promising:

    notification: UIViewAnimationDidCommitNotification, info: {
        delegate = "<…>";
        name = UIKeyboardAutocorrection;
    }

This looks like a (private) notification that is sent when the animation that shows the auto-correction is being committed. Since committing of animations happens synchronously, whatever triggered the accept must still be on the stack. I therefore changed the listener to be more specific: [NSNotificationCenter.defaultCenter addObserverForName:@"UIViewAnimationDidCo[...]

Quip: Back to app development


Though I've mentioned it off-hand a couple of times, I haven't explicitly blogged that I left Google towards the end of 2012¹. I joined Quip, a small start-up in San Francisco that's trying to re-think word processing.

My mid-2010 switch to the Chrome team was prompted by a desire to understand “the stack” better, and also a slight fear of being typecast as a frontend guy and thus narrowing my options for projects. However, after several months of working on the rendering engine, I realized that I missed the “we've shipped” feeling created by being on a smaller, focused project with distinct milestones (as opposed to continuously improving the web platform, though I certainly appreciate that as a platform client). I thus switched to the Chrome Apps team, co-leading the effort to build the “packaged apps” platform. We shipped the developer preview at Google I/O 2012, and I'm happy to see that these apps are now available to everyone.

As we were building the platform, I got more and more jealous of the people building apps on top of it. Making samples scratched that itch a bit, but there's still a big gap between a toy app that's a proof of concept and something that helps people get their job done. I also found it hard to cope with the longer timelines that are involved in platform building — it takes a while to get developers to try APIs, making changes requires preserving backwards compatibility or having a migration strategy, etc. In the end, I realized that although I enjoy understanding how the platform works, I don't feel the need to build it myself.

When Bret approached me about Quip in the fall of 2012, a lot of the above clicked together for the first time. It was also very appealing to get to work (with a small yet very capable team) on something that I would be using every day². My fears of becoming overly specialized were assuaged by being at a startup, where we're always resource-constrained and every engineer has to wear multiple hats. Depending on the day, I may be working in C++, Objective-C, Java, JavaScript, or Python. The changes can range from fixing a bug in protocol buffer code generation to figuring out how to mimic iOS 7's translucent keyboard background color³.

My time working on Chrome also turned out to be well spent. Even on my first day I had to dig through the source to understand some puzzling behavior⁴. It's also been interesting to see how quickly my perspective on certain platform (anti-)patterns has changed. When I was working on Chrome, slow-running unload event handlers and (worse yet) synchronous XMLHttpRequests were obviously abominations that prevented a fluid user experience. Now that I work on a product that involves user data, those are the exact mechanisms that I need to use to make sure it's not lost⁵.

I also view browser bugs differently now. Before, I took them as an immutable fact of life, just something to be worked around. Now I still do the workaround, but I also report them. This is the biggest change (and improvement) between the last time I did serious web development (early 2010) and now — more and more browsers are evergreen, so I can rely on bug fixes actually reaching users. This doesn't mean that all bugs get fixed immediately; some are dealt with more promptly than others (and don't get me started on contentEditable bugs). The same benefit applies not just to bugs but also to features. I've been wishing for better exception reporting via the global onerror handler, and now that it's finally h[...]
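
As a minimal sketch of that last pattern (with a placeholder /save endpoint and data structure, not Quip's actual code), flushing unsaved changes from an unload handler with a synchronous XMLHttpRequest looks something like this:

    // Minimal sketch of saving pending edits on unload with a synchronous
    // XMLHttpRequest. The /save endpoint and pendingChanges are placeholders.
    var pendingChanges = [];  // populated elsewhere as the user edits

    window.addEventListener('unload', function() {
      if (!pendingChanges.length) {
        return;
      }
      var xhr = new XMLHttpRequest();
      // The third argument (async = false) makes the request synchronous, so
      // it completes before the page is torn down.
      xhr.open('POST', '/save', false);
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.send(JSON.stringify(pendingChanges));
    });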

Internet Memories


An Accidental Tech Podcast episode from a few months ago had some reminiscing about the ways to get online in the mid-90s (most notably, the X2 vs. K56flex debate). I thought I would write down some of my earliest recollections of Internet access¹.

The first (indirect) Internet access that I recall having was in the spring of 1995². My dad had access to an FTP-to-email gateway³ at work, and had discovered an archive that had various space pictures and renderings. He would print out directory listings and bring them home. We would go over them, and highlight the ones that seemed interesting. The next day he would send the fetch requests, the files would be emailed to him, and he would bring them home on a floppy disk. In this way I acquired an early International Space Station rendering and a painting of Galileo over Io. I recall using the latter picture as my startup screen, back in the day when having so many extensions that they wrapped onto a second row was a point of pride, and the associated multi-minute startup time necessitated something pretty to look at.

A little while later, my dad found an archive with Mac software (most likely Info-Mac or WUArchive). This was very exciting, but all of the files had .sit.hqx extensions, which we hadn't encountered before (uuencoding was the primary encoding that was used in the email gateway). .hqx turned out to refer to BinHex, which Compact Pro (the sole compression utility that I had access to⁴) could understand. A bit more digging turned up that .sit referred to StuffIt archives, and the recently released (and free) StuffIt Expander could expand them. We somehow managed to find a copy of StuffIt that was either self-expanding or compressed in a format Compact Pro understood, and from that point I was set.

In the early summer of 1995, almost at the end of the school year, my school got Internet access through the 100 Schools project⁵. It would take until the fall for the lab to be fully set up, but as I recall there was a server/gateway (something in the NEC PC-98 family running PC-UX) connected to 3 or 4 LC 575s (see the last picture on this page). Also at the end of the school year a student who had graduated the year before came by, and told wondrous tales of high-speed in-dorm Internet access. At this point I still didn't have Internet access at home, but I did have a copy of the recently released version 1.1 of Netscape. I learned a bunch of HTML from the browser's about page⁶, since it was the only page that I had local access to.

Sometime in the fall, we got a Telebit Trailblazer modem. The modem only had a DB-25 connector, so my dad soldered together an adapter cable that would enable it to work with the Mac's mini-DIN8 “modem” port. This was used to dial into local and U.S.-based BBSes, at first with ZTerm and later with FirstClass. A little while later, we finally got proper Internet access at home. I had read Neuromancer over the summer, and it left a strong impression. Before dialing up to the ISP I would declare that I was going to “jack in” and demand that the lights be left off and that I not be disturbed, all for a more immersive experience.

In the spring of 1996 we upgraded to a Global Village⁷ Teleport Platinum, going from 9600 baud to 28.8Kbps⁸. Over the summer, the modem developed an annoying tendency to drop the connection without any warning, with no obvious trigger. It eventually became apparent that this was correlated with the air condi[...]

Using Google Reader's reanimated corpse to browse archived data


Having gotten all my data out of Google Reader, the next step was to do something with it. I wrote a simple tool to dump data given an item ID, which let me do spot checks that the archived data was complete. A more complete browsing UI was needed, but this proved to be slow going. It's not a hard task per se, but the idea of re-implementing something that I worked on for 5 years didn't seem that appealing.

It then occurred to me that Reader is a canonical single-page application: once the initial HTML, JavaScript, CSS, etc. payload is delivered, all other data is loaded via relatively straightforward HTTP calls that return JSON (this made adding basic offline support relatively easy back in 2007). Therefore, if I served the archived data in the same JSON format, I should be able to browse it using Reader's own JavaScript and CSS. Thankfully this all occurred to me the day before the Reader shutdown, so I had a chance to save a copy of Reader's JavaScript, CSS, images, and basic HTML scaffolding.

zombie_reader is the implementation of that idea. It's available as another tool in my collection. Once pointed at a directory with an archive generated by reader_archive, it parses it and starts an HTTP server on port 8074. Beyond serving the static resources that were saved from Reader, the server implements a minimal (read-only) subset of Reader's API. The tool required no modifications to Reader's JavaScript or CSS beyond fixing a few absolute paths¹. Even the alternate header layout (without the Google+ notification bar) is something that was natively supported by Reader (for the cases where the shared notification code couldn't be loaded). It also only uses publicly served (compressed/obfuscated) resources that had been sent to millions of users for the past 8 years. As the kids say these days, no copyright intended.

A side effect is that I now have a self-contained Reader installation that I'll be able to refer to years from now, when my son asks me how I spent my mid-20s. It also satisfies my own nostalgia kicks, like knowing what my first read item was.

In theory I could also use this approach to build a proxy that exposes Reader's API backed by (say) NewsBlur's, and thus keep using the Reader UI to read current feeds. Beyond the technical issues (e.g. impedance mismatches, since NewsBlur doesn't store read or starred state as tags, or have per-item tags in general), that seems like an overly backwards-facing option. NewsBlur has its own distinguishing features (e.g. training and “focus” mode)², and forcing it into a semi-functional Reader UI would result in something that is worse than either product.

¹ And changing the logo to make it more obvious that this isn't just a stale tab from last week. The font is called Demon Sker.

² One of the reasons why I picked NewsBlur is that it has been around long enough to develop its own personality and divergent feature set. I'll be the first to admit that Reader had its faults, and it's nice to see a product that tries to remedy them. [...]
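
The serving model is simple enough to sketch. The outline below is illustrative only (the real zombie_reader is a Python tool); the directory names and the loadArchivedItems helper are placeholders.

    // Illustrative outline of the idea: serve Reader's saved static files and
    // answer a minimal read-only subset of its JSON API from the archive.
    var http = require('http');
    var fs = require('fs');
    var path = require('path');

    var STATIC_DIR = 'reader_static';    // saved JS/CSS/images/HTML scaffolding
    var ARCHIVE_DIR = 'reader_archive';  // output of the reader_archive tool

    function loadArchivedItems(streamId) {
      // Placeholder: look up the archived items for streamId in ARCHIVE_DIR and
      // convert them to the JSON shape that Reader's JavaScript expects.
      return [];
    }

    http.createServer(function(req, res) {
      var apiPrefix = '/reader/api/0/stream/contents/';
      if (req.url.indexOf(apiPrefix) === 0) {
        var streamId = decodeURIComponent(
            req.url.slice(apiPrefix.length).split('?')[0]);
        res.writeHead(200, {'Content-Type': 'application/json'});
        res.end(JSON.stringify({items: loadArchivedItems(streamId)}));
      } else {
        // Everything else is served from the static copy of Reader's resources.
        var file = path.join(STATIC_DIR,
            req.url === '/' ? 'index.html' : req.url.split('?')[0]);
        fs.readFile(file, function(err, data) {
          res.writeHead(err ? 404 : 200);
          res.end(err ? 'Not found' : data);
        });
      }
    }).listen(8074);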

Getting ALL your data out of Google Reader


Update on July 3: The reader_archive and feed_archive scripts are no longer operational, since Reader (and its API) has been shut down. Thanks to everyone who tried the scripts and gave feedback. For more discussion, see also Hacker News.

There remain only a few days until Google Reader shuts down. Besides the emotions¹ and the practicalities of finding a replacement², I've also been pondering the data loss aspects. As a bit of a digital pack rat, I find the idea of not being able to get at a large chunk of the information that I've consumed over the past seven and a half years very scary. Technically most of it is public data, and just a web search away. However, the items that I've read, tagged, starred, etc. represent a curated subset of that, and I don't see an easy way of recovering those bits.

Reader has Takeout support, but it's incomplete. I've therefore built the reader_archive tool that dumps everything related to your account in Reader via the “API”. This means every read item³, every tagged item, every comment, every like, every bundle, etc. There's also a companion site that explains how to use the tool, provides pointers to the archive format, and collects related tools⁴.

Additionally, Reader is for better or worse the site of record for public feed content on the internet. Beyond my 545 subscriptions, there are millions of feeds whose histories are best preserved in Reader. Thankfully, ArchiveTeam has stepped up. I've also provided a feed_archive tool that lets you dump Reader's full history for feeds for your own use⁵.

I don't fault Google for providing only partial data via Takeout. Exporting all 612,599 read items in my account (and a few hundred thousand more from subscriptions, recommendations, etc.) results in almost 4 GB of data. Even if I'm in the 99th percentile for Reader users (I've got the badge to prove it), providing hundreds of megabytes of data per user would not be feasible. I'm actually happy that Takeout support happened at all, since my understanding is that it was all done during 20% time. It's certainly better than other outcomes.

Of course, I've had 3 months to work on this tool, but per Parkinson's law, it's been a bit of a scramble over the past few days to get it all together. I'm now reasonably confident that the tool is getting everything it can. The biggest missing piece is a way to browse the extracted data. I've started on reader_browser, which exposes a web UI for an archive directory. I'm also hoping to write some more selective exporters (e.g. from tagged items to Evernote for Ann's tagged recipes). Help is appreciated.

I am of course saddened to see something that I spent 5 years working on get shut down. And yet, I'm excited to see renewed interest and activity in a field that had been thought fallow. Hopefully not having a disinterested incumbent will be for the best.

² Still a toss-up between NewsBlur and Digg Reader.

³ Up to a limit of 300,000, imposed by Reader's backend.

⁴ If these command-line tools are too unfriendly, CloudPull is a nice-looking app that backs up subscriptions, tags and starred items.

⁵ Google's Feed API will continue to exist, and it's served by the same backend that served Google Reader. However, it does not expose items beyond recent ones in the feed. [...]
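
For a sense of what dumping “everything” via the API involved, here is an illustrative sketch of the continuation-based paging that such a tool has to do. The endpoint and parameter names are from memory, authentication and error handling are omitted, and the API itself no longer exists.

    // Illustrative sketch of paging through a Reader stream with continuation
    // tokens. Authentication is omitted; the API has since been shut down.
    async function fetchAllReadItems() {
      var base = 'https://www.google.com/reader/api/0/stream/contents/' +
          encodeURIComponent('user/-/state/com.google/read');
      var items = [];
      var continuation = null;
      do {
        var url = base + '?output=json&n=1000' +
            (continuation ? '&c=' + continuation : '');
        var response = await fetch(url);
        var chunk = await response.json();
        items = items.concat(chunk.items);
        // The response includes a continuation token until the stream (or the
        // backend's 300,000-item cap) is exhausted.
        continuation = chunk.continuation;
      } while (continuation);
      return items;
    }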

Image-based SVG Masking


Image-based masking was first introduced by WebKit a few years ago, and has proven to be a useful CSS feature. Unfortunately browsers without a WebKit lineage do not support it, which makes it a less than appealing option for cross-browser development. There is however the alternative of SVG-based masking, introduced by Firefox/Gecko at least partly in response to WebKit's feature. My goal was to find some way to combine the two mechanisms, so that I could use the same image assets in both rendering engines to achieve the masking effect. My other requirement was that the masks had to be provided as raster images (WebKit can use SVG files as image masks, but some shapes are complex enough that representing them as bitmaps is preferable).

SVG supports an <image> element that can be placed inside a mask, so at first glance this would just be a matter of referencing the mask bitmap from one. Unfortunately, depending on your mask image, when trying that you will most likely end up with nothing being displayed. A more careful reading shows that WebKit's image masks only use the alpha channel to determine what gets masked, while SVG masks use the luminance. If your mask image has black in the RGB channels, then the luminance is 0, and nothing will show through.

SVG 2 introduces a mask-type="alpha" property that is meant to solve this very problem. Better yet, code to support this feature in Gecko landed 6 months ago. However, the feature is behind a layout.css.masking.enabled about:config flag, so it's not actually useful.

After more exploration of what SVG can and can't do, it occurred to me that I could transform an alpha channel mask into a luminance mask entirely within SVG (I had initially experimented with using <canvas>, but that would have meant that masks would not be ready until scripts had executed). Specifically, SVG filters can be used to alter SVG images, including masks. The feColorMatrix filter can be used to manipulate color channel data, and thus a simple matrix can be used to copy the alpha channel over to the RGB channels. Putting all that together gives an SVG mask whose image is run through such a filter (a sketch is shown below).

I've put up a small demo of this in action (a silhouette of the continents is used to mask various textures). It seems to work as expected in Chrome, Safari 6, and Firefox 21. [...]
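
Here is a sketch of what that combination looks like, expressed as a small JavaScript snippet that injects the SVG and applies the mask. The IDs, dimensions, and the continents-alpha.png file name are placeholders.

    // Sketch of the alpha-to-luminance approach: an SVG mask whose <image> is
    // passed through an feColorMatrix filter that copies the alpha channel
    // into the RGB channels, so an alpha-only mask image also works as a
    // luminance mask in Gecko. IDs, sizes, and the image URL are placeholders.
    document.body.insertAdjacentHTML('beforeend',
        '<svg width="0" height="0">' +
          '<defs>' +
            '<filter id="alphaToLuminance">' +
              // 5x4 color matrix: R, G, and B are each set from the source
              // alpha; the output alpha is forced to 1 so only luminance
              // drives the mask.
              '<feColorMatrix type="matrix" values="' +
                  '0 0 0 1 0  0 0 0 1 0  0 0 0 1 0  0 0 0 0 1"/>' +
            '</filter>' +
            '<mask id="imageMask">' +
              '<image width="600" height="400" ' +
                     'filter="url(#alphaToLuminance)" ' +
                     'xlink:href="continents-alpha.png"/>' +
            '</mask>' +
          '</defs>' +
        '</svg>');

    // Apply the mask: Gecko uses the SVG mask, WebKit uses the image directly,
    // but both are driven by the same continents-alpha.png asset.
    var target = document.getElementById('textured-box');
    target.style.mask = 'url(#imageMask)';
    target.style.webkitMaskImage = 'url(continents-alpha.png)';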