
Bolinfest Changeblog


Last Build Date: Tue, 23 Jan 2018 13:33:46 +0000


JavaScript vs. Python in 2017

Mon, 20 Mar 2017 08:31:00 +0000

I may be one of the last people you would expect to write an essay criticizing JavaScript, but here we are.

Two of my primary areas of professional interest are JavaScript and “programming in the large.” I gave a presentation back in 2013 at mloc where I argued that static typing is an essential feature when picking a language for a large software project. For this reason, among others, I historically limited my use of Python to small projects with no more than a handful of files.

Recently, I needed to build a command-line tool for work that could speak Thrift. I have been enjoying the Nuclide/Flow/Babel/ESLint toolchain a lot recently, so my first instinct was to use JavaScript for this new project as well. However, it quickly became clear that if I went that route, I would have to spend a lot of time up front getting the Thrift bindings to work properly. I couldn't convince myself that my personal preference for JavaScript would justify such an investment, so I decided to take a look at Python.

I was vaguely aware that there was an effort to add support for static typing in Python, so I Googled to find out what the state of the art was. It turns out to be a tool named Mypy, which provides gradual typing, much like Flow does for JavaScript or Hack does for PHP. Fortunately, Mypy is more like Hack+HHVM than it is like Flow in that a Python 3 runtime accepts the type annotations natively, whereas Flow type annotations must be stripped by a separate tool before the code is passed to a JavaScript runtime. (To use Mypy in Python 2, you have to put your type annotations in comments, operating in the style of the Google Closure toolchain.)
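The comment-annotation approach mentioned for Python 2 has a direct JavaScript parallel: Flow also supports a comment-based syntax that a plain JavaScript runtime accepts without any stripping step. A minimal sketch (the function itself is a made-up example):

```javascript
// Flow's comment syntax: the type annotations live inside comments,
// so node runs this file as-is, with no Babel transform required.
function add(a /*: number */, b /*: number */) /*: number */ {
  return a + b;
}

console.log(add(2, 3)); // 5
```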
Although Mypy does not appear to be as mature as Flow (support for incremental type checking is still in the works, for example), simply being able to succinctly document type information was enough to renew my interest in Python.

In researching how to use Thrift from Python, a Google search turned up some sample Python code that spoke to Thrift using asynchronous abstractions. After gradual typing, async/await is the other feature in JavaScript that I cannot live without, so this code sample caught my attention! As we recently added support for building projects in Python 3.6 at work, it was trivial for me to get up and running with the latest and greatest features in Python. (Incidentally, I also learned that you really want Python 3.6, not 3.5, as 3.6 has some important improvements for Mypy, fixes to the asyncio API, literal string interpolation like you have in ES6, and more!)

Coming from the era of “modern” JavaScript, one thing that was particularly refreshing was rediscovering how Python is an edit/refresh language out of the box, whereas JavaScript used to be that way but is no more.
Let me explain what I mean by that:

In Python 3.6, I can create a new file in my text editor, write Python code that uses async/await and type annotations, switch to my terminal, and run python example to execute my code.

In JavaScript, I can create a new example file in my text editor, write JavaScript code that uses async/await and type annotations, switch to my terminal, run node example, see it fail because it does not understand my type annotations, run npm install -g babel-node, run babel-node example, see it fail again because I don't have a .babelrc that declares the babel-plugin-transform-flow-strip-types plugin, rummage around on my machine and find a .babelrc I used on another project, copy that .babelrc to my current directory, run babel-node example again, watch it fail because it doesn't know where to find babel-plugin-transform-flow-strip-types, go back to the directory from which I took the .babelrc file and now copy its package.json file as well, remove the junk from package.json that example doesn't need, run npm install, get impatient, kill npm install, run yarn install, and run babel-node example to execute my code. For bonus points, babel-node example runs considerably slower than node example (with the type annotations stripped) because it re-transpiles exampl[...]
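To make the comparison concrete, here is the kind of minimal async/await file under discussion, written without type annotations so that a bare node invocation runs it directly (a hypothetical sketch, not code from the post):

```javascript
// A minimal async/await example, runnable with a plain `node example`
// because it contains no Flow type annotations to strip.
function fetchGreeting() {
  // Stand-in for real async I/O, e.g. a Thrift or HTTP call.
  return new Promise(resolve => setTimeout(() => resolve('hello'), 10));
}

async function main() {
  const greeting = await fetchGreeting();
  return greeting + ', world';
}

main().then(result => console.log(result)); // hello, world
```

Add a single Flow annotation to fetchGreeting and, as described above, this one-command workflow turns into the Babel scavenger hunt.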

Python: Airing of Grievances

Mon, 20 Mar 2017 03:56:00 +0000

Although this list focuses on the negative aspects of Python, I am publishing it so I can use it as a reference in an upcoming essay about my positive experiences with Python 3.6. Update: This is referenced by my post, "JavaScript vs. Python in 2017." I am excited by many of the recent improvements in Python, but here are some of my outstanding issues with Python (particularly when compared to JavaScript):

PEP-8
What do you get when you combine 79-column lines, four-space indents, and snake case? Incessant line-wrapping, that's what! I prefer 100 cols and 2-space indents for my personal Python projects, but every time a personal project becomes a group project and a true Pythonista joins the team, they inevitably turn on PEP-8 linting with its defaults and reformat everything.

Having to declare self as the first argument of a method
As someone who is not a maintainer of Python, there is not much I could do to fix this, but I did the next best thing: I complained about it on Facebook. In this case, it worked! That is, I provoked Łukasz Langa to add a check for this (it now exists as B902 in flake8-bugbear), so at least when I inevitably forget to declare self, Nuclide warns me inline as I'm authoring the code.

.pyc files
These things are turds. I get why they are important for production, but I'm tired of seeing them appear in my filetree, adding *.pyc to .gitignore every time I create a new project, etc.

Lack of a standard docblock
Historically, Python has not had a formal type system, so I was eager to document my parameter and return types consistently. If you look at the top-voted answer to “What is the standard Python docstring format?” on StackOverflow, the lack of consensus is extremely dissatisfying. Although Javadoc has its flaws, Java's documentation story is orders of magnitude better than Python's.

Lack of a good, free graphical debugger
Under duress, I use pdb. I frequently edit Python on a remote machine, so most of the free tools do not work for me.
I have not had the energy to try to set up remote debugging in PyCharm, though admittedly that appears to be the best commercial solution. Instead, I tried to lay the groundwork for a Python debugger in Nuclide, which would support remote development by default. I need to find someone who wants to continue that effort.

Whitespace-delimited
I'm still not 100% sold on this. In practice, this begets all sorts of subtle annoyances. Code snippets that you copy/paste from the Web or an email have to be reformatted before you can run them. Tools that generate Python code incur additional complexity because they have to keep track of the indent level. Editors frequently guess the wrong place to put your cursor when you introduce a new line, whereas typing } gives them the signal they need to un-indent without further interaction. Expressions that span a line frequently have to be wrapped in parentheses in order to parse. The list goes on and on.

Bungled rollout of Python 3
Honestly, this is one of the primary reasons I didn't look at Python seriously for awhile. When Python 3 came out, the rhetoric I remember was: “Nothing substantially new here. Python 2.7 works great for me and the 2to3 migration tool is reportedly buggy. No thanks.” It's rare to see such a large community move forward with such a poorly executed backwards-compatibility story. (Presumably this has contributed to the lack of a default Python 3 installation on OS X, which is another reason that some software shops stick with Python 2.) Angular 2 is the only other similar rage-inducing upgrade in recent history that comes to mind.

I don't understand how imports are resolved
Arguably, this is a personal failure rather than a failure of Python. That said, compared to Java or Node, whatever Python is doing seems incredibly complicated. Do I need an __init__.py file? Do I need to put something in it? If I do have to put something in it, does it have to define __all__?
For a language whose mantra is “There's only one way to do it,” it is frustrating that t[...]

Hacking on Atom Part II: Building a Development Process

Sun, 18 Oct 2015 21:12:00 +0000

Over one year ago, I wrote a blog post: “Hacking on Atom Part I: CoffeeScript”. This post (Part II) is long overdue, but I kept putting it off because I continued to learn and improve the way I developed for Atom such that it was hard to reach a “stopping point” where I felt that the best practices I would write up this week wouldn't end up becoming obsolete by something we discovered next week. Honestly, I don't feel like we're anywhere near a stopping point, but I still think it's important to share some of the things we have learned thus far. (In fact, I know we still have some exciting improvements in the pipeline for our developer process, but those are gated by the Babel 6.0 release, so I'll save those for another post once we've built them out.)

I should also clarify that when I say “we,” I am referring to the Nuclide team, of which I am the tech lead. Nuclide is a collection of packages for Atom that provides IDE-like functionality for a variety of programming languages and technologies. The code is available on GitHub (and we accept contributions!), but it is primarily developed by my team at Facebook. At the time of this writing, Nuclide is composed of 40 Atom packages, so we have quite a bit of Atom development experience under our belts.

My Transpiler Quest
When I spun up the Nuclide project, I knew that we were going to produce a lot of JavaScript code. Transpilers such as Traceur were not very mature yet, but I strongly believed that ES6 was the future and I didn't want to start writing Nuclide in ES5 and have to port everything to ES6 later. On a high level, I was primarily concerned about leveraging the following language extensions: standardized class syntax, async/await, JSX (for React), and type annotations. Note that of those four things, only the first one is specified by ES6. async/await appears to be on track for ES7, though I don't know if the TC39 will ever be able to agree on a standard for type annotations or let JSX in, but we'll see.
Because of my diverse set of requirements, it was hard to find a transpiler that could provide all of these things. Traceur provided class syntax and other ES6 features, but more experimental things like async/await were very buggy. TypeScript provided class syntax and type annotations, but I knew that Flow was in the works at Facebook, so ultimately that is what we would use for type annotations. React came with jstransform, which supported JSX and a number of ES6 features. recast provided a general JavaScript AST transformation pipeline. Most notably, the regenerator transform to provide support for yield and async/await was built on recast.

Given my constraints and the available tools, investing in recast by adding more transforms for the other features we wanted seemed like the most promising way to go. In fact, someone had already been working on such an endeavor internally at Facebook, but the performance was so far behind that of jstransform that it was hard to justify the switch.

For awhile, I tried doing crude things with regular expressions to hack up our source so that we could use regenerator and jstransform together. The fundamental problem is that transpilers do not compose: if each recognizes language features that cause parse errors in the other, then you cannot use them together. Once we started adding early versions of Flow into the mix to get type checking (and Flow's parser recognized even less of ES6 than jstransform did), the problem became even worse. For a long time, in an individual file, we could have async/await or type checking, but not both. To make matters worse, we also had to run a file-watcher service that would write the transpiled version of the code someplace where Atom could load it.
We tried using a combination of gulp and other things, but all too often a change to a file would go unnoticed, the version on disk would not get transpiled correctly, and we would then have a subtle bug (this seemed to happen most often when interacti[...]

Trying to prove that WeakMap is actually weak

Fri, 06 Mar 2015 01:40:00 +0000

I don't have a ton of experience with weak maps, but I would expect the following to work:

// File: example
var key = {};
var indirectReference = {};
indirectReference['key'] = (function() { var m = {}; m['foo'] = 'bar'; return m; })();
var map = new WeakMap();

map.set(key, indirectReference['key']);
console.log('Has key after setting value: %s', map.has(key));

delete indirectReference['key'];
console.log('Has key after deleting value: %s', map.has(key));

global.gc();
console.log('Has key after performing global.gc(): %s', map.has(key));
I downloaded the latest version of io.js (so that I could have a JavaScript runtime with both WeakMap and global.gc()) and ran it as follows:
./iojs --expose-gc example
Here is what I see:

Has key after setting value: true
Has key after deleting value: true
Has key after performing global.gc(): true
Despite my best efforts, I can't seem to get WeakMap to give up the value that is mapped to key. Am I doing it wrong? Obviously I'm making some assumptions here, so I'm curious where I'm off.

Ultimately, I would like to be able to use WeakMap to write some tests to ensure certain objects get garbage collected and don't leak memory.

Hacking on Atom Part I: CoffeeScript

Tue, 19 Aug 2014 06:17:00 +0000

Atom is written in CoffeeScript rather than raw JavaScript. As you can imagine, this is contentious with “pure” JavaScript developers. I had a fairly neutral stance on CoffeeScript coming into Atom, but after spending some time exploring its source code, I am starting to think that this is not a good long-term bet for the project.

Why CoffeeScript Makes Sense for Atom
This may sound silly, but perhaps the best thing that CoffeeScript provides is a standard way to declare JavaScript classes and subclasses. Now before you get out your pitchforks, hear me out: The “right” way to implement classes and inheritance in JavaScript has been of great debate for some time. Almost all options for simulating classes in ES5 are verbose, unnatural, or both. I believe that official class syntax is being introduced in ES6 not because JavaScript wants to be thought of as an object-oriented programming language, but because the desire for developers to project the OO paradigm onto the language today is so strong that it would be irresponsible for the TC39 to ignore their demands. This inference is based on the less aggressive maximally minimal classes proposal that has superseded an earlier, more fully-featured proposal, as the former states that “[i]t focuses on providing an absolutely minimal class declaration syntax that all interested parties may be able to agree upon.” Hooray for design by committee!

Aside: Rant
The EcmaScript wiki is the most frustrating heap of documentation that I have ever used. Basic questions, such as, “Are Harmony, ES.next, and ES6 the same thing?” are extremely difficult to answer. Most importantly, it is impossible to tell what the current state of ES6 is. For example, with classes, there is a proposal under harmony:classes, but another one at Maximally Minimal Classes. Supposedly the latter supersedes the former. However, the newer one has no mention of static methods, which the former does and both Traceur and JSX support.
Perhaps the anti-OO folks on the committee won and the transpilers have yet to be updated to reflect the changes (or do not want to accept the latest proposal)? In practice, the best information I have found about the latest state of ES6 is at a page I stumbled upon via a post in traceur-compiler-discuss that linked to some conference talk, which featured the link to the TC39 member's page on GitHub. (The conference talk also has an explicit list of what to expect in ES7, in particular async/await and type annotations, which is not spelled out on the EcmaScript wiki.) Also, apparently using an entire GitHub repo to post a single web page is a thing now. What a world.

To put things in perspective, I ran git log --follow on some key files in the src directory of the main Atom repo, and one of the earliest commits I found introducing a .coffee file is from August 24, 2011. Now, let's consider that date in the context of modern JavaScript transpiler releases:

CoffeeScript introduced its class syntax on February 27, 2010 (version 0.5.3).
Google announced Traceur on May 3, 2011 at JSConf US.
Google announced Dart on October 10, 2011.
Microsoft released TypeScript on October 1, 2012.
Facebook released React, which included JSX and other transforms, on May 29, 2013 at JSConf US. Later, these transforms (including ES6 transforms) were spun out into their own repo, jstransform, on August 19, 2013.

As you can see, at the time Atom was spinning up, CoffeeScript was the only mature transpiler. If I were starting a large JavaScript project at that time (well, we know I would have used Closure...) and wanted to write in a language whose transpilation could be improved later as JavaScript evolved, then CoffeeScript would have made perfect sense. Many arguments about what the “right JavaScript idiom” is (such as how to declare classes and subclasses) go away because CoffeeScript is more of a “there's only one way to [...]
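For readers who have not lived through the "verbose, unnatural" ES5 era described above, here is a hedged sketch contrasting one common ES5 inheritance idiom with the class syntax that CoffeeScript (and later ES6) standardized. The names are hypothetical:

```javascript
// ES5: one of several competing idioms for classes and subclasses.
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function() {
  return this.name + ' makes a sound';
};

function Dog(name) {
  Animal.call(this, name); // manually invoke the "super" constructor
}
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function() {
  return this.name + ' barks';
};

// ES6: the standardized syntax that settles the debate.
class Cat extends Animal {
  speak() {
    return this.name + ' meows';
  }
}

console.log(new Dog('Rex').speak()); // Rex barks
console.log(new Cat('Tom').speak()); // Tom meows
```

The ES5 version requires three separate incantations (call, Object.create, constructor reassignment) that every team had to agree on; CoffeeScript's class/extends made that decision once for the whole Atom codebase.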

Hacking on Atom Series

Tue, 19 Aug 2014 05:53:00 +0000

I have started to spend some time hacking on Atom, and I wanted to share some of my thoughts and learnings from this exploration. I have not posted on my blog in awhile, and this seems like the right medium to document what I have discovered (i.e., too long for a tweet; not profound enough for an essay; failing to answer in the form of a question suitable for the Atom discussion forum).

In the best case, in places where I have stumbled with Atom’s design or implementation, I hope that either (1) someone sets me straight on best practices and why things work the way they do, or (2) to spur discussion on how to make things better. Hacking on Atom is a lot of fun, but I still have a lot to learn.

Today I am making Appendix B of my book available for free online

Mon, 28 Oct 2013 16:59:00 +0000

Today I am making one of the appendices of my book available for free: Appendix B: Frequently Misunderstood JavaScript Concepts. You might expect that appendices are just extra junk that authors stick at the end of the book to up their page count, like reference material that is readily available online. And if that were the case, this would be an uncharitable and worthless thing to do.

As it turns out, many have told me that Appendix B has been the most valuable part of the book for them, so I assure you that I am not releasing the "dregs" of the text. Unsurprisingly, many many more folks are interested in JavaScript in general than are interested in Closure Tools, which is presumably why this appendix has been a favorite.

Because the manuscript was written in DocBook XML, it was fairly easy to programmatically translate the XML into HTML. For some reason, I do not have a copy of the figure from Appendix B that appears in the book, so I had to recreate it myself. Lacking real diagramming tools at the time, I created the original illustration using Google Docs. It took several rounds with the illustrator at O'Reilly to get it reproduced correctly for my book. Since I was too embarrassed to include the Google Docs version in this "HTML reprint," I redid it using Omnigraffle, which I think we can all agree looks much better.

In some ways, the HTML version is better than the print or ebook versions in that (1) the code samples are syntax highlighted, and (2) it is possible to hyperlink to individual sections. Depending on how this is received, I may make more chunks of the book available in the future. As I mentioned, the script to convert the XML to HTML is already written, though it does contain some one-off fixes that would need to be generalized to make it reusable for the other chapters.

If you want to read the rest of Closure: The Definitive Guide (O'Reilly), you can find it on Amazon and other web sites that sell dead trees.

Hello, World!

Sun, 18 Aug 2013 01:39:00 +0000

Traditionally, "Hello, World!" would be the first post for a new blog, but this is post #100 for the Bolinfest Changeblog. However, I have posted so infrequently in the past year and a half that I thought it would be nice to check in and say hello, to report on what I have been up to as well as some of the projects that I maintain, as I have not been able to respond to all of the email requests for status updates. I'll start with the big one:

I moved to California and joined Facebook
The latter was not the cause for the former. I moved to California to co-found a startup (check!), but after killing myself for three months, I decided that the combination of working on consumer software that served millions (now billions) of users and getting a stable paycheck was something I really enjoyed. Now that I was in California, there were more companies where I could do that than there were in New York City, so I shopped around a bit.

After working at Google for so long and feeling like Facebook was The Enemy/technically inferior (they write PHP, right?), I never thought that I would end up there. Fortunately, I decided to test my assumptions about Facebook by talking to engineers there who I knew and greatly respected, and I learned that they were quite happy and doing good work. Facebook in 2012 felt more like the Google that I joined in 2005 than the Google of 2012 did, so I was also excited about that. Basically, Facebook is a lot smaller than Google, yet I feel like my potential for impact (both in the company and in the world) is much larger. I attended Google I/O this year, and although I was in awe of all of the announcements during the keynote, I imagined that whatever I could do at Google would, at best, be one tiny contribution to that presentation. Personally, I found the thought of feeling so small pretty demoralizing. Fast-forward to life at Facebook, and things are exciting and going well.
"Done is better than perfect" is one of our mantras at Facebook, which encourages us to move fast, but inevitably means that there is always some amount of shit that is broken at any given time. Coming from Google, this was pretty irritating at first, but I had to acknowledge that that mentality is what helped Facebook get to where it is today. If you can be Zen about all of the imperfections, you can find motivation in all the potential impact you can have. At least that's what I try to do. Using Outlook instead of Gmail and Google Calendar is infuriating, though.

I spend my time writing Java instead of JavaScript
In joining a company that made its name on the Web, and knowing a thing or two about JavaScript, I expected that I would be welcomed into the ranks of UI Engineering at Facebook. Luckily for me, that did not turn out to be the case. At Facebook, there are many web developers who rely on abstractions built by the core UI Engineering (UIE) team. When I joined, my understanding was that if you wanted to be a UIE, first you needed to "pay your dues" as a web developer for at least a year so that you understood the needs of the Web frontend, and then perhaps you could be entrusted as a UIE. In general, this seems like a reasonable system, but all I was thinking was: "I made Google Calendar and Google Tasks work in IE6. I have paid my fucking dues."

Fortunately, some friends steered me toward joining mobile at Facebook, it being the future of the company and whatnot. At first I was resistant because, to date, I had built a career on writing JavaScript, so I was reluctant to lay that skillset by the wayside and learn a new one. I also realized that that line of thinking is what would likely turn me into a dinosaur one day, so I needed to resist it. Okay, so Android or iOS? I had just gotten a new iPhone after living miserably on a borrowed Nexus One for the previous six months (my 3GS only lasted 75% of the way through my [...]

Chromebook Pixel gives me an excuse to fork JSNES

Sat, 18 May 2013 21:20:00 +0000

For a long time, I have been intrigued by NES emulators. I was extremely excited when Ben Firshman released an NES emulator in JavaScript (JSNES) over three years ago. At the time, I noted that JSNES ran at almost 60fps in Chrome, but barely trickled along in Firefox. It's pretty shocking to see that that is still the case today: in Firefox 21.0 on Ubuntu, I am seeing at most 2fps while sitting idle on the title screen for Dr. Mario. That's pretty sad. (It turns out I was getting 1fps because I had Firebug enabled. Maybe that's why my perception of Firefox has diminished over time. Someone at Mozilla should look into that...) This is the only browser benchmark I care about these days.

When the Gamepad API was first announced for Chrome, I tried to get my USB NES RetroPort controllers to work, but Chrome did not seem to recognize them. I made a mental note to check back later, assuming the API would eventually be more polished. Fast-forward to this week where I was fortunate enough to attend Google I/O and score a Chromebook Pixel. It seemed like it was time to give my controllers another try.

Last night, I plugged a RetroPort into the Pixel and visited the Gamepad API test page, and it worked! Obviously the next thing I had to do was wire this up to JSNES, so that was the first thing I did when I woke up this morning. I now have my own fork of the JSNES project where I added support for the RetroPort controllers as well as loading local ROMs from disk. As I admit in my README, there are already outstanding pull requests for these types of things, but I wanted to have the fun of doing it myself (and an excuse to poke around the JSNES code).
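To give a sense of the glue code involved in wiring a gamepad to an emulator, here is a hedged sketch of mapping Gamepad API button indices to NES buttons. The indices and names are hypothetical, not taken from the actual fork:

```javascript
// Hypothetical mapping from Gamepad API button indices to NES buttons.
const BUTTON_MAP = {
  0: 'A', 1: 'B', 8: 'SELECT', 9: 'START',
  12: 'UP', 13: 'DOWN', 14: 'LEFT', 15: 'RIGHT',
};

// `gamepad` has the shape of an entry from navigator.getGamepads():
// an object whose `buttons` array holds {pressed: boolean} records.
function pressedNesButtons(gamepad) {
  return gamepad.buttons
      .map((b, i) => (b.pressed ? BUTTON_MAP[i] : null))
      .filter(name => name != null);
}

// A fake gamepad snapshot with A (index 0) and START (index 9) held down.
const fake = {
  buttons: [
    {pressed: true}, {pressed: false}, {pressed: false}, {pressed: false},
    {pressed: false}, {pressed: false}, {pressed: false}, {pressed: false},
    {pressed: false}, {pressed: true},
  ],
};
console.log(pressedNesButtons(fake).join(',')); // A,START
```

In a browser, an emulator would poll navigator.getGamepads() once per frame and feed the result into the controller state.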

Finally, the one outstanding feature I hoped to add was loading ROMs from Dropbox or GDrive using pure JavaScript. Neither product appears to have a simple JavaScript API that will give you access to file data like the W3C File API does. Perhaps I'll host my fork of JSNES if I can ever add such a feature...

P.S. I should admit that one does not need a Pixel to do these types of things. However, having a new piece of hardware and APIs that have been around long enough that you expect them to be stable is certainly a motivating factor. It's nice to have a project that doesn't involve any yak-shaving, such as figuring out how to install a version of Chrome from the Beta channel!

Generating Google Closure JavaScript from TypeScript

Wed, 02 Jan 2013 16:02:00 +0000

Over a year ago, I created a prototype for generating Google Closure JavaScript from CoffeeScript. Today, I am releasing a new, equivalent prototype that uses TypeScript as the source language. I will be speaking at mloc next month in Budapest, Hungary, which is a conference on scaling JavaScript development. There, I will discuss various tools to help keep large codebases in check, so if you are interested in hearing more about this work, come join me in eastern Europe in February!

I have been interested in the challenge of Web programming in the large for quite some time. I believe that Google Closure is currently still the best option for large-scale JavaScript/Web development, but that it will ultimately be replaced by something that is less verbose. Although Dart shows considerable promise, I am still dismayed by the size of the JavaScript that it generates. By comparison, if TypeScript can be directly translated to JavaScript that can be compiled using the advanced mode of the Closure Compiler, then we can have all the benefits of optimized JavaScript from Closure without the verbosity. What's more, because TypeScript is a superset of JavaScript, I believe that its syntax extensions have a chance of making it into the ECMAScript standard at some point, whereas the chances of Dart being supported natively in all major browsers are pretty low.

In terms of hacking on TypeScript itself, getting started with the codebase was a pain because I develop on Linux, and presumably TypeScript's developers do not. Apparently Microsoft does not care that cloning a CodePlex repository using Git on Linux does not work. Therefore, in order to get the TypeScript source code, I had to switch to a Mac, clone the repo there, and then import it into GitHub so I could clone it from my Linux box. Once I got the code on my machine, the next step was figuring out how to build it.
TypeScript includes a Makefile, but it contains batch commands rather than Bash commands, so of course those did not work on Linux, either. Modifying the Makefile was a bit of work, and unfortunately it will likely be a pain to reconcile with future upstream changes. It would be extremely helpful if Microsoft switched to a cross-platform build tool (or maintained two versions of the Makefile themselves).

Once I was able to build, I wanted to start hacking and exploring the TypeScript codebase. I suspected that reading the .ts files as plaintext without any syntax highlighting would be painful, so I Googled to see if there were any good alternatives to Visual Studio with TypeScript support that would run on Linux. Fortunately, JetBrains recently released a beta of PhpStorm and WebStorm 6.0 that includes support for TypeScript, so I decided to give it a try. Given that both TypeScript and support for it in WebStorm are very new, I am quite impressed with how well it works today. Syntax highlighting and ctrl+clicking to definitions work as expected, though PhpStorm reports many false positives in terms of TypeScript errors. I am optimistic that this will get better over time.

Now that I have all of the setup out of the way, the TypeScript codebase is pretty nice to work with. The source code does not contain much in the way of comments, but the code is well-structured and the class and variable names are well-chosen, so it is not too hard to find your way. I would certainly like to upstream some of my changes and contribute other fixes to TypeScript, though if that Git bug on CodePlex doesn't get fixed, it is unlikely that I will become an active contributor.

Want to learn more about Closure? Pick up a copy of my book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps![...]
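To give a flavor of the translation target, here is a hedged, hand-written sketch of the Closure-style annotated JavaScript that the Closure Compiler's advanced mode consumes. This is a hypothetical Point class for illustration, not output of the actual prototype:

```javascript
/**
 * Closure-style annotated JavaScript: the types live in JSDoc comments,
 * so this is plain, runnable JS that the Closure Compiler can check
 * and aggressively optimize in advanced mode.
 * @constructor
 * @param {number} x
 * @param {number} y
 */
var Point = function(x, y) {
  /** @type {number} */
  this.x = x;
  /** @type {number} */
  this.y = y;
};

/** @return {number} */
Point.prototype.magnitude = function() {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};

console.log(new Point(3, 4).magnitude()); // 5
```

A TypeScript class with typed fields carries the same information in far fewer characters, which is exactly the verbosity gap the prototype aims to close.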

New Essay: Caret Navigation in Web Applications

Wed, 25 Apr 2012 21:38:00 +0000

For those of you who don't follow me on Twitter, yesterday I announced a new essay: Caret Navigation in Web Applications. There I talk about some of the things that I worked hard to get right when implementing the UI for Google Tasks. This essay may not be for JavaScript lightweights, though it didn't seem to gain much traction on Hacker News or on Reddit (which is particularly sad because Reddit readers were much of the reason that I took the time to write it in the first place). I suppose it would have been more popular if I had made it a lot shorter, but I believe it would have been much less useful if I did. That's just my style, I guess: Closure: The Definitive Guide was originally projected to be 250-300 pages and it ended up at 550.

I also wanted to take this opportunity to make some more comparisons about Google Tasks vs. Asana that weren't really appropriate for the essay:

It's annoying that Asana does not support real hierarchy. Asana has responded to this on Quora, but their response is really about making Asana's life easier, not yours. What I think is really interesting is that if they ever decide to reverse their decision, it will really screw with their keyboard shortcuts, since they currently heavily leverage Tab as a modifier, but that may not work so well when users expect Tab to indent (and un-indent).

It's pretty subtle, but when you check something off of your list in Google Tasks, there is an animated strikethrough. I did this by creating a timing function that redraws the task with the strikethrough ending at an increasing position to achieve the effect. This turned out to be tricky to get right because you want the user to be able to see it (independent of task text length) without it feeling slow for long tasks. Since so many people seem to delight in crossing things off their list, this always seemed like a nice touch in Google Tasks. I don't find crossing things off my list in Asana as satisfying.
Maybe I haven't given it enough time yet, but I have a lot of trouble with Asana's equivalent of the details pane. The thing seems to slide over when I don't want it to, or I just want to navigate through my tasks and have the details pane update, and it starts bouncing on me...I don't know. Am I alone? In Tasks, it's shift-enter to get in and shift-enter to get out. It's not as sexy since there's no animation, but it's fast, it works, and it doesn't get in my way.

Finally, some friends thought I should include a section in the essay about "what browser vendors can do to help," since clearly the browser does not have the right APIs to make things like Google Tasks easy to implement. As you may imagine, I was getting pretty tired of writing the essay, so I didn't include it, but this is still an important topic. At a minimum, an API to return the current (x, y) position of the cursor, as well as one to place it at an (x, y) position, would help, though even that is ambiguous since a cursor is generally a line, not a point. It would be interesting to see how other UI toolkits handle this.

Oh, and I should mention that if I were to reimplement Google Tasks again today (and had enough resources), I would not implement it the way that I did. If you look at Google Docs, you will see that your cursor is not a real cursor, but a blinking element that Docs draws itself. That means that Docs is taking responsibility for turning all of your mouse and keyboard input into formatted text. It eliminates all of the funny stuff with contentEditable by basically rebuilding...well, everything related to text editing in the browser. If I had libraries like that to work with (that already have support for pushing changes down to multiple editors in real time), then I would leverage those instead. Of course, if you don't have code like that available, then that is a pretty serious investment[...]
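For what it's worth, a rough version of the caret-position API wished for above can be cobbled together from the Selection/Range APIs. This is a browser-only sketch (the helper name is mine), and it illustrates the ambiguity mentioned, since a collapsed caret can yield zero client rects:

```javascript
// Hypothetical helper: approximate the (x, y) of the caret from the
// current selection. Returns null when no position can be computed.
function getCaretXY() {
  var sel = window.getSelection();
  if (!sel || sel.rangeCount === 0) {
    return null;
  }
  var range = sel.getRangeAt(0).cloneRange();
  range.collapse(true /* toStart */);
  var rects = range.getClientRects();
  if (rects.length === 0) {
    return null; // e.g., a caret inside an empty element
  }
  // A caret is really a vertical segment; report its top-left corner.
  return {x: rects[0].left, y: rects[0].top};
}
```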

I want a magical operator to assuage my async woes (and a pony)

Fri, 28 Oct 2011 21:33:00 +0000

Lately, I have spent a lot of time thinking about how I could reduce the tedium of async programming in JavaScript. For example, consider a typical implementation of using an XMLHttpRequest to do a GET that returns a Deferred. (This example uses jQuery's implementation of Deferred, but there are many other reasonable implementations, and there is a great need to settle on a standard API, but that is a subject for another post.)

/** @return {Deferred} */
var simpleGet = function(url) {
  var deferred = new $.Deferred();
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4) {
      if (xhr.status == 200) {
        deferred.resolve(xhr.responseText);
      } else {
        deferred.reject(xhr.status);
      }
    }
  };
  xhr.open('GET', url, true /* async */);
  xhr.send(null);
  return deferred;
};

What I want is a magical ~ operator that requires (and understands) an object that implements a well-defined Deferred contract so I can write my code in a linear fashion:

/** @return {Deferred} */
var getTitle = function(url) {
  if (url.substring(0, 7) != 'http://') url = 'http://' + url;
  var html = ~simpleGet(url);
  var title = html.match(/<title>(.*)<\/title>/)[1];
  return title;
};

/** Completes asynchronously, but does not return a value. */
var logTitle = function(url) {
  try {
    var title = ~getTitle(url);
    console.log(title);
  } catch (e) {
    console.log('Could not extract title from ' + url);
  }
};

Unfortunately, to get this type of behavior today, I have to write something like the following (and even then, I am not sure whether the error handling is quite right):

/** @return {Deferred} */
var getTitle = function(url) {
  if (url.substring(0, 7) != 'http://') url = 'http://' + url;
  var deferred = new $.Deferred();
  simpleGet(url).then(function(html) {
    var title = html.match(/<title>(.*)<\/title>/)[1];
    deferred.resolve(title);
  }, function(error) {
    deferred.reject(error);
  });
  return deferred;
};

/** Completes asynchronously, but does not return a value. */
var logTitle = function(url) {
  var deferred = getTitle(url);
  deferred.then(function(title) {
    console.log(title);
  }, function(error) {
    console.log('Could not extract title from ' + url);
  });
};

I am curious how difficult it would be to programmatically translate the first into the second. I spent some time playing with generators in Firefox, but I could not seem to figure out how to emulate my desired behavior. I also spent some time looking at the ECMAScript wiki, but it is unclear whether they are talking about exactly the same thing.

In terms of modern alternatives, it appears that C#'s await and async keywords are the closest thing to what I want right now. Unfortunately, I want to end up with succinct JavaScript that runs in the browser, so I'm hoping that either CoffeeScript or Dart will solve this problem, unless the ECMAScript committee gets to it first!

Please feel free to add pointers to related resources in the comments. There is a lot out there to read these days (the Dart mailing list alone is fairly overwhelming), so there's a good chance that there is something important that I have missed.

Update (Fri Oct 28, 6:15pm): I might be able to achieve what I want using deferred functions in Traceur. Apparently I should have been looking at the deferred functions strawman proposal more closely: I skimmed it and assumed it was only about defining a Deferred API.

Want to learn about a suite of tools to help manage a large JavaScript codebase? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps![...] <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">An Examination of goog.base()</a><br /> <p>Tue, 02 Aug 2011 17:44:00 +0000</p> A few weeks ago, I started working on <a href="">adding an option to CoffeeScript to spit out Closure-Compiler-friendly JavaScript</a>.
In the process, I discovered that calls to a superclass constructor in CoffeeScript look slightly different than they do in the Closure Library. For example, if you have a class <code>Foo</code> and a subclass <code>Bar</code>, then in CoffeeScript, the call [in the generated JavaScript] to invoke <code>Foo</code>'s constructor from <code>Bar</code>'s looks like:<br /><pre>Bar.__super__.constructor.call(this, a, b);</pre>whereas in the Closure Library, the canonical thing is to do the following, identifying the superclass function directly:<br /><pre>Foo.call(this, a, b);</pre>The two are functionally equivalent, though CoffeeScript's turns out to be slightly simpler to use as a developer because it does not require the author to know the name of the superclass when writing the line of code. In the case of CoffeeScript, where JavaScript code generation is being done, this localization of information makes the translation of CoffeeScript to JavaScript easier to implement.<br /><br />The only minor drawback to using the CoffeeScript form when using Closure (though note that you would have to use <code>superClass_</code> instead of <code>__super__</code>) is that the CoffeeScript call is more bytes of code. Unfortunately, the Closure Compiler does not know that <code>Bar.superClass_.constructor</code> is equivalent to <code>Foo</code>, so it does not rewrite it as such, though such logic could be added to the Compiler.<br /><br />This piqued my curiosity about how <code>goog.base()</code> is handled by the Compiler, so I ended up taking a much deeper look at <code>goog.base()</code> than I ever had before. I got so caught up in it that I ended up composing a new essay on what I learned: <a href="">"An Examination of goog.base()."</a><br /><br />The upshot of all this is that in my CoffeeScript-to-Closure translation code, I am not going to translate any of CoffeeScript's <code>super()</code> calls into <code>goog.base()</code> calls because avoiding <code>goog.base()</code> eliminates a couple of issues.
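To make the two call styles concrete, here is a minimal Foo/Bar pair. The goog.inherits() below is a simplified stand-in so the sketch is self-contained (the real Closure Library version also wires up support for goog.base()); superClass_ is the property name the real goog.inherits() installs:

```javascript
// Simplified stand-in for the Closure Library's goog.inherits().
var goog = {
  inherits: function(childCtor, parentCtor) {
    function TempCtor() {}
    TempCtor.prototype = parentCtor.prototype;
    childCtor.superClass_ = parentCtor.prototype;
    childCtor.prototype = new TempCtor();
    childCtor.prototype.constructor = childCtor;
  }
};

/** @constructor */
function Foo(a, b) {
  this.sum = a + b;
}

/** @constructor @extends {Foo} */
function Bar(a, b) {
  // Closure style: identify the superclass function directly.
  Foo.call(this, a, b);
}
goog.inherits(Bar, Foo);

/** @constructor @extends {Foo} */
function Baz(a, b) {
  // CoffeeScript style (with Closure's superClass_ naming): reach the
  // superclass through a property on the subclass itself, so the
  // superclass name is not needed on this line.
  Baz.superClass_.constructor.call(this, a, b);
}
goog.inherits(Baz, Foo);
```

Both subclasses end up invoking Foo's constructor; the only difference is whether the superclass is named at the call site.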
I will still use <code>goog.base()</code> when writing Closure code by hand, but if Closure code is being autogenerated anyway, then using <code>goog.base()</code> is not as compelling.<br /><br />Finally, if you're wondering why I started this project a few weeks ago and have not made any progress on the code since then, it is because I got married and went on a honeymoon, so at least my wife and I would consider that a pretty good excuse!<br /><br /><em>Want to learn more about Closure? Pick up a copy of my new book, <a href="">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em> <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">Writing useful JavaScript applications in less than half the size of jQuery</a><br /> <p>Fri, 01 Jul 2011 20:10:00 +0000</p> Not too long ago, I tried to bring attention to how little of the jQuery library many developers actually use and argue that frontend developers should consider what sort of improvements their users would see if they could compile their code with the Advanced mode of the Closure Compiler. Today I would like to further that argument by taking a look at TargetAlert, my browser extension that I re-released this week for Chrome.<br /><br />TargetAlert is built using Closure and plovr, using the template I created for developing a Chrome extension.
The packaged version of the extension includes three JavaScript files:<br /><style>.targetalert-js-stats { border-collapse: collapse; } .targetalert-js-stats td, .targetalert-js-stats th { border: 1px solid #CCC; }</style><table class="targetalert-js-stats"><tr><th>Name</th><th>Size (bytes)</th><th>Description</th></tr><tr><td>targetalert</td><td>19475</td><td>content script that runs on every page</td></tr><tr><td>options</td><td>19569</td><td>logic for the TargetAlert options page</td></tr><tr><td>targetalert</td><td>3590</td><td>background page that channels information from the options to the content script</td></tr><tr><td>Total</td><td>42634</td><td></td></tr></table>By comparison, the minified version of jQuery 1.6 is 91342 bytes, which is more than twice the size of the code for TargetAlert. (The gzipped sizes are 14488 vs. 31953, so the relative sizes are the same, even when gzipped.)<br /><br />And to put things in perspective, here is the set of goog.require() statements that appear in TargetAlert code, which reflects the extent of its dependencies:<br /><pre>goog.require('goog.array');
goog.require('goog.dispose');
goog.require('goog.dom');
goog.require('goog.dom.NodeIterator');
goog.require('goog.dom.NodeType');
goog.require('goog.dom.TagName');
goog.require('');
goog.require('');
goog.require('goog.string');
goog.require('goog.ui.Component');
goog.require('goog.Uri');
goog.require('soy');</pre>I include this to demonstrate that there was no effort to re-implement parts of the Closure Library in order to save bytes. On the contrary, one of the primary reasons to use Closure is that you can write code in a natural, more readable way (which may be slightly more verbose), and make the Compiler responsible for minification. Although competitions like JS1K are fun and I'm amazed to see how small JS can get when it is hand-optimized, even the first winner of JS1K, Marijn Haverbeke, admits, "In terms of productivity, this is an awful way of coding."<br /><br />When deploying a packaged browser extension, there are no caching benefits to consider: if your extension includes a copy of jQuery, then it adds an extra 30K to the user's download, even when gzipped.
To avoid this, your extension could reference (or equivalent) from its code, but then it may not work when offline. Bear in mind that in some parts of the world (including the US! think about data plans for tablets), users have quotas for how much data they download, so you're helping them save a little money if you can package your resources more efficiently.

Further, if your browser extension has a content script that runs on every page, keeping your JS small reduces the amount of code that will be executed on every page load, minimizing the impact of your extension on the user's browsing experience. As users may have many extensions installed, if everyone starts including an extra 30K of JS, then this additional tax can really start to add up! Maybe it's time you gave Closure a good look, if you haven't already.

Want to learn more about Closure? Pick up a copy of my new book, Closure: The Definitive Guide (O'Reilly), and learn how to build sophisticated web applications like Gmail and Google Maps![...] <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">The Triumphant Return of TargetAlert!</a><br /> <p>Tue, 28 Jun 2011 02:52:00 +0000</p> About seven years ago, my adviser and I were sitting in his office Googling things as part of research for my thesis. I can't remember what we were looking for, but just after we clicked on a promising search result, the Adobe splash screen popped up. As if on cue, we both let out a groan in unison as we waited for the PDF plugin to load. In that instant, it struck me that I could build a small Firefox extension to make browsing the Web just a little bit better.

Shortly thereafter, I created TargetAlert: a browser extension that would warn you when you were about to click on a PDF. It used the simple heuristic of checking whether the link ended in pdf, and if so, it inserted a PDF icon at the end of the link as shown on the original TargetAlert home page.

And that was it!
My problem was solved. Now I was able to avoid inadvertently starting up Adobe Reader as I browsed the Web.

But then I realized that there were other things on the Web that were irritating, too! Specifically, links that opened in new tabs without warning or those that started up Microsoft Office. Within a week, I added alerts for those types of links, as well.

After adding those features, I should have been content with TargetAlert as it was and put it aside to focus on my thesis, but then something incredible happened: I was Slashdotted! Suddenly, I had a lot more traffic to my site and many more users of TargetAlert, and I did not want to disappoint them, so I added a few more features and updated the web site. Bug reports came in (which I recorded), but it was my last year at MIT, and I was busy interviewing and TAing on top of my coursework and research, so updates to TargetAlert were sporadic after that. It wasn't until the summer between graduation and starting at Google that I had time to dig into TargetAlert again.

The bigger reason that TargetAlert development slowed, though, is that Firefox extension development should have been fun, but it wasn't. At the time, every time you made a change to your extension, you had to restart Firefox to pick up the change. As you can imagine, that made for a slow edit-reload-test cycle, inhibiting progress. Also, instead of using simple web technologies like HTML and JSON, Firefox encouraged the use of more obscure things, such as XUL and RDF. The bulk of my energy was spent on getting information into and out of TargetAlert's preferences dialog (because I actually tried to use XUL and RDF, as recommended by Mozilla), whereas the fun part of the extension was taking the user's preferences and applying them to the page.

The #1 requested feature for TargetAlert was for users to be able to define their own alerts (as it was, users could only enable or disable the alerts that were built into TargetAlert).
Conceptually, this was not a difficult problem, but realizing the solution in XUL and RDF was an incredible pain. As TargetAlert didn't generate any revenue and I had other personal projects (and work projects!) that were more interesting to me, I never got around to satisfying this feature request.

Fast-forward to 2011 when I finally decommissioned a VPS that I had been paying for since 2003. Even though I had rerouted all of its traffic to a new machine years ago and it was costing me money to keep it around, I put off taking it down because I knew that I needed to block out some time to get all of the important data off of it first, which included the original CVS repository for TargetAlert.

As part of the data migration, I converted all of my CVS repositories to SVN and then to Hg, preserving all of the version history (it should have been possible to convert from CVS to Hg directly, but I couldn't get hg convert to work with CVS). Once I h[...] <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">It takes a village to build a Linux desktop</a><br /> <p>Thu, 09 Jun 2011 18:00:00 +0000</p> tl;dr Instead of trying to build your own Ubuntu PC, check out a site like instead.

In many ways, this post is more for me than for you—I want to make sure I re-read this the next time I am choosing a desktop computer.

My Windows XP desktop from March 2007 finally died, so it was time for me to put together a new desktop for development. Because Ubuntu on my laptop had been working out so well, I decided that I would make my new machine an Ubuntu box, too.
Historically, it was convenient to have a native Windows machine to test IE, Firefox, Chrome, Safari, and Opera, but Cygwin is a far cry from GNOME Terminal, so using Windows as my primary desktop environment had not been working out so well for me.

As a developer, I realize that my computer needs are different from an ordinary person's, but I didn't expect it would be so difficult to buy the type of computer that I wanted on the cheap (<$1000). Specifically, I was looking for:

At least 8GB of RAM.
A 128GB solid state drive. (I would have been happy with 64GB because this machine is for development, not storing media, but Jeff Atwood convinced me to go for 128GB, anyway.)
A video card that can drive two 24" vertical monitors (I still have the two that I used with my XP machine). Ideally, the card would also be able to power a third 24" if I got one at some point.
A decent processor and motherboard.

I also wanted to avoid paying for:

A Windows license.
A CD/DVD-ROM drive.
Frivolous things.

I did not think that I would need a CD-ROM drive, as I planned to install Ubuntu from a USB flash drive. I expected to be able to go to Dell or HP's web site and customize something without much difficulty. Was I ever wrong. At first, I thought it was going to be easy as the first result for [Dell Ubuntu] looked very promising: it showed a tower starting at $650 with Ubuntu preinstalled. I started to customize it: upgrading from 4GB to 8GB of RAM increased the price by $120, which was reasonable (though not quite as good as Amazon). However, I could not find an option to upgrade to an SSD, so buying my own off of Newegg would cost me $240. Finally, the only options Dell offered for video cards were ATI, and I have had some horrible experiences trying to get dual monitors to work with Ubuntu and ATI cards in the past (NVIDIA seems to be better about providing good Linux drivers).
At this point, I was over $1000, and was not so sure about the video card, so I started asking some friends for their input. Unfortunately, I have smart, capable friends who build their own machines from parts, and they were able to convince me that I could, too. You see, in general, I hate dealing with hardware. For me, hardware is simply an inevitable requirement for software. When software goes wrong, I have some chance of debugging it and can attack the problem right away. By comparison, when hardware goes wrong, I am less capable, and may have to wait until a new part from Amazon comes in before I can continue debugging the problem, which sucks. At the same time, I realized that I should probably get past my aversion to dealing with hardware, so I started searching for blog posts about people who had built their own Ubuntu boxes. I found one post by a guy who built his PC for $388.95, which was far less than the Dell that I was looking at! Further, he itemized the parts that he bought, so at least I knew that if I followed his steps, I would end up with something that worked with Ubuntu (ending up with hardware that was not supported by Ubuntu was one of my biggest fears during this project). I cross-checked this list with a friend who had recently put together a Linux machine with an Intel i7[...] <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">GData: I can't take it anymore</a><br /> <p>Fri, 27 May 2011 19:49:00 +0000</p> I have been playing with GData since I was on the Google Calendar team back in 2005/2006. My experiences with GData can be summarized by the following graph:

There are many reasons why GData continues to infuriate me: GData is not about data—it is an exercise in XML masturbation. If you look at the content of a GData feed, most of the bytes are dedicated to crap that does not matter due to a blind devotion to the Atom Publishing Protocol.
In recent history, GData has become better in providing clean JSON data, but the equivalent XML it sends down is still horrifying by comparison. I understand the motivation to provide an XML wire format, but at least the original Facebook API had the decency to use POX to reduce the clutter. Atom is the reason why, while I was at Google, the first GData JSON API we released had to have a bunch of dollar signs and crap in it, so that you could use it to construct the Atom XML equivalent when making a request to the server. When I wanted to add web content events to Calendar in 2006, most of my energy was spent on debating what the semantically correct Atom representation should be rather than implementing the feature. Hitching GData to the Atom wagon was an exhausting waste of time and energy. Is it really that important to enable users to view their updates to a spreadsheet in Google Reader?

REST APIs aren't cool. Streaming APIs are cool. If we want to have a real time Web, then we need streaming APIs. PubSub can be used as a stopgap, but it's not as slick. For data that may change frequently (such as a user's location in Google Latitude), you have no choice but to poll aggressively using a REST API.

GData has traditionally given JavaScript developers short shrift. Look at the API support for the GData Java client library or Python client library compared to the JavaScript client library. JavaScript is the lingua franca of the Web: give it the attention it deserves.

The notion of Atom forces the idea of "feeds" and "entries." That is fine for something like Google Calendar, but is less appropriate for hierarchical data, such as that stored in Google Tasks. Further, for data that does not naturally split into "entries," such as a Google Doc, the entire document becomes a single entry. Therefore, making a minor change to a Google Doc via GData requires uploading the entire document rather than the diff.
This is quite expensive if you want to create your own editor for a Google Doc that has autosave.

Perhaps the biggest time sink when getting started with GData is wrapping your head around the authentication protocols. To play around with your data, the first thing you have to do is set up a bunch of crap to get an AuthSub token. Why can't I just fill out a form on and give myself one? Setting up AuthSub is not the compelling piece of the application I want to build—interacting with my data is. Let me play with my data first and build a prototype so I can determine if what I'm trying to build is worth sharing with others and productionizing, and then I'll worry about authentication. Facebook's JavaScript SDK does this exceptionally well. After registering your application, you can include one <script> tag on your page and start using the Facebook API without writing any server code. It's much more fun and makes it easier to focus on the interesting part of your app.

If GData were great, then Google products would be built on top of GData. A quick look under the hood will reveal that no serious web application at Google (Gmail, Calendar, Docs, etc.) uses it. If GData isn't good enough for Google engineers, then why should we be [...] <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">Reflecting on my Google I/O 2011 Talk</a><br /> <p>Mon, 16 May 2011 19:13:00 +0000</p> This year was my first trip to Google I/O, both as an attendee and a speaker. The title of my talk was JavaScript Programming in the Large with Closure Tools. As I have spoken to more and more developers, I have come to appreciate how jQuery and Closure are good for different things, and that Closure's true strength is when working in large JavaScript codebases, particularly for SPAs.
With the increased interest in HTML5 and offline applications, I believe that JavaScript codebases will continue to grow, and that Closure will become even more important as we move forward, which is why I was eager to deliver my talk at I/O.

Although it does not appear to be linked from the sessions page yet, the video of my I/O talk is available on YouTube. I have also made my slides available online, though I made a concerted effort to put less text on the slides than normal, so they may not make as much sense without the narration from my talk.

I was incredibly nervous, but I did watch all 57 minutes of the video to try to evaluate myself as a speaker. After observing myself, I'm actually quite happy with how things went! I was already aware that I sometimes list back and forth when speaking, and that's still a problem (fortunately, most of the video shows the slides, not me, so you may not be able to tell how bad my nervous habit is). My mumbling isn't as bad as it used to be (historically, I've been pretty bad about it during ordinary conversation, so mumbling during public speaking was far worse). It appears that when I'm diffident about what I'm saying (such as when I'm trying to make a joke that I'm not sure the audience will find funny), I often trail off at the end of the sentence, so I need to work on that. On the plus side, the word "like" appears to drop out of my vernacular when I step on a stage, and I knew my slides well enough that I was able to deliver all of the points I wanted to make without having to stare at them too much. (I never practiced the talk aloud before giving it—I only played through what I wanted to say in my head. I can't take myself seriously when I try to deliver my talk to myself in front of a mirror.)

If you pay attention during the talk, you'll notice that I switch slides using a real Nintendo controller. The week before, I was in Portland for JSConf, which had an 8-bit theme.
There, I gave a talk on a novel use of the with keyword in JavaScript, but I never worked the 8-bit theme into my slides, so I decided to do so for Google I/O (you'll also note that Mario and Link make cameos in my I/O slides). Fortunately, I had messed around with my USB NES RetroPort before, so I already had some sample Java code to leverage—I ended up putting the whole NES navigation thing together the morning of my talk.

For my with talk the week before, I had already created my own presentation viewer in Closure/JavaScript so I could leverage things like prettify. In order to provide an API to the NES controller, I exported some JavaScript functions to navigate the presentation forward and backward (navPresoForward() and navPresoBack()). Then I embedded the URL to the presentation in an org.eclipse.swt.browser.Browser and used com.centralnexus.input.Joystick to process the input from the controller and convert right- and left-arrow presses into browser.execute("navPresoForward()") and browser.execute("navPresoBack()") calls in Java. (The one sticking point was discovering that joystick input had to be processed in a special thread scheduled by Display.asyncExec().) Maybe it wasn't as cool as Marcin Wichary's Power Glove during his and Ryan's talk, The Secret[...] <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow"> uses only 34% of jQuery</a><br /> <p>Wed, 20 Apr 2011 13:27:00 +0000</p> I created a Firefox extension, JsBloat, to help determine what fraction of the jQuery library a web page uses. It leverages JSCoverage to instrument jQuery and keep track of which lines of the library are executed (and how often). The table below is a small sample of sites that I tested with JsBloat.
Here, "percentage used" means the fraction of lines of code that were executed in the version of the jQuery library that was loaded:<br /><table><tr><th>URL</th><th>jQuery version</th><th>% used on page load</th><th>% used after mousing around</th></tr><tr><td></td><td>1.4.2</td><td>18%</td><td>34%</td></tr><tr><td></td><td>1.4.4</td><td>23%</td><td>23%</td></tr><tr><td></td><td>1.5.1</td><td>30%</td><td>33%</td></tr><tr><td></td><td>1.4.4</td><td>19%</td><td>23%</td></tr></table>Note that three of the four sites exercise new code paths as a result of mousing around the page, so they do not appear to be using pure CSS for their hover effects. For example, on, a single mouseover of the "Lightweight Footprint" text causes a mouseover animation that increases the percentage of jQuery used by 11%! Also, when jQuery is loaded initially on, it calls $() 61 times, but after mousing around quite a bunch (which only increases the percentage of code used to 33%), the number of times that $() is executed jumps to 9875! (Your results may vary, depending on how many elements you mouse over, but it took less than twenty seconds of mousing for me to achieve my result. See the Postscript below to learn how to run JsBloat on any jQuery-powered page.) Although code coverage is admittedly a coarse metric for this sort of experiment, I still believe that the results are compelling.

I decided to run this test because I was curious about how much jQuery users would stand to gain if they could leverage the Advanced mode of the Closure Compiler to compile their code. JavaScript that is written for Advanced mode (such as the Closure Library) can be compiled so that it is far smaller than its original source because the Closure Compiler will remove code that it determines is unreachable. Therefore, it will only include the lines of JavaScript code that you will actually use, whereas most clients of jQuery appear to be including much more than that.

From these preliminary results, I believe that most sites that use jQuery could considerably reduce the amount of JavaScript that they serve by using Closure.
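The dead-code removal being argued for is easy to see with a toy example (a hand-written illustration; actual Advanced-mode output differs in renaming and inlining details):

```javascript
// Before compilation: a tiny "library" with one function that the
// page never calls.
function formatName(first, last) {
  return last + ', ' + first;
}
function formatNameBackwards(first, last) { // unreachable from the page
  return formatName(first, last).split('').reverse().join('');
}
console.log(formatName('Ada', 'Lovelace'));

// After Advanced-mode compilation, formatNameBackwards would be
// dropped entirely and the rest inlined; conceptually:
//   console.log('Lovelace, Ada');
```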
As always, compiling custom JavaScript for every page must be weighed against caching benefits, though I suspect that the Compiler could find a healthy subset of jQuery that is universal to all pages on a particular site.

If you're interested in learning more about using Closure to do magical things to your JavaScript, come find me at Track B of JSConf where I'm going to provide some mind-blowing examples of how to use the with keyword effectively! And if I don't see you at JSConf, then hopefully I'll see you at Google I/O where I'll be talking about JavaScript Programming in the Large with Closure Tools.

Postscript: If you're curious how JsBloat works... JsBloat works by intercepting requests to for jQuery and injecting its own instrumented version of jQuery into the response. The instrumented code looks something like:<br /><pre>for (var i = 0, l = insert.length; (i < l); (i++)) {
  _$jscoverage['jquery-1.5.2'][5517]++;
  var elems = ((i > 0) ? this.clone(true) : this).get();
  _$jscoverage['jquery-1.5.2'][5518]++;
  (jQuery(insert[i])[original])(elems);
  _$jscoverage['jquery-1.5.2'][5519]++;
  ret = ret.concat(elems);
}
_$jscoverage['jquery-1.5.2'][5522]++;
return this.pushStack(ret, name, insert.selector);</pre>so that after each line of code is executed, the _$jscoverage global increments its counter[...] <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">Suggested Improvements to JSON</a><br /> <p>Wed, 06 Apr 2011 14:22:00 +0000</p> Today I'm publishing an essay about my <a href="">suggested improvements to JSON</a>. I really like JSON a lot, but I think that it could use a few tweaks to make it easier to use for the common man.<br /><br />One question some have asked is: even if these changes to JSON were accepted, how would you transition existing systems? I would lean towards what I call the Nike approach, which is: "just do it." That is, start updating the <code>JSON.parse()</code> method in the browser to accept these extensions to JSON.
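Under the "just do it" route, pages that care could feature-test the parser rather than consult a version number. Here is a hypothetical probe for one such extension (trailing commas in arrays):

```javascript
// Hypothetical feature test: does this JSON.parse accept a trailing
// comma in an array literal? RFC 4627-conformant parsers reject it.
function jsonSupportsTrailingCommas() {
  try {
    JSON.parse('[1,]');
    return true;
  } catch (e) {
    return false;
  }
}
```

Today this answers false everywhere; an engine that adopted the extension would simply start answering true, with no version check required.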
What I am proposing is a strict superset of what's in <a href="">RFC 4627</a> today (which you may recall claims that "[a] JSON parser MAY accept non-JSON forms or extensions"), so it would not break anyone who continued to use ES3 style JSON.<br /><br />At first, you may think that sounds ridiculous -- how could we update the spec without versioning? Well, if you haven't been paying attention to the HTML5 movement, we are already doing this sort of thing <em>all the time</em>. Browser behavior continues to change/improve, and you just have to roll with the punches. It's not ideal, and yet the Web moves forward, and it's faster than waiting for standards bodies to agree on something.<br /><br />Though if the Nike approach is too heretical, then I think that versioning is also a reasonable option. In the browser, a read-only <code>JSON.version</code> property could be added, though I don't imagine most developers would check it at runtime, anyway. Like most things on the web, a least-common-denominator approach would be used by those who want to be safe, which would ignore the version number. Only those who do <a href="">user-agent sniffing on the server</a> would be able to serve a slimmer JavaScript library, though that is already true for many other browser features today. I trust <a href="">Browserscope</a> and <a href=""></a> much more than any sort of formal specification, anyway.<br /><br />Special thanks to <a href="">Kushal</a>, <a href="">Dolapo</a>, and <a href="">Mihai</a> for providing feedback on my essay. <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">What I learned from üjs</a><br /> <p>Mon, 04 Apr 2011 14:19:00 +0000</p> For April Fool's, I released a mock JavaScript library, üjs (pronounced "umlaut JS"). The library was "mock" in the "mockumentary" Spin̈al Tap sense, not the unit testing sense.
It took me about a day to create the site, and I learned some interesting lessons along the way, which I thought I would share:<br /><br />In purchasing the domain name ü, I learned about Punycode. Basically, when you type ü into the browser, the browser will translate it to the Punycode equivalent. On one hand, this is what prevents you from buying göö and creating a giant phishing scheme; on the other hand, because most users are unfamiliar with Punycode, going to a legitimate domain like ü looks like a giant phishing scheme due to the rewrite.<br /><br />I only promoted üjs through three channels: Twitter, Hacker News, and Reddit. On Twitter, I discovered that using a Punycode domain as your punchline really limits its reach because instead of tweeting about ü, many tweeted about the Punycode form (because that's what you can copy from the location bar after you visit the site), or (somewhat ironically) a URL-shortened version of the domain. I suspect more people would have followed the shared link if they could see the original domain name.<br /><br />According to Google Analytics, just over half of my traffic (50.04%) on April 1 was from Hacker News (where I made it to the front page!). Another 9.89% was from Reddit. Analytics also claims that only 1.57% of my traffic came from Twitter, though 29.76% of my traffic was "direct," so I assume that was from Twitter, as well. On April 1, I had 3690 visits, and then another 443 on April 2 (presumably from the eastern hemisphere, who woke up on Saturday to an aftermath of Internet April Fools' pranks).<br /><br />GoDaddy was not able to do a domain search for ü, presumably due to the non-ASCII characters. It turned out that another registrar was able to, so I ended up going with them.
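The rewrite the location bar performs can be reproduced with the WHATWG URL parser, which applies the same IDN-to-Punycode translation. A quick sketch of mine — <code>bücher.example</code> is an arbitrary illustrative domain, not one of the domains from this post:

```javascript
// The browser encodes an internationalized (IDN) hostname to its
// Punycode/ASCII form before it ever reaches the network or the
// location bar; the URL parser exposes the same translation.
// 'bücher.example' is an arbitrary illustrative domain.
const asciiHost = new URL('http://bücher.example/').hostname;

console.log(asciiHost); // 'xn--bcher-kva.example'
```

This is exactly why a tweeted link shows the unfamiliar <code>xn--</code> form after a round trip through someone's location bar.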
(Presumably if I understood Punycode well enough at the outset of this project, I could have registered it through their site.)<br /><br />I found a discount code, so my total cost out of pocket for this project was $8.75 (my hosting fees are a sunk cost).<br /><br />Not that it was a substantial investment, but I was hopeful/curious whether I could break even in some way. I felt that traditional ads would have been a little over the top, so instead I decided to include two Amazon Associates links. Although I produced 350 clicks to Amazon (!), I failed to generate any conversions, so I did not make any money off of my ads.<br /><br />Chartbeat is really, really cool. It made it much more fun to watch all of the web traffic to ü during the day. (I wish that I generally had enough traffic to make it worth using Chartbeat all the time!) I believe that I had 144 simultaneous visitors during the peak of üjs, and I was amazed at how dispersed the traffic was from across the globe.<br /><br />One thing that I did not realize is that Chartbeat does not do aggregate statistics. Fortunately, I set up Google Analytics in addition to Chartbeat, so I had access to both types of data.<br /><br />Response times were about 1s on average during peak traffic times. At first, I thought that was horrendously slow, but then I realized that there were a large number of requests coming from outside the US, which increased the average. Most of the requests from the US loaded in the low hundreds of milliseconds, which made me feel good about my hosting choice (who really is excellent, btw).<br /><br />The n̈ in Spin̈al Tap [...]
<div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">Awesome New JavaScript Library!</a><br /> <p>Fri, 01 Apr 2011 13:00:00 +0000</p> Despite months of advocating Closure, I've finally given up and found a superior JavaScript library (and it's not jQuery!): <a href="http://ü">http://ü</a> <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">What are the most important classes for high school students to succeed in software engineering?</a><br /> <p>Wed, 19 Jan 2011 04:21:00 +0000</p> What are the most important classes for high school students to succeed in software engineering? That is the question that I try to answer in an <a href="">essay of the same name</a>.<br /><br />Also, this is the first essay I have written using <a href="">NJEdit</a>, which is the editing software that I built (and have since <a href="">open sourced</a>) in order to write <a href="">Closure: The Definitive Guide</a>. It helps me focus more on content while worrying less about formatting, though it still has a ways to go before becoming my "one click" publishing solution.<br /><br />A unique feature of NJEdit is that when I produce the HTML to publish my essay, I also produce <a href="">the DocBook XML version as a by-product</a>! It's not a big selling point today, but if I ever want to publish anything to print again, I'll be ready! For open-source projects that are slowly creating HTML documentation that they hope to publish as a print book one day, NJEdit might be the solution.<br /><br />And if it is, maybe someone will help me fix its bugs...<br /><br /><em>Want to learn more about Closure?
Pick up a copy of my new book, <a href="">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em> <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">My latest Closure talks</a><br /> <p>Thu, 13 Jan 2011 04:47:00 +0000</p> Last Thursday, January 6, I had a doubleheader, as I gave one talk in the afternoon at an <a href="">undergraduate MIT Web programming competition (6.470)</a>, and another talk in the evening at the <a href="">Boston JavaScript Meetup</a>. It was a long day (I took the train up from NYC that morning), but I got to talk to so many interesting people that I was far more energized at the end of the day than exhausted!<br /><br />I cleaned up the slide decks from both talks so they are suitable for sharing. Because I was informed that the MIT programming competition had a lot of freshmen in it, I didn't go into as much detail about Closure as I normally would have because I didn't want to overwhelm them. (I took so much out of my "standard" talk that I ended up finishing ~30 minutes early, though that was likely a welcome relief to the students, as I was the last speaker in a full day of lectures.) As you can see, the MIT talk is only 17 slides:<br /><br /><iframe src="" frameborder="0" width="410" height="342"></iframe><br /><br />The talk I prepared for the Boston JS Meetup was the most technical one I have given to date. It was my first time presenting for a group that actually has "JavaScript" in the name, so it was refreshing not to have to spend the first 15 minutes explaining to the audience that JavaScript is a real language and that you can do serious things with it. By comparison, my second talk went much longer (about an hour and a half?), as there was a lot more material to cover as well as tons of great questions from the (very astute!)
audience during my presentation:<br /><br /><iframe src="" frameborder="0" width="410" height="342"></iframe><br /><br />The one thing that these presentations do not capture is the <a href="">plovr</a> demo that I have been giving during my talks. (This was also the first time that I demoed using plovr with jQuery, as I had just attempted using jQuery with plovr for the first time myself the night before. I have an <a href="">open topic on the plovr discussion group about how to make plovr easier to use for jQuery developers</a>, so please contribute if you have thoughts!) At some point, I'm planning to do a webcast with O'Reilly on Closure, so that might be a good opportunity to record a plovr demo that can be shared with everyone.<br /><br /><em>Want to learn more about Closure? Pick up a copy of my new book, <a href="">Closure: The Definitive Guide (O'Reilly)</a>, and learn how to build sophisticated web applications like Gmail and Google Maps!</em> <div class='clearbothFix'></div> </div><br /> <hr /><br /><div class="item"> <a href="" rel="nofollow">Web Content Wizard is Fixed AGAIN!</a><br /> <p>Wed, 12 Jan 2011 04:42:00 +0000</p> Back in 2006, I released this little tool for Google Calendar called the Web Content Wizard. With it, you can add your own "web content events" in Google Calendar. Unfortunately, there is no way to add such events via the Google Calendar UI, so normally your only alternative is to do some gnarly web programming to talk to GData.<br /><br />Things were going well until one day, the Web Content Wizard just broke. The GData team changed something on their end which caused my poor little app to stop working. If I remember correctly, it was because they created this new thing about registering your web application.
I learned of the problem because some users wrote in, and eventually I found the time to debug the problem and fix it.<br /><br />Then things were going well again for a while, but then in late 2009 (when I had some newfound free time), I decided that it was finally time to migrate from the Red Hat 9 server it had been running on since 2003! I had a VPS from RimuHosting (who rocks, btw!) that I acquired in 2006 for my project with Mike Lambert, but I had never had the time to move my content over there. (Getting PHP 5 with its native JSON parsing methods was what finally put me over the edge, as I repeatedly failed at compiling them myself for the version of PHP 4 that was running on Shrike.)<br /><br />As you can imagine, not everything worked when I did the migration in 2009. Honestly, I probably should have just gotten an even newer VPS then and migrated both servers, as it is now running on a dusty version of Ubuntu 8.04! (I'm running version 0.9.5 of Mercurial, which means that I don't even have the rebase extension!) Anyway, when I did the migration, I was far more preoccupied with migrating my conf files for Apache from 1.3 to 2.0 than I was with getting the Web Content Wizard to work again. Every endeavor of mine involving GData always involves a minimum of 3 hours' work and a lot of cursing, and I just wasn't up for it.<br /><br />Then today I got another polite email asking me why the Web Content Wizard wasn't working. I have amassed quite a few of these over the years, and I generally star them in my inbox, yet never make the time to investigate what was wrong. But for some reason, tonight I felt motivated, so I dug in and decided to debug it. I [wrongly] assumed that GData had changed on me again, so I focused my efforts there first. Surprisingly, the bulk of my time was spent getting the right error message out of PHP, as all it told me was that my request to the GData server was returning a 400.
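The trap behind a mute 400 like this is a client configured to discard the error response's body, which is where the server's actual diagnostic lives. A sketch of mine in JavaScript (not the PHP this post uses), with a synthetic Response standing in for a real GData reply:

```javascript
// Sketch: keep the body of an HTTP error response instead of throwing
// it away -- the server's explanation travels in the body, not the
// status line. A synthetic Response stands in for a real reply here.
async function describeResponse(response) {
  const body = await response.text();
  return response.ok ? body : `HTTP ${response.status}: ${body}`;
}

const fake400 = new Response('missing quotes around "version"', { status: 400 });
describeResponse(fake400).then((msg) => console.log(msg));
// logs: HTTP 400: missing quotes around "version"
```

The same principle applies to any HTTP client that exposes a "fail on error status" switch: flip it off while debugging so the error body is available for inspection.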
Now, I know GData is pretty lousy at a lot of things, but when it gives you a 400, in my experience, at least it gives you a reason!<br /><br />I use cURL under the hood to communicate with GData from PHP, and that has never been pretty. I thought I remembered editing something in /etc/php.ini (which was now at /etc/php5/apache2/php.ini) to get cURL working correctly on the old server, but I could not remember what it was, nor did I appear to have documented it.<br /><br />The key insight occurred when I changed this call to <code>curl_setopt()</code> to set the <code>CURLOPT_FAILONERROR</code> option to false:<br /><br /><code>curl_setopt($ch, CURLOPT_FAILONERROR, false);</code><br /><br />Suddenly, my requests to GData contained a real error message:<br /><br /><code>The value following "version" in the XML declaration must be a quoted string</code><br /><br />This did not make any sense because I was sure that I was sending well-formed XM[...] <div class='clearbothFix'></div> </div> </div> </div> </div> </div> </body> </html>