joe shaw

Updated: 2017-07-25T13:06:02-04:00


Testing with os/exec and TestMain


If you look at the tests for the Go standard library's os/exec package, you'll find a neat trick for how it tests execution:

```go
func helperCommandContext(t *testing.T, ctx context.Context, s ...string) (cmd *exec.Cmd) {
	testenv.MustHaveExec(t)
	cs := []string{"-test.run=TestHelperProcess", "--"}
	cs = append(cs, s...)
	if ctx != nil {
		cmd = exec.CommandContext(ctx, os.Args[0], cs...)
	} else {
		cmd = exec.Command(os.Args[0], cs...)
	}
	cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}
	return cmd
}

// TestHelperProcess isn't a real test.
//
// Some details elided for this blog post.
func TestHelperProcess(*testing.T) {
	if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
		return
	}
	defer os.Exit(0)

	args := os.Args
	for len(args) > 0 {
		if args[0] == "--" {
			args = args[1:]
			break
		}
		args = args[1:]
	}
	if len(args) == 0 {
		fmt.Fprintf(os.Stderr, "No command\n")
		os.Exit(2)
	}

	cmd, args := args[0], args[1:]
	switch cmd {
	case "echo":
		iargs := []interface{}{}
		for _, s := range args {
			iargs = append(iargs, s)
		}
		fmt.Println(iargs...)
	// etc...
	}
}
```

When you run go test, under the covers the toolchain compiles your test code into a temporary binary and runs it. (As an aside, passing -x to the go tool is a great way to learn what the toolchain is actually doing.) This helper function in exec_test.go sets a GO_WANT_HELPER_PROCESS environment variable and invokes the test binary itself with a -test.run flag directing it to run only a specific test, named TestHelperProcess. Nate Finch wrote an excellent blog post in 2015 on this pattern in greater detail, and Mitchell Hashimoto's 2017 GopherCon talk also mentions this trick.

I think this can be improved upon somewhat with the TestMain mechanism that was added in Go 1.4, however. Here it is in action:

```go
package myexec

import (
	"fmt"
	"os"
	"os/exec"
	"testing"
)

func TestMain(m *testing.M) {
	switch os.Getenv("GO_TEST_MODE") {
	case "":
		// Normal test mode
		os.Exit(m.Run())
	case "echo":
		iargs := []interface{}{}
		for _, s := range os.Args[1:] {
			iargs = append(iargs, s)
		}
		fmt.Println(iargs...)
	}
}

func TestEcho(t *testing.T) {
	cmd := exec.Command(os.Args[0], "hello", "world")
	cmd.Env = []string{"GO_TEST_MODE=echo"}
	output, err := cmd.Output()
	if err != nil {
		t.Errorf("echo: %v", err)
	}
	if g, e := string(output), "hello world\n"; g != e {
		t.Errorf("echo: want %q, got %q", e, g)
	}
}
```

We still set an environment variable and self-execute, but by moving the dispatching to TestMain we avoid the somewhat-hacky special test that only ran when a certain environment variable was set, and that needed to do extra command-line argument handling.

Update: Chris Hines wrote about this and other useful things you can do with TestMain in a post from 2015 that I did not know about!

One weird trick to make cleaner tests with os/exec

Don't defer Close() on writable files


Update: Another approach suggested by the inimitable Ben Johnson has been added to the end of the post.

Update 2: Discussion about fsync() added to the end of the post.

It's an idiom that quickly becomes rote to Go programmers: whenever you conjure up a value that implements the io.Closer interface, after checking for errors you immediately defer its Close() method. You see this most often when making HTTP requests:

```go
resp, err := http.Get("")
if err != nil {
	return err
}
defer resp.Body.Close()
```

or opening files:

```go
f, err := os.Open("/home/joeshaw/notes.txt")
if err != nil {
	return err
}
defer f.Close()
```

But this idiom is actually harmful for writable files, because deferring a function call ignores its return value, and the Close() method can return errors. For writable files, Go programmers should avoid the defer idiom or very infrequent, maddening bugs will occur.

Why would you get an error from Close() but not an earlier Write() call? To answer that we need to take a brief, high-level detour into computer architecture. Generally speaking, as you move outward and away from your CPU, actions get orders of magnitude slower. Writing to a CPU register is very fast. Accessing system RAM is quite slow in comparison. Doing I/O on disks or networks is an eternity.

If every Write() call committed the data to disk synchronously, the performance of our systems would be unusably slow. While synchronous writes are very important for certain types of software (like databases), most of the time they're overkill. The pathological case is writing to a file one byte at a time. Hard drives – brutish, mechanical devices – need to physically move a magnetic head to the right position on the platter, and possibly wait for a full platter revolution, before the data can be persisted. SSDs, which store data in blocks and have a finite number of write cycles for each block, would quickly burn out as blocks are repeatedly written and overwritten.

Fortunately this doesn't happen, because multiple layers within hardware and software implement caching and write buffering. When you call Write(), your data is not immediately written to media. The operating system, storage controllers, and the media itself all buffer the data in order to batch smaller writes together, organize the data optimally for storage on the medium, and decide when best to commit it. This turns our writes from slow, blocking synchronous operations into quick, asynchronous operations that don't directly touch the much slower I/O device. Writing a byte at a time is never the most efficient thing to do, but at least we are not wearing out our hardware if we do it.

Of course, the bytes do have to be committed to disk at some point. The operating system knows that when we close a file, we are finished with it and no subsequent write operations are going to happen. It also knows that closing the file is its last chance to tell us something went wrong.

On POSIX systems like Linux and macOS, closing a file is handled by the close system call. The BSD man page for close(2) talks about the errors it can return:

```
ERRORS
     The close() system call will fail if:

     [EBADF]   fildes is not a valid, active file descriptor.

     [EINTR]   Its execution was interrupted by a signal.

     [EIO]     A previously-uncommitted write(2) encountered an
               input/output error.
```

EIO is exactly the error we are worried about. It means that we've lost data trying to save it to disk, and our Go programs should absolutely not return a nil error in that case.

The simplest way to solve this is simply not to use defer when writing files:

```go
func helloNotes() error {
	f, err := os.Create("/home/joeshaw/notes.txt")
	if err != nil {
		return err
	}

	if err = io.WriteString(f, "hello world"); err != nil {
		f.Close()
		return err
	}

	return f.Close()
}
```

This does mean additional bookkeeping of the file in the[...]

Revisiting context and http.Handler for Go 1.7


Go 1.7 was released earlier this month, and the thing I'm most excited about is the incorporation of the context package into the Go standard library. Previously it lived in the golang.org/x/net/context package. With the move, other packages within the standard library can now use it. The net package's Dialer and os/exec package's Command can now utilize contexts for easy cancelation. More on this can be found in the Go 1.7 release notes.

Go 1.7 also brings contexts to the net/http package's Request type, for both HTTP clients and servers. Last year I wrote a post about using context.Context with http.Handler when it lived outside the standard library, but Go 1.7 makes things much simpler and thankfully renders all of the approaches from that post obsolete.

A quick recap

I suggest reading my original post for more background, but one of the main uses of context.Context is to pass around request-scoped data: things like request IDs, authenticated user information, and other data useful for handlers and middleware to examine in the scope of a single HTTP request. In that post I examined three different approaches for incorporating context into requests. Since contexts are now attached to http.Request values, this is no longer necessary. As long as you're willing to require at least Go 1.7, it's now possible to use the standard http.Handler interface and common middleware patterns with context.Context!

The new approach

Recall that the http.Handler interface is defined as:

```go
type Handler interface {
	ServeHTTP(ResponseWriter, *Request)
}
```

Go 1.7 adds new context-related methods on the *http.Request type:

```go
func (r *Request) Context() context.Context
func (r *Request) WithContext(ctx context.Context) *Request
```

The Context method returns the current context associated with the request. The WithContext method creates a new Request value with the provided context.

Suppose we want each request to have an associated ID, pulling it from the X-Request-ID HTTP header if present, and generating it if not. We might implement the context functions like this:

```go
type key int

const requestIDKey key = 0

func newContextWithRequestID(ctx context.Context, req *http.Request) context.Context {
	reqID := req.Header.Get("X-Request-ID")
	if reqID == "" {
		reqID = generateRandomID()
	}
	return context.WithValue(ctx, requestIDKey, reqID)
}

func requestIDFromContext(ctx context.Context) string {
	return ctx.Value(requestIDKey).(string)
}
```

We can implement middleware that derives a new context with a request ID, creates a new Request value from it, and passes it on to the next handler in the chain:

```go
func middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
		ctx := newContextWithRequestID(req.Context(), req)
		next.ServeHTTP(rw, req.WithContext(ctx))
	})
}
```

The final handler and any middleware lower in the chain have access to all the request-scoped data set by middleware above it:

```go
func handler(rw http.ResponseWriter, req *http.Request) {
	reqID := requestIDFromContext(req.Context())
	fmt.Fprintf(rw, "Hello request ID %v\n", reqID)
}
```

And that's it! It's no longer necessary to implement custom context handlers, adapters to standard http.Handler implementations, or hackily wrap http.ResponseWriter. Everything you need is in the standard library, right there on the *http.Request type.

Smaller Docker containers for Go apps


At litl we use Docker images to package and deploy our Room for More services, using our Galaxy deployment platform. This week I spent some time looking into how we might reduce the size of our images and speed up container deployments.

Most of our services are written in Go, and thanks to the fact that compiled Go binaries are mostly statically linked by default, it's possible to create containers with very few files within. It's surely possible to use these techniques to create tighter containers for other languages that need more runtime support, but for this post I'm only focusing on Go apps.

The old way

We built images in a very traditional way, using a base image built on top of Ubuntu with Go 1.4.2 installed. For my examples I'll use something similar. Here's a Dockerfile:

```dockerfile
FROM golang:1.4.2

EXPOSE 1717
RUN go get

# Don't run network servers as root in Docker
USER nobody
CMD qotd
```

The golang:1.4.2 base image is built on top of Debian Jessie. Let's build this bad boy and see how big it is:

```
$ docker build -t qotd .
...
Successfully built ae761b93e656
$ docker images qotd
REPOSITORY   TAG      IMAGE ID       CREATED         VIRTUAL SIZE
qotd         latest   ae761b93e656   3 minutes ago   520.3 MB
```

Yikes. Half a gigabyte. Ok, what leads us to a container this size?

```
$ docker history qotd
IMAGE          CREATED BY                                      SIZE
ae761b93e656   /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "qotd"]   0 B
b77d0ca3c501   /bin/sh -c #(nop) USER [nobody]                 0 B
a4b2a01d3e42   /bin/sh -c go get                               3.021 MB
c24802660bfa   /bin/sh -c #(nop) EXPOSE 1717/tcp               0 B
124e2127157f   /bin/sh -c #(nop) COPY file:56695ddefe9b0bd83   2.481 kB
69c177f0c117   /bin/sh -c #(nop) WORKDIR /go                   0 B
141b650c3281   /bin/sh -c #(nop) ENV PATH=/go/bin:/usr/src/g   0 B
8fb45e60e014   /bin/sh -c #(nop) ENV GOPATH=/go                0 B
63e9d2557cd7   /bin/sh -c mkdir -p /go/src /go/bin && chmod    0 B
b279b4aae826   /bin/sh -c #(nop) ENV PATH=/usr/src/go/bin:/u   0 B
d86979befb72   /bin/sh -c cd /usr/src/go/src && ./make.bash    97.4 MB
8ddc08289e1a   /bin/sh -c curl -sSL                            39.69 MB
8d38711ccc0d   /bin/sh -c #(nop) ENV GOLANG_VERSION=1.4.2      0 B
0f5121dd42a6   /bin/sh -c apt-get update && apt-get install    88.32 MB
607e965985c1   /bin/sh -c apt-get update && apt-get install    122.3 MB
1ff9f26f09fb   /bin/sh -c apt-get update && apt-get install    44.36 MB
9a61b6b1315e   /bin/sh -c #(nop) CMD ["/bin/bash"]             0 B
902b87aaaec9   /bin/sh -c #(nop) ADD file:e1dd18493a216ecd0c   125.2 MB
```

This is not a very lean container, and it has a lot of intermediate layers. To reduce the size of our containers, we did two additional steps:

(1) Every repo has a script that is run inside the container after it is initially built. Here's part of a script for one of our Ubuntu-based Go images:

```shell
apt-get purge -y software-properties-common byobu curl git htop man unzip vim \
    python-dev python-pip python-virtualenv python-dev python-pip python-virtualenv \
    python2.7 python2.7 libpython2.7-stdlib:amd64 libpython2.7-minimal:amd64 \
    libgcc-4.8-dev:amd64 cpp-4.8 libruby1.9.1 perl-modules vim-runtime \
    vim-common vim-tiny libpython3.4-stdlib:amd64 python3.4-minimal xkb-data \
    xml-core libx11-data fonts-dejavu-core groff-base eject python3 locales \
    python-software-properties supervisor git-core make wget cmake gcc bzr mercurial \
    libglib2.0-0:amd64 libxml2:amd64

apt-get clean autoclean
apt-get autoremove -y

rm -rf /usr/local/go
rm -rf /usr/local/go1.*.linux-amd64.tar.gz
rm -rf /var/lib/{apt,dpkg,cache,log}/
rm -rf /var/{cache,log}
```

(2) We run Jason Wilder's excellent docker-squash tool. It is especially helpful when combined with the script above. These st[...]
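The excerpt cuts off here, but since Go binaries are mostly statically linked, the logical endpoint of this approach is an image that contains little more than the binary itself. A minimal sketch of that idea; the build flags and the FROM scratch layout are my assumptions, not necessarily litl's actual setup:

```dockerfile
# Build a fully static binary on the host (or in a builder image) first:
#   CGO_ENABLED=0 GOOS=linux go build -a -o qotd .

# Start from an empty image and copy in only the binary.
FROM scratch
COPY qotd /qotd
EXPOSE 1717
CMD ["/qotd"]
```

The resulting image is roughly the size of the binary, at the cost of having no shell, package manager, or CA certificates unless you copy them in explicitly.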

Go's net/context and http.Handler


The approaches in this post are now obsolete thanks to Go 1.7, which adds the context package to the standard library and uses it in the net/http *http.Request type. The background info here may still be helpful, but I wrote a follow-up post that revisits things for Go 1.7 and beyond.

A summary of this post is available in Japanese thanks to @craftgear. こちらに @craftgearによる日本語の要約があります。

The golang.org/x/net/context package (hereafter referred to as net/context, although it's not yet in the standard library) is a wonderful tool for the Go programmer's toolkit. The blog post that introduced it shows how useful it is when dealing with external services and the need to cancel requests, set deadlines, and send along request-scoped key/value data.

The request-scoped key/value data also makes it very appealing as a means of passing data around through middleware and handlers in Go web servers. Most Go web frameworks have their own concept of context, although none yet use net/context directly. Questions about using net/context for this kind of server-side context keep popping up on the /r/golang subreddit and the Gophers Slack community. Having recently ported a fairly large API surface from Martini to http.ServeMux and net/context, I hope this post can answer those questions.

About http.Handler

The basic unit in Go's HTTP server is its http.Handler interface, which is defined as:

```go
type Handler interface {
	ServeHTTP(ResponseWriter, *Request)
}
```

http.ResponseWriter is another simple interface, and http.Request is a struct that contains data corresponding to the HTTP request: things like the URL, headers, body if any, etc. Notably, there's no way to pass anything like a context.Context here.

About context.Context

Much more detail about contexts can be found in the introductory blog post, but the main aspect I want to call attention to in this post is that contexts are derived from other contexts. Context values become arranged as a tree, and you only have access to values set on your context or one of its ancestor nodes.

For example, let's take context.Background() as the root of the tree, and derive a new context by attaching the content of the X-Request-ID HTTP header:

```go
type key int

const requestIDKey key = 0

func newContextWithRequestID(ctx context.Context, req *http.Request) context.Context {
	return context.WithValue(ctx, requestIDKey, req.Header.Get("X-Request-ID"))
}

func requestIDFromContext(ctx context.Context) string {
	return ctx.Value(requestIDKey).(string)
}

ctx := context.Background()
ctx = newContextWithRequestID(ctx, req)
```

This derived context is the one we would then pass to the next layer of the system. Perhaps that layer would create its own contexts with values, deadlines, or timeouts, or it could extract values we previously stored.

Approaches

These approaches are now obsolete as of Go 1.7. Read my follow-up post that revisits this topic for Go 1.7 and beyond.

So, without direct support for net/context in the standard library, we have to find another way to get a context.Context into our handlers. There are three basic approaches:

- Use a global request-to-context mapping
- Create an http.ResponseWriter wrapper struct
- Create your own handler types

Let's examine each.

Global request-to-context mapping

In this approach we create a global map of requests to contexts, and wrap our handlers in middleware that manages the lifetime of the context associated with a request. This is the approach taken by Gorilla's context package, although with its own context type rather than net/context.

Because every HTTP request is processed in its own goroutine and Go's maps are not safe for concurrent access (for performance reasons), it is crucial that we protect all map accesses with a sync.Mutex. This also introduces lock contention among concurrently processed requests. Depending on your application and workload, this could become a[...]

Two great tastes that taste great together

Contributing to GitHub projects


I often see people asking how to contribute to an open source project on GitHub. Some are new programmers, some may be new to open source, and others aren't programmers but want to make improvements to documentation or other parts of a project they use every day. Using GitHub means you'll need to use Git, and that means using the command line. This post gives a gentle introduction using the git command-line tool and a companion tool for GitHub called hub.

Workflow

The basic workflow for contributing to a project on GitHub is:

1. Clone the project you want to work on
2. Fork the project you want to work on
3. Create a feature branch to do your own work in
4. Commit your changes to your feature branch
5. Push your feature branch to your fork on GitHub
6. Send a pull request for your branch on your fork

Clone the project you want to work on

```
$ hub clone pydata/pandas
```

(Equivalent to git clone https://github.com/pydata/pandas.)

This clones the project from the server onto your local machine. When working in git you make changes to your local copy of the repository. Git has a concept of remotes, which are, well, remote copies of the repository. When you clone a new project, a remote called origin is automatically created that points to the repository you provide in the command line above – in this case, pydata/pandas on GitHub. To upload your changes back to the main repository, you push to the remote. Between when you cloned and now, changes may have been made to the upstream remote repository. To get those changes, you pull from the remote.

At this point you will have a pandas directory on your machine. All of the remaining steps take place inside it, so change into it now:

```
$ cd pandas
```

Fork the project you want to work on

The easiest way to do this is with hub:

```
$ hub fork
```

This does a couple of things. It creates a fork of pandas in your GitHub account, and it establishes a new remote in your local repository with the name of your GitHub username. In my case I now have two remotes: origin, which points to the main upstream repository; and joeshaw, which points to my forked repository. We'll be pushing to my fork.

Create a feature branch to do your own work in

This creates a place to do your work in that is separate from the main code:

```
$ git checkout -b doc-work
```

doc-work is what I'm choosing to name this branch. You can name it whatever you like; hyphens are idiomatic. Now make whatever changes you want for this project.

Commit your changes to your feature branch

If you are creating new files, you will need to explicitly add them to the to-be-committed list (also called the index, or staging area):

```
$ git add <new files>
```

If you are just editing existing files, you can add them all in one batch:

```
$ git add -u
```

Next you need to commit the changes:

```
$ git commit
```

This will bring up an editor where you type in your commit message. The convention is usually to type a short summary in the first line (50-60 characters max), then a blank line, then additional details if necessary.

Push your feature branch to your fork on GitHub

Ok, remember that your fork is a remote named after your GitHub username – in my case, joeshaw.

```
$ git push joeshaw doc-work
```

This pushes only the doc-work branch to the joeshaw remote. Now your work on your fork is publicly visible to anyone.

Send a pull request for your branch on your fork

You can do this either on the web site or using the hub tool:

```
$ hub pull-request
```

This will open your editor again. If you only had one commit on your branch, the message for the pull request will be the same as the commit. This might be good enough, but you might want to elaborate on the purpose of the pull request. Like commits, the first line is a summary of the pull request and the other lines are the body of the PR. In general you will be requesting a pull from your current branch (in this case doc-work) into the master b[...]

Terrible Vagrant/Virtualbox performance on Mac OS X


Update March 2016: There's a much easier way to enable the host I/O cache from the command line, but it only works for existing VMs. See the update below.

I recently started using Vagrant to test our auto-provisioning of servers with Puppet. Having a simple-yet-configurable system for starting up and accessing headless virtual machines really makes this a much simpler solution than VMware Fusion. (Although I wish Vagrant had a way to take and roll back VM snapshots.)

Unfortunately, as soon as I tried to really do anything in the VM, my Mac would completely bog down. Eventually the entire UI would stop updating. In Activity Monitor, the dreaded kernel_task was taking 100% of one CPU, and VBoxHeadless most of another. Things would eventually free up whenever the task in the VM (usually apt-get install or puppet apply) crashed with a segmentation fault.

Digging into this, I found an ominous message in the VirtualBox logs:

```
AIOMgr: Host limits number of active IO requests to 16. Expect a performance impact.
```

Yeah, no kidding. I tracked this message down to the "Use host I/O cache" setting being off on the SATA Controller in the box. (This is a per-VM setting, and I am using the stock Vagrant "lucid64" box, so the exact setting may be somewhere else for you. It's probably a good idea to turn this setting on for all storage controllers.)

When it comes to Vagrant VMs, this setting in the VirtualBox UI is not very helpful, though, because Vagrant brings up new VMs automatically and without any UI. To get this to work with the Vagrant workflow, you have to do the following hacky steps:

1. Turn off any IO-heavy provisioning in your Vagrantfile
2. vagrant up a new VM
3. vagrant halt the VM
4. Open the VM in the VirtualBox UI and change the setting
5. Re-enable the provisioning in your Vagrantfile
6. vagrant up again

This is not going to work if you have to bring up new VMs often. Fortunately this setting is easy to tweak in the base box.

Open up ~/.vagrant.d/boxes/base/box.ovf and find the StorageController node. You'll see an attribute HostIOCache="false". Change that value to true. Lastly, you'll have to update the SHA1 hash of the .ovf file in ~/.vagrant.d/boxes/base/. Get the new hash by running openssl dgst -sha1 ~/.vagrant.d/boxes/base/box.ovf and replace the old value with it.

That's it. All subsequent VMs you create with vagrant up will now have the right setting.

Update

Thanks to this comment on a Vagrant bug report, you can enable the host cache more simply from the command line for an existing VM:

```
VBoxManage storagectl <vmname> --name <controller name> --hostiocache on
```

Where <vmname> is your Vagrant VM name, which you can get from:

```
VBoxManage list vms
```

and <controller name> is probably "SATA Controller". The VM must be halted for this to work.

You can add a section to your Vagrantfile to do this when new VMs are created:

```ruby
config.vm.provider "virtualbox" do |v|
  v.customize [ "storagectl", :id,
                "--name", "SATA Controller", "--hostiocache", "on" ]
end
```

And for further reading, here is the relevant section in the VirtualBox manual that goes into more detail about the pros and cons of host I/O caching.

One weird trick to speed up your virtual machines

Linux input ecosystem


Over the past couple of days, I've been trying to figure out how input in Linux works on modern systems. There are lots of small pieces at various levels, and it's hard to understand how they all interact. It doesn't help that quite a bit has changed over the past couple of years, as HAL – which I helped write – has been giving way to udev, and existing literature is largely out of date. This is my attempt at understanding how things work today, in the Ubuntu Lucid release.

Kernel

In the Linux kernel's input system, there are two pieces: the device driver and the event driver. The device driver talks to the hardware, obviously. Today, for most USB devices this is handled by the usbhid driver. The event drivers handle how the events generated by the device driver are exposed to userspace. Today this is primarily done through evdev, which creates character devices (typically named /dev/input/eventN) and communicates with them through struct input_event messages. See include/linux/input.h for its definition. A great tool for getting information about evdev devices and events is evtest. A somewhat outdated but still relevant description of the kernel input system can be found in the kernel's Documentation/input/input.txt file.

udev

When a device is connected, the kernel creates an entry in sysfs for it and generates a hotplug event. That hotplug event is processed by udev, which applies some policy, attaches additional properties to the device, and ultimately creates a device node for you somewhere in /dev. For input devices, the rules in /lib/udev/rules.d/60-persistent-input.rules are executed. Among other things, they run a /lib/udev/input_id tool which queries the capabilities of the device from its sysfs node and sets environment variables like ID_INPUT_KEYBOARD, ID_INPUT_TOUCHPAD, etc. in the udev database. For more information on input_id, see the original announcement email to the hotplug list.

X

X has a udev config backend which queries udev for the various input devices. It does this at startup and also watches for hotplugged devices. X looks at the different ID_INPUT_* properties to determine whether it's a keyboard, a mouse, a touchpad, a joystick, or some other device. This information can be used in /usr/lib/X11/xorg.conf.d files in the form of MatchIsPointer, MatchIsTouchpad, MatchIsJoystick, etc. in InputClass sections to decide whether to apply configuration to a given device.

Xorg has a handful of its own drivers to handle input devices, including evdev, synaptics, and joystick. And here is where things start to get confusing. Linux has this great generic event interface in evdev, which means that very few drivers are needed to interact with hardware, since they're not speaking device-specific protocols. Of the few needed on Linux, nearly all of them speak evdev, including the three I listed above.

The evdev driver provides basic keyboard and mouse functionality, speaking – obviously – evdev through the /dev/input/eventN devices. It also handles things like the lid and power switches. This is the basic, generic input driver for Xorg on Linux.

The synaptics driver is the most confusing of all. It also speaks evdev to the kernel. On Linux it does not talk to the hardware directly, and is in no way Synaptics(tm) hardware-specific. The synaptics driver is simply a separate driver from evdev which adds a lot of features expected of touchpad hardware, for example two-finger scrolling. It should probably be renamed the "touchpad" module, except that on non-Linux OSes it can still speak the Synaptics protocol.

The joystick driver similarly handles joysticky things, but speaks evdev to the kernel rather than some device-specific protocol.

X only has concepts of keyboards and pointers, the latter of which includes mice, touchpads, joysticks, wa[...]

AVCHD to MP4/H.264/AAC conversion


For posterity:

I have a Canon HF200 HD video camera, which records to AVCHD format. AVCHD is H.264 encoded video and AC-3 encoded audio in a MPEG-2 Transport Stream (m2ts, mts) container. This format is not supported by Aperture 3, which I use to store my video.

With Blizzard’s help, I figured out an ffmpeg command-line to convert to H.264 encoded video and AAC encoded audio in an MPEG-4 (mp4) container. This is supported by Aperture 3 and other Quicktime apps.

$ ffmpeg -sameq -ab 256k -i input-file.m2ts -s hd1080 output-file.mp4 -acodec aac

Command-line order is important, which is infuriating. If you move the -s or -ab arguments, they may not work. Add -deinterlace if the source videos are interlaced, which mine were originally until I turned it off. The only downside to this is that it generates huge output files, on the order of 4-5x greater than the input file.

Update, 28 April 2010: Alexander Wauck emailed me to say that re-encoding the video isn’t necessary, and that the existing H.264 video could be moved from the m2ts container to the mp4 container with a command-line like this:

$ ffmpeg -i input-file.m2ts -ab 256k -vcodec copy -acodec aac output-file.mp4

And he’s right… as long as you don’t need to deinterlace the video. With the whatever-random-ffmpeg-trunk checkout I have, adding -deinterlace to the command-line segfaults. I actually had tried -vcodec copy early in my experiments but abandoned it after I found that it didn’t deinterlace. I had forgotten to try it again after I moved past my older interlaced videos. Thanks Alex!

It is surprisingly tricky

Real-time MBTA bus location + Google Maps mashup


This weekend I read that the MBTA and Massachusetts Department of Transportation had released a trial real-time data feed for the positioning of vehicles on five of its bus routes. This is very important data to have, and while obviously everyone would like to see more routes added, it's a start.

I decided to hack together a mashup of this data with Google Maps, to see how easy it would be. In the end it took me a few hours on Saturday to get the site up and running, and a couple more on Sunday adding features like the drawing of routes on the map, colorizing markers for inbound vs. outbound buses, and adding reverse geocoding of the buses themselves. To do this I used three technologies (Google App Engine, jQuery, Google Maps) and two data sources (the real-time XML feed and the MBTA Google Transit Feed Specification files).

Google App Engine

App Engine is so perfectly suited for smaller, playtime hacks like this that it's hard to imagine how anyone got anything done before it existed. The tedious, up-front bootstrapping required in so many programming projects has been enough to completely turn me off small, spare-time hacking projects on occasion in the past. The brilliance behind a hosted software environment is obvious, but the amount of work to build a safe, hosted system with a fairly comprehensive set of APIs seems such a mountain of work that in many ways I find it surprising that anyone – even, perhaps especially, Google – built it at all. I chose the Python SDK, and the programming was straightforward and easy. It takes some elements from Django, with which I am familiar from work.

jQuery

A no-brainer. Hands down the best JavaScript toolkit available. Making the AJAX calls to get route and vehicle location information was a breeze, and the transparent handling of the XML data of the real-time feed kept me from losing the will to live – a common feeling when dealing with XML. My only complaint is with the documentation. While the API reference is good for any given piece of the API, the examples are a little light and there is absolutely zero cross-referencing to other parts, especially ones not part of jQuery itself. It was not obvious, for example, how to deal with the XML document returned by the AJAX call. It sounds like the docs are getting some work, though, so this will hopefully improve.

Google Maps

This was my first endeavor with the Maps API, and it's good. It's not the best API in the world, but it's hardly the worst either. Adding markers of different colors is annoying, but not so onerous as to make it tedious. The breadth of functionality provided is impressive, but then again it has been around for a few years at this point. Markers are easy to add, drawing the route map is absolutely trivial with a KML file, and even the reverse geocoding – which gives you a street address given a latitude/longitude pair – is straightforward.

The docs suck, though. There's no indication that a size or anchor position is required when creating an icon for a custom marker – required for colors other than red – and due to the minified JS files, tracking down that error took longer than any other task in the project. Reverse geocoding mentions that a Placemark object will be returned, but that class doesn't appear anywhere in the reference documentation.

Real-time data feed

Lots to like: straightforward and easy to parse. It'd be nice if I didn't have to do the reverse geocoding to figure out what the street address is, but it's not a dealbreaker. The main downside is that it's XML as opposed to JSON. And of course, it's only 5 bus routes and zero subway and commuter rail routes.

MBTA Google Transit Feed Specification files

A comprehensive set of data describing every transit ro[...]