
joe shaw








Understanding Go panic output

2017-11-09T23:00:00-05:00

My code has a bug. 😭

```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x751ba4]

goroutine 58 [running]:
github.com/joeshaw/example.UpdateResponse(0xad3c60, 0xc420257300, 0xc4201f4200, 0x16, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /go/src/github.com/joeshaw/example/resp.go:108 +0x144
github.com/joeshaw/example.PrefetchLoop(0xacfd60, 0xc420395480, 0x13a52453c000, 0xad3c60, 0xc420257300)
        /go/src/github.com/joeshaw/example/resp.go:82 +0xc00
created by main.runServer
        /go/src/github.com/joeshaw/example/cmd/server/server.go:100 +0x7e0
```

This panic is caused by dereferencing a nil pointer, as indicated by the first line of the output. These types of errors are much less common in Go than in other languages like C or Java thanks to Go's idioms around error handling. If a function can fail, it must return an error as its last return value, and the caller should immediately check for errors from that function:

```go
// val is a pointer, err is an error interface value
val, err := somethingThatCouldFail()
if err != nil {
    // Deal with the error, probably pushing it up the call stack
    return err
}

// By convention, nearly all the time, val is guaranteed to not be
// nil here.
```

However, there must be a bug somewhere that is violating this implicit API contract.

Before I go any further, a caveat: this is architecture- and operating-system-dependent stuff, and I am only running this on amd64 Linux and macOS systems. Other systems can and will do things differently.

Line two of the panic output gives information about the UNIX signal that triggered the panic:

```
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x751ba4]
```

A segmentation fault (SIGSEGV) occurred because of the nil pointer dereference. The code field maps to the UNIX siginfo.si_code field, and a value of 0x1 is SEGV_MAPERR ("address not mapped to object") in Linux's siginfo.h file. addr maps to siginfo.si_addr and is 0x30, which isn't a valid memory address. pc is the program counter, and we could use it to figure out where the program crashed, but we conveniently don't need to because a goroutine trace follows:

```
goroutine 58 [running]:
github.com/joeshaw/example.UpdateResponse(0xad3c60, 0xc420257300, 0xc4201f4200, 0x16, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /go/src/github.com/joeshaw/example/resp.go:108 +0x144
github.com/joeshaw/example.PrefetchLoop(0xacfd60, 0xc420395480, 0x13a52453c000, 0xad3c60, 0xc420257300)
        /go/src/github.com/joeshaw/example/resp.go:82 +0xc00
created by main.runServer
        /go/src/github.com/joeshaw/example/cmd/server/server.go:100 +0x7e0
```

The deepest stack frame, the one where the panic happened, is listed first. In this case, resp.go line 108.

The thing that catches my eye in this goroutine backtrace is the arguments to the UpdateResponse and PrefetchLoop functions, because the numbers don't match up with the function signatures:

```go
func UpdateResponse(c Client, id string, version int, resp *Response, data []byte) error

func PrefetchLoop(ctx context.Context, interval time.Duration, c Client)
```

UpdateResponse takes 5 arguments, but the panic shows it called with more than 10. PrefetchLoop takes 3, but the panic shows 5. What's going on?

To understand the argument values, we have to understand a little bit about the data structures underlying Go types. Russ Cox has two great blog posts on this: one on basic types, structs, pointers, strings, and slices, and another on interfaces, which describe how these are laid out in memory.

Both posts are essential reading for Go programmers, but to summarize:

- Strings are two words (a pointer to string data and a length)
- Slices are three words (a pointer to a backing array, a length, and a capacity)
- Interfaces are two words (a pointer to the type and a pointer to the value)

When a panic happens, the arguments we see in the output include the "exploded" values of strings, slices, and interfaces [...]
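To see the explosion concretely, here is a tiny program of my own (not from the post) that nil-derefs inside a function taking a string, a slice, and a pointer. On amd64 with a Go toolchain of that era, the update frame should show six hex words for its three logical arguments (2 for the string, 3 for the slice, 1 for the pointer); newer Go releases changed how arguments are printed, so treat the exact output as illustrative:

```go
package main

type resp struct{ n int }

// update takes a string (2 words), a slice (3 words), and a pointer
// (1 word), so its panic frame lists 6 hex words for 3 arguments.
//
//go:noinline
func update(id string, data []byte, r *resp) {
	r.n = len(data) // r is nil: panics here
}

func main() {
	update("abc123", []byte{1, 2, 3}, nil)
}
```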



Testing with os/exec and TestMain

2017-07-25T08:00:00-04:00

If you look at the tests for the Go standard library's os/exec package, you'll find a neat trick for how it tests execution:

```go
func helperCommandContext(t *testing.T, ctx context.Context, s ...string) (cmd *exec.Cmd) {
	testenv.MustHaveExec(t)
	cs := []string{"-test.run=TestHelperProcess", "--"}
	cs = append(cs, s...)
	if ctx != nil {
		cmd = exec.CommandContext(ctx, os.Args[0], cs...)
	} else {
		cmd = exec.Command(os.Args[0], cs...)
	}
	cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}
	return cmd
}

// TestHelperProcess isn't a real test.
//
// Some details elided for this blog post.
func TestHelperProcess(*testing.T) {
	if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
		return
	}
	defer os.Exit(0)

	args := os.Args
	for len(args) > 0 {
		if args[0] == "--" {
			args = args[1:]
			break
		}
		args = args[1:]
	}

	if len(args) == 0 {
		fmt.Fprintf(os.Stderr, "No command\n")
		os.Exit(2)
	}

	cmd, args := args[0], args[1:]
	switch cmd {
	case "echo":
		iargs := []interface{}{}
		for _, s := range args {
			iargs = append(iargs, s)
		}
		fmt.Println(iargs...)

	// etc...
	}
}
```

When you run go test, under the covers the toolchain compiles your test code into a temporary binary and runs it. (As an aside, passing -x to the go tool is a great way to learn what the toolchain is actually doing.) This helper function in exec_test.go sets a GO_WANT_HELPER_PROCESS environment variable and re-executes the test binary with a flag directing it to run only a specific test, TestHelperProcess.

Nate Finch wrote an excellent blog post in 2015 on this pattern in greater detail, and Mitchell Hashimoto's 2017 GopherCon talk also mentions this trick. I think this can be improved upon somewhat with the TestMain mechanism that was added in Go 1.4, however. Here it is in action:

```go
package myexec

import (
	"fmt"
	"os"
	"os/exec"
	"testing"
)

func TestMain(m *testing.M) {
	switch os.Getenv("GO_TEST_MODE") {
	case "":
		// Normal test mode
		os.Exit(m.Run())

	case "echo":
		iargs := []interface{}{}
		for _, s := range os.Args[1:] {
			iargs = append(iargs, s)
		}
		fmt.Println(iargs...)
	}
}

func TestEcho(t *testing.T) {
	cmd := exec.Command(os.Args[0], "hello", "world")
	cmd.Env = []string{"GO_TEST_MODE=echo"}

	output, err := cmd.Output()
	if err != nil {
		t.Errorf("echo: %v", err)
	}

	if g, e := string(output), "hello world\n"; g != e {
		t.Errorf("echo: want %q, got %q", e, g)
	}
}
```

We still set an environment variable and self-execute, but by moving the dispatching to TestMain we avoid the somewhat-hacky special test, which only runs when a certain environment variable is set and which needs to do extra command-line argument handling.

Update: Chris Hines wrote about this and other useful things you can do with TestMain in a post from 2015 that I did not know about!
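The dispatch also scales nicely to multiple helper behaviors. A hypothetical sketch, not from the post (the "fail" mode name, exit code, and messages are my inventions), showing a mode for testing how callers observe a child's stderr and exit status:

```go
package myexec

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"testing"
)

func TestMain(m *testing.M) {
	switch os.Getenv("GO_TEST_MODE") {
	case "":
		// Normal test mode
		os.Exit(m.Run())

	case "fail":
		// Misbehave like a broken child process.
		fmt.Fprintln(os.Stderr, "boom")
		os.Exit(3)
	}
}

func TestFailure(t *testing.T) {
	cmd := exec.Command(os.Args[0])
	cmd.Env = []string{"GO_TEST_MODE=fail"}

	// Output() captures stdout and stashes stderr in the *exec.ExitError.
	_, err := cmd.Output()
	exitErr, ok := err.(*exec.ExitError)
	if !ok {
		t.Fatalf("fail: expected *exec.ExitError, got %v", err)
	}
	if got := string(exitErr.Stderr); !strings.Contains(got, "boom") {
		t.Errorf("fail: stderr = %q, want it to mention boom", got)
	}
}
```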



Don't defer Close() on writable files

2017-06-12T08:00:00-04:00

Update: Another approach suggested by the inimitable Ben Johnson has been added to the end of the post.

Update 2: Discussion about fsync() added to the end of the post.

It's an idiom that quickly becomes rote to Go programmers: whenever you conjure up a value that implements the io.Closer interface, after checking for errors you immediately defer its Close() method. You see this most often when making HTTP requests:

```go
resp, err := http.Get("https://joeshaw.org")
if err != nil {
    return err
}
defer resp.Body.Close()
```

or opening files:

```go
f, err := os.Open("/home/joeshaw/notes.txt")
if err != nil {
    return err
}
defer f.Close()
```

But this idiom is actually harmful for writable files, because deferring a function call ignores its return value, and the Close() method can return errors. For writable files, Go programmers should avoid the defer idiom, or very infrequent, maddening bugs will occur.

Why would you get an error from Close() but not an earlier Write() call? To answer that we need to take a brief, high-level detour into the area of computer architecture. Generally speaking, as you move outward and away from your CPU, actions get orders of magnitude slower. Writing to a CPU register is very fast. Accessing system RAM is quite slow in comparison. Doing I/O on disks or networks is an eternity.

If every Write() call committed the data to the disk synchronously, the performance of our systems would be unusably slow. While synchronous writes are very important for certain types of software (like databases), most of the time they are overkill. The pathological case is writing to a file one byte at a time. Hard drives – brutish, mechanical devices – need to physically move a magnetic head to the position on the platter and possibly wait for a full platter revolution before the data can be persisted. SSDs, which store data in blocks and have a finite number of write cycles for each block, would quickly burn out as blocks are repeatedly written and overwritten.

Fortunately this doesn't happen, because multiple layers within hardware and software implement caching and write buffering. When you call Write(), your data is not immediately written to media. The operating system, storage controllers, and the media itself are all buffering the data in order to batch smaller writes together, organize the data optimally for storage on the medium, and decide when best to commit it. This turns our writes from slow, blocking synchronous operations into quick, asynchronous operations that don't directly touch the much slower I/O device. Writing a byte at a time is never the most efficient thing to do, but at least we are not wearing out our hardware if we do it.

Of course, the bytes do have to be committed to disk at some point. The operating system knows that when we close a file, we are finished with it and no subsequent write operations are going to happen. It also knows that closing the file is its last chance to tell us something went wrong.

On POSIX systems like Linux and macOS, closing a file is handled by the close system call. The BSD man page for close(2) talks about the errors it can return:

```
ERRORS
     The close() system call will fail if:

     [EBADF]            fildes is not a valid, active file descriptor.

     [EINTR]            Its execution was interrupted by a signal.

     [EIO]              A previously-uncommitted write(2) encountered an
                        input/output error.
```

EIO is exactly the error we are worried about. It means that we've lost data trying to save it to disk, and our Go programs should absolutely not return a nil error in that case.

The simplest way to solve this is simply not to use defer when writing files:

```go
func helloNotes() error {
    f, err := os.Create("/home/joeshaw/notes.txt")
    if err != nil {
        return err
    }

    if err = io.WriteString(f, "hello world"); err != nil {
        f.Close()
        return err
    }

    return f.Close()
}
```

This does mean additional boilerplate [...]
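The update crediting Ben Johnson is cut off above, but a common formulation of that approach (my reconstruction, so the details may differ from his) keeps the defer while using a named return value, so the deferred Close() can still report its error:

```go
package notes

import (
	"io"
	"os"
)

func helloNotes() (err error) {
	f, err := os.Create("/home/joeshaw/notes.txt")
	if err != nil {
		return err
	}
	// Runs on every return path. A write error wins; otherwise a
	// Close() error is promoted instead of being silently dropped.
	defer func() {
		if cerr := f.Close(); err == nil {
			err = cerr
		}
	}()

	_, err = io.WriteString(f, "hello world")
	return err
}
```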



Revisiting context and http.Handler for Go 1.7

2016-08-30T16:10:00-04:00

Go 1.7 was released earlier this month, and the thing I'm most excited about is the incorporation of the context package into the Go standard library. Previously it lived in the golang.org/x/net/context package. With the move, other packages within the standard library can now use it. The net package's Dialer and the os/exec package's Command can now utilize contexts for easy cancelation. More on this can be found in the Go 1.7 release notes.

Go 1.7 also brings contexts to the net/http package's Request type for both HTTP clients and servers. Last year I wrote a post about using context.Context with http.Handler when it lived outside the standard library, but Go 1.7 makes things much simpler and thankfully renders all of the approaches from that post obsolete.

A quick recap

I suggest reading my original post for more background, but one of the main uses of context.Context is to pass around request-scoped data: things like request IDs, authenticated user information, and other data useful for handlers and middleware to examine in the scope of a single HTTP request. In that post I examined three different approaches for incorporating context into requests. Since contexts are now attached to http.Request values, this is no longer necessary. As long as you're willing to require at least Go 1.7, it's now possible to use the standard http.Handler interface and common middleware patterns with context.Context!

The new approach

Recall that the http.Handler interface is defined as:

```go
type Handler interface {
    ServeHTTP(ResponseWriter, *Request)
}
```

Go 1.7 adds new context-related methods on the *http.Request type:

```go
func (r *Request) Context() context.Context
func (r *Request) WithContext(ctx context.Context) *Request
```

The Context method returns the current context associated with the request. The WithContext method creates a new Request value with the provided context.

Suppose we want each request to have an associated ID, pulling it from the X-Request-ID HTTP header if present and generating it if not. We might implement the context functions like this:

```go
type key int

const requestIDKey key = 0

func newContextWithRequestID(ctx context.Context, req *http.Request) context.Context {
    reqID := req.Header.Get("X-Request-ID")
    if reqID == "" {
        reqID = generateRandomID()
    }
    return context.WithValue(ctx, requestIDKey, reqID)
}

func requestIDFromContext(ctx context.Context) string {
    return ctx.Value(requestIDKey).(string)
}
```

We can then implement middleware that derives a new context with a request ID, creates a new Request value from it, and passes it on to the next handler in the chain:

```go
func middleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
        ctx := newContextWithRequestID(req.Context(), req)
        next.ServeHTTP(rw, req.WithContext(ctx))
    })
}
```

The final handler and any middleware lower in the chain have access to all the request-scoped data set in the middleware above them:

```go
func handler(rw http.ResponseWriter, req *http.Request) {
    reqID := requestIDFromContext(req.Context())
    fmt.Fprintf(rw, "Hello request ID %v\n", reqID)
}
```

And that's it! It's no longer necessary to implement custom context handlers, adapters to standard http.Handler implementations, or hackily wrap http.ResponseWriter. Everything you need is in the standard library, and right there on the *http.Request type.
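To run the post's snippets as a whole program, only two things are missing: a generateRandomID implementation (the post never defines one; the stub below is mine) and a main that wires the middleware around the handler. A minimal sketch, assuming the post's definitions live in the same package:

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
	"strconv"
)

// Stub for the random-ID generator the post assumes exists;
// a real one would use crypto/rand or a UUID library.
func generateRandomID() string {
	return strconv.FormatInt(rand.Int63(), 36)
}

func main() {
	// middleware and handler are the functions shown in the post above.
	mux := http.NewServeMux()
	mux.Handle("/", middleware(http.HandlerFunc(handler)))
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```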



Smaller Docker containers for Go apps

2015-07-31T14:00:00-04:00

At litl we use Docker images to package and deploy our Room for More services, using our Galaxy deployment platform. This week I spent some time looking into how we might reduce the size of our images and speed up container deployments.

Most of our services are in Go, and thanks to the fact that compiled Go binaries are mostly-statically linked by default, it's possible to create containers with very few files within. It's surely possible to use these techniques to create tighter containers for other languages that need more runtime support, but for this post I'm only focusing on Go apps.

The old way

We built images in a very traditional way, using a base image built on top of Ubuntu with Go 1.4.2 installed. For my examples I'll use something similar. Here's a Dockerfile:

```
FROM golang:1.4.2

EXPOSE 1717
RUN go get github.com/joeshaw/qotd

# Don't run network servers as root in Docker
USER nobody
CMD qotd
```

The golang:1.4.2 base image is built on top of Debian Jessie. Let's build this bad boy and see how big it is.

```
$ docker build -t qotd .
...
Successfully built ae761b93e656

$ docker images qotd
REPOSITORY    TAG       IMAGE ID        CREATED          VIRTUAL SIZE
qotd          latest    ae761b93e656    3 minutes ago    520.3 MB
```

Yikes. Half a gigabyte. Ok, what leads us to a container this size?

```
$ docker history qotd
IMAGE           CREATED BY                                        SIZE
ae761b93e656    /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "qotd"]     0 B
b77d0ca3c501    /bin/sh -c #(nop) USER [nobody]                   0 B
a4b2a01d3e42    /bin/sh -c go get github.com/joeshaw/qotd         3.021 MB
c24802660bfa    /bin/sh -c #(nop) EXPOSE 1717/tcp                 0 B
124e2127157f    /bin/sh -c #(nop) COPY file:56695ddefe9b0bd83     2.481 kB
69c177f0c117    /bin/sh -c #(nop) WORKDIR /go                     0 B
141b650c3281    /bin/sh -c #(nop) ENV PATH=/go/bin:/usr/src/g     0 B
8fb45e60e014    /bin/sh -c #(nop) ENV GOPATH=/go                  0 B
63e9d2557cd7    /bin/sh -c mkdir -p /go/src /go/bin && chmod      0 B
b279b4aae826    /bin/sh -c #(nop) ENV PATH=/usr/src/go/bin:/u     0 B
d86979befb72    /bin/sh -c cd /usr/src/go/src && ./make.bash      97.4 MB
8ddc08289e1a    /bin/sh -c curl -sSL https://golang.org/dl/go     39.69 MB
8d38711ccc0d    /bin/sh -c #(nop) ENV GOLANG_VERSION=1.4.2        0 B
0f5121dd42a6    /bin/sh -c apt-get update && apt-get install      88.32 MB
607e965985c1    /bin/sh -c apt-get update && apt-get install      122.3 MB
1ff9f26f09fb    /bin/sh -c apt-get update && apt-get install      44.36 MB
9a61b6b1315e    /bin/sh -c #(nop) CMD ["/bin/bash"]               0 B
902b87aaaec9    /bin/sh -c #(nop) ADD file:e1dd18493a216ecd0c     125.2 MB
```

This is not a very lean container, and it has a lot of intermediate layers. To reduce the size of our containers, we took two additional steps:

1. Every repo has a clean.sh script that is run inside the container after it is initially built. Here's part of a script for one of our Ubuntu-based Go images:

```
apt-get purge -y software-properties-common byobu curl git htop man unzip vim \
    python-dev python-pip python-virtualenv python-dev python-pip python-virtualenv \
    python2.7 python2.7 libpython2.7-stdlib:amd64 libpython2.7-minimal:amd64 \
    libgcc-4.8-dev:amd64 cpp-4.8 libruby1.9.1 perl-modules vim-runtime \
    vim-common vim-tiny libpython3.4-stdlib:amd64 python3.4-minimal xkb-data \
    xml-core libx11-data fonts-dejavu-core groff-base eject python3 locales \
    python-software-properties supervisor git-core make wget cmake gcc bzr mercurial \
    libglib2.0-0:amd64 libxml2:amd64

apt-get clean autoclean
apt-get autoremove -y

rm -rf /usr/local/go
rm -rf /usr/local/go1.*.linux-amd64.tar.gz
rm -rf /var/lib/{apt,dpkg,cache,log}/
rm -rf /var/{cache,log}
```

2. We run Jason Wilder's excellent docker-squash tool.
It is especially helpful when combined with the clea[...]
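The post is truncated here, but given its premise (mostly-statically linked Go binaries), the logical endpoint of this line of thinking is an image containing little more than the binary itself. A sketch of that idea under my own assumptions, not the post's actual conclusion:

```
# Hypothetical: build a fully static binary on the host first, e.g.
#   CGO_ENABLED=0 go build -o qotd github.com/joeshaw/qotd
# then ship it as the only file in the image. Add CA certificates
# if the app makes TLS connections.
FROM scratch
COPY qotd /qotd
EXPOSE 1717
CMD ["/qotd"]
```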



Go's net/context and http.Handler

2015-05-06T10:38:01-04:00

The approaches in this post are now obsolete thanks to Go 1.7, which adds the context package to the standard library and uses it in the net/http *http.Request type. The background info here may still be helpful, but I wrote a follow-up post that revisits things for Go 1.7 and beyond.

A summary of this post is available in Japanese thanks to @craftgear.

The golang.org/x/net/context package (hereafter referred to as net/context, although it's not yet in the standard library) is a wonderful tool for the Go programmer's toolkit. The blog post that introduced it shows how useful it is when dealing with external services and the need to cancel requests, set deadlines, and send along request-scoped key/value data.

The request-scoped key/value data also makes it very appealing as a means of passing data around through middleware and handlers in Go web servers. Most Go web frameworks have their own concept of context, although none yet use net/context directly. Questions about using net/context for this kind of server-side context keep popping up on the /r/golang subreddit and the Gophers Slack community. Having recently ported a fairly large API surface from Martini to http.ServeMux and net/context, I hope this post can answer those questions.

About http.Handler

The basic unit in Go's HTTP server is its http.Handler interface, which is defined as:

```go
type Handler interface {
    ServeHTTP(ResponseWriter, *Request)
}
```

http.ResponseWriter is another simple interface, and http.Request is a struct that contains data corresponding to the HTTP request: things like the URL, headers, body if any, etc. Notably, there's no way to pass anything like a context.Context here.

About context.Context

Much more detail about contexts can be found in the introductory blog post, but the main aspect I want to call attention to in this post is that contexts are derived from other contexts. Context values become arranged as a tree, and you only have access to values set on your context or one of its ancestor nodes.

For example, let's take context.Background() as the root of the tree, and derive a new context by attaching the content of the X-Request-ID HTTP header:

```go
type key int

const requestIDKey key = 0

func newContextWithRequestID(ctx context.Context, req *http.Request) context.Context {
    return context.WithValue(ctx, requestIDKey, req.Header.Get("X-Request-ID"))
}

func requestIDFromContext(ctx context.Context) string {
    return ctx.Value(requestIDKey).(string)
}

ctx := context.Background()
ctx = newContextWithRequestID(ctx, req)
```

This derived context is the one we would then pass to the next layer of the system. Perhaps that layer would create its own contexts with values, deadlines, or timeouts, or it could extract values we previously stored.

Approaches

These approaches are now obsolete as of Go 1.7. Read my follow-up post that revisits this topic for Go 1.7 and beyond.

So, without direct support for net/context in the standard library, we have to find another way to get a context.Context into our handlers. There are three basic approaches:

1. Use a global request-to-context mapping
2. Create an http.ResponseWriter wrapper struct
3. Create your own handler types

Let's examine each.

Global request-to-context mapping

In this approach we create a global map of requests to contexts, and wrap our handlers in a middleware that handles the lifetime of the context associated with a request. This is the approach taken by Gorilla's context package, although with its own context type rather than net/context.

Because every HTTP request is processed in its own goroutine, and Go's maps are not safe for concurrent access (for performance reasons), it is crucial that we protect all map accesses with a sync.Mutex. This also introduces lock contention among concurrently processed requests. Depending on your application and wo[...]
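The truncated paragraph describes the mechanics of that mapping; a minimal sketch of the pattern (my own reconstruction, not Gorilla's actual code) looks like this:

```go
package ctxmap

import (
	"net/http"
	"sync"

	"golang.org/x/net/context"
)

var (
	mu       sync.Mutex
	contexts = map[*http.Request]context.Context{}
)

func set(req *http.Request, ctx context.Context) {
	mu.Lock()
	defer mu.Unlock()
	contexts[req] = ctx
}

// Get returns the context associated with req, for handlers to call.
func Get(req *http.Request) context.Context {
	mu.Lock()
	defer mu.Unlock()
	return contexts[req]
}

// Middleware owns the context's lifetime: create it before the
// handler runs, delete it afterward so the map doesn't leak.
func Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
		set(req, context.Background())
		defer func() {
			mu.Lock()
			delete(contexts, req)
			mu.Unlock()
		}()
		next.ServeHTTP(rw, req)
	})
}
```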



Contributing to GitHub projects

2015-04-20T11:20:28-04:00

I often see people asking how to contribute to an open source project on GitHub. Some are new programmers, some may be new to open source, others aren't programmers but want to make improvements to documentation or other parts of a project they use every day.

Using GitHub means you'll need to use Git, and that means using the command line. This post gives a gentle introduction using the git command-line tool and a companion tool for GitHub called hub.

Workflow

The basic workflow for contributing to a project on GitHub is:

1. Clone the project you want to work on
2. Fork the project you want to work on
3. Create a feature branch to do your own work in
4. Commit your changes to your feature branch
5. Push your feature branch to your fork on GitHub
6. Send a pull request for your branch on your fork

Clone the project you want to work on

```
$ hub clone pydata/pandas
```

(Equivalent to git clone https://github.com/pydata/pandas.git)

This clones the project from the server onto your local machine. When working in git you make changes to your local copy of the repository.

Git has a concept of remotes, which are, well, remote copies of the repository. When you clone a new project, a remote called origin is automatically created that points to the repository you provide in the command line above; in this case, pydata/pandas on GitHub. To upload your changes back to the main repository, you push to the remote. Between when you cloned and now, changes may have been made to the upstream remote repository. To get those changes, you pull from the remote.

At this point you will have a pandas directory on your machine. All of the remaining steps take place inside it, so change into it now:

```
$ cd pandas
```

Fork the project you want to work on

The easiest way to do this is with hub:

```
$ hub fork
```

This does a couple of things. It creates a fork of pandas in your GitHub account, and it establishes a new remote in your local repository with the name of your GitHub username. In my case I now have two remotes: origin, which points to the main upstream repository; and joeshaw, which points to my forked repository. We'll be pushing to my fork.

Create a feature branch to do your own work in

This creates a place to do your work in that is separate from the main code:

```
$ git checkout -b doc-work
```

doc-work is what I'm choosing to name this branch. You can name it whatever you like. Hyphens are idiomatic. Now make whatever changes you want for this project.

Commit your changes to your feature branch

If you are creating new files, you will need to explicitly add them to the to-be-committed list (also called the index, or staging area):

```
$ git add file1.md file2.md etc
```

If you are just editing existing files, you can add them all in one batch:

```
$ git add -u
```

Next you need to commit the changes:

```
$ git commit
```

This will bring up an editor where you type in your commit message. The convention is usually to type a short summary in the first line (50-60 characters max), then a blank line, then additional details if necessary.

Push your feature branch to your fork on GitHub

Ok, remember that your fork is a remote named after your GitHub username. In my case, joeshaw.

```
$ git push joeshaw doc-work
```

This pushes only the doc-work branch to the joeshaw remote. Now your work is publicly visible to anyone on your fork.

Send a pull request for your branch on your fork

You can do this either on the web site or using the hub tool:

```
$ hub pull-request
```

This will open your editor again. If you only had one commit on your branch, the message for the pull request will be the same as the commit message.

This might be good enough, but you may want to elaborate on the purpose of the pull request. Like commits, the first line is a summary of the pull request and the other lines are the body of the PR. In general you will be requesting a pull from your current branch (in this case [...]



Terrible Vagrant/Virtualbox performance on Mac OS X

2011-09-30T11:00:00-04:00

Update March 2016: There's a much easier way to enable the host I/O cache from the command line, but it only works for existing VMs. See the update below.

I recently started using Vagrant to test our auto-provisioning of servers with Puppet. Having a simple-yet-configurable system for starting up and accessing headless virtual machines really makes this a much simpler solution than VMware Fusion. (Although I wish Vagrant had a way to take and roll back VM snapshots.)

Unfortunately, as soon as I tried to really do anything in the VM, my Mac would completely bog down. Eventually the entire UI would stop updating. In Activity Monitor, the dreaded kernel_task was taking 100% of one CPU, and VBoxHeadless was taking most of another. Things would eventually free up whenever the task in the VM (usually apt-get install or puppet apply) crashed with a segmentation fault.

Digging into this, I found an ominous message in the VirtualBox logs:

```
AIOMgr: Host limits number of active IO requests to 16. Expect a performance impact.
```

Yeah, no kidding. I tracked this message down to the "Use host I/O cache" setting being off on the SATA Controller in the box. (This is a per-VM setting, and I am using the stock Vagrant "lucid64" box, so the exact setting may be somewhere else for you. It's probably a good idea to turn this setting on for all storage controllers.)

When it comes to Vagrant VMs, this setting in the VirtualBox UI is not very helpful, though, because Vagrant brings up new VMs automatically and without any UI. To get this to work with the Vagrant workflow, you have to do the following hacky steps:

1. Turn off any IO-heavy provisioning in your Vagrantfile
2. vagrant up a new VM
3. vagrant halt the VM
4. Open the VM in the VirtualBox UI and change the setting
5. Re-enable the provisioning in your Vagrantfile
6. vagrant up again

This is not going to work if you have to bring up new VMs often. Fortunately this setting is easy to tweak in the base box. Open up ~/.vagrant.d/boxes/base/box.ovf and find the StorageController node. You'll see an attribute HostIOCache="false". Change that value to true.

Lastly, you'll have to update the SHA1 hash of the .ovf file in ~/.vagrant.d/boxes/base/box.mf. Get the new hash by running:

```
$ openssl dgst -sha1 ~/.vagrant.d/boxes/base/box.ovf
```

and replace the old value in box.mf with it.

That's it. All subsequent VMs you create with vagrant up will now have the right setting.

Update

Thanks to this comment on a Vagrant bug report, you can enable the host cache more simply from the command line for an existing VM:

```
$ VBoxManage storagectl <vmname> --name <controller> --hostiocache on
```

where <vmname> is your Vagrant VM name, which you can get from:

```
$ VBoxManage list vms
```

and <controller> is probably "SATA Controller". The VM must be halted for this to work.

You can add a section to your Vagrantfile to do this when new VMs are created:

```
config.vm.provider "virtualbox" do |v|
  v.customize [
    "storagectl", :id,
    "--name", "SATA Controller",
    "--hostiocache", "on"
  ]
end
```

And for further reading, here is the relevant section in the VirtualBox manual that goes into more detail about the pros and cons of host I/O caching.



Linux input ecosystem

2010-10-01T15:27:24-04:00

Over the past couple of days, I've been trying to figure out how input in Linux works on modern systems. There are lots of small pieces at various levels, and it's hard to understand how they all interact. It doesn't help that quite a bit has changed over the past couple of years as HAL – which I helped write – has been giving way to udev, and existing literature is largely out of date. This is my attempt at understanding how things work today, in the Ubuntu Lucid release.

Kernel

In the Linux kernel's input system, there are two pieces: the device driver and the event driver. The device driver talks to the hardware, obviously. Today, for most USB devices this is handled by the usbhid driver. The event drivers handle how to expose the events generated by the device driver to userspace. Today this is primarily done through evdev, which creates character devices (typically named /dev/input/eventN) and communicates with them through struct input_event messages. See include/linux/input.h for its definition. A great tool for getting information about evdev devices and events is evtest. A somewhat outdated but still relevant description of the kernel input system can be found in the kernel's Documentation/input/input.txt file.

udev

When a device is connected, the kernel creates an entry in sysfs for it and generates a hotplug event. That hotplug event is processed by udev, which applies some policy, attaches additional properties to the device, and ultimately creates a device node for you somewhere in /dev. For input devices, the rules in /lib/udev/rules.d/60-persistent-input.rules are executed. Among the things these rules do is run the /lib/udev/input_id tool, which queries the capabilities of the device from its sysfs node and sets environment variables like ID_INPUT_KEYBOARD, ID_INPUT_TOUCHPAD, etc. in the udev database. For more information on input_id see the original announcement email to the hotplug list.

X

X has a udev config backend which queries udev for the various input devices. It does this at startup and also watches for hotplugged devices. X looks at the different ID_INPUT_* properties to determine whether a device is a keyboard, a mouse, a touchpad, a joystick, or something else. This information can be used in /usr/lib/X11/xorg.conf.d files in the form of MatchIsPointer, MatchIsTouchpad, MatchIsJoystick, etc. in InputClass sections to decide whether to apply configuration to a given device.

Xorg has a handful of its own drivers to handle input devices, including evdev, synaptics, and joystick. And here is where things start to get confusing. Linux has this great generic event interface in evdev, which means that very few drivers are needed to interact with hardware, since they're not speaking device-specific protocols. Of the few drivers needed on Linux, nearly all of them speak evdev, including the three I listed above.

The evdev driver provides basic keyboard and mouse functionality, speaking – obviously – evdev through the /dev/input/eventN devices. It also handles things like the lid and power switches. This is the basic, generic input driver for Xorg on Linux.

The synaptics driver is the most confusing of all. It also speaks evdev to the kernel. On Linux it does not talk to the hardware directly, and is in no way Synaptics(tm) hardware-specific. The synaptics driver is simply a separate driver from evdev which adds a lot of features expected of touchpad hardware, for example two-finger scrolling.
It should probably be renamed the “touchpad” module, except that on non-Linux OSes it can still speak the Synaptics protocol. The joystick driver similarly handles joysticky things, but speaks evdev to the kernel rather than some device-specific protocol. X only has concepts of keyboards and pointers, the latter of which includes mic[...]
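To make the evdev wire format concrete, here's a small Go sketch of my own (not from the post) that reads struct input_event records from a device node. The struct layout assumes amd64 Linux, where struct timeval is two 64-bit words; the device path is illustrative and reading it typically requires elevated permissions:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"os"
)

// inputEvent mirrors struct input_event from <linux/input.h> on
// amd64 Linux: struct timeval (two int64s), then type, code, value.
type inputEvent struct {
	Sec   int64
	Usec  int64
	Type  uint16 // e.g. EV_KEY, EV_REL
	Code  uint16 // e.g. KEY_A, REL_X
	Value int32  // press/release, relative delta, etc.
}

func main() {
	f, err := os.Open("/dev/input/event0") // node number varies by machine
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	for {
		var ev inputEvent
		// Events arrive as fixed-size, little-endian records on amd64.
		if err := binary.Read(f, binary.LittleEndian, &ev); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("type=%#x code=%#x value=%d\n", ev.Type, ev.Code, ev.Value)
	}
}
```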



AVCHD to MP4/H.264/AAC conversion

2010-04-10T10:28:03-04:00

For posterity:

I have a Canon HF200 HD video camera, which records to AVCHD format. AVCHD is H.264 encoded video and AC-3 encoded audio in an MPEG-2 Transport Stream (m2ts, mts) container. This format is not supported by Aperture 3, which I use to store my video.

With Blizzard’s help, I figured out an ffmpeg command-line to convert to H.264 encoded video and AAC encoded audio in an MPEG-4 (mp4) container. This is supported by Aperture 3 and other Quicktime apps.

$ ffmpeg -sameq -ab 256k -i input-file.m2ts -s hd1080 output-file.mp4 -acodec aac

Command-line order is important, which is infuriating. If you move the -s or -ab arguments, they may not work. Add -deinterlace if the source videos are interlaced, which mine were originally until I turned it off. The only downside to this is that it generates huge output files, on the order of 4-5x greater than the input file.

Update, 28 April 2010: Alexander Wauck emailed me to say that re-encoding the video isn’t necessary, and that the existing H.264 video could be moved from the m2ts container to the mp4 container with a command-line like this:

$ ffmpeg -i input-file.m2ts -ab 256k -vcodec copy -acodec aac output-file.mp4

And he’s right… as long as you don’t need to deinterlace the video. With the whatever-random-ffmpeg-trunk checkout I have, adding -deinterlace to the command-line segfaults. I actually had tried -vcodec copy early in my experiments but abandoned it after I found that it didn’t deinterlace. I had forgotten to try it again after I moved past my older interlaced videos. Thanks Alex!
