
Another Day In The Code Mines

Yet another infrequently-updated blog, this one about the daily excitement of working in the software industry.

Updated: 2017-09-25T09:00:03.728-07:00


Follow up: LockState security failures


I wrote a blog post last month on what your IoT startup can learn from the LockState debacle. In the intervening weeks, not much new information has come to light about the specifics of the update failure, and it seems from their public statements that LockState thinks it's better if they don't do any kind of public postmortem on their process failures. That's too bad for the rest of us, and for the Internet of Things industry in general - if you can't learn from others' mistakes, you (and your customers) might have to learn from your own.

However, I did see a couple of interesting articles in the news related to LockState. The first one is from a site called, and it takes a bit more of a business-focused look at things, as you might have expected from the site name. Rather than looking at the technical failures that allowed the incident to happen, they take LockState to task for their response after the fact. There's good stuff there, about how it's important to help your customers understand possible failure modes, how you should put the software update process under their control, and how to properly respond to an incident via social media.

And on The Parallax, a security news site, I found this article, which tells us about another major failure on the part of LockState - they apparently have a default 4-digit unlock code set for all of their locks from the factory, and also an 8-digit "programming code", which gives you total control over the lock - you can change the entry codes, reset the lock, disable it, and disconnect it from WiFi, among other things.

Okay, I really shouldn't be surprised by this at this point, I guess - these LockState guys are obviously flying by the seat of their pants in terms of security practice, but seriously? Every single lock comes pre-programmed with the same unlock code and the same master programming code?

Maybe I'm expecting too much, but if a cheap $2.99 combination lock from the hardware store comes with a slip of paper in the package with its combination printed on it, maybe the $600 internet-connected smart lock can do the same? Or hell, use a laser to mark the master combination on the inside of the case, so it's not easily lost, and anyone with the key and physical access can use the code to reset the lock, in the (rare) case that's necessary.

Or, for that matter - if you must have a default security code for your device (because your manufacturing line isn't set up for per-unit customization, maybe?), then make it part of the setup process to change the damn code, and don't let your users get into a state where they think your product is set up, but they haven't changed the code.
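To make that concrete, here's a tiny hypothetical sketch (all names and codes here are invented, not LockState's actual values) of a setup flow that refuses to report success while either factory default is still in place:

```python
# Hypothetical sketch: a device setup flow that won't count as complete while
# the factory-default codes are still set. All names and values are invented.
FACTORY_UNLOCK_CODE = "1234"
FACTORY_PROGRAMMING_CODE = "12345678"

def setup_complete(unlock_code: str, programming_code: str) -> bool:
    """Setup only counts as finished once both default codes are changed."""
    if unlock_code == FACTORY_UNLOCK_CODE:
        return False
    if programming_code == FACTORY_PROGRAMMING_CODE:
        return False
    return True

print(setup_complete("1234", "12345678"))   # False - still on factory defaults
print(setup_complete("8642", "31415926"))   # True - user picked their own codes
```

The point isn't the specific check, it's that the "setup done" state is unreachable until the defaults are gone.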

It's easy to fall into the trap of saying that the user should be more aware of these things, and should know that they need to change the default code. But your customers are not typically security experts, and you (or at least some of your employees) should be. You need to be looking out for them, because they aren't going to be doing a threat analysis while installing their new IoT bauble.

A short rant on XML - the Good, the Bad, and the Ugly


[editor's note: This blog post has been in "Drafts" for 11 years. In the spirit of just getting stuff out there, I'm publishing it basically as-is. Look for a follow-up blog post next week with some additional observations on structured data transfer from the 21st century]

So, let's see if I can keep myself to less than ten pages of text this time... XML is the eXtensible Markup Language. It's closely related to both HTML, the markup language used to make the World Wide Web, and SGML, a document format that you've probably never dealt with unless you're either a government contractor, or you used the Internet back in the days before the Web. For the pedants out there, I do know that HTML is actually an SGML "application" and that XML is a proper subset of SGML. Let's not get caught up in the petty details at this point.

XML is used for a variety of different tasks these days, but the most common by far is as a kind of "neutral" format for exchanging structured data between different applications. To keep this short and simple, I'm going to look at XML strictly from the perspective of a data storage and interchange format.

The good

Unicode support

XML documents can be encoded using the Unicode character encoding, which means that nearly any written character in any language can be easily represented in an XML document.

Uniform hierarchical structure

XML defines a simple tree structure for all the elements in a file - there's one root element, it has zero or more children, which each have zero or more children, ad infinitum. All elements must have an open and close tag, and elements can't overlap. This simple structure makes it relatively easy to parse XML documents.

Human-readable (more or less)

XML is a text format, so it's possible to read and edit an XML document "by hand" in a text editor. This is often useful when you're learning the format of an XML document in order to write a program to read or translate it.
Actually writing or modifying XML documents in a text editor can be incredibly tedious, though a syntax-coloring editor makes it easier.

Widely supported

Modern languages like C# and Java have XML support "built in" in their standard libraries. Most other languages have well-supported free libraries for working with XML. Chances are, whatever messed-up environment you have to work in, there's an XML reader/writer library available.

The bad

Legacy encoding support

XML documents can also be encoded in whatever wacky character set your nasty legacy system uses. You can put a simple encoding="Ancient-Elbonian-EBCDIC" attribute in the XML declaration element, and you can write well-formed XML documents in your favorite character encoding. You probably shouldn't expect that anyone else will actually be able to read it, though.

Strictly hierarchical format

Not every data set you might want to interchange between two systems is structured hierarchically. In particular, representing a relational database or an in-memory graph of objects is problematic in XML. A number of approaches are used to get around this issue, but they're all outside the scope of standardized XML (obviously), and different systems tend to solve this problem in different ways, neatly turning the "standardized interchange format for data" into yet another proprietary format, which is only readable by the software that created it.

XML is verbose

A typical XML document can be 30% markup, sometimes more. This makes it larger than desired in many cases. There have been several attempts to define a "binary XML" format (most recently by the W3C), but they really haven't caught on yet. For most applications where size or transmission speed is an issue, you probably ought to look into compressing the XML document using a standard compression algorithm (gzip, or zlib, or whatever), then decompressing it on the other end.
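As a rough illustration of how well a generic compressor does on XML's repetitive markup, here's a small Python sketch using the standard library (the exact ratio will vary by document):

```python
import gzip

# A small, repetitive XML document - the markup makes up most of the bytes.
xml = "<contacts>" + "".join(
    f"<contact><name>Person {i}</name><phone>555-010{i}</phone></contact>"
    for i in range(100)
) + "</contacts>"

raw = xml.encode("utf-8")
packed = gzip.compress(raw)

# The compressed copy is a small fraction of the original size.
print(len(raw), len(packed))

# And the round trip is lossless.
unpacked = gzip.decompress(packed).decode("utf-8")
assert unpacked == xml
```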
You'll save quite a bit more that way than by trying to make the XML itself less wordy.

Some XML processing libraries are extremely memory-intensive

There are two basic approaches to reading an XML document. You can r[...]
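For what it's worth, the two reading approaches the post starts to describe can be sketched with Python's standard library: a tree-style ("DOM-style") parse that loads the whole document into memory, and a streaming ("SAX-style") parse that processes elements as they arrive and keeps memory bounded:

```python
import io
import xml.etree.ElementTree as ET

xml = ("<library>"
       "<book id='1'><title>Dune</title></book>"
       "<book id='2'><title>Contact</title></book>"
       "</library>")

# Approach 1: tree-style - parse the whole document into an in-memory tree.
root = ET.fromstring(xml)
titles = [b.findtext("title") for b in root.findall("book")]
print(titles)  # ['Dune', 'Contact']

# Approach 2: streaming - handle each element as its end tag is parsed,
# then discard it, so memory use stays bounded even for huge documents.
count = 0
for event, elem in ET.iterparse(io.BytesIO(xml.encode()), events=("end",)):
    if elem.tag == "book":
        count += 1
        elem.clear()  # drop children we no longer need
print(count)  # 2
```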

Post-trip review: Telestial “International Travel SIM”


For our recent trip to Europe, Yvette and I tried the seasoned-traveler technique of swapping out the SIM cards in our phones, rather than paying AT&T’s fairly extortionate international roaming fees. It was an interesting experience, and we learned a few things along the way, which I’ll share here.

We used Telestial, which is apparently Jersey Telecom. Not New Jersey: JT is headquartered in the Jersey Isles, off the coast of Britain. JT/Telestial's claim to fame is really their wide roaming capability. Their standard “I’m traveling to Europe” SIM is good pretty much across all of Europe. It definitely worked just fine in Germany, Denmark, Sweden, Austria, Estonia, Russia, and Finland. They claim a few dozen more countries, and I don’t doubt it works equally well in those.

Why not just get a local SIM in the country you’re visiting? Isn’t that cheaper?

People who frequently travel overseas will often just pop into a phone retailer, or buy a SIM from a kiosk in the airport. Based on the comparison shopping I did while we were traveling, this is definitely cheaper than the Telestial solution. However, it’s not at all clear in many cases how well an Austrian SIM is going to work in Finland (for example), and just how much you’ll be paying for international roaming.

So, I think if you’re traveling to just one country (especially one with really cheap mobile phone service costs), buying a local SIM is definitely the way to go. I didn’t really want to keep updating people with new contact numbers every other day as we switched countries. I might look into one of the “virtual phone number” solutions, like Google Voice, for the next multi-country trip. Being able to give people one number, and still roam internationally, seems like it’d be useful, but I don’t know what the restrictions are.

What does setup look like?

First of all, you need a compatible phone. Not all American mobile phones will work in Europe.
You can check the technical specs for the particular phone model you have, to see which radio frequencies it supports. Alternatively, you can buy any iPhone more recent than the iPhone 4s, all of which are “world phones”, as far as I know. Verizon and Sprint still sell some phones that are CDMA-only, which means they can’t work anywhere but the USA, but most CDMA smartphones also have a GSM SIM slot, so it’s worth taking a look, even if you’re on Verizon.

Secondly, your phone needs to not be “activation locked” to a particular carrier. Most phones sold on contract in the US are set up this way, so you can’t just default on the contract and get a phone you can use on another network. Ideally, your phone would get unlocked automatically at the end of the contract, but US law doesn’t require this, so you’ll need to request an unlock from your carrier. AT&T has made this process a lot easier since the last time I tried to do it, which is great, because I forgot to check that Yvette’s phone was unlocked before we left. I did manage to make the unlock request from the other phone while we were in a taxi on the freeway in Austria, which is a testament to how easy this stuff is these days, I guess.

Assuming you have a compatible phone, the process is basically: power off the phone, pop out the SIM tray with a paper clip, swap the SIMs, turn on the phone, and wait. For the Telestial SIM, you probably really want to activate it and pre-pay for some amount of credit before your trip, which is easy to do on their website.

What kind of plan did you get?

We had a pre-paid fixed allowance for calls and text, and 2GB of data for each phone. Calls were $0.35 a minute, and texts were $0.35 each. Pre-loading $30 on the phone was enough for an hour and a half of phone calls, or a fairly large number of texts. When we had data coverage, we could use iMessage or WhatsApp for basically free text messages.
I don't know whether Voice Over LTE actually worked, and if it avoided the per-minute charge, since we just didn't call that m[...]
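Just to sanity-check the prepaid math quoted above:

```python
# Quick sanity check on the prepaid credit arithmetic: $30 of credit at
# $0.35 per minute really is roughly "an hour and a half" of calls.
credit = 30.00
per_minute = 0.35
minutes = credit / per_minute
print(round(minutes), round(minutes / 60, 1))  # 86 1.4
```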

A brief history of the Future


Lessons learned from API design under pressure

It was August of 2009, and the WebOS project was in a bit of trouble. The decision had been made to change the underlying structure of the OS from using a mixture of JavaScript for applications and Java for the system services, to using JavaScript for both the UI and the system services. This decision was made for a variety of reasons, primarily in a bid to simplify and unify the programming models used for application development and services development. It was hoped that a more-familiar service development environment would be helpful in getting more developers on board with the platform. It was also hoped that by having only one virtual machine running, we'd save on memory.

Initially, this was all built on top of a customized standalone V8 JavaScript interpreter, with a few hooks to system services. Eventually, we migrated over to Node, when Node was far enough along that it looked like an obvious win, and after working with the Node developers to improve performance on our memory-limited platform.

The problem with going from Java to JavaScript

As you probably already know, despite the similarity in the names, Java and JavaScript are very different languages. In fact, the superficial similarities in syntax were only making things harder for the application and system services authors trying to translate things from Java to JavaScript.

In particular, the Java developers were used to a multi-threaded environment, where they could spin off threads to do background tasks, and have them call blocking APIs in a straightforward fashion. Transitioning from that to JavaScript's single-threaded, events-and-callbacks model was proving to be quite a challenge.
Our code was rapidly starting to look like a bunch of "callback hell" spaghetti.

The proposed solution

As one of the most recent additions to the architecture group, I was asked to look into this problem, and see if there was something we could do to make it easier for the application and service developers to write readable and maintainable code. I went away and did some research, and came back with an idea, which we called a Future. The Future was a construct based on the idea of a value that would be available "in the future". You could write your code in a more-or-less straight-line fashion, and as soon as the data was available, it'd flow right through the queued operations.

If you're an experienced JavaScript developer, you might be thinking at this point "this sounds a lot like a Promise", and you'd be right. So, why didn't we use Promises? At this point in history, the Promises/A spec was still in active discussion amongst the CommonJS folks, and it was not at all obvious that it'd become a popular standard (in fact, it took Promises/A+ for that to happen). The Node core had just removed their own Promises API in favor of a callback-based API (this would have been around Node v0.1, I think).

The design of the Future

Future was based on ideas from Smalltalk (promise), Java (future/promise), Dojo (deferred), and a number of other places. The primary design goals were:

- Make it easy to read through a piece of asynchronous code, and understand how it was supposed to flow, in the "happy path" case
- Simplify error handling - in particular, make it easy to bail out of an operation if errors occur along the way
- To the extent possible, use Future for all asynchronous control flow

You can see the code for Future, because it got released along with the rest of the WebOS Foundations library as open source for the Open WebOS project.

My design brief looked something like this:

A Future is an object with these properties and methods:

.result - The current value of the Future.
If the future does not yet have a value, accessing the result property raises an exception. Setting the result of the Future causes it to execute the next "then" function in the Future's pipeline.

.then(next, error) - Adds a stage to[...]
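The original Future was JavaScript, but the design brief above can be sketched in a few lines of Python (a simplified toy illustrating the ".then" pipeline and the error-bailout behavior, not the WebOS implementation):

```python
# Toy sketch of the Future design brief: a .result property whose setter
# drives a pipeline of (next, error) stages, and errors that skip ahead
# until a stage supplies an error handler.
class Future:
    def __init__(self):
        self._stages = []        # queued (next_fn, error_fn) pairs
        self._value = None
        self._has_value = False
        self._error = None

    def then(self, next_fn, error_fn=None):
        self._stages.append((next_fn, error_fn))
        return self

    @property
    def result(self):
        if not self._has_value:
            raise RuntimeError("Future has no result yet")
        if self._error is not None:
            raise self._error
        return self._value

    @result.setter
    def result(self, value):
        # Setting the result starts the pipeline running.
        self._value = value
        self._has_value = True
        self._run()

    def _run(self):
        while self._stages:
            next_fn, error_fn = self._stages.pop(0)
            if self._error is None:
                try:
                    self._value = next_fn(self._value)
                except Exception as e:
                    self._error = e          # bail out of the happy path
            elif error_fn is not None:
                try:
                    self._value = error_fn(self._error)
                    self._error = None       # error handled, resume
                except Exception as e:
                    self._error = e

# Happy path reads in straight-line order:
f = Future().then(lambda v: v + 1).then(lambda v: v * 2)
f.result = 10
print(f.result)  # 22
```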

What your Internet Of Things startup can learn from LockState


The company LockState has been in the news recently for sending an over-the-air update to one of their smart lock products which "bricked" over 500 of these locks. This is a pretty spectacular failure on their part, and it's the kind of thing that ought to be impossible in any well-run software development organization, so I think it's worthwhile to go through a couple of the common-sense processes that you can use to avoid being the next company in the news for something like this.

The first couple of these are specific to the problem of shipping the wrong firmware to a particular model, but the others apply equally well to an update that's for the right target, but is fundamentally broken, which is probably the more likely scenario for most folks.

Mark your updates with the product they go to

The root cause of this incident was apparently that LockState had produced an update intended for one model of their smart locks, and somehow managed to send it to a bunch of locks that were a different model. Once the update was installed, those locks were unable to connect to the Internet (one presumes they don't even boot), and so there was no way for them to update again to replace the botched update.

It's trivially easy to avoid this issue, using a variety of different techniques. Something as simple as using a different file name for firmware for different devices would suffice. If not that, you can have a "magic number" at a known offset in the file, or a digital signature that uses a key unique to the device model. Digitally-signed firmware updates are a good idea for a variety of other reasons, especially for a security product, so I'm not sure how they managed to screw this up.

Have an automated build & deployment process

Even if you've got a good system for marking updates as being for a particular device, that doesn't help if there are manual steps that require someone to explicitly set them.
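The "magic number at a known offset" idea from above can be sketched in a few lines. This is a hypothetical header layout (the magic value and field sizes are invented for illustration), not anything LockState actually shipped:

```python
import struct

# Hypothetical firmware header: 4-byte magic, 2-byte model ID, 2-byte version.
# All values here are invented for the sketch.
MAGIC = 0x4C4F434B  # "LOCK" in ASCII
MY_MODEL = 7000     # the model this device's bootloader accepts

def build_header(model: int, version: int) -> bytes:
    return struct.pack(">IHH", MAGIC, model, version)

def update_is_for_this_device(image: bytes) -> bool:
    """Refuse to install anything not built for this exact model."""
    if len(image) < 8:
        return False
    magic, model, _version = struct.unpack(">IHH", image[:8])
    return magic == MAGIC and model == MY_MODEL

print(update_is_for_this_device(build_header(7000, 3)))  # True
print(update_is_for_this_device(build_header(6000, 3)))  # False - wrong model
```

A real bootloader would check a digital signature as well, but even this trivial check would have stopped the wrong-model update from installing.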
You should have a "one button" build process which allows you to say "I want to build a firmware update for *this model* of our device", and at the end you get a build that was compiled for the right device, and is marked as being for that device.

Have a staging environment

Every remote firmware update process should have the ability to be tested internally via the same process end-users would use, but from a staging environment. Ideally, this staging environment would be as similar as possible to what customers use, but available in-company only. Installing the bad update on a single lock in-house before shipping it to customers would have helped LockState avoid bricking any customer devices. And, again - this process should be automated.

Do customer rollouts incrementally

LockState might have actually done this, since they say only 10% of their locks were affected by this problem. Or they possibly just got lucky, and their update process is just naturally slow. Or maybe this model doesn't make up much of the installed base. In any case, rolling out updates to a small fraction of the installed base, then gradually ramping it up over time, is a great way to ensure that you don't inconvenience a huge slice of your user base all at once.

Have good telemetry built into your product

Tying into the previous point, wouldn't it be great if you could measure the percentage of systems that were successfully updating, and automatically throttle the update process based on that feedback? This eliminates another potential human-in-the-loop situation, and could have reduced the damage in this case by detecting automatically that the updated systems were not coming back up properly.

Have an easy way to revert firmware updates

Not everybody has the storage budget for this, but these days, it seems like practically every embedded system is running Linux off of a massive Flash storage device.
If you can, have two operating system partitions, one for the "current" firmware, and one for the "previous" firmware. At startup, have a key combinat[...]
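The incremental-rollout-with-telemetry idea from earlier can be sketched as a simple gating function (the thresholds and growth rate here are invented for illustration):

```python
# Hypothetical sketch of telemetry-gated incremental rollout: widen the
# rollout only while updated devices keep reporting successful updates.
def next_rollout_fraction(current: float, successes: int, attempts: int,
                          min_success_rate: float = 0.95) -> float:
    if attempts == 0:
        return current                  # no data yet - hold steady
    rate = successes / attempts
    if rate < min_success_rate:
        return 0.0                      # something's wrong - halt the rollout
    return min(1.0, current * 2)        # healthy - double the audience

frac = 0.01
frac = next_rollout_fraction(frac, successes=98, attempts=100)  # grows to 0.02
frac = next_rollout_fraction(frac, successes=40, attempts=100)  # halted
print(frac)  # 0.0
```

With a gate like this, a bad update stops itself after the first small wave of failures instead of reaching the whole fleet.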

Why I hate the MacBook Pro Touchbar


The Touchbar that Apple added to the MacBook Pro is one of those relatively rare instances in which they have apparently struck the wrong balance between form and function. The line between “elegant design” and “design for its own sake” is one that they frequently dance on the edge of, and occasionally fall over. But they get it right often enough that it’s worth sitting with things for a while to see if the initial gut reaction is really accurate.

I hated the Touchbar pretty much right away, and I generally haven’t warmed up to it at all over the last 6 months. Even though I’ve been living with it for a while, I have only recently really figured out why I don’t like it.

What does the Touchbar do?

One of the functions of the Touchbar, of course, is serving as a replacement for the mechanical function keys at the top of the keyboard. It can also do other things, like acting as a slider control for brightness, or quickly allowing you to select elements from a list. Interestingly, it’s the “replacement for function keys” part of the functionality that gets most of the ire, and I think this is useful for figuring out where the design fails.

What is a “button” on a screen, anyway?

Back in the dark ages of the 1980s, when the world was just getting used to the idea of the Graphical User Interface, the industry gradually settled on a series of interactions, largely based on the conventions of the Macintosh UI. Among other things, this means “push buttons” that highlight when you click the mouse button on them, but don’t actually take an action until you release the mouse button.
If you’ve used a GUI that takes actions immediately on mouse-down, you might have noticed that it feels a bit “jumpy”, and one reason the Mac, and Windows, and iOS (mostly) perform actions on release is exactly because action on mouse-down feels “bad”.

Why action on release is good:

Feedback — When you mouse-down, or press with your finger, you can see which control is being activated. This is really important to give your brain context for what happens next. If there’s any kind of delay before the action completes, you will see that the button is “pressed”, and know that your input was accepted. This reduces both user anxiety and double-presses.

Cancelability — In a mouse-and-pointer GUI, you can press a button, change your mind, and move the mouse off before releasing to avoid the action. Similar functionality exists on iOS, by moving your finger before lifting it. Even if you hardly ever use this gesture, it’s there, and it promotes a feeling of being in control.

Both of these interaction choices were made to make the on-screen “buttons” feel and act more like real buttons in the physical world. In the case of physical buttons or keyswitches, the feedback and the cancelability are mostly provided by the mechanical motion of the switch. You can rest your finger on a switch, look at which switch it is, and then decide that you’d rather do something else and take your finger off, with no harm done. The interaction with a GUI button isn’t exactly the same, but it provides for “breathing space” in your interaction with the machine, which is the important thing.

The Touchbar is (mostly) action on finger-down

With a very few exceptions [maybe worth exploring those more?], the Touchbar is designed to launch actions on finger-down. This is inconsistent with the rest of the user experience, and it puts a very high price on having your finger slip off of a key at the top of the keyboard.
This is exacerbated by bad decisions made by third-party developers like Microsoft, who ridiculously put the “send” function in Outlook on the Touchbar — because if there was ever anything I wanted to make easier, it’s sending an email before I’m quite finished with it.

How did it end up working that way?

I’m not sure why the desi[...]
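The “action on release” pattern described above is easy to express as a tiny state machine. This is a schematic sketch of the interaction, not how macOS actually implements it:

```python
# Minimal sketch of "action on release": the handler fires only if the
# pointer is still over the button when released, so moving off before
# releasing cancels the press - the "breathing space" described above.
class Button:
    def __init__(self, action):
        self.action = action
        self.pressed = False

    def pointer_down(self, over_button: bool):
        self.pressed = over_button      # highlight, but take no action yet

    def pointer_up(self, over_button: bool):
        fire = self.pressed and over_button
        self.pressed = False
        if fire:
            self.action()

clicks = []
b = Button(lambda: clicks.append("clicked"))

b.pointer_down(True); b.pointer_up(True)    # normal click - fires
b.pointer_down(True); b.pointer_up(False)   # changed your mind - cancelled
print(clicks)  # ['clicked']
```

An action-on-down control is the same machine with the action moved into `pointer_down` — which is exactly what removes the cancel step.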

Rogue One: A Star Wars Review


Spoiler-free intro

If you haven't seen the movie yet, you might want to not read past the spoiler alert warning below.

So, you've no doubt seen reviews that say Rogue One is the best Star Wars film, and reviews that claim it's utterly disappointing. I liked it a lot, and I think it's a movie that gets better the closer you look at it. I definitely plan to see it again, and see how much more I can pick out of it.

I have a theory about how people's enjoyment of Rogue One relates to their overall level of nerdiness, specifically their level of Star Wars nerdiness. The theory goes like this: I think that the graph of enjoyment vs Star Wars nerdiness has two peaks, one on the low end, and one on the high end, with a substantial dip in between.

If you only know the Star Wars films a little bit, or just aren't that much of a nerd in general, Rogue One is a pretty serviceable sci-fi action adventure, with some shootouts, some chases, and some amazing visual effects. If you're a super-fan, you get all of that, AND a truly prodigious number of cameos, offhand references, and call-outs to just about every Star Wars movie or TV series, from the original trilogy, to the prequels, to The Force Awakens, to Rebels. I wouldn't be surprised if there's a reference to the Star Wars Christmas Special hidden in there somewhere.

In the middle, we run into the unfortunate people who just really liked the original 3 films (and maybe The Force Awakens), and are headed into Rogue One expecting more of the same. I expect these people to come out a little disappointed with Rogue One, because it's really quite different from the original Star Wars movies.
This is, I believe, intentional, and brilliant in its own way, but it's definitely going to turn some people off.

Here are some of the ways in which Rogue One carves out new territory in the Star Wars universe, and some of my favorite bits of clever film-making in it.

SPOILERS START HERE!!!!

Mirrors and reflections

We all remember how the original Star Wars started, I hope. There's the opening text crawl, and then we jump right into the action. A small space ship, fleeing under heavy fire, retreats into the background, and then its pursuer comes into view - a mind-bogglingly massive grey wedge of death, an Imperial Star Destroyer.

Rogue One has no text crawl, but it does open on a scene in space, relatively peaceful, or so it seems. And then a great grey wedge starts to intrude into the scene. For just a fraction of a moment, your brain says "Oh, it's one of those giant Imperial ships", but then you realize it just doesn't look right, and the camera tilts and pans, revealing that you're looking at the edge of a planetary ring. As you're settling in, waiting to see what happens, a small ship (an Imperial shuttle, this time) crosses the frame, and heads into the background. Whereas in Star Wars, we're immediately ready to cheer on the rebels, in this scene, we don't yet know what to think of this spaceship, heading alone down to land on the planet. It's obviously not good news for the locals, though...

So, there's an obvious echo here between the original and the prequel. There's just enough similarity to trip you up if you think you know what you're about to see. It's a bit like watching the original Star Wars through a blurry lens, or in a fun-house mirror. Rogue One does a lot of this.

It's in the nature of sequels, and even more so in the nature of prequels, to be defined to some extent by how they fit together with, and how they differ from, the original film.
It's a bit like how people who have a twin will often grow up to define themselves in terms of their differences from their sibling.As an immediate prequel to the original movie, Rogue One basically ends right where Star Wars begins. This single moment becomes the mirror in which the original Star Wars is reflected both back in time and backwards in outlook.When T[...]

Apple vs the FBI


What's up with Apple and the FBI?

Several of my friends and family have asked me about this case, which has been in the news a lot recently. A whole lot of news stories have been written trying more-or-less successfully to explain what's going on here, often with ill-fitting analogies to locks and keys, and it seems like a lot of people (including some of our presidential candidates) are just as confused about what's going on now as they were when the whole thing started. The Wired article above is really very good, but it's long, fairly technical, and doesn't cover the non-technical side of things particularly well.

So, since y'all asked, here are some of my thoughts on the case. I'm going to be kind of all over the map here, because I've gotten questions about the moral side of things as well as the technical. I'm going to mostly skip over the legal side of things (because I'm unqualified to comment), except for a couple of specific points.

On the off-chance that someone stumbles across this who doesn't already know who I am, I'm a computer programmer, and I have worked on encryption and digital security software for a number of different companies, including 3 of the 5 largest PC manufacturers.

I'm going to try to stay away from using any analogies, and just explain the actual technology involved as simply as I can, since I know you can handle a bit of jargon, and the analogy-slinging I see on Facebook isn't making things any clearer for people, as far as I can see. There will be links to Wikipedia articles in here.
You don't need to read them, but they are there in case you want to read more about those subjects.

First, a very quick run-down of what this is all about:

- The FBI has an iPhone that was used by Syed Rizwan Farook, one of the shooters in the San Bernardino shootings last December.
- The phone is locked (of course), and the FBI wants Apple to help them unlock it, and in fact has a court order requiring Apple to do so.
- Apple is refusing to do what the FBI wants, for some fairly complicated reasons.
- A whole lot of people, including information security experts, law experts, and politicians, have weighed in on how they think this should go.

So, what's my take on all this?

Encryption does not work the way you might think it does, from watching movies or TV.

In the movies, you always see "hackers" running some piece of software that puts up a progress bar, and the software makes gradual progress over the course of seconds or minutes, until the encryption is "broken", and the spy gets access to the data they need. In the real world, unless the encryption implementation is fundamentally broken by design, the only way to break in is by trying every possible key (we call this a "brute force attack"), and there are an enormous number of possible keys. You could get in with the very first key you try, or you might end up checking every possible key before you find the right one. Nothing about this process gives you any information about whether you're "close" to getting the right key, or whether you've still got billions of keys to try.

The data on the iPhone is encrypted with a key long enough that trying to decrypt it through brute force is essentially impossible.

The data on the iPhone is encrypted using AES, the Advanced Encryption Standard, which was standardized by the US government, and which companies like Apple use to secure data for their customers.
As far as anybody knows, brute force is the only way to attack AES, and with a 256-bit key (as is used on the iPhone), it'd take literally billions of years to try every possible key, even if you used all of the computing power in the world.

Apple doesn't have that key to hand over to the FBI.

The key used to encrypt data on the iPhone is derived from a combination of a device-specific key, and the pass-code which the user has set on the phone. There's no way to extract the device-specific key from the[...]
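A quick back-of-the-envelope calculation supports the "billions of years" claim. The guesses-per-second figure below is a deliberately generous assumption, far beyond any real-world capability:

```python
# Back-of-the-envelope check on brute-forcing a 256-bit AES key.
# Even granting an absurdly optimistic 10**18 guesses per second
# (an invented figure, for illustration), the numbers don't budge.
keyspace = 2 ** 256                    # possible 256-bit keys
guesses_per_second = 1e18              # generous assumed worldwide effort
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / guesses_per_second / seconds_per_year
print(f"{years:.2e} years")            # on the order of 10**51 years
```

For comparison, the universe is on the order of 10**10 years old, so "billions of years" is, if anything, a massive understatement.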

Predictions for Apple's big announcement event tomorrow


So, Apple has scheduled some new product announcements tomorrow, which will certainly include a new iPhone (it’s the right time of year for that). There’s a lot of buzz on the internet about the event, based on oblique references from various Apple employees suggesting that this event is about much more than just a new iPhone.

Despite the fact that I haven’t worked there in a decade, some people have asked me what I think Apple’s going to announce. For everybody’s amusement, here are my predictions, so we can all have a good laugh about them tomorrow. But first, some background:

I’m really bad at this

As many of my friends and family already well know, I have a history of really, really bad predictions of what Apple will and won’t do. A couple of notable failures in the past include:

“Apple wouldn’t buy NeXT. That would make no sense. They might license some of the technology.”

When I said this, Apple was actually in negotiations to purchase NeXT, which ended up being their largest acquisition value-wise, until they acquired Beats Electronics this year.

“Mac OS X will never ship. It’s a doomed project.”

This was while I was working on the OS X team, and more than a little depressed at the level of infighting and backstabbing going on between various teams. It took almost another year, but OS X 1.0 did actually ship.

“Clearly, the Mac will be transitioning to a new architecture again. It won’t be X86, though.”

I had assumed X86-64 on AMD processors was the new target. I take some satisfaction from the fact that Apple relatively quickly obsoleted the 32-bit X86 processors in Macs in favor of 64-bit capable ones.
I *almost* got this one right, but I underestimated how much influence non-technical factors would have on the decision.That’s a common theme amongst many of the times that I mis-predict what Apple is going to do - because I’m this hyper-logical engineer-type person, it always surprises me when they do something that’s not the “right” decision technically, but makes sense economically or in some other way.PredictionsOkay, so here are my logical predictions, almost none of which will likely come to pass.What I think of the popular rumorsiPhone 6No doubt that this is going to be announced. It’ll be lighter, better battery life, faster. Rumors are that there will be a physically much-larger model, with a 5.5 inch screen. That’s totally ridiculous. We’ve all seen someone using one of those massive Android phones, and I think we can all agree that they look like total dorks. No way that Apple is going to make an iPhone that you have to use both hands to use.iWatchNot a chance in hell that Apple will produce a smart watch like the Galaxy Gear or Moto 360. Again with the “dork” factor - who even wears a watch those days? I haven’t worn a watch since I got my first  Palm Pilot, back in the day. My iPhone goes with me nearly everywhere I go, already. I look at higher-end wristwatches, and I can appreciate the craftsmanship, but I have no more interest in wearing them than any other piece of jewelry. If Apple does introduce a piece of “wearable technology”, then it won’t be a conventional watch. I could see something playing up the health-monitor angle, but a wristwatch? No way. A $300 accessory for my iPhone that saves me the effort of pulling my phone out of my pocket to read the calendar notifications? Ridiculous.”Obvious” things, which I haven’t seen rumors aboutNew MacsWeirdly, there’s not much buzz about this in the rumor-sphere. 
There was a little bit of buzz about that early on, given that the event is at the Flint Center, where the introduction of the original Macintosh was held, as well as the iMac, the machine that saved the whole Macintosh line. But the rumor mill died out, partly due to lack of information, and I think partly due to people being u[...]

One down, 11 to go


January OneGameAMonth post-mortem

January is over, I'm done working on Rocks! (for now, at least), and it's time to go over what worked, what didn't, and what I'll do differently for February.

First, here's a link to the current version: Rocks!

And here's the GitHub repository with the source code: Repo!

What I was trying to do

This was the first month of the One Game A Month challenge, and I really wanted to make sure I finished something, so I'd get started off on the right foot. To that end, I tried to shrink the scope of what I was attempting for January to something I was sure I'd be able to finish. Rather than design a game from scratch, I started with a well-known design and implemented it on a technology stack that was unknown (to me). So I decided to do a clone of Asteroids, running in the web browser, using the canvas element for graphics and the Web Audio API for sound.

I wanted to produce something with a retro feel, true to the spirit of the original, even if it wasn't exactly the same in execution. And I decided to do the whole thing without any frameworks or libraries, both because I thought the game was simple enough that I could just bang it out without much help, and because I wanted to actually learn the browser APIs, not some third-party library.

What went right

Got something working very fast, then iterated

By the end of the first week, I had a playable game, if not a very interesting one. That took a lot of the pressure off, knowing that even if I ran out of time, I'd have *something* to show for it.

Scope creep (mostly) avoided

Although lots of really great ideas came to me while working on Rocks!, I managed to avoid the temptation to add in a bunch of extra features. I feel especially good about this given that I didn't quite meet the initial goals. I'd have felt a lot worse if I hadn't managed to make a complete game because I'd gotten distracted by doing something cool that wasn't part of the core gameplay.

Proper "retro-twitch" feel

I spent a fair amount of time tweaking the controls to get ship movement that felt "right". I think this is something that really distinguishes my game from the other Asteroids-like games submitted to OneGameAMonth last month. My ship is very responsive; it turns and accelerates quickly enough to get out of trouble, which makes the player feel like they're in control of their own fate.

No art

I didn't want to spend a lot of time drawing terrible art that I'd then hate. I figured that going with the vector approach would encourage (enforce?) a simple graphical design, and save me from spending hours tweaking art to make it look less goofy. My inability to draw well is going to be an ongoing issue for the next 11 games, too.

I "finished" on time

Actually, a bit ahead of time. Which is good, because a bunch of "real world" stuff came up in the last few weeks of January.

What went wrong

Spent much more time on art & sound than expected

Despite going with a totally minimalist look and sound, I still had to do a fair amount of tweaking. And with everything defined in code (see next item), it was pure tedium to make any changes to the graphics or sound.

No creative tools

I ended up doing the entire art design by sketching things out on graph paper and manually copying the coordinates into my code. This wasn't *terrible*, but it was tedious and error-prone. I didn't build an editor for shapes and sounds because that sounded like more work than actually finishing the game. For *this* game, that was arguably true, but a couple of features got left out rather than going through the process of manually designing graphics and sound for them. I'm planning on using the same technologies in future games, so I'll be able to amortize the effort of building the tools over several projects. Conveniently [...]

Rocks! Update #2 - it's a game


It's an actual game now!

So, first things first: here's the current version of Rocks!

New features include:

- updated graphics: random rock shapes, and a progression of sizes
- on-screen instructions
- better sounds
- proper collision detection
- particle effects when things are destroyed
- more than one level
- a "shield" that will prevent rocks from running into you

It's looking a lot more like a real game now.

Sound design is hard

Oddly enough, the hardest thing for me so far has been making those decidedly "retro", simple sound effects. The Web Audio API is very powerful, but it's also very much focused on sophisticated manipulation of sampled sound. I certainly could have grabbed appropriate sampled sounds, or built some in Audacity, but I wanted to push the "classic" feel of the thing, and I thought: "I've done this sort of thing before, how hard can it be?" Besides, attaching a couple of huge sample files to a game that's currently under 20 KB in total size felt a bit like the tail wagging the dog.

Of course, the last time I tried to create synthesized sounds from scratch was probably 30 years ago, on an 8-bit home computer with a fixed-voice synthesizer chip. There's something to be said for fewer choices helping to focus your efforts. When you're faced with an API that supports multi-channel surround sound, arbitrary frequency- and time-domain manipulation, 3-D positional audio, dynamics compression, and all the rest, it's a little difficult to figure out how to just make a simple "beep".

Here's what I've learned so far about using the Web Audio API:

Web Audio is based on a connected graph of nodes, leading from one or more sources through the graph to the ultimate audio output

This is enormously flexible, and each of the individual node types is just about as simple as it can be while still doing the thing it's designed for. There's a "gain" node that just multiplies its input by a constant and feeds it to its output, for instance. The source nodes don't have individual volume controls (because there's the gain node for that).

There's one quirk that's weird to my old-school sensibilities: every note requires making another source node and connecting it to the node graph. When a note stops playing, the source node is automatically removed and garbage collected. If you want to play the same sound over and over, you're continuously creating and destroying nodes and connecting them to the graph.

There's a simple oscillator source node that's very flexible

You can easily create an oscillator that uses an arbitrary waveform (square, triangle, sine, or user-defined), plays at a specific frequency, and starts and stops at specific times. This is about 80% of what you need to make a "beep", but:

Oddly, there's no built-in ADSR envelope support

Back in the day, we'd set up ADSR (attack, decay, sustain, release) parameters for a sound, which would control how quickly it came up to full volume, how loud it was as the note progressed, and how quickly it faded. There are probably about 10 different ways to do the same thing in Web Audio, but nothing with the same simplicity.

There's no simple white-noise source

This is a bit of a weird omission, in that noise sources are the basic building blocks of a lot of useful effects, including explosions, hissing, and roaring noises. Again, there are probably 10 different ways to solve this with the existing building blocks, each with its own limitations and quirks. I ended up using JavaScript to create a buffer of random samples, which I could then feed through filters to get the appropriate noises for thrust and explosions.

The API is very much a work in progress

Despite the fact that I wasn't trying to do anything particularly sophisticated, I ran into a few bugs in both Safari and Chrome. I imagine a certain amount of this is to[...]
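To make the hand-rolled ADSR idea concrete, here's a rough sketch of the approach described above. The names (`adsrPoints`, `playBeep`) and the specific parameters are mine, not code from the game: the envelope is computed as a plain list of (time offset, gain) ramp targets, which a GainNode can then consume via `linearRampToValueAtTime`. The wiring assumes a browser `AudioContext`.

```javascript
// Compute ADSR ramp points as [timeOffset, gain] pairs.
// attack/decay/release/holdTime are durations in seconds; sustain is a gain level (0..1).
function adsrPoints(attack, decay, sustain, release, holdTime, peak) {
  peak = peak || 1.0;
  return [
    [0, 0],                                   // start silent
    [attack, peak],                           // ramp up to peak volume
    [attack + decay, sustain],                // fall to the sustain level
    [attack + decay + holdTime, sustain],     // hold the note
    [attack + decay + holdTime + release, 0]  // fade out
  ];
}

// Browser-only: wire an oscillator through a gain node and schedule the envelope.
// ctx is an AudioContext; this part won't run outside a browser.
function playBeep(ctx, freq, points) {
  var osc = ctx.createOscillator();
  var gain = ctx.createGain();
  osc.type = 'square';
  osc.frequency.value = freq;
  osc.connect(gain);
  gain.connect(ctx.destination);

  var now = ctx.currentTime;
  gain.gain.setValueAtTime(0, now);
  points.forEach(function (p) {
    gain.gain.linearRampToValueAtTime(p[1], now + p[0]);
  });
  osc.start(now);
  osc.stop(now + points[points.length - 1][0]); // stop when the envelope ends
}
```

Note that `playBeep` creates and throws away a fresh oscillator on every call, which matches the create-and-discard pattern the API pushes you toward.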

One Game a Month, One Blog a Month?


A New Year Brings a Fresh Start

I swear, I'm not going to start this post with how disappointed I am at my lack of writing output over the last year. Oops...

The Problem

No matter how much I promise myself I'm going to update my blog more often, it tends to languish. I have a bunch of half-written articles waiting to be published, but in the absence of any compelling deadline, I can continue to look at them as "not quite ready for public view" forever.

A possible solution

Something I've seen work really well for other people who struggle to produce consistent output is what I think of as a "creative challenge": things like the "take a picture every day for a year" challenge that a lot of people are doing to improve their photography.

I just can't face the idea of a "blog a day" challenge, though. I like the idea of something a little more long-form, and a daily deadline would force me to cut corners to an extent I'm not ready for yet.

So instead, I signed up for the OneGameAMonth challenge. Game design is one of my non-programming passions, so I feel like I'll be able to stay motivated and really see this through. A month is a long enough deadline that I feel I can produce something worth examining, and the practical problems and "stuff I learned along the way" should provide ample material for *at least* one blog entry a month.

The Plan

I haven't planned the whole 12 months out yet, but here's what I do know of my plans:

- I will create a variety of games in different formats, including video games, board games, and card games
- I will explore different genres in each format
- Everything I do will be open-source on my GitHub account
- I will write at least one blog entry every month about the current game
- If I don't finish a game in a particular month, I will not give up; I'll just do something less ambitious the next month

The Proof

And to prove that I'm not completely full of it, here's the in-progress game for January, after two days of after-hours hacking. It's named Rocks! And here's the GitHub repository for it.

This is an HTML5 Canvas & Web Audio version of the old Asteroids arcade game. Because it uses some cutting-edge web features, it only runs properly in recent WebKit-based browsers: that's Google Chrome and Safari. Future games will likely be more cross-platform, but I wanted to learn a bit about the Web Audio API.

What I've learned on this project so far

This first version is very limited, and frankly pretty buggy:

- There's no proper collision detection; it's hard to die unless you try to hit a rock with the ship
- The asteroids don't start larger and break up into smaller ones
- There's no level progression, and no game-over when you die 3 times
- No enemy UFOs yet
- There are missing sound & visual effects

And the code is, frankly, a mess. But on the other side, there's a lot I've learned over the last two days:

- All of the rendering is done using the Canvas line-drawing primitives
- The sounds are synthesized on-the-fly using Web Audio units instead of sampled sounds
- The animation is driven using requestAnimationFrame, so it should throttle back when in the background

The whole thing is less than 11 KB in size, and there's about 400 lines of JavaScript in the main game file. That's smaller than a typical iOS app icon...[...]
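One small detail of an Asteroids-style game that's worth sketching: the playfield wraps at the edges, so a ship flying off one side reappears on the other. A helper like this (my illustration, not code from the Rocks! repository) keeps the per-frame update a tiny pure function:

```javascript
// Wrap a coordinate onto a toroidal playfield of the given size,
// so an object leaving one edge reappears at the opposite edge.
// The double-modulo handles negative coordinates correctly.
function wrap(value, size) {
  return ((value % size) + size) % size;
}

// One fixed-timestep update for a moving object on an 800x600 playfield.
// dt is the elapsed time for this frame, in seconds.
function step(obj, dt) {
  return {
    x: wrap(obj.x + obj.vx * dt, 800),
    y: wrap(obj.y + obj.vy * dt, 600),
    vx: obj.vx,
    vy: obj.vy
  };
}
```

In the real game loop, something like `step` would be called from a requestAnimationFrame callback, with `dt` derived from the timestamp the browser passes in.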

The simplest possible computer


So, if we were going to build a model of the simplest possible computer, where would we start? As it turns out, you probably have such a model in your home already. Many homes have what's known as a "three-way" switch: a light switch that you can turn on and off from two different locations. This circuit can be used as a simple digital computer. By properly labeling the switch positions and the light bulb, we can use them to solve a logic problem.

Let's say that you need a system to tell you whether to have dessert with your lunch, but you have some specific rules to follow:

1. If you have a salad for lunch, you'll have dessert.
2. If you have soup for lunch, you'll have dessert.
3. If you have both soup and salad for lunch, you'll skip dessert (since you'll be over-full).
4. If you haven't had anything for lunch, you won't have dessert (because dessert on an empty stomach will make you sick).

Here's how to solve this problem with the three-way switch: If necessary, flip one of the switches so that the light is off. Label the positions that the switches are currently in: label one "had soup", and the other "had salad". Label the other two positions "no soup" and "no salad", respectively. Hang a sign on the light bulb that reads "have dessert".

Congratulations! You now have a computer that will tell you, based on whether you've had soup and/or salad, whether you should have dessert. Try it out, and you'll find that it follows the rules given above: the light will only come on if you've had either soup or salad (but not both).

This isn't all that exciting by itself, but the same circuit can be used to solve an entire family of related logic problems, just by changing the labels on the switches and the light bulb. This ability to use the same logic to solve many different problems is the source of the flexibility of computers, and is what enables them to be useful for so many different things. [...]
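In Boolean terms, the three-way switch computes an exclusive-or: the light is on exactly when the two switches disagree. A quick sketch of the dessert rules in JavaScript (my own illustration, added here for readers who prefer code to wiring):

```javascript
// The three-way switch circuit is an exclusive-or (XOR):
// the light comes on exactly when the two switches disagree.
function haveDessert(hadSoup, hadSalad) {
  return hadSoup !== hadSalad; // XOR of two booleans
}

// The full truth table, matching the four lunch rules:
// haveDessert(false, false) -> false  (no lunch: no dessert)
// haveDessert(true,  false) -> true   (soup only: dessert)
// haveDessert(false, true)  -> true   (salad only: dessert)
// haveDessert(true,  true)  -> false  (both: over-full, skip dessert)
```

Relabeling the switches and the bulb corresponds to renaming the function's inputs and output; the XOR logic underneath stays the same, which is the reusability point made above.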

A new project!


I'm working on a "book" in my spare time. I put book in quotes there, because I don't know that it'll actually get to the level of being published on dead trees. Due to the subject matter, it would make more sense to publish it online (or perhaps, via something like iBooks) in any case.

It's intended to be an introduction to Computer Science for non-nerds (and/or younger folk), which I'm sure is well-covered ground, but the unique direction I'm planning on taking is to start "at the bottom" with the most basic principles and work my way up.

This is based on conversations I've had with family and friends over the last few decades, at family gatherings, at parties, and on road trips. I get the impression that a lot of folks think that there's this mysterious "other level" beneath what they understand about their computer that requires a lot of formal training to understand. I want to show that things aren't really that complicated at the lower level, and that all of the complexity is layered on top of a very simple foundation.

And, I find the subject really interesting, so I enjoy writing about it. I'm going to set up a website for he new project soon, but in the meantime, I'll put an excerpt up here to see what people think.

Update: Here it is - The Simplest Possible Computer

Time for a reboot...


Okay, it's now been more than a year and a half since I updated this blog. I need to get back on the horse. Stay tuned for an update soon (really)!

JavaScript by example: functions and function objects


I've been working in JavaScript a lot these last couple of months, and I feel like I've learned a lot. I wanted to show some of the more interesting aspects of JavaScript that I've had the opportunity to bump into. I'll use some simple examples along the way to illustrate my points.

Note: If you want to follow along with the examples in this blog post (and the followup posts), you'll probably want to use an interactive JavaScript environment. I tend to use Firebug with Firefox when I'm trying stuff out, but there shouldn't be anything in these examples that won't work in the WebKit console in Safari or Chrome, or in Rhino, for that matter.


A simple function is defined and used in JavaScript thusly:

function add(x, y) {
  return x + y;
}

console.log(add(3, 5)); // this prints "8" to the console

This does just about what it looks like it does. There's no trickery here (the trickery comes later on). Let's say that we want a version of this function that takes a single argument, and always adds 5 to it. You could do that like this:

function add5(a) {
  return add(a, 5);
}

console.log(add5(3)); // prints "8"

But what if you're going to need a bunch of these one-argument variants on the add function? Well, since functions are first-class objects in JavaScript, you can do this:

function make_adder(v) {
  var f = function(x) {
    return add(x, v);
  };
  return f;
}

var add7 = make_adder(7); // create a function
console.log(add7(3)); // prints "10"

This is only slightly more complicated than the original example. One possibly subtle point here is that the returned function "captures" the value of v that was passed into make_adder. In a more formal discussion, you'd call this a closure.
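A follow-on example (mine, not from the original post) showing that each call to make_adder captures its own copy of v, so the generated functions are completely independent of one another:

```javascript
function add(x, y) {
  return x + y;
}

function make_adder(v) {
  return function(x) {
    return add(x, v); // v is captured from the enclosing call
  };
}

// Each closure remembers the v it was created with.
var add2 = make_adder(2);
var add10 = make_adder(10);

console.log(add2(1));  // prints "3"
console.log(add10(1)); // prints "11"
console.log(add2(1));  // still prints "3": the two closures don't interfere
```

This is the same closure mechanism described above; creating add10 has no effect on the v that add2 captured.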

PuzzleTwist is now available!


My latest creation is now up on the iTunes App Store. It's called PuzzleTwist, and it's a puzzle game where you unscramble a picture by rotating the pieces.  As each piece is rotated into place, others will rotate as well - some in the same direction, some in the opposite direction. The key to solving the puzzle is to figure out what order to move the pieces in.

One unique feature is that the rules for each puzzle are different - some are simple, some are more complex. A few are so difficult that I can't solve them without looking at the solution.

Once you've solved a puzzle, you can save the resulting picture in the Photo Library on your iPhone, and then use it as the wallpaper image for the phone, or assign it to one of your contacts.

PuzzleTwist also keeps track of the best reported scores, so you can compare your scores against those of the rest of the world.

If you're a puzzle fan, you should check it out. Here's the iTunes store link.

On a side note, this application was approved much faster than the previous applications I submitted. Perhaps the App Store review team is coming out from under their backlog.

The eyes have it - a tale of 3 vision problems


I'm recovering from a head cold today, so rather than try to do heavy programming work, I decided to write up a personal story that I've been thinking about lately, for a variety of reasons. As anyone who knows me personally can probably attest, I wear glasses and have pretty bad eyesight. Not many of my friends, and probably not even all of my family, know that I have three distinct vision problems, only one of which is actually addressed by my glasses. I'm going to tell y'all about all three, more or less in the order that I found out about them, and the ways in which they've been treated.

Disclaimer: I'm not an eye doctor, nor an expert in human vision. This is all about my own experience. It's entirely likely that I'll make at least one glaring error in my use of some technical term. Feel free to correct me.

Chapter 1: Nearsightedness and Astigmatism

Okay, that probably looks like two different problems, but they're both refractive issues, both caused by misshapen eyeballs, and so both are corrected easily with eyeglasses. If I remember correctly, I got my first pair of glasses in the 5th grade, when I was 10 years old or so. I was pretty astounded at the difference when I put them on: for the first time, I could see the leaves on trees as individual objects. I asked my eye doctor how bad my vision was compared to the 20/20 that's considered "normal", and got the unsatisfying response that the 20/x scale isn't really a useful measure for people with strong nearsightedness. Since I can barely find the eye chart on the wall at 20 feet without my glasses, I can now understand where he was coming from.

There was some consternation amongst the various parties involved about how I could have gone without glasses as long as I had without anybody noticing that I was blind as a bat. For whatever reason, there wasn't any mandatory screening for vision problems in my elementary school. I got screened for a number of other potential issues, amusingly including colorblindness, but nobody ever stuck an eye chart up on the wall and had me read it.

The biggest issue was probably a simple (dare I say child-like?) assumption on my part that everybody else saw things more or less the same way that I did. So, since I couldn't see the blackboard from the back row, I assumed that nobody else could, either. And if it was critical to the learning process for us to be able to read what the teacher was writing, the school would have arranged the classroom such that it was possible, right?

It probably didn't help that I was also a bit of a daydreamer and a slacker. I think that when I said I didn't know we had homework due, my teachers and my mom assumed I just wasn't paying attention, when in reality I might simply not have seen the homework assignment written on the board.

As I get older, and more and more of my friends have children, it occurs to me that there might actually be something useful for other people to learn from my experience. I think the lesson here is a pretty simple one. Parents, talk to your kids about what their sensory experiences are. If someone had, at any point between age 2 and age 10, simply asked me whether I could see some distant object, or count the number of birds on a telephone wire, or even tacked up an eye chart on the wall and tested me, I might have gotten into glasses sooner.

Alright, so I got glasses at age 10, which helped a lot with being able to see what was written on the blackboard, probably made it a lot safer for me to ride my bicycle around, and generally greatly improved my [...]

Release early, release often...


Six Apps In Six Months - or, Why Mark Can't Schedule
That was the plan, anyway - but I haven't been able to keep myself on track. It's really difficult to release something when you're the only engineer working on it. It's hard to resist the temptation to just keep polishing the thing, or try out some new ideas, until you're well and truly past your milestone. Ah, well.

Beta Test Now Open
In the interest of trying to keep the pipeline flowing, I've just released "The Picture Puzzle Game Without an Interesting Name" to a select group of Beta testers. Since I don't want anyone to miss out on the fun of seeing what you get when I'm forced to release something that I don't think is ready, I'll put a link here to Starchy Tuber's Secret Beta page, where you can sign up to Beta test my latest creation.

If you like puzzle games, or if you're just interested in seeing how the sausage is made, the Starchy Tuber Secret Beta program is the place to be!

I reserve the right to limit Beta signups to the first 100 applicants (ha! as if...).

Grr. I just found a typo on the Beta Sign Up instructions. I'll go fix that...

Easter Pictems - a marketing experiment


I'm trying an experiment. There's a free version of Pictems up on the App Store now, loaded with just the subset of items appropriate for Easter.

This version is called "Easter Pictems", appropriately enough, and you can get it here, if you're curious about Pictems, but didn't feel like ponying up the $2.99 to find out whether you liked it.

I'm hoping that folks will download the free version and like it enough to upgrade to the full version. This seems to be a common tactic among developers on the App Store. Of course, people have to find out about your free app if it's to be of any value as a marketing tool. I'll update this post if anything dramatic happens with sales.

In related news, product #2 is coming along nicely. It's a puzzle game, along the lines of the sliding-squares puzzles you might be familiar with, but with a twist (literally, in this case). For this game, the idea of Free and Pay versions makes a lot of sense, so I'm going to release both at the same time. Here's a preview of the (as yet unnamed) puzzle game:

Obsolescence is a pain in the neck...


I'm trying to clean out some of the unused/unloved technology around the house. An interesting case that I'm currently working on is Yvette's old laptop. She used this thing back in her college days, and it'd be nice to be able to get the data off it (for nostalgic purposes), then send it to the great computer graveyard.

It's an approximately 20-year-old NEC DOS-based laptop, with a black-and-white LCD screen and a massive 20 MB hard drive. It boots and seems to run just fine, a bit of a miracle in itself, but I haven't yet figured out how to get the data off of it.

You'd think that it'd be relatively easy to copy the data off this thing, but:

1. Accessing the floppy drive causes the computer to reboot.

2. Neither the serial port nor the modem is recognized by the communications software installed on the thing, so I can't transfer data that way.

3. This computer is old enough that those (and the printer port) are the only external I/O ports - there's no USB, no network port, and no wireless network ability.

I took the thing apart, and discovered that the hard drive in it is actually an IDE drive. Wow - that's almost a current-generation drive technology. I figured I could just get an adapter, and connect the old hard drive directly to a new system. Piece of cake, right? I've already got a Firewire-to-IDE external drive case, so it ought to be just a matter of hooking things up.

Not so fast. They do make a 44-pin to 40-pin adapter just for connecting laptop drives to a standard IDE connector, and I can connect that adapter to my FireWire-to-IDE external drive enclosure, and the drive spins up on power-up and everything. However, it isn't recognized properly. Apparently the FireWire-to-IDE adapter doesn't work correctly with this drive. If I had to guess, I'd guess that the adapter doesn't support IDE drives that don't do DMA transfers.

It's a bit frustrating to have a drive that I know is readable, and have no way to get the data off of it. I'll probably try another IDE bridge and see if it works with this drive, but if that's a bust, I may be in the market for an OLD PC that I can connect the drive to, copy the data off of it, and then recycle.

There may be a trip to Weird Stuff Warehouse in my near future...

Grr. Blogger hates me.


It won't even scale images correctly if I use the "upload image" tool. Oh, well. Click the image to see the full comic...

A New Kind of Science meets XKCD


400 pages down, 450 to go. Here's my impression so far, with a little help from xkcd:

Conspiracy Theories


String Theory

A New Kind Of Science


I'm currently struggling my way through Stephen Wolfram's book A New Kind Of Science. So far, I've made it to about page 200 or so (of 850, not including almost 350 pages of end-notes). I'm not going to review it until (unless?) I've gotten to the end, but so far, I'm not very impressed. This book is really frustrating to read.

For starters, the title of the book ends up getting repeated over and over in the text. It's fairly common when writing about new phenomena or new ideas to assign names to them, for purposes of shorthand if nothing else. But no - phrases like "a new kind of science", or "the new kind of science I've discovered", or "the new kind of science described in this book" appear over and over in the first few chapters. This is really hard to read, and gives the impression of trying very hard to "sell" the idea that there's some radical new insight here, of which, a quarter of the way in, I've so far seen no sign.

It's also really hard to read a book where the author seems to be taking personal credit for well-known results in computer science, without so much as a reference to the work other people have done in the area. There are some references in the end-notes, but the main text doesn't seem to make any kind of distinction between what's new, and what's well-known or borrowed. For someone who isn't familiar with the field, it'd be easy to get the impression that Wolfram invented everything here.

I expected that this book would be fascinating. I've been interested in Cellular Automata since the 80's, and some of the things people have been able to do with the Game Of Life, or the Wireworld CA are pretty amazing. So far, though, there's been a lot of build up for the "big discovery", and some fairly rough-shod introduction to CA theory, but I feel like I'm not making much progress towards any kind of goal.

Yesterday, in an attempt to see whether it's just me that's having a problem with this book, I did a search for reviews of the book. The results were not encouraging.

I'd really like to hear from anybody who has made it all the way through this book. In particular, I'd like to know if I should just skip ahead to the grand conclusion, or slog through the rest of the text.

Well, I'm getting better...


I updated my Blogger layout to the "new and improved" form of the old layout. I'm not sure how "improved" it is, but I ended up with a hierarchical archive, which makes it really easy to see how many blog posts I've had in any given month or year, over the history of the blog.

As I start my 4th year of blogging, I can see that the trend looks like this:
2005: 3 posts
2006: 12 posts
2007: 14 posts
2008: 25 posts

Last year was the first year that I managed to post at least one blog post a month. That's nowhere near where I thought I wanted to be, but at least I'm getting better at consistently writing. I think the writing has gotten easier for me, as well. I suspect that the quality hasn't gone up much (if at all), but I've effectively trained myself not to edit my posts to death, and I'm no longer taking months to get one paragraph just right for publishing.

So it's a qualified success. Onward and upward!