
ACM Queue - Planet Queue

Planet Queue is an RSS-fed blogroll of Queue authors who “unlock” what they consider to be important articles on any topic and from any source available in the ACM Digital Library.


Queue, CACM, and the rebirth of the ACM

Fri, 15 May 2009 11:58:21 GMT

As I have mentioned before (if in passing), I sit on the Editorial Advisory Board of ACM Queue, ACM's flagship publication for practitioners. In the past year, Queue has undergone a significant transformation, and now finds itself at the vanguard of a much broader shift within the ACM -- one that I confess to once thinking impossible.

My story with respect to the ACM is like that of many practitioners, I suspect: I first became aware of the organization as an undergraduate computer science student, when it appeared to me as the embodiment of academic computer science. This perception was cemented by its flagship publication, Communications of the ACM, a magazine which, to a budding software engineer longing for the world beyond academia, seemed to be adrift in dreamy abstraction. So when I decided at the end of my undergraduate career to practice my craft professionally, I didn't for a moment consider joining the ACM: it clearly had no interest in the practitioner, and I had no interest in it.

Several years into my career, my colleague David Brown mentioned that he was serving on the Editorial Board of a new ACM publication aimed at the practitioner, dubbed ACM Queue. The idea of the ACM focusing on the practitioner brought to mind a piece of Sun engineering lore from the old Mountain View days. Sometime in the early 1990s, the campus engaged itself in a water fight that pitted one building against the next. The researchers from the Sun Labs building built an elaborate catapult to launch water-filled missiles at their adversaries, while the gritty kernel engineers in legendary MTV05 assembled surgical tubing into simple but devastatingly effective three-person water balloon slingshots. As one might guess, the Labs folks never got their catapult to work -- and the engineers doused them with volley after volley of water balloons.
So when David first mentioned that the ACM was aiming a publication at the practitioner, my mental image was of lab-coated ACM theoreticians, soddenly tinkering with an overcomplicated contraption. I chuckled to myself at this picture, wished David good luck on what I was sure was going to be a fruitless endeavor, and didn't think any more of it.

Several months after it launched, I happened to come across an issue of the new ACM Queue. With skepticism, I read a few of the articles. I found them to be surprisingly useful -- almost embarrassingly so. I sheepishly subscribed, and I found that even the articles that I disagreed with -- like this interview with an apparently insane Alan Kay -- were more thought-provoking than enraging. And in soliciting articles on sordid topics like fault management from engineers like my long-time co-conspirator Mike Shapiro, the publication proved itself to be interested in both abstract principles and their practical application.

So when David asked me to be a guest expert for their issue on system performance, I readily accepted. I put together an issue that I remain proud of today, with articles from Bart Smaalders on performance anti-patterns, Phil Beevers on development methodologies for high-performance software, me on DTrace -- and topped off with an interview between Kirk McKusick and Jarod Jenson that, among its many lessons, warns us of the subtle perils of Java's notifyAll.

Two years later, I was honored to be asked to join Queue's Editorial Advisory Board, where my eyes were opened to a larger shift within the ACM: the organization -- led by both its executive leadership in CEO John White and COO Pat Ryan and its past and present elected ACM leadership like Steve Bourne, Dave Patterson, Stu Feldman and Wendy Hall -- was earnestly and deliberately seeking to bring the practitioner into the ACM fold.
And I quickly learned that I was not alone in my undergraduate dismissal of Communications of the ACM: CACM was broadly viewed within the ACM as being woefully out of touch with academic and practitioner alike, with one past president confessing that he himself couldn't stomach reading it -- even when his name was on the masthead.[...]

Electronic Junk revisited

Sun, 15 Feb 2009 03:46:00 GMT

Back in March 1982, Peter Denning wrote (President's Letter, CACM 25, 3, pp 163-165) about "Electronic Junk". I stumbled across this article recently and was struck by both how much and how little has changed in the intervening 27 years, at least as regards electronic mail.

Professor Denning spoke of getting "5-10 pieces of regular junk mail, 15-25 regular letters, 5 pieces of campus mail, 5 reports or documents (not all technical), 5-10 incoming phone calls, 10-20 local electronic messages, and 10-20 external electronic messages." The ratios have changed for many of us: I get relatively little physical mail, far fewer phone calls, and far, far more email -- several thousand a day, before filtering, of which perhaps 50-150 per day actually get through to me. Probably 2/3 of those are newsletters or mailing lists that I read-and-delete (or just delete!) fairly quickly, leaving me with about 20-50 messages per day that require significant time and attention. Even if that only amounts to five minutes per message, I'm still spending 1.5-4 hours per day processing email. And of course, some of those messages represent multi-hour projects, but I think of those as being somehow external to the regular email stream just to maintain my sanity.

Professor Denning asserted the necessity of automatic filtering even before the advent of spam, as well as unfiltered "urgent, certified, and personal" mailboxes. He didn't predict dictionary attacks, believing that having "unlisted" mailboxes would be enough; we now know that essentially all mailboxes have to be filtered. He did discuss "threshold reception" (essentially a "bid for attention" scheme) that is a form of "electronic postage". He did not predict "proof of work" (e.g., hashcash) schemes, but it's interesting that he recognized the fundamental economic problem with email (cheaper to send than to receive) so early.
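A hashcash-style proof of work captures exactly the economic asymmetry Denning identified: the sender must burn CPU time searching for a nonce whose hash has a required number of leading zero bits, while the receiver verifies the stamp with a single hash. Here is a minimal sketch (the function names and the 12-bit difficulty are my own illustrative choices, not any standard's):

```python
import hashlib
from itertools import count

def mint(header: str, bits: int = 12) -> str:
    """Search for a nonce so SHA-256(header:nonce) begins with `bits` zero bits."""
    for nonce in count():
        stamp = f"{header}:{nonce}"
        digest = hashlib.sha256(stamp.encode()).digest()
        # Treat the digest as a 256-bit integer; the top `bits` bits must be zero.
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return stamp

def verify(stamp: str, bits: int = 12) -> bool:
    """Verification costs one hash, no matter how expensive minting was."""
    digest = hashlib.sha256(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

stamp = mint("to=alice@example.com")
print(verify(stamp))  # True
```

Minting at 12 bits takes a few thousand hash attempts on average; raising `bits` doubles the sender's expected cost per bit while leaving the receiver's cost unchanged -- which is the whole point.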

Today of course we filter essentially all email. We don't have "bid for attention" but we probably should. Many of us do use multiple mailboxes (or at least multiple addresses); I use a new address each time I sign up at a web site so that I can turn the address off if it starts getting spam (which also turns out to make it easier to recognize legitimate mail!). Sender authentication such as DKIM will move ideas such as "certified" mail at least into the realm of possibility, although more work is of course needed.

He didn't get everything right. He proposed "importance numbers", essentially a way for the sender to assert urgency, without realizing that malicious senders would lie. He talked about individuals having multiple mailboxes without predicting the problem of managing more than one or two. He mentioned "restricted access" mailboxes without mentioning the necessity for authentication; in fact, he seems to have failed to anticipate bad actors at all (spammers, phishers, etc.) -- although he did acknowledge junk mail. I think he probably thought of junk email, but not in the quantities that we see today. Interestingly, he talked about certifying message quality, but he didn't mention sender quality (what we would today call accreditation and reputation).

Despite these gaps, he understood the fundamental problem of managing information overload and that email was going to make things worse in the short run. There are some interesting techniques for that, but that's another topic.

For a contemporary reaction to his topic, see the June 1982 ACM Forum (CACM 25, 6, pp 398-400).

Eulogy for a benchmark

Mon, 02 Feb 2009 15:22:28 GMT

I come to bury SPEC SFS, not to praise it.

When we at Fishworks set out, our goal was to build a product that would disrupt the enterprise NAS market with revolutionary price/performance. Based on the economics of Sun's server business, it was easy to know that we would deliver on the price half of that promise, but the performance half promised to be more complicated: while price is concrete and absolute, the notion of performance fluctuates with environment, workload and expectations. To cut through these factors, computing systems have long had their performance quantified with benchmarks that hold environment and workload constant, and as we began to learn about NAS benchmarks, one in particular loomed large among the vendors: SPEC's system file server benchmark, SFS.

Curiously, the benchmark didn't come up much in conversations with customers, who seemed to prefer talking about raw capabilities like maximum delivered read bandwidth, maximum delivered write bandwidth, maximum synchronous write IOPS (I/O operations per second) or maximum random read IOPS. But it was clear that the entrenched NAS vendors took SPEC SFS very seriously (indeed, to the point that they seemed to use no other metric to describe the performance of the system), and upstarts seeking to challenge them seemed to take it even more seriously, so we naturally assumed that we too should use SPEC SFS as the canonical metric of our system...

But as we explored SPEC SFS -- as we looked at the workload that it measures, examined its run rules, studied our rivals' submissions and then contrasted that to what we saw in the market -- an ugly truth emerged: whatever connection to reality it might have once had, SPEC SFS has long since become completely divorced from the way systems are actually used.
And worse than simply being outdated or irrelevant, SPEC SFS is so thoroughly misguided as to implicitly encourage vendors to build the wrong systems -- ones that are increasingly outrageous and uneconomic. Quite the opposite of being beneficial to customers in evaluating systems, SPEC SFS has decayed to the point that it is serving the opposite ends: by rewarding the wrong engineering decisions, punishing the right ones and eliminating price from the discussion, SPEC SFS has actually led to lower performing, more expensive systems! And amazingly, in the update to SPEC SFS -- SPEC SFS 2008 -- the benchmark's flaws have not only gone unaddressed, they have metastasized. The result is such a deformed monstrosity that -- like the index case of some horrific new pathogen -- its only remaining utility lies on the autopsy table: by dissecting SPEC SFS and understanding how it has failed, we can seek to understand deeper truths about benchmarks and their failure modes.

Before taking the scalpel to SPEC SFS, it is worth considering system benchmarks in the abstract. The simplest system benchmarks are microbenchmarks that measure a small, well-defined operation in the system. Their simplicity is their great strength: because they boil the system down to its most fundamental primitives, the results can serve as a truth that transcends the benchmark. That is, if a microbenchmark measures a NAS box to provide 1.04 GB/sec read bandwidth from disk, then that number can be considered and understood outside of the benchmark itself. The simplicity of microbenchmarks conveys other advantages as well: microbenchmarks are often highly portable, easily reproducible, straightforward to execute, etc. Unfortunately, systems themselves are rarely as simple as their atoms, and microbenchmarks are unable to capture the complex interactions of a deployed system.
More subtly, microbenchmarks can also lead to the wrong conclusions (or, worse, the wrong engineering decisions) by giving excessive weight to infrequent operations. In his excellent article on performance anti-patterns, my colleague Bart Smaalders discussed this problem with respect to the getpid system call. Because measuring getpid has been the can[...]
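The getpid trap is easy to reconstruct: a tight-loop microbenchmark assigns enormous weight to an operation that a real workload might perform once at startup, tempting engineers to optimize the wrong thing. A rough sketch of such a loop (in Python rather than C; the helper name is mine, and the absolute numbers will vary wildly by system):

```python
import os
import time

def ns_per_call(fn, iters=1_000_000):
    """Time a tight loop around fn and report nanoseconds per call."""
    start = time.perf_counter_ns()
    for _ in range(iters):
        fn()
    return (time.perf_counter_ns() - start) / iters

# The loop makes getpid look all-important; a typical program calls it
# rarely, so shaving nanoseconds here buys nothing for real workloads.
print(f"getpid: {ns_per_call(os.getpid):.0f} ns/call")
```

The number the loop prints is perfectly reproducible and perfectly misleading: it says nothing about how often the operation matters.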

Catching disk latency in the act

Thu, 01 Jan 2009 05:57:26 GMT

Today, Brendan made a very interesting discovery about the potential sources of disk latency in the datacenter. Here's a video we made of Brendan explaining (and demonstrating) his discovery:

[embedded video]

This may seem silly, but it's not farfetched: Brendan actually made this discovery while exploring drive latency that he had seen in a lab machine due to a missing screw on a drive bracket. (!) Brendan has more details on the discovery, demonstrating how he used the Fishworks analytics to understand and visualize it.

If this has piqued your curiosity about the nature of disk mechanics, I encourage you to read Jon Elerath's excellent ACM Queue article, Hard disk drives: the good, the bad and the ugly! As Jon notes, noise is a known cause of what is called a non-repeatable runout (NRRO) -- though it's unclear if Brendan's shouting is exactly the kind of noise-induced NRRO that Jon had in mind...

Concurrency's Shysters

Tue, 04 Nov 2008 12:24:07 GMT

For as long as I've been in computing, the subject of concurrency has always induced a kind of thinking man's hysteria. When I was coming up, the name of the apocalypse was symmetric multiprocessing -- and its arrival was to be the Day of Reckoning for software. There seemed to be no end of doomsayers, even among those who putatively had the best understanding of concurrency. (Of note was a famous software engineer who -- despite substantial experience in SMP systems at several different computer companies -- confidently asserted to me in 1995 that it was "simply impossible" for an SMP kernel to "ever" scale beyond 8 CPUs. Needless to say, several of his past employers have since proved him wrong...)

There also seemed to be no end of concurrency hucksters and shysters, each eager to peddle their own quack cure for the miasma. Of these, the one that stuck in my craw was the two-level scheduling model, whereby many user-level threads are multiplexed on fewer kernel-level (schedulable) entities. (To paraphrase what has been oft said of little-known computer architectures, you haven't heard of it for a reason.) The rationale for the model -- that it allowed for cheaper synchronization and lightweight thread creation -- seemed to me at the time to be long on assertions and short on data.

So working with my undergraduate advisor, I developed a project to explore this model both quantitatively and dynamically, work that I undertook in the first half of my senior year. And early on in that work, it became clear that -- in part due to intractable attributes of the model -- the two-level thread scheduling model was delivering deeply suboptimal performance... Several months after starting the investigation, I came to interview for a job at Sun with Jeff, and he (naturally) asked me to describe my undergraduate work.
I wanted to be careful here: Sun was the major proponent of the two-level model, and while I felt that I had the hard data to assert that the model was essentially garbage, I also didn't want to make a potential employer unnecessarily upset. So I stepped gingerly: "As you may know," I began, "the two-level threading model is very... intricate." "Intricate?!" Jeff exclaimed, "I'd say it's completely busted!"

(That moment may have been the moment that I decided to come work with Jeff and for Sun: the fact that an engineer could speak so honestly spoke volumes for both the engineer and the company. And despite Sun's faults, this engineering integrity remains at Sun's core to this day -- and remains a draw to so many of us who have stayed here through the ups and downs.)

With that, the dam had burst: Jeff and I proceeded to gush about how flawed we each thought the model to be -- and how dogmatic its presentation. So paradoxically, I ended up getting a job at Sun in part by telling them that their technology was unsound!

Back at school, I completed my thesis. Like much undergraduate work, it's terribly crude in retrospect -- but I stand behind its fundamental conclusion that the unintended consequences of the two-level scheduling model make it essentially impossible to achieve optimal performance. Upon arriving at Sun, I developed an early proof-of-concept of the (much simpler) single-level model. Roger Faulkner did the significant work of productizing this as an alternative threading model in Solaris 8 -- and he eliminated the two-level scheduling model entirely in Solaris 9, thus ending the ill-begotten experiment of the two-level scheduling model somewhere shy of its tenth birthday. (Roger gave me the honor of approving his request to integrate this work, an honor that I accepted with gusto.)

So why this meandering walk through a regrettable misadventure in the history of software systems?
Because over a decade later, concurrency is still being used to instill panic in the uninformed. This time, it is chip-level multiprocessing (CMP) instead of SMP that promises to be the End of D[...]

Once Blitten

Sun, 20 Jul 2008 08:37:28 GMT

I've worked on a fair number of debuggers over the years, and in my efforts to engineer new things, I always spend time researching what has gone before. The ACM recently added the ability to unlock papers at their extensive Digital Library, which gives me the pleasure of beginning to unlock some of the older systems papers that influenced my thinking in various topics over the past decade or so.

One of these was The Blit Debugger, originally published in SIGSOFT proceedings, and written by Thomas Cargill of Bell Labs. The Blit itself takes us back quite a ways: it was a bitmapped terminal containing a small microkernel that could communicate with host programs running on UNIX. The idea was that one could write a small C program, compiled into relocatable form, that could control the bitmap display and the mouse and keyboard, and beam it over to the Blit, where it would be executed by mpx. These mini-programs running in the Blit could then communicate with normal UNIX programs back on the host, i.e. executing in the complete UNIX timesharing environment, to form a complete interactive program.

Written in 1983, the same year bytes of 68000 assembly code were being downloaded over a serial line from a Lisa to early prototype Macs, the Blit looked behind the times only a few months later when MacPaint and MacWrite showed up. But in computing, everything old is new again, and we don't spend enough time studying our history. Hence we've reinvented virtualization and interpreters about four times now.

And looking back at the Blit now, you can kind of squint your eyes and see something quite remarkable. A high-resolution bit-mapped display with keyboard and mouse control, running a kernel of software capable of multiplexing between multiple downloaded graphical applications that can drive user interaction, each communicating over a channel to a more fully capable UNIX machine with complete network access and a larger application logic. Sound familiar?
I pretty much just described your favorite web browser, Javascript, and the AJAX programming model.

So now on to debugging. Cargill's Blit Debugger basically let you wander around the Blit display, pointing and clicking to attach the debugger to a program of interest. Once you did that, it could download the symbol tables to the debugger, and then conjure up appropriate menus on the fly that displayed various program symbols and let you descend data structures. I'll let you read the paper for the full details, but there is a very central concept here, independent of the Blit, that for me turned on a big light bulb when I first read this paper in college.

As a programming student, I always considered the debugger a kind of container: i.e. you either ran your program, or you ran it inside the debugger. This was the way we all learned to program, and this was the way all those big all-encompassing IDEs behaved (and mostly still do). The Blit debugger was the first description I'd read of a debugger that was truly a general-purpose tool that could be used to explore any aspect of a running environment, literally allowing you to wander the screen in search of interesting processes whose symbols you could shine your flashlight on. And it wasn't a debug environment, it was just your normal, running compute environment.

This is a seminal concept, and one that has had great influence on my work in debugging over the years, most prominently with the development of DTrace, where Bryan and Adam and I created a modern version of this kind of flashlight, one that would let you take your production software environment and roam around arbitrarily asking questions, poking into any aspect of the running system. But looking back at the Blit and its analogies to the rapidly-evolving AJAX environment, I hope that others will find inspiration in this idea as well, and bring new (and old) thinking to what kin[...]

Self-Healing Unlocked

Sun, 20 Jul 2008 07:36:16 GMT

A couple of years ago, I wrote an article on Self-Healing in Modern Operating Systems for ACM Queue, detailing the state of the art in self-healing techniques for operating systems and introducing the Fault Management capabilities we built for Solaris 10. The ACM has recently added the ability for authors to unlock articles, so I've gone and unlocked that article here. This is a fantastic way to make more of the amazing content in their Digital Library broadly available.

The other thing happening at the ACM that everyone in the field should be excited about is the inclusion of new articles and an increased focus on practice and engineering in their magazine, Communications of the ACM. The increased energy really shows in the July issue with articles on Flash (courtesy of our own Adam Leventhal), Transactional Memory, and others. I also strongly recommend Pamela Samuelson's article on software patents, which avoids the usual religious and emotional responses and instead gives us a long overdue, detailed and thoughtful overview of where the courts and the case law stand on this critical subject.


Revisiting the Intel 432

Sat, 19 Jul 2008 01:11:23 GMT

As I have discussed before, I strongly believe that to understand systems, you must understand their pathologies -- systems are most instructive when they fail. Unfortunately, we in computing systems do not have a strong history of studying pathology: despite the fact that failure in our domain can be every bit as expensive as (if not more so than) failure in traditional engineering domains, our failures do not (usually) involve loss of life or physical property, and there is thus little public demand for us to study them -- and a tremendous industrial bias for us to forget them as much and as quickly as possible. The result is that our many failures go largely unstudied -- and the rich veins of wisdom that these failures generate live on only in oral tradition passed down by the perps (occasionally) and the victims (more often).

A counterexample to this -- and one of my favorite systems papers of all time -- is Robert Colwell's brilliant Performance Effects of Architectural Complexity in the Intel 432. This paper, which dissects the abysmal performance of Intel's infamous 432, practically drips with wisdom, and is just as relevant today as it was when the paper was originally published nearly twenty years ago.

For those who have never heard of the Intel 432, it was a microprocessor conceived of in the mid-1970s to be the dawn of a new era in computing, incorporating many of the latest notions of the day. But despite its lofty ambitions, the 432 was an unmitigated disaster both from an engineering perspective (the performance was absolutely atrocious) and from a commercial perspective (it did not sell -- a fact presumably not unrelated to its terrible performance). To add insult to injury, the 432 became a sort of punching bag for researchers, becoming, as Colwell described, "the favorite target for whatever point a researcher wanted to make."

But as Colwell et al. reveal, the truth behind the 432 is a little more complicated than trendy ideas gone awry; the microprocessor suffered from not only untested ideas, but also terrible execution. For example, one of the core ideas of the 432 is that it was a capability-based system, implemented with a rich hardware-based object model. This model had many ramifications for the hardware, but it also introduced a dangerous dependency on software: the hardware was implicitly dependent on system software (namely, the compiler) for efficient management of protected object contexts ("environments" in 432 parlance). As it happened, the needed compiler work was not done, and the Ada compiler as delivered was pessimal: every function was implemented in its own environment, meaning that every function was in its own context, and that every function call was therefore a context switch! As Colwell explains, this software failing was the greatest single inhibitor to performance, costing some 25-35 percent on the benchmarks that he examined.

If the story ended there, the tale of the 432 would be plenty instructive -- but the story takes another series of interesting twists: because the object model consumed a bunch of chip real estate (and presumably a proportional amount of brain power and department budget), other (more traditional) microprocessor features were either pruned or eliminated. The mortally wounded features included a data cache (!), an instruction cache (!!) and registers (!!!). Yes, you read correctly: this machine had no data cache, no instruction cache and no registers -- it was exclusively memory-memory. And if that weren't enough to assure awful performance: despite having 200 instructions (and about a zillion addressing modes), the 432 had no notion of immediate values other than 0 or 1. Stunningly, Intel designers believed that 0 and 1 "would cover nearly all the need for constants", a conclusion that Colwell (generously) describes as "almost certainly in error." [...]

Understanding Types

Sun, 13 Jul 2008 05:03:00 GMT

There's no better way to start a flame war (well, at least among the geek crowd) than to boldly assert that untyped languages are unusable for any real software development, or on the flip side to assert that typed languages simply place useless barriers in the way of good programmers. But most of us simply take type systems (or the lack thereof) for granted, and types are often looked at in curious ways. For example, I've met C++ programmers who look at templates not so much as a typing mechanism, but as a clever way to get the compiler to generate the code they want with a minimum of actual C++ written.

Many years ago, I read a paper that talked about the fundamentals of type systems: things like why type systems exist at all, what generic types are, why it makes sense to be able to treat types as first class entities within a programming language. This paper fundamentally altered the way that I viewed programming languages and type systems. And even though it's an 'academic' paper, it is well-written and something that any programmer would find interesting (IMHO).

As part of its effort to provide more services for practitioners, the ACM is allowing classic articles like this to be openly available to anyone, whether they are an ACM member or not. So you can find this paper by Cardelli and Wegner in the ACM Digital Library. Read and enjoy!
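The generic types discussed above are now everyday practice in mainstream languages. As a small illustration of parametric polymorphism -- one definition checked once, reusable at any element type -- here is a sketch in Python's notation (the Stack class is my own example, not from the paper):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A stack parameterized over its element type T."""
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

# One definition serves every element type; a type checker rejects
# mixing them (e.g. pushing a str onto a Stack[int]).
ints: Stack[int] = Stack()
ints.push(3)
words: Stack[str] = Stack()
words.push("blit")
print(ints.pop(), words.pop())  # prints: 3 blit
```

The payoff is exactly what the paper argues for: the abstraction is written against the type parameter T, and each instantiation gets full type safety for free.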