Subscribe: Rough Type: Nicholas Carr's Blog
http://www.roughtype.com/index.xml
Last Build Date: Mon, 25 Jun 2012 20:33:27 -0500

Copyright: Copyright 2012
 



Rough Type has a new RSS feed

Mon, 25 Jun 2012 20:33:27 -0500

Following attacks by hackers intent on selling bootleg viagra and other goodies, Rough Type has made a hasty switch to a new blogging platform (WordPress). As a result, the old RSS feed has been replaced by a new one. If you'd like to continue to subscribe to Rough Type, here's the new subscription link:

http://www.roughtype.com/?feed=rss2

You can also re-subscribe through the home page.

I apologize for the duplicate links in recent days and appreciate your interest in my work.

Nick




Rough Type is experiencing technical difficulties

Thu, 21 Jun 2012 23:26:55 -0500

I guess this is what I get for delaying a software upgrade for seven years. Things should be back to normal reasonably soon.




What realtime is before it's realtime

Mon, 18 Jun 2012 18:31:36 -0500

They say that there's a brief interlude, measured in milliseconds, between the moment a thought arises in the cellular goop of our brain and the moment our conscious mind becomes aware of that thought. That gap, they say, swallows up our free will and all its attendant niceties. After the fact, we pretend that something we think of as "our self" came up with something we think of as "our thought," but that's all just make-believe. In reality, they say, we're mere automatons, run by some inscrutable Oz hiding behind a synaptical curtain. The same thing goes for sensory perception. What you see, touch, hear, smell are all just messages from the past. It takes time for the signals to travel from your sensory organs to your sense-making brain. Milliseconds. You live, literally, in the past. Now is then. Always.

As the self-appointed chronicler of realtime, as realtime's most dedicated cyber-scribe, I find this all unendurably depressing. The closer our latency-free networks and devices bring us to realtime, the further realtime recedes. The net trains us to think not in years or seasons or months or weeks or days or hours or even minutes. It trains us to think in seconds and fractions of seconds. Google says that if it takes longer than the blink of an eye for a web page to load, we're likely to bolt for greener pastures. Microsoft says that if a site lags 250 milliseconds behind competing sites, it can kiss its traffic goodbye. The SEOers know the score (even if they don't know the tense):

Back in 1999 the acceptable load time for a site is 8 seconds. It decreased to 4 seconds in 2004, and 2 seconds in 2009. These are based on the study of the behavior of the online shoppers. Our expectations already exceed the 2-second rule, and we want it faster. This 2012, we're going sub-second.
And yet, as we become more conscious of each passing millisecond, it becomes harder and harder to ignore the fact that we're always a moment behind the real, that what we imagine to be realtime is really just pseudorealtime. A fraud. They say a man never steps into the same stream twice. But that same man will never step into a web stream even once. It's long gone by the time he becomes conscious of his virtual toe hitting the virtual flow. That tweet/text/update/alert you read so hungrily? It may as well be an afternoon newspaper tossed onto your front stoop by some child-laborer astride a banana bike. It's yesterday.

But there's hope. The net, Andrew Keen reports on the eve of Europe's Le Web shindig, is about to get, as the conference's official theme puts it, "faster than realtime." What does that mean? The dean of social omnipresence, Robert Scoble, explains: "It's when the server brings you a beer before you ask for it because she already knows what you drink!" Le Web founder Loic Le Meur says to Keen, "We've arrived in the future":

Online apps are getting to know us so intimately, he explained, that we can know things before they happen. To illustrate his point, Le Meur told me about his use of Highlight, a social location app which offers illuminating data about nearby people who have signed up for the network like - you guessed it - the digitally omniscient Robert Scoble. Highlight enabled Le Meur to literally know the future before it happened because, he says, it is measuring our location all of the time. "I opened the door before he was there because I knew he was coming," Le Meur told me excitedly about a recent meeting that he had in the real world with Scoble.

I opened the door before he was there because I knew he was coming. I could repeat that sentence to myself endlessly - it's that beautiful. And it's profound. Our apps will anticipate our synapses. Our apps will deliver our pre-conscious thoughts to our consciousness before they've even become pre-conscious thoughts. The net will out-Oz Oz. Life will become redundant, but that seems a small price to pay for a continuous preview of real realtime. Le Meur states the obvious to Keen: We have "no choice but to [...]



1964

Thu, 14 Jun 2012 13:25:27 -0500

From Simon Reynolds's interview with Greil Marcus in the Los Angeles Review of Books:

SR: I wanted to ask you about an experience that seems to have been utterly formative and enduringly inspirational: the Free Speech Movement at Berkeley in 1964. That is a real touchstone moment for you, right?

GM: That was a cauldron. It was a tremendously complex experience, struggle, event. A series of events. In a lot of ways, it's been misconstrued: there are many versions of it. Each person had their own version of it. The affair began when there'd been a lot of protests in the Bay Area in the spring of 1964 against racist hiring practices. At the Bank of America, at car dealerships, at the Oakland Tribune — black people were not hired at all for any visible job. So there were no black sales people, no black tellers or clerks. A lot of the organizing for these protests, which involved mass arrests and huge picket lines and publicity, was done on the Berkeley campus. Different political groups would set up a table and distribute leaflets and collect donations and announce picket lines and sit-ins. The business community put a lot of pressure on the University of California to stop this, and the university instituted a policy that no political advocacy could take place on the campus. No distribution of literature, no information about events where the law might be broken. So people set up their tables anyway. And the university had them arrested. And out of that came the Free Speech Movement, saying, "We demand the right to speak freely on campus like anywhere else. We've read the Constitution." ... This Free Speech Movement was an extraordinary series of events where people stepped out of the anonymity of their own lives and either spoke in public or argued with everybody they knew all the time. It was three or four solid months of arguing in public: in dorm rooms, on walks, on picket lines. "What's this place for? Why are we here? What's this country about? Is this country a lie, or can we keep its promises even if it won't?" All these questions had come to life, and it was just the most dynamic and marvelous experience. And there were moments of tremendous drama and fear and courage. I used to walk around the campus thinking how lucky I was to be here at that moment. You really had a sense not that history was being made in some real sense for the world, but that you were making your own history — you along with other people. You were taking part in events, you were shaping events. You weren't just witnessing events that would change your life. That, as I understood it, would leave you unsatisfied, because you couldn't reenact what Thomas Jefferson called the "public happiness" of acting in public with other people. He was referring to his own moments as a revolutionary, drafting the Declaration of Independence. In that meeting, people pledged their lives and their sacred honor. And they knew that if they lost, they'd all be shot. Because they were acting together in public, they were taken out of themselves. They were acting on a stage that they themselves had built. I wasn't the only one who felt that way. In that moment I didn't have to wonder how it would feel to be that free. I was that free. And so were countless other people.

SR: And while all this was going on, you also had the tremendous excitement of the Beatles, the Stones, and then, a little later, Bob Dylan. Must have been a pretty exciting time to be young.

GM: The Free Speech thing was the fall of 1964. And the Beatles dominated the spring of '64. One thing I will never forget about being a student here was reading in the San Francisco Chronicle that this British rock 'n' roll group was going to be on The Ed Sullivan Show. And I thought that sounded funny: I didn't know they had rock 'n' roll in England. So I went down to the commons room of my dorm to watch it and I figured there'd be an argument over what to watch. But instead there were 200 people there, and everybody had t[...]



Live fast, die young and leave a beautiful hologram

Tue, 12 Jun 2012 19:28:15 -0500

"For us, of course, it's about keeping Jimi authentically correct." So says Janie Hendrix, explaining the motivation behind her effort to turn her long-dead brother into a Strat-wielding hologram. Tupac Shakur's recent leap from grave to stage was just the first act of what promises to be an orgy of cultural necrophilia. Billboard reports that holographic second comings are in the works not just for Jimi Hendrix but for Elvis Presley, Jim Morrison, Otis Redding, Janis Joplin, Peter Tosh, and even Rick James. Superfreaky! What could be more authentically correct than an image of an image?

I'm really looking forward to seeing the Doors with Jim Morrison back out in front - that guy from the Cult never did it for me - but I admit it may be kind of discomforting to see the rest of the band looking semi-elderly while the Lizard King appears as his perfect, leather-clad 24-year-old self. Jeff Jampol, the Doors' manager, says, "Hopefully, 'Jim Morrison' will be able to walk right up to you, look you in the eye, sing right at you and then turn around and walk away." That's all well and good, but I'm sure Jampol knows that the crowd isn't going to be satisfied unless "Jim" whips out his virtual willy. (Can you arrest a hologram for obscenity?) In any case, hearing the Morrison Hologram sing "Cancel my subscription to the resurrection" is going to be just priceless - a once-in-a-lifetime moment, replayable endlessly.

I think it was Nietzsche who said that what kills you only makes you stronger in the marketplace.




Books ain't music

Mon, 04 Jun 2012 13:58:34 -0500

C30, C60, C90, go!
Off the radio, I get a constant flow
Cause I hit it, pause it, record it and play
Or turn it rewind and rub it away!
- Bow Wow Wow, 1980

When I turned twelve, in the early 1970s, I received, as a birthday present from my parents, a portable, Realistic-brand cassette tape recorder from Radio Shack. Within hours, I became a music pirate. I had a friend who lived next door, and his older brother had a copy of Abbey Road, an album I had taken a shine to. I carried my recorder over to their house, set its little plastic microphone (it was a mono machine) in front of one of the speakers of their stereo, and proceeded to make a cassette copy of the record. I used the same technique at my own house to record hit songs off the radio as well as make copies of my siblings' and friends' LPs and 45s. It never crossed my mind that I was doing anything wrong. I didn't think of myself as a pirate, and I didn't think of my recordings as being illicit. I was just being a fan.

I was hardly unique. Tape recorders, whether reel-to-reel or cassette, were everywhere, and pretty much any kid who had access to one made copies of albums and songs. (If you've read Walter Isaacson's biography of Steve Jobs, you know that when Jobs went off to college in 1972, he brought with him a comprehensive collection of Dylan bootlegs on tape.) When, a couple of years later, cassette decks became commonplace components of stereo systems, ripping songs from records and the radio became even simpler. There was a reason that cassette decks had input jacks as well as output jacks. My friends and I routinely exchanged cassette copies of albums and mixtapes. It was the norm. We also, I should point out, bought a lot of records, particularly when we realized that pretty much everything being played on the radio was garbage. (I apologize to all Doobie Brothers fans who happen to be reading this.) There are a few reasons why record sales and record copying flourished simultaneously.
First, in order to make a copy of an album, someone in your circle of friends had to own an original; there were no anonymous, long-distance exchanges of music. Second, vinyl was a superior medium to tape because, among other things, it made it easier to play individual tracks (and it was not unusual to play a favorite track over and over again). Third, record sleeves were cool and they had considerable value in and of themselves. Fourth, owning the record had social cachet. And fifth, records weren't that expensive. What a lot of people forget about LPs back then is that most of them, not long after their original release, were remaindered as what were called cutouts, and you could pick them up for $1.99 or so. Even as a high-schooler working a part-time, minimum-wage job, you could afford to buy a couple of records a week, which was - believe it or not - plenty.

The reason I'm telling you all this is not that I suddenly feel guilty about my life as a teenage music pirate. I feel no guilt whatsoever. It's just that this weekend I happened to read an article in the Wall Street Journal, by Listen.com founder Rob Reid, which argued that "in the swashbuckling arena of digital piracy, the publishing world is acquitting itself far better than the brash music industry." Drawing a parallel between the music and book businesses, Reid writes:

The book business is now further into its own digital history than music was when Napster died. Both histories began when digital media became portable. For music, that was 1999, when the record labels ended a failing legal campaign to ban MP3 players. For books, it came with the 2007 launch of the Kindle. Publishing has gotten off to a much better start. Both industries saw a roughly 20% drop in physical sales four years after their respective digital kickoffs. But e-book sales have largely made up the shortfall in publishing—unlike digital music sales, which stayed stubbornly close to zero for years. [...]



Reading with Oprah

Sat, 02 Jun 2012 13:33:44 -0500

We want to think an ebook is a book. But although an ebook is certainly related to a book, it's not a book. It's an ebook. And we don't yet know what an ebook is. We are getting some early hints, though. Oprah Winfrey dropped one just yesterday, when she announced the relaunch of her famous book club. Oprah's Book Club 2.0 is, she said, a book club for "our digital world." What's most interesting about it, at least for media prognosticators, is that each of Oprah's picks will be issued in a special ebook edition, available for Kindles, Nooks, and iPads, that will, as Julie Bosman reports, "include margin notes from Ms. Winfrey highlighting her favorite passages." Those passages will appear as underlined text in the ebook edition, followed by an icon in the shape of an "O." Click on the text or the icon and up pops Oprah's reflection on the passage. For instance, in the first Book Club 2.0 choice, Cheryl Strayed's Wild, the following sentence is highlighted:

Of all the things I'd been skeptical about, I didn't feel skeptical about this: the wilderness had a clarity that included me.

Oprah's gloss on the sentence reads:

That may be my favorite line in the whole book. First of all, it's so beautifully constructed, and it captures what this journey was all about. She started out looking to find herself—looking for clarity—and that's exactly what happens. The essence of the book is held right there in that sentence. It means that every step was worth it. It means all the skepticism of whether this hike is the right thing or not the right thing—it all gets resolved in that sentence.

For the reader, Oprah's notes become part of the book, a new authorial voice woven into the original text. There's plenty of precedent for this, of course. Annotated and critical editions of books routinely include an overlay of marginal comments and other notes, which very much influence the reader's experience of the book.
But such editions are geared for specialized audiences - students and scholars - and they tend to appear well after the original edition. Oprah's notes are different, and they point to some of the ways that ebooks may overthrow assumptions that have built up during the centuries that people have been reading bound books.

For one thing, it becomes fairly easy to publish different versions of the same book, geared to different audiences or even different retailers, at the same time. We may, for example, see a proliferation of "celebrity editions," with comments from politicians, media stars, and other prominent folk. There may also be "sponsored editions," in which a company buys the right to, say, have its CEO annotate an ebook (that could be a real money-maker for authors and publishers of volumes of management advice). Writers themselves could come out with premium editions that include supplemental comments or other material - for a couple of bucks more than the standard edition.

There's no reason the annotations need be limited to text, either. In future book club selections, what pops up when you click the O icon might be a video of Oprah sharing her comments. And since an ebook is in essence an application running on a networked computer, the added material could be personalized for individual readers or could be continually updated. Because ebooks tend to sell for a much lower price than traditional hard covers, publishers will have strong incentives to try all these sorts of experiments as well as many others, particularly if the experiments have the potential to strengthen sales or open new sources of revenues. In small or large ways, the experience of reading, and of writing, will change as books are remodeled to fit their new container.[...]



Careful what you link to

Fri, 01 Jun 2012 04:40:47 -0500

The front page of today's New York Times serves up a cautionary tale:

On Valentine’s Day, Nick Bergus came across a link to an odd product on Amazon.com: a 55-gallon barrel of ... personal lubricant.

He found it irresistibly funny and, as one does in this age of instant sharing, he posted the link on Facebook, adding a comment: “For Valentine’s Day. And every day. For the rest of your life.”

Within days, friends of Mr. Bergus started seeing his post among the ads on Facebook pages, with his name and smiling mug shot. Facebook — or rather, one of its algorithms — had seen his post as an endorsement and transformed it into an advertisement, paid for by Amazon ...

55 gallons? That's a lot of frictionless sharing.




Perfect silence

Thu, 31 May 2012 13:37:40 -0500

I realized this morning that my last two posts share a common theme, so I thought I might as well go ahead and make a trilogy of it. To the voices of Kraus and Taleb I'll add that of the Pope:

Silence is an integral element of communication; in its absence, words rich in content cannot exist. In silence, we are better able to listen to and understand ourselves; ideas come to birth and acquire depth; we understand with greater clarity what it is we want to say and what we expect from others; and we choose how to express ourselves. By remaining silent we allow the other person to speak, to express him or herself; and we avoid being tied simply to our own words and ideas without them being adequately tested. In this way, space is created for mutual listening, and deeper human relationships become possible. It is often in silence, for example, that we observe the most authentic communication taking place between people who are in love: gestures, facial expressions and body language are signs by which they reveal themselves to each other. Joy, anxiety, and suffering can all be communicated in silence – indeed it provides them with a particularly powerful mode of expression. Silence, then, gives rise to even more active communication, requiring sensitivity and a capacity to listen that often makes manifest the true measure and nature of the relationships involved.

When messages and information are plentiful, silence becomes essential if we are to distinguish what is important from what is insignificant or secondary. Deeper reflection helps us to discover the links between events that at first sight seem unconnected, to make evaluations, to analyze messages; this makes it possible to share thoughtful and relevant opinions, giving rise to an authentic body of shared knowledge. For this to happen, it is necessary to develop an appropriate environment, a kind of ‘eco-system’ that maintains a just equilibrium between silence, words, images and sounds.
(Aside to Vatican: Change the background on your site. It's very noisy.)

Making the case for silent communication has always been a tricky business, since language itself wants to make an oxymoron of the idea, but it's trickier than ever today. We've come to confuse communication, and indeed thought itself, with the exchange of explicit information. What can't be codified and transmitted, turned into data, loses its perceived value. (What code does a programmer use to render silence?) We seek ever higher bandwidth and ever lower latency, not just in our networks but in our relations with others and even in ourselves. The richness of implicit communication, of thought and emotion unmanifested in expression, comes to be seen as mere absence, as wasted bandwidth. Whitman in a way is the most internet-friendly of the great poets. He would have made a killer blogger (though Twitter would have unmanned him). But even Whitman, I'm pretty sure, would have tired of the narrowness of so much bandwidth, would in the end have become a refugee from the Kingdom of the Explicit:

When I heard the learn’d astronomer;
When the proofs, the figures, were ranged in columns before me;
When I was shown the charts and the diagrams, to add, divide, and measure them;
When I, sitting, heard the astronomer, where he lectured with much applause in the lecture-room,
How soon, unaccountable, I became tired and sick;
Till rising and gliding out, I wander’d off by myself,
In the mystical moist night-air, and from time to time,
Look’d up in perfect silence at the stars.

"Unaccountable" indeed. I'm speechless.[...]



A little more signal, a lot more noise

Wed, 30 May 2012 09:56:09 -0500

I don't fully understand this excerpt from Nassim Nicholas Taleb's forthcoming book Antifragile, but I found this bit to be intriguing:

The more frequently you look at data, the more noise you are disproportionally likely to get (rather than the valuable part called the signal); hence the higher the noise to signal ratio. And there is a confusion, that is not psychological at all, but inherent in the data itself. Say you look at information on a yearly basis, for stock prices or the fertilizer sales of your father-in-law’s factory, or inflation numbers in Vladivostock. Assume further that for what you are observing, at the yearly frequency the ratio of signal to noise is about one to one (say half noise, half signal) —it means that about half of changes are real improvements or degradations, the other half comes from randomness. This ratio is what you get from yearly observations. But if you look at the very same data on a daily basis, the composition would change to 95% noise, 5% signal. And if you observe data on an hourly basis, as people immersed in the news and markets price variations do, the split becomes 99.5% noise to .5% signal. That is two hundred times more noise than signal — which is why anyone who listens to news (except when very, very significant events take place) is one step below sucker. ... Now let’s add the psychological to this: we are not made to understand the point, so we overreact emotionally to noise. The best solution is to only look at very large changes in data or conditions, never small ones.

I've long suspected, based on observations of myself as well as observations of society, that, beyond the psychological and cognitive strains produced by what we call information overload, there is a point in intellectual inquiry when adding more information decreases understanding rather than increasing it. Taleb's observation that as the frequency of information sampling increases, the amount of noise we take in expands more quickly than the amount of signal might help to explain the phenomenon, particularly if human understanding hinges as much or more on the noise-to-signal ratio of the information we take in as on the absolute amount of signal we're exposed to. Because we humans seem to be natural-born signal hunters, we're terrible at regulating our intake of information. We'll consume a ton of noise if we sense we may discover an added ounce of signal. So our instinct is at war with our capacity for making sense.
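Taleb's sampling arithmetic can be made concrete with a toy model (my own illustration, not his; the drift and noise parameters are hypothetical): over an observation window of length t, a steady trend grows in proportion to t, while random fluctuation grows only like the square root of t, so shrinking the window shrinks the signal much faster than the noise.

```python
import math

# Toy model: over an interval of length t (in years), accumulated
# signal = DRIFT * t, while typical noise = SIGMA * sqrt(t).
DRIFT = 1.0   # signal per year (hypothetical units)
SIGMA = 1.0   # noise scale per square-root year (hypothetical units)

def signal_to_noise(interval_years):
    """Expected signal over the interval divided by typical noise over it."""
    return (DRIFT * interval_years) / (SIGMA * math.sqrt(interval_years))

for label, t in [("yearly", 1.0), ("daily", 1 / 365), ("hourly", 1 / (365 * 24))]:
    ratio = signal_to_noise(t)
    noise_share = 1 / (1 + ratio)  # fraction of an observed change that is noise
    print(f"{label:6s} signal/noise = {ratio:.4f}  noise share = {noise_share:.1%}")
```

Under these assumed parameters the yearly split comes out 50/50 and the daily split about 95/5, matching the excerpt; the hourly figure lands near 99% noise rather than Taleb's 99.5%, a reminder that his percentages are illustrative rather than derived from any particular series.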

If this is indeed a problem, it's not an isolated one. We have a general tendency to believe that if x amount of something is good, then 2x must be better. This leads, for instance, to a steady increase in the average portion size of a soft drink - until the negative effects on health become so apparent that they're impossible to ignore. Even then, though, it remains difficult to moderate our personal behavior. When given the choice, we continue to order the Big Gulps.




Filling all the gaps

Tue, 29 May 2012 11:59:19 -0500

In a recent presentation, entrepreneur, angel, and Googler Joe Kraus provided a good overview of the costs of our "culture of distraction" and how smartphones are ratcheting those costs up. Early in the talk he shows, in stark graphical terms, how people's patterns of internet use change when they get a smartphone. Essentially, a tool becomes an environment.

<iframe width="500" height="281" src="http://www.youtube.com/embed/EzpX0TLKS9Q?rel=0" frameborder="0" allowfullscreen></iframe>

For those of you who are text-biased, here's a transcript.




Workers of the world, level up!

Mon, 28 May 2012 13:23:09 -0500

[Google Doodle from Nov. 30, 2011]

For my sins, I've been reading some marketing brochures - pdfs, actually - from an outfit called Lithium. Lithium is a consulting company that helps businesses design programs to take advantage of the social web, to channel the energies of online communities toward bottom lines. "We do great things and have a playful mindset while doing it," Lithium says of itself, exhibiting a characteristically innovative approach to grammar. It likes the bright colors and rounded fonts that have long been the hallmarks of Web 2.0's corporate identity program: One of the main thrusts of Lithium's business, as the above clipping suggests, is to reduce its clients' customer service costs by tapping into the social web's free labor pool. This, according to a recent report from the Economist's Babbage blog, is called "unsourcing." Instead of paying employees or contractors to answer customers' questions or provide them with technical support, you offload the function to the customers themselves. They do the work for free, and you pocket the savings. As Babbage explains:

Some of the biggest brands in software, consumer electronics and telecoms have now found a workforce offering expert advice at a fraction of the price of even the cheapest developing nation, who also speak the same language as their customers, and not just in the purely linguistic sense. Because it is their customers themselves. "Unsourcing", as the new trend has been dubbed, involves companies setting up online communities to enable peer-to-peer support among users. ... This happens either on the company's own website or on social networks like Facebook and Twitter, and the helpers are generally not paid anything for their efforts.

If Tom Sawyer were alive today and living in Silicon Valley, he might well be bigger than Zuck. Unsourcing reveals that the digital-sharecropping model of low-cost online production has applications beyond media creation and curation.
Businesses of all stripes have opportunities to replace paid labor with play labor. Call it functional sharecropping. As Lithium makes clear, customer-service communities don't just pop up out of nowhere. You have to cultivate them. You have to create the right platform to capture the products of the labor, and you have to offer a set of incentives that will inspire the community to do your bidding. You also have to realize that, as is typical of social networks, a tiny fraction of the members are probably going to do the bulk of the work. So the challenge is to identify your star sharecroppers (or "superfans," as Lithium calls them), entice them to contribute a good amount of time to the effort, and keep them motivated with non-monetary rewards. That's where gamification comes in. "Gamification" refers to the use of game techniques - competition, challenges, awards, point systems, "level up" advances, and the like - to get people to do what you want them to do. Lithium describes it like this:

Humans love games. There’s all kinds of math and science behind why, but the bottom line is - games are fun. Games provide an opportunity for us to enter a highly rewarding mental state where our challenges closely match our abilities. Renowned game designer, Jane McGonigal, writes that games offer us “blissful productivity” - the chance to improve, to advance, and to level up. If you’re trying to get your social customers to post product reviews, help other customers solve problems, come up with new solutions, provide sales advice, or help you innovate new products, introducing games - the chance for blissful productivity - into the experience can provide the right type of incentives that pave the way to higher, more sustained interactions.

Blissful productivity: that soun[...]



Screenage wasteland

Fri, 25 May 2012 13:59:10 -0500

In 1993, the band Cracker released a terrific album called Kerosene Hat - the opening track, "Low," was an alternative radio staple - and I became a fan. I remember checking out the group's message board on America Online at the time and being pleasantly surprised to find the two founding members - David Lowery and Johnny Hickman - making frequent postings. Lowery, who had earlier been in Camper Van Beethoven, turned out to be one of the more tech-savvy rock musicians. He'd been trained as a mathematician and was as adept with computers as he was with guitars. When the Web came along, he and his bands soon had a fairly sophisticated network of sites, hosting fan conversations, selling music, promoting gigs. In addition to playing, Lowery runs an indie label, operates a recording studio, produces records for other bands, teaches music finance at the University of Georgia, and is married to a concert promoter. He knows the business, and much of his career has been spent fighting with traditional record companies. That's all by way of background to a remarkable talk that Lowery gave in February at the SF MusicTech Summit, a transcript of which has been posted at The Trichordist (thanks to Slashdot for the pointer). Lowery offers a heartfelt and incisive critique of the effects of the internet and, in particular, the big tech companies that now act as aggregators and mediators of music, a critique that dismantles the starry-eyed assumption that the net has liberated musicians from servitude to record companies. The net, he argues, has merely replaced the Old Boss with a New Boss, and, as it turns out, the New Boss is happy to skim money from the music business without investing any capital or sharing any risk with musicians. The starving artist is hungrier than ever. 
When Napster came along, Lowery says, he immediately understood that bands would "lose sales to large-scale sharing," but he was nevertheless optimistic that "through more efficient distribution systems and disintermediation we artists would net more":

So like many other artists I embraced the new paradigm and waited for the flow of revenue to the artists to increase. It never did. In fact everywhere I look the trend seemed to be negative. Less money for touring. Less money for recording. Less money for promotion and publicity. The old days of the evil record labels started to seem less bad. It started to seem downright rosy ... Was the old record label system better? Sadly I think the answer turns out to be yes. Things are worse. This was not really what I was expecting. I’d be very happy to be proved wrong. I mean it’s hard for me to sing the praises of the major labels. I’ve been in legal disputes with two of the three remaining major labels. But sadly I think I’m right. And the reason is quite unexpected. It seems the Bad Old Major Record Labels “accidentally” shared too much revenue and capital through their system of advances. Also the labels “accidentally” assumed most of the risk. This is contrasted with the new digital distribution system where some of the biggest players assume almost no risk and share zero capital.

Lowery also points out how the centralization of traffic at massive sites like Facebook and YouTube has in recent years made it even harder for musicians to make a living. The big sites have actually been a force for re-intermediation, stealing visitors (and sales) away from band sites:

Facebook, YouTube and Twitter ate our web traffic. It started with Myspace and got worse when Facebook added band pages. Somewhere around 2008 every artist I know experienced a dramatic collapse in traffic to their websites. The Internet seems to have a tendency towards monopoly. All those social interacti[...]



The hierarchy of innovation

Mon, 14 May 2012 12:11:56 -0500

"If you could choose only one of the following two inventions, indoor plumbing or the Internet, which would you choose?" -Robert J. Gordon

Justin Fox is the latest pundit to ring the "innovation ain't what it used to be" bell. "Compared with the staggering changes in everyday life in the first half of the 20th century," he writes, summing up the argument, "the digital age has brought relatively minor alterations to how we live." Fox has a lot of company. He points to sci-fi author Neal Stephenson, who worries that the Internet, far from spurring a great burst of creativity, may have actually put innovation "on hold for a generation." Fox also cites economist Tyler Cowen, who has argued that, recent techno-enthusiasm aside, we're living in a time of innovation stagnation. He could also have mentioned tech powerbroker Peter Thiel, who believes that large-scale innovation has gone dormant and that we've entered a technological "desert." Thiel blames the hippies:

Men reached the moon in July 1969, and Woodstock began three weeks later. With the benefit of hindsight, we can see that this was when the hippies took over the country, and when the true cultural war over Progress was lost.

The original inspiration for such grousing - about progress, not about hippies - came from Robert J. Gordon, a Northwestern University economist whose 2000 paper "Does the 'New Economy' Measure Up to the Great Inventions of the Past?" included a damning comparison of the flood of inventions that occurred a century ago with the seeming trickle that we see today. Consider the new products invented in just the ten years between 1876 and 1886: internal combustion engine, electric lightbulb, electric transformer, steam turbine, electric railroad, automobile, telephone, movie camera, phonograph, linotype, roll film (for cameras), dictaphone, cash register, vaccines, reinforced concrete, flush toilets.
The typewriter had arrived a few years earlier and the punch-card tabulator would appear a few years later. And then, in short order, came airplanes, radio, air conditioning, the vacuum tube, jet aircraft, television, refrigerators and a raft of other home appliances, as well as revolutionary advances in manufacturing processes. (And let's not forget The Bomb.) The conditions of life changed utterly between 1890 and 1950, observed Gordon. Between 1950 and today? Not so much.

So why is innovation less impressive today? Maybe Thiel is right, and it's the fault of hippies, liberals, and other degenerates. Or maybe it's crappy education. Or a lack of corporate investment in research. Or short-sighted venture capitalists. Or overaggressive lawyers. Or imagination-challenged entrepreneurs. Or maybe it's a catastrophic loss of mojo. But none of these explanations makes much sense. The aperture of science grows ever wider, after all, even as the commercial and reputational rewards for innovation grow ever larger and the ability to share ideas grows ever stronger. Any barrier to innovation should be swept away by such forces.

Let me float an alternative explanation: There has been no decline in innovation; there has just been a shift in its focus. We're as creative as ever, but we've funneled our creativity into areas that produce smaller-scale, less far-reaching, less visible breakthroughs. And we've done that for entirely rational reasons. We're getting precisely the kind of innovation that we desire - and that we deserve.

My idea - and it's a rough one - is that there's a hierarchy of innovation that runs in parallel with Abraham Maslow's famous hierarchy of needs. Maslow argued that human needs progress through five stages, with each new stage requiring the fulfillment of lower-level, or more basic, needs[...]



Social production guru, heal thyself

Fri, 11 May 2012 11:39:01 -0500

I was pleased to see that Yochai Benkler launched a blog on Monday - and, since the first (and as yet only) post was a response to my claim of victory in the Carr-Benkler Wager, I think I can even take a bit of credit for inspiring the professor to join the hurly-burly of the blogosphere. Welcome, Yochai! Long may you blog!

I fear, however, that no one explained to Yochai the concept of Comments. You see, on Monday I scribbled out a fairly long reply to his post, and submitted it through his comment form. I was hoping, as a long-time social producer myself, to spur a good, non-price-incentivized online conversation. But now, five days later, my comment has not appeared on his blog. In fact, no comments have appeared. Perhaps it's a technical glitch, but since Yochai's blog is one of many Harvard Law blogs, I have to assume the comment form is working and that the fault lies with the blogger. (I even resubmitted my comment, just in case there was a glitch with the first submission.) Memo to Yochai: social production begins at home.

Fortunately, I saved a copy of my comment. So, while waiting for Yochai to get around to tending his comment stream, I will post it here:

Yochai, Thanks for your considered reply. I'm sure you'll be shocked to discover that I disagree with your claim that you’ve won our wager. I think you're doing more than a little cherry-picking here, both in choosing categories and in defining them. Most important, you're shortchanging the actual content that circulates online, which I would consider the most important factor in diagnosing the nature of production. For example, you highlight "music distribution" and "music funding," but you ignore the music itself (the important thing), which continues to be dominated by price-incentivized production - even on p-to-p networks, the vast majority of what's traded is music produced by paid professionals.
The same goes, for example, for news reporting, where peer production is a minuscule slice of the pie. Where is the great expansion of the peer production model that you promised six years ago? Where are all the Wikipedias in other realms of informational goods? The fact is, the spread of social production just hasn't happened. Even open-source software - your primary example of social production - has moved away from peer production and toward price-incentivized production. I don’t say that this is necessarily a good thing; I just say that it’s the reality of what’s happening.

Here's my (quick and dirty) alternative rundown of whether the most influential sites in major categories of online activity are generally characterized by price-incentivized production (pi) or peer production (pp):

News reporting/writing: pi (relatively little pp)

News opinion: pi (Huff Post has a mix, but is shifting toward more pi, both by Huff Post's paid staff and by others writing as paid employees of other organizations; other popular news opinion sites are dominantly pi)

News story discovery/aggregation: unclear (are Twitter, Reddit, Stumbleupon et al. really the most influential, or are they secondary to the editorial decision-making of sites like NY Times, WSJ, BBC, Guardian, Daily Mail, The Atlantic, and other traditional papers and magazines? I don't think this is an easy one to figure out. My own sense is that traditional news organizations play a dominant role in shaping what news is seen and read online, though I acknowledge the importance of social production in the sharing of links to stories and the incorporation of social-production tools at traditional newspaper and magazine sites.)

General (non-news) web content discovery/aggregation: pp

Blogging: pi (big change from[...]