
New Scientist Technology Blog

Updated: 2010-02-22T16:53:00.258Z


This blog's moving home!


(image) After more than two years and 1127 posts, the technology blog is moving home. We're merging with Short Sharp Science, a blog for everything New Scientist covers in the world of science, technology and ideas.

You can view that new, super-blog here, and see only the technology posts at this link.

For those of you viewing in RSS, please update your readers to subscribe to this new feed.

Tom Simonite, online technology editor

How to measure a website's IQ?


(image) The creator of the world wide web, Tim Berners-Lee, has made an odd request: for a kind of rating system to help people distinguish sites that can be trusted to tell the truth, and those that can't.

Berners-Lee was speaking at the launch of the World Wide Web Foundation, which aims to ensure that everyone in the world benefits as the web evolves.

In his speech he referred to the way fears that the LHC could destroy the world spread like wildfire online. As the BBC puts it, he explained that "there needed to be new systems that would give websites a label for trustworthiness once they had been proved reliable sources."

He went on to say that he didn't think "a simple number like an IQ rating" is a good idea: "I'd be interested in different organisations labelling websites in different ways". Whatever process is used to hand out the labels, it sounds like a bad idea to me.

Berners-Lee himself directed us towards some of its biggest problems:
"On the web the thinking of cults can spread very rapidly and suddenly a cult which was 12 people who had some deep personal issues suddenly find a formula which is very believable...A sort of conspiracy theory of sorts and which you can imagine spreading to thousands of people and being deeply damaging."
There are plenty of arguments online already about whether Scientology is a cult. I find it unlikely anyone will be keen to step in and label sites on either side as not to be trusted. Others might reasonably argue that all religions - whether established or not - should come with a warning message.

As for wading in to put a stop to conspiracy theories, I can't imagine anything their proponents could benefit from more.

Berners-Lee also mentioned the system would help people find out the real science behind, for example, the LHC's risks. You might think handing out ratings for sites about science would be easier, with publishers of peer-reviewed science, for example, receiving a top rating without problems.

But there will be papers in the archives of any journal that have been entirely superseded. And a whole lot more that present results that are valid, but can be misleading to some readers. Web licences to ensure that people only read sites they can handle are the next logical step.

Fortunately it's much more likely that the whole idea will quietly be forgotten, which will at least prevent Berners-Lee receiving one of the first "potentially misleading" badges for thinking it up in the first place.

Let's hope the World Wide Web Foundation and its laudable goals have a rosier future.

Tom Simonite, online technology editor

Jamming the future


(image) Nokia's cellphone anthropologist Jan Chipchase - interviewed in depth here - blogged this week about the etiquette of connectivity. When is it OK to whip out a phone or laptop, and when is it not?

Chipchase gives the example of a UK cafe that discouraged customers from using laptops by targeting them with bustling cleaners. I've certainly been to places that seem to carefully cultivate an atmosphere that makes people feel they must leave their laptops in their bags, and steal outside to make or receive calls.

Here in London, lovers of non-connectivity were worried this week by suggestions that underground trains may soon get cellphone reception. Trains between cities here commonly have "quiet carriages" where the use of phones and music players is banned. But I think that is unlikely on the Tube - the march of connectivity is set to continue until we just don't question it anymore.

Laptops are largely tolerated in lecture halls and mobile phones are hardly ever banned anywhere anymore. We've rolled over, and adjusted.

Chipchase hints at the idea of places that actually jam mobile or Wi-Fi reception. Also unlikely, I think, but before patches without connectivity are completely eradicated, perhaps they'll become more celebrated for a while. They deserve some commemoration of their passing.

Tom Simonite, online technology editor

Apple's latest DRM will restrict your wardrobe


(image) You've heard, of course, of digital rights management - used to control how you play, copy or otherwise use media files like music.

Now Apple wants to apply that concept to your sporting wardrobe. In US patent application 2008/0218310, the company details a way to stop us using unauthorised training shoes with the in-sole sensors it sells as part of the Nike + iPod kit. The shoe sensors work as pedometers, sending the data to your iPod as you run.

Apple's patent explains that "some people have taken it upon themselves to remove the sensor from the special pocket of the Nike shoe and place it in inappropriate locations - shoelaces, for example - or place it on non-Nike shoes".

They seem to consider this beyond the pale. The patent details a way of "pairing a sensor and an authorised garment", such as "running shoes, shirts or slacks". Companies like Nike could authorise their garments by burying an RFID chip inside them. That chip is required to activate the sensor. No longer will you be able to use the sensor you paid for with any shoe of your choosing.

Apple's idea sounds mean-minded to me. What do you think?

The company has previous form, though. Last year they tried to patent a system that would prevent you from recharging a music player if you ever use it with unauthorised software.

Paul Marks, New Scientist technology correspondent

Can the US make coal the new oil?


Last week, DARPA issued researchers with a plea for help: help us make liquid coal economical and environmentally sound.

It's easy to see the logic here - the US Department of Defense guzzles its way through 300,000 barrels of liquid fuel a day, relying on foreign oil to meet that need. The US has an estimated 275 billion tons of coal reserves. Convert that coal to liquid fuel and - hey presto - you could sever the dependency on foreign oil.

The technology even exists - the Nazis were producing liquid fuel from coal using indirect synthesis via the Fischer-Tropsch process in the 1940s. But that in itself is revealing - this isn't a very economical process, and was perhaps only viable in Nazi Germany as a last resort when oil resources dried up.

A Google search for 'liquid coal' offers little comfort. Coming in at number 3 is "Liquid Coal is a Bad Deal for Global Warming", while at number 6 is "Why Liquid Coal Is Not a Viable Option to Move America Beyond Oil".

The US Air Force itself would tend to agree: on 5 August it appeared to be on the verge of abandoning its own attempts at converting coal to liquid fuel. Time will tell if DARPA succeeds where the US Air Force has failed.

Colin Barras, online technology reporter

Bletchley Park gets US cash injection


(image) There's always been a bit of confusion between the UK and the US over who contributed most to the invention of the electronic programmable computer. It is heart-warming, however, to see some leading lights in US computing recognise the achievements of Alan Turing and his fellow WWII code breakers that were long kept classified.

Data encryption company PGP Corporation and PC-inventor IBM donated $100,000 to help maintain Bletchley Park, where Turing and colleagues worked. To what should be the UK government's shame, the place risks falling into ruin. I visited today as PGP and IBM tried to encourage others to add to their donation. If you want to do so, visit this website.

Bletchley Park says it needs some £10 million for the upkeep of the crumbling huts - where Alan Turing and others kickstarted computing as they tried to crack Nazi codes - and the manor house nearby. A further £7 million is needed for a museum to house Europe's largest collection of fully functional computers.

The most famous machines from Bletchley Park are Colossus, the world's first programmable electronic computer, which was used to decode Nazi teleprinter traffic on the fly, and the Bombes - giant electromechanical calculators that revealed the rotor settings of various types of Enigma machine.

But because this top secret work stayed classified for so long after the war, a US computer, EDVAC, stole some of Bletchley Park's deserved thunder, PGP's chief technical officer Jon Callas and president Phil Dunkelberger told me. Only in the late 70s did the achievements of the British machines begin to be recognised, by which time the early history of computing was already written.

"It wasn't until the 1970s and early 1980s that computer scientists began to hear whispers of the existence of a super fast machine in England that predated post-war American computers," says Callas. "When the details eventually came out about Colossus we couldn't quite believe how fast it had been at its one task: breaking ciphers."

"As the acknowledged birthplace of modern computing, Bletchley Park is responsible for laying the foundation for many of today's technology innovations," said Dunkelberger.

"We have had a great response to the campaign so far, but more is definitely needed to preserve this British – and international – icon," says Bletchley spokesman Jon Fell. He told me that he hopes the UK National Lottery and the US Sidney E Frank Foundation will soon pledge money too.

Paul Marks, technology correspondent

Ministry of silly walks


(image) Fans of US TV series The West Wing, which until 2006 portrayed the inner workings of a fictional White House, might have experienced deja vu on reading my colleague's article last week on how scientists at NASA are working on identifying people from satellite images of their shadows. Their trick is to spot signs of a person's gait - their characteristic pattern of walking.

The series slipped up badly on this very subject in one 2003 episode, in which someone from a thinly disguised Slashdot says he has a tipoff about military research on mind control.

Horrified press spokesperson C J Cregg looks into it and is visited by a scientist from US defence research agency DARPA - one of the most hilariously portrayed nerds TV has ever screened.

Things soon turn nasty for viewers, though. The dialogue proceeds to jumble together the MK-Ultra project of the 1960s, in which the CIA really did use drugs for mind control, with more current research, by DARPA and others. This ranged from tiny cameras to adhesives based on gecko feet to mind-computer interfaces to, yes, gait analysis. Except this gait analysis was supposed to tell if a person is a potential criminal. Nonsense.

A colleague tells Cregg to leak the story to the press, so this horror will be shut down. Sadly that is where the show veered back towards the real world. True, horrific research such as MK-Ultra has been done in the name of security. But some research by the military, especially blue-sky types like DARPA, is merely banal, and even beneficial - like this internet thing you are currently using.

Much fear of science is engendered by the sloppy, arm-waving, button-pushing alarmism that results when commentators garble all research with horrors like MK-Ultra. And sometimes good research is threatened by this fear.

In this case, the otherwise brilliant writers of West Wing engaged in just such sloppiness. Disappointing, but cautionary - an occasion, in this US election year, to remind ourselves to beware such dishonest portrayals of science, even by writers we can otherwise trust.

Debbie MacKenzie, Brussels correspondent

Image courtesy Fotografar

Spore game simulates life - but is it science?


Today sees the debut of a simulation that rolls together evolution, astrobiology, architecture, sociology and AI into one big, all-encompassing computer model of life. But this is not a project running on a supercomputer - it's a video game you can play at home.

Spore is the hotly anticipated latest creation of Will Wright, creator of SimCity and The Sims, and was originally titled "Sim Everything".

I met Wright this week as he demoed his brainchild in London, its first outing in the English-speaking world. You can read the interview in depth in our 27 September issue, but here are some highlights about Spore.

"Science is one thing, not hundreds of little fiefdoms," Wright told the crowd of over 100 highly excited fans that had gathered in the Apple Store on Regent Street, sipping his signature frappuccino and showing slides about the game. "I want Spore to reflect that."

In case you missed the hype, players start the game as microbes. They spawn pincers and flagella and swim to shore, "evolving" into a complex multi-celled organism that later becomes tribal, then builds cities and spaceships and finally colonises inhospitable planets and strives for galactic domination.

Wright says Spore was "heavily inspired" by an equation by Frank Drake - now emeritus professor of astronomy at the University of California, Santa Cruz - that estimated how many alien intelligences are likely to exist in the universe.

Wright describes the equation as "spanning all known science" and "an interesting way to collapse all of science into a simple question even a child can understand".
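The Drake equation Wright refers to simply multiplies a chain of estimated factors to guess at the number of detectable civilisations in our galaxy. A minimal sketch - every numerical value below is an illustrative assumption, not a measurement:

```python
# Drake equation: N = R* . fp . ne . fl . fi . fc . L
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable civilisations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(
    r_star=7,       # new stars formed per year (guess)
    f_p=0.5,        # fraction of stars with planets (guess)
    n_e=2,          # habitable planets per planetary system (guess)
    f_l=0.33,       # fraction of those where life appears (guess)
    f_i=0.01,       # fraction of those evolving intelligence (guess)
    f_c=0.01,       # fraction releasing detectable signals (guess)
    lifetime=10000  # years such a civilisation keeps signalling (guess)
)
print(round(n, 1))  # about 2 civilisations, under these guesses
```

Change any factor by an order of magnitude and the answer swings just as wildly - which is rather the point of the exercise.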
But is Spore really science?

Although playing the game takes hours, not eons, each player is shown a timeline describing their organism's evolution that spans billions of years - a nod to the huge timescales of evolution.

What's more, organisms really do compete for ecological niches - so one planet can only house a certain number of plants, herbivores and carnivores.

A player's single-celled organism begins the game after reaching its new planet riding on a comet - just as in the theory of panspermia.

On the other hand, creatures don't evolve by random mutation and natural selection; they are built by a player - with the aid of the creature creator - who chooses the bits, and where they go.

Organisms also evolve linearly - from single cell, to multi-celled organism, to creature, to tribe - not into the branching "tree" of organisms that better reflects evolution on Earth.

Wright himself admits that making a great game - not absolute faithfulness to scientific knowledge - has to be the priority. But he told me later that: "We only break science with good reason."

The science seems important to many fans too. Among those gathered to see Wright speak is 14-year-old Kieron Gillingham from Southampton, who describes Spore as a "universe in a box". "It's a complete genre of its own," he gasps.

Another fan - Guy Stern, a medical student at Nottingham University - is also drawn to the game because of its parallels to science: "Evolution is really cool."

If you've had a go - let us know what you think. Wright freely admits Spore is still a challenge for him: "I have not completed the game on 'hard' yet, which kind of annoys me."

Celeste Biever, biomedical sciences news editor

Mighty Google needs you for airwaves campaign


Google may have a lot of political power, but it is unusual to see it used openly. Now, however, the corporation is waging a very public campaign, and appealing for help from you, the public.

The aim of the fight is to free the "white space" between US terrestrial TV channels for long-range, high-speed wireless broadband.

A campaign website was launched in August to swing public opinion in favour of the idea, and to provide a place for people to give it their backing. As well as a petition, the site helps people create videos explaining why they like the idea.

Google is open about the fact it stands to make money from free white space. But you are reminded that consumers stand to benefit too - for example, from phones and other portable gadgets empowered with fast, always-on connections.

Of course, Google wouldn't be campaigning if there wasn't opposition to the idea. There are fears that annexing the white space will cause problems for TV channels, as well as for existing users of gadgets that already use white space, like wireless microphones. New Scientist took an in-depth look at the issue in January.

The graphic below, from that article, illustrates the worries of TV broadcasters.

(image) An update earlier this week revealed that, so far, more than 13,000 people have signed the petition.

Whether you plan to join them or not, it's nice to see that even giant advertising and search companies able to flood nearly every news source just by launching a browser still need ordinary people.

Tom Simonite, online technology editor

Artificial brain becomes ace pilot


The helicopter in the video below may look dangerously out of control. But it is being piloted by a piece of software able to perform the "chaos" manoeuvre shown for indefinite periods.

[embedded video]

Researchers in the Stanford University AI Lab have created software able to learn complex aerobatics from human pilots.

Using the data from a suite of sensors (accelerometers, gyroscopes and magnetometers) on a helicopter piloted by an expert human, the software works out how to pull off the same moves. This is not simply a case of mimicking the commands sent from the controller; the software must learn to deal with the effects of varying wind conditions and other complications.

Smart as it is, the Stanford software doesn't get it right first time. Watch this video (wmv format) to see a helicopter improve its "tic-toc" manoeuvre.

Small autonomous aircraft are learning aerobatics elsewhere, too. A team at the Georgia Institute of Technology has created helicopters that can land on slopes steeper than any human pilot can handle, and MIT researchers have built planes that are able to perform vertical, perching landings. Videos of both craft in action are below.

Tom Simonite, online technology editor

[embedded video]

[embedded video]

Miracle toast for all


In this data-sensitive age, could printing on food be the ultimate security measure? There's no risk of accidental disclosure once you've consumed it.

At the SIGGRAPH conference in Los Angeles last month a couple of representatives from OnLatte, a Boston-based start-up company, set up shop next to Starbucks. For no charge, they'd print any simple image on top of a foamy cappuccino.

The results (video below) arguably rival the finest examples of latte art, while requiring zero skill from the barista. The secret is a caramel ink, which is virtually indistinguishable from the nutty brown colour of the coffee. A simple laser printer transfers the ink onto the surface of the coffee. They need to make it faster, though - after the 2-minute process your coffee is well on the way to cold.

[embedded video]

Elsewhere, Inseq Design, an Austria-based company, took inspiration from a dot-matrix printer and produced a toaster, Zuse, that burns a 12x12 pixel image into bread. There's a video of it in action below.

But the real question is, will the price of miracle toasted sandwiches fall if this sort of technology becomes popular?

Colin Barras, online technology reporter

[embedded video]

When is the future?


In the second half of the twentieth century it was pretty obvious where the future lay: in the 21st century. The year 2000. And numbers in the low 2000s were everywhere - from book and film titles to products.

But what date signifies the future today? That's a question Marc Augé asks in his book Où est passé l'avenir? (Where has the future gone?).

Surely the year 2010 is too near to be considered futuristic, whereas 2020 carries at least one other meaning that's liable to confuse people.

Perhaps the year 2100 is the new future - it certainly appears in a number of computer game titles. But is 2100 too distant to become the default future year? Arguably, the year 2000 became a popular emblem of the future in the later 20th century simply because it was tantalisingly within reach and pleasingly round.

Perhaps it held less power earlier in the twentieth century. For instance, Philip Francis Nowlan chose to set his futuristic Buck Rogers stories in the 25th century rather than the 21st. For a taste of what to expect when we do reach 2400, take a look at this video.

Colin Barras, online technology reporter

Via Pasta & Vinegar

Fossilised data - the ultimate back-up?


(image) There's an interesting discussion on Slashdot today about how to store digital images underground in a format still readable 25 years later. It has drawn some inventive suggestions.

One person proposes avoiding problems with changing memory formats by putting a whole computer into the time capsule. Only a power supply would be needed to view the images in the future. Unfortunately, the fact that many computer components would corrode makes that unlikely to succeed.

Others suggest using archival paper that should last more than a century, to either print the photos, or even the digital 1s and 0s that make them up. To try that last suggestion yourself, check out this site.
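Printing the 1s and 0s amounts to an extremely redundant but durable encoding: render each byte as eight digits that a future reader could type back in by hand. A minimal sketch of the idea (the function names are my own, not from the site mentioned above):

```python
def to_paper(data: bytes) -> str:
    """Render bytes as groups of '1's and '0's, one byte per group."""
    return " ".join(format(b, "08b") for b in data)

def from_paper(text: str) -> bytes:
    """Recover the original bytes from the printed digits."""
    return bytes(int(group, 2) for group in text.split())

page = to_paper(b"archive me")
assert from_paper(page) == b"archive me"  # round-trips exactly
print(page[:17])  # '01100001 01110010' - the bytes of 'a' and 'r'
```

At eight printed characters per stored byte it is wildly inefficient, but that redundancy is exactly what makes it readable decades later without any particular hardware.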

If you wanted data to last even longer, this idea to encode data as millimetre-scale bumps on a thick steel disk might prove useful.

New Scientist's resident potter Dan Palmer says ceramic glazes could be used to imprint clay with data, as dots, bar codes, or pictures. When fired to stoneware temperatures, they could last thousands of years.

Searching for a truly long term solution, our online tech reporter Colin Barras points out that long-lasting data needs to fossilise well. That's not impossible - insects up to 150 million years old trapped in amber can have fine features like compound eyes well preserved.

Perhaps the ultimate digital time capsule would contain data stored in a physical format, and then encased in a material that will fossilise it nicely. Tree resin springs to mind, but I'm sure there are better alternatives.

Of course, making it something that is obviously a message to inhabitants of the future is another problem, and one that is very real for designers of nuclear waste stores.

Tom Simonite, online technology editor

Why Adam gets more spam than Eve


(image) Email addresses that begin with certain letters get more than their fair share of spam, says Richard Clayton at the University of Cambridge.

He looked at more than half a billion emails that arrived at one UK ISP over an eight-week period. After ignoring addresses that appear to be out of use, he showed that for addresses beginning with A, 30% of messages are spam. Someone with an address starting with Z gets a smaller proportion - 20%.

The exact reason for the difference is unclear. Clayton thinks it is down to spammers attempting to guess addresses. There are few real addresses that start with Z compared with those that start with A, so guessing them correctly is less likely.

Clayton did not perform statistical tests on the significance of his results. But it is interesting to see that some letters got even more spam than A, despite there being fewer addresses beginning with those letters. Addresses beginning with R, P, S and M all received around 40% spam.
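Clayton's tally is simple to reproduce on any mail log: group messages by the first character of the recipient address and take the spam fraction per group. A sketch on invented toy data (this is my own illustration, not Clayton's code):

```python
from collections import defaultdict

def spam_share_by_initial(records):
    """records: iterable of (address, is_spam) pairs.
    Returns {initial letter: fraction of messages that were spam}."""
    spam = defaultdict(int)
    total = defaultdict(int)
    for address, is_spam in records:
        initial = address[0].lower()
        total[initial] += 1
        spam[initial] += int(is_spam)
    return {k: spam[k] / total[k] for k in total}

# Invented records, just to show the shape of the computation.
log = [("adam@example.com", True), ("amy@example.com", True),
       ("alex@example.com", False), ("zoe@example.com", False),
       ("zak@example.com", True)]
shares = spam_share_by_initial(log)
print(shares["a"], shares["z"])  # 0.666... 0.5
```

On a real half-billion-message corpus you would also want to filter out dormant addresses first, as Clayton did, since they skew the denominators.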

Tom Simonite, online technology editor

Japanese or US Americans: who likes androids more?


(image) US Americans do, according to Christoph Bartneck at the Technical University of Munich. He thinks that crossing the uncanny valley - overcoming the revulsion we feel towards robots that are almost, but not quite, human-like - is something that a society does together.

Bartneck showed Japanese and US citizens a number of photos and asked them to rate them for likeability. Some of those showed the faces of real humans, some showed human-like androids, and some were simply photos of robot pets. The Japanese participants liked toy robots better than US participants did, but US citizens were more likely to 'like' human-like androids.

Those results are down to cultural differences, thinks Bartneck. Japanese culture is awash with cute robots, a fact that has boosted their likeability. Because human-like androids are still largely confined to the lab, they are not liked. Those androids are perhaps no more common in the States, but Bartneck thinks US citizens are more easy-going in general, and happier to talk to new people - so they are less disturbed by the appearance of human-like androids.

Those conclusions seem a bit simplistic, but Bartneck probably has a point. The onus on crossing the uncanny valley is often placed exclusively on the researchers behind the androids, pushing their creations across the valley floor. Maybe humans on the opposite side of the valley have an equally important role to play in pulling those robots and computer generated images towards them. We can learn to love creepy robots, if we just try.

Colin Barras, online technology reporter

Shimmer vision binoculars see further thanks to heat haze


Heat haze usually blocks your view of distant objects. But a new kind of binoculars uses it to see further than is possible through clear air.

The Super-Resolution Vision System (SRVS) is funded by the US military research agency DARPA. It exploits the fact that the distortions of a heat haze can fleetingly act like a lens, magnifying a clear view of objects behind it.

The SRVS binoculars automatically collect those "lucky regions" when trained on shimmering air. They can then be digitally stitched together into a single continuous view with more detail than is possible without the heat haze. This slideshow provides more detail and some example images.
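The lucky-region idea can be sketched as frame scoring plus stacking: rate each frame's sharpness (here by the variance of a discrete Laplacian, a common focus measure), keep the sharpest few, and average them. This is my own illustrative reconstruction of the general "lucky imaging" technique, not DARPA's actual algorithm:

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian - higher means sharper."""
    f = frame.astype(float)
    lap = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2]
           + f[1:-1, 2:] - 4 * f[1:-1, 1:-1])
    return lap.var()

def lucky_stack(frames, keep=0.1):
    """Average the sharpest `keep` fraction of the frames."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    n = max(1, int(len(ranked) * keep))
    return np.mean(ranked[:n], axis=0)

# Toy demo: one high-contrast "lucky" frame among flat, noisy ones.
rng = np.random.default_rng(0)
blurry = [np.full((32, 32), 100.0) + rng.normal(0, 1, (32, 32))
          for _ in range(9)]
sharp = np.zeros((32, 32)); sharp[::2] = 255.0  # strong edges
best = lucky_stack(blurry + [sharp], keep=0.1)
assert np.allclose(best, sharp)  # the sharp frame wins the ranking
```

The real system goes further, selecting lucky patches per region rather than whole frames and registering them before stitching, but the selection principle is the same.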

The SRVS system can even beat the diffraction limit that applies to any standard optical device, set by a lens' diameter and the wavelength of light.

DARPA hopes SRVS will provide 90% accurate facial recognition of a moving individual from 1 km away, using a 6-centimetre lens. That's three times better than existing telescopes manage in much more favourable conditions.
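For scale, the Rayleigh criterion puts the diffraction limit of a conventional 6-centimetre lens at roughly a centimetre per resolvable feature at 1 km - too coarse for face recognition, which is why beating that limit matters. A back-of-envelope check (the wavelength is an assumed typical visible-light value):

```python
def rayleigh_spot(wavelength_m, aperture_m, range_m):
    """Smallest resolvable feature size at `range_m` for a
    diffraction-limited circular lens (Rayleigh criterion)."""
    theta = 1.22 * wavelength_m / aperture_m  # angular limit, radians
    return theta * range_m                    # linear size at range

spot = rayleigh_spot(550e-9, 0.06, 1000)  # green light, 6 cm lens, 1 km
print(f"{spot * 100:.1f} cm")  # ~1.1 cm per resolvable detail
```

So without exploiting the haze, a 6-cm lens smears facial features of about that size together at 1 km.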

However, because the technique relies on combining images from a large number of frames, it will not operate in real time. Researchers are aiming for a refresh rate of one image per second.

Proof-of-principle experiments carried out at Sandia National Laboratories in New Mexico with US Army and Marine Corps snipers have shown that the technique can significantly increase the range at which targets can be identified.

The next stage will be to refine the technology so that it is small and robust enough for battlefield use. DARPA has called for a finished product less than 2 kg in weight and less than 35 cm long. The prototype should be tested in 2009, with finished units being delivered to Special Operations units in 2011.

The same principle could be applied to other optical systems affected by atmospheric turbulence, such as astronomical telescopes.

SRVS seems to be one of DARPA's better ideas. Check out some of its worst, including telepathic spies and a mechanical elephant.

David Hambling, New Scientist contributor

Photosynth goes live


Yesterday, Microsoft finally released the first public version of Photosynth, software that meshes many photos of the same place into a 3D landscape. There are already several synths on the Photosynth website - to view them you'll first need to download the software there. You can upload collections of your own photos of a place and have them get the Photosynth treatment. The video below gives you an idea of what you can expect.

[embedded video]

Impressive though this week's public release is, there's a lot to look forward to in the future. Last week I caught up with Microsoft's Richard Szeliski at the SIGGRAPH conference in Los Angeles, where the Photosynth team were showing off the latest additions to their software.

Where the current version of the software gives users a kind of 3D slideshow, Photosynth will eventually do more. Photos can be used to make a 3D environment that users can seamlessly spin around or travel through, truly adding a new dimension to holiday snaps.

Colin Barras, online technology reporter

Robot tells human off for doing it wrong


The video below shows a scenario that is likely to become real as industrial robots improve: a human and a robot work together to assemble an object from its parts. But in the clip from the University of Minho, Portugal, not everything is going to plan. The human gets a stern warning from the robot that they are doing it wrong.

[embedded video]

The pair are assembling a foam chassis with two wheels. Although the robot has already attached the wheel on its side of the chassis, the human offers it another. The robot - ARoS - is not impressed.

"Ah! you want to give me a wheel. I have already inserted the wheel on my side."

When the human makes another mistake - offering it a bolt the robot doesn't need - the robot again refuses. It also points out that the human needs it for himself.

Industrial robots today are essentially dumb and dangerous - and must be kept separate from people. Having them work directly with humans could speed up all kinds of processes. But they need to be able to understand what their biological colleagues are doing - and perhaps make some small talk too. When the chassis is built ARoS declares: "I enjoyed your help! I hope to work again with you."

Most importantly, they need to be safe. Check out this previous story about a robot that can tell when it has accidentally hit a human.

Tom Simonite, online technology editor

Should technology be allowed to tumble records?


(image) Spectators at Beijing's Olympic swimming pool have witnessed some outlandish goings-on over the last couple of weeks: 25 world records have fallen, compared with eight at the Athens Olympics four years ago. Seven of them were broken by one swimmer, Michael Phelps of the US, while the UK's Rebecca Adlington improved on the 800 metres freestyle record - unchallenged for 19 years - by more than two seconds.

Records have been tumbling in other sports too, but none at this rate. What's going on? Have the swimmers found some new technique to propel them more efficiently through the water? Are they training more intensively? Or is it down to sheer competitiveness? The answer is more prosaic.

The Beijing pool is 3 metres deep, a metre deeper than standard competitive pools. As explained in this week's issue of New Scientist magazine, the extra depth helps dissipate the turbulence caused by the swimmers' movements, reducing resistance. In other words, they are being helped by the architecture.

You could argue that technological "fixes" like this diminish the value of modern sporting records, making it unfair to compare the performances of this year's athletes with those through history. Some critics have suggested, for example, that since the reduced friction suits used by runners and swimmers give them an undeniable advantage over previous competitors, their race times should be adjusted downwards to reflect this.

The problem with this line of reasoning is that there is no end to it. Technology - science too - has always been part of sport, from the design of runners' shoes and aerodynamic bikes to the development of improved training regimes and performance-enhancing diets.

What matters is not whether today's athletes have an unfair advantage, but how they use what's available to them - so long as it's within the rules. Michael Phelps is the fastest swimmer ever over seven disciplines: the fact that he did it wearing a streamlined suit rather than a pair of baggy trunks is surely irrelevant. If he'd done it taking high-performance steroids, now that would be a different matter.

Michael Bond, New Scientist consultant

Handheld gadget offers a window on Rome's past


(image) Visitors to Rome will no longer have to rely on their imagination to see the ancient city at its glorious peak. Just point TimeMachine, a new handheld gadget produced by Ducati Myers and the University of Bologna, at famous sites like the Colosseum and it automatically displays a 3D reconstruction of the building.

The device is the first commercial application of the Rome Reborn project, an ongoing effort to reconstruct the city as it was in AD 320. The starting point of the project was an impressively detailed physical model of the city, some 15 metres across, built over a 40-year span in the twentieth century. That model was laser-scanned and digitised by an international team of researchers led by Bernard Frischer at the University of Virginia. The latest version, Rome Reborn 2.0, was unveiled at the SIGGRAPH conference in LA last week.

Fun though it is to fly over the ancient city and zoom in on specific buildings, it's when those flashy graphics meet present day Rome that Rome Reborn becomes most gripping.

That's where TimeMachine comes in. Point the device at a ruin and it displays the picture on a small screen. The gadget then uses image recognition to identify the dilapidated building, and superimposes the virtual reconstruction of its heyday on top. You can then walk towards, or around, the building to see the virtual reconstruction from any view. The viewer is given a moveable window on the past.

It's even relatively cheap to use - hiring the device costs around 5 euros per hour, according to Joel Myers. So far TimeMachine is active at the Colosseum, and a version designed for the Forum should be available next month.

Colin Barras, online technology reporter

Robot tripod takes impressive panoramas


At the SIGGRAPH graphics conference in Los Angeles I got the chance to look at GigaPan, a robotic tripod (pictured below) that lets photographers produce impressively large panoramas at the touch of a button.

Check out the snowscape from Colorado above, taken by Jason Buchheim. Zoom in and you will appreciate how much detail GigaPan can capture. The image contains 1.91 gigapixels stitched together from 19 separate snapshots. A gigapixel is 1 billion pixels.

Producing a GigaPan image is easy. The user clamps their camera to the tripod and, after a bit of calibration, simply points it at the top-left and bottom-right corners of the panorama they want to photograph. The tripod does the rest, taking a series of snaps that can then be stitched together to create the panorama. The tripod is as low-tech as it sounds - it even has a small robotic arm to mechanically operate the shutter.
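The corner-to-corner trick amounts to planning a grid of overlapping shots. Here is a minimal sketch of how such a planner might work - the function, field-of-view figures and overlap fraction are my own illustrative assumptions, not GigaPan's actual firmware:

```python
# Hypothetical sketch of a GigaPan-style shot planner: given the aim
# angles of the top-left and bottom-right corners, compute a grid of
# (pan, tilt) positions with enough overlap for stitching.
import math

def plan_grid(top_left, bottom_right, h_fov=40.0, v_fov=27.0, overlap=0.3):
    """Return (pan, tilt) angles in degrees covering the rectangle
    between the two corner aim points, frames overlapping by `overlap`."""
    pan0, tilt0 = top_left
    pan1, tilt1 = bottom_right
    step_pan = h_fov * (1 - overlap)    # degrees moved per column
    step_tilt = v_fov * (1 - overlap)   # degrees moved per row
    cols = math.ceil(abs(pan1 - pan0) / step_pan) + 1
    rows = math.ceil(abs(tilt0 - tilt1) / step_tilt) + 1
    pan_dir = 1 if pan1 >= pan0 else -1
    tilt_dir = 1 if tilt1 >= tilt0 else -1
    # The grid may overshoot the second corner slightly, which simply
    # guarantees full coverage of the requested rectangle.
    return [(pan0 + c * step_pan * pan_dir, tilt0 + r * step_tilt * tilt_dir)
            for r in range(rows) for c in range(cols)]

# A 120-degree-wide, 30-degree-tall panorama needs 18 snapshots here:
shots = plan_grid(top_left=(0, 30), bottom_right=(120, 0))
print(len(shots), "snapshots")
```

The stitching software then only has to match features in the known overlap bands between neighbouring frames, which is what makes gigapixel composites from an ordinary camera tractable.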

The largest panorama yet weighs in at 6 gigapixels, snapped by a botanist in Hawaii.

The good news is that GigaPan should be commercially available soon. There's already a 2-3 month waiting list, but if you can stand to wait till the end of the year, a GigaPan tripod can be yours for around US$400.


Colin Barras, online technology reporter

Just an illusion: still images that move


Earlier this week I found myself on a long-haul flight to the SIGGRAPH computer graphics conference in Los Angeles. To pass the time I decided to catch up on some of the papers being presented here. But I had to stop when it came to the mind-warping, travel-sickness-inducing images that littered a paper by researchers from National Cheng Kung University in Taiwan and the Chinese University of Hong Kong.

For the full effect click the image above to enlarge, or here.

Ming-Te Chi and colleagues have analysed a number of hand-drawn examples of these 'self-animating images', such as the image above - "Rotating Snake" by Akiyoshi Kitaoka. Even though the viewer knows they are static, the images unnervingly seem to move anyway. Chi's team is trying to find out why.

They've identified a number of important factors. The pictures appear to creep because of the arrangement of colour bands in the small repeated asymmetric patterns (dubbed RAPs) that make them up. Certain combinations give the impression of creeping in a particular direction, although on its own that effect is relatively weak. The illusion is strengthened if a ribbon of RAPs that appears to flow to the right is placed next to one that appears to flow to the left.

Most impressively, Chi's team has worked out a way to predict which colour combinations give the best illusions. By plotting the four colours used in the images on a standard colour wheel, they found a characteristic pattern: the four colours should always be as different from each other as possible. White couples best with black; blue works well with yellow. With that discovery the researchers could begin experimenting with a wider palette of colours than was previously available for self-animating images.
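The "as different as possible" rule can be illustrated by spacing hues evenly around the colour wheel. This is a rough sketch of that idea only - the function and the HSV colour model are my own choices, not the colour model used in the paper:

```python
# Illustrative only: pick n fully saturated colours whose hues are
# spread evenly around the colour wheel, maximising their separation.
import colorsys

def spaced_palette(n=4, base_hue=0.0):
    """Return n RGB triples (0-255) with evenly spaced hues."""
    palette = []
    for i in range(n):
        h = (base_hue + i / n) % 1.0          # hue as fraction of the wheel
        r, g, b = colorsys.hsv_to_rgb(h, 1.0, 1.0)
        palette.append((round(r * 255), round(g * 255), round(b * 255)))
    return palette

# Four hues 90 degrees apart, starting from red:
print(spaced_palette())
```

With hues spread this widely, adjacent colour bands in a RAP contrast as strongly as possible, which is the property the researchers found drives the illusion.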
As a result they can make far more sophisticated images. For an example, check out this document (pdf) for a creeping rendition of Van Gogh's Starry Night.

Fascinating work, but not good reading material when travelling.

Colin Barras, online technology reporter

How to evade the web ad trackers


All too often the adverts you see online are your past come back to haunt you. Advertisers use tracking cookies to capture users' web histories and monitor usage of a particular site. That information is used to serve up the adverts most likely to influence you.

But I discovered earlier this week that some advertising companies let you opt out of that tracking. Read on to find out how to free yourself.

First, though, consider why you might want to. There are two ways of looking at this: either you believe the advertisers who say well-targeted ads are actually helpful to users, or you think it best that your personal information stays that way.

After all, the information ad firms gather can be enough to identify individuals. In 2006 AOL was embarrassed when supposedly anonymised search data it made public was used to do just that. For a taste of what can be gathered and how it can be used, check out this post on a site that will guess your gender.

But now, to opt out.

To stop Google tracking and targeting you, visit this site and click "opt out". You will still see adverts, but they won't be based on your personal web use.

On this page you can opt out of targeted advertising from 17 advertising networks, including Yahoo's. It also shows you which of the services it covers already have a tracking cookie on your machine.

Yahoo also has its own dedicated opt-out page, as does DoubleClick - a large online advertising firm acquired by Google in March. Visit this page and hunt for the well-hidden link, or click here to opt out directly.

The DoubleClick site includes a handy reminder of which "non-personally identifiable information" it will use even if you do opt out: "Your browser type, internet service provider, information about the general content of the site or page displayed on your browser and other non-personally identifiable information provided by the site." That's still quite a bit of information.
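It's worth understanding that the opt-out preference is itself stored in a cookie - which is also why clearing all your cookies wipes your opt-outs along with the trackers. A minimal sketch of the server-side logic, using hypothetical cookie names rather than any real ad network's code:

```python
# Hypothetical sketch of an ad network's opt-out check. The cookie
# names ("opt_out", "tracking_id") are invented for illustration.
from http.cookies import SimpleCookie

def choose_ad(cookie_header: str) -> str:
    """Decide whether to serve a targeted or a generic advert."""
    jar = SimpleCookie(cookie_header)
    if "opt_out" in jar and jar["opt_out"].value == "1":
        return "generic ad"          # user opted out: no behavioural targeting
    if "tracking_id" in jar:
        # the tracking cookie links this browser to a stored web history
        return "targeted ad for user " + jar["tracking_id"].value
    return "generic ad"              # new visitor: nothing known yet

print(choose_ad("tracking_id=abc123"))               # history known: targeted
print(choose_ad("opt_out=1; tracking_id=abc123"))    # opt-out takes priority
```

Note that an opt-out handled this way only disables cookie-based targeting - the server still sees your IP address and browser details on every request, which is exactly the gap the DoubleClick critics point to.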
And this company even questions whether the DoubleClick opt-out is effective, saying it only affects cookie-based tracking and not tracking that uses users' IP addresses.

The only ways to protect yourself fully may be to set your browser to reject third-party cookies (find out how here), to have it prompt you to accept or decline every cookie a site tries to send, or to delete your cookies regularly. It's surprising how hard it is to keep your web use to yourself.

The opt-outs linked to above may be a good thing. But as pointed out at TechCrunch, these firms are not offering anyone the choice to opt into their tracking and targeting systems.

US politicians have said in the past that the law should restrict and regulate online tracking like this. Could the appearance of opt-outs be an attempt to head off that threat? Whether it is or not, I expect the number of people who use them to be small. Let us know whether you chose to opt out or not, and your reasons for doing so.

Tom Simonite, online technology editor

Why 2084 may be like 1984


(image) While some people may choose to mark the release of a DVD of the Terminator TV show by buying in a pizza, UK academic roboticist Noel Sharkey wrote a report on the future of policing robots. Not for free, of course - Warner paid him to do it.

You can download the report from Sharkey's webpage here (.doc format).

Whatever you think of that arrangement, Sharkey, who has previously spoken out against military robots, still makes interesting points. He predicts a growing role for robots in policing between now and 2084. And his vision of the future is not particularly comforting.

Sharkey told me that, although nothing surprises him about robots anymore, reviewing developments for the report did send a shiver down his spine when he realised humans will still be in control.
"I am not a believer in AI coming close to organic intelligence or overtaking it, and so my realisation was that whoever controlled the robots would have control of society. They [humans] would be able to enforce arbitrary laws."
Sharkey's predictions are more cyberpunk than space opera. Humans will still call the shots, and robots will amplify their failings as much as, if not more than, their merits.

All futurology is doomed to fail in one way or another. I think it's often best to ignore the content of predictions and look at how they're framed instead. With that in mind, I asked Sharkey why he chose 2084 as the end-point of his timeline - by which point he says soft humanoid robots will be walking the streets, making arrests and questioning suspects.

His answer revealed it was nothing more than an allusion to Orwell's 1984 - hinting at the civil liberties concerns he has about robotic policing.

Tom Simonite, online technology editor

Computer has a go and beats pro player


I've just read over on Slashdot that a supercomputer has beaten a professional human player at the ancient board game go, albeit with a 9-stone head start. It's a surprising result to those familiar with the game, since computers have so far proved no match for human players.

MoGo's performance stunned onlookers, including another go software programmer who said: "I'm shocked at the result. I really didn't expect the computer to win in a one-hour game."

Although it has similarities to chess in computational terms, go strategy is in practice much more complex. Its large board and few rules mean that a computer attempting to calculate a "tree" of possible future moves quickly creates an exponentially growing tangle. In the relatively short time available in a game, there isn't time to work out the best option.
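The scale of that tangle is easy to illustrate. A back-of-the-envelope sketch using typical branching factors - roughly 35 legal moves per position in chess against around 250 in go; these figures are my own illustrative estimates, not from the article:

```python
# Rough illustration of why exhaustive search fails at go: the game
# tree grows exponentially with depth, and go's branching factor
# dwarfs chess's.
def tree_size(branching, depth):
    """Total positions in a full search tree down to `depth` plies."""
    return sum(branching ** d for d in range(depth + 1))

for depth in (2, 4, 6):
    print(f"depth {depth}: chess ~{tree_size(35, depth):.1e} positions, "
          f"go ~{tree_size(250, depth):.1e} positions")
```

Even six plies ahead, the go tree is some five orders of magnitude larger than the chess tree, which is why programs like MoGo fall back on sampling plausible lines of play rather than enumerating them all.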

Current AI techniques just aren't up to scratch. Simply increasing the speed at which calculations can be made is thought unlikely to lead to proper computer supremacy - although it seems to have played a major role in MoGo's victory. Instead, many experts say we need novel ideas for giving AIs some equivalent of the intuition that human players rely on. Easier said than done.

Tom Simonite, online technology editor