Subscribe: Brad Ideas
http://ideas.4brad.com/index.rdf

Brad Ideas



Brad Templeton is Chairman Emeritus of the EFF, Singularity U computing chair, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, photographer and Burning Man artist. This is an "ideas" blog rather than a "cool thing



 



What if the city ran Waze and you had to obey it? Could this cure congestion?

Thu, 01 Dec 2016 23:42:54 -0800

I believe we have the potential to eliminate a major fraction of traffic congestion in the near future, using technology that exists today which will be cheap in the future. The method has been outlined by myself and others in the past, but here I offer an alternate way to explain it which may help crystallize it in people’s minds.

Today many people drive almost all the time guided by their smartphone, using navigation apps like Google Maps, Apple Maps or Waze (now owned by Google.) Many have come to drive as though they were a robot under the command of the app, trusting and obeying it at every turn. Tools like these apps are even causing controversy, because in the hunt for the quickest trip, they are often finding creative routes that bypass congested major roads for local streets that used to be lightly used.

Put simply, the answer to traffic congestion might be, “What if you, by law, had to obey your navigation app at rush hour?” To be more specific, what if the cities and towns that own the streets handed out reservations for routes on those streets to you via those apps, and your navigation app directed you down them? And what if the cities made sure there were never more cars put on a piece of road than it had capacity to handle? (The city would not literally run Waze, it would hand out route reservations to it, and it would still do the UI and be a private company.)

The value is huge. Estimates suggest congestion costs around 160 billion dollars per year in the USA, including 3 billion gallons of fuel and 42 hours of time for every driver. Roughly quadruple that for the world.

Road metering actually works

This approach would exploit one principle in road management that’s been most effective in reducing congestion, namely road metering. The majority of traffic congestion is caused, no surprise, by excess traffic — more cars trying to use a stretch of road than it has the capacity to handle. There are other things that cause congestion — accidents, gridlock and irrational driver behaviour, but even these only cause traffic jams when the road is near or over capacity.

Today, in many cities, highway metering is keeping the highways flowing far better than they used to. When highways stall, the metering lights stop cars from entering the freeway as fast as they want. You get frustrated waiting at the metering light but the reward is you eventually get on a freeway that’s not as badly overloaded.

Another type of metering is called congestion pricing. Pioneered in Singapore, these systems place a toll on driving in the most congested areas, typically the downtown cores at rush hour. They are also used in London, Milan, Stockholm and some smaller towns, but have never caught on in many other areas for political reasons. Congestion charging can easily be viewed as allocating the roads to the rich when they were paid for by everybody’s taxes.

A third successful metering system is the High-occupancy toll lane. HOT lanes take carpool lanes that are being underutilized, and let drivers pay a market-based price to use them solo. The price is set to bring in just enough solo drivers to avoid wasting the spare capacity of the lane without overloading it. Taking those solo drivers out of the other lanes improves their flow as well. While not every city will admit it, carpool lanes themselves have not been a success. 90% of the carpools in them are families or others who would have carpooled anyway.
The 10% “induced” carpools are great, but if the carpool lane only runs at 50% capacity, it ends up causing more congestion than it saves. HOT is a metering system that fixes that problem.  read more » [...]
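To make the reservation idea concrete, here is a rough sketch (in Python, purely illustrative — no city or app vendor runs anything like this today) of a capacity ledger that never grants more cars onto a road segment, per time slot, than an assumed capacity. The segment names, capacities and five-minute slot granularity are all invented.

from collections import defaultdict

# Assumed capacities: cars a segment can absorb per five-minute slot.
# Segment names and numbers are invented for illustration only.
CAPACITY = {"Main St": 80, "Oak Ave": 40, "Hwy 101 on-ramp": 60}

class RouteReservations:
    """Toy ledger: never grant more cars on a segment, per slot, than its capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.booked = defaultdict(int)   # (segment, slot) -> cars already granted

    def request(self, route, depart_slot, max_delay=12):
        """Reserve one car on each segment of `route`, one slot per segment.
        If any segment is full, push the departure to a later slot."""
        for delay in range(max_delay + 1):
            slots = [(seg, depart_slot + delay + i) for i, seg in enumerate(route)]
            if all(self.booked[key] < self.capacity[key[0]] for key in slots):
                for key in slots:
                    self.booked[key] += 1
                return depart_slot + delay   # granted departure slot
        return None                          # road is full: reroute or wait longer

city = RouteReservations(CAPACITY)
print(city.request(["Main St", "Oak Ave", "Hwy 101 on-ramp"], depart_slot=0))  # 0: room now, leave immediately

A real system would also price or prioritize slots and let the app propose alternate routes; the only point here is that the ledger enforces the capacity constraint before a trip begins.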



The Electoral College: Good, bad or Trump trumper, and how to abolish it if you want

Thu, 01 Dec 2016 11:49:59 -0800

Many are writing about the Electoral college. Can it still prevent Trump’s election, and should it be abolished?

Like almost everybody, I have much to say about the US election results. The core will come later — including an article I was preparing long before the election but whose conclusions don’t change much because of the result, since Trump getting 46.4% is not (outside of the result) any more surprising than Trump getting 44% like we expected. But for now, since I have written about the college before, let me consider the debate around it.

By now, most people are aware that the President is not elected Nov 8th, but rather by the electors around Dec 19. The electors are chosen by their states, based on popular vote. In almost all states all electors are from the party that won the popular vote in a “winner takes all,” but in a couple small ones they are distributed. In about half the states, the electors are bound by law to vote for the candidate who won the popular vote in that state. In other states they are party loyalists but technically free. Some “faithless” electors have voted differently, but it’s very rare.

I’m rather saddened by the call by many Democrats to push for electors to be faithless, as well as calls at this exact time to abolish the college. There are arguments to abolish the college, but the calls today are ridiculously partisan, and thus foolish. I suspect that very few of those shouting to abolish the college would be shouting that if Trump had won the popular vote and lost the college (which was less likely but still possible.) In one of Trump’s clever moves, he declared that he would not trust the final results (if he lost) and this tricked his opponents into getting very critical of the audacity of saying such a thing. This makes it much harder for Democrats to now declare the results are wrong and should be reversed.

The college approach — where the people don’t directly choose their leader — is not that uncommon in the world. In my country, and in most of the British parliamentary democracies, we are quite used to it. In fact, the Prime Minister’s name doesn’t even appear on our ballots as a fiction the way it does in the USA. We elect MPs, voting for them mostly (but not entirely) on party lines, and the parties have told us in advance who they will name as PM. (They can replace their leader after if they want, but by convention, not rule, another election happens not long after.) In these systems it’s quite likely that a party will win a majority of seats without winning the popular vote. In fact, it happens a lot of the time. That’s because in the rest of the world there are more than 2 parties, and no party wins the popular vote. But it’s also possible for the party that came 2nd in the popular vote to form the government, sometimes with a majority, and sometimes in an alliance.

Origins of the college

When the college was created, the framers were not expecting popular votes at all. They didn’t think that the common people (by which they meant wealthy white males) would be that good at selecting the President. In the days before mass media allowed every voter to actually see the candidates, one can understand this. The system technically just lets each state pick its electors, and they thought the governor or state house would do it. Later, states started having popular votes (again only of land owning white males) to pick the electors.
They did revise the rules of the college (12th amendment) but they kept it because they were federalists, strong advocates of states’ rights. They really didn’t imagine the public picking the President directly.  read more » [...]



Comma One goes Open Source, Robocars in New Zealand Earthquakes and more

Wed, 30 Nov 2016 14:25:06 -0800

There have been few postings this month since I took the time to enjoy a holiday in New Zealand around speaking at the SingularityU New Zealand summit in Christchurch. The night before the summit, we enjoyed a 7.8 earthquake not so far from Christchurch, whose downtown was over 2/3 demolished after quakes in 2010 and 2011. On the 11th floor of the hotel, it was a disturbing nailbiter of swaying back and forth for over 2 minutes — but of course swaying is what the building is supposed to do; that means it’s working. The shocks were rolling, not violent, and in fact we got more violent jolts from aftershocks a week later when we went to Picton.

While driving around that region, we encountered this classic earthquake scene on the road. There were many like this, and in fact the main highway of the South Island was destroyed long-term not too far away, cutting off several towns.

A scene like this makes you wonder just what a robocar would do in such situations. I already answered this question in a blog post on how to handle a tsunami. Fortunately there was only a mild tsunami for this quake. A tsunami will result in a warning in the rich world, and the car will know the elevation map of the roads and know how to get to high ground. In some places, like Japan, there is also an advanced earthquake warning system that tells you quakes are coming well before they hit you, since electronic signals travel much faster than seismic waves. With such a system, robocars should receive a warning and come to a stop unless they need to evacuate a tsunami zone.

Without such a warning, we still could imagine the road cracking and collapsing in front of you as might have happened on this road. Of course the cones and signs that warned me days later would not be present. The answer again lies in the fact that pictures like mine will be used to create situations like this in simulator, and all car developers will be able to test their systems with simulated quake damage to make sure they do the right thing.

I’ve spoken since 2010 on the value of a shared simulator environment and I think if government agencies like NHTSA want to really help development, providing funding and tools for such an environment would be a good step. NHTSA’s proposal that all developers share their logs of all incidents would clearly make such a simulator better, but there is pushback because of the proprietary value of those logs. When it comes to strange situations like earthquakes, I doubt there would be much pushback on having an open and shared simulator environment.

New Zealand’s government is taking a very welcoming approach to robocars. They are not regulating for a while, and have invited developers to come and test. They have even said it’s OK to test unmanned vehicles under some fairly simple rules. NZ does not have any auto industry, and of course it’s quite remote, but we’ll see if they can attract developers to come test. Their roads feature something you don’t see much in the USA — tons and tons of one-lane bridges and other one-lane stretches of highway. Turns out that robocars, with a little bit of communication, can make very superhumanly efficient use of one-lane two-way roads, and it might be worth exploring.

Open Source Comma One box

Speaking of Open, today Comma.ai, which previously had declared they were giving up on their neural network autopilot due to NHTSA threats, announced they have open sourced their software, along with hardware designs and case designs.
NHTSA did not want them making an autopilot, and said they could not simply rely on the fact that drivers were told they must be diligent. It will be very interesting to see how NHTSA reacts to the release of open designs that anybody can then install on their car. The automotive industry has had a long history of valuing the tinkerer. All the big car companies had their beginnings with small tinkerers and inventors. Some even died in the ver[...]
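Going back to the tsunami point above — picking the nearest reachable high ground from a stored elevation map is essentially an ordinary shortest-path search with a different stopping condition. A minimal sketch, with a made-up road graph and elevations (a real car would also weigh closures, traffic and the warning’s time estimate):

import heapq

def route_to_high_ground(graph, elevation, start, safe_elevation=30.0):
    """Dijkstra search that stops at the first reachable node above safe_elevation.
    graph: node -> list of (neighbor, distance_km); elevation: node -> meters.
    In a real car these would come from stored maps; the values below are invented."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        if elevation[node] >= safe_elevation:
            path = [node]                    # reconstruct the route back to the start
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path))
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    return None                              # no reachable high ground known

graph = {"beach": [("junction", 1.0)], "junction": [("hill", 2.0), ("coast_rd", 0.5)],
         "coast_rd": [("junction", 0.5)], "hill": []}
elevation = {"beach": 2, "junction": 8, "coast_rd": 3, "hill": 45}
print(route_to_high_ground(graph, elevation, "beach"))   # ['beach', 'junction', 'hill']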



How will robotaxi services compete in the future?

Fri, 04 Nov 2016 10:52:53 -0700

Right now Uber, Lyft and traditional taxis are competing. But in the robocar world of the future, when large fleets of cars operate as taxis and replace car ownership for many, how will they compete with one another? Will there be a monopoly in each town, or just a couple of companies? Can we have dozens? Does the biggest fleet win?

I have a new major article on the subject. I also welcome comments on other ways these services might find a competitive edge.

Read Competition in the Robotaxi world




If you built "Westworld" (or other robot sex) it would probably be with VR

Tue, 01 Nov 2016 15:33:31 -0700

HBO released a new version of “Westworld” based on the old movie about a robot-based western theme park. The show hasn’t excited me yet — it repeats many of the old tropes on robots/AI becoming aware — but I’m interested in the same thing the original talked about — simulated experiences for entertainment. The new show misses what’s changed since the original. I think it’s more likely they will build a world like this with a combination of VR, AI and specialty remotely controlled actuators rather than with independent self-contained robots.

One can understand the appeal of presenting the simulation in a mostly real environment. But the advantages of the VR experience are many. In particular, with the top-quality, retinal resolution light-field VR we hope to see in the future, the big advantage is you don’t need to make the physical things look real. You will have synthetic bodies, but they only have to feel right, and only just where you touch them. They don’t have to look right. In particular, they can have cables coming out of them connecting them to external computing and power. You don’t see the cables, nor the other manipulators that are keeping the cables out of your way (even briefly unplugging them) as you and they move. This is important to get data to the devices — they are not robots as their control logic is elsewhere, though we will call them robots — but even more important for power. Perhaps the most science fictional thing about most TV robots is that they can run for days on internal power. That’s actually very hard.

The VR has to be much better than we have today, but it’s not as much of a leap as the robots in the show. It needs to be at full retinal resolution (though only in the spot your eyes are looking) and it needs to be able to simulate the “light field” which means making the light from different distances converge correctly so you focus your eyes at those distances. It has to be lightweight enough that you forget you have it on. It has to have an amazing frame-rate and accuracy, and we are years from that. It would be nice if it were also untethered, but the option is also open for a tether which is suspended from the ceiling and constantly moved by manipulators so you never feel its weight or encounter it with your arms. (That might include short disconnections.) However, a tracking laser combined with wireless power could also do the trick to give us full bandwidth and full power without weight. It’s probably not possible to let you touch the area around your eyes and not feel a headset, but add a little SF magic and it might be reduced to feeling like a pair of glasses.

The advantages of this are huge: You don’t have to make anything look realistic, you just need to be able to render that in VR. You don’t even have to build things that nobody will touch, or go to, including most backgrounds and scenery. You don’t even need to keep rooms around, if you can quickly have machines put in the props when needed before a player enters the room.

In many cases, instead of some physical objects, a very fast manipulator might be able to quickly place in your way textures and surfaces you are about to touch. For example, imagine if, instead of a wall, a machine with a few squares of wall surface quickly holds one out anywhere you’re about to touch. Instead of a door there is just a robot arm holding a handle that moves as you push and turn it.
Proven tricks in VR can get people to turn around without realizing it, letting you create vast virtual spaces in small physical ones. The spaces will be designed to match what the technology can do, of course. You will also control the audio and cancel sounds, so your behind-the-scenes manipulations don’t need to be fully silent. You do it all with central computers, you don&#[...]



Comma.ai cancels comma-one add-on box after threats from NHTSA

Fri, 28 Oct 2016 12:13:42 -0700

Comma.ai, the brash startup attempting to make a self-driving system entirely from a neural network has announced it will cancel the “comma one” add-on box it had planned to sell to owners of certain Honda vehicles. The box stuck on the rear-view mirror and used the car’s own bus commands to provide an autopilot similar to those offered by car makers, with lane-keeping and adaptive cruise control.

Of particular importance is the letter from NHTSA to comma.ai which I suggest you read. This letter creates several big issues:

There are many elements of this letter which would also apply to Tesla and other automakers which have built supervised autopilot functions. Of particular interest is the paragraph which says: “it is insufficient to assert, as you do, that the product does not remove any of the driver’s responsibilities” and “there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose.” That must be very scary for Tesla.

I noted before that the new NHTSA regulations appear to forbid the use of “black box” neural network approaches to the car’s path planning and decision making. I wondered if this made illegal the approach being done by Comma, NVIDIA and many other labs and players. This may suggest that. We now have a taste of the new regulatory regime, and it seems that had it existed before, systems like Tesla’s autopilot, Mercedes Traffic Jam Assist, and Cruise’s original aftermarket autopilot would never have been able to get off the ground.

George Hotz of comma declares “Would much rather spend my life building amazing tech than dealing with regulators and lawyers. It isn’t worth it. The comma one is cancelled. comma.ai will be exploring other products and markets. Hello from Shenzhen, China.”

To be clear, comma is a tiny company taking a radical approach, so it is not a given that what NHTSA has applied to them would have been or will be unanswerable by the big guys. Because Tesla’s autopilot is not a pure machine learning system, they can answer many of the questions in the NHTSA letter that comma can’t. They can do much more extensive testing than a tiny startup can. But even so a letter like this sends a huge chill through the industry. It should also be noted that in Comma’s photos the box replaced the rear-view mirror, and NHTSA had reason to ask about that.

George’s declaration that he’s in Shenzhen gives us the first sign of the new regulatory regime pushing innovation away from the United States and California. I will presume the regulators will say, “We only want to scare away dangerous innovation” but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. It’s all there trying to take the car — the 2nd most dangerous legal consumer product — and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.

I sometimes ask, “Why do we let 16 year olds drive?” They are clearly a major danger to themselves and others. Driver testing is grossly inadequate. They are not adults so they don’t have the legal rights of adults. We let them drive because they are going to start out dangerous and then get better. It is the only practical way for them to get better, and we all went through it. Today’s early companies are teenagers. They are going to take risks. But this is the fastest and only practical way to let them get better and save millions.
“…some drivers will use your product in a manner that exceeds its intended purpose”

This sentence, though in the cover letter and not the actual legal demand, looks at the question asked so much after the Tesla [...]



Of the SAE's robocar "levels" only level 4 will be meaningful, and only partly

Thu, 27 Oct 2016 17:28:29 -0700

It’s no secret that I’ve been a critic of the NHTSA “levels” as a taxonomy for types of Robocars since the start. Recent changes in their use call for some new analysis, which concludes that only one of the levels is actually interesting, and it only tells part of the story at that. As such, they have become even less useful as a taxonomy. Levels 2 and 3 are unsafe, and Level 5 is remote future technology. Level 4 is the only interesting one and there is thus no taxonomy. Unfortunately, they have just been encoded into law, which is very much the wrong direction.

NHTSA and SAE both created a similar set of levels, and they were so similar that NHTSA declared they would just defer to the SAE’s system. Nothing wrong with that, but the core flaws are not addressed by this. Far better, their regulations declared that the levels were just part of the story, and they put extra emphasis on what they called the “operating domain” — namely what locations, road types and road conditions the vehicle operates in. The levels focus entirely on the question of how much human supervision a vehicle needs. This is an important issue, but the levels treated it like the only issue, and it may not even be the most important.

My other main criticism was that the levels, by being numbered, imply a progression for the technology. That progression is far from certain and in fact almost certainly wrong. SAE updated its levels to say that they are not intended to imply a progression, but as long as they are numbers this is how people read them.

Today I will go further. All but level 4 are uninteresting. Some may never exist, or exist only temporarily. They will be at best footnotes of history, not core elements of a taxonomy. Level 4 is what I would call a vehicle capable of “unmanned” operation — driving with nobody inside. This enables most of the interesting applications of robocars. Here’s why the other levels are less interesting:

Levels 0 and 1 — Manual or ADAS-improved

Levels 0 and 1 refer to existing technology. We don’t really need new terms for our old cars. Level 2 is perhaps best described as a more advanced version of level 1, and that transition has already taken place.

Level 2 — Supervised Autopilot

Supervised autopilots are real. This is what Tesla sells, and many others have similar offerings. They are working in one of two ways. The first is the intended way, with full time supervision. This is little more than a more advanced cruise control, and may not even be as relaxing. The second way is what we’ve seen happen with Tesla — a car that needs supervision, but is so good at driving that supervisors get complacent and stop supervising. They want a full self-driving car but don’t have it, so they pretend they do. Many are now saying that this makes the idea of supervised autopilot too dangerous to deploy. The better you make it, the more likely it can lull people into bad activity.

Update: One day after I wrote this, it was revealed that NHTSA shut down comma.ai’s efforts to build an aftermarket autopilot citing these concerns, among others.

Level 3 — Standby driver

This level is really a variation of Level 4, but the vehicle needs the ability to call upon a driver who is not paying attention and get them to take control with 10 to 60 seconds of advance warning. Many people don’t think this can be done safely. When Google experimented with it in 2013, they concluded it was not safe, and decided to take the steering wheel entirely out of their experimental vehicles.
Even if Level 3 is a real thing, it will be short lived as people see an unmanned capable vehicle. And Level 4 vehicles will offer controls for special use, just not transition while moving.

Level 5 — Drive absolutely everywhere

SAE,[...]
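If the level numbers carry so little information, what does usefully describe a vehicle? Roughly the two axes discussed above: whether it can run unmanned, and in what operating domain. A small illustrative sketch — the field choices are mine, not any official schema:

from dataclasses import dataclass, field

@dataclass
class OperatingDomain:
    """Where and when the system actually works -- the axis the level numbers ignore."""
    road_types: set = field(default_factory=set)   # e.g. {"freeway", "urban"}
    regions: set = field(default_factory=set)      # mapped / approved areas
    max_speed_kph: float = 0.0
    weather: set = field(default_factory=set)      # e.g. {"dry", "rain"}

@dataclass
class VehicleCapability:
    unmanned_capable: bool        # the question that actually matters ("level 4")
    needs_supervision: bool       # true for autopilot-style systems
    domain: OperatingDomain

autopilot_style = VehicleCapability(
    unmanned_capable=False, needs_supervision=True,
    domain=OperatingDomain({"freeway"}, {"most highways"}, 140.0, {"dry", "light rain"}))

campus_shuttle = VehicleCapability(
    unmanned_capable=True, needs_supervision=False,
    domain=OperatingDomain({"private road"}, {"one mapped campus"}, 40.0, {"dry"}))

# Two very different products; a single level number would hide most of this.
print(autopilot_style.unmanned_capable, campus_shuttle.domain.regions)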



Our routers need to remove the "internet" from the "internet of things" to stop DDOS

Sun, 23 Oct 2016 14:33:44 -0700

I frequently say that there is no “internet of things.” That’s a marketing phrase for now. You can’t go buy a “thing” and plug it into the “internet of things.” IoT is still interesting because underneath the name is a real revolution in the way that computing, sensing and communications are getting cheaper, smaller and less power-hungry. New communications protocols are also doing interesting things.

We learned a lesson on Friday though, about why using the word “internet” is its own mistake. The internet — one of the world’s greatest inventions — was created as a network of networks where anything could talk to anything, and it was useful for this to happen. Later, for various reasons, we moved to putting most devices behind NATs and firewalls to diminish this vision, but the core idea remains.

Attackers on Friday made use of a growing collection of low cost IoT devices with low security to mount a DDOS attack on Dyn’s domain name servers, shutting off name lookup for some big sites. While not the only source of the attack, a lot of attention has come to certain Chinese brands of IP based security cameras and baby monitors. To make them easy to use, they are designed with very poor security, and as a result they can be hijacked and put into botnets to do DDOS — recruiting a million vulnerable computers to all overload some internet site or service at once.

Most applications for small embedded systems — the old and less catchy name of the “internet of things” — aren’t at all in line with the internet concept. They have no need or desire to be able to talk to the whole world the way your phone, laptop or web server do. They only need to talk to other local devices, and sometimes to cloud servers from their vendor. We are going to see billions of these devices connected to our networks in the coming years, perhaps hundreds of billions. They are going to be designed by thousands of vendors. They are going to be cheap and not that well made. They are not going to be secure, and little we can do will change that. Even efforts to make punishments for vendors of insecure devices won’t change that.

So here’s an alternative; a long term plan for our routers and gateways to take the internet out of IoT.

Our routers should understand that two different classes of devices will connect to them. The regular devices, like phones and laptops, should connect to the internet as we expect today. There should also be a way to know that a connecting device does not want regular internet access, and not to give it. One way to do that is for the devices to know about this, and to convey how much access they need when they first connect. One proposal for this is my friend Eliot Lear’s MUD proposal. Unfortunately, we can’t count on devices to do this. We must limit stupid devices and old devices too.  read more »
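A minimal sketch of that gateway-side idea, assuming each “thing” (or a MUD-style description the router fetches for it) declares the few external hosts it needs, and everything else defaults to local-only. The device classes, addresses and vendor hostnames are invented for illustration:

# Sketch of a home gateway policy: "things" get local traffic plus their declared
# vendor endpoints; only general-purpose devices get the open internet.
LOCAL_NETS = ("10.", "192.168.")

DEVICE_POLICY = {
    # MAC address -> (class, allowed external hosts) -- all values invented
    "aa:bb:cc:01:02:03": ("thing", {"firmware.camera-vendor.example", "cloud.camera-vendor.example"}),
    "aa:bb:cc:04:05:06": ("general", None),   # a laptop: unrestricted
}

def allow_outbound(src_mac, dst_host, dst_ip):
    """Decide whether the gateway forwards a new outbound connection."""
    cls, allowed = DEVICE_POLICY.get(src_mac, ("thing", set()))  # unknown devices default to "thing"
    if cls == "general":
        return True
    if dst_ip.startswith(LOCAL_NETS):     # things may always talk locally
        return True
    return dst_host in allowed            # ...and only to their declared vendor hosts

print(allow_outbound("aa:bb:cc:01:02:03", "cloud.camera-vendor.example", "203.0.113.9"))  # True
print(allow_outbound("aa:bb:cc:01:02:03", "some-ddos-target.example", "198.51.100.7"))    # False

Inbound connections would get the same treatment, and treating unknown devices as things by default is the conservative choice.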




Vendors push back on California Robocar regulations - plus Tesla and Apple news

Sat, 22 Oct 2016 12:19:13 -0700

California Hearings

Wednesday, California held hearings on the latest draft of their regulations. The new regulations heavily incorporate the new NHTSA guidelines released last month, and now incorporate language on the testing and deployment of unmanned vehicles. The earlier regulations caused consternation because they correctly identified that nobody had sufficient understanding of unmanned vehicle operations to write regulations, but incorrectly proceeded to forbid those vehicles until later. Once you ban something, it’s very hard to un-ban it. The new approach does not ban the vehicles, but attempts instead to write regulations for them that are premature. Comment from developers of the vehicles reflected sentiment that all the regulations are premature.

California worked together with NHTSA on their regulations, and incorporated them. In particular, while NHTSA’s regulations lay out a 15 point list of functional domains that creators of vehicles should certify, the federal regulations technically declare this certification to be optional. A vendor in submitting a report can explicitly state they decline to certify most of the items. California suggests that this certification might be mandatory here. For all my criticism of NHTSA’s plan, they do have an understanding that it is still far too early to be writing detailed rules for vehicles that don’t yet exist, and left these avenues for change and disagreement within their regulations. The avenues are not great — I feel that vendors will be concerned that truly treating the regulations as voluntary will be done at their peril — but at least they exist.

Several vendors also pointed out the serious problems with traditional regulatory timelines and the speed of development of computer technologies. The California regulations may require that a car be tested for a year before it is deployed. On the surface that sounds normal by old standards, but the reality of development is very different. Pretty much all the vendors I know are producing new builds of their vehicle software and testing them out on the roads the next day — with trained safety drivers behind the wheel. The software goes through extensive “regression testing,” running through every tricky situation the team has encountered anywhere, as well as simulated situations, but the safety driver is there to deal with any problem not found with that testing. Vendors won’t release into production cars with only one night of testing, but neither can they wait a year. This is particularly true because in the early days of this technology, new problems will be found during deployment, and you want to get the fixes out on the road as quickly as is safe to do. An arbitrary timeline makes no sense.

This is just the start of the problems. While one may argue that it was always going to be hard for startups and tinkerers to develop these cars, these regulations (and the federal ones) put more nails in the coffin of the small innovator. The amount of bureaucracy, the size of the insurance bonds and many other factors will make it hard for teams the size of the DARPA challenge teams who kickstarted this technology and made it real to actually play in the game. The auto industry has a long history of allowing tinkerers to innovate, even at the cost of relaxing safety requirements applied to them. We may end up with a world where only the big players can play at all, and we know that this is generally not good at all for the pace of innovation.
Delivery Robots

The new regulations allowing unmanned vehicles might seem to open doors for delivery robots like we’re working on at Starship. Unfortunately they seem aimed primarily at large vehicles. Since California rules define the sidewalk as part of [...]



Most voting is about the next election, not this one.

Sun, 16 Oct 2016 18:25:41 -0700

When people vote, what do they think it will accomplish? How does this affect how they vote, and how should it? My apologies for more of this in a season when our social media are overwhelmed with politics, but in a lot of the postings I see about voting plans, I see different implicit views on just what the purpose of voting is. The main focus will be on the vote for US President.

The vast majority of people will vote in non-contested states. The logic is different in the “swing” states where all the campaign attention is. In a non-contested state, there is essentially zero chance your vote will affect the result of the election. If you’re voting thinking you are exerting your small power to have a say in who wins, you are deluding yourself. Your vote does one, and only one thing — it changes the popular vote totals that are published and looked at by some people.

You will change the total for the nation, your state, and some will even look at the totals in your region. For minor party candidates, having a higher vote total — in particular reaching 5% — can also make a giant difference by giving access to federal campaign funding, which can make a serious difference in the funding level for those parties.

Voters should ask themselves, whose popular vote total do they want to increase? Some logic suggests that it makes more sense to vote for a minor party that you support. Not because they will win, but because you will create a larger proportionate increase in their total. One more vote for a Republican or Democrat will be barely noticed. One more vote for a minor party will also on its own make no difference, but proportionately it may be 10 times or more greater.

It’s for the next election, not this one

You don’t increase the popular vote totals to affect this election. You do it to affect the next one. Supporting a party makes other supporters realize they are not alone. It makes them just a bit more likely to join the cause, if they believe in it. Most voters don’t understand this “next election” principle, and so while a minor party remains too small to win or affect the election, they are less likely to support it.

This is how most movements go from being small to being large. When a protest movement is small, people are afraid to show their support. When they see a real crowd march in the square, they are now more likely to join the crowd and to let the world see how much support there really is.

As such, the particular platform planks and candidate quirks are almost entirely irrelevant for the non-swing voter. When you’re voting for the next election, you are really supporting only the party and its broad platform, or a basic overall impression of a candidate. I often see voters say, “I could not vote for a candidate who supports X” but they do not realize that is not what they are doing.

The minor parties are particularly bad at this. Most of them like to pretend they are just like major parties. They nominate candidates based on what they say or stand for. They create detailed party platforms. This is an error. A detailed platform is only a reason for people to vote against you. Detailed platforms are only for candidates who might actually have a shot at implementing their platform. Minor party candidates take it as gospel that they should never admit that they can’t win, even though any rational person knows it quite clearly.
The reality is that you can know you can’t win the current election, but can more reasonably hope you can step higher and get within range of winning in a future election. Only when this happens should you act like a major party. You almost never see minor candidates say the truth: “Vote for me,[...]



Yikes - even Barack Obama wants to solve robocar "Trolley Problems" now

Thu, 13 Oct 2016 21:31:10 -0700

I had hoped I was done ranting about our obsession with what robocars will do in no-win “who do I hit?” situations, but this week, even Barack Obama in his interview with Wired opined on the issue, prompted by my friend Joi Ito from the MIT Media Lab. (The Media Lab recently ran a misleading exercise asking people to pretend they were a self-driving car deciding who to run over.) I’ve written about the trouble with these problems and even proposed a solution but it seems there is still lots of need to revisit this. Let’s examine why this problem is definitely not important enough to merit the attention of the President or his regulators, and how it might even make the world more dangerous.

We are completely fascinated by this problem

Almost never do I give a robocar talk without somebody asking about this. Two nights ago, I attended another speaker’s talk and he got the question as his 2nd one. He looked at his watch and declared he had won a bet with himself about how quickly somebody would ask. It has become the #1 question in the mind of the public, and even Presidents. It is not hard to understand why. Life or death issues are morbidly attractive to us, and the issue of machines making life or death decisions is doubly fascinating. It’s been the subject of academic debates and fiction for decades, and now it appears to be a real question. For those who love these sorts of issues, and even those who don’t, the pull is inescapable.

At the same time, even the biggest fan of these questions, stepping back a bit, would agree they are of only modest importance. They might not agree with the very low priority that I assign, but I don’t think anybody feels they are anywhere close to the #1 question out there. As such we must realize we are very poor at judging the importance of these problems. So each person who has not already done so needs to look at how much importance they assign, and put an automatic discount on this. This is hard to do. We are really terrible at statistics sometimes, and dealing with probabilities of risk. We worry much more about the risks of a terrorist attack on a plane flight than we do about the drive to the airport, but that’s entirely wrong. This is one of those situations, and while people are free to judge risks incorrectly, academics and regulators must not. Academics call this the Law of triviality. A real world example is terrorism. The risk of that is very small, but we make immense efforts to prevent it and far smaller efforts to fight much larger risks.

These situations are quite rare, and we need data about how rare they are

In order to judge the importance of these risks, it would be great if we had real data. All traffic fatalities are documented in fairly good detail, as are many accidents. A worthwhile academic project would be to figure out just how frequent these incidents are. I suspect they are extremely infrequent, especially ones involving fatality. Right now fatalities happen about every 2 million hours of driving, and the majority of those are single car fatalities (with fatigue and alcohol among leading causes.) I have still yet to read a report of a fatality or serious injury that involved a driver having no escape, but the ability to choose what they hit with different choices leading to injuries for different people. I am not saying they don’t exist, but first examinations suggest they are quite rare. Probably hundreds of billions of miles, if not more, between them.
Those who want to claim they are important have the duty to show that they are more common than these intuitions suggest. Frankly, I think if there were accidents where the driver made a deliberate decision to run down one person to save [...]
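Some very rough arithmetic shows why “hundreds of billions of miles” is plausible. The average speed and the fraction of fatal crashes that involve a genuine constrained choice are pure assumptions here, chosen only to show the order of magnitude:

# Back-of-envelope rarity estimate; everything after the first line is an assumption.
hours_per_fatality = 2_000_000      # from the post: roughly one fatality per 2 million driving hours
avg_speed_mph = 30                  # assumed blended average speed
miles_per_fatality = hours_per_fatality * avg_speed_mph        # ~60 million miles

dilemma_fraction = 1 / 5000         # assume 1 in 5,000 fatal crashes offers a real "choose who to hit" choice
miles_per_dilemma = miles_per_fatality / dilemma_fraction      # ~300 billion miles

print(f"{miles_per_fatality:,.0f} miles per fatality")
print(f"{miles_per_dilemma:,.0f} miles per trolley-style dilemma (under these assumptions)")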



The social networks could hold great political power due to GOTV. Should they?

Sun, 02 Oct 2016 19:49:03 -0700

The social networks have access (or more to the point can give their users access) to an unprecedented trove of information on political views and activities. Could this make a radical difference in affecting who actually shows up to vote, and thus decide the outcome of elections?

I’ve written before about how the biggest factor in US elections is the power of GOTV - Get Out the Vote. US Electoral turnout is so low — about 60% in Presidential elections and 40% in off-year — that the winner is determined by which side is able to convince more of their weak supporters to actually show up and vote. All those political ads you see are not going to make a Democrat vote Republican or vice versa, they are going to scare a weak supporter to actually show up. It’s much cheaper, in terms of votes per dollar (or volunteer hour) to bring in these weak supporters than it is to swing a swing voter.

The US voter turnout numbers are among the worst in the wealthy world. Much of this is blamed on the fact the US, unlike most other countries, has voter registration; effectively 2 step voting. Voter registration was originally implemented in the USA as a form of vote suppression, and it’s stuck with the country ever since. In almost all other countries, some agency is responsible for preparing a list of citizens and giving it to each polling place. There are people working to change that, but for now it’s the reality. Registration is about 75%, Presidential voting about 60%. (Turnout of registered voters is around 80%.)

Scary negative ads are one thing, but one of the most powerful GOTV forces is social pressure. Republicans used this well under Karl Rove, working to make social groups like churches create peer pressure to vote. But let’s look at the sort of data sites like Facebook have or could have access to:

- They can calculate a reasonably accurate estimate of your political leaning with modern AI tools and access to your status updates (where people talk politics) and your friend network, along with the usual geographic and demographic data
- They can measure the strength of your political convictions through your updates
- They can bring in the voter registration databases (which are public in most states, with political use allowed on the data. Commercial use is forbidden in a portion of states but this would not be commercial.)
- In many cases, the voter registration data also reveals if you voted in prior elections
- Your status updates and geographical check-ins and postings will reveal voting activity. Some sites (like Google) that have mobile apps with location sensing can detect visits to polling places.

Of course, for the social site to aggregate and use this data for its own purposes would be a gross violation of many important privacy principles. But social networks don’t actually do (too many) things; instead they provide tools for their users to do things. As such, while Facebook should not attempt to detect and use political data about its users, it could give tools to its users that let them select subsets of their friends, based only on information that those friends overtly shared. On Facebook, you can enter the query, “My friends who like Donald Trump” and it will show you that list. They could also let you ask “My Friends who match me politically” if they wanted to provide that capability.
Now imagine more complex queries aimed specifically at GOTV, such as: “My friends who match me politically but are not scored as likely to vote” or “My friends who match me politically and are not registered to vote.” Possibly adding “Sorted by the closeness of our connection” which is something the[...]
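In code, the kind of user-driven query imagined here is just a filter and sort over information friends have overtly shared. Everything below — the field names, the scores, the closeness metric — is hypothetical:

from dataclasses import dataclass

@dataclass
class Friend:
    name: str
    shared_lean: str        # only what the friend has overtly shared
    registered: bool
    likely_to_vote: float   # hypothetical 0..1 turnout score
    closeness: float        # hypothetical tie-strength score

def gotv_targets(friends, my_lean):
    """Friends who match me politically but look unlikely to vote, closest first."""
    matches = [f for f in friends
               if f.shared_lean == my_lean and f.registered and f.likely_to_vote < 0.5]
    return sorted(matches, key=lambda f: f.closeness, reverse=True)

friends = [Friend("Ana", "purple", True, 0.2, 0.9),
           Friend("Raj", "purple", True, 0.8, 0.7),
           Friend("Lee", "orange", True, 0.1, 0.4)]
print([f.name for f in gotv_targets(friends, "purple")])   # ['Ana']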



NHTSA Regulations part 4: Crashes, Training, Certification, State Law, Operation, Validation and Autopilots

Fri, 30 Sep 2016 12:27:30 -0700

After my initial reactions and Overall Analysis here is a point by point consideration of the second set of elements from NHTSA’s 15 point certification list for robocars. See my series for other articles or the first half of the list.

Crashworthiness

In this section, they remind vendors they still need to meet the same standards as regular cars do. We are not ready to start removing heavy passive safety systems just because the vehicles get in fewer crashes. In the future we might want to change that, as those systems can be 1/3 of the weight of a vehicle.

They also note that different seating configurations (like rear facing seats) need to protect as well. It’s already the case that rear facing seats will likely be better in forward collisions. Face-to-face seating may present some challenges in this environment, as it is less clear how to deploy the airbags. Taxis in London often feature face-to-face seating, though that is less common in the USA. Will this be possible under these regulations?

The rules also call for unmanned vehicles to absorb energy like existing vehicles. I don’t know if this is a requirement on unusual vehicle design for regular cars or not. (If it were, it would have prohibited SUVs with their high bodies that can cause a bad impact with a low-body sports-car.)

Consumer Education and Training

This seems like another mild goal, but we don’t want a world where you can’t ride in a taxi unless you are certified as having taken a training course. Especially if it’s one for which you have very little to do. These rules are written more for people buying a car (for whom training can make sense) than those just planning to be a passenger.


Registration and Certification

This section imagines labels for drivers. It’s pretty silly and not very practical. Is a car going to have a sticker saying “This car can drive itself on Elm St. south of Pine, or on highway 101 except in Gilroy?” There should be another way, not labels, that this is communicated, especially because it will change all the time.

Post-Crash Behavior

This set is fairly reasonable — it requires a process describing what you do to a vehicle after a crash before it goes back into service.

Federal, State and Local Laws

This section calls for a detailed plan on how to assure compliance with all the laws. Interestingly, it also asks for a plan on how the vehicle will violate laws that human drivers sometimes violate. This is one of the areas where regulatory effort is necessary, because, strictly speaking, cars are not allowed to violate the law — doing things like crossing the double-yellow line to pass a car blocking your path.  read more »




NHTSA Regulations part 3: Data Sharing, Privacy, Safety, Security and HMI

Fri, 30 Sep 2016 11:49:55 -0700

After my initial reactions and Overall Analysis here is a point by point consideration of the elements from NHTSA’s 15 point certification list for robocars. See also the second half and the whole series. Let’s dig in:

Data Recording and Sharing

These regulations require a plan about how the vehicle keeps logs around any incident (while following privacy rules.) This is something everybody already does — in fact they keep logs of everything for now — since they want to debug any problems they encounter. NHTSA wants the logs to be available to NHTSA for crash investigation. NHTSA also wants recordings of positive events (the system avoided a problem.)

Most interesting is a requirement for a data sharing plan. NHTSA wants companies to share their logs with their competitors in the event of incidents and important non-incidents, like near misses or detection of difficult objects. This is perhaps the most interesting element of the plan, but it has seen some resistance from vendors. And it is indeed something that might not happen at scale without regulation. Many teams will consider their set of test data to be part of their crown jewels. Such test data is only gathered by spending many millions of dollars to send drivers out on the roads, or by convincing customers or others to voluntarily supervise while their cars gather test data, as Tesla has done. A large part of the head-start that leaders have in this field is the amount of different road situations they have been able to expose their vehicles to. Recordings of mundane driving activity are less exciting and will be easier to gather. Real world incidents are rare and gold for testing.

The sharing is not as golden, because each vehicle will have different sensors, located in different places, so it will not be easy to adapt logs from one vehicle directly to another. While a vehicle system can play its own raw logs back directly to see how it performs in the same situation, other vehicles won’t readily do that. Instead this offers the ability to build something that all vendors want and need, and the world needs, which is a high quality simulator where cars can be tested against real world recordings and entirely synthetic events. The data sharing requirement will allow the input of all these situations into the simulator, so every car can test how it would have performed. This simulation will mostly be at the “post perception level” where the car has (roughly) identified all the things on the road and is figuring out what to do with them, but some simulation could be done at lower levels.

These data logs and simulator scenarios will create what is known as a regression test suite. You test your car in all the situations, and every time you modify the software, you test that your modifications didn’t break something that used to work. It’s an essential tool. In the history of software, there have been shared public test suites (often sourced from academia) and private ones that are closely guarded. For some time, I have proposed that it might be very useful if there were a public and open source simulator environment which all teams could contribute scenarios to, but I always expected most contributions would come from academics and the open source community. Without this rule, the teams with the most test miles under their belts might be less willing to contribute. Such a simulator would help all teams and level the playing field.
It would allow small innovators to even build and test prototype ideas entirely in simulator, with very low cost and zero risk compared to building it in physical hardware. This is a great example of where [...]
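Conceptually, a regression suite over shared scenarios is simple: replay each recorded or synthetic situation at the post-perception level and check that the planner’s output stays within the scenario’s pass criteria. A toy sketch with invented scenario and planner interfaces:

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    """One shared test case at the post-perception level (all fields invented)."""
    name: str
    obstacles: List[dict]     # e.g. [{"kind": "pedestrian", "gap_m": 12.0}]
    min_safe_gap_m: float     # pass criterion for this scenario

def run_suite(plan: Callable[[List[dict]], dict], scenarios: List[Scenario]) -> List[str]:
    """Replay every scenario through the planner; return the names of the failures."""
    failures = []
    for sc in scenarios:
        decision = plan(sc.obstacles)   # e.g. {"action": "brake", "resulting_gap_m": 14.0}
        if decision.get("resulting_gap_m", 0.0) < sc.min_safe_gap_m:
            failures.append(sc.name)
    return failures

def toy_planner(obstacles):
    """Stand-in planner: always brakes and keeps whatever gap it was given."""
    gap = min(o["gap_m"] for o in obstacles) if obstacles else 1e9
    return {"action": "brake", "resulting_gap_m": gap}

suite = [Scenario("collapsed road after quake", [{"kind": "road_collapse", "gap_m": 25.0}], 10.0),
         Scenario("child behind parked van", [{"kind": "pedestrian", "gap_m": 6.0}], 8.0)]
print(run_suite(toy_planner, suite))    # ['child behind parked van'] -- a regression to investigate

Every time the software changes, the whole suite runs again; a scenario that used to pass and now fails is caught before the build ever reaches a road.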



Detailed analysis of NHTSA robocar regulations: Overview

Wed, 28 Sep 2016 23:50:58 -0700

The recent Federal Automated Vehicles Policy is long. (My same-day analysis is here and the whole series is being released.) At 116 pages (to be fair, less than half is policy declarations and the rest is plans for the future and associated materials) it is much larger than many of us were expecting. The policy was introduced with a letter attributed to President Obama, where he wrote:

There are always those who argue that government should stay out of free enterprise entirely, but I think most Americans would agree we still need rules to keep our air and water clean, and our food and medicine safe. That’s the general principle here. What’s more, the quickest way to slam the brakes on innovation is for the public to lose confidence in the safety of new technologies. Both government and industry have a responsibility to make sure that doesn’t happen. And make no mistake: If a self-driving car isn’t safe, we have the authority to pull it off the road. We won’t hesitate to protect the American public’s safety.

This leads in to an unprecedented effort to write regulations for a technology that barely exists and has not been deployed beyond the testing stage. The history of automotive regulation has been the opposite, and so this is a major change. The key question is what justifies such a big change, and the cost that will come with it.

Make no mistake, the cost will be real. The cost of regulations is rarely known in advance but it is rarely small. Regulations slow all players down and make them more cautious — indeed it is sometimes their goal to cause that caution. Regulations result in projects needing “compliance departments” and the establishment of procedures and legal teams to assure they are complied with. In almost all cases, regulations punish small companies and startups more than they punish big players. In some cases, big players even welcome regulation, both because it slows down competitors and innovators, and because they usually also have skilled governmental affairs teams and lobbying teams which are able to subtly bend the regulations to match their needs. This need not even be nefarious, though it often is. Companies that can devote a large team to dealing with regulations, those who can always send staff to meetings and negotiations and public comment sessions will naturally do better than those which can’t.

The US has had a history of regulating after the fact. Of being the place where “if it’s not been forbidden, it’s permitted.” This is what has allowed many of the most advanced robocar projects to flourish in the USA. The attitude has been that industry (and startups) should lead and innovate. Only if the companies start doing something wrong or harmful, and market forces won’t stop them from being that way, is it time for the regulators to step in and make the errant companies do better. This approach has worked far better than the idea that regulators would attempt to understand a product or technology before it is deployed, imagine how it might go wrong, and make rules to keep the companies in line before any of them have shown evidence of crossing a line.

In spite of all I have written here, the robocar industry is still young. There are startups yet to be born which will develop new ideas yet to be imagined that change how everybody thinks about robocars and transportation. These innovative teams will develop new concepts of what it means to be safe and how to make things safe. Their ideas will be obvious only well after the fact.
Regulations and standards don’t deal well with that. They can only encode conven[...]



Critique of NHTSA's newly released regulations

Mon, 19 Sep 2016 22:58:12 -0700

The long awaited list of recommendations and potential regulations for Robocars has just been released by NHTSA, the federal agency that regulates car safety and safety issues in car manufacture. Normally, NHTSA does not regulate car technology before it is released into the market, and the agency, while it says it is wary of slowing down this safety-increasing technology, has decided to do the unprecedented — and at a whopping 115 pages.

Broadly, this is very much the wrong direction. Nobody — not Google, Uber, Ford, GM or certainly NHTSA — knows the precise form these cars will have when deployed. Almost surely something will change from our existing knowledge today. They know this, but still wish to move. Some of the larger players have pushed for regulation. Big companies like certainty. They want to know what the rules will be before they invest. Startups thrive better in the chaos, making up the rules as we go along.

NHTSA hopes to define “best practices” but the best anybody can do in 2016 is lay down existing practices and conventional wisdom. The entirely new methods of providing safety that are yet to be invented won’t be in such a definition.

The document is very detailed, so it will generate several blog posts of analysis. Here I present just initial reactions. Those reactions are broadly negative. This document is too detailed by an order of magnitude. Its regulations begin today, but fortunately they are also accepting public comment. The scope of the document is so large, however, that it seems extremely unlikely that they would scale back this document to the level it should be at. As such, the progress of robocar development in the USA may be seriously negatively affected.

Vehicle performance guidelines

The first part of the regulations is a proposed 15 point safety standard. It must be certified (by the vendor) that the car meets these standards. NHTSA wants the power, according to an Op-Ed by no less than President Obama, to be able to pull cars from the road that don’t meet these safety promises.

1. Data Recording and Sharing
2. Privacy
3. System Safety
4. Vehicle Cybersecurity
5. Human Machine Interface
6. Crashworthiness
7. Consumer Education and Training
8. Registration and Certification
9. Post-Crash Behavior
10. Federal, State and Local Laws
11. Operational Design Domain
12. Object and Event Detection and Response
13. Fall Back (Minimal Risk Condition)
14. Validation Methods
15. Ethical Considerations

As you might guess, the most disturbing is the last one. As I have written many times, the issue of ethical “trolley problems” where cars must decide between killing one person or another is a philosophy class tool, not a guide to real world situations. Developers should spend as close to zero effort on these problems as possible, since they are not common enough to warrant special attention, were it not for our morbid fascination with machines making life or death decisions in hypothetical situations. Let the policymakers answer these questions if they want to; programmers and vendors don’t.

For the past couple of years, this has been a game that’s kept people entertained and ethicists employed. The idea that government regulations might demand solutions to these problems before these cars can go on the road is appalling. If these regulations are written this way, we will delay saving lots of real lives in the interest of debating which highly hypothetical lives will be saved or harmed in ridiculously rare situations.
NHTSA’s rules demand that ethical decisions be “made consciously and intentionally.” Algorithms must be [...]



The incredible Cheapness of Being Parked

Mon, 19 Sep 2016 13:43:51 -0700

Some people have wondered about my forecast in the spreadsheet on robotaxi economics about the very low parking costs I have predicted. I wrote about most of the reasons for this in my 2007 essay on Robocar Parking, but let me expand and add some modern notes here.

The Glut of Parking

Today, researchers estimate there are between 3 and 8 parking spots for every car in the USA. The number 8 includes lots of barely used parking (all the shoulders of all the rural roads, for example) but the value of 3 is not unreasonable. Almost all working cars have a spot at their home base and a spot at their common destination (the workplace). There are then lots of other places (streets, retail lots, etc.) to find that 3rd spot. It's probably an underestimate. We can't use all of these at once, but we're going to get a great deal more efficient at it.

Today, people must park within a short walk of their destination. Nobody wants to park a mile away. Parking lots, however, need to be sized for peak demand. Shopping malls are surrounded by parking that is only ever used during the Christmas shopping season. Robocars will "load balance," so that if one lot is full, a spot in an emptier lot farther away is just fine.

Small size and Valet Density

When robocars need to park, they'll do it like the best parking valets you've ever seen. They don't even need to leave space for the valet to open the door to get out. (The best valets get close by getting out the window!) Because the cars can move in concert, a car at the back can get out almost as quickly as one at the front. No fancy communications network is needed; all you need is a simple rule: if you boxed somebody in, and they turn on their lights and move an inch towards you, you move an inch yourself (and so on with those who boxed you in) to clear a path. (A toy sketch of this cascade rule appears below.) Already, you've got 1.5x to 2x the density of an ordinary lot.

I forecast that many robotaxis will be small, meant for 1-2 people. A car like that, 4' by 12', would occupy under 50 square feet of space. Today's parking lots tend to allocate about 300 square feet per car. With these small cars you're talking 4 to 6 times as many cars in the same space. You do need some spare space for moving around, but less than humans need.

When we're talking about robotaxis, we're talking about sharing. Much of the time robotaxis won't park at all; they will be off to pick up their next passenger. A smaller fraction of them will be waiting or parked at any given time. My conservative prediction is that one robotaxi could replace 4 cars (some estimate up to 10, but they're overdoing it). So at a rough guess we replace 1,000 cars, 900 of which are parked, with 250 cars, only 150 of which are parked at slow times. (Almost none are parked during the busy times.)

Many more spaces available for use

Robocars don't park, they "stand," which means we can let them wait in all sorts of places we don't let you park. In front of hydrants. In front of driveways. In driveways. A car in front of a hydrant should be gone at the first notification of a fire or sound of a siren. A car in front of your driveway should be gone the minute your garage opens or, if your phone signals your approach, before you get close to your house. Ideally, you won't even know it was there. You can also explicitly rent out your driveway space for money if you wish. (You could rent your garage too, but the rate might be so low you will prefer to use it to add a new room to your house [...]
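Here is a minimal sketch, in Python, of the box-out cascade rule described above. Everything in it is illustrative: the packed lot is modeled as a single column of cars with the exit at the front, and "move an inch" is abstracted into a request that propagates to whoever is blocking you. No real vehicle protocol is implied.

```python
# Toy model of the valet "cascade" rule: a single column of robocars parked
# nose-to-tail, exit at index 0.  A car that wants to leave signals the car
# blocking it; that car signals the car blocking *it*, and so on.  Once the
# chain reaches the exit, everyone rolls forward and the leaver departs.

def cars_that_must_move(column, leaver):
    """Return the blockers (nearest the leaver first) that must shuffle
    forward so that `leaver` can reach the exit at the front of `column`."""
    idx = column.index(leaver)
    return column[:idx][::-1]

def depart(column, leaver):
    """Simulate the cascade: blockers pull forward, the leaver exits,
    and the remaining cars close ranks again."""
    chain = cars_that_must_move(column, leaver)
    for car in chain:
        print(f"{car} nudges forward to open a path")
    column.remove(leaver)
    print(f"{leaver} departs; {len(chain)} cars shuffled, column re-packs")
    return column

if __name__ == "__main__":
    lot_column = ["car_A", "car_B", "car_C", "car_D"]   # car_A sits at the exit
    depart(lot_column, "car_C")   # only car_A and car_B need to move
```

The point of the sketch is that a departure only disturbs the handful of cars between the leaver and the exit, which is why a lot with no aisles and no door clearance can reach the 1.5x to 2x density gain mentioned above.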



Tesla Radar, MobilEye fight and the Comma One $1,000 add-on-box

Sat, 17 Sep 2016 17:13:51 -0700

Tesla's spat with MobilEye reached a new pitch this week, and Tesla announced a new release of their autopilot and new plans. As reported here earlier, MobilEye announced during the summer that they would not be supplying the new and better versions of their EyeQ system to Tesla. Since that system was and is central to the operation of the Tesla autopilot, they may have been surprised that MBLY stock took a big hit after that announcement (though it recovered for a while and is now back down) and TSLA did not.

Statements and documents now show a nastier battle, with MobilEye intimating they were worried about Tesla using their tool in an unsafe way, invoking all the debate about the fatality and other crashes allegedly caused by people who were lulled into not bothering to supervise the autopilot. Tesla says that instead they have been developing their own advanced vision tools, and that MobilEye was afraid of that and told Tesla that if they wanted more EyeQ chips, they would need to halt the competing project and commit to ME. That's a nasty spat.

Tesla's own efforts represent a threat to MobilEye from the growing revolution in neural network pattern matchers. Computer vision is going through a big revolution. MobilEye is a big player in that revolution, because their ASICs do both standard machine vision functions and can run neural networks. An ASIC will beat a general purpose processor on cost, speed and power, but only if the ASIC's abilities were designed to solve those particular problems. Since it takes years to bring an ASIC to production, you have to aim right. MobilEye aimed pretty well, but at the same time lots of research out there is trying to aim even better, or to do things with more general purpose chips like GPUs. Soon we will see ASICs aimed directly at neural network computations.

To solve the problem with neural networks, you need the computing horsepower, you need well designed deep network architectures, and you need the right training data, lots of it. Tesla and ME are both gaining lots of training data. Many companies, including Nvidia, Intel and others, are working on the hardware for neural networks. Most people would point to Google as the company with the best skills in architecting the networks, though there are many doing interesting work there. (Google's DeepMind built the tools that beat humans at the seemingly impossible game of Go, for example.) It's definitely a competitive race.

Using Radar

While Tesla works on their vision systems, they also announced a plan to make much more use of radar. That's an interesting plan. Radar has been the poor 3rd-class sensor of the robocar, after LIDAR and vision. Everybody uses it; you would be crazy not to, unless you need to be very low cost. Radar sees further than the other systems, and it tells you immediately how fast any radar target you see is moving relative to you. It sees through fog and other weather, and it can even see under and around big cars in front of you as it bounces off the road and other objects. It's really good at licence plates as well.

What radar doesn't have is high resolution. Today's automotive radars have gotten good enough to tell you what lane an object like another car is in, but they are not designed to have any vertical resolution: you will get radar returns from a stalled car ahead of you on the road and from a sign above that lane, and not be sure of the difference. You need your car to avoid a stalled car in your lane, but you[...]
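A minimal sketch of the ambiguity described above, assuming a made-up radar return format: range, lateral offset, and a Doppler-derived relative speed, with no elevation at all. The point is only that a stationary overhead sign and a stalled car both come back as "stationary object in my lane," so radar alone cannot tell them apart; the field names, sign convention and thresholds are illustrative and do not describe any real sensor API.

```python
from dataclasses import dataclass

# Illustrative only: an automotive radar target with no vertical resolution.
# Relative (closing) speed comes from Doppler, as discussed above.
@dataclass
class RadarReturn:
    range_m: float          # distance to the target
    lane_offset_m: float    # lateral offset from our lane centre
    rel_speed_mps: float    # closing speed relative to our car (hypothetical sign convention)

def classify(ret: RadarReturn, ego_speed_mps: float, lane_half_width_m: float = 1.8) -> str:
    """Label a return as out-of-lane clutter, a moving vehicle, or the
    ambiguous 'stationary object in my lane' case (stalled car OR overhead sign)."""
    ground_speed = ego_speed_mps - ret.rel_speed_mps   # ~0 for anything stationary
    if abs(ret.lane_offset_m) > lane_half_width_m:
        return "out of lane"
    if abs(ground_speed) > 1.0:                        # clearly a moving vehicle
        return "moving vehicle in lane"
    return "stationary in lane: stalled car or overhead sign? needs vision/map to resolve"

if __name__ == "__main__":
    ego = 30.0   # our speed, m/s
    overhead_sign = RadarReturn(range_m=120, lane_offset_m=0.2, rel_speed_mps=30.0)
    stalled_car   = RadarReturn(range_m=90,  lane_offset_m=0.1, rel_speed_mps=30.0)
    for target in (overhead_sign, stalled_car):
        print(classify(target, ego))   # both print the same ambiguous label
```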



Robotaxi Economics

Thu, 08 Sep 2016 13:07:08 -0700

The vision of many of us for robocars is a world of less private car ownership and more use of robotaxis: on-demand ride service in a robocar. That's what companies like Uber are clearly pushing for, and probably Google, but several of the big car companies, including Mercedes, Ford and BMW among others, have also said they want to get there; in the case of Ford, without first making private robocars for their traditional customers.

In this world, what does it cost to operate these cars? How much might competitive services charge for rides? How much money will they make? What factors, including price, will they compete on, and how will that alter the landscape?

Here are some basic models of cost. I compare a low-cost 1-2 person robotaxi, a higher-end 1-2 person robotaxi, a 4-person traditional sedan robotaxi and the costs of ownership for a private car, the Toyota Prius 2, as calculated by Edmunds. An important difference is that the taxis are forecast to drive 50,000 miles/year (as taxis do) and wear out fully in 5 years. The private car is forecast to drive 15,000 miles/year (higher than the average for new cars, which is 12,000) and to have many years and miles of life left in it. As such the taxis are fully depreciated in this 5 year timeline, and the private car only partly.

Some numbers are speculative. I am predicting that the robotaxis will have an insurance cost well below today's cars, which pay about 6 cents/mile for liability insurance. The taxis will actually be self-insured, meaning this is the expected cost of any incidents. In the early days, this will not be true; the taxis will be safer, but the incidents will cost more until things settle down. As such the insurance prices are for the future.

This is a model of an early maturing market where the volume of robotaxis is fairly high (they are made in the low millions) and the safety record is well established. It's a world where battery prices and reliability have improved. It's a world where there is still a parking glut, before most surplus parking is converted to other purposes. Fuel is electric for the taxis, gasoline/hybrid for the Prius. The light vehicle is very efficient.

Maintenance is also speculative. Today's cars spend about 6 cents/mile, including 1 cent/mile for the tires. Electric cars are expected to have lower maintenance costs, but the totals here are higher because the taxi is going 250,000 miles, not 75,000 miles like the Prius. With this high level of maintenance and such smooth driving, I forecast low repair cost.

Parking is cheaper for the taxis for several reasons. First, they can freely move around looking for the cheapest place to wait, which will often be free city parking, or the cheapest advertised parking on the auction "spot" market. They do not need to park right where the passenger is going, as the private car does. They will park valet style, so the small cars will use less space and pay less too. Parking may actually be much cheaper than this, even free in many cases. Of course, many private car owners do not pay for parking overtly, so this varies a lot from city to city.

(You can view the spreadsheet directly on Google Docs at https://docs.google.com/spreadsheets/d/1dEstdodejZJbjPd2s4SJVGf8NpxMcgJmd5rIFKyWkVc/pubhtml?gid=0&single=true&widget=true&headers=false and download it to your own tool to play around with the model. Adjust my assumptions and report your own price estimates.)

The Pri[...]
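For readers who want to play with the arithmetic without opening the spreadsheet, here is a minimal cost-per-mile sketch in Python. Only the figures stated above (50,000 miles/year for 5 years on the taxi, 15,000 miles/year on the private car, and roughly 6 cents/mile each for liability insurance and maintenance on today's cars) come from the post; the vehicle prices, residual value, energy and parking costs are hypothetical placeholders to be replaced with your own assumptions.

```python
# Rough per-mile operating cost model, mirroring the structure of the
# spreadsheet described above.  Every number marked "placeholder" is a
# made-up assumption, not a figure from the post.

def cost_per_mile(vehicle_price, lifetime_miles, residual_value,
                  insurance_per_mile, maintenance_per_mile,
                  energy_per_mile, parking_per_mile):
    """Depreciation spread over the miles driven, plus the per-mile costs."""
    depreciation = (vehicle_price - residual_value) / lifetime_miles
    return (depreciation + insurance_per_mile + maintenance_per_mile
            + energy_per_mile + parking_per_mile)

# Robotaxi: 50,000 miles/year for 5 years, fully worn out (from the post).
robotaxi = cost_per_mile(
    vehicle_price=25_000,        # placeholder price for a small electric robotaxi
    lifetime_miles=50_000 * 5,
    residual_value=0,            # fully depreciated after 5 years
    insurance_per_mile=0.02,     # "well below" today's ~6 cents/mile (placeholder)
    maintenance_per_mile=0.05,   # placeholder, near today's ~6 cents/mile
    energy_per_mile=0.03,        # placeholder electricity cost
    parking_per_mile=0.01,       # placeholder, reflecting very cheap "standing"
)

# Private car: 15,000 miles/year, only partly depreciated over the same 5 years.
private = cost_per_mile(
    vehicle_price=26_000,        # placeholder new-car price
    lifetime_miles=15_000 * 5,
    residual_value=10_000,       # placeholder resale value after 5 years
    insurance_per_mile=0.06,     # today's rough liability figure from the post
    maintenance_per_mile=0.06,   # today's rough figure from the post
    energy_per_mile=0.07,        # placeholder gasoline/hybrid cost
    parking_per_mile=0.05,       # placeholder
)

print(f"robotaxi:    ${robotaxi:.2f}/mile")
print(f"private car: ${private:.2f}/mile")
```

Whatever placeholders you choose, the shape of the comparison is the point: the taxi spreads its depreciation over 250,000 miles rather than 75,000, so high utilization dominates the per-mile result.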



Museums in ruins and old buildings will take on new life with Augmented Reality

Wed, 07 Sep 2016 14:09:46 -0700

We're on the cusp of a new wave of virtual reality and augmented reality technology. The most exciting is probably the Magic Leap. I have yet to look through it, but friends who have describe it as hard to tell from actual physical objects in your environment. The Hololens (which I have looked through) is not that good, and has a very limited field of view, but it already shows good potential.

It's becoming easier and easier to create VR versions of both fictional and real environments. Every historical documentary show seems to include a nice model reconstructing what something used to look like, and this is going to get better and better with time.

This will be an interesting solution for many of the world's museums and historical sites. A few years from now, every visit to a ruin or historical building won't just include a boring and slow audioguide, but some AR glasses that let you see a model of what the building was really like in its glory. Not just a building: it should be possible to walk around ancient Rome or other towns and do this as well. With VR you'll be able to do that in your own home if you like, but you won't be able to walk very far in that space. (There are tricks that fool people into thinking they walked further, but they are just not the same as walking in the real space with the real geometry.) These systems will also be able to populate the space with recordings or animations of people in period costumes doing period things.

This is good news for historical museums. Many of them have very few actual interesting artifacts to see, so they end up being little more than placards and photos and videos and other multimedia presentations, things I could easily see on the museum web site; their only virtue is that I am reading the text and looking at the picture in the greatly changed remains of where it happened. These days, I tend to skip museums that have become little more than multimedia. But going to see the virtual recreation will be a different story, I predict.

Soon will be the time for museum and tourist organizations to start considering which spaces will be good for this. You don't need to restore or rebuild that old castle, as long as it's safe to walk around. You just need to instrument it with tracking sensors for the AR gear and build and refine the models. Over time, the resolution of the AR glasses will approach that of the eye, and the realism of the models will improve too. In time, many visitors will feel they got an experience very close to going back in time and seeing the place as it was.

(Video: https://www.youtube.com/embed/GmdXJy_IdNw)

Well, not quite as it was. It will be full of tourists from the future, including yourself. AR keeps them present, which is good because you don't want to bump into them. A more advanced system will cover the tourists in period clothing, or even replace their faces. You would probably light the space somewhat dimly to assure the AR can cover up what it needs to cover up, while still keeping enough good vision of the floor so you don't trip.

Of course, if you cover everything up with the AR, you could just do this in a warehouse, and that will happen too. You would need to reproduce the staircases of the recreated building but could possibly get away with producing very little else. As long as the other visitors don't walk through walls, the walls don't have to be there. [...]