Sun, 23 Oct 2016 14:33:44 -0700
I frequently say that there is no “internet of things.” That’s a marketing phrase for now. You can’t go buy a “thing” and plug it into the “internet of things.” IoT is still interesting because underneath the name is a real revolution from the way that computing, sensing and communications are getting cheaper, smaller and using less power. New communications protocols are also doing interesting things.
We learned a lesson on Friday though, about why using the word “internet” is its own mistake. The internet — one of the world’s greatest inventions — was created as a network of networks where anything could talk to anything, and it was useful for this to happen. Later, for various reasons, we moved to putting most devices behind NATs and firewalls to diminish this vision, but the core idea remains.
Attackers on Friday made use of a growing collection of low-cost, low-security IoT devices to mount a DDoS attack on Dyn’s domain name servers, shutting off name lookup for some big sites. While not the only source of the attack, a lot of attention has fallen on certain Chinese brands of IP-based security cameras and baby monitors. To make them easy to use, they are designed with very poor security, and as a result they can be hijacked and put into botnets to do DDoS — recruiting a million vulnerable computers to all overload some internet site or service at once.
Most applications for small embedded systems — the old and less catchy name of the “internet of things” — aren’t at all in line with the internet concept. They have no need or desire to be able to talk to the whole world the way your phone, laptop or web server do. They only need to talk to other local devices, and sometimes to cloud servers from their vendor. We are going to see billions of these devices connected to our networks in the coming years, perhaps hundreds of billions. They are going to be designed by thousands of vendors. They are going to be cheap and not that well made. They are not going to be secure, and little we can do will change that. Even efforts to make punishments for vendors of insecure devices won’t change that.
So here’s an alternative: a long-term plan for our routers and gateways to take the internet out of IoT.
Our routers should understand that two different classes of devices will connect to them. Regular devices, like phones and laptops, should connect to the internet as we expect today. There should also be a way to know that a connecting device does not want regular internet access, and not to give it. One way to do that is for the devices themselves to know about this, and to convey how much access they need when they first connect. One proposal for this is my friend Eliot Lear’s MUD proposal. Unfortunately, we can’t count on devices to do this. We must limit stupid devices and old devices too.
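To make the two-class idea concrete, here is a minimal sketch of how a gateway might hand out network policy per device: full access for known general-purpose devices, and local-only access plus any vendor endpoints the device declares (in the spirit of MUD) for everything else. The device classes, function names, and rule format are all hypothetical illustrations, not any real router’s API.

```python
# Hypothetical sketch: a home gateway that grants full internet access only to
# known general-purpose devices, and restricts everything else to the local
# subnet plus any vendor cloud hosts the device declares (MUD-style).
# Device classes, rule tuples, and endpoint names are invented for illustration.

GENERAL_PURPOSE = {"phone", "laptop", "tablet"}

def firewall_rules(device_class, declared_endpoints=None):
    """Return an ordered list of (action, destination) rules for a new device."""
    if device_class in GENERAL_PURPOSE:
        return [("allow", "any")]           # normal internet access
    rules = [("allow", "local-subnet")]     # IoT: talk to local devices only
    # A MUD-style declaration lets the device name its vendor cloud hosts.
    for host in (declared_endpoints or []):
        rules.append(("allow", host))
    rules.append(("deny", "any"))           # default-deny everything else
    return rules

# A camera that declares one vendor endpoint gets exactly that, nothing more.
print(firewall_rules("camera", ["cloud.example-vendor.com"]))
```

The key design point is the final default-deny rule: a stupid or old device that declares nothing still gets local connectivity, but can never be recruited into an internet-facing botnet.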
Sat, 22 Oct 2016 12:19:13 -0700

California Hearings

On Wednesday, California held hearings on the latest draft of its regulations. The new draft heavily incorporates the NHTSA guidelines released last month, and now includes language on the testing and deployment of unmanned vehicles. The earlier regulations caused consternation because they correctly identified that nobody had sufficient understanding of unmanned vehicle operations to write regulations, but incorrectly proceeded to forbid those vehicles until later. Once you ban something, it’s very hard to un-ban it. The new approach does not ban the vehicles, but instead attempts to write regulations for them that are premature. Comment from developers of the vehicles reflected the sentiment that all the regulations are premature.

California worked together with NHTSA on its regulations, and incorporated them. In particular, while NHTSA’s guidelines lay out a 15-point list of functional areas that creators of vehicles should certify, the federal rules technically declare this certification to be optional. A vendor submitting a report can explicitly state that it declines to certify most of the items. California suggests that this certification might be mandatory here. For all my criticism of NHTSA’s plan, the agency does understand that it is still far too early to be writing detailed rules for vehicles that don’t yet exist, and it left these avenues for change and disagreement within its regulations. The avenues are not great — I feel that vendors will be concerned that truly treating the regulations as voluntary will be done at their peril — but at least they exist.

Several vendors also pointed out the serious conflict between traditional regulatory timelines and the speed of development of computer technologies. The California regulations may require that a car be tested for a year before it is deployed.
On the surface that sounds normal by old standards, but the reality of development is very different. Pretty much all the vendors I know are producing new builds of their vehicle software and testing them out on the roads the next day — with trained safety drivers behind the wheel. The software goes through extensive “regression testing,” running through every tricky situation the team has encountered anywhere, as well as simulated situations, but the safety driver is there to deal with any problem not found by that testing. Vendors won’t release into production cars with only one night of testing, but neither can they wait a year. This is particularly true because in the early days of this technology, new problems will be found during deployment, and you want to get the fixes out on the road as quickly as is safe to do. An arbitrary timeline makes no sense.

This is just the start of the problems. While one may argue that it was always going to be hard for startups and tinkerers to develop these cars, these regulations (and the federal ones) put more nails in the coffin of the small innovator. The amount of bureaucracy, the size of the insurance bonds and many other factors will make it hard for teams the size of the DARPA challenge teams, who kickstarted this technology and made it real, to actually play in the game. The auto industry has a long history of allowing tinkerers to innovate, even at the cost of relaxing the safety requirements applied to them. We may end up with a world where only the big players can play at all, and we know that this is generally not good for the pace of innovation.

Delivery Robots

The new regulations allowing unmanned vehicles might seem to open doors for delivery robots like the ones we’re working on at Starship. Unfortunately they seem aimed primarily at large vehicles.
Since California rules define the sidewalk as part of the street, these regulations might end up demanding that a small, slow, light delivery robot still comply with the bulky Federal Motor Vehicle Safety Standards (which are meant for passenger cars[...]
Sun, 16 Oct 2016 18:25:41 -0700

When people vote, what do they think it will accomplish? How does this affect how they vote, and how should it? My apologies for more of this in a season when our social media are overwhelmed with politics, but in a lot of the postings I see about voting plans, I see different implicit views on just what the purpose of voting is. The main focus here will be on the vote for US President.

The vast majority of people will vote in non-contested states. The logic is different in the “swing” states where all the campaign attention is. In a non-contested state, there is essentially zero chance your vote will affect the result of the election. If you’re voting thinking you are exerting your small power to have a say in who wins, you are deluding yourself. Your vote does one, and only one, thing — it changes the popular vote totals that are published and looked at by some people. You will change the total for the nation, your state, and some will even look at the totals in your region. For minor party candidates, a higher vote total — in particular reaching 5% — can also make a giant difference by giving access to federal campaign funding, which can seriously change the funding level for those parties.

Voters should ask themselves whose popular vote total they want to increase. Some logic suggests that it makes more sense to vote for a minor party that you support. Not because they will win, but because you will create a larger proportionate increase in their total. One more vote for a Republican or Democrat will be barely noticed. One more vote for a minor party will also on its own make no difference, but proportionately it may be 10 times or more greater.

It’s for the next election, not this one

You don’t increase the popular vote totals to affect this election. You do it to affect the next one. Supporting a party makes other supporters realize they are not alone.
It makes them just a bit more likely to join the cause, if they believe in it. Most voters don’t understand this “next election” principle, and so while a minor party remains too small to win or affect the election, they are less likely to support it. This is how most movements go from being small to being large. When a protest movement is small, people are afraid to show their support. When they see a real crowd march in the square, they become more likely to join the crowd and to let the world see how much support there really is.

As such, the particular platform planks and candidate quirks are almost entirely irrelevant for the non-swing voter. When you’re voting for the next election, you are really supporting only the party and its broad platform, or a basic overall impression of a candidate. I often see voters say, “I could not vote for a candidate who supports X,” but they do not realize that is not what they are doing.

The minor parties are particularly bad at this. Most of them like to pretend they are just like major parties. They nominate candidates based on what they say or stand for. They create detailed party platforms. This is an error. A detailed platform is only a reason for people to vote against you. Detailed platforms are for candidates who might actually have a shot at implementing their platform. Minor party candidates take it as gospel that they should never admit that they can’t win, even though any rational person knows it quite clearly. The reality is that you can know you can’t win the current election, but you can more reasonably hope to step higher and get within range of winning in a future election. Only when this happens should you act like a major party. You almost never see minor candidates tell the truth: “Vote for me, not because you can make me win — you can’t — but to show and build support for the ideas of our party.” I personally would much rather vote for somebody who said th[...]
Thu, 13 Oct 2016 21:31:10 -0700

I had hoped I was done ranting about our obsession with what robocars will do in no-win “who do I hit?” situations, but this week, even Barack Obama in his interview with Wired opined on the issue, prompted by my friend Joi Ito from the MIT Media Lab. (The Media Lab recently ran a misleading exercise asking people to pretend they were a self-driving car deciding who to run over.) I’ve written about the trouble with these problems and even proposed a solution, but it seems there is still lots of need to revisit this. Let’s examine why this problem is definitely not important enough to merit the attention of the President or his regulators, and how it might even make the world more dangerous.

We are completely fascinated by this problem

Almost never do I give a robocar talk without somebody asking about this. Two nights ago, I attended another speaker’s talk and he got the question as his second one. He looked at his watch and declared he had won a bet with himself about how quickly somebody would ask. It has become the #1 question in the mind of the public, and even Presidents. It is not hard to understand why. Life or death issues are morbidly attractive to us, and the issue of machines making life or death decisions is doubly fascinating. It’s been the subject of academic debates and fiction for decades, and now it appears to be a real question. For those who love these sorts of issues, and even those who don’t, the pull is inescapable.

At the same time, even the biggest fan of these questions, stepping back a bit, would agree they are of only modest importance. They might not agree with the very low priority that I assign, but I don’t think anybody feels they are anywhere close to the #1 question out there. As such we must realize we are very poor at judging the importance of these problems. So each person who has not already done so needs to look at how much importance they assign, and put an automatic discount on this.
This is hard to do. We are really terrible at statistics sometimes, and at dealing with probabilities of risk. We worry much more about the risk of a terrorist attack on a plane flight than we do about the drive to the airport, but that’s entirely wrong. This is one of those situations, and while people are free to judge risks incorrectly, academics and regulators must not. Academics call this the law of triviality. A real world example is terrorism. The risk of that is very small, but we make immense efforts to prevent it and far smaller efforts to fight much larger risks.

These situations are quite rare, and we need data about how rare they are

In order to judge the importance of these risks, it would be great if we had real data. All traffic fatalities are documented in fairly good detail, as are many accidents. A worthwhile academic project would be to figure out just how frequent these incidents are. I suspect they are extremely infrequent, especially ones involving a fatality. Right now a fatality happens about every 2 million hours of driving, and the majority of those are single car fatalities (with fatigue and alcohol among the leading causes). I have yet to read a report of a fatality or serious injury in which a driver had no escape, but did have the ability to choose what to hit, with different choices leading to injuries for different people. I am not saying they don’t exist, but first examinations suggest they are quite rare. Probably hundreds of billions of miles, if not more, between them. Those who want to claim they are important have the duty to show that they are more common than these intuitions suggest. Frankly, I think if there were accidents where the driver made a deliberate decision to run down one person to save another, or to hurt themselves to save another, it would be a fairly big human interest news story. Our fascination with this question demands it. Just how many lives would be really saved[...]
Sun, 02 Oct 2016 19:49:03 -0700

The social networks have access (or, more to the point, can give their users access) to an unprecedented trove of information on political views and activities. Could this make a radical difference in affecting who actually shows up to vote, and thus decide the outcome of elections?

I’ve written before about how the biggest factor in US elections is the power of GOTV (Get Out the Vote). US electoral turnout is so low — about 60% in Presidential elections and 40% in off-year elections — that the winner is determined by which side is able to convince more of its weak supporters to actually show up and vote. All those political ads you see are not going to make a Democrat vote Republican or vice versa; they are going to scare a weak supporter into actually showing up. It’s much cheaper, in terms of votes per dollar (or volunteer hour), to bring in these weak supporters than it is to swing a swing voter.

The US voter turnout numbers are among the worst in the wealthy world. Much of this is blamed on the fact that the US, unlike most other countries, has voter registration: effectively two-step voting. Voter registration was originally implemented in the USA as a form of vote suppression, and it has stuck with the country ever since. In almost all other countries, some agency is responsible for preparing a list of citizens and giving it to each polling place. There are people working to change that, but for now it’s the reality. Registration is about 75%, and Presidential voting about 60% (turnout of registered voters is around 80%).

Scary negative ads are one thing, but one of the most powerful GOTV forces is social pressure. Republicans used this well under Karl Rove, working to make social groups like churches create peer pressure to vote.
But let’s look at the sort of data sites like Facebook have, or could have, access to:

- They can calculate a reasonably accurate estimate of your political leaning with modern AI tools and access to your status updates (where people talk politics) and your friend network, along with the usual geographic and demographic data.
- They can measure the strength of your political convictions through your updates.
- They can bring in the voter registration databases (which are public in most states, with political use allowed on the data; commercial use is forbidden in a portion of states, but this would not be commercial). In many cases, the voter registration data also reveals whether you voted in prior elections.
- Your status updates, geographical check-ins and postings will reveal voting activity. Some sites (like Google) that have mobile apps with location sensing can detect visits to polling places.

Of course, for the social site to aggregate and use this data for its own purposes would be a gross violation of many important privacy principles. But social networks don’t actually do (too many) things; instead they provide tools for their users to do things. As such, while Facebook should not attempt to detect and use political data about its users, it could give tools to its users that let them select subsets of their friends, based only on information that those friends overtly shared. On Facebook, you can enter the query “My friends who like Donald Trump” and it will show you that list. They could also let you ask “My friends who match me politically” if they wanted to provide that capability.

Now imagine more complex queries aimed specifically at GOTV, such as: “My friends who match me politically but are not scored as likely to vote” or “My friends who match me politically and are not registered to vote.” Possibly adding “sorted by the closeness of our connection,” which is something they already score. [...]
Fri, 30 Sep 2016 12:27:30 -0700
After my initial reactions and Overall Analysis, here is a point-by-point consideration of the second set of elements from NHTSA’s 15-point certification list for robocars. See my series for other articles, or the first half of the list.
In this section, they remind vendors that they still need to meet the same standards as regular cars do. We are not ready to start removing heavy passive safety systems just because the vehicles get into fewer crashes. In the future we might want to change that, as those systems can be 1/3 of the weight of a vehicle.
They also note that different seating configurations (like rear-facing seats) need to protect passengers just as well. It’s already the case that rear-facing seats will likely be better in forward collisions. Face-to-face seating may present some challenges in this environment, as it is less clear how to deploy the airbags. Taxis in London often feature face-to-face seating, though it is less common in the USA. Will this be possible under these regulations?
The rules also call for unmanned vehicles to absorb collision energy the way existing vehicles do. I don’t know if this is a requirement on unusual vehicle designs for regular cars or not. (If it were, it would have prohibited SUVs, whose high bodies can cause a bad impact with a low-bodied sports car.)
This seems like another mild goal, but we don’t want a world where you can’t ride in a taxi unless you are certified as having taken a training course, especially one for which you have very little to do. These rules are written more for people buying a car (for whom training can make sense) than for those just planning to be a passenger.
This section imagines labels for drivers. It’s pretty silly and not very practical. Is a car going to have a sticker saying “This car can drive itself on Elm St. south of Pine, or on highway 101 except in Gilroy”? There should be another way, not labels, that this is communicated, especially because it will change all the time.
This set is fairly reasonable — it requires a process describing what you do to a vehicle after a crash before it goes back into service.
This section calls for a detailed plan on how to assure compliance with all the laws. Interestingly, it also asks for a plan on how the vehicle will violate laws that human drivers sometimes violate. This is one of the areas where regulatory effort is necessary, because, strictly speaking, cars are not allowed to violate the law — doing things like crossing the double-yellow line to pass a car blocking your path.
Fri, 30 Sep 2016 11:49:55 -0700

After my initial reactions and Overall Analysis, here is a point-by-point consideration of the elements from NHTSA’s 15-point certification list for robocars. See also the second half and the whole series. Let’s dig in:

Data Recording and Sharing

These regulations require a plan for how the vehicle keeps logs around any incident (while following privacy rules). This is something everybody already does — in fact they keep logs of everything for now — since they want to debug any problems they encounter. NHTSA wants the logs to be available to NHTSA for crash investigation. NHTSA also wants recordings of positive events (the system avoided a problem).

Most interesting is a requirement for a data sharing plan. NHTSA wants companies to share their logs with their competitors in the event of incidents and important non-incidents, like near misses or detection of difficult objects. This is perhaps the most interesting element of the plan, but it has seen some resistance from vendors. And it is indeed something that might not happen at scale without regulation. Many teams will consider their set of test data to be part of their crown jewels. Such test data is gathered only by spending many millions of dollars to send drivers out on the roads, or by convincing customers or others to voluntarily supervise while their cars gather test data, as Tesla has done. A large part of the head start that the leaders have in this field is the number of different road situations they have been able to expose their vehicles to. Recordings of mundane driving activity are less exciting and will be easier to gather. Real-world incidents are rare, and gold for testing.

The sharing is not as golden, because each vehicle will have different sensors, located in different places, so it will not be easy to adapt logs from one vehicle directly to another.
While a vehicle system can play its own raw logs back directly to see how it performs in the same situation, other vehicles won’t readily do that. Instead this offers the ability to build something that all vendors want and need, and the world needs, which is a high quality simulator where cars can be tested against real world recordings and entirely synthetic events. The data sharing requirement will allow the input of all these situations into the simulator, so every car can test how it would have performed. This simulation will mostly be at the “post-perception level,” where the car has (roughly) identified all the things on the road and is figuring out what to do with them, but some simulation could be done at lower levels.

These data logs and simulator scenarios will create what is known as a regression test suite. You test your car in all the situations, and every time you modify the software, you test that your modifications didn’t break something that used to work. It’s an essential tool. In the history of software, there have been shared public test suites (often sourced from academia) and private ones that are closely guarded. For some time, I have proposed that it might be very useful if there were a public and open source simulator environment which all teams could contribute scenarios to, but I always expected most contributions would come from academics and the open source community. Without this rule, the teams with the most test miles under their belts might be less willing to contribute.

Such a simulator would help all teams and level the playing field. It would allow small innovators to build and test prototype ideas entirely in simulator, with very low cost and zero risk compared to building them in physical hardware. This is a great example of where NHTSA could use its money rather than its regulatory power to improve safety, by funding the development of such test tools. In fact, if done open source, the agencies and academic instituti[...]
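The regression-suite idea is worth making concrete. Below is a toy sketch: recorded or synthetic scenarios paired with expected behavior, replayed against each new software build. The scenario format and the trivial `decide()` planner are invented for illustration; no vendor’s actual pipeline looks like this.

```python
# Minimal sketch of a scenario-based regression suite for driving software.
# The scenario schema and the decide() planner are hypothetical toys.

def decide(scenario):
    """Toy planner: brake if any obstacle is inside the stopping distance."""
    if any(d < scenario["stop_dist"] for d in scenario["obstacles"]):
        return "brake"
    return "proceed"

# Each scenario pairs an input situation with the behavior that is known good.
REGRESSION_SUITE = [
    {"name": "pedestrian_close", "stop_dist": 30, "obstacles": [12], "expected": "brake"},
    {"name": "clear_road",       "stop_dist": 30, "obstacles": [80], "expected": "proceed"},
]

def run_suite(suite):
    """Replay every scenario; return names of scenarios the build now fails."""
    return [s["name"] for s in suite if decide(s) != s["expected"]]

print(run_suite(REGRESSION_SUITE))
```

An empty failure list is what lets a team ship a build the day after writing it: every tricky situation ever encountered is re-checked automatically, which is exactly why shared scenario data would be so valuable.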
Wed, 28 Sep 2016 23:50:58 -0700

The recent Federal Automated Vehicles Policy is long. (My same-day analysis is here, and the whole series is being released.) At 116 pages (to be fair, less than half is policy declarations; the rest is plans for the future and associated materials) it is much larger than many of us were expecting. The policy was introduced with a letter attributed to President Obama, in which he wrote:

“There are always those who argue that government should stay out of free enterprise entirely, but I think most Americans would agree we still need rules to keep our air and water clean, and our food and medicine safe. That’s the general principle here. What’s more, the quickest way to slam the brakes on innovation is for the public to lose confidence in the safety of new technologies. Both government and industry have a responsibility to make sure that doesn’t happen. And make no mistake: If a self-driving car isn’t safe, we have the authority to pull it off the road. We won’t hesitate to protect the American public’s safety.”

This leads in to an unprecedented effort to write regulations for a technology that barely exists and has not been deployed beyond the testing stage. The history of automotive regulation has been the opposite, so this is a major change. The key question is what justifies such a big change, and the cost that will come with it.

Make no mistake, the cost will be real. The cost of regulations is rarely known in advance, but it is rarely small. Regulations slow all players down and make them more cautious — indeed, it is sometimes their goal to cause that caution. Regulations result in projects needing “compliance departments” and the establishment of procedures and legal teams to assure they are complied with. In almost all cases, regulations punish small companies and startups more than they punish big players.
In some cases, big players even welcome regulation, both because it slows down competitors and innovators, and because they usually also have skilled governmental affairs and lobbying teams which are able to subtly bend the regulations to match their needs. This need not even be nefarious, though it often is. Companies that can devote a large team to dealing with regulations, those who can always send staff to meetings and negotiations and public comment sessions, will naturally do better than those which can’t.

The US has had a history of regulating after the fact. Of being the place where “if it’s not been forbidden, it’s permitted.” This is what has allowed many of the most advanced robocar projects to flourish in the USA. The attitude has been that industry (and startups) should lead and innovate. Only if the companies start doing something wrong or harmful, and market forces won’t stop them from being that way, is it time for the regulators to step in and make the errant companies do better. This approach has worked far better than the idea that regulators would attempt to understand a product or technology before it is deployed, imagine how it might go wrong, and make rules to keep the companies in line before any of them have shown evidence of crossing a line.

In spite of all I have written here, the robocar industry is still young. There are startups yet to be born which will develop new ideas yet to be imagined that change how everybody thinks about robocars and transportation. These innovative teams will develop new concepts of what it means to be safe and how to make things safe. Their ideas will be obvious only well after the fact. Regulations and standards don’t deal well with that. They can only encode conventional wisdom. “Best practices” are really “the best we knew before the innovators came.” Innovators don’t ignore the old wisdom willy-nilly, they often ignor[...]
Mon, 19 Sep 2016 22:58:12 -0700

The long awaited list of recommendations and potential regulations for robocars has just been released by NHTSA, the federal agency that regulates car safety and safety issues in car manufacture. Normally, NHTSA does not regulate car technology before it is released into the market, but the agency, while it says it is wary of slowing down this safety-increasing technology, has decided to do the unprecedented — and at a whopping 115 pages.

Broadly, this is very much the wrong direction. Nobody — not Google, Uber, Ford, GM or certainly NHTSA — knows the precise form these cars will have when deployed. Almost surely something will change from our existing knowledge today. They know this, but still wish to move. Some of the larger players have pushed for regulation. Big companies like certainty. They want to know what the rules will be before they invest. Startups thrive better in the chaos, making up the rules as they go along. NHTSA hopes to define “best practices,” but the best anybody can do in 2016 is lay down existing practices and conventional wisdom. The entirely new methods of providing safety that are yet to be invented won’t be in such a definition.

The document is very detailed, so it will generate several blog posts of analysis. Here I present just initial reactions. Those reactions are broadly negative. This document is too detailed by an order of magnitude. Its regulations begin today, but fortunately the agency is also accepting public comment. The scope of the document is so large, however, that it seems extremely unlikely that it would be scaled back to the level it should be at. As such, the progress of robocar development in the USA may be seriously negatively affected.

Vehicle performance guidelines

The first part of the regulations is a proposed 15-point safety standard. The vendor must certify that the car meets these standards.
NHTSA wants the power, according to an op-ed by no less than President Obama, to pull cars from the road that don’t meet these safety promises. The 15 points are:

1. Data Recording and Sharing
2. Privacy
3. System Safety
4. Vehicle Cybersecurity
5. Human Machine Interface
6. Crashworthiness
7. Consumer Education and Training
8. Registration and Certification
9. Post-Crash Behavior
10. Federal, State and Local Laws
11. Operational Design Domain
12. Object and Event Detection and Response
13. Fall Back (Minimal Risk Condition)
14. Validation Methods
15. Ethical Considerations

As you might guess, the most disturbing is the last one. As I have written many times, the issue of ethical “trolley problems,” where cars must decide between killing one person or another, is a philosophy class tool, not a guide to real world situations. Developers should spend as close to zero effort on these problems as possible, since they are not common enough to warrant special attention, if not for our morbid fascination with machines making life or death decisions in hypothetical situations. Let the policymakers answer these questions if they want to; programmers and vendors don’t. For the past couple of years, this has been a game that’s kept people entertained and ethicists employed. The idea that government regulations might demand solutions to these problems before these cars can go on the road is appalling. If these regulations are written this way, we will delay saving lots of real lives in the interest of debating which highly hypothetical lives will be saved or harmed in ridiculously rare situations.

NHTSA’s rules demand that ethical decisions be “made consciously and intentionally.” Algorithms must be “transparent” and based on input from regulators, drivers, passengers and road users. While the section makes mention of machine learning techniques, it seems in the same breat[...]
Mon, 19 Sep 2016 13:43:51 -0700
Some people have wondered about my forecast in the spreadsheet on Robotaxi economics about the very low parking costs I have predicted. I wrote about most of the reasons for this in my 2007 essay on Robocar Parking but let me expand and add some modern notes here.

The Glut of Parking

Today, researchers estimate there are between 3 and 8 parking spots for every car in the USA. The number 8 includes lots of barely used parking (all the shoulders of all the rural roads, for example) but the value of 3 is not unreasonable. Almost all working cars have a spot at their home base, and a spot at their common destination (the workplace.) There are then lots of other places (streets, retail lots, etc.) to find that 3rd spot. It’s probably an underestimate. We can’t use all of these at once, but we’re going to get a great deal more efficient at it.

Today, people must park within a short walk of their destination. Nobody wants to park a mile away. Parking lots, however, need to be sized for peak demand. Shopping malls are surrounded by parking that is only ever filled during the Christmas shopping season. Robocars will “load balance” so that if one lot is full, a spot in an emptier lot that would normally be too far away is just fine.

Small size and Valet Density

When robocars need to park, they’ll do it like the best parking valets you’ve ever seen. They don’t even need to leave space for the valet to open the door to get out. (The best ones get close by getting out the window!) Because the cars can move in concert, a car at the back can get out almost as quickly as one at the front. No fancy communications network is needed; all you need is a simple rule that if you boxed somebody in, and they turn on their lights and move an inch towards you, you move an inch yourself (and so on with those who boxed you in) to clear a path. Already, you’ve got 1.5x to 2x the density of an ordinary lot. I forecast that many robotaxis will be small, meant for 1-2 people.
A car like that, 4’ by 12’, would occupy under 50 square feet of space. Today’s parking lots tend to allocate about 300 square feet per car. With these small cars you’re talking 4 to 6 times as many cars in the same space. You do need some spare space for moving around, but less than humans need.

When we’re talking about robotaxis, we’re talking about sharing. Much of the time robotaxis won’t park at all; they will be off to pick up their next passenger. A smaller fraction of them will be waiting/parked at any given time. My conservative prediction is that one robotaxi could replace 4 cars (some estimate up to 10, but they’re overdoing it.) So at a rough guess we replace 1,000 cars, 900 of which are parked, with 250 cars, only 150 of which are parked at slow times. (Almost none are parked during the busy times.)

Many more spaces available for use

Robocars don’t park, they “stand.” Which means we can let them wait in all sorts of places we don’t let you park. In front of hydrants. In front of driveways. In driveways. A car in front of a hydrant should be gone at the first notification of a fire or sound of a siren. A car in front of your driveway should be gone the minute your garage opens or, if your phone signals your approach, before you get close to your house. Ideally, you won’t even know it was there. You can also explicitly rent out your driveway space for money if you wish. (You could rent your garage too, but the rate might be so low you will prefer to use it to add a new room to your house unless you still own a car.) In addition, at off-peak times (when less road capacity is needed) robocars can double park or triple park along the sides of roads. (Human cars would need to u[...]
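The back-of-envelope arithmetic above can be checked with a short script. Every input here is one of the article’s illustrative assumptions (stall sizes, fleet ratios, valet-density midpoint), not measured data:

```python
# Back-of-envelope check of the parking arithmetic above. All inputs are
# the article's illustrative assumptions, not measured data.

SQFT_PER_STALL_TODAY = 300      # rough allocation per car in today's lots
SMALL_TAXI_FOOTPRINT = 4 * 12   # a 4' x 12' one/two-seat robotaxi, sq ft

density_gain = SQFT_PER_STALL_TODAY / SMALL_TAXI_FOOTPRINT
print(f"Small cars per conventional stall: {density_gain:.2f}")

# Fleet replacement: 1,000 private cars (900 parked at slow times)
# replaced by 250 robotaxis, of which 150 are parked at slow times.
parked_ratio = 150 / 900
print(f"Parked vehicles, as a share of today's: {parked_ratio:.2f}")

# Combining the smaller fleet with valet-density packing (1.5x-2x above;
# the 1.75x midpoint is assumed here) gives the share of today's
# parking area actually needed:
area_share = parked_ratio / 1.75
print(f"Share of today's parking area needed: {area_share:.2%}")
```

Even before counting the new “standing” spaces, the small-vehicle and fleet-sharing effects alone multiply together to shrink parking demand to a small fraction of today’s.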
Sat, 17 Sep 2016 17:13:51 -0700
Tesla’s spat with MobilEye reached a new pitch this week, and Tesla announced a new release of their autopilot and new plans. As reported here earlier, MobilEye announced during the summer that they would not be supplying the new and better versions of their EyeQ system to Tesla. Since that system was and is central to the operation of the Tesla autopilot, they may have been surprised that MBLY stock took a big hit after that announcement (though it recovered for a while and is now back down) and TSLA did not.

Statements and documents now show a nastier battle, with MobilEye intimating they were worried about Tesla using their tool in an unsafe way, invoking all the debate about the fatality and other crashes allegedly caused by people who were lulled into not bothering to supervise the autopilot. Tesla says that instead they have been developing their own advanced vision tools, and that MobilEye was afraid of that and told Tesla that if they wanted more EyeQ chips, they would need to halt the competing project and commit to ME. That’s a nasty spat.

Tesla’s own efforts represent a threat to MobilEye from the growing revolution in neural network pattern matchers. Computer vision is going through a big revolution. MobilEye is a big player in that revolution, because their ASICs do both standard machine vision functions and neural networks. An ASIC will beat a general purpose processor when it comes to cost, speed and power, but only if the ASIC’s abilities were designed to solve those particular problems. Since it takes years to bring an ASIC to production, you have to aim right. MobilEye aimed pretty well, but at the same time lots of research out there is trying to aim even better, or do things with more general purpose chips like GPUs. Soon we will see ASICs aimed directly at neural network computations.
To solve the problem with neural networks, you need the computing horsepower, you need well designed deep network architectures, and you need the right training data, and lots of it. Tesla and ME are both gaining lots of training data. Many companies, including Nvidia, Intel and others, are working on the hardware for neural networks. Most people would point to Google as the company with the best skills in architecting the networks, though there are many doing interesting work there. (Google’s DeepMind built the tools that beat humans at the seemingly impossible game of Go, for example.) It’s definitely a competitive race.

Using Radar

While Tesla works on their vision systems, they also announced a plan to make much more use of radar. That’s an interesting plan. Radar has been the poor 3rd-class sensor of the robocar, after LIDAR and vision. Everybody uses it — you would be crazy not to unless you need to be very low cost. Radar sees further than the other systems, and it tells you immediately how fast any radar target you see is moving relative to you. It sees through fog and other weather, and it can even see under and around big cars in front of you as it bounces off the road and other objects. It’s really good at licence plates as well.

What radar doesn’t have is high resolution. Today’s automotive radars have gotten good enough to tell you what lane an object like another car is in, but they are not designed to have any vertical resolution — you will get radar returns from a stalled car ahead of you on the road and from a sign above that lane, and not be sure of the difference. You need your car to avoid a stalled car in your lane, but you can’t have a car that hits the brakes every time it sees a road sign or bridge! Real world radar is messy. Your antennas send out and receive from a very broad cone with potential si[...]
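The “tells you immediately how fast a target is moving” property comes from the Doppler effect, and the arithmetic is worth seeing. This is standard physics, with a typical automotive radar carrier frequency assumed for illustration:

```python
# The Doppler arithmetic behind radar's instant relative-speed readout.
# Standard two-way Doppler formula; the 77 GHz carrier is a common
# automotive radar band, assumed here for illustration.

C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # automotive radar carrier frequency, Hz

def doppler_shift_hz(relative_speed_mps: float) -> float:
    """Two-way Doppler shift for a target closing at the given speed."""
    return 2 * relative_speed_mps * F_CARRIER / C

# A target closing at 30 m/s (~108 km/h) shifts the return by ~15 kHz,
# which is trivially measurable, so speed comes "for free" with each echo.
print(f"{doppler_shift_hz(30):.0f} Hz")
```

Note this only gives the radial (closing) speed, which is exactly why a stationary bridge and a stalled car look alike: both close at your own speed, and without vertical resolution the Doppler signature cannot separate them.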
Thu, 08 Sep 2016 13:07:08 -0700
The vision of many of us for robocars is a world of less private car ownership and more use of robotaxis — on-demand ride service in a robocar. That’s what companies like Uber clearly are pushing for, and probably Google, but several of the big car companies including Mercedes, Ford and BMW among others have also said they want to get there — in the case of Ford, without first making private robocars for their traditional customers.

In this world, what does it cost to operate these cars? How much might competitive services charge for rides? How much money will they make? What factors, including price, will they compete on, and how will that alter the landscape?

Here are some basic models of cost. I compare a low-cost 1-2 person robotaxi, a higher-end 1-2 person robotaxi, a 4-person traditional sedan robotaxi and the costs of ownership for a private car, the Toyota Prius 2, as calculated by Edmunds. An important difference is that the taxis are forecast to drive 50,000 miles/year (as taxis do) and wear out fully in 5 years. The private car is forecast to drive 15,000 miles/year (higher than the average for new cars, which is 12,000) and to have many years and miles of life left in it. As such the taxis are fully depreciated in this 5 year timeline, and the private car only partly.

Some numbers are speculative. I am predicting that the robotaxis will have an insurance cost well below today’s cars, which cost about 6 cents/mile for liability insurance. The taxis will actually be self-insured, meaning this is the expected cost of any incidents. In the early days, this will not be true — the taxis will be safer, but the incidents will cost more until things settle down. As such the insurance prices are for the future. This is a model of an early maturing market where the volume of robotaxis is fairly high (they are made in the low millions) and the safety record is well established.
It’s a world where battery prices and reliability have improved. It’s a world where there is still a parking glut, before most surplus parking is converted to other purposes. Fuel is electric for the taxis, gasoline/hybrid for the Prius. The light vehicle is very efficient.

Maintenance is also speculative. Today’s cars spend about 6 cents/mile, including 1 cent/mile for the tires. Electric cars are expected to have lower maintenance costs, but the totals here are higher because the car is going 250,000 miles, not 75,000 miles like the Prius. With this high level of maintenance and such smooth driving, I forecast low repair cost.

Parking is cheaper for the taxis for several reasons. First, they can freely move around looking for the cheapest place to wait, which will often be free city parking, or the cheapest advertised parking on the auction “spot” market. They do not need to park right where the passenger is going, as the private car does. They will park valet style, and so the small cars will use less space and pay less too. Parking may actually be much cheaper than this, even free in many cases. Of course, many private car owners do not pay for parking overtly, so this varies a lot from city to city.

(The spreadsheet of the model is embedded here: https://docs.google.com/spreadsheets/d/1dEstdodejZJbjPd2s4SJVGf8NpxMcgJmd5rIFKyWkVc/pubhtml?gid=0&single=true. You can view the spreadsheet directly on Google docs and download it to your own tool to play around with the model. Adjust my assumptions and report your own price estimates.)

The Prius has one of the lowest costs of ownership of any regular car (take out the parking and it’s only 38 cents/mile) but its price is massively undercut by the electric robotaxi, especiall[...]
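The shape of the per-mile model can be sketched in a few lines of Python. Every number below is a placeholder assumption for illustration; the article’s actual figures live in the linked spreadsheet, and the point of the sketch is only that full depreciation over a taxi’s short life dominates the comparison:

```python
# Minimal sketch of a per-mile cost model in the spirit of the linked
# spreadsheet. All numbers are placeholder assumptions for illustration;
# substitute your own estimates.

def cost_per_mile(vehicle_price, lifetime_miles, insurance, maintenance,
                  fuel, parking):
    """vehicle_price in $, lifetime_miles in miles; the rest in $/mile."""
    depreciation = vehicle_price / lifetime_miles  # vehicle fully worn out
    return depreciation + insurance + maintenance + fuel + parking

# Hypothetical light 1-2 person electric robotaxi: 50,000 miles/year for
# 5 years = 250,000 lifetime miles, cheap electric fuel, valet parking.
taxi = cost_per_mile(vehicle_price=20_000, lifetime_miles=250_000,
                     insurance=0.02, maintenance=0.07, fuel=0.03,
                     parking=0.02)

# Hypothetical privately owned hybrid at today's ~6 cent insurance and
# maintenance rates, with overt parking costs included.
private = cost_per_mile(vehicle_price=25_000, lifetime_miles=180_000,
                        insurance=0.06, maintenance=0.06, fuel=0.07,
                        parking=0.10)

print(f"robotaxi:    ${taxi:.2f}/mile")
print(f"private car: ${private:.2f}/mile")
```

With these made-up inputs the taxi comes out well under half the private car’s cost per mile; the interesting exercise is seeing which assumption the gap is most sensitive to.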
Wed, 07 Sep 2016 14:09:46 -0700
We’re on the cusp of a new wave of virtual reality and augmented reality technology. The most exciting is probably the Magic Leap. I have yet to look through it, but friends who have describe it as hard to tell from actual physical objects in your environment. The Hololens (which I have looked through) is not that good, and has a very limited field of view, but it already shows good potential.

It’s becoming easier and easier to create VR versions of both fictional and real environments. Every historical documentary show seems to include a nice model reconstructing what something used to look like, and this is going to get better and better with time. This will be an interesting solution for many of the world’s museums and historical sites. A few years from now, every visit to a ruin or historical building won’t just include a boring and slow audioguide, but some AR glasses to allow you to see a model of what the building was really like in its glory. Not just a building — it should be possible to walk around ancient Rome or other towns and do this as well. Now with VR you’ll be able to do that in your own home if you like, but you won’t be able to walk very far in that space. (There are tricks that let you fool people into thinking they walked further, but they are just not the same as walking in the real space with the real geometry.) They will also be able to populate the space with recordings or animations of people in period costumes doing period things.

This is good news for historical museums. Many of them have very few actual interesting artifacts to see, so they end up just being placards and photos and videos and other multimedia presentations. These are things I could easily see on the museum web site; their only virtue is that I am reading the text and looking at the picture in the greatly changed remains of where it happened. These days, I tend to skip museums that have become little more than multimedia.
But going to see the virtual recreation will be a different story, I predict. Soon will be the time for museum and tourist organizations to start considering what spaces will be good for this. You don’t need to restore or rebuild that old castle, as long as it’s safe to walk around. You just need to instrument it with tracking sensors for the AR gear and build and refine those models. Over time, the resolution of the AR glasses will approach that of the eyes, and the reality of the models will improve too. In time, many will feel like they got an experience very close to going back in time and seeing it as it was.

(Embedded video: https://www.youtube.com/embed/GmdXJy_IdNw)

Well, not quite as it was. It will be full of tourists from the future, including yourself. AR keeps them present, which is good because you don’t want to bump into them. A more advanced system will cover the tourists in period clothing, or even replace their faces. You would probably light the space somewhat dimly to assure the AR can cover up what it needs to cover up, while still keeping enough good vision of the floor so you don’t trip.

Of course, if you cover everything up with the AR, you could just do this in a warehouse, and that will happen too. You would need to reproduce the staircases of the recreated building but could possibly get away with producing very little else. As long as the other visitors don’t walk through walls, the walls don’t have to be there. This might be popular (since it needs no travel) but many of us still do have an attraction to the idea that we’re standing in the actual old place, not in our hometown. And the museu[...]
Wed, 31 Aug 2016 19:11:57 -0700
At this week’s Singularity U Global Summit, I got a chance to meet with Josh Silver and learn about his organization, represent.us. I have written often in My New Democracy Category on ways to attack the corruption and money in politics. Represent.us is making a push for the use of laws to fix some of these issues, through ballot propositions.

In the past, I have felt this approach to be very difficult, because for every step that could improve democracy, one of the major parties is benefiting from the flaw, and will fight any effort to fix it. Fixes in congress or the statehouses are difficult, and many of the fixes people like (like campaigning restrictions) violate the 1st amendment. This organization is trying for a few specific measures in a bipartisan effort to pass ballot resolutions. To make it bipartisan, they are doing it in pairs of “red” and “blue” states. The core changes they are looking for are:

- Public campaign finance through vouchers. Every voter gets “vouchers” they can hand to the candidates they wish.
- Rules to fix the nightmare of gerrymandering, primarily by having non-partisan committees draw the district boundaries, as has already happened in some states.
- Preferential ballot systems to allow minor parties to participate in elections without risk of “spoiling” the battle between the 2 main parties, as Nader did in Florida 2000 and Perot did in 1992.
- Improved voter participation through improved registration (another common approach in place in some districts.)
- Limitations on revolving door lobbying and favours for donors.

RU’s plan is a surprising one — that all of these together might have a better chance of passing than the individual components do. Polls show that voters often have strong support for this full package, even if they don’t like one of the items. So they have this on the ballot in South Dakota and Washington, though the ballot language in Washington is not superb.
They are looking for money and support in their campaigns, and I have offered to be on their advisory board. They have already passed versions of their anti-corruption bills in several cities. Their strategy might work on me (if I were a voter.) I have my own preferred versions of these approaches, but I would rather see this package pass than fight for the perfect version of any one. Nonetheless, a few things I would tweak:

(Embedded video: https://www.youtube.com/embed/NAtunJv6NtE)

Gerrymandering is one of the great cheats of political systems, and it got a lot worse in 2010 through a deliberate effort of the Republican party to massively overspend national money on key statehouse races, allowing it to control those statehouses and redraw the lines to both assure continued control of the statehouses and control of the House of Representatives in spite of getting a serious minority of the popular vote. Non-partisan redistricting committees are a start, but we need more, and parties that have gained control this way will be unlikely to give it up. I have advocated a rule of convexity to prevent even partisan groups from gerrymandering. But the only hope I have here is finding a constitutional principle — such as the basic right of franchise — that can get this stopped.

Preferential ballots are good, but sadly the “instant runoff” (also known as Hare, Single Transferable Vote and the Australian ballot) is actually the worst of the systems. The problem is not just the chaotic conditions in that simulation article, but that it is one of the harder systems to explain to the voters. If the voters are n[...]
Thu, 18 Aug 2016 15:18:14 -0700
The past period has seen some very big robocar news. Real news, not the constant “X is partnering with Y” press releases that fill the airwaves sometimes.
Uber has made a deal to purchase Otto, a self-driving truck company I wrote about earlier, founded by several friends of mine from Google. The rumoured terms of the deal are astronomical — possibly 1% of Uber’s highly valued stock (which means almost $700M) and other performance rewards. I have no other information yet on the terms, but it’s safe to say Otto was just getting started with ambitious goals and would not have sold for less than an impressive amount. For a company only 6 months old, the rumoured terms surpass even the amazing valuation stories of Cruise and Zoox.
While Otto has been working on self-driving technology for trucks, any such technology can also move into cars. Uber already has an active lab in Pittsburgh, but up to now has not been involved in long haul trucking. (It does do local deliveries in some places.) There are many startups out there calling themselves the “Uber for Trucks” and Otto has revealed it was also working on shipping management platform tools, so this will strike some fear into those startups. Because of my friendship with Otto’s team, I will do more commentary when more details become public.
In other Uber news, Uber has announced it will sell randomly assigned Uber rides in their self-driving vehicles in Pittsburgh. If your ride request is picked at random (and because it’s in the right place), Uber will send one of their own cars to drive you on your ride, and will make the ride free, to boot. Of course, there will be an Uber safety driver in the vehicle monitoring it and ready to take over in any problem or complex situation. So the rides are a gimmick to some extent, but if they were not free, they would point to another way to get customers to pay for the cost of testing and verifying self-driving cars. The free rides, however, will probably actually cause more people to take Uber rides hoping they will win the lottery and get not simply a free ride but a self-driving ride.
GM announced a similar program for Lyft — but not until next year.
Ford has announced it wants to commit to making unmanned-capable taxi vehicles, the same thing Uber, Google, Cruise/GM, Zoox and most non-car companies want to make. For many years I have outlined the difference between the usual car company approaches, which are evolutionary and involve taking cars and improving their computers, and the approaches of the non-car companies, which bypass all legacy thinking (mostly around ADAS) to go directly to the final target. I call that “taking a computer and putting wheels on it.” It’s a big and bold move for Ford to switch to the other camp, and a good sign for them. They have said they will have a fleet of such vehicles as soon as 2021.
Thu, 11 Aug 2016 17:23:53 -0700
At the recent AUVSI/TRB conference in San Francisco, there was much talk of upcoming regulation, particularly from NHTSA. Secretary of Transportation Foxx and his NHTSA staff spoke with just vague hints about what might come in the proposals due this fall. Generally, they said good things, namely that they are wary of slowing down the development of the technology. But they said things that suggest other directions.

Secretary Foxx began by agreeing that the past history of automotive driving systems was quite different. Regulations have typically been written years or decades after technologies have been deployed. And the written regulations have tended to involve standards which the vendors self-certify their compliance with. What this means is that there is not a government test center which confirms a car complies with the rules in the safety standards. Instead, the vendor certifies they are following the rules. If they certify falsely, that can get them in trouble later with regulators and more importantly in lawsuits. It’s by far the best approach unless the vendors have shown that they can’t be trusted in spite of the fear of these actions.

But Foxx said that they were going to go against that history and consider “pre-market regulation.” Regular readers will know I think that’s an unwise idea, and so do many regulators, who admit that we don’t know enough about the final form of the technology to regulate yet. Fortunately it was also suggested that NHTSA’s new documents would be more in the form of “guidance” for states. Many states ask NHTSA to help them write self-driving car regulations.

Which gets us to a statement that was echoed by several speakers to justify federal regulation: “Nobody wants 50 different regulations” on these cars. At first, that seems obvious. I mean, who would want it to be that complex? Clearly it’s simpler to have to deal with only one set of regulations.
But while that’s true, it doesn’t mean it’s the best idea. They are overestimating the work involved in dealing with different regulations, and underestimating the value of letting states experiment with new ideas in regulation, and the value of having states compete on who can write the best regulations. If regulations differed so much between states as to require different hardware, that would make a stronger case. But most probably we are talking about rules that affect the software. That can be annoying, but it’s just annoying. A car can switch what rules it follows in software when it crosses a border with no trouble. It already has to, just because of the different rules of the road found in every state, and indeed every city and even every street! Having a few different policies state by state is no big addition.

Jurisdictional competition is a good thing, though, particularly with emerging technologies. Let some states do it wrong, and others do it better, at least at the start. Let them compete to bring the technology first to their region, and invent new ideas on how to regulate something the world has never seen. Over time these regulations can be normalized. By the time people are making 10s of millions of robocars, that normalization will make more sense. But most vendors only plan to deploy in just a few states to begin with, anyway. If a state feels its regulations are making it harder for the cars to spread to its cities, it can copy the rules of the other state it likes best. The competition assures any mistake is localized — and probably eventually fixed. If Califor[...]
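The “switch rules in software at the border” point can be made concrete with a tiny sketch. Everything here is hypothetical (the jurisdiction codes, the rule fields, and their values are made up for illustration); the idea is only that per-state rules reduce to a data lookup rather than a hardware change:

```python
# Illustrative sketch of swapping rule sets at a jurisdiction boundary.
# All jurisdiction codes, rule fields, and values are hypothetical
# examples; real systems would load much richer, signed rule packages.

from dataclasses import dataclass

@dataclass(frozen=True)
class RoadRules:
    right_turn_on_red: bool       # allowed by default in this jurisdiction?
    min_following_gap_m: float    # smallest legal following gap, meters
    school_zone_limit_kph: int    # speed limit in active school zones

RULES_BY_JURISDICTION = {
    "US-CA": RoadRules(right_turn_on_red=True,
                       min_following_gap_m=2.0, school_zone_limit_kph=40),
    "US-NY": RoadRules(right_turn_on_red=False,
                       min_following_gap_m=2.5, school_zone_limit_kph=25),
}

def rules_for(jurisdiction: str) -> RoadRules:
    """Crossing a border is just a re-lookup of the active rule set."""
    return RULES_BY_JURISDICTION[jurisdiction]

active = rules_for("US-CA")
active = rules_for("US-NY")   # border crossing: one lookup, no new hardware
print(active.right_turn_on_red)
```

The same mechanism already handles per-city and per-street differences in the rules of the road, which is why a 50-state patchwork is an annoyance rather than a blocker.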
Wed, 10 Aug 2016 17:49:18 -0700
Today, Robin Chase wrote an article wondering if robocars will improve or ruin our cities and asked for my comment on it. It’s a long article, and I have lots of comment, since I have been considering these issues for a while. On this site, I spend most of my time on the potential positive future, though I have written various articles on downsides and there are yet more to write about.

Robin’s question has been a popular one of late, in part a reaction by urban planners who are finally starting to think more deeply on the topic, and reacting to the utopian visions sometimes presented. I am guilty of such visions, though not as guilty as some. We are all seduced in part by excitement of what’s possible in a world where most or all cars are robocars — a world that is not coming for several decades, if in our lifetimes at all. It’s very fair to look at the topic from both sides, and no technology does nothing but good. When I first met Robin, she was, like most people, a robocar skeptic. She’s done pioneering work in new transportation ideas, but the pace of improvement has surprised even the optimists.

I agree with many of the potential negative directions that she and others paint; in fact I’ve said them myself. Nonetheless my core position is that we can and probably will get tremendous good out of this. While I want city planners to understand these trends, I think it’s too early for them to actually attempt to guide them. Even the developers of the technology don’t quite know the final form it will take when it starts taking over the transport world in the 2020s. Long term planning is simply impossible at this stage — it must be done not with the knowledge of 2016 but with the knowledge of 2023. That approach — the norm in the high tech world, where we expect the world to constantly change underneath us — is anathema to governments and planners. When Marc Andreessen
said that software was eating the world, he was telling the world that it will need to start learning the rules of innovation that come from the high tech, internet and computer worlds. Instead, today’s knowledge can at least guide planners in what not to do. Not to put big investments in things likely to become obsolete. Not to be too clever in thinking they understand the “smart city” of 2025. They need to be like the builders of the internet, who made the infrastructure as simple and stupid as they could, moving innovation away from the infrastructure and into the edges where it could flourish in a way that astounded humanity.

Congestion

We will get more congestion at the start. Not because of empty vehicles cruising around — most research suggests that will be around 15% of miles, and then only after everybody switches. We’ll get more congestion from two factors:

- The early cars, especially the big car company offerings, will make traffic jams more tolerable. As such, people will not work as hard to avoid them.
- Car travel will become much better and much cheaper; far more people will be able to use it, so they’ll travel more miles in cars than they do today.

For some, longer commutes will be more tolerable so they will live further from work. That won’t increase congestion in the central areas (they would still have driven those roads if they lived closer to work) but will increase it in the more remote places. The tolerance for longer commutes may increase “sprawl.” The good news is that the era of the ubiquitous smartphone brings us the potential for a traffic “mi[...]
Wed, 10 Aug 2016 15:14:29 -0700
At the recent AUVSI/TRB symposium, a popular research topic was platooning for robocars and trucks. Platooning is perhaps the oldest practical proposal when it comes to car automation, because you can have the lead vehicle driven by a human, even a specially trained one, and thus resolve all the problems that come from road situations too complex for software to easily handle.

Early experiments indicated fuel savings, though relatively modest ones. At practical distances, you can see about 10% saving for following vehicles and 5% for the lead vehicle. Unfortunately, a few big negatives showed up. It’s hard to arrange platoons, errors can become catastrophic multi-car pile-ups, other drivers keep inserting themselves into the gap unless it’s dangerously small, and there is the surprising deal-breaker of the stone chips thrown up by lead vehicles, which destroy the finish — and in some cases the radiator or windshield — of following cars. Platoons can also create a congestion problem and a highway exit problem, the way existing convoys of trucks sometimes do.

One local company named Peloton is making progress with a very simple platooning problem. They platoon two (and only two) trucks on rural highways. The trucks find one another over the regular data networks, and when they get close they establish a local radio connection (using the DSRC protocol that many mistakenly hope will be the standard for vehicle to vehicle communications.) Both drivers keep driving, but the rear driver goes feet-off-the-pedals like a cruise control. The system keeps the vehicles a fixed distance apart to save fuel. The trucks don’t mind the stone chips too much. Some day, the rear driver might be allowed to go in the back and sleep, which would allow cargo to move 22 hours/day at a lower cost, probably similar to the cost of today’s team driving (about 50 cents/mile) but with two loads instead of one.
Trucks are an easy win, but I also saw a lot of proposals for car platoons. Car platoons are meant to save fuel, but also to increase road capacity. But after looking at all the research, a stronger realization came to me. If you have robocars, why would you platoon when you can carpool?

To carpool, you need to find two cars which are going to share a long segment of trip together. Once you have found that, however, you get far more savings in fuel and road usage if the cars can quickly pause together and the passengers from one transfer into the other. Then the empty car can go and move other commuters. This presumes, of course, that the cars are like almost all cars out there today, with many empty seats. When the groups of passengers come to where their paths diverge, the vehicle would need to stop at a transfer point and some passengers would move into waiting robotaxis to take them the rest of the way. All of this is not as convenient as platooning, which in theory can happen without slowing down and finding a transfer point. This is one reason that the carpool transfer stations I wrote about last month could be a very useful thing. Such stations would add only 1-2 minutes of delay, and that’s well worth it if you consider that compared to platooning, this carpooling means a vastly greater fuel saving (almost 50%) and a much greater increase in road capacity, with none of the other downsides of platooning.

If you’re thinking ahead, however, you will connect this idea to my proposed plan for the future of group transit. The real win is to have the computers of the transport service providers noti[...]
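The fuel comparison above is simple enough to verify directly. The percentages are the article’s rough figures (10% follower / 5% leader savings for platooning; one shared trip instead of two for carpooling), and the sketch ignores transfer overhead and extra passenger weight:

```python
# Comparing the fuel arithmetic above: two vehicles platooning vs. the
# same two parties sharing one car. Savings percentages are the article's
# rough figures; transfer overhead and added weight are ignored.

def platoon_fuel(n_vehicles, lead_saving=0.05, follow_saving=0.10):
    """Total fuel for a platoon, in units of one independent trip."""
    return (1 - lead_saving) + (n_vehicles - 1) * (1 - follow_saving)

def carpool_fuel():
    """One shared vehicle makes the trip instead of n separate ones."""
    return 1.0

n = 2
baseline = float(n)  # n independent trips
print(f"platooning: {platoon_fuel(n) / baseline:.1%} of independent fuel use")
print(f"carpooling: {carpool_fuel() / baseline:.1%} of independent fuel use")
```

For a pair of vehicles, platooning trims fuel use by under a tenth, while carpooling cuts it roughly in half, which is the gap that makes the 1-2 minute transfer penalty worth paying.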
Sun, 07 Aug 2016 12:11:26 -0700
It’s common for people to write that those who vote for a minor party in an election are “throwing away” their vote. Here’s a recent article by my friend Clay Shirky declaring there’s no such thing as a protest vote. Many of his cases are correct, but the core thesis is wrong. Instead, I will argue that outside the swing states, you are throwing away your vote if you vote for a major party candidate.
To be clear, if you are in one of the crucial swing states where the race is close — and trust me, you know that from the billions of dollars of ad spend in your state, as well as from reading polls — then you should vote for the least evil of the two party candidates as you judge it. And even in the rest of the (non-swing) country, you should continue to vote for those candidates if you truly support them. But in a non-swing state, in this election in particular, you have an additional option and an additional power.
Consider here in California, which is very solidly for Clinton. Nate Silver rates it as 99.9% (or higher) to go for Clinton. A vote for Clinton or Trump here is wasted. It adds a minuscule proportion to their totals. Clinton will fetch around 8 million votes. You can do the un-noticed thing of making it 8 million and 1, and you’ll bump her national total by an even tinier fraction. Your vote can make no difference to the result (you already know that), nor will it be noticed in the totals. You’re throwing it away, getting an insignificant benefit for its use.
Of course, the 3rd party candidates have no chance of winning California, or the USA. And while they like to bluster and pretend otherwise, they know that. You know that. Their voters know that. 3rd party voters aren’t voting to help their candidate win, any more than Trump voters imagine their vote could help him win California, or Clinton voters imagine they could affect her assured victory.
Third party voters, however, will express their support for other ideas in the final vote totals. If Jill Stein gets 50,000 votes in California, making it 50,001 doesn’t make a huge difference, but it makes 160 times as much difference to her total as a Clinton vote does to hers, or 100x what a Trump vote does. Gary Johnson is doing so well this year (polling about 8% of the national popular vote) that his voters won’t move his total quite as much, but still many times more than a major party vote would.
Clay argues that “nobody is receiving” the message of your vote for a third party, but the truth is, your vote for Clinton in California or Trump in Texas is a message that has even less chance of being received. A big difference this year is that the press are paying attention to the minor parties. This year, you will see much more press on Johnson’s and Stein’s totals. It is true that in other years, the TV networks would often ignore those parties. In some cases, TV network software is programmed to report only the top two results, and to make the percentages displayed add up to 100%. This is wrong of the networks, but I suspect there is less chance of it happening this year. Johnson will probably appear in those totals. Web sites and newspapers have generally reported the proper totals.
Does anybody look at these totals for minor candidates? Many don’t, but the big constituency for them is others interested in minor parties. People want a tribe. Many people don’t want to support something unless they see they are not alone, that others are supporting [...]
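The “160 times” figure is just a ratio of totals. A quick sketch using the post’s rough estimates (8 million Clinton votes, 50,000 Stein votes in California; these are the article’s illustrative numbers, not official results):

```python
# One extra vote moves a candidate's total by 1/total in proportional
# terms, so the relative impact of a Stein vote vs. a Clinton vote is
# simply the ratio of their totals.
clinton_ca = 8_000_000   # rough Clinton votes in California (article's estimate)
stein_ca   = 50_000      # rough Stein votes in California (article's estimate)

def proportional_boost(total: int) -> float:
    """Fractional increase in a candidate's total from one more vote."""
    return 1 / total

ratio = clinton_ca / stein_ca   # = (1/stein_ca) / (1/clinton_ca)
print(ratio)   # 160.0 -- a Stein vote moves her total 160x as much
```

The same arithmetic gives the 100x figure for a Trump vote, assuming Trump draws on the order of 5 million California votes.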
Sun, 31 Jul 2016 13:59:05 -0700
Social media are jam packed with analysis of the rise of Donald Trump these days. Most of us in what we would view as the intellectual and educated community are asking not just why Trump is a success, but, as Trevor Noah asked, “Why is this even a contest?” Clinton may not be, as the Democrats claim, the most qualified person ever to run, but she’s certainly decently qualified, and Trump is almost the only candidate with no public service experience ever to run. Even his supporters readily agree he’s a bit of a buffoon, that he says tons of crazy things, and probably doesn’t believe most of the things he says. (The fact that he doesn’t actually mean many of the crazy things has become the primary justification of those who support him.)
But it is a contest, and while it looks like Clinton will probably win, it is also disturbing to me to note that in polls broken down by race and sex, Trump is actually ahead of Clinton by a decent margin among my two groups — whites and males. (Polls have been varying a lot in the weeks around the conventions.) Whites and males have their biases and privileges, of course, but they are very large and diverse groups, and again, to the coastal intellectual view, this shouldn’t even be a contest. (It’s also my view as a foreigner of libertarian leanings with no association with either party.)
The things stacked in favour of the Republican nominee
There have been lots of essays examining the reasons for Trump’s success. Credible essays have described a swing to nationalism and/or authoritarianism which Trump has exploited. Trump’s skill at marketing and memes is real. His appeal to paternalism and strength works well (Lakoff’s “strong father” narrative). The RNC also identified Hillary Clinton as a likely nominee two decades ago, and since then has put major effort into discrediting her, much more time than it’s ever had to work on other opponents.
And Clinton herself certainly has her flaws and low approval ratings, even within her own party. It is also important to note that the chosen successor of a Democratic incumbent has never in history defeated the Republican. (In 1856 Buchanan defeated the first-ever Republican nominee, Fremont, but Buchanan had been Franklin Pierce’s opponent at the convention, not his chosen successor.) This stacks the deck in favour of this year’s Republican. Of course, Wilson, Cleveland, Roosevelt the 2nd, Carter and Clinton the 1st all defeated incumbent Republicans, so Democrats are far from impotent.
The specific analysis of this election is interesting, but my concern is about the broader trend I see: a much bigger geopolitical trend arising from technology, globalization, income inequality and redistribution among nations, as well as the decline of religion and the classic lifetime middle class career. This big topic will get more analysis in time here. I was particularly interested in a recent article linking globalization to the comparative reduced share for the U.S. middle class. The ascendancy of the secular, western, technological, intellectual capitalist liberal elite is facing an increasing backlash.
Where Trump’s support comes from
Trump of course begins, as Clinton does, with a large “base.” There is an element of the Republican base that will never tolerate voting for Clinton, almost no matter how bad Trump is. There is a similar Democratic contingent. This base has been boosted by that two-decade anti-Clinton campaign. read [...]