2016-09-23T13:30:00-04:00
"With Donald Trump, you're either going to get something very good or very bad," former Democratic presidential hopeful Jim Webb said back in March. "But with Hillary Clinton we're going to get more of the same thing. Do you want the same thing?" Silicon Valley entrepreneur Peter Thiel had a similar thought in a Washington Post op-ed this month: Supporting Trump is a way to say "to the incompetent elites who feel entitled to govern: 'You're fired.'" Rifle through the internet and you'll find the sentiment—even among some frustrated Reason commenters—distilled crudely to "burn it all down." The burn-it-all-down Trump supporter (or potential supporter, in Webb's case) is engaged in what I call political antinomianism. In Christian theology, an antinomian is a person who believes the moral law is of no use or obligation because faith alone is necessary to salvation. In the current electoral context, voters disgusted with how corrupted our political system has become are attracted to the lawlessness at the heart of Trump's personalized theory of governance. "Nobody knows the system better than me, which is why I alone can fix it," declared Trump at the Republican National Convention. Supporters have faith in Trump the Great Man and therefore are political antinomians. Yet the rule of law is the bulwark of liberty, as Friedrich Hayek argued. The rule of law is embodied in the principles of generality, equality, certainty, and justice. That is, laws must apply to all, including government officials; they should be equally applied, so legal privileges are prohibited; they should be clear and consistent and not arbitrarily changed; and they should aim solely to prevent the infringement of individuals' protected domains. Trump embodies the spirit of lawlessness. 
He showed how little respect he has for the First Amendment when he suggested he'd like to "open up" the libel laws to make it easier for aggrieved celebrities and politicians to sue the media. Even more egregiously, Trump threatened to use the IRS to go after Jeff Bezos, the owner of the Washington Post, because he disliked what the paper had reported about him. With regard to Fourth Amendment guarantees of privacy, Trump has said that "security is going to rule" and that therefore "we're going to have to do certain things that were frankly unthinkable a year ago." Trump has also said he'd be "fine" with restoring the NSA's authority to bulk-collect telecommunications data on to whom every American speaks, when, where, and for how long; back in 2013, he called NSA whistleblower Edward Snowden a "traitor" and hinted that he should be executed. The Fifth Amendment provides that private property shall not be taken for public use without just compensation. Yet Trump is a huge fan of using government eminent domain power to take private property and then turn it over to developers like him. He hailed the infamous Kelo decision, in which a Connecticut woman was forced out of her house so the city could turn her property over to Pfizer to build a business campus for the company. "I happen to agree with it 100%," Trump declared. "If you have a person living in an area that's not even necessarily a good area, and...government wants to build a tremendous economic development, where a lot of people are going to be put to work and...create thousands upon thousands of jobs and beautification and lots of other things, I think it happens to be good." Indeed, he tried to do the same thing to a woman in Atlantic City whose property he wanted for building a limousine parking lot. And this week he bemoaned the fact that the accused New York City bomber has, like all U.S. citizens, a right to an attorney. Apparently Trump would junk the Sixth Amendment too. 
Beyond the Bill of Rights, a parsing of Trump's policy proposals finds that, to the degree that anything he says can be believed, he has no intention of minimizing the role of the state in other areas either. He has promised to spend "at least double" what Clinton has proposed on infrastructure—that is, about $500 bill[...]
2016-09-16T13:30:00-04:00
What is the attraction of socialism? The Cato Institute held a policy forum Wednesday to consider that question, featuring talks from the moral psychologist Jonathan Haidt and the evolutionary psychologists Leda Cosmides and John Tooby. One problem they quickly encountered was how to define socialism in the first place. Is it pervasive, state-directed central planning? A Scandinavian-style safety net? Something else? Sen. Bernie Sanders of Vermont, who pursued the Democratic presidential nomination while describing himself as a socialist, attracted a big following among voters under age 30. But most of those voters actually rejected the idea of the government running businesses or owning the means of production; they tended to be safety-net redistributionists who want to tax the rich to pay for health care and college education. And this was, in fact, the platform Sanders was running on. Cosmides suggested the contemporary left/right divide rests on the question of whether people are inherently good or bad. The liberal thinks people are good but are ruined by exploitation; the conservative thinks people are bad and their selfish impulses must be reined in by cultural norms and controls. In fact, she continued, evolutionary psychology shows that human nature is composed of an extensive set of neural programs that are triggered by different experiences. Human beings evolved to handle the social challenges encountered in small bands of 50 to 200 people. Globe-spanning market economies strain our brains. Cosmides then critiqued the Marxist belief that early hunter-gatherers practiced primitive communism—that all labor was collective, and the products of that labor were distributed on the rule of "from each according to his ability, to each according to his need." Cosmides cited a classic study by the University of Utah anthropologists Hillard Kaplan and Kim Hill, who looked at how Ache foragers shared food. 
They reported that rarer, high-yield, hunted foods like game were more extensively shared than more common gathered plant foods. Finding game depends a lot on luck, whereas finding plant foods depends more on effort. Such behavior reemerged in a 2012 experiment conducted by the Nobel-winning economist Vernon Smith, Cosmides noted. The study, which was published in The Proceedings of the Royal Society B, had modern college students hunt and gather in a virtual environment. In one patch, resources were highly valuable but hard to find—in economic lingo, they were high-variance. In another patch, the resources were more common and less valuable: low-variance. Since acquiring high-variance resources depends a lot on luck, sharing emerged quickly among participants who foraged in that patch; they recognized that otherwise they could easily go home with nothing. In low-variance situations, by contrast, how much you earned depended chiefly on how hard you worked. Sharing was almost non-existent among the low-variance foragers. Cosmides then turned to a fascinating 2014 study in The Journal of Politics by the Danish political scientists Lene Aarøe and Michael Bang Petersen. Aarøe and Petersen found that certain cues could turn supposedly individualistic Americans into purportedly welfare-state-loving Danes, and vice versa. In that experiment, researchers asked 2,000 Danes and Americans to react to three cases involving a person on welfare. In the first, they had no background information on the welfare client. In the second, he had lost his job due to an injury and was actively looking for new work. In the third, he had never looked for a job at all. The Danes turned out to be slightly more likely than the Americans to assume that the person they knew nothing about was on welfare because of bad luck. But Americans and Danes alike opposed welfare for the lazy guy and strongly favored it for the unlucky worker. 
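The luck-versus-effort logic driving sharing in the Smith foraging experiment can be sketched in a toy simulation. All the numbers below are illustrative assumptions of mine, not values from the study: hunting is modeled as a rare big payoff, gathering as a small steady one, and "sharing" as a band consuming its daily average.

```python
import random

random.seed(42)  # deterministic for illustration

DAYS, BAND = 1000, 20

def hunt(days):
    # High-variance: a big payoff found rarely, so luck dominates.
    return [10.0 if random.random() < 0.1 else 0.0 for _ in range(days)]

def gather(days):
    # Low-variance: a small, steady payoff, so effort dominates.
    return [random.uniform(0.8, 1.2) for _ in range(days)]

def share(band):
    # Each member consumes the band's daily average.
    return [sum(member[d] for member in band) / len(band) for d in range(DAYS)]

def empty_handed(payoffs):
    # Fraction of days a forager ends up with nothing to eat.
    return sum(1 for p in payoffs if p == 0.0) / len(payoffs)

hunters = [hunt(DAYS) for _ in range(BAND)]
gatherers = [gather(DAYS) for _ in range(BAND)]

solo_zero = empty_handed(hunters[0])        # roughly 0.9: most days, nothing
shared_zero = empty_handed(share(hunters))  # roughly 0.12: sharing insures against bad luck
gather_zero = empty_handed(gatherers[0])    # 0.0: a gatherer never comes home empty

print(solo_zero, shared_zero, gather_zero)
```

Pooling returns is valuable insurance in the hunting patch but buys a gatherer nothing, which is why sharing norms track the variance of the resource rather than some general communist disposition.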
"When we assess people on welfare, we use certain [evolved] psychological mechanisms to spot anyone who might be cheating," Michael Bang Petersen explained in press relea[...]
2016-09-09T13:30:00-04:00
Beyond Human: How Cutting-Edge Science Is Extending Our Lives, by Eve Herold, St. Martin's Press, 291 pages, $26.99 "Transhumanism is becoming more respectable, and transhumanism, with a small t, is rapidly emerging through conventional mainstream avenues," Eve Herold reports in her astute new book, Beyond Human. While big-T Transhumanism is the activist movement that advocates the use of technology to expand human capacities, small-t transhumanism is the belief or theory that the human race will evolve beyond its current physical and mental limitations, especially by means of deliberate technological interventions. As the director of public policy research and education at the Genetics Policy Institute, Herold knows these scientific, medical, and bioethical territories well. Movements attract countermovements, and Herold covers the opponents of transhuman transformation too. These bioconservatives range from moralizing neocons to egalitarian liberals who fear the new technologies somehow threaten human dignity and human equality. "I began this book committed to exploring all the arguments, both for and against human enhancement," she writes. "In the process I have found time and again that the bioconservative arguments are less than persuasive." (Herold cites some of my own critiques of bioconservatism in her book.) Herold opens with a tale of Victor Saurez, a man living a couple of centuries from now who at age 250 looks and feels like a 30-year-old. Back in the dark ages of the 21st century, Victor was ideologically set against any newfangled technologies that would artificially extend his life. But after experiencing early onset heart failure, he agreed to have a permanent artificial heart implanted because he wanted to know his grandchildren. Next, in order not to be a burden to his daughter, he decided to have vision chips installed in his eyes to correct blindness from macular degeneration. 
Eventually he agreed to smart guided nanoparticle treatments that reversed the aging process by correcting the relentlessly accumulating DNA errors that cause most physical and mental deterioration. Science fiction? For now. "Those of us living today stand a good chance of someday being the beneficiaries of such advances," Herold argues. Consider artificial hearts. In 2012 Stacie Sumandig, a 40-year-old mother of four, was told that she would be dead within days due to heart failure caused by a viral infection. Since no donor heart was available, she opted to have the SynCardia Total Artificial Heart (TAH) installed instead. The TAH completely replaces the natural heart and is powered by batteries carried in a backpack. It enabled Sumandig to live, work, and take care of her kids for 196 days before a donor heart became available. As of this month, 1,625 TAHs have been implanted; one person lived with one for four years before receiving a donor heart. In 2015, a clinical trial began in which 19 patients received permanent TAHs. Herold goes on to describe pioneering research on artificial kidneys, livers, lungs, and pancreases. "Artificial organs will soon be designed that are more durable and perhaps more powerful than natural ones, leading them to become not only curative but enhancing," she argues. In the future, people will be loaded up with technologies working to keep them healthy and alive. (One troubling issue this raises: What do we do when someone using such biomedical technologies chooses to die? Who would actually be in charge of deactivating those technologies? Would the law treat deactivation by a third party as tantamount to murder? In such cases, something akin to today's legalized physician-assisted dying may have to be sanctioned.) Artificial organs have considerable competition too. Herold, unfortunately, does not report on the remarkable prospects for growing transplantable human organs inside pigs and sheep. 
Nor does she focus much attention on therapies using stem cells that could replace and repair damaged tissues and organs. But such res[...]
2016-09-02T13:30:00-04:00
Lots of voters, especially Republicans, are worried about voter fraud. GOP presidential candidate Donald Trump stoked those fears when he warned supporters, "I'm afraid the election's going to be rigged. I have to be honest." These fears and not-so-subtle efforts to skew voter registration in partisan directions have prompted strict voter ID requirements in several states with the purported aim of preventing the almost non-existent crime of voter impersonation fraud. But a recent Federal Bureau of Investigation "flash alert" suggests that the real threat of voter fraud might come from abroad. Earlier this week, reports surfaced that the FBI has warned election officials in Illinois and Arizona that their voter databases had been penetrated by intruders linked to IP addresses associated with Russian hackers. The hackers managed to download personal data on 200,000 Illinois voters and posted online the username and password of a user with access to the Arizona voter registration database. This cyber-intrusion followed the now-notorious hacks of the Democratic National Committee's dossier on Trump and, later, its email system. The release of those emails by WikiLeaks showed that DNC officials favored Hillary Clinton over Bernie Sanders and led to Florida Rep. Debbie Wasserman Schultz's resignation as Democratic Party chair. The reports of voter registration database hacking provoked Senate Minority Leader Harry Reid (D-Nev.) to send a letter to FBI Director James Comey that claimed "Russia's intent to influence the outcome of our presidential election has been well-documented by numerous news organizations." Reid also suggested that the Russian government might try to target American voting systems to throw the election to Trump. "The prospect of a hostile government actively seeking to undermine our free and fair elections," he wrote, "represents one of the gravest threats to our democracy since the Cold War." 
There is, of course, more than one way to interfere in an election. It isn't paranoid to worry about a Russian disinformation campaign aimed at confusing Americans. A fascinating and disquieting Rand Corporation review, titled "The Russian 'Firehose of Falsehood' Propaganda Model," finds that "the Russian propaganda model is high-volume and multichannel, and it disseminates messages without regard for the truth. It is also rapid, continuous, and repetitive, and it lacks commitment to consistency." Recent Russian disinformation ranges from hacking an official Ukrainian website to claim a far-right candidate had won that country's 2014 presidential election to a social-media hijack trying to panic Louisiana residents with reports of a chemical plant explosion. But is it actually possible for Russian agents to stuff American ballot boxes? Probably not. America's decentralized electoral system is a significant bulwark against hacking the vote. There are some 8,000 jurisdictions in the U.S., and they use a mix of disparate electronic and paper balloting systems. Hackers trying to influence a national election would have to attack a whole bunch of individual machines, each with different software. On top of that, 75 percent of Americans will vote this year using paper ballots. (Of course, erasing voter registration rolls in key states just before the election would be disruptive, to say the least.) For years, many researchers have been warning that our electronic voting machines are vulnerable. Only last year were the "worst voting machines" in America decertified by the board of elections in my home state of Virginia. (The machines left no paper trail and were so insecure that they could be hacked from the parking lot of the polling place.) Still, there is no evidence that those machines were in fact tampered with during any election. And even if they were, that in itself is unlikely to be enough to swing the results nationally. 
If Russians can't effectively hack our voting machines, they can still try to introduce a little havo[...]
2016-08-26T13:30:00-04:00
"Science, the pride of modernity, our one source of objective knowledge, is in deep trouble." So begins "Saving Science," an incisive and deeply disturbing essay by Daniel Sarewitz at The New Atlantis. As evidence, Sarewitz, a professor at Arizona State University's School for the Future of Innovation in Society, points to reams of mistaken or simply useless research findings that have been generated over the past decades. Sarewitz cites several examples of bad science that I reported in my February article "Broken Science." These include a major biotech company's finding in 2012 that only six out of 53 landmark published preclinical cancer studies could be replicated. Researchers at a leading pharmaceutical company reported that they could not replicate 43 of the 67 published preclinical studies that the company had been relying on to develop cancer and cardiovascular treatments and diagnostics. In 2015, only about a third of 100 psychological studies published in three leading psychology journals could be adequately replicated. A 2015 editorial in The Lancet observed that "much of the scientific literature, perhaps half, may simply be untrue." A 2015 British Academy of Medical Sciences report suggested that the false discovery rate in some areas of biomedicine could be as high as 69 percent. In an email exchange with me, the Stanford biostatistician John Ioannidis estimated that the non-replication rates in biomedical observational and preclinical studies could be as high as 90 percent. Sarewitz also notes that 1,000 peer-reviewed and published breast cancer research studies turned out to be using a skin cancer cell line instead. Furthermore, when amyotrophic lateral sclerosis researchers tested more than 100 potential drugs reported to slow disease progression in mouse models, none were found to be beneficial when tested on the same mouse strains. 
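The arithmetic behind false-discovery estimates like these is worth making explicit. In Ioannidis-style reasoning, the false discovery rate follows from three quantities: the prior probability that a tested hypothesis is true, the statistical power of the study, and the significance threshold. The parameter values below are illustrative assumptions of mine, not figures from the studies cited:

```python
def false_discovery_rate(prior, power, alpha):
    """Fraction of 'significant' findings that are actually false.

    prior: probability that a tested hypothesis is true
    power: probability a true effect yields a significant result
    alpha: probability a null effect yields a significant result
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return false_positives / (true_positives + false_positives)

# A well-powered field testing plausible hypotheses:
print(round(false_discovery_rate(prior=0.5, power=0.8, alpha=0.05), 2))   # → 0.06

# An underpowered field trawling through unlikely hypotheses:
print(round(false_discovery_rate(prior=0.05, power=0.2, alpha=0.05), 2))  # → 0.83
```

On those assumptions, most "discoveries" in the second field are false even though every study clears the conventional 5 percent significance bar, which is the mechanism behind claims that most published findings in some areas are wrong.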
A 2016 article suggested that fMRI brain imaging studies suffered from a 70 percent false positive rate. Sarewitz also notes that decades of nutritional dogma about the alleged health dangers of salt, fats, and red meat appears to be wrong. And then there is the huge problem of epidemiology, which manufactures false positives by the hundreds of thousands. In the last decade of the 20th century, some 80,000 observational studies were published, but the numbers more than tripled to nearly 264,000 between 2001 and 2011. S. Stanley Young of the U.S. National Institute of Statistical Sciences has estimated that only 5 to 10 percent of those observational studies can be replicated. "Within a culture that pressures scientists to produce rather than discover, the outcome is a biased and impoverished science in which most published results are either unconfirmed genuine discoveries or unchallenged fallacies," four British neuroscientists bleakly concluded in a 2014 editorial for the journal AIMS Neuroscience. Some alarmed researchers refer to this situation as the "reproducibility crisis," but Sarewitz convincingly argues that they are not getting to the real source of the rot. The problem starts with the notion, propounded in the MIT technologist Vannevar Bush's famous 1945 report Science: The Endless Frontier, that scientific progress "results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown." Sarewitz calls this a "beautiful lie." Why is it a lie? Because it makes "it easy to believe that scientific imagination gives birth to technological progress, when in reality technology sets the agenda for science, guiding it in its most productive directions and providing continual tests of its validity, progress, and value." He adds, "Technology keeps science honest." 
Basically, research detached from trying to solve well-defined problems spins off self-validating, career-enhancing publications like those breast cancer studies that actually were using skin cancer cells. Yet[...]
2016-08-19T13:30:00-04:00
Democratic presidential candidate Hillary Clinton and Republican presidential candidate Donald Trump agree on at least one thing. Both support the federal Renewable Fuels Standard (RFS), which mandates the production of billions of gallons of biofuels. When asked at the Iowa Faith and Freedom Coalition banquet if he supported it, Trump replied, "Yes, and a very strong yes. There is no reason not to. We need it. We need every form we can get. Ethanol is terrific, especially with the new process. And I am totally in favor of ethanol 100-percent and I will support it." In a 2015 Cedar Rapids Gazette op-ed, Clinton declared, "The United States should also continue supporting—and improving—the Renewable Fuel Standard and other federal incentives that have been a success for Iowa and much of rural America." Trump is more interested in biofuels as replacements for imported oil, but Clinton notes that they "can also play an important role in reducing carbon emissions." (Clinton may now be backtracking on her support of the RFS.) The RFS was passed as part of the Energy Policy Act of 2005, and it mandates the production of 36 billion gallons of biofuels by 2022. The Environmental Protection Agency (EPA) calculates that substituting biofuels for gasoline and diesel will reduce greenhouse gas emissions by 138 million metric tons by that time. Not so fast, a group of University of Minnesota researchers say in a new study for the journal Energy Policy. They argue that the biofuels mandate is more likely to increase than reduce overall greenhouse gas emissions from the U.S. transportation sector. Why? The rebound effect. The rebound effect can be illustrated by a consumer who buys a more fuel-efficient car, sees that the fuel costs of driving have been reduced, and thus drives more, partly negating the energy and greenhouse reductions that are supposed to result from fuel-efficiency mandates. 
In some cases, the rebound is greater than the initial energy savings, so consumers end up using more energy and emitting more greenhouse gases than before. This phenomenon is known as "backfire." The Minnesota researchers' study calculates that that is what will happen with the RFS—that it will backfire and result in more rather than less greenhouse gas emissions. "The amount of fossil fuel displaced by a low-carbon fuel is determined by the economic forces of supply and demand," the authors observe. "In general, an increase in fuel supply causes a decrease in fuel prices, which in turn encourages greater fuel consumption." The authors conservatively estimate, based on an extensive survey of previous research, that about a half-gallon of gasoline is displaced by the energy equivalent of a gallon of biofuel. In other words, they assume a 50 percent gasoline displacement rate. The RFS has different tiers of biofuels. Conventional biofuels, such as corn ethanol, are supposed to emit 20 percent less greenhouse gases than gasoline. The advanced biofuels used to replace diesel are supposed to emit 50 percent less, and cellulosic biofuels 60 percent less, than burning gasoline. The researchers first calculate what the effect on greenhouse gas emissions in 2022 would be if each gallon of biofuel fully replaced a gallon of gasoline or diesel fuel—in other words, no rebound effect. They get a reduction of 110 million metric tons of greenhouse gases, which is pretty close to the EPA's estimate. Crunching the numbers with the 50 percent displacement rate, the biodiesel mandate—whose fuels are supposed to emit 50 percent less greenhouse gas than their fossil fuel equivalents—turns out to be a wash: no increase in emissions, but no reduction either. Focusing on the biofuels that aim to substitute for gasoline by 2022, the researchers find that the cellulosic biofuels should cut emissions by 12 million metric tons annually. 
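The displacement logic can be made concrete with a back-of-the-envelope calculation. This is a sketch using the percentages stated above, with emissions normalized so one gallon of gasoline equals 1.0; the function and its values are my illustration, not the study's model:

```python
def net_emissions_change(biofuel_emission_fraction, displacement_rate=0.5):
    """Net change in emissions per energy-equivalent gallon of biofuel,
    in units of one gallon of gasoline's emissions.

    biofuel_emission_fraction: biofuel emissions relative to gasoline
    displacement_rate: gallons of gasoline actually displaced per
        energy-equivalent gallon of biofuel (the study's estimate: 0.5)
    """
    added = biofuel_emission_fraction  # emissions from burning the biofuel
    avoided = displacement_rate * 1.0  # gasoline emissions avoided
    return added - avoided

print(round(net_emissions_change(0.80), 2))  # conventional ethanol (20% cleaner): backfire
print(round(net_emissions_change(0.50), 2))  # advanced biofuels (50% cleaner): a wash
print(round(net_emissions_change(0.40), 2))  # cellulosic (60% cleaner): a genuine cut
```

At a 50 percent displacement rate, only fuels that are more than 50 percent cleaner than gasoline deliver net reductions; the cellulosic tier still comes out ahead, while conventional ethanol backfires.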
This seems a bit optimistic, since the Energy Policy Act mandated the production of 4.25 billion gallons of [...]
2016-08-12T13:30:00-04:00
"If the election is rigged, I would not be surprised," Donald Trump told The Washington Post on August 2. "The voter ID situation has turned out to be a very unfair development. We may have people vote 10 times." Trump was reacting to recent federal court decisions that threw out strict voter identification laws in North Carolina, North Dakota, Texas, and Wisconsin. The main source of contention in the cases had been the requirement that voters show a government-issued photo ID before being permitted to cast their ballots. Proponents of strict voter ID laws say that they are necessary to prevent voter impersonation, a form of fraud in which individuals cast ballots in other voters' names, potentially voting more than once. Opponents counter that the demand for photo identification is meant to suppress the turnout of minority and poor voters, who are less likely to have such documents. In other words, they say voter ID laws are an attempt to rig elections against those candidates who are more likely to be supported by minorities and poor people. Voter impersonation fraud appears to be almost non-existent. In the wake of 2000's ballot-counting fiasco, the Help America Vote Act of 2002 created the U.S. Election Assistance Commission to improve voting systems and voter access. In 2007, the commission issued its Election Crimes report, which reviewed what data there was and analyzed numerous anecdotes about voter fraud. The report noted that many experts "asserted that impersonation of voters is probably the least frequent type of fraud because it is the most likely type of fraud to be discovered, there are stiff penalties associated with this type of fraud, and it is an inefficient method of influencing an election." The penalties include $10,000 in fines and up to five years in prison. The New York Times reported in 2007 that a five-year Department of Justice crackdown on voter fraud had yielded just 86 convictions. 
In 2014, Justin Levitt, a professor at Loyola Law School, Los Angeles, reported finding just 31 cases of voter impersonation fraud out of 1 billion ballots cast between 2000 and 2014. Politifact calculated in 2015 that you are 13 times more likely to be struck by lightning than to stumble across an instance of in-person voter fraud in Texas. In other words, Trump's allegation is a hallucination. What happens if you look at the question from the other direction? Do strict voter ID laws significantly skew electoral results? The evidence is mixed. For example, a 2013 study by Indiana University law professor Michael Pitts looked at the total number of ballots cast in the 2008 and 2012 Indiana primaries. He also counted the total number of provisional ballots cast and the total number of provisional ballots counted. Ultimately, Pitts finds that provisional ballots resulting from ID problems amounted to a minuscule 0.026 and 0.012 percent of all ballots cast in the 2012 and 2008 primaries respectively. "With the lack of evidence of actual instances of in-person voter fraud, it's quite possible that even though the actual disfranchisement caused by photo identification on the overall electorate is slight, the actual disfranchisement is vastly higher than the amount of in-person voter fraud that would occur," Pitts argues. "From this perspective, one could easily conclude that a photo identification law does much more harm than good." By triggering grassroots anger, strict voter ID requirements may actually mobilize voters among the ethnic and demographic groups allegedly targeted by the new rules, according to a 2016 study in Political Psychology. This argument is bolstered by a 2007 University of Missouri study that found that Indiana's photo ID laws appear to have actually increased Democratic turnout by 2 percent in the 2006 election. 
Two University of Georgia political scientists estimated in their 2012 analysis that state's new stricter voter ID laws did reduce turnout in the 2008 ele[...]
2016-08-05T13:30:00-04:00
If you're concerned about climate change, it would be perverse to fight a technology that can supply copious quantities of no-carbon energy 24 hours a day—right? Well, when it comes to nuclear power, lots of leading environmental activists are indulging in just such perversion. For orthodox greens, the only untainted electrons are those jiggled free by sunlight or stirred by wind. One battle in this intra-green war just played out in New York State this week. The good news is that the eco-modernist supporters of nuclear power were strong enough to win. The bad news is that the plan they were fighting for will lead to more government meddling in energy markets. What happened? Unable to compete with heavily subsidized wind and solar power or electricity generated using cheap natural gas, the operators of four upstate New York nuclear reactors were planning to shut them down. Closing the plants would be a significant setback for Gov. Andrew Cuomo's ambitious plan to reduce the state's carbon dioxide emissions from the electric power sector. Currently the state gets 32 percent of its electricity from nuclear power, 19 percent from hydropower, 3 percent from wind, and 0.1 percent from solar. Burning natural gas currently generates about 41 percent of the state's electricity with the remainder from coal and oil. In order to forestall these nuclear shut-downs, state regulators decided this week to subsidize nuclear power plants at a rate of $500 million per year. The deal was announced by the state's Public Service Commission when it adopted a plan to mandate that 50 percent of the state's electricity be produced using renewable energy by 2030. Under the new Clean Energy Standards, each nuclear plant will be allocated zero emissions credits, which utilities must purchase when buying power from them. It is estimated that the credits will sell for about $17.48 per megawatt-hour of electricity. 
That money will go to the bottom lines of the plant's owners, Entergy and Exelon. Now everybody's a subsidized rent-seeker. The idea of subsidizing nuclear power plants sparked a furious round of recriminations among various environmental groups. For example, the Sierra Club opposed what it characterized as "massive ratepayer-funded subsidies to the nuclear power industry." The Alliance for a Green Economy organized a coalition of 112 activist groups, including Greenpeace, Food & Water Watch, Frack Action, and Upstate New York for Bernie Sanders, to sign an open letter arguing against the proposed nuclear subsidies. Spearheading the pro-nuclear green campaign was a new group, Environmental Progress. Founded by eco-modernist Michael Shellenberger, Environmental Progress, unlike most dogmatic green groups, fully understands that poverty is the biggest threat to the integrity of the natural world. In its open letter to the Public Service Commission, Environmental Progress argued that the subsidies "embody a fair and equitable standard in treating nuclear power on a similar footing with other low-carbon sources." The letter added that the subsidies were "critical to safeguarding New York's low-carbon nuclear power, ensuring the security of the electricity supply, and meeting the state's decarbonization goals." New York State's electric power sector currently emits 30 million tons of carbon dioxide annually. If the four upstate nuclear power plants were to be replaced by natural gas plants, the state's annual carbon dioxide emissions would jump by 15.5 million tons, a 50 percent increase. The Environmental Progress letter was signed by several environmental heavy-hitters, including Whole Earth Catalog creator Stewart Brand, climate change crusader James Hansen, and a former president of the Missouri Botanical Garden, Peter Raven. So why can't these plants compete without subsidies? Existing nuclear power plants are extraordinarily efficient, pr[...]
Republican presidential hopeful Donald Trump promises that he would deport all 11 million undocumented migrants living in the United States. But why did so many come and then choose to stay?
A March study in the American Journal of Sociology, "Why Border Enforcement Backfired," finds that ever-greater efforts to close down the border led to the decision by many of those who made it here to remain in America rather than risk returning to their home countries. From 1986 to 2010, the U.S. government spent $35 billion on border enforcement. The result was to essentially militarize the boundary line with Mexico, making it harder for migrants to travel safely back and forth. Crossers were more frequently caught and forced to choose more dangerous routes to slip back into the U.S.
As one of the researchers, the Princeton sociologist Douglas Massey, explained at Phys.org, "As the costs and risks rose, migrants naturally minimized border crossing—not by remaining in Mexico but by staying in the United States." And under those circumstances, migrants were more likely to bring their families with them. The researchers conclude that greater border enforcement unintentionally transformed "undocumented Mexican migration from a circular flow of male workers going to three states into an 11 million person population of settled families living in 50 states."
2016-08-01T01:00:00-04:00Is America's accumulating pile of regulations slowing down economic growth? According to a new study from the Mercatus Center at George Mason University, the answer is yes: Thanks to regulatory drag, the U.S. economy is $4 trillion smaller than it otherwise would have been. How do regulations harm economic growth? "For each new regulation added to the existing pile, there is a greater possibility for…inefficient company resource allocation, and for reduced ability to invest in innovation," explain the Progressive Policy Institute economists Michael Mandel and Diana Carew in a 2013 policy memo. "The negative effect on U.S. industry of regulatory accumulation actually compounds on itself for every additional regulation added to the pile." Mandel and Carew offer three explanations for how that pile slows growth. In the first, regulations act as "pebbles in the stream." Tossing a few small rocks into a stream will have no discernible effect on its flow, but the accumulation of regulatory pebbles eventually dams the river of innovation. (The development of mobile health apps, for example, has arguably been blocked by the accretion of medical privacy rules, Food and Drug Administration approvals, and insurance regulations.) The second explanation rests on how regulations can interact in counterproductive ways—think of how fuel economy standards push automakers toward lighter vehicles even as safety standards favor heavier cars. The third focuses on "behavioral overload." As the web of regulations becomes more complex, confused managers and workers must direct more resources to compliance and away from innovation and company growth. The proliferation of federal regulations ultimately affects the rate of improvement in total factor productivity, a measure of technological dynamism and efficiency. Regulations also affect the allocation of labor and capital—by, say, raising the costs of new hires or encouraging investment in favored technologies. 
The Mercatus Center's new study refines the earlier work of two economists, John Dawson of Appalachian State University and John Seater of North Carolina State. In a 2013 Journal of Economic Growth article, Dawson and Seater constructed a regulatory burden index by tracking the growth in the number of pages in the Code of Federal Regulations since 1949. That number, they note, increased sixfold from 19,335 to 134,261 in 2005. (As of 2014, it had risen to 175,268.) The authors devised a pretty standard endogenous growth theory model and then inserted their regulatory burden index to calculate how federal regulations have affected economic growth. Their astonishing conclusion: Annual output in 2005 was "28 percent of what it would have been had regulation remained at its 1949 level." If not for the growth in the regulatory burden, gross domestic product would have been $53.9 trillion in 2011 instead of $15.1 trillion—a 2 percent annual reduction in economic growth cumulated over 56 years. Americans are significantly poorer due to federal regulations, without which 2011 U.S. per capita income would have been almost four times higher, at $168,000 instead of $48,000. The compliance costs alone are enormous. The Competitive Enterprise Institute's report Ten Thousand Commandments 2015 estimated that it costs consumers and businesses almost $1.9 trillion—more than 11 percent of current GDP—to comply with current federal regulations. That exceeds the $1.82 trillion that the IRS is expected to collect in both individual and corporate income taxes for 2015. The report notes, "Federal regulation is a hidden tax that amounts to nearly $15,000 per U.S. household each year." But as bad as that is, regulatory compliance costs pale in comparison to the loss of tens of trillions in overall wealth, as calculated by Dawson and Seater. To updat[...]
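The compounding arithmetic behind Dawson and Seater's headline numbers can be checked directly. A minimal sketch, assuming we back out the implied annual drag from the 2011 dollar figures quoted above (the study itself states the cumulated loss somewhat differently):

```python
# Dawson and Seater's figures as quoted: actual 2011 GDP of $15.1 trillion
# vs. a counterfactual $53.9 trillion had regulation stayed at 1949 levels.
actual_gdp = 15.1e12
counterfactual_gdp = 53.9e12
years = 2011 - 1949  # 62 years of regulatory accumulation

ratio = counterfactual_gdp / actual_gdp    # ~3.57x gap in output
annual_drag = ratio ** (1 / years) - 1     # implied yearly growth reduction

print(f"GDP ratio: {ratio:.2f}x")
print(f"Implied annual growth reduction: {annual_drag:.2%}")
```

Compounded over six decades, a roughly 2 percent annual drag is enough to shrink output to well under a third of its counterfactual level, which is how a seemingly modest yearly figure produces a gap of nearly $39 trillion.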
2016-07-29T13:30:00-04:00Hillbilly Elegy: A Memoir of a Family and Culture in Crisis, J.D. Vance, HarperCollins, $27.99, 264 pp. Read this remarkable book: It is by turns tender and funny, bleak and depressing, and thanks to Mamaw, always wildly, wildly profane. An elegy is a lament for the dead, and with Hillbilly Elegy Vance mourns the demise of the mostly Scots-Irish working class from which he springs. I teared up more than once as I read this beautiful and painful memoir of his hillbilly family and their struggles to cope with the modern world. Vance grew up poor with a semi-employed, drug-addicted mother who lived with a string of five or six husbands/boyfriends in the fading Rust Belt city of Middletown, Ohio. The only constants in his chaotic life were his grandparents, Mamaw and Papaw. Vance nearly failed out of high school but eventually graduated from Yale Law School. That personal journey is in the book, but Vance's main story is about the ongoing collapse of hillbilly culture as seen through the lens of his own family's disordered experiences. Before going on, I should make a disclosure: Like Vance, I grew up as a hillbilly. Neither of my grandfathers could read or write. My paternal grandparents, Mom and Daddy Bailey, left the Appalachian coal country of McDowell County, West Virginia, around 1950 and bought a dairy farm 80 miles away in Washington County, Virginia. I grew up on that farm. For most of my childhood, all six of my grandparents' adult children lived within 10 miles of the home place, as did my dozens of cousins. Every Sunday, a massive family midday "dinner"—somewhere around 40 to 50 people—convened at my grandparents' house. I left the farm at age 16, when my parents got divorced. I will spare you further details, but let's just say that the Baileys did not model their family life on the Waltons. Before I made my escape to the University of Virginia, I lived for a while with my mother and one of my sisters in a rented trailer. 
Though he mostly grew up in the Rust Belt, Vance identifies as a hillbilly—his family's roots are in the hollers of Breathitt County, Kentucky. Vance's Papaw and Mamaw, like tens of thousands of other mountain folk, left coal country in 1947 to find work and their shot at the American Dream in the booming steelworks 200 miles north. As a kid, Vance would accompany his grandparents as they traveled back nearly every weekend to visit with family in Kentucky. Middletown was Vance's "address," but the town of Jackson in Breathitt County where his great-grandmother Mamaw Blanton lived is his "home." Today hillbilly culture is scarred by spectacular rates of joblessness, single motherhood, drug addiction, crime, and incarceration. Vance places most of the blame for this on the hillbillies' own shoulders. Globalization and automation decimated the manufacturing jobs that many low-skilled workers leveraged into middle-class lives in the mid-20th century, he argues, but that's no excuse for fatalistic victimhood now. Throughout the book, Vance offers stories from family, friends, and neighbors that illustrate the growing cultural dysfunction among poor whites. For example, he takes a job at a floor tile warehouse for $13 an hour where one of his co-workers is a 19-year-old with a pregnant girlfriend. The warehouse owner gives the girlfriend a job as a receptionist. The 19-year-old and his girlfriend are warned about their increasingly frequent absences and tardiness, and eventually both are fired. The 19-year-old lashes out at the manager, saying, "How could you do this to me? Don't you know that I've got a pregnant girlfriend?" At another point, Vance meets an old acquaintance in a Middletown bar who tells him he recently quit his job because he was sick of waking up early. Later, the same guy was complaining on Fa[...]
2016-07-22T13:30:00-04:00St. Petersburg, Russia—Did legislation in the United Kingdom and the United States inspire Russian authorities to adopt their country's new domestic spying laws? Maybe. On July 7, Russian President Vladimir Putin signed the "Yarovaya Law," which came into effect earlier this week. The Yarovaya Law—named after Irina Yarovaya, the ultraconservative legislator who pushed for it—is styled as an "anti-terrorism" measure. Among other things, it mandates that telecommunications and internet service providers store all telephone conversations, text messages, videos, and picture messages for six months. In addition, telecom companies must retain for three years customer metadata—that is, data showing with whom, when, for how long, and from where they communicated. The law requires "the organizers of information distribution on the Internet" to do the same thing, except they need only retain the metadata for one year. Under the Yarovaya Law, providers of telecommunication services, such as a messenger app, a social network, an email client, or a website that encrypts its data, are required to help Russia's Federal Security Service decipher any message sent by its users. In other words, the new law essentially requires internet service providers to install back doors in their services. The fine for refusing to cooperate can be as high as a million rubles (more than $15,000). In order to comply, telecommunications firms operating in Russia claim that they will have to build vast new data storage infrastructure costing many times more than they now make in profits. They also point out that most of the data storage technologies being required are manufactured outside of Russia. And they plausibly argue that the new rules will bring information technology investment and innovation in Russia to a halt. From the authorities' point of view, the fact that most Russian telecoms will not be able to comply with the Yarovaya Law is a feature, not a bug. 
As the U.S.-based Electronic Frontier Foundation notes, those companies are now "de facto criminals," giving the Russian government "the leverage to extract from them any other concession it desires." Russia Direct tellingly observes that "in Russia, the legislation is compared to the USA Patriot Act." But there is one big difference: "While the Patriot Act prescribed covert surveillance of citizens, the new so-called 'Yarovaya Law' mandates open surveillance." Russia is implementing what some lawmakers in the United States and the United Kingdom have long advocated in their own countries. For example, Britain's Investigatory Powers Bill, nicknamed the "Snooper's Charter," was just passed by an overwhelming majority in the House of Commons and is now under consideration by the House of Lords. It sets up a review process that will likely end up authorizing the bulk collection and retention of telecommunications and internet metadata. (Earlier this week, the Court of Justice of the European Union ruled that Britain's data retention mandates violate the right to privacy of its citizens. But Brexit will make such rulings moot.) And like its Russian legal counterpart, the Investigatory Powers Bill gives the British government the authority to ban end-to-end encryption in telecommunications and web services and to force companies to provide "back doors" so that government spies can listen to and read citizens' communications. The House of Commons adopted this legislation in June, before the Russian Duma passed its new domestic spying law. How about the United States? No doubt the extensive capabilities exercised in secret by the National Security Agency disclosed in 2013 must have elicited considerable professional envy among Russian spy agencies. Still, those revelations did provoke ala[...]
2016-07-15T13:30:00-04:00Race and policing preoccupied Americans last week. The two shootings by police of black men in Louisiana and Minnesota rightly sparked outrage and protest across the country. Then came the racially motivated attack that killed five police officers during a Black Lives Matter rally in Dallas. Tensions between police and the African-American community were already high. For example, a June Gallup Poll reported that 67 percent of blacks believe that police treat African Americans less fairly than whites in their community. Is there any evidence of such biased policing? Unfortunately, yes. In a new study, "Targeting young men of color by search and arrest during traffic stops," the University of North Carolina political scientist Frank Baumgartner and his colleagues find clear evidence of extensive police racial profiling. The researchers reached their conclusions by parsing data encompassing more than 18 million traffic stops in North Carolina between 2002 and 2013. North Carolina was the first state in the nation to mandate police-stop data collection, and since 2002 the North Carolina Department of Justice has gathered information on every traffic stop from law-enforcement agencies throughout the state. The police can assign reasons for a traffic stop to 10 different categories—speeding, safe movement, equipment issues, not having a seat belt buckled, expired registration tags, and so on. In general, the researchers report that while blacks constitute about 22 percent of North Carolina's population, 31 percent of stopped motorists are black. Whites and blacks are about equally likely to receive citations (tickets) for traffic infractions. The difference in police treatment becomes apparent when looking at search and arrest statistics. Except in the case of driving while drunk, black motorists are much more likely to be searched or arrested than are white motorists. 
For all nine categories of traffic stops (checkpoint stops are excluded) considered in the study, 2.61 percent of white drivers are searched, whereas 4.57 percent of black drivers are. In other words, black drivers are 75 percent more likely to be searched than white ones. Similarly, 1.9 percent of whites are arrested, compared to 2.71 percent of blacks—so blacks are 43 percent more likely to be arrested than whites. And worse yet, the search disparities for black versus white drivers have been growing over time. In 2002, black men were 70 percent more likely to be searched than white men. By 2007, black men were twice as likely to be searched; by 2013, this difference had grown to over 140 percent. The percent difference in the likelihood of arrest between black and white male drivers remained stable at about 60 percent. Interestingly, black and white women of whatever age were about equally likely to be searched, cited, or arrested during traffic stops. The researchers suggest that there are two possible explanations for the disparities they document: racially differential policing or racially differential possession of contraband. Since searches subject to warrants and incident to arrests are procedurally mandatory, the researchers look chiefly at searches done with consent or based on probable cause. They report that black men are twice as likely as white men to be searched with consent. This suggests that black men either are more willing to give their consent to being searched or are asked more often for such consent. In addition, "Probable cause searches skew strongly toward blacks, indicating that officers are much more likely to be suspicious of criminal wrongdoing when interacting with black motorists." By one definition, probable cause means reasonably reliable information to suspect there is a f[...]
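The relative-disparity figures above follow directly from the raw rates; a quick sketch of the arithmetic, using the search and arrest percentages as quoted from the study:

```python
def pct_more_likely(rate_a, rate_b):
    """How much more likely (in percent) group A's rate is than group B's."""
    return (rate_a / rate_b - 1) * 100

# Rates quoted in the study: percent of stopped drivers searched/arrested.
search_gap = pct_more_likely(4.57, 2.61)  # black vs. white search rates
arrest_gap = pct_more_likely(2.71, 1.90)  # black vs. white arrest rates

print(f"Search disparity: {search_gap:.0f}% more likely")  # ~75%
print(f"Arrest disparity: {arrest_gap:.0f}% more likely")  # ~43%
```

Note that these are relative risks, not absolute ones: a 75 percent disparity here means a gap of about two percentage points in the underlying search rates.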
2016-07-08T13:30:00-04:00Algorithms are everywhere, and in most ways they make our lives better. In the simplest terms, algorithms are procedures or formulas aimed at solving problems. Implemented on computers, they sift through big databases to reveal compatible lovers, products that please, faster commutes, news of interest, stocks to buy, and answers to queries. Dud dates or boring book recommendations are no big deal. But John Danaher, a lecturer in the law school at the National University of Ireland, warns that algorithmic decision-making takes on a very different character when it guides government monitoring and enforcement efforts. Danaher worries that encroaching algorithmic governance, what he calls "algocracy," could "create problems for the moral or political legitimacy of our public decision-making processes." Given algorithms' successes in the private sector, it is not surprising that government agencies are also implementing algorithmic strategies. The Social Security Administration uses algorithms to aid its agents in evaluating benefits claims; the Internal Revenue Service uses them to select taxpayers for audit; the Food and Drug Administration uses them to study patterns of foodborne illness; the Securities and Exchange Commission uses them to detect trading misconduct; and local police departments employ algorithmic insights to predict both the emergence of crime hotspots and which persons are more likely to be involved in criminal activities. Most commonly, algorithms are rule-based systems constructed by programmers to make automated decisions. Because each rule is explicit, it is possible to understand how and why the algorithm produces its outputs, although the continual addition of rules and exceptions over time can make keeping track of what the system is doing ever more difficult. 
Alternatively, various machine-learning algorithms are being deployed as increasingly effective techniques for dealing with the growing flood and complexity of data. Broadly speaking, machine learning is a type of artificial intelligence that gives computers the ability to learn without being explicitly programmed. Such learning algorithms are generally trained to organize and extract information by being exposed to relevant data sets. It is often hard to discern exactly how the algorithm is devising the rules from which it makes predictions. While machine learning offers great efficiencies in digesting data, the answers supplied by learning algorithms can be badly skewed. In a recent New York Times op-ed, titled "Artificial Intelligence's White Guy Problem," Kate Crawford, a researcher at Microsoft who serves as co-chairwoman of the White House Symposium on Society and Artificial Intelligence, cites several such instances. For example, in 2015 Google Photos' facial recognition app tagged snapshots of a couple of black guys as "gorillas." Back in 2010, Nikon's camera software misread images of Asian people as blinking. "This is fundamentally a data problem. Algorithms learn by being fed certain images," notes Crawford. "If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces." As embarrassing as the photo recognition problems were for Google and Nikon, algorithmic misfires can have much direr consequences when used to guide government decision making. It does not take too much imagination to worry about the civil liberties implications of the development of algorithms that purport to identify would-be terrorists before they can act. In her op-ed, Crawford cites the results of a recent investigation by ProPublica into how the COMPAS recidivism risk assessment system evaluates the likelihood that a criminal defe[...]
2016-07-01T13:30:00-04:00The Inevitable: Understanding the 12 Technological Forces that Will Shape Our Future, by Kevin Kelly, Viking, 328 pages, $28. Arguably the internet came into existence thirty years ago, when the National Science Foundation (NSF) in 1986 connected five university-based computer centers. The NSFnet could initially transfer information at the rate of 56 kilobits per second (kbps). My current fiber optic connection at home transfers information at a rate of 1,000 megabits per second (mbps). That is nearly 18,000 times faster than the NSFnet. In 1985, there were 340,000 cell phone subscriptions in the U.S. Last year, cell phone subscriptions exceeded 355 million, a more-than-1,000-fold increase. The smartphone in your pocket has vastly more computing power than the state-of-the-art Cray-2 supercomputer did in 1985. Thirty years ago, the information-soaked world we currently live in and enjoy was for most of us unimaginable. So what technological wonders will the next 30 years bring? Answering that question is the task that Wired senior maverick and co-founding editor Kevin Kelly sets himself in his new book The Inevitable. In his visionary What Technology Wants, Kelly previously argued that technology is becoming in some sense autonomous, and that autonomous technology, or the "technium" in his terminology, "is now as great a force in our world as nature." But you don't have to buy into Kelly's semi-teleological explanations of the trajectory of the modern technological project to recognize that he does a great deal of deep thinking about how technology evolves, and the ideas in his new work about what's to come are also well worth pondering. Kelly's key observation is that "there is a bias in the nature of technology that tilts in certain directions and not others." 
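The scale comparisons in the paragraph above are simple unit-conversion arithmetic; a quick check, using the figures as quoted in the review:

```python
# NSFnet's 1986 backbone vs. a 1,000 mbps home fiber connection today.
nsfnet_kbps = 56
home_kbps = 1_000 * 1_000        # 1,000 mbps, at 1,000 kbps per mbps

speedup = home_kbps / nsfnet_kbps  # ~17,857x: "nearly 18,000 times faster"

# U.S. cell phone subscriptions: 340,000 in 1985 vs. over 355 million in 2015.
subs_growth = 355e6 / 340e3        # ~1,044x: "a more-than-1,000-fold increase"

print(f"Bandwidth speedup: {speedup:,.0f}x")
print(f"Subscription growth: {subs_growth:,.0f}x")
```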
By scrutinizing the nature of technology and how it will evolve, Kelly aims to tell readers something about how the world will "inevitably" look in 2046, a time as distant from us now as 1986 is. His basic answer is that the world in 30 years will be a zillion times smarter, faster, and better. Kelly urges the reader to adopt a stance of "vigilant acceptance" toward inevitable technological progress, which he believes will include the vast decentralization of commerce, creativity, services, and institutions. "Massive copying is here to stay," he argues. "Massive tracking and total surveillance is here to stay. Ownership is shifting away. Virtual reality is becoming real. We can't stop artificial intelligence and robots from improving, creating new businesses, and taking our current jobs." Kelly organizes his book around 12 verbs as present participles: For him, the world is becoming, cognifying, flowing, screening, accessing, sharing, filtering, remixing, interacting, tracking, questioning, and beginning. "There is nothing as consequential as a dumb thing made smarter," declares Kelly in his chapter on cognifying. In 30 years, he thinks even the most prosaic products and services will, like the Scarecrow from The Wizard of Oz, have a brain. He describes how three trends—cheap parallel computation, big data, and better algorithms—are already enabling artificial intelligence to pervade our world. Deep learning, in which neural networks are trained on vast quantities of data, is creating A.I.s that can now recognize faces, categorize photographs, make personalized media recommendations, and simultaneously translate languages. Kelly also grapples with the prospect of artificially intelligent robots stealing human jobs. Intelligent machines will inevitably take over tasks that humans can do but robots can do even better; jobs that humans can't do but robots can; jobs we didn[...]