2017-03-24T13:30:00-04:00

"It's gotten to a point where it's not even being reported. In many cases, the very, very dishonest press doesn't want to report it," asserted President Donald Trump a month ago. He was referring to a purported media reticence to report on terror attacks in Europe. "They have their reasons, and you understand that," he added. The implication, I think, is that the politically correct press is concealing terrorists' backgrounds. To bolster the president's claims, the White House then released a list of 78 terror attacks from around the globe that Trump's minions think were underreported. All of the attackers on the list were Muslim—and all of the attacks had been reported by multiple news outlets. Some researchers at Georgia State University have an alternate idea: Perhaps the media are overreporting some of the attacks. Political scientist Erin Kearns and her colleagues raise that possibility in a preliminary working paper called "Why Do Some Terrorist Attacks Receive More Media Attention Than Others?" First they ask how many terror attacks took place between 2011 and 2015. (The 2016 data will become available later this summer.) The Global Terrorism Database at the University of Maryland, which catalogs information on over 150,000 incidents since 1970, defines terrorism as an "intentional act of violence or threat of violence by a non-state actor" that meets at least two of three criteria. First, that it be "aimed at attaining a political, economic, religious, or social goal." Second, that there is "evidence of an intention to coerce, intimidate, or convey some other message to a larger audience (or audiences) other than the immediate victims." And finally, that it be "outside the precepts of International Humanitarian Law." The Georgia State researchers report that the database catalogs 110 terrorist attacks in the U.S. over that most recent five-year period.
(Globally, there were more than 57,000 terrorist attacks during that period.) In some cases, the media tended to report several attacks perpetrated by the same people as a single combined story; following their lead, the researchers reduce the number to 89 attacks. They then set out to answer four different questions: Would an attack receive more coverage if the perpetrators were Muslim, if they were arrested, if they aimed at government employees or facilities, or if it resulted in a high number of deaths? From a series of searches at LexisNexis and CNN.com, Kearns and her colleagues gathered a dataset of 2,413 relevant news articles. If the attacks had received equal media attention, each would have garnered an average of 27 news articles. Interestingly, 24 of the attacks listed in the GTD did not receive any reports in the news sources they probed. For example, a cursory Nexis search failed to turn up any news stories about a 2011 arson attack on townhouses under construction in Grand Rapids, Michigan. My own internet search, however, did turn up several local news reports that cited a threatening letter warning residents to leave the neighborhood: "This attack was not isolated, nor will it be the last. We are not peaceful. We are not willing to negotiate." The GTD reports that so far no one has been apprehended for the attack. For those five years, the researchers found, Muslims carried out only 11 out of the 89 attacks, yet those attacks received 44 percent of the media coverage. (Meanwhile, 18 attacks actually targeted Muslims in America.) The Boston marathon bombing generated 474 news reports, amounting to 20 percent of the media terrorism coverage during the period analyzed. Overall, the authors report, "The average attack with a Muslim perpetrator is covered in 90.8 articles. Attacks with a Muslim, foreign-born perpetrator are covered in 192.8 articles on average. Compare this with other attacks, which received an average of 18.1 articles."
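The scale of that disproportion can be checked with simple arithmetic. Below is a back-of-the-envelope sketch using the rounded figures quoted above (the 44 percent share is rounded, so the per-attack average comes out a bit higher than the paper's 90.8):

```python
# Back-of-the-envelope check of the Kearns et al. figures quoted above.
# All inputs are the article's rounded numbers, not the paper's dataset.

total_attacks = 89
total_articles = 2413

# Even-coverage baseline: articles per attack if attention were uniform
baseline = total_articles / total_attacks            # about 27 articles

muslim_attacks = 11
muslim_share_of_coverage = 0.44                      # 44 percent, per the study

# Coverage implied for attacks with Muslim perpetrators
muslim_articles = muslim_share_of_coverage * total_articles
per_muslim_attack = muslim_articles / muslim_attacks # about 96.5 articles each

# Attack share vs. coverage share: the over-coverage ratio
attack_share = muslim_attacks / total_attacks        # about 12.4 percent
overcoverage = muslim_share_of_coverage / attack_share  # 3.56x

print(f"baseline articles per attack: {baseline:.1f}")
print(f"articles per Muslim-perpetrated attack: {per_muslim_attack:.1f}")
print(f"over-coverage ratio: {overcoverage:.2f}x")
```

The 3.56x ratio is the share of coverage divided by the share of attacks; the paper's own per-attack averages (90.8 vs. 18.1 articles) imply an even steeper five-to-one gap.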
Some non-Muslims did get intense coverage. Wade Michael Page, who killed six people in an attack on a Sikh temple in Oak Creek, Wisconsin, generated 92 articles, or 3.8 percent of the [...]
2017-03-17T13:30:00-04:00

In his Commentaries on the Laws of England, William Blackstone declared, "It is better that ten guilty persons escape, than that one innocent suffer." In an 1785 letter, Benjamin Franklin was even more exacting: "That it is better 100 guilty Persons should escape, than that one innocent Person should suffer, is a Maxim that has been long and generally approv'd, never that I know of controverted." In 2011, the U.S. Department of Education took a different position. That was the year the department's Office of Civil Rights sent a "dear colleague" letter reinterpreting Title IX of the Education Amendments Act of 1972. That section reads: "No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance." The OCR's letter declared that sexual assault is "a form of sex discrimination prohibited by Title IX." (Sexual violence is a great deal more than discrimination, of course, but set that aside for the moment.) Afraid of losing their federal funding, colleges then set about devising grievance procedures to address complaints of sexual harassment and sexual assault on their campuses. The problem: The OCR decreed that these Title IX tribunals must eschew "the 'clear and convincing' standard"—that is, that they cannot refuse to punish people unless "it is highly probable or reasonably certain that the sexual harassment or violence occurred." Such procedures, the office explained, "are inconsistent with the standard of proof established for violations of the civil rights laws." Instead the tribunals should embrace the weaker "preponderance of the evidence" standard, in which "it is more likely than not that sexual harassment or violence occurred."
Without wading into the weeds of specific cases (I refer readers to the excellent and thorough reporting of my Reason colleague Robby Soave), it is apparent that applying a lower standard of proof makes it easier to punish those guilty of sexual violence. It also means, however, that more innocent people will be falsely found guilty of offenses they did not commit. So how high a risk of false conviction do the innocent face under the OCR's Title IX guidance? John Villasenor, a professor of public policy at the University of California, Los Angeles, set out to answer that in a study that uses probability theory to model false Title IX convictions under the preponderance of the evidence standard. What he found should take all fair-minded Americans aback. Villasenor begins by examining how legal scholars assess the stringency of the burden of proof when it comes to determining the guilt or innocence of defendants. For example, surveys of judges, jurors, and college students find that, for guilt beyond a reasonable doubt, they converge on a 90 percent probability of guilt as a fair threshold for concluding that a defendant committed the infraction. For the preponderance of the evidence standard, the figure is 50 percent. The lower standard of proof doesn't merely make it more likely that someone will be convicted; it provides prosecutors a greater incentive to risk bringing a case. Villasenor outlines an example in which 100 people are accused of wrongdoing. He supposes that 84 are guilty and 16 are innocent. Now suppose that the tribunal convicts 76 of the guilty while letting eight guilty individuals go, and that it acquits 12 of the innocent while convicting four. The overall probability of conviction is 80 percent (76 guilty + 4 innocent), and by construction the probability that an accused person is innocent is 16 percent.
But since four innocent defendants are convicted, there is a 25 percent probability (4 out of 16) that an innocent person will be found guilty. Villasenor aims to be very conservative in his estimations, so he decides to use a 4 percent threshold for the probability that an innocent defendant would be wrongly convicted under the beyond-a-reasonable-dou[...]
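Villasenor's worked example boils down to a few conditional probabilities, which can be reproduced directly from the numbers in the text:

```python
# Villasenor-style worked example from the text: 100 accused,
# 84 guilty and 16 innocent, with the tribunal outcomes as given.
guilty, innocent = 84, 16
guilty_convicted, guilty_acquitted = 76, 8
innocent_convicted, innocent_acquitted = 4, 12

total = guilty + innocent  # 100 accused in all

# Overall conviction rate: (76 + 4) / 100
p_conviction = (guilty_convicted + innocent_convicted) / total  # 0.80

# Share of the accused who are innocent, by construction
p_innocent = innocent / total                                   # 0.16

# The key conditional probability: P(convicted | innocent)
p_false_conviction = innocent_convicted / innocent              # 0.25

print(f"P(conviction) = {p_conviction:.2f}")
print(f"P(innocent) = {p_innocent:.2f}")
print(f"P(convicted | innocent) = {p_false_conviction:.2f}")
```

The point of the example is that the false-conviction risk faced by an innocent defendant is a conditional probability, 4 out of 16, not 4 out of 100, which is why it comes out to a startling 25 percent.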
2017-03-10T13:30:00-05:00

At his first post-election press conference, Donald Trump declared that pharmaceutical companies are "getting away with murder" by pricing their drugs too high. "Pharma has a lot of lobbies, a lot of lobbyists, and a lot of power. And there's very little bidding on drugs," Trump said in January. "We're the largest buyer of drugs in the world, and yet we don't bid properly." At a meeting with pharmaceutical company executives later in January, Trump stated, "The U.S. drug companies have produced extraordinary results for our country, but the pricing has been astronomical for our country." He added, "For Medicare, for Medicaid, we have to get the prices way down." Trump was characteristically vague about just how he would lower pharmaceutical prices, but let's suppose Medicare were legally mandated to negotiate prices with drug companies. In this case, "negotiate" amounts to creating price controls, since pharmaceutical manufacturers would largely have to take whatever price the government wanted to offer, much as already occurs at the Veterans Affairs Department. Most companies would likely agree to the government price controls because they would still make money from their existing drugs, since the marginal cost of each additional pill is so low. What would happen? A new study in Forum for Health Economics & Policy by a team of researchers led by Jeffrey Sullivan at the consultancy Precision Health Economics finds that price controls would indeed reduce the cost of drugs to Medicare Part D participants.* But the unintended consequences for Americans' lives and health in the future would be substantial and bad. The researchers sought to analyze what would happen if Veterans Affairs drug pricing and formulary policies were applied to Medicare Part D—the federal program that subsidizes the costs of prescription drugs for senior citizens.
More than 41 million Americans are enrolled in it, and estimated benefits will total $94 billion, representing 15.6 percent of net Medicare outlays in 2017. Without going into great detail, the Veterans Affairs Department sets the federal ceiling price it will pay for drugs at 24 percent below the average price a pharmaceutical company gets from wholesalers. In addition, the VA keeps overall spending down by restricting the drugs to which veterans have access. For example, the Avalere consultancy reported in 2015 that the VA's formulary does not offer access to nearly a fifth of the top 200 most commonly prescribed Medicare Part D drugs. So what would happen if Medicare adopted VA-style price controls? Cutting prices would save Medicare money. The researchers cite a 2013 study by Dean Baker, co-director of the Center for Economic and Policy Research, that calculated that drug price controls could save Medicare between $24.8 billion and $58.3 billion annually. On the other hand, less revenue to pharmaceutical companies means less money devoted to research and development. A separate study, published in Managerial and Decision Economics in 2007, estimated that cutting prices by 40 to 50 percent in the U.S. would lead to between 30 and 60 percent fewer R&D projects being undertaken. Reduced investment in pharmaceutical R&D consequently means fewer new drugs becoming available to patients. A 2009 study in Health Affairs calculated that as a result of fewer innovative pharmaceuticals being developed, average American life expectancy in 2060 would be around two years lower than it would otherwise have been. Aiming to calculate the effect of drug price controls on future Medicare savings, Sullivan and his colleagues input these data and estimates into an econometric model that tracks cohorts of people over the age of 50 and projects their health and economic outcomes.
They estimate how both VA-style price controls and formulary restrictions would reduce Medicare drug expenditures, along with their effects on future drug development by pharmaceutical companies between 2010 and 2060. Keep in min[...]
2017-03-03T14:15:00-05:00

"The social cost of carbon is the most important number that you've never heard of," according to University of Chicago economist Michael Greenstone. Greenstone led the interagency working group that devised this metric during the Obama administration. Since it was first calculated in 2010, the social cost of carbon has been used to justify 80 different federal regulations that purportedly provide Americans with over a trillion dollars' worth of benefits. "The social cost of carbon is nothing but a political tool lacking scientific integrity and transparency conceived and utilized by an administration pushing a green agenda to the detriment of the American taxpayers," insisted Rep. Darin LaHood (R-Ill.), chair of the Oversight Subcommittee of the House Committee on Science, Space and Technology. LaHood's remarks were made as he opened a hearing called "At What Cost? Examining the Social Cost of Carbon" earlier this week. "This metric did not simply materialize out of thin, and dirty, air," countered Rep. Don Beyer (D-Va.). Beyer argued that the social cost of carbon (SCC) metric was devised by the Obama administration through a process that "was transparent, has been open to public comment, has been validated over the years and, much like our climate, is not static and changes over time in response to updated inputs." So what are these politicians fighting about? The social cost of carbon is a measure, in dollars, of the long-term damage done by a ton of carbon dioxide emissions in a given year. Most of the carbon dioxide that people add to the atmosphere comes from burning coal, oil, and natural gas. The Obama administration's interagency working group calculated that the SCC was about $36 per ton (in 2007 dollars). This figure was determined by cranking damage estimates through three different integrated assessment models that try to link long-term climate change with econometric projections.
Notionally speaking, imposing a tax equal to the SCC would encourage people to reduce their carbon dioxide emissions while yielding revenues that could be used to offset the estimated damage, e.g., by building higher sea walls or developing heat-resistant crops. Can citizens take that $36-a-ton estimate to the bank? Not really. First, consider that integrated assessment models are trying to forecast how extra carbon dioxide will impact climate and economic growth over the course of this century and beyond. One of the critical variables is climate sensitivity, conventionally defined as how much warming can be expected from doubling the amount of carbon dioxide in the atmosphere. The working group calculating the SCC also used various discount rates. (One way to think of discount rates is to consider how much interest you'd require to put off getting a dollar for 10 years.) Finally, instead of focusing on domestic harms, the working group included the global damages in calculating the SCC. Republicans, who convened the subcommittee hearing, argue that the SCC is bogus and therefore many of the regulations aimed at cutting the emissions of carbon dioxide by restricting burning of fossil fuels are too. In his 2013 analysis of the IAMs relied upon by the Obama administration's interagency working group, Massachusetts Institute of Technology economist Robert Pindyck concluded that all three models "have crucial flaws that make them close to useless as tools for policy analysis." He pointedly added, "These models can be used to obtain almost any result one desires." In other words: Garbage In, Garbage Out. Having tossed the models aside, Pindyck earnestly sought another method for establishing a plausible SCC. 
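To see why the discount rate matters so much, consider the present value of a fixed amount of climate damage occurring a century from now. The sketch below uses the working group's three discount rates (2.5, 3, and 5 percent) and a hypothetical $100 of future damage:

```python
# How the discount rate drives the social cost of carbon:
# present value today of $100 of damage occurring 100 years out.
# The $100 figure is hypothetical; the rates are the interagency
# working group's three central discount rates.

def present_value(damage, rate, years):
    """Discount a future dollar amount of damage back to today."""
    return damage / (1 + rate) ** years

for rate in (0.025, 0.03, 0.05):
    pv = present_value(100.0, rate, 100)
    print(f"at {rate:.1%}: ${pv:.2f} today")
# roughly $8.46, $5.20, and $0.76 respectively
```

At a 5 percent rate, a dollar of damage a century out is worth less than a penny today, which is why the choice of discount rate can swing the SCC by an order of magnitude.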
In November, he published a new study in which he asked selected economists and climate scientists to provide their best guess of what the SCC should be, assuming the possibility of a climate-induced reduction in global economic output 50 years from now of 20 percent or more. Pindyck reports that his experts converged on an SCC estimate of a[...]
Craft distilling is burgeoning in America, with around 800 distilleries making small batch rums, vodkas, gins, and whiskeys. The industry took off after 1980, when the government lifted the requirement that a federal agent be on-site daily at such booze-making operations.
Nevertheless, the spirits industry remains strangled in red tape, especially in Alcoholic Beverage Control (ABC) monopoly states such as Virginia. In December I visited three Charlottesville-area craft distilleries—Silverback, Vitae Spirits, and Virginia Distillery Co.—to sample their offerings. At each, existing law held me back to a measly three ounces of imbibing. Still, that is a big improvement over last year, when customers could sample just two ounces, to say nothing of the year before, when in-house sipping was rationed at one and a half ounces.
And six years ago, would-be tipplers could only "nose" but not sip spirits at distilleries in the state. There are no such prescribed limits on quaffing at any of Virginia's 262 wineries and 70 breweries; the hard stuff is held to harder standards. The bureaucratic workaround to the ABC monopoly is to license distillery tasting rooms as ABC stores. Distillers must buy their own product and send all in-store sales receipts to the ABC Board, which then returns the wholesale price to the distilleries.
As of August 1, 2016, Virginia distilleries can sell directly to restaurants. The catch is that they can't deliver their product; restaurateurs must physically go to distilleries to make their purchases.
"Progress in changing liquor regulations is made incrementally," says Virginia Distillers Association President Gareth Moore, who is also CEO of Virginia Distillery Co. He noted that the new three-ounce limit enables customers to try several products in half-ounce pours and enjoy a single full cocktail during their visits. Moore next hopes to change the law to allow distillers to sell bottles at local festivals.
2017-02-24T13:30:00-05:00

A year makes quite a difference. During the run-up to 2016's Conservative Political Action Conference (CPAC), many activists on the right urged the American Conservative Union, which organizes the annual event, to rescind its invitation to Donald Trump. Allowing Trump to speak "will do lasting and huge (yuge!) damage to the reputations of CPAC, ACU, individual ACU board members, the conservative movement, and indeed the GOP and America," warned Republican strategist Liz Mair, who worked with the anti-Trump political action committee Make America Awesome. The candidate ultimately canceled his long-planned speech, pointing to campaign events in Kansas and Florida as an excuse. There's a good chance he also wanted to avoid answering questions after his talk, not to mention the embarrassment of having hundreds of conservative activists stage a walkout. Winning a presidential election certainly changes things. "By tomorrow this will be TPAC," Trump adviser Kellyanne Conway quipped yesterday. This morning Trump received a sustained standing ovation and chants of "USA! USA!" He told the CPAC crowd that "our victory was a win for conservative values." As the rest of his nationalist address made clear, Trump is no more conservative now than he was before the election. Nevertheless, his support among Republican voters stands high, and Republican politicians are falling in line behind him because rank-and-file party members trust him more than they trust GOP congressional leaders. Clearly some citizens support Trump because they believe his "alternative facts" about crime rates and free trade and hope that his hodge-podge of anti-liberty promises will somehow "make America great again." But how to explain the surge of support for Trump among once-skeptical CPAC participants and other conservative voters? Perhaps because lots of conservatives are just acting as though they believe Trump's promises.
That's the explanation suggested by the Cornell political scientist Andrew Little in "Propaganda and Credulity," a paper just published in Games and Economic Behavior. "Politicians lie, and coerce others to lie on their behalf," argued Little. "These lies take many forms, from rewriting the history taught in schools, to preventing the media from reporting on policy failures, to relatively innocuous spinning of the economy's performance in press conferences." Little rather sanguinely observes that most people accept that lying plays a "central role in politics." This poses a game-theory problem: If audiences know that they are being lied to, why do politicians bother doing it? Little's explanation: "Politicians lie because some people believe them." Little cites psychological experiments that show most people tend to believe what they are told even when they know the speaker has incentives to mislead them. In addition, empirical studies show that government propaganda actually works. "You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time," Abraham Lincoln purportedly said. Little has constructed a model that suggests that fooling some of the people can be enough to get most of the people acting like they are fooled. "While those who believe whatever the government tells them are tautologically responsive to propaganda," notes Little, "their presence has powerful effects on the behavior of those who are aware that they are being lied to, as well as those doing the lying." Less credulous folks look around to gauge how their fellow citizens are responding to the politicians' claims, and then must decide how they will act. If those fellow citizens seem to believe the propaganda, then the less credulous might well conclude that it's not worth sticking their necks out to yell that the emperor is naked. 
The upshot, Little says, is that "all can act as if they believe the government's lies even though most do not." One[...]
Google's parent company, Alphabet, revealed in December that an unaccompanied blind man had successfully traveled around Austin, Texas, in one of the company's cars, which had neither a steering wheel nor floor pedals. That same month, Alphabet announced that it is spinning off its self-driving vehicle technology into a new division called Waymo. Also in December, Uber launched an experimental self-driving ride-sharing service in San Francisco.
The future is rushing toward us. Unfortunately, the government wants to help.
In the case of Uber, the California Department of Motor Vehicles (DMV) was so eager to help that it ordered the company to shut down its service, declaring that its regulations "clearly establish that an autonomous vehicle may be tested on public roads only if the vehicle manufacturer, including anyone that installs autonomous technology on a vehicle, has obtained a permit to test such vehicles from the DMV."
Anthony Levandowski, head of Uber's Advanced Technology Group, responded by observing that "most states see the potential benefits" of self-driving technology and "have recognized that complex rules and requirements could have the unintended consequence of slowing innovation." By refraining from excessive regulation, added Levandowski, these jurisdictions "have made clear that they are pro technology. Our hope is that California, our home state and a leader in much of the world's dynamism, will take a similar view." Uber moved its self-driving fleet to Arizona.
The U.S. Department of Transportation (DOT) likewise wants to "accelerate the next revolution in roadway safety"—so in September, naturally, the agency issued a 116-page Federal Automated Vehicles Policy that outlines a 15-point design and development checklist applicable to the makers of automated cars. In case that was not enough help, the agency then issued a 392-page Notice of Proposed Rulemaking to mandate that all new light vehicles talk to each other using a very specific vehicle-to-vehicle (V2V) technology.
Instead, these rules are likely to slow down innovation and development. Compliance with the agency's 15-point safety assessment is supposedly voluntary, but woe betide any company that fails to file the proper paperwork. Even more worrying, the DOT is calling for a shift from the current regime, in which automakers self-certify that their vehicles meet safety standards, to a system where the agency tests and approves the product before it can go to market. This would bring all of the speed and efficiency of the federal government's drug approval process to the auto industry.
Plus, as Competitive Enterprise Institute researcher Marc Scribner points out, the safety benefits of the V2V mandate "will be trivial for the next 15 years, at which point far superior automated vehicle technology may be deployed to consumers." Self-driving cars equipped with autonomous collision avoidance technologies will likely provide all of the supposed benefits of V2V communications—and do it sooner. If the incoming Trump administration really wants to help, it'll get Washington out of the way and withdraw these two proposed rules.
2017-02-10T13:30:00-05:00

High population density might induce better habits, according to some new research at the University of Michigan. If so, that's good news for the residents of an ever more highly populated world—and a big surprise for a generation of social critics. "Popollution" threatens to destroy the planet, Larry Gordon warned in his 1982 presidential address to the American Public Health Association. "When we consider the problems of hunger, poverty, depletion of resources, and overcrowding among the residents of our planet today, the future of human welfare looks grim indeed," he declared. Overcrowding was a big concern for those 20th-century prophets of population doom. In 1962, National Institute of Mental Health researcher John Calhoun published an influential article, "Population Density and Social Pathology," in Scientific American. Calhoun had conducted experiments in which he monitored overcrowded rats. As population density increased, female rats became less able to carry pregnancies to full term—and they so neglected the pups that were born that most died. Calhoun also documented increasing behavioral disturbances among the male rats, ranging "from sexual deviation to cannibalism and from frenetic overactivity to a pathological withdrawal." All of these pathologies amounted to a "behavioral sink" in which infant mortality ran as high as 96 percent. Calhoun's work was cited both by professional researchers and by overpopulation popularizers. Gordon, for example, argued that "too many members of the human species are already being destroyed by violence in overpopulated areas in the same manner as suggested by laboratory research utilizing other animals."
In his 1961 book The City in History, the anti-modernist critic Lewis Mumford cited "scientific experiments in rats—for when they are placed in equally congested quarters, they exhibit the same symptoms of stress, alienation, hostility, sexual perversion, parental incompetence, and rabid violence that we now find in the Megapolis." In The Pump House Gang (1968), the hipster journalist Tom Wolfe referenced Calhoun's behavioral sink: "Overcrowding gets the adrenalin going, and the adrenalin gets them hyped up. And here they are, hyped up, turning bilious, nephritic, queer, autistic, sadistic, barren, batty, sloppy, hot-in-the-pants, chancred-on-the-flankers, leering, puling, numb..." And in his 1968 screed The Population Bomb, the Stanford biologist Paul Ehrlich declared that he had come to emotionally understand the population explosion "one stinking hot night in Delhi" during a taxi ride to his hotel. "The streets seemed alive with people. People eating, people washing, people sleeping. People visiting, arguing, and screaming. People thrusting their hands through the taxi window, begging. People defecating and urinating. People clinging to buses. People herding animals. People, people, people, people....[T]he dust, the noise, heat, and cooking fires gave the scene a hellish prospect." The theme of dystopian overcrowding inspired many popular books in the 1960s and 1970s, note the London School of Economics historians Edmund Ramsden and Jon Adams. Among the texts they cite are Terracide (1970), by Ron M. Linton; My Petition for More Space (1974), by John Hersey; and the novels Make Room! Make Room! (1966), by Harry Harrison; Logan's Run (1967), by William Nolan and George Johnson; and Stand on Zanzibar (1968), by John Brunner. But now, in stark contrast to these visions of chaos and collapse, new research suggests that increased population density isn't a disaster at all. Indeed, it's channeling human efforts and aspirations in productive directions. 
So says a report by a team of researchers led by Oliver Sng, a social psychologist at the University of Michigan. In "The Crowded Life Is a Slow Life," a new paper published in the Journal of Personality and Social Psychol[...]
2017-02-03T13:30:00-05:00

After the atrocities of September 11, 2001, President George W. Bush's approval rating soared from 50 to 90 percent. A month after the attacks, nearly 60 percent of Americans said they trusted the government in Washington to do what is right almost always or most of the time; that was the highest it had been in 40 years. In the weeks after 9/11, more than 50 percent were very or somewhat worried that they or a family member would be a victim of a terrorist attack. Keying off of these fears, various commentators stepped forward to sagely intone that the "Constitution is not a suicide pact." (I prefer "Give me liberty or give me death.") Evidently averse to potentially committing suicide, 74 percent of the country agreed that "Americans will have to give up some of their personal freedoms in order to make the country safe from terrorist attacks." In 2002, an ABC News/Washington Post poll reported that 79 percent of Americans agreed that it was "more important right now for the federal government to investigate terrorist threats even if that intrudes on personal privacy." Support for intrusive investigations purportedly aimed at preventing terrorist attacks fell to only 57 percent in 2013, shortly after Edward Snowden's revelations of extensive domestic spying by the National Security Agency (NSA). In the most recent poll, it has ticked back up to 72 percent. Instead of urging Americans to exercise bravery and defend their liberty, our political leaders fanned fears and argued that we must surrender freedoms. The consequences included the creation of the Department of Homeland Security, the proliferation of metal detectors at the entrances of public buildings, the requirement to show government-issued IDs at more and more public venues, the increased militarization of our police forces, and tightened travel restrictions to neighboring countries where passports were once not required.
In October 2001, the House of Representatives passed the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT) Act just 15 minutes after its 315 pages of text were made available to members. This law eviscerated the Fourth Amendment's privacy protections, and a massive secret domestic spying operation run by the NSA was set up. (Years later, numerous reports by outside and government analysts found that surrendering our civil liberties had been useless, since NSA domestic spying had had "no discernible impact on preventing acts of terrorism.") The Central Intelligence Agency was authorized to torture suspected terrorists; that too proved not just illiberal but ineffective. According to a recent Brown University study, the Global War on Terror has cost $3.2 trillion, in addition to leaving nearly 7,000 American military personnel dead and scores of thousands wounded. How would President Donald Trump react to a significant terrorist attack, especially one motivated by radical jihadist beliefs? In a rally-around-the-flag reaction, his approval rating could surge. It is theoretically possible that such a crisis would reveal Trump as a fierce defender of American liberties, but the signs all point in a more authoritarian direction. In a 2015 speech at the U.S.S. Yorktown, Trump argued for "closing that internet in some way" to prevent ISIS from recruiting people. "Somebody will say, 'Oh freedom of speech, freedom of speech,'" he said. "These are foolish people. We have a lot of foolish people." When Apple refused the FBI's demand that it provide a backdoor to San Bernardino terrorist Syed Farook's iPhone, Trump asked, "Who do they [Apple] think they are? No, we have to open it up." He urged Americans to boycott Apple until it complied with the FBI's demand to decrypt the phone. More generally speaking, Trump has said that he tends "to err on the side[...]
2017-01-27T13:30:00-05:00
When U.S. automakers met with President Donald Trump this week, they asked him to relax the vehicle fuel efficiency standards imposed by his predecessor. Just before Barack Obama left office, the Environmental Protection Agency issued a final determination that its Corporate Average Fuel Economy (CAFE) standard requiring fleet-wide fuel efficiency of 50.8 miles per gallon on new cars by 2025 was achievable. "At every step in the process the analysis has shown that the greenhouse gas emissions standards for cars and light trucks remain affordable and effective through 2025, and will save American drivers billions of dollars at the pump while protecting our health and the environment," said outgoing EPA head Gina McCarthy. Ratcheting up the mandatory energy efficiency standards for vehicles and appliances was a major part of Obama's effort to reduce greenhouse gas emissions. The Department of Energy calculated that the Obama administration's energy efficiency standards would save consumers more than $520 billion on electricity costs by 2030. But not all consumers are alike. In a new study contrasting the effects on consumers of energy efficiency standards versus energy taxes, the Georgetown economist Arik Levinson notes that both energy efficiency standards and energy taxes function as a regressive tax, taking a larger percentage of a lower income and a smaller percentage of a higher income. His analysis aims to find out which is more regressive—in other words, which is worse for poor Americans. Levinson cites earlier research that estimates a gasoline tax would cost 71 percent less than the comparable CAFE policy per gallon of fuel saved. Meanwhile, a 2013 study calculates that CAFE standards cost more than six times as much as a corresponding gas tax for the same reduction in fuel consumption. 
In other words, if policy makers want people to use less fuel and drive more fuel-efficient cars, taxing gasoline is a much cheaper way to achieve that goal than mandating automobile fuel efficiency. Levinson concludes that "efficiency standards are, ironically, inefficient." But would energy taxes be more regressive? Many analysts argue that while both hit low-income Americans, energy efficiency standards whack them less. Levinson disagrees. He argues that energy efficiency standards can be treated analytically as equivalent to a tax on inefficient appliances and vehicles. Using data from the 2009 National Household Travel Survey, he compares the amount of gasoline consumed by Americans at various income levels. The poorest 5 percent (with annual incomes of under $10,000) consume an average of 247 gallons per year; for the richest 20 percent (over $100,000), the average is 991. Assuming a gasoline tax of 29 cents per gallon, the poor pay $71, compared to $286 per year for the wealthy. Families with 10 times the income pay only four times more in fuel taxes. At the outset Levinson cites research that rejects the notion that consumers are shortsighted when it comes to purchasing more expensive vehicles and appliances that will save them money in the long run. Levinson compares the consequences of a 29-cent-per-gallon gas tax with a notional CAFE standard "tax" on inefficient vehicles that would raise the same amount of revenue. Rich folks own more and larger vehicles and drive more miles than do poor Americans, so they would pay more in either gas taxes or CAFE "taxes." Another wrinkle makes CAFE standards even more regressive. In 2012, the Obama administration set CAFE footprint standards based on vehicle size, determined by multiplying the vehicle's wheelbase by its average track width. Basically, a vehicle with a larger footprint has a lower fuel economy requirement than a vehicle with a smaller footprint. 
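The regressivity arithmetic behind Levinson's point can be sketched numerically. The gallons and tax rate below are the article's figures; treating each group's income as its bracket boundary ($10,000 and $100,000) is an illustrative assumption.

```python
# Illustrative sketch of why a flat gas tax is regressive, using the
# article's figures: the same per-gallon tax takes a larger share of a
# poor household's income than of a rich one's.
GAS_TAX = 0.29  # dollars per gallon

# (assumed annual income for the bracket, gallons of gasoline per year)
poor = {"income": 10_000, "gallons": 247}    # poorest 5 percent
rich = {"income": 100_000, "gallons": 991}   # richest 20 percent

def tax_burden(household):
    """Return (dollars paid per year, share of income) under the gas tax."""
    paid = household["gallons"] * GAS_TAX
    return paid, paid / household["income"]

poor_paid, poor_share = tax_burden(poor)
rich_paid, rich_share = tax_burden(rich)

print(f"Poor: ${poor_paid:.0f}/yr, {poor_share:.2%} of income")
print(f"Rich: ${rich_paid:.0f}/yr, {rich_share:.2%} of income")
# The rich pay roughly four times the dollars on ten times the income,
# so the tax claims a larger fraction of the poor family's budget.
```

The same comparison applies to Levinson's notional CAFE "tax": whichever instrument raises the revenue, the burden as a share of income falls as income rises.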
The footprint standard means that gas guzzlers like full-s[...]
2017-01-20T13:30:00-05:00
"Donald Trump is a climate menace, no doubt about it," asserts Greenpeace U.K. spokesperson John Sauven. "President-elect Donald Trump threatens our environment and we vow to fight him every step of the way," declares Kate Colwell from Friends of the Earth. The Union of Concerned Scientists Research Director Gretchen Goldman warns, "It is hard to imagine a Trump administration where science won't be politicized." In a 2014 tweet, Trump notoriously asked, "Is our country still spending money on the GLOBAL WARMING HOAX?" In 2012, Trump tweeted that the concept of global warming had been created by the Chinese to make American manufacturing noncompetitive. During the presidential campaign, he vowed that he would "cancel" the Paris Agreement on climate change. Being his usual consistently inconsistent self, Trump claimed during a Fox News interview last year that the Chinese tweet was a "joke," and he told The New York Times after the election that he would keep an "open mind" about the Paris Agreement. Yet none of Trump's cabinet picks seem to agree that man-made climate change is a hoax. In the hearings for various cabinet nominees, Democrats have sought valiantly to unmask them as "climate change deniers." So far, not one has questioned the scientific reality of man-made global warming. On the other hand, they have tended not to be as alarmed as their interlocutors, and/or have failed to endorse the climate policies that Democrats prefer. Take Scott Pruitt. The Oklahoma attorney general, nominated to run the Environmental Protection Agency, stated flatly: "I do not believe that climate change is a hoax." He added, "Science tells us the climate is changing and human activity in some manner impacts that change. The ability to measure and pursue the degree and the extent of that impact and what to do about it are subject to continuing debate and dialogue." Sen. Bernie Sanders (I-Vt.) 
was particularly annoyed that Pruitt pointed to uncertainties about the future course of warming. But those uncertainties are real. The latest report from the Intergovernmental Panel on Climate Change (IPCC) argues that warming will continue unless GHG emissions are curbed, but it also notes that "the projected size of those changes cannot be precisely predicted." The IPCC further observed that "some underlying physical processes are not yet completely understood, making them difficult to model." Pruitt is one of the 27 state attorneys general who are challenging the legality of President Obama's Clean Power Plan (CPP), which would require electric utilities to cut their emissions of carbon dioxide by 30 percent below their 2005 levels by 2030. The Supreme Court stayed the implementation of the CPP last February, which indicates that Pruitt and his fellow attorneys general have substantial legal grounds to challenge that EPA regulation. In November, the eco-modernist think tank the Breakthrough Institute released a study suggesting that the U.S. could well speed up its GHG reduction trends if the CPP were abandoned. Other nominees asked about their views on climate change include former ExxonMobil CEO Rex Tillerson (nominated to run the State Department), Montana Rep. Ryan Zinke (Interior Department); Alabama Sen. Jeff Sessions (Justice Department); businessman Wilbur Ross (Commerce Department); and former Texas governor Rick Perry (Energy Department). Tillerson testified, "I came to the decision a few years ago that the risk of climate change does exist and the consequences could be serious enough that it warrants action." Zinke similarly declared that he does not believe climate change is a "hoax." Sessions offered, "I don't deny that we have global warming. In fact, the theory of it always struck me as plausible, and it's the question of how much [...]
2017-01-13T13:30:00-05:00
"I believe that each of us who has his place to make should go where men are wanted, and where employment is not bestowed as alms," advised New York Tribune editor Horace Greeley in a famous 1871 letter. "Of course, I say to all who are in want of work, Go West!" Basically, Greeley was telling Americans to pick up and go to where the jobs and opportunities are. Americans were once far more willing to heed Greeley's advice. From the end of World War II through the 1980s, the Census Bureau reports, about 20 percent of Americans changed their residences annually, with more than 3 percent moving to a different state each year. Now more are staying home. In November, the Census Bureau reported that Americans were moving at historically low rates: Only 11.2 percent moved in 2015, and just 1.5 percent moved to a different state. Yet many of the places where people are stuck offer few opportunities. Why have we become homebodies? In a draft article called "Stuck in Place," Yale law professor David Schleicher blames bad public policy. Schleicher argues that more Americans are stuck in places with few good jobs and little opportunity, largely because "governments, mostly at the state and local levels, have created a huge number of legal barriers to inter-state mobility." To get a handle on the mobility slowdown, Schleicher identifies and analyzes the policies that limit people's ability to enter job-rich markets and exit job-poor ones. He also describes how economically declining cities get caught in a policy spiral of fiscal and physical ruin that ultimately discourages labor mobility. The effects of lower labor mobility, he argues, include less effective monetary policy, significantly reduced economic output and growth, and rising inequality. Consider monetary policy. A dollar doesn't buy the same amounts of goods and services across the country. In a sense there are New York dollars, Ohio dollars, Mississippi dollars, California dollars, and so on. 
Think of what a worker earning the average household income of $30,000 in economically depressed Youngstown, Ohio, would need to have the same standard of living in other more prosperous regions of the country. In San Francisco, according to CNN's cost of living calculator, a Youngstown job seeker would need an annual salary of more than $63,000. (San Francisco's housing, groceries, transportation, and health care are 366, 56, 34, and 42 percent higher than Youngstown's, respectively.) In Manhattan, he'd need nearly $82,000. The median household income in San Francisco is around $84,000, up in real dollars from $59,000 in 1995. Economic theory suggests that this income differential should be bid down considerably as folks from declining areas like Youngstown move to economically vibrant centers such as San Francisco, but that is not happening. The per capita GDP among the states was converging before the 1970s, as people moved from poor states for more lucrative opportunities in richer states. That process has stopped. Why? First, lots of job-rich areas have erected barriers that keep job-seekers from other regions out. The two biggest barriers are land use and occupational licensing restrictions. Prior to the 1980s, strict zoning limitations were mostly confined to rich suburbs and did not appreciably check housing construction in most metropolitan areas. But now many prosperous areas in the United States require specific lot sizes, zone out manufactured and rental housing, perversely limit new rental housing construction by establishing rent control, or set up "historic districts" that limit the changes that owners can make to their houses. Land-use restrictions limit construction, boosting housing and rental prices to the benefit of current property owners who vote for local offic[...]
In 1971, Richard Nixon vowed "a national commitment for the conquest of cancer" as he signed the law establishing the National Cancer Institute (NCI). Forty-five years later, Barack Obama declared in his 2016 State of the Union address that our country would embark upon a "new moonshot" with the aim of making "America the country that cures cancer once and for all"; Vice President Joe Biden would be in charge of "mission control." In its October 17, 2016, report, the Cancer Moonshot Task Force declared that its goal is "to make a decade's worth of progress in preventing, diagnosing, and treating cancer in just 5 years."
How? The usual federal bureaucratic efforts of "catalyzing," "leveraging," and "targeting" are promised. But there is some meat to the proposals. For example, the NCI is creating a pre-approved "formulary" of promising therapeutic compounds from 30 pharmaceutical companies that will make them immediately available to researchers. In addition, the task force aims to establish open science computational platforms to provide data to all researchers on successful and failed investigations, and a consortium of 12 leading biotech and pharmaceutical companies is working together to identify and validate biomarkers for response and resistance to cancer therapies.
Prevention is also a focus. The moonshot aims to save lives by boosting the colorectal cancer screening rate among Americans 50 and older and raising HPV vaccination rates for adolescents.
The lifetime risk of cancer for American men is 1 in 2. For women it's 1 in 3. So what would a decade's worth of progress look like? According to the latest American Cancer Society figures, the cancer death rate has dropped by 23 percent since 1991, translating into more than 1.7 million deaths averted through 2012. The five-year survival rate has also increased from 49 to 69 percent. Doubling progress might mean doubling the annual reduction in cancer death rates for men to 3.6 percent and for women to 2.8 percent. That would cut the number of Americans dying of cancer from about 600,000 per year now to just above 500,000 in 2021.
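The projection above can be checked with simple compound-reduction arithmetic. The 3.6 and 2.8 percent figures are the doubled annual rates from the text; splitting the roughly 600,000 annual deaths evenly between men and women is an illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope check: if annual declines in cancer death rates
# doubled to 3.6% (men) and 2.8% (women), where would ~600,000 deaths
# per year land after five years of compounding (2016-2021)?
# Assumption for illustration: deaths split evenly between the sexes.
DEATHS_NOW = 600_000
YEARS = 5
RATE_MEN, RATE_WOMEN = 0.036, 0.028

men = (DEATHS_NOW / 2) * (1 - RATE_MEN) ** YEARS
women = (DEATHS_NOW / 2) * (1 - RATE_WOMEN) ** YEARS
projected = men + women

print(f"Projected annual deaths in 2021: {projected:,.0f}")
# Compounds out to a bit over 500,000, consistent with the estimate above.
```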
But progress may happen even faster than that. The most exciting recent therapeutic breakthrough is immunotherapy—a treatment where cancer patients' immune cells are unleashed as guided missiles to kill their cancer. "It's actually plausible that in 10 years we'll have curative therapies for most if not all human cancers," declared Gary Gilliland, president and director of the Fred Hutchinson Cancer Research Center, at a conference in 2015. The good news is that the cancer moonshot may end up trailing advances that have already taken off.
2017-01-06T13:30:00-05:00
The Case Against Sugar, by Gary Taubes, Knopf, 368 pp., $26.95. Less than 1 percent of Americans—1.6 million people—were diagnosed with Type 2 diabetes in 1958. As of 2014, that figure had risen to 9.3 percent, or 29.1 million. If current trends continue, the figure could rise to more than 33 percent by 2050. Something has clearly gone wrong with American health. The rising rate of diabetes is associated with the rising prevalence of obesity. Since the early 1960s, the percentage of Americans who are obese—that is, whose body mass index is greater than 30—has increased from 13 percent to 35.7 percent today. (Nearly 70 percent of Americans are overweight, meaning their BMIs are over 25.) Roughly put, the prevailing theory is that rising fatness causes rising diabetes. But what if both are caused by something else? That is the intriguing and ultimately persuasive argument that Gary Taubes, author of Why We Get Fat (2011) and cofounder of the Nutrition Science Initiative, makes in his new book, The Case Against Sugar. Sugar—be it sucrose or high-fructose corn syrup—is "the principal cause of the chronic diseases that are most likely to kill us, or at least accelerate our demise," Taubes explains at the outset. "If this were a criminal case, The Case Against Sugar would be the argument for the prosecution." In making his case, Taubes explores the "claim that sugar is uniquely toxic—perhaps having prematurely killed more people than cigarettes or 'all wars combined,' as [diabetes epidemiologist] Kelly West put it." Taubes surveys the admittedly sparse research on sugar's psychoactive effects. For example, researchers have found that eating sugar stimulates the release of dopamine, a neurotransmitter that is also released when consuming nicotine, cocaine, heroin, or alcohol. Researchers are still debating the question of whether or not sugar is, in some sense, addictive. 
In the course of his exploration, Taubes devastatingly shows that most nutrition "science" is bunk. Various nutritionists have sought to blame our chronic ills on such elements of our diets as fats, cholesterol, meat, gluten, and so forth. Few have focused their attention on sugar. His discussion of how nutritionists started and promoted the now-debunked notion that eating fats is a significant cause of heart disease is particularly enlightening and dismaying. Nowadays the debate over the role of fats in cardiovascular disease consists mostly of skirmishes over which fats might marginally increase risk. Interestingly, Taubes finds that a good bit of the research on fats was funded by the sugar industry. It is not just a coincidence that the low-fat food craze took off when the U.S. Department of Agriculture issued its first dietary guidelines in 1980 advising Americans to eat less fat. The added sugar that made the newly low-fat versions of prepared foods more palatable contributed to the rise in sweetener consumption. The USDA guidelines did advise Americans to cut back on eating sugar, but they also stated that, "contrary to widespread opinion, too much sugar in your diet does not seem to cause diabetes." By the way, Taubes agrees that, since both sucrose and high-fructose corn syrup are essentially half glucose and half fructose, there are no important metabolic differences between them. Taubes reviews the global history of sugar consumption. The average American today eats as much sugar in two weeks as our ancestors 200 years ago consumed in a year. The U.S. Department of Agriculture estimates that per-person annual consumption of caloric sweeteners peaked at 153.1 pounds in 1999 and fell to only 131.1 pounds in 2014. A 2014 analysis of data from 165 countries found that "gross per capita [...]
In its 2013 letter shutting down 23andMe's Personal Genome Service, the Food and Drug Administration (FDA) dreamed up some entirely hypothetical problems, but cited not a single example of customer confusion or dissatisfaction with the California-based genotype screening company. At the time, 23andMe had developed an excellent, transparent, and still improving online consumer interface that enabled users to obtain and understand the significance of their genetic test results. Customers were linked to the scientific studies on which 23andMe's interpretations of their data were based. The company was helping its customers learn how to understand and use genetic information.
Before the FDA's ban, the company ranked its results using a star system. Well-established research, in which at least two big studies found an association between a customer's genetic variants and a health risk, got four stars. Very preliminary studies got one star. As more scientific research came in, 23andMe would update each customer's health risks.
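The star system amounted to a simple evidence-grading rule. The sketch below is a hypothetical reconstruction: only the "at least two big studies earns four stars" and "very preliminary research earns one star" tiers come from the description above; the function name and intermediate thresholds are illustrative assumptions, not 23andMe's actual criteria.

```python
# Hypothetical sketch of an evidence-grading rule like 23andMe's pre-ban
# star system: associations backed by more large, replicated studies
# earned more stars. Intermediate tiers are assumed for illustration.
def confidence_stars(num_large_studies: int, num_preliminary_studies: int) -> int:
    if num_large_studies >= 2:
        return 4  # well-established: at least two big studies agree
    if num_large_studies == 1:
        return 3  # assumed intermediate tier
    if num_preliminary_studies >= 2:
        return 2  # assumed intermediate tier
    if num_preliminary_studies >= 1:
        return 1  # very preliminary research
    return 0      # no supporting evidence yet

print(confidence_stars(2, 0))  # well-established association -> 4 stars
print(confidence_stars(0, 1))  # single preliminary study -> 1 star
```

The appeal of such a scheme is that a result's rating can rise automatically as new studies arrive, which is how the company kept customers' health-risk reports current.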
In my case, 23andMe reported that I had genetic variants that increased my risk for atrial fibrillation, venous thromboembolism, and age-related macular degeneration. On the other hand, based on the genes that were tested, it informed me that my risks for gout, Alzheimer's disease, melanoma, and rheumatoid arthritis were lower than average.
The old 23andMe provided me with results related to more than 125 different health risks associated with my genetic variants. In addition, the company reported my results for more than 60 different traits that I might have inherited, and it told me how I might respond to nearly 30 different pharmaceuticals. Now the FDA merely allows 23andMe to tell me less useful information, such as the fact that I carry more Neanderthal genetic variants than 85 percent of its other customers.
In all, the company is now legally permitted to provide me with just seven "Wellness Reports," which tell me, among other things, that I am probably not lactose intolerant, I don't flush when I drink liquor, I am not likely to be a sprinter, I probably have no back hair, and my second toe is probably longer than my big toe. Thanks for nothing, FDA.