2017-04-28T13:30:00-04:00

Tomorrow around 100,000 Americans are expected to join the Peoples Climate March, which plans to stream from the Capitol up Pennsylvania Avenue while demanding jobs, justice, and—oh, yes—action on climate change. The plan is to "literally" surround the White House, then stage a 100-second sit-in, symbolizing the first 100 days of Donald Trump's administration. (Perhaps President Trump will hear the protests tomorrow afternoon, but he plans to hold a rally in Pennsylvania that evening.)

It's another example of social justice movements hijacking the problem of climate change and using it as a pretext for attacking our system of market-tested betterment and innovation. Naomi Klein made this agenda explicit in her 2014 book This Changes Everything, which asserted that climate science has given progressives "the most powerful argument against unfettered capitalism" ever. Canonical Marxism predicted that capitalism would collapse under the weight of its class "contradictions," in which the bourgeoisie profit from the proletariat's labor until we reach a social breaking point. In Klein's update, capitalism will collapse because the pollution produced by heedless overconsumption will build to an ecological breaking point. "Only mass social movements can save us now," she declared.

Tomorrow's march is a reprise of a 2014 Peoples Climate March in New York City, in which some 400,000 people participated. As in 2014, progressive economic and social policies are at the top of the marchers' agenda. The demonstrators are demanding emissions cuts deep enough to keep the planet from warming more than 1.5 degrees Celsius over the pre-industrial global average temperature. (By at least one calculation, that would mean that global carbon dioxide emissions would have to peak by 2020 and fall to zero by 2050.) But they also want "an equitable and sustainable New Energy and Economic Future."
Among other things, this entails a $15 per hour minimum wage, the right to form unions, and investments targeted to give low-income Americans and people of color access to good jobs. Paul Getsos, national coordinator for the Peoples Climate Movement, also demands that the White House "immediately stop attacks on communities of color and immigrant, Muslim, indigenous and LGBTQIA communities." How this helps solve climate change is not at all clear.

The pre-march line-up confirms the organizers' social justice aspirations. Leading off the parade are the "protectors of justice," which includes native youth and youth of color, the indigenous women's delegation, and Black Lives Matter activists, among others. Next up are the "creators of sanctuary," which includes immigrants, LGBTQI, women, Latinos, Waterkeepers, and food sovereignty and land rights marchers. Third in line stand the "builders of democracy," who are representatives from labor, government workers, voting rights, and democracy organizations. The fourth contingent is the "guardians of the future," who speak for kids, parents, elders, youth, students, and peace activists. Fifth come the "defenders of the truth," representing scientists, educators, technologists, and the health community; sixth are the "keepers of faith," consisting of religious groups. The "reshapers of power" are seventh: anti-corporate, anti-nuclear, anti–fossil fuel, and pro–renewable energy activists, plus bicyclists and other transportation advocates. The final place in the lineup is called "many struggles, one home." It's reserved for environmentalists, climate activists, the business community, and everyone else.

The Peoples Climate March will alienate tens of millions of Americans who accept that man-made climate change poses significant risks but do not agree that the only solution is to try to transform the global economy into a post-capitalist utopia.
Rather than helping to solve the problem, the march will help ensure that the issue remains politically divisive and intractable. [...]
2017-04-23T06:00:00-04:00

Getting Risk Right: Understanding the Science of Elusive Health Risks, by Geoffrey C. Kabat, Columbia University Press, 248 pages, $35

Eating bacon and ham four times a week could make asthma symptoms worse. Drinking hot coffee and tea may cause cancer of the esophagus. South Africa's minister of health warns that doggy-style sex is a major cause of stroke and cancer in men. And those claims come from the health headlines of just one December week. The media inundate us daily with studies that seem to show that modern life is increasingly risky. Most of those stories must be false, given that life expectancy for American men and women, respectively, has risen from 71.8 and 78.8 years in 1990 to 76.3 and 81.1 years now. Apparently, we are suffering through an epidemic of bad epidemiology.

When it comes to separating the wheat of good public health research from the chaff of studies that are mediocre or just plain bad, Albert Einstein College of Medicine epidemiologist Geoffrey Kabat is a national treasure. "Most research findings are false or exaggerated, and the more dramatic the result, the less likely it is to be true," he declares in his excellent new book Getting Risk Right. Kabat's earlier book, 2008's Hyping Health Risks (Columbia University Press), thoroughly dismantled the prevalent medical myths that man-made chemicals, electromagnetic fields, radon, and passive smoking were significant causes of such illnesses as cancer and heart disease. His new book shows how scientific research so often goes wrong—and how hard it is for it to go right.

Kabat first reminds readers that finding a correlation between phenomena X and Y does not mean that X causes Y. Nevertheless, many researchers are happy to overinterpret such findings to suggest causation.
"If researchers can slip into this way of interpreting and presenting results of their studies," observes Kabat, "it becomes easier to understand how journalists, regulators, activists of various stripes, self-appointed health gurus, promoters of health-related foods and products, and the public can make the unwarranted leap that the study being reported provides evidence of a causal relationship and therefore is worthy of our interest." He offers some principles to keep in mind when evaluating studies. First and foremost is the toxicological maxim that the dose makes the poison. The more exposure to a toxin, the greater the harm. Potency matters greatly too. Often very sensitive assays show that two different compounds can bind to the same receptors in the body, but what really matters biologically is how avidly and how strongly one binds compared to the other. Another principle: Do not confuse hazard, a potential source of harm, with risk, the likelihood that the hazard will cause harm. Consider bacon. The influential International Agency for Research on Cancer declared bacon a hazard for cancer in 2015, but the agency does not make risk assessments. Eating two slices of bacon per day is calculated to increase your lifetime risk of colorectal cancer from 5 to 6 percent. Put that way, I suspect most people would choose to continue to enjoy cured pork products. Kabat also argues that an editorial bias skews the scientific literature toward publishing results suggesting harms. Such findings, he notes, get more attention from other researchers, from regulators, from journalists, and from activists. Ever since Rachel Carson's 1962 book Silent Spring wrongly linked cancer with exposures to trace amounts of pesticides, the American public has been primed to blame external causes rather than personal behaviors for their health problems. 
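Kabat's hazard-versus-risk point is easiest to see in the bacon arithmetic. Here is a minimal sketch, using only the figures quoted above (a lifetime colorectal cancer risk rising from 5 percent to 6 percent), of how the same data yield a scary-sounding relative increase and a modest absolute one:

```python
# Figures from the article: lifetime colorectal cancer risk without and
# with a daily two-slice bacon habit. Nothing else is assumed.
baseline_risk = 0.05  # lifetime risk, no daily bacon
exposed_risk = 0.06   # lifetime risk, two slices of bacon per day

# Absolute increase: one percentage point over a lifetime.
absolute_increase = exposed_risk - baseline_risk

# Relative increase: the same one point, expressed against the baseline,
# becomes a "20 percent higher risk" headline.
relative_increase = (exposed_risk - baseline_risk) / baseline_risk

print(f"Absolute increase: {absolute_increase * 100:.0f} percentage point(s)")
print(f"Relative increase: {relative_increase:.0%}")
```

A headline writer can truthfully report a 20 percent higher risk, while the absolute change is a single percentage point over a lifetime; put the second way, as Kabat suggests, most people would shrug and keep eating bacon.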
Unfortunately, as Kabat notes, the existence of an alarmed and sensitized public is all too useful to regulators and other interest groups. He quotes an honest but incautious remark in the air pollution researcher Robert Phalen's 2010 testimony to the California Air Resources Board: "It benefits us personally to have the public be afraid, even if these risks are trivial." Kabat suggests that the precautiona[...]
2017-04-21T13:30:00-04:00

In the flush of excitement after the post-inaugural Women's March on Washington, someone in a Reddit conversation suggested, "There needs to be a Scientists' March on Washington." Sensing that a march on Washington might sound too aggressively partisan, the organizers have now renamed the event the March for Science. That march will take place tomorrow, on Earth Day, which the coordinators somehow figured would be the perfect nonpartisan date on which to muster tens of thousands of scientists and their comrades on the National Mall. "We face a possible future where people not only ignore scientific evidence, but seek to eliminate it entirely," warns the march's mission statement. "Staying silent is a luxury that we can no longer afford. We must stand together and support science."

From whom do the marchers hope to defend science? Certainly not the American public: Most Americans are fairly strong supporters of the scientific enterprise. An October 2016 Pew Research Center poll reported, "Three-quarters of Americans (76%) have either a great deal (21%) or a fair amount of confidence (55%) in scientists, generally, to act in the public interest." The General Social Survey notes that public confidence in scientists stands out among the most stable of about 13 institutions rated in the GSS survey since the mid-1970s. (For what it's worth, the GSS reports only 8 percent of the public say that they have a great deal of confidence in the press, but at least that's higher than the 6 percent who say the same about Congress.)

The mission statement also declares, "The application of science to policy is not a partisan issue. Anti-science agendas and policies have been advanced by politicians on both sides of the aisle, and they harm everyone—without exception." I thoroughly endorse that sentiment.
But why didn't the scientific community march when the Obama administration blocked over-the-counter access to emergency contraception for women under age 17? Or dawdled for years over the approval of genetically enhanced salmon? Or tried to kill off the Yucca Mountain nuclear waste storage facility? Or halted the development of direct-to-consumer genetic testing?

One problem is that many of the marchers apparently believe that scientific evidence necessarily implies the adoption of certain policies. This ignores the always salient issue of trade-offs. For example, acknowledging that man-made global warming could become a significant problem does not mean that the only "scientific" policy response must be the immediate deployment of the current versions of solar and wind power.

The mission statement proclaims that the marchers "unite as a diverse, nonpartisan group to call for science that upholds the common good and for political leaders and policy makers to enact evidence based policies in the public interest." Setting aside the fact that the march was conceived in the immediate wake of the decidedly partisan and specifically anti-Trump Women's March on Washington, how credible are these claims to non-partisanship?

As it happens, I received an email on Thursday from the publicist for Shaughnessy Naughton, who is a chemist, a cancer researcher, and the founder of the activist group 314 Action. Naughton's group is one of the march's 170 partner organizations. 314 Action's political action committee is recruiting scientists, engineers, and other technologists to run for political office, and it plans to provide them with the "resources they need to become viable, credible, Democratic candidates."
The publicist informed me that Naughton is "available to discuss this weekend's March for Science in Washington, D.C., which will assemble scientists from across the country to rally against the Trump 'war on science.'" The headline earlier this week in the reliably left-wing Guardian makes no bones about the intent of the marchers: "Science strikes back: anti-Trump march set to draw thousands to Washington." The 170 partner organizations that have endorsed the march include such [...]
2017-04-14T13:30:00-04:00

"The benefits that immigration brings to society far outweigh their costs," declares an open letter to congressional leaders and President Donald Trump. The letter, published on Wednesday and signed by nearly 1,500 economists—including six Nobel Prize winners—notes that immigrant entrepreneurs start new businesses that hire lots of Americans; that immigrants are far more likely to work in innovative, job-creating fields such as science, technology, and engineering; and that they bring diverse skill sets that keep our workforce flexible, help companies grow, and increase the productivity of American workers.

A new study parsing employment data between 1991 and 2008 confirms that immigrants significantly boost both the productivity and the wages of workers. The paper, published this week in the journal Economic Geography, compares how 160 U.S. metropolitan areas are faring according to the statistics compiled by the Census Bureau's Longitudinal Employer-Household Dynamics program. (This dataset has information on more than 30 million workers at 1.2 million businesses, including their sex, age, race, wages, length of employment, education, and country of birth.)

The authors, economic geographers Abigail Cooke of SUNY-Buffalo and Tom Kemeny of the University of Southampton, note that "inclusive institutions" encourage trust, lowering costs and fostering cooperation. To get a handle on how inclusive various American cities are with respect to immigrants, the two researchers devise two indicators. The first measures how widespread social capital is in each city, and the second accounts for pro- and anti-immigrant ordinances adopted by local governments. Social capital consists of the connections of trust between individuals and entities that can be economically valuable.
The authors constructed their indicator for social capital by assessing County Business Patterns data on the number of social, political, advocacy, business, professional, and labor associations per 10,000 residents in each metropolitan area. They also take into account the number of gathering places, such as specialty food shops, restaurants, cafés, bars, hair salons, corner stores, fitness centers, sports clubs, and bowling alleys. For a stark contrast between places with inclusive institutions and those without, the researchers focus their analysis on the cities that scored in the top and bottom third of their social capital indicator. Municipalities with the highest social capital included Appleton, Wisconsin; Des Moines, Iowa; and Trenton, New Jersey. Those with the lowest include McAllen, Texas; Fayetteville, North Carolina; and San Bernardino, California.

Next they develop an inclusiveness indicator based on pro- and anti-immigration ordinances enacted by various metropolitan areas. They note that most of the ordinances specifically focus on undocumented immigrants, but they argue that their adoption indicates residents' attitudes toward immigration more generally. Some cities pass English-only rules or try to punish employers who hire undocumented immigrants; others pass sanctuary laws. In their analysis, they include only the urban areas in which at least 50 percent of the municipalities and counties had adopted either pro- or anti-immigrant ordinances; metropolitan regions with more mixed policies were excluded. Among the cities scoring highest on the pro-immigrant indicator were Salem, Oregon; Austin, Texas; and Fresno, California. Anti-immigrant areas included Charlotte, North Carolina; Green Bay, Wisconsin; and Harrisonburg, Virginia.

On top of all that, the researchers used the Census data to determine what percentage of people in each urban area is native- and foreign-born.
They also follow people's work and wage histories. The results? "What we found was remarkable. In cities that are unwelcoming to immigrants, as diversity rises, people's wages either don't change, or they go up by only a small amoun[...]
2017-04-07T13:30:00-04:00

Generally speaking, Americans would be satisfied if the average temperature where they live was a tad higher. Or at least that's what the sociologist Jonathan Kelley concludes in a recent study published in Social Indicators Research. Another study, however, suggests that folks in countries that are already hot will not be so happy.

Kelley, who is based at the University of Nevada, notes that the Paris climate agreement describes a global warming of two degrees Celsius—3.6 degrees Fahrenheit—above pre-industrial levels as "dangerous." Many Americans, he notes, currently live in regions that are at least that much warmer than other parts of the country. (The temperatures over the contiguous 48 states range from 15 degrees Fahrenheit in Minnesota winters to 81 degrees during Florida's torrid summers.) So he combines National Oceanic and Atmospheric Administration temperature data with survey data to probe how much a two-degree increase would bother Americans.

The survey in question asked a national sample of more than 2,000 Americans to rate how satisfied they were with their summer and winter weather on a scale of 0 to 100. A 25-year-old woman in Wisconsin, for example, rated winter in the Badger State at 0 points and summer at 90. Across the nation as a whole, Americans gave their summer weather an average rating of 67 and their winter weather 61. Each extra degree Fahrenheit reduced their satisfaction with summer by 0.82 points, and each higher degree Fahrenheit increased their satisfaction with winter by 1.03 points. Northerners' feelings about their winters were somewhat negative, with more than 10 percent rating them at 0 points; 30 percent of Southerners scored their winter weather at 100 points. "Such warming will greatly increase Americans' satisfaction with winter weather, especially in the north, and somewhat decrease satisfaction with summer weather in both north and south," reports Kelley.
"On balance the nation benefits slightly." Using NOAA data, Kelley calculates that a 4-degree-Fahrenheit temperature increase would be the equivalent for a typical American of moving about 180 miles south. To experience an average of 4 degrees Fahrenheit of warming, a Virginian like me would head for North Carolina. (My wife spent her childhood in North Carolina; it's not so bad.) As it happens, those of us who reside in the Old Dominion rate our summer and winter weather at 61 and 62 points, respectively; those smug North Carolinians correspondingly give theirs 72 and 70 points.

Kelley reports that over the year as a whole, residents in warmer states are generally happier with their weather. Next Kelley compares the weather satisfaction scores of states in comparable temperature bands. For example, the average yearly temperature of states like Minnesota, Maine, North Dakota, and Montana hovers around 44 degrees Fahrenheit; in Michigan, New York, Colorado, and Oregon, it's 48. Parsing the weather preferences in the survey, he finds that southerners' rising dissatisfaction with their climate-change-induced higher summertime temperatures is more than counterbalanced by the increased happiness of northerners with their warmer winters. A four-degree increase in both summer and winter temperatures produces an almost two-point increase in year-round happiness with the weather. More surprisingly, an eight-degree increase in heat yields a two-point increase in weather satisfaction.

Kelley then turns to life-satisfaction surveys to try to figure out what monetary value Americans would put on improved weather. Through a complicated process, he calculates that a one-point increase in weather satisfaction is equivalent to about a $3,000 annual increase in income.
"By our (admittedly rough) estimates for 'dangerous' warming's effect over the year as a whole, combining its gains for winter and losses for summer and aggregating over the US as a whole, the $3000 gain from a single climate satisfaction point comes t[...]
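Kelley's accounting can be sketched from the two per-degree coefficients quoted above (summer satisfaction falls 0.82 points, and winter satisfaction rises 1.03 points, per extra degree Fahrenheit) together with his roughly $3,000-per-point valuation. The naive linear sum below is my own simplification; Kelley's fuller state-band comparison yields larger net gains:

```python
# Per-degree satisfaction coefficients and dollar valuation quoted in the
# article. Assumption (mine): summer and winter effects can simply be summed.
SUMMER_COEF = -0.82       # satisfaction points per extra degree F, summer
WINTER_COEF = +1.03       # satisfaction points per extra degree F, winter
DOLLARS_PER_POINT = 3000  # Kelley's rough valuation of one satisfaction point

def net_satisfaction_change(degrees_f: float) -> float:
    """Net change in year-round weather satisfaction for a uniform warming."""
    return (SUMMER_COEF + WINTER_COEF) * degrees_f

# The Paris agreement's "dangerous" 2 degrees C, expressed in Fahrenheit.
warming_f = 3.6
points = net_satisfaction_change(warming_f)
print(f"Net change: {points:+.2f} points, "
      f"roughly ${points * DOLLARS_PER_POINT:,.0f} per year")
```

Even on this conservative reading the net sign is positive, which is Kelley's point: the winter gains slightly outweigh the summer losses.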
"You've got to see Sweat," urged a friend who had just attended the play during its premiere run in early 2016 at Washington, D.C.'s Arena Stage. "It's why Donald Trump is going to be president."
I finally got a chance to follow his advice at the Public Theater in New York in December, after an election in which white working-class votes propelled the billionaire reality TV star into the Oval Office. When it moves to Broadway's Studio 54 in March, still more theatergoers will be able to check out my friend's bold claim for themselves.
The play is a personal and political drama that searingly portrays how mechanization and globalization upend blue-collar Americans' lives. Written by the Pulitzer-winning playwright Lynn Nottage, Sweat is set in a fading central Pennsylvania manufacturing town in 2000 and 2008. It opens with two young men, Chris and Jason, meeting with their parole officer. They have evidently been convicted of the same crime.
The arc of the play is the story of how they got there. The main characters are three middle-aged women—Jason's mother Tracey, Chris' mother Cynthia, and their friend Jessie—who have proudly worked their whole lives on the line at a local factory. In 2000, they are regulars at the neighborhood bar run by Stan. A former factory worker, Stan is more aware of how the broader economic winds are blowing. He warns the three friends, "You could wake up tomorrow and all your jobs are in Mexico, wherever."
Sure enough, the company announces that it will move its operations south of the border unless the workers take a pay cut. A strike ensues, and the company hires immigrants at lower wages to replace them. The friendships fray spectacularly as each woman tries to survive her personal economic apocalypse. Yes, it will give coastal elites (you know who you are) some insight into Trump's victory.
2017-03-31T13:30:00-04:00

"In this world nothing can be said to be certain, except death and taxes," quipped Benjamin Franklin. For now both remain inevitable, but two exciting new studies suggest that the grim reaper might be put off by novel treatments that can slow and even reverse aging.

Peter de Keizer, a molecular geneticist at Erasmus University, reports in the journal Cell that he and his colleagues have developed a technique that kills off senescent cells. Our bodies have two ways of preventing damaged cells from becoming cancerous: kill them off, or cause them to cease replication and thus become senescent. Senescent cells accumulate as we grow older, secreting inflammatory substances that harm neighboring cells and contribute to many age-related diseases, including atherosclerosis and diabetes. De Keizer and his colleagues have developed a treatment in mice that selectively kills senescent cells while leaving healthy normal cells alone. They discovered that old or damaged cells become senescent rather than die when the FOXO4 protein binds to the tumor suppressor gene p53. They then designed another protein that interferes with FOXO4's ability to keep p53 from triggering cell death.

De Keizer's team administered the new protein to both fast-aging and normally aged mice. The treatment worked as they had hoped, jumpstarting the ability of p53 to make senescent cells commit suicide. Eliminating senescent cells restored stamina, fur density, and kidney function in both strains of mice. The researchers report that they are continuing to study the rodents to see if the treatment extends their lifespans. They plan to try the treatment to stop brain cancer in human beings, but the ultimate goal is to treat aging as a disease. "Maybe when you get to 65 you'll go every five years for your anti-senescence shot in the clinic. You'll go for your rejuvenation shot," de Keizer told the Tech Times.
In the same week, a group of Harvard researchers led by molecular biologist David Sinclair reported in Science about experiments in mice that thwart DNA damage associated with aging and exposure to radiation. As we age, our cells lose their ability to repair the damage to the DNA that makes up our genes. The repair process is orchestrated by the SIRT1 and PARP1 proteins. Both proteins consume the ubiquitous coenzyme nicotinamide adenine dinucleotide (NAD) to operate. As we grow older, the amount of NAD in our cells declines, allowing another protein, DBC1, to inhibit the DNA repair activity of both SIRT1 and PARP1.

In their new research, the scientists fed the NAD precursor nicotinamide mononucleotide (NMN) to mice that were equivalent in age to an 80-year-old person. They also gave it to mice whose DNA had been damaged by radiation. The compound boosted NAD back to youthful levels and restored the ability to repair DNA damage in both the old and the irradiated cells. Sinclair said, "The cells of the old mice were indistinguishable from the young mice, after just one week of treatment." In addition, dosing astronauts traveling to Mars with NMN could counteract the damage that radiation in deep space would cause them. In an earlier experiment by Sinclair and his associates, the muscles of two-year-old mice fed NMN resembled those of six-month-old mice with respect to insulin resistance, inflammation, muscle wasting, and other important markers. Sinclair says that his group plans to launch human NMN trials in the next six months.

Other groups have already started and completed safety trials of other NAD precursors in human beings. Leonard Guarente, director of the Massachusetts Institute of Technology's Glenn Laboratory for the Science of Aging, reported in December the results of a clinical trial involving 120 people who took the NAD precursor nicotinamide riboside (NR). The trial found that subjects experienced no serious adverse events.
The participants ranged in age[...]
2017-03-24T13:30:00-04:00

"It's gotten to a point where it's not even being reported. In many cases, the very, very dishonest press doesn't want to report it," asserted President Donald Trump a month ago. He was referring to a purported media reticence to report on terror attacks in Europe. "They have their reasons, and you understand that," he added. The implication, I think, is that the politically correct press is concealing terrorists' backgrounds. To bolster the president's claims, the White House then released a list of 78 terror attacks from around the globe that Trump's minions think were underreported. All of the attackers on the list were Muslim—and all of the attacks had been reported by multiple news outlets.

Some researchers at Georgia State University have an alternate idea: Perhaps the media are overreporting some of the attacks. Political scientist Erin Kearns and her colleagues raise that possibility in a preliminary working paper called "Why Do Some Terrorist Attacks Receive More Media Attention Than Others?" First they ask how many terror attacks have taken place between 2011 and 2015. (The 2016 data will become available later this summer.)

The Global Terrorism Database at the University of Maryland, which catalogs information on over 150,000 incidents since 1970, defines terrorism as an "intentional act of violence or threat of violence by a non-state actor" that meets at least two of three criteria. First, that it be "aimed at attaining a political, economic, religious, or social goal." Second, that there is "evidence of an intention to coerce, intimidate, or convey some other message to a larger audience (or audiences) other than the immediate victims." And finally, that it be "outside the precepts of International Humanitarian Law." The Georgia State researchers report that the database catalogs 110 terrorist attacks in the U.S. over the most recent five-year span in the database.
(Globally, there were more than 57,000 terrorist attacks during that period.) In some cases, the media tended to report several attacks perpetrated by the same people as a single combined story; following their lead, the researchers reduce the number to 89 attacks.

They then set out to answer four different questions: Would an attack receive more coverage if the perpetrators were Muslim, if they were arrested, if they aimed at government employees or facilities, or if it resulted in a high number of deaths? From a series of searches at LexisNexis and CNN.com, Kearns and her colleagues gathered a dataset of 2,413 relevant news articles. If each attack had received equal media attention, they would have garnered an average of 27 news articles apiece. Interestingly, 24 of the attacks listed in the GTD did not receive any reports in the news sources they probed. For example, a cursory Nexis search failed to turn up any news stories about a 2011 arson attack on townhouses under construction in Grand Rapids, Michigan. My own internet search, however, did find several local news reports that cited a threatening letter warning residents to leave the neighborhood: "This attack was not isolated, nor will it be the last. We are not peaceful. We are not willing to negotiate." The GTD reports so far that no one has been apprehended for the attack.

For those five years, the researchers found, Muslims carried out only 11 out of the 89 attacks, yet those attacks received 44 percent of the media coverage. (Meanwhile, 18 attacks actually targeted Muslims in America.) The Boston marathon bombing generated 474 news reports, amounting to 20 percent of the media terrorism coverage during the period analyzed. Overall, the authors report, "The average attack with a Muslim perpetrator is covered in 90.8 articles. Attacks with a Muslim, foreign-born perpetrator are covered in 192.8 articles on average. Compare this with other attacks, which received an average of 18.1 art[...]
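The coverage disparity the Georgia State team describes comes down to simple shares. A quick sketch using only the figures reported above (89 attacks, 2,413 articles, 11 attacks by Muslim perpetrators drawing 44 percent of coverage):

```python
# Figures from the Kearns et al. working paper as reported in the article.
total_attacks = 89
total_articles = 2413
muslim_perp_attacks = 11
muslim_perp_coverage_share = 0.44  # share of all articles

# If coverage were spread evenly, each attack would get about 27 articles.
expected_per_attack = total_articles / total_attacks

# Attacks by Muslim perpetrators were ~12% of attacks but 44% of coverage.
attack_share = muslim_perp_attacks / total_attacks
overrepresentation = muslim_perp_coverage_share / attack_share

print(f"Expected articles per attack: {expected_per_attack:.0f}")
print(f"Share of attacks: {attack_share:.0%}; share of coverage: "
      f"{muslim_perp_coverage_share:.0%}")
print(f"Coverage relative to attack share: {overrepresentation:.1f}x")
```

The roughly 3.6-fold overrepresentation computed here is just a restatement of the authors' per-attack averages (90.8 articles versus 18.1) in share terms.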
2017-03-17T13:30:00-05:00

In his Commentaries on the Laws of England, William Blackstone declared, "It is better that ten guilty persons escape, than that one innocent suffer." In an 1785 letter, Benjamin Franklin was even more exacting: "That it is better 100 guilty Persons should escape, than that one innocent Person should suffer, is a Maxim that has been long and generally approv'd, never that I know of controverted."

In 2011, the U.S. Department of Education took a different position. That was the year the department's Office for Civil Rights sent a "dear colleague" letter reinterpreting Title IX of the Education Amendments Act of 1972. That statute reads: "No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance." The OCR's letter declared that sexual assault is "a form of sex discrimination prohibited by Title IX." (Sexual violence is a great deal more than discrimination, of course, but set that aside for the moment.) Afraid of losing their federal funding, colleges then set about devising grievance procedures to address complaints of sexual harassment and sexual assault on their campuses.

The problem: The OCR decreed that these Title IX tribunals must eschew "the 'clear and convincing' standard"—that is, that they cannot refuse to punish people unless "it is highly probable or reasonably certain that the sexual harassment or violence occurred." Such procedures, the office explained, "are inconsistent with the standard of proof established for violations of the civil rights laws." Instead the tribunals should embrace the weaker "preponderance of the evidence" standard, in which "it is more likely than not that sexual harassment or violence occurred."
Without wading into the weeds of specific cases (I refer readers to the excellent and thorough reporting of my Reason colleague Robby Soave), it is apparent that applying a lower standard of proof means that it is easier to punish those guilty of sexual violence. Conversely, it also means that more innocent people will be falsely found guilty of offenses they did not commit.

So how high a risk of false conviction do the innocent face under the OCR's Title IX guidance? John Villasenor, a professor of public policy at the University of California, Los Angeles, set out to answer that in a study that uses probability theory to model false Title IX convictions under the preponderance of the evidence standard. What he found should take all fair-minded Americans aback.

Villasenor begins by examining how legal scholars assess the stringency of burdens of proof in determining the guilt or innocence of defendants. For example, surveys of judges, jurors, and college students find that for guilt beyond a reasonable doubt, respondents converge on a 90 percent probability of guilt as a fair threshold for conviction. For the preponderance of the evidence standard, the figure is 50 percent.

The lower standard of proof doesn't merely make it more likely that someone will be convicted; it gives prosecutors a greater incentive to risk bringing a case. Villasenor outlines an example in which 100 people are accused of wrongdoing. He supposes that 84 are guilty and 16 are innocent. Now suppose that the tribunal convicts 76 of the guilty while letting eight guilty individuals go, and that it acquits 12 of the innocent while convicting four. The overall probability of conviction is 80 percent (76 guilty + 4 innocent), and by definition the probability of being innocent is 16 percent.
But since four innocent defendants are convicted, there is a 25 percent probability (4 out of 16) that an innocent person will be found guilty. Vi[...]
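Villasenor's illustrative numbers can be checked with a few lines of arithmetic. The sketch below is our own, not code from his study, and the function name is ours:

```python
# Check the article's example: 100 accused (84 guilty, 16 innocent);
# the tribunal convicts 76 of the guilty and 4 of the innocent.

def conviction_stats(n_guilty, n_innocent, convicted_guilty, convicted_innocent):
    """Return (overall conviction rate, P(convicted | innocent))."""
    total = n_guilty + n_innocent
    overall_rate = (convicted_guilty + convicted_innocent) / total
    false_conviction_rate = convicted_innocent / n_innocent
    return overall_rate, false_conviction_rate

overall, false_rate = conviction_stats(
    n_guilty=84, n_innocent=16,
    convicted_guilty=76, convicted_innocent=4,
)
print(f"Overall conviction rate: {overall:.0%}")     # 80%
print(f"P(convicted | innocent): {false_rate:.0%}")  # 25%
```

The key point is that the 25 percent figure conditions on innocence: among the 16 innocent accused, 4 are convicted, a far higher risk than the raw 4-in-100 share suggests.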
2017-03-10T13:30:00-05:00

At his first post-election press conference, Donald Trump declared that pharmaceutical companies are "getting away with murder" by pricing their drugs too high. "Pharma has a lot of lobbies, a lot of lobbyists, and a lot of power. And there's very little bidding on drugs," Trump said in January. "We're the largest buyer of drugs in the world, and yet we don't bid properly." At a meeting with pharmaceutical company executives later in January, Trump stated, "The U.S. drug companies have produced extraordinary results for our country, but the pricing has been astronomical for our country." He added, "For Medicare, for Medicaid, we have to get the prices way down."

Trump was characteristically vague about just how he would lower pharmaceutical prices, but let's assume that Medicare was legally mandated to negotiate prices with drug companies. In this case, "negotiate" amounts to creating price controls, since pharmaceutical manufacturers would largely have to take whatever price the government offered, much as already occurs with the Veterans Affairs Department. Most companies would likely accept the controlled prices because, with the marginal cost of each additional pill so low, they would still make money on their existing drugs.

What would happen? A new study in Forum for Health Economics & Policy by a team of researchers led by Jeffrey Sullivan at the consultancy Precision Health Economics finds that price controls would indeed reduce the cost of drugs to Medicare Part D participants.* But the unintended consequences to Americans' lives and health in the future would be substantial and bad. The researchers sought to analyze what would happen if Veterans Affairs drug pricing and formulary policies were applied to Medicare Part D—the federal program that subsidizes the costs of prescription drugs for senior citizens.
More than 41 million Americans are enrolled in it, and estimated benefits will total $94 billion, representing 15.6 percent of net Medicare outlays in 2017. Without going into great detail, the Veterans Affairs Department sets the federal ceiling price it will pay for drugs at 24 percent below the average price a pharmaceutical company gets from wholesalers. In addition, the VA keeps overall spending down by restricting the drugs to which veterans have access. For example, the Avalere consultancy reported in 2015 that the VA's formulary does not offer access to nearly a fifth of the top 200 most commonly prescribed Medicare Part D drugs.

So what would happen if Medicare adopted VA-style price controls? Cutting prices would save Medicare money. The researchers cite a 2013 study by Dean Baker, co-director of the Center for Economic and Policy Research, that calculated that drug price controls could save Medicare between $24.8 billion and $58.3 billion annually. On the other hand, less revenue for pharmaceutical companies means less money devoted to research and development. A separate study, published in Managerial and Decision Economics in 2007, estimated that cutting prices by 40 to 50 percent in the U.S. would lead to 30 to 60 percent fewer R&D projects being undertaken. Reduced investment in pharmaceutical R&D in turn means fewer new drugs becoming available to patients. A 2009 study in Health Affairs calculated that as a result of fewer innovative pharmaceuticals being developed, average American life expectancy in 2060 would be around 2 years lower than it would otherwise have been.

Aiming to calculate the effect of drug price controls on future Medicare savings, Sullivan and his colleagues input these data and estimates into an econometric model that aims to track cohorts of people over the age of 50 and to project their health and economic outcomes. They estimate how both VA-style price [...]
2017-03-03T14:15:00-05:00

"The social cost of carbon is the most important number that you've never heard of," according to University of Chicago economist Michael Greenstone. Greenstone led the interagency working group that devised this metric during the Obama administration. Since it was first calculated in 2010, the social cost of carbon has been used to justify 80 different federal regulations that purportedly provide Americans with over a trillion dollars' worth of benefits.

"The social cost of carbon is nothing but a political tool lacking scientific integrity and transparency conceived and utilized by an administration pushing a green agenda to the detriment of the American taxpayers," insisted Rep. Darin LaHood (R-Ill.), chair of the Oversight Subcommittee of the House Committee on Science, Space and Technology. LaHood's remarks were made as he opened a hearing called "At What Cost? Examining the Social Cost of Carbon" earlier this week. "This metric did not simply materialize out of thin, and dirty, air," countered Rep. Don Beyer (D-Va.). Beyer argued that the social cost of carbon (SCC) metric was devised by the Obama administration through a process that "was transparent, has been open to public comment, has been validated over the years and, much like our climate, is not static and changes over time in response to updated inputs."

So what are these politicians fighting about? The social cost of carbon is a measure, in dollars, of the long-term damage done by a ton of carbon dioxide emissions in a given year. Most of the carbon dioxide that people add to the atmosphere comes from burning coal, oil, and natural gas. The Obama administration's interagency working group calculated that the SCC was about $36 per ton (in 2007 dollars). This figure was determined by cranking damage estimates through three different integrated assessment models that try to link long-term climate change with econometric projections.
Notionally speaking, imposing a tax equal to the SCC would encourage people to reduce their carbon dioxide emissions while yielding revenues that could be used to offset the estimated damage, e.g., by building higher sea walls or developing heat-resistant crops. Can citizens take that $36-a-ton estimate to the bank? Not really. First, consider that integrated assessment models are trying to forecast how extra carbon dioxide will impact climate and economic growth over the course of this century and beyond. One of the critical variables is climate sensitivity, conventionally defined as how much warming can be expected from doubling the amount of carbon dioxide in the atmosphere. The working group calculating the SCC also used various discount rates. (One way to think of discount rates is to consider how much interest you'd require to put off getting a dollar for 10 years.) Finally, instead of focusing on domestic harms, the working group included the global damages in calculating the SCC. Republicans, who convened the subcommittee hearing, argue that the SCC is bogus and therefore many of the regulations aimed at cutting the emissions of carbon dioxide by restricting burning of fossil fuels are too. In his 2013 analysis of the IAMs relied upon by the Obama administration's interagency working group, Massachusetts Institute of Technology economist Robert Pindyck concluded that all three models "have crucial flaws that make them close to useless as tools for policy analysis." He pointedly added, "These models can be used to obtain almost any result one desires." In other words: Garbage In, Garbage Out. Having tossed the models aside, Pindyck earnestly sought another method for establishing a plausible SCC. In November, he published a new study in which he asked selected economists and climate scientists to provide their best guess of what the SCC should b[...]
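To see why the choice of discount rate matters so much to the SCC, consider a toy present-value calculation. The dollar figure, horizon, and rates below are illustrative choices of ours, not the working group's actual inputs:

```python
# A fixed future climate damage shrinks rapidly in present-value terms
# as the discount rate rises, which is why the rate choice dominates
# estimates of the social cost of carbon.

def present_value(future_damage, rate, years):
    """Discount a damage occurring `years` from now back to today."""
    return future_damage / (1 + rate) ** years

damage, horizon = 100.0, 50  # $100 of climate damage, 50 years out
for rate in (0.025, 0.03, 0.05, 0.07):
    pv = present_value(damage, rate, horizon)
    print(f"discount rate {rate:.1%}: present value ${pv:.2f}")
```

At a 3 percent rate, $100 of damage 50 years out is worth about $23 today; at 7 percent, under $4. A regulator choosing between those rates is implicitly choosing between very different carbon costs.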
Craft distilling is burgeoning in America, with around 800 distilleries making small batch rums, vodkas, gins, and whiskeys. The industry took off after 1980, when the government lifted the requirement that a federal agent be on-site daily at such booze-making operations.
Nevertheless, the spirits industry remains strangled in red tape, especially in Alcoholic Beverage Control (ABC) monopoly states such as Virginia. In December I visited three Charlottesville-area craft distilleries—Silverback, Vitae Spirits, and Virginia Distillery Co.—to sample their offerings. At each, existing law held me to a measly three ounces of imbibing. Still, that is a big improvement over last year, when customers could sample just two ounces, and an even bigger one over the year before, when in-house sipping was rationed at one and a half ounces.
And six years ago, would-be tipplers could only "nose" but not sip spirits at distilleries in the state. There are no such prescribed limits on quaffing at any of Virginia's 262 wineries and 70 breweries; the hard stuff is held to harder standards. The bureaucratic workaround to the ABC monopoly is to license distillery tasting rooms as ABC stores. Distillers must buy their own product and send all in-store sales receipts to the ABC Board, which then returns the wholesale price back to the distilleries.
As of August 1, 2016, Virginia distilleries can sell directly to restaurants. The catch is that they can't deliver their product; restaurateurs must physically go to distilleries to make their purchases.
"Progress in changing liquor regulations is made incrementally," says Virginia Distillers Association President Gareth Moore, who is also CEO of Virginia Distillery Co. He noted that the new three-ounce limit enables customers to try several products in half-ounce pours and enjoy a single full cocktail during their visits. Moore next hopes to change the law to allow distillers to sell bottles at local festivals.
2017-02-24T13:30:00-05:00

A year makes quite a difference. During the run-up to 2016's Conservative Political Action Conference (CPAC), many activists on the right urged the American Conservative Union, which organizes the annual event, to rescind its invitation to Donald Trump. Allowing Trump to speak "will do lasting and huge (yuge!) damage to the reputations of CPAC, ACU, individual ACU board members, the conservative movement, and indeed the GOP and America," warned Republican strategist Liz Mair, who worked with the anti-Trump political action committee Make America Awesome. The candidate ultimately cancelled his long-planned speech, pointing to campaign events in Kansas and Florida as an excuse. There's a good chance he also wanted to avoid answering questions after his talk, not to mention the embarrassment of having hundreds of conservative activists stage a walkout.

Winning a presidential election certainly changes things. "By tomorrow this will be TPAC," Trump adviser Kellyanne Conway quipped yesterday. This morning Trump received a sustained standing ovation and chants of "USA! USA!" He told the CPAC crowd that "our victory was a win for conservative values." As the rest of his nationalist address made clear, Trump is no more conservative now than he was before the election. Nevertheless, his support among Republican voters stands high, and Republican politicians are falling in line behind him because rank-and-file party members trust him more than they trust GOP congressional leaders.

Clearly some citizens support Trump because they believe his "alternative facts" about crime rates and free trade and hope that his hodge-podge of anti-liberty promises will somehow "make America great again." But how to explain the surge in support for Trump among once-skeptical CPAC participants and other conservative voters? Perhaps because lots of conservatives are just acting as though they believe Trump's promises.
That's the explanation suggested by the Cornell political scientist Andrew Little in "Propaganda and Credulity," a paper just published in Games and Economic Behavior. "Politicians lie, and coerce others to lie on their behalf," argued Little. "These lies take many forms, from rewriting the history taught in schools, to preventing the media from reporting on policy failures, to relatively innocuous spinning of the economy's performance in press conferences." Little rather sanguinely observes that most people accept that lying plays a "central role in politics." This poses a game-theory problem: If audiences know that they are being lied to, why do politicians bother doing it? Little's explanation: "Politicians lie because some people believe them." Little cites psychological experiments that show most people tend to believe what they are told even when they know the speaker has incentives to mislead them. In addition, empirical studies show that government propaganda actually works. "You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time," Abraham Lincoln purportedly said. Little has constructed a model that suggests that fooling some of the people can be enough to get most of the people acting like they are fooled. "While those who believe whatever the government tells them are tautologically responsive to propaganda," notes Little, "their presence has powerful effects on the behavior of those who are aware that they are being lied to, as well as those doing the lying." Less credulous folks look around to gauge how their fellow citizens are responding to the politicians' claims, and then must decide how they will act. If those fellow citizens seem to believe the propaganda, then the less credulous might well conclude that [...]
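Little's mechanism can be caricatured in a few lines of code. The toy model below is a deliberate oversimplification of our own devising, not the game from "Propaganda and Credulity": a credulous fraction of citizens always acts as if the propaganda were true, and skeptics conform only when they expect enough of the population to act that way:

```python
# Toy coordination model: skeptics watch how many of their fellow
# citizens act as believers, and conform if that share clears a
# threshold. A modest credulous minority can then tip everyone into
# acting as if the propaganda were true.

def fraction_complying(credulous, threshold):
    """Fraction of the population acting as if the claim were true.

    credulous: share of citizens who believe the propaganda outright.
    threshold: share of compliers at which conforming becomes the
               skeptics' best response.
    """
    # Skeptics know the credulous will comply regardless. If that alone
    # meets the threshold, skeptics conform too and compliance becomes
    # universal; otherwise only the credulous comply.
    return 1.0 if credulous >= threshold else credulous

for share in (0.1, 0.3, 0.5):
    print(f"credulous share {share:.0%} -> "
          f"compliance {fraction_complying(share, threshold=0.3):.0%}")
```

The discontinuity is the point: once the credulous share reaches the (assumed) threshold, the skeptics' awareness that they are being lied to no longer changes how they act.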
Google's parent company, Alphabet, revealed in December that an unaccompanied blind man had successfully traveled around Austin, Texas, in one of the company's cars, which had neither a steering wheel nor floor pedals. That same month, Alphabet announced that it is spinning off its self-driving vehicle technology into a new division called Waymo. Also in December, Uber launched an experimental self-driving ride-sharing service in San Francisco.
The future is rushing toward us. Unfortunately, the government wants to help.
In the case of Uber, the California Department of Motor Vehicles (DMV) was so eager to help that it ordered the company to shut down its service, declaring that its regulations "clearly establish that an autonomous vehicle may be tested on public roads only if the vehicle manufacturer, including anyone that installs autonomous technology on a vehicle, has obtained a permit to test such vehicles from the DMV."
Anthony Levandowski, head of Uber's Advanced Technology Group, responded by observing that "most states see the potential benefits" of self-driving technology and "have recognized that complex rules and requirements could have the unintended consequence of slowing innovation." By refraining from excessive regulation, added Levandowski, these jurisdictions "have made clear that they are pro technology. Our hope is that California, our home state and a leader in much of the world's dynamism, will take a similar view." Uber moved its self-driving fleet to Arizona.
The U.S. Department of Transportation (DOT) likewise wants to "accelerate the next revolution in roadway safety"—so in September, naturally, the agency issued a 116-page Federal Automated Vehicles Policy that outlines a 15-point design and development checklist applicable to the makers of automated cars. In case that was not enough help, the agency then issued a 392-page Notice of Proposed Rulemaking to mandate that all new light vehicles talk to each other using a very specific vehicle-to-vehicle (V2V) technology.
Instead, these rules are likely to slow down innovation and development. Compliance with the agency's 15-point safety assessment is supposedly voluntary, but woe betide any company that fails to file the proper paperwork. Even more worrying, the DOT is calling for a shift from the current regime, in which automakers self-certify that their vehicles meet safety standards, to a system where the agency tests and approves the product before it can go to market. This would bring all of the speed and efficiency of the federal government's drug approval process to the auto industry.
Plus, as Competitive Enterprise Institute researcher Marc Scribner points out, the safety benefits of the V2V mandate "will be trivial for the next 15 years, at which point far superior automated vehicle technology may be deployed to consumers." Self-driving cars equipped with autonomous collision avoidance technologies will likely provide all of the supposed benefits of V2V communications—and do it sooner. If the incoming Trump administration really wants to help, it'll get Washington out of the way and withdraw these two proposed rules.
2017-02-10T13:30:00-05:00

High population density might induce better habits, according to some new research at the University of Michigan. If so, that's good news for the residents of an ever more highly populated world—and a big surprise for a generation of social critics.

"Popollution" threatens to destroy the planet, Larry Gordon warned in his 1982 presidential address to the American Public Health Association. "When we consider the problems of hunger, poverty, depletion of resources, and overcrowding among the residents of our planet today, the future of human welfare looks grim indeed," he declared. Overcrowding was a big concern for those 20th-century prophets of population doom.

In 1962, National Institute of Mental Health researcher John Calhoun published an influential article, "Population Density and Social Pathology," in Scientific American. Calhoun had conducted experiments in which he monitored overcrowded rats. As population density increased, female rats became less able to carry pregnancies to full term—and they so neglected the pups that were born that most died. Calhoun also documented increasing behavioral disturbances among the male rats, ranging "from sexual deviation to cannibalism and from frenetic overactivity to a pathological withdrawal." All of these pathologies amounted to a "behavioral sink" in which infant mortality ran as high as 96 percent. Calhoun's work was cited both by professional researchers and by overpopulation popularizers. Gordon, for example, argued that "too many members of the human species are already being destroyed by violence in overpopulated areas in the same manner as suggested by laboratory research utilizing other animals."
In his 1961 book The City in History, the anti-modernist critic Lewis Mumford cited "scientific experiments in rats—for when they are placed in equally congested quarters, they exhibit the same symptoms of stress, alienation, hostility, sexual perversion, parental incompetence, and rabid violence that we now find in the Megapolis." In The Pump House Gang (1968), the hipster journalist Tom Wolfe referenced Calhoun's behavioral sink: "Overcrowding gets the adrenalin going, and the adrenalin gets them hyped up. And here they are, hyped up, turning bilious, nephritic, queer, autistic, sadistic, barren, batty, sloppy, hot-in-the-pants, chancred-on-the-flankers, leering, puling, numb..." And in his 1968 screed The Population Bomb, the Stanford biologist Paul Ehrlich declared that he had come to emotionally understand the population explosion "one stinking hot night in Delhi" during a taxi ride to his hotel. "The streets seemed alive with people. People eating, people washing, people sleeping. People visiting, arguing, and screaming. People thrusting their hands through the taxi window, begging. People defecating and urinating. People clinging to buses. People herding animals. People, people, people, people....[T]he dust, the noise, heat, and cooking fires gave the scene a hellish prospect." The theme of dystopian overcrowding inspired many popular books in the 1960s and 1970s, note the London School of Economics historians Edmund Ramsden and Jon Adams. Among the texts they cite are Terracide (1970), by Ron M. Linton; My Petition for More Space (1974), by John Hersey; and the novels Make Room! Make Room! (1966), by Harry Harrison; Logan's Run (1967), by William Nolan and George Johnson; and Stand on Zanzibar (1968), by John Brunner. But now, in stark contrast to these visions of chaos and collapse, new research suggests that increased population density isn't a disaster at all. Indeed, it's channeling human efforts and aspirations in productive directions. 
So says a report by a team of r[...]