Ronald Bailey: Reason Magazine articles

Updated: 2017-05-29T00:00:00-04:00


Will Florida Ban Fracking?


Florida produces very little oil and natural gas. According to the state's Department of Environmental Protection, just 64 wells are in operation, and in all of 2016 they produced a total of 2 million barrels of oil and 20 billion cubic feet of natural gas. None of those wells was drilled using fracking techniques.

So why did the Florida Senate consider a ban in March? It turns out that familiarity breeds acceptance, according to a January 2017 working paper by the Oregon State University sociologist Hilary Boudet and her colleagues. In the Sunshine State, the inverse seems to be true.

The authors wanted to find out how Americans who live next door to fracked wells feel about them, compared to folks who don't. So they analyzed nationally representative survey data probing the attitudes of nearly 20,000 people in nine waves between 2012 and 2016. They combined the survey results with data about how close respondents actually lived to oil and gas wells.

Among respondents who said they were familiar with fracking, which involves injecting high-pressure fluids into wells to create minute cracks that release trapped oil and natural gas, the researchers found "generalizable empirical evidence that those who are located closer to new unconventional oil and gas wells are more familiar with and more supportive of hydraulic fracturing." In other words, folks who live closer to wells are more likely to come down on the side of yimby—Yes In My Backyard.

Conversely, people living farther away from oil and gas development are more likely to associate fracking with negative impacts. Respondents in Denver (12 miles away on average from a newly active well) are more supportive of fracking than respondents in Orlando (400 miles away on average). So Floridians say nimby—Not In My Backyard—even though fracking is nowhere near their backyards.

Combined with directional drilling, this form of unconventional well development has boosted daily U.S. oil production from 5 million barrels in 2008 to nearly 9 million barrels now, and it has increased annual U.S. natural gas production from a 1970–2005 plateau of about 18 trillion cubic feet to over 27 trillion cubic feet today. This has helped cut the prices of these fossil fuels to about half of what they were a decade ago.

The Left Is Rebranding Environmental Regulations As Environmental Protections


"Trump signs order at the EPA to dismantle environmental protections," declares a March 28 headline in The Washington Post. An April 27 article in the Post described an "effort to remove environmental protections." Two days later, another Post article stated that Trump's term in office has "already seen multiple rollbacks of environmental protections." The Post isn't the only publication pushing such language. Here's The New York Times: "President Trump's unfortunate and misguided rollback of environmental protections has led to a depressing and widespread belief that the United States can no longer meet its commitment under the Paris climate change agreement." Here's The Huffington Post: "Environmental Protections Save Lives, Create Jobs And Strengthen The Economy." Here's The New Yorker: "It's clear that we're about to witness the steady demolition, or attempted demolition, of the environmental protections that have been put in place over the past five decades."

In each of those instances, the words "environmental protections" could easily have been replaced by "environmental regulations." I'm speaking anecdotally here, but in recent months both mainstream and activist media have seemed to use "environmental protections" more often and "environmental regulations" less.

Aristotle defined rhetoric as "the faculty of observing in any given case the available means of persuasion." And one of the chief paths of persuasion, he argued, comes "when the speech stirs their emotions." So which word has more emotional appeal, regulation or protection? Regulation denotes "a law, rule, or other order prescribed by authority, especially to regulate conduct." Protection is defined as "the act of protecting or the state of being protected; preservation from injury or harm." Regulation is coercive, perhaps punitive; protection is warm and fuzzy.
As I puzzled over this apparent shift in terminology, my mind naturally turned to the retired Berkeley linguist and cognitive scientist George Lakoff. Lakoff has spent years thinking about how political progressives could become more persuasive with the public. To achieve that, he wants progressives to engage in what he calls "honest reframing." "Reframing is telling the truth as we see it—telling it forcefully, straightforwardly, articulately, with moral conviction and without hesitation," he writes. Lakoff believes that conservatives have been masterful at rhetoric, ah, framing. He cites the phrase "tax relief," which implies that taxes are an affliction and the politicians who favor it are heroes. People on the left, he argues, need to reframe progressive taxation with the idea that "those who benefit most should pay their fair share."

So I was not surprised to discover that in January Lakoff wrote a short essay titled "The Public's Viewpoint: Regulations Are Protections." He begins by citing Trump's assertion that he intends to "cut regulations by 75 percent, maybe more." Then Lakoff asks, "What is a 'regulation'?" He goes on to assert that from the viewpoint of corporations, "'regulations' are limitations on their freedom to do whatever they want no matter who it harms." (Never mind that killing customers is usually not a good business strategy.) On the other hand, Lakoff claims that the public views a regulation as "a protection against harm done by unscrupulous corporations seeking to maximize profit at the cost of harm to the public."

Lakoff's solution? "Imagine the NY Times, or even the USA Today headline: Trump to Eliminate 75% of Public Protections," he writes. "Imagine reporters finding out and reporting all over America exactly what protections would be removed." One of his three key takeaways: "Shift the frame: always say 'protections' instead of 'regulations.' 'Protections' is a more simple and accurate description."
Judging from recent coverage, I don't have to imagine that; I can just open the paper and read it. Interestingly, Lakoff also urges his readers always to take the Public's Viewpoint by asking themselves, "What[...]

You Are Ignorant, But Not Necessarily Dumb


You probably suffer from the "illusion of explanatory depth." Moreover, you often succumb to the "illusion of understanding." So say two cognitive scientists, Philip Fernbach of the University of Colorado and Steven Sloman of Brown, in The Knowledge Illusion: Why We Never Think Alone. Disagree? OK, then write down how a zipper works. Or draw all the parts of a simple bicycle in their proper places. If that's too complicated, tell me: How does a flush toilet operate?

The illusion of explanatory depth was exposed in experiments by the Yale cognitive scientist Frank Keil. Keil asked subjects to rate on a scale of 1 to 7 how confident they were about their understanding of how such mechanisms as zippers, flush toilets, helicopters, quartz watches, and piano keys worked. Then he asked them to write down a detailed explanation. Most could not. Afterwards, Keil reported, "many participants reported genuine surprise and new humility at how much less they knew than they originally thought."

Fernbach and Sloman then report the cognitive scientist Thomas Landauer's estimate that the average adult's brain has the capacity to store about a gigabyte of information. The computer on which I am typing this review has about 1,000 times more memory than that. "Human beings are not warehouses of knowledge," the authors observe. Instead, we maneuver through the complexities that surround us by abstracting the relevant information that enables us to achieve our goals.

The purpose of thinking, Fernbach and Sloman argue, is to choose the most effective action given the current situation. Our minds think causally, not logically. To illustrate that, the authors offer a logical puzzle: If my underwear is blue, then my socks must be green. My socks are green. Therefore my underwear is blue. When asked, many people agree with the conclusion. But what about: If I fall into a sewer, then I need a shower. I took a shower. Therefore, I fell into a sewer.
It's the same logical mistake, but this time our knack for causal thinking prevents most people from making it. The authors also note that we are much better at thinking about how a cause produces an effect than we are at reasoning backward from an effect to find its cause. It is easier for a doctor to predict that an ulcer will cause stomach pain than to infer that stomach pain is the result of an ulcer. We are better at prediction than diagnosis.

The authors also cite Daniel Kahneman, the economics Nobelist who elucidated the difference between intuitive and deliberative thinking. Think of an animal whose name starts with E. For most Americans, elephant comes to mind quickly and intuitively. (For the record, I thought of echidnas. I don't know why.) Now unravel the anagram: vaeertidebli. The answer is "deliberative" and, for most of us, it takes deliberative thinking to figure it out.

The authors argue that we depend upon intuitive thinking to navigate most of our daily lives. We tend to turn to deliberative thinking when we encounter novel situations or engage in cooperative activities with others. Sloman and Fernbach note that more deliberative folks are somewhat less subject to the illusion of explanatory depth, and that they score better on the standard three-item test measuring cognitive reflection. (Less than 20 percent of the U.S. population gets all three answers right.)

If we are all so deeply ignorant, how is the modern world possible? The book's answer is that we live in a hive mind where knowledge is distributed throughout the human community. We are, in the authors' words, "built to collaborate." When we don't know something, we tap into the knowledge and expertise of our fellow human beings. "Ignorance has to do with how much you know, whereas being dumb is relative to other people," the authors point out. Like everyone else, you are ignorant, but you are not therefore necessarily dumb. We don't need to know how a flush toilet or the internet works.
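The underwear puzzle is the classic fallacy of affirming the consequent, and it can be checked mechanically. A minimal sketch (not from the book; the variable names are mine) enumerates every truth assignment and looks for one where both premises hold but the conclusion fails:

```python
from itertools import product

# U = "my underwear is blue", S = "my socks are green".
# Premises: U implies S, and S is true. Claimed conclusion: U.
# A counterexample is any assignment where both premises hold but U is false.
counterexamples = [
    (u, s)
    for u, s in product([False, True], repeat=2)
    if ((not u) or s)  # premise 1: U -> S
    and s              # premise 2: S
    and not u          # ...yet the conclusion U fails
]
print(counterexamples)  # -> [(False, True)]
```

Green socks without blue underwear satisfy both premises, so the conclusion does not follow. In the sewer version, causal intuition spots that counterexample instantly.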
All we need to know is how to use these tools effectively to achieve our goals. Most of our "knowledge" [...]

Peoples Climate Movement March for Jobs, Justice and the Climate


Tomorrow around 100,000 Americans are expected to join the Peoples Climate March, which plans to stream from the Capitol up Pennsylvania Avenue while demanding jobs, justice, and—oh, yes—action on climate change. The plan is to "literally" surround the White House, then stage a 100-second sit-in, symbolizing the first 100 days of Donald Trump's administration. (Perhaps President Trump will hear the protests tomorrow afternoon, but he plans to hold a rally in Pennsylvania that evening.)

It's another example of social justice movements hijacking the problem of climate change and using it as a pretext for attacking our system of market-tested betterment and innovation. Naomi Klein made this agenda explicit in her 2014 book This Changes Everything, which asserted that climate science has given progressives "the most powerful argument against unfettered capitalism" ever. Canonical Marxism predicted that capitalism would collapse under the weight of its class "contradictions," in which the bourgeoisie profit from the proletariat's labor until we reach a social breaking point. In Klein's update, capitalism will collapse because the pollution produced by heedless overconsumption will build to an ecological breaking point. "Only mass social movements can save us now," she declared.

Tomorrow's march is a reprise of a 2014 Peoples Climate March in New York City, in which some 400,000 people participated. As in 2014, progressive economic and social policies are at the top of the marchers' agenda. The demonstrators are demanding emissions cuts deep enough to keep the planet from warming more than 1.5 degrees Celsius over the pre-industrial global average temperature. (By at least one calculation, that would mean global carbon dioxide emissions would have to peak by 2020 and fall to zero by 2050.) But they also want "an equitable and sustainable New Energy and Economic Future."
Among other things, this entails a $15 per hour minimum wage, the right to form unions, and investments targeted to give low-income Americans and people of color access to good jobs. Paul Getsos, national coordinator for the Peoples Climate Movement, also demands that the White House "immediately stop attacks on communities of color and immigrant, Muslim, indigenous and LGBTQIA communities." How this helps solve climate change is not at all clear.

The pre-march line-up confirms the organizers' social justice aspirations. Leading the parade are the "protectors of justice," who include native youth and youth of color, the indigenous women's delegation, and Black Lives Matter activists, among others. Next up are the "creators of sanctuary," who include immigrants, LGBTQI people, women, Latinos, Waterkeepers, and food sovereignty and land rights marchers. Third in line stand the "builders of democracy," representatives from labor, government workers, voting rights, and democracy organizations. The fourth contingent is the "guardians of the future," who speak for kids, parents, elders, youth, students, and peace activists. Fifth come the "defenders of the truth," representing scientists, educators, technologists, and the health community; sixth are the "keepers of faith," consisting of religious groups. The "reshapers of power" are seventh: anti-corporate, anti-nuclear, anti–fossil fuel, and pro–renewable energy activists, plus bicyclists and other transportation advocates. The final place in the lineup is called "many struggles, one home." It's reserved for environmentalists, climate activists, the business community, and everyone else.

The Peoples Climate March will alienate tens of millions of Americans who accept that man-made climate change poses significant risks but do not agree that the only solution is to try to transform the global economy into a post-capitalist utopia.
Rather than helping solve the problem, the march will help ensure that the issue remains politically divisive and intractable. [...]

An Epidemic of Bad Epidemiology


Getting Risk Right: Understanding the Science of Elusive Health Risks, by Geoffrey C. Kabat, Columbia University Press, 248 pages, $35

Eating bacon and ham four times a week could make asthma symptoms worse. Drinking hot coffee and tea may cause cancer of the esophagus. South Africa's minister of health warns that doggy-style sex is a major cause of stroke and cancer in men. And those claims come from the health headlines of just one December week.

The media inundate us daily with studies that seem to show that modern life is increasingly risky. Most of those stories must be false, given that life expectancy for American men and women, respectively, has risen from 71.8 and 78.8 years in 1990 to 76.3 and 81.1 years now. Apparently, we are suffering through an epidemic of bad epidemiology.

When it comes to separating the wheat of good public health research from the chaff of studies that are mediocre or just plain bad, Albert Einstein College of Medicine epidemiologist Geoffrey Kabat is a national treasure. "Most research findings are false or exaggerated, and the more dramatic the result, the less likely it is to be true," he declares in his excellent new book Getting Risk Right. Kabat's earlier book, 2008's Hyping Health Risks (Columbia University Press), thoroughly dismantled the prevalent medical myths that man-made chemicals, electromagnetic fields, radon, and passive smoking were significant causes of such illnesses as cancer and heart disease. His new book shows how scientific research so often goes wrong—and how hard it is for it to go right.

Kabat first reminds readers that finding a correlation between phenomena X and Y does not mean that X causes Y. Nevertheless, many researchers are happy to overinterpret such findings to suggest causation.
"If researchers can slip into this way of interpreting and presenting results of their studies," observes Kabat, "it becomes easier to understand how journalists, regulators, activists of various stripes, self-appointed health gurus, promoters of health-related foods and products, and the public can make the unwarranted leap that the study being reported provides evidence of a causal relationship and therefore is worthy of our interest."

He offers some principles to keep in mind when evaluating studies. First and foremost is the toxicological maxim that the dose makes the poison: the more exposure to a toxin, the greater the harm. Potency matters greatly too. Very sensitive assays often show that two different compounds can bind to the same receptors in the body, but what really matters biologically is how avidly and how strongly one binds compared to the other.

Another principle: Do not confuse hazard, a potential source of harm, with risk, the likelihood that the hazard will cause harm. Consider bacon. The influential International Agency for Research on Cancer declared bacon a hazard for cancer in 2015, but the agency does not make risk assessments. Eating two slices of bacon per day is calculated to increase your lifetime risk of colorectal cancer from 5 percent to 6 percent. Put that way, I suspect most people would choose to continue to enjoy cured pork products.

Kabat also argues that an editorial bias skews the scientific literature toward publishing results suggesting harms. Such findings, he notes, get more attention from other researchers, from regulators, from journalists, and from activists. Ever since Rachel Carson's 1962 book Silent Spring wrongly linked cancer with exposures to trace amounts of pesticides, the American public has been primed to blame external causes rather than personal behaviors for their health problems.
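The bacon example turns on the gap between relative and absolute risk, which is simple arithmetic. A sketch using only the two figures quoted above (5 percent baseline lifetime risk, 6 percent with daily bacon):

```python
baseline = 0.05    # lifetime colorectal cancer risk, per the article
with_bacon = 0.06  # lifetime risk with two slices of bacon per day

absolute_increase = with_bacon - baseline               # 0.01 -> one point
relative_increase = (with_bacon - baseline) / baseline  # 0.20 -> 20 percent

print(f"absolute: +{absolute_increase:.0%}, relative: +{relative_increase:.0%}")
# -> absolute: +1%, relative: +20%
```

A headline reporting a "20 percent higher risk" and one reporting "one extra percentage point of lifetime risk" describe exactly the same data; only the first sounds alarming.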
Unfortunately, as Kabat notes, the existence of an alarmed and sensitized public is all too useful to regulators and other interest groups. He quotes an honest but incautious remark in the air pollution researcher Robert Phalen's 2010 testimony to the California Air Resources Board: "It benefits us personally to have the publi[...]

Scientists’ March on Washington


In the flush of excitement after the post-inaugural Women's March on Washington, someone in a Reddit conversation suggested, "There needs to be a Scientists' March on Washington." Sensing that a march on Washington might sound too aggressively partisan, the organizers have now renamed the event the March for Science. That march will take place tomorrow, on Earth Day, which the coordinators somehow figured would be the perfect nonpartisan date on which to muster tens of thousands of scientists and their comrades on the National Mall.

"We face a possible future where people not only ignore scientific evidence, but seek to eliminate it entirely," warns the march's mission statement. "Staying silent is a luxury that we can no longer afford. We must stand together and support science."

From whom do the marchers hope to defend science? Certainly not the American public: Most Americans are fairly strong supporters of the scientific enterprise. An October 2016 Pew Research Center poll reported, "Three-quarters of Americans (76%) have either a great deal (21%) or a fair amount of confidence (55%) in scientists, generally, to act in the public interest." The General Social Survey finds that public confidence in scientists has been among the most stable of the roughly 13 institutions it has rated since the mid-1970s. (For what it's worth, the GSS reports that only 8 percent of the public say they have a great deal of confidence in the press, but at least that's higher than the 6 percent who say the same about Congress.)

The mission statement also declares, "The application of science to policy is not a partisan issue. Anti-science agendas and policies have been advanced by politicians on both sides of the aisle, and they harm everyone—without exception." I thoroughly endorse that sentiment. But why didn't the scientific community march when the Obama administration blocked over-the-counter access to emergency contraception for women under age 17?
Or dawdled for years over the approval of genetically enhanced salmon? Or tried to kill off the Yucca Mountain nuclear waste storage facility? Or halted the development of direct-to-consumer genetic testing?

One problem is that many of the marchers apparently believe that scientific evidence necessarily implies the adoption of certain policies. This ignores the always salient issue of trade-offs. For example, acknowledging that man-made global warming could become a significant problem does not mean that the only "scientific" policy response must be the immediate deployment of the current versions of solar and wind power.

The mission statement proclaims that the marchers "unite as a diverse, nonpartisan group to call for science that upholds the common good and for political leaders and policy makers to enact evidence based policies in the public interest." Setting aside the fact that the march was conceived in the immediate wake of the decidedly partisan and specifically anti-Trump Women's March on Washington, how credible are these claims to non-partisanship?

As it happens, I received an email on Thursday from the publicist for Shaughnessy Naughton, who is a chemist, a cancer researcher, and the founder of the activist group 314 Action. Naughton's group is one of the march's 170 partner organizations. 314 Action's political action committee is recruiting scientists, engineers, and other technologists to run for political office, and it plans to provide them with the "resources they need to become viable, credible, Democratic candidates." The publicist informed me that Naughton is "available to discuss this weekend's March for Science in Washington, D.C., which will assemble scientists from across the country to rally against the Trump 'war on science.'" The headline earlier this week in the reliably left-wing Guardian makes no bones about the intent of the marchers: "Science strikes back: anti-Trump march set to draw thousands to Was[...]

Welcoming Immigrants Means Higher Wages


"The benefits that immigration brings to society far outweigh their costs," declares an open letter to congressional leaders and President Donald Trump. The letter, published on Wednesday and signed by nearly 1,500 economists—including six Nobel Prize winners—notes that immigrant entrepreneurs start new businesses that hire lots of Americans; that immigrants are far more likely to work in innovative, job-creating fields such as science, technology, and engineering; and that they bring diverse skill sets that keep our workforce flexible, help companies grow, and increase the productivity of American workers.

A new study parsing employment data between 1991 and 2008 confirms that immigrants significantly boost both the productivity and the wages of workers. The paper, published this week in the journal Economic Geography, compares how 160 U.S. metropolitan areas are faring according to statistics compiled by the Census Bureau's Longitudinal Employer-Household Dynamics program. (This dataset has information on more than 30 million workers at 1.2 million businesses, including their sex, age, race, wages, length of employment, education, and country of birth.)

The authors, economic geographers Abigail Cooke of SUNY-Buffalo and Tom Kemeny of the University of Southampton, note that "inclusive institutions" encourage trust, lowering costs and fostering cooperation. To get a handle on how inclusive various American cities are with respect to immigrants, the two researchers devise two indicators. The first measures how widespread social capital is in each city, and the second accounts for pro- and anti-immigrant ordinances adopted by local governments. Social capital consists of the connections of trust between individuals and entities that can be economically valuable.
The authors constructed their indicator for social capital by assessing County Business Patterns data on the number of social, political, advocacy, business, professional, and labor associations per 10,000 residents in each metropolitan area. They also take into account the number of gathering places, such as specialty food shops, restaurants, cafés, bars, hair salons, corner stores, fitness centers, sports clubs, and bowling alleys. For a stark contrast between places with inclusive institutions and those without, the researchers focus their analysis on the cities that scored in the top and bottom third of their social capital indicator. Municipalities with the highest social capital included Appleton, Wisconsin; Des Moines, Iowa; and Trenton, New Jersey. Those with the lowest include McAllen, Texas; Fayetteville, North Carolina; and San Bernardino, California.

Next they develop an inclusiveness indicator based on pro- and anti-immigration ordinances enacted in various metropolitan areas. They note that most of the ordinances specifically focus on undocumented immigrants, but they argue that their adoption indicates residents' attitudes toward immigration more generally. Some cities pass English-only rules or try to punish employers who hire undocumented immigrants; others pass sanctuary laws. In their analysis, they include only the urban areas in which at least 50 percent of the municipalities and counties had adopted either pro- or anti-immigrant ordinances; metropolitan regions with more mixed policies were excluded. Among the cities scoring highest on the pro-immigrant indicator were Salem, Oregon; Austin, Texas; and Fresno, California. Anti-immigrant areas included Charlotte, North Carolina; Green Bay, Wisconsin; and Harrisonburg, Virginia.

On top of all that, the researchers used the Census data to determine what percentage of people in each urban area is native- and foreign-born.
They also follow people's work and wage histories. The results? "What we found was remarkable. In cities that are unwelcoming to immigrants, as divers[...]

Warmer Temperatures: More Climate Satisfaction in U.S.


Generally speaking, Americans would be satisfied if the average temperature where they live were a tad higher. Or at least that's what the sociologist Jonathan Kelley concludes in a recent study published in Social Indicators Research. Another study, however, suggests that folks in countries that are already hot will not be so happy.

Kelley, who is based at the University of Nevada, notes that the Paris climate agreement describes a global warming of two degrees Celsius—3.6 degrees Fahrenheit—above pre-industrial levels as "dangerous." Many Americans, he notes, currently live in regions that are at least that much warmer than other parts of the country. (Average temperatures over the contiguous 48 states range from 15 degrees Fahrenheit in Minnesota winters to 81 degrees during Florida's torrid summers.) So he combines National Oceanic and Atmospheric Administration temperature data with survey data to probe how much a two-degree increase would bother Americans.

The survey in question asked a national sample of more than 2,000 Americans to rate how satisfied they were with their summer and winter weather on a scale of 0 to 100. A 25-year-old woman in Wisconsin, for example, rated winter in the Badger State at 0 points and summer at 90. Across the nation as a whole, Americans gave their summer weather an average rating of 67 and their winter weather 61. Each extra degree Fahrenheit reduced satisfaction with summer by 0.82 points, and each extra degree Fahrenheit increased satisfaction with winter by 1.03 points.

Northerners' feelings about their winters were somewhat negative, with more than 10 percent rating them at 0 points; 30 percent of Southerners scored their winter weather at 100 points. "Such warming will greatly increase Americans' satisfaction with winter weather, especially in the north, and somewhat decrease satisfaction with summer weather in both north and south," reports Kelley. "On balance the nation benefits slightly."
Using NOAA data, Kelley calculates that a 4-degree-Fahrenheit temperature increase would be the equivalent, for a typical American, of moving about 180 miles south. To experience an average of 4 degrees Fahrenheit of warming, a Virginian like me would head for North Carolina. (My wife spent her childhood in North Carolina; it's not so bad.) As it happens, those of us who reside in the Old Dominion rate our summer and winter weather at 61 and 62 points, respectively; those smug North Carolinians correspondingly give theirs 72 and 70 points. Kelley reports that over the year as a whole, residents of warmer states are generally happier with their weather.

Next Kelley compares the weather satisfaction scores of states in comparable temperature bands. For example, the average yearly temperature of states like Minnesota, Maine, North Dakota, and Montana hovers around 44 degrees Fahrenheit; in Michigan, New York, Colorado, and Oregon, it's 48. Parsing the weather preferences in the survey, he finds that southerners' rising dissatisfaction with their climate-change-induced higher summertime temperatures is more than counterbalanced by northerners' increased happiness with their warmer winters. A four-degree increase in both summer and winter temperatures produces an almost two-point increase in year-round happiness with the weather. More surprisingly, even an eight-degree increase yields a two-point increase in weather satisfaction.

Kelley then turns to life-satisfaction surveys to try to figure out what monetary value Americans would put on improved weather. Through a complicated process, he calculates that a one-point increase in weather satisfaction is equivalent to about a $3,000 annual increase in income. "By our (admittedly rough) estimates for 'dangerous' warming's effect over the year as a whole, combining its gains for winter and losses for summer and aggregating over [...]
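Kelley's two seasonal coefficients make the back-of-envelope arithmetic easy to reproduce. A sketch using only the per-degree figures quoted above (his year-round and dollar estimates rest on more than these two numbers, so this illustrates the seasonal trade-off rather than reproducing his full model):

```python
# Per-degree-Fahrenheit changes in 0-100 weather satisfaction,
# as reported in Kelley's study.
SUMMER_PER_DEGREE_F = -0.82  # points lost per extra degree of summer heat
WINTER_PER_DEGREE_F = +1.03  # points gained per extra degree of winter warmth

def seasonal_shifts(warming_f):
    """Summer and winter satisfaction changes for a uniform warming (deg F)."""
    return warming_f * SUMMER_PER_DEGREE_F, warming_f * WINTER_PER_DEGREE_F

# The 3.6 F warming that Paris labels "dangerous" (two degrees Celsius):
summer, winter = seasonal_shifts(3.6)
print(f"summer {summer:+.2f} points, winter {winter:+.2f} points")
```

The winter gain outweighs the summer loss because 1.03 exceeds 0.82, which is the sense in which "on balance the nation benefits slightly."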



"You've got to see Sweat," urged a friend who had just attended the play during its premiere run in early 2016 at Washington, D.C.'s Arena Stage. "It's why Donald Trump is going to be president."

I finally got a chance to follow his advice at the Public Theater in New York in December, after an election in which white working-class votes propelled the billionaire reality TV star into the Oval Office. When the play moves to Broadway's Studio 54 in March, still more theatergoers will be able to check out my friend's bold claim for themselves.

The play is a personal and political drama that searingly portrays how mechanization and globalization upend blue-collar Americans' lives. Written by the Pulitzer-winning playwright Lynn Nottage, Sweat is set in a fading central Pennsylvania manufacturing town in 2000 and 2008. It opens with two young men, Chris and Jason, meeting with their parole officer. They have evidently been convicted of the same crime.

The arc of the play is the story of how they got there. The main characters are three middle-aged women—Jason's mother Tracey, Chris' mother Cynthia, and their friend Jessie—who have proudly worked their whole lives on the line at a local factory. In 2000, they are regulars at the neighborhood bar run by Stan. A former factory worker, Stan is more aware of how the broader economic winds are blowing. He warns the three friends, "You could wake up tomorrow and all your jobs are in Mexico, wherever."

Sure enough, the company announces that it will move its operations south of the border unless the workers take a pay cut. A strike ensues, and the company hires immigrants at lower wages to replace them. The friendships fray spectacularly as each woman tries to survive her personal economic apocalypse. Yes, it will give coastal elites (you know who you are) some insight into Trump's victory.

Is the Cure for Aging Just Around the Corner?


"In this world nothing can be said to be certain, except death and taxes," quipped Benjamin Franklin. For now both remain inevitable, but two exciting new studies suggest that the grim reaper might be put off by novel treatments that can slow and even reverse aging.

Peter de Keizer, a molecular geneticist at Erasmus University, reports in the journal Cell that he and his colleagues have developed a technique that kills off senescent cells. Our bodies have two ways of preventing damaged cells from becoming cancerous: kill them off, or cause them to cease replication and thus become senescent. Senescent cells accumulate as we grow older, secreting inflammatory substances that harm neighboring cells and contribute to many age-related diseases, including atherosclerosis and diabetes. De Keizer and his colleagues have developed a treatment in mice that selectively kills senescent cells while leaving healthy normal cells alone. They discovered that old or damaged cells become senescent rather than die when the FOXO4 protein binds to the tumor suppressor p53. They then designed another protein that blocks FOXO4 from restraining p53, freeing p53 to direct damaged cells to die.

De Keizer's team administered the new protein to both fast-aging and normally aged mice. The treatment worked as they had hoped, jumpstarting the ability of p53 to make senescent cells commit suicide. Eliminating senescent cells restored stamina, fur density, and kidney function in both strains of mice. The researchers report that they are continuing to study the rodents to see whether the treatment extends their lifespans. They plan to try the treatment against brain cancer in human beings, but the ultimate goal is to treat aging itself as a disease. "Maybe when you get to 65 you'll go every five years for your anti-senescence shot in the clinic. You'll go for your rejuvenation shot," de Keizer told the Tech Times.
In the same week, a team of Harvard researchers led by molecular biologist David Sinclair reported in Science on experiments in mice that thwart the DNA damage associated with aging and exposure to radiation. As we age, our cells lose their ability to repair the damage to the DNA that makes up our genes. The repair process is orchestrated by the SIRT1 and PARP1 proteins, both of which consume the ubiquitous coenzyme nicotinamide adenine dinucleotide (NAD) to operate. As we grow older, the amount of NAD in our cells declines, allowing another protein, DBC1, to inhibit the DNA repair activity of both SIRT1 and PARP1.

In their new research, the scientists fed the NAD precursor nicotinamide mononucleotide (NMN) to mice that were equivalent in age to an 80-year-old person. They also gave it to mice whose DNA had been damaged by radiation. The compound boosted NAD back to youthful levels and restored the ability to repair DNA damage in both the old and the irradiated cells. "The cells of the old mice were indistinguishable from the young mice, after just one week of treatment," Sinclair said. The researchers add that dosing astronauts traveling to Mars with NMN could counteract the damage that radiation in deep space would cause them. In an earlier experiment by Sinclair and his associates, the muscles of two-year-old mice fed NMN resembled those of six-month-old mice with respect to insulin resistance, inflammation, muscle wasting, and other important markers.

Sinclair says that his group plans to launch human NMN trials in the next six months. Other groups have already started and completed safety trials of other NAD precursors in human beings. Leonard Guarente, director of the Massachusetts Institute of Technology's Glenn Laboratory for the Science of Aging, reported results in December from a clinical trial involving 120 people who took the NAD precursor nicotinamide riboside (NR). The trial found t[...]

Do Muslims Commit Most U.S. Terrorist Attacks?


"It's gotten to a point where it's not even being reported. In many cases, the very, very dishonest press doesn't want to report it," asserted President Donald Trump a month ago, referring to a purported media reticence to report on terror attacks in Europe. "They have their reasons, and you understand that," he added. The implication, I think, is that the politically correct press is concealing terrorists' backgrounds. To bolster the president's claims, the White House then released a list of 78 terror attacks from around the globe that Trump's minions think were underreported. All of the attackers on the list were Muslim—and all of the attacks had been reported by multiple news outlets.

Some researchers at Georgia State University have an alternate idea: Perhaps the media are overreporting some of the attacks. Political scientist Erin Kearns and her colleagues raise that possibility in a preliminary working paper called "Why Do Some Terrorist Attacks Receive More Media Attention Than Others?" First they ask how many terror attacks took place between 2011 and 2015. (The 2016 data will become available later this summer.) The Global Terrorism Database at the University of Maryland, which catalogs information on more than 150,000 incidents since 1970, defines terrorism as an "intentional act of violence or threat of violence by a non-state actor" that meets at least two of three criteria. First, that it be "aimed at attaining a political, economic, religious, or social goal." Second, that there be "evidence of an intention to coerce, intimidate, or convey some other message to a larger audience (or audiences) other than the immediate victims." And finally, that it fall "outside the precepts of International Humanitarian Law."

The Georgia State researchers report that the database catalogs 110 terrorist attacks in the U.S. over the most recent five-year period it covers. (Globally, there were more than 57,000 terrorist attacks during that span.)
In some cases, the media tended to report several attacks perpetrated by the same people as a single combined story; following their lead, the researchers reduce the number to 89 attacks. They then set out to answer four questions: Would an attack receive more coverage if the perpetrators were Muslim, if they were arrested, if they targeted government employees or facilities, or if the attack resulted in a high number of deaths? From a series of LexisNexis searches, Kearns and her colleagues gathered a dataset of 2,413 relevant news articles. If media attention had been spread evenly, each attack would have garnered an average of 27 news articles.

Interestingly, 24 of the attacks listed in the GTD did not appear at all in the news sources they probed. For example, a cursory Nexis search failed to turn up any news stories about a 2011 arson attack on townhouses under construction in Grand Rapids, Michigan. My own internet search did find several local news reports that cited a threatening letter warning residents to leave the neighborhood: "This attack was not isolated, nor will it be the last. We are not peaceful. We are not willing to negotiate." According to the GTD, no one has yet been apprehended for that attack.

Over those five years, the researchers found, Muslims carried out only 11 of the 89 attacks, yet those attacks received 44 percent of the media coverage. (Meanwhile, 18 attacks actually targeted Muslims in America.) The Boston Marathon bombing alone generated 474 news reports, amounting to 20 percent of the media terrorism coverage during the period analyzed. Overall, the authors report, "The average attack with a Muslim perpetrator is covered in 90.8 articles. Attacks with a Muslim, foreign-born perpetrator are covered in 192.8 articles [...]
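The disproportion the authors describe can be recomputed directly from the counts quoted above. A quick sketch, using only numbers reported in the working paper:

```python
# Recomputing the coverage disproportion from the study's quoted figures.
total_attacks = 89
total_articles = 2_413
muslim_perpetrated = 11
muslim_coverage_share = 0.44  # 44 percent of all coverage

avg_articles = total_articles / total_attacks        # average articles per attack
attack_share = muslim_perpetrated / total_attacks    # share of attacks by Muslims
overrepresentation = muslim_coverage_share / attack_share

print(f"Average articles per attack: {avg_articles:.1f}")        # ~27.1
print(f"Attacks with Muslim perpetrators: {attack_share:.1%}")   # ~12.4%
print(f"Coverage share vs. attack share: {overrepresentation:.1f}x")  # ~3.6x
```

In other words, attacks by Muslim perpetrators drew roughly three and a half times the coverage their share of attacks would predict.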

How Title IX Sexual Assault Injustice Operates


In his Commentaries on the Laws of England, William Blackstone declared, "It is better that ten guilty persons escape, than that one innocent suffer." In a 1785 letter, Benjamin Franklin was even more exacting: "That it is better 100 guilty Persons should escape, than that one innocent Person should suffer, is a Maxim that has been long and generally approv'd, never that I know of controverted."

In 2011, the U.S. Department of Education took a different position. That was the year the department's Office for Civil Rights sent a "dear colleague" letter reinterpreting Title IX of the Education Amendments of 1972, which reads: "No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving Federal financial assistance." The OCR's letter declared that sexual assault is "a form of sex discrimination prohibited by Title IX." (Sexual violence is a great deal more than discrimination, of course, but set that aside for the moment.) Afraid of losing their federal funding, colleges then set about devising grievance procedures to address complaints of sexual harassment and sexual assault on their campuses.

The problem: The OCR decreed that these Title IX tribunals must eschew "the 'clear and convincing' standard"—that is, the rule that they may not punish anyone unless "it is highly probable or reasonably certain that the sexual harassment or violence occurred." Such procedures, the office explained, "are inconsistent with the standard of proof established for violations of the civil rights laws." Instead, the tribunals should embrace the weaker "preponderance of the evidence" standard, under which it is enough that "it is more likely than not that sexual harassment or violence occurred."
Without wading into the weeds of specific cases (I refer readers to the excellent and thorough reporting of my Reason colleague Robby Soave), it is apparent that applying a lower standard of proof makes it easier to punish those guilty of sexual violence. It also means that more innocent people will be falsely found guilty of offenses they did not commit. So how high a risk of false conviction do the innocent face under the OCR's Title IX guidance? John Villasenor, a professor of public policy at the University of California, Los Angeles, set out to answer that question in a study that uses probability theory to model false Title IX convictions under the preponderance of the evidence standard. What he found should take all fair-minded Americans aback.

Villasenor begins by examining how legal scholars assess the stringency of burdens of proof in determining the guilt or innocence of defendants. Surveys of judges, jurors, and college students find that for guilt beyond a reasonable doubt, they converge on a 90 percent probability that the defendant committed the infraction as a fair threshold for conviction. For the preponderance of the evidence standard, the figure is 50 percent. The lower standard of proof doesn't merely make it more likely that someone will be convicted; it also gives prosecutors a greater incentive to risk bringing a case.

Villasenor outlines an example in which 100 people are accused of wrongdoing. He supposes that 84 are guilty and 16 are innocent. Now suppose that the tribunal convicts 76 of the guilty while letting eight guilty individuals go, and that it acquits 12 of the innocent while convicting four. The overall probability of conviction is 80 percent (76 guilty + 4 innocent), and by definition the probability that a defendant is innocent is 16 percent. But since four of the 16 innocent defendants are convicted, there is a 25 [...]
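Villasenor's illustrative numbers are easy to verify. A minimal sketch of the arithmetic in his example (the counts are his; the code is merely a restatement):

```python
# Villasenor's hypothetical tribunal: 100 accused, 84 guilty, 16 innocent.
guilty, innocent = 84, 16
guilty_convicted = 76    # eight guilty individuals go free
innocent_convicted = 4   # twelve innocent defendants are acquitted

# Overall conviction rate across all 100 defendants:
overall_rate = (guilty_convicted + innocent_convicted) / (guilty + innocent)
# Risk of conviction faced by an innocent defendant:
false_conviction_rate = innocent_convicted / innocent

print(f"Overall probability of conviction: {overall_rate:.0%}")              # 80%
print(f"Probability an innocent is convicted: {false_conviction_rate:.0%}")  # 25%
```

The key point is the gap between the two rates: a tribunal that convicts 80 percent of all defendants still leaves each innocent defendant facing a one-in-four chance of a false conviction under these assumptions.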

President Trump: Pharmaceutical Price Controls Are a Bad Idea


At his first post-election press conference, Donald Trump declared that pharmaceutical companies are "getting away with murder" by pricing their drugs too high. "Pharma has a lot of lobbies, a lot of lobbyists, and a lot of power. And there's very little bidding on drugs," Trump said in January. "We're the largest buyer of drugs in the world, and yet we don't bid properly." At a meeting with pharmaceutical company executives later that month, Trump stated, "The U.S. drug companies have produced extraordinary results for our country, but the pricing has been astronomical for our country." He added, "For Medicare, for Medicaid, we have to get the prices way down."

Trump was characteristically vague about just how he would lower pharmaceutical prices, but suppose Medicare were legally mandated to negotiate prices with drug companies. In that case, "negotiate" would amount to imposing price controls, since pharmaceutical manufacturers would largely have to take whatever price the government offered, much as already occurs with the Veterans Affairs Department. Most companies would likely accept the government's prices because they would still make money on their existing drugs; the marginal cost of each additional pill is very low.

What would happen then? A new study in Forum for Health Economics & Policy by a team of researchers led by Jeffrey Sullivan at the consultancy Precision Health Economics finds that price controls would indeed reduce the cost of drugs to Medicare Part D participants. But the unintended consequences for Americans' lives and health would be substantial and bad. The researchers sought to analyze what would happen if Veterans Affairs drug pricing and formulary policies were applied to Medicare Part D—the federal program that subsidizes the costs of prescription drugs for senior citizens.
More than 41 million Americans are enrolled in Part D, and its benefits are estimated to total $94 billion, representing 15.6 percent of net Medicare outlays in 2017. Without going into great detail, the Veterans Affairs Department sets the federal ceiling price it will pay for drugs at 24 percent below the average price a pharmaceutical company receives from wholesalers. In addition, the VA keeps overall spending down by restricting the drugs to which veterans have access. For example, the Avalere consultancy reported in 2015 that the VA's formulary does not offer access to nearly a fifth of the 200 most commonly prescribed Medicare Part D drugs.

So what would happen if Medicare adopted VA-style price controls? Cutting prices would certainly save Medicare money. The researchers cite a 2013 study by Dean Baker, co-director of the Center for Economic and Policy Research, that calculated that drug price controls could save Medicare between $24.8 billion and $58.3 billion annually. On the other hand, less revenue for pharmaceutical companies means less money devoted to research and development. A separate study, published in Managerial and Decision Economics in 2007, estimated that cutting prices by 40 to 50 percent in the U.S. would lead to 30 to 60 percent fewer R&D projects being undertaken. Reduced investment in pharmaceutical R&D, in turn, means fewer new drugs becoming available to patients. A 2009 study in Health Affairs calculated that, as a result of fewer innovative pharmaceuticals being developed, average American life expectancy in 2060 would be around two years lower than it would otherwise have been.

Aiming to calculate the effect of drug price controls on future Medicare savings, Sullivan and his colleagues fed these data and estimates into an econometric model that aims to track cohorts of people over the age of 50 and to [...]

The Social Cost of Carbon Calculation Debate


"The social cost of carbon is the most important number that you've never heard of," according to University of Chicago economist Michael Greenstone, who led the interagency working group that devised the metric during the Obama administration. Since it was first calculated in 2010, the social cost of carbon has been used to justify 80 different federal regulations that purportedly provide Americans with over a trillion dollars' worth of benefits.

"The social cost of carbon is nothing but a political tool lacking scientific integrity and transparency conceived and utilized by an administration pushing a green agenda to the detriment of the American taxpayers," insisted Rep. Darin LaHood (R-Ill.), chair of the Oversight Subcommittee of the House Committee on Science, Space and Technology, as he opened a hearing called "At What Cost? Examining the Social Cost of Carbon" earlier this week. "This metric did not simply materialize out of thin, and dirty, air," countered Rep. Don Beyer (D-Va.), who argued that the social cost of carbon (SCC) was devised by the Obama administration through a process that "was transparent, has been open to public comment, has been validated over the years and, much like our climate, is not static and changes over time in response to updated inputs."

So what are these politicians fighting about? The social cost of carbon is a measure, in dollars, of the long-term damage done by a ton of carbon dioxide emissions in a given year. Most of the carbon dioxide that people add to the atmosphere comes from burning coal, oil, and natural gas. The Obama administration's interagency working group calculated that the SCC was about $36 per ton (in 2007 dollars). The figure was determined by cranking damage estimates through three different integrated assessment models (IAMs) that try to link long-term climate change with econometric projections.
Notionally speaking, imposing a tax equal to the SCC would encourage people to reduce their carbon dioxide emissions while yielding revenues that could be used to offset the estimated damage, e.g., by building higher sea walls or developing heat-resistant crops. Can citizens take that $36-a-ton estimate to the bank? Not really. First, consider that integrated assessment models try to forecast how extra carbon dioxide will affect the climate and economic growth over the course of this century and beyond. One of the critical variables is climate sensitivity, conventionally defined as how much warming can be expected from doubling the amount of carbon dioxide in the atmosphere. The working group also used various discount rates. (One way to think of a discount rate is to ask how much interest you'd require to put off receiving a dollar for 10 years.) Finally, instead of focusing on domestic harms, the working group included global damages in calculating the SCC.

Republicans, who convened the subcommittee hearing, argue that the SCC is bogus, and that therefore many of the regulations aimed at cutting carbon dioxide emissions by restricting the burning of fossil fuels are too. In his 2013 analysis of the IAMs relied upon by the Obama administration's interagency working group, Massachusetts Institute of Technology economist Robert Pindyck concluded that all three models "have crucial flaws that make them close to useless as tools for policy analysis." He pointedly added, "These models can be used to obtain almost any result one desires." In other words: Garbage In, Garbage Out. Having tossed the models aside, Pindyck earnestly sought another method for establishing a plausible SCC. In November, he published a new study in which he asked selected econo[...]
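The choice of discount rate matters enormously to the final number. A minimal sketch of why, assuming nothing beyond standard present-value arithmetic and the three discount rates (2.5, 3, and 5 percent) the working group reported using:

```python
# Present value today of $100 of climate damage occurring 50 years from now,
# under each of the interagency working group's three discount rates.
damage, years = 100.0, 50
for rate in (0.025, 0.03, 0.05):
    present_value = damage / (1 + rate) ** years
    print(f"{rate:.1%} discount rate -> ${present_value:.2f} today")
```

The same future damage is worth roughly three times as much today at 2.5 percent as at 5 percent, which is why the discount-rate choice is one of the most contested inputs to the SCC.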

Virginia Distilleries


Craft distilling is burgeoning in America, with around 800 distilleries making small-batch rums, vodkas, gins, and whiskeys. The industry took off after 1980, when the government lifted the requirement that a federal agent be on-site daily at such booze-making operations.

Nevertheless, the spirits industry remains strangled in red tape, especially in Alcoholic Beverage Control (ABC) monopoly states such as Virginia. In December I visited three Charlottesville-area craft distilleries—Silverback, Vitae Spirits, and Virginia Distillery Co.—to sample their offerings. At each, existing law held me to a measly three ounces of imbibing. Still, that is a big improvement over last year, when customers could sample just two ounces, and over the year before, when in-house sipping was rationed at one and a half ounces.

And six years ago, would-be tipplers could only "nose" but not sip spirits at distilleries in the state. There are no such prescribed limits on quaffing at any of Virginia's 262 wineries and 70 breweries; the hard stuff is held to harder standards. The bureaucratic workaround to the ABC monopoly is to license distillery tasting rooms as ABC stores: distillers must buy their own product and send all in-store sales receipts to the ABC Board, which then remits the wholesale price to the distilleries.

As of August 1, 2016, Virginia distilleries can sell directly to restaurants. The catch is that they can't deliver their product; restaurateurs must physically go to distilleries to make their purchases.

"Progress in changing liquor regulations is made incrementally," says Virginia Distillers Association President Gareth Moore, who is also CEO of Virginia Distillery Co. He noted that the new three-ounce limit enables customers to try several products in half-ounce pours and enjoy a single full cocktail during their visits. Moore next hopes to change the law to allow distillers to sell bottles at local festivals.