Published: Wed, 28 Sep 2016 00:00:00 -0400
Last Build Date: Wed, 28 Sep 2016 13:07:11 -0400
Tue, 27 Sep 2016 15:15:00 -0400

California Gov. Jerry Brown has only a couple of days left to decide whether he's going to sign or veto an important reform bill that would seriously reduce the ability of local law enforcement agencies to abuse the asset forfeiture process to seize and keep millions of dollars from citizens without having to prove they've committed a crime. But in the meantime, we've got this: Brown has signed into law a bill that censors the Internet Movie Database (IMDB) in what appears to be a fairly straightforward violation of the company's First Amendment rights.

The IMDB is a familiar site for anybody looking to track down work by people in the film, television, and video game industries. It publishes actors' backgrounds, work histories, biographies, and birthdates. That last part—birthdates (meaning ages)—is what several actors have a problem with. One sued unsuccessfully to try to force the IMDB to stop publishing her actual date of birth. The argument was that age discrimination in Hollywood and the acting industry is a serious, chronic issue, and publishing actors' ages could harm their chances of finding work. After that attempt failed, the Screen Actors Guild pushed lawmakers in Sacramento to fix the problem for them. They responded by passing AB-1687, which forbids IMDB (or similar sites) from publishing or sharing the birthdates or ages of paying subscribers (industry folks who use the site for employment services). Gov. Brown signed the bill into law on Sunday.

So, is this unconstitutional censorship? Yes, it most certainly is, says nearly every lawyer The Hollywood Reporter consulted. In fact, the only attorney who was absolutely certain the law would survive a constitutional challenge and gave it a full-throated defense was the general counsel for the very union that pushed it through the legislature.
Some of the opponents: "Creating liability for the truthful reporting of lawfully obtained information is deeply problematic under the First Amendment," said UC Irvine dean and constitutional scholar Erwin Chemerinsky. "It is different to say 'men only' or 'women only' or 'whites only' in an ad. That is discrimination that is impermissible. A birthday or an age is a fact, and I don't think there can be liability under the First Amendment for publishing true facts."

Said Bruce Johnson, of Seattle's Davis Wright Tremaine: "Obviously, to the extent that it requires the removal of truthful information from websites reporting on matters of public interest, the statute would appear to be an unconstitutional abridgement of First Amendment rights."

The bill's sponsor, Democratic Assembly Majority Leader Ian Calderon, defended the law as a legitimate business regulation: "Requiring websites to remove all age information from profiles would seem to run afoul of the First Amendment restrictions on the regulation of commercial speech," Calderon said in a statement to THR. "Limiting the bill to only subscribers makes it clear that the bill advances an important government interest — that of reducing age discrimination in a manner that is substantially related to that interest and no more extensive than necessary to achieve that interest."

Yes, but it's attempting to achieve the interest in reducing age discrimination by censoring a third-party site that is not at all responsible for the age discrimination these actors are alleging. This is the sort of mentality that has led to the European Union's terrible "right to be forgotten" policies, which permit people to demand that search sites censor links to information about them that may be completely factually accurate but that they nevertheless don't want people to see. That's a good reason why the rest of us should care.
Whether actors' ages can be censored may not directly affect the rest of us, but the justification for this government intervention can be aimed elsewhere. In addition, one lawyer noted, limiting the censorship to paying subscribers has the absurd side effect of requiring actors to "bribe" the IMDB for their si[...]
Tue, 27 Sep 2016 12:05:00 -0400

To the extent that last night's debate drifted into concrete policy discussion, it was often a brief trip before the two candidates returned to the evergreen discussion of how awful their opponent is. (Fact check: True) That's pretty much what happened during the short section on cybersecurity and state-sponsored hacking.

Thanks to the hacking of the Democratic National Committee and the possibility that Russians were involved, Hillary Clinton was able to get Donald Trump on the defensive by suggesting he "invited [Vladimir] Putin to hack into Americans." That was a pretty audacious exaggeration, given that what Trump actually asked for was for the Russian hackers to provide Clinton's deleted emails from her private server scandal. But when she ended her comments by noting how many national security folks had endorsed her, that was all it took to get Trump off-track talking about all the wonderful people who had endorsed him.

In reality, neither candidate expressed a vision of cybersecurity that suggested either of them was even remotely familiar with the subject. Asked by moderator Lester Holt how to fight cyberattacks, here was part of Clinton's response. Note the familiar hawkish tone:

And one of the things [Vladimir Putin's] done is to let loose cyber attackers to hack into government files, to hack into personal files, hack into the Democratic National Committee. And we recently have learned that, you know, that this is one of their preferred methods of trying to wreak havoc and collect information. We need to make it very clear -- whether it's Russia, China, Iran or anybody else -- the United States has much greater capacity. And we are not going to sit idly by and permit state actors to go after our information, our private-sector information or our public-sector information. And we're going to have to make it clear that we don't want to use the kinds of tools that we have.
We don't want to engage in a different kind of warfare. But we will defend the citizens of this country. [Emphasis added] And the Russians need to understand that. I think they've been treating it as almost a probing, how far would we go, how much would we do.

By casting this debate in terms of a hack that essentially embarrassed the Democratic Party establishment, Clinton's threat comes off as petty as anything Trump says. Trump responded in part by pointing out that what the hack revealed was how terribly the Democratic Party treated Bernie Sanders. That's what Clinton is threatening a cyberwar over?

Trump, though, didn't exactly present much of an alternative. When presented with a policy question, his instinct is to simply say things are bad and need to be better. That's exactly what happened here:

We came in with the Internet, we came up with the Internet, and I think Secretary Clinton and myself would agree very much, when you look at what ISIS is doing with the Internet, they're beating us at our own game. ISIS. So we have to get very, very tough on cyber and cyber warfare. It is -- it is a huge problem. I have a son. He's 10 years old. He has computers. He is so good with these computers, it's unbelievable. The security aspect of cyber is very, very tough. And maybe it's hardly doable. But I will say, we are not doing the job we should be doing. But that's true throughout our whole governmental society. We have so many things that we have to do better, Lester, and certainly cyber is one of them.

It's probably a bit too much to expect presidential candidates to be cybersecurity experts. We shouldn't be expecting them to write guest commentaries about zero-day exploits. But what we should take from this—if at all possible—is a sense of what kind of experts these people are going to turn to in developing cybersecurity policy. For Trump, I have no idea what to expect from this response.
Recall that when the Department of Justice demanded that Apple weaken its security to help agents break into an iPhone, Trump's response was that people should boycott the company[...]
Tue, 27 Sep 2016 09:30:00 -0400

Recent "hate speech" investigations in European countries have been spawned by homily remarks from a Spanish cardinal who opposed "radical feminism," a hyperbolic hashtag tweeted by a U.K. diversity coordinator, a chant for fewer Moroccan immigrants to enter the Netherlands, comments from a reality TV star implying Scottish people have Ebola, a man who put a sign in his home window saying "Islam out of Britain," French activists calling for boycotts of Israeli products, an anti-Semitic tweet sent to a British politician, a Facebook post referring to refugees to Germany as "scum," and various other sorts of so-called "verbal radicalism" on social media.

One might consider any or all of these comments distasteful, but Americans (recent trends on college campuses notwithstanding) tend to appreciate that for a free-speech right to truly exist, we must severely limit the types of speech—true threats, slander, etc.—that don't deserve protection from government censorship and potential prosecution. Not so in European Union (E.U.) member countries, many of which have laws against any language that "insults," "offends," "degrades," "expresses contempt," or "incites hatred" based on certain protected traits like race, religion, or sexual orientation. As Nick Gillespie has put it, "hate speech" is like the secular equivalent of blasphemy.

On Monday, Věra Jourová, the E.U. Commissioner for Justice, Consumers and Gender Equality, gave a speech stressing the importance of such laws and calling for even more intense policing of so-called hate speech. (Just to be clear, by "hate speech" we are not talking about things like threats or criminal harassment.) "My top priority is to ensure that the Framework Decision on Combatting Racism and Xenophobia is correctly translated into the national criminal codes and enforced, so that perpetrators of online hate speech are duly punished," Jourová said.
The commissioner offered a characteristically European rationale for the imposition: only by government censorship of free expression can free expression flourish. "In recent years, we have seen messages of extremism and intolerance spread around the globe like wildfire" and "we need to stand united against this growing phenomenon," said Jourová. "Our commitment is to deliver change so that people do not need to live in fear, and to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected."

"The spread of illegal hate speech online not only distresses the people it targets," she continued, "it also affects those who speak up for freedom, tolerance and non-discrimination in our society. If left unattended, the fear of intimidation can keep opinion makers, journalists and citizens away from social media platforms."

It's easy to see how folks might buy Jourová's idea that allowing intolerant speech online "means a shrinking digital space for freedom of expression." We've all heard about public figures or controversial thinkers who were allegedly hounded off of social media by online criticism, with its harsh, vulgar, and sometimes violent tones. And what is gained by such uncivil opprobrium? By sanctioning not only violent threats and ongoing harassment but also speech that serves no purpose but to troll, denigrate, or spread bigotry, we can usher in a more welcoming environment for all sorts of ideas and speakers online... Or so the thinking goes, anyway.

But the fatal flaw in this conceit is pretending there's some bright line between desirable, pro-social speech and speech that merely incites offense, fear, or feelings of negativity. Of course, many of us object on pure principle to censoring the latter forms of speech. But setting aside classical-liberal notions, there are still plenty of good arguments against EU-style speech policing.
For one, it makes distinctions between legal and illegal speech based not only on what is being said but on who is saying it and to whom it is said. For instance, a few years ago Slate's William[...]
Fri, 23 Sep 2016 12:35:00 -0400

Defense Distributed's blueprints for 3D-printed guns will remain offline and censored for now. Well, actually, they're probably not offline, and you can find them if you know where to look. But a federal appeals court panel has rejected an attempt by the company to stop the State Department's order barring the company itself from hosting its blueprints online.

Reason's Brian Doherty has been extensively covering Cody Wilson and Defense Distributed's fight against the State Department's unusual tactics in enforcing weapon export laws. Technically the company isn't exporting any weapons. It is providing information that allows people anywhere in the world to use 3D printers to create the pieces that make a gun. The State Department's demand that Defense Distributed not host the files is therefore clearly censorship. But is such censorship legal? Several members of Congress had submitted an amicus brief saying that the State Department had drastically overstepped its bounds by interpreting federal law as allowing it to censor online information.

But for now, the 5th Circuit Court of Appeals declined a request for an injunction to stop the State Department's censorship demands. It ruled that the alleged harms the State Department claims will occur if the information is made available outweigh the temporary harms Defense Distributed faces by being censored:

The fact that national security might be permanently harmed while Plaintiffs-Appellants' constitutional rights might be temporarily harmed strongly supports our conclusion that the district court did not abuse its discretion in weighing the balance in favor of national defense and national security.

That is an awful lot of heavy lifting that "might" is doing, and an awful lot of judicial deference.
A footnote explains further that the potential for harm to national security involves not just the existing files but potentially future files that provide for even more weapon production outside the control of the federal government. Note that this ruling does not address whether the court believes Defense Distributed's arguments are legitimate. This is not a ruling on the underlying case. The panel is simply deferring to the Department of State for now while the underlying arguments are fought over.

Not all three judges agreed. Judge Edith Jones dissented, saying the panel had failed to take the issues of prior restraint and censorship seriously and pointing out that the State Department had never previously sought to block information presented on the Internet. She also argues that the court failed to analyze the case with the right level of judicial scrutiny. She warns:

Undoubtedly, the denial of a temporary injunction in this case will encourage the State Department to threaten and harass publishers of similar non-classified information. There is also little certainty that the government will confine its censorship to Internet publication. Yet my colleagues in the majority seem deaf to this imminent threat to protected speech. More precisely, they are willing to overlook it with a rote incantation of national security, an incantation belied by the facts here and nearly forty years of contrary Executive Branch pronouncements.

Jones' dissent is actually much longer than the majority ruling and delves heavily into regulations and precedents. She concludes:

By refusing to address the plaintiffs' likelihood of success on the merits and relying solely on the Government's vague invocation of national security interests, the majority leave in place a preliminary injunction that degrades First Amendment protections and implicitly sanctions the State Department's tenuous and aggressive invasion of citizens' rights.
The majority's nondecision here encourages case-by-case adjudication of prepublication review "requests" by the State Department that will chill the free exchange of ideas about whatever USML-related technical data the government chooses to call "novel," "functional," or [...]
Thu, 22 Sep 2016 16:10:00 -0400

Yelp is refusing to remove reviews posted to the website that were ruled defamatory by California courts. The company has appealed to the California Supreme Court, which this week agreed to take on the case. And it's a good thing, too—as it stands, California courts have essentially created a European-style "right to be forgotten," under which people could force the removal of online content that portrays them in a true but unflattering light.

Legal scholar Eugene Volokh called the case "an interesting and important" one for Internet law and civil procedure. In an August letter asking the California Supreme Court to review the case, Volokh and his co-authors said the appellate court's decision jeopardized "a vast range of online speech." Another signatory to the letter, Santa Clara University law professor Eric Goldman, described the decision—which, because it was one of the rare (less than 10 percent of) appellate rulings marked as published, is citable and binding precedent—as "flat-out wrong" and wrote that he "can't stress enough how terrible [the] opinion is."

The case revolves around personal-injury lawyer Dawn Hassell, managing attorney of the Hassell Law Group. In 2013, Hassell sued former client Ava Bird over negative comments Bird made on Yelp.com. Hassell said Bird's comments were defamatory. Defamatory speech falls under one of a few exceptions to broad First Amendment protection, and Yelp's lawyers say the company usually follows court orders to take down content that has been ruled defamatory. But in this case, the court ruled in Hassell's favor because Bird submitted no documents or statements in her defense and never showed up to the trial. The San Francisco County Superior Court issued a default judgment for Hassell, awarding her $557,918 and ordering Bird to remove the offending content from Yelp. In addition, the court held that "Yelp.com is ordered to remove all reviews posted by AVA BIRD under user names 'Birdzeye B.'
and 'J.D.' attached hereto as Exhibit A and any subsequent comments of these reviewers within 7 business days of the date of the court's order." (J.D. was allegedly an alias of Bird's on Yelp, though this was never definitively established.)

The judgment became final on March 16, 2014. Yelp was served with an injunction to remove Bird's reviews if she didn't do it herself. She didn't. Neither did Yelp. The company's lawyers contended that it couldn't be compelled to remove Bird's content because Yelp hadn't been party to the court proceedings in question. Bird may not have had the resources to fight Hassell's lawsuit, but Yelp certainly does. Yet Yelp was never named in Hassell's suit, and thus had no opportunity to defend itself. In a letter to Hassell, Yelp said the court's judgment and order had been "rife with deficiencies and Yelp sees no reason at this time to remove the reviews at issue. Of course, Yelp has no desire to display defamatory content on its site, but defamation must first be proven."

That May, Yelp filed a motion to set aside and vacate the Bird decision on the "grounds that the legal basis for the decision is not consistent with or supported by the facts or applicable law." Specifically, it asserted that the First Amendment protected Yelp from having to remove the content, as did Section 230 of the federal Communications Decency Act. It also claimed that the company's right to due process had been ignored. A California Superior Court denied the motion. It also found that Yelp was "aiding and abetting the ongoing violation of the injunction" and thus "demonstrated a unity of interest with Bird."

Yelp then appealed to the California Court of Appeal for the First Appellate District. In June, the appellate court denied Yelp's motion to vacate the decision and upheld the bulk of the original ruling.
It did remand the case back to the trial court with the direction to limit Yelp's removal requirements to "specific defamatory statements" and not all future posts[...]
Thu, 22 Sep 2016 10:05:00 -0400

Cook County's Tom Dart, the prostitution-obsessed sheriff who launched a national month of police playing sex workers to arrest "johns" and unconstitutionally threatened Visa and Mastercard for doing business with the ad site Backpage, has found a new way to threaten people's privacy, screw over sex workers, and grow the police state. The latest Dart-led initiative involves creating a national database of prostitution customers, using solicitation-arrest data submitted by cops through a phone app.

Demand Abolition—a Massachusetts-based advocacy group that recently gave Boston Police $30,000 to look into new strategies for targeting prostitution customers—reported on Sheriff Dart's new plot in a late-August post crowing that "1,300 sex buyers—a record—were arrested across 18 states in just one month" of Dart's National John Suppression Initiative. Now the sheriff is using data from that sting to start a national database of people arrested for soliciting prostitution. You know, for research purposes. "We are well on our way to developing a stronger, more nuanced understanding of who buyers are—information that can be used to find new ways to change their behavior," Demand Abolition chirps.

This year's sex stings led to an "unprecedented level of buyer data collected, and shared, by this year's arresting officers," notes Demand Abolition. This is thanks to a new app that streamlines the logging of prostitution-arrest information. The app was developed at a January "social justice hackathon," at which a hundred or so techies worked under a team of anti-prostitution zealots from across the country—including Dart, Boston Mayor Marty Walsh, and Seattle-area prosecutor Val Richey (for more on Richey's work, see my recent series of stories on Seattle prostitution busts).
The presumably well-intentioned developers and data scientists were told their work would help put an end to human trafficking, but the tools they developed are designed for police to target and track adults engaging in consensual prostitution. The January hackathon, funded by Thomson Reuters' Data Innovation Lab, gave birth to what Demand Abolition is calling an "arrest app," which "allows officers to easily log arrest info into a national database, which Dart's team can then use to identify trends in buyer demographics." During the last John Suppression Initiative, cops logged info from 80 percent of all arrests into the database.

Keeping the personal info of people arrested on prostitution-related charges in one handy national database might help with whatever new Vice-Squad-on-Steroids agenda Dart is designing. But it's obviously worrisome from a privacy perspective. Keeping all that sensitive information in one place would seem to make it a ripe target for hackers, yet nowhere do Demand Abolition or Dart even mention cybersecurity. It's also important to note that the people being logged in the database have merely been arrested for, not convicted of, any crimes. Yet the arrest app isn't concerned with case outcomes. If police arrest someone and the charges are later dropped or beaten, that person will still be counted in Dart's database as having been picked up in a sex sting.

I reached out to the Cook County Sheriff's Office to get more details about the app and database—what security measures are in place, whether the info collected is subject to public-records requests, etc.—and will update if I hear back.

Update: Cook County Sheriff's Office Press Secretary Sophia Ansari said no individual names or case numbers will be entered into the database.
"Demographic information entered includes age range, race, marital status and education level—but that information is never connected to an individual or a number that could be connected to an individual," Ansari said in an email. Nor does the database reflect what ultimately happens with cases. It's meant to simply track info on solicitation arrest[...]
Wed, 21 Sep 2016 08:30:00 -0400

This week Anthony Novak, the man who was arrested for creating a parody of the Parma, Ohio, police department's Facebook page, filed a federal lawsuit accusing seven officers of violating his constitutional rights by using the legal system to punish him for making fun of them. Last month Novak was acquitted of using a computer and the internet to "disrupt, interrupt, or impair" police services, a felony punishable by up to 18 months in prison. Now he is trying to get some compensation from the city for the injuries inflicted by that trumped-up charge, arguing that the cops did not have probable cause to arrest him or search his apartment. He also argues that the statute used to prosecute him is "unconstitutionally overbroad because it provides the police unfettered discretion to wrongfully arrest and charge civilians in the State of Ohio with a crime for exercising their First Amendment rights."

There is some tension between those two arguments, because if the law is as vague as Novak claims, police arguably did have probable cause to believe he had violated it. Either way, it should have been obvious to them that their vendetta against Novak was unconstitutional. It also should have been obvious to the municipal judge(s) who obligingly issued the warrants that police sought, the local prosecutors who pursued the case, and the judge who oversaw the trial after rejecting Novak's argument that his prosecution was inconsistent with the First Amendment.

Novak's parody, which he posted on March 1 and deleted on March 3 after the Parma Police Department issued an indignant press release about it, copied the logo from the department's actual Facebook page but was in other respects notably different. It included notices announcing "our official stay inside and catch up with the family day," during which anyone venturing outside between noon and 9 p.m.
would be arrested; advertising a "Pedophile Reform event" where sex offenders who visited all of the "learning stations" could qualify to be removed from the state's sex offender registry; and offering teenagers abortions, to be performed in a van in the parking lot of a local supermarket "using an experimental technique discovered by the Parma Police Department." There was also a warning that anyone caught feeding the homeless would go to jail as part of "an attempt to have the homeless population eventually leave our city due to starvation," along with an ad seeking applicants for jobs with the police department that said "Parma is an equal opportunity employer but is strongly encouraging minorities to not apply."

The police were not amused. "The Parma Police Department would like to warn the public that a fake Parma Police Facebook page has been created," said a Facebook notice posted on March 2. "This matter is currently being investigated by the Parma Police Department and Facebook. This is the Parma Police Department's official Facebook page. The public should disregard any and all information posted on the fake Facebook account. The individual(s) who created this fake account are not employed by the police department in any capacity and were never authorized to post information on behalf of the department."

Despite the implication that people might think cops really were performing abortions in a van or really did plan to promote family togetherness by forcibly confining people to their homes, it is hard to believe anyone mistook the parody for the real thing. "The Facebook page was not reasonably believable as conveying the voice or messages of the City of Parma Police Department," Novak's complaint notes. "Mr. Novak had no intention of deceiving people into believing that the account was actually operated by a representative of the police department, and no reasonable person could conclude such an intent from the content of the page."
Parma police nevertheless launched an investigation that involved at least sev[...]
Tue, 30 Aug 2016 08:00:00 -0400

With a name like the National Security Agency, America's chief intelligence outfit might at least attempt to promote American security online. At the very least, one would hope its activities don't actively undermine U.S. cybersecurity. But—bad news—a recent leak of the agency's digital spy tools by a mysterious group called the Shadow Brokers shows how the agency prioritizes online surveillance over online security.

For years, there have been rumors that the National Security Agency (NSA) was stockpiling a secret cache of powerful computer bugs to exploit for cyber-snooping. Recent revelations by the Shadow Brokers appear to confirm these allegations. On August 13, the group published a number of "cyber weapons" that it claims were used by an NSA-linked hacking outfit known as the Equation Group. The leak was supposed to be a teaser for the Shadow Brokers' upcoming auction of a larger batch of software security vulnerabilities, or exploits. "You see pictures. We give you some Equation Group files free, you see. This is good proof no?" the Shadow Brokers proclaimed. The Shadow Brokers' asking price for the upcoming dump? One million Bitcoin, or about $575.2 million (and no, the FBI is not getting in on the action).

The dumped information appears to be legitimate and dates from around 2013. It's clear that the exploits are functional, as networking manufacturer Cisco confirmed (and promptly set about correcting). But how do we know the exploits were actually used by the NSA? Journalists at The Intercept compared the Shadow Brokers' data to their trove of Edward Snowden documents, some of which were never released to the public. The leak is consistent with the still-secret Snowden files, lending credibility to the Shadow Brokers' claims. Researchers at Kaspersky Labs likewise verified that the exploits themselves "share a strong connection" to previous tools known to have been used by the Equation Group.
Sloppy Spies and Secret Bugs

There are many concerning elements to this story. First, it's incredibly troubling that the NSA left itself or its tools open to a hack. If the NSA is going to spend billions of dollars to build a god-like system of dystopian digital control, it could at least not leave its dark materials lying around for any enterprising hacker to scoop up and sell to the highest bidder. It is still unclear whether hackers directly infiltrated NSA systems or merely took the exploits from a staging server that NSA agents use. Either way, it's unacceptable.

Then there's the question of who was behind the hack. Was it Russia? Maybe. But the Russian government might not want to advertise the hack in such a public manner, opting instead to keep the exploits for itself to use. Could it have been a new Snowden, exposing the NSA's secrets from the inside? That's also possible, but there's not much specific evidence to confirm it. One computer scientist believes that the group's broken English is a ruse to shift blame to the Russians, which could be true but is insufficient to prove anything. It might as well have been Bitcoin creator Satoshi Nakamoto behind the hack. Attribution is notoriously difficult, and we may never be completely certain of who was behind this dump.

Whoever they are, however, the Shadow Brokers' actions have provided some long-overdue transparency about NSA hacking methods. The leak confirms what many have suspected for decades: The NSA opportunistically hoards and deploys powerful bugs that make everyone less secure online. These bugs were particularly potent because NSA agents were the only people who knew about them—until now, obviously. In the industry, they are known as "zero day vulnerabilities," or simply "0days," because software vendors have had "zero days" to patch the vulnerabilities before malicious actors can exploit them. Inte[...]
Tue, 16 Aug 2016 08:30:00 -0400If you can't understand how a cutting-edge new investment platform works, it's probably a bad idea to put serious money (or a good portion of an infant cryptocurrency network) behind it. This is a lesson that backers and enthusiasts of the Ethereum platform and its pet project—a bot-run investment corporation known as The Decentralized Autonomous Organization (DAO)—had to learn the hard way recently. In May, I discussed the development of this new "leaderless" investment corporation, which was purported to be "bound by code"—i.e. run by a bot—and supposed to operate as an automated crowdfunding and profit-sharing venture that obviated the need for human administration. Since its creation on April 30, The DAO had raised $150 million in investment on the trendy Ethereum smart-contract platform and garnered plenty of positive press in the weeks leading up to its maiden IPO. There was just one big problem: The code was broken, and The DAO got hacked. Bound by Code The DAO was conceptualized as a kind of decentralized venture-capital fund that could not be controlled by any one person or group. People who wanted to invest in The DAO could purchase "DAO tokens" using Ether (ETH), the native cryptocurrency of the Ethereum platform. With DAO tokens, people could then vote to invest in a number of pre-approved, startup-like projects proposed by entrepreneurs The DAO called "contractors." If a project got enough votes, it would be green-lit and the funds immediately distributed. If the startup began to rake in money, the profits would be disbursed among token holders. If, however, a project started hemorrhaging money, token holders would just have to take that hit. The core innovation of The DAO was that all of these operations were to occur autonomously, facilitated by code rather than fund managers and administrators. 
In technical terms, The DAO was designed as a kind of "smart contract," a digitized system set up in such a way that breaches of contract are expensive or impossible. There would be no Kickstarter administrator or venture capital general partner that would be capable of censoring or overriding decisions. As The DAO developer Stephen Tual told the Wall Street Journal on May 16, the project was "not bound by terms of law and jurisdiction. It's bound by code." At least, this was the theory. Ack! A Hack! But a funny thing happened on the way to a post-capitalist crypto-anarchist utopia. Amid the fawning press and general euphoria imbuing The DAO community, a group of security researchers led by Cornell University's Emin Gün Sirer published a May white paper sounding the alarm about many troubling vulnerabilities present in The DAO's code. The researchers noted a number of mechanism design weaknesses that could promote sub-optimal voting behavior among token holders or even outright theft of funds. The DAO developers did issue some patches to smooth everything over—but it was too little, too late. The DAO proceeded along its original deployment timeline, warts and all. This rush to release proved fatal for the project. On the morning of June 17, startled token-holders logged online to learn that The DAO was being rapidly drained of its funds. Just as Sirer and his associates warned, an attacker had exploited a vulnerability in The DAO's "split function," which allowed the hacker to drain Ether multiple times during the course of one transaction. Panic struck the community as ETH trickled into the attacker's clutches without pause. The price of ETH tumbled. Panicked token-holders took to the forums to demand answers and quick action from developers of Ethereum and The DAO. In the course of one fateful day, The DAO went from a "new paradigm in economic cooperation" to yet another punchline in the wild world of cryptocurrency. 
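The bug behind the "split function" exploit was a classic reentrancy flaw: The DAO's contract sent Ether to the caller before zeroing out the caller's balance, so a malicious contract could re-enter the withdrawal routine from inside the payment callback and get paid repeatedly from a single credit. Here is a minimal Python sketch of the pattern (the class and method names are my own illustration; the actual contract was written in Solidity):

```python
class VulnerableDAO:
    """Toy model of a reentrancy-vulnerable withdrawal (not the real DAO code)."""

    def __init__(self, balances):
        self.balances = dict(balances)  # token holder -> Ether credited

    def withdraw(self, holder):
        amount = self.balances[holder]
        if amount > 0:
            holder.receive(amount)      # external call happens FIRST...
            self.balances[holder] = 0   # ...balance is zeroed only afterward


class Attacker:
    """Re-enters withdraw() from inside the payment callback."""

    def __init__(self, depth):
        self.depth = depth   # how many extra times to re-enter
        self.stolen = 0
        self.dao = None

    def receive(self, amount):
        self.stolen += amount
        if self.depth > 0:
            self.depth -= 1
            # Balance hasn't been zeroed yet, so we get paid again.
            self.dao.withdraw(self)


attacker = Attacker(depth=2)
dao = VulnerableDAO({attacker: 100})
attacker.dao = dao
dao.withdraw(attacker)
print(attacker.stolen)  # 300: a single credit of 100 paid out three times
```

The standard fix, now enshrined in Solidity's "checks-effects-interactions" guidance, is simply to zero the balance before making the external call, so a re-entrant call finds nothing left to withdraw.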
So Much for "Code Is Law" In the aftermath of the hack, the high-tech sloganeering u[...]
Thu, 11 Aug 2016 11:30:00 -0400A federal appeals court panel ruled this week that the Federal Communications Commission (FCC) overstepped its powers by attempting to subvert and overrule state laws that forbid cities from developing and operating their own broadband networks and competing with private providers. This is a big deal in reining in an FCC that is attempting to intervene more and more in how Americans receive internet access, and it also represents a blow against a potential avenue for pork-barrel federal infrastructure spending in whatever projects the next president hopes to put into place (Hillary Clinton and Donald Trump have each promised hundreds of billions of dollars in more federal spending in these areas). The Sixth Circuit Court of Appeals panel ruled unanimously (3-0) that the FCC did not have the authority to bypass state laws that restrict or forbid municipal development and operation of broadband. To be clear, though, this was a very narrow ruling. The court didn't rule that the FCC could never overrule these types of state laws. Rather, the ruling was that there was no federal authorizing legislation that specifically gave the FCC authority to do so. Congress could pass a law that would allow the federal government to preempt the state laws that preempt city involvement in broadband operations. But it hasn't done so, and the FCC's attempts to bend the rules to make it happen anyway were smacked down. FCC chairman Tom Wheeler complained about the outcome and ignored the legal issues that drove it: Wheeler criticized the decision, which he said "appears to halt the promise of jobs, investment and opportunity that community broadband has provided in Tennessee and North Carolina." He said since 2015, "over 50 communities have taken steps to build their own bridges across the digital divide. 
The efforts of communities wanting better broadband should not be thwarted by the political power of those who, by protecting their monopoly, have failed to deliver acceptable service at an acceptable price." Anybody who thinks that municipal broadband provides "acceptable service at an acceptable price" should read Kevin Glass' Reason piece from 2015 about what disasters and money pits government-operated broadband programs actually are. Far from competing with monopolies, many of them are proposed as revenue generators at the public's expense. Chattanooga, Tennessee's broadband program is typically invoked as a success story (one of the lawsuits in this case involved the city trying to expand its program beyond its territorial boundaries, forbidden by state law). But as Glass noted, the reason the city was able to avoid going into debt building its broadband infrastructure was a huge infusion of federal stimulus spending: What goes unmentioned is the cost. Chattanooga didn't build the network cheaply, nor did they even pay for it themselves. No, it took $111 million in federal tax dollars to get the network off the ground. This was doled out to Chattanooga as a part of President Obama's stimulus program. The success that Chattanooga has had in putting federal tax money to work was actually the impetus for the FCC's unilateral, unprecedented overturn of state-level municipal broadband laws; the Chattanooga EPB wants to bring its service beyond the lines of its current authority. We can see the folly in using Chattanooga as a model for how other municipal broadband projects could work. Not every city can use the federal government to extract money from taxpayers in other cities and states to pay for their government broadband projects. The money has to come from somewhere; the feds can't redistribute hundreds of millions to every city in the country, and the cost for these networks in larger cities would be much, much higher. 
A proposed network in Seattle, for example, has been projected to[...]
Wed, 10 Aug 2016 16:00:00 -0400
(image) Microsoft has helpfully provided a real-world example showing why mandating "back doors" so that authorities can bypass encryption to access digital data is a very bad idea. The fact that this example is the result of a complete mistake, and apparently not staged or hypothetical, should make it all the more persuasive to law enforcement and lawmakers who want to compromise data security in the pursuit of crime or terrorism.
To summarize as best I can: Microsoft devices have a system that, upon booting, will run only operating systems it authenticates. This means users cannot simply install any other operating system on Windows tablets and phones and expect it to work.
As explained by The Register, Microsoft created "golden keys" for internal use only to allow programmers to disable or bypass this authentication process, most likely to test new operating system builds and updates without having to get them approved.
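The dynamic is easy to model. In the toy sketch below (the key names, the HMAC scheme, and the policy flag are all my own illustration, not Microsoft's actual design), the bootloader verifies a signature over the OS image against a vendor secret, but a special debug policy, once installed, tells it to skip the check entirely; leak that policy and the gatekeeping evaporates for every device that accepts it:

```python
import hmac
import hashlib

VENDOR_KEY = b"vendor-secret"  # hypothetical signing key held only by the vendor


def sign(image: bytes) -> bytes:
    """Vendor signs an OS image (toy HMAC scheme for illustration)."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()


def boot(image: bytes, signature: bytes, policy: dict) -> bool:
    """Toy boot check: refuse OS images without a valid vendor signature...
    unless a debug policy disables verification entirely."""
    if policy.get("skip_signature_check"):  # the leaked "golden key"
        return True                         # anything boots, signed or not
    return hmac.compare_digest(sign(image), signature)


official = b"windows-build"
rogue = b"arbitrary-os"

assert boot(official, sign(official), {}) is True   # normal signed boot
assert boot(rogue, b"", {}) is False                # unsigned image rejected
assert boot(rogue, b"", {"skip_signature_check": True}) is True  # policy leaked
```

This is the structural problem with any "golden key": the bypass is a single artifact whose security depends on it never escaping, and Microsoft's slip shows how quickly that assumption can fail.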
But this method of bypassing Microsoft's boot authentication mistakenly got out of the company's hands and into the clutches of a couple of hackers, who wrote a report explaining how it all works here (trigger warning: MIDI music).
The hackers are very blunt about their reasons for revealing how this works: They're trying to get people at the FBI and in Congress to understand that any attempt to require a "golden key" to allow officials to bypass encryption, even with the best of intentions, can and eventually will go terribly, terribly awry. They note:
"About the FBI: are you reading this? If you are, then this is a perfect real world example about why your idea of backdooring cryptosystems with a "secure golden key" is very bad! Smarter people than me have been telling this to you for so long, it seems you have your fingers in your ears. You seriously don't understand still? Microsoft implemented a "secure golden key" system. And the golden keys got released from MS own stupidity. Now, what happens if you tell everyone to make a "secure golden key" system? Hopefully you can add 2+2..."
In the hands of those with sinister intent (either hackers or rogue authorities), a mechanism to bypass encryption can utterly devastate the privacy of citizens and expose them to criminal mischief and secret surveillance.
The larger question is whether lawmakers and government leaders actually care about the risks as long as it gets them the information they want. As I've noted repeatedly at Reason, surveillance-loving senators like Dianne Feinstein (D-Calif.) and Richard Burr (R-N.C.) and Great Britain's new prime minister, Theresa May, seem to have absolutely no interest in whether encryption back doors actually compromise everybody's security as long as they allow the government to access whatever data it demands.
Wed, 10 Aug 2016 08:27:00 -0400"What are you doing tonight?" Mark Moser, a 42-year-old Minnesota man, typed while chatting with a girl on Facebook a couple of years ago. "When can I meet you and fuck that awesome pussy of yours?" That question resulted in a felony conviction for soliciting sex with a child, because it turned out the girl was 14. Under state law, it did not matter that she had told Moser she was 16, which is the age of consent in Minnesota. But according to the Minnesota Court of Appeals, which overturned Moser's conviction on Monday, preventing him from raising that defense violated his constitutional right to due process. Statutory rape laws generally do not require proof that the defendant knew his sexual partner was too young, on the theory that if you meet someone for sex you should be able to figure out how old she is. As UCLA law professor Eugene Volokh notes, that assumption is dubious even when applied to in-person encounters. What if a girl "lied about her age, and perhaps even showed the defendant a credible-seeming fake ID"? But in Moser's case, he never met the girl in person and never even saw any pictures of her (although she kept promising to send him some). So how was he supposed to know she was lying? Since Minnesota's law barred Moser from raising that question, the appeals court concluded, it imposed strict liability, meaning his intent was irrelevant: Typically, criminal offenses require both a volitional act and a criminal intent, referred to as mens rea. A statute imposes strict liability when it dispenses with mens rea by failing to "require the defendant to know the facts that make his conduct illegal." The state argues that the child-solicitation statute does not impose strict liability because it requires an "intent to engage in sexual conduct." But it is the intent to engage in sexual conduct with a child that makes the conduct illegal, not the intent to engage in sexual conduct generally. 
The child-solicitation statute imposes strict liability because it does not require the state to prove that the defendant had knowledge of the child's age (the fact that makes the conduct illegal), and it prohibits the defendant from raising mistake of age as a defense. The court notes that "strict-liability crimes are generally disfavored," although the Supreme Court has recognized some exceptions. One is "public welfare offenses" such as the sale of contaminated food or the possession of unregistered hand grenades. "Public welfare offenses generally involve items or conduct that by their very nature inform the defendant that his conduct may be subject to strict regulation," the court says. "These offenses also usually carry only small penalties." The court adds that "select crimes have also been excluded from the normal mens rea requirement where the circumstances make it reasonable to charge the defendant with knowledge of the facts that make the conduct illegal." Those crimes include statutory rape and production of child pornography, where "a defendant can reasonably be required to ascertain the age of a person the defendant meets in person." The court distinguishes Moser's case from these mens rea exceptions. Unlike most public welfare offenses, it says, Moser's offense does not carry a light penalty: "Under the child-solicitation statute, Moser is labeled a felon, subject to a three-year prison sentence, required to register as a predatory offender for the next ten years, and assigned one criminal-history point for his conviction." And unlike statutory rape or production of child pornography, online solicitation of sex does not involve a physical meeting that would facilitate age verification. "The child-solicitation statute imposes an unreasonable duty on defendants to ascertain the relevant facts," the court says. "W[...]
Mon, 08 Aug 2016 11:00:00 -0400
(image) The snake-bearing yellow banner declaring "Don't Tread on Me" was first used during the Revolutionary War. Designed by Continental Colonel Christopher Gadsden, the flag drew on imagery popularized by Benjamin Franklin in a 1754 editorial cartoon of the American colonies as a rattlesnake. In the intervening centuries, the "Gadsden flag," as it's become known, has endured as a symbol of U.S. patriotism and struggle against government tyranny.
(image) But because the flag was adopted this century by right-leaning political movements such as the Tea Party, liberals and government officials have grown suspicious of its symbolism, suggesting the Gadsden flag may indicate "terrorist or criminal operations" or connote white hostility toward blacks. Last week, legal scholar Eugene Volokh flagged a complaint currently under consideration by the federal Equal Employment Opportunity Commission (EEOC) over whether displaying Gadsden flag imagery in the workplace constitutes race-based harassment.
In response, some liberty-loving and meme-minded folks on Twitter began sharing Gadsden-flag alternatives culled from around the Internet with cuddlier featured creatures and more polite requests to be left alone—Gadsden flags for the safe space generation, if you will. I've rounded up some of the best of them below.
Fri, 29 Jul 2016 14:09:00 -0400
(image) This week many fans of YouTuber Marina Joyce, who posts videos on her channel about make-up and fashion tips, decided she must be in danger. Joyce didn't say so, and even told fans multiple times she was not in danger, but internet users, as internet users are wont to do, began to pull details from her recent videos to concoct a theory about Joyce being abused or kidnapped, possibly even by ISIS for use as a lure in an upcoming terrorist attack. There were so many calls to local police (Joyce lives in England) that they went to her house to check on her and tweeted that she was fine.
Eventually, Joyce's mother revealed she was suffering from schizophrenia. Joyce herself had repeatedly expressed surprise at people's concerns, and in a livestream described it as a "publicity stunt started by my viewers, not me." So the online mob that formed to dish out some collective "compassion" turned on her. The quote was passed around Twitter and the internet with the "by my viewers" part cut out. People who had spent days reading about Joyce and trying to "figure out" what happened were now angry, not with themselves for wasting time and bothering a stranger they might like to watch on YouTube based on their interpretations of her life, but with Joyce.
A Twitter search of the #SaveMarinaJoyce hashtag will find some sympathetic comments, and a lot of folks with no connection to Joyce except possibly subscribing to her YouTube channel or following her on Twitter (both of which they are always free to stop doing) expressing anger that Joyce wasn't clearer about not having been kidnapped or held hostage. How much clearer could she be?
The story of #SaveMarinaJoyce, which started less than a week ago, is illustrative of the same emotional inputs involved in bad policies pushed in the name of helping someone or something, from the drug war to the effort to "rescue" sex workers to "humanitarian" interventions like the one in Libya. They begin under the guise of compassion, and when it turns out a lot of people aren't necessarily interested in the kind of "compassion" that comes with coercion, the boot comes down. The widely reviled 1994 crime bill, which contributed to rising incarceration rates, is still defended under the premise that lawmakers had to do something to "help" with crime. Hillary Clinton eventually started to blame an "obstructionist" Libyan government for the aftermath of the U.S.-backed intervention. The changing mob reaction captured in #SaveMarinaJoyce is as good an example as any of why "I'm from the government and I'm here to help" can be such a dangerous phrase. Government is just a word for the meddling we want to do together.
Fri, 22 Jul 2016 13:30:00 -0400
(image) St. Petersburg, Russia—Did legislation in the United Kingdom and the United States inspire Russian authorities to adopt their country's new domestic spying laws? Maybe.
On July 7, Russian President Vladimir Putin signed the "Yarovaya Law," which came into effect earlier this week. The Yarovaya Law—named after Irina Yarovaya, the ultraconservative legislator who pushed for it—is styled as an "anti-terrorism" measure. Among other things, it mandates that telecommunications and internet service providers store all telephone conversations, text messages, videos, and picture messages for six months. In addition, telecom companies must retain customer metadata—that is, data showing with whom, when, for how long, and from where they communicated—for three years. The law requires "the organizers of information distribution on the Internet" to do the same thing, except they need retain the metadata for only one year.
Russia is evidently implementing domestic surveillance that some lawmakers in the United States and the United Kingdom have long advocated in their own countries.