Published: Fri, 30 Sep 2016 00:00:00 -0400
Last Build Date: Fri, 30 Sep 2016 13:17:45 -0400
Wed, 28 Sep 2016 12:30:00 -0400

Every so often we hear or read news reports about a municipal government official—a police officer, a DMV clerk, et cetera—using access to databases of citizen information for unauthorized and often very illegal purposes. We've seen them use personal information to stalk ex-lovers, track down potential romantic interests, and even facilitate identity theft.

In this age when both the federal government and municipal law enforcement agencies are deliberately attempting to collect more and more data about us, the Associated Press attempted to investigate how frequently government officials misuse their access to citizen information. The results of its investigation were published today. What it found is unsurprisingly concerning—and very, very incomplete.

The AP requested reports of incidents of database misuse from all 50 states and three dozen large municipal police departments. Over just two years, it determined there were at least 650 cases in which an employee or police officer was fired, suspended, or otherwise disciplined for inappropriately accessing and using information from government databases. The AP acknowledges that, because of unreliable recordkeeping, these numbers are a woeful undercount; the true figures are likely much, much higher. And how many cases of unauthorized access are never caught at all?

Spread out over time, these numbers essentially mean that every single day a government official somewhere is inappropriately looking up information about citizens in state or municipal databases. The story describes many cases where police use databases to stalk people, often connected to romantic entanglements. Some cases revolve around simple curiosity—like looking up information about celebrities. These are bad enough examples on their own. 
Other examples, though, highlight situations where officials and officers were—in what appears to be an organized fashion—using database access to snoop on and even intimidate their critics. In one case in Minnesota, a county commissioner discovered that law enforcement and government officials had repeatedly searched databases for information about her and her family members. The searches came after she criticized county spending and programs of the sheriff's department. In Miami-Dade County in Florida, a highway trooper found herself stalked and threatened by police officers, aided by information about her pulled from the state's driver databases, after she stopped an officer for speeding in 2011. (Reason previously took note of this case, and a local newspaper won a Pulitzer Prize for exposing police officers' habit of dangerous speeding on local highways, even when off duty.) Also of note: The county commissioner's efforts to fight back were unsuccessful because she couldn't prove the searches of her and her family's records were unauthorized.

A good chunk of the story is about the complexity of trying to regulate the circumstances under which government officials access these databases and how to conduct oversight to make sure the information isn't being misused. Sadly, that means there isn't nearly a big enough discussion of what information city governments should be gathering and storing in the first place. Police, just like the federal government, have been increasingly collecting and storing data about citizens even when they're not suspected of any criminal behavior whatsoever. There has not been nearly enough of a connection drawn between this push for more and more data and the capacity it gives government officials to threaten and intimidate citizens. Heaven knows Reason has been raising the alarm. 
Back when Edward Snowden first leaked details about the National Security Agency (NSA) collecting massive amounts of metadata from all Americans' communications, I explained several reasons why people with "nothing to hide" still needed to be concerned about government collection of their personal info. One of the reasons was exactly what we see here: Occasionally ther[...]
Thu, 22 Sep 2016 10:05:00 -0400

Cook County's Tom Dart, the prostitution-obsessed sheriff who launched a national month of police playing sex workers to arrest "johns" and unconstitutionally threatened Visa and Mastercard for doing business with the ad site Backpage, has found a new way to threaten people's privacy, screw over sex workers, and grow the police state. The latest Dart-led initiative involves creating a national database of prostitution customers, using solicitation-arrest data submitted by cops through a phone app.

Demand Abolition—a Massachusetts-based advocacy group that recently gave Boston Police $30,000 to look into new strategies for targeting prostitution customers—reported on Sheriff Dart's new plot in a late-August post crowing that "1,300 sex buyers—a record—were arrested across 18 states in just one month" of Dart's National John Suppression Initiative. Now the sheriff is using data from that sting to start a national database of people arrested for soliciting prostitution. You know, for research purposes.

"We are well on our way to developing a stronger, more nuanced understanding of who buyers are—information that can be used to find new ways to change their behavior," Demand Abolition chirps. This year's sex stings led to an "unprecedented level of buyer data collected, and shared, by this year's arresting officers," the group notes, thanks to a new app that streamlines the logging of prostitution-arrest information. The app was developed at a January "social justice hackathon," in which a hundred or so techies were presided over by a team of anti-prostitution zealots from across the country—including Dart, Boston Mayor Marty Walsh, and Seattle-area prosecutor Val Richey (for more on Richey's work, see my recent series of stories on Seattle prostitution busts). 
The presumably well-intentioned developers and data scientists were told their work would help put an end to human trafficking, but the tools they developed are designed to let police target and track adults engaging in consensual prostitution. The January hackathon, funded by Thomson Reuters' Data Innovation Lab, gave birth to what Demand Abolition is calling an "arrest app," which "allows officers to easily log arrest info into a national database, which Dart's team can then use to identify trends in buyer demographics." During the last John Suppression Initiative, cops logged info from 80 percent of all arrests into the database.

Keeping the personal info of people arrested on prostitution-related charges in one handy national database might help with whatever new Vice-Squad-on-Steroids agenda Dart is designing. But it's obviously worrisome from a privacy perspective. Keeping all that sensitive information in one place would seem to make it a ripe target for hackers, yet nowhere do Demand Abolition or Dart even mention cybersecurity.

It's also important to note that the people being logged in the database have merely been arrested for, not convicted of, any crimes. Yet the arrest app isn't concerned with case outcomes. If police arrest someone and the charges are later dropped or beaten, that person will still be counted in Dart's database as having been picked up in a sex sting.

I reached out to the Cook County Sheriff's Office to get more details about the app and database—what security measures are in place, whether the info collected is subject to public-records requests, etc.—and will update if I hear back.

Update: Cook County Sheriff's Office Press Secretary Sophia Ansari said no individual names or case numbers will be entered into the database. 
"Demographic information entered includes age range, race, marital status and education level–but that information is never connected to an individual or a number that could be connected to an individual," Ansari said in an email. Nor does the database reflect what ultimately happens with cases. It's meant to simply track info on solicitation arrests and not any subsequent outcomes of those cases.[...]
Thu, 15 Sep 2016 12:45:00 -0400

"The Feds Will Soon Be Able to Legally Hack Anyone," warns the headline at Wired in a commentary written partly by Democratic Oregon Sen. Ron Wyden. The piece warns about coming changes to "Rule 41," a reference familiar to everybody in any field remotely connected to cybersecurity and privacy and mostly unknown to anybody else.

Right now the Department of Justice is working to expand its hacking and surveillance authorities under the Federal Rules of Criminal Procedure. It's already legal, under this Rule 41, for the FBI to get authorization from a judge to attempt to install malware to hack into computers that are believed to be connected to crimes. But Rule 41 has limits—judges can only authorize intrusions into computers within their own jurisdictions. The update would lift that limit and would let the feds ask a judge for permission to "hack a million computers or more with a single warrant." Wyden has teamed up with other tech-privacy-minded legislators (like Republican Sen. Rand Paul) to try to stop this amendment to Rule 41. They have until December to block it.

In the middle of this debate over how much hacking authority the government should have, one activist organization is analyzing whether such authority fundamentally endangers human rights, at least in its current use. Access Now, an advocacy and policy group devoted to protecting and advancing the rights of users of digital and technological tools, has released a lengthy report about how government hacking can threaten our liberties. Access Now notes how little public debate there has been on the "scope, impact, or human rights safeguards for government hacking." The massive release of secret government surveillance documents by Edward Snowden did prompt significant discussions about digital snooping in the United States and some European countries, and there have been some very modest reforms in the States. 
Others, though, like the United Kingdom, are actually considering formalizing a system that expands authorized government snooping on private digital data.

Access Now carefully analyzed how governments engage in hacking, separating their behaviors into three categories: message control (censoring and manipulating digital information), deliberate damage (malware that harms systems and data), and surveillance and information-gathering. Access Now determined that government hacking causes significant harms to human rights. Here's a brief sample of how the report describes the impact of hacking that installs malware or otherwise damages a person's systems:

Government hacking that falls under this umbrella is often designed specifically to deprive a person of their property in some way. This implicates due process protections, which require a fair trial overseen by a competent judicial authority, qualified legal representation, and the ability to appeal. It also directly conflicts with the right recognized in most countries for individuals to own private property. When the damage a government seeks to carry out also implicates human life or wellbeing, the threat to human rights is exceptionally grave. Government hacking to do damage also implicates other human rights, such as freedom of expression and association, since these rights are frequently exercised using devices that such hacking could render inoperable.

Access Now ultimately concludes that "there should be a presumptive prohibition on all government hacking," based on international human rights law. To justify hacking, Access Now recommends 10 specific safeguards, such as requiring very specific, tailored laws that describe when hacking is permitted in the narrowest terms possible. That's pretty much the opposite of what the Rule 41 expansion is attempting to accomplish.

Access Now's report coincides with a high-profile government hacking story, but it didn't get much traction in the U.S. 
amid all the election coverage. In August, Apple iPhone and iPad users were sent a security update over a vulnerability that was [...]
Tue, 06 Sep 2016 17:45:00 -0400
What does it mean for the state of sci-fi dystopias when Americans have to rely upon those allegedly nasty, self-interested international megacorporations to fight on their behalf against a government that refuses to respect citizens' rights to data privacy?
So it goes with the frequent attempts by government officials and law enforcement agencies to collect our personal information without us knowing they're doing so. We've had Google fighting against a gag order that forbade it from even telling its customers how many data requests it had gotten from the National Security Agency (NSA), let alone whom those requests targeted. We've had Apple's extremely high-profile fight against the Department of Justice over a push to force the company to weaken or bypass its own encryption in a way that could subject everybody to additional secret surveillance.
And we have Microsoft suing to strike down a law that allows the government to gag companies and not let them inform customers when officials demand and collect data about them.
As has been thoroughly established by this point, Supreme Court precedents dating back to the 1970s have determined that information about ourselves, and information or data that we ourselves have created, don't have full Fourth Amendment protections when they are kept or stored by a third party. Often a full warrant served against the citizen is not needed; a subpoena directed to the company that holds the data is often all that is required. Despite the massive amount of data about citizens held by third parties now (pretty much everything about us) as compared to what was available then, the precedent still holds for now.
Microsoft filed suit against the Department of Justice in April, arguing that gags prohibiting it from telling customers they've had their data taken by government officials are unconstitutional, violating both the Fourth Amendment rights of its customers and the First Amendment rights of Microsoft. But it's not fighting alone. A whole bunch of corporations from across the spectrum have just announced their support. It's not just tech companies and tech privacy activists this time, as we've seen in some cases (like the Apple encryption fight). As Reuters notes, we're talking about a wide-ranging group of companies that includes the Washington Post, Delta Air Lines, pharmaceutical company Eli Lilly, the U.S. Chamber of Commerce, and Fox News. There are even five former federal law enforcement officials supporting Microsoft's position.
Read more about the case here.
Tue, 30 Aug 2016 16:00:00 -0400

When a powerful, unelected federal official says we need to have an "adult conversation" about the limits of federal authority as contrasted with the civil liberties of the people he allegedly works for, hold on to your butts. Whenever somebody declares the need for an "adult conversation," they are suggesting that one hasn't already been happening, and the declaration often accomplishes little beyond raising hackles. It is a deliberate insult against those with opposing views.

In this particular case, the person invoking the term is FBI Director James Comey, and it's pretty much directed at the entire tech sector and the privacy advocates who have been pushing back (for decades) against government attempts to tamper with and weaken encryption. There's something remarkably telling about the man saying we need to have an "adult conversation" on encryption limits primarily because he can't just get whatever he wants. Isn't that the child's argument? The government wants this information. Give us this information!

Of course, as always, it's couched in terms of the alleged threat of the Internet going "dark" and federal investigators worrying that they're unable to track down alleged criminals and terrorists. Comey complained about it at a tech symposium today. The Associated Press reports:

"The conversation we've been trying to have about this has dipped below public consciousness now, and that's fine," Comey said. "Because what we want to do is collect information this year so that next year we can have an adult conversation." The American people, he said, have a reasonable expectation of privacy in houses, cars and electronic devices — but he argued that right is not absolute. "With good reason, the people of the United States — through judges and law enforcement — can invade our public spaces," Comey said, adding that that "bargain" has been at the heart of the country since its inception. 
This is what he thinks is an "adult conversation." While Comey wants to present this as reasonably as possible, recall that when investigators found a password-protected phone in the possession of a terrorist, what the Department of Justice thought was the reasonable, adult response was to use the courts to conscript Apple and actually force it to compromise its own security systems to give the government access to the phone's contents. The government wants things! Give the government those things! This is the "adult conversation" Comey's side is having right now.

No, privacy is not absolute, but just because the government has the authority to pursue information related to crimes doesn't mean it's guaranteed access to that information. I'll dredge up an old example: A suspect may take a box containing evidence of a crime and bury it somewhere out in the desert. The government absolutely has the authority to try to track down that evidence and use any number of tools to do so. But it can't order the desert to cough the box up, nor command desert experts to track it down (though it can certainly hire them).

The "adult conversation" that's actually already happening is trying to get people like Comey to understand that there's no magical system that gives the Department of Justice (or any other government entity) access to encrypted data without leaving the whole system vulnerable. The "adult conversation" is about trying to get authoritarian senators like Dianne Feinstein (D-Calif.) and Richard Burr (R-N.C.) to understand that the legislation they wrote to order tech companies to assist the government in cracking their own security was so bad—so childish, in fact—that it's impossible to imagine any tech- or privacy-minded adult trusting what they'll suggest next. The "adult conversation" is "white hat" hackers showing how easy it is for a mistake to compromise the data of millions of computer users. 
The "adult conversation" is about understanding that oppressive governmen[...]
Tue, 30 Aug 2016 12:00:00 -0400

News that the city of Baltimore has been under surreptitious, mass-scale camera surveillance will have ramifications across the criminal justice world. When it comes to constitutional criminal procedure, privacy, and the Fourth Amendment, it's time to get ready for the concept of "pre-search." Like the PreCrime police unit in the 2002 movie Minority Report, which predicted who was going to commit criminal acts, pre-search uses technology to conduct the better part of a constitutional search before law enforcement knows what it might search for.

Since January, police in Baltimore have been testing an aerial surveillance system developed for military use in Iraq. The system records visible activity across an area as large as thirty square miles for as long as ten hours at a time. Police can use it to work backward from an event, watching the comings and goings of people and cars to develop leads about who was involved. "Google Earth with TiVo capability," says the founder of the company that provides the system to Baltimore.

But the technology collects images of everyone and everything: people in their backyards; anyone going from home to work, to the psychologist's or marriage counselor's office, to meetings with lawyers or advocacy groups, and to public protests. It's a powerful tool for law enforcement—and for privacy invasion.

In high-tech Fourth Amendment cases since 2001, the U.S. Supreme Court has stated a goal of preserving the degree of privacy people enjoyed when the Constitution was framed. Toward that end, the Court has struck down convictions based on scanning a house with a thermal imager and on attaching a GPS device to a suspect's car without a warrant. The Fourth Amendment protects against unreasonable searches and seizures. The straightforward way to administer this law is to determine when there has been a search or seizure, then to decide whether it was reasonable. 
With just a few exceptions, the hallmark of a reasonable search or seizure is getting a warrant ahead of time. Applying the "search" concept to persistent aerial surveillance is hard. But that's where pre-search comes in.

In an ordinary search, you have in mind what you are looking for and you go look for it. If your dog has gone missing in the woods, for example, you take your mental snapshot of the dog and go into the woods, comparing that snapshot to what you see and hear. Pre-search reverses the process. It takes a snapshot of everything in the woods so that any searcher can quickly and easily find whatever they later decide to look for.

The pre-search concept is at play in a number of policies beyond aerial surveillance and Baltimore. Departments of Motor Vehicles (DMVs) across the country are digitally scanning the faces of drivers with the encouragement of the Department of Homeland Security under the REAL ID Act. Some DMVs compare the facial scans of applicants to those of other license-holders on the spot. They are searching the faces of all drivers without any suspicion of fraud. And the facial-scan databases are available for further searching and sharing with other governmental entities whenever the law enforcement need is felt acutely enough.

The National Security Agency's telephone metadata program is an example of pre-seizure. Phone records that telecom companies used to dispose of, having kept them confidential under their privacy policies and federal regulation, are now held so that the government can search them should the need arise.

Exactly how courts will apply the pre-search concept to mass aerial surveillance remains to be seen. The Fourth Amendment doesn't directly protect our movements in public, but it does protect our "persons" and "houses." Mass aerial surveillance captures data about both. The Supreme Court struck down warrantless GPS tracking in public. 
The practice rips away the natural concealment that time gives to most people's public activities. Courts may find that a pre-sea[...]
Tue, 30 Aug 2016 08:00:00 -0400

With a name like the National Security Agency, America's chief intelligence outfit might at least attempt to promote American security online. At the very least, one would hope its activities don't actively undermine U.S. cybersecurity. But—bad news—a recent leak of the agency's digital spy tools by a mysterious group called the Shadow Brokers shows how the agency prioritizes online surveillance over online security.

For years, there have been rumors that the National Security Agency (NSA) was stockpiling a secret cache of powerful computer bugs to exploit for cyber-snooping. Recent revelations by the Shadow Brokers appear to confirm these allegations. On August 13, the group published a number of "cyber weapons" that it claims were used by an NSA-linked hacking outfit known as the Equation Group. The leak was supposed to be a teaser for the Shadow Brokers' upcoming auction of a larger batch of software security vulnerabilities, or exploits. "You see pictures. We give you some Equation Group files free, you see. This is good proof no?" the Shadow Brokers proclaimed. The Shadow Brokers' asking price for the upcoming dump? One million Bitcoin, or about $575.2 million (and no, the FBI are not getting in on the action).

The dumped information appears to be legitimate and dates from around 2013. The exploits are clearly functional, as networking manufacturer Cisco confirmed (and promptly set about correcting). But how do we know the exploits were actually used by the NSA? Journalists at The Intercept compared the Shadow Brokers' data to their trove of Edward Snowden documents, some of which were never released to the public. The leak is consistent with still-secret Snowden files, lending credibility to the Shadow Brokers' claims. Researchers at Kaspersky Labs likewise verified that the exploits themselves "share a strong connection" to previous tools known to have been used by the Equation Group. 
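As a sanity check on those figures, the conversion works out as follows; note that the per-coin rate below is back-calculated from the article's own $575.2 million number, not taken from any exchange data:

```python
# Back-of-the-envelope check on the Shadow Brokers' asking price.
asking_price_btc = 1_000_000   # one million Bitcoin, per the Shadow Brokers
usd_per_btc = 575.20           # implied mid-2016 rate (assumption, back-calculated)

asking_price_usd = asking_price_btc * usd_per_btc
print(f"${asking_price_usd:,.0f}")  # about $575.2 million
```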
Sloppy Spies and Secret Bugs

There are many concerning elements to this story. First, it's incredibly troubling that the NSA left itself or its tools open to a hack. If the NSA is going to spend billions of dollars to build a god-like system of dystopian digital control, it could at least not leave its dark materials lying around for any enterprising hacker to scoop up and sell to the highest bidder. It is still unclear whether hackers directly infiltrated NSA systems or whether they were able to take the exploits from a staging server that NSA agents use. Either way, it's unacceptable.

Then there's the question of who was behind the hack. Was it Russia? Maybe. But the Russian government might not want to advertise the hack in such a public manner, opting instead to keep the exploits for itself to use. Could it have been a new Snowden, exposing the NSA's secrets from the inside? That's also possible, but there's not much specific evidence to confirm it. One computer scientist believes that the group's broken English is a ruse to shift blame to the Russians, which could be true but is insufficient to prove anything. It might as well have been Bitcoin creator Satoshi Nakamoto behind the hack. Attribution is notoriously difficult, and we may never be completely certain of who was behind this dump.

Whoever they are, however, the Shadow Brokers' actions have provided some long-overdue transparency into NSA hacking methods. The leak confirms what many have suspected for decades: The NSA opportunistically hoards and deploys powerful bugs that make everyone less secure online. These bugs were particularly potent because NSA agents were the only people who knew about them—until now, obviously. In the industry, they are known as "zero-day vulnerabilities," or simply "0days"; they get their name because software vendors have had "zero days" to patch the vulnerability before a malicious actor can exploit it. Intelligence agencies such as th[...]
Mon, 29 Aug 2016 17:00:00 -0400

Hillary Clinton may want Edward Snowden to stand trial for leaking information about the federal government's expansive surveillance of American citizens, but apparently that doesn't mean her campaign won't take advantage of his cybersecurity expertise. In a tech piece posted over at Vanity Fair, reporter Nick Bilton notes that the Clinton campaign has jumped on an application called "Signal," a heavily encrypted private communications program. A visit to the site of developer Open Whisper Systems shows an endorsement by both Snowden and Laura Poitras, the filmmaker who made the documentary Citizenfour about Snowden's whistleblowing. And in Vanity Fair's piece, the fact that the app was "Snowden-approved" was used to encourage its adoption to keep communications out of the hands of Russian hackers.

Snowden raised an eyebrow about it on Twitter, noting that Clinton called for him to face trial and possible imprisonment just last year. (I would add here that Clinton has said Snowden should have gone through the proper channels and incorrectly believes he would have had whistleblower protections.)

Techdirt also takes note that Clinton herself is kind of vague about where she stands on cybersecurity policy when it comes to areas like encryption. She's one of those politicians who wants to have it both ways. It seems clear that she doesn't necessarily support mandatory "back doors" that require tech companies to give law enforcement or the government a way to bypass encryption, but she also believes in the unicorn that Silicon Valley eggheads can come up with some magical key that only the "good guys" (the government will obviously determine who those might be) can use. She has previously said she wants some sort of "Manhattan Project" between the tech industry and the government to figure it all out. 
Maybe Clinton's tech policy briefing, which her campaign released in June, will help straighten out where she actually stands on tech privacy and security? Sadly, no. Try to tease an actual policy out of this paragraph on encryption:

Hillary rejects the false choice between privacy interests and keeping Americans safe. She was a proponent of the USA Freedom Act, and she supports Senator Mark Warner and Representative Mike McCaul's idea for a national commission on digital security and encryption. This commission will work with the technology and public safety communities to address the needs of law enforcement, protect the privacy and security of all Americans that use technology, assess how innovation might point to new policy approaches, and advance our larger national security and global competitiveness interests.

The tech initiative paper is chock full of extremely specific proposals (which I've critiqued before as essentially an invitation for the tech industry to lobby the government for handouts). But the paragraph above says little more than that they'll look into the matter.

There are two possibilities to consider. One: Her campaign may recognize that trying to fight encryption is a doomed effort, and the commission is being promoted as the place for the negotiations to go and quietly die. Alternatively, though, there's the terrible precedent of President Barack Obama's administration, which made a big deal out of protecting consumer data privacy. It publicly opposed legislation to promote the sharing of private consumer data with the government to help fight cybercrime. But then it quietly worked with lawmakers to get what it wanted, and we ended up with a law that encourages private companies to hand your consumer data over to the government to fight all sorts of crimes, and then immunizes those companies from financial liability for breaches. 
This was an increase in government surveillance authority, without a doubt. Is there any reason to expect anything different from [...]
Fri, 26 Aug 2016 12:30:00 -0400
The iPhone security breach that prompted the latest Apple software update is not about encryption, but it's still very important for Western government officials who want to meddle with tech security standards in service of their own national security agendas to pay attention.
Apple just released a new security update for iPhone and iPad users because of what recently happened to Ahmed Mansoor, a human rights advocate and promoter of a free press and democracy in the United Arab Emirates. Mansoor was sent a link in a text from an unknown source claiming it would show him information about torture within the UAE's prisons. This was a lie, which he fortunately did not fall for. The link would have actually installed spyware on his phone that would have allowed hackers to snoop on Mansoor and even remotely activate the phone's camera.
Mansoor has been targeted before, and fortunately for him (and all Apple users), he knew who to turn to in order to investigate the malware. Citizen Lab figured out the nature and purpose of the malware, which has been traced back to a secretive Israel-based firm. This makes things a bit, well, as the Associated Press diplomatically describes it, awkward:
The apparent discovery of Israeli-made spyware being used to target a dissident in the United Arab Emirates raises awkward questions for both countries. The use of Israeli technology to police its own citizens is an uncomfortable strategy for an Arab country with no formal diplomatic ties to the Jewish state. And Israeli complicity in a cyberattack on an Arab dissident would seem to run counter to the country's self-description as a bastion of democracy in the Middle East.
The Associated Press sent a journalist to the Israeli company's headquarters, only to find it had recently moved. The AP has not been able to get authorities from either Israel or the UAE to respond. The best it was able to get from the company, NSO Group, was a bland statement that its mission is to provide "authorized governments with technology that helps them combat terror and crime."
That's the kind of statement that should set off warning sirens and alarm bells in the minds of government officials here in the United States and Europe. It's the same motivation lawmakers and investigators invoke when they call for tech companies to provide ways to bypass the encryption or security of their devices and programs.
At the same time that Mansoor was trying to fend off government-sponsored hacking targeting him because of his human rights advocacy, leaders within France and Germany are calling on the European Union to adopt a law that requires app makers to provide government officials the tools to bypass encryption for the purpose of … helping them combat terror and crime.
Fortunately European human rights and data protection experts are loudly pointing out what a terrible plan this is. There are politicians who still believe that somehow tech companies can create some sort of magical key that only the "right" people can use. This is clearly an absurd contention, but even if it were true, the United Arab Emirates, which has a terrible record of imprisoning its critics, apparently counts as the "right" people.
Wed, 24 Aug 2016 00:01:00 -0400Nine years ago, Gawker ran a blog post headlined "Peter Thiel Is Totally Gay, People." The item rankled Thiel, a billionaire who had co-founded Paypal and invested early in Facebook but had not yet gotten around to publicly acknowledging his sexual orientation, although he had told people close to him. This week Thiel finally got his revenge as Gawker ceased operations, driven out of business by an invasion-of-privacy lawsuit he financed. Whether or not you mourn the loss of Gawker, a website known for its snarky blend of gossip and journalism, its death does not bode well for freedom of the press. The lawsuit that Thiel supported involved the professional wrestler Hulk Hogan, a.k.a. Terry Bollea, who claimed to have been mortified by a 2006 video showing him having sex with the wife of his best friend at the time, Tampa shock jock Bubba the Love Sponge Clem. Clem, who arranged the liaison, recorded it without Bollea's knowledge, and in 2012 someone sent the 30-minute video to Gawker, which posted a 100-second excerpt along with a 1,400-word description. Given Bollea's eagerness to discuss his sexual exploits, including the incident shown in the video, on talk shows and in print, his claim of life-impairing trauma seemed rather implausible. Last March a Florida jury nevertheless awarded him $115 million in compensation for the emotional distress and economic harm the video supposedly inflicted, later tacking on $25 million in punitive damages. That absurdly disproportionate award, which was even more than Bollea had sought, was a measure of the jurors' disgust rather than any injury he suffered, and it probably will be reduced on appeal. But it already has driven Gawker Media into bankruptcy, leading to its purchase by Univision, which decided to shut down Gawker while keeping the company's other outlets alive. 
In a recent New York Times op-ed piece defending his involvement in the Hulk Hogan case, Thiel argues that the threat of ruinous litigation is necessary to protect privacy from media outlets that thrive by violating it. "Gawker violated my privacy and cashed in on it," he writes. "A story that violates privacy and serves no public interest should never be published." The public clearly was interested in the Hulk Hogan sex tape, which generated more than 7 million page views. Whether the public should have been interested is a different question, and Thiel thinks his answer should be legally enforceable. Gawker argued that the video, which had been described by other outlets and discussed by Bollea himself, was already a news story, and the first few paragraphs of the accompanying post tried to place it in cultural context, giving the appearance of a purpose other than titillation. The 2007 post about Thiel's sexuality likewise aspired to a serious purpose: questioning the "wrongheaded sense of caution" on this subject among venture capitalists. I'm not sure I buy these rationales, but I am sure they should be judged by readers, not by courts. Empowering jurors to define the public interest and decide which stories serve it is bound to have a chilling effect even on journalism that Thiel would consider legitimate, because people commonly disagree about such matters. "Since sensitive information can sometimes be publicly relevant, exercising judgment is always part of the journalist's profession," Thiel says. "It's not for me to draw the line, but journalists should condemn those who willfully cross it." Notwithstanding his avowed reticence, Thiel is drawing a line by supporting Bollea's lawsuit and offering to help other litigants who believe journalists have violated their privacy. His choice of cases will tell journalists where he thinks the line should be, and they will cross it at their peril. 
People who do not like what journalists say about them already can sue for de[...]
Thu, 18 Aug 2016 15:40:00 -0400Gawker.com is dead! Long live … everything else at Gawker? Univision has announced—after purchasing the Gawker media empire in an auction for $135 million—that it will be shutting down the site from which the company got its name. However, according to reports, its other sites, like Jezebel and Deadspin, will remain in operation. This is all the consequence of the successful lawsuit by Terry "Hulk Hogan" Bollea against the site for publishing a private sex tape. It turned out that the lawsuit was bankrolled by billionaire Peter Thiel, who was furious at Gawker for outing him as gay and for other behavior he considered "bullying." The fight brought to the forefront the tension between the free press and the right to privacy. This conflict is certainly not new, and it's not the first time a media outlet has been determined (by a jury in Florida, in this case) to have gone too far. But the top concern to come out of this case is a fear that the wealthiest among us can seek revenge against media outlets that say things they don't like by bankrolling others in lawsuits that are very difficult to fight. Note that Gawker isn't accused here of printing factually inaccurate information about Thiel. It's information that Thiel didn't want Gawker to publish about him and that he didn't believe was in the "public interest." As Jacob Sullum noted on Tuesday, that's not a calculation on which we all agree: Giving juries the power to determine what counts as a "public interest" or a "good reason," not to mention whether a story advances it, poses a threat even to journalism that Peter Thiel would recognize as legitimate, because people commonly disagree about such matters. 
Even when a news outlet crosses the line of decency by publishing true but sensitive information that causes an innocent person anguish without any benefit aside from entertainment or titillation (as Gawker arguably did in Thiel's case), it does not follow that anyone's rights were violated, which should be a prerequisite for legal action. While press coverage has focused on the impact on the media and on rich elites attacking it, Ken White, over at Popehat, wants to remind us all that this also happened because we have a court system that is very receptive to this kind of behavior: [F]or most of us the scary part of the story is that our legal system is generally receptive to people abusing it to suppress speech. Money helps do that, but it's not necessary to do it. A hand-to-mouth lunatic with a dishonest contingency lawyer can ruin you and suppress your speech nearly as easily as a billionaire. Will you prevail against a malicious and frivolous defamation suit? Perhaps sooner if you're lucky enough to be in a state with a good anti-SLAPP statute. Or perhaps years later. Will you be one of the lucky handful who get pro bono help? Or will you be like almost everyone else, who has to spend tens or hundreds of thousands of dollars to protect your right to speak, or else abandon your right to speak because you can't afford to defend it? The system isn't just broken for affluent publications targeted by billionaires. It's broken for everyone, and almost everyone else's speech is at much greater risk. Don't point to Peter Thiel as an exception. He's just a vivid and outlying expression of the rule. Indeed, attacks like this on media outlets happen all the time but at a much lower profile. It's very easy to harass a newspaper or television station and force it to spend money over the dumbest of things. 
When I was the editor of a small community newspaper (daily circulation: 5,000), we spent more than a year defending ourselves against a baseless, silly lawsuit for publishing the contents of a publicly released police report. In another c[...]
Tue, 16 Aug 2016 10:39:00 -0400Paypal billionaire Peter Thiel, who helped finance the Hulk Hogan sex-tape lawsuit that drove Gawker into bankruptcy, defends that decision in a New York Times op-ed piece, arguing that the threat of ruinous litigation is necessary to protect privacy from media outlets that thrive by violating it. Thiel says he decided to help Terry Bollea, who plays Hulk Hogan, with the lawsuit largely because of his own bitter experience with Gawker, which gratuitously outed him in 2007, when he "had begun coming out to people I knew" but had not publicly acknowledged his sexual orientation. "Gawker violated my privacy and cashed in on it," he says. But Thiel, a self-described libertarian who is nevertheless supporting Donald Trump this year, wants us to know he was defending a principle, and not simply pursuing revenge, when he helped Bollea obtain a $140 million judgment against Gawker. If so, that principle is decidedly dangerous to freedom of speech. "A story that violates privacy and serves no public interest should never be published," Thiel writes. "It is ridiculous to claim that journalism requires indiscriminate access to private people's sex lives....It is wrong to expose people's most intimate moments for no good reason." One can agree with all of these propositions without agreeing that they should be legally enforceable. Giving juries the power to determine what counts as a "public interest" or a "good reason," not to mention whether a story advances it, poses a threat even to journalism that Peter Thiel would recognize as legitimate, because people commonly disagree about such matters. Even when a news outlet crosses the line of decency by publishing true but sensitive information that causes an innocent person anguish without any benefit aside from entertainment or titillation (as Gawker arguably did in Thiel's case), it does not follow that anyone's rights were violated, which should be a prerequisite for legal action. 
When Gawker accurately reported that Thiel is gay, he seems to believe, Gawker violated his right to privacy, which includes the right to prevent publication of legally discovered but inconvenient facts unless there is a "good reason" to publish them. That right is quite different from, say, a movie star's right to prevent a paparazzo from trespassing on her property, a businessman's right to prevent a former employee from divulging information covered by a nondisclosure agreement, or a citizen's right to be secure against unreasonable searches and seizures. Thiel's free-floating right to privacy is completely unmoored from property rights, contract rights, or constitutional law. It can be invoked whenever someone objects to a story that reflects negatively on him or complicates his personal life, even when the story is completely accurate, provided that person has the resources to pay for a lawsuit. And as with defamation suits, he need not win to punish his adversary. Thiel glides over these problems by suggesting that the prospect of financial ruin based on amorphous tort claims can only improve the quality of journalism: A free press is vital for public debate. Since sensitive information can sometimes be publicly relevant, exercising judgment is always part of the journalist's profession. It's not for me to draw the line, but journalists should condemn those who willfully cross it. The press is too important to let its role be undermined by those who would search for clicks at the cost of the profession's reputation. Notwithstanding his protestations, Thiel certainly is trying to draw a line by supporting Bollea's lawsuit and offering to help other litigants who believe their privacy has been violated by the press. "I will support him until his final victory," Thiel writes, noting that Gawker plans to [...]
Tue, 16 Aug 2016 08:30:00 -0400If you can't understand how a cutting-edge new investment platform works, it's probably a bad idea to put serious money (or a good portion of an infant cryptocurrency network) behind it. This is a lesson that backers and enthusiasts of the Ethereum platform and its pet project—a bot-run investment corporation known as The Decentralized Autonomous Organization (DAO)—had to learn the hard way recently. In May, I discussed the development of this new "leaderless" investment corporation, which was purported to be "bound by code"—i.e. run by a bot—and supposed to operate as an automated crowdfunding and profit-sharing venture that obviated the need for human administration. Since its creation on April 30, The DAO raised $150 million in investment on the trendy Ethereum smart-contract platform and plenty of positive press in the weeks leading up to its maiden IPO. There was just one big problem: The code was broken, and The DAO got hacked. Bound by Code The DAO was conceptualized as a kind of decentralized venture-capital fund that could not be controlled by any one person or group. People who wanted to invest in The DAO could purchase "DAO tokens" using Ether (ETH), the native cryptocurrency of the Ethereum platform. With DAO tokens, people could then vote to invest in a number of pre-approved, startup-like projects proposed by entrepreneurs The DAO called "contractors." If a project got enough votes, it would be green-lit and the funds immediately distributed. If the startup began to rake in money, the profits would be dispersed among token holders. If, however, a project started hemorrhaging money, token holders would just have to take that hit. The core innovation of The DAO was that all of these operations were to occur autonomously, facilitated by code rather than fund managers and administrators. 
In technical terms, The DAO was designed as a kind of "smart contract," a digitized system set up in such a way that breaches of contract are expensive or impossible. There would be no Kickstarter administrator or venture capital general partner capable of censoring or overriding decisions. As The DAO developer Stephen Tual told the Wall Street Journal on May 16, the project was "not bound by terms of law and jurisdiction. It's bound by code." At least, this was the theory.

Ack! A Hack!

But a funny thing happened on the way to a post-capitalist crypto-anarchist utopia. Amid the fawning press and general euphoria imbuing The DAO community, a group of security researchers led by Cornell University's Emin Gün Sirer published a May white paper sounding the alarm about many troubling vulnerabilities present in The DAO's code. The researchers noted a number of mechanism-design weaknesses that could promote sub-optimal voting behavior among token holders or even outright theft of funds. The DAO developers did issue some patches to smooth everything over—but it was too little, too late. The DAO proceeded along its original deployment timeline, warts and all. This rush to release proved fatal for the project. On the morning of June 17, startled token holders logged online to learn that The DAO was being rapidly drained of its funds. Just as Sirer and his associates warned, an attacker had exploited a vulnerability in The DAO's "split function," which allowed the hacker to drain Ether multiple times during the course of one transaction. Panic struck the community as ETH trickled into the attacker's clutches without pause. The price of ETH tumbled. Panicked token holders took to the forums to demand answers and quick action from developers of Ethereum and The DAO. In the course of one fateful day, The DAO went from a "new paradigm in economic cooperation" to yet another punchline in the wild [...]
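The class of bug behind the "split function" exploit is known as re-entrancy: the contract pays a withdrawer before updating its own books, so the payout callback can call back into the contract and withdraw the same balance again. The following is a minimal illustrative sketch, not The DAO's actual Solidity code; the `NaiveDAO` and `Attacker` classes, and all the numbers, are invented for illustration:

```python
# Minimal re-entrancy simulation. The DAO was written in Solidity; this
# hypothetical Python stand-in only mimics the fatal call ordering.

class NaiveDAO:
    """A fund that pays out BEFORE zeroing the payee's recorded balance."""
    def __init__(self):
        self.balances = {}   # holder object -> deposited funds
        self.pot = 0         # total funds held by the contract

    def deposit(self, holder, amount):
        self.balances[holder] = self.balances.get(holder, 0) + amount
        self.pot += amount

    def withdraw(self, holder):
        amount = self.balances.get(holder, 0)
        if amount > 0 and self.pot >= amount:
            holder.receive(amount)        # external call happens first...
            self.pot -= amount
            self.balances[holder] = 0     # ...state is updated too late

class Honest:
    def receive(self, amount):
        pass  # just accepts the payout

class Attacker:
    """Re-enters withdraw() from inside the payout callback."""
    def __init__(self, dao, reentries):
        self.dao = dao
        self.reentries = reentries
        self.stolen = 0

    def receive(self, amount):
        self.stolen += amount
        if self.reentries > 0:            # balance not yet zeroed, so...
            self.reentries -= 1
            self.dao.withdraw(self)       # ...withdraw the same funds again

dao = NaiveDAO()
dao.deposit(Honest(), 90)                 # other token holders' funds
mallory = Attacker(dao, reentries=3)
dao.deposit(mallory, 10)
dao.withdraw(mallory)                     # one call triggers four payouts
print(mallory.stolen)                     # 40: a 10-unit deposit drained 40
print(dao.pot)                            # 60 left in the fund
```

The standard remedy, now drilled into Solidity developers as "checks-effects-interactions," is to zero the balance before making the external call, so a re-entrant withdrawal finds nothing left to take.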
Tue, 16 Aug 2016 00:01:00 -0400How telling is it that in the summer of the hacker, when politicians have been repeatedly (and entertainingly!) humiliated by unauthorized access to information, FBI Director James Comey is still wagging his finger at the American public, chastising us for our insistence on protecting privacy by encrypting our gadgets and communications. "In the first 10 months of this fiscal year, our examiners received 5,000 or so devices from state and local law enforcement asking for our help, with a search warrant, to open those devices. About 650 of them we could not open. We did not have the technology. We can't open them. They are a brick to us," he told the American Bar Association annual meeting. Well, yes. That's the whole idea of encrypting things, as well as of locking doors, and hiding valuables—so that strangers can't get to them without our permission. This should be an obvious point to Comey, who recently excoriated former Secretary of State Hillary Clinton for her sloppy handling of classified emails. In a now-famous July press conference, Comey stopped short of recommending the current presidential candidate's prosecution, but took her and her staff to task for being "extremely careless in their handling of very sensitive, highly classified information." He went on to admit "hostile actors gained access to the private commercial e-mail accounts of people with whom Secretary Clinton was in regular contact" and that "it is possible that hostile actors gained access to Secretary Clinton's personal e-mail account." How could that be? In part, because "for the first 3 months of Secretary Clinton's term, access to the [email] server was not encrypted or authenticated with a digital certificate," according to cybersecurity firm Venafi. During that time she traveled to countries including China, Egypt, Indonesia, Israel, Japan and South Korea where her messages were relayed through networks controlled by foreign officials. 
Even after that time, only server access was encrypted—not the email itself. Since then, of course, the Democratic National Committee also found itself on the receiving end of hackers' curiosity—with the results released for public edification. The Democratic presidential nomination convention, expected to be a somnolent coronation, was much enlivened by the release of a treasure trove of communications revealing the allegedly impartial party apparatus colluding with the Clinton campaign (and friendly journalists) to defeat rival Bernie Sanders. It's all very amusing for the public at large, but it's also a fiasco that could have been prevented had the DNC implemented basic security measures. "Encrypt everything! I'm here to preach the gospel of encryption," commented Jamie Winterton, director of strategy for Arizona State University's Global Security Initiative. "While of course I'm not standing up for unethical, immoral or illegal activities being hidden by encryption, the DNC could have avoided it by encrypting their files and communications." But they didn't. And now the FBI—headed by Comey—is investigating the incident. And today roughly 200 members of Congress are screeching over the unauthorized release of their email addresses and phone numbers which, shudder, might allow constituents to actually contact them. This comes as the result of a hack of the Democratic Congressional Campaign Committee which is also being investigated by the FBI. Y'know, a different FBI might recommend that government officials, political parties, businesses, and the world at large should take security more seriously and implement measures such as encryption to prevent data breaches. But Comey's FBI has bigger worries—it's afraid that pawing through our text messag[...]
Thu, 11 Aug 2016 17:00:00 -0400California's gang database operates without enough oversight, doesn't properly make sure the people it places in the database truly qualify as gang members, doesn't purge the names of people who are placed in the database when it's supposed to, and ultimately ends up with a high potential of violating the privacy of California citizens. Those are the results of a state audit released today of California's "CalGang Criminal Intelligence System." The flaw in using data to assist in policing is that the subsequent policing is only as good as the data that is collected. And if the data is bad, the outcomes are not abstract. There are real-world implications. The outcome of this audit was hardly unpredictable. Reporter Ali Winston documented some of the real-world consequences of the faulty gang database data earlier in the year in a story appropriately titled "You may be in California's gang database and not even know it." As Winston noted, being part of the gang database is not just about being treated with more suspicion by police during interactions. California laws allow for harsher penalties for some crimes if defendants are shown to be gang members. In addition, there's almost no transparency with the gang database. People are not informed if they've been added to the database and therefore cannot challenge their inclusion. Police are instructed specifically not to reference the database in reports and testimony. In 2013, the state updated the law to require that police notify the parents of minors included in the database and allow parents to challenge the inclusion. People currently in prison are also informed if they're in the database and can challenge it. But the average citizen would have little way of knowing whether they're listed. The full audit documents all sorts of examples where CalGang is being used incorrectly by police, based on examining just four participating law enforcement agencies. 
Here's just a sample of some of the problems they found: Gangs were sometimes added to the database without the appropriate documentation showing that they met the state's requirements to be considered criminal gangs (engaging in criminal activity, having common symbols or names, and having more than three members). Once a gang had been established in the system, law enforcement officers were able to collect and share information about people they suspected to be members of this "gang." But in the audit, one administrator was unable to provide any documentation supporting the inclusion of a gang in the database. For 13 of the 100 people the auditors selected to examine, they were unable to find supporting evidence that those people should have been included in the database at all. They found that in some cases, people were added to the database under the criterion that they admitted to being a member of a gang, but the auditors could not find any supporting documentation. In one case, they found the exact opposite: The person who was added to the database specifically said in a jail interview that he was not a member of a gang and didn't want to be jailed with them. Within just 100 cases, they found 131 instances where the criteria used to classify somebody as a gang member did not match the source documentation. In nine instances, law enforcement justified adding people to the gang database on the criterion that they had committed a crime associated with gang behavior, but the auditors could find no evidence that the listed people had been arrested for any crime at all. Another 20 had been arrested for crimes that did not meet the requirements for being considered gang-related. Even though information from the CalGang da[...]