Published: Thu, 27 Oct 2016 00:00:00 -0400
Tue, 25 Oct 2016 12:42:00 -0400

It is easy to forget that Americans had actually been clued in to the likelihood of domestic telecommunication surveillance by the National Security Agency (NSA) long before Edward Snowden's leaks. Snowden helped us understand the massive scope and many particulars about which we were unaware, and he put the issue before the public in a way we hadn't seen before. But have we all forgotten Room 641A? That was the room in San Francisco where telecommunications company AT&T set up a system for the NSA to access the company's internet traffic for surveillance. It was exposed by the Electronic Frontier Foundation and a former AT&T technician all the way back in 2006, years before Snowden's leaks.

That background is relevant again, as The Daily Beast has a story that puts AT&T's cooperation with federal surveillance in a whole new light. The reason AT&T has been so helpful to the government in providing data about its customers (or anybody who communicates with its customers) is that it has found a way to monetize it.

The Hemisphere Project was first revealed in 2013 by The New York Times. Hemisphere was a database system provided by AT&T to help law enforcement officials access data from any call that passes through its system (which means not just calls by its own customers). This data was then used by the government for drug busts. In order to participate in the program, government agencies were required by contract to keep the existence of Hemisphere a secret and not reference it in any government document. This was not unlike the contracts we have been seeing from law enforcement agencies using "Stingray" devices to track mobile phones. It's where we learned about "parallel construction," in which law enforcement agencies kept the information they had received from surveillance secret but used it to create a second chain of evidence that could be introduced in court cases.
The Times story from 2013 mentioned that the government paid AT&T for access to Hemisphere (and even for AT&T employees to embed with drug-fighting units to assist them), but the Times didn't know the cost. Kenneth Lipp from The Daily Beast provides that information today, along with news that the Hemisphere program was not just for fighting drugs. It's being used to fight all kinds of crime.

And while the government can force telecommunications companies like AT&T to provide data about users with a subpoena, it appears AT&T took it all to the next level, turning it into a program that could be marketed and, more importantly, sold to law enforcement agencies. The company found a way to make money off government demands for data. And if you think the prices were modest, you obviously know nothing about government contracts:

Sheriff and police departments pay from $100,000 to upward of $1 million a year or more for Hemisphere access. Harris County, Texas, home to Houston, made its inaugural payment to AT&T of $77,924 in 2007, according to a contract reviewed by The Daily Beast. Four years later, the county's Hemisphere bill had increased more than tenfold to $940,000.

"Did you see that movie Field of Dreams?" [American Civil Liberties Union tech policy analyst Christopher] Soghoian asked. "It's like that line, 'if you build it, they will come.' Once a company creates a huge surveillance apparatus like this and provides it to law enforcement, they then have to provide it whenever the government asks. They've developed this massive program and of course they're going to sell it to as many people as possible."

This reporting hits AT&T at a time when it's trying to plan a merger with Time Warner.
There's a thought, one supposes, that this either might or should hurt AT&T's chances there. But given that this program operates in full cooperation with the very federal government that has the authority to approve or deny the merger, it seems unlikely that the question of whether this particular behavior is "good for the consumer" will influence the Department of Justice's decision on whether to intervene. On the one hand, this certainly makes it appear as [...]
Tue, 18 Oct 2016 16:15:00 -0400

Fun fact about the "Touch ID" fingerprint-lock system on iPhones and iPads: If the owner hasn't unlocked his or her device in the past 48 hours, it reverts to requiring a passcode. This means that if a phone gets, for example, seized by authorities of some sort and locked away, there's a window during which they can physically make its owner unlock it before they have to get a numerical passcode.

That may explain why, in a federal warrant uncovered by Forbes' Thomas Fox-Brewster, the Department of Justice asked a judge for permission to force people to unlock any Touch ID-locked phones at the scene of the search itself. This was a search of a home in Lancaster, California, last May, and while Fox-Brewster wasn't able to get his hands on the warrant itself, he was able to track down a very particular and concerning request. The Department of Justice wanted:

"authorization to depress the fingerprints and thumbprints of every person who is located at the SUBJECT PREMISES during the execution of the search and who is reasonably believed by law enforcement to be the user of a fingerprint sensor-enabled device that is located at the SUBJECT PREMISES and falls within the scope of the warrant."

To simplify, the Department of Justice wanted to force anybody on scene with a fingerprint-locked phone or tablet to open it then and there so agents could review the contents. There are two issues here: One, can authorities force somebody to provide a thumbprint to unlock a phone? Two, even if they can, can the authorities demand access to every device at a search scene without any proof it's connected to a crime? On the first question, judges have so far been inclined to allow authorities to compel a thumbprint, and they have not considered this a violation of Fifth Amendment protections against self-incrimination. At least, that's how things stand so far.
As for the second question, based on the vagueness of the memo, legal scholars were not impressed:

"They want the ability to get a warrant on the assumption that they will learn more after they have a warrant," said Marina Medvin of Medvin Law. "Essentially, they are seeking to have the ability to convince people to comply by providing their fingerprints to law enforcement under the color of law – because of the fact that they already have a warrant. They want to leverage this warrant to induce compliance by people they decide are suspects later on. This would be an unbelievably audacious abuse of power if it were permitted."

Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation (EFF), added: "It's not enough for a government to just say we have a warrant to search this house and therefore this person should unlock their phone. The government needs to say specifically what information they expect to find on the phone, how that relates to criminal activity and I would argue they need to set up a way to access only the information that is relevant to the investigation."

But we don't know exactly what was in the warrant, so we don't know how specific the request was. Forbes tracked down the recipients of the warrant and determined that it was indeed served, but they wouldn't say much beyond confirming that nobody there had been accused of involvement in a crime. Assuming they're telling the truth, the Justice Department's behavior is even more of a concern, because it used the vaguest possible justifications to try to access private data on devices that may have had absolutely no connection with any sort of crime. And we don't know how many times the Department of Justice (or another law enforcement agency) has attempted this method, or even whether it succeeded. But it does make clear that the federal government will quietly do whatever it can to get around private data security unless judges tell it otherwise.
Read the memo over at Forbes here.[...]
Thu, 13 Oct 2016 12:55:00 -0400

It's incredibly obvious that neither Donald Trump nor Hillary Clinton is particularly savvy or even remotely articulate about cybersecurity, encryption, and other tech policy issues. It's probably too much to expect people of their ages and backgrounds to stay on top of such an ever-evolving, complicated web of concerns, so what matters here is ultimately whom they choose to help guide federal policies and what sort of principles undergird them (if any).

Amid the dump of hacked emails from Clinton campaign chairman John Podesta are bits and pieces of discussion that help indicate her mindset on citizen privacy and the use of encryption to protect data. As Apple was fighting with the FBI earlier in the year over whether the government could force a private tech company to develop tools to defeat its own encryption, California Democratic Rep. Zoe Lofgren, a strong supporter of tech privacy and Fourth Amendment safeguards against unwarranted surveillance, communicated with the campaign. She was hoping that Clinton would take a stand opposing the FBI's attempts to draft Apple's cooperation via court order, and she wanted to speak with Clinton if the candidate was thinking of taking the FBI's side. Lofgren supplied a copy of her statement rejecting the FBI's authority and the court's ruling against Apple, saying, "It is astonishing that a court would consider it lawful to order that a private American company be commandeered for the creation of a new operating system in response."

Podesta's response was that Clinton and the campaign did not seem to want to get involved: "I think we are inclined to stay out of this and push it back to Companies and USG to dialogue and resolve. Won't embrace FBI."

When a top politician appears to take an uninvolved stance in a conflict between the executive branch and private citizens or companies, don't mistake it for neutrality. It's deference to authority.
For a candidate running to be in charge of the executive branch, "staying out of it" is really approval for the Department of Justice to push the issue and see what happens. Indeed, in a prior email exchange last November, Podesta openly acknowledged Clinton's deference to authority here. In a very interesting email, campaign strategist Luke Albee suggested that Clinton might learn from conservative Tea Party types who were concerned about mass surveillance and "big brother" government, and potentially use those concerns against Trump. Albee wrote:

Trump (and others?) have called for registering Muslims. He has called for a federal domestic police force that will be focussed [sic] on arresting and deporting 11 million people. Other candidates are talking about separating immigrants by religion. All of this is about building up and feeding the BIG BROTHER beast. The Tea Party was born bc of the perception of government encroachment in peoples [sic] lives — and it really has always been the Rand Paul mantra: government wants to control your health care decisions. Government wants [to] register all your guns, which will ultimately lead to gun confiscation. Government wants the ability to murder its own citizens with drones (remember the Rand Paul filibuster on that one?). The Snowden stuff confirmed what many felt. . .the government was collecting vast troves of information on everyone. At a certain point, I think HRC might bring together all the different strands (mostly Trumps) of expanding federal Big Brother government — and talk about how its [sic] possible to be safe without creating some kind of large and cumbersome and intrusive police state.

What a remarkable email within the Clinton campaign. Podesta's response:

Interesting. Her instincts are to buy some of the law enforcement arguments on crypto and Snowden type issues. So may be tough, but worth looking for an opening.

That's pretty telling, too.
Ultimately the path Clinton has chosen to tread is to—as much as possible—not tread a path at all. In February, she was asked to weigh in on the Apple vs. [...]
Wed, 12 Oct 2016 15:47:00 -0400

By now everybody must be hip to the fact that social media companies make a good chunk of their money not off us users tweeting about the latest outrage, Instagramming our lunches, or Facebooking political memes full of completely inaccurate or made-up information. They make money by taking what we voluntarily reveal about ourselves, packaging it up (or allowing software developers to package it up), and selling it to customers.

It has also become increasingly obvious that governments—local, national, and international—understand social media as an organizational and tracking tool and have adjusted their information-gathering techniques accordingly. This is not inherently a negative—it helps police and emergency responders map out how to react to a crisis, for example. But that's also the problem. Governments have a noted tendency to see anything from their own citizenry that disrupts their precious order as a "crisis," including groups of people gathering in public to loudly express the opinion that their government sucks.

We already know full well that local police and the FBI have been using surveillance tools to snoop on their own citizens, particularly during protests. Now the American Civil Liberties Union (ACLU) has discovered, through records requests, that American law enforcement agencies are using social media datamining tools to keep tabs on protesters and activists. Specifically, law enforcement agencies have been using tools from a company called Geofeedia to track location-based social media trends in real time. Geofeedia was doing so via its access to Twitter, Facebook, and Instagram user data, and according to the ACLU, it was marketing itself to law enforcement specifically for easier surveillance. Based on what the ACLU was able to gather, it doesn't appear Geofeedia had access to private information.
Rather, it had access to databases of what people publicly chose to share on social media, and it was able to use the same tools marketers use to promote products to you based on what you talk about on social media.

The ACLU here is taking an approach that I personally find a little unusual and a bit concerning. It is pressuring the social media outlets to be accountable for how developers and customers use the data, and it has put out a list of recommendations for tech companies:

Beyond the agreements with Geofeedia, we are concerned about a lack of robust or properly enforced anti-surveillance policies. Neither Facebook nor Instagram has a public policy specifically prohibiting developers from exploiting user data for surveillance purposes. Twitter does have a "longstanding rule" prohibiting the sale of user data for surveillance as well as a Developer Policy that bans the use of Twitter data "to investigate, track or surveil Twitter users." Publicly available policies like these need to exist and be robustly enforced. Here is what we're asking of the social networks:

No Data Access for Developers of Surveillance Tools: Social media companies should not provide data access to developers who have law enforcement clients and allow their product to be used for surveillance, including the monitoring of information about the political, religious, social views, racial background, locations, associations or activities of any individual or group of individuals.

Clear, Public & Transparent Policies: Social media companies should adopt clear, public, and transparent policies to prohibit developers from exploiting user data for surveillance purposes. The companies should publicly explain these policies, how they will be enforced, and the consequences of such violations. These policies should also appear prominently in specific materials and agreements with developers.
Oversight of Developers: Social media companies should institute both human and technical auditing mechanisms designed to effectively identify potential violations of this policy, both by the developers and end users, and take swift action for violations. Th[...]
Tue, 11 Oct 2016 08:00:00 -0400

It's been a rough month for Yahoo. Within a few weeks, the struggling tech company was accused of undermining its customers' security and privacy, as news of a massive 2014 hack of user data was followed up this fall with allegations of involvement in an unprecedented government surveillance program. The question now is whether more tech companies are secretly complying with federal orders to spy on us.

For Yahoo, the woes started in late September, when chief information security officer (CISO) Bob Lord delivered some harsh news on the firm's official Tumblr account: Yahoo had been hacked. Lord confessed that the account information of some half a billion customers had been extracted and rested in the hands of unknown parties. Fortunately, no financial information appears to have been leaked. Still, the names, email addresses, birthdays, telephone numbers, security questions, and passwords of 500 million users had been successfully lifted in the 2014 incident.

Then, in early October, Reuters reported that Yahoo had secretly allowed a massive government surveillance program to scan all incoming emails to Yahoo accounts. The custom software program was reportedly built by Yahoo at the behest of the National Security Agency (NSA) and the FBI, at the direction of a Foreign Intelligence Surveillance Court judge. According to Reuters' unidentified sources ("three former employees and a fourth person apprised of the events"), the decision of Yahoo Chief Executive Officer (CEO) Marissa Mayer to follow the directive angered some senior executives at Yahoo and led to the departure of then-CISO Alex Stamos in June 2015. The New York Times reports a history of skirmishes between Stamos and Yahoo executives over how much to invest in security. Stamos, known in the industry as something of a privacy and security hardliner, often butted heads with Mayer, the Times said.
Mayer feared that introducing standard security measures, like an automatic reset of all user passwords, would anger Yahoo users and drive them away to other services. Yet few things can drive users away quite like a record-setting security breach...

After the hack was revealed, Yahoo encouraged affected users to change their passwords and security questions immediately. But this was almost certainly too little, too late. Many people re-use the exact same password and security questions for many, if not all, of their online accounts. A criminal who had the hacked data could have used these "master" passwords and security-question answers to gain access to all sorts of users' other accounts. Even if this hasn't happened yet, many Yahoo users won't change their passwords for other websites, and a good number won't even change their Yahoo passwords.

The company was quick to blame the attack on "state-backed actors." But as some skeptical information-security experts have pointed out, this excuse is often deployed to downplay suggestions of company negligence. In the words of security writer Bruce Schneier, "'state-sponsored actor' is often code for 'please don't blame us for our shoddy security because it was a really sophisticated attacker and we can't be expected to defend ourselves against that.'"

Unfortunately for Yahoo, the hacking news broke right in the middle of a $4.83 billion acquisition deal with Verizon. The purchase was expected to infuse new direction and capital into the legacy tech company. Now it looks like Verizon may be hoping for a $1 billion discount if it does go ahead with the deal. But the hacking of Yahoo user account data is small compared to recent revelations about the company's cooperation with government surveillance.
It's unclear what exactly the NSA and FBI were looking for, but sources told The New York Times that some Yahoo tools for scanning emails for spam and child pornography had been modified to scan for email signatures linked to a state-sponsored terrorist group. Others took issue with this characterization, [...]
Tue, 04 Oct 2016 15:35:00 -0400
(image) When Edward Snowden revealed the existence of several mass surveillance systems by which the National Security Agency (NSA) collected metadata about the communications of all Americans, President Barack Obama and supporters of the NSA from both parties were quick to tell Americans, "Nobody from the government is reading your emails."
It turns out that's because they were dragooning tech companies into doing it for them. Today Reuters, based on information from a couple of former Yahoo employees, is reporting that the tech company built a custom software program to search the content of emails for a particular string of characters or words on behalf of the NSA and FBI. Reuters notes that this is the first known case of a third party scanning all incoming emails in real time on behalf of the government. This was not a situation where the company was searching stored emails for a particular piece of content, or targeting the emails of people under suspicion of some sort of crime.
According to Reuters, this all happened last year, after Snowden's leaks, mind you, and Yahoo CEO Marissa Mayer and the company's legal team kept the order secret from the company's security team. There were consequences for that decision:
The sources said the program was discovered by Yahoo's security team in May 2015, within weeks of its installation. The security team initially thought hackers had broken in.
When [Alex] Stamos found out that Mayer had authorized the program, he resigned as chief information security officer and told his subordinates that he had been left out of a decision that hurt users' security, the sources said. Due to a programming flaw, he told them hackers could have accessed the stored emails.
In case anybody had forgotten, remember that Yahoo just recently revealed that state-sponsored hackers had somehow gotten access to hundreds of millions of Yahoo accounts back in 2014. So Stamos kind of has a point there.
Read more from Reuters here. And bring on the end-to-end encryption! The kind without back doors. Oh, and this all is yet another reminder about how important it is that we have whistleblowers who aren't willing to just let this stuff go on without the public's knowledge.
Wed, 28 Sep 2016 12:30:00 -0400

Every so often we hear or read news reports about a municipal government official—a police officer, a DMV clerk, et cetera—using access to databases of citizen information for unauthorized and often very illegal purposes. We've seen them use personal information to stalk ex-lovers, track down potential romantic interests, and even facilitate identity theft. In this age when both the federal government and municipal law enforcement agencies are deliberately attempting to collect more and more data about us, the Associated Press investigated how frequently government officials misuse their access to citizen information. The results of the investigation, published today, are unsurprisingly concerning—and very, very incomplete.

The AP requested reports of incidents of database misuse from all 50 states and three dozen large municipal police departments. Over just two years, it found at least 650 cases in which an employee or police officer was fired, suspended, or otherwise disciplined for inappropriately accessing and using information from government databases. The AP acknowledges that, given the lack of reliable recordkeeping, these numbers are a woeful undercount and the true figures are likely much higher. And how many cases of unauthorized access never get caught at all?

Spread over time, these numbers essentially mean that every single day, a government official somewhere is inappropriately looking up information about citizens in state or municipal databases. The story describes many cases where police used databases to stalk people, often in connection with romantic entanglements. Some cases revolve around simple curiosity—like looking up information about celebrities. These are bad enough examples on their own.
There are some other examples, though, that highlight situations where officials and officers were—in what appears to be an organized fashion—using their database access to snoop on and even intimidate critics. In one case in Minnesota, a county commissioner discovered that law enforcement and government officials had repeatedly searched databases for information about her and her family members. The searches came after she criticized county spending and programs of the sheriff's department. In Miami-Dade County in Florida, a highway trooper found herself stalked and threatened by police, aided by information about her from the state's driver databases, after she pulled an officer over for speeding in 2011. (Reason previously took note of this case, and a local newspaper won a Pulitzer Prize for exposing police officers' habit of dangerous speeding on local highways, even when off duty.) Also of note: The county commissioner's efforts to fight back were unsuccessful because she couldn't prove the searches about her and her family were not permitted.

A good chunk of the story is about the complexity of trying to regulate the circumstances under which government officials access these databases and how to conduct oversight to make sure the information isn't misused. Sadly, that means there isn't nearly a big enough discussion of what information city governments should be gathering and storing in the first place. Police, just like the federal government, have been increasingly collecting and storing data about citizens even when those citizens aren't suspected of any criminal behavior whatsoever. There has not been nearly enough of a connection drawn between the capacity of government officials to threaten and intimidate citizens and how this push for more and more data helps make that happen. Heaven knows Reason has been raising the alarm.
Back when Edward Snowden first leaked details about the National Security Agency (NSA) collecting massive amounts of metadata from all Americans' communications, I explained several reasons why people with "[...]
Thu, 22 Sep 2016 10:05:00 -0400

Cook County's Tom Dart, the prostitution-obsessed sheriff who launched a national month of police playing sex workers to arrest "johns" and who unconstitutionally threatened Visa and Mastercard for doing business with the ad site Backpage, has found a new way to threaten people's privacy, screw over sex workers, and grow the police state. The latest Dart-led initiative involves creating a national database of prostitution customers, using solicitation-arrest data submitted by cops through a phone app.

Demand Abolition—a Massachusetts-based advocacy group that recently gave Boston Police $30,000 to look into new strategies for targeting prostitution customers—reported on Sheriff Dart's new plot in a late-August post crowing that "1,300 sex buyers—a record—were arrested across 18 states in just one month" of Dart's National John Suppression Initiative. Now the sheriff is using data from that sting to start a national database of people arrested for soliciting prostitution. You know, for research purposes. "We are well on our way to developing a stronger, more nuanced understanding of who buyers are—information that can be used to find new ways to change their behavior," Demand Abolition chirps.

This year's sex stings led to an "unprecedented level of buyer data collected, and shared, by this year's arresting officers," notes Demand Abolition. This is thanks to a new app that streamlines the logging of prostitution-arrest information. The app was developed at a January "social justice hackathon," in which a hundred or so techies were presided over by a team of anti-prostitution zealots from across the country—including Dart, Boston Mayor Marty Walsh, and Seattle-area prosecutor Val Richey (for more on Richey's work, see my recent series of stories on Seattle prostitution busts).
The presumably well-intentioned developers and data scientists were told their work would help put an end to human trafficking, but the tools they developed are designed to help police target and track adults engaging in consensual prostitution. The January hackathon, funded by Thomson Reuters' Data Innovation Lab, gave birth to what Demand Abolition is calling an "arrest app," which "allows officers to easily log arrest info into a national database, which Dart's team can then use to identify trends in buyer demographics." During the last John Suppression Initiative, cops logged info from 80 percent of all arrests into the database.

Keeping the personal info of people arrested on prostitution-related charges in one handy national database might help with whatever new Vice-Squad-on-Steroids agenda Dart is designing. But it's obviously worrisome from a privacy perspective. Keeping all that sensitive information in one place would seem to make it a ripe target for hackers, yet nowhere do Demand Abolition or Dart even mention cybersecurity. It's also important to note that the people being logged in the database have merely been arrested for, not convicted of, any crimes. The arrest app isn't concerned with case outcomes: If police arrest someone and the charges are later dropped or beaten, that person will still be counted in Dart's database as having been picked up in a sex sting.

I reached out to the Cook County Sheriff's Office for more details about the app and database—what security measures are in place, whether the info collected is subject to public-records requests, etc.—and will update if I hear back.

Update: Cook County Sheriff's Office Press Secretary Sophia Ansari said no individual names or case numbers will be entered into the database.
"Demographic information entered includes age range, race, marital status and education level—but that information is never connected to an individual or a number that could be connected to an individual," Ansari said in an email. Nor does the database reflect what ultimately happens with cases.[...]
Thu, 15 Sep 2016 12:45:00 -0400

"The Feds Will Soon Be Able to Legally Hack Anyone," warns the headline at Wired on a commentary written partly by Democratic Oregon Sen. Ron Wyden. The piece warns about coming changes to "Rule 41," a reference familiar to everybody in any field remotely connected to cybersecurity and privacy and mostly unknown to anybody else. Right now the Department of Justice is working to expand its hacking and surveillance authorities under the Federal Rules of Criminal Procedure. It's already legal, under this Rule 41, for the FBI to get authorization from a judge to attempt to install malware on computers believed to be connected to crimes. But Rule 41 has limits—judges can only authorize intrusions into computers within their own jurisdictions. The proposed update would lift that limit, authorizing the feds to ask a judge for permission to "hack a million computers or more with a single warrant." Wyden has teamed up with other tech-privacy-minded legislators (like Republican Sen. Rand Paul) to try to stop this amendment to Rule 41. They have until December to block it.

In the middle of this debate over how much hacking authority the government should have, one activist organization is analyzing whether such authority, at least in its current use, fundamentally endangers human rights. Access Now, an advocacy and policy group devoted to protecting and advancing the rights of users of digital and technological tools, has released a lengthy report about how government hacking can threaten our liberties. Access Now notes how little public debate there has been on the "scope, impact, or human rights safeguards for government hacking." The massive release of secret government surveillance documents by Edward Snowden did prompt significant discussions about digital snooping in the United States and some European countries, and there have been some very modest reforms in the states.
Others, though, like the United Kingdom, are actually considering formalizing a system that expands authorized government snooping on private digital data. Access Now carefully analyzed how governments engage in hacking, separating their behaviors into three categories: message control (censoring and manipulating digital information), deliberate damage (malware that harms systems and data), and surveillance and information-gathering. Access Now determined that government hacking causes significant harms to human rights. Here's a brief sample of how they describe the impact of hacking that installs malware or otherwise damages a person's systems: Government hacking that falls under this umbrella is often designed specifically to deprive a person of their property in some way. This implicates due process protections, which require a fair trial overseen by a competent judicial authority, qualified legal representation, and the ability to appeal. It also directly conflicts with the right recognized in most countries for individuals to own private property. When the damage a government seeks to carry out also implicates human life or wellbeing, the threat to human rights is exceptionally grave. Government hacking to do damage also implicates other human rights, such as freedom of expression and association, since these rights are frequently exercised using devices that such hacking could render inoperable. Access Now ultimately concludes that "there should be a presumptive prohibition on all government hacking," based on international human rights law. In order to justify hacking, Access Now recommends 10 specific safeguards, such as requiring very specific, tailored laws that describe when hacking is permitted in the narrowest terms possible. That's pretty much the opposite of what the Rule 41 expansion is attempting to accomplish. Access Now's report coincides with a high-profile government hacking story, but it di[...]
Tue, 06 Sep 2016 17:45:00 -0400
What does it mean for the state of sci-fi dystopias when Americans have to rely upon those allegedly nasty, self-interested international megacorporations to fight on their behalf against a government that refuses to respect citizens' rights to data privacy?
So it goes with the frequent attempts by government officials and law enforcement agencies to collect our personal information without us knowing they're doing so. We've had Google fighting against a gag that forbade it from even telling its customers how many data requests it has received from the National Security Agency (NSA), let alone whom it was targeting. We've had Apple's extremely high-profile fight against the Department of Justice over a push to force the company to weaken or bypass its own encryption in a way that could subject everybody to additional secret surveillance.
And we have Microsoft suing to strike down a law that allows the government to gag companies and not let them inform customers when officials demand and collect data about them.
As has been thoroughly established by this point, Supreme Court precedents dating back to the 1970s have determined that information about ourselves, and information or data that we ourselves have created, don't have full Fourth Amendment protections when they are kept or stored by a third party. Often a full warrant served against the citizen is not needed; a subpoena directed to the company that holds the data is often all that is required. Despite the massive amount of data about citizens now held by third parties (pretty much everything about us), as compared to what was available then, the precedent still holds for now.
Microsoft filed suit against the Department of Justice in April, arguing that gags prohibiting it from telling customers they've had their data taken by government officials are unconstitutional, violating both the Fourth Amendment rights of its customers and the First Amendment rights of Microsoft. But the company isn't fighting alone. A whole bunch of corporations from across the spectrum have just announced their support. It's not just tech companies and tech privacy activists, as we've seen in some cases (like the Apple encryption fight). As Reuters notes, we're talking about a wide-ranging group of companies that includes the Washington Post, Delta Air Lines, pharmaceutical company Eli Lilly, the U.S. Chamber of Commerce, and Fox News. There are even five former federal law enforcement officials supporting Microsoft's position.
Read more about the case here.
Tue, 30 Aug 2016 16:00:00 -0400When a powerful, unelected federal official says we need to have an "adult conversation" about the limits of federal authority as contrasted with the civil liberties of the people he allegedly works for, hold on to your butts. Whenever somebody declares the need for an "adult conversation," they are suggesting that one hasn't already been happening, and the declaration often accomplishes little beyond raising hackles. It is a deliberate insult against those with opposing views. In this particular case, the person invoking the term is FBI Director James Comey, and it's pretty much directed at the entire tech sector and privacy advocates who have been pushing back (for decades) against government attempts to tamper with and weaken encryption. There's something remarkably telling about the man saying we need to have an "adult conversation" on encryption limits primarily because he can't just get whatever he wants. Isn't that the child's argument? The government wants this information. Give us this information! Of course, as always, it's couched in terms of the alleged threat of the Internet going "dark" and federal investigators worried they're not able to track down alleged criminals and terrorists. Comey complained about it at a tech symposium today. The Associated Press reports: "The conversation we've been trying to have about this has dipped below public consciousness now, and that's fine," Comey said. "Because what we want to do is collect information this year so that next year we can have an adult conversation." The American people, he said, have a reasonable expectation of privacy in houses, cars and electronic devices — but he argued that right is not absolute. "With good reason, the people of the United States — through judges and law enforcement — can invade our public spaces," Comey said, adding that that "bargain" has been at the heart of the country since its inception. 
This is what he thinks is an "adult conversation." While Comey wants to present this as reasonably as possible, recall that when a phone in the possession of a terrorist was found to be protected with a password, what the Department of Justice thought was the reasonable, adult response was to try to use the courts to conscript Apple and actually force it to compromise its own security systems to give the government access to the phone's contents. The government wants things! Give the government those things! This is the "adult conversation" Comey's side is having right now. No, privacy is not absolute, but just because the government has the authority to pursue information related to crimes doesn't mean it's guaranteed access to it. I'll dredge up an old example: A suspect may take a box containing evidence of a crime and bury it somewhere out in the desert. The government absolutely has the authority to try to track down that evidence and use any number of tools to do so. But it can't order the desert to cough it up or command some desert experts to track it down (though it can certainly hire them). The "adult conversation" that's actually already happening is trying to get people like Comey to understand that there's no magical system where the Department of Justice (or any other government entity) can get access to encrypted data that doesn't leave the whole system vulnerable. The "adult conversation" is about trying to get authoritarian senators like Dianne Feinstein (D-Calif.) and Richard Burr (R-N.C.) to understand that the legislation they wrote to order tech companies to assist the government in cracking their own security was so bad—so childish, in fact—that it's impossible to imagine any tech or privacy-minded adult trusting what they'll suggest next. The "adult conversation" is "white hat" hackers [...]
Tue, 30 Aug 2016 12:00:00 -0400News that the city of Baltimore has been under surreptitious, mass-scale camera surveillance will have ramifications across the criminal justice world. When it comes to constitutional criminal procedure, privacy, and the Fourth Amendment, it's time to get ready for the concept of "pre-search." Like the PreCrime police unit in the 2002 movie Minority Report, which predicted who was going to commit criminal acts, pre-search uses technology to conduct the better part of a constitutional search before law enforcement knows what it might search for. Since January, police in Baltimore have been testing an aerial surveillance system developed for military use in Iraq. The system records visible activity across an area as large as thirty square miles for as much as ten hours at a time. Police can use it to work backward from an event, watching the comings and goings of people and cars to develop leads about who was involved. "Google Earth with TiVo capability," says the founder of the company that provides this system to Baltimore. But the technology collects images of everyone and everything: people in their backyards, anyone going from home to work, to the psychologist's or marriage counselor's office, to meetings with lawyers or advocacy groups, and to public protests. It's a powerful tool for law enforcement—and for privacy invasion. In high-tech Fourth Amendment cases since 2001, the U.S. Supreme Court has stated a goal of preserving the degree of privacy people enjoyed when the Constitution was framed. Toward that end, the Court has struck down convictions based on scanning a house with a thermal imager and attaching a GPS device to a suspect's car without a warrant. The Fourth Amendment protects against unreasonable searches and seizures. The straightforward way to administer this law is to determine when there has been a search or seizure, then to decide whether it was reasonable. 
With just a few exceptions, the hallmark of a reasonable search or seizure is getting a warrant ahead of time. Applying the "search" concept to persistent aerial surveillance is hard. But that's where pre-search comes in. In an ordinary search, you have in mind what you are looking for and you go look for it. If your dog has gone missing in the woods, for example, you take your mental snapshot of the dog and you go into the woods comparing that snapshot to what you see and hear. Pre-search reverses the process. It takes a snapshot of everything in the woods so that any searcher can quickly and easily find what they later decide to look for. The pre-search concept is at play in a number of policies beyond aerial surveillance and Baltimore. Departments of Motor Vehicles (DMVs) across the country are digitally scanning the faces of drivers with the encouragement of the Department of Homeland Security under the REAL ID Act. Some DMVs compare the facial scans of applicants to those of other license-holders on the spot. They are searching the faces of all drivers without any suspicion of fraud. And the facial scan databases are available for further searching and sharing with other governmental entities whenever the law enforcement need is felt acutely enough. The National Security Agency's telephone metadata program is an example of pre-seizure. Phone records that telecom companies used to dispose of, having kept them confidential under their privacy policies and federal regulation, are now held so that the government can search them should the need arise. Exactly how courts will apply the pre-search concept to mass aerial surveillance remains to be seen. The Fourth Amendment doesn't directly protect our movements in public, but it does protect our "persons" and "houses." Mass aerial surveillance captures data about both. The Supreme Court struck down wa[...]
Tue, 30 Aug 2016 08:00:00 -0400With a name like the National Security Agency, America's chief intelligence outfit might at least attempt to promote American security online. At the very least, one would hope its activities don't actively undermine U.S. cybersecurity. But—bad news—a recent leak of the agency's digital spy tools by a mysterious group called the Shadow Brokers shows how the agency prioritizes online surveillance over online security. For years, there have been rumors that the National Security Agency (NSA) was stockpiling a secret cache of powerful computer bugs to exploit for cyber-snooping. Recent revelations by the Shadow Brokers appear to confirm these allegations. On August 13, the group published a number of "cyber weapons" that it claims were used by an NSA-linked hacking outfit known as the Equation Group. The leak was supposed to be a teaser for the Shadow Brokers' upcoming auction of a larger batch of software security vulnerabilities, or exploits. "You see pictures. We give you some Equation Group files free, you see. This is good proof no?" the Shadow Brokers proclaimed. The Shadow Brokers' asking price for the upcoming dump? One million Bitcoin, or about $575.2 million (and no, the FBI is not getting in on the action). The dumped information appears to be legitimate, and is dated from around 2013. It's clear that the exploits are functional, as networking manufacturer Cisco confirmed (and promptly set about correcting). But how do we know the exploits were actually used by the NSA? Journalists at The Intercept compared the Shadow Brokers' data to their trove of Edward Snowden documents, some of which were never released to the public. The leak is consistent with their still-secret Snowden files, lending credibility to the Shadow Brokers' claims. Researchers at Kaspersky Labs likewise verified that the exploits themselves "share a strong connection" to previous tools known to have been used by the Equation Group. 
Sloppy Spies and Secret Bugs There are many concerning elements to this story. First, it's incredibly troubling that the NSA left itself or its tools open to a hack. If the NSA is going to spend billions of dollars to build a god-like system of dystopian digital control, it could at least not leave its dark materials lying around for any enterprising hacker to scoop up and sell to the highest bidder. It is still unclear whether the hackers directly infiltrated NSA systems, or whether they were able to take the exploits from a staging server that NSA agents use. Either way, it's unacceptable. Then there's the question of who was behind the hack. Was it Russia? Maybe. But the Russian government might not want to advertise the hack in such a public manner, opting instead to keep the exploits for itself to use. Could it have been a new Snowden, exposing the NSA's secrets from the inside? That's also possible, but there's not much specific evidence to confirm it. One computer scientist believes that the group's broken English is a ruse to shift blame to the Russians, which could be true, but is insufficient to prove anything. It might as well have been Bitcoin creator Satoshi Nakamoto behind the hack. Attribution is notoriously difficult, and we may never be completely certain of who was behind this dump. Whoever they are, however, the Shadow Brokers' actions have provided some long-overdue transparency for NSA hacking methods. The leak confirms what many have suspected for decades: The NSA opportunistically hoards and deploys powerful bugs that make everyone less secure online. These bugs were particularly potent because NSA agents were the only people who knew about them—until now, obviously. In the industry, they are known as "zero day vulnerabilities," or simply "0days," and they ge[...]
Mon, 29 Aug 2016 17:00:00 -0400Hillary Clinton may want Edward Snowden to stand trial for leaking information about the federal government's expansive surveillance of American citizens, but apparently that doesn't mean her campaign won't take advantage of his cybersecurity expertise. In a tech piece posted over at Vanity Fair, reporter Nick Bilton notes that the Clinton campaign has jumped on an application called "Signal," a heavily encrypted private communications program. A visit to the site of developers Open Whisper Systems shows an endorsement by both Snowden and Laura Poitras, the filmmaker who made the documentary Citizenfour about Snowden's whistleblowing. And in Vanity Fair's piece, the fact that the app was "Snowden-approved" was used to encourage its adoption to keep communications out of the hands of Russian hackers. Snowden raised an eyebrow about it on Twitter, noting that Clinton called for him to face trial and possible imprisonment just last year. (I would add here that Clinton has said Snowden should have gone through the proper channels and incorrectly believes he would have whistleblower protections.) Techdirt also takes note that Clinton herself is kind of vague about where she stands on cybersecurity policy when it comes to areas like encryption. She's one of those politicians who wants to have it both ways. It seems clear that she doesn't necessarily support mandatory "back doors" that require tech companies to provide access to law enforcement or the government to bypass encryption, but she also believes in the unicorn that Silicon Valley eggheads can come up with some magical key that only the "good guys" (the government will obviously determine who those might be) can use. She has previously said she wants some sort of "Manhattan Project" between the tech industry and the government to figure it all out. 
Maybe the tech policy briefing Clinton's campaign released in June can help straighten out where she actually stands on tech privacy and security? Sadly, no. Try to tease an actual policy out of this paragraph on encryption: Hillary rejects the false choice between privacy interests and keeping Americans safe. She was a proponent of the USA Freedom Act, and she supports Senator Mark Warner and Representative Mike McCaul's idea for a national commission on digital security and encryption. This commission will work with the technology and public safety communities to address the needs of law enforcement, protect the privacy and security of all Americans that use technology, assess how innovation might point to new policy approaches, and advance our larger national security and global competitiveness interests. The tech initiative paper is otherwise chock full of extremely specific proposals (which I've critiqued before as essentially a call for tech industry government lobbying for handouts). But the paragraph above says little more than that they'll look into the matter. There are two possibilities to consider: One, her campaign may recognize that trying to fight encryption is a doomed effort, and the committee is being promoted as the place for the negotiations to go and quietly die. Alternatively, though, there's the terrible precedent of President Barack Obama's administration, which made a big deal out of protecting consumer data privacy. It publicly opposed legislation to promote the sharing of private consumer data with the government in order to help fight cybercrime. But then it quietly worked with lawmakers to get what it wanted, and we ended up with a law that encourages private companies to hand your consumer data over to the government in order to fight all sorts of crimes, and then subsequently immunizes these companies from f[...]
Fri, 26 Aug 2016 12:30:00 -0400The iPhone security breach that prompted the latest Apple software update is not about encryption, but it's still very important for Western government officials who want to meddle with tech security standards in service of their own national security agendas to pay attention. Apple just released a new security update for iPhone and iPad users because of what recently happened to Ahmed Mansoor, a human rights advocate and promoter of a free press and democracy in the United Arab Emirates. Mansoor was sent a link in a text from an unknown source that said it would show him information about torture within the UAE's prisons. This was a lie, which he fortunately did not fall for. The link would have actually installed spyware on his phone that would have allowed hackers to snoop on Mansoor and even remotely activate the phone's camera. Mansoor has been targeted before, and fortunately for him (and all Apple users), he knew whom to turn to in order to investigate the malware. Citizen Lab figured out the nature and purpose of the malware, which has been traced back to a secretive Israel-based firm. This makes things a bit, well, as the Associated Press diplomatically describes it, awkward: The apparent discovery of Israeli-made spyware being used to target a dissident in the United Arab Emirates raises awkward questions for both countries. The use of Israeli technology to police its own citizens is an uncomfortable strategy for an Arab country with no formal diplomatic ties to the Jewish state. And Israeli complicity in a cyberattack on an Arab dissident would seem to run counter to the country's self-description as a bastion of democracy in the Middle East. The Associated Press sent a journalist to the Israeli company's headquarters only to find it had recently moved. The AP has not been able to get authorities from either Israel or the UAE to respond. 
The best they were able to get from the company, NSO Group, was a bland statement that its mission was to provide "authorized governments with technology that helps them combat terror and crime." That's the kind of statement that should set off warning sirens and alarm bells in the minds of government officials here in the United States and Europe. It's the same kind of motivation lawmakers and investigators claim when they call for tech companies to provide them ways to bypass the encryption or security of their devices or programs. At the same time that Mansoor was trying to fend off government-sponsored hacking targeting him because of his human rights advocacy, leaders within France and Germany are calling on the European Union to adopt a law that requires app makers to provide government officials the tools to bypass encryption for the purpose of … helping them combat terror and crime. Fortunately, European human rights and data protection experts are loudly pointing out what a terrible plan this is. There are politicians who still believe that somehow tech companies can create some sort of magical key that only the "right" people can use. This is clearly an absurd contention, but even if it were true, the United Arab Emirates, which has a terrible record of imprisoning its critics, apparently counts as the "right" people.[...]