Published: Sat, 10 Dec 2016 00:00:00 -0500
Last Build Date: Sat, 10 Dec 2016 05:51:47 -0500
Tue, 06 Dec 2016 13:50:00 -0500
Las Vegas-based startup Biometrica Systems describes its business as "creating software and systems that link the physical to the digital" and vice versa, "with the intention of minimizing criminality" and "events that could lead to crime."
The company's encrypted Security & Surveillance Information Network (SSIN) is already used by law-enforcement and gaming, retail, and hospitality businesses to share real-time information about suspicious incidents and individuals. Now, the network's newest iteration will give clients "the ability to run facial recognition scans of any individual or group on their properties and match them against a law enforcement verified database of criminals numbering in the millions, including more than one million registered sex offenders"—all using a convenient mobile app. What could go wrong?
Initially focused on the casino and gaming sector, Biometrica has since expanded SSIN to serve "shopping centers, stores, malls, and movie theaters." In an explanation of Biometrica products, the company website notes that federal and state governments have been "seeing the upside of sharing data with private partners" and that has allowed Biometrica to "collect and amalgamate several different law enforcement watch-lists—local, federal, state, and international."
And this, in turn, has allowed Biometrica "to create a composite set of images of an individual and their known associates, and build a set of dynamic attributes to attach to the individual and/or group" to provide businesses with a more "holistic" way of conducting "threat identification and crime prevention."
In a show of spectacularly creepy bravado, Biometrica CEO Wyly Wade called the new SSIN "revolutionary," and not only for security and surveillance companies. "This might be the first time a private company has taken Department of Defense-developed Facial Recognition software… and attached that to mobile devices for private customer use," he said.
The facial-recognition app can also benefit "non-bank financial institutions," said Biometrica Chief Financial Officer Nigel White in a statement. "They have an imperative to fulfill Know Your Customer requirements on an everyday basis. Helping them have access to faces and backgrounders of known white-collar felons in the system, will support their KYC and Anti-Money Laundering obligations."
Wed, 30 Nov 2016 13:45:00 -0500

Queen Elizabeth gave her assent to the British Investigatory Powers Bill on Tuesday, the last step needed before the massive surveillance authorization bill becomes law in 2017. A new, deeper analysis of the final law by tech experts suggests there's more to fear than simply government access to citizens' browser history. This law may ultimately put everybody's data privacy and security at risk.

To refresh everybody's memory, the Investigatory Powers Bill—nicknamed by critics the "Snooper's Charter"—formally increases the power of the British government to engage in online surveillance, provides rules for the bulk collection of citizen metadata, and grants the authority to hack into devices remotely. The law requires Internet Service Providers to store information about users' browser history for a year and to hand it over to government officials when presented with a warrant. Essentially, the law formalizes some secretive surveillance methods already being used by the government that were exposed by Edward Snowden, though it also provides for some judicial oversight.

While the law is being sold as a way to keep the United Kingdom "safe" and to fight terrorism, the reality is that a whole host of government agencies that have nothing to do with national defense will also have access to this information. These are agencies that investigate fraud and deal with taxation and licensing issues. It is abundantly clear to anybody familiar with the law that it is designed and intended to be used to investigate domestic crime, not just terrorism.

But there's more. Privacy advocates and tech companies had been fighting with the British government over the crafting of the law, particularly over the inclusion of mandates for encryption "back doors" so that government officials would not be stymied in their surveillance efforts.
While the new law doesn't officially mandate encryption back doors, U.K.-based tech media site The Register scoured the 300-page law and discovered, buried deep within, something just as bad. Government leaders will be able to serve a company with what's called a "technical capability notice," which can impose obligations and changes upon its products (software, apps, whatever), including the "removal by a relevant operator of electronic protection applied by or on behalf of that operator to any communications or data."

That is to say: The law doesn't mandate encryption back doors outright, but it gives the government the authority to demand that specific companies remove the encryption protecting data. That means the British government expects all of these companies to have the capacity to break their own encryption on demand. So in reality, the law does mandate encryption bypasses and back doors for communication tools; it just allows the companies to maintain control over the "keys."

If this sounds familiar to Americans, this provision has the same impact as the widely mocked legislation proposed by Sens. Dianne Feinstein (D-Calif.) and Richard Burr (R-N.C.) in the spring. In response to Apple's refusal to help the FBI decrypt the iPhone that was in the possession of one of the San Bernardino terrorists, the senators crafted the technologically illiterate "Compliance with Court Orders Act of 2016." Like the text of the British law, it doesn't order tech companies to create back doors for the government to bypass encryption, but it does require the tech companies themselves to bypass their own encryption when given a court order to do so.

What's the big deal?
There is a simple truth that everybody who works within the tech industry or writes about technology understands, and that many government officials are either choosing to ignore or unwilling to accept: When a company builds a bypass into its encryption, there is no guarantee the bypass will stay in the hands of the company or that only the "right people" will gain access. Accidents happen. Espionage happens. We saw an example of it earlier in the year when an internal secur[...]
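To see why this single point of failure matters, here is a toy Python sketch of a key-escrow scheme. The function names and the XOR "cipher" are illustrative stand-ins (real systems would use something like AES, and no vendor's actual design is being described): each message gets a fresh key, but an escrow copy of that key is wrapped under one master key held by the company.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" -- a stand-in for a real algorithm like AES,
    # used only so the key-handling logic stays visible.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

MASTER_KEY = secrets.token_bytes(32)  # the company's single escrow key

def encrypt_for_user(plaintext: bytes):
    user_key = secrets.token_bytes(32)             # fresh per-message key
    ciphertext = xor_bytes(plaintext, user_key)
    wrapped_key = xor_bytes(user_key, MASTER_KEY)  # escrow copy of the key
    return ciphertext, wrapped_key

def escrow_decrypt(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    # Anyone holding MASTER_KEY -- the company, a government with a
    # notice in hand, or a thief who stole it -- can unwrap every
    # user key and with it read every message ever sent.
    user_key = xor_bytes(wrapped_key, MASTER_KEY)
    return xor_bytes(ciphertext, user_key)
```

The design works exactly as intended for lawful access, which is the problem: compromise of that one master key silently decrypts all traffic, past and future.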
Mon, 28 Nov 2016 12:45:00 -0500

The United Kingdom's Gangmasters and Labour Abuse Authority is not an agency tasked with fighting terrorism. It is a licensing body that monitors labor rules in the U.K.'s agriculture industries. Nevertheless, under a new mass surveillance law, high-ranking officials of this agency will have as much access to the private Internet information of British citizens as the agencies that actually are tasked with fighting terrorism.

This will be the outcome of the passage of the Investigatory Powers Act, also known as the Snooper's Charter. It has passed both houses of the British Parliament and will become law in 2017 if approved by the queen. The Investigatory Powers Act makes the surveillance authorized by America's PATRIOT Act look remarkably tame in comparison. The law requires Internet Service Providers to keep all metadata and web browsing history of users for 12 months. And it allows top officials of dozens of government agencies to demand access to this information to fight not just terrorism but any sort of crime. Schedule 4 of the 300-plus-page law lists the agencies granted access, including several government bodies whose job it is to fight various forms of fraud or general crimes.

The law contains rules on how to get warrants to access confidential information stored by journalists and to try to track down a journalist's sources. It also, of course, creates special protections for members of Parliament, adding extra requirements before snooping on them. This is not a law about fighting terrorism. This is a law that completely destroys citizens' online privacy for the benefit of any sort of governmental investigation into domestic crimes. Edward Snowden called it "the most extreme surveillance in the history of Western democracy."
This was a pet project of new Prime Minister Theresa May, and I've previously noted that she is absolutely awful on surveillance and privacy, going so far as to think that snooping on private communications is an acceptable way to fight "cyberbullying." People are now petitioning to try to force the House of Commons to reconsider the legislation.

At the same time this domestic surveillance law is being passed, the U.K. is also considering a bill adding additional restrictions to the availability of online pornography. The law's stated purpose is to demand age checks to access porn sites, but a clause would potentially ban portrayals of certain "non-traditional" sex acts, meaning the kinky stuff, like spanking, female ejaculation, and anything that looks non-consensual (even though it's just role-playing). It doesn't take a brain surgeon to see the very, very bad ways these two laws could intersect.

Ron Bailey previously noted how Russia is using surveillance laws like those in the U.K. and the United States as models for its own. The Investigatory Powers Act is an autocrat's wet dream. Laws exactly like this one will be used in other countries to snoop on and crack down on dissenters and protesters, and the United Kingdom will hardly be in a position to criticize. And if President-Elect Donald Trump's choice to head the CIA—Rep. Mike Pompeo—is an indicator, America may be following in England's footsteps.[...]
Wed, 23 Nov 2016 13:20:00 -0500
A natural consequence of most Thanksgiving feasts is an incredible amount of leftover food, some of which—from congealed gravy to godawful ambrosia—is promptly tossed in the trash.
Usually, this throwing away of leftover holiday vittles warrants little attention from anyone. Not so in King County, Washington, however, where government officials have been found rummaging through residents' garbage in search of food waste.
On Monday, Q13—the local Fox affiliate—reported that a King County woman named Sandi England had come across men in an unmarked rental Penske truck digging through her garbage cans at 5:30 a.m. Suspecting identity thieves, she confronted the men only to be told they were working for the county on a study of residents' composting habits.
A local radio program called the Dori Monson Show reported that another woman had caught men with flashlights cataloging her household's refuse in the middle of the night as well.
This state-sanctioned dumpster diving is apparently all part of an 18-month-long Residential Cart Tagging Project. Started in November of last year, the study aims to get a more accurate picture of how much food waste is going into people's trash cans.
The idea is to encourage more of that waste to go into "yard waste" carts instead, says Jeff Gaisford, a recycling and environmental services manager with King County Solid Waste. According to Gaisford, his department has been leaving informational tags on the trash cans of its involuntary study participants reminding them of proper food waste disposal practices. The follow-up "surveys" are intended to measure whether these tags are working to encourage people to put said waste in the right bins.
People weren't informed about the unsolicited site visits, he added, because King County does not want them to change their behavior in response to being part of the study.
As weird, creepy, and likely pointless as all this is, it's actually not the first time the area has experienced curb-side privacy violations.
The city of Seattle—which sits in King County—was rebuked earlier this year when a judge found that a similar program to measure how much recyclable material was being thrown in the trash was unconstitutional. That ruling rested on the fact that Seattle was looking to levy fines on those who failed to properly sort their recyclable high-density polyethylene from their non-recyclable polypropylene. As the county is not looking to hand out fines to callous food wasters, its program probably won't suffer a similar fate.
Fines or no, though, the Residential Cart Tagging Project has rankled more than a few people. Drew Barth of the Dori Monson Show voiced some rather libertarian sentiments, for example, when he called the whole thing "idiotic" and a waste of taxpayer money. "I should have the freedom to throw away whatever I want," he said.
Fri, 11 Nov 2016 17:30:00 -0500

The FBI has a controversial new method of fighting child pornography: distributing child pornography. As part of "Operation Pacifier," the federal law-enforcement agency ran a dark-web child porn clearinghouse called The Playpen for two weeks, delivering malware to any site visitors, in a scheme that was revealed last summer. But it turns out that site may not have been the only dark-web site the FBI maintained. According to documents obtained by the American Civil Liberties Union (ACLU), the agency was actually authorized to take over 23 child-pornography websites in addition to The Playpen.

According to a recently unsealed FBI affidavit, the 23 Tor-hidden sites were run on one computer server, and the FBI requested authority to seize this server and deploy its "network investigative technique" (NIT) on these sites. During the 30-day deployment period, any visitor to sites 1-23 would receive the NIT instructions (essentially malware), which were "designed to cause the 'activating' computer to deliver certain information to a computer controlled by or known to the government," as the FBI affidavit explained it. This information included the computer's actual IP address, host name, operating-system type, and MAC address.

Near the end of the affidavit, the FBI notes that "while Websites 1-23 operate at a government facility," "normal user-request data associated with Websites 1-23 will be collected" and "such request data can be paired with data collected by the NIT... to attempt to identify a particular user and to determine that particular user's actions on Websites 1-23." Cybercrime lawyer Fred Jennings told Ars Technica, "That paragraph alone doesn't quite say the FBI is operating" the websites. "But definitely no other way to read that than websites 1-23 were hosted at a government facility, with the FBI's knowledge and to the FBI's informational benefit. It's clever phrasing on their part."
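For a sense of what that handful of identifiers looks like, here is a minimal Python sketch that gathers the same categories of data about the machine it runs on, using only the standard library. This is purely illustrative of the data types the affidavit lists; it does not describe how the NIT itself was implemented, and the function name is invented for the example.

```python
import socket
import platform
import uuid

def local_device_identifiers():
    """Collect the categories of identifying data named in the affidavit:
    host name, operating-system type, and MAC address. (The IP address is
    what the NIT's phone-home connection itself would have revealed.)"""
    mac = uuid.getnode()  # 48-bit hardware address as an integer
    return {
        "host_name": socket.gethostname(),
        "os_type": platform.system(),  # e.g. "Linux", "Windows", "Darwin"
        "mac_address": ":".join(f"{(mac >> shift) & 0xff:02x}"
                                for shift in range(40, -1, -8)),
    }
```

Individually these fields look mundane; paired with a real IP address captured outside Tor, they are enough to tie a specific machine to specific site visits, which is exactly the point of the technique.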
An analysis from security researcher Sarah Jamie Lewis found that between April and August 2016, there were 29 Tor-hidden websites devoted to child porn. Lewis told Ars Technica that "it's a pretty reasonable assumption" that the FBI was running about half of them at some point. "Doing the math, it's not zero sites, it's probably not all the sites, but we know that they're getting authorization for some of them. I think it's a reasonable assumption—I don't think the FBI would be doing their job if they weren't."

As Reason's Jacob Sullum noted when the Playpen news came out, "the government's position is that children are revictimized every time images of their sexual abuse are viewed or shared," which "is one of the main rationales for punishing mere possession of child pornography," not just those who make or distribute or profit from it. Under federal law, distributing a sexually explicit image of a child comes with a mandatory minimum prison sentence of five years and a possible 20 years. "If such actions merit criminal prosecution because they are inherently harmful," Sullum points out, "there is no logical reason why the federal agents who ran The Playpen should escape the penalties they want to impose on the people who visited the site."

The Playpen authorization came from a judge in Virginia, while the mysterious websites 1-23 were authorized by a judge in Maryland. So far, more than 200 charges have stemmed from the FBI's Playpen foray, mostly for receiving or possessing child porn. But in trying to prosecute these cases, the FBI has already faced resistance from federal judges holding that the FBI's operation was unconstitutional and any evidence obtained must be suppressed. "The courts have been divided in their rulings on whether the FBI went too far," explained Mike Carter at The Seattle Times, "and prosecutors and defense lawyers say the case is almost certainly headed to the U.S. Supreme Court."[...]
Thu, 10 Nov 2016 19:45:00 -0500
Donald Trump famously called mass surveillance whistleblower Edward Snowden a traitor and promised that if he became president, he would negotiate for Snowden's return from Russia with Putin. And in at least one interview Trump threatened to treat Snowden the way America treated traitors "in the good old days when we were a strong country," which is to say, execute him.
And now Trump is our president-elect. So when Snowden participated in a video conference interview today hosted by Dutch search engine StartPage (which emphasizes the privacy of your searches), everybody wanted to know what Snowden thought of Trump's election.
People might be surprised to discover that Snowden was neither quaking in his boots nor openly contemptuous of Trump, despite what Trump may think of him.
From Snowden's perspective, the issue of data privacy and protection is much, much bigger than whoever gets picked to hang out in the Oval Office. America is not the alpha and omega of privacy rules and intrusive government surveillance.
"What we need to start thinking about now is not how to defend against a President Trump, but how to defend people everywhere," Snowden said. If the president of the United States is not open or is even actively hostile to protecting tech privacy, then it falls upon private citizens and the tech sector to work out other solutions. "We could guarantee through technology. This election reminds us that capability is within our reach, and we don't just have the right to try, but a duty."
He reminded the audience that when President Barack Obama ran for office, he promised that there would be surveillance reform, and that didn't really happen until Snowden gave the public the information necessary to force the issue. "We should be cautious to put too much faith or fear in the work of elected officials. At the end of the day, this is just the president."
His suggestion, given the increased government embrace of surveillance tools (not just in the United States but in Europe as well), is that consumers look for companies willing to protect data privacy with tools like end-to-end encryption rather than just hoping for politicians to fix the problem. "This will only be the work of the people," he explained. "Politicians do what they think will gain them support." He also encouraged listeners to support and donate to causes and expert groups that educate and fight (and sue) for tech privacy.
As for what comes next for Snowden, he said he wasn't too worried, though he's very aware of the relationship between Trump and the Russian government. But Russia has declared Snowden to be a human rights defender, and it has a policy of not extraditing such people.
Even if he were forced back home against his will, Snowden said, he would still be proud of what he has done; if he had been concerned about his safety, he never would have left his job in Hawaii to blow the whistle in the first place.
"I have more ability to change the world for the better today than when I was working for the NSA," he said. "I can give a voice to the side of the NSA that's forbidden from speaking."
Below, watch Nick Gillespie's interview with Snowden back in February:
<iframe src="https://www.youtube.com/embed/o8pkUTav0mk" allowfullscreen="allowfullscreen" width="560" height="340" frameborder="0"></iframe>
Mon, 07 Nov 2016 13:35:00 -0500

It's possible to make a libertarian argument in favor of municipally provided animal control services. Feral dogs, at the very least, present a concrete threat to public safety and have been known to injure and even kill people. But even if a small-government case can be made for animal control, what actually happens is that these municipal agencies end up expanding far beyond the public safety purposes for which they were created and take on a bureaucratic life of their own. Go figure!

Pet licenses aren't just for covering the bureaucratic costs of making sure animal owners can be held responsible for their pets' behavior. They're revenue generators. Cities and counties want the money for things like fancy, expensive animal shelters and the staff needed to operate them. The public often supports the idea of these shelters because people hate seeing animals suffer, and they associate big shelters with more space and therefore the likelihood of fewer animals being put to sleep. But citizens aren't exactly putting up the money to pay for it, and in reality many pet owners do not comply with licensing laws. Obviously, this does not mean that these owners are not taking care of their pets; they're making sure the animals are getting vaccinated and are otherwise not presenting a public safety threat to their communities. It just means that municipal governments are missing out on millions of dollars they think they're entitled to.

All that leads us to King County, Washington (home of Seattle). The county's animal services recently sent out loads of threatening letters to pet-owning residents, warning them that failing to get their pets properly licensed could lead to $250 fines. The county was going to extract money from them either way. But how did the county know who owned pets if they weren't licensed?
It turns out they got their mitts on direct-mail lists from stores that track customer purchasing habits through membership cards and the like. For stores in the private retail environment, these are tools to market goods customers may actually want or need. In the hands of government, they become a lot more sinister. A woman who no longer owned a pet received one of these threatening letters and wondered what was going on. The media picked up the story. From the Seattle Times:

The county hired a Seattle mailing company named Lacy & Par, which retrieved a list of prospective pet owners from another data firm. The county took that list of possible pet owners, compared it against an internal database of licensed pets, and — voilà! — had a list of Fido lovers who might be stiffing the county. Out went the letters.

These lists, though, could be complete crap, given that the example that triggered the story was a woman who owned no pets receiving a letter. It seems likely that this method sent the letter to tons of people who did not own pets and missed any number of residents who do own unlicensed pets. The Times reporting suggests that there's probably not going to be an enforcement follow-up. That is to say, animal control officials aren't going to be showing up on these people's doorsteps to spot-check whether they've got unlicensed pets. They were hoping to scare citizens into complying, and a few apparently did.

This example, though, could be another camel poking another nose under the citizen-privacy tent to take a big sniff. We have many, many other examples of municipal governments collecting citizen data for petty policing purposes and then misusing it in broad, judgmental fashion like this. Consider cities where police collect the license plate numbers of vehicles in areas prostitutes are known to frequent and then send threatening letters to the homes connected to those cars.
Do they know whether those drivers were consorting with prostitutes? Nope. Do these letters have the major potential to interfere [...]
Mon, 07 Nov 2016 11:15:00 -0500

Los Angeles model Dani Mathers, Playboy's "Playmate of the Year" for 2015, has been criminally charged over a photo of an older, nude woman that Mathers took in a gym locker room and allegedly shared on Snapchat. The Los Angeles city attorney's office announced last Friday that Mathers, 29, was charged with one count of misdemeanor invasion of privacy. She is accused of secretly snapping a photo of the naked 70-year-old woman in an LA Fitness shower area in July and posting it to Snapchat with the comment: "If I can't unsee this then you can't either."

LA Fitness officials somehow saw the snap and reported it to Los Angeles police. Mathers does not deny taking and sharing the photo, but she says it was meant to be sent as a private message, not posted publicly. She has since apologized on social media, sharing an apology video on Snapchat and writing on Twitter: "I'm sorry for what I did... I need to take some time to myself now to reflect on why I did this horrible thing. Goodnight."

"Prosecutors often use invasion of privacy charges against peeping Toms and people who conceal cameras to take sexually suggestive photos of women," the Los Angeles Times noted. "But legal experts said this marks a rare time authorities have brought charges against someone over photos making fun of someone's weight. It comes amid growing awareness and outrage about 'body-shaming'—particularly common on social media."

Despite the spin about body-shaming and the use of Snapchat here, however, there doesn't seem to be much that's novel about the case. Mathers' motivation may not have been sexual and her intention may not have been to share the photo with a mass audience, but both are likely irrelevant from a legal standpoint.
Like any peeping Tom, Dick, or Harry, she still captured an intimate image of someone without their knowledge or consent, in a place where they have a reasonable expectation of privacy, with the intent to share it with one or more individuals. This would seem to fulfill the requirements under California's invasion of privacy statutes, so long as it can be shown that Mathers acted "with the intent to invade the privacy" of the woman of whom she took and shared a nude pic.

Thomas Mesereau, Mathers' attorney, defended her by saying that a) the woman did not have a reasonable expectation of privacy in the fitness-center shower area and b) Mathers "never tried to invade anyone's privacy and never tried to violate any laws." But, again: She secretly took a picture of an old lady who was just trying to shower after a workout and then tried to share it with at least someone. I'll buy that Mathers didn't know she was breaking the law, but not that her actions can be construed as anything other than a conscious choice to invade someone's right to privacy.

Unfortunately, Los Angeles law enforcement is actively stoking the impression that this isn't about old-fashioned privacy rights but some newfangled effort at eradicating "body shaming." Announcing the charges against Mathers, City Attorney Mike Feuer said it was important that city officials send a message about the "painful, long-term consequences" that can come from making mean comments about people's physical appearances. "Body-shaming is humiliating," said Feuer. "It mocks and stigmatizes its victims, tearing down self-respect and perpetuating the harmful idea that our unique physical appearances should be compared to air-brushed notions of 'perfect.' What really matters is our character and humanity."[...]
Tue, 25 Oct 2016 12:42:00 -0400

It is easy to forget that Americans had actually been clued in to the likelihood of domestic telecommunication surveillance by the National Security Agency (NSA) long before Edward Snowden's leaks. Snowden helped us understand the massive scope and many particulars of which we were unaware, and he really put the issue before the public in a way we hadn't seen before. But have we all forgotten Room 641A? That was the room in San Francisco where telecommunications company AT&T set up a system for the NSA to access the company's internet traffic for surveillance. It was exposed by the Electronic Frontier Foundation and a former AT&T technician all the way back in 2006, years before Snowden's leaks.

That background is relevant again, as The Daily Beast has a story that puts AT&T's cooperation with federal surveillance in a whole new light. The reason AT&T has been so helpful to the government in providing data about its customers (or anybody who communicates with its customers) is that it has found a way to monetize it.

The Hemisphere Project was first revealed in 2013 by The New York Times. Hemisphere was a database system provided by AT&T to help law enforcement officials access data from any call that passes through its system (which means not just its own customers). This data was then used by the government for drug busts. In order to participate in the program, government agencies were required by contract to keep the existence of Hemisphere a secret and not reference it in any government document. This was not unlike the contracts we have been seeing from law enforcement agencies using "Stingray" devices to track mobile phones. It's where we learned about "parallel construction," in which law enforcement agencies kept the information they had received from surveillance secret but used it to create a second chain of evidence that could be introduced in court cases.
The Times story from 2013 mentioned that the government paid AT&T for access to Hemisphere (and even for AT&T employees to embed with drug-fighting units to assist them), but the Times didn't know the cost. Kenneth Lipp of The Daily Beast provides that information today, along with news that the Hemisphere program was not just for fighting drugs. It's being used to fight all kinds of crime.

And while the government can force telecommunications companies like AT&T to provide data about users with a subpoena, it appears AT&T took this to the next level, turning it into a program that could be marketed and, more importantly, sold to law enforcement agencies. They found a way to make money off government demands for data. And if you think the prices were modest, you obviously know nothing about government contracts:

Sheriff and police departments pay from $100,000 to upward of $1 million a year or more for Hemisphere access. Harris County, Texas, home to Houston, made its inaugural payment to AT&T of $77,924 in 2007, according to a contract reviewed by The Daily Beast. Four years later, the county's Hemisphere bill had increased more than tenfold to $940,000.

"Did you see that movie Field of Dreams?" [American Civil Liberties Union tech policy analyst Christopher] Soghoian asked. "It's like that line, 'if you build it, they will come.' Once a company creates a huge surveillance apparatus like this and provides it to law enforcement, they then have to provide it whenever the government asks. They've developed this massive program and of course they're going to sell it to as many people as possible."

This reporting hits AT&T at a time when it is trying to plan a merger with Time Warner. There's a thought, one supposes, that this either might or should hurt AT&T's chances there, but given that this program is in full cooperation with the very federal government that has authority to approve or deny the merg[...]
Tue, 18 Oct 2016 16:15:00 -0400

Fun fact about the fingerprint-lock "Touch ID" system on iPhones and iPads: If the owner hasn't unlocked his or her device in the past 48 hours, it reverts to requiring a passcode. This means that if a phone gets, for example, seized by authorities of some sort and locked away, there's a window during which they can physically make its owner unlock it before they have to get a numerical passcode.

That may explain why, in a federal warrant uncovered by Forbes' Thomas Fox-Brewster, the Department of Justice attempted to get a judge's permission to force people to unlock any Touch ID-locked phones at the scene of the search itself. This was a search of a home in Lancaster, California, last May, and while Fox-Brewster wasn't able to get his hands on the warrant itself, he was able to track down a very particular and concerning request. The Department of Justice wanted "authorization to depress the fingerprints and thumbprints of every person who is located at the SUBJECT PREMISES during the execution of the search and who is reasonably believed by law enforcement to be the user of a fingerprint sensor-enabled device that is located at the SUBJECT PREMISES and falls within the scope of the warrant."

To simplify, the Department of Justice wanted to force anybody on scene with a fingerprint-locked phone or tablet to open it then and there so agents could review the contents. There are two issues here: One, can authorities force somebody to provide a thumbprint to unlock a phone? Two, even if they can, can the authorities demand access to every device connected to a search scene without any proof it's connected to any crime? On the first question, judges have so far been inclined to allow authorities to compel a thumbprint and have not held that this violates Fifth Amendment protections against self-incrimination. At least that's how things stand so far.
As for the second question, based on the vagueness of the memo, legal scholars were not impressed. "They want the ability to get a warrant on the assumption that they will learn more after they have a warrant," said Marina Medvin of Medvin Law. "Essentially, they are seeking to have the ability to convince people to comply by providing their fingerprints to law enforcement under the color of law – because of the fact that they already have a warrant. They want to leverage this warrant to induce compliance by people they decide are suspects later on. This would be an unbelievably audacious abuse of power if it were permitted."
Jennifer Lynch, senior staff attorney at the Electronic Frontier Foundation (EFF), added: "It's not enough for a government to just say we have a warrant to search this house and therefore this person should unlock their phone. The government needs to say specifically what information they expect to find on the phone, how that relates to criminal activity and I would argue they need to set up a way to access only the information that is relevant to the investigation." But we don't know exactly what was in the warrant, so we don't know how specific the request was.
Forbes tracked down the recipients of the warrant and determined that it was indeed served, but they wouldn't say much beyond noting that nobody there had been accused of involvement in a crime. Assuming they're telling the truth, the Justice Department's behavior is even more of a concern, because it used the vaguest possible justifications to try to access private data on devices that may have had no connection with any sort of crime. And we don't know how many times the Department of Justice (or another law enforcement agency) has attempted this method, or even whether it succeeded. But it does make clear that the federal government is going to quietly do whatever it can to get around private data security unless they[...]
Thu, 13 Oct 2016 12:55:00 -0400
It's incredibly obvious that neither Donald Trump nor Hillary Clinton is particularly savvy or even remotely articulate about cybersecurity, encryption, and other tech policy issues. It's probably too much to expect people of their ages and backgrounds to stay on top of such an ever-evolving, complicated web of concerns, so what matters here is ultimately whom they choose to help guide federal policies and what sort of principles undergird them (if any).
Amid the dump of hacked emails from Clinton campaign Chairman John Podesta are bits and pieces of discussion that help indicate her mindset on citizen privacy and the use of encryption to protect data. As Apple was fighting with the FBI earlier in the year over whether the government could force a private tech company to develop tools to defeat its own encryption, California Democratic Rep. Zoe Lofgren, a strong supporter of tech privacy and Fourth Amendment safeguards against unwarranted surveillance, communicated with the campaign. She was hoping that Clinton would take a stand opposing the FBI's attempt to draft Apple's cooperation via court order, and she wanted to speak with Clinton if the candidate was considering taking the FBI's side. Lofgren supplied a copy of her statement rejecting the FBI's authority and the court's ruling against Apple, saying "It is astonishing that a court would consider it lawful to order that a private American company be commandeered for the creation of a new operating system in response."
Podesta's response was that Clinton and the campaign did not seem to want to get involved: "I think we are inclined to stay out of this and push it back to Companies and USG to dialogue and resolve. Won't embrace FBI." When a top politician appears to take an uninvolved stance in a conflict between the executive branch and private citizens or companies, don't mistake it for neutrality. It's deference to authority.
For a candidate running to be in charge of the executive branch, "staying out of it" is really approval for the Department of Justice to push the issue and see what happens. Indeed, in a prior email exchange last November, Podesta openly acknowledged Clinton's deference to authority here. In a very interesting email, campaign strategist Luke Albee suggests that Clinton might learn from conservative Tea Party types who were concerned about mass surveillance and "big brother" government, and potentially use those concerns against Trump. Albee wrote:
Trump (and others?) have called for registering Muslims. He has called for a federal domestic police force that will be focussed [sic] on arresting and deporting 11 million people. Other candidates are talking about separating immigrants by religion. All of this is about building up and feeding the BIG BROTHER beast. The Tea Party was born bc of the perception of government encroachment in peoples [sic] lives — and it really has always been the Rand Paul mantra: government wants to control your health care decisions. Government wants [to] register all your guns, which will ultimately lead to gun confiscation. Government wants the ability to murder its own citizens with drones (remember the Rand Paul filibuster on that one?). The Snowden stuff confirmed what many felt. . .the government was collecting vast troves of information on everyone. At a certain point, I think HRC might bring together all the different strands (mostly Trumps) of expanding federal Big Brother government — and talk about how its [sic] possible to be safe without creating some kind of large and cumbersome and intrusive police state.
What a remarkable email within the Clinton campaign. Podesta's response: Interesting. Her instincts are to buy some of the law enforcement arguments on crypto and Snowden type issues. So may be tough, but[...]
Wed, 12 Oct 2016 15:47:00 -0400
By now everybody must be hip to the fact that social media companies don't make a good chunk of their money off us users tweeting about the latest outrage, Instagramming our lunches, or Facebooking political memes full of completely inaccurate or made-up information. They make money by taking what we voluntarily reveal about ourselves, packaging it up (or allowing software developers to package it up), and selling it to customers.
It has also become increasingly obvious that governments—local, national, and international—understand social media as an organizational and tracking tool and have adjusted their information-gathering techniques accordingly. This is not inherently a negative—it helps police and emergency responders map out how to react to a crisis, for example. But that's also the problem. Governments have a noted tendency to see anything from their own citizenry that disrupts their precious order as a "crisis," including groups of people gathering in public to loudly express the opinion that their government sucks.
We already know full well that local police and the FBI have been using surveillance tools to snoop on citizens, particularly during protests. The American Civil Liberties Union (ACLU) has discovered, as a result of records requests, that American law enforcement agencies are using social media datamining tools to keep tabs on protesters and activists. Specifically, law enforcement agencies have been using tools from a company called Geofeedia to track location-based social media trends in real time. Geofeedia was doing so via its access to Twitter, Facebook, and Instagram user data, and according to the ACLU, it was marketing itself to law enforcement specifically for easier surveillance. Based on what the ACLU was able to gather, it doesn't appear Geofeedia had access to private information.
Rather, it had access to database info about what people publicly choose to share on social media and was able to use the same tools marketers use to promote products to you based on what you talk about online. The ACLU here is taking an approach that I personally find a little unusual and a bit concerning: it is pressuring the social media outlets themselves to be accountable for how developers and customers use the data. It's putting out a list of recommendations for tech companies:
Beyond the agreements with Geofeedia, we are concerned about a lack of robust or properly enforced anti-surveillance policies. Neither Facebook nor Instagram has a public policy specifically prohibiting developers from exploiting user data for surveillance purposes. Twitter does have a "longstanding rule" prohibiting the sale of user data for surveillance as well as a Developer Policy that bans the use of Twitter data "to investigate, track or surveil Twitter users." Publicly available policies like these need to exist and be robustly enforced. Here is what we're asking of the social networks:
No Data Access for Developers of Surveillance Tools: Social media companies should not provide data access to developers who have law enforcement clients and allow their product to be used for surveillance, including the monitoring of information about the political, religious, social views, racial background, locations, associations or activities of any individual or group of individuals.
Clear, Public & Transparent Policies: Social media companies should adopt clear, public, and transparent policies to prohibit developers from exploiting user data for surveillance purposes. The companies should publicly explain these policies, how they will be enforced, and the consequences of such violations. These policies should also appear prominently in specific materials and agreements with developers.
Oversight of Developers: Social media c[...]
Tue, 11 Oct 2016 08:00:00 -0400
It's been a rough month for Yahoo. Within a few weeks, the struggling tech company was accused of undermining its customers' security and privacy, after news of a massive 2014 hack of user data was followed up this fall with allegations of involvement in an unprecedented government surveillance program. The question now is whether more tech companies are secretly complying with federal orders to spy on us.
For Yahoo, the woes started in late September, when chief information security officer (CISO) Bob Lord delivered some harsh news on the firm's official Tumblr account: Yahoo had been hacked. Lord confessed that the account information of some half a billion customers had been extracted and now rested in the hands of unknown parties. Fortunately, no financial information appears to have been leaked. Still, the names, email addresses, birthdays, telephone numbers, security questions, and passwords of 500 million users had been successfully lifted in the 2014 incident.
Then, in early October, Reuters reported that Yahoo had secretly allowed a massive government surveillance program to scan all incoming emails to Yahoo accounts. The custom software program was reportedly built by Yahoo at the behest of the National Security Agency (NSA) and the FBI, at the direction of a Foreign Intelligence Surveillance Court judge. According to Reuters' unidentified sources ("three former employees and a fourth person apprised of the events"), the decision of Yahoo Chief Executive Officer (CEO) Marissa Mayer to follow the directive angered some senior executives and led to the departure of then-CISO Alex Stamos in June 2015. The New York Times reports a history of skirmishes between Stamos and Yahoo executives over how much to invest in security. Stamos, known in the industry as something of a privacy and security hardliner, often butted heads with Mayer, the Times said.
Mayer feared that standard security measures, like an automatic reset of all user passwords, would anger Yahoo users and drive them away to other services. Yet few things can drive users away quite like a record-setting security breach.
After the hack was revealed, Yahoo encouraged affected users to change their passwords and security questions immediately. But this was almost certainly too little, too late. Many people re-use the exact same password and security questions for many, if not all, of their online accounts. A criminal who had the hacked data could have used these "master" passwords and security-question answers to gain access to all sorts of users' other accounts. Even if this hasn't happened yet, many Yahoo users won't change their passwords for other websites, and a good number won't even change their Yahoo passwords.
The company was quick to blame the attack on "state-backed actors." But as some skeptical information-security experts have pointed out, this excuse is often deployed to downplay suggestions of company negligence. In the words of security writer Bruce Schneier, "'state-sponsored actor' is often code for 'please don't blame us for our shoddy security because it was a really sophisticated attacker and we can't be expected to defend ourselves against that.'"
Unfortunately for Yahoo, the hacking news broke right in the middle of a $4.83 billion acquisition deal with Verizon. The purchase was expected to infuse new direction and capital into the legacy tech company. Now, it looks like Verizon may be hoping for a $1 billion discount if it goes ahead with the deal at all. But the hacking of Yahoo user-account data is small compared to recent revelations about the company's cooperation with government surveillance. It's unclear what exactly the NSA and FBI were looking for, but sources told The New [...]
Tue, 04 Oct 2016 15:35:00 -0400
(image) When Edward Snowden revealed the existence of several mass surveillance systems by which the National Security Agency (NSA) collected metadata about the communications of all Americans, President Barack Obama and supporters of the NSA from both parties were quick to tell Americans, "Nobody from the government is reading your emails."
It turns out that's because they were dragooning tech companies into doing it for them. Today Reuters, based on information from a couple of former Yahoo employees, is reporting that the tech company built a custom software program to search the content of emails for a particular string of characters or words on behalf of the NSA and FBI. Reuters notes that this is the first known case of a third party scanning all incoming emails in real time on behalf of the government. This was not a situation where they were searching stored emails for a particular piece of content, or targeting the emails of those under suspicion of some sort of crime.
According to Reuters, this all happened last year (after Snowden's leaks, mind you), and Yahoo CEO Marissa Mayer and the company's legal team kept the order secret from the company's security team. There were consequences for that decision:
The sources said the program was discovered by Yahoo's security team in May 2015, within weeks of its installation. The security team initially thought hackers had broken in.
When [Alex] Stamos found out that Mayer had authorized the program, he resigned as chief information security officer and told his subordinates that he had been left out of a decision that hurt users' security, the sources said. Due to a programming flaw, he told them hackers could have accessed the stored emails.
In case anybody had forgotten, remember that Yahoo just recently revealed that state-sponsored hackers had somehow gotten access to hundreds of millions of Yahoo accounts back in 2014. So Stamos kind of has a point there.
Read more from Reuters here. And bring on the end-to-end encryption! The kind without back doors. Oh, and this all is yet another reminder about how important it is that we have whistleblowers who aren't willing to just let this stuff go on without the public's knowledge.
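What "end-to-end encryption without back doors" buys you is exactly the property the Yahoo story lacks: the relaying server only ever sees ciphertext. Here is a deliberately simplified toy, NOT cryptographically secure (it uses a tiny demo prime and an XOR keystream where real systems use vetted libraries like libsodium and large standardized groups), that illustrates the idea:

```python
import hashlib
import secrets

# Toy Diffie-Hellman key agreement plus an XOR "cipher", for illustration
# only. NOT secure: real deployments use vetted 2048-bit groups and a
# real AEAD cipher, not this demo prime and keystream.
P = 2**127 - 1  # a Mersenne prime, fine for a demo
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    # Both endpoints derive the same key; it never crosses the wire.
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(secret.to_bytes(16, "big")).digest()

def xor_stream(key, data):
    # Toy stream cipher: the same operation encrypts and decrypts.
    stream = hashlib.sha256(key + b"stream").digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

ciphertext = xor_stream(shared_key(alice_priv, bob_pub), b"meet at noon")
# A relay server forwards only `ciphertext` and the public keys; without
# a private key (or a back door) it cannot read the message.
print(xor_stream(shared_key(bob_priv, alice_pub), ciphertext))  # prints b'meet at noon'
```

The point of the sketch: a Yahoo-style server-side scanner has nothing to scan, because content-readable plaintext exists only on the two endpoints. A "back door" is any mechanism that gives a third party a copy of, or a way to derive, those endpoint keys.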
Wed, 28 Sep 2016 12:30:00 -0400
Every so often we hear or read news reports about a municipal government official—a police officer, a DMV clerk, et cetera—using access to databases of citizen information for unauthorized and often very illegal purposes. We've seen them use personal information to stalk ex-lovers, track down potential romantic interests, and even facilitate identity theft. In this age when both the federal government and municipal law enforcement agencies are deliberately collecting more and more data about us, the Associated Press attempted to investigate how frequently government officials misuse their access to citizen information. The results of the investigation were published today. What the AP found is unsurprisingly concerning—and very, very incomplete.
The AP requested reports of incidents of database misuse from all 50 states and three dozen large municipal police departments. Over just two years, it found at least 650 cases in which an employee or police officer was fired, suspended, or otherwise disciplined for inappropriately accessing and using information from government databases. The AP acknowledges that these numbers are woefully undercounted due to a lack of reliable recordkeeping and are likely much, much higher. And how many cases of unauthorized access never get caught at all? Spread out over time, these numbers mean that every single day a government official somewhere is inappropriately looking up information about citizens in state or municipal databases.
The story describes many cases where police used databases to stalk people, often in connection with romantic entanglements. Some cases revolve around simple curiosity—like looking up information about celebrities. These are bad enough examples on their own.
There are other examples provided, though, that highlight situations where officials and officers were—in what appears to be an organized fashion—using database access to snoop on and even intimidate their critics. In one case in Minnesota, a county commissioner discovered that law enforcement and government officials had repeatedly searched databases for information about her and her family members. The searches came after she criticized county spending and the sheriff's department's programs. In Miami-Dade County, Florida, a highway trooper found herself stalked and threatened by police after she pulled an officer over for speeding in 2011, with the harassment aided by information about her drawn from the state's driver databases. (Reason previously took note of this case, and a local newspaper won a Pulitzer Prize for exposing police officers' habit of dangerous speeding, even when off duty, on local highways.) Also of note: the county commissioner's efforts to fight back were unsuccessful because she couldn't prove the searches about her and her family were impermissible.
A good chunk of the story is about the complexity of regulating the circumstances under which government officials access these databases and of overseeing how the information is used. Sadly, that means there isn't nearly a big enough discussion of what information city governments should be gathering and storing in the first place. Police, just like the federal government, have been increasingly collecting and storing data about citizens who aren't even suspected of any criminal behavior whatsoever. There has not been nearly enough of a connection drawn between government officials' capacity to threaten and intimidate citizens and how this push for more and more data helps make it happen. Heaven knows Reason has been raising[...]