Subscribe: Don Marti
http://zgp.org/~dmarti/blosxom/index.rss
Language: English


Don Marti



personal blog feed



Last Build Date: Thu, 14 Sep 2017 16:34:07 GMT

 



another 2x2 chart

Thu, 14 Sep 2017 07:00:00 GMT

What to do about different kinds of user data interchange:

Good data
  • Collected without permission: Build tools and norms to reduce the amount of reliable data that is available without permission.
  • Collected with permission: Develop and test new tools and norms that enable people to share data that they choose to share.

Bad data
  • Collected without permission: Report on and show errors in low-quality data that was collected without permission.
  • Collected with permission: Offer users incentives and tools that help them choose to share accurate data and correct errors in voluntarily shared data.

Most people who want data about other people still prefer data that's collected without permission; data shared voluntarily is something that they'll settle for. So most voluntary user data sharing efforts will need a defense side as well. Freedom-loving technologists have to help people reduce the amount of data that can be taken from them without permission before data collectors will have a reason to listen to people about sharing data.




Tracking protection defaults on trusted and untrusted sites

Wed, 13 Sep 2017 07:00:00 GMT

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

Setting tracking protection defaults for a browser is hard. Some activities that the browser might detect as third-party tracking are actually third-party services such as single sign-on—so when the browser sets too high of a level of protection it can break something that the user expects to work.

Meanwhile, new research from PageFair shows that "The very large majority (81%) of respondents said they would not consent to having their behaviour tracked by companies other than the website they are visiting." A tracking protection policy that leans too far in the other direction will also fail to meet the user's expectations.

So you have to balance two kinds of complaints.

  • "your dumbass browser broke a site that was working before"

  • "your dumbass browser let that stupid site do stupid shit"

Maybe, though, if the browser can figure out which sites the user trusts, you can keep the user happy by taking a moderate tracking protection approach on the trusted sites, and a more cautious approach on less trusted sites.

Apple Intelligent Tracking Prevention allows third-party tracking by domains that the user interacts with.

If the user has not interacted with example.com in the last 30 days, example.com website data and cookies are immediately purged and continue to be purged if new data is added. However, if the user interacts with example.com as the top domain, often referred to as a first-party domain, Intelligent Tracking Prevention considers it a signal that the user is interested in the website and temporarily adjusts its behavior (More...)

But it looks like this could give large companies an advantage—if the same domain has both a service that users will visit and third-party tracking, then the company that owns it can track users even on sites that the users don't trust. Russell Brandom: Apple's new anti-tracking system will make Google and Facebook even more powerful.

It might make more sense to set the trust level, and the browser's tracking protection defaults, based on which site the user is on. Will users want a working "Tweet® this story" button on a news site they like, and a "Log in with Google" feature on a SaaS site they use, but prefer to have third-party stuff blocked on random sites that they happen to click through to?

How should the browser calculate user trust level? Sites with bookmarks would look trusted, or sites where the user submits forms (especially something that looks like an email address). More testing is needed, and setting protection policies is still a hard problem.
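A trust heuristic along these lines might be sketched as follows. This is a toy model: the signal names, weights, and threshold are my own assumptions for illustration, not anything a browser actually ships.

```python
# Toy per-site trust score built from browsing signals.
# Signals, weights, and threshold are illustrative assumptions only.

def trust_score(site_history):
    """site_history: dict of trust signals observed for one site."""
    score = 0
    if site_history.get("bookmarked"):
        score += 2                                        # explicit user interest
    score += min(site_history.get("forms_submitted", 0), 3)  # capped contribution
    if site_history.get("email_entered"):
        score += 2                                        # strong relationship signal
    return score

def protection_mode(site_history, threshold=3):
    # Trusted sites get the moderate defaults; everything else gets cautious ones.
    return "moderate" if trust_score(site_history) >= threshold else "cautious"
```

A bookmarked site where the user has also submitted a form would land in the "moderate" bucket; a site reached by a stray click starts out "cautious."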

Bonus link: Proposed Principles for Content Blocking.




New WebExtension reveals targeted political ads: Interview with Jeff Larson

Tue, 12 Sep 2017 07:00:00 GMT

The investigative journalism organization ProPublica is teaming up with three German news sites to collect political ads on Facebook in advance of the German parliamentary election on Sept. 24. Because typical Facebook ads are shown only to finely targeted subsets of users, the best way to understand them is to have a variety of users cooperate to run a client-side research tool. ProPublica developer Jeff Larson has written a WebExtension that runs on Mozilla Firefox and Google Chrome to do just that. I asked him how the development went.

Q: Who was involved in developing your WebExtension?

A: Just me. But I can't take credit for the idea. I was at a conference in Germany a few months ago with my colleague Julia Angwin, and we were talking with people who worked at Spiegel about our work on the Machine Bias series. We all thought it would be a good idea to look at political ads on Facebook during the German election cycle, given what little we knew about what happened in the U.S. election last year.

Q: What documentation did you use, and what would you recommend that people read to get started with WebExtensions?

A: I think both Mozilla and Google's documentation sites are great. I would say that the tooling for Firefox is much better due to the web-ext tool. I'd definitely start there (Getting started with web-ext) the next time around. Basically, web-ext takes care of a great deal of the fiddly bits of writing an extension—everything from packaging to auto reloading the extension when you edit the source code. It makes the development process a lot smoother.

Q: Did you develop in one browser first and then test in the other, or test in both as you went along?

A: I started out in Chrome, because most of the users of our site use Chrome. But I started using Firefox about halfway through because of web-ext. After that, I sort of ping-ponged back and forth because I was using source maps and each browser handles those a bit differently. Mostly the extension worked pretty seamlessly across both browsers. I had to make a couple of changes, but I think it took me a few minutes to get it working in Firefox, which was a pleasant surprise.

Q: What are you running as a back end service to collect ads submitted by the WebExtension?

A: We're running a Rust server that collects the ads and uploads images to an S3 bucket. It is my first Rust project, and it has some rough edges, but I'm pretty much in love with Rust. It is pretty wonderful to know that the server won't go down because of all the built-in type and memory safety in the language. We've open sourced the project, and I could use help if anyone wants to contribute: Facebook Political Ad Collector on GitHub.

Q: Can you see that the same user got a certain set of ads, or are they all anonymized?

A: We strive to clean the ads of all identifying information. So, we only collect the id of the ad, and the targeting information that the advertiser used. For example, people 18 to 44 who live in New York.

Q: What are your next steps?

A: Well, I'm planning on publishing the ads we've received on a web site, as well as a clean dataset that researchers might be interested in. We also plan to monitor the Austrian elections, and next year is pretty big for the U.S. politically, so I've got my work cut out for me.

Q: Facebook has refused to release some "dark" political ads from the 2016 election in the USA. Will your project make "dark" ads in Germany visible?

A: We've been running for about four days, and so far we've collected 300 political ads in Germany. My hope is we'll start seeing some of the more interesting ones from fly-by-night groups. Political advertising on sites like Facebook isn't regulated in either the United States or Germany, so on some level just having a repository of these ads is a public service.

Q: Your project reveals the "dark" possibly deceptive ads in Chrome and Firefox but not on mobile platforms. Will it drive d[...]
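The scrubbing step Larson describes (keep only the ad's id and the advertiser's targeting spec, drop everything identifying) could be sketched like this; the field names are hypothetical, not the extension's real schema:

```python
# Illustrative scrub of a collected ad record before upload.
# Field names are hypothetical, not the extension's actual schema.

ALLOWED_FIELDS = {"ad_id", "targeting"}

def scrub(ad_record):
    """Keep only non-identifying fields from a collected ad."""
    return {k: v for k, v in ad_record.items() if k in ALLOWED_FIELDS}

raw = {
    "ad_id": "23842",
    "targeting": "people 18 to 44 who live in New York",
    "viewer_user_id": "someone@example.com",  # must never leave the browser
    "session_cookie": "abc123",
}
clean = scrub(raw)
```

An allow-list (rather than a deny-list of known-bad fields) fails safe: any new field Facebook adds is dropped by default.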



Some ways that bug futures markets differ from open source bounties

Mon, 11 Sep 2017 07:00:00 GMT

Question about Bugmark: what's the difference between a futures market on software bugs and an open source bounty system connected to the issue tracker? In many simple cases a bug futures market will function in a similar way, but we predict that some qualities of the futures market will make it work differently.

  • Open source bounty systems have extra transaction costs of assigning credit for a fix.

  • Open source bounty systems can incentivize contention over who can submit a complete fix, when we want to be able to incentivize partial work and meta work.

Incentivizing partial work and meta work (such as bug triage) would be prohibitively expensive to manage using bounties claimed by individuals, where each claim must be accepted or rejected. The bug futures concept addresses this with radical simplicity: the owners of each side of the contract are tracked completely separately from the reporter and assignee of a bug in the bug tracker.

And bug futures contracts can be traded in advance of expiration. Any work that you do that meaningfully changes the probability of the bug getting fixed by the contract closing date can move the price.

You might choose to buy the "fixed" side of the contract, do some work that makes the bug look more fixable, then sell at a higher price. Bugmark might make it practical to do "day trading" of small steps, such as translating a bug report originally posted in a language that the developers don't know, helping a user submit a log file, or writing a failing test.
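As a toy numerical example of that "day trading" loop (all prices invented for illustration), with the contract paying 1.00 to the "fixed" side at close:

```python
# Toy bug-futures day trade. Prices are invented for illustration;
# the contract pays 1.00 to the "fixed" side if the bug is fixed by close.

buy_price = 0.30    # market thinks a fix is unlikely
# ...translate the bug report, attach a failing test, post a log file...
sell_price = 0.55   # partial work raised the apparent fix probability

profit_per_contract = sell_price - buy_price
```

The trader never has to claim a bounty or be named assignee; the price move itself pays for the partial work.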

With the right market design, participants in a bug futures market have the incentive to talk their books, by sharing partial work and metadata.

Related: Some ways that bug futures markets differ from prediction markets, Smart futures contracts on software issues talk, and bullshit walks?







Some ways that bug futures markets differ from prediction markets

Wed, 30 Aug 2017 07:00:00 GMT

Question about Bugmark: what's the difference between a futures market on software bugs and a prediction market? We don't know how much a bug futures market will tend to act like a prediction market, but here are a few guesses about how it may turn out differently.

Prediction markets tend to have a relatively small number of tradeable questions, with a large number of market participants on each side of each question. Each individual bug future is likely to have a small number of participants, at least on the "fixed" side.

Prediction markets typically have participants who are not in a position to influence the outcome. For example, The Good Judgment Project recruited regular people to trade on worldwide events. Bug futures are designed to attract participants who have special knowledge and ability to change an outcome.

Prediction markets are designed for gathering knowledge. Bug futures are for incentivizing tasks. A well-designed bug futures market will monetize haters by turning a "bet" that a project will fail into a payment that makes it more likely to succeed. If successful in this, the market will have this feature in common with Alex Tabarrok's Dominant Assurance Contract.
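A minimal sketch of how a market can "monetize haters": the stake behind a bet that the bug stays unfixed funds the payout to whoever holds the "fixed" side at close. This is my own simplification of the idea, not Bugmark's actual contract logic.

```python
# Simplified settlement of one bug futures contract at the closing date.
# A contract is a matched pair of stakes: "fixed" + "unfixed" = 1.00.

def settle(fixed_stake, unfixed_stake, bug_was_fixed):
    """Return (payout_to_fixed_holder, payout_to_unfixed_holder)."""
    pot = fixed_stake + unfixed_stake
    return (pot, 0.0) if bug_was_fixed else (0.0, pot)

# A 0.60 "bet against the project" becomes part of a 1.00 payout to the
# developer who bought the other side and then fixed the bug.
payout = settle(0.40, 0.60, bug_was_fixed=True)
```

The pessimist's stake is what makes fixing the bug worth more than the optimist paid, which is the Dominant Assurance Contract flavor of the design.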

Prediction markets often implement conditional trading. Bug markets rely on the underlying bug tracker to maintain the dependency relationships among bugs, and trades on the market can reflect the strength of the connections among bugs as seen by the participants.




hey, kids, 2x2 chart!

Tue, 29 Aug 2017 07:00:00 GMT

What's the difference between spam and real advertising?

                    No signaling      Signaling
  Interruption      spam              advertising
  No interruption   organic social    content marketing

Advertising is a signal-for-attention bargain. People pay attention to advertising that carries some hard-to-fake information about the seller's intentions in the market.

Rory Sutherland says, "What seems undoubtedly true is that humans, like peahens, attach significance to a piece of communication in some way proportionally to the cost of generating or transmitting it."

If I get spam email, that's clearly signal-free because it costs practically nothing. If I see a magazine ad, it carries signal because I know that it cost money to place.

Today's web ads are more like spam, because they can be finely targeted enough that no significant advertiser resources stand behind the message I'm looking at. (A bot might have even written the copy.) People don't have to be experts in media buying to gauge the relative costs of different ads, and filter out the ones that are clearly micro-targeted and signal-free.




Want to lose a hacking contest or win a reputation contest?

Sun, 27 Aug 2017 07:00:00 GMT

Doc Searls: How the personal data extraction industry ends.

"Our data, and data about us, is the crude that Facebook and Google extract, refine and sell to advertisers. This by itself would not be a Bad Thing if it were done with our clearly expressed (rather than merely implied) permission, and if we had our own valves to control personal data flows with scale across all the companies we deal with, rather than countless different valves, many worthless, buried in the settings pages of the Web's personal data extraction systems, as well as in all the extractive mobile apps of the world."

Today's web advertising business is a hacking contest. Whoever can build the best system to take personal information from the user wins, whether or not the user knows about it. (And if you challenge adfraud and adtech hackers to a hacking contest, you can expect to come in third.)

As users get the tools to control who they share their information with (and they don't want to leak it to everyone), the web advertising business has to transform into a reputation contest. Whoever can build the most trustworthy place for users to choose to share their information wins.

This is why the IAB is freaking out about privacy regulations, by the way. IAB member companies are winning at hacking and failing at building reputation. (I want to do a user focus group where we show people a random IAB company's webinar, then count how many participants ask for tracking protection support afterward.) But regulations are a sideshow. In the long run regulators will support the activities that legit business needs.

So Doc has an important point. We have a big opportunity to rebuild important parts of the web advertising stack, this time based on the assumption that you only get user data if you can convince the user, or at least convince the maintainers of the user's trusted tools, that you will use the data in a way that complies with that user's norms.

One good place to check: how many of a site's readers are set up with protection tools that make them "invisible" to Google Analytics and Chartbeat? (script) And how many of the "users" who sites are making decisions for are just bots?

If you don't have good answers for those, you get dumbassery like "pivot to video," which is a polite expression for "make videos for bots, because video ad impressions are worth enough money to get the best bot developers interested." Yes, "pivot to video" is still a thing. News from the "pivot to video" department, by Lara O'Reilly, at the Wall Street Journal:

"Google is issuing refunds for ads that ran on websites with fake traffic... Google's refunds amount to only a fraction of the cost of the ads served to invalid traffic, which has left some advertising executives unsatisfied... In the recent cases Google discovered, the affected traffic involved video ads, which carry higher ad rates than typical display ads and are therefore an attractive target for fraudsters."

(Read the whole thing. If we're lucky, Bob Hoffman will blog about that story. "Some advertising executives unsatisfied"? Gosh, Bob, you think so?)

The good news here is that legit publishers, trying to transform web advertising from a hacking game into a reputation game, don't have to do a perfect job right away. Incrementally make reputation-based, user-permissioned advertising into a better and better investment, while adfraud keeps making unpermissioned tracking into a worse and worse investment. Then wait for some ambitious marketer (and marketers are always looking for a new angle to reinvent Marketing) to discover the opportunity and take credit for it.

Anyway, bonus links: Facebook Figured Out My Family Secrets, And It Won't Tell Me How; This App Tracks Political [...]



List-based and behavior-based tracking protection

Tue, 22 Aug 2017 07:00:00 GMT

In the news...

User privacy is at risk from both hackers and lawyers. Right now, lawyers are better at attacking lists, and hackers are better at modifying tracker behavior to get around protections.

The more I think about it, the more that I think it's counterproductive to try to come up with one grand unified set of protection rules or cookie policies for everybody.

Spam filters don't submit their scoring rules to ANSI—spammers would just work around them.

Search engines don't standardize and publish their algorithms, because gray hat SEOs would just use the standard to make useless word salad pages that score high.

And different people have different needs.

If you're a customer service rep at an HERBAL ENERGY SUPPLEMENTS company, you need a spam filter that can adjust for your real mail. And any user of a site that has problems with list-based tracking protection will need to have the browser adjust, and rely more on cleaning up third-party state after a session instead of blocking outright.

Does your company intranet become unusable if you fail to accept third-party tracking that comes from an internal domain that your employer acquired and still has some services running on? Browser developers can't decide up front, so the browser will need to adjust. Every change breaks someone's workflow.

That means the browser has to work to help the user pick a working set of protection methods and rules.

0. Send accurate Do Not Track

Inform sites of the user's preferences on data sharing. (This will be more important in the future because of Europe, but privacy-crazed Eurocrats will not save us from having to do our share of the work.)

1. Block connections to third-party trackers

This will need to include both list-based protection and monitoring of tracking behavior, as Privacy Badger does, because lawyers are better at attacking lists and hackers are better at evading behavioral rules.
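The behavioral half can be sketched in a few lines: flag a third party once it has been observed setting tracking state across several unrelated first-party sites. (Privacy Badger's real heuristic is more involved; the threshold of three sites here is an illustrative default.)

```python
# Sketch of a behavior-based blocker: record which first-party sites each
# third party has set tracking state on; block after a site-count threshold.
from collections import defaultdict

seen_on = defaultdict(set)   # third-party domain -> set of first-party sites

def observe(third_party, first_party):
    seen_on[third_party].add(first_party)

def should_block(third_party, threshold=3):
    return len(seen_on[third_party]) >= threshold

observe("tracker.example", "news.example")
observe("tracker.example", "shop.example")
observe("tracker.example", "blog.example")
```

A single-sign-on domain seen on only one or two sites stays unblocked, which is exactly the breakage-avoiding property lists can't give you.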

2. Limit data sent to third-party sites

Apple Safari does this, so it's likely to get easier to do cookie double keying without breaking sites.
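Double keying means third-party storage is partitioned by the first-party site it was set under, so the same tracker can't recognize a user across sites. A minimal sketch of the idea:

```python
# Double-keyed cookie jar: third-party state is partitioned by the
# first-party site it was created under, so it cannot link visits.

jar = {}

def set_cookie(first_party, third_party, name, value):
    jar[(first_party, third_party, name)] = value

def get_cookie(first_party, third_party, name):
    return jar.get((first_party, third_party, name))

set_cookie("news.example", "tracker.example", "uid", "abc123")
# The same tracker embedded on a different first party sees nothing:
leaked = get_cookie("shop.example", "tracker.example", "uid")
```

The tracker still works within one site (so embedded widgets keep functioning), but its identifier is useless for cross-site profiles.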

3. Scramble or delete unsafe data

If a tracking cookie or other identifier does get through, delete or scramble it on leaving the site or later, as the Self-Destructing Cookies extension does. This could be a good backup for when the browser "learns" that a user needs some third-party state to do something like a shopping cart or comment form, but then doesn't want the info to be used for "ads that follow me around" later.
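A toy version of that cleanup, assuming the browser keeps a per-user list of sites whose third-party state should survive the session:

```python
# Toy "self-destructing cookies": when a session ends, purge third-party
# identifiers unless the user chose to keep that site's state, optionally
# scrambling instead of deleting to break linkability.
import secrets

def end_session(cookies, keep_list, scramble=False):
    """cookies: dict of third-party domain -> identifier."""
    cleaned = {}
    for domain, ident in cookies.items():
        if domain in keep_list:
            cleaned[domain] = ident                  # e.g. a shopping cart
        elif scramble:
            cleaned[domain] = secrets.token_hex(8)   # identifier becomes junk
        # else: dropped entirely
    return cleaned

after = end_session({"cart.example": "c1", "ads.example": "u42"},
                    keep_list={"cart.example"})
```

The shopping cart survives; the ad tracker's identifier does not, so "ads that follow me around" lose their thread after the session.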




How is everyone's tracking protection working? An update

Sun, 20 Aug 2017 07:00:00 GMT

When I set up this blog, I put in a script to check how many of the users here are protected from third-party tracking.

The best answer for now is 31%. Of the clients that ran JavaScript on this site over the past two weeks, 31% did not also run JavaScript from the Aloodo "fake third-party tracker".

The script is here: /code/check3p
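The measurement itself is a simple ratio: of the clients that ran the site's first-party script, what share never loaded the Aloodo fake-tracker script? A sketch of the arithmetic, with invented client counts:

```python
# Measuring tracking protection: clients that run the site's own JS but
# never load the fake third-party tracker count as protected.
# The client sets below are invented for illustration.

first_party_clients = {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j"}
third_party_clients = {"a", "b", "c", "d", "e", "f", "g"}  # also ran the tracker

protected = first_party_clients - third_party_clients
protection_rate = len(protected) / len(first_party_clients)
```

With these invented numbers the rate is 30%, close to the 31% observed on this site.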

This is not as good as I had hoped (turn on your tracking protection, people! Don't get tricked by ad blockers that leave you unprotected by default!) but it's a start.

The Information Trust Exchange is doing research on the problem of third-party tracking at news sites. News industry consultant Greg Swanson:

"All of the conversations on the newspaper side have been focused on how we can join the advertising technology ecosystem. For example, how can a daily newspaper site in Bismarck, North Dakota deliver targeted advertising to a higher-value soccer mom? None of the newspapers have considered the fact that when they join that ecosystem they are enabling spam sites and fraudulent sites – enabling those sites to get a higher CPM rate by parasitically riding on the data collected from the higher-value newspaper sites."

More info: Aloodo for web publishers.




SEO hats and the browser of the future

Sat, 19 Aug 2017 07:00:00 GMT

The field of Search Engine Optimization has white hat SEO, black hat SEO, and gray hat SEO.

White hat SEO helps a user get a better search result, and complies with search engine policies. Examples include accurately using the same words that users search on, and getting honest inbound links.

Black hat SEO is clearly against search engine policies. Link farming, keyword stuffing, cloaking, and a zillion other schemes. If they see you doing it, your site gets penalized in search results.

Gray hat SEO is everything that doesn't help the user get a better search result, but technically doesn't violate a search engine policy. Most SEO experts advise you not to put a lot of time and effort into gray hat, because eventually the search engines will notice your gray hat scheme and start penalizing sites that do it. Gray hat is just stuff that's going to be black hat when the search engines figure it out.

Adtech has gray hat, too. Rocket Fuel Awarded Two Patents to Help Leverage First-Party Cookies to More Meaningfully Reach Consumers. This scheme seems to be intended to get around existing third-party cookie protection, which is turned on by default in Apple Safari and available in other browsers. But how long will it work? Maybe the browser of the future won't run a "kangaroo cookie court" but will ship with a built-in "kangaroo law school," so that each copy of the browser will develop its own local "courts" and its own local "case law" based on the user's choices. It will become harder to predict how long any single gray hat adtech scheme will continue working.

In the big picture: in order to sell advertising you need to give the advertiser some credible information on who the audience is. Since the "browser wars" of the 1990s, most browsers have been bad at protecting personal information about the user, so web advertising has become a game where a whole bunch of companies compete to covertly capture as much user info as they can.

Today, browsers are getting better at implementing people's preferences about sharing their information. The result is a change in the rules of the game. Investment in taking people's personal info is becoming less rewarding, as browsers compete to reflect people's preferences. (That patent will be irrelevant thanks to browser updates long before it expires.)

Adfraud is the other half of this story. Fraudbots are getting smarter at creating human-looking ad impressions just as humans are getting better protected. If you think that a web publisher's response to harder-to-detect bots viewing more high-CPM video ads should be "pivot to video!!1!!", I don't know if I can help you.

And investments in building sites and brands that are trustworthy enough for people to want to share their information will tend to become more rewarding. (This shift naturally leads to complaints from people who are used to winning the old game, but will probably be better for customers who want to use trustworthy brands and for people who want to earn money by making ad-supported news and cultural works.)

Bonus links:

  • One of the big advertising groups is partnering with Digital Content Next’s trust-focused ad marketplace

  • Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election

  • ANA Endorses TrustX, Encourages Members To Use Programmatic Media-Buying Stamp Of Approval

  • Call for Papers: Policy and Internet Special Issue on Reframing ‘Fake News’: Architectures, Influence, and Automation

  • Time to sink the Admiral (or, why using the DMCA to block adblockers is a bad move)

  • I'm a woman in computer science. Let me ladysplain the Google memo to you.

  • Easylist block list removes entry after DMCA takedown noti[...]



cdparanoia returned code 73

Fri, 18 Aug 2017 07:00:00 GMT

Welcome, people googling for the above error message.

I saw the error

cdparanoia returned code 73

and it turns out I was trying to run two abcde processes in two terminal windows. Kill the second one and the error goes away.

Hope your problem was as simple as that.




ePrivacy and marketing budgets

Wed, 16 Aug 2017 07:00:00 GMT

(Update 18 Aug 2017: this post is also available at Digital Content Next.)

As far as I know, there are three ways to match an ad to a user.

  • User intent: Show an ad based on what the user is searching for. Old-school version: the Yellow Pages.

  • Context: Show an ad based on where the user is, or what the user is interested in. Old-school versions: highway billboards (geographic context), specialized magazines (interest context).

  • User identity: Show an ad based on who the user is. Old-school version: direct mail.

Most online advertising is matched to the user based on a mix of all three. And different players have different pieces of the action for each one.

For user intent, search engines are the gatekeepers. The other winners from matching ads to users by intent are browsers and mobile platforms, who get paid to set their default search engine.

Advertising based on context rewards the owners of reputations for producing high-quality news, information, and cultural works.

Finally, user identity now has a whole Lumascape of vendors in a variety of categories, all offering to help identify users in some way. (The Lumascape is rapidly consolidating, but that's another story.)

Few of the web ads that you might see today are matched to you purely based on one of the three methods. Investments in all three tend to shift as the available technology, and the prevailing norms and laws, change.

Enough background. Randall Rothenberg of the IAB is concerned about the proposed ePrivacy Regulation in Europe, and writes,

"The basic functionality of the internet, which is built on data exchanges between a user’s computer and publishers’ servers, can no longer be used for the delivery of advertising unless the consumer agrees to receive the ads – but the publisher must deliver content to that consumer regardless."

This doesn't look accurate. I don't know of any proposal that would require publishers to serve users who block ads entirely.

What Rothenberg is really complaining about is that the proposed regulation would limit the ability of sites and ad intermediaries to match ads to users based on user identity, forcing them to rely on user intent and context. If users choose to block ads delivered from ad servers that use their personal data without permission, then sites won't be able to refuse to serve them the content, but will be able to run ads that are relevant to the content of the site. As far as I can tell, sites would still be able to pop a "turn off your ad blocker" message in place of a news story if the user was blocking an ad placed purely by context, magazine style.

Privacy regulation is not so much an attack on the basic functionality of the Internet as it is a shift that lowers the return on investment on knowing who the user is, and drives up the return on investment on providing search results and content. That's a big change in who gets paid: more money for search and for trustworthy content brands, and less for adtech intermediaries that depend on user tracking.

Advertising: a fair deal for the user? That depends. Search advertising is clearly the result of a user choice. The user chooses to view ads that come with search results, as part of choosing to do a search. As long as the ads are marked as ads, it's pretty obvious what is happening.

The same goes for ads placed in context. The advertiser trades economic signal, in the form of costly support of an ad-supported resource, for the user's attention. This is common in magazine and broadcast advertising, and when you use a site with one of the (rare) pure in-context ad platforms such as Project Wonderful, it works about the same way.

The place where things start to get problematic is ads ba[...]



Moral values in society

Tue, 08 Aug 2017 07:00:00 GMT

Moral values in society are collapsing? Really?

Elizabeth Stoker Bruenig writes, "The baseline moral values of poor people do not, in fact, differ that much from those of the rich." (Read the whole thing.) Unfortunately, if you read the fine print, it's more complicated than that.

Any market economy depends on establishing trust between people who trade with each other. Tim Harford writes,

"Being able to trust people might seem like a pleasant luxury, but economists are starting to believe that it’s rather more important than that. Trust is about more than whether you can leave your house unlocked; it is responsible for the difference between the richest countries and the poorest."

Somehow, over thousands of years, business people have built up a set of norms about high-status and low-status business activities. Craftsmanship, consistent supply of high-quality staple goods, and construction of noteworthy projects are high-status activities. Usury and deception are examples of low-status activities. (You make your money in quarters, gambling with retired people? You lend people $100 until Friday at a 300% interest rate? No club invitation for you.)

Somehow, though, that is now changing in the USA. Those who earn money through deception now have seats at the same table as legitimate business. Maybe it started with the shift into "consumer credit" by respectable banks. But why were high-status bankers willing to play loan shark to begin with? Something had to have been building, culturally. (It started too early to blame the Baby Boomers.)

We tend to blame information technology companies for complex, one-sided Terms of Service and EULAs, but it's not so much a tech trend as it is a general business culture trend. It shows up in tech fast, because rapid technology change provides cover and concealment for simultaneous changes in business terms. US business was rapidly losing its connection to basic norms when it was still moving at the speed of FedEx and fax. (You can't say, all of a sudden, "car crashes in existing fast-food drive-thrus are subject to arbitration in Unfreedonia," but you can stick that kind of term into a new service's ToS.) There's some kind of relativistic effect going on. Tech bros just seem like bigger douchebags because they're moving faster.

Regulation isn't the answer. We have a system in which business people can hire lobbyists to buy the laws and regulations we want. The question is whether we're going to use our regulatory capture powers in a shortsighted, society-eroding hustler way, or in a conservative way. Economic conservatism means not just limiting centralized state control of capital, but preserving the balance among all the long-standing stewards of capital, including households, municipalities, and religious and educational institutions. Economic conservatism and radical free-marketism are fundamentally different.

People blame trashy media for the erosion of norms among the poor, so let's borrow that explanation for the erosion of norms among the rich as well. Maybe our problem with business norms results from the globalization and sensationalism of business media. Joe CEO isn't just the most important corporate leader of Mt. Rose, MN, any more—on a global scale he's just another broke-ass hustler. [...]



Pragmatists for copyleft, or, corporate hive minds don't accept software licenses

Sun, 06 Aug 2017 07:00:00 GMT

One of the common oversimplifications in discussing open-source software licenses is that copyleft licenses are "idealistic" while non-copyleft licenses are "pragmatic." But that's not all there is to it.

The problem is that most people redistributing licensed code are doing so in an organizational context. And no human organization is a hive mind where those who participate within it subordinate their goals to those of the collective. Human organizations are full of people with their own motivations.

Instead of treating the downstream developer's employer as a hive mind, it can be more productive to assume good faith on the part of the individual who intends to contribute to the software, and think about the license from the point of view of a real person.

Releasing source for a derivative work costs time and money. The well-intentioned "downstream" contributor wants his or her organization to make those investments, but he or she has to make a case for them. The presence of copyleft helps steer the decision. Jane Hacker at an organization planning to release a derivative work can say, matter-of-factly, "we need to comply with the upstream license" if copyleft is involved. The organization is then more likely to do the right thing. There are always violations, but the license is a nudge in the right direction.

(The extreme case is university licensing offices. University-owned software patents can exclude a graduate student from his or her own project when the student leaves the university, unless he or she had the foresight to build it as a derivative work of something under copyleft.)

Copyleft isn't a magic commons-building tool, and it isn't right for every situation. But it can be enough to push an organization over the line. (One place where I worked had to do a source release for one dependency licensed under GPLv2, and it turned out to be easiest to just build one big source code release with all the dependencies in it, and offer that.)




More random links

Sun, 06 Aug 2017 07:00:00 GMT

Not the Google story everyone is talking about, but related: Google Is Matching Your Offline Buying With Its Online Ads, But It Isn’t Sharing How. (If a company becomes known for doing creepy shit, it will get job applications from creepy people, and at a large enough company some of them will get hired. Related: The Al Capone theory of sexual harassment)

Least surprising news story ever: The Campaign Against Facebook And Google's Ad "Duopoly" Is Going Nowhere. Independent online publishers can't beat the big surveillance marketing companies at surveillance marketing? How about they try to beat Amazon and Microsoft at cloud services, or Apple and Lenovo at laptop computers? There are possible winning strategies for web publishers, but doing the same as the incumbents with less money and less data is not one of them.

Meanwhile, from an investor point of view: It’s the Biggest Scandal in Tech (and no one’s talking about it). Missing the best investment advice: get out of any B-list adtech company that is at risk of getting forced into a low-value acquisition by a sustained fraud story. Or short it and research the fraud story yourself.

Did somebody at The Atlantic get a loud phone notification during a classical music concert or something? Your Smartphone Reduces Your Brainpower, Even If It's Just Sitting There and Have Smartphones Destroyed A Generation?, by Jean M. Twenge, The Atlantic

Good news: Math journal editors resign to start rival open-access journal

Apple’s Upcoming Safari Changes Will Shake Up Ad Tech: Not surprisingly, Facebook and Amazon are the big winners in this change. Most of their users come every day or at least every week. And even the mobile users click on links often, which, on Facebook, takes them to a browser. These companies will also be able to buy ad inventory on Safari at lower prices because many of the high-dollar bidders will go away. A good start by Apple, but other browsers can do better. (Every click on a Facebook ad from a local business is $0.65 of marketing money that's not going to local news, Little League sponsorships, and other legit places.)

Still on the upward slope of the Peak Advertising curve: Facebook 'dark ads' can swing political opinions, research shows

You’re more likely to hear from tech employers if you have one of these 10 things on your resume (and only 2 of them are proprietary. These kids today don't know how good they have it.)

The Pac-Man Rule at Conferences

How “Demo-or-Die” Helped My Career [...]



Hey kids, favicon!

Sat, 05 Aug 2017 07:00:00 GMT

Finally fixed those 404s from browsers looking for favicon.ico on this blog.

  1. Did a Google image search for images where "reuse with modification" is allowed.

  2. Found this high-quality lab mouse SVG image.

  3. Opened it in GNU Image Manipulation Program, posterized, cropped to a square. Kept the transparent background.

  4. Just went to realfavicongenerator.net and did what it says, and added the resulting images and markup to the site.

That's about it. Now there's a little mouse in the browser tab (and it should do the right thing with the icons if someone pins it to their home screen on mobile.)
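The markup that step 4's generator emits boils down to a few link tags. Here's a minimal hand-rolled sketch that prints a typical set; the file names and sizes below are the common convention, not necessarily realfavicongenerator's exact output, so adjust them to match whatever images you generated:

```shell
#!/bin/sh
# Print minimal favicon markup for a site's <head>.
# File names are conventional placeholders; match them to your real images.
favicon_markup() {
    cat <<'EOF'
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
EOF
}

favicon_markup
```

Paste the output into the page template's head; generators typically add a few more entries for specific platforms on top of these.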




Why surveillance marketers don't worry about GDPR (but privacy nerds should)

Tue, 01 Aug 2017 07:00:00 GMT

A lot of privacy people these days sound like a little kid arguing with a sibling. You're going to be in big trouble when Dad gets home!

Dad, here, is the European Union, who's going to put the General Data Protection Regulation foot down, and then, oh, boy, those naughty surveillance marketers are going to catch it, and wish that they had been listening to us about privacy all along.

Right?

But Internet politics never works like that. Sure, European politicians don't want to hand over power to the right-wing factions who are better at surveillance marketing than they are. And foreign agents use Facebook (and other US-based companies) to attack legit political systems. But that stuff is not going to be enough to save GDPR.

The problem is that perfectly normal businesses are using GDPR-violating sneaky tracking pixels and other surveillance marketing as part of their daily marketing routine.

As the GDPR deadline approaches, surveillance marketers in Europe are going to sigh and painstakingly explain to European politicians that of course this GDPR thing isn't going to work. "You see, politicians, it's an example of political overreach that completely conflicts with technical reality." European surveillance marketers will use the same kind of language about GDPR that the freedom-loving side used when we talked about the proposed CBDTPA. It's just going to Break the Internet! People will lose their jobs!

The result is predictable. GDPR will be delayed, festooned with exceptions, or both, and the hoped-for top-down solution to privacy problems will not come. There's no shortcut. We'll only get a replacement for surveillance marketing when we build the tools, the networks, the business processes, the customer/voter norms, and then the political power.




Extracting just the audio from big video files

Sat, 29 Jul 2017 07:00:00 GMT

Update 24 Aug 2017: How to get the big video file from an Air Mozilla page.

  1. Sign in if needed and go to the page with the video on it.

  2. Control-I to open the page info window.

  3. Open the "Media" tab in the page info window, and find the item with type "Video".

  4. Click "Save As" to save the video.

Got a big video, and want a copy of just the audio for listening on a device with limited storage? Use Soundconverter.

soundconverter -b -m mp3 -s .mp3 long-video.webm

(MP3 patents are expired now, hooray! I'm just using MP3 here because if I get a rental car that lets me plug in a USB stick for listening, the MP3 format is most likely to be supported.)

Soundconverter has a GUI but you can use -b for batch mode from the shell. soundconverter --help for help. You do need to set both the MIME type, with -m, and the file suffix, with -s.
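Those flags make unattended batch jobs easy. Here's a sketch (my own wrapper, not part of soundconverter) that prints one soundconverter command per video file in a directory, so you can eyeball the commands before piping them to sh; it assumes soundconverter is installed when you actually execute the output:

```shell
#!/bin/sh
# Print a soundconverter batch command for each video in a directory.
# Review the output, then pipe it to sh to run:
#   to_mp3_commands ~/Videos | sh
to_mp3_commands() {
    dir=$1
    for f in "$dir"/*.webm "$dir"/*.mp4; do
        [ -e "$f" ] || continue   # skip unmatched glob patterns
        # NOTE: plain echo for readability; paths with spaces would need quoting.
        echo "soundconverter -b -m mp3 -s .mp3 $f"
    done
}
```

Usage: `to_mp3_commands ~/Videos` prints the commands; append `| sh` once they look right.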




Online ads don't matter to P&G

Fri, 28 Jul 2017 07:00:00 GMT

In the news: P&G Cuts More Than $100 Million in ‘Largely Ineffective’ Digital Ads

Not surprising.

Procter & Gamble makes products that help you comply with widely held cleanliness norms.

Digital ads are micro-targeted to you as an individual.

That's the worst possible brand/medium fit. If you don't know that the people who expect you to keep your house or body clean are going to be aware of the same product, how do you know whether to buy it?

Bonus link from Bob Hoffman last year: Will The P&G Story Bring Down Ad Tech? Please?