Subscribe: Planet Mozilla
http://planet.mozilla.org/rss20.xml

Planet Mozilla - http://planet.mozilla.org/

Seif Lotfy: Playing with .NET (dotnet) and IronFunctions

Mon, 05 Dec 2016 22:15:39 +0000

Again, if you missed it, IronFunctions is an open-source, Lambda-compatible, on-premise, language-agnostic, serverless compute service. While AWS Lambda only supports Java, Python and Node, IronFunctions allows you to use any language you desire by running your code in containers. With Microsoft being one of the biggest players in open source and .NET going cross-platform, it was only right to add support for it in IronFunctions's fn tool.

TL;DR: The following demos a .NET function that takes in a URL for an image and generates an MD5 checksum hash for it.

Using dotnet with functions

Make sure you have downloaded and installed dotnet. Now create an empty dotnet project in the directory of your function:

dotnet new

By default dotnet creates a Program.cs file with a main method. To make it work with IronFunctions's fn tool, please rename it to func.cs:

mv Program.cs func.cs

Now change the code as you desire to do whatever magic you need it to do. In our case the code takes in a URL for an image and generates an MD5 checksum hash for it. The code is the following:

using System;
using System.Text;
using System.Security.Cryptography;
using System.IO;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // if nothing is being piped in, then exit
            if (!IsPipedInput())
                return;

            var input = Console.In.ReadToEnd();
            var stream = DownloadRemoteImageFile(input);
            var hash = CreateChecksum(stream);
            Console.WriteLine(hash);
        }

        private static bool IsPipedInput()
        {
            try
            {
                bool isKey = Console.KeyAvailable;
                return false;
            }
            catch
            {
                return true;
            }
        }

        private static byte[] DownloadRemoteImageFile(string uri)
        {
            var request = System.Net.WebRequest.CreateHttp(uri);
            var response = request.GetResponseAsync().Result;
            var stream = response.GetResponseStream();
            using (MemoryStream ms = new MemoryStream())
            {
                stream.CopyTo(ms);
                return ms.ToArray();
            }
        }

        private static string CreateChecksum(byte[] stream)
        {
            using (var md5 = MD5.Create())
            {
                var hash = md5.ComputeHash(stream);
                var sBuilder = new StringBuilder();

                // Loop through each byte of the hashed data
                // and format each one as a hexadecimal string.
                for (int i = 0; i < hash.Length; i++)
                {
                    sBuilder.Append(hash[i].ToString("x2"));
                }

                // Return the hexadecimal string.
                return sBuilder.ToString();
            }
        }
    }
}

Note: IO with an IronFunctions function is done via stdin and stdout, which is exactly what this code does.

Using with IronFunctions

Let's first init our code to become IronFunctions deployable:

fn init USERNAME/FUNCNAME

Since IronFunctions relies on Docker to work (we will add rkt support soon), the USERNAME is required to publish to Docker Hub. The FUNCNAME is the identifier of the function. In our case we will use dotnethash as the FUNCNAME, so the command will look like:

fn init seiflotfy/dotnethash

When running the command it will create the func.yaml file required by IronFunctions, which can be built by running:

Push to Docker

fn push

This will create a docker image and push the image to Docker Hub.

Publishing to IronFunctions

To publish to IronFunctions, run:

fn routes create APPNAME

where APPNAME is (no surprise here) the name of the app, which can encompass many functions. This creates a full path in the form of http://HOST:PORT/r/APPNAME/FUNCNAME. In my case, I will call the app myapp:

fn routes create myapp

Calling

Now you can fn call [...]
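Not from the post, but as a minimal illustration of the stdin/stdout contract described above, here is a rough Python 3 sketch of the same function. The file name and library choices (hashlib, urllib) are assumptions for illustration, not part of IronFunctions or of the original C# code:

# md5_url.py -- hypothetical Python 3 sketch of the same stdin/stdout contract
# (illustrative only; the post's actual function is the C# code above).
import hashlib
import sys
from urllib.request import urlopen

def main():
    url = sys.stdin.read().strip()  # the function's input arrives on stdin
    if not url:
        return  # nothing piped in, nothing to do
    data = urlopen(url).read()  # download the image
    print(hashlib.md5(data).hexdigest())  # write the checksum to stdout

if __name__ == "__main__":
    main()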



The Mozilla Blog: Why I’m joining Mozilla’s Board, by Helen Turvey

Mon, 05 Dec 2016 18:07:51 +0000

Today, I’m very honored to join Mozilla’s Board.

Firefox is how I first got in contact with Mozilla. The browser was my first interaction with free and open source software. I downloaded it in 2004, not with any principled stance in mind, but because it was better, faster, more secure and allowed me to determine how I used it, with add-ons and so forth.


Helen Turvey joins the Mozilla Foundation Board

My love of open began there, as I saw the direct implications for philanthropy and for diversity: moving from a scarcity model to an abundance model in terms of the information and data we need to make decisions in our lives. The web as a public resource is precious, and we need to fight to keep it an open platform: decentralised, interoperable, secure and accessible to everyone.

Mozilla is community driven, and it is my belief that it makes a more robust organisation, one that bends and evolves instead of crumbles when facing the challenges set before it. Whilst we need to keep working towards a healthy internet, we also need to learn to behave in a responsible manner. Bringing a culture of creating, not just consuming, questioning, not just believing, respecting and learning, to the citizens of the web remains front and centre.

I am passionate about people, and creating spaces for them to evolve, grow and lead in the roles they feel driven to effect change in. I am interested in all aspects of Mozilla’s work, but helping to think through how Mozilla can strategically and tactically support leaders, what value we can bring to the community who is working to protect and evolve the web is where I will focus in my new role as a Mozilla Foundation Board member.

For the last decade I have run the Shuttleworth Foundation, a philanthropic organisation that looks to drive change through open models. The FOSS movement has created widely used software and million dollar businesses, using collaborative development approaches and open licences. This model is well established for software; it is not the case for education, philanthropy, hardware or social development.

We try to understand whether, and how, applying the ethos, processes and licences of the free and open source software world to areas outside of software can add value. Can openness help provide key building blocks for further innovation? Can it encourage more collaboration, or help good ideas spread faster? It is by asking these questions that I have learnt about effectiveness and change and hope to bring that along to the Mozilla Foundation Board.




The Mozilla Blog: Helen Turvey Joins the Mozilla Foundation Board of Directors

Mon, 05 Dec 2016 18:04:25 +0000

Today, we’re welcoming Helen Turvey as a new member of the Mozilla Foundation Board of Directors. Helen is the CEO of the Shuttleworth Foundation. Her focus on philanthropy and openness throughout her career makes her a great addition to our Board.

Throughout 2016, we have been focused on board development for both the Mozilla Foundation and the Mozilla Corporation boards of directors. Our recruiting efforts for board members have been geared towards building a diverse group of people who embody the values and mission that bring Mozilla to life. After extensive conversations, it is clear that Helen brings the experience, expertise and approach that we seek for the Mozilla Foundation Board.

Helen has spent the past two decades working to make philanthropy better, over half of that time working with the Shuttleworth Foundation, an organization that provides funding for people engaged in social change and helps them have a sustained impact. During her time with the Shuttleworth Foundation, Helen has driven the evolution from traditional funder to the current co-investment Fellowship model.

Helen was educated in Europe, South America and the Middle East and has 15 years of experience working with international NGOs and agencies. She is driven by the belief that openness has benefits beyond the obvious: that it offers huge value to education, economies and communities in both the developed and developing worlds.

Helen’s contribution to Mozilla has a long history: Helen chaired the digital literacy alliance that we ran in the UK in 2013 and 2014; she’s played a key role in re-imagining MozFest; and she’s been an active advisor to the Mozilla Foundation executive team during the development of the Mozilla Foundation ‘Fuel the Movement’ 3-year plan.

Please join me in welcoming Helen Turvey to the Mozilla Foundation Board of Directors.

Mitchell

You can read Helen’s message about why she’s joining Mozilla here.

Background:

Twitter: @helenturvey

High-res photo




Dustin J. Mitchell: Connecting Bugzilla to TaskWarrior

Sun, 04 Dec 2016 11:22:00 +0000

I’ve mentioned before that I use TaskWarrior to organize my life. Mostly for work, but for personal stuff too (buy this, fix that thing around the house, etc.). At Mozilla, at least in the circles I run in, the central work queue is Bugzilla. I have bugs assigned to me, suggesting I should be working on them. And I have reviews or “NEEDINFO” requests that I should respond to. Ideally, instead of serving two masters, I could just find all of these tasks represented in TaskWarrior. Fortunately, there is an integration called BugWarrior that can do just this! It can be a little tricky to set up, though. So in hopes of helping the next person, here’s my configuration:

[general]
targets = bugzilla_mozilla, bugzilla_mozilla_respond
annotation_links = True
log.level = WARNING
legacy_matching = False

[bugzilla_mozilla]
service = bugzilla
bugzilla.base_uri = bugzilla.mozilla.org
bugzilla.ignore_cc = True
# assigned
bugzilla.query_url = https://bugzilla.mozilla.org/query.cgi?list_id=13320987&resolution=---&emailtype1=exact&query_format=advanced&emailassigned_to1=1&email1=dustin%40mozilla.com&product=Taskcluster
add_tags = bugzilla
project_template = moz
description_template = http://bugzil.la/
bugzilla.username = USERNAME
bugzilla.password = PASSWORD

[bugzilla_mozilla_respond]
service = bugzilla
bugzilla.base_uri = bugzilla.mozilla.org
bugzilla.ignore_cc = True
# ni?, f?, r?, not assigned
bugzilla.query_url = https://bugzilla.mozilla.org/query.cgi?j_top=OR&list_id=13320900&emailtype1=notequals&emailassigned_to1=1&o4=equals&email1=dustin%40mozilla.com&v4=dustin%40mozilla.com&o7=equals&v6=review%3F&f8=flagtypes.name&j5=OR&o6=equals&v7=needinfo%3F&f4=requestees.login_name&query_format=advanced&f3=OP&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&f5=OP&v8=feedback%3F&f6=flagtypes.name&f7=flagtypes.name&o8=equals
add_tags = bugzilla, respond
project_template = moz
description_template = http://bugzil.la/
bugzilla.username = USERNAME
bugzilla.password = PASSWORD

Out of the box, this tries to do some default things, but they are not very fine-grained. The bugzilla.query_url option overrides those default things (along with bugzilla.ignore_cc) to just sync the bugs matching the query.

Sadly, this does, indeed, require me to include my Bugzilla password in the configuration file. API token support would be nice, but it’s not there yet – and anyway, that token allows everything the password does, so not a great benefit.

The query URLs are easy to build if you follow this one simple trick: use the Bugzilla search form to create the query you want. You will end up with a URL containing buglist.cgi. Change that to query.cgi and put the whole URL in BugWarrior’s bugzilla.query_url parameter.

I have two stanzas so that I can assign the respond tag to bugs for which I am being asked for review or needinfo. When I first set this up, I got a lot of errors about duplicate tasks from BugWarrior, because there were bugs matching both stanzas. Write your queries carefully so that no two stanzas will match the same bug. In this case, I’ve excluded bugs assigned to me from the second stanza – why would I be reviewing my own bug, anyway?

I have a nice little moz report that I use in TaskWarrior. Its output looks like this:

 ID Pri  Urg Due        Description
 98 M   7.09 2016-12-04 add a docs page or blog post
 58 H   18.2            http://bugzil.la/1309716 Create a framework for displaying team dashboards
 96 H   7.95            http://bugzil.la/1252948 cron.yml for periodic in-tree tasks
 91 M   6.87            blog about bugwarrior config
111 M   6.71            guide to microservices, to help folks find the services they need to read th
 59 M   6.08            update label-matching in taskcluster/taskgraph/transforms/signing.py to use
 78 M   6.02            http://bugzil.la/1316877 Allow `t[...]
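A small aside (not from the post): the buglist.cgi to query.cgi switch is just a one-token substitution in the URL, so if you want to script it, a hypothetical helper could be as small as this:

# to_query_url.py -- hypothetical helper for the URL trick described above.
def to_query_url(buglist_url):
    """Turn a Bugzilla search-results URL (buglist.cgi) into the query.cgi
    form that BugWarrior's bugzilla.query_url option expects."""
    if "buglist.cgi" not in buglist_url:
        raise ValueError("expected a buglist.cgi URL from the Bugzilla search form")
    return buglist_url.replace("buglist.cgi", "query.cgi", 1)

print(to_query_url("https://bugzilla.mozilla.org/buglist.cgi?product=Taskcluster"))
# prints: https://bugzilla.mozilla.org/query.cgi?product=Taskcluster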



Cameron Kaiser: 45.6.0b1 available, plus sampling processes for fun and profit

Sat, 03 Dec 2016 22:57:00 +0000

Test builds for TenFourFox 45.6.0 are available (downloads, hashes, release notes). The release notes indicate the definitive crash fix in Mozilla bug 1321357 (i.e., the definitive fix for the issue mitigated in 45.5.1) is in this build; it is not, but it will be in the final release candidate. 45.6.0 includes the removal of HiDPI support, which also allowed some graphical optimizations that the iMac G4 particularly improved with, the expansion of the JavaScript JIT non-volatile general purpose register file, an image-heavy scrolling optimization too late for the 45ESR cut that I pulled down, the removal of telemetry from user-facing chrome JS and various minor fixes to the file requester code. An additional performance improvement will be landed in 45ESR by Mozilla as a needed prerequisite for another fix; that will also appear in the final release. Look for the release candidate next week sometime, with release to the public late December 12 as usual, but for now, please test the new improvements so far.

There is now apparently a potential workaround for those of you still having trouble getting the default search engine to stick. I still don't have a good theory for what's going on, however, so if you want to try the workaround please read my information request and post the requested information about your profile before and after to see if the suggested workaround affects that.

I will be in Australia for Christmas and New Year's visiting my wife's family, so additional development is likely to slow over the holidays. Higher priority items coming up will be implementing user agent support in the TenFourFox prefpane, adding some additional HTML5 features and possibly excising telemetry from garbage and cycle collection, but probably for 45.8 instead of 45.7. I'm also looking at adding some PowerPC-specialized code sections to the platform-independent Ion code generator to see if I can crank up JavaScript performance some more, and possibly some additional work to the AltiVec VP9 codec for VMX-accelerated intraframe prediction. I'm also considering adding AltiVec support to the Theora (VP3) decoder; even though its much lighter processing requirements yield adequate performance on most supported systems, it could be a way to get higher resolution video workable on lower-spec G4s.

One of the problems with our use of a substantially later toolchain is that (in particular) debugging symbols from later compilers are often gibberish to older profiling and analysis tools. This is why, for example, we have a customized gdb, or debugging at even a basic level wouldn't be possible. If you're really a masochist, go ahead and compile TenFourFox with the debug profile and then try to use a tool like sample or vmmap, or even Shark, to analyze it. If you're lucky, the tool will just freeze. If you're unlucky, your entire computer will freeze or go haywire. I can do performance analysis on a stripped release build, but this yields sample backtraces which are too general to be of any use. We need some way of getting samples off a debug build but not converting the addresses in the backtrace to function names until we can transfer the samples to our own tools that do understand these later debugging symbols.

Apple's open source policy is problematic -- they'll open source the stuff they have to, and you can get at some components like the kernel this way, but many deep dark corners are not documented, and one of those is how tools like /usr/bin/sample and Shark get backtraces from other processes. I suspect this is so that they can keep the interfaces unstable and avoid abetting the development of applications that depend on any one particular implementation. But no one said I couldn't disassemble the damn thing. So let's go. (NB: the below analysis is based on Tiger 10.4.11. It is possible, and even likely, the interface changed in Leopard 10.5.) With Depeche Mode blaring on the G5, because Dave Gahan is good for debugging, let's look[...]



Christian Heilmann: Taking a look behind the scenes before publicly dismissing something

Sat, 03 Dec 2016 12:53:33 +0000

Lately I started a new thing: watching “behind the scenes” features of movies I didn’t like. At first this happened by chance (YouTube autoplay, to be precise), but now I do it deliberately and it is fascinating. Van Helsing to me bordered on the unwatchable, but as you can see there are a lot of reasons for that.

When doing that, one thing becomes clear: even if you don’t like something — *it was done by people*. People who had fun doing it. People who put a lot of work into it. People who — for a short period of time at least — thought they were part of something great. That the end product is flawed or lamentable might not even be their fault. Many a good movie was ruined in the cutting room or hindered by censorship. Hitchcock’s masterpiece Psycho almost didn’t make it to the screen because you see the flushing of a toilet. Other movies are watered down to get a rating that is more suitable for those who spend the most in cinemas: teenagers. Sometimes it is about keeping the running time of the movie to one that allows for just the right amount of ads to be shown when aired on television.

Take for example Halle Berry as Storm in X-Men. Her “What happens to a toad when it gets struck by lightning? The same thing that happens to everything else.” in her battle with Toad is generally seen as one of the cheesiest and most pointless lines. This was a problem with cutting: originally it was a comeback to Toad, who used that kind of line as his tagline throughout the movie. As it turns out, it was meant to be the punch line to a running joke. Apparently, Toad would, multiple times throughout the movie, use the line ‘Do you know what happens when a Toad…’ and whatever was relevant at the time. It was meant to happen several times throughout the movie, and Storm using the line against him would have actually seemed really witty. If only we had been granted the context.

In many cases this extra knowledge doesn’t change the fact that I don’t like the movie. But it makes me feel different about it. It makes my criticism more nuanced. It makes me realise that a final product is the result of many changes and voices and power being wielded, and it isn’t the fault of the actors or sometimes even the director. And it is arrogant and hurtful of me to viciously criticise a product without knowing what went into it. It is easy to do. It is sometimes fun to do. It makes you look like someone who knows their stuff and is berating bad quality products. But it is prudent to remember that people are behind the things you criticise.

Let’s take this back to the web for a moment. Yesterday I had a quick exchange on Twitter that reminded me of this approach of mine. Somebody said people write native apps because a certain part of the web stack is broken. Somebody else answered that if you want to write apps that work you shouldn’t even bother with JavaScript in the first place. I replied that this makes no sense and is not helping the conversation about the broken technology, and that it is overly dismissive and hurtful. The person then admitted knowing nothing about app creation, but was pretty taken aback by me saying what he did was hurtful instead of just dismissive. But it was. And it is hurtful.

Right now JavaScript is hot. JavaScript is relatively easy to learn and the development environment you need for it is free and in many cases a browser is enough. This makes it a great opportunity for someone new to enter our market. Matter of fact, I know people who do exactly that right now and get paid JavaScript courses by the unemployment office to up their value in the market and find a job. Now imagine this person seeing this exchange. Hearing a developer relations person who worked for the largest and coolest companies flat out stating that what you’re trying to get your head around right now [...]



Robert Kaiser: I Want an Internet of Humans

Sat, 03 Dec 2016 03:13:59 +0000

I'm going through some difficult times right now, for various reasons I'm not going into here. It's harder than usual to hold onto my hopes and dreams and the optimism for what's to come that fuels my life and powers me with energy. Unfortunately, there's also not a lot of support for those things in the world around me right now. Be it projects that I shared a vision with being shut down, be it hateful statements coming from and being thrown at a president-elect in the US, politicians in many other countries, including e.g. the presidential candidates right here in Austria, or even organizations and members of communities I'm part of. It looks like the world is going through difficult times, and having an issue with holding on to hopes, dreams, and optimism.

And it feels like even those that usually are beacons of light and preach hope are falling into the trap of preaching the fear of darkness - and as soon as fear enters our minds, it starts a vicious cycle. Some awesome person or group of people wrote a great dialog into Star Wars Episode I, peaking in Yoda's "Fear is the path to the dark side. Fear leads to anger. Anger leads to hate. Hate leads to suffering." - And so true this is.

Think about it. People fear about securing their well-being, about being able to live the life they find worth living (including their jobs), and about knowing what to expect of today and tomorrow. When this fear is nurtured, it grows, and leads to anger about anything that seems to threaten it. They react hatefully to anyone just seeming to support those perceived threats. And those targeted by that hate hurt and suffer, start to fear the "haters", and go through the cycle from the other side. And in that climate, the basic human uneasy feeling of "life for me is mostly OK, so any change and anything different is something to fear" falls onto fertile ground and grows into massive metathesiophobia (fear of change), and things like racism, homophobia, xenophobia, hate of other religions, and all kinds of other demons rise up.

Those are all deeply rooted in sincere, common human emotions (maybe even instincts) that we can live with, overcome and even turn around into e.g. embracing infinite diversity in infinite combinations like e.g. Star Trek, or we can go and shove them away into a corner of our existence, not decomposing them at their basic stage, and letting them grow until they are large enough that they drive our thinking, our personality - and make us easy to influence by people talking to them. And that works well both for the fears that e.g. some politicians spread and play with, and the same for the fears of their opponents. Even the fear of hate and fear taking over is not excluded from this - on the contrary, it can fire up otherwise loving humans into going fully against what they actually want to be.

That said, when a human stands across another human and looks in his or her face, looks into their eyes, as long as we still realize there is a feeling, caring other person on the receiving end of whatever we communicate, it's often harder to start into this circle. If we are already deep into the fear and hate, and in some other circumstances, this may not always be true, but in a lot of cases it is. On the Internet, not so much.

We interact with and through a machine, see an "account" on the other end, remove all the context of what was said before and after, of the tone of voice and body language, of what surroundings others are in; we reduce to a few words that are convenient to type or what the communication system limits us to - and we go for whatever gives us the most attention. Because we don't actually feel like we interact with other real humans, it's mostly about what we get out of it. A lot of likes, reshares, replies, interactions. It helps that the services we use maximize whatever their KPI are and not optimize for what people actually want - afte[...]



Mike Hoye: William Gibson Overdrive

Fri, 02 Dec 2016 21:56:24 +0000

From William Gibson’s “Spook Country”:

She stood beneath Archie’s tail, enjoying the flood of images rushing from the arrowhead fluke toward the tips of the two long hunting tentacles. Something about Victorian girls in their underwear had just passed, and she wondered if that was part of Picnic at Hanging Rock, a film which Inchmale had been fond of sampling on DVD for preshow inspiration. Someone had cooked a beautifully lumpy porridge of imagery for Bobby, and she hadn’t noticed it loop yet. It just kept coming.

And standing under it, head conveniently stuck in the wireless helmet, let her pretend she wasn’t hearing Bobby hissing irritably at Alberto for having brought her here.

It seemed almost to jump, now, with a flowering rush of silent explosions, bombs blasting against black night. She reached up to steady the helmet, tipping her head back at a particularly bright burst of flame, and accidentally encountered a control surface mounted to the left of the visor, over her cheekbone. The Shinjuku squid and its swarming skin vanished.

Beyond where it had been, as if its tail had been a directional arrow, hung a translucent rectangular solid of silvery wireframe, crisp yet insubstantial. It was large, long enough to park a car or two in, and easily tall enough to walk into, and something about these dimensions seemed familiar and banal. Within it, too, there seemed to be another form, or forms, but because everything was wireframed it all ran together visually, becoming difficult to read.

She was turning, to ask Bobby what this work in progress might become, when he tore the helmet from her head so roughly that she nearly fell over.

This left them frozen there, the helmet between them. Bobby’s blue eyes loomed owl-wide behind diagonal blondness, reminding her powerfully of one particular photograph of Kurt Cobain. Then Alberto took the helmet from them both. “Bobby,” he said, “you’ve really got to calm down. This is important. She’s writing an article about locative art. For Node.”

“Node?”

“Node.”

“The fuck is Node?”

I just finished building that. A poor man’s version of that, at least – there’s more to do, but you can stand it up in a couple of seconds and it works; a Node-based Flyweb discovery service that serves up a discoverable VR environment.

It was harder than I expected – NPM and WebVR are pretty uneven experiences from a novice web-developer’s perspective, and I have exciting opinions about the state of the web development ecosystem right now – but putting that aside: I just pushed the first working prototype up to Github a few minutes ago. It’s crude, the code’s ugly but it works; a 3D locative virtual art gallery. If you’ve got the right tools and you’re standing in the right place, you can look through the glass and see another world entirely.

Maybe the good parts of William Gibson’s visions of the future deserve a shot at existing too.




Manish Goregaokar: Reflections on Rusting Trust

Fri, 02 Dec 2016 19:28:27 +0000

The Rust compiler is written in Rust. This is overall a pretty common practice in compiler development. It usually means that the process of building the compiler involves downloading a (typically) older version of the compiler. It also means that the compiler is vulnerable to what is colloquially known as the “Trusting Trust” attack, an attack described in Ken Thompson’s acceptance speech for the 1983 Turing Award. This kind of thing fascinates me, so I decided to try writing one myself. It’s stuff like this which started my interest in compilers, and I hope this post can help get others interested the same way.

To be clear, this isn’t an indictment of Rust’s security. Quite a few languages out there have popular self-hosted compilers (C, C++, Haskell, Scala, D, Go) and are vulnerable to this attack. For this attack to have any effect, one needs to be able to uniformly distribute this compiler, and there are roughly equivalent ways of doing the same level of damage with that kind of access. If you already know what a trusting trust attack is, you can skip the next section. If you just want to see the code, it’s in the trusting-trust branch on my Rust fork, specifically this code.

The attack

The essence of the attack is this: an attacker can conceivably change a compiler such that it can detect a particular kind of application and make malicious changes to it. The example given in the talk was the UNIX login program — the attacker can tweak a compiler so as to detect that it is compiling the login program, and compile in a backdoor that lets it unconditionally accept a special password (created by the attacker) for any user, thereby giving the attacker access to all accounts on all systems that have login compiled by their modified compiler.

However, this change would be detected in the source. If it was not included in the source, this change would disappear in the next release of the compiler, or when someone else compiles the compiler from source. Avoiding this attack is easily done by compiling your own compilers and not downloading untrusted binaries. This is good advice in general regarding untrusted binaries, and it equally applies here.

To counter this, the attacker can go one step further. If they can tweak the compiler so as to backdoor login, they could also tweak the compiler so as to backdoor itself. The attacker needs to modify the compiler with a backdoor which detects when it is compiling the same compiler, and introduces itself into the compiler that it is compiling. On top of this it can also introduce backdoors into login or whatever other program the attacker is interested in.

Now, in this case, even if the backdoor is removed from the source, every compiler compiled using this backdoored compiler will be similarly backdoored. So if this backdoored compiler somehow starts getting distributed, it will spread itself as it is used to compile more copies of itself (e.g. newer versions, etc). And it will be virtually undetectable — since the source doesn’t need to be modified for it to work; just the non-human-readable binary.

Of course, there are ways to protect against this. Ultimately, before a compiler for language X existed, that compiler had to be written in some other language Y. If you can track the sources back to that point you can bootstrap a working compiler from scratch and keep compiling newer compiler versions till you reach the present. This raises the question of whether or not Y’s compiler is backdoored. While it sounds pretty unlikely that such a backdoor could be so robust as to work on two different compilers and stay put throughout the history of X, you can of course trace Y back to other languages and so on till you find a compiler in assembly that you can verify1.

Backdooring Rust

Alright, so I want to backdoor my compiler. I[...]
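To make the self-propagating step concrete, here is a deliberately tiny, self-contained toy in Python rather than Rust. It is not the code from the trusting-trust branch, and every name in it is made up for illustration; the "compiler" just copies source text, but the two checks mirror the two backdoors described above:

# toy_trusting_trust.py -- a conceptual sketch only, not a real compiler.

LOGIN_MARKER = "def check_password"        # how we "recognise" the login program
COMPILER_MARKER = "def compile_program"    # how we "recognise" the compiler itself

LOGIN_BACKDOOR = "\n# backdoor: also accept the attacker's secret password\n"
SELF_BACKDOOR = "\n# backdoor: re-insert these checks into the compiled compiler\n"

def compile_program(source):
    """Stand-in 'compiler': real code generation is just a copy here."""
    binary = source
    if LOGIN_MARKER in source:
        binary += LOGIN_BACKDOOR   # backdoor the login program
    if COMPILER_MARKER in source:
        binary += SELF_BACKDOOR    # backdoor any compiler built from clean source
    return binary

if __name__ == "__main__":
    login_src = "def check_password(pw):\n    return pw == 'hunter2'\n"
    clean_compiler_src = "def compile_program(source):\n    return source\n"
    print(compile_program(login_src))           # ends with the login backdoor
    print(compile_program(clean_compiler_src))  # ends with the self-propagating backdoor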



Air Mozilla: Participation Demos - Q4/2016

Fri, 02 Dec 2016 15:00:00 +0000

Find out what Participation has been up to in Q4 2016




Christian Heilmann: Pixels, Politics and P2P – Internet Days Stockholm 2016

Fri, 02 Dec 2016 11:13:28 +0000

I just got back from the Internet Days conference in Stockholm, Sweden. I was flattered when I was asked to speak at this prestigious event, but I had no idea until I arrived just how much fun it would be. I loved the branding of the conference, as it was all about pixels and love. Things we now need more of – the latter more than the former.

As a presenter, I felt incredibly pampered. I had a driver pick me up at the airport (which I didn’t know, so I took the train) and I was put up in the pretty amazing Waterfront hotel connected to the convention centre of the conference. This was the first time I heard about the Internet Days and for those who haven’t either, I can only recommend it. Imagine a mixture of a deep technical conference on all matters internet – connectivity, technologies and programming – mixed with a TED event on current political matters.

The technology setup was breathtaking. The stage tech was flawless and all the talks were streamed and live edited (mixed with slides). Thus they became available on YouTube about an hour after you delivered them. Wonderful work, and very rewarding as a presenter.

I talked in detail about my keynote in another post, so here are the others I enjoyed: Juliana Rotich of BRCK and Ushahidi fame talked about connectivity for the world and how this is far from being a normal thing. Erika Baker gave a heartfelt talk about how she doesn’t feel safe about anything that is happening in the web world right now and how we need to stop seeing each other as accounts but care more about us as people. Incidentally, this reminded me a lot of my TEDx talk in Linz about making social media more social again.

The big bang to end the first day of the conference was of course the live Skype interview with Edward Snowden. In the one-hour interview he covered a lot of truths about security, privacy and surveillance, and he had many calls to action any one of us can do now. What I liked most about him was how humble he was. His whole presentation was about how it doesn’t matter what will happen to him, how it is important to remember the others that went down with him, and how he wishes for us to use the information we have now to make sure our future is not one of silence and fear.

In addition to my keynote I also took part in a panel discussion on how to inspire creativity. The whole conference was about activism of sorts. I had lots of conversations with privacy experts of all levels: developers, network engineers, journalists and lawyers. The only thing that is a bit of an issue is that most talks outside the keynotes were in Swedish, but having lots of people to chat with about important issues made up for this.

The speaker present was a certificate stating that all the CO2 our travel created was offset by the conference, and an Arduino-powered robot used to teach kids. In general, the conference was about preservation and donating to good causes. There was a place where you could touch your conference pass and coins would fall into a hat, showing that your check-in meant the organisers had donated a Euro to Doctors Without Borders. The catering was stunning and, with the omission of meat, CO2 friendly. Instead of giving out water bottles the drinks were glasses of water, which in Stockholm is in some cases better quality than bottled water.

I am humbled and happy that I could play my part in this great event. It gave me hope that the web isn’t just run over by trolls, privileged complainers and people who don’t care if this great gift of information exchange is being taken from us bit by bit. Make sure to check out all the talks, it is really worth your time. Thank you to everyone involved in this wonderful event! [...]



Mike Hommey: Faster git-cinnabar graft of gecko-dev

Fri, 02 Dec 2016 05:30:47 +0000

Cloning Mozilla repositories from scratch with git-cinnabar can be a long process. Grafting them to gecko-dev is an equally long process.

The metadata git-cinnabar keeps is such that it can be exchanged, but it’s also structured in a way that doesn’t allow git push and fetch to do that efficiently, and pushing refs/cinnabar/metadata to github fails because it wants to push more than 2GB of data, which github doesn’t allow.

But with some munging before a push, it is possible to limit the push to a fraction of that size and stay within github limits. And inversely, some munging after fetching makes it possible to produce the metadata git-cinnabar wants.

The news here is that there is now a cinnabar head on https://github.com/glandium/gecko-dev that contains the munged metadata, and a script that fetches it and produces the git-cinnabar metadata in an existing clone of gecko-dev. An easy way to run it is to use the following command from a gecko-dev clone:

$ curl -sL https://gist.github.com/glandium/56a61454b2c3a1ad2cc269cc91292a56/raw/bfb66d417cd1ab07d96ebe64cdb83a4217703db9/import.py | git cinnabar python

On my machine, the process takes 8 minutes instead of more than an hour. Make sure you use git-cinnabar 0.4.0rc for this.

Please note this doesn’t bring the full metadata for gecko-dev, just the metadata as of yesterday. This may be updated irregularly in the future, but don’t count on that.

So, from there, you still need to add mercurial remotes and pull from there, as per the original workflow.

Planned changes for version 0.5 of git-cinnabar will alter the metadata format such that it will be exchangeable without munging, making the process simpler and faster.




Matthew Ruttley: Avoiding multiple reads with top-level imports

Fri, 02 Dec 2016 00:10:31 +0000

Recently I’ve been working with various applications that require importing large JSON definition files which detail complex application settings. Often, these files are required by multiple auxiliary modules in the codebase. All principles of software engineering point towards importing this sort of file only once, regardless of how many secondary modules it is used in.

My instinctive approach to this would be to have a main handler module read in the file and then pass its contents as a class initialization argument:

# main_handler.py

import json

from module1 import Class1
from module2 import Class2

with open("settings.json") as f:
    settings = json.load(f)

init1 = Class1(settings=settings)
init2 = Class2(settings=settings)

The problem with this is that if you have an elaborate import process, and multiple files to import, it could start to look messy. I recently discovered that this multiple initialization argument approach isn’t actually necessary.

In Python, you can actually import the same settings loader module in the two auxiliary modules (module1 and module2), and Python will only load it once:

# main_handler.py

from module1 import Class1
from module2 import Class2

init1 = Class1()
init2 = Class2()

# module1.py

import settings_loader

class Class1:
    def __init__(self):
        self.settings = settings_loader.settings

# module2.py

import settings_loader

class Class2:
    def __init__(self):
        self.settings = settings_loader.settings

# settings_loader.py

import json

with open("settings.json") as f:
    print "Loading the settings file!"
    settings = json.load(f)

Now when we test this out in the terminal:

MRMAC:importtesting mruttley$
MRMAC:importtesting mruttley$
MRMAC:importtesting mruttley$ python main_handler.py
Loading the settings file!
MRMAC:importtesting mruttley$

Despite calling import settings_loader twice, Python actually only called it once. This is extremely useful but also could cause headaches if you actually wanted to import the file twice. If so, then I would include the settings importer inside the __init__() of each ClassX and instantiate it twice.
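For completeness, a minimal sketch of that alternative (hypothetical file name, mirroring the modules above): load the settings inside __init__() so every instance re-reads the file.

# module1_reload.py -- hypothetical variant of module1.py where each instance
# re-reads settings.json instead of sharing the module-level cached copy.

import json

class Class1:
    def __init__(self, path="settings.json"):
        with open(path) as f:
            self.settings = json.load(f)

# Each instantiation re-opens and re-parses the file:
# init1 = Class1()
# init2 = Class1()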




Mozilla Addons Blog: December’s Featured Add-ons

Thu, 01 Dec 2016 22:31:11 +0000


Pick of the Month: Enhancer for YouTube

by Maxime RF
Watch YouTube on your own terms! Tons of customizable features, like ad blocking, auto-play setting, mouse-controlled volume, cinema mode, video looping, to name a few.

“All day long, I watch and create work-related YouTube videos. I think I’ve tried every video add-on. This is easily my favorite!”

Featured: New Tab Override

by Sören Hentzschel
Designate the page that appears every time you open a new tab.

“Simply the best, trouble-free, new tab option you’ll ever need.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!




Daniel Stenberg: 2nd best in Sweden

Thu, 01 Dec 2016 22:30:06 +0000

“Probably the only person in the whole of Sweden whose code is used by all people in the world using a computer / smartphone / ATM / etc … every day. His contribution to the world is so large that it is impossible to understand the breadth.” (translated motivation from the Swedish original page)

Thank you everyone who nominated me. I’m truly grateful, honored and humbled. You, my community, are what makes me keep doing what I do. I love you all!

To list “Sweden’s best developers” (the list and site is in Swedish) seems like a rather futile task, doesn’t it? Yet that’s something the Swedish IT and technology news site Techworld has been doing occasionally for the last several years, with two or three year intervals since 2008. Everyone reading this will of course immediately start to ponder on what developers they speak of, or how they define developers, and how on earth do you judge who the best developers are? Or even who’s included in the delimiter “Sweden” – is that people living in Sweden, born in Sweden or working in Sweden?

I’m certainly not alone in having chuckled at these lists when they have been published in the past, as I’ve never seen anyone on the list be even close to my own niche or areas of interest. The lists have even worked a little as a long-standing joke in places. It always felt as if the people on the lists were found on another planet than mine – mostly just Java and .NET people, and they very rarely appeared to be developers who actually spend their days surrounded by code and programming. I suppose I’ve now given away some clues to some characteristics I think “a developer” should possess…

This year, their fifth time doing this list, they changed the way they find candidates, opened up for external nominations and had a set of external advisors. This also resulted in me finding several friends on the list that were never on it in the past.

Tonight I got called onto the stage during the little award ceremony and I was handed this diploma and recognition for landing at second place in the best developer in Sweden list. And just to keep things safe for the future, this is how the listing looks on the Swedish list page:

Yes, I’m happy and proud and humbled. I don’t get this kind of recognition every day so I’ll take this opportunity and really enjoy it. And I’ll find a good spot for my diploma somewhere around the house. I’ll keep a really big smile on my face for the rest of the day for sure!

(Photo from the award ceremony by Emmy Jonsson/IDG) [...]



Mozilla Privacy Blog: Remember when we Protected Net Neutrality in the U.S. ?

Thu, 01 Dec 2016 21:23:07 +0000

We may have to do it again. Importantly, we can and we will.

President-Elect Trump has picked his members of the “agency landing team” for the Federal Communications Commission. Notably, two of them are former telecommunications executives who weren’t supportive of the net neutrality rules ultimately adopted on February 25, 2015. There is no determination yet of who will ultimately lead the FCC – that will likely wait until next year. However, the current “landing team” picks have people concerned that the rules enacted to protect net neutrality – the table stakes for an open internet – are in jeopardy of being thrown out.

Is this possible? Of course it is – but it isn’t quite that simple. We should all pay attention to these picks – they are important for many reasons – but we need to put this into context.

The current FCC, who ultimately proposed and enacted the rules, faced a lot of public pressure in its process of considering them. The relevant FCC docket on net neutrality (“Protecting and Promoting the Open Internet”) currently contains 2,179,599 total filings from interested parties. The FCC also received 4 million comments from the public – most in favor of strong net neutrality rules. It took all of our voices to make it clear that net neutrality was important and needed to be protected.

So, what can happen now? Any new administration can reconsider the issue. We hope they don’t. We have been fighting this fight all over the world, and it would be nice to continue to count the United States as among the leaders on this issue, not one of the laggards. But, if the issue is revisited – we are all still here, and so are others who supported and fought for net neutrality.

We all still believe in net neutrality and in protecting openness, access and equality. We will make our voices heard again. As Mozilla, we will fight for the rules we have – it is a fight worth having. So, pay attention to what is going on in these transition teams – but remember we have strength in our numbers and in making our voices heard.


image from Mozilla 2014 advocacy campaign and petition




The Mozilla Blog: Why I’m joining Mozilla’s Board, by Julie Hanna

Thu, 01 Dec 2016 20:31:25 +0000

Today, I’m joining Mozilla’s Board. What attracts me to Mozilla is its people, mission and values. I’ve long admired Mozilla’s noble mission to ensure the internet is free, open and accessible to all. That Mozilla has organized itself in a radically transparent, massively distributed and crucially equitable way is a living example of its values in action and a testament to the integrity with which Mozillians have pursued that mission. They walk the talk. Similarly, having had the privilege of knowing a number of the leaders at Mozilla, their sincerity, character and competence are self-evident.

Julie Hanna, new Mozilla Corporation Board member (photo credit: Chris Michel)

The internet is the most powerful force for good ever invented. It is the democratic air half our planet breathes. It has put power into the hands of people that didn’t have it. Ensuring the internet continues to serve our humanity, while reaching all of humanity, is vital to preserving and advancing the internet as a public good. The combination of these things is why helping Mozilla maximize its impact is an act with profound meaning and a privilege for me.

Mozilla’s mission is bold, daring and simple, but not easy. Preserving the web as a force for good and ensuring the balance of power between all stakeholders – private, commercial, national and government interests – while preserving the rights of individuals, by its nature, is a never-ending challenge. It is a deep study in choice, consequence and unintended consequences over the short, medium and long term. Understanding the complex, nuanced and dynamic forces at work so that we can skillfully and collaboratively architect a digital organism that’s in service to the greater public good is by its very nature complicated. And then there’s the challenge all organizations face in today’s innovate-or-die world – how to stay agile, innovative, and relevant, while riding the waves of disruption. Not for the faint of heart, but incredibly worthwhile and consequential to securing the future of the internet.

I subscribe to the philosophy of servant leadership. When it comes to Board service, my emphasis is on the service part. First and foremost, being in service to the mission and to Mozillians, who are doing the heavy lifting on the front lines. I find that a mindset of radical empathy and humility is critical to doing this effectively. The invisible work of deep listening and effort to understand what it’s like to walk a mile in their shoes. As is creating a climate of trust and psychic safety so that tough strategic issues can be discussed with candor and efficiency. Similarly, cultivating a creative tension so diverse thoughts and ideas have the headroom to emerge in a way that’s constructive and collaborative. My continual focus is to listen, learn and be of service in the areas where my contribution can have the greatest impact.

Mozilla is among the pioneers of Open Source Software. Open Source Software is the foundation of an open internet and a pervasive building block in 95% of all applications. The net effect is a shared public good that accelerates innovation. That will continue. Open source philosophy and methodology are also moving into other realms like hardware and medicine. This will also continue. We tend to overestimate the short term impact of technology and underestimate its long term effect. I believe we’ve only begun catalyzing the potential of open source. Harnessing the democratizing power of the internet to enable a more just, abundant and free world is the long running purpose that has driven [...]



Cameron Kaiser: 45.5.1 available, and 32-bit Intel Macs go Tier-3

Thu, 01 Dec 2016 19:24:00 +0000

Test builds for 45.5.1, with the single change being the safety fix for the Firefox 0-day in bug 1321066 (CVE-2016-9079), are now available. Release notes and hashes to follow when I'm back from my business trip late tonight. I will probably go live on this around the same time, so please test as soon as you can.

In other news, the announcement below was inevitable after Mozilla dropped support for 10.6 through 10.8, but for the record (from BDS):

As of Firefox 53, we are intending to switch Firefox on mac from a universal x86/x86-64 build to a single-architecture x86-64 build. To simplify the build system and enable other optimizations, we are planning on removing support for universal mac build from the Mozilla build system. The Mozilla build and test infrastructure will only be testing the x86-64 codepaths on mac. However, we are willing to keep the x86 build configuration supported as a community-supported (tier 3) build configuration, if there is somebody willing to step forward and volunteer as the maintainer for the port. The maintainer's responsibility is to periodically build the tree and make sure it continues to run. Please contact me directly (not on the list) if you are interested in volunteering. If I do not hear from a volunteer by 23-December, the Mozilla project will consider the Mac-x86 build officially unmaintained.

The precipitating event for this is the end of NPAPI plugin support (see? TenFourFox was ahead of the curve!), except, annoyingly, Flash, with Firefox 52. The only major reason 32-bit Mac Firefox builds weren't ended with the removal of 10.6 support (10.6 being the last version of Mac OS X that could run on a 32-bit Intel Mac) was for those 64-bit Macs that had to run a 32-bit plugin. Since no plugins but Flash are supported anymore, and Flash has been 64-bit for some time, that's the end of that.

Currently we, as OS X/ppc, are a Tier-3 configuration also, at least for as long as we maintain source parity with 45ESR. Mozilla has generally been deferential to not intentionally breaking TenFourFox, and the situation with 32-bit x86 would probably be easier than our situation. That said, candidly I can only think of two non-exclusive circumstances where maintaining the 32-bit Intel Mac build would be advantageous, and they're both bigger tasks than simply building the browser for 32 bits:

- You still have to run a 32-bit plugin like Silverlight. In that case, you'd also need to undo the NPAPI plugin block (see bug 1269807) and everything underlying it.
- You have to run Firefox on a 32-bit Mac. As a practical matter this would essentially mean maintaining support for 10.6 as well, roughly option 4 when we discussed this in a prior blog post, with the added complexity of having to pull the legacy Snow Leopard support forward over a complete ESR cycle. This is non-trivial, but hey, we've done just that over six ESR cycles, although we had the advantage of being able to do so incrementally.

I'm happy to advise anyone who wants to take this on, but it's not something you'll see coming from me. If you decide you'd like to try, contact Benjamin directly (his first name, smedbergs, us).[...]



Support.Mozilla.Org: What’s Up with SUMO – 1st December

Thu, 01 Dec 2016 19:20:46 +0000

Greetings, SUMO Nation! This is it! The start of the last month of the year, friends :-) We are kicking off the 12th month of 2016 with some news for your reading pleasure. Dig in!

Welcome, new contributors!
- Gregg Virostek
- Dr. Vinny Goombatz
- normac
- Lazimshahad
- Gardiananj
If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week
- All the forum supporters who tirelessly helped users out for the last week.
- All the writers of all languages who worked tirelessly on the KB for the last week.
- Huge thanks to philipp, pollti and Artist for their help with getting more Klar-ity :-)
We salute all of you! Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings
- LATEST ONE: 30th of November – you can read the notes here (and see the video at AirMozilla once we get it up there, since we had some technical issues).
- NEXT ONE: happening on the 14th of December!
- If you want to add a discussion topic to the upcoming meeting agenda:
  - Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
  - Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
  - If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community
- The State of Mozilla Report is here!
- Help us connect all people to the open Internet through the #EqualRating Innovation Challenge.
- Please remember that Persona has shut down as of yesterday. All affected services will offer alternative log in methods, so look for more information about that on their sites.
- Final reminder: If you want to help us brainstorm how to promote “Internet Awareness” among our visitors, please join us here.
- Get musical with SUMO! The playlist is here… and the forum thread is here.

Platform
- Check the notes from the last meeting in this document (and don’t forget about our meeting recordings). The main points of today’s meeting were:
  - Announcing a new Bugzilla component for working on further optimization of the new platform.
  - Postponing of the final migration until the first week of 2017.
  - Announcing that the next Platform meeting will take place on the 15th of December.
- Reminder: You can preview the current migration test site here. If you can’t access the site, contact Madalina. Drop your feedback into the same feedback document as usual.
- Reminder: The post-migration feature list can be found here (courtesy of Roland – ta!)
- We are still looking for user stories to focus on immediately after the migration (also known as “Phase 1”).

Social
- Reminder: Army of Awesome (as a “community trademark”) is going away. Please reach out to the Social Support team or ask in #sumo for more information.
- Remember, you can contact Sierra (sreed@), Elisabeth (ehull@), or Rachel (guigs@) to get started with Social support.
- Help us provide friendly help through the likes of @firefox, @firefox_es, @firefox_fr and @firefoxbrasil on Twitter and beyond :-)

Support Forum
- If you see Firefox users asking about the “zero day exploit”, please let them know that “We have been made aware of the issue and are working on a fix. We will have more to say once the fix has been shipped.” (some media context here)
- A polite rem[...]



Mitchell Baker: Julie Hanna Joins the Mozilla Corporation Board of Directors

Thu, 01 Dec 2016 19:18:12 +0000

This post was originally posted on the Mozilla.org website.


Julie Hanna, new Mozilla Corporation Board member

Today, we are very pleased to announce the latest addition to the Mozilla Corporation Board of Directors – Julie Hanna. Julie is the Executive Chairman for Kiva and a Presidential Ambassador for Global Entrepreneurship and we couldn’t be more excited to have her joining our Board.

Throughout this year, we have been focused on board development for both the Mozilla Foundation and the Mozilla Corporation boards of directors. We envisioned a diverse group who embodied the same values and mission that Mozilla stands for. We want each person to contribute a unique point of view. After extensive conversations, it was clear to the Mozilla Corporation leadership team that Julie brings exactly the type of perspective and approach that we seek.

Born in Egypt, Julie lived in various countries including Jordan and Lebanon before finally immigrating to the United States. Julie graduated from the University of Alabama at Birmingham with a B.S. in Computer Science. She currently serves as Executive Chairman at Kiva, a peer-to-peer lending pioneer and the world’s largest crowdlending marketplace for underserved entrepreneurs. During her tenure, Kiva has scaled its reach to 190+ countries and facilitated nearly $1 billion in loans to 2 million people with a 97% repayment rate. U.S. President Barack Obama appointed Julie as a Presidential Ambassador for Global Entrepreneurship to help develop the next generation of entrepreneurs. In that capacity, her signature initiative has delivered over $100M in capital to nearly 300,000 women and young entrepreneurs across 86 countries.

Julie is known as a serial entrepreneur with a focus on open source. She was a founder or founding executive at several innovative technology companies directly relevant to Mozilla’s world in browsers and open source. These include Scalix, a pioneering open source email/collaboration platform and developer of the most advanced AJAX application of its time; the first enterprise portal provider, 2Bridge Software; and Portola Systems, which was acquired by Netscape Communications and became Netscape Mail.

She has also built a wealth of experience as an active investor and advisor to high-growth technology companies, including sharing economy pioneer Lyft, Lending Club and online retail innovator Bonobos. Julie also serves as an advisor to Idealab, Bill Gross’ highly regarded incubator which has launched dozens of IPO-destined companies.

Please join me in welcoming Julie Hanna to the Mozilla Board of Directors.

Mitchell

Background:

Twitter: @JulesHanna

High-res photo

 




The Mozilla Blog: Julie Hanna Joins the Mozilla Corporation Board of Directors

Thu, 01 Dec 2016 19:17:50 +0000

Today, we are very pleased to announce the latest addition to the Mozilla Corporation Board of Directors – Julie Hanna. Julie is the Executive Chairman for Kiva and a Presidential Ambassador for Global Entrepreneurship and we couldn’t be more excited to have her joining our Board.

Throughout this year, we have been focused on board development for both the Mozilla Foundation and the Mozilla Corporation boards of directors. We envisioned a diverse group who embodied the same values and mission that Mozilla stands for. We want each person to contribute a unique point of view. After extensive conversations, it was clear to the Mozilla Corporation leadership team that Julie brings exactly the type of perspective and approach that we seek.

Born in Egypt, Julie lived in various countries including Jordan and Lebanon before finally immigrating to the United States. Julie graduated from the University of Alabama at Birmingham with a B.S. in Computer Science. She currently serves as Executive Chairman at Kiva, a peer-to-peer lending pioneer and the world’s largest crowdlending marketplace for underserved entrepreneurs. During her tenure, Kiva has scaled its reach to 190+ countries and facilitated nearly $1 billion in loans to 2 million people with a 97% repayment rate. U.S. President Barack Obama appointed Julie as a Presidential Ambassador for Global Entrepreneurship to help develop the next generation of entrepreneurs. In that capacity, her signature initiative has delivered over $100M in capital to nearly 300,000 women and young entrepreneurs across 86 countries.

Julie is known as a serial entrepreneur with a focus on open source. She was a founder or founding executive at several innovative technology companies directly relevant to Mozilla’s world in browsers and open source. These include Scalix, a pioneering open source email/collaboration platform and developer of the most advanced AJAX application of its time; the first enterprise portal provider, 2Bridge Software; and Portola Systems, which was acquired by Netscape Communications and became Netscape Mail.

She has also built a wealth of experience as an active investor and advisor to high-growth technology companies, including sharing economy pioneer Lyft, Lending Club and online retail innovator Bonobos. Julie also serves as an advisor to Idealab, Bill Gross’ highly regarded incubator which has launched dozens of IPO-destined companies.

Please join me in welcoming Julie Hanna to the Mozilla Board of Directors.

Mitchell

You can read Julie’s message about why she’s joining Mozilla here.

Background:

Twitter: @JulesHanna

High-res photo (photo credit: Chris Michel)




Air Mozilla: Connected Devices Weekly Program Update, 01 Dec 2016

Thu, 01 Dec 2016 18:45:00 +0000

(image) Weekly project updates from the Mozilla Connected Devices team.




The Mozilla Blog: State of Mozilla 2015 Annual Report

Thu, 01 Dec 2016 16:40:48 +0000

We just released our State of Mozilla annual report for 2015. This report highlights key activities for Mozilla in 2015 and includes detailed financial documents.

Mozilla is not your average company. We’re a different kind of organization – a nonprofit, global community with a mission to ensure that the internet is a global public resource, open and accessible to all.

I hope you enjoy reading and learning more about Mozilla and our developments in products, web technologies, policy, advocacy and internet health.

 

 




Air Mozilla: Webinar 2: What is Equal Rating.

Thu, 01 Dec 2016 16:16:51 +0000

(image) Overview of Equal Rating




Mozilla Open Innovation Team: The Problem with Privacy in IoT

Thu, 01 Dec 2016 16:11:31 +0000

Every year Mozilla hosts DinoTank, an internal pitch platform, and this year instead of pitching ideas we focused on pitching problems. To give each DinoTank winner the best possible start, we set up a design sprint for each one. This is the first installment of that series of DinoTank sprints…

The Problem
I work on the Internet of Things at Mozilla but I am apprehensive about bringing most smart home products into my house. I don’t want a microphone that is always listening to me. I don’t want an internet connected camera that could be used to spy on me. I don’t want my thermostat, door locks, and light bulbs all collecting unknown amounts of information about my daily behavior. Suffice it to say that I have a vague sense of dread about all the new types of data being collected and transmitted from inside my home. So I pitched this problem to the judges at DinoTank. It turns out they saw this problem as important and relevant to Mozilla’s mission. And so to explore further, we ran a 5 day product design sprint with the help of several field experts.

Brainstorming
A team of 8 staff members was gathered in San Francisco for a week of problem refinement, insight gathering, brainstorming, prototyping, and user testing. Among us we had experts in Design Thinking, user research, marketing, business development, engineering, product, user experience, and design. The diversity of skillsets and backgrounds allowed us to approach the problem from multiple different angles, and through our discussion several important questions arose which we would seek to answer by building prototypes and putting them in front of potential consumers.

The Solution
After 3 days of exploring the problem, brainstorming ideas and then narrowing them down, we settled on a single product solution. It would be a small physical device that plugs into the home’s router to monitor the network activity of local smart devices. It would have a control panel that could be accessed from a web browser. It would allow the user to keep up to date through periodic status emails, and only in critical situations would it notify the user’s phone with needed actions. We mocked up an end-to-end experience using clickable and paper prototypes, and put it in front of privacy-aware IoT home owners.

What We Learned
Surprisingly, our test users saw the product as more of an all-inclusive internet security system rather than an IoT-only solution. One of our solutions focused more on ‘data protection’, and here we clearly learned that there is a sense of resignation towards large-scale data collection, with comments like “Google already has all my data anyway.” Of the positive learnings, the mobile notifications really resonated with users. And interestingly — though not surprisingly — people became much more interested in the privacy aspects of our mock-ups when their children were involved in the conversation.

Next Steps
The big question we were left with was: is this a latent but growing problem, or was this never a problem at all? To answer this, we will tweak our prototypes to test different market positioning of the product as well as explore potential audiences that have a larger interest in data privacy.

My Reflections
Now, if I had done this project without DinoTank’s help, I probably would have started by grabbing a Raspberry Pi and writing some code. But instead I learned how to take a step back and sta[...]



Myk Melez: Mozilla and Node

Thu, 01 Dec 2016 16:03:46 +0000

Recently the Node Foundation announced that Mozilla is joining forces with IBM, Intel, Microsoft, and NodeSource on the Node API. So what’s Mozilla doing with Node? Actually, a few things…

You may already know about SpiderNode, a Node implementation on SpiderMonkey, which Ehsan Akhgari announced in April. Ehsan, Trevor Saunders, Brendan Dahl, and other contributors have since made a bunch of progress on it, and it now builds successfully on Mac and Linux and runs some Node programs.

Brendan additionally did the heavy lifting to build SpiderNode as a static library, link it with Positron, and integrate it with Positron’s main process, improving that framework’s support for running Electron apps. He’s now looking at opportunities to expose SpiderNode to WebExtensions and to chrome code in Firefox.

Meanwhile, I’ve been analyzing the Node API being developed by the API Working Group, and I’ve also been considering opportunities to productize SpiderNode for Node developers who want to use emerging JavaScript features in SpiderMonkey, such as WebAssembly and Shared Memory.

If you’re a WebExtension developer or Firefox engineer, would you use Node APIs if they were available to you? If you’re a Node programmer, would you use a Node implementation running on SpiderMonkey? And if so, would you require Node Addons (i.e. native modules) to do so?




Air Mozilla: Reps Weekly Meeting Dec. 01, 2016

Thu, 01 Dec 2016 16:00:00 +0000

(image) This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.




Daniel Pocock: Using a fully free OS for devices in the home

Thu, 01 Dec 2016 13:11:03 +0000

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately include spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks. On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.

Which boxes to start with?
There are various considerations when going down this path:
  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example, when upgrading a broadband connection to Gigabit or when an older NAS gets a noisy fan or struggles with SSD performance, and in these cases the decision about what to buy can be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?

Discussing these options
I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions have already appeared; it would be great to see any other ideas that people have about these choices. [...]



Giorgos Logiotatidis: Taco Bell Parallel Programming

Thu, 01 Dec 2016 13:02:04 +0000

While working on migrating support.mozilla.org away from Kitsune (which is a great community support platform that needs love, remember that internet) I needed to convert about 4M database rows of a custom, Markdown-inspired format to HTML. The challenge of the task is that it needs to happen as fast as possible so we can dump the database, convert the data and load the database onto the new platform with the minimum possible time between the first and the last step. I started a fresh MySQL container and started hacking:

Load the database dump
Kitsune's database weighs about 35GiB so creating and loading the dump is a lengthy procedure. I used some tricks taken from different places, the most notable ones:
  • Set innodb_flush_log_at_trx_commit = 2 for more speed. This should not be used in production as it may break ACID compliance, but for my use case it's fine.
  • Set innodb_write_io_threads = 16
  • Set innodb_buffer_pool_size=16G and innodb_log_file_size=4G. I read that innodb_log_file_size is recommended to be 1/4th of innodb_buffer_pool_size and I set the latter based on my available memory.
Loading the database dump takes about 60 minutes. I'm pretty sure there's room for improvement there.

Extra tip: When dumping such huge databases from production websites make sure to use a replica host and mysqldump's --single-transaction flag to avoid locking the database.

Create a place to store the processed data
Kitsune being a Django project, I created extra fields named content_html in the Models with markdown content, generated the migrations and ran them against the db.

Process the data
An AWS m4.2xl gives 8 cores at my disposal and 32GiB of memory, of which 16 I allocated to MySQL earlier. I started with a basic single-core solution:

    for question in Question.objects.all():
        question.content_html = parser.wiki_2_html(question.content)
        question.save()

which obviously does the job but is super slow. Transactions take a fair amount of time; what if we could bundle multiple saves into one transaction?

    def chunks(count, nn=500):
        """Yield successive n-sized chunks from l."""
        offset = 0
        while True:
            yield (offset, min(offset + nn, count))
            offset += nn
            if offset > count:
                break

    for low, high in chunks(Question.objects.count()):
        with transaction.atomic():
            for question in Question.objects.all()[low:high]:
                question.content_html = parser.wiki_2_html(question.content)
                question.save()

This is getting better. Increasing the chunk size to 20000 items at the cost of more RAM used produces faster results. Anything above this value seems to require about the same time to complete. I tried pypy and didn't get better results, so I defaulted to CPython.

Let's add some more cores into the mix using Python's multiprocessing library. I created a Pool with 7 processes (always leave one core outside the Pool so the system remains responsive) and used apply_async to generate the commands to run by the Pool.

    import multiprocessing as mp

    results = []
    it = Question.objects.all()
    number_of_rows = it.count()
    pool = mp.Pool(processes=7)
    [pool.apply_async(process_chunk, (chunk,), callback=results.append)
     for chunk in chunks(it)]

    sum_results = 0
    while sum_results < number_of_rows:
        print 'Progress: {}/{}'.format(s[...]



Daniel Glazman: Eighteen years later

Thu, 01 Dec 2016 10:17:00 +0000

In December 1998, our comrade Bert Bos released a W3C Note: List of suggested extensions to CSS. I thought it could be interesting to see where we stand 18 years later...

Id | Suggestion | active WD | CR, PR or REC | Comment
1 | Columns | ✅ | ✅ |
2 | Swash letters and other glyph substitutions | ✅ | ✅ |
3 | Running headers and footers | ✅ | ❌ |
4 | Cross-references | ✅ | ❌ |
5 | Vertical text | ✅ | ✅ |
6 | Ruby | ✅ | ✅ |
7 | Diagonal text | ✅ | ❌ | through Transforms
7 | Text along a path | ❌ | ❌ |
8 | Style properties for embedded 2D graphics | ➡️ | ➡️ | through filters
9 | Hyphenation control | ✅ | ❌ |
10 | Image filters | ✅ | ❌ |
11 | Rendering objects for forms | ✅ | ✅ |
12 | :target | ✅ | ✅ |
13 | Floating boxes to top & bottom of page | ❌ | ❌ |
14 | Footnotes | ✅ | ❌ |
15 | Tooltips | ❌ | ❌ | possible with existing properties
16 | Maths | ❌ | ❌ | there was no proposal, only an open question
17 | Folding lists | ❌ | ❌ | possible with existing properties
18 | Page-transition effects | ❌ | ❌ |
19 | Timed styles | ✅ | ❌ | Transitions & Animations
20 | Leaders | ✅ | ❌ |
21 | Smart tabs | ❌ | ❌ | not sure it belongs to CSS
22 | Spreadsheet functions | ❌ | ❌ | does not belong to CSS
23 | Non-rectangular wrap-around | ✅ | ❌ | Exclusions, Shapes
24 | Gradients | ✅ | ✅ | Backgrounds & Borders
25 | Textures/images instead of fg colors | ❌ | ❌ |
26 | Transparency | ✅ | ✅ | opacity
27 | Expressions | partly | ✅ | calc()
28 | Symbolic constants | ✅ | ✅ | Variables
29 | Mixed mode rendering | ❌ | ❌ |
30 | Grids for TTY | ❌ | ❌ |
31 | Co-dependencies between rules | ✅ | ✅ | Conditional Rules
32 | High-level constraints | ❌ | ❌ |
33 | Float: gutter-side/fore-edge-side | ❌ | ❌ |
34 | Icons & minimization | ❌ | ❌ |
35 | Namespaces | ✅ | ✅ |
36 | Braille | ❌ | ❌ |
37 | Numbered floats | ✅ | ❌ | GCPM
38 | Visual top/bottom margins | ❌ | ❌ |
39 | TOCs, tables of figures, etc. | ❌ | ❌ |
40 | Indexes | ❌ | ❌ |
41 | Pseudo-element for first n lines | ❌ | ❌ |
42 | :first-word | ❌ | ❌ |
43 | Corners | ✅ | ✅ | border-radius and border-image
44 | Local and external anchors | ✅ | ❌ | Selectors level 4
45 | Access to attribute values | ➡️ | ❌ | access to arbitrary attributes hosted by arbitrary elements through a selector inside attr() was considered and dropped
46 | Linked flows | ✅ | ❌ | Regions
47 | User states | ❌ | ❌ |
48 | List numberings | ✅ | ✅ | Counter Styles
49 | Subtractive text-decoration | ❌ | ❌ |
50 | Styles for map/area | ➡️ | ➡️ | never discussed AFAIK
51 | Transliteration | ➡️ | ➡️ | discussed and dropped
52 | Regexps in selectors | ❌ | ❌ |
53 | Last-of... selectors | ✅ | ✅ |
54 | Control over progressive rendering | ❌ | ❌ |
55 | Inline-blocks | ✅ | ✅ |
56 | Non-breaking inlines | ✅ | ✅ | white-space applies to all elements since CSS 2.0...
57 | Word-spacing: none | ❌ | ❌ |
58 | HSV or HSL colors | ✅ | ✅ |
59 | Standardize X colors | ✅ | ✅ |
60 | Copy-fitting/auto-sizing/auto-spacing | ✅ | ✅ | Flexbox
61 | @page inside @media | ❌ | ❌ |
62 | Color profiles | ✅ | ❌ | dropped from Colors level 3 but in level 4
63 | Underline styles | ✅ | ✅ |
64 | BECSS | ➡️ | ➡️ | BECSS, dropped
65 | // comments | ❌ | ❌ |
66 | Replaced elements w/o intrinsic size | ✅ | ✅ | object-fit
67 | Fitting replaced elements | ✅ | ✅ | object-fit [...]



Chris Finke: Reenact is dead. Long live Reenact.

Thu, 01 Dec 2016 07:22:39 +0000

Last November, I wrote an iPhone app called Reenact that helps you reenact photos. It worked great on iOS 9, but when iOS 10 came out in July, Reenact would crash as soon as you tried to select a photo. It turns out that in iOS 10, if you don’t describe exactly why your app needs access to the user’s photos, Apple will (intentionally) crash your app. For a casual developer who doesn’t follow every iOS changelog, this was shocking — Apple essentially broke every app that accesses photos (or 15 other restricted resources) if they weren’t updated specifically for iOS 10 with this previously optional feature… and they didn’t notify the developers! They have the contact information for the developer of every app, and they know what permissions every app has requested. When you make a breaking change that large, the onus is on you to proactively send some emails.

I added the required description, and when I tried to build the app, I ran into another surprise. The programming language I used when writing Reenact was version 2 of Apple’s Swift, which had just been released two months prior. Now, one year later, Swift 2 is apparently a “legacy language version,” and Reenact wouldn’t even build without adding a setting that says, “Yes, I understand that I’m using an ancient 1-year-old programming language, and I’m ok with that.”

After I got it to build, I spent another three evenings working through all of the new warnings and errors that the untouched and previously functional codebase had somehow started generating, but in the end, I didn’t do the right combination of head-patting and tummy-rubbing, so I gave up. I’m not going to pay $99/year for an Apple Developer Program membership just to spend days debugging issues in an app I’m giving away, all because Apple isn’t passionate about backwards-compatibility. So today, one year from the day I uploaded version 1.0 to the App Store (and serendipitously, on the same day that my Developer Program membership expires), I’m abandoning Reenact on iOS.

…but I’m not abandoning Reenact. Web browsers on both desktop and mobile provide all of the functionality needed to run Reenact as a Web app — no app store needed — so I spent a few evenings polishing the code from the original Firefox OS version of Reenact, adding all of the features I put in the iOS and Android versions. If your browser supports camera sharing, you can now use Reenact just by visiting app.reenact.me. It runs great in Firefox, Chrome, Opera, and Amazon’s Silk browser. iOS users are still out of luck, because Safari supports precisely 0% of the necessary features. (Because if web pages can do everything apps can do, who will write apps?) One of these things just doesn’t belong.

In summary: Reenact for iOS is dead. Reenact for the Web is alive. Both are open-source. Don’t trust anyone over 30. Leave a comment below. [...]



Mozilla Security Blog: Fixing an SVG Animation Vulnerability

Wed, 30 Nov 2016 21:50:32 +0000

At roughly 1:30pm Pacific time on November 30th, Mozilla released an update to Firefox containing a fix for a vulnerability reported as being actively used to deanonymize Tor Browser users.  Existing copies of Firefox should update automatically over the next 24 hours; users may also download the updated version manually.

Early on Tuesday, November 29th, Mozilla was provided with code for an exploit using a previously unknown vulnerability in Firefox.  The exploit was later posted to a public Tor Project mailing list by another individual.  The exploit took advantage of a bug in Firefox to allow the attacker to execute arbitrary code on the targeted system by having the victim load a web page containing malicious JavaScript and SVG code.  It used this capability to collect the IP and MAC address of the targeted system and report them back to a central server.  While the payload of the exploit would only work on Windows, the vulnerability exists on Mac OS and Linux as well.  Further details about the vulnerability and our fix will be released according to our disclosure policy.

The exploit in this case works in essentially the same way as the “network investigative technique” used by FBI to deanonymize Tor users (as FBI described it in an affidavit).  This similarity has led to speculation that this exploit was created by FBI or another law enforcement agency.  As of now, we do not know whether this is the case.  If this exploit was in fact developed and deployed by a government agency, the fact that it has been published and can now be used by anyone to attack Firefox users is a clear demonstration of how supposedly limited government hacking can become a threat to the broader Web.




Robert Helmer: about:addons in React

Wed, 30 Nov 2016 20:34:00 +0000

While working on tracking down some tricky UI bugs in about:addons, I wondered what it would look like to rewrite it using web technologies. I've been meaning to learn React (which the Firefox devtools use), and it seems like a good choice for this kind of application:

  1. easy to create reusable components
XBL is used for this in the current about:addons, but this is a non-standard Mozilla-specific technology that we want to move away from, along with XUL.
  2. manage state transitions, undo, etc.
There is quite a bit of code in the current about:addons implementation to deal with undoing various actions. React makes it pretty easy to track this sort of thing through libraries like Redux.

To explore this a bit, I made a simple React version of about:addons. It's actually installable as a Firefox extension which overrides about:addons.

Note that it's just a proof-of-concept and almost certainly buggy - the way it's hooking into the existing sidebar in about:addons needs some work for instance. I'm also a React newb so pretty sure I'm doing it wrong. Also, I've only implemented #1 above so far, as of this writing.

I am finding React pretty easy to work with, and I suspect it'll take far less code to write something equivalent to the current implementation.




Robert Helmer: Toy Add-on Manager in Rust

Wed, 30 Nov 2016 20:14:00 +0000

I've been playing with Rust lately, and since I mostly work on the Add-on Manager these days, I thought I'd combine these into a toy rust version.

The Add-on Manager in Firefox is written in Javascript. It uses a lot of ES6 features, and has "chrome" (as opposed to "content") privileges, which means that it can access internal Firefox-only APIs to do things like download and install extensions, themes, and plugins.

One of the core components is a class named AddonInstall which implements a state machine to download, verify, and install add-ons. The main purpose of this toy Rust project so far has been to model the design and see what it looks like.

So far mostly it's an exercise in how awesome Enum is compared to the JS equivalent (int constants), and how nice match is (versus switch statements).
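To make that comparison concrete, here is a minimal, self-contained sketch of a download-verify-install state machine expressed as a Rust enum driven by match. The state names, fields, and paths are illustrative only; they are not the actual states or API of the AddonInstall class or of the toy project.

    // A minimal sketch of a download -> verify -> install state machine.
    // State names are illustrative, not the real AddonInstall states.
    #[derive(Debug)]
    enum InstallState {
        Downloading { url: String },
        Verifying { path: String },
        Installing { path: String },
        Done,
        Failed(String),
    }

    fn step(state: InstallState) -> InstallState {
        match state {
            InstallState::Downloading { url } => {
                // Pretend the download succeeded and produced a local file.
                println!("downloaded {}", url);
                InstallState::Verifying { path: String::from("/tmp/addon.xpi") }
            }
            InstallState::Verifying { path } => {
                // A real implementation would check a signature here.
                println!("verified {}", path);
                InstallState::Installing { path }
            }
            InstallState::Installing { path } => {
                println!("installed {}", path);
                InstallState::Done
            }
            // Terminal states stay where they are.
            finished => finished,
        }
    }

    fn main() {
        let mut state = InstallState::Downloading {
            url: String::from("https://example.com/addon.xpi"),
        };
        while !matches!(state, InstallState::Done | InstallState::Failed(_)) {
            state = step(state);
        }
        println!("final state: {:?}", state);
    }

The compiler forces every state to be handled, and data that only exists in a given state (such as the downloaded file's path) lives inside that state's variant, which is exactly the property that int constants in JavaScript cannot give you.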

It's possible to compile the Rust app to a native binary, or alternatively to asm/wasm, so one thing I'd like to try soon is loading a wasm version of this Rust app inside a Firefox JSM (which is the type of JS module used for internal Firefox code).

There's a webplatform crate on crates.io which allows for easy DOM access; it'd be interesting to see if this works for Firefox chrome code too.




Air Mozilla: The Joy of Coding - Episode 82

Wed, 30 Nov 2016 18:00:00 +0000

(image) mconley livehacks on real Firefox bugs while thinking aloud.




Daniel Glazman: Rest in peace, Opera...

Wed, 30 Nov 2016 08:42:00 +0000

I think we can now safely say Opera, the browser maker, is no more. My opinions about the acquisition of the browser by a Chinese trust were recently confirmed, and people are being let go or fleeing en masse. Rest in peace, Opera; you brought good, very good things to the Web and we'll miss you.

In fact, I'd love to see two things appear:

  • Vivaldi is of course the new Opera, it was clear from day 1. Even the name was chosen for that. The transformation will be completed the day Vivaldi joins W3C and sends representatives to the Standardization tables.
  • Vivaldi and Brave should join forces, in my humble opinion.



Cameron Kaiser: 45.5.1 chemspill imminent

Wed, 30 Nov 2016 06:56:00 +0000

The plan was to get you a test build of TenFourFox 45.6.0 this weekend, but instead you're going to get a chemspill for 45.5.1 to fix an urgent 0-day exploit in Firefox which is already in the wild. Interestingly, the attack method is very similar to the one the FBI infamously used to deanonymise Tor users in 2013, which is a reminder that any backdoor the "good guys" can sneak through, the "bad guys" can too. TenFourFox is technically vulnerable to the flaw, but the current implementation is x86-based and tries to attack a Windows DLL, so as written it will merely crash our PowerPC systems. In fact, without giving anything away about the underlying problem, our hybrid-endian JavaScript engine actually reduces our exposure surface further because even a PowerPC-specific exploit would require substantial modification to compromise TenFourFox in the same way. That said, we will still implement the temporary safety fix as well. The bug is a very old one, going back to at least Firefox 4.

Meanwhile, 45.6 is going to be scaled back a little. I was able to remove telemetry from the entire browser (along with its dependencies), and it certainly was snappier in some sections, but required wholesale changes to just about everything to dig it out and this is going to hurt keeping up with the ESR repository. Changes this extensive are also very likely to introduce subtle bugs. (A reminder that telemetry is disabled in TenFourFox, so your data is never transmitted, but it does accumulate internal counters and while it is rarely on a hot codepath there is still non-zero overhead having it around.) I still want to do this but probably after feature parity, so 45.6 has a smaller change where telemetry is instead only removed from user-facing chrome JavaScript. This doesn't help as much but it's a much less invasive change while we're still on source parity with 45ESR.

Also, tests with the "non-volatile" part of IonPower-NVLE showed that switching to all, or mostly, non-volatile registers in the JavaScript JIT compiler had no obvious impact on most benchmarks and occasionally was a small negative. Even changing the register allocator to simply favour non-volatile registers, without removing volatiles, had some small regressions. As it turns out, Ion actually looks pretty efficient with saving volatile registers prior to calls after all and the overhead of having to save non-volatile registers upon entry apparently overwhelms any tiny benefit of using them. However, as a holdover from my plans for NVLE, we've been saving three more non-volatile general purpose registers than we allow the allocator to use; since we're paying the overhead to use them already, I added those unused registers to the allocator and this got us around 1-2% benefit with no regression. That will ship with 45.6 and that's going to be the extent of the NVLE project.

On the plus side, however, 45.6 does have HiDPI support completely removed (because no 10.6-compatible system has a retina display, let alone any Power Mac), which makes the widget code substantially simpler in some sections, and has a cou[...]



Chris H-C: Privileged to be a Mozillian

Tue, 29 Nov 2016 22:00:22 +0000

Mike Conley noticed a bug. There was a regression on a particular Firefox Nightly build he was tracking down. It looked like this:

(image)

A pretty big difference… only there was a slight problem: there were no relevant changes between the two builds. Being the kind of developer he is, :mconley looked elsewhere and found a probe that only started being included in builds starting November 16. The plot showed him data starting from November 15.

He brought it up on irc.mozilla.org#telemetry. Roberto Vitillo was around and tried to reproduce, without success. For :mconley the regression was on November 5 and the data on the other probe started November 15. For :rvitillo the regression was on November 6 and the data started November 16. After ruling out addons, they assumed it was the dashboard’s fault and roped me into the discussion. This is what I had to say:

(image)

You see, :mconley is in the Toronto (as in Canada) Mozilla office, and Roberto is in the London (as in England) Mozilla office. There was a bug in how dates were being calculated that made it so the data displayed differently depending on your timezone. If you were on or East of the Prime Meridian you got the right answer. West? Everything looks like it happens one day early.

I hammered out a quick fix, which means the dashboard is now correct… but in thinking back over this bug in a post-mortem-kind-of-way, I realized how beneficial working in a distributed team is. Having team members in multiple timezones not only provided us with a quick test location for diagnosing and repairing the issue, it equipped us with the mindset to think of timezones as a problematic element in the first place.

Working in a distributed fashion has conferred upon us a unique and advantageous set of tools, experiences, thought processes, and mechanisms that allow us to ship amazing software to hundreds of millions of users. You don’t get that from just any cube farm.

#justmozillathings

:chutten

[...]
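For readers who want to see the failure mode concretely, the sketch below (in Rust, purely illustrative; the dashboard itself is JavaScript and this is not its code) shows how a single UTC instant can land on different calendar days once a viewer's UTC offset is applied, which is the class of bug described in the post above.

    // Day index = number of whole days since the Unix epoch.
    fn utc_day(seconds_since_epoch: i64) -> i64 {
        seconds_since_epoch.div_euclid(86_400)
    }

    // Same instant, but bucketed by a viewer's local calendar day.
    fn local_day(seconds_since_epoch: i64, utc_offset_seconds: i64) -> i64 {
        (seconds_since_epoch + utc_offset_seconds).div_euclid(86_400)
    }

    fn main() {
        // 2016-11-16 00:30 UTC as seconds since the epoch.
        let t = 1_479_256_200_i64;
        println!("UTC day index:             {}", utc_day(t));
        println!("London (UTC+0) day index:  {}", local_day(t, 0));
        println!("Toronto (UTC-5) day index: {}", local_day(t, -5 * 3600));
        // The last line prints a day index one lower: the same data point
        // appears to belong to the previous day for a viewer west of the
        // Prime Meridian.
    }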



Chris Cooper: RelEng & RelOps highlights - November 29, 2016

Tue, 29 Nov 2016 21:27:33 +0000

Welcome back. As the podiatrist said, lots of exciting stuff is afoot.

Modernize infrastructure:
  • The big news from the past few weeks comes from the TaskCluster migration project, where we now have nightly updates being served for both Linux and Android builds on the Date project branch. If you’re following along in treeherder, this is the equivalent of “tier 2” status. We’re currently working on polish bugs and a whole bunch of verification work before we attempt to elevate these new nightly builds to tier 1 status on the mozilla-central branch, effectively supplanting the buildbot-generated variants. We hope to achieve that goal before the end of 2017. Even tier 2 is a huge milestone here, so cheers to everyone on the team who has helped make this happen, chiefly Aki, Callek, Kim, Jordan, and Mihai.
  • A special shout-out to Dustin who helped organize the above migration work over the past few months by writing a custom dependency tracking tool. The code is here https://github.com/taskcluster/migration and you can see output here: http://migration.taskcluster.net/ It’s been super helpful!

Improve Release Pipeline:
  • Many improvements to Balrog were put into production this past week, including one from a new volunteer. Ben blogged about them in detail.
  • Aki released several scriptworker releases to stabilize polling and gpg homedir creation. scriptworker 1.0.0b1 enables chain of trust verification.
  • Aki added multi-signing-format capability to scriptworker and signingscript; this is live on the Date project branch.
  • Aki added a shared scriptworker puppet module, making it easier to add new instance types. https://bugzilla.mozilla.org/show_bug.cgi?id=1309293
  • Aki released dephash 0.3.0 with pip>=9.0.0 and hashin>=0.7.0 support.

Improve CI Pipeline:
  • Nick optimized our requests for AWS spot pricing, shaving several minutes off the runtime of the script which launches new instances in response to pending buildbot jobs.
  • Kim disabled Windows XP tests on trunk, and winxp talos on all branches (https://bugzilla.mozilla.org/show_bug.cgi?id=1310836 and https://bugzilla.mozilla.org/show_bug.cgi?id=1317716). Now Alin is rebalancing the Windows 8 pools so we can enable e10s testing on Windows 8 with the re-imaged XP machines. Recall that Windows XP is moving to the ESR branch with Firefox 52, which is currently on the Aurora/Developer Edition release branch.
  • Kim enabled Android x86 nightly builds on the Date project branch: https://bugzilla.mozilla.org/show_bug.cgi?id=1319546
  • Kim enabled SETA on the graphics projects branch to reduce wait times for test machines: https://bugzilla.mozilla.org/show_bug.cgi?id=1319490

Operational:
  • Rok has deployed the first service based on the new releng microservices architecture. You can find the new version of TryChooser here: https://mozilla-releng.net/trychooser/ More information about the services and framework itself can be found here: https://docs.mozilla-releng.net/

Release:
  • Firefox 50 has been released. We are currently in the beta cycle f[...]



Mozilla VR Blog: WebVR coming to Servo: Architecture and latency optimizations

Tue, 29 Nov 2016 20:23:55 +0000

We are happy to announce that the first WebVR patches are landing in Servo. For the impatient: you can download a highly experimental Servo binary compatible with HTC Vive. Switch on your headset and run:

    servo.exe --resources-path resources webvr\room-scale.html

The current implementation supports the WebVR 1.2 spec that enables the API in contexts other than the main page thread, such as WebWorkers. We’ve been working hard on an optimized render path for VR to achieve smooth FPS and the required less-than-20ms latency to avoid motion sickness. This is the overall architecture (architecture diagram in the original post):

Rust-WebVR Library
The Rust WebVR implementation is a dependency-free library providing both the WebVR spec implementation and the integration with the vendor-specific SDKs (OpenVR, Oculus …). Having it decoupled into its own component comes with multiple advantages:
  • Fast develop-compile-test cycle. Compilation times are way faster than developing and testing in a full browser.
  • Contributions are easier because developers don’t have to deal with the complexity of a browser code base.
  • It can be used on any third party project: Room scale demo using vanilla Rust.

The API is inspired by the easy-to-use WebVR API but adapted to Rust design patterns. The VRService trait offers an entry point to access native SDKs like OpenVR and the Oculus SDK. It allows performing operations such as initialization, shutdown, event polling and VR device discovery. The VRDevice trait provides a way to interact with Virtual Reality headsets. The integrations with vendor-specific SDKs (OpenVR, Oculus…) are built on top of the VRService and VRDevice traits. OpenVRService, for instance, interfaces with Valve’s OpenVR API. While the code is written in Rust, native SDKs are usually implemented in C or C++; we use Rust FFI and rust-bindgen to generate Rust bindings from the native C/C++ header files. MockService implements a mock VR device that can be used for testing or developing without having a physical headset available. You will be able to get some code done in the train or while attending a boring talk or meeting ;) VRServiceManager is the main entry point to the rust-webvr library. It handles the life cycle of, and interfaces to, all available VRService implementations. You can use cargo features to register the default implementations or manually register your own ones. Here is an example of initialization in a vanilla Rust app (the listing is an image in the original post; a rough sketch of these interfaces appears at the end of this entry).

WebVR integration in Servo
Performance and security are both top priorities in a Web browser. DOM objects are allowed to use VRDevices but they neither own them nor have any direct pointers to native objects. There are many reasons for this:
  • JavaScript execution is untrusted and the input data might be malicious. That’s why the entire JavaScript thread must be isolated to its own sandboxed process.
  • There could be many parallel JavaScript contexts requesting access to the same native VRDevice instance, which could lead to data race conditions.
  • The WebVR spec enforces privacy a[...]
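As a rough, hypothetical sketch only (the names and signatures below are illustrative and do not match the actual rust-webvr crate), the shape described above, a service trait for SDK access plus a device trait per headset, exercised here through a mock backend, could look something like this:

    // Hypothetical sketch: names and signatures are illustrative and do not
    // match the real rust-webvr API.

    // Entry point into a vendor SDK: initialization, event polling, discovery.
    trait VRService {
        fn initialize(&mut self) -> Result<(), String>;
        fn poll_events(&mut self) -> Vec<String>;
        fn fetch_devices(&mut self) -> Vec<Box<dyn VRDevice>>;
    }

    // A single headset exposed by a service.
    trait VRDevice {
        fn name(&self) -> String;
        // A 4x4 view matrix stands in for real pose/frame data here.
        fn frame_view_matrix(&self) -> [f32; 16];
        fn submit_frame(&mut self, texture_id: u32);
    }

    // Stand-ins for the kind of mock backend mentioned above: no headset needed.
    struct MockDevice;

    impl VRDevice for MockDevice {
        fn name(&self) -> String {
            "Mock HMD".to_string()
        }
        fn frame_view_matrix(&self) -> [f32; 16] {
            // Identity matrix as a placeholder pose.
            let mut m = [0.0_f32; 16];
            m[0] = 1.0;
            m[5] = 1.0;
            m[10] = 1.0;
            m[15] = 1.0;
            m
        }
        fn submit_frame(&mut self, texture_id: u32) {
            println!("submitted texture {}", texture_id);
        }
    }

    struct MockService;

    impl VRService for MockService {
        fn initialize(&mut self) -> Result<(), String> {
            Ok(())
        }
        fn poll_events(&mut self) -> Vec<String> {
            Vec::new()
        }
        fn fetch_devices(&mut self) -> Vec<Box<dyn VRDevice>> {
            vec![Box::new(MockDevice)]
        }
    }

    fn main() {
        let mut service = MockService;
        service.initialize().expect("init failed");
        for mut device in service.fetch_devices() {
            println!("found device: {}", device.name());
            device.submit_frame(42);
        }
    }

In the real library the per-SDK backends (OpenVR, Oculus, the mock) each implement the service and device traits, and VRServiceManager owns and enumerates them, as described in the post.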



François Marier: Persona Guiding Principles

Tue, 29 Nov 2016 19:42:00 +0000

Given the impending shutdown of Persona and the lack of a clear alternative to it, I decided to write about some of the principles that guided its design and development in the hope that it may influence future efforts in some way.

Permission-less system
There was no need for reliers (sites relying on Persona to log their users in) to ask for permission before using Persona. Just like a site doesn't need to ask for permission before creating a link to another site, reliers didn't need to apply for an API key before they got started and authenticated their users using Persona. Similarly, identity providers (the services vouching for their users' identity) didn't have to be whitelisted by reliers in order to be useful to their users.

Federation at the domain level
Just like email, Persona was federated at the domain name level and put domain owners in control. Just like they can choose who gets to manage email for their domain, they could:
  • run their own identity provider, or
  • delegate to their favourite provider.
Site owners were also in control of the mechanism and policies involved in authenticating their users. For example, a security-sensitive corporation could decide to require 2-factor authentication for everyone or put a very short expiry on the certificates they issued. Alternatively, a low-security domain could get away with a much simpler login mechanism (including a "0-factor" mechanism in the case of http://mockmyid.com!).

Privacy from your identity provider
While identity providers were the ones vouching for their users' identity, they didn't need to know the websites that their users are visiting. This is a potential source of control or censorship and the design of Persona was able to eliminate this. The downside of this design of course is that it becomes impossible for an identity provider to provide their users with a list of all of the sites where they successfully logged in for audit purposes, something that centralized systems can provide easily.

The browser as a trusted agent
The browser, whether it had native support for the BrowserID protocol or not, was the agent that the user needed to trust. It connected reliers (sites using Persona for logins) and identity providers together and got to see all aspects of the login process. It also held your private keys and therefore was the only party that could impersonate you. This is of course a power which it already held by virtue of its role as the web browser. Additionally, since it was the one generating and holding the private keys, your browser could also choose how long these keys are valid and may choose to vary that amount of time depending on factors like a shared computer environment or Private Browsing mode. Other clients/agents would likely be necessary as well, especially when it comes to interacting with mobile applications or native desktop applications. Each client would have its own key, but they would all be signed by the identity provider and there[...]



Nathan Froyd: accessibility tools for everyone

Tue, 29 Nov 2016 19:07:37 +0000

From The Man Who Is Transforming Microsoft:

[Satya Nadella] moves to another group of kids and then shifts his attention to a teenage student who is blind. The young woman has been working on building accessibility features using Cortana, Microsoft’s speech-activated digital assistant. She smiles and recites the menu options: “Hey Cortana. My essentials.” Despite his transatlantic jet lag Nadella is transfixed. “That’s awesome,” he says. “It’s fantastic to see you pushing the boundaries of what can be done.” He thanks her and turns toward the next group.

“I have a particular passion around accessibility, and this is something I spend quite a bit of cycles on,” Nadella tells me later. He has two daughters and a son; the son has special needs. “What she was showing me is essentially how she’s building out as a developer the tools that she can use in her everyday life to be productive. One thing is certain in life: All of us will need accessibility tools at some point.”




David Lawrence: Happy BMO Push Day!

Tue, 29 Nov 2016 15:06:26 +0000

the following changes have been pushed to bugzilla.mozilla.org:

  • [1264821] We want to replace the project kick-off form with a contract request form
  • [1310757] Update form: bugzilla.mozilla.org/form.CRM

discuss these changes on mozilla.tools.bmo.





Bogomil Shopov: I’ve launched a Mozilla Donation Campaign for #CyberMonday craziness.

Tue, 29 Nov 2016 05:49:36 +0000

I have started a small campaign today and I am so happy to see it working – 138 engagements so far and a few donations. There is no way to see the donations, but I can see more “I have donated” tweets in the target languages.

Please retweet and take action :)

Update: Actually, there is a way to check the effect of the campaign. I used a web tool to count the tweets that users send after donating.

I can see the trend here:
(image)

The post I’ve launched a Mozilla Donation Campaign for #CyberMonday craziness. appeared first on Bogomil Shopov.




This Week In Rust: This Week in Rust 158

Tue, 29 Nov 2016 05:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts
  • How to speed up the Rust compiler some more.
  • Rust’s standard hash table types could go quadratic.
  • Rust’s iterators are inefficient, and here’s what we can do about it.
  • Goto statement considered (mostly) harmless: Translating C goto statements to Rust with Corrode, a C to Rust translator.
  • This year in nom: 2.0 is here.
  • First steps in Nom: Parsing pseudo GLSL.
  • Writing GStreamer Elements in Rust (Part 3).
  • Parsing data from untrusted sources like it’s 2016.
  • Painless Rust tests and benches on iOS and Android with Dinghy.
  • First Rust+GObject coding session.

Other Weeklies from Rust Community
  • This week in Rust docs 32. Updates from the Rust documentation team.
  • This week in Tock Embedded OS 9. Tock is a safe, multitasking operating system for low-power, low-memory microcontrollers.
  • This week in Ruma 2016-11-27. Ruma is a Matrix homeserver written in Rust.
  • This week in TiKV 2016-11-28. TiKV is a distributed Key-Value database.
  • Way Cooler Alpha Update (2016 November). Way Cooler is a customizable tiling window manager written in Rust for Wayland.

Crate of the Week
Since there were no nominations, this week has to go without a Crate of the Week. Sorry. Submit your suggestions and votes for next week!

Call for Participation
Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information.
  • [less easy] rayon: Parity with the Iterator trait.
  • [easy] rust: Compiling libunwind with --test for arm-musl targets produces dynamically linked binaries.
  • [less easy] servo: Make FetchMetadata reflect all possible response types.
  • [easy] servo: Make HTTP redirect fetch return an error if redirecting to non-HTTP(S).
If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core
66 pull requests were merged in the last week. Not much, but there were a good number of awesome changes:
  • Implement break with value (RFC #1624)
  • Stabilized name resolution changes (RFC #1560)
  • Implement panic-safe slicing (RFC #1679)
  • Make more types uninhabited
  • Pad const enums only once
  • Simplify HashMap probing
  • Reduce type construction calls, Reduce allocations while walking types, make HirVec smaller – nnethercote is at it again…
  • Improve macro name r[...]



Mike Hommey: Announcing git-cinnabar 0.4.0 release candidate

Tue, 29 Nov 2016 00:18:14 +0000

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It lets you clone, pull and push from/to Mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0b3?

  • Updated git to 2.10.2 for cinnabar-helper.
  • Added a new git cinnabar download command to download a helper on platforms where one is available.
  • Fixed some corner cases with pack windows in the helper. This prevented cloning mozilla-central with the helper.
  • Fixed bundle2 support that broke cloning from a mercurial 4.0 server in some cases.
  • Fixed some corner cases involving empty files. This prevented cloning Mozilla’s stylo incubator repository.
  • Fixed some correctness issues in file parenting when pushing changesets pulled from one mercurial repository to another.
  • Various improvements to the rules to build the helper.
  • Experimental (and slow) support for pushing merges, with caveats. See issue #20 for details about the current status.

And since I realize I didn’t announce beta 3:

What’s new since 0.4.0b2?

  • Properly handle bundle2 errors, avoiding git to believe a push happened when it didn’t. (0.3.x is unaffected)



The Mozilla Blog: The Glass Room: Looking into Your Online Life

Mon, 28 Nov 2016 22:32:32 +0000

It’s that time of year! The excitement of Black Friday carries into today – CyberMonday – the juxtaposition of the analog age and the digital age. Both days are fueled by media and retailers alike and are about shopping. And both days are heavily reliant on the things that we want, that we need and what we think others want and need. And, all of it is powered by the data about us as consumers.

So, today – the day of electronic shopping – is the perfect day to provoke some deep thinking on how our digital lives impact our privacy and online security.

How do we do this? One way is by launching “The Glass Room” – an art exhibition and educational space that teaches visitors about the relationship between technology, privacy and online security. The Glass Room will be open in downtown New York City for most of the holiday shopping season. Anyone can enter the “UnStore” for free to get a behind the scenes look at what happens to your privacy online. You’ll also get access to a crew of “InGeniouses” who can help you with online privacy and data tips and tricks. The Glass Room has 54 interactive works that show visitors the relationship between your personal data and the technology services and products you use.

This is no small task. Most of us don’t think about our online security and privacy every day. As with our personal health it is important but presumed. Still, when we don’t take preventative care of ourselves, we are at greater risk for getting sick. The same is true online. We are impacted by security and privacy issues everyday without even realizing it. In the crush of our daily lives, few of us have the time to learn how to better protect ourselves and preserve our privacy online. We don’t always take enough time to get our checkups, eat healthily and stay active – but we would be healthier if we did. We are launching The Glass Room to allow you to think, enjoy and learn how to do a checkup of your online health.

We can buy just about anything we imagine on CyberMonday and have it immediately shipped to our door. We have to work a little harder to protect our priceless privacy and security online. As we collectively exercise our shopping muscles, I hope we can also think about the broader importance of our online behaviors to maintaining our online health.

If you are in New York City, please come down to The Glass Room and join the discussion. You can also check out all the projects, products and stories that The Glass Room will show you to look into your online life from different perspectives by visiting The Glass Room online. [...]



Seif Lotfy: Rustifying IronFunctions

Mon, 28 Nov 2016 21:47:56 +0000

As mentioned in my previous blog post, there is a new open-source, lambda-compatible, on-premise, language-agnostic, server-less compute service called IronFunctions. While IronFunctions is written in Go, Rust is still a much-admired language, and it was decided to add support for it in the fn tool. So now you can use the fn tool to create and publish functions written in Rust.

Using Rust with functions
The easiest way to create an Iron function in Rust is via cargo and fn.

Prerequisites
First create an empty Rust project as follows:

    $ cargo init --name func --bin

Make sure the project name is func and is of type bin. Now just edit your code; a good example is the following "Hello" example:

    use std::io;
    use std::io::Read;

    fn main() {
        let mut buffer = String::new();
        let stdin = io::stdin();
        if stdin.lock().read_to_string(&mut buffer).is_ok() {
            println!("Hello {}", buffer.trim());
        }
    }

You can find this example code in the repo. Once done you can create an Iron function.

Creating a function

    $ fn init --runtime=rust /

in my case it's fn init --runtime=rust seiflotfy/rustyfunc, which will create the func.yaml file required by functions.

Building the function

    $ fn build

Will create a docker image / (again in my case seiflotfy/rustyfunc).

Testing
You can run this locally without pushing it to functions yet by running:

    $ echo Jon Snow | fn run
    Hello Jon Snow

Publishing
In the directory of your rust code do the following:

    $ fn publish -v -f -d ./

This will publish your code to your functions service.

Running it
Now to call it on the functions service:

    $ echo Jon Snow | fn call seiflotfy rustyfunc

which is the equivalent of:

    $ curl -X POST -d 'Jon Snow' http://localhost:8080/r/seiflotfy/rustyfunc

Next
In the next post I will be writing a more computation-intensive Rust function to test/benchmark IronFunctions, so stay tuned :D [...]



William Lachance: Training an autoclassifier

Mon, 28 Nov 2016 21:29:47 +0000

Here at Mozilla, we’ve accepted that a certain amount of intermittent failure in our automated testing of Firefox is to be expected. That is, for every push, a subset of the tests that we run will fail for reasons that have nothing to do with the quality (or lack thereof) of the push itself. On the main integration branches that developers commit code to, we have dedicated staff and volunteers called sheriffs who attempt to distinguish these expected failures from intermittents through a manual classification process using Treeherder. On any given push, you can usually find some failed jobs that have stars beside them; this is the work of the sheriffs, indicating that a job’s failure is “nothing to worry about”:

This generally works pretty well, though unfortunately it doesn’t help developers who need to test their changes on Try, which have the same sorts of failures but no sheriffs to watch them or interpret the results. For this reason (and a few others which I won’t go into detail on here), there’s been much interest in having Treeherder autoclassify known failures.

We have a partially implemented version that attempts to do this based on structured (failure line) information, but we’ve had some difficulty creating a reasonable user interface to train it. Sheriffs are used to being able to quickly tag many jobs with the same bug. Having to go through each job’s failure lines and manually annotate each of them is much more time consuming, at least with the approaches that have been tried so far. It’s quite possible that this is a solvable problem, but I thought it might be an interesting exercise to see how far we could get training an autoclassifier with only the existing per-job classifications as training data.

With some recent work I’ve done on refactoring Treeherder’s database, getting a complete set of per-job failure line information is only a small SQL query away:

    select bjm.id, bjm.bug_id, tle.line
    from bug_job_map as bjm
    left join text_log_step as tls on tls.job_id = bjm.job_id
    left join text_log_error as tle on tle.step_id = tls.id
    where bjm.created > '2016-10-31'
      and bjm.created < '2016-11-24'
      and bjm.user_id is not NULL
      and bjm.bug_id is not NULL
    order by bjm.id, tle.step_id, tle.id;

Just to give some explanation of this query: the “bug_job_map” table provides a list of bugs that have been applied to jobs. The “text_log_step” and “text_log_error” tables contain the actual errors that Treeherder has extracted from the textual logs (to explain the failure). From this raw list of mappings and errors, we can construct a data structure incorporating the job, the assigned bug and the textual errors inside it. For exa[...]



Mike Hoye: Planet: A Minor Administrative Note

Mon, 28 Nov 2016 20:50:23 +0000

I will very shortly be adding some boilerplate to the Planet homepage as well as the Planet.m.o entry on Wikimo, to the effect that:

All of this was true before, but we’re going to highlight it on the homepage and make it explicit in the wiki; we want Planet to stay what it is, open, participatory, an equal and accessible platform for everyone involved, but we also don’t want Planet to become an attack surface, against Mozilla or anyone else, and won’t allow that to happen out of willful blindness or neglect.

If you’ve got any questions or concerns about this, feel free to leave a comment or email me.




Nick Cameron: How fast can I build Rust?

Mon, 28 Nov 2016 20:17:32 +0000

I've been collecting some data on the fastest way to build the Rust compiler. This is primarily for Rust developers to optimise their workflow, but it might also be of general interest. TL;DR: the fastest way to build Rust (on a computer with lots of cores) is with -j6, RUSTFLAGS=-Ccodegen-units=10.

I tested using a commit from the 24th of November 2016. I was using the make build system (though I would expect the same results using Rustbuild). The test machine is a dedicated build machine - it has 12 physical cores, lots of RAM, and an SSD. It wasn't used for anything else during the benchmarking, and doesn't run a windowing system. It was running Ubuntu 16.10 (Linux). I only did one run per set of variables. That is not ideal, but where I repeated runs, they were fairly consistent - usually within a second or two and never more than 10. I've rounded all results to the nearest 10 seconds, and I believe that level of precision is about right for the experiment.

I varied the number of jobs (-jn) and the number of codegen units (RUSTFLAGS=-Ccodegen-units=n). The command line looked something like RUSTFLAGS=-Ccodegen-units=10 make -j6. I measured the time to do a normal build of the whole compiler and libraries (make), to build the stage1 compiler (make rustc-stage, the minimal amount of work required to get a compiler for testing), and to build and bootstrap the compiler and run all tests (make && make check - I didn't run a simple make check because adding -jn to that causes many tests to be skipped; setting codegen-units > 1 causes some tests to fail).

The jobs number is the number of tasks make can run in parallel. These jobs are self-contained instances of the compiler, i.e., this is parallelism outside the compiler. The amount of parallelism is limited by dependencies between crates in the compiler. Since the crates in the compiler are rather large and there are a lot of dependencies, the benefits of using a large number of jobs are much weaker than in a typical C or C++ program (e.g., LLVM). Note, however, that there is no real drawback to using a larger number of jobs; there just won't be any benefit.

Codegen units introduce parallelism within the compiler. First, some background: compilation can be roughly split into two phases - first, code is analysed (parsing, type checking, etc.), then object code is generated from the products of analysis. The Rust compiler uses LLVM for the code generation part. Roughly half the time of an optimised build is spent in each of analysis and code generation. Nearly all optimisation is performed in the code generation part. The compilation unit in Rust is a crate; that is, the Rust compiler analyses and compiles a single cra[...]
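To recap, the invocations being timed look roughly like the following (a sketch of my reading of the post; the exact environment and paths are assumed):

# Full build of the compiler and libraries, with the settings that came out fastest:
RUSTFLAGS=-Ccodegen-units=10 make -j6

# Build, bootstrap, and run the test suite; make check gets no -jn flag,
# since passing -jn to it causes many tests to be skipped:
RUSTFLAGS=-Ccodegen-units=10 make -j6 && make check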



Mozilla Open Innovation Team: Announcing Panel of Judges for Mozilla’s Equal Rating Innovation Challenge

Mon, 28 Nov 2016 19:24:26 +0000

Mozilla is delighted to announce the esteemed judges for the Equal Rating Innovation Challenge:

  • Rocio Fonseca (Chile), Executive Director of Start-Up Chile
  • Omobola Johnson (Nigeria), Honorary Chair of the Alliance for Affordable Internet and Partner of TLcom Capital LLP
  • Nikhil Pahwa (India), Founder at MediaNama and Co-founder of savetheinternet.in
  • Marlon Parker (South Africa), Founder of Reconstructed Living Labs

These four leaders will join Mitchell Baker (USA), Executive Chairwoman of Mozilla, on the judging panel for the Equal Rating Innovation Challenge. The judges will be bringing their wealth of industry experience and long-standing expertise from various positions in policy, entrepreneurship, and consulting in the private and public sector to assess the challenge submissions.

Mozilla seeks to find novel solutions to connect all people to the open Internet so they can realize the full potential of this globally shared resource. We’re both thrilled and proud to have gathered such a great roster of judges for the Innovation Challenge — it’s a testament to the global scope of the initiative. Each one of these leaders has already contributed in many ways to tackle the broader challenge of connecting the unconnected and it is an honour to have these global heavyweights in our panel.

The Equal Rating Innovation Challenge will support promising solutions through expert mentorship and funding of US$250,000 in prize monies split into three categories: Best Overall (with a key focus on scalability), Best Overall Runner-up, and Most Novel Solution (based on experimentation with a potential high reward).

The judges will score submissions according to the degree by which they meet the following attributes:

  • 25 pts: Scalability
  • 20 pts: Focus on user experience
  • 15 pts: Differentiation
  • 10 pts: Ability to be quickly deployed into the market (within 9–12 months)
  • 10 pts: Potential of the team
  • 10 pts: Community voting results

The deadline for submission is 6 January 2017. On 17 January, the judges will announce five semifinalists. Those semifinalists will be provided advice and mentorship from Mozilla experts in topics such as policy, business, engineering, and design to hone their solution. The semifinalists will take part in a Demo Day on 9 March 2017 in New York City to pitch their solutions to the judges. The public will then be invited to vote for their favorite solution online during a community voting period from 10–16 March, and the challenge winners will be announced on 29 March 2017.

Announcing Panel of Judges for Mozilla’s Equal Rating Innovation Challenge was originally published in Mozilla Open Innovation on Medium, where people are continuing the conver[...]



QMO: Firefox 51 Beta 3 Testday Results

Mon, 28 Nov 2016 13:26:22 +0000

Hi everyone!

Last Friday, November 25th, we held Firefox 51 Beta 3 Testday.  It was a successful event (please see the results section below) so a big Thank You goes to everyone involved.

First of all, many thanks to our active contributors: Krithika MAP, Moin Shaikh, M A Prasanna, Steven Le Flohic, P Avinash Sharma, Iryna Thompson.

Bangladesh team: Nazir Ahmed Sabbir, Sajedul Islam, Maruf Rahman, Majedul islam Rifat, Ahmed Safa,  Md Rakibul Islam, M. Almas Hossain, Foysal Ahmed, Nadim Mahmud, Amir Hossain Rhidoy, Mohammad Abidur Rahman Chowdhury, Mahfujur Rahman Mehedi, Md Omar Faruk sobuj, Sajal Ahmed, Rezwana Islam Ria, Talha Zubaer, maruf hasan, Farhadur Raja Fahim, Saima sharleen, Azmina AKterPapeya, Syed Nayeem Roman.

India team:  Vibhanshu Chaudhary, Surentharan.R.A, Subhrajyoti Sen, Govindarajan Sivaraj, Kavya Kumaravel, Bhuvana Meenakshi.K, Paarttipaabhalaji, P Avinash Sharma, Nagaraj V, Pavithra R, Roshan Dawande, Baranitharan, SriSailesh, Kesavan S, Rajesh. D, Sankararaman, Dinesh Kumar M, Krithikasowbarnika.

Secondly, a big thank you to all our active moderators.

Results:

We hope to see you all in our next events, all the details will be posted on QMO!




Alessio Placitelli: Measuring tab and window usage in Firefox

Mon, 28 Nov 2016 10:23:48 +0000

With Mozilla’s Telemetry system, we have a powerful way to collect measurements in the clients while still complying with our rules of lean data collection and anonymization. Most of the measurements are collected in the form of histograms that are created on the client side and submitted to our Telemetry pipeline. However, recent needs for better … 



Myk Melez: Embedding Use Cases

Mon, 28 Nov 2016 09:29:50 +0000

A couple weeks ago, I blogged about Why Embedding Matters. A rendering engine can be put to a wide variety of uses. Here are a few of them. Which would you prioritize?

Headless Browser

A headless browser is an app that renders a web page (and executes its script) without displaying the page to a user. Headless browsers themselves have multiple uses, including automated testing of websites, web crawling/scraping, and rendering engine comparisons. Longstanding Mozilla bug 446591 tracks the implementation of headless rendering in Gecko, and SlimerJS is a prime example of a headless browser that would benefit from it. It’s a “scriptable browser for Web developers” that integrates with CasperJS and is compatible with the WebKit-based PhantomJS headless browser. It currently uses Firefox to “embed” Gecko, which means it doesn’t run headlessly (SlimerJS issue #80 requests embedding Gecko as a headless browser).

Hybrid Desktop App

A Hybrid Desktop App is a desktop app that is implemented primarily with web technologies but packaged, distributed, and installed as a native app. It enables developers to leverage web development skills to write an app that runs on multiple desktop platforms (typically Windows, Mac, Linux) with minimal platform-specific development. Generally, such apps are implemented using an application framework, and Electron is the one with momentum and mindshare; but there are others available. While frameworks can support deep integration with the native platform, the apps themselves are often shallower, limiting themselves to a small subset of platform APIs (window management, menus, etc.). Some are little more than a local web app loaded in a native window.

Hybrid Desktop Web Browser

A specialization of the Hybrid Desktop App, the Hybrid Desktop Web Browser is notable not only because Mozilla’s core product offering is a web browser but also because the category is seeing a wave of innovation, both within and outside of Mozilla. Besides Mozilla’s Tofino and Browser.html projects, there are open source startups like Brave; open-source hobbyist projects like Min, Alloy, electron-browser, miserve, and elector; and proprietary browsers like Blisk and Vivaldi. Those products aren’t all Hybrid Apps, but many of them are (and they all need to embed a rendering engine, one way or another).

Hybrid Mobile App

A Hybrid Mobile App is like a Hybrid Desktop App, but for mobile platforms (primarily iOS and Android). As with their desktop counterparts, they’re usually implemented using an application framework (like Cordova). And some use the system’s web rende[...]



Hub Figuière: libopenraw 0.1.0

Sun, 27 Nov 2016 05:16:06 +0000

I just released libopenraw 0.1.0. It is to be treated as a snapshot, as it hasn't reached the level of functionality I was hoping for, and it has been 5 years since the last release.

Head on to the download page to get a tarball.

Several new APIs, and some API + ABI breakage. Now the .pc files are parallel installable.




Mozilla Open Design Blog: Heading into the home stretch

Sun, 27 Nov 2016 01:56:41 +0000

Over the past few weeks, we’ve been exploring different iterations of our brand identity system. We know we need a solution that represents both who Mozilla is today and where we’re going in the future, and appeals both to people who know Mozilla well and new audiences who may not know or understand Mozilla yet. If you’re new to this project, you can read all about our journey on our blog, and the most recent post about the two different design directions that are informing this current round of work.

[TL;DR: Our “Protocol” design direction delivers well on our mission, legacy and vision to build an Internet as a global public resource that is healthy, open and accessible to all. Based on quantitative surveys, Mozillians and developers believe this direction does the best job supporting an experience that’s innovative, opinionated and inclusive, the attributes we want to be known for. In similar surveys, our target consumers evaluated our “Burst” design direction as the better option in terms of delivering on those attributes, and we also received feedback that this direction did a good job communicating interconnectedness and liveliness. Based on all of this feedback, our decision was to lead with the “Protocol” design direction, and explore ways to infuse it with some of the strengths of the “Burst” direction.]

Here’s an update on what we’ve been up to:

Getting to the heart of the matter

Earlier in our open design project, we conducted quantitative research to get statistically significant insights from our different key audiences (Mozillians, developers, consumers), and used these data points to inform our strategic decision about which design directions to continue to refine. At this point of our open design project, we used qualitative research to understand better what parts of the refined identity system were doing a good job creating that overall experience, and what was either confusing or contradictory. We want Mozilla to be known and experienced as a non-profit organization that is innovative, opinionated and inclusive, and our logo and other elements in our brand identity system – like color, language and imagery – need to reinforce those attributes. So we recruited participants in the US, Brazil, Germany and India between the ages of 18–40 years, who represent our consumer target audience: people who make decisions about the companies they support based on their personal values and ideals, which are driven by bettering their communities and themselves. 157 people parti[...]



Daniel Stenberg: HTTPS proxy with curl

Sat, 26 Nov 2016 15:49:24 +0000

Starting in version 7.52.0 (due to ship December 21, 2016), curl will support HTTPS proxies when doing network transfers, and by doing this it joins the small exclusive club of HTTP user-agents consisting of Firefox, Chrome and not too many others. Yes, you read this correctly.

This is different from the good old HTTP proxy. HTTPS proxy means that the client establishes a TLS connection to the proxy and then communicates over that, which is different to the normal and traditional HTTP proxy approach where the clients speak plain HTTP to the proxy. Talking HTTPS to your proxy is a privacy improvement, as it prevents people from snooping on your proxy communication. Even when using HTTPS over a standard HTTP proxy, there’s typically a setting-up phase first that leaks information about where the connection is being made, user credentials and more. Not to mention that an HTTPS proxy makes HTTP traffic “safe” to and from the proxy. HTTPS to the proxy also enables clients to speak HTTP/2 more easily with proxies. (Even though HTTP/2 to the proxy is not yet supported in curl.) In the case where a client wants to talk HTTPS to a remote server, when using an HTTPS proxy, it sends HTTPS through HTTPS.

Illustrating this concept: when using a traditional HTTP proxy, we connect initially to the proxy with HTTP in the clear, and only from then on does the HTTPS make it safe - compare with the HTTPS proxy case, where the connection is safe already in the first step.

The access to the proxy is made over network A. That network has traditionally been a corporate network or within a LAN or something, but we’re seeing more and more use cases where the proxy is somewhere on the Internet and then “Network A” is really huge. That includes use cases where the proxy for example compresses images or otherwise reduces bandwidth requirements.

Actual HTTPS connections from clients to servers are still done end-to-end encrypted even in the HTTP proxy case. Plain HTTP traffic between the user and the web site, however, will still be HTTPS-protected up to the proxy when an HTTPS proxy is used.

A complicated pull request

This awesome work was provided by Dmitry Kurochkin, Vasy Okhin, and Alex Rousskov. It was merged into master on November 24 in this commit. Doing this sort of major change in the TLS area in curl code is a massive undertaking, all the more so because curl supports getting built with one out of 11 or 12 different TLS libraries. Several of those are also system-specific so hardly any single d[...]
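To make the difference concrete, here is a rough sketch (the proxy host and port are made up; the only real change is the https:// scheme in the proxy URL, which needs curl 7.52.0 or later):

# Traditional HTTP proxy: the hop between curl and the proxy is in the clear.
curl --proxy http://proxy.example.com:3128 https://example.com/

# HTTPS proxy: the hop between curl and the proxy is itself TLS-protected.
curl --proxy https://proxy.example.com:3128 https://example.com/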



Andy McKay: Innovation

Sat, 26 Nov 2016 08:00:00 +0000

A while ago someone mentioned that the move to WebExtensions for Firefox was "an end to innovation". I asked that person to move that comment to a discussion forum so we could chat more. It didn't happen, so I thought I'd write a blog post so you can discuss it in the comments... except I don't have comments, so please file a reply on your blog.

Innovation is great, everyone wants more innovation, how could you possibly not? Innovation is like beer, or cookies, or money, or fresh Caprese salad in Italy whilst sitting on the beach in the sun. You can't possibly want less of it. But innovation cannot occur in a vacuum; there are costs. In the past add-ons have provided innovation and helped Firefox grow. But sadly there has also been a real cost to the way they currently exist; here are three reasons: the cost has been the inability of Firefox to innovate, the cost of developers getting into building WebExtensions, and the maintenance cost of add-ons. I touched on some of these points in my blog post "The add-ons burden", but let's unpack a couple of those.

The classic example for Firefox has been multi-process Firefox (or e10s). I've heard multiple stories since being involved in add-ons. One is that it was tried time and time again, but no-one could figure out how to do it without breaking add-ons. There are so many more changes coming down the pipeline, like Servo, that will keep breaking add-ons. The ability to innovate Firefox is as important as the ability to innovate Firefox through add-ons. It turns out that add-ons really, really crimp Firefox development.

Building an extension before WebExtensions was complicated; it required knowledge of Firefox and the Mozilla framework. The SDK and other things helped. Getting to a simple content script was made pretty easy by the SDK (which is something like 40% of all add-ons). But with WebExtensions you need to be able to read a limited set of APIs and understand web development. There are more web developers in the world than there are Mozilla developers. By allowing more developers to get involved in extensions, we provide a larger funnel into extensions.

Every day people file bugs against Firefox, many of them. I spend quite a lot of my time in triages trying to understand those bugs, as I'm sure lots of people do at Mozilla. I don't have numbers on this but I bet the first question in many of those bugs is "what happens when you try in a clean profile?" (basically with no add-ons). Add-ons cause so many bugs and iss[...]



Tantek Çelik: My 2017-01-01 #IndieWeb Commitment: Own All My RSVPs To Public Events

Sat, 26 Nov 2016 07:43:00 +0000

My 2017-01-01 #indieweb commitment is to own 100% of my RSVPs to public events, by posting them on my site first, and other sites second, if at all.

RSVPs will be my third public kind of post to fully own on my own site:

  • 2002-08-08 100% articles: owned all my articles using my site
  • 2010-01-01 100% public notes: owned all my public notes, instead of tweeting
  • 2013-05-12 partial replies: owned all my replies to indieweb posts or @-replies to tweets (but not e.g. replies to public or private Facebook posts)
  • 2014-12-31 partial likes: owned all my favorites/likes of tweets (using my site, automatically propagated to Twitter, but not likes of Facebook or Instagram posts)
  • 2017-01-01 Planned: 100% of public RSVPs.

For a while now I’ve been posting RSVPs to indie events, such as Homebrew Website Club meetups. Those RSVPs are nearly always multi-RSVPs that are also automatically RSVPing to the Facebook copies of such indie events.

Recently I started to post some (most?) of my RSVPs to public events (regardless of where they were hosted) on my own site first, and then syndicate (POSSE) them to other sites, often automatically.

My previous post is one such example RSVP. I posted it on my site, and my server used the Bridgy service to automatically perform the equivalent RSVP on the public Facebook event, without me having to directly interact with Facebook’s UI at all.

For events on Eventbrite, Lanyrd, and other event sites I still have to manually POSSE, that is, manually cross-post an RSVP there that I originally posted on my own site.

My commitment for 2017 is to always, 100% of the time, post RSVPs to public events on my own site first, and only secondarily (manually if I must) RSVP to silo (social media) event URLs.

What’s your 2017-01-01 #indieweb commitment?




Air Mozilla: Foundation Demos November 25 2016

Fri, 25 Nov 2016 18:58:55 +0000

(image) Foundation Demos November 25 2016




Botond Ballo: Trip Report: C++ Standards Meeting in Issaquah, November 2016

Fri, 25 Nov 2016 15:00:56 +0000

Summary / TL;DR

Project | What’s in it? | Status
C++17 | See below | Committee Draft published; final publication on track for 2017
Filesystems TS | Standard filesystem interface | Published! Part of C++17
Library Fundamentals TS v1 | optional, any, string_view and more | Published! Part of C++17
Library Fundamentals TS v2 | source code information capture and various utilities | Voted for publication!
Concepts (“Lite”) TS | Constrained templates | Published! Not part of C++17
Parallelism TS v1 | Parallel versions of STL algorithms | Published! Part of C++17
Parallelism TS v2 | Task blocks, library vector types and algorithms, context tokens (maybe), and more | Under active development
Transactional Memory TS | Transaction support | Published! Not part of C++17
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Not part of C++17
Concurrency TS v2 | TBD. Exploring synchronic types, atomic views, concurrent data structures, synchronized output streams. Executors to go into a separate TS. | Under active development
Networking TS | Sockets library based on Boost.ASIO | Voted for balloting by national standards bodies
Ranges TS | Range-based algorithms and views | Voted for balloting by national standards bodies
Numerics TS | Various numerical facilities | Under active development
Array Extensions TS | Stack arrays whose size is not known at compile time | Withdrawn; any future proposals will target a different vehicle
Modules TS | A component system to supersede the textual header file inclusion model | Initial TS wording reflects Microsoft’s design; changes proposed by Clang implementers expected. Not part of C++17.
Graphics TS | 2D drawing API | In design review stage. No new progress since last meeting.
Coroutines TS | Resumable functions | First revision will reflect Microsoft’s await design. Other approaches may be pursued in subsequent iterations. Not part of C++17.
Reflection | Code introspection and (later) reification mechanisms | Introspection proposal undergoing design review; likely to target a future TS
Contracts | Preconditions, postconditions, and assertions | In design review stage. No new progress since last meeting.

Note: At the time of publication, a few of the links in this blog post resolve to a password-protected page. They will start resolving to public pages once the post-meeting mailing is published, which should happen within a few days. Than[...]



Matjaž Horvat: Combining multiple filters

Fri, 25 Nov 2016 13:08:15 +0000

Pontoon now allows you to apply multiple string filters simultaneously, which gives you more power in selecting strings listed in the sidebar. To apply multiple filters, click their status icons in the filter menu as they turn into checkboxes on hover.

The new functionality allows you to select all translations by Alice and Bob, or all strings with suggestions submitted in the last week. The untranslated filter has been changed to behave as a shortcut to applying missing, fuzzy and suggested filters.

(image)

Big thanks to our fellow localizer Victor Bychek, who not only developed the new functionality, but also refactored a good chunk of the filters codebase, particularly on frontend.

And thanks to Jarek, who contributed optimizations and fixed a bug reported by Michal!




Henrik Mitsch: (Fun) Your Daily ‘We Are The World’ Reminder

Fri, 25 Nov 2016 10:22:47 +0000

Mozilla is a distributed place. About a third of its workforce are remote employees or remoties. So we speak to each other a lot on video chats. A lot.

Some paid contributors still have hobbies aside from working on the Mozilla project. For example, there’s the enterprise architect who is a music aficionado. There’s a number of people building Satellite Ground Stations. And I am sure we have many, many more pockets of awesomeness around.

And of course there are people who record their own music. So if you own a professional microphone, why not use it to treat your colleagues to a perfectly echo-canceled, smooth and noiseless version of your voice? Yay!

This is the point where I am continuously reminded of the song We Are The World from the 80ies. For example, check out Michael Jackson’s (2:41 min) or Bruce Springsteen’s (5:35 min) performances. This makes my day. Every single time.

 

PS: This article was published as part of the Participation Systems Turing Day. It aims to help people on our team who were born well past the 80ies to understand why I am frequently smiling in our video chats.

PPS: Oh yes, I confused “Heal the World” with “We Are The World” in the session proposal. Sorry for this glitch.

PPPS: Thank you to you-know-who-you-are for the inspiration.





Andy McKay: What people hear

Fri, 25 Nov 2016 08:00:00 +0000

Many years ago, when I had just started working in tech properly, I had a huge amount of imposter syndrome. It felt like I was struggling every day to get my job done and understand the basic things. One day we were in a company meeting and the CEO said: If that person comes to me and says "I can't do my job", it's my job to find a replacement for him.

I was mortified. I just heard: You cannot come to me with any problem, because I will think you can't do your job and replace you. Later on, when I'd stopped caring about working in that company, I brought this up to the CEO. He said it was taken out of context and I could come to him with problems. The truth was, I really couldn't. That one line had put me in a state of indecision for many months. There were so many better ways to say what he said, such as: "it's my job to work with that employee to find a way to help them" etc. But the CEO at the time wasn't that kind of a person.

I've taken this to heart because people apply their own context to things you say. The context at work includes things like: your position in the org chart relative to that person, your experiences in similar situations. For example: if your boss says "you are doing a great job", is that different from your mother saying "you are doing a great job"? Of course it is; the context of the person hearing it is applied.

This became clear to me quickly as a manager when an off-the-cuff minor technical decision became an issue. A junior person said (paraphrasing) "we should do it this way because Andy said so". Another person replied (paraphrasing again) "don't do it because Andy said so, do it because it's right". Both reported to me, but the first person had assumed that because I had said it, it must be the way to go. The other (and more experienced) person correctly pointed out the need to critically think about what I'd said.

As you get higher up the chain of responsibility within an organisation this gets harder and harder. That's why you often see senior people think before they say anything. What they say will be interpreted by different people differently and lead to mis-interpretation. The more people you have in that organisation and the more diverse it is, the more different contexts will be applied. For that reason I'm trying to stop and think more before responding with somet[...]



Support.Mozilla.Org: What’s Up with SUMO – 24th November

Thu, 24 Nov 2016 22:15:38 +0000

Greetings, SUMO Nation! Great to be read by you once more :-) Have you had a good November so far? One week left! And then… one month left until 2017 – time flies! Welcome, new contributors! …yes, autumn and winter are the slowest months for us, in case you haven’t noticed :-) But, all of you who are already on board keep things rolling! If you just joined us, don’t hesitate – come over and say “hi” in the forums! Contributors of the week All the forum supporters who tirelessly helped users out for the last week. All the writers of all languages who worked tirelessly on the KB for the last week. Thiago, Magno, and Dani for being our Social Superstars! We salute all of you! Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month! SUMO Community meetings LATEST ONE: 23rd of November – you can read the notes here and see the video at AirMozilla. NEXT ONE: happening on the 30th of November! If you want to add a discussion topic to the upcoming meeting agenda: Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting). Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda). If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback. Community There are two events coming up for the SUMO Mozillians in Ivory Coast. Thanks to Abbackar for kicking off the preparations! We still remember the previous event :-) Are you in or around New York? The Glass Room is visiting the Big Apple. Reminder: If you want to help us brainstorm how to promote “Internet Awareness” among our visitors, please join us here. Have you read Seburo’s post about a small fox and a big river? ;-) …one more thing – if you were to introduce a SUMO holiday, which day would it be? Let us know in the comments or on our forums. Platform Check the notes from the last meeting in this document. (and don’t forget about our meeting recordings). The main point of today’s meeting was reviewing[...]



Air Mozilla: Dr. James Iveniuk - Social Networks, Diffusion, and Contagion

Thu, 24 Nov 2016 19:01:00 +0000

(image) James Iveniuk: Social Networks, Diffusion, and Contagion




Air Mozilla: Reps Weekly Meeting Nov. 24, 2016

Thu, 24 Nov 2016 16:00:00 +0000

(image) This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.




Seif Lotfy: Importing AWS Lambda to IronFunctions

Thu, 24 Nov 2016 13:13:00 +0000

Imagine AWS Lambda being:

  • On-Premise (host it anywhere)
  • Language Agnostic (writing lambda functions in any language: Go, Rust, Python, Scala, Elixir, you name it...)
  • Horizontally Scalable
  • Open-Source

Would be nice, wouldn't it? Well, fear not: Iron.io released IronFunctions last week and it's all that. IronFunctions supports a simple stdin/stdout API, as well as being able to import your existing functions directly from AWS Lambda.

Getting started!

You can grab the latest code from GitHub or just run it in docker:

docker run --rm -it --name functions --privileged -v $PWD/data:/app/data -p 8080:8080 iron/functions

Currently IronFunctions supports importing the following languages from AWS Lambda:

  • Python: 2.7
  • Java8: 1.80
  • NodeJS: 0.10
  • NodeJS: 4.3 (in development)

The almighty fn tool

The fn tool includes a set of commands to act on Lambda functions. Most of these are described in the getting-started guide in the repository. One more subcommand is aws-import. If you have an existing AWS Lambda function, you can use this command to automatically convert it to a Docker image that is ready to be deployed on other platforms.

Credentials

To use this, either have your AWS access key and secret key set in config files, or in environment variables. In addition, you'll want to set a default region. You can use the aws tool to set this up. Full instructions are in the AWS documentation.

Importing

The aws-import command is constructed as follows:

fn lambda aws-import <arn> <region> <image>

  • arn: describes the ARN format which uniquely identifies the AWS lambda resource
  • region: region on which the lambda is hosted
  • image: the name of the created docker image, which should have the format <username>/<image>

Assuming you have a lambda with the ARN arn:aws:lambda:us-west-2:123141564251:function:my-function, the following command:

fn lambda aws-import arn:aws:lambda:us-west-2:123141564251:function:my-function us-east-1 user/my-function

will import the function code from the region us-east-1 to a directory called ./user/my-function. Inside the directory you will find the function.yml, Dockerfile, and all the files needed for running the function.

Using Lambda with Docker Hub and IronFunc[...]



Hannes Verschore: Spidermonkey JIT improvements in FF52

Thu, 24 Nov 2016 09:28:37 +0000

Last week we signed off our hard work on FF52 and we will start working on FF53. The expected release date of this version is the 6th of March. In the meantime we still have time to stabilize the code and fix issues that we encounter. In the JavaScript JIT engine a lot of important work was done.

WebAssembly has made great advances. We have fully implemented the draft specification and are requesting final feedback as part of a cross-browser Browser Preview in the W3C WebAssembly Community Group. Assuming the Browser Preview concludes without major changes before 52 is released, we'll enable WebAssembly by default in 52.

Step by step our CacheIR infrastructure is improving. In this release primitive value getprop was ported to the CacheIR infrastructure.

GC sometimes needs to discard the compilations happening in the helper threads. It seemed like we waited for the compilations to stop one after another; as a result it could take a while till all compilations were discarded. Now we signal all threads at the same time to stop compiling and afterwards wait for all of them to finish. This was a major win in our investment to make sure GC doesn't interrupt the execution for too long.

The register allocator also received a nice improvement. Sometimes we were adding spills (moves between the stack and registers) when they were not needed. A fix was added to combat this.

Like in every release, a long list of differential bugs and crashes has been fixed as well.

This release also includes code from contributors:

  • Emanuel Hoogeveen improved our crash annotations. He noticed that we didn't save the crash reason in our crash reports when using "MOZ_CRASH".
  • Tooru Fujisawa has been doing incredible work throughout the codebase.
  • Johannes Schulte has landed a patch that improves code for "testArg ? testArg : 0.0" and also eliminates needless control flow.
  • Sander Mathijs van Veen made sure we could use unsigned integer modulo instead of a double modulo operation and also added code to fold additions with multiple constants.
  • André Bargull added code to inline String.fromCodePoint in IonMonkey. As a result the performance should now be equal to using String.fromCharCode.
  • Robin Templeto[...]



Laura de Reynal: OLYMPUS DIGITAL CAMERA

Thu, 24 Nov 2016 01:56:57 +0000

(image) Rio, 2015

Filed under: Mozilla (image)



Laura de Reynal: OLYMPUS DIGITAL CAMERA

Thu, 24 Nov 2016 01:53:16 +0000

(image) Bangladesh, 2015 

Filed under: Mozilla (image)



Mozilla Addons Blog: Add-ons in 2017

Thu, 24 Nov 2016 00:04:43 +0000

A little more than a year ago we started talking about where add-ons were headed, and what the future would look like. It’s been busy, and we wanted to give everyone an update as well as provide guidance on what to expect in 2017. Over the last year, we’ve focused as a community on foundational work building out WebExtensions support in Firefox and addons.mozilla.org (AMO), reducing the time it takes for listed add-ons to be reviewed while maintaining the standards we apply to them, and getting Add-ons ready for e10s. We’ve made a number of changes to our process and products to make it easier to submit, distribute, and discover add-ons through initiatives like the signing API and a revamped Discovery Pane in the add-ons manager. Finally, we’ve continued to focus on communicating changes to the developer community via direct outreach, mailing lists, email campaigns, wikis, and the add-ons blog. As we’ve mentioned, WebExtensions are the future of add-ons for Firefox, and will continue to be where we concentrate efforts in 2017. WebExtensions are decoupled from the platform, so the changes we’ll make to Firefox in the coming year and beyond won’t affect them. They’re easier to develop, and you won’t have to learn about Firefox internals to get up and running. It’ll be easier to move your add-ons over to and from other browsers with minimal changes, as we’re making the APIs compatible – where it makes sense – with products like Opera, Chrome, and Edge. By the end of 2017, and with the release of Firefox 57, we’ll move to WebExtensions exclusively, and will stop loading any other extension types on desktop. To help ensure any new extensions work beyond the end of 2017, AMO will stop accepting any new extensions for signing that are not WebExtensions in Firefox 53. Throughout the year we’ll expand the set of APIs available, add capabilities to Firefox that don’t yet exist in other browsers, and put more WebExtensions in front of users. There’s a lot of moving parts, and we’ll be tracking more detailed information – including a timeline and roadmap – on the [...]



Henrik Mitsch: Autonomy, Mastery & Purpose at Mozilla’s Participation Systems

Wed, 23 Nov 2016 21:38:27 +0000

This week I was reminded of Dan Pink’s Drive and its key message: Autonomy, Mastery & Purpose. We are doing some work on Mozilla’s Moderator application: migrating the infrastructure, decommissioning Persona, and giving it a visual refresh. It’s the first part that held a strong lesson. In the past, the Moderator site scored an F in the HTTP Observatory, a way to measure the web security of a server and application. Following the migration, the site now scores A+. By the way, you can always verify this yourself.

What I Learned This Week:

  • Autonomy: Provide a team with autonomy over its entire product value chain and be surprised by the cool stuff that happens.
  • Mastery: Going to A+ wasn’t an acceptance criterion. It’s our intrinsic motivation which helps us be better every day.
  • Purpose: The Mozilla Manifesto provides us with a great set of shared values. In this case it was probably principle #4 on treating individuals’ security which served as North Star.

Of course the same Observatory rating could have been achieved on the old infrastructure. We just never did. It’s probably the perfect storm of a cross-functional team operating in autonomy, growing mastery and with a clear sense of purpose that made it so easily possible. Blessed to be working on the Participation Systems team. [...]



Air Mozilla: The Joy of Coding - Episode 81

Wed, 23 Nov 2016 18:00:00 +0000

(image) mconley livehacks on real Firefox bugs while thinking aloud.