Planet Mozilla - http://planet.mozilla.org/

Firefox Nightly: Fosdem 2017 Nightly slides and video

Mon, 20 Feb 2017 16:05:21 +0000

(image)

FOSDEM is a two-day event organised by volunteers to promote the widespread use of free and open source software.

Every year in February, Mozillians from all over the world go to Belgium to attend FOSDEM, the biggest Free and Open Source Software event in Europe, with over 5,000 developers and Free Software advocates attending.

Mozilla has its own Developer Room and a booth, and many of our projects were presented. A significant part of the Firefox Release Management team attended the event, and we had the opportunity to present the Firefox Nightly Reboot project in our Developer Room on Saturday to a crowd of Mozillians and visitors interested in Mozilla and the future of Firefox.

Here are the slides of my presentation and this is the video recording of my talk:

With Mozilla volunteers (thanks guys!), we also heavily promoted the use of Nightly at the Mozilla booth over the two days of the event.

(image)

We had some interesting Nightly-specific feedback such as:

  • Many visitors thought that the Firefox Dev Edition was actually Nightly (promoted to developers, dark theme, daily updates).
  • Some users mentioned that they prefer to use the Dev Edition or Beta over Nightly, not because of concerns about stability, but because they find the update window that pops up if you don’t update daily to be annoying.
  • People were very positive about Firefox and wanted to help Mozilla, but said they lacked time to get involved. So they were happy to learn that just using Firefox Nightly with telemetry activated and sending crash reports is already a great way to help us.

In a nutshell, this event was really great: we probably spoke to a hundred developers about Nightly, and it was almost as popular at the booth as Rust (people really love Rust!).

Do you want to talk about Nightly yourself?

Of course, my slides can be used as a basis for your own presentations to promote the use of Nightly to power users and our core community, whether at open source events in your region or at events organized by Mozilla Clubs!

The slides use reveal.js as a presentation framework and only need a browser to be displayed. You can download the tar.gz/zip archive of the slides or pull them from GitHub with this command:
git clone https://github.com/pascalchevrel/reveal/ -b nightly_fosdem_2017




Anjana Vakil: Notes from KatsConf2

Sun, 19 Feb 2017 00:00:00 +0000

Hello from Dublin! Yesterday I had the privilege of attending KatsConf2, a functional programming conference put on by the fun-loving, welcoming, and crazy-well-organized @FunctionalKats. It was a whirlwind of really exciting talks from some of the best speakers around. Here’s a glimpse into what I learned:

  • There’s no such thing as an objectively perfect programming language: all languages make tradeoffs. But it is possible to find/design a language that’s more perfect for you and your project’s needs.
  • Automation, automation, automation: generative programming lets you write high-level code that generates low-level code; program derivation and synthesis let you write specifications/tests and leave it to the computer to figure out the code; (boring!) code-rewriting tasks can be automated too.
  • Relational programming, total programming, and Type-Driven Development are (cool/mindblowing) things.
  • You can do web programming with FP – and interestingly, even in a total language like Idris.

I took a bunch of notes during the talks, in case you’re hungering for more details. But @jessitron took amazing graphical notes that I’ve linked to in the talks below, so just go read those! And for the complete experience, check out the storify of the #KatsConf2 tweets made by Vicky Twomey-Lee, who led a great ally skills workshop the evening before the conference. Hopefully this gives you an idea of what was said and which brain-exploding things you should go look up now! Personally it opened up a bunch of cans of worms for me – definitely a lot of the material went over my head, but I have a ton of stuff to go find out more (i.e. the first thing) about.

Disclaimer: The (unedited!!!) notes below represent my initial impressions of the content of these talks, jotted down as I listened. They may or may not be totally accurate, or precisely/adequately represent what the speakers said or think, and the code examples are almost certainly mistake-ridden. Read at your own risk!

The origin story of FunctionalKats

FunctionalKatas => FunctionalKats => (as of today) FunctionalKubs

  • Meetups in Dublin & other locations
  • Katas for solving programming problems in different functional languages
  • Talks about FP and related topics
  • Welcome to all, including beginners

The Perfect Language – Bodil Stokke (@bodil)

“Bodil's opinions on the Perfect Language. #katsConf2 Rather noninflammatory, it must be early in the morning https://t.co/KsqGAKubpd” — Jessica Kerr (@jessitron) February 18, 2017

What would the perfect programming language look like? “MS Excel!” “Nobody wants to say ‘JavaScript’ as a joke?” “Lisp!” “I know there are Clojurians in the audience, they’re suspiciously silent…” There’s no such thing as the perfect language; languages are about compromise. What the perfect language actually is is a personal thing. I get paid to make whatever products I feel like to make life better for programmers. So I thought: I should design the perfect language. What do I want in a language? It should be hard to make mistakes.

On that note, let’s talk about JavaScript. It was designed to be easy to get into, and not to place too many restrictions on what you can do. But this means it’s easy to make mistakes & get unexpected results (cf. the crazy stuff that happens when you add different things in JS). By restricting the types of inputs/outputs (see TypeScript), we can throw errors for incorrect input types – error messages may look like the compiler yelling at you, but really they’re saving you a bunch of work later on by telling you up front.

Let’s look at PureScript. Category theory! Semiring: something like addition/multiplication that has commutativity (a+b == b+a). Semigroup: …? There should be no ambiguity: 1 + 2 * 3 vs. (+ 1 (* 2 3)); Pony: 1 + (2 * 3) – have to[...]
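
To make the ambiguity point concrete – this is my own illustration, not an example from the talk – here is how an infix language with implicit precedence reads a mixed expression, in Python:

# my own illustration, not from the talk: implicit precedence as a source of ambiguity
print(1 + 2 * 3)    # 7 - parsed as 1 + (2 * 3), because * binds tighter than +
print((1 + 2) * 3)  # 9 - the other reading has to be spelled out with parentheses

# the semiring point from the notes: + and * are both commutative
a, b = 2, 5
assert a + b == b + a
assert a * b == b * a

Pony sidesteps the ambiguity by requiring explicit parentheses when different infix operators are mixed, and Lisp’s prefix notation never has the problem in the first place.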



Gervase Markham: Technology Is More Like Magic Than Like Science

Sat, 18 Feb 2017 22:18:04 +0000

So said Peter Kreeft, commenting on three very deep sentences from C.S. Lewis on the problems and solutions of the human condition.

Suppose you are asked to classify four things –

  • religion,
  • science,
  • magic, and
  • technology.

– and put them into two categories. Most people would choose “religion and magic” and “science and technology”. Read Justin Taylor’s short article to see why the deeper commonalities are between “religion and science” and “magic and technology”.

(image)



Chris Cooper: Being productive when distributed teams get together, take 2

Fri, 17 Feb 2017 22:48:55 +0000

(image) Every year, hundreds of release engineers swim upstream because they’re built that way.

Last week, we (Mozilla release engineering) had a workweek in Toronto to jumpstart progress on the TaskCluster (TC) migration. After the success of our previous workweek for release promotion, we were anxious to try the same format once again and see if we could realize any improvements.

Prior preparation prevents panic

We followed all of the recommendations in the Logistics section of Jordan’s post to great success.

Keeping developers fed & watered is an integral part of any workweek. If you ever want to burn a lot of karma, try building consensus between 10+ hungry software developers about where to eat tonight, and then finding a venue that will accommodate you all. Never again; plan that shit in advance. Another upshot of advance planning is that you can also often go to nicer places that cost the same or less. Someone on your team is a (closet) foodie, or is at least a local. If it’s not you, ask that person to help you with the planning.

What stage are you at?

The workweek in Vancouver benefitted from two things:

  1. A week of planning at the All-Hands in Orlando the month before; and,
  2. Rail flying out to Vancouver a week early to organize much of the work to be done.

For this workweek, it turned out we were still at the planning stage, but that’s totally fine! Never underestimate the power of getting people on the same page. Yes, we did do *some* hacking during the week. Frankly, I think it’s easier to do the hacking bit remotely, but nothing beats a bunch of engineers in a room in front of a whiteboard for planning purposes. As a very distributed team, we rarely have that luxury.

Go with it

…which brings me to my final observation. Because we are a very distributed team, opportunities to collaborate in person are infrequent at best. When you do manage to get a bunch of people together in the same room, you really do need to go with discussions and digressions as they develop.

This is not to say that you shouldn’t facilitate those discussions, timeboxing them as necessary. If I have one nit to pick with Jordan’s post it’s that the “Operations” role would be better described as a facilitator. As a people manager for many years now, this is second-nature to me, but having someone who understands the problem space enough to know “when to say when” and keep people on track is key to getting the most out of your time together.


By and large, everything worked out well in Toronto. It feels like we have a really solid format for workweeks going forward.




Air Mozilla: Webdev Beer and Tell: February 2017

Fri, 17 Feb 2017 19:00:00 +0000

(image) Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...




Christian Heilmann: My closing keynote of the Tweakers DevSummit – slides and resources

Fri, 17 Feb 2017 09:56:17 +0000

Yesterday I gave the closing keynote of the Tweakers Developer Summit in Utrecht, The Netherlands. The conference topic was “Webdevelopment – Coding the Universe” and the organisers asked me to give a talk about Machine Learning and what it means for developers in the nearer future. So I took out my crystal ball and whipped up the following talk: “Suit up, bring extra oxygen – Internet space explorers needed.”

Here are the resources covered in the talk:

  • GOAT raising $25M for a marketplace for collectible sneakers – an example of how we as developers are failing at influencing what our market is doing. We’re still at a stage where plain consumerism wins the big investments and technology is used to cater to the needs of a very small, very affluent group.
  • The Robot Takeover – an interactive site telling you how safe your job is from being taken over by a robot, based on this research in PDF format.
  • Movie written by AI
  • Daddy’s Car – a song written by AI after analysing lots of Beatles songs.
  • AI software learns to write AI software – Google Brain coding itself.
  • Beer brewed by AI
  • Chinese factory replacing 90% of humans with robots – 250% more productivity, defects down 90%.
  • The Power of Big Data and Psychographics – an incredibly scary presentation by Cambridge Analytica at the Concordia summit, showing how applying the OCEAN psychology test to Facebook quizzes allowed them to influence voters in the Brexit and Trump votes.
  • Meitu’s App Permissions – an app to make sparkly selfies has full access to your phone now.
  • Google Photos and its impressive use of Machine Learning to tag and find your own photos.
  • Pinterest Lens – a Shazam for everyday objects.
  • AIPoly – an app to detect objects and tell blind users what they are, using a Neural Network on the device instead of having to be online and suffering the latency that comes with that. Demo video here.
  • Google DeepMind’s lip reading success – AI can read lips four times better than humans. You can also read the scientific paper on it.
  • Microsoft Office using Cognitive Services – PowerPoint now creates text alternatives for images.
  • Learning tools in Microsoft OneNote and a video on how that helps educators.
  • Live translation of phone calls using Cognitive Services and Skype.
  • Google Allo using machine learning to create recommended answers to posted images.
  • Pixel recursive super resolution – a method to create facial images from 8×8 pixel input using machine learning. In essence it is making the “zoom and enhance” movie trope reality.
  • Google open sourced their DeepMind Lab to train AI. The code is on GitHub and the paper is available on arXiv.
  • Facebook explained and released their image segmentation algorithm, showing how to detect shapes in images and classify them.
  • ImageNet – a database of described images.
  • Open Images dataset – Google’s alternative to ImageNet; a lot of metadata for image URLs, described with human aid.
  • CaptionBot – a demo by Microsoft using Cognitive Services to describe images.
  • Microsoft Cognitive Services – a suite of APIs using Machine Learning to understand images, sounds and text, enhancing your projects to have a more human output and allow for human input.
  • Custom Recognition Service – a service by Microsoft to filter content by giving it context, thus improving the outcome of OCR or speech-to-text conversions.
  • Video Breakdown – a service to detect speakers and sentiments and create a transcript of videos.
  • My visit to the medical holodeck – my report about visiting Weill Cornell hospital in New York, where they use VR for visualisation of diseases and HoloLens to design medication.
  • Sophia Roshal’s HoloLens demo – the intern’s project at Weill Cornell using HoloLens to visualise cancer medication and tumor data.
  • Anamaria Cotîrlea – a Google employee whose research as an intern saves the company 1.5 petabytes o[...]



Karl Dubost: [worklog] Edition 055 - Tiles on a wall

Fri, 17 Feb 2017 09:05:00 +0000

webcompat life

With all these new issues, it's sometimes difficult to follow everything. The team has grown. We are more active on more fronts, and the unwritten rules that worked when we were 3 or 4 people don't anymore. As we grow, we need a bit more process and slightly more dedicated areas for each of us. It will help us avoid work conflicts and make progress smoothly: defining responsibilities and empowering people. This week, as it happens, tiles were put up in my house over two days. After the first day, the tiles were there on the wall without the grout in between. I started to regret the choice of tiles. Too big, too obvious. But the day after, the grout was applied between the tiles and everything made sense. Everything was elegant, just how we had envisioned it.

webcompat issues

webcompat.com dev

Otsukare!




Niko Matsakis: Project idea: datalog output from rustc

Fri, 17 Feb 2017 05:00:00 +0000

I want to have a tool that would enable us to answer all kinds of queries about the structure of Rust code that exists in the wild. This should cover everything from syntactic queries like “How often do people write let x = if { ... } else { match foo { ... } }?” to semantic queries like “How often do people call unsafe functions in another module?” I have some ideas about how to build such a tool, but (I suspect) not enough time to pursue them. I’m looking for people who might be interested in working on it!

The basic idea is to build on Datalog. Datalog, if you’re not familiar with it, is a very simple scheme for relating facts and then performing analyses on them. It has a bunch of high-performance implementations, notably souffle, which is also available on GitHub. (Sadly, it generates C++ code, but maybe we’ll fix that another day.)

Let me work through a simple example of how I see this working. Perhaps we would like to answer the question: How often do people write tests in a separate file (foo/test.rs) versus an inline module (mod test { ... })? We would (to start) have some hacked up version of rustc that serializes the HIR in Datalog form. This can include as much information as we would like. To start, we can stick to the syntactic structures. So perhaps we would encode the module tree via a series of facts like so:

// links a module with the id `id` to its parent `parent_id`
ModuleParent(id, parent_id).
ModuleName(id, name).

// specifies the file where a given `id` is located
File(id, filename).

So for a module structure like:

// foo/mod.rs:
mod test;

// foo/test.rs:
#[test]
fn test() { }

we might generate the following facts:

// module with id 0 has name "" and is in foo/mod.rs
ModuleName(0, "").
File(0, "foo/mod.rs").

// module with id 1 is in foo/test.rs,
// and its parent is module with id 0.
ModuleName(1, "test").
ModuleParent(1, 0).
File(1, "foo/test.rs").

Then we can write a query to find all the modules named test which are in a different file from their parent module:

// module T is a test module in a separate file if...
TestModuleInSeparateFile(T) :-
    // ...the name of module T is test, and...
    ModuleName(T, "test"),
    // ...it is in the file T_File...
    File(T, T_File),
    // ...it has a parent module P, and...
    ModuleParent(T, P),
    // ...the parent module P is in the file P_File...
    File(P, P_File),
    // ...and the file of the parent is not the same as the file of the child.
    T_File != P_File.

Anyway, I’m waving my hands here, and probably getting datalog syntax all wrong, but you get the idea! Obviously my encoding here is highly specific for my particular query. But eventually we can start to encode all kinds of information this way. For example, we could encode the types of every expression, and what definition each path resolved to. Then we can use this to answer all kinds of interesting queries. For example, some things I would like to use this for right now (or in the recent past):

  • Evaluating new lifetime elision rules.
  • Checking what kinds of unsafe code patterns exist in real life and how frequently.
  • Checking how much code might benefit from accepting the else match { ... } RFC.
  • Testing how much code in the wild might be affected by deprecating Trait in favor of dyn Trait.

So, you interested? If so, contact me – either privmsg over IRC (nmatsakis) or over on the internals threads I created.[...]
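
To make the relational reading of that query concrete, here is the same join written out in plain Python over the example facts. This is my own illustration of the semantics, not part of the proposed tooling:

# my own illustration of the Datalog query's semantics, not part of the proposed tool
module_name   = {0: "", 1: "test"}                   # ModuleName(id, name)
module_parent = {1: 0}                               # ModuleParent(id, parent_id)
file_of       = {0: "foo/mod.rs", 1: "foo/test.rs"}  # File(id, filename)

# TestModuleInSeparateFile(T) as a join over the fact relations:
test_modules_in_separate_file = {
    t
    for t, p in module_parent.items()       # ModuleParent(T, P)
    if module_name.get(t) == "test"         # ModuleName(T, "test")
    and file_of[t] != file_of[p]            # File(T, T_File), File(P, P_File), T_File != P_File
}
print(test_modules_in_separate_file)        # {1}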



Mozilla Addons Blog: The Road to Firefox 57 – Compatibility Milestones

Thu, 16 Feb 2017 17:00:31 +0000

Back in November, we laid out our plans for add-ons in 2017. Notably, we defined Firefox 57 as the first release where only WebExtensions will be supported. In parallel, the deployment of Multiprocess Firefox (also known as e10s) continues, with many users already benefiting from the performance and stability gains. There is a lot going on and we want you to know what to expect, so here is an update on the upcoming compatibility milestones. We’ve been working on setting out a simple path forward, minimizing the compatibility hurdles along the way, so you can focus on migrating your add-ons to WebExtensions.

Legacy add-ons

By legacy add-ons, we’re referring to:

  • All extensions that aren’t WebExtensions. Specifically: XUL overlay extensions, bootstrapped extensions, SDK extensions, and Embedded WebExtensions.
  • Complete themes. These add-ons shouldn’t have problems with multiprocess compatibility but will follow the same compatibility milestones as other legacy add-ons. We will provide more details on what’s coming for themes very soon in this blog.

Language packs, dictionaries, OpenSearch providers, lightweight themes, and add-ons that only support Thunderbird or SeaMonkey aren’t considered legacy.

Firefox 53, April 18th release

  • Firefox will run in multiprocess mode by default for all users, with some exceptions. If your add-on has the multiprocessCompatible flag set to false, Firefox will run in single process mode if the add-on is enabled.
  • Add-ons that are reported and confirmed as incompatible with Multiprocess Firefox (and don’t have the flag set to false) will be marked as incompatible and disabled in Firefox.
  • Add-ons will only be able to load binaries using the Native Messaging API.
  • No new legacy add-ons will be accepted on addons.mozilla.org (AMO). Updates to existing legacy add-ons will still be accepted.

Firefox 54-56

Legacy add-ons that work with Multiprocess Firefox in 53 may still run into compatibility issues due to follow-up work:

  • Multiple content processes are being launched in Firefox 55. This enables multiple content processes, instead of the single content process currently used.
  • Sandboxing will be launched in Firefox 54. Additional security restrictions will prevent certain forms of file access from content processes.

Firefox 57, November 14th release

  • Firefox will only run WebExtensions.
  • AMO will continue to support listing and updating legacy add-ons after the release of 57 to ease the transition. The exact cut-off time for this support hasn’t been determined yet.
  • Multiprocess compatibility shims are removed from Firefox. This doesn’t affect WebExtensions, but it’s one of the reasons we went with this timeline.

For all milestones, keep in mind that Firefox is released using a “train” model, where Beta, Developer Edition, and Nightly correspond to the future 3 releases. You should expect users of pre-release versions to be impacted by these changes earlier than their final release dates. The Release Calendar lists future release dates per channel.

We are committed to this timeline, and will work hard to make it happen. We urge all developers to look into WebExtensions and port their add-ons as soon as possible. If you think your add-on can’t be ported due to missing APIs, here’s how you can let us know.[...]



Christopher Arnold

Thu, 16 Feb 2017 16:14:35 +0000

https://en.wikipedia.org/wiki/Beowulf

A long time ago I remember reading Stephen Pinker discussing the evolution of language. I had read Beowulf, Chaucer and Shakespeare, so I was quite interested in these linguistic adaptations over time. Language shifts rapidly through the ages, to the point that even English of 500 years ago sounds foreign to us now. His thesis in the piece was about how language is going to shift toward the Chinese pronunciation of it. Essentially, the majority of speakers will determine the rules of the language’s direction. There are more Chinese in the world than native English speakers, so as they adopt and adapt the language, more of us will speak like the greater factions of our language’s custodians. The future speakers of English will determine its course. By force of "majority rules", language will go in the direction of its greatest use, which will be the Pangea of the global populace seeking common linguistic currency with others of foreign tongues. Just as the US dollar is an “exchange currency” standard at present between foreign economies, English is the shortest path between any two ESL speakers, no matter which background.

Subsequently, I heard these concepts reiterated in a Scientific American podcast. The concept there being that English, when spoken by those who learned it as a second language, is easier for other speakers to understand than native-spoken English. British, Indian, Irish, Aussie, New Zealand and American English are relics in a shift, very fast, away from all of them. As much as we appreciate each, they are all toast. Corners will be cut, idiomatic usage will be lost, as the fastest path to information conveyance determines the path that language takes in its evolution. English will continue to be a mutt language flavored by those who adopt and co-opt it. Ultimately meaning that no matter what the original language was, the common use of it will be the rules of the future. So we can say goodbye to grammar as native speakers know it. There is a greater shift happening than our traditions. And we must brace as this evolution takes us with it to a linguistic future determined by others.

I’m a person who has greatly appreciated idiomatic and aphoristic usage of English. So I’m one of those, now old codgers, who cringes at the gradual degradation of language. But I’m listening to an evolution in process, a shift toward a language of broader and greater utility. So the cringes I feel are reactions to the time-saving adaptations of our language as it becomes something greater than it has been in the past. Brits likely thought/felt the same as their linguistic empire expanded. Now is just a slightly stranger shift.

This evening I was in the kitchen, and I decided to ask Amazon Alexa to play some Led Zeppelin. This was a band that used to exist in the 1970’s era during which I grew up. I knew their entire corpus very well. So when I started hearing one of my favorite songs, I knew this was not what I had asked for. It was a good rendering for sure, but it was not Robert Plant singing. Puzzled, I asked Alexa who was playing. She responded “Lez Zeppelin”. This was a new band to me. A very good cover band, I admit. (You can read about them here: http://www.lezzeppelin.com/) But why hadn't Alexa wanted to respond to my initial request? Was it because Atlantic Records hadn't licensed Led Zeppelin's actual catalog for Amazon Prime subscribers?

Two things struck me. First, we aren’t going to be tailoring our English to Chinese ESL common speech patterns as Mr. Pinker predicted. We’re probably also going to be shifting our speech patterns to what Alexa, Siri, Cortana and Google Home can actually understand. They are the new ESL vector that we hadn't a[...]



Air Mozilla: Reps Weekly Meeting Feb. 16, 2017

Thu, 16 Feb 2017 16:00:00 +0000

(image) This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.




Emma Irwin: Thank you Guillermo Movia

Thu, 16 Feb 2017 15:12:44 +0000

I first got to know Guillermo during our time together on the Mozilla Reps council – which was actually his second time contributing community leadership; the original was as a founding council member. Since that time, I’ve come to appreciate and rely on his intuition, experience and skill in navigating the complexities of community management, as a peer in the community and colleague at Mozilla for the past two years. Before I go any further, I would like to thank Guillermo, on behalf of many, for politely responding to terrible mispronunciations of his name over the years, including (but not limited to) ‘G-glermo, geejermo, Glermo, Juremo, Glermo, Gillermo and various versions of Guilllllllllmo’. Although I am excited to see Guillermo off to new adventures, I – and many others in the Mozilla community – wanted to mark his 12 years with Mozilla by honoring and celebrating the journey so far. Thankfully, he took some time to meet with me last week for an interview…

In the Beginning…

As many who do not speak English as a first language might understand, Guillermo remembers spending his early days on IRC and mailing lists trying to understand ways to get involved in his first language – Spanish. It was this experience, and eventual collaboration with other Spanish-speaking community leaders Ruben and Francisco, that led to the formation of the Hispano Community.

Emerging Leader, Emerging Community

Guillermo’s love of the open web radiates through all aspects of his life and history, including his Bachelor’s thesis, with a cover which you might notice resembles a browser… During this same time of dedicated study, Guillermo began to both participate in and organize Mozilla events in Argentina. One of his most memorable moments of empowerment was when Asa Dotzler from Mozilla, who had been visiting his country, declared to Guillermo and the emerging community group, ‘you are the Argentina Community’ – with subsequent emails in support of that from both Asa and Mary Colvig that ultimately led to a new community evolution.

Building Participation

Guillermo joined Mozilla as staff during the Firefox OS era, at first part time while he also worked at Nobox organizing events and activities for the De Todos Para Todos campaign. He started full time not long afterwards, stepping into the role of community manager for LATAM. His work with Participation included developing regional leadership strategies, including the development of a coaching framework.

Proudest Moments

I asked Guillermo to reflect on what his proudest moments have been so far, and here’s what he said:

  • Participating in the creation of the Mozilla Hispano community.
  • Being part of the Firefox OS launch teams, and localization.
  • Organizing community members, training, translating.
  • Being part of the original Mozilla Reps council.

A Central Theme

In all that Guillermo shared, there was such a strong theme of empowering people – of building frameworks and opportunities that help people reach their potential as emerging community leaders and mobilizers. I think as a community we have been fortunate recipients of his talents in this area. And that theme continues on in his wishes for Mozilla’s future – as an organization where community members continue to innovate and have impact on Mozilla’s mission.

Thank you Guillermo! Goodbye – not goodbye! Once a Mozillian, always a Mozillian – see you soon. Please share your #mozlove memories, photos and gratitude for #gmovia on Twitter and other social media!



Air Mozilla: Mozilla Curriculum Workshop, February 2017 - Privacy & Security

Thu, 16 Feb 2017 15:00:00 +0000

(image) Join us for a discussion and prototyping session about privacy and security curriculum.




Tarek Ziadé: Molotov, simple load testing

Wed, 15 Feb 2017 23:00:00 +0000

I don't know why, but I am a bit obsessed with load testing tools. I've tried dozens, and I have built or been involved in the creation of over ten of them in the past 15 years. I am talking about load testing HTTP services with a simple HTTP client.

Three years ago I built Loads at Mozilla, which is still being used to load test our services - and it's still evolving. It was based on a few fundamental principles:

  • A load test is an integration test that's executed many times in parallel against a server.
  • Ideally, load tests should be built with vanilla Python and a simple HTTP client. There's no mandatory reason we have to rely on a Load Test class or things like this - the lighter the load test framework is, the better.
  • You should be able to run a load test from your laptop without having to deploy a complicated stack, like a load testing server and clients, etc. Because when you start building a load test against an API, step #1 is to start with small loads from one box - not going nuclear from AWS on day 1.
  • Doing a massively distributed load test should not happen & be driven from your laptop. Your load test is one brick, and orchestrating a distributed load test is a problem that should be entirely solved by another piece of software that runs in the cloud on its own.

Since Loads was built, two major things happened in our little technical world:

  • Docker is everywhere
  • Python 3.5 & asyncio, yay!

Python 3.5+ & asyncio just means that, unlike my previous attempts at building a tool that would generate as many concurrent requests as possible, I don't have to worry anymore about key principle #2: we can do async code now in vanilla Python, and I don't have to force ad-hoc async frameworks on people.

Docker means that for running a distributed test, a load test that runs from one box can be embedded inside a Docker image, and then a tool can orchestrate a distributed test that runs and manages Docker images in the cloud. That's what we've built with Loads: "give me a Docker image of something that's performing a small load test against a server, and I shall run it in hundreds of boxes." This Docker-based design was a very elegant evolution of Loads, thanks to Ben Bangert who had that idea. Asking people to embed their load test inside a Docker image also means that they can use whatever tool they want, as long as it performs HTTP calls on the server to stress, and optionally sends some info via statsd.

But proposing a helpful, standard tool to build the load test script that will be embedded in Docker is still something we want to suggest. And frankly, 90% of load tests happen from a single box. Going nuclear is not happening that often.

Introducing Molotov

Molotov is a new tool I've been working on for the past few months - it's based on asyncio and aiohttp and tries to be as light as possible. Molotov scripts are coroutines that perform HTTP calls, and spawning a lot of them in a few processes can generate a fair amount of load from a single box.

Thanks to Richard, Chris, Matthew and others - my Mozilla QA teammates - I had some great feedback while creating the tool, and I think it's almost ready to be used by more folks. It still needs to mature and the docs need to improve, but the design is settled and it works well already. I've pushed a release to PyPI and plan to push a first stable final release this month once the test coverage is looking better & the docs are polished. But I think it's ready for a bit of community feedback.

That's why I am blogging about it today -- if you want to try it or help build it, here are a few links:

  • Docs: http://molotov.readthedocs.io
  • Code: https://github.com/loads/molotov/

Try it with the console mode (-c), try to see if it fits your brain[...]
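
For a taste of what a script looks like, here is a minimal scenario - a sketch based on the project docs at the time, with a placeholder URL and check:

# loadtest.py - minimal Molotov scenario (sketch; the URL and assertion are placeholders)
from molotov import scenario

@scenario(weight=100)
async def hit_endpoint(session):
    # session is an aiohttp ClientSession that Molotov creates and manages
    async with session.get("http://localhost:8080") as resp:
        assert resp.status == 200

Spawning more workers and processes (flags like -w and -p - check the docs for the exact options) scales the number of concurrent coroutines running that scenario from a single box.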



Christian Heilmann: ScriptConf in Linz, Austria – if you want all the good with none of the drama.

Wed, 15 Feb 2017 18:53:46 +0000

Last month I was very lucky to be invited to give the opening keynote of a brand new conference that can utterly go places: ScriptConf in Linz, Austria.

What I liked most about the event was an utter lack of drama. The organisation for us presenters was just enough to be relaxed, allowing us to concentrate on our jobs rather than juggling ticket bookings. The diversity of people and subjects on stage was admirable. The catering and the location did the job, and there was not much waste left over. I said before that a great conference stands and falls with the passion of the organisers. And the people behind ScriptConf were endearingly scared and amazed by their own success. There were no outrageous demands, no problems that came up at the last moment, and above all there was a refreshing feeling of excitement and a massive drive to prove themselves as a new conference in a country where JavaScript conferences aren’t a dime a dozen.

ScriptConf grew out of 5 different meetups in Austria. It had about 500 extremely well behaved and excited attendees. The line-up of the conference was diverse in terms of topics and people, and it was a great “value for money” show. As a presenter you got spoiled. The hotel was a 5 minute walk from the event and 15 minutes from the main train station. We had a dinner the day before and a tour of the local Ars Electronica Center before the event.

It is important to point out that the schedule was slightly different: the event started at noon and ended at “whenever” (we went for “Leberkäse” at 3am, I seem to recall). Talks were 40 minutes and there were short breaks between each two talks. As the opening keynote presenter I loved this. It is tough to give a rousing talk at 8am whilst people file slowly into the building and you’ve still got wet hair from the shower. You also have a massive lull in the afternoon when you get tired. It is a totally different thing to start well-rested at noon with an audience who had enough time to arrive and settle in. Presenters were from all around the world, from companies like Slack, NPM, Ghost, Google and Serverless.

The presentations: here’s a quick roundup of who spoke on what:

  • I gave the opening keynote, talking about how JavaScript is not a single thing but a full development environment now, and what that means for the community. I pointed out the importance of understanding different ways to use JavaScript and how they yield different “best practices”. I also did a call to arms to stop senseless arguing and following principles like “build more in shorter time” and “move fast and break things”, as they don’t help us as a market. I pointed out how my employer works with its engineers as an example of how you can innovate but also have a social life. It was also an invitation to take part in open source and bring more human, understanding communication to our pull requests.
  • Raquel Vélez of NPM told the history of NPM and explained in detail how they built the web site and the NPM search.
  • Nik Graf of Serverless covered the serverless architecture of AWS Lambda.
  • Hannah Wolfe of Ghost showed how they moved their kickstarter-funded NodeJS based open blogging system from nothing to a ten-person company and their 1.0 release, explaining the decisions and mistakes they made. She also announced their open journalism fund, “Ghost for journalism”.
  • Felix Rieseberg of Slack is an ex-Microsoft engineer and his talk was stunning. His slides about building apps with Electron are here and the demo code is on GitHub. His presentation was a live demo of using Electron to build a clone of Visual Studio Code by embedding Monaco into[...]



Air Mozilla: The Joy of Coding - Episode 91

Wed, 15 Feb 2017 18:00:00 +0000

(image) mconley livehacks on real Firefox bugs while thinking aloud.




Tim Taubert: The Future of Session Resumption

Wed, 15 Feb 2017 17:00:00 +0000

A while ago I wrote about the state of server-side session resumption implementations in popular web servers using OpenSSL. Neither Apache, nor Nginx, nor HAProxy purged stale entries from the session cache or rotated session tickets automatically, potentially harming the forward secrecy of resumed TLS sessions.

Enabling session resumption is an important tool for speeding up HTTPS websites, especially in a pre-HTTP/2 world where a client may have to open concurrent connections to the same host to quickly render a page. Subresource requests would ideally resume the session that, for example, a GET / HTTP/1.1 request started.

Let’s take a look at what has changed in over two years, and whether configuring session resumption securely has gotten any easier. With the TLS 1.3 spec about to be finalized, I will show what the future holds and how these issues were addressed by the WG.

Did web servers react?

No, not as far as I’m aware. None of the three web servers mentioned above has taken steps to make it easier to properly configure session resumption. But to be fair, OpenSSL didn’t add any new APIs or options to help them either. All popular TLS 1.2 web servers still don’t evict cache entries when they expire, keeping them around until a client tries to resume — for performance or ease of implementation. They generate a session ticket key at startup and will never automatically rotate it, so admins have to manually reload server configs and provide new keys.

The Caddy web server

I want to seize the chance and positively highlight the Caddy web server, a relative newcomer with the advantage of not having any historical baggage, that enables and configures HTTPS by default, including automatically acquiring and renewing certificates. Version 0.8.3 introduced automatic session ticket key rotation, thereby making session tickets mostly forward secure by replacing the key every ~10 hours. Session cache entries, though, aren’t evicted until access, just like with the other web servers. But even for “traditional” web servers all is not lost. The TLS working group has known about the shortcomings of session resumption for a while and addresses those with the next version of TLS.

1-RTT handshakes by default

One of the many great things about TLS 1.3 handshakes is that most connections should take only a single round-trip to establish. The client sends one or more KeyShareEntry values with the ClientHello, and the server responds with a single KeyShareEntry for a key exchange with ephemeral keys. If the client sends no or only unsupported groups, the server will send a HelloRetryRequest message with a NamedGroup selected from the ones supported by the client, and the connection will fall back to two round-trips. That means you’re automatically covered if you enable session resumption only to reduce network latency: a normal handshake is as fast as 1-RTT resumption in TLS 1.2. If you’re worried about computational overhead from certificate authentication and key exchange, that still might be a good reason to abbreviate handshakes.

Pre-shared keys in TLS 1.3

Session IDs and session tickets are obsolete since TLS 1.3. They’ve been replaced by a more generic PSK mechanism that allows resuming a session with a previously established shared secret key. Instead of an ID or a ticket, the client will send an opaque blob it received from the server after a successful handshake in a prior session. That blob might either be an ID pointing to an entry in the server’s session cache, or a session ticket encrypted with a key known only to the server.

enum { psk_ke(0), psk_dhe_ke(1), (255) } Ps[...]
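
If you want to see whether a given server actually resumes sessions, a quick client-side probe is easy to write. This is my own sketch using Python's ssl module (3.6+), not part of the original post; example.com is a placeholder host:

# resumption_probe.py - check whether a server resumes TLS sessions (sketch)
import socket
import ssl

HOST = "example.com"  # placeholder - point this at the server you want to test
ctx = ssl.create_default_context()

# First connection: full handshake; keep the session the server handed us.
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        session = tls.session

# Second connection: offer the saved session and see whether it was accepted.
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST, session=session) as tls:
        print("session resumed:", tls.session_reused)

Note this exercises the pre-TLS-1.3 mechanisms (session IDs and tickets via OpenSSL); the opaque session object the client re-offers is exactly the ID-or-ticket blob described above.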




Daniel Stenberg: New screen and new fuses

Wed, 15 Feb 2017 10:06:54 +0000

I got myself a new 27″ 4K screen for my work setup, a Dell P2715Q, and replaced one of my old trusty twenty-four inch friends with it. I now work with the Thinkpad 13 on the left as my video conference machine (it does nothing else and it runs Windows!), the two middle screens are a 24″ and the new 27″ connected to my primary dev machine, while the rightmost thing is my laptop for when I need to move.

Did everything run smoothly? Heck no. When I first inserted the 4K screen without modifying anything else in the setup, it was immediately obvious that I really needed to upgrade my graphics card, since it didn’t have muscles enough to drive the screen at 4K; the screen would instead upscale a 1920×1200 image in a slightly blurry fashion. I couldn’t have that!

New graphics card

So when I was out and about later that day I more or less accidentally passed a Webhallen store, and I got myself a new card. I wanted to play it easy so I stayed with an AMD processor and went with the ASUS Dual-RX460-O2G. The key feature I wanted was to be able to drive one 4K screen and one at 1920×1200. I unfortunately had to give up on the ones with only passive cooling and instead had to pick what sounds like a gaming card. (I hate shopping for graphics cards.) As I was about to do surgery on the machine anyway, I checked and noticed that I could add more memory to the motherboard, so I bought 16 more GB for a total of 32GB.

Blow some fuses

Later that night, when the house was quiet and dark, I shut down my machine, inserted the new card and the new memory DIMMs, and powered it back up again. At least that was the plan. When I fired it back on, there was a click, the lamps around me all went dark, and the machine didn’t light up at all. The fuse was blown! Man, wasn’t that totally unexpected? I did some further research on what exactly caused the fuse to blow, and blew a few more in the process, as I finally restored the former card and removed the memory DIMMs again and it still blew the fuse. Puzzled and slightly disappointed, I went to bed when I had no more spare fuses. I hate leaving the machine dead in parts on the floor with an uncertain future, but what could I do?

A new PSU

Tuesday morning I went to get myself a PSU replacement (Plexgear PS-600 Bronze), and once I had that installed no more fuses blew and I could start the machine again! I put the new memory back in, and I could get into the BIOS config with both screens working with the new card (and it detected 32GB of RAM just fine). But as soon as I tried to boot Linux, the boot process halted after just 3-4 seconds and seemingly froze. Hm. I tested a few different kernels and safe mode etc, but they all acted like that. Weird!

BIOS update

A little googling on the messages that appeared just before it froze gave me the idea that maybe I should see if there was an update for my BIOS available. After all, I’d never upgraded it, and it was a while since I got my motherboard (more than 4 years). I found a much updated BIOS image on the ASUS support site, put it on a FAT-formatted USB drive and upgraded. Now it booted. Of course the error messages I had googled for are still present, and I suppose they were there before too; I just hadn’t paid any attention to them when everything was working dandy!

DisplayPort vs HDMI

I had the wrong idea that I should use the DisplayPort to get 4K working, but it just wouldn’t work. DP + DVI just showed up on one screen, and I even went as far as trying to download some Ubuntu Linux driver package for the Radeon RX460 that I found, but of cou[...]



Mozilla Open Design Blog: Roads not taken

Wed, 15 Feb 2017 06:49:03 +0000

Here, Michael Johnson (MJ), founder of johnson banks, and Tim Murray (TM), Mozilla creative director, have a long-distance conversation about the Mozilla open design process while looking in the rear-view mirror.

TM: We’ve come a long way from our meet-in-the-middle in Boston last August, when my colleague Mary Ellen Muckerman and I first saw a dozen or so brand identity design concepts that had emerged from the studio at johnson banks.

MJ: If I recall, we didn’t have the wall space to put them all up, just one big table – but by the end of the day, we’d gathered around seven broad approaches that had promise. I went back to London and we gave them a good scrubbing to put on public show. It’s easy to see, in retrospect, certain clear design themes starting to emerge from these earliest concepts. Firstly, the idea of directly picking up on Mozilla’s name in ‘The Eye’ (and in a less overt way in the ‘Flik Flak’). ‘The Eye’ also hinted at the dinosaur-slash-Godzilla iconography that had represented Mozilla at one time. We also see at this stage the earliest, and most minimal, version of the ‘Protocol’ idea.

TM: You explored several routes related to code, and ‘Protocol’ was the cleverest. Mary Ellen and I were both drawn to ‘The Eye’ for its suggestion that Mozilla would be opinionated and bold. It had a brutal power, but we were also a bit worried that it was too reminiscent of the Eye of Sauron or Monsters, Inc.

MJ: Logos can be a bit of a Rorschach test, can’t they? Within a few weeks, we’d come to a mutual conclusion as to which of these ideas needed to be left by the wayside, for various reasons. The ‘Open button’, whilst enjoyed by many, seemed to restrict Mozilla too closely to just one area of work. Early presentation favourites such as ‘The Impossible M’ turned out to be just a little too close to other things out in the ether, as did Flik Flak – the value, in a way, of sharing ideas publicly and doing an impromptu IP check with the world. ‘Wireframe World’ was to come back, in a different form, in the second round.

TM: This phase was when our brand advisory group, a regular gathering of Mozillians representing different parts of our organization, really came into play. We had developed a list of criteria by which to review the designs – Global Reach, Technical Beauty, Breakthrough Design, Scalability, and Longevity – and the group ranked each of the options. It’s funny to look back on this now, but ‘Protocol’ in its original form received some of the lowest scores.

MJ: One of my sharpest memories of this round of designs, once they became public, was how many online commentators critiqued the work for being ‘too trendy’ or said ‘this would work for a gallery, not Mozilla’. This was clearly going to be an issue because, rightly or wrongly, it seemed to me that the tech and coding community had set the bar lower than we had expected in terms of design.

TM: A bit harsh there, Michael? It was the tech and coding community that had the most affinity for ‘Protocol’ in the beginning. If it wasn’t for them, we might have let it go early on.

MJ: Ok, that’s a fair point. Well, we also started to glimpse what was to become another recurring theme – despite johnson banks having been at the vanguard of broadening out brands into complete and wide-ranging identity systems, we were going to have to get used to a TL;DR way of seeing/reading that simply judged a route by its logo alone.

TM: Right! And no matter how many times we said that these[...]



Eitan Isaacson: Dark Windows for Dark Firefox

Wed, 15 Feb 2017 00:23:03 +0000

I recently set the Compact Dark theme as my default in Firefox. Since we don’t yet have Linux client-side window decorations (when is that happening??), it looks kind of bad in GNOME. The window decorator shows up as a light band in a sea of darkness:

(image)

It just looks bad. You know? I looked for an addon that would change the decorator to the dark-themed one, but I couldn’t find any. I ended up adapting the gtk-dark-theme Atom addon to a Firefox one. It was pretty easy. I did it over a remarkable infant sleep session on a Saturday morning. Here is the result:

(image)

You can grab the yet-to-be-reviewed addon here.





Mozilla VR Blog: New features in A-Frame Inspector v0.5.0

Tue, 14 Feb 2017 23:59:48 +0000

This is a small summary of some new features in the latest A-Frame Inspector version that may otherwise pass unnoticed.

Image assets dialog

v0.5.0 introduces an assets management window to import textures into your scene without having to manually type URLs. The updated texture widget includes the following elements:

  • Preview thumbnail: opens the image assets dialog.
  • Input box: hover the mouse over it and it will show the complete URL of the asset.
  • Open in a new tab: opens a new tab with the full-sized texture.
  • Clear: clears the value of the attribute.

Once the image assets dialog is open, you’ll see the list of images currently being used in your project, with the previous selection for the widget, if any, highlighted. You can click on any image from this gallery to set the value of the map attribute you’re editing. If you want to add new images to this list, click on LOAD TEXTURE and you’ll see several options to include a new image in your project:

  • Entering a URL.
  • Opening an Uploadcare dialog that will let you upload files from your computer, Google Drive, Dropbox and other sources (this is currently uploading the images to our Uploadcare account, so please be kind :) – we’re working on letting you define your own API key to use your own account).
  • Dragging and dropping from your computer. This will upload to Uploadcare too.
  • Choosing one from the curated list of images we’ve included in the sample-assets repo (https://github.com/aframevr/sample-assets).

Once you’ve added your image, you’ll see a thumbnail showing some information about the image and the name this texture will have in your project (the asset ID that can be referenced as #name). After editing the name if needed, click on LOAD TEXTURE and it will add your texture to the list of assets available in your project, showing you the list of textures you saw when you opened the dialog. Now just click on the newly created texture to set the new value for the attribute you were editing.

New features in the scenegraph

Toggle panels with the new shortcuts:

  • 1: Toggle scenegraph panel
  • 2: Toggle components panel
  • TAB: Toggle both panels

Toggling the visibility of each entity in the scene is now possible by pressing the eye icon in the scenegraph.

Broader scenegraph filtering

In the previous version of the inspector we could filter by the tag name of an entity or by its ID. In the new version, the filter also takes into account the names of the components each entity has and the values of the attributes of those components. For example, if we write red, it will return the entities whose name contains red, but also all of them with a red color in the material component. We can also filter by geometry, or directly by sphere, and so on. We’ve added the shortcut CTRL or CMD + f to set the focus on the filter input for faster filtering, and ESC to clear the filter.

Cut, copy and paste

Thanks to @vershwal it’s now possible to cut, copy and paste entities using the expected shortcuts:

  • CTRL or CMD + x: Cut selected entity
  • CTRL or CMD + c: Copy selected entity
  • CTRL or CMD + v: Paste the latest copied or cut entity

New shortcuts

The list of new shortcuts introduced in this version:

  • 1: Toggle scenegraph panel
  • 2: Toggle components panel
  • TAB: Toggle both scenegraph and components panels
  • CTRL or CMD + x: Cut selected entity
  • CTRL or CMD + c: Cop[...]



Emma Irwin: Thank you Brian King!

Tue, 14 Feb 2017 21:46:24 +0000

Brian King was one of the first people I met at Mozilla. He is someone whose opinion, ideas, trust, support and friendship have meant a lot to me – and I know countless others would similarly describe Brian as someone who made collaborating, working and gathering together a highlight of their Mozilla experiences and personal success. Brian has been a part of the Mozilla community for nearly 18 years – and even though we are thrilled for his new adventures, we really wanted to find a megaphone to say thank you… Here are some highlights from my interview with him last week.

Finding Mozilla

Brian came to Mozilla all those years ago as a developer. He worked for a company that developed software which promoted minority languages, including Basque, Catalan, Frisian, Irish and Welsh. As many did back in the day, he met people in newsgroups and on IRC, and slowly became immersed in the community, regularly attending developer meetups. Community, from the very beginning, was the reason Brian grew more deeply involved and connected to Mozilla’s mission.

Shining Brightly

Early contributions were code – becoming involved with the HTML Editor, then part of the Mozilla Suite. He got a job at ActiveState in Vancouver and worked on the Komodo IDE for dynamic languages. Skipping forward, he became more and more invested in transitioning to Add-on contribution and review – even co-authoring the book “Creating Applications with Mozilla” – which I did not know! Very cool. During this time he describes himself as being “very fortunate” to be able to make a living by working in the Mozilla and Web ecosystem while running a consultancy writing Firefox add-ons and other software.

Dear Community – “You had me at Hello”

Something Brian shared with me was that being part of the community essentially sustained his connection with Mozilla during times when he was too busy to contribute – and I think many other Mozillians feel this same way: it’s never goodbye, only see you soon. On Brian’s next adventure, I think we can take comfort that the open door of community will sustain our connection for years to come.

As Staff

Brian came on as Mozilla staff in 2012 as the European Community Manager, with success in this and in overseeing the evolution of the Mozilla Reps program. He was instrumental in successfully building Firefox OS launch teams all around the world. Most recently he has been sharpening that skillset of empowering individuals, teams and communities with support for various programs, regional support, and the Activate campaign.

Proud Moments

With a long string of accomplishments at Mozilla, I asked Brian what his proudest moments were. Some of those he listed were:

  • Being an AMO editor for a few years, reviewing thousands of add-ons.
  • Building community in the Balkan area.
  • Building out the Mozilla Reps program, and being a founding council member.
  • Helping drive Mozilla's success at FOSDEM.
  • Building Firefox OS (FFOS) launch teams.

But he emphasized that in all of these, the opportunity to bring new people into the community, and to nurture and help individuals and groups reach their goals, provided an enormous sense of accomplishment and fulfillment. He didn’t mention it, but I also found this photo of Brian on TV in Transylvania, Romania that looks pretty cool.

Look North!

To wrap up, I asked Brian what he most wanted to see for Mozilla in the next 5 years, leaning on what he knows for years as [...]



Patrick Walton: Pathfinder, a fast GPU-based font rasterizer in Rust

Tue, 14 Feb 2017 19:03:00 +0000

Ever since some initial discussions with Raph Levien (author of font-rs) at RustConf last September, I’ve been thinking about ways to improve vector graphics rendering using modern graphics hardware, specifically for fonts. These ideas began to come together in December, and over the past couple of months I’ve been working on actually putting them into a real, usable library. They’ve proved promising, and now I have some results to show.

Today I’m pleased to announce Pathfinder, a Rust library for OpenType font rendering. The goal is nothing less than to be the fastest vector graphics renderer in existence, and the results so far are extremely encouraging. Not only is it very fast according to the traditional metric of raw rasterization performance, it’s practical, featuring very low setup time (end-to-end time superior to the best CPU rasterizers), best-in-class rasterization performance even at small glyph sizes, minimal memory consumption (both on CPU and GPU), compatibility with existing font formats, portability to most graphics hardware manufactured in the past five years (DirectX 10 level), and security/safety.

Performance

To illustrate what it means to be both practical and fast, consider these two graphs (click each graph for a larger version). The first graph is a comparison of Pathfinder with other rasterization algorithms with all vectors already prepared for rendering (and uploaded to the GPU, in the case of the GPU algorithms). The second graph is the total time taken to prepare and rasterize a glyph at a typical size, measured from the point right after loading the OTF file in memory to the completion of rasterization. Lower numbers are better. All times were measured on a Haswell Intel Iris Pro (mid-2015 MacBook Pro).

From these graphs, we can see two major problems with existing GPU-based approaches:

  • Many algorithms aren’t that fast, especially at small sizes. Algorithms aren’t fast just because they run on the GPU! In general, we want rendering on the GPU to be faster than rendering on the CPU; that’s often easier said than done, because modern CPUs are surprisingly speedy. (Note that, even if the GPU is somewhat slower at a task than the CPU, it may be a win for CPU-bound apps to offload some work; however, this makes the use of the algorithm highly situational.) It’s much better to have an algorithm that actually beats the CPU.
  • Long setup times can easily eliminate the speedup of algorithms in practice. This is known as the “end-to-end” time, and real-world applications must carefully pay attention to it. One of the most common use cases for a font rasterizer is to open a font file, rasterize a character set from it (Latin-1, say) at one pixel size for later use, and throw away the file. With Web fonts now commonplace, this use case becomes even more important, because Web fonts are frequently rasterized once and then thrown away as the user navigates to a new page. Long setup times, whether the result of tessellation or more exotic approaches, are real problems for these scenarios, since what the user cares about is the document appearing quickly. Faster rasterization doesn’t help if it regresses that metric.

(Of the two problems mentioned above, the second is often totally ignored in the copious literature on GPU-based vector rasterization. I’d like to see researchers start to pay attention to it. In most scenarios, we don’t have the luxur[...]
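To make that “end-to-end” metric concrete, here is a minimal timing sketch of my own (not from the post): load_font and rasterize_glyph are hypothetical stand-ins for whatever rasterizer is being measured, not Pathfinder's actual API.

use std::time::Instant;

// Hypothetical stand-ins for the rasterizer under test; NOT a real API.
struct Font;
fn load_font(_otf_bytes: &[u8]) -> Font { Font }
fn rasterize_glyph(_font: &Font, _glyph_id: u32, _size_px: f32) {}

// End-to-end time: the clock starts *after* the OTF file is in memory,
// so font parsing, tessellation/setup and any GPU upload are all charged
// to the rasterizer, not just the raw per-glyph rasterization work.
fn end_to_end_ms(otf_bytes: &[u8], glyph_ids: &[u32], size_px: f32) -> f64 {
    let start = Instant::now();
    let font = load_font(otf_bytes);
    for &id in glyph_ids {
        rasterize_glyph(&font, id, size_px);
    }
    start.elapsed().as_secs_f64() * 1000.0
}

fn main() {
    let otf_bytes = std::fs::read("font.otf").unwrap_or_default();
    let latin1: Vec<u32> = (0..256).collect(); // one typical character set
    println!("end-to-end: {:.3} ms", end_to_end_ms(&otf_bytes, &latin1, 24.0));
}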



Mozilla Addons Blog: Add-ons Update – 2017/02

Tue, 14 Feb 2017 18:04:46 +0000

Here’s the state of the add-ons world this month.

If you haven’t read Add-ons in 2017, I suggest that you do. It lays out the high-level plan for add-ons this year.

The Review Queues

In the past month, 1,670 listed add-on submissions were reviewed:

  • 1,148 (69%) were reviewed in fewer than 5 days.
  • 106 (6%) were reviewed between 5 and 10 days.
  • 416 (25%) were reviewed after more than 10 days.

There are 467 listed add-ons awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.

Compatibility

The blog post for 52 is up and the bulk validation was already run. Firefox 53 is coming up.

Multiprocess Firefox is enabled for some users, and will be deployed for all users very soon. Make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.
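For a legacy (non-WebExtension) add-on, that flag goes in the install.rdf manifest. A minimal sketch, with a made-up add-on ID and most required properties omitted:

<?xml version="1.0" encoding="utf-8"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="urn:mozilla:install-manifest">
    <em:id>my-addon@example.com</em:id>
    <em:version>1.0</em:version>
    <!-- Declare e10s compatibility only after actually testing it -->
    <em:multiprocessCompatible>true</em:multiprocessCompatible>
    <!-- name, targetApplication, etc. omitted for brevity -->
  </Description>
</RDF>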

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • eight04
  • Aayush Sanghavi
  • zombie
  • Doug Thayer
  • ingoe
  • totaki
  • Piotr Drąg
  • ZenanZha
  • Joseph Frazier
  • Revanth47

You can read more about their work on our recognition page.




Firefox Nightly: These Weeks in Firefox: Issue 10

Tue, 14 Feb 2017 17:59:31 +0000

Highlights

  • The Sidebar WebExtension API (compatible with Opera’s API) has been implemented
  • Preferences reorg and search project is fully underway:
    • jaws and mconley led a “hack-weekend” this past weekend with some MSU students working on the reorg and search projects
    • Video of search work-in-progress, being implemented by iFerguson and manotejmeka
    • Preference reorg screenshots, being implemented by zack and ava1on
  • A lot of people were stuck on Firefox Beta 44; we found out about it and fixed it. Read more about it on :chutten’s blog
  • According to our Telemetry, ~62% of our release population has multi-process Firefox enabled by default now
  • Page Shot is going to land in Firefox 54. We are planning on making it a WebExtension so that users can remove it fully if they choose to.

Friends of the Firefox team

  • Resolved bugs (excluding employees): https://mzl.la/2ksFIxm
  • More than one bug fixed: Iaroslav Sheptykin, Mayank, Sebastian Hengst [:aryx][:archaeopteryx], Tomislav Jovanovic :zombie, Tooru Fujisawa [:arai]
  • New contributors (= First Patch!):
    • ahsan.r.kazmi reorganized some of the code that queries the Places database
    • Dorel Barbu converted some of the graphics used for our Sync interfaces from PNG to SVG

Project Updates

Activity Stream

  • Intent to Implement System Add-on thread on firefox-dev
  • Activity Stream localized in 30+ locales; :+1: to @flod, @mathjazz and team for all the help
  • Two separate teams working on ‘graduation’ (landing Activity Stream in m-c) and MVP (product features and A/B testing to ensure high user engagement)

Content Handling Enhancement

  • Landed string redesign inside Downloads Panel; next is the download progress indication redesign. Both scheduled for Firefox 54, while the visual redesign landed in Firefox 52.

Electrolysis (e10s)

  • e10s-multi is tentatively targeted to ride the trains in Firefox 55
  • Hoping to use a scheme where we measure the user’s available memory in order to determine the maximum content process count
  • Here’s the bug to track requirements to enable e10s-multi on Dev Edition by default

Firefox Core Engineering

  • This week we’re forcing Flash on Linux to be windowless and removing the support code for windowed mode on Linux. Should cause much better scrolling behavior with e10s, and kill a class of random-oranges.
  • “Ping sender” landed! Will allow us to get telemetry data immediately at shutdown instead of waiting for the next startup. Dramatically reduced data latency, starting with crash pings.

Form Autofill

  • Landed:
    • Make adjustHeight method adapt profile item list
    • Cleanup browser/extensions/formautofill/.eslintrc
    • Need to have a place in the Preference -> Setting for users to launch the profile list add/edit/remove dialog
    • Fallback to form history if form autofill pref is disabled
    • [Form Autofill] Prevent duplicate autocomplete search registration
    • Fill the selected autofill profile when an autocomplete entry is chosen

Go Faster

  • 1-day uptake of system add-ons is ~85% in beta (thanks to restartless), and ~72% in release (Wiki)

Platform UI and other Platform Audibles

  • mconley made it so that opening new maximized windows on Windows skips the opening animation
  • Have filed a number of Photon-related bugs for Platform to aid in performance (both perceived and legit)
  • Patch up for review to turn on menulist-powered drop-down improvements
  • 8 bugs verified: 1311795, 1336411, 1299428, 1316266, 1327155, 1327972, 1327953, 1322737
  • 7 new bugs filed: 1336725, 1336723, 1336717, 1336721, 1336941, 1336933, 1336938
  • Again thanks for another successful testday! We hope to see you all in our next events; all the details will be posted on QMO![...]



Gervase Markham: FOSDEM Talk: Video Available

Tue, 07 Feb 2017 11:49:38 +0000

I spoke on Sunday at the FOSDEM conference in the Policy devroom about the Mozilla Root Program, and about the various CA-related incidents of the past 5 years. Here’s the video (48 minutes, WebM):

Given that this only happened two days ago, I should give kudos to the FOSDEM people for their high quality and efficient video processing operation.

(image)


Media Files:
http://video.fosdem.org/2017/H.1301/mozilla_root_program.vp8.webm




This Week In Rust: This Week in Rust 168

Tue, 07 Feb 2017 05:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

  • Announcing Rust 1.15.
  • Rust's 2017 roadmap.
  • Announcing Diesel 0.10.0. Diesel now works on Rust stable.
  • Rocket v0.2. Managed state & more.
  • Incremental Compilation is available on nightly and ready for public beta testing.
  • Unsafe code and shared references. Communicating intent.
  • Understanding the newtype pattern, as well as the From and Into traits.
  • Writing Python extensions in Rust.
  • Stupid tricks with Rust higher-order functions and "impl trait".
  • What Rust can do that other languages can't, in six short lines.
  • Benchmarking Paillier encryption in Rust, C, and more languages.
  • Rust on Teensy part 2: Sending a message. PJRC Teensy is a USB-based microcontroller development system.
  • This week in Rust docs 42.
  • This week in Servo 91.
  • [video] Ferris makes Emulators 19: Sweep and Mod 2.

Crate of the Week

This week's crate of the week is djangohashers, a Rust port of Django's password primitives. Thanks to Ronaldo Ferreira for the suggestion! Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available; visit the task page for more information.

  • [easy] servo: Looking for something to work on.
  • [easy] clippy: Lint fns that take immutable refs and return mutables.
  • kafka-rust: Parallel communication to brokers.
  • kafka-rust: Integration testing.
  • [easy] clippy: Lint for redundant cast.
  • [easy] clippy: Exclude self-by-value trait methods implemented on Box from boxed_local.
  • [easy] clippy: Lint on method/struct fields sharing the same name.
  • android-rs-glue: Add more arguments and use clap to parse the arguments.
  • tokei: Add package repositories.

If you are a Rust project owner and are looking for contributors, plea[...]



Mike Hoye: The Scope Of The Possible

Mon, 06 Feb 2017 22:34:44 +0000

This is a rough draft; I haven’t given it much in the way of polish, and it kind of just trails off. But a friend of mine asked me what I think web browsers look like in 2025 and I promised I’d let that percolate for a bit and then tell him, so here we go. For whatever disclaimers like this are worth, I don’t have my hands on any of the product direction levers here, and as far as the orgchart’s concerned I am a leaf in the wind. This is just my own speculation.

I’m a big believer in Conway’s Law, but not in the sense that I’ve heard most people talk about it. I say “most people”, like I’m the lone heretic of some secret cabal that convenes once a month to discuss a jokey fifty-year-old observation about software architecture, I get that, but for now just play along. Maybe I am? If I am, and I’m not saying one way or another, between you and me we’d have an amazing secret handshake.

So: Conway’s Law isn’t anything fancier than the observation that software is a collaborative effort, so the shape of a large piece of software will end up looking a lot like the orgchart or communication channels of the people building it; this emerges naturally from the need to communicate and coordinate efforts between teams. My particular heresy here is that I don’t think Conway’s Law needs to be the curse it’s made out to be. Communication will never not be expensive, but it’s also a subset of interaction. So if you look at how the nature of people’s interactions with and expectations from a communication channel are changing, you can use it as a kind of oracle to predict what the next evolutionary step of a product should look like.

At the highest level, some 23 years after Netscape Navigator 1.0 came out, the way we interact with a browser is pretty much the same as it ever was: we open it, poke around it and close it. Sure, we poke around a lot more things, and they’re way cooler and have a lot more people on the far end of them, but… for the most part, that’s it. That was all that you could do in the 90’s, because that’s pretty much all that interacting with the web of the 90’s could let you do. The nature of the Web has changed profoundly since then, and like I’ve said before, the web is everywhere and in everything now. But despite that, and the[...]



Robert O'Callahan: What Rust Can Do That Other Languages Can't, In Six Short Lines

Mon, 06 Feb 2017 22:14:30 +0000

struct X {
    y: Y
}

impl X {
    fn y(&self) -> &Y { &self.y }
}

This defines an aggregate type containing a field y of type Y directly (not as a separate heap object). Then we define a getter method that returns the field by reference. Crucially, the Rust compiler will verify that all callers of the getter prevent the returned reference from outliving the X object. It compiles down to what you'd expect a C compiler to produce (pointer addition) with no space or time overhead.
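For example, here is a minimal sketch of my own (with Y filled in as a unit struct so the snippet stands alone) showing the kind of caller the borrow checker rejects:

struct Y;

struct X {
    y: Y,
}

impl X {
    fn y(&self) -> &Y { &self.y }
}

fn main() {
    let y_ref;
    {
        let x = X { y: Y };
        y_ref = x.y();
    } // rustc: error[E0597]: `x` does not live long enough
    println!("{:p}", y_ref); // the reference would dangle here
}

This program is deliberately broken: as written, rustc refuses to compile it, which is exactly the lifetime check that C and C++ compilers don't perform; drop the inner block (or the final use of y_ref) and it compiles fine.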

As far as I know, no other remotely mainstream language can express this, for various reasons. C/C++ can't do it because the compiler does not perform the lifetime checks. (The C++ Core Guidelines lifetime checking proposal proposes such checks, but it's incomplete and hasn't received any updates for over a year.) Most other languages simply prevent you from giving away an interior reference, or require y to refer to a distinct heap object from the X.

This is my go-to example every time someone suggests that modern C++ is just as suitable for safe systems programming as Rust.

Update: As I half-expected, people did turn up a couple of non-toy languages that can handle this example. D has some special-casing for this case, though its existing safety checks are limited and easily circumvented accidentally or deliberately. (Those checks are in the process of being expanded, though it's hard to say where D will end up.) Go can also do this, because its GC supports interior pointers. That's nice, though you're still buying into the fundamental tradeoffs of GC: some combination of increased memory usage, reduced throughput, and pauses. Plus, relying on GC means it's nearly impossible to handle the "replace a C library" use case.




Dave Townsend: hgchanges is down, probably for good

Mon, 06 Feb 2017 22:01:35 +0000

My little tool to help folks track when changes are made to files or directories in Mozilla’s mercurial repositories has gone down again. This time an influx of some 8000 changesets from the servo project are causing the script that does the updating to fail so I’ve turned off updating. I no longer have any time to work on this tool so I’ve also taken it offline and don’t really have any intention to bring it back up again. Sorry to the few people that this inconveniences. Please go lobby the engineering productivity folks if you still need a tool like this.




Air Mozilla: Lara Hogan on Demystifying Public Speaking

Mon, 06 Feb 2017 22:00:00 +0000

(image) Based on her book "Demystifying Public Speaking," Lara Hogan will talk through the process of writing, practicing, and getting feedback on a talk!




Air Mozilla: Mozilla Weekly Project Meeting, 06 Feb 2017

Mon, 06 Feb 2017 19:00:00 +0000

(image) The Monday Project Meeting




The Mozilla Blog: Mozilla Files Brief Against U.S. Immigration Executive Order

Mon, 06 Feb 2017 17:11:01 +0000

Mozilla filed a legal brief against the Executive Order on immigration, along with nearly 100 other major companies across different industries.

We joined this brief in support of the State of Washington v. Trump case because the freedom for ideas and innovation to flow across borders is something we strongly believe in as a tech company. More importantly, it is something we know is necessary to fulfill our mission to protect and advance the internet as a global public resource that is open and accessible to all.

The order is also troubling because of the way it undermines trust in U.S. immigration law. This sets a dangerous precedent that could damage the international cooperation required to develop and maintain the open internet.

We believe this Executive Order is misplaced and damaging to Mozilla, to the country, and to the global technology industry. These restrictions are significant and have had a negative impact on Mozilla and our operations, especially as a mission-based organization and global community with international scope and influence over the health of the internet.

The ability for individuals, and the ideas and expertise they carry with them, to travel across borders is central to the creation of the technologies and standards that power the open internet. We will continue to fight for more trust and transparency across organizations and borders to help protect the health of the internet and to nurture the innovation needed to advance the internet.




Daniel Stenberg: Post FOSDEM 2017

Mon, 06 Feb 2017 14:23:17 +0000

I attended FOSDEM again in 2017 and it was as intense, chaotic and wonderful as ever. I met old friends, made new friends and I got to test a whole range of Belgian beers. Oh, and there was also a set of great open source related talks to enjoy!

On Saturday at 2pm I delivered my talk on curl in the main track in the almost frighteningly large room Janson. I estimate that it was almost half full, which would mean upwards of 700 people in the audience. The talk itself went well. I got audible responses from the audience several times and I kept well within my given time, with time left over for questions. The trickiest problem was the audio from the people who asked questions, because it wasn’t at all easy to hear in the room, while the audio is great for the audience and in the video recording. Slightly annoying because, as everyone else heard, it made me appear half deaf. Oh well. I got great questions both then and from people approaching me after the talk. The questions and the feedback I get from a talk are really among the things that make me appreciate talking the most. The video of the talk is available, and the slides can also be viewed.

So after I had spent some time discussing curl things and handing out many stickers after my talk, I managed to land in the cafeteria for a while until it was time for me to once again go and perform. We’re usually a team of friends that hang out during FOSDEM and we all went over to the Mozilla room to be there perhaps 20 minutes before my talk was scheduled and wow, there was a huge crowd outside of that room already waiting by the time we arrived. When the doors finally opened (about 10 minutes before my talk started), I had to zigzag my way through to get in, and there were a large number of people who didn’t get in. None of my friends from the cafeteria made it in!

The Mozilla devroom had 363 seats; not a single one was unoccupied and there were people standing along the sides and the back wall. So, an estimated nearly 400 persons in that room saw me speak about HTTP/2 deployment numbers right now, how HTTP/2 doesn’t really work well under 2% packet loss situations and then a bit[...]


Media Files:
https://video.fosdem.org/2017/Janson/curl.mp4




Francesco Lodolo: Goodbye Jooliaan

Mon, 06 Feb 2017 11:36:36 +0000

Today is a sad day for Mozilla, not just for the Italian community but for the Mozilla Community as a whole. We just learned that Giuliano Masseroni, aka jooliaan, passed away last night.

jooliaan, even though he hasn’t been active for a few years, had a crucial role in the growth of the Italian community and the creation of the legal association known as Mozilla Italia – as Vice President first, then President – and as administrator of the local support forum. If you’re using Firefox in Italian, or seeking and getting help on the forum, it’s also thanks to his work and dedicated contribution.

Today we’re sharing in the pain of his family and friends, remembering a person, a friend, who made a fundamental contribution to the history of Open Source in Italy and at Mozilla, and to our own lives.




Daniel Glazman: Dual View in BlueGriffon

Mon, 06 Feb 2017 10:56:00 +0000

Long, long ago, one of the dreams of the manager of the Editor's Team at Netscape, Beth Epperson aka beppe, was a dual Source+Wysiwyg view, with Source and Wysiwyg kept in sync. And to do that, a few on the team (beppe, cmanskey and myself) were hoping to use the following:

  • in full theory, the CSS Object Model was made to allow different Views on each document. That's the reason why we have a getDefaultView() when we query a computed style... Unfortunately, rendering engines never really allowed that.
  • we hoped to use that mechanism to create a Stylesheet that would precisely and exactly render the source of a document. We missed a few things, of course: a value for the content property serializing the start tag of an element with all its attributes, another one for the end tag, possibly a property to control entity output and we had of course a problem with all the stuff a CSS rule cannot select (comments, PIs, prolog, cdata sections)

It never worked that way, unfortunately. We tried and had to declare failure. We also briefly tried another, more classic approach: keeping a real source view and a real Wysiwyg view in sync. Unfortunately, Gecko was rather slow on the slow computers of that time and the performance hit was too high. But modern engines with super-fast JS and much better CPUs changed all of that.

I'm glad to report, then, that a Gecko-based Wysiwyg editor finally has a synced Dual View: BlueGriffon. It will ship with the forthcoming version 2.3.

(image)




Mike Taylor: How not to feature detect

Mon, 06 Feb 2017 06:00:00 +0000

Bug 1331638 is a good example of why feature detection is important, but also a good reminder to test in environments where the feature totally doesn't exist.

A website (Twitter, in this example) tries to do the right thing and check for support before throwing notifications at you:

this.isSupported = function() {
    var a = "PushManager" in window,
        b = this.getNotificationPermission() !== "denied",
        c = "serviceWorker" in navigator,
        d = c && "showNotification" in ServiceWorkerRegistration.prototype;
    return d && a && b && c
}

Digging into getNotificationPermission:

this.getNotificationPermission = function() {
    return Notification && Notification.permission
}

OK, that looks pretty good. A boolean expression checking for the existence of Notification, then returning its permission property value. So why the bug report with ReferenceError: Notification is not defined?

In Firefox, you can turn off the Notifications API via the dom.webnotifications.enabled pref. If you choose to do that, the Notification interface global totally doesn't exist. And what happens if you refer to something that doesn't exist? ReferenceError. Boom. The page totally explodes.

Well, that's dumb. My app doesn't support users who do that, you're thinking. But the situation won't be any different for older browsers that don't implement the API.

The fix in this case is very simple (and to their credit, Twitter deployed the fix within a few hours of finding out): check for the existence of the interface on the global first, then do the rest as usual.

this.getNotificationPermission = function() {
    return window.Notification && window.Notification.permission
}

(Note that they get this right in the "PushManager" in window feature detect in isSupported.)[...]
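One more note of my own, not from the post: the other classic guard against undeclared globals is typeof, which never throws on an identifier that was never declared. A minimal sketch:

// typeof is safe even when Notification was never declared, so this
// works where a bare Notification reference would throw a ReferenceError.
function getNotificationPermission() {
    if (typeof Notification === "undefined") {
        return null; // API absent: pref'd off, or an older browser
    }
    return Notification.permission;
}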