Planet Mozilla


Chris Ilias: Working for Postbox

Mon, 16 Jan 2017 01:02:55 +0000

I’m happy to announce that I’ve started working for Postbox, doing user content and support.
This means that I won’t have time for some of my commitments within Mozilla. Over the next while, I may be cancelling or transferring some of my projects and responsibilities.


Karl Dubost: [worklog] Edition 050 - Intern hell and testing

Sun, 15 Jan 2017 22:00:00 +0000

Monday was a day off in Japan.

webcompat issues

  • Do you remember last week's issues related to the implementation of global? Well, it seems we are not the only ones. Apple announced that they had shipped the same feature in Safari Tech Preview 21, but they had to revert it immediately because it broke their Polymer tests.

  • I went back to hacking on the two image sizes after a mistake I had made in one piece of JavaScript. My manual testing is all right, but the Intern tests using Selenium are not even starting. I spent a couple of hours trying to solve it but ran into a dead end, and have already spent too much time on this. At least I'm not the only one. It led to the need to update Intern, the functional testing tool we use.
  • Discussions around labels are going on, with interesting ideas.
  • We also restarted the discussion about closing issues that are not strictly webcompat. Your opinion is welcome.
  • How to deal with a flash message on a URI right at the creation of a new issue.
  • Spent a bit of time reading and hacking on mock testing. I tried to rewrite parts of the image-upload tests using mocking, but have not yet reached a working state. The exercise was still beneficial in some ways, because doing it forces us to improve the way we coded the uploads module.
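A minimal sketch of that mocking approach with unittest.mock — the handler and uploader here are hypothetical stand-ins, not the actual webcompat.com uploads module:

```python
from unittest import mock

def report_issue(image_bytes, uploader):
    """Toy issue handler: stores the screenshot via `uploader`
    and records the resulting URL on the issue."""
    url = uploader(image_bytes)
    return {"screenshot": url}

def test_report_issue_uses_uploader():
    # Replace the real uploader with a mock so the test never
    # touches the filesystem or the network.
    fake_upload = mock.Mock(return_value="https://example.com/img/1.png")
    issue = report_issue(b"fake-png-bytes", fake_upload)
    # The mock records how it was called, so we can assert on it.
    fake_upload.assert_called_once_with(b"fake-png-bytes")
    assert issue["screenshot"] == "https://example.com/img/1.png"

test_report_issue_uses_uploader()
```

The point of the exercise is exactly what the bullet says: to mock the upload step cleanly, the uploads module has to expose it as an injectable dependency rather than reaching for the disk directly.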

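To illustrate the kind of breakage behind the global item above: many libraries sniff for Node.js by checking whether a `global` binding exists, so a browser that starts exposing `window.global` can flip those checks. A hedged sketch (the actual Polymer failure may have differed in detail):

```javascript
// A common environment check that the mere presence of `global` confuses.
function detectEnvironment(scope) {
  // Many libraries treat the existence of `global` as proof
  // that they are running under Node.js.
  if (typeof scope.global !== "undefined") {
    return "node";
  }
  return "browser";
}

// A plain browser window object: no `global`, detected correctly.
console.log(detectEnvironment({}));             // "browser"

// A window that ships the proposed `global` property: misdetected.
console.log(detectEnvironment({ global: {} })); // "node"
```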

  • Medium is in trouble.

    I’ll start with the hard part: As of today, we are reducing our team by about one third — eliminating 50 jobs, mostly in sales, support, and other business functions. We are also changing our business model to more directly drive the mission we set out on originally.

    Each time I have read an article published on the (yes, slick but not original) Medium platform, a little voice inside me has told me: "If you like this article, you should save it locally. This might disappear one day." If you publish content, you need to own and liberate your content. That seems contradictory. Ownership: you need to publish the content on your blog so you are responsible for its future. Liberation: you need to use a permissive licence (Creative Commons CC0, Public Domain, etc.), so people can disseminate it and make it more resilient that way. Culture propagates because we share.

  • Dietrich on Elephant Trail. This reminds me of our walk on the same trail with the webcompat team in Taipei last October.


Cameron Kaiser: 45.7.0 available (also: Talos fails)

Sun, 15 Jan 2017 05:00:00 +0000

TenFourFox 45.7.0 is now available for testing (downloads, hashes, release notes). In addition to reducing the layout paint delay I also did some tweaks to garbage collection by removing some code that isn't relevant to us, including some profile accounting work we don't need to bother computing. If there is a request to reinstate this code in a non-debug build we can talk about a specific profiling build down the road, probably after exiting source parity. As usual the build finalizes Monday evening Pacific time.

For 45.8 I plan to start work on the built-in user-agent switcher, and I'm also looking into a new initiative I'm calling "Operation Short Change" to wring even more performance out of IonPower. Currently, the JavaScript JIT's platform-agnostic section generates simplistic unoptimized generic branches. Since these generic branches could call any code at any displacement and PowerPC conditional branch instructions have only a limited number of displacement bits, we pad the branches with nops (i.e., nop/nop/nop/bc) so they can be patched up later, if necessary, to a full-displacement branch (lis/ori/mtctr/bcctr) if the branch turns out to be far away. This technique of "branch stanzas" dates back all the way to the original nanojit we had in TenFourFox 4, and Ben Stuhl did a lot of optimization work on it for our JaegerMonkey implementation that survived nearly unchanged in PPCBC and, in a somewhat modified form, today in IonPower-NVLE.

However, many of the generic branches the Ion code generator creates jump to code that is always just a few instruction words away, and the distance between them never moves. These locations are predictable, and having a full branch stanza in those cases wastes memory and instruction cache space; fortunately we already have machinery to create these fixed "short branches" in our PPC-specific code generator, and now it's time to further modify Ion to generate these branches in the platform-agnostic segment as well.
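The stanza described above can be sketched roughly as follows (GAS-style PowerPC; the register choice and condition syntax are illustrative only, not actual IonPower output):

```asm
# As emitted, while the target may still be nearby:
nop                        # patch slot
nop                        # patch slot
nop                        # patch slot
bc    <cond>, target       # conditional branch, 16-bit displacement

# After patching, once the target turns out to be far away:
lis   r12, target@ha       # load high 16 bits of the target address
ori   r12, r12, target@l   # merge in the low 16 bits
mtctr r12                  # move the address into the count register
bcctr <cond>               # conditional branch through CTR
```

Both forms occupy four instruction words, which is what lets the patcher rewrite one into the other in place.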
At the same time, since we don't generally use LR actually as a link register due to a side effect of how we branch, I'm going to investigate whether using LR is faster for long branches than CTR (i.e., lis/ori/mtlr/b(c)lr instead of mtctr/b(c)ctr). Certainly on G5 I expect it probably will be, because having mtlr and blr/bclr in the same dispatch group doesn't seem to incur the same penalty that mtctr and bctr/bcctr in the same dispatch group do. (Our bailouts do use LR, but in an indirect form that intentionally clobbers the register anyway, so saving it is unimportant.) On top of all that there is also the remaining work on AltiVec VP9 and some other stuff, so it's not like I won't have anything to do for the next few weeks.

On a more disappointing note, the Talos crowdfunding campaign for the most truly open, truly kick-*ss POWER8 workstation you can put on your desk has run aground, "only" raising $516,290 of the $3.7m goal. I guess it was just too expensive for enough people to take a chance on, and in fairness I really can't fault folks for having a bad case of sticker shock with a funding requirement as high as they were asking. But you get the computer you're willing to pay for. If you want a system made cheaper by economies of scale, then you're going to get a machine that doesn't really meet your specific needs because it's too busy not meeting everybody else's. Ultimately it's sad that no one's money was where their mouths were, because for maybe double-ish the cost of the mythical updated Mac Pro Tim Cook doesn't see fit to make, you could have had a truly unencumbered machine that really could compete on performance with x86. But now we won't. And worst of all, I think this will scare off other companies from even trying.[...]

Will Kahn-Greene: Me: 2016 retrospective

Sat, 14 Jan 2017 15:00:00 +0000

My 2016 retrospective, in which I talk about projects I worked on, projects I pushed off, and other things.

Read more… (6 mins to read)

David Lawrence: Happy Early BMO Push Day!

Fri, 13 Jan 2017 23:10:35 +0000

the following changes have been pushed to

  • [1280406] [a11y] Make each start of a comment a heading 3 for easier navigation
  • [1280395] [a11y] Make the Add Comment and Preview tabs real accessible tabs
  • [1329659] Copy A11y tweaks from BugzillaA11yFixes.user
  • [1330449] Clarify “Edit” button
  • [1329511] Any link to user-entered URL with target=”_blank” should have rel=”noopener” or rel=”noreferrer”

discuss these changes on


Mark Côté: Project Conduit

Fri, 13 Jan 2017 17:56:20 +0000

In 2017, Engineering Productivity is starting on a new project that we’re calling “Conduit”, which will improve the automation around submitting, testing, reviewing, and landing commits. In many ways, Conduit is an evolution and course correction of the work on MozReview we’ve done in the last couple years.

Before I get into what Conduit is exactly, I want to first clarify that the MozReview team has not really been working on a review tool per se, aside from some customizations requested by users (like support for inline diff comments). Rather, most of our work was building a whole pipeline of automation related to getting code landed. This is where we’ve had the most success: allowing developers to push commits up to a review tool and to easily land them on try or mozilla-central. Unfortunately, by naming the project “MozReview” we put the emphasis on the review tool (Review Board) instead of the other pieces of automation, which are the parts unique to Firefox’s engineering processes. In fact, the review tool should be a replaceable part of our whole system, which I’ll get to shortly.

We originally selected Review Board as our new review tool for a few reasons:

  • The back end is Python/Django, and our team has a lot of experience working with both.
  • The diff viewer has a number of fancy features, like clearly indicating moved code blocks and indentation changes.
  • A few people at Mozilla had previously contributed to the Review Board project and thus knew its internals fairly well.

However, we’ve since realized that Review Board has some big downsides, at least with regards to Mozilla’s engineering needs:

  • The UI can be confusing, particularly in how it separates the Diff and the Reviews views. The Reviews view in particular has some usability issues.
  • Loading large diffs is slow, but conversely it’s unable to depaginate, so long diffs are always split across pages. This restricts the ability to search within diffs. Also, it’s impossible to view diffs file by file.
  • Bugs in interdiffs, and even occasionally in the diffs themselves.
  • No offline support.

In addition, the direction that the upstream project is taking is not really in line with what our users are looking for in a review tool. So, we’re taking a step back and evaluating our review-tool requirements, and whether they would be best met with another tool or by a small set of focussed improvements to Review Board. Meanwhile, we need to decouple some pieces of MozReview so that we can accelerate improvements to our productivity services, like Autoland, and ensure that they will be useful no matter what review tool we go with. Project Conduit is all about building a flexible set of services that will let us focus on improving the overall experience of submitting code to Firefox (and some other projects) and unburden us from the restrictions of working within Review Board’s extension system.

In order to prove that our system can be independent of the review tool, and to give developers who aren’t happy with Review Board access to Autoland, our first milestone will be hooking the commit repo (the push-to-review feature) and Autoland up to BMO. Developers will be able to push a series of one or more commits to the review repo, and reviewers will be able to choose to review them either in BMO or Review Board. The Autoland UI will be split off into its own service and linked to from both BMO and Review Board. (There’s one caveat: if there are multiple reviewers, the first one gets to choose, in order to limit complexity. Not ideal, but the problem quickly gets much more difficult if we fork the reviews out to several tools.)

As with MozReview, the push-to-BMO feature won’t support confidential bugs right away, but we have been working on a design to support them. Implementing that will be a priority right after we finish BMO integration. We have an aggressive plan for Q1, so [...]

Matěj Cepl: Harry Potter and The Jabber Spam

Fri, 13 Jan 2017 15:21:16 +0000

After many, many years of happily using XMPP, we were finally awarded with the respect of spammers, and suddenly some of us (especially those who have their JID in their email signature) are getting a lot of spim. Fortunately, the world of Jabber is not so defenceless, thanks to XEP-0016 (Privacy Lists). Not only is it possible to set up a list of known spammers (not only by their complete JIDs, but also by whole domains), it is also possible to build more complicated constructs. Usually these constructs are not very well supported by GUIs, so most of the work must be done by sending plain XML stanzas to the XMPP stream. For example, with pidgin one can open the XMPP Console by going to Tools/XMPP Console and selecting the appropriate account whose privacy lists are to be edited.

The whole system of ACLs consists of multiple lists. To get a list of all those privacy lists for the particular server, we need to send this XMPP stanza: If the stanza is sent correctly and your server supports XEP-0016, then the server replies with the list of all privacy lists: To get the content of one particular list, we need to send this stanza: And again the server replies with this list: The server goes through every item in the list and decides based on the value of the action attribute. If the stanza under consideration does not match any item in the list, the whole system defaults to allow.

I was building a blocking list like this for some time (I have even authored a simple Python script for adding new JIDs to the list), but it seems to be a road to nowhere. Spammers are just generating new and new domains. The only workable solution seems to me to be a whitelist: some domains are allowed, but everything else is blocked. See this list stanza sent to the server (the answer should be a simple one-line empty XML element):
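The stanzas themselves did not survive the feed conversion; per XEP-0016 the exchange looks roughly like the following (list names, `id` values, and domains are made-up examples):

```xml
<!-- Ask the server for all privacy lists (the reply enumerates them): -->
<iq type='get' id='lists1'>
  <query xmlns='jabber:iq:privacy'/>
</iq>

<!-- Ask for the content of one particular list: -->
<iq type='get' id='lists2'>
  <query xmlns='jabber:iq:privacy'>
    <list name='blocked'/>
  </query>
</iq>

<!-- A whitelist: allow a few known-good domains, deny everything else.
     Items are evaluated in 'order'; the final catch-all item is the
     deny default the post describes. -->
<iq type='set' id='lists3'>
  <query xmlns='jabber:iq:privacy'>
    <list name='whitelist'>
      <item type='jid' value='jabber.org' action='allow' order='1'/>
      <item type='jid' value='mozilla.org' action='allow' order='2'/>
      <item action='deny' order='3'/>
    </list>
  </query>
</iq>
```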

Panos Astithas: Feeling safer online with Firefox

Fri, 13 Jan 2017 12:50:39 +0000

The latest privacy and security improvements in Firefox. [This post originally appeared on Medium]

Firefox is the only browser that answers only to you, our users; so all of us who work on Firefox spend a lot of effort making your browsing experience more private and secure. We update Firefox every 6 weeks, and every new change ships to you as fast as we can make and verify it. For a few releases now, we have been landing bits and pieces of a broader set of privacy and security changes. This post will outline the big picture of all these changes.

Site Identity and Permissions Panel

The main project involved improvements to the way Firefox handles permission requests from sites that want to do things that the web platform doesn't allow by default - like accessing your laptop's camera or GPS sensor. To find out how our existing model fares, we ran it through a number of user studies and gathered feedback from users and web developers alike. What we found was clear: users were having trouble making full use of web permissions. Here are some of the observations:

  • It’s easy by design to dismiss a permission prompt, to prevent websites from nagging you. But it’s not so obvious how to get back to an inadvertently dismissed prompt, which users found confusing.
  • Managing the permissions of an individual site was hard, due to the multitude of presented options.
  • It was cumbersome to grant access to screen sharing. This was because it was difficult to select which area of the screen would be shared, and because screen sharing was only permitted on websites included in a manually curated list.

In order for the open web platform to be on par with the capabilities of native, closed platforms, we needed to fix these issues. So we first focused on putting all privacy and security related information in one place. We call it the Site Identity and Permissions Panel, or more affectionately, the Control Center™.

The Site Identity panel appears when you click on the circled “i” icon – “i” for “information” – on the left side of the Awesome Bar. The panel is designed to be the one-stop shop for all security and privacy information specific to the site you’re on. This includes the encrypted connection's certificate, mixed content warning messages, tracking protection status, as well as non-default permissions. We were happy to see Chrome adopt a similar UI, too.

Elevated Privileges for Non-Default Permissions

By default, web sites need an elevated level of privilege to access your computer hardware like camera, microphone, GPS or other sensors. When a site requests such a permission and the user grants it, the Site Identity panel will display the allowed item along with an "x" button to revert it. In the case of particularly privacy-sensitive permissions, like microphone or camera access, the icon will have a red hue and a gentle animation to draw attention. When a site has been granted elevated privileges, the "i" icon in the URL bar is badged with a dot that indicates the additional information present in the Site Identity panel. This lets you assess the security properties of your current session with a quick glance at the awesomebar, where the "i" and lock icons are displayed together. Users who want even more fine-grained control over all available permissions can go to the Permissions tab in the Page Info dialog (right arrow in the Identity panel -> More Information).

Permission Prompt and Dialog

Permission dialogs are now more consistent than before, both in terms of available actions and messaging. When a site asks for a permission, a permission prompt appears with a message and iconography specific to the type of permission being requested and the available actions. Most of the time, there will only be two: allow or don’t allow access. The default action will stand out in a blue highlight, making the common action easier to perform. In the[...]

Support.Mozilla.Org: What’s Up with SUMO – 12th January 2017

Thu, 12 Jan 2017 21:22:23 +0000

Hello, SUMO Nation! Yes, it’s Friday the 13th this week, considered by some cultures to be a day of “bad luck”, so… Read quickly, before the bells chime for midnight! But, before we get there… Happy birthday, Charles Perrault! And a happy National Youth Day to everyone in India!

Welcome, new contributors!

  • Volodymyr Nerovnia

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • All the forum supporters who tirelessly helped users over the last week.
  • All the writers of all languages who worked tirelessly on the KB over the last week.
  • Caspy and Noah for investigating issues with Facebook on a Friday night!

We salute all of you!

SUMO Community meetings

  • LATEST ONE: 11th of January – you can read the notes here (and see the video at AirMozilla).
  • NEXT ONE: happening on the 18th of January!
  • Reminder – if you want to add a discussion topic to the upcoming meeting agenda: Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting). Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda). If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

  • Have you had a chance to take a look at the 2016 Report for SUMO? (And if you do, say “thanks” to Rachel for her hard work on all the data!)
  • What are the best and worst support experiences you have had? Share them with us in the Best Practices document.
  • Roland will share his plans for Internet Awareness this quarter soon. In the meantime, we are keeping our ears and eyes open to your ideas – brainstorm with us!
  • Reminder: If you are struggling with using Vidyo on Ubuntu, please contact Seburo – he may have found a solution for your pains.
  • Reminder: Are you interested in working together on training for new contributors? Talk to Rachel!

Calendar time! January dates that you should remember:

  • 17th – Firefox for iOS 6.0 release
  • 18th – SUMO Community Meeting
  • 19th – SUMO Platform Meeting
  • 24th – Firefox 51 release
  • 25th-26th – SUMO Day & SUMO Social Day
  • 28th – International Privacy Day
  • 31st – tentative platform migration day (changed – more details below)

…Any other dates you want us to keep in mind? Use the comments below!

Platform

  • Check the notes from the last meeting in this document (and don’t forget about our meeting recordings).
  • The bug list was tweaked to show all the bugs filed so far. Now you should be able to see when your issue was resolved.
  • The main points of today’s meeting were: Migration rescheduled to the 31st of January (Tuesday) to make sure we have a smooth launch of Firefox 51 (on Kitsune). The latest test migration was kicked off only recently; results to follow soon. No anonymous kudos (upvotes) in Lithium (you need to be logged in to upvote someone’s contribution), which will probably mean fewer upvotes as a result.
  • Reminder: You can preview the current migration test site here. If you can’t access the site, contact Madalina. Drop your feedback into the same feedback document as usual.
  • We have a Bugzilla component for issues. You can see their list here and you can create new ones here – please do not assign them to anyone when you create them.
  • Reminder: The post-migration feature list can be found here.

Social

  • Your inboxes should soon contain a message with a link to a post-2016 survey about Social, where you will be able to help us shape the near future of Social Support.
  • Reminder: you can contact Sierra (sreed@), Elisabeth (ehull@), or Rachel (guigs@) to get started with Social support. Help us provide frie[...]

Air Mozilla: Connected Devices Weekly Program Update, 12 Jan 2017

Thu, 12 Jan 2017 18:45:00 +0000

Weekly project updates from the Mozilla Connected Devices team.

Air Mozilla: Reps Weekly Meeting Jan. 12, 2017

Thu, 12 Jan 2017 16:00:00 +0000

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Roberto A. Vitillo: Telemetry meets HBase (again)

Thu, 12 Jan 2017 15:41:39 +0000

At the end of November AWS announced that HBase on EMR supported S3 as a data store. That’s great news because it means one doesn’t have to keep around an HDFS cluster with 3x replication, which is not only costly but comes with its own operational burden. At the same time we had some use cases that could have been addressed with a key-value store, and this seemed like a good opportunity to give HBase a try.

What is HBase?

HBase is an open source, non-relational, distributed key-value store which traditionally runs on top of HDFS. It provides a fault-tolerant, efficient way of storing large quantities of sparse data using column-based compression and storage. In addition, it provides fast lookup of data thanks to indexing and an in-memory cache. HBase is optimized for sequential write operations, and is highly efficient for batch inserts, updates, and deletes. HBase also supports cell versioning, so one can look up and use several previous versions of a cell or a row. The system can be imagined as a distributed log-structured merge tree and is ultimately an open source implementation of Google’s BigTable whitepaper.

An HBase table is partitioned horizontally into so-called regions, each of which contains all rows between the region’s start and end key. Region Servers are responsible for serving regions, while the HBase master handles region assignments and DDL operations. A region server has:

  • a BlockCache, which serves as an LRU read cache;
  • a BucketCache (EMR version only), which caches reads on local disk;
  • a WAL, used to store writes not yet persisted to HDFS/S3, itself stored on HDFS;
  • a MemStore per column family (a collection of columns); a MemStore is a write cache which, once it has accumulated enough data, is written to a store file;
  • store files, which store rows as sorted key-values on HDFS/S3.

[Figure: HBase architecture with HDFS storage]

This is just a 10000-foot overview of the system and there are many articles out there that go into important details, like store file compaction.
[Figure: EMR’s HBase architecture with S3 storage and BucketCache]

One nice property of HBase is that it guarantees linearizable consistency, i.e. if operation B started after operation A successfully completed, then operation B must see the system in the same state as it was on completion of operation A, or a newer state. That’s easy to do since each row can only be served by one region server.

Why isn’t Parquet good enough?

Many of our datasets are stored on S3 in Parquet form. Parquet is a great format for typical analytical workloads where one needs all the data for a particular subset of measurements. On the other hand, it isn’t really optimized for finding needles in haystacks; partitioning and sorting can help alleviate this issue only so much. As some of our analysts need to efficiently access the telemetry history for a very small and well-defined sub-population of our user base (think of test-pilot clients before they enrolled), a key-value store like HBase or DynamoDB fits that requirement splendidly.

HBase stores and compresses the data per column family, unlike Parquet which does the same per column. That means the system will read far more data than is actually needed if only a small subset of columns is read during a full scan. And no, you can’t just have a column family for each individual column, as column families are flushed in concert. Furthermore, HBase doesn’t have a concept of types, unlike Parquet. Both the key and the value are just bytes and it’s up to the user to interpret those bytes accordingly.

It turns out that Mozilla’s telemetry data was once stored in HBase! If you knew that, then you have been around at Mozilla much longer than I have. That approach was later abandoned as keeping around mostly un-utilized data in HDFS [...]
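The region layout described above can be sketched with a toy lookup: a row key belongs to the region with the greatest start key not exceeding it, and because exactly one region (and thus one region server) serves each row, single-row operations are easy to keep linearizable. The region names and boundary keys below are invented for illustration; real HBase resolves this through the hbase:meta table.

```python
import bisect

# Regions sorted by start key; an empty start key marks the first region.
REGIONS = [(b"", "region-1"), (b"g", "region-2"), (b"p", "region-3")]

def region_for(row_key: bytes) -> str:
    """Return the region serving row_key: the region whose start key
    is the greatest one that is <= row_key."""
    starts = [start for start, _ in REGIONS]
    # bisect_right finds the first start key strictly greater than
    # row_key; the region we want is the one just before it.
    idx = bisect.bisect_right(starts, row_key) - 1
    return REGIONS[idx][1]

print(region_for(b"client-42"))  # keys below b"g" land in region-1
print(region_for(b"zzz"))        # keys from b"p" onward land in region-3
```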

QMO: Firefox 52.0 Aurora Testday, January 20th

Thu, 12 Jan 2017 13:44:53 +0000

Hello Mozillians,

We are happy to let you know that on Friday, January 20th, we are organizing the Firefox 52.0 Aurora Testday. We’ll be focusing our testing on the following features: Responsive Design Mode and Skia Content for Windows. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Manish Goregaokar: Rust Tidbits: What Is a Lang Item?

Thu, 12 Jan 2017 05:01:13 +0000

Rust is not a simple language. As with any such language, it has many little tidbits of complexity that most folks aren’t aware of. Many of these tidbits are ones which may not practically matter much for everyday Rust programming, but are interesting to know. Others may be more useful. I’ve found that a lot of these aren’t documented anywhere (not that they always should be), and sometimes depend on knowledge of compiler internals or history. As a fan of programming trivia myself, I’ve decided to try writing about these things whenever I come across them. “Tribal Knowledge” shouldn’t be a thing in a programming community; and trivia is fun! Previously in tidbits: Box is Special. Last time I talked about Box and how it is a special snowflake. Corey asked that I write more about lang items, which are basically all of the special snowflakes in the stdlib. So what is a lang item? Lang items are a way for the stdlib (and libcore) to define types, traits, functions, and other items which the compiler needs to know about. For example, when you write x + y, the compiler will effectively desugar that into Add::add(x, y)1. How did it know what trait to call? Did it just insert a call to ::core::Add::add and hope the trait was defined there? This is what C++ does; the Itanium ABI spec expects functions of certain names to just exist, which the compiler is supposed to call in various cases. The __cxa_guard_* functions from C++'s deferred-initialization local statics (which I’ve explored in the past) are an example of this. You’ll find that the spec is full of similar __cxa functions. While the spec just expects certain types, e.g. std::type_traits (“Type properties” §), to be magic and exist in certain locations, the compilers seem to implement them using intrinsics like __is_trivial which aren’t defined in C++ code at all.
So C++ compilers have a mix of solutions here: they partly insert calls to known ABI functions, and they partly implement “special” types via intrinsics which are detected and magicked when the compiler comes across them. However, this is not Rust’s solution. It does not care what the Add trait is named or where it is placed. Instead, it knew where the trait for addition was located because we told it. When you put #[lang = "add"] on a trait, the compiler knows to call YourTrait::add(x, y) when it encounters the addition operator. Of course, usually the compiler will already have been told about such a trait since libcore is usually the first library in the pipeline. If you want to actually use this, you need to replace libcore.

Huh? You can’t do that, can you? It’s not a big secret that you can compile Rust without the stdlib using #![no_std]. This is useful in cases when you are on an embedded system and can’t rely on an allocator existing. It’s also useful for writing your own alternate stdlib, though that’s not something folks do often. Of course, libstd itself uses #![no_std], because without it the compiler will happily inject an extern crate std while trying to compile libstd and the universe will implode. What’s less known is that you can do the same thing with libcore, via #![no_core]. And, of course, libcore uses it to avoid the cyclic dependency. Unlike #![no_std], no_core is a nightly-only feature that we may never stabilize2. #![no_core] is something that’s basically only to be used if you are libcore (or you are an alternate Rust stdlib/core implementation trying to emulate it). Still, it’s possible to write a working Rust binary in no_core mode:

#![feature(no_core)]
#![feature(lang_items)]

// Look at me.
// Look at me.
// I'm the libcore now.
#![no_core]

// Tell the[...]

Mike Taylor: Report Site Issue button in Firefox Nightly

Wed, 11 Jan 2017 23:00:00 +0000

In Bug 1324062 we're landing a new button in the default hamburger menu panel in Firefox Nightly, like we did in Firefox for Android Nightly and Aurora. For now, the plan is for this feature to be Nightly-only, maybe one day graduating to Beta.

If you click it, Firefox will take a screenshot of the page you're on and open the new issue form on (you can remove the screenshot before submitting if you prefer). You can then report an issue with your GitHub account, if you have one (they're free!), or anonymously.


What this does is allow our Nightly population to more easily report compatibility issues, which in turn helps us discover regressions in Firefox, find bugs in sites or libraries, and understand what APIs and features the web relies on (standard and non-standard).

What's a compat issue? In a nutshell, when a site works in one browser, but not another. You can read more about that on the Mozilla Compat wiki page if you're interested.

If you find bugs related to this new button, or have meaningful feedback, please file a bug!

(Also if you would prefer it to live somewhere else in your UI, you can click the Customize button in the menu panel and go wild, or hide it from sight.)

The Mozilla Blog: Missing from the Trump Cabinet Nominee Hearings: Cybersecurity for Everyday Internet Users

Wed, 11 Jan 2017 19:14:07 +0000

This week, the U.S. Senate is assessing a slate of cabinet nominees for the incoming Trump administration. If confirmed, these nominees are some of the people who will shape public policy for the next several years on critical issues — including civil liberties and national security.

Members of the Senate asked a range of essential and direct questions. But cybersecurity questions were not a significant part of the discussion in the hearing for potential Attorney General Jeff Sessions, who will lead the Department of Justice, including law enforcement investigations that involve technology.

At the recent Sessions’ Senate hearings, cybersecurity was discussed chiefly in regard to government-sponsored cyberattacks. Discussion about robust cybersecurity for everyday Internet users — through practices like strong encryption — was largely absent.

Mozilla is disappointed that cybersecurity — and the stances from appointees who will need to work on it regularly — was not a priority at the Senate hearings. It would have been helpful if the Senate asked Sessions to clarify his position, and even better if they asked him to clarify that privacy and security are important for all Americans and a healthy Internet.

We need a government that openly discusses — and values — a more secure Internet for all users.

Protecting users’ privacy and security online is a crucial issue for all of us. Security protects elections, economies and our private online and offline lives. And many recent events (cyber attacks, hacks and threats by foreign governments) show that a secure Internet is currently under threat.

I recently wrote about how cybersecurity is a shared responsibility. Governments, technology companies and users need to work together to strengthen cybersecurity. Mozilla knows that even one weak link — be it technical or legislative — can break the chain of security and put Internet users at risk. The chain only remains strong if technology companies, governments and users work together to keep the Internet as secure as it can be.

You can help Mozilla stand up for a more secure Internet. We’re asking readers to pen a Letter to the Editor to their local newspaper in response to this week’s Senate hearings, and support personal security and privacy online. Get started here.

Air Mozilla: The Joy of Coding - Episode 86

Wed, 11 Jan 2017 18:00:00 +0000

(image) mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla: Weekly SUMO Community Meeting Jan. 11, 2017

Wed, 11 Jan 2017 17:00:00 +0000

(image) This is the sumo weekly call

Yunier José Sosa Vázquez: Firefox and Thunderbird channels updated

Wed, 11 Jan 2017 13:55:06 +0000

We have just updated the Firefox and Thunderbird channels in our Downloads area, and we have new versions to offer. Most of the versions are compressed, since some people have reported that they cannot download .exe and .dmg files.

Release: Firefox 50.1.0 y Thunderbird 45.6.0

Beta: Firefox 51.0.b13

Aurora/Developer Edition: Firefox 52.0a2

Nightly: Firefox 53.0a1

Go to Downloads

Matjaž Horvat: Set permissions per project

Wed, 11 Jan 2017 13:46:25 +0000

Until now, Pontoon allowed team managers to grant Translator permission on a locale level. That means permission to submit and approve translations could only be granted to users in batch – for all projects enabled for that locale.

From now on, permission can also be set for each project within a locale, allowing team managers to set default Translators for a locale, and then override them for a specific project.

Simply select a project you would like to define custom permissions for and then add additional Translators or remove ones with General Translator permission.


This feature is yet another community effort. Kudos to Jotes who implemented the entire backend piece, and Michal for contributing wireframes and filing bug 1223945!

Manish Goregaokar: Rust Tidbits: Box Is Special

Wed, 11 Jan 2017 06:59:43 +0000

Rust is not a simple language. As with any such language, it has many little tidbits of complexity that most folks aren’t aware of. Many of these tidbits are ones which may not practically matter much for everyday Rust programming, but are interesting to know. Others may be more useful. I’ve found that a lot of these aren’t documented anywhere (not that they always should be), and sometimes depend on knowledge of compiler internals or history. As a fan of programming trivia myself, I’ve decided to try writing about these things whenever I come across them. “Tribal Knowledge” shouldn’t be a thing in a programming community; and trivia is fun!

So. Box. Your favorite heap allocation type that nobody uses1. I was discussing some stuff on the rfcs repo when @burdges realized that Box has a funky Deref impl. Let’s look at it:

```rust
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: ?Sized> Deref for Box<T> {
    type Target = T;

    fn deref(&self) -> &T {
        &**self
    }
}

#[stable(feature = "rust1", since = "1.0.0")]
impl<T: ?Sized> DerefMut for Box<T> {
    fn deref_mut(&mut self) -> &mut T {
        &mut **self
    }
}
```

Wait, what? Squints

```rust
fn deref(&self) -> &T {
    &**self
}
```

The call is coming from inside the house! In case you didn’t realize it, this deref impl returns &**self – since self is an &Box<T>, dereferencing it once will provide a Box<T>, and the second dereference will dereference the box to provide a T. We then wrap it in a reference and return it.

But wait, we are defining how a Box<T> is to be dereferenced (that’s what Deref::deref is for!); such a definition cannot itself dereference a Box<T>! That’s infinite recursion. And indeed, for any other type such a deref impl would recurse infinitely. If you run this code:

```rust
use std::ops::Deref;

struct LolBox<T>(T);

impl<T> Deref for LolBox<T> {
    type Target = T;
    fn deref(&self) -> &T { &**self }
}
```

the compiler will warn you:

```
warning: function cannot return without recurring, #[warn(unconditional_recursion)] on by default
 --> :7:5
  |
7 |     fn deref(&self) -> &T {
  |     ^
  |
note: recursive call site
 --> :8:10
  |
8 |         &**self
  |         ^^^^^^^
  = help: a `loop` may express intention better if this is on purpose
```

Actually trying to dereference the type will lead to a stack overflow. Clearly something is fishy here.

This deref impl is similar to the deref impl for &T, or the Add impl for number types, or any other of the implementations of operators on primitive types. For example, we literally define Add on two integers to be their addition. The reason these impls need to exist is so that people can still call Add::add if they need to in generic code and be able to pass integers to things with an Add bound. But the compiler knows how to use builtin operators on numbers and dereference borrowed references without these impls.

But those are primitive types which are defined in the compiler, while Box is just a regular smart pointer struct, right? Turns out, Box is special. It, too, is somewhat of a primitive type. This is partly due to historical accident. To understand this, we must look back to Ye Olde days of pre-1.0 Rust (ca 2014). Back in these days, we had none of this newfangled “stability” business. The co[...]
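For an ordinary wrapper type that doesn't get Box's special treatment, the non-recursive way to write Deref is to borrow the inner field directly rather than going through `&**self`. A minimal sketch, reusing the post's hypothetical LolBox name:

```rust
use std::ops::Deref;

struct LolBox<T>(T);

impl<T> Deref for LolBox<T> {
    type Target = T;

    fn deref(&self) -> &T {
        // Borrow the wrapped value directly; writing `&**self` here
        // would call this very method again and recurse forever.
        &self.0
    }
}

fn main() {
    let b = LolBox(42);
    // Deref coercion turns &LolBox<i32> into &i32.
    assert_eq!(*b, 42);

    let s = LolBox(String::from("hi"));
    // Method calls auto-deref: &LolBox<String> -> &String -> &str.
    assert_eq!(s.len(), 2);
}
```

This is exactly the shape the compiler expects for user-defined smart pointers; only Box gets to "cheat" with a self-referential deref body.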

Cameron Kaiser: Not dead, didn't perish in an airline crash over the Pacific

Wed, 11 Jan 2017 04:57:00 +0000

Yes, I'm alive, and yes, I'm back at Floodgap orbiting headquarters. Meanwhile, candidate builds for TenFourFox 45.7 are scheduled for this weekend. Since no one has voiced any problems, the change to nglayout.initialpaint.delay mentioned in the prior post (to 100ms) will take effect. If this caused adverse issues for you, speak now, or forever hold your peace right up until you post frantic bug reports.

Mozilla Privacy Blog: Mozilla Comments on TRAI Free Data Recommendations

Wed, 11 Jan 2017 00:49:02 +0000

On December 19th, the Telecom Regulatory Authority of India (TRAI) released new recommendations on “Encouraging Data Usage in Rural Areas Through Provisioning of Free Data.” This is the latest salvo from the Indian regulator on what types of models for providing subsidized access to the internet should be permitted. While we have questions about how some of these recommendations will be implemented, we’re glad to see TRAI continuing to uphold the Data Services Regulation and interested to see how two new innovations in providing access envisioned in these recommendations will be developed. In February 2016, in its landmark Data Services Regulation, TRAI ruled that differential pricing practices (including many zero rating models) were too harmful to consumers and competition to be allowed in the market. Yet, according to the latest figures from TRAI, only 376 million of India’s 1.25 billion strong population are connected to the internet, clearly much work remains in the shared challenge to bring everyone online. To that end, TRAI’s Free Data consultation this summer contemplated additional, alternative models that might help bring all of the internet to all Indians. These latest recommendations are based on the feedback from that consultation. In many respects, TRAI’s guidance follows the recommendations that Mozilla and other partners made in our submissions. 
TRAI rightfully notes that: “Systems that make free data a feasible model for all content and ISPs, and available to the maximum addressable consumer market, are clearly the more desirable,” and adds: “any scheme for the provision of free data should meet certain basic criteria… that it should not be possible for a TSP/ISP to use discriminatory pricing of certain data content as a service differentiator.” TRAI also struck down “toll free models” which would allow a content/edge provider to subsidize the cost of accessing their website/service, and which was one of the models considered in the Free Data consultation. TRAI argued, as Mozilla and others did in our submissions, that this model would entail the same discriminatory effects as zero rating/differential pricing.

TRAI’s recommendations also include discussion of two models for providing free data. In the first, free data will be provided by third party aggregators which are “TSP agnostic” (i.e., the aggregator does not have a relationship with any individual telecommunications company). Notably, TRAI requires that the activities of the aggregators “should not be designed to circumvent the Prohibition of Discriminatory Tariffs for Data Services Regulations.” While it’s unclear how the aggregator model will work in practice, and what companies would have an incentive to offer such a service, this explicit prohibition on circumventing the ban on differential pricing should be a strong bulwark against harms to users and competition. Moreover, we’re generally supportive of additional competition in the market for internet access, which often helps to drive down prices and provide additional benefits for users.

In the second model, TRAI recommends the creation of a scheme to provide 100 MB a month to rural users for up to six months to be paid for by India’s Universal Service Obligation Fund.
This is very similar to the Klif phone model Mozilla pioneered with Orange in several sub-Saharan Africa and Middle Eastern markets whereby the user gets unlimited voice, SMS, and 500 MB of dat[...]

Daniel Stenberg: Lesser HTTPS for non-browsers

Tue, 10 Jan 2017 18:37:37 +0000

An HTTPS client needs to do a whole lot of checks to make sure that the remote host is fine to communicate with to maintain the proper high security levels. In this blog post, I will explain why and how the entire HTTPS ecosystem relies on the browsers to be good and strict, and thanks to that, the rest of the HTTPS clients can get away with being much more lenient. And in fact that is good, because the browsers don’t help the rest of the ecosystem very much to do good verification at that same level. Let me illustrate with some examples.

CA certs

The server’s certificate must have been signed by a trusted CA (Certificate Authority). A client then needs the certificates from all the CAs that are trusted. Who’s a trusted CA and how would a client get their certs to use for verification? You can say that you trust the same set of CAs that your operating system vendor trusts (which I’ve always thought is a bit of a stretch but hey, I can very well understand the convenience in this). If you want to do this as an HTTPS client you need to use native APIs in Windows or macOS, or you need to figure out where the cert bundle is stored if you’re using Linux.

If you’re not using the native libraries on Windows and macOS, or if you can’t find the bundle in your Linux distribution, or you’re in one of a large number of other setups where you can’t use someone else’s bundle, then you need to gather this list by yourself. How on earth would you gather a list of hundreds of CA certs that are used for the popular web sites on the net of today? Stand on someone else’s shoulders and use what they’ve done? Yeps, and conveniently enough Mozilla has such a bundle that is licensed to allow others to use it… Mozilla doesn’t offer the set of CA certs in a format that anyone else can really use, which is the primary reason why we offer Mozilla’s cert bundle converted to PEM format on the curl web site. The other parties that collect CA certs at scale (Microsoft for Windows, Apple for macOS, etc) do even less. Before you ask, Google doesn’t maintain their own list for Chrome. They piggyback the CA store provided on the operating system it runs on. (Update: Google maintains its own list for Android/Chrome OS.)

Further constraints

But the browsers, including Firefox, Chrome, Edge and Safari, all add additional constraints beyond that CA cert store on what server certificates they consider to be fine and okay. They blacklist specific fingerprints, they set a last allowed date for certain CA providers to offer certificates for servers, and more. These additional constraints, or additional rules if you want, are never exported nor exposed to the world in ways that are easy for anyone to mimic (in other ways than that everyone of course can implement the same code logic in their ends). They’re done in code and they’re really hard for anyone who is not a browser to implement and keep up with. This makes every non-browser HTTPS client susceptible to okaying certificates that have already been deemed not OK by security experts at the browser vendors. And in comparison, not many HTTPS clients can stack up against the amount of client-side TLS and security expertise that the browser developers have.

HSTS preload

HTTP Strict Transport Security is a way for sites to tell clients that they are to be accessed over HTTPS only for a specified time into the future, and plain HTTP should then not be used for the durati[...]
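As a concrete illustration of using that PEM bundle from a non-browser client (the download URL is the one the curl project publishes for its CA extract; treat the local file name and the example host as assumptions):

```shell
# Fetch Mozilla's CA bundle, as converted to PEM on the curl web site
curl -fsSL -o cacert.pem https://curl.haxx.se/ca/cacert.pem

# Verify a server against that explicit bundle instead of the system store
curl --cacert cacert.pem https://example.com/
```

This only gets you the base CA list, of course; none of the browsers' extra constraints (fingerprint blacklists, CA cut-off dates) come with it.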

Jared Wein: eslint updates for Firefox developers

Tue, 10 Jan 2017 15:26:25 +0000

In the past week there has been quite a lot of progress made on the eslint front. Last week I enabled the following rules for the default mozilla-central eslint configuration (/toolkit/.eslintrc): no-extra-label, no-iterator, no-regex-spaces, no-self-assign, no-unsafe-negation, no-unused-labels, object-shorthand, brace-style, no-multi-spaces, no-debugger, no-delete-var, no-sparse-arrays, no-unsafe-finally, no-cond-assign, no-extra-bind, no-useless-call, no-lone-blocks, and no-useless-return.

Mark Banner has continued to work on fixing the remaining no-undef errors. This work is on-going and is being tracked by a meta bug. Florian Quèze just landed a patch yesterday to simplify calls to so the two trailing arguments are optional. Previously 99% of the calls to the function passed in null for the trailing arguments. Florian is planning on cleaning up some addEventListener code as well and I am pushing for him to implement special eslint rules along with them to help enforce these changes going forward.

I enabled most of the rules for eslint and gathered counts of the number of errors related to each rule. The following list shows each disabled rule along with the number of associated errors as of mozilla-central revision f13abb8ba9f3:

  • array-callback-return = 3
  • no-new-func = 13
  • no-useless-concat = 14
  • no-void = 14
  • no-multi-str = 15
  • no-new-wrappers = 18
  • no-array-constructor = 20
  • no-eval = 20
  • no-await-in-loop = 21
  • no-sequences = 22
  • no-inner-declarations = 23
  • no-unmodified-loop-condition = 24
  • wrap-iife = 25
  • no-constant-condition = 28
  • no-template-curly-in-string = 39
  • no-loop-func = 44
  • no-fallthrough = 51
  • no-new = 56
  • no-throw-literal = 134
  • no-prototype-builtins = 158
  • no-caller = 165
  • no-unused-expressions = 171
  • no-useless-escape = 194
  • complexity = 208
  • no-case-declarations = 238
  • guard-for-in = 284
  • radix = 342
  • no-shadow = 356
  • no-eq-null = 442
  • dot-notation = 459
  • default-case = 485
  • block-scoped-var = 749
  • no-empty-function = 1144
  • dot-location = 2327
  • no-extra-parens = 2464
  • no-invalid-this = 2947

If you would like to work on fixing any of these, please file a bug in the Toolkit :: General component of Bugzilla and request review from myself, Mossop, or Standard8.

If you’d like eslint to run on a directory that you work in, remove the reference to it from the .eslintignore file located at the mozilla-central root and add a .eslintrc file. This will now allow eslint to scan that directory.

Also, another project that someone can pick up is to help us move towards a single rule definition. We would like to move to a single set of rules, which will help ensure consistent code styling. You can look at this listing of .eslintrc files to see the differences between them. Some define globals that are unique to the directory or have different include paths to the root configuration, but some also define extra rules. We would like to get those rules added to the root configuration, though we haven’t determined how to settle rule conflicts yet.

Tagged: eslint, firefox, mozilla, planet-mozilla [...]
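As a sketch of that "add a .eslintrc" step, a minimal config for a newly un-ignored directory might look like the following. The relative path, the global name, and the chosen rule are all illustrative assumptions, not the actual mozilla-central configuration:

```json
{
  "extends": "../../toolkit/.eslintrc",

  "globals": {
    "MyDirectoryGlobal": true
  },

  "rules": {
    "no-multi-spaces": "error"
  }
}
```

Directory-level files like this extending a single root configuration is exactly the pattern the post hopes to converge on, with extra rules migrating up to the root over time.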

David Lawrence: Happy BMO Push Day!

Tue, 10 Jan 2017 15:07:47 +0000

the following changes have been pushed to

  • [1328665] Two issues with Project Review form for RRAs
  • [1307478] Elasticsearch Indexer / Bulk Indexer
  • [1328650] Update HRBP list in Recruiting Product
  • [1209242] Can’t locate object method “_reverseoperator” via package “Bugzilla::Search” at /data/www/ line 3134.
  • [1280388] [a11y] Make the bug summary a heading level 1

discuss these changes on

(image) (image)

Mozilla Reps Community: Rep of the Month – December 2016

Tue, 10 Jan 2017 14:09:23 +0000

Join us to congratulate Srushtika as Rep of the month for December.

Srushtika is an undergraduate student in her final year. She describes herself as “Tech-Speaker at Mozilla that loves speaking and advocating new technologies that could change the way we spend our lives.” But she is so much more than that. During the last few months she has been working along with Ram on building the local Indian WebVR community. She has also created MozActivate best practices, and she is working on an intro guide for newbies at WebVR events based on the Rust guides.


Moreover, she is heavily involved on shaping the Campus program and suggesting activities for campus students. All the above gained her a mention on the VR/AR inspirations of 2016 blogpost. When she is not studying or contributing to VR, Srushtika is helping the privacy month team from India on advocating about privacy in social media. Check them out on #privacymonth.

Congratulations Srushtika, you’re a true inspiration to all of us. Keep on rocking! Please join us in congratulating her over at Discourse!

Gervase Markham: Modern Communications

Tue, 10 Jan 2017 11:33:48 +0000

I just sent something very like the following to someone buying a house from me:

This text is to tell you that I just emailed you a PDF copy of the fax my solicitor just sent your solicitor, containing the email he originally sent last week which your solicitor claimed he didn’t get, plus the confirmation that the fax was received.


Brian Birtles: Web animation in 2017

Tue, 10 Jan 2017 08:11:03 +0000

Happy new year! As promised I thought I’d share a few of the Web animation things I’m looking forward to in 2017. I’m terrible at predicting the future (I used to be a believer in BeOS and VRML) so this is mostly based on what is already in motion.

Specs:

  • CSS transitions – this should move to CR status soon. Part of that involves splitting off a separate Timing Functions spec. That separate spec would give us: in Level 1, an additional frames() timing function to do what step-end and step-start should have done in the first place; in Level 2, low-level syntax for export of complex timing functions (multi-segment béziers?), spring timing functions, script-defined timing functions, and perhaps even timing functions that affect the duration of animations.
  • CSS animations – this too should move to CR soon. All that is really missing is some clarification about the liveness of values and some text about how @keyframes rules cascade. Then we can start work on new things in level 2 like animation-composition.
  • Web animations – this too is approaching CR and I hope we can ship (most of) the remainder of the API in the first half of this year in Firefox and Chrome. For that we still need to: add a FillAnimation concept to allow browsers to compact finished but filling animations so they don’t keep consuming memory (this is a bit hard, but seems do-able); simplify the timing interfaces to use fewer live objects and make the interface consistent with keyframe interfaces (I hope this will simplify the implementation for Edge and Safari too); add compositeBefore and compositeAfter methods to control how animations combine and overlap; replace SharedKeyframeList with StylePropertyMaps from Houdini; and integrate a few tweaks to make specifying keyframes more flexible. I’m looking forward to shipping additive animation soon since it helps with a lot of use cases, but it really needs FillAnimation first. getAnimations is also exciting—being able to inspect and manipulate CSS animations and transitions from the same API—but probably won’t ship until the second half of the year when we have the mapping between CSS and Web Animations clearly specified. Being able to ship the finished and ready promise would be great but was blocked on cancelable promises being realized and now it’s not clear what will happen there.
  • Scroll-driven animations – This is going to take quite a bit of work to get right, but hopefully within this year we can start shipping parts of it so you can create hidey bars and parallax effects that run smoothly on the compositor.
  • AnimationWorklet – This is also going to take time to make sure it plays well with the other animation pieces in the platform, but fortunately the Chrome folks pushing it have been very responsive to feedback and admirable in their willingness to rethink ideas.

At Mozilla, apart from editing and implementing the above specs, some of the bigger animation items I anticipate this year include:

  • Making our current animation features work with Quantum CSS (introduction), i.e. Servo’s style engine. This involves a lot of tricky plumbing but it means Firefox gets faster and Servo gets more spec compliant.
  • CSS offset (aka CSS motion). We’ve been putting this off for a while as the spec stabilizes but I hope this year we will actu[...]

This Week In Rust: This Week in Rust 164

Tue, 10 Jan 2017 05:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR. Updates from Rust Community News & Blog Posts The Rust programming language, in the words of its practitioners. Replacing the jet engine while still flying. Project Quantum: An effort to incrementally add Servo technologies in Firefox. Exploring double faults. Part of the series Writing an OS in Rust. Announcing Alacritty, a GPU-accelerated terminal emulator. Piston: The image library is now pure Rust. A guide to compiling Rust to WebAssembly. Should you convert your C project to Rust? american fuzzy lop’ing Rust. american fuzzy lop is a fuzzer that employs genetic algorithms in order to efficiently increase code coverage of the test cases. Inner workings of a search engine written in Rust. SoA (Structure of Arrays) in Rust with macros 1.1. Rust makes implicit invariants explicit. The Rust module system is too confusing. Rust at OneSignal. OneSignal shares its experience of using Rust in production for more than a year. Librsvg 2.41.0 requires Rust. Other Weeklies from Rust Community This week in Rust docs 38. These weeks in Servo 86. [video] Ferris makes emulators 14. Crate of the Week This week's Crate of the Week is trust, a Travis CI and AppVeyor template to test your Rust crate on 5 architectures and publish binary releases of it for Linux, macOS and Windows. Thanks to Vikrant for the suggestion! Submit your suggestions and votes for next week! Call for Participation Always wanted to contribute to open-source projects but didn't know where to start? 
Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information. rust: Make Rust on wasm + emscripten a reliable, 1st class Rust target. [easy] rust: Rvalue static promotion. [easy] Diesel: Refactorings using macros in type position. [easy] Diesel: Deny missing docs. android-rs-glue: Add more arguments and use clap to parse the arguments. tokei: Add package repositories. RustCrypto/hashes: Missing hash functions. RustCrypto/block-ciphers: Missing block ciphers. If you are a Rust project owner and are looking for contributors, please submit tasks here. Updates from Rust Core 112 pull requests were merged in the last week. This contains a good number of plugin-breaking changes. mem::transmute::(_) alignment fixed writeln!() more specific errors with str slices Impl From<{Ipv4Addr, Ipv6Addr}> for IpAddr placement-in for Vec: Vec.place_back() <- _ BinaryHeap::peek_mut().pop() missing feature gate for {std, core}::{i128, u128} stability check only public items (fixes ICE) fix stack overflow when promoting MIR terminators associated types fixed in Copy implementation stabilize Macros 1.1 improved diagnostics for Macros 1.1 fix handling of empty types in patterns fix regression with doubly exported macro rules const_eval [...]

James Long: A Prettier JavaScript Formatter

Tue, 10 Jan 2017 00:00:00 +0000

Today I am announcing prettier, a JavaScript formatter inspired by refmt with advanced support for language features from ES2017, JSX, and Flow. Prettier gets rid of all original styling and guarantees consistency by parsing JavaScript into an AST and pretty-printing the AST. Unlike eslint, there aren't a million configuration options and rules. But more importantly: everything is fixable. I'm excited to have time for my own open-source work now that I've left Mozilla, so this is my way of kicking off 2017.

Here's a live demo. Note the JSX and Flow support. You can type anything into the editor below and it will format it automatically. The maximum line length here is 60. The formatted version:

Many of you know that I usually don't use JSX when writing React code. Over a month ago I wanted to try it out, and I realized one of the things holding me back was poor JSX support in Emacs. Emacs has great support for automatically indenting code; I never manually indent anything. But this doesn't work with JSX, and when I looked around at other editors, I found similar problems (other editors are generally worse at forcing correct indentation). Around the same time I had been using Reason, which provides a refmt tool which automatically formats code. I was hooked. It removes all the distractions of writing code; you can write it however you like and instantly format it correctly. I realized this would not only solve my JSX problem, but provide a tool for enforcing consistent styles across teams no matter what editor is used. If computers are good at anything, they are good at parsing code and analyzing it.

So I set out to make this work, and prettier was born. I didn't want to start from scratch, so it's a fork of recast's printer with the internals rewritten to use Wadler's algorithm from "A prettier printer". Why did I choose this algorithm? First let's look at why none of the existing style tools really work.

There's an extremely important piece missing from existing styling tools: the maximum line length. Sure, you can tell eslint to warn you when you have a line that's too long, but that's an after-thought (eslint never knows how to fix it). The maximum line length is a critical piece the formatter needs for laying out and wrapping code. For example, take the following code:

```javascript
foo(arg1, arg2, arg3);
```

That looks like the right way to format it. However, we've all run into this situation:

```javascript
foo(reallyLongArg(), omgSoManyParameters(), IShouldRefactorThis(), isThereSeriouslyAnotherOne());
```

Suddenly our previous format for calling functions breaks down because this is too long. What you would probably do is this instead:

```javascript
foo(
  reallyLongArg(),
  omgSoManyParameters(),
  IShouldRefactorThis(),
  isThereSeriouslyAnotherOne()
);
```

This clearly shows that the maximum line length has a direct impact on the style of code we desire. The fact that current style tools ignore this means they can't really help with the situations that are actually the most troublesome. Individuals on teams will all format these differently according to their own rules and we lose the consistency we sought. Wadler's algorithm described in the paper is a simple constraint-based layout system for code. It "measures" code and will b[...]
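The core decision described above — try the flat layout first, and break one-argument-per-line only when it exceeds the maximum line length — can be sketched in a few lines. This is an illustration of the idea, not prettier's actual API; the function name and the 60-column default are assumptions:

```javascript
// Lay out a function call: flat if it fits within maxWidth,
// otherwise one argument per line.
function formatCall(name, args, maxWidth = 60) {
  const flat = `${name}(${args.join(", ")});`;
  if (flat.length <= maxWidth) {
    return flat;
  }
  return `${name}(\n${args.map(a => `  ${a}`).join(",\n")}\n);`;
}

console.log(formatCall("foo", ["arg1", "arg2", "arg3"]));
// foo(arg1, arg2, arg3);

console.log(formatCall("foo", [
  "reallyLongArg()",
  "omgSoManyParameters()",
  "IShouldRefactorThis()",
  "isThereSeriouslyAnotherOne()",
]));
```

Wadler's algorithm generalizes exactly this fits-or-breaks choice to arbitrarily nested structures, which is what a naive line-length lint rule cannot do.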

Christian Heilmann: 7 tricks to have very successful conference calls

Mon, 09 Jan 2017 23:07:03 +0000

I work remotely and with a team eight hours away from me. Many will be in the same boat, and often the problem with this is that your meetings are late at night your time, but early for the others. Furthermore, the other team meets in a room early in the morning. This either means that they are fresh and bushy tailed or annoyed after having been stuck in traffic. Many different moods and agendas at play here. To avoid this being a frustrating experience, here are seven tips any team in the same situation should follow to ensure that everyone involved gets the most out of the conference call:

  • Be on time and stick to the duration – keep it professional – of course things go wrong, but there is no joy in being in a hotel room at 11pm listening to 6 people tell each other that others are still coming as they are “getting a quick coffee first”. It’s rude to waste people’s time. The meeting time should be information and chats that apply to all, regardless of location and time. You can of course add a social part before or after the meeting for the locals.
  • Have a meeting agenda and stick to it – that way people who have a hard time being part of the meeting due to time difference can decline to come to the meeting, and this may make it shorter.
  • Have the agenda editable by everyone during the meeting – this way people can edit and note down things that have been said. This is beneficial as it acts as a script for those who couldn’t attend and it also means that you can ensure people remotely on the call are on the ball and not watching TV.
  • Introduce yourself when you speak and go close to the mic – for people dialing in, this is a feature of the conference call software, but when 10 people in a room speak, remote employees who dialed in have no idea what’s going on.
  • Avoid unnecessary sounds – as someone dialing in, mute your microphone. Nobody needs your coughing, coffee sipping, or – at worst – typing sounds on the conference call. As someone in the room, don’t have conversations with others next to the microphone. Give the current presenters the stage they deserve.
  • Have a chat window open – this allows people to post extra info or give updates when something goes wrong. It is frustrating to speak when nobody hears you and you can’t even tell them that it doesn’t work. A text chat next to the conf call hardly ever fails to work and is a good feedback mechanism.
  • Distribute presenter materials before the call – often presenting a slide deck or web product over Skype or others fails for various reasons, or people dialing in are on a very bad connection. If they have the slide deck locally, they can watch it without blurs and delays.

Using these tricks you end up with a call that results in a documented agenda you can send to those who couldn’t attend. You can also have an archive of all your conf calls for reference later on. Of course, you could just record the sessions, but it is much more annoying to listen to a recording and it may be tough to even download them for remote attendees on bad connections. By separating the social part of the meeting from the official one you still have the joy of meeting in the mornings with[...]

Joel Maher: Project Stockwell – January 2017

Mon, 09 Jan 2017 21:21:58 +0000

Every month this year I am planning to write a summary of Project Stockwell.

Last year we started this project with a series of meetings and experiments.  At the Mozilla all-hands event in Hawaii, we presented an overview of our work and the path forward.

With that said, we will be tracking two items every month:

Week of Jan 02 -> 09, 2017

Orange Factor 13.76
# High Frequency bugs 42

What are these high frequency bugs:

  • linux32 debug timeouts for devtools (bug 1328915)
  • Turning on leak checking (bug 1325148) – note, we did this Dec 29th and whitelisted a lot, still much exists and many great fixes have taken place
  • some infrastructure issues, other timeouts, and general failures

I am excited for the coming weeks as we reduce the Orange Factor back down below 7 and get the number of high frequency bugs below 20.

Outside of these tracking stats there are a few active projects we are working on:

  • adding BUG_COMPONENTS to all files in m-c (bug 1328351) – this will allow us to match up triage contacts for each component, so test case ownership has a path to a live person
  • retrigger an existing job with additional debugging arguments (bug 1322433) – easier to get debug information, possibly extend to special runs like ‘rr-chaos’
  • add |mach test-info| support (bug 1324470) – allows us to get historical timing/run/pass data for a given test file
  • add a test-lint job to linux64/mochitest (bug 1323044) – ensure a test runs reliably by itself and in --repeat mode

While these seem small, we are also actively triaging all bugs that are high frequency (>=30 times/week).  In January, triage means letting people know a bug is high frequency and trying to add more data to the bug.
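The two tracked numbers lend themselves to a tiny illustration. Below is a minimal sketch (with entirely made-up data; this is not the real OrangeFactor tooling) of how an Orange-Factor-style metric – roughly, intermittent failures per push – and a high-frequency bug count could be derived:

```python
# Hypothetical data: (bug id, intermittent failure count this week).
# Neither the numbers nor the formula come from the post itself.
weekly_failures = [(1328915, 160), (1325148, 95), (1300000, 12), (1299999, 40)]
pushes_this_week = 21  # made-up push count

# Orange Factor is (roughly) intermittent failures per push.
total_failures = sum(count for _, count in weekly_failures)
orange_factor = total_failures / pushes_this_week

# The post treats >= 30 failures/week as "high frequency".
high_frequency = [bug for bug, count in weekly_failures if count >= 30]

print(round(orange_factor, 2), len(high_frequency))
```

Tracking both numbers this way makes the goals concrete: the first target is the ratio dropping below 7, the second is the length of the high-frequency list dropping below 20.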



Air Mozilla: Mozilla Weekly Project Meeting, 09 Jan 2017

Mon, 09 Jan 2017 19:00:00 +0000

The Monday Project Meeting

Air Mozilla: Réforme du droit d'auteur pour le 21e siècle / Copyright Reform for the 21st Century: Paris, France

Mon, 09 Jan 2017 18:30:00 +0000

Copyright reform for the 21st century. After almost a decade, the European Commission has presented its bill on the reform of...

Air Mozilla: Révélations Snowden: ce que ça change pour vous

Mon, 09 Jan 2017 18:13:00 +0000

Snowden revelations: what they change for you

QMO: Firefox 51 Beta 12 Testday Results

Mon, 09 Jan 2017 15:24:12 +0000

Hi everyone! Last Friday, January 6th, we held the first testday of this year – the Firefox 51 Beta 12 Testday. It was yet another successful event (please see the results section below), so a big Thank You goes to everyone involved.

First of all, many thanks to our active contributors: Vuyisile Ndlovu, Moin Shaikh, P Avinash Sharma, Ilse Macías. Bangladesh team: Maruf Rahman, Humayra Khanum, Jobayer Ahmed Mickey, Md. Almas Hossain, Raihan Ali, Iftekher Alam, Tariqul Islam Chowdhury, Saima Sharleen, Md.Tarikul Islam Oashi, Toki Yasir, Majedul islam Rifat, Kazi Nuzhat Tasnem, Rezwana Islam Ria, Aminul Islam Alvi and Tanvir Rahman. India team: Subhrajyoti Sen, Baranitharan, Aishwarya.B, Deepak Chandh, Roshan Dawande, Vishnupriya .V, Selva Makilan, Rajesh D, SriSailesh, R.Krithika Sowbarnika, P Avinash Sharma, Sakshi Prajapati, Sankaraman, Sriram, Surentharan .R.A, Nagaraj.V, Pavithra.R, Paarttipaabhalaji, Kavya Kumaravel, Vinothini, Satchidanandam.M, Karthikeyan S, Dhevendhiran, Kavipriya and Dinesh Kumar.

Secondly, a big thank you to all our active moderators.

Results: several testcases executed for the WebGL2, Flash Support and Zoom Indicator features. 8 bugs verified: 1170955, 1120967, 1310737, 1297890, 1301788, 1301999, 1300562, 1296870. 7 new bugs filed: 1329292, 1329293, 1329204, 1328559, 1329430, 1329421, 1168422.

Third, some tests that failed need more information from you. I left a few comments in the etherpads, so please provide that information so we can see whether we should log bugs on those or not. Again, thanks for another hugely successful testday! We hope to see you all at our next events – all the details will be posted on QMO! Happy new year![...]

Karl Dubost: [worklog] Edition 049 - Hello 2017

Mon, 09 Jan 2017 13:44:00 +0000

Welcoming a new webcompat team member, Ola, on our webcompat teleconf. The Japanese engineering team has moved to a new office near Tokyo station. A lot less fancy than the beautiful space near the National Arts Center, but a new space for discovery and exploration.

webcompat issues Yet another regression related to the combination of -moz-appearance: none and background: 0 blocking the checkmark on an input element. Strange issue with AbemaTV where the user agent sniffing is not always working. Flickr and Deezer are breaking because of a change in Firefox on the global property. And Jira. The Mazda Japanese Web site on mobile delivers a different, broken experience through user agent sniffing.

dev Proposed a pull request for issue 1261 on redesigning the comment area. Proposed a pull request for issue 722 on uploading two images. Merged. But forgot the tests.

Miscellaneous Reading about OKR and laughing hard: “The Objective is designed to get people jumping out of bed in the morning with excitement. And while CEOs and VCs might jump out of bed in the morning with joy over a three percent gain in conversion, most mere mortals get excited by a sense of meaning and progress. Use the language of your team. If they want to use slang and say “pwn it” or “kill it,” use that wording.” Where is the humanity? Where is the individual accomplishment? And the collective realization of good? The thing which worries me most is not the OKR method (it might be pretty good), but the pseudo-intellectual mush given to managers. Instead, if you are a manager, go read Citadelle by Saint-Exupéry, for example, or go on a long hike in the mountains with another person. Anything which builds your understanding of humankind. Otsukare![...]

Christian Heilmann: Taking my G-Ro for a spin…

Mon, 09 Jan 2017 13:23:32 +0000

Almost two years ago the G-Ro travel bag kickstarter did the rounds, and all of us travelers pricked up our ears. It sounded revolutionary: a really cool bag that is a mix of carry-on and laptop bag. Its unique physics and large wheels promise easy travel, and the in-built charger for mobiles and laptops seems excellent. As with many kickstarters, this took ages to, well, kick in, and by the time mine arrived my address had already changed. They dealt with this easily though, and on this last trip I took the cool bag for its first spin. Now, a caveat: if you use the bag the way it is intended, I am sure it performs admirably. The problem I find is that the use case shown in the videos isn’t really one that exists for an international traveler.

Let’s start with the great things about the G-Ro:

  • It looks awesome. Proper Star Trek stuff going on there.
  • It does feel a lot lighter when you roll it compared to other two-wheeled rollers. The larger wheels and the higher axle point make physical sense.
  • It comes with a lot of bags for the interior to fold shirts and jackets, and lots of clever features. Once you spend the time to go through the instructions on the kickstarter page you’ll find more and more clever bits in it.
  • The handle is sturdy and the right length to pull.
  • It is less of a danger to other travelers, as all in all the angle you use it at is steeper, so you use less space walking. However, it is still worse than a four-wheeled bag you push at your side. People still manage to run into the G-Ro at airports.

Now, for a weekend trip with a few meetings and a conference, this thing surely is cool and does the job. However, on my 4-day trip with two laptops and a camera it turns out to be just not big enough, and the laptop bag is measured for only one laptop, without even a sensible space for the chargers.

Here are the things that miffed me about the G-Ro:

  • Whilst advertised as the correct size for every airline to be a carry-on, the G-Ro is big and there are no straps to make it thinner. This is what I like about my The North Face Overhead Carry on Bag. It means that on an Airbus in Business Class, the G-Ro is a tight fit, both in height and length. As most airlines ask you to put your coat on top of your bag, this is a no-go.
  • The easy-access bag on the front for your liquids and gels is flat and big, but the problem with liquids and deodorant/perfume bottles is that they are bulkier and less wide than that. This easy-access bag would be much better as another laptop/tablet holder. With your liquids in that bag, the G-Ro looks bulky and you’re sure to bump the top of the overhead compartment with your liquids. Basically there is a good chance of accidental spillage. A bag on the side, or a wider one on the back, would make more sense.
  • The bag in the back in between the handle bars is supposed to be for your wallet and passport, and thus works as an advertisement for pick-pockets. I used it for the chargers of my laptops instead, and that’s actually pre[...]

Karl Dubost: Web Compatibility Talk at Tech in Asia Jakarta 2016

Mon, 09 Jan 2017 13:20:00 +0000

The setup

Last week, I went to the conference Tech in Asia Jakarta 2016. Digressing… This URL will not be relevant for the 2016 content next year. It is a weak URL, already rusty from the start. A good way to fix this is to set a temporary redirection from to so people have the right link in their bookmarks. Once the conference is finished, you can start redirecting to 2017. I was put in touch with David Corbin by Sandra Persing and Dietrich Ayala to talk about Web Compatibility on the Developer stage. The conference has a strong marketing and product placement agenda. This is not the usual crowd I venture into, but it's always interesting to have a different view on how people conceive and foresee the Web. And we have known for a long time that Web Compatibility is about participating.

Jakarta

Jakarta is a vibrant city bustling with the sounds of motorbikes. Buildings are growing everywhere. The city is being modernized at a fast pace, and as in many other cities going through these transformations, it is socially and ecologically violent. The Guardian has full live coverage of Jakarta this week. The street food is amazing and quality coffee places are not hard to find. The population is young, which contributes to the dynamism of the city and the startup ecosystem.

Web Compatibility talk

First talk of the morning; I was not expecting much, but wow, what a crowd. Ratri Chibi introduced me and I tried to give an overview of Web compatibility in 25 minutes.

The context

For my talk I usually try to connect to something the audience can relate to. This time I used the Indonesian mask culture (topeng) in most of my slides. Barong is a good spirit (on the left), while Rangda is a demon (on the right). The Web compatibility work is very much a battle between the good and evil ways of doing things. But we all have to remember that, as in the world of spirits, the daily reality is a lot more complex than just being right or wrong. Rangda is an evil force and… a protective force at the same time.

What do we mean when we talk about the Web? The Web is a space where a person should be able to use whatever device and browser of their choice to access and interact with the content. The Web has been instrumental in making information cheaply accessible anywhere, at anytime, everywhere on earth. It enlarged the ability of individuals to publish content. A couple of years ago, at Mozilla, we tested the top 1000 Web sites in Japan and China on Firefox for Android. We quickly realized that around 20% of these sites were broken in some way, to the point of being completely unusable. The common issues were related to WebKit properties (CSS and DOM) such as old flexbox, gradient, transition, background-size, etc. Sometimes sites were relying on old non-standard properties implemented by other browsers but not Firefox, such as innerText (the standard keyword is textContent). In this image of[...]

Daniel Stenberg: DMARC helped me ditch gmail

Mon, 09 Jan 2017 08:47:10 +0000

I’ve been a gmail user for many years (maybe ten). Especially since the introduction of smart phones, it has been a really convenient system for reading email on the go. I rarely respond to email from my phone, but I’ve done that occasionally too and it has worked adequately. All this time I’ve used my own domain and email address and simply forwarded a subset of my email over to gmail, and I had gmail set up so that when I emailed out from it, it would use my own email address and not the gmail one. Nothing fancy, just convenient. The gmail spam filter is also pretty decent, so it helped me filter off some amount of garbage too.

It was fine until DMARC

However, with the rise of DMARC over recent years, and with Google insisting on getting on that bandwagon, it has turned out to be really hard to keep forwarding email to gmail (since gmail considers forwarded emails using such headers fraudulent and rejects them). So a fair amount of email simply never showed up in my gmail inbox (and instead caused the senders to get a bounce from a gmail address they didn’t even know I had). I finally gave up and decided gmail doesn’t work for this sort of basic email setup anymore. DMARC and its siblings have quite simply made it impossible to work with email this way, a way that has been functional for decades (I used similar approaches already back in the mid 90s in my first few jobs). Similarly, DMARC has turned out to be a pain for mailing lists, since they too forward email in a similar fashion, and this causes the DMARC police to go berserk. Luckily, recent versions of mailman have options that make it rewrite the From:-lines from senders whose domains have strict DMARC policies. That mitigates most of the problems for mailman lists.
I love the title of this old mail on the subject: “Yahoo breaks every mailing list in the world including the IETF’s”. I’m sure DMARC works for the providers in the sense that it blocks huge amounts of spam and fake users, and that’s what it was designed for. The fact that it also makes ordinary old-school mail forwards really difficult, and forces mailing list admins all over to upgrade mailman or just keep getting rejects because they use mailing list software that lacks the proper features, is probably all totally ignored. DMARC works as designed: it reduces spam at the big providers’ systems. Mission accomplished. The fact that they at the same time made worldwide Internet email a lot less useful is probably not something they care about.

It’s done

Gmail can read mails from remote inboxes, but it doesn’t support IMAP (only POP3), so simply switching to such a method wouldn’t even work: I just refuse to enable POP3 anywhere again. Of course it isn’t an irreversible decision, but I’ve stopped the forward to gmail, cleared the inbox there, and instead switched to Aqua Mail on Android. It seems fairly feature complete and snappy. It isn’t quite as fancy and cool [...]
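The mailman mitigation mentioned above – rewriting From: for senders whose domains publish strict DMARC policies – can be sketched in a few lines of Python. This is an illustration of the idea, not mailman's actual code, and the record string would normally come from a DNS TXT lookup on _dmarc.<domain>:

```python
def dmarc_policy(txt_record):
    """Extract the p= (policy) tag from a DMARC TXT record string."""
    tags = dict(
        part.strip().split("=", 1)
        for part in txt_record.split(";")
        if "=" in part
    )
    return tags.get("p", "none")

def munge_from(from_header, txt_record, list_addr):
    """Rewrite a 'Name <user@domain>' From: header when the domain is strict."""
    if dmarc_policy(txt_record) in ("quarantine", "reject"):
        name = from_header.split("<")[0].strip() or from_header
        return f"{name} via list <{list_addr}>"
    return from_header

# A strict policy triggers the rewrite; p=none leaves the header alone.
strict = "v=DMARC1; p=reject; rua=mailto:aggregate@example.com"
relaxed = "v=DMARC1; p=none"
print(munge_from("Daniel <daniel@example.com>", strict, "list@example.org"))
print(munge_from("Daniel <daniel@example.com>", relaxed, "list@example.org"))
```

A list that applies this kind of rewrite passes the receiver's DMARC alignment check, because the From: domain is now the list's own – at the cost of obscuring the original author, which is exactly the trade-off lamented here.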

The Servo Blog: These Weeks In Servo 87

Mon, 09 Jan 2017 00:30:00 +0000

In the last few weeks, we landed 104 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online. Plans for 2017 (including Q1) will be solidified in the coming week. Please check it out and provide feedback! This week’s status updates are here.

Notable Additions

  • gw added non-square texture page sizes to the renderer.
  • UK992 fixed some packaging issues breaking the macOS nightly builds.
  • nox corrected the way text nodes get added to documents during parsing.
  • UK992 enabled using ccache on Appveyor builds.
  • mrnayak implemented support for subresource integrity checks.
  • charlesvdv enabled setting numeric preferences from the command line.
  • bzbarsky made per-document styles possible in Stylo.
  • anholt implemented an overload of the WebGL bufferData API.
  • jdm fixed an incorrect script/layout interaction preventing logging into many Google applications.
  • Ms2ger implemented the “entry global” specification concept.
  • emilio redesigned the interactions between the style system and media queries.
  • Manishearth implemented the @supports directive for CSS.
  • bholley improved performance of manipulating threadsafe RefCells.
  • Manishearth added better documentation to all CSS properties.
  • wafflespeanut made the behaviour of the input DOM event match the specification.
  • cynicaldevil implemented the missing Document overload for the XMLHttpRequest API.
  • MortimerGoro implemented the WebVR API.
  • asajeffrey made Servo not retain every web page that it has ever loaded in the current session.
  • paulrouget fixed the problem preventing the brew nightly formula from working.
  • asajeffrey avoided the problem of creating many RNGs that would eventually exhaust available file descriptors.

New Contributors

  • Dowon Cha
  • Frederick F. Kautz IV
  • Josh Holmer
  • Jure Podgoršek
  • Konstantin Veretennicov
  • charlesvdv
  • mrnayak

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

No screenshots.[...]

Support.Mozilla.Org: The Big SUMO Report: 2016

Sun, 08 Jan 2017 03:04:18 +0000

Hey there, SUMO Nation! Since 2017 has finally arrived (a while ago), our team wanted to share a few numbers and remarks with you. This report is a bit delayed, but we hope it will prove to be worth the extra few days’ wait. We are on the brink of probably the biggest change to the “way things are” around SUMO in recent years, so taking a good look back is a great way to make sure we stay focused on the road ahead. 2016 has not been an easy year for many members of our community, for many reasons. Fortunately, we all persevered and managed to come out stronger / wiser / more prepared on this side of the calendar. We will definitely keep working together on many aspects of our site, our community, and our presence in Mozilla’s mission. Our small core team counts on your presence and strength, and you can count on our support. You rock the helpful web.

As for all the data you will see on this page and the ones it links to: remember that while we use some of these numbers to prepare plans for the future and put them into reality, they are just numbers – and as such cannot and will not fully represent the passion, the values, the talent and effort behind every single angry user turned into a happy one thanks to your dedication to making the open web better and more helpful. For all that we thank you from the bottom of our hearts. Now, let’s get on with the show!

SUMO 2016 – the major facts

We joined forces with the Marketing Team, becoming a part of MarComms, and working more closely with the Mozillians at the heart of Social and PR projects. With Firefox 46, we started publishing SUMO Release Reports, which met with a very positive response from all sides of Mozilla: Firefox 46, Firefox 47, Firefox 48, Firefox 49, Firefox 50. Kitsune, our home-made support platform, went through a LOT of changes, the final of which was the decision to replace it with an external solution, due to our lovely developer team being reassigned to other parts of the Mozilla project.
We spent the better part of the second half of the year investigating the next generation of SUMO tech. We should be using the new platform any day now… Social Support and Army of Awesome morphed into two different beasts this year. While trying out a new tool (Sprinklr), a robust social engagement tool used for brand marketing, we engaged many new people, but it took us some time to refine our goals. We started to onboard with a goal of 25 active volunteers and soon found the tool overwhelming for just a few volunteers. Once we moved to another tool (Respond), a new community was created from the retiring Army of Awesome and the 4 main Mozilla Brand Social accounts for English, Spanish, Portuguese, and French languages. After transitioning to the new support tool, 33 people received training and the response rate went up from 10% to 33%. The Knowledge Base articles explaining how to contribute were revised, rewritten[...]

Daniel Stenberg: My talks at FOSDEM 2017

Sat, 07 Jan 2017 15:25:12 +0000

I couldn’t even recall how many times I’ve done this already, but in 2017 I am once again showing up to talk in the cold and grey city called Brussels, at the lovely FOSDEM conference. (Yes, it is cold and grey every February, trust me.) So I had to go back and count, and it turns out 2017 will be my 8th straight visit to FOSDEM, and I believe it is the 5th year I’ll present there.

First, a reminder about what I talked about at FOSDEM 2016: An HTTP/2 update. There’s also a (rather low quality) video recording of the talk to see there. I’m scheduled for two presentations in 2017, and this year I’m breaking new ground for myself as I’m doing one of them on the “main track”, which is (according to me) the most prestigious track, held in one of the biggest rooms – seating more than 1,400 persons.

You know what’s cool? Running on billions of devices

Room: Janson, time: Saturday 14:00

Thousands of contributors help build the curl software, which runs on several billion devices and affects every human in the connected world daily. How this came to happen, who contributes, and how Daniel at the wheel keeps it all together. How a hacking ring is actually behind it all and who funds this entire operation.

So that was HTTP/2, what’s next?

Room: UD2.218A, time: Saturday 16:30

A shorter recap of what HTTP/2 brought that HTTP/1 couldn’t offer, before we dig in and look at some numbers that show how HTTP/2 has improved (browser) networking and the web experience for people. Still, there are scenarios where HTTP/1’s multiple connections win over HTTP/2 in performance tests. Why is that, and what is being done about it? Wasn’t HTTP/2 supposed to be the silver bullet? A closer look at QUIC, its promises to fix the areas where HTTP/2 didn’t deliver, and a check on where it is today. Is QUIC perhaps actually HTTP/3 in everything but the name?
Depending on what exactly happens in this area over time until FOSDEM, I will spice it up with more details on how we work on these protocol things in Mozilla/Firefox. This will become my 3rd year in a row that I talk in the Mozilla devroom to present the state of the HTTP protocol and web transport.[...]

Gervase Markham: Support the Software Freedom Conservancy

Sat, 07 Jan 2017 11:43:47 +0000

The Software Freedom Conservancy is an organization which provides two useful services.

Firstly, they provide “fiscal sponsor” services for free software projects which wish to benefit from being a non-profit but which do not have the resources to set up their own Foundation. They have over 35 member projects which they support. If you use WINE, Samba, Mercurial, Inkscape, Git or any of the others, you can thank and support those projects by supporting SFC.

Secondly, if you believe that copyleft has a role (and it doesn’t even have to be an exclusive role) to play in the free software licensing ecosystem, you have an interest in making sure that copyleft licenses do not de facto become the same as permissive ones. That requires working with companies to help them understand their quid pro quo obligations to share and, rarely, taking them to court when flagrant violations are not corrected after significant time. The SFC is basically the only organization which does this valuable work, and that fact makes companies (sadly) less likely to support it.

This means that SFC greatly relies on support from individuals. I have just re-committed as a supporter for 2017 and I hope many of my readers will do the same.


Christopher Arnold

Sat, 07 Jan 2017 02:16:01 +0000

At last year’s Game Developers Conference I had the chance to experience new immersive video environments that are being created by game developers releasing titles for the new Oculus, HTC Vive, and Google Daydream platforms. One developer at the conference, Opaque Multimedia, demonstrated "Earthlight", which gave the participant an opportunity to crawl on the outside of the International Space Station as the earth rotated below. In the simulation, a Microsoft Kinect sensor was following the position of my hands, but what I saw in the visor was that my hands were enclosed in an astronaut’s suit. The visual experience was so compelling that when my hands missed the rungs of the ladder I felt a palpable sense of urgency, because the environment was so realistically depicted. (The space station was rendered as a scale model of the actual space station using the "Unreal" game physics engine.) The experience was far beyond what I’d experienced a decade ago with crowd-sourced simulated environments like Second Life, where artists create 3D worlds in a server-hosted environment that other people can visit as avatars. Since that time I’ve seen some fascinating demonstrations at Mozilla’s Virtual Reality developer events. I’ve had the chance to witness a 360 degree video of a skydive, used the WoofbertVR application to visit real art gallery collections displayed in a simulated art gallery, spectated a simulated launch and lunar landing of Apollo 11, and browsed 360 photography depicting dozens of fascinating destinations around the globe. This is quite a compelling and satisfying way to experience visual splendor depicted spatially. With the New York Times and IMAX now entering the industry, we can anticipate an incredible wealth of media content to take us to places in the world we might never have a chance to go.

Still, the experiences of these simulated spaces seem very ethereal, which brings me to another emerging field.
At Mozilla Festival in London a few years ago, I had a chance to meet Yasuaki Kakehi of Keio University in Japan, who was demonstrating a haptic feedback device called Techtile.  The Techtile was akin to a microphone for physical feedback that could then be transmitted over the web to another mirror device.  When he put marbles in one cup, another person holding an empty cup could feel the rattle of the marbles as if the same marble impacts were happening on the sides of the empty cup held by the observer.  The sense was so realistic, it was hard to believe that it was entirely synthesized and transmitted over the Internet.  Subsequently, at the Consumer Electronics Show, I witnessed another of these haptic speakers.  But this one conveyed the sense not by mirroring precise physical impacts, but by giving precisely timed pulses, which the holder coul[...]

The Mozilla Blog: Fighting Back Against Unlawful Warrants and Indefinite Gag Orders to Protect Internet Privacy and Security

Sat, 07 Jan 2017 00:23:43 +0000

Mozilla and other major technology companies, including Amazon, Apple, Google and Twitter, are joining together in an amicus brief filing that supports Facebook’s ability to challenge both a search warrant for nearly 400 Facebook users’ data, and an indefinite gag order which forbids Facebook from notifying users about government requests for their data.

Mozilla is joining this brief because we believe this type of lengthy, never-ending gag order ultimately infringes on the ability to control one’s online experience.  This is part of our fight to protect individual privacy and security online, and to improve internet health by promoting cybersecurity and increasing transparency.

In this case, the government argued that Facebook has no legal right to even challenge the warrant’s scope or validity, and a lower court agreed. This would mean companies like Mozilla couldn’t challenge unlawful orders we receive. And, because gag orders would prevent us from notifying users, those users wouldn’t know to challenge them either. Unlawful warrants would never see the light of day or be apparent to users. This is staggering and unacceptable.

While we have yet to receive a gag order that would prevent us from notifying a user about a request for data, we said in our transparency report earlier this year that we are committed to opposing any unlawful or overbroad requests for our users’ data and that’s exactly what we’re doing today. Mozilla also joined an amicus brief in September to fight back against indiscriminate use of permanent gag orders that prevent companies from ever notifying users about requests for their data.

We will continue to look for opportunities like these to protect privacy and security online for all users and to improve the overall health of the internet.

Chris H-C: @moz_stats – A Silly Thing I Made

Fri, 06 Jan 2017 19:06:23 +0000

In honour and image of @stats_canada, may I present @moz_stats, a parody Twitter account that will, irregularly, post stats of dubious reality about being a Mozillian.

Follow and risk learning something almost, but not quite, entirely unlike truth.

If you have any (fake) statistics about being a Mozillian, please @ the account or email me. I only have so much material.



Support.Mozilla.Org: What’s Up with SUMO – 5th January (2017!)

Thu, 05 Jan 2017 22:59:26 +0000

Hello, SUMO Nation! Welcome to the new year, which is going to be full of novelty, challenges, greatness, problems to solve, ideas to share, and many other things – courtesy of all of you reading these words, we hope :-). Let’s roll into the new days with the news from the most recent past ones!

Welcome, new contributors! NPapalymberis, gaurav1999, cningyuan – if you just joined us, don’t hesitate: come over and say “hi” in the forums!

Contributors of the week: all the forum supporters who tirelessly helped users over the last week; all the writers of all languages who worked tirelessly on the KB over the last week; and Seburo, for being the friendly ghost dropping useful and important information on us ;-) We salute all of you!

SUMO Community meetings LATEST ONE: 4th of January – you can read the notes here (and see the video at AirMozilla). NEXT ONE: happening on the 11th of January! Reminder – if you want to add a discussion topic to the upcoming meeting agenda: Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion before Wednesday (this will make it easier to have an efficient meeting). Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda). If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community Congrats to Joni and Roland for their mention in the 2017 recap video (for Focus)! If you are struggling with using Vidyo on Ubuntu, please contact Seburo – he may have found a solution for your pains. Help us help others with using the Web in a smart, easy, and safe way.

Calendar time! January dates that you should remember:

  • 10th – Firefox for iOS 6.0 released (to be confirmed)
  • 11th – SUMO Community Meeting
  • 12th – SUMO Platform Meeting
  • 16th–22nd – tentative final migration period to Lithium
  • 18th – SUMO Community Meeting
  • 19th – SUMO Platform Meeting
  • 24th – Firefox 51 release
  • 25th–26th – SUMO Day & SUMO Social Day

…any other dates you want us to keep in mind? Use the comments below! Are you interested in working together on training for new contributors? Talk to Rachel!

Platform Check the notes from the last meeting in this document (and don’t forget about our meeting recordings). The main points of today’s meeting were: GET READY! The final migration is happening after the 15th of January! Another test migration will take place soon. More updates during the next Platform meeting. The existing bugs are getting prioritized. More details in the meeting document. Reminder: You can preview the current migration test site [...]

Firefox UX: Project Prox?

Thu, 05 Jan 2017 21:17:38 +0000


At our recent Mozilla All Hands in Kona, HI, we announced the first project by the New Mobile Experience Team: Project Prox.

Emma Humphries: BMO Roadmap for the 1st Half of 2017

Thu, 05 Jan 2017 20:03:55 +0000

While the BMO team (mcote, Dylan, dkl, and me) was together at Mozilla's end-of-year all-hands meeting, we planned our work for the first part of the new year.

Mark Côté reviewed the work we completed in 2016. Here are the highlights of what we want to have completed before we all meet again in person in SF this June.

What We're Doing

  • Deliver features requested by users
    • Markdown in comments
    • Emoji (because you want to use 💩 in bug titles)
  • Deliver web hooks for new user creation and first bugs
    • Web hooks enable integration with more tools and services
    • We're going to do this first with the new bug-reporter experience so that we can follow up with them as they file more high-quality bugs
  • Conduct survey of BMO user experience so we have a baseline for improvements

This is not all we're doing, but the major results we want to achieve.

What We're Not Doing: Custom Forms

Also, as Mark mentioned in his post, this year I'm taking on product management for Bugzilla. If you want to discuss things that Bugzilla should not be doing, and things it should be doing, seek me out in #bteam or #bugmasters, or open a new ticket in Bugzilla and set the severity to 'enhancement'.


Air Mozilla: Connected Devices Weekly Program Update, 05 Jan 2017

Thu, 05 Jan 2017 18:45:00 +0000

Weekly project updates from the Mozilla Connected Devices team.

Mark Côté: BMO in 2016

Thu, 05 Jan 2017 16:54:11 +0000

Stuff that landed in 2016

Here’s a sampling of improvements to BMO that were launched in 2016.

Improvements to bug-modal

We’ve continued to refine the modal bug view, aka the “experimental UI”. The BMO team fixed 39 bugs relating to the new interface in 2016. We’ve got a couple more blockers before we make the modal view the default, which should happen in the middle of January. We know there are still a few outstanding bugs and some missing functionality, so we will leave the standard view available for a little while, at least until all the blockers of bug 1273046 are resolved. All future improvements to the bug view will happen only on the modal UI, which is not just more usable but also more hackable.

HTML email

There was actually no real work involved here, as HTML email was added to BMO years ago. At that time, since it was a new feature, we didn’t enable it by default… and then 4 years went by. Just a couple weeks ago a new BMO user suggested that we implement HTML emails, having no idea that the option was already there (buried in many other preferences). That was the prompting we needed to finally enable it by default.

Readable bug statuses

Emma Humphries added readable statuses prominently displayed at the top of the status panel (in the modal UI only). They quickly summarize the status of a bug in a visible place, mainly for triaging and tracking purposes. This is part of Emma’s on-going efforts to improve contributors' experiences.

Time zones

One big change we made this year was hopefully completely invisible to everyone: BMO’s database moved to UTC. When BMO was originally deployed in 1998, the database, being based in California, was set to Pacific Time. 6 years later someone suggested that UTC would be a better choice. When I took over management of the BMO team about 4.5 years ago, I was pretty horrified that a major application would be running in any time zone other than UTC, not least because of the confusion caused by an hour being repeated every year when PDT switched back to PST, since the presence or absence of DST is not noted in the database. However, we were never able to justify the required effort to move over to UTC, that is, until last year, as we were setting up a failover system in AWS. RDS, the natural choice for a MySQL-based application, supported only UTC, thus giving us a hard requirement to migrate. A heroic effort by dkl got us smoothly switched over in May 2016.

Memory usage & perf improvements

We’ve known for some time that Bugzilla has a persistent memory leak. It was never a huge issue because the webheads [...]
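The ambiguity described above is easy to reproduce: when PDT switches back to PST, wall-clock times between 1:00 and 2:00 occur twice, and a naive Pacific timestamp cannot say which occurrence it means. A minimal Python sketch (just an illustration, not BMO code):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

pacific = ZoneInfo("America/Los_Angeles")

# 2016-11-06 01:30 local time happened twice: once in PDT, once in PST.
# The `fold` attribute picks which of the two occurrences we mean.
first = datetime(2016, 11, 6, 1, 30, fold=0, tzinfo=pacific)   # PDT, UTC-7
second = datetime(2016, 11, 6, 1, 30, fold=1, tzinfo=pacific)  # PST, UTC-8

# Same wall-clock reading, two different real instants:
print(first.astimezone(timezone.utc))   # 2016-11-06 08:30:00+00:00
print(second.astimezone(timezone.utc))  # 2016-11-06 09:30:00+00:00

# A UTC column never has this problem: every instant maps to one value.
```

This is exactly the hour-repeated-every-year problem the post mentions, and why storing UTC removes it.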

Mozilla Addons Blog: Friend of Add-ons: Shubheksha Jalan

Thu, 05 Jan 2017 16:37:18 +0000

Our newest Friend of Add-ons is Shubheksha Jalan! Shubheksha has been involved with the add-on community since last year. In December, she was accepted to Mozilla’s Outreachy cohort to work on web-ext, a command line utility to help make WebExtensions development more awesome. At first, getting involved in the open source community proved challenging for Shubheksha. After spending two years trying to break into various open source projects, she began searching issue labels on GitHub for good first bugs and beginner issues. One bug in particular stood out because of a comment from Kumar McMillan that read, “I can mentor this.” Shubheksha says, “Now that I look back at it, that one single comment played a huge role in helping me muster enough courage to take it up. From then on, it sort of became a routine and I found myself very eager to work on web-ext every day after my classes in college were over.” Fixing that first bug was only the beginning. Since then, Shubheksha has closed 22 add-on bugs and is authoring an article on web-ext for Mozilla Hacks. Contributing code to Mozilla projects has given Shubheksha a taste of real-world software development and has helped develop and refine her skill set. She says, “I learnt to navigate and read through a decently sized codebase. I learnt how to write code in a way that makes it testable, how to write unit tests using stubs, spies, etc. I also learnt a variety of debugging techniques for Node.js. I learnt a ton about writing ES6 JavaScript and the surrounding tooling, especially Flowtype, a static type checker for JavaScript. It has been an incredibly rewarding experience. My mentors, Kumar and Luca, have been incredibly helpful and supportive throughout the way and I can’t thank them enough for investing all their time and patience.” You can read Shubheksha’s blog post about getting involved in the open source community here. If you’re interested in contributing code to add-on projects, please take a look at our onboarding wiki.
Thanks to all the code contributors, developers, reviewers, and users who pitch in to make add-ons awesome. We encourage you to document your contributions on the Recognition wiki. We are looking forward to collaborating with you in 2017![...]

Air Mozilla: Reps Weekly Meeting Jan. 05, 2017

Thu, 05 Jan 2017 16:00:00 +0000

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

The Mozilla Blog: Living Inside the Computer: Building Responsible IoT

Thu, 05 Jan 2017 12:12:08 +0000

A new paper by the NetGain Partnership examines the opportunities and dangers of a pervasive web Today, we live online. The Internet intersects with everything from commerce and journalism to art and civic participation. But more and more, living online doesn’t mean sitting in front of a screen, mouse in hand. The Internet of Things — the networked computing environment that spans the globe — allows the web to permeate our clothes, our homes, our healthcare. The web is now made up of billions of connected devices and zettabytes of data. It’s pervasive. A pervasive Internet isn’t a novelty or a linear step forward. It’s an extraordinary leap that brings online power grids, emergency alert systems, pacemakers and appliances. It requires our deep thought and attention. And it needs a guiding set of principles. Over the past few decades, we’ve seen the power the Internet wields. It’s a force that can unseat dictators, revolutionize education, reshape economies and connect billions. But it’s also a force that can surveil, repress, harass and exclude. It can undermine our most important values. Now, we’re at an inflection point. As IoT evolves and permeates even more personal corners of our lives, we must balance progress with principles. We can’t only ask, “What’s possible?” We must also ask, “What’s responsible?” The NetGain Partnership — a broad coalition of nonprofits committed to an Internet in the public interest — has published a paper on the road ahead. “We All Live in the Computer Now” explores the opportunities of a pervasive Internet, the challenges and where we go next. IoT can work for the public good. It can fuel the movement for open knowledge and technology. IoT can contribute to a better planet: Cities like San Antonio, Barcelona and Hubli have used IoT to conserve water and energy. IoT can empower citizens: From Hong Kong to Dublin, people are using the web to participate in government. 
And IoT can fuel do-good organizations and movements, from Arduino to makerspaces. On the flip side, there are existential dangers. IoT can erode privacy: legions of connected microphones and cameras track our movements and conversations without our knowledge. Governments surveil citizens en masse, and profit-minded businesses hoard personal data. IoT also means more vulnerabilities, from the recent Dyn attack to the hacking of elections. Examining past Internet inflection points is helpful. There were times that — in hindsight — would have benefited from a better balance of progress and principle. As the web exploded [...]

Christian Heilmann: First Decoded Chat of the year: Paul Bakaus on AMP

Thu, 05 Jan 2017 11:51:02 +0000

Today on the Decoded Blog I published the first ever Decoded Chat I recorded, where I grilled Paul Bakaus in detail about AMP. This is an hour-long Skype call and different to the newer ones – I was still finding the format :). There are quite a few changes that happened to AMP since then and soon there will be an AMP Summit to look forward to. All in all, I do hope, though, that this will give you some insight into what AMP is and what it could be if the focus were to go away from “Google only” with it. These are the questions we covered:

  • What is AMP to you?
  • The main focus of AMP seems to be mobile, is that fair to say?
  • Was AMP an answer to Facebook’s and Apple’s news formats? Does it rely on Google technology and – if so – will it be open to other providers?
  • It seems that the cache infrastructure of AMP is big and expensive. How can we ensure it will not just go away as an open system, as many other open APIs vanished?
  • Do large corporations have a problem finding contributors to open source projects? Are they too intimidating?
  • Is there a historical issue of large corporations re-inventing open source solutions as “production quality code”? Is this changing?
  • Whilst it is easy to get an AMP version of your site with plugins to CMS, some of the content relies on JavaScript. Will this change?
  • AMP isn’t forgiving. One mistake in the markup and the page won’t show up. Isn’t that XHTML reinvented, which we agreed was a mistake?
  • AMP seems to be RSS based on best practices in mobile performance. How do we prevent publishers from exclusively creating AMP content instead of fixing their broken and slow main sites?
  • It seems to me that AMP is a solution focused on CMS providers. Is that fair, and how do we reach those to allow people to create AMP without needing to code?
  • A lot of “best practice” content shown at specialist events seems to be created for those. How can we tell others about this?
  • AMP seems to be designed to be limiting. For example, images need a height and width, right?
  • In terms of responsive design, does the AMP cache create differently sized versions of my images?
  • Are most of the benefits of AMP limited to Chrome on Android or does it have benefits for other browsers, too?
  • Do the polyfills needed for other browsers slow down AMP?
  • How backwards compatible is AMP?
  • One big worry about publishing in AMP is that people are afraid of being fully dependent on Google. Is that so?
  • Are there any limitations to meta information in AMP pages? Can I add – for example – Twitter specific meta information?
  • Do[...]

Mozilla Addons Blog: January’s Featured Add-ons

Thu, 05 Jan 2017 02:50:43 +0000


Pick of the Month: Tile Tabs

by DW-dev
Display open tabs in a tile layout—arrange them however you like.

“Possibly the most amazing plugin ever. For a portrait monitor this is a must have. I split my social media pages so I can see them all at once, and split my two email accounts so I can see them at once.”

Featured: Emoji Cheatsheet

by Johann Hoffman
A simple search function helps you find the perfect emoji for any occasion.

“Works exactly as it says. Very simple interface and easy to use.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to for the board’s consideration. We welcome you to submit your own add-on!

Seif Lotfy: Hot Functions for IronFunctions

Wed, 04 Jan 2017 20:50:00 +0000

For every request, IronFunctions would spin up a new container to handle the job, which, depending on the container and task, could add a few hundred milliseconds of overhead. So why not reuse the containers if possible? Well, that is exactly what Hot Functions do. Hot Functions improve IronFunctions throughput by 8x (depending on the duration of the task). Hot Functions reside in long-lived containers addressing the same type of task; incoming workloads are fed into their standard input and results are read from their standard output. In addition, permanent network connections are reused. Here is what a hot function looks like. Currently, IronFunctions implements an HTTP-like protocol to operate hot containers, but instead of communicating through a TCP/IP port, it uses standard input/output. To test this baby we deployed on 1 GB DigitalOcean instances (which is not much), and used Honeycomb to track and plot the performance. A simple function printing "Hello World", called for 10s (MAX CONCURRENCY = 1): Hot Functions have 162x higher throughput. A complex function pulling an image and md5-checksumming it, called for 10s (MAX CONCURRENCY = 1): Hot Functions have 1.39x higher throughput. By combining Hot Functions with concurrency we saw even better results: a complex function pulling an image and md5-checksumming it, called for 10s (MAX CONCURRENCY = 7): Hot Functions have 7.84x higher throughput. So there you have it, pure awesomeness by the team in the making. Also a big thank you to the good people from Honeycomb for their awesome product that allowed us to benchmark and plot (all the screenshots in this article are from Honeycomb). It's a great and fast new tool for debugging complex systems by combining the speed and simplicity of time series metrics with the raw accuracy and context of log aggregators. Since it supports answering arbitrary, ad-hoc questions about those systems in real time, it was an awesome, flexible, powerful way for us to test IronFunctions![...]
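The long-lived stdin/stdout loop can be sketched roughly like this. This is a toy illustration of the idea, not IronFunctions' actual wire format (which is HTTP-like): a warm process loops forever, reads one newline-delimited task payload at a time from standard input, and writes each response to standard output.

```python
import io

def handle(payload: str) -> str:
    """Task logic for this hot function (toy example)."""
    return f"Hello {payload}!"

def hot_function_loop(stdin, stdout):
    # The container stays alive between requests; each line is one task.
    for line in stdin:
        task = line.rstrip("\n")
        if not task:
            continue
        stdout.write(handle(task) + "\n")
        stdout.flush()  # the platform reads each response as it completes

# In a real container this would be hot_function_loop(sys.stdin, sys.stdout);
# here we simulate two back-to-back requests hitting the same warm process:
out = io.StringIO()
hot_function_loop(io.StringIO("World\nIronFunctions\n"), out)
print(out.getvalue())
```

Because the process is already warm, each request costs only the `handle()` call rather than a container spin-up, which is where the throughput gains in the benchmarks above come from.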

Air Mozilla: The Joy of Coding - Episode 85

Wed, 04 Jan 2017 18:00:00 +0000

mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla: Weekly SUMO Community Meeting Jan. 04, 2017

Wed, 04 Jan 2017 17:00:00 +0000

This is the SUMO weekly call.

David Lawrence: Happy BMO Push Day!

Wed, 04 Jan 2017 16:43:48 +0000

the following changes have been pushed to

  • [1328464] The checkbox of “Add me to CC list (follow this bug)” doesn’t work

discuss these changes on


Emma Humphries: How To Tag Comments in Bugzilla So We Can Respond Appropriately

Wed, 04 Jan 2017 00:36:21 +0000

Hello from and happy 2017.

You might recall that you can tag comments in Bugzilla to mark abuse, off-topic discussion, and spam.


Please use the right tags when reporting comments that violate our etiquette policies so that site admins can take the right actions and keep Bugzilla a friendly and spam-free place to report and track bugs.

spam: comments that look like someone's trying to game search results by adding links to unrelated sites should be tagged as spam. Look out for comments of the form "I see this happening on my site too," as they may be spam and not additional information for a bug.

off-topic, offtopic, me-too, or advocacy: "me too", "fix this now!", "I can't believe this isn't important to Mozilla" and similar comments should be tagged with one of these. See our etiquette guide.

abuse, abusive, or admin: Flames, threats, name-calling, and slurs are abuse, and should be marked as such so they are brought to the attention of an admin.

obsolete and typo should only be used to remove comments which are no longer relevant, contain outdated or incorrect information, or ones the author wishes to redact.

Our response to violations of Bugzilla etiquette depends on the type of violation. We don't want to mark an eager new contributor, who just needs some coaching, as a spammer, and we don't want to go uninformed about abuse.

If in doubt, tag with admin, and we'll have a look at it.
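Comment tags can also be set programmatically. Bugzilla's REST API (5.0+) exposes comment tags via a `PUT` to `/rest/bug/comment/(comment_id)/tags`; the exact endpoint shape and field names below are a sketch and worth double-checking against the Bugzilla WebService documentation, and the comment ID and API key here are placeholders:

```python
import json
from urllib import request

BUGZILLA = "https://bugzilla.mozilla.org/rest"  # assumed base URL

def tag_request(comment_id, add=(), remove=(), api_key="YOUR-API-KEY"):
    """Build the PUT request that adds/removes tags on a comment."""
    url = f"{BUGZILLA}/bug/comment/{comment_id}/tags?api_key={api_key}"
    payload = {"add": list(add), "remove": list(remove)}
    return request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# Flag an obvious SEO-spam comment (hypothetical comment ID):
req = tag_request(12345678, add=["spam"])
# request.urlopen(req) would submit it (requires a valid API key).
```

The same request with `add=["admin"]` is the programmatic equivalent of the "if in doubt" advice above.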


About:Community: Regional communities and Reps in 2016

Wed, 04 Jan 2017 00:24:18 +0000

Full static version is on the wiki and its conversation on Discourse; feel free to point there anyone that might be interested. What got shipped from the Reps and Regional Communities Team:

Executive summary

Our two objectives for 2016 were:

  • A focused set of relevant training and learning opportunities for mobilizers is systematized, and they regularly access these opportunities to be more effective in their contributions, as a result providing more impact to Mozilla’s main initiatives.
  • Reps is the program for most core volunteers, where many communities feel their voice represented and influencing the organization, and where Mozillians join to be more aligned, grow their skills and be more impactful in mobilizing others.

During 2016, the Reps and Regional communities team delivered:

  • Coaching training material to systematize training and coaching support for core mobilizers and communities, starting to be completely volunteer-driven in 2017.
  • An initial Leadership toolkit tailored to invest in the main identified skills our core mobilizers need to support Mozilla’s focus initiatives and areas.
  • Five in-person community gatherings in our top focus regions (Brazil, India, Europe, Arabic and Mexico) to test, iterate and deliver these coaching and leadership opportunities to key core mobilizers, as well as document and systematize this effort to allow volunteers to run their own local ones by themselves in 2017.
  • Support for the creation (and regular update) of Activate Mozilla, a site to summarize the main focus areas for Mozilla (Rust, Servo, Test Pilot, WebVR, Internet Issues…) and how to provide value through activities co-created with functional partners.
  • Clear alignment, re-activation and impact delivery from the five focus communities, re-energizing and providing value to organization goals with Activate Mozilla activities during 2016 and helping them come up with aligned plans for 2017 to support focus projects, including future partnerships with local organizations.
  • Alignment and impact delivery from regional communities around the world, with our Reps mobilizing almost 150 activities and events in more than 23 countries in the last 4 months, supporting the Test Pilot, Webcompat, Rust, Add-ons and E10s 2016 team goals.
  • A big update to Mozilla Reps (RepsNext) to evolve the program b[...]

Code Simplicity: Measuring Developer Productivity

Tue, 03 Jan 2017 19:00:19 +0000

Almost as long as I have been working to make the lives of software engineers better, people have been asking me how to measure developer productivity. How do we tell where there are productivity problems? How do we know if a team is doing worse or better over time? How does a manager explain to senior managers how productive the developers are? And so on and so on.

In general, I tended to focus on code simplicity first, and put a lower priority on measuring every single thing that developers do. Almost all software problems can be traced back to some failure to apply software engineering principles and practices. So even without measurements, if you simply get good software engineering practices applied across a company, most productivity problems and development issues disappear.

Now, that said, there is tremendous value in measuring things. It helps you pinpoint areas of difficulty, allows you to reward those whose productivity improves, justifies spending more time on developer productivity work where that is necessary, and has many other advantages. But programming is not like other professions. You can’t measure it like you would measure some manufacturing process, where you could just count the number of correctly-made items rolling off the assembly line. So how would you measure the production of a programmer?

The Definition of “Productivity”

The secret is in appropriately defining the word “productivity.” Many people say that they want to “measure productivity,” but have never thought about what productivity actually is. How can you measure something if you haven’t even defined it? The key to understanding what productivity is is realizing that it has to do with products. A person who is productive is a person who regularly and efficiently produces products. The way to measure the productivity of a developer is to measure the product that they produce. That statement alone probably isn’t enough to resolve the problem, though. So let me give you some examples of things you wouldn’t measure, and then some things you would, to give you a general idea.

Why Not “Lines of Code”?

Probably the most common metrics that the software industry has attempted to develop have been centered around how many lines of code (abbreviated LoC) a developer writes. I understand why pe[...]

Air Mozilla: Webdev Extravaganza: January 2017

Tue, 03 Jan 2017 18:00:00 +0000

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on. This...

Daniel Glazman: One BlueGriffon

Tue, 03 Jan 2017 17:36:00 +0000

There are - there were - two BlueGriffons: the Web Editor (aka BlueGriffon) and the EPUB2/EPUB3 editor (aka BlueGriffon EPUB Edition). With the forthcoming v2.2, these are going to merge into a single product so they will never be out of sync again. Technically, the merge is already achieved and the next days will be spent on testing and ironing out issues. Licenses for each product will work with that v2.2: BlueGriffon licenses will, as before, enable the commercial features of the product, while licenses of the EPUB Edition will enable the commercial features AND EPUB2/EPUB3 editing. We'll also add an upgrade at a special cost from the former to the latter. Stay tuned!


Air Mozilla: Martes Mozilleros, 03 Jan 2017

Tue, 03 Jan 2017 16:00:00 +0000

Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

David Lawrence: Happy BMO Push Day!

Tue, 03 Jan 2017 15:28:23 +0000

the following changes have been pushed to

  • [1262457] the list of comment tags in the ‘tags’ menu isn’t updated in real time
  • [1321662] Ensure that Carp and Carp::Heavy are only loaded after @INC is setup with the vendor bundles
  • [1325432] Word ‘New’ in bug summary for form.crm is redundant and should be removed
  • [1299855] Implement token-bucket rate limiting on top of memcached
  • [1324058] Migrate from jquery-cookie to js-cookie, as the former is no longer maintained
  • [1262465] ensure unprivileged users can mark a bug as security sensitive

discuss these changes on


The Mozilla Blog: Mozilla Welcomes Ashley Boyd, VP of Advocacy

Tue, 03 Jan 2017 13:33:38 +0000

Movement-building veteran joins the Mozilla leadership team

This month, Ashley Boyd joins Mozilla as VP, Advocacy. Ashley will lead Mozilla’s work to fuel the open Internet movement and mobilize millions to stand up for a free, open web. Our mission is ambitious: making the health of the Internet a mainstream issue. It is also vital: as centralization, surveillance, exclusion and other online threats proliferate, we need a movement to keep the web a global public resource. In this role, Ashley will work with other teams within Mozilla, with ally organizations and with digital citizens around the world through advocacy campaigns and public education initiatives.

Ashley was most recently Vice President & Chief Field Officer for MomsRising, a national grassroots organization in the U.S. As a founding staff member, she was instrumental in building MomsRising into an organization of one million grassroots supporters, 200 partner organizations and over 20 funding partners. Ashley has over two decades of experience in public interest advocacy, with a specialization in effective uses of technology and public engagement. During her career, she has worked with leading public interest advocacy organizations in the U.S. and India, including M&R Strategic Services, the Advocacy Institute, the Self Employed Women’s Association (India) and AmericaSpeaks. She has led multi-state organizing efforts around the issues of health reform, paid family leave, social security reform and the national budget. Ashley has a Master’s Degree in Rhetoric from the University of Maryland, College Park.

Ashley will build on the success of Mozilla’s advocacy work over the past years. In fall of 2016, Mozilla fought for common-sense copyright reform in the EU, creating public education media that engaged over one million citizens and sending hundreds of rebellious selfies to EU Parliament.
Earlier in 2016, Mozilla launched a public education campaign around encryption and emerged as a staunch ally of Apple in the company’s clash with the FBI. Mozilla has also fought for mass surveillance reform, net neutrality and data retention reform. Welcome, Ash[...]

This Week In Rust: This Week in Rust 163

Tue, 03 Jan 2017 05:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts: Rust is more than safety. Rust is mostly safety. Safety is Rust's Fireflower. Fire Mario, not Fire Flowers. Rust is about productivity. Rust is its community. Why Rust? Sum types. Rust is software's salvation. Creating an enum iterator using Macros 1.1. Elegant library APIs in Rust. Rust on RTL8710 running FreeRTOS. Golang and Rustlang memory safety. Six easy ways to make your crate awesome. Constrain API versions statically with traits. Rust on the WiFi Pineapple (and OpenWrt). Rust: Borrowing, ownership, and lifetimes. Xargo v0.3.0 released: Build Your Own std. [podcast] New Rustacean News 2: Let's talk roadmap! — Rust's achievements in 2016 and goals for 2017.

Other Weeklies from Rust Community: This week in Rust docs 37. Updates from the Rust documentation team. This year in Redox. Redox is an operating system written in Rust. This year in Robigalia. Robigalia is a project to create a highly reliable persistent capability OS, continuing the heritage of EROS and Coyotos. This year in Ruma. Ruma is a Matrix homeserver written in Rust. This week in Ruma 2017-01-01. These weeks in PlanetKit #6: the joy of motion. PlanetKit generates colorful blobs that might one day resemble planets.

Crate of the Week

This week's Crate of the Week is rocket, an experimental web framework (will need a nightly Rust!) with a focus on ease-of-use, expressability and speed. Thanks to Vikrant for the suggestion! Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have[...]

Christian Heilmann: Web bloat isn’t a knowledge problem

Mon, 02 Jan 2017 22:09:12 +0000

There is a consensus among all browser makers and lovers of the web: the current state of the web is a mess. The average web page is 2.4 megabytes in size and makes over 200 requests. We spend more time waiting for trackers and ads to load than we spend reading content. Our high-powered computers and phones, our excellent mobile devices, choke on megabytes of JavaScript and CSS. Code that often isn’t needed in the environments it currently gets delivered to. Code that fixes issues of the past that aren’t a problem anymore these days.

Our reaction to this is often an assumption that people use libraries and frameworks because they don’t know better. Or that people use them to solve an issue they have that we don’t – like having to support legacy environments. I’m getting more and more the impression that we are wrong in our approach. This is not about telling people that “you don’t need to use a framework” or that “using XYZ is considered harmful”. It isn’t even about “if you do this now, it will be a maintenance nightmare later”. These are our solutions to our problems. We’ve done this. Over and over again.

Information is plentiful and tools aren’t a problem

We try to do our best to battle this. We make our browsers evergreen and even accessible as headless versions for automatic testing. We create tools to show you without a doubt what’s wrong about a certain web product. We have simulators that show you how long it takes for your product to become available and responsive to interaction. We allow you to simulate mobile devices on your desktop machine and get a glimpse of what your product will look like for your end users. We create toolchains that undo the worst offenses when it comes to performance. We concatenate and minify scripts and CSS, we automatically optimise images and we flag up issues in a build process before they go live. We give talks and record training videos on how to build a great, responsive product using modern browsers whilst supporting old browsers. We release all of this for free and openly available — even as handy collections an[...]

Air Mozilla: Mozilla Weekly Project Meeting, 02 Jan 2017

Mon, 02 Jan 2017 19:00:00 +0000

The Monday Project Meeting

The Servo Blog: These Weeks In Servo 86

Mon, 02 Jan 2017 00:30:00 +0000

In the last two weeks, we landed 164 PRs in the Servo organization’s repositories. This number is lower than usual due to many contributors rightfully taking time to enjoy the holidays. Due to hard work by UK992 and the team at the University of Szeged, our continuous integration systems are now gating on Windows MSVC and Android builds. This will make it much harder to regress support for these platforms as we go forward. All changes to Servo’s style system implementation now require documentation. Thank you to bz for filing the issue prompting this change! Please welcome antrik as an official reviewer for ipc-channel. They have been performing this role unofficially for some time, as well as being a prolific contributor to the library, and it’s good to finally recognize antrik’s efforts here.

Planning and Status

Our overall roadmap is available online. Plans for 2017 (including Q1) will be solidified in the coming week. Please check it out and provide feedback! This week’s status updates are here.

Notable Additions

  • emilio went on a massive documentation spree of the code in the style component.
  • mbrubeck corrected the layout of tables with rowspan in certain cases.
  • mrnayak redesigned the storage of HSTS data.
  • emilio implemented @import support for Stylo.
  • mattnenterprise updated some older Fetch code to match changes to the specification.
  • beholdnec fixed a shader problem that was breaking Servo on some AMD drivers.
  • karenher added support for tracking line numbers in the HTML parser.
  • canaltinova corrected the serialization of overflow properties in CSS.
  • hiikezoe made animated colors use premultiplied alpha.
  • DominoTree reversed the default direction of linear gradients.
  • mattnenterprise improved the connection pooling of HTTP requests.
  • mrnayak added support for HSTS to the Fetch network stack.
  • UK992 improved various aspects of the packaging process related to browser.html.
  • cgwalters implemented support for using repository collaborators as reviewers in homu.
  • nox removed some uses of transmute that triggered undefined behavio[...]

Karl Dubost: [worklog] Edition 048 - Bye 2016

Sat, 31 Dec 2016 14:50:00 +0000

Let's restart with a simpler worklog, a bit more freestyle and less structured, before exploring a new format next year. 2016 had plenty of good things for the Mozilla Webcompat team. Let's focus more on "this is what I'm doing" than on "what I will do". The "will do" has a tendency to make you drift and creates a burden you drag into the other tasks. Published the incomplete worklog pseudo-drafts of the last two months. It's ugly, but it's better than nothing. I need to simplify my worklog and automate some of the things.

dev

- Not sure I'm comfortable with deprecating this header, but maybe I don't fully understand the issue with just the text there.
- Python review for a change of strategy on the reported-with information.
- Code for hiding comments.
- Some labels do not have good contrast.
- Another design issue with our labels: the labels section should be above the comment box.
- Attempt to improve the layout of the comment box after our issue view redesign.
- A typo in our template. Good first bug.

webcompat issues

- Strange issue with the font domain at Google taking a hell of a time to deliver the resources, but I'm not sure it's really a Web compatibility issue.
- Some sites set overflow-y: hidden to force users to interact with a pop-up on mobile. When doing that, make sure to remove it once the interaction with the pop-up has happened; if not, it makes the Web site unusable.
- Strange issue with an old version of Firefox that I can't reproduce in Nightly. That said, I found an interesting bogus strict-transport-security HTTP header, and it seems tied to the right version of the bug reporter.
- A weird and interesting issue about a background-image being repeated twice.
- JavaScript user-agent sniffing for a table of contents on Apple developer documentation, including a strange CSS decision for Firefox users.
- Nesting contexts and z-index are delicate, specifically when they seem to behave differently in IE than in other browsers.
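The overflow-y advice above can be sketched roughly as follows. This is a minimal illustration, not code from any of the reported sites: the function names are hypothetical, and the plain `body` object stands in for `document.body` in a real page.

```javascript
// Hypothetical sketch: lock page scrolling while a pop-up is open,
// and restore it afterwards so the site stays usable.

function lockScroll(body) {
  // Trap the viewport while the pop-up is displayed.
  body.style.overflowY = "hidden";
}

function unlockScroll(body) {
  // Restore scrolling once the user has interacted with the pop-up.
  // Forgetting this step is the bug described above.
  body.style.overflowY = "";
}

// Stand-in for document.body so the sketch is self-contained:
const body = { style: { overflowY: "" } };
lockScroll(body);   // pop-up shown
unlockScroll(body); // pop-up dismissed
```

In a real page you would call `unlockScroll(document.body)` from the pop-up's dismiss handler; the failure mode reported here is sites that apply the `hidden` value but never remove it.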
- An onkeypress handler on the Walmart website prevents users from deleting what they typed.
- A button slightly too tiny[...]