Planet Mozilla

François Marier: Tweaking Referrers For Privacy in Firefox

Mon, 24 Oct 2016 00:00:00 +0000

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective. Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.

Description

In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation.

First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string, which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.

Secondly, using the new Referrer Policy specification, web developers can override the default behaviour for their pages, including on a per-element basis. This can be used both to increase and to reduce the amount of information present in the referrer.

Legitimate Uses

Because the Referer header has been around for so long, a number of techniques rely on it. Armed with the Referer information, analytics tools can figure out where website traffic comes from and how users are navigating the site. Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject it if the request originated from a different website. It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (i.e. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).

Problems with the Referrer

Unfortunately, this header also creates significant privacy and security concerns. The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way. These leaks can also expose private personally-identifiable information when it is part of the query string. One of the most high-profile examples is the accidental leakage of user searches.

Solutions for Firefox Users

While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers. In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:

0 to never send the header
1 to send the header only when clicking on links and similar elements
2 (default) to send the header on all requests (e.g. images, links, etc.)

It's also possible to put a limit on the maximum amount of information that the header will contain by setting network.http.referer.trimmingPolicy to:

0 (default) to send the full URL
1 to send the URL without its query string
2 to only send the scheme, host and port

or by using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to restrict only the contents of referrers attached to cross-origin requests. Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming[...]
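
For example (one possible combination based on the values above, not a recommendation from the post, and assuming the XOrigin variant takes the same values as trimmingPolicy, as its name suggests), a user who wants referrers sent only on link clicks, and trimmed for cross-origin requests, could set in about:config:

    network.http.sendRefererHeader = 1
    network.http.referer.XOriginTrimmingPolicy = 2

With those values, same-origin requests still carry a referrer when following links, while requests to other sites reveal at most the scheme, host and port of the referring page.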

Smokey Ardisson: Thoughts on the Mac OS X upgrade cycle

Sun, 23 Oct 2016 06:43:50 +0000

Michael Tsai recently linked to Ricardo Mori’s lament on the unfashionable state of the Mac, quoting the following passage: Having a mandatory new version of Mac OS X every year is not necessarily the best way to show you’re still caring, Apple. This self-imposed yearly update cycle makes less and less sense as time goes by. Mac OS X is a mature operating system and should be treated as such. The focus should be on making Mac OS X even more robust and reliable, so that Mac users can update to the next version with the same relative peace of mind as when a new iOS version comes out. I wonder how much the mandatory yearly version cycle is due to the various iOS integration features—which, other than the assorted “bugs introduced by rewriting stuff that ‘just worked,’” seem to be the main changes in every Mac OS X (er, macOS, previously OS X) version of late. Are these integration features so wide-ranging that they touch every part of the OS and really need an entire new version to ship safely, or are they localized enough that they could safely be released in a point update? Of course, even if they are safe to release in an update, it’s still probably easier on Apple’s part to state “To use this feature, your Mac must be running macOS 10.18 or newer, and your iOS device must be running iOS 16 or newer” instead of “To use this feature, your Mac must be running macOS 10.15.5 or newer, and your iOS device must be running iOS 16 or newer” when advising users on the availability of the feature. At this point, as Mori mentioned, Mac OS X is a mature, stable product, and Apple doesn’t even have to sell it per se anymore (although for various reasons, they certainly want people to continue to upgrade). So even if we do have to be subjected to yearly Mac OS X releases to keep iOS integration features coming/working, it seems like the best strategy is to keep the scope of those OS releases small (iOS integration, new Safari/WebKit, a few smaller things here and there) and rock-solid (don’t rewrite stuff that works fine, fix lots of bugs that persist). I think a smaller, more scoped release also lessens the “upgrade burnout” effect—there’s less fear and teeth-gnashing over things that will be broken and never fixed each year, but there’s still room for surprise and delight in small areas, including fixing persistent bugs that people have lived with for upgrade after upgrade. (Regressions suck. Regressions that are not fixed, release after release, are an indication that your development/release process sucks or your attention to your users’ needs sucks. Neither is a very good omen.) And when there is something else new and big, perhaps it has been in development and QA for a couple of cycles so that it ships to the user solid and fully-baked. I think the need not to have to “sell” the OS presents Apple a really unique opportunity that I can imagine some vendors would kill to have—the ability to improve the quality of the software—and thus the user experience—by focusing on the areas that need attention (whatever they may be, new features, improvements, old bugs) without having to cram in a bunch of new tentpole items to entice users to purchase the new version. Even in terms of driving adoption, lots of people will upgrade for the various iOS integration features alone, and with a few features and improved quality overall, the adoption rate could end up being very similar. 
Though there’s the myth that developers are only happy when they get to write new code and new features (thus the plague of rewrite-itis), I know from working on Camino that I—and, more importantly, most of our actual developers1—got enormous pleasure and satisfaction from fixing bugs in our features, especially thorny and persistent bugs. I would find it difficult to believe that Apple doesn’t have a lot of similar-tempered developers working for it, so keeping them happy without cranking out tons of brand-new code shouldn’t be overly difficult. I just wish Apple w[...]

Daniel Stenberg: Another wget reference was Bourne

Fri, 21 Oct 2016 23:36:40 +0000

(image) Back in 2013, it came to light that Wget was used to copy the files Private Manning was convicted of having leaked. Around that time, EFF made and distributed stickers saying "wget is not a crime".

Weirdly enough, it was hard to find a high-resolution version of that image today, but I'm showing you a version of it on the right side here.

In the 2016 movie Jason Bourne, Swedish actress Alicia Vikander is seen working on her laptop at around 1:16:30 into the movie and there’s a single visible sticker on that laptop. Yeps, it is for sure the same EFF sticker. There’s even a very brief glimpse of the top of the red EFF dot below the “crime” word.


Also recall the wget occurrence in The Social Network.

Yunier José Sosa Vázquez: Update for Firefox 49

Fri, 21 Oct 2016 19:26:54 +0000

Today Mozilla published a new update for its browser, this time version 49.0.2.

This release fixes small issues that some users have been running into, so we recommend updating.

You can get it from our Downloads area for Linux, Mac, Windows and Android, in Spanish and English.

Air Mozilla: Webdev Beer and Tell: October 2016

Fri, 21 Oct 2016 18:00:00 +0000

(image) Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

QMO: Firefox 51.0a2 Aurora Testday, October 28th

Fri, 21 Oct 2016 15:57:44 +0000

Hello Mozillians,

We are happy to let you know that on Friday, October 28th, we are organizing the Firefox 51.0a2 Aurora Testday. We’ll be focusing our testing on the following features: Zoom indicator and Downloads dropmarker.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Hal Wine: Using Auto Increment Fields to Your Advantage

Fri, 21 Oct 2016 07:00:00 +0000

Using Auto Increment Fields to Your Advantage

I just found, and read, Clément Delafargue’s post “Why Auto Increment Is A Terrible Idea” (via @CoreRamiro). I agree that an opaque primary key is very nice and clean from an information architecture viewpoint.

However, in practice, a serial (or monotonically increasing) key can be handy to have around. I was reminded of this during a recent situation where we (app developers & ops) needed to be highly confident that a replica was consistent before performing a failover. (None of us had access to the back end to see what the DB thought the replication lag was.)
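
The post is cut off here, but to illustrate why a monotonically increasing key helps in that situation, here is a minimal sketch of one way such a check might look. This is my own illustration rather than the approach from the post, and it assumes a PostgreSQL-style setup reachable with psycopg2; the connection strings, table and column names are placeholders.

    # Sketch: compare the highest serial id visible on the primary and on a replica.
    # DSNs, table and column names below are placeholders, not real infrastructure.
    import psycopg2

    PRIMARY_DSN = "host=primary.example dbname=app user=readonly"
    REPLICA_DSN = "host=replica.example dbname=app user=readonly"

    def max_id(dsn, table="events", column="id"):
        # Identifiers are hardcoded placeholders, so plain string formatting is fine here.
        query = "SELECT MAX({0}) FROM {1}".format(column, table)
        with psycopg2.connect(dsn) as conn:
            with conn.cursor() as cur:
                cur.execute(query)
                (value,) = cur.fetchone()
                return value or 0

    primary_max = max_id(PRIMARY_DSN)
    replica_max = max_id(REPLICA_DSN)
    print("primary max id:", primary_max)
    print("replica max id:", replica_max)
    print("replica behind by roughly", primary_max - replica_max, "rows")

Because the column only ever grows, the gap between the two MAX() values gives a rough, application-visible measure of how far the replica is behind, without needing access to the database's own replication metrics.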


Christian Heilmann: Decoded Chats – second edition featuring Monica Dinculescu on Web Components

Thu, 20 Oct 2016 21:42:01 +0000

At SmashingConf Freiburg this year I was lucky enough to find some time to sit down with Monica Dinculescu (@notwaldorf) and chat with her about Web Components, extending the web, JavaScript dependency and how to be a lazy but dedicated developer. I’m sorry about the sound of the recording and some of the harsher cuts, but we were interrupted by tourists trying to see the great building we were in, who couldn’t read the signs saying it was closed for the day.

You can see the video and get the audio recording of our chat over at the Decoded blog:


I played a bit of devil’s advocate interviewing Monica as she has a lot of great opinions and the information to back up her point of view. It was very enjoyable seeing the current state of the web through the eyes of someone talented who just joined the party. It is far too easy for those who have been around for a long time to get stuck in a rut of trying not to break up with the past or considering everything broken as we’ve seen too much damage over the years. Not so Monica. She is very much of the opinion that we can trust developers to do the right thing and that by giving them tools to analyse their work the web of tomorrow will be great.

I’m happy that there are people like her in our market. It is good to pass the torch to those with a lot of dedication rather than those who are happy to use whatever works.


Support.Mozilla.Org: What’s Up with SUMO – 20th October

Thu, 20 Oct 2016 21:23:00 +0000

Hello, SUMO Nation! We had a bit of a break, but we’re back! First, there was the meeting in Toronto with the Lithium team about the migration (which is coming along nicely), and then I took a short holiday. I missed you all, it’s great to be back, time to see what’s up in the world of SUMO!

Welcome, new contributors!

superzorro Just Sew Cris cell_division julianocristian sarthak_1011 paymon23 Marto Nieto G. m.emeksiz OCMichael syam3526 jgmaldo

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

All the forum supporters who tirelessly helped users out for the last week. All the writers of all languages who worked tirelessly on the KB for the last week. We salute you! Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

LATEST ONE: 19th of October – you can read the notes here and see the video at AirMozilla.
NEXT ONE: happening on the 26th of October!

If you want to add a discussion topic to the upcoming meeting agenda: start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting). Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda). If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

The Firefox Release Report for version 49 (including your input) has been published recently! Thanks to everyone who contributed to the document (and to the whole release). You make things happen :-)

Platform

Check the notes from the last meeting in this document (and don’t forget about our meeting recordings). You can also follow the migration through our public Trello board. Highlights from the last few days include a walkthrough of the basic style and layout of the site (watch the video for more details) and locking down a list of locales for the first wave of migration (more details in the L10n section). There was a test migration performed by the Lithium team… More details as we get them! Giorgos (our glorious technical admin for the duration of the migration) has also proposed creating a snapshot of Kitsune’s state as an archive version of the site. More details to follow as we get them, but now you can be sure that none of the previously invested work is going to disappear from the web. Take a look at the first iteration of the upcoming post-migration community page – you can find the background and provide feedback for this here. For the front page, take a look here (background and feedback are here) – huge thanks to Joni for working on these two!

Don’t forget about the main migration thread, with the list of areas that can benefit from your input: roles, gamification (ranks & badges), metrics and measurement, design ideas (1st wave), forum moderation, and shutting down Kitsune. If you are interested in test-driving the new platform now, please contact Madalina. IMPORTANT: the whole place is a work in progress, and a ton of the final content, assets, and configurations (e.g. layout pieces) are missing. QUESTIONS? CONCERNS? Use the migration thread to put questions/comments about it for everyone to share and discuss.

Social

Sierra from the Social team joined us recently for a meeting – you can reach out to her anytime!
Want to join us? Please email Rachel and/or Madalina to get started supporting Mozilla’s product users on Facebook and Twitter. We need your help! Use the step-by-step guide here. Take a look at some useful videos: Getting started & replying to users [...]

Cameron Kaiser: We need more desktop processor branches

Thu, 20 Oct 2016 20:49:00 +0000

Ars Technica is reporting an interesting attack that uses a side-channel exploit in the Intel Haswell branch translation buffer, or BTB (kindly ignore all the political crap Ars has been posting lately; I'll probably not read any more articles of theirs until after the election). The idea is to break through ASLR, or address space layout randomization, to find pieces of code one can string together or directly attack for nefarious purposes. ASLR defeats a certain class of attacks that rely on the exact address of code in memory. With ASLR, an attacker can no longer count on code being in a constant location. Intel processors since at least the Pentium use a relatively simple BTB to aid these computations when finding the target of a branch instruction. The buffer is essentially a dictionary with virtual addresses of recent branch instructions mapping to their predicted target: if the branch is taken, the chip has the new actual address right away, and time is saved. To save space and complexity, most processors that implement a BTB only do so for part of the address (or they hash the address), which reduces the overhead of maintaining the BTB but also means some addresses will map to the same index into the BTB and cause a collision. If the addresses collide, the processor will recover, but it will take more cycles to do so. This is the key to the side-channel attack. (For the record, the G3 and the G4 use a BTIC instead, or a branch target instruction cache, where the table actually keeps two of the target instructions so it can be executing them while the rest of the branch target loads. The G4/7450 ("G4e") extends the BTIC to four instructions. This scheme is highly beneficial because these cached instructions essentially extend the processor's general purpose caches with needed instructions that are less likely to be evicted, but is more complex to manage. It is probably for this reason the BTIC was dropped in the G5 since the idea doesn't work well with the G5's instruction dispatch groups; the G5 uses a three-level hybrid predictor which is unlike either of these schemes. Most PowerPC implementations also have a return address stack for optimizing the blr instruction. With all of these unusual features Power ISA processors may be vulnerable to a similar timing attack but certainly not in the same way and probably not as predictably, especially on the G5 and later designs.) To get around ASLR, an attacker needs to find out where the code block of interest actually got moved to in memory. Certain attributes make kernel ASLR (KASLR) an easier nut to crack. For performance reasons usually only part of the kernel address is randomized, in open-source operating systems this randomization scheme is often known, and the kernel is always loaded fully into physical memory and doesn't get swapped out. While the location it is loaded to is also randomized, the kernel is mapped into the address space of all processes, so if you can find its address in any process you've also found it in every process. Haswell makes this even easier because all of the bits the Linux kernel randomizes are covered by the low 30 bits of the virtual address Haswell uses in the BTB index, which covers the entire kernel address range and means any kernel branch address can be determined exactly. 
The attacker finds branch instructions in the kernel code that service a particular system call (for example by disassembling it), computes all the possible locations that branch could be at (feasible due to the smaller search space), creates a "spy" function with a branch instruction positioned to force a BTB collision by mapping to the same BTB index, executes the system call, and then executes the spy function. If the spy process (which times itself) determines its branch took longer than an average br[...]

Air Mozilla: Connected Devices Weekly Program Update, 20 Oct 2016

Thu, 20 Oct 2016 17:30:00 +0000

(image) Weekly project updates from the Mozilla Connected Devices team.

Air Mozilla: Reps Weekly Meeting Oct. 20, 2016

Thu, 20 Oct 2016 16:00:00 +0000

(image) This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Reps Community: Rep of the Month – September 2016

Thu, 20 Oct 2016 10:59:58 +0000

Please join us in congratulating Mijanur Rahman Rayhan, Rep of the Month for September 2016!

Mijanur is a Mozilla Rep and Tech Speaker from Sylhet, Bangladesh. With his diverse knowledge he organized hackathons around Connected Devices and held a Web Compatibility event to find differences in different browsers.


Mijanur has proved himself a very active Mozillian through his varied activities and his work with different communities. With his patience and consistency in pursuing his goals, he is always ready and prepared. He showed his commitment to the Reps program and his proactive spirit in the last elections by running as a nominee for the Cohort position on the Reps Council.

Be sure to follow his activities as he continues the Activate series with a Rust workshop, Dive Into Rust events, Firefox Test Pilot MozCoffees, a Web Compatibility Sprint, and a Privacy and Security seminar with the Bangladesh Police!

Gervase Markham: No Default Passwords

Thu, 20 Oct 2016 10:06:32 +0000

One of the big problems with IoT devices is default passwords – here’s the list coded into the malware that attacked Brian Krebs. But without a default password, you have to make each device unique and then give the randomly-generated password to the user, perhaps by putting it on a sticky label. Again, my IoT vision post suggests a better solution. If the device’s public key and a password are in an RFID tag on it, and you just swipe that over your hub, the hub can find and connect securely to the device over SSL, and then authenticate itself to the device (using the password) as the user’s real hub, with zero configuration on the part of the user. And all of this works without the need for any UI or printed label which needs to be localized. Better usability, better security, better for the internet.


Gervase Markham: Someone Thought This Was A Good Idea

Thu, 20 Oct 2016 09:55:39 +0000

You know that problem where you want to label a coffee pot, but you just don’t have the right label? Technology to the rescue!


Of course, new technology does come with some disadvantages compared to the old, as well as its many advantages:


And pinch-to-zoom on the picture viewer (because that’s what it uses) does mean you can play some slightly mean tricks on people looking for their caffeine fix:


And how do you define what label the tablet displays? Easy:


Seriously, can any reader give me one single advantage this system has over a paper label?


Laura de Reynal: Understanding what is possible

Thu, 20 Oct 2016 08:32:45 +0000


Filed under: Methods, Mozilla, Research (image)

Daniel Pocock: Choosing smartcards, readers and hardware for the Outreachy project

Thu, 20 Oct 2016 07:25:44 +0000

One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image. Interns, and anybody who decides to start using the project (it is already functional for command line users), need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator. If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice. For X.509 use cases, such as VPN access, there are a range of choices. I recently obtained one of the SmartCard HSM cards; Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors to consider are most easily explained by comparing keys kept on disk, a smartcard reader without a PIN-pad, and a smartcard reader with a PIN-pad:

Software: free/open for on-disk keys; mostly free/open for readers, with proprietary firmware in the reader.
Key extraction: possible for on-disk keys; not generally possible with a smart card.
Passphrase compromise attack vectors: hardware or software keyloggers, phishing and user error (unsophisticated attackers) when the passphrase is typed on the host; exploiting firmware bugs over USB (only sophisticated attackers) when a PIN-pad is used.
Other factors: on-disk keys need no hardware; readers without a PIN-pad come in a small, USB key form-factor; readers with a PIN-pad have the largest form factor.

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:

Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed.
Even better if there is no wired networking either.
Particularly concerned users may also want to avoid devices with opaque micro-code/firmware.
Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering.
No hard disks required.
Having built-in SD card readers or the ability to add them easily.

SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept. It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time. For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki. [...]

Mozilla Open Design Blog: Nearly there

Thu, 20 Oct 2016 05:55:22 +0000

We’ve spent the past two weeks asking people around the world to think about our four refined design directions for the Mozilla brand identity. The results are in and the data may surprise you. If you’re just joining this process, you can get oriented here and here. Our objective is to refresh our Mozilla logo and related visual assets that support our mission and make it easier for people who don’t know us to get to know us. A reminder of the factors we’re taking into account in this phase. Data is our friend, but it is only one of several aspects to consider. In addition to the three quantitative surveys—of Mozillians, developers, and our target consumer audience—qualitative and strategic factors play an equal role. These include comments on this blog, constructive conversations with Mozillians, our 5-year strategic plan for Mozilla, and principles of good brand design. Here is what we showed, along with a motion study, for each direction:   We asked survey respondents to rate these design directions against seven brand attributes. Five of them—Innovative, Activist, Trustworthy, Inclusive/Welcoming, Opinionated—are qualities we’d like Mozilla to be known for in the future. The other two—Unique, Appealing—are qualities required for any new brand identity to be successful. Mozillians and developers meld minds. Members of our Mozilla community and the developers surveyed through MDN (the Mozilla Developer Network) overwhelmingly ranked Protocol 2.0 as the best match to our brand attributes. For over 700 developers and 450 Mozillians, Protocol scored highest across 6 of 7 measures. People with a solid understanding of Mozilla feel that a design embedded with the language of the internet reinforces our history and legacy as an Internet pioneer. The link’s role in connecting people to online know-how, opportunity and knowledge is worth preserving and fighting for. But consumers think differently. We surveyed people making up our target audience, 400 each in the U.S., U.K., Germany, France, India, Brazil, and Mexico. They are 18- to 34-year-old active citizens who make brand choices based on values, are more tech-savvy than average, and do first-hand research before making decisions (among other factors). We asked them first to rank order the brand attributes most important for a non-profit organization “focused on empowering people and building technology products to keep the internet healthy, open and accessible for everyone.” They selected Trustworthy and Welcoming as their top attributes. And then we also asked them to evaluate each of the four brand identity design systems against each of the seven brand attributes. For this audience, the design system that best fit these attributes was Burst. Why would this consumer audience choose Burst? Since this wasn’t a qualitative survey, we don’t know for sure, but we surmise that the colorful design, rounded forms, and suggestion of interconnectedness felt appropriate for an unfamiliar nonprofit. It looks like a logo. Also of note, Burst’s strategic narrative focused on what an open, healthy Internet feels and acts like, while the strategic narratives for the other design systems led with Mozilla’s role in world. This is a signal that our targeted consumer audience, while they might not be familiar with Mozilla, may share our vision of what the Internet could and should be. Why didn’t they rank Protocol more highly across the chosen attributes? 
We can make an educated guess that these consumers found it one dimensional by comparison, and they may have missed the meaning of the[...]

The Rust Programming Language Blog: Announcing Rust 1.12.1

Thu, 20 Oct 2016 00:00:00 +0000

The Rust team is happy to announce the latest version of Rust, 1.12.1. Rust is a systems programming language with a focus on reliability, performance, and concurrency. As always, you can install Rust 1.12.1 from the appropriate page on our website, or install via rustup with rustup update stable.

What’s in 1.12.1 stable

Wait… one-point-twelve-point… one? In the release announcement for 1.12 a few weeks ago, we said: “The release of 1.12 might be one of the most significant Rust releases since 1.0.” It was true. One of the biggest changes was turning on a large compiler refactoring, MIR, which re-architects the internals of the compiler. The overall process went like this:

Initial MIR support landed in nightlies back in Rust 1.6.
While work was being done, a flag, --enable-orbit, was added so that people working on the compiler could try it out.
Back in October, we would always attempt to build MIR, even though it was not being used.
A flag was added, -Z orbit, to allow users on nightly to try and use MIR rather than the traditional compilation step (‘trans’).
After substantial testing over months and months, for Rust 1.12, we enabled MIR by default.
In Rust 1.13, MIR will be the only option.

A change of this magnitude is huge, and important. So it’s also important to do it right, and do it carefully. This is why this process took so long; we regularly tested the compiler against every crate on crates.io, we asked people to try out -Z orbit on their private code, and after six weeks of beta, no significant problems appeared. So we made the decision to keep it on by default in 1.12. But large changes still have an element of risk, even though we tried to reduce that risk as much as possible. And so, after release, 1.12 saw a fair number of regressions that we hadn’t detected in our testing. Not all of them are directly MIR related, but when you change the compiler internals so much, it’s bound to ripple outward through everything.

Why make a point release?

Now, given that we have a six-week release cycle, and we’re halfway towards Rust 1.13, you may wonder why we’re choosing to cut a patch version of Rust 1.12 rather than telling users to just wait for the next release. We have previously said something like “point releases should only happen in extreme situations, such as a security vulnerability in the standard library.” The Rust team cares deeply about the stability of Rust, and about our users’ experience with it. We could have told you all to wait, but we want you to know how seriously we take this stuff. We think it’s worth it to demonstrate our commitment to you by putting in the work of making a point release in this situation.

Furthermore, given that this is not security related, it’s a good time to practice actually cutting a point release. We’ve never done it before, and the release process is semi-automated but still not completely so. Having a point release in the world will also shake out any bugs in dealing with point releases in other tooling as well, like rustup. Making sure that this all goes smoothly and getting some practice going through the motions will be useful if we ever need to cut some sort of emergency point release due to a security advisory or anything else. This is the first Rust point release since Rust 0.3.1, all the way back in 2012, and marks 72 weeks since Rust 1.0, when we established our six week release cadence along with a commitment to aggressive stability guarantees. While we’re disappointed that 1.12 had these regressions, we’re rea[...]

Support.Mozilla.Org: Firefox 49 Support Release Report

Wed, 19 Oct 2016 23:13:29 +0000

This report aims to capture and explain what happened during and after the launch of Firefox 49 on multiple support fronts: Knowledge Base and localization, 1:1 social and forum support, trending issues and reported bugs, as well as to celebrate and recognize the tremendous work the SUMO community is putting in to make sure our users experience a happy release. We have lots of ways to contribute, from Support to Social to PR; the ways you can help shape our communications program and tell the world about Mozilla are endless. For more information: []

Knowledge Base and Localization

Article ratings, global views and comments from dissatisfied users (“helpful” votes are English/US only):

Desktop (Sept. 20 – Oct. 12): 76-80% helpful, 93,871 views – “No explanation of why it was removed.”
Desktop (Sept. 20 – Oct. 12): 61-76% helpful, 8,625 views – no comments
Desktop (Sept. 20 – Oct. 12): 36-71% helpful, 11,756 views – “Didn’t address Firefox not playing YouTube tutorials”
Desktop (Sept. 20 – Oct. 12): 70-75% helpful, 5,147 views – “Please continue to support Firefox for Pentium III. It is not that hard to do.” “What about those who can’t afford to upgrade their processors?”
Android (Sept. 20 – Oct. 12): 68% helpful, 292 views – no comments

Localization coverage (top 10 locales / top 20 locales):

Desktop (Sept. 20 – Oct. 12): 100% / 86%, 100% / 81%, 100% / 81%, 100% / 81%
Android (Sept. 20 – Oct. 12): 100% / 71%

Support Forum Threads

Great teamwork between some top contributors:

Firefox 49 won’t start after installation – Both Noah and Philipp narrowed down the cause of Firefox not starting, the build where the issue with Kaspersky started, and filed bug 1305436 below.
Every Bookmark Displayed – Solved.
Missing mozglue.dll when launching Firefox – Updating AVG to get rid of the missing mozglue.dll error.
Firefox crashes – FredMcD did a great job restoring bookmarks.
How to remove “Recently bookmarked” – 377 views.

Please also keep in mind for Firefox updates that are not legit: Example of fake update for Firefox

Bugs Created from Forum threads – SUMO Community

[Bug 1305436] Firefox 49 won’t start after installation
[Bug 1304848] Users report Firefox is no longer launching after the 49 update with a mozglue.dll missing error instead (contributed to)
[Bug 1304360] Firefox 49 showing graphics artifacts with HWA enabled

Army Of Awesome (by Stefan Costen - Costenslayer)

My thanks goes out to all contributors for their help in supporting everyone, from crashes (which can be difficult and annoying) to people thanking us. All of your hard work has been noticed and is much appreciated. Thanks also to Amit Roy (twitter: amitroy2779) for helping users every day.

Social Support Highlights (brought to you by Sprinklr)

Total active contributors in program: ~16

Top 12 contributors (name – engagements): Noah 103, Magdno 69, Daniela 28, Andrew 25, Geraldo 10, Cynthia 10, Marcelo 4, Jhonatas 2, Thiago 2, Joa Paulo 1

Number of Replies:

Trending issues

Inbound, what people[...]

Air Mozilla: Singularity University

Wed, 19 Oct 2016 22:11:49 +0000

(image) Mozilla Executive Chair Mitchell Baker's address at Singularity University's 2016 Closing Ceremony.

Air Mozilla: IEEE Global Connect

Wed, 19 Oct 2016 22:10:00 +0000

(image) Mozilla Executive Chair Mitchell Baker's address at IEEE Global Connect

Eric Shepherd: Finding stuff: My favorite Firefox search keywords

Wed, 19 Oct 2016 21:33:42 +0000

One of the most underappreciated features of Firefox’s URL bar and its bookmark system is its support for custom keyword searches. These let you create special bookmarks so that you can type a keyword followed by other text, have that text inserted into a URL identified uniquely by the keyword, and then have that URL loaded. This lets you type, for example, “quote aapl” to get a stock quote on Apple Inc. You can check out the article I linked to previously (and here, as well, for good measure) for details on how to actually create and use keyword searches. I’m not going to go into details on that here. What I am going to do is share a few keyword searches I’ve configured that I find incredibly useful as a programmer and as a writer on MDN.

For web development

Here are the search keywords I use the most as a web developer:

if – Opens an API reference page on MDN given an interface name.
elem – Opens an HTML element’s reference page on MDN.
css – Opens a CSS reference page on MDN.
fx – Opens the release notes for a given version of Firefox, given its version number.
mdn – Searches MDN for the given term(s) using the default filters, which generally limit the search to include only pages most useful to Web developers.
mdnall – Searches MDN for the given term(s) with no filters in place.

For documentation work

When I’m writing docs, I actually use the above keywords a lot, too. But I have a few more that I get a lot of use out of:

bug – Opens the specified bug in Mozilla’s Bugzilla instance, given a bug number.
bs – Searches Bugzilla for the specified term(s).
dxr – Searches the Mozilla source code on DXR for the given term(s).
file – Looks for files whose name contains the specified text in the Mozilla source tree on DXR.
ident – Looks for definitions of the specified identifier (such as a method or class name) in the Mozilla code on DXR.
func – Searches for the definition of function(s)/method(s) with the specified name, using DXR.
t – Opens the specified MDN KumaScript macro page, given the template/macro name.
wikimo – Searches for the specified term(s).

Obviously, DXR is a font of fantastic information, and I suggest clicking the “Operators” button at the right end of the search bar there to see a list of the available filters; building search keywords for many of these filters can make your life vastly easier, depending on your specific needs and work habits![...]
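
As a concrete illustration (my own example, not one from the post), a keyword search is just a bookmark whose URL contains a %s placeholder and which has a keyword assigned to it; the MDN search URL here is an assumption for illustration:

    Name:    Search MDN
    URL:     https://developer.mozilla.org/en-US/search?q=%s
    Keyword: mdn

Typing “mdn referrer policy” in the URL bar then loads that URL with %s replaced by everything after the keyword.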

Air Mozilla: MozFest Volunteer Health & Safety Briefing

Wed, 19 Oct 2016 18:30:00 +0000

(image) Excerpt from 2016 MozFest Volunteer Briefing on 19th October for Health and Safety

Air Mozilla: Getting Started with Mozilla Maker Party 2016

Wed, 19 Oct 2016 18:21:14 +0000

(image) Getting Started with Mozilla Maker Party 2016

Air Mozilla: MozFest Volunteer Meetup - October 19, 2016

Wed, 19 Oct 2016 18:00:00 +0000

(image) Meetup for 2016 MozFest Volunteers

Air Mozilla: The Joy of Coding - Episode 76

Wed, 19 Oct 2016 17:00:00 +0000

(image) mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla: Weekly SUMO Community Meeting Oct. 19, 2016

Wed, 19 Oct 2016 16:00:00 +0000

(image) This is the sumo weekly call

Yunier José Sosa Vázquez: New Firefox version arrives with improvements to video playback and much more

Tue, 18 Oct 2016 22:24:15 +0000

Last Tuesday, September 19, Mozilla released a new version of its browser, and we immediately shared its new features and its download with you. We apologize to everyone for any inconvenience this may have caused.

What's new

The password manager has been updated to allow HTTPS pages to use stored HTTP credentials. This is one more way to support Let's Encrypt and to help users transition to a more secure web.

Reader Mode has gained several features that improve reading and listening, adding controls to adjust the text width and line spacing, and narration, where the browser reads the page content aloud; these are features that will undoubtedly improve the experience for people with visual impairments. Reader Mode now includes additional controls and read-aloud.

The HTML5 audio and video player now allows playing files at different speeds (0.5x, Normal, 1.25x, 1.5x, 2x) and looping them indefinitely. Along the same lines, video playback performance was improved for users on systems that support SSSE3 instructions without hardware acceleration.

Firefox Hello, the video-call and chat communication system, has been removed due to low usage. Nonetheless, Mozilla will keep developing and improving WebRTC.

End of support for OS X 10.6, 10.7 and 10.8, and for Windows systems running SSE processors.

For developers

Added the Cause column to the Network Monitor, showing what caused each network request. Introduced the web speech synthesis API.

For Android

Added an offline page view mode, so you can view some pages even without Internet access. Added a tour of fundamental features such as Reader Mode and Sync to the First Run page. Introduced the Spanish (Chile) (es-CL) and Norwegian (nn-NO) localizations. The look and behavior of tabs has been updated: old tabs are now hidden when the restore tabs option is set to "Always restore"; the scroll position and zoom level of open tabs are remembered; media controls have been updated to avoid sound coming from multiple tabs at the same time; and there are visual improvements to how favicons are displayed.

Other changes

Improvements to the about:memory page to report memory used for fonts. Re-enabled the default for font shaping via Graphite2. Improved performance on Windows and OS X systems without hardware acceleration. Several security fixes.

If you prefer to see the full list of changes, you can head over to the release notes (in English). You can get this version from our Downloads area in Spanish and English for Android, Linux, Mac and Windows. If you liked it, please share this news with your friends on social networks. Feel free to leave us a comment.[...]

Gervase Markham: Security Updates Not Needed

Tue, 18 Oct 2016 21:55:56 +0000

As Brian Krebs is discovering, a large number of internet-connected devices with bad security can really ruin your day. Therefore, a lot of energy is being spent thinking about how to solve the security problems of the Internet of Things. Most of it is focussed on how we can make sure that these devices get regular security updates, and how to align the incentives to achieve that. And it’s difficult, because cheap IoT devices are cheap, and manufacturers make more money building the next thing than fixing the previous one.

Perhaps, instead, of trying to make water flow uphill, we should be taking a different approach. How can we design these devices such that they don’t need any security updates for their lifetime?

One option would be to make them perfect first time. Yeah, right.

Another option would be the one from my blog post, An IoT Vision. In that post, I outlined a world where IoT devices’ access to the Internet is always mediated through a hub. This has several advantages, including the ability to inspect all traffic and the ability to write open source drivers to control the hardware. But one additional outworking of this design decision is that the devices are not Internet-addressable, and cannot send packets directly to the Internet on their own account. If that’s so, it’s much harder to compromise them and much harder to do anything evil with them if you do. At least, evil things affecting the rest of the net. And if that’s not sufficient, the hub itself can be patched to forbid patterns of access necessary for attacks.

Can we fix IoT security not by making devices secure, but by hiding them from attacks?


Gervase Markham: WoSign and StartCom

Tue, 18 Oct 2016 21:37:35 +0000

One of my roles at Mozilla is that I’m part of the Root Program team, which manages the list of trusted Certificate Authorities (CAs) in Firefox and Thunderbird. And, because we run our program in an open and transparent manner, other entities often adopt our trusted list. In that connection, I’ve recently been the lead investigator into the activities of a Certificate Authority (CA) called WoSign, and a connected CA called StartCom, who have been acting in ways contrary to those expected of a trusted CA. The whole experience has been really interesting, but I’ve not seen a good moment to blog about it. Now that a decision has been taken on how to move forward, it seems like a good time. The story started in late August, when Google notified Mozilla about some issues with how WoSign was conducting its operations, including various forms of what seemed to be certificate misissuance. We wrote up the three most serious of those for public discussion. WoSign issued a response to that document. Further issues were pointed out in discussion, and via the private investigations of various people. That led to a longer, curated issues list and much more public discussion. WoSign, in turn produced a more comprehensive response document, and a “final statement” later. One or two of the issues on the list turned out to be not their fault, a few more were minor, but several were major – and their attempts to explain them often only led to more issues, or to a clearer understanding of quite how wrong things had gone. On at least one particular issue, the question of whether they were deliberately back-dating certificates using an obsolete cryptographic algorithm (called “SHA-1”) to get around browser blocks on it, we were pretty sure that WoSign was lying. Around that time, we privately discovered a couple of certificates which had been mis-issued by the CA StartCom but with WoSign fingerprints all over the “style”. Up to this point, the focus has been on WoSign, and StartCom was only involved because WoSign bought them and didn’t disclose it as they should have done. I started putting together the narrative. The result of those further investigations was a 13-page report which conclusively proved that WoSign had been intentionally back-dating certificates to avoid browser-based restrictions on SHA-1 cert issuance. If you can write an enthralling page-turner about f**king certificate authorities doing scuzzy nerd sh*t, damn, I couldn't pull that off. — SwiftOnSecurity (@SwiftOnSecurity) September 28, 2016 The report proposed a course of action including a year’s dis-trust for both CAs. At that point, Qihoo 360 (the Chinese megacorporation which is the parent of WoSign and StartCom) requested a meeting with Mozilla, which was held in Mozilla’s London office, and attended by two representatives of Qihoo, and one each from StartCom and WoSign. At that meeting, WoSign’s CEO admitted to intentionally back-dating SHA-1 certificates, as our investigation had discovered. The representatives of Qihoo 360 wanted to know whether it would be possible to disentangle StartCom from WoSign and then treat it separately. Mozilla representatives gave advice on the route which might most likely achieve this, but said tha[...]

Christian Heilmann: Decoded Chats – first edition live on the Decoded Blog

Tue, 18 Oct 2016 17:02:29 +0000

Over the last few weeks I was busy recording interviews with different exciting people of the web. Now I am happy to announce that the first edition of Decoded Chats is live on the new Decoded Blog.


In this first edition, I’m interviewing Rob Conery about his “Imposter Handbook“. We cover the issues of teaching development, how to deal with a constantly changing work environment and how to tackle diversity and integration.

We’ve got eight more interviews ready and more lined up. Amongst the people I talked to are Sarah Drasner, Monica Dinculescu, Ada-Rose Edwards, Una Kravets and Chris Wilson. The format of Decoded Chats is pretty open: interviews ranging from 15 minutes to 50 minutes about current topics on the web, trends and ideas with the people who came up with them.

Some are recorded in a studio (when I am in Seattle), others are Skype calls and yet others are off-the-cuff recordings at conferences.

Do you know anyone you’d like me to interview? Drop me a line on Twitter @codepo8 and I’ll see what I can do :)


Aki Sasaki: scriptworker 0.8.1 and 0.7.1

Tue, 18 Oct 2016 16:47:15 +0000

Tl;dr: I just shipped scriptworker 0.8.1 (changelog) (github) (pypi) and scriptworker 0.7.1 (changelog) (github) (pypi)
These are patch releases, and are currently the only versions of scriptworker that work.

scriptworker 0.8.1

The json, embedded in the Azure XML, now contains a new property, hintId. Ideally this wouldn't have broken anything, but I was using that json dict as kwargs, rather than explicitly passing taskId and runId. This means that older versions of scriptworker no longer successfully poll for tasks.

This is now fixed in scriptworker 0.8.1.
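
A minimal sketch of that failure mode, using made-up function and message contents rather than scriptworker's actual code: when the decoded JSON dict is splatted into a handler as **kwargs, any field the handler doesn't know about (such as the new hintId) raises a TypeError, whereas passing taskId and runId explicitly keeps working when new fields appear.

    import json

    def claim_task(taskId, runId):
        # Stand-in for an older handler that only knows about these two fields.
        return "claiming task %s, run %s" % (taskId, runId)

    message = json.loads('{"taskId": "abc123", "runId": 0, "hintId": "hint456"}')

    # Fragile: splatting the whole dict breaks as soon as the service adds a field.
    try:
        claim_task(**message)
    except TypeError as error:
        print("broken:", error)  # unexpected keyword argument 'hintId'

    # Robust: pass only the fields the handler actually needs.
    print(claim_task(message["taskId"], message["runId"]))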

scriptworker 0.7.1

Scriptworker 0.8.0 made some non-backwards-compatible changes to its config format, and there may be more such changes in the near future. To simplify things for other people working on scriptworker, I suggested they stay on 0.7.0 for the time being if they wanted to avoid the churn.

To allow for this, I created a 0.7.x branch and released 0.7.1 off of it. Currently, 0.8.1 and 0.7.1 are the only two versions of scriptworker that will successfully poll Azure for tasks.

(image) comments

Mike Ratcliffe: Running ESLint in Atom for Mozilla Development

Tue, 18 Oct 2016 15:54:59 +0000

Due to some recent changes in the way that we use eslint to check our coding style, linting Mozilla source code in Atom has been broken for a month or two.

I have recently spent some time working on Atom's linter-eslint plugin making it possible to bring all of that linting goodness back to life!

From the root of the project type:

./mach eslint --setup

Install the linter-eslint package v.8.00 or above. Then go to the package settings and enable the following options:


Once done, you should see errors and warnings as shown in the screenshot below:


Air Mozilla: MozFest 2016 Brown Bag

Tue, 18 Oct 2016 15:00:00 +0000

(image) MozFest 2016 Brown Bag - October 18th, 2016 - 16:00 London

Mozilla Security Blog: Phasing Out SHA-1 on the Public Web

Tue, 18 Oct 2016 14:40:30 +0000

An algorithm we’ve depended on for most of the life of the Internet — SHA-1 — is aging, due to both mathematical and technological advances. Digital signatures incorporating the SHA-1 algorithm may soon be forgeable by sufficiently-motivated and resourceful entities.

Via our and others’ work in the CA/Browser Forum, following our deprecation plan announced last year and per recommendations by NIST, issuance of SHA-1 certificates mostly halted for the web last January, with new certificates moving to more secure algorithms. Since May 2016, the use of SHA-1 on the web fell from 3.5% to 0.8% as measured by Firefox Telemetry.

In early 2017, Firefox will show an overridable “Untrusted Connection” error whenever a SHA-1 certificate is encountered that chains up to a root certificate included in Mozilla’s CA Certificate Program. SHA-1 certificates that chain up to a manually-imported root certificate, as specified by the user, will continue to be supported by default; this will continue allowing certain enterprise root use cases, though we strongly encourage everyone to migrate away from SHA-1 as quickly as possible.

This policy has been included as an option in Firefox 51, and we plan to gradually ramp up its usage.  Firefox 51 is currently in Developer Edition, and is currently scheduled for release in January 2017. We intend to enable this deprecation of SHA-1 SSL certificates for a subset of Beta users during the beta phase for 51 (beginning November 7) to evaluate the impact of the policy on real-world usage. As we gain confidence, we’ll increase the number of participating Beta users. Once Firefox 51 is released in January, we plan to proceed the same way, starting with a subset of users and eventually disabling support for SHA-1 certificates from publicly-trusted certificate authorities in early 2017.

Questions about SHA-1 based certificates should be directed to the forum.

Christian Heilmann: crossfit

Tue, 18 Oct 2016 11:43:34 +0000

Also on Medium, in case you want to comment. When I first heard about Crossfit, I thought it to be an excellent idea. I still do, to be fair: Short, very focused and intense workouts instead of time consuming exercise schedules No need for expensive and complex equipment; it is basically running and lifting heavy things A lot of the workouts use your own body weight instead of extra equipment A strong focus on good nutrition. Remove the stuff that is fattening and concentrate on what’s good for you In essence, it sounded like the counterpoint to overly complex and expensive workouts we did before. You didn’t need expensive equipment. Some bars, ropes and tyres will do. There was also no need for a personal trainer, tailor-made outfits and queuing up for machines to be ready for you at the gym. Fast forward a few years and you’ll see that we made Crossfit almost a running joke. You have overly loud Crossfit bros crashing weights in the gym, grunting and shouting and telling each other to “feel the burn” and “when you haven’t thrown up you haven’t worked out hard enough”. You have all kind of products branded Crossfit and even special food to aid your Crossfit workouts. Thanks, commercialism and marketing. You made something simple and easy annoying and elitist again. There was no need for that. One thing about Crossfit is that it can be dangerous. Without good supervision by friends it is pretty easy to seriously injure yourself. It is about moderation, not about competition. I feel the same thing happened to JavaScript and it annoys me. JavaScript used to be an add-on to what we did on the web. It gave extra functionality and made it easier for our end users to finish the tasks they came for. It was a language to learn, not a lifestyle to subscribe to. Nowadays JavaScript is everything. Client side use is only a small part of it. We use it to power servers, run tasks, define build processes and create fat client software. And everybody has an opinionated way to use it and is quick to tell others off for “not being professional” if they don’t subscribe to it. The brogrammer way of life rears its ugly head. Let’s think of JavaScript like Crossfit was meant to be. Lean, healthy exercise going back to what’s good for you: Use your body weight – on the client, if something can be done with HTML, let’s do it with HTML. When we create HTML with JavaScript, let’s create what makes sense, not lots of DIVs. Do the heavy lifting – JavaScript is great to make complex tasks easier. Use it to create simpler interfaces with fewer reloads. Change user input that was valid but not in the right format. Use task runners to automate annoying work. However, if you realise that the task is a nice to have and not a need, remove it instead. Use worker threads to do heavy computation without clobbering the main UI. Watch what you consume – keep dependencies to a minimum and make sure that what you depend on is reliable, safe to use and update-able. Run a lot – performance is the most important part.[...]

QMO: Firefox 50 Beta 7 Testday Results

Tue, 18 Oct 2016 07:00:35 +0000

Hello Mozillians!

As you may already know, last Friday – October 14th – we held a new Testday event, for Firefox 50 Beta 7.

Thank you all for helping us make Mozilla a better place – Onek Jude, Sadamu Samuel, Moin Shaikh, Suramya, ss22ever22 and Ilse Macías.

From Bangladesh: Maruf Rahman, Md.Rahimul Islam, Sayed Ibn Masud, Abdullah Al Jaber Hridoy, Zayed News, Md Arafatul Islam, Raihan Ali, Md.Majedul islam, Tariqul Islam Chowdhury, Shahrin Firdaus, Md. Nafis Fuad, Sayed Mahmud, Maruf Hasan Hridoy, Md. Almas Hossain, Anmona Mamun Monisha, Aminul Islam Alvi, Rezwana Islam Ria, Niaz Bhuiyan Asif, Nazmul Hassan, Roy Ayers, Farhadur Raja Fahim, Sauradeep Dutta, Sajedul Islam, মাহফুজা হুমায়রা মোহনা.

A big thank you goes out to all our active moderators too!


  • there were 4 verified bugs
  • all the tests performed on Flash 23 were marked as PASS, and 1 new possible issue was found on the New Awesome Bar feature that needs to be investigated.

Keep an eye on QMO for upcoming events!

Nicholas Nethercote: How to speed up the Rust compiler

Tue, 18 Oct 2016 04:06:27 +0000

Rust is a great language, and Mozilla plans to use it extensively in Firefox. However, the Rust compiler (rustc) is quite slow and compile times are a pain point for many Rust users. Recently I’ve been working on improving that. This post covers how I’ve done this, and should be of interest to anybody else who wants to help speed up the Rust compiler. Although I’ve done all this work on Linux, it should be mostly applicable to other platforms as well.

Getting the code

The first step is to get the rustc code. First, I fork the main Rust repository on GitHub. Then I make two local clones: a base clone that I won’t modify, which serves as a stable comparison point (rust0), and a second clone where I make my modifications (rust1). I use commands something like this: user=nnethercote for r in rust0 rust1 ; do cd ~/moz git clone$user/rust $r cd $r git remote add upstream git remote set-url origin$user/rust done (a cleaned-up sketch of these commands appears at the end of this post).

Building the Rust compiler

Within the two repositories, I first configure:

    ./configure --enable-optimize --enable-debuginfo

I configure with optimizations enabled because that matches release versions of rustc. And I configure with debug info enabled so that I get good information from profilers. Then I build:

    RUSTFLAGS='' make -j8

[Update: I previously had -Ccodegen-units=8 in RUSTFLAGS because it speeds up compile times. But Lars Bergstrom informed me that it can slow down the resulting program significantly. I measured and he was right — the resulting rustc was about 5–10% slower. So I’ve stopped using it now.]

That does a full build, which does the following:

  • Downloads a stage0 compiler, which will be used to build the stage1 local compiler.
  • Builds LLVM, which will become part of the local compilers.
  • Builds the stage1 compiler with the stage0 compiler.
  • Builds the stage2 compiler with the stage1 compiler.

It can be mind-bending to grok all the stages, especially with regard to how libraries work. (One notable example: the stage1 compiler uses the system allocator, but the stage2 compiler uses jemalloc.) I’ve found that the stage1 and stage2 compilers have similar performance. Therefore, I mostly measure the stage1 compiler because it’s much faster to just build the stage1 compiler, which I do with the following command:

    RUSTFLAGS='-Ccodegen-units=8' make -j8 rustc-stage1

Building the compiler takes a while, which isn’t surprising. What is more surprising is that rebuilding the compiler after a small change also takes a while. That’s because a lot of code gets recompiled after any change. There are two reasons for this.

  • Rust’s unit of compilation is the crate. Each crate can consist of multiple files. If you modify a crate, the whole crate must be rebuilt. This isn’t surprising.
  • rustc’s dependency checking is very coarse. If you modify a crate, every other crate that depends on it will also be rebuilt, no matter how trivial the modification. This surprised me greatly. For example, any modifi[...]
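
Spelled out with full repository URLs, the clone-and-build sequence looks roughly like the sketch below. The exact URLs (a personal fork at github.com/$user/rust, upstream at rust-lang/rust) and the SSH origin are assumptions rather than text taken verbatim from the post.

    # Sketch of the two-clone setup: a pristine rust0 and a hacking copy rust1.
    user=nnethercote
    mkdir -p ~/moz
    for r in rust0 rust1 ; do
        cd ~/moz
        git clone https://github.com/$user/rust.git $r
        cd $r
        # Track the main repository so upstream changes can be pulled in.
        git remote add upstream https://github.com/rust-lang/rust.git
        # (assumption) push to the fork over SSH
        git remote set-url origin git@github.com:$user/rust.git
    done

    # Inside each clone, configure and build just the stage1 compiler:
    ./configure --enable-optimize --enable-debuginfo
    RUSTFLAGS='' make -j8 rustc-stage1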

This Week In Rust: This Week in Rust 152

Tue, 18 Oct 2016 04:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Blog Posts

  • Compiling to the web with Rust and emscripten.
  • How to speed up the Rust compiler.
  • Exploring ARM inline assembly in Rust. Usefulness and pitfalls of asm!.
  • Pretty state machine patterns in Rust.
  • What makes slog fast. Tips on writing efficient Rust code.
  • Control flow for visitor callbacks. A new pattern for visitor callbacks that hits the sweet spot for both convenience and the “pay for what you use” principle.
  • How Rust do? Devlog of learning Rust by developing a small project in it.
  • Using Haskell in Rust. Follow-up to Rust in Haskell.
  • Game of Life implemented in Rust-sdl2.

News & Project Updates

  • @withoutboats joins the language design team!
  • Future updates to the rustup distribution format. Solving checksum failures and more!

Other Weeklies from Rust Community

  • This week in Rust docs 26.
  • These weeks in TiKV 2016-10-17.

Crate of the Week

This week's Crate of the Week is xargo - for effortless cross compilation of Rust programs to custom bare-metal targets like ARM Cortex-M. It recently reached version 0.2.0 and you can read the announcement here. Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available; visit the task page for more information.

  • [easy] rust: Provide a better error message when the target sysroot is not installed.
  • [less easy] servo: Implement HTMLTimeElement#dateTime.
  • [hard] rust: Optimize emscripten targets with emcc.
  • [hard] rust: Tell emscripten to remove exception handling code when the panic runtime is used.
  • [easy] imag: Iterator for Iterator> tracing (wanna learn how to implement iterators?).
  • [easy] maud: Support "while" and "while let". Maud is an HTML template engine for Rust.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

106 pull requests were merged in the last week.

  • Implement read_offset and write_offset.
  • Add ThreadId for comparing threads.
  • Cache conscious hashmap table.
  • Add method str::repeat(self, usize) -> String.
  • Add two functions to check type of given address.
  • Add Vec::dedup_by and Vec::dedup_by_key.
  • Add two functions to check type of SockAddr.
  • Add println!() macro without any arguments.
  • Stabilise ?, attributes on stmts, deprecate Reflect.
  • Error monitor should emit error to stderr instead of stdout.
  • Make the AF_NETLINK constant av[...]
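
For anyone curious how this week's crate is used in practice, the rough shape of a bare-metal cross-compile with xargo is sketched below; the rust-src prerequisite and the Cortex-M target triple are assumptions based on typical setups, not details from this issue.

    # Install the standard library sources xargo builds a sysroot from, then xargo itself.
    rustup component add rust-src
    cargo install xargo
    # xargo mirrors cargo's command-line interface.
    xargo build --target thumbv7m-none-eabi --release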

Daniel Stenberg: curl up in Nuremberg!

Mon, 17 Oct 2016 20:45:32 +0000

I’m very happy to announce that the curl project is about to run our first ever curl meeting and developers conference.

March 18-19, Nuremberg Germany

Everyone interested in curl, libcurl and related matters is invited to participate. We only ask that you register and pay the small fee, which will be used for food and more at the event.

You’ll find the full and detailed description of the event and the specific location in the curl wiki.

The agenda for the weekend is purposely kept loose to allow for flexibility and unconference-style additions of talks and topics on site. You will thus have the chance to present what you like and to influence what others present. Do tell us what you’d like to talk about or hear others talk about! The sign-up for the event isn’t open yet, as we first need to work out some more details.

We have a dedicated mailing list for discussing the meeting, called curl-meet, so please consider yourself invited to join in there as well!

Thanks a lot to SUSE for hosting!

Feel free to help us make a cool logo for the event!


(The 19th birthday of curl is suitably enough the day after, on March 20.)

Air Mozilla: Mozilla Weekly Project Meeting, 17 Oct 2016

Mon, 17 Oct 2016 18:00:00 +0000

(image) The Monday Project Meeting

Firefox Nightly: These Weeks in Firefox: Issue 3

Mon, 17 Oct 2016 15:00:14 +0000

The Firefox Desktop team met yet again last Tuesday to share updates. Here are some fresh updates that we think you might find interesting:

Highlights

  • The add-ons team is landing more WebExtension APIs, namely: identity, topSites, omniBox. If you’re interested in keeping tabs on what WebExtension work is in the pipe, check out their public Trello board.
  • jaws and mikedeboer are working on the new theming API in the Cedar twig repository. Once ready, it will merge into mozilla-central. Recently landed on Cedar is the ability to set the background image of about:home.
  • In an effort to reduce e10s tab switch spinners, billm landed a patch that lets us pause GCs to force a paint.
  • ddurst happily reports that Nightly is currently blocking all NPAPI plugins except for Flash! This is done behind a pref, which will be disabled for ESR 52, but enabled for release 52. In the next cycle, the pref will be removed and this will be hardcoded.
  • ddurst also reports that the stub installer will soon be able to install 64-bit versions of Firefox on supported systems.
  • nhnt11 has returned from his time off, and will now continue his captive portal work!

Contributor(s) of the Week

  • adamg2 helped to make the Histograms.json parser more bulletproof!
  • Ajay freed us all from some unnecessary code within Telemetry!
  • Saad fixed a bug in the AwesomeBar that would occur if a hex code was put in as an address!
  • Thauã Silveira added a new Telemetry probe so that we can get a sense of how often people abort print jobs!

Project Updates

Context Graph

  • tobyelliot reports that the Recommendation Engine Experiment has gathered over 2000 volunteer participants. Folks who are interested in the data handling practices can read up on them here.

Electrolysis (e10s)

  • mconley has landed a patch that will annotate hangs that occur in the main thread in the content process during tab switch. This should give the team important clues on what kinds of things are causing tab switch spinners.
  • mrbkap has started an etherpad to brainstorm ways in which e10s-multi might accidentally break some add-ons.

Platform UI

  • MSU students continue to improve the has landed in Nightly, and scottwu’s patch to add the picker has passed review and should land shortly!

Privacy / Security

  • A patch by florian is being reviewed that adds a “Temporarily Blocked” permission state to the permission panel.
  • past has a patch being reviewed to convert our content permission popup notifica[...]

Firefox Nightly: Better default bookmarks for Nightly

Mon, 17 Oct 2016 13:41:10 +0000

Because software defaults matter, we have just changed the default bookmarks for the Nightly channel to be more useful to power users deeply interested in the day-to-day progress of Firefox and potentially willing to help Mozilla improve their browser through bug and crash reports, shared telemetry data and technical feedback.

Users on the Nightly channel had the same bookmarks as users on the release channel. These bookmarks target end-users with limited technical knowledge and link to Mozilla sites providing end-user support and add-ons, or propose a tour of Firefox features. Not very compelling for a tech-savvy audience that installed pre-alpha software!

As of last week, new Nightly users or existing Nightly users creating a new profile have a different set of bookmarks that are more likely to match their interest in the technical side of Mozilla and in contributing to Firefox as alpha testers. Here is what the default bookmarks are:


There are links to this blog of course, to Planet Mozilla, to the Mozilla Developer Network, to the Nightly Testers Tools add-on, to about:crashes and to the IRC #nightly channel in case you find a bug and would like to talk to other Nightly users about it, and a link to Bugzilla. The Firefox tour link was also replaced by a link to the contribute page on

It’s a minor change to the profile data as we don’t want to make Nightly a different product from Firefox, but I hope it is another small step towards empowering our more technical user base to help Mozilla build the most stable and reliable browser for hundreds of millions of people!

Giorgos Logiotatidis: Systemd Unit to activate loopback devices before LVM

Mon, 17 Oct 2016 11:53:00 +0000

In a Debian server I'm using LVM to create a single logical volume from multiple different volumes. One of the volumes is a loop-back device which refers to a file in another filesystem. The loop-back device needs to be activated before the LVM service starts, or the latter will fail due to missing volumes. To do so, a special systemd unit needs to be created which will not have the default dependencies of units and will get executed before the lvm2-activation-early service.

Systemd sets a number of dependencies for all units by default to bring the system into a usable state before starting most of the units. This behavior is controlled by the DefaultDependencies flag. Leaving DefaultDependencies at its default True value creates a dependency loop which systemd will forcefully break to finish booting the system. Obviously this non-deterministic flow can result in a different execution order than desired, which in turn will fail the LVM volume activation. Setting DefaultDependencies to False disables all but essential dependencies and allows our unit to execute in time. The systemd manual confirms that we can set the option to false: "Generally, only services involved with early boot or late shutdown should set this option to false."

The second step is to execute before lvm2-activation-early. This is simply achieved by setting Before=lvm2-activation-early.

The third and last step is to set the command to execute. In my case it's /sbin/losetup /dev/loop0 /volume.img, as I want to create /dev/loop0 from the file /volume.img. Set the process type to oneshot so systemd waits for the process to exit before it starts follow-up units. Again from the systemd manual: "Behavior of oneshot is similar to simple; however, it is expected that the process has to exit before systemd starts follow-up units."

Place the unit file in /etc/systemd/system and on the next reboot the loop-back device should be available to LVM. Here's the final unit file:

    [Unit]
    Description=Activate loop device
    DefaultDependencies=no
    After=systemd-udev-settle.service
    Before=lvm2-activation-early.service
    Wants=systemd-udev-settle.service

    [Service]
    ExecStart=/sbin/losetup /dev/loop0 /volume.img
    Type=oneshot

    [Install]

See also: - Anthony's excellent LVM Loopback How-To[...]
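
A minimal sketch of installing and checking such a unit follows; the file name activate-loop.service is an assumption, not something given in the post.

    # Install the unit and reload systemd's view of unit files.
    sudo cp activate-loop.service /etc/systemd/system/
    sudo systemctl daemon-reload
    # After the next reboot, confirm that /dev/loop0 is backed by /volume.img
    # and that LVM now sees all of its physical volumes.
    losetup -a
    sudo pvs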

Firefox Nightly: DevTools now display white space text nodes in the DOM inspector

Mon, 17 Oct 2016 09:59:45 +0000

Web developers don’t write all their code in just one line of text. They use white space between their HTML elements because it makes markup more readable: spaces, returns, tabs.

In most instances, this white space seems to have no effect and no visual output, but the truth is that when a browser parses HTML it automatically generates anonymous text nodes for the characters between elements, and this includes white space (which is, after all, a type of text).

If these auto-generated text nodes are inline level, browsers will give them a non-zero width and height, and you will find strange gaps between the elements in the content, even if you haven’t set any margin or padding on nearby elements.

This behaviour can be hard to debug, but Firefox DevTools are now able to display these whitespace nodes, so you can quickly spot where the gaps in your markup come from and fix the issues.



The demo shows two examples with slightly different markup to highlight the differences both in browser rendering and in what DevTools show.

The first example has one img per line, so the markup is readable, but the browser renders gaps between the images:


The second example has all the img tags in one line, which makes the markup unreadable, but it also doesn’t have gaps in the output:

(image) (image) 

If you inspect the nodes in the first example, you’ll find a new whitespace indicator that denotes the text nodes created by the browser for the whitespace in the code. No more guessing! You can even delete a node from the inspector and see whether that removes the mysterious gaps you might have in your website.

The Servo Blog: These Weeks In Servo 81

Mon, 17 Oct 2016 00:30:00 +0000

In the last two weeks, we landed 171 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online and now includes the Q4 plans and a tentative outline of some ideas for 2017. Please check it out and provide feedback! This week’s status updates are here.

Notable Additions

  • bholley added benchmark support to mach’s ability to run unit tests
  • frewsxcv implemented the value property on