Planet Mozilla



Robert O'Callahan: Long-Term Consequences Of Spectre And Its Mitigations

Wed, 17 Jan 2018 01:20:56 +0000

The dust is settling on the initial wave of responses to Spectre and Meltdown. Meltdown was relatively simple to deal with; we can consider it fixed. Spectre is much more difficult and has far-reaching consequences for the software ecosystem. The community is treating Spectre as two different issues: "variant 1", involving code speculatively executed after a conditional branch, and "variant 2", involving code speculatively executed via an indirect branch whose predicted destination is attacker-controlled. I wish these had better names, but c'est la vie.

Spectre variant 1 mitigations

Proposals for mitigating variant 1 have emerged from WebKit, the Linux kernel, and Microsoft. The first two propose similar ideas: masking array indices so that even speculative array loads can't load out of bounds. MSVC takes a different approach, introducing LFENCE instructions to block speculative execution when the load address appears to be guarded by a range check. Unfortunately, Microsoft says:

"It is important to note that there are limits to the analysis that MSVC and compilers in general can perform when attempting to identify instances of variant 1. As such, there is no guarantee that all possible instances of variant 1 will be instrumented under /Qspectre."

This seems to be a great weakness, as developers won't know whether this mitigation is actually effective on their code. The WebKit and Linux kernel approaches have the virtue of being predictable, but at the cost of requiring manual code changes. The fundamental problem is that in C/C++ the compiler generally does not know with certainty the array length associated with an array lookup, so the masking code must be introduced manually. WebKit goes further and adds protection against speculative loads guarded by dynamic type checks, but again this must be done manually in many cases, since C/C++ have no built-in tagged union type.
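To make the masking idea concrete, here is a sketch of the branchless clamp in plain JavaScript. The real mitigations live in C++ (WebKit) and kernel C (Linux); this `maskIndex` helper is my own illustration of the arithmetic, valid only for non-negative indices and sizes below 2^31:

```javascript
// Branchless index mask, in the spirit of WebKit's and the Linux kernel's
// variant-1 mitigations. When index < size the mask is all ones and the
// index passes through unchanged; otherwise the mask is zero and the load
// is forced to element 0. Because there is no conditional branch, even a
// speculatively executed load cannot form an out-of-bounds address.
// (Illustration only: assumes 0 <= index and size < 2**31.)
function maskIndex(index, size) {
  const mask = (index - size) >> 31; // arithmetic shift: -1 if index < size, else 0
  return index & mask;
}

// Usage sketch:
//   const x = bigArray[maskIndex(i, bigArray.length)];
```

The point is that the bounds check becomes data flow rather than control flow, so there is no mispredictable branch for the CPU to speculate past.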
I think "safe" languages like Rust should generalize the idea behind WebKit's mitigations: require that speculatively executed code adhere to the memory safety constraints imposed by the type system. This would make Spectre variant 1 a lot harder to exploit. It would subsume every variant 1 mitigation I've seen so far, and could be automatic for safe code. Unsafe Rust code would need to be updated.

Having said that, there could be variant-1 attacks that don't circumvent the type system, which none of these mitigations would block. Consider a browser running JS code:

```js
let x = bigArray[iframeElem.contentWindow.someProperty];
```

Conceivably that could get compiled to some mix of JIT code and C++ that does:

```cpp
if (iframeElemOrigin == selfDocumentOrigin) {
  index = ... get someProperty ...
  x = bigArray[index];
} else {
  ... error ...
}
```

The speculatively executed code violates no type-system invariants, but could leak the value of the property across origins. This example suggests that complete protection against Spectre variant 1 will require draconian mitigations: either pervasive and expensive code instrumentation, or deep (and probably error-prone) analysis.

Spectre variant 2 mitigations

There are two approaches here. One is microcode and silicon changes to CPUs to enable flushing and/or disabling of indirect branch predictors. The other is "retpolines": replacing indirect branches with an instruction sequence that doesn't trigger the indirect branch predictor. Apparently the Linux community is advising all compilers and assembly writers to avoid all indirect branches on Intel, even in user space. This means, for example, that we should update rr's handwritten assembly to avoid indirect branches. On the other hand, Microsoft is not giving such advice and apparently is not planning to introduce retpoline support in MSVC. I don't know why this difference is occurring. Assuming the Linux community's advice is followed, things get even more complicated.
Future CPUs can be secure against variant 2 without requiring retpolines. We will want to avoid retpolines on those CPUs for performance reasons. Also, Intel's future CET control-flow-integrity hardware will not work [...]

Firefox UX: The User Journey for Firefox Extensions Discovery

Tue, 16 Jan 2018 22:58:33 +0000

The ability to customize and extend Firefox is an essential part of Firefox’s value to users. Extensions are small tools that allow developers and the users who install them to modify, customize, and extend the functionality of Firefox. For example, during our workflows research in 2016, we interviewed a participant who was a graduate student in Milwaukee, Wisconsin. While she used Safari as her primary browser for everyday browsing, she used Firefox specifically for her academic work, because the extension Zotero was the best choice for keeping track of her academic work and citations. Popular categories of extensions include ad blockers, password managers, and video downloaders.

Given the variety of extensions and the benefits of customization they offer, why is it that only 40% of Firefox users have installed at least one extension? Certainly, some portion of Firefox users may be aware of extensions but have no need or desire to install one. However, some users could find value in some extensions but simply may not be aware that extensions exist in the first place. Why not? How can Mozilla facilitate the extension discovery process?

A fundamental assumption about the extension discovery process is that users will learn about extensions through the browser, through word of mouth, or through searching to solve a specific problem. We were interested in setting aside this assumption and observing the steps participants take and the decisions they make in their journey toward possibly discovering extensions. To this end, the Firefox user research team ran two small qualitative studies to understand better how participants solved a particular problem in the browser that could be solved by installing an extension.
Our study helped us understand how participants do — or do not — discover a specific category of extension.

Our Study

Because ad blockers are easily the most popular type of extension by installation volume on Firefox (more generally, some kind of ad blocking was in use on 615 million devices worldwide in 2017), we chose them as an obvious candidate for this study. Their popularity, and many users’ perception of some advertising as invasive and distracting, made them a good fit for posing a common, solvable problem for participants to engage with. (Please do not take our choice of this category of extensions for this study as an official or unofficial endorsement or statement from Mozilla about ad blocking as a user tool.)
Some ad blocker extensions currently available on AMO.
We conducted the first study in person in our Portland, Oregon offices with five participants. To gather more data, we then conducted a remote, unmoderated study with nine participants in the US, UK, and Canada. In total, we worked with fourteen participants. These participants used Firefox as their primary web browser and were screened to make certain that they had no previous experience with extensions in Firefox.

In both iterations of the study, we asked participants to complete the following task in Firefox:

“Let’s imagine for a moment that you are tired of seeing advertising in Firefox while you are reading a news story online. You feel like ads have become too distracting and you are tired of ads following you around while you visit different sites. You want to figure out a way to make advertisements go away while you browse in Firefox. How would you go about doing that? Show us. Take as much time as you need to reach a solution that you are satisfied with.”

Participants fell into roughly two categories: those who installed an ad-blocking extension, such as Ad Block Plus or uBlock Origin, and those who did not. The participants who did not install an extension came up with a more diverse set of solutions. First, among the fourteen participants across the two studies, only six completed the task by discovering an ad-blocking extension (two of these did not install the extension for other reasons). The parti[...]

The Mozilla Blog: Mozilla Files Suit Against FCC to Protect Net Neutrality

Tue, 16 Jan 2018 20:55:46 +0000

Today, Mozilla filed a petition in federal court in Washington, DC against the Federal Communications Commission for its recent decision to overturn the 2015 Open Internet Order.

Why did we do this?

The internet is a global, public resource. It relies on the core principle of net neutrality (that all internet traffic be treated equally) to exist. If that principle is removed — with only some content and services available or with roadblocks inserted by ISPs to throttle or control certain services — the value and impact of that resource can be impaired or destroyed.

Ending net neutrality could end the internet as we know it. That’s why we are committed to fighting the order. In particular, we filed our petition today because we believe the recent FCC decision violates federal law and harms internet users and innovators. In fact, it really only benefits large Internet Service Providers.

What is next?

As we have said many times over the years, we’ll keep fighting for the open internet to ensure everyone has access to the entire internet and do everything in our power to protect net neutrality. In addition to our court challenge, we are also taking steps to ask Congress and the courts to fix the broken policies.

As a process note, the FCC decision made it clear that suits should be filed 10 days after it is published in the Federal Register, which has not yet occurred. However, federal law is more ambiguous. Due to the importance of this issue, even though we believe the filing date should be later, we filed in the event a court determines the appropriate date is today. The FCC or a court may accept this order or require us and others to refile at a later date. In fact, we’re urging them to use the later date. In either instance, we will continue to challenge the order in the courts.

What can you do?

It is imperative that all internet traffic be treated equally, without discrimination against content or type of traffic — that’s how the internet was built and what has made it one of the greatest inventions of all time.

You can help by calling your elected officials and urging them to support an open internet. Net neutrality is not a partisan or U.S.-only issue, and the decision to remove protections for net neutrality is the result of broken processes, broken politics, and broken policies. We need politicians to decide to protect users and innovation online rather than increase the power of a few large ISPs.

The post Mozilla Files Suit Against FCC to Protect Net Neutrality appeared first on The Mozilla Blog.

Mozilla GFX: WebRender newsletter #12

Tue, 16 Jan 2018 17:56:23 +0000

Hi there, your 12th WebRender newsletter has arrived. As I mentioned in the previous newsletter, the biggest theme of the last two weeks has been fixing correctness bugs and adding missing features. I keep learning new and scary things about text while Lee digs up more and more features that we need to add support for. If you are testing on Windows and WebRender makes rendering flicker horribly, worry not: Kvark has landed a fix in WebRender that will make it into Nightly soon.

Notable WebRender changes

- Glenn is making progress (1), (2) on the conversion to segmented primitives, which lets us move more content to the opaque pass and improves batching.
- Glenn removed the need to re-build the scene when a dynamic property changes. This saves a lot of CPU time during scrolling and animations.
- Glenn added support for caching render tasks across frames and display lists, and used it for box shadows. This is nice because it will let us avoid re-doing some of the same potentially expensive things (like blurs) every frame.
- Morris fixed drop-shadow rendering issues (#2243 and #2261).
- Lee fixed/implemented various text rendering features (1) (2).
- Kvark fixed edge mask decoding in the shaders.
- Martin fixed/improved the clipping logic (1) (2) (3).
- Nical added the concept of transactions to the API (this allows us to express certain constraints important for correctness and avoid redundant work).
- Ethan fixed some glitches when box-shadow parameters are close to zero.
- Glenn switched Travis off for Mac builds; we are now using TaskCluster. Travis is great, but it couldn’t sustain the load on Mac. Testing turnaround times are going to be much better now!
- Kats resurrected support for non-premultiplied images (we need it for canvas).
- Glenn ported line primitives to the brush infrastructure.
- Martin implemented a more expressive API for clipping.
- Ethan fixed a glitch when mixing edges with and without borders.
- Lee implemented support for glyph transforms for writing modes.
- Markus fixed a bug in the calculation of corner radius overlap.
- Ethan fixed a bug with inset box-shadow offsets.
- Glenn moved alpha targets from the A8 format to R8 on all platforms, which avoids slow paths on ANGLE and some drivers.

Notable Gecko changes

- Lee implemented the plumbing for synthetic italics.
- Nical updated the Gecko integration to use the transaction API and take advantage of it.
- Nical implemented a simple shared-memory recycling scheme for blob images and fonts (this improves the performance of display list building).
- Gankro fixed sub-pixel AA for some bullet display items.
- Andrew fixed a bug causing images to disappear after dragging a tab to a different window.
- Kats sped up display list building by removing duplicated work for adjacent items with equivalent clip chains.
- Ethan fixed a text rendering issue.
- Milan added the “gfx.webrender.all” pref to simplify enabling the most up-to-date WebRender prefs on all platforms.
- Sotaro fixed a memory leak and a lot of smaller things (too many for me to link here).
- Vincent and Jerry fixed a bug with video on unfocused tabs freezing.

Enabling WebRender in Firefox Nightly

In about:config, just set “gfx.webrender.all” to true (no other prefs needed anymore \o/). Note that WebRender can only be enabled in Firefox Nightly.[...]

The Mozilla Blog: Mozilla and Sundance Film Festival Present: VR the People

Tue, 16 Jan 2018 17:05:07 +0000

On Monday, January 22, Mozilla is bringing together a panel of the world’s top VR industry insiders at the Sundance Film Festival in Park City, Utah, to explain how VR storytelling is revolutionizing the film and entertainment industry. “We want the storyteller’s vision to exceed the capacity of existing technology, to push boundaries, because then the technologist is inspired to engineer new mechanisms that enable things initially thought impossible,” says Kamal Sinclair, Director of New Frontier Lab Programs at Sundance Institute. “However, this is not about creating something that appeals to people simply because of its novel technical achievements; rather it is something that has real meaning, and where that meaning can be realized by engineering the technologies to deliver the best experience possible.” Mozilla selected the Sundance Film Festival as a destination because the Sundance Institute is a nonprofit organization that supports artists in film, theatre, and new media as they create and thrive. This is a perfect fit for Mozilla because both organizations are aligned with keeping creators first, especially within the Mixed Reality (VR & AR) space. “We want to empower artists to create immersive, impactful storytelling with ease and accessible tools. It’s a part of our mission to ensure the internet is a global public resource, open and accessible to all,” said Sean White, Mozilla Senior Vice President of Emerging Technologies. The all-star lineup includes Mozilla Senior Vice President of Emerging Technologies Sean White, founder and CEO of Emblematic Group Nonny de la Peña, Reggie Watts, and immersive director Chris Milk, CEO of WITHIN. The panel will be moderated by Kamal Sinclair, Director of New Frontier Lab Programs at the Sundance Institute.
Attendees will also get to step inside an exclusive preview of an immersive volumetric demonstration from immersive journalist Nonny de la Peña, as well as a special opportunity to deliver a powerful message about women in technology. At Mozilla, our vision is an internet that truly puts people first, where individuals can shape their own experience and are empowered, safe, and independent. Our commitment to VR has been in the works for a while: Firefox, made by Mozilla, was the first browser to bring WebVR to the web, in August of 2017. This created a fast lane for developers and artists to create web-based VR experiences to browse with Firefox. Please join Mozilla as we help creators use the web to the maximum of its potential to reach the world.

Mozilla and Sundance Film Festival Present: VR the People

Technology is often viewed as a barrier to immersive storytelling. Instead, it can be used to enhance, amplify, and simplify the creative process. Join Mozilla at the Sundance Film Festival as we explore how technology can be used to push the limits of storytelling. Our stellar lineup of speakers includes: Mozilla Senior Vice President of Emerging Technologies Sean White; Founder and CEO of Emblematic Group and immersive journalist Nonny de la Peña; Reggie Watts; CEO of WITHIN and immersive director Chris Milk; and moderator Kamal Sinclair, Director of New Frontier Lab Programs at Sundance Institute.

Mon, January 22, 2018, 12:00 PM – 1:30 PM MST
Venue: Claim Jumper Court, 3rd Floor, Park City, UT 84060

About Mozilla

Mozilla is the not-for-profit behind the popular web browser Firefox. We believe the Internet is a global public resource, open and accessible to all. We work to ensure it stays open by building products, technologies and programs that put people in control of their online lives, and contribute to a healthier Internet.
About Sundance Film Festival Founded in 1981 by Robert Redford, Sundance Institute is a nonprofit organization that provides and preserves the space for artists in film, theatre, and new media to create and thrive. The Institute’s signature Labs, granting,[...]

Mozilla Open Policy & Advocacy Blog: Host an Open Internet Activist

Tue, 16 Jan 2018 17:00:38 +0000

The Ford-Mozilla Open Web Fellowship program is seeking host organizations: like-minded groups working on the front lines of internet health.

Today, we’re launching the Ford-Mozilla Open Web Fellowship call for host organizations. If your organization is devoted to a healthy internet for all users, we encourage you to apply. Now entering its fourth year, the Open Web Fellows program is part of a growing movement to build global leadership around public interest technology and open web advocacy. Backed by the Ford Foundation and Mozilla, the program partners emerging technologists (fellows) with civil society organizations (hosts) for a 10-month period of collaboration and creative work to support public awareness of online policy, to defend digital rights, and to champion internet freedom.

In the past four years, the fellowship has partnered with 21 host organizations around the globe, including the American Civil Liberties Union, Amnesty International, Derechos Digitales, European Digital Rights, the Freedom of the Press Foundation, and many more, across sectors and specialties, united by a shared goal to protect public interaction on the open web. Says the Ford Foundation’s Michael Brennan, who helps manage the fellowship program: “Technology suffuses every aspect of our lives, and the challenges and opportunities it presents for civil society and social justice are greater than ever. It’s very important that civil society organizations understand the technical landscape, and how developing technologies can serve the public interest.”

WHY TO APPLY

Organizations defending an open and accessible web are the backbone of our fellowship program. They socialize fellows to a global assortment of issues, policies, and technologies, and inspire them with meaningful work that impacts how the internet grows and supports community globally.
The fellowship supports fellows with a competitive stipend and a suite of benefits so that they can comfortably contribute to and learn from their host organizations for their 10-month fellowship tenure. In turn, the host organizations mentor their fellows to tackle challenges to the open web: digital inclusion, online privacy and security, transparent policy, and human rights advocacy. You can read more about the fellows’ interests and skills via the 2015-2017 cohorts’ bios, or check out this video series featuring their profiles and passions.

A CALL FOR HOST ORGANIZATIONS

Effective immediately, any organization is welcome to apply to be a Ford-Mozilla Open Web Fellowship host. We are seeking applications from organizations that are passionate about preserving the open web as a global resource for all. Selected organizations have a reputation for influencing public policies that impact the internet, and a strong interest in onboarding talented emerging technologists to the issues they advocate for and defend on a daily basis. The application deadline is Friday, February 16th, 2018. Have questions or concerns? Check this guide to applying, read through the program overview, or reach out by email. Apply today, and spread the word!

The post Host an Open Internet Activist appeared first on Open Policy & Advocacy.[...]

Hacks.Mozilla.Org: Using Hardware Token-based 2FA with the WebAuthn API

Tue, 16 Jan 2018 16:02:14 +0000

To provide higher security for logins, websites are deploying two-factor authentication (2FA), often using a smartphone application or text messages. Those mechanisms make phishing harder but fail to prevent it entirely: users can still be tricked into passing along codes, and SMS messages can be intercepted in various ways. Firefox 60 will ship with the WebAuthn API enabled by default, providing two-factor authentication built on public-key cryptography that is immune to phishing as we know it today. Read on for an introduction, and learn how to secure millions of users already in possession of FIDO U2F USB tokens.

Creating a new credential

Let’s start with a simple example: this requests a new credential compatible with a standard USB-connected FIDO U2F device. There are many of these compliant tokens sold under names like Yubikey, U2F Zero, and others:

```js
const cose_alg_ECDSA_w_SHA256 = -7;

/* The challenge must be produced by the server */
let challenge = new Uint8Array([21,31,105 /* 29 more random bytes generated by the server */]);

let pubKeyCredParams = [{
  type: "public-key",
  alg: cose_alg_ECDSA_w_SHA256
}];

let rp = {
  name: "Test Website"
};

let user = {
  name: "Firefox User ",
  displayName: "Firefox User",
  id: new TextEncoder("utf-8").encode("")
};

let publicKey = {challenge, pubKeyCredParams, rp, user};

navigator.credentials.create({publicKey})
  .then(decodeCredential);
```

In the case of USB U2F tokens, this will make all compatible tokens connected to the user’s system wait for user interaction. As soon as the user touches any of the devices, it generates a new credential and the Promise resolves. The user-defined function decodeCredential() will decode the response to receive a key handle: either a handle to the ECDSA key pair stored on the device, or the ECDSA key pair itself encrypted with a secret, device-specific key. The public key belonging to said pair is sent in the clear.
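The response handed to decodeCredential() contains raw ArrayBuffers (the key handle, attestation data), which cannot go into JSON directly. A minimal sketch of the base64url encoding step such a decoder typically needs — note that `bufferToBase64url` is my own illustrative helper, not part of the WebAuthn API:

```javascript
// Hypothetical helper: encode an ArrayBuffer as base64url, the encoding
// commonly used to ship WebAuthn byte fields to a server inside JSON.
function bufferToBase64url(buffer) {
  const bytes = new Uint8Array(buffer);
  let binary = "";
  for (const b of bytes) {
    binary += String.fromCharCode(b);
  }
  // btoa produces standard base64; swap the URL-unsafe characters
  // and drop the padding to get base64url.
  return btoa(binary)
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// Usage sketch, after navigator.credentials.create() resolves with `cred`:
//   sendToServer({ id: cred.id, rawId: bufferToBase64url(cred.rawId) });
```

The server then decodes these fields back to bytes before verifying the attestation.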
The key handle, the public key, and a signature must be verified by the backend using the random challenge. As a credential is cryptographically tied to the web site that requested it, this step would fail if the origins don’t match. This prevents reuse of credentials generated for other websites. The key handle and public key will from now on be associated with the current user. The WebAuthn API mandates no browser UI, which means it’s the sole responsibility of the website to signal to users that they should now connect and register a token.

Getting an assertion for an existing credential

The next time the user logs into the website, they will be required to prove possession of the second factor that created the credential in the previous section. The backend will retrieve the key handle and send it with a new challenge to the user. As allowCredentials is an array, it allows sending more than one key handle, if multiple tokens are registered with a single user account.

```js
/* The challenge must be produced by the server */
let challenge = new Uint8Array([42,42,33 /* 29 more random bytes generated by the server */]);
let key = new Uint8Array(/* … retrieve key handle … */);

let allowCredentials = [{
  type: "public-key",
  id: key,
  transports: ["usb"]
}];

let publicKey = {challenge, allowCredentials};

navigator.credentials.get({publicKey})
  .then(decodeAssertion);
```

Again, all connected USB U2F tokens will wait for user interaction. When the user touches a token, it will try to either find the stored key handle with the given ID or decrypt it with the internal secret key. On success, it will return a signature. Otherwise the authentication flow will abort and will need to be retried by the website. After decoding, the signature and the key handle that were used to sign are sent to the backend. If the public key stored with the key handle is able to verify the given signature over the provided challenge, the assertion[...]

Kim Moir: Mae Jemison as Luke Skywalker and other stories

Tue, 16 Jan 2018 12:21:52 +0000

My husband gave me the Lego Women in STEM and Lego Women of NASA sets for Christmas. He knows me well! Our son was quite fascinated by the Women of NASA kit and we assembled it together. I have a Lego X-Wing fighter from a long time ago, aka my childhood. Our son was playing with it one day, and I noticed that Mae Jemison was sitting in the pilot’s seat instead of Luke Skywalker.

Me to my son: “Huh, Mae Jemison is flying the X-Wing now”
He: “She’s an astronaut Mommy, she can fly anything.”

A few days later, I noticed she was flying a small Millennium Falcon. A very talented astronaut indeed!

Last week I spoke at CUSEC (Canadian Undergraduate Software Engineering Conference) in Montreal. I wrote about it here. After my talk, a couple of women mentioned that they really appreciated that I featured the work of women in my slides. I said to them that I did this on purpose. If you work in a field where people always reference the work of people who don’t look like you, you can begin to think you don’t belong. If you look at your company org chart and see that the people above you all look the same, but not like you, you can wonder how difficult it will be for you to advance. Underrepresented folks in tech do amazing work but often aren’t referenced as experts due to bias. You can change this by highlighting their work and making them more visible.
Slide on applying the concepts of the code to your development + speaker notes
I’ve been taking a course at work on leadership. One of the topics is being an authentic leader. As part of that course, we talked about how people cover or hide parts of their identities at work. For instance, some parents don’t talk about the childcare responsibilities they have because they don’t want to seem less dedicated to their work and be offered less challenging assignments. People who grew up in families with fewer financial resources might hide that fact from coworkers who grew up in more privileged circumstances. People who are members of communities that are disproportionately subject to police violence might not want to discuss the latest news with coworkers who don’t have a reason to fear the police. People who identify as LGBTQIA might hide their identity for fear of job discrimination or dismissal, because in some jurisdictions this is still (effectively) legal.

The course talks about ways that, as a leader, you should try to show more of your authentic self if it is safe for you to do so. This is easier to do with your peers, and more difficult with people above you in the org chart. For instance, I had a manager in the past who blocked off part of his calendar every day to pick up his kids from school. This is a signal to others that this is acceptable behaviour within the group.

So I tried to be a little more authentic in my talk. “Outside of work, I like baking and running long distances. I have an amazing family too! I put these pictures up here to show you that as a developer you can have a life outside of work. Our industry tends to glamourize long hours at work at the expense of everything else, but it doesn’t have to be that way.” It’s just a very small thing, but I wanted to let people know that you all belong here. You are welcome. And you’ll be amazing. Representation matters.[...]

Marco Zehe: NVDA and Firefox 58 – The team is regaining strength

Tue, 16 Jan 2018 10:15:59 +0000

A week before the Firefox 57 “Quantum” release in November, I published an article detailing some bits to be aware of when using Firefox and the NVDA screen reader together. In Firefox 58, due on January 23, 2018, this reliable team is regaining its strength, playing well together again and offering you good, fast web accessibility.

After the Firefox 57 release, due to many changes under the hood, NVDA and Firefox temporarily lapsed in performance. Statistics quickly showed that about two thirds of the NVDA user base stayed with us despite this. So to all of you who stuck with us on this difficult release: thank you! Many of the others moved to the extended support release of Firefox 52. Thank you to those of you as well; you decided to stick with Firefox! Also, statistics show that barely any of those of you who stuck with 57 decided to turn off multi-process Firefox; instead you used the new technology, and some of you even reported problems to us.

Since then, the accessibility team at Mozilla has worked hard to improve the situation. And I am glad to report that in Firefox 58, most of the bottlenecks and slowdowns could be fixed. We’re now delivering most pages fast enough again that NVDA doesn’t feel the need to notify you that it is loading a document. In addition, you will start feeling the actual effects of the Quantum technologies, like faster page load times and other smoother experiences, now that the accessibility bottlenecks are out of the way. Feedback from users who have been using the Firefox 58 beta releases has been good, so we feel confident that those of you who upgrade from 57 to 58 upon release will immediately feel more comfortable with your screen reader and browser combination again. And I am hoping that this encourages many of those who switched to the 52 ESR to come back to Firefox 58 and give it a try.
If you still don’t like it, you can go back to 52 ESR as before, but I sincerely hope you’ll stick with 58 once you notice the improvements. Moreover, Firefox 58 introduces a new feature that NVDA will take advantage of starting in the 2018.1 release, due in February. Those of you using NVDA’s Next or Master development snapshots already have that particular improvement. It will speed up the cooperation between NVDA and Firefox even more, causing web content to render in the virtual buffer faster.

What’s coming beyond 58? Of course, more bug fixes! We’re continuing to make improvements which bring us closer to parity with Firefox 56, or even beyond that. On many small and medium-size pages, that is already the case now, so Firefox 59 deals with stuff that mostly hits on big pages with lots of hyperlinks or many chunks of differently formatted text. We’ll also continue to work hard to iron out any problems that may cause JAWS and Firefox to slow down so badly together. While some of the above-mentioned improvements also improve the interaction between Firefox and JAWS somewhat, JAWS’s interactions with Firefox are different enough that there is still something causing bad slowness that we don’t see with NVDA at all. And while we’ll do everything to improve the situation on our end, some updates will also need to come from Freedom Scientific, the makers of JAWS. So for JAWS users, the recommendation still stands to remain on 52 ESR, which will receive regular security updates until well into the third quarter of 2018, or to keep multi-process tabs turned off, as many JAWS users who remained on current Firefox releases have done. Note, however, that turning off multi-process tabs is something you do at your own risk. This is officially an unsupported configuration, so if anything breaks there, it’s tough luck. I will have more updates in a separate article once we have them and there is significant progress to report. In [...]

Anthony Hughes: Firefox 60 Product Integrity Requests Report

Mon, 15 Jan 2018 23:28:30 +0000

Late last year I was putting out weekly reports on the number of requests Mozilla’s Product Integrity group was receiving and how well we were tracking toward our self-imposed service-level agreement (respond to 90% within 48 hours).

The initial system we set up was only ever intended to be minimally viable and has not scaled well, although that’s probably to be expected. There’s been quite a lot of growing pains so I’ve been tasked with taking it to the next level.

What does that have to do with the weekly reports?

Going forward I have decided to stop giving reports on a weekly basis and instead try giving milestone-based reports. Unfortunately the data in the current system is not easily tracked against milestones and so the data presented in these reports is not 100% accurate.

In general I am tracking two categories of requests for each milestone, based very loosely on the dates:

“After Deadline” are requests filed after the deadline for the current milestone but before the “PI request deadline” email goes out for the next milestone.

“Within Deadline” are requests filed in the week-long window between the “PI request deadline” email going out and the deadline for the current milestone.

“Total” is simply an aggregate of the two.

There will be some inconsistency in this data with regard to the milestone, as not every request filed within a milestone window is necessarily tied to that exact milestone. However, this is the closest approximation we can make with the data given at this time; a failing that I hope to address in the next iteration of the PI requests system.

Firefox 58, 59, 60

The deadline to submit PI requests for Firefox 60 passed on January 7, 2018.


By and large we’re getting most of our requests after the deadline. It is possible that some of the “after deadline” number includes requests for the next milestone, particularly if requested toward the end of the current milestone. Unfortunately there is no concrete way for me to know this based on the information we currently record. Regardless I would like to see a trend over time of getting the majority of our requests prior to the deadline. Time will tell.


In terms of our SLA performance we’re tracking better with requests made after the deadline than with those filed within the week-long deadline window, although the lower volume of requests within the deadline window could be a factor. Currently we’re tracking within 10% of our SLA, which I’d like to see improved, but that is actually better than I thought we’d be doing at this point, considering the number of requests we receive and the resources we have available.

That’s all for today. I’ll be back toward the end of the Firefox 60 cycle to report on how that milestone shook out and to look forward to Firefox 61.

Have questions or feedback about this report or the PI Requests system in general? Please get in touch.

Air Mozilla: Mozilla Weekly Project Meeting, 15 Jan 2018

Mon, 15 Jan 2018 19:00:00 +0000

The Monday Project Meeting

Kim Moir: Thank you CUSEC!

Mon, 15 Jan 2018 16:50:25 +0000

Last week, I spoke at CUSEC (Canadian Undergraduate Software Engineering Conference) in Montreal.   I really enjoy speaking with students and learning what they are working on.  They are the future of our industry!  I was so impressed by the level of organization and the kindness and thoughtfulness of the CUSEC organizing committee who were all students from various universities across Canada. I hope that you all are enjoying some much needed rest after your tremendous work in the months approaching the conference and last week.
Gracey Hlywa Maytan and Kenny Hong opening CUSEC 2018 in Montreal
Some of their thoughtful gestures included handwritten thank-you notes to speakers. They organized speakers’ lunches and dinners. They arranged times for AV checks hours before your talk. They gave speakers a gift bag of things that might be useful during the conference – lip balm, kleenex, hand sanitizer, snacks, a tuque, a t-shirt. I felt so welcome and will be recommending this conference to others. Neither the organizing committee nor the speakers were all pale and/or male. I think a lot of organizers of other conferences could learn from these students, because a lot of industry conferences fail at this metric. I really enjoyed the other speakers’ talks as well and wish I could have stayed for the rest of the conference. I’ll have to watch them online, as I had to leave Thursday night for an appointment I had to attend in Ottawa on Friday.

My talk was about one of my favourite things to do: replacing parts of running distributed systems. Thank you to Sébastien Roy for inviting me to speak and to Anushka Paliwal for introducing me before my talk.

From hello world to goodbye code from Kim Moir

I really enjoyed preparing and delivering this talk. (Sidebar: I took the picture on the title slide while snowshoeing in Gatineau Parc near Wakefield, which is spectacular). When I’m preparing for a talk I spend a lot of time thinking about the background of the audience and what they can get out of it. The audience was mostly students, and they are studying different areas of software engineering, so I tried to make the talk pretty general. Understanding existing code bases, how to read them and how they evolve, is a useful skill to share. I had a lot of positive feedback from attendees about the talk – talking about the release process is great because this is often an overlooked part of software development. The talk was exactly 20 minutes in length.
When I was writing the talk, I had about 10 other slides that I removed. Perhaps some day I could give this talk in an expanded format; there is certainly a lot of material to cover in this subject area. After my talk, I talked to a lot of students and gave away a lot of fox stickers. Here are some of the questions that came up that I thought I could answer here as well.

How many lines of code changed during the Firefox Quantum release? See

What is the process to apply for an internship at Mozilla? The internship process at Mozilla starts in the fall for summer internships. The interview process usually starts in October/November and continues until each position is filled. There are still internship positions listed on the web page, so if you are interested, now is the time to apply.

What do you look for when interviewing intern candidates? I interviewed many candidates for our Python internships this past fall. At Mozilla, we have an online technical screening test for applicants. If you proceed past that, you will have some more interviews that dig deeper into technical que[...]

Mozilla Security Blog: Secure Contexts Everywhere

Mon, 15 Jan 2018 16:00:51 +0000

Since Let’s Encrypt launched, secure contexts have become much more mature. We have witnessed the successful restriction of existing as well as new features to secure contexts. The W3C TAG is about to drastically raise the bar to ship features on insecure contexts. All the building blocks are now in place to quicken the adoption of HTTPS and secure contexts, and follow through on our intent to deprecate non-secure HTTP.

Requiring secure contexts for all new features

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR. In contrast, a new CSS color keyword would likely not be restricted to secure contexts.

Requiring secure contexts in standards development

Everyone involved in standards development is strongly encouraged to advocate requiring secure contexts for all new features on behalf of Mozilla. Any resulting complication should be raised directly against the Secure Contexts specification.

Exceptions to requiring secure contexts

There is room for exceptions, provided justification is given to the dev.platform mailing list. This can either be inside the “Intent to Implement/Ship” email or a separate dedicated thread. It is up to Mozilla’s Distinguished Engineers to judge the outcome of that thread and ensure the dev.platform mailing list is notified. Expect to be granted an exception if:

- other browsers already ship the feature insecurely
- it can be demonstrated that requiring secure contexts results in undue implementation complexity.
Secure contexts and legacy features

Features that have already shipped in insecure contexts, but are deemed more problematic than others from a security, privacy, or UX perspective, will be considered on a case-by-case basis. Making those features available exclusively to secure contexts should follow the guidelines for removing features as appropriate.

Developer tools and support

To determine whether features are available, developers can rely on feature detection, e.g., by using the @supports at-rule in CSS. This is recommended over the self.isSecureContext API as it is a more widely applicable pattern. Mozilla will provide developer tools to ease the transition to secure contexts and enable testing without an HTTPS server.
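As a quick, hedged sketch of the feature-detection approach the post recommends (the snippet is illustrative and not from the post; `self.isSecureContext` is the standard API it names, and service workers are used here only as an example of a secure-context-only feature):

```javascript
// Illustrative sketch: prefer detecting the feature itself over
// sniffing for HTTPS. The typeof guards keep this runnable outside
// a browser as well.
const isSecure =
  typeof self !== "undefined" && self.isSecureContext === true;

// Detect a secure-context-only feature (service workers) directly:
const hasServiceWorker =
  typeof navigator !== "undefined" && "serviceWorker" in navigator;

console.log("secure context:", isSecure);
console.log("service workers available:", hasServiceWorker);
```

The CSS-side equivalent is an @supports rule keyed on the feature rather than on the transport, as the post suggests.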

Daniel Stenberg: Inspect curl’s TLS traffic

Mon, 15 Jan 2018 09:44:25 +0000

Since a long time back, the venerable network analyzer tool Wireshark (screenshot above) has provided a way to decrypt and inspect TLS traffic when sent and received by Firefox and Chrome. You do this by making the browser tell Wireshark the SSL secrets:

- set the environment variable named SSLKEYLOGFILE to a file name of your choice before you start the browser
- set the same file name path in the Master-Secret field in Wireshark: go to Preferences->Protocols->SSL and edit the path as shown in the screenshot below.

Having done this simple operation, you can now inspect your browser’s HTTPS traffic in Wireshark. Just super handy and awesome. Just remember that if you record TLS traffic and want to save it for analyzing later, you need to also save the file with the secrets so that you can decrypt that traffic capture at a later time as well.

curl

Adding curl to the mix. curl can be built using a dozen different TLS libraries, not just a single one as the browsers do, which complicates matters a bit. The NSS library, for example, which is the TLS library curl is typically built with on Red Hat and CentOS, handles the SSLKEYLOGFILE magic all by itself, so by extension you have been able to do this trick with curl for a long time – as long as you use curl built with NSS. A pretty good argument to use that build, really. Since curl version 7.57.0 the SSLKEYLOGFILE feature can also be enabled when built with GnuTLS, BoringSSL or OpenSSL. In the latter two libs, the feature is powered by new APIs in those libraries, and in GnuTLS by the library’s own logic, similar to how NSS does it. Since OpenSSL is by far the most popular TLS backend for curl, this feature is now brought to users much more widely. In curl 7.58.0 (due to ship on January 24, 2018), this feature is built by default also for curl with OpenSSL, while in 7.57.0 you need to define ENABLE_SSLKEYLOGFILE to enable it for OpenSSL and BoringSSL. And what’s even cooler?

This feature is at the same time also brought to every single application out there that is built against this or later versions of libcurl. In one single blow, a whole world suddenly opens up to make it easier for you to debug, diagnose and analyze your applications’ TLS traffic when powered by libcurl! Like the description above for browsers:

- set the environment variable SSLKEYLOGFILE to a file name to store the secrets in
- tell Wireshark to use that same file to find the TLS secrets (Preferences->Protocols->SSL), as the screenshot showed above
- run the libcurl-using application (such as curl)

and Wireshark will be able to inspect TLS-based protocols just fine!

trace options

Of course, as a lightweight alternative: you may opt to use the --trace or --trace-ascii options with the curl tool and be fully satisfied with that. Using those command line options, curl will log everything sent and received in the protocol layer without the TLS applied. With HTTPS you’ll see all the HTTP traffic, for example.

Credits

Most of the curl work to enable this feature was done by Peter Wu and Ray Satiro.[...]
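Condensed into commands, the workflow looks roughly like this (a sketch; the file path is arbitrary and the Wireshark step is manual):

```shell
# Pick a file to hold the TLS session secrets (any writable path works).
export SSLKEYLOGFILE="$HOME/tls-secrets.log"

# Run curl -- 7.58.0+, an NSS/GnuTLS build, or 7.57.0 built with
# ENABLE_SSLKEYLOGFILE -- and it appends the session secrets to that file.
curl -s https://example.com/ -o /dev/null || true

# Then point Wireshark at the same path:
#   Preferences -> Protocols -> SSL -> (Pre)-Master-Secret log filename
# and captured HTTPS traffic from this session becomes readable.
```

As the post notes, keep the secrets file alongside any saved capture you want to decrypt later.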

Daniel Pocock: RHL'18 in Saint-Cergue, Switzerland

Mon, 15 Jan 2018 08:02:54 +0000

RHL'18 was held at the centre du Vallon à St-Cergue, the building in the very center of this photo, at the bottom of the piste:


People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and skiing. This event is a lot of fun and I would highly recommend that people look out for the next edition. (Subscribe to the rhl-annonces list for a reminder email.)

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:


It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, distributing stickers and inviting people to future events. My previous blog contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018, please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian, several of them need co-mentors, please contact me if you are interested.

Support.Mozilla.Org: The Big SUMO Report: 2017

Sat, 13 Jan 2018 19:18:01 +0000

Hey there, SUMO Nation! We are happy to see you reading these words at the beginning of 2018 – and curious to see what it brings… But before we dive deep into the intriguing future, let’s take a look back at 2017. It has been a year full of changes big and small, but we were all in it together and we could not have reached the end of the year as a team without you. There is quite a lot to talk about, so let’s get rolling with the numbers and details!

SUMO 2017 – the major facts

We had a fair share of “platform adventures”, switching the whole site over to a completely new front- and back-end experience for admins, contributors, and users – and then coming back to our previous (and current) setup – both for good reasons. It has been a rocky road, full of big challenges and unexpected plot twists. Thank you for staying with us through all the good winds and bad turbulence. More on the platform side – Giorgos and Ben have been quietly moving things around to ensure a stable future for the site – the work is finishing only now, but the preparation all happened in 2017. We proudly and loudly took part in the biggest event of 2017 for everyone at Mozilla – the release of Firefox Quantum! Remember – even if you never contributed a single line of code to Firefox, your contributions and participation in what Mozilla does are crucial to the future of an open web. The “big one” aside, we also welcomed Firefox Rocket and Firefox for Fire TV to our family of “all the software for your burning browsing needs” ;-). We met in San Francisco and Austin to talk about all things Mozilla and SUMO – and celebrate our diversity and achievements. We also met in Berlin to talk about our community plans for 2018. We were invited to join the kicking off of the Non-Coding Volunteers project and are really looking forward to more of it in 2018. This humble blog took a dip in general activity and traffic (15 new posts vs 68 in 2016, and over 7,200 page views vs 9,200 in 2016).
Still, some of the most read posts in its history appeared last year – and they were written by you. We’re talking about community event reports, All Hands tales, and stories about bugs – quality over quantity!

SUMO 2017 in numbers – the highlights

Just like the year before, our activity and its results could be analysed from many perspectives, with dozens of data sources. Putting everything together, especially given our platform changes in 2017, is quite impossible – and the numbers have been affected by interruptions in tracking, as expected. Capturing the potential, reach, and power of SUMO as a source of knowledge and help and as an example of community collaboration is something we definitely want to improve upon in 2018. No amount of simple digits can tell the complete story of our joint voyage, but let’s try with the ones below.

General site stats

- Number of total page views: 855,406,690 vs 806,225,837 (2016) – over 49 million more!
- Number of users with at least one recorded session (= visit to the site): 296,322,215 vs 259,584,893 (2016) – over 36 million more!
- Number of sessions (= periods of active user engagement on the site): 513,443,121 vs 446,537,566 (2016) – over 66 million more!
- Percentage of people returning to the site: 44.1% vs 45% (2016)
- Average time spent on site per session: 01:26 vs 01:39 (2016)
- Average number of pages visited per session: 1.67 vs 1.81 (2016)
- Percentage of single page visits: 39.7% vs 25% (2016)
- 89.65% of users saw the website on desktop or laptop computers (90.9% in 2016). The remaining 10.35% visited us from a mobile device (9.1% in 2016).
- Percentage of people who visited SU[...]

Wladimir Palant: News flash: encrypted.google.com is not special in any way

Sat, 13 Jan 2018 19:01:15 +0000

Once upon a time, Google dared to experiment with HTTPS encryption for their search instead of allowing all search data to go unencrypted through the wire. For this experiment, they created a new subdomain: encrypted.google.com was the address where you could get some extra privacy. What some people apparently didn’t notice: the experiment was successful, and Google rolled out HTTPS encryption to all of their domains. I don’t know why encrypted.google.com is still around, but there doesn’t seem to be anything special about it any more. Which doesn’t stop some people from imagining that there is.

Myth #1: Your data is extra private, thanks to the extra encryption

The “encrypted” in “encrypted.google.com” refers to HTTPS encryption. You get it on any Google domain, and pretty much every other search engine has switched to HTTPS-only as well. There is no indication that Google experiments with any other kind of encryption on this domain.

Myth #2: encrypted.google.com doesn’t send your search query to websites you click on

That myth seems to be based on this answer by a Google employee noting differences in referrer handling. And maybe in 2013 it was even accurate. But these days anybody running a website will confirm that they don’t see your search query, no matter what Google domain you are on. Until recently the reason was Google’s Instant Search feature: the search terms simply weren’t part of the address. Now Instant Search is gone, but Google uses Referrer Policy to prevent giving away your search terms to other websites.
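For illustration (this is the standard Referrer Policy mechanism the post refers to, not Google's actual configuration), a site can keep query strings out of outgoing referrers with a response header or an equivalent meta tag:

```html
<!-- Send only the origin (scheme + host + port), never the full URL
     with its query string, as the Referer to linked sites. -->
<meta name="referrer" content="origin">
<!-- Header equivalent: Referrer-Policy: origin -->
```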

Actual differences?

I did notice a few actual differences on that domain. For example, encrypted.google.com won’t redirect me to google.de because of my geographical location — but it will still show me results that are specific to Germany, same as google.com. It also doesn’t provide links to some other Google apps such as Maps. Not really a reason to prefer it over regular Google domains, but maybe somebody knows about more substantial differences and can list those in the comments.

Pascal Chevrel: User Style for bugzilla.mozilla.org

Sat, 13 Jan 2018 14:49:35 +0000

Yesterday, I was talking with Kohei Yoshino (the person behind the Bugzilla Quantum effort that recently landed significant UX improvements to the header strip) about some visual issues I have on bugzilla.mozilla.org, which basically boil down to our default view being a bit too noisy for my taste and not emphasizing enough the key elements I want to glance at immediately when I visit a bug (bug status, description, comments). Given that I spend a significant amount of time on Bugzilla and that I also spend some time on Github issues, I decided to see if I could improve our default theme on Bugzilla with a user style to make it easier on the eyes and also visually closer to Github, which I think is good when you use both on a daily basis. After an evening spent on it, I am happy with the result, so I decided to share it. To install this style on Firefox, you need an extension such as Stylish or Styl-us (I use the former); go to the user style page above and install it. Load a bug (ex: Bug 1406825) and you should see the result. Note that this CSS patches the default theme on bugzilla.mozilla.org, not the old one. You need to have the option "Use modal user interface" set to ON in your Bugzilla preferences (that's the default value). Here are a few Before/After screenshots:
Overview and header before
Overview and header after
Comment before
Comment after
That's a v1 (a v1.1 actually). It uses sans-serif fonts instead of monospace, styles replies and comments in a more modern way, removes some visual elements, and emphasizes readability of comments. Cheers[...]

Daniel Stenberg: Microsoft curls too

Sat, 13 Jan 2018 08:51:51 +0000

On December 19 2017, Microsoft announced that since insider build 17063 of Windows 10, curl is now a default component. I’ve been away from home since then so I haven’t really had time to sit down and write and explain to you all what this means. While I’m a bit late, here it comes! I see this as a pretty huge step in curl’s road to conquer the world.

curl was already existing on Windows

Ever since we started shipping curl, it has been possible to build curl for Windows and run it on Windows. It has been working fine on all Windows versions since at least Windows 95. Running curl on Windows is not new to us. Users with a little bit of interest and knowledge have been able to run curl on Windows for almost 20 years already. Then we had the known debacle with Microsoft introducing a curl alias to PowerShell that has put some obstacles in the way for users of curl.

Default makes a huge difference

Having curl shipped by default by the manufacturer of an operating system of course makes a huge difference. Once this goes out to the general public, all of a sudden several hundred million users will get a curl command line tool installed for them without having to do anything. Installing curl yourself on Windows still requires some skill and knowledge, and in places like Stack Overflow there are many questions and users showing how it can be problematic. I expect this to accelerate curl command line use in the world. I expect this to increase the number of questions on how to do things with curl. Lots of people mentioned how curl is a “good” new tool to use for malicious downloads of files to Windows machines if you manage to run code on someone’s Windows computer. curl is quite a capable thing that you truly do not want to get invoked involuntarily. But sure, any powerful and capable tool can of course be abused.
About the installed curl

This is what it looks like when you check out the curl version on this Windows build: (screenshot from Steve Holme) I don’t think this means that this is necessarily exactly what curl will look like once this reaches the general Windows 10 installation, and I also expect Microsoft to update and upgrade curl as we go along. Some observations from this simple screenshot – and if you work for Microsoft, you may feel free to see these as some subtle hints on what you could work on improving in future builds:

- They ship 7.55.1, while 7.57.0 was the latest version at the time. That’s just three releases away, so I consider that pretty good. Lots of distros and others ship (much) older releases. It’ll be interesting to see how they will keep this up in the future.
- Unsurprisingly, they use a build that uses the WinSSL backend for TLS.
- They did not build it with IDN support.
- They’ve explicitly disabled support for a whole range of protocols that curl supports natively by default (gopher, smb, rtsp etc), but they still have a few rare protocols enabled (like dict).
- curl supports LDAP using the Windows native API, but that’s not used.
- The Release-Date line shows they built curl from unreleased sources (most likely directly from a git clone).
- No HTTP/2 support is provided.
- There’s no automatic decompression support for gzip or brotli content.
- The build doesn’t support metalink and no PSL (public suffix list).

(curl gif from the original Microsoft curl announcement blog post)

Independent

Finally, I’d like to add that, like all operating system distributions that ship curl (macOS, Linux distros, the BSDs, AIX, etc), Microsoft builds, packages and ships the curl binary completely independently from th[...]

David Humphrey: GitHub Knows

Fri, 12 Jan 2018 16:24:50 +0000

I was reflecting the other day how useful it would be if GitHub, in addition to the lists it has now like Trending and Explore, could also provide me a better view into which projects a) need help; and more, b) can accept that help when it arrives. Lots of people responded, and I don't think I'm alone in wanting better ways to find things in GitHub. Lots of GitHub users might not care about this, since you work on what you work on already, and finding even more work to do is the last thing on your mind. For me, my interest stems from the fact that I constantly need to find good projects, bugs, and communities for undergrads wanting to learn how to do open source, since this is what I teach. Doing it well is an unsolved problem, since what works for one set of students automatically disqualifies the next set: you can't repeat your success, since closed bugs (hopefully!) don't re-open. And because I write about this stuff, I hear from lots of students that I don't teach, students from all over the world who, like my own, are struggling to find a way in, a foothold, a path to get started. It's a hard problem, made harder by the size of the group we're discussing. GitHub's published numbers from 2017 indicate that there are over 500K students using its services, and those are just the ones who have self-identified as such--I'm sure it's much higher. The usual response I get from people is to use existing queries for labels with some variation of "good first bug". This can work, especially if you get in quickly when a project, or group of projects, does a triage through their issues. For example, this fall I was able to leverage the Hacktoberfest efforts, since many projects took the time to go and label bugs they felt were a good fit for new people (side note: students love this, and I had quite a few get shirts and a sense that they'd become part of the community). But static labeling of issues doesn't work over time. 
For example, I could show you thousands of "good first bugs" sitting patiently in projects that have long ago stopped being relevant, developed, or cared about by the developers. It's like finding a "Sale!" sign on goods in an abandoned store, more of a historical curiosity than a signal to customers. Unless these labels auto-expire, or are mercilessly triaged by the dev team, I don't think they solve the problem. So what could we do instead? Well, one thing we could do is make better use of the fact that we all now work in a monorepo called github.com. Everyone's favourite distributed version control system has evolved to (hilariously) become the most centralized system we've ever had. As such, GitHub knows about you, your team, and your code, and could help us navigate through everything it contains. What I'm going to describe is already starting to happen. For example, if you have any node projects on GitHub, you've probably received emails about npm packages being out of date and vulnerable to security issues:

We found a potential security vulnerability in a repository for which you have been granted security alert access. Known high severity security vulnerability detected in package-name < X.Y.Z defined in package-lock.json. package-lock.json update suggested: package-name ~> X.Y.Z

Now imagine we take this further. What sorts of things could we do? GitHub knows how likely your project is to accept work from new contributors, based on the fact that it knows whether any given user has commits in your repo. If a project doesn't accept contributions from new peop[...]
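For reference, the label queries mentioned earlier look like this in GitHub's issue-search syntax (the label text and filters below are examples; projects name their labels differently):

```
label:"good first issue" state:open language:javascript sort:updated-desc
```

Sorting by recent updates is one crude way to filter out the "abandoned store" projects described above, though it doesn't solve the staleness problem the post is pointing at.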

Chris Pearce: Not every bit of code you write needs to be optimal

Fri, 12 Jan 2018 10:49:37 +0000

It's easy to fall into the trap of obsessing about performance and try to micro-optimize every little detail in the code you're writing. Or reviewing for that matter. Most of the time, this just adds complexity and is a waste of effort.

If a piece of code only runs a few (or even a few hundred) times a second, a few nanoseconds per invocation won't make a significant difference. Chances are the performance wins you'll gain by micro optimizing such code won't show up on a profile.

Given that, what should you do instead? Code is read and edited much more than it is written, so optimize for readability, and maintainability.

If you find yourself wondering whether a piece of code is making your program slow, one of the first things you should do is fire up a profiler, and measure it. Or add telemetry to report how long your function takes in the wild. Then you can stop guessing, and start doing science.

If data shows that your code is slow, by all means optimize it. But if not, you can get more impact out of your time by directing your efforts elsewhere.

Frédéric Wang: Review of Igalia's Web Platform activities (H2 2017)

Thu, 11 Jan 2018 23:00:00 +0000

Last September, I published a first blog post to let people know a bit more about Igalia’s activities around the Web platform, with a plan to repeat such a review each semester. The present blog post focuses on the activity of the second semester of 2017.

Accessibility

As part of Igalia’s commitment to diversity and inclusion, we continue our effort to standardize and implement accessibility technologies. More specifically, Igalian Joanmarie Diggs continues to serve as chair of the W3C’s ARIA working group and as an editor of Accessible Rich Internet Applications (WAI-ARIA) 1.1, Core Accessibility API Mappings 1.1, Digital Publishing WAI-ARIA Module 1.0, and Digital Publishing Accessibility API Mappings 1.0, all of which became W3C Recommendations in December! Work on versions 1.2 of ARIA and the Core AAM will begin in January. Stay tuned for the First Public Working Drafts. We also contributed patches to fix several issues in the ARIA implementations of WebKit and Gecko and implemented support for the new DPub ARIA roles. We expect to continue this collaboration with Apple and Mozilla next year, as well as to resume more active maintenance of Orca, the screen reader used to access graphical desktop environments in GNU/Linux. Last but not least, progress continues on switching to Web Platform Tests for ARIA and “Accessibility API Mappings” tests. This task is challenging because, unlike other aspects of the Web Platform, testing accessibility mappings cannot be done by solely examining what is rendered by the user agent. Instead, an additional tool, an “Accessible Technology Test Adapter” (ATTA), must also be run. ATTAs work in a similar fashion to assistive technologies such as screen readers, using the implemented platform accessibility API to query information about elements and reporting what it obtains back to WPT, which in turn determines if a test passed or failed.
As a result, the tests are currently officially manual while the platform ATTAs continue to be developed and refined. We hope to make sufficient progress during 2018 that ATTA integration into WPT can begin.

CSS

This semester, we were glad to receive Bloomberg’s support again to pursue our activities around CSS. After a long commitment to CSS and a lot of feedback to editors, several of our members finally joined the Working Group! Incidentally, and as mentioned in a previous blog post, during the CSS Working Group face-to-face meeting in Paris we got the opportunity to answer Microsoft’s questions regarding The Story of CSS Grid, from Its Creators (see also the video). You might want to take a look at our own videos for CSS Grid Layout, regarding alignment and placement and easy design. On the development side, we maintained and fixed bugs in the Grid Layout implementation for Blink and WebKit. We also implemented alignment of positioned items in Blink and WebKit. We have several improvements and bug fixes for editing/selection from Bloomberg’s downstream branch that we’ve already upstreamed or plan to upstream. Finally, it’s worth mentioning that the work done on display: contents by our former coding experience student Emilio Cobos was taken over and completed by antiik (for WebKit) and rune (for Blink) and is now enabled by default! We plan to pursue these developments next year and have various ideas. One of them is improving the way grids are stored in memory to allow huge grids (e.g. spreadsheets).

Web Platform Predictability

One of the areas where we would like [...]

Mike Conley: Making tab switching faster in Firefox with tab warming

Thu, 11 Jan 2018 20:00:36 +0000

Making tab operations fast

Since working on the Electrolysis team (and having transitioned to working on various performance initiatives), I’ve been working on making tab operations feel faster in Firefox. For example, I wrote a few months back about a technique we used to make tab closing faster. Today, I’m writing to talk about how we’re trying to make tab switching feel faster in some cases.

What is “tab warming”?

When you switch a tab in multi-process Firefox, traditionally we’d send a message to the content process to tell it to paint its layers, and then we’d wait for the compositor to tell us that it had received those layers before finally doing the tab switch. With the exception of some degenerate cases, this mechanism has worked pretty well since we introduced it, but I think we can do slightly better. “Tab warming” is what we’re calling the process of pre-emptively rendering the layers for a tab, and pre-emptively uploading them to the compositor, when we’re pretty sure you’re likely to switch to that tab.1 Maybe this is my Canadian-ness showing, but I like to think of it almost like coming in from shoveling snow off of the driveway, and somebody inside has already made hot chocolate for you, because they knew you’d probably be cold. For many cases, I don’t actually think tab warming will be very noticeable; in my experience, we’re able to render and upload the layers2 for most sites quickly enough for the difference to be negligible. There are certain sites, however, that we can’t render and upload layers for as quickly. These are the sites that I think warming will help with. Here’s an example of such a site. The above link uses SVGs and CSS to do an animation.
Unfortunately, on my MBP, if I have this open in a background tab in Firefox right now, and switch to it, there’s an appreciable delay between clicking that tab and it finally being presented to me.3 With tab warming enabled, when you hover over the tab with your mouse cursor, the rendering of that sophisticated SVG will occur while your finger is still on its way to click on the mouse button to actually choose the tab. Those precious milliseconds are used to do the rendering and uploading, so that when the click event finally comes, the SVG is ready and waiting for you. Assuming a sufficiently long delay between hover and click, the tab switch should be perceived as instantaneous. If the delay was non-zero but still not long enough, we will have nonetheless shaved that time off in eventually presenting the tab to you. And in the event that we were wrong, and you weren’t interested in seeing the tab, we eventually throw the uploaded layers away. On my own machine, this makes a significant difference in the perceived tab switch performance with the above site.

Trying it out in Nightly

Tab warming is currently controlled via this preference:

browser.tabs.remote.warmup.enabled

and is currently off by default while we test it and work out more kinks. If you’re interested in helping us test, flip that preference in Firefox Nightly, and file bugs if you see it introducing strange behaviour. Hopefully we’ll be able to flip it on by default soon. Stay tuned!

Right now, we simply detect whether you’re hovering a tab with a mouse to predict that you’re likely going to choose that tab, but there are more opportunities to introduce warming based on other user behaviours. ↩ We can even interrupt JavaScript to do this, th[...]
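As an illustration only (the names and types below are invented, not Firefox’s actual internals), the hover-then-switch-or-evict pattern described above can be sketched like this:

```rust
use std::collections::HashMap;

// Hypothetical cache of speculatively rendered tabs: tab id -> "uploaded layers".
struct Warmer {
    warmed: HashMap<u32, String>,
}

// Stand-in for the expensive render + compositor-upload step.
fn render(tab: u32) -> String {
    format!("layers for tab {}", tab)
}

impl Warmer {
    fn new() -> Self {
        Warmer { warmed: HashMap::new() }
    }

    // Hover: we predict a switch, so do the expensive work ahead of time.
    fn on_hover(&mut self, tab: u32) {
        self.warmed.entry(tab).or_insert_with(|| render(tab));
    }

    // Click: reuse the pre-rendered layers if warming got there first,
    // otherwise fall back to rendering on demand.
    fn on_switch(&mut self, tab: u32) -> String {
        self.warmed.remove(&tab).unwrap_or_else(|| render(tab))
    }

    // The prediction was wrong: throw the speculative work away.
    fn evict(&mut self, tab: u32) {
        self.warmed.remove(&tab);
    }
}
```

The point of the sketch is simply that speculative work is cheap to discard: a wrong prediction only costs the wasted render, never a wrong result.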

Mozilla Future Releases Blog: Announcing ESR60 with policy engine

Thu, 11 Jan 2018 19:01:02 +0000

The Firefox ESR (extended support release) is based on an official release of Firefox desktop for use by organizations including schools, universities, businesses and others who need extended support for mass deployments. Since Firefox 10, ESR has grown in popularity and many large organizations rely on it to let their employees browse the Internet securely.

We want to make customization of Firefox deployments simpler for system administrators and we’re pleased to announce that our next ESR version, Firefox 60, will include a policy engine that increases customization possibilities and integration into existing management systems.

What is the policy engine?

The Policy Engine is a project to build a Firefox desktop configuration and customization feature for enterprise users. The policy engine will work with any tool that wants to set policies, and we intend to bring Windows Group Policy support as well. We’ll initially support a limited set of policies, but this will evolve based on user feedback.

More details on the policy engine can be found here.
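The exact policy format had not yet been published at the time of this announcement, so purely as an illustration, a JSON-based policy file for such an engine might look something like this (all policy names and values below are hypothetical):

```json
{
  "policies": {
    "DisableAppUpdate": true,
    "BlockAboutConfig": true,
    "Homepage": {
      "URL": "https://intranet.example.com",
      "Locked": true
    }
  }
}
```

In this sketch, a management tool (or Windows Group Policy) would deploy such a file alongside Firefox, and the policy engine would read and enforce it at startup.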

Bug reference

What’s the plan?

In order to accommodate the group policy implementation, we are making Firefox 60 our next ESR version and will be following this plan:

  • May 8th – ESR 60.0 released (we’d love feedback from early adopters at that point and will be sharing a feedback form through the enterprise mailing list)
  • July 3rd – ESR 60.1 released
  • August 28th – End of life for ESR 52 and release of ESR 60.2.0. No further updates will be offered for ESR 52, and an update to ESR 60.2.0 will be provided through the application update service

Also please keep in mind that Firefox 57, released last month, supports only add-ons built with the WebExtensions API. This means that Firefox 52 ESR is the last release that will support legacy add-ons.  If you developed an add-on that has not been updated to the WebExtensions API, there is still time to do so. Documentation is available, and you can ask questions by emailing or joining the #webextensions channel at

If you are supporting users who use add-ons, now is a good time to encourage them to check if their add-on works with Firefox 57+.

The post Announcing ESR60 with policy engine appeared first on Future Releases.

Mozilla Reps Community: Reps Council at Austin

Thu, 11 Jan 2018 13:41:51 +0000

TL;DR: The All Hands is a special time of the year when Mozilla employees, along with core volunteers, gather for a week of many meetings and brainstorming. The All Hands Wiki page has more information about the general setting. During the All Hands, the Reps Council participated in the Open Innovation meetings and also held meetings about 2018 planning. One of our main topics was the Mission Driven Mozillians proposal.

Hot Topics

As you already know, we asked in the past for your input on Discourse (if you’re not already subscribed to the Reps category, maybe now is the right moment). Before we start with the recap of the week, we would like to say thank you to the Reps who left feedback on that thread (Felipe, Edoardo and André) and also to all the Mozillians who left their feedback there. These suggestions were very important in shaping the document that will be available soon (we had more discussions during the All Hands, so the new revised version is not yet ready). Mission Driven Mozillians was the major topic of the Reps Council meetings. After consulting the results of the Communities and Contributors Qualitative Survey, we shaped the following questions: Is the Reps program really representative of all the Mozillian communities? How does the program need to evolve to engage more with our members? How can we get new Reps to join the new Coach and Resources initiatives? How can we improve the use of the report tool by our Reps?

Plans for 2018

A photo with: Reps Peers, Reps Council (missing Prathamesh and Elio), Kiki (the awesome girl that manages all our budget/swag requests) and Manel (an unexpected guest during the photo shoot).

Moving forward, our plan is to dive deeper into those questions and start acting on them. The council was also highly engaged with the Leadership proposal (which had a high priority), since it will be the basis of how Mozilla communities will be shaped in the near future and will of course influence our program.
Open by Design was the motto of this All Hands, during all the meetings inside Open Innovation but also across the organization. You see, Mozilla has something valuable that other IT companies don’t have: the community. Mozilla is more than a Silicon Valley IT company. One of the most curious and interesting things, in terms of openness, is that all Mozillians who have signed the NDA can access the official Mozilla Slack channels. This enables volunteers to communicate directly with employees. And yes, as Reps you have already signed the NDA, so you can access the various channels without problems![...]

Mozilla Release Management Team: Firefox Release Management at FOSDEM 2018

Thu, 11 Jan 2018 12:00:00 +0000


Like every year, Brussels will host FOSDEM, the largest European open source event, gathering thousands of developers on February 3 & 4.

Sylvestre, head of the Firefox Release Management team, will explain on Saturday how we ship Firefox to hundreds of millions of users every 6 weeks, and how our test suites, pre-release channels, static analyzers, data mining on uplifts, code coverage, fuzzers and community feedback are all part of the tools and processes Mozilla uses to make that happen.

Here are the details of the conference:

Firefox: How to ship quality software
6000 new patches, a release every 6 weeks: how does Mozilla do it?
Speaker Sylvestre Ledru
Track Mozilla devroom
Room UA2.118 (Henriot)
Day Saturday
Start 14:00
End 14:30

Several members of the Release Management team, as well as Mozilla employees from other departments and Mozilla volunteers, will be at FOSDEM, so don’t hesitate to come and talk to us at the Mozilla Developer Room or at our booth.

In addition to the official schedule for our devroom, our Mozilla Wiki page for FOSDEM 2018 contains all the information you may need regarding our participation in this event.

Last but not least, we would like to thank our Belgian Mozilla community, particularly Anthony Maton and Ziggy Maes, for organizing our participation at FOSDEM!

Daniel Glazman: Web. Period.

Thu, 11 Jan 2018 06:58:00 +0000

Well. I have published something on Medium. #epub #web #future #eprdctn

Mozilla B-Team: happy additional push day!

Wed, 10 Jan 2018 20:01:31 +0000

release tag

the following changes have been pushed to

  • [1429398] Scrolling down with keyboard no longer works correctly in at least Firefox
  • [1429449] Dropdown menu shouldn’t handle non-primary button click
  • [1429290] Bugs now display too wide to fit in my window; reading bugs is much harder

discuss these changes on

Air Mozilla: The Joy of Coding - Episode 125

Wed, 10 Jan 2018 18:00:00 +0000

mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla: Weekly SUMO Community Meeting, 10 Jan 2018

Wed, 10 Jan 2018 17:00:00 +0000

This is the SUMO weekly call

Jet Villegas: Turning a Corner in the New Year

Wed, 10 Jan 2018 05:01:10 +0000

I’m going to start the year off with a blog post, mostly to procrastinate on replying to the many e-mails that my very productive colleagues have sent my way.

2017 was quite a year beyond the socio-economic, geo-political, and bizarre. I, and many of my colleagues, did what we could: find solace in work. I’ve often found that in uncertain times, making forward progress on difficult technical projects provides just enough incentive to continue for a bit longer. With the successful release of Firefox 57, I’m again optimistic about the future for the technical work. The Firefox Layout Engine team has a lot to be proud of in the 57 version. The winning combination was shipping big-ticket investments, and grinding down on many very difficult bugs. Plan “A” all the way!

Of course, it’s easy to see this in retrospect. I recall many late nights wondering “is this even going to work?” My friend and former colleague Robert O’Callahan recently revealed that he had similar doubts from before my time on the project. I wonder how much of that is inherent in the work. Is the Mozilla mission also the same narrative that fills me and other leaders with the sense that we’re always in rescue mode? Is that a sustainable lifestyle? In any case, it does feel like we’ve turned a corner.

For the first time in a long time, I feel like the wind’s at our backs as we head into 2018 with that momentum. It would be hubris on my part to say that we’ve figured it all out. 2018’s uncertainties (e.g., Spectre/Meltdown) promise more late nights ahead. We’ve got lots of things in flight, and more ideas to investigate. We’ll need to make changes to deal with it all, but that’s par for the coming year’s course.

Happy New Year!

Mozilla B-Team: happy bmo push day

Wed, 10 Jan 2018 03:27:01 +0000

release tag

the following changes have been pushed to

  • [1429110] Update type checking to treat bug id as a simple string
  • [1429075] Encoding issue in notification area (mojibake)
  • [1429076] Add lock icon for requests on security-sensitive bugs in notification area
  • [1429194] Clicking requests icon gives “Loading… ” but never actually completes
  • [1429220] Issues with the new fixed header implementation

discuss these changes on

Manish Goregaokar: What's Tokio and Async IO All About?

Wed, 10 Jan 2018 00:00:00 +0000

The Rust community lately has been focusing a lot on “async I/O” through the tokio project. This is pretty great! But for many in the community who haven’t worked with web servers and related things it’s pretty confusing as to what we’re trying to achieve there. When this stuff was being discussed around 1.0, I was pretty lost as well, having never worked with this stuff before. What’s all this Async I/O business about? What are coroutines? Lightweight threads? Futures? How does this all fit together? What problem are we trying to solve? One of Rust’s key features is “fearless concurrency”. But the kind of concurrency required for handling a large amount of I/O bound tasks – the kind of concurrency found in Go, Elixir, Erlang – is absent from Rust. Let’s say you want to build something like a web service. It’s going to be handling thousands of requests at any point in time (known as the “c10k problem”). In general, the problem we’re considering is having a huge number of I/O bound (usually network I/O) tasks. “Handling N things at once” is best done by using threads. But … thousands of threads? That sounds a bit much. Threads can be pretty expensive: Each thread needs to allocate a large stack, setting up a thread involves a bunch of syscalls, and context switching is expensive. Of course, thousands of threads all doing work at once is not going to work anyway. You only have a fixed number of cores, and at any one time only one thread will be running on a core. But for cases like web servers, most of these threads won’t be doing work. They’ll be waiting on the network. Most of these threads will either be listening for a request, or waiting for their response to get sent. With regular threads, when you perform a blocking I/O operation, the syscall returns control to the kernel, which won’t yield control back, because the I/O operation is probably not finished. 
Instead, it will use this as an opportunity to swap in a different thread, and will swap the original thread back when its I/O operation is finished (i.e. it’s “unblocked”). Without Tokio and friends, this is how you would handle such things in Rust. Spawn a million threads; let the OS deal with scheduling based on I/O. But, as we already discovered, threads don’t scale well for things like this1. We need “lighter” threads.

Lightweight threading

I think the best way to understand lightweight threading is to forget about Rust for a moment and look at a language that does this well: Go. Go has lightweight threads, called “goroutines”. You spawn these with the go keyword. A web server might do something like this:

    listener, err := net.Listen(...)
    // handle err
    for {
        conn, err := listener.Accept()
        // handle err
        // spawn goroutine:
        go handler(conn)
    }

This is a loop which waits for new TCP connections, and spawns a goroutine with the connection and the function handler. Each connection will be a new goroutine, and the goroutine will shut down when handler finishes. In the meantime, the main loop continues executing, because it’s running in a different goroutine. So if these aren’t “real” (operating sys[...]
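For contrast, the thread-per-task model described above (“spawn threads; let the OS schedule around blocking I/O”) can be sketched in Rust with the standard library alone; the handler below is a trivial stand-in for a blocking, I/O-bound task:

```rust
use std::thread;

// Stand-in for a blocking, I/O-bound task (e.g. handling one connection).
fn handler(id: usize) -> usize {
    id * 2
}

fn main() {
    // Spawn one OS thread per task; the kernel parks each thread while
    // it waits and wakes it when its "I/O" is ready.
    let handles: Vec<_> = (0..4)
        .map(|id| thread::spawn(move || handler(id)))
        .collect();

    // Join in spawn order, collecting each task's result.
    let results: Vec<usize> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    println!("{:?}", results); // [0, 2, 4, 6]
}
```

Each `thread::spawn` here pays the full OS-thread cost (stack allocation, syscalls, context switches), which is exactly what makes this model too expensive at c10k scale.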

Manish Goregaokar: Rust in 2018

Wed, 10 Jan 2018 00:00:00 +0000

A week ago we put out a call for blog posts for what folks think Rust should do in 2018. This is mine. Overall focus I think 2017 was a great year for Rust. Near the beginning of the year, after custom derive and a bunch of things stabilized, I had a strong feeling that Rust was “complete”. Not really “finished”, there’s still tons of stuff to improve, but this was the first time stable Rust was the language I wanted it to be, and was something I could recommend for most kinds of work without reservations. I think this is a good signal to wind down the frightening pace of new features Rust has been getting. And that happened! We had the impl period, which took some time to focus on getting things done before proposing new things. And Rust is feeling more polished than ever. Like Nick, I feel like 2018 should be boring. I feel like we should focus on polishing what we have, implementing all the things, and improving our approachability as a language. Basically, I want to see this as an extended impl period. This doesn’t mean I’m looking for a moratorium on RFCs, really. Hell, in the past few days I’ve posted one pre-pre-RFC1, one pre-RFC, and one RFC (from the pre-RFC). I’m mostly looking for prioritizing impl work over designing new things, but still having some focus on design. Language I think Rust still has some “missing bits” which make it hard to justify for some use cases. Rust’s async story is being fleshed out. We don’t yet have stable SIMD or stable inline ASM. The microcontroller story is kinda iffy. RLS/clippy need nightly. I’d like to see these crystallize and stabilize this year. I think this year we need to continue to take a critical look at Rust’s ergonomics. Last year the ergonomics initiative was really good for Rust, and I’d like to see more of that. This is kind of at odds with my “focus on polishing Rust” statement, but fixing ergonomics is not just new features. 
It’s also about figuring out barriers in Rust, polishing mental models, improving docs/diagnostics, and in general figuring out how to best present Rust’s features. Starting dialogues about confusing bits of the language and figuring out the best mental model to present them with is something we should continue doing. Sometimes this may need new features, indeed, but not always. We must continue to take a critical look at how our language presents itself to newcomers. Community I’d like to see a stronger focus in mentoring. Mentoring on rustc, mentoring on major libraries, mentoring on Rust tooling, mentoring everywhere. This includes not just the mentors, but the associated infrastructure – contribution docs, sites like servo-starters and findwork, and similar tooling. I’m also hoping for more companies to invest back into Rust. This year Buoyant became pretty well known within the community, and many of their employees are paid to work on various important parts of the Rust ecosystem. There are also multiple consulting groups that contribute to the ecosystem. It’s nice to see that “paid to work on Rust” is no longer limited to Mozilla, and this is crucial for the health of the language. I ho[...]

Robert O'Callahan: On Keeping Secrets

Tue, 09 Jan 2018 22:37:12 +0000

Once upon a time I was at a dinner at a computer science conference. At that time the existence of Chrome was a deeply guarded secret; I knew of it, but I was sworn to secrecy. Out of the blue, one of my dinner companions turned to me and asked "is Google working on a browser?"

This was a terrible dilemma. I could not answer "no" or "I don't know"; Christians mustn't lie. "Yes" would have betrayed my commitment. Refusing to answer would obviously amount to a positive answer, as would any obvious attempt to dodge the question ("hey I think that's Donald Knuth over there!").

I can't remember exactly what I said, but it was something evasive, and I remember feeling it was not satisfactory. I spent a lot of time later thinking about what I should have said, and what I should say or do if a similar situation arises again. Perhaps a good answer would have been: "aren't you asking the wrong person?" Alternatively, go for a high-commitment distraction, perhaps a cleverly triggered app that self-dials a phone call. "You're going into labour? I'll be right there!" (Note: not really, this would also be a deception.) It's worth being prepared.

One thing I really enjoyed about working at Mozilla was that we didn't have many secrets to keep. Most of the secrets I had to protect were about other companies. Minimizing one's secrecy burden generally seems like a good idea, although I can't eliminate it because it's often helpful to other people for them to be able to share secrets with me in confidence.

Update: The situation for Christians has some nuance.

Mark Surman: The internet doesn’t suck

Tue, 09 Jan 2018 19:50:35 +0000

It’s easy to think the internet sucks these days. My day job is defending net neutrality and getting people to care about privacy and the like. From that perch, it more often than not feels like things are getting worse on the internet. So, I thought I’d share an experience that reminded me that the internet doesn’t suck as much as we might think. In fact, in many moments, the internet still delivers all the wonder and empowerment that made me fall in love with it 25 years ago. The experience in question: my two sons Facetimed me into their concert in Toronto last week, lovingly adding me to a show that I almost missed. A little more context: my eldest son was back from college for Christmas. He and his brother were doing a reunion show with their high school band (listen to them on Spotify). I was happy for them — and grumpy that the show was scheduled for the one night over my son’s holiday visit that I had to be on a work trip. My son felt bad, but the show must go on. While in Chicago on my trip, I got a text message. “Dad, can you be on Facetime around 9pm central?” Smile. “Yup,” I texted back. I eagerly waited for the call at the appointed time, but was distracted by a passionate conversation with a colleague about All The Things We Need to Do to Save the Internet. I looked at my phone about 9:20. Gulp. I’d missed two calls. Frown. I wished the kids well with the concert by text — and headed back to my hotel room. As I kicked back on the bed, the phone rang. I picked it up. There was Tristan. “Hey guys, here’s my dad, from Chicago.” He waves the phone over his head. I see the audience blur by. They scream and clap. I was at the concert! Tristan then handed the phone to a young woman in the audience. She looked at me quizzically and smiled. Then she pressed the screen to flip to the front camera on the phone. She shakily held me through the last two songs of the show. Which, by the way, was great. That band is tight. 
Tristan grabbed the camera and waved goodbye. My version of the show was over as quickly as it began. I felt so good for those 10 minutes. I was so proud and in love with my two sons. So grateful and impressed that Tristan had turned the challenge of me being away into a cool part of his live show schtick. And, so happy — and a bit reflective — about how skilled we’ve started to become as a society that loves and cares for each other using the internet. At this moment, the internet did not suck. Far from it. Tristan and I each had powerful computers and cameras in our pockets with high speed internet connections. We were easily able to make secure, ad-free, flawless point to point television for each other. And, each of us, including the young woman in the audience, knew how to make all this happen on a whim. Thinking back from my crazy activist camcorder days in the early 1990s, when I also heard the first crack of a modem, it’s hard to believe that we have built the digital world we have. And, thinking about it as a father in 2018, it feels like: this is an awesome way to be a family. This is the internet I wanted — and want [...]

Mozilla B-Team: happy bmo push day

Tue, 09 Jan 2018 14:56:31 +0000

release tag

the following changes have been pushed to

  • [1428166] Move to start of
  • [1428156] BMO will mark a revision as public if it sees a new one created without a bug id associated with it
  • [1428227] Google doesn’t index any more
  • [1428079] No horizontal scrollbar when bug summary is very long or narrow browser width
  • [1428146] Fix static assets url rewriting rule
  • [1427800] Wrong anchor scrolling with old UI
  • [1428642] Fix minor bugs on new global header
  • [1428641] Implement Requests quick look dropdown on global header
  • [1429060] Fix nagios checker blocker breakage from PR #340

discuss these changes on

This Week In Rust: This Week in Rust 216

Tue, 09 Jan 2018 05:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR. Updates from Rust Community News & Blog Posts New year's Rust: A call for community blogposts. Announcing Rust 1.23. Announcing Diesel 1.0 — a safe, extensible query builder and ORM. Ashley Williams joins the Core Team and taking lead of the Community Team. Lessons from the impl period. How to use Rust non lexical lifetimes on nightly. A proof-of-concept GraphQL server framework for Rust. Web scraping with Rust. A beginner-friendly tutorial highlighting Rust’s viability as a scripting language for everyday use. This week in Rust docs 87. [podcast] New Rustacean: News – Rust 1.23 – Rustdoc changes, the first impl period, Firefox Quantum, and more wasm! (Also note the Script struct docs if you prefer reading to listening!) [videos] Videos from Rust Belt Rust 2017 are now available. #Rust2018 2018 should be boring by nrc. Don’t be the new Haskell by /u/tibodelor. Improving how we improve Rust in 2018 by jonathandturner. Three humble paper cuts by cessen. What Rust needs in 2018 to succeed by llogiq. What I want changed for Rust to help Way Cooler by Timidger. Back to the roots by /u/0b_0101_001_1010. Looking back and looking forward by est31. My wish list for 2018 by mmrath. Looking in on Rust in 2018 by KasMA1990. The new wave of Rust by QuietMisdreavus. New faces for our lovely bots by LukasKalbertodt. Better Debug derive by lokathor. Machine learning perspective by /u/osamc. Rust 2018 by AndrewBrinker. Goals and directions for Rust in 2018 by wezm. Crate of the Week This week's crate is artifact, a design documentation tool. 
Thanks to musicmatze for the suggestion! Submit your suggestions and votes for next week! Call for Participation Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information. New year's Rust: A call for community blogposts. Get started with these beginner-friendly issues. [medium] mdBook: Introduce preprocessors. If you are a Rust project owner and are looking for contributors, please submit tasks here. Updates from Rust Core 130 pull requests were merged in the last week delete the old docs, lift up the new generate code for unused const- and inline-fns if -Clink-dead-code is specified allow non-alphabetic underscores in camel case NLL fixes only bump error count when we are sure that the diagnostic is not a repetition limit style lint to non-synthetic generic params try to improve LLVM pass ordering and the pass manager order use name-discarding [...]

Mozilla GFX: Retained Display Lists

Tue, 09 Jan 2018 02:26:34 +0000

Hot on the heels of Off-Main-Thread Painting, our next big Firefox graphics performance project is Retained Display Lists! If you haven’t already read it, I highly recommend reading David’s post about Off-Main-Thread Painting, as it provides a lot of background information on how our painting pipeline works. Display list building is the process in which we collect the set of high-level items that we want to display on screen (borders, backgrounds, text and many, many more) and then sort it according to the CSS painting rules into the correct back-to-front order. It’s at this point that we figure out which parts of the page are currently visible on-screen. Currently, whenever we want to update what’s on the screen, we build a full new display list from scratch and then we use it to paint everything on the screen. This is great for simplicity: we don’t need to worry about figuring out which bits changed or went away. Unfortunately, it can take a really long time. This has always been a problem, but as websites get more complex and users get higher resolution monitors the problem has been magnified. The solution is to retain the display list between paints, build a new display list only for the parts of the page that changed since we last painted, and then merge the new list into the old to get an updated list. This adds a lot more complexity, since we need to figure out which items to remove from the old list, and where to insert new items. The upside is that in a lot of cases the new list can be significantly smaller than a full list, and we have the opportunity to save a lot of time. If you’re interested in the lower level details of how the partial updates and merging work, take a look at the project planning document.

Motivation

As part of the lead-up to Firefox Quantum, we added new telemetry to Firefox to help us measure painting performance, and to let us make more informed decisions as to where to direct our efforts.
One of these measurements defined a minimum threshold for a ‘slow’ paint (16ms), and recorded percentages of time spent in various paint stages when it occurred. We expected display list building to be significant, but were still surprised by the results: on average, display list building was consuming more than 40% of the total paint time, for work that was largely identical to the previous frame. We’d long been planning an overhaul of how we built and managed display lists, but with this new data we decided that it needed to be a top priority for our Painting team.

Results

Once we had everything working, the next step was to see how much of an effect it had on performance! We ran an A/B test on the Beta 58 population so that we could collect telemetry for the two groups and compare the results. The first and most significant change is that the frequency of slow paints dropped by almost 30%! The horizontal axis shows the duration of the paints, and the vertical axis shows how frequently (as a percent) this duration happened. As you can see, paints in the 2-7ms range beca[...]
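As an illustration only (the item type, ids and ordering rule below are invented, and Gecko’s real merge algorithm is far more involved, since it must preserve CSS paint order on insertion), the core retain-and-merge idea sketched in the post looks roughly like this:

```rust
use std::collections::HashSet;

// Hypothetical display item: `frame` identifies which part of the page
// produced it; `desc` stands in for the actual drawing commands.
#[derive(Clone, Debug, PartialEq)]
struct Item {
    frame: u32,
    desc: String,
}

// Merge a partial list (rebuilt only for frames in `modified`) into the
// retained list from the previous paint. This deliberately naive version
// keeps unmodified items in their old order, drops stale items, and
// appends the rebuilt ones at the end.
fn merge(old: &[Item], new_partial: &[Item], modified: &HashSet<u32>) -> Vec<Item> {
    let mut merged: Vec<Item> = old
        .iter()
        .filter(|item| !modified.contains(&item.frame)) // drop stale items
        .cloned()
        .collect();
    merged.extend(new_partial.iter().cloned()); // add freshly built items
    merged
}
```

The win comes from the fact that `new_partial` only covers the changed frames, so most of each paint’s display list is reused instead of rebuilt.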

Marco Castelluccio: How to collect code coverage on Windows with Clang

Tue, 09 Jan 2018 00:00:00 +0000

With the upcoming version of Clang 6, support for collecting code coverage information on Windows is now mature enough to be used in production. As a proof, we can tell you that we have been using Clang to collect code coverage information on Windows for Firefox. In this post, I will show you a simple example to go from a C++ source file to a coverage report (in a readable format or in a JSON format which can be parsed to generate custom nice reports or upload results to Coveralls/Codecov).

Build

Let's say we have a simple file, main.cpp:

#include <iostream>

int main() {
  int reply = 42;

  if (reply == 42) {
    std::cout << "42" << std::endl;
  } else {
    std::cout << "impossible" << std::endl;
  }

  return 0;
}

In order to make Clang generate an instrumented binary, pass the '--coverage' option to clang:

clang-cl --coverage main.cpp

In the directory where main.cpp is, both the executable file of your program and a file with extension 'gcno' will be present. The gcno file contains information about the structure of the source file (functions, branches, basic blocks, and so on).

09/01/2018  16:21    <DIR>          .
09/01/2018  16:21    <DIR>          ..
09/01/2018  16:20               173 main.cpp
09/01/2018  16:21           309.248 main.exe
09/01/2018  16:21            88.372 main.gcno

Run

Now, the instrumented executable can be executed. A new file with the extension 'gcda' will be generated. It contains the coverage counters associated with the 'gcno' file (how many times a line was executed, how many times a branch was taken, and so on).

09/01/2018  16:22    <DIR>          .
09/01/2018  16:22    <DIR>          ..
09/01/2018  16:20               173 main.cpp
09/01/2018  16:21           309.248 main.exe
09/01/2018  16:22            21.788 main.gcda
09/01/2018  16:21            88.372 main.gcno

At this point, we need a tool to parse the gcno/gcda files that were generated by Clang. There are two options, llvm-cov and grcov: llvm-cov is part of LLVM and can generate bare-bones reports; grcov is a separate tool and can generate LCOV, Coveralls and Codecov reports. We had to develop grcov (in Rust!) to have a tool that could scale to the size of Firefox.
Parse with llvm-cov

You can simply run:

llvm-cov gcov main.gcno

Several files with extension gcov will be generated (one for each source file of your project, including system header files). For example, here's main.cpp.gcov:

        -:    0:Source:main.cpp
        -:    0:Graph:main.gcno
        -:    0:Data:main.gcda
        -:    0:Runs:1
        -:    0:Programs:1
        -:    1:#include <iostream>
        -:    2:
        -:    3:int main() {
        1:    4:  int reply = 42;
        -:    5:
        1:    6:  if (reply == 42) {
        1:    7:    std::cout << "42" << std::endl;
        1:    8:  } else {
    #####:    9:    std::cout << "impossible" << std::endl;
        -:   10:  }
        -:   11:
        1[...]

Niko Matsakis: #Rust2018

Mon, 08 Jan 2018 23:00:00 +0000

As part of #Rust2018, I thought I would try to write up my own (current) perspective. I'll try to keep things brief. First and foremost, I think that this year we have to finish what we started and get the "Rust 2018" release out the door. We did good work in 2017: now we have to make sure the world knows it and can use it. This primarily means we have to do stabilization work, both for the recent features added in 2017 as well as some, ahem, longer-running topics, like SIMD. It also means keeping up our focus on tooling, like IDE support, rustfmt, and debugger integration. Looking beyond the Rust 2018 release, we need to continue to improve Rust's learning curve. This means language changes, yes, but also improvements in tooling, error messages, documentation, and teaching techniques. One simple but very important step: more documentation targeting intermediate-level Rust users. I think we should focus on butter-smooth (and performant!) integration of Rust with other languages. Enabling incremental adoption is key.1 This means projects like Helix but also working on bindgen and improving our core FFI capabilities. Caution is warranted, but I think there is room for us to pursue a select set of advanced language features. I am thinking primarily of const generics, procedural macros, and generic associated types. Each of these can be a massive enabler. They also are fairly obvious generalizations of things that the compiler currently supports, so they don't come at a huge complexity cost to the language. It's worth emphasizing also that we are not done when it comes to improving compiler performance. The incremental infrastructure is working and en route to a stable compiler near you, but we need to shoot for instantaneous build times after a small change (e.g., adding a println! to a function). (To help with this, I think we should start a benchmarking group within the compiler team (and/or the infrastructure team).
This group would be focused on establishing and analyzing important benchmarks for both compilation time and the performance of generated code. Among other things, this group would maintain and extend the site. I envision people in this group both helping to identify bottlenecks and, when it makes sense, working to fix them.) I feel like we need to do more production user outreach. I would really like to get to the point where we have companies other than Mozilla paying people to work full-time on the Rust compiler and standard library, similar to how Buoyant has done such great work for tokio. I would also really like to be getting more regular feedback from production users on their needs and experiences. I think we should try to gather some kind of limited telemetry, much like what Jonathan Turner discussed. I think it would be invaluable if we had input on typical compile times that people are experiencing or – even better – some insight into what errors they are getting, and maybe the edits m[...]

Zibi Braniecki: Multilingual Gecko in 2017

Mon, 08 Jan 2018 20:53:43 +0000

The outline In January 2017, we set the course to get a new localization framework named Fluent into Firefox. Below is a story of the work performed on the Firefox engine – Gecko – over the last year to make Fluent in Firefox possible. This has been a collaborative effort involving a lot of people from different teams. It’s impossible to document all the work, so keep in mind that the following is just the story of the Gecko refactor, while many other critical pieces were being tackled outside of that range. Also, the nature of the project does make the following blog post long, text heavy and light on pictures. I apologize for that and hope that the value of the content will offset this inconvenience and make it worth reading. Why? The change is necessary and long overdue – our aged localization model is brittle, is blocking us from adding new localizability features for developers and localizers, doesn’t work with the modern Intl APIs and doesn’t scale to other architectural changes we’re planning like migration away from XBL/XUL. Fluent is a modern, web-ready localization system developed by Mozilla over the last 7 years. It was initially intended for Firefox OS, but the long term goal was always to use it in Firefox (first attempt from 7 years ago!). Unfortunately, replacing the old system with Fluent is a monumental task similar in scope only to other major architectural changes like Electrolysis or Oxidation. The reason for that is not the localization system change itself, but rather that localization in Gecko has more or less been set in stone since the dawn of the project.  Most of the logic and some fundamental paradigms in Gecko are deeply rooted in architectural choices from 1998-2000.  Since then, a lot of build system choices, front end code, and core runtime APIs were written with assumptions that hold true only for this system. 
Getting rid of those assumptions requires major refactors of many of the Internationalization modules, build system pieces, language packs, test frameworks, and resource handling modules before we even touch the front end. All of this will have to be migrated to the new API. On top of that, the majority of the Internationalization APIs in Gecko were designed at the time when Gecko could not carry its own internationalization system. Instead, it used host operating system methods to format dates, times, numbers etc. Those approaches were incomplete and the outcome differed from setup to setup, making the Gecko platform unpredictable and hard to internationalize. Fluent has been designed to be aligned with the modern Internationalization API for Javascript developed as the ECMA402 standard by the TC39 Working Group. This is the API available to all web content and fortunately, by 2016, Gecko was already carrying this API powered by a modern cross-platform internationalization library designed by the Unicode Consortium – ICU. That meant that we were using a modern sta[...]

Air Mozilla: Mozilla Weekly Project Meeting, 08 Jan 2018

Mon, 08 Jan 2018 19:00:00 +0000

The Monday Project Meeting

Nick Cameron: A proof-of-concept GraphQL server framework for Rust

Mon, 08 Jan 2018 18:02:06 +0000

Recently, I've been working on a new project, a framework for GraphQL server implementations in Rust. It's still very much at the proof of concept stage, but it is complete enough that I want to show it to the world. The main restriction is that it only works with a small subset of the GraphQL language. As far as I'm aware, it's the only framework which can provide an 'end to end' implementation of GraphQL in Rust (i.e., it handles IDL parsing, generates Rust code from IDL, and parses, validates, and executes queries). The framework provides a seamless GraphQL interface for Rust servers. It is type-safe, ergonomic, very low boilerplate, and customisable. It has potential to be very fast. I believe that it can be one of the best experiences for GraphQL development in any language, as well as one of the fastest implementations (in part, because it seems to me that Rust and GraphQL are a great fit).

GraphQL

GraphQL is an interface for APIs. It's a query-based alternative to REST. A GraphQL API defines the structure of data, and clients can query that structured data. Compared to a traditional RESTful API, a GraphQL interface is more flexible and allows for clients and servers to evolve more easily and separately. Since a query returns exactly the data the client needs, there is less over-fetching or under-fetching of data, and fewer API calls. A good example of what a GraphQL API looks like is v4 of the GitHub API; see their blog post for a good description of GraphQL and why they chose it for their API. Compared to a REST or other HTTP APIs, a GraphQL API takes a fair bit more setting up. Whereas you can make a RESTful API from scratch with only a little work on top of most HTTP servers, to present a GraphQL API you need to use a server library.
That library takes care of understanding your data's schema, parsing and validating (effectively type-checking) queries against that schema, and orchestrating execution of queries (a good server framework also takes care of various optimisations such as caching query validation and batching parts of execution). The server developer plugs code into the framework as resolvers - code which implements 'small' functions in the schema.

Rust and GraphQL

Rust is a modern systems language: it offers a rich type system, memory safety without garbage collection, and an expanding set of libraries for writing very fast servers. I believe Rust is an excellent fit for implementing GraphQL servers. Over the past year or so, many developers have found Rust to be excellent for server-side software, and providing a good experience in that domain was a key goal for the Rust community in 2017. Rust's data structures are a good match for GraphQL data structures; combined with a strong, safe, and static type system, this means that GraphQL implementations can be type-safe. Rust's powerful procedural macro system means that a GraphQL framework can[...]

Mozilla Addons Blog: New Contribution Opportunity: Content Review for addons.mozilla.org

Mon, 08 Jan 2018 16:30:11 +0000

For over a dozen years, extension developers have volunteered their time and skills to review extensions submitted to addons.mozilla.org (AMO). While they primarily focused on ensuring that an extension's code adhered to Mozilla's add-on policies, they also moderated the content of the listings themselves, like titles, descriptions, and user reviews.

To help add-on reviewers focus on the technical aspects of extension review and expand contribution opportunities to non-technical volunteers, we are creating a new volunteer program for reviewing listing content.

Add-on content reviewers will be focused on ensuring that extensions listed on AMO comply with Mozilla's Acceptable Use Policy. Having a team of dedicated content reviewers will help ensure that extensions listed on AMO are not spam and do not contain hate speech or obscene materials.

Since no previous development experience is necessary to review listing content, this is a great way to make an impactful, non-technical contribution to AMO. If you have a keen eye for details and want to make sure that users and developers have a great experience on AMO, please take a look at our wiki to learn more about how to become an add-on content reviewer.

The post New Contribution Opportunity: Content Review for addons.mozilla.org appeared first on Mozilla Add-ons Blog.

Will Kahn-Greene: Socorro in 2017

Mon, 08 Jan 2018 14:00:00 +0000


Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

2017 was a big year for Socorro. In this blog post, I opine about our accomplishments.


Robert O'Callahan: The Fight For Patent-Unencumbered Media Codecs Is Nearly Won

Mon, 08 Jan 2018 09:41:50 +0000

Apple joining the Alliance for Open Media is a really big deal. Now all the most powerful tech companies — Google, Microsoft, Apple, Mozilla, Facebook, Amazon, Intel, AMD, ARM, Nvidia — plus content providers like Netflix and Hulu are on board. I guess there's still no guarantee Apple products will support AV1, but it would seem pointless for Apple to join AOM if they're not going to use it: apparently AOM membership obliges Apple to provide a royalty-free license to any "essential patents" it holds for AV1 usage.

It seems that the only thing that can stop AOM and AV1 eclipsing patent-encumbered codecs like HEVC is patent-infringement lawsuits (probably from HEVC-associated entities). However, the AOM Patent License makes that difficult. Under that license, the AOM members and contributors grant rights to use their patents royalty-free to anyone using an AV1 implementation — but your rights terminate if you sue anyone else for patent infringement for using AV1. (It's a little more complicated than that — read the license — but that's the idea.) It's safe to assume AOM members do hold some essential patents covering AV1, so every company has to choose between being able to use AV1, and suing AV1 users. They won't be able to do both. Assuming AV1 is broadly adopted, in practice that will mean choosing between making products that work with video, or being a patent troll. No doubt some companies will try the latter path, but the AOM members have deep pockets and every incentive to crush the trolls.

Opus (audio) has been around for a while now, uses a similar license, and AFAIK no patent attacks are hanging over it.

Xiph, Mozilla, Google and others have been fighting against patent-encumbered media for a long time. Mozilla joined the fight about 11 years ago, and lately it has not been a cause célèbre, being eclipsed by other issues. Regardless, this is still an important victory. Thanks to everyone who worked so hard for it for so long, and special thanks to the HEVC patent holders, whose greed gave free-codec proponents a huge boost.

Andy McKay: A WebExtensions scratch pad

Mon, 08 Jan 2018 08:00:00 +0000

Every Event is a daft little WebExtension I wrote a while ago to try and capture the events that WebExtensions APIs fire. It tries to listen to every possible event that is generated. I've found this pretty useful in the past for asking questions like "What tab events fire when I move tabs between windows?" or "What events fire when I bookmark something?".

To use Every Event, install it from To open, click the alarm icon on the menu bar.


If you turn on every event for everything, you get an awful lot of traffic to the console. You might want to limit that down. So to test what happens when you bookmark something, click "All off", then click "bookmarks". Then click "Turn on". Then open the "Browser Console".

Each time you bookmark something, you'll see the event and the contents of the API as shown below:


That's also thanks to the Firefox Developer Tools which are great for inspecting objects.

But there's one other advantage to having Every Event around. Because it requests every single permission, it has access to every API. So that means if you go to about:debugging and then click on Debug for Every Event, you can play around with all the APIs and get nice autocomplete:


All you have to do is enter "browser." at the browser console and all the WebExtension APIs are there, autocompletable.

Let's add in our own custom handler for browser.bookmarks.onRemoved.addListener and see what happens when I remove a bookmark...


Finally, I keep a checkout of Every Event nearby on all my machines. All I have to do is enter the Every Event directory and start web-ext:

web-ext run --firefox /Applications/ --verbose --start-url about:debugging

That's aliased to a nice short command on my Mac and gives me a clean profile with the relevant console just one click away...

Update: see also shell WebExtension project by Martin Giger which has some more sophisticated content script support.

Cameron Kaiser: Actual field testing of Spectre on various Power Macs (spoiler alert: G3 and 7400 survive!)

Mon, 08 Jan 2018 04:24:00 +0000

Tip of the hat to miniupnp who ported the Spectre proof of concept to PowerPC intrinsics. I ported it to 10.2.8 so I could get a G3 test result, and then built generic PowerPC, G3, 7400, 7450 and G5 versions at -O0, -O1, -O2 and -O3 for a grand total of 20 variations. Recall from our most recent foray into the Spectre attack that I believed the G3 and 7400 would be hard to successfully exploit because of their unusual limitations on speculative execution through indirect branches. Also, remember that this PoC assumes the most favourable conditions possible: that it already knows exactly what memory range it's looking for, that the memory range it's looking for is in the same process and there is no other privilege or partition protection, that it can run and access system registers at full speed (i.e., is native), and that we're going to let it run to completion. miniupnp's implementation uses the mftb(u) instructions, so if you're porting this to the 601, you weirdo, you'll need to use the equivalent on that architecture. I used Xcode 2.5 and gcc 4.0.1. Let's start with, shall we say, a positive control. 
I felt strongly the G5 would be vulnerable, so here's what I got on my Quad G5 (DC/DP 2.5GHz PowerPC 970MP) under 10.4.11 with Energy Saver set to Reduced Performance:

-arch ppc -O0: partial failure (two bytes wrong, but claims all "success")
-arch ppc -O1: recovers all bytes (but claims all "unclear")
-arch ppc -O2: same
-arch ppc -O3: same
-arch ppc750 -O0: partial failure (twenty-two bytes wrong, but claims all "unclear")
-arch ppc750 -O1: recovers all bytes (but claims all "unclear")
-arch ppc750 -O2: almost complete failure (twenty-five bytes wrong, but claims all "unclear")
-arch ppc750 -O3: almost complete failure (twenty-six bytes wrong, but claims all "unclear")
-arch ppc7400 -O0: almost complete failure (twenty-eight bytes wrong, claims all "success")
-arch ppc7400 -O1: recovers all bytes (but claims all "unclear")
-arch ppc7400 -O2: almost complete failure (twenty-six bytes wrong, but claims all "unclear")
-arch ppc7400 -O3: almost complete failure (twenty-eight bytes wrong, but claims all "unclear")
-arch ppc7450 -O0: recovers all bytes (claims all "success")
-arch ppc7450 -O1: recovers all bytes (but claims all "unclear")
-arch ppc7450 -O2: same
-arch ppc7450 -O3: same
-arch ppc970 -O0: recovers all bytes (claims all "success")
-arch ppc970 -O1: recovers all bytes, but noticeably more slowly (and claims all "unclear")
-arch ppc970 -O2: partial failure (one byte wrong, but claims all "unclear")
-arch ppc970 -O3: recovers all bytes (but claims all "unclear")

Twiddling CACHE_HIT_THRESHOLD to any value other than 1 caused the test to fail completely, even on the working scenarios. These results are frankly all over the map and [...]

Cameron Kaiser: More about Spectre and the PowerPC (or why you may want to dust that G3 off)

Mon, 08 Jan 2018 04:07:13 +0000

UPDATE: IBM is releasing firmware patches for at least the POWER7+ and forward, including the POWER9 expected to be used in the Talos II. My belief is that these patches disable speculative execution through indirect branches, making the attack much more difficult though with an unclear performance cost. See below for why this matters.

UPDATE the 2nd: The G3 and 7400 survived Spectre! (my personal favourite Blofeld)

Most of the reports on the Spectre speculative execution exploit have concentrated on the two dominant architectures, x86 (in both its AMD and Meltdown-afflicted Intel forms) and ARM. In our last blog entry I said that PowerPC is vulnerable to the Spectre attack, and in broad strokes it is. However, I also still think that the attack is generally impractical on Power Macs due to the time needed to meaningfully exfiltrate information on machines that are now over a decade old, especially with JavaScript-based attacks even with the TenFourFox PowerPC JIT (to say nothing of various complicating microarchitectural details). But let's say that those practical issues are irrelevant or handwaved away. Is PowerPC unusually vulnerable, or on the flip side unusually resistant, to Spectre-based attacks compared to x86 or ARM? For the purposes of this discussion and the majority of our audience, I will limit this initial foray to processors used in Power Macintoshes of recent vintage, i.e., the G3, G4 and G5, though the G5's POWER4-derived design also has a fair bit in common with later Power ISA CPUs like the Talos II's POWER9, and ramifications for future Power ISA CPUs can be implied from it. I'm also not going to discuss embedded PowerPC CPUs here such as the PowerPC 4xx since I know rather less about their internal implementational details. First, let's review the Spectre white paper. Speculative execution, as the name implies, allows the CPU to speculate on the results of an upcoming conditional branch instruction that has not yet completed.
It predicts future program flow will go a particular way and executes that code upon that assumption; if it guesses right, and most CPUs do most of the time, it has already done the work and time is saved. If it guesses wrong, then the outcome is no worse than idling during that time, save for the additional power usage and the need to restore the previous state. Doing this requires that code be loaded into the processor cache to be run, however, and the cache is not restored to its previous state; previously, no one thought that would be necessary. The Spectre attack proves that this seemingly benign oversight is in fact not so. To determine the PowerPC's vulnerability requires looking at how it does branch prediction and indirect branching. Indirect branching, [...]
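To make the terminology concrete, here is a minimal C++ sketch of the two branch kinds the Spectre variants target. This is illustrative code only, not an exploit; the function names are invented for this example.

```cpp
#include <cstddef>

// Variant 1 targets conditional branches: the CPU predicts taken /
// not-taken and may speculatively execute the body before the
// comparison has actually resolved.
int conditionalBranch(const int* table, std::size_t len, std::size_t i) {
    if (i < len) {          // predicted conditional branch
        return table[i];    // may run speculatively even when i >= len
    }
    return -1;
}

// Variant 2 targets indirect branches: the destination comes from a
// register or memory, so the CPU has to predict *where* the call goes,
// and an attacker may be able to steer that prediction.
using Handler = int (*)(int);

int indirectBranch(Handler h, int arg) {
    return h(arg);          // indirect call; destination is predicted
}
```

Whether these predictions leave exploitable cache footprints is exactly the microarchitectural question the rest of the post examines for each PowerPC generation.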

Mozilla Marketing Engineering & Ops Blog: Kuma Report, December 2017

Mon, 08 Jan 2018 00:00:00 +0000

Here's what happened in December in Kuma, the engine of MDN Web Docs:

- Purged 162 KumaScript macros
- Increased availability of MDN
- Improved Kuma deployments
- Added browser compatibility data
- Said goodbye to Stephanie Hobson
- Shipped tweaks and fixes by merging 209 pull requests, including 13 pull requests from 11 new contributors

Here's the plan for January:

- Prepare for a CDN
- Ship more interactive examples
- Update Django to 1.11
- Plan for 2018

Done in December

Purged 162 KumaScript Macros

We moved the KumaScript macros to GitHub in November 2016, and added a new macro dashboard. This gave us a clearer view of macros across MDN, and highlighted that there were still many macros that were unused or lightly used. Reducing the total macro count is important as we change the way we localize the output and add automated tests to prevent bugs. We scheduled some time to remove these old macros at our Austin work week, when multiple people could quickly double-check macro usage and merge the 86 Macro Massacre pull requests. Thanks to Florian Scholz, Ryan Johnson, and wbamberg, we've removed 162 old macros, or 25% of the total at the start of the month.

Increased Availability of MDN

We made some additional changes to keep MDN available and to reduce alerts. Josh Mize added rate limiting to several public endpoints, including the homepage and wiki documents (PR 4591). The limits should be high enough for all regular visitors, and only high-traffic scrapers should be blocked. I adjusted our liveness tests, but kept the database query for now (PR 4579). We added new thresholds for liveness and readiness in November, and these appear to be working well. We continue to get alerts about MDN during high-traffic spikes. We'll continue to work on availability in 2018.

Improved Kuma Deployments

Ryan Johnson worked to make our Jenkins-based tests more reliable. For example, Jenkins now confirms that MySQL is ready before running tests that use the database (PR 4581).
This helped find an issue with the database being reused, and we’re doing a better job of cleaning up after tests (PR 4599). Ryan continued developing branch-based deployments, making them more reliable (PR 4587) and expanding to production deployments (PR 4588). We can now deploy to staging and production by merging to stage-push and prod-push for Kuma as well as KumaScript, and we can monitor the deployment with bot notifications in #mdndev. This makes pushes easier and more reliable, and gets us closer to an automated deployment pipeline. Added Browser Compatibility Data Daniel D. Beck continued to convert CSS compatibility data from the wiki to the repository, and wrote 35 of the 57 PRs mer[...]

Nick Cameron: Rust 2018

Sun, 07 Jan 2018 22:39:37 +0000

I want 2018 to be boring. I don't want it to be slow, I want lots of work to happen, but I want it to be 'boring' work. We got lots of big new things in 2017 and it felt like a really exciting year (new language features, new tools, new libraries, whole new ways of programming (!), new books, new teams, etc.). That is great and really pushed Rust forward, but I feel we've accumulated a lot of technical and social debt along the way. I would like 2018 to be a year of consolidation on 2017's gains, of paying down technical debt, and polishing new things into great things. More generally, we could think of a tick-tock cadence to Rust's evolution - 2015 and 2017 were years with lots of big, new things, 2016 and 2018 should be consolidation years.

Some specifics

Not in priority order.

- finish design and implementation of 'in flight' language features: const exprs, modules and crates, macros, default generics, ergonomics initiative things, impl Trait, specialisation, more ...
- stabilisation debt (there are a lot of features that are 'done' but need stabilising. This is actually a lot of work since the risk is higher at this stage than at any other point in the language design process, so although it looks like just ticking a box, it takes a lot of time and effort).
- async/await - towards a fully integrated language feature and complete library support, so that Rust is a first choice for async programming.
- unsafe guidelines - we need this for reliable and safe programming and to facilitate compiler optimisations. There is too much uncertainty right now.
- web assembly support - we started at the end of 2017, there is lots of opportunity for Rust in the space.
- compiler performance - we made some big steps in 2017 (incremental compilation), but there is lots of 'small' work to do before compiling Rust programs is fast in the common case. It's also needed for a great IDE experience.
- error handling - the Failure library is a good start, I think it is important that we have a really solid story here. There are other important pieces too, such as ? in main, stable catch blocks, and probably some better syntax for functions which return Results.
- IDE support - we're well on our way and made good progress in 2017. We need to release the RLS, improve compiler integration, and then there's lots of opportunity for improving the experience, for example with debugger integration and refactoring tools.
- mature other tools (Rustfmt and Clippy should both have 1.0 releases and we should have a robust distribution mechanism)
- Cargo build system integration (we planned this out in 2017, but didn't start implementation)
- ongoing improvements (in particular I think we need to address[...]

Robert O'Callahan: Ancient Browser-Wars History: MD5-Hashed Posts Declassified

Sun, 07 Jan 2018 05:40:04 +0000

2007-2008 was an interesting time for Mozilla. In the market, Firefox was doing well, advancing steadily against IE. On the technical front we were doing poorly. Webkit was outpacing us in performance and rapid feature development. Gecko was saddled with design mistakes and technical debt, and Webkit captured the mindshare of open-source contributors. We knew Google was working on a Webkit-based browser which would probably solve Webkit's market-share problems. I was very concerned and, for a while, held the opinion that Mozilla should try to ditch Gecko and move everything to Webkit. For me to say so loudly would have caused serious damage, so I only told a few people. In public, I defended Gecko from unfair attacks but was careful not to contradict my overall judgement. I wasn't the only one to be pessimistic about Gecko. Inside Mozilla, under the rubric of "Mozilla 2.0", we thrashed around for considerable time trying to come up with short-cuts to reducing our technical debt, such as investments in automatic refactoring tools. Outside Mozilla, competitors expected to rapidly outpace us in engine development. As it turned out, we were all mostly wrong. We did not find any silver bullets, but just by hard work Gecko mostly kept up, to an extent that surprised our competitors. Weaknesses in Webkit — some expedient shortcuts taken to boost performance or win points on specific compatibility tests, but also plain technical debt — became apparent over time. Chrome boosted Webkit, but Apple/Google friction also caused problems that eventually resulted in the Blink fork. The reaction to Firefox 57 shows that Gecko is still at least competitive today, even after the enormous distraction of Mozilla's failed bet on FirefoxOS. 
One lesson here is that even insiders can be overly pessimistic about the prospects of an old codebase; dedicated, talented staff working over the long haul can do wonders, and during that time your competitors will have a chance to develop their own problems. Another lesson: in 2007-2008 I was overly focused on toppling IE (and Flash and WPF), and thought having all the open-source browsers sharing a single engine implementation wouldn't be a big problem for the Web. I've changed my mind completely; the more code engines share, the more de facto standardization of bugs we would see, so having genuinely separate implementations is very important. I'm very grateful to Brendan and others for disregarding my opinions and not letting me lead Mozilla down the wrong path. It would have been a disaster for everyone. To let off steam, and leave a paper trail for the future, I wrote four blog posts during 2007-2008 describin[...]

Gervase Markham: A New Scam?

Sat, 06 Jan 2018 20:38:53 +0000

I got this email recently; I’m 99% sure it’s some new kind of scam, but it’s not one I’ve seen before. Anyone have any info? Seems like it’s not totally automated, and targets Christians. Or perhaps it’s some sort of cult recruitment? The email address looks very computer-generated (

Good morning,

I am writing in accordance to my favourite Christian website, I could do with sending you some documents regarding Christ. I am a Christian since the age of 28, when I got a knock at the door at my house by a group of males asking me to come to a Christian related event, I of course graciously accepted.

I have since opened up about my homosexuality which my local church somewhat accepted, as I am of course, one of the most devout members of the Church. I am very grateful to the church for helping me discover whom I really was at a time where I needed to discover who I was the most.

I would like to obtain your most recent address, as I have seen on your website that you have recently moved house (as of 2016) to a Loughborough address. I would like to send you some documents regarding my struggles with depression and then finding God and how much he helped me discover my real identity.

I thank you very much for your aid in helping me find God and Christ within myself, as you helped me a lot with your website and your various struggles, which gave me strength to succeed and to carry on in the name of Jesus Christ, our Lord and Saviour.

Hope to hear a reply soon,

Kind regards,


Cameron Kaiser: Is PowerPC susceptible to Spectre? Yep.

Sat, 06 Jan 2018 20:09:26 +0000

UPDATE: Yes, TenFourFox will implement relevant Spectre-hardening features being deployed to Firefox, and the changes to will be part of FPR5 final. We also don't support SharedArrayBuffer anyway and right now are not likely to implement it any time soon.

UPDATE the 2nd: This post is getting a bit of attention and was really only intended as a quick skim, so if you're curious whether all PowerPC chips are vulnerable in the same fashion and why, read on for a deeper dive.

If you've been under a rock the last couple days, then you should read about Meltdown and Spectre (especially if you are using an Intel CPU). Meltdown is specific to x86 processors made by Intel; it does not appear to affect AMD. But virtually every CPU going back decades that has a feature called speculative execution is vulnerable to a variety of the Spectre attack. In short, for those processors that execute "future" code downstream in anticipation of what the results of certain branching operations will be, Spectre exploits the timing differences that occur when certain kinds of speculatively executed code change what's in the processor cache. The attacker may not be able to read the memory directly, but (s)he can find out if it's in the cache by looking at those differences (in broad strokes, stuff in the cache is accessed more quickly), and/or exploit those timing changes as a way of signaling the attacking software with the actual data itself. Although only certain kinds of code can be vulnerable to this technique, an attacker could trick the processor into mistakenly speculatively executing code it wouldn't ordinarily run. These side effects are intrinsic to the processor's internal implementation of this feature, though the attack is made easier if you have the source code of the victim process, which is increasingly common. The Power ISA is fundamentally vulnerable going back even to the days of the original PowerPC 601, as are virtually all current architectures, and there are no simple fixes.
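The code shape that Spectre-style speculation abuses, and the index-masking mitigation that browser and kernel developers have proposed, can be sketched roughly as follows. This is illustrative Python only (a Python interpreter doesn't speculatively execute anything), and all names and sizes are invented for the example:

```python
# Illustrative sketch of a Spectre variant-1 style "gadget" and the
# index-masking mitigation. A real exploit also needs a cache side
# channel to observe the speculative load, which is not modeled here.

TABLE_SIZE = 256  # a power of two, so a simple AND mask keeps indices in bounds

table = list(range(TABLE_SIZE))

def lookup_unsafe(index):
    # On real hardware, the CPU may speculate past this bounds check and
    # perform an out-of-bounds load whose address depends on secret data,
    # leaving a measurable footprint in the cache.
    if index < TABLE_SIZE:
        return table[index]
    return None

def lookup_masked(index):
    # Masking the index means even a speculatively executed load stays
    # inside the table, so out-of-bounds data never reaches the cache.
    if index < TABLE_SIZE:
        return table[index & (TABLE_SIZE - 1)]
    return None
```

The hard part in C/C++, as the mitigation proposals note, is that the compiler rarely knows the table length at the lookup site, so this kind of mask generally has to be added by hand.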
So what's the practical impact to Power Macs? Well, not much. As far as directly executing an attacking application, there are a billion more effective ways to write a Trojan horse than this, and they would have to be PowerPC-specific (possibly even CPU-family specific due to microarchitectural changes) to be functional. It's certainly possible to devise JavaScript that could attack the cache in a similar fashion, especially since TenFourFox implements a PowerPC JIT, but such an attack would -- surprise! -- almost certainly have to be PowerPC-specific too, and the TenFourFox JIT doesn't easily give up the [...]

Jared Hirsch: An idea for fixing fake news with crypto

Sat, 06 Jan 2018 00:09:00 +0000

Here’s a sketch of an idea that could be used to implement a news trust system on the web. (This is apropos of Mozilla’s Information Trust Initiative, announced last year.)

Suppose we used public key crypto (digital badges?) to make some verifiable assertions about an online news item:

  • person X wrote the headline / article
  • news organization Y published the headline / article
  • person X is a credentialed journalist (degreed or affiliated with a news organization)
  • news organization Y is a credentialed media organization (not sure how to define this)
  • the headline / article has not been altered from its published form

The article could include a meta tag in the page’s header that links to a .well-known/news-trust file at a stable URL, and that file could then include everything needed to verify the assertions.
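A minimal sketch of how one of these assertions — that the article has not been altered from its published form — might be checked against such a .well-known/news-trust file. The record format and function names here are invented, and the public-key signature over the record that a real system would verify is elided:

```python
import hashlib

# Hypothetical sketch: checking the "not altered" assertion by comparing
# the fetched article text against a digest published in a
# .well-known/news-trust record. A real deployment would also verify a
# digital signature over this record with the publisher's public key.

def make_trust_record(author, publisher, article_text):
    """Build the kind of record a news-trust file might carry."""
    return {
        "author": author,
        "publisher": publisher,
        "sha256": hashlib.sha256(article_text.encode("utf-8")).hexdigest(),
    }

def article_unaltered(trust_record, article_text):
    """Recompute the digest and compare it to the published one."""
    digest = hashlib.sha256(article_text.encode("utf-8")).hexdigest()
    return digest == trust_record["sha256"]
```

An aggregator could run a check like article_unaltered against the fetched article body before surfacing any trust indicator to users.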

If this information were available to social media feeds / news aggregators, then it’d be easy to:

  • automatically calculate the trustworthiness of shared news items at internet scale
  • inform users which items are trustworthy (for example, show a colored border around the item)
  • factor trustworthiness into news feed algorithms (allow users to hide or show lower-scored items, based on user settings)

One unsolved issue here is who issues the digital credentials for media. I need to look into this.

What do you think? Is this a good idea, a silly idea, something that’s been done before?

I don’t have comments set up here, so you can let me know on twitter or via email.

Mozilla B-Team: Looking back at Bugzilla and BMO in 2017

Fri, 05 Jan 2018 05:46:05 +0000

dylanwh: Recently in the Bugzilla Project meeting, Gerv informed us that he would be resigning, and it was pretty clear that my lack of technical leadership was the cause. While I am sad to see Gerv go, it did make me realize I need to write more about the things I do. As is evident in this post, all of the things I’ve accomplished have been related to the BMO codebase and not upstream Bugzilla – which is why upstream must be rebased on BMO. See Bug 1427884 for one of the blockers to this.

Accessibility Changes

In 2017, we made over a dozen a11y changes, and I’ve heard from a well-known developer that using BMO with a screen reader is far superior to other bugzillas. :-)

Infrastructure Changes

BMO is quite happy to use carton to manage its perl dependencies, and Docker to handle its system-level dependencies. We’re quite close to being able to run on Kubernetes. While the code is currently turned off in production, we also feature a very advanced query translator that allows the use of ElasticSearch to index all bugs and comments.

Performance Changes

I sort of wanted to turn each of these into a separate blog post, but I never got time for that – and I’m even more excited about writing about future work. But rather than just let them hide away in bugs, I thought I’d at least list them and give a short summary.

February

Bug 1336958 - HTML::Tree requires manual memory management or it leaks memory. I discovered this while looking at some unrelated code. Bug 1335233 - I noticed that the job queue runner wasn’t calling the end-of-request cleanup code, and as a result it was also leaking memory.

March

Bug 1345181 - make html_quote() about five times faster. Bug 1347570 - make it so apache in the dev/test environments didn’t need to restart after every request (by enforcing a minimum memory limit). Bug 1350466 - switched JSON serialization to JSON::XS, which is nearly 1000 times faster.
Bug 1350467 - caused more modules (those provided by optional features) to be preloaded at apache startup. Bug 1351695 - Pre-load “.htaccess” files and allow apache to ignore them.

April

Bug 1355127 - rewrote a template that is in a tight loop to Perl. Bug 1355134 - fetch all groups at once, rather than row-at-a-time. Bug 1355137 - Cache objects that represent bug fields. Bug 1355142 - Instead of using a regular expression to “trick” Perl’s string tainting system, use a module to directly flip the “taint” bit. This was hundreds of times faster. Bug 1352264 - Compile all templates and store them[...]
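Several of these fixes follow a common pattern: replacing row-at-a-time work with a single batched operation, as in the groups fix. A rough illustration of the difference in Python (this is not the actual BMO Perl code; the dictionary stands in for a database table):

```python
# Illustrative sketch of the row-at-a-time vs. batched lookup pattern
# behind fixes like "fetch all groups at once".

GROUPS = {1: "admins", 2: "editors", 3: "canconfirm"}  # stand-in for a DB table

def fetch_group(group_id):
    # Stands in for one SELECT per id: N ids means N round trips.
    return GROUPS[group_id]

def fetch_groups_one_by_one(ids):
    return [fetch_group(g) for g in ids]

def fetch_groups_batched(ids):
    # Stands in for a single SELECT ... WHERE id IN (...): one round trip.
    wanted = set(ids)
    return [name for gid, name in sorted(GROUPS.items()) if gid in wanted]
```

Both return the same rows; the batched form trades N database round trips for one, which is where the speedup comes from.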

Mozilla Open Policy & Advocacy Blog: Mozilla statement on breach of Aadhaar data

Fri, 05 Jan 2018 04:08:10 +0000

Mozilla is deeply concerned about recent reports that a private citizen was able to easily access the private Aadhaar data of more than one billion Indian citizens as reported by The Tribune.

Despite declaring in November that the Aadhaar system had strict privacy controls, the Unique Identification Authority of India (UIDAI) has failed to protect the private details entrusted to them by Indians, validating the concerns that Mozilla and other organizations have been raising. Breaches like this demonstrate the urgent need for India to pass a strong data protection law.

Mozilla has been raising concerns about the security risks of companies using and integrating Aadhaar into their systems, and this latest, egregious breach should be a giant red flag to all companies as well as to the UIDAI and the Modi Government.

The post Mozilla statement on breach of Aadhaar data appeared first on Open Policy & Advocacy.

Niko Matsakis: Lessons from the impl period

Thu, 04 Jan 2018 23:00:00 +0000

So, as you likely know, we tried something new at the end of 2017. For roughly the final quarter of the year, we essentially stopped doing design work, and instead decided to focus on implementation – what we called the “impl period”. We had two goals for the impl period: (a) to get a lot of high-value implementation work done and (b) to do that by expanding the size of our community, and making it easy for new people to get involved. To that end, we spun up about 40 working groups, which is really a tremendous figure when you think about it, each of which was devoted to a particular task. For me personally, this was a very exciting three months. I really enjoyed the enthusiasm and excitement that was in the air. I also enjoyed the opportunity to work in a group of people collectively trying to get our goals done – one thing I’ve found working on an open-source project is that it is often a much more “isolated” experience than working in a more traditional company. The impl period really changed that feeling. I wanted to write a brief post kind of laying out my experience and trying to dive a bit into what I felt worked well and what did not. I’d very much like to hear back from others who participated (or didn’t). I’ve opened up a dedicated thread on internals for discussion, please leave comments there!

TL;DR

If you don’t want to read the details, here are the major points:

  • Overall, the impl period worked great. Having structure to the year felt liberating and I think we should do more of it.
  • We need to grow and restructure the compiler team around the idea of mentoring and inclusion. I think having more focused working groups will be a key part of that.
  • We have work to do on making the compiler code base accessible, beginning with top-down documentation but also rustdoc.
  • We need to develop skills and strategies for how to split tasks up.
  • IRC isn’t great, but Gitter wasn’t either. The search for a better chat solution continues.
=)

Worked well: establishing focus and structure to the year

Working on Rust often has this kind of firehose quality: so much is going on at once. At any one time, we are: fixing bugs in existing code, developing code for new features that have been designed, discussing the minutiae and experience of some existing feature we may consider stabilizing, designing new features and APIs via RFCs. It can get pretty exhausting to keep all that in your head at once. I really enjoyed having a quart[...]

Hacks.Mozilla.Org: New flexbox guides on MDN

Thu, 04 Jan 2018 16:10:36 +0000

In preparation for CSS Grid shipping in browsers in March 2017, I worked on a number of guides and reference materials for the CSS Grid specification, which were published on MDN. With that material updated, we thought it would be nice to complete the documentation with similar guides for Flexbox, and so I updated the existing material to reflect the core use cases of Flexbox. This works well; with the current state of the specs, Flexbox now sits alongside Grid and Box Alignment to form a new way of thinking about layout for the web. It is useful to reflect this in the documentation.

The new docs at a glance

I’ve added eight new guides to MDN:

  • Basic concepts of flexbox
  • Relationship of flexbox to other layout methods
  • Aligning items in a flex container
  • Ordering flex items
  • Controlling ratios of flex items along the main axis
  • Mastering wrapping of flex items
  • Typical use cases of flexbox
  • Backwards compatibility of flexbox

One of the things I’ve tried to do in creating this material is to bring Flexbox into place as one part of an overall layout system. Prior to Grid shipping, Flexbox was seen as the spec to solve all of our layout problems, yet a lot of the difficulty in using Flexbox comes when we try to use it to create the kind of two-dimensional layouts that Grid is designed for. Once again, we find ourselves fighting to persuade a layout method to do things it wasn’t designed to do. Therefore, in these guides I have concentrated on the real use cases of Flexbox, looked at where Grid should be used instead, and also clarified how the specification works with Writing Modes, Box Alignment and ordering of items.

A syllabus for layout

I’ve been asked whether people should learn Flexbox first then move on to Grid Layout. I’d suggest instead learning the basics of each specification, and the reasons to use each layout method. In a production site you are likely to have some patterns that make sense to lay out using Flexbox and some using Grid.
On MDN you should start with Basic Concepts of flexbox along with the companion article for Grid — Basic concepts of grid layout. Next, take a look at the two articles that detail how Flexbox and Grid fit into the overall CSS Layout picture: Relationship of Flexbox to other layout methods Relationship of Grid Layout to other layout methods Having worked through these guides you will have a reasonable overview. As you start to create design patterns using the specifications, you can then dig into[...]

Air Mozilla: Reps Weekly Meeting, 04 Jan 2018

Thu, 04 Jan 2018 16:00:00 +0000

(image) This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla B-Team: happy bmo push day!

Thu, 04 Jan 2018 14:00:07 +0000

release tag

The following changes have been pushed to

  • [1426390] Serve WOFF2 Fira Sans font
  • [1426963] Make Bugzilla::User->in_group() use a hash lookup
  • [1427230] Avoid loading CGI::Carp, which makes templates slow.
  • [1326233] doesn’t fail NRPE gracefully with bad inputs.
  • [1426507] Upgrade BMO to HTML5
  • [1330293] Prevent from running longer than 5 minutes (and log to sentry if it does)
  • [1426673] The logout link cannot be found as what Sessions page says
  • [1427656] Remove ZeroClipboard helper
  • [1426685] Fix regressions from fixed-positioning global header
  • [1427743] legacy phabbugz API code errors when trying to set an inactive review flag
  • [1427646] Remove Webmaker from product selector on Browse page and guided bug entry form
  • [1426518] Revisions can optionally not have a bug id so we need to make it optional in the type constraints of
  • [1423998] Add ‘Pocket’ to Business Unit drop down for Legal bugs
  • [1426475] Make unknown bug id / alias error message more obvious to prevent content spoofing
  • [1426409] github_secret key has no rate limiting

discuss these changes on[...]

David Burns: What makes a senior developer or senior engineer

Thu, 04 Jan 2018 11:52:50 +0000

Over the festive break I sent out this tweet.

The, now deleted, quoted tweet went along the lines of "If you have 3 senior engineers earning $150k and a junior developer breaks the repository, is it worth the $60k for having a junior?". The original tweet, and the similar tweets that came out after it, show that there seems to be a disconnect between what some engineers, and even managers, believe a senior engineer should act like and what it really takes to be a senior or higher engineer.

What follows is my belief (and, luckily for me, it's also the guide my employer gives me): the seniority of an engineer has more to do with their interpersonal skills and less to do with their programming ability. So what do I mean by this? The answer is really simple.

A senior developer or senior engineer should be able to build up and mentor the engineers who are "below" them. A senior engineer should be working to make the rest of their team senior engineers. This may mean that a senior engineer does less programming than the more junior members of the team. This is normally accounted for by engineering management and even by project management. The time when they are not programming is filled with architectural discussions, code reviews and general mentoring of newer members. These tasks might not be as tangible as producing code, but they are just as important.

Whether you are on a management track or an individual contributor track, the further up you go, the more it depends on you doing less coding and more making sure that you raise everyone up with you. This is just how it all goes. After all, "a rising tide lifts all ships".

Mozilla Addons Blog: January’s Featured Extensions

Thu, 04 Jan 2018 01:34:51 +0000

Pick of the Month: Search by Image – Reverse Image Search by Armin Sebastian

Powerful image search tool that’s capable of leveraging multiple engines, such as Google, Bing, Yandex, Baidu, and TinEye. “I tried several ‘search by image’ add-ons and this one seems to be the best out there with a lot of features.”

Featured: Resurrect Pages by Anthony Lieuallen

Bring back broken links and dead pages from previous internet lives! “One of my favorite websites took down content from readers and I thought I’d never see those pages again. Three minutes and an add-on later I’m viewing everything as if it was never deleted. Seriously stunned and incredibly happy.”

Featured: VivaldiFox by Tim Nguyen

Change the colors of Firefox pages with this adaptive interface design feature (akin to Vivaldi-style coloring). “Definitely brings a bit more life to Firefox.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies. If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on! The post January’s Featured Extensions appeared first on Mozilla Add-ons Blog.[...]

Mozilla Security Blog: Mitigations landing for new class of timing attack

Thu, 04 Jan 2018 00:23:41 +0000

Several recently-published research articles have demonstrated a new class of timing attacks (Meltdown and Spectre) that work on modern CPUs.  Our internal experiments confirm that it is possible to use similar techniques from Web content to read private information between different origins.  The full extent of this class of attack is still under investigation and we are working with security researchers and other browser vendors to fully understand the threat and fixes.  Since this new class of attacks involves measuring precise time intervals, as a partial, short-term, mitigation we are disabling or reducing the precision of several time sources in Firefox.  This includes both explicit sources, like, and implicit sources that allow building high-resolution timers, viz., SharedArrayBuffer. Specifically, in all release channels, starting with 57: The resolution of will be reduced to 20µs. The SharedArrayBuffer feature is being disabled by default. Furthermore, other timing sources and time-fuzzing techniques are being worked on. In the longer term, we have started experimenting with techniques to remove the information leak closer to the source, instead of just hiding the leak by disabling timers.  This project requires time to understand, implement and test, but might allow us to consider reenabling SharedArrayBuffer and the other high-resolution timers as these features provide important capabilities to the Web platform. Update [January 4, 2018]: We have released the two timing-related mitigations described above with Firefox 57.0.4, Beta and Developers Edition 58.0b14, and Nightly 59.0a1 dated “2018-01-04” and later. Firefox 52 ESR does not support SharedArrayBuffer and is less at risk; the mitigations will be included in the regularly scheduled Firefox 52.6 ESR release on January 23, 2018. The post Mitigations landing for new class of timing attack appeared first on Mozilla Security Blog.[...]
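The effect of reducing timer precision can be pictured as simple rounding: clamp every timestamp down to a multiple of the resolution, so that two events closer together than 20µs become indistinguishable to the attacker. An illustrative sketch (this is not Firefox's actual implementation, which lives in C++):

```python
# Illustrative sketch of clamping timestamps to a coarser resolution, in
# the spirit of reducing performance.now() precision to 20µs.

RESOLUTION_US = 20  # clamp to 20-microsecond buckets

def clamp_timestamp_us(t_us):
    # Round down to a multiple of the resolution: events closer together
    # than the resolution can now report the same timestamp.
    return (int(t_us) // RESOLUTION_US) * RESOLUTION_US
```

Coarser timestamps don't remove the underlying leak, which is why the post describes this as a partial, short-term mitigation.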

The Rust Programming Language Blog: Announcing Rust 1.23

Thu, 04 Jan 2018 00:00:00 +0000

The Rust team is happy to announce a new version of Rust, 1.23.0. Rust is a systems programming language focused on safety, speed, and concurrency. If you have a previous version of Rust installed via rustup, getting Rust 1.23.0 is as easy as: $ rustup update stable If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.23.0 on GitHub.

What’s in 1.23.0 stable

New year, new Rust! For our first improvement today, we now avoid some unnecessary copies in certain situations. We’ve seen the memory usage of rustc drop 5-10% with this change; it may differ for your programs. The documentation team has been on a long journey to move rustdoc to use CommonMark. Previously, rustdoc never guaranteed which markdown rendering engine it used, but we’re finally committing to CommonMark. As part of this release, we render the documentation with our previous renderer, Hoedown, but also render it with a CommonMark compliant renderer, and warn if there are any differences. There should be a way for you to modify the syntax you use to render correctly under both; we’re not aware of any situations where this is impossible. Docs team member Guillaume Gomez has written a blog post showing some common differences and how to solve them. In a future release, we will switch to using the CommonMark renderer by default. This warning landed in nightly in May of last year, and has been on by default since October of last year, so many crates have already fixed any issues that they’ve found. In other documentation news, historically, Cargo’s docs have been a bit strange. Rather than being on, they’ve been at With this release, that’s changing. You can now find Cargo’s docs at Additionally, they’ve been converted to the same format as our other long-form documentation. We’ll be adding a redirect from to this page, and you can expect to see more improvements and updates to Cargo’s docs throughout the year.
See the detailed release notes for more. Library stabilizations As of Rust 1.0, a trait named AsciiExt existed to provide ASCII related functionality on u8, char, [u8], and str. To use it, you’d write code like [...]

Air Mozilla: Bugzilla Project Meeting, 03 Jan 2018

Wed, 03 Jan 2018 21:00:00 +0000

(image) The Bugzilla Project Developers meeting.

Air Mozilla: Weekly SUMO Community Meeting, 03 Jan 2018

Wed, 03 Jan 2018 17:00:00 +0000

(image) This is the SUMO weekly call

Mozilla GFX: WebRender newsletter #11

Wed, 03 Jan 2018 13:40:24 +0000

Newsletter #11 is finally here, even later than usual due to an intense week in Austin where all of Mozilla’s staff and a few independent contributors gathered, followed by yours truly taking two weeks off. Our focus before the Austin allhands was on performance, especially on Windows. We had some great results out of this and are shifting priorities back to correctness issues for a little while.

Notable WebRender changes

Martin added some clipping optimizations in #2104 and #2156. Ethan improved the performance of rendering large ellipses. Kvark implemented different texture upload strategies to be selected at runtime depending on the driver. This has a very large impact when using Windows. Kvark worked around the slow depth clear implementation in ANGLE. Glenn implemented splitting rectangle primitives, which allows moving a lot of pixels to the opaque pass and reduces overdraw. Ethan sped up ellipse calculations in the shaders. Morris implemented the drop-shadow() CSS filter. Gankro introduced deserialize_from in serde for faster deserialization, and added it to WebRender. Glenn added a dual-source blending path for subpixel text when supported, yielding performance improvements when the text color is different between text runs. Many people fixed a lot of bugs, too many for me to list them here.

Notable Gecko changes

Sotaro made Gecko use EGL_EXPERIMENTAL_PRESENT_PATH_FAST_ANGLE for WebRender. This avoids a full screen copy when presenting. With this change the peak fps of on P50(Win10) went from 50fps to 60fps. Sotaro prevented video elements from rendering at 60fps when they have a lower frame rate. Jeff removed two copies of the display list (one of which happens on the main thread). Kats removed a performance cliff resulting from linear search through clips. This drastically improves MazeSolver time (~57 seconds down to ~14 seconds). Jeff removed a copy of the glyph buffer. Lots and lots more fixes and improvements.
Enabling WebRender in Firefox Nightly

In about:config: set “gfx.webrender.enabled” to true, set “gfx.webrender.blob-images” to true, set “image.mem.shared” to true, if you are on Linux, set “layers.acc[...]

Ryan Harter: Managing Someday-Maybe Projects with a CLI

Wed, 03 Jan 2018 08:00:00 +0000

I have a problem managing projects I'm interested in but don't have time for. For example, the CLI for generating slack alerts I posted about last year. Not really a priority, but helpful and not that complicated. I sat on that project for about a year before I could finally …

The Rust Programming Language Blog: New Year's Rust: A Call for Community Blogposts

Wed, 03 Jan 2018 00:00:00 +0000

‘Tis the season for people and communities to reflect and set goals, and the Rust team is no different. Last month, we published a blogpost about our accomplishments in 2017, and the teams have already begun brainstorming goals for next year. Last year, the Rust team started a new tradition: defining a roadmap of goals for the upcoming year. We leveraged our RFC process to solicit community feedback. While we got a lot of awesome feedback on that RFC, we’d like to try something new in addition to the RFC process: a call for community blog posts for ideas of what the goals should be. As open source software becomes more and more ubiquitous and popular, the Rust team is interested in exploring new and innovative ways to solicit community feedback and participation. We’re committed to extending and improving our community organization and outreach, and this effort is just the first of what we hope to be many iterations of new kinds of community feedback mechanisms.

#Rust2018

Starting today and running until the end of January we’d like to ask the community to write blogposts reflecting on Rust in 2017 and proposing goals and directions for Rust in 2018. These can take many forms:

  • A post on your personal or company blog
  • A Medium post
  • A GitHub gist
  • Or any other online writing platform you prefer.

We’re looking for posts on many topics:

  • Ideas for community programs
  • Language features
  • Documentation improvements
  • Ecosystem needs
  • Tooling enhancements
  • Or anything else Rust related you hope for in 2018 :D

A great example of what we’re looking for is this post, “Rust and the case for WebAssembly in 2018” by @mgattozzi or this post, “Rust in 2017” by Anonyfox. You can write up these posts and email them to or tweet them with the hashtag #Rust2018. We’ll aggregate any blog posts sent via email or with the hashtag in one big blog post here. The Core team will be reading all of the submitted posts and using them to inform the initial roadmap RFC for 2018.
Once the RFC is submitted, we’ll open up the normal RFC process, though if you want, you are welcome to write a post and lin[...]

Shing Lyu: Taking notes with MkDocs

Tue, 02 Jan 2018 20:14:58 +0000

I’ve been using TiddlyWiki for note-taking for a few years. I use it to keep track of my technical notes and checklists. TiddlyWiki is a brilliant piece of software. It is a single HTML file with a note-taking interface, where the notes you take are stored directly in the HTML file itself, so you can easily carry (copy) the file around and easily deploy it online for sharing. However, most modern browsers don’t allow web pages to access the filesystem, so in order to let TiddlyWiki save the notes, you need to rely on browser extensions or a Dropbox integration service like TiddlyWiki in the Sky. But they still involve some friction. So recently I started to look for other alternatives for note-taking. Here are some of the requirements I have:

  1. Notes are stored in a non-proprietary format. In case the service/software is discontinued, I can easily migrate them to another tool.
  2. Has some form of formatting, e.g. write in Markdown and render to HTML.
  3. Auto-generated table of contents and search functionality.
  4. Can be used offline and data is stored locally.
  5. Can be easily shared with other people.
  6. Can be version-controlled, and you won’t lose data if there is a conflict.

TiddlyWiki excels at 1 to 5, and I can easily sync it with Dropbox. However, I often forgot to click “save” in TiddlyWiki, and in some cases when I accidentally created a conflict while syncing, Dropbox simply creates two copies and I have to manually merge them. There is also no version history so it’s hard to merge and look into the history. Then I suddenly had an epiphany during a shower: all I need is a bunch of Markdown files, version controlled by git; then I can use a static site generator to render them as HTML, with a table of contents and client-side search. After some quick searching I found MkDocs, which is a Python-based static-site generator for project documentation. It also has a live-reloading development server, which I really love.
Using MkDocs

MkDocs is really straightforward to set up and use (under Linux). To install, simply use pip (assuming you have Python and pip installed):

sudo pip install mkdocs

Then you [...]
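For reference, a minimal mkdocs.yml for a notes site along these lines might look like the following. The site and file names are invented for the example, and key names vary between MkDocs versions (older releases use pages: where newer ones use nav:):

```yaml
# Minimal MkDocs configuration (illustrative; names are invented).
site_name: My Notes
theme: mkdocs          # the built-in default theme
nav:
  - Home: index.md
  - Checklists: checklists.md
```

With this in place, mkdocs serve starts the live-reloading development server and mkdocs build renders the site, including the client-side search index, to the site/ directory.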

Ryan Harter: Removing Disqus

Tue, 02 Jan 2018 08:00:00 +0000

I'm removing Disqus from this blog. Disqus allowed readers to post comments on articles. I added it because it was easy to do, but I no longer think it's worth keeping.

If you'd like to share your thoughts, feel free to shoot me an email at harterrt on gmail. I …

This Week In Rust: This Week in Rust 215

Tue, 02 Jan 2018 05:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts Not explicit. What it means for Rust syntax to be explicit. Porting a roguelike game to WebAssembly. Simulating conditional enum variants in Rust. Rust IDE + REPL In Vim. Making TRust-DNS faster than BIND9. Diesel 1.0 released. Announcing ndarray version 0.11.0. This year in Gfx-rs 2017. This week in Redox 34.

Crate of the Week

This week's crate is YEW, a framework for making Elm/React/Angular-like client web-apps with Rust. Thanks to Willi Kappler for the suggestion! Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information. Get started with these beginner-friendly issues. [easy] gluon: Add syntax aware contexts to errors. gluon: Make it possible to use a custom allocator for the vm. gluon: Add raw string literals. gluon: Run clippy. gluon: Try to make gluon compile to wasm. mdBook: Article named 'print' triggers printing dialog. mdBook: Facilitate maintaining URLs with redirect mapping. mdBook: build is overeager to delete files in destination. mdBook: Styling flash on RBE. mdBook: Changing Chapter removes focus from main area. mdBook: hyperlink to another section jumps to wrong location on Firefox 57. mdBook: Document the Ace Editor.
If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

79 pull requests were merged in the last week allo[...]

Robert O'Callahan: Mixed Blessings Of Greenfield Software Development

Sat, 30 Dec 2017 00:17:57 +0000

The biggest software project I have ever worked on, and hopefully ever will work on, was Gecko. I was not one of its original architects, so my work on Gecko was initially very tightly constrained by design decisions made by others. Over the years some of those decisions were rescinded, and some of the new decisions were mine, but I always felt frustrated at being locked into designs that I didn't choose and (with the benefit of hindsight, at least) would not have chosen. "Wouldn't it be great", I often thought, "to build something entirely new from scratch, hopefully getting things right, or at least having no-one other than myself to blame for my mistakes!" I guess a lot of programmers feel this, and that's why we see more project duplication than the world really needs.

I was lucky enough at Mozilla to work on a project that gave me a bit of an outlet for this — rr. I didn't actually do much coding in rr at the beginning — Albert Noll, Nimrod Partush and Chris Jones got it underway — but I did participate in the design decisions.

Over the last two years Kyle Huey and I have been working on a new project which is broader in scope and more complicated than rr. The two of us have designed everything (with some feedback from some wonderful people, who know who they are) and implemented everything. We've been able to make our design choices quite freely — still constrained by external realities (e.g. the vagaries of the x86 instruction set), but not by legacy code. It has been exhilarating.

However, with freedom comes responsibility. My decision-making has constantly been haunted by the fear of being "That Person" — the one whom, years from now, developers will curse as they work around and within That Person's mistakes. I've also come to realize that being unencumbered by legacy design decisions can make design more difficult because the space is so unconstrained.
This is exacerbated by the kind of project we're undertaking: it's a first-of-a-kind, so there aren't patterns to follow, and our system has novel, powerful and flexible[...]