Subscribe: Planet Mozilla
http://planet.mozilla.org/rss20.xml
Language: English
Tags:
add  api  code  firefox  mozilla  new  open  people  project  release  rust  support  time  web  webvr  week  work 
Planet Mozilla - http://planet.mozilla.org/



 



Hacks.Mozilla.Org: Inside a super fast CSS engine: Quantum CSS (aka Stylo)

Tue, 22 Aug 2017 15:30:55 +0000

You may have heard of Project Quantum… it's a major rewrite of Firefox's internals to make Firefox fast. We're swapping in parts from our experimental browser, Servo, and making massive improvements to other parts of the engine. The project has been compared to replacing a jet engine while the jet is still in flight. We're making the changes in place, component by component, so that you can see the effects in Firefox as soon as each component is ready. And the first major component from Servo, a new CSS engine called Quantum CSS (previously known as Stylo), is now available for testing in our Nightly version. You can make sure that it's turned on for you by going to about:config and setting layout.css.servo.enabled to true.

This new engine brings together state-of-the-art innovations from four different browsers to create a new super CSS engine. It takes advantage of modern hardware, parallelizing the work across all of the cores in your machine. This means it can run up to 2 or 4 or even 18 times faster. On top of that, it combines existing state-of-the-art optimizations from other browsers. So even if it weren't running in parallel, it would still be one fast CSS engine. But what does the CSS engine do? First let's look at the CSS engine and how it fits into the rest of the browser. Then we can look at how Quantum CSS makes it all faster.

What does the CSS engine do?

The CSS engine is part of the browser's rendering engine. The rendering engine takes the website's HTML and CSS files and turns them into pixels on the screen. Each browser has a rendering engine. In Chrome, it's called Blink. In Edge, it's called EdgeHTML. In Safari, it's called WebKit. And in Firefox, it's called Gecko. To get from files to pixels, all of these rendering engines basically do the same things:

1. Parse the files into objects the browser can understand, including the DOM. At this point, the DOM knows about the structure of the page. It knows about parent/child relationships between elements. It doesn't know what those elements should look like, though.
2. Figure out what the elements should look like. For each DOM node, the CSS engine figures out which CSS rules apply. Then it figures out values for each CSS property for that DOM node.
3. Figure out dimensions for each node and where it goes on the screen. Boxes are created for each thing that will show up on the screen. The boxes don't just represent DOM nodes… you will also have boxes for things inside the DOM nodes, like lines of text.
4. Paint the different boxes. This can happen on multiple layers. I think of this like old-time hand-drawn animation, with onionskin layers of paper. That makes it possible to just change one layer without having to repaint things on other layers.
5. Take those different painted layers, apply any compositor-only properties like transforms, and turn them into one image. This is basically like taking a picture of the layers stacked together. This image will then be rendered on the screen.

This means when it starts calculating the styles, the CSS engine has two things: a DOM tree and a list of style rules. It goes through each DOM node, one by one, and figures out the styles for that DOM node. As part of this, it gives the DOM node a value for each and every CSS property, even if the stylesheets don't declare a value for that property. I think of it kind of like somebody going through and filling out a form. They need to fill out one of these forms for each DOM node. And for each form field, they need to have an answer. To do this, the CSS engine needs to do two things:

- figure out which rules apply to the node (aka selector matching)
- fill in any missing values with values from the parent or a default value (aka the cascade)

Selector matching

For this step, we'll add any rule that matches the DOM node to a list. Because multiple rules can match, there may be multiple declarations for the same property. Plus, the browser itself adds some default CSS (called user agent style sheets). How does the CSS engine know which value [...]
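The two steps the article names, selector matching and the cascade, can be sketched as a toy model. Everything below (the `DEFAULTS`/`INHERITED` sets, the node dicts, the selector lambdas) is illustrative and not Gecko or Servo internals; real engines use rule hashes and, in Quantum CSS, parallel tree traversal.

```python
# Toy model of style computation: selector matching plus the cascade.
# DEFAULTS, INHERITED, and the node shape are stand-ins, not real engine data.

DEFAULTS = {"color": "black", "display": "inline"}  # stand-in user-agent defaults
INHERITED = {"color"}                               # properties that inherit

def matching_declarations(node, rules):
    """Selector matching: gather declarations from every rule that matches."""
    decls = {}
    for selector, declarations in rules:   # rules listed in ascending priority
        if selector(node):
            decls.update(declarations)     # later (higher-priority) rules win
    return decls

def computed_style(node, rules, parent_style=None):
    """The cascade: use the matched value, else an inherited value, else the default."""
    decls = matching_declarations(node, rules)
    style = {}
    for prop, default in DEFAULTS.items():
        if prop in decls:
            style[prop] = decls[prop]
        elif prop in INHERITED and parent_style is not None:
            style[prop] = parent_style[prop]
        else:
            style[prop] = default
    return style
```

Computing a child node's style with its parent's style on hand fills `color` by inheritance while `display` falls back to the default, which mirrors the "a form with an answer for every field" picture above.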



Joshua Cranmer: A review of the solar eclipse

Tue, 22 Aug 2017 04:59:14 +0000

On Monday, I, along with several million other people, decided to view the Great American Eclipse. Since I presently live in Urbana, IL, that meant getting in my car and driving down I-57 towards Carbondale. This route is also what people from Chicago or Milwaukee would have taken, which meant traffic was heavy. I ended up leaving around 5:45 AM, which put me among the last clutch of people leaving. Our original destination was Goreville, IL (specifically, Ferne Clyffe State Park), but some people who arrived earlier grew dissatisfied with the cloudy forecast, so we moved the destination out to Cerulean, KY. That meant I ended up arriving around 11:00 AM, not much time before the partial eclipse started.

Partial eclipses are neat, but they're very much a see-them-once affair. When the moon first entered the sun, there was a flurry of activity as everyone put on the glasses, saw it, and then retreated back into the shade (it was 90°F, not at all comfortable in the sun). Then the temperature starts to drop. Is that the eclipse, or this breeze that started up? As more and more gets covered, it starts to dim: I had the impression that a cloud had just passed in front of the sun, and I wanted to turn and look at that non-existent cloud. And as the sun really gets covered, trees start acting as pinhole cameras and the shadows take on a distinctive scalloped pattern.

A total eclipse, though? Completely different. The immediate reaction of everyone in the group was to start planning to see the 2024 eclipse. For those of us who spent 10, 15, 20 hours trying to see 2-3 minutes of glory, the sentiment was not only that it was time well spent, but that it was worth doing again. If you missed the 2017 eclipse and are able to see the 2024 eclipse, I urge you to do so. Words and pictures simply do not do it justice.

What is the eclipse like? In the last seconds of partiality, everyone has their eyes (eclipse glasses on, of course) staring at the sun. The thin crescent looks at first like a side picture of an eyeball. As the time ticks by, the tendrils of orange slowly diminish until nothing can be seen: totality. Cries come out that it's safe to take the glasses off, but everyone is ripping them off anyways. Out come the camera phones, trying to capture that captivating image: that not-quite-perfect disk of black, floating in a sea of bright white wisps of the corona, not so much a circle as a stretched oval. For those who were quick enough, Baily's beads can be seen. The photos, of course, are crap: the corona is still bright enough to blot out the dark disk of the moon.

Then our attention is drawn away from the sun. It's cold. It's suddenly cold; the last moment of totality makes a huge difference. Probably something like 20°F off the normal high in that moment? Of course, it's dark. Not midnight, all-you-see-are-stars dark; it's more like a dusk dark. But unlike normal dusk, you can see the fringes of daylight in all directions. You can see some stars (or maybe that's just Venus; astronomy is not my strong suit), and of course a few planes are in the sky. One of them is just a moving, blinking light in the distance; another (chasing the eclipse?) is clearly visible with its contrail.

And the silence. You don't notice the usual cacophony of sounds most of the time, but when everyone shushes for a moment, you hear the deafening silence of insects, of birds, of everything.

Naturally, we all point back to the total eclipse and stare at it for most of the short time. Everything else is just a distraction, after all. How long do we have? A minute. Still more time for staring. A running commentary on everything I've mentioned, all while that neck is craned skyward and away from the people you're talking to. When is it no longer safe to keep looking? Is it still safe? No orange in the eclipse glasses, should still be fine. How long do we need to look at the sun to damage our eyes? Have we done that already? Are the glasses themselves safe? As the moon moves off the sun, hold that s[...]



Emma Humphries: Firefox Triage Report 2017-08-21

Mon, 21 Aug 2017 23:09:06 +0000

It's the weekly report on the state of triage in Firefox-related components. I apologize for missing last week's report; I was travelling and did not have a chance to sit down and focus on this.

Hotspots

The components with the most untriaged bugs remain the JavaScript Engine and Build Config. I discussed the JavaScript bugs with Naveed. What will happen is that the JavaScript bugs which have not been marked as a priority for Quantum Flow (the '[qf:p[1:3]]' whiteboard tags) or existing work (the '[js:p[1:3]]' whiteboard tags) will be moved to the backlog (P3) for review after the Firefox 57 release. See https://bugzilla.mozilla.org/show_bug.cgi?id=1392436.

| Rank | Component | 2017-08-07 | This Week |
|------|------------------------------|-------|-------|
| 1 | Core: JavaScript Engine | 449 | 471 |
| 2 | Core: Build Config | 429 | 450 |
| 3 | Firefox for Android: General | 411 | 406 |
| 4 | Firefox: General | 242 | 246 |
| 5 | Core: General | 234 | 235 |
| 6 | Core: XPCOM | 176 | 178 |
| 7 | Core: JavaScript: GC | — | 168 |
| 8 | Core: Networking | — | 161 |
|   | All Components | 8,373 | 8,703 |

Please make sure you've made it clear what, if anything, will happen with these bugs. Not sure how to triage? Read https://wiki.mozilla.org/Bugmasters/Process/Triage.

Next Release

| **Version** | 56 | 56 | 56 | 56 | 57 | 57 | 57 |
|---|---|---|---|---|---|---|---|
| **Date** | 7/10 | 7/17 | 7/24 | 7/31 | 8/7 | 8/14 | 8/14 |
| Untriaged this Cycle | 4,525 | 4,451 | 4,317 | 4,479 | 479 | 835 | 1,196 |
| Unassigned Untriaged this Cycle | 3,742 | 3,682 | 3,517 | 3,674 | 356 | 634 | 968 |
| Affected this Upcoming Release (56) | 111 | 126 | 139 | 125 | 123 | 119 | |
| Enhancements | 102 | 107 | 91 | 103 | 3 | 5 | 11 |
| Orphaned P1s | 199 | 193 | 183 | 192 | 196 | 191 | 183 |
| Stalled P1s | 195 | 173 | 159 | 179 | 157 | 152 | 155 |

What should we do with these bugs? Bulk close them? Make them into P3s? Bugs without decisions add noise to our system, cause despair in those trying to triage bugs, and leave the community wondering if we listen to them.

Methods and Definitions

In this report I talk about bugs in Core, Firefox, Firefox for Android, Firefox for iOS, and Toolkit which are unresolved, not filed from treeherder using the intermittent-bug-filer account, and have no pending needinfos. By triaged, I mean a bug has been marked as P1 (work on now), P2 (work on next), P3 (backlog), or P5 (will not work on but will accept a patch). A triage decision is not the same as a release decision (status and tracking flags). https://mozilla.github.io/triage-report/#report

Age of Untriaged Bugs

The average age of a bug filed since June 1st of 2016 which has gone without triage: https://mozilla.github.io/triage-report/#date-report

Untriaged Bugs in Current Cycle

Bugs filed since the start of the Firefox 55 release cycle (March 6th, 2017) which do not have a triage decision: https://mzl.la/2u1R7gA

Recommendation: review bugs you are responsible for (https://bugzilla.mozilla.org/page.cgi?id=triage_owners.html) and make a triage decision, or RESOLVE.

Untriaged Bugs in Current Cycle Affecting Next Release

Bugs marked status_firefox56 = affected and untriaged: https://mzl.la/2u2GCcG

Enhancements in Release Cycle

Bugs filed in the release cycle which are enhancement requests, severity = enhancement, and untriaged.

Recommendation: product managers should rev[...]
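The definition above (unresolved, not filed by the intermittent-bug-filer account, no pending needinfo, and no P1/P2/P3/P5 priority set) can be sketched as a predicate. The bug-dict field names here are illustrative, not the exact Bugzilla REST schema:

```python
# A sketch of the report's triage definition as a predicate.
# The field names on `bug` are illustrative, not the real Bugzilla REST schema.

TRIAGE_PRIORITIES = {"P1", "P2", "P3", "P5"}  # now / next / backlog / patch-only

def needs_triage(bug):
    """True when a bug counts as untriaged under the report's definition."""
    if bug["resolution"]:                           # only unresolved bugs count
        return False
    if "intermittent-bug-filer" in bug["creator"]:  # skip treeherder's automated filer
        return False
    if bug["pending_needinfo"]:                     # an open needinfo defers the decision
        return False
    return bug["priority"] not in TRIAGE_PRIORITIES
```

Counting `needs_triage` over a component's open bugs would reproduce the per-component totals in the table above.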



Air Mozilla: Mozilla Weekly Project Meeting, 21 Aug 2017

Mon, 21 Aug 2017 18:00:00 +0000

The Monday Project Meeting




The Mozilla Blog: Welcome Michael DeAngelo, Chief People Officer

Mon, 21 Aug 2017 15:00:40 +0000

Michael DeAngelo joins the Mozilla leadership team this week as Chief People Officer.

As Chief People Officer, Michael is responsible for all aspects of HR and Organizational Development at Mozilla Corporation with an overall focus on ensuring we’re building and growing a resilient, high impact global organization as a foundation for our next decade of growth and impact.

Michael brings two decades of experience leading people teams at Pinterest, Google and PepsiCo. Earlier in his career Michael held a number of HR roles in Organization Development, Compensation, and People Operations at Microsoft, Merck and AlliedSignal.

At Pinterest, Michael built out the majority of the HR function. One of the most important teams was the Diversity and Inclusion function, which is recognized as one of the best in the industry. Two of his proudest moments there were Pinterest being the first company ever to set public diversity goals, and growing new hires from under-represented backgrounds from 1% to 9% in one year.

Michael brings a global perspective from his tenure at Google, where for four years he led HR for the Europe/Middle East/Africa region based in Zurich, Switzerland. At Google he also led the HR team supporting 10,000 employees for Search, Android, Chrome, and Google+. Prior to Google, Michael was Vice President of HR at PepsiCo, leading all people functions for the Quaker Foods global P&L.

“Having spent so much of my career in technology, I have long been an admirer of Mozilla and the important contributions it has made to keeping the Internet open and accessible to all. This is an exciting time for Mozilla as we’re about to deliver a completely revitalized Firefox, we’re ramping investments in new emerging technologies and we’re making important strides in fighting for a healthier Internet platform for all. I am excited to come on board and support the continued development and growth of the organization’s talent and capabilities to help us reach our goals.”

Michael will be based in the Bay Area and will work primarily out of our San Francisco office.

Welcome Michael!

chris

The post Welcome Michael DeAngelo, Chief People Officer appeared first on The Mozilla Blog.




The Firefox Frontier: Outfoxin’ the Trackers: Android Private Browsing with Firefox Focus

Mon, 21 Aug 2017 12:49:52 +0000

Want to avoid trackers when you click links in Android apps? Set Firefox Focus as your default. It’s hard not to appreciate the ingenuity (and snark) of Redditors. These frontierspeople of … Read more

The post Outfoxin’ the Trackers: Android Private Browsing with Firefox Focus appeared first on The Firefox Frontier.




Kartikaya Gupta: Firewalling, part 2

Sat, 19 Aug 2017 22:57:49 +0000

I previously wrote about setting up multiple VLANs to segment your home network and improve the security characteristics. Since then I've added more devices to my home network, and keeping everything in separate VLANs was looking like it would be a hassle. So instead I decided to put everything into the same VLAN but augment the router's firewall rules to continue restricting traffic between "trusted" and "untrusted" devices.

The problem is that didn't work. I set up all the firewall rules but for some reason they weren't being respected. After (too much) digging I finally discovered that you have to install the kmod-ebtables package to get this to actually work. Without it, the netfilter code in the kernel doesn't filter traffic between hosts on the same VLAN and so any rules you have for that get ignored. After installing kmod-ebtables my firewall rules started working. Yay!

Along the way I also discovered that OpenWRT is basically dead now (they haven't had a release in a long time) and the LEDE project is the new fork/successor project. So if you were using OpenWRT you should probably migrate. The migration was relatively painless for me, since the images are compatible.

There's one other complication that I've run into but haven't yet resolved. After upgrading to LEDE and installing kmod-ebtables, for some reason I couldn't connect between two FreeBSD machines on my network via external IP and port forwarding. The setup is like so:

  • Machine A has internal IP address 192.168.1.A
  • Machine B has internal IP address 192.168.1.B
  • The router's external IP address is E
  • The router is set to forward port P to machine A
  • The router is set to forward port Q to machine B

Now, from machine B, if I connect to E:P, it doesn't work. Likewise, from machine A, connecting to E:Q doesn't work. I can connect using the internal IP address (192.168.1.A:P or 192.168.1.B:Q) just fine; it's only via the external IP that it doesn't work. All the other machines on my network can connect to E:P and E:Q fine as well. It's only machines A and B that can't talk to each other. The thing A and B have in common is they are running FreeBSD; the other machines I tried were Linux/OS X.

Obviously the next step here is to fire up tcpdump and see what's going on. Funny thing is, when I run tcpdump on my router, the problem goes away and the machines can connect to each other. So there's that. I'm sure with more investigation I'll get to the bottom of this but for now I've shelved it under "mysteries that I can work around easily". If anybody has run into this before I'd be interested in hearing about it.

Also if anybody knows of good tools to visualize and debug iptables rules I'd be interested to try them out, because I haven't found anything good yet. I've been using the counters in the tables to try and figure out which rules the packets are hitting but since I'm debugging this "live" there's a lot of noise from random devices and the counters are not as reliable as I'd like.
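For counter-based debugging like this, one low-tech option is to snapshot the counters and diff them between runs, so only the rules whose counters actually moved show up. A minimal sketch (the parsing assumes the standard `iptables -L -v -n -x` listing layout, where data rows begin with numeric pkts and bytes columns):

```python
import re

def parse_counters(listing):
    """Map rule text -> packet count from `iptables -L -v -n -x` output.

    Data rows start with the numeric pkts and bytes columns, followed by
    the rule itself; chain headers and column headers are skipped.
    """
    counters = {}
    for line in listing.splitlines():
        m = re.match(r"\s*(\d+)\s+(\d+)\s+(\S.*)", line)
        if m:
            counters[m.group(3).strip()] = int(m.group(1))
    return counters

def diff_counters(before, after):
    """Return the rules whose packet counters moved between two snapshots."""
    return {rule: after[rule] - before[rule]
            for rule in after
            if rule in before and after[rule] != before[rule]}
```

Running `parse_counters` on two captures a few seconds apart and diffing them filters out the idle rules, which helps against the noise from random devices mentioned above.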




Robert Kaiser: Celebrating LCARS With One Last Theme Release

Sat, 19 Aug 2017 22:21:49 +0000

30 years ago, a lot of people were wondering what the new Star Trek: The Next Generation series would bring when it would debut in September 1987. The principal cast had been announced, there was a new Enterprise, and even the pilot's title was known, but, as always with a new production, a lot of questions were open, just like today in 2017 with Star Trek Discovery, which is set to debut in September almost to the day on the 30th anniversary of The Next Generation.

Given that the story was set to play 100 years after the original, and what was considered "futuristic" had significantly changed between the late 1960s and 1980s, the design language had to be significantly updated, including the labels and screens on the new Enterprise. Scenic art supervisor and technical consultant Michael Okuda, who had done starship computer displays for The Voyage Home, was hired to do those for the new series, and was instructed by series creator and show runner Gene Roddenberry that this futuristic ship should have "simple and clean" screens and not much animation (the latter probably also due to budget and technology constraints: the "screens" were built out of colored plexiglass with lights behind them).

With that, Okuda created a look that became known as "LCARS" (for "Library Computer Access and Retrieval System", which actually was the computer system's name). Instead of the huge gray panels with big brightly-colored physical buttons of the original series, The Next Generation had touch-screen panels with a dark background and flat-style buttons in pastel color tones. The flat design, including the fonts and flat-design frames, is very similar to quite a few designs we see on touch-friendly mobile apps 30 years later. Touch screens (and even cell phones and tablets) were pretty much unheard of and "future talk" when Mike Okuda created those designs, but he came to pretty similar design conclusions as those who design UIs for modern touch-screen devices (which is pretty awesome when you think of it).

I was always fascinated with that style of UI design, even on non-touch displays (and am even more so now that I'm using touch screens daily). And so, 18 years ago, when I did my first experiments with Mozilla's new browser-mail all-in-one package and realized that the UI was displayed with the same rendering engine and the same or very similar technologies as websites, I immediately did some CSS changes to see if I could apply LCARS-like styling to this software, and awesomeness ensued when I found out that it worked! Over the years, I created a full LCARStrek theme from those experiments (the first release, 0.1, was for Mozilla suite nightlies in late 2000), adapted it to Firefox (starting with LCARStrek 2.1 for Firefox 4), refined it, and even made it work with large Firefox redesigns.

But as you may have heard, huge changes are coming to Firefox add-ons, and full-blown themes in the manner of LCARStrek cannot be done in the new world as it stands right now, so I'm forced to stop developing this theme. Given that LCARS has a huge anniversary this year, I want to end my work on this theme on a high note rather than a sad one, so right along the very awesome Star Trek Las Vegas convention, which just celebrated 30 years of The Next Generation, of course, I'm doing one last LCARStrek release this weekend, with special thanks to Mike Okuda, whose great designs made this theme possible in the first place (picture taken by myself at that convention just two weeks ago, where he was talking about the backlit LCARS panels that were dubbed "Okudagrams" by other crew members): Live long and prosper![...]



Robert Kaiser: Lantea Maps: GPS Track Upload to OpenStreetMap Broken

Sat, 19 Aug 2017 14:49:01 +0000

During my holidays, when I was using Lantea Maps daily to record my GPS tracks, I suddenly found out one day that upload of the tracks to OpenStreetMap was broken.

I had added that functionality so that people (including myself) could get their GPS tracks out of their mobile devices and into a place from which they can download them anywhere. A bonus was that the tracks were available to the OpenStreetMap project as guides to improve the maps.

After I had wasted about EUR 50 of data roaming costs to verify that it was not only broken on hotel networks but also on my mobile network, which usually worked, I tried on a desktop Nightly and used the Firefox devtools to find out the actual error message, which turned out to be a CORS issue. I filed a GitHub issue, but apparently it was an intentional change: OpenStreetMap no longer supports GPS track uploads in a way that is simple for pure web apps, and doesn't want to re-add support for that. Find more details in the GitHub issue.
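A CORS failure of this kind comes down to the server no longer sending an Access-Control-Allow-Origin header that covers the requesting page's origin. A simplified model of the browser-side check (real CORS also involves preflight requests, allowed methods, and allowed headers; the origin string below is just an example):

```python
# Simplified model of the browser's CORS response check.
# Only the Access-Control-Allow-Origin header is modeled here.

def cors_allows(headers, origin, with_credentials=False):
    """Would a browser let a page from `origin` read this cross-origin response?"""
    allow = headers.get("Access-Control-Allow-Origin")
    if allow is None:
        return False            # header absent: the response is blocked
    if with_credentials:
        return allow == origin  # the "*" wildcard is rejected for credentialed requests
    return allow in ("*", origin)
```

A web app like Lantea Maps hits exactly the `allow is None` case once the API stops emitting the header, while native apps, which aren't subject to the browser's same-origin policy, keep working.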

Because of that, I think that this will mark the end of uploading tracks from Lantea Maps to OpenStreetMap. When I have time, I will probably add a GPS track store on my server instead, where third-party changes can't break stuff while I'm on vacation. If any Lantea Maps user wants their tracks on OpenStreetMap in the future, they'll need to manually upload the tracks themselves.



J.C. Jones: The State of CRLs Today

Fri, 18 Aug 2017 21:41:07 +0000

Certificate Revocation Lists (CRLs) are a way for Certificate Authorities to announce to their relying parties (e.g., users validating the certificates) that a Certificate they issued should no longer be trusted, i.e., was revoked. As the name implies, they're just flat lists of revoked certificates. This has advantages and disadvantages.

Advantages:

- It's easy to see how many revocations there are
- It's easy to see differences from day to day
- Since processing the list is up to the client, it doesn't reveal what information you're interested in

Disadvantages:

- They can quickly get quite big, leading to significant latency while downloading a web page
- They're not particularly compressible
- There's information in there you probably will never care about

CRLs aren't much used anymore; Firefox stopped checking them in version 28 in 2014, in favor of online status checks (OCSP). The Baseline Requirements nevertheless still require that CRLs, if published, remain available:

    4.10.2 Service availability
    The CA SHALL operate and maintain its CRL and OCSP capability with resources
    sufficient to provide a response time of ten seconds or less under normal
    operating conditions.

Since much has been written about the availability of OCSP, I thought I'd check in on CRLs.

Collecting available CRLs

When a certificate's status will be available in a CRL, that's encoded into the certificate itself (RFC 5280, 4.2.1.13). If that field is there, we should expect the CRL to survive for the lifetime of the certificate. I went to Censys.io and, after a quick request to them for SQL access, I ran this query:

    SELECT parsed.extensions.crl_distribution_points
    FROM certificates.certificates
    WHERE validation.nss.valid = true
      AND parsed.extensions.crl_distribution_points LIKE 'http%'
      AND parsed.validity.end >= '2017-07-18 00:00'
    GROUP BY parsed.extensions.crl_distribution_points

Today, this yields 3,035 CRLs, the list of which I've posted on Github.

Downloading those CRLs into a directory downloaded_crls can be done serially using wget quite simply, logging to a file named wget_log-all_crls.txt:

    mkdir downloaded_crls
    script wget_log-all_crls.txt
    wget --recursive --tries 3 --level=1 --force-directories -P downloaded_crls/ --input-file=all_crls.csv

This took 2h 36m 31s on my Internet connection.

Analyzing the Download Process

Out of 3,035 CRLs, I ended up downloading 2,993 files. The rest failed. I post-processed the command-line wget log (wget_log-all_crls.txt) using a small Python script to categorize each CRL download by how it completed. Ignoring all the times when requesting a file resulted in the file straightaway (hey, those cases are boring), here's the graphical breakdown of the other cases:

Missing CRLs

There are 40 CRLs that weren't available to me when I checked; more simply put, 1% of CRLs appear to be dead. Some of them are dead in temporary-looking ways, like the load balancer giving a 500 Internal Server Error; some of them have hostnames that aren't resolving in DNS. These aren't currently resolving for me:

- http://www.cert.fnmt.es.testa.eu/crls/ARLFNMTRCMEU.crl [Censys.io Search]
- http://crl.testa.eu/ [Censys.io Search]
- http://grcl2.crl.telesec.de/rl/T-TeleSec_GlobalRoot_Class_2.crl [Censys.io Search]
- http://cr.wscrl.cn/ca2.crl [Censys.io Search]
- http://www.ws.symantec.com/pca3-g3.crl [Censys.io Search]
- http://crl.sca0a.amazontrust.com/sca0a.crl [Censys.io Search]
- http://atospki/crl/Atos_TrustedRoot_CA_2011.crl [Censys.io Search]

Searching Censys' dataset, these CRLs are only used by intermediate CAs, so presumably if one of the handful of CA certificates covered would need to be revoked, their IT staff could fix these links. Except for http://atospki/, which is clearly an internal name. Mistakes like that can only be revoked v[...]
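The post mentions a small Python script that categorizes each download from the wget log but doesn't show it. The core of such a script might look like this; the message formats are assumed from wget's usual transcript ("HTTP request sent, awaiting response... 200 OK" and "failed: ..." lines), not taken from the author's actual code:

```python
import re
from collections import Counter

def categorize_wget_log(log_text):
    """Tally how each download attempt ended, from a wget transcript.

    Counts the final HTTP status of each request, plus a bucket for
    connection-level failures (DNS errors, refused connections, ...).
    """
    tally = Counter()
    for line in log_text.splitlines():
        m = re.search(r"awaiting response\.\.\. (\d{3})", line)
        if m:
            tally[m.group(1)] += 1     # e.g. "200", "404", "500"
        elif "failed:" in line:
            tally["connection failure"] += 1
    return tally
```

Feeding it wget_log-all_crls.txt would yield the per-outcome counts behind a breakdown like the one described above.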



Jared Wein: Photon Engineering Newsletter #13

Fri, 18 Aug 2017 20:18:03 +0000

This week I'm taking over for Dolske as he takes a vacation to view the eclipse. This is issue #13 of the Photon Engineering Newsletter.

This past week the Nightly team has had some fun with the Firefox icon. We've seen the following icons grace Nightly builds in the past week: The icon in the top-left was created in 2011 by Sean Martell. The icon in the top-right was the original Phoenix icon. Phoenix was later renamed Firebird, and then the name was later changed to Firefox. The icon in the bottom-left was the first "Firefox" icon, designed by Steven Garrity in 2003. As for the icon in the bottom-right, well, it is such logo with much browser; we couldn't help but share it.

Recent Changes

Menus/structure: The Report Site Issue button has been moved to the Page Action menu in Nightly and Dev Edition. This button doesn't ship to users on Beta or Release. Probably the biggest visual change this week is that we now have spacers in the toolbar. These help to separate the location bar from the other utility buttons, and also keep the location bar relatively centered within the window. We have also replaced the bookmarks menu button with the Library button (it's the icon that looks like books on a shelf). We also widened various panels to help fit more text in them.

Animation: The Pin to Overflow animation has also been tweaked to not move as far. This will likely be the final adjustment to this animation (seen on the left). The Pocket button has moved to the location bar, and the button expands when a page is saved to Pocket (seen on the right).

Preferences: Preferences has continued to work towards its own visual redesign for Firefox 57. New icons landed for the various categories within Preferences, and some borders and margins have been adjusted.

Visual redesign: The tab label is no longer centered on Mac. This now brings Linux, Mac, and Windows to all have the same visual treatment for tabs. Changing to Compact density within Customize mode changes the toolbar buttons to use less horizontal space. The following GIF shows the theme changing from Compact to Normal to Touch densities.

Onboarding: New graphics for the onboarding tour have landed.

Performance: Two of the main engineers focusing on performance were on PTO this past week, so we don't have an update from them.

Tagged: firefox, photon, planet-mozilla [...]



Mitchell Baker: Resignation as co-chair of the Digital Economy Board of Advisors

Fri, 18 Aug 2017 19:12:08 +0000

For the past year and a half I have been serving as one of two co-chairs of the U.S. Commerce Department Digital Economy Board of Advisors. The Board was appointed in March 2016 by then-Secretary of Commerce Penny Pritzker to serve a two-year term. On Thursday I sent the letter below to Secretary Ross.

Dear Secretary Ross,
I am resigning from my position as a member and co-chair of the Commerce Department’s Digital Economy Board of Advisors, effective immediately.
It is the responsibility of leaders to take action and lift up each and every American. Our leaders must unequivocally denounce bigotry, racism, sexism, hate, and violence.
The digital economy is fundamental to creating an economy that offers opportunity to all Americans. It has been an honor to serve as member and co-chair of this board and to work with the Commerce Department staff.
Sincerely,
Mitchell Baker
Executive Chairwoman
Mozilla




Air Mozilla: Webdev Beer and Tell: August 2017, 18 Aug 2017

Fri, 18 Aug 2017 18:00:00 +0000

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...







David Teller: JavaScript Binary AST Engineering Newsletter #1

Fri, 18 Aug 2017 11:46:54 +0000

Hey, all cool kids have exciting Engineering Newsletters these days, so it's high time the JavaScript Binary AST got one!

Summary

JavaScript Binary AST is a joint project between Mozilla and Facebook to rethink how JavaScript source code is stored/transmitted/parsed. We expect that this project will help visibly speed up the loading of large codebases of JS applications and will have a large impact on the JS development community, including web developers, Node developers, add-on developers, and ourselves.



Ehsan Akhgari: Quantum Flow Engineering Newsletter #20

Fri, 18 Aug 2017 05:26:00 +0000

It is hard to believe that we’ve gotten to the twentieth of these newsletters. That also means that we’re very quickly approaching the finish line for this sprint. We have a bit more than five weeks to go before Firefox 57 merges to beta. It may be a good time to start to think more carefully about what we pay attention to in the remaining time, both in terms of the risk of patches landing, and the opportunity cost of what we decide to put off until 58 and the releases after. We still have a large number of triaged bugs that are available for someone to pick up and work on. If you have some spare cycles, we would really appreciate it if you picked one or two bugs from this list and worked on them. They span many different areas of the codebase, so finding something in your area of interest and expertise should hopefully be simple. Quantum Flow isn’t the kind of project that requires fixing every single one of these bugs to be finished successfully, but at the same time big performance improvements often consist of many small parts, so the cumulative effect of a few additional fixes can make a big impact. It is worth mentioning that lately, while lurking on various tech news and blog sites where Nightly users comment, I have seen quite a few positive comments about Nightly performance from users. It’s easy to get lost in the details of the work involved in getting rid of synchronous IPCs, synchronous layout/style flushes, unnecessary memory allocations, and hashtable lookups, improving data locality and JavaScript JIT performance, making sure code gets inlined better, shipping a new CSS engine, etc., but it is reassuring to see people take notice. Moving on, one point about the Speedometer charts on AWFY that I have gotten a few questions about recently. We now have Speedometer benchmark numbers on Firefox Beta on the reference hardware reported in addition to inbound optimized and PGO builds.
You may notice that the benchmark score numbers we are getting on Beta are around the same as Nightly (which swings around 83-84 these days). This doesn’t mean that we haven’t made any improvements on Nightly since the last Beta merge! We have some Nightly-only telemetry code and some features that are only enabled on the Nightly channel, and those add a bit of overhead, which causes us to see a bit of an improvement after an uplift from mozilla-central to mozilla-beta without any code changes. This means that when the current code on Nightly gets merged to Beta 57, we should similarly expect a bit of an improvement. And now let me take a moment to acknowledge the work of some of those who helped make Firefox faster last week. I hope I’m not dropping anyone’s name by mistake. Perry Jiang made _shouldCapture() run off of the idle queue and do nothing for about: pages. Perry also made it so that we don’t unnecessarily load the autoscroll PNG when opening a new window. Kris Maglione fixed a recent regression causing extremely poor performance with extensions which have scripts creating large numbers of message listeners which never call their response callbacks. He also changed code that registers a lot of lazy module and service getters to use loops, making such code easier for the SpiderMonkey JIT to optimize. He furthermore switched away from using FileUtils.getFile(), which does main-thread I/O to check whether the respective directory exists. Kris also made us not create the IndexedDB bindings in sandboxes since they’re never used, and avoided adding the caller location to the sandbox name if an explicit name is provided by the caller. Jan de Mooij added a megamorphic SetEle[...]
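One recurring theme in the performance work above is getting rid of synchronous layout/style flushes. The usual culprit is interleaving layout reads with style writes, which forces a reflow on every read; the common cure is batching all reads before all writes. Here is a minimal, framework-free sketch of that batching idea (names like `createFrameScheduler` are made up for this example, and DOM calls are stood in for by plain callbacks so the sketch runs anywhere):

```javascript
// Sketch of read/write batching to avoid forced synchronous layout flushes.
// Interleaved reads (offsetWidth, getComputedStyle) and writes (style changes)
// force the engine to reflow repeatedly; queuing reads before writes keeps it
// to one reflow per flush. createFrameScheduler is a hypothetical helper.
function createFrameScheduler() {
  const reads = [];
  const writes = [];
  return {
    read(fn) { reads.push(fn); },
    write(fn) { writes.push(fn); },
    // In a browser you would call flush() inside requestAnimationFrame.
    flush() {
      const results = reads.map(fn => fn()); // all layout reads first
      writes.forEach(fn => fn());            // then all style writes
      reads.length = 0;
      writes.length = 0;
      return results;
    },
  };
}
```

Libraries like fastdom implement the same pattern for real DOM access; the sketch only shows the scheduling discipline, not the DOM specifics.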



Karl Dubost: About Publishing Code Benchmarks

Fri, 18 Aug 2017 00:20:00 +0000

We often see code benchmarks. Browser X's HTML renderer is faster than browser Y's. Some JavaScript engine outperforms the competition twofold.

While these benchmarks give a kind of instant gratification for the product, they always make me dubious, whoever they come from. If the target is just to outperform another browser, then I sense that nothing useful has really been accomplished. Even as a marketing technique, I don't think it works.

When/if publishing a benchmark, focus on three things:

  • How does this new code outperform previous versions of the code? It's good to show that we care about our product and that we want to be faster where/when it matters.
  • How does this improve the user experience on some specific sites? Improving speed in a controlled environment like a benchmark is nice, but improving speed on real-world websites is even better. Did it make JavaScript-controlled scrolling faster and smoother?
  • How did we get there? Which steps were taken to improve the code's performance? The coding tricks and techniques used to make it faster.

Those are the benchmark blog posts I like to read. So, as a summary:

Good benchmark posts: 1. outperform your own code, 2. demo improvements on real websites, 3. give technical explanations.

Otsukare!




Air Mozilla: Intern Presentations: Round 5: Thursday, August 17th

Thu, 17 Aug 2017 20:00:00 +0000

(image) Intern Presentations 7 presenters Time: 1:00PM - 2:45PM (PDT) - each presenter will start every 15 minutes 3 SF, 1 TOR, 1 PDX, 2 Paris




The Firefox Frontier: The Lightweight Browser: Firefox Focus Does Less, Which Is So Much More

Thu, 17 Aug 2017 16:31:33 +0000

Firefox had a baby and named it Focus! Firefox Focus is the new private browser for iOS and Android, made for those times when you just need something simple and … Read more

The post The Lightweight Browser: Firefox Focus Does Less, Which Is So Much More appeared first on The Firefox Frontier.




Emma Irwin: I Need Your Open Source Brain

Wed, 16 Aug 2017 17:52:14 +0000

Photo credit: Internet Archive Book Images via Visual Hunt / No known copyright restrictions Together with help from leaders in Teaching Open Source (TOS), POSSE and others, I’m developing a series of learning modules intended to help Computer Science / Technical Students gain a holistic understanding of open source, with built-in opportunities to ‘learn by doing’.  These modules are intended to enable students in their goals as they build Open Source Clubs (new website coming soon) on their campuses. And I need your help! I need your brain, what it knows about Open Source, what skills, knowledge, attitudes, visions you think are important and crucial. I also need your brain to ..um brainstorm(!) ideas for real-world value in open source ‘open educational’ offerings. There’s a Github task for that! Did I mention I need your Open Source Brain? I really do… You’ll find checklists for review at the bottom of each module. Module 1 – Open is a noun, a verb, an attitude!  Module 2 – Through the Looking Glass – Evaluating Open Source Projects Module 3 –  Opening Your Project 101 Module 4 – Stepping Into an Open Project (soft skills) Module 5 – Stepping Into an Open Project (technical skills) Module 6 – Prototyping your Future! Question!  Creating Real World Value for Open Source ‘Open Education’ resources. Share[...]



Mozilla VR Blog: Samsung Gear VR support lands in Servo

Wed, 16 Aug 2017 17:41:59 +0000

We are happy to announce that Samsung Gear VR headset support is landing in Servo. The current implementation is WebVR 1.1 spec-compliant and supports both the remote and headset controllers available in the Samsung Gear VR 2017 model. If you are eager to explore, you can download a project template compatible with Gear VR Android phones. Add your Oculus signature file, and run the project to launch the application on your mobile phone. Alongside the Gear VR support, we worked on other Servo areas in order to provide A-Frame compatibility, WebGL extensions, optimized Android compilation, and reduced Servo startup times. A-Frame Compatibility Servo now supports Mutation Observers, which enables us to polyfill Custom Elements. Together with a solid WebVR architecture and better texture loading, we can now run any A-Frame content across mobile (Google Daydream, Samsung Gear VR) and desktop (HTC Vive) platforms. All the pieces have fallen into place thanks to all the amazing work that the Servo team is doing. WebGL Extensions WebGL extensions enable applications to get optimal performance by taking advantage of state-of-the-art GPU capabilities. This is even more important in VR because of the extra work required for stereo rendering. We designed the WebGL extension architecture and implemented some of the extensions used by A-Frame/Three, such as float textures, instancing, compressed textures, and VAOs. Compiling Servo for Android Recently, the Rust team changed the default Android compilation targets. They added an armv7-linux-androideabi target corresponding to the armeabi-v7a official ABI and changed arm-linux-androideabi to correspond to the armeabi official ABI instead of armeabi-v7a. This could cause significant performance regressions in Servo because it was using the arm-linux-androideabi target by default. Using the new armv7 compilation target is easy for pure Rust-based crates.
It’s not so trivial for cmake- or makefile-based dependencies, because they infer the toolchain and compiler names from the target triple name. We adapted all the problematic dependencies. We took advantage of this work to add arm64 compilation support and provided a simple CLI API to select any Android compilation target in Servo. Reduced startup times The C-based libfontconfig library was causing long startup times in Servo for Android. We didn’t find a way to fix the library itself, so we opted to get rid of it and implement an alternative way to query Android system fonts. Unfortunately, Android doesn't provide an API to query system fonts until Android O, so we were forced to parse the system configuration files and load fonts manually. Gear VR support on Rust-WebVR Library We started working on ovr-mobile-sys, the Rust bindings crate for the Oculus Mobile SDK API. We used rust-bindgen to automatically generate the bindings from the C headers, but had to manually transpile some of the inline SDK header code, since inline functions don’t generate symbols and are not exported by rust-bindgen. Then we added the SDK integration into the rust-webvr standalone library. The OculusVRService class offers the entry point to the Oculus SDK and handles life-cycle operations such as initialization, shutdown, and VR device discovery. The integration with the headset is implemented in OculusVRDisplay. Gear VR lacks positional tracking, but by using the neck model provided in the SDK, we expose a basic position vector simulating how the human head naturally rotates relative to the base o[...]
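The neck-model idea mentioned above can be sketched in a few lines: rotate a fixed "neck pivot" offset by the head's orientation, then subtract the rest offset, so pure rotation yields a plausible positional sway. This is a minimal illustration of the general technique, not Servo's or Oculus's actual implementation; the offset constants and function names are made up for the example:

```javascript
// Minimal neck-model sketch: derive a fake head position from orientation
// alone, for headsets (like Gear VR) with no positional tracking.
// The neckOffset values below are illustrative, not real SDK constants.

// Rotate a vector v = [x, y, z] by a unit quaternion q = [x, y, z, w].
function rotateByQuaternion(v, q) {
  const [qx, qy, qz, qw] = q;
  // t = 2 * cross(q.xyz, v)
  const tx = 2 * (qy * v[2] - qz * v[1]);
  const ty = 2 * (qz * v[0] - qx * v[2]);
  const tz = 2 * (qx * v[1] - qy * v[0]);
  // v' = v + w * t + cross(q.xyz, t)
  return [
    v[0] + qw * tx + (qy * tz - qz * ty),
    v[1] + qw * ty + (qz * tx - qx * tz),
    v[2] + qw * tz + (qx * ty - qy * tx),
  ];
}

// Simulated position: rotate the neck pivot offset by the head orientation,
// then subtract the rest offset so the identity orientation gives [0, 0, 0].
function neckModelPosition(orientation, neckOffset = [0, 0.075, -0.08]) {
  const rotated = rotateByQuaternion(neckOffset, orientation);
  return rotated.map((c, i) => c - neckOffset[i]);
}
```

With the identity orientation `[0, 0, 0, 1]` this returns `[0, 0, 0]`; turning the head swings the simulated position around the neck pivot, which is enough to make head motion feel natural without real tracking hardware.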



Air Mozilla: Weekly SUMO Community Meeting August 16, 2017

Wed, 16 Aug 2017 16:00:00 +0000

(image) This is the sumo weekly call




Firefox Nightly: These Weeks in Firefox: Issue 22

Wed, 16 Aug 2017 15:18:34 +0000

Highlights Prathiksha added site permission settings for Location, Camera, Microphone and replaced the old permission dialog for Notifications in about:preferences. The new permission settings. jkt has improved the WebExtensions contextualIdentities APIs to the point where the Test Pilot Containers experiment can run as a WebExtension. Legacy Extensions, including classic and add-on SDK extensions, are now disabled by default in Nightly. Contributor UK92 made it possible to customize the position of the back/forward and reload/stop buttons. Felipe landed a patch that moves random timeouts to idle callbacks and got some solid performance improvements. Activity Stream rolling out in 56 to 10% of users on en-US builds, geolocated in USA and Canada. Up next is shipping to more users in more locales, and rolling out Pocket recommendations in Germany. The Page Action now has a Pocket item. Code for a screenshots item has landed in screenshots’ github repo but needs screenshots to merge to m-c. Items can now be added/removed from the context menu. Finally, the Report Site Issue button has now also made its way into the page action menu for Nightly/DevEdition. Handy! The main toolbar now has 2 flexible spaces, one on either side of the url/search bar(s). The library button has also replaced the bookmarks menu button in the default toolbar set. Friends of the Firefox team Resolved bugs (excluding employees): https://mzl.la/2x0m5n4 More than one bug fixed: Alejandro Rodriguez Salamanca Dan Banner Hossain Al Ikram [:ikram] (QA Contact) Masatoshi Kimura [:emk] Michael Kohler [:mkohler] Michael Smith [:mismith] Richard Marti (:Paenglab) Rob Wu [:robwu] Tomislav Jovanovic :zombie flyingrub New contributors (🌟 = First Patch!) ♥ Ami ♥ refactored `_handleMessage` in `UpdateTopLevelContentWindowIDHelper.jsm`. Luciano I cleaned up unused variables in Sync code. Project Updates Add-ons Numerous bugs on out-of-process webextensions have been fixed. 
kmag has fixed a bunch of webextension performance issues. Sending a message to a tab no longer instantiates lazy tabs. The browsingData API can now clear local storage. ntim has added toolbar colors and per-window themes to the themes API. Little fixes to webRequest, optional permissions, downloads, experiments, hybrid extensions. rpl fixed several bugs with browser actions, page actions, and options on Android. Activity Stream Landed pref’ed off in 56 Beta, with localization, snippets, performance telemetry, and Pocket recommendations. Up next Adding “Recent Bookmarks” and “Recently Visited” to Highlights. Adding custom sections via a Web Extension. More customization for Top Sites: Pin/Dismiss, Show More/Less, Add/Edit Top Site. Creating a site summary pipeline (high-res page icons -> Tippytop -> Screenshot + Favicon). Optimizing metadata queries and Tippytop Icon DB improvements. Firefox Core Engineering Installer Profile cleanup option has landed in the stub installer for 57. Users who are running the stub installer and have an older version of Firefox installed will be presented with the option to clean up their profile. Updater LZMA/SHA384 changes have landed as of 56 beta 3. Quantum & Photon Performance pile-on: Felipe Gomes, Kirk Steuber, Adam Gashlin, Perry Jiang, Doug Thayer, Robert Strong closed 16 bugs and are currently on 11 more bugs. Form Autofill Planning to enable address autofill (with sync) on beta in the next two weeks f[...]



Christian Heilmann: Taking a break – and so should you

Wed, 16 Aug 2017 14:59:53 +0000

TL;DR: I am going on holiday for a week and won’t take any computer with me. When I’m back I will cut down on my travels, social media and conference participation and focus more on coaching others, writing and developing with a real production focus. Larry shows how it is done You won’t hear much from me in the next week or so as I am taking a well-deserved vacation. I’m off to take my partner to the Cayman Islands to visit friends who have a house with a spare room, as hotels started to feel like work for me. I’m also making the conscious decision not to take any computer with me, as I would be tempted to do work whilst I am there. Which would be silly. Having just been in a lot of meetings with other DevRel people and a great event about it, I found a pattern: we all have no idea how to measure our success and feel oddly unsatisfied, if not worried, about this. And we are all worried about keeping up to date in a daily-changing market. I’m doing OK on both of these, but I also suffer from the same worries. Furthermore, I am disturbed by the gap between what we talk about at events and workshops and what gets released in the market afterwards. The huge gap between publication and application We have all the information about what not to do to create engaging, fast and reliable solutions. We have all the information on how to automate some of these checks so as not to disrupt fast development processes. And yet I feel a massive lack of longevity or maintainability in all the products I see and use. I even see a really disturbing re-emergence of “this only needs to work on browser $x and platform $y” thinking. As if the last decade hadn’t happened. Business decisions dictate what goes into production, less so what we get excited about. Even more worrying is security. We use a lot of third party code, give it full access to machines and fail to keep it up-to-date.
We also happily use new and untested code in production even when the original developers state categorically that it shouldn’t be used in that manner. When it comes to following the tech news, I see us tumbling in loops. Where in the past there was a monthly cadence of interesting things to come out, more readily available publication channels and a “stream of news” mentality make it a full-time job just to keep up with what’s happening. Many thoughtpieces show up in several newsletters and get repurposed even if the original authors admitted in commentary that they were wrong. A lot is about being new and fast, not about being right. There is also a weird premature productisation happening. When JavaScript, browsers and the web weren’t as ubiquitous as they are now, we showed and explained coding tricks and workarounds in blog posts. Now we find a solution, wrap it in a package or a library and release it for people to use. This is a natural progression in any software, but I miss the re-use and mulling over of the original thought. And I am also pretty sure that the usage numbers and stars on GitHub are pretty inflated. My new (old) work modus Instead of speaking at a large number of conferences, I will be much pickier about where I go. My time is more limited now, and I want to use my talents to have a more direct impact. This is due to a few reasons: I want to be able to measure more directly what I do – it is a good feeling to be told that you were inspiring and great. But it fails to stay a good f[...]



Christian Heilmann: DevRelSummit was well worth it

Wed, 16 Aug 2017 09:47:21 +0000

Last week I was in Seattle to attend a few meetings and I was lucky to attend DevRelSummit in the Galvanize space. I was invited to cover an “Ask me anything” slot about Developer Outreach at Microsoft and help out Charles Morris of the Edge team, who gave a presentation on a similar matter. Q & A session with @Microsoft's @codepo8 about #devrel. #DevRelSummit #tech #microsoft pic.twitter.com/PFaW8Wi4Fl— Angel Banks (@angelmbanks) August 11, 2017 It feels weird to have a conference that is pretty meta about the subject of Developer relations (and there is even a ConfConf for conference organisers), but I can wholeheartedly recommend DevRelSummit for people who already work in this field and those who want to. The line-up and presentations were full of people who know their job and shared real information from the trenches instead of advertising products to help you. This is a very common worry when a new field in our job market gains traction. Anyone who runs events or outreach programs drowns in daily offers of “the turn-key solution to devrel success” or similar snake oil.
In short, the presentations were: Bear Douglas of Slack (formerly Twitter and Facebook) sharing wins and fails of developer outreach Charles Morris of Microsoft showing how he scaled from 3 people on the Edge team to a whole group, aligning engineering and outreach Kyle Paul showing how to grow a community in spaces that are not technical cool spots and how to measure DevFest success AJ Glasser of Unity explaining how to deal with and harvest feedback you get showing some traps to avoid Damon Hernandez of Samsung talking about building community around hackathons Linda Xie of Sourcegraph showing the product and growth cycle of a new software product Robert Nyman of Google showing how he got into DevRel and what can be done to stay safe and sound on the road Angel Banks and Beth Laing sharing the road to and the way to deliver an inclusive conference with their “We Rise” event as the example Jessica Tremblay and Sam Richard showing how IBM scaled their developer community In between the presentations there were breakout discussions, lightning talks and general space and time to network and share information. As expected, the huge topics of the event were increasing diversity, running events smoothly, scaling developer outreach and measuring devrel success. Also, as expected, there were dozens of ways and ideas how to do these things with consensus and agreeable discourse. All in all, DevRelSummit was a very well executed event and a superb networking opportunity without any commercial overhead. There was a significant lack of grandstanding and it was exciting to have a clear and open information exchange amongst people who should be in competition but know that when it comes to building communities, this is not helpful. There is a finite amount of people we want to reach doing Developer Relations. There is no point in trying to subdivide this group even further. Cheers to another successful @DevRelSummit, Sandra & Barry! 
@SandraPersing @bteiger pic.twitter.com/aPVfpvRe9w— Tia Over (@tiaover) August 12, 2017 I want to thank everyone involved for the flawless execution and the willingness to share. Having an invite-only Slack group with pre-set channels for each talk and session was incredibly helpful and means the conversations are going on right [...]



Air Mozilla: Intern Presentations: Round 4: Tuesday, August 15th

Tue, 15 Aug 2017 20:00:00 +0000

(image) Intern Presentations 6 presenters Time: 1:00PM - 2:30PM (PDT) - each presenter will start every 15 minutes 5 MTV, 1 Berlin




Hacks.Mozilla.Org: Essential WebVR resources

Tue, 15 Aug 2017 13:30:29 +0000

The general release of Firefox 55 brought a number of cool new features to the Gecko platform, one of which is the WebVR API v1.1. This allows developers to create immersive VR experiences inside web apps, compatible with popular hardware such as HTC VIVE, Oculus Rift, and Google Daydream. This article looks at the resources we’ve made available to facilitate getting into WebVR development. Support notes Version 1.1 of the WebVR API is very new, with varying support available across modern browsers: Firefox 55 sees full support on Windows, and more experimental support available for Mac in the Beta/Nightly release channels only, until testing and final work is completed. Supported VR hardware includes HTC VIVE, Oculus Rift, and Google Daydream. Chrome support is still experimental — you can currently only see support out in the wild on Chrome for Android with Google Daydream. Edge fully supports WebVR 1.1, through the Windows Mixed Reality headset. Support is also available in Samsung Internet, via their GearVR hardware. Note that the 1.0 version of the API can be considered obsolete, and has been (or will be) removed from all major browsers. Controlling WebVR apps using the full features of VR controllers relies on the Gamepad Extensions API. This adds features to the Gamepad API that provide access to controller features like haptic actuators (e.g. vibration hardware) and position/orientation data (i.e., pose). This currently has even more limited support than the WebVR API; Firefox 55+ has it available in Beta/Nightly channels. In other browsers, you’ll have to make do for now with basic Gamepad API functionality, like reporting button presses. vr.mozilla.org vr.mozilla.org — Mozilla’s new landing pad for WebVR — features demos, utilities, news and updates, and all the other information you’ll need to get up and running with WebVR. MDN documentation MDN has full documentation available for both the APIs mentioned above. 
See: WebVR API reference Gamepad API reference, which includes the Gamepad Extensions API In addition, we’ve written some useful guides to get you familiar with the basics of using these APIs: WebVR concepts Using the WebVR API Using VR controllers with WebVR A-Frame and other libraries WebVR experiences can be fairly complex to develop. The API itself is easy to use, but you need to use WebGL to create the 3D scenes you want to feature in your apps, and this can prove difficult for those not well-versed in low-level graphics programming. However, there are a number of libraries to hand that can help with this. The hero of the WebVR world is Mozilla’s A-Frame library, which allows you to create nice-looking 3D scenes using custom HTML elements, handling all the WebGL for you behind the scenes. A-Frame apps are also WebVR-compatible by default. It is perfect for putting together apps and experiences quickly. A-Frame documentation A-Frame demos Salva’s Tools for VR development article series There are a number of other well-written 3D libraries available too, which abstract away the difficulty of working with raw WebGL. Good examples include: Three, BabylonJS, and PlayCanvas. These don’t include VR capabilities out of the box, but it is not too difficult to write your own WebVR rendering code around them. If you are worried about supporting older browsers that only include WebVR 1.0 [...]
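As a rough sketch of what WebVR 1.1 feature detection looks like in practice: `navigator.getVRDisplays()` resolves to an array of `VRDisplay` objects, and you typically pick one whose capabilities allow presenting. The `pickPresentableDisplay` helper below is a hypothetical name, factored out so the selection logic can run outside a browser; the browser-only calls are confined to comments:

```javascript
// Hedged sketch of WebVR 1.1 display selection. In a browser,
// navigator.getVRDisplays() resolves to an array of VRDisplay objects;
// this helper just encodes the "first display that can present" choice.
// pickPresentableDisplay is a made-up name for this example.
function pickPresentableDisplay(displays) {
  return displays.find(d => d.capabilities && d.capabilities.canPresent) || null;
}

// Browser usage (WebVR 1.1, e.g. Firefox 55 on Windows):
//   if (navigator.getVRDisplays) {
//     navigator.getVRDisplays().then(displays => {
//       const display = pickPresentableDisplay(displays);
//       if (display) {
//         // Drive rendering with the display's own requestAnimationFrame,
//         // and call display.requestPresent([{ source: canvas }]) from a
//         // user gesture to enter VR mode.
//       }
//     });
//   }
```

Feature-detecting `navigator.getVRDisplays` first, as shown in the comment, is also how you degrade gracefully in the many browsers where support is still experimental or absent.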



Mozilla Reps Community: Reps Program Objectives – Q3 2017

Tue, 15 Aug 2017 08:16:02 +0000

As with every quarter, we define Objectives and Key Results for the Reps Program. We are happy to announce the Objectives for the current quarter.

Objective 1: The Reps program continues to grow its process maturity
KR1: 20 Reps have been trained with the Resource training
KR2: 100% of the budget requests of new Reps are filed by Resource Track Reps
KR3: 30 Reps complete the coaching training
KR4: The number of mentor-less Reps is reduced by 50%
KR5: Increase number of authors for Reps tweets to 10 people

Objective 2: The Reps program is the backbone for any mobilizing needs
KR1: We documented what mobilizing Reps are focusing on
KR2: An implementation roadmap for mobilizers’ recommendations is in place.
KR3: Identify 1 key measure that defines how our Mobilizers add value to the coding and Non-Coding/Enthusiast communities

Objective 3: The Activate Portal is improved for Mobilizer Reps and Functional Areas
KR1: The Rust activity is updated
KR2: The WebExtensions activity update has been tested in 3 pilot events in 3 different countries
KR3: 60 unique Reps have run a MozActivate event
KR4: The website is updated to the new branding

We will work closely with the Community Development Team to achieve our goals. You can follow the progress of these tasks in the Reps Issue Tracker. We also have a dashboard to track the status of each objective.

Which of the above objectives are you most interested in? What key result would you like to hear more about? What do you find intriguing? Which thoughts cross your mind upon reading this? Where would you like to help out? Let’s keep the conversation going! Join the discussion on Discourse.




Nick Cameron: These Weeks in Dev-tools #1

Tue, 15 Aug 2017 05:37:50 +0000

2017-08-14 Welcome to the first ever issue of 'These Weeks in Dev-Tools'! The dev-tools team is responsible for developer tools for Rust developers. That means any tools a developer might use (or want to use) when reading, writing, or debugging Rust code, such as Rustdoc, IDEs, editors, Racer, bindgen, Clippy, Rustfmt, etc. These Weeks in Dev-Tools will keep you up to date with all the exciting news in this area. We plan to have a new issue every few weeks. If you have any news you'd like us to report, please comment on the tracking issue. If you're interested in Rust's developer tools and want to contribute or ask questions, come chat to us in #rust-dev-tools. Announced and released the RLS Visual Studio Code extension Clippy and Bindgen moved into the rust-lang nursery Rust IntelliJ support is official! Blog post - one environment to rule them all - how the RLS handles environment variables by Xanewok Xanewok also has a few other blog posts about his GSoC project working on the RLS Blog post - what the RLS can do by nrc Bindgen @bkchr added the ability to impl Debug manually in our generated bindings when it can't be derive(..)ed: https://github.com/rust-lang-nursery/rust-bindgen/pull/899 @bkchr is adding (should land by the time this issue is published) the ability to run rustfmt on the emitted bindings: https://github.com/rust-lang-nursery/rust-bindgen/pull/905 @photoszzt added the ability to derive(Hash) in the generated bindings: https://github.com/rust-lang-nursery/rust-bindgen/pull/887 @WiSaGaN added support for derive(Copy) on large arrays: https://github.com/rust-lang-nursery/rust-bindgen/pull/874 @photoszzt ported our analysis for which types can derive(Copy) from an ad-hoc algorithm to our fix-point framework: https://github.com/rust-lang-nursery/rust-bindgen/pull/866 @tmfink added support for targeting different Rust versions and channels (previously it was a binary stable vs nightly choice): https://github.com/rust-lang-nursery/rust-bindgen/pull/892 Releases 
Rustfmt 0.2 and 0.2.1 No longer saves backups by default Orders imports and extern crates RLS Visual Studio Code extension 0.1, 0.2.0, 0.2.1: changelog Racer 2.0.10 changelog Bindgen 0.29.0 announcement IntelliJ Rust #48 changelog cargo-edit 0.2 release notes RFCs Add external doc attribute to rustc has been proposed to enter FCP Thanks! @photoszzt has been re-writing various ad-hoc computations into fix-point analyses in Bindgen: whether we can add derive(Debug) to a struct: rust-lang-nursery/rust-bindgen#824 and whether a struct has a virtual table: rust-lang-nursery/rust-bindgen#850 @topecongiro for doing sustained, impressive work on Rustfmt - implementing the new RFC style, fixing (literally) hundreds of bugs, and lots more. Shout out to @TedDriggs for continuing to push Racer forward. Jwilm and the rest of Racer's users continue to appreciate all your hard work! Meetings We've had a bunch of meetings. You can find all the minutes here. Some that might be interesting: dev-tools team roadmap 2017-07-31 Xargo and cross-compilation 2017-06-26 [...]



Mozilla Open Policy & Advocacy Blog: Bringing the 4th Amendment into the Digital Age

Tue, 15 Aug 2017 04:31:48 +0000

Today, Mozilla has joined other major technology companies in filing an amicus brief urging the Supreme Court of the United States to reexamine how the 4th Amendment and search warrant requirements should apply in our digital era. We are joining this brief because we believe our laws need to keep up with what we already know to be true: that the Internet is an integral part of modern life, and that user privacy must not be treated as optional.

At the heart of this case is the government’s attempt to obtain “cell site location information” to aid in a criminal investigation. This information is generated continuously when your phone is on. Your phone communicates with nearby cell sites to connect with the cellular network and those sites create a record of your phone’s location as you go about your business. In the case at hand, the government did not obtain a warrant, which would have required probable cause, before obtaining this location information. Instead, the government sought a court order under the Stored Communications Act of 1986, which requires a lesser showing.

Looking at how the courts have dealt with the cell phone location records in this case demonstrates why our laws must be revisited to account for modern technological reality. The district court decided that the government didn’t have to obtain a warrant because people do not have a reasonable expectation of privacy in their cell phone location information. On appeal, the Sixth Circuit acknowledged that similar information, such as GPS monitoring in government investigations, would require a warrant. But it too found no warrant was needed because the location information was a “business record” from a “third party” (i.e., the service providers).

We believe users should not be forced to surrender their expectations of privacy when using their phones and we hope the Court will reconsider the law in this area.

*Brief link updated on August 16

The post Bringing the 4th Amendment into the Digital Age appeared first on Open Policy & Advocacy.




This Week In Rust: This Week in Rust 195

Tue, 15 Aug 2017 04:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

  • Announcing Gotham - a flexible web framework that does not sacrifice safety, security or speed.
  • What the RLS can do for Rust support in IDEs.
  • Optimizing Rust.
  • Rust for the web.
  • Setting up a Rust environment on Windows.
  • Of boxes and trees - smart pointers in Rust.
  • Rustls and Servo.
  • My experience contributing to Servo.
  • Debugging a race condition in a release target.
  • Designing a channel.
  • Types as contracts: Implementation and evaluation.
  • Exposing a Rust library to C.
  • Think local, act local in Rust.
  • Announcing Rusty Object Notation.
  • Implementing a bot for Slack in Rust, Rocket and Anterofit - Part 2.
  • Evolution of a simple du -s clone.
  • REST calls made rustic: RS-ES in idiomatic Rust tutorial.
  • User-friendly Elasticsearch queries with Rust and Elastic.
  • This week in Rust docs 68.
  • These weeks in dev-tools 1.
  • This week in Redox 28.

Crate of the Week

This week's crate is exa, a modern ls replacement (with a tree thrown in as well) written in Rust. Thanks to Vikrant for the suggestion. Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information.

  • Pleco.rs - a chess engine in Rust is looking for contributors.
  • [easy] bindgen: Rename TypeKind::Named to TypeKind::TypeParam.
  • [less easy] bindgen: Emitting or deriving trait implementations.
  • [less easy] bindgen: Emit a "manual" implementation of Debug when it cannot be derived.
  • [less easy] bindgen: "manually" implement Hash when it cannot be derived.
  • [less easy] bindgen: "manually" implement PartialEq when it cannot be derived.
  • [less easy] bindgen: Derive Eq when possible.
  • [less easy] bindgen: "manually" implement Eq when we cannot derive it.
  • [less easy] bindgen: Derive PartialOrd when possible.
  • [less easy] bindgen: "manually" implement PartialOrd when we cannot derive it.
  • [less easy] bindgen: Derive Ord when possible.
  • [less easy] bindgen: "manually" implement Ord when we cannot derive it.
  • PumpkinDB: Rust nightly after 2017-06-20 affects benchmarks negatively. (Discuss here).
  • wayland-window: Add control buttons.
  • wayland-window: Make borders prettier.
  • [doc] lyon: API guidelines: methods on collections that produce iterators follow the iter, iter_mut, into_iter conventions. Lyon is a GPU-based 2D graphics rendering engine in Rust.
  • [doc] lyon: API guidelines: ad-hoc conversions follow as_, to_, into_ conventions.
  • [doc] lyon: API guidelines: iterator type names should match the methods that produce them.
  • [medium] lyon: Implement clipping line joins at the miter limit.
  • [easy] ggez: Input doesn't work on mac using tmux and iterm2.
  • ggez i[...]



Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - August 15, 2017

Tue, 15 Aug 2017 00:00:00 +0000

Here’s what happened on the MozMEAO SRE team from August 8th - August 15th.

Current work

MDN Migration to AWS

  • We’ve set up a few cronjobs to periodically sync static files from the current SCL3 datacenter to an S3 bucket. Our Kubernetes development environment runs a cronjob that pulls these files from S3 to a local EFS mount.
    • There was some additional work needed to deal with files in SCL3 that contained Unicode characters in their names.
  • A cronjob in Kubernetes has been implemented to back up new files uploaded to our shared EFS volume.

  • We’ve finished our evaluation of hosted Elasticsearch from elastic.co, which we’ll be using for our initial migration in production.
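The post doesn't show the actual manifests, but as an illustrative sketch only (the name, schedule, image, bucket, and mount path below are all made up, and the CronJob API group/version depends on your Kubernetes release), the S3-to-EFS pull cronjob could take roughly this shape:

```yaml
apiVersion: batch/v1beta1        # CronJob API version varies by cluster release
kind: CronJob
metadata:
  name: mdn-s3-to-efs-sync       # hypothetical name
spec:
  schedule: "*/15 * * * *"       # every 15 minutes; illustrative
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: sync
              image: example/aws-cli:latest   # hypothetical image with the AWS CLI
              # Pull the synced static files from S3 onto the shared EFS mount
              command: ["aws", "s3", "sync", "s3://mdn-static-example", "/efs/static"]
              volumeMounts:
                - name: efs
                  mountPath: /efs
          volumes:
            - name: efs
              hostPath:
                path: /mnt/efs   # assumes EFS is already mounted on the node
          restartPolicy: OnFailure
```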

Upcoming Portland Deis 1 cluster decommissioning

The Deis 1 cluster in Portland is tentatively scheduled to be decommissioned later this week.

Links




Mozilla Addons Blog: Add-ons Update – 2017/08

Mon, 14 Aug 2017 23:23:28 +0000

Here’s the monthly update of the state of the add-ons world.

The Review Queues

In the past month, our team reviewed 1,803 listed add-on submissions:

  • 1,368 in fewer than 5 days (76%).
  • 147 between 5 and 10 days (8%).
  • 288 after more than 10 days (16%).

274 listed add-ons are awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 56 and the bulk validation has been run. This is the last one of these we’ll do, since compatibility is a much smaller problem with the WebExtensions API.

Firefox 57 is now on the Nightly channel, and only accepting WebExtension add-ons by default. Here are some changes we’re implementing on AMO to ease the transition to 57.

We recommend that you test your add-ons on Beta. If you’re an add-ons user, you can install the Add-on Compatibility Reporter. It helps you identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Apoorva Pandey
  • Neha Tekriwal
  • Swapnesh Kumar Sahoo
  • rctgamer3
  • Tushar Saini
  • vishal-chitnis
  • Cameron Kaiser
  • zombie
  • Trishul Goel
  • Krzysztof Modras
  • Tushar Saini
  • Tim Nguyen
  • Richard Marti
  • Christophe Villeneuve
  • Jan Henning
  • Leni Mutungi
  • dw-dev
  • Dino Herbert

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/08 appeared first on Mozilla Add-ons Blog.




The Firefox Frontier: 64-bit Firefox is the new default on 64-bit Windows

Mon, 14 Aug 2017 20:26:24 +0000

Users on 64-bit Windows who download Firefox will now get our 64-bit version by default. That means they’ll install a more secure version of Firefox, one that also crashes a … Read more

The post 64-bit Firefox is the new default on 64-bit Windows appeared first on The Firefox Frontier.




Air Mozilla: Mozilla Weekly Project Meeting, 14 Aug 2017

Mon, 14 Aug 2017 18:00:00 +0000

The Monday Project Meeting




Hacks.Mozilla.Org: A-Frame comes to js13kGames: build a game in WebVR

Mon, 14 Aug 2017 15:03:13 +0000

It’s that time of the year again – the latest edition of the js13kGames competition opened yesterday, on Sunday, August 13th, just like last year and every year going back to 2012, when I started this competition. Every year the contest has a new theme, but this time there’s another new twist that’s a little bit different – a brand new A-Frame VR category, just in time for the arrival of WebVR in Firefox 55 and a desktop browser near you. Js13kGames is an online competition for HTML5 game developers where the fun part is that the size limit is set to 13 kilobytes. Unlike a 48-hour game jam, you have a whole month to come up with your best idea, create it, polish as much as you can, and submit – the deadline is September 13th. A brief history of js13kgames It started five years ago from the pure need of having a competition for JavaScript game developers like me – I couldn’t find anything interesting, so I created one myself. Somehow it was cool enough for people to participate, and from what I heard they really enjoyed it, so I kept it going over the years even though managing everything on my own is exhausting and time-consuming. There have been many great games created since the beginning – you can check GitHub’s recent blog post for a quick recap of some of my personal favourites. Two of the best entries from 2016 ended up on Steam in their post-competition versions: Evil Glitch and Glitch Buster, and keys for both of them are available as prizes in the competition this year. A-Frame category The big news this year that I’m really proud of: Virtual Reality has arrived with the new A-Frame category. Be sure to check out the A-Frame landing page for the rules and details. You can reference the minified version of the A-Frame library and you are not required to count its size as part of the 13-kilobyte size limit that defines this contest. Ever since the A-Frame library was announced I have been really excited about trying it out. 
I believe it’s a real game changer (pun intended) for the WebVR world. With just a few lines of HTML markup you can set up a simple scene with VR mode, controls, lights. Prototyping is extremely easy, and you can build really cool experiments within minutes. There are many useful components in the Registry that can help you out too, so you don’t have to write everything yourself. A-Frame is very powerful, yet so easy to use – I really can’t wait to see what you’ll come up with this year. Resources If WebVR is all brand new to you and you have no idea where to start, read Chris Mills’ recent article “WebVR Essentials”. Then be sure to check out the A-Frame website for useful docs and demos, and a lively community of WebVR creators: A-Frame documentation A-Frame demos I realize the 13K size limit is very constraining, but these limitations spawn creativity. There have been many cool and inspiring games created over the years, and all their source code is available on GitHub in a readable form for everyone to learn from. There are plenty of A-Frame tutorials out there, so feel free to look for the specific solutions to your ideas. [...]
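To give a feel for how little markup a basic scene needs, here is a minimal sketch using A-Frame's documented primitives (the library version, colors, and positions are illustrative, not taken from the post):

```html
<html>
  <head>
    <!-- Reference the minified A-Frame library, as the contest rules allow
         (the 0.6.0 version number here is an assumption) -->
    <script src="https://aframe.io/releases/0.6.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- A box, a light, a camera with default look/WASD controls, and a sky -->
      <a-box position="0 1 -3" color="#4CC3D9"></a-box>
      <a-light type="point" position="0 3 0"></a-light>
      <a-camera></a-camera>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Opening a page like this in a WebVR-capable browser gives you the VR mode, controls, and lights the article describes, with no JavaScript written at all.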



Daniel Stenberg: keep finding old security problems

Mon, 14 Aug 2017 12:47:42 +0000

I decided to look closer at security problems and the age of the reported issues in the curl project. One theory I had when I started to collect this data, was that we actually get security problems reported earlier and earlier over time. That bugs would be around in public release for shorter periods of time nowadays than what they did in the past. My thinking would go like this: Logically, bugs that have been around for a long time have had a long time to get caught. The more eyes we’ve had on the code, the fewer old bugs should be left and going forward we should more often catch more recently added bugs. The time from a bug’s introduction into the code until the day we get a security report about it, should logically decrease over time. What if it doesn’t? First, let’s take a look at the data at hand. In the curl project we have so far reported in total 68 security problems over the project’s life time. The first 4 were not recorded correctly so I’ll discard them from my data here, leaving 64 issues to check out. The graph below shows the time distribution. The all time leader so far is the issue reported to us on March 10 this year (2017), which was present in the code since the version 6.5 release done on March 13 2000. 6,206 days, just three days away from 17 whole years. There are no less than twelve additional issues that lingered from more than 5,000 days until reported. Only 20 (31%) of the reported issues had been public for less than 1,000 days. The fastest report was reported on the release day: 0 days. The median time from release to report is a whopping 2541 days. When we receive a report about a security problem, we want the issue fixed, responsibly announced to the world and ship a new release where the problem is gone. 
The median time to go through this procedure is 26.5 days, and the distribution looks like this: What stands out here is the TLS session resumption bypass, which happened because we struggled with understanding it and how to address it properly. Otherwise the numbers look all reasonable to me as we typically do releases at least once every 8 weeks. We rarely ship a release with a known security issue outstanding. Why are very old issues still found? I think partly because the tools that aid people in finding problems are gradually improving, catching things that simply weren't found very often before. With new tools we can find problems that have been around for a long time. Every year, the age of the oldest parts of the code gets one year older. So the older the project gets, the older the bugs that can be found, while in the early days there was a smaller share of the code that was really old (if any at all). What if we instead count age as a percentage of the project's life time? Using this formula, a bug found at day 100 that was added at day 50 would be 50% but if it was added at day 80 it would be 20%. Maybe this would show a graph where the bars are shrinking over time? But no. In fact it shows 17 (27%) of them having been present during 80% or more of the project's life time! The median issue had[...]
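The age-as-a-share-of-lifetime formula above can be sketched as follows (a minimal illustration; the day numbers are the ones used in the post, not real curl data):

```javascript
// Age of a bug as a percentage of the project's lifetime at report time:
// (days the bug was present) / (project age in days when it was reported).
function agePercent(introducedDay, reportedDay) {
  return 100 * (reportedDay - introducedDay) / reportedDay;
}

console.log(agePercent(50, 100)); // added at day 50, found at day 100 → 50
console.log(agePercent(80, 100)); // added at day 80, found at day 100 → 20
```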



Justin Dolske: Photon Engineering Newsletter #12

Mon, 14 Aug 2017 05:23:36 +0000

Let’s get straight into update #12! Oh, hey, anyone notice any icon changes recently? Yeah, they’re pretty wonderful. Or maybe I should say funderful? Looking forward to where they end up! Speaking of looking forward, I’m going to be on vacation for the next two weeks. But fear not! Jared and Mike will be covering Photon updates, so you’ll still be able to get your Photon phix. Recent Changes Menus/structure: When users first open Firefox 57, we will now move items they added to the old hamburger panel into the new overflow panel. As part of this work, the photon structure pref was removed, and we made a start with removing some of the old code we were lugging around until now. This removed over 2500 lines of code! Pocket was added to the page action menu. You can now remove pinned page actions from the URL bar via a context menu on the icons (in addition to the menu items in the page action menu itself). The favicon for customize mode now matches the icon used elsewhere. We broke, and then fixed, the history button. Oops. Sorry about that! Fixed a bunch of styling and behavior issues. Animation: New panel animations have landed. More of the new download animation has landed. When a download starts, an arrow zooms into the download icon, and when it finishes the download icon expands/pulses a couple of times. Also made the download progressbar fill in the opposite direction in RTL locales. Made adjustments to the stop-reload animation so that it doesn’t play if pages load fast, and doesn’t play while tabs are opening or closing. Preferences: Completed a general facelift of preferences (font & size & colors) to match Photon style. New icons, too. Removed all colons from subdialog headers. Nearing completion of the Photon preferences MVP work! Visual redesign: Updated the button positions in the navbar, and made them more customizable. (This was a contributor patch – thanks!) Close buttons updated across the UI (also a contributor patch!) 
The “Compact Light” and “Compact Dark” themes have been renamed to simply “Light” and “Dark”. (The UI density setting is already independent of the theme.) Onboarding: Added the opt-out auto-refresh checkbox into the stub installer. When it finds you’re (re)installing Firefox on top of an already-installed Firefox that’s 2+ versions out of date, it will offer to perform a profile refresh. This helps avoid problems with people who once tried Firefox, but then stopped using it (sometimes due to problems caused by an add-on or setting). Updated the UITour highlight style to the Photon style. Added UITour support for the Page Action panel, so the Onboarding can later use it to introduce the Screenshot feature located there. Addressed a couple of accessibility issues. Performance: The tab strip now uses CSS smooth-scroll, which avoids janky synchronous reflows while scrolling through tabs. When closing a tab, the next tab is now selected faster. Replaced various timers with idle callbacks to avoid jank soon after startup.  [...]



Code Simplicity: Kindness and Code

Sat, 12 Aug 2017 16:06:24 +0000

It is very easy to think of software development as being an entirely technical activity, where humans don’t really matter and everything is about the computer. However, the opposite is actually true. Software engineering is fundamentally a human discipline. Many of the mistakes made over the years in trying to fix software development have been made by focusing purely on the technical aspects of the system without thinking about the fact that it is human beings who write the code. When you see somebody who cares about optimization more than readability of code, when you see somebody who won’t write a comment but will spend all day tweaking their shell scripts to be fewer lines, when you have somebody who can’t communicate but worships small binaries, you’re seeing various symptoms of this problem. In reality, software systems are written by people. They are read by people, modified by people, understood or not by people. They represent the mind of the developers that wrote them. They are the closest thing to a raw representation of thought that we have on Earth. They are not themselves human, alive, intelligent, emotional, evil, or good. It’s people that have those qualities. Software is used entirely and only to serve people. They are the product of people, and they are usually the product of a group of those people who had to work together, communicate, understand each other, and collaborate effectively. As such, there’s an important point to be made about working with a group of software engineers: There is no value to being cruel to other people in the development community. It doesn’t help to be rude to the people that you work with. It doesn’t help to angrily tell them that they are wrong and that they shouldn’t be doing what they are doing. It does help to make sure that the laws of software design are applied, and that people follow a good path in terms of making systems that can be easily read, understood, and maintained. 
It doesn’t require that you be cruel to do this, though. Sometimes you do have to tell people that they haven’t done the right thing. But you can just be matter of fact about it—you don’t have to get up in their face or attack them personally for it. For example, let’s say somebody has written a bad piece of code. You have two ways you could comment on this: “I can’t believe you think this is a good idea. Have you ever read a book on software design? Obviously you don’t do this.” That’s the rude way—it’s an attack on the person themselves. Another way you could tell them what’s wrong is this: “This line of code is hard to understand, and this looks like code duplication. Can you refactor this so that it’s clearer?” In some ways, the key point here is that you’re commenting on the code, and not on the developer. But also, the key point is that you’re not being a jerk. I mean, come on. The first response is obviously rude. Does it make the person want to work with you, want to contribute more code, or want to get be[...]



Cameron Kaiser: Time to sink the Admiral (or, why using the DMCA to block adblockers is a bad move)

Sat, 12 Aug 2017 01:38:00 +0000

One of the testing steps I have to do, but don't enjoy, is running TenFourFox "naked" (without my typical adblock add-ons) to get an assessment of how it functions drinking from the toxic firehose that is the typical modern ad network. (TL;DR: Power Macs run modern Web ads pretty poorly. But, as long as it doesn't crash.) Now to be sure, as far as I'm concerned sites get to monetize their pages however they choose. Heck, there are ads on this blog, provided through Google AdSense, so that I can continue to not run a tip jar. The implicit social contract is that they can stick their content behind a paywall or run ads beside it, and it's up to me/you to decide whether we're going to put up with that and read the content. If we read it, we should pony up in either eyeballs or dinero. This, of course, assumes that the ads we get served are reasonable and in a reasonable quantity. However, it's pretty hard to make money simply off per-click ads and networks with low CPM, so many sites run a quantity widely referred to as a "metric a$$ton" and the ads they run are not particularly selective. If those ads end up being fat or heavy or run scripts and drag the browser down, they consider that the cost of doing business. If, more sinisterly, they end up spying on or fingerprinting you, or worse, try to host malware and other malicious content, well, it's not their problem because it's not their ad (but don't block them all the same). What the solution to this problem is not, is begging us to whitelist them because they're a good site. If you're not terribly discriminating about what ads you burden your viewers with, then how good can your site really be? The other non-solution is to offer effectively the Hobson's choice of "ads or paywall." What, the solution to the ads you don't curate is to give you my credit card number so you can be equally as careful with that? 
So until this situation changes and sites get a little smarter about how they do sponsorship (let me call out a positive example: The Onion's sponsored content [slightly NSFW related article]), I don't have a moral problem with adblocking because really that's the only way to equalize the power dynamic. Block the ads on this blog if you want; I don't care. Click on them or not, your choice. In fact, for the Power Macs TenFourFox targets, I find an adblocker just about essential and my hat is off to those saints of the church who don't run one. Lots of current sites are molasses in January on barbiturates without it and I can only improve this problem to a certain degree. Heck, they drag on my i7 MacBook Air. What chance does my iMac G4 have? That's why this egregious abuse of statute is particularly pernicious: a company called Admiral, which operates an anti-adblocker, managed to use a DMCA request to GitHub to get the address of the site hosting their beacon image (used to determine if you're blocking them or not) removed from the EasyList adblock listing. They've admitted it, too. The legal theory, as I understand it (don't ask me to d[...]



QMO: Firefox 56 Beta 4 Testday, August 18th

Fri, 11 Aug 2017 13:56:38 +0000

Hello dear Mozillians!

We are happy to let you know that Friday, August 18th, we are organizing Firefox 56 Beta 4 Testday. We’ll be focusing our testing on the following new features: Media Block Autoplay, Preferences Search [Photon] and Photon Preferences reorg V2.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!




The Mozilla Blog: Honoring Our Friend Bassel: Announcing the Bassel Khartabil Free Culture Fellowship

Fri, 11 Aug 2017 13:55:03 +0000

To honor Bassel Khartabil’s legacy and his lasting impact on the open web, a slate of nonprofits are launching a new fellowship in his name. By Katherine Maher (executive director, Wikimedia Foundation), Ryan Merkley (CEO, Creative Commons) and Mark Surman (executive director, Mozilla) On August 1, 2017, we received the heartbreaking news that our friend Bassel (Safadi) Khartabil, detained since 2012, was executed by the Syrian government shortly after his 2015 disappearance. Khartabil was a Palestinian Syrian open internet activist, a free culture hero, and an important member of our community. Our thoughts are with Bassel’s family, now and always. Today we’re announcing the Bassel Khartabil Free Culture Fellowship to honor his legacy and lasting impact on the open web. Bassel Khartabil Bassel was a relentless advocate for free speech, free culture, and democracy. He was the cofounder of Syria’s first hackerspace, Aiki Lab, Creative Commons’ Syrian project lead, and a prolific open source contributor, from Firefox to Wikipedia. Bassel’s final project, relaunched as #NEWPALMYRA, entailed building free and open 3D models of the ancient Syrian city of Palmyra. In his work as a computer engineer, educator, artist, musician, cultural heritage researcher, and thought leader, Bassel modeled a more open world, impacting lives globally. To honor that legacy, the Bassel Khartabil Free Culture Fellowship will support outstanding individuals developing the culture of their communities under adverse circumstances. The Fellowship — organized by Creative Commons, Mozilla, the Wikimedia Foundation, the Jimmy Wales Foundation, #NEWPALMYRA, and others — will launch with a three-year commitment to promote values like open culture, radical sharing, free knowledge, remix, collaboration, courage, optimism, and humanity. As part of this new initiative, fellows can work in a range of mediums, from art and music to software and community building. 
All projects will catalyze free culture, particularly in societies vulnerable to attacks on freedom of expression and free access to knowledge. Special consideration will be given to applicants operating within closed societies and in developing economies where other forms of support are scarce. Applications from the Levant and wider MENA region are greatly encouraged. Throughout their fellowship term, chosen fellows will receive a stipend, mentorship from affiliate organizations, skill development, project promotion, and fundraising support from the partner network. Fellows will be chosen by a selection committee composed of representatives of the partner organizations. Says Mitchell Baker, Mozilla executive chairwoman: “Bassel introduced me to Damascus communities who were hungry to learn, collaborate and share. He introduced me to the Creative Commons community which he helped found. He introduced me to the open source hacker space he founded, where Linux and Mozilla and JavaScri[...]



Ehsan Akhgari: Quantum Flow Engineering Newsletter #19

Fri, 11 Aug 2017 03:10:15 +0000

As usual, I have some quick updates to share about what we’ve been up to on improving the performance of the browser in the past week or so.  Let’s first look at our progress on the Speedometer benchmark.  Our performance goal for Firefox 57 was to get within 20% of Chrome’s benchmark score on our Acer reference hardware on Win64.  Those of you who watch the Firefox Health Dashboards every once in a while may have noticed that now we are well within that target: It’s nice to see the smiley face on this chart, finally!  You can see the more detailed downward slope on the AWFY graph that shows the progress in the past couple of weeks or so (dark red dots are PGO builds, orange dots are non-PGO builds, and of course green is Chrome): The situation on Win32 is a bit worse, due to Chrome’s recent switch to clang-cl on Windows instead of MSVC, which gave them around a 30% speed boost on the 32-bit Speedometer score, but we have made progress nonetheless.  Such is the nature of tracking moving targets! The other performance aspect to have a look at again is our progress at eliminating slow synchronous IPC calls.  I last wrote about this about three weeks ago, and since then at least one major change happened: the infamous document.cookie synchronous IPC call was eliminated, so I figured it may be a good time to look at the data again. Telemetry data is laggy since it includes data from older versions of Nightly, but if you compare this to the previous chart, there should be a stark difference visible: PCookieService::Msg_GetCookieString is now a much smaller part of the overall data (at around 26.1%).  
Looking at the list of the top ten messages, the next ones in order are the usual suspects for those who have followed these newsletters for a while: some JS initiated IPC, PAPZCTreeManager::Msg_ReceiveMouseInputEvent, followed by more JS IPC, followed by PBrowser::Msg_NotifyIMEFocus, followed by even more JS IPC, followed by 2 new messages that are now surfacing as we’ve fixed the worst ones of these: PDocAccessible::Msg_SyncTextChangeEvent which is related to accessibility and the data shows it affects a relatively small number of sessions due to its low submission rate, and PContent::Msg_ClassifyLocal, which probably comes from turning the Flash plugin click-to-play by default. Now let’s look at the breakdown of synchronous IPC messages initiated from JS: The story here remains unchanged: most of the sync IPC messages we’re seeing come from legacy extensions, and there is also the contextmenu sync IPC, which has a patch pending review.  However, the picture here may start changing quite soon.  You may have seen the recent announcement about legacy extensions being disabled on Nightly starting from tomorrow, so hopefully this data (and the C++ sync IPC data) will soon start to shift to reflect more of the performance characteristics that our users on the release channel wi[...]



Mozilla Addons Blog: WebExtensions in Firefox 56

Thu, 10 Aug 2017 20:23:47 +0000

Firefox 56 landed in Beta this week, so it’s time for another update on the WebExtensions transition. Documentation for the APIs discussed here can be found on MDN Web Docs. API changes The browsingData API can now remove cookies by host. The initial implementation of browsingData has landed for Android with support for the settings and removeCookies APIs. The contextMenus API also has a few improvements. The text of the link is now included in the onClickData event and text selection is no longer limited to 150 characters. Optional permission requests can now also be triggered from context menus. An alternative, more general namespace was added, called browser.menus. It supports the same API and all existing menu contexts, plus a new one that allows you to add items to the Tools menu. You can also provide different icons for your menu items. For example: browser.menus.create({ id: "sort-tabs", title: "A-Z", contexts: ["tools_menu"], icons: { 16: "icon-16-context-menu.png", }, }); The windows API now has the ability to read and preface the title of a window, by passing titlePreface to the window object. This allows extensions to label different windows so they’re easier to distinguish. The downloads.open API now requires user interaction to be called. This mirrors the Chrome API, which also requires user interaction. You can now download a blob created in a background page. The tabs API has new printing APIs. The tabs.print, tabs.printPreview and tabs.saveAsPDF (not on Mac OS X) methods will bring up the respective print dialogs for the page. The tabs.Tab object now includes the time the tab was lastAccessed. The webRequest API can now monitor web socket connections (but not the messages) by specifying ws:// or wss:// in the match pattern. Similarly, match patterns now support moz-extension URLs; however, this only applies to the same extension. Importantly, an HTTP 302 redirection to a moz-extension page will now work. 
For example, this was a common use case for extensions that integrated with OAuth. The pageAction API can now be shown on a per-tab basis on Android. The privacy API gained two new APIs. The privacy.services.passwordSavingEnabled API allows an extension to toggle the preferences that control password saving. The privacy.websites.referrersEnabled API allows an extension to toggle the preferences that control the sending of HTTP Referrer headers. A new browserSettings API has been added, with a setting to disable the browser’s cache. We’ll use this API for similar settings in the future. In WebExtensions, we manage the changing of preferences and effects when extensions get uninstalled. This management was applied to chrome_url_overrides. The same management now prevents extensions overriding user-changed preferences. The theming API gained a reset method which can be called after an update to reset Firefox to t[...]



Firefox Test Pilot: My Summer Internship with Firefox Test Pilot

Thu, 10 Aug 2017 19:33:00 +0000

As part of my internship, I participated in a range of activities with the Test Pilot team, including in-home interviews with people who use Test Pilot experiments. This past summer I had the opportunity to work as an intern on the Firefox Test Pilot team. Upon joining the team I was informed of my summer project: a web experiment to allow for private and secure file transfers.

Firefox Test Pilot experiments tend to act as supplemental features available for the browser, but this project was different. It was necessary to move this experiment to the web because we felt it would be too restrictive to force both file senders and file recipients to use Firefox. After much reworking and refactoring, the project was finally released as Send.

Defining Standards

One of the most important things we needed to do before we started writing code was to define exactly what we meant by “private” and “secure file transfer”. Different people can have greatly varied perceptions of what constitutes a satisfactory level of privacy, so it was essential to define our standards before starting the project so as not to compromise our users’ privacy.

Our goal was to make a product that would allow users to share files anonymously and without fear that a third party could snoop on the transfer. At first we considered WebRTC to allow for peer-to-peer connections, but decided against it in the end as it wasn’t entirely reliable for larger file sizes. It also would have been a hassle for users, as it would require both the sender and recipient to keep the browser tab open for the entire duration of the transfer.

Without peer-to-peer connections, we decided to host files on Amazon’s S3. We also decided to encrypt the file using client-side cryptography libraries to prevent Mozilla or any third party from ever seeing the contents of the file. We settled on appending secret 128-bit AES-GCM keys as a hash field on a generated URL that a sender could then share with a recipient.
We would then pipe the upload of the encrypted file through our servers to an S3 bucket. The use of the hash parameter in the URL means the key is never sent to the server, and ensures that the file stays encrypted until the recipient’s browser downloads the entire file. We believe that the use of client-side encryption and decryption greatly mitigates any possible information leakage while sharing files and, as a result, would be satisfactory for almost all use cases.

We also decided on adding auto-expiry of files after twenty-four hours to prevent a user’s files from lingering online indefinitely. We felt that this approach would both provide sufficient privacy and also be seamless enough to satisfy all users of Send.

Building an MVP

I spent the next couple of weeks building a minimum viable product that we could provide to the Legal and Security teams for review. This involved multiple[...]



Gervase Markham: How One Tweet Can Ruin Your Life

Thu, 10 Aug 2017 16:20:34 +0000

This video is pretty awesome throughout, but the pinnacle is at the end:

The great thing about social media was how it gave a voice to voiceless people, but we’re now creating a surveillance society, where the smartest way to survive is to go back to being voiceless. Let’s not do that. — Jon Ronson




Mozilla Addons Blog: Upcoming Changes in Compatibility Features

Thu, 10 Aug 2017 16:00:24 +0000

Firefox 57 is now on the Nightly channel (along with a shiny new logo!). And while it isn’t disabling legacy add-ons just yet, it will soon. There should be no expectation of legacy add-on support on this or later versions. In preparation for Firefox 57, a number of compatibility changes are being implemented on addons.mozilla.org (AMO) to support this transition.

Upcoming Compatibility Changes

  • All legacy add-ons will have strict compatibility set, with a maximum version of 56.*. This is the end of the line for legacy add-on compatibility. They can still be installed on Nightly with some preference changes, but may break due to other changes happening in Firefox.
  • Related to this, you won’t be able to upload legacy add-ons that have a maximum version set higher than 56.*.
  • It will be easier to find older versions of add-ons when the latest one isn’t compatible. Some developers will be submitting ports to the WebExtensions API that depend on very recent API developments, so they may need to set a minimum version of 56.0 or 57.0. That can make it difficult for users of older versions of Firefox to find a compatible version. To address this, compatibility filters on search will be off by default. Also, we will give more prominence to the All Versions page, where older versions of the add-on are available.
  • Add-ons built with WebExtensions APIs will eventually show up higher on search rankings. This is meant to reduce instances of users installing add-ons that will break within a few weeks.
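The minimum version mentioned above is declared in the add-on’s manifest.json. A sketch of the relevant fragment, with a hypothetical add-on ID:

```json
{
  "applications": {
    "gecko": {
      "id": "myaddon@example.com",
      "strict_min_version": "57.0"
    }
  }
}
```

With strict_min_version set like this, AMO and Firefox both know the add-on requires Firefox 57 or later.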

We will be rolling out these changes in the coming weeks.

Add-on compatibility is one of the most complex AMO features, so it’s possible that some things won’t work exactly right at first. If you run into any compatibility issues, please file them here.

The post Upcoming Changes in Compatibility Features appeared first on Mozilla Add-ons Blog.




Air Mozilla: Reps Weekly Meeting Aug. 10, 2017

Thu, 10 Aug 2017 16:00:00 +0000

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.




Air Mozilla: Mozilla Science Lab August 2017 Bi-Monthly Community Call

Thu, 10 Aug 2017 15:00:00 +0000

Mozilla Science Lab August 2017 Bi-Monthly Community Call




Dzmitry Malyshau: Overhead analysis for Vulkan Portability

Thu, 10 Aug 2017 14:43:48 +0000

One of the design goals for the portability API is to keep any overhead (when translating to other APIs) to a minimum, ideally providing a zero-cost abstraction. In this article, we’ll dissect the potential sources of overhead into groups and analyze the prospects of each, suggesting possible solutions. The problem in question is very broad, but we’ll spice it with examples raised in the Vulkan Portability Initiative.

Another unit of compilation

When adding an indirection layer from inside a program, given a language with zero-cost abstractions like C++ or Rust, it is possible to have the layer completely optimized away. However, the library will be provided as a static/dynamic binary, which prevents the linker from inlining the calls. That means doubling the cost of a function invocation (as opposed to execution) compared to a native API. Solutions:

  • whole-program optimization
  • a pure header library (locks you into using C/C++; inconveniently long compile times)

Native API differences

Some aspects of the native APIs don’t exactly match. This is amplified by the flexible nature of Vulkan, which tends to provide the richest feature set compared to D3D12 and Metal. For example, Vulkan allows command buffers to be re-used, and so does D3D12. In Metal, however, it’s not directly supported. If this ability is exposed unconditionally, the Metal backend would have to record all the encoded commands on the side before translating them to the corresponding MTL*CommandEncoder interface. When the user requests to use the command buffer again, the Metal backend would have to re-encode the native command buffer on the spot, which means a considerable delay to an otherwise inexpensive operation: submitting a command buffer for execution. Solutions:

  • more granularity in device capabilities
  • pressure other platforms to add native support for missing features

Skewed idiomaticity

An API is typically associated with a certain way of thinking about and approaching the problems to solve.
Providing a Vulkan-like front end to an “alien” API would skew users toward thinking in terms of Vulkan and what is efficient in it, as opposed to the native APIs. For example, Vulkan has render sub-passes. Organizing the graphics workload into sub-passes allows tiled hardware (typically found in mobile devices) to re-use intermediate rendering results without a round trip to VRAM. This can be a big optimization, yielding up to a 50% performance increase as well as reduced power usage. No other API has sub-passes. It is straightforward to emulate them by treating each sub-pass as an independent pass. However, ignoring the fact that intermediate results go back and forth to VRAM would cause the graphics pip[...]



Hub Figuière: Status update, August 2017

Thu, 10 Aug 2017 02:46:23 +0000

Work:

In March I joined the team at eyeo GmbH, the company behind Adblock Plus, as a core developer. Among other things I'm improving the filtering capabilities.

While they are based in Cologne, Germany, I'm still working remotely from Montréal.

It is great to help make the web more user-centric.

Personal project:

I started working again on Niepce, currently implementing the file import. I also started to rewrite the back-end in Rust. The long-term goal is to move completely to Rust; this will happen in parallel with feature implementation.

This and other satellite projects are part of the great plan I have for digital photography on Linux with GNOME.

'til next time.




Nick Cameron: What the RLS can do

Thu, 10 Aug 2017 01:17:15 +0000

IDE support for Rust is one of the most requested features in our surveys and is a key part of Rust's 2017 roadmap. Here, I'm going to talk about one of the things we're doing to bring Rust support to IDEs: the RLS. Programmers can be pretty picky about their editors, so we want to support as broad a selection of editors as possible. A key step towards that goal is implementing the Rust Language Server (RLS). The RLS is a service for providing information about Rust programs. It works with multiple sources of data, primarily the Rust compiler. It communicates with editors using the Language Server Protocol (LSP) so that clients can perform actions such as 'code completion', 'jump to definition', and 'find all references'. The intention is that the RLS will support multiple clients. Any editor that wants to provide IDE-type functionality for Rust programs can use the RLS. In fact, many editors can get fairly good support by just using a generic LSP client plugin, but you get the best results by using a dedicated Rust client. We've been working on two very different clients. One is a Visual Studio Code plugin which makes VSCode a Rust IDE. The other is rustw, an experimental web app for building and exploring Rust programs. Rustw might become a useful tool in its own right or be used to browse source code in Rustdoc. We're also working on a new version of Rustdoc that uses the RLS, rather than being tightly integrated with the compiler. I plan to follow up this blog post with another going over the RLS internals, for RLS client implementors and RLS contributors. In this post, I'll cover the fun stuff: features that will improve your life as a Rust developer.

Type and docs on hover

Hover over an identifier in VSCode or rustw to see its type and documentation. You'll get a link to rustdoc and the source code for standard library types too.

Semantic highlighting

When you hover a name in rustw or click inside one in VSCode, we highlight other uses of the same name.
Because this is powered by the compiler, we can be smart about this and show exactly the right uses, for example skipping different variables with the same name.

Code completion

Code completion is where an IDE suggests variable, field, or method names for you based on what you're typing. The RLS uses Racer behind the scenes for code completion, but clients don't need to be aware of this. In the long term, the compiler should power code completion.

Jump to definition

Jump from the use of a name to where it is defined. This is a key feature for IDEs and code exploration tools. Use F12 in VSCode or a left click in rustw. This works for variables, fields, methods, functions, mod[...]
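Under the LSP, a jump-to-definition action is just a JSON-RPC request from the editor to the RLS. A sketch of the message (the file path and position are hypothetical; the method name and shape are from the LSP specification):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/definition",
  "params": {
    "textDocument": { "uri": "file:///home/user/project/src/main.rs" },
    "position": { "line": 12, "character": 7 }
  }
}
```

The RLS replies with one or more Location objects (URI plus range), and the editor opens that spot; this protocol-level uniformity is what lets any LSP-capable editor reuse the same server.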



Dave Townsend: New Firefox and Toolkit module peers in Taipei!

Wed, 09 Aug 2017 23:05:13 +0000

Please join me in welcoming three new peers to the Firefox and Toolkit modules. All of them are based in Taipei, and I believe they are our first such peers, which is very exciting as it means we now have more global coverage.

  • Tim Guan-tin Chien
  • KM Lee Rex
  • Fred Lin

I’ve blogged before about the things I expect from peers, and while I try to keep the lists up to date myself, please feel free to point out folks you think may have been passed over.




Firefox Test Pilot: Say Hi to Send 1.1.0

Wed, 09 Aug 2017 20:23:01 +0000


We’re excited to announce the arrival of Send 1.1.0. Send now supports Microsoft Edge and Safari! In addition to expanded browser support, we’ve made several other improvements:

  • You can now send files from iOS (results may vary with receiving on iOS).
  • We no longer send file hashes to the server.
  • We fixed a bug that let users accidentally cancel downloads mid-stream.
  • You can now copy to clipboard from a mobile device, and we detect if copy-to-clipboard is disabled.
  • We now ship in 36 languages!

Right now we’re working on a raft of minor fixes before moving on to larger features such as PIN-protected files and multi-file uploads. We’re hoping to maintain a steady shipping schedule in the coming weeks even though we’re losing our beloved interns. I’ll post about performance and feature improvements as they ship.


Say Hi to Send 1.1.0 was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.




Air Mozilla: The Joy of Coding - Episode 109

Wed, 09 Aug 2017 17:00:00 +0000

mconley livehacks on real Firefox bugs while thinking aloud.




Mozilla Open Innovation Team: The Mozilla Information Trust Initiative: Building a movement to fight misinformation online

Wed, 09 Aug 2017 16:33:34 +0000

Today, we are announcing the Mozilla Information Trust Initiative (MITI) — a comprehensive effort to keep the Internet credible and healthy. Mozilla is developing products, research, and communities to battle information pollution and so-called ‘fake news’ online. And we’re seeking partners and allies to help us do so. Here’s why. Imagine this: Two news articles are shared simultaneously online. The first is a deeply reported and thoroughly fact-checked story from a credible news-gathering organization. Perhaps Le Monde, the Wall Street Journal, or Süddeutsche Zeitung. The second is a false or misleading story. But the article is designed to mimic content from a credible newsroom, from its headline to its dissemination. How do the two articles fare? The first article — designed to inform — receives limited attention. The second article — designed for virality — accumulates shares. It exploits cognitive bias, belief echoes, and algorithmic filter bubbles. It percolates across the Internet, spreading misinformation. This isn’t a hypothetical scenario — it’s happening now in the U.S., in the U.K., in France, in Germany, and beyond. The Pope did not endorse a U.S. presidential candidate, nor does India’s 2000-rupee note contain a tracking device. But fabricated content, misleading headlines, and false context convinced millions of Internet users otherwise. The impact of misinformation on our society is one of the most divisive, fraught, and important topics of our day. Misinformation depletes transparency and sows discord, erodes participation and trust, and saps the web’s public benefit. In short: it makes the Internet less healthy. As a result, the Internet’s ability to power democratic society suffers greatly. This is why we’re launching MITI. We’re investing in people, programs, and projects that disrupt misinformation online. Why Mozilla? The spread of misinformation violates nearly every tenet of the Mozilla Manifesto, our guiding doctrine.
Mozilla has a long history of putting community and principles first, and devoting resources to urgent issues — our Firefox browser is just one example. Mozilla is committed to building tolerance rather than hate, and building technology that can protect individuals and the web.So we’re drawing on the unique depth and breadth of the Mozilla Network — from journalists and technologists to policymakers and scientists — to build functional products, research, and community-based solutions.Misinformation is a complex problem with roots in tech[...]



Dzmitry Malyshau: Rusty Object Notation

Wed, 09 Aug 2017 16:11:11 +0000

JavaScript: the practice-oriented language that made scripting the Web possible for millions of programmers. It grew an ecosystem of libraries and even started attacking domains seemingly independent of the Web, such as native applications (Node) and interchange formats (JSON).

There is a lot not to like in JSON, but the main issue here is the lack of semantics. JavaScript doesn’t differentiate between a map and a struct, so any other language using JSON has to suffer. If only we had an interchange format made for a semantically strong language, preferably modern and efficient… like Rust. Here comes Rusty Object Notation - RON.

RON aims to be a superior alternative to JSON/YAML/TOML/etc, while having consistent format and simple rules. RON is a pleasure to read and write, especially if you have 5+ years of Rust experience. It has support for structures, enums, tuples, homogeneous maps and lists, comments, and even trailing commas!
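To give a flavor of the notation, here is a small illustrative document (the type and field names are invented for the example; RON structs use their Rust type name, maps use braces with quoted keys, and lists use brackets):

```ron
Scene( // a struct, named after its Rust type
    materials: { // a map
        "metal": (reflectivity: 1.0),
        "plastic": (reflectivity: 0.5),
    },
    entities: [ // a list
        (name: "hero", material: "metal"),
        (name: "monster", material: "plastic"),
    ], // trailing commas are allowed
)
```

Unlike JSON, the struct/map distinction survives the round trip, so serde can deserialize this straight into strongly typed Rust values.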

We are happy to announce the release of RON library version 0.1. The implementation uses serde for convenient (de-)serialization of your precious data. It has already been accepted as the configuration format for Amethyst engine. And we are just getting started ;)

RON was designed a few years ago, for a game that is no longer in development. The idea rested peacefully until one shiny day torkleyy noticed the project and brought it to life. Now the library is perfectly usable and settles the question of a readable data format for all of my future projects, and I hope yours too!




Air Mozilla: Weekly SUMO Community Meeting August 9, 2017

Wed, 09 Aug 2017 16:00:00 +0000

This is the SUMO weekly call.




Air Mozilla: Bugzilla Project Meeting, 09 Aug 2017

Wed, 09 Aug 2017 16:00:00 +0000

The Bugzilla Project Developers meeting.




Mozilla Addons Blog: Friend of Add-ons: Santosh Viswanatham

Wed, 09 Aug 2017 15:30:20 +0000

Our newest Friend of Add-ons is Santosh Viswanatham! Santosh attended a regional event hosted by Mozilla Rep Srikar Ananthula in 2012 and has been an active leader in the community ever since. Having previously served as a Firefox Student Ambassador and Regional Ambassador Lead, he is currently a Tech Speaker and a member of the Mozilla Campus Clubs Advisory Committee, where he is helping develop an activity for building extensions for Firefox. Santosh has brought his considerable enthusiasm for open source software to the add-ons community. Earlier this year, he served a six-month term as a member of the Featured Add-ons Advisory Board, where he helped nominate and select extensions to be featured on addons.mozilla.org each month. Additionally, Santosh hosted a hackathon in Hyderabad, India, where 100 developers spent the night creating more than 20 extensions. When asked to describe his experience contributing to Mozilla, Santosh says: “It has been a wonderful opportunity to work with like-minded, incredible people. Contributing to Mozilla gave me an opportunity to explore myself and stretched my limits working around super cool technologies. I learned tons of things about technology and communities, improved my skill set, received global exposure, and made friends for a lifetime by contributing to Mozilla.” In his free time, Santosh enjoys dining out at roadside eateries, spending time with friends, and watching TV shows and movies. Congratulations, Santosh, and thank you for all of your contributions! Are you a contributor to the add-ons community or know of someone who should be recognized? Please be sure to add them to our Recognition Wiki! The post Friend of Add-ons: Santosh Viswanatham appeared first on Mozilla Add-ons Blog.[...]



Daniel Stenberg: Some things to enjoy in curl 7.55.0

Wed, 09 Aug 2017 10:01:27 +0000

In this endless stream of frequent releases, the next release isn’t terribly different from the previous one. curl’s 167th release is called 7.55.0, and while the name or number isn’t standing out in any particular way, I believe this release has a few extra bells and whistles that make it stand out a little from the regular curl releases, feature-wise. Hopefully this will turn out to be a release that becomes the new “you should at least upgrade to this version” in the coming months and years. Here are six things in this release I consider worthy of some special attention. (The full changelog.)

1. Headers from file

The command-line options that allow users to pass custom headers can now read a set of headers from a given file.

2. Binary output prevention

Invoke curl on the command line, give it a URL to a binary file, and see it destroy your terminal by sending all that gunk to the terminal? No more.

3. Target-independent headers

You want to build applications that use libcurl for different architectures, such as 32-bit and 64-bit builds, using the same installed set of libcurl headers? That didn’t use to be possible. Now it is.

4. OPTIONS * support!

Among HTTP requests, this is a rare beast. Starting now, you can tell curl to send such requests.

5. HTTP proxy use cleanup

Asking curl to use an HTTP proxy while doing a non-HTTP protocol would often behave in unpredictable ways, since it wouldn’t do CONNECT requests unless you added an extra instruction. Now libcurl will assume CONNECT operations for all protocols over an HTTP proxy unless you use HTTP or FTP.

6. Coverage counter

The configure script now supports the option --enable-code-coverage. We now build all commits done on GitHub with it enabled, run a bunch of tests, and measure the test coverage data it produces: how large a share of our source code is exercised by our tests. We push all coverage data to coveralls.io.
That’s a blunt tool, but it could help us identify parts of the project that we don’t test well enough. Right now it says we have 75% coverage. While not totally bad, it’s not very impressive either.

Stats

This release ships 56 days since the previous one. Exactly 8 weeks, right on schedule. 207 commits. This release contains 114 listed bug-fixes, including three security advisories. We list 7 “changes” done (new features, basically). We got help from 41 individual contributors who helped make this single release[...]



Cameron Kaiser: And now for several things that are completely different: Vintage Computer Festival aftermath, I pass a POWER9 kidneystone, and isindex isdead which issad

Wed, 09 Aug 2017 05:54:00 +0000

So, you slugs who didn't drag yourselves to the Computer History Museum in Mountain View for this year's Vintage Computer Festival West, here's what you didn't see (and here's what you didn't see last year). You didn't see me cram two dorm refrigerator-sized Apple servers and a CRT monitor into my Honda Civic, you didn't see my Apple Network Server exhibit, complete with a Shiner HE prototype and twin PowerBook 2300 Duos and an Outbound notebook serving as clients, you didn't see a functioning Xerox Alto, you didn't see SDF's original AT&T 3B2, you didn't see Bil Herd, Leonard Tramiel and other old Commodore luminaries talking about then and now, you didn't see a replica "CADET" IBM 1620, "just as it was" in 1959 (the infamous system that used lookup tables for addition rather than a proper adder, hence the acronym's alternative expansion as "Can't Add, Doesn't Even Try"), you didn't see a JLPGA PowerBook 170 signed by John Sculley, you didn't see a prototype dual G4 PowerBook, you didn't see a prototype Mac mini with an iPod dock (and an amusing FAIL sticker), you didn't see components from the Cray-1 supercomputer, you didn't see this 6502-based astrology system in the consignment section, of the same model used by Nancy Reagan's astrologer Joan Quigley, and you didn't see me investigate this Gbike parked out front, possibly against company policy. You could have, if you had come. But now it's too late. Try again next year. But what you still have a chance to see is your very own Talos II POWER9 workstation under your desk, because preorders opened today. Now, a reminder: I don't work for Raptor, I don't get any money from Raptor, and I paid retail; I'm just a fairly intransigent PowerPC bigot who is willing to put my Visa card where my mouth is. 
Currently on its way to my doorstep is a two-CPU, octocore (each core is SMT-4, so that's 32 threads) Sforza POWER9 Talos II with 32GB of DDR4 ECC RAM, an AMD Radeon Pro WX7100, a 500GB NVMe SSD and an LSI 9300 8-port internal SAS controller. The system comes standard with a case, eight SAS/SATA bays, EATX motherboard, fans for each CPU, dual 1400W redundant PSUs, USB 3.0 and 2.0, RS-232, VGA, Blu-ray optical drive, dual Gigabit Ethernet, five PCIe slots (PCIe 4.0, 3 x16 and 2 x8) and a recovery disc. It runs Linux on ppc64le, which is fully supported. The total cost shipped to my maildrop with a hex driver for the high-speed fan assemblie[...]



Michael Verdi: New download and install flow for Firefox 55

Tue, 08 Aug 2017 23:39:42 +0000

It’s been quite a while (January!) since I posted an update about the onboarding work we’ve been doing. If you’ve been using Nightly or read any of the Photon Engineering newsletters, you may have seen the new user tour we’re building, but onboarding encompasses much more than that, and we shipped some important pieces in Firefox 55 today. The experiment we ran back in February (along with a follow-up in May) went really well*. We had 4 important successes:

  • The changes to the installer resulted in 8% more installs (that’s unheard of!).
  • We retained 2.4% more of the people who went through our new experience (combined with the installer change, that means 10.6% more people using Firefox).
  • Ratings for the new flow were on par with ratings of the existing flow. In addition, in user research, participants responded positively to the art on the new download page and installer, and some were delighted by the animation on the firstrun page. “I thought it was really cute. Especially the little sunrise at the beginning. That was precious. I thought it was kind of ingenious. It kind of implied that you’re using a product that’s pulling you into the light. Something like that. It was a cute little interactive feature which I really enjoyed.” – Research participant
  • Changing the /firstrun page to a sign-in flow instead of a sign-up flow resulted in a 14.8% increase in people ending up with a second device connected to sync (which is the whole point of sync).

So today with Firefox 55 we shipped a new streamlined installer, we moved the default-browser ask to the second session, and we now open the privacy notice in a second tab instead of displaying a bottom notification bar. These changes join the new download and firstrun pages that shipped 2 weeks ago. Here’s a quick video of Firefox 55 in action. Planet Mozilla viewers – you can watch this video on YouTube (1 min.).
It is not an easy feat to build a whole new flow that cuts a swath across internal organizations and I’m incredibly proud of the work our team did to get here. And there’s a lot more to come (like that new user tour) that I’ll outline in another post. *We weren’t able to properly test the automigration feature (automatically importing your stuff from another browser) back in February because of underlying performance issues that we discovered in the migration tool. We fixed many of [...]



The Mozilla Blog: The Mozilla Information Trust Initiative: Building a movement to fight misinformation online

Tue, 08 Aug 2017 22:36:02 +0000

Today, we are announcing the Mozilla Information Trust Initiative (MITI)—a comprehensive effort to keep the Internet credible and healthy. Mozilla is developing products, research, and communities to battle information pollution and so-called ‘fake news’ online. And we’re seeking partners and allies to help us do so. Here’s why. Imagine this: Two news articles are shared simultaneously online. The first is a deeply reported and thoroughly fact-checked story from a credible news-gathering organization. Perhaps Le Monde, the Wall Street Journal, or Süddeutsche Zeitung. The second is a false or misleading story. But the article is designed to mimic content from a credible newsroom, from its headline to its dissemination. How do the two articles fare? The first article—designed to inform—receives limited attention. The second article—designed for virality—accumulates shares. It exploits cognitive bias, belief echoes, and algorithmic filter bubbles. It percolates across the Internet, spreading misinformation. This isn’t a hypothetical scenario—it’s happening now in the U.S., in the U.K., in France, in Germany, and beyond. The Pope did not endorse a U.S. presidential candidate, nor does India’s 2000-rupee note contain a tracking device. But fabricated content, misleading headlines, and false context convinced millions of Internet users otherwise. The impact of misinformation on our society is one of the most divisive, fraught, and important topics of our day. Misinformation depletes transparency and sows discord, erodes participation and trust, and saps the web’s public benefit. In short: it makes the Internet less healthy. As a result, the Internet’s ability to power democratic society suffers greatly. This is why we’re launching MITI. We’re investing in people, programs, and projects that disrupt misinformation online. Why Mozilla? The spread of misinformation violates nearly every tenet of the Mozilla Manifesto, our guiding doctrine.
Mozilla has a long history of putting community and principles first, and devoting resources to urgent issues—our Firefox browser is just one example. Mozilla is committed to building tolerance rather than hate, and building technology that can protect individuals and the web. So we’re drawing on the unique depth and breadth of the Mozilla Network—from journalists an[...]



Princi Vershwal: Getting into Outreachy : An open source internship program.

Tue, 08 Aug 2017 19:09:34 +0000

What is Outreachy?

Outreachy is a wonderful initiative for women and people from groups underrepresented in free and open source software to get involved. If you are new to open source and searching for an internship that can boost your confidence in open source, Outreachy would be a great start for you. Outreachy interns work on a project for an organization under the supervision of a mentor, for 3 months. Various open source organizations (e.g. Mozilla, GNOME, Wikimedia, and the Linux kernel, to name a few) take part in the Outreachy program. It is similar to the Google Summer of Code program, but one difference is that participation isn’t limited to just students. Another major difference is that it happens twice a year: there are both summer and winter rounds. So you don’t have to wait an entire year; you can start contributing anytime and prepare for the next round, which would be some months later.

My involvement with Outreachy

Before Outreachy, I had done web and Android development projects in my college, but I was new to the huge world of open source. My first encounter with open source was in November last year, and at that time Outreachy was nowhere in my mind. I heard about the program through a college senior who had participated in Outreachy earlier. I decided to participate in the coming round and started solving good-first-bugs. The application period itself gave me a lot of confidence in my skills and work as a developer. I enjoyed it so much that I used to spend my whole day solving bugs here and there, or just reading blogs about the program or the participating organizations. Finally, there was the result day, and I was selected for an internship at Mozilla for round 14. I am currently working on Push Notifications for Signin Confirmation in Firefox Accounts. I am really enjoying my work.
It is super exciting!!

Applying for Outreachy?

If you are planning to apply for the next round of Outreachy, here’s some advice that I can offer:

Start early. It is always better to know what is coming up. Try to explore as much as you can before the organizations and projects are announced. If you are a beginner, read about Outreachy and previously participating organizations, and start making contributions. You will learn a lot while contributing.

Choose your project/organization wisely. Once the organizations are [...]



Hacks.Mozilla.Org: Firefox 55: first desktop browser to support WebVR

Tue, 08 Aug 2017 13:01:03 +0000

WebVR Support on Desktop

Firefox on Windows is the first desktop browser to support the new WebVR standard (and macOS support is in Nightly!). As the originators of WebVR, Mozilla wanted it to embody the same principles of standardization, openness, and interoperability that are hallmarks of the Web, which is why WebVR works on any device: Vive, Rift, and beyond. To learn more, check out vr.mozilla.org, or dive into A-Frame, an open source framework for building immersive VR experiences on the Web.

New Features for Developers

Firefox 55 supports several new ES2017/2018 features, including async generators and the rest/spread ("...") operator for objects:

let a = { foo: 1, bar: 2 };
let b = { bar: 'two' };
let c = { ...a, ...b }; // { foo: 1, bar: 'two' }

MDN has great documentation on using ... with object literals or for destructuring assignment, and the TC39 proposal also provides a concise overview of this feature. Over in DevTools, the Network panel now supports filtering results with queries like "status-code:200". There are also new, optional columns for cookies, protocol, scheme, and more that can be hidden or shown inside the Network panel, as seen in the screenshot above.

Making Firefox Faster

We’ve implemented several new features to keep Firefox itself running quickly:

New installations of Firefox on Windows will now default to the more stable and secure 64-bit version. Existing installations will upgrade to 64-bit with our next release, Firefox 56.

Restoring a session or restarting Firefox with many tabs open is now an order of magnitude faster. For reasons unknown, Dietrich Ayala has a Firefox profile with 1,691 open tabs. With Firefox 54, starting up his instance of Firefox took 300 seconds and 2 GB of memory. Today, with Firefox 55, it takes just 15 seconds and 0.5 GB of memory. This improvement is primarily thanks to the tireless work of an external contributor, Kevin Jones, who virtually eliminated the fixed costs associated with restoring tabs.

Users can now adjust Firefox’s number of content processes from within Preferences. Multiple content processes debuted in Firefox 54, and allow Firefox to take better advantage of modern, multi-core CPUs, while still being respectful of RAM utilization.

Firefox now uses its built-in Tracking Protection li[...]
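The article shows object spread in literals; the rest side of the same syntax also works in destructuring assignment. A quick sketch (the variable names here are illustrative, not from the article):

```javascript
// Object spread merges sources left to right; later keys win,
// similar to Object.assign but without mutating anything.
const defaults = { theme: 'light', fontSize: 12 };
const userPrefs = { theme: 'dark' };
const settings = { ...defaults, ...userPrefs }; // { theme: 'dark', fontSize: 12 }

// Object rest in destructuring collects the remaining properties:
const { theme, ...rest } = settings;
console.log(theme); // 'dark'
console.log(rest);  // { fontSize: 12 }
```

Because spread copies only own enumerable properties, it is handy for building a modified copy of an object without touching the original.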



The Mozilla Blog: Firefox Is Better, For You. WebVR and new speedy features launching today in Firefox

Tue, 08 Aug 2017 12:59:59 +0000

Perhaps you’re starting to see a pattern: we’re working furiously to make Firefox faster and better than ever. And today we’re shipping a new release that’s our best yet, one that introduces exciting, empowering new technologies for creators as well as improving the everyday experience for all Firefox users. Here’s what’s new today:

WebVR opens up a whole new world for the WWW

On top of Firefox’s new super-fast multi-process foundation, today we’re launching a breakthrough feature that expands the web to an entirely new experience. Firefox for Windows is the first desktop browser to support WebVR for all users, letting you experience next-generation entertainment in virtual reality. WebVR enables developers and artists to create web-based VR experiences you can browse to with Firefox. So whether you’re a current Oculus Rift or HTC Vive owner, or still deciding when you’re going to take the VR leap, Firefox can get you your VR fix faster. Once you find a web game or app that supports VR, you can experience it with your headset just by clicking the VR goggles icon visible on the web page. You can navigate and control VR experiences with handset controllers and your movements in physical space. For a look at what WebVR can do, check out this sizzle reel (retro intro intended!). If you’re ready to try out VR with Firefox, a growing community of creators has already been building content with WebVR. Visit vr.mozilla.org to find some experiences we recommend, many made with A-Frame, an easy-to-use WebVR content creation framework made by Mozilla. One of our favorites is A Painter, a VR painting experience. None of this would have been possible without the hard work of the Mozilla VR team, who collaborated with industry partners, fellow browser makers, and the developer community to create and adopt the WebVR specification. If you’d like to learn more about the history and capabilities of WebVR, check out this Medium post by Sean White.

Performance Panel: fine-tune browser performance

Our new multi-process architecture allows Firefox to easily handle complex websites, particularly when you have many of them loaded in tabs. We believe we’ve struck a good balance for most computers, but for those of you who are tinkerers[...]



This Week In Rust: This Week in Rust 194

Tue, 08 Aug 2017 04:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Revisiting Rust’s modules, part 2.
RustFest 2017: Supporter tickets on sale, diversity ticket applications open.
JetBrains announces official support for the Rust plugin for IntelliJ IDEA, CLion, and other JetBrains IDEs.
Building a mobile app in Rust and React Native, part 1.
Asynchronous Rust: complaints & suggestions.
A Stratego interpreter in Rust.
Building a faster interpreter for an old language in Rust.
Compile-time safety is for everybody. Follow-up to Ownership is theft.
Fearless concurrency with hazard pointers.
Scrapmetal — Scrap your Rust boilerplate.
Rust: Optimising decoder experience. Follow-up to Rust: Not so great for codec implementing.
This week in Rust docs 67.

Crate of the Week

This week's crate is aesni, a crate providing a Rust AES (Rijndael) block cipher implementation using AES-NI. Thanks to newpavlov for the suggestion. Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick up and get started! Some of these tasks may also have mentors available; visit the task page for more information.

PumpkinDB: Rust nightly after 2017-06-20 affects benchmarks negatively. (Discuss here.)
[easy] gimli: Improve error handling in dwarfdump. gimli is a lazy, zero-copy parser for the DWARF debugging format.
wayland-window: Add control buttons.
wayland-window: Make borders prettier.
[doc] lyon: API guidelines: methods on collections that produce iterators follow the iter, iter_mut, into_iter conventions. Lyon is a GPU-based 2D graphics rendering engine in Rust.
[doc] lyon: API guidelines: ad-hoc conversions follow as_, to_, into_ conventions.
[doc] lyon: API guideline[...]



Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - August 8, 2017

Tue, 08 Aug 2017 00:00:00 +0000

Here’s what happened on the MozMEAO SRE team from August 1st to August 8th.

Current work

MDN Migration to AWS

Our goal is to pilot a read-only maintenance mode with Kubernetes-hosted MySQL. For production, there’s some work related to MySQL custom collations that needs to be resolved before we move to AWS RDS. More on this in coming weeks. We’ve implemented Terraform automation for Redis, memcached, and EFS for use in our Portland Kubernetes cluster. The remaining httpd rewrites have been implemented in Django, which allows us to move from Apache httpd in SCL3 to an all-Django deployment in Kubernetes. We’re going to be storing static samples, diagrams, and presentations in a shared EFS persistent volume, and there’s work in progress on an automated backup solution to Amazon S3. Additionally, we’re working on a solution to synchronize content from SCL3 to S3 to prep for stage/production environments. This week we’ll be evaluating hosted Elasticsearch from elastic.co.

Virginia and EUW cluster decommissioning

The Virginia (Kubernetes 1.5/Deis Workflow) and Ireland (Fleet/Deis 1) clusters have both been decommissioned. Some cleanup work remains, including removing references from documentation and cleaning up credentials.

Upcoming

Portland Deis 1 cluster decommissioning

The Deis 1 cluster in Portland is tentatively scheduled to be decommissioned next week.

Links

GitHub project tracking SRE work
How MozMEAO SREs work
Weekly SRE meeting notes [...]



Mozilla Marketing Engineering & Ops Blog: Kuma Report, July 2017

Tue, 08 Aug 2017 00:00:00 +0000

Here’s what happened in July in Kuma, the engine of MDN Web Docs:

Shipped the new design to all users
Shipped the sample database
Shipped tweaks and fixes

Here’s the plan for August:

Continue the redesign and interactive examples
Update localization of macros
Establish maintenance mode in AWS

Done in July

Shipped the New Design to All Users

In June, we revealed the new MDN Web Docs design to beta testers. In July, Stephanie Hobson and Schalk Neethling fixed many bugs, adjusted styles, shipped the homepage redesign, and answered a lot of feedback. The new design was shipped to all MDN Web Docs users on July 25, and the old design files were retired. The redesign was a big change, with some interesting problems that called for creative solutions. For details, see Stephanie’s blog post, The MDN Redesign “Behind the Scenes”.

Shipped the Sample Database

The sample database project, started in May 2016, finally shipped in July. Data is an important part of Kuma development. With just the code and backing services you get the home page, and not much else. To develop features or test changes, you often need wiki pages, historical revisions, waffle flags, constance settings, tags, search topics, users, and groups. Staff developers could download a 2 GB anonymized production database, wait 30 minutes for it to load, and then they would have a useful dev environment. Contributors had to manually copy data from production, and usually didn’t bother. The sample database has a small but representative data set, suitable for 90% of development tasks, and takes less than a minute to download and install. To keep it small, the sample database doesn’t have all the data on MDN. There are now scraping tools for adding more production data to your development database. This is especially useful for development and testing of KumaScript macros, which often require specific pages.
Finally, integration testing is challenging because non-trivial testing requires some known data to be present, such as specific pages and editor accounts. Now, a testing deployment can combine new code with the sample database, and automated browser-based tests can verify new and [...]



Air Mozilla: Mozilla Weekly Project Meeting, 07 Aug 2017

Mon, 07 Aug 2017 18:00:00 +0000

The Monday Project Meeting




Hacks.Mozilla.Org: WebVR for All Windows Users

Mon, 07 Aug 2017 14:53:45 +0000

With the release of Firefox 55 on August 8, Mozilla is pleased to make WebVR 1.1 available for all 64-bit Windows users with an Oculus Rift or HTC VIVE headset. Since we first announced this feature two months ago, we’ve seen tremendous growth in the tooling, art content, and applications being produced for WebVR – check out some highlights in this showcase video. Sketchfab also just announced support for exporting their 3D models into the glTF format, with over 100,000 models available for free download under Creative Commons licensing, so it’s easier to bring high-quality art assets into your WebVR scenes with libraries such as three.js and Babylon.js and know that they will just work. They are also one of the first sites to take advantage of WebVR to make an animated short, and they highlight the openness of URLs, which support link traversal to build awesome in-VR experiences within web content. The growth in numbers of new users having their first experiences with WebVR content has been phenomenal as well. In the last month, we have seen over 13 million uses of the A-Frame library, started here at Mozilla to make it easier for web developers, designers, and people of all backgrounds to create WebVR content. We can’t wait to see what you will build with WebVR. Please show off what you’re doing by tweeting to @MozillaVR or saying hi in the WebVR Slack. Stay tuned for an upcoming A-Frame contest announcement with even more opportunities to learn, experiment, and get feedback![...]
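For developers curious what WebVR 1.1 support looks like in code, here is a minimal sketch of headset detection using navigator.getVRDisplays(). It is written as a plain function (the function and canvas names are my own, for illustration) so the logic can also be exercised outside a browser; note that actually presenting to a headset must be triggered from a user gesture:

```javascript
// Minimal sketch: enumerate WebVR 1.1 displays and pick the first one.
// `nav` is a navigator-like object; in a page you would pass the global navigator.
function pickFirstDisplay(nav) {
  // navigator.getVRDisplays() resolves to an array of VRDisplay objects.
  if (!nav || typeof nav.getVRDisplays !== 'function') {
    return Promise.resolve(null); // WebVR not available in this browser
  }
  return nav.getVRDisplays().then(displays =>
    displays.length > 0 ? displays[0] : null
  );
}

// In a real page (hypothetical usage):
// pickFirstDisplay(navigator).then(display => {
//   if (display) {
//     console.log('Found headset:', display.displayName);
//     // Presentation must start from a user gesture, e.g. a click handler:
//     // display.requestPresent([{ source: myWebGLCanvas }]);
//   }
// });
```

A-Frame wraps all of this (and the render loop) for you, which is why it is the recommended starting point for most WebVR content.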



Mic Berman: What do you want for your life? Knowing oneself

Mon, 07 Aug 2017 14:52:28 +0000

Socrates

There are many wiser than me who have offered knowing yourself as a valuable pursuit that brings great rewards. Here are a few of my favourite quotes on why to do this:

“I can teach anybody how to get what they want out of life. The problem is that I can’t find anybody who can tell me what they want.” – Mark Twain

“If you know the enemy and know yourself you need not fear the results of a hundred battles.” – Sun Tzu

“It’s a helluva start, being able to recognize what makes you happy.” – Lucille Ball

“The searching-out and thorough investigation of truth ought to be the primary study of man.” – Cicero

This is how I invest in knowing myself; I hope it inspires you to create your own practice.

I spend time understanding my motivations, my values in action or inaction, and my triggers. I leverage my coach to deconstruct situations that were particularly difficult or rewarding, where I’m overwhelmed by emotion and don’t feel I can think rationally. I check in to get crystal clear on what is going on, how I am feeling, what the trigger(s) were, and how I will be at choice in action going forward. I challenge myself in areas I want to understand more about by reading, going to lectures, and sharing honestly with learned or experienced friends. I keep a daily journal, particularly around areas of my life I want to change or improve, like being on time and creating sufficient time in my day for reflection. I’ve long run my calendar to ‘maximize time and service’, i.e. every available minute is in meetings, working on a project, etc. This is not only unsustainable for me, it doesn’t leave me any room for the unexpected and, more importantly, any opportunity to reflect on what may have just happened or prepare for what or whom I may be seeing next. This is not fair to me nor to the people I work with. [...]



Mic Berman: How are you taking care of yourself?

Mon, 07 Aug 2017 14:42:56 +0000

The leaders I coach drive themselves and their teams to great achievements, are engaged in what they do, love their work, and bring passion and compassion to how they work for their teams and customers. They face tough situations: impossible-seeming deadlines or goals, difficult conversations, constant re-balancing of work-life priorities, and crazy business scenarios we’ve never faced before. Their days can be both energizing and completely draining. And each day they face those choices and predicaments, at times with full grace and at others with total foolishness. Along the way I hear and offer the questions: How are you taking care of yourself? How will you rejuvenate? How will you maintain balance? I ask these questions of the leaders I work with so that they can keep driving their goals, over-achieving each day, and showing up for the important people in their lives. :)

I focus on three ways to do this myself.

Knowing myself: spending time to understand and check in with my values, triggers, and motivations.

Doing a daily practice: I’ve created a daily and weekly practice that touches on my mind, body, and spirit. This discipline and evolving practice keeps me learning, present, and ‘in balance’.

Being discerning about my influences: choosing the people, experiences, and beauty that influence my life, and what’s important about that today, this week, month, or year. [...]



Shing Lyu: Porting Chrome Extension to Firefox

Mon, 07 Aug 2017 06:37:22 +0000

Edit: Andreas from the Mozilla Add-on team points out a few errors; I’ll keep them here until I can inline them into the post:

Do NOT create a new version of the extension on AMO; upload and replace your legacy extension using the same listing.
The user drop is related to https://blog.mozilla.org/addons/2017/06/21/upcoming-changes-usage-statistics/
The web-ext run should work without an ID
strict_min_version is not mandatory

Three years ago, I wrote FocusBlocker to help me focus on my master’s thesis. It’s basically a website blocker that stops me from checking Facebook every five minutes. But it is different from other blockers, like LeechBlock, that require you to set a fixed schedule. FocusBlocker lets you set a quota, e.g. I can browse 10 minutes of Facebook, then it is blocked for 50 minutes. So as long as you have remaining quota, you can check Facebook anytime. I’m glad that other people find it useful, and I even got my first donation through AMO from happy users. Since this extension serves my need, I’m not actively maintaining it or adding new features. But I was aware of Firefox’s transition from the legacy Add-on SDK to the WebExtension API. So before the WebExtension API was fully available, I started to migrate the extension to Chrome’s extension format. But I didn’t get the time to actually migrate it back to Firefox, until a user emailed me asking for a WebExtension version. I looked into the statistics: the daily active user count had dropped from ~1000 to ~300. That’s when I rolled up my sleeves and actually migrated it in one day. Here is how I did it and what I’ve learned from the process.

What needs to be changed

To evaluate the scope of the work, we first need to look at which APIs I used. The Chrome version of FocusBlocker uses three main APIs:

chrome.tabs: to monitor new tabs opening and actually block existing tabs.
chrome.alarms: to set timers for blocking and unblocking.
chrome.storage.sync: to store the settings and persist the timer across browser restarts.
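The quota idea described above (browse N minutes, then block for M minutes) can be sketched in plain JavaScript, independent of any extension API. To be clear, all names here are hypothetical illustrations of the idea, not the actual FocusBlocker code:

```javascript
// Hypothetical sketch of FocusBlocker-style quota logic: allow `quotaMs`
// of browsing, then block for `blockMs`. Timestamps are epoch milliseconds.
function makeQuotaTracker(quotaMs, blockMs) {
  let usedMs = 0;       // browsing time consumed in the current cycle
  let blockedUntil = 0; // timestamp at which the current block (if any) ends

  return {
    // Record `ms` of browsing at time `now`; start a block once the quota is spent.
    use(now, ms) {
      if (now < blockedUntil) return; // already blocked, nothing to record
      usedMs += ms;
      if (usedMs >= quotaMs) {
        blockedUntil = now + blockMs;
        usedMs = 0; // quota resets for the next cycle
      }
    },
    isBlocked(now) {
      return now < blockedUntil;
    },
  };
}

// Example: 10 minutes of browsing, then a 50-minute block.
const tracker = makeQuotaTracker(10 * 60 * 1000, 50 * 60 * 1000);
```

In a real extension this state machine would be driven by chrome.tabs events, scheduled with chrome.alarms, and persisted with chrome.storage.sync so the timer survives browser restarts.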
It’s nice that these APIs are all suppor[...]