Mon, 24 Oct 2016 00:00:00 +0000
The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection), but it can be quite problematic from a privacy perspective. Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.

Description

In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script.

There are, however, a few limitations to this simplified explanation. First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string, which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will, however, include a Referer header in HTTPS-to-HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.

Secondly, using the new Referrer Policy specification, web developers can override the default behaviour for their pages, including on a per-element basis. This can be used both to increase and to reduce the amount of information present in the referrer.

Legitimate Uses

Because the Referer header has been around for so long, a number of techniques rely on it. Armed with the Referer information, analytics tools can figure out where website traffic comes from and how users are navigating the site.

Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject it if the request originated from a different website.

It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (i.e. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).

Problems with the Referrer

Unfortunately, this header also creates significant privacy and security concerns. The most obvious one is that it leaks part of your browsing history to sites you visit, as well as to all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way.

These leaks can also expose private, personally identifiable information when it is part of the query string. One of the most high-profile examples is the accidental leakage of user searches by healthcare.gov.

Solutions for Firefox Users

While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers.

In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:

- 0 to never send the header
- 1 to send the header only when clicking on links and similar elements
- 2 (default) to send the header on all requests (e.g. images, links, etc.)

It's also possible to put a limit on the maximum amount of information that the header will contain by setting network.http.referer.trimmingPolicy to:

- 0 (default) to send the full URL
- 1 to send the URL without its query string
- 2 to only send the scheme, host and port

or by using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to restrict only the contents of referrers attached to cross-origin requests.
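As a rough illustration of what those trimmingPolicy values mean, here is a minimal Python sketch (not Firefox's actual implementation, just the observable effect of each setting on the referrer a server would receive):

```python
from urllib.parse import urlsplit, urlunsplit

def trim_referrer(url: str, policy: int) -> str:
    """Mimic the effect of network.http.referer.trimmingPolicy.

    0 = send the full URL
    1 = send the URL without its query string
    2 = send only the scheme, host and port
    """
    parts = urlsplit(url)
    if policy == 0:
        return url
    if policy == 1:
        # Keep scheme, host and path; drop query string and fragment.
        return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    if policy == 2:
        # Origin only: scheme, host and port.
        return urlunsplit((parts.scheme, parts.netloc, "/", "", ""))
    raise ValueError("unknown trimming policy")

url = "https://example.com/search?q=secret"
print(trim_referrer(url, 1))  # → https://example.com/search
print(trim_referrer(url, 2))  # → https://example.com/
```

Note how policy 2 would have prevented the healthcare.gov-style leak: the query string containing the user's search never leaves the page.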
Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming[...]
Sun, 23 Oct 2016 06:43:50 +0000Michael Tsai recently linked to Ricardo Mori’s lament on the unfashionable state of the Mac, quoting the following passage: Having a mandatory new version of Mac OS X every year is not necessarily the best way to show you’re still caring, Apple. This self-imposed yearly update cycle makes less and less sense as time goes by. Mac OS X is a mature operating system and should be treated as such. The focus should be on making Mac OS X even more robust and reliable, so that Mac users can update to the next version with the same relative peace of mind as when a new iOS version comes out. I wonder how much the mandatory yearly version cycle is due to the various iOS integration features—which, other than the assorted “bugs introduced by rewriting stuff that ‘just worked,’” seem to be the main changes in every Mac OS X (er, macOS, previously OS X) version of late. Are these integration features so wide-ranging that they touch every part of the OS and really need an entire new version to ship safely, or are they localized enough that they could safely be released in a point update? Of course, even if they are safe to release in an update, it’s still probably easier on Apple’s part to state “To use this feature, your Mac must be running macOS 10.18 or newer, and your iOS device must be running iOS 16 or newer” instead of “To use this feature, your Mac must be running macOS 10.15.5 or newer, and your iOS device must be running iOS 16 or newer” when advising users on the availability of the feature. At this point, as Mori mentioned, Mac OS X is a mature, stable product, and Apple doesn’t even have to sell it per se anymore (although for various reasons, they certainly want people to continue to upgrade). 
So even if we do have to be subjected to yearly Mac OS X releases to keep iOS integration features coming/working, it seems like the best strategy is to keep the scope of those OS releases small (iOS integration, new Safari/WebKit, a few smaller things here and there) and rock-solid (don’t rewrite stuff that works fine, fix lots of bugs that persist). I think a smaller, more scoped release also lessens the “upgrade burnout” effect—there’s less fear and teeth-gnashing over things that will be broken and never fixed each year, but there’s still room for surprise and delight in small areas, including fixing persistent bugs that people have lived with for upgrade after upgrade. (Regressions suck. Regressions that are not fixed, release after release, are an indication that your development/release process sucks or your attention to your users’ needs sucks. Neither is a very good omen.) And when there is something else new and big, perhaps it has been in development and QA for a couple of cycles so that it ships to the user solid and fully-baked. I think the need not to have to “sell” the OS presents Apple a really unique opportunity that I can imagine some vendors would kill to have—the ability to improve the quality of the software—and thus the user experience—by focusing on the areas that need attention (whatever they may be, new features, improvements, old bugs) without having to cram in a bunch of new tentpole items to entice users to purchase the new version. Even in terms of driving adoption, lots of people will upgrade for the various iOS integration features alone, and with a few features and improved quality overall, the adoption rate could end up being very similar. 
Though there’s the myth that developers are only happy when they get to write new code and new features (thus the plague of rewrite-itis), I know from working on Camino that I—and, more importantly, most of our actual developers1—got enormous pleasure and satisfaction from fixing bugs in our features, especially thorny and persistent bugs. I would find it difficult to believe that Apple doesn’t have a lot of similar-tempered developers working for it, so keeping them happy without cranking out tons of brand-new code shouldn’t be overly difficult. I just wish Apple w[...]
Fri, 21 Oct 2016 23:36:40 +0000
(image) Back in 2013, it came to light that Wget was used to copy the files Private Manning was convicted of having leaked. Around that time, the EFF made and distributed stickers saying "wget is not a crime".
Weirdly enough, it was hard to find a high-resolution version of that image today, but I'm showing you a version of it on the right side here.
In the 2016 movie Jason Bourne, Swedish actress Alicia Vikander is seen working on her laptop at around 1:16:30 into the movie and there’s a single visible sticker on that laptop. Yeps, it is for sure the same EFF sticker. There’s even a very brief glimpse of the top of the red EFF dot below the “crime” word.
Also recall the wget occurrence in The Social Network.
Fri, 21 Oct 2016 19:26:54 +0000
Today Mozilla has published a new update for its browser, this time version 49.0.2.
This release fixes small issues that some users have been experiencing, so we recommend updating.
You can get it from our Downloads section for Linux, Mac, Windows and Android, in Spanish and English.
Fri, 21 Oct 2016 18:00:00 +0000
(image) Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...
Fri, 21 Oct 2016 15:57:44 +0000
We are happy to let you know that on Friday, October 28th, we are organizing the Firefox 51.0 Aurora Testday. We'll be focusing our testing on the following features: Zoom indicator and Downloads dropmarker.
Check out the detailed instructions via this etherpad.
No previous testing experience is required, so feel free to join us in the #qa IRC channel, where our moderators will offer you guidance and answer your questions.
Join us and help us make Firefox better!
See you on Friday!
Fri, 21 Oct 2016 07:00:00 +0000
I just found, and read, Clément Delafargue’s post “Why Auto Increment Is A Terrible Idea” (via @CoreRamiro). I agree that an opaque primary key is very nice and clean from an information architecture viewpoint.
However, in practice, a serial (or monotonically increasing) key can be handy to have around. I was reminded of this during a recent situation where we (app developers & ops) needed to be highly confident that a replica was consistent before performing a failover. (None of us had access to the back end to see what the DB thought the replication lag was.)
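The kind of check described above can be sketched in a few lines: compare the highest serial id on the primary and the replica. This is a hypothetical illustration using SQLite in-memory databases to stand in for the two servers; the table and column names are made up:

```python
import sqlite3

def max_id(conn: sqlite3.Connection, table: str) -> int:
    """Return the highest serial id in a table (0 if empty)."""
    row = conn.execute(f"SELECT MAX(id) FROM {table}").fetchone()
    return row[0] or 0

# Two in-memory databases stand in for the primary and the replica.
primary = sqlite3.connect(":memory:")
replica = sqlite3.connect(":memory:")
for db in (primary, replica):
    db.execute(
        "CREATE TABLE events (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)"
    )

# The replica is missing the most recent row.
primary.executemany("INSERT INTO events (payload) VALUES (?)",
                    [("a",), ("b",), ("c",)])
replica.executemany("INSERT INTO events (payload) VALUES (?)",
                    [("a",), ("b",)])

lag = max_id(primary, "events") - max_id(replica, "events")
print(f"replica is {lag} row(s) behind")  # → replica is 1 row(s) behind
```

With an opaque (e.g. UUID) primary key there is no such cheap, ordering-based comparison; you would need the database's own replication metrics, which in the situation described were not available.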
Thu, 20 Oct 2016 21:42:01 +0000
I played a bit of devil’s advocate interviewing Monica as she has a lot of great opinions and the information to back up her point of view. It was very enjoyable seeing the current state of the web through the eyes of someone talented who just joined the party. It is far too easy for those who have been around for a long time to get stuck in a rut of trying not to break up with the past or considering everything broken as we’ve seen too much damage over the years. Not so Monica. She is very much of the opinion that we can trust developers to do the right thing and that by giving them tools to analyse their work the web of tomorrow will be great.
I’m happy that there are people like her in our market. It is good to pass the torch to those with a lot of dedication rather than those who are happy to use whatever works.(image)
Thu, 20 Oct 2016 21:23:00 +0000
Hello, SUMO Nation! We had a bit of a break, but we're back! First, there was the meeting in Toronto with the Lithium team about the migration (which is coming along nicely), and then I took a short holiday. I missed you all, it's great to be back, time to see what's up in the world of SUMO!

Welcome, new contributors!
- superzorro
- Just Sew Cris
- cell_division
- julianocristian
- sarthak_1011
- paymon23
- Marto Nieto G.
- m.emeksiz
- OCMichael
- syam3526
- jgmaldo
If you just joined us, don't hesitate – come over and say "hi" in the forums!

Contributors of the week
- All the forum supporters who tirelessly helped users out for the last week.
- All the writers of all languages who worked tirelessly on the KB for the last week.
We salute you! Don't forget that if you are new to SUMO and someone helped you get started in a nice way, you can nominate them for the Buddy of the Month!

SUMO Community meetings
- LATEST ONE: 19th of October – you can read the notes here and see the video at AirMozilla.
- NEXT ONE: happening on the 26th of October!
- If you want to add a discussion topic to the upcoming meeting agenda: Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting). Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda). If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community
- The Firefox Release Report for version 49 (including your input) has been published recently! Thanks to everyone who contributed to the document (and to the whole release). You make things happen :-)

Platform
- Check the notes from the last meeting in this document (and don't forget about our meeting recordings).
- You can also follow the migration through our public Trello board. Highlights from the last few days include a walkthrough of the basic style and layout of the site (watch the video for more details) and locking down a list of locales for the first wave of migration (more details in the L10n section). There was a test migration performed by the Lithium team… More details as we get them!
- Giorgos (our glorious technical admin for the duration of the migration) has also proposed creating a snapshot of Kitsune's state as an archive version of the site. More details to follow as we get them, but now you can be sure that none of the previously invested work is going to disappear from the web.
- Take a look at the first iteration of the upcoming post-migration community page – you can find the background and provide feedback for this here. For the front page, take a look here (background and feedback are here) – huge thanks to Joni for working on these two!
- Don't forget about the main migration thread, with the list of areas that can benefit from your input:
  - Roles
  - Gamification: ranks & badges
  - Metrics and measurement
  - Design ideas (1st wave)
  - Forum moderation
  - Shutting down Kitsune
- If you are interested in test-driving the new platform now, please contact Madalina. IMPORTANT: the whole place is a work in progress, and a ton of the final content, assets, and configurations (e.g. layout pieces) are missing.
- QUESTIONS? CONCERNS? Use the migration thread to put questions/comments about it for everyone to share and discuss.

Social
- Sierra from the Social team joined us recently for a meeting – you can reach out to her anytime!
- Want to join us? Please email Rachel and/or Madalina to get started supporting Mozilla's product users on Facebook and Twitter. We need your help! Use the step-by-step guide here.
- Take a look at some useful videos: Getting started & replying to users [...]
Thu, 20 Oct 2016 20:49:00 +0000Ars Technica is reporting an interesting attack that uses a side-channel exploit in the Intel Haswell branch translation buffer, or BTB (kindly ignore all the political crap Ars has been posting lately; I'll probably not read any more articles of theirs until after the election). The idea is to break through ASLR, or address space layout randomization, to find pieces of code one can string together or directly attack for nefarious purposes. ASLR defeats a certain class of attacks that rely on the exact address of code in memory. With ASLR, an attacker can no longer count on code being in a constant location. Intel processors since at least the Pentium use a relatively simple BTB to aid these computations when finding the target of a branch instruction. The buffer is essentially a dictionary with virtual addresses of recent branch instructions mapping to their predicted target: if the branch is taken, the chip has the new actual address right away, and time is saved. To save space and complexity, most processors that implement a BTB only do so for part of the address (or they hash the address), which reduces the overhead of maintaining the BTB but also means some addresses will map to the same index into the BTB and cause a collision. If the addresses collide, the processor will recover, but it will take more cycles to do so. This is the key to the side-channel attack. (For the record, the G3 and the G4 use a BTIC instead, or a branch target instruction cache, where the table actually keeps two of the target instructions so it can be executing them while the rest of the branch target loads. The G4/7450 ("G4e") extends the BTIC to four instructions. This scheme is highly beneficial because these cached instructions essentially extend the processor's general purpose caches with needed instructions that are less likely to be evicted, but is more complex to manage. 
It is probably for this reason the BTIC was dropped in the G5, since the idea doesn't work well with the G5's instruction dispatch groups; the G5 uses a three-level hybrid predictor which is unlike either of these schemes. Most PowerPC implementations also have a return address stack for optimizing the blr instruction. With all of these unusual features, Power ISA processors may be vulnerable to a similar timing attack, but certainly not in the same way and probably not as predictably, especially on the G5 and later designs.)

To get around ASLR, an attacker needs to find out where the code block of interest actually got moved to in memory. Certain attributes make kernel ASLR (KASLR) an easier nut to crack. For performance reasons, usually only part of the kernel address is randomized; in open-source operating systems this randomization scheme is often known, and the kernel is always loaded fully into physical memory and doesn't get swapped out. While the location it is loaded to is also randomized, the kernel is mapped into the address space of all processes, so if you can find its address in any process you've also found it in every process. Haswell makes this even easier because all of the bits the Linux kernel randomizes are covered by the low 30 bits of the virtual address Haswell uses in the BTB index, which covers the entire kernel address range and means any kernel branch address can be determined exactly.

The attacker finds branch instructions in the kernel code (such as by disassembling it) that service a particular system call, computes all the possible locations that branch could be at (this is feasible due to the smaller search space), creates a "spy" function with a branch instruction positioned to force a BTB collision by mapping to the same BTB index, executes the system call, and then executes the spy function. If the spy process (which times itself) determines its branch took longer than an average br[...]
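The collision condition at the heart of the attack can be illustrated with a few lines of Python. This is a conceptual sketch only, not a working exploit: it shows why two branches whose virtual addresses agree in the low bits used for the BTB index map to the same entry. The 30-bit width mirrors the Haswell figure quoted above; both addresses are hypothetical.

```python
# If the BTB is indexed by the low N bits of a branch's virtual address,
# any two branches that agree in those bits share a BTB entry and collide,
# which is what the "spy" function exploits to detect kernel addresses.
INDEX_BITS = 30

def btb_index(addr: int) -> int:
    """Index a branch address into the BTB using its low 30 bits."""
    return addr & ((1 << INDEX_BITS) - 1)

kernel_branch = 0xFFFF_FFFF_8100_4A20  # hypothetical kernel-space branch
spy_branch    = 0x0000_7F3B_8100_4A20  # hypothetical user-space "spy" branch

# Different full addresses, same low 30 bits: the two branches collide
# in the BTB, and resolving the collision costs extra cycles - the
# measurable timing signal the attack relies on.
print(btb_index(kernel_branch) == btb_index(spy_branch))  # → True
```

The attacker cannot read the kernel address directly; instead, the spy times itself and infers a collision (and hence the kernel branch's location) from the slower execution.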
Thu, 20 Oct 2016 17:30:00 +0000
(image) Weekly project updates from the Mozilla Connected Devices team.
Thu, 20 Oct 2016 16:00:00 +0000
(image) This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
Thu, 20 Oct 2016 10:59:58 +0000
Please join us in congratulating Mijanur Rahman Rayhan
Mijanur is a Mozilla Rep and Tech Speaker from Sylhet, Bangladesh. With his diverse knowledge, he has organized hackathons around Connected Devices.
Mijanur has proven himself a very active Mozillian through his many activities and his work with different communities. Patient and consistent in pursuing his goals, he is always ready and prepared. He showed his commitment to the Reps program, and his proactive spirit, in the last elections by running as a nominee for the Cohort position on the Reps Council.
Be sure to follow his activities as he continues the Activate series with a Rust workshop, Dive Into Rust events, Firefox Test Pilot MozCoffees, a Web Compatibility Sprint, and a Privacy and Security seminar with the Bangladesh Police!
Thu, 20 Oct 2016 10:06:32 +0000
One of the big problems with IoT devices is default passwords – here’s the list coded into the malware that attacked Brian Krebs. But without a default password, you have to make each device unique and then give the randomly-generated password to the user, perhaps by putting it on a sticky label. Again, my IoT vision post suggests a better solution. If the device’s public key and a password are in an RFID tag on it, and you just swipe that over your hub, the hub can find and connect securely to the device over SSL, and then authenticate itself to the device (using the password) as the user’s real hub, with zero configuration on the part of the user. And all of this works without the need for any UI or printed label which needs to be localized. Better usability, better security, better for the internet.(image)
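The pairing flow described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not a real protocol: the tag contents, field names, and challenge-response step are all made up for the sketch, and a real hub would pin an actual TLS session to the device's public key rather than compare fingerprint strings.

```python
import hashlib
import hmac
import secrets

def read_rfid_tag() -> dict:
    # Stand-in for swiping the device's RFID tag over the hub; the tag
    # carries the device's public-key fingerprint and per-device password.
    return {"pubkey_fingerprint": "ab12cd34", "password": "per-device-secret"}

def hub_pair(tag: dict, presented_fingerprint: str):
    # 1. Trust the key from the tag, not a CA: refuse anything else.
    if presented_fingerprint != tag["pubkey_fingerprint"]:
        raise ConnectionError("device key does not match the tag")
    # 2. The hub proves knowledge of the per-device password without
    #    sending it in the clear (a simple HMAC challenge-response).
    challenge = secrets.token_bytes(16)
    proof = hmac.new(tag["password"].encode(), challenge, hashlib.sha256)
    return challenge, proof.hexdigest()

tag = read_rfid_tag()
challenge, proof = hub_pair(tag, "ab12cd34")
print("paired with zero user configuration")
```

The point of the design is that both secrets travel over the physical swipe, so the user never types, reads, or localizes anything.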
Thu, 20 Oct 2016 09:55:39 +0000
You know that problem where you want to label a coffee pot, but you just don’t have the right label? Technology to the rescue!
Of course, new technology does come with some disadvantages compared to the old, as well as its many advantages:
And pinch-to-zoom on the picture viewer (because that’s what it uses) does mean you can play some slightly mean tricks on people looking for their caffeine fix:
And how do you define what label the tablet displays? Easy:
Seriously, can any reader give me one single advantage this system has over a paper label?(image)
Thu, 20 Oct 2016 08:32:45 +0000
Thu, 20 Oct 2016 07:25:44 +0000
One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image. Interns, and anybody who decides to start using the project (it is already functional for command line users), need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice. For X.509 use cases, such as VPN access, there are a range of choices. I recently obtained one of the SmartCard HSM cards; Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors to consider are most easily explained with a table:

|                                      | On disk | Smartcard reader without PIN-pad | Smartcard reader with PIN-pad |
| Software                             | Free/open | Mostly free/open | Proprietary firmware in reader |
| Key extraction                       | Possible | Not generally possible | Not generally possible |
| Passphrase compromise attack vectors | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Hardware or software keyloggers, phishing, user error (unsophisticated attackers) | Exploiting firmware bugs over USB (only sophisticated attackers) |
| Other factors                        | No hardware | Small, USB key form-factor | Largest form factor |

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:
- Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
- Even better if there is no wired networking either
- Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
- Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
- No hard disks required
- Having built-in SD card readers or the ability to add them easily

SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept. It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time. For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki. [...]
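With the master key kept on several SD cards, one cheap routine safeguard is to hash every copy and confirm they still agree. This is an illustrative sketch only; in practice each copy would be read from a separate physical card, and the byte strings below merely stand in for that.

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of one backup copy of the key material."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for the key material read back from three SD cards.
copies = [b"master-key-material"] * 3

digests = {digest(c) for c in copies}
if len(digests) == 1:
    print("all copies match")
else:
    print("backup copies disagree - a card may be failing")
```

Using cards from different vendors and batches (as suggested above) reduces the chance that all copies fail before such a check catches the first bad one.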
Thu, 20 Oct 2016 05:55:22 +0000We’ve spent the past two weeks asking people around the world to think about our four refined design directions for the Mozilla brand identity. The results are in and the data may surprise you. If you’re just joining this process, you can get oriented here and here. Our objective is to refresh our Mozilla logo and related visual assets that support our mission and make it easier for people who don’t know us to get to know us. A reminder of the factors we’re taking into account in this phase. Data is our friend, but it is only one of several aspects to consider. In addition to the three quantitative surveys—of Mozillians, developers, and our target consumer audience—qualitative and strategic factors play an equal role. These include comments on this blog, constructive conversations with Mozillians, our 5-year strategic plan for Mozilla, and principles of good brand design. Here is what we showed, along with a motion study, for each direction: We asked survey respondents to rate these design directions against seven brand attributes. Five of them—Innovative, Activist, Trustworthy, Inclusive/Welcoming, Opinionated—are qualities we’d like Mozilla to be known for in the future. The other two—Unique, Appealing—are qualities required for any new brand identity to be successful. Mozillians and developers meld minds. Members of our Mozilla community and the developers surveyed through MDN (the Mozilla Developer Network) overwhelmingly ranked Protocol 2.0 as the best match to our brand attributes. For over 700 developers and 450 Mozillians, Protocol scored highest across 6 of 7 measures. People with a solid understanding of Mozilla feel that a design embedded with the language of the internet reinforces our history and legacy as an Internet pioneer. The link’s role in connecting people to online know-how, opportunity and knowledge is worth preserving and fighting for. But consumers think differently. 
We surveyed people making up our target audience, 400 each in the U.S., U.K., Germany, France, India, Brazil, and Mexico. They are 18- to 34-year-old active citizens who make brand choices based on values, are more tech-savvy than average, and do first-hand research before making decisions (among other factors). We asked them first to rank-order the brand attributes most important for a non-profit organization “focused on empowering people and building technology products to keep the internet healthy, open and accessible for everyone.” They selected Trustworthy and Welcoming as their top attributes. And then we also asked them to evaluate each of the four brand identity design systems against each of the seven brand attributes. For this audience, the design system that best fit these attributes was Burst. Why would this consumer audience choose Burst? Since this wasn’t a qualitative survey, we don’t know for sure, but we surmise that the colorful design, rounded forms, and suggestion of interconnectedness felt appropriate for an unfamiliar nonprofit. It looks like a logo. Also of note, Burst’s strategic narrative focused on what an open, healthy Internet feels and acts like, while the strategic narratives for the other design systems led with Mozilla’s role in the world. This is a signal that our targeted consumer audience, while they might not be familiar with Mozilla, may share our vision of what the Internet could and should be. Why didn’t they rank Protocol more highly across the chosen attributes? We can make an educated guess that these consumers found it one-dimensional by comparison, and they may have missed the meaning of the[...]
Thu, 20 Oct 2016 00:00:00 +0000
The Rust team is happy to announce the latest version of Rust, 1.12.1. Rust is a systems programming language with a focus on reliability, performance, and concurrency. As always, you can install Rust 1.12.1 from the appropriate page on our website, or install via rustup with rustup update stable.

What’s in 1.12.1 stable

Wait… one-point-twelve-point… one? In the release announcement for 1.12 a few weeks ago, we said: “The release of 1.12 might be one of the most significant Rust releases since 1.0.” It was true. One of the biggest changes was turning on a large compiler refactoring, MIR, which re-architects the internals of the compiler. The overall process went like this:

- Initial MIR support landed in nightlies back in Rust 1.6.
- While work was being done, a flag, --enable-orbit, was added so that people working on the compiler could try it out.
- Back in October, we would always attempt to build MIR, even though it was not being used.
- A flag was added, -Z orbit, to allow users on nightly to try and use MIR rather than the traditional compilation step (‘trans’).
- After substantial testing over months and months, for Rust 1.12, we enabled MIR by default.
- In Rust 1.13, MIR will be the only option.

A change of this magnitude is huge, and important. So it’s also important to do it right, and do it carefully. This is why this process took so long; we regularly tested the compiler against every crate on crates.io, we asked people to try out -Z orbit on their private code, and after six weeks of beta, no significant problems appeared. So we made the decision to keep it on by default in 1.12. But large changes still have an element of risk, even though we tried to reduce that risk as much as possible. And so, after release, 1.12 saw a fair number of regressions that we hadn’t detected in our testing. Not all of them are directly MIR related, but when you change the compiler internals so much, it’s bound to ripple outward through everything.
Why make a point release? Now, given that we have a six-week release cycle, and we’re halfway towards Rust 1.13, you may wonder why we’re choosing to cut a patch version of Rust 1.12 rather than telling users to just wait for the next release. We have previously said something like “point releases should only happen in extreme situations, such as a security vulnerability in the standard library.” The Rust team cares deeply about the stability of Rust, and about our users’ experience with it. We could have told you all to wait, but we want you to know how seriously we take this stuff. We think it’s worth it to demonstrate our commitment to you by putting in the work of making a point release in this situation. Furthermore, given that this is not security related, it’s a good time to practice actually cutting a point release. We’ve never done it before, and the release process is semi-automated but still not completely so. Having a point release in the world will also shake out any bugs in dealing with point releases in other tooling as well, like rustup. Making sure that this all goes smoothly and getting some practice going through the motions will be useful if we ever need to cut some sort of emergency point release due to a security advisory or anything else. This is the first Rust point release since Rust 0.3.1, all the way back in 2012, and marks 72 weeks since Rust 1.0, when we established our six week release cadence along with a commitment to aggressive stability guarantees. While we’re disappointed that 1.12 had these regressions, we’re rea[...]
Wed, 19 Oct 2016 23:13:29 +0000This report is aiming to capture and explain what has happened during and after the launch of Firefox 49 on multiple support fronts: Knowledge Base and localization, 1:1 social and forum support, trending issues and reported bugs, as well as to celebrate and recognize the tremendous work the SUMO community is putting in to make sure our users experience a happy release. We have lots of ways to contribute, from Support to Social to PR; the ways you can help shape our communications program and tell the world about Mozilla are endless. For more information: [https://goo.gl/NwxLJF]

Knowledge Base and Localization

Desktop articles (Sept. 20 – Oct. 12), listing the share of "helpful" votes (English/US only), global views, and comments from dissatisfied users:
- https://support.mozilla.org/en-US/kb/hello-status: 76-80% helpful, 93,871 views. "No explanation of why it was removed."
- https://support.mozilla.org/en-US/kb/firefox-reader-view-clutter-free-web-pages: 61-76% helpful, 8,625 views. No comments.
- https://support.mozilla.org/en-US/kb/html5-audio-and-video-firefox: 36-71% helpful, 11,756 views. "Didn't address Firefox not playing YouTube tutorials"
- https://support.mozilla.org/en-US/kb/your-hardware-no-longer-supported: 70-75% helpful, 5,147 views. "Please continue to support Firefox for Pentium III. It is not that hard to do." "What about those who can't afford to upgrade their processors?"

Android (Sept. 20 – Oct. 12):
- https://support.mozilla.org/en-US/kb/whats-new-firefox-android: 68% helpful, 292 views. No comments.

Localization coverage (top 10 locales / top 20 locales):

Desktop (Sept. 20 – Oct. 12):
- https://support.mozilla.org/en-US/kb/hello-status: 100% / 86%
- https://support.mozilla.org/en-US/kb/firefox-reader-view-clutter-free-web-pages: 100% / 81%
- https://support.mozilla.org/en-US/kb/html5-audio-and-video-firefox: 100% / 81%
- https://support.mozilla.org/en-US/kb/your-hardware-no-longer-supported: 100% / 81%

Android (Sept. 20 – Oct. 12):
- https://support.mozilla.org/en-US/kb/whats-new-firefox-android: 100% / 71%

Support Forum Threads

Great teamwork between some top contributors:
- Firefox 49 won't start after installation: Noah and Philipp both narrowed down the cause of Firefox not starting to the build where the issue with Kaspersky started, and filed bug 1305436 below
- Every Bookmark Displayed (Solved)
- Missing mozglue.dll when launching Firefox: updating AVG gets rid of the missing mozglue.dll error
- Firefox crashes: FredMcD did a great job restoring bookmarks
- How to remove "Recently bookmarked" (377 views)

Please also keep in mind Firefox updates that are not legit: Example of fake update for Firefox

Bugs Created from Forum threads – SUMO Community:
- [Bug 1305436] Firefox 49 won't start after installation
- [Bug 1304848] Users report Firefox is no longer launching after the 49 update with a mozglue.dll missing error instead (Contributed to)
- [Bug 1304360] Firefox 49 showing graphics artifacts with HWA enabled

Army Of Awesome (by Stefan Costen - Costenslayer)

My thanks goes out to all contributors for their help in supporting everyone, from crashes (which can be difficult and annoying) to people thanking us. All of your hard work has been noticed and is much appreciated. Thanks also to Amit Roy (twitter: amitroy2779) for helping users every day.

Social Support Highlights

Brought to you by Sprinklr. Total active contributors in program: ~16. Top 12 contributors by engagements: Noah 103, Magdno 69, Daniela 28, Andrew 25, Geraldo 10, Cynthia 10, Marcelo 4, Jhonatas 2, Thiago 2, Joa Paulo 1. Number of Replies: Trending issues: Inbound, what people[...]
Wed, 19 Oct 2016 22:11:49 +0000
(image) Mozilla Executive Chair Mitchell Baker's address at Singularity University's 2016 Closing Ceremony.
Wed, 19 Oct 2016 22:10:00 +0000
(image) Mozilla Executive Chair Mitchell Baker's address at IEEE Global Connect
Wed, 19 Oct 2016 21:33:42 +0000One of the most underappreciated features of Firefox’s URL bar and its bookmark system is its support for custom keyword searches. These let you create special bookmarks so that when you type a keyword followed by other text, that text is inserted into a URL identified uniquely by the keyword, and that URL gets loaded. This lets you type, for example, “quote aapl” to get a stock quote on Apple Inc. You can check out the article I linked to previously (and here, as well, for good measure) for details on how to actually create and use keyword searches. I’m not going to go into details on that here. What I am going to do is share a few keyword searches I’ve configured that I find incredibly useful as a programmer and as a writer on MDN.

For web development

Here are the search keywords I use the most as a web developer:
- if: Opens an API reference page on MDN given an interface name. https://developer.mozilla.org/en-US/docs/Web/API/%s
- elem: Opens an HTML element’s reference page on MDN. https://developer.mozilla.org/en-US/docs/Web/HTML/Element/%s
- css: Opens a CSS reference page on MDN. https://developer.mozilla.org/en-US/docs/Web/CSS/%s
- fx: Opens the release notes for a given version of Firefox, given its version number. https://developer.mozilla.org/en-US/Firefox/Releases/%s
- mdn: Searches MDN for the given term(s) using the default filters, which generally limit the search to include only pages most useful to Web developers. https://developer.mozilla.org/en-US/search?q=%s
- mdnall: Searches MDN for the given term(s) with no filters in place. https://developer.mozilla.org/en-US/search?q=%s&none=none

For documentation work

When I’m writing docs, I actually use the above keywords a lot, too. But there are a few more that I get a lot of use out of:
- bug: Opens the specified bug in Mozilla’s Bugzilla instance, given a bug number. https://bugzilla.mozilla.org/show_bug.cgi?id=%s
- bs: Searches Bugzilla for the specified term(s). https://bugzilla.mozilla.org/buglist.cgi?quicksearch=%s
- dxr: Searches the Mozilla source code on DXR for the given term(s). https://dxr.mozilla.org/mozilla-central/search?q=%s
- file: Looks for files whose name contains the specified text in the Mozilla source tree on DXR. https://dxr.mozilla.org/mozilla-central/search?q=path%3A%s
- ident: Looks for definitions of the specified identifier (such as a method or class name) in the Mozilla code on DXR. https://dxr.mozilla.org/mozilla-central/search?q=id%3A%s
- func: Searches for the definition of function(s)/method(s) with the specified name, using DXR. https://dxr.mozilla.org/mozilla-central/search?q=function%3A%s
- t: Opens the specified MDN KumaScript macro page, given the template/macro name. https://developer.mozilla.org/en-US/docs/Template:%s
- wikimo: Searches wiki.mozilla.org for the specified term(s). https://wiki.mozilla.org/index.php?search=%s

Obviously, DXR is a font of fantastic information, and I suggest clicking the “Operators” button at the right end of the search bar there to see a list of the available filters; building search keywords for many of these filters can make your life vastly easier, depending on your specific needs and work habits![...]
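The %s placeholder is the whole trick: Firefox substitutes the text typed after the keyword into the bookmarked URL. Here is a minimal sketch of that substitution in Python (illustrative only; the keyword table and the expand function are made up for this example, not Firefox code):

```python
from urllib.parse import quote

# Hypothetical keyword table mirroring two of the bookmarks above.
KEYWORDS = {
    "css": "https://developer.mozilla.org/en-US/docs/Web/CSS/%s",
    "bug": "https://bugzilla.mozilla.org/show_bug.cgi?id=%s",
}

def expand(typed):
    """Expand 'keyword rest-of-text' into the bookmarked URL,
    the way a keyword search substitutes %s."""
    keyword, _, text = typed.partition(" ")
    return KEYWORDS[keyword].replace("%s", quote(text))

print(expand("css display"))
# https://developer.mozilla.org/en-US/docs/Web/CSS/display
print(expand("bug 1305436"))
# https://bugzilla.mozilla.org/show_bug.cgi?id=1305436
```

Firefox also URL-encodes the substituted text, which is why quote() appears in the sketch.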
Wed, 19 Oct 2016 18:30:00 +0000
(image) Excerpt from 2016 MozFest Volunteer Briefing on 19th October for Health and Safety
Wed, 19 Oct 2016 18:21:14 +0000
(image) Getting Started with Mozilla Maker Party 2016
Wed, 19 Oct 2016 18:00:00 +0000
(image) Meetup for 2016 MozFest Volunteers
Wed, 19 Oct 2016 17:00:00 +0000
(image) mconley livehacks on real Firefox bugs while thinking aloud.
Wed, 19 Oct 2016 16:00:00 +0000
(image) This is the sumo weekly call
Tue, 18 Oct 2016 22:24:15 +0000Last Tuesday, September 19, Mozilla released a new version of its browser, and we immediately shared its new features and download links with you. We apologize to everyone for any inconvenience this may have caused. What's new: The password manager has been updated to allow HTTPS pages to use stored HTTP credentials. This is one more way to support Let's Encrypt and help users transition to a more secure web. Reader Mode has gained several features that improve reading and listening, adding controls to adjust the text width and line spacing, plus narration, where the browser reads the page content aloud; these features will no doubt improve the experience for visually impaired users. Reader Mode now includes additional controls and read-aloud narration. The HTML5 audio and video player now supports playback at different speeds (0.5x, Normal, 1.25x, 1.5x, 2x) and indefinite looping. Relatedly, video playback performance was improved for users on systems that support SSSE3 instructions without hardware acceleration. Firefox Hello, the video-call and chat communication system, has been removed due to low usage. Nevertheless, Mozilla will continue developing and improving WebRTC. Support has ended for OS X 10.6, 10.7, and 10.8, and for Windows systems on SSE processors. For developers: Added a Cause column to the Network Monitor showing what triggered each network request. Introduced the Web Speech synthesis API. For Android: Added an offline page view mode, so you can view some pages even without Internet access. Added a tour of fundamental features such as Reader Mode and Sync to the first-run page.
Introduced the Spanish (Chile) (es-CL) and Norwegian (nn-NO) localizations. The look and behavior of tabs has been updated: older tabs are now hidden when the restore-tabs option is set to "Always restore"; scroll position and zoom level are remembered for open tabs; media controls have been updated to avoid sound from multiple tabs at the same time; and there are visual improvements to how favicons are displayed. Other news: Improvements to the about:memory page to report memory used for fonts. Re-enabled font shaping with Graphite2 by default. Improved performance on Windows and OS X systems without hardware acceleration. Various security fixes. If you would like the full list of changes, see the release notes (in English). You can get this version from our Downloads area in Spanish and English for Android, Linux, Mac, and Windows. If you liked it, please share this news with your friends on social networks. Don't hesitate to leave us a comment.[...]
Tue, 18 Oct 2016 21:55:56 +0000
As Brian Krebs is discovering, a large number of internet-connected devices with bad security can really ruin your day. Therefore, a lot of energy is being spent thinking about how to solve the security problems of the Internet of Things. Most of it is focussed on how we can make sure that these devices get regular security updates, and how to align the incentives to achieve that. And it’s difficult, because cheap IoT devices are cheap, and manufacturers make more money building the next thing than fixing the previous one.
Perhaps, instead of trying to make water flow uphill, we should be taking a different approach. How can we design these devices such that they don’t need any security updates for their lifetime?
One option would be to make them perfect first time. Yeah, right.
Another option would be the one from my blog post, An IoT Vision. In that post, I outlined a world where IoT devices’ access to the Internet is always mediated through a hub. This has several advantages, including the ability to inspect all traffic and the ability to write open source drivers to control the hardware. But one additional outworking of this design decision is that the devices are not Internet-addressable, and cannot send packets directly to the Internet on their own account. If that’s so, it’s much harder to compromise them and much harder to do anything evil with them if you do. At least, evil things affecting the rest of the net. And if that’s not sufficient, the hub itself can be patched to forbid patterns of access necessary for attacks.
Can we fix IoT security not by making devices secure, but by hiding them from attacks?(image)
Tue, 18 Oct 2016 21:37:35 +0000One of my roles at Mozilla is that I’m part of the Root Program team, which manages the list of trusted Certificate Authorities (CAs) in Firefox and Thunderbird. And, because we run our program in an open and transparent manner, other entities often adopt our trusted list. In that connection, I’ve recently been the lead investigator into the activities of a Certificate Authority (CA) called WoSign, and a connected CA called StartCom, who have been acting in ways contrary to those expected of a trusted CA. The whole experience has been really interesting, but I’ve not seen a good moment to blog about it. Now that a decision has been taken on how to move forward, it seems like a good time. The story started in late August, when Google notified Mozilla about some issues with how WoSign was conducting its operations, including various forms of what seemed to be certificate misissuance. We wrote up the three most serious of those for public discussion. WoSign issued a response to that document. Further issues were pointed out in discussion, and via the private investigations of various people. That led to a longer, curated issues list and much more public discussion. WoSign, in turn produced a more comprehensive response document, and a “final statement” later. One or two of the issues on the list turned out to be not their fault, a few more were minor, but several were major – and their attempts to explain them often only led to more issues, or to a clearer understanding of quite how wrong things had gone. On at least one particular issue, the question of whether they were deliberately back-dating certificates using an obsolete cryptographic algorithm (called “SHA-1”) to get around browser blocks on it, we were pretty sure that WoSign was lying. Around that time, we privately discovered a couple of certificates which had been mis-issued by the CA StartCom but with WoSign fingerprints all over the “style”. 
Up to this point, the focus has been on WoSign, and StartCom was only involved because WoSign bought them and didn’t disclose it as they should have done. I started putting together the narrative. The result of those further investigations was a 13-page report which conclusively proved that WoSign had been intentionally back-dating certificates to avoid browser-based restrictions on SHA-1 cert issuance. If you can write an enthralling page-turner about f**king certificate authorities doing scuzzy nerd sh*t, damn, I couldn't pull that off. — SwiftOnSecurity (@SwiftOnSecurity) September 28, 2016 The report proposed a course of action including a year’s dis-trust for both CAs. At that point, Qihoo 360 (the Chinese megacorporation which is the parent of WoSign and StartCom) requested a meeting with Mozilla, which was held in Mozilla’s London office, and attended by two representatives of Qihoo, and one each from StartCom and WoSign. At that meeting, WoSign’s CEO admitted to intentionally back-dating SHA-1 certificates, as our investigation had discovered. The representatives of Qihoo 360 wanted to know whether it would be possible to disentangle StartCom from WoSign and then treat it separately. Mozilla representatives gave advice on the route which might most likely achieve this, but said tha[...]
Tue, 18 Oct 2016 17:02:29 +0000
In this first edition, I’m interviewing Rob Conery about his “Imposter Handbook”. We cover the issues of teaching development, how to deal with a constantly changing work environment and how to tackle diversity and integration.
We’ve got eight more interviews ready and more lined up. Amongst the people I talked to are Sarah Drasner, Monica Dinculescu, Ada-Rose Edwards, Una Kravets and Chris Wilson. The format of Decoded Chats is pretty open: interviews ranging from 15 minutes to 50 minutes about current topics on the web, trends and ideas with the people who came up with them.
Some are recorded in a studio (when I am in Seattle), others are Skype calls and yet others are off-the-cuff recordings at conferences.
Do you know anyone you’d like me to interview? Drop me a line on Twitter @codepo8 and I’ll see what I can do :)(image)
Tue, 18 Oct 2016 16:47:15 +0000
Tl;dr: I just shipped
scriptworker 0.8.1 (changelog) (github) (pypi)
scriptworker 0.7.1 (changelog) (github) (pypi)
These are patch releases, and are currently the only versions of scriptworker that work.
The json, embedded in the Azure XML, now contains a new property, hintId. Ideally this wouldn't have broken anything, but I was using that json dict as kwargs, rather than explicitly passing
runId. This means that older versions of scriptworker no longer successfully poll for tasks.
Scriptworker 0.8.0 made some non-backwards-compatible changes to its config format, and there may be more such changes in the near future. To simplify things for other people working on scriptworker, I suggested they stay on 0.7.0 for the time being if they wanted to avoid the churn.
To allow for this, I created a 0.7.x branch and released 0.7.1 off of it. Currently, 0.8.1 and 0.7.1 are the only two versions of scriptworker that will successfully poll Azure for tasks.
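The breakage described above is a classic Python pitfall, worth a minimal sketch (names here are illustrative, not the actual scriptworker code): splatting a server-supplied JSON dict into a function as **kwargs fails the moment the server adds a key the function doesn't accept, whereas extracting only the fields you need keeps working.

```python
import json

# A task message whose payload grew a new server-side property.
message = json.loads('{"task_id": "abc", "run_id": 0, "hint_id": "xyz"}')

def claim_task(task_id, run_id):
    """Hypothetical handler that only knows about two fields."""
    return (task_id, run_id)

# Fragile: claim_task(**message) worked fine until "hint_id" appeared,
# then started raising TypeError for the unexpected keyword argument.
try:
    claim_task(**message)
except TypeError as err:
    print("broken:", err)

# Robust: pass exactly the fields you need; new server-side properties
# are then simply ignored instead of breaking the worker.
print("ok:", claim_task(message["task_id"], message["run_id"]))
```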
Tue, 18 Oct 2016 15:54:59 +0000
Due to some recent changes in the way that we use eslint to check our coding style, linting Mozilla source code in Atom has been broken for a month or two.
I have recently spent some time working on Atom's linter-eslint plugin, making it possible to bring all of that linting goodness back to life!
From the root of the project type:
./mach eslint --setup
Install the linter-eslint package version 8.0.0 or above. Then go to the package settings and enable the following options:
Once done, you should see errors and warnings as shown in the screenshot below:
Tue, 18 Oct 2016 15:00:00 +0000
(image) MozFest 2016 Brown Bag - October 18th, 2016 - 16:00 London
Tue, 18 Oct 2016 14:40:30 +0000
An algorithm we’ve depended on for most of the life of the Internet — SHA-1 — is aging, due to both mathematical and technological advances. Digital signatures incorporating the SHA-1 algorithm may soon be forgeable by sufficiently-motivated and resourceful entities.
Via our and others’ work in the CA/Browser Forum, following our deprecation plan announced last year and per recommendations by NIST, issuance of SHA-1 certificates mostly halted for the web last January, with new certificates moving to more secure algorithms. Since May 2016, the use of SHA-1 on the web has fallen from 3.5% to 0.8% as measured by Firefox Telemetry.
In early 2017, Firefox will show an overridable “Untrusted Connection” error whenever a SHA-1 certificate is encountered that chains up to a root certificate included in Mozilla’s CA Certificate Program. SHA-1 certificates that chain up to a manually-imported root certificate, as specified by the user, will continue to be supported by default; this will continue allowing certain enterprise root use cases, though we strongly encourage everyone to migrate away from SHA-1 as quickly as possible.
This policy has been included as an option in Firefox 51, and we plan to gradually ramp up its usage. Firefox 51 is currently in Developer Edition, and is currently scheduled for release in January 2017. We intend to enable this deprecation of SHA-1 SSL certificates for a subset of Beta users during the beta phase for 51 (beginning November 7) to evaluate the impact of the policy on real-world usage. As we gain confidence, we’ll increase the number of participating Beta users. Once Firefox 51 is released in January, we plan to proceed the same way, starting with a subset of users and eventually disabling support for SHA-1 certificates from publicly-trusted certificate authorities in early 2017.
Questions about SHA-1 based certificates should be directed to the mozilla.dev.security.policy forum.
Tue, 18 Oct 2016 07:00:35 +0000
As you may already know, last Friday – October 14th – we held a new Testday event for Firefox 50 Beta 7.
Thank you all for helping us make Mozilla a better place – Onek Jude, Sadamu Samuel, Moin Shaikh.
From Bangladesh: Maruf Rahman, Md.Rahimul Islam, Sayed Ibn Masud, Abdullah Al Jaber Hridoy, Zayed News, Md Arafatul Islam, Raihan Ali, Md.Majedul islam, Tariqul Islam Chowdhury, Shahrin Firdaus, Md. Nafis Fuad, Sayed Mahmud, Maruf Hasan Hridoy, Md. Almas Hossain, Anmona Mamun Monisha, Aminul Islam Alvi, Rezwana Islam Ria, Niaz Bhuiyan Asif, Nazmul Hassan, Roy Ayers, Farhadur Raja Fahim, Sauradeep Dutta, Sajedul Islam, মাহফুজা হুমায়রা মোহনা.
A big thank you goes out to all our active moderators too!
Keep an eye on QMO for upcoming events!
Tue, 18 Oct 2016 04:06:27 +0000Rust is a great language, and Mozilla plans to use it extensively in Firefox. However, the Rust compiler (rustc) is quite slow and compile times are a pain point for many Rust users. Recently I’ve been working on improving that. This post covers how I’ve done this, and should be of interest to anybody else who wants to help speed up the Rust compiler. Although I’ve done all this work on Linux it should be mostly applicable to other platforms as well.

Getting the code

The first step is to get the rustc code. First, I fork the main Rust repository on GitHub. Then I make two local clones: a base clone that I won’t modify, which serves as a stable comparison point (rust0), and a second clone where I make my modifications (rust1). I use commands something like this:

  user=nnethercote
  for r in rust0 rust1 ; do
    cd ~/moz
    git clone https://github.com/$user/rust $r
    cd $r
    git remote add upstream https://github.com/rust-lang/rust
    git remote set-url origin git@github.com:$user/rust
  done

Building the Rust compiler

Within the two repositories, I first configure:

  ./configure --enable-optimize --enable-debuginfo

I configure with optimizations enabled because that matches release versions of rustc. And I configure with debug info enabled so that I get good information from profilers. Then I build:

  RUSTFLAGS='' make -j8

[Update: I previously had -Ccodegen-units=8 in RUSTFLAGS because it speeds up compile times. But Lars Bergstrom informed me that it can slow down the resulting program significantly. I measured and he was right: the resulting rustc was about 5–10% slower. So I’ve stopped using it now.]

That does a full build, which does the following:
- Downloads a stage0 compiler, which will be used to build the stage1 local compiler.
- Builds LLVM, which will become part of the local compilers.
- Builds the stage1 compiler with the stage0 compiler.
- Builds the stage2 compiler with the stage1 compiler.
It can be mind-bending to grok all the stages, especially with regards to how libraries work. (One notable example: the stage1 compiler uses the system allocator, but the stage2 compiler uses jemalloc.) I’ve found that the stage1 and stage2 compilers have similar performance. Therefore, I mostly measure the stage1 compiler because it’s much faster to just build the stage1 compiler, which I do with the following command:

  RUSTFLAGS='-Ccodegen-units=8' make -j8 rustc-stage1

Building the compiler takes a while, which isn’t surprising. What is more surprising is that rebuilding the compiler after a small change also takes a while. That’s because a lot of code gets recompiled after any change. There are two reasons for this.
- Rust’s unit of compilation is the crate. Each crate can consist of multiple files. If you modify a crate, the whole crate must be rebuilt. This isn’t surprising.
- rustc’s dependency checking is very coarse. If you modify a crate, every other crate that depends on it will also be rebuilt, no matter how trivial the modification. This surprised me greatly. For example, any modifi[...]
Tue, 18 Oct 2016 04:00:00 +0000Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR. Updates from Rust Community Blog Posts Compiling to the web with Rust and emscripten. How to speed up the Rust compiler. Exploring ARM inline assembly in Rust. Usefulness and pitfalls of asm!. Pretty state machine patterns in Rust. What makes slog fast. Tips on writing efficient Rust code. Control flow for visitor callbacks. New pattern for visitor callbacks that hits the sweet spot for both convenience and the “pay for what you use” principle. How Rust do? Devlog of learning Rust by developing a small project in it. Using Haskell in Rust. Follow-up to Rust in Haskell. Game of Life implemented in Rust-sdl2. News & Project Updates @withoutboats joins language design team! Future updates to the rustup distribution format. Solving checksum failures and more! Other Weeklies from Rust Community This week in Rust docs 26. These weeks in TiKV 2016-10-17. Crate of the Week This week's Crate of the Week is xargo - for effortless cross compilation of Rust programs to custom bare-metal targets like ARM Cortex-M. It recently reached version 0.2.0 and you can read the announcement here. Submit your suggestions and votes for next week! Call for Participation Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information. [easy] rust: Provide a better error message when the target sysroot is not installed.
[less easy] servo: Implement HTMLTimeElement#dateTime. [hard] rust: Optimize emscripten targets with emcc. [hard] rust: Tell emscripten to remove exception handling code when the panic runtime is used. [easy] imag: Iterator for Iterator
Mon, 17 Oct 2016 20:45:32 +0000
March 18-19, Nuremberg, Germany
Everyone interested in curl, libcurl and related matters is invited to participate. We only ask that you register and pay the small fee. The fee will be used for food and more at the event.
You’ll find the full and detailed description of the event and the specific location in the curl wiki.
The agenda for the weekend is purposely kept loose to allow for flexibility and unconference-style adding things and topics while there. You will thus have the chance to present what you like and affect what others present. Do tell us what you’d like to talk about or hear others talk about! The sign-up for the event isn’t open yet, as we first need to work out some more details.
We have a dedicated mailing list for discussing the meeting, called curl-meet, so please consider yourself invited to join in there as well!
Thanks a lot to SUSE for hosting!
Feel free to help us make a cool logo for the event!
(The 19th birthday of curl is suitably enough the day after, on March 20.)
Mon, 17 Oct 2016 18:00:00 +0000
(image) The Monday Project Meeting
Mon, 17 Oct 2016 15:00:14 +0000The Firefox Desktop team met yet again last Tuesday to share updates. Here are some fresh updates that we think you might find interesting:

Highlights
- The add-ons team is landing more WebExtension APIs, namely: identity, topSites, omnibox. If you’re interested in keeping tabs on what WebExtension work is in the pipe, check out their public Trello board
- jaws and mikedeboer are working on the new theming API in the Cedar twig repository. Once ready, it will merge into mozilla-central. Recently landed on Cedar is the ability to set the background image of about:home
- In an effort to reduce e10s tab switch spinners, billm landed a patch that lets us pause GCs to force a paint
- ddurst happily reports that Nightly is currently blocking all NPAPI plugins except for Flash! This is done behind a pref, which will be disabled for ESR 52, but enabled for release 52. In the next cycle, the pref will be removed and this will be hardcoded.
- ddurst also reports that the stub installer will soon be able to install 64-bit versions of Firefox on supported systems
- nhnt11 has returned from his time off, and will now continue his captive portal work!

Contributor(s) of the Week
- adamg2 helped to make the Histograms.json parser more bulletproof!
- Ajay freed us all from some unnecessary code within Telemetry!
- Saad fixed a bug in the AwesomeBar that would occur if a hex code was put in as an address!
- Thauã Silveira added a new Telemetry probe so that we can get a sense of how often people abort print jobs!

Project Updates

Context Graph
- tobyelliot reports that the Recommendation Engine Experiment has gathered over 2000 volunteer participants
- Folks who are interested in the data handling practices can read up on them here

Electrolysis (e10s)
- mconley has landed a patch that will annotate hangs that occur in the main thread in the content process during tab switch. This should give the team important clues on what kinds of things are causing tab switch spinners
- mrbkap has started an etherpad to brainstorm ways in which e10s-multi might accidentally break some add-ons

Platform UI
- MSU students continue to improve the
Mon, 17 Oct 2016 13:41:10 +0000
Because software defaults matter, we have just changed the default bookmarks for the Nightly channel to be more useful to power-users deeply interested in day to day progress of Firefox and potentially willing to help Mozilla improve their browser through bug and crash reports, shared telemetry data and technical feedback.
Users on the Nightly channel had the same bookmarks as users on the release channel; these bookmarks target end-users with limited technical knowledge and link to Mozilla sites providing end-user support and add-ons, or propose a tour of Firefox features. Not very compelling for a tech-savvy audience that installed pre-alpha software!
As of last week, new Nightly users or existing Nightly users creating a new profile have a different set of bookmarks that are more likely to meet their interest in the technical side of Mozilla and contributing to Firefox as an alpha tester. Here is what the default bookmarks are:
There are links to this blog of course, to Planet Mozilla, to the Mozilla Developer Network, to the Nightly Testers Tools add-on, to about:crashes and to the IRC #nightly channel in case you find a bug and would like to talk to other Nightly users about it and of course a link to Bugzilla. The Firefox tour link was also replaced by a link to the contribute page on mozilla.org.
It’s a minor change to the profile data, as we don’t want to make Nightly a different product from Firefox, but I hope it is another small step in the direction of empowering our more technical user base to help Mozilla build the most stable and reliable browser for hundreds of millions of people!
Mon, 17 Oct 2016 11:53:00 +0000In a Debian server I'm using LVM to create a single logical volume from multiple different volumes. One of the volumes is a loop-back device which refers to a file in another filesystem. The loop-back device needs to be activated before the LVM service starts, or the latter will fail due to missing volumes. To do so, a special systemd unit needs to be created which will not have the default dependencies of units and will get executed before the lvm2-activation-early service. Systemd will set a number of dependencies for all units by default to bring the system into a usable state before starting most of the units. This behavior is controlled by the DefaultDependencies flag. Leaving DefaultDependencies at its default True value creates a dependency loop which systemd will forcefully break to finish booting the system. Obviously this non-deterministic flow can result in a different than desired execution order, which in turn will fail the LVM volume activation. Setting DefaultDependencies to False will disable all but essential dependencies and will allow our unit to execute in time. The systemd manual confirms that we can set the option to false: Generally, only services involved with early boot or late shutdown should set this option to false. The second step is to execute before lvm2-activation-early. This is simply achieved by setting Before=lvm2-activation-early. The third and last step is to set the command to execute. In my case it's /sbin/losetup /dev/loop0 /volume.img as I want to create /dev/loop0 from the file /volume.img. Set the process type to oneshot so systemd waits for the process to exit before it starts follow-up units. Again from the systemd manual: Behavior of oneshot is similar to simple; however, it is expected that the process has to exit before systemd starts follow-up units. Place the unit file in /etc/systemd/system and in the next reboot the loop-back device should be available to LVM.
Here's the final unit file:

[Unit]
Description=Activate loop device
DefaultDependencies=no
After=systemd-udev-settle.service
Before=lvm2-activation-early.service
Wants=systemd-udev-settle.service

[Service]
ExecStart=/sbin/losetup /dev/loop0 /volume.img
Type=oneshot

[Install]
WantedBy=local-fs.target

See also:
- Anthony's excellent LVM Loopback How-To[...]
Mon, 17 Oct 2016 09:59:45 +0000
Web developers don’t write all their code in just one line of text. They use white space between their HTML elements because it makes markup more readable: spaces, returns, tabs.
In most instances, this white space seems to have no effect and no visual output, but the truth is that when a browser parses HTML it will automatically generate anonymous text nodes for any text not contained in another node. This includes white space (which is, after all, a type of text).
If these auto-generated text nodes are inline level, browsers will give them a non-zero width and height, and you will find strange gaps between the elements in your content, even if you haven’t set any margin or padding on nearby elements.
This behaviour can be hard to debug, but Firefox DevTools can now display these whitespace nodes, so you can quickly spot where the gaps in your markup come from and fix the issues.
The demo shows two examples with slightly different markup to highlight the differences both in browser rendering and what DevTools are showing.
The first example has one img per line, so the markup is readable, but the browser renders gaps between the images:
The second example has all the img tags in one line, which makes the markup unreadable, but it also doesn’t have gaps in the output:
If you inspect the nodes in the first example, you’ll find a new whitespace indicator that denotes the text nodes created by the browser for the whitespace in the code. No more guessing! You can even delete the node from the inspector, and see whether that removes mysterious gaps you might have in your website.
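The effect is easy to reproduce outside the browser, too: any conforming HTML parser emits character-data events for the whitespace between tags. Here is a minimal sketch using python3's stdlib parser (the markup strings and class name are made up for illustration):

```shell
python3 - <<'EOF'
from html.parser import HTMLParser

class TextNodeCounter(HTMLParser):
    # Collect the character-data chunks (text nodes) the parser emits.
    def __init__(self):
        super().__init__()
        self.text_nodes = []
    def handle_data(self, data):
        self.text_nodes.append(data)

readable = "<div>\n  <img src='a.png'>\n  <img src='b.png'>\n</div>"
one_line = "<div><img src='a.png'><img src='b.png'></div>"

for markup in (readable, one_line):
    counter = TextNodeCounter()
    counter.feed(markup)
    print(len(counter.text_nodes))
EOF
```

The readable markup produces 3 whitespace-only text nodes; the one-liner produces 0, which is exactly why only the first example renders gaps.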
Mon, 17 Oct 2016 00:30:00 +0000In the last two weeks, we landed 171 PRs in the Servo organization’s repositories. Planning and Status Our overall roadmap is available online and now includes the Q4 plans and tentative outline of some ideas for 2017. Please check it out and provide feedback! This week’s status updates are here. Notable Additions bholley added benchmark support to mach’s ability to run unit tests frewsxcv implemented the value property on
Sun, 16 Oct 2016 03:34:44 +0000
Are they trying to be ironic?
Sun, 16 Oct 2016 00:16:00 +0000It's Talos time. You can now plunk down your money for an open, auditable, non-x86 workstation-class computer that doesn't suck. It's PowerPC. It's modern. It's beefy. It's awesome. Let's not mince words, however: it's also not cheap, and you're gonna plunk down a lot if you want this machine. The board runs $4100 and that's without the CPU, which is pledged for separately though you can group them in the same order (this is a little clunky and I don't know why Raptor did it this way). To be sure, I think we all suspected this would be the case but now it's clear the initial prices were underestimates. Although some car repairs and other things have diminished my budget (I was originally going to get two of these), I still ponied up for a board and for one of the 190W octocore POWER8 CPUs, since this appears to be the sweet spot for those of us planning to use it as a workstation (remember, each core has eight threads via SMT for a grand total of 64, and this part has the fastest turbo clock speed at 3.857GHz). That ran me $5340. I think after the RAM, disks, video card, chassis and PSU I'll probably be all in for around $7000. Too steep? I don't blame you, but you can still help by donating to the project, enabling those of us who can afford to jump in first to smooth the way for you. Frankly, this is the first machine I consider a meaningful successor to the Quad G5 (the AmigaOne series isn't quite there yet). Non-x86 doesn't have the economies of scale of your typical soulless Chipzilla craptop or beige box, but if we can collectively help Raptor get this project off the ground you'll finally have an option for your next big machine when you need something free, open and unchained -- and there's a lot of chains in modern PCs that you don't control. You can donate as little as $10 and get this party started, or donate $250 and get to play with one remotely for a few months. Call it a rental if you like. 
No, I don't get a piece of this, I don't have stock in Raptor and I don't owe them a favour. I simply want this project to succeed. And if you're reading this blog, odds are you want that too. The campaign ends December 15. Donate, buy, whatever. Let's do this. My plans are, even though I confess I'll be running it little-endian (since unfortunately I don't think we have much choice nowadays), to make it as much a true successor to the last Power Mac as possible. Yes, I'll be sinking time into a JIT for it, which should fully support asm to truly run those monster applications we're seeing more and more of, porting over our AltiVec code with an endian shift (since the POWER8 has VMX), and working on a viable and fast way of running legacy Power Mac software on it, either through KVM or QEMU or whatever turns out to be the best opti[...]
Fri, 14 Oct 2016 21:20:05 +0000
Here’s the state of the add-ons world this month.
In the past month, 1,755 listed add-on submissions were reviewed.
There are 223 listed add-ons awaiting review.
If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.
The compatibility blog post for Firefox 50 is up, and the bulk validation was run recently. The compatibility blog post for Firefox 51 was published yesterday. It’s worth pointing out that the Firefox 50 cycle will be twice as long, so 51 won’t be released until January 24th, 2017.
Multiprocess Firefox is now enabled for users without add-ons, and add-ons will be gradually phased in, so make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.
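For legacy (non-WebExtension) add-ons, the multiprocess compatible flag lives in install.rdf. A hedged fragment follows: the multiprocessCompatible element is the real flag, while the id and surrounding metadata are placeholders.

```xml
<Description about="urn:mozilla:install-manifest">
  <!-- placeholder id; the multiprocessCompatible flag is the relevant part -->
  <em:id>myaddon@example.org</em:id>
  <em:multiprocessCompatible>true</em:multiprocessCompatible>
</Description>
```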
As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.
We would like to thank Atique Ahmed Ziad, Surya Prashanth, freaktechnik, shubheksha, bjdixon, zombie, berraknil, Krizzu, rackstar17, paenglab, and Trishul Goel (long list!) for their recent contributions to the add-ons world. You can read more about their work in our recognition page.
Fri, 14 Oct 2016 07:00:00 +0000
The Test Pilot 2016 Q4 OKRs are published. Primarily we'll be focused on continued growth of users (our overall 2016 goal). We deprioritized localization last quarter and over-rotated on publishing experiments by launching four when we were only aiming for one. This quarter we'll turn that knob back down (we're aiming for two new experiments) and get localization done.
We also failed to graduate any experiments last quarter -- arguably the most important part of our entire process since it includes drawing conclusions and publishing our results. This quarter we'll graduate three experiments from Test Pilot, publish our findings so we can improve Firefox, and clear out space in Test Pilot for the next big ideas.
Thu, 13 Oct 2016 22:52:10 +0000Firefox 51 will be released on January 24th. Note that the scheduled release on December 13th is a point release, not a major release, hence the much longer cycle. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 51 for Developers, so you should also give it a look. General Let accel-click and middle-click on the new tab button open a new tab next to the current one. This changes the behavior of the new tab button and removes the BrowserOpenNewTabOrWindow function. Code executed in wrong order with setTimeout after alert. This changes the behavior of the onButtonClick function to make it async. Drop the prefixed version of the visibility API. This removes mozVisibilityState and mozHidden. Don’t dispatch the close event, and remove onclose. This applies to Workers. Prevent installation of an extension when “*” is used as part of strict_min_version in a WebExtension manifest.json. Minimum versions should always be specific versions, rather than a mask. Loaded-as-data XML documents should not generate
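To illustrate the strict_min_version rule above: a WebExtension targeting this release should pin a concrete minimum version in manifest.json rather than a mask. This is a hedged sketch; the add-on id is a placeholder.

```json
{
  "applications": {
    "gecko": {
      "id": "myaddon@example.org",
      "strict_min_version": "51.0"
    }
  }
}
```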
Thu, 13 Oct 2016 22:41:30 +0000Announcing Mozilla’s Equal Rating Innovation Challenge, a $250,000 contest including expert mentorship to spark new ways to connect everyone to the Internet. At Mozilla, we believe the Internet is most powerful when anyone – regardless of gender, income, or geography – can participate equally. However the digital divide remains a clear and persistent reality. Today more than 4 billion people are still not online, according to the World Economic Forum. That is greater than 55% of the global population. Some, who live in poor or rural areas, lack the infrastructure. Fast wired and wireless connectivity only reaches 30% of rural areas. Other people don’t connect because they don’t believe there is enough relevant digital content in their language. Women are also less likely to access and use the Internet; only 37% access the Internet versus 59% of men, according to surveys by the World Wide Web Foundation. Access alone, however, is not sufficient. Pre-selected content and walled gardens powered by specific providers subvert the participatory and democratic nature of the Internet that makes it such a powerful platform. Mitchell Baker coined the term equal rating in a 2015 blog post. Mozilla successfully took part in shaping pro-net neutrality legislation in the US, Europe and India. Today, Mozilla’s Open Innovation Team wants to inject practical, action-oriented, new thinking into these efforts. This is why we are very excited to launch our global Equal Rating Innovation Challenge. This challenge is designed to spur innovations for bringing the members of the Next Billion online. The Equal Rating Innovation Challenge is focused on identifying creative new solutions to connect the unconnected. These solutions may range from consumer products and novel mobile services to new business models and infrastructure proposals. Mozilla will award US$250,000 in funding and provide expert mentorship to bring these solutions to the market. 
We seek to engage entrepreneurs, designers, researchers, and innovators all over the world to propose creative, engaging and scalable ideas that cultivate digital literacy and provide affordable access to the full diversity of the open Internet. In particular, we welcome proposals that build on local knowledge and expertise. Our aim is to entertain applications from all over the globe. The US$250,000 in prize monies will be split into three categories: Best Overall (key metric: scalability), Best Overall Runner-up, and Most Novel Solution (key metric: experimental with potential high reward). This level of funding may be everything a team needs to go [...]
Thu, 13 Oct 2016 22:31:02 +0000Announcing Mozilla’s Equal Rating Innovation Challenge, a $250,000 contest including expert mentorship to spark new ways to connect everyone to the Internet. At Mozilla, we believe the Internet is most powerful when anyone — regardless of gender, income, or geography — can participate equally. However the digital divide remains a clear and persistent reality. Today more than 4 billion people are still not online, according to the World Economic Forum. That is greater than 55% of the global population. Some, who live in poor or rural areas, lack the infrastructure. Fast wired and wireless connectivity only reaches 30% of rural areas. Other people don’t connect because they don’t believe there is enough relevant digital content in their language. Women are also less likely to access and use the Internet; only 37% access the Internet versus 59% of men, according to surveys by the World Wide Web Foundation. Access alone, however, is not sufficient. Pre-selected content and walled gardens powered by specific providers subvert the participatory and democratic nature of the Internet that makes it such a powerful platform. Mitchell Baker coined the term equal rating in a 2015 blog post. Mozilla successfully took part in shaping pro-net neutrality legislation in the US, Europe and India. Today, Mozilla’s Open Innovation Team wants to inject practical, action-oriented, new thinking into these efforts. This is why we are very excited to launch our global Equal Rating Innovation Challenge. This challenge is designed to spur innovations for bringing the members of the Next Billion online. The Equal Rating Innovation Challenge is focused on identifying creative new solutions to connect the unconnected. These solutions may range from consumer products and novel mobile services to new business models and infrastructure proposals. 
Mozilla will award US$250,000 in funding and provide expert mentorship to bring these solutions to the market. We seek to engage entrepreneurs, designers, researchers, and innovators all over the world to propose creative, engaging and scalable ideas that cultivate digital literacy and provide affordable access to the full diversity of the open Internet. In particular, we welcome proposals that build on local knowledge and expertise. Our aim is to entertain applications from all over the globe. The US$250,000 in prize monies will be split into three categories: Best Overall (key metric: scalability), Best Overall Runner-up, and Most Novel Solution (key metric: experimental with potential high reward). This level of funding may be ev[...]
Thu, 13 Oct 2016 19:10:27 +0000
By design, taskcluster workers are very flexible and user-input-driven. This allows us to put CI task logic in-tree, which means developers can modify that logic as part of a try push or a code commit. This allows for a smoother, self-serve CI workflow that can ride the trains like any other change.
However, a secure release workflow requires certain tasks to be less permissive and more auditable. If the logic behind code signing or pushing updates to our users is purely in-tree, and the related checks and balances are also in-tree, the possibility of a malicious or accidental change being pushed live increases.
Enter scriptworker. Scriptworker is a limited-purpose taskcluster worker type: each instance can only perform one type of task, and validates its restricted inputs before launching any task logic. The scriptworker instances are maintained by Release Engineering, rather than the Taskcluster team. This separates roles between teams, which limits damage should any one user's credentials become compromised.
The past several releases have included changes involving the chain of trust. Scriptworker 0.8.0 is the first release that enables gpg key management and chain of trust signing.
An upcoming scriptworker release will enable upstream chain of trust validation. Once enabled, scriptworker will fail fast on any task or graph that doesn't pass the validation tests.
Thu, 13 Oct 2016 17:45:00 +0000
(image) Weekly project updates from the Mozilla Connected Devices team.
Thu, 13 Oct 2016 16:00:00 +0000
(image) This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
Thu, 13 Oct 2016 15:00:00 +0000
(image) The presence and participation of women in STEM is on the rise thanks to the efforts of many across the globe, but still has many...
Thu, 13 Oct 2016 07:24:38 +0000So, Aria Stewart tweeted two questions and a statement the other day: Programmers: can you describe change over time in a system? How completely? Practice this. — Aria Stewart (@aredridel) October 10, 2016 I wanted to discuss this idea as it pertains to debugging and strategies I’ve been employing more often lately. But let’s examine the topic at face value first: describing change over time in a system. The abstract “system” is where a lot of the depth in this question comes from. It could be talking about the computer you’re on, the code base you work in, data moving through your application, or an organization of people; many things fall into a system of some sort, and time acts upon them all. I’m going to choose data moving through a system as the primary topic, but also talk about code bases over time. Another part of the question that keeps it quite open is the lack of “why”, “what”, or “how” in it. This means we could discuss why the data needs to be transformed in various ways, why we added a feature or changed some code, or why an organization is investing in writing software at all. We could talk about what changes at each step in a data pipeline, what changes have happened in a given commit, or what goals were accomplished each month by some folks. Or, the topic could be how a compiler changes the data as it passes through, how a programmer sets about making changes to a code base, or how an organization made its decisions to go in the directions it did. All quite valid, and this is but a fraction of the depth in this simple question. Let’s talk about systems a bit. At work, we have a number of services talking via a messaging system and a relational database. The current buzz phrase for this is “micro services”, but we also called them “service oriented architectures” in the past. 
My previous job, I worked in a much smaller system that had many components for gathering data, as well as a few components for processing that data and sending it back to the servers. Both of these systems shared common attributes which most other systems also must cope with: events that provide data to be processed happen in functionally random order, that data is fed into processors who then stage data to be consumed by other parts of the system. When problems arise in systems like these, it can be difficult to tell what piece is causing disruption. The point where the data changes from h[...]
Thu, 13 Oct 2016 07:00:00 +0000Creating and deleting a Git branch named -D

git branch -D deletes a Git branch. Yet someone on IRC asked, “I accidentally got a git branch named -D. How do I delete it?”. I took this as a personal challenge to create and nuke a -D branch myself, to explore this edge case of one of my favorite tools.

Making a branch with an illegal name

You create a branch in Git by typing git branch branchname. If you type git branch -D, the -D will be passed as an argument to the program by your shell, because your shell knows that all things starting with - are arguments. You can tell your shell “I just mean a literal -, not an argument” by escaping it, like git branch \-D. But Git sees what we’re up to, and won’t let that fly. It complains: fatal: '-D' is not a valid branch name. So even when we get the string -D into Git, the porcelain spits it right back out at us. But since this is Unix and Everything’s A File(TM), I can create a branch with a perfectly fine name to get through the porcelain and then change it later. If I were at the Git wizardry level of Emily Xie I could just write the files into .git without the intermediate step of watching the porcelain do it first, but I’m not quite that good yet. So, let’s make a branch with a perfectly fine name in a clean repo, then swap things around under the hood:

$ mkdir dont
$ cd dont
$ git init
$ git commit --allow-empty -am "initial commit"
[master (root-commit) da1f6b6] initial commit
$ git branch
* master
$ git checkout -b dashdee
Switched to a new branch 'dashdee'
$ git branch
* dashdee
  master
$ grep -ri dashdee .git/
.git/HEAD:ref: refs/heads/dashdee
.git/logs/HEAD:da1f6b67446e83a456c4aeaeef1e256a8531640e da1f6b67446e83a456c4aeaeef1e256a8531640e E. Dunham
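The post is truncated before the rescue, so here is a hedged sketch of one way out (my addition, not from the post): git update-ref is plumbing that bypasses the porcelain's branch-name validation, so it can delete a ref the porcelain refuses to touch.

```shell
# In a repository where a branch literally named "-D" already exists,
# delete its ref directly with plumbing:
git update-ref -d refs/heads/-D
```

The porcelain should also accept `git branch -D -- -D`, since `--` ends option parsing and forces the second `-D` to be read as a branch name rather than a flag.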
Thu, 13 Oct 2016 01:20:50 +0000In Q3 the MozReview team started to focus on tackling various usability issues. We started off with a targeted effort on the “Finish Review” dialog, which was not only visually unappealing but difficult to use. The talented David Walsh compressed the nearly full-screen dialog into a dropdown expanded from the draft banner, and he changed button names to clarify their purpose. We have some ideas for further improvements as well. David has now embarked on a larger mission: reworking the main review-request UI to improve clarity and discoverability. He came up with some initial designs and discussed them with a few MozReview users, and here’s the result of that conversation: This design provides some immediate benefits, and it sets us up for some future improvements. Here are the thoughts behind the changes: The commits table, which was one of the first things we added to stock Review Board, was never in the right place. All the surrounding text and controls reflect just the commit you are looking at right now. Moving the table to a separate panel above the commit metadata is a better, and hopefully more intuitive, representation of the hierarchical relationship between a commit series and an individual commit. The second obvious change is that the commit table is now collapsed to show only the commit you are currently looking at, along with its position (e.g. “commit 3 of 5”) and navigation links to previous and next commits. This places the emphasis on the selected commit, while still conveying the fact that it is part of a series of commits. (Even if that series is actually only one commit, it is still important to show that MozReview is designed to operate on series.) To address feedback from people who like always seeing the entire series, it will be possible to expand the table and set that as a preference. 
The commit title is still redundant, but removing it from the second panel left the rest of the information there looking rather abandoned and confusing. I’m not sure if there is a good fix for this. The last functional change is the addition of a “Quick r+” button. This fixes the annoying process of having to select “Finish Review”, set the dropdown to “r+”, and then publish. It also removes the need for the somewhat redundant and confusing “Finish Review” button, since for anything other than an r+ a reviewer wi[...]
Wed, 12 Oct 2016 17:11:04 +0000Cross post from: The Mozilla Blog. Mozilla’s annual celebration of making online is challenging outdated copyright law in the EU. Here’s how you can participate. It’s that time of year: Maker Party. Each year, Mozilla hosts a global celebration to inspire learning and making online. Individuals from around the world are invited. It’s an opportunity for artists to connect with educators; for activists to trade ideas with coders; and for entrepreneurs to chat with makers. This year, we’re coming together with that same spirit, and also with a mission: to challenge outdated copyright laws in the European Union. EU copyright laws are at odds with learning and making online. Their restrictive nature undermines creativity, imagination, and free expression across the continent. Mozilla’s Denelle Dixon-Thayer wrote about the details in her recent blog post. By educating and inspiring more people to take action, we can update EU copyright law for the 21st century. Over the past few months, everyday internet users have signed our petition and watched our videos to push for copyright reform. Now, we’re sharing copyright reform activities for your very own Maker Party. Want to join in? Maker Party officially kicks off today. Here are activities for your own Maker Party: Be a #cczero Hero In addition to all the amazing live events you can host or attend, we wanted to create a way for our global digital community to participate. We’re planning a global contribute-a-thon to unite Mozillians around the world and grow the number of images in the public domain. We want to showcase what the open internet movement is capable of. And we’re making a statement when we do it: public domain content helps the open internet thrive. Check out our #cczero hero event page and instructions on contributing. You should be the owner of the copyright in the work. It can be fun, serious, artistic — whatever you’d like. Get started. 
For more information on how to submit your work to the public domain or to Creative Commons, click here. Post Crimes Mozilla has created an app to highlight the outdated nature of some of the EU’s copyright laws, like the absurdity that photos of public landmarks can be unlawful. Try the Post Crimes web app: Take a selfie in front of the Eiffel Tower’s night-time light display, or the Little Mermaid in Denmark. Th[...]
Wed, 12 Oct 2016 17:00:00 +0000
(image) mconley livehacks on real Firefox bugs while thinking aloud.
Wed, 12 Oct 2016 17:00:00 +0000
(image) Today's proliferation of mobile devices and platforms such as Google and Facebook has exacerbated an extensive, prolific sharing about users and their behaviors in ways...
Wed, 12 Oct 2016 16:00:00 +0000
(image) This is the sumo weekly call
Wed, 12 Oct 2016 14:48:27 +0000On this update we will look at the progress made in the last two weeks. A reminder that this quarter’s main focus is on:

- Debugging tests on interactive workers (only Linux on TaskCluster)
- Improving end to end times on Try (Thunder Try project)

For all bugs and priorities you can check out the project management page: https://wiki.mozilla.org/EngineeringProductivity/Projects/Debugging_UX_improvements

Status update:

Debugging tests on interactive workers
Tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1262260
Accomplished recently: no new progress.
Upcoming: Android xpcshell; blog/newsgroup post.

Thunder Try - Improve end to end times on try

Project #1 - Artifact builds on automation
Tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1284882
Accomplished recently: the following platforms are now supported: linux, linux64, macosx64, win32, win64. An option was added to download symbols for our compiled artifacts during the artifact build.
Upcoming: debug artifact builds on try (right now --artifact always results in an opt artifact build); Android artifact builds on try, thanks to nalexander.

Project #2 - S3 Cloud Compiler Cache
Tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1280641
Some of the issues found last quarter for this project were around NSS, which was also in need of replacing. This project was put on hold until the NSS work was completed. We’re going to resume this in Q4.

Project #3 - Metrics
Tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1286856
Accomplished recently: brittle running example here: http://people.mozilla.org/~klahnakoski/temp/End-to-End.html. There is a problem with low populations; the 90th percentile is effectively everything, and a couple of outliers impact the End-to-End time shown.
Upcoming: figure out what to do with these small populations:

- Ignore them: too small to be statistically significant
- Aggregate them: all the rarely run suites can be pushed into an “Other” category
- Show some other statistic: maybe median is better?
- Show median of the past day, and 90% for the week: that can show the longer [...]
Wed, 12 Oct 2016 11:13:17 +0000The following is based on my doctoral thesis, my experience as Web Literacy Lead at the Mozilla Foundation, and the work that I’ve done as an independent consultant, identifying, developing, and credentialing digital skills and literacies. To go into more depth on this topic, check out my book, The Essential Elements of Digital Literacies. 1. Take people on the journey with you The quotation below, illustrated by Bryan Mathers, is an African proverb that I’ve learned to be true. The easiest thing to do, especially if you’re short of time, is to take a definition - or even a whole curriculum / scheme of work - and use it off-the-shelf. This rarely works, for a couple of reasons. First, every context is different. Everything can look great, but the devil really is in the details of translating even very practical resources into your particular situation. Second, because the people within your organisation or programme haven’t been part of the definition, they’re not invested in it. Why should they do something that’s been imposed upon them? 2. Focus on identity I’m a fan of Helen Beetham’s work. The diagram below is from a collaboration with Rhona Sharpe, which illustrates an important point: any digital literacies curriculum should scaffold towards digital identity. These days, we assume access (perhaps incorrectly?) and focus on skills and practices. What we need from any digital literacies curriculum is a way to develop learners’ identities. There are obvious ways to do this - for example encourage students to create their own, independent, presence on the web. However, it’s also important to note that identities are multi-faceted, and so any digital literacies curriculum should encourage learners to develop identities in multiple places on the web. Interacting in various online communities involves different methods of expression. 3. Cast the net wide We all have pressures and skillsets we need to develop immediately. 
Nevertheless, equally important when developing digital literacies are the mindsets behind these skillsets. In my doctoral thesis and subsequent book I outlined eight ‘essential elements’ of digital literacies from the literature[...]
Wed, 12 Oct 2016 05:00:04 +0000Mozilla’s annual celebration of making online is challenging outdated copyright law in the EU. Here’s how you can participate. It’s that time of year: Maker Party. Each year, Mozilla hosts a global celebration to inspire learning and making online. Individuals from around the world are invited. It’s an opportunity for artists to connect with educators; for activists to trade ideas with coders; and for entrepreneurs to chat with makers. This year, we’re coming together with that same spirit, and also with a mission: to challenge outdated copyright laws in the European Union. EU copyright laws are at odds with learning and making online. Their restrictive nature undermines creativity, imagination, and free expression across the continent. Mozilla’s Denelle Dixon-Thayer wrote about the details in her recent blog post. By educating and inspiring more people to take action, we can update EU copyright law for the 21st century. Over the past few months, everyday internet users have signed our petition and watched our videos to push for copyright reform. Now, we’re sharing copyright reform activities for your very own Maker Party. Want to join in? Maker Party officially kicks off today. Here are activities for your own Maker Party: Be a #cczero Hero In addition to all the amazing live events you can host or attend, we created a way for our global digital community to participate. We’re planning a global contribute-a-thon to unite Mozillians around the world and grow the number of images in the public domain. We want to showcase what the open internet movement is capable of. And we’re making a statement when we do it: public domain content helps the open internet thrive. Check out our #cczero hero event page and instructions on contributing. You should be the owner of the copyright in the work. It can be fun, serious, artistic — whatever you’d like. Get started. 
For more information on how to submit your work to the public domain or to Creative Commons, click here. Post Crimes Mozilla has created an app to highlight the outdated nature of some of the EU’s copyright laws, like the absurdity that photos of public landmarks can be unlawful. Try the Post Crimes [...]
Tue, 11 Oct 2016 22:00:00 +0000
(image) As part of the TechWomen program, Mozilla has had the fortunate opportunity to host five Emerging Leaders over the past month. Estelle Ndedi (Cameroon), Chioma...
Tue, 11 Oct 2016 18:53:22 +0000
If you are interested in helping as either an intern or mentor, please follow the instructions there to make contact.
Even if you can't participate yourself, promoting the topic in a university or any other environment where potential interns will see it makes a big difference to the success of these programs, so please do so if you have the opportunity.
The project could involve anything related to SIP, XMPP, WebRTC or peer-to-peer real-time communication, as long as it emphasizes a specific feature or benefit for the Debian community. If other Outreachy organizations would also like to have a Free RTC project for their community, then this could also be jointly mentored.
Tue, 11 Oct 2016 15:00:00 +0000
Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...
Tue, 11 Oct 2016 14:24:55 +0000
One of the most useful things you can do to help improve the quality of Firefox is to warn developers about regressions and give them all the details that will turn your issue into a useful, actionable bug report. This is actually the raison d’être of Nightly builds: parallelizing the work done by our communities of testers and developers so as to catch regressions as early as possible and provide a feedback loop to developers in the early stages of development. Complementary to Nightly builds, we also have a nifty Python command line tool called mozregression (with an experimental GUI version, but that will be for another post…) that allows a tester to easily find when a regression happened and what code changes are likely to have introduced it. This tool is largely the result of the awesome work of Julien Pagès, William Lachance, Heather Arthur and Mike Ling! How do I install that cool tool? tl;dr for people who do Python: mozregression is a Python package installable via pip, preferably in a virtualenv. The installation steps on the mozregression documentation site are a bit short on details and tell you to install the tool with admin rights (sudo) on Linux/Mac, which is not necessarily a good idea: we don’t want to create conflicts with other Python tools installed on your system that may have different dependencies. I would instead install mozregression in a virtualenv so as to isolate it from your OS. It works the same, avoids breaking something else on your OS, and AFAIK there is no advantage to having mozregression installed as a global package, except of course having it available for all accounts on your system. 
The installation steps in a terminal are these:

sudo apt install python-pip virtualenvwrapper
echo 'export WORKON_HOME=~/.virtualenvs' >> ~/.bashrc
source ~/.bashrc
mkvirtualenv mozreg
pip install mozregression
mozregression --write-config

Here are line-by-line explanations, because understanding is always better than copy-pasting, and you may want to adapt those steps with your own prefere[...]
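Once installed, a first bisection run might look like the sketch below. The dates here are hypothetical examples, not from the post; substitute a date where you know the Nightly was good and one where you know it was bad:

```shell
# Activate the virtualenv created above (path assumed from WORKON_HOME=~/.virtualenvs).
source ~/.virtualenvs/mozreg/bin/activate

# Bisect Nightly builds between a known-good and a known-bad date.
# mozregression downloads each candidate build, runs it, and asks you
# to mark it good or bad until it narrows down the regression range.
mozregression --good 2016-08-01 --bad 2016-10-01
```

At the end of the run, mozregression reports a pushlog range of candidate changesets, which is exactly the detail that turns a vague regression report into an actionable bug.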
Tue, 11 Oct 2016 13:00:00 +0000
Join us for this special Ada Lovelace Day webcast of the Mozilla Curriculum Workshop as we recognize the challenges, work and contributions of women leaders...