Subscribe: Planet Mozilla
http://planet.mozilla.org/rss20.xml
Language: English

Planet Mozilla



Planet Mozilla - http://planet.mozilla.org/



 



Niko Matsakis: Unification in Chalk, part 1

Sat, 25 Mar 2017 04:00:00 +0000

So in my first post on chalk, I mentioned that unification and normalization of associated types were interesting topics. I’m going to write a two-part blog post series covering that. This first part begins with an overview of how ordinary type unification works during compilation. The next post will add in associated types and we can see what kinds of mischief they bring with them.

What is unification?

Let’s start with a brief overview of what unification is. When you are doing type-checking or trait-checking, it often happens that you wind up with types that you don’t know yet. For example, the user might write None – you know that this has type Option<T>, but you don’t know what that type T is. To handle this, the compiler will create a type variable. This basically represents an unknown, to-be-determined type. To denote this, I’ll write Option<?T>, where the leading question mark indicates a variable.

The idea then is that as we go about type-checking we will later find out some constraints that tell us what ?T has to be. For example, imagine that we know that Option<?T> must implement Foo, and we have a trait Foo that is implemented only for Option<String>:

trait Foo { }
impl Foo for Option<String> { }

In order for this impl to apply, it must be the case that the self types are equal, i.e., the same type. (Note that trait matching never considers subtyping.) We write this as a constraint:

Option<?T> = Option<String>

Now you can probably see where this is going. Eventually, we’re going to figure out that ?T must be String. But it’s not immediately obvious – all we see right now is that two Option types have to be equal. In particular, we don’t yet have a simple constraint like ?T = String. To arrive at that, we have to do unification.

Basic unification

So, to restate the previous section in mildly more formal terms, the idea with unification is that we have:

- a bunch of type variables like ?T. We often call these existential type variables because, when you look at things in a logical setting, they arise from asking questions like exists ?T. (Option<?T> = Option<String>) – i.e., does there exist a type ?T that can make Option<?T> equal to Option<String>.1
- a bunch of unification constraints U1..Un like T1 = T2, where T1 and T2 are types. These are equalities that we know have to be true.

We would like to process these unification constraints and get to one of two outcomes:

- the unification cannot be solved (e.g., u32 = i32 just can’t be true);
- we’ve got a substitution (mapping) from type variables to their values (e.g., ?T => String) that makes all of the unification constraints hold.

Let’s start out with a really simple type system where we only have two kinds of types (in particular, we don’t yet have associated types):

T = ?X          // type variables
  | N<T1..Tn>   // "applicative" types

The first kind of type is type variables, as we’ve seen. The second kind of type I am calling “applicative” types, which is really not a great name, but that’s what I called it in chalk for whatever reason. Anyway they correspond to types like Option<T>, Vec<T>, and even types like i32. Here the name N is the name of the type (i.e., Option, Vec, i32) and the type parameters T1...Tn represent the type parameters of the type. Note that there may be zero of them (as is the case for i32, which is kind of “shorthand” for i32<>).

So the idea for unification then is that we start out with an empty substitution S and we have this list of unification constraints U1..Un. We want to pop off the first constraint (U1) and figure out what to do based on what category it falls into.
At each step, we may update our substitution S (i.e., we may figure out the value of a variable). In that case, we’ll replace the variable with its value for all the later steps. Other times, we’ll create new, simpler unification problems.

?X = ?Y – if U equates two variables toget[...]
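To make that loop concrete, here is a minimal, illustrative Python sketch of such a unifier. It is not chalk's code (chalk is written in Rust), it omits details like the occurs check, and it assumes a throwaway representation where variables are strings like "?T" and applicative types are (name, args) tuples.

# Toy unifier for the grammar above: types are either variables ("?T")
# or applicative types like ("Option", [args]). Illustrative only.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def resolve(t, subst):
    # Follow the substitution until we reach an unbound variable or a concrete type.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(constraints, subst=None):
    subst = dict(subst or {})
    work = list(constraints)
    while work:
        a, b = work.pop()
        a, b = resolve(a, subst), resolve(b, subst)
        if a == b:
            continue
        if is_var(a):
            subst[a] = b            # record ?X => value
        elif is_var(b):
            subst[b] = a
        else:
            # Two applicative types: names and arities must match,
            # then unify the arguments pairwise.
            (name_a, args_a), (name_b, args_b) = a, b
            if name_a != name_b or len(args_a) != len(args_b):
                raise ValueError(f"cannot unify {a} with {b}")
            work.extend(zip(args_a, args_b))
    return subst

# Option<?T> = Option<String> gives ?T => String.
print(unify([(("Option", ["?T"]), ("Option", [("String", [])]))]))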



Michael Comella: Introducing ScrollingCardView for iOS

Sat, 25 Mar 2017 00:00:00 +0000

For Project Prox, we were asked to implement a design1 like this: Specifically, this is a card view that:

- Hugs its content, dynamically expanding the height when the content does
- Will scroll its content if the content is taller than the card

After some searching, we discovered that no such widgets existed! Hoping our efforts could be useful to others, we created the ScrollingCardView library.

Usage

ScrollingCardView is used much like any other view. First, create your view, enable autolayout, and add it to the view hierarchy:

let cardView = ScrollingCardView()
cardView.translatesAutoresizingMaskIntoConstraints = false
parentView.addSubview(cardView) // e.g. parent could be the ViewController's

Then constrain the card view as you would any other view:

NSLayoutConstraint.activate([
    cardView.topAnchor.constraint(
        equalTo: topLayoutGuide.bottomAnchor, constant: 16),
    cardView.leadingAnchor.constraint(
        equalTo: view.leadingAnchor, constant: 16),
    cardView.trailingAnchor.constraint(
        equalTo: view.trailingAnchor, constant: -16),

    // If you don't constrain the height, the card
    // will grow to match its intrinsic content size.

    // Or use lessThanOrEqualTo to allow your card
    // view to grow only until a certain size, e.g.
    // the size of the screen.
    cardView.bottomAnchor.constraint(
        lessThanOrEqualTo: bottomLayoutGuide.topAnchor, constant: -16),

    // Or you can constrain it to a particular height:
    // cardView.bottomAnchor.constraint(
    //     equalTo: bottomLayoutGuide.topAnchor, constant: -16),
    // cardView.heightAnchor.constraint(equalToConstant: 300),
])

Finally, specify the card view’s content:

// 3. Set your card view's content.
let content = UILabel()
content.text = "Hello world!"
content.numberOfLines = 0
cardView.contentView = content

The card view comes with smart visual defaults (including a shadow), but you can also customize them:

cardView.backgroundColor = .white
cardView.cornerRadius = 2
cardView.layer.shadowOffset = CGSize(width: 0, height: 2)
cardView.layer.shadowRadius = 2
cardView.layer.shadowOpacity = 0.4

Want it?

ScrollingCardView is available on CocoaPods: you can find installation instructions and the source on GitHub. Questions? Feature requests? File an issue or find us on #mobile.

Notes

1: This particular design was not actually used because we felt we could provide a better user experience if we also moved the card itself, which lets the user fill the screen with the long-form content they were trying to read. Further discussion in prox#372. [...]



Mozilla Addons Blog: Migrating AdBlock for Firefox to WebExtensions

Fri, 24 Mar 2017 11:15:32 +0000

AdBlock for Firefox is a fast and powerful ad blocker with over 40 million users. They are in the process of transitioning to WebExtensions, and have completed the first step of porting their data using Embedded WebExtensions. You can read more about the AdBlock extension here. For more resources on updating your extension, please check out MDN. You can also contact us via these methods.

Please provide a short background on your add-on. What does it do, when was it created, and why was it created?

We created our original Firefox extension in 2014. We had seen some early success on Chrome and Safari and believed we could replicate that success on Firefox, which had developed a good community of users that downloaded add-ons for Firefox. It seemed like a natural place for us to be.

What add-on technologies or APIs were used to build your add-on?

The Firefox Add-on SDK was being promoted at the time, which wasn’t compatible with the Chrome Extension API, so we went through the Chrome code to identify areas where we could leverage work we had done previously. Since the APIs were a little different, we ended up having to modify some modules to use the Firefox Add-on SDK.

Why did you decide to transition your add-on to WebExtensions APIs?

With the Firefox SDK set to be deprecated, we knew our extension would slowly become unusable, so it made sense to transition to the WebExtensions API. The benefit, from our standpoint, was that by using this API our software would be on a similar codebase and have similar features and functionalities to what we do on some of the other browsers we support.

Walk us through the process of how you are making the transition. How was the experience of finding WebExtensions APIs to replace legacy APIs? What are some advantages and limitations?

Last year we ported our Chrome extension to Edge, so when Firefox announced its plans, we had a good idea of what we wanted to do and how to go about it. Also, we were familiar with the WebExtensions API from our years working on Chrome, but we knew we needed to educate ourselves on the differences. Fortunately, the Firefox documentation on the differences was very helpful in that education process. These pages, in particular, were helpful:

https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Comparison_with_the_Add-on_SDK
https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Chrome_incompatibilities

We did run into a few challenges. Chrome allows us to create an alert message or a confirm message from the background page (e.g., “Are you sure you want to do this…”), and Firefox doesn’t allow us to do that. We use that type of messaging in our Chrome extension, so we had to find a workaround, which we were able to do. For us, this impacted our ability to message our users when they were manipulating custom filters within AdBlock, but was not a major issue.

We hope to land Permission capabilities in Firefox 54, and you can read about its implementation progress in the WebExtensions in Firefox 53 blog post.

What, if anything, will be different about your add-on when it becomes a WebExtension? Will you be able to transition with all the features intact?

Anecdotally, the extension appears to be faster, specifically around page load times. But the big advantage, from our perspective, is that we will be able to manage the transition with almost all of our features intact. As a result, we aren’t losing any meaningful functionality of AdBlock, which was our main concern before we embarked upon this transition.
We did notice that a few of the APIs that AdBlock utilizes are not available on Firefox for Android, so we are currently unable to release a new version of AdBlock that supports Firefox for Android. We hope to address this issue in a coming version of AdBlock. We have lots of work planned for Android in upcoming releases, with the goal of making ad blockers possible in Firefox 57.

What advice would you give other legacy add-on developers?

Make sure you have a migration pl[...]



Mozilla Addons Blog: Privacy Features, Tab Tools & Other New WebExtensions

Fri, 24 Mar 2017 00:18:45 +0000

Tabzen is great for tab hoarders.

As of late March, addons.mozilla.org (AMO) has around 2,000 listed add-ons built with WebExtensions APIs, the new cross-browser standard for writing add-ons. Average daily installs for them total more than 5,000,000. That’s nice momentum as we hurtle towards the Firefox 57 release. Volume aside, I continue to be impressed with the quality of content emerging…

Smart HTTPS (revived) is the WebExtensions version of one of my favorite yet simple security add-ons—it changes HTTP addresses to the secure HTTPS. Disconnect for Facebook (WebExtension) is another fine privacy tool that prevents Facebook from tracking your Web movement by blocking all Facebook related requests sent between third-party sites. While we’re talking Facebook, you know what annoys me? Facebook’s “suggested posts” (for some reason I get served a lot of “suggested” content that implies I may have a fatty liver). Kick Facebook Suggested Posts puts an end to that nonsense. History Cleaner conveniently wipes away your browsing history after a set amount of time, while History Zebra features white and black lists to manage specific sites you want to appear (or not) in your browsing history.

Tab Auto Refresh lets you set timed refresh intervals per tab.

Don’t Touch My Tabs! protects against hyperlinks that try to hijack your previous tab. In other words, when you typically click a link, it grants the new page control over the one you clicked from, which is maybe not-so awesome for a number of reasons, like reloading the page with intrusive ads, or hackers throwing up a fake login to phish your info. Speaking of tabs, I dig Tab Auto Refresh because I follow a couple of breaking news sites, so with this add-on I set certain tabs to refresh at specific time intervals, then I just click over every so often to catch the latest. Tabzen is another dynamic tab tool that treats tab management in a familiar bookmarking fashion.

Turn any Reddit page into literally the darkest corner of the Web with dark page themer Reddit Slate Night 2.0 (while we’re on the Reddit tip, this comment collapser presents a compelling alternate layout for perusing conversations). Dark Mode turns the entire internet goth. Link Cleaner removes extraneous guck from your URLs, including tracking parameters from commerce sites Amazon and AliExpress. These are awesome add-ons from very creative developers! It’s great to see such diverse, interesting WebExtensions crop up.

If you’re a developer of a legacy add-on or Chrome extension and want more info about the WebExtensions porting process, this should help. Or if you’re interested in writing a WebExtension from scratch, check this out.

The post Privacy Features, Tab Tools & Other New WebExtensions appeared first on Mozilla Add-ons Blog.



Kent James: Caspia Projects and Thunderbird – Open Source In Absentia

Thu, 23 Mar 2017 19:20:41 +0000

Clallam Bay is located among various Native American tribes where the Thunderbird is an important cultural symbol.

I’m recycling an old trademark that I’ve used, Caspia, to describe my projects to involve Washington State prisoners in open-source projects. After an afternoon of brainstorming, Caspia is a new acronym: “Creating Accomplished Software Professionals In Absentia”. What does this have to do with Thunderbird?

I sat in a room a few weeks ago with 10 guys at Clallam Bay, all of whom have been in a full-time, intensive software training program for about a year, who are really interested in trying to do real-world projects rather than simply hidden internal projects that are classroom assignments, or personal projects with no public outlet. I start in April spending two days per week with these guys. Then there are another 10 or so guys at WSR in Monroe that started last month, though the situation there is more complex. The situation is similar to other groups of students that might be able to work on Thunderbird or Mozilla projects, with these differences:

1) Student or GSOC projects tend to have a duration of a few months, while the expected commitment time for this group is much longer.
2) Communication is extremely difficult. There is no internet access. Any communication of code or comments is accomplished through sneakernet options. It is easier to get things like software artifacts in rather than bring them out. The internal issues of allowing this to proceed at all are tenuous at both facilities, though we are further along at Clallam Bay.
3) Given the men’s situation, they are very sensitive to their ability to accumulate both publicly accessible records of their work, and personal recommendations of their skill. Similarly, they want marketable skills.
4) They have a mentor (me) that is heavily engaged in the Thunderbird/Mozilla world.

Because they are for the most part not hobbyists trying to scratch an itch, but rather people desperate to find a pathway to success in the future, I feel a very large responsibility to steer them in the direction of projects that would demonstrate skills that are likely to be marketable, and provide visibility that would be easily accessible to possible future employers. Fixing obscure regressions in legacy Thunderbird code, with contributions tracked only in hg.mozilla.org and BMO, does not really fit that very well. For those reasons, I have a strong bias in favor of projects that 1) involve skills usable outside the narrow range of the Mozilla platform, and 2) can be tracked on GitHub.

I’ve already mentioned one project that we are looking at, which is the broad category of Contact manager. This is the primary focus of the group at WSR in Monroe. For the group at Clallam Bay, I am leaning toward focusing on the XUL->HTML conversion issue. Again I would look at this more broadly than just the issues in Thunderbird, perhaps developing a library of Web Components that emulate XUL functionality, which can be used both to easily migrate existing XUL to HTML and as a separate library for desktop-focused web applications. This is one of the triad of platform conversions that Thunderbird needs to do (the others being C++->JavaScript, and XPCOM->SomethingElse). I can see that if the technical directions I am looking at turn out to work with Thunderbird, it will mean some big changes. These projects will mostly be done using GitHub repos, so we would need to improve our ability to work with external libraries.
(We already do that with JsMime but poorly). The momentum in the JS world these days, unfortunately, is with Node and Chrome V8. That is going to cause a lot of grief as we try to co-exist with Node/V8 and Gecko. I could also see large parts of our existing core functionality (such as the IMAP backend) migrated to a third-party library. Our progress will be very slow at first as we undergo internal training, but I think these groups could star[...]



Wladimir Palant: LastPass: Security done wrong

Thu, 23 Mar 2017 16:37:56 +0000

Disclaimer: I am the author of Easy Passwords, which is also a password manager and could be considered a LastPass competitor in the widest sense.

Six months ago I wrote a detailed analysis of LastPass security architecture. In particular, I wrote:

So much for the general architecture, it has its weak spots but all in all it is pretty solid and your passwords are unlikely to be compromised at this level. However, as described in my blog post the browser integration turned out to be a massive weakness. The LastPass extension on your computer works with decrypted data, so it needs to be extra careful – and at the moment it isn’t.

I went on to point out Auto Fill functionality and internal messaging as the main weak spots of the LastPass browser extensions. And what do I read in the news today? Google researcher Tavis Ormandy found two security vulnerabilities in LastPass. In which areas? Well, Auto Fill and internal messaging of course.

Now I could congratulate myself on a successful analysis of course, but predicting these reports wasn’t really a big feat. See, I checked out LastPass after reports about two security vulnerabilities had been published last August. I looked into what those vulnerabilities were and how they had been resolved. And I promptly found that the issues hadn’t been resolved completely. Six months later Tavis Ormandy appears to have done the same and… well, you can still find ways to exploit the same old issues.

Altogether it looks like LastPass is a lot better at PR than they are at security. Yes, that’s harsh but this is what I’ve seen so far. In particular, security vulnerabilities have been addressed narrowly: only the exact scenario reported has been tested by the developers. This time LastPass has driven it to an extreme by fixing a critical bug in their Chrome extension and announcing the fix even though the exact same exploit was working against their Firefox extension as well. But also with the bugs I reported previously nobody seemed to have an interest in going through the code base looking for other instances of the same issue, let alone taking obvious measures to harden the code against similar attacks or reconsidering the overall approach.

In addition to that, LastPass is very insistently downplaying the impact of the vulnerabilities. For example, an issue where I couldn’t provide an exploit (hey, I’m not even a user of the product, I don’t know it too well) was deemed not a vulnerability — Tavis Ormandy has now demonstrated that it is exploitable after all. On other occasions LastPass only admitted what the proof of concept exploit was doing, e.g. removing passwords in case of the vulnerability published by Tavis Ormandy in August last year. The LastPass developers should have known however that the messaging interface he hijacked could do far more than that. This might be the reason why this time Tavis Ormandy shows how you can run arbitrary applications through LastPass, it’s hard to deny that the issue is really, really bad. So this time their announcement says:

Our investigation to date has not indicated that any sensitive user data was lost or compromised
No site credential passwords need to be changed

Sure it didn’t — because compromising clients this way doesn’t require access to LastPass servers. So even if black hats found this vulnerability years ago and are abusing it on a large scale, LastPass wouldn’t be likely to know. This should really have been: We messed up and we don’t know whether your passwords are compromised as a result.
You should probably change them now, just to be sure. [...]



Air Mozilla: Reps Weekly Meeting Mar. 23, 2017

Thu, 23 Mar 2017 16:00:00 +0000

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.




Axel Hecht: Can’t you graph that graph

Thu, 23 Mar 2017 14:01:00 +0000

I’m going to just recreate blame, he said. It’s going to be easy, he said. We have a project to migrate the localization of Firefox to one repository for all channels, nick-named cross-channel, or x-channel in short. The plan is to create one repository that holds all the en-US strings we need for Firefox and friends on all channels. One repository to rule them all, if you wish. So you need to get the contents of mozilla-central, comm-central, *-aurora, *-beta, *-release, and also some of *-esr?? together in one repository, with, say, one toolkit/chrome/global/customizeToolbar.dtd file that has all the strings that are used by any of the apps on any branch. We do have some experience with merging the content of localization files as part of l10n-merge which is run at Firefox build time. So this shouldn’t be too hard, right? Enter version control, and the fact that quite a few of our localizers are actually following the development of Firefox upstream, patch by patch. That they’re trying to find the original bug if there’s an issue or a question. So, it’d be nice to have the history and blame in the resulting repository reflect what’s going on in mozilla-central and its dozen siblings. Can’t we just hg convert and be done with it? Sadly, that only converts one DAG into another hg DAG, and we have a dozen. We have a dozen heads, and we want a single head in the resulting repository. Thus, I’m working on creating that repository. One side of the task is to update that target repository as we see updates to our 12 original heads. I’m pretty close to that one. The other task is to create a good starting point. Or, good enough. Maybe if we could just create a repo that had the same blame as we have right now? Like, not the hex or integer revisions, but annotate to the right commit message etc? That’s easy, right? Well, I thought it was, and now I’m learning. To understand the challenges here, one needs to understand the data we’re throwing at any algorithm we write, and the mercurial code that creates the actual repository. As of FIREFOX_AURORA_45_BASE, just the blame for the localized files for Firefox and Firefox for Android includes 2597 hg revisions. And that’s not even getting CVS history, but just what’s in our usual hg repository. Also, not including comm-central in that number. If that history was linear, things would probably be pretty easy. At least, I blame the problems I see in blame on things not being linear. So, how non-linear is that history. The first attempt is to look at the revision set with hg log -G -r .... . That creates a graph where the maximum number of parents of a single changeset is at 1465. Yikes. We can’t replay that history in the target repository, as hg commits can only have 2 parents. Also, that’s clearly not real, we’ve never had that many parallel threads of development. Looking at the underlying mercurial code, it’s showing all reachable roots as parents of a changeset, if you have a sparse graph. That is, it gives you all possible connections through the underlying full graph to the nodes in your selection. But that’s not what we’re interested in. We’re interested in the graph of just our nodes, going just through our nodes. In a first step, I wrote code that removes all grandchildren from our parents. That reduces the maximum number of parents to 26. Much better, but still bad. At least it’s at a size where I can start to use graphviz to create actual visuals to inspect and analyze. Yes, I can graph that graph. 
The resulting graph has a few features that are actually already real. mozilla-central has multiple roots. One is the initial hg import of the Firefox code. Another is including Firefox for Android in mozilla-central, which used to be an independent repository. Yet another is the merge of services/sync. And then I have two heads, which isn’t mu[...]
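A reduction like the one described above (dropping candidate parents that are already reachable through another candidate) can be sketched in a few lines of Python. This is only an illustration of the idea, not the actual cross-channel tooling, and it assumes an ancestors() helper that yields every ancestor of a node within the sub-graph being considered.

# Keep only the "nearest" candidate parents: any candidate that is an
# ancestor of another candidate is redundant, because the child already
# reaches it through that other candidate.
def reduce_parents(candidates, ancestors):
    reachable = set()
    for node in candidates:
        reachable.update(ancestors(node))
    return [node for node in candidates if node not in reachable]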



Mike Hommey: Why is the git-cinnabar master branch slower to clone?

Thu, 23 Mar 2017 07:38:05 +0000

Apart from the memory considerations, one thing in the data presented in the “When the memory allocator works against you” post that I haven’t touched in the followup posts is that there is a large difference in the time it takes to clone mozilla-central with git-cinnabar 0.4.0 vs. the master branch. One thing that was mentioned in the first followup is that reducing the amount of realloc and substring copies made the cloning more than 15 minutes faster on master. But the same code exists in 0.4.0, so this isn’t part of the difference. So what’s going on? Looking at the CPU usage during the clone is enlightening. On 0.4.0: On master: (Note: the data gathering is flawed in some ways, which explains why the git-remote-hg process goes above 100%, which is not possible for this python process. The data is however good enough for the high level analysis that follows, so I didn’t bother to get something more accurate)

On 0.4.0, the git-cinnabar-helper process was saturating one CPU core during the File import phase, and the git-remote-hg process was saturating one CPU core during the Manifest import phase. Overall, the sum of both processes usually used more than one and a half core. On master, however, the total of both processes barely uses more than one CPU core. What happened?

This and that happened. Essentially, before those changes, git-remote-hg would send instructions to git-fast-import (technically, git-cinnabar-helper, but in this case it’s only used as a wrapper for git-fast-import), and use marks to track the git objects that git-fast-import created. After those changes, git-remote-hg asks git-fast-import the git object SHA1 of objects it just asked to be created. In other words, those changes replaced something asynchronous with something synchronous: while it used to be possible for git-remote-hg to work on the next file/manifest/changeset while git-fast-import was working on the previous one, it now waits. The changes helped simplify the python code, but made the overall clone process much slower.

If I’m not mistaken, the only real use for that information is for the mapping of mercurial to git SHA1s, which is actually rarely used during the clone, except at the end, when storing it. So what I’m planning to do is to move that mapping to the git-cinnabar-helper process, which, incidentally, will kill not 2, but 3 birds with 1 stone:

- It will restore the asynchronicity, obviously (at least, that’s the expected main outcome).
- Storing the mapping in the git-cinnabar-helper process is very likely to take less memory than what it currently takes in the git-remote-hg process. Even if it doesn’t (which I doubt), that should still help stay under the 2GB limit of 32-bit processes.
- The whole thing that spikes memory usage during the finalization phase, as seen in previous post, will just go away, because the git-cinnabar-helper process will just have prepared the git notes-like tree on its own.

So expect git-cinnabar 0.5 to get moar faster, and to use moar less memory.
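To make the asynchronous-versus-synchronous distinction concrete, here is a schematic Python sketch of the two shapes of interaction. It is not git-fast-import's actual protocol or git-cinnabar's code; the backend and rev objects are placeholders invented for the illustration.

# "backend" stands in for the git-cinnabar-helper/git-fast-import side.

def import_synchronously(revisions, backend):
    mapping = {}
    for rev in revisions:
        backend.send_object(rev.data)
        # Blocks until the backend answers, so the frontend and the
        # backend never work at the same time.
        mapping[rev.hg_sha1] = backend.query_git_sha1()
    return mapping

def import_with_marks(revisions, backend):
    marks = {}
    for mark, rev in enumerate(revisions, start=1):
        # Fire and forget: tag the object with a client-chosen mark and
        # keep going while the backend processes earlier objects.
        backend.send_object(rev.data, mark=mark)
        marks[rev.hg_sha1] = mark
    # Resolve marks to git SHA1s once, at the end.
    return {hg: backend.resolve_mark(m) for hg, m in marks.items()}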



Mike Hommey: Analyzing git-cinnabar memory use

Thu, 23 Mar 2017 04:30:26 +0000

In previous post, I was looking at the allocations git-cinnabar makes. While I had the data, I figured I’d also look how the memory use correlates with expectations based on repository data, to put things in perspective. As a reminder, this is what the allocations look like (horizontal axis being the number of allocator function calls): There are 7 different phases happening during a git clone using git-cinnabar, most of which can easily be identified on the graph above: Negotiation. During this phase, git-cinnabar talks to the mercurial server to determine what needs to be pulled. Once that is done, a getbundle request is emitted, which response is read in the next three phases. This phase is essentially invisible on the graph. Reading changeset data. The first thing that a mercurial server sends in the response for a getbundle request is changesets. They are sent in the RevChunk format. Translated to git, they become commit objects. But to create commit objects, we need the entire corresponding trees and files (blobs), which we don’t have yet. So we keep this data in memory. In the git clone analyzed here, there are 345643 changesets loaded in memory. Their raw size in RawChunk format is 237MB. I think by the end of this phase, we made 20 million allocator calls, have about 300MB of live data in about 840k allocations. (No certainty because I don’t actually have definite data that would allow to correlate between the phases and allocator calls, and the memory usage change between this phase and next is not as clear-cut as with other phases). This puts us at less than 3 live allocations per changeset, with “only” about 60MB overhead over the raw data. Reading manifest data. In the stream we receive, manifests follow changesets. Each changeset points to one manifest ; several changesets can point to the same manifest. Manifests describe the content of the entire source code tree in a similar manner as git trees, except they are flat (there’s one manifest for the entire tree, where git trees would reference other git trees for sub directories). And like git trees, they only map file paths to file SHA1s. The way they are currently stored by git-cinnabar (which is planned to change) requires knowing the corresponding git SHA1s for those files, and we haven’t got those yet, so again, we keep everything in memory. In the git clone analyzed here, there are 345398 manifests loaded in memory. Their raw size in RawChunk format is 1.18GB. By the end of this phase, we made 23 million more allocator calls, and have about 1.52GB of live data in about 1.86M allocations. We’re still at less than 3 live allocations for each object (changeset or manifest) we’re keeping in memory, and barely over 100MB of overhead over the raw data, which, on average puts the overhead at 150 bytes per object. The three phases so far are relatively fast and account for a small part of the overall process, so they don’t appear clear-cut to each other, and don’t take much space on the graph. Reading and Importing files. After the manifests, we finally get files data, grouped by path, such that we get all the file revisions of e.g. .cargo/.gitignore, followed by all the file revisions of .cargo/config.in, .clang-format, and so on. The data here doesn’t depend on anything else, so we can finally directly import the data. This means that for each revision, we actually expand the RawChunk into the full file data (RawChunks contain patches against a previous revision), and don’t keep the RawChunk around. 
We also don’t keep the full data after it was sent to the git-cinnabar-helper process (as far as cloning is concerned, it’s essentially a wrapper for git-fast-import), except for the previous revision of the file, which is likely the patch base for the next revision. We however keep in[...]



Air Mozilla: March Privacy Lab: Cryptographic Engineering for Everyone

Thu, 23 Mar 2017 01:00:00 +0000

Our March speaker is Justin Troutman, creator of PocketBlock - a visual, gamified curriculum that makes cryptographic engineering fun. It's suitable for everyone from an...




Air Mozilla: March Privacy Lab: Cryptographic Engineering for Everyone 3.22.17

Thu, 23 Mar 2017 01:00:00 +0000

Our March speaker is Justin Troutman, creator of PocketBlock - a visual, gamified curriculum that makes cryptographic engineering fun. It's suitable for everyone from an...




Emma Humphries: Linking to GitHub issues from bugzilla.mozilla.org

Wed, 22 Mar 2017 23:21:09 +0000

I wanted to draw your attention to a lovely, new feature in BMO which went out with this week's push: auto-linking to GitHub issues.

Now, in a Bugzilla bug's comments, if you reference a GitHub issue, such as mozilla-bteam/bmo#26, Bugzilla converts that to a link to the issue on GitHub.


This will save you some typing in the future, and if you used this format in earlier comments, they'll be linkified as well.

Thanks to Sebastin Santy for his patch, and Xidorn Quan for the suggestion and code review.

If you come across a false positive, please file a bug against bugzilla.mozilla.org::General.

The original bug: 1309112 - Detect and linkify GitHub issue in comment
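For illustration, the pattern being detected is roughly org/repo#number. The small Python sketch below shows that kind of auto-linking; it is not BMO's actual implementation (Bugzilla is written in Perl), just a demonstration of the idea.

import re

# Match "org/repo#123" style references.
GITHUB_ISSUE = re.compile(r"\b([\w.-]+)/([\w.-]+)#(\d+)\b")

def linkify(comment):
    # Turn each reference into a link to the issue on GitHub.
    return GITHUB_ISSUE.sub(
        r'<a href="https://github.com/\1/\2/issues/\3">\1/\2#\3</a>',
        comment)

print(linkify("See mozilla-bteam/bmo#26 for details."))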






Tarek Ziadé: Load Testing at Mozilla

Wed, 22 Mar 2017 23:00:00 +0000

After a stabilization phase, I am happy to announce that Molotov 1.0 has been released! (Logo by Juan Pablo Bravo) This release is an excellent opportunity to explain a little bit how we do load testing at Mozilla, and what we're planning to do in 2017 to improve the process. I am talking here specifically about load testing our HTTP services; when this blog post mentions what Mozilla is doing there, it refers mainly to the Mozilla QA team, helped by the Services developers team that works on some of our web services.

What's Molotov?
Molotov is a simple load testing tool

Molotov is a minimalist load testing tool you can use to load test an HTTP API using Python. Molotov leverages Python 3.5+ asyncio and uses aiohttp to send some HTTP requests. Writing load tests with Molotov is done by decorating asynchronous Python functions with the @scenario decorator:

from molotov import scenario

@scenario(100)
async def my_test(session):
    async with session.get('http://localhost:8080') as resp:
        assert resp.status == 200

When this script is executed with the molotov command, the my_test function is going to be repeatedly called to perform the load test. Molotov tries to be as transparent as possible and just hands over session objects from the aiohttp.client module. The full documentation is here: http://molotov.readthedocs.io

Using Molotov is the first step to load test our services. From our laptops, we can run that script and hammer a service to make sure it can hold some minimal load.

What Molotov is not
Molotov is not a fully-featured load testing solution

Load testing applications usually come with high-level features to understand how the tested app is performing. Things like performance metrics are displayed when you run a test, like what Apache Bench does by displaying how many requests it was able to perform and their average response time. But when you are testing web services stacks, the metrics you are going to collect from each client attacking your service will include a lot of variation because of the network and clients' CPU overhead. In other words, you cannot guarantee reproducibility from one test to the other to track precisely how your app evolves over time. Adding metrics directly in the tested application itself is much more reliable, and that's what we're doing these days at Mozilla. That's also why I have not included any client-side metrics in Molotov, besides a very simple StatsD integration. When we run Molotov at Mozilla, we mostly watch our centralized metrics dashboards and see how the tested app behaves regarding CPU, RAM, Requests-Per-Second, etc.

Of course, running a load test from a laptop is less than ideal. We want to avoid the hassle of asking people to install Molotov & all the dependencies a test requires everytime they want to load test a deployment -- and run something from their desktop. Doing load tests occasionally from your laptop is fine, but it's not a sustainable process. And even though a single laptop can generate a lot of load (in one project, we're generating around 30k requests per second from one laptop, and happily killing the service), we also want to do some distributed load. We want to run Molotov from the cloud. And that's what we do, thanks to Docker and Loads.

Molotov & Docker

Since running the Molotov command mostly consists of using the right command-line options and passing a test script, we've added in Molotov a second command-line utility called moloslave.
Moloslave takes the URL of a git repository and will clone it and run the molotov test that's in it by reading a configuration file. The configuration file is a simple JSON file that needs to be at the root of the repo, like how you woul[...]
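Going back to the @scenario example earlier in the post: one test module can declare several scenarios and let molotov distribute the generated load according to their weights. A minimal sketch follows, building on the decorator form shown above; the weights, URLs and payload here are made up for illustration.

from molotov import scenario

@scenario(80)
async def read_homepage(session):
    # Issued for roughly 80% of the generated requests.
    async with session.get('http://localhost:8080/') as resp:
        assert resp.status == 200

@scenario(20)
async def post_data(session):
    # Issued for roughly 20% of the generated requests.
    async with session.post('http://localhost:8080/data',
                            json={'ok': True}) as resp:
        assert resp.status == 200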



Air Mozilla: Bugzilla Project Meeting, 22 Mar 2017

Wed, 22 Mar 2017 21:00:00 +0000

The Bugzilla Project developers meeting.




Air Mozilla: Weekly SUMO Community Meeting Mar. 22, 2017

Wed, 22 Mar 2017 16:00:00 +0000

This is the sumo weekly call




Mike Hommey: When the memory allocator works against you, part 2

Wed, 22 Mar 2017 06:57:46 +0000

This is a followup to the “When the memory allocator works against you” post from a few days ago. You may want to read that one first if you haven’t, and come back. In case you don’t or didn’t read it, it was all about memory consumption during a git clone of the mozilla-central mercurial repository using git-cinnabar, and how the glibc memory allocator is using more than one would expect. This post is going to explore how/why it’s happening. I happen to have written a basic memory allocation logger for Firefox, so I used it to log all the allocations happening during a git clone exhibiting the runaway memory increase behavior (using a python that doesn’t use its own allocator for small allocations). The result was a 6.5 GB log file (compressed with zstd ; 125 GB uncompressed!) with 2.7 billion calls to malloc, calloc, free, and realloc, recorded across (mostly) 2 processes (the python git-remote-hg process and the native git-cinnabar-helper process ; there are other short-lived processes involved, but they do less than 5000 calls in total). The vast majority of those 2.7 billion calls is done by the python git-remote-hg process: 2.34 billion calls. We’ll only focus on this process. Replaying those 2.34 billion calls with a program that reads the log allowed to reproduce the runaway memory increase behavior to some extent. I went an extra mile and modified glibc’s realloc code in memory so it doesn’t call memcpy, to make things faster. I also ran under setarch x86_64 -R to disable ASLR for reproducible results (two consecutive runs return the exact same numbers, which doesn’t happen with ASLR enabled). I also modified the program to report the number of live allocations (allocations that haven’t been freed yet), and the cumulated size of the actually requested allocations (that is, the sum of all the sizes given to malloc, calloc, and realloc calls for live allocations, as opposed to what the memory allocator really allocated, which can be more, per malloc_usable_size). RSS was not tracked because the allocations are never filled to make things faster, such that pages for large allocations are never dirty, and RSS doesn’t grow as much because of that. Full disclosure: it turns out the “system bytes” and “in-use bytes” numbers I had been collecting in the previous post were smaller than what they should have been, and were excluding memory that the glibc memory allocator would have mmap()ed. That however doesn’t affect the trends that had been witnessed. The data below is corrected. (Note that in the graph above and the graphs that follow, the horizontal axis represents the number of allocator function calls performed) While I was here, I figured I’d check how mozjemalloc performs, and it has a better behavior (although it has more overhead). What doesn’t appear on this graph, though, is that mozjemalloc also tells the OS to drop some pages even if it keeps them mapped (madvise(MADV_DONTNEED)), so in practice, it is possible the actual RSS decreases too. And jemalloc 4.5: (It looks like it has better memory usage than mozjemalloc for this use case, but its stats are being thrown off at some point, I’ll have to investigate) Going back to the first graph, let’s get a closer look at what the allocations look like when the “system bytes” number is increasing a lot. The highlights in the following graphs indicate the range the next graph will be showing. 
So what we have here is a bunch of small allocations (small enough that they don’t seem to move the “requested” line ; most under 512 bytes, so under normal circumstances, they would be a[...]



Mark Côté: Conduit's Commit Index

Tue, 21 Mar 2017 20:41:29 +0000

As with MozReview, Conduit is being designed to operate on changesets. Since the end result of work on a codebase is a changeset, it makes sense to start the process with one, so all the necessary metadata (author, message, repository, etc.) are provided from the beginning. You can always get a plain diff from a changeset, but you can’t get a changeset from a plain diff. Similarly, we’re keeping the concept of a logical series of changesets. This encourages splitting up a unit of work into incremental changes, which are easier to review and to test than large patches that do many things at the same time. For more on the benefits of working with small changesets, a few random articles are Ship Small Diffs, Micro Commits, and Large Diffs Are Hurting Your Ability To Ship. In MozReview, we used the term commit series to refer to a set of one or more changesets that build up to a solution. This term is a bit confusing, since the series itself can have multiple revisions, so you end up with a series of revisions of a series of changesets. For Conduit, we decided to use the term topic instead of commit series, since the commits in a single series are generally related in some way. We’re using the term iteration to refer to each update of a topic. Hence, a solution ends up being one or more iterations on a particular topic. Note that the number of changesets can vary from iteration to iteration in a single topic, if the author decides to either further split up work or to coalesce changesets that are tightly related. Also note that naming is hard, and we’re not completely satisfied with “topic” and “iteration”, so we may change the terminology if we come up with anything better. As I noted in my last post, we’re working on the push-to-review part of Conduit, the entrance to what we sometimes call the commit pipeline. However, technically “push-to-review” isn’t accurate, as the first process after pushing might be sending changesets to Try for testing, or static analysis to get quick automated feedback on formatting, syntax, or other problems that don’t require a human to look at the code. So instead of review repository, which we’ve used in MozReview, we’re calling it a staging repository in the Conduit world. Along with the staging repository is the first service we’re building, the commit index. This service holds the metadata that binds changesets in the staging repo to iterations of topics. Eventually, it will also hold information about how changesets moved through the pipeline: where and when they were landed, if and when they were backed out, and when they were uplifted into release branches. Unfortunately a simple “push” command, whether from Mercurial or from Git, does not provide enough information to update the commit index. The main problem is that not all of the changesets the author specifies for pushing may actually be sent. For example, I have three changesets, A, B, and C, and pushed them up previously. I then update C to make C′ and push again. Despite all three being in the “draft” phase (which is how we differentiate work in progress from changes that have landed in the mainline repository), only C′ will actually be sent to the staging repo, since A and B already exist there. Thus, we need a Mercurial or Git client extension, or a separate command-line tool, to tell the commit index exactly what changesets are part of the iteration we’re pushing up—in this example, A, B, and C′. 
When it receives this information, the commit index creates a new topic, if necessary, and a new iteration in that topic,[...]
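As a rough illustration (not Conduit's actual schema), the metadata the commit index might record for the A, B, C example above could look something like this, with the second iteration replacing C with C':

# One topic, two iterations; the client tool reports the full changeset
# list each time, even though only C' is new on the staging repository.
iteration_1 = {
    "topic": 1,
    "iteration": 1,
    "changesets": ["A", "B", "C"],
}
iteration_2 = {
    "topic": 1,
    "iteration": 2,
    "changesets": ["A", "B", "C'"],
}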



Air Mozilla: Rust Libs Meeting 2017-03-21

Tue, 21 Mar 2017 20:00:00 +0000

Rust Libs Meeting 2017-03-21




Support.Mozilla.Org: Guest post: “That Bug about Mobile Bookmarks”

Tue, 21 Mar 2017 17:10:09 +0000

Hi, SUMO Nation! Time for a guest blog post by Seburo – one of our “regulars”, who wanted to share a very personal story about Firefox with all of you. He originally posted in on Mozilla’s Discourse, but the more people it reaches, the better. Thank you for sharing, Seburo! (As always, if you want to post something to our blog about your Mozilla and/or SUMO adventures and experiences, let us know.) Here we go…   As a Mozillian I like to set myself goals and targets. It helps me to plan what I would like to do and to ensure that I am constantly focusing on activities that help Mozilla as well as maintain a level of contribution. But under these “public” goals are a number of things that are more long term, that are possible and have been done by many Mozillians, but for me just seem a little out of reach. If you were to see the list, it may seem a little odd and possibly a little egotistical, even laughable, but however impossible some of them are, they serve as a reminder of what I may be able to achieve. This blog entry is about me achieving one of them… In the time leading up to the London All-Hands, I had been invited by a fellow SUMO contributor to attend a breakfast meeting to learn more about the plans around Nightly. This clashed with another breakfast meeting between SUMO and Sync to continue to work to improve our support for this great and useful feature of Firefox. Not wanting to upset anyone, I went with the first invite, but hoped to catch up with members of the Sync team during the week. Having spent the morning better understanding how SUMO fits into the larger corporate structure, I made use of the open time in the schedule to visit the Firefox Homeroom which was based in a basement meeting room, home for the week to all the alchemists and magicians that bring Mozilla software to life. It was on the way back up the stairs that I bumped into Mark from the Firefox Desktop team. Expecting to arrange some time for later in the week, Mark was free to have a chat there and then. Sync is straightforward when used to connect desktop and mobile versions of Firefox but I wanted to better understand how it would work if a third device was included. It was at the end of the conversation that one of us mentioned about how the bookmarks coming to desktop Firefox could be seen in the Mobile Bookmarks folder in the bookmark drop down menus. But it is not there, which can make it look like your bookmarks have disappeared. Sure, you can open the bookmark library, but this is extra mouse clicks to open a separate tool. Mark suggested that this could be easy to fix and that I should file a bug, a task that duly went in the list of things to do on returning from the week. A key goal for contributors at an All-Hands is to come back with a number of ways to build upon your ability to contribute in the future and I came back with a long list that took time to work through. The bug was also delayed in filing due to natural pessimism about its chances of success. But I realised…what if we all thought like that? All things that we have done started with someone having an idea that was put forward knowing that other ideas had failed, but they still went ahead regardless. So I wrote a bug and submitted it and nothing much happened. But after a while there was a spark of activity. Thom from the Sync team had decided to resolve it and seemed to fully understand how this could work. The bug was assigned various flags and it soon became clear to me that work was being done on it. 
Not having any coding abili[...]



QMO: Firefox 53 Beta 3 Testday Results

Tue, 21 Mar 2017 17:08:11 +0000

Hello Mozillians!

As you may already know, last Friday – March 17th – we held a new Testday event for Firefox 53 Beta 3.

Thank you all for helping us make Mozilla a better place – Iryna Thompsn, Surentharan and Suren, Jeremy Lam and jaustinlam.

From Bangladesh team: Nazir Ahmed Sabbir | NaSb, Rezaul Huque Nayeem, Md.Majedul islam, Rezwana Islam Ria, Maruf Rahman, Aminul Islam Alvi | AiAlvi, Sayed Mahmud, Mohammad Mosfiqur Rahman, Ridwan, Tanvir Rahman, Anmona Mamun Monisha, Jaber Rahman, Amir Hossain Rhidoy, Ahmed Safa, Humayra Khanum, Sajal Ahmed, Roman Syed, Md Rakibul Islam, Kazi Nuzhat Tasnem, Md. Almas Hossain, Md. Asif Mahmud Apon, Syeda Tanjina Hasan, Saima Sharleen, Nusrat jahan, Sajedul Islam, আল-যুনায়েদ ইসলাম ব্রোহী, Forhad Hossain and Toki Yasir.

From India team: Guna / Skrillex, Subhrajyoti Sen / subhrajyotisen, Pavithra R, Nagaraj.V, karthimdav7, AbiramiSD/@Teens27075637, subash M, Monesh B, Kavipriya.A, Vibhanshu Chaudhary | vibhanshuchaudhary, R.KRITHIKA SOWBARNIKA, HARITHA KAMARAJ and VIGNESH B S.

Results:

– several test cases executed for the WebM Alpha, Compact Themes and Estimated Reading Time features.

– 2 bugs verified: 1324171, 1321472.

– 2 new bugs filed: 1348347, 1348483.

Again thanks for another successful testday!

We hope to see you all in our next events; all the details will be posted on QMO!




Robert O'Callahan: Deterministic Hardware Performance Counters And Information Leaks

Tue, 21 Mar 2017 08:38:07 +0000

Summary: Deterministic hardware performance counters cannot leak information between tasks, and more importantly, virtualized guests.

rr relies on hardware performance counters to help measure application progress, to determine when to inject asynchronous events such as signal delivery and context switches. rr can only use counters that are deterministic, i.e., executing a particular sequence of application instructions always increases the counter value by the same amount. For example rr uses the "retired conditional branches" (RCB) counter, which always returns exactly the number of conditional branches actually retired.

rr currently doesn't work in environments such as Amazon's cloud, where hardware performance counters are not available to virtualized guests. Virtualizing hardware counters is technically possible (e.g. rr works well in Digital Ocean's KVM guests), but for some counters there is a risk of leaking information about other guests, and that's probably one reason other providers haven't enabled them.

However, if a counter's value can be influenced by the behavior of other guests, then by definition it is not deterministic in the sense above, and therefore it is useless to rr! In particular, because the RCB counter is deterministic ("proven" by a lot of testing), we know it does not leak information between guests.

I wish Intel would identify a set of counters that are deterministic, or at least free of cross-guest information leaks, and Amazon and other cloud providers would enable virtualization of them.




This Week In Rust: This Week in Rust 174

Tue, 21 Mar 2017 04:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR. Updates from Rust Community News & Blog Posts Announcing Rust 1.16. A gentle introduction to Rust. Series of tutorials to get you started with Rust. Getting started with Piston, a game library for Rust. Math with distances in Rust: safety and correctness across units. Rendering vector map tiles (Rust + asm demo). ZeroMQ communication between Python and Rust. Announcing the tokio-io crate. VSCode adds support for ripgrep in latest nightly. ripgrep is a line oriented search tool written in Rust. [video] Jeremy Soller, founder of Redox OS - interview. [video] Rust game demo - Box crash. Source code. This week in Rust docs 48. This week in Servo 95. Crate of the Week We don't have a Crate of this Week for lack of suggestions. Sorry. Submit your suggestions and votes for next week! Call for Participation Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information. Hey crate authors, please start testing your code on Rust's beta branch to find regressions. The Underhanded Rust Contest. [easy] rustup: Installation failure via the script has bad error message. rustup: Build with panic=abort. [easy] rustup: Improve indentation of help. [easy] rustup: Document the usage of CARGO_HOME and RUSTUP_HOME to install to a custom location. [easy] rustup: Document the use of toolchain link. [easy] rustup: "update not yet available" message should not error. [easy] rustup: Replace custom download crate with reqwest. bitflags: Hide @_impl implementation detail from the bitflags! rustdoc. [easy] bitflags: Empty bitflags has unhelpful Debug representation. [easy] bitflags: "const" items are followed by a semicolon. [easy] bitflags: Mention Default trait in the docs. [easy] bitflags: Move docs to the crate level. [easy] bitflags: Add CI badges to Cargo.toml. [easy] bitflags: Add keywords and categories to Cargo.toml. [easy] bitflags: Add html_root_url crate attribute. [easy] bitflags: Remove mention of stable 'assignment_ops' feature from docs. [easy] bitflags: Add an example of what the macro-expanded API looks like. [easy] bitflags: Implement Hex, Octal, and Binary. [easy] byteorder: Add categories to toml file. [easy] byteorder: Add CI badges to toml file. [medium] notify-rust: Implement icons and images. tempdir: TempDir affected by remove_dir_all unreliability on windows. [easy] servo: Looking for something to work on. If you are a Rust project owner and are looking for contributors, please submit tasks here. Updates from Rust Core 117 pull requests were merged in the last week. 1.17 library stabilizations 0e+10 is now a valid Rust float literal Rust works again on pre 1.12 macOS pass attributes to procedural macros as TokenStream (macro plugin-breaking) fix include!(_) regression add catch { } to AST (plugin-breaking) avoid alignment-related undefined behavior on operand-pair store TryFrom is [...]



The Mozilla Blog: How Do We Connect First-Time Internet Users to a Healthy Web?

Mon, 20 Mar 2017 19:46:28 +0000

Fresh research from Mozilla, supported by the Bill & Melinda Gates Foundation, explores how low-income, first-time smartphone users in Kenya experience the web — and what digital skills can make a difference.

Three billion of us now share the Internet. But our online experiences differ greatly, depending on geography, gender and income. For a software engineer in San Francisco, the Internet can be open and secure. But for a low-income, first-time smartphone user in Nairobi, the Internet is most often a small collection of apps in an unfamiliar language, limited further by high data costs. This undercuts the Internet’s potential as a global public resource — a resource everyone should be able to use to improve their lives and societies.

Twelve months ago, Mozilla set out to study this divide. We wanted to understand the barriers that low-income, first-time smartphone users in Kenya face when adapting to online life. And we wanted to identify the skills and education methods necessary to overcome them. To do this, Mozilla created the Digital Skills Observatory: a participatory research project exploring the complex relationship between devices, digital skills, social life, economic life and digital life. The work — funded by the Bill & Melinda Gates Foundation — was developed and led by Mozilla alongside Digital Divide Data and A Bit of Data Inc. Today, we’re sharing our findings.

For one year, Mozilla researchers and local Mozilla community members worked with about 200 participants across seven Kenyan regions. All participants identified as low income and were coming online for the first time through smartphones. To hone our focus, we paid special attention to the impact of digital skills on digital financial services (DFS) adoption. Why? A strong grasp of digital financial services can open doors for people to access the formal financial environment and unlock economic opportunity. In conducting the study, one group of participants was interviewed regularly and shared smartphone browsing and app usage data. A second group did the same, but also received digital skills training on topics like app stores and cybersecurity.

Our findings were significant. Among them:

- Without proper digital skills training, smartphone adoption can worsen — not improve — existing financial and social problems.
- Without media literacy and knowledge of online scams, users fall prey to fraudulent apps and news. The impact of these scams can be devastating on people who are already financially precarious.
- Users employ risky methods to circumvent the high price of data, like sharing apps via Bluetooth. As a result, out-of-date apps with security vulnerabilities proliferate.
- A set of 53 teachable skills can reduce barriers and unlock opportunity. These skills — identified by both participants and researchers — range from managing data usage and recognizing scams to resetting passwords, managing browser settings and understanding business models behind app stores.
- Our treatment group learned these skills, and the end-of-study evaluation showed increased agency and understanding of what is possible online. Without these fundamental skills, users are blocked in their discoveries and adoption of digital products.
- Gender and internet usage are deeply entwined. Men often have an effect on the way women use apps and services — for example, telling them to stop, or controlling their usage. Women were al[...]



Roberto A. Vitillo: On technical leadership

Mon, 20 Mar 2017 19:35:26 +0000

I have been leading a team of data engineers for over a year and I feel like I have a much better idea of what leadership entails than I did at the beginning of my journey. Here is a list of things I have learned so far: Have a vision As a leader, you are supposed to have an idea of where you are leading your project or product to. That doesn’t mean you have to be the person that comes up with a plan and has all the answers! You work with smart people who have opinions and ideas; use them to shape the vision, but make sure everyone on your team is aligned. Be your team’s champion During my first internship in 2008, I worked on a real-time monitoring application used to assess the quality of the data coming in from the ATLAS detector. My supervisor at the time had me present my work at CERN in front of a dozen scientists and engineers. I recall being pretty anxious, especially because my spoken English wasn’t that great. Even so, he championed my work and pushed me beyond my comfort zone and ultimately ensured I was recognized for what I built. Having me fly over from Pisa to present my work in person might not have been a huge deal for him but it made all the difference to me. I had the luck to work with amazing technical leads over the years and they all had one thing in common: they championed my work and made sure people knew about it. You can build amazing things but if nobody knows about it, it’s like it never happened. Split your time between maker & manager mode Leading a team means you are going to be involved in lots of non-coding related activities that don’t necessarily feel immediately productive, like meetings. While it’s all too easy to just jump back to coding, one shouldn’t neglect the managerial activities that a leadership role necessarily entails. One of your goals should be to improve the productivity of your colleagues and coding isn’t the best way to do that. That doesn’t mean you can’t be a maker, though! When I am in manager mode, interruptions and meetings dominate all my time. On the other hand, when I am in maker mode I absolutely don’t want to be interrupted. One simple thing I do is to schedule some time on my calendar to fully focus on my maker mode. It really helps to know that I have a certain part of the day that others can’t schedule over. I also tend to stay away from IRC/Slack during those hours. So far this has been working great; I stopped feeling unproductive and not in control of my time as soon as I adopted this simple hack. Pick strategic tasks After being around long enough in a company you will have probably picked up a considerable baggage of domain-specific technical expertise. That expertise allows you to not only easily identify the pain points of your software architecture, but also know what solutions are appropriate to get rid of them. When those solutions are cross-functional in nature and involve changes to various components, they provide a great way for you to add value as they are less likely to be tackled spontaneously by more junior peers. Be a sidekick The best way I found to mentor a colleague is to be their sidekick on a project of which they are the technical lead, while I act more as a consultant intervening when blockers arise. Ultimately you want to grow leaders and experts and the best way to do that is to give them responsibilities even if they only have 80% of the skills required. If you give y[...]



Air Mozilla: Mozilla Weekly Project Meeting, 20 Mar 2017

Mon, 20 Mar 2017 18:00:00 +0000

(image) The Monday Project Meeting




The Mozilla Blog: WebVR and AFrame Bringing VR to Web at the Virtuleap Hackathon

Mon, 20 Mar 2017 16:15:02 +0000

Imagine an online application that lets city planners walk through three-dimensional virtual versions of proposed projects, or a math program that helps students understand complex concepts by visualizing them in three dimensions. Both CityViewR & MathworldVR are amazing application experiences that bring to life the possibilities of virtual reality (VR). Both are concept virtual reality applications for the web that were generated for the Virtuleap WebVR Hackathon. Amazingly, nine out of ten of the winning projects used AFrame, an open source project sponsored by Mozilla, which makes it much easier to create VR experiences. CityView really illustrates the capabilities of WebVR to have real-life benefits that impact the quality of people’s daily lives beyond the browser. A top-notch batch of leading VR companies, including Mozilla, funded and supported this global event with the goal of building the grassroots community for WebVR. For non-techies, WebVR is the experimental JavaScript API that allows anyone with a web browser to experience immersive virtual reality on almost any device. WebVR is designed to be completely platform and device agnostic and so it is a scalable and democratic path to stoking a mainstream VR industry that can take advantage of the most valuable thing the web has to offer: built-in traffic and hundreds of millions of users. Over the three-month-long contest, teams from a dozen countries submitted 34 VR concepts. Seventeen judges and audience panels voted on the entries. Below is a list of the top 10 projects. I wanted to congratulate @ThePascalRascal and @Geczy for their work that won the €30,000 prize and spots to VR accelerator programs in Amsterdam, respectively. Here’s the really excellent part. With luck and solid code, virtual reality should start appearing in standard general availability web browsers in 2017. That’s a big deal. To date, VR has been accessible primarily on proprietary platforms. To put that in real world terms, the world of VR has been like a maze with many doors opening into rooms. Each room held something cool. But there was no way to walk easily and search through the rooms, browse the rooms, or link one room to another. This ability to link, browse, collaborate and share is what makes the web powerful and it’s what will help WebVR take off. To get an idea of how we envision this might work, consider the APainter app built by Mozilla’s team. It is designed to let artists create virtual art installations online. Each APainter work has a unique URL and other artists can come in and add to or build on top of the creation of the first artist, because the system is open source. At the same time, anyone with a browser can walk through an APainter work. And artists using APainter can link to other works within their virtual works, be it a button on a wall, a traditional text block, or any other format. Mozilla participated in this hackathon, and is supporting WebVR, because we believe keeping the web open and ensuring it is built on open standards that work across all devices and browsers is a key to keeping the internet vibrant and healthy. To that same end, we are sponsoring the AFrame Project. The goal of AFrame is to make coding VR apps for the web even easier than coding web apps with standard HTML and javascript. Our vision at Mozilla is that, in the very[...]



Daniel Stenberg: curlup 2017: curl now

Mon, 20 Mar 2017 12:44:49 +0000

At curlup 2017 in Nuremberg, I did a keynote and talked a little about the road to what we are and where we are right now in the curl project. There will hopefully be a recording of this presentation made available soon, but I wanted to entertain you all by also presenting some of the graphs from that presentation in a blog format for easy access and to share the information. Some stats and numbers from the curl project early 2017. Unless otherwise mentioned, this is based on the availability of data that we have. The git repository has data from December 1999 and we have detailed release information since version 6.0 (September 13, 1999). Web traffic First out, web site traffic to curl.haxx.se over the last seven full years that I have stats for. The switch to a HTTPS-only site happened in February 2016. The main explanation for the decrease in spent bandwidth in 2016 is us removing the HTML and PDF versions of all documentation from the release tarballs (October 2016). My log analysis software also tries to identify “human” traffic so this graph should not include the very large amount of bots and automation that hits our site. In total we serve almost twice as much data to “bots” as to humans. A large share of those download the cacert.pem file we host. Since our switch to HTTPS we have a 301 redirect from the HTTP site, and we still suffer from a large number of user-agents hitting us over and over without seemingly following said redirect… Number of lines in git Since we also have documentation and related things this isn’t only lines of code. Plain and simple: lines added to files that we have in git, and how the number has increased over time. There’s one notable dip and one climb and I think they both are related to how we have rearranged documentation and documentation formatting. Top-4 authors’ share This could also talk about how seriously we suffer from “the bus factor” in this project. Look at how large a share of all commits the top-4 committers have authored. Not committed; authored. Of course we didn’t have proper separation between authors and committers before git (March 2010). Interesting to note here is also that the author listed second here is Yang Tse, who hasn’t authored anything since August 2013. I personally seem to have plateaued at around 57% of all commits during the recent year or two and the top-4 share is slowly decreasing but is still over 80% of the commits. I hope we can get the top-4 share well below 80% if I rerun this script next year! Number of authors over time In comparison to the above graph, I did one that simply counted the total number of unique authors that have contributed a change to git and looked at how that number changes over time. The time before git is, again, somewhat of a lie since we didn’t keep track of authors vs committers properly then so we shouldn’t put too much value into that significant knee we can see on the graph. To me, the main takeaway is that in spite of the top-4 graph above, this authors-over-time line is interestingly linear and shows that the vast majority of people who contribute patches only send in one or maybe a couple of changes and then never appear again in the project. My hope is that this line will continue to climb over the coming years. Commits per release We started doin[...]



Daniel Stenberg: curl up 2017, the venue

Mon, 20 Mar 2017 10:58:12 +0000

The first ever physical curl meeting took place this last weekend before curl’s 19th birthday. Today curl turns nineteen years old. After much work behind the scenes to set this up and arrange everything (and thanks to our awesome sponsors who contributed to this), over twenty eager curl hackers and friends from a handful of countries gathered in a somewhat rough-looking building at curl://up 2017 in Nuremberg, March 18-19 2017. The venue was in this old factory-like facility but we put up some fancy signs so that people would find it: Yes, continue around the corner and you’ll find the entrance door for us: I know, who’d guessed that we would’ve splashed out on this fancy conference center, right? This is the entrance door. Enter and look for the next sign. Yes, move in here through this door to the right. And now, up these stairs… When you’ve come that far, this is basically the view you could experience (before anyone entered the room): And when Igor Chubin presented about wttr.in and using curl to do console-based applications, it looked like this: It may sound a bit lame to you, but I doubt this would’ve happened at all and it certainly would’ve been less good without our great sponsors who helped us by chipping in what we didn’t want to charge our visitors. Thank you very much Kippdata, Ergon, Sevenval and Haxx for backing us! [...]



Daniel Stenberg: 19 years ago

Mon, 20 Mar 2017 10:27:19 +0000

19 years ago on this day I released the first ever version of a software project I decided to name curl. Just a little hobby you know. Nothing fancy.

19 years ago that was a few hundred lines of code. Today we’re at around 150,000 lines.

19 years ago that was mostly my thing and I sent it out hoping that *someone* would like it and find good use. Today virtually every modern internet-connected device in the world runs my code. Every car, every TV, every mobile phone.

19 years ago was a different age not only to me as I had no kids nor house back then, but the entire Internet and world has changed significantly since.

19 years ago we’d had a handful of persons sending back bug reports and a few patches. Today we have over 1500 persons having helped out and we’re adding people to that list at a rapid pace.

19 years ago I would not have imagined that someone could actually stick around in a project like this for such a long time and still find it so amazingly fun and interesting.

19 years ago I hadn’t exactly established my “daily routine” of spare time development yet, but I was close, and for the larger part of this period I have spent a few hours every day. All days really. Working on curl and related stuff. 19 years of a few hours every day equals a whole lot of time.

It took us 19 years minus two days to have our first ever physical curl meeting, or conference if you will.

(image)




The Servo Blog: This Week In Servo 95

Mon, 20 Mar 2017 00:30:00 +0000

In the last week, we landed 110 PRs in the Servo organization’s repositories. Planning and Status Our overall roadmap is available online, including the overall plans for 2017 and Q1. Please check it out and provide feedback! This week’s status updates are here. Congratulations to our new reviewers, avadacatavra and canaltinova. Diane joined the Servo team last year and has been upgrading our networking and security stack, while Nazım has been an important part of the Stylo effort so far. We’re excited to see them both use their new powers for good! Notable Additions SimonSapin reduced the overhead of locking associated with CSSOM objects. nox corrected a case that did not properly merge adjacent text nodes. glennw improved the rendering quality of transforms in WebRender. Manishearth added support for CSS system colors in Stylo. mukilan and canaltinova implemented HTML parser support for form owners. n0max fixed a panic when resizing canvases. ajeffrey implemented support for setting document.domain. mchv removed assumptions that browsing contexts could be safely unwrapped in many circumstances. ajeffrey made the constellation store more information about the original request for a document, rather than just the URL. montrivo implemented missing constructors for the ImageData API. ajeffrey made the top and parent APIs work for cross-thread origins. paulrouget added support for vetoing navigation in embeddings. ajeffrey implemented cross-thread postMessage support. Manishearth converted a macro into a higher-order macro for cleaner, more idiomatic code. New Contributors George White Mariot Chauvin Panashe M. Fundira Sneha Sinha Stefano Chiodino Volodymyr M. Lisivka Zach Ploskey cku n0max Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors![...]



James Long: How I Became a Better Programmer

Mon, 20 Mar 2017 00:00:00 +0000

Several people at React Conf asked me for advice on becoming a better programmer. For some reason, people see me as a pretty advanced programmer worth listening to. I thought it would be worthwhile to write down my "mental model" for how I have approached programming over the years. Some details about me: I'm 32 years old and have over 10 years of solid experience. It probably wasn't until the last few years that I really felt confident in what I was doing. Even now, though, I continually doubt myself. The point is that this feeling doesn't go away, so just try to ignore it, keep hacking, and keep building experience. Let me be clear that these are only a few tips for improving your skills. Ultimately you need to figure out what works best for you. These are just things that I have found helpful. Find people who inspire you, but don't idolize them. Over the years there have been many people that I looked up to and watched for new tech. I learned a lot by simply trusting they were right and digging into things they worked on. These people tend to be very productive, brilliant, and inspiring. Find them and let them inspire and teach you. However, make sure not to idolize them. It's easy to seem intimidating from a Twitter feed, but if you look at how they work in real life, you'll see that they aren't that different. Hacks everywhere, etc. We're all just experimenting. Lastly, don't blindly trust them; if you disagree, engage them and learn from it. Some of my most productive conversations happened this way. My Emacs config is a mess. I don't know why my OCaml autocompletion is broken (it's been broken for over a month). I don't automate stuff and have to dig around in my shell history to find commands I need sometimes. I write the ugliest code at first. I stick things on the global object until I know what I'm doing. The most experienced programmer uses hacks all the time; the important part is that you're getting stuff done. Don't devalue your work. Newer programmers tend to feel like their work isn't worth much because they are new. Or maybe you are an experienced programmer, but working in a new area that makes you uncomfortable. In my opinion, some of the best ideas come from newer programmers who see improvements to existing tech that those who have already-formed opinions don't see. Your work is worthwhile, no matter what. In the worst case, if your idea doesn't work out, the community will have learned better why that approach doesn't make sense. (A note to the community: it's up to us to execute on this and be welcoming to newcomers.) Don't feel pressured to work all the time. With new tech coming out every day, it can feel like the world will move on without you if you take a night off. That's not true. In fact, you will do better work if you disengage a lot. Your perspective will be fresh, and I find myself subconsciously coming up with new ideas when I'm not working. The majority of the stuff being released every day is just a rehash of the same ideas. Truly revolutionary stuff only happens every few years. A good talk to watch on this subject is Hammock Driven Development. Ignore fluff. One of the biggest ways you can objectively get better faster is by ignoring "fl[...]



Mozilla Reps Community: Reps of the Month – February 2017

Sun, 19 Mar 2017 16:32:29 +0000

Please join us in congratulating Siddhartha Rao and Vishal Chavan, our Reps of the Month for February 2017! Siddhartha Siddhartha is a Computer Engineering major from New York; he is also a technology leader with rich experience in Security, Web Development and Data Privacy Evangelism. He started his Mozilla contributions by leading the Firefox Club and Cyber Cell for a year at his university. Sid was a speaker and Data Privacy Ingeniouses at the Glassroom initiative by Tactical Tech and Mozilla. He contributed to the strata of data privacy in browser metadata, mobile devices and social networks and took the responsibility to delve deep into the topics and spread awareness about them at the Data Detox Bar. He also is an important part of the privacy month campaign in the New York community. Creating quality content for the campaign, Sid was responsible for putting together the social media content for Twitter and Facebook posts and ensuring timely delivery of graphics to global teams for localization. Vishal Vishal is a web entrepreneur and a champion Mozilla contributor since April 2013. He started his Mozilla journey by organizing sessions focused on teaching the web to school and university students. He then became a part of the Mozilla Foundation when he was assigned the role of Regional Coordinator for Mozilla Clubs. One of the key roles he played as a contributor was being a core team member who started the January Privacy Month Campaign, a campaign which, as of 2017, has proven to be a global success for the third year in a row. In the last six months, Vishal has contributed as a Regional Coordinator and has helped set up and mentor many new Mozilla Clubs in and around India. He has been an integral part of campaigns to spread privacy awareness in India. Involved in the planning and the flow of the privacy month campaign, Vishal was responsible for assigning roles, mobilizing contributors and encouraging participation in India and in global communities. He also mentors at the Maker-Fest in India. Congrats to both of you! Join us in congratulating them on Discourse.[...]



Alex Vincent: A practical whitelist in JavaScript: es7-membrane, version 0.7

Sun, 19 Mar 2017 05:30:00 +0000

Several months ago, I announced es7-membrane, a new project for letting JavaScript developers control, through JS proxies, what their customers see of their own libraries.  A proxy, as you may recall, lets its creator define rules for looking up properties, defining properties, calling methods, etc., often with a real object underneath which the proxy’s handler internally refers to. First off, a little bit of basics: an object is a collection of properties (some of which are functions, and officially named “methods”). This may sound obvious, but it’s important: you refer to other values by the object and a property name. Proxies allow you to rewrite the rules for referring to other values by that tuple of the (containing) object and a property name. For instance, you can hide “private” members behind a proxy. (Stack Overflow) A membrane presents a one-to-one relationship between each proxy and a corresponding real object.  If you’re given a proxy to a document, and not the document itself, you can get other proxies, and those proxies can offer other properties that let you get back to the proxy of the document… but you can’t break out of any of the proxies to the underlying set of objects (or “object graph”). When I made the announcement back in August of this new project, the membrane I presented was quite useless, implementing only a mirroring capability.  Not anymore.  There’s a few features that the latest version currently supports: Membrane owners can replace any proxy the membrane generates with a custom proxy, using the .modifyRules API .createChainHandler(…) creates a new ProxyHandler derived from a handler used for an object graph the membrane already knows about. .replaceProxy(oldProxy, newHandler) takes the new handler and returns a new proxy for the original value, provided the new proxy handler was created via .createChainHandler(). Membrane owners can require a proxy to store new properties locally on the proxy, instead of propagating them through to the underlying object.  (.storeUnknownAsLocal(…) ) Membrane owners can require a proxy to delete properties locally, instead of on the underlying object.  (.requireLocalDelete(…) ) Membrane owners can hide existing properties of an object from the proxy’s users, as if those properties do not exist.  (.filterOwnKeys(…) ) Membrane owners can be notified when a new proxy is about to go to the customer, and set up new rules for that proxy before the customer ever sees it.  (ObjectGraphHandler.prototype.addProxyListener(callback), ..removeProxyListener(callback) ) Membrane owners can define as many object graphs as they want.  Traditionally, a membrane in JavaScript has supported only two object graphs.  (“wet”/”dry”, or “protected”/”public”, or “internal”/”external”… you get the idea)  But there’s no technical reason for that to be the upper limit.  The initial design of es7-membrane allowed the owner to define an object graph by name with a string. Having more than one object graph could have a few applications:  different privileges for different users or customers, for example. The es7-membrane tes[...]



Manish Goregaokar: I Never Hear the Phrase 'INHTPAMA' Anymore

Sun, 19 Mar 2017 02:50:42 +0000

Imagine never hearing the phrase ‘INHTPAMA’ again.

Oh, that’s already the case? Bummer.

Often, when talking about Rust, folks refer to the core aliasing rule as “that &mut thing”, “compile-time RWLock” (or “compile-time RefCell”), or something similar. Basically, referring to the fact that you can’t mutate the data that is currently held via an & reference, and that you can’t mutate or read the data currently held via an &mut reference except through that reference itself.
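
In today's Rust syntax, the rule boils down to something like the following minimal sketch (the Vec and the variable names are just illustration, not anything from the original posts):

    fn main() {
        let mut v = vec![1, 2, 3];

        let shared = &v;           // shared (&) borrow of `v`
        // v.push(4);              // error: cannot mutate `v` while `shared` is still in use
        println!("{}", shared[0]);

        let exclusive = &mut v;    // exclusive (&mut) borrow of `v`
        // println!("{}", v[0]);   // error: cannot read `v` except through `exclusive`
        exclusive.push(4);         // mutating through the &mut reference itself is fine
        println!("{:?}", exclusive);
    }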

It’s always bugged me that we really don’t have a name for this thing. It’s one of the core bits of Rust, and crops up often in discussions.

But we did have a name for it! It was “INHTPAMA” (which was later butchered into “INHTWAMA”).

This is a reference to Niko’s 2012 blog post, titled “Imagine Never Hearing The Phrase ‘aliasable, mutable’ again”. It’s where the aliasing rules came from. Go read it, it’s great. It talks about this weird language with at symbols and purity, but I assure you, that language is Baby Rust. Or maybe Teenage Rust. The lifecycle of rusts is complex and interesting and I don’t know how to categorize it.

The point of this post isn’t really to encourage reviving the use of “INHTWAMA”; it’s a rather weird acronym that will probably confuse folks. I would like to have a better way of referring to “that &mut thing”, but I’d prefer if it wasn’t a confusing acronym that carries no meaning of its own if you don’t know the history of it. That’s a recipe for making new community members feel like outsiders.

But that post is amazing and I’d hate to see it drop out of the collective memory of the Rust community.




Cameron Kaiser: 45.8.1 not available (also: 45.9 and FPR1 progress, and goodbye, App.net)

Sat, 18 Mar 2017 01:47:00 +0000

TenFourFox 45.8.1 is not available, because there isn't one, even though Firefox 52.0.1 is available to fix the fallout from Pwn2Own. However, the exploited API in question does not exist in Firefox 45 (against which we are based) and a second attack against Firefox was apparently unsuccessful, so at least right now no urgent TenFourFox chemspill is required. 45.9, the last release we will make at source parity against the Mozilla code base, is still on schedule for April 18th. 45.9 has more microimprovements to JavaScript, including some sections of hand-written assembly code that have been completely overhauled (especially in the inline caches for arithmetic operations) and fixing a stupid bug that caused logical comparisons on floating point operations to always hit a slow code path, some additional microimprovements to hard-coding code flow with xptcall, graphics and text runs, and then bug fixes for geolocation and the font blacklist. The last two should be done by the end of next week or slightly after, and then after I test it internally there will be a beta prior to release. Today, though, I've been playing with Google's new Guetzli JPEG encoder, which promises higher compression ratios with great quality that any JPEG decoder can view, because you really can have "tastes great" and "less filling" at the same time, apparently. Yes, I was able to get it to compile on the Power Mac and it works pretty well on my G5 from the command line; I'm trying to package it as a drag-and-drop tool which I might release a little later if I have time. If Guetzli takes off, maybe Google will stop it with their stupid WebP fetish since this is a much better solution and far more compatible. Finally, yesterday was the last day of App.net, fondly called "ADN" by denizens such as myself, Martin, Sevan and Riccardo. It unfairly got tarred as a "pay Twitter clone," which in fairness its operators didn't do enough to dispel, though most of us longtimers think that the service sealed its doom when they moved from a strictly pay model to a freemium model. That then destabilized the service by allowing a tier of user that wasn't really invested in its long-term success (like, say, blog spammers, etc.), and it gradually dropped below profitability because the pay tier didn't offer enough at that point. But ADN had a real sense of community that just doesn't exist with Facebook, nor Twitter in particular. There were much fewer trolls and mob packs, and those that did engage in that behaviour found themselves ostracised quickly. Furthermore, you didn't have the sense of people breathing down your neck or endlessly searching for victims who might post the wrong thing so they can harass and "out" you for not toeing the party line. I think the smaller surface area and user base really led to that kind of healthier online relating, and I still believe that a social media service that forces a smaller number of people to be invested in the success of that service -- that in turn treats them as customers and not cattle -- is the most effective way to get around the problems the large free social sites have. Mea[...]



Air Mozilla: Webdev Beer and Tell: March 2017

Fri, 17 Mar 2017 18:00:00 +0000

(image) Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...




Mozilla Addons Blog: Migrating to WebExtensions? Don’t Forget Your Users

Fri, 17 Mar 2017 16:42:21 +0000

(image)

Stranded users feel sadness.

If you’re the developer of a legacy add-on with an aim to migrate to WebExtensions, here are a few tips to consider so your users aren’t left behind.

Port your user data

If your legacy add-on stores user data, we encourage you to take advantage of Embedded WebExtensions to transfer the data to a format that can be used by WebExtensions. This is critical if you want to seamlessly migrate your users—without putting any actionable burden on them—to the new WebExtensions version when it’s ready. (Embedded WebExtensions is a framework that contains your WebExtension inside of a bootstrapped or SDK extension.)

Testing beta versions

If you want to test a WebExtensions version of your add-on with a smaller group of users, you can make use of the beta versions feature on addons.mozilla.org (AMO). This lets you test pre-release beta versions that are signed and available to Firefox users who want to give them a spin. You’ll benefit from real users interacting with your new version and providing valuable feedback—without sacrificing the good reputation and rating of your listed version. We don’t recommend creating a separate listing on release because this will fragment your user base and leave a large number of them behind when Firefox 57 is released.

Don’t leave your users behind

Updating your listing on AMO when your WebExtension is ready is the only way to ensure all of your users move over without any noticeable interruption.

Need further assistance with your migration journey? You can find real-time help during office hours, or by emailing webextensions-support [@] mozilla [dot] org.

The post Migrating to WebExtensions? Don’t Forget Your Users appeared first on Mozilla Add-ons Blog.




Niko Matsakis: The Lane Table algorithm

Fri, 17 Mar 2017 04:00:00 +0000

For some time now I’ve been interested in better ways to construct LR(1) parsers. LALRPOP currently allows users to choose between the full LR(1) algorithm or the LALR(1) subset. Neither of these choices is very satisfying: the full LR(1) algorithm gives pretty intuitive results but produces a lot of states; my hypothesis was that, with modern computers, this wouldn’t matter anymore. This is sort of true – e.g., I’m able to generate and process even the full Rust grammar – but this results in a ton of generated code. the LALR(1) subset often works but sometimes mysteriously fails with indecipherable errors. This is because it is basically a hack that conflates states in the parsing table according to a heuristic; when this heuristic fails, you get strange results. The Lane Table algorithm published by Pager and Chen at APPLC ‘12 offers an interesting alternative. It is an alternative to earlier work by Pager, the “lane tracing” algorithm and practical general method. In any case, the goal is to generate an LALR(1) state machine when possible and gracefully scale up to the full LR(1) state machine as needed. I found the approach appealing, as it seemed fairly simple, and also seemed to match what I would try to do intuitively. I’ve been experimenting with the Lane Table algorithm in LALRPOP and I now have a simple prototype that seems to work. Implementing it required that I cover various cases that the paper left implicit, and the aim of this blog post is to describe what I’ve done so far. I do not claim that this description is what the authors originally intended; for all I know, it has some bugs, and I certainly think it can be made more efficient. My explanation is intended to be widely readable, though I do assume some familiarity with the basic workings of an LR-parser (i.e., that we shift states onto a stack, execute reductions, etc). But I’ll review the bits of table construction that you need. First example grammar: G0 To explain the algorithm, I’m going to walk through two example grammars. The first I call G0 – it is a reduced version of what the paper calls G1. It is interesting because it does not require splitting any states, and so we wind up with the same number of states as in LR(0). Put another way, it is an LALR(1) grammar. I will be assuming a basic familiarity with the LR(0) and LR(1) state construction. Grammar G0 G0 = X "c" | Y "d" X = "e" X | "e" Y = "e" Y | "e" The key point here is that if you have "e" ..., you could build an X or a Y from that "e" (in fact, there can be any number of "e" tokens). You ultimately decide based on whether the "e" tokens are followed by a "c" (in which case you build an X) or a "d" (in which case you build a Y). LR(0), since it has no lookahead, can’t handle this case. LALR(1) can, since it augments LR(0) with a token of lookahead; using that, after we see the "e", we can peek at the next thing and figure out what to do. Step 1: Construct an LR(0) state machine We begin by constructing an [...]
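
As an aside (not from the original post), the role of that single token of lookahead in G0 can be sketched with a tiny hand-written recognizer in Rust. This is not the Lane Table algorithm or a table-driven LR parser, only an illustration of why peeking at the token after the run of "e"s is enough to choose between X and Y:

    #[derive(Debug, PartialEq)]
    enum Nonterminal { X, Y }

    // Recognizes G0 = X "c" | Y "d", where both X and Y are one or more "e"s.
    // Returns which nonterminal the run of "e" tokens ends up reducing to.
    fn recognize_g0(tokens: &[&str]) -> Result<Nonterminal, String> {
        let mut i = 0;

        // Shift the run of "e" tokens; with no lookahead (the LR(0) view),
        // nothing here says whether they will become an X or a Y.
        while tokens.get(i) == Some(&"e") {
            i += 1;
        }
        if i == 0 {
            return Err("expected at least one \"e\"".into());
        }

        // One token of lookahead disambiguates: "c" means the "e"s were an X,
        // "d" means they were a Y.
        match tokens.get(i) {
            Some(&"c") => Ok(Nonterminal::X),
            Some(&"d") => Ok(Nonterminal::Y),
            other => Err(format!("expected \"c\" or \"d\", found {:?}", other)),
        }
    }

    fn main() {
        assert_eq!(recognize_g0(&["e", "e", "c"]), Ok(Nonterminal::X));
        assert_eq!(recognize_g0(&["e", "d"]), Ok(Nonterminal::Y));
        println!("lookahead demo for G0 passed");
    }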



Air Mozilla: Reps Weekly Meeting Mar. 16, 2017

Thu, 16 Mar 2017 16:00:00 +0000

(image) This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.




Doug Belshaw: What does it mean to be 'digitally employable'?

Thu, 16 Mar 2017 14:47:37 +0000

Let’s define terms: Employable Suitable for paid work. Able to be used. So being employable means that someone is useful and can fill a particular role, whether as an employee or freelancer, for paid work. Having written my doctoral thesis on digital literacy, and led Mozilla’s Web Literacy Map from inception to v1.5, one of my biggest frustrations has been how new literacies are developed. It’s all very well having a framework and a development methodology, but how on earth do you get to actually effect the change you want to see in the world? For the past few weeks, the germ of an idea has been growing in my mind. On the one hand there are digital literacy frameworks specifying what people should be able to know, do, and think. On the other there are various approaches to employability skills. The latter is a reaction to formal education institutions being required to track the destination of people who graduate (or leave) them. The third factor in play here is Open Badges. This is a metadata specification and ecosystem for verifiable digital credentials that can be used to provide evidence of anything. In addition, anybody can earn and issue them, the value coming from recognition of the issuing body, and/or whether the supplied evidence shows that the individual has done something worthwhile. The simplest way of representing these three areas of focus is using a Venn diagram: The reason it’s important that Open Badges are in there is that they provide evidence of digital literacies and employability skills. They provide a ‘route to market’ for the frameworks to actually make a difference in the world. I know there’s an appetite for this, as I present on a regular basis and one of the slides I use is this one from Sussex Downs College: People ask me to go back to the slide ‘so they can take a photo of it’. Our co-op knows the team behind this project, as we helped with the kick-off of the project, and then ran a thinkathon for them as they looked to scale it. The Sussex Downs approach has a number of great elements to it: a three part badge system; competencies defined with the help of local businesses; a very visual design; and the homely metaphor of a ‘passport’. One thing that’s perhaps lacking is the unpacking of ‘Digital Literacy’ as more than a single element on the grid. To my mind, there’s a plethora of digital knowledge, skills, and behaviours that’s directly applicable to employability. In my experience, it’s important to come up with a framework for the work that you do. That’s why Sussex Downs’ Employability Passport is so popular. However, I think there’s also a need for people to be able to do the equivalent of pressing ‘view source’ on the framework to see why and how it was put together. What I’d like to do is to come up with a framework for digital employability that looks at the knowledge, skills, and understanding needed to thrive in the new digital economy. That wil[...]



QMO: Extra Testday event held by Mozilla Tamilnadu community

Thu, 16 Mar 2017 09:19:12 +0000

Hello Mozillians,

This week, the Mozilla community from Tamilnadu organized and held a Testday event at various campus clubs in their region.

I just wanted to thank you all for taking part in this. With the community's help, Mozilla is improving every day.

Several test cases were executed for the WebM Alpha, Reader Mode Displays Estimate Reading Time and Quantum – Compositor Process features.

Many thanks to Prasanth P, Surentharan R A, Monesh, Subash, Rohit R, @varun1102, Akksaya, Roshini, Swathika, Suvetha Sri, Bhava, aiswarya.M, Aishvarya, Divya, Arpana, Nivetha, Vallikannu, Pavithra Roselin, Suryakala, prakathi, Bhargavi.G, Vignesh.R, Meganisha.B, Aishwarya.k, harshini.k, Rajesh, Krithika Sowbarnika, harini shilpa, Dhinesh kumar, KAVIPRIYA.S, HARITHA K SANKARI, Nagaraj V, abarna, Sankararaman, Harismitaa R K, Kavya, Monesh, Harini, Vignesh, Anushri, Vishnu Priya, Subash.M, Vinothini K, Pavithra R.

Keep up the good work!
Mihai Boldan, QA Community Mentor
Firefox for Desktop, Release QA Team




The Rust Programming Language Blog: Announcing Rust 1.16

Thu, 16 Mar 2017 00:00:00 +0000

The Rust team is happy to announce the latest version of Rust, 1.16.0. Rust is a systems programming language focused on safety, speed, and concurrency. If you have a previous version of Rust installed, getting Rust 1.16 is as easy as:

    $ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.16.0 on GitHub. What’s in 1.16.0 stable The largest addition to Rust 1.16 is cargo check. This new subcommand should speed up the development workflow in many cases. What does it do? Let’s take a step back and talk about how rustc compiles your code. Compilation has many “passes”, that is, there are many distinct steps that the compiler takes on the road from your source code to producing the final binary. You can see each of these steps (and how much time and memory they take) by passing -Z time-passes to a nightly compiler:

    rustc .\hello.rs -Z time-passes
    time: 0.003; rss: 16MB  parsing
    time: 0.000; rss: 16MB  recursion limit
    time: 0.000; rss: 16MB  crate injection
    time: 0.000; rss: 16MB  plugin loading
    time: 0.000; rss: 16MB  plugin registration
    time: 0.049; rss: 34MB  expansion

There’s a lot of them. However, you can think of this process in two big steps: first, rustc does all of its safety checks, makes sure your syntax is correct, all that stuff. Second, once it’s satisfied that everything is in order, it produces the actual binary code that you end up executing. It turns out that that second step takes a lot of time. And most of the time, it’s not necessary. That is, when working on some Rust code, many developers get into a workflow like this:

    1. Write some code.
    2. Run cargo build to make sure it compiles.
    3. Repeat 1-2 as needed.
    4. Run cargo test to make sure your tests pass.
    5. GOTO 1.

In step two, you never actually run your code. You’re looking for feedback from the compiler, not to actually run the binary. cargo check supports exactly this use-case: it runs all of the compiler’s checks, but doesn’t produce the final binary. So how much speedup do you actually get? Like most performance related questions, the answer is “it depends.” Here are some very un-scientific benchmarks:

                      thanks     cargo      diesel
    initial build     134.75s    236.78s    15.27s
    initial check     50.88s     148.52s    12.81s
    speedup           2.648      1.594      1.192
    secondary build   15.97s     64.34s     13.54s
    secondary check   2.9s       9.29s      12.3s
    speedup           5.506      6.925      1.100

The ‘initial’ categories are the first build after cloning down a project. The ‘secondary’ categories involved adding one blank line to the top of src\lib.rs and running the command [...]
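
To make the edit/check loop concrete, here is a small made-up library snippet (not from the announcement) of the kind you might iterate on with cargo check; the commented-out function shows the sort of mistake that the front-end passes report without ever reaching code generation:

    // src/lib.rs -- illustrative only.
    // `cargo check` runs the same front-end analysis as `cargo build`
    // (parsing, type checking, borrow checking) but skips producing a binary.

    pub fn total_len(words: &[String]) -> usize {
        words.iter().map(|w| w.len()).sum()
    }

    // Uncommenting this makes both `cargo check` and `cargo build` fail with
    // a type mismatch (expected `usize`, found `&str`); `check` just reports
    // it sooner because it never gets to codegen.
    //
    // pub fn broken() -> usize {
    //     "not a number"
    // }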



Myk Melez: Introducing qbrt

Wed, 15 Mar 2017 18:01:30 +0000

I recently blogged about discontinuing Positron. I’m trying a different tack with a new experiment, codenamed qbrt, that reuses an existing Gecko runtime (and its existing APIs) while simplifying the process of developing and packaging a desktop app using web technologies. qbrt is a command-line interface written in Node and available via NPM: npm install -g qbrt Installing it also installs a Gecko runtime (currently a nightly build of Firefox, but in the future it could be a stable build of Firefox or a custom Gecko runtime). Its simplest use is then to invoke the ‘run’ command with a URL: qbrt run https://eggtimer.org/ Which will start a process and load the URL into a native window: URLs loaded in this way don’t have privileged access to the system. They’re treated as web content, not application chrome. To load a desktop application with system privileges, point qbrt at a local directory containing a package.json file and main entry script: qbrt run path/to/my/app/ For example, clone qbrt’s repo and try its example/ app: git clone https://github.com/mozilla/qbrt.git qbrt run qbrt/example/ This will start a process and load the app into a privileged context, giving it access to Gecko’s APIs for opening windows and loading web content along with system integration APIs for file manipulation, networking, process management, etc. (Another good example is the “shell” app that qbrt uses to load URLs.) To package an app for distribution, invoke the ‘package’ command, which creates a platform-specific package containing both the app’s resources and the Gecko runtime: qbrt package path/to/my/app/ Note that while qbrt is written in Node, it doesn’t provide Node APIs to apps. It might be useful to do so, using SpiderNode, as we did with Positron, although Gecko’s existing APIs expose equivalent functionality. Also, qbrt doesn’t yet support runtime version management (i.e. being able to specify which version of Gecko to use, and to switch between them). At the time you install it, it downloads the latest nightly build of Firefox. (You can update that nightly build by reinstalling qbrt.) And the packaging support is primitive. qbrt creates a shell script (batch script on Windows) to launch your app, and it packages your app using a platform-specific format (ZIP on Windows, DMG on Mac, and tar/gzip on Linux). But it doesn’t set icons nor most other package meta-data, and it doesn’t create auto-installers nor support signing the package. In general, qbrt is immature and unstable! It’s appropriate for testing, but it isn’t yet ready for you to ship apps with it. Nevertheless, I’m keen to hear how it works for you, and whether it supports your use cases. What would you want to do with it, and what additional features would you need from it?[...]



The Mozilla Blog: Five issues that will determine the future of Internet Health

Wed, 15 Mar 2017 17:48:51 +0000

In January, we published our first Internet Health Report on the current state and future of the Internet. In the report, we broke down the concept of Internet health into five issues. Today, we are publishing issue briefs about each of them: online privacy and security, decentralization, openness, web literacy and digital inclusion. These issues are the building blocks to a healthy and vibrant Internet. We hope they will be a guide and resource to you. We live in a complex, fast moving, political environment. As policies and laws around the world change, we all need to help protect our shared global resource, the Internet. Internet health shouldn’t be a partisan issue, but rather, a cause we can all get behind. And our choices and actions will affect the future health of the Internet, for better or for worse. We work on many other policies and projects to advance our mission, but we believe that these issue briefs help explain our views and actions in the context of Internet health:   1. Online Privacy & Security: Security and privacy on the Internet are fundamental and must not be treated as optional. In our brief, we highlight the following subtopics: Meaningful user control – People care about privacy. But effective understanding and control are often difficult, or even impossible, in practice. Data collection and use – The tech industry, too often, reflects a culture of ‘collect and hoard all the data’. To preserve trust online, we need to see a change. Government surveillance – Public distrust of government is high because of broad surveillance practices. We need more transparency, accountability and oversight. Cybersecurity – Cybersecurity is user security. It’s about our Internet, our data, and our lives online. Making it a reality requires a shared sense of responsibility. Protecting your privacy and security doesn’t mean you have something to hide. It means you have the ability to choose who knows where you go and what you do. 2. Openness: A healthy Internet is open, so that together, we can innovate. To make that a reality, we focus on these three areas: Open source – Being open can be hard. It exposes every wrinkle and detail to public scrutiny. But it also offers tremendous advantages. Copyright – Offline copyright law built for an analog world doesn’t fit the current digital and mobile reality. Patents – In technology, overbroad and vague patents create fear, uncertainty and doubt for innovators. Copyright and patent laws should better foster collaboration and economic opportunity. Open source, open standards, and pro-innovation policies must continue to be at the heart of the Internet. 3. Decentralization: There shouldn’t be online monopolies or oligopolies; a decentralized Internet is a healthy Internet. To accomplish that goal, we are focusing on the following policy areas. Net [...]



Air Mozilla: The Joy of Coding - Episode 95

Wed, 15 Mar 2017 17:00:00 +0000

(image) mconley livehacks on real Firefox bugs while thinking aloud.




Air Mozilla: Building Habit-Forming Products with Nir Eyal

Wed, 15 Mar 2017 17:00:00 +0000

(image) Nir Eyal has built and invested in products reaching hundreds of millions of users including AdNectar, Product Hunt and EventBrite. He'll draw on core psychological...




Mozilla Addons Blog: Add-ons Update – 2017/03

Tue, 14 Mar 2017 17:47:19 +0000

Here’s the state of the add-ons world this month. The Road to Firefox 57 explains what developers should look forward to in regards to add-on compatibility for the rest of the year. Please give it a read if you haven’t already. The Review Queues In the past month, 1,414 listed add-on submissions were reviewed: 1132 (80%) were reviewed in fewer than 5 days. 31 (2%) were reviewed between 5 and 10 days. 251 (18%) were reviewed after more than 10 days. There are 594 listed add-ons awaiting review. We met last week to discuss the state of the queues and our plans to reduce waiting times. There are already some changes coming in the next month or so that should help significantly, but we have larger plans that we will share soon that should address this recurring problem permanently. If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information. Compatibility The blog post for 53 is up and the bulk validation will be run soon. Firefox 54 is coming up. Multiprocess Firefox is enabled for some users, and will be deployed for most users very soon. Make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest. As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore. Recognition We would like to thank the following people for their recent contributions to the add-ons world: Piotr Drąg Niharika Khanna saintsebastian Atique Ahmed Ziad gilbertginsberg felixgirault StandB lavish205 numrut fitojb totaki ingoe You can read more about their work in our recognition page. The post Add-ons Update – 2017/03 appeared first on Mozilla Add-ons Blog.[...]



Robert Kaiser: Final Round for My LCARStrek and EarlyBlue Themes

Tue, 14 Mar 2017 17:33:07 +0000

As you may have noted, Mozilla published a plan for a new themes system that doesn't fully cover my thoughts on the matter and ends up making themes that go as far as my LCARStrek theme impossible. The only way I could still hold up this extent of theming is to spread it guerilla-style as userChrome.css mods, i.e. a long CSS sheet to be copied into people's userChrome.css manually. That would still allow the extent of theming, but be extremely inconvenient to distribute. Because of that, I will stop development of my themes as soon as Firefox 57 hits Nightly and I can't use the LCARStrek theme myself any more (EarlyBlue, which is SeaMonkey-only, is something I just dragged along anyhow). Given the uncertainty of even having releases and the small "market", I also will not continue them for SeaMonkey only; Firefox has been the only thing that really mattered any more there. Also, explicit theming support for Firefox devtools is being removed from LCARStrek with the 2.49 release that I just submitted to AMO, as it's extremely complicated to maintain, and with the looming removal of full themes from Firefox, that amount of work is not worth my time any more. Because of this, there is a bit of a mixture of styles in some areas of devtools, esp. in Firefox 52 (improving in newer versions), but that is outside of the control of a theme author. I tested that devtools are usable this way; contrast of icons in toolbars isn't optimal at times but visible enough that developers can work with them. To any LCARStrek users, sorry for the inconvenience; I would have put more work into this if theming of this extent were not being removed. This is a hard step for me as the first thing I experimented with when I downloaded my first Mozilla M5 build in 1999 was actually the theming files, and LCARStrek came out of that as a demonstration of how awesome this system of customization was and how far it could go. It achieved a look that really was out of this world, but I guess the new direction of Firefox is not compatible with a 24th century look. It will also be hard for me to move back to the bland look of the default theme, esp. as it looks even more boring on Linux than on other platforms, but I have a few months to get used to the idea before I actually have to do this, and I will keep the themes going for that little while. Somehow this fits well with the overall theme that MoCo and I are at odds right now on a number of things, but you can be assured that I'm not gone from the community; as a matter of fact I have planned a few activities in Vienna in the next months, from WebVR workshops to conference appearances, and I'm just about to finish the Tech Speakers training and hope to be more active in that area in the future. LLAP![...]



Firefox Nightly: These Weeks in Firefox: Issue 12

Tue, 14 Mar 2017 17:09:22 +0000

Highlights Turning on the permissions dialog for WebExtensions this week Now you get a better sense of what the add-on is capable of. Download progress indication redesign landed and is now in Nightly and Firefox Developer Edition! (UX spec) Special thanks to the front-end engineering and user experience team who worked on the project: Bryant Mao, Carol Huang, Morpheus Chen, Rex Lee, and Sean Lee! e10s-multi tentatively scheduled to ship to release in Firefox 55 with 4 content processes Recent measurements show that we’re quite competitive, memory-wise, in this configuration Feeling slim and trim! Updated orphaning dashboard! This will help us figure out why some people are stuck on older versions of Firefox. You can now reorder tabs in Firefox for Android (Nightly)! You can reorder tabs in Nightly builds of Firefox for Android now! https://t.co/2jYkPXUmYY – Code contributed by our volunteer Tom Klein. pic.twitter.com/m9F5Gm1eFn — Sebastian Kaspari (@Anti_Hype) March 13, 2017 New Test Pilot experiment: Containers launched March 2nd Intrigued? Try it or read the Hacks post A lot of community interest, running product prioritization through GH upvotes Please submit your ideas for new Firefox features ehsan sent out a Quantum Flow Engineering Newsletter that you should read Friends of the Firefox team Resolved bugs (excluding employees): https://mzl.la/2mHvE8d More than one bug fixed: Bharat Raghunathan Deepjyoti Mondal Kevin Jones Svetlana Orlik Tomislav Jovanovic :zombie Vedant Sareen [:fionn_mac] New contributors ( = First Patch!) Avikalpa Kundu [:kalpa] added a GTest for the Telemetry Histogram API Barun Parruck removed some redundant CSS from toolbar.css Bharat Raghunathan made Sync logs on Nightly more useful Vineet Reddy removed some unneeded SVGs milindl made Control-tab previews more efficient Kate Ustiuzhanina removed some unneeded Telemetry for Desktop CactusTribe made search engine tooltips more useful Subhdeep Saha made Control-tab previews not update until they really need to Tony fixed some styling glitches in the Add-ons Manager Welcome goes out to Sam Foster (:sfoster) and Jim Porter (:squib) who have just joined the Firefox team! Project Updates Add-ons Working on streaming download API Graduating first experiment in Firefox 55, nsLoginInfo API, which allows add-on authors to access and edit saved logins in the browser Work to deprecate XUL add-ons in Firefox 57 will start to land this release, preffed off by default. Activity Stream ursula landed a patch that allows the Activity Stream system add-on to override about:newtab[...]



Mark Côté: Conduit Field Report, March 2017

Tue, 14 Mar 2017 16:44:28 +0000

For background on Conduit, please see the previous post and the Intent to Implement. Autoland We kicked off Conduit work in January starting with the new Autoland service. Right now, much of the Autoland functionality is located in the MozReview Review Board extension: the permissions model, the rewriting of commit messages to reflect the reviewers, and the user interface. The only part that is currently logically separate is the “transplant service”, which actually takes commits from one repo (e.g. reviewboard-hg) and applies it to another (e.g. try, mozilla-central). Since the goal of Conduit is to decouple all the automation from code-review tools, we have to take everything that’s currently in Review Board and move it to new, separate services. The original plan was to switch Review Board over to the new Autoland service when it was ready, stripping out all the old code from the MozReview extension. This would mean little change for MozReview users (basically just a new, separate UI), but would get people using the new service right away. After Autoland, we’d work on the push-to-review side, hooking that up to Review Board, and then extend both systems to interface with BMO. This strategy of incrementally replacing pieces of MozReview seemed like the best way to find bugs as we went along, rather than a massive switchover all at once. However, progress was a bit slower than we anticipated, largely due to the fact that so many things were new about this project (see below). We want Autoland to be fully hooked up to BMO by the end of June, and integrating the new system with both Review Board and BMO as we went along seemed increasingly like a bad idea. Instead, we decided to put BMO integration first, and then follow with Review Board later (if indeed we still want to use Review Board as our rich-code-review solution). This presented us with a problem: if we wouldn’t be hooking the new Autoland service up to Review Board, then we’d have to wait until the push service was also finished before we hooked them both up to BMO. Not wanting to turn everything on at once, we pondered how we could still launch new services as they were completed. Moving to the other side of the pipeline The answer is to table our work on Autoland for now and switch to the push service, which is the entrance to the commit pipeline. Building this piece first means that users will be able to push commits to BMO for review. Even though they would not be able to Autoland them right away, we could get feedback and make the service as easy to use as possible. Think of it as a replacement for bzexport. Thanks to our new Scrum process (see also below), this priority adjustment was n[...]



Air Mozilla: Martes Mozilleros, 14 Mar 2017

Tue, 14 Mar 2017 16:00:00 +0000

(image) Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...




Christian Heilmann: Want to learn more about using the command line? Remy helps!

Tue, 14 Mar 2017 13:29:57 +0000

This is an unashamed plug for Remy Sharp’s terminal training course command line for non–techies. Go over there and have a look at what he’s lined up for a very affordable price. In a series of videos he explains all the ins and outs of the terminal and its commands that can make you much more effective in your day-to-day job. I’ve read the ebook of the same course and have to say that I learned quite a few things but – more importantly – remembered a lot I had forgotten. By using the findings over and over a lot has become muscle memory, but it is tough to explain what I am doing. Remy did a great job making the dark command line magic more understandable and less daunting. Here is what the course covers:

Course material

“Just open the terminal”
  • Just open the terminal (03:22)
  • Why use a terminal? (03:23)
  • Navigating directories (07:71)
  • Navigation shortcuts (01:06)

Install all the things
  • Running applications (05:47)
  • brew install fun (07:46)
  • gem install (06:32)
  • npm install --global (09:44)
  • Which is best? (02:13)

Tools of the Terminal Trade
  • Connecting programs (08:25)
  • echo & cat (01:34)
  • grep “searching” (06:22)
  • head tail less (10:24)
  • sort | uniq (07:58)

How (not) to shoot yourself in the foot
  • Delete all the things (07:42)
  • Super user does…sudo (07:50)
  • Permissions: mode & owner (11:16)
  • Kill kill kill! (12:21)
  • Health checking (12:54)

Making the shell your own
  • Owning your terminal (09:19)
  • Fish ~> (10:18)
  • Themes (01:51)
  • zsh (zed shell) (10:11)
  • zsh plugins: z st… (08:26)
  • Aliases (05:43)
  • Alias++ → functions (08:15)

Furthering your command line
  • Piping workflow (08:14)
  • Setting environment values (03:04)
  • Default environment variable values (01:46)
  • Terminal editors (06:41)
  • wget and cURL (09:53)
  • ngrok for tunnelling (06:38)
  • json command for data massage (07:51)
  • awk for splitting output into columns (04:11)
  • xargs (for when pipes won’t do) (02:15)
  • …fun bonus-bonus video (04:13)

I am not getting anything for this, except for making sure that someone as lovely and dedicated as Remy may reach more people with his materials. So, take a peek. [...]



The Mozilla Blog: A Public-Private Partnership for Gigabit Innovation and Internet Health

Tue, 14 Mar 2017 11:18:54 +0000

Mozilla, the National Science Foundation and U.S. Ignite announce $300,000 in grants for gigabit internet projects in Eugene, OR and Lafayette, LA

By Chris Lawrence, VP, Leadership Network

At Mozilla, we believe in a networked approach — leveraging the power of diverse people, pooled expertise and shared values. This was the approach we took nearly 15 years ago when we first launched Firefox. Our open-source browser was — and is — built by a global network of engineers, designers and open web advocates. This is also the approach Mozilla takes when working toward its greater mission: keeping the internet healthy. We can’t build a healthy internet — one that cherishes freedom, openness and inclusion — alone. To keep the internet a global public resource, we need a network of individuals, organizations and institutions. One such partnership is Mozilla’s ongoing collaboration with the National Science Foundation (NSF) and U.S. Ignite. We’re currently offering a $2 million prize for projects that decentralize the web. And together in 2014, we launched the Gigabit Community Fund. We committed to supporting promising projects in gigabit-enabled U.S. cities — projects that use connectivity at 250 times normal speeds to make learning more engaging, equitable and impactful. Today, we’re adding two new cities to the Gigabit Community Fund: Eugene, OR and Lafayette, LA. Beginning in May 2017, we’re providing a total of $300,000 in grants to projects in both new cities. Applications for grants will open in early summer 2017; applicants can be individuals, nonprofits and for-profits. We’ll support educators, technologists and community activists in Eugene and Lafayette who are building and beta-testing the emerging technologies that are shaping the web. We’ll fuel projects that leverage gigabit networks to make learning more inclusive and engaging through VR field trips, ultra-high definition classroom collaboration, and real-time cross-city robot battles. (These are all real examples from the existing Mozilla gigabit cities of Austin, Chattanooga and Kansas City.) We’re also investing in the local communities on the ground in Eugene and Lafayette — and in the makers, technologists, and educators who are passionate about local innovation. Mozilla will bring its Mozilla Network approach to both cities, hosting local events and strengthening connections between individuals, schools, nonprofits, museums, and other organizations. Video: Learn how the Mozilla Gigabit Community Fund supports innovative local projects across the U.S. Why Eugene and Lafayette? Mozilla Community Gigabit [...]



This Week In Rust: This Week in Rust 173

Tue, 14 Mar 2017 04:00:00 +0000

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community: News & Blog Posts

Rust's type system is Turing-complete: An exploration of type-level programming in Rust. A response to Rust's type system is not Turing complete challenge. Targeting the web with Rust. A demo web app that implements CPU bound portions in Rust (compiled to wasm/asm) while using existing web technologies to handle user facing pieces. Gentle intro to type-level recursion in Rust: From zero to HList sculpting. Math with distances in Rust: safety and correctness across units. Exploring dynamic dispatch in Rust. Running Rust on the ARM Cortex M3. Little tour of multiple iterators implementation in Rust. How to use Hyper HTTP library asynchronously. Building a parallel ECS in Rust. The development story of specs commemorating the 0.8 version release. Map of a lifetime. flat_map vs. nested loops. Reference iterators in Rust. ripgrep 0.5.0 released with UTF-16 support. ripgrep is a line-oriented search tool that combines the usability of The Silver Searcher with the raw speed of GNU grep. Rust now beats C++ in many benchmarks in The Computer Language Benchmarks Game and is on par in others. This week in Rust docs 47. Parsing into an AST. Part of the series - Writing an interpreter in Rust.

Crate of the Week

This week's crate of the week is µtest, a testing framework for embedded software. Thanks to nasa42 for the suggestion. Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available, visit the task page for more information. Crossbeam project is looking for new maintainers. The Underhanded Rust Contest. [medium] notify-rust: Implement icons and images. notify-rust lets you send desktop notifications on Linux and BSD. tempdir: TempDir affected by remove_dir_all unreliability on windows. [easy] servo: Looking for something to work on. If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

142 pull requests we[...]



Mitchell Baker: The “Worldview” of Mozilla

Mon, 13 Mar 2017 20:59:24 +0000

There is a set of topics that are important to Mozilla and to what we stand for in the world — healthy communities, global communities, multiculturalism, diversity, tolerance, inclusion, empathy, collaboration, technology for shared good and social benefit. I spoke about them at the Mozilla All Hands in December; if you want to (re)listen to the talk you can find it here. The sections where I talk about these things are at the beginning, and also starting at about the 14:30 minute mark.

These topics are a key aspect of Mozilla’s worldview.  However, we have not set them out officially as part of who we are, what we stand for and how we describe ourselves publicly.   I’m feeling a deep need to do so.

My goal is to develop a small set of principles about these aspects of Mozilla’s worldview. We have clear principles stating that Mozilla stands for topics such as security and free and open source software (principles 4 and 7 of the Manifesto). Similarly clear principles about topics such as global communities and multiculturalism will serve us well as we go forward. They will also give us guidance as to the scope and public voice of Mozilla, spanning official communications from Mozilla, to the unofficial ways each of us describes Mozilla.

Currently, I’m working on a first draft of the principles. We are working quickly, as quickly as we can while still having rich discussions and community-wide participation. If you would like to be involved and can potentially spend some hours reviewing and providing input, please sign up here. Jascha and Jane are supporting me in managing this important project.
I’ll provide updates as we go forward.  




Chris Cooper: Shameless self (release) promotion: Firefox 53.0b1 from TaskCluster

Mon, 13 Mar 2017 19:17:17 +0000

You may recall two short months ago when we moved Linux and Android nightlies from buildbot to TaskCluster. Due to the train model, this put us (release engineering) on a clock: either we’d be ready to release a beta version of Firefox 53 for Linux and Android using release promotion in TaskCluster, or we’d need to hold back our work for at least the next cycle, causing uplift headaches galore.

I’m happy to report that we were able to successfully release Firefox 53.0b1 for Linux and Android from TaskCluster last week. This is impressive for 3 reasons:

  1. Mac and Windows builds were still promoted from buildbot, so we were able to seamlessly integrate the artifacts of two different continuous integration (CI) platforms.
  2. The process whereby nightly builds are generated has always been different from how we generate release builds. Firefox 53.0b1 represents the first time a beta build was generated using the same taskgraph we use for a nightly, thereby reducing the delta between CI builds and release builds. More work to be done here, for sure.
  3. Nobody noticed. With all the changes under the hood, this may be the most impressive achievement of all.

A round of thanks to Aki, Johan, Kim, and Mihai who worked hard to get the pieces in place for Android, and a special shout-out to Rail who handled the Linux beta while also dealing with the uplift requirements for ESR52. Of course, thanks to everyone else who has helped with the migration thus far. All of that foundational work is starting to pay off.

Much more to do, but I look forward to updating you about Mac and Windows progress soon.




Air Mozilla: Mozilla Weekly Project Meeting, 13 Mar 2017

Mon, 13 Mar 2017 18:00:00 +0000

(image) The Monday Project Meeting




Mozilla Addons Blog: WebExtensions in Firefox 54

Mon, 13 Mar 2017 17:00:42 +0000

Firefox 54 landed in Developer Edition this week, so we have another update on WebExtensions for you. In addition to new APIs to help more developers port over to WebExtensions, we also announced a new Office Hours Support schedule where developers can get more personalized help with the transition.

New APIs

A new API for creating sidebars was implemented. This allows you to place a local HTML file inside the sidebar. The API is similar to the one in Opera. If you specify the sidebar_action manifest key, Firefox will create a sidebar: To allow keyboard commands to be sent to the sidebar, a new _execute_sidebar_action was added to the commands API which allows you to trigger the showing of the sidebar. The ability to override about:newtab with pages inside your extension was added to the chrome_url_overrides field in the manifest. Check out the example that uses the topSites API to show the top sites you visit. The privacy API gives you the ability to flip certain Firefox preferences related to privacy. Although the preferences in Chrome and Firefox aren’t a direct mapping, we’ve mapped the Firefox preferences that make sense to the APIs. Currently implemented are: networkPredictionEnabled, webRTCIPHandlingPolicy and hyperlinkAuditingEnabled. The protocol_handler API lets you easily map protocols to actions in your extension. For example: we use irccloud at Mozilla, so we can map ircs:// links to irccloud by adding this into an extension: "protocol_handlers": [ { "protocol": "ircs", "name": "IRC Mozilla Extension", "uriTemplate": "https://irccloud.mozilla.com/#!/%s" } ] When a user clicks on an IRC link, it shows the application selector with the IRC Mozilla Extension visible: This release also marks the landing of the first sets of devtools APIs. Quite a few APIs landed including: inspectedWindow.reload(), inspectedWindow.eval(), inspectedWindow.tabId, network.onNavigated, and panels.create(). Here’s an example of the Redux DevTools extension running on Firefox:

Backwards incompatible changes

The webRequest API will now require that you’ve requested the appropriate hosts’ permissions before allowing you to perform webRequest operations on a URL. This will be a backwards-incompatible change for any extension which used webRequest but did not request the host permission. Deletes in storage.sync are now encrypted. This would be a breaking change for any extensions using storage.sync on Developer Edition.

API Changes

Some key[...]



Tim Guan-tin Chien: Ben’s Story of Firefox OS

Mon, 13 Mar 2017 14:50:51 +0000

Like my good old colleague Ben Francis, I too have a lot to say about Firefox OS.

It’s been a little over a year since my team and I moved away from Firefox OS and the ill-fated Connected Devices group. Over the course of the last year, each time I thought about the Firefox OS experience, I arrived at different conclusions and complicated, sometimes emotional, beliefs. I didn’t feel I was ready to take a snapshot of these thoughts and publish them permanently, so I didn’t write about it here. Frankly, I don’t even think I am ready now.

In his post, Ben has pointed out many of the turning points of the project. I agree with many of his portrayals, most importantly the loss of direction between being a product (which must ship fast and deliver whatever partners/consumers wanted and were used to) and a research project (which involves engineering endeavors that answer questions asked in the original announcement). I, however, have not figured out what can be done instead (which Ben proposed nicely in his post).

Lastly, a final point: to you, this might as well be another story in the volatile tech industry, but to me, I felt the human cost enormously whenever a change was announced during the “slow death” of Firefox OS.

People move on and recover, including me (and I fortunately wasn’t among those hit the hardest). I can only extend my best wishes to those I fought the good fight with.

(image)




Gregory Szorc: from __past__ import bytes_literals

Mon, 13 Mar 2017 09:55:00 +0000

Last year, I simultaneously committed one of the ugliest and most impressive hacks of my programming career. I haven't had time to write about it. Until now. In summary, the hack is a source-transforming module loader for Python. It can be used by Python 3 to import a Python 2 source file while translating certain primitives to their Python 3 equivalents. It is kind of like 2to3 except it executes at run-time during import. The main goal of the hack was to facilitate porting Mercurial to Python 3 while deferring having to make the most invasive - and therefore most annoying - elements of the port in the canonical source code representation. For the technically curious, it works as follows. The hg Python executable registers a custom meta path finder instance. This entity is invoked during import statements to try to find the module being imported. It tells a later phase of the import mechanism how to load that module from wherever it is (usually a .py or .pyc file on disk) to a Python module object. The custom finder only responds to requests for modules known to be managed by the Mercurial project. For these modules, it tells the next stage of the import mechanism to invoke a custom SourceLoader instance. Here's where the real magic is: when the custom loader is invoked, it tokenizes the Python source code using the tokenize module, iterates over the token stream, finds specific patterns, and rewrites them to something more appropriate. It then untokenizes back to Python source code and then falls back to the built-in loader, which does the heavy lifting of compiling the source to Python code objects. So, we have Python 2 source files on disk that magically get transformed to be Python 3 compatible when they are loaded by Python 3. Oh, and there is no performance penalty for the token transformation on subsequent loads because the transformed bytecode is cached in the .pyc file (using a custom header so we know it was transformed and can be invalidated when the transformation logic changes). At the time I wrote it, the token stream manipulation converted most string literals ('') to bytes literals (b''). In other words, it restored the Python 2 behavior of string literals being bytes and not unicode. We jokingly call it from __past__ import bytes_literals (a play on Python 2's from __future__ import unicode_literals special syntax which changes string literals from Python 2's str/bytes type to unicode to match Python 3's be[...]
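For readers who want to see the shape of such a hook, here is a minimal, hypothetical sketch using Python 3's importlib and tokenize modules. It is not Mercurial's actual loader: the package filter (mypkg), the literal-prefix handling, and the absence of custom bytecode caching are all simplifying assumptions.

    # Sketch only: rewrite unprefixed string literals to bytes literals at
    # import time, for modules under a hypothetical "mypkg" package.
    import io
    import sys
    import tokenize
    import importlib.machinery

    class BytesLiteralLoader(importlib.machinery.SourceFileLoader):
        def source_to_code(self, data, path, *, _optimize=-1):
            tokens = []
            for tok in tokenize.tokenize(io.BytesIO(data).readline):
                # Only touch literals with no prefix at all ('' or ""), so that
                # b'', r'', u'' and friends are left alone.
                if tok.type == tokenize.STRING and tok.string[0] in "'\"":
                    tok = tok._replace(string="b" + tok.string)
                tokens.append(tok)
            rewritten = tokenize.untokenize(tokens)  # bytes, per the ENCODING token
            return super().source_to_code(rewritten, path, _optimize=_optimize)

    class BytesLiteralFinder(importlib.machinery.PathFinder):
        # A meta path finder that only responds to one package and swaps in
        # the transforming loader for its source files.
        @classmethod
        def find_spec(cls, fullname, path=None, target=None):
            if fullname != "mypkg" and not fullname.startswith("mypkg."):
                return None
            spec = super().find_spec(fullname, path, target)
            if spec and isinstance(spec.loader, importlib.machinery.SourceFileLoader):
                spec.loader = BytesLiteralLoader(spec.loader.name, spec.loader.path)
            return spec

    sys.meta_path.insert(0, BytesLiteralFinder)

The real hack does considerably more (pattern-specific rewrites, plus a custom .pyc header so transformed bytecode can be cached and invalidated), but the finder/loader split above is the mechanism being described.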



The Servo Blog: These Weeks In Servo 94

Mon, 13 Mar 2017 00:30:00 +0000

In the last two weeks, we landed 185 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online, including the overall plans for 2017 and Q1. Please check it out and provide feedback! This week’s status updates are here.

Notable Additions

  • samgiles and rabisg added the Origin header to fetch requests.
  • hiikezoe made CSS animations be processed by Stylo.
  • Manishearth supported SVG presentation attributes in Stylo.
  • ferjm allowed redirects to occur after a CORS preflight fetch request.
  • sendilkumarn corrected the behaviour of loading user scripts in iframes.
  • pcwalton improved the performance of layout queries and requestAnimationFrame.
  • emilio removed unnecessary heap allocation for some CSS parsers.
  • mephisto41 implemented gradient border support in WebRender.
  • jdm avoided some panics triggered by image elements initiating multiple requests.
  • MortimerGoro improved the Android integration and lifecycle hooks.
  • nox removed the last uses of serde_codegen.
  • ferjm avoided a deadlock triggered by the Document.elementsFromPoint API.
  • fitzgen improved the rust-bindgen support for complex template parameter usages.
  • gw implemented page zoom support in WebRender.
  • dpyro added support for the nosniff algorithm in the Fetch implementation.
  • KiChjang implemented the :lang pseudoclass.

New Contributors

  • iakis
  • Jamie Nicol
  • Sneha Sinha
  • ak1t0
  • lucantrop
  • projektir
  • rabisg

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors![...]



Mike Hommey: When the memory allocator works against you

Sun, 12 Mar 2017 01:47:12 +0000

Cloning mozilla-central with git-cinnabar requires a lot of memory. Actually too much memory to fit in a 32-bit address space. I hadn’t optimized for memory use in the first place. For instance, git-cinnabar keeps sha-1s in memory as hex values (40 bytes) rather than raw values (20 bytes). When I wrote the initial prototype, it didn’t matter that much, and while close(ish) to the tipping point, it didn’t require more than 2GB of memory at the time. Time passed, and mozilla-central grew. I suspect the recent addition of several thousands of commits and files has made things worse.

In order to come up with a plan to make things better (short or longer term), I needed data. So I added some basic memory resource tracking, and collected data while cloning mozilla-central. I must admit, I was not ready for what I witnessed. Follow me for a tale of frustrations (plural).

I was expecting things to have gotten worse on the master branch (which I used for the data collection) because I am in the middle of some refactoring and made many changes that I suspected might have affected memory usage. I wasn’t, however, expecting to see the clone command using 10GB(!) of memory at peak usage across all processes. (Note, those memory sizes are RSS, minus “shared”.) It also was taking an unexpectedly long time, but then, I hadn’t cloned a large repository like mozilla-central from scratch in a while, so I wasn’t sure if it was just related to its recent growth in size or otherwise. So I collected data on 0.4.0 as well. Less time spent, less memory usage… ok. There’s definitely something wrong on master. But wait a minute, that slope from ~2GB to ~4GB on the git-remote-hg process doesn’t actually make any kind of sense. I mean, I’d understand it if it were starting and finishing with the “Import manifest” phase, but it starts in the middle of it, and ends long before it finishes. WTH?

First things first, since RSS can be a variety of things, I checked /proc/$pid/smaps and confirmed that most of it was, indeed, the heap. That’s the point where you reach for Google, type something like “python memory profile” and find various tools. One from the results that I remembered having used in the past is guppy’s heapy. Armed with pdb, I broke execution in the middle of the slope, and tried to get memory stats with heapy. SIGSEGV. Ouch. Let’s try somet[...]
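For reference, the "RSS minus shared" figure mentioned above can be derived on Linux from /proc/<pid>/smaps. The helper below is a minimal sketch, not git-cinnabar's actual tracking code:

    # Hypothetical helper: resident memory excluding shared pages, in kB.
    def private_rss_kb(pid):
        rss = shared = 0
        with open("/proc/%d/smaps" % pid) as smaps:
            for line in smaps:
                if line.startswith("Rss:"):
                    rss += int(line.split()[1])
                elif line.startswith(("Shared_Clean:", "Shared_Dirty:")):
                    shared += int(line.split()[1])
        return rss - shared

    # e.g. private_rss_kb(1234) for the git-remote-hg process id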



Eric Rahm: Are they slim yet, round 2

Fri, 10 Mar 2017 18:16:55 +0000

A year later, let’s see how Firefox fares on Windows, Linux, and OSX with multiple content processes enabled.

Results

We can see that Firefox with four content processes fares better than Chrome on all platforms, which is reassuring; Chrome is still about 2X worse on Windows and Linux. Our current plan is to only move up to four content processes, so this is great news. Two content processes is still better than IE; with four we’re a bit worse. This is pretty impressive given that last year we were in the same position with one content process. Surprisingly, on Mac Firefox is better than Safari with two content processes; compared with last year, when we used 2X the memory with just one process, we’re now on par with four content processes. I included Firefox with eight content processes to keep us honest. As you can see we actually do pretty well, but I don’t think it’s realistic to ship with that many, nor do we currently plan to. We already have or are adding additional processes such as the plugin process for Flash and the GPU process. These need to be taken into consideration when choosing how many content processes to enable, and pushing to eight doesn’t give us much breathing room. Making sure we have measurements now is important; it’s good to know where we can improve. Overall I feel solid about these numbers, especially considering where we were just a year ago. This bodes well for the e10s-multi project.

Test setup

This is the same setup as last year. I load the first 30 pages of the tp5 page set (a snapshot of Alexa top 100 websites from a few years ago), each in its own tab, with 10 seconds in between loads and 60 seconds of settle time at the end. Note: There was a minor change to the setup to give each page a unique domain. At least Safari and Chrome are roughly doing process per domain, so just using different ports on localhost was not enough. A simple solution was to modify my /etc/hosts file to add localhost-<1-30> aliases.

Methodology

Measuring multiprocess browser memory usage is tricky. I’ve settled on a somewhat simple formula of: total_memory = sum_uss(content processes) + sum_rss(parent processes); where a parent process is defined as anything that is not a content process (I’ll explain in a moment). Historically there was just one parent process that manages all other processes; this is st[...]
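For the curious, a minimal sketch of that formula using psutil follows. This is not the author's actual measurement harness, and classifying Firefox content processes by the -contentproc command-line flag is an assumption:

    # Sketch: total_memory = sum_uss(content processes) + sum_rss(parent processes)
    import psutil

    def total_firefox_memory():
        total = 0
        for p in psutil.process_iter(["name", "cmdline"]):
            name = (p.info["name"] or "").lower()
            if "firefox" not in name:
                continue
            cmdline = p.info["cmdline"] or []
            is_content = any("-contentproc" in arg for arg in cmdline)
            try:
                if is_content:
                    total += p.memory_full_info().uss  # USS for content processes
                else:
                    total += p.memory_info().rss       # RSS for parent-side processes
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
        return total

    print(total_firefox_memory() / (1024 ** 2), "MB")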



Gervase Markham: Firefox Secure Travel Addon

Fri, 10 Mar 2017 17:15:45 +0000

In these troubled times, business travellers occasionally have to cross borders where the border guards have significant powers to seize your electronic devices, and even compel you to unlock them or provide passwords. You have the difficult choice between refusing, and perhaps not getting into the country, or complying, and having sensitive data put at risk.

It is possible to avoid storing confidential data on your device if it’s all in the cloud, but then your browser is logged into (or has stored passwords for) various important systems which have lots of sensitive data, so anyone who has access to your machine has access to that data. And simply deleting all these passwords and cookies is a) a pain, and b) hard to recover from.

What might be very cool is a Firefox Secure Travel addon where you press a “Travelling Now” button and it:

  • Disconnects you from Sync
  • Deletes all cookies for a defined list of domains
  • Deletes all stored passwords for the same defined list of domains

Then when you arrive, you can log back in to Sync and get your passwords back (assuming it doesn’t propagate the deletions!), and log back in to the services.

I guess the border authorities can always ask for your Sync password but there’s a good chance they might not think to do that. A super-paranoid version of the above would also:

  • Generate a random password
  • Submit it securely to a company-run web service
  • On receiving acknowledgement of receipt, change your Sync password to
    the random password

Then, on arrival, you just need to call your IT department (who would ID you e.g. by voice or in person) to get the random password from them, and you are up and running. In the mean time, your data is genuinely out of your reach. You can unlock your device and tell them any passwords you know, and they won’t get your data.
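A minimal sketch of that super-paranoid flow, with an entirely hypothetical company escrow endpoint and a placeholder for whatever mechanism actually rotates the Sync password, might look like this:

    # Hypothetical sketch of the "super-paranoid" travel flow described above.
    import secrets
    import requests

    def prepare_for_travel(user_id):
        new_password = secrets.token_urlsafe(32)       # generate a random password
        resp = requests.post(
            "https://it.example.com/password-escrow",  # company-run web service (made up)
            json={"user": user_id, "password": new_password},
            timeout=10,
        )
        resp.raise_for_status()                        # only proceed once receipt is acknowledged
        rotate_sync_password(new_password)             # then change the Sync password

    def rotate_sync_password(password):
        # Placeholder: depends entirely on the Sync account API.
        raise NotImplementedError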

Worth doing?

(image)



Mozilla Addons Blog: Improvements to add-on review communications

Fri, 10 Mar 2017 13:00:57 +0000

We recently made some improvements to our tools and processes to better support communication between add-on developers and reviewers. Previously, when you submitted an add-on to addons.mozilla.org (AMO) and a reviewer emailed you, your replies went to a mailing list (amo-editors AT mozilla DOT org) where a few reviewers (mostly admins) handled every response. This approach had some flaws—it put the burden on very few people to reply, who had to first get familiar with the add-on code and previous review actions. Further replies from either party went to the mailing list only, rather than being fed back into the review tools on AMO. These flaws slowed things down unnecessarily, and contributed to information clutter. Now, add-on developers can choose to reply to a review by email—like they’re used to—or from the Manage Status & Versions page of the add-on in the developer hub. Replies are picked up by AMO and displayed in the review history for reviewers and developers. In addition, everyone involved in the review of the particular version will be notified by email. Admin reviewers will make sure all inquiries are followed up on. This long-anticipated feature will not only make follow-ups for reviews more efficient for both developers and reviewers, but will also make upcoming reviews easier by keeping all information in the same place. The mailing list (amo-editors AT mozilla DOT org) will be discontinued shortly, so we ask all developers to use this system instead. For other questions not related to a particular review, please send a message to amo-admins AT mozilla DOT org. The Add-on Review team would like to thank Andrew Williamson for implementing this new feature and our QA team for testing it! The post Improvements to add-on review communications appeared first on Mozilla Add-ons Blog.[...]



Daniel Glazman: OS X TouchBar

Fri, 10 Mar 2017 12:13:00 +0000

Currently adding OS X TouchBar support to XUL (image) So far so good. Should be trivial to add multiple touchbars to a given XUL window when it's done.




Karl Dubost: [worklog] Edition 058 - Rain, sign of spring

Fri, 10 Mar 2017 09:05:00 +0000

webcompat life

  • Some progress in the discussion around viewport.
  • A question about client-hints on first HTML request for images.

webcompat issues

  • Trying to recontact uniqlo with a real usage of 410 Gone, but unfortunately bogus in that case.
  • Parent is a button and the element is an anchor with an href. Result: fails in Firefox, works in Chrome/Safari.

webcompat.com dev

  • By introducing HTTP caching for our HTML resources, I also introduced an issue. The bug report says that people can't log in or log out. In fact they can; the issue is what the page looks like: it doesn't display that the user is logged in. That's expected: I initially put a max-age of one day in addition to the etag value. In browser speak, that means that if the browser has a cached copy it will not even try a conditional request with If-None-Match. Because the login process hits the same resource, users receive the previous cached copy and don't see that they have actually logged in. A force reload solves that. So instead of a max-age I could solve this with no-cache. Unfortunately, browsers don't respect the HTTP semantics of no-cache and treat it as no-store. Back to the starting point: must-revalidate, max-age=0 will solve it and force the conditional request. (See the sketch after this list.)
  • Opened a quick issue about our current Twitter link based on the UX research currently being done.
  • Discussions about issues with dependencies and how we handle them.
  • Discussing the Contact page friendliness.
  • We currently have some Outreachy participants. Time for review and help: review.
  • Our issue titles are… pretty poor for now.

Otsukare![...]
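A minimal sketch of that header choice, assuming a Flask-style app rather than webcompat.com's actual code:

    # Serve HTML with an ETag, but force revalidation on every use of the
    # cached copy (instead of max-age=86400, which hid fresh logins).
    from flask import Flask, make_response, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        resp = make_response("<html>...</html>")
        resp.headers["Cache-Control"] = "must-revalidate, max-age=0"
        resp.add_etag()
        # Answer If-None-Match with 304 when the ETag still matches.
        return resp.make_conditional(request)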



Hub Figuière: So, I got a Dell

Fri, 10 Mar 2017 05:16:11 +0000

Long. Overdue. Upgrade. I bought a Dell XPS13 as my new portable workstation for Linux and GNOME. This is the model 9360 that is currently available in a Developer Edition with Ubuntu 16.04 LTS (project Sputnik for those who follow). It satisfies all I was looking for in a laptop: lightweight, small 13", 16 GB of RAM (at least), Core i7 CPU (this is a Kaby Lake), and it must run Linux well. But I didn't buy the Developer Edition. Price-wise the Developer Edition is CAD$150 less than the Windows version in the "business" line and CAD$50 less than the Windows version in the "home" line (which only has that version). Exact same hardware. Last week, it was on sale, CAD$500 off the regular price, so it didn't matter. I got one. I had delayed getting one for so long that this was the opportunity for a bargain. I double checked, and unlike the previous Skylake-based model that didn't have the same wifi card, this one is really the same thing. I got a surprise door bell ring from the delivery person (the website didn't tell me it was en route).

Unboxing

It came in a box that contains a cardboard wrap and a nice black box. The cardboard wrap contains the power brick and the AC cord. I'm genuinely surprised by the power adapter. It is small, smaller than what I'm used to. It is just odd that it doesn't come in the same box as the laptop (not a separate shipping box, just that it is boxes in a box, shipped to you at once). The nice black box only bears the Dell logo and contains the laptop and two small booklets. One is for the warranty, the other is a getting started guide. Interestingly it mentions Ubuntu as well, which leads me to think that it is the same for both preloads. This doesn't really matter in the end but it just shows the level of refinement for a high-end laptop, which, until the last Apple refresh, was still more expensive than the Apple equivalent.

Fiddling with the BIOS

It took me more time to download the Fedora live ISO and flash it to the USB stick than the actual setup of the machine, minus some fiddling. Once I had booted, Fedora 25 was installed in 20 minutes. I did wipe the internal SSD, BTW. Once I figured out that F2 was the key I had to press to get into the BIOS upon boot, to set it to boot off t[...]



Mozilla Addons Blog: Office Hours Support for Transitioning and Porting to WebExtensions

Thu, 09 Mar 2017 20:28:10 +0000

To help facilitate more mutual support among developers migrating and porting to WebExtensions, we asked the add-on developer community to sign up for blocks of time when they can be available to assist each other. This week, we published the schedule, which shows you the days and hours (in your time zone) when people are available to answer questions in IRC and the add-on forum. Each volunteer helper has indicated their specialties, so you can find the people who are most likely able to help you.

If you’d like to get help asynchronously, you can join and email the webextensions-support [at] mozilla [dot] org mailing list, where more people are on hand to answer questions.

If you have any knowledge in or expertise with add-ons, please sign up to help! Just go to the etherpad and add your IRC handle, times you’re available, and your specialties, and we’ll add you to the schedule. Or, join the mailing list to help out at any time.

The post Office Hours Support for Transitioning and Porting to WebExtensions appeared first on Mozilla Add-ons Blog.




Air Mozilla: Equal Ratings Conference Demo Day Presentations 3.09.17

Thu, 09 Mar 2017 20:00:00 +0000

(image) We Believe in Equal Rating. Mozilla seeks to make the full range of the Internet's extraordinary power and innovative potential available to all. We advocate...




Air Mozilla: Denise Graveline on Graceful ways with Q & A

Thu, 09 Mar 2017 19:00:00 +0000

(image) Some speakers love Q & A. Others dread it. No matter which group you are in, this session will share tips for how to plan...




Air Mozilla: Equal Ratings Conference Judges' Panel Discussion 3.09.17

Thu, 09 Mar 2017 18:30:00 +0000

(image) We Believe in Equal Rating. Mozilla seeks to make the full range of the Internet's extraordinary power and innovative potential available to all. We advocate...




Air Mozilla: Reps Weekly Meeting Mar. 09, 2017

Thu, 09 Mar 2017 16:00:00 +0000

(image) This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.




Air Mozilla: Equal Ratings Conference AM Session 3.09.17

Thu, 09 Mar 2017 14:30:00 +0000

(image) We Believe in Equal Rating. Mozilla seeks to make the full range of the Internet's extraordinary power and innovative potential available to all. We advocate...