
Will's blog

Will Kahn-Greene's blog of Python, Mozilla, GNU/Linux, random content, dennis, Input, SUMO, and other projects mixed in there ad hoc, half-baked and with a twist of lemon

Last Build Date: Tue, 13 Mar 2018 03:23:41 GMT

Copyright: Contents © 2018 Will Kahn-Greene CC BY-SA 3.0

Side projects and swag-driven development

Tue, 13 Mar 2018 02:00:00 GMT


I work at Mozilla. I work on a lot of stuff:

  • a main project I do a ton of work on and maintain: Socorro
  • a bunch of projects related to that project which I work on and maintain: Antenna, Everett, Markus
  • some projects that I work on occasionally but don't maintain: mozilla-django-oidc
  • one project that many Mozilla sites use that somehow I ended up with but has no relation to my main project: Bleach
  • some projects I'm probably forgetting about
  • a side-project that isn't related to anything else I do that I "maintain": Standups

Most of those projects are part of my main job, or I like working on them, or I get some recognition for owning them. Whatever the reason, I don't work on them because I feel bad. Then there's Standups, which I work on solely because I feel bad.

This blog post talks about me and Standups, pontificates about some options I've talked with others about, and then lays out the concept of swag-driven development.

Read more… (8 mins to read)

Socorro in 2017

Mon, 08 Jan 2018 14:00:00 GMT


Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if they would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.
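The submission step described above is a multipart/form-data HTTP POST. Here's a minimal sketch of what building and submitting such a report could look like. The collector URL and the annotation names are hypothetical illustrations (the real Breakpad reporter sends many more fields, and the actual endpoint is configured in the crash reporter), so treat this as a shape, not Socorro's actual protocol:

```python
import uuid
import urllib.request

# Hypothetical collector URL -- the real one is configured in the crash reporter.
COLLECTOR_URL = "https://crash-reports.example.com/submit"


def build_multipart(annotations, minidump, boundary=None):
    """Build a multipart/form-data body from crash annotations and a minidump."""
    boundary = boundary or uuid.uuid4().hex
    parts = []
    for name, value in annotations.items():
        parts.append(
            "--%s\r\n"
            'Content-Disposition: form-data; name="%s"\r\n\r\n'
            "%s\r\n" % (boundary, name, value)
        )
    body = "".join(parts).encode("utf-8")
    # The minidump is attached as a binary file part.
    body += (
        "--%s\r\n"
        'Content-Disposition: form-data; name="upload_file_minidump"; '
        'filename="dump"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n" % boundary
    ).encode("utf-8")
    body += minidump + ("\r\n--%s--\r\n" % boundary).encode("utf-8")
    return body, "multipart/form-data; boundary=%s" % boundary


def submit_crash(annotations, minidump):
    """Assemble the POST request; urlopen(req) would actually send it."""
    body, content_type = build_multipart(annotations, minidump)
    req = urllib.request.Request(
        COLLECTOR_URL, data=body, headers={"Content-Type": content_type}
    )
    return req
```

The collector's job at this point is just to accept the POST and persist it; processing happens later in the pipeline.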

2017 was a big year for Socorro. In this blog post, I opine about our accomplishments.

Read more… (23 mins to read)

html5lib-python 1.0 released!

Fri, 08 Dec 2017 17:00:00 GMT

Yesterday, Geoffrey released html5lib 1.0 [1]! The changes aren't wildly interesting. The more interesting part for me is how the release happened. I'm going to spend the rest of this post talking about that.

[1] Technically there was a 1.0 release followed by a 1.0.1 release because the 1.0 release had issues.

The story of Bleach and html5lib

I work on Bleach, which is a Python library for sanitizing and linkifying text from untrusted sources for safe usage in HTML. It relies heavily on another library called html5lib-python. Most of the work that I do on Bleach consists of figuring out how to make html5lib do what I need it to do.

Over the last few years, maintainers of the html5lib library have been working towards a 1.0. Those well-meaning efforts got them into a versioning model which had some unenthusing properties. I would often talk to people about how I was having difficulties with Bleach and html5lib 0.99999999 (8 9s) and I'd have to mentally count how many 9s I had said. It was goofy [2]. In an attempt to deal with the effects of the versioning, there's a parallel set of versions that start with 1.0b. Because there are two sets of versions, it was a total pain in the ass to correctly specify which versions of html5lib Bleach worked with.

While working on Bleach 2.0, I bumped into a few bugs and upstreamed a patch for at least one of them. That patch sat in the PR queue for months. That's what got me wondering--is this project dead?

I tracked down Geoffrey and talked with him a bit on IRC. He seems to be the only active maintainer. He was really busy with other things, html5lib doesn't pay at all, there's a ton of stuff to do, he's burned out, and recently there have been spats of negative comments in the issues and PRs. Generally the project had a lot of stop energy.

Some time in August, I offered to step up as an interim maintainer and shepherd html5lib to 1.0.
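To make the versioning pain concrete: with two parallel release series, a dependency specifier has to account for both at once. A hypothetical illustration of what that juggling looks like (this is not Bleach's actual requirement line):

```
# Hypothetical specifier straddling the 0.9999... series and the 1.0b series:
html5lib>=0.99999999,!=1.0b1,!=1.0b2,!=1.0b3
```

Every new beta or new run of 9s means revisiting the specifier, which is exactly the kind of churn a plain 1.0 ends.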
The goals being:

  • land or close as many old PRs as possible
  • triage, fix, and close as many issues as possible
  • clean up testing and CI
  • clean up documentation
  • ship 1.0, which ends the versioning issues

[2] Many things in life are goofy.

Thoughts on being an interim maintainer

I see a lot of open source projects that are in trouble in the sense that they don't have a critical mass of people and energy. When the sole part-time volunteer maintainer burns out, the project languishes. Then the entitled users show up, complain, demand changes, and talk about how horrible the situation is and everyone should be ashamed. It's tough--people are frustrated and then do a bunch of things that make everything so much worse. How do projects escape the raging inferno death spiral?

For a while now, I've been thinking about a model for open source projects where someone else pops in as an interim maintainer for a short period of time with specific goals and then steps down. Maybe this alleviates users' frustrations? Maybe this gives the part-time volunteer burned-out maintainer a breather? Maybe this can get the project moving again? Maybe the temporary interim maintainer can make some of the hard decisions that a regular long-term maintainer just can't?

I wondered if I should try that model out here. In the process of convincing myself that stepping up as an interim maintainer was a good idea [3], I looked at projects that rely on html5lib [4]:

  • pip vendors it
  • Bleach relies upon it heavily, so anything that uses Bleach uses html5lib (jupyter, hypermark, readme_renderer, tensorflow, ...)
  • most web browsers (Firefox, Chrome, servo, etc) have it in their repositories because web-platform-tests uses it

I talked with Geoffrey and offered to step up with these goals in mind. I started with cleaning up the milestones in GitHub. I bumped everything from the 0.9999999999 (10 9s) milestone, which I determined will never happen, into a 1.0 milestone.
I used this as a bucket for collecting all the issues and PRs that piqued my interest. I went through the issue tracker and triaged a[...]

Markus v1.0 released! Better metrics API for Python projects.

Mon, 30 Oct 2017 13:00:00 GMT

What is it?

Markus is a Python library for generating metrics. Markus makes it easier to generate metrics in your program by:

  • providing multiple backends (Datadog statsd, statsd, logging, logging roll-up, and so on) for sending data to different places
  • sending metrics to multiple backends at the same time
  • providing a testing framework for easy testing
  • providing a decoupled architecture, making it easier to write code to generate metrics without having to worry about whether creating and configuring a metrics client has been done--similar to the Python logging module in this way

I use it at Mozilla in the collector of our crash ingestion pipeline. Peter used it to build our symbols lookup server, too.

v1.0 released!

This is the v1.0 release. I pushed out v0.2 back in April 2017. We've been using it in Antenna (the collector of the Firefox crash ingestion pipeline) since then. At this point, I think the API is sound and it's being used in production, ergo it's production-ready.

This release also adds Python 2.7 support.

Why you should take a look at Markus

Markus does three things that make generating metrics a lot easier.

First, it separates creating and configuring the metrics backends from generating metrics. Let's create a metrics client that sends data nowhere:

```python
import markus

markus.configure()
```

That's not wildly helpful, but it works and it's 2 lines.

Say we're doing development on a laptop on a speeding train and want to spit out metrics to the Python logging module so we can see what's being generated. We can do this:

```python
import markus

markus.configure(
    backends=[
        {
            'class': 'markus.backends.logging.LoggingMetrics'
        }
    ]
)
```

That will spit out lines to Python logging. Now I can see metrics getting generated while I'm testing my code.
I'm ready to put my code in production, so let's add a statsd backend, too:

```python
import markus

markus.configure(
    backends=[
        {
            # Log metrics to the logs
            'class': 'markus.backends.logging.LoggingMetrics',
        },
        {
            # Log metrics to statsd
            'class': 'markus.backends.statsd.StatsdMetrics',
            'options': {
                'statsd_host': '',
                'statsd_port': 8125,
                'statsd_prefix': '',
            }
        }
    ]
)
```

That's it. Tada!

Markus can support any number of backends. You can send data to multiple statsd servers. You can use the LoggingRollupBackend, which every flush_interval generates count, current, min, and max statistics for incr stats, and count, min, average, median, 95%, and max for timing/histogram stats. If Markus doesn't have the backends you need, writing your own metrics backend is straight-forward.

For more details, see the usage documentation and the backends documentation.

Second, writing code to generate metrics is straight-forward and easy to do. Much like the Python logging module, you add import markus at the top of the Python module and get a metrics interface. The interface can be module-level or in a class. It doesn't matter. Here's a module-level metrics example:

```python
import markus

metrics = markus.get_metrics(__name__)
```

Then you use it:

```python
@metrics.timer_decorator('chopping_vegetables')
def some_long_function(vegetable):
    for veg in vegetable:
        chop_vegetable()
        metrics.incr('vegetable', 1)
```

That's it. No bootstrapping problems, nice handling of metrics key prefixes, decorators, context managers, and so on.

You can use multiple metrics interfaces in the same file. You can pass them around. You can reconfigure the metrics client and backends dynamically while your program is running.

For more details, see the metrics overview documentation.

Third, testing metrics generation is easy to do. Markus provides a MetricsMock to make testing easier:

```python
import markus

from markus.testing import MetricsMock


def test_something():
    with MetricsMock() as mm:
        # ... Do things that mig[...]
```
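The decoupled architecture Markus borrows from the logging module--module-level interfaces that are safe to create before any configuration has happened--can be sketched in plain Python. This is an illustration of the pattern only, not Markus's actual implementation; all names here are hypothetical:

```python
# Sketch of the logging-style decoupled pattern (not Markus internals):
# interfaces are cheap handles; configure() swaps backends underneath them.

_backends = []  # populated by configure(); empty means "send nowhere"


class MetricsInterface:
    def __init__(self, prefix):
        self.prefix = prefix

    def incr(self, key, value=1):
        # Backends are looked up at call time, so configure() can run
        # before or after this interface was created.
        for backend in _backends:
            backend.emit(self.prefix + "." + key, value)


def get_metrics(prefix):
    return MetricsInterface(prefix)


def configure(backends):
    _backends[:] = backends


class ListBackend:
    """Toy backend that records emitted metrics in a list."""

    def __init__(self):
        self.records = []

    def emit(self, key, value):
        self.records.append((key, value))
```

With this shape, creating a metrics interface at module import time is safe even though configuration happens later at application startup--the same property that makes `import logging; logger = logging.getLogger(__name__)` work.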

rob-bugson 1.0: or how I wrote a webextension

Thu, 19 Oct 2017 16:00:00 GMT

I work on Socorro and other projects which use GitHub for version control and code review and use Mozilla's Bugzilla for bug tracking.

After creating a pull request in GitHub, I attach it to the related Bugzilla bug, which is a contra-dance of clicking and copy-and-paste. Github tweaks for Bugzilla simplified that by adding a link to the GitHub pull request page that I could click on, edit, and then submit the resulting form. However, it's a legacy addon, I use Firefox Nightly (where legacy addons no longer run), and it doesn't look like anyone wrote a webextension version of it, so I was out of luck.

Today, I had to bring in my car for service and was sitting around at the dealership for a few hours. I figured instead of working on Socorro things, I'd take a break and implement an attach-pr-to-bug webextension.

I've never written a webextension before. I had written a couple of addons years ago using the SDK and then Jetpack (or something like that). My JavaScript is a bit rusty, especially ES6 stuff. I figured this would be a good way to learn about webextensions.

It took me about 4 hours of puzzling through docs, writing code, and debugging and then I had something that worked. Along the way, I discovered exciting things like:

  • host permissions let you run content scripts in web pages
  • content scripts can't access browser.tabs--you need a background script for that
  • you can pass messages from content scripts to background scripts
  • it seems like everything returns a promise, but async/await makes that a lot easier to work with
  • the attachment page on Bugzilla isn't like the create-bug page and ignores querystring params
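Several of those pieces--host permissions, content scripts, and the background script--come together in the extension's manifest.json. A hedged sketch of the general shape for a manifest v2 extension; the names, match patterns, and file names here are hypothetical, not rob-bugson's actual manifest:

```json
{
  "manifest_version": 2,
  "name": "attach-pr-to-bug",
  "version": "1.0",

  "permissions": [
    "tabs",
    "*://github.com/*",
    "*://bugzilla.mozilla.org/*"
  ],

  "content_scripts": [
    {
      "matches": ["*://github.com/*"],
      "js": ["content.js"]
    }
  ],

  "background": {
    "scripts": ["background.js"]
  }
}
```

The content script declared here runs in the matched pages but can't touch browser.tabs; it messages the background script (e.g. via browser.runtime.sendMessage), which holds the tabs permission and does the privileged work.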

The MDN docs for writing webextensions and the APIs involved are fantastic. The webextension samples are also great--I started with them when I was getting my bearings.

I created a new GitHub repository. I threw the code into a pull request, making it easier for someone else to review it. Mike Cooper kindly skimmed it and provided insightful comments. I fixed the issues he brought up.

TheOne helped me resurrect my AMO account which I created in 2012 back when Gaia apps were the thing.

I read through Publishing your webextension, generated a .zip, and submitted a new addon.

About 10 minutes later, the addon had been reviewed and approved.

Now it's a thing and you can install rob-bugson.