Planet Debian



Gunnar Wolf: Dear lazyweb: How would you visualize..?

Fri, 24 Mar 2017 20:46:16 +0000

Dear lazyweb, I am trying to find a good way to present the categorization of several cases studied with a fitting graph. I am rating several vulnerabilities / failures according to James Cebula et al.'s paper, A Taxonomy of Operational Cyber Security Risks; this is a somewhat deep taxonomy, with 57 end items, but organized in a three-levels-deep hierarchy. Copying a table from the cited paper (click to display it full-sized):

My categorization is binary: I care only whether it falls within a given category or not. My first stab at this was to represent each case using a star or radar graph. As an example:

As you can see, to a "bare" star graph I added a background color for each top-level category (blue for actions of people, green for systems and technology failures, red for failed internal processes, and gray for external events), and printed out only the labels for the second-level categories; for an accurate reading of the graphs, you have to refer to the table and count bars. And, yes, according to the Engineering Statistics Handbook:

Star plots are helpful for small-to-moderate-sized multivariate data sets. Their primary weakness is that their effectiveness is limited to data sets with less than a few hundred points. After that, they tend to be overwhelming.

I strongly agree with the above statement — and stating that "a few hundred points" can be understood is even an overstatement. 50 points are just too much. Now, trying to increase usability for this graph, I came across the Sunburst diagram. One of the proponents of this diagram, John Stasko, has written quite a bit about it.

Now... How to create my beautiful Sunburst diagram? That's a tougher one. Even though the page I linked to in the (great!) Data Visualisation Catalogue presents even some free-as-in-software tools to do this... They are JavaScript projects that will render their beautiful plots (even including an animation)... To the browser. I need them for a static (i.e. to be printed) document. Yes, I can screenshot and all, but I want them to be automatically generated, so I can review and regenerate them all automatically. Oh, I could just write JSON and use SaaS sites such as Aculocity to do the heavy lifting, but if you know me, you will understand why I don't want to.

So... I set out to find a Gunnar-approved way to display the information I need. Now, as the Protovis documentation says, an icicle is simply a sunburst transformed from polar to Cartesian coordinates... But I came to a similar conclusion: the tools I found are not what I need. OK, but an icicle graph seems much simpler to produce — I fired up my Emacs, and started writing using Ruby, RMagick and RVG... Then I decided to try a different way. This is my result so far:

So... What do you think? Does this look right to you? Clearer than the previous one? Worse? Do you have any idea how I could make this better?

Oh... You want to tell me there is something odd about it? Well, yes, of course! I still need to tweak it quite a bit. Would you believe me if I told you this is not really a left-to-right icicle graph, but rather a strangely formatted Graphviz non-directed graph using the dot formatter? I can assure you you don't want to look at my Graphviz sources... But in case you insist... Take them and laugh. Or cry. Of course, this file comes from a hand-crafted template, but has some autogenerated bits to it. I still have to tweak it quite a bit to correct several of its usability shortcomings, but at least it looks somewhat like what I want to achieve.

Anyway, I started out by asking a "dear lazyweb" question. So, here it goes: Do you think I'm using the right visualization for my data? Do you have any better suggestions, either of a graph or of a graph-generating tool? Thanks!

[update] Thanks for the first pointer, Lazyweb! I found a beautiful solution; we will see if it is what I need or not (it is too space-greedy to be readable... But I will check it out more thoroughly). It lays out much better than anything I can spew out by myself — writing it as a mindmap using TikZ[...]
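The autogeneration step Gunnar describes (a template plus generated bits feeding Graphviz dot) can be sketched in a few lines. The following Python sketch is my own illustration, not his actual template: the taxonomy subset and helper names are made up, and it emits dot source for a left-to-right graph with one node per category, filled in when the case matches:

```python
# Minimal sketch (not Gunnar's template): emit Graphviz dot source for a
# left-to-right graph of a binary categorization over a toy taxonomy subset.
TAXONOMY = {
    "Actions of people": ["Inadvertent", "Deliberate"],
    "Systems and technology failures": ["Hardware", "Software"],
}

def to_dot(case):
    """case maps second-level category name -> True/False (binary rating)."""
    lines = ["graph case {", "  rankdir=LR;", "  node [shape=box];"]
    for top, subs in TAXONOMY.items():
        top_id = top.replace(" ", "_")
        lines.append(f'  "{top_id}" [label="{top}"];')
        for sub in subs:
            sub_id = f"{top_id}_{sub}"
            # matched categories get a filled box, like the colored segments
            fill = "gray" if case.get(sub) else "white"
            lines.append(f'  "{sub_id}" [label="{sub}", style=filled, fillcolor={fill}];')
            lines.append(f'  "{top_id}" -- "{sub_id}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot({"Hardware": True}))
```

Piping the output through `dot -Tpdf` would give a static, regenerable figure, which is the property the post is after.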

Sylvain Beucler: Practical basics of reproducible builds

Fri, 24 Mar 2017 08:40:36 +0000

As GNU FreeDink upstream, I'd very much like to offer pre-built binaries: one (1) official, tested, current, distro-agnostic version of the game with its dependencies. I'm actually already doing that for the Windows version. One issue though: people have to trust me -- and my computer's integrity. Reproducible builds could address that. My release process is tightly controlled, but is my project reproducible? If not, what do I need? Let's check!

I quickly see that documentation is getting better, namely :) (The first docs I read on reproducibility looked more like a crazed date-o-phobic rant than an actual solution - plus now we have SOURCE_DATE_EPOCH implemented in gcc ;)) However, I was left unsatisfied by the very high-level viewpoint and the lack of concrete examples. The document points to various issues but is very vague about which tools are impacted. So let's do some tests!

Let's start with a trivial program:

    $ cat > hello.c
    #include <stdio.h>
    int main(void) {
        printf("Hello, world!\n");
    }

OK, first: does GCC compile this reproducibly? I'm not sure, because I've heard of randomness in identifiers and such in the compilation process...

    $ gcc-5 hello.c -o hello-5
    $ md5sum hello-5
    a00416d7392442321bad4afc5a461321  hello-5
    $ gcc-5 hello.c -o hello-5
    $ md5sum hello-5
    a00416d7392442321bad4afc5a461321  hello-5

Cool, ELF compiler output is stable through time! Now, do 2 versions of GCC compile a hello world identically?

    $ gcc-6 hello.c -o hello-6
    $ md5sum hello-6
    f7f52c2f5f82fe2a95061a771a6c5acd  hello-6
    $ hexcompare hello-5 hello-6
    [lots of red]

... Well, let's not get our hopes too high ;) Do trivial build option changes matter?

    $ gcc-6 hello.c -lc -o hello-6
    $ gcc-6 -lc hello.c -o hello-6b
    $ md5sum hello-6 hello-6b
    f7f52c2f5f82fe2a95061a771a6c5acd  hello-6
    f73ee6d8c3789fd8f899f5762025420e  hello-6b
    $ hexcompare hello-6 hello-6b
    [lots of red]

... OK, let's be very careful with build options then. What about 2 different build paths?

    $ cd ..
    $ cp -a repro/ repro2/
    $ cd repro2/
    $ gcc-6 hello.c -o hello-6
    $ md5sum hello-6
    f7f52c2f5f82fe2a95061a771a6c5acd  hello-6

Basic compilation is stable across directories. Now I tried recompiling FreeDink identically on 2 different git clones. Disappointment:

    $ md5sum freedink/native/src/freedink freedink2/native/src/freedink
    839ccd9180c72343e23e5d9e2e65e237  freedink/native/src/freedink
    6d5dc6aab321fab01b424ac44c568dcf  freedink2/native/src/freedink
    $ hexcompare freedink2/native/src/freedink freedink/native/src/freedink
    [lots of red]

Hmm, what about stripped versions?

    $ strip freedink/native/src/freedink freedink2/native/src/freedink
    $ md5sum freedink/native/src/freedink freedink2/native/src/freedink
    415e96bb54456f3f2a759f404f18c711  freedink/native/src/freedink
    e0702d798807c83d21f728106c9261ad  freedink2/native/src/freedink
    $ hexcompare freedink/native/src/freedink freedink2/native/src/freedink
    [1 single red spot]

OK, what's happening? diffoscope to the rescue:

    $ diffoscope freedink/native/src/freedink freedink2/native/src/freedink
    --- freedink/native/src/freedink
    +++ freedink2/native/src/freedink
    ├── readelf --wide --notes {}
    │ @@ -3,8 +3,8 @@
    │   Owner                Data size    Description
    │   GNU                  0x00000010   NT_GNU_ABI_TAG (ABI version tag)
    │     OS: Linux, ABI: 2.6.32
    │
    │   Displaying notes found in:
    │   Owner                Data size    Description
    │   GNU                  0x00000014   NT_GNU_BUILD_ID (unique build ID bitstring)
    │ -   Build ID: a689574d69072bb64b28ffb82547e126284713fa
    │ +   Build ID: d7be191a61e84648a58c18e9c108b3f3ce500302

What on earth is this Build ID, and how is it computed? After much digging, I find it's a 2008 plan with application in selecting matching detached debugging symbols. is the most detailed overview/rationale I found. It is supposed to be computed from parts of the binary. It's actually pretty resistant to changes, e.g. I could add the mis[...]
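The md5sum-then-hexcompare loop above can be approximated in a few lines of Python. This is a rough stand-in for illustration only (the two "builds" below are fabricated byte strings, not real binaries): it reports whether two builds are bit-identical and at which offsets they diverge, like hexcompare's red spots:

```python
# Rough stand-in for md5sum + hexcompare: compare two builds byte by byte.
import hashlib

def compare_builds(a: bytes, b: bytes):
    """Return (identical, (md5_a, md5_b), list of differing byte offsets)."""
    digests = (hashlib.md5(a).hexdigest(), hashlib.md5(b).hexdigest())
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(a) != len(b):
        diffs.append(min(len(a), len(b)))  # a length mismatch is a diff too
    return (not diffs, digests, diffs)

# Two fake "builds" that differ only in an embedded 20-byte Build ID:
build1 = b"\x7fELF" + b"\x00" * 64 + bytes(range(20)) + b"code"
build2 = b"\x7fELF" + b"\x00" * 64 + bytes(range(1, 21)) + b"code"
identical, sums, diffs = compare_builds(build1, build2)
print(identical, len(diffs))  # one small "red spot": only the Build ID region
```

Run against the two stripped freedink binaries above, the 20 differing offsets would be exactly the Build ID bytes that diffoscope pinpoints.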

Dirk Eddelbuettel: RApiDatetime 0.0.1

Fri, 24 Mar 2017 01:30:00 +0000


Very happy to announce a new package of mine is now up on the CRAN repository network: RApiDatetime.

It provides six entry points for C-level functions of the R API for Date and Datetime calculations: asPOSIXlt and asPOSIXct convert between long and compact datetime representations, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetimes. These six functions are all fairly essential and useful, but not one of them was previously exported by R. Hence the need to put them together in this package to complete the accessible API somewhat.

These should be helpful for fellow package authors, as many of us have either our own partial copies of some of this code, or else farm the work back out to R to get this done.

As a simple (yet real!) illustration, here is an actual Rcpp function which we could now cover at the C level rather than having to go back up to R (via Rcpp::Function()):

    inline Datetime::Datetime(const std::string &s, const std::string &fmt) {
        Rcpp::Function strptime("strptime");    // we cheat and call strptime() from R
        Rcpp::Function asPOSIXct("as.POSIXct"); // and we need to convert to POSIXct
        m_dt = Rcpp::as<double>(asPOSIXct(strptime(s, fmt)));
    }

I had taken a first brief stab at this about two years ago, but never finished it. With the recent emphasis on C-level function registration, coupled with a possible use case from anytime, I more or less put this together last weekend.

It currently builds and tests fine on POSIX-alike operating systems. If someone with some skill and patience in working on Windows would like to help complete the Windows side of things then I would certainly welcome help and pull requests.

For questions or comments please use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Simon McVittie: GTK hackfest 2017: D-Bus communication with containers

Thu, 23 Mar 2017 18:07:00 +0000

At the GTK hackfest in London (which accidentally became mostly a Flatpak hackfest) I've mainly been looking into how to make D-Bus work better for app container technologies like Flatpak and Snap. The initial motivating use cases are:

  • Portals: Portal authors need to be able to identify whether the container is being contacted by an uncontained process (running with the user's full privileges), or whether it is being contacted by a contained process (in a container created by Flatpak or Snap).
  • dconf: Currently, a contained app either has full read/write access to dconf, or no access. It should have read/write access to its own subtree of dconf configuration space, and no access to the rest.

At the moment, Flatpak runs a D-Bus proxy for each app instance that has access to D-Bus, connects to the appropriate bus on the app's behalf, and passes messages through. That proxy is in a container similar to the actual app instance, but not actually the same container; it is trusted to not pass messages through that it shouldn't pass through. The app-identification mechanism works in practice, but is Flatpak-specific, and has a known race condition due to process ID reuse and limitations in the metadata that the Linux kernel maintains for AF_UNIX sockets. In practice the use of X11 rather than Wayland in current systems is a much larger loophole in the container than this race condition, but we want to do better in future.

Meanwhile, Snap does its sandboxing with AppArmor, on kernels where it is enabled both at compile-time (Ubuntu, openSUSE, Debian, Debian derivatives like Tails) and at runtime (Ubuntu, openSUSE and Tails, but not Debian by default). Ubuntu's kernel has extra AppArmor features that haven't yet gone upstream, some of which provide reliable app identification via LSM labels, which dbus-daemon can learn by querying its AF_UNIX socket. However, other kernels like the ones in openSUSE and Debian don't have those.
The access-control (AppArmor mediation) is implemented in upstream dbus-daemon, but again doesn't work portably, and is not sufficiently fine-grained or flexible to do some of the things we'll likely want to do, particularly in dconf. After a lot of discussion with dconf maintainer Allison Lortie and Flatpak maintainer Alexander Larsson, I think I have a plan for fixing this. This is all subject to change: see fd.o #100344 for the latest ideas.

Identity model

Each user (uid) has some uncontained processes, plus 0 or more containers. The uncontained processes include dbus-daemon itself, desktop environment components such as gnome-session and gnome-shell, the container managers like Flatpak and Snap, and so on. They have the user's full privileges, and in particular they are allowed to do privileged things on the user's session bus (like running dbus-monitor), and act with the user's full privileges on the system bus. In generic information security jargon, they are the trusted computing base; in AppArmor jargon, they are unconfined.

The containers are Flatpak apps, or Snap apps, or other app-container technologies like Firejail and AppImage (if they adopt this mechanism, which I hope they will), or even a mixture (different app-container technologies can coexist on a single system). They are containers (or container instances) and not "apps", because in principle, you could install com.example.MyApp 1.0, run it, and while it's still running, upgrade to com.example.MyApp 2.0 and run that; you'd have two containers for the same app, perhaps with different permissions.

Each container has a container type, which is a reversed DNS name like org.flatpak or io.snapcraft representing the container technology, and an app identifier, an arbitrary non-empty string whose meaning is defined by the container technology. For Flatpak, that string would be another reversed DNS name like com.example.MyGreatApp; for Snap, as far as I can tell it would look like example-my-great-app.
The container technology can also put arbitra[...]
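The identity model described above boils down to a (container type, app identifier) pair. As a toy illustration only (the validation rules below are my reading of the text, not the actual proposed D-Bus API from fd.o #100344):

```python
# Toy sketch of the container identity pair: a reversed-DNS container type
# plus a technology-defined app identifier. Rules are my interpretation.
import re

# A reversed DNS name: at least two dot-separated labels, each starting
# with a letter (a loose approximation, not a full RFC-grade check).
REVERSED_DNS = re.compile(r"^[A-Za-z][\w-]*(\.[A-Za-z][\w-]*)+$")

def valid_identity(container_type: str, app_id: str) -> bool:
    # container type names the technology (org.flatpak, io.snapcraft, ...)
    # app id is an arbitrary non-empty string the technology defines
    return bool(REVERSED_DNS.match(container_type)) and len(app_id) > 0

assert valid_identity("org.flatpak", "com.example.MyGreatApp")
assert valid_identity("io.snapcraft", "example-my-great-app")
assert not valid_identity("flatpak", "x")     # not a reversed DNS name
assert not valid_identity("org.flatpak", "")  # app id must be non-empty
```

Note how the two Flatpak and Snap examples from the post both pass, while the app identifier's internal format is deliberately left to the container technology.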

Neil McGovern: GNOME ED Update – Week 12

Thu, 23 Mar 2017 11:43:39 +0000


New release!

In case you haven’t seen it yet, there’s a new GNOME release – 3.24! The release is the result of 6 months’ work by the GNOME community.

The new release is a major step forward for us, with new features and improvements, and some exciting developments in how we build applications. You can read more about it in the announcement and release notes.

As always, this release was made possible partially thanks to the Friends of GNOME project. In particular, it helped us provide a Core apps hackfest in Berlin last November, which had a direct impact on this release.


GTK+ hackfest

I’ve just come back from the GTK+ hackfest in London – thanks to Red Hat and Endless for sponsoring the venues! It was great to meet a load of people who are involved with GNOME and GTK, and some great discussions were had about Flatpak and the creation of a “FlatHub” – somewhere that people can get all their latest Flatpaks from.


As I’m writing this, I’m sitting on a train going to Heathrow, for my flight to LibrePlanet 2017! If you’re going to be there, come and say hi. I’ve also got a load of newly produced stickers that can brighten up your laptop.

Mike Hommey: Why is the git-cinnabar master branch slower to clone?

Thu, 23 Mar 2017 07:38:05 +0000

Apart from the memory considerations, one thing shown by the data presented in the “When the memory allocator works against you” post that I haven’t touched on in the followup posts is that there is a large difference in the time it takes to clone mozilla-central with git-cinnabar 0.4.0 vs. the master branch. One thing that was mentioned in the first followup is that reducing the amount of realloc and substring copies made the cloning more than 15 minutes faster on master. But the same code exists in 0.4.0, so this isn’t part of the difference. So what’s going on? Looking at the CPU usage during the clone is enlightening.

On 0.4.0: On master: (Note: the data gathering is flawed in some ways, which explains why the git-remote-hg process goes above 100%, which is not possible for this python process. The data is however good enough for the high-level analysis that follows, so I didn’t bother to get something more accurate.)

On 0.4.0, the git-cinnabar-helper process was saturating one CPU core during the File import phase, and the git-remote-hg process was saturating one CPU core during the Manifest import phase. Overall, the sum of both processes usually used more than one and a half cores. On master, however, the total of both processes barely uses more than one CPU core. What happened?

This and that happened. Essentially, before those changes, git-remote-hg would send instructions to git-fast-import (technically, git-cinnabar-helper, but in this case it’s only used as a wrapper for git-fast-import), and use marks to track the git objects that git-fast-import created. After those changes, git-remote-hg asks git-fast-import for the git object SHA1 of objects it just asked to be created. In other words, those changes replaced something asynchronous with something synchronous: while it used to be possible for git-remote-hg to work on the next file/manifest/changeset while git-fast-import was working on the previous one, it now waits.
The changes helped simplify the python code, but made the overall clone process much slower. If I’m not mistaken, the only real use for that information is for the mapping of mercurial to git SHA1s, which is actually rarely used during the clone, except at the end, when storing it. So what I’m planning to do is to move that mapping to the git-cinnabar-helper process, which, incidentally, will kill not 2, but 3 birds with 1 stone:

  • It will restore the asynchronicity, obviously (at least, that’s the expected main outcome).
  • Storing the mapping in the git-cinnabar-helper process is very likely to take less memory than what it currently takes in the git-remote-hg process. Even if it doesn’t (which I doubt), that should still help stay under the 2GB limit of 32-bit processes.
  • The whole thing that spikes memory usage during the finalization phase, as seen in the previous post, will just go away, because the git-cinnabar-helper process will just have prepared the git notes-like tree on its own.

So expect git-cinnabar 0.5 to get moar faster, and to use moar less memory. [...]
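The synchronous-versus-marks distinction can be sketched in a few lines. This toy Python model is only an illustration of the idea, not the actual git-remote-hg/git-fast-import wire protocol: with marks, the writer records a label and moves on, and resolving labels to SHA1s happens once at the end instead of one blocking round-trip per object:

```python
# Toy model of the two protocols: blocking per-object SHA1 replies vs marks.
import hashlib

class FastImport:
    """Stand-in for git-fast-import: stores objects, optionally under marks."""
    def __init__(self):
        self.marks = {}

    def commit_sync(self, data: bytes) -> str:
        # synchronous: the caller blocks until the SHA1 comes back
        return hashlib.sha1(data).hexdigest()

    def commit_marked(self, mark: int, data: bytes) -> None:
        # asynchronous: the caller just records a mark and keeps going
        self.marks[mark] = hashlib.sha1(data).hexdigest()

fi = FastImport()
sync_shas = [fi.commit_sync(b"rev%d" % i) for i in range(3)]
for i in range(3):
    fi.commit_marked(i, b"rev%d" % i)       # no round-trip per object
async_shas = [fi.marks[i] for i in range(3)]  # resolved once, at the end
assert sync_shas == async_shas                # same result, less waiting
```

Both variants end up with the same mapping; the difference is purely in when the writer has to wait, which is exactly the pipelining the post wants to restore.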

Mike Hommey: Analyzing git-cinnabar memory use

Thu, 23 Mar 2017 04:30:26 +0000

In the previous post, I was looking at the allocations git-cinnabar makes. While I had the data, I figured I’d also look at how the memory use correlates with expectations based on repository data, to put things in perspective. As a reminder, this is what the allocations look like (horizontal axis being the number of allocator function calls):

There are 7 different phases happening during a git clone using git-cinnabar, most of which can easily be identified on the graph above:

Negotiation. During this phase, git-cinnabar talks to the mercurial server to determine what needs to be pulled. Once that is done, a getbundle request is emitted, whose response is read in the next three phases. This phase is essentially invisible on the graph.

Reading changeset data. The first thing that a mercurial server sends in the response for a getbundle request is changesets. They are sent in the RevChunk format. Translated to git, they become commit objects. But to create commit objects, we need the entire corresponding trees and files (blobs), which we don’t have yet. So we keep this data in memory. In the git clone analyzed here, there are 345643 changesets loaded in memory. Their raw size in RawChunk format is 237MB. I think by the end of this phase, we made 20 million allocator calls, and have about 300MB of live data in about 840k allocations. (No certainty, because I don’t actually have definite data that would allow correlating between the phases and allocator calls, and the memory usage change between this phase and the next is not as clear-cut as with other phases.) This puts us at less than 3 live allocations per changeset, with “only” about 60MB overhead over the raw data.

Reading manifest data. In the stream we receive, manifests follow changesets. Each changeset points to one manifest; several changesets can point to the same manifest.
Manifests describe the content of the entire source code tree in a similar manner as git trees, except they are flat (there’s one manifest for the entire tree, where git trees would reference other git trees for subdirectories). And like git trees, they only map file paths to file SHA1s. The way they are currently stored by git-cinnabar (which is planned to change) requires knowing the corresponding git SHA1s for those files, and we haven’t got those yet, so again, we keep everything in memory. In the git clone analyzed here, there are 345398 manifests loaded in memory. Their raw size in RawChunk format is 1.18GB. By the end of this phase, we made 23 million more allocator calls, and have about 1.52GB of live data in about 1.86M allocations. We’re still at less than 3 live allocations for each object (changeset or manifest) we’re keeping in memory, and barely over 100MB of overhead over the raw data, which, on average, puts the overhead at 150 bytes per object. The three phases so far are relatively fast and account for a small part of the overall process, so they don’t appear clear-cut next to each other, and don’t take much space on the graph.

Reading and importing files. After the manifests, we finally get files data, grouped by path, such that we get all the file revisions of e.g. .cargo/.gitignore, followed by all the file revisions of .cargo/, .clang-format, and so on. The data here doesn’t depend on anything else, so we can finally directly import the data. This means that for each revision, we actually expand the RawChunk into the full file data (RawChunks contain patches against a previous revision), and don’t keep the RawChunk around. We also don’t keep the full data after it was sent to the git-cinnabar-helper process (as far as cloning is concerned, it’s essentially a wrapper for git-fast-import), except for the previous revision of the file, which is likely the patch base for the next revision.
We however keep in memory one or two things for each file revision: a mapping of its mercurial SHA[...]
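The per-object overhead figures quoted above can be checked with simple arithmetic. A quick sanity check using the post's own numbers (with the GB/MB rounding left deliberately loose):

```python
# Back-of-the-envelope check of the per-object overheads quoted in the post.
changesets, manifests = 345_643, 345_398

# Changeset phase: ~300MB live in ~840k allocations over 237MB of raw data.
assert 840_000 / changesets < 3             # "less than 3 live allocations"
overhead_changesets_mb = 300 - 237          # "about 60MB overhead"

# Manifest phase: ~1.52GB live in ~1.86M allocations over 237MB + 1.18GB raw.
assert 1_860_000 / (changesets + manifests) < 3
overhead_bytes = (1.52 - (0.237 + 1.18)) * 1024**3 / (changesets + manifests)

print(overhead_changesets_mb, int(overhead_bytes))  # ~60MB, ~150 bytes/object
```

The numbers come out close to the post's "about 60MB" and "150 bytes per object" claims, which is all this arithmetic is meant to confirm.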

Arturo Borrero González: IPv6 and CGNAT

Wed, 22 Mar 2017 17:47:00 +0000


Today I finished reading an interesting article by the fourth-largest Spanish ISP regarding IPv6 and CGNAT. The article is in Spanish, but I will translate the most important statements here.

Having a Spanish Internet operator talk about this subject is itself good news. We have been lacking any news regarding IPv6 in our country for years. I mean, no news from private operators. Public networks, like the one where I do my daily job, have been offering native IPv6 for almost a decade…

The title of the article is “What is CGNAT and why is it used”.

They start by admitting that this technique is used to address the issue of IPv4 exhaustion. Good. They move on to say that IPv6 was designed to address IPv4 exhaustion. Great. Then, they state that ‘‘the internet network is not ready for IPv6 support’’. Also that ‘‘IPv6 has the handicap of many websites not supporting it’’. Sorry?

That is not true. If they refer to the core of the internet (i.e., RIRs, internet exchange points, root DNS servers, core BGP routers, etc.), it has been working with IPv6 for ages now. If they refer to something else, for example Google, Wikipedia, Facebook, Twitter, Youtube, Netflix or any random hosting company, they support IPv6 as well. Hosting companies which don’t support IPv6 are only a few, at least here in Europe.

The traffic to/from these services is clearly the vast majority of the traffic traveling on the wires nowadays. And they support IPv6.

The article continues defending CGNAT. They refer to IPv6 as an alternative to CGNAT. No, sorry, CGNAT is an alternative to you not doing your IPv6 homework.

The article ends by insinuating that CGNAT is more secure and useful than IPv6. That’s the final joke. They mention some absurd example of IP cams being accessed from the internet by anyone.

Sure, by using CGNAT you are indeed making the network practically one-way only. RFC 7021 describes the big issues of a CGNAT network. So, by using CGNAT you sacrifice a lot of usability in the name of security. This supposed security can be replicated by the simplest possible firewall, which could be deployed in dual-stack IPv4/IPv6 using any modern firewalling system, like nftables.
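For illustration, such a simple dual-stack firewall could look like the following nftables ruleset. This is a sketch, not a recommendation for any particular network: it drops unsolicited inbound connections (the one security property CGNAT provides) while keeping end-to-end IPv6 addressing, and the ICMPv6 allowances would need adapting to the actual deployment.

```
# Sketch: drop unsolicited inbound traffic on both IPv4 and IPv6,
# roughly matching what CGNAT gives you, without losing addressing.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    # IPv6 neighbour discovery must be allowed for the network to work
    icmpv6 type { nd-neighbor-solicit, nd-neighbor-advert, nd-router-advert } accept
  }
}
```

Loading it with `nft -f` gives the same practical one-way behaviour the article praises, without sacrificing the usability RFC 7021 describes losing.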

(Here is a good blogpost about RFC 7021 for Spanish readers: Midiendo el impacto del Carrier-Grade NAT sobre las aplicaciones en red)

By the way, Google kindly provides some statistics regarding their IPv6 traffic. These stats clearly show exponential growth:


Other ISP operators are giving IPv6 strong precedence over IPv4; that’s the case of Verizon in the USA: Verizon Static IP Changes IPv4 to Persistent Prefix IPv6.

My article may read a bit like a rant, but I couldn’t miss the opportunity to call for native IPv6. None of the major Spanish ISPs offer IPv6.

Michael Stapelberg: Debian stretch on the Raspberry Pi 3 (update)

Wed, 22 Mar 2017 16:36:00 +0000

I previously wrote about my Debian stretch preview image for the Raspberry Pi 3.

Now, I’m publishing an updated version, containing the following changes:

  • A new version of the upstream firmware makes the Ethernet MAC address persist across reboots.
  • Updated initramfs files (without updating the kernel) are now correctly copied to the VFAT boot partition.
  • The initramfs’s file system check now works as the required fsck binaries are now available.
  • The root file system is now resized to fill the available space of the SD card on first boot.
  • SSH access is now enabled, restricted via iptables to local network source addresses only.
  • The image uses the linux-image-4.9.0-2-arm64 4.9.13-1 kernel.

A couple of issues remain, notably the lack of HDMI, WiFi and bluetooth support (see wiki:RaspberryPi3 for details). Any help with fixing these issues is very welcome!

As a preview version (i.e. unofficial, unsupported, etc.) until all the necessary bits and pieces are in place to build images in a proper place in Debian, I built and uploaded the resulting image. Find it at To install the image, insert the SD card into your computer (I’m assuming it’s available as /dev/sdb) and copy the image onto it:

$ wget
$ sudo dd if=2017-03-22-raspberry-pi-3-stretch-PREVIEW.img of=/dev/sdb bs=5M

If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:

$ ssh root@rpi3
# Password is “raspberry”

Dirk Eddelbuettel: Suggests != Depends

Wed, 22 Mar 2017 15:16:00 +0000


A number of packages on CRAN use Suggests: casually.

They list other packages as "not required" in Suggests: -- as opposed to absolutely required via Imports: or the older Depends: -- yet do not test for their use in either examples or, more commonly, unit tests.

So e.g. the unit tests are bound to fail because, well, Suggests != Depends.

This has been accommodated for many years by all parties involved by treating Suggests as a Depends and installing unconditionally. As I understand it, CRAN appears to flip a switch to automatically install all Suggests from major repositories, glossing over what I consider to be a packaging shortcoming. (As an aside, treatment of Additional_repositories: is indeed optional; Brooke Anderson and I have a fine paper under review on this.)

I spend a fair amount of time with reverse dependency ("revdep") checks of packages I maintain, and I will no longer accommodate these packages.

These revdep checks take long enough as it is, so I will now blacklist these packages that are guaranteed to fail when their "optional" dependencies are not present.

Writing R Extensions says in Section 1.1.3

All packages that are needed to successfully run R CMD check on the package must be listed in one of ‘Depends’ or ‘Suggests’ or ‘Imports’. Packages used to run examples or tests conditionally (e.g. via if(require(pkgname))) should be listed in ‘Suggests’ or ‘Enhances’. (This allows checkers to ensure that all the packages needed for a complete check are installed.)

In particular, packages providing “only” data for examples or vignettes should be listed in ‘Suggests’ rather than ‘Depends’ in order to make lean installations possible.


It used to be common practice to use require calls for packages listed in ‘Suggests’ in functions which used their functionality, but nowadays it is better to access such functionality via :: calls.

and continues in Section

Note that someone wanting to run the examples/tests/vignettes may not have a suggested package available (and it may not even be possible to install it for that platform). The recommendation used to be to make their use conditional via if(require("pkgname"))): this is fine if that conditioning is done in examples/tests/vignettes.

I will now exercise my option to use 'lean installations' as discussed here. If you want your package included in the tests I run, please make sure it tests successfully when only its required packages are present.
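The conditioning discipline Writing R Extensions describes translates directly to other ecosystems. As a Python analogy (mine, not from the post), a test path that needs an optional dependency should probe for it first, so the suite still passes on a lean installation:

```python
# Python analogue of conditioning on a suggested package: detect the
# optional module before using it, mirroring if(require("pkgname")) in R.
import importlib.util

def have(mod: str) -> bool:
    """True iff the optional ("suggested") module can be imported."""
    return importlib.util.find_spec(mod) is not None

# Only take the optional path when the dependency is actually present.
if have("json"):              # stdlib module, so this branch runs here
    import json
    ran_optional = True
else:
    ran_optional = False      # a lean installation would land here, not fail
print(ran_optional)
```

In a test suite this would typically be a skip marker rather than an if/else, but the point is the same: absence of a suggested dependency must not turn into a failure.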

Mike Hommey: When the memory allocator works against you, part 2

Wed, 22 Mar 2017 06:57:46 +0000

This is a followup to the “When the memory allocator works against you” post from a few days ago. You may want to read that one first if you haven’t, and come back. In case you don’t or didn’t read it, it was all about memory consumption during a git clone of the mozilla-central mercurial repository using git-cinnabar, and how the glibc memory allocator is using more than one would expect. This post is going to explore how/why it’s happening.

I happen to have written a basic memory allocation logger for Firefox, so I used it to log all the allocations happening during a git clone exhibiting the runaway memory increase behavior (using a python that doesn’t use its own allocator for small allocations). The result was a 6.5GB log file (compressed with zstd; 125GB uncompressed!) with 2.7 billion calls to malloc, calloc, free, and realloc, recorded across (mostly) 2 processes (the python git-remote-hg process and the native git-cinnabar-helper process; there are other short-lived processes involved, but they do fewer than 5000 calls in total). The vast majority of those 2.7 billion calls is done by the python git-remote-hg process: 2.34 billion calls. We’ll only focus on this process.

Replaying those 2.34 billion calls with a program that reads the log allowed me to reproduce the runaway memory increase behavior to some extent. I went an extra mile and modified glibc’s realloc code in memory so it doesn’t call memcpy, to make things faster. I also ran under setarch x86_64 -R to disable ASLR for reproducible results (two consecutive runs return the exact same numbers, which doesn’t happen with ASLR enabled).
I also modified the program to report the number of live allocations (allocations that haven’t been freed yet), and the cumulative size of the actually requested allocations (that is, the sum of all the sizes given to malloc, calloc, and realloc calls for live allocations, as opposed to what the memory allocator really allocated, which can be more, per malloc_usable_size). RSS was not tracked because the allocations are never filled, to make things faster, such that pages for large allocations are never dirty, and RSS doesn’t grow as much because of that.

Full disclosure: it turns out the “system bytes” and “in-use bytes” numbers I had been collecting in the previous post were smaller than what they should have been, and were excluding memory that the glibc memory allocator would have mmap()ed. That however doesn’t affect the trends that had been witnessed. The data below is corrected. (Note that in the graph above and the graphs that follow, the horizontal axis represents the number of allocator function calls performed.)

While I was here, I figured I’d check how mozjemalloc performs, and it has a better behavior (although it has more overhead). What doesn’t appear on this graph, though, is that mozjemalloc also tells the OS to drop some pages even if it keeps them mapped (madvise(MADV_DONTNEED)), so in practice, it is possible the actual RSS decreases too. And jemalloc 4.5: (It looks like it has better memory usage than mozjemalloc for this use case, but its stats are being thrown off at some point; I’ll have to investigate.)

Going back to the first graph, let’s get a closer look at what the allocations look like when the “system bytes” number is increasing a lot. The highlights in the following graphs indicate the range the next graph will be showing.
So what we have here is a bunch of small allocations (small enough that they don’t seem to move the “requested” line; most are under 512 bytes, so under normal circumstances they would be allocated by Python, and a few are between 512 and 2048 bytes), and a few large allocations, one of which triggers a bump in memory use. What can appear weird at first gl[...]
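The live-allocation and requested-size bookkeeping described above is straightforward to sketch. Here is a minimal Python sketch; the record format (tuples of allocator call, pointer, and size) is a simplifying assumption for illustration, not the actual logger's format:

```python
# Minimal sketch of the replay bookkeeping: track live allocations and the
# cumulative requested size while reading a stream of allocator calls.
# Record shapes (an assumption, not the real log format):
#   ('malloc', ptr, size), ('calloc', ptr, size),
#   ('free', ptr), ('realloc', new_ptr, size, old_ptr)

def replay(records):
    live = {}        # ptr -> requested size of allocations not yet freed
    requested = 0    # cumulative requested bytes of live allocations

    for rec in records:
        op = rec[0]
        if op in ('malloc', 'calloc'):
            _, ptr, size = rec
            live[ptr] = size
            requested += size
        elif op == 'free':
            _, ptr = rec
            requested -= live.pop(ptr, 0)   # free(NULL)/unknown is a no-op
        elif op == 'realloc':
            _, new_ptr, size, old_ptr = rec
            requested -= live.pop(old_ptr, 0)
            live[new_ptr] = size
            requested += size
    return len(live), requested

print(replay([('malloc', 1, 100), ('realloc', 2, 250, 1), ('free', 2)]))
```

Comparing `requested` against what the allocator reports as "system bytes" is exactly what makes the allocator's overhead and fragmentation visible in the graphs.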

Elena 'valhalla' Grandi: XMPP VirtualHosts, SRV records and letsencrypt certificates

Wed, 22 Mar 2017 06:32:08 +0000


When I set up my XMPP server, a friend of mine asked if I was willing to have a virtualhost with his domain on my server, using the same address as the email.

Setting up prosody and the SRV record on the DNS was quite easy, but then we stumbled on the issue of certificates: of course we would like to use letsencrypt, but as far as we know that means we would have to set up something custom so that the certificate gets renewed on his server and then sent to mine, and that looks like more of a hassle than just him setting up his own prosody/ejabberd on his server.
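For reference, the delegation itself is just a pair of SRV records (per RFC 6120, priority, weight, port, target); the domain names below are hypothetical stand-ins for the setup described:

```zone
; XMPP for example.org served by a prosody on xmpp.example.net
; (hypothetical names). The catch: the certificate presented by
; xmpp.example.net must still be valid for example.org.
_xmpp-client._tcp.example.org. 86400 IN SRV 0 5 5222 xmpp.example.net.
_xmpp-server._tcp.example.org. 86400 IN SRV 0 5 5269 xmpp.example.net.
```

The DNS part is the easy half; the certificate question above is what remains.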

So I was wondering: dear lazyweb, have any of you had the same issue and already come up with a solution, easy to implement and trivial to maintain, that we missed?

Clint Adams: Then Moises claimed that T.G.I. Friday's was acceptable

Wed, 22 Mar 2017 02:46:45 +0000


“Itʼs really sad listening to a friend talk about how he doesnʼt care for his wife and doesnʼt find her attractive anymore,” he whined, “while at the same time talking about the kid she is pregnant with—obviously they havenʼt had sex in awhile—and how though he only wants one kid, she wants multiple so they will probably have more. He said he couldnʼt afford to have a divorce. He literally said that one morning, watching her get dressed he laughed and told her, ‘Your boobs look weird.’ She didnʼt like that. I reminded him that they will continue to age. That didnʼt make him feel good. He said that he realized before getting married that he thought he was a good person, but now heʼs realizing heʼs a bad person. He said he was a misogynist. I said, ‘Worse, youʼre the type of misogynist who pretends to be a feminist.’ He agreed. He lived in Park Slope, but he moved once they became pregnant.”

“Good luck finding a kid-friendly restaurant,” she said.

Posted on 2017-03-22
Tags: umismu

Dirk Eddelbuettel: anytime 0.2.2

Wed, 22 Mar 2017 01:50:00 +0000


A bugfix release of the anytime package arrived at CRAN earlier today. This is the tenth release since the inaugural version late last summer, and the second (bugfix / feature) release this year.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, ... format to either POSIXct or Date objects -- and to do so without requiring a format string. See the anytime page, or the GitHub repo, for a few examples.

This release addresses an annoying bug related to British TZ settings and the particular impact of a change in 1971, and generalizes input formats to accept integer or numeric format in two specific ranges. Details follow below:

Changes in anytime version 0.2.2 (2017-03-21)

  • Address corner case of integer-typed (large) values corresponding to POSIXct time (PR #57 closing #56)

  • Add special case for ‘Europe/London’ and 31 Oct 1971 BST change to avoid a one-hour offset error (#58 fixing #36 and #51)

  • Address another corner case of numeric values corresponding to Date types which are now returned as Date

  • Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols(); already used .registration=TRUE in useDynLib in NAMESPACE

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Steinar H. Gunderson: 10-bit H.264 support

Tue, 21 Mar 2017 23:41:00 +0000


Following my previous tests about 10-bit H.264, I did some more practical tests; since is up again, I did some tests with actual 10-bit input. The results were pretty similar, although of course 4K 60 fps organic content is going to be different at times from the partially rendered 1080p 24 fps clip I used.

But I also tested browser support, with good help from people on IRC. It was every bit as bad as I feared: Chrome on desktop (Windows, Linux, macOS) supports 10-bit H.264, although of course without hardware acceleration. Chrome on Android does not. Firefox does not (it tries on macOS, but playback is buggy). iOS does not. VLC does; I didn't try a lot of media players, but obviously ffmpeg-based players should do quite well. I haven't tried Chromecast, but I doubt it works.

So I guess that yes, it really is 8-bit H.264 or 10-bit HEVC—but I haven't tested the latter yet either :-)

Matthew Garrett: Announcing the Shim review process

Tue, 21 Mar 2017 20:29:30 +0000

(image) Shim has been hugely successful, to the point of being used by the majority of significant Linux distributions and many other third party products (even, apparently, Solaris). The aim was to ensure that it would remain possible to install free operating systems on UEFI Secure Boot platforms while still allowing machine owners to replace their bootloaders and kernels, and it's achieved this goal.

However, a legitimate criticism has been that there's very little transparency in Microsoft's signing process. Some people have waited for significant periods of time before receiving a response. A large part of this is simply that demand has been greater than expected, and Microsoft aren't in the best position to review code that they didn't write in the first place.

To that end, we're adopting a new model. A mailing list has been created at, and members of this list will review submissions and provide a recommendation to Microsoft on whether they should be signed or not. The current set of expectations around binaries to be signed is documented here, and the current process here - it is expected that this will evolve slightly as we get used to the process, and we'll provide a more formal set of documentation once things have settled down.

This is a new initiative and one that will probably take a little while to get working smoothly, but we hope it'll make it much easier to get signed releases of Shim out without compromising security in the process.

(image) comments

Reproducible builds folks: Reproducible Builds: week 99 in Stretch cycle

Tue, 21 Mar 2017 18:44:20 +0000

Here's what happened in the Reproducible Builds effort between Sunday March 12 and Saturday March 18 2017:

Upcoming events

On March 23rd Holger Levsen will give a talk at the German Unix User Group's "Frühjahrsfachgespräch" called Reproducible Builds everywhere. Verifying Software Freedom with Reproducible Builds will be presented by Vagrant Cascadian at Libreplanet2017 in Boston, March 25th. You, too, can write reproducible software!, a workshop by Ximin Luo, Vagrant Cascadian and Valerie Young, takes place at Libreplanet2017 in Boston, March 25th.

Reproducible Builds Hackathon Hamburg 2017

The Reproducible Builds Hamburg Hackathon 2017, or RB-HH-2017 for short, is a 3-day hacking event taking place May 5th-7th in the CCC Hamburg Hackerspace located inside Frappant, a collective art space in a historical monument in Hamburg, Germany. The aim of the hackathon is to spend some days working on Reproducible Builds in every distribution and project. The event is open to anybody interested in working on Reproducible Builds issues, with or without prior experience! Accommodation is available and travel sponsorship may be available by agreement. Please register your interest as soon as possible.

Reproducible Builds Summit Berlin 2016

This is just a quick note that all the pads we wrote during the Berlin summit in December 2016 are now online (thanks to Holger), nicely complementing the report by Aspiration Tech.

Request For Comments for new specification: BUILD_PATH_PREFIX_MAP

Ximin Luo posted a draft version of our BUILD_PATH_PREFIX_MAP specification for passing build-time paths between high-level and low-level build tools. This is meant to help eliminate irreproducibility caused by different paths being used at build time. At the time of writing, this affects an estimated 15-20% of 25000 Debian packages.
This is a continuation of an older proposal, SOURCE_PREFIX_MAP, which has been updated based on feedback on our patches from GCC upstream, attendees of our Berlin 2016 summit, and participants on our mailing list. Thanks to everyone who contributed! The specification also contains runnable source code examples and test cases; see our git repo. Please comment on this draft ASAP - we plan to release version 1.0 of this in a few weeks.

Toolchain changes

#857632 apt: ignore the currently running kernel if attempting a reproducible build (Chris Lamb)
#857803 shadow: Make the sp_lstchg shadow field reproducible. (Chris Lamb)
#857892 fontconfig: please make the cache files reproducible (Chris Lamb)

Packages reviewed and fixed, and bugs filed

Chris Lamb: #857771 filed against golang-github-go-macaron-toolbox. #857772 filed against sushi. #857803 filed against shadow. #857889 filed against calendar-exchange-provider. #857892 filed against fontconfig. #858150 filed against eric, forwarded upstream. #858152 filed against fritzing. #858220 filed against ns2.

Reviews of unreproducible packages

5 package reviews have been added, 274 have been updated and 800 have been removed this week, adding to our knowledge about identified issues. 1 issue type has been added: filesystem_ordering_in_pak_files_generated_by_simutrans_makeobj

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by: Chris Lamb (5), Mattia Rizzolo (1).

diffoscope development

diffoscope 79 and 80 were uploaded to experimental by Chris Lamb. They included contributions from Chris Lamb: Ensure that we really are using ImageMagick. (Closes: #857940) Extract SquashFS images in one go rather than per-file, speeding up (e.g.) Tails ISO comparison by ~10x. Support newer[...]
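The path-prefix mapping idea behind BUILD_PATH_PREFIX_MAP can be illustrated with a toy Python sketch. The simple colon-separated `source=target` encoding used here is an assumption for illustration only; the actual draft specification defines its own encoding and precedence rules:

```python
# Toy illustration of build-path prefix mapping: rewrite build-time paths
# to canonical ones so they don't leak into build output. The encoding
# ("source=target" elements separated by ':') is a simplification of
# whatever the real BUILD_PATH_PREFIX_MAP draft specifies.

def parse_prefix_map(value):
    pairs = []
    for element in value.split(':'):
        source, _, target = element.partition('=')
        pairs.append((source, target))
    return pairs

def map_path(path, pairs):
    # Assume later entries take precedence, so scan in reverse.
    for source, target in reversed(pairs):
        if path.startswith(source):
            return target + path[len(source):]
    return path

pairs = parse_prefix_map('/home/user/build=.')
print(map_path('/home/user/build/src/main.c', pairs))  # ./src/main.c
```

A compiler honouring such a map would embed `./src/main.c` in debug info regardless of where the source tree was checked out, which is exactly what removes the build-path source of irreproducibility.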

Tanguy Ortolo: Bad support of ZIP archives with extra fields

Tue, 21 Mar 2017 18:33:00 +0000


For sharing multiple files, it is often convenient to pack them into an archive, and the most widely supported format to do so is probably ZIP. Under *nix, you can archive a directory with Info-ZIP:

% zip -r archive.zip something/

(When you have several files, it is recommended to archive them in a directory, to avoid cluttering the directory where people will extract them.)

Unsupported ZIP archive

Unfortunately, while we would expect ZIP files to be widely supported, I found out that this is not always the case, and I had many recipients fail to open them under operating systems such as iOS.

Avoid extra fields

That issue seems to be linked to the use of extra file attributes, which are enabled by default in order to store Unix file metadata. The field designed to store such extra attributes was designed from the beginning so that each implementation can take into account the attributes it supports and ignore any others, but some buggy ZIP implementations appear not to function at all with them.

Therefore, unless you actually need to preserve Unix file metadata, you should avoid using extra fields. With Info-ZIP, you would have to add the option -X:

% zip -rX archive.zip something/
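If you are generating the archive programmatically, one sketch of a portable alternative is Python's zipfile module, which by default leaves the per-entry "extra" area empty (Unix permissions, when set, go into the external attributes field instead of extra fields):

```python
# Sketch: produce a ZIP without per-entry extra fields using Python's
# zipfile module, then verify that the stored entry really has none.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.writestr('something/readme.txt', 'hello\n')

with zipfile.ZipFile(buf) as zf:
    info = zf.getinfo('something/readme.txt')
    print(info.extra)  # b'' -- no extra fields were written
```

This of course drops the Unix metadata, which is exactly the trade-off the -X option makes.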

Matthew Garrett: Buying a Utah teapot

Mon, 20 Mar 2017 20:45:42 +0000

(image) The Utah teapot was one of the early 3D reference objects. It's canonically a Melitta but hasn't been part of their range in a long time, so I'd been watching Ebay in the hope of one turning up. Until last week, when I discovered that a company called Friesland had apparently bought a chunk of Melitta's range some years ago and sell the original teapot[1]. I've just ordered one, and am utterly unreasonably excited about this.

[1] They have them in 0.35, 0.85 and 1.4 litre sizes. I believe (based on the measurements here) that the 1.4 litre one matches the Utah teapot.

(image) comments

Shirish Agarwal: Tale of two countries, India and Canada

Mon, 20 Mar 2017 19:38:32 +0000

Apologies – the first blog post got sent out by mistake. Weather comparisons between the two countries Last year, I had come to know that this year’s debconf is happening in Canada, a cold country. Hence, a few weeks/months back, I started trying to find information online when I stumbled across a few discussion boards where people were discussing Innerwear and Outerwear and I couldn’t understand what that was all about. Then somehow stumbled across this Video, which is of a game called the Long Dark, and just seeing a couple of episodes it became pretty clear to me why the people there were obsessing with getting the right clothes and everything about it. A couple of Debconf people were talking about the weather in Montreal, and surprise, surprise, it was snowing there, in fact supposed to be near the storm of the century. Was amazed to see that they have a website to track how much snow has been lifted. If we compare that to Pune, India, weather-wise we are polar opposites. There used to be a time, when I was very young, maybe 5 yrs. old, that once the weather went above 30 degrees celsius, rains would fall, but now its gonna touch 40 degrees soon. And April and May, the two hottest months, are yet to come. China Gate Before I venture further, I was gifted the book ‘China Gate‘ written by an author named William Arnold. When I read the cover and the back cover, it seemed the story was set between China and Taiwan; later when I started reading it, it shares the history of Taiwan going back 200 or so odd years. This became relevant as next year’s Debconf, Debconf 2018, will be in Taiwan, yes in Asia, very much near to India. I am ashamed to say that except for the Tiananmen Square Massacre and the Chinese High-Speed Rail there wasn’t much that I knew.
According to the book, and I’m paraphrasing here, the gist I got was that for a long time the Americans promised Taiwan it would be an independent country forever, but due to budgetary and other political constraints, the United States took the same stand as China from 1979. Interestingly, now it seems Mr. Trump wants to again recognize Taiwan as a separate entity from China itself, but as is with Mr. Trump you can’t be sure of why he does what he does. Is it just a manoeuvre designed to out-smart the Chinese and have a trade war, or something else? Only time will tell. One thing which hasn’t been shared in the book but came to know via the web is that Taiwan calls itself the ‘Republic of China’. If Taiwan wants to be independent then why the name ‘Republic of China’? Doesn’t that strengthen China’s claim that Taiwan is an integral part of China? I don’t understand it. The book does seduce you into thinking that the events are happening in real-time, as in happening now. That’s enough OT for now. Population Density As well in the game and whatever I could find on the web, Canada seems to be on the lower side as far as population is concerned. IIRC, a few years back, Canadians invited Indian farmers and gave them large land-holdings for over 100 years on some small pittance. While the link I have shared is from 2006, I read it online and in newspapers even as late as 2013/2014. The point being there seems to be a lot of open space in Canada, whereas in India we fight for even one inch, literally, due to overpopulation. This sharing reminded me of ‘Mark of Gideon‘. While I was young, I didn’t understand the political meaning of it and still struggle to understand whom the show was talking about. Was it India, Africa or s[...]

Shirish Agarwal: Canada and India, similarities and differences.

Mon, 20 Mar 2017 18:42:26 +0000

Weather comparisons between the two countries A few days/weeks back, I had come to know that Canada, where this year’s debconf is happening, is a cold country. I started trying to find information online when I stumbled across a few boards where people were discussing innerwear and outerwear and I couldn’t understand what that was all about. Then somehow stumbled across this game, it’s called the Long Dark, and just seeing a couple of episodes it became pretty clear to me why the people there were obsessing with getting the right clothes and everything about it. A couple of Debconf people were talking about the weather in Montreal, and surprise, surprise, it was snowing there, in fact supposed to be near the storm of the century. Was amazed to see that they have a website to track how much snow has been lifted. If we compare that to Pune, India, weather-wise we are polar opposites. There used to be a time, when I was very young, maybe 5 yrs. old, that once the weather went above 30 degrees celsius, rains would fall, but now its gonna touch 40 degrees soon. And April and May, the two hottest months, are yet to come. China Gate Before I venture further, I was gifted the book ‘China Gate‘ written by an author named William Arnold. When I read the cover and the back cover, it seemed the story was set between China and Taiwan; later when I started reading it, it shares the history of Taiwan going back 200 or so odd years. This became relevant as next year’s Debconf, Debconf 2018, will be in Taiwan, yes in Asia, very much near to India. I am ashamed to say that except for the Tiananmen Square Massacre and the Chinese High-Speed Rail there wasn’t much that I knew. According to the book, and I’m paraphrasing here, the gist I got was that for a long time the Americans promised Taiwan it would be an independent country forever, but due to budgetary and other political constraints, the United States took the same stand as China from 1979 and now it seems Mr.
Trump wants to again recognize Taiwan as a separate entity from China itself. One thing which hasn’t been shared in the book but came to know via the web is that Taiwan calls itself the ‘Republic of China’. If Taiwan wants to be independent then why the name ‘Republic of China’? Doesn’t that strengthen China’s claim that Taiwan is an integral part of China? I don’t understand it. The book does seduce you into thinking that the events are happening in real-time, as in happening now. That’s enough OT for now. Population Density As well in the game and whatever I could find on the web, Canada seems to be on the lower side as far as population is concerned. IIRC, a few years back, Canadians invited Indian farmers and gave them large land-holdings for over 100 years on some small pittance. While the link I have shared is from 2006, I read it online and in newspapers even as late as 2013/2014. The point being there seems to be a lot of open space in Canada, whereas in India we fight for even one inch, literally, due to overpopulation. This sharing reminded me of ‘Mark of Gideon‘. While I was young, I didn’t understand the political meaning of it and still struggle to understand whom the show was talking about. Was it India, Africa or some other continent they were talking about? This also becomes obvious when you figure out the surface area of the two countries. When I had started to learn about Canada, I had no idea, nor a clue, that Canada is three times the size of India. And this is when I know India is a lar[...]

Bits from Debian: DebConf17 welcomes its first eighteen sponsors!

Mon, 20 Mar 2017 14:15:00 +0000

DebConf17 will take place in Montreal, Canada in August 2017. We are working hard to provide fuel for hearts and minds, to make this conference once again fertile ground for the Debian Project to flourish. Please join us and support this landmark in the Free Software calendar. Eighteen companies have already committed to sponsor DebConf17! With a warm welcome, we'd like to introduce them to you. Our first Platinum sponsor is Savoir-faire Linux, a Montreal-based Free/Open-Source Software company which offers Linux and Free Software integration solutions and actively contributes to many free software projects. "We believe that it's an essential piece [Debian], in a social and political way, to the freedom of users using modern technological systems", said Cyrille Béraud, president of Savoir-faire Linux. Our first Gold sponsor is Valve, a company developing games, a social entertainment platform, and game engine technologies. And our second Gold sponsor is Collabora, which offers a comprehensive range of services to help its clients navigate the ever-evolving world of Open Source. As Silver sponsors we have credativ (a service-oriented company focusing on open-source software and also a Debian development partner), Mojatatu Networks (a Canadian company developing Software Defined Networking (SDN) solutions), the Bern University of Applied Sciences (with over 6,600 students enrolled, located in the Swiss capital), Microsoft (an American multinational technology company), Evolix (an IT managed services and support company located in Montreal), Ubuntu (the OS supported by Canonical) and Roche (a major international pharmaceutical provider and research company dedicated to personalized healthcare). ISG.EE, IBM, Bluemosh, Univention and Skroutz are our Bronze sponsors so far. And finally, The Linux Foundation, Réseau Koumbit and are our supporter sponsors. Become a sponsor too! Would you like to become a sponsor?
Do you know of or work in a company or organization that may consider sponsorship? Please have a look at our sponsorship brochure (or a summarized flyer), in which we outline all the details and describe the sponsor benefits. For further details, feel free to contact us through, and visit the DebConf17 website at [...]

Dirk Eddelbuettel: Rcpp 0.12.10: Some small fixes

Sun, 19 Mar 2017 13:39:00 +0000

The tenth update in the 0.12.* series of Rcpp just made it to the main CRAN repository, which by now provides GNU R with over 10,000 packages. Windows binaries for Rcpp, as well as updated Debian packages, will follow in due course. This 0.12.10 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, and the 0.12.9 release in January --- making it the fourteenth release at the steady and predictable bi-monthly release frequency. Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 975 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by sixty-nine packages over the two months since the last release -- or just over a package a day! The changes in this release are almost exclusively minor bugfixes and enhancements to documentation and features: James "coatless" Balamuta rounded out the API, Iñaki Ucar fixed a bug concerning one-character output, Jeroen Ooms allowed for finalizers on XPtr objects, Nathan Russell corrected handling of lower (upper) triangular matrices, Dan Dillon and I dealt with Intel compiler quirks for his algorithm.h header, and I added a C++17 plugin along with some (overdue!) documentation regarding the various C++ standards that are supported by Rcpp (which is in essence whatever your compiler supports, i.e., C++98, C++11, C++14 all the way to C++17, but always keep in mind what CRAN and different users may deploy). Changes in Rcpp version 0.12.10 (2017-03-17) Changes in Rcpp API: Added new size attribute aliases for number of rows and columns in DataFrame (James Balamuta in #638 addressing #630). Fixed single-character handling in Rstreambuf (Iñaki Ucar in #649 addressing #647).
XPtr gains a parameter finalizeOnExit to enable running the finalizer when R quits (Jeroen Ooms in #656 addressing #655). Changes in Rcpp Sugar: Fixed sugar functions upper_tri() and lower_tri() (Nathan Russell in #642 addressing #641). The algorithm.h file now accommodates the Intel compiler (Dirk in #643 and Dan in #645 addressing issue #640). Changes in Rcpp Attributes: The C++17 standard is supported with a new plugin (used e.g. for g++-6.2). Changes in Rcpp Documentation: An overdue explanation of how C++11, C++14, and C++17 can be used was added to the Rcpp FAQ. Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page, which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings. [...]

Petter Reinholdtsen: Free software archive system Nikita now able to store documents

Sun, 19 Mar 2017 07:00:00 +0000

The Nikita Noark 5 core project is implementing the Norwegian standard for keeping an electronic archive of government documents. The Noark 5 standard documents the requirements for data systems used by the archives in the Norwegian government, and the Noark 5 web interface specification documents a REST web service for storing, searching and retrieving documents and metadata in such an archive. I've been involved in the project since a few weeks before Christmas, when the Norwegian Unix User Group announced it supported the project. I believe this is an important project, and hope it can make it possible for the government archives in the future to use free software to keep the archives we citizens depend on. But as I do not hold such an archive myself, personally my first use case is to store and analyse public mail journal metadata published by the government. I find it useful to have a clear use case in mind when developing, to make sure the system scratches one of my itches. If you would like to help make sure there is a free software alternative for the archives, please join our IRC channel (#nikita on and the project mailing list. When I got involved, the web service could store metadata about documents. But a few weeks ago, a new milestone was reached when it became possible to store full text documents too. Yesterday, I completed an implementation of a command line tool archive-pdf to upload a PDF file to the archive using this API. The tool is very simple at the moment: it finds existing fonds, series and files, asking the user to select which one to use if more than one exists. Once a file is identified, the PDF is associated with the file and uploaded, using the title extracted from the PDF itself. The process is fairly similar to visiting the archive, opening a cabinet, locating a file and storing a piece of paper in the archive.
Here is a test run directly after populating the database with test data using our API tester:

~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446
 0 - Title of the test case file created 2017-03-18T23:49:32.103446
 1 - Title of the test file created 2017-03-18T23:49:32.103446
Select which mappe you want (or search term): 0
Uploading mangelmelding/mangler.pdf
  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
~/src//noark5-tester$

You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needs to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester. In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification. The test document I uploaded is a summary of all the specification defects we have collected so[...]
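The final upload step can be sketched as an HTTP POST. Everything specific below — the endpoint path, the port, and the title header — is a hypothetical illustration, not the actual API defined by the Noark 5 web interface specification:

```python
# Hypothetical sketch of attaching a PDF to a chosen file (mappe) over a
# REST API. The URL layout and the X-Document-Title header are invented
# for illustration; the real service defines its own resources and links.
import urllib.request

def build_upload_request(base_url, mappe_id, pdf_bytes, title):
    url = '%s/mappe/%s/dokument' % (base_url, mappe_id)  # hypothetical path
    req = urllib.request.Request(url, data=pdf_bytes, method='POST')
    req.add_header('Content-Type', 'application/pdf')
    req.add_header('X-Document-Title', title)            # hypothetical header
    return req

req = build_upload_request(
    'http://localhost:8092/api', '2017-1', b'%PDF-1.4 ...',
    'Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt')
print(req.full_url)
```

In the real tool the mappe is discovered by following the HATEOAS links rather than by constructing URLs by hand, which is the point of that API style.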

Clint Adams: Measure once, devein twice

Sun, 19 Mar 2017 04:38:03 +0000

Ophira lived in a wee house in University Square, Tampa. It had one floor, three bedrooms, two baths, a handful of family members, a couple pets, some plants, and an occasional staring contest. Mauricio lived in Lowry Park North, but Ophira wasn’t allowed to go there because Mauricio was afraid that someone would tell his girlfriend. Ophira didn’t like Mauricio’s girlfriend and Mauricio’s girlfriend did not like Ophira. Mauricio did not bring his girlfriend along when he and Ophira went to St. Pete Beach. They frolicked in the ocean water, and attempted to have sex. Mauricio and Ophira were big fans of science, so somewhat quickly they concluded that it is impossible to have sex underwater, and absconded to Ophira’s car to have sex therein. “I hate Mauricio’s girlfriend,” Ophira told Amit on the telephone. “She’s not even pretty.” “Hey, listen,” said Amit. “I’m going to a wedding on Captiva.” “Oh, my family used to go to Captiva every year. There’s bioluminescent algae and little crabs and stuff.” “Yeah? Do you want to come along? You could pick me up at the airport.” “Why would I want to go to a wedding?” “Well, it’s on the beach and they’re going to have a bouncy castle.” “A bouncy castle‽ Are you serious?” “Yes.” “Well, okay.” Amit prepared to go to the wedding and Ophira became terse then unresponsive. After he landed at RSW, he called Ophira, but instead of answering the phone she startled and fell out of her chair. Amit arranged for other transportation toward the Sanibel Causeway. Ophira bit her nails for a few hours, then went to her car and drove to Cape Coral. Ophira cruised around Cape Coral for a while, until she spotted a teenager cleaning a minivan. She parked her car and approached him. “Whatcha doing?” asked Ophira, pretending to chew on imaginary gum. The youth slid the minivan door open. “I’m cleaning,” he said hesitantly. “Didn’t your parents teach you not to talk to strangers?
I could do all kinds of horrible things to you.” They conversed for a bit. She recounted a story of her personal hero, a twelve-year-old girl who seduced and manipulated older men into ruin. She rehashed the mysteries of Mauricio’s girlfriend. She waxed poetic on her love of bouncy castles. The youth listened, hypnotized. “What’s your name, kid?” Ophira yawned. “Arjun,” he replied. “How old are you?” Arjun thought about it. “15,” he said. “Hmm,” Ophira stroked her chin. “Can you sneak me into your room so that your parents never find out about it?” Arjun’s eyes went wide. MEANWHILE, on Captiva Island, Amit had learned that even though the Tenderly had multiple indoor jacuzzis, General Fitzpatrick and Mrs. Fitzpatrick had decided it prudent to have sex in the hot tub on the deck; that the execution of this plan had somehow necessitated a lengthy cleaning process before the hot tub could be used again; that that’s why workmen were cleaning the hot tub; and that the Fitzpatrick children had gotten General Fitzpatrick and Mrs. Fitzpatrick to agree to not do that again, with an added suggestion that they not be seen doing anything else naked in public. A girl walked up to Amit. “Hey, I heard you lost your plus-one. Are you here alone? What a loser!” she giggled nervously, then stared. “Leave me alone, Darlene,” sighed Amit. Darlene’s face reddened as she spun[...]

Vincent Sanders: A rose by any other name would smell as sweet

Sat, 18 Mar 2017 13:01:15 +0000

Often I end up dealing with code that works but might not be of the highest quality. While quality is subjective, I like to use the idea of "code smell" to convey what I mean: a list of indicators that, in total, help to identify code that might benefit from some improvement.

Such smells may include:

- Complex code lacking comments on intended operation
- Code lacking API documentation comments, especially for interfaces used outside the local module
- Not following the style guide
- Inconsistent style
- Inconsistent indentation
- Poorly structured code
- Overly long functions
- Excessive use of the pre-processor
- Many nested loops and control flow clauses
- Excessive numbers of parameters

I am most certainly not alone in using this approach, and Fowler et al. have covered this subject in the literature much better than I can here. One point I will raise, though, is that some programmers dismiss code that exhibits these traits as "legacy" and immediately suggest a fresh implementation. There are varying opinions on when a rewrite is the appropriate solution, from never to always, but in my experience making the old working code smell nice is almost always less effort and risk than a rewrite.

Tests

When I come across smelly code, and I decide it is worthwhile improving it, I often discover the biggest smell is lack of test coverage. Do remember this is just one code smell and on its own might not be indicative; my experience is that smelly code seldom has effective test coverage while fresh code often does.

Test coverage is generally understood to be the percentage of source code lines and decision paths used when instrumented code is exercised by a set of tests. Like many metrics developer tools produce, "coverage percentage" is often misused by managers as a proxy for code quality.
Both Fowler and Marick have written about this, but suffice it to say that for a developer test coverage is a useful tool which should not be misapplied.

Although refactoring without tests is possible, the chances of unintended consequences are proportionally higher. I often approach such a refactor by enumerating all the callers and constructing a description of the used interface beforehand, then checking that that interface is not broken by the refactor. At that point it is probably worth writing a unit test to automate the checks.

Because of this I have changed my approach to such refactoring to start by ensuring there is at least basic API code coverage. This may not yield the fashionable 85% coverage target but is useful and may be extended later if desired.

It is widely known, and equally widely ignored, that for maximum effectiveness unit tests must be run frequently and developers must take action to rectify failures promptly. A test that is not being run or acted upon is a waste of resources, both to implement and maintain, which might be better spent elsewhere.

For projects I contribute to frequently I try to ensure that the CI system is running the coverage target, and hence the unit tests, which automatically ensures any test-breaking changes will be highlighted promptly. I believe the slight extra overhead of executing the instrumented tests is repaid by having the coverage metrics available to the developers to aid in spotting areas with inadequate tests.

Example

A short example will help illustrate my point. When a web browser receives an object over HTTP the server can supply a MIME type in a content-type header th[...]
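The "describe the used interface beforehand, then automate the checks" step can be sketched as a characterization test. Everything below is invented for illustration (the function stands in for working-but-smelly legacy code and is not from the post):

```python
import unittest

# Hypothetical legacy helper: maps a filename extension to a MIME type.
# Invented for this sketch, not taken from any real browser code.
def guess_mime_type(filename):
    known = {".html": "text/html", ".png": "image/png", ".css": "text/css"}
    for ext, mime in known.items():
        if filename.endswith(ext):
            return mime
    return "application/octet-stream"

class TestGuessMimeType(unittest.TestCase):
    """Characterization tests: pin down the behaviour callers currently
    rely on *before* refactoring, so any regression shows up at once."""

    def test_known_extensions(self):
        self.assertEqual(guess_mime_type("index.html"), "text/html")
        self.assertEqual(guess_mime_type("logo.png"), "image/png")

    def test_unknown_extension_falls_back(self):
        self.assertEqual(guess_mime_type("data.tar.xz"),
                         "application/octet-stream")

# Run the suite explicitly so the file doubles as a quick check.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGuessMimeType)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once such tests exist and run in CI, the refactor can proceed with the interface contract enforced automatically.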

Shirish Agarwal: Science Day at GMRT, Khodad 2017

Fri, 17 Mar 2017 19:19:29 +0000

The above picture is a blend of the two communities, the FOSS community and Mozilla India. Unless you were there you wouldn’t know who is from which community, which is what FOSS is all about. But as always I’m getting a bit ahead of myself. Akshat, who works at NCRA as a programmer (the standing guy on the left), shared with me in January this year that this year too we should have two stalls, the FOSS community and Mozilla India stalls, next to each other. While we had the banners, we were missing stickers and flyers. Funds were and are always an issue, and this year too it would have been emptier if we hadn’t had some money saved from the MiniDebConf 2016 we had in Mumbai. Our major expenses included printing stickers, stationery and flyers, which came to around INR 5000/-, and a couple of LCD TV monitors, which came to around INR 2k/- as rent. All the labour was voluntary in nature, but both Akshat and I easily spent up to 100 hours before the event. Next year we want to raise around INR 10-15k so we can buy one or two LCD monitors and not have to think about funds for the next couple of years. How we will do that I have no idea at the moment. Akshat and I did all the printing and stationery runs, and hence I had not been using my lappy for about 3-4 days. Come the evening before the event and the laptop would not start. Coincidentally or not, a few months ago, or even at last year’s DebConf, people had commented on IBM/Lenovo’s obsession with proprietary power cords and adaptors. I hadn’t given it much thought, but when I got no power even after putting it on AC power for 3-4 hours, I looked it up on the web and saw that the power cords and power adaptors are all different, even within the T440 line and even among existing models. In fact I couldn’t find mine, hence sharing it via pictures below.
I knew/suspected that ThinkPads would be rare where I was going; it would be rarer still to find the exact power cord, and I was unsure whether the power cord was at fault, or the adaptor, or whatever goes for the SMPS in a laptop, or the memory, or the motherboard/CPU itself. I did look up the documentation and was surprised at how extensive Lenovo's remote troubleshooting documentation is. I did the usual: take out the battery, put it back in, twiddle with the little hole in the bottom of the laptop, try to switch on without the battery on AC mains, try to switch on with battery power only, but nothing worked. A couple of hours had gone by and with a resigned thought I went to bed, convincing myself that anyway it's good I am not taking the lappy as it is extra-dusty there and who needs a dead laptop anyway. Update – After the event was over, I did contact Lenovo support and within a week, with one visit from a service engineer, he was able to identify that a faulty cable was the problem and not the other things which I was afraid of. Another week went by and Lenovo replaced the cable. Going by the service standards I have seen from other companies, Lenovo deserves a gold star here for the prompt service they provided. I will probably end up subscribing to their extended 2-year warranty service when my existing 3-year warranty is about to be over. Next day, I woke up early in the morning; two students from the COEP hostel were volunteering and we made our way to NCRA, Pune University Campus. Iro[...]

Antonio Terceiro: Patterns for Testing Debian Packages

Fri, 17 Mar 2017 01:23:33 +0000

At the end of 2016 I had the pleasure to attend the 11th Latin American Conference on Pattern Languages of Programs, a.k.a. SugarLoaf PLoP. PLoP is a series of conferences on Patterns (as in “Design Patterns”), a subject that I appreciate a lot. Each of the PLoP conferences except the original, main “big” conference has a funny name. SugarLoaf PLoP is called that way because its very first edition was held in Rio de Janeiro, so the organizers named it after a very famous mountain in Rio. The name stuck even though a long time has passed since it was last held in Rio. 2016 was actually the first time SugarLoaf PLoP was held outside of Brazil, finally justifying the “Latin American” part of its name. I was presenting a paper I wrote on patterns for testing Debian packages. The Debian project funded my travel expenses through the generous donations of its supporters. PLoPs are very fun conferences with a relaxed atmosphere, and it is amazing how many smart (and interesting!) people gather together for them. My paper is titled “Patterns for Writing As-Installed Tests for Debian Packages”, and has the following abstract: Large software ecosystems, such as GNU/Linux distributions, demand a large amount of effort to make sure all of their components work correctly individually, and also integrate correctly with each other to form a coherent system. Automated Quality Assurance techniques can prevent issues from reaching end users. This paper presents a pattern language originated in the Debian project for automated software testing in production-like environments. Such environments are closer in similarity to the environment where software will be actually deployed and used, as opposed to the development environment under which developers and regular Continuous Integration mechanisms usually test software products.
The pattern language covers the handling of issues arising from the difference between development and production-like environments, as well as solutions for writing new, exclusive tests for as-installed functional testing. Even though the patterns are documented here in the context of the Debian project, they can also be generalized to other contexts. In practical terms, the paper documents a set of patterns I have noticed in the last few years, while I have been pushing the Debian Continuous Integration project. It should be an interesting read for people interested in the testing of Debian packages in their installed form, as done with autopkgtest. It should also be useful for people from other distributions interested in the subject, as the issues are not really Debian-specific. I have recently finished the final version of the paper, which should be published in the ACM Digital Library any time now. You can download a copy of the paper in PDF. Source is also available, if you are into markdown, LaTeX, makefiles and this sort of thing. If everything goes according to plan, I should be presenting a talk on this at the next DebConf in Montreal. [...]

Thorsten Glaser: Updates to the last two posts

Thu, 16 Mar 2017 23:12:00 +0000


Someone from the FSF’s licencing department posted an official-looking statement saying they don’t believe GitHub’s new ToS to be problematic with copyleft. Well, my lawyer (not my personal one, nor The MirOS Project’s, but one related to another association, informally) does agree with my reading of the new ToS, and I can point out at least one clause in the GPLv1 (I really don’t have time right now) which says the contrary (but does this mean the FSF generally waives the restrictions of the GPL for anything on GitHub?). I’ll eMail GitHub Legal directly and will try to continue getting this fixed (as soon as I have enough time for it), as I’ll otherwise be forced to force GitHub to remove stuff from me (but with someone else as original author) under GPL, such as… tinyirc and e3.

My dbconfig-common Debian packaging example got a rather hefty upgrade because dbconfig-common (unlike any other DB schema framework I know of) doesn’t apply the upgrades on a fresh install (and doesn’t automatically put the upgrades into a transaction either) but only upgrades between Debian package versions (which can be funny with backports, but AFAICT that part is handled correctly). I now append the upgrades to the initial-version-as-seen-in-the-source to generate the initial-version-as-shipped-in-the-binary-package (optionally, only if it’s named .in) removing all transaction stuff from the upgrade files and wrapping the whole shit in BEGIN; and COMMIT; after merging. (This should at least not break nōn-PostgreSQL databases and… well, database-like-ish things I cannot test for obvious (SQLite is illegal, at least in Germany, but potentially worldwide, and then PostgreSQL is the only remaining Open Source database left ;) reasons.)
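The append-and-wrap step described above can be sketched in a few lines. This is a Python sketch of the idea only, with names of my own invention; the actual packaging example is not this code:

```python
import re

# Lines consisting solely of "BEGIN;" or "COMMIT;", case-insensitive.
_TXN = re.compile(r"^\s*(BEGIN|COMMIT)\s*;\s*$", re.IGNORECASE | re.MULTILINE)

def merge_schema(initial_sql, upgrade_sqls):
    """Append each upgrade to the initial schema, stripping the per-file
    transaction statements, then wrap the merged result in one
    BEGIN;/COMMIT; pair so a fresh install applies everything atomically."""
    parts = [initial_sql] + [_TXN.sub("", sql) for sql in upgrade_sqls]
    body = "\n".join(p.strip() for p in parts if p.strip())
    return "BEGIN;\n%s\nCOMMIT;\n" % body

merged = merge_schema(
    "CREATE TABLE t (id INT);",
    ["BEGIN;\nALTER TABLE t ADD COLUMN name TEXT;\nCOMMIT;"],
)
# Exactly one transaction wraps the whole merged script.
assert merged.count("BEGIN;") == 1 and merged.count("COMMIT;") == 1
```

The point of stripping the inner BEGIN/COMMIT pairs is that nested transactions are not portable across the database engines dbconfig-common supports, while a single outer transaction keeps a fresh install all-or-nothing.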

Update: Yes, this does mean that maintainers of databases and webservers should send me patches to make this work with not-PostgreSQL (new install/, upgrade files) and not-Apache-2.2/2.4 (new debian/*/*.conf snippets) to make this packaging example even more generally usable.

Natureshadow already forked this and made a Python/Flask package from it, so I’ll prod him to provide a similarly versatile hello-python-world example package.

Joey Hess: end of an era

Thu, 16 Mar 2017 22:14:06 +0000


I'm at home downloading hundreds of megabytes of stuff. This is the first time I've been in the position of "at home" + "reasonably fast internet" since I moved here in 2012. It's weird!


While I was renting here, I didn't mind dialup much. In a way it helps to focus the mind and build interesting stuff. But since I bought the house, the prospect of having only dialup at home indefinitely became more painful.

While I hope to get on the fiber line that's only a few miles away eventually, I have not convinced that ISP to build out to me yet. Not enough neighbors. So, satellite internet for now.



Dish seems well aligned, speed varies a lot, but is easily hundreds of times faster than dialup. Latency is 2x dialup.

The equipment uses more power than my laptop, so with the current solar panels, I anticipate using it only 6-9 months of the year. So I may be back to dialup most days come winter, until I get around to adding more PV capacity.

It seems very cool that my house can capture sunlight and use it to beam signals 20 thousand miles into space. Who knows, perhaps there will even be running water one day.


Raphaël Hertzog: Freexian’s report about Debian Long Term Support, February 2017

Thu, 16 Mar 2017 13:25:21 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, about 154 work hours have been dispatched among 13 paid contributors. Their reports are available:

- Antoine Beaupré did 3 hours (out of 13 hours allocated, thus keeping 10 extra hours for March).
- Balint Reczey did 13 hours (out of 13 hours allocated + 1.25 hours remaining, thus keeping 1.25 hours for March).
- Ben Hutchings did 19 hours (out of 13 hours allocated + 15.25 hours remaining, he gave back the remaining hours to the pool).
- Chris Lamb did 13 hours.
- Emilio Pozuelo Monfort did 12.5 hours (out of 13 hours allocated, thus keeping 0.5 hours for March).
- Guido Günther did 8 hours.
- Hugo Lefeuvre did nothing and gave back his 13 hours to the pool.
- Jonas Meurer did 14.75 hours (out of 5 hours allocated + 9.75 hours remaining).
- Markus Koschany did 13 hours.
- Ola Lundqvist did 4 hours (out of 13 hours allocated, thus keeping 9 hours for March).
- Raphaël Hertzog did 3.75 hours (out of 10 hours allocated, thus keeping 6.25 hours for March).
- Roberto C. Sanchez did 5.5 hours (out of 13 hours allocated + 0.25 hours remaining, thus keeping 7.75 hours for March).
- Thorsten Alteholz did 13 hours.

Evolution of the situation

The number of sponsored hours increased slightly thanks to Bearstech and LiHAS joining us.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file 39. The number of open issues continued its slight increase; this time it can be explained by the fact that many contributors did not spend all the hours allocated (for various reasons). There's nothing worrisome at this point.

Thanks to our sponsors

New sponsors are in bold.
Platinum sponsors:

- TOSHIBA (for 17 months)
- GitHub (for 8 months)

Gold sponsors:

- The Positive Internet (for 33 months)
- Blablacar (for 32 months)
- Linode LLC (for 22 months)
- Babiel GmbH (for 11 months)
- Plat’Home (for 11 months)

Silver sponsors:

- Domeneshop AS (for 32 months)
- Université Lille 3 (for 32 months)
- Trollweb Solutions (for 30 months)
- Nantes Métropole (for 26 months)
- University of Luxembourg (for 24 months)
- Dalenys (for 23 months)
- Univention GmbH (for 18 months)
- Université Jean Monnet de St Etienne (for 18 months)
- Sonus Networks (for 12 months)
- UR Communications BV (for 6 months)
- maxcluster GmbH (for 6 months)
- Exonet B.V.

Bronze sponsors:

- David Ayers – IntarS Austria (for 33 months)
- Evolix (for 33 months)
- Offensive Security (for 33 months)
- , a.s. (for 33 months)
- Freeside Internet Service (for 32 months)
- MyTux (for 32 months)
- Linuxhotel GmbH (for 30 months)
- Intevation GmbH (for 29 months)
- Daevel SARL (for 28 months)
- Bitfolk LTD (for 27 months)
- Megaspace Internet Services GmbH (for 27 months)
- Greenbone Networks GmbH (for 26 months)
- NUMLOG (for 26 months)
- WinGo AG (for 25 months)
- Ecole Centrale de Nantes – LHEEA (for 22 months)
- Sig-I/O (for 19 months)
- Entr’ouvert (for 17 months)
- Adfinis SyGroup AG (for 14 months)
- Laboratoire LEGI – UMR 5519 / CNRS (for 9 months)
- Quarantainenet BV (for 9 months)
- GNI MEDIA (for 8 months)
- RHX Srl (for 6 months)
- LiHAS
- Bearstech
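The bookkeeping in the individual reports above follows one small formula; a trivial sketch (the function name is mine):

```python
def carried_over(allocated, carried_in, worked):
    """Hours a contributor keeps for the next month:
    this month's allocation, plus hours carried in, minus hours worked."""
    return allocated + carried_in - worked

# Figures from the report above:
assert carried_over(13, 0, 3) == 10        # Antoine Beaupré, kept for March
assert carried_over(13, 15.25, 19) == 9.25  # Ben Hutchings, returned to the pool
```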

Enrico Zini: Django signing signs, does not encrypt

Thu, 16 Mar 2017 11:01:00 +0000

As it says in the documentation, django.core.signing signs; it does not encrypt.

Even though signing.dumps creates obscure-looking tokens, they are not encrypted, and here's a proof:

>>> from django.core import signing
>>> a = signing.dumps({"action":"set-password", "username": "enrico", "password": "SECRET"})
>>> from django.utils.encoding import force_bytes
>>> print(signing.b64_decode(force_bytes(a.split(":",1)[0])))

I'm writing it down so one day I won't be tempted to think otherwise.
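The same point can be made with nothing but the standard library: a signed token carries its payload in the clear, and the secret is needed only to create or verify the signature, never to read the data. The token format below is my own toy, loosely modelled on signed tokens in general; it is not Django's actual wire format:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # invented for this sketch

def sign(payload):
    """Sign (not encrypt!) a payload: base64(json) + ":" + HMAC tag."""
    data = base64.urlsafe_b64encode(json.dumps(payload).encode())
    tag = hmac.new(SECRET, data, hashlib.sha256).hexdigest()
    return data.decode() + ":" + tag

token = sign({"action": "set-password", "username": "enrico"})

# Anyone can recover the payload without ever seeing SECRET:
payload = json.loads(base64.urlsafe_b64decode(token.split(":", 1)[0]))
assert payload == {"action": "set-password", "username": "enrico"}
```

The signature only lets the server detect tampering; confidentiality needs actual encryption on top.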

Wouter Verhelst: Codes of Conduct

Thu, 16 Mar 2017 08:02:08 +0000

These days, most large FLOSS communities have a "Code of Conduct": a document that outlines the acceptable (and possibly not acceptable) behaviour that contributors to the community should or should not exhibit. By writing such a document, a community can arm itself more strongly in the fight against trolls, harassment, and other forms of antisocial behaviour that are rampant on the anonymous medium that the Internet still is. Writing a good code of conduct is no easy matter, however. I should know -- I've been involved in such a process twice; once for Debian, and once for FOSDEM. While I was the primary author of the Debian code of conduct, the same is not true for the FOSDEM one; I was involved, and I did comment on a few early drafts, but the core of FOSDEM's current code was written by another author. I had wanted to write a draft myself, but then this one arrived and I didn't feel like I could improve it, so it remained. While it's not easy to come up with a Code of Conduct, there (luckily) are others who walked this path before you. On the "geek feminism" wiki, there is an interesting overview of existing Open Source community and conference codes of conduct, and reading one or more of them can provide one with some inspiration as to things to put in one's own code of conduct. That wiki page also contains a paragraph "Effective codes of conduct", which says (amongst other things) that a good code of conduct should include "specific descriptions of common but unacceptable behaviour (sexist jokes, etc.)". The attentive reader will notice that such specific descriptions are noticeably absent from both the Debian and the FOSDEM codes of conduct. This is not because I hadn't seen the above recommendation (I had); it is because I disagree with it. I do not believe that adding a list of "don't"s to a code of conduct is a net positive. Why, I hear you ask? Surely having a list of things that are not welcome behaviour is a good thing, which should be encouraged?
Surely such a list clarifies the kind of things your community does not want to see? Having such a list will discourage that bad behaviour, right? Well, no, I don't think so. And here's why.

Enumerating badness

A list of things not to do is like a virus scanner. For those not familiar with these: on some operating systems, there is a specific piece of software that everyone recommends you run, which checks if particular blobs of data appear in files on the disk. If they do, then those files are assumed to be bad, and are kicked out. If they do not, then those files are assumed to be not bad, and are left alone (for the most part). This works if we know all the possible types of badness; but as soon as someone invents a new form of badness, suddenly your virus scanner is ineffective. Additionally, it means you're bound to continually update your virus scanner (or, as the case may be, your code of conduct) against a continually changing hostile world. For these (and other) reasons, enumerating badness is listed as number 2 in security expert Marcus Ranum's "six dumbest ideas in computer security", which was written in 2005. In short, a list of "things not to do" is bound to be incomplete; if the goal i[...]
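The virus-scanner analogy boils down to a couple of lines (the "signatures" here are invented):

```python
# A blocklist catches only the badness it already enumerates.
KNOWN_BAD = {"evil-payload-v1", "evil-payload-v2"}

def scanner_flags(blob):
    """True only for blobs matching a known signature."""
    return blob in KNOWN_BAD

assert scanner_flags("evil-payload-v1")      # known badness: caught
assert not scanner_flags("evil-payload-v3")  # novel badness: waved through
```

Every new variant requires updating the list, which is exactly the maintenance treadmill described above.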

Ben Hutchings: Debian LTS work, February 2017

Thu, 16 Mar 2017 04:44:35 +0000


I was assigned 13 hours of work by Freexian's Debian LTS initiative and carried over 15.25 from January. I worked 19 hours and have returned the remaining 9.25 hours to the general pool.

I prepared a security update for the Linux kernel and issued DLA-833-1. However, I spent most of my time catching up with a backlog of fixes for the Linux 3.2 longterm stable branch. I issued two stable updates (3.2.85, 3.2.86).

Dirk Eddelbuettel: RcppEigen

Wed, 15 Mar 2017 11:40:00 +0000


A new maintenance release of RcppEigen, still based on Eigen 3.2.9, is now on CRAN and will be going into Debian soon.

This update ensures that RcppEigen and the Matrix package agree on their #define statements for the CholMod / SuiteSparse library. Thanks to Martin Maechler for the pull request. I also added a file src/init.c as now suggested (soon: requested) by the R CMD check package validation.

The complete NEWS file entry follows.

Changes in RcppEigen version (2017-03-14)

  • Synchronize CholMod header file with Matrix package to ensure binary compatibility on all platforms (Martin Maechler in #42)

  • Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols(); also use .registration=TRUE in useDynLib in NAMESPACE

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Michal Čihař: Life of free software project

Wed, 15 Mar 2017 11:00:28 +0000

During the last week I've noticed several interesting posts about the challenges of being a free software maintainer. After being active in open source for 16 years I share many of the feelings I've read about, and I can also share how I deal with these things. First of all let me link some of the other posts on the topic:

- What it feels like to be an open-source maintainer by Nolan Lawson
- Time to leave by Mikeal Rogers
- Sustainable Open Source: The Maintainers Perspective or: How I Learned to Stop Caring and Love Open Source by Jan Lehnardt

I guess everybody involved in some popular free software project knows it - there is much more work to be done than the people behind the project can handle. It really doesn't matter whether it's bug reports, support requests, new features or technical debt; there is simply too much of it. If you are the only one behind the project it can feel even more pressing. There are several approaches for dealing with that, but you have to choose what you prefer and what is going to work for you and your project. I've used all of the approaches mentioned below on some of my projects, but I don't think there is a silver bullet.

Finding more people

Obviously if you cannot cope with the work, find more people to do it. Unfortunately it's not that easy. Sometimes people come by and contribute a few patches, but it's not easy to turn them into regular contributors. You should encourage them to stay and to care about the part of the project they have touched. You can try to attract completely new contributors through programs such as Google Summer of Code (GSoC) or Outreachy, but that has its own challenges as well. With phpMyAdmin we participate regularly in GSoC (we only missed last year, as we were not chosen by Google that year) and it indeed helps to bring new people on board. Many of them even stay around the project (currently 3 of 5 phpMyAdmin team members are former GSoC students).
But I think this approach really works only for bigger organizations. You can also motivate people with money. This is not used much in free software projects, partly because of a lack of funding (I'll get to that later) and partly because it doesn't necessarily bring long-term contributors, just cash hunters. I've been using Bountysource for some of my projects (Weblate and Gammu) and so far it mostly works the other way around - if somebody posts a bounty on an issue, it means the issue is quite important for them to get fixed, so I use that as an indication for myself. For attracting new developers it never really worked well, even when I tried to post bounties on some easy-to-fix issues where newbies could learn our code base and get paid for it. These issues stayed open for months, and in the end I fixed them myself because they annoyed me.

Don't care too much

I think this is the most important aspect - you simply can never fix all the problems. Let's face it and work accordingly. There can be various levels of not caring. I find it always better to try to encourage people to fix their own problem, but you can't expect a big success rate with that, so y[...]

Bits from Debian: Build Android apps with Debian: apt install android-sdk

Wed, 15 Mar 2017 11:00:00 +0000

In Debian stretch, the upcoming new release, it is now possible to build Android apps using only packages from Debian. This provides all of the tools needed to build an Android app targeting the "platform" android-23 using the SDK build-tools 24.0.0. Those two are the only versions of "platform" and "build-tools" currently in Debian, but it is possible to use the Google binaries by installing them into /usr/lib/android-sdk.

This doesn't yet cover all of the libraries that are used in an app, like the Android Support libraries, or all of the other myriad libraries that are usually fetched from jCenter or Maven Central. One big question for us is whether and how libraries should be included in Debian. All the Java libraries in Debian can be used in an Android app, but including something like Android Support in Debian would be strange since they are only useful in an Android app, never for a Debian app.

Building apps with these packages

Here are the steps for building Android apps using Debian's Android SDK on stretch:

- sudo apt install android-sdk android-sdk-platform-23
- export ANDROID_HOME=/usr/lib/android-sdk
- In build.gradle, set compileSdkVersion to 23 and buildToolsVersion to 24.0.0
- run gradle build

The Gradle Android Plugin is also packaged. Using the Debian package instead of the one from online Maven repositories requires a little configuration before running gradle. In the buildscript block:

- add maven { url 'file:///usr/share/maven-repo' } to repositories
- use compile '' to load the plugin

Currently only the target platform of API Level 23 is packaged, so only apps targeted at android-23 can be built with only Debian packages. There are plans to add more API platform packages via backports. Only build-tools 24.0.0 is available, so in order to use the SDK, build scripts need to be modified. Beware that the Lint in this version of the Gradle Android Plugin is still problematic, so running the :lint tasks might not work.
They can be turned off with lintOptions.abortOnError in build.gradle. Google binaries can be combined with the Debian packages, for example to use a different version of the platform or build-tools.

Why include the Android SDK in Debian?

While Android developers could develop and ship apps right now using these Debian packages, this is not very flexible since only build-tools 24.0.0 and the android-23 platform are available. Currently, the Debian Android Tools Team is not aiming to cover the most common use cases. Those are pretty well covered by Google's binaries (except for the proprietary license on the Google binaries), and would be the most work for the Android Tools Team to cover. The current focus is on use cases that are poorly covered by the Google binaries, for example where only specific parts of the whole SDK are used. Here are some examples:

- tools for security researchers, forensics, reverse engineering, etc. which can then be included in live CDs and distros like Kali Linux
- a hardened APK signing server using apksigner that uses [...]

Keith Packard: Valve

Tue, 14 Mar 2017 19:10:17 +0000


Consulting for Valve in my spare time

Valve Software has asked me to help work on a couple of Linux graphics issues, so I'll be doing a bit of consulting for them in my spare time. It should be an interesting diversion from my day job working for Hewlett Packard Enterprise on Memory Driven Computing and other fun things.

First thing on my plate is helping support head-mounted displays better by getting the window system out of the way. I spent some time talking with Dave Airlie and Eric Anholt about how this might work and have started on the kernel side of that. A brief synopsis is that we'll split off some of the output resources from the window system and hand them to the HMD compositor to perform mode setting and page flips.

After that, I'll be working out how to improve frame timing reporting back to games from a composited desktop under X. Right now, a game running on X with a compositing manager can't tell when each frame was shown, nor accurately predict when a new frame will be shown. This makes smooth animation rather difficult.
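If the compositor did report presentation times, the prediction a game wants is simple arithmetic. The sketch below is mine, with a fixed-refresh assumption; it is not any actual X or kernel API:

```python
def predict_next_present(last_present_us, refresh_period_us, now_us):
    """Next vblank-aligned presentation time after now_us, assuming a
    fixed refresh rate. Without the compositor reporting
    last_present_us, a game can only guess -- which is the problem."""
    elapsed = now_us - last_present_us
    periods_ahead = elapsed // refresh_period_us + 1
    return last_present_us + periods_ahead * refresh_period_us

# At 60 Hz (~16667 us per frame), 40000 us after a known present:
assert predict_next_present(0, 16667, 40000) == 50001
```

Knowing the next presentation time lets the game advance its animation by exactly the right timestep instead of guessing.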

John Goerzen: Parsing the GOP’s Health Insurance Statistics

Tue, 14 Mar 2017 15:35:05 +0000

There has been a lot of noise lately about the GOP health care plan (AHCA) and the differences to the current plan (ACA or Obamacare). A lot of statistics are being misinterpreted.

The New York Times has an excellent analysis of some of this. But to pick it apart, I want to highlight a few things:

Many Republicans are touting the CBO’s estimate that, some years out, premiums will be 10% lower under their plan than under the ACA. However, this carries with it a lot of misleading information.

First of all, many are spinning this as if costs would go down. That's not the case. Premiums would still rise; they would simply have risen less by the end of the period than under the ACA. That also ignores the immediate spike, and the millions thrown out of the insurance marketplace altogether.

Now then, where does this 10% number come from? First of all, you have to understand that older people are substantially more expensive to the health system, and therefore more expensive to insure. The ACA limited the price differential from the youngest to the oldest people, which meant that in effect some young people were subsidizing older ones on the individual market. The GOP plan removes that limit. Combined with other changes in subsidies and tax credits, this dramatically increases the cost to older people. For instance, the New York Times article cites a CBO estimate that “the price an average 64-year-old earning $26,500 would need to pay after using a subsidy would increase from $1,700 under Obamacare to $14,600 under the Republican plan.”

They further conclude that these exceptionally high rates would be so unaffordable to older people that the older people will simply stop buying insurance on the individual market. This means that the overall risk pool of people in that market is healthier, and therefore the average price is lower.

So, to sum up: the reason that insurance premiums under the GOP plan will rise at a slightly slower rate long-term is that the higher-risk people will be unable to afford insurance in the first place, leaving only the cheaper people to buy in.
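The risk-pool effect is easy to see with toy numbers (these are entirely made up for illustration, not from the CBO):

```python
# Made-up annual premiums: four younger buyers and two older ones.
younger = [2000, 2500, 3000, 3500]
older = [14000, 15000]

def average(premiums):
    return sum(premiums) / len(premiums)

full_pool = average(younger + older)   # everyone insured
shrunk_pool = average(younger)         # older buyers priced out

# The average falls even though no individual premium went down:
assert shrunk_pool < full_pool
```

A lower average premium can therefore coexist with every individual being no better off, because it is a selection effect, not a price cut.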

Reproducible builds folks: Reproducible Builds: week 98 in Stretch cycle

Tue, 14 Mar 2017 06:41:54 +0000

Here's what happened in the Reproducible Builds effort between Sunday March 5 and Saturday March 11 2017:

Upcoming events

- On March 23rd Holger Levsen will give a talk at the German Unix User Group's "Frühjahrsfachgespräch" called Reproducible Builds everywhere.
- Verifying Software Freedom with Reproducible Builds will be presented by Vagrant Cascadian at LibrePlanet 2017 in Boston, March 25th.
- You, too, can write reproducible software!, a workshop by Ximin Luo, Vagrant Cascadian and Valerie Young at LibrePlanet 2017 in Boston, March 25th.

Reproducible Builds Hackathon Hamburg

The Reproducible Builds Hamburg Hackathon 2017, or RB-HH-2017 for short, is a 3-day hacking event taking place in the CCC Hamburg hackerspace located inside the Frappant, a collective art space in a historical monument in Hamburg, Germany. The aim of the hackathon is to spend some days working on Reproducible Builds in every distribution and project. The event is open to anybody interested in working on Reproducible Builds issues in any distro or project, with or without prior experience!

Packages filed

Chris Lamb:

- #856834 filed against tendermint-go-rpc.
- #856860 filed against archvsync.
- #857122 filed against python-gdata (sent upstream)
- #857313 filed against cylc.
- #857454 filed against qtltools.

Toolchain development

- Guillem Jover uploaded dpkg 1.18.23 to unstable, declaring .buildinfo format 1.0 as "stable".
- James McCoy uploaded devscripts 2.17.2 to unstable, adding support for .buildinfo files to the debsign utility via patches from Ximin Luo and Guillem Jover.
- Hans-Christoph Steiner noted that the first reproducibility-related patch in the Android SDK was marked as confirmed.

Reviews of unreproducible packages

39 package reviews have been added, 7 have been updated and 9 have been removed this week, adding to our knowledge about identified issues.
2 issue types have been added: randomness_in_binaries_generated_by_ruby_mkmf build_dir_in_documentation_generated_by_doxygen Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and reported by: Chris Lamb (2) development Chris Lamb: Show reproducible status of package. Inject operator.getitem as a template filter. Include version in BuildInfo's __unicode__ reproducible-website development Holger Levsen: Add first batch of post-it notes as pics Add FAQ feedback pad Update report to latest version from Beatrice Polish diffoscope pad Add (unverified) Berlin summit pads from riseup, without interlinks yet Hans-Christoph Steiner gave a progress report on testing F-Droid: we now have a complete vagrant workflow working in nested KVM! So we can provision a new KVM guest, then package it using vagrant box all inside of a KVM guest (which is a profitbricks build node). So we finally have a working setup on Next up is fixing bugs in our libvirt snapshotting support. Then Hans-Christoph was also able to [...]

Sean Whitton: Initial views of 5th edition DnD

Mon, 13 Mar 2017 23:37:02 +0000

I’ve been playing in a 5e campaign for around two months now. In the past ten days or so I’ve been reading various source books and Internet threads regarding the design of 5th edition. I’d like to draw some comparisons and contrasts between 5th edition and the 3rd edition family of games (DnD 3.5e and Paizo’s Pathfinder, which may be thought of as 3.75e). The first thing I’d like to discuss is that wizards and clerics are no longer Vancian spellcasters. In rules terms, this is the idea that individual spells are pieces of ammunition. Spellcasters have a list of individual spells stored in their heads, and as they cast spells from that list, they cross off each item. Barring special rules about spontaneously converting prepared spells to healing spells, for clerics, the only way to add items back to the list is to take a night’s rest. Contrast this with spending points from a pool of energy in order to use an ability to cast a fireball. Then the limiting factor on using spells is having enough points in your mana pool, not having further castings of the spell waiting in memory. One of the design goals of 5th edition was to reduce the dominance of spellcasters at higher levels of play. The article to which I linked in the previous paragraph argues that this rebalancing requires the removal of Vancian magic. The idea, to the extent that I’ve understood it, is that Vancian magic is not an effective restriction on spellcaster power levels, so it is to be replaced with other restrictions—adding new restrictions while retaining the restrictions inherent in Vancian magic would leave spellcasters crippled. A further reason for removing Vancian magic was to defeat the so-called “five minute adventuring day”. The combat ability of a party that contains higher level Vancian spellcasters drops significantly once they’ve fired off their most powerful combat spells. 
So adventuring groups would find themselves getting into a fight, and then immediately retreating to fully rest up in order to get their spells back. This removes interesting strategic and roleplaying possibilities involving the careful allocation of resources, and continuing to fight as hit points run low. There are some other related changes. Spell components are no longer used up when casting a spell. So you can use one piece of bat guano for every fireball your character ever casts, instead of each casting requiring a new piece. Correspondingly, you can use a spell focus, such as a cool wand, instead of a pouch full of material components—since the pouch never runs out, there’s no mechanical change if a wizard uses an arcane focus instead. 0th level spells may now be cast at will (although Pathfinder had this too). And there are decent 0th level attack spells, so a spellcaster need not carry a crossbow or shortbow in order to have something to do on rounds when it would not be optimal to fire off one of their precious spells. I am very much in f[...]

Ross Gammon: February 2017 – My Free Software activities summary

Mon, 13 Mar 2017 21:06:44 +0000

When I sat down to write this blog, I thought I hadn’t got much done in February. But as it took me quite a while to write up, there must have actually been a little bit of progress. With my wife starting a new job, there have been some adjustments in family life, and I have struggled just to keep up with all the Debian and Ubuntu emails. Anyway… Debian Backported Gramps 4.2.5 to Jessie Backports. Ubuntu Tested the Ubuntu Studio 16.04.2 point release, marked it as ready, and updated the Release Notes. Started updating my previous Gramps backport in Ubuntu to Gramps 4.2.5. The package builds fine, and I have tested that it installs and works. I just need to update the bug. Prepared updates to the ubuntustudio-default-settings & ubuntustudio-meta packages. There were some deferred changes from before Yakkety was released, including moving the final bit of configuration left in the ubuntustudio-lightdm-theme package to ubuntustudio-default-settings. Jeremy Bicha sponsored the uploads after suggesting moving away from some transitional ttf font packages in ubuntustudio-meta. Tested the Ubuntu Studio 17.04 First Beta release, marked it as ready, and prepared the Release Notes. Upgraded my music studio Ubuntu Studio computer to Yakkety 16.10. Got accepted as an Ubuntu Contributing Developer by the Developer Membership Board. Other After a merge of my Family Tree with my wife's Family Tree in Gramps a long way back, I finally started working through the database, merging duplicates and correcting import errors. Worked some more on the model railway, connecting up the other end of the tunnel section with the rest of the railway. Plan status from last month & update for next month Debian For the Debian Stretch release: Keep an eye on the Release Critical bugs list, and see if I can help fix any. – In Progress Generally: Finish the Gramps 4.2.5 backport for Jessie. 
– Done Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release. Begin working again on all the new stuff I want packaged in Debian. Ubuntu Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages. – Done Reapply to become a Contributing Developer. – Done Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Started Start testing & bug triaging Ubuntu Studio packages. – In progress Test Len’s work on ubuntustudio-controls – In progress Do the Ubuntu Studio Zesty 17.04 Final Beta release. Other Give JMRI a good try out and look at what it would take to package it. – In progress Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software – fun!). – In progress [...]

Michal Čihař: Weblate users survey

Mon, 13 Mar 2017 11:00:21 +0000

Weblate has been growing quite well in recent months, but sometimes its development is driven more by the people who complain than by a roadmap with higher goals. I think it's time to change that, at least a little bit. In order to get broader feedback I sent out a short survey to active project owners on Hosted Weblate a week ago. I decided to target a smaller audience for now, though a publicly open survey might follow later (it's always harder to evaluate feedback across different user groups). Overall feelings were really positive; most people find Weblate better than other similar services they have used. This is really something I like to hear :-). But the most important part for me was where users want to see improvements. This matches my expectation that we really should improve the user interface. We have quite a lot of features which are really hidden in the user interface, and the interface for some of them is far from intuitive. This all probably comes from the fact that we don't have anybody experienced in creating user interfaces right now. It's time to find somebody who will help us. In case you are able to help or know somebody who might be interested in helping, please get in touch. Weblate is free software, but this can still be a paid job. The last part of the survey focused on some particular features, but the outcome was not as clear as I had hoped, as almost all feature groups attracted about the same attention (with one exception: extending the API, which was not really wanted by most of the users). Overall I think doing a survey like this is useful and I will certainly repeat it (probably yearly or so) to see where we're moving and what our users want. Having feedback from users is important for every project and this seemed to work quite well. Anyway, if you have further feedback, don't hesitate to use our issue tracker at GitHub or contact me directly. 
Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments [...]

Iustin Pop: A recipe for success

Sun, 12 Mar 2017 22:38:26 +0000

It is said that with age comes wisdom. I would be happy for that to be true, because today I must have been very, very young. For example, if you want to make a long bike ride in order to hit some milestone, like your first metric century, it is not advisable to follow ANY of the following points: instead of doing this in the season, when you're fit, wait over the winter, during which you should indulge in food and drink with only an occasional short bike ride, so that most of your fitness is gone and replaced by a few extra kilograms; instead of choosing a flat route that you've done before, extending it a bit to hit the target distance, think about taking the route from one of the people you follow on Strava (and I mean real cyclists here); bonus points if you choose one they mention was about training instead of a freeride and gave it a meaningful name like "The ride of 3 peaks", something with 1'500m+ altitude gain… in order not to get bogged down too much by extra weight (those winter kilograms are enough!), skimp on breakfast (just a very, very light one); together with the energy bar you eat, something like 400 calories… take the same amount of food you take for much shorter and flatter rides; bonus points if you don't check the actual calories in the food, and instead of the presumed 700+ calories you think you're carrying (which might be enough, if you space them correctly, given how much you can absorb per hour), take at most 300 calories with you, because hey, your body is definitely used to long efforts in which you convert fat to energy on the fly, right? especially after said winter pause! since water is scarce in the Swiss outdoors (not!), especially when doing a road bike ride, carry lots of water with you (full hydro-pack, 3l) instead of an extra banana or energy bar, or a sandwich, or nuts, or a steak… mmmm, steak! 
and finally, and most importantly, don't do the ride indoors on the trainer, even though it can pretty realistically simulate the effort, but instead do it for real outside, where you can't simply stop when you've had enough, because you have to get back home… For bonus points, if you somehow manage to reach the third peak in the above ride, with mostly only flat/downhill left to the destination, do the following: be so glad you're done with climbing that you don't pay attention to the map and start a wrong descent, on a busy narrow road, so that you can't stop immediately as you realise you've lost the track; it will cost you only an extra ~80 meters of height towards the end of the ride. Which is pretty cheap, since all the food is gone and the water almost as well, so the backpack is light. Right. However, if you do follow all the above, you're rewarded with a most wonderful thing for the second half of the ride: you will receive a +5 boost to your concentration skill. You will be able to focus on, and think about a sin[...]

Mike Hommey: When the memory allocator works against you

Sun, 12 Mar 2017 01:47:12 +0000

Cloning mozilla-central with git-cinnabar requires a lot of memory. Actually, too much memory to fit in a 32-bit address space. I hadn’t optimized for memory use in the first place. For instance, git-cinnabar keeps sha-1s in memory as hex values (40 bytes) rather than raw values (20 bytes). When I wrote the initial prototype, it didn’t matter that much, and while close(ish) to the tipping point, it didn’t require more than 2GB of memory at the time. Time passed, and mozilla-central grew. I suspect the recent addition of several thousand commits and files has made things worse. In order to come up with a plan to make things better (short or longer term), I needed data. So I added some basic memory resource tracking, and collected data while cloning mozilla-central. I must admit, I was not ready for what I witnessed. Follow me for a tale of frustrations (plural). I was expecting things to have gotten worse on the master branch (which I used for the data collection) because I am in the middle of some refactoring and made many changes that I suspected might have affected memory usage. I wasn’t, however, expecting to see the clone command using 10GB(!) of memory at peak usage across all processes. (Note: those memory sizes are RSS, minus “shared”.) It was also taking an unexpectedly long time, but then, I hadn’t cloned a large repository like mozilla-central from scratch in a while, so I wasn’t sure if that was just related to its recent growth in size or otherwise. So I collected data on 0.4.0 as well. Less time spent, less memory usage… OK. There’s definitely something wrong on master. But wait a minute, that slope from ~2GB to ~4GB on the git-remote-hg process doesn’t actually make any kind of sense. I mean, I’d understand it if it were starting and finishing with the “Import manifest” phase, but it starts in the middle of it, and ends long before it finishes. WTH? 
First things first: since RSS can be a variety of things, I checked /proc/$pid/smaps and confirmed that most of it was, indeed, the heap. That’s the point where you reach for Google, type something like “python memory profile” and find various tools. One of the results that I remembered having used in the past is guppy’s heapy. Armed with pdb, I broke execution in the middle of the slope, and tried to get memory stats with heapy. SIGSEGV. Ouch. Let’s try something else. I reached out to objgraph and pympler. SIGSEGV. Ouch again. I tried working around the crashes for a while (too long a while, in retrospect; hindsight is 20/20), and was somehow successful at avoiding them by peeking at a smaller set of objects. But whatever I did, despite being attached to a process that had 2.6GB RSS, I wasn’t able to find more than 1.3GB of data. This wasn’t adding up. It surely didn’t help that getting to that point took close to an hour each time. R[...]
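The smaps check mentioned above can be done in a few lines. What follows is an illustrative sketch (not code from the post): it sums the Private_Clean/Private_Dirty fields per mapping, which is one way to see how much of a process's RSS is heap versus file-backed mappings. The field layout assumed is the standard Linux /proc/<pid>/smaps format; the sample input below is made up for demonstration.

```python
def private_rss_kb(smaps_text):
    """Return {mapping name: private RSS in kB} from smaps-formatted text."""
    totals = {}
    name = "[anon]"
    for line in smaps_text.splitlines():
        fields = line.split()
        if not fields:
            continue
        if "-" in fields[0] and ":" not in fields[0]:
            # Mapping header: "start-end perms offset dev inode [name]"
            name = fields[5] if len(fields) > 5 else "[anon]"
        elif fields[0] in ("Private_Clean:", "Private_Dirty:"):
            totals[name] = totals.get(name, 0) + int(fields[1])
    return totals

# Illustrative input; on a live system, read open("/proc/%d/smaps" % pid).
sample = """\
00400000-004f0000 r-xp 00000000 08:01 1234 /usr/bin/python2.7
Private_Clean:       960 kB
Private_Dirty:        16 kB
01a00000-09a00000 rw-p 00000000 00:00 0 [heap]
Private_Clean:         0 kB
Private_Dirty:    131072 kB
"""
print(private_rss_kb(sample))
```

Comparing the `[heap]` total against what a Python heap profiler can account for is the gap the post goes on to investigate.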

Steve Kemp: How I started programming

Sun, 12 Mar 2017 00:00:00 +0000

I've written parts of this story in the past, but never in one place and never in much detail. So why not now? In 1982 my family moved house, so one morning I went to school and at lunch-time I had to walk home to a completely different house. We moved sometime towards the end of the year, and ended up spending lots of money replacing the windows of the new place. For people in York: I was born in Farrar Street, YO10 3BY, and we moved to a place on Thief Lane, YO1 3HS. Given the street's name, I "ironically" stole at least two street-signs and hung them on my bedroom wall. I suspect my parents were disappointed. Anyway, the net result of this relocation and the extra repairs was that my sisters and I had a joint Christmas present that year: a ZX Spectrum 48k. I tried to find pictures of what we received but unfortunately the web doesn't remember the precise bundle. Altogether, we received: A tape-deck. A 48k ZX Spectrum, with its glorious rubber keys. A pack of 10 (?) cassette-tapes. The first six were definitely the Spectrum six-pack: The classic Horizons tape. A version of Scrabble. Horace Goes Skiing, from the Horace series. A version of Chess by Psion. Make-a-Chip, a logic demonstration. Chequered Flag, also by Psion, which was a terrible racing game. (Mostly this was terrible because it was in no way a race. There were zero other cars on the track.) I know we also received Horace and the Spiders, and I have vague memories of some other things being included, including a Space Invaders clone. No doubt my parents bought them separately. Highlights of my Spectrum-gaming memories include R-Type, Strider, and the various "Dizzy" games. Some of the latter I remember very fondly. Unfortunately that Christmas was pretty underwhelming. We unpacked the machine, cabled it up to the family TV-set - we only had the one, after all - and then proceeded to be very disappointed when nothing we did resulted in a successful game! 
It turns out our cassette-deck was not good enough. This being the 80s, the shops were closed over Christmas, and my memory is that it was around January before we received a working tape-player/recorder, such that we could load games. Happily the computer came with manuals. I read one, skipping words and terms I didn't understand. I then read the other, which was the spiral-bound orange book. It contained enough examples and decent wording that I learned to write code in BASIC. Not bad for an 11/12 year old. Later I discovered that my local library contained "computer books". These were colourful books that promised "The Mystery of Silver Mountain", or "Write your own ADVENTURE PROGRAMS". But they were largely dry books that contained nothing but multi-page listings of BASIC programs to type in. Often with adjustments that had to be made for your own computer-[...]

John Goerzen: Silent Data Corruption Is Real

Sat, 11 Mar 2017 21:34:24 +0000

Here’s something you never want to see:

ZFS has detected a checksum error:

   eid: 138
 class: checksum
  host: alexandria
  time: 2017-01-29 18:08:10-0600
 vtype: disk

This means there was a data error on the drive. But it’s worse than a typical data error — this is an error that was not detected by the hardware. Unlike most filesystems, ZFS and btrfs write a checksum with every block of data (both data and metadata) written to the drive, and the checksum is verified at read time. Most filesystems don’t do this, because theoretically the hardware should detect all errors. But in practice, it doesn’t always, which can lead to silent data corruption. That’s why I use ZFS wherever I possibly can.

As I looked into this issue, I saw that ZFS repaired about 400KB of data. I thought, “well, that was unlucky” and just ignored it.

Then a week later, it happened again. Pretty soon, I noticed it happened every Sunday, and always to the same drive in my pool. It so happens that the highest I/O load on the machine happens on Sundays, because I have a cron job that runs zpool scrub on Sundays. This operation forces ZFS to read and verify the checksums on every block of data on the drive, and is a nice way to guard against unreadable sectors in rarely-used data.

I finally swapped out the drive, but to my frustration, the new drive now exhibited the same issue. The SATA protocol does include a CRC32 checksum, so it seemed (to me, at least) that the problem was unlikely to be a cable or chassis issue. I suspected motherboard.

It so happened I had a 9211-8i SAS card. I had purchased it off eBay awhile back when I built the server, but could never get it to see the drives. I wound up not filling it up with as many drives as planned, so the on-board SATA did the trick. Until now.

As I poked at the 9211-8i, noticing that even its configuration utility didn’t see any devices, I finally started wondering if the SAS/SATA breakout cables were a problem. And sure enough – I realized I had a “reverse” cable and needed a “forward” one. $14 later, I had the correct cable and things are working properly now.

One other note: RAM errors can sometimes cause issues like this, but this system uses ECC DRAM and the errors would be unlikely to always manifest themselves on a particular drive.

So over the course of this, had I not been using ZFS, I would have had several megabytes of reads with undetected errors. Thanks to using ZFS, I know my data integrity is still good.
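The weekly scrub-and-check routine described above lends itself to automation. As a hedged sketch (the column layout is assumed from typical `zpool status` output, i.e. a NAME STATE READ WRITE CKSUM table; real counters can be abbreviated like "1.2K", which this sketch does not handle), a cron job could parse the status output and alert on any device with non-zero error counters:

```python
def device_errors(zpool_status_text):
    """Return {device: (read, write, cksum)} for devices with non-zero counts."""
    errors = {}
    in_config = False
    for line in zpool_status_text.splitlines():
        fields = line.split()
        if fields[:1] == ["NAME"]:
            in_config = True          # header row of the config table
            continue
        if in_config and len(fields) >= 5 and fields[1] in ("ONLINE", "DEGRADED", "FAULTED"):
            counts = tuple(int(n) for n in fields[2:5])
            if counts != (0, 0, 0):
                errors[fields[0]] = counts
    return errors

# Illustrative input; in practice, feed it the output of `zpool status`.
sample = """\
  pool: alexandria
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        alexandria  ONLINE       0     0     0
          sda       ONLINE       0     0     4
          sdb       ONLINE       0     0     0
errors: No known data errors
"""
print(device_errors(sample))
```

Running something like this after the Sunday scrub would have flagged the recurring checksum errors on the one drive without waiting for a manual look at the pool.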

Enrico Zini: On the meaning of "we"

Sat, 11 Mar 2017 13:11:34 +0000

Rather than as a word of endearment, I'm starting to see "we" as a word of entitlement.

In some moments of insecurity, I catch myself "wee"-ing over other people, to claim them as mine.

Jonathan Dowland: Nintendo NES Classic Mini

Fri, 10 Mar 2017 11:45:40 +0000


After months of trying, I've finally got my hands on a Nintendo NES Classic Mini. It's everything I wish RetroPie was: simple, reliable, plug-and-play gaming. I didn't have a NES at the time, so the games are mostly new to me (although I'm familiar with things like Super Mario Brothers).


NES classic and 8bitdo peripherals

The two main complaints about the NES classic are the very short controller cable and the need to press the "reset" button on the main unit to dip in and out of games. Both are addressed by the excellent 8bitdo Retro Receiver for NES Classic bundle. You get a bluetooth dongle that plugs into the classic and a separate wireless controller. The controller is a replica of the original NES controller. However, they've added another two buttons on the right-hand side alongside the original "A" and "B", and two discrete shoulder buttons which serve as turbo-repeat versions of "A" and "B". The extra red buttons make it look less authentic which is a bit of a shame, and are not immediately useful on the NES classic (but more on that in a minute).

With the 8bitdo controller, you can remotely activate the Reset button by pressing "Down" and "Select" at the same time. Therefore the whole thing can be played from the comfort of my sofa.

That's basically enough for me, for now, but in the future if I want to expand the functionality of the classic, it's possible to mod it. A hack called "Hakchi2" lets you install additional NES ROMs; install retroarch-based emulator cores and thus play SNES, Megadrive, N64 (etc. etc.) games; as well as other hacks like adding "down+select" Reset support to the wired controller. If you were playing non-NES games on the classic, then the extra buttons on the 8bitdo become useful.

Reproducible builds folks: Reproducible Builds: week 97 in Stretch cycle

Fri, 10 Mar 2017 08:41:59 +0000

Here's what happened in the Reproducible Builds effort between Sunday February 26 and Saturday March 4 2017: Upcoming Events Ed Maste will present Reproducible Builds in FreeBSD at AsiaBSDCon 2017. Ximin Luo will present Reproducible builds, its uses and the future at Open Source Days in Copenhagen on March 18. Holger Levsen will give a talk at the German Unix User Group's "Frühjahrsfachgespräch" in Darmstadt, Germany, about Reproducible Builds everywhere on March 23. Verifying Software Freedom with Reproducible Builds will be presented by Vagrant Cascadian at Libreplanet2017 in Boston, March 25th-26th. Media coverage Aspiration Tech published a very detailed report on our Reproducible Builds World Summit 2016 in Berlin. Reproducible work in other projects Duncan published a very thorough post on the Rust Programming Language Forum about reproducible builds in the Rust compiler and toolchain. In particular, he produced a table recording the reproducibility of different build products under different individual variations, totalling 187 build+variation combinations. Packages reviewed and fixed, and bugs filed Chris Lamb: #856614 filed against dask.distributed. #856807 filed against node-mocha, forwarded and merged upstream #856834 filed against tendermint-go-rpc. #856860 filed against archvsync. Dhole: #856257 filed against tunnelx. Reviews of unreproducible packages 60 package reviews have been added, 8 have been updated and 13 have been removed in this week, adding to our knowledge about identified issues. 1 issue type has been added: timestamp_in_fonts_generated_by_opentype Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and reported by: Chris Lamb (3) diffoscope development diffoscope 78 was uploaded to unstable and jessie-backports by Mattia Rizzolo. It included contributions from: Chris Lamb: Make tests that call xxd work on jessie again. (Closes: #855239) tests: Move normalize_zeros to more generic module. 
Brett Smith: comparators.json: Catch bad JSON errors on Python pre-3.5. (Closes: #855233) Ed Maste: Use BSD-style stat(1) on FreeBSD. (Closes: #855169) In addition, the following changes were made on the experimental branch: Chris Lamb (4): Tidy cbfs tests. Correct "exercice" -> "exercise" typo. Support newer versions of cbfstool to avoid test failure. (Closes: #856446) Skip icc test that varies on endian if the (Debian-specific) patch is not present. (Closes: #856447) reproducible-website development anonmos1: Replace root with 0 when giving UIDs/GIDs to GNU tar. Holger Levsen and Chris Lamb: Publish report by Aspiration Tech about RWS Berlin 2016. tests.reproducible-builds.o[...]

Martín Ferrari: SunCamp happening again this May!

Fri, 10 Mar 2017 07:36:02 +0000

As I announced on mailing lists a few days ago, the Debian SunCamp (DSC2017) is happening again this May. SunCamp is different from most other Debian events. Instead of a busy schedule of talks, SunCamp focuses on the hacking and socialising aspect, without making it just a Debian party/vacation. DSC2016 - Hacking and discussing The idea is to have 4 very productive days, staying in a relaxing and comfy environment, working on your own projects, meeting with your team, or presenting your most recent pet project to fellow Debianites. DSC2016 - Tincho talking about Prometheus We have tried to make this the simplest event possible, both for organisers and attendees. There will be no schedule, except for the meal times at the hotel. But these can be ignored too; there is a lovely bar that serves snacks all day long, and plenty of restaurants and cafés around the village. The SunCamp is an event to get work done, but there will be time for relaxing and socialising too. DSC2016 - Well deserved siesta DSC2016 - Playing Pétanque Do you fancy a hack-camp in a place like this? One of the things that makes the event simple is that we have negotiated a flat price for accommodation that includes usage of all the facilities in the hotel, and optionally food. We will give you a booking code, and then you arrange your accommodation as you please; you can even stay longer if you feel like it! The rooms are simple but pretty, and everything has been renovated very recently. We are not preparing a talks programme, but we will provide the space and resources for talks if you feel inclined to prepare one. You will have a huge meeting room, divided into 4 areas to reduce noise, where you can hack, have team discussions, or present talks. Do you want to see more pictures? Check the full gallery. Debian SunCamp 2017 Hotel Anabel, Lloret de Mar, Province of Girona, Catalonia, Spain May 18-21, 2017 Tempted already? Head to the wiki page and register now, it is only 2 months away! 
Please try to reserve your room before the end of March. The hotel has reserved a number of rooms for us until that time. You can reserve a room after March, but we can't guarantee the hotel will still have free rooms. Comment [...]

Steinar H. Gunderson: Tired

Thu, 09 Mar 2017 18:28:00 +0000


To be honest, at this stage I'd actually prefer ads in Wikipedia to having ever more intrusive begging for donations. Please go away soon.

Petter Reinholdtsen: Detecting NFS hangs on Linux without hanging yourself...

Thu, 09 Mar 2017 14:20:00 +0000

Over the years, administering thousands of NFS-mounting Linux computers at a time, I often needed a way to detect if a machine was experiencing an NFS hang. If you try to use df or look at a file or directory affected by the hang, the process (and possibly the shell) will hang too. So you want to be able to detect this without risking the detection process getting stuck as well. It has not been obvious how to do this. When the hang has lasted a while, it is possible to find messages like these in dmesg: nfs: server nfsserver not responding, still trying nfs: server nfsserver OK It is hard to know if the hang is still going on, and it is hard to be sure that looking in dmesg is going to work. If there are lots of other messages in dmesg, the lines might have rotated out of sight before they are noticed. While reading through the NFS client implementation in the Linux kernel code, I came across some statistics that seem to give a way to detect it. The om_timeouts sunrpc value in the kernel will increase every time the above log entry is inserted into dmesg. And after digging a bit further, I discovered that this value shows up in /proc/self/mountstats on Linux. The mountstats content seems to be shared between files using the same file system context, so it is enough to check one of the mountstats files to get the state of the mount point for the machine. I assume this will not show lazily umounted NFS points, nor NFS mount points in a different process context (i.e. with a different filesystem view), but that does not worry me. The content for an NFS mount point looks similar to this: [...] 
device /dev/mapper/Debian-var mounted on /var with fstype ext3
device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
 opts: rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
 age: 7863311
 caps: caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
 sec: flavor=1,pseudoflavor=1
 events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0
 bytes: 166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809
 RPC iostats version: 1.0  p/v: 100003/3 (nfs)
 xprt: tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
 per-op statistics
         NULL: 0 0 0 0 0 0 0 0
      GETATT[...]
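The detection idea can be sketched in a few lines of Python. This is an illustrative sketch, not the author's tool: it assumes the standard mountstats layout in which, in each per-op line, the third number after the operation name is the major-timeouts counter (the om_timeouts value discussed above), and sums it per mount.

```python
def nfs_timeouts(mountstats_text):
    """Return {export: summed per-op timeout count} from mountstats-formatted text."""
    timeouts = {}
    device = None
    in_per_op = False
    for line in mountstats_text.splitlines():
        fields = line.split()
        if fields[:1] == ["device"]:
            device = fields[1]        # e.g. "nfsserver:/mnt/nfsserver/home0"
            in_per_op = False
        elif fields[:2] == ["per-op", "statistics"]:
            in_per_op = True
        elif in_per_op and len(fields) >= 4 and fields[0].endswith(":"):
            # "OP: operations transmissions major_timeouts ..." (layout assumed)
            timeouts[device] = timeouts.get(device, 0) + int(fields[3])
    return timeouts

# Illustrative input; on a live system, read open("/proc/self/mountstats").
sample = """\
device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
per-op statistics
        NULL: 0 0 0 0 0 0 0 0
     GETATTR: 100 102 2 0 0 0 0 0
"""
print(nfs_timeouts(sample))
```

Polling this periodically and alerting when a mount's counter increases gives a hang detector that never touches the hung mount point itself.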

Arturo Borrero González: Netfilter in GSoC 2017

Thu, 09 Mar 2017 09:00:00 +0000


Great news! The Netfilter project has been selected by Google as a mentoring organization in this year's Google Summer of Code program. Following the pattern of recent years, Google seems to recognise and support the importance of this software project in the Linux ecosystem.

I will be proudly mentoring students this year (2017), along with Eric Leblond and, of course, Pablo Neira.

The focus of the Netfilter project has been on nftables for the last few years, and the students joining our community will likely work on the new framework.

For prospective students: there is an ideas document which you must read. The policy in the Netfilter project is to encourage students to send patches before they are selected to join us. Therefore, a good starting point is to subscribe to the mailing lists, download the git code repositories, build the projects by hand, and look at the bugzilla (registration required).

Thanks to this type of internship program, it is interesting to note the increasing involvement of women in recent years. Off the top of my head: Ana Rey (@AnaRB), Shivani Bhardwaj (@tuxish), Laura García and Elise Lennion (blog).

On a side note, Debian is not participating in GSoC this year :-(

Thorsten Glaser: Updated Debian packaging example: PHP webapp with dbconfig-common

Wed, 08 Mar 2017 22:00:00 +0000


Since I use this as base for other PHP packages like SimKolab, I’ve updated my packaging example with:

  • PHP 7 support (untested, as I need libapache2-mod-php5)
  • tons more utility code for you to use
  • a class autoloader, with example (build time, for now)
  • (at build time) running a PHPUnit testsuite (unless nocheck)

The old features (Apache 2.2 and 2.4 support, dbconfig-common, etc.) are, of course, still there. Support for other webservers could be contributed by you, and I could extend the autoloader to work at runtime (using dpkg triggers) to include dependencies as packaged in other Debian packages. See, nobody needs “composer”! ☻

Feel free to check it out, play around with it, install it, test it, send me improvement patches and feature requests, etc. — it’s here with a mirror at GitHub (since I wrote it myself and the licence is permissive enough anyway).

This posting and the code behind it are sponsored by my employer ⮡ tarent.

Neil McGovern: GNOME ED update – Week 10

Wed, 08 Mar 2017 21:02:00 +0000



After quite a bit of work, we finally have the sponsorship brochure produced for GUADEC and GNOME.Asia. Huge thanks to everyone who helped; I'm really pleased with the result. Again, if you or your company are interested in sponsoring us, please drop me a mail!

Food and Games

I like food, and I like games. So this week there were a couple of awesome sneak previews of the upcoming GNOME 3.24 release. Matthias Clasen posted about the GNOME Recipes 1.0 release – tasty snacks are now available directly on the desktop, which means I can also view them when I'm at the back of the house in the kitchen, where the wifi connection is somewhat spotty. Adrien Plazas also posted about GNOME Games – now I can get my retro gaming fix easily.

Signing things

I was sent a package in the post, with lots of blank stickers and a couple of pens. I’ve now signed a load of stickers, and my hand hurts. More details about exactly what this is about soon :)

Antoine Beaupré: An update to GitHub's terms of service

Wed, 08 Mar 2017 17:00:00 +0000

On February 28th, GitHub published a brand new version of its Terms of Service (ToS). While the first draft announced earlier in February didn't generate much reaction, the new ToS raised concerns that they may break at least the spirit, if not the letter, of certain free-software licenses. Digging in further reveals that the situation is probably not as dire as some had feared.

The first person to raise the alarm was probably Thorsten Glaser, a Debian developer, who stated that the "new GitHub Terms of Service require removing many Open Source works from it". His concerns are mainly about section D of the document, in particular section D.4, which states:

  You grant us and our legal successors the right to store and display your Content and make incidental copies as necessary to render the Website and provide the Service.

Section D.5 then goes on to say:

  [...] You grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality

ToS versus GPL

The concern here is that the ToS bypass the normal provisions of licenses like the GPL. Indeed, copyleft licenses are based on copyright law, which forbids users from doing anything with the content unless they comply with the license, which enforces, among other things, "share alike" properties. By granting GitHub and its users rights to reproduce content without explicitly respecting the original license, the ToS may allow users to bypass the copyleft nature of the license. Indeed, as Joey Hess, author of git-annex, explained:

  The new TOS is potentially very bad for copylefted Free Software. It potentially neuters it entirely, so GPL licensed software hosted on Github has an implicit BSD-like license

Hess has since removed all his content (mostly mirrors) from GitHub. Others disagree.
In a well-reasoned blog post, Debian developer Jonathan McDowell explained the rationale behind the changes: My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service. This seems like a fair point to make: GitHub needs to protect its own rights to operate the service. McDowell then goes on to do a detailed rebuttal of the arguments made by Glaser, arguing specifically that section D.5 "does not grant [...] additional rights to reproduce outside of GitH[...]

Clint Adams: Oh, little boy, pick up the pieces

Wed, 08 Mar 2017 16:06:39 +0000


Chris sat in the window seat in the row behind his parents. Actually he also sat in half of his neighbor’s seat. His neighbor was uncomfortable but said nothing and did not attempt to lower the armrest to try to contain his girth.

His parents were awful human beings: selfish, self-absorbed and controlling. “Chris,” his dad would say, “look out the window!” His dad was the type of officious busybody who would snitch on you at work for not snitching on someone else.

“What?” Chris would reply, after putting down The Handmaid’s Tale and removing one of his earbuds. Then his dad would insist that it was very important that he look out the window to see a very important cloud or glacial landform.

Chris would comply and then return to his book and music.

“Chris,” his mom would say, “you need to review our travel itinerary.” His mom cried herself to sleep when she heard that Nigel Stock died, gave up on ever finding True Love, and resolved to achieve a husband and child instead.

“What?” Chris would reply, after putting down The Handmaid’s Tale and removing one of his earbuds. Then his mom would insist that it was very important that he review photos and prose regarding their managed tour package in Costa Rica, because he wouldn’t want to show up there unprepared. Chris would passive-aggressively stare at each page of the packet, then hand it back to his mother.

It was already somewhat clear that due to delays in taking off they would be missing their connecting flight to Costa Rica. About ⅓ of the passengers on the aeroplane were also going to Costa Rica, and were discussing the probable missed connection amongst themselves and with the flight staff.

Chris’s parents were oblivious to all of this, despite being native speakers of English. Additionally, just as they were unaware of what other people were discussing, they imagined that no one else could hear their private family discussions.

Everyone on the plane missed their connecting flights. Chris’s parents continued to be terrible human beings.

Posted on 2017-03-08
Tags: etiamdisco

Petter Reinholdtsen: How does it feel to be wiretapped, when you should be doing the wiretapping...

Wed, 08 Mar 2017 10:50:00 +0000

So the new president of the United States of America claims to be surprised to discover that he was wiretapped during the election, before he was elected president. He even claims this must be illegal. Well, doh, if there is one thing the confirmations from Snowden documented, it is that the entire population of the USA is wiretapped, one way or another. Of course the presidential candidates were wiretapped, alongside the senators, the judges and the rest of the people in the USA.

Next, the Federal Bureau of Investigation asked the Department of Justice to publicly reject the claims that Donald Trump was wiretapped illegally. I fail to see the relevance, given that I am sure the surveillance industry in the USA believes it has all the legal backing it needs to conduct mass surveillance on the entire world.

There is even the director of the FBI stating that he never saw an order requesting the wiretapping of Donald Trump. That is not very surprising, given how the FISA court works, with all its activity being secret. Perhaps he only heard about it?

What I find saddest in this story is how Norwegian journalists present it. In a news report on the radio the other day from the Norwegian National Broadcasting Company (NRK), I heard the journalist claim that 'the FBI denies any wiretapping', while the reality is that 'the FBI denies any illegal wiretapping'. There is a fundamental and important difference, and it makes me sad that the journalists are unable to grasp it.

Update 2017-03-13: It looks like The Intercept reports that US Senator Rand Paul confirms what I stated above.

Matthew Garrett: The Internet of Microphones

Wed, 08 Mar 2017 01:30:19 +0000

So the CIA has tools to snoop on you via your TV and your Echo is testifying in a murder case and yet people are still buying connected devices with microphones in and why are they doing that the world is on fire surely this is terrible?

You're right that the world is terrible, but this isn't really a contributing factor to it. There are a few reasons why. The first is that there's really not any indication that the CIA and MI5 ever turned this into an actual deployable exploit. The development reports[1] describe a project that still didn't know what would happen to their exploit over firmware updates, and a "fake off" mode that left a lit LED which wouldn't be there if the TV were actually off, so there's a potential for failed updates and for people noticing that there's something wrong. It's certainly possible that development continued and it was turned into a polished and usable exploit, but it really just comes across as a bunch of nerds wanting to show off a neat demo.

But let's say it did get to the stage of being deployable - there's still not a great deal to worry about. No remote infection mechanism is described, so they'd need to do it locally. If someone is in a position to reflash your TV without you noticing, they're also in a position to, uh, just leave an internet-connected microphone of their own. So how would they infect you remotely? TVs don't actually consume a huge amount of untrusted content from arbitrary sources[2], so that's much harder than it sounds and probably not worth it because:

YOU ARE CARRYING AN INTERNET CONNECTED MICROPHONE THAT CONSUMES VAST QUANTITIES OF UNTRUSTED CONTENT FROM ARBITRARY SOURCES

Seriously, your phone is like eleven billion times easier to infect than your TV is, and you carry it everywhere. If the CIA want to spy on you, they'll do it via your phone. If you're paranoid enough to take the battery out of your phone before certain conversations, don't have those conversations in front of a TV with a microphone in it.
But, uh, it's actually worse than that.

These days audio hardware usually consists of a very generic codec containing a bunch of digital→analogue converters, some analogue→digital converters and a bunch of IO pins that can basically be wired up in arbitrary ways. Hardcoding the roles of these pins makes board layout more annoying, and some people want more inputs than outputs and some people vice versa, so it's not uncommon for it to be possible to reconfigure an input as an output [...]
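The pin flexibility described above is actually visible from userspace on Linux machines with Intel HD Audio: the driver exposes each codec pin's "default configuration" register under /sys/class/sound/hwC*D*/init_pin_configs, and bits 23:20 of that register say whether the board intended a pin as an output (speaker, headphones) or an input (mic, line in). A minimal sketch that decodes this field, with the value names taken from the Intel HDA specification (the sysfs layout is Linux-specific, and this is illustrative, not a complete tool):

```python
# Decode the "default device" field of an Intel HDA pin default
# configuration register (a 32-bit value; the device type lives
# in bits 23:20, per the Intel High Definition Audio specification).
import glob

HDA_DEVICES = {
    0x0: "Line Out", 0x1: "Speaker", 0x2: "HP Out", 0x3: "CD",
    0x4: "SPDIF Out", 0x5: "Digital Other Out", 0x6: "Modem Line Side",
    0x7: "Modem Handset", 0x8: "Line In", 0x9: "AUX", 0xA: "Mic In",
    0xB: "Telephony", 0xC: "SPDIF In", 0xD: "Digital Other In",
    0xF: "Other",
}

def pin_default_device(config: int) -> str:
    """Return the default device name encoded in a pin config dword."""
    return HDA_DEVICES.get((config >> 20) & 0xF, "Reserved")

def list_codec_pins():
    """Yield (sysfs path, pin NID, device name) for each HDA codec pin.

    init_pin_configs contains one "0xNID 0xCONFIG" pair per line.
    """
    for path in glob.glob("/sys/class/sound/hwC*D*/init_pin_configs"):
        with open(path) as f:
            for line in f:
                nid, value = line.split()
                yield path, nid, pin_default_device(int(value, 16))

if __name__ == "__main__":
    # 0x0221401f is a common headphone-jack config on Realtek codecs
    print(pin_default_device(0x0221401F))  # HP Out
    for path, nid, dev in list_codec_pins():
        print(path, nid, dev)
```

The point of the post is precisely that the distinction this register records is only a BIOS-supplied hint: the hardware behind a "Line Out" pin can often be retasked as an input at runtime.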