Planet Debian


Norbert Preining: Calibre 3 for Debian

Sat, 24 Jun 2017 03:42:55 +0000


I have updated my Calibre Debian repository to include packages of the current Calibre 3.1.1. As with the previous packages, I kept RAR support in to allow me to read comic books. I have also forwarded my changes to the maintainer of Calibre in Debian, so hopefully we will soon have official packages, too.

The repository location hasn’t changed, see below.

deb calibre main
deb-src calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13


Joachim Breitner: The perils of live demonstrations

Fri, 23 Jun 2017 23:54:36 +0000


Yesterday, I was giving a talk at the South SF Bay Haskell User Group about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make CodeWorld even more attractive to students. I had given the talk before, at Compose::Conference in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.

So I arrived at the offices of Target1 in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…

Turns out that the API of CodeWorld was changed just the day before:

commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith 
Date:   Wed Jun 21 23:53:53 2017 +0000

    Change dilated to take one parameter.
    Function is nearly unused, so I'm not concerned about breakage.
    This new version better aligns with standard educational usage,
    in which "dilation" means uniform scaling.  Taken as a separate
    operation, it commutes with rotation, and preserves similarity
    of shapes, neither of which is true of scaling in general.

Ok, that was quick to fix, and the CodeWorld server started to compile my code, and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the time limit of the compiler.

Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.

Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, and I could not explain it – I had not modified this code since New York, and there it worked flawlessly2. In the end, I could save face a bit by running the real pong game against an attendee over the network, and no desynchronisation could be observed there.

Today I dug into it, and it took me a while, but it turned out that the problem was not in CodeWorld, or in the lock-step simulation code discussed in our paper about it, but in the code in my presentation that simulated the delayed network messages; in some instances it would deliver the UI events in a different order to the two simulated players, and hence cause them to do something different. Phew.
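The failure mode is easy to reproduce in miniature. Here is a sketch (in Python rather than the Haskell/CodeWorld code from the talk, with a made-up state-update function): two deterministic lock-step simulations stay in sync only if they consume the same events in the same order.

```python
# Toy lock-step simulation: the state update is deterministic but
# order-sensitive, like real UI event handling.
def step(state, event):
    return state * 31 + event

def simulate(events, state=0):
    for event in events:
        state = step(state, event)
    return state

# Player A receives the events in order; for player B, the simulated
# network delay reorders two of them.
a = simulate([1, 2, 3])
b = simulate([1, 3, 2])
print(a == b)  # the two instances have desynchronised
```

Identical event streams always produce identical states; a single reordering is enough to diverge, which is exactly what the buggy delay simulation caused.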

  1. Yes, the retail giant. Turns out that they have a small but enthusiastic Haskell-using group in their IT department.

  2. I hope the video is going to be online soon, then you can check for yourself.

Joey Hess: PV array is hot

Fri, 23 Jun 2017 20:43:11 +0000


Only took a couple hours to wire up and mount the combiner box.


Something about larger wiring like this is enjoyable. So much less fiddly than what I'm used to.


And the new PV array is hot!


Update: The panels have an open circuit voltage of 35.89 and are in strings of 2, so I'd expect to see 71.78 V with only my multimeter connected. So I'm losing 0.07 volts to wiring, which is less than I designed for.
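As a quick sanity check of those numbers (the measured value below is an assumption implied by the stated 0.07 V loss, not quoted in the post):

```python
# Sanity check of the figures in the post.
v_oc_panel = 35.89            # open-circuit voltage of one panel
expected = 2 * v_oc_panel     # two panels in series per string
measured = expected - 0.07    # implied multimeter reading after wiring loss
print(round(expected, 2), round(measured, 2))
```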

Riku Voipio: Cross-compiling with debian stretch

Fri, 23 Jun 2017 16:25:32 +0000

(image) Debian stretch comes with cross-compiler packages for selected architectures:
 $ apt-cache search crossbuild-essential
crossbuild-essential-arm64 - Informational list of cross-build-essential packages for
crossbuild-essential-armel - ...
crossbuild-essential-armhf - ...
crossbuild-essential-mipsel - ...
crossbuild-essential-powerpc - ...
crossbuild-essential-ppc64el - ...

Let's have a quick step-by-step guide. But first - while you could do all this in your desktop PC's rootfs, it is wiser to contain it. Fortunately, Debian comes with a container tool out of the box:

sudo debootstrap stretch /var/lib/container/stretch
echo "strech_cross" | sudo tee /var/lib/container/stretch/etc/debian_chroot
sudo systemd-nspawn -D /var/lib/container/stretch
Then we set up a cross-building environment for arm64 inside the container:

# Tell dpkg we can install arm64
dpkg --add-architecture arm64
# Add src line to make "apt-get source" work
echo "deb-src stretch main" >> /etc/apt/sources.list
apt-get update
# Install cross-compiler and other essential build tools
apt install --no-install-recommends build-essential crossbuild-essential-arm64
Now that we have a nice build environment, let's cross-build something more complicated: qemu.

# Get qemu sources from debian
apt-get source qemu
cd qemu-*
# New in stretch: build-dep works in unpacked source tree
apt-get build-dep -a arm64 .
# Cross-build Qemu for arm64
dpkg-buildpackage -aarm64 -j6 -b
Now, that works perfectly for Qemu. For other packages, challenges may appear. For example, you may have to set the "nocheck" flag to skip build-time unit tests, or some of the build-dependencies may not be multiarch-enabled. So work continues :)

Elena 'valhalla' Grandi: On brokeness, the live installer and being nice to people

Fri, 23 Jun 2017 14:57:33 +0000

On brokeness, the live installer and being nice to people

This morning I've read this

I understand that somebody on the internet will always be trolling, but I just wanted to point out:

* that the installer in the old live images has been broken (for international users) for years
* that nobody cared enough to fix it, not even the people affected by it (the issue was reported as known in various forums, but for a long time nobody even opened an issue to let the *developers* know).

Compare this with the current situation, with people doing multiple tests as the (quite big number of) images were being built, and a fix released soon after for the issues found.

I'd say that this situation is great, and that instead of trolling around we should thank the people involved in this release for their great job.

Jonathan Dowland: WD drive head parking update

Fri, 23 Jun 2017 14:28:10 +0000


An update for my post on Western Digital Hard Drive head parking: disabling the head parking completely stopped the Load_Cycle_Count S.M.A.R.T. attribute from incrementing. This is probably at the cost of power usage, but I am not able to assess the impact of that as I'm not currently monitoring the power draw of the NAS (although that's on my TODO list).

Bits from Debian: Hewlett Packard Enterprise Platinum Sponsor of DebConf17

Fri, 23 Jun 2017 14:15:00 +0000


We are very pleased to announce that Hewlett Packard Enterprise (HPE) has committed support to DebConf17 as a Platinum sponsor.

"Hewlett Packard Enterprise is excited to support Debian's annual developer conference again this year", said Steve Geary, Senior Director R&D at Hewlett Packard Enterprise. "As Platinum sponsors and member of the Debian community, HPE is committed to supporting Debconf. The conference, community and open distribution are foundational to the development of The Machine research program and will bring our Memory Driven Computing agenda to life."

HPE is one of the largest computer companies in the world, providing a wide range of products and services, such as servers, storage, networking, consulting and support, software, and financial services.

HPE is also a development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services (hardware donations are listed in the Debian machines page).

With this additional commitment as Platinum Sponsor, HPE helps make our annual conference possible, and directly supports the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Hewlett Packard Enterprise, for your support of DebConf17!

Become a sponsor too!

DebConf17 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through, and visit the DebConf17 website at

Arturo Borrero González: Backup router/switch configuration to a git repository

Fri, 23 Jun 2017 08:10:00 +0000

Most routers/switches out there store their configuration in plain text, which is nice for backups. I'm talking about Cisco, Juniper, HPE, etc. The configuration of our routers is being changed several times a day by the operators, and in this case we lacked a proper way of tracking these changes.

Some of these routers come with their own mechanisms for doing backups, and depending on the model and version they may include change-tracking mechanisms as well. However, they mostly don't integrate well into our preferred version control system, which is git.

After some internet searching, I found rancid, which is a suite for doing tasks like this. But it seemed rather complex and feature-full for what we required: simply fetch the plain text config and put it into a git repo.

Worth noting that the most important drawback of not triggering the change-tracking from the router/switch is that we have to follow a polling approach: logging into each device, getting the plain text and then committing it to the repo (if changes are detected). This can be hooked into cron, but as I said, we lose the sync behaviour and won't see any changes until the next cron run. In most cases, we lose authorship information as well. But that was not important for us right now; in the future this is something that we will have to solve.

Also, some routers/switches lack some basic SSH security improvements, like public-key authentication, so we end up having to hard-code user/pass in our worker script.

Since we have several devices of the same type, we just iterate over their names. For example, this is what we use for HP comware devices:

#!/bin/bash
# run this script by cron

USER="git"
PASSWORD="readonlyuser"
DEVICES="device1 device2 device3 device4"
FILE="flash:/startup.cfg"
GIT_DIR="myrepo"
GIT="/srv/git/${GIT_DIR}.git"

TMP_DIR="$(mktemp -d)"
if [ -z "$TMP_DIR" ] ; then
        echo "E: no temp dir created" >&2
        exit 1
fi

GIT_BIN="$(which git)"
if [ ! -x "$GIT_BIN" ] ; then
        echo "E: no git binary" >&2
        exit 1
fi

SCP_BIN="$(which scp)"
if [ ! -x "$SCP_BIN" ] ; then
        echo "E: no scp binary" >&2
        exit 1
fi

SSHPASS_BIN="$(which sshpass)"
if [ ! -x "$SSHPASS_BIN" ] ; then
        echo "E: no sshpass binary" >&2
        exit 1
fi

# clone git repo
cd $TMP_DIR
$GIT_BIN clone $GIT
cd $GIT_DIR

for device in $DEVICES; do
        mkdir -p $device
        cd $device

        # fetch cfg
        CONN="${USER}@${device}"
        $SSHPASS_BIN -p "$PASSWORD" $SCP_BIN ${CONN}:${FILE} .

        # commit
        $GIT_BIN add -A .
        $GIT_BIN commit -m "${device}: configuration change" \
                -m "A configuration change was detected" \
                --author="cron "
        $GIT_BIN push -f
        cd ..
done

# cleanup
rm -rf $TMP_DIR

You should create a read-only user 'git' in the devices. And beware that each device model has the config file stored in a different place.

For reference, in HP comware, the file to scp is flash:/startup.cfg. And you might try creating the user like this:

local-user git class manage
 password hash xxxxx
 service-type ssh
 authorization-attribute user-role security-audit
#

In Junos/Juniper, the file you should scp is /config/juniper.conf.gz, and the script should gunzip the data before committing. For the read-only user, try something like this:

system {
    [...]
    login {
        [...]
        class git {
            permissions maintenance;
            allow-commands scp.*;
        }
        user git {
            uid xxx;
            class git;
            authentication {
                encrypted-password "xxx";
            }
        }
    }
}

The file to scp in HP procurve is /cfg/startup-config. And for the read-only user, try something like this:

aaa authorization group "git user" 1 match-command "scp.*" permit
aaa authentication local-user "git" group "git user" password sha1 "xxxxx"

What would be the ideal situation? Get the device controlled directly by git (i.e. commit –> git hook –> device update) or at least have the device to commit[...]

Steve McIntyre: -1, Trolling

Thu, 22 Jun 2017 21:59:00 +0000


Here's a nice comment I received by email this morning. I guess somebody was upset by my last post?

From: Tec Services 
Date: Wed, 21 Jun 2017 22:30:26 -0700
Subject: its time for you to retire from debian...unbelievable..your
         the quality guy and fucked up the installer!

i cant ever remember in the hostory of computing someone releasing an installer
that does not work!!


you need to be retired...due to being retarded..

and that this was dedicated to ian...what a should be ashames..he is probably roling in his grave from shame
right now....

It's nice to be appreciated.

John Goerzen: First Experiences with Stretch

Thu, 22 Jun 2017 13:19:37 +0000

I’ve done my first upgrades to Debian stretch at this point. The results have been overall good. On the laptop my kids use, I helped my 10-year-old do it, and it worked flawlessly. On my workstation, I got a kernel panic on boot. Hmm.

Unfortunately, my system has to use the nv drivers, which leaves me with an 80×25 text console. It took some finagling (break=init in grub, then manually insmoding the appropriate stuff based on modules.dep for nouveau), but finally I got a console so I could see what was breaking. It appeared that init was crashing because it couldn’t find liblz4. A little digging shows that liblz4 is in /usr, and /usr wasn’t mounted. I’ve filed the bug on systemd-sysv for this.

I run root on ZFS, and further digging revealed that I had datasets named like this:

  • tank/hostname-1/ROOT
  • tank/hostname-1/usr
  • tank/hostname-1/var

This used to be fine. The mountpoint property of the usr dataset put it at /usr without incident. But it turns out that this won’t work now, unless I set ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs for some reason. So I renamed them so usr was under ROOT, and then the system booted.

Then I ran into samba not liking something in my bind interfaces line (to be fair, it did still say eth0 instead of br0). rpcbind was failing in postinst, though a reboot seems to have helped that. More annoying was that I had trouble logging into my system because resolv.conf was left empty (despite dns-* entries in /etc/network/interfaces and the presence of resolvconf). I eventually repaired that, and found that it kept removing my "search" line. Eventually I removed resolvconf.

Then mariadb’s postinst was silently failing. I eventually discovered it was sending info to syslog (odd), and /etc/init.d/apparmor teardown let it complete properly. It seems like there may have been an outdated /etc/apparmor.d/cache/usr.sbin.mysql out there for some reason.

Then there was XFCE. I use it with xmonad, and the session startup was really wonky. I had to zap my sessions, my panel config, etc. and start anew. I am still not entirely sure I have it right, but at least I do have a usable system now.

Dirk Eddelbuettel: nanotime 0.2.0

Thu, 22 Jun 2017 12:16:00 +0000


A new version of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic.

Thanks to a metric ton of work by Leonardo Silvestri, the package now uses S4 classes internally allowing for greater consistency of operations on nanotime objects.

Changes in version 0.2.0 (2017-06-22)

  • Rewritten in S4 to provide more robust operations (#17 by Leonardo)

  • Ensure tz="" is treated as unset (Leonardo in #20)

  • Added format and tz arguments to nanotime, format, print (#22 by Leonardo and Dirk)

  • Ensure printing respect options()$max.print, ensure names are kept with vector (#23 by Leonardo)

  • Correct summary() by defining names<- (Leonardo in #25 fixing #24)

  • Report error on operations that are meaningful for type; handled NA, NaN, Inf, -Inf correctly (Leonardo in #27 fixing #26)

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Norbert Preining: Signal handling in R

Thu, 22 Jun 2017 08:28:03 +0000


Recently I have been programming quite a lot in R, and today stumbled over the problem of implementing a kind of monitoring loop in R. Typically that would be an infinite loop with sleep calls, but I wanted to allow for waking up from the sleep by sending UNIX style signals, in particular SIGINT. After some searching I found Beyond Exception Handling: Conditions and Restarts from the Advanced R book. But it didn't really help me a lot to program an interrupt handler.

My requirements were:

  • an interruption of the work-part should be immediately restarted
  • an interruption of the sleep-part should go immediately into the work-part

Unfortunately it seems not to be possible to ignore interrupts entirely from within the R code. The best one can do is to install interrupt handlers and try to repeat the code which was being executed when the interrupt happened. This is what I tried to implement in the code below. I still have to digest the documentation about conditions and restarts, and play around a lot, but at least this is an initial working version.

workfun <- function() {
  i <- 1
  do_repeat <- FALSE
  while (TRUE) {
    message("begin of the loop")
    withRestarts(
      {
        # do all the work here
        cat("Entering work part i =", i, "\n")
        i <- i + 1
        cat("finished work part\n")
      },
      gotSIG = function() {
        message("interrupted while working, restarting work part")
        do_repeat <<- TRUE
      }
    )
    if (do_repeat) {
      cat("restarting work loop\n")
      do_repeat <- FALSE
    } else {
      cat("between work and sleep part\n")
      withRestarts(
        {
          # do the sleep part here
          cat("Entering sleep part i =", i, "\n")
          Sys.sleep(10)
          i <- i + 1
          cat("finished sleep part\n")
        },
        gotSIG = function() {
          message("got work to do, waking up!")
        }
      )
    }
    message("end of the loop")
  }
}

cat("Current process:", Sys.getpid(), "\n")

withCallingHandlers(
  workfun(),
  interrupt = function(e) {
    invokeRestart("gotSIG")
  }
)

While not perfect, I guess I have to live with this method for now.

Dirk Eddelbuettel: RcppCCTZ 0.2.3 (and 0.2.2)

Thu, 22 Jun 2017 01:06:00 +0000


A new minor version 0.2.3 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. The RcppCCTZ page has a few usage examples and details.

This version ensures that we set the TZDIR environment variable correctly on the old dreaded OS that does not come with proper timezone information---an issue which had come up while preparing the next (and awesome, trust me) release of nanotime. It also appears that I failed to blog about 0.2.2, another maintenance release, so changes for both are summarised next.

Changes in version 0.2.3 (2017-06-19)

  • On Windows, the TZDIR environment variable is now set in .onLoad()

  • Replaced init.c with registration code inside of RcppExports.cpp thanks to Rcpp 0.12.11.

Changes in version 0.2.2 (2017-04-20)

  • Synchronized with upstream CCTZ

  • The time_point object is instantiated explicitly for nanosecond use which appears to be required on macOS

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Joey Hess: DIY professional grade solar panel installation

Wed, 21 Jun 2017 22:42:10 +0000

I've installed 1 kilowatt of solar panels on my roof, using professional grade equipment. The four panels are Astronergy 260 watt panels, and they're mounted on IronRidge XR100 rails. Did it all myself, without help.

I had three goals for this install:

  • Cheap but sturdy. Total cost will be under $2500. It would probably cost at least twice as much to get a professional install, and the pros might not even want to do such a small install.

  • Learn the roof mount system. I want to be able to add more panels, remove panels when working on the roof, and understand everything.

  • Make every day a sunny day. With my current solar panels, I get around 10x as much power on a sunny day as a cloudy day, and I have plenty of power on sunny days. So 10x the PV capacity should be a good amount of power all the time.

My main concerns were: would I be able to find the rafters when installing the rails, and would the 5x3 foot panels be too unwieldy to get up on the roof by myself?

I was able to find the rafters, without needing a stud finder, after I removed the roof's vent caps, which exposed the rafters. The shingles were on straight enough that I could follow the lines down and drilled into the rafter on the first try every time. And I got the rails on spaced well and straight, although I could have spaced the FlashFeet out better (oops).

My drill ran out of juice half-way, and I had to hack it to recharge on solar power, but that's another story. Between the learning curve, a lot of careful measurement, not the greatest shoes for roofing, and waiting for recharging, it took two days to get the 8 FlashFeet installed and the rails mounted.

Taking a break from that and swimming in the river, I realized I should have been wearing my water shoes on the roof all along. Super soft and nubbly, they make me feel like a gecko up there!

After recovering from an (unrelated) achilles tendon strain, I got the panels installed today. Turns out they're not hard to handle on the roof by myself. Getting them up a ladder to the roof by yourself would normally be another story, but my house has a 2 foot step up from the back retaining wall to the roof, and even has a handy grip beam as you step up.

The last gotcha, which I luckily anticipated, is that panels will slide down off the rails before you can get them bolted down. This is where a second pair of hands would have been most useful. But, I macguyvered a solution, attaching temporary clamps before bringing a panel up, that stopped it sliding down while I was attaching it.

I also finished the outside wiring today, including the one hack of this install so far. Since the local hardware store didn't have a suitable conduit to bring the cables off the roof, I cobbled one together from pipe, with foam inserts to prevent chafing.

While I have 1 kilowatt of power on my roof now, I won't be able to use it until next week. After ordering the upgrade, I realized that my old PWM charge controller would be able to handle less than half the power, and to get even that I would have needed to mount the fuse box near the top of the roof and run down a large and expensive low-voltage high-amperage cable, around 00 AWG size. Instead, I'll be upgrading to an MPPT controller, and running a single 150 volt cable to it. Then, since the MPPT controller can only handle 1 kilowatt when it's converting to 24 volts, not 12 volts, I'm gonna have to convert the entire house over from 12V DC to 24V DC, including changing all the light fixtures and rewiring the battery bank... [...]
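The controller-sizing tradeoff in that last paragraph is simple arithmetic: at a fixed power, halving the voltage doubles the current the controller and cabling must carry. A rough sketch using the post's 1 kW figure:

```python
# Current needed to carry 1 kW at various voltages (P = V * I).
watts = 1000.0
for volts in (12, 24, 150):
    amps = watts / volts
    print(f"{watts:.0f} W at {volts} V -> {amps:.1f} A")
```

At 12 V you need roughly 83 A (hence the very heavy 00 AWG cable); at 24 V about half that; and running the array itself at 150 V to an MPPT controller keeps the long cable run thin.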

Reproducible builds folks: Reproducible Builds: week 112 in Stretch cycle

Wed, 21 Jun 2017 16:27:41 +0000

Here's what happened in the Reproducible Builds effort between Sunday June 11 and Saturday June 17 2017:

Upcoming events

  • On June 19th, Chris Lamb presented at LinuxCon China 2017 on Reproducible Builds.

  • h01ger created a poll for a date for the next Reproducible Builds summit, please vote if you are interested in attending.

  • Our next IRC meeting will be on the 6th of July at 17:00 UTC with the following agenda.

Upstream patches and bugs filed

Bernhard M. Wiedemann: gnuradio + volk, pymol, distorm, qtscriptgenerator, cpython, x3270, x3270, sphinx, obs-service-tar_scm, osc, matplotlib (merged), pyparted (merged), bjoern (merged)

Reviews of unreproducible packages

1 package review has been added, 19 have been updated and 2 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by: Adrian Bunk (1), Edmund Grimley Evans (1)

diffoscope development

Chris Lamb:

  • Document the 'h' variable in our raw_feeder
  • Split diffoscope.difference into feeders
  • Tidy diffoscope/

Ximin Luo:

  • Add a PartialString class
  • Move ydiff/linediff from diffoscope.{difference => diff} to group unified_diff related things together
  • difference: has_children -> has_visible_children, and take into account comments
  • Add various traverse_ methods to Difference
  • Move side-by-side and linediff algorithms to
  • Refactor html-dir presenter to be a class instance, avoiding global state
  • html-dir: show/hide diff comments, which can be very large

As you might have noticed, Debian stretch was released last week. Since then, Mattia and Holger renamed our testing suite to stretch and added a buster suite so that we keep our historic results for stretch visible and can continue our development work as usual. In this sense, happy hacking on buster; may it become the best Debian release ever and hopefully the first reproducible one! 

Vagrant Cascadian:

  • Proposed reducing the number of concurrent builds each armhf machine runs, to reduce out-of-memory errors, crashes and false positives with FTBFS on armhf, reducing the number of armhf build workers from 70 to 51.

Valerie Young:

  • Add highlighting in navigation for the new nodes health pages.

Mattia Rizzolo:

  • Do not dump database ACL in the backups.
  • Deduplicate SSLCertificateFile directive into the common-directives-ssl macro
  • Apache: t.r-b.o: redirect /testing/ to /stretch/
  • db: s/testing/stretch/g
  • Start adding code to test buster...

Holger Levsen:

  • Update README.infrastructure to explain who has root access where.
  • correctly recognize zero builds per day.
  • Add build nodes health overview page, then split it in three: health overview, daily munin graphs and weekly munin graphs.
  • improve handling of systemctl timeouts.
  • reproducible_build_service: sleep less and thus restart failed workers sooner.
  • Replace ftp.(de|uk|us) with everywhere.
  • Performance page: also show local problems with (which are autofixed after a maximum of 133.7 minutes).
  • Rename nodes_info job to html_nodes_info.
  • Add new node health check jobs, split off from maintenance jobs, run every 15 minutes. Add two new checks: 1. for correct future (2019 is incorrect atm, and we sometimes got that), 2. for writeable /tmp (sometimes happens on borked armhf nodes).
  • Add jobs for testing buster. s/testing/stretch/g in all the code. Finish the code to deal with buster. Teach jessie and Ubuntu 16.04 how to debootstrap buster.

Axel Beckert is currently in the process of setting up eight LeMaker HiKey960 boar[...]

Vincent Bernat: IPv4 route lookup on Linux

Wed, 21 Jun 2017 08:00:32 +0000

TL;DR: With its implementation of IPv4 routing tables using LPC-tries, Linux offers good lookup performance (50 ns for a full view) and low memory usage (64 MiB for a full view).

During the lifetime of an IPv4 datagram inside the Linux kernel, one important step is the route lookup for the destination address through the fib_lookup() function. From essential information about the datagram (source and destination IP addresses, interfaces, firewall mark, …), this function should quickly provide a decision. Some possible options are: local delivery (RTN_LOCAL), forwarding to a supplied next hop (RTN_UNICAST), silent discard (RTN_BLACKHOLE).

Since 2.6.39, Linux stores routes into a compressed prefix tree (commit 3630b7c050d9). In the past, a route cache was maintained but it has been removed1 in Linux 3.6.

  • Route lookup in a trie
      • Lookup with a simple trie
      • Lookup with a path-compressed trie
      • Lookup with a level-compressed trie
      • Implementation in Linux
      • Performance
      • Memory usage
  • Routing rules
      • Builtin tables
      • Performance
  • Conclusion

Route lookup in a trie

Looking up a route in a routing table is to find the most specific prefix matching the requested destination. Let's assume the following routing table:

$ ip route show scope global table 100
default via dev out2
    nexthop via dev out3 weight 1
    nexthop via dev out4 weight 1
via dev out1
via dev out1
via dev out1
via dev out1

Here are some examples of lookups and the associated results:

Destination IP    Next hop
                  via out1
                  via out1
                  via out3 or via out4 (ECMP)
                  via out2

A common structure for route lookup is the trie, a tree structure where each node has its parent as prefix.

Lookup with a simple trie

The following trie encodes the previous routing table:

For each node, the prefix is known by its path from the root node and the prefix length is the current depth. A lookup in such a trie is quite simple: at each step, fetch the nth bit of the IP address, where n is the current depth. If it is 0, continue with the first child. Otherwise, continue with the second. If a child is missing, backtrack until a routing entry is found. For example, when looking for, we will find the result in the corresponding leaf (at depth 32). However for, we will reach but there is no second child. Therefore, we backtrack until the routing entry.

Adding and removing routes is quite easy. From a performance point of view, the lookup is done in constant time relative to the number of routes (due to the maximum depth being capped at 32). Quagga is an example of routing software still using this simple approach.

Lookup with a path-compressed trie

In the previous example, most nodes only have one child. This leads to a lot of unneeded bitwise comparisons and memory is also wasted on many nodes. To overcome this problem, we can use path compression: each node with only one child is removed (except if it also contains a routing entry). Each remaining node gets a new property telling how many input bits should be skipped. Such a trie is also known as a Patricia trie or a radix tree. Here is the path-compressed version of the previous trie:

Since some bits have been ignored, on a match, a final check is executed to ensure all bits from the found entry are matching the input IP address. If not, we must act as if the entry wasn't found (and backtrack to find a matching prefi[...]
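The simple-trie lookup described above is easy to sketch. The following is my own minimal illustration (not the kernel's LPC-trie code): a binary trie over 32-bit addresses, where the backtracking rule is implemented by remembering the last routing entry seen on the way down. The routes are hypothetical, chosen for the example.

```python
class Node:
    def __init__(self):
        self.children = [None, None]  # child for bit 0 / bit 1
        self.entry = None             # routing entry, if any

def bit(addr, n):
    # nth bit of the 32-bit address, counting from the most significant
    return (addr >> (31 - n)) & 1

def insert(root, prefix, plen, entry):
    node = root
    for n in range(plen):
        b = bit(prefix, n)
        if node.children[b] is None:
            node.children[b] = Node()
        node = node.children[b]
    node.entry = entry

def lookup(root, addr):
    # most specific match: keep the last entry seen while descending
    node, best = root, root.entry
    for n in range(32):
        node = node.children[bit(addr, n)]
        if node is None:
            break  # "backtrack" to the best entry found so far
        if node.entry is not None:
            best = node.entry
    return best

# Hypothetical routes for illustration:
root = Node()
insert(root, 0x00000000, 0, "default via out2")
insert(root, 0xC0A80000, 16, "via out1")   # 192.168.0.0/16
print(lookup(root, 0xC0A80102))  # inside the /16
print(lookup(root, 0x08080808))  # falls back to the default route
```

The maximum depth of 32 is what makes the lookup constant time in the number of routes, as noted above; path and level compression only reduce the constant.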

Steve McIntyre: So, Stretch happened...

Tue, 20 Jun 2017 22:21:00 +0000


Things mostly went very well, and we've released Debian 9 this weekend past. Many many people worked together to make this possible, and I'd like to extend my own thanks to all of them.

As a project, we decided to dedicate Stretch to our late founder Ian Murdock. He did much of the early work to get Debian going, and inspired many more to help him. I had the good fortune to meet up with Ian years ago at a meetup attached to a Usenix conference, and I remember clearly he was a genuinely nice guy with good ideas. We'll miss him.

For my part in the release process, again I was responsible for producing our official installation and live images. Release day itself went OK, but as is typical the process ran late into Saturday night / early Sunday morning. We made and tested lots of different images, although numbers were down from previous releases as we've stopped making the full CD sets now.

Sunday was the day for the release party in Cambridge. As is traditional, a group of us met up at a local hostelry for some revelry! We hid inside the pub to escape from the ridiculously hot weather we're having at the moment.


Due to a combination of the lack of sleep and the heat, I nearly forgot to even take any photos - apologies to the extra folks who'd been around earlier whom I missed with the camera... :-(

Andreas Bombe: New Blog

Tue, 20 Jun 2017 22:09:40 +0000

So I finally got myself a blog to write about my software and hardware projects, my work in Debian and, I guess, stuff. Readers of, hi! If you can see this I got the configuration right.

For the curious, I’m using a static site generator for this blog — Hugo to be specific — like all the cool kids do these days.

Foteini Tsiami: Internationalization, part one

Tue, 20 Jun 2017 10:00:46 +0000

The first part of internationalizing a Greek application is, of course, translating all the Greek text into English. I already knew how to open a user interface (.ui) file with Glade, how to translate/save it from there, and how to mail the result to the developers.

If only it was that simple! I learned that the code of most open source software is kept on version control systems, which fortunately are a bit similar to Wikis, which I was familiar with, so I didn’t have a lot of trouble understanding the concepts. Thanks to a very brief git crash course from my mentors, I was able to quickly start translating, committing, and even pushing back the updated files.

The other tricky part was internationalizing the Python source code. There, Glade couldn't be used; a text editor like Pluma was needed. And the messages were part of the source code, so I had to be extra careful not to break the syntax. The English text then needed to be wrapped in _(), the gettext call which dynamically translates the messages into the user's language.

All this was very educative, but now that the first part of the internationalization, i.e. the Greek-to-English translation, is over, I think I'll take some time to read more about the tools that I used!


Norbert Preining: TeX Live 2017 hits Debian/unstable

Tue, 20 Jun 2017 01:09:33 +0000

Yesterday I uploaded the first packages of TeX Live 2017 to Debian/unstable, meaning that the new release cycle has started. Debian/stretch was released over the weekend, and this opened up unstable for new developments. The upload comprised the following packages: asymptote, cm-super, context, context-modules, texlive-base, texlive-bin, texlive-extra, texlive-lang, texworks, xindy.

I mentioned already in a previous post the following changes:

  • several packages have been merged, some are dropped (e.g. texlive-htmlxml) and one new package (texlive-plain-generic) has been added
  • luatex got updated to 1.0.4, and is now considered stable
  • updmap and fmtutil now require either -sys or -user
  • tlmgr got a shell mode (interactive/scripting interface) and a new feature to add arbitrary TEXMF trees (conf auxtrees)

The last two changes are described together with other news (easy TEXMF tree management) in the TeX Live release post. These changes more or less sum up the new infrastructure developments in TeX Live 2017. Since the last release to unstable (which happened on 2017-01-23), about half a year of package updates have accumulated; below is an approximate list of updates (not split into new/updated, though). Enjoy the brave new world of TeX Live 2017, and please report bugs to the BTS!
Updated/new packages: academicons, achemso, acmart, acro, actuarialangle, actuarialsymbol, adobemapping, alkalami, amiri, animate, aomart, apa6, apxproof, arabluatex, archaeologie, arsclassica, autoaligne, autobreak, autosp, axodraw2, babel, babel-azerbaijani, babel-english, babel-french, babel-indonesian, babel-japanese, babel-malay, babel-ukrainian, bangorexam, baskervaldx, baskervillef, bchart, beamer, beamerswitch, bgteubner, biblatex-abnt, biblatex-anonymous, biblatex-archaeology, biblatex-arthistory-bonn, biblatex-bookinother, biblatex-caspervector, biblatex-cheatsheet, biblatex-chem, biblatex-chicago, biblatex-claves, biblatex-enc, biblatex-fiwi, biblatex-gb7714-2015, biblatex-gost, biblatex-ieee, biblatex-iso690, biblatex-manuscripts-philology, biblatex-morenames, biblatex-nature, biblatex-opcit-booktitle, biblatex-oxref, biblatex-philosophy, biblatex-publist, biblatex-shortfields, biblatex-subseries, bibtexperllibs, bidi, biochemistry-colors, bookcover, boondox, bredzenie, breqn, bxbase, bxcalc, bxdvidriver, bxjalipsum, bxjaprnind, bxjscls, bxnewfont, bxorigcapt, bxpapersize, bxpdfver, cabin, callouts, chemfig, chemformula, chemmacros, chemschemex, childdoc, circuitikz, cje, cjhebrew, cjk-gs-integrate, cmpj, cochineal, combofont, context, conv-xkv, correctmathalign, covington, cquthesis, crimson, crossrefware, csbulletin, csplain, csquotes, css-colors, cstldoc, ctex, currency, cweb, datetime2-french, datetime2-german, datetime2-romanian, datetime2-ukrainian, dehyph-exptl, disser, docsurvey, dox, draftfigure, drawmatrix, dtk, dviinfox, easyformat, ebproof, elements, endheads, enotez, eqnalign, erewhon, eulerpx, expex, exsheets, factura, facture, fancyhdr, fbb, fei, fetamont, fibeamer, fithesis, fixme, fmtcount, fnspe, fontmfizz, fontools, fonts-churchslavonic, fontspec, footnotehyper, forest, gandhi, genealogytree, glossaries, glossaries-extra, gofonts, gotoh, graphics, graphics-def, graphics-pln, grayhints, gregoriotex, gtrlib-largetrees, gzt, 
halloweenmath, handout, hang, heuristica, hlist, hobby, hvfloat, hyperref, hyperxmp, ifptex, ijsra, japanese-otf-uptex, jlreq, jmlr, jsclasses, jslectureplanner, karnaugh-map, keyfloat, knowledge, komacv, koma-script, kotex-oblivoir, l3, l3build, ladder, langsci, latex[...]

Jeremy Bicha: GNOME Tweak Tool 3.25.3

Mon, 19 Jun 2017 23:15:46 +0000

Today I released the second development snapshot (3.25.3) of what will be GNOME Tweak Tool 3.26. I consider the initial User Interface (UI) rework proposed by the GNOME Design Team to be complete now. Every page in Tweak Tool has been updated, either in this snapshot or the previous development snapshot. The hard part still remains: making the UI look as good as the mockups. Tweak Tool’s backend makes this a bit more complicated than usual for an app like this.

Here are a few visual highlights of this release.

The Typing page has been moved into an Additional Layout Options dialog in the Keyboard & Mouse page. Also, the Compose Key option has been given its own dialog box.

Florian Müllner added content to the Extensions page that is shown if you don’t have any GNOME Shell extensions installed yet.

A hidden feature that GNOME has had for a long time is the ability to move the Application Menu from the GNOME top bar to a button in the app’s title bar. This is easy to enable in Tweak Tool by turning off the Application Menu switch in the Top Bar page. This release improves how well that works, especially for Ubuntu users where the required hidden appmenu window button was probably not pre-configured.

Some of the ComboBoxes have been replaced by ListBoxes. One example is on the Workspaces page, where the new design allows for more information about the different options. The ListBoxes are also a lot easier to select than the smaller ComboBoxes were.

For details of these and other changes, see the commit log or the NEWS file. GNOME Tweak Tool 3.26 will be released alongside GNOME 3.26 in mid-September. [...]

Shirish Agarwal: Seizures, Vigo and bi-pedal motion

Mon, 19 Jun 2017 16:49:52 +0000

Dear all, an update is in order. While talking to my physiotherapist a couple of days ago, I came to know the correct term for what I was experiencing: I had experienced a convulsive ‘seizure‘, with spasms being a part of it. Reading the wikipedia entry and the associated links/entries, it seems I am and was very very lucky. The hospital, or any hospital, is a very bad place. I have seen all the horror movies which people say are disturbing, but I have never been disturbed as much as I was in hospital. I couldn’t help but hear people’s screams, and I saw so many cases which turned critical. At times it was not easy to remain positive, but dunno from where there was a will to live which pushed me and is still pushing me. One of the things that was painful for a long time was the almost constant stream of injections I was given. It was almost an afterthought that the nurse put a Vigo in me. While the above medical device is similar, mine had a cross, the needle was much shorter, and it is injected into the vein. After that, all injections are given through it, including the common liquid (salt and water) given to patients first to stabilize them; I can't remember the name at the moment. I also had a urine bag which was attached to my penis in a non-invasive manner. Both my grandfather and grandma used to cry when things went wrong, while I didn’t feel any pain even when the urine bag was detached and attached again, so it seems things have improved there. I was also very conscious of getting bed sores, as both my grandpa and grandma had them when in hospital. As I had no strength, I had to beg, plead, do everything to make sure that every few hours I was turned from one side to the other. I also had an air bed which is supposed to alleviate or relieve this condition. Constant physiotherapy every day slowly increased my strength, and slowly both the Vigo and the feeding tube put down my throat were removed.
I have no remembrance of when they put in the feeding tube, as it was all rubber and felt bad when it came out. Further physiotherapy helped me crawl to the top of the bed; the bed was around 6 feet in length, more than enough so I could turn to both sides without falling over. A few days later I found I could also sit up using my legs as a lever, and that gave the doctors confidence to remove the air bed so I could crawl more easily. A couple more days later I stood on my feet for the first time, and it was like I had lead legs. Each step was painful, but the sense and feeling of independence won over whatever pain there was. I had to endure wet wipes from nurses and ward boys in place of a shower every day, and while they were always respectful, it felt humiliating. The first time I had a bath, after 2 weeks or so, every part of my body cried and I felt like a weakling. I had thought I wouldn’t be able to do justice to the physiotherapy session which was soon after, but after the session I was back to feeling normal. For a while I was doing the penguin waddle, which, while painful, also had humor in it. I did think of filming the penguin waddle but decided against it, as I was half-naked most of the time (the hospital clothes never fit me properly). Cut to today, and I was able to climb up and down the stairs on my own and circle my own block, slowly, but able to do it by myself. While I always had a sense of wonderment for bi-pedal motion as well as all other means of transport, I have found much more respect for walking. I li[...]

Vasudev Kamath: Update: - Shell pipelines with subprocess crate and use of Exec::shell function

Mon, 19 Jun 2017 15:18:00 +0000

In my previous post I used the Exec::shell function from the subprocess crate and passed it a string generated by interpolating the --author argument. This string was then run by the shell via Exec::shell. After publishing the post I got a ping on IRC from Jonas Smedegaard and Paul Wise that I should replace Exec::shell, as it might be prone to errors or shell injection attacks. Indeed they were right; in my hurry I had not completely read the function documentation, which clearly mentions this fact:

When invoking this function, be careful not to interpolate arguments into the string run by the shell, such as Exec::shell(format!("sort {}", filename)). Such code is prone to errors and, if filename comes from an untrusted source, to shell injection attacks. Instead, use Exec::cmd("sort").arg(filename).

Though I'm not directly taking input from an untrusted source, it's still possible that the string I got back from the git log command might contain some oddly formatted text with characters of a different encoding, which could possibly break Exec::shell, as I'm not sanitizing the shell command. When we use Exec::cmd and pass arguments using .args chaining, the library takes care of creating a safe command line. So I went in and modified the function to use Exec::cmd instead of Exec::shell. Below is the updated function.
fn copyright_fromgit(repo: &str) -> Result<Vec<String>> {
    let tempdir = TempDir::new_in(".", "debcargo")?;
    Exec::cmd("git")
        .args(&["clone", "--bare", repo, tempdir.path().to_str().unwrap()])
        .stdout(subprocess::NullFile)
        .stderr(subprocess::NullFile)
        .popen()?;

    let author_process = {
        Exec::shell(OsStr::new("git log --format=\"%an <%ae>\"")).cwd(tempdir.path()) |
        Exec::shell(OsStr::new("sort -u"))
    }.capture()?;
    let authors = author_process.stdout_str().trim().to_string();
    let authors: Vec<&str> = authors.split('\n').collect();

    let mut notices: Vec<String> = Vec::new();
    for author in &authors {
        let author_string = format!("--author={}", author);
        let first = {
            Exec::cmd("/usr/bin/git")
                .args(&["log", "--format=%ad", "--date=format:%Y", "--reverse", &author_string])
                .cwd(tempdir.path()) |
            Exec::shell(OsStr::new("head -n1"))
        }.capture()?;
        let latest = {
            Exec::cmd("/usr/bin/git")
                .args(&["log", "--format=%ad", "--date=format:%Y", &author_string])
                .cwd(tempdir.path()) |
            Exec::shell("head -n1")
        }.capture()?;

        let start = i32::from_str(first.stdout_str().trim())?;
        let end = i32::from_str(latest.stdout_str().trim())?;
        let cnotice = match start.cmp(&end) {
            Ordering::Equal => format!("{}, {}", start, author),
            _ => format!("{}-{}, {}", start, end, author),
        };
        notices.push(cnotice);
    }
    Ok(notices)
}

I still use Exec::shell for generating the author list; this is not problematic, as I'm not interpolating arguments to create the command string. [...]
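The subprocess crate is modelled on Python's subprocess module, and the same interpolation pitfall exists there. A minimal Python sketch of the list-of-arguments pattern that Exec::cmd(...).args(...) corresponds to (the filename value is made up for illustration):

```python
# Sketch of the safe-vs-unsafe distinction in Python's subprocess module.
import subprocess

filename = "some; rm -rf file"  # imagine this came from untrusted input

# Unsafe: the string is parsed by a shell, so metacharacters in
# filename become shell syntax (injection risk):
#   subprocess.run(f"sort {filename}", shell=True)

# Safe: arguments are passed as a list, so each one reaches the program
# as a single argv entry and is never interpreted by a shell. Here the
# metacharacters are printed literally instead of being executed.
result = subprocess.run(["echo", filename], capture_output=True, text=True)
print(result.stdout.strip())  # → some; rm -rf file
```

The list form is exactly what .args(&[...]) builds on the Rust side.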

Hideki Yamane: PoC: use Sphinx for debian-policy

Mon, 19 Jun 2017 13:09:02 +0000

Before the party, we held our monthly study meeting and I gave a talk about a tiny hack for the debian-policy document.

debian-policy was converted from debian-sgml to docbook in 4.0.0, and my proposal is "Go move forward to Sphinx".

Here's a sample, and you can also get the PoC source from my GitHub repo and check it.

Michal Čihař: Call for Weblate translations

Mon, 19 Jun 2017 04:00:22 +0000


Weblate 2.15 is almost ready (I expect no further code changes), so it's really a great time to contribute to its translations! Weblate 2.15 should be released early next week.

As you might expect, Weblate is translated using Weblate, so the contributions should be really easy. In case there is something unclear, you can look into Weblate documentation.

I'd especially like to see improvements in the Italian translation, which was one of the first translations in Weblate's beginnings but hasn't received much love in the past years.

Filed under: Debian English SUSE Weblate

Simon Josefsson: OpenPGP smartcard under GNOME on Debian 9.0 Stretch

Sun, 18 Jun 2017 22:42:19 +0000

I installed Debian 9.0 “Stretch” on my Lenovo X201 laptop today. Installation went smooth, as usual. GnuPG/SSH with an OpenPGP smartcard — I use a YubiKey NEO — does not work out of the box with GNOME though. I wrote about how to fix OpenPGP smartcards under GNOME with Debian 8.0 “Jessie” earlier, and I thought I’d do a similar blog post for Debian 9.0 “Stretch”. The situation is slightly different than before (e.g., GnuPG works better but SSH doesn’t), so there is some progress. May I hope that Debian 10.0 “Buster” gets this right? Pointers to which package in Debian should have a bug report tracking this issue are welcome (or a pointer to an existing bug report).

After first login, I attempt to use gpg --card-status to check if GnuPG can talk to the smartcard.

jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$

This fails because scdaemon is not installed. Isn’t a smartcard common enough that this should be installed by default on a GNOME Desktop Debian installation? Anyway, install it as follows.

root@latte:~# apt-get install scdaemon

Then try again.

jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: No such device
gpg: OpenPGP card not available: No such device
jas@latte:~$

I believe scdaemon here attempts to use its internal CCID implementation, and I do not know why it does not work. At this point I usually recall that I want pcscd installed, since I work with smartcards in general.

root@latte:~# apt-get install pcscd

Now gpg --card-status works!
jas@latte:~$ gpg --card-status
Reader ...........: Yubico Yubikey NEO CCID 00 00
Application ID ...: D2760001240102000006017403230000
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01740323
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key :
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 8358
Signature key ....: 9941 5CE1 905D 0E55 A9F8  8026 860B 7FBB 32F8 119D
      created ....: 2014-06-22 19:19:04
Encryption key....: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
      created ....: 2014-06-22 19:19:20
Authentication key: 2E08 856F 4B22 2148 A40A  3E45 AF66 08D7 36BA 8F9B
      created ....: 2014-06-22 19:19:41
General key info..: sub rsa2048/860B7FBB32F8119D 2014-06-22 Simon Josefsson
sec#  rsa3744/0664A76954265E8C  created: 2014-06-22  expires: 2017-09-04
ssb>  rsa2048/860B7FBB32F8119D  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/9535162A78ECD86B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/AF6608D736BA8F9B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
jas@latte:~$

Using the key will not work though.

jas@latte:~$ echo foo|gpg -a --sign
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
jas@latte:~$

This is because the public key and the secret key stub are not available.

jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$

You need to import the key for [...]

Alexander Wirt: alioth needs your help

Sun, 18 Jun 2017 19:06:13 +0000


It may look like the decision for pagure as the alioth replacement is already finalized, but that's not really true. I got a lot of feedback and tips in the last weeks, which made me postpone my decision. Several alternative systems were recommended to me; here are a few examples:

and probably several others. I won’t be able to evaluate all of those systems in advance of our sprint. That’s where you come in: if you are familiar with one of those systems, or want to get familiar with them, join us on our mailing list and create a wiki page below with a review of your system.

What do we need to know?

  • Feature set compared to current alioth
  • Feature set compared to a popular system like github
  • Some implementation designs
  • Some information about scaling (expect something like 15,000 to 25,000 repos)
  • Support for other version control systems
  • Advantages: why should we choose that system
  • Disadvantages: why shouldn’t we choose that system
  • License
  • Other interesting features
  • Details about extensibility
  • A really nice thing would be a working vagrant box / vagrantfile + ansible/puppet to test things

If you want to start such a review, please announce it on the mailing list.

If you have questions, ask me on IRC, Twitter or mail. Thanks for your help!

Eriberto Mota: Como migrar do Debian Jessie para o Stretch

Sun, 18 Jun 2017 17:58:18 +0000

Welcome to Debian Stretch! Yesterday, 17 June 2017, Debian 9 (Stretch) was released. I would like to talk about some basic procedures and rules for migrating from Debian 8 (Jessie).

Initial steps

The first thing to do is to read the release notes. This is essential to learn about possible bugs and special situations. The second step is to fully update Jessie before migrating to Stretch. To do that, still inside Debian 8, run the following commands:

# apt-get update
# apt-get dist-upgrade

Migrating

Edit the /etc/apt/sources.list file and change all the jessie names to stretch. Here is an example of the contents of that file (it may vary according to your needs):

deb stretch main
deb-src stretch main
deb stretch/updates main
deb-src stretch/updates main

Then run:

# apt-get update
# apt-get dist-upgrade

If there is any problem, read the error messages and try to solve the issue. Whether or not you manage to solve it, run the command again:

# apt-get dist-upgrade

If new problems show up, try to solve them. Search Google for solutions if necessary. But generally everything will work out and you shouldn't have problems.

Changes in configuration files

While you are migrating, some messages about changes in configuration files may be shown. This can leave some users lost, not knowing what to do. Don't panic. These messages can be presented in two ways: as plain text in a shell, or in a blue message window. The following text is an example of a shell message:

Configuration file '/etc/rsyslog.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it? Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** rsyslog.conf (Y/I/N/O/D/Z) [default=N] ?

The following screen is an example of a window message. In both cases, it is recommended that you choose to install the new version of the configuration file. This is because the new configuration file will be fully adapted to the newly installed services and may have many new or different options. But don't worry, your settings will not be lost; there will be a backup of them. So, in the shell, choose option "Y", and in the window case, choose "install the package maintainer's version". It is very important to write down the name of each modified file. In the case of the window above it is /etc/samba/smb.conf; in the shell case, the file was /etc/rsyslog.conf. After completing the migration, you will be able to compare the new configuration file with the original one. If the new [...]

Michal Čihař: python-gammu for Windows

Sun, 18 Jun 2017 16:00:25 +0000


It has been a few months since I started providing Windows binaries for Gammu, but other parts of the family were still missing. Today, I'm adding python-gammu.

Unlike previous attempts, which used cross-compilation on Linux using Wine, this is also based on AppVeyor. I still don't have to touch Windows to do it, which is nice :-). This was introduced in python-gammu 2.9 and depends on Gammu 1.38.4.

What is good about this is that pip install python-gammu should now work with binary packages if you're using Python 3.5 or 3.6.

Maybe I'll find time to look at providing Wammu as well, but it's more tricky there, as it doesn't support Python 3, while python-gammu for Windows can currently only be built for Python 3.5 and 3.6 (due to MSVC dependencies of older Python versions).

Filed under: Debian English Gammu python-gammu Wammu

Vasudev Kamath: Rust - Shell like Process pipelines using subprocess crate

Sun, 18 Jun 2017 15:29:00 +0000

I had to extract copyright information from the git repository of the crate upstream. The need arose as part of updating debcargo, the tool to create Debian package source from a Rust crate. The general idea behind taking copyright information from git is to extract the starting and latest contribution year for every author/committer. This can be easily achieved using the following shell snippet:

for author in $(git log --format="%an" | sort -u); do
    author_email=$(git log --format="%an <%ae>" --author="$author" | head -n1)
    first=$(git log --author="$author" --date=format:%Y --format="%ad" --reverse \
        | head -n1)
    latest=$(git log --author="$author" --date=format:%Y --format="%ad" \
        | head -n1)
    if [ $first -eq $latest ]; then
        echo "$first, $author_email"
    else
        echo "$first-$latest, $author_email"
    fi
done

Now the challenge was to execute these commands in Rust and get the required answer. As a first step I looked at std::process, the standard library's support for executing shell commands. My idea was to execute the first command to extract the authors into a Rust vector, and then have the 2 remaining commands extract the years in a loop. (I do not need the additional author_email command in Rust, as I can easily get both in the first command, which is used in the for loop of the shell snippet, and use it inside another loop.) So I set up 3 commands outside the loop with input and output redirected; the following snippet should give you some idea of what I tried to do:

let authors_command = Command::new("/usr/bin/git")
    .arg("log")
    .arg("--format=\"%an <%ae>\"")
    .spawn()?;
let output = authors_command.wait()?;
let authors: Vec<String> = String::from_utf8(output.stdout).split('\n').collect();
let head_n1 = Command::new("/usr/bin/head")
    .arg("-n1")
    .stdin(Stdio::piped())
    .stdout(Stdio::piped())
    .spawn()?;
for author in &authors {
    ...
}

And inside the loop I would create the 2 additional git commands, read their output via a pipe and feed it to the head command.
This is where I learned that it is not as straightforward as it looks :-). The std::process::Command type implements neither the Copy nor the Clone traits, which means that with one use of it I give up the ownership! And here I started fighting with the borrow checker. I needed to duplicate declarations to make sure I had the required commands available all the time. Additionally I needed to handle error output at every point, which created too many nested statements, thereby complicating the program and reducing its readability. When it all started getting out of control, I gave it a second thought and wondered if it would be good to write this down as a shell script, ship it along with debcargo and use the script from the Rust program. This would satisfy my need, but I would have to ship an additional script along with debcargo, which I was not really happy with. Then a search on revealed subprocess, a crate designed to be similar to the subprocess module from Python! Though the crate is not highly downloaded, it still looked promising, especially since it implements a trait called BitOr, which allows use of the | operator to chain commands. Additionally it allows executing full shell command[...]
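For comparison, here is how this kind of pipeline is chained with Python's subprocess module, which the subprocess crate is modelled on. To keep the sketch self-contained, printf stands in for the git log stage:

```python
# Sketch of a two-stage pipeline (stage1 | sort -u) using subprocess.Popen.
import subprocess

# Stage 1: emit some duplicate lines (stands in for `git log --format=...`).
p1 = subprocess.Popen(["printf", "b\na\nb\n"], stdout=subprocess.PIPE)
# Stage 2: sort -u, reading stage 1's stdout directly.
p2 = subprocess.Popen(["sort", "-u"], stdin=p1.stdout,
                      stdout=subprocess.PIPE, text=True)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
out, _ = p2.communicate()
print(out.split())  # → ['a', 'b']
```

The subprocess crate's BitOr-based `|` chaining expresses the same wiring (one process's stdout becoming the next one's stdin) without the manual Popen plumbing.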

Hideki Yamane: Debian9 release party in Tokyo

Sun, 18 Jun 2017 11:31:53 +0000

We celebrated the Debian 9 "stretch" release in Tokyo (thanks to Cybozu, Inc. for the venue). We enjoyed beer, wine, sake, soft drinks, pizza, sandwiches, snacks and cake & coffee (a Nicaraguan one; it reminds me of DebConf12 :) [...]

Bits from Debian: Debian 9.0 Stretch has been released!

Sun, 18 Jun 2017 06:25:00 +0000


Let yourself be embraced by the purple rubber toy octopus! We're happy to announce the release of Debian 9.0, codenamed Stretch.

Want to install it? Choose your favourite installation media among Blu-ray Discs, DVDs, CDs and USB sticks. Then read the installation manual.

Already a happy Debian user and you only want to upgrade? You can easily upgrade from your current Debian 8 Jessie installation, please read the release notes.

Do you want to celebrate the release? Share the banner from this blog in your blog or your website!

Jonathan Carter: AIMS Desktop 2017.1 is available!

Sun, 18 Jun 2017 04:55:25 +0000



Back at DebConf 15 in Germany, I gave a talk on AIMS Desktop (which was then based on Ubuntu), and our intentions and rationale for wanting to move it over to being Debian based.

Today, alongside the Debian 9 release, we release AIMS Desktop 2017.1, the first AIMS Desktop release based on Debian. For Debian 10, we’d like to get the last remaining AIMS Desktop packages into Debian so that it can be a Debian pure blend.


Students trying out a release candidate at AIMS South Africa

It’s tailored to the needs of students, lecturers and researchers at the African Institute for Mathematical Sciences, but we’re releasing it to the public in the hope that it could be useful for other tertiary education users with an interest in maths and science software. If you run a mirror at your university, it would also be great if you could host a copy. We added an rsync location on the downloads page which you could use to keep it up to date.

Jonathan Carter: Debian 9 is available!

Sun, 18 Jun 2017 04:00:18 +0000



Congratulations to everyone who has played a part in the creation of Debian GNU/Linux 9.0! It’s a great release, I’ve installed the pre-release versions for friends, family and colleagues and so far the feedback has been very positive.

This release is dedicated to Ian Murdock, who founded the Debian project in 1993, and sadly passed away on 28 December 2015. On the Debian ISO files a dedication statement is available on /doc/dedication/dedication-9.0.txt

Here’s a copy of the dedication text:

Dedicated to Ian Murdock

Ian Murdock, the founder of the Debian project, passed away
on 28th December 2015 at his home in San Francisco. He was 42.

It is difficult to exaggerate Ian's contribution to Free
Software. He led the Debian Project from its inception in
1993 to 1996, wrote the Debian manifesto in January 1994 and
nurtured the fledgling project throughout his studies at
Purdue University.

Ian went on to be founding director of Linux International,
CTO of the Free Standards Group and later the Linux
Foundation, and leader of Project Indiana at Sun
Microsystems, which he described as "taking the lesson
that Linux has brought to the operating system and providing
that for Solaris".

Debian's success is testament to Ian's vision. He inspired
countless people around the world to contribute their own free
time and skills. More than 350 distributions are known to be
derived from Debian.

We therefore dedicate Debian 9 "stretch" to Ian.

-- The Debian Developers

During this development cycle, the number of source packages in Debian grew from around 21 000 to around 25 000, which means that there’s a whole bunch of new things Debian can make your computer do. If you find something new in this release that you like, post about it on your favourite social networks, using the hashtag #newinstretch – or look it up to see what others have discovered!

Benjamin Mako Hill: The Community Data Science Collective Dataverse

Sun, 18 Jun 2017 02:35:27 +0000

I’m pleased to announce the Community Data Science Collective Dataverse. Our dataverse is an archival repository for datasets created by the Community Data Science Collective. The dataverse won’t replace the work that collective members have been doing for years to document and distribute data from our research. What we hope it will do is get our data — like our published manuscripts — into the hands of folks in the “forever” business.

Over the past few years, the Community Data Science Collective has published several papers where an important part of the contribution is a dataset. These include:

  • Consider The Redirect: A Missing Dimension of Wikipedia Research (blog post). A paper about why it’s important for Wikipedia research to take redirect pages into account. Alongside the paper, we published code to build a dataset of redirects, plus the dataset of redirects itself.
  • Page Protection: Another Missing Dimension of Wikipedia Research. A follow-up paper that discusses page protection. Alongside the paper, we published code and a dataset of page protection spells.
  • A Longitudinal Dataset of Five Years of Public Activity in the Scratch Online Community (blog post). A large dataset of social interaction data from the website that runs the Scratch online community.

Recently, we’ve also begun producing replication datasets to go alongside our empirical papers. So far, this includes:

  • Starting Online Communities: Motivations and Goals of Wiki Founders (blog post). A paper about why people set out to create new online communities.
  • The Wikipedia Adventure: Field Evaluation of an Interactive Tutorial for New Users (blog post). A description and evaluation of a system to help onboard newcomers to Wikipedia.

In the case of each of the first group of papers, where the dataset was a part of the contribution, we uploaded code and data to a website we’ve created.
Of course, even if we do a wonderful job of keeping these websites maintained over time, eventually our research group will cease to exist. When that happens, the data will eventually disappear as well. The text of our papers will be maintained long after we’re gone in the journal or conference proceedings’ publisher’s archival storage and in our universities’ institutional archives. But what about the data? Since the data is a core part — perhaps the core part — of the contribution of these papers, the data should be archived permanently as well.

Toward that end, our group has created a dataverse. Our dataverse is a repository within the Harvard Dataverse where we have been uploading archival copies of datasets over the last six months. All five of the papers described above are uploaded already. The Scratch dataset, due to access control restrictions, isn’t listed on the main page, but it’s online on the site. Moving forward, we’ll be populating this with new datasets we create as well as replication datasets for our future empirical papers. We’re currently preparing several more. The primary point of the CDSC Dataverse is not to provide you with a way to get our data, although you’re certainly welcome to use it th[...]

Alexander Wirt: Survey about alioth replacement

Sat, 17 Jun 2017 22:38:05 +0000


To get some idea about the expectations and current usage of alioth, I created a survey. Please take part in it if you are an alioth user. If you need some background about the coming alioth replacement, I recommend reading the great LWN article written by anarcat.

Bits from Debian: Upcoming Debian 9.0 Stretch!

Fri, 16 Jun 2017 22:30:00 +0000


The Debian Release Team, in coordination with several other teams, is preparing the last bits needed for releasing Debian 9 Stretch. Please, be patient! Lots of steps are involved and some of them take some time, such as building the images, propagating the release through the mirror network, and rebuilding the Debian website so that "stable" points to Debian 9.

Follow the live coverage of the release on or the @debian profile in your favorite social network! We'll spread the word about what's new in this version of Debian 9, how the release process is progressing during the weekend and facts about Debian and the wide community of volunteer contributors that make it possible.

Elena 'valhalla' Grandi: Travel piecepack v0.1

Fri, 16 Jun 2017 16:06:13 +0000

Travel piecepack v0.1


A set of generic board game pieces is nice to have around in case of a sudden spontaneous need of gaming, but carrying my full set takes some room, and is not going to fit in my daily bag.

I've been thinking for a while that a half-size set could be useful, and between yesterday and today I've actually managed to make the first version.

It's (2d) printed on both sides of a single sheet of heavy paper, laminated and then cut; it comes with both the basic suites and the playing card expansion, and fits in a mint tin divided by origami boxes.

It's just version 0.1 because there are a few issues. First of all, I'm not happy with the manual way I used to draw the page: ideally it would have been generated programmatically from the same SVG files as the 3d piecepack (with the ability to generate other expansions), but reading paths from one SVG and writing them into another SVG doesn't seem to be supported in an easy way by the libraries I could find, and looking for one was starting to take much more time than just doing it by hand.

I also still have to assemble the dice; in the picture above I'm just using the ones from the 3d-printed set, but they are a bit too big and only four of them fit in the mint tin. I already have the faces printed, so this is going to be fixed in the next few days.

Source files are available in the same git repository as the 3d-printable piecepack, with the big limitation mentioned above; updates will also be pushed there, just don't hold your breath for it :)

Michal Čihař: New projects on Hosted Weblate

Fri, 16 Jun 2017 16:00:22 +0000


Hosted Weblate also provides free hosting for free software projects. The hosting request queue was over one month long, so it was time to process it and include new projects.

This time, the newly hosted projects include:

We now also host a few new Minetest mods:

If you want to support this effort, please donate to Weblate; recurring donations in particular are welcome to keep this service alive. You can make them on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Mike Hommey: Announcing git-cinnabar 0.5.0 beta 2

Thu, 15 Jun 2017 23:12:13 +0000

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 1?

  • Enabled support for clonebundles for faster clones when the server provides them.
  • Git packs created by git-cinnabar are now smaller.
  • Added a new git cinnabar upgrade command to handle metadata upgrade separately from fsck.
  • Metadata upgrade is now significantly faster.
  • git cinnabar fsck is also faster.
  • Both now also use significantly less memory.
  • Updated git to 2.13.1 for git-cinnabar-helper.

Jeremy Bicha: #newinstretch : Latest WebKitGTK+

Thu, 15 Jun 2017 16:02:04 +0000


Debian 9 “Stretch”, the latest stable version of the venerable Linux distribution, will be released in a few days. I pushed a last-minute change to get the latest security and feature update of WebKitGTK+ (packaged as webkit2gtk 2.16.3) in before release.

Carlos Garcia Campos discusses what’s new in 2.16, but there are many, many more improvements since the 2.6 version in Debian 8.

Like many things in Debian, this was a team effort from many people. Thank you to the WebKitGTK+ developers, WebKitGTK+ maintainers in Debian, Debian Release Managers, Debian Stable Release Managers, Debian Security Team, Ubuntu Security Team, and testers who all had some part in making this happen.

As with Debian 8, there is no guaranteed security support for webkit2gtk for Debian 9. This time though, there is a chance of periodic security updates without needing to get the updates through backports.

If you would like to help test the next proposed update, please contact me so that I can help coordinate this.

Rhonda D'Vine: Apollo 440

Thu, 15 Jun 2017 10:27:00 +0000


It's been a while. And currently I shouldn't even post but rather pack my stuff because I'll get the keys to my flat in 6 days. Yay!

But, for packing I need a good sound track. And today it is Apollo 440. I saw them live at the Sundance Festival here in Vienna 20 years ago. It's been a while, but their music still gives me power to pull through.

So, without further ado, here are their songs:

  • Ain't Talkin' 'Bout Dub: This is the song I first stumbled upon, and got me into them.
  • Stop The Rock: This was featured in a movie I enjoyed, with a great dancing scene. :)
  • Krupa: Also a very up-cheering song!

As always, enjoy!


Enrico Zini: 5 years of Debian Diversity Statement

Thu, 15 Jun 2017 07:37:59 +0000

The Debian Project welcomes and encourages participation by everyone.

No matter how you identify yourself or how others perceive you: we welcome you. We welcome contributions from everyone as long as they interact constructively with our community.

While much of the work for our project is technical in nature, we value and encourage contributions from those with expertise in other areas, and welcome them into our community.

The Debian Diversity Statement has recently turned 5 years old, and I still find it the best diversity statement I know of, one of the most welcoming texts I've seen, and the result of one of the best project-wide mailing list discussions I can remember.

Joey Hess: not tabletop solar

Wed, 14 Jun 2017 21:48:02 +0000


Borrowed a pickup truck today to fetch my new solar panels. This is 1 kilowatt of power on my picnic table.


Steve Kemp: Porting pfctl to Linux

Wed, 14 Jun 2017 21:00:00 +0000

If you have a bunch of machines running OpenBSD for firewalling purposes, which is pretty standard, you might start to use source-control to maintain the rulesets. You might go further, and use some kind of integration testing to deploy changes from your revision control system into production.

Of course before you deploy any pf.conf file you need to test that the file contents are valid/correct. If your integration system doesn't run on OpenBSD though, you have a couple of choices:

  • Run a test-job that SSH's to the live systems, and tests syntax, via pfctl -n -f /path/to/rules/pf.conf.
  • Write a tool on your Linux hosts to parse and validate the rules.

I looked at this last year and got pretty far, but then got distracted. So the other day I picked it up again. It turns out that if you're patient it's not hard to use bison to generate some C code, then glue it together such that you can validate your firewall rules on a Linux system:

deagol ~/pf.ctl $ ./pfctl ./pf.conf
./pf.conf:298: macro 'undefined_variable' not defined
./pf.conf:298: syntax error

Unfortunately I had to remove quite a lot of code to get the tool to compile, which means that while some failures like that above are caught, others are missed. The example above reads:

vlans="{vlan1,vlan2}"
..
pass out on $vlans proto udp from $undefined_variable

Unfortunately the following line does not raise an error:

pass out on vlan12 inet proto tcp from <unknown> to $http_server port {80,443}

That comes about because looking up the value of the table named unknown just silently fails. In slowly removing more and more code to make it compile I lost the ability to keep track of table definitions - both their names and their values. Thus fetching a table by name has become a NOP, and a bogus name will result in no error.

Now it is possible, with more care, that you could use a hashtable library, or similar, to simulate these things. But I kinda stalled, again.

(Similar things happen with fetching a proto by name. I just hardcoded inet, gre, icmp, icmp6, etc. - things that I'd actually use.)

Might be a fun project for somebody with some time anyway! Download the OpenBSD source, e.g. from a github mirror - yeah, yeah, but still. CVS? No thanks! - then poke around beneath sbin/pfctl/. The main file you'll want to grab is parse.y, although you'll need to set up a bunch of headers too, and write yourself a Makefile. Here's a hint:

deagol ~/pf.ctl $ tree
.
├── inc
│   ├── net
│   │   └── pfvar.h
│   ├── queue.h
│   └── sys
│       ├── _null.h
│       ├── refcnt.h
│       └── tree.h
├── Makefile
├── parse.y
├── pf.conf
├── pfctl.h
├── pfctl_parser.h
└── [...]

3 directories, 11 files
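The SSH-based option mentioned above can be scripted in a few lines. Here is a hedged sketch, not the author's actual setup: the helper name and the firewall host are my own placeholders, and the only real command involved is the parse-only pfctl invocation from the post.

```shell
# check_pf: validate a candidate pf.conf by piping it to a live OpenBSD
# host and letting the real pfctl do a parse-only run (-n: check syntax,
# don't load). FIREWALL_HOST and the helper name are placeholders.
check_pf() {
    ruleset="$1"
    ssh "${FIREWALL_HOST:-fw1.example.com}" 'pfctl -n -f -' < "$ruleset"
}
```

A CI job can then run `check_pf pf.conf` and fail the build on a non-zero exit status.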

Michael Prokop: Grml 2017.05 – Codename Freedatensuppe

Wed, 14 Jun 2017 20:46:19 +0000


The Debian stretch release is going to happen soon (on 2017-06-17) and since our latest Grml release is based on a very recent version of Debian stretch, I’m taking this as an opportunity to announce it also here. At the end of May we released a new stable release of Grml (the Debian based live system focusing on system administrators’ needs), known as version 2017.05 with codename Freedatensuppe.

Details about the changes of the new release are available in the official release notes and as usual the ISOs are available via

With this new Grml release we finally made the switch from file-rc to systemd. From a user’s point of view this doesn’t change that much, though to prevent having to answer even more mails regarding the switch I wrote down some thoughts in Grml’s FAQ. There are some things that we still need to improve and sort out, but overall the switch to systemd so far went better than anticipated (thanks a lot to the pkg-systemd folks, especially Felipe Sateler and Michael Biebl!).

And last but not least, Darshaka Pathirana helped me a lot with the systemd integration and polishing the release, many thanks!

Happy Grml-ing!

Daniel Pocock: Croissants, Qatar and a Food Computer Meetup in Zurich

Wed, 14 Jun 2017 19:53:50 +0000


In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

Thanks to our supporters

The meeting now has sponsorship from three organizations: Project 21 at ETH, the Debian Project and the Free Software Foundation Europe.

Sponsorship funds help with travel expenses and refreshments.

Food is always in the news

In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter and Qatar's efforts to air-lift 4,000 cows from the US and Australia, among other things, due to the Saudi Arabia embargo.

The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.


Holger Levsen: 20170614-stretch-vim

Wed, 14 Jun 2017 17:04:46 +0000


Changed defaults for vim in Stretch

So apparently vim in Stretch comes with some new defaults; most notably, the mouse is now enabled and there is incremental search, which I find… challenging.

As a reminder for my future self, these need to go into ~/.vimrc (or /etc/vim/vimrc) to revert those changes:

set mouse=
set noincsearch

Sjoerd Simons: Debian armhf VM on arm64 server

Wed, 14 Jun 2017 13:10:31 +0000

At Collabora one of the many things we do is build Debian derivatives/overlays for customers on a variety of architectures, including 32 bit and 64 bit ARM systems. And just as Debian does, our OBS system builds on native systems rather than emulators.

Luckily with the advent of ARM server systems some years ago, building natively for those systems has been a lot less painful than it used to be. For 32 bit ARM we've been relying on Calxeda blade servers; however Calxeda unfortunately tanked ages ago and the hardware is starting to show its age (though luckily Debian Stretch does support it properly, so at least the software is still fresh). On the 64 bit ARM side, we're running on Gigabyte MP30-AR1 based servers, which can run 32 bit ARM code (as opposed to e.g. ThunderX based servers, which can only run 64 bit code). As such, running armhf VMs on them to act as build slaves seems a good choice, but setting that up is a bit more involved than it might appear.

The first pitfall is that there is no standard bootloader or boot firmware available in Debian to boot on the "virt" machine emulated by qemu (I didn't want to use an emulation of a real machine). That also means there is nothing to pick the kernel inside the guest at boot, nor anything which can e.g. have the guest network boot, which means direct kernel booting needs to be used.

The second pitfall was that the current Debian Stretch armhf kernel isn't built with support for the generic PCI host controller which the qemu virtual machine exposes, which means no storage and no network show up in the guest. Hopefully that will get solved soonish (Debian bug 864726) and can be in a Stretch update; until then a custom kernel package built with the patch attached to the bug report is required, but I won't go into that any further in this post.

So on the happy assumption that we have a kernel that works, the challenge left is to nicely manage direct kernel loading. Or more specifically: how to ensure the host boots the kernel the guest has installed via the standard apt tools, without having to copy kernels around between guest and host. That essentially comes down to exposing /boot from the guest to the host. The solution we picked is to use qemu's 9pfs support to share a folder from the host and use that as /boot of the guest. For the 9p folder the "mapped" security mode seems needed, as the "none" mode seems to get confused by dpkg (Debian bug 864718).

As we're using libvirt as our virtual machine manager, the remainder of how to glue it all together will be mostly specific to that. First step is to install the system, mostly as normal. One can directly boot into the vmlinuz and initrd.gz provided by the normal Stretch armhf netboot installer (downloaded into e.g. /tmp). The setup overall is [...]
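To make the moving parts concrete, here is a rough sketch of what the direct-kernel-boot plus 9p /boot share could look like as a raw qemu invocation, outside libvirt. This is not the author's configuration: all paths, sizes and device IDs are my own placeholders, and the function only prints the command so it can be inspected before running.

```shell
# Print (rather than run) a qemu command line that direct-boots an armhf
# guest on the "virt" machine and exposes a host directory as a 9p share
# the guest can mount on /boot. All paths and values are placeholders.
print_qemu_cmd() {
    bootdir="$1"   # host-side directory holding the guest's vmlinuz/initrd
    echo qemu-system-arm -M virt -m 1024 -nographic \
        -kernel "$bootdir/vmlinuz" \
        -initrd "$bootdir/initrd.img" \
        -append 'root=/dev/vda2 console=ttyAMA0' \
        -drive file=rootfs.img,if=virtio,format=raw \
        -fsdev "local,id=bootfs,path=$bootdir,security_model=mapped" \
        -device virtio-9p-pci,fsdev=bootfs,mount_tag=boot
}
```

Inside the guest, an fstab line along the lines of `boot /boot 9p trans=virtio 0 0` would then mount the shared folder; note security_model=mapped, matching the "mapped" mode the post says dpkg needs.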

Sjoerd Simons: Debian Jessie on Raspberry Pi 2

Wed, 14 Jun 2017 13:10:31 +0000

Apart from being somewhat slow, one of the downsides of the original Raspberry Pi SoC was that it had an old ARM11 core which implements the ARMv6 architecture. This was particularly unfortunate as most common distributions (Debian, Ubuntu, Fedora, etc.) standardized on the ARMv7-A architecture as a minimum for their ARM hardfloat ports, which is one of the reasons for Raspbian and the various other RPI specific distributions. Happily, with the new Raspberry Pi 2 using Cortex-A7 cores (which implement the ARMv7-A architecture) this issue is out of the way, which means that a standard Debian hardfloat userland will run just fine.

So the obvious first thing to do when an RPI 2 appeared on my desk was to put together a quick Debian Jessie image for it. The result can be found at: Login as root with password debian (obviously do change the password and create a normal user after booting). The image is 3G, so it should fit on any SD card marketed as 4G or bigger. Using bmap-tools for flashing is recommended; otherwise you'll be waiting for 2.5G of zeros to be written to the card, which tends to be rather boring. Note that the image is really basic and will just get you to a login prompt on either serial or HDMI; batteries are very much not included, but can be apt-getted :).

Technically, this image is simply a Debian Jessie debootstrap with extra packages for hardware support. Unlike Raspbian, the first partition (which contains the firmware & kernel files to boot the system) is mounted on /boot/firmware rather than on /boot. This is because the VideoCore expects the first partition to be a FAT filesystem, but mounting FAT on /boot really doesn't work right on Debian systems as it contains files managed by dpkg (e.g. the kernel package) which require a POSIX compatible filesystem. Essentially the same reason why Debian is using /boot/efi for the ESP partition on Intel systems rather than mounting it on /boot directly.

For reference, the RPI2 specific packages in this image are from in the jessie distribution and rpi2 component (this repository is enabled by default on the image). The relevant packages there are:

  • linux: Current 3.18 based package from Debian experimental (3.18.5-1~exp1 at the time of this writing) with a stack of patches on top from the raspberrypi github repository, tweaked to build an rpi2 flavour as the patchset isn't multiplatform capable.
  • raspberrypi-firmware-nokernel: Firmware package and misc libraries packages taken from Raspbian, with a slight tweak to install in /boot/firmware rather than /boot.
  • flash-kernel: Current flash-kernel package from Debian experimental, with a small addition to[...]
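The bmap-tools flashing step recommended above is essentially a one-liner. A hedged sketch follows — the helper name and file names are my own placeholders, and the device argument is destructive, so double-check it before running for real:

```shell
# flash_image: write an image to an SD card using its block map (.bmap)
# so only the mapped, non-zero regions are actually copied.
# Both arguments are placeholders for illustration.
flash_image() {
    image="$1"    # e.g. jessie-rpi2.img (with jessie-rpi2.img.bmap beside it)
    device="$2"   # e.g. /dev/mmcblk0 -- verify this before running!
    bmaptool copy --bmap "$image.bmap" "$image" "$device"
}
```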

Mike Gabriel: Ayatana Indicators

Wed, 14 Jun 2017 12:53:56 +0000

In the near future, various upstream projects related to the Ubuntu desktop experience as we have known it so far may become only sporadically maintained or even fully unmaintained. Ubuntu will switch to the GNOME desktop environment with 18.04 LTS as its default desktop, maybe even earlier. The Application Indicators [1] brought into being by Canonical Ltd. will not be needed in GNOME (AFAIK) any more. We can expect the Application Indicator related projects to become unmaintained upstream. (In fact I have recently been offered continuation of upstream maintenance of libdbusmenu.)

Historical Background

This all started at Ubuntu Developer Summit 2012, when Canonical Ltd. announced Ubuntu to become the successor of Windows XP in business offices. The Unity Greeter received a Remote Login Service enhancement: since then it supports Remote Login to Windows Terminal Servers. The question came up why Remote Login to Linux servers--maybe even Ubuntu machines--was not on the agenda. It turned out that it wasn't even a discussable topic.

At that time, I started looking into the Unity Greeter code, adding support for X2Go Logon into Unity Greeter. I never really stopped looking at the greeter code from time to time. Since then, it has turned into some sort of a hobby... While looking into the Unity Greeter code over the past years and actually forking Unity Greeter as Arctica Greeter [2] in September 2015, I also started looking into the Application Indicators concept just recently. And I must say, the more I have been looking into it, the more I have started liking the concept behind Application Indicators. The basic idea is awesome. However, lately all indicators became more and more Ubuntu-centric and IMHO too polluted by code related to the just-declared-dead Ubuntu phablet project.

Forking Application Indicators

Saying all this, I recently forked Application Indicators as Ayatana Indicators. At the moment I represent upstream and Debian package maintainer in one person. Ideally, this is only temporary and more people will join in. (I heard some Unity 7 maintainers are thinking about switching to Ayatana Indicators for the now community-maintained Unity 7.) The goal is to provide Ayatana Indicators generically to all desktop environments that want to use them, either by default or optionally. Release-wise, the idea is to strictly differentiate between upstream and Debian downstream in the release cycles of the various related components.

I hope no one is too concerned about the choice of name, as the "Ayatana" word was actually first used for upstream efforts inside Ubuntu [3]. Using the Ayatana term for the indicator forks is meant as honouring the previously undertaken efforts. I have seen[...]

Nicolas Dandrimont: DebConf 17 bursaries: update your status now!

Wed, 14 Jun 2017 12:40:53 +0000

TL;DR: if you applied for a DebConf 17 travel bursary and you haven’t accepted it yet, log in to the DebConf website and update your status before June 20th, or your bursary grant will be gone.

*blows dust off the blog*

As you might be aware, DebConf 17 is coming soon and it’s gonna be the biggest DebConf in Montréal ever. Of course, what makes DebConf great is the people who come together to work on Debian, share their achievements, and help draft our cunning plans to take over the world. Also cheese. Lots and lots of cheese.

To that end, the DebConf team had initially budgeted US$40,000 for travel grants ($30,000 for contributors, $10,000 for diversity and inclusion grants), allowing the bursaries team to bring people from all around the world who couldn’t have made it to the conference. Our team of volunteers rated the 188 applications, we made a ranking (technically, two rankings: one on contribution grounds and one on D&I grounds), and we finally sent out a first round of grants last week.

After the first round, the team made a new budget assessment, and thanks to the support of our outstanding sponsors, an extra $15,000 was allocated for travel stipends during this week’s team meeting, with the blessing of the DPL. We’ve therefore been able to send a second round of grants today.

Now, if you got a grant, you have two things to do: you need to accept your grant, and you need to update your requested amount. Both of those steps allow us to use our budget more wisely: having grants expire frees money up to get more people to the conference earlier, and having updated amounts gives us a better view of our overall budget. (You can only lower your requested amount, as we can’t inflate our budget.) Our system has sent mails to everyone, but it’s easy enough to let that email slip (or to not receive it for some reason).

It takes 30 seconds to look at the status of your request on the DebConf 17 website, and even less to do the few clicks needed for you to accept the grant. Please do so now! (OK, it might take a few minutes if your SSO certificate has expired and you have to look up the docs to renew it.)

The deadline for the first round of travel grants (which went out last week) is June 20th. The deadline for the second round (which went out today) is June 24th. If somehow you can’t log in to the website before the deadline, the bursaries team has an email address you can use.

We want to send out a third round of grants on June 25th, using the money people freed up: our current acceptance ratio is around 40%, and a lot of very strong applications have been deferred. We don’t want them to wait until July to get a definitive [...]

Dirk Eddelbuettel: #7: C++14, R and Travis -- A useful hack

Wed, 14 Jun 2017 01:54:00 +0000

Welcome to the seventh post in the rarely relevant R ramblings series, or R4 for short. We took a short break as several conferences and other events interfered during the month of May, keeping us busy and away from this series. But we are back now with a short and useful hack I came up with this weekend.

The topic is C++14, i.e. the newest formally approved language standard for C++, and its support in R and on Travis CI. With release R 3.4.0 a few weeks ago, R now formally supports C++14. Which is great. But there be devils. A little known fact is that R hangs on to its configuration settings from its own compile time. That matters in cases such as the one we are looking at here: Travis CI. Travis is a tremendously useful and widely-deployed service, most commonly connected to GitHub, driving "continuous integration" (the 'CI') testing after each commit. But Travis CI, for as useful as it is, is also maddeningly conservative, still forcing everybody to live and die by Ubuntu 14.04.

So while we all benefit from the fine work by Michael who faithfully provides Ubuntu binaries for distribution via CRAN (based on the Debian builds provided by yours truly), we are stuck with Ubuntu 14.04. Which means that while Michael can provide us with current R 3.4.0, it will be built on ancient Ubuntu 14.04.

Why does this matter, you ask? Well, if you just try to turn on the very C++14 support added to R 3.4.0 in the binary running on Travis, you get this error:

** libs
Error in .shlib_internal(args) :
  C++14 standard requested but CXX14 is not defined

And you get it whether or not you define CXX14 in the session. So R (in version 3.4.0) may want to use C++14 (because a package we submitted requests it), but having been built on the dreaded Ubuntu 14.04, it just can't oblige. Even when we supply a newer compiler. Because R hangs on to its compile-time settings rather than current environment variables. And that means no C++14, as its compile-time compiler was too ancient. Trust me, I tried: adding not only g++-6 (from a suitable repo) but also adding C++14 as the value for CXX_STD. Alas, no mas.

The trick to overcome this is twofold, and fairly straightforward. First off, we just rely on the fact that g++ version 6 defaults to C++14. So by supplying g++-6, we are in the green. We have C++14 by default without requiring extra options. Sweet. The remainder is to tell R to not try to enable C++14 even though we are using it. How? By removing CXX_STD=C++14 on the fly, and just for Travis. And this can be done easily with a small configure script which conditions on being on Travis by checking two environment v[...]
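The post is cut off before the script itself, so here is a sketch of the idea it describes rather than Dirk's exact code: condition on Travis's documented environment variables and drop the CXX_STD line from src/Makevars. The helper name and the exact sed pattern are my own assumptions.

```shell
# Sketch of a minimal R package `configure` trick: on Travis (where
# g++-6 already defaults to C++14) remove the CXX_STD assignment from
# the given Makevars file so R does not try to enable C++14 itself.
strip_cxx_std() {
    makevars="$1"
    if [ "${TRAVIS:-}" = "true" ] && [ "${CI:-}" = "true" ]; then
        # keep a backup; delete any CXX_STD assignment line
        sed -i.bak '/^[[:space:]]*CXX_STD/d' "$makevars"
    fi
}
```

A package's configure script would call something like `strip_cxx_std src/Makevars`; off Travis, the file is left untouched.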

Reproducible builds folks: Reproducible Builds: week 111 in Stretch cycle

Tue, 13 Jun 2017 20:50:40 +0000

Here's what happened in the Reproducible Builds effort between Sunday June 4 and Saturday June 10 2017:

Past and upcoming events

On June 10th, Chris Lamb presented at the Hong Kong Open Source Conference 2017 on reproducible builds.

Patches and bugs filed

Chris Lamb:

  • #864082 filed against fontconfig, forwarded upstream

Reviews of unreproducible packages

7 package reviews have been added, 10 have been updated and 14 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by: Adrian Bunk (4), Chris Lamb (1), Christoph Biedl (1), Niko Tyni (1).

Two FTBFS issues of LEDE (exposed in our setup) were found and fixed:

  • build: ensure that flock is available for make download (Felix Fietkau)
  • include/toplevel: set env GIT_ASKPASS=/bin/true (Alexander 'lynxis' Couzens)

diffoscope development

Chris Lamb:

  • Some code style improvements

Alexander 'lynxis' Couzens made some changes for testing LEDE and OpenWrt:

  • Build tar before downloading everything: on systems without tar --sort=name we need to compile tar before downloading everything.
  • Set CONFIG_AUTOREMOVE to reduce required space.
  • Create a workaround for signing keys: LEDE signs the release with a signing key, but generates the signing key if it's not present. To have a reproducible release we need to take care of signing keys.
  • openwrt_get_banner(): use staging_dir instead of build_dir because the former is persistent among the two builds.
  • Don't build all packages, to improve development speed for now.
  • Only build one board instead of all boards. Reducing the build time improves developing speed. Once the image is reproducible we will enable more boards.
  • Disable node_cleanup_tmpdirs.

Hans-Christoph Steiner, for testing F-Droid:

  • Do full git reset/clean like Jenkins does.
  • Hard code WORKSPACE dir names, as WORKSPACE cannot be generated from $0 as it's a temporary name.

Daniel Shahaf, for testing Debian:

  • Remote scheduler: English fix to error message; allow multiple architectures in one invocation.
  • Refactor: break out a helper function; rename variable to disambiguate with scheduling_args.message.
  • Include timestamps in logs; set timestamps to second resolution (was millisecond by default).

Holger 'h01ger' Levsen, for testing Debian:

  • Improvements to the breakages page: list broken packages and diffoscope problems first, and t.r-b.o problems last; reword, drop 'caused by'.
  • Add niceness to our list of variations, running with niceness of 11 for the first build and niceness of [...]

Jonathan Wiltshire: What to expect on Debian release day

Tue, 13 Jun 2017 18:29:28 +0000

Nearly two years ago I wrote about what to expect on Jessie release day. Shockingly enough, the process for Stretch to be released should be almost identical.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2017

Tue, 13 Jun 2017 07:19:06 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 182 work hours have been dispatched among 11 paid contributors. Their reports are available:

  • Ben Hutchings did 13 hours (out of 15h allocated + 3 extra hours, thus keeping 5 extra hours for June).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 23 hours (out of 24 hours allocated + 2 hours remaining, thus keeping 3 hours for June).
  • Guido Günther did 8 hours.
  • Hugo Lefeuvre did 15 hours.
  • Jonas Meurer gave back his remaining hours from last month.
  • Markus Koschany did 27.25 hours.
  • Ola Lundqvist did 12 hours (out of 6h allocated + 6 remaining hours).
  • Raphaël Hertzog did 12 hours.
  • Roberto C. Sanchez did 22 hours (out of 20 hours allocated + 4.5 hours remaining, thus keeping 2.5 extra hours for June).
  • Thorsten Alteholz did 27.25 hours.

Evolution of the situation

The number of sponsored hours did not change and we are thus still a little behind our objective. The security tracker currently lists 44 packages with a known CVE and the dla-needed.txt file 42. The number of open issues is close to last month's.

Thanks to our sponsors

New sponsors are in bold (none this month unfortunately).

Platinum sponsors: TOSHIBA (for 20 months), GitHub (for 11 months).

Gold sponsors: The Positive Internet (for 36 months), Blablacar (for 35 months), Linode (for 25 months), Babiel GmbH (for 14 months), Plat’Home (for 14 months).

Silver sponsors: Domeneshop AS (for 35 months), Université Lille 3 (for 35 months), Trollweb Solutions (for 33 months), Nantes Métropole (for 29 months), Dalenys (for 26 months), Univention GmbH (for 21 months), Université Jean Monnet de St Etienne (for 21 months), Sonus Networks (for 15 months), UR Communications BV (for 9 months), maxcluster GmbH (for 9 months), Exonet B.V. (for 5 months).

Bronze sponsors: David Ayers – IntarS Austria (for 36 months), Evolix (for 36 months), Offensive Security (for 36 months), a.s. (for 36 months), Freeside Internet Service (for 35 months), MyTux (for 35 months), Linuxhotel GmbH (for 33 months), Intevation GmbH (for 32 months), Daevel SARL (for 31 months), Bitfolk LTD (for 30 months), Megaspace Internet Services GmbH (for 30 months), Greenbone Networks GmbH (for 29 months), NUMLOG (for 29 months), WinGo AG (for 28 months), Ecole Centrale de Nantes – LHEEA (for 25 months), Sig-I/O (for 22 months), Entr’ouvert (for 20 months), Adfinis SyGroup AG (for 17 months), Laboratoire LEGI – UMR 5519 / CNRS (for 12 months), Quarantainenet BV (for 12 months), GNI MEDIA (for 11 months[...]

Gunnar Wolf: Reporting progress on the translation infrastructure

Tue, 13 Jun 2017 04:28:54 +0000

Some days ago, I blogged asking for pointers to get started with the translation of Made with Creative Commons. Thank you all for your pointers and ideas — to the people that answered via private mail, via IRC, via comments on the blog. We have made quite a bit of progress so far; I want to test some things before actually sending a call for help. What do we have?

Git repository set up

I had already set up a repository at GitLab; right now, the contents are far from useful, they merely document what I have done so far. I have started talking with my Costa Rican friend Leo Arias, who is also interested in putting some muscle behind this translation, and we are both the admins to this project.

Talked with the authors

Sarah is quite enthusiastic about us making this! I asked her to hold a little bit before officially announcing there is work ongoing... I want to get bits of infrastructure ironed out first. Important — Talking with her, she discussed the tools they used for authoring the book. It made me less of a purist :) Instead of starting from something "pristine", our master source will be the PDF export of the Google Docs document.

Markdown conversion

Given that translation tools work over bits of plain text, we want to work with the "plainest" rendition of the document, which is Markdown. I found that Pandoc does a very good approximation to what we need (that is, it introduces very few "ugly" markup elements). Converting the ODT into Markdown is as easy as:

$ pandoc -f odt MadewithCreativeCommonsmostup-to-dateversion.odt -t markdown >

Of course, I want to fine-tune this as much as possible.

Producing a translatable .po file

I have used Gettext to translate user interfaces; it is a tool very well crafted for that task. Translating a book is quite different: How and where does it break and join? How are paragraphs "strung" together into chapters, parts, a book? That's a task for PO 4 Anything (po4a). As simple as this:

po4a-gettextize -f text -m -p MadewithCreativeCommonsmostup-to-dateversion.po -M utf-8

I tested the resulting file with my good ol' trusty poedit, and it works... Very nicely!

What is left to do?

I made an account and asked for hosting at Weblate. I have not discussed this with Leo, so I hope he will agree ;-) Weblate is a Web-based infrastructure for collaborative text translation, provided by Debian's Michal Čihař. It integrates nicely with version control systems, preserves credit for each[...]
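The one-off po4a-gettextize call above can also be driven from a po4a configuration file, which keeps the POT file, the per-language PO files and the translated output in sync on every run. A minimal sketch — the file names, language codes and paths here are placeholders, not the project's actual layout:

```
# po4a.cfg -- illustrative only; adjust names to the real repository layout
[po4a_langs] es fr
[po4a_paths] po/book.pot $lang:po/book.$lang.po
[type: text] book.md $lang:book.$lang.md
```

With this in place, a single run of "po4a po4a.cfg" regenerates the POT file, updates each PO file, and writes out the translated Markdown once a translation passes po4a's completeness threshold.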

Dirk Eddelbuettel: RcppMsgPack 0.1.1

Tue, 13 Jun 2017 02:24:00 +0000


A new package! Or at least new on CRAN, as the very initial version 0.1.0 had been available via the ghrr drat for over a year. But now we have version 0.1.1 to announce as a CRAN package.

RcppMsgPack provides R with MessagePack header files for use via C++ (or C, if you must) packages such as RcppRedis.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves.
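To make those size claims concrete, here is a minimal pure-Python sketch of just two cases from the MessagePack format specification — positive fixint and fixstr. It is only an illustration; real code would of course use the MessagePack headers this package ships rather than hand-rolling the format:

```python
def pack_small(value):
    """Sketch encoder for two MessagePack cases: positive fixint and short str."""
    if isinstance(value, bool):
        # bool is a subclass of int in Python, but MessagePack encodes it
        # differently (0xc2/0xc3), so it is excluded from this sketch.
        raise ValueError("not covered by this sketch")
    if isinstance(value, int) and 0 <= value <= 0x7F:
        # Positive fixint: the integer itself is the entire encoding -- one byte.
        return bytes([value])
    if isinstance(value, str):
        data = value.encode("utf-8")
        if len(data) < 32:
            # fixstr: one type byte (0xa0 | length) followed by the raw bytes.
            return bytes([0xA0 | len(data)]) + data
    raise ValueError("value out of range for this sketch")

print(pack_small(42))    # b'*'       -- a single byte for a small integer
print(pack_small("hi"))  # b'\xa2hi'  -- one byte of overhead beyond the string
```

Exactly as the blurb says: a small integer costs one byte, and a short string costs its own bytes plus one.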

MessagePack is used by Redis and many other projects, and has bindings to just about any language.

To use this package, simply add it to the LinkingTo: field in the DESCRIPTION file of your R package---and the R package infrastructure tools will then know how to set include flags correctly on all architectures supported by R.
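As a concrete sketch (the package name here is made up for illustration), the relevant part of such a DESCRIPTION file could look like:

```
Package: myMsgPackClient
Imports: Rcpp
LinkingTo: Rcpp, RcppMsgPack
```

Despite the field's name, no link-time dependency is involved for a header-only package like this one: LinkingTo merely adds the listed packages' include directories to the compiler flags.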

More information may be found on the RcppMsgPack page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sven Hoexter: UEFI PXE preseeded Debian installation on HPE DL120

Mon, 12 Jun 2017 16:35:52 +0000

We bought a bunch of very cheap low-end HPE DL120 servers. Enough to warrant a completely automated installation setup. Shouldn't be that much of a deal, right? Get dnsmasq up and running, feed it a preseed.cfg and be done with it. In practice it took us more hours than we expected.

Setting up the hardware

Our hosts are equipped with an additional 10G dual-port NIC and we'd like to use this NIC for PXE booting. That's possible, but it requires you to switch to UEFI boot. Actually it enables you to boot from any available NIC.

Setting up dnsmasq

We decided to just use the packaged debian-installer from jessie and do some ugly things like overwriting files in /usr/lib via ansible later on. So first of all install debian-installer-8-netboot-amd64 and dnsmasq, then add our additional config for dnsmasq; ours looks like this:

domain=int.foobar.example
dhcp-range=,,,1h
dhcp-boot=bootnetx64.efi
pxe-service=X86-64_EFI, "Boot UEFI PXE-64", bootnetx64.efi
enable-tftp
tftp-root=/usr/lib/debian-installer/images/8/amd64/text
dhcp-option=3,
dhcp-host=00:c0:ff:ee:00:01,,foobar-01

Now you have to link /usr/lib/debian-installer/images/8/amd64/text/bootnetx64.efi to /usr/lib/debian-installer/images/8/amd64/text/debian-installer/amd64/bootnetx64.efi. That got us off the ground and we had a working UEFI PXE boot that got us into debian-installer.

Feeding d-i the preseed file

Next we added some grub.cfg settings and parameterized some basic stuff to be handed over to d-i via the kernel command line. You'll find the correct grub.cfg in /usr/lib/debian-installer/images/8/amd64/text/debian-installer/amd64/grub/grub.cfg. We added the following two lines to automate the start of the installer:

set default="0"
set timeout=5

and our kernel command line looks like this:

linux /debian-installer/amd64/linux vga=788 --- auto=true interface=eth1 netcfg/dhcp_timeout=60 netcfg/choose_interface=eth1 priority=critical preseed/url=tftp:// quiet

Important points: the tftp host IP is our dnsmasq host. Within d-i we see the NIC we booted from as eth1; eth0 is the shared on-board iLO interface. That differs e.g. within grml, where it's eth2.

preseed.cfg, GPT and ESP

One of the most painful points was the fight to find out the correct preseed values to install with GPT, create an ESP (EFI system partition) and use LVM for /. Relevant settings are:

# auto method must be lvm
d-i partman-auto/method str[...]

Petter Reinholdtsen: Updated sales number for my Free Culture paper editions

Mon, 12 Jun 2017 09:40:00 +0000

It is pleasing to see that the work we put down in publishing new editions of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig, is still being appreciated. I had a look at the latest sales numbers for the paper editions today. Not too impressive, but happy to see some buyers still exist. All the revenue from the books is sent to the Creative Commons Corporation, and they receive the largest cut if you buy directly from Lulu. Most books are sold via Amazon, with Ingram second and only a small fraction directly from Lulu. The ebook edition is available for free from GitHub.

Title / language         2016 jan-jun   2016 jul-dec   2017 jan-may
Culture Libre / French         3              6             15
Fri kultur / Norwegian         7              1              0
Free Culture / English        14             27             16
Total                         24             34             31

A bit sad to see the low sales numbers for the Norwegian edition, and a bit surprising that the English edition is still selling so well.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.