Planet Debian


Michal Čihař: Making Weblate more secure and robust

Fri, 21 Jul 2017 10:00:21 +0000


Running a web application publicly always brings challenges in terms of security and, more generally, in handling untrusted data. Security-wise Weblate has always been quite good (mostly thanks to using Django, which comes with built-in protection against many vulnerabilities), but there were always things to improve in input validation or possible information leaks.

When Weblate joined HackerOne (see our first month's experience with it), I was hoping to get some security-driven code review, but apparently most people there are focused on black-box testing. I can certainly understand that: it's easier to conduct, and you need much less knowledge of the tested website to perform it.

One big area where reports against Weblate came in was authentication. Originally we relied almost entirely on the default authentication pipeline coming with Python Social Auth, but that turned out to have some possible security implications, and we ended up with a heavily customized authentication pipeline to avoid several risks. Some patches were submitted back, some issues reported, but we've still diverged quite a lot in this area.

The second area where scanning was apparently performed, but almost no reports came in, was input validation. Thanks to the excellent XSS protection in Django, nothing was really found. On the other hand, this triggered several internal server errors on our side. At this point I was really happy to have Rollbar configured to track all errors happening in production. With all such errors properly recorded and grouped, it was really easy to go through them and fix them in our codebase.

Most of the related fixes have landed in Weblate 2.14 and 2.15, but obviously this is an ongoing effort to make Weblate better with every release.

Filed under: Debian English SUSE Weblate

Gunnar Wolf: Hey, everybody, come share the joy of work!

Thu, 20 Jul 2017 05:17:54 +0000


I got several interesting and useful replies, both via the blog and by personal email, to my two previous posts where I mentioned I would be starting a translation of the Made With Creative Commons book. It is my pleasure to say: Welcome everybody, come and share the joy of work!

Some weeks ago, our project was accepted as part of Hosted Weblate, lowering the bar for any interested potential contributor. So, whoever wants to be a part of this: You just have to log in to Weblate (or create an account if needed), and start working!

What is our current status? Amazingly better than anything I had expected: not only have we made great progress in Spanish, reaching >28% of translated source strings, but other people have also started translating into Norwegian Bokmål (hi Petter!) and Dutch (hats off to Heimen Stoffels!). So far Spanish (where Leo Arias and myself are working) is the most active, but anything can happen.

I still want to work a bit on the initial, pre-po4a text filtering, as there are a small number of issues to fix. But they are few and easy to spot, and your translations will not be hampered much while I sort out the missing pieces.

So, go ahead and get to work! :-D Oh, and if you translate sizeable amounts of work into Spanish: As my university wants to publish (in paper) the resulting works, we would be most grateful if you can fill in the (needless! But still, they ask me to do this...) authorization for your work to be a part of a printed book.

Norbert Preining: The poison of Academia.edu

Thu, 20 Jul 2017 01:28:32 +0000


All those working in academics or research have surely heard about Academia.edu. It started out as a service for academics; in their own words: "Academia.edu is a platform for academics to share research papers. The company's mission is to accelerate the world's research."

But as with most of these platforms, they need to make money, and for some months now Academia.edu has been pressing users to pay for a premium account at the incredible rate of 8.25 USD per month.


This is about the same as you pay for Netflix or some other streaming service. If you remain on the free side, what remains for you to do is SNS-like stuff, plus uploading your papers so that Academia.edu can make money from them.

What really surprises me is that they can pull this off on a .edu domain. The registry requirements state:

For Institutions Within the United States. To obtain an Internet name in the .edu domain, your institution must be a postsecondary institution that is institutionally accredited by an agency on the U.S. Department of Education’s list of Nationally Recognized Accrediting Agencies (see recognized accrediting bodies).
Educause web site

Seeing what they are doing, I think it is high time to request removal of the academia.edu domain name.

So let us see what they are offering for their paid service:

  • Reader “The Readers feature tells you who is reading, downloading, and bookmarking your papers.”
  • Mentions “Get notified when you’re cited or mentioned, including in papers, books, drafts, theses, and syllabuses that Google Scholar can’t find.”
  • Advanced Search “Search the full text and citations of over 18 million papers”
  • Analytics “Learn more about who visits your profile”
  • Homepage – automatically generated home page from the data you enter into the system

On the other hand, the free service consists of SNS elements where you can follow other researchers and see when they upload a paper or enter an event, and that is more or less it. They have lured a considerable number of academics into this service, gathered lots of papers, and now they are showing their real face: money.

In contrast to LinkedIn, which also offers a paid tier but keeps the free tier reasonably usable, Academia.edu has broken its promise to "accelerate the world's research" and, even worse, it is NOT a "platform for academics to share research papers". They are collecting papers and selling access to them, just like the publisher paywalls.

I consider this kind of service highly poisonous for the academic environment and researchers.

Benjamin Mako Hill: Testing Our Theories About “Eternal September”

Thu, 20 Jul 2017 00:12:16 +0000

Graph of subscribers and moderators over time in /r/NoSleep. The image is taken from our 2016 CHI paper.

Last year at CHI 2016, my research group published a qualitative study examining the effects of a large influx of newcomers to the /r/nosleep online community in Reddit. Our study began with the observation that most research on sustained waves of newcomers focuses on their destructive effect and frequently invokes Usenet's infamous "Eternal September." Our qualitative study argued that the /r/nosleep community managed its surge of newcomers gracefully through strategic preparation by moderators, technological systems to rein in norm violations, and a shared sense among participants of protecting the community's immersive environment.

We are thrilled that, less than a year after the publication of our study, Zhiyuan "Jerry" Lin and a group of researchers at Stanford have published a quantitative test of our study's findings! Lin analyzed 45 million comments and upvote patterns from 10 Reddit communities that experienced a massive inundation of newcomers like the one we studied on /r/nosleep. Lin's group found that these communities retained their quality despite a slight dip during the initial growth period.

Our team discussed doing a quantitative study like Lin's at some length, and our paper ends with a lament that our findings were merely "propositions for testing in future work." Lin's study provides exactly such a test! Lin et al.'s results suggest that our qualitative findings generalize and that a sustained influx of newcomers need not doom a community to a descent into an "Eternal September." Through strong moderation and the use of a voting system, the subreddits analyzed by Lin appear to retain their identities despite the surge of new users. There are always limits to research projects, quantitative and qualitative alike.

We think Lin's paper complements ours beautifully, we are excited that Lin built on our work, and we're thrilled that our propositions seem to have held up! This blog post was written with Charlie Kiene. Our paper about /r/nosleep, written with Charlie Kiene and Andrés Monroy-Hernández, was published in the Proceedings of CHI 2016 and is released as open access. Lin's paper was published in the Proceedings of ICWSM 2017 and is also available online. [...]

Dirk Eddelbuettel: RcppAPT 0.0.4

Wed, 19 Jul 2017 12:12:00 +0000


A new version of RcppAPT -- our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their cache powering Debian, Ubuntu and the like -- arrived on CRAN yesterday.

We added a few more functions in order to compute on the package graph. A concrete example is shown in this vignette which determines the (minimal) set of remaining Debian packages requiring a rebuild under R 3.4.* to update their .C() and .Fortran() registration code. It has been used for the binNMU request #868558.
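Outside R, the same APT cache that RcppAPT binds to can be inspected with the standard apt-cache tool; the package name below is only an example:

```shell
# Forward dependencies of a package (analogous to getDepends):
apt-cache depends r-base-core

# Reverse dependencies (analogous to reverseDepends):
apt-cache rdepends r-base-core
```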

As we also added a NEWS file, its (complete) content covering all releases follows below.

Changes in version 0.0.4 (2017-07-16)

  • New function getDepends

  • New function reverseDepends

  • Added package registration code

  • Added usage examples in scripts directory

  • Added vignette, also in docs as rendered copy

Changes in version 0.0.3 (2016-12-07)

  • Added dumpPackages, showSrc

Changes in version 0.0.2 (2016-04-04)

  • Added reverseDepends, dumpPackages, showSrc

Changes in version 0.0.1 (2015-02-20)

  • Initial version with getPackages and hasPackages

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Lars Wirzenius: Dropping Yakking from Planet Debian

Wed, 19 Jul 2017 05:54:13 +0000


A couple of people objected to having Yakking on Planet Debian, so I've removed it.

Daniel Silverstone: Yay, finished my degree at last

Tue, 18 Jul 2017 21:56:15 +0000

A little while back, in June, I sat my last exam for what I hoped would be the last module in my degree. For seven years I've been working on a degree with the Open University, taking advantage of the opportunity to have a somewhat self-directed course load via the 'Open' degree track. When asked why I bothered to do this, my answer has been a little varied. In principle it's because I felt I'd already done a year's worth of degree and didn't want it wasted, but it's also because, in the dim and distant past, I have been overlooked for jobs simply because I had no degree and thus was an easy "bin the CV". Fed up with this, I decided to commit to the Open University, and thus began my journey toward 'qualification' in 2010.

I started by transferring the level 1 credits from my stint at UCL back in 1998/1999, which were in a combination of basic programming in Java, some mathematics including things like RSA, and some psychology and AI courses which at the time were aimed at a degree called 'Computer Science with Cognitive Sciences'. Then I took level 2 courses: M263 (Building blocks of software), TA212 (The technology of music) and MS221 (Exploring mathematics). I really enjoyed the mathematics course, and so at level 3 I took MT365 (Graphs, networks and design), M362 (Developing concurrent distributed systems), TM351 (Data management and analysis - which I ended up hating), finally finishing this June with TM355 (Communications technology).

I received an email this evening telling me the module result for TM355 had been posted, and I logged in to find I had done well enough to be offered my degree. I could have claimed my degree 18+ months ago, but I persevered through another two courses in order to qualify for an honours degree, which I have now been awarded. Since I don't particularly fancy any ceremonial awarding, I just went through the clicky clicky and accepted my qualification of 'Bachelor of Science (Honours) Open, Upper Second-class Honours (2.1)', which grants me the letters 'BSc (Hons) Open (Open)' which, knowing me, will likely never even make it onto my CV because I'm too lazy.

It has been a significant effort, over the course of the past few years, to complete a degree without giving up too much of my personal commitments. In addition to earning the degree, I have worked, for six of the seven years it has taken, for Codethink doing interesting work in and around Linux systems and Trustable software. I have designed and built Git server software which is in use in some universities and many companies, along with a good few of my F/LOSS colleagues. And I've still managed to find time to attend plays, watch films, read an average of 2 novel-length stories a week (some of which were even real books), and be a member of the Manchester Hackspace. Right now, I'm looking forward to a stress-free couple of weeks, followed by an immense amount of fun at DebConf17 in Montréal! [...]

Foteini Tsiami: Internationalization, part three

Tue, 18 Jul 2017 10:18:13 +0000

The first builds of the LTSP Manager were uploaded and ready for testing. Testing involves installing or purging the ltsp-manager package, along with its dependencies, and using its GUI to configure LTSP, create users, groups, shared folders etc. Obviously, those tasks are better done on a clean system. And the question that emerges is: how can we start from a clean state, without having to reinstall the operating system each time?

My mentors pointed me to an answer for that: VirtualBox snapshots. VirtualBox is a virtualization application (others are KVM or VMware) that allows users to install an operating system like Debian in a contained environment inside their host operating system. It comes with an easy to use GUI, and supports snapshots, which are points in time where we mark the guest operating system state, and can revert to that state later on.

So I started by installing Debian Stretch with the MATE desktop environment in VirtualBox, and I took a snapshot immediately after the installation. Now whenever I want to test LTSP Manager, I revert to that snapshot, and that way I have a clean system where I can properly check the installation procedure and all of its features!
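The same snapshot workflow can be driven from the command line with VBoxManage; a minimal sketch, where the VM name "ltsp-test" and the snapshot name are hypothetical:

```shell
# Take a snapshot of the freshly installed guest.
VBoxManage snapshot "ltsp-test" take "clean-install" \
    --description "Debian Stretch + MATE, right after installation"

# ... install and test ltsp-manager inside the guest ...

# Power the VM off, then roll back to the clean state for the next run.
VBoxManage controlvm "ltsp-test" poweroff
VBoxManage snapshot "ltsp-test" restore "clean-install"
```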


Reproducible builds folks: Reproducible Builds: week 116 in Stretch cycle

Tue, 18 Jul 2017 07:29:39 +0000

Here's what happened in the Reproducible Builds effort between Sunday July 9 and Saturday July 15 2017:

Packages reviewed and fixed, and bugs filed

  • Adrian Bunk: #867771 filed against python-trollius. #867773 filed against relatorio. #867781 filed against gcin. #867890 filed against apache-directory-jdbm. #867900 filed against yapsy. #867906 filed against wcsaxes.
  • Chris Lamb: #867753 filed against grunt (forwarded upstream). #867848 filed against gconf (forwarded upstream). #868133 filed against grep. #868321 filed against node-marked-man (forwarded upstream).
  • Bernhard M. Wiedemann: chromium, kubernetes, qtscriptgenerator, crash (merged), ghc-rpm-macros (merged), samba, htmldoc, efl, webkitgtk3 (merged), fence-agents, fio (merged).

Reviews of unreproducible packages

13 package reviews have been added, 12 have been updated and 19 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

  • build_path_captured_by_valac
  • timestamps_in_javascript_generated_by_node_grunt_banner

3 issue types have been updated:

  • Add fix for the timestamps_embedded_in_manpages_by_node_marked_man toolchain issue.
  • Add fix for the timestamps_in_javascript_generated_by_node_grunt_banner toolchain issue.
  • timestamps_in_javascript_generated_by_node_grunt_banner is actually in src:grunt.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by: Adrian Bunk (47).

diffoscope development

Version 84 was uploaded to unstable by Mattia Rizzolo. It included contributions already reported from the previous weeks, as well as new ones from Ximin Luo:

  • Attempt to fix fsimage test on Jenkins
  • Use a tempdir rather than ./cache for guestfs cache
  • Add more debugging output to test_fsimage

After the release, development continued in git with contributions from Mattia Rizzolo:

  • RequiredToolNotFound.get_package(): just call the new get_package_provider()
  • Add a get_package_provider() function, returning the package name that best matches the system
  • Move from the deprecated platform.linux_distribution() to the external package distro

strip-nondeterminism development

Versions 0.036-1, 0.037-1 and 0.038-1 were uploaded to unstable by Chris Lamb. They included contributions from:

  • Niels Thykier: Add missing use statements in handler modules. dh_strip_nondeterminism: Assumes tmpdir() exists. File::StripNondeterminism: Apply perltidy. Lazy load most handlers. Lazy load remaining handlers.
  • Chris Lamb: Add missing File::Temp imports in JAR and PNG handlers. This appears to have been exposed by lazily-loading handlers in #867982.

reprotest development

Development continued in git with contributions from:

  • Ximin Luo: presets: use newer flag --no-sign for dpkg-buildpackage. Document --diffoscope-args= --exclude-directory-metadata and use it in presets.
  • Mattia Rizzolo: Bump debhelper compat level to 10. Bump Standards-Version to 4.0.0.

development

  • Chris Lamb: Avoid a race condition between check-and-creation of Buildinfo instances.
  • Mattia Rizzolo: Make database backups quicker to restore by avoiding pg_dump's --column-inserts option. Fixup the deployment scripts after the stretch migration. Fixup Apache redirects that were broken after introducing the buster suite. Fixup diffoscope jobs that were not always installing the highest possible version of diffoscope.
  • Holger Levsen: Add a node health check for a too big jenkins.log.

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists. [...]

Matthew Garrett: Avoiding TPM PCR fragility using Secure Boot

Tue, 18 Jul 2017 06:48:03 +0000

In measured boot, each component of the boot process is "measured" (ie, hashed and that hash recorded) in a register in the Trusted Platform Module (TPM) built into the system. The TPM has several different registers (Platform Configuration Registers, or PCRs) which are typically used for different purposes - for instance, PCR0 contains measurements of various system firmware components, PCR2 contains any option ROMs, PCR4 contains information about the partition table and the bootloader. The allocation of these is defined by the PC Client working group of the Trusted Computing Group. However, once the boot loader takes over, we're outside the spec[1].

One important thing to note here is that the TPM doesn't actually have any ability to directly interfere with the boot process. If you try to boot modified code on a system, the TPM will contain different measurements but boot will still succeed. What the TPM can do is refuse to hand over secrets unless the measurements are correct. This allows for configurations where your disk encryption key can be stored in the TPM and then handed over automatically if the measurements are unaltered. If anybody interferes with your boot process then the measurements will be different, the TPM will refuse to hand over the key, your disk will remain encrypted and whoever's trying to compromise your machine will be sad.

The problem here is that a lot of things can affect the measurements. Upgrading your bootloader or kernel will do so. At that point, if you reboot, your disk fails to unlock and you become unhappy. To get around this, your update system needs to notice that a new component is about to be installed, generate the new expected hashes and re-seal the secret to the TPM using the new hashes. If there are several different points in the update where this can happen, this can quite easily go wrong. And if it goes wrong, you're back to being unhappy.

Is there a way to improve this? Surprisingly, the answer is "yes", and the people to thank are Microsoft. Appendix A of a basically entirely unrelated spec defines a mechanism for storing the UEFI Secure Boot policy and used keys in PCR 7 of the TPM. The idea here is that you trust your OS vendor (since otherwise they could just backdoor your system anyway), so anything signed by your OS vendor is acceptable. If someone tries to boot something signed by a different vendor then PCR 7 will be different. If someone disables secure boot, PCR 7 will be different. If you upgrade your bootloader or kernel, PCR 7 will be the same. This simplifies things significantly.

I've put together a (not well-tested) patchset for Shim that adds support for including Shim's measurements in PCR 7. In conjunction with appropriate firmware, it should then be straightforward to seal secrets to PCR 7 and not worry about things breaking over system updates. This makes tying things like disk encryption keys to the TPM much more reasonable.

However, there's still one pretty major problem, which is that the initramfs (ie, the component responsible for setting up the disk encryption in the first place) isn't signed and isn't included in PCR 7[2]. An attacker can simply modify it to stash any TPM-backed secrets or mount the encrypted filesystem and then drop to a root prompt. This, uh, reduces the utility of the entire exercise.

The simplest solution to this that I've come up with depends on how Linux implements initramfs files. In its simplest form, an initramfs is just a cpio archive. In its slightly more complicated form, it's a compressed cpio archive. And in its peak form of evolution, it's a series of compressed cpio archives concatenated together. As the kernel reads each one in turn, it extracts it over the previous ones. That means that any files in the final archive will overwrite files of the same name in previous archive[...]

Norbert Preining: Calibre and rar support

Tue, 18 Jul 2017 01:33:08 +0000


Thanks to the cooperation with upstream authors and the maintainer Martin Pitt, the Calibre package in Debian is now up-to-date at version 3.4.0, and has adopted a more standard packaging following upstream. In particular, all the desktop files and man pages have been replaced by what is shipped by Calibre. What remains to be done is work on RAR support.

Rar support is necessary when the eBook uses rar compression, which happens quite often with comic books (cbr extension). Calibre 3 has split out rar support into a dynamically loaded module, so what needs to be done is package it. I have prepared a package for the Python library unrardll, which allows Calibre to read rar-compressed ebooks, but it depends on the unrar shared library, which unfortunately is not built in Debian. I have sent a patch to fix this to the maintainer, see bug 720051, but have received no reaction.

Thus, I am publishing updated packages for unrar, also shipping libunrar5, and the unrardll Python package in my calibre repository. After installing python-unrardll, Calibre will happily import metadata from rar-compressed eBooks, as well as display them.

deb calibre main
deb-src calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13


Jonathan McDowell: Just because you can, doesn't mean you should

Mon, 17 Jul 2017 18:41:46 +0000


There was a recent Cryptoparty Belfast event that was aimed at a wider audience than usual; rather than concentrating on how to protect oneself on the internet, the 3 speakers concentrated more on why you might want to. As seems to be the way these days, I was asked to say a few words about the intersection of technology and the law. I think people were most interested in all the gadgets on show at the end, but I hope they got something out of my talk. It was a very high level overview of some of the issues around the Investigatory Powers Act - if you're familiar with it then I'm not adding anything new here, just trying to provide some sort of detail about why it's a bad thing from both a technological and a legal perspective.


Steinar H. Gunderson: Solskogen 2017: Nageru all the things

Mon, 17 Jul 2017 15:45:30 +0000


Solskogen 2017 is over! What a blast that was; I especially enjoyed that so many old-timers came back to visit, it really made the party for me.

This was the first year we were using Nageru for not only the stream but also for the bigscreen mix, and I was very relieved to see the lack of problems; I've had nightmares about crashes with 150+ people watching (plus 200-ish more on stream), but there were no crashes and hardly a dropped frame. The transition to a real mixing solution as well as from HDMI to SDI everywhere gave us a lot of new opportunities, which allowed a number of creative setups, some of them cobbled together on-the-spot:

  • Nageru with two cameras, of which one was through an HDMI-to-SDI converter battery-powered from a 20000 mAh powerbank (and sent through three extended SDI cables in series): Live music compo (with some, er, interesting entries).
  • 1080p60 bigscreen Nageru with two computer inputs (one of them through a scaler) and CasparCG graphics run from an SQL database, sent on to a 720p60 mixer Nageru (SDI pass-through from the bigscreen) with two cameras mixed in: Live graphics compo
  • Bigscreen Nageru switching from 1080p50 to 1080p60 live (and stream between 720p50 and 720p60 correspondingly), running C64 inputs from the Framemeister scaler: combined intro compo
  • And finally, it's Nageru all the way down: A camera run through a long extended SDI cable to a laptop running Nageru, streamed over TCP to a computer running VLC, input over SDI to bigscreen Nageru and sent on to streamer Nageru: Outdoor DJ set/street basket compo (granted, that one didn't run entirely smoothly, and you can occasionally see Windows device popups :-) )

It's been a lot of fun, but also a lot of work. And work will continue for an even better show next year… after some sleep. :-)

Jose M. Calhariz: Crossgrading a complex Desktop and Debian Developer machine running Debian 9

Sun, 16 Jul 2017 16:49:00 +0000

This article is an experiment in progress; please check back, as I am updating it with new information.

I have a very old installation of Debian, possibly since v2, I do not remember, that I have upgraded since then both in software and hardware. Now the hardware is 64-bit and runs a 64-bit kernel, but the userland is still 32-bit. For 99% of tasks this is very good. After running many simulations, I may have found a solution to crossgrade my desktop. I write here the tentative procedure, and I will update it with more ideas on the problems that I may find.

First, you need to install a 64-bit kernel and boot into it. See my previous post on how to do it.
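For reference, the kernel step boils down to something like the following sketch (assuming a Debian 9 system where the amd64 architecture has not yet been enabled):

```shell
# Allow amd64 packages alongside the i386 userland.
dpkg --add-architecture amd64
apt-get update

# Install a 64-bit kernel; the i386 userland keeps running on top of it.
apt-get install linux-image-amd64:amd64

# After rebooting into the new kernel, `uname -m` should report x86_64
# while `dpkg --print-architecture` still says i386.
```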

Second, you need to bootstrap the crossgrade by installing all the libraries as amd64:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
 dpkg --list > original.dpkg
 for pack32 in $(grep i386 original.dpkg | awk '{print $2}' ) ; do
   echo $pack32 ;
   if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then
     apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ;
   fi ;
 done
 cd /var/cache/apt/archives/
 dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
 dpkg --configure --pending
 dpkg -i --skip-same-version dpkg_1.18.24_amd64.deb apt_1.4.6_amd64.deb bash_4.4-5_amd64.deb dash_0.5.8-2.4_amd64.deb mawk_1.3.3-17+b3_amd64.deb *.deb

 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures

But this step does not prevent apt-get install from hitting broken dependencies. So instead of only installing the libs with dpkg -i, I am going to try to install all the packages with dpkg -i:

apt-get clean
apt-get upgrade
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
dpkg --list > original.dpkg
for pack32 in $(grep i386 original.dpkg | awk '{print $2}' ) ; do 
  echo $pack32 ; 
  if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
    apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ; 
  fi ; 
done
cd /var/cache/apt/archives/
dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg --install dpkg_*_amd64.deb apt_*_amd64.deb bash_*_amd64.deb dash_*_amd64.deb mawk_*_amd64.deb *.deb

dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --print-architecture
dpkg --print-foreign-architectures

Fourth, do a full crossgrade:

 if ! apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//) ; then
   apt-get --fix-broken --allow-remove-essential install
   apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//)
 fi
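Finally, it is worth checking what is left over. The filter below is the same idiom the loops above rely on, shown here against a hypothetical sample of dpkg --list output so the pattern is explicit; on the real system you would pipe `dpkg --list` itself into the same awk filter:

```shell
# Hypothetical sample of `dpkg --list` output (package names are examples).
sample='ii  libacl1:i386   2.2.52-3   i386   access control list shared library
ii  bash:amd64     4.4-5      amd64  GNU Bourne Again SHell'

# Print only packages still installed for i386.
echo "$sample" | awk '/^ii/ && $2 ~ /:i386$/ {print $2}'
```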

Vasudev Kamath: Overriding version information from setup.py with pbr

Sun, 16 Jul 2017 15:23:00 +0000

I recently raised a pull request on zfec for converting its Python packaging from a pure setup.py to a pbr-based one. Today I got a review from Brian Warner, and one of the issues mentioned was that python setup.py --version is not giving the same output as the previous version of setup.py. The previous version used versioneer, which extracts the version information needed from VCS tags. Versioneer also provides the flexibility of specifying the type of VCS used, the style of version, the tag prefix (for VCS), etc. pbr also extracts version information from git tags, but it expects the git tag to be of the format refs/tags/x.y.z, while zfec used a zfec- prefix in its tags (for example zfec-1.4.24), and pbr does not process this. End result: I get a version in the format 0.0.devNN, where NN is the number of commits in the repository since its inception.

Brian and I spent a few hours trying to figure out a way to tell pbr that we would like to override the version information it deduces automatically, but there was none other than putting the version string in the PBR_VERSION environment variable. (That documentation was contributed by me to the pbr project 3 years back.) So finally I used versioneer to create a version string and put it in the environment variable PBR_VERSION:

  import os
  import versioneer

  os.environ['PBR_VERSION'] = versioneer.get_version()

  ...

  setup(
      setup_requires=['pbr'],
      pbr=True,
      ext_modules=extensions
  )

And I added the below snippet to setup.cfg, which is how versioneer can be configured with various information, including tag prefixes:

  [versioneer]
  VCS = git
  style = pep440
  versionfile_source = zfec/
  versionfile_build = zfec/
  tag_prefix = zfec-
  parentdir_prefix = zfec-

Though this workaround gets the job done, it does not feel correct to set an environment variable to change the logic of another part of the same program. If you know of a better way, do let me know! I should probably also consider filing a feature request against pbr to provide a way to pass a tag prefix to the version calculation logic. [...]

Lior Kaplan: PDO_IBM: tracking changes publicly

Sun, 16 Jul 2017 13:13:58 +0000

As part of my work at Zend (now a RogueWave company), I maintain the various patch sets. One of those is the changes for PDO_IBM extension for PHP.

After some patch exchanges I decided it would be easier to manage the whole process over a public git repository, and maybe gain some more review / feedback along the way. Info at

Another aspect of this is having the IBMi-specific patches from YIPS (young i professionals) at, which themselves are patches on top of the vanilla releases. Info at

So keeping track of these changes is also easier using git's ability to rebase efficiently: when a new release is done, I can adapt my patches quite easily and make sure the changes can be back- and forward-ported between the vanilla and IBMi versions of the extension.

Filed under: PHP

Joey Hess: Functional Reactive Propellor

Sat, 15 Jul 2017 21:43:21 +0000

I wrote this code, and it made me super happy!

data Variety = Installer | Target
    deriving (Eq)

seed :: UserInput -> Versioned Variety Host
seed userinput ver = host "foo"
    & ver (   (== Installer) --> hostname "installer"
          <|> (== Target)    --> hostname (inputHostname userinput)
          )
    & osDebian Unstable X86_64
    & Apt.stdSourcesList
    & Apt.installed ["linux-image-amd64"]
    & Grub.installed PC
    & XFCE.installed
    & ver (   (== Installer) --> desktopUser defaultUser
          <|> (== Target)    --> desktopUser (inputUsername userinput)
          )
    & ver (   (== Installer) --> autostartInstaller )

This is doing so much in so little space and with so little fuss! It's completely defining two different versions of a Host. One version is the Installer, which in turn installs the Target. The code above provides all the information that propellor needs to convert a copy of the Installer into the Target, which it can do very efficiently. For example, it knows that the default user account should be deleted, and a new user account created based on the user's input of their name.

The germ of this idea comes from a short presentation I made about propellor in Portland several years ago. I was describing RevertableProperty, and Joachim Breitner pointed out that to use it, the user essentially has to keep track of the evolution of their Host in their head. It would be better for propellor to know what past versions looked like, so it can know when a RevertableProperty needs to be reverted.

I didn't see a way to address the objection for years. I was hung up on the problem that propellor's properties can't be compared for equality, because functions can't be compared for equality (generally). And on the problem that it would be hard for propellor to pull old versions of a Host out of git. But then I ran into the situation where I needed these two closely related hosts to be defined in a single file, and it all fell into place.

The basic idea is that propellor first reverts all the revertible properties for other versions. Then it ensures the property for the current version.

Another use for it would be if you wanted to be able to roll back changes to a Host. For example:

foos :: Versioned Int Host
foos ver = host "foo"
    & hostname ""
    & ver (   (== 1) --> Apache.modEnabled "mpm_worker"
          <|> (>= 2) --> Apache.modEnabled "mpm_event"
          )
    & ver (   (>= 3) --> Apt.unattendedUpgrades )

foo :: Host
foo = foos `version` (4 :: Int)

Versioned properties can also be defined:

foobar :: Versioned Int -> RevertableProperty DebianLike DebianLike
foobar ver = ver
    (   (== 1) --> (Apt.installed "foo" <!> Apt.removed "foo")
    <|> (== 2) --> (Apt.installed "bar" <!> Apt.removed "bar")
    )

Notice that I've embedded a small DSL for versioning into the propellor config file syntax. While implementing versioning took all day, that part was super easy; Haskell config files win again!

API documentation for this feature

PS: Not really FRP, probably. But time-varying in a FRP-like way.

Development of this was sponsored by Jake Vosloo on Patreon. [...]

Dirk Eddelbuettel: Rcpp 0.12.12: Rounding some corners

Sat, 15 Jul 2017 17:09:00 +0000

The twelfth update in the 0.12.* series of Rcpp landed on CRAN this morning, following two days of testing at CRAN preceded by five full reverse-depends checks we did (and which are always logged in this GitHub repo). The Debian package has been built and uploaded; Windows and macOS binaries should follow at CRAN as usual. This 0.12.12 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, the 0.12.9 release in January, the 0.12.10 release in March, and the 0.12.11 release in May, making it the sixteenth release at the steady and predictable bi-monthly release frequency. Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1097 packages (and hence 71 more since the last release in May) on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor. This release contains a fairly large number of small and focused pull requests, most of which either correct some corner cases or improve other aspects. JJ tirelessly improved the package registration added in the previous release and following R 3.4.0. Kirill tidied up a number of small issues allowing us to run compilation in even more verbose modes, usually a good thing. Jeroen, Elias Pipping and Yo Gong all contributed as well, and we thank everybody for their contributions. All changes are listed below in some detail. Changes in Rcpp version 0.12.12 (2017-07-13) Changes in Rcpp API: The tinyformat.h header now ends in a newline (#701). Fixed a rare protection error that occurred when fetching stack traces during the construction of an Rcpp exception (Kirill Müller in #706). Compilation is now also possible on Haiku-OS (Yo Gong in #708 addressing #707).
Dimension attributes are explicitly cast to int (Kirill Müller in #715). Unused arguments are no longer declared (Kirill Müller in #716). Visibility of exported functions is now supported via the R macro attribute_visible (Jeroen Ooms in #720). The no_init() constructor accepts R_xlen_t (Kirill Müller in #730). Loop unrolling now uses R_xlen_t (Kirill Müller in #731). Two unused-variable warnings are now avoided (Jeff Pollock in #732). Changes in Rcpp Attributes: Execute tools::package_native_routine_registration_skeleton within the package rather than the current working directory (JJ in #697). The R portion no longer uses dir.exists, so as to not require R 3.2.0 or newer (Elias Pipping in #698). Fix native registration for exports with name attribute (JJ in #703 addressing #702). Automatically register init functions for Rcpp Modules (JJ in #705 addressing #704). Add Shield around parameters in Rcpp::interfaces (JJ in #713 addressing #712). Replace dot (".") with underscore ("_") in package names when generating native routine registrations (JJ in #722 addressing #721). Generate C++ native routines with underscore ("_") prefix to avoid exporting when the standard exportPattern is used in NAMESPACE (JJ in #725 addressing #723). Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. This po[...]

Junichi Uekawa: revisiting libjson-spirit.

Sat, 15 Jul 2017 12:18:51 +0000

revisiting libjson-spirit. I tried compiling a program that uses libjson-spirit and noticed that it is still broken. New programs compiled against the header do not link with the provided static library. Trying to rebuild it fixes that, but it uses compat version 8, which needs to be fixed (trivially). hmm... actually the code doesn't build anymore and there are multiple new upstream versions. ... and then I noticed that it was a stale copy already removed from the Debian repository. What's a good C++ JSON implementation these days?

Vasudev Kamath: debcargo: Replacing subprocess crate with git2 crate

Sat, 15 Jul 2017 10:05:00 +0000

In my previous post I talked about using the subprocess crate to extract the beginning and ending year from a git repository for generating the debian/copyright file. In this post I'm going to talk about how I replaced subprocess with the native git2 crate and achieved the same result in a much cleaner and safer way. git2 is a native Rust crate which provides access to Git repository internals. git2 does not involve any unsafe invocation, as it is built against libgit2-sys, which uses Rust FFI to bind directly to the underlying libgit2 library. Below is the new copyright_fromgit function with the git2 crate:

fn copyright_fromgit(repo_url: &str) -> Result<String> {
    let tempdir = TempDir::new_in(".", "debcargo")?;
    let repo = Repository::clone(repo_url, tempdir.path())?;

    let mut revwalker = repo.revwalk()?;
    revwalker.push_head()?;

    // Get the latest and first commit id. This is a bit ugly
    let latest_id = revwalker.next().unwrap()?;
    let first_id = revwalker.last().unwrap()?; // revwalker is consumed by last

    let first_commit = repo.find_commit(first_id)?;
    let latest_commit = repo.find_commit(latest_id)?;

    let first_year = DateTime::<Utc>::from_utc(
        NaiveDateTime::from_timestamp(first_commit.time().seconds(), 0),
        Utc).year();

    let latest_year = DateTime::<Utc>::from_utc(
        NaiveDateTime::from_timestamp(latest_commit.time().seconds(), 0),
        Utc).year();

    let notice = match first_year.cmp(&latest_year) {
        Ordering::Equal => format!("{}", first_year),
        _ => format!("{}-{},", first_year, latest_year),
    };

    Ok(notice)
}

So here is what I'm doing:

Use git2::Repository::clone to clone the given URL. We thus avoid exec of the git clone command.

Get a revision walker object. git2::RevWalk implements the Iterator trait and allows walking through the git history. This is what we are using to avoid exec of the git log command.

revwalker.push_head() is important because we want to tell the revwalker from where we want to walk the history. In this case we are asking it to walk the history from the repository HEAD. Without this line the next line will not work. (Learned it the hard way :-).)

Then we extract git2::Oid, which is, roughly speaking, similar to a commit hash and can be used to look up a particular commit. We take the latest commit hash using RevWalk::next and the first commit using RevWalk::last; note the order: this is because RevWalk::last consumes the revwalker, so doing it in the reverse order would make the borrow checker unhappy :-). This replaces exec of the head -n1 command.

Look up the git2::Commit objects using git2::Repository::find_commit.

Then convert the git2::Time to chrono::DateTime and take out the years.

After this change I found an obvious error which went unnoticed in the previous version: if there was no repository key in Cargo.toml, the git clone exec did not error out on the missing repo URL, and our shell commands happily extracted the year from the debcargo repository! Since I was testing the code from the debcargo repository, it never failed; when I executed it from a non-git folder git threw an error, but that was from git log and not git clone. With git2 this error was spotted right away, because git2 threw me an error that I gave it an empty URL.

When it comes to performance, I see that debcargo is faster compared to the previous version. This makes sense because previously it was doing 5 fork and exec system calls and now that is avoided. [...]

Chris Lamb: Installation birthday

Fri, 14 Jul 2017 10:08:12 +0000


Fancy receiving congratulations on the anniversary of when you installed your system?

Installing the installation-birthday package on your Debian machines will celebrate each birthday of your system by automatically sending a message to the local system administrator.

The installation date is based on the system's installation time, not on when the package itself was installed.
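As a rough sketch of the idea (an assumed heuristic for illustration only; the package's actual mechanism may differ), one could approximate the system's age from the modification time of a file laid down at install time:

```python
import datetime
import os
import tempfile

def age_in_years(path):
    """Whole years elapsed since the file at `path` was last modified."""
    installed = datetime.date.fromtimestamp(os.path.getmtime(path))
    return (datetime.date.today() - installed).days // 365

# Demonstrate with a freshly created file (age 0); on a real system one
# might point this at a file created during installation instead.
with tempfile.NamedTemporaryFile() as f:
    print(age_in_years(f.name))  # → 0
```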

installation-birthday is available in Debian 9 ("stretch") via the stretch-backports repository, as well as in the testing and unstable distributions:

$ apt install installation-birthday

Enjoy, and patches welcome. :)

Norbert Preining: TeX Live contrib repository (re)new(ed)

Fri, 14 Jul 2017 00:34:25 +0000

It is my pleasure to announce the renewal/rework/restart of the TeX Live contrib repository service. The repository collects packages that cannot enter TeX Live directly (mostly due to license reasons), but are free to distribute. The basic idea is to provide a repository mimicking Debian’s non-free branch. The packages on this site are not distributed inside TeX Live proper for one or another of the following reasons: because it is not free software according to the FSF guidelines; because it is an executable update; because it is not available on CTAN; because it is an intermediate release for testing. In short, anything related to TeX that cannot be on TeX Live but can still legally be distributed over the Internet can have a place on TLContrib. Currently there are 52 packages in the repository, falling roughly into the following categories: nosell fonts: fonts and macros with nosell licenses, e.g., garamond, garamondx, etc. These fonts are mostly those that are also available via getnonfreefonts. nosell packages: packages with nosell licenses, e.g., acmtrans. nonfree support packages: packages that require non-free tools or fonts, e.g., acrotex, lucida-otf, verdana, etc. The full list of packages can be seen here. The ultimate goal is to provide a companion to the core TeX Live (tlnet) distribution in much the same way as Debian‘s non-free tree is a companion to the normal distribution. The goal is not to replace TeX Live: packages that could go into TeX Live itself should stay (or be added) there. TLContrib is simply trying to fill a gap in the current distribution system. Quick Start If you are running the current version of TeX Live, which is 2017 at the moment, the following commands will suffice: tlmgr repository add tlcontrib tlmgr pinning add tlcontrib '*' In future there might be releases for certain years. Verification The packages are signed with my GnuPG RSA key: 0x6CACA448860CDC13.
tlmgr will automatically verify authenticity if you add my key: curl -fsSL | tlmgr key add - After that, tlmgr will tell you if authentication of this repository fails. History Taco Hoekwater started TLContrib in 2010, but it hasn’t seen much activity in recent years. Taco agreed to hand it over to me, and I am currently maintaining the repository. Big thanks to Taco for his long support and cooperation. In contrast to the original tlcontrib page, we don’t offer automatic upload of packages or user registration. If you want to add packages here, see below. Adding packages If you want to see a package included here that cannot enter TeX Live proper, the following ways are possible (from most appreciated to least appreciated): clone the tlcontrib git repo (see below), add the package, and publish the branch where I can pull from it, then send me an email with the URL and an explanation about free distributability/license; send me the package in TDS format, with an explanation about free distributability/license; send me the package as distributed by the author (or on CTAN), with an explanation about free distributability/license; send me a link to the package, with an explanation about free distributability/license. Git repository The packages are kept in a git repository and the tlmgr repo is built from it after changes. The location is Enjoy. [...]

Jose M. Calhariz: Crossgrading a more typical server in Debian 9

Thu, 13 Jul 2017 17:32:00 +0000

First you need to install a 64bits kernel and boot with it. See my previous post on how to do it.

Second you need to do a bootstrap of crossgrading:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures

Third, do a crossgrade of the libraries:

 dpkg --list > original.dpkg
 apt-get --fix-broken --allow-remove-essential install
 for pack32 in $(grep :i386 original.dpkg | awk '{print $2}' ) ; do 
   if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
     apt-get install --yes --allow-remove-essential ${pack32%:i386} ; 
   fi ; 
 done

Fourth, do a full crossgrade:

 if ! apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//) ; then
   apt-get --fix-broken --allow-remove-essential install
   apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//)
 fi

Lars Wirzenius: Adding Yakking to Planet Debian

Thu, 13 Jul 2017 10:07:21 +0000


In a case of blatant self-promotion, I am going to add the Yakking RSS feed to the Planet Debian aggregation. (But really because I think some of the readership of Planet Debian may be interested in the content.)

Yakking is a group blog by a few friends aimed at new free software contributors. From the front page description:

Welcome to Yakking.

This is a blog for topics relevant to someone new to free software development. We assume you are already familiar with computers, and are curious about participating in the production of free software. You don't need to be a programmer: software development requires a wide variety of skills, and you can be a valued core contributor to a project without being a programmer.

If anyone objects, please let me know.

Vincent Fourmond: Screen that hurts my eyes, take 2

Thu, 13 Jul 2017 08:03:30 +0000

Six months ago, I wrote a lengthy post about my new computer hurting my eyes. I haven't made any progress with that, but I accidentally upgraded my work computer from kernel 4.4 to 4.8 and the nvidia drivers from 340.96-4 to 375.66-2. Well, my work computer now hurts my eyes too, so I've switched back to the previous kernel and drivers, and I hope it'll be back to normal.

Any ideas of something specific that changed, either between 4.4 and 4.8 (kernel startup code, default framebuffer modes, etc.?), or between the 340.96 and the 375.66 drivers? In any case, I'll try that specific combination of kernel and drivers at home to see if I can get it to a usable state.

Lucy Wayland: Basic Chilli Template

Thu, 13 Jul 2017 01:59:38 +0000

Amounts are to taste:
[Stage one]
Chopped red onion
Chopped garlic
Chopped fresh ginger
Chopped big red chillies (mild)
Chopped birds eye chillies (red or green, quite hot)
Chopped scotch bonnets (hot)
[Fry onion in some olive oil. When getting translucent, add rest of ingredients. May need to add some more oil. When the garlic is browning, on to stage two.]
[Stage two]
Some tins of chopped tomato
Some tomato puree
Some basil
Some thyme
Bay leaf optional
Some sliced mushroom
Some chopped capsicum pepper
Some kidney beans
Other beans optional (butter beans are nice)
Lentils optional (Pro tip: if adding lentils, especially red lentils, I recommend adding some garam masala as well. Lifts the flavour.)
Veggie mince optional
Pearled barley very optional
Stock (some reclaimed from swilling water around tom tins)
Water to keep topping up with if it gets too sticky or dry
Dash of red wine optional
Worcester sauce optional
Any other flavouring you feel like optional (I quite often add random herbs or spices)
[[Secret ingredient: a spoonful of Marmite]]
[Cook everything up together, but wait until there is enough fluid before you add the dry/sticky ingredients in.]
[Serve with carb of choice. I am currently fond of using Ryvita as a dipper instead of tortilla chips.]
[Also serve with a “cooler” such as natural yogurt, soured cream or something else.]

You want more than one type of chilli in there to broaden the flavour. I use all three, plus occasionally others as well. If you are feeling masochistic you can go hotter than scotch bonnets, but although you may get something of the same heat, I think you lose something in the flavour.

BTW – if you get the chance, of all the tortilla chips, I think blue corn ones are the best. Only seem to find them in health food shops.

There you go. It’s a “Zen” recipe, which is why I couldn’t give you a link. You just do it until it looks right, feels right, tastes right. And with practice you get it better and better.


Jose M. Calhariz: Crossgrading a minimal install of Debian 9

Wed, 12 Jul 2017 22:46:00 +0000

By testing the previous instructions for a full crossgrade I ran into trouble. Here are the results of my tests doing a full crossgrade of a minimal installation of Debian inside a VM.

First you need to install a 64bits kernel and boot with it. See my previous post on how to do it.

Second you need to do a bootstrap of crossgrading:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures
 apt-get --fix-broken --allow-remove-essential install

Third, do a full crossgrade:

 apt-get install --allow-remove-essential $(dpkg --list | grep :i386 | awk '{print $2}' | sed -e s/:i386// )

This procedure seems to be a little fragile, but worked most of the time for me.

Jose M. Calhariz: Crossgrading the kernel in Debian 9

Wed, 12 Jul 2017 20:26:00 +0000

I have a very old installation of 32bits Debian running on new hardware. Until now, running a 64bits kernel was enough to use more than 4GiB of RAM efficiently. The only problems I found were the proprietary drivers from AMD/ATI and NVIDIA, which did not like this mixed environment, and some problems with openafs, easily solved with the help of the openafs package maintainers. Crossgrading Qemu/KVM to 64 bits did not pose a problem, so I have been running 64bits VMs for some time.

But now the nouveau driver does not work with my new display adapter, and I need to run tools from OpsCode that are not available as 32bits. So it is time to do a crossgrade. Having found some problems, I cannot recommend it to inexperienced people. It is time to investigate the issues and file bug reports with Debian where appropriate.

If you run a 32bits Debian installation, you can easily install a 64bits kernel. The procedure is simple and well tested.

dpkg --add-architecture amd64
apt-get update
apt-get install linux-image-amd64:amd64

And reboot to test the new kernel.

You can expect here more articles about crossgrading.

Sven Hoexter: environment variable names with dots and busybox 1.26.0 ash

Wed, 12 Jul 2017 15:38:09 +0000

If you are, for example, using Alpine Linux 3.6 based Docker images, and you've been passing through environment variable names with dots, you might miss them now in your actual environment. It seems that with busybox 1.26.0 the busybox ash got a lot stricter regarding validation of environment variable names, and now you can no longer pass through variable names with dots in them; they just won't be there. If you've been running ash interactively you could not add them in the past, but until now you could do something like this in your Dockerfile


and later on access such a variable.

bash still allows those invalid variable names and is way more tolerant. So to be nice to your devs, and still bump your docker image version, you can add bash and ensure you're starting your application with /bin/bash instead of /bin/sh inside of your container.


Reproducible builds folks: Reproducible Builds: week 115 in Stretch cycle

Wed, 12 Jul 2017 13:22:34 +0000

Here's what happened in the Reproducible Builds effort between Sunday July 2 and Saturday July 8 2017: Reproducible work in other projects Ed Maste pointed to a thread on the LLVM developer mailing list about container iteration being the main source of non-determinism in LLVM, together with discussion on how to solve this. Ignoring build path issues, container iteration order was also the main issue with rustc, which was fixed by using a fixed-order hash map for certain compiler structures. (It was unclear from the thread whether LLVM's builds are truly path-independent or rather that they haven't done comparisons between builds run under different paths.) Bugs filed Adrian Bunk: #867499 filed against tiptop. #867773 filed against relatorio. #867781 filed against gcin. Chris Lamb: #866945 filed against tinymux. #867753 filed against grunt. #867848 filed against gconf. Patches submitted upstream: Bernhard M. Wiedemann: perl-Class-MethodMaker sort hashtable perl-Mouse sort file list thunderbird sort symbol list SOURCE_DATE_EPOCH support: graphviz, merged gnupg, merged in improved variant by upstream janus-gateway, merged libkcapi merged txt2tags fixup on earlier patch, merged Reviews of unreproducible packages 52 package reviews have been added, 62 have been updated and 20 have been removed in this week, adding to our knowledge about identified issues. No issue types were updated or added this week. Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and reported by: Adrian Bunk (143) Andreas Beckmann (1) Dmitry Shachnev (1) Lucas Nussbaum (3) Niko Tyni (3) Scott Kitterman (1) Sean Whitton (1) diffoscope development Development continued in git with contributions from: Ximin Luo: Add a PartialString class to help with lazily-loaded output formats such as html-dir. 
html and html-dir output: add a size-hint to the diff headers and lazy-load buttons add new limit flags and deprecate old ones html-dir output split index pages up if they get too big put css/icon data in separate files to avoid duplication main: warn if loading a diff but also giving diff-calculation flags Test fixes for Python 3.6 and CI environments without imagemagick (#865625). Fix a performance regression (#865660) involving the Wagner-Fischer algorithm for calculating levenshtein distance. With these changes, we are able to generate a dynamically loaded HTML diff for GCC-6 that can be displayed in a normal web browser. For more details see this mailing list post. Misc. This week's edition was written by Ximin Luo, Bernhard M. Wiedemann and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists. [...]

Francois Marier: Toggling Between Pulseaudio Outputs when Docking a Laptop

Wed, 12 Jul 2017 05:07:22 +0000

In addition to selecting the right monitor after docking my ThinkPad, I wanted to set the correct sound output since I have headphones connected to my Ultra Dock. This can be done fairly easily using Pulseaudio. Switching to a different pulseaudio output To find the device name and the output name I need to provide to pacmd, I ran pacmd list-sinks: 2 sink(s) available. ... * index: 1 name: driver: ... ports: analog-output: Analog Output (priority 9900, latency offset 0 usec, available: unknown) properties: analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown) properties: device.icon_name = "audio-speakers" From there, I extracted the soundcard name (alsa_output.pci-0000_00_1b.0.analog-stereo) and the names of the two output ports (analog-output and analog-output-speaker). To switch between the headphones and the speakers, I can therefore run the following commands: pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker Listening for headphone events Then I looked for the ACPI event triggered when my headphones are detected by the laptop after docking. After looking at the output of acpi_listen, I found jack/headphone HEADPHONE plug. Combining this with the above pulseaudio names, I put the following in /etc/acpi/events/thinkpad-dock-headphones: event=jack/headphone HEADPHONE plug action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output" to automatically switch to the headphones when I dock my laptop. Finding out whether or not the laptop is docked While it is possible to hook into the docking and undocking ACPI events and run scripts, there doesn't seem to be an easy way from a shell script to tell whether or not the laptop is docked. In the end, I settled on detecting the presence of USB devices. 
I ran lsusb twice (once docked and once undocked) and then compared the output: lsusb > docked lsusb > undocked colordiff -u docked undocked This gave me a number of differences since I have a bunch of peripherals attached to the dock: --- docked 2017-07-07 19:10:51.875405241 -0700 +++ undocked 2017-07-07 19:11:00.511336071 -0700 @@ -1,15 +1,6 @@ Bus 001 Device 002: ID 8087:8000 Intel Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub -Bus 003 Device 081: ID 0424:5534 Standard Microsystems Corp. Hub -Bus 003 Device 080: ID 17ef:1010 Lenovo Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub -Bus 002 Device 041: ID xxxx:xxxx ... -Bus 002 Device 040: ID xxxx:xxxx ... -Bus 002 Device 039: ID xxxx:xxxx ... -Bus 002 Device 038: ID 17ef:100f Lenovo -Bus 002 Device 037: ID xxxx:xxxx ... -Bus 002 Device 042: ID 0424:2134 Standard Microsystems Corp. Hub -Bus 002 Device 036: ID 17ef:1010 Lenovo Bus 002 Device 002: ID xxxx:xxxx ... Bus 002 Device 004: ID xxxx:xxxx ... Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub I picked 17ef:1010 as it appeared to be some internal bus on the Ultra Dock (none of my USB de[...]
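The comparison above suggests a simple check. Here is a minimal sketch (not from the original post) that decides docked/undocked from captured lsusb output by looking for the dock-internal USB ID the text settles on:

```python
def is_docked(lsusb_output, dock_usb_id="17ef:1010"):
    """Return True if the dock-internal USB ID appears in lsusb output.

    17ef:1010 is the ID picked in the text above; adjust for your dock.
    """
    return any(dock_usb_id in line for line in lsusb_output.splitlines())

# Examples using lines captured from the docked/undocked runs:
docked_sample = "Bus 003 Device 080: ID 17ef:1010 Lenovo"
undocked_sample = "Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub"
print(is_docked(docked_sample))    # → True
print(is_docked(undocked_sample))  # → False
```

In a shell hook the same test would amount to something like lsusb | grep -q 17ef:1010.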

Joey Hess: bonus project

Tue, 11 Jul 2017 20:29:56 +0000


Little bonus project after the solar upgrade was replacing the battery box's rotted roof, down to the cinderblock walls.


Except for a piece of plywood, used all scrap lumber for this project, and also scavenged a great set of hinges from a discarded cabinet. I hope the paint on all sides and an inch of shingle overhang will be enough to protect the plywood.

Bonus bonus project to use up paint. (Argh, now I want to increase the size of the overflowing grape arbor. Once you start on this kind of stuff..)


After finishing all that, it was time to think about this while enjoying this.

(Followed by taking delivery of a dumptruck full of gravel -- 23 tons -- which it turns out was enough for only half of my driveway..)

Andreas Bombe: PDP-8/e Replicated — Overview

Tue, 11 Jul 2017 18:02:07 +0000

This is an overview of the hardware and internals of the PDP-8/e replica I’m building. The front panel board If you know the original or remember the picture from the first post, it is clear that this is a functional replica, not aiming to be as pretty as those of the other projects I mentioned. I have reordered the switches into two rows to make the board more compact (which also means cheaper) without sacrificing usability. There are the two rows of display lights plus the one run light the 8/e provides. The upper row is the address, made up of 12 bits of memory address and 3 bits of extended memory address or field. Below is the 12-bit indicator which can show one data set out of six as selected by the user. All the switches of the original are implemented as more compact buttons. While momentary switches are easily substituted by buttons, all buttons implementing two-position switches toggle on/off with each press, and they have a LED above that shows the current state. The six-position rotary switch is implemented as a button cycling through all indicator displays together with six LEDs which show the active selection. Markings show the meaning of the indicator and switches as on the original, grouped in threes as the predominant numbering system for the PDPs was octal. The upper line shows the meaning for the state indicator, the middle for the status indicator, and bit numbers for the rest. Note that on the PDP-8, and opposite to modern conventions, the most significant bit was numbered 0. I designed it as a pure front panel board without any PDP-8 simulation parts. The buttons and associated lights are controllable via SPI lines with a 3.3 V supply. The address and indicator lights have a separate common anode configuration with all cathodes individually available on a pin header without any resistors in the path, leaving voltage and current regulation up to the simulation board.
This board is actually a few years old, from a similar project where I emulated the PDP-8 in software on a microcontroller, and the flexible design allowed me to reuse it unchanged.

The main board

This is where the magic happens. You can see three big ICs on the board: on the left is the STM32F405 microcontroller (with ARM Cortex-M4 core), the bigger one in the middle is the Altera[2] MAX 10 FPGA, and finally to the right is the SRAM, which is large enough to hold all the main memory of the 32 KW maximum expansion of the PDP-8/e. The two smaller chips to the right of that are just buffers that drive the front panel address LEDs; the small chip at the top left is an RS-232 level shifter. The idea behind this is that the PDP-8 and peripherals that are simple to implement directly, such as GPIO or a serial port, are fully on the FPGA. Other peripherals such as paper and magnetic tape and disks, which are after all not connected to real PDP-8 drives but to disk images on a microSD, are implemented on the microcontroller, interfacing with stub devices in the FPGA. Compared to implementing everything in the FPGA, the STM32F4 has the advantage of useful built-in peripherals such as two host/device capable USB ports. [...]

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2017

Tue, 11 Jul 2017 14:49:00 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, about 161 work hours have been dispatched among 11 paid contributors. Their reports are available:

Antoine Beaupré did 12h (out of 16h allocated, thus keeping 4 extra hours for July).
Ben Hutchings did 20 hours (out of 15h allocated + 5 extra hours).
Chris Lamb did 16 hours.
Emilio Pozuelo Monfort did 11 hours (out of 16 hours allocated + 3 hours remaining, thus keeping 8 hours for July).
Guido Günther did 9 hours.
Hugo Lefeuvre did 5 hours (out of 15h allocated, thus keeping 10 extra hours for July).
Markus Koschany did 16 hours.
Ola Lundqvist did 12 hours (out of 14h allocated, thus keeping 2 extra hours for July).
Raphaël Hertzog did 12 hours.
Roberto C. Sanchez did 6.5 hours (out of 16 hours allocated + 2.5 hours remaining, thus keeping 12 extra hours for July).
Thorsten Alteholz did 16 hours.

Evolution of the situation

The number of sponsored hours increased slightly with one new bronze sponsor, and another silver sponsor is in the process of joining. The security tracker currently lists 49 packages with a known CVE and the dla-needed.txt file 54. The number of open issues is close to last month's.

Thanks to our sponsors

New sponsors are in bold.

Platinum sponsors: TOSHIBA (for 21 months), GitHub (for 11 months)
Gold sponsors: The Positive Internet (for 37 months), Blablacar (for 36 months), Linode (for 26 months), Babiel GmbH (for 15 months), Plat’Home (for 14 months)
Silver sponsors: Domeneshop AS (for 36 months), Université Lille 3 (for 36 months), Trollweb Solutions (for 34 months), Nantes Métropole (for 30 months), Dalenys (for 27 months), Univention GmbH (for 22 months), Université Jean Monnet de St Etienne (for 22 months), Sonus Networks (for 16 months), UR Communications BV (for 10 months), maxcluster GmbH (for 10 months), Exonet B.V. (for 6 months)
Bronze sponsors: David Ayers – IntarS Austria (for 37 months), Evolix (for 37 months), Offensive Security (for 37 months), a.s. (for 37 months), Freeside Internet Service (for 36 months), MyTux (for 36 months), Linuxhotel GmbH (for 34 months), Intevation GmbH (for 33 months), Daevel SARL (for 32 months), Bitfolk LTD (for 31 months), Megaspace Internet Services GmbH (for 31 months), Greenbone Networks GmbH (for 30 months), NUMLOG (for 30 months), WinGo AG (for 29 months), Ecole Centrale de Nantes – LHEEA (for 26 months), Sig-I/O (for 23 months), Entr’ouvert (for 21 months), Adfinis SyGroup AG (for 18 months), Quarantainenet BV (for 13 months), GNI MEDIA (for 12 months), RHX Srl (for 10 months), Bearstech (for 4 months), LiHAS (for 4 months), People Doc

Steve Kemp: bind9 considered harmful

Mon, 10 Jul 2017 21:00:00 +0000


Recently there was another bind9 security update released by the Debian Security Team. I thought that was odd, so I've scanned my mailbox:

  • 11 January 2017
    • DSA-3758 - bind9
  • 26 February 2017
    • DSA-3795-1 - bind9
  • 14 May 2017
    • DSA-3854-1 - bind9
  • 8 July 2017
    • DSA-3904-1 - bind9

So of the 7 months in the year to date, nothing happened in 3 of them, but 4 of them brought bind9 updates. If these trends continue we'll have another 2.5 updates before the end of the year.

I don't run a nameserver. The only reason I have bind-packages on my system is for the dig utility.

Rewriting a compatible version of dig in Perl should be trivial, thanks to the Net::DNS::Resolver module.

These are about the only commands I ever run:

dig -t a +short
dig -t aaaa +short
dig -t a @

I should do that then. Yes.
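For what it's worth, the two `+short` lookups above can already be approximated without any bind9 code at all. A minimal C sketch (a hypothetical stand-in, not the Net::DNS::Resolver approach the post proposes; getaddrinfo only covers A/AAAA and cannot query a specific @server or other record types):

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Print each address for `host`, one per line, like `dig +short`.
   Returns the number of addresses printed (0 on NXDOMAIN etc.). */
static int short_lookup(const char *host, int family) {
    struct addrinfo hints, *res, *rp;
    char buf[INET6_ADDRSTRLEN];
    int count = 0;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = family;          /* AF_INET for A, AF_INET6 for AAAA */
    hints.ai_socktype = SOCK_STREAM;   /* avoid duplicate results per protocol */

    if (getaddrinfo(host, NULL, &hints, &res) != 0)
        return 0;
    for (rp = res; rp != NULL; rp = rp->ai_next) {
        if (getnameinfo(rp->ai_addr, rp->ai_addrlen, buf, sizeof buf,
                        NULL, 0, NI_NUMERICHOST) == 0) {
            puts(buf);
            count++;
        }
    }
    freeaddrinfo(res);
    return count;
}

int main(void) {
    short_lookup("localhost", AF_INET);    /* like: dig -t a +short */
    short_lookup("localhost", AF_INET6);   /* like: dig -t aaaa +short */
    return 0;
}
```

A real dig replacement would still want a proper DNS library (Net::DNS in Perl, as suggested) for arbitrary record types and explicit nameservers.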

Markus Koschany: My Free Software Activities in June 2017

Mon, 10 Jul 2017 20:16:30 +0000

Welcome! Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

This month eventually saw the release of Debian 9 “Stretch” (yeah). Time to move some packages from experimental to unstable and package new upstream versions. New releases in June: minetest, pygame-sdl2, renpy, fifechan, hitori, blockattack, gtkatlantic. Now in unstable: springlobby, freeciv, bzflag, freeorion, megaglest, megaglest-data, torcs. Bug fixes: doomsday (#863536, #858333), gamine (RC #864547), neverball (#852736, #852168), 3dchess (RC #866378, #864623), lincity-ng (#861049), asc (#856073), armagetronad (#861773), adonthell-data (#863202), briquolo (#861045), foobillardplus (#861046), gamazons (#829984). The most interesting bug and best team effort was #866378, which caused 3dchess to consume 100% CPU time. Apparently not many people cared about it or it would have been detected much sooner; nevertheless, it is now fixed even in Jessie and Stretch.

Debian Java + Android

New releases in June: libsmali-java, apktool, libjide-oss-java, jboss-modules, jboss-logging, jboss-xnio, openjpa, objenesis, hawtjni, undertow (RC #864405). Now in unstable: robocode, hsqldb, activemq, libbultitude-clojure, sweethome3d. Bug fixes: jython (RC #864859), later also backported to Jessie and Stretch, electric (#847297), libhtml5parser-java (#849462). I started to package the latest version of pdfsam, a Java application to split, merge, rotate and mix PDF files. This is a major update and I had to package eleven new dependencies just for that, but I think it was worth the time. PDFsam is now a good-looking JavaFX app and I believe it has more features in its basic version than the current alternatives in Debian. I plan to write a separate article about this endeavor soon. An update of libmetadata-extractor-java was also necessary.
Version 2.10.1 is currently available in experimental and, as soon as all dependencies for pdfsam have been approved by the FTP team, we can move it to unstable. I gave the maintainer of gpsprune (#866762) a heads-up because this update will break it.

Debian LTS

This was my sixteenth month as a paid contributor and I have been paid to work 16 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

I triaged mp3splt and putty and marked CVE-2017-5666 and CVE-2017-6542 as no-dsa because the impact was very low.
DLA-975-1. I uploaded the security update for wordpress which I prepared last month, fixing 6 CVE.
DLA-986-1. Issued a security update for zookeeper fixing 1 CVE.
DLA-989-1. Issued a security update for jython fixing 1 CVE.
DLA-996-1. Issued a security update for tomcat7 fixing 1 CVE.
DLA-1002-1. Issued a security update for smb4k fixing 1 CVE.
DLA-1013-1. Issued a security update for graphite2 fixing 8 CVE.
DLA-1020-1. Issued a security update for jetty fixing 1 CVE.
DLA-1021-1. Issued a security update for jetty8 fixing 1 CVE.

Misc

I updated wbar, fixed #829981 and uploaded mediathekview and osmo to unstable. For the Buster r[...]

Jonathan McDowell: Going to DebConf 17

Mon, 10 Jul 2017 17:54:47 +0000



Completely forgot to mention this earlier in the year, but delighted to say that in just under 4 weeks I’ll be attending DebConf 17 in Montréal. Looking forward to seeing a bunch of fine folk there!


2017-08-04 11:40 DUB -> 13:40 KEF WW853
2017-08-04 15:25 KEF -> 17:00 YUL WW251


2017-08-12 19:50 YUL -> 05:00 KEF WW252
2017-08-13 06:20 KEF -> 09:50 DUB WW852

(Image created using GIMP, fonts-dkg-handwriting and the DebConf17 Artwork.)

Kees Cook: security things in Linux v4.12

Mon, 10 Jul 2017 08:24:23 +0000

Previously: v4.11. Here’s a quick summary of some of the interesting security things in last week’s v4.12 release of the Linux kernel:

x86 read-only and fixed-location GDT

With kernel memory base randomization, it was still possible to figure out the per-cpu base address via the “sgdt” instruction, since it would reveal the per-cpu GDT location. To solve this, Thomas Garnier moved the GDT to a fixed location. And to solve the risk of an attacker targeting the GDT directly with a kernel bug, he also made it read-only.

usercopy consolidation

After hardened usercopy landed, Al Viro decided to take a closer look at all the usercopy routines and then consolidated the per-architecture uaccess code into a single implementation. The per-architecture code was functionally very similar, so it made sense to remove the redundancy. In the process, he uncovered a number of unhandled corner cases in various architectures (that got fixed by the consolidation), and made hardened usercopy available on all remaining architectures.

ASLR entropy sysctl on PowerPC

Continuing to expand architecture support for the ASLR entropy sysctl, Michael Ellerman implemented the calculations needed for PowerPC. This lets userspace choose to crank up the entropy used for memory layouts.

LSM structures read-only

James Morris used __ro_after_init to make the LSM structures read-only after boot. This removes them as a desirable target for attackers. Since the hooks are called from all kinds of places in the kernel, this was a favorite method for attackers to use to hijack execution of the kernel. (A similar target used to be the system call table, but that has long since been made read-only.)

KASLR enabled by default on x86

With many distros already enabling KASLR on x86 with CONFIG_RANDOMIZE_BASE and CONFIG_RANDOMIZE_MEMORY, Ingo Molnar felt the feature was mature enough to be enabled by default.

Expand stack canary to 64 bits on 64-bit systems

The stack canary values used by CONFIG_CC_STACKPROTECTOR are most powerful on x86, since the canary is different per task. (Other architectures run with a single canary for all tasks.) While the first canary chosen on x86 (and other architectures) was a full unsigned long, the subsequent canaries chosen per-task for x86 were being truncated to 32 bits. Daniel Micay fixed this, so now x86 (and future architectures that gain per-task canary support) have significantly increased entropy for stack-protector.

Expanded stack/heap gap

Hugh Dickins, with input from many other folks, improved the kernel’s mitigation against having the stack and heap crash into each other. This is a stop-gap measure to help defend against the Stack Clash attacks. Additional hardening needs to come from the compiler to produce “stack probes” when doing large stack expansions. Any Variable Length Arrays on the stack or alloca() usage needs to have machine code generated to touch each page of memory within those areas to let the kernel know that the stack is expanding, but with single-page granularity.

That’s it for now; please let me [...]

Niels Thykier: Approaching the exclusive “sub-minute” build time club

Sun, 09 Jul 2017 18:41:39 +0000

For the first time in at least two years (and probably even longer), debhelper with the 10.6.2 upload broke the 1 minute milestone for build time (by a mere 2 seconds – look for “Build needed 00:00:58, […]”).  Sadly, the result is not deterministic and the 10.6.3 upload needed 1m + 5s to complete on the buildds.

This is not the result of any optimizations I have done in debhelper itself.  Instead, it is the result of “questionable use of developer time” for the sake of meeting an arbitrary milestone. Basically, I made it possible to parallelize more of the debhelper build (10.6.1) and finally made it possible to run the tests in parallel (10.6.2).

In 10.6.2, I also made most of the tests run against all relevant compat levels.  Previously, they would only run against one compat level (either the current one or a hard-coded older version). Testing more than one compat turned out to be fairly simple given a proper test library (I wrote a “Test::DH” module for the occasion).  Below is an example, which is the new test case that I wrote for Debian bug #866570.

$ cat t/dh_install/03-866570-dont-install-from-host.t
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

use File::Basename qw(dirname);
use lib dirname(dirname(__FILE__));
use Test::DH;
use File::Path qw(remove_tree make_path);
use Debian::Debhelper::Dh_Lib qw(!dirname);

plan(tests => 1);

each_compat_subtest {
  my ($compat) = @_;
  # #866570 - leading slashes must *not* pull things from the root FS.
  make_path('bin');
  create_empty_file('bin/grep-i-licious');
  ok(run_dh_tool('dh_install', '/bin/grep*'));
  ok(-e "debian/debhelper/bin/grep-i-licious", "#866570 [${compat}]");
  ok(!-e "debian/debhelper/bin/grep", "#866570 [${compat}]");
  remove_tree('debian/debhelper', 'debian/tmp');
};

I have cheated a bit on the implementation; while the test runs in a temporary directory, the directory is reused between compat levels (accordingly, there is a “clean up” step at the end of the test).
If you want debhelper to maintain this exclusive (and somewhat arbitrary) property (deterministically), you are more than welcome to help me improve the Makefile. I am not sure I can squeeze any more out of it with my (lack of) GNU make skills. Filed under: Debhelper, Debian [...]

Steinar H. Gunderson: Nageru 1.6.1 released

Sun, 09 Jul 2017 10:14:00 +0000

I've released version 1.6.1 of Nageru, my live video mixer. Now that Solskogen is coming up, there's been a lot of activity on the Nageru front, but hopefully everything is actually coming together now. Testing has been good, but we'll see whether it stands up to the battle-hardening of the real world or not. Hopefully I won't be needing any last-minute patches. :-) Besides the previously promised Prometheus metrics (1.6.1 ships with a rather extensive set, as well as an example Grafana dashboard) and frame queue management improvements, a surprising late addition was that of a new transcoder called Kaeru (following the naming style of Nageru itself, from the Japanese verb kaeru (換える), which means roughly to replace or exchange—iKnow! claims it can also mean “convert”, but I haven't seen support for this anywhere else). Normally, when I do streams, I just let Nageru do its thing and send out a single 720p60 stream (occasionally 1080p), usually around 5 Mbit/sec; less than that doesn't really give good enough quality for the high-movement scenarios I'm after. But Solskogen is different in that there's a pretty diverse audience when it comes to networking conditions; even though I have a few mirrors spread around the world (and some JavaScript to automatically pick the fastest one; DNS round-robin is really quite useless here!), not all viewers can sustain such a bitrate. Thus, there's also a 480p variant with around 1 Mbit/sec or so, and it needs to come from somewhere. Traditionally, I've been using VLC for this, but streaming is really a niche thing for VLC. I've been told it will be an increased focus for 4.0 now that 3.0 is getting out the door, but over the last few years, there's been a constant trickle of little issues that have been breaking my transcoding pipeline.
My solution for this was to simply never update VLC, but now that I'm up to stretch, this didn't really work anymore, and I'd been toying around with the idea of making a standalone transcoder for a while. (You'd ask “why not the ffmpeg(1) command-line client?”, but it's a bit too centered around files and not streams; I use it for converting to HLS for iOS devices, but it has a nasty habit of I/O blocking real work, and its HTTP server really isn't meant for production work. I could survive the latter if it supported Metacube and I could feed it into Cubemap, but it doesn't.) It turned out Nageru had already grown most of the pieces I needed; it had video decoding through FFmpeg, x264 encoding with speed control (so that it automatically picks the best preset the machine can sustain at any given time) and muxing, audio encoding, proper threading everywhere, and a usable HTTP server that could output Metacube. All that was required was to add audio decoding to the FFmpeg input, and then replace the GPU-based mixer and GUI with a very simple driver that just connects the decoders to the encoders. (This means it runs fine on a headless server with no GPU, but it also means you'll get F[...]

Urvika Gola: Outreachy Progress on Lumicall

Sat, 08 Jul 2017 09:59:33 +0000

Lumicall 1.13.0 is released! Through Lumicall, you can make encrypted calls and send messages using open standards. It uses the SIP protocol to inter-operate with other apps and corporate telephone systems. During the Outreachy internship period I worked on the following issues:

White labelling – I researched creating a white label version of Lumicall. A few ideas on how the white label build could be used: existing SIP providers can use a white label version of Lumicall to expand their business and launch a SIP client, providing a one stop shop for them! New SIP clients/developers can use the Lumicall white label version to get the underlying machinery for making encrypted phone calls using the SIP protocol, helping them focus on the additional functionality they would like to include. Documentation for implementing white labelling – Link 1 and Link 2

Adding silent mode functionality – Since Lumicall is mainly used to make encrypted calls, there was a need to let users designate quiet times during which the phone will not make an audible ringing tone; and if the user has multiple SIP accounts, the user can set silent mode on just one of them, for example the work account. Documentation for adding the silent mode feature – Link 1 and Link 2

Adding a 9-patch image – Using Lumicall, users can send SIP messages. To improve the UI a little, I added a 9-patch image to the message screen. A 9-patch image is created using 9-patch tools and saved as imagename.9.png. The image will resize itself according to the text length and font size. Documentation for the 9-patch image – Link

You can try the new version of Lumicall here, and learn more about Lumicall in a blog post by Daniel Pocock. Looking forward to your valuable feedback! [...]

Daniel Silverstone: Gitano - Approaching Release - Access Control Changes

Sat, 08 Jul 2017 09:31:26 +0000

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects. In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines

With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:

define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read

And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet another set of defines:

define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read

This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:

allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]

Of course, this is generally neater for simpler rules; if you wanted to add another user then it might make sense to go for:

define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers

The nice thing about this sub-define syntax is that it's basically usable anywhere you'd use the name of a previously defined thing, they're compiled in much the same way, and Richard worked hard to get good error messages out of them just in case.
No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, and so the sub-define approach is much much better. If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:

Upgrade your version of lace to 1.3.
Replace any auto_user_FOO with [user exact FOO] and similarly any auto_group_BAR with [group exact BAR].
You can now upgrade Gitano safely.

No more 'basic' matches

Since Gitano first gained support for ACLs using Lace, we had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support refs ~^[...]

Urvika Gola: Speaking at Open Source Bridge’17

Thu, 06 Jul 2017 15:51:55 +0000


Recently, my co-speaker Pranav Jain and I got a chance to speak at the Open Source Bridge conference, which was held in Portland, Oregon!

Pranav talked about GSoC and I talked about Outreachy; together we talked about the free RTC project Lumicall.
The OSB conference was much more than just a ‘conference’. Beyond the content of the talks, it had meaning. I am referring to the amazing keynote session by Nicole Sanchez on Tech Reform. She explained wonderfully the need of the hour, i.e. that diversity inclusion is not just ‘inclusion’; the focus should be on what comes after the inclusion: growth.

We also met several Debian developers and a Debian mentor for Outreachy (hoping to meet my mentors someday!).

Thanks to OSB, I got to meet Outreachy coordinator Sarah Sharp! It was wonderful meeting an Outreachy person! (image) We talked and exchanged ideas about the programme, and she clicked beautiful pictures of us delivering the talk.

(image) Picture Courtesy – Sarah Sharp
(image) Picture Courtesy – Sarah Sharp

The talk ended with an unexpected and very precious handwritten note by Audrey Eschright.


Thank you Debian for giving us a chance to speak at Open Source Bridge and to meet wonderful people in Open Source. ❤


Joachim Breitner: The Micro Two Body Problem

Thu, 06 Jul 2017 15:27:46 +0000


Inspired by recent PhD comic “Academic Travel” and not-so-recent xkcd comic “Movie Narrative Charts”, I created the following graphics, which visualizes the travels of an academic couple over the course of 10 months (place names anonymized).


Two bodies traveling the world

Holger Levsen:

Thu, 06 Jul 2017 14:54:01 +0000


a media experiment: / G20 not welcome

Our view currently every day and night:

(image) No one is illegal!

(image) No football for fascists!

The FC/MC is a collective initiative to change the perception of the G20 in Hamburg - the summit itself and the protests surrounding it. FC/MC is a media experiment, located in the stadium of the amazing St.Pauli football club. We will operate until this Sunday, providing live coverage (text, photos, audio, video), back stories and much much more. Another world is possible!

Disclaimer: I'm not involved in content generation, I'm just doing computer stuff as usual, but that said, I really like the work of those who are! (image)

Thadeu Lima de Souza Cascardo: News on Debian on apexqtmo

Thu, 06 Jul 2017 04:20:05 +0000

I had been using my Samsung Galaxy S Relay 4G for almost three years when I decided to get a new phone. I would use this new phone for daily tasks and take the chance to get a new model for hacking in the future. My apexqtmo would still be my companion and would now be more available for real hacking.

And so it also happened that its power button got stuck. It was not the first time, but now it would happen every so often, and would require me to disassemble it. So I managed to remove the plastic button and leave it with a hole so I could press the button with a screwdriver or a paperclip. That was the excuse I needed to get it running Debian only. Though it's now always plugged into my laptop, I got the chance to hack on it in my scarce free time.

As I managed to get a kernel I built myself running on it, I started fixing things like enabling devtmpfs. I didn't insist much on running systemd, though, and kept with System V. The Xorg issues were either on the server or the client, depending on which client I ran. I decided to give a chance to running the Android userspace in a chroot, but gave up after some work trying to get some firmware loaded. I managed to get the ALSA controls right after saving them inside a chroot on my CyanogenMod system. Then, restoring them on Debian allowed me to play songs. Unfortunately, it seems I broke the audio jack when disassembling it. Otherwise, it would have been a great portable audio player. I even wrote a small program that would allow me to control mpd by swiping on the touchscreen.

Then, as the Debian release approached, I decided to investigate the framebuffer issue closely. I ended up finding out that it was really a bug in the driver, and after fixing it, the X server and client crashes were gone. It was beautiful to get a desktop environment running with the right colors, get a calculator started and really use the phone as a mobile device. There are two lessons or findings here for me.
The first one is that the current environments are really lacking. Even something like GPE can't work. The buttons are tiny, and scrollbars are still the only way of scrolling, some of the time. No automatic virtual keyboards. So there needs to be some investment in the existing environments, and maybe even the development of new environments for these kinds of devices. This was something I expected somehow, but it's still disappointing to know that we had so much of this developed in the past and now gone. I really miss Maemo. Running something like Qtopia would mean grabbing very old unmaintained software not available in Debian. There is still matchbox, but it's as subpar as the others I tested. The second lesson is that building a userspace to run on old kernels will still hit the problem of broken drivers. In my particular case, unless I wrote code for using Ion instead of the framebuf[...]

Shirish Agarwal: Debian 9 release party at Co-hive

Wed, 05 Jul 2017 18:11:07 +0000

Dear all, this would be a biggish one, so please have a chai/coffee or something stronger as it will take a while. I will start with an attempt at some alcohol humor. While some people know that I have had a series of convulsive epileptic seizures, I had shared bits about it in another post as well. Recovery involves allopathic medicines as well as physiotherapy, which I go to every alternate day. One of the exercises that I do in physiotherapy sessions is walking cross-legged on a line. While doing it today, it occurred to me that this is the same test that a police inspector would do if they caught you drinking or suspected you of drunk driving. While some in the police force now also have breath analyzer machines to determine alcohol content in the breath and body (and ways to deceive them also exist), the above exercise is still an integral part of the examination. Now a few of my friends who do drink have made an expertise of walking on a line, while I, due to this neurological disorder, still have issues walking on a line. So while I don’t expect a drinking party in the near future (6 months at least), if I ever do get caught with a friend who is drunk (by association I would also be a suspect) by a policeman who doesn’t have a breath analyzer machine, I could be in a lot of trouble. In addition, if I tell him I have a neurological disorder I am bound to land up in a cell, as he will think I’m trying to make a fool of him. If you are able to picture the situation, I’m sure you will get a couple of laughs.

Now coming to the release party: I was a bit apprehensive. It’s been quite a while since I had faced an audience, and just coming out of illness I didn’t know how well or ill-prepared I would be for the session. I had given up exercising two days before the event as I wanted to have a loose body, loose limbs all over.
I also took a mild sedative (1mg) the day before just so I would have a good night’s sleep and be able to focus all my energies on the big day. (I don’t recommend sedatives unless a doctor prescribes them.) I did have a doctor’s prescription, so I was able to have a nice sleep. I didn’t do any Debian study, as I hoped my somewhat long experience with both Ubuntu and Debian would help me. On the d-day, I had asked Dhanesh (the organizer of the event) to accompany me from home to the venue and back, as I was unsure of the journey: it was around 9-10 km from my place, and while I had been to the venue a couple of years back, I had only a mild remembrance of the place. Anyway, Dhanesh complied with my request and together we reached the venue before the appointed 1500 hrs. As it was a Sunday I was unsure how many people would turn up, as people usually like to cozy up on a Sunday. Around 1530 hrs everybody showed up. It included couple [...]

Patrick Matthäi: Bug in Stretch Linux vmxnet3 driver

Wed, 05 Jul 2017 16:21:09 +0000


Am I the only one experiencing #864642 (vmxnet3: Reports suspect GRO implementation on vSphere hosts / one VM crashes)? Unfortunately I still have not got any answer to my report that would help me track this issue down.

Help is welcome :)

Steve Kemp: I've never been more proud

Tue, 04 Jul 2017 21:00:00 +0000


This morning I remembered I had a beefy virtual-server setup to run some kernel builds upon (when I was playing with Linux security modules), and I figured before I shut it down I should use the power to run some fuzzing.

As I was writing some code in Emacs at the time I figured "why not fuzz emacs?"

After a few hours this was the result:

 deagol ~ $ perl -e 'print "`" x ( 1024 * 1024  * 12);' > t.el
 deagol ~ $ /usr/bin/emacs --batch --script ./t.el
 Segmentation fault (core dumped)

Yup, evaluating a lisp file caused a segfault, due to a stack overflow (no security implications). I've never been more proud, even though I have contributed code to GNU Emacs in the past.

Reproducible builds folks: Reproducible Builds: week 114 in Stretch cycle

Tue, 04 Jul 2017 12:49:52 +0000

Here's what happened in the Reproducible Builds effort between Sunday June 25 and Saturday July 1 2017:

Upcoming and past events

Our next IRC meeting is scheduled for July 6th at 17:00 UTC (agenda). Topics to be discussed include an update on our next Summit, a potential NMU campaign, a press release for buster, branding, etc.

Toolchain development and fixes

James McCoy reviewed and merged Ximin Luo's script debpatch into the devscripts Git repository. This is useful for rebasing our patches onto new versions of Debian packages.

Packages fixed and bugs filed

Adrian Bunk: #866713 filed against debhelper.
Chris Lamb: #865994 filed against xabacus, #866164 filed against qmidinet, #866169 filed against singularity-container, #866330 filed against cd-hit.
Ximin Luo uploaded dash, sensible-utils and xz-utils to the deferred uploads queue with a delay of 14 days. (We have had patches for these core packages for over a year now and the original maintainers seem inactive, so Debian conventions allow for this.)
Patches submitted upstream: openmpi, gtk-doc (fixed by sorting directory listings), samba, TeX, nedit, criu, tvheadend (sort), tvheadend (date), cfengine.

Reviews of unreproducible packages

4 package reviews have been added, 4 have been updated and 35 have been removed this week, adding to our knowledge about identified issues. One issue type has been updated: Add upstream URL for random_order_of_pdf_ids_generated_by_latex. One issue type has been added: timestamps_in_manpages_added_by_golang_go_flags.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by: Adrian Bunk (68), Daniel Schepler (1), Michael Hudson-Doyle (1), Scott Kitterman (6).

diffoscope development

Daniel Shahaf: Fix markup in the man page synopsis. Thanks to Niels Thykier for the report. (Closes: #866577)
Mattia Rizzolo: Bump backport version check in debian/rules.
Ximin Luo: Fix a progressbar failure. Put the 400MB "fsimage" cache in a more obvious place. Fix CI tests under Python 3.6. Add a --exclude-directory-metadata option (Closes: #866241). Raise warning for getfacl. Remove a redundant try-clause. Fix recursive indentation of headers. Use a loop rather than recursion. In html-dir mode, put css/icon in separate files to avoid duplication. diffcontrol UI tweaks. Split index pages up if they get too big.

Vagrant Cascadian, working on testing Debian: Upgraded the 27 armhf build machines to stretch. Fixed the mtu check to only display status when eth0 is present.
Helmut Grohne worked on testing Debian: Limit diffoscope memory usage to 10GB virtual per process. It currently tends to use 50GB virtual, 36GB resident, which is bad for ever[...]

Raphaël Hertzog: My Free Software Activities in June 2017

Tue, 04 Jul 2017 09:40:48 +0000

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS. I was allocated 12 hours to work on security updates for Debian 7 Wheezy. During this time I did the following:

  • Released DLA-983-1 and DLA-984-1 on tiff3/tiff to fix 4 CVEs. I also updated our patch set to get back in sync with upstream, since we had our own patches for a while and upstream ended up using a slightly different approach. I checked that the upstream fix really did fix the issues with the reproducer files that were available to us.
  • Handled CVE triage for a whole week.
  • Released DLA-1006-1 on libarchive (2 CVEs fixed by Markus Koschany, one by me).

Debian packaging:

  • Django. A last-minute update to Django in stretch before the release: I uploaded python-django 1:1.10.7-2 fixing two bugs (one of which was release critical) and filed the corresponding unblock request.
  • schroot. I tried to prepare another last-minute update, this time for schroot. The goal was to fix the bash completion (#855283) and a problem encountered by the Debian sysadmins (#835104). Those issues are fixed in unstable/testing, but my unblock request got turned into a post-release stretch update because the release managers wanted to give the package some more testing time in unstable. Even now, they are wondering whether they should accept the new systemd service file.
  • live-build, live-config and live-boot. On live-build, I merged a patch to add a keyboard shortcut for the advanced option menu entry (#864386). For live-config, I uploaded version 5.20170623 to fix a broken boot sequence when you have multiple partitions (#827665). For live-boot, I uploaded version 1:20170623 to fix the path to udevadm (#852570) and avoid a file duplication in the initrd (#864385).
  • zim. I packaged a release candidate (0.67~rc2) in Debian Experimental and started to use it. I quickly discovered two annoying regressions that I reported upstream (here and here).
  • logidee-tools. This is a package I authored a long time ago and that I’m no longer actively using. It still works, but I sometimes wonder if it still has real users. Anyway, I wanted to quickly replace the broken dependency on pgf, but I ended up converting the Subversion repository to Git and also adding autopkgtests. At least those tests will inform me when the package no longer works; otherwise I would not notice, since I’m no longer using it.

Bugs filed. I filed #865531 on lintian because the new check testsuite-autopkgtest-missing i[...]

Foteini Tsiami: Internationalization, part two

Tue, 04 Jul 2017 08:35:46 +0000

Now that sch-scripts has been renamed to ltsp-manager and translated to English, it was time to set up a proper project site for it in Launchpad:

The following subpages were created for LTSP Manager there:

  • Code: a review of all the project code, which currently only includes the master git repository.
  • Bugs: a tracker where bugs can be reported. I already filed a few bugs there!
  • Translations: translators will use this to localize LTSP Manager to their languages. It’s not quite ready yet.
  • Answers: a place to ask the LTSP Manager developers for anything regarding the project.

We currently have an initial version of LTSP Manager running in Debian Stretch, although more testing and more bug reports will be needed before we start the localization phase. Attaching a first screenshot!



Arturo Borrero González: Netfilter Workshop 2017: I'm a new coreteam member!

Tue, 04 Jul 2017 08:00:00 +0000


I was invited to attend the Netfilter Workshop 2017 in Faro, Portugal this week, so I’m here with all the folks enjoying some days of talks, discussions and hacking around Netfilter and general Linux networking.

The coreteam of the Netfilter project, whose active members are Pablo Neira Ayuso (head), Jozsef Kadlecsik, Eric Leblond and Florian Westphal, invited me to join them, and the appointment happened today.

You may contact me now at my new email address:

This is the result of my continued contributions to the Netfilter project over several years now (probably since 2012-2013). I’m really happy with this, and I appreciate their recognition. I will do my best in this new position. Thanks!

Regarding the workshop itself, we are having lots of interesting talks and discussions about the state of the Netfilter technology, open issues, missing features and where to go in the future.

Really interesting!

John Goerzen: Time, Frozen

Tue, 04 Jul 2017 03:00:30 +0000

We’re expecting a baby any time now. The last few days have had an odd quality of expectation: any time, our family will grow.

It makes time seem to freeze, to stand still.

We have Jacob, about to start fifth grade and middle school. But here he is, still as sweet and affectionate a kid as ever. He loves to care for cats and seeks them out often. He still keeps an eye out for the stuffed butterfly he’s had since he was an infant, and will sometimes carry it and a favorite blanket around the house. He will also many days prepare the “Yellow House News” on his computer, with headlines about his day and some comics pasted in — before disappearing to play with Legos for a while.

And Oliver, who will walk up to Laura and “give baby a hug” many times throughout the day — and sneak up to me, try to touch my arm, and say “doink” before running off before I can “doink” him back. It was Oliver that had asked for a baby sister for Christmas — before he knew he’d be getting one!

In the past week, we’ve had out the garden hose a couple of times. Both boys will enjoy sending mud down our slide, or getting out the “water slide” to play with, or just playing in mud. The rings of dirt in the bathtub testify to the fun that they had. One evening, I built a fire, we made brats and hot dogs, and then Laura and I sat visiting and watching their water antics for an hour after, laughter and cackles of delight filling the air, and cats resting on our laps.

These moments, or countless others like Oliver’s baseball games, flying the boys to a festival in Winfield, or their cuddles at bedtime, warm the heart. I remember their younger days too, with fond memories of taking them camping or building a computer with them. Sometimes a part of me wants to keep soaking in things just as they are; being a parent means both taking pride in children’s accomplishments as they grow up, and sometimes also missing the quiet little voice that can be immensely excited by a caterpillar.

And yet, all four of us are so excited and eager to welcome a new life into our home. We are ready. I can’t wait to hold the baby, or to lay her to sleep, to see her loving and excited older brothers. We hope for a smooth birth, for mom and baby.

Here is the crib, ready, complete with a mobile featuring a cute bear (and even a plane). I can’t wait until there is a little person here to enjoy it.

Antoine Beaupré: My free software activities, June 2017

Mon, 03 Jul 2017 16:37:46 +0000

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This time I worked on Mercurial, sudo and Puppet.

Mercurial remote code execution

I issued DLA-1005-1 to resolve problems with the hg serve --stdio command that could be abused by "remote authenticated users to launch the Python debugger, and consequently execute arbitrary code, by using --debugger as a repository name" (CVE-2017-9462).

Backporting the patch was a little tricky because, as is often the case in our line of work, the code had changed significantly in newer versions. In particular, the command-line dispatcher had been refactored, which made the patch non-trivial to port. On the other hand, Mercurial has an extensive test suite, which allowed me to make those patches in full confidence. I also backported part of the test suite to detect certain failures better and fixed the output so that it matches the backported code. The test suite is slow, however, which meant slow progress when working on this package.

I also noticed a strange issue with the test suite: all hardlink operations would fail. Somehow it seems my new sbuild setup doesn't support hardlinks. I ended up building a tarball schroot to build those types of packages, as the issue seems related to the use of overlayfs in sbuild. The odd part is that my tests of overlayfs, following those instructions, show that it does support hardlinks, so there may be something fishy here that I misunderstand. This did, however, allow me to get a little more familiar with sbuild and schroots. I also took this opportunity to optimize the builds by installing an apt-cacher-ng proxy to speed them up, which will also be useful for regular system updates.

Puppet remote code execution

I issued DLA-1012-1 to resolve a remote code execution attack against puppetmaster servers from authenticated clients. To quote the advisory: "Versions of Puppet prior to 4.10.1 will deserialize data off the wire (from the agent to the server, in this case) with an attacker-specified format. This could be used to force YAML deserialization in an unsafe manner, which would lead to remote code execution." The fix was non-trivial. Normally, this would have involved fixing the YAML parsing, but that was considered problematic because the Ruby libraries themselves were vulnerable and it wasn't clear we could fix the problem completely by fixing YAML parsing alone. The update I proposed took the bold step of switching all clients to PSON and simply denying YAML parsing on the server. This means all clients need to be updated before the server can be u[...]
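The hardlink failures in the build environment can be probed directly. The snippet below is a generic sketch (not the author's actual test): it checks whether the filesystem under a temporary directory, e.g. an overlayfs mount inside an sbuild schroot, supports hardlinks.

```shell
#!/bin/sh
# Probe hardlink support on the filesystem backing a scratch directory.
# Run inside the schroot/sbuild environment under investigation.
d=$(mktemp -d)
touch "$d/a"
if ln "$d/a" "$d/b" 2>/dev/null; then
    # stat -c %h prints the link count; 2 means the hardlink was created
    echo "hardlinks OK (link count: $(stat -c %h "$d/a"))"
else
    echo "hardlinks NOT supported here"
fi
rm -rf "$d"
```

On a plain ext4 or tmpfs mount this prints the "hardlinks OK" branch; inside a misbehaving overlayfs setup the ln call fails and the other branch fires.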

Ben Hutchings: Debian LTS work, June 2017

Mon, 03 Jul 2017 16:11:18 +0000


I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 5 hours. I worked all 20 hours.

I spent most of my time working - together with other Linux kernel developers - on backporting and testing several versions of the fix for CVE-2017-1000364, part of the "Stack Clash" problem. I uploaded two updates to linux and issued DLA-993-1 and DLA-993-2. Unfortunately the latest version still causes regressions for some applications, which I will be investigating this month.

I also released a stable update on the Linux 3.2 longterm stable branch (3.2.89) and prepared another (3.2.90) which I released today.

Vincent Bernat: Performance progression of IPv4 route lookup on Linux

Mon, 03 Jul 2017 13:25:39 +0000

TL;DR: Each of Linux 2.6.39, 3.6 and 4.0 brings notable performance improvements for the IPv4 route lookup process.

In a previous article, I explained how Linux implements an IPv4 routing table with compressed tries to offer excellent lookup times. The following graph shows the performance progression of Linux through history:

Two scenarios are tested: 500,000 routes extracted from an Internet router (half of them are /24), and 500,000 host routes (/32) tightly packed in 4 distinct subnets. All kernels are compiled with GCC 4.9 (from Debian Jessie). This version is able to compile older kernels as well as current ones. The kernel configuration used is the default one with the CONFIG_SMP and CONFIG_IP_MULTIPLE_TABLES options enabled (however, no IP rules are used). Some other unrelated options are enabled to be able to boot them in a virtual machine and run the benchmark.

The measurements are done in a virtual machine with one vCPU. The host is an Intel Core i5-4670K and the CPU governor was set to “performance”. The benchmark is single-threaded. Implemented as a kernel module, it calls fib_lookup() with various destinations in 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC (and converted to nanoseconds by assuming a constant clock).

The following kernel versions bring a notable performance improvement:

  • In Linux 2.6.39 (commit 3630b7c050d9), David Miller removes the hash-based routing table implementation and switches to the LPC-trie implementation (available since Linux 2.6.13 as a compile-time option). This brings a small regression for the scenario with many host routes but boosts performance for the general case.
  • In Linux 3.0 (commit 281dc5c5ec0f), the improvement is not related to the network subsystem: Linus Torvalds disables the compiler size optimization in the default configuration. It was believed that optimizing for size would help keep the instruction cache efficient, but compilers generated under-performing code on x86 when this option was enabled.
  • In Linux 3.6 (commit f4530fa574df), David Miller adds an optimization to not evaluate IP rules when they are left unconfigured. From this point on, use of the CONFIG_IP_MULTIPLE_TABLES option doesn’t impact performance unless some IP rules are configured. This version also removes the route cache (commit 5e9965c15ba8), but this has no effect on the benchmark as it directly calls fib_lookup(), which doesn’t involve th[...]
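The measurement methodology (time each of many iterations individually, then keep the median to discard scheduler and interrupt noise) can be illustrated in userspace. This is not the author's kernel module: it times an arbitrary cheap command instead of fib_lookup() and reads date +%s%N instead of the TSC, so only the median-taking idea carries over.

```shell
#!/bin/sh
# Time n runs of a cheap operation and print the median duration in ns.
n=101
for i in $(seq "$n"); do
    t0=$(date +%s%N)
    true                    # stand-in for the lookup under test
    t1=$(date +%s%N)
    echo $((t1 - t0))
done | sort -n | sed -n "$(((n + 1) / 2))p"   # middle line of sorted timings
```

With an odd n, the middle line of the sorted timings is exactly the median; the per-iteration fork/exec overhead of date dominates here, which is precisely why the real benchmark reads the TSC from inside a kernel module.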

Bits from Debian: New Debian Developers and Maintainers (May and June 2017)

Sun, 02 Jul 2017 12:30:00 +0000

The following contributors got their Debian Developer accounts in the last two months:

  • Alex Muntada (alexm)
  • Ilias Tsitsimpis (iliastsi)
  • Daniel Lenharo de Souza (lenharo)
  • Shih-Yuan Lee (fourdollars)
  • Roger Shimizu (rosh)

The following contributors were added as Debian Maintainers in the last two months:

  • James Valleroy
  • Ryan Tandy
  • Martin Kepplinger
  • Jean Baptiste Favre
  • Ana Cristina Custura
  • Unit 193


Ritesh Raj Sarraf: apt-offline 1.8.1 released

Sun, 02 Jul 2017 11:38:15 +0000

apt-offline 1.8.1 released. This is a bug fix release fixing some python3 glitches related to module imports. Recommended for all users.

apt-offline (1.8.1) unstable; urgency=medium

  * Switch setuptools to invoke py3
  * No more argparse needed on py3
  * Fix based on comments from pyqt mailing list
  * Bump version number to 1.8.1

 -- Ritesh Raj Sarraf  Sat, 01 Jul 2017 21:39:24 +0545

What is apt-offline

Description: offline APT package manager

apt-offline is an Offline APT Package Manager.

apt-offline can fully update and upgrade an APT based distribution without connecting to the network, all of it transparent to APT.

apt-offline can be used to generate a signature on a machine (with no network). This signature contains all download information required for the APT database system. This signature file can be used on another machine connected to the internet (which need not be a Debian box and can even be running Windows) to download the updates. The downloaded data will contain all updates in a format understood by APT, and this data can be used by apt-offline to update the non-networked machine.

apt-offline can also fetch bug reports and make them available offline.
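The round trip described above is a three-step workflow. The sketch below only echoes the commands (drop the run wrapper to execute them for real); the signature and bundle paths are made up for illustration, and the exact options may vary by version, so check apt-offline --help before relying on them.

```shell
#!/bin/sh
# Sketch of the apt-offline round trip. 'run' echoes instead of executing,
# so this is safe to run anywhere, even without apt-offline installed.
run() { echo "+ $*"; }

SIG=/tmp/apt-offline.sig       # hypothetical signature file
BUNDLE=/tmp/apt-offline.zip    # hypothetical download bundle

# 1. On the disconnected machine: record what APT needs to download.
run apt-offline set "$SIG" --update --upgrade

# 2. On any internet-connected machine: fetch everything into one bundle.
run apt-offline get "$SIG" --bundle "$BUNDLE"

# 3. Back on the disconnected machine: hand the data to APT.
run apt-offline install "$BUNDLE"
```

The signature file carries everything the connected machine needs, which is why step 2 can run on any OS with apt-offline available.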

Junichi Uekawa: PLC network connection seems to be less reliable.

Sun, 02 Jul 2017 01:28:11 +0000

PLC network connection seems to be less reliable. Maybe it's too hot. Reconnecting it seems to make it better. Hmmm…