Planet Debian

Sean Whitton: 'Do you really need to do that?'

Wed, 28 Sep 2016 15:25:09 +0000


A new postdoc student arrived at our department this semester, and after learning that he uses GNU/Linux for all his computing, I invited him along to TFUG. During some of our meetings people asked “how could I do X on my GNU/Linux desktop?” and, jokingly, the postdoc would respond “the answer to your question is ‘do you really need to do that?’” Sometimes the more experienced GNU/Linux users at the table would respond to questions by suggesting that the user should simply give up on doing X, and the postdoc would slap his thigh and laugh and say “see? I told you that’s the answer!”

The phenomenon here is that people who have at some point made a commitment to at least try to use GNU/Linux for all their computing quickly find that they have come to value using GNU/Linux more than they value engaging in certain activities that only work well/at all under a proprietary operating system. I think that this is because they get used to being treated with respect by their computer. And indeed, one of the reasons I’ve almost entirely given up on computer gaming is that computer games are non-free software. “Are you sure you need to do that?” starts sounding like a genuine question rather than simply a polite way of saying that what someone wants to do can’t be achieved.

I suggest that this is a blessing in disguise. The majority of the things that you can only do under a proprietary operating system are things that it would be good for you to switch out for other activities. I’m not suggesting that switching to GNU/Linux is a good way to give up on the entertainment industry. It’s a good way of moderating your engagement with the entertainment industry. Rather than logging onto Netflix, you might instead pop in a DVD of a movie. You can still engage with contemporary popular culture, but the technical barriers give you an opportunity to moderate your consumption: once you’ve finished watching the movie, the software won’t try to get you to watch something else by making a calculation as to what you’re most likely to assent to watching next based on what you’ve watched before. For this behaviour of the Netflix software is just another example of non-free software working against its user’s interests: watching a movie is good for you, but binge-watching a TV series probably isn’t. In cases like this, living in the world of Free Software makes it easier to engage with media healthily.

Chris Lamb: Diffoscope progress bar

Wed, 28 Sep 2016 11:45:53 +0000


Diffoscope is a diff utility which recursively unpacks archives, ISOs, etc., transforming a wide variety of files into human-readable forms before comparison instead of simply showing the raw difference in hexadecimal.

I recently added a progress bar when diffoscope is run on a terminal:


Note that as diffoscope can, at any point, encounter an archive or format that requires unpacking, the progress will always be approximate and may even appear to go "backwards".

The implementation, available in version 61, is simple (see #1, #2, #3 & #4) but takes a number of subtleties into account, using context managers to correctly track the state throughout.
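The post doesn't include the code, but the bookkeeping can be sketched with a context manager. This is a hypothetical illustration, not diffoscope's actual implementation: each nested unpacking step claims one unit of its parent's range, so the estimate is refined as new archives are discovered and is always approximate.

```python
import contextlib

class Progress:
    """Approximate progress over a tree of comparisons whose size is only
    discovered as archives get unpacked. Hypothetical sketch, not
    diffoscope's actual implementation."""

    def __init__(self):
        self.stack = []  # one [done, total] pair per nesting level

    @contextlib.contextmanager
    def step(self, total):
        # Entering a newly discovered archive pushes a level; finishing
        # it counts as one unit of work at the parent level.
        self.stack.append([0, total])
        try:
            yield self
        finally:
            self.stack.pop()
            if self.stack:
                self.stack[-1][0] += 1

    def fraction(self):
        # Each level refines the estimate within its parent's current
        # slice, so the value is always approximate and can jump as new
        # archives (new levels) appear.
        frac, scale = 0.0, 1.0
        for done, total in self.stack:
            if total:
                frac += scale * done / total
                scale /= total
        return frac

progress = Progress()
with progress.step(2):            # top level: two members to compare
    with progress.step(3):        # first member is itself an archive
        pass
    print(progress.fraction())    # 0.5: one of two top-level items done
```

Because a level is popped before the parent counter advances, discovering an unexpectedly deep archive only stretches the current slice rather than breaking the running total, which is why the displayed value can appear to stall or move "backwards".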

Kees Cook: security things in Linux v4.4

Tue, 27 Sep 2016 22:47:08 +0000

Continuing with interesting security things in the Linux kernel, here’s v4.4. As before, if you think there’s stuff I missed that should get some attention, please let me know.

CONFIG_IO_STRICT_DEVMEM

The CONFIG_STRICT_DEVMEM setting that has existed for a long time already protects system RAM from being accessible through the /dev/mem device node to root in user-space. Dan Williams added CONFIG_IO_STRICT_DEVMEM to extend this so that if a kernel driver has reserved a device memory region for use, it will become unavailable to /dev/mem also. The reservation in the kernel was to keep other kernel things from using the memory, so this is just common sense to make sure user-space can’t stomp on it either. Everyone should have this enabled. If you’re looking to create a very bright line between user-space having access to device memory, it’s worth noting that if a device driver is a module, a malicious root user can just unload the module (freeing the kernel memory reservation), fiddle with the device memory, and then reload the driver module. So either just leave out /dev/mem entirely (not currently possible with upstream), build a monolithic kernel (no modules), or otherwise block (un)loading of modules (/proc/sys/kernel/modules_disabled).

seccomp UM

Mickaël Salaün added seccomp support (and selftests) for user-mode Linux. Moar architectures!

seccomp Checkpoint/Restore-In-Userspace

Tycho Andersen added a way to extract and restore seccomp filters from running processes via PTRACE_SECCOMP_GET_FILTER under CONFIG_CHECKPOINT_RESTORE. This is a continuation of his work (that I failed to mention in my prior post) from v4.3, which introduced a way to suspend and resume seccomp filters. As I mentioned at the time (and for which he continues to quote me) “this feature gives me the creeps.” :)

x86 W^X corrections

Stephen Smalley noticed that there was still a range of kernel memory (just past the end of the kernel code itself) that was incorrectly marked writable and executable, defeating the point of CONFIG_DEBUG_RODATA, which seeks to eliminate these kinds of memory ranges. He corrected this and added CONFIG_DEBUG_WX, which performs a scan of memory at boot time and yells loudly if unexpected memory protections are found. To nobody’s delight, it was shortly discovered that UEFI leaves chunks of memory in this state too, which posed an ugly-to-solve problem (which Matt Fleming addressed in v4.6).

x86_64 vsyscall CONFIG

I introduced a way to control the mode of the x86_64 vsyscall with a build-time CONFIG selection, though the choice I really care about is CONFIG_LEGACY_VSYSCALL_NONE, to force the vsyscall memory region off by default. The vsyscall memory region was always mapped into process memory at a fixed location, and it originally posed a security risk as a ROP gadget execution target. The vsyscall emulation mode was added to mitigate the problem, but it still left fixed-position static memory content in all processes, which could still pose a security risk. The good news is that glibc since version 2.15 doesn’t need vsyscall at all, so it can just be removed entirely. Anyone building a kernel this way who discovers they need to support a pre-2.15 glibc can still re-enable it at the kernel command line with “vsyscall=emulate”.

That’s it for v4.4. Tune in tomorrow for v4.5!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. [...]

C.J. Adams-Collier: OpenDaylight Symposium 2016

Tue, 27 Sep 2016 17:17:08 +0000

I’ll write this later. Keywords and notes inline.

Keywords: Cloud, 5G, AT&T, Ericsson, SDN, Cisco, Intel, Linux, Debian, Ubuntu, Open Source, Hydrogen, Helium, Lithium, OSI stack, Usability.

  • Developers and users working together.
  • Fast dev/test cycles.
  • Sessions start at 10:30.
  • Message bus.
  • Minimal install needs to be smaller.
  • Most functionalities must be implemented in modules.
  • Upgrade path complexity.
  • Out-of-band control channel.
  • OPNFV customers don’t run stock releases; make customisations and plugins easier.
  • Distribution vendors should not be expected to perform package maintenance. Get the distribution ready for fast-pathing into backports, security, testing and unstable.

[...]

Ben Hutchings: Debian LTS work, August 2016

Tue, 27 Sep 2016 10:41:56 +0000


I was assigned 14.75 hours of work by Freexian's Debian LTS initiative and carried over 0.7 from last month. I worked a total of 14 hours, carrying over 1.45 hours.

I finished preparing and finally uploaded an update for linux (3.2.81-2). This took longer than expected due to the difficulty of reproducing CVE-2016-5696 and verifying the backported fix. I also released an upstream stable update (3.2.82) which will go into the next update in wheezy LTS.

I discussed a few other security updates and issues on the debian-lts mailing list.

Kees Cook: security things in Linux v4.3

Mon, 26 Sep 2016 22:54:38 +0000

When I gave my State of the Kernel Self-Protection Project presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn’t a lot of time to talk about them all, I figured I’d make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn’t everything security-related or generally of interest, but they’re the things I thought needed to be pointed out. If there’s something security-related you think I should cover from v4.3, please mention it in the comments. I’m sure I haven’t caught everything. :)

A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren’t unique to the project nor must the goals be met by people that are explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the “kernel self-protection” ideal anyway — some things are just really interesting userspace-facing features.

So, to that end, here are things I found interesting in v4.3:

CONFIG_CPU_SW_DOMAIN_PAN

Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so.

This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow. This raises the bar for attackers since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and in the case of code, executable), where they can discover the location as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink.

While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy-Bridge, SMAP since Skylake), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I’m delighted to see a form of this land in upstream finally.

To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.

Ambient Capabilities

Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for[...]

Reproducible builds folks: Reproducible Builds: week 74 in Stretch cycle

Mon, 26 Sep 2016 21:25:12 +0000

Here is what happened in the Reproducible Builds effort between Sunday September 18 and Saturday September 24 2016:

Outreachy

We intend to participate in Outreachy Round 13 and look forward to new enthusiastic applicants to contribute to reproducible builds. We're offering four different areas to work on:

  • Improve test and debugging tools.
  • Improving reproducibility of Debian packages.
  • Improving Debian infrastructure.
  • Help collaboration across distributions.

Reproducible Builds World summit #2

We are planning a similar event to our Athens 2015 summit and expect to reveal more information soon. If you haven't been contacted yet but would like to attend, please contact holger.

Toolchain development and fixes

Mattia uploaded dpkg/ to our experimental repository and covered the details of the upload in a mailing list post. The most important change is the incorporation of improvements made by Guillem Jover (the dpkg maintainer) to the .buildinfo generator. This is also in the hope that it will speed up the merge upstream. Another relevant change is that .buildinfo files generated from binary-only builds will no longer include the hash of the .dsc file in Checksums-Sha256, as documented in the specification. Even if it was considered important to include a checksum of the source package in .buildinfo, storing it that way breaks other assumptions (e.g. that Checksums-Sha256 contains only files that are part of a single upload, whereas the .dsc might not be part of that upload), so we look forward to another solution for storing the source checksum in .buildinfo.

Bugs filed

  • #838713 filed against python-xlib by Chris Lamb.
  • #838754 filed against golang-google-grpc by Chris Lamb.
  • #838188 filed against ocaml by Johannes Schauer.
  • #838785 filed against funnelweb by Reiner Herrmann.

Reviews of unreproducible packages

250 package reviews have been added, 4 have been updated and 4 have been removed this week, adding to our knowledge about identified issues.

4 issue types have been added:

  • captures_users_gecos
  • timestamps_in_org_mode_html_output (toolchain issue)
  • varnish_vmodtool_random_file_id
  • gpg_keyring_magic_bytes_differ

3 issue types have been updated:

  • timestamps_in_org_mode_html_output (more generally)
  • timestamps_in_documentation_generated_by_texi2html
  • randomness_in_ocaml_preprocessed_files

Weekly QA work

FTBFS bugs have been reported by:

  • Chris Lamb (11)
  • Santiago Vila (2)

Documentation updates

h01ger created a new Jenkins job so that every commit pushed to the master branch for the website will update it.

diffoscope development

Mattia Rizzolo:

  • Skip rlib tests if the "nm" tool is missing

Ximin Luo:

  • Give better advice about what envvar to set to make the console work
  • tests/basic-command-line: check exit code and use a more complex example
  • Add a script to check sizes of dependencies
  • Don't use unicode quotes to avoid breakage under LC_ALL=C

strip-nondeterminism development

Chris Lamb:

  • .perltidyrc: Add from lintian.
  • .perltidyrc: We use tabs, not spaces.
  • Run perltidy

reprotest development

Ximin Luo uploaded reprotest 0.3 and 0.3.1 to unstable with these changes:

  • Make some variations more reliable, so tests don't fail
  • Add a safety device to guard against typos
  • Address lintian warnings
  • Remove any existing artifact, in case the build script doesn't overwrite it
  • Fix the logic of some tests, and don't vary fileordering on Debian buildds
  • Use the magic of VIRTUALENV_DOWNLOAD=no, seen in tox's own autopkgtest tests
  • Flush so subprocess output is guaranteed to appear later
  • Don't error if the build command generates stderr
  • Default tests to run on "null" only since it takes effort to set up the others
  • hey dawg i herd u liek tests so i put some tests in ur tests so u can test while u test
  • Make no_clear_on_error optional; we don't want to pa[...]

Rhonda D'Vine: LP

Mon, 26 Sep 2016 10:00:00 +0000


I guess you know by now that I simply love music. It is powerful, it can move you, change your mood in a lot of directions, make you wanna move your body to it, even unknowingly have this happen, and remind you of situations you want to keep in mind. The singer I present to you was introduced to me by a dear friend with the following words: So this hasn't happened to me in a looooong time: I hear a voice and can't stop crying. I can't decide which song I should send to you thus I send three of which the last one let me think of you.

And I have to agree, that voice is really great. Thanks a lot for sharing LP with me, dear! And given that I got sent three songs and I am not good at holding excitement back, I want to share it with you, so here are the songs:

  • Lost On You: Her voice is really great in this one.
  • Halo: Have to agree that this is really a great cover.
  • Someday: When I hear that song and think about that it reminds my friend of myself I'm close to tears, too ...

Like always, enjoy!


Clint Adams: Collect the towers

Sun, 25 Sep 2016 23:57:07 +0000


Why is openbmap's North American coverage so sad? Is there a reason that RadioBeacon doesn't also submit to OpenCellID? Is there a free software Android app that submits data to OpenCellID?

Vincent Sanders: I'll huff, and I'll puff, and I'll blow your house in

Sun, 25 Sep 2016 23:23:10 +0000

Sometimes it really helps to have a different view on a problem, and after my recent writings on my Public Suffix List (PSL) library I was fortunate to receive a suggestion from my friend Enrico Zini. I had asked for suggestions on reducing the size of the library further, and Enrico simply suggested Huffman coding. This was a technique I had learned about long ago in connection with data compression, and the intervening years had made all the details fuzzy, which explains why it had not immediately sprung to mind.

Huffman coding, named for David A. Huffman, is an algorithm that produces a very efficient representation of data. In a normal array of characters every character takes the same eight bits to represent, which is the best we can do when any of the 256 possible values is equally likely. If your data is not evenly distributed this is not the case: if the data were English text, for example, a value is roughly fifteen times more likely to be an e than a k.

So if we have some data with a non-uniform distribution of probabilities, we need a way to encode the data with fewer bits for the common values and more bits for the rarer values. To be efficient we would need some way of having variable-length representations without storing the length separately. The term for this data representation is a prefix code, and there are several ways to generate them. Such is the influence of Huffman on the area of prefix codes that they are often called Huffman codes even if they were not created using his algorithm. One can dream of becoming immortalised like this; to join the ranks of those whose names are given to units or whole ideas in a field must be immensely rewarding. However, given that Huffman invented his algorithm and proved it to be optimal to answer a question on a term paper in his early twenties, I fear I may already be a bit too late.

The algorithm itself is relatively straightforward. First a frequency analysis is performed: a fancy way of saying count how many of each character is in the input data. Next a binary tree is created by using a priority queue initialised with the nodes sorted by frequency. The counts of the two least frequent items are summed together and a node placed in the tree with the two original entries as child nodes. This step is repeated until a single node exists with a count value equal to the length of the input.

To encode data one simply walks the tree, outputting a 0 for a left node or a 1 for a right node until reaching the original value. This generates a mapping of values to bit output; the input is then simply converted value by value to the bit output. To decode, the data is used bit by bit to walk the tree and arrive at values.

If we perform this algorithm on the example string table *!asiabvcomcoopitamazonawsarsaves-the-whalescomputebasilicata we can reduce the 488 bits (61 × 8-bit characters) to 282 bits, or a 40% reduction. Obviously in a real application the Huffman tree would need to be stored, which would probably exceed this saving, but for larger data sets it is probable this technique would yield excellent results on this kind of data.

Once I proved this to myself I implemented the encoder within the existing conversion program. Although my Perl encoder is not very efficient it can process the entire PSL string table (around six thousand labels using 40KB or so) in less than a second, so unless the table grows massively an inelegant approach will suffice.

The resulting bits were packed into 32-bit values to improve decode performance (most systems prefer to deal with larger memory fetches less frequently) and resulted in 18KB of output, or 47% of the original size. This is a great improvement in size and means the statically linked test program is now 59KB and is actually smaller than the gzipped source data.

$ ls -alh test_nspsl
-rwxr-xr-x 1 vince vince 59K S[...]
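The steps above can be sketched in a few lines of Python. The author's encoder is in Perl; this is only an illustrative reimplementation, using heapq as the priority queue and carrying partial code tables instead of an explicit tree:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    # Frequency analysis, then repeatedly merge the two least frequent
    # nodes. Instead of an explicit tree, each heap entry carries the
    # partial symbol->bits mapping for its subtree; the tie-breaking
    # counter n keeps tuple comparison away from the dicts.
    heap = [(freq, n, {sym: ""})
            for n, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + bits for s, bits in left.items()}   # left branch
        merged.update({s: "1" + bits for s, bits in right.items()})  # right
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]

def huffman_decode(bits, codes):
    # Prefix property: no code is a prefix of another, so a symbol can be
    # emitted as soon as the accumulated bits match one.
    inv = {c: s for s, c in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)

data = "*!asiabvcomcoopitamazonawsarsaves-the-whalescomputebasilicata"
codes = huffman_codes(data)
encoded = "".join(codes[c] for c in data)
assert huffman_decode(encoded, codes) == data
```

Exact code assignments depend on how the priority queue breaks ties, but the total encoded length is optimal either way, so the encoding comes out well under the original eight bits per character.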

Julian Andres Klode: Introducing TrieHash, a order-preserving minimal perfect hash function generator for C(++)

Sun, 25 Sep 2016 18:44:36 +0000

Abstract

I introduce TrieHash, an algorithm for constructing perfect hash functions from tries. The generated hash functions are pure C code, minimal, order-preserving and outperform existing alternatives. Together with the generated header files, they can also be used as a generic string-to-enumeration mapper (enums are created by the tool).

Introduction

APT (and dpkg) spend a lot of time parsing various files, especially Packages files. APT currently uses a function called AlphaHash which hashes the last 8 bytes of a word in a case-insensitive manner to hash fields in those files (dpkg just compares strings in an array of structs). There is one obvious drawback to using a normal hash function: when we want to access the data in the hash table, we have to hash the key again, causing us to hash every accessed key at least twice. It turned out that this affects something like 5 to 10% of the cache generation performance.

Enter perfect hash functions: a perfect hash function matches a set of words to constant values without collisions. You can thus just use the index to index into your hash table directly, and do not have to hash again (if you generate the function at compile time and store key constants) or handle collision resolution.

As #debian-apt people know, I happened to play around a bit with tries this week before guillem suggested perfect hashing. Let me tell you one thing: my trie implementation was very naive, and did not really improve things a lot…

Enter TrieHash

Now, how is this related to hashing? The answer is simple: I wrote a perfect hash function generator that is based on tries. You give it a list of words, it puts them in a trie, and generates C code out of it, using recursive switch statements (see code generation below). The function achieves competitive performance with other hash functions; it even usually outperforms them.

Given a dictionary, it generates an enumeration (a C enum or C++ enum class) of all words in the dictionary, with the values corresponding to the order in the dictionary (the order-preserving property), and a function mapping strings to members of that enumeration. By default, the first word is considered to be 0 and each word increases a counter by one (that is, it generates a minimal hash function). You can tweak that, however:

    = 0
    WordLabel ~ Word
    OtherWord = 9

will return 0 for an unknown value, map “Word” to the enum member WordLabel and map OtherWord to 9. That is, the input list functions like the body of a C enumeration. If no label is specified for a word, it will be generated from the word. For more details see the documentation.

C code generation

    switch(string[0] | 32) {
    case 't':
        switch(string[1] | 32) {
        case 'a':
            switch(string[2] | 32) {
            case 'g':
                return Tag;
            }
        }
    }
    return Unknown;

Yes, really recursive switches – they directly represent the trie. Now, we did not really do a straightforward translation; there are some optimisations to make the whole thing faster and easier to look at.

First of all, the | 32 you see is used to make the check case-insensitive in case all cases of the switch body are alphabetical characters. If there are non-alphabetical characters, it will generate two cases per character, one uppercase and one lowercase (with one break in it). I did not know that lowercase and uppercase characters differed by only one bit before; thanks to the clang compiler for pointing that out in its generated assembler code!

Secondly, we insert breaks only between cases. Initially, each case ended with a return Unknown, but guillem (the dpkg developer) suggested it might be faster to let them fall through where possible. Turns out it was not faster on a good compiler, but it’s still more readable anyway.

Finally, we build [...]
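As a model of the underlying idea (not TrieHash's actual output, which is generated C), here is a minimal order-preserving trie lookup in Python; the recursive switch statements the tool emits effectively compile this character-by-character walk into C:

```python
def build_trie(words):
    # Order-preserving minimal mapping: each word's value is its index
    # in the input list. The "$" key marks end-of-word and is assumed
    # not to occur inside any key (illustrative shortcut).
    root = {}
    for value, word in enumerate(words):
        node = root
        for ch in word.lower():       # case-insensitive, like `| 32`
            node = node.setdefault(ch, {})
        node["$"] = value
    return root

def lookup(trie, s, unknown=-1):
    # Walk one character per level -- what each nested switch level
    # does in the generated C code.
    node = trie
    for ch in s.lower():
        if ch not in node:
            return unknown
        node = node[ch]
    return node.get("$", unknown)

# Hypothetical field names, standing in for a Packages-file dictionary.
fields = ["Package", "Version", "Depends"]
trie = build_trie(fields)
```

With this trie, lookup(trie, "VERSION") returns 1 and an unknown key returns -1; the generated C version gives the same answers without rehashing the key on access.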

Steinar H. Gunderson: Nageru @ Fyrrom

Sun, 25 Sep 2016 14:01:00 +0000


When Samfundet wanted to make their own Boiler Room spinoff (called “Fyrrom”—more or less a direct translation), it was a great opportunity to try out the new multitrack code in Nageru. After all, what can go wrong with a pretty much untested and unfinished git branch, right?

So we cobbled together a bunch of random equipment from here and there:


Hooked it up to Nageru:


and together with some great work from the people actually pulling together the event, this was the result. Lots of fun.

And yes, some bugs were discovered—of course, field testing without followup patches is meaningless (that would either mean you're not actually taking your test experience into account, or that your testing gave no actionable feedback and thus was useless), so they will be fixed in due time for the 1.4.0 release.

Edit: Fixed a screenshot link.

Sven Hoexter: in causa wosign

Sun, 25 Sep 2016 10:00:18 +0000

Since I kind of recommended the free WoSign CA in the past, I would like to point out the issues that have piled up. (Yes, I'm late with this post, by about a month or two now ...) Mozilla has a writeup resulting from a removal discussion for NSS:

Since WoSign, or the person behind it, also silently bought StartCom, we now have two of the three free CAs (StartSSL and WoSign) in one hand, with a questionable track record. That leaves everyone looking for a low-budget option with Let's Encrypt.

Russ Allbery: podlators 4.08

Sun, 25 Sep 2016 02:28:00 +0000

A new release of the distribution that provides Pod::Man and Pod::Text for Perl documentation formatting.

The impetus for this release is fixing a rendering bug in Pod::Man that spewed stray bits of half-escaped *roff into the man page for the text "TRUE (1)". This turned out to be due to two interlocking bugs in the dark magic regexes that try to fix up formatting to make man pages look a bit better: incorrect double-markup in both small caps and as a man page reference, and incorrect interpretation of the string "\s0(1)". Both are fixed in this release.

podlators 4.00 changed Pod::Man to make piping POD through pod2man on standard input without providing the --name option an error, since there was no good choice for the man page title. This turned out to be too disruptive: the old behavior of tolerating this had been around for too long, and I got several bug reports. Since I think backward compatibility is extremely important for these tools, I've backed down from this change, and now Pod::Man and pod2man just silently use the man page name "STDIN" (which still fixes the original problem of being reproducible).

It is, of course, still a good idea to provide the name option when dealing with standard input, since "STDIN" isn't a very good man page title.

This release also adds new --lquote and --rquote options to pod2man to set the quote marks independently, and removes a test that relied on a POD construct that is going to become an error in Pod::Simple.

You can get the latest release from the podlators distribution page.

Dirk Eddelbuettel: tint 0.0.1: Tint Is Not Tufte

Sat, 24 Sep 2016 23:13:00 +0000


A new experimental package is now on the ghrr drat. It is named tint which stands for Tint Is Not Tufte. It provides an alternative for Tufte-style html presentation. I wrote a bit more on the package page and the README in the repo -- so go read this.

Here is just a little teaser of what it looks like:


and the full underlying document is available too.

For questions or comments use the issue tracker off the GitHub repo. The package may be short-lived as its functionality may end up inside the tufte package.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Iain R. Learmonth: Azure from Debian

Sat, 24 Sep 2016 22:03:13 +0000


Around a week ago, I started to play with programmatically controlling Azure. I needed to create and destroy a bunch of VMs over and over again, and this seemed like something I would want to automate once instead of doing manually and repeatedly. I started to look into the azure-sdk-for-python and mentioned that I wanted to look into this in #debian-python. ardumont from Software Heritage noticed me, and was planning to package azure-storage-python. We joined forces and started a packaging team for Azure-related software.

I spoke with the upstream developer of the azure-sdk-for-python and he pointed me towards azure-cli. It looked to me that this fit my use case better than the SDK alone, as it had the high level commands I was looking for.

Between me and ardumont, in the space of just under a week, we have now packaged: python-msrest (#838121), python-msrestazure (#838122), python-azure (#838101), python-azure-storage (#838135), python-adal (#838716), python-applicationinsights (#838717) and finally azure-cli (#838708). Some of these packages are still in the NEW queue at the time I’m writing this, but I don’t foresee any issues with these packages entering unstable.

azure-cli, as we have packaged, is the new Python-based CLI for Azure. The Microsoft developers gave it the tagline of “our next generation multi-platform command line experience for Azure”. In the short time I’ve been using it I’ve been very impressed with it.

In order to set it up initially, you have to configure a couple of defaults using az configure. After that, you need to az login, which is an entirely painless process as long as you have a web browser handy in order to perform the login.

After those two steps, you’re only two commands away from deploying a Debian virtual machine:

az resource group create -n testgroup -l "West US"
az vm create -n testvm -g testgroup --image credativ:Debian:8:latest --authentication-type ssh

This will create a resource group, and then create a VM within that resource group, with a user automatically created with your current username and with your SSH public key (~/.ssh/) automatically installed. Once it returns you the IP address, you can SSH in straight away.

Looking forward to some next steps for Debian on Azure, I’d like to get images built for Azure using vmdebootstrap and I’ll be exploring this in the lead up to, and at, the upcoming vmdebootstrap sprint in Cambridge, UK later in the year (still being organised).

Ritesh Raj Sarraf: Laptop Mode Tools 1.70

Sat, 24 Sep 2016 13:55:23 +0000

I'm pleased to announce the release of Laptop Mode Tools, version 1.70. This release adds support for AHCI Runtime PM, introduced in Linux 4.6. It also includes many important bug fixes, mostly related to invocation and determination of power states.

Changelog: 1.70 - Sat Sep 24 16:51:02 IST 2016

  • Deal harder with broken battery states
  • On machines with 2+ batteries, determine states from all batteries
  • Limit status message logging frequency. Some machines tend to send ACPI events too often. Thanks Maciej S. Szmigiero
  • Try harder to determine power states. As reports have shown, the power_supply subsystem has had incorrect state reporting on many machines, for both BAT and AC.
  • Relax conditional events where Laptop Mode Tools should be executed. This affected use cases of the laptop being docked and undocked. Thanks Daniel Koch.
  • CPU Hotplug settings extended
  • Cleanup states for improved Laptop Mode Tools invocation. Thanks: Tomas Janousek
  • Align Intel P State default to what the actual driver (intel_pstate.c) uses. Thanks: George Caswell and Matthew Gabeler-Lee
  • Add support for AHCI Runtime PM in module intel-sata-powermgmt
  • Many systemd and initscript fixes
  • Relax default USB device list. This avoids the long-standing issues with USB devices (mice, keyboards) that misbehaved during autosuspend

Source tarball, Fedora/SUSE RPM packages available at: Debian packages will be available soon in unstable. Homepage: Mailing list:
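Runtime PM support of this kind generally boils down to toggling per-device runtime power management through sysfs. A minimal sketch of the idea (the helper name and example path are mine, not code from laptop-mode-tools):

```python
from pathlib import Path

def set_runtime_pm(device_dir, mode="auto"):
    # Write the device's sysfs power/control file: "auto" enables runtime
    # power management, "on" disables it. `device_dir` would normally be
    # something like /sys/bus/pci/devices/0000:00:1f.2 (illustrative path).
    control = Path(device_dir) / "power" / "control"
    control.write_text(mode + "\n")
    return control.read_text().strip()
```

A tool like laptop-mode-tools would apply this across the relevant devices when switching between AC and battery.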

James McCoy: neovim-enters-stretch

Sat, 24 Sep 2016 03:01:55 +0000


Last we heard from our fearless hero, Neovim, it was just entering the NEW queue. Well, a few days later it landed in experimental and, 8 months to the day since then, it is now in Stretch.

Enjoy the fish!

Jonathan Dowland: WadC 2.1

Thu, 22 Sep 2016 20:22:18 +0000


Today I released version 2.1 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

This comes about a year after version 2.0. The most significant change is an adjustment to the line splitting algorithm to fix a long-standing issue when you try to draw a new linedef over the top of an existing one, but in the opposite direction. Now that this bug is fixed, it's much easier to overdraw vertical or horizontal lines without needing an awareness of the direction of the original lines.

The other big changes are in the GUI, which has been cleaned up a fair bit: it now has undo/redo support, the initial window size is twice as large, and it supports internationalisation, with a partial French translation included.

This version is dedicated to the memory of Professor Seymour Papert (1928-2016), co-inventor of the LOGO programming language.

For more information see the release notes and the reference.

Joey Hess: keysafe beta release

Thu, 22 Sep 2016 20:13:21 +0000


After a month of development, keysafe 0.20160922 is released, and ready for beta testing. And it needs servers.

With this release, the whole process of backing up and restoring a gpg secret key to keysafe servers is implemented. Keysafe is started at desktop login, and will notice when a gpg secret key has been created, and prompt to see if it should back it up.

At this point, I recommend only using keysafe for lower-value secret keys, for several reasons:

  • There could be some bug that prevents keysafe from restoring a backup.
  • Keysafe's design has not been completely reviewed for security.
  • None of the keysafe servers available so far or planned to be deployed soon meet all of the security requirements for a recommended keysafe server. While server security is only the initial line of defense, it's still important.

Currently the only keysafe server is one that I'm running myself. Two more keysafe servers are needed for keysafe to really be usable, and I can't run those.

If you're interested in running a keysafe server, read the keysafe server requirements and get in touch.

Arturo Borrero González: Initial post

Thu, 22 Sep 2016 19:38:06 +0000


Finally, I decided it was time to switch from blogger to jekyllrb hosted at github pages.

My old blog at will still be online as an archive, since I don’t plan to migrate the content from there to here.

Gustavo Noronha Silva: WebKitGTK+ 2.14 and the Web Engines Hackfest

Thu, 22 Sep 2016 17:03:36 +0000

Next week our friends at Igalia will be hosting this year’s Web Engines Hackfest. Collabora will be there! We are gold sponsors, and have three developers attending. It will also be an opportunity to celebrate Igalia’s 15th birthday \o/. Looking forward to meeting you there! =)

Carlos Garcia has recently released WebKitGTK+ 2.14, the latest stable release. This is a great release that brings a lot of improvements and works much better on Wayland, which is becoming mature enough to be used by default. In particular, it fixes the clipboard, which was one of the main missing features, thanks to Carlos Garnacho! We have also been able to contribute a bit to this release =)

One of the biggest changes this cycle is the threaded compositor, which was implemented by Igalia’s Gwang Yoon Hwang. This work improves performance by not stalling other web engine features while compositing. Earlier this year we contributed fixes to make the threaded compositor work with the web inspector and fixed elements, helping with the goal of enabling it by default for this release.

Wayland was also lacking an accelerated compositing implementation. There was a patch to add a nested Wayland compositor to the UIProcess, with the WebProcesses connecting to it as Wayland clients to share the final rendering so that it can be shown on screen. It was not ready, though, and there were questions as to whether that was the way to go; alternative proposals for how best to implement it were floating around. At last year’s hackfest we had discussions about what the best path would be, where collaborans Emanuele Aina and Daniel Stone (proxied by Emanuele) contributed quite a bit on figuring out how to implement it in a way that was both efficient and platform agnostic.

We later picked up the old patchset, rebased it on the then-current master and made it run efficiently as a proof of concept for the Apertis project on an i.MX6 board. This was done using the fancy GL support that landed in GTK+ in the meantime, with some API additions and shortcuts to sidestep performance issues. The work was sponsored by Robert Bosch Car Multimedia.

Igalia managed to improve and land a very well designed patch that implements the nested compositor, though it was still not as efficient as it could be, as it was using glReadPixels to get the final rendering of the page to the GTK+ widget through cairo. I have improved that code by ensuring we do not waste memory when using HiDPI.

As part of our proof of concept investigation, we got this WebGL car visualizer running quite well on our sabrelite imx6 boards. Some of it went into the upstream patches or proposals mentioned below, but we have a bunch of potential improvements still in store that we hope to turn into upstreamable patches and advance during next week’s hackfest.

One of the improvements that already landed was an alternate code path that leverages GTK+’s recent GL super powers to render using gdk_cairo_draw_from_gl(), avoiding the expensive copying of pixels from the GPU to the CPU and making it go faster. That improvement exposed a weird bug in GTK+ that causes a black patch to appear when shrinking the window, which I have a tentative fix for.

We originally proposed to add a new gdk_cairo_draw_from_egl() to use an EGLImage instead of a GL texture or renderbuffer. On our proof of concept we noticed it is even more efficient than the texturing currently used by GTK+, and could give us even better performance for WebKitGTK+. Emanue[...]

Zlatan Todorić: Open Source Motion Comic Almost Fully Funded - Pledge now!

Thu, 22 Sep 2016 11:54:21 +0000

The Pepper and Carrot motion comic is almost funded. The pledge from Ethic Cinema put it on a good road (it had seemed it would fail). Ethic Cinema is a non-profit organization that wants to make open source art (as they call it, Libre Art). Purism's creative director, François Téchené, is a member and co-founder of Ethic Cinema. Let's push the final bits so we can get this free-as-in-freedom artwork.

Notice that Pepper and Carrot is a webcomic (also available as a book) and free-as-in-freedom artwork done by David Revoy, who also supports this campaign. The Krita community supports it as well, on their landing page.

Let's do this!

Junichi Uekawa: Tried creating a GCE control panel for myself.

Thu, 22 Sep 2016 03:04:25 +0000

Tried creating a GCE control panel for myself. The GCP GCE control panel takes about 20 seconds to load for me; the CPU is busy loading the page. It does so many things and is very complex. I noticed that the API isn't that slow, so I used OAuth to let me do what I usually need: list the hosts, start/stop an instance, and list the IPs. It takes 500 ms instead of 20 seconds. I've put the service on App Engine. The hardest part was figuring out how the OAuth2 dance was supposed to work; all the Python documentation I found was somewhat outdated and needed rewriting to a workable state (the document was outdated, but the sample code had been fixed). I had to read up on vendoring and pip and other stuff in order to get all the dependencies installed. I guess my Python App Engine skills are too rusty now.
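For anyone else fighting the same OAuth2 dance: the first step of the authorization-code flow is just constructing a URL to send the user to. A minimal sketch (the endpoint shown is Google's current one; the parameter set is the standard OAuth2 one, simplified, and not taken from the post):

```python
from urllib.parse import urlencode

def authorization_url(client_id, redirect_uri, scope,
                      endpoint="https://accounts.google.com/o/oauth2/v2/auth"):
    # Step one of the OAuth2 authorization-code flow: build the URL the
    # user visits to grant access; the code is then exchanged for tokens.
    params = {
        "response_type": "code",   # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "access_type": "offline",  # also request a refresh token
    }
    return endpoint + "?" + urlencode(params)
```

The second half of the dance, exchanging the returned code for an access token, is a POST to the token endpoint with the code and client credentials.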

C.J. Adams-Collier: virt manager cannot find suitable emulator for x86 64

Wed, 21 Sep 2016 18:42:04 +0000


Looks like I was missing qemu-kvm.

$ sudo apt-get install qemu-kvm qemu-system

Matthew Garrett: Microsoft aren't forcing Lenovo to block free operating systems

Wed, 21 Sep 2016 17:09:31 +0000

There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist, untrue and distracts from a genuine problem.

The background is straightforward. Intel platforms allow the storage to be configured in two different ways - "standard" (normal AHCI on SATA systems, normal NVMe on NVMe systems) or "RAID". "RAID" mode is typically just changing the PCI IDs so that the normal drivers won't bind, ensuring that drivers that support the software RAID mode are used. Intel have not submitted any patches to Linux to support the "RAID" mode.

In this specific case, Lenovo's firmware defaults to "RAID" mode and doesn't allow you to change that. Since Linux has no support for the hardware when configured this way, you can't install Linux (distribution installers will boot, but won't find any storage device to install the OS to).

Why would Lenovo do this? I don't know for sure, but it's potentially related to something I've written about before - recent Intel hardware needs special setup for good power management. The storage driver that Microsoft ship doesn't do that setup. The Intel-provided driver does. "RAID" mode prevents the Microsoft driver from binding and forces the user to use the Intel driver, which means they get the correct power management configuration, battery life is better and the machine doesn't melt.

(Why not offer the option to disable it? A user who does would end up with a machine that doesn't boot, and if they managed to figure that out they'd have worse power management. That increases support costs. For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is minuscule.)

Things are somewhat obfuscated due to a statement from a Lenovo rep: "This system has a Signature Edition of Windows 10 Home installed. It is locked per our agreement with Microsoft." It's unclear what this is meant to mean. Microsoft could be insisting that Signature Edition systems ship in "RAID" mode in order to ensure that users get a good power management experience. Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures. Neither interpretation indicates that there's a deliberate attempt to prevent users from installing their choice of operating system.

The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware. [...]

Vincent Sanders: If I see an ending, I can work backward.

Tue, 20 Sep 2016 21:12:37 +0000

Now while I am sure Arthur Miller was referring to writing a play when he said those words, they have an oddly appropriate resonance for my topic.

In the early nineties Lou Montulli applied the idea of magic cookies to HTTP to make the web stateful; I imagine he had no idea of the issues he was going to introduce for the future. Like most web technology it was a solution to an immediate problem which it has never been possible to subsequently improve.

The HTTP cookie is simply a way for a website to identify a connecting browser session so that state can be kept between retrieving pages. Due to shortcomings in the design of cookies and implementation details in browsers this has led to a selection of unwanted side effects. The specific issue that I am talking about here is the supercookie, where the super prefix in this context has similar connotations as when applied to the word villain.

Whenever the browser requests a resource (web page, image, etc.) the server may return a cookie along with the resource that your browser remembers. The cookie has a domain name associated with it and, when your browser requests additional resources, if the cookie domain matches the requested resource's domain name the cookie is sent along with the request.

As an example, the first time you visit a page on you might receive a cookie with the domain so next time you visit a page on your browser will send the cookie along. Indeed it will also send it along for any page on

A supercookie is simply one where instead of being limited to one sub-domain ( the cookie is set for a top level domain (foo.invalid) so visiting any such domain (I used the invalid name in my examples but one could substitute com or your web browser gives out the cookie. Hackers would love to be able to set up such cookies and potentially control and hijack many sites at a time.

This problem was noted early on and browsers were not allowed to set cookie domains with fewer than two parts, so example.invalid or were allowed but invalid or com on their own were not. This works fine for top level domains like .com, .org and .mil but not for countries where the domain registrar had rules about second levels, like the uk domain (uk domains must have a second level like

There is no way to generate the correct set of top level domains with an algorithm, so a database is required; it is called the Public Suffix List (PSL). This database is a simple text formatted list with wildcard and inversion syntax and is at time of writing around 180Kb of text including comments, which compresses down to 60Kb or so with deflate.

A few years ago, with ICANN allowing the great expansion of top level domains, the existing NetSurf supercookie handling was found to be wanting and I decided to implement a solution using the PSL. At this point in time the database was only 100Kb source or 40Kb compressed.

I started by looking at existing libraries. In fact only the regdom library was adequate, but it used 150Kb of heap to load the pre-processed list. This would have had the drawback of increasing NetSurf heap usage significantly (we still have users on 8Mb systems). Because of this and the need to run a PHP script to generate the pre-[...]
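The check a browser needs here can be illustrated with a toy version: given a (tiny, hand-picked) set of public suffixes, refuse any cookie domain that is itself a public suffix. This is a sketch of the idea only, not NetSurf's C code; a real implementation parses the full PSL, including its wildcard and exception rules:

```python
# Toy excerpt of Public Suffix List entries. A cookie must never be set
# for a domain that is itself a public suffix -- that would be a
# supercookie covering every site under that suffix.
PUBLIC_SUFFIXES = {"com", "org", "mil", "uk", "co.uk"}

def cookie_domain_allowed(domain):
    # Reject cookie domains that are public suffixes; normal registered
    # domains (one label below a suffix) are fine.
    return domain.lower().lstrip(".") not in PUBLIC_SUFFIXES
```

So a cookie for example.co.uk is accepted, while one for co.uk or com is refused.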

Gunnar Wolf: Proposing a GR to repeal the 2005 vote for declassification of the debian-private mailing list

Tue, 20 Sep 2016 16:03:02 +0000

For the non-Debian people among my readers: the following post presents bits of the decision-taking process in the Debian project. You might find it interesting, or terribly dull and boring :-) Proceed at your own risk.

My reason for posting this entry is to get more people to read the accompanying options for my proposed General Resolution (GR), and have as full a ballot as possible.

Almost three weeks ago, I sent a mail to the debian-vote mailing list. I'm quoting it here in full:

Some weeks ago, Nicolas Dandrimont proposed a GR for declassifying debian-private[1]. In the course of the following discussion, he accepted[2] Don Armstrong's amendment[3], which intended to clarify the meaning and implementation regarding the work of our delegates and the powers of the DPL, and recognizing the historical value that could lie within said list.

[1] [2] [3]

In the process of the discussion, several people objected to the amended wording, particularly to the fact that "sufficient time and opportunity" might not be sufficiently bound and defined. I am, as some of its initial seconders, a strong believer in Nicolas' original proposal; repealing a GR that was never implemented in the slightest way basically means the Debian project should stop lying, both to itself and to the whole free software community within which it exists, about something that would be nice but is effectively not implementable. While Don's proposal is a good contribution, given that in the aforementioned GR "Further Discussion" won 134 votes against 118, I hereby propose the following General Resolution:

=== BEGIN GR TEXT ===

Title: Acknowledge that the debian-private list will remain private.

1. The 2005 General Resolution titled "Declassification of debian-private list archives" is repealed.
2. In keeping with paragraph 3 of the Debian Social Contract, Debian Developers are strongly encouraged to use the debian-private mailing list only for discussions that should not be disclosed.

=== END GR TEXT ===

Thanks for your consideration,

-- Gunnar Wolf (with thanks to Nicolas for writing the entirety of the GR text ;-) )

Yesterday, I spoke with the Debian project secretary, who confirmed my proposal has reached enough seconds (that is, we have reached five people wanting the vote to happen), so I could now formally do a call for votes. Thing is, there are two other proposals I feel are interesting and that should be part of the same ballot; both address part of the reasons why the GR initially proposed by Nicolas didn't succeed:

  • Ian Jackson's "Acknowledge difficulty of declassifying debian-private" makes explicit the role of the listmasters and allows for a formal declassification process to take place, as long as the privacy guarantees we had after the 2005 GR are not diminished.
  • Iain Lane's reply to Ian is not yet formally proposed, but spells out that no declassification should ever occur unless all of the involved authors have explicitly consented.

So, once more (and finally!), why am I posting this?

  • To invite Iain to formally propose his text as an option to mine
  • To invite more DDs to second the available options
  • To publ[...]

Michal Čihař: wlc 0.6

Tue, 20 Sep 2016 16:00:16 +0000


wlc 0.6, a command line utility for Weblate, has just been released. There have been some minor fixes, but the most important news is that Windows and OS X are now supported platforms as well.

Full list of changes:

  • Fixed error when invoked without command.
  • Tested on Windows and OS X (in addition to Linux).

wlc is built on the API introduced in Weblate 2.6, which is still in development. Several wlc commands will not work properly when executed against Weblate 2.6; the first fully supported version is 2.7 (it is now running on both the demo and hosting servers). You can find usage examples in the wlc documentation.


Reproducible builds folks: Reproducible Builds: week 73 in Stretch cycle

Tue, 20 Sep 2016 12:58:10 +0000

What happened in the Reproducible Builds effort between Sunday September 11 and Saturday September 17 2016:

Toolchain developments

Ximin Luo started a new series of tools called (for now) debrepatch, to make it easier to automate checks that our old patches to Debian packages still apply to newer versions of those packages, and still make these reproducible. Ximin Luo updated one of our few remaining patches for dpkg in #787980 to make it cleaner and more minimal.

The following tools were fixed to produce reproducible output:

  • naturaldocs/1.51-2 by Petter Reinholdtsen, original patch by Chris Lamb.

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible - in our current test setup - after being fixed:

  • elog/3.1.2-1-1 by Roger Kalt, original patch by Reiner Herrmann.
  • eyed3/0.6.18-3 by Petter Reinholdtsen, original patch by Chris Lamb.
  • frog/0.13.5-1 by Maarten van Gompel, original patch by Chris Lamb.
  • gtranslator/2.91.7-3 by Andreas Henriksson, original patch by Reiner Herrmann.
  • sozi/12.05-1.1 by Daniel Kahn Gillmor, original patch by Chris Lamb.

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

  • evince/3.21.92-1 by Michael Biebl.
  • gnome-control-center/1:3.21.92-2 by Raphaël Hertzog.
  • libipathverbs/1.3-2 by Ana Beatriz Guerrero Lopez.
  • pagekite/0.5.8e-2 by Petter Reinholdtsen.

The following 3 packages were not changed, but have become reproducible due to changes in their build-dependencies: jaxrs-api, python-lua, zope-mysqlda.

Some uploads have addressed some reproducibility issues, but not all of them:

  • eurephia/1.1.0-6 by Alberto Gonzalez Iniesta, original patch by Chris Lamb.
  • fdroidserver/0.7.0-1 by Hans-Christoph Steiner, original patch by Chris Lamb.
  • mini-buildd/1.0.18 by Stephan Sürken.
  • nbc/1.2.1.r4+dfsg-3 by Petter Reinholdtsen, original patch by Chris Lamb.
  • ncurses/6.0+20160910-1 by Sven Joachim, #818067 by Niels Thykier.
  • python-kinterbasdb/3.3.0-4 by Santiago Vila, original patch by Chris Lamb.
  • snapper/0.3.3-1 by Hideki Yamane, original patch by Sascha Steinbiss.

Patches submitted that have not made their way to the archive yet:

  • #838188 filed against ocaml by Johannes Schauer.

Reviews of unreproducible packages

462 package reviews have been added, 524 have been updated and 166 have been removed in this week, adding to our knowledge about identified issues.

25 issue types have been updated:

  • Added a new annotation for issues called "fix-deterministic" to help us update package reviews more easily. This indicates whether we expect that an issue would always happen on Jenkins; i.e. if there is a successful build, then we know the issue is fixed for that package and can update our notes.
  • Added random_order_in_sisu_javax_inject_named and too_much_input_for_diff.
  • Removed timestamps_in_manpages_generated_by_ronn.
  • Updated timestamps_in_allegro_dat_files.

Additionally, 21 issues were marked with "fix-deterministic".

Weekly QA work

FTBFS bugs have been reported by: Chris Lamb (10), Filip Pytloun (1), Santiago Vila (1).

diffoscope development

A new version of diffoscope, 60, was uploaded to unstable by Mattia Rizzolo. It included con[...]
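Many of the fixes tracked above remove sources of nondeterminism such as embedded timestamps. As an illustration of the general technique (not code from any of the packages mentioned), here is a sketch of building a tar archive that is byte-identical across runs, honouring the SOURCE_DATE_EPOCH convention the project promotes:

```python
import io
import os
import tarfile

def deterministic_tar(paths, mtime=None):
    # Create a tar archive whose bytes do not depend on when or where it
    # was built -- the core idea behind many reproducibility fixes.
    if mtime is None:
        # SOURCE_DATE_EPOCH is the convention for a fixed build timestamp.
        mtime = int(os.environ.get("SOURCE_DATE_EPOCH", "0"))
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for path in sorted(paths):          # fixed member ordering
            info = tar.gettarinfo(path)
            info.mtime = mtime              # fixed timestamp
            info.uid = info.gid = 0         # fixed ownership
            info.uname = info.gname = ""
            with open(path, "rb") as f:
                tar.addfile(info, f)
    return buf.getvalue()
```

Two builds of the same inputs then produce identical archives, which is what the reproducibility test setup checks for at the package level.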

Mike Gabriel: Rocrail changed License to some dodgy non-free non-License

Mon, 19 Sep 2016 09:51:44 +0000

The Background Story

A year ago, or so, I took some time to search the internet for Free Software that can be used for controlling model railways via a computer. I was happy to find Rocrail [1], one of only a few applications available on the market. And even more, I was very happy when I saw that it had been licensed under a Free Software license: GPL-3(+).

A month ago, or so, I collected my old Märklin (Digital) stuff from my parents' place and started looking into it again after 15+ years, together with my little son. Some weeks ago, I remembered Rocrail and thought... hey, this software was GPLed code and absolutely suitable for uploading to Debian and/or Ubuntu. I searched for the Rocrail source code and figured out that it got hidden from the web some time in 2015 and that the license obviously had been changed to some non-free license (I could not figure out what license, though).

This made me very sad! I thought I had found a piece of software that might be interesting for testing with my model railway. Whenever I stumble over some nice piece of Free Software that I plan to use (or even only play with), I upload it to Debian as one of the first steps. However, I try hard to stay away from non-free software, so Rocrail became a no-option for me back in 2015. I should have moved on from here...

Instead...

Proactively, I signed up with the Rocrail forum and asked the author(s) if they saw any chance of re-licensing the Rocrail code under the GPL (or any other FLOSS license) again [2]. When I encounter situations like this, I normally offer my expertise and help with such licensing stuff for free. My impression by this point already was that something strange must have happened in the past, such that the software developers chose the GPL and later on stepped back from that decision and from then on have been hiding the source code from the web entirely.

Going deeper...

The Rocrail project's wiki states that anyone can request GitBlit access via the forum and obtain the source code via Git for local build purposes only. Nice! So, I asked for access to the project's Git repository, which I was granted. Thanks for that.

Trivial Source Code Investigation...

So far so good. I investigated the source code (well, only the license meta stuff shipped with the source code...) and found that the main COPYING files (found at various locations in the source tree, containing a full version of the GPL-3 license) had been replaced by this text:

Copyright (c) 2002 Robert Jan Versluis, All rights reserved.
Commercial usage needs permission.

The replacement happened with these Git commits:

commit cfee35f3ae5973e97a3d4b178f20eb69a916203e
Author: Rob Versluis
Date: Fri Jul 17 16:09:45 2015 +0200
    update copyrights

commit df399d9d4be05799d4ae27984746c8b600adb20b
Author: Rob Versluis
Date: Wed Jul 8 14:49:12 2015 +0200
    update licence

commit 0daffa4b8d3dc13df95ef47e0bdd52e1c2c58443
Author: Rob Versluis
Date: Wed Jul 8 10:17:13 2015 +0200
    update

Getting in touch again, still being really interested and wanting to help...

As I consider such a non-licens[...]

Gregor Herrmann: RC bugs 2016/37

Sun, 18 Sep 2016 21:22:38 +0000


we're not running out of (perl-related) RC bugs. here's my list for this week:

  • #811672 – qt4-perl: "FTBFS with GCC 6: cannot convert x to y"
    add patch from upstream bug tracker, upload to DELAYED/5
  • #815433 – libdata-messagepack-stream-perl: "libdata-messagepack-stream-perl: FTBFS with new msgpack-c library"
    upload new upstream version (pkg-perl)
  • #834249 – src:openbabel: "openbabel: FTBFS in testing"
    propose a patch (build with -std=gnu++98), later upload to DELAYED/2
  • #834960 – src:libdaemon-generic-perl: "libdaemon-generic-perl: FTBFS too much often (failing tests)"
    add patch from ntyni (pkg-perl)
  • #835075 – src:libmail-gnupg-perl: "libmail-gnupg-perl: FTBFS: Failed 1/10 test programs. 0/4 subtests failed."
    upload with patch from dkg (pkg-perl)
  • #835412 – src:libzmq-ffi-perl: "libzmq-ffi-perl: FTBFS too much often, makes sbuild to hang"
    add patch from upstream git (pkg-perl)
  • #835731 – src:libdbix-class-perl: "libdbix-class-perl: FTBFS: Tests failures"
    cherry-pick patch from upstream git (pkg-perl)
  • #837055 – src:fftw: "fftw: FTBFS due to failing to execute (. removed from @INC in perl)"
    add patch to call require with "./", upload to DELAYED/2, rescheduled to 0-day on maintainer's request
  • #837221 – src:metacity-themes: "metacity-themes: FTBFS: Can't locate debian/ in @INC"
    call helper scripts with "perl -I." in debian/rules, QA upload
  • #837242 – src:jwchat: "jwchat: FTBFS: Can't locate scripts/ in @INC"
    add patch to call require with "./", upload to DELAYED/2
  • #837264 – src:libsys-info-base-perl: "libsys-info-base-perl: FTBFS: Couldn't do SPEC: No such file or directory at builder/lib/ line 42."
    upload with patch from ntyni (pkg-perl)
  • #837284 – src:libimage-info-perl: "libimage-info-perl: FTBFS: Can't locate inc/Module/ in @INC"
    call perl with -I. in debian/rules, upload to DELAYED/2

Paul Tagliamonte: DNSync

Sun, 18 Sep 2016 21:00:08 +0000


While setting up my new network at my house, I figured I’d do things right and set up an IPSec VPN (and a few other fancy bits). One thing that became annoying when I wasn’t on my LAN was I’d have to fiddle with the DNS Resolver to resolve names of machines on the LAN.

Since I hate fiddling with options when I need things to just work, the easiest way out was to make the DNS names actually resolve on the public internet.

A day or two later, some Golang glue, and AWS Route 53, and I wrote code that would sit on my dnsmasq.leases, watch inotify for IN_MODIFY signals, and sync the records to AWS Route 53.
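The dnsmasq side of this is simple: dnsmasq.leases is a whitespace-separated text file. Here is a sketch of extracting the hostname-to-IP records that a tool like DNSync would push to Route 53 (my own illustrative Python helper, not DNSync's actual Go code):

```python
def parse_leases(text):
    # dnsmasq.leases lines look like:
    #   <expiry> <mac> <ip> <hostname> <client-id>
    # hostname is "*" when the client did not send one. Return the
    # hostname -> IP mapping worth publishing as DNS records.
    records = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[3] != "*":
            records[fields[3]] = fields[2]
    return records
```

A DNSync-style daemon would re-run this whenever inotify reports an IN_MODIFY on the leases file, then upsert the resulting A records via the Route 53 API.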

I pushed it up to my GitHub as DNSync.

PRs welcome!

Eriberto Mota: Statistics to Choose a Debian Package to Help

Sun, 18 Sep 2016 03:30:15 +0000


In the last week I played a bit with UDD (Ultimate Debian Database). After some experiments I did a script to generate a daily report about source packages in Debian. This report is useful to choose a package that needs help.

The daily report has six sections:

  • Sources in Debian Sid (including orphan)
  • Sources in Debian Sid (only Orphan, RFA or RFH)
  • Top 200 sources in Debian Sid with outdated Standards-Version
  • Top 200 sources in Debian Sid with NMUs
  • Top 200 sources in Debian Sid with BUGs
  • Top 200 sources in Debian Sid with RC BUGs

The first section has several important data points about all source packages in Debian, ordered by last upload to Sid. It is very useful for spotting packages without revisions for a long time. Other interesting data about each package are the Standards-Version, packaging format and number of NMUs, among others. Believe it or not, there are packages whose last upload to Sid was in 2003! (seven packages)

With the report, you can choose an ideal package for QA uploads, NMUs or adoption.

Well, if you like to review packages, this report is for you: Enjoy!


Norbert Preining: Fixing packages for broken Gtk3

Sun, 18 Sep 2016 02:31:58 +0000


As mentioned on sunweaver’s blog (Debian’s GTK-3+ v3.21 breaks Debian MATE 1.14), Gtk3 is breaking apps all around. And not only MATE: probably many other apps are broken too. In particular, Nemo (the file manager of the Cinnamon desktop) has redraw issues (bug 836908) and regular crashes (bug 835043).


I have prepared packages for mate-terminal and nemo built from the most recent git sources. The new mate-terminal now no longer crashes on profile changes (bug 835188), and the nemo redraw issues are gone. Unfortunately, the other nemo crashes are still there. The apt-gettable repository with sources and amd64 binaries is here:

deb gtk3fixes main
deb-src gtk3fixes main

and are signed with my usual GPG key.

Last but not least, I quote from sunweaver’s blog:


  1. Isn’t GTK-3+ a shared library? This one was rhetorical… Yes, it is.
  2. One that breaks other applications with every point release? Well, unfortunately, as experience over the past years has shown: Yes, this has happened several times, so far — and it happened again.
  3. Why is it that GTK-3+ uploads appear in Debian without going through a proper transition? This question is not rhetorical. If someone has an answer, please enlighten me.

(end of quote)

My personal answer to this is: Gtk is strongly tied to Gnome, Gnome is strongly tied to SystemD, and all of this is pushed onto Debian users in the usual way of “we don’t care about breaking non-XXX apps” (for XXX in Gnome, SystemD). It is very sad to see this recklessness spreading further and further across Debian.

I finish with another quote from sunweaver’s blog:

“already scared of the 3.22 GTK+ release, luckily the last development release of the GTK+ 3-series”

Jonas Meurer: apache rewritemap querystring

Sat, 17 Sep 2016 15:52:59 +0000

Apache2: Rewrite REQUEST_URI based on a bulk list of GET parameters in QUERY_STRING

Recently I searched for a solution to rewrite a REQUEST_URI based on GET parameters in QUERY_STRING. To make it even more complicated, I had a list of ~2000 parameters that had to be rewritten like the following: if %{QUERY_STRING} starts with one of the parameters, rewrite %{REQUEST_URI} from /new/ to /old/. Honestly, it took me several hours to find a solution that was satisfying and scaled well. Hopefully, this post will save time for others with the need for a similar solution.

Research and first attempt: RewriteCond %{QUERY_STRING} ...

After reading through some documentation, particularly "Manipulating the Query String", the following ideas came to my mind at first:

  RewriteCond %{REQUEST_URI} ^/new/
  RewriteCond %{QUERY_STRING} ^(param1)(.*)$ [OR]
  RewriteCond %{QUERY_STRING} ^(param2)(.*)$ [OR]
  ...
  RewriteCond %{QUERY_STRING} ^(paramN)(.*)$
  RewriteRule /new/ /old/?%1%2 [R,L]

or, instead of a separate RewriteCond for each parameter:

  RewriteCond %{QUERY_STRING} ^(param1|param2|...|paramN)(.*)$

There has to be something smarter ...

With ~2000 parameters to look up, neither of the solutions seemed particularly smart. Both scale really badly, and it's probably rather heavy for Apache to check ~2000 conditions for every ^/new/ request. Instead I was searching for a way to look up a string in a compiled list of strings. RewriteMap seemed like it might be what I was searching for. I read the Apache2 RewriteMap documentation here and here and finally found a solution that worked as expected, with one limitation. But read on ...

The solution: RewriteMap and RewriteCond ${mapfile:%{QUERY_STRING}} ...

Finally, the solution was to use a RewriteMap with all parameters that shall be rewritten, and to check the parameters of incoming requests against this map within a RewriteCond. If the parameter matches, the simple RewriteRule applies.

For the impatient, here's the rewrite magic from my VirtualHost configuration:

  RewriteEngine On
  RewriteMap RewriteParams "dbm:/tmp/"
  RewriteCond %{REQUEST_URI} ^/new/
  RewriteCond ${RewriteParams:%{QUERY_STRING}|NOT_FOUND} !=NOT_FOUND
  RewriteRule ^/new/ /old/ [R,L]

A more detailed description of the solution

First, I created a RewriteMap at /tmp/rewrite-params.txt with all parameters to be rewritten. A RewriteMap requires two fields per line, one with the origin and the other with the replacement part. Since I use the RewriteMap merely for checking the condition, not for real string replacement, the second field doesn't matter to me. I ended up putting my parameters in both fields, but you could choose any random value for the second field:

  /tmp/rewrite-params.txt:
  param1 param1
  param2 param2
  ...
  paramN paramN

Then I created a DBM hash map file from that plain-text map file, as DBM maps are indexed while TXT maps are not. In other words: with big maps, DBM is a huge performance boost:

  httxt2dbm -i /tmp/rewrite-params.txt -o /tmp/

Now, let's go through the VirtualHost confi[...]
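A two-field map file like the one described is trivial to generate from a plain list of parameters. A hedged sketch in Python (the parameter names are placeholders, not the real ~2000-entry list):

```python
# Build a RewriteMap text file from a list of parameters.
# Each line needs two fields; since the map is only used as an
# existence check in a RewriteCond, the second field just repeats
# the first.
params = ["param1", "param2", "paramN"]  # hypothetical parameter names

map_lines = [f"{p} {p}" for p in params]
map_text = "\n".join(map_lines) + "\n"
print(map_text)
```

The resulting text file would then be compiled into an indexed DBM map with httxt2dbm, as described in the post.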

Norbert Preining: Android 7.0 Nougat – Root – PokemonGo

Sat, 17 Sep 2016 04:59:04 +0000

Since my switch to Android, my Nexus 6p has been rooted, and I have happily fixed the Android (<7) font errors with Japanese fonts in an English environment (see this post). The recently released Android 7 Nougat finally fixes this problem, so it was high time to update. In addition, a recent update to Pokemon Go excluded rooted devices, so I was searching for a solution that allows me to: update to Nougat, keep root, and run Pokemon Go (as well as some bank security apps etc). After some playing around, here are the steps I took:

Installation of necessary components

Warning: The following is for the Nexus 6p device; you need different image files and TWRP recovery for other devices.

Flash Nougat firmware images

Get them from the Google Android Nexus images web site. Unpacking the zip, and then the zip included in it, one gets a lot of img files:

  unzip
  cd angler-nrd90u/
  unzip

As I don’t want my user partition to get flashed, I did not use the included flash script, but did it manually:

  fastboot flash bootloader bootloader-angler-angler-03.58.img
  fastboot reboot-bootloader
  sleep 5
  fastboot flash radio radio-angler-angler-03.72.img
  fastboot reboot-bootloader
  sleep 5
  fastboot erase system
  fastboot flash system system.img
  fastboot erase boot
  fastboot flash boot boot.img
  fastboot erase cache
  fastboot flash cache cache.img
  fastboot erase vendor
  fastboot flash vendor vendor.img
  fastboot erase recovery
  fastboot flash recovery recovery.img
  fastboot reboot

After that, boot into the normal system and let it do all the necessary upgrades. Once this is done, let us prepare for systemless root and possible hiding of it.

Get the necessary files

Get Magisk, SuperSU-magisk, as well as the Magisk-Manager.apk from this forum thread (direct links as of 2016/9:,, Magisk-Manager.apk). Transfer these files to your device – I am using an external USB stick that can be plugged into the device, but you can also copy them via your computer or via a cloud service. We also need a custom recovery image; I am using TWRP.

I used the version 3.0.2-0 of TWRP I already had available, but that version didn’t manage to decrypt the file system and hung. One needs at least version 3.0.2-2 from the TWRP web site.

Install latest TWRP recovery

Reboot into the boot-loader, then use fastboot to flash TWRP:

  fastboot erase recovery
  fastboot flash recovery twrp-3.0.2-2-angler.img
  fastboot reboot-bootloader

After that, select Recovery with the up/down buttons and start TWRP. You will be asked for your PIN if you have one set.

Install

Select “Install” in TWRP, select the file, and see your device being prepared for systemless root.

Install SuperSU, Magisk version

Again, boot into TWRP and use the install tool to install it. After reboot you should have a SuperSU binary running.

Install the Magisk Manager

From your device, browse to the .apk and install it.

How to run safety net programs

Those programs that check for safety functions (Pokemon Go, Android[...]

Steinar H. Gunderson: BBR opensourced

Fri, 16 Sep 2016 22:16:00 +0000


This is pretty big stuff for anyone who cares about TCP. Huge congrats to the team at Google.

Dirk Eddelbuettel: anytime 0.0.2: Added functionality

Fri, 16 Sep 2016 02:28:00 +0000


anytime arrived on CRAN via release 0.0.1 a good two days ago. anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects.

This new release 0.0.2 adds two new functions to gather conversion formats -- and to set new ones. It also fixes a minor build bug, and robustifies a conversion which was seen to be not quite right under some time zones.

The NEWS file summarises the release:

Changes in anytime version 0.0.2 (2016-09-15)

  • Refactored to use a simple class wrapped around two vectors with (string) formats and locales; this allows for adding formats; also adds an accessor for formats (#4, closes #1 and #3).

  • New functions addFormats() and getFormats().

  • Relaxed one test which showed problems on some platforms.

  • Added an as.POSIXlt() step to anydate() ensuring all POSIXlt components are set (#6, fixing #5).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Craig Sanders: Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24

Thu, 15 Sep 2016 16:24:09 +0000

It’s Alive!

The day before yesterday (at Infoxchange, a non-profit whose mission is “Technology for Social Justice”, where I do a few days/week of volunteer systems & dev work), I had to build a docker container based on an ancient wheezy image. It built fine, and I got on with working with it.

Yesterday, I tried to get it built on my docker machine here at home so I could keep working on it, but the damn thing just wouldn’t build. At first I thought it was something to do with networking, because running curl in the Dockerfile was the point where it was crashing – but it turned out that many programs would segfault – e.g. it couldn’t run bash, but sh (dash) was OK. I also tried running a squeeze image, and that had the same problem. A jessie image worked fine (but the important legacy app we need wheezy for doesn’t yet run in jessie).

After a fair bit of investigation, it turned out that the only significant difference between my workstation at IX and my docker machine at home was that I’d upgraded my home machines to libc6 2.24-2 a few days ago, whereas my IX workstation (also running sid) was still on libc6 2.23.

Anyway, the point of all this is that if anyone else needs to run wheezy containers on a docker host running libc6 2.24 (which will be quite common soon enough), you have to upgrade libc6 and related packages inside the container (and any -dev packages, including libc6-dev, you might need in your container that are dependent on the specific version of libc6). In my case I was using docker, but I expect that other container systems will have the same problem and the same solution: install libc6 from jessie into wheezy. Also, I haven’t actually tested installing jessie’s libc6 on squeeze – if it works, I expect it’ll require a lot of extra stuff to be installed too.

I built a new frankenwheezy image that had libc6 2.19-18+deb8u4 from jessie. To build it, I had to use a system which hadn’t already been upgraded to libc6 2.24.
I had already upgraded libc6 on all the machines on my home network. Fortunately, I still had the old VM I created when I first started experimenting with docker – crazily, it was a VM with two ZFS ZVOLs, a small /dev/vda OS/boot disk, and a larger /dev/vdb mounted as /var/lib/docker. The crazy part is that /dev/vdb was formatted as btrfs (mostly because it seemed a much better choice than aufs). Disk performance wasn’t great, but it was OK… and it worked. Docker has native support for ZFS, so that’s what I’m using on my real hardware.

I started with the base wheezy image we’re using and created a Dockerfile etc. to update it. First, I added deb lines to /etc/apt/sources.list for my local jessie and jessie-updates mirror, then I added the following line to /etc/apt/apt.conf:

  APT::Default-Release "wheezy";

Without that, any other apt-get installs in the Dockerfile will install from jessie rather than wheezy, which will almost certainly break the legacy app. I forgot to do it the first time, [...]

Mike Gabriel: [Arctica Project] Release of nx-libs (version

Wed, 14 Sep 2016 14:20:19 +0000

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one. NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, maintenance has been continued by a versatile group of developers. The work on NX (v3) continues under the project name "nx-libs".

Release Announcement

On Tuesday, Sep 13th, version of nx-libs was released [1]. This release brings some code cleanups regarding displayed copyright information, and an improvement when reconnecting to an already running session from an X11 server with a color depth setup that is different from that of the X11 server where the NX/X11 session was originally created. Furthermore, an issue reported to the X2Go developers has been fixed that caused problems for Windows clients on copy+paste actions between the NX/X11 session and the underlying MS Windows system. For details see X2Go BTS, Bug #952 [3].

Change Log

A list of recent changes (since can be obtained from here.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

  Debian: deb {jessie,stretch,sid} main
  Ubuntu: deb {trusty,xenial} main

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

  wget -qO - | sudo apt-key add -

The nx-libs software project brings you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component).

References

[1] [2] [3] [...]

John Goerzen: Two Boys, An Airplane, Plus Hundreds of Old Computers

Tue, 13 Sep 2016 17:03:42 +0000

“Was there anything you didn’t like about our trip?”

Jacob’s answer: “That we had to leave so soon!”

That’s always a good sign.

When I first heard about the Vintage Computer Festival Midwest, I almost immediately got the notion that I wanted to go. Besides the TRS-80 CoCo II up in my attic, I also have fond memories of an old IBM PC with CGA monitor, a 25MHz 486, an Alpha also in my attic, and a lot of other computers along the way. I didn’t really think my boys would be interested. But I mentioned it to them, and they just lit up. They remembered the Youtube videos I’d shown them of old line printers and punch card readers, and thought it would be great fun. I thought it could be a great educational experience for them too — and it was.

It also turned into a trip that combined being a proud dad with so many of my other interests. Quite a fun time.

(Jacob modeling his new t-shirt)

Captain Jacob

Chicago being not all that close to Kansas, I planned to fly us there. If you’re flying yourself, solid flight planning is always important. I had already planned out my flight using electronic tools, but I always carry paper maps with me in the cockpit for backup. I got them out and the boys and I planned out the flight the old-fashioned way.

Here’s Oliver using a scale ruler (with markings for miles corresponding to the scale of the map) and Jacob doing the calculations for us. We measured the entire route and came to within one mile of the computer’s calculation for each segment — those boys are precise! We figured out how much fuel we’d use, where we’d make fuel stops, etc.

The day of our flight, we made it as far as Davenport, Iowa when a chance of bad weather en route to Chicago convinced me to land there and drive the rest of the way. The boys saw that as part of the exciting adventure!

Jacob is always interested in maps, and had kept wanting to use my map whenever we flew. So I dug an old Android tablet out of the attic, put Avare on it (which has aviation maps), and let him use that. He was always checking it while flying, sometimes saying this over his headset: “DING. Attention all passengers, this is Captain Jacob speaking. We are now 45 miles from St. Joseph. Our altitude is 6514 feet. Our speed is 115 knots. We will be on the ground shortly. Thank you. DING”

Here he is at the Davenport airport, still busy looking at his maps:

Every little airport we stopped at featured adults smiling at the boys. People enjoyed watching a dad and his kids flying somewhere together.

Oliver kept busy too. He loves to help me on my pre-flight inspections. He will report every little thing to me – a scratch, a fleck of paint missing on a wheel cover, etc. He takes it seriously. Both boys love to help get the plane ready or put it away.

The Computers

Jacob quickly gravitated towards a few interesting things. He sat for about half an hour watching this ol[...]

Dirk Eddelbuettel: anytime 0.0.1: New package for 'anything' to POSIXct (or Date)

Tue, 13 Sep 2016 12:26:00 +0000


anytime just arrived on CRAN as a very first release 0.0.1.

So why (yet another) package dealing with dates and times? R excels at computing with dates and times. By using a typed representation we get not only all that functionality but also the added safety stemming from proper representation.

But there is a small nuisance cost: how often have we each told as.POSIXct() that the origin is the epoch '1970-01-01'? Do we have to say it a million more times? Similarly, when parsing dates that are in some recognisable form of the YYYYMMDD format, do we really have to manually convert from integer or numeric or factor or ordered to character first? With one of several common separators and/or month forms (YYYY-MM-DD, YYYY/MM/DD, YYYYMMDD, YYYY-mon-DD and so on, with or without times, with or without textual months), do we really need a format string?

anytime() aims to help as a small general-purpose converter returning a proper POSIXct (or Date) object no matter the input (provided it was somewhat parseable), relying on Boost date_time for the efficient, performant conversion.
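anytime itself is an R package built on Boost date_time, but the underlying idea (try a list of candidate formats until one fits, coercing non-character input first) can be sketched in Python as a rough illustration:

```python
from datetime import datetime

# Candidate formats, tried in order -- only a small subset of what a
# general-purpose converter like anytime() actually covers.
FORMATS = ["%Y-%m-%d", "%Y/%m/%d", "%Y%m%d", "%Y-%b-%d"]

def parse_anything(value):
    """Convert int/str date representations to a datetime without a
    caller-supplied format string."""
    text = str(value)  # accept integers like 20160913 as well
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError(f"unparseable date: {value!r}")

print(parse_anything(20160913))
print(parse_anything("2016-Sep-13"))
```

The R package does this far more efficiently in C++, but the control flow is the same: no origin, no format string, just a sensible answer.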

See some examples on the anytime page, on GitHub, or in the screenshot below. And then just give it a try!


For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, August 2016

Tue, 13 Sep 2016 08:50:58 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 140 work hours were dispatched among 10 paid contributors. Their reports are available:

  • Balint Reczey did 9.5 hours (out of 14.75 hours allocated + 2 remaining, thus keeping 7.25 extra hours for September).
  • Ben Hutchings did 14 hours (out of 14.75 hours allocated + 0.7 remaining, keeping 1.45 extra hours for September).
  • Brian May did 14.75 hours.
  • Chris Lamb did 15 hours (out of 14.75 hours, thus keeping 0.45 hours for next month).
  • Emilio Pozuelo Monfort did 13.5 hours (out of 14.75 hours allocated + 0.5 remaining, thus keeping 2.95 extra hours for September).
  • Guido Günther did 9 hours.
  • Markus Koschany did 14.75 hours.
  • Ola Lundqvist did 15.2 hours (out of 14.5 hours assigned + 0.7 remaining).
  • Roberto C. Sanchez did 11 hours (out of 14.75h allocated, thus keeping 3.75 extra hours for September).
  • Thorsten Alteholz did 14.75 hours.

Evolution of the situation

The number of sponsored hours rose to 167 hours per month thanks to UR Communications BV joining as gold sponsor (funding 1 day of work per month)! In practice, we never distributed this amount of work per month because some sponsors did not renew in time and some of them might not even be able to renew at all.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file 29. It’s a small bump compared to last month, but almost all issues are assigned to someone.

Thanks to our sponsors

New sponsors are in bold.
Platinum sponsors:

  • TOSHIBA (for 11 months)
  • GitHub

Gold sponsors:

  • The Positive Internet (for 27 months)
  • Blablacar (for 26 months)
  • Linode LLC (for 16 months)
  • Babiel GmbH (for 5 months)
  • Plat’Home (for 4 months)
  • UR Communications BV

Silver sponsors:

  • Domeneshop AS (for 26 months)
  • Université Lille 3 (for 26 months)
  • Trollweb Solutions (for 24 months)
  • Nantes Métropole (for 20 months)
  • University of Luxembourg (for 18 months)
  • Dalenys (for 16 months)
  • Univention GmbH (for 12 months)
  • Université Jean Monnet de St Etienne (for 12 months)
  • Sonus Networks (for 6 months)

Bronze sponsors:

  • David Ayers – IntarS Austria (for 27 months)
  • Evolix (for 27 months)
  • Offensive Security (for 27 months)
  • , a.s. (for 27 months)
  • Freeside Internet Service (for 26 months)
  • MyTux (for 26 months)
  • Linuxhotel GmbH (for 24 months)
  • Intevation GmbH (for 23 months)
  • Daevel SARL (for 22 months)
  • Bitfolk LTD (for 21 months)
  • Megaspace Internet Services GmbH (for 21 months)
  • Greenbone Networks GmbH (for 20 months)
  • NUMLOG (for 20 months)
  • WinGo AG (for 19 months)
  • Ecole Centrale de Nantes – LHEEA (for 16 months)
  • Sig-I/O (for 13 months)
  • Entr’ouvert (for 11 months)
  • Adfinis SyGroup AG (for 8 months)
  • Laboratoire LEGI – UMR 5519 / CNRS (for 3 months)
  • Quarantainenet BV (for 3 months)
  • GNI MEDIA

Joey Hess: PoW bucket bloom: throttling anonymous clients with proof of work, token buckets, and bloom filters

Tue, 13 Sep 2016 05:14:47 +0000

An interesting side problem in keysafe's design is that keysafe servers, which run as tor hidden services, allow anonymous data storage and retrieval. While each object is limited to 64 kb, what's to stop someone from making many requests and using it to store some big files? The last thing I want is a git-annex keysafe special remote. ;-)

I've done a mash-up of three technologies to solve this, one that I think is perhaps somewhat novel. Although it could be entirely old hat, or even entirely broken. (All I know so far is that the code compiles.) It uses proof of work, token buckets, and bloom filters.

Each request can have a proof of work attached to it, which is just a value that, when hashed with a salt, starts with a certain number of 0's. The salt includes the ID of the object being stored or retrieved.

The server maintains a list of token buckets. The first can be accessed without any proof of work, and subsequent ones need progressively more proof of work to be accessed. Clients will start by making a request without a PoW, and that will often succeed, but when the first token bucket is being drained too fast by other load, the server will reject the request and demand enough proof of work to allow access to the second token bucket. And so on down the line if necessary. At worst, a client may have to do 8-16 minutes of work to access a keysafe server that is under heavy load, which would not be ideal, but is acceptable for keysafe since it's not run very often.

If the client provides a PoW good enough to allow accessing the last token bucket, the request will be accepted even when that bucket is drained. The client has done plenty of work at this point, so it would be annoying to reject it. To prevent an attacker who is willing to burn CPU from abusing this loophole to flood the server with object stores, the server delays until the last token bucket fills back up.
So far so simple really, but this has a big problem: what prevents a proof of work from being reused? An attacker could generate a single PoW good enough to access all the token buckets, flood the server with requests using it, and so force everyone else to do excessive amounts of work to use the server.

Guarding against that DOS is where the bloom filters come in. The server generates a random request ID, which has to be included in the PoW salt and sent back by the client along with the PoW. The request ID is added to a bloom filter, which the server can use to check if the client is providing a request ID that it knows about. And a second bloom filter is used to check if a request ID has been used by a client before, which prevents the DOS.

Of course, when dealing with bloom filters, it's important to consider what happens when there's a rare false positive match. This is not a problem with the first bloom filter, because [...]
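The proof-of-work piece can be sketched in a few lines of Python. Keysafe's actual hash function, salt construction, and difficulty parameters differ, so treat this purely as an illustration of "hash must start with enough zeros":

```python
import hashlib
import itertools

DIFFICULTY = 2  # leading zero hex digits; a real server scales this per bucket

def pow_valid(salt: bytes, nonce: int, difficulty: int = DIFFICULTY) -> bool:
    """A proof of work is a nonce whose salted hash starts with enough zeros."""
    digest = hashlib.sha256(salt + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

def solve_pow(salt: bytes, difficulty: int = DIFFICULTY) -> int:
    """Brute-force the smallest valid nonce; expected cost grows
    exponentially with the difficulty."""
    for nonce in itertools.count():
        if pow_valid(salt, nonce, difficulty):
            return nonce

# The salt would include the object ID and the server-issued request ID.
nonce = solve_pow(b"object-id|request-id")
print(nonce, pow_valid(b"object-id|request-id", nonce))
```

Because verification is a single hash while solving takes many, the server can cheaply demand arbitrarily expensive work from clients.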

Norbert Preining: Farewell academics talk: Colloquium Logicum 2016 – Gödel Logics

Mon, 12 Sep 2016 21:27:39 +0000


Today I had my invited talk at the Colloquium Logicum 2016, where I gave an introduction to and overview of the state of the art of Gödel Logics. Having contributed considerably to the state we are now, it was a pleasure to have the opportunity to give an invited talk on this topic.


It was also somehow a strange talk (slides are available here), as it was my last as an “academic”. After JAIST rejected an extension of my contract (foundational research, where are you going? Foreign faculty, where?) I have been unemployed – not a fun state in Japan, but also not the first time I have been; my experiences span Austrian and Italian unemployment offices. This unemployment is going to end this weekend, and after 25 years in academics I say good-bye.

Considering that I had two invited talks, one teaching assignment for the ESSLLI, submitted three articles (another two forthcoming) this year, JAIST is missing out on quite a share of achievements in their faculty database. Not my problem anymore.

It was a good time in academics, and I will surely not stop doing research, but I am looking forward to new challenges and new ways of collaboration and development. I will surely miss academics, but for now I will dedicate my energy to different things in life.

Thanks to all the colleagues who did care, and for the rest, I have already forgotten you.

Keith Packard: hopkins

Mon, 12 Sep 2016 20:22:12 +0000


Hopkins Trailer Brake Controller in Subaru Outback

My minivan transmission gave up the ghost last year, so I bought a Subaru Outback to pull my t@b travel trailer. There isn't a huge amount of space under the dash, so I didn't want to mount a trailer brake controller in the 'usual' spot, right above my right knee.

Instead, I bought a Hopkins InSIGHT brake controller, 47297. That comes in three separate pieces which allows for very flexible mounting options.

I stuck the 'main' box way up under the dash on the left side of the car. There was a nice flat spot with plenty of space that was facing the right direction:


The next trick was to mount the display and control boxes around the storage compartment in the center console:


Routing the cables from the controls over to the main unit took a piece of 14ga solid copper wire to use as a fishing line. The display wire was routed above the compartment lid, the control wire was routed below the lid.

I'm not entirely happy with the wire routing; I may drill some small holes and then cut the wires to feed them through.

Shirish Agarwal: mtpfs, feh and not being able to share the debconf experience.

Mon, 12 Sep 2016 17:29:47 +0000

I have been sick for about 2 weeks now, hence I haven’t written. I had joint pains and am still weak. There have been lots of reports of malaria, chikungunya and dengue fever around the city. The only thing I came to know is how lucky I am to be able to move around on 2 legs, and how powerless and debilitating it feels when you can’t move. In the interim I saw ‘Me Before You‘ and, after going through my own most miniscule experience, I could relate to Will Taylor’s character. If I was in his place, I would probably make the same choices. But my issues are and were slightly different.

Last month I was supposed to share my debconf experience at the local PLUG meet. For that purpose, I had put some pictures from my phone on a pen-drive to share. But when I reached the venue, I found out that I had forgotten to bring the pen-drive. I had also used the mogrify command from the imagemagick stable to lossily compress the images on the pen-drive so they would be easier on image viewers. But that was not to be, and at the last moment I had to plug my phone into the USB port of the lappy and show some pictures that way. This was not good. I had known that the phone was mounted somewhere, but hadn’t looked at where.

After coming back home, it took me hardly 10 minutes to find out where it was mounted. It is not mounted under /media/shirish but under /run/user/1000/gvfs. Listing that directory shows mtp:host=%5Busb%3A005%2C007%5D. I didn’t need any extra packages in Debian to make it work. Interestingly, the only image viewer which seems to be able to work with all the images is ‘feh’, a command-line image viewer in Debian.
  $ aptitude show feh
  Package: feh
  Version: 2.16.2-1
  State: installed
  Automatically installed: no
  Priority: optional
  Section: graphics
  Maintainer: Debian PhotoTools Maintainers
  Architecture: amd64
  Uncompressed Size: 391 k
  Depends: libc6 (>= 2.15), libcurl3 (>= 7.16.2), libexif12 (>= 0.6.21-1~), libimlib2 (>= 1.4.5), libpng16-16 (>= 1.6.2-1), libx11-6, libxinerama1
  Recommends: libjpeg-progs
  Description: imlib2 based image viewer
   feh is a fast, lightweight image viewer which uses imlib2. It is commandline-driven and supports multiple images through slideshows, thumbnail browsing or multiple windows, and montages or index prints (using TrueType fonts to display file info). Advanced features include fast dynamic zooming, progressive loading, loading via HTTP (with reload support for watching webcams), recursive file opening (slideshow of a directory hierarchy), and mouse wheel/keyboard control.
  Homepage:

I did try various things to get it to mount under /media/shirish/ but as of today have had no luck. I am running Android 6.0 – Marshmallow and have enabled ‘USB debugging’ with h[...]

Steve Kemp: If your code accepts URIs as input..

Mon, 12 Sep 2016 16:33:58 +0000


There are many online sites that accept reading input from remote locations. For example a site might try to extract all the text from a webpage, or show you the HTTP-headers a given server sends back in response to a request.

If you run such a site you must make sure you validate the scheme of any URI you're given - also remembering to do that for any HTTP redirects you're sent.

Really the issue here is a confusion between URL & URI.

The only time I ever communicated with Aaron Swartz was unfortunately after his death, because I didn't make the connection. I randomly stumbled upon the html2text software he put together, which had an online demo containing a form for entering a location. I tried the obvious input:


The software was vulnerable, read the file, and showed it to me.

The site gives errors on all inputs now, so it cannot be used to demonstrate the problem, but on Friday I saw another site on Hacker News with the very same input-issue, and it reminded me that there's a very real class of security problems here.

The site in question was and allows you to enter a URL to convert to markdown - I found this via the hacker news submission.

The following link shows the contents of /etc/hosts, and demonstrates the problem:

The output looked like this:

.. localhost broadcasthost
::1 localhost
fe80::1%lo0 localhost stage files brettt..

In the actual output of '/etc/hosts' all newlines had been stripped. (Which I now recognize as being an artifact of the markdown processing.)

UPDATE: The problem is fixed now.
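The class of bug comes down to a missing scheme check. A minimal sketch in Python (the function name and policy here are made up; a real service must also re-apply the check to every redirect target, and will usually want further SSRF guards on top):

```python
from urllib.parse import urlparse

# Whitelist of schemes the fetcher is allowed to follow.
ALLOWED_SCHEMES = {"http", "https"}

def check_fetch_url(url: str) -> str:
    """Reject anything that isn't plain http(s) -- in particular
    file:// URIs, which would let a caller read local files.
    Re-run this check on every HTTP redirect target too."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"refusing scheme {parsed.scheme!r}")
    if not parsed.netloc:
        raise ValueError("missing host")
    return url

print(check_fetch_url("https://example.com/page"))
```

With such a check in place, an input like file:///etc/hosts is rejected before any fetch happens instead of being handed to a URL-capable fetch library.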

Ritesh Raj Sarraf: apt-offline 1.7.1 released

Mon, 12 Sep 2016 10:41:50 +0000

I am happy to mention the release of apt-offline, version 1.7.1. This release includes many bug fixes, code cleanups and better integration:

  • Integration with PolicyKit
  • Better integration with the apt gpg keyring
  • Resilient to failures when a sub-task errors out
  • New feature: Changelog. This release adds the ability to deal with package changelogs ('set' command option: --generate-changelog) based on what is installed, to extract changelogs from downloaded packages (currently supported with python-apt only), and to display them during installation ('install' command option: --skip-changelog, if you want to skip the changelog display).
  • New option: --apt-backend. Users can now opt for an apt backend of their choice. Currently supported: apt, apt-get (default) and python-apt.

Hopefully, there will be one more release before the release of Stretch. apt-offline can be downloaded from its homepage or from its GitHub page.

Update: The PolicyKit integration requires running the apt-offline-gui command with pkexec (screenshot). It also works fine with sudo, su etc.

Reproducible builds folks: Reproducible Builds: week 72 in Stretch cycle

Mon, 12 Sep 2016 07:49:38 +0000

What happened in the Reproducible Builds effort between Sunday September 4 and Saturday September 10 2016:

Reproducible work in other projects

  • Python 3.6's dictionary type now retains the insertion order. Thanks to themill for the report.
  • In coreboot, Alexander Couzens committed a change to make their release archives reproducible.

Patches submitted

  • #836609 filed against nostalgy by Chris Lamb.
  • #836605 filed against torque by Chris Lamb.
  • #836968 filed against erlang-p1-oauth2 by Chris Lamb.
  • #836970 filed against erlang-p1-sqlite3 by Chris Lamb.
  • #836817 filed against tj3 by Chris Lamb.
  • #836969 filed against erlang-p1-xmlrpc by Chris Lamb.

Reviews of unreproducible packages

We've been adding to our knowledge about identified issues.

3 issue types have been added:

  • random_id_in_pdf_generated_by_dblatex
  • captures_execution_time
  • captures_home_dir

1 issue type has been updated:

  • Expand comment for random_id_in_pdf_generated_by_dblatex

16 have been updated:

  • Add patch for nostalgy
  • Add patch for torque
  • Expand comment for babl
  • Tag pbuilder with random_id_in_pdf_generated_by_dblatex
  • Tag uclibc with users_and_groups_in_tarball
  • Tag realmd with different_encoding_in_html_by_docbook_xsl
  • Tag mrmpi with timestamps_in_documentation_generated_by_htmldoc
  • Tag dynare with leaks_path_environment_variable
  • Tag sdlgfx with timestamps_in_tarball
  • Tag pantomime1.2 with plist_weirdness
  • Tag blockattack with different_due_to_umask
  • Tag expeyes with different_pot_creation_date_in_gettext_mo_files
  • Tag android-platform-frameworks-data-binding with random_ordering_in_pom
  • Tag krunner with users_and_groups_in_tarball
  • Tag libevocosm with leaks_path_environment_variable
  • Tag gdbm with random_order_in_md5sums

13 have been removed, not including removed packages:

  • htp fixed in/since 1.19-2
  • openni-sensor-pointclouds and openni-sensor-primesense removed; fixed by __DATE__ & __TIME__
  • hsqldb fixed in/since 2.3.3+dfsg2-1
  • linux-tools RM
  • ksnapshot RM
  • cobalt-panel-utils RM
  • strigi RM
  • libkdeedu RM
  • erc RM
  • ttf-atarismall RM
  • gnupg-doc RM
  • django-localflavor RMed, removed
  • easymp3gain RM

100s of packages have been tagged with the more generic captures_build_path, and many with captures_kernel_version, user_hostname_manually_added_requiring_further_investigation, captures_shell_variable_in_autofoo_script, etc. Particular thanks to Emanuel Bronshtein for his work here.

Weekly QA work

FTBFS bugs have been reported by:

  • Aaron M. Ucko (1)
  • Chris Lamb (7)

diffoscope development

  • Mattia Rizzolo: Force LC_ALL=C.UTF-8 in the basic-command-list autopkgtest so that diffoscope can always output something
  • Ximin Luo: html-di[...]

Gregor Herrmann: RC bugs 2016/34-36

Sun, 11 Sep 2016 21:42:02 +0000

as before, my work on release-critical bugs was centered around perl issues. here's the list of bugs I worked on:

  • #687904 – interchange-ui: "interchange-ui: cannot install this package": (re?)apply patch from #625904, upload to DELAYED/5
  • #754755 – src:libinline-java-perl: "libinline-java-perl: FTBFS on mips: test suite issues": prepare a preliminary fix (pkg-perl)
  • #821994 – src:interchange: "interchange: Build arch:all+arch:any but is missing build-{arch,indep} targets": apply patch from sanvila to add targets, upload to DELAYED/5
  • #834550 – src:interchange: "interchange: FTBFS with '.' removed from perl's @INC": patch to "require ./", upload to DELAYED/5
  • #834731 – src:kdesrc-build: "kdesrc-build: FTBFS with '.' removed from perl's @INC": add patch from Dom to "require ./", upload to DELAYED/5
  • #834738 – src:libcatmandu-mab2-perl: "libcatmandu-mab2-perl: FTBFS with '.' removed from perl's @INC": add patch from Dom to "require ./" (pkg-perl)
  • #835075 – src:libmail-gnupg-perl: "libmail-gnupg-perl: FTBFS: Failed 1/10 test programs. 0/4 subtests failed.": add some debugging info
  • #835133 – libnet-jabber-perl: "libnet-jabber-perl: FTBFS in testing": add patch from CPAN RT (pkg-perl)
  • #835206 – src:munin: "munin: FTBFS with '.' removed from perl's @INC": add patch from Dom to call perl with -I., upload to DELAYED/5, then cancelled on maintainer's request
  • #835353 – src:pari: "pari: FTBFS with '.' removed from perl's @INC": add patch to call perl with -I., upload to DELAYED/5
  • #835711 – src:libconfig-identity-perl: "libconfig-identity-perl: FTBFS: Tests failures": run tests under gnupg1 (pkg-perl)
  • #837136 – libgtk3-perl: "libgtk3-perl: FTBFS: t/overrides.t failure": add patch from CPAN RT (pkg-perl)
  • #837237 – src:libtest-file-perl: "libtest-file-perl: FTBFS: Tests failures": add patch so tests find their common files again (pkg-perl)
  • #837249 – src:libconfig-record-perl: "libconfig-record-perl: FTBFS: lib/Config/ No such file or directory at Config-Record.spec.PL line 13.": fix build in debian/rules (pkg-perl)

[...]

Niels Thykier: Unseen changes to lintian.d.o

Sun, 11 Sep 2016 19:50:46 +0000

We have been making a lot of minor changes to lintian.d.o and the underlying report framework. Most of them were hardly noticeable to the naked eye. In fact, I probably would not have spotted any of them, if I had not been involved in writing them. Nonetheless, I felt like sharing them, so here goes.

User “visible” changes:

  • The generated reports are now in HTML5 rather than XHTML. [commit:9652c2e]
  • Add “alt” texts to graphs for [commit:83434ec]
  • Wrap front page index in a

Niels Thykier: debhelper 10 is now available

Sun, 11 Sep 2016 18:06:37 +0000

Today, debhelper 10 was uploaded to unstable and is coming to a mirror near you “really soon now”. The actual changes between version “9.20160814” and version “10” are rather modest. However, it does mark the completion of debhelper compat 10, which has been under way since early 2012.

Some highlights from compat 10 include:

  • The dh sequence in compat 10 automatically regenerates autotools files via dh_autoreconf
  • The dh sequence in compat 10 includes the dh-systemd debhelper utilities
  • dh sequencer based packages now default to building in parallel (i.e. “--parallel” is the default in compat 10)
  • dh_installdeb now properly shell-escapes maintscript arguments.

For the full list of changes in compat 10, please review the contents of the debhelper(7) manpage. Beyond that, you may also want to upgrade your lintian to 2.5.47 as it is the first version that knows that compat 10 is stable.
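For a package that can live with the defaults, the compat 10 behaviour above needs no explicit scaffolding; a hedged sketch of what a minimal dh-style debian/rules looks like (assuming a package with no step overrides, and "10" written to debian/compat):

```makefile
#!/usr/bin/make -f
# Minimal debian/rules using the dh sequencer.
# With compat 10, autotools files are regenerated via dh_autoreconf,
# the dh-systemd helpers run automatically, and the build is parallel
# by default -- none of that needs to be spelled out here.
%:
	dh $@
```

Packages that previously added dh-systemd or dh-autoreconf to Build-Depends and invoked them via "--with" can drop that once they bump to compat 10.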


Filed under: Debhelper, Debian

Hideki Yamane: mirror disk usage: not so much as expected

Sun, 11 Sep 2016 09:02:26 +0000

Debian repository mirror server disk usage.

I guess many new packages are added to the repo, but the disk usage has not grown as much as expected. Why?

Dirk Eddelbuettel: New package gettz on CRAN

Sat, 10 Sep 2016 21:40:00 +0000


gettz is now on CRAN in its initial release 0.0.1.

It provides a possible fallback in situations where Sys.timezone() fails to determine the system timezone. That can happen when e.g. the file /etc/localtime somehow is not a link into the corresponding file with zoneinfo data in, say, /usr/share/zoneinfo.

Duane McCully provided a nice StackOverflow answer with code that offers fallbacks via /etc/timezone (on Debian/Ubuntu) or /etc/sysconfig/clock (on RedHat/CentOS/Fedora, and rumour has it, BSD* systems) or /etc/TIMEZONE (on Solaris). The gettz micro-package essentially encodes that approach so that we have an optional fallback when Sys.timezone() comes up empty.

In the previous paragraph, note the stark absence of OS X, where there seems to be nothing to query, and of course Windows. Contributions for either would be welcome.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sylvain Le Gall: Release of OASIS 0.4.7

Sat, 10 Sep 2016 20:00:22 +0000

I am happy to announce the release of OASIS v0.4.7.

OASIS is a tool to help OCaml developers integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily. This tool is freely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general is on the OASIS website. The pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Drop support for OASISFormat 0.2 and 0.1.
  • New plugin "omake" to support build, doc and install actions.
  • Improve automatic tests (Travis CI and AppVeyor).
  • Trim down the dependencies (removed ocaml-gettext, camlp4, ocaml-data-notation).

Features:

  • findlib_directory (beta): to install libraries in sub-directories of findlib.
  • findlib_extra_files (beta): to install extra files with ocamlfind.
  • source_patterns (alpha): to provide module to source file mapping.

This version contains a lot of changes and is the achievement of a huge amount of work. The addition of OMake as a plugin is a huge step forward. The overall work has been targeted at making OASIS more library-like. This is still a work in progress, but we made some clear improvements by getting rid of various side effects (like the requirement of using "chdir" to handle "-C", which led to propagating ~ctxt everywhere and to designing OASISFileSystem).

I would like to thank again the contributors to this release: Spiros Eliopoulos, Paul Snively, Jeremie Dimino, Christopher Zimmermann, Christophe Troestler, Max Mouratov, Jacques-Pascal Deplaix, Geoff Shannon, Simon Cruanes, Vladimir Brankov, Gabriel Radanne, Evgenii Lepikhin, Petter Urkedal, Gerd Stolpmann and Anton Bachin. [...]
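For readers who have not used OASIS: the entry point it standardizes is a declarative `_oasis` file at the project root, from which the tool generates the build machinery. A hedged, minimal sketch (the package and module names here are invented for illustration):

```
OASISFormat: 0.4
Name:        myproject
Version:     0.1.0
Synopsis:    Example project described with OASIS
Authors:     Jane Doe
License:     MIT
Plugins:     META (0.4)

Library myproject
  Path:       src
  Modules:    Myproject
  BuildTools: ocamlbuild
```

Running `oasis setup` against such a file generates the setup.ml, configure and Makefile entry points that external tools can then drive uniformly.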

Enrico Zini: Dreaming of being picked

Sat, 10 Sep 2016 07:47:03 +0000

From "Stop stealing dreams":

«Settling for the not-particularly uplifting dream of a boring, steady job isn’t helpful. Dreaming of being picked — picked to be on TV or picked to play on a team or picked to be lucky — isn’t helpful either. We waste our time and the time of our students when we set them up with pipe dreams that don’t empower them to adapt (or better yet, lead) when the world doesn’t work out as they hope.

The dreams we need are self-reliant dreams. We need dreams based not on what is but on what might be. We need students who can learn how to learn, who can discover how to push themselves and are generous enough and honest enough to engage with the outside world to make those dreams happen.»

This made me think that I know many hero stories based on "the chosen one", like The Matrix, and like most superheroes, who get their powers either from some entity choosing them, or from chance.

I have a hard time thinking of a superhero who becomes one just by working hard at acquiring and honing their skills: I can only think of Batman and Iron Man, and they start off as super rich.

If I think of people who start from scratch as commoners and work hard to become exceptional, in the standard superhero narrative, I can only think of supervillains.


It makes me feel culturally biased into thinking that a common person cannot be trusted to act responsibly, and that only the rich, the chosen and the aristocrats can.

As a bias it may serve the rich and the aristocrats, but I don't think it serves society as a whole.

Dirk Eddelbuettel: RProtoBuf 0.4.6: bugfix update

Fri, 09 Sep 2016 23:40:00 +0000

Relatively quickly after version 0.4.5 of RProtoBuf was released, we have a new version 0.4.6 to announce, which appeared on CRAN today. RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding and serialization library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

This version contains a contributed bug-fix pull request covering conversion of zero-length vectors, and adding native support for S4 objects. At the request / suggestion of the CRAN maintainers, it also uncomments a LaTeX macro in the vignette (corresponding to our recent JSS paper) which older R versions do not (yet) have in their jss.cls file.

Changes in RProtoBuf version 0.4.6 (2016-09-08):

  • Support for serializing zero-length objects was added (PR #18 addressing #13)
  • S4 objects are natively encoded (also PR #18)
  • The vignette based on the JSS paper no longer uses a macro available only with the R-devel version of jss.cls, and hence builds on all R versions

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings. [...]