Planet Debian


Mike Gabriel: Making Debian experimental's X2Go Server Packages available on Ubuntu, Mint and alike

Mon, 24 Apr 2017 14:48:27 +0000

Often I get asked: How can I test the latest nx-libs packages [1] with a stable version of X2Go Server [2] on non-Debian, but Debian-like systems (e.g. Ubuntu, Mint, etc.)?

This is quite easy, if you are not scared of building binary Debian packages from Debian source packages. Until X2Go Server (and NXv3) is made available in Debian unstable, brave testers should follow the installation recipe below.

Step 1: Add Debian experimental as Source Package Source

Add Debian experimental as source package provider:

$ echo "deb-src experimental main" | sudo tee /etc/apt/sources.list.d/debian-experimental.list
$ sudo apt-get update

Step 2: Obtain Build Tools and Build Dependencies

When building software, you need some extra packages. These packages are not needed at runtime of the built software, so you may want to take note of which extra packages get installed in the step below. If you plan to rebuild X2Go Server and NXv3 several times, simply leave the build dependencies installed:

$ sudo apt-get build-dep nx-libs
$ sudo apt-get build-dep x2goserver
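If you want to remove the build dependencies again later, one way to track what `apt-get build-dep` pulled in is to snapshot the sorted package list before and after and diff the two. A minimal sketch of the idea (the package listings below are faked so the example is self-contained; in practice you would generate them with `dpkg-query -W -f='${Package}\n' | sort`):

```shell
# Diff two sorted package lists to see what build-dep pulled in.
# NOTE: the listings are stand-ins; replace them with real dpkg-query output.
set -e
tmp=$(mktemp -d)
printf 'gcc\nlibc6\nmake\n'              > "$tmp/before"  # before apt-get build-dep
printf 'gcc\nlibc6\nlibjpeg-dev\nmake\n' > "$tmp/after"   # after apt-get build-dep
# comm -13 prints lines only present in the second file: the newly added packages
comm -13 "$tmp/before" "$tmp/after"
rm -rf "$tmp"
```

The same list can later be fed to `apt-get purge` once you are done rebuilding.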

Step 3: Build NXv3 and X2Go Server from Source

Building NXv3 (aka nx-libs) takes a while, so it may be time to get some coffee now... The build process should not run as the superuser root; stay with your normal user account.

$ mkdir Development/ && cd Development/
$ apt-get source -b nx-libs

[... enjoy your coffee, there'll be much output on your screen... ]

$ apt-get source -b x2goserver

In your working directory, you should now find various new files ending in .deb.

Step 4: Install the built packages

We will now install these .deb files. It does not hurt to simply install all of them:

sudo dpkg -i *.deb

The above command may produce some error messages. Ignore them; you can easily fix them by installing the missing runtime dependencies:

sudo apt-get install -f

Play it again, Sam

If you want to redo the above with a new nx-libs or x2goserver source package version, simply create an empty folder and repeat the steps above. The dpkg command will install the .deb files over the currently installed package versions and update your system with your latest build.

The disadvantage of this build-from-source approach (it is a temporary recommendation until X2Go Server & co. have landed in Debian unstable) is that you have to check for updates manually from time to time.

Recommended versions

For X2Go Server, the 4.0.1.x release series is quite stable. The version shipped with Debian has been patched to work with the upcoming nx-libs 3.6.x series, but it also tolerates the older 3.5.0.x series as shipped with X2Go's upstream packages.

For NXv3 (aka nx-libs) we recommend using (thus, waiting for) the release. The package has already been uploaded to Debian experimental, but is waiting in Debian NEW for a minor ftp-master ACK (we added one binary package with the recent upload).


Mike Gabriel: [Arctica Project] Release of nx-libs (version

Mon, 24 Apr 2017 14:12:20 +0000

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one. NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, maintenance has been continued by a versatile group of developers. The work on NX (v3) continues under the project name "nx-libs".

Release Announcement

On Friday, Apr 21st 2017, version of nx-libs was released [1]. As some of you might have noticed, the release announcements for and were never posted / written, so this announcement lists changes introduced since

Credits

There are always many people to thank, so I won't mention all here. The person I need to mention here is Mihai Moldovan, though. He virtually is our QA manager, although not officially entitled. The feedback he gives on code reviews is sooo awesome!!! May you be available to our project for a long time. Thanks a lot, Mihai!!!

Changes between and

  • Use RPATH in nxagent now for finding libNX_X11 (our fake libX11 with nxcomp support added).
  • Drop support for various archaic platforms.
  • Fix valgrind issue in new Xinerama code.
  • Regression: Fix crash due to incompletely backported code in
  • RPM packaging review by nx-libs's official Fedora maintainer (thanks, Orion!).
  • Update script.

Changes between and

  • Support building against the libXfont2 API (using Xfont2 if available in the build environment, otherwise falling back to the Xfont(1) API).
  • Support built-in fonts, so the misc fonts no longer need to be installed.
  • Various Xserver os/ and dix/ backports from
  • ABI backports: CreatePixmap allocation hints, no index in CloseScreen() destructors propagated anymore, SetNotifyFd ABI, GetClientCmd et al. ABI.
  • Add quilt based patch system for bundled Mesa.
  • Fix upstream ChangeLog creation in
  • Keystroke.c code fully revisited by Ulrich Sibiller (Thanks!!!).
  • nxcomp now builds again on Cygwin. Thanks to Mike DePaulo for providing the patch!
  • Bump libNX_X11 to a status that resembles latest libX11 HEAD (again, thanks Ulrich!!!).
  • Various changes to make valgrind more happy (esp. uninitialized memory issues).
  • Hard-code RGB color values, drop previously shipped rgb.txt config file.

Changes between and

  • Regression fix for: fonts are now displayed correctly again after session resumption.
  • Prefer the source tree's nxcomp to system-wide installed nxcomp headers.
  • Provide nxagent-specific auto-detection code for available display numbers (see the nxagent man page for details, under -displayfd).
  • Man page updates for nxagent (thanks, Ulrich!).
  • Make sure that xkbcomp is available to nxagent (thanks, Mihai!).
  • Switch on building nxagent with the MIT-SCREEN-SAVER extension.
  • Fix FTBFS on SPARC64, arm64 and m68k Linux platforms.
  • Avoid duplicate runs of 'make build' due to a flaw in the main Makefile.

Change Log

Lists of changes (since ) can be obtained from here ( -> .4), here ( -> .5) and here ( -> .6).

Known Issues

A list of known issues can be obtained from the nx-libs issue tracker [issues].

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt URLs:

Debian: deb {jessie,stretch,sid} main
Ubuntu: deb {trusty,xenial} main

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

wget -qO - | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component). The nxagent Xserver can be used fr[...]

Norbert Preining: Leaving for BachoTeX 2017

Mon, 24 Apr 2017 02:20:32 +0000


Tomorrow we are leaving for TUG 2017 @ BachoTeX, one of the most unusual and great series of conferences (BachoTeX) merged with the most important series of TeX conferences (TUG). I am looking forward to this trip and to seeing all the good friends there.


And having the chance to visit my family in Vienna at the same time makes this trip, however painful the long flight with our daughter will be, worth it.

See you in Vienna and Bachotek!

PS: No pun intended with the photo-logo combination, just a shot of the great surroundings in Bachotek (image)

Mark Brown: Bronica Motor Drive SQ-i

Sun, 23 Apr 2017 13:17:45 +0000

I recently got a Bronica SQ-Ai medium format film camera which came with the Motor Drive SQ-i. Since I couldn’t find any documentation about it on the internet and had to work it out for myself, I figured I’d put what I learned here. Hopefully this will help the next person trying to figure one out, or at least, by virtue of being wrong on the internet, I’ll get someone who knows how the thing really works to tell me.


The motor drive attaches to the camera using the tripod socket; a replacement tripod socket is provided on the base plate. There’s also a metal plate with the bottom of the hand grip attached to it, held onto the base plate with a thumb screw. When this is released it gives access to the screw holding in the battery compartment, which (very conveniently) takes 6 AA batteries. This also provides power to the camera body when attached.


On the back of the base of the camera there’s a button with a red LED next to it which illuminates slightly when the button is pressed (visible in low light only). I’m not 100% sure what this is for; I’d have guessed a battery check if the light were easier to see.


On the top of the camera there is a hot shoe (with a plastic blanking plate, a nice touch), a mode selector and two buttons. The larger button on the front replicates the shutter release button on the body (which continues to function as well) while the smaller button to the rear of the camera controls the motor – depending on the current state of the camera it cocks the shutter, winds the film and resets the mirror when it is locked up. The mode dial offers three modes: off, S and C. S and C appear to correspond to the S and C modes of the main camera, single and continuous mirror lockup shots.

Overall with this grip fitted and a prism attached the camera operates very similarly to a 35mm SLR in terms of film winding and so on. It is of course heavier (the whole setup weighs in at 2.5kg) but balanced very well and the grip is very comfortable to use.

Andreas Metzler: balance sheet snowboarding season 2016/17

Sun, 23 Apr 2017 12:03:07 +0000

Another year of minimal snow. Again there was early snowfall in the mountains at the start of November, but the snow was gone soon again. There was no snow up to 2000 meters of altitude until about January 3. Christmas week was spent hiking up and taking the lift down.

I had my first day on board on January 6 on artificial snow, and the first one on natural snow on January 19. Down where I live (800m), snow was scarce the whole winter, never topping 1m. The measuring station at Diedamskopf, at 1800m above sea level, topped out at slightly above 200cm, on April 19. The last boarding day was yesterday (April 22) in Warth with hero conditions.

I had a pre-opening on the glacier in Pitztal at the start of November with Pure Boarding. However, due to the long waiting period between pre-opening and start of season it did not pay off. By the time I rode regularly I had forgotten almost everything I had learned at carving school.

Nevertheless it was a strong season, due to long periods of stable, sunny weather, with 30 days on piste (counting the day I went up and barely managed a single blind run in super-dense fog).

Anyway, here is the balance-sheet:

season    number of (partial) days    total meters of altitude    # of runs
2005/06   25                          124634                      309
2006/07   17                          74096                       189
2007/08   29                          219936                      503
2008/09   37                          226774                      551
2009/10   30                          202089                      462
2010/11   30                          203918                      449
2011/12   25                          228588                      516
2012/13   23                          203562                      468
2013/14   30                          274706                      597
2014/15   24                          224909                      530
2015/16   17                          138037                      354
2016/17   30                          269819                      634

Enrico Zini: Splitting a git-annex repository

Sat, 22 Apr 2017 18:48:43 +0000

I have a git-annex repo for all my media that has grown to 57866 files, and git operations are getting slow, especially on external spinning hard drives, so I decided to split it into separate repositories.

This is how I did it, with some help from #git-annex. Suppose the old big repo is at ~/oldrepo:

# Create a new repo for photos only
mkdir ~/photos
cd ~/photos
git init
git annex init laptop

# Hardlink all the annexed data from the old repo
cp -rl ~/oldrepo/.git/annex/objects .git/annex/

# Regenerate the git annex metadata
git annex fsck --fast

# Also split the repo on the usb key
cd /media/usbkey
git clone ~/photos
cd photos
git annex init usbkey
cp -rl ../oldrepo/.git/annex/objects .git/annex/
git annex fsck --fast

# Connect the annexes as remotes of each other
git remote add laptop ~/photos
cd ~/photos
git remote add usbkey /media/usbkey/photos
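The `cp -rl` steps above work because `-l` makes cp create hard links instead of copying file contents, so the annexed data is never duplicated on disk. A small self-contained demonstration of that property (using throwaway paths, not the real repos):

```shell
# Show that cp -rl links rather than copies: the link count goes up,
# and both paths refer to the same data.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/oldrepo/objects"
echo "annexed content" > "$tmp/oldrepo/objects/file.bin"
mkdir "$tmp/newrepo"
cp -rl "$tmp/oldrepo/objects" "$tmp/newrepo/"
stat -c %h "$tmp/oldrepo/objects/file.bin"   # prints 2: one name in each repo
cmp "$tmp/oldrepo/objects/file.bin" "$tmp/newrepo/objects/file.bin"
rm -rf "$tmp"
```

This is also why `git annex dropunused` later only removes redundant names: the data itself survives as long as one of the linked repositories still references it.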

At this point, I went through all repos doing standard cleanup:

# Remove unneeded hard links
git annex unused
git annex dropunused --force 1-12345

# Sync
git annex sync

To make sure nothing is missing, I used git annex find --not --in=here to check whether, for example, the usbkey that should have everything might be missing something.

Update: Antoine Beaupré pointed me to this tip about Repositories with large number of files which I will try next time one of my repositories grows enough to hit a performance issue.

Manuel A. Fernandez Montecelo: Debian GNU/Linux port for RISC-V 64-bit (riscv64)

Sat, 22 Apr 2017 02:00:22 +0000

This is a post describing my involvement with the Debian GNU/Linux port for RISC-V (unofficial and not endorsed by Debian at the moment) and announcing the availability of the repository (still very much WIP) with packages built for this architecture. If you are not interested in the story but want to check the repository, just jump to the bottom.

Roots

A while ago, mostly during 2014, I was involved in the Debian port for OpenRISC (or1k) ─ about which I posted (by coincidence) exactly 2 years ago. The two of us working on the port stopped in August or September of that year, after learning that the copyright of the code adding support for this architecture in GCC would not be assigned to the FSF, so it would never be added to GCC upstream ─ unless the original authors changed their mind (which they didn't) or there was a clean-room reimplementation (which hasn't happened so far).

But a few other key things contributed to the decision to stop working on that port, which bear a direct relationship to this story.

One thing that particularly influenced me to stop working on it was a sense of lack of purpose, all things considered, for the OpenRISC port that we were working on. For example, these chips are sometimes used as part of bigger devices by Samsung to control or wake up other chips; but it was not clear whether there would ever be devices with OpenRISC as the main chip, especially devices powerful enough to run Linux or similar kernels, and Debian on top. One can use FPGAs to synthesise OpenRISC or1k, but these are slow, and expensive when using lots of memory. Without prospects of having hardware easily available to users, there's not much point in having a whole Debian port ready to run on hardware that never comes to be. Sure, it's fun to create such a port, but it's tons of work to maintain and keep up to date forever, and with close to zero users it's very unrewarding.

Another thing that contributed to the decision to stop is that, at least in my opinion, 32-bit was not future-proof enough for general-purpose computing, especially for new devices and ports starting to take off at that time. There was some incipient work to create another OpenRISC design for 64 bits, but it was still in an early phase. My secret hope and ultimate goal was to be able to run as free a computer as possible as my main system. Still today many people are buying and using 32-bit devices, like small boards; but very few use them as their main computer or as servers for demanding workloads or complex services. So for me, even if feasible for the very austere and dedicated, OpenRISC or1k failed that test.

And lastly, another thing happened at the time...

Enter RISC-V

In August 2014, at the point when we were fully acknowledging the problem of upstreaming (or rather, the lack thereof) the support for OpenRISC in GCC, RISC-V was announced to the world, bringing along papers with suggestive titles such as “Instruction Sets Should Be Free: The Case For RISC-V” (pdf) and articles like “RISC-V: An Open Standard for SoCs - The case for an open ISA” in EE Times.

RISC-V (like the previous RISC-n designs) had been designed (or rather, was being designed, because it was and is a still unfinished standard) by people from UC Berkeley, including David Patterson, the pioneer of RISC computer designs and co-author of the seminal book “Computer Architecture: A Quantitative Approach”. Other very capable people are also leading the project, doing the design and legwork to make it happen ─ see the list of contributors.

But, apart from throwing names around, the project has many other merits. Similarly to OpenRISC, RISC-V is an open instruction set architecture (ISA), but with the advantage of being designed in more recent times (thus avoiding some mistakes and optimising for problems discovered more recently, as technology evolves); with more resources; with support for i[...]

Ritesh Raj Sarraf: Indian Economy

Fri, 21 Apr 2017 18:33:24 +0000

This has finally gotten me to ask the question.

All this time since my childhood, I grew up reading, hearing and watching that the core of India's economy is agriculture, and that it needs the highest bracket in the country's budgets. It still applies today. Every budget has special waivers for the agriculture sector, typically in hundreds of thousands of crores of Indian Rupees. The most recent to mention is INR 27,420 crores waived off for just a single state (Uttar Pradesh), as was promised by the winning party during their campaign. Wow.

A quick search yields that I am not alone in noticing this.

In the past, whenever I talked about the economy of this country, I mostly sidelined myself, because I never studied here, and neither did I live here much during my childhood or teenage days. Only in the last decade have I realized how much tax I pay, and where my taxes go.

I do see a justification for these loan waivers though. As a democracy, to remain in power, it is the people you need support from. And if a majority of your 1.3 billion people are in the agriculture sector, it is a very, very lucrative deal to attract them through such waivers, and expect their vote.

Here's another snippet from Wikipedia on the same topic:

Agricultural Debt Waiver and Debt Relief Scheme

On 29 February 2008, P. Chidambaram, at the time Finance Minister of India, announced a relief package for farmers which included the complete waiver of loans given to small and marginal farmers.[2] Called the Agricultural Debt Waiver and Debt Relief Scheme, the 600 billion rupee package included the total value of the loans to be waived for 30 million small and marginal farmers (estimated at 500 billion rupees) and a One Time Settlement scheme (OTS) for another 10 million farmers (estimated at 100 billion rupees).[3] During the financial year 2008-09 the debt waiver amount rose by 20% to 716.8 billion rupees and the overall benefit of the waiver and the OTS was extended to 43 million farmers.[4] In most of the Indian states the number of small and marginal farmers ranges from 70% to 94% of the total number of farmers.

And not to forget how many people pay taxes in India. To quote an unofficial statement from an Indian media house:

Only about 1 percent of India's population paid tax on their earnings in the year 2013, according to the country's income tax data, published for the first time in 16 years. The report further states that a total of 28.7 million individuals filed income tax returns, of which 16.2 million did not pay any tax, leaving only about 12.5 million tax-paying individuals, which is just about 1 percent of the 1.23 billion population of India in the year 2013.

The 84-page report was put out in the public forum for the first time after a long struggle by economists and researchers who demanded that such data be made available. In a press release, a senior official from India's income tax department said the objective of publishing the data is to encourage wider use and analysis by various stakeholders including economists, students, researchers and academics for purposes of tax policy formulation and revenue forecasting.

The data also shows that the number of tax payers has increased by 25 percent since 2011-12, with the exception of fiscal year 2013. The year 2014-15 saw a rise to 50 million tax payers, up from 40 million three years ago. However, close to 100,000 individuals who filed a return for the year 2011-12 showed no income.

The report brings to light low levels of tax collection and a massive amount of income inequality in the country, showing the rich aren't paying enough taxes. Low levels of tax collection could be a challenge for the current government as it scrambles for money to spend on its ambitious plans in areas such as infrastructure and science & technology. Reports poin[...]

Joachim Breitner: veggies: Haskell code generation from scratch

Fri, 21 Apr 2017 15:30:27 +0000

How hard is it to write a compiler for Haskell Core? Not too hard, actually!

I wish we had a formally verified compiler for Haskell, or at least for GHC’s intermediate language Core. Formalizing that part of GHC itself seems to be far out of reach, with the many phases the code goes through (Core to STG to CMM to Assembly or LLVM), optimizations happening at all of these phases, and the many complicated details of the highly tuned GHC runtime (pointer tagging, support for concurrency and garbage collection).

Introducing Veggies

So to make that goal of a formally verified compiler more feasible, I set out and implemented code generation from GHC’s intermediate language Core to LLVM IR, with simplicity as the main driving design factor. You can find the result in the GitHub repository of veggies (the name derives from “verifiable GHC”). If you clone that and run ./ some-directory, you will find that you can use the program some-directory/bin/veggies just like you would use ghc. It comes with the full base library, so your favorite variant of HelloWorld might just compile and run.

As of now, the code generation handles all the Core constructs (which is easy when you simply ignore all the types). It supports a good number of primitive operations, including pointers and arrays – I implement these as needed – and has support for FFI calls into C.

Why you don't want to use Veggies

Since the code generator was written with simplicity in mind, performance of the resulting code is abysmal: everything is boxed, i.e. represented as a pointer to some heap-allocated data, including “unboxed” integer values and “unboxed” tuples. This is very uniform and simplifies the code, but it is also slow, and because there is no garbage collection (and probably never will be for this project), it will fill up your memory quickly. Also, the code currently only supports 64-bit architectures, and this is hard-coded in many places. There is no support for concurrency.

Why it might be interesting to you nevertheless

So if it is not really usable to run programs with, should you care about it? Probably not, but maybe you do for one of these reasons:

  • You always wondered how a compiler for Haskell actually works, and reading through a little over a thousand lines of code is less daunting than reading through the 34k lines of code that is GHC’s backend.
  • You have wacky ideas about code generation for Haskell that you want to experiment with.
  • You have wacky ideas about Haskell that require special support in the backend, and want to prototype that.
  • You want to see how I use the GHC API to provide a ghc-like experience. (I copied GHC’s Main.hs and inserted a few hooks, an approach I copied from GHCJS.)
  • You want to learn about running Haskell programs efficiently, and starting from veggies, you can implement all the tricks of the trade yourself and enjoy observing the speed-ups you get.
  • You want to compile Haskell code to some weird platform that is supported by LLVM, but where you for some reason cannot run GHC’s runtime. (Because there are no threads and no garbage collection, the code generated by veggies does not require a runtime system.)
  • You want to formally verify Haskell code generation. Note that the code generator targets the same AST for LLVM IR that the vellvm2 project uses, so eventually, veggies can become a verified arrow in the top right corner map of the DeepSpec project.

So feel free to play around with veggies, and report any issues you have on the GitHub repository. [...]

Rhonda D'Vine: Home

Fri, 21 Apr 2017 08:01:00 +0000


A fair amount of things have happened since I last blogged about something other than music. First of all, we did actually hold a Debian Diversity meeting. It was quite nice, though fewer people were around than hoped for, and I attribute that to some extent to the trolls and haters who defaced the titanpad page for the agenda and destroyed the doodle entry for settling on a date for the meeting. They even tried to troll my blog with comments, and while I did approve controversial responses in the past, those went over the line of being acceptable and didn't carry any relevant content.

One response that I didn't approve but kept in my mailbox is even giving me strength to carry on. There is one sentence in it that speaks to me: Think you can stop us? You can't you stupid b*tch. You have ruined the Debian community for us. The rest of the message is of no further relevance, but even though I can't take credit for being responsible for that, I'm glad to be a perceived part of ruining the Debian community for intolerant and hateful people.

A lot of other things have happened since, too. Mostly locally here in Vienna: several queer empowering groups were founded around me; some of them existed already, some formed with my help. We now have several great regular meetings for non-binary people, for queer polyamorous people (about which we gave an interview), a queer playfight (I might explain that concept another time), a polyamory discussion group, two bi-/pansexual groups, a queer-feminist choir, and there will be a European Lesbian* Conference in October where I help with the organization …

… and on June 21st I'll finally receive the keys to my flat in Que[e]rbau Seestadt. I'm sooo looking forward to it. It will be part of the Let me come Home experience that I'm currently in. Another part of that experience is that I have started changing my name (and gender marker) officially. I had my first appointment at the corresponding bureau, and I hope it won't take too long, because I have to get my papers in time for booking my flight to Montreal, and at some point along the process my current passport won't contain correct data anymore. So for the people who have it in their signing policy to see government IDs, this might be your chance to finally sign my key.

I plan to do a diversity BoF at debconf where we can speak more directly on where we want to head with the project. I hope I'll find the time to do an IRC meeting beforehand. I'm just uncertain how to coordinate that one to make it accessible for interested parties while keeping the destructive trolls out. I'm open for ideas here.


Noah Meyerhans: Stretch images for Amazon EC2, round 2

Fri, 21 Apr 2017 04:37:00 +0000

Following up on a previous post announcing the availability of a first round of AWS AMIs for stretch, I'm happy to announce the availability of a second round of images. These images address all the feedback we've received about the first round. The notable changes include:

  • Don't install a local MTA.
  • Don't install busybox.
  • Ensure that /etc/machine-id is recreated at launch.
  • Fix the sources.list entry.
  • Enable Enhanced Networking and ENA support.
  • Images are owned by the official AWS account, rather than my personal account.

AMI details are listed on the wiki. As usual, you're encouraged to submit feedback to the cloud team via the BTS pseudopackage, the debian-cloud mailing list, or #debian-cloud on irc.

Dirk Eddelbuettel: Rblpapi 0.3.6

Fri, 21 Apr 2017 01:36:00 +0000


Time for a new release of Rblpapi -- version 0.3.6 is now on CRAN. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the seventh release since the package first appeared on CRAN last year. This release brings a very nice new function lookupSecurity() contributed by Kevin Jin as well as a number of small fixes and enhancements. Details below:

Changes in Rblpapi version 0.3.6 (2017-04-20)

  • bdh can now store in double preventing overflow (Whit and John in #205 closing #163)

  • bdp documentation has another override example

  • A new function lookupSecurity can search for securities, optionally filtered by yellow key (Kevin Jin and Dirk in #216 and #217 closing #215)

  • Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols(); also use .registration=TRUE in useDynLib in NAMESPACE (Dirk in #220)

  • getBars and getTicks can now return data.table objects (Dirk in #221)

  • bds has improved internal protect logic via Rcpp::Shield (Dirk in #222)

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: RcppQuantuccia 0.0.1

Thu, 20 Apr 2017 01:38:00 +0000

New package! And, as it happens, effectively a subset or variant of one of my oldest packages, RQuantLib.

Fairly recently, Peter Caspers started to put together a header-only subset of QuantLib. He called this Quantuccia, and, upon my asking, said that it stands for "little sister" of QuantLib. Very nice.

One design goal is to keep Quantuccia header-only. This makes distribution and deployment much easier. In the fifteen years that we have worked with QuantLib by providing the R bindings via RQuantLib, it has always been a concern to provide current QuantLib libraries on all required operating systems. Many people helped over the years, but it is still an issue, and e.g. right now we have no Windows package as there is no library to build it against.

Enter RcppQuantuccia. It only depends on R, Rcpp (for seamless R and C++ integration) and BH bringing Boost headers. This will make it much easier to have Windows and macOS binaries.

So what can it do right now? We started with calendaring: you can compute dates pertaining to different (ISDA and other) business day conventions, and compute holiday schedules. Here is one example computing, inter alia, under the NYSE holiday schedule common for US equity and futures markets:

R> library(RcppQuantuccia)
R> fromD <- as.Date("2017-01-01")
R> toD <- as.Date("2017-12-31")
R> getHolidays(fromD, toD)        # default calendar ie TARGET
[1] "2017-04-14" "2017-04-17" "2017-05-01" "2017-12-25" "2017-12-26"
R> setCalendar("UnitedStates")
R> getHolidays(fromD, toD)        # US aka US::Settlement
 [1] "2017-01-02" "2017-01-16" "2017-02-20" "2017-05-29" "2017-07-04" "2017-09-04"
 [7] "2017-10-09" "2017-11-10" "2017-11-23" "2017-12-25"
R> setCalendar("UnitedStates::NYSE")
R> getHolidays(fromD, toD)        # US New York Stock Exchange
[1] "2017-01-02" "2017-01-16" "2017-02-20" "2017-04-14" "2017-05-29" "2017-07-04"
[7] "2017-09-04" "2017-11-23" "2017-12-25"
R>

The GitHub repo already has a few more calendars, and more are expected. Help is of course welcome, both for this and for porting over actual quantitative finance calculations.

More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings. [...]

Norbert Preining: TeX Live 2017 pretest started

Thu, 20 Apr 2017 00:51:15 +0000


Preparations for the release of TeX Live 2017 started a few days ago with the freeze of updates in TeX Live 2016 and the announcement of the official start of the pretest period. That means that we invite people to test the new release and help fix bugs.


Notable changes are listed on the pretest page; here I only want to report on the changes in the core infrastructure: changes in the user/sys mode of fmtutil and updmap, and the introduction of the tlmgr shell.

User/sys mode of fmtutil and updmap

We (both at TeX Live and Debian) regularly got error reports about fonts not being found, formats not being updated, etc. The reason for all of them was unmistakably the same: the user had called updmap or fmtutil without the -sys option, thus creating a copy of the set of configuration files under their home directory, shadowing all later updates on the system side.

The reason for this behavior is the widespread misinformation (outdated information) on the internet suggesting to call plain updmap.

To counteract this, we have changed the behavior so that both updmap and fmtutil now accept a new argument -user (in addition to the already present -sys), and refuse to run when called without either of them given, printing a warning and linking to an explanation page. That page provides more detailed documentation and best-practice examples.
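The new contract (exactly one of -user or -sys must be given, otherwise the tool refuses to run) amounts to a small argument check. A hypothetical Python sketch of that logic follows; the real updmap/fmtutil are not written in Python, and only the option names come from the text above:

```python
def select_mode(argv):
    """Return 'user' or 'sys'; refuse to run if neither/both flags given.

    Hypothetical re-creation of the described check, not the real tools' code.
    """
    user = "-user" in argv
    system = "-sys" in argv
    if user == system:  # true when neither flag or both flags are present
        raise SystemExit(
            "call with -user for per-user configuration or -sys for "
            "system-wide configuration; see the explanation page"
        )
    return "user" if user else "sys"

print(select_mode(["-sys"]))   # sys
print(select_mode(["-user"]))  # user
```

Calling it with an empty argument list (the old, ambiguous invocation) now aborts with the warning instead of silently shadowing system files.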

tlmgr shell

The TeX Live Manager got a new "shell" mode, invoked by tlmgr shell. Details still need to be fleshed out, but in principle it is possible to use get and set to query and set some of the options normally passed via the command line, and to use all the actions as defined in the documentation. The advantage of this is that it is not necessary to load the tlpdb for each invocation. Here is a short example:

[~] tlmgr shell
protocol 1
tlmgr> load local
tlmgr> load remote
tlmgr: package repository /home/norbert/public_html/tlpretest (verified)
tlmgr> update --list
tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
update:   bchart             [147k]: local:    27496, source:    43928
update:   xindy              [535k]: local:    43873, source:    43934
tlmgr> update --all
tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
[ 1/22, ??:??/??:??] update: bchart [147k] (27496 -> 43928) ... done
[ 2/22, 00:00/00:00] update: biber [1041k] (43873 -> 43910) ... done
[22/22, 00:50/00:50] update: xindy [535k] (43873 -> 43934) ... done
running mktexlsr ...
done running mktexlsr.
tlmgr> quit
tlmgr: package log updated: /home/norbert/tl/2017/texmf-var/web2c/tlmgr.log 
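The advantage described above (one tlpdb load serving many commands) can be sketched as a simple read-eval loop. This is an illustrative Python toy, not tlmgr's actual implementation; the command names merely mirror the transcript:

```python
def run_shell(commands, load_db):
    """Process commands against a database loaded once, not per invocation."""
    db = load_db()  # the expensive load happens a single time
    options = {}
    out = []
    for cmd in commands:
        verb, *args = cmd.split()
        if verb == "get":                     # query an option
            out.append(options.get(args[0], "(unset)"))
        elif verb == "set":                   # set an option
            options[args[0]] = args[1]
        elif verb == "update":                # act on the cached database
            out.append("updating %d packages" % len(db))
        elif verb == "quit":
            break
    return out

# Toy database loader standing in for reading the tlpdb.
result = run_shell(["set repo tlpretest", "get repo", "update --all", "quit"],
                   lambda: ["bchart", "xindy"])
print(result)  # ['tlpretest', 'updating 2 packages']
```

A classic one-shot CLI would re-run load_db for every command; the shell amortizes it across the whole session.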

Please test and report bugs to our mailing list.


Reproducible builds folks: Reproducible Builds: week 103 in Stretch cycle

Wed, 19 Apr 2017 21:00:46 +0000

Here's what happened in the Reproducible Builds effort between Sunday April 9 and Saturday April 15 2017:

Upcoming events

On April 26th Chris Lamb will give a talk at foss-north 2017 in Gothenburg, Sweden on Reproducible Builds.

Media coverage

Jake Edge wrote a summary of Vagrant Cascadian's talk on Reproducible Builds at LibrePlanet.

Toolchain development and fixes

Ximin Luo forwarded patches to GCC for BUILD_PATH_PREFIX_MAP support. With this patch backported to GCC-6, as well as a patched dpkg to set the environment variable, he scheduled ~3,300 packages that are unreproducible in unstable-amd64 but reproducible in testing-amd64 - because we vary the build path in the former but not the latter case. Our infrastructure ran these in just under 3 days, and we reproduced ~1,700 extra packages. This is about 6.5% of ~26,100 Debian source packages, and about 1/2 of the ones whose irreproducibility is due to build-path issues. Most of the rest are not related to GCC, such as things built by R, OCaml, Erlang, LLVM, PDF IDs, etc. (The dip afterwards, in the graph linked above, is due to reverting back to an unpatched GCC-6, but we'll be rebasing the patch continually over the next few weeks so the graph should stay up.)

Packages reviewed and fixed, and bugs filed

Chris Lamb:
  • #860200 filed against poti, forwarded upstream.
  • #860201 filed against sunpinyin, forwarded upstream.
  • #860203 filed against avifile.
  • #860211 filed against qtractor.
  • #860212 filed against samplv1.
  • #860213 filed against drumkv1.
  • #860214 filed against synthv1.
  • #860218 filed against templayer.
  • #860266 filed against miniupnpd, forwarded upstream.
  • #860275 filed against msp430mcu.
  • #860277 filed against g2clib.
  • #860278 filed against openigtlink.
  • #860279 filed against xmlrpc-c.
  • #860372 filed against hp-search-mac.
  • #860373 filed against foxeye.
  • #860374 filed against python-taskflow.
  • #860384 filed against polygen.

Chris West:
  • #860418 filed against sugar-memorize-activity.
(Patch by Chris Lamb.)

Reviews of unreproducible packages

38 package reviews have been added, 111 have been updated and 85 have been removed in this week, adding to our knowledge about identified issues. 6 issue types have been updated:

  • Added: nondeterministic_java_bytecode, timestamp_in_jboss_messagebundle_generated_code
  • Updated: timestamps_in_documentation_generated_by_javadoc; randomness_in_gcj_output (gcj is deprecated/dead); records_build_flags and captures_build_path (we temporarily consider these non-deterministic, to better track the issue - the patches are still pending and statuses will keep changing as we upload patched packages).
  • Removed: locale_in_documentation_generated_by_javadoc (seems to be fixed for every non-FTBFS package that it was affected by).

diffoscope development

Development continued in git on the experimental branch. Chris Lamb: don't crash on invalid archives (#833697); tidy up some other code.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by: Chris Lamb (3), Chris West (1).

Misc.

This week's edition was written by Ximin Luo, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists. [...]

Lior Kaplan: Open source @ Midburn, the Israeli burning man

Wed, 19 Apr 2017 15:00:16 +0000

This year I decided to participate in Midburn, the Israeli version of Burning Man. While thinking of doing something different from my usual habits, I found myself volunteering in the Midburn IT department and taking on the task of making it an open source project. Back into my comfort zone, while trying to escape it.

I found a community of volunteers from the Israeli high-tech scene who work together on building the infrastructure for Midburn. In many ways, it's already an open source community in the way it works. One critical and formal element was lacking, though: a license for the code. After some discussion we decided on the Apache License 2.0, and I started the process of changing the license, taking it seriously and making sure it goes "by the rules".

Our code is available on GitHub. And while it still needs to be tidied up, I prefer the release-early-and-often approach. The main idea we want to bring to the Burn infrastructure is using Spark as a database, and we have already begun talking with parallel teams of other burn events. I'll follow up on our technological agenda / vision. In the meanwhile, you are more than welcome to comment on the code or join one of the teams (e.g. the volunteers module to organize who does which shift during the event).



Filed under: Israeli Community

Steinar H. Gunderson: Chinese HDMI-to-SDI converters

Tue, 18 Apr 2017 23:28:00 +0000


I often need to convert signals from HDMI to SDI (and occasionally back). This requires a box of some sort, and eBay obliges; there's a bunch of different sellers of the same devices, selling them for around $20–25. They don't seem to have a brand name, but they are invariably sold as 3G-SDI converters (meaning they should go up to 1080p60) and look like this:


There are also corresponding SDI-to-HDMI converters that look pretty much the same except they convert the other way. (They're easy to confuse, but that's not a problem unique to them.)

I've used them for a while now, and there are pros and cons. They seem reliable enough, and they're 1/4th the price of e.g. Blackmagic's Micro converters, which is a real bargain. However, there are also some issues:

  • For 3G-SDI, they output level A only, with no option for level B. (In fact, there are no options at all.) Level A is the most sane standard, and also what most equipment uses, but there exists certain older equipment that only works with level B.
  • They don't have reclocking chips, so their timing accuracy is not superb. I managed to borrow a Phabrix SDI analyzer and measured the jitter; with a very short cable, I got approximately 0.85–0.95 UI (varying a bit), whereas a Blackmagic converter gave me 0.23–0.24 UI (much more stable). This may be a problem at very long cable lengths, although I haven't tried 100m runs and such.
  • When converting to and from RGB, they seem to assume Rec. 601 Y'CbCr coefficients even for HD resolutions. This means the colors will be a bit off in some cases, although for most people, it will be hard to notice without looking at it side-by-side. (I believe the HDMI-to-SDI and SDI-to-HDMI converters make the same mistake, so that the errors cancel out if you just want to use a pair as HDMI extenders. Also, if your signal is Y'CbCr already, you don't need to care.)
  • They don't insert SMPTE 352M payload ID. (Supposedly, this is because the SDI chip they use, called GV7600, is slightly out-of-standard on purpose in order to avoid paying expensive licensing fees to SMPTE.) Normally, you wouldn't need to care, but 3G-SDI actually requires this, and worse, Blackmagic's equipment (at least the Duo 2, and I've seen reports about the ATEMs as well) enforces it. If you try to run e.g. 1080p50 through them and into a Duo 2, it will be misdetected as “1080p25, no signal”. There's no workaround that I know of.

The last issue is by far the worst, but it only affects 3G-SDI resolutions. 720p60, 1080p30 and 1080i60 all work fine. And to be fair, not even Blackmagic's own converters actually send 352M correctly most of the time…

I wish there were a way I could publish this somewhere people would actually read it before buying these things, but without a name, it's hard for people to find it. They're great value for money, and I wouldn't hesitate to recommend them for almost all use… but then, there's that almost. :-)
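The Rec. 601 vs Rec. 709 point above is easy to quantify: the two standards weight R, G and B differently when forming luma, so applying 601 coefficients to HD material shifts strongly colored regions while leaving near-grays untouched (both weight sets sum to 1). A quick Python comparison using the published coefficients; nothing here is taken from the converters themselves:

```python
def luma_601(r, g, b):
    # Rec. 601 luma weights (SD video)
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_709(r, g, b):
    # Rec. 709 luma weights (HD video)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Pure saturated green, channels normalized to [0, 1]:
print(luma_601(0.0, 1.0, 0.0))  # 0.587
print(luma_709(0.0, 1.0, 0.0))  # 0.7152

# A neutral gray is unaffected, since the weights sum to 1 in both standards:
print(luma_601(0.5, 0.5, 0.5) == luma_709(0.5, 0.5, 0.5))
```

This matches the observation in the post: the error is largest on saturated colors and hard to notice on ordinary, mostly desaturated material.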

Vincent Fourmond: make-theme-image: a script to make yourself an idea of a icon theme

Tue, 18 Apr 2017 21:34:56 +0000

Some time ago I created a utils repository on GitHub to publish miscellaneous scripts, but it's only recently that I have started to really populate it. One of my recent works is the make-theme-image script, which downloads an icon theme package, grabs relevant (user-specifiable) icons, and arranges them in a neat montage. The images displayed are the results of running

~ make-theme-image gnome-icon-theme moblin-icon-theme

This is quite useful to get a good idea of the icons available in a package. You can select the icons you want to display using the -w option. The following command should provide you with a decent overview of the icon themes present in Debian:

apt search -- -icon-theme | grep / | cut -d/ -f1 | xargs make-theme-image

I hope you find it useful! In any case, it's on GitHub, so feel free to patch and share.

Steve Kemp: 3d-Printing is cool

Tue, 18 Apr 2017 21:00:00 +0000

I've heard about 3d-printing a lot in the past, although the hype seems to have mostly died down. My view has always been "That seems cool", coupled with "Everybody says making the models is very hard", and "the process itself is fiddly & time-consuming".

I've been sporadically working on a project for a few months now which displays tram-departure times; this is part of my drive to "hardware" things with Arduino/ESP8266 devices. Most visitors to our flat have commented on it at least once, and over time it has become gradually more and more user-friendly. Initially it was just a toy-project for myself, so everything was hard-coded in the source, but over time that changed - which I mentioned here (specifically the access-point setup): When it boots up unconfigured, it starts as an access-point, so you can connect and configure the WiFi network it should join. Once it's up and running you can point a web-browser at it. This lets you toggle the backlight, change the timezone, and the tram-stop. These values are persisted to flash, so reboots will remember everything.

I've now wired up an input-button to the device too, experimenting with the different ways that a single button can carry out multiple actions:

  • Press & release - toggle the backlight.
  • Press & release twice - a double-click if you like - show a message.
  • Press, hold for 1 second, then release - re-sync the date/time & tram-data.

Anyway the software is neat, and I can't think of anything obvious to change. So let's move on to the real topic of this post: 3D printing.

I randomly remembered that I'd heard about an online site holding 3D models, and on a whim I searched for "4x20 LCD". That led me to this design, which is exactly what I was looking for. Just like open-source software, we're now living in a world where you can get open-source hardware! How cool is that? I had to trust the dimensions of the model, and obviously I was going to mount my new button into the box, rather than the knob shown.
But having a model was great. I could download it for free, and view it online. With a model obtained, the next step was getting it printed. I found a bunch of commercial companies, here in Europe, who would print a model and ship it to me, but when I uploaded the model they priced it at €90+. Too much. I'd almost lost interest when I stumbled across a site which provides a gateway to a series of individuals/companies who will print things for you, on demand: 3dhubs. Once again I uploaded my model, and this time I was able to select a guy in the same city as me. He printed my model for 1/3-1/4 of the price of the companies I'd found, and sent me fun pictures of the object while it was in the process of being printed.

To recap, I started like this: Then I boxed it in cardboard, which looked better than nothing, but still not terribly great: Now I've found an online case-design for free, got it printed cheaply by a volunteer (feels like the wrong word; after all, I did pay him), and I have something which looks significantly more professional: Inside it looks as neat as you would expect:

Of course the case still cost 5 times as much as the actual hardware involved (button: €0.05, processor-board €2.00 and LCD I2C display €3.00). But I've gone from being somebody who had zero experience with hardware-based projects 4 months ago, to somebody who has built a project which is functional and "pretty". The internet really is a glorious thing. Using it for learning, and coding is g[...]

Norbert Preining: Gaming: Firewatch

Tue, 18 Apr 2017 15:04:39 +0000


A nice little game, Firewatch, puts you into a fire watch tower in Wyoming, with only a walkie-talkie connecting you to your supervisor Delilah. A so-called "first person mystery adventure" with very nice graphics and great atmosphere.

Starting with your trip to the watch tower, the game sends the player on a series of "missions", during which more and more clues about a mysterious disappearance are revealed. The game's progression is rather straightforward: one has hardly any choices, and it is practically impossible to miss something or fail in some way.

The big plus of the game is the great atmosphere, the funny dialogues with Delilah, the story that pulls you into the game, and the development of the character(s). The tower, the cave, all the places one visits are delicately designed with lots of personality, making this a very human-like game.

What is weak is the finish. During the game I was always thinking about whether I should tell Delilah everything, or keep some things secret. But in the end nothing matters; everything ends with a simple escape in the helicopter, without tying up any of the loose ends. Somehow a pity for such a beautiful game to leave the player somewhat unsatisfied at the end.

But although the finish wasn't that good, I still enjoyed it more than I expected. Due to the simple flow it won't keep you busy for many hours, but as a short diversion over a few evenings (for me), it was a nice break from all the riddle games I love so much.

Bits from Debian: Call for Proposals for DebConf17 Open Day

Tue, 18 Apr 2017 07:00:00 +0000

The DebConf team would like to call for proposals for the DebConf17 Open Day, a whole day dedicated to sessions about Debian and Free Software, and aimed at the general public. Open Day will precede DebConf17 and will be held in Montreal, Canada, on August 5th 2017.

DebConf Open Day will be a great opportunity for users, developers and people simply curious about our work to meet and learn about the Debian Project, Free Software in general and related topics.

Submit your proposal

We welcome submissions of workshops, presentations or any other activity which involves Debian and Free Software. Activities in both English and French are accepted.

Here are some ideas about content we'd love to offer during Open Day. This list is not exhaustive, feel free to propose other ideas!

  • An introduction to various aspects of the Debian Project
  • Talks about Debian and Free Software in art, education and/or research
  • A primer on contributing to Free Software projects
  • Free software & Privacy/Surveillance
  • An introduction to programming and/or hardware tinkering
  • A workshop about your favorite piece of Free Software
  • A presentation about your favorite Free Software-related project (user group, advocacy group, etc.)

To submit your proposal, please fill in the form at


We need volunteers to help ensure Open Day is a success! We are specifically looking for people familiar with the Debian installer to attend the Debian installfest, as resources for people seeking help to install Debian on their devices. If you're interested, please add your name to our wiki:


Participation in Open Day is free and no registration is required.

The schedule for Open Day will be announced in June 2017.


Sylvain Beucler: Practical basics of reproducible builds 3

Mon, 17 Apr 2017 15:25:47 +0000

On my quest to generate reproducible standalone binaries for GNU FreeDink, I met new friends but currently lie defeated by an unexpected enemy...

Episode 1:
  • compiler version needs to be identical and recorded
  • build options and their order need to be identical and recorded
  • build path needs to be identical and recorded (otherwise debug symbols - and BuildIDs - change)
  • diffoscope helps checking for differences in build output

Episode 2:
  • use -Wl,--no-insert-timestamp for .exe (with old binutils 2.25 caveat)
  • no need to set a build path for stripped .exe (no ELF BuildID)
  • reprotest helps checking build variations automatically
  • MXE stack is apparently deterministic enough for a reproducible static build
  • umask needs to be identical and recorded
  • file timestamps need to be set and recorded (more on this in a future episode)

First, the random build differences when using -Wl,--no-insert-timestamp were explained. peanalysis shows random build dates:

$ reprotest 'i686-w64-mingw32.static-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -Dmain=SDL_main -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello && hello | tee /tmp/hello.log-$(date +%s); sleep 1' 'hello'
$ diff -au /tmp/hello.log-1*
--- /tmp/hello.log-1490950327 2017-03-31 10:52:07.788616930 +0200
+++ /tmp/hello.log-1523203509 2017-03-31 10:52:09.064633539 +0200
@@ -18,7 +18,7 @@
 found PE header (size: 20)
 machine: i386
 number of sections: 17
- timedatestamp: -1198218512 (Tue Jan 12 05:31:28 1932)
+ timedatestamp: 632430928 (Tue Jan 16 09:15:28 1990)
 pointer to symbol table: 4593152 (0x461600)
 number of symbols: 11581 (0x2d3d)
 size of optional header: 224
@@ -47,7 +47,7 @@
 Win32VersionValue: 0
 size of image (memory): 4640768
 size of headers (offset to first section raw data): 1536
- checksum (for drivers): 4927867
+ checksum (for drivers): 4922616
 subsystem: 3 win32 console binary
 DllCharacteristics: 0

Stephen Kitt mentioned 2 simple patches (1 2) fixing uninitialized memory in binutils. These patches fix the variation and were submitted to MXE (pull request).

Next was playing with compiler support for SOURCE_DATE_EPOCH (which e.g. sets __DATE__ macros). The FreeDink DFArc frontend historically displays a build date in the About box: "Build Date: %s\n", ..., __TDATE__. Sadly support is only landing upstream in GCC 7 :/ so I had to remove that date.

Now come the challenging parts. All my tests with reprotest checked out. I started writing a reproducible build environment based on Docker (git browse). At first I could not run reprotest in the container, so I reworked it with SSH support, and reprotest validated determinism. (I also generate a reproducible .zip archive, more on that later.) So far so good, but was the release identical when running reprotest successively on the different environments? (Reminder: this is a .exe build that is insensitive to varying paths, hence consistent in a full reprotest.)

$ sha256sum *.zip
189d0ca5240374896c6ecc6dfcca00905ae60797ab48abce2162fa36568e7cf1
e182406b4f4d7c3a4d239eee126134ba5c0304bbaa4af3de15fd4f8bda5634a9
e182406b4f4d7c3a4d239eee126134ba5c0304bbaa4af3de15fd4f8bd[...]
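For reference, SOURCE_DATE_EPOCH is simply an environment variable carrying a Unix timestamp that build tools should prefer over the current time. A generic Python sketch of a build step honoring the convention (an illustration, not DFArc's actual code):

```python
import os
import time

def build_date():
    """Use SOURCE_DATE_EPOCH when set, so rebuilds emit the same date."""
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    stamp = int(epoch) if epoch is not None else int(time.time())
    # gmtime, not localtime: the builder's timezone must not leak into
    # the output, or the build would differ between machines.
    return time.strftime("%Y-%m-%d", time.gmtime(stamp))

os.environ["SOURCE_DATE_EPOCH"] = "1492387200"  # 2017-04-17 00:00:00 UTC
print(build_date())  # 2017-04-17
```

Distribution build tools (such as a patched dpkg, mentioned in the week 103 report above) export the variable from the last changelog entry, so every rebuild sees the same "now".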

Norbert Preining: Systemd again (or how to obliterate your system)

Mon, 17 Apr 2017 15:05:45 +0000


Ok, I have been silent about systemd and its being forced onto us in Debian like force-feeding foie gras geese. I have complained about systemd a few times (here and here), but what I read today really made me lose the last drops of trust I had in this monster-piece of software.


If you are up for some really surprising reading about the main figure behind systemd, enjoy this GitHub issue. It's about a bug that simply does the equivalent of rm -rf / in some cases. The OP gave clear indications, the bug was fixed immediately, but then a comment from the God Poettering himself appeared that was the last straw:

I am not sure I’d consider this much of a problem. Yeah, it’s a UNIX pitfall, but “rm -rf /foo/.*” will work the exact same way, no?
(Lennart Poettering, systemd issue 5644)

Well, no, a total of one minute of checking would have shown him that this is not the case. But we entrust this guy with the whole management of the init process, servers, logs (and soon our toilet and fridge management, X, DNS, whatever you ask for).

There are two issues here: One is that such a bug has probably been lurking in systemd for years. The reason is simple - we pay with these kinds of bugs for the incredible complexity increase of an init process which takes over too many services. Referring back to the Turing Award lecture given by Hoare, we see that systemd took the latter path:

I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.
(Tony Hoare, Turing Award Lecture 1980)

The other issue is how the systemd developers deal with bug reports. I have reported several cases here; this is just another one: close the issue for comments, shut up, sweep it under the carpet.

(Image credit: The musings of an Indian Faust)

Ross Gammon: My March 2017 Activities

Mon, 17 Apr 2017 14:35:09 +0000

March was a busy month, so this monthly report is a little late. I worked two weekends, and I was planning my Easter holiday, so there wasn't a lot of spare time.

Debian

Updated Dominate to the latest version and uploaded it to experimental (due to the Debian Stretch release freeze). Uploaded the latest version of abcmidi (also to experimental). Pinged the bugs for reverse dependencies of pygoocanvas and goocanvas with a view to getting them removed from the archive during the Buster cycle. Asked for help on the Ubuntu Studio developers and users mailing lists to test the coming Ubuntu Studio 17.04 release ISO, because I would be away on holiday for most of it.

Ubuntu

Worked on ubuntustudio-controls, reverting it back to an earlier revision that Len said was working fine. Unfortunately, when I built and installed it from my PPA, it crashed. Eventually found my mistake with the bzr reversion, fixed it and prepared an upload ready for sponsorship. Submitted a Freeze Exception bug in the hope that the Release Team would accept it even though we had missed the Final Beta. Put a new power supply in an old computer that was kaput, and got it working again. Set up Ubuntu Server 16.04 on it so that I could get a bit more experience with running a server. It won't last very long, because it is a 32-bit machine, and Ubuntu will probably drop support for that architecture eventually. I used two small spare drives to set up RAID 1 & LVM (so that I can add more space to it later). I set up some Samba shares, so that my wife will be able to get at them from her Windows machine. For music streaming, I set up Emby Server. It would be great to see this packaged for Debian. I uploaded all of my photos and music for Emby to serve around the home (and remotely as well). Set up Obnam to back up the server to an external USB stick (temporarily, until I set up something remote). Set up LetsEncrypt with the wonderful Certbot program. Did the Release Notes for Ubuntu Studio 17.04 Final Beta.
As I was in Brussels for two days, I was not able to do any ISO testing myself.

Other

Measured up the new model railway layout and documented it in xtrkcad. Continued learning Ansible by setting up ssh on all my machines so that I could access them with Ansible and manipulate them using a playbook. Went to the Open Source Days conference just down the road in Copenhagen. Saw some good presentations. Of interest for my previous work in the Debian GIS Team was a presentation from the Danish municipalities on how they run projects using Open Source. I noted their use of Proj 4 and OSGeo. I was also pleased to see a presentation from Ximin Luo on Reproducible Builds, and introduced myself briefly after his talk (during the break). Started looking at creating a Django website to store and publish my One Name Study sources (indexes). Started by creating a library to list some of my recently read journals. I will eventually need to import all the others I have listed in a CSV spreadsheet that was originally exported from the commercial (Windows-only) Custodian software.

Plan status from last month & update for next month

Debian: For the Debian Stretch release: Keep an eye on the Release Critical bugs list, and see if I can help fix any. - In Progress. Generally: Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release. - In[...]

Russell Coker: More KVM Modules Configuration

Mon, 17 Apr 2017 10:07:26 +0000

Last year I blogged about blacklisting a video driver so that KVM virtual machines didn't go into graphics mode [1]. Now I've been working on some other things to make virtual machines run better.

I use the same initramfs for the physical hardware as for the virtual machines. So I need to remove modules that are needed for booting the physical hardware from the VMs, as well as other modules that get dragged in by systemd and other things. One significant saving from this is that I use BTRFS for the physical machine and the BTRFS driver takes 1M of RAM!

The first thing I did to reduce the number of modules was to edit /etc/initramfs-tools/initramfs.conf and change "MODULES=most" to "MODULES=dep". This significantly reduced the number of modules loaded and also stopped the initramfs from probing for a non-existent floppy drive, which added about 20 seconds to the boot. Note that this will result in your initramfs not supporting different hardware. So if you plan to take a hard drive out of your desktop PC and install it in another PC this could be bad for you, but for servers it's OK, as that sort of upgrade is uncommon for servers and only done with some planning (such as creating an initramfs just for the migration).

I put the following rmmod commands in /etc/rc.local to remove modules that are automatically loaded:

rmmod btrfs
rmmod evdev
rmmod lrw
rmmod glue_helper
rmmod ablk_helper
rmmod aes_x86_64
rmmod ecb
rmmod xor
rmmod raid6_pq
rmmod cryptd
rmmod gf128mul
rmmod ata_generic
rmmod ata_piix
rmmod i2c_piix4
rmmod libata
rmmod scsi_mod

In /etc/modprobe.d/blacklist.conf I have the following lines to stop drivers being loaded. The first line is to stop the video mode being set and the rest are just to save space. One thing that inspired me to do this is that the parallel port driver gave a kernel error when it loaded and tried to access non-existent hardware.
blacklist bochs_drm
blacklist joydev
blacklist ppdev
blacklist sg
blacklist psmouse
blacklist pcspkr
blacklist sr_mod
blacklist acpi_cpufreq
blacklist cdrom
blacklist tpm
blacklist tpm_tis
blacklist floppy
blacklist parport_pc
blacklist serio_raw
blacklist button

On the physical machine I have the following in /etc/modprobe.d/blacklist.conf. Most of this is to prevent loading of filesystem drivers when making an initramfs. I do this because I know there's never going to be any need for CDs, parallel devices, graphics, or strange block devices in a server room. I wouldn't do any of this for a desktop workstation or laptop.

blacklist ppdev
blacklist parport_pc
blacklist cdrom
blacklist sr_mod
blacklist nouveau
blacklist ufs
blacklist qnx4
blacklist hfsplus
blacklist hfs
blacklist minix
blacklist ntfs
blacklist jfs
blacklist xfs

[1]

Related posts: Video Mode and KVM I recently changed my KVM servers to use the kernel... Testing STONITH One problem that I have had in configuring Heartbeat clusters... Modules and NFS for Xen I'm just in the process of converting a multi-user system... [...]

Norbert Preining: Calibre on Debian

Mon, 17 Apr 2017 01:33:21 +0000


Calibre is the prime open source e-book management program, but the Debian releases often lag behind the official ones. Furthermore, the Debian packages remove support for rar-packed e-books, which means that several comic book formats cannot be handled.

Thus, I have published a local repository of calibre targeting Debian/sid, with amd64 binaries where rar support is enabled and, as far as possible, the latest version included.

deb calibre main
deb-src calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13


Antoine Beaupré: Montreal Bug Squashing Party report

Sun, 16 Apr 2017 19:19:59 +0000

A summary of this article has also been translated into French, merci!

Last Friday, a group of Debian users, developers and enthusiasts met at offices for a bug squashing party. We were about a dozen people of various levels: developers, hackers and users.

I gave a quick overview of Debian packaging using my quick development guide, which proved to be pretty useful. I made a link ( for people to be able to easily find the guide on their computers. Then I started going through a list of different programs used to do Debian packaging, to try and gauge the level of the people attending:

  • apt-get install - everyone knew about it
  • apt-get source - everyone paying attention
  • dget - only 1 knew about it
  • dch - 1
  • quilt - about 2
  • apt-get build-dep - 1
  • dpkg-buildpackage - only 3 people
  • git-buildpackage / gitpkg - 1
  • sbuild / pbuilder
  • dput - 1
  • rmadison - 0 (the other DD wasn't paying attention anymore)

So mostly skilled Debian users (they know apt-get source) but not used to packaging (they don't know about dpkg-buildpackage). So I went through the list again and explained how they all fit together and could be used to work on Debian packages in the context of a Debian release bug squashing party. This was the fastest crash course in Debian packaging I have ever given (and probably the first too) - going through those tools in about 30 minutes. I was happy to have the guide in the back that people could refer to later.

The first question after the presentation was "how do we find bugs?", which led me to add links to the UDD bugs page and the release-critical bugs page. I also explained the key links on top of the UDD page for finding specific sets of bugs, and explained the useful "patch" filter that allows selecting bugs with or without a patch. I guess that maybe half of the people were able to learn something new or improve their skills enough to make significant contributions or test actual patches. Others learned how to hunt and triage bugs in the BTS.
Update: sorry for the wording: all contributions were really useful, thanks and apologies to bug hunters!! I myself learned how to use sbuild thanks to the excellent sbuild wiki page, which I improved upon. A friend was able to pick up sbuild very quickly and use it to build a package for stretch, which I find encouraging: my first experience with pbuilder was definitely not as good. I have therefore started the process of switching my build chroots to sbuild, which didn't go so well on Jessie because I use a backported kernel, and had to use the backported sbuild as well. That required a lot of poking around, so I ended up just using pbuilder for now, but I will definitely switch on my home machine, and I updated the sbuild wiki page to give more explanations on how to set up pbuilder. We worked on a bunch of bugs, and learned how to tag them as part of the BSP, which was documented in the BSP wiki page. It seems we have worked on about 11 different bugs, which is a better average than the last BSP that I organized, so I'm pretty happy with that. More importantly, we got Debian people together to meet and talk over delicious pizza, thanks to a sponsorship granted by the DPL. Some people got involved in the next DebConf, which is also great. On top of fixing bugs and getting people involved in Debian, my third goal was to have fun, and fun we [...]
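For readers who want to retrace the crash course at home, the tools listed above chain together roughly like this. This is only a sketch: "somepackage" and the bug number are placeholders, the commands assume a Debian/Ubuntu system, and the exact options depend on the package at hand.

```
# fetch the source package and install what it needs to build
apt-get source somepackage            # "somepackage" is a placeholder
sudo apt-get build-dep somepackage
cd somepackage-*/

# apply the patch queue, document the change, rebuild
quilt push -a
dch --nmu "Apply fix for #nnnnnn"     # placeholder bug number
dpkg-buildpackage -us -uc

# afterwards: dput uploads the result, rmadison shows archive versions
```

A clean-chroot build via sbuild or pbuilder, as discussed in the post, replaces the dpkg-buildpackage step.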

Bits from Debian: DPL elections 2017, congratulations Chris Lamb!

Sun, 16 Apr 2017 16:40:00 +0000

The Debian Project Leader elections finished yesterday and the winner is Chris Lamb!

Of a total of 1062 developers, 322 developers voted using the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2017 page.

The current Debian Project Leader, Mehdi Dogguy, congratulated Chris Lamb in his Final bits from the (outgoing) DPL message. Thanks, Mehdi, for your service as DPL during these last twelve months!

The new term for the project leader starts on April 17th and expires on April 16th 2018.

Chris Lamb: Elected Debian Project Leader

Sun, 16 Apr 2017 12:52:34 +0000


I'd like to thank the entire Debian community for choosing me to represent them as the next Debian Project Leader.

I would also like to thank Mehdi for his tireless service and wish him all the best for the future. It is an honour to be elected as the DPL and I am humbled that you would place your faith and trust in me.

You can read my platform here.

Dirk Eddelbuettel: Rcpp now used by 1000 CRAN packages

Sat, 15 Apr 2017 21:28:00 +0000

Moments ago Rcpp passed a big milestone as there are now 1000 packages on CRAN depending on it (as measured by Depends, Imports and LinkingTo, but excluding Suggests). The graph on the left depicts the growth of Rcpp usage over time. One easy way to compute such reverse dependency counts is the tools::dependsOnPkgs() function that was just mentioned in yesterday's R^4 blog post. Another way is to use the reverse_dependencies_with_maintainers() function from this helper scripts file on CRAN. Lastly, devtools has a function revdep(), but it has the wrong default parameters as it includes Suggests:, which you'd have to override to get the count I use here (it currently gets 1012 in this wider measure). Rcpp cleared 300 packages in November 2014. It passed 400 packages in June 2015 (when I only tweeted about it), 500 packages in late October 2015, 600 packages last March, 700 packages last July, 800 packages last October and 900 packages early January. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page. Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014, where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent last summer, and nine percent mid-December 2016. Ten percent is next; we may get there during the summer. 1000 user packages is a really large number. This puts a whole lot of responsibility on us in the Rcpp team as we continue to keep Rcpp as performant and reliable as it has been. And with that a very big Thank You! 
to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings. [...]

Gunnar Wolf: On Dmitry Bogatov and empowering privacy-protecting tools

Sat, 15 Apr 2017 04:53:57 +0000

There is a thorny topic we have been discussing in nonpublic channels (say, the debian-private mailing list... It is impossible to call it a private list if it has close to a thousand subscribers, but it sometimes deals with sensitive material) for the last week. We finally have confirmation that we can bring this topic out to the open, and I expect several Debian people to talk about this. Besides, this information is now repeated all over the public Internet, so I'm not revealing anything sensitive. Oh, and there is a statement regarding Dmitry Bogatov published by the Tor project — But I'll get to Tor soon. One week ago, the 25-year-old mathematician and Debian Maintainer Dmitry Bogatov was arrested, accused of organizing riots and calling for terrorist activities. All evidence so far points to the fact that Dmitry is not guilty of what he is charged with — He was filmed at different places at the times when the calls for terrorism happened. It seems that Dmitry was arrested because he runs a Tor exit node. I don't know the current situation in Russia, nor his political leanings — But I do know what a Tor exit node looks like. I even had one at home for a short while. What is Tor? It is a network overlay, meant for people to hide where they come from or who they are. Why? There are many reasons — Uninformed people will talk about the evil wrongdoers (starting the list of course with the drug sellers or child porn distributors). People who have taken their time to understand what this is about will rather talk about people for whom free speech is not a given; journalists, political activists, whistleblowers. And also, about regular people — Many among us have taken the habit of doing some of our Web surfing using Tor (probably via the very fine and interesting TAILS distribution — The Amnesic Incognito Live System), just to increase the entropy, and just because we can, because we want to preserve the freedom to be anonymous before it's taken away from us. 
There are many types of nodes in Tor; most of them are just regular users or bridges that forward traffic, helping Tor's anonymization. Exit nodes, where packets leave the Tor network and enter the regular Internet, are much scarcer — Partly because they can be quite problematic to the people hosting them. But, yes, Tor needs more exit nodes, not just for bandwidth's sake, but because the more exit nodes there are, the harder it is for a hostile third party to monitor a sizable number of them for activity (and break the anonymization). I am coincidentally starting a project with a group of students of my Faculty (we want to breathe life again into LIDSOL - Laboratorio de Investigación y Desarrollo de Software Libre). As we are just starting, they are documenting some technical and social aspects of the need for privacy and how Tor works; I expect them to publish their findings in El Nigromante soon (which means... what? ☺ ), but definitively, part of what we want to do is to set up a Tor exit node at the university — Well documented and with enough academic justification to avoid our network operation area ordering us to shut it down. Let's see what happens :) Anyway, all in all — Dmitry is in for a heavy time. He has been detained pre-trial [...]

Dirk Eddelbuettel: #5: Easy package information

Sat, 15 Apr 2017 00:56:00 +0000

Welcome to the fifth post in the recklessly rambling R rants series, or R4 for short. The third post showed an easy way to follow R development by monitoring (curated) changes on the NEWS file for the development version r-devel. As a concrete example, I mentioned that it has shown a nice new function (tools::CRAN_package_db()) coming up in R 3.4.0. Today we will build on that. Consider the following short snippet:

library(data.table)

getPkgInfo <- function() {
    if (exists("tools::CRAN_package_db")) {
        dat <- tools::CRAN_package_db()
    } else {
        tf <- tempfile()
        download.file("", tf, quiet=TRUE)
        dat <- readRDS(tf)        # r-devel can now readRDS off a URL too
    }
    dat <- setDT(dat)
    dat
}

It defines a simple function getPkgInfo() as a wrapper around said new function from R 3.4.0, ie tools::CRAN_package_db(), and a fallback alternative using a tempfile (in the automagically cleaned R temp directory) and an explicit download and read of the underlying RDS file. As an aside, just this week the r-devel NEWS told us that such readRDS() operations can now read directly from a URL connection. Very nice---as RDS is a fantastic file format when you are working in R. Anyway, back to the RDS file! The snippet above returns a data.table object with as many rows as there are packages on CRAN, and basically all their (parsed!) DESCRIPTION info and then some. A gold mine! Consider this to see how many packages have a dependency (in the sense of Depends, Imports or LinkingTo, but not Suggests because Suggests != Depends) on Rcpp:

R> dat <- getPkgInfo()
R> rcppRevDepInd <- as.integer(tools::dependsOnPkgs("Rcpp", recursive=FALSE, installed=dat))
R> length(rcppRevDepInd)
[1] 998
R>

So exciting---we will hit 1000 within days! 
But let's do some more analysis:

R> dat[ rcppRevDepInd, RcppRevDep := TRUE]   # set to TRUE for given set
R> dat[ RcppRevDep==TRUE, 1:2]
            Package Version
  1:       ABCoptim  0.14.0
  2:  AbsFilterGSEA     1.5
  3:            acc   1.3.3
  4:  accelerometry   2.2.5
  5:       acebayes   1.3.4
 ---
994:         yakmoR   0.1.1
995:   yCrypticRNAs  0.99.2
996:          yuima   1.5.9
997:            zic     0.9
998:        ziphsmm   1.0.4
R>

Here we index the reverse dependency using the vector we had just computed, and then use that new variable to subset the data.table object. Given the aforementioned parsed information from all the DESCRIPTION files, we can learn more:

R> ## likely false entries
R> dat[ RcppRevDep==TRUE, ][NeedsCompilation!="yes", c(1:2,4)]
           Package Version                                                     Depends
1:         baitmet   1.0.0                                       Rcpp, erah (>= 1.0.5)
2:           bea.R   1.0.1                                    R (>= 3.2.1), data.table
3:            brms   1.6.0 R (>= 3.2.0), Rcpp (>= 0.12.0), ggplot2 (>= 2.0.0), methods
4: classifierplots   1.3.3                     R (>= 3.1), ggplot2 (>= 2.2), data.table ([...]

Laura Arjona Reina: Underestimating Debian

Fri, 14 Apr 2017 20:19:38 +0000

I had two issues in the last days that led me a bit into panic until they got solved. In both cases the issue was external to Debian, but I first thought that the problem was in Debian. I’m not sure why I had those thoughts, I should be more confident in myself, this awesome operating system, and the community around it! The good thing is that I’ll be more confident from now on, and I’ve learned that hurry is not a good friend, and I should face my computer “problems” (and everything in life, probably) with a bit more patience (and backups).

Issue 1: Corrupt ext partition in a laptop

I have a laptop at home with dual boot Windows 7 + Debian 9 (Stretch). I rarely boot the Windows partition. When I do, I do whatever I need to do/test there, then install updates, and then shutdown the laptop or reboot in Debian to feel happy again when using computers. Some months ago I noticed that booting in Debian was not possible and I was left in an initramfs console that was suggesting to e2fsck /dev/sda6 (my Debian partition). Then I ran e2fsck, said “a” to fix all the issues found, and the system was booting properly. This issue was a bit scary-looking because of the e2fsck output making the screen show random numbers and scrolling quickly for 1 or 2 minutes, until all the inodes or blocks or whatever were fixed. I thought about the disk being faulty, and ran badblocks, but faced the former boot issue again some time after, and then decided to change the disk (then I took the opportunity to make backups, and install a fresh Debian 9 Stretch in the laptop, instead of the Debian 8 stable that was running). The experience with Stretch has been great since then, but some days ago I faced the boot issue again. Then I realised that maybe the issue was appearing when I booted Debian right after using Windows (and this was why it was appearing not very often in my timeline). 
Then I paid more attention to the message that I was receiving in the console:

Superblock checksum does not match superblock while trying to open /dev/sda6
/dev/sda6:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193
 or
    e2fsck -b 32768

and searched about it, and also asked about it to my friends in the redeslibres XMPP chat room. I found this question in the AskUbuntu forum that was exactly my issue (I had ext2fsd installed in Windows). My friends in the XMPP room friendly yelled “booo!” at me for letting Windows touch my ext partitions (I apologised, it will never happen again!). I now could consistently reproduce the issue (boot Windows, then boot Debian, bang!: initramfs console, e2fsck, reboot Debian, no problem, boot Windows, boot Debian, again the problem, etc). I uninstalled the ext2fsd program and tried to reproduce the issue, and I couldn’t. So, happy end.

Issue 2: Accessing Android internal memory to backup files

The other issue was with my tablet running Android 4.0.4. It was facing a charge issue, and[...]
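When the default superblock is unreadable, the backup superblock locations can be listed first before retrying e2fsck as the error message suggests. A sketch only, assuming the partition is /dev/sda6 as in this post; dumpe2fs only reads from the device, while e2fsck will modify it, so have backups first:

```
# dumpe2fs is read-only; it prints primary and backup superblock locations
dumpe2fs /dev/sda6 | grep -i "superblock at"
# then point e2fsck at one of the reported backups:
e2fsck -b 32768 /dev/sda6
```

Both tools ship in the e2fsprogs package that provides e2fsck itself.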

Dirk Eddelbuettel: RcppArmadillo 0.7.800.2.0

Fri, 14 Apr 2017 02:14:00 +0000



A new RcppArmadillo version 0.7.800.2.0 is now on CRAN.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 318 other packages on CRAN -- an increase of 20 just since the last CRAN release of 0.7.600.1.0 in December!

Changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.800.2.0 (2017-04-12)

  • Upgraded to Armadillo release 7.800.2 (Rogue State Redux)

    • The Armadillo license changed to Apache License 2.0
  • The DESCRIPTION file now mentions the Apache License 2.0, as well as the former MPL2 license used for earlier releases.

  • A new file init.c was added with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Symbol registration is enabled in useDynLib

  • The fastLm example was updated

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.


Raphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2017

Thu, 13 Apr 2017 16:05:26 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 190 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Antoine Beaupré did 19 hours (out of 14.75h allocated + 10 remaining hours, thus keeping 5.75 extra hours for April).
  • Balint Reczey did nothing (out of 14.75 hours allocated + 2.5 hours remaining) and gave back all his unused hours. He took on a new job and will stop his work as an LTS paid contributor.
  • Ben Hutchings did 14.75 hours.
  • Brian May did 10 hours.
  • Chris Lamb did 14.75 hours.
  • Emilio Pozuelo Monfort did 11.75 hours (out of 14.75 hours allocated + 0.5 hours remaining, thus keeping 3.5 hours for April).
  • Guido Günther did 4 hours (out of 8 hours allocated, thus keeping 4 extra hours for April).
  • Hugo Lefeuvre did 4 hours (out of 13.5 hours allocated, thus keeping 9.5 extra hours for April).
  • Jonas Meurer did 11.25 hours (out of 14.75 hours allocated, thus keeping 3.5 extra hours for April).
  • Markus Koschany did 14.75 hours.
  • Ola Lundqvist did 23.75 hours (out of 14.75h allocated + 9 hours remaining).
  • Raphaël Hertzog did 15 hours (out of 10 hours allocated + 6.25 hours remaining, thus keeping 1.25 hours for April).
  • Roberto C. Sanchez did 21.5 hours (out of 14.75 hours allocated + 7.75 hours remaining, thus keeping 1 extra hour for April).
  • Thorsten Alteholz did 14.75 hours.

Evolution of the situation

The number of sponsored hours is unchanged but will likely decrease slightly next month, as one sponsor will not renew their support (because they have switched to CentOS). The security tracker currently lists 52 packages with a known CVE and the dla-needed.txt file 40. The number of open issues continued its slight increase… not worrisome yet, but we need to keep an eye on this situation.

Thanks to our sponsors

New sponsors are in bold. 
Platinum sponsors:
  • TOSHIBA (for 18 months)
  • GitHub (for 9 months)

Gold sponsors:
  • The Positive Internet (for 34 months)
  • Blablacar (for 33 months)
  • Linode LLC (for 23 months)
  • Babiel GmbH (for 12 months)
  • Plat’Home (for 12 months)

Silver sponsors:
  • Domeneshop AS (for 33 months)
  • Université Lille 3 (for 33 months)
  • Trollweb Solutions (for 31 months)
  • Nantes Métropole (for 27 months)
  • University of Luxembourg (for 25 months)
  • Dalenys (for 24 months)
  • Univention GmbH (for 19 months)
  • Université Jean Monnet de St Etienne (for 19 months)
  • Sonus Networks (for 13 months)
  • UR Communications BV (for 7 months)
  • maxcluster GmbH (for 7 months)
  • Exonet B.V. (for 3 months)

Bronze sponsors:
  • David Ayers – IntarS Austria (for 34 months)
  • Evolix (for 34 months)
  • Offensive Security (for 34 months)
  • , a.s. (for 34 months)
  • Freeside Internet Service (for 33 months)
  • MyTux (for 33 months)
  • Linuxhotel GmbH (for 31 months)
  • Intevation GmbH (for 30 months)
  • Daevel SARL (for 29 months)
  • Bitfolk LTD (for 28 months)
  • Megaspace Internet Services GmbH (for 28 months)
  • Greenbone Networks GmbH (for 27 months)
  • NUMLOG (for 27 months)
  • WinGo AG (for 26 months)
  • Ecole Centrale de Nantes – LHEEA (for 23 months)
  • Sig-I/O (for 20 months)
  • Entr’ouvert (for 18 months)
  • Adfinis SyGroup AG (for 15 months) L[...]

Francois Marier: Automatically renewing Let's Encrypt TLS certificates on Debian using Certbot

Thu, 13 Apr 2017 15:00:00 +0000


I use Let's Encrypt TLS certificates on my Debian servers along with the Certbot tool. Since I use the "temporary webserver" method of proving domain ownership via the ACME protocol, I cannot use the cert renewal cronjob built into Certbot.

Instead, this is the script I put in /etc/cron.daily/certbot-renew:

#!/bin/bash
/usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"

pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"
    echo "$DIFFSTAT"
fi
popd > /dev/null

It temporarily disables my Apache webserver while it renews the certificates and then only outputs something to STDOUT (since my cronjob will email me any output) if certs have been renewed.

Since I'm using etckeeper to keep track of config changes on my servers, my renewal script also commits to the repository if any certs have changed.
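Before trusting the cron job, the renewal can also be exercised by hand against Let's Encrypt's staging environment. A sketch: certbot's renew subcommand accepts a --dry-run option that uses the staging server and leaves the installed certificates untouched, but check the behaviour of your installed certbot version first.

```
# simulate renewal against the staging server; no certificates are changed
/usr/bin/certbot renew --dry-run \
    --pre-hook "/bin/systemctl stop apache2.service" \
    --post-hook "/bin/systemctl start apache2.service"
```

Running it with the same pre/post hooks as the cron script also verifies that Apache is stopped and restarted cleanly.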

External Monitoring

In order to catch mistakes or oversights, I use ssl-cert-check to monitor my domains once a day:

ssl-cert-check -s -p 443 -q -a -e

I also signed up with Cert Spotter which watches the Certificate Transparency log and notifies me of any newly-issued certificates for my domains.

In other words, I get notified:

  • if my cronjob fails and a cert is about to expire, or
  • as soon as a new cert is issued.

The whole thing seems to work well, but if there's anything I could be doing better, feel free to leave a comment!

Michal Čihař: Weblate 2.13.1

Thu, 13 Apr 2017 04:00:22 +0000


Weblate 2.13.1 has been released quickly after 2.13. It fixes a few minor issues and a possible upgrade problem.

Full list of changes:

  • Fixed listing of managed projects in profile.
  • Fixed migration issue where some permissions were missing.
  • Fixed listing of current file format in translation download.
  • Return HTTP 404 when trying to access project where user lacks privileges.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on, the code is hosted on Github. If you are curious how it looks, you can try it out on demo server. You can login there with demo account using demo password or register your own user. Weblate is also being used on as official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either by comments or by providing a bounty for them.


Vincent Bernat: Proper isolation of a Linux bridge

Wed, 12 Apr 2017 07:58:29 +0000

TL;DR: when configuring a Linux bridge, use the following commands to enforce isolation:

# bridge vlan del dev br0 vid 1 self
# echo 1 > /sys/class/net/br0/bridge/vlan_filtering

A network bridge (also commonly called a “switch”) brings several Ethernet segments together. It is a common element in most infrastructures. Linux provides its own implementation. A typical use of a Linux bridge is shown below. The hypervisor is running three virtual hosts. Each virtual host is attached to the br0 bridge (represented by the horizontal segment). The hypervisor has two physical network interfaces:

  • eth0 is attached to a public network providing various services for the virtual hosts (DHCP, DNS, NTP, routers to Internet, …). It is also part of the br0 bridge.
  • eth1 is attached to an infrastructure network providing various services to the hypervisor (DNS, NTP, configuration management, routers to Internet, …). It is not part of the br0 bridge.

The main expectation of such a setup is that while the virtual hosts should be able to use resources from the public network, they should not be able to access resources from the infrastructure network (including resources hosted on the hypervisor itself, like an SSH server). In other words, we expect a total isolation between the green domain and the purple one. That’s not the case. From any virtual host:

# ip route add dev eth0
# ping -c 3
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=59 time=0.644 ms
64 bytes from icmp_seq=2 ttl=59 time=0.829 ms
64 bytes from icmp_seq=3 ttl=59 time=0.894 ms
--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.644/0.789/0.894/0.105 ms

Why? 
There are two main factors behind this behavior:

  • A bridge can accept IP traffic. This is a useful feature if you want Linux to act as a bridge and provide some IP services to bridge users (a DHCP relay or a default gateway). This is usually done by configuring the IP address on the bridge device: ip addr add dev br0. An interface doesn’t need an IP address to process incoming IP traffic.
  • Additionally, by default, Linux accepts to answer ARP requests independently from the incoming interface.

Bridge processing

After turning an incoming Ethernet frame into a socket buffer, the network driver transfers the buffer to the netif_receive_skb() function. The following actions are executed:

  • copy the frame to any registered global or per-device taps (e.g. tcpdump),
  • evaluate the ingress policy (configured with tc),
  • hand over the frame to the device-specific receive handler, if any,
  • hand over the frame t[...]
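The TL;DR commands from the top of the post map directly onto these two factors: they stop the bridge device itself from being a valid destination for bridged frames. Annotated (same two commands, to be run as root on the hypervisor):

```
# enable VLAN filtering: frames are now checked against the bridge's VLAN table
echo 1 > /sys/class/net/br0/bridge/vlan_filtering
# remove the default VLAN 1 from the bridge device itself ("self"), so frames
# are no longer delivered to the hypervisor's own IP stack via br0
bridge vlan del dev br0 vid 1 self
```

The ports attached to br0 keep their membership in VLAN 1, so traffic between the virtual hosts and eth0 is unaffected; only the path into the host stack is closed.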

Daniel Pocock: What is the risk of using proprietary software for people who prefer not to?

Wed, 12 Apr 2017 06:43:08 +0000

Jonas Öberg has recently blogged about Using Proprietary Software for Freedom. He argues that it can be acceptable to use proprietary software to further free and open source software ambitions if that is indeed the purpose. Jonas' blog suggests that each time proprietary software is used, the relative risk and reward should be considered and there may be situations where the reward is big enough and the risk low enough that proprietary software can be used. A question of leadership Many of the free software users and developers I've spoken to express frustration about how difficult it is to communicate to their family and friends about the risks of proprietary software. A typical example is explaining to family members why you would never install Skype. Imagine a doctor who gives a talk to school children about the dangers of smoking and is then spotted having a fag at the bus stop. After a month, if you ask the children what they remember about that doctor, is it more likely to be what he said or what he did? When contemplating Jonas' words, it is important to consider this leadership factor as a significant risk every time proprietary software or services are used. Getting busted with just one piece of proprietary software undermines your own credibility and posture now and well into the future. Research has shown that when communicating with people, what they see and how you communicate is ninety three percent of the impression you make. What you actually say to them is only seven percent. When giving a talk at a conference or a demo to a client, or communicating with family members in our everyday lives, using a proprietary application or a product or service that is obviously proprietary like an iPhone or Facebook will have far more impact than the words you say. 
It is not only a question of what you are seen doing in public: somebody who lives happily and comfortably without using proprietary software sounds a lot more credible than somebody who tries to explain freedom without living it. The many faces of proprietary software One of the first things to consider is that even for those developers who have a completely free operating system, there may well be some proprietary code lurking in their BIOS or other parts of their hardware. Their mobile phone, their car, their oven and even their alarm clock are all likely to contain some proprietary code too. The risks associated with these technologies may well be quite minimal, at least until that alarm clock becomes part of the Internet of Things and can be hacked by the bored teenager next door. Accessing most web sites these days inevitably involves some interaction with proprietary software, even if it is not running on your own computer. There is no need to give up Some people may consider this state of affairs and simply give up, using whatever appears to be the easiest solution for each problem at hand without thinking too much about whether it is proprietary or not. I don't think Jonas' blog intended to sanction this level of c[...]

Michal Čihař: Weblate 2.13

Wed, 12 Apr 2017 06:30:24 +0000

Weblate 2.13 has been released today, pretty much on schedule. The most important change is more fine-grained access control, along with some smaller UI improvements. There are other new features and bug fixes as well. Full list of changes:

  • Fixed quality checks on translation templates.
  • Added quality check to trigger on losing translation.
  • Add option to view pending suggestions from user.
  • Add option to automatically build component lists.
  • Default dashboard for unauthenticated users can be configured.
  • Add option to browse 25 random strings for review.
  • History now indicates string change.
  • Better error reporting when adding new translation.
  • Added per language search within project.
  • Group ACLs can now be limited to certain permissions.
  • The per project ACLs are now implemented using Group ACL.
  • Added more fine grained privileges control.
  • Various minor UI improvements.

If you are upgrading from an older version, please follow our upgrading instructions. You can find more information about Weblate on, the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. You can login there with the demo account using demo password or register your own user. Weblate is also being used on as official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects. Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure. Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, you can influence this by expressing support for individual issues either by comments or by providing bounty for them.

Joey Hess: starting debug-me and a new devblog

Tue, 11 Apr 2017 23:26:05 +0000


I've started building debug-me. It's my birthday, and building a new program is kind of my birthday gift to myself, because I love starting a new program and seeing where it goes. (Also, my Patreon backers wanted me to get on with building debug-me.)

I also have a new devblog! Up until now, I've had a devblog that only covered work on git-annex. That one continues, but the new devblog is for development journaling for any project I'm working on.

Matthew Garrett: Disabling SSL validation in binary apps

Tue, 11 Apr 2017 22:27:28 +0000

Reverse engineering protocols is a great deal easier when they're not encrypted. Thankfully most apps I've dealt with have been doing something convenient like using AES with a key embedded in the app, but others use remote protocols over HTTPS and that makes things much less straightforward. MITMProxy will solve this, as long as you're able to get the app to trust its certificate, but if there's a built-in pinned certificate that's going to be a pain. So, given an app written in C running on an embedded device, and without an easy way to inject new certificates into that device, what do you do?

First: the app is probably using libcurl, because it's free, works and is under a license that allows you to link it into proprietary apps. This is also bad news, because libcurl defaults to having sensible security settings. In the worst case we've got a statically linked binary with all the symbols stripped out, so we're left with the problem of (a) finding the relevant code and (b) replacing it with modified code. Fortunately, this is much less difficult than you might imagine.

First, let's find where curl sets up its defaults. Curl_init_userdefined() in curl/lib/url.c has the following code:

  set->ssl.primary.verifypeer = TRUE;
  set->ssl.primary.verifyhost = TRUE;
#ifdef USE_TLS_SRP
  set->ssl.authtype = CURL_TLSAUTH_NONE;
#endif
  set->ssh_auth_types = CURLSSH_AUTH_DEFAULT; /* defaults to any auth type */
  set->general_ssl.sessionid = TRUE; /* session ID caching enabled by default */
  set->proxy_ssl = set->ssl;
  set->new_file_perms = 0644; /* Default permissions */
  set->new_directory_perms = 0755; /* Default permissions */

TRUE is defined as 1, so we want to change the code that currently sets verifypeer and verifyhost to 1 to instead set them to 0. How to find it? Look further down - new_file_perms is set to 0644 and new_directory_perms is set to 0755. The leading 0 indicates octal, so these correspond to decimal 420 and 493. 
Passing the file to objdump -d (assuming a build of objdump that supports this architecture) will give us a disassembled version of the code, so time to fix our problems with grep:

objdump -d target | grep --after=20 ,420 | grep ,493

This gives us the disassembly of target, searches for any occurrence of ",420" (indicating that 420 is being used as an argument in an instruction), prints the following 20 lines and then searches for a reference to 493. It spits out a single hit:

  43e864:       240301ed        li      v1,493

Which is promising. Looking at the surrounding code gives:

  43e820:       24030001        li      v1,1
  43e824:       a0430138        sb      v1,312(v0)
  43e828:       8fc20018        lw      v0,24(s8)
  43e82c:       24030001        li      v1,1
  43e830:       a0430139        sb      v1,313(v0)
  43e834:       8fc20018        lw      v0,24(s8)
  43e838:       ac400170        sw      zero,368[...]
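The patching step that follows from this can be sketched in a few lines of Python. This is a sketch, not the post's own tooling: the offsets are illustrative, and with a real binary you would first have to translate objdump's virtual addresses (0x43e820, 0x43e82c) into file offsets using the ELF program headers.

```python
import struct

# Rewrite big-endian MIPS "li v1,1" (0x24030001) as "li v1,0" so that
# the verifypeer/verifyhost defaults become 0.
LI_V1_1 = 0x24030001
LI_V1_0 = 0x24030000

def patch_li_to_zero(image: bytearray, offset: int) -> None:
    """Check the expected instruction sits at offset, then zero its immediate."""
    (insn,) = struct.unpack_from(">I", image, offset)
    if insn != LI_V1_1:
        raise ValueError("unexpected instruction %#010x at %#x" % (insn, offset))
    struct.pack_into(">I", image, offset, LI_V1_0)

# Demo on an in-memory image standing in for the real file contents;
# the offsets 0 and 4 are hypothetical.
image = bytearray(struct.pack(">II", LI_V1_1, LI_V1_1))
for off in (0, 4):
    patch_li_to_zero(image, off)
assert image == struct.pack(">II", LI_V1_0, LI_V1_0)
```

The sanity check before writing matters: a stripped static binary gives you no symbols to confirm you found the right place, so refusing to patch anything that isn't the exact expected instruction is cheap insurance.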

Sean Whitton: coffeereddit

Tue, 11 Apr 2017 21:08:43 +0000


Most people I know can handle a single coffee per day, sometimes even forgetting to drink it. I never could understand how they did it. Talking about this with a therapist I realised that the problem isn’t necessarily the caffeine, it’s my low tolerance of less than razor sharp focus. Most people accept they have slumps in their focus and just work through them. — binarybear on reddit

Riku Voipio: Deploying OBS

Tue, 11 Apr 2017 20:23:27 +0000

Open Build Service from SuSE is a web service for building deb/rpm packages. It has recently been added to Debian, so finally there is a relatively easy way to set up PPA-style repositories in Debian. Relative as in "there is a learning curve, but nowhere near the complexity of replicating Debian's internal infrastructure". OBS will give you both repositories and build infrastructure with a clickety web UI and a command line client (osc) to manage them. See Hector's blog for quickstart instructions.

Things learned while setting up OBS

With me coming from a Debian background, and OBS coming from the SuSE/RPM world, there are some quirks that can take you by surprise.

Well done packaging

Usually web services are a tough fit for distros: a cascade of weird dependencies and build systems where the only practical way to build an "open source" web service is by replicating the upstream CI scripts. Not in the case of OBS. Being done by distro people shows.

OBS does automatic rebuilds of reverse dependencies

Aka automatic binNMUs when you update a library. This however means you need lots of build power around. OBS has its own dependency resolver on the server that recalculates what packages need rebuilding when; workers just get a list of packages to install for build-depends. This is a major divergence from Debian, where sbuild handles dependencies client side. The OBS dependency handler doesn't handle virtual packages* / alternative build-deps like Debian does - you may have to add a specific "Prefer: foo-dev" into the OBS project config to resolve alternative choices.

OBS server and worker do http requests in both directions

On startup workers connect to the OBS server, open a TCP port and wait for requests coming from OBS. Having connections in both directions is a bit of a hassle firewall-wise. On the bright side, no need to set up uploads via FTP here.

Signing repositories is complicated

With Debian 9.0 making signed repositories pretty much mandatory, OBS makes signing rather complicated.
obs-signd isn't included in Debian, since it depends on a gnupg patch that hasn't been upstreamed. Fortunately I found a workaround. OBS signs release files with /usr/bin/sign -d /path/to/release, so replacing the obs-signd-provided sign command with your own script is easy ;)

Git integration is rather bolted-on than integrated

OBS provides a method to integrate with git using services. There is no clickety UI to link to a git repo; instead you make an xml file called _service with osc. There is no way to have a debian/ tree in git.

The upstream community is friendly

Including the happiest thanks from an upstream I've seen recently.

Summary

All in all I'm rather satisfied with OBS. If you have a home-grown jenkins etc based solution for building DEB/RPM packages, you should definitely consider OBS. For simpler uses, no need to install OBS yourself: the public openSUSE OBS will happily build Debian packages for you.

*How useful are virtual packages anymore?[...]
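For reference, a minimal _service file might look like the following. The service and parameter names here follow the obs-service-tar_scm package and the URL is a placeholder; treat this as a sketch rather than something taken from the post:

```xml
<services>
  <service name="tar_scm">
    <param name="scm">git</param>
    <param name="url">https://example.org/myproject.git</param>
    <param name="revision">master</param>
  </service>
</services>
```

After committing this file with osc, OBS runs the service server-side to fetch the sources, which is why there is no clickety UI step involved.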

Antoine Beaupré: A report from Netconf: Day 2

Tue, 11 Apr 2017 17:00:00 +0000

This article covers the second day of the informal Netconf discussions, held on April 4, 2017. Topics discussed this day included the binding of sockets in VRF, identification of eBPF programs, inconsistencies between IPv4 and IPv6, changes to data-center hardware, and more. (See this article for coverage from the first day of discussions.)

How to bind to specific sockets in VRF

One of the first presentations was from David Ahern of Cumulus, who presented a few interesting questions for the audience. His first was the problem of binding sockets to a given interface. Right now, there are four different ways this can be done:

  • the old SO_BINDTODEVICE generic socket option (see socket(7))
  • the IP_PKTINFO, IP-specific socket option (see ip(7)), introduced in Linux 2.2
  • the IP_UNICAST_IF flag, introduced in Linux 3.3 for WINE
  • the IPv6 scope ID suffix, part of the IPv6 addressing standard

So there's a problem of having too many ways of doing the same thing, something that cannot really be fixed without breaking ABI compatibility. But even worse, conflicts between those options are not reported by the kernel, so it's possible for a user to set up socket flags in a way that certain flags override others and there are no checks made or errors reported. It was agreed that the user should get some notification of conflicting changes here, at least.

Furthermore, binding sockets to a specific VRF (Virtual Routing and Forwarding) device is not currently possible, so Ahern asked what the best way to do this would be, considering the many options available. A use case example is a UDP multicast socket that could be bound to a specific interface within a VRF. This is an old problem: Tom Herbert explained that there were previous discussions about making the bind() system call more programmable so that, for example, you could bind() a UDP socket to a discrete list of IP addresses or a subnet.
So he identified this issue as a broader problem that should be addressed by making the interfaces more generic.

Ahern explained that it is currently possible to bind sockets to the slave device of a VRF even though that should not be allowed. He also raised the question of how the kernel should tell which socket should be selected for incoming packets. Right now, there is a scoring mechanism for UDP sockets, but that cannot be used directly in this more general case.

David Miller said that there are already different ways of specifying scope: there is the VRF layer and the namespace ("netns") layer. A long time ago, Miller reluctantly accepted the addition of netns keys everywhere, swallowing the performance cost to gain flexibility. He argued that a new key should not be added and instead existing infrastructure should be reused. Herbert argued this was exactly the reason why this should be simplified: "if we don't answer the question, people will keep on tr[...]
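The oldest of the four binding mechanisms listed above can be sketched in a few lines of Python. This is an illustration, not from the article: the interface name is a placeholder, the option is Linux-specific, and actually setting it requires CAP_NET_RAW.

```python
import socket

# SO_BINDTODEVICE is 25 on Linux; socket may not expose the constant
# everywhere, so fall back to the raw value for the sketch.
SO_BINDTODEVICE = getattr(socket, "SO_BINDTODEVICE", 25)

def bind_to_device(sock: socket.socket, ifname: str) -> None:
    """Restrict sock to sending/receiving via the named interface."""
    sock.setsockopt(socket.SOL_SOCKET, SO_BINDTODEVICE,
                    ifname.encode() + b"\0")

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    bind_to_device(s, "eth0")   # "eth0" is illustrative
except OSError:
    pass  # expected when unprivileged or if the interface doesn't exist
finally:
    s.close()
```

The other three mechanisms (IP_PKTINFO, IP_UNICAST_IF, IPv6 scope IDs) attach per-packet or per-destination scope instead of pinning the whole socket, which is exactly the overlap the discussion was about.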

Antoine Beaupré: A report from Netconf: Day 1

Tue, 11 Apr 2017 17:00:00 +0000

As is becoming traditional, two times a year the kernel networking community meets in a two-stage conference: an invite-only, informal, two-day plenary session called Netconf, held in Toronto this year, and a more conventional one-track conference open to the public called Netdev. I was invited to cover both conferences this year, given that Netdev was in Montreal (my hometown), and was happy to meet the crew of developers that maintain the network stack of the Linux kernel. This article covers the first day of the conference, which consisted of around 25 Linux developers meeting under the direction of David Miller, the kernel's networking subsystem maintainer. Netconf has no formal sessions; although some people presented slides, interruptions are frequent (indeed, encouraged) and the focus is on hashing out issues that are blocked on the mailing list and getting suggestions, ideas, solutions, and feedback from their peers.

Removing ndo_select_queue()

One of the first discussions that elicited a significant debate was the ndo_select_queue() function, a key component of the Linux polling system that determines when and how to send packets on a network interface (see netdev_pick_tx and friends). The general question was whether the use of ndo_select_queue() in drivers is a good idea. Alexander Duyck explained that Intel people were considering using ndo_select_queue() for receive/transmit queue matching. Intel drivers do not currently use the hook provided by the Linux kernel, and it turns out no one is happy with ndo_select_queue(): the heuristics it uses don't really please anyone. The consensus (including from Duyck himself) seemed to be that it should just not be used anymore, or at least not used for that specific purpose. The discussion turned toward the wireless network stack, which uses it extensively, but for other purposes.
Johannes Berg explained that the wireless stack uses ndo_select_queue() for traffic classification, for example to get voice traffic through even if the best-effort queue is backed up. The wireless stack could stop using it by doing flow control completely inside the wireless stack, which already uses the fq_codel flow-control mechanism for other purposes, so porting away from ndo_select_queue() seems possible there. The problem then becomes how to update all the drivers to change that behavior, which would be a lot of work. Still, it seems people are moving away from a generic ndo_select_queue() interface to stack-specific or even driver-specific (in the case of Intel) queue management interfaces.

refcount_t followup

There was a followup discussion on the integration of the refcount_t type into the network stack, which we covered recently. This type is meant to be an in-kernel defense against exploits based on overflowing or underflowing an object's reference count. The consensu[...]

Reproducible builds folks: Reproducible Builds: week 102 in Stretch cycle

Tue, 11 Apr 2017 07:27:27 +0000

Here's what happened in the Reproducible Builds effort between Sunday April 2 and Saturday April 8 2017:

Media coverage

Toolchain development and fixes

Reviews of unreproducible packages

27 package reviews have been added, 14 have been updated and 17 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Aaron M. Ucko (1)
  • Adrian Bunk (1)
  • Chris Lamb (2)


This week's edition was written by Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Daniel Pocock: If Alan Turing was born today, would he be a Muslim?

Mon, 10 Apr 2017 20:01:01 +0000

Alan Turing's name and his work are well known to anybody with a theoretical grounding in computer science. Turing developed his theories well before anybody invented file sharing, overclocking or mass surveillance. In fact, Turing was largely working in the absence of any computers at all: the transistor was only invented in 1947 and the microchip, the critical innovation that has made computing both affordable and portable, only came in 1960, four years after Turing's death. To this day, the Turing Test remains a well known challenge in the field of Artificial Intelligence. The most prestigious prize in computing, the A.M. Turing Award from the ACM, equivalent to the Nobel Prize in other fields of endeavour, is named in Turing's honour. (This year's award goes to another British scientist, Sir Tim Berners-Lee, inventor of the World Wide Web.) Potentially far more people know of Alan Turing for his groundbreaking work at Bletchley Park and the impact it had on cracking the Nazis' Enigma machines during World War 2, giving the allies an advantage against Hitler. While in his lifetime Turing exposed the secret communications of the Nazis, in his death he exposed something manifestly repugnant about his own society. Turing's challenges with his sexuality (or Britain's challenge with it) are just as well documented as his greatest scientific achievements. The 2014 movie The Imitation Game tells Turing's story, bringing together the themes from his professional and personal life. Had Turing chosen to flee British persecution by going abroad, he would be a refugee in the same sense as any person who crossed the seas to reach Europe today to avoid persecution elsewhere.

Please prove me wrong

In March, I blogged about the problem of racism that plagues Britain today. While some may have felt the tone of the blog was quite strong, I was in no way pleased to find my position affirmed by the events that occurred in the two days after the blog appeared.
Two days and two more human beings (both immigrants and both refugees) subjected to abhorrent and unnecessary acts of abuse in Great Britain. Both cases appear to be fuelled directly by the evil that has been oozing out of number 10 Downing Street since they decided to have a referendum on "Brexit". What stands out about these latest crimes is not that they occurred (this type of thing has been going on for months now) but certain contrasts between their circumstances and to a lesser extent, the fact they occurred immediately after Theresa May formalized Britain's departure from the EU. One of the victims was almost beaten to death by a street gang, while the other was abused by men wearing uniforms. One was only a child, while the other is a mature adult who has been in the UK almost three decades, completely a[...]

Michal Čihař: New free software projects on Hosted Weblate

Mon, 10 Apr 2017 10:00:23 +0000


Hosted Weblate also provides free hosting for free software projects. I finally got to processing requests a bit faster, so there are just a few new projects.

This time, the newly hosted projects include:

  • Pext - Python-based extendable tool
  • Dino - modern Jabber/XMPP Client using GTK+/Vala

If you want to support this effort, please donate to Weblate; recurring donations in particular are welcome to keep this service alive. You can make them on Liberapay or Bountysource.

Filed under: Debian English Weblate | 0 comments

Enrico Zini: Ansible config for my stereo

Sun, 09 Apr 2017 18:54:06 +0000

I bought a Raspberry Pi 2 and its case. I could not reuse the existing SD card because it wants a MicroSD.

A wise person once told me:

First you do it, then you document it, then you automate it.

I had done the first two, and now I've redone the whole setup with ansible, here: stereo.tar.xz.


Sam Hartman: When "when" is too hard a question: SQLAlchemy, Python datetime, and ISO8601

Sun, 09 Apr 2017 18:39:39 +0000

A new programmer asked on a work chat room how timezones are handled in databases. He asked if it was a good idea to store things in UTC. The senior programmers all laughed as we told some of our horror stories with timezones. Yes, UTC is great; if only it were that simple.

About a week later I was designing the schema for a blue sky project I'm implementing. I had to confront time in all its Pythonic horror.

Let's start with the datetime.datetime class. Datetime objects optionally include a timezone. If no timezone is present, several methods such as timestamp treat the object as a local time in the system's timezone. The timestamp method returns a POSIX timestamp, which is always expressed in UTC, so knowing the input timezone is important. The now method constructs such an object from the current time.

However other methods act differently. The utcnow method constructs a datetime object that has the UTC time, but is not marked with a timezone. So, for example, datetime.fromtimestamp(datetime.utcnow().timestamp()) produces the wrong result unless your system timezone happens to have the same offset as UTC.

It's also possible to construct a datetime object that includes a UTC time and is marked as having a UTC time. The utcnow method never does this, but you can pass the UTC timezone into the now method and get that effect. As you'd expect, the timestamp method returns the correct result on such a datetime.

Now enter SQLAlchemy, one of the more popular Python ORMs. Its DATETIME type has an argument that tries to request a column capable of storing a timezone from the underlying database. You aren't guaranteed to get this though; some databases don't provide that functionality. With PostgreSQL, I do get such a column, although something in SQLAlchemy is not preserving the timezones (although it is correctly adjusting the time).
That is, I'll store a UTC time in an object, flush it to my session, and then read back the same time represented in my local timezone (marked as my local timezone). You'd think this would be safe.

Enter SQLite. SQLite makes life hard for people wanting to store time; it seems to want to store things as strings. That's fairly incompatible with storing a timezone and doing any sort of comparisons on dates. SQLAlchemy does not try to store a timezone in SQLite. It just trims any timezone information from the datetime. So, if I do something like:

    obj.date_col = d
    session.add(obj)
    session.flush()
    assert obj.date_col == d                               # fails
    assert obj.date_col.timestamp() == d.timestamp()       # fails
    assert d == obj.date_col.replace(tzinfo=timezone.utc)  # finally succeeds

There are some unfortunate consequences of this. If you mar[...]
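The utcnow() pitfall described above can be reproduced in a few lines of plain Python, with no SQLAlchemy involved. This is a sketch of the behaviour, not code from the post:

```python
from datetime import datetime, timezone

# A naive datetime is reinterpreted as *local* time by timestamp(),
# so round-trips silently shift unless the system timezone is UTC.
aware = datetime.now(timezone.utc)    # aware object: carries tzinfo
naive = aware.replace(tzinfo=None)    # what utcnow() would give you

# An aware datetime round-trips through a POSIX timestamp exactly:
assert datetime.fromtimestamp(aware.timestamp(), timezone.utc) == aware

# The naive datetime's timestamp is off by the local UTC offset
# (zero only if the system timezone happens to be UTC):
local_offset = datetime.now().astimezone().utcoffset().total_seconds()
assert aware.timestamp() - naive.timestamp() == local_offset
```

This is why marking datetimes with an explicit timezone everywhere, painful as it is, is the only defence once an ORM starts trimming tzinfo behind your back.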

Antoine Beaupré: Contribute your skills to Debian in Montreal, April 14 2017

Sun, 09 Apr 2017 15:06:08 +0000

Join us in Montreal, on April 14 2017, and we will find a way in which you can help Debian with your current set of skills! You might even learn one or two things in passing (but you don't have to).

Debian is a free operating system for your computer. An operating system is the set of basic programs and utilities that make your computer run. Debian comes with tens of thousands of packages, precompiled software bundled up for easy installation on your machine. A number of other operating systems, such as Ubuntu and Tails, are based on Debian. The upcoming version of Debian, called Stretch, will be released later this year.

We need you to help us make it awesome

Whether you're a computer user, a graphics designer, or a bug triager, there are many ways you can contribute to this effort. We also welcome experience in consensus decision-making, anti-harassment teams, and package maintenance. No effort is too small and whatever you bring to this community will be appreciated. Here's what we will be doing: we will triage bug reports that are blocking the release of the upcoming version of Debian, and Debian package maintainers will fix some of these bugs.

Goals and principles

This is a work in progress, and a statement of intent. Not everything is organized and confirmed yet. We want to bring together a heterogeneous group of people. This goal will guide our handling of sponsorship requests, and will help us make decisions if more people want to attend than we can welcome properly. In other words: if you're part of a group that is currently under-represented in computer communities, we would like you to be able to attend. We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar personal characteristic.
Attending this event requires reading and respecting the Debian Code of Conduct, which sets the standards of behaviour for the whole event, including communication (public and private) before, during and after. The space where this event will take place is unfortunately not accessible to wheelchairs. Food (including vegetarian options) should be provided for lunch. If you have any specific needs regarding food, please let us know when registering, and we will do our best.

What we will be doing

This will be an informal session to confirm and fix bugs in Debian. If you have never worked with Debian packages, this is a good opportunity to learn about packaging and bugtracker usage. Bugs flagged as Release Critical are bloc[...]

Christoph Egger: Secured OTP Server (ASIS CTF 2017)

Sun, 09 Apr 2017 13:20:23 +0000

This weekend was ASIS Quals weekend again. And just like last year they have quite a lot of nice crypto-related puzzles which are fun to solve (and not "the same as every ctf"). Actually Secured OTP Server is pretty much the same as the First OTP Server (actually it's a "fixed" version to enforce the intended attack). However the template phrase now starts with enough stars to prevent a simple cube root:

def gen_otps():
    template_phrase = '*************** Welcome, dear customer, the secret passphrase for today is: '

    OTP_1 = template_phrase + gen_passphrase(18)
    OTP_2 = template_phrase + gen_passphrase(18)

    otp_1 = bytes_to_long(OTP_1)
    otp_2 = bytes_to_long(OTP_2)

    nbit, e = 2048, 3
    privkey = RSA.generate(nbit, e=e)
    pubkey = privkey.publickey().exportKey()
    n = getattr(privkey.key, 'n')

    r = otp_2 - otp_1
    if r < 0:
        r = -r
    IMP = n - r ** (e ** 2)
    if IMP > 0:
        c_1 = pow(otp_1, e, n)
        c_2 = pow(otp_2, e, n)
    return pubkey, OTP_1[-18:], OTP_2[-18:], c_1, c_2

Now let A = template * 2^(18*8) and B = passphrase, so OTP = A + B. c therefore is (A+B)^3 mod n == A^3 + 3A^2B + 3AB^2 + B^3. Notice that only A^3 is larger than N, and it is statically known. Therefore we can calculate A^3 // N and add that to c to "undo" the modulo operation. With that it's only iroot and long_to_bytes to the solution. Note that we're talking about OTP and C here. The code actually produced two OTP and C values but you can use either one just fine.
#!/usr/bin/python3

import sys

from util import bytes_to_long
from gmpy2 import iroot

PREFIX = b'*************** Welcome, dear customer, the secret passphrase for today is: '
OTPbase = bytes_to_long(PREFIX + b'\x00' * 18)

N = 27990886688403106156886965929373472780889297823794580465068327683395428917362065615739951108259750066435069668684573174325731274170995250924795407965212988361462373732974161447634230854196410219114860784487233470335168426228481911440564783725621653286383831270780196463991259147093068328414348781344702123357674899863389442417020336086993549312395661361400479571900883022046732515264355119081391467082453786314312161949246102368333523674765325492285740191982756488086280405915565444751334123879989607088707099191056578977164106743480580290273650405587226976754077483115441525080890390557890622557458363028198676980513

WRAPPINGS = (OTPbase ** 3) // N

C = 1309499671200712434447011762033176816818510690438885993860406610846546132483497380366659450135090037906160035815772780461875620318808164075627309453354743266067804942817604051204176332208359954263413873794513775387963058701947883563417944009370700831384127570567[...]
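The arithmetic can be checked at toy scale. Everything below is made up (N is a product of two Mersenne primes, the prefix and suffix are hex constants, not the challenge's values); the point is just that when the known prefix A is the only term whose cube exceeds N, adding (A³ // N)·N back to the ciphertext undoes the modular reduction:

```python
# Toy-scale sketch of the attack: plaintext m = known prefix A plus a
# short secret suffix B, e = 3, and A**3 is the only term larger than N.

def icbrt(n):
    """Integer cube root (floor) by binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
N = (2 ** 61 - 1) * (2 ** 89 - 1)   # ~150-bit toy modulus (two Mersenne primes)
A = 0xDEAD << 46                     # hypothetical known prefix, shifted past B
B = 0xBEEF                           # the secret suffix we want to recover
m = A + B
c = pow(m, e, N)                     # what the attacker sees

# m**3 = c + k*N for some k.  Since 3*A**2*B, 3*A*B**2 and B**3 are all
# smaller than N here, k is A**3 // N or one more than that.
wraps = A ** 3 // N
for k in (wraps, wraps + 1):
    candidate = c + k * N
    root = icbrt(candidate)
    if root ** 3 == candidate:
        break

assert root == m
assert root - A == B   # the secret suffix drops out
```

At the challenge's real sizes (a ~760-bit prefix against a 2048-bit N) the same inequality holds, which is why the solver above only needs OTPbase³ // N and a cube root.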

Michael Stapelberg: what’s new since the launch?

Sun, 09 Apr 2017 11:23:00 +0000

On 2017-01-18, I announced that had been modernized. Let me catch you up on a few things which happened in the meantime:

  • Debian experimental was added to I was surprised to learn that adding experimental only required 52MB of disk usage. Further, Debian contrib was added after realizing that contrib licenses are compatible with the DFSG.
  • Indentation in some code examples was fixed upstream in mandoc.
  • Address-bar search should now also work in Firefox, which apparently requires a title attribute on the opensearch XML file reference.
  • manpages now specify their language in the HTML tag so that search engines can offer users the most appropriate version of the manpage.
  • I contributed mandocd(8) to the mandoc project, which debiman now uses for significantly faster manpage conversion (useful for disaster recovery/development). An entire run previously took 2 hours on my workstation. With this change, it takes merely 22 minutes. The effects are even more pronounced on manziarly, the VM behind
  • Thanks to Peter Palfrader (weasel) from the Debian System Administrators (DSA) team, is now serving its manpages (and most of its redirects) from Debian’s static mirroring infrastructure. That way, planned maintenance won’t result in service downtime. I contributed README.static-mirroring.txt, which describes the infrastructure in more detail.

The list above is not complete, but rather a selection of things I found worth pointing out to the larger public.

There are still a few things I plan to work on soon, so stay tuned :).

Matthew Garrett: A quick look at the Ikea Trådfri lighting platform

Sun, 09 Apr 2017 00:16:33 +0000

Ikea recently launched their Trådfri smart lighting platform in the US. The idea of Ikea plus internet security together at last seems like a pretty terrible one, but having taken a look it's surprisingly competent. Hardware-wise, the device is pretty minimal - it seems to be based on the Cypress[1] WICED IoT platform, with 100MBit ethernet and a Silicon Labs Zigbee chipset. It's running the Express Logic ThreadX RTOS, has no running services on any TCP ports and appears to listen on just two UDP ports. As IoT devices go, it's pleasingly minimal.

The first of those ports seems to be a COAP server running with DTLS and a pre-shared key that's printed on the bottom of the device. When you start the app for the first time it prompts you to scan a QR code that's just a machine-readable version of that key. The Android app has code for using the insecure COAP port rather than the encrypted one, but the device doesn't respond to queries there so it's presumably disabled in release builds. It's also local only, with no cloud support. You can program timers, but they run on the device. The only other service it seems to run is an mdns responder, which responds to the _coap._udp.local query to allow for discovery.

From a security perspective, this is pretty close to ideal. Having no remote APIs means that security is limited to what's exposed locally. The local traffic is all encrypted. You can only authenticate with the device if you have physical access to read the (decently long) key off the bottom. I haven't checked whether the DTLS server is actually well-implemented, but it doesn't seem to respond unless you authenticate first which probably covers off a lot of potential risks. The SoC has wireless support, but it seems to be disabled - there's no antenna on board and no mechanism for configuring it.

However, there's one minor issue. On boot the device grabs the current time from (fine) but also hits .
That file contains a bunch of links to firmware updates, all of which are also downloaded over http (and not https). The firmware images themselves appear to be signed, but downloading untrusted objects and then parsing them isn't ideal. Realistically, this is only a problem if someone already has enough control over your network to mess with your DNS, and being wired-only makes this pretty unlikely. I'd be surprised if it's ever used as a real avenue of attack. Overall: as far as design goes, this is one of the most secure IoT-style devices I've looked at. I haven't examined the COAP stack in detail to figure out [...]

Arturo Borrero González: openvpn deployment with Debian Stretch

Fri, 07 Apr 2017 05:00:00 +0000

Debian Stretch feels like an excellent release by the Debian project. The final stable release is about to happen in the short term. Among the great things you can do with Debian, you could set up a VPN using the openvpn software. In this blog post I will describe how I’ve deployed an openvpn server myself using Debian Stretch, my network environment and my configurations & workflow.

Before anything else, I would like to list my requirements and the characteristics of what I needed:

  • a VPN server which allows internet clients to access our datacenter internal network (intranet) securely
  • strong authentication mechanisms for the users (user/password + client certificate)
  • the user/password information is stored in a LDAP server of the datacenter
  • support for several (hundreds?) of clients
  • only certain subnets (intranet) need to be routed through the VPN, not the entire network traffic of the clients
  • full IPv4 & IPv6 dual stack support, of course
  • a group of system admins will perform changes to the configurations, adding and deleting clients

I agree this is a rather complex scenario and not all people will face these requirements. The service diagram has this shape: (DIA source file)

So, it works like this:

  • clients connect via internet to our openvpn server
  • the openvpn server validates the connection and the tunnel is established (green)
  • now the client is virtually inside our network (blue)
  • when the client wants to access some intranet resource, the tunnel traffic is NATed (red)

Our datacenter intranet is using public IPv4 addressing, but the VPN tunnels use private IPv4 addresses. To avoid mixing public and private addresses, NAT is used. Obviously we don’t want to invest public IPv4 addresses in our internal tunnels. We don’t have this limitation in IPv6; we could use public IPv6 addresses within the tunnels. But we prefer sticking to a hard dual stack IPv4/IPv6 approach: we also use private IPv6 addresses inside the tunnels and also NAT the IPv6 from private to public.
This way, there are no differences in how the IPv4 and IPv6 networks are managed. We follow this approach for the addressing:

  • client 1 tunnel:, fd00:0:1::11
  • client 1 public NAT: x.x.x.11, x:x::11
  • client 2 tunnel:, fd00:0:1::12
  • client 2 public NAT: x.x.x.12, x:x::12
  • […]

The NAT runs in the VPN server, since this is kind of a router. We use nftables for this task. As the final win, I will describe how we manage all this configuration using the git version control system. Using git we ca[...]
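A sketch of that per-client 1:1 NAT in nftables might look like the following. The IPv4 tunnel/public addresses and the public IPv6 prefix are documentation-range placeholders (the post's concrete addresses aren't shown); only the fd00:0:1::/64 tunnel addresses come from the text, and Stretch-era nftables wants separate ip and ip6 NAT tables:

```
# /etc/nftables.conf fragment (sketch): map each client's private tunnel
# address to its public counterpart, per the addressing plan above.
table ip nat {
    chain postrouting {
        type nat hook postrouting priority 100;
        ip saddr snat to     # client 1 (placeholder addresses)
        ip saddr snat to     # client 2
    }
}
table ip6 nat {
    chain postrouting {
        type nat hook postrouting priority 100;
        ip6 saddr fd00:0:1::11 snat to 2001:db8::11   # client 1
        ip6 saddr fd00:0:1::12 snat to 2001:db8::12   # client 2
    }
}
```

Keeping the rules one-per-client like this is what makes the git-managed workflow practical: adding a client is a two-line diff.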

Reproducible builds folks: Reproducible Builds: week 101 in Stretch cycle

Thu, 06 Apr 2017 22:29:29 +0000

Here's what happened in the Reproducible Builds effort between Sunday March 26 and Saturday April 1 2017:

Media coverage

Sylvain Beucler wrote a follow-up post Practical basics of reproducible builds 2, which like last week's article is about his experiences making software build reproducibly.

Reproducible work in other projects

Colin Watson started writing a patch to make launchpad store .buildinfo files. (It's not yet deployed.)

Toolchain development and fixes

Ximin Luo continued to work on BUILD_PATH_PREFIX_MAP patches for GCC 6 and dpkg.

Packages reviewed and fixed, and bugs filed

Chris Lamb:

  • #858926 filed against vine, forwarded upstream.
  • #859194 filed against neutron.
  • #859256 filed against golang-github-lunny-log.
  • #859294 filed against hunspell-dict-ko.
  • #859299 filed against dactyl.
  • #859300 filed against crac.
  • #859302 filed against debirf.

Mattia Rizzolo:

  • #859058 filed against telegram-desktop.

Reviews of unreproducible packages

49 package reviews have been added, 25 have been updated and 42 have been removed in this week, adding to our knowledge about identified issues. 1 issue type has been updated: randomness_in_r_rdb_rds_databases

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Chris Lamb (4)
  • Mattia Rizzolo (1)

diffoscope development

diffoscope 81 was uploaded to experimental by Chris Lamb. It included contributions from:

Chris Lamb:

  • Correct meaningless "1234-content" metadata when introspecting files within archives. This was a regression since #854723 due to the use of auto-incrementing on-disk filenames. (Closes: #858223)

Ximin Luo:

  • Improve ISO9660/DOS/MBR check.

reprotest development

reprotest development continued in git, including contributions from Ximin Luo:

  • Preserve directory structure when copying artifacts.

development continued in git, including contributions from Chris Lamb:

  • Tidy rejection of supported formats.
  • Don't parse "Format:" header as the source package version.
reproducible-website development Holger switched and to letsencrypt certificates. Misc. This week's edition was written by Ximin Luo and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists. [...]
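A recurring theme in these reports is metadata that varies between otherwise identical builds, such as embedded timestamps. A minimal Python sketch (illustrative only, not any project's actual tooling) shows how a build timestamp inside a tar archive breaks bit-for-bit reproducibility, and how pinning it to a fixed epoch, in the spirit of SOURCE_DATE_EPOCH, restores it:

```python
import io
import tarfile

def make_tar(mtime):
    """Build a tiny tar archive containing one file with the given mtime."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo("hello.txt")
        data = b"hello\n"
        info.size = len(data)
        info.mtime = mtime  # the only varying input between "builds"
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# Two "builds" of identical content at different times are not bit-identical.
a = make_tar(1490000000)
b = make_tar(1491000000)
print(a == b)  # False

# Normalizing the timestamp to a fixed value makes the output reproducible.
epoch = 1480000000
print(make_tar(epoch) == make_tar(epoch))  # True
```

Tools like diffoscope, mentioned above, are what pinpoint exactly this kind of difference between two build artifacts.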

Steinar H. Gunderson: Nageru 1.5.0 released

Wed, 05 Apr 2017 23:14:00 +0000


I just released version 1.5.0 of Nageru, my live video mixer. The biggest feature is obviously the HDMI/SDI live output, but there are lots of small nuggets everywhere; it's been four months in the making. I'll simply paste the NEWS entry here:

Nageru 1.5.0, April 5th, 2017

  - Support for low-latency HDMI/SDI output in addition to (or instead of) the
    stream. This currently only works with DeckLink cards, not bmusb. See the
    manual for more information.

  - Support changing the resolution from the command line, instead of locking
    everything to 1280x720.

  - The A/V sync code has been rewritten to be more in line with Fons
    Adriaensen's original paper. It handles several cases much better,
    in particular when trying to match 59.94 and 60 Hz sources to each other.
    However, it might occasionally need a few extra seconds on startup to
    lock properly if startup is slow.

  - Add support for using x264 for the disk recording. This makes it possible,
    among other things, to run Nageru on a machine entirely without VA-API.
  - Support for 10-bit Y'CbCr, both on input and output. (Output requires
    x264 disk recording, as Quick Sync Video does not support 10-bit H.264.)
    This requires compute shader support, and is in general a little bit
    slower on input and output, due to the extra amount of data being shuffled
    around. Intermediate precision is 16-bit floating-point or better,
    as before.

  - Enable input mode autodetection for DeckLink cards that support it.
    (bmusb mode has always been autodetected.)

  - Add functionality to add a time code to the stream; useful for debugging.
  - The live display is now both more performant and of higher image quality.

  - Fix a long-standing issue where the preview displays would be too bright
    when using an NVIDIA GPU. (This did not affect the finished stream.)

  - Many other bugfixes and small improvements.

1.5.0 is on its way into Debian experimental (it's too late for the stretch release, especially as it also depends on Movit and bmusb from experimental), or you can get it from the home page as always.
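To see why matching 59.94 and 60 Hz sources is a hard case for A/V sync, a bit of arithmetic helps. This is not Nageru's algorithm (which follows Fons Adriaensen's paper, per the NEWS entry above), just a sketch of the underlying drift:

```python
# NTSC "59.94 Hz" is exactly 60000/1001 Hz; mixing it with a true 60 Hz
# source means the two drift apart by a fraction of a frame every second.
src_hz = 60000 / 1001
out_hz = 60.0

drift_per_second = out_hz - src_hz            # frames of drift per second
seconds_per_frame_drift = 1 / drift_per_second

# After roughly this many seconds, a frame must be repeated or dropped
# unless the sync code actively resamples to absorb the difference.
print(f"{seconds_per_frame_drift:.1f}")  # ~16.7
```

So without continuous rate matching, the mix would hiccup every 16-17 seconds, which is why the rewritten sync code may also need a few extra seconds at startup to lock onto the true rates.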

Jonathan Carter: GNOME Shell Extensions in Debian 9.0

Wed, 05 Apr 2017 21:27:06 +0000

About

GNOME 3 introduced an extensions framework that allows its users to extend the desktop shell by writing extensions using JavaScript and CSS. It works quite well, and dozens of extensions have already been uploaded to the extensions site. Some of these solve annoyances that users typically share with GNOME, while others add useful functionality. During DebCamp last year, I started packaging some of these for Debian. That's been going really well. Now that Ubuntu is finally dropping Unity in favour of GNOME, it helps to serve as a nudge to get this blog post out of drafts, where it's been stuck. These extensions also make their way into Ubuntu and other Debian/Ubuntu derivatives.

Here are some extensions I've been packaging that are already in the archive:

gnome-shell-extension-dashtodock
Provides a multitude of options for the shell dock. Not only really useful, but also well maintained by upstream; see their website for more info. This is a great extension if you support previous Unity users, since you can set your panel to look and behave very similarly to Unity. I think the app launcher is slightly better in GNOME because apps are easier to discover.

gnome-shell-extension-hide-activities
A simple extension that hides the "Activities" button from the top-left corner.

gnome-shell-extension-impatience
Speeds up shell animations. Animations can make the system more usable, but they can also be distracting or cause slight delays while you wait for them to complete. This gives you a sliding scale that lets you choose how much you'd like to speed them up.

gnome-shell-extension-move-clock
A simple extension that moves the clock from the center of the panel to the right.

gnome-shell-extension-refreshwifi
In gnome-shell, NetworkManager doesn't automatically refresh the list of available networks, which can be quite annoying. Currently a user has to turn wifi off and back on in order to see a refreshed list. This has been fixed upstream and will be in the next version of GNOME; in the meantime, this extension fixes that. Update: Refreshing wifi in the background has been fixed in GNOME 3.22.2, which is now in stretch. This extension will be removed from the archives.

gnome-shell-extension-remove-dropdown-arrows
Items in the top panel contain dropdown arrows, which are useful for new users who might not be aware that they expand into more entries. For more experienced users, the arrows tend to add clutter to the panel; this extens[...]

Thomas Lange: FAI website now supports HTTPS

Tue, 04 Apr 2017 12:41:54 +0000


The FAI webpage is now reachable via HTTPS. You can also access the package repository via HTTPS if you use this line in /etc/apt/sources.list:

deb jessie koeln

Thanks to Let's Encrypt for making this possible.