Planet Debian



Norbert Preining: Analysing Debian packages with Neo4j – Part 2 UDD and Graph DB Schema

Thu, 19 Apr 2018 05:21:40 +0000

In the first part of this series of articles on analysing Debian packages with Neo4j we gave a short introduction to Debian and the lifetime and structure of Debian packages. This second part first describes the Ultimate Debian Database (UDD) and how to map the information presented there from the UDD into a graph database by developing the database schema, that is, the set of nodes and relations together with their attributes, from the inherent properties of Debian packages. The next part will describe how to get the data from the UDD into Neo4j, give some sample queries, and discuss further work.

The Ultimate Debian Database UDD

The Ultimate Debian Database (UDD) gathers a lot of data about various aspects of Debian in the same SQL database. It allows users to easily access and combine all these data. Data currently being imported include: Packages and Sources files from Debian and Ubuntu, bugs from the Debian BTS, popularity contest, history of uploads, history of migrations to testing, lintian, orphaned packages, carnivore, debtags, Ubuntu bugs (from Launchpad), packages in the NEW queue, and DDTP translations (list from the Debian Wiki).

Collecting all this information, and having obviously grown over time, the database exhibits a highly de-normalized structure with ample duplication of the same information. As a consequence, the SQL code fetching data from the UDD and presenting it in a coherent interface tends to be highly convoluted. This led us to the project of putting (parts of) the UDD into a graph database, removing all the duplication on the way and representing the connections between the entities in a natural graph way.

Developing the database schema

Recall from the first part that there are source packages and binary packages in Debian, and that the same binary package can be built in different versions from different source packages.
Thus we decided to have both source and binary packages as separate entities, that is, nodes of the graph, with the two being connected via a relation builds. Considering dependencies between Debian packages, we recall that there are versioned and unversioned dependencies. We thus decided to again have different entities for versioned and unversioned source and binary packages. These considerations lead to the following sets of nodes and relations:

vsp -[:is_instance_of]-> sp
vbp -[:is_instance_of]-> bp
vsp -[:builds]-> vbp
vbp -[:next]-> vbp
vsp -[:next]-> vsp

where vsp stands for versioned source package, sp for (unversioned) source package, and analogously for binary packages. The versioned variants carry, besides the name attribute, also a version attribute in the node. The relations are is_instance_of between versioned and unversioned packages, builds between versioned source and versioned binary packages, and next, which defines an order on the versions.

As an example, consider a simple graph for the binary package luasseq, which was originally built from the source package luasseq but was then taken over into the TeX Live packages and built from a different source.

Next we want to register suites, that is, to record which package has been included in which release of Debian. Thus we add a new node type suite and a new relation contains which connects suites and versioned binary packages vbp:

suite -[:contains]-> vbp

Nodes of type suite only contain one attribute, name. We could add release dates etc., but refrained from doing so for now.

Next we add maintainers. The new node type mnt has two attributes: name and email. Here it would also be nice to record alternative email addresses as well as alternative spellings of the name, something that is quite common. We add a relation maintains to versioned source and binary packages only since, as we have seen, the maintainership can change over the history of a package:

mnt -[:maintains]-> vbp
mnt -[:maintains]-> vsp

This leads us [...]
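The schema developed above can be sketched in plain code. The following is a minimal illustration (Python standing in for the graph database; the version, maintainer and suite values are made-up examples, not UDD data) of the node types and relations:

```python
# Sketch of the proposed graph schema (illustrative only; the real data
# comes from the UDD, and the graph lives in Neo4j).

# Nodes: (label, properties)
sp  = ("sp",  {"name": "luasseq"})                      # unversioned source package
bp  = ("bp",  {"name": "luasseq"})                      # unversioned binary package
vsp = ("vsp", {"name": "luasseq", "version": "1.0-1"})  # versioned source package
vbp = ("vbp", {"name": "luasseq", "version": "1.0-1"})  # versioned binary package
mnt = ("mnt", {"name": "Jane Doe", "email": "jane@example.org"})
suite = ("suite", {"name": "sid"})

# Relations: (from, type, to)
relations = [
    (vsp, "is_instance_of", sp),
    (vbp, "is_instance_of", bp),
    (vsp, "builds", vbp),
    (suite, "contains", vbp),
    (mnt, "maintains", vsp),
    (mnt, "maintains", vbp),
]

# Example query: which versioned binary packages does sid contain?
in_sid = [t[1]["name"] for f, r, t in relations
          if r == "contains" and f[1]["name"] == "sid"]
# -> ['luasseq']
```

In Neo4j the same relations become actual edges, so such a query is a single pattern match instead of a scan over a list.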

Hideki Yamane: Improve debootstrap time a bit, without local mirror

Thu, 19 Apr 2018 05:14:38 +0000

I've introduced two features to improve debootstrap time: auto proxy detection via squid-deb-proxy-client (by Michael Vogt) and cache directory support. They reduce the time needed to create a chroot environment without requiring a huge local mirror.

Let's create a chroot environment without any of the new features.
$ time sudo debootstrap sid sid-chroot
I: Target architecture can be executed     
I: Retrieving InRelease                     
I: Checking Release signature           
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages             
I: Validating Packages             
I: Base system installed successfully. 
real    8m27.624s
user    1m52.732s
sys     0m10.786s

Then, use the --cache-dir option.
$ time sudo debootstrap --cache-dir=/home/henrich/tmp/cache sid sid-chroot
E: /home/henrich/tmp/cache: No such directory

Yes, we should create the cache directory first.
$ mkdir ~/tmp/cache
Let's go.
$ time sudo debootstrap --cache-dir=/home/henrich/tmp/cache sid sid-chroot
I: Target architecture can be executed
I: Retrieving InRelease             
I: Checking Release signature         
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages                 
I: Validating Packages                   
I: Base system installed successfully. 
real    2m10.180s
user    1m47.428s
sys     0m8.196s
It cuts about 6 minutes! (Of course, it depends on the mirror you choose.) Then, try the proxy feature.
$ sudo apt install squid-deb-proxy-client
$ time sudo debootstrap sid sid-chroot
Using auto-detected proxy:
I: Target architecture can be executed     
I: Retrieving InRelease                     
I: Checking Release signature           
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages             
I: Validating Packages             
I: Configuring systemd...
I: Base system installed successfully.
Can you see the words "Using auto-detected proxy:"? It detects the package proxy and uses it. The result is:
real    2m15.995s
user    1m49.737s
sys     0m8.778s

Conclusion: if you already run squid-deb-proxy on some machine in your local network, install squid-deb-proxy-client and debootstrap will automatically use it; alternatively, you can use the --cache-dir option to speed up creating chroot environments via debootstrap. Especially if you don't have good network connectivity, both features will help without much effort.

Oh, and one more thing... Thomas Lange has proposed patches to improve debootstrap which make it much faster. If you're interested, please look into them.

Steve Kemp: A filesystem for known_hosts

Thu, 19 Apr 2018 03:45:48 +0000


The other day I had an idea that wouldn't go away, a filesystem that exported the contents of ~/.ssh/known_hosts.

I can't think of a single useful use for it, beyond simple shell-scripting, and yet I couldn't resist.

 $ go get -u
 $ go install

Now make it work:

 $ mkdir ~/knownfs
 $ knownfs ~/knownfs

Beneath our mount-point we can expect one directory for each known host. So we'll see entries:

 ~/knownfs $ ls | grep \.vpn

 ~/knownfs $ ls | grep steve

The host-specific entries will each contain a single file, fingerprint, with the fingerprint of the remote host:

 ~/knownfs $ cd
 ~/knownfs/ $ ls
 frodo
 ~/knownfs/ $ cat fingerprint

I've used it in a few shell-loops to run commands against hosts matching a pattern, but beyond that I'm struggling to think of a use for it.
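Such a shell-loop workflow can equally be scripted. Here is a small sketch (Python; the ~/knownfs mount point and the one-directory-per-host, one fingerprint-file layout are assumptions taken from the description above):

```python
import os

KNOWNFS = os.path.expanduser("~/knownfs")  # assumed mount point


def fingerprints(root=KNOWNFS, pattern=""):
    """Map each known host whose name contains `pattern` to its fingerprint."""
    result = {}
    for host in sorted(os.listdir(root)):
        if pattern not in host:
            continue
        # Each host directory is expected to hold a single file: fingerprint.
        path = os.path.join(root, host, "fingerprint")
        with open(path) as f:
            result[host] = f.read().strip()
    return result
```

For instance, `fingerprints(pattern=".vpn")` would collect the fingerprints of every VPN host in one call.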

If you like the idea I guess have a play:

It was perhaps more useful and productive than my other recent work - which involves porting an existing network-testing program from Ruby to golang, and in the process making it much more uniform and self-consistent.

The resulting network tester is pretty good, and can now notify via MQ to provide better decoupling too. The downside is of course that nobody changes network-testing solutions on a whim, and so these things are basically always in-house only.

Shirish Agarwal: getting libleveldb1v5 fixed

Thu, 19 Apr 2018 01:23:08 +0000

Please treat this as a child’s fantasy until the information is approved or corrected by a DD/DM, who obviously have much more info and experience in dealing with what follows.

It had been quite a few years since I last played Minetest, a voxel-based game similar to and yet different from its more famous brother Minecraft. I wanted to install and play it, but found that one of the libraries it needs, libleveldb1v5, a fast key-value storage library, has according to #877773 been marked with a bug report of severity grave because of missing info on the soname bump. I saw that somebody had also reported it upstream, and the bug has been fixed, with some more optimizations done to the library as well. From its description the library reminded me a lot of SQLite, which has almost the same feature set (used by Mozilla for bookmarks and password management, if I’m not mistaken).

I was wondering, if this was fixed quite some time back, why the maintainer didn’t put the fixed version in sid and then testing. I realized it might be because the new version has a soname bump, which means it would need to be transitioned, probably with proper Breaks and everything. A quick check via

$ apt-rdepends -r libleveldb1v5 | wc -l
Reading package lists... Done
Building dependency tree
Reading state information... Done
195

revealed that almost 190 packages will be directly or indirectly affected by the transition.
I then tried to find where the VCS is located:

$ apt-cache showsrc libleveldb1v5 | grep Vcs-Git
Vcs-Git: git://
Vcs-Git: git://

Then I cloned the repo to my system to see if the maintainer had made any recent changes, and saw:

$ git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr)' --abbrev-commit | head -15
* 7465515 - (HEAD -> master, tag: debian/1.20-2, origin/master, origin/HEAD) Packaging cleanup (4 months ago)
* f85b876 - Remove libleveldb-dbg package and use the auto-generated one (4 months ago)
* acac71f - Update Standards-Version to 4.1.2 (4 months ago)
* e281654 - Update debhelper level to 11 (4 months ago)
* df015eb - Don't run self-test parallel (4 months ago)
* ba81cc9 - (tag: debian/1.20-1) Update debhelper level to 10 (7 months ago)
* cb84f26 - Update Standards-Version to 4.1.0 (7 months ago)
* be0ef22 - Convert markdown documentation to HTML (7 months ago)
* ab8faa7 - Start 1.20-1 changelog (7 months ago)
* 03641f7 - Updated version 1.20 from 'upstream/1.20' (7 months ago)
|\
| * 59c75ca - (tag: upstream/1.20, origin/upstream) New upstream version 1.20 (7 months ago)
* | a21bcbc - (tag: debian/1.19-2) Add the missing ReadMemoryBarrier and WriteMemoryBarrier functions for mips* (1 year, 5 months ago)
* | 70c6e38 - Add myself to debian/copyright (1 year, 5 months ago)
* | 1ba7231 - Update source URL (1 year, 5 months ago)

There is probably a much simpler way to get the same output, but for now that will have to suffice. Anyway, there are many variations of the invocation I used, with git log --pretty, git log --decorate, etc. Maybe one of those could give the same output; it would need the time diff as shown above.
Trivia – I am usually more interested in commit messages and the times when commits were made, and I know enough git to find the author of a particular commit, even from an abbreviated commit hash, in order to thank her (or him) for the work done on a package or for a particular commit that addressed some annoying bug I had. /Trivia

What I have really hankered for is some sort of visualization tool for projects that I like, something like an Andrews plot or a c-chart, but to date I haven’t found anything which would render the history into those visuals straight away. Maybe a feature for a future git version, who knows. I know that is in itself a Pandora’s box, as some people might just like a visualization of only when releases were made of an upst[...]

Jeremy Bicha: gksu removed from Ubuntu

Thu, 19 Apr 2018 00:49:16 +0000

Today, gksu was removed from Ubuntu 18.04, four weeks after it was removed from Debian.

Chris Lamb: Re-elected as Debian Project Leader

Wed, 18 Apr 2018 18:24:19 +0000


I have been extremely proud to have served as the Debian Project Leader since my election in early 2017. During this time I've learned a great deal about the inner workings of the Project as well as about myself. I have grown as a person thanks to all manner of new interactions and fresh experiences.

I believe it is a privilege simply to be a Debian Developer, let alone to be selected as their representative. It was therefore an even greater honour to learn that I have been re-elected by the community for another year. I profoundly and wholeheartedly thank everyone for placing their trust in me for another term.

Being the "DPL" is a hard job. It is difficult to even communicate exactly how, and any statistics somehow fail to capture it. However, I now understand the look in previous Leaders' eyes when they congratulated me on my appointment, and future candidates should not nominate themselves lightly.

Indeed, I was unsure whether I would stand for re-appointment and I might not have done had it not been for some touching and encouraging words from some close confidants. They underlined to me that a year is not a long time, further counselling that I should consider myself just getting started and only now prepared to start to take on the bigger picture.

Debian itself will always face challenges but I sincerely believe that the Project remains as healthy as ever. We are uniquely cherished and remain remarkably poised to improve the free software ecosystem as a whole. Moreover, our stellar reputation for technical excellence, stability and software freedom remains highly respected and without peer. It is truly an achievement to be proud of.

I thank everyone who had the original confidence, belief and faith in me, but I offer my further sincere and humble thanks to all those who have felt they could extend this to a second term, especially with such a high turnout. I am truly excited and looking forward to the year ahead.


Vincent Bernat: Self-hosted videos with HLS

Wed, 18 Apr 2018 17:45:47 +0000

Note: this article was first published on the Exoscale blog with some minor modifications. Hosting videos on YouTube is convenient for several reasons: pretty good player, free bandwidth, mobile-friendly, network effect and, at your discretion, no ads.1 On the other hand, it is one of the least privacy-friendly solutions. Most other providers share the same characteristics, except the ability to disable ads for free. With the

Sven Hoexter: logstash 5.6.9 released with logstash-filter-grok 4.0.3

Wed, 18 Apr 2018 16:11:08 +0000

In case you're using logstash 5.6.x from Elastic, version 5.6.9 has been released with logstash-filter-grok 4.0.3. It fixes a bad memory leak that was a cause of frequent logstash crashes since logstash 5.5.6. Reference:

I hope this is now again a decent logstash 5.x release. I've heard some rumours that the 6.x versions are also a bit plagued by memory leaks.

Jonathan Dowland: simple

Wed, 18 Apr 2018 15:47:07 +0000


Every now and then, for one reason or another, I am sat in front of a Linux-powered computer with the graphical user interface disabled, instead using an old-school text-only mode.


There's a strange, peaceful quality about these environments.

When I first started using computers in the 90s, the Text Mode was the inferior, non-multitasking system that you generally avoided unless you were trying to do something specific (like run Doom without any other programs eating up your RAM).

On a modern Linux (or BSD) machine, unless you are specifically trying to do something graphical, the power and utility of the machine is hardly diminished at all in this mode. The surface looks calm: there's nothing much visibly going on, just the steady blink of the command prompt, as if the whole machine is completely dedicated to you, and is waiting poised to do whatever you ask of it next. Yet most of the same background tasks are running as normal, doing whatever they do.

One difference, however, is the distractions. Rather like when you drive out of a city to the countryside and suddenly notice the absence of background noise, background light, etc., working at a text terminal — relative to a regular graphical desktop — can be a very calming experience.

So I decided to take a fresh look at my desktop and see whether there were unwelcome distractions. For some time now I've been using a flat background colour to avoid visual clutter. After some thought I realised that most of the time I didn't need to see what was in GNOME3's taskbar. I found and installed this hide-top-bar extension and now it's tucked away unless I mouse up to the top. Now that it's out of the way by default, I actually put more information into it: the full date in the time display; and (via another extension, TopIcons Plus) the various noisy icons that apps like Dropbox, OpenBox, VLC, etc. provide.


There's still some work to do, notably in my browser (Firefox), but I think this is a good start.

Laura Arjona Reina: Kubb 2018 season has just begun

Wed, 18 Apr 2018 07:40:25 +0000


Since last year I have been playing kubb with my son. It’s a sport/game of marksmanship and patience. It’s a quite inclusive game and it’s played outside, on a grass or sand field.

It happens that the Spanish kubb association is based in the town where I live (even in my neighbourhood!), so several family gatherings with tournaments happen in the parks near my house. Last year we attended for the first time and learned how to play, and since then we have participated in 2 or 3 more events.

As kubb is played in the open air, the season starts in March/April, when the weather is good enough to have a nice morning in the park. I was surprised that, for such a minority game, about 50-100 people gather at each local tournament, grouped in teams of any kind: individuals, couples, or teams of up to 6 people; mothers and daughters, kids-only teams, teams formed by people from 3 different generations… As strength or speed (or even experience) are not relevant to winning this game, almost anybody can play with anybody.


Enjoying playing kubb makes me also think about how communities around a non-mainstream topic are formed and maintained, and how to foster diversity and good relationships among participants. I’ve noted down some ideas that I think the kubb association does well:

  • No matter how big or small your event is, always take the possible newcomers into account: setting aside a slot at the start of the event to welcome them and explain how the day will work makes those newcomers feel less stressed.
  • Designing events where the whole family can participate (or at least be together, not merely “events with childcare”), without making it mandatory that all of them participate, helps people get involved longer-term.
  • The format of the event has to be kept simple so that the organisers don’t burn out. If the organisers are so overwhelmed taking care of things that they cannot enjoy the result of their work, that means the organisation team should grow and balance the load.
  • Having a break during the year so everybody can rest and do other things also helps people feel more motivated when the next season/event starts.

Thinking about kubb, particularly in contrast with the other sport that my kid plays (football), I find similarities and contrasts with another pair of activities that we also experience in our family: the “free software way of life” versus the mainstream use of computers/devices nowadays. It’s good to know both (so as not to live apart from the world in our warm bubble), and it’s good to have the humble, but creative, more human-focused and values-driven one as the big reference for the type of future that we want to live in and that we build every day with our small actions.


You can comment on this post using this thread.

Norbert Preining: TeX Live 2018 for Debian

Wed, 18 Apr 2018 01:33:15 +0000


TeX Live 2018 has hit Debian/unstable today. The packages are based on what will (most likely, barring any late disasters) be on the TeX Live DVD, which is going to press this week. This brings the newest and shiniest version of TeX Live to Debian. There have


The packages that have been uploaded are:

The changes listed in the TeX Live documentation and which are relevant for Debian are:

  • Kpathsea: Case-insensitive filename matching now done by default in non-system directories; set texmf.cnf or environment variable texmf_casefold_search to 0 to disable. Full details in the Kpathsea manual.
  • epTeX, eupTeX: New primitive \epTeXversion.
  • LuaTeX: Preparation for moving to Lua 5.3 in 2019: a binary luatex53 is available on most platforms, but must be renamed to luatex to be effective. Or use the ConTeXt Garden files; more information there.
  • MetaPost: Fixes for wrong path directions, TFM and PNG output.
  • pdfTeX: Allow encoding vectors for bitmap fonts; current directory not hashed into PDF ID; bug fixes for \pdfprimitive and related.
  • XeTeX: Support /Rotate in PDF image inclusion; exit nonzero if the output driver fails; various obscure UTF-8 and other primitive fixes.
  • tlmgr: new front-ends tlshell (Tcl/Tk) and tlcockpit (Java); JSON output; uninstall is now a synonym for remove; new action/option print-platform-info.

And above all, the most important change: we switched to CMSS, a font designed by DEK, for our logo and banners.


Alexander Wirt: alioth deprecation - next steps

Tue, 17 Apr 2018 22:19:52 +0000


As you should be aware, alioth will be decommissioned with the EOL of wheezy, which is at the end of May. The replacement for the main part of alioth, git, is alive and out of beta: you know it as salsa. If you have not moved your git repository yet, hurry up, time is running out.

The other important service from the alioth set, lists, moved to a new host and is now live with the lists which opted into migration. All public list archives moved over too and will continue to exist under the old URLs.

decommissioning timeline

2018-05-01 Registration of new users on alioth will be DISABLED. Until an improved SSO (a GSoC project) is ready, new user registrations needed for SSO services will be handled manually. More details on this will follow in a separate announcement.
2018-05-10 - 2018-05-13 darcs, bzr and mercurial repositories will be exported as tarballs and made available readonly from a new archive host, details on that will follow.
2018-05-17 - 2018-05-18 During the Mini-DebConf Hamburg any existing cron jobs will be turned off, websites still on alioth will be disabled.
2018-05-31 All remaining repositories (cvs, svn and git) will be archived similar to the ones above. The host moszumanska, the home of alioth, will go offline!

Reproducible builds folks: Reproducible Builds: Weekly report #155

Tue, 17 Apr 2018 08:49:54 +0000

Here's what happened in the Reproducible Builds effort between Sunday April 8 and Saturday April 14 2018:

Ricardo Wurmus announced that his paper titled "Reproducible genomics analysis pipelines with GNU Guix" has entered the pre-print/preview stage.

Debian bug #894441 (related to packages violating multi-arch specs due to SOURCE_DATE_EPOCH) was originally filed against dpkg-dev. However, it was reassigned this week to the meta-package and finally retitled to "binNMUs should be replaced by easy no change except debian/changelog uploads".

Chris Lamb updated the documentation for disorderfs (our FUSE-based filesystem that deliberately introduces non-determinism into filesystem metadata), adding an example of how to properly unmount to the manual page.

Mes (a Scheme-based compiler for our "sister" bootstrappable-builds effort) announced their 0.12 release on our mailing list.

40 package reviews were added, 39 were updated and 31 were removed this week, adding to our knowledge about identified issues. In addition, two issue types were added by Chris Lamb (build_path_in_mip_files_generated_by_irafcl and nondeterminism_in_autolex_bin).

Holger Levsen announced a further call for votes to decide on a logo for the project, which closed on Sunday 15th April (results pending). For more background information, please see the previous meeting's minutes, the proposals and the original proof-of-concept artwork.

Patches filed

Bernhard M. Wiedemann: linphone (readdir(2)), python-axolotl-curve25519 (readdir(2)), python-nautilus (SOURCE_DATE_EPOCH), sed (profile-guided optimizations), sphinx (readdir(2)), tuxpaint-config (SOURCE_DATE_EPOCH).

Chris Lamb: #895269 filed against (forwarded upstream), #895270 filed against python-click (forwarded upstream), #895401 filed against libmypaint, #895553 filed against sphinx (forwarded upstream).

In addition, 38 build failure bugs were reported by Adrian Bunk.
strip-nondeterminism development

Version 0.041-1 was uploaded to unstable by Chris Lamb.

Chris Lamb: Drop PHP "Pear" registry support; bump Standards-Version to 4.1.4.
Evgueni Souleimanov: Add U-Boot Legacy Image (uImage) format support; add bFLT executable format support.

jenkins.debian.net development

Mattia Rizzolo made a large number of changes to our Jenkins-based testing framework, including:

  • Ensure sshd(8) listens on ports reachable from the outer internet.
  • Fix the check for "not for us" (NFU) packages when looking at .buildinfo files.
  • Fix a bug that caused the reproducible_maintenance_amd64_jenkins job to not be created.
  • Duplicate the sshd_config(5) file for the armhf nodes as they need local tweaks.
  • Install the jessie kernel on the i386 nodes running the 32-bit kernel to work around #875990 & #876035.
  • Re-enable jobs on a particular ARM board.
  • Move the repository from Alioth.

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists. [...]

Norbert Preining: Specification and Verification of Software with CafeOBJ – Part 1 – Introducing CafeOBJ

Tue, 17 Apr 2018 00:29:30 +0000

Software bugs are everywhere: the feared Blue Screen of Death, the mobile phone rebooting at the most inconvenient moment, games crashing. Most of these bugs are not serious problems, but there are other cases that are far more serious:

  • the Therac-25 X-ray machine, which killed at least 5 patients by over-exposure
  • the Ariane 5 rocket, Flight 501, lost due to an overflow error
  • the Mars Climate Orbiter, which crashed due to an SI versus Imperial unit inconsistency
  • the Intel Pentium F00F bug, a chip design error
  • Toyota’s Electronic Throttle Control System (ETCS)
  • the Heartbleed vulnerability in OpenSSL

While bugs will always remain, the question is how to deal with these kinds of bugs. There is an insurmountable amount of literature on this topic, but it generally falls into one of the following categories:

  • program testing: subject the program to be checked to a large set of tests, trying to exhaust all possible code paths
  • post-coding formal verification (model checking): given program code, model the behavior of the program in temporal logic and prove necessary properties
  • pre-coding specification and verification: start with a formal specification of what the program should do, and verify that the specification is correct, that is, that it satisfies desirable properties

The first two items above are extremely successful and well developed. In this blog series we want to discuss the third item, specifications and their verification.

Overview of the blog series

This post will introduce some general concepts about software and specifications, and introduce CafeOBJ as an algebraic specification language that is executable and thus can be used to verify the specification at the same time. Further posts will introduce the CafeOBJ language in a bit more detail, go through a simple example of a cloud synchronization specification, and discuss more involved techniques and automated theorem proving with CafeOBJ.

Why should we verify specifications?
The value of formal specifications of software has been recognized since the early ’80s, and formal systems have been in development since then (Z, Larch and OBJ all originate from that time). On the other hand, actual use of these techniques remained mostly academic; engineers and developers were mostly reluctant to learn highly mathematical languages in order to write specifications instead of writing code. With the growth of interactivity and the explosion in the number of communication protocols (from low-level TCP to high-level SSL) with handshakes and data exchange sequences, the need for formal verification of these protocols, especially if they guard crucial data, has been increasing steadily.

The CafeOBJ approach

CafeOBJ is a member of the OBJ family and thus uses algebraic methods to describe specifications. This is in contrast to the Z system, which uses set theory and lambda calculus. Our aims in developing the language (as well as the system) CafeOBJ can be summarized as follows:

  • provide a reasonable blend of user and machine capabilities
  • allow intuitive modeling while preserving a rigorous formal background
  • allow for various levels of modelling, from high-level to hard-core
  • do not try to fully automate everything; understanding of the design and the problems is necessary

We believe that we achieve this through the combination of a rigid formal background, the incorporation of order-sorted equational theory, an executable semantics via rewriting, high-level programming facilities (inheritance, templates and instantiations, …), and last but not least complete freedom to redefine the language of the specification (postfix, infix, mixfix, syntax overloading, …). More specifically, the logical foundations are formed by the following fou[...]

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2018

Mon, 16 Apr 2018 14:07:41 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 214 work hours were dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 8 hours.
  • Antoine Beaupré did 9.75 hours.
  • Ben Hutchings did 15 hours (out of 15h allocated + 2 remaining hours, thus keeping 2 extra hours for April).
  • Brian May (second report) did 15.75 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 12.5 hours (out of 17.5 hours allocated, thus keeping 5 extra hours for April).
  • Holger Levsen did 1.5 hours (out of 18 hours allocated, thus keeping 16.5 extra hours for April).
  • Hugo Lefeuvre did 35.5 hours (out of 17.5 hours allocated + 22.25 remaining hours, thus keeping 4.25 extra hours for April).
  • Markus Koschany did 23.25 hours.
  • Ola Lundqvist did 9.5 hours (out of 14 hours allocated + 5 remaining hours, thus keeping 9.5 extra hours for April).
  • Roberto C. Sanchez did 7.5 hours (out of 23.25 hours allocated, thus keeping 15.75 extra hours for April).
  • Santiago Ruano Rincón did 12 hours (out of 10 hours allocated + 2 remaining hours).
  • Thorsten Alteholz did 23.25 hours.

Evolution of the situation

The number of sponsored hours did not change. The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file 26. Thanks to a few extra hours dispatched this month (the accumulated backlog of a contributor), the number of open issues came back to a more usual value.

Thanks to our sponsors

New sponsors are in bold.
Platinum sponsors: TOSHIBA (for 30 months) GitHub (for 21 months) Gold sponsors: The Positive Internet (for 46 months) Blablacar (for 45 months) Linode (for 35 months) Babiel GmbH (for 24 months) Plat’Home (for 24 months) Silver sponsors: Domeneshop AS (for 45 months) Université Lille 3 (for 45 months) Trollweb Solutions (for 43 months) Nantes Métropole (for 39 months) Dalenys (for 36 months) Univention GmbH (for 31 months) Université Jean Monnet de St Etienne (for 31 months) Ribbon Communications, Inc. (for 25 months) maxcluster GmbH (for 19 months) Exonet B.V. (for 15 months) Leibniz Rechenzentrum (for 9 months) (for 6 months) Bronze sponsors: David Ayers – IntarS Austria (for 46 months) Evolix (for 46 months) Offensive Security (for 46 months), a.s. (for 46 months) Freeside Internet Service (for 45 months) MyTux (for 45 months) Intevation GmbH (for 43 months) Linuxhotel GmbH (for 43 months) Daevel SARL (for 41 months) Bitfolk LTD (for 40 months) Megaspace Internet Services GmbH (for 40 months) Greenbone Networks GmbH (for 39 months) NUMLOG (for 39 months) WinGo AG (for 39 months) Ecole Centrale de Nantes – LHEEA (for 35 months) Sig-I/O (for 32 months) Entr’ouvert (for 30 months) Adfinis SyGroup AG (for 27 months) GNI MEDIA (for 22 months) Laboratoire LEGI – UMR 5519 / CNRS (for 22 months) Quarantainenet BV (for 22 months) RHX Srl (for 19 months) Bearstech (for 13 months) LiHAS (for 13 months) People Doc (for 10 months) Catalyst IT Ltd (for 8 months) Supagro (for 3 months) Demarcq SAS No comment | Liked this article? Click here. | My blog is Flattr-enabled. [...]

Holger Levsen: 20180416-LTS-march

Mon, 16 Apr 2018 13:01:05 +0000


My LTS work in March

So in March I resumed contributing to LTS again, after 2 years of taking a break, due to being overwhelmed with work on Reproducible Builds... Reproducible Builds is still eating a lot of my time, but as we currently are unfunded I had to pick up some other sources of funding.

And then, due to Reproducible Builds still requiring a lot of my attention (both actual work as well as work on getting funded again) and other stuff happening in my life, I was also mostly unable to find time to really dive into LTS again. So while I managed to renew my knowledge of the procedures etc., I only managed to find 1.5h of work to be done, which in turn made me feel quite bad, so that I also postponed writing about this until now.

So, in March I only managed to mark libcdio as no-DSA and upload samba to fix CVE-2018-1050.

On the plus side and despite the above, I'm very happy to be able to work on LTS again, because a.) I consider it interesting (to fix bugs in old packages, yes!) and b.) because I use LTS myself and c.) because the LTS crowd is actually a nice and helpful one.

And now let's see how much LTS work I'll manage in April...!

Hideki Yamane: Of course it runs NetBSD

Sun, 15 Apr 2018 22:56:05 +0000

“Of course it runs NetBSD”

Note: It runs Debian squeeze chroot on NetBSD 7.1.2, with Linux emulation

Thorsten Glaser: mksh on Jehanne, a guest post by Shamar

Sun, 15 Apr 2018 21:11:00 +0000


Giacomo Tesio referenced mksh(1) in his annual Jehanne report and provided a guest post (dated 2018-01-09, sorry for posting it only this late) for us on his journey porting mksh to Jehanne, his Plan 9 derivative operating system. Read on for his story!

(read more…)

Russ Allbery: Free software log (March 2018)

Sun, 15 Apr 2018 18:21:00 +0000

I did get a few software releases out this month, although not as much as I'd planned and I still have a lot of new releases pending that are waiting for me to have a bit more free time.

control-archive got a 1.8.0 release, which catches up from accumulated changes over the past year plus and falls back to GnuPG v1 for signature processing. One of the projects that I'd like to find time for is redoing all of my scattered code for making and checking Usenet control messages.

DocKnot 1.03 adds more support for SPDX license identifiers, which I've started using in my projects, and then 1.04 was a quick bug fix release for something I broke in the test suite on Windows systems.

I also redid the Kerberos authentication plugin for INN to use modern Kerberos APIs, which cleared up some build issues when pointing at non-system Kerberos libraries.

Dirk Eddelbuettel: #18: Adding Intel MKL easily via a simple script

Sun, 15 Apr 2018 13:54:00 +0000

Welcome to the eighteenth post in the rarely riveting R ramblings series of posts, or R4 for short. The Intel Math Kernel Library (MKL) is a well-known high(er) performance math library tailored for Intel CPUs, offering best-in-class numerical performance on a number of low-level operations (BLAS, LAPACK, ...). It is not open source, and used to be under commercial or research-only licenses --- but can now be had (still subject to license terms you should study) via apt-get (and even yum). This page describes the installation of the MKL (and other components) in detail (but stops short of the system integration aspect we show here). Here we present one short script, discussed in detail below, to add the MKL to your Debian or Ubuntu system. Its main advantages are clean standard code using package management tools; additional steps to make it the system default; and an option for clean removal, leaning again on the package management system. We put the script and a writeup largely identical to this one into this GitHub repo where issues, comments, questions, ... should be filed.

MKL for .deb-based systems: An easy recipe

This post describes how to easily install the Intel Math Kernel Library (MKL) on a Debian or Ubuntu system. Very good basic documentation is provided by Intel at their site. The discussion here is narrower as it focuses just on the Math Kernel Library (MKL). The tl;dr version: Use this script which contains the commands described here.

First Step: Set up apt

We download the GnuPG key first and add it to the keyring:

cd /tmp
wget
apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB

To add all Intel products we would run the first command, but here we focus just on the MKL. The website above lists other suboptions (TBB, DAAL, MPI, ...):

## all products:
#wget -O /etc/apt/sources.list.d/intelproducts.list
## just MKL
sh -c 'echo deb all main > /etc/apt/sources.list.d/intel-mkl.list'

We then update our lists of what is available in the repositories.
apt-get update

As a personal aside, I still use the awesome wajig frontend to dpkg, apt and more by Graham Williams (of rattle fame). Among other tricks, wajig keeps state and therefore "knows" what packages are new. Here, we see a lot:

edd@rob:/tmp$ wajig update
Hit:1 artful InRelease
Ign:2 stable InRelease
Hit:3 artful-updates InRelease
Hit:4 artful InRelease
Hit:5 artful-backports InRelease
Ign:6 artful/ InRelease
Hit:7 artful/ Release
Hit:8 artful-security InRelease
Hit:9 all InRelease
Hit:10 stable Release
Hit:12 jessie InRelease
Reading package lists... Done
This is 367 up on the previous count with 367 new packages.

edd@rob:/tmp$ wajig new
Package                           Description
========================-===================================================
intel-mkl-gnu-f-32bit-2018.0-128  Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-gnu-f-174            Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-cluster-2018.2-199      Intel(R[...]

Hideki Yamane: Update desktop components for released version

Sun, 15 Apr 2018 13:25:28 +0000

I found that the RHEL 7.5 desktop has been rebased to GNOME 3.26. I hope a Debian stable release could do such a thing – so what is the blocker for it?

Russ Allbery: INN 2.6.2

Sat, 14 Apr 2018 23:54:00 +0000

(As usual, Julien finished this release a bit back, and then I got busy with life stuff and hadn't gotten the announcement out. And yes, I copied and pasted this parenthetical from the last announcement. Tradition!)

In the feature department, this release adds a new syntaxchecks parameter to inn.conf that can be used to disable message ID syntax checking, better header sanitization support in mailpost, support for TLS 1.3, and support for using GnuPG v1 (which is unfortunately important for control messages and NoCeM on Usenet still).

In the bug-fix department, this release always uses the OVDB helper server with OVDB to avoid various stability problems, fixes a header checking bug in inews that was incorrectly rejecting some long headers, fixes some control command reporting in the daily status report, and hopefully fixes buffindexed on systems with a native page size larger than 16KB.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

You can get the latest version from the official ISC download page (although that download link still points to INN 2.6.1 as of this writing) or from my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

Norbert Preining: Analysing Debian packages with Neo4j – Part 1 – Debian

Sat, 14 Apr 2018 14:33:05 +0000

Overview on the blog series The Ultimate Debian Database UDD collects a variety of data around Debian and Ubuntu: Packages and sources, bugs, history of uploads, just to name a few. The database scheme reveals a highly de-normalized RDB. In this on-going work we extract (some) data from the UDD and represent it as a graph database. In the following series of blog entries we will report on this work. Part 1 (this one) will give a short introduction to Debian and the lifetime and structure of Debian packages. Part 2 will develop the graph database scheme (nodes and relations) from the inherent properties of Debian packages. The final part 3 will describe how to get the data from the UDD into Neo4j, give some sample queries, and discuss further work. This work has been presented at the Neo4j Online Meetup and a video recording of the presentation is available on YouTube. Part 1 – Debian Debian is an open source Linux distribution, developed mostly by volunteers. With a history of more than 20 years, Debian is one of the oldest Linux distributions. It sets itself apart from many other Linux distributions by a strict set of license rules that guarantees that everything within Debian is free according to the Debian Free Software Guidelines. Debian has also given rise to a large set of offspring distributions, the most widely known being Ubuntu. Debian contains not only the underlying operating system (Linux) and the necessary tools, but also a huge set of programs and applications, currently about 50000 software packages. All of these packages come with full source code but are already pre-compiled for easy consumption. To understand what information we have transferred into Neo4j we need to take a look at how Debian is structured, and how a package lives within this environment. Debian releases Debian employs release-based software management, that is, a new Debian version is released in more or less regular intervals.
The current stable release is Debian stretch (Debian 9.2) and was first released in June 2017, with the latest point release on October 7th, 2017. To prepare packages for the next stable release, they have to go through a set of suites to make sure they conform to quality assurance criteria. These suites are: Development (sid): the entrance point for all packages, where the main development takes place; Testing: packages that are ready to be released as the next stable release; Stable: the status of the current stable release. There are a few other suites, like experimental or those targeting security updates, but we leave their discussion out here. Package and suite transitions Packages have a certain life cycle within Debian. Consider the following image (by Youhei Sasaki, CC-NC-SA): Packages and Suites (Youhei Sasaki, CC-NC-SA) Packages are normally uploaded into the unstable suite and remain there for at least 5 days. If no release-critical bug has been reported, after these 5 days the package transitions automatically from unstable into the testing suite, which will be released as stable by the release managers at some point in the future. Structure of Debian packages Debian packages come as source packages and binary packages. Binary packages are available for a variety of architectures: amd64, i386, powerpc, just to name a few. Debian developers upload source packages (and often their own architecture's binary packages), and for the other architectures auto-builders compile and package the binary packages. Debian auto-builders (from Debi[...]
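The unstable-to-testing migration rule described above can be sketched as a toy predicate. This is purely illustrative (the real britney migration software checks many more conditions, such as dependency installability); the function name and the simplification to two inputs are my own:

```python
def can_migrate_to_testing(days_in_unstable, rc_bug_count):
    """Toy sketch of the unstable -> testing rule from the post:
    a package must sit in unstable for at least 5 days (the default
    urgency) and have no open release-critical bugs."""
    QUARANTINE_DAYS = 5
    return days_in_unstable >= QUARANTINE_DAYS and rc_bug_count == 0

print(can_migrate_to_testing(6, 0))    # True
print(can_migrate_to_testing(3, 0))    # False: still in quarantine
print(can_migrate_to_testing(10, 2))   # False: open RC bugs
```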

Vasudev Kamath: Docker Private Registry and Self Signed Certificates

Sat, 14 Apr 2018 14:17:00 +0000


I was recently experimenting with hosting a private registry on an internal LAN for publishing private Docker images. I found out that docker pull works only with a TLS-secured registry. It is possible to run an insecure registry by editing the daemon.json file, but it is better to use self-signed certificates instead.

Once I followed the steps and started the registry, I tried docker pull and it started complaining about the certificate not having any valid names. Yet this same certificate worked fine with browsers – of course you need to add an exception, but no other errors were encountered.

The Docker documentation does not mention any specific settings that need to be applied before generating a self-signed certificate, so I was a bit confused at the beginning. A bit of searching turned up the following issue filed against Docker, later re-assigned to Golang for its method of handling x509 certificates. It appears that, given a valid Subject Alternative Name, the Go crypto library ignores the Common Name.

From a thread on Security Stack Exchange I found the command to create a self-signed certificate containing a Subject Alternative Name. The command in the accepted answer does not work until you add the -extensions option to it, as mentioned in one of the comments. The full command is shown below.

openssl req -new -sha256 -key domain.key \
             -subj "/C=US/ST=CA/O=Acme, Inc./CN=example.com" \
             -reqexts SAN -extensions SAN \
             -config \
<(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:example.com,DNS:www.example.com")) -out domain.crt

You would need to replace the values in -subj and under the [SAN] extension. The benefit of this command is that you need not modify the /etc/ssl/openssl.cnf file itself.

If you do not have a domain name for the registry and are using an IP address instead, replace the DNS: entries in the [SAN] section of the above command with IP: entries.

Happy hacking!

Steinar H. Gunderson: Match day

Sat, 14 Apr 2018 08:31:00 +0000


We're live!

Edit: Day 1 is over, and the videos are up, although not quite cut yet. We had some issues here and there, but overall, things seem to have come out well. More fun with the playoffs tomorrow :-)

Vasudev Kamath: Docker container as Development Environment

Sat, 14 Apr 2018 07:45:00 +0000

When you have a distributed team working on a project, you need to make sure that everyone uses a similar development environment. This is critical if you are working on an embedded systems project. There are multiple possibilities for this scenario: use a common development server and provide every developer on your team an account on it; give everyone a description of how to set up their development environment and trust that they will do so; or use Docker to provide developers with a ready-made environment and use a build server (CI) for creating the final deployment binaries. The first approach was the most common one, but it has some drawbacks. Since it is a shared system, you should make sure that not everyone is able to install or modify the system, and hence have a single administrator so that no one accidentally breaks the development environment. Other problems include being forced to use an older OS due to specific requirements of compilers etc. The second approach makes you put your trust in your developers to get the correct development environment. Of course you need to trust your team, but everyone is a human being and humans make mistakes. Enter Docker. A Dockerfile is the best way to document and set up a development environment. A developer can read the Dockerfile to understand what is being done to set up the environment and simply execute the docker build command to generate his/her development environment; or better, you build the image and publish it in a public registry or, if that is not possible, a private registry, and ask developers to pull an image of the development environment. Don't be scared by a private registry: setting one up is not a humongous task! It's just a couple of docker commands, and there is pretty good documentation available to set one up. While setting up a development environment you need to make sure the last instruction in your Dockerfile executes the shell of your choice.
This is because when you start a container this last instruction is what docker actually runs, and in our case we need to provide the developer with a shell plus all build toolchains and libraries.

... CMD ["/bin/bash"]

Now the developer just needs to get or build the image, start the container and use it for their work. To summarize, the commands below are sufficient for a developer to run a fresh environment:

$ docker build -t devenv .  # If they are building it
# If they are pulling it from, say, a private registry
$ docker pull private-registry-ip/path/to/devenv
$ docker run -itd --name development devenv
$ docker attach development

When the container is started it will execute the shell, and the subsequent attach command will attach your shell's input/output to the container. Now it can be used just like a normal shell to build the application. Another good thing that can be done is sharing the workspace with the container, so that the container only contains the toolchain and libraries that are needed, while all version control, code editing and the like can be done on the host machine. One thing you need to make sure of is that your UID on the host and the UID of the user inside the container are the same. This can easily be done by creating a separate user in the container with a UID equal to your UID on the host system. Advantages Some advantages of using docker containers include: they are easy to set up and save a lot of time for the team as a whole compared to traditional approaches; easy to throw away and start fresh: If a developer[...]
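A minimal Dockerfile following this pattern might look as follows. This is a sketch under my own assumptions, not taken from the post: the base image, toolchain packages, user name and default UID of 1000 are all illustrative.

```dockerfile
FROM debian:stretch

# Toolchain and libraries needed for the project (illustrative choices)
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential gdb git \
    && rm -rf /var/lib/apt/lists/*

# Create a user whose UID matches the developer's UID on the host,
# so files in a shared workspace keep sane ownership
ARG UID=1000
RUN useradd -m -u "$UID" developer
USER developer
WORKDIR /home/developer

# Last instruction: drop the developer into a shell
CMD ["/bin/bash"]
```

Built with something like `docker build --build-arg UID=$(id -u) -t devenv .`, the resulting container can then be started and attached to as shown above.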

Shirish Agarwal: cleartext passwords and transparency

Sat, 14 Apr 2018 06:54:36 +0000

I had originally thought of talking about the recent autonomous car project which killed a homeless lady in Tempe, but I guess that will have to wait for another day. I saw Lars Wirzenius's blog post, which led me to change direction a bit. So let me just jump in with Lars' blog post, where he talks about cleartext passwords. While he has surmised and shared what a security problem they are, the pity is that we come to know of this only because the people in question tacitly admitted to bad practises. How many more such bad actors there are, developers putting user credentials in cleartext, god only knows. There was even an April Fool's joke in 2014 which shared why putting passwords in cleartext is bad. This is one lesson which web developers are neither taught nor learn. Most web development courses in India may talk about web frameworks, CSS, front-end and back-end web development, and may even talk about UX, but security is something which is supposed to be magically gained while you do the above things. Please note I said most, not all; there is a whole lot of awakening needed in terms of safe web development practices, but that's a tale for another time. Casual interactions with course publishers suggest that most students are looking for buzzwords, and employers do not look for 'security' as a strong point either. There have even been casual studies which shared that 0.01 of financial crimes are reported in India. I myself am guilty of this: when a bank mis-appropriates or does something stupid, my only concern is to get the transaction rectified or corrected, rather than worrying about whether some small, medium or large-scale conspiracy is happening in the bank. But that malaise has too many factors to cover in this small blog post.
A few years back the EFF did a tremendous job of pursuing and getting everyday users and vendors like Mozilla and Chromium to adopt https globally, but to my knowledge many Indian websites, including some of the biggest behemoths in India with whom we have day-to-day dealings, keep all their user passwords in cleartext. What may or may not be a shocker to many people is that many ATMs, at least in India, don't work over https even today. Is it any wonder that skimmers are still able to cheat honest people and taxpayers? The reasons for all of the above could range from sheer incompetence to laziness to not being regulated at all. Rather than sharing anecdotes, and also not having INR 100 crores or INR 1 billion rupees (that statement will become clear in a while), about developers who under casual circumstances have shared that they do neither one-way encryption nor salting nor any of the methods of securing passwords, either because financial companies don't demand it or don't know about it even though they should know better, I can however share one anecdote which resulted in a lawsuit that a media house won some time back. It isn't so much about unsafe web practices as about companies' lack of morals in pursuit of financial gain on the web, and our (the commons') own lack of understanding of such matters. I had to search my blog before sharing, and it turns out I didn't share this anecdote before; surprise, surprise. Since 2008 I have known of a media house called Moneylife which is run by a beautiful, very intelligent woman called Sucheta Dala[...]
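For reference, the one-way hashing with salting that the post says many developers skip takes only a few lines in any modern language. A minimal sketch using Python's standard library (the function names and the iteration count are illustrative, not from the post):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, key). Store both; never store the cleartext password."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt for each password
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, key

def verify_password(password, salt, key):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("s3cret")
print(verify_password("s3cret", salt, key))  # True
print(verify_password("wrong", salt, key))   # False
```

Because the salt is random per password, two users with the same password get different stored keys, which is exactly what defeats precomputed rainbow-table attacks.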

Silva Arapi: Digital Born Media Carnival July 2017

Fri, 13 Apr 2018 20:50:40 +0000

As described on their website, the Digital Born Media Carnival was a gathering of hundreds of online media representatives, information explorers and digital rights enthusiasts. The event took place on 14 – 18 July in Kotor, Montenegro. I found out about it when one of the members of Open Labs Hackerspace shared the news on our forum. While I was struggling over whether to attend or not because of a very busy period at work and at the University, the whole thing sounded very interesting and intriguing at the same time, so I decided to join the group of people who were also planning to go, and to apply with a workshop session too. No regrets at all! This turned out to be one of the greatest events I've attended so far and had a great impact on what I somehow decided to do next, regarding my work as a hacktivist and as a digital rights enthusiast. The organizers of the Carnival had announced on the website that they were looking for online media representatives, journalists, bloggers, content creators, human rights defenders, hacktivists, new media startups etc., and as a hacktivist I found myself willing to join and learn more about some topics which had been very intriguing to me for a while, while also looking at this as an opportunity to meet other people with interests in common with mine. I applied with a workshop where I was going to introduce some simple tools for people to better preserve their privacy online. The session was accepted and I was invited to lead, together with Andrej Petrovski, the sessions of the Digital Security track, located in the sailing club “Lahor”. I held my workshop there on Saturday late in the morning and I really enjoyed it.
Most of the attendees were journalists or people without a technical background, and they showed a lot of interest, asked me many questions and shared some stories. I also received very good feedback on the workshop, and it gave me some really good vibes since this was the first time I spoke on cyber security at an important event of this kind, as DBMC’17 was. @ArapiSilva started the workshop sessions at #DBMC17 with a cybersecurity session #Kotor — Redon Skikuli (@rskikuli) July 15, 2017 I spent the other days of the Carnival attending different workshops and talks, meeting new people, discussing with friends and enjoying the sun. We would go to the beach in the afternoon and had a very cool drone photo shooting. DBMC drone photo shooting – Kotor, Montenegro This was great work from the SHARE Foundation, and hopefully there will be other events like this in the near future; I would totally recommend people to attend! If you are new to the topics discussed there, this is a great way to start. If you have been in the field for a while, this is the place to meet other professionals like you. If you are looking for an event which you can also combine with some days of vacation while staying in touch with causes you care about, this would once again be the place to go.

Junichi Uekawa: Wishing the pollen season to end.

Fri, 13 Apr 2018 01:23:43 +0000

Wishing the pollen season to end.

Kees Cook: security things in Linux v4.16

Fri, 13 Apr 2018 00:04:22 +0000

Previously: v4.15  Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct. As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen, one area of remaining work is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.
Total Slab Memory:           48074720
Usercopyable Memory:          6367532  13.2%
         task_struct                    0.2%         4480/1630720
         RAW                            0.3%          300/96000
         RAWv6                          2.1%         1408/64768
         ext4_inode_cache               3.0%       269760/8740224
         dentry                        11.1%       585984/5273856
         mm_struct                     29.1%        54912/188448
         kmalloc-8                    100.0%        24576/24576
         kmalloc-16                   100.0%        28672/28672
         kmalloc-32                   100.0%        81920/81920
         kmalloc-192                  100.0%        96768/96768
         kmalloc-128                  100.0%       143360/143360
         names_cache                  100.0%       163840/163840
         kmalloc-64                   100.0%       167936/167936
         kmalloc-256                  100.0%       339968/339968
         kmalloc-512                  100.0%       350720/350720
         kmalloc-96                   100.0%       455616/455616
         kmalloc-8192                 100.0%       655360/655360
         kmalloc-1024                 100.0%       812032/812032
         kmalloc-4096                 100.0%       819200/819200
         kmalloc-2048                 100.0%      1310720/1310720

This series took quite a while to land (you can see David’s [...]

Enrico Zini: ansible nspawn connection plugin

Thu, 12 Apr 2018 23:39:44 +0000

I have been playing with system images using ansible and chroots, and I figured that using systemd-nspawn to handle the chroots would make things nice, giving ansible commands the benefit of a running system. There has been an attempt which was rejected. Here is my attempt. It does boot the machine then run commands inside it, and it works nicely. The only thing I missed is a way of shutting down the machine at the end, since ansible seems to call close() at the end of each command, and I do not know enough ansible internals to do this right. I hope this can serve as inspiration for something that works well.

# Based on (c) 2013, Maykel Moya
# Based on (c) 2015, Toshio Kuratomi
# (c) 2018, Enrico Zini
#
# This is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see .
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import distutils.spawn
import os
import os.path
import pipes
import subprocess
import time
import hashlib

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.module_utils.basic import is_executable

try:
    from __main__ import display
except ImportError:
    from ansible.utils.display import Display
    display = Display()


class Connection(ConnectionBase):
    ''' Local chroot based connections '''

    transport = 'schroot'
    has_pipelining = True
    # su currently has an undiagnosed issue with calculating the file
    # checksums (so copy, for instance, doesn't work right)
    # Have to look into that before re-enabling this
    become_methods = frozenset(C.BECOME_METHODS).difference(('su',))

    def __init__(self, play_context, new_stdin, *args, **kwargs):
        super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)

        self.chroot = self._play_context.remote_addr

        # We need short and fast rather than secure
        m = hashlib.sha1()
        m.update(os.path.abspath(self.chroot))
        self.machine_name = "ansible-" + m.hexdigest()

        if os.geteuid() != 0:
            raise AnsibleError("nspawn connection requires running as root")

        # we're running as root on the local system so do some
        # trivial checks for ensuring 'host' is actually a chroot'able dir
        if not os.path.isdir(self.chroot):
            raise AnsibleError("%s is not a directory" % self.chroot)

        chrootsh = os.path.join(self.chroot, 'bin/sh')
        # Want to check for a usable bourne shell inside the chroot.
        # is_executable() == True is sufficient. For symlinks [...]

Shirish Agarwal:, summer cleaning and talking about creative writing-Taiwan

Thu, 12 Apr 2018 21:03:15 +0000

As it was around the week-end, I took some time today to clean up my mail inbox, replied to some new and old mails which I had forgotten to answer, and deleted the ones long past their expiry date. While doing that, I came to know that I had joined circa 2009. Almost a decade went by and I didn't even realize where it went. Looking back, I realize I had forgotten what the early days were like on . In those days, the only meetups were of the dating kind; now of course it is much more fleshed out and caters to Information Technology, Writing, Cooking and even Salsa dancing. I just saw somebody start a salsa group in my hometown. Anyways, last week, after almost 6 months, I went to a meetup. I had seen "How to start writing a short novel" being hosted by my friend Dr. Swati Shome. Copyright – Arun Paria I had some other engagements and, it being a Sunday, and knowing most Puneties and the laid-back culture, came at 11:30. Meetup had shown me around 20-odd attendees, so I was expecting a small group, but it turns out there were a few writers in the making and many wannabe hacky writers like yours truly, no offence meant to anyone. There were lots of interesting questions and anecdotes shared by Mme. Tanushree Poddar. There were a couple of young chaps who went travelling and had magical experiences. I was tempted to ask whether it was in Triund (somewhere in the Himalayas) or Parvati Valley. There are lots of places out there where you feel the magic. Also, if you are with friends and are safe, you can try to heighten the experiences using hallucinogens like magic mushrooms. I can't explain it, but you really feel everything is communicating with you and it all makes sense. The best part is you don't get addicted, while understanding what some people who might be attuned to nature might be feeling. It's a sacred feeling. Coming to the meeting, I realized how much I missed talking and chatting with fellow writers, bloggers etc.
I and probably a few others helped Swati with reviewing and plugging holes in her non-fiction book on sexuality for teenagers. She showed me an initial print copy the publisher had shared with her; I guess it still has to go through a few iterations, as the final thing would be available in June. While I love hardbacks, in this case I would make an exception. I do hope the publisher prices it properly and more and more children and their parents use the book to be able to talk about sex without shame etc. While in some spheres we, the Indian society, have become bolder, talk between children and parents is still within the old boundaries while technology has marched on. I am talking here about the middle class only. Kids as small as 6-8 years know about sex which we didn't know even after reaching majority (i.e. 18), but that's what Swati's book will talk about. Incidentally, Swati chided me about sharing something like 8 A4 pages of feedback with her; what she didn't know was that I probably shared less than half, as sex is more a mental thing than anything else. Anyways, during the interaction and talking with others, I re-realized that there are so many people like me who feel the need to write. While everybody does pay homage to success, most of us driven by a need [...]

Julien Danjou: Lessons from OpenStack Telemetry: Incubation

Thu, 12 Apr 2018 12:50:00 +0000

It was mostly around that time in 2012 that I and a couple of fellow open-source enthusiasts started working on Ceilometer, the first piece of software from the OpenStack Telemetry project. Six years have passed since then. I've been thinking about this blog post for several months (even years, maybe), but lacked the time and the hindsight needed to lay out my thoughts properly. In a series of posts, I would like to share my observations about the Ceilometer development history.

To understand the full picture here, I think it is fair to start with a small retrospective on the project. I'll try to keep it short, and it will be unmistakably biased, even if I'll do my best to stay objective – bear with me.

Incubation

Early 2012, I remember discussing with the first Ceilometer developers the right strategy to solve the problem we were trying to address. The company I worked for wanted to run a public cloud, and billing the resource usage was at the heart of the strategy. The fact that no component in OpenStack was exposing any consumption API was a problem. We debated how to implement those metering features in the cloud platform. There were two natural solutions: either adding some resource accounting reporting to each OpenStack project, or building a new piece of software on the side, covering for the lack of those functionalities. At that time there were fewer than a dozen OpenStack projects. Still, the burden of patching every project seemed like an infinite task. Having code reviewed and merged in the most significant projects took several weeks, which, considering our timeline, was a show-stopper. We wanted to go fast. Pragmatism won, and we started implementing Ceilometer using the features each OpenStack project was offering to help us: very little. Our first and obvious candidate for usage retrieval was Nova, where Ceilometer aimed to retrieve statistics about virtual machine instance utilization. 
Nova offered no API to retrieve those data – and still doesn't. Since waiting several months to have such an API exposed was out of the question, we took the shortcut of polling libvirt, Xen or VMware directly from Ceilometer. That's precisely how temporary hacks become historical design. Implementing this design broke the abstraction layer that Nova aims to offer. As time passed, several leads were followed to mitigate those trade-offs in better ways. But on each development cycle, getting anything merged in OpenStack became harder and harder. It went from patches taking long to review, to having a long list of requirements to merge anything. Soon, you'd have to create a blueprint to track your work, and write a full specification linked to that blueprint, with that specification itself being reviewed by a bunch of the so-called core developers. The specification had to be a thorough document covering every aspect of the work, from the problem it was trying to solve to the technical details of the implementation. Once the specification was approved, which could take an entire cycle (6 months), you'd have to make sure that the Nova team would make your blueprint a priority. To make sure it was, you would have to fly a few thousands of k[...]
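The polling shortcut described above can be sketched roughly like this (hypothetical names; the two injected functions are stand-ins for the libvirt/Xen/VMware API calls a real agent would make):

```python
import time

def poll_usage(list_instances, get_cpu_time, interval_s=60.0, cycles=1):
    """Periodically ask the hypervisor directly for per-instance stats,
    bypassing Nova's abstraction layer (the shortcut described above).

    list_instances and get_cpu_time are injected stand-ins for hypervisor
    API calls (e.g. libvirt domain enumeration and CPU-time queries).
    """
    samples = []
    for _ in range(cycles):
        for instance in list_instances():
            samples.append({"instance": instance,
                            "cpu_time": get_cpu_time(instance)})
        time.sleep(interval_s)
    return samples
```

The trade-off is visible even in this sketch: the poller has to know which hypervisor it is talking to, which is exactly what Nova was supposed to hide.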

Bits from Debian: Bursary applications for DebConf18 are closing in 48 hours!

Thu, 12 Apr 2018 10:30:00 +0000

If you intend to apply for a DebConf18 bursary and have not yet done so, please proceed as soon as possible!

Bursary applications for DebConf18 will be accepted until April 13th at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk or organising an event is considered towards your bursary; if you have a submission to make, submit it even if it is only sketched-out. You will be able to detail it later. DebCamp plans can be entered in the usual Sprints page at the Debian wiki.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the wiki.

See you in Hsinchu!


Steinar H. Gunderson: Streaming the Norwegian ultimate championships

Wed, 11 Apr 2018 23:36:00 +0000


As the Norwegian indoor frisbee season is coming to a close, the Norwegian ultimate nationals are coming up, too. Much like in Trøndisk 2017, we'll be doing the stream this year, replacing a single-camera Windows/XSplit setup with a multi-camera free software stack based on Nageru.

The basic idea is the same as in Trøndisk: two cameras (one wide and one zoomed) for the main action and two static ones above the goal zones. (The hall has more amenities for TV productions than the one in Trøndisk, so a basic setup is somewhat simpler.) But there are quite a few tweaks:

  • We've swapped out some of the cameras for more suitable ones (the DSLRs didn't do too well under the flicker of the fluorescent tubes, for instance, and newer GoPros have rectilinear modes). And there's a camera on the commentators now, with side-by-side view as needed.

  • There are tally lights on the two human-operated cameras (new Nageru feature).

  • We're doing CEF directly in Nageru (new Nageru feature) instead of through CasparCG, to finally get those 60 fps buttery smooth transitions (and less CPU usage!).

  • HLS now comes directly out of Cubemap (new Cubemap feature) instead of being generated by a shell script using FFmpeg.

  • Speaking of CPU usage, we now have six cores instead of four, for more x264 oomph (we wanted to do 1080p60 instead of 720p60, but alas, even x264 at nearly superfast can't keep up when there's too much motion).

  • And of course, a ton of minor bugfixes and improvements based on our experience with Trøndisk—nothing helps as much as battle-testing.

For extra bonus, we'll be testing camera-over-IP from Android for interviews directly on the field, which will be a fun challenge for the wireless network. Nageru does have support for taking in IP streams through FFmpeg (incidentally, a feature originally added for the now-obsolete CasparCG integration), but I'm not sure if the audio support is mature enough to run in production yet—most likely, we'll do the reception with a laptop and use that as a regular HDMI input. But we'll see; thankfully, it's a non-essential feature this time, so we can afford to have it break. :-)

Streaming starts Saturday morning CEST (UTC+2), will progress until late afternoon, and then restart on Sunday with the playoffs (the final starts at 14:05). There will be commentary in a mix of Norwegian and English depending on the mood of the commentators, so head over to if you want to watch :-) Exact schedule on the page.

Ben Hutchings: Debian LTS work, March 2018

Wed, 11 Apr 2018 20:41:33 +0000


I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 2 hours from February. I worked 15 hours and will again carry over 2 hours to April.

I made another two releases on the Linux 3.2 longterm stable branch (3.2.100 and 3.2.101), the latter including mitigations for Spectre on x86. I rebased the Debian package onto 3.2.101 but didn't upload an update to Debian this month. We will need to add gcc-4.9 to wheezy before we can enable all the mitigations for Spectre variant 2.

Joerg Jaspert: Debian SecureBoot Sprint 2018

Wed, 11 Apr 2018 15:01:13 +0000

Monday morning I gave back the keys to Office Factory Fulda, who sponsored the location for the SecureBoot Sprint from Thursday, 4th April to Sunday, 8th April. Apparently we left a pretty positive impression (we managed to clean up), so we are welcome again for future sprints.

The goal of this sprint was enabling SecureBoot in/for Debian, so that users who have SecureBoot-enabled machines do not need to turn that off to be able to run Debian. That requires us to handle signing a certain set of packages in a defined way, handling it as automated as possible while ensuring that stuff is done in a safe/secure way. Now add details like secure handling of keys, only signing pre-approved sets (to make abusing it harder), revocations, key rollovers, combine it all with the infrastructure and situation we have in Debian, say dak, buildd, a security archive with somewhat different rules of visibility, reproducibility, a huge set of architectures only some of which do SecureBoot, proper audit logging of signatures, and you end up with 7 people from different teams taking the whole first day just discussing and hashing out a specification. Plus some joining in virtually. I'm not going into the actual details of all that, as a sprint report will follow soon.

Friday to Sunday was used for the actual implementation of the agreed solution. The actual dak changes turned out to not be too large, and thankfully Ansgar was on them, so I could take time to push the FTPTeam's move to the new Salsa service forward. I still have a few of our less important repositories to move, but that's a simple process I will be doing during this week; the most important step was coming up with a sane way of using Salsa. That does not mean the actual web interface, but getting code changes from there to the various Debian hosts we run our services on. 
In the past, we pushed to the hosts directly, so all code changes appearing on them meant that someone who was in the right unix group on that machine made them appear.1 "Verified by ssh login", basically. With Salsa, we now add a service that has a different set of administrators added on top. And a big piece of software too, with a huge possibility of bugs, worst case allowing random users access to our repositories. Which is a far larger problem area than "git push via ssh" as in the past, and as such more likely to be bad. If we blindly pull from a repository on such shared space, the confirmation "an FTPMaster said this code is good" is gone. So we need a way of adding that confirmation back, while still being able to use all the nice features that Salsa offers. Within Debian, what's better than using already established ways of trusting something: gnupg-created signatures?! So how to go forward? I have been lucky, I did not need to entirely invent this on my own; Enrico had similar concerns for the New-Maintainer web pages. He set up CI to test his stuff and, if successful, it installs the tested stuff on the NM machine, provided that the commit is signed by a key from a defined set. Unfortunately, for me, he deals with a Django app that lis[...]

Olivier Berger: Preventing resume immediately after suspend on Dell Latitude 5580 (Debian testing)

Wed, 11 Apr 2018 11:14:23 +0000


I’ve installed Debian buster (testing at the time of writing) on a new Dell Latitude 5580 laptop, and one annoyance I’ve found is that the laptop would almost always resume as soon as it was suspended.

AFAIU, it seems the culprit is the network card (Ethernet controller: Intel Corporation Ethernet Connection (4) I219-LM) which would be configured with Wake-On-Lan (wol) set to the “magic packet” mode (ethtool enp0s31f6 | grep Wake-on would return ‘g’). One hint is that grep enabled /proc/acpi/wakeup returns GLAN.

There are many ways to change that for the rest of the session with a command like ethtool -s enp0s31f6 wol d.

But I had a hard time figuring out if there was a preferred way to make this persistent among the many hits in so many tutorials and forum posts.

My best hit so far is to add a file named /etc/systemd/network/ containing:



The driver can be found by checking udev settings as reported by udevadm info -a /sys/class/net/enp0s31f6
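The file's content did not survive aggregation; a sketch of what such a systemd .link file typically contains, assuming the driver reported by udevadm turns out to be e1000e (both the filename and the driver name here are illustrative):

```ini
# /etc/systemd/network/50-disable-wol.link (hypothetical name; it must
# sort before 99-default.link to take effect)
[Match]
# Use the driver reported by: udevadm info -a /sys/class/net/enp0s31f6
Driver=e1000e

[Link]
WakeOnLan=off
```

After adding such a file, the setting is applied by systemd-udevd the next time the link is (re)initialized.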

There are other ways to do that with systemd, but so far it seems to be working for me. Hth,

Steve Kemp: Bread and data

Wed, 11 Apr 2018 09:01:00 +0000


For the past two weeks I've mostly been baking bread. I'm not sure what made me decide to make some the first time, but it actually turned out pretty good, so I've been making it every day or two ever since.

This is the first time I've made bread in the past 20 years or so - I recall in the past I got frustrated that it never rose, or didn't turn out well. I can't see that I'm doing anything differently, so I'll just write it off as younger-Steve being daft!

No doubt I'll get bored of the delicious bread in the future, but for the moment I've got a good routine going - juggling going to the shops, child-care, and making bread.

Bread I've made includes the following:





Beyond that I've spent a little while writing a simple utility to embed resources in golang projects, after discovering the tool I'd previously been using, go-bindata, had been abandoned.

In short you feed it a directory of files and it will generate a file static.go with contents like this:

files[ "data/index.html" ] = "..."
files[ "data/robots.txt" ] = "User-Agent: * ..."

It's a bit more complex than that, but not much. As expected getting the embedded data at runtime is trivial, and it allows you to distribute a single binary even if you want/need some configuration files, templates, or media to run.

For example, in the project I discussed in my previous post there is an HTTP server which serves a user interface based upon Bootstrap. I want the HTML files which make up that user interface to be embedded in the binary, rather than distributing them separately.

Anyway, it's not unique, it was fun to write, and I've switched to using it now:

Gunnar Wolf: DRM, DRM, oh how I hate DRM...

Wed, 11 Apr 2018 04:43:06 +0000

I love flexibility. I love when the rules of engagement are not set in stone and allow us to lead a full, happy, simple life. (Apologies to Felipe and Marianne for using their very nice sculpture for this rant. At least I am not desperately carrying a brick! ☺) I have been very, very happy since I switched to a Thinkpad X230. This is the first computer I have with an option for a cellular modem, so after thinking about it a bit, I got myself one. After waiting for a couple of weeks, it arrived in an unexciting little envelope straight from Hong Kong. If you look closely, you can even appreciate there's a line (just below the smaller barcode) that reads "Lenovo". I soon found out how to open this laptop (kudos to Lenovo for a very sensible and easy opening process, great documentation... So far, it's the "openest" computer I have had!) and installed my new card! The process was decently easy, and after patting myself on the back, I eagerly turned on my computer... Only to find the BIOS halting with the following message: 1802: Unauthorized network card is plugged in - Power off and remove the miniPCI network card (1199/6813). System is halted. So... I got everything back to its original state. Stupid DRM in what I felt was the openest laptop I have ever had. Gah. Anyway... As you can see, I have a brand new cellular modem. I am willing to give it to the first person that offers me a nice beer in exchange, here in Mexico or wherever you happen to cross my path (just tell me so I bring the little bugger along!) Of course, I even tried to get one of the nice volunteers to install Libreboot on my computer now that I was at Libreplanet, which would have solved the issue. But they informed me that Libreboot is supported only on the (quite a bit older) X200 machines, not on the X230.

Reproducible builds folks: Reproducible Builds: Weekly report #154

Tue, 10 Apr 2018 08:03:07 +0000

Here's what happened in the Reproducible Builds effort between Sunday April 1 and Saturday April 7 2018:

  • Holger Levsen published a call for votes in order to decide on a logo for the project, closing on Sunday 15th April. For more background information, please see the previous meeting's minutes, the proposals and the original proof-of-concept artwork.

  • During Easterhegg 2018 Holger was interviewed in German by the Swiss "Hackerfunk" podcast (MP3 & shownotes, Ogg).

  • Juro Bystricky posted to our mailing list on the Linux kernel's srcversion field.

  • Chris Lamb uploaded python-setuptools to our local package repository to work around an issue where version 39.0.1 onwards generated PKG-INFO files with a non-deterministic "Provides-Extra" output (#894215). This was subsequently superseded by Matthias Klose's upload of 39.0.1-2 into unstable.

  • anthraxx added Arch Linux support for the Gnumeric spreadsheet comparator in diffoscope.

Patches

  • Bernhard M. Wiedemann: cobra (SOURCE_DATE_EPOCH), Qt .qch files (SOURCE_DATE_EPOCH), webkitgtk (readdir(2)).

  • Chris Lamb: #894607 filed against pylint (forwarded upstream); #890311 and #890568 for dashel (forwarded upstream).

  • Juan Picca: #894787 filed against vte.

In addition, build failure bugs were reported by Adam Borowski (1), Adrian Bunk (27) and Aurélien Courderc (1).

development

Mattia Rizzolo made a large number of changes to our Jenkins-based testing framework, including:

  • Variation testing, etc.: Configure APT to ignore Release expiry when we are in the future. Configure APT pinning to force usage of our own packages instead of the ones coming from Debian. Only set Acquire::Check-Valid-Until:false when the host runs in the future.

  • Performance, etc.: Decrease the number of workers in an attempt to reduce the general load. Configure dpkg to not issue pointless fsync()s. Do not store the scheduling message anymore.

  • chroot maintenance: Move from schroot to chdist for the master node. Swap pbuilder and schroot columns now that the latter are disabled. Use chdist to download the source on the build nodes too. Use a schroot session while calling diffoscope. Use the host architecture when selecting which chdist to obtain sources from. Disable all the schroot maintenance jobs.

  • Misc: Make the location of the JSON outputs distribution-specific. Keep the build worker logs for an extra day. Log failures and notify us. Correct logging in

Reviews of unreproducible packages

52 package reviews were added, 43 were updated and 50 have been removed this week, adding to our knowledge about identified issues. Two issue categorisation types were added (nondeterminism_in_files_generated_by_hfst & nondeterminism_in_apertium_lrx_bin_files_generated_by_lrx_comp) and two were removed (nondeterminstic_ordering_in_gsettings_glib_enums_xml & captures_build_path_in_python_sugar3_symlinks).

Misc.

This week's edition was written by Bernhard M. Wiedema[...]

Markus Koschany: My Free Software Activities in March 2018

Mon, 09 Apr 2018 21:58:52 +0000

Welcome to Here is my monthly report that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

I could close a KDE-specific startup bug in ace-of-penguins (#883707) thanks to the help of Esa Peuha. New upstream releases this month: springlobby, trackballs and freeciv. Bug fixes and package updates: foobillardplus (#892338, #889523), tuxpuck (#892349), micropolis-activity (RC #891338, #870761), beneath-a-steel-sky. I sponsored an NMU for Innocent de Marchi, xjig 2.4-14.1, and reviewed jpeces, a puzzle game written in Java, for him. This was a rather quiet month for Debian Games. We still have a couple of open RC bugs due to the removal of obsolete Gnome 2 libraries. No progress compared to last month.

Debian Java

I spent most of my free time on Java packages because… OpenJDK 9 is now the default Java runtime environment in Debian! As of today I count 319 RC bugs (bugs with severity normal would be serious today as well), of which 227 are already resolved. That means one third of the Java team's packages have to be adjusted for the new OpenJDK version. Java 9 comes with a new module system called Jigsaw. Undoubtedly it represents a lot of new interesting ideas, but it is also a major paradigm shift. For us mere packagers it means more work than any other version upgrade in the past. Let's say we are a handful of regular contributors (I'm generous) and we spend most of our time stabilizing the Java ecosystem in Debian to the point that we can build all of our packages again. Repeat for every new Debian release. Unfortunately not much time is actually spent on packaging new and cool applications or libraries unless they are strictly required to fix a specific Java 9 issue. It just doesn't feel right at the moment. Most upstreams are rather indifferent or relaxed when it comes to porting their applications to Java 9 because they can still use Java 8, so why can't we?

They don't have to provide security support for five years and can make the switch to Java 9 much later. They can also cherry-pick certain versions of libraries, whereas we have to ensure that everything works with one specific version of a library. But that's not all: Java 9 will not be shipped with Buster; we even aim for OpenJDK 11! Releases of OpenJDK will be more frequent from now on (expect a new release every six months), and there are certain versions which will receive extended security support, like OpenJDK 11. One thing we can look forward to: apparently more commercial features of Oracle JDK will be merged into OpenJDK, and it appears the long-term goal is to make Oracle JDK and OpenJDK builds completely interchangeable. So maybe one day only one free software JDK for everything and everyone? I hope so. I worked on the following packages to address Java 9 or other bugs: activemq, snakeyaml, libjchart2d-java, jackson-dataformat-yaml, j[...]

Lucas Kanashiro: Migrating PET features to distro-tracker

Mon, 09 Apr 2018 13:30:14 +0000

After joining the Debian Perl Team some time ago, PET has helped me a lot to find work to do in the team context, and also helped the whole team in our workflow. For those who do not know what PET is: “a collection of scripts that gather information about your (or your group’s) packages. It allows you to see in a bird’s eye view the health of hundreds of packages, instantly realizing where work is needed.”. PET became an important project since about 20 Debian teams were using it, including Perl and Ruby teams in which I am more active.

In Cape Town, during DebConf16, I had a conversation with Raphael Hertzog about the possibility of migrating PET features to distro-tracker. He is one of the distro-tracker maintainers, and we found some similarities between those tools. Though after that I did not have enough time to push it forward. However, after the migration from Alioth to Salsa, PET became almost useless because a lot of things were done based on Alioth. This brought me the motivation to get this migration idea off the drawing board, and to support the PET features in distro-tracker's team visualization.

In the meantime, the Debian Outreach team published a GSoC call for mentors for this year. I was a Debian GSoC student in 2014 and 2015, and this was a great opportunity for me to join the community. With that in mind, and wishing to give this opportunity to others, I decided to become a mentor this year and proposed a project to implement the PET features in distro-tracker, called Improving distro-tracker to better support Debian Teams. We are at the student selection phase and I have received great proposals. I am looking forward to the start of the program and to finally having the PET features available in And of course, to bringing new blood to the Debian Project, since this is the idea behind those outreach programs.

Michal Čihař: New projects on Hosted Weblate

Mon, 09 Apr 2018 10:00:39 +0000


Hosted Weblate also provides free hosting for free software projects. The hosting requests queue has grown too long and waited for more than a month, so it's time to process it and include new projects. I hope that gives you good motivation to spend the Christmas break translating free software.

This time, the newly hosted projects include:

If you want to support this effort, please donate to Weblate, especially recurring donations are welcome to make this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Lars Wirzenius: Architecture aromas

Mon, 09 Apr 2018 06:13:20 +0000


"Code smells" are a well-known concept: they're things that make you worried your code is not of good quality, without necessarily being horribly bad in and of themselves. Like a bad smell in a kitchen that looks clean.

I've lately been thinking of "architecture aromas", which indicate there might be something good about an architecture, but don't guarantee it.

An example: you can't think of any component that could be removed from an architecture without sacrificing main functionality.

Antoine Beaupré: Death of a Thinkpad x120e laptop

Mon, 09 Apr 2018 04:01:13 +0000

My laptop named "angela" is (was?) a Thinkpad x120e (ThinkWiki). It's a netbook model (although they branded it an Ultraportable), which meant back then that it was a small, wide, slim laptop with less power, but cheaper. It did its job: I have carried it through meetings and conferences all over the world for 7 years now. I also used it as a workstation for a short time in 2016-2017, when marcos stopped being a workstation and turned solely into a home cinema. I always disliked the keyboard. I got used to the "chiclet" key style, but never to the missing top block of keys: I just use "scroll lock", "print screen" and "pause" too much. I also found the CPU to be much slower than my previous workstation (marcos), which meant it was a pain to go back to it. Memory was also a limitation: I could apparently bump the memory to 8GB, but the cost is high (80$) and the configuration is not officially supported. I also struggled with the wifi card, which works through binary blobs, and it's not possible to replace it because the BIOS blocks "unauthorized" cards from being installed, an absolutely ludicrous idea. As a comparison, the Thinkpad x201, released a year earlier, fully supports 8GB of RAM and has a more powerful i5 processor. It can also run Coreboot, although that is less supported than on other Thinkpad models. A generous friend was nice enough to give me his spare x201 which, even if it's incredibly worn out, already feels more solid, reliable and fast than my shiny x120e. And the x201 has broken keys, a torn bezel, and a hard drive cover held together with duct tape. I love it.

How the x120e died

In the end, this laptop died a horrible death: it crashed, face first, on a linoleum floor. This seems to have cracked something in the screen, which makes the text barely readable and the colors totally off. 
Here is a Github webpage that is supposed to be white, but shows up as cyan: This phenomenon progressively damages the display until it shows nothing but a blank white screen. I have heard it might be some gas that leaks from the display into the LCD, but that screen is supposed to be lit by an LED array (as opposed to CCFL, see the backlight article for more explanations), so that's probably not the problem. So I don't quite know what's going on with that screen, but it's obviously dead, which is somewhat inconvenient for a laptop, to say the least. It would probably be possible to replace the screen (40-60$USD for parts); however, there is another issue: the CPU/fan assembly also has a serious cooling problem. When the machine boots, the fan kicks in at full speed immediately. Just idle, the CPU hits 62°C. Doing a git annex fsck on a bunch of files (which involves many SHA256 checksums) made the CPU heat up to 99°C. Playing videos on Netflix completely crashes the machine with temperature warnings, as it stru[...]

Joey Hess: AIMS inverter control via GPIO ports

Sun, 08 Apr 2018 23:35:40 +0000


I recently upgraded my inverter to an AIMS 1500 watt pure sine inverter (PWRI150024S). This is a decent inverter for the price, I hope. It seems reasonably efficient under load compared to other inverters. But when it's fully idle, it still consumes 4 watts of power.

That's almost as much power as my laptop, and while 96 watt-hours per day may not sound like a lot of power, some days in winter, 100 watt-hours is my entire budget for the day. Adding more batteries just to power an idle inverter would be the normal solution, probably. Instead, I want to have my house computer turn it off when it's not being used.

Which brings me to the other problem with this inverter: the power control is not a throw switch, but a button you have to press and hold for a second. And looking inside the inverter, there was no easy way to hack in a relay to control it.

The inverter has an RJ22 control port. AIMS also does not seem to document what the pins do, so I reverse engineered them.

Since the power is toggled, it's important that the computer be able to check if the inverter is currently running, to reliably get to the desired on/off state.

I designed (well, mostly cargo-culted) a circuit that uses 4n35 optoisolators to safely interface the AIMS with my cubietruck's GPIO ports, letting it turn the inverter on and off, and also check if it's currently running. Built this board, which is the first PCB I've designed and built myself.
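The control logic is roughly "press and hold the button, then check the status line to see whether the toggle landed where you wanted". A sketch of that logic (hypothetical helper names; the actual implementation is the Haskell code in the repository):

```python
import time

def inverter_running(status_value_path):
    """Read the status optoisolator's GPIO value file; '1' means the
    inverter is currently on."""
    with open(status_value_path) as f:
        return f.read().strip() == "1"

def press_power_button(set_pin, hold_seconds=1.0):
    """Hold the 'button' via the control optoisolator for a second,
    mimicking a human press-and-hold."""
    set_pin(1)
    time.sleep(hold_seconds)
    set_pin(0)

def set_inverter(want_on, status_value_path, set_pin):
    """Since the power control is a toggle rather than a switch, check
    the current state and only press if it differs from the desired one."""
    if inverter_running(status_value_path) != want_on:
        press_power_button(set_pin)
```

On a board like the cubietruck, status_value_path would point at a sysfs GPIO value file and set_pin would write the control GPIO; both are parameterized here so the logic stands alone.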

(image) (image)

The full schematic and haskell code to control the inverter are in the git repository My design notebook for this build is available in secure scuttlebutt along with power consumption measurements.

It works!

joey@darkstar:~>ssh house inverter status
joey@darkstar:~>ssh house inverter on
joey@darkstar:~>ssh house inverter status

Dirk Eddelbuettel: tint 0.1.0

Sun, 08 Apr 2018 14:57:00 +0000


A new release of the tint package just arrived on CRAN. Its name expands from tint is not tufte as the package offers a fresher take on the Tufte-style for html and pdf presentations.

This version adds support for the tufte-book LaTeX style. The package now supports handouts in html or pdf format (as before) but also book-length material. I am using this myself in a current draft and it is fully working, though (as always) subject to change.

A screenshot of a chapter opening and a following page is below:


One can deploy the template for book-style documents from either rmarkdown (easy) or bookdown (so far manual setup only). I am using the latter, but the difference does not really matter as long as you render the whole document at once; the only change from bookdown, really, is that your source directory ends up containing more files, giving you more clutter and more degrees of freedom to wonder what gets set where.

The full list of changes is below.

Changes in tint version 0.1.0 (2018-04-08)

  • A new template 'tintBook' was added.

  • The pdf variants now default to 'tango' as the highlighting style.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Niels Thykier: Build system changes in debhelper

Sun, 08 Apr 2018 09:55:42 +0000

Since debhelper/11.2.1[1], we now support using cmake for configure and ninja for build + test as an alternative to cmake for configure and make for build + test.  This change was proposed by Kyle Edwards in Debian bug #895044. You can try this new combination by specifying “cmake+ninja” as build system.

To facilitate this change, the cmake and meson debhelper buildsystems had to change their (canonical) name.  As such you may have noticed that the “--list” option for dh_auto_* now produces a slightly different output:


$ dh_auto_configure --list | grep +
cmake+makefile       CMake (CMakeLists.txt) combined with simple Makefile
cmake+ninja          CMake (CMakeLists.txt) combined with Ninja (build.ninja)
meson+ninja          Meson (meson.build) combined with Ninja (build.ninja)


You might also notice that “cmake” and “meson” are no longer listed in the full list of build systems.  To retain backwards compatibility, the names “cmake” and “meson” are handled as “cmake+makefile” and “meson+ninja”.  This can be seen if we specify a build system:


$ dh_auto_configure --buildsystem cmake --list | tail -n1
Specified: cmake+makefile (default for cmake)
$ dh_auto_configure --buildsystem cmake+makefile --list | tail -n1
Specified: cmake+makefile
$ dh_auto_configure --buildsystem cmake+ninja --list | tail -n1
Specified: cmake+ninja


If your package uses cmake, please give it a try and see what works and what does not.  So far, the only known issue is that cmake+ninja may fail if the package has no tests while it succeeds with cmake+makefile.  I believe this is because the makefile build system checks whether the “test” or “check” targets exist before calling make.
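Selecting the new combination is a one-line change in debian/rules. A minimal sketch (assuming the usual dh sequencer and debhelper >= 11.2.1):

```make
#!/usr/bin/make -f
# Ask the dh_auto_* helpers to configure with CMake and to
# build/test with Ninja instead of the default cmake+makefile.
%:
	dh $@ --buildsystem=cmake+ninja
```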

Enjoy.



[1] Strictly speaking, it was version 11.2.  However, version 11.2 had a severe regression that made it mostly useless.

Lars Wirzenius: Storing passwords in cleartext: don't ever

Sat, 07 Apr 2018 07:48:53 +0000


This year I've implemented a rudimentary authentication server for work, called Qvisqve. I am in the process of also using it for my current hobby project, ick, which provides HTTP APIs and needs authentication. Qvisqve stores passwords using scrypt (source). It's not been audited, and I'm not claiming it to be perfect, but it's at least not storing passwords in cleartext. (If you find a problem, do email me and tell me.)
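For illustration (this is not Qvisqve's actual code, just a sketch of the same idea), the scrypt approach is available in Python's standard library: store a random salt plus the derived key, never the password itself, and compare in constant time when verifying.

```python
import hashlib
import hmac
import os

def hash_password(password: str, n=2**14, r=8, p=1):
    """Return (salt, key); only these are stored, never the password."""
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=n, r=r, p=p)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes, n=2**14, r=8, p=1):
    """Re-derive the key and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=n, r=r, p=p)
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("s3cret")
```

Even if the (salt, key) database leaks, an attacker has to brute-force each password through a deliberately slow, memory-hard function, which is the whole point of scrypt over plain hashing.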

This week, two news stories have reached me about service providers storing passwords in cleartext. One is a Finnish system for people starting a new business. The password database has leaked, with about 130,000 cleartext passwords. The other is about T-mobile in Austria bragging on Twitter that they store customer passwords in cleartext, and some people not liking that.

In both cases, representatives of the company claim it's OK, because they have "good security". I disagree. Storing passwords in cleartext is itself shockingly bad security, regardless of how good your other security measures are, and whether your password database leaks or not. Claiming it's ever OK to store user passwords in cleartext in a service is incompetence at best.

When you have large numbers of users, storing passwords in cleartext becomes more than just a small "oops". It becomes a security risk for all your users. It becomes gross incompetence.

A bank is required to keep their customers' money secure. They're not allowed to store their customers' cash in a suitcase on the pavement without anyone guarding it. Even with a guard, it'd be negligent, incompetent, to do that. The bulk of the money gets stored in a vault, with alarms, and guards, and the bank spends much effort on making sure the money is safe. Everyone understands this.

Similar requirements should be placed on those storing passwords, or other such security-sensitive information of their users.

Storing passwords in cleartext, when you have large numbers of users, should be criminal negligence, and should carry legally mandated sanctions. This should happen when the situation is realised, even if the passwords haven't leaked.

Louis-Philippe Véronneau: Debian & Stuff -- Montreal Debian Meeting

Sat, 07 Apr 2018 04:00:00 +0000


Today we had a meeting of the local Montreal Debian group. The last meetings we had centered on finishing the DebConf17 final report, and some people told us they didn't feel welcome because they weren't part of the organisation of the conference.

I thus decided to call today's event "Debian & Stuff" and invite people to come hack with us on diverse Debian related projects. Most of the people who came were part of the DC17 local team, but a few other people came anyway and we all had a great time. Someone even came from Ottawa to learn how to compile the Linux kernel!


We held the meeting at the Foulab hackerspace, one of the first hackerspaces in Canada. It's in the western part of Montreal, so I had never been there before, but I had heard great things about it.

The space is so cool we had a lot of trouble staying concentrated on the projects we were working on: they brew beer locally, they have a bunch of incredible hardware projects, they run a computer museum, their door is opened remotely using SSH keys ... I could go on and on. If I had more time, I would definitely become a member.

We managed to do a lot of work on the final report and if everything goes well, we should be able to finish the report during our next meeting.


Steinar H. Gunderson: kTLS in Cubemap

Fri, 06 Apr 2018 20:03:00 +0000

Cubemap, my video reflector, is getting TLS support. This isn't really because I think Cubemap video is very privacy-sensitive (although I suppose it does protect it against any meddling ISP middleboxes that would want to transcode the video), but putting non-TLS video on TLS pages is getting increasingly frowned upon by browsers—it used to provoke mixed content warnings, but now, it's usually just blocked outright.

This took longer than one would expect, since Cubemap prides itself on extremely high performance. (Even when it was written, five years ago, it could sustain multiple 10gig links on a single, old quadcore.) Cubemap is different from regular HTTP servers in that it doesn't really care about small requests; it doesn't do HLS or MPEG-DASH (although HLS support is also on its way!), just a single very long stream of video, so startup time doesn't matter at all. To that extent, it uses sendfile() (from a buffer file, usually on tmpfs or similar), which wasn't compatible with TLS… until now.

Linux >= 4.13 supports kTLS, where the kernel does the encryption and framing needed (after userspace has done the handshake and handed over the keys). This allows us to keep using sendfile(), and also benefit from the kernel's generally more efficient handling of segmented buffers, reducing the number of copies. Also, of course, the kernel would be able to use any encryption offloads efficiently, although I don't think it's actually doing so for kTLS yet.

The other problem is that Cubemap is designed to have extremely long-lived connections. Since it doesn't do segmented video (which is typically rather high-latency, and also tends to demand more of the TCP congestion control algorithms), clients can be connected for hours at a time, which makes restarts for upgrades trickier. Cubemap solves this by serializing all its state to a file, then exec()-ing the new binary and reloading the state, meaning no connections need to be broken; it stops serving video for mere milliseconds, and clients won't notice a thing. (It also deals with configuration changes this way, since restart is a strictly more powerful concept than configuration reload.)

However, most TLS libraries, or libraries in general, don't support serializing state. Salvation comes in the form of TLSe, a small TLS library that supported exactly such serialization (originally as a security feature to be able to separate the private keys out into another process, I believe). Eduard Suica, the TLSe author, was able to add kTLS support in almost no time at all, and after some bugfixing, it appears quite stable. I figured out fairly late that it can't serialize during the key exchange[...]

Michal Čihař: Weblate 2.20

Thu, 05 Apr 2018 14:45:42 +0000

Weblate 2.20 has been released today. There are several performance improvements, new features and bug fixes. Full list of changes:

  • Improved speed of cloning subversion repositories.
  • Changed repository locking to use third party library.
  • Added support for downloading only strings needing action.
  • Added support for searching in several languages at once.
  • New addon to configure Gettext output wrapping.
  • New addon to configure JSON formatting.
  • Added support for authentication in API using RFC 6750 compatible Bearer authentication.
  • Added support for automatic translation using machine translation services.
  • Added support for HTML markup in whiteboard messages.
  • Added support for mass changing state of strings.
  • Translate-toolkit at least 2.3.0 is now required, older versions are no longer supported.
  • Added built-in translation memory.
  • Added componentlists overview to dashboard and per component list overview pages.
  • Added support for DeepL machine translation service.
  • Machine translation results are now cached inside Weblate.
  • Added support for reordering committed changes.

If you are upgrading from an older version, please follow our upgrading instructions. You can find more information about Weblate on, the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on as official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects. Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure. Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues either by comments or by providing a bounty for them. Filed under: Debian English SUSE Weblate [...]
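RFC 6750 Bearer authentication simply means sending the API key in an Authorization header. A minimal sketch with Python's standard library (the instance URL and token here are hypothetical placeholders):

```python
from urllib.request import Request

API_URL = "https://weblate.example.org/api/projects/"  # hypothetical instance
TOKEN = "your-api-key"                                  # per-user key from Weblate

# Build the authenticated request; urlopen(req) would perform the GET,
# omitted here so the sketch doesn't depend on a live server.
req = Request(API_URL, headers={"Authorization": "Bearer " + TOKEN})
```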

Abhijith PA: FOSSASIA experience

Thu, 05 Apr 2018 06:54:00 +0000

Hello Everyone !

I was able to attend this year's FOSSASIA summit, held at the Lifelong Learning Institute, Singapore. It's a decent-sized, 4-day-long conference filled with lots of parallel tracks that cover vast areas, from fun tinkering things to big topics like blockchain and data mining. There were so many parallel tracks that I decided to attend fewer talks and meet more people around the venue. The atmosphere was very hacker friendly.


I spent most of my time at the Debian booth. People swung by the booth and talked about their experience with Debian. It was fun to meet them all. Prior to the conference I created a wiki page to coordinate the Debian booth at the exhibition, which really helped.
I met three Debian Developers - Chow Loong Jin (hyperair), Andrew Lee 李健秋 (ajqlee) and Héctor Orón Martínez (zumbi). Andrew Lee and zumbi also volunteered at the Debian booth from time to time, along with Balasankar ‘balu’ C (balasankarc). Hyperair was sitting at the HackerspaceSG booth, just two booths across from us.


All in all it was an amazing conference. I want to reach out to the organizers and thank them for FOSSASIA.


Singapore is a beautiful city with lots of tourists and tourist attractions. All places are well connected by the public transport system; people can reach every corner of Singapore with Metro trains and buses. You can find a huge variety of food in Singapore. Stalls serving light meals and restaurants are everywhere. On top of that, you can find stores like 7-Eleven where you can get instant noodles and similar stuff. Anish (a Debian contributor, and a friend of balu's and mine from Kerala, who now lives in Singapore) taught me how to use chopsticks :). I also brought home a pair of chopsticks as a souvenir; they came with my lunch.


Matthew Garrett: Linux kernel lockdown and UEFI Secure Boot

Thu, 05 Apr 2018 01:07:01 +0000

David Howells recently published the latest version of his kernel lockdown patchset. This is intended to strengthen the boundary between root and the kernel by imposing additional restrictions that prevent root from modifying the kernel at runtime. It's not the first feature of this sort - /dev/mem no longer allows you to overwrite arbitrary kernel memory, and you can configure the kernel so only signed modules can be loaded. But the present state of things is that these security features can be easily circumvented (by using kexec to modify the kernel security policy, for instance).

Why do you want lockdown? If you've got a setup where you know that your system is booting a trustworthy kernel (you're running a system that does cryptographic verification of its boot chain, or you built and installed the kernel yourself, for instance) then you can trust the kernel to keep secrets safe from even root. But if root is able to modify the running kernel, that guarantee goes away. As a result, it makes sense to extend the security policy from the boot environment up to the running kernel - it's really just an extension of configuring the kernel to require signed modules.

The patchset itself isn't hugely conceptually controversial, although there's disagreement over the precise form of certain restrictions. But one patch has proved controversial, because it associates whether or not lockdown is enabled with whether or not UEFI Secure Boot is enabled. There's some backstory that's important here.

Most kernel features get turned on or off by either build-time configuration or by passing arguments to the kernel at boot time. There are two ways that this patchset allows a bootloader to tell the kernel to enable lockdown mode - it can either pass the lockdown argument on the kernel command line, or it can set the secure_boot flag in the bootparams structure that's passed to the kernel. If you're running in an environment where you're able to verify the kernel before booting it (either through cryptographic validation of the kernel, or knowing that there's a secret tied to the TPM that will prevent the system booting if the kernel's been tampered with), you can turn on lockdown.

There's a catch on UEFI systems, though - you can build the kernel so that it looks like an EFI executable, and then run it directly from the firmware. The firmware doesn't know about Linux, so can't populate the bootparam structure, and there's no mechanism to enforce command lines so we can't rely on that either. The controversial patch simply adds a kernel configuration option that automatically enables lockdown when UEFI secure boot [...]

Benjamin Mako Hill: UW Stationery in LaTeX

Wed, 04 Apr 2018 18:53:10 +0000

The University of Washington’s brand page recently started publishing letterhead templates that departments and faculty can use for official communication. Unfortunately, they only provide them in Microsoft Word DOCX format. Because my research group works in TeX for everything, Sayamindu Dasgupta and I worked together to create a LaTeX version of the “Matrix Department Signature Template” (the DOCX file is available here). We figured other folks at UW might be interested in it as well. The best way to get the template is to clone it from git (git clone git:// If you notice issues or if you want to create branches with either of the other two types of official UW stationery, patches are always welcome (instructions on how to make and send patches are here)!

Because the template relies on two OpenType fonts, it requires XeTeX. A detailed list of the dependencies is provided in the README file. We’ve only run it on GNU/Linux (Debian and Arch) but it should work well on any operating system that can run XeTeX, as well as web-based TeX systems like ShareLaTeX. And although we created the template, keep in mind that we don’t manage UW’s brand identity in any way. If you have any questions or concerns about if and when you should use the letterhead, you should contact brand and creative services with the contact information on the stationery page. [...]
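The XeTeX requirement comes from loading OpenType fonts through fontspec. Stripped of the letterhead specifics, the relevant preamble looks something like this (the font names below are placeholders, not the fonts the template actually uses):

```latex
% Compile with xelatex (or lualatex); pdflatex cannot load OpenType fonts.
\documentclass{letter}
\usepackage{fontspec}
\setmainfont{Some Serif Font}   % placeholder for the brand serif
\setsansfont{Some Sans Font}    % placeholder for the brand sans
\begin{document}
\begin{letter}{Recipient Name \\ Recipient Address}
\opening{Dear colleague,}
Body text set in the brand fonts.
\closing{Sincerely,}
\end{letter}
\end{document}
```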

John Goerzen: Emacs #5: Documents and Presentations with org-mode

Wed, 04 Apr 2018 16:14:59 +0000

The Emacs series: This is fifth in a series on Emacs and org-mode. This blog post was generated from an org-mode source and is available as: a blog page, slides (PDF format), and a PDF document.

1 About org-mode exporting

1.1 Background

org-mode isn't just an agenda-making program. It can also export to lots of formats: LaTeX, PDF, Beamer, iCalendar (agendas), HTML, Markdown, ODT, plain text, man pages, and more complicated formats such as a set of web pages. This isn't just some afterthought either; it's a core part of the system and integrates very well. One file can be source code, automatically-generated output, task list, documentation, and presentation, all at once. Some use org-mode as their preferred markup format, even for things like LaTeX documents. The org-mode manual has an extensive section on exporting.

1.2 Getting started

From any org-mode document, just hit C-c C-e. From there will come up a menu, letting you choose various export formats and options. These are generally single-key options so it's easy to set and execute. For instance, to export a document to a PDF, use C-c C-e l p or for HTML export, C-c C-e h h. There are lots of settings available for all of these export options; see the manual. It is, in fact, quite possible to use LaTeX-format equations in both LaTeX and HTML modes, to insert arbitrary preambles and settings for different modes, etc.

1.3 Add-on packages

ELPA contains many additional exporters for org-mode as well. Check there for details.

2 Beamer slides with org-mode

2.1 About Beamer

Beamer is a LaTeX environment for making presentations. Its features include:

  • Automated generation of structural elements in the presentation (see, for example, the Marburg theme). This provides a visual reference for the audience of where they are in the presentation.
  • Strong help for structuring the presentation
  • Themes
  • Full LaTeX available

2.2 Benefits of Beamer in org-mode

org-mode has a lot of benefits for working with Beamer. Among them:

  • org-mode's very easy and strong support for visualizing and changing the structure makes it very quick to reorganize your material.
  • Combined with org-babel, live source code (with syntax highlighting) and results can be embedded.
  • The syntax is often easier to work with.

I have completely replaced my usage of LibreOffice/Powerpoint/GoogleDocs with org-mode and beamer. It is, in fact, rather frustrating when I have to use one of those tools, as they are nowhere near as strong as org-mode fo[...]
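As a concrete illustration (a generic minimal header, not taken from John's post), a few keywords at the top of an org file are enough to make it Beamer-exportable, after which the export menu's LaTeX/Beamer entries produce the slides:

```org
#+TITLE: My Talk
#+AUTHOR: Jane Hacker
#+OPTIONS: H:2 toc:t
#+LATEX_CLASS: beamer
#+BEAMER_THEME: Marburg

* First section
** A slide
   - With H:2, each second-level heading becomes a frame
   - Plain org lists become Beamer itemize environments
```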

Alexander Reichle-Schmehl: Automatic OTRS ticket creation for Debian package updates

Wed, 04 Apr 2018 15:00:19 +0000

I recently stumbled over the problem that I have to create specific tickets in our OTRS system for package upgrades on our Debian servers. Background is that we use OTRS for our ITIL change management processes. To be more precise: the actual problem isn't to create the tickets - there are plenty of tools to do that - but to create them in a way I found useful.

apticron not only sends a mail for pending package upgrades (and downloads them if you want), it also calls apt-listchanges, which will show you the changelog entries of the packages you are about to install. So you see not only that you have to install upgrades, but also why. However, I didn't find a way to change the mail, or to add a specific header to the mail. Which would have been a big plus - as you can remote control OTRS via e-mail headers quite a lot. And as I am lazy, that is something I definitely wanted to have. Same for unattended-upgrades. Nice tool, but it doesn't allow changing the mail content / headers. (At least I didn't find a way to do so.)

I was pleasantly surprised that cron-apt not only allows adding headers, it also lists some examples for OTRS headers in its documentation! However, by default it only lists (and downloads) packages to be upgraded. No changelogs. There is an open wishlist bug to get this feature added, but considering the age of the bug, I wouldn't hold my breath till it is implemented ;)

There is a solution for this problem, though it is a bit ugly. And as I'm apparently not the only one interested in it, I'll write it down here (partly because I'm interested to find out if my blog is still working after quite some years of inactivity). The basic idea is to call apt-listchanges on all deb files in /var/cache/apt/archives/. As there might be some cruft lying around, you'll have to run apt-get clean before that. As we have a proxy and enough bandwidth, that is acceptable in our case. First you'll have to install cron-apt and apt-listchanges.
Add a file /etc/cron-apt/action.d/1-clean containing: clean. This will cause cron-apt to call apt-get clean on each invocation, thus cleaning all files in /var/cache/apt/archives. Next create a file /etc/cron-apt/action.d/4-listchanges containing the line: /var/cache/apt/archives/*.deb and a file /etc/cron-apt/config.d/4-listchanges containing the lines: APTCOMMAND=/usr/bin/apt-listchanges OPTIONS="" Finally we have to configure cron-apt to actually mail our stuff. Thus create /etc/cron-apt/config similar [...]
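The file layout described above can be sketched as a few shell commands. This version writes to a scratch directory instead of /etc so it can be tried without root; point ROOT at /etc/cron-apt on a real system.

```shell
# Sketch of the cron-apt file layout described in the post.
ROOT="${ROOT:-/tmp/cron-apt-demo}"
mkdir -p "$ROOT/action.d" "$ROOT/config.d"

# 1-clean: make cron-apt run "apt-get clean" first,
# emptying /var/cache/apt/archives of stale cruft.
printf 'clean\n' > "$ROOT/action.d/1-clean"

# 4-listchanges: the "action" is simply the freshly downloaded .deb files...
printf '/var/cache/apt/archives/*.deb\n' > "$ROOT/action.d/4-listchanges"

# ...and its config swaps the apt command for apt-listchanges,
# so the changelog entries end up in cron-apt's mail.
cat > "$ROOT/config.d/4-listchanges" <<'EOF'
APTCOMMAND=/usr/bin/apt-listchanges
OPTIONS=""
EOF
```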

Lucy Wayland: The Big Decision

Wed, 04 Apr 2018 13:53:08 +0000

I care for you all
I love you all
In so many different ways
I made an oath
And I live by the first law
Thou shall harm none
I swore to the triple Goddesses
That I would not take my life
By my own hand
Today I took part of my life away
It was necessary
It had to be done
But part of me died today
Please say goodbye
To the Lucy you knew and loved
She will never be the same

Raphaël Hertzog: My Free Software Activities in March 2018

Wed, 04 Apr 2018 10:22:51 +0000

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

I reviewed and merged 14 merge requests from multiple contributors:

  • Add unit tests to team-related views (Arthur Del Esposte)
  • Display component (main/contrib/non-free) of source package (Chirath R)
  • Add debci link in links panel (Lucas Kanashiro)
  • Use proper plural form depending on the number of commits since last upload (James Clarke)
  • Support next parameter in login url to redirect after login (Chirath R)
  • Display transitive reverse dependencies in autoremoval action item (Lucas Kanashiro)
  • Fail gracefully when adding the same email twice into a team (Arthur Del Esposte)
  • Switch handling of britney’s excuses to use its YAML file instead of parsing the raw HTML (Pierre-Elliott Bécue and Christophe Siraut)
  • Use friendlier news URLs that include the title (Arthur Del Esposte)
  • Accept trailing slash on news URL (Arthur Del Esposte)
  • Improve description of autoremoval action items by adding links to buggy dependencies (Arthur Del Esposte)
  • Refactoring: rename PackageExtractedInfos into PackageData (Pierre-Elliott Bécue)
  • Fix regression in UpdatesExcusesTask (Pierre-Elliott Bécue)
  • Add missing version to some long description of autoremovals action items (Pierre-Elliott Bécue)

On top of this, I updated the Salsa/AliothMigration wiki page with information about how to best leverage when you migrate to salsa.
I also filed a few issues for bugs or things that I’d like to see improved:

  • A few thoughts on how to redesign the “Task” mechanism
  • Failure in /accounts/confirm/*token* due to multiple authentication backends
  • Regression in UpdateExcusesTask (that got quickly fixed by Pierre-Elliott Bécue, see above)

I also gave my feedback about multiple mockups prepared by Chirath R in preparation of his Google Summer of Code project proposal.

Security Tools Packaging Team

Following the departure of alioth, the new list that we requested on has been created: I updated (in the git repositories) all the Vcs-* and all the Maintainer fields of the packages maintained by the team. I prepared and uploaded afflib 3.7.16-3 to fix RC bug #892599. I spons[...]