Planet Debian



Christoph Egger: Installing a python systemd service?

Wed, 26 Oct 2016 11:16:26 +0000

As web search engines and IRC seem to be of no help, maybe someone here has a helpful idea. I have some service written in python that comes with a .service file for systemd. I now want to build&install a working service file from the software's setup.py. I can override the build/build_py commands of setuptools, however that way I still lack knowledge wrt. the bindir/prefix where my service script will be installed.

Solution: Turns out, if you override the install command (not the install_data!), you will have self.root and self.install_scripts (and lots of other self.install_*). As a result, you can read the template and write the desired output file after calling super's run method. The fix was inspired by GateOne (which, however, doesn't get the --root parameter right; you need to strip self.root from the beginning of the path to actually make that work as intended).

import os
from setuptools.command.install import install

class myinstall(install):
    _servicefiles = [
        'foo/bar.service',
    ]

    def run(self):
        # run the regular install first so install_scripts etc. are set up
        install.run(self)
        if not self.dry_run:
            bindir = self.install_scripts
            if bindir.startswith(self.root):
                bindir = bindir[len(self.root):]
            systemddir = os.path.join(self.root, "lib/systemd/system")
            for servicefile in self._servicefiles:
                service = os.path.split(servicefile)[1]
                self.announce("Creating %s" % os.path.join(systemddir, service), level=2)
                with open(servicefile) as servicefd:
                    servicedata = servicefd.read()
                with open(os.path.join(systemddir, service), "w") as servicefd:
                    servicefd.write(servicedata.replace("%BINDIR%", bindir))

Comments, suggestions and improvements, of course, welcome! [...]
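For completeness (this wiring is not spelled out in the post, and the project name "foo" is a placeholder), a command class like the one above is typically hooked into setuptools via the cmdclass argument of setup(), roughly:

from setuptools import setup

# "foo" and the package layout are hypothetical; the key point is cmdclass,
# which tells setuptools to use the overridden install command defined above
setup(
    name="foo",
    version="0.1",
    packages=["foo"],
    cmdclass={"install": myinstall},
)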

Steinar H. Gunderson: Why does software development take so long?

Wed, 26 Oct 2016 07:30:21 +0000

Nageru 1.4.0 is out (and on its way through the Debian upload process right now), so now you can do live video mixing with multichannel audio to your heart's content. I've already blogged about most of the interesting new features, so instead, I'm trying to answer a question: What took so long? To be clear, I'm not saying 1.4.0 took more time than I really anticipated (on the contrary, I pretty much understood the scope from the beginning, and there was a reason why I didn't go for building this stuff into 1.0.0); but if you just look at the changelog from the outside, it's not immediately obvious why “multichannel audio support” should take the better part of three months of development. What I'm going to say is of course going to be obvious to most software developers, but not everyone is one, and perhaps my experiences will be illuminating.

Let's first look at some obvious things that aren't the case: First of all, development is not primarily limited by typing speed. There are about 9,000 lines of new code in 1.4.0 (depending a bit on how you count), and if it were just about typing them in, I would be done in a day or two. On a good keyboard, I can type plain text at more than 800 characters per minute—but you hardly ever write code for even a single minute at that speed. Just as when writing a novel, most time is spent thinking, not typing. I also didn't spend a lot of time backtracking; most code I wrote actually ended up in the finished product as opposed to being thrown away. (I'm not as lucky in all of my projects.) It's pretty common to do so if you're in an exploratory phase, but in this case, I had a pretty good idea of what I wanted to do right from the start, and that plan seemed to work. This wasn't a difficult project per se; it just needed to be done (which, in a sense, just increases the mystery).

However, even if this isn't at the forefront of science in any way (most code in the world is pretty pedestrian, after all), there are still a lot of decisions to make, on several levels of abstraction. And a lot of those decisions depend on information gathering beforehand. Let's take a look at an example from late in the development cycle, namely support for using MIDI controllers instead of the mouse to control the various widgets. I've kept a pretty meticulous TODO list; it's just a text file on my laptop, but it serves the purpose of a ghetto bugtracker. For 1.4.0, it contains 83 work items (a single-digit number is not ticked off, mostly because I decided not to do those things), which corresponds roughly 1:2 to the number of commits. So let's have a look at what the ~20 MIDI controller items went into.

First of all, to allow MIDI controllers to influence the UI, we need a way of getting to it. Since Nageru is single-platform on Linux, ALSA is the obvious choice (if not, I'd probably have to look for a library to put in-between), but seemingly, ALSA has two interfaces (raw MIDI and sequencer). Which one do you want? It sounds like raw MIDI is what we want, but actually, it's the sequencer interface (it does more of the MIDI parsing for you, and generally is friendlier). The first question is where to start picking events from. I went the simplest path and just said I wanted all events—anything else would necessitate a UI, a command-line flag, figuring out if we wanted to distinguish between different devices with the same name (and not all devices potentially even have names), and so on. But how do you enumerate devices? (Relatively simple, thankfully.) 
What do you do if the user inserts a new one while Nageru is running? (Turns out there's a special device you can subscribe to that will tell you about new devices.) What if you get an error on subscription? (Just print a warning and ignore it; it's legitimate not to have access to all devices on the system. By the way, for PCM devices, all of these answers are different.) So now we have a sequencer device, how do we get events from it? Can we do it in the main loop? Turns out it probably doesn't integrate too w[...]

Daniel Pocock: FOSDEM 2017 Real-Time Communications Call for Participation

Wed, 26 Oct 2016 06:39:00 +0000

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium. This email contains information about: Real-Time communications dev-room and lounge, speaking opportunities, volunteering in the dev-room and lounge, related events around FOSDEM, including the XMPP summit, social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities), the Planet aggregation sites for RTC blogs Call for participation - Real Time Communications (RTC) The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge. The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days. To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list. To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list. Speaking opportunities Note: if you used FOSDEM Pentabarf before, please use the same account/username Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom". Link to talk submission. Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. It is encouraged to apply to more than one dev-room and also consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form. You can find the full list of dev-rooms on this page and apply for a lightning talk at Main track: the deadline for main track presentations is 23:59 UTC 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track. First-time speaking? FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it. Submission guidelines The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one. In the "Submission notes", please tell us about: the purpose of your talk any other talk applications (dev-rooms, lightning talks, main track) availability constraints and special needs You can use HTML and links in your bio, abstract and description. If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work. We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics. Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate. 
Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming; volunteers are needed to assist with this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 4 February
  • participation in the Real-Time lounge
  • helping att[...]

Joachim Breitner: Showcasing Applicative

Wed, 26 Oct 2016 04:00:00 +0000

My plan for this week’s lecture of the CIS 194 Haskell course at the University of Pennsylvania is to dwell a bit on the concept of Functor, Applicative and Monad, and to highlight the value of the Applicative abstraction. I quite like the example that I came up with, so I want to share it here. In the interest of long-term archival and stand-alone presentation, I include all the material in this post.1

Imports

In case you want to follow along, start with these imports:

import Data.Char
import Data.Maybe
import Data.List
import System.Environment
import System.IO
import System.Exit

The parser

The starting point for this exercise is a fairly standard parser-combinator monad, which happens to be the result of the student’s homework from last week:

newtype Parser a = P (String -> Maybe (a, String))

runParser :: Parser t -> String -> Maybe (t, String)
runParser (P p) = p

parse :: Parser a -> String -> Maybe a
parse p input = case runParser p input of
    Just (result, "") -> Just result
    _ -> Nothing -- handles both no result and leftover input

noParserP :: Parser a
noParserP = P (\_ -> Nothing)

pureParserP :: a -> Parser a
pureParserP x = P (\input -> Just (x,input))

instance Functor Parser where
    fmap f p = P $ \input -> do
        (x, rest) <- runParser p input
        return (f x, rest)

instance Applicative Parser where
    pure = pureParserP
    p1 <*> p2 = P $ \input -> do
        (f, rest1) <- runParser p1 input
        (x, rest2) <- runParser p2 rest1
        return (f x, rest2)

instance Monad Parser where
    return = pure
    p1 >>= k = P $ \input -> do
        (x, rest1) <- runParser p1 input
        runParser (k x) rest1

anyCharP :: Parser Char
anyCharP = P $ \input -> case input of
    (c:rest) -> Just (c, rest)
    []       -> Nothing

charP :: Char -> Parser ()
charP c = do
    c' <- anyCharP
    if c == c' then return ()
               else noParserP

anyCharButP :: Char -> Parser Char
anyCharButP c = do
    c' <- anyCharP
    if c /= c' then return c'
               else noParserP

letterOrDigitP :: Parser Char
letterOrDigitP = do
    c <- anyCharP
    if isAlphaNum c then return c else noParserP

orElseP :: Parser a -> Parser a -> Parser a
orElseP p1 p2 = P $ \input -> case runParser p1 input of
    Just r -> Just r
    Nothing -> runParser p2 input

manyP :: Parser a -> Parser [a]
manyP p = (pure (:) <*> p <*> manyP p) `orElseP` pure []

many1P :: Parser a -> Parser [a]
many1P p = pure (:) <*> p <*> manyP p

sepByP :: Parser a -> Parser () -> Parser [a]
sepByP p1 p2 = (pure (:) <*> p1 <*> (manyP (p2 *> p1))) `orElseP` pure []

A parser using this library for, for example, CSV files could take this form:

parseCSVP :: Parser [[String]]
parseCSVP = manyP parseLine
  where
    parseLine = parseCell `sepByP` charP ',' <* charP '\n'
    parseCell = do
        charP '"'
        content <- manyP (anyCharButP '"')
        charP '"'
        return content

We want EBNF

Often when we write a parser for a file format, we might also want to have a formal specification of the format. A common form for such a specification is EBNF. This might look as follows, for a CSV file:

cell = '"', {not-quote}, '"';
line = (cell, {',', cell} | ''), newline;
csv  = {line};

It is straightforward to create a Haskell data type to represent an EBNF syntax description. 
Here is a simple EBNF library (data type and pretty-printer) for your convenience:

data RHS
    = Terminal String
    | NonTerminal String
    | Choice RHS RHS
    | Sequence RHS RHS
    | Optional RHS
    | Repetition RHS
    deriving (Show, Eq)

ppRHS :: RHS -> String
ppRHS = go 0
  where
    go _ (Terminal s)     = surround "'" "'" $ concatMap quote s
    go _ (NonTerminal s)  = s
    go a (Choice x1 x2)   = p a 1 $ go 1 x1 ++ " | " ++ go 1 x2
    go a (Sequence x1 x2) = p a 2 $ go 2 x1 ++ ", " ++ go 2 x2
    go _ (Optional x)     = surr[...]

Dirk Eddelbuettel: Rblpapi 0.3.5

Wed, 26 Oct 2016 02:14:00 +0000


A new release of Rblpapi is now on CRAN. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the sixth release since the package first appeared on CRAN last year. This release brings new functionality via a new function (getPortfolio()) and an extended function (getTicks()), as well as several fixes:

Changes in Rblpapi version 0.3.5 (2016-10-25)

  • Add new function getPortfolio to retrieve portfolio data via bds (John in #176)

  • Extend getTicks() to (optionally) return non-numeric data as part of data.frame or data.table (Dirk in #200)

  • Similarly extend getMultipleTicks (Dirk in #202)

  • Correct statement on timestamp for getBars (Closes issue #192)

  • Minor edits to a few files in order to either please R(-devel) CMD check --as-cran, or update documentation

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Laura Arjona Reina: Rankings, Condorcet and free software: Calculating the results for the Stretch Artwork Survey

Tue, 25 Oct 2016 20:11:53 +0000

We had 12 candidates for the Debian Stretch Artwork, and a survey was set up to allow people to vote for the one they prefer. The survey was run in my LimeSurvey instance; LimeSurvey is a nice piece of free software with a lot of features. It provides a “Ranking” question type, and it was very easy to allow people to “vote” in the Debian style (Debian uses the Condorcet method in its elections). However, although LimeSurvey offers statistics and even graphics to show the results of many types of questions, its output for the Ranking type is not useful, so I had to export the data and use another tool to find the winner.

Export the data from LimeSurvey

I’ve created a read-only user to visit the survey site. With this visitor you can explore the survey questionnaire, its results, and export the data. URL: Username: stretch Password: artwork

First attempt, the quick and easy (and nonfree, I guess)

There is an online tool to calculate the Condorcet winner. The steps I followed to feed the tool with the data from LimeSurvey were these:

1.- Went to the admin interface of LimeSurvey, selected the Stretch artwork survey, responses and statistics, export results to application.
2.- Selected “Completed responses only”, “Question codes”, “Answer codes”, and exported to CSV (results_stretch1.csv).
3.- Opened the CSV with LibreOffice Calc, and removed these columns: id, submitdate, lastpage, startlanguage.
4.- Removed the first row containing the headers and saved the result (results_stretch2.csv).
5.- In the command line: sort results_stretch2.csv | uniq -c > results_stretch3.csv
6.- Opened results_stretch3.csv with LibreOffice Calc, with “merge delimiters” enabled when importing.
7.- Removed the first column (blank), added a column between the numbers and the first ranked option, and filled that column with the “:” value. Saved (results_stretch4.csv).
8.- Opened results_stretch4.csv with my preferred editor, searched and replaced “,:,” with “:”, and after that searched and replaced “,” with “>”. Saved the result (results_stretch5.csv).
9.- Went to the online tool, selected Condorcet basic, “tell me some things”, and pasted the contents of results_stretch5.csv there.

The results are in results_stretch1.html.

But where is the source code of this Condorcet tool? I couldn’t find the source code (nor license) of the solver by Eric Gorr. The tool is mentioned in a page where other tools are listed, and when a tool is libre software that is noted, but not in this case. There, I found another tool, VoteEngine, which is open source, so I tried that.

Second attempt: VoteEngine, a Free Open Source Software tool made with Python

I used a modification of voteengine-0.99 (the original zip is available upstream), with a diff of the changes I made (basically, Numeric -> numpy and Int -> int, in order for it to work in Debian stable). Steps 1 to 4 are the same as in the first attempt.

5.- Sorted the 12 different options to vote alphabetically, and assigned a letter to each one (saved the assignments in a file called stretch_key.txt).
6.- Opened results_stretch2.csv with my favorite editor, and searched and replaced the names of the different options with their corresponding letters from the stretch_key.txt file. Searched and replaced “,” with “ ” (space). Then saved the results into the file results_stretch3_voteengine.txt.
7.- Copied the input.txt file from voteengine-0.99 into stretch.txt and edited the options to our needs. 
Pasted the contents of results_stretch3_voteengine.txt at the end of stretch.txt.
8.- In the command line, ran voteengine with stretch.txt as input, producing winner.txt (winner.txt contains the results for the Condorcet method).
9.- I edited again stretch.txt [...]
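As a side note from this editor (not part of Laura's workflow or tooling): the heart of a Condorcet count is just a pairwise-preference tally over the ranked ballots. A minimal Python sketch of that idea, using made-up ballots rather than the real survey data, could look like this:

from itertools import combinations

# hypothetical ranked ballots, most preferred first (not the actual survey data)
ballots = [
    ["softWaves", "Lines", "otherTheme"],
    ["Lines", "softWaves", "otherTheme"],
    ["softWaves", "otherTheme", "Lines"],
]

options = sorted({option for ballot in ballots for option in ballot})
wins = {(a, b): 0 for a in options for b in options if a != b}

# in each ballot, every earlier option is preferred over every later one
for ballot in ballots:
    for preferred, other in combinations(ballot, 2):
        wins[(preferred, other)] += 1

# a Condorcet winner beats every other option in pairwise comparison
for candidate in options:
    if all(wins[(candidate, other)] > wins[(other, candidate)]
           for other in options if other != candidate):
        print("Condorcet winner:", candidate)

Tools like VoteEngine add the tie-breaking methods needed when no such candidate exists, which is where the real complexity lives.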

Jose M. Calhariz: New packages for Amanda on the works

Tue, 25 Oct 2016 18:41:00 +0000

Because of the Perl upgrade, Amanda is currently broken in Debian testing and unstable. The problem is known, and I am working with my sponsor to create new packages to solve it. Please hang on a little longer.

Bits from Debian: "softWaves" will be the default theme for Debian 9

Tue, 25 Oct 2016 17:50:00 +0000

The theme "softWaves" by Juliette Taka Belin has been selected as default theme for Debian 9 'stretch'.


After the Debian Desktop Team made the call for proposing themes, a total of twelve choices were submitted, and every Debian contributor had the opportunity to vote on them in a survey. We received 3,479 responses ranking the different choices, and softWaves was the winner among them.

We'd like to thank all the designers who participated, providing nice wallpapers and artwork for Debian 9, and encourage everybody interested in this area of Debian to join the Design Team. Packaging all of them so they are easily available in Debian is being considered. If you want to help in this effort, or package any other artwork (for example, artwork particularly designed to be accessibility-friendly), please contact the Debian Desktop Team, but hurry up, because the freeze for new packages in the next release of Debian starts on January 5th, 2017.

This is the second time that Debian ships a theme by Juliette Belin, who also created the theme "Lines" that enhances our current stable release, Debian 8. Congratulations, Juliette, and thank you very much for your continued commitment to Debian!

Julian Andres Klode: Introducing DNS66, a host blocker for Android

Tue, 25 Oct 2016 16:20:23 +0000


I’m proud (yes, really) to announce DNS66, my host/ad blocker for Android 5.0 and newer. It’s been around since last Thursday on F-Droid, but it never really got a formal announcement.

DNS66 creates a local VPN service on your Android device, and diverts all DNS traffic to it, possibly adding new DNS servers you can configure in its UI. It can use hosts files for blocking whole sets of hosts or you can just give it a domain name to block (or multiple hosts files/hosts). You can also whitelist individual hosts or entire files by adding them to the end of the list. When a host name is looked up, the query goes to the VPN which looks at the packet and responds with NXDOMAIN (non-existing domain) for hosts that are blocked.

You can find DNS66 here:

F-Droid is the recommended source to install from. DNS66 is licensed under the GNU GPL 3, or (mostly) any later version.

Implementation Notes

DNS66’s core logic is based on another project,  dbrodie/AdBuster, which arguably has the cooler name. I translated that from Kotlin to Java, and cleaned up the implementation a bit:

All work is done in a single thread by using poll() to detect when to read/write stuff. Each DNS request is sent via a new UDP socket, and poll() polls over all UDP sockets, a Device Socket (for the VPN’s tun device) and a pipe (so we can interrupt the poll at any time by closing the pipe).
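As an aside, the single-threaded poll()-plus-pipe pattern described above is not Android-specific; the following is my own simplified Python sketch of the same structure (one UDP socket standing in for the per-request DNS sockets), not DNS66's actual Java code:

import os
import select
import socket

# one UDP socket standing in for the per-request DNS sockets
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
sock.setblocking(False)

# a pipe lets another thread interrupt the loop by closing the write end
interrupt_r, interrupt_w = os.pipe()

poller = select.poll()
poller.register(sock.fileno(), select.POLLIN)
poller.register(interrupt_r, select.POLLIN | select.POLLHUP)

running = True
while running:
    for fd, event in poller.poll():           # blocks until something is readable
        if fd == interrupt_r:                 # pipe written to or closed: shut down
            running = False
        elif fd == sock.fileno():
            data, addr = sock.recvfrom(4096)  # a DNS reply would be parsed here
            print("got %d bytes from %s" % (len(data), addr))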

We literally redirect your DNS servers: all traffic to your configured DNS server addresses is routed to the VPN. The VPN only understands DNS traffic, though, so you might have trouble if your DNS server also happens to serve something else. I plan to change that at some point to emulate multiple DNS servers with fake IPs, but this was a first step to get it working with fallback: Android can now transparently fall back to other DNS servers without having to be aware that they are routed via the VPN.

We also need to deal with timing out queries that we received no answer for: DNS66 stores the query into a LinkedHashMap and overrides the removeEldestEntry() method to remove the eldest entry if it is older than 10 seconds or there are more than 1024 pending queries. This means that it only times out up to one request per new request, but it eventually cleans up fine.
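The Python equivalent of that LinkedHashMap trick, shown here purely to illustrate the eviction rule (this is my own sketch, not DNS66's Java implementation), would be something like:

import time
from collections import OrderedDict

MAX_PENDING = 1024
TIMEOUT_SECONDS = 10

pending = OrderedDict()   # query id -> (timestamp, query), oldest entry first

def add_query(query_id, query):
    """Record a new in-flight query, evicting at most one stale entry."""
    pending[query_id] = (time.time(), query)
    # mirror removeEldestEntry(): drop the eldest entry if it is too old
    # or if there are too many pending queries
    eldest_id, (eldest_ts, _) = next(iter(pending.items()))
    if len(pending) > MAX_PENDING or time.time() - eldest_ts > TIMEOUT_SECONDS:
        pending.pop(eldest_id)

Evicting at most one entry per insertion is what keeps the bookkeeping cheap while still guaranteeing the table eventually cleans itself up.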


Filed under: Android, Uncategorized

Michal Čihař: New features on Hosted Weblate

Tue, 25 Oct 2016 16:00:20 +0000


Today, a new version has been deployed on Hosted Weblate. It brings many long-requested features and enhancements.

Adding a project to your watched projects got much simpler; you can now do it on the project page using the Watch button:


Another feature project admins will like is that they can now change project metadata without contacting me. This works at both the project and the component level:


And to add something fancy, there is a new badge showing the status of translations into all languages. This is how it looks for Weblate itself:


As you can see, it can get pretty big for projects with many translations, but it gives you a complete picture of the translation status.

You can find all these features in the upcoming Weblate 2.9, which should be released next week. The complete list of changes in Weblate 2.9 is described in our documentation.

Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments

Jaldhar Vyas: Aaargh gcc 5.x You Suck

Tue, 25 Oct 2016 06:45:53 +0000


Aaargh gcc 5.x You Suck

I had to write a quick program today which is going to be run many thousands of times a day, so it has to run fast. I decided to do it in c++ instead of the usual perl or javascript because it seemed appropriate and I've been playing around a lot with c++ lately, trying to update my knowledge of its modern features. So 200 LOC later I was almost done, so I ran the program through valgrind, a good habit I've been trying to instill. That's when I got a reminder of why I avoid c++.

==37698== HEAP SUMMARY:
==37698==     in use at exit: 72,704 bytes in 1 blocks
==37698==   total heap usage: 5 allocs, 4 frees, 84,655 bytes allocated
==37698== LEAK SUMMARY:
==37698==    definitely lost: 0 bytes in 0 blocks
==37698==    indirectly lost: 0 bytes in 0 blocks
==37698==      possibly lost: 0 bytes in 0 blocks
==37698==    still reachable: 72,704 bytes in 1 blocks
==37698==         suppressed: 0 bytes in 0 blocks

One of the things I've learnt, which I've been trying to apply more rigorously, is to avoid manual memory management (new/delete) as much as possible in favor of modern c++ features such as std::unique_ptr etc. By my estimation there should only be three places in my code where memory is allocated and none of them should leak. Where do the others come from? And why is there a missing free (or delete)? Now the good news is that valgrind is saying that the memory is not technically leaking. It is still reachable at exit, but that's ok because the OS will reclaim it. But this program will run a lot and I think it could still lead to problems over time such as memory fragmentation, so I wanted to understand what was going on. Not to mention the bad aesthetics of it.

My first assumption (one which has served me well over the years) was to assume that I had screwed up somewhere. Or perhaps it could be some behind-the-scenes compiler magic. It turned out to be the latter -- sort of, as I found out only after two hours of jiggling code in different ways and googling for clues. That's when I found this Stack Overflow question which suggests that it is either a valgrind or compiler bug. The answer specifically mentions gcc 5.1. I was using Ubuntu LTS, which has gcc 5.4, so I have just gone ahead and assumed all 5.x versions of gcc have this problem. Sure enough, compiling the same program on Debian stable, which has gcc 4.9, gave this...

==6045== HEAP SUMMARY:
==6045==     in use at exit: 0 bytes in 0 blocks
==6045==   total heap usage: 3 allocs, 3 frees, 10,967 bytes allocated
==6045== All heap blocks were freed -- no leaks are possible

...Much better. The executable was substantially smaller too. The time was not a total loss however. I learned that valgrind is pronounced val-grinned (it's from Norse mythology.) not val-grind as I had thought. So I have that going for me which is nice.

Russ Allbery: Review: Lord of Emperors

Tue, 25 Oct 2016 04:04:00 +0000

Review: Lord of Emperors, by Guy Gavriel Kay Series: Sarantine Mosaic #2 Publisher: Eos Copyright: 2000 Printing: February 2001 ISBN: 0-06-102002-8 Format: Mass market Pages: 560 Lord of Emperors is the second half of a work that began with Sailing to Sarantium and is best thought of as a single book split for publishing reasons. You want to read the two together and in order. As is typical for this sort of two-part work, it's difficult to review the second half without spoilers. I'll be more vague about the plot and the characters than normal, and will mark one bit that's arguably a bit of a spoiler (although I don't think it would affect the enjoyment of the book). At the end of Sailing to Sarantium, we left Crispin in the great city, oddly and surprisingly entangled with some frighteningly powerful people and some more mundane ones (insofar as anyone is mundane in a Guy Gavriel Kay novel, but more on that in a bit). The opening of Lord of Emperors takes a break from the city to introduce a new people, the Bassanids, and a new character, Rustem of Karakek. While Crispin is still the heart of this story, the thread that binds the entirety of the Sarantine Mosaic together, Rustem is the primary protagonist for much of this book. I had somehow forgotten him completely since my first read of this series many years ago. I have no idea how. I mentioned in my review of the previous book that one of the joys of reading this series is competence porn: watching the work of someone who is extremely good at what they do, and experiencing vicariously some of the passion and satisfaction they have for their work. Kay's handling of Crispin's mosaics is still the highlight of the series for me, but Rustem's medical practice (and Strumosus, and the chariot races) comes close. Rustem is a brilliant doctor by the standards of the time, utterly frustrated with the incompetence of the Sarantine doctors, but also weaving his own culture's belief in omens and portents into his actions. He's more reserved, more laconic than Crispin, but is another character with focused expertise and a deep internal sense of honor, swept unexpectedly into broader affairs and attempting to navigate them by doing the right thing in each moment. Kay fills this book with people like that, and it's compelling reading. Rustem's entrance into the city accidentally sets off a complex chain of events that draws together all of the major characters of Sailing to Sarantium and adds a few more. The stakes are no less than war and control of major empires, and here Kay departs firmly from recorded history into his own creation. I had mentioned in the previous review that Justinian and Theodora are the clear inspirations for this story; that remains true, and many other characters are easy to map, but don't expect history to go here the way that it did in our world. Kay's version diverges significantly, and dramatically. But one of the things I love the most about this book is its focus on the individual acts of courage, empathy, and ethics of each of the characters, even when those acts do not change the course of empires. The palace intrigue happens, and is important, but the individual acts of Kay's large cast get just as much epic narrative attention even if they would never appear in a history book. The most globally significant moment of the book is not the most stirring; that happens slightly earlier, in a chariot race destined to be forgotten by history. 
And the most touching moment of the book is a moment of connection between two people who would never appear in history, over the life of a third, that matters so much to the reader only [...]

Gunnar Wolf: On the results of vote "gr_private2"

Tue, 25 Oct 2016 01:46:48 +0000


Given that I started the GR process, and that I called for discussion and votes, I feel it is somehow my duty to also put a simple wrap-up to this process. Of course, I'll say many things already well known to my fellow Debian people, but non-Debianers read this too.

So, for further context, if you need to, please read my previous blog post, where I was about to send a call for votes. It summarizes the situation and proposals; you will find we had a nice set of messages during September. I have to thank all the involved parties, most especially Ian Jackson, who spent a lot of energy summing up the situation and clarifying the different bits to everyone involved.

So, we held the vote; you may be interested in looking at the detailed vote statistics for the 235 correctly received votes, and most importantly, the results:


First of all, I'll say I'm actually surprised at the results, as I expected Ian's proposal (acknowledge difficulty; I actually voted this proposal as my top option) to win and mine (repeal previous GR) to be last; it turns out the winning option was Iain's (remain private). But all in all, I am happy with the results: as I said during the discussion, I was much disappointed with the results of the previous GR on this topic — and, yes, it seems the breaking point was when many people thought the privacy status of posted messages was in jeopardy. We cannot really compare with what I would have liked to see in said vote if we had followed the strategy of leaving the original resolution text instead of replacing it, but I believe it would have passed. In fact, one more surprise of this iteration was that I expected Further Discussion to be ranked higher, somewhere among the three explicit options. I am happy, of course, that we got such overwhelming clarity about what the project as a whole prefers.

And what was gained or lost with this whole exercise? Well, if nothing else, we gained the ability to stop lying. For over ten years, we have had an accepted resolution binding us to release the messages sent to debian-private under such-and-such conditions... but we never got around to implementing it. We now know that debian-private will remain private... but we should keep reminding ourselves to use the list as little as possible.

For a project such as Debian, which is often seen as a beacon of doing the right thing no matter what, I feel that being explicit about not lying to ourselves is of great importance. Yes, we have the principle of not hiding our problems, but it has long been argued that the use of this list is not hiding our problems. Private communication can happen whenever humans are involved, even if administratively we tried to avoid it.

Any of the three running options could have won, and I'd be happy. My #1 didn't win, but my #2 did. And, I am sure, it's for the best of the project as a whole.

Chris Lamb: Concorde

Mon, 24 Oct 2016 18:59:01 +0000


Today marks the 13th anniversary of the last passenger flight from New York arriving in the UK. Every seat was filled, a feat that had become increasingly rare for a plane that was a technological marvel but a commercial flop…

  • Only 20 aircraft were ever built despite 100 orders, most of them cancelled in the early 1970s.
  • Taxiing to the runway consumed 2 tons of fuel.
  • The white colour scheme was specified to reduce the outer temperature by about 10°C.
  • In a promotional deal with Pepsi, F-BTSD was temporarily painted blue. Due to the change of colour, Air France were advised to remain at Mach 2 for no more than 20 minutes at a time.
  • At supersonic speed the fuselage would heat up and expand by as much as 30cm. The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On some aircraft conducting a retiring supersonic flight, the flight engineers placed their caps in this expanded gap, permanently wedging the cap as the gap shrank again.
  • At Concorde's altitude a breach of cabin integrity would result in a loss of pressure so severe that passengers would quickly suffer from hypoxia despite application of emergency oxygen. Concorde was thus built with smaller windows to reduce the rate of loss in such a breach.
  • The high cruising altitude meant passengers received almost twice the amount of radiation as on a conventional long-haul flight. To prevent excessive exposure, the flight deck included a radiometer; if the radiation level became too high, pilots would descend below 45,000 feet.
  • BA's service had a greater number of passengers who booked a flight and then failed to appear than any other aircraft in their fleet.
  • Market research later in Concorde's life revealed that customers thought Concorde was more expensive than it actually was. Ticket prices were progressively raised to match these perceptions.
  • The fastest transatlantic airliner flight was from New York JFK to London Heathrow on 7 February 1996 by British Airways' G-BOAD in 2 hours, 52 minutes, 59 seconds from takeoff to touchdown. It was aided by a 175 mph tailwind.

See also: A Rocket to Nowhere.

Reproducible builds folks: Reproducible Builds: week 78 in Stretch cycle

Mon, 24 Oct 2016 16:10:06 +0000

What happened in the Reproducible Builds effort between Sunday October 16 and Saturday October 22 2016:

Media coverage

Chris Lamb presented at Software Freedom Kosovo on reproducible builds on Saturday 22nd October.

Upcoming events

Jürgen Rose will be giving a talk on Enforcing reproducible builds with Eclipse Package Drone at EclipseCon Europe 2016 in Ludwigsburg, Germany on October 27th.

In order to build packages reproducibly, you not only need identical sources but also some external definition of the environment used for a particular build. This definition includes the inputs and the outputs and, in the Debian case, is available in a $package_$architecture_$version.buildinfo file. We anticipate the next dpkg upload to sid will create .buildinfo files by default. Whilst it's clear that we also need to teach dak to deal with them (#763822), it's not actually clear how to handle .buildinfo files after dak has processed them and how to make them available to the world. To this end, Chris Lamb has started development on a proof-of-concept .buildinfo server to see what issues arise.

Reproducible work in other projects

Ximin Luo submitted a patch to GCC as a prerequisite for future patches to make debugging symbols reproducible.

Packages reviewed and fixed, and bugs filed

#841274 filed against node-once by Chris Lamb. #841342 filed against zshdb by Chris Lamb. #841427 filed against unifdef by Chris Lamb. #841440 filed against rdp-alignment by Chris Lamb. #841497 filed against cf-python by Chris Lamb. #841694 filed against dvdauthor by Reiner Herrmann. #841698 filed against node-lodash by Chris Lamb. #841701 filed against libtext-charwidth-perl by Reiner Herrmann. #841702 filed against libapt-pkg-perl by Reiner Herrmann. #841703 filed against libio-pty-perl by Reiner Herrmann. #841707 filed against eximdoc4 by Chris Lamb.

Reviews of unreproducible packages

99 package reviews have been added, 3 have been updated and 6 have been removed this week, adding to our knowledge about identified issues. 6 issue types have been added: bin_sh_is_bash, captures_build_arch, captures_build_path_via_assert, docbook_to_man_one_byte_delta_on_i386, graphviz_nondeterminstic_output, randomness_in_documentation_generated_by_scilab.

Weekly QA work

During reproducibility testing, some FTBFS bugs have been detected and reported by: Chris Lamb (23), Daniel Reichelt (2), Lucas Nussbaum (1), Santiago Vila (18).

diffoscope development

Mattia Rizzolo: tests/ppu: skip some PPU tests if ppudump is < 3.0.0; ppu: ignore decoding errors from ppudump while filtering the output; ppu: don't run a full ppudump while only looking for the PPU file version; debian: bump debhelper compat level to 10, no changes needed. Michel Messerschmidt: add rudimentary support for OpenDocumentFormat files. h01ger increased the diskspace for reproducible content on Jenkins; thanks to ProfitBricks. Valerie Young supplied a patch to make the Python SQL interface more SQLite/PostgreSQL agnostic. lynxis worked hard to make LEDE and OpenWrt builds happen on two hosts.

Misc.

Our poll to find a good time for an IRC meeting is still running until Tuesday, October 25th; please reply as soon as possible. We need a logo! Some ideas and requirements for a Reproducible Builds logo have been documented in the wiki. Contributions very welcome, even if simply by forwarding this information.

This week's edition was written by Chris Lamb & Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC. [...]
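As an aside from this editor (not part of the weekly report): the end goal of all the .buildinfo bookkeeping is simply that two builds from the same source and environment produce bit-for-bit identical artifacts. A tiny, unofficial Python sketch of that check, with hypothetical file names, looks like this:

import hashlib
import sys

def sha256sum(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# usage: compare_builds.py first.deb second.deb   (file names are examples)
first, second = sys.argv[1], sys.argv[2]
if sha256sum(first) == sha256sum(second):
    print("builds are bit-for-bit identical")
else:
    print("builds differ; diffoscope can show where")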

Russ Allbery: Review: The Design of Everyday Things

Mon, 24 Oct 2016 04:17:00 +0000

Review: The Design of Everyday Things, by Don Norman Publisher: Basic Books Copyright: 2013 ISBN: 0-465-05065-4 Format: Trade paperback Pages: 298 There are several editions of this book (the first under a different title, The Psychology of Everyday Things). This review is for the Revised and Expanded Edition, first published in 2013 and quite significantly revised compared to the original. I probably read at least some of the original for a class in human-computer interaction around 1994, but that was long enough ago that I didn't remember any of the details. I'm not sure how much impact this book has had outside of the computer field, but The Design of Everyday Things is a foundational text of HCI (human-computer interaction) despite the fact that many of its examples and much of its analysis is not specific to computers. Norman's goal is clearly to write a book that's fundamental to the entire field of design; not having studied the field, I don't know if he succeeded, but the impact on computing was certainly immense. This is the sort of book that everyone ends up hearing about, if not necessarily reading, in college. I was looking forward to filling a gap in my general knowledge. Having now read it cover-to-cover, would I recommend others invest the time? Maybe. But probably not. There are several things this book does well. One of the most significant is that it builds a lexicon and a set of general principles that provide a way of talking about design issues. Lexicons are not the most compelling reading material (see also Design Patterns), but having a common language is useful. I still remember affordances from college (probably from this book or something else based on it). Norman also adds, and defines, signifiers, constraints, mappings, and feedback, and talks about the human process of building a conceptual model of the objects with which one is interacting. Even more useful, at least in my opinion, is the discussion of human task-oriented behavior. The seven stages of action is a great systematic way of analyzing how humans perform tasks, where those actions can fail, and how designers can help minimize failure. One thing I particularly like about Norman's presentation here is the emphasis on the feedback cycle after performing a task, or a step in a task. That feedback, and what makes good or poor feedback, is (I think) an underappreciated part of design and something that too often goes missing. I thought Norman was a bit too dismissive of simple beeps as feedback (he thinks they don't carry enough information; while that's not wrong, I think they're far superior to no feedback at all), but the emphasis on this point was much appreciated. Beyond these dry but useful intellectual frameworks, though, Norman seems to have a larger purpose in The Design of Everyday Things: making a passionate argument for the importance of design and for not tolerating poor design. This is where I think his book goes a bit off the rails. I can appreciate the boosterism of someone who feels an aspect of creating products is underappreciated and underfunded. But Norman hammers on the unacceptability of bad design to the point of tedium, and seems remarkably intolerant of, and unwilling to confront, the reasons why products may be released with poor designs for their eventual users. Norman clearly wishes that we would all boycott products with poor designs and prize usability above most (all?) other factors in our decisions. Equally clearly, this is not happening, and Norman knows it. 
He even describes some of the r[...]

Dirk Eddelbuettel: World Marathon Majors: Five Star Finisher!

Mon, 24 Oct 2016 02:41:00 +0000

A little over eight years ago, I wrote a short blog post which somewhat dryly noted that I had completed the five marathons constituting the World Marathon Majors. I had completed Boston, Chicago and New York during 2007, adding London and then Berlin (with a personal best) in 2008. The World Marathon Majors existed then, but I was not aware of a website. The organisation was aiming to raise the profile of the professional and very high-end aspect of the sport. But marathoning is funny as they let somewhat regular folks like you and me into the same race. And I always wondered if someone kept track of regular folks completing the suite... I have been running a little less the last few years, though I did get around to completing the Illinois Marathon earlier this year (only tweeted about it and still have not added anything to the running section of my blog). But two weeks ago, I was once again handing out water cups at the Chicago Marathon, sending along two tweets when the elite wheelchair and elite male runners flew by. To the first, the World Marathon Majors account replied, which led me to their website. Which in turn led me to the Five Star Finisher page, and the newer / larger Six Star Finisher page now that Tokyo has been added. And in short, one can now request one's record to be added (if it checks out). So I did. And now I am on the Five Star Finisher page! I don't think I'll ever surpass that as a runner. The table header and my row look like this: If only my fifth / sixth grade physical education teacher could see that---he was one of those early running nuts from the 1970s and made us run towards / around this (by now enlarged) pond and boy did I hate that :) Guess it did have some long-lasting effects. And I casually circled the lake a few years ago, starting much further away from my parents' place. Once you are in the groove for distance... But leaving that aside, running has been fun, and with some luck I may have another one or two marathons or Ragnar Relays left. The only really bad part about this is that I may have to get myself to Tokyo after all (for something that is not an ISM workshop) ... This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings. [...]

Daniel Silverstone: Gitano - Approaching Release - Deprecated commands

Mon, 24 Oct 2016 02:24:58 +0000

As mentioned previously I am working toward getting Gitano into Stretch. Last time we spoke about lace, on which a colleague and friend of mine (Richard Maw) did a large pile of work. This time I'm going to discuss deprecation approaches and building more capability out of fewer features. First, a little background -- Gitano is written in Lua which is a deliberately small language whose authors spend more time thinking about what they can remove from the language spec than they do what they could add in. I first came to Lua in the 3.2 days, a little before 4.0 came out. (The authors provide a lovely timeline in case you're interested.) With each of the releases of Lua which came after 3.2, I was struck with how the authors looked to take a number of features which the language had, and collapse them into more generic, more powerful, smaller, fewer features. This approach to design stuck with me over the subsequent decade, and when I began Gitano I tried to have the smallest number of core features/behaviours, from which could grow the power and complexity I desired. Gitano is, at its core, a set of files in a single format (clod) stored in a consistent manner (Git) which mediate access to a resource (Git repositories). Some of those files result in emergent properties such as the concept of the 'owner' of a repository (though that can simply be considered the value of the project.owner property for the repository). Indeed the concept of the owner of a repository is a fiction generated by the ACL system with a very small amount of collusion from the core of Gitano. Yet until recently Gitano had a first class command set-owner which would alter that one configuration value. [gitano] set-description ---- Set the repo's short description (Takes a repo) [gitano] set-head ---- Set the repo's HEAD symbolic reference (Takes a repo) [gitano] set-owner ---- Sets the owner of a repository (Takes a repo) Those of you with Gitano installations may see the above if you ask it for help. Yet you'll also likely see: [gitano] config ---- View and change configuration for a repository (Takes a repo) The config command gives you access to the repository configuration file (which, yes, you could access over git instead, but the config command can be delegated in a more fine-grained fashion without having to write hooks). Given the config command has all the functionality of the three specific set-* commands shown above, it was time to remove the specific commands. Migrating If you had automation which used the set-description, set-head, or set-owner commands then you will want to switch to the config command before you migrate your server to the current or any future version of Gitano. In brief, where you had: ssh git@gitserver set-FOO repo something You now need: ssh git@gitserver config repo set project.FOO something It looks a little more wordy but it is consistent with the other features that are keyed from the project configuration, such as: ssh git@gitserver config repo set cgitrc.section Fooble Section Name And, of course, you can see what configuration is present with: ssh git@gitserver config repo show Or look at a specific value with: ssh git@gitserver config repo show specific.key As always, you can get more detailed (if somewhat cryptic) help with: ssh git@gitserver help config Next time I'll try and touch on the new PGP/GPG integration support. [...]

Francois Marier: Tweaking Referrers For Privacy in Firefox

Mon, 24 Oct 2016 00:00:00 +0000

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective. Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems. Description In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation. First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config. Secondly, using the new Referrer Policy specification web developers can override the default behaviour for their pages, including on a per-element basis. This can be used both to increase or reduce the amount of information present in the referrer. Legitimate Uses Because the Referer header has been around for so long, a number of techniques rely on it. Armed with the Referer information, analytics tools can figure out: where website traffic comes from, and how users are navigating the site. Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website. It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (i.e. POST and DELETE requests) and only include the information required for that security check (i.e. the origin). Problems with the Referrer Unfortunately, this header also creates significant privacy and security concerns. The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way. These leaks can also lead to exposing private personally-identifiable information when they are part of the query string. One of the most high-profile example is the accidental leakage of user searches by Solutions for Firefox Users While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers. In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to: 0 to never send the header 1 to send the header only when clicking on links and similar elements 2 (default) to send the header on all requests (e.g. images, links, etc.) 
It's also possible to put a limit on the maximum amount of informa[...]
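To make the CSRF-mitigation use of the Referer concrete (this illustration is mine, not from the post, and the origin value is a placeholder), a server-side check can be as simple as comparing the referrer's origin against the site's own origin before accepting a form submission:

from urllib.parse import urlparse

ALLOWED_ORIGIN = "https://example.com"   # hypothetical site receiving the form

def referer_looks_same_origin(referer_header):
    """Rough CSRF check: accept the POST only if the Referer, when present,
    points back at our own origin."""
    if not referer_header:
        # many privacy tools strip the header; rejecting is the stricter policy
        return False
    parsed = urlparse(referer_header)
    origin = "%s://%s" % (parsed.scheme, parsed.netloc)
    return origin == ALLOWED_ORIGIN

print(referer_looks_same_origin("https://example.com/account/settings"))  # True
print(referer_looks_same_origin("https://evil.example.net/csrf"))          # False

This is exactly why a dedicated, minimal header carrying only the origin would be preferable, as the post argues.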

Carl Chenet: PyMoneroWallet: the Python library for the Monero wallet

Sun, 23 Oct 2016 22:00:59 +0000

Do you know the Monero cryptocurrency? It's a cryptocurrency, like Bitcoin, focused on security, privacy and untraceability. It's a great project launched in 2014, traded today as XMR on all cryptocurrency exchange platforms (like Kraken or Poloniex).

So what’s new? In order to work with a Monero wallet from some Python applications, I just wrote a Python library to use the Monero wallet: PyMoneroWallet


Using PyMoneroWallet is as easy as:

$ python3
>>> from monerowallet import MoneroWallet
>>> mw = MoneroWallet()
>>> mw.getbalance()
{'unlocked_balance': 2262265030000, 'balance': 2262265030000}

Lots of features are included; you should have a look at the documentation of the monerowallet module to see them all, but here are some of them:

And so on. Have a look at the complete documentation for extensive available functions.
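Under the hood, a library like this wraps the wallet's JSON-RPC interface. The following is my own sketch of the equivalent raw call using requests; the endpoint, port and method name are assumptions based on the usual monero-wallet-rpc setup of the time, so verify them against your own configuration:

import requests

# monero-wallet-rpc is assumed to be listening locally on its default port
WALLET_RPC_URL = "http://127.0.0.1:18082/json_rpc"

payload = {
    "jsonrpc": "2.0",
    "id": "0",
    "method": "getbalance",   # assumed method name, matching the example above
}
response = requests.post(WALLET_RPC_URL, json=payload)
response.raise_for_status()
print(response.json()["result"])   # e.g. {'balance': ..., 'unlocked_balance': ...}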

UPDATE: I’m trying to launch a crowdfunding campaign for the PyMoneroWallet project. Feel free to comment in this thread of the official Monero forum to let them know you think PyMoneroWallet is a great idea.

Feel free to contribute to this young project and help spread the use of Monero by using PyMoneroWallet in your Python applications.

Vincent Sanders: Rabbit of Caerbannog

Sun, 23 Oct 2016 21:27:56 +0000

Subsequent to my previous use of American Fuzzy Lop (AFL) on the NetSurf bitmap image library, I applied it to the gif library which, after fixing the test runner, failed to produce any crashes but did result in a better test corpus, improving coverage above 90%.

I then turned my attention to the SVG processing library. This was different to the bitmap libraries in that it required parsing a much lower density text format and performing operations on the resulting tree representation. The test program for the SVG library needed some improvement but is very basic in operation. It takes the test SVG, parses it using libsvgtiny and then uses the parsed output to write out an imagemagick mvg file.

The libsvg processing uses the NetSurf DOM library, which in turn uses an expat binding to parse the SVG XML text. To process this with AFL required instrumenting not only the SVG library but the DOM library. I did not initially understand this and my first run resulted in a "map coverage" warning indicating an issue. Helpfully the AFL docs do cover this so it was straightforward to rectify.

Once the test program was written and the environment set up, an AFL run was started and left to run. The next day I was somewhat alarmed to discover the fuzzer had made almost no progress and was running very slowly. I asked for help on the AFL mailing list and got a polite and helpful response; basically, I needed to RTFM. I must thank the members of the AFL mailing list for being so helpful and tolerating someone who ought to know better asking dumb questions.

After reading the fine manual I understood I needed to ensure all my test cases were as small as possible, and further that the fuzzer needed a dictionary as a hint to the file format, because the text file was of such low data density compared to binary formats. I crafted an SVG dictionary based on the XML one, ensured all the seed SVG files were as small as possible and tried again. The immediate result was thousands of crashes; nothing like being savaged by a rabbit to cause a surprise.

Not being in possession of the appropriate holy hand grenade, I resorted instead to GDB and electric fence. Unlike the bitmap library crashes, memory bounds issues simply did not feature. Instead the crashes mainly centered around actual logic errors when constructing and traversing the data structures. For example, Daniel Silverstone fixed an interesting bug where the XML parser binding would try and go "above" the root node in the tree if the source closed more tags than it opened, which resulted in wild pointers and NULL references. I found and squashed several others, including dealing with SVG which has no valid root element and division-by-zero errors when things like colour gradients have no points.

I find it interesting that the type and texture of the crashes completely changed between the SVG and binary formats. Perhaps it is just the nature of the textual formats that causes this, although it might be due to the techniques used to parse the formats.

Once all the immediately reproducible crashes were dealt with, I performed a longer run. I used my monster system as previously described and ran the fuzzer for a whole week.

Summary stats
=============
Fuzzers alive      : 10
Total run time     : 68 days, 7 hours
Total execs        : 9268 million
Cumulative speed   : 15698 execs/sec
Pending paths      : 0 faves, 2501 total
Pending per fuzzer : 0 faves, 250 total (on average)
Crashes found      : 9 locally unique

After burning almost seventy days of processor time AFL found me a[...]
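For a concrete picture of the two remedies mentioned above (tiny seed files plus a format dictionary), here is an illustrative Python snippet from this editor, not the author's actual setup; the seed contents and dictionary tokens are made up, but the keyword="value" lines match the dictionary format AFL expects:

import os

# a couple of deliberately tiny seed files (AFL works best with minimal inputs)
seeds = {
    "minimal.svg": '<svg xmlns="http://www.w3.org/2000/svg"/>',
    "rect.svg": '<svg xmlns="http://www.w3.org/2000/svg"><rect width="1" height="1"/></svg>',
}

os.makedirs("testcases", exist_ok=True)
for name, content in seeds.items():
    with open(os.path.join("testcases", name), "w") as f:
        f.write(content)

# a made-up, abbreviated dictionary hinting the fuzzer at SVG/XML tokens,
# written as one keyword="value" entry per line
tokens = ["<svg", "</svg>", "<rect", "<path", 'xmlns="', 'd="', "/>"]
with open("svg.dict", "w") as f:
    for i, token in enumerate(tokens):
        f.write('token_%d="%s"\n' % (i, token.replace('"', '\\"')))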

Jaldhar Vyas: What I Did During My Summer Vacation

Sun, 23 Oct 2016 05:01:06 +0000

That's So Raven

If I could sum up the past year in one word, that word would be distraction. There have been so many strange, confusing or simply unforeseen things going on I have had trouble focusing like never before. For instance, on the opposite side of the street from me is one of Jersey City's old reservoirs. It's not used for drinking water anymore and the city eventually plans on merging it into the park on the other side. In the meantime it has become something of a wildlife refuge. Which is nice, except one of the newly settled critters was a bird of prey -- the consensus is possibly some kind of hawk or raven. Starting your morning commute under the eyes of a harbinger of death is very goth, and I even learned to deal with the occasional piece of deconstructed rodent on my doorstep, but nighttime was a big problem. For contrary to popular belief, ravens do not quoth "nevermore" but "KRRAAAA". Very loudly. Just as soon as you have drifted off to sleep. Eventually my sleep-deprived neighbors and I appealed to the NJ division of environmental protection to get it removed, but by the time they were ready to swing into action the bird had left for somewhere more congenial like Transylvania or Newark.

Or here are some more complete wastes of time: I go to the doctor for my annual physical. The insurance company codes it as Adult Onset Diabetes by accident. One day I opened the lid of my laptop and there's a "ping" sound and a piece of the hinge flies off. Apparently that also severed the connection to the screen, and naturally the warranty had just expired, so I had to spend the next month tethered to an external monitor until I could afford to buy a new one. Mix in all the usual social, political, family and work drama and you can see that it has been a very trying time for me.

Dovecot

I have managed to get some Debian work done. On Dovecot, my principal package, I have gotten tremendous support from Apollon Oikonomopolous, who I belatedly welcome as a member of the Dovecot maintainer team. He has been particularly helpful in fixing our systemd support and cleaning out a lot of the old and invalid bugs. We're in pretty good shape for the freeze. Upstream has released an RC of 2.2.26 and hopefully the final version will be out in the next couple of days so we can include it in Stretch. We can always use more help with the package so let me know if you're interested.

Debian-IN

Most of the action has been going on without me but I've been lending support and sponsoring whenever I can. We have several new DDs and DMs but still no one north of the Vindhyas I'm afraid.

Debian Perl Group

gregoa did a ping of inactive maintainers and I regretfully had to admit to myself that I wasn't going to be of use anytime soon, so I resigned. Perl remains my favorite language and I've actually been more involved in the meetings of my local Perlmongers group, so hopefully I will be back again one day. And I still maintain the Perl modules I wrote myself.

Debian-Axe-Murderers*

May have gained a recruit.

*Strictly speaking it should be called Debian-People-Who-Dont-Think-Faults-in-One-Moral-Domain-Such-As-For-Example-Axe-Murdering-Should-Leak-Into-Another-Moral-Domain-Such-As-For-Example-Debian but come on, that's just silly. [...]

Ingo Juergensmann: Automatically update TLSA records on new Letsencrypt Certs

Sat, 22 Oct 2016 22:29:31 +0000

I've been using DNSSEC for quite some time now and it is working quite well. When LetsEncrypt went public beta I jumped on the train and migrated many services to LE-based TLS. However there was still one small problem with LE certs: when there is a new cert, all of the old TLSA resource records are not valid anymore and might give problems to strict DNSSEC checking clients. It took some while until my pain was big enough to finally fix it with some scripts. There are at least two scripts involved:

1) The DNSSEC script

This script does all of my DNSSEC handling. You can just do an "enable-dnssec domain.tld" and everything is configured so that you only need to copy the appropriate keys into the webinterface of your DNS registry.

host:~/bin#
No parameter given.
Usage: MODE DOMAIN

MODE can be one of the following:
  enable-dnssec : perform all steps to enable DNSSEC for your domain
  edit-zone     : safely edit your zone after enabling DNSSEC
  create-dnskey : create new dnskey only
  load-dnskey   : loads new dnskeys and signs the zone with them
  show-ds       : shows DS records of zone
  zoneadd-ds    : adds DS records to the zone file
  show-dnskey   : extract DNSKEY record that needs to be uploaded to your registrar
  update-tlsa   : update TLSA records with new TLSA hash, needs old and new TLSA hashes as additional parameters

For updating zone files, just do an "edit-zone domain.tld" to add new records and such, and the script will take care e.g. of increasing the serial of the zone file. I find this very convenient, so I often use this script for non-DNSSEC-enabled domains as well. However you can spot the command line option "update-tlsa". This option needs the old and the new TLSA hashes beside the domain.tld parameter. However, this option is used from the second script:

2) check_tlsa.sh

This is a quite simple Bash script that parses the domains.txt from the LetsEncrypt client, looks up the old TLSA hash in the zone files (structured in TLD/domain.tld directories), compares the old with the new hash, and if there is a difference in hashes, calls the DNSSEC script from 1) with the proper parameters:

#!/bin/bash
set -e
LEPATH="/etc/"
for i in `cat /etc/ | awk '{print $1}'` ; do
        domain=`echo $i | awk 'BEGIN {FS="."} ; {print $(NF-1)"."$NF}'`
        #echo -n "Domain: $domain"
        TLD=`echo $i | awk 'BEGIN {FS="."}; {print $NF}'`
        #echo ", TLD: $TLD"
        OLDTLSA=`grep -i "in.*tlsa" /etc/bind/${TLD}/${domain} | grep ${i} | head -n 1 | awk '{print $NF}'`
        if [ -n "${OLDTLSA}" ] ; then
                #echo "--> ${OLDTLSA}"
                # Usage: cert.pem host[:port] usage selector mtype
                NEWTLSA=`/path/to/ $LEPATH/certs/${i}/fullchain.pem ${i} 3 1 1 | awk '{print $NF}'`
                #echo "==> $NEWTLSA"
                if [ "${OLDTLSA}" != "${NEWTLSA}" ] ; then
                        /path/to/ update-tlsa ${domain} ${OLDTLSA} ${NEWTLSA} > /dev/null
                        echo "TLSA RR update for ${i}"
                fi
        fi
done

So, quite simple and obviously a quick hack. For sure someone else can write a cleaner and more sophisticated implementation to do the same stuff, but at leas[...]
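
For reference, the "3 1 1" parameters above mean certificate usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo) and matching type 1 (SHA-256). If you want to compute such a hash by hand instead of via a helper script, a pipeline along these lines works (the certificate path is just an example):

  openssl x509 -in /path/to/fullchain.pem -noout -pubkey \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256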

Matthieu Caneill: Debugging 101

Sat, 22 Oct 2016 22:00:00 +0000

While teaching a class on concurrent programming this semester, I realized during the labs that most of the students couldn't properly debug their code. They are at the end of a 2-year curriculum, know many different programming languages and frameworks, but when it comes to tracking down a bug in their own code, they often lack the basics. Instead of debugging for them I tried to give them general directions that they could apply for the next bugs. I will try here to summarize the very first basic things to know about debugging. Because, remember, writing software is 90% debugging, and 10% introducing new bugs (that is not from me, but I could not find the original quote). So here is my take on Debugging 101.

Use the right tools

Many good tools exist to assist you in writing correct software, and it would put you behind in terms of productivity not to use them. Editors which catch syntax errors while you write them, for example, will help you a lot. And there are many features out there in editors, compilers, debuggers, which will prevent you from introducing trivial bugs. Your editor should be your friend; explore its features and customization options, and find an efficient workflow with them, that you like and can improve over time. The best way to fix bugs is not to have them in the first place, obviously.

Test early, test often

I've seen students writing code for one hour before running make, which would then fail so hard that hundreds of lines of errors and warnings were output. There are two main reasons doing this is a bad idea: You have to debug all the errors at once, and the complexity of solving many bugs, some dependent on others, is way higher than the complexity of solving a single bug. Moreover, it's discouraging. Wrong assumptions you made at the beginning will make the following lines of code wrong. For example if you chose the wrong data structure for storing some information, you will have to fix all the code using that structure. It's less painful to realize earlier it was the wrong one to choose, and you have more chances of knowing that if you compile and execute often. I recommend testing your code (compilation and execution) every few lines of code you write. When something breaks, chances are it will come from the last line(s) you wrote. Compiler errors will be shorter, and will point you to the same place in the code. Once you get more confident using a particular language or framework, you can write more lines at once without testing. That's a slow process, but it's ok. If you set up the right keybinding for compiling and executing from within your editor, it shouldn't be painful to test early and often.

Read the logs

Spot the places where your program/compiler/debugger writes text, and read it carefully. It can be your terminal (quite often), a file in your current directory, a file in /var/log/, a web page on a local server, anything. Learn where different software write logs on your system, and integrate reading them in your workflow. Often, it will be your only information about the bug. Often, it will tell you where the bug lies. Sometimes, it will even give you hints on how to fix it. You may have to filter out a lot of garbage to find relevant information about your bug. Learn to spot some keywords like error or warning. In long stacktraces, spot the lines concerning your files; because more often, your code is to be blamed, rather than deeper library code. gr[...]
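
As a trivial illustration of that keyword and filename filtering (the file and path names here are invented for the example):

  # Keep only the lines that usually matter in a long build log.
  grep -nEi 'error|warning' build.log

  # In a long stacktrace, look at the frames from your own code first.
  grep -n 'src/myproject/' stacktrace.txt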

Iain R. Learmonth: The Domain Name System

Sat, 22 Oct 2016 18:15:13 +0000

As I posted yesterday, we released PATHspider 1.0.0. What I didn’t talk about in that post was an event that occurred only a few hours before the release. Everything was going fine, proofreading of the documentation was in progress, a quick git push with the documentation updates and… CI FAILED!?!

Our CI doesn’t build the documentation, only tests the core code. I’m planning to release real soon and something has broken. Starting to panic.

irl@orbiter# ./
................
----------------------------------------------------------------------
Ran 16 tests in 0.984s

OK

This makes no sense. Maybe I forgot to add a dependency and it’s been broken for a while? I scrutinise the dependencies list and it all looks fine. In fairness, probably the first thing I should have done is look at the build log in Jenkins, but I’ve never had a failure that I couldn’t reproduce locally before. It was at this point that I realised there was something screwy going on. A sigh of relief as I realise that there’s not a catastrophic test failure, but now it looks like maybe there’s a problem with the University research group network, which is arguably worse.

Being focussed on getting the release ready, I didn’t realise that the Internet was falling apart. Unknown to me, a massive DDoS attack against Dyn, a major DNS host, was in progress. After a few attempts to debug the problem, I hardcoded a line into /etc/hosts, still believing it to be a localised issue. I’ve just removed this line as the problem seems to have resolved itself for now.

There are two main points I’ve taken away from this:

CI failure doesn’t necessarily mean that your code is broken, it can also indicate that your CI infrastructure is broken.
Decentralised internetwork routing is pretty worthless when the centralised name system goes down.

This afternoon I read a post by [tj] on the 57North Planet, and this is where I learnt what had really happened. He mentions multicast DNS and Namecoin as distributed name system alternatives. I’d like to add some more to that list:

The Onion Name System
ICMP Domain Name Messages
DNS-over-HTTPS

Only the first of these is really a distributed solution. My idea with ICMP Domain Name Messages is that you send an ICMP message to a webserver. Somewhere along the path, you’ll hit either a surveillance or censorship middlebox. These middleboxes can provide value by caching any DNS replies that are seen so that an ICMP DNS request message will cause the message to not be forwarded but a reply is generated to provide the answer to the query. If the middlebox cannot generate a reply, it can still forward it to other surveillance and censorship boxes. I think this would be a great secondary use for the NSA and GCHQ boxen on the Internet, clearly fits within the scope of “defending national security” as if DNS is down the Internet is kinda dead, plus it’d make it nice and easy to find the boxes with PATHspider. [...]
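
That /etc/hosts workaround amounts to something like the line below, with a documentation address and an invented hostname standing in for the real ones:

  # Pin the name locally until the DNS provider recovers.
  echo '192.0.2.53 ci.example.org' | sudo tee -a /etc/hosts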

Dirk Eddelbuettel: RcppArmadillo 0.7.500.0.0

Sat, 22 Oct 2016 15:43:00 +0000



A few days ago, Conrad released Armadillo 7.500.0. The corresponding RcppArmadillo release 0.7.500.0.0 is now on CRAN (and will get into Debian shortly).

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language -- and is widely used by (currently) 274 other packages on CRAN.

Changes in this release relative to the previous CRAN release are as follows:

Changes in RcppArmadillo version 0.7.500.0.0 (2016-10-20)

  • Upgraded to Armadillo release 7.500.0 (Coup d'Etat)

    • Expanded qz() to optionally specify ordering of the Schur form

    • Expanded each_slice() to support matrix multiplication

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Christoph Egger: Running Debian on the ClearFog

Sat, 22 Oct 2016 10:37:44 +0000

Back in August, I was looking for a Homeserver replacement. During FrOSCon I was then reminded of the Turris Omnia project. The basic SoC (Marvell Armada 38x) seemed to be nice and have decent mainline support (and, with the Turris, users interested in keeping it working). Only I don't want any WIFI and I wasn't sure the standard case would be all that useful. Fortunately, there's also a simple board available with the same SoC called ClearFog, and so I got one of these (the Base version). With shipping and the SSD (the only 2242 M.2 SSD with 250 GiB I could find, an ADATA SP600) it slightly exceeds the budget, but well.

When installing the machine, the obvious goal was to use mainline FOSS components only if possible. Fortunately there's mainline kernel support for the device as well as mainline U-Boot. First attempts to boot from a micro SD card did not work out at all, both with mainline U-Boot and the vendor version though. Turns out the eMMC version of the board does not support any micro SD cards at all, a fact that is documented but others failed to notice as well.

U-Boot

As the board does not come with any loader on eMMC and booting directly from M.2 requires removing some resistors from the board, the easiest way is using UART for booting. The vendor wiki has some shell script wrapping an included C fragment to feed U-Boot to the device, but all that is really needed is U-Boot's kwboot utility. For some reason the SPL didn't properly detect UART booting on my device (wrong magic number), but patching the if (in arch-mvebu's spl.c) to always assume UART boot is an easy way around it.

The plan then was to boot a Debian armhf rootfs with a defconfig kernel from a USB stick, and install U-Boot and the rootfs to eMMC from within that system. Unfortunately U-Boot seems to be unable to talk to the USB3 port, so no kernel loading from there. One could probably make UART loading work, but switching between screen for the serial console and xmodem seemed somewhat fragile and I never got it working. However ethernet can be made to work, though you need to set eth1addr to eth3addr (or just the right one of these) in U-Boot, saveenv and reboot. After that TFTP works (but is somewhat slow).

eMMC

There's one last step required to allow U-Boot and Linux to access the eMMC. eMMC is wired to the same PINs as the SD card would be. However the SD card has an additional indicator pin showing whether a card is present. You might be lucky inserting a dummy card into the slot, or go the clean route and remove the pin specification from the device tree.

--- a/arch/arm/dts/armada-388-clearfog.dts
+++ b/arch/arm/dts/armada-388-clearfog.dts
@@ -306,7 +307,6 @@
 		sdhci@d8000 {
 			bus-width = <4>;
-			cd-gpios = <&gpio0 20 GPIO_ACTIVE_LOW>;
 			no-1-8-v;
 			pinctrl-0 = <&clearfog_sdhci_pins &clearfog_sdhci_cd_pins>;

Next up is flashing the U-Boot to eMMC. This seems to work with the vendor U-Boot but proves to be tricky with mainline. The fun part boils down to the fact that the boot firmware reads the first block from eMMC, but the second from SD card. If you write the mainline U-Boot, which was written and tested for SD ca[...]
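
For the record, a UART boot with kwboot looks roughly like the line below; the device node, baud rate and image name all depend on the particular build and cabling, so treat them as placeholders rather than the exact invocation used here.

  # Push the image over the serial line, then stay attached as a terminal (-t).
  ./tools/kwboot -t -b u-boot-spl.kwb -B 115200 /dev/ttyUSB0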

Matthew Garrett: Fixing the IoT isn't going to be easy

Sat, 22 Oct 2016 05:14:28 +0000

A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there's no real amount of diversification that can fix this - malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There's no way anybody can compel a recall, and even if they could it probably wouldn't help. If I've paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That's probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

We're left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn't necessarily help that much - if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we're able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we'd need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn't take terribly strong skills to identify this at import time and block a shipment, so the "obvious" answer is to set up forces in customs who do a security analysis of each device. We'll ignore the fact that this would be a pretty huge set of people to keep up with the sheer quantity of crap being developed and skip straight to the explanation for why this wouldn't work.

Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form "BackdoorPacketCmdLine_Req" and then executed the rest of the payload as root? A portscan's not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match "IP camera" right now, so you're going to need 99 more of me and a year just to examine the cameras. And that's assuming nobody ships any new ones. Even[...]

Russell Coker: Another Broken Nexus 5

Sat, 22 Oct 2016 04:56:51 +0000

In late 2013 I bought a Nexus 5 for my wife [1]. It’s a good phone and I generally have no complaints about the way it works. In the middle of 2016 I had to make a warranty claim when the original Nexus 5 stopped working [2]. Google’s warranty support was ok, the call-back was good but unfortunately there was some confusion which delayed replacement. Once the confusion about the IMEI was resolved the warranty replacement method was to bill my credit card for a replacement phone and reverse the charge if/when they got the original phone back and found it to have a defect covered by warranty. This policy meant that I got a new phone sooner as they didn’t need to get the old phone first. This is a huge benefit for defects that don’t make the phone unusable, as you will never be without a phone. Also if the user determines that the breakage was their fault they can just refrain from sending in the old phone.

Today my wife’s latest Nexus 5 developed a problem. It turned itself off and went into a reboot loop when connected to the charger. Also one of the clips on the rear case had popped out and other clips popped out when I pushed it back in. It appears (without opening the phone) that the battery may have grown larger (which is a common symptom of battery related problems). The phone is slightly less than 3 years old, so if I had got the extended warranty then I would have got a replacement.

Now I’m about to buy a Nexus 6P (because the Pixel is ridiculously expensive) which is $700 including postage. Kogan offers me a 3 year warranty for an extra $108. Obviously in retrospect spending an extra $100 would have been a benefit for the Nexus 5. But the first question is whether the new phone is going to have a probability greater than 1/7 of failing due to something other than user error in years 2 and 3. For an extended warranty to provide any benefit the phone has to have a problem that doesn’t occur in the first year (or a problem in a replacement phone after the first phone was replaced). The phone also has to not be lost, stolen, or dropped in a pool by its owner. While my wife and I have a good record of not losing or breaking phones, the probability of it happening isn’t zero.

The Nexus 5 that just died can be replaced for 2/3 of the original price. The value of the old Nexus 5 to me is less than 2/3 of the original price as buying a newer better phone is the option I want. The value of an old phone to me decreases faster than the replacement cost because I don’t want to buy an old phone. For an extended warranty to be a good deal for me I think it would have to cost significantly less than 1/10 of the purchase price due to the low probability of failure in that time period and the decreasing value of a replacement outdated phone. So even though my last choice to skip an extended warranty ended up not paying out, I expect that overall I will be financially ahead if I keep self-insuring, and I’m sure that I have already saved money by self-insuring all my previous devices.

[1] [2]

Related posts: The Nexus 5 The Nexus 5 is the latest Android phone to be... Nexus 4 Ringke Fusion Case I’ve been using Android phones for 2.5 years and... Nexus 4 M[...]

Iain R. Learmonth: PATHspider 1.0.0 released!

Fri, 21 Oct 2016 23:46:13 +0000

In today’s Internet we see an increasing deployment of middleboxes. While middleboxes provide in-network functionality that is necessary to keep networks manageable and economically viable, any packet mangling — whether essential for the needed functionality or accidental as an unwanted side effect — makes it more and more difficult to deploy new protocols or extensions of existing protocols. For the evolution of the protocol stack, it is important to know which network impairments exist and potentially need to be worked around. While classical network measurement tools are often focused on absolute performance values, PATHspider performs A/B testing between two different protocols or different protocol extensions to perform controlled experiments of protocol-dependent connectivity problems as well as differential treatment. PATHspider is a framework for performing and analyzing these measurements, while the actual A/B test can be easily customized.

Late on the 21st October, we released version 1.0.0 of PATHspider and it’s ready for “production” use (whatever that means for Internet research software). Our first real release of PATHspider was version 0.9.0, just in time for the presentation of PATHspider at the 2016 Applied Networking Research Workshop co-located with IETF 96 in Berlin earlier this year. Since this release we have made a lot of changes and I’ll talk about some of the highlights here (in no particular order):

Switch from twisted.plugin to straight.plugin

While we anticipate that some plugins may wish to use some features of Twisted, we didn’t want to have Twisted as a core dependency for PATHspider. We found that straight.plugin was not just a drop-in replacement but it simplified the way in which 3rd-party plugins could be developed and it was worth the effort for that alone.

Library functions for the Observer

PATHspider has an embedded flow-meter (think something like NetFlow but highly customisable). We found that even with the small selection of plugins that we had we were duplicating code across plugins for these customisations of the flow-meter. In this release we now provide library functions for common needs such as identifying TCP 3-way handshake completions or identifying ICMP Unreachable messages for flows.

New plugin: DSCP

We’ve added a new plugin for this release to detect breakage when using DiffServ code points to achieve differentiated services within a network.

Plugins are now subcommands

Using the subparsers feature of argparse, all plugins including 3rd-party plugins will now appear as subcommands to the PATHspider command. This makes every plugin a first-class citizen and makes PATHspider truly generalised. We have an added benefit from this that plugins can also ask for extra arguments that are specific to the needs of the plugin, for example the DSCP plugin allows the user to select which code point to send for the experimental test.

Test Suite

PATHspider now has a test suite! As the size of the PATHspider code base grows we need to be able to make changes and have confidence that we are not breaking code that another module relies on. We have so far only achieved 54% coverage of the codebase but we hope to improve this for the next release. We have focussed on the critica[...]

Dirk Eddelbuettel: anytime 0.0.4: New features and fixes

Fri, 21 Oct 2016 02:25:00 +0000

A brand-new release of anytime is now on CRAN following the three earlier releases since mid-September. anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects -- and does so without requiring a format string. See the anytime page for a few examples.

With release 0.0.4, we add two nice new features. First, NA, NaN and Inf are now simply skipped (similar to what the corresponding Base R functions do). Second, we now also accept large numeric values so that, e.g., anytime(as.numeric(Sys.time())) also works, effectively adding another input type. We also have squashed an issue reported by the 'undefined behaviour' sanitizer, and widened the test for when we try to deploy the gettz package to get missing timezone information.

A quick example of the new features:

anydate(c(NA, NaN, Inf, as.numeric(as.POSIXct("2016-09-01 10:11:12"))))
[1] NA           NA           NA           "2016-09-01"

The NEWS file summarises the release:

Changes in anytime version 0.0.4 (2016-10-20)

  • Before converting via lexical_cast, assign to atomic type via template logic to avoid an UBSAN issue (PR #15 closing issue #14)
  • More robust initialization and timezone information gathering.
  • More robust processing of non-finite input also coping with non-finite values such as NA, NaN and Inf which all return NA
  • Allow numeric POSIXt representation on input, also creating proper POSIXct (or, if requested, Date)

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings. [...]

Kees Cook: CVE-2016-5195

Thu, 20 Oct 2016 23:02:04 +0000


My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren’t many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone’s machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).

So, here are the graphs updated for the 668 CVEs known today:

  • Critical: 3 @ 5.2 years average
  • High: 44 @ 6.2 years average
  • Medium: 404 @ 5.3 years average
  • Low: 216 @ 5.5 years average



© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Héctor Orón Martínez: Build a Debian package against Debian 8.0 using Download On Demand (DoD) service

Thu, 20 Oct 2016 07:58:57 +0000

In the previous post the Open Build Service software architecture was overviewed. In the current blog post, a tutorial on setting up a package build with OBS from Debian packages is presented.

Steps:

Generate a test environment by creating a Stretch/SID VM
Enable the experimental repository
Install OBS server, api, worker and osc CLI packages
Ensure all OBS services are running
Create an OBS project for Download on Demand (DoD)
Create an OBS project linked to DoD
Adding a package to the project
Troubleshooting OBS

Generate a test environment by creating a Stretch/SID VM

Really, use whatever suits you best, but please create an untrusted test environment for this one. The current tutorial assumes “$hostname” is “stretch”, which should be the stretch or sid suite. Be aware that copying & pasting configuration files from the current post might lead you into broken characters (i.e. “).

Debian Stretch weekly netinst CD

Enable the experimental repository

# echo "deb experimental main" >> /etc/apt/sources.list.d/experimental.list
# apt-get update

Install and set up OBS server, api, worker and osc CLI packages

# apt-get install obs-server obs-api obs-worker osc

In the install process a mysql database is needed, therefore if the mysql server is not set up, a password needs to be provided. When the OBS API database ‘obs-api‘ is created, we need to pick a password for it; provide “opensuse”. The ‘obs-api’ package will configure the apache2 https webserver (creating a dummy certificate for “stretch”) to serve the OBS webui.

Add “stretch” and “obs” aliases to the “localhost” entry in your /etc/hosts file. Enable the worker by setting ENABLED=1 in /etc/default/obsworker. Try to connect to the web UI at https://stretch/ and log into the OBS webui (default login credentials: Admin/opensuse). From the command line tool, try to list projects in OBS:

$ osc -A https://stretch ls

Accept the dummy certificate and provide credentials (defaults: Admin/opensuse). If the install proceeds as expected follow to the next step.

Ensure all OBS services are running

# backend services
obsrun     813  0.0  0.9  104960  20448 ?  Ss   08:33   0:03 /usr/bin/perl -w /usr/lib/obs/server/bs_dodup
obsrun     815  0.0  1.5  157512  31940 ?  Ss   08:33   0:07 /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun    1295  0.0  1.6  157644  32960 ?  S    08:34   0:07  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun     816  0.0  1.8  167972  38600 ?  Ss   08:33   0:08 /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
obsrun    1296  0.0  1.8  168100  38864 ?  S    08:34   0:09  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
memcache   817  0.0  0.6  346964  12872 ?  Ssl  08:33   0:11 /usr/bin/memcached -m 64 -p 11211 -u memcache -l
obsrun     818  0.1  0.5   78548  11884 ?  Ss   08:33   0:41 /usr/bin/perl -w /usr/lib/obs/server/bs_dispatch
obsserv+   819  0.0  0.3   77516   7196 ?  Ss   08:33   0:05 /usr/bin/perl -w /usr/lib/obs/server/bs_service
mysql      851  0.0  0.0    4284   1324 ?  Ss   08:33   0:00 /bin/sh /usr/bin/mysqld_safe
mysql     1239  0.2  6.3 1010744 130104 ?  Sl   08:33   1:31  \_ /usr/sbin/m[...]

Daniel Pocock: Choosing smartcards, readers and hardware for the Outreachy project

Thu, 20 Oct 2016 07:25:44 +0000

One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image. Interns, and anybody who decides to start using the project (it is already functional for command line users), need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice. For X.509 use cases, such as VPN access, there are a range of choices. I recently obtained one of the SmartCard HSM cards; Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors to consider are most easily explained with a table (columns: on disk, smartcard reader without PIN-pad, smartcard reader with PIN-pad):

Software: Free/open / Mostly free/open, proprietary firmware in reader
Key extraction: Possible / Not generally possible
Passphrase compromise attack vectors: Hardware or software keyloggers, phishing, user error (unsophisticated attackers) / Exploiting firmware bugs over USB (only sophisticated attackers)
Other factors: No hardware / Small, USB key form-factor / Largest form factor

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:

Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
Even better if there is no wired networking either
Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
No hard disks required
Having built-in SD card readers or the ability to add them easily

SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept. It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time. For convenience, it would be desirable to use a multi-card reader, although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and j[...]

Pau Garcia i Quiles: FOSDEM Desktops DevRoom 2017 Call for Participation

Wed, 19 Oct 2016 23:41:47 +0000

FOSDEM is one of the largest (5,000+ hackers!) gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium, Europe). Once again, one of the tracks will be the Desktops DevRoom (formerly known as “CrossDesktop DevRoom”), which will host Desktop-related talks.

We are now inviting proposals for talks about Free/Libre/Open-source Software on the topics of Desktop development, Desktop applications and interoperability amongst Desktop Environments. This is a unique opportunity to show novel ideas and developments to a wide technical audience.

Topics accepted include, but are not limited to:

Open Desktops: Gnome, KDE, Unity, Enlightenment, XFCE, Razor, MATE, Cinnamon, ReactOS, CDE etc
Closed desktops: Windows, Mac OS X, MorphOS, etc (when talking about a FLOSS topic)
Software development for the desktop
Development tools
Applications that enhance desktops
General desktop matters
Cross-platform software development
Web
Thin clients, desktop virtualization, etc

Talks can be very specific, such as the advantages/disadvantages of distributing a desktop application with snap vs flatpak, or as general as using HTML5 technologies to develop native applications. Topics that are of interest to the users and developers of all desktop environments are especially welcome. The FOSDEM 2016 schedule might give you some inspiration.

Submissions

Please include the following information when submitting a proposal:

Your name
The title of your talk (please be descriptive, as titles will be listed with around 400 from other projects)
Short abstract of one or two paragraphs
Short bio (with photo)
Requested time: from 15 to 45 minutes. Normal duration is 30 minutes. Longer duration requests must be properly justified. You may be assigned LESS time than you request.

How to submit

All submissions are made in the Pentabarf event planning tool. To submit your talk, click on “Create Event”, then make sure to select the “Desktops” devroom as the “Track”. Otherwise your talk will not even be considered for any devroom at all. If you already have a Pentabarf account from a previous year, even if your talk was not accepted, please reuse it. Create an account if, and only if, you don’t have one from a previous year. If you have any issues with Pentabarf, please contact us.

Deadline

The deadline for submissions is December 5th 2016. FOSDEM will be held on the weekend of 4 & 5 February 2017 and the Desktops DevRoom will take place on Sunday, February 5th 2017. We will contact every submitter with a “yes” or “no” before December 11th 2016.

Recording permission

The talks in the Desktops DevRoom will be audio and video recorded, and possibly streamed live too. In the “Submission notes” field, please indicate that you agree that your presentation will be licensed under the CC-By-SA-4.0 or CC-By-4.0 license and that you agree to have your presentation recorded. For example: “If my presentation is accepted for FOSDEM, I hereby agree to license all recordings, slides, and other associated materials un[...]

Héctor Orón Martínez: Open Build Service in Debian needs YOU! ☞

Wed, 19 Oct 2016 19:13:57 +0000

“Open Build Service is a generic system to build and distribute packages from sources in an automatic, consistent and reproducible way.”

openSUSE distributions’ build system is based on a generic framework named Open Build Service (OBS). I have been using these tools in my work environment, and I have to say, as a Debian developer, that it is a great tool. In the current blog post I plan for you to learn the very basics of such a tool and provide you with a tutorial to get, at least, a Debian package building.

Fig 1 – Open Build Service Architecture

The figure above shows the Open Build Service, from now on OBS, software architecture. There are several parts which we should differentiate:

Web UI / API (obs-api)
Backend (obs-server)
Build daemon / worker (obs-worker)
CLI tool to manage API (osc)

Each one of the above packages can be installed in separate machines as a distributed architecture; it is very easy to split the system into several machines running the services, however in the tutorial below everything installs in one machine.

BACKEND

The backend is composed of several scripts written either in shell or Perl. There are several services running in the backend:

Source service
Repository service
Scheduler service
Dispatcher service
Warden service
Publisher service
Signer service
DoD service
…

The backend manages source packages (any format such as RPM, DEB, …) and schedules them for a build in the worker. Once the package is built it can be published in a repository for the wider audience or kept unpublished and used by other builds.

WORKER

The system can have several worker machines which are in charge of performing the package builds. There are different options that can be configured (see /etc/default/obsworker) such as the enabling switch, the number of worker instances, and jobs per instance. This part of the system is written in shell and/or Perl.

WEB UI / API

The frontend allows, in a clickable way, to get around most options OBS provides: setting up projects, uploading/branching/deleting packages, submitting review requests, etc. As an example, you can see a live instance running at

The frontend parts are really a Ruby-on-Rails web application; we (mainly thanks to Andrew Lee with ruby team help) have tried to get it nicely running, however we have had lots of issues due to javascripts or rubygems malfunctioning. The current webui is visible and provides some package status, however most actions do not work properly, or configurations cannot be applied as the editor does not save changes, and projects or packages in a project are not listed either. If you are a Ruby-on-Rails expert or if you are able to help us out with some of the webui issues we get at Debian, that would be really appreciated from our side.

OSC CLI

OSC is a managing command line tool, written in Python, that interfaces with the OBS API to be able to perform actions, edit configurations, do package reviews, etc.

Now that we have done a general overview of the system, let me introduce you to OBS with a practical tutorial.

TUTORIAL: Build a Debian package against Debian 8.0 using Download On Demand (Do[...]

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2016

Wed, 19 Oct 2016 10:29:10 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS. Individual reports In September, about 152 work hours have been dispatched among 13 paid contributors. Their reports are available: Balint Reczey did 15 hours (out of 12.25 hours allocated + 7.25 remaining, thus keeping 4.5 extra hours for October). Ben Hutchings did 6 hours (out of 12.3 hours allocated + 1.45 remaining, he gave back 7h and thus keeps 9.75 extra hours for October). Brian May did 12.25 hours. Chris Lamb did 12.75 hours (out of 12.30 hours allocated + 0.45 hours remaining). Emilio Pozuelo Monfort did 1 hour (out of 12.3 hours allocated + 2.95 remaining) and gave back the unused hours. Guido Günther did 6 hours (out of 7h allocated, thus keeping 1 extra hour for October). Hugo Lefeuvre did 12 hours. Jonas Meurer did 8 hours (out of 9 hours allocated, thus keeping 1 extra hour for October). Markus Koschany did 12.25 hours. Ola Lundqvist did 11 hours (out of 12.25 hours assigned thus keeping 1.25 extra hours). Raphaël Hertzog did 12.25 hours. Roberto C. Sanchez did 14 hours (out of 12.25h allocated + 3.75h remaining, thus keeping 2 extra hours). Thorsten Alteholz did 12.25 hours. Evolution of the situation The number of sponsored hours reached 172 hours per month thanks to maxcluster GmbH joining as silver sponsor and RHX Srl joining as bronze sponsor. We only need a couple of supplementary sponsors now to reach our objective of funding the equivalent of a full time position. The security tracker currently lists 39 packages with a known CVE and the dla-needed.txt file 34. It’s a small bump compared to last month but almost all issues are affected to someone. Thanks to our sponsors New sponsors are in bold. Platinum sponsors: TOSHIBA (for 12 months) GitHub (for 3 months) Gold sponsors: The Positive Internet (for 28 months) Blablacar (for 27 months) Linode LLC (for 17 months) Babiel GmbH (for 6 months) Plat’Home (for 6 months) UR Communications BV Silver sponsors: Domeneshop AS (for 27 months) Université Lille 3 (for 27 months) Trollweb Solutions (for 25 months) Nantes Métropole (for 21 months) University of Luxembourg (for 19 months) Dalenys (for 18 months) Univention GmbH (for 13 months) Université Jean Monnet de St Etienne (for 13 months) Sonus Networks (for 7 months) maxcluster GmbH Bronze sponsors: David Ayers – IntarS Austria (for 28 months) Evolix (for 28 months) Offensive Security (for 28 months), a.s. (for 28 months) Freeside Internet Service (for 27 months) MyTux (for 27 months) Intevation GmbH (for 25 months) Linuxhotel GmbH (for 25 months) Daevel SARL (for 23 months) Bitfolk LTD (for 22 months) Megaspace Internet Services GmbH (for 22 months) Greenbone Networks GmbH (for 21 months) NUMLOG (for 21 months) WinGo AG (for 21 months) Ecole Centrale de Nantes – LHEEA (for 17 months) Sig-I/O (for 14 months) Entr’ouvert (for 12 months) Adfinis SyGroup AG (for 9 months) GNI MEDIA (for 4 months) Laboratoire LEGI – UMR 5519 / CNRS (for 4 months) Quarantainenet BV (for 4 months) RHX Srl No comment | Liked this article[...]

Kees Cook: Security bug lifetime

Wed, 19 Oct 2016 04:46:05 +0000

In several of my recent presentations, I’ve discussed the lifetime of security flaws in the Linux kernel. Jon Corbet did an analysis in 2010, and found that security bugs appeared to have roughly a 5 year lifetime. As in, the flaw gets introduced in a Linux release, and then goes unnoticed by upstream developers until another release 5 years later, on average. I updated this research for 2011 through 2016, and used the Ubuntu Security Team’s CVE Tracker to assist in the process. The Ubuntu kernel team already does the hard work of trying to identify when flaws were introduced in the kernel, so I didn’t have to re-do this for the 557 kernel CVEs since 2011.

As the README details, the raw CVE data is spread across the active/, retired/, and ignored/ directories. By scanning through the CVE files to find any that contain the line “Patches_linux:”, I can extract the details on when a flaw was introduced and when it was fixed. For example CVE-2016-0728 shows:

Patches_linux:
break-fix: 3a50597de8635cd05133bd12c95681c82fe7b878 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2

This means that CVE-2016-0728 is believed to have been introduced by commit 3a50597de8635cd05133bd12c95681c82fe7b878 and fixed by commit 23567fd052a9abb6d67fe8e7a9ccdd9800a540f2. If there are multiple lines, then there may be multiple SHAs identified as contributing to the flaw or the fix. And a “-” is just short-hand for the start of Linux git history.

Then for each SHA, I queried git to find its corresponding release, and made a mapping of release version to release date, wrote out the raw data, and rendered graphs. Each vertical line shows a given CVE from when it was introduced to when it was fixed. Red is “Critical”, orange is “High”, blue is “Medium”, and black is “Low”. And here it is zoomed in to just Critical and High. The line in the middle is the date from which I started the CVE search (2011). The vertical axis is actually linear time, but it’s labeled with kernel releases (which are pretty regular).

The numerical summary is:

  • Critical: 2 @ 3.3 years
  • High: 34 @ 6.4 years
  • Medium: 334 @ 5.2 years
  • Low: 186 @ 5.0 years

This comes out to roughly 5 years lifetime again, so not much has changed from Jon’s 2010 analysis. While we’re getting better at fixing bugs, we’re also adding more bugs. And for many devices that have been built on a given kernel version, there haven’t been frequent (or some times any) security updates, so the bug lifetime for those devices is even longer. To really create a safe kernel, we need to get proactive about self-protection technologies. The systems using a Linux kernel are right now running with security flaws. Those flaws are just not known to the developers yet, but they’re likely known to attackers, as there have been prior boasts/gray-market advertisements for at least CVE-2010-3081 and CVE-2013-2888.

(Edit: see my updated graphs that include CVE-2016-5195.)

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. [...]
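
The extraction step can be approximated with a quick pipeline over a checkout of the tracker (directory names as described above; this is an illustration, not the exact script used):

  # Find the CVE files that record kernel patches and print their introducing/fixing commit pairs.
  grep -rl 'Patches_linux:' active/ retired/ ignored/ \
    | xargs grep -h 'break-fix:' | sort | head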

Michal Čihař: Gammu 1.37.90

Wed, 19 Oct 2016 04:00:16 +0000


Yesterday Gammu 1.37.90 was released. This release brings quite a lot of changes and is intended for testing purposes. Hopefully a stable 1.38.0 will follow soon, as long as I don't get negative feedback on the changes.

Besides code changes, there is one piece of news for Windows users - there is a Windows binary coming with the release. This was possible to automate thanks to AppVeyor, which provides a CI service where you can download built artifacts. Without this, I would not be able to make this happen as I don't have a single Windows computer :-).

Full list of changes:

  • Improved support for Huawei K3770.
  • API changes in some parameter types.
  • Fixed various Windows compilation issues.
  • Fixed several resource leaks.
  • Create outbox SMS atomically in FILES backend.
  • Removed getlocation command as we no longer fit into their usage policy.
  • Fixed call diverts on TP-LINK MA260.
  • Initial support for Oracle database.
  • Removed unused daemons, pbk and pbk_groups tables from the SMSD schema.
  • SMSD outbox entries now can have priority set in the database.
  • Added SIM IMSI to the SMSD status table.
  • Added CheckNetwork directive.
  • SMSD attempts to power on radio if disabled.
  • Fixed processing of AT unsolicited responses in some cases.
  • Fixed parsing USSD responses from some devices.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.

Filed under: Debian English Gammu | 0 comments

Reproducible builds folks: Reproducible Builds: week 77 in Stretch cycle

Wed, 19 Oct 2016 00:02:51 +0000

What happened in the Reproducible Builds effort between Sunday October 9 and Saturday October 15 2016: Media coverage despinosa wrote a blog post on Vala and reproducibility h01ger and lynxis gave a talk called "From Reproducible Debian builds to Reproducible OpenWrt, LEDE" (video, slides) at the OpenWrt Summit 2016 held in Berlin, together with ELCE, held by the Linux Foundation. A discussion on debian-devel@ resulted in a nice quotable comment from Paul Wise: "(Reproducible) builds from source (with continuous rechecking) is the only way to have enough confidence that a Debian user has the freedoms promised to them by the Debian social contract." Chris Lamb will present a talk at Software Freedom Kosovo on reproducible builds on Saturday 22nd October. Documentation update After discussions with HW42, Steven Chamberlain, Vagrant Cascadian, Daniel Shahaf, Christopher Berg, Daniel Kahn Gillmor and others, Ximin Luo has started writing up more concrete and detailed design plans for setting SOURCE_ROOT_DIR for reproducible debugging symbols, buildinfo security semantics and buildinfo security infrastructure. Toolchain development and fixes Dmitry Shachnev noted that our patch for #831779 has been temporarily rejected by docutils upstream; we are trying to persuade them again. Tony Mancill uploaded javatools/0.59 to unstable containing original patch by Chris Lamb. This fixed an issue where documentation Recommends: substvars would not be reproducible. Ximin Luo filed bug 77985 to GCC as a pre-requisite for future patches to make debugging symbols reproducible. Packages reviewed and fixed, and bugs filed The following updated packages have become reproducible - in our current test setup - after being fixed: cobbler/2.6.6+dfsg1-13 by Thomas Goirand, original patch by Chris Lamb. collectd/5.6.1-1 by Marc Fournier. fonts-tiresias/0.1-3 by Gürkan Myczko, original patch by Chris Lamb. fntsample/4.0-2 by Євгеній Мещеряков, original patch by Chris Lamb. fpga-icestorm/0~20160913git266e758-2 by Ruben Undheim, original patch by Chris Lamb. frog/0.13.5-1 by Maarten van Gompel, original patch by Chris Lamb. lambda-align/1.0.0-2 by Sascha Steinbiss, original patch by Chris Lamb. pleiades/1.7.0-2 by Hideki Yamane, original patch by Chris Lamb. sweethome3d/5.2+dfsg-1 by Markus Koschany, original fix by Gabriele Giacone. trac-subtickets/0.2.0-2 by W. Martin Borgert. The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.) aodh/3.0.0-2 by Thomas Goirand. eog-plugins/3.16.5-1 by Michael Biebl. flam3/3.0.1-5 by Daniele Adriana Goulart Lopes. hyphy/2.2.7+dfsg-1 by Andreas Tille. libbson/1.4.1-1 by A. Jesse Jiryu Davis. libmongoc/1.4.1-1 by A. Jesse Jiryu Davis. lxc/1:2.0.5-1 by Evgeni Golov. spice-gtk/0.33-1 by Liang Guo. spice-vdagent/0.17.0-1 by Liang Guo. tnef/1.4.12-1 by Kevin Coyner. Some uploads have addressed some reproducibility issues, but not[...]

Enrico Zini: debtags and aptitude forget-new

Tue, 18 Oct 2016 08:25:55 +0000

I like to regularly go through the new packages section in aptitude to see what interesting new packages entered testing, but recently that joyful moment got less joyful for me because of a barrage of obscurely named packages.

I have just realised that aptitude forget-new supports search patterns, and that brought back the joy.

I put this in a script that I run before looking for new packages in aptitude:

aptitude forget-new '?tag(field::biology)
                   | ?tag(devel::lang:ruby)
                   | ?tag(devel::lang:perl)
                   | ?tag(role::shared-lib)
                   | ?tag(suite::openstack)
                   | ?tag(implemented-in::php)
                   | ~n^node-'

The actual content of the search pattern is purely a matter of taste.

I'm happy to see how debtags becomes quite useful here, to keep my own user experience manageable as the size of Debian keeps growing.

Update: pabs suggested to use apt post-invoke hooks. For example:

        $ cat /etc/apt/apt.conf.d/99forget-new
        APT::Update::Post-Invoke { "aptitude forget-new '~sdebug'"; };

MJ Ray: Rinse and repeat

Tue, 18 Oct 2016 04:28:23 +0000


Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I’m still reading some of the planets where this blog post should appear and commenting on some, so I’ve not felt completely cut off, but I am surprised how many people don’t allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like).

The main motive for this post is to test some minor upgrades, though. Hi everyone. How’s it going with you? I’ll probably keep posting short updates in the future.

Go in peace to love and serve the web.

Dirk Eddelbuettel: gettz 0.0.2

Tue, 18 Oct 2016 02:16:00 +0000


Release 0.0.2 of gettz is now on CRAN.

gettz provides a possible fallback in situations where Sys.timezone() fails to determine the system timezone. That can happen when e.g. the file /etc/localtime somehow is not a link into the corresponding file with zoneinfo data in, say, /usr/share/zoneinfo.
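
On a healthy Linux system that link looks something like the output below (the zone shown is of course just an example); when /etc/localtime is a plain copied file instead, Sys.timezone() has nothing to resolve, which is exactly the situation gettz is meant to help with.

  $ readlink -f /etc/localtime
  /usr/share/zoneinfo/Europe/Berlin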

Windows is now no longer excluded, though it doesn't do anything useful yet. The main use of the package is still for Linux.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: pgpcontrol 2.5

Mon, 17 Oct 2016 23:36:00 +0000

pgpcontrol is the collection of the original signing and verification scripts that David Lawrence wrote (in Perl) for verification of Usenet control messages. I took over maintenance of it, with a few other things, but haven't really done much with it. It would benefit a lot from an overhaul of both the documentation and the code, and turning it into a more normal Perl module and supporting scripts.

This release is none of those things. It's just pure housekeeping, picking up changes made by other people (mostly Julien ÉLIE) to the copies of the scripts in INN and making a few minor URL tweaks. But I figured I may as well, rather than distribute old versions of the scripts.

You can tell how little I've done with this stuff by noting that they don't even have a distribution page on my web site. The canonical distribution site is, although I'm not sure if that site will pick up the new release. (This relies on a chain of rsync commands that have been moved multiple times since the last time I pushed the release button, and I suspect that has broken.) I'll ping someone about possibly fixing that; in the meantime, you can find the files on

Arturo Borrero González: nftables in Debian Stretch

Mon, 17 Oct 2016 13:30:00 +0000

The next Debian stable release is codenamed Stretch, which I would expect to be released in less than a year. The Netfilter Project has been developing nftables for years now, and the status of the framework has improved to a good point: it's ready for wide usage and adoption, even in high-demand production environments.

The last released version of nft was 0.6, and the Debian package was updated just a day after Netfilter announced it. With the 0.6 version the software framework reached a good state of maturity, and I myself encourage users to migrate from iptables to nftables.

In case you don't know about nftables yet, here is a quick reference:

  • it's the tool/framework meant to replace iptables (also ip6tables, arptables and ebtables)
  • it integrates advanced structures which allow you to arrange your ruleset for optimal performance
  • the whole system is more configurable than iptables
  • the syntax is much better than iptables
  • several actions in a single rule
  • simplified IPv4/IPv6 dual stack
  • fewer kernel updates required
  • great support for incremental, dynamic and atomic ruleset updates

To run nftables in Debian Stretch you need several components:

  • nft: the command line interface
  • libnftnl: the nftables-netlink library
  • linux kernel: at least 4.7 is recommended

A simple aptitude run will put your system ready to go, out of the box, with nftables:

        root@debian:~# aptitude install nftables

Once installed, you can start using the nft command:

        root@debian:~# nft list ruleset

A good starting point is to copy a simple workstation firewall configuration:

        root@debian:~# cp /usr/share/doc/nftables/examples/syntax/workstation /etc/nftables.conf

And load it:

        root@debian:~# nft -f /etc/nftables.conf

Your nftables ruleset is now firewalling your network:

        root@debian:~# nft list ruleset
        table inet filter {
                chain input {
                        type filter hook input priority 0;
                        iif lo accept
                        ct state established,related accept
                        ip6 nexthdr icmpv6 icmpv6 type { nd-neighbor-solicit, nd-router-advert, nd-neighbor-advert } accept
                        counter drop
                }
        }

Several examples can be found at /usr/share/doc/nftables/examples/.

A simple systemd service is included to load your ruleset at boot time; it is disabled by default.

If you are running Debian Jessie and want to give it a try, you can use nftables from jessie-backports.

If you want to migrate your ruleset from iptables to nftables, good news: there are some tools in place to help with the task of translating from iptables to nftables, but that doesn't belong in this post :-)

The nano editor includes nft syntax highlighting.

What are you waiting for to use nftables?
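Since incremental and atomic ruleset updates are one of the selling points above, here is a small illustrative sketch (the rule, table name and handle number are made up, not taken from the post) of changing a live ruleset without reloading it:

        # append a rule to the running ruleset without touching anything else
        root@debian:~# nft add rule inet filter input tcp dport 22 accept
        # list rules with their handles, then delete a single rule atomically
        root@debian:~# nft -a list ruleset
        root@debian:~# nft delete rule inet filter input handle 7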

Thomas Lange: FAI 5.2 is going to the cloud

Mon, 17 Oct 2016 11:51:21 +0000


The newest version of FAI, the Fully Automatic Installation tool set, now supports creating disk images for virtual machines or for your cloud environment.

The new command fai-diskimage uses the normal FAI process for building disk images of different formats. An image with a small set of packages can be created in less than 50 seconds, a Debian XFCE desktop in nearly two minutes, and a complete Ubuntu 16.04 desktop image in four minutes.
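For illustration only (the hostname, image size, class list and image name below are assumptions, not taken from the announcement), a fai-diskimage invocation could look roughly like this:

        # build a raw disk image for host "cloudhost" from a set of FAI classes
        $ sudo fai-diskimage -v -u cloudhost -S 2G -c DEBIAN,FAIBASE,CLOUD cloudhost.raw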

New FAI installation images for CD and USB stick are also available.

Update: Add link to announcement

FAI cloud

Jaldhar Vyas: Something Else Will Be Posted Soon Also.

Mon, 17 Oct 2016 06:07:58 +0000


Yikes, today was Sharad Purnima, which means there are about two weeks to go before Diwali and I haven't written anything here all year.

OK new challenge: write 7 substantive blog posts before Diwali. Can I manage to do it? Let's see...

Russell Coker: Improving Memory

Mon, 17 Oct 2016 04:20:27 +0000

I’ve just attended a lecture about improving memory, mostly about mnemonic techniques. I’m not against learning techniques to improve memory and I think it’s good to teach kids a variety of things many of which won’t be needed when they are younger as you never know which kids will need various skills. But I disagree with the assertion that we are losing valuable skills due to “digital amnesia”. Nowadays we have programs to check spelling so we can avoid the effort of remembering to spell difficult words like mnemonic, calendar apps on our phones that link to addresses and phone numbers, and the ability to Google the world’s knowledge from the bathroom. So the question is, what do we need to remember?

For remembering phone numbers it seems that all we need is to remember numbers that we might call in the event of a mobile phone being lost or running out of battery charge. That would be a close friend or relative and maybe a taxi company (and 13CABS isn’t difficult to remember). Remembering addresses (street numbers etc) doesn’t seem very useful in any situation. Remembering the way to get to a place is useful and it seems to me that the way the navigation programs operate works against this. To remember a route you would want to travel the same way on multiple occasions and use a relatively simple route. The way that Google maps tends to give the more confusing routes (IE routes varying by the day and routes which take all shortcuts) works against this.

I think that spending time improving memory skills is useful, but it will either take time away from learning other skills that are more useful to most people nowadays or take time away from leisure activities. If improving memory skills is fun for you then it’s probably better than most hobbies (it’s cheap and provides some minor benefits in life).

When I was in primary school it was considered important to make kids memorise their “times tables”. I’m sure that memorising the multiplication of all numbers less than 13 is useful to some people, but I never felt a need to do it. When I was young I could multiply any pair of 2 digit numbers as quickly as most kids could remember the result. The big difference was that most kids needed a calculator to multiply any number by 13 which is a significant disadvantage.

What We Must Memorise

Nowadays the biggest memory issue is with passwords (the Correct Horse Battery Staple XKCD comic is worth reading [1]). Teaching mnemonic techniques for the purpose of memorising passwords would probably be a good idea – and would probably get more interest from the audience.

One interesting corner-case of passwords is ATM PIN numbers. The Wikipedia page about PIN numbers states that 4-12 digits can be used for PINs [2]. The 4 digit PIN was initially cho[...]

Thomas Goirand: Released OpenStack Newton, Moving OpenStack packages to upstream Gerrit CI/CD

Sun, 16 Oct 2016 21:28:57 +0000

OpenStack Newton is released, and uploaded to Sid

OpenStack Newton was released on Thursday the 6th of October. I was able to upload nearly all of it before the week-end, though there were still a few hiccups, as I forgot to upload python-fixtures 3.0.0 to unstable, and only realized it thanks to some bug reports. As this is a build time dependency, it didn’t disrupt Sid users too much, but 38 packages wouldn’t build without it. Thanks to Santiago Vila for pointing at the issue here.

As of writing, a lot of the Newton packages haven’t migrated to Testing yet. The migration has been happening in a very messy way. I’d love to improve this process, but I’m not sure how, short of filing RC bugs against 250 packages (which would be painful to do) so they would migrate at once. Suggestions welcome.

Bye bye Jenkins

For a few years, I was using Jenkins, together with a post-receive hook, to build Debian Stable backports of OpenStack packages. Though nearly a year and a half ago, we started that project to build the packages within the OpenStack infrastructure, and use the CI/CD like OpenStack upstream was doing. This is done, and Jenkins is gone, as of OpenStack Newton.

Current status

As of August, almost all of the packages’ Git repositories were uploaded to OpenStack Gerrit, and the build now happens in the OpenStack infrastructure. We’ve been able to build all of the OpenStack Newton Debian packages using this system. This non-official jessie backports repository has also been validated using Tempest.

Goodies from Gerrit and upstream CI/CD

It is very nice to have it built this way, so we will be able to maintain a full CI/CD in upstream infrastructure using Newton for the life of Stretch, which means we will have the tools to test security patches virtually forever. Another thing is that now, anyone can propose packaging patches without the need for an Alioth account, by sending a patch for review through Gerrit. It is our hope that this will increase the likelihood of external contributions, for example from 3rd party plugin vendors (ie: networking driver vendors, for example), or upstream contributors themselves. They are already used to Gerrit, and they all expected the packaging to work this way. They are all very much welcome.

The upstream infra: nodepool, zuul and friends

The OpenStack infrastructure has been described already in, by Ian Wienand. So I won’t describe it again, he did a better job than I ever would.

How it works

All source packages are stored in Gerrit with the “deb-” prefix. This is in order to avoid conflict with upstream code, and to easily locate packaging repositories. For example, you’ll find Nova packaging under Two Debian reposito[...]

Steinar H. Gunderson: opensourced

Sun, 16 Oct 2016 13:43:00 +0000

It's been said that backup is a bit like flossing; everybody knows you should do it, but nobody does it. If you want to start flossing, an immediate question is what kind of dental floss to get—and conversely, for backup, which backup software do you want to rely on? I had some criteria:

  • Automated full-system backup, not just user files.
  • Self-controlled, not cloud (the cloud economics don't really make sense for 10 TB+ of backup storage, especially when you factor in restore cost).
  • Does not require one file on the backup server for each file on the backed-up server (that makes for infinitely long fscks, greatly increased risk of file system corruption, frequently gives performance problems on the backup host, and makes inter-file compression impossible).
  • Not written in Python (makes for glacial speeds).
  • Pull backups, not push (so a backed-up server cannot delete its own backups in the event of a break-in).
  • Does not require any special preparation or lots of installation on each server.
  • Ideally, restore using standard UNIX tools only.

I looked at basically everything that existed in Debian and then some, and all of them failed. But Samfundet had its own script that's basically just a simple wrapper around tar and ssh, which has worked for 15+ years without a hitch (including several restores), so why not use it? All the authors agreed to GPLv2+ licensing, so now it's time for to meet the world.

It does about the simplest thing you can imagine: ssh to the server and use GNU tar to tar down every filesystem that has the “dump” bit set in fstab. Every 30 days, it does a full backup; otherwise, it does an incremental backup using GNU tar's incremental mode (which makes sure you will also get information about file deletes). It doesn't do inter-file diffs (so if you have huge files that change only a little bit every day, you'll get blowup), and you can't do single-file restores without basically scanning through all the files; tar isn't random-access.

So it doesn't do much fancy, but it works, and it sends you a nice little email every day so you can know your backup went well. (There's also a less frequently used mode where the backed-up server encrypts the backup using GnuPG, so you don't even need to trust the backup server.) It really takes fifteen minutes to set up, so now there's no excuse. :-)

Oh, and the only good dental floss is this one. :-)

[...]
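The underlying idea (pull the data over ssh and let GNU tar's listed-incremental mode track what changed between runs) can be sketched roughly as follows; the hostname, paths and snapshot file are invented for illustration and this is not the actual script:

        # pulled from the backup host: full backup the first time, incremental afterwards
        $ ssh root@server "tar --create --one-file-system \
              --listed-incremental=/var/lib/backup/root.snar -f - /" \
              > /backup/server/root-$(date +%Y%m%d).tar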

Rémi Vanicat: Trying to install Debian on G752VM-GC006T

Sun, 16 Oct 2016 12:13:57 +0000

I'm trying to install Debian GNU/linux on my new ASUS G752VM-GC006T

So what I've discovered:

  • It's F2 to get into the BIOS, and in the last BIOS section you can boot directly from any device.
  • It boots from the netinst DVD.
  • netinst can't see the SSD disk.
  • The trackpad doesn't work.
  • After a successful install, booting the fresh install failed. I had to use the recovery tools to install the nvidia non-free package to get Debian to boot successfully.
  • I mostly use sid on my computer (mostly to test problems and report them). That was a bad idea: Debian stopped finding its own disk. Adding pci=nomsi to the kernel options fixes this (see the snippet just after this list).
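
To keep the pci=nomsi workaround across reboots, the usual Debian approach (a sketch assuming the default GRUB setup, not from the original post) is:

        # in /etc/default/grub: keep your existing options and append pci=nomsi
        GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nomsi"
        # then regenerate the GRUB configuration and reboot
        $ sudo update-grub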

So I have a working Linux. My problems are:

  • I still can't see the SSD disk from Linux.
  • I cannot easily dual-boot:
    • Linux can't see the SSD where Windows is,
    • the Windows boot loader doesn't want to start Debian, because it doesn't want to,
    • at least the BIOS can boot both of them, but there is no "pretty" menu.
  • The trackpad is not working.
  • 0.5 TB feels small today...

And the question is: where to report those bugs.

First edit: rEFInd seems to find Windows and Debian, thanks to blackcat77.

Mirco Bauer: Debian 8 on Dell XPS 15

Sun, 16 Oct 2016 03:46:18 +0000

It was time for a new work laptop so I got a Dell XPS 15 9950. I wasn't planning to write a blog post about how to install Debian 8 "Jessie" on the laptop, but since it wasn't just install and use, I will share what is needed to get the wifi and graphics card to work.

First download the DVD-1 AMD64 image of Debian 8 from your favorite download mirror. The closest one for me is the Hong Kong mirror. You do not need to download the other DVDs; just the first one is sufficient. The netinstaller and CD images will not provide a good experience since they need a working network/internet connection. With the DVD image you can do a full default desktop install and most things will just work out-of-the-box.

Now you can do a regular install; no special procedure or anything will be needed. Depending on your desktop selection it will boot right into lovely GNOME3.

You will quickly notice that the wifi is not working out-of-the-box though. It is a Qualcomm Atheros QCA6174 and the Linux kernel version 3.16 shipped with Debian 8 does not support that wifi card. This card needs the ath10k_pci kernel module which is included in a newer Linux kernel package from the Debian backports archive. If you don't have the Dell docking station, as I don't, then there is no wired ethernet that you can use for getting a temporary Internet connection. So use a different computer with Internet access to download the following packages from the Debian backports archive manually and put them on a USB disk:

  • linux-base
  • linux-image-4.7.0
  • firmware-atheros
  • firmware-misc-nonfree
  • xserver-xorg-video-intel

After that connect the USB disk to the new Dell laptop and mount the disk using the GNOME3 file browser (nautilus). It will mount the USB disk to /media/$your_username/$volume_name. Become root using sudo or su. Then install all downloaded packages from the USB disk like this:

        cd /media/$your_username/$volume_name
        dpkg -i linux-base_*.deb
        dpkg -i linux-image-4.7.0-0.bpo.1-amd64_*.deb
        dpkg -i firmware-atheros_*.deb
        dpkg -i firmware-misc-nonfree_*.deb
        dpkg -i xserver-xorg-video-intel_*.deb

That's it. If dpkg finished without an error message then you can reboot, and your wifi and graphics card should just work! After reboot you can verify the wifi card is recognized by running "/sbin/iwconfig" and seeing if wlan0 shows up.

Have fun with your Dell XPS and Debian!

PS: if this does not work for you, leave a comment or write to meebey at meebey . net [...]
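If the wifi still doesn't show up after the reboot, a quick sanity check (not from the original post; output omitted) is to confirm that the backports kernel is actually running and that the ath10k driver got loaded:

        $ uname -r
        $ lsmod | grep ath10k
        $ dmesg | grep -i ath10k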

Thorsten Alteholz: DOPOM: libmatthew-java – Unix socket API and bindings for Java

Sat, 15 Oct 2016 21:01:07 +0000

While looking at the “action needed” paragraph of one of my packages, I saw that a dependency was orphaned and needed a new maintainer. So I decided to restart DOPOM (Debian Orphaned Package Of the Month), which I started in 2012 with ent as the first package.

This month I adopted libmatthew-java. Sure it was not a big deal as the QA-team already did a good job and kept the package in shape. But now there is one burden lifted from their shoulders.

According to the Work-Needing and Prospective Packages page, 956 packages are orphaned at the moment. If every Debian contributor grabs one of them, we could unwind the QA-team (no, just kidding). So, similar to NEW which was down to 0 this year, can we get rid of the WNPP as well? At least for a short time?

Daniel Silverstone: Gitano - Approaching Release - Access Control Changes

Sat, 15 Oct 2016 03:11:04 +0000

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects. In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines

With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:

        define is_steve user exact steve
        allow "Steve can read my repo" is_steve op_read

And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet another set of defines:

        define is_jeff user exact jeff
        define is_steve user exact steve
        define readers anyof is_jeff is_steve
        allow "Steve and Jeff can read my repo" readers op_read

This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:

        allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]

Of course, this is generally neater for simpler rules; if you wanted to add another user then it might make sense to go for:

        define readers anyof [user exact jeff] [user exact steve] [user exact susan]
        allow "My friends can read my repo" op_read readers

The nice thing about this sub-define syntax is that it's basically usable anywhere you'd use the name of a previously defined thing, they're compiled in much the same way, and Richard worked hard to get good error messages out of them just in case.

No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, and so the sub-define approach is much much better. If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:

  • Upgrade your version of lace to 1.3.
  • Replace any auto_user_FOO with [user exact FOO] and similarly any auto_group_BAR with [group exact BAR].
  • You can now upgrade Gitano safely.
[...]
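As a concrete (made-up) before/after illustration of that upgrade path, using only constructs described above, a rule that relied on the old automatic defines:

        allow "Steve can read my repo" auto_user_steve op_read

could be rewritten with a sub-define as:

        allow "Steve can read my repo" op_read [user exact steve]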

Michal Čihař: New free software projects on Hosted Weblate

Fri, 14 Oct 2016 16:00:17 +0000


Hosted Weblate also provides free hosting for free software projects. I'm quite slow in processing the hosting requests, but when I do, I process them in a batch and add several projects at once.

This time, the newly hosted projects include:

Filed under: Debian English SUSE Weblate | 0 comments

Mike Gabriel: [Arctica Project] Release of nx-libs (version

Fri, 14 Oct 2016 15:47:05 +0000

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one. NX (v3) has been originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Thursday, Oct 13th, version of nx-libs has been released [1]. This release brings a major backport of libNX_X11 to the status of libX11 1.3.4 (as provided by On top of that, all CVE fixes provided for libX11 by the Debian X11 Strike Force and the Debian LTS team got cherry-picked to libNX_X11, too. This big chunk of work has been performed by Ulrich Sibiller, and there is more to come. We currently have a pull request pending review that backports more commits from libX11 (bumping the status of libNX_X11 to the state of libX11 1.6.4, which is the current HEAD on the Git site). Another big clean-up performed by Ulrich is the split-up of XKB code which got symlinked between libNX_X11 and nx-X11/programs/Xserver. This brings in some code duplication but allows maintaining the nxagent Xserver code and the libNX_X11 code separately. In the upstream ChangeLog you will find some more items around code clean-ups and .deb packaging; see the diff [2] on the ChangeLog file for details. So for this release, a very special and massive thanks goes to Ulrich Sibiller!!! Well done!!!

Change Log

A list of recent changes (since can be obtained from here.

Known Issues

This version of nx-libs is known to segfault when LDFLAGS / CFLAGS have the -pie / -fPIE hardening flags set. This issue is currently under investigation.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

        Debian: deb {jessie,stretch,sid} main
        Ubuntu: deb {trusty,xenial} main

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

        wget -qO - | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component). Ubuntu [...]

Antoine Beaupré: Managing good bug reports

Fri, 14 Oct 2016 15:11:03 +0000

Bug reporting is an art form that is too often neglected in software projects. Bug reports allow contributors to participate without deep technical knowledge and at the same time provide a crucial space for developers to be made aware of issues with their software that they could not have foreseen or found themselves, for lack of resources, variety or imagination.

Prior art

Unfortunately, there are rarely good guidelines for submitting bug reports. Historically, people have pointed towards How to report bugs effectively or How to ask questions the smart way. While those guides can be useful for motivated people and may seem attractive references for project managers, they suffer from serious issues:

  • they are written by technical people, for non-technical people
  • as a result, they have a deeply condescending attitude such as calling people "stupid" or various animal names like "mongoose"
  • they are also very technical themselves: one starts with a copyright notice and a changelog, the other uses magic words like "Core dumps" and $Id$
  • they are too long: sgtatham's is about 3600 words long, esr's is even longer at about 11800 words. Those texts will take about 20 to 60 minutes to read by an average reader, according to research.

Individual projects have their own guides as well. Linux has the REPORTING_BUGS file, a much shorter 1200 words that can be read in under 5 minutes, provided that you can understand the topic at hand. Interestingly, that guide refers to both esr's and sgtatham's guidelines, which means, in the degenerate case where the user hasn't had the "privilege" of reading esr's prose already, they will have an extra hour and a half of reading to do to have honestly followed the guidelines before reporting the bug.

I often find good documentation in the Tails project. Their bug reporting guidelines are easily accessible and quick to read, although they still might be too technical. It could be argued that you need to get technical at some point to get that information out, of course. In the Monkeysign project, I have started a bug reporting guide that doesn't yet address all those issues. I am considering writing a new guide, but I figured I would look at other people's work and get feedback before writing my own standard.

What's the point?

Why have those documents been written? Are people really expected to read them before seeking help? It seems to me unlikely that someone would:

  • be motivated enough to do something about a broken part of their computer
  • figure out they can do something about it
  • read [...]

Jonathan Dowland: Hi-Fi Furniture

Fri, 14 Oct 2016 14:23:48 +0000

For the last four years or so, I've had my Hi-Fi and the vast majority of my vinyl collection stored in a self-contained, mildly-customized Ikea unit. Since moving house this has been in my dining room—which we have always referred to as the "play room", since we have a second dining room in which we actually dine. The intention for the play room was for it to be the room within which all our future children would have their toys kept, in an attempt to keep the living room from being overrun with plastic. The time has thus come for my Hi-Fi to come out of there, so we've moved it to our living room. Unfortunately, there's not enough room in the living room for the Ikea unit: I need something narrower for the space available.

In the spirit of my original hack, I started looking at what others might have achieved with Ikea components. There are some great examples of open-style units built out of the (extremely cheap) Lack coffee tables, such as this ikeahackers article, but I'd prefer something more enclosed. One problem I've had with the Expedit unit was my cat trying to scratch the records. I ended up putting framed records at the front to cover the spines of the records within. If I were keeping the unit, I'd look at fitting hinges (another ikeahackers article).

Aside from hacked Ikea stuff, there are a few companies offering traditional enclosed Hi-Fi cabinets. I'm going to struggle to fit both the equipment and a subset of records into these, so I might have to look at storing them separately. In some ways that makes life easier: the records could go into a 1x4 Ikea KALLAX unit, leaving the amp and deck to home somewhere. Perhaps I could look at a bigger unit for under the TV. My parents have a nice Hi-Fi unit that pretends to be a chest of drawers. I'm fairly sure my Dad custom-built it, as it has a hinged top to provide access to the turntable and I haven't seen anything like that on the market.

That brings me onto thinking about other AV things I'd like to achieve in the living room. I've always been interested in exploring surround sound, but my initial attempt in my prior flat did not go well, either because the room was not terribly suited acoustically, or because the Pioneer unit I bought was rubbish, or both. It seems that there aren't really AV receivers which are designed to satisfy both people wanting to use them in a Hi-Fi and a home cinema setting. I could stick to stereo and run the TV into my existing (or a new) amp[...]

Daniel Silverstone: Gitano - Approaching Release - Changes

Fri, 14 Oct 2016 13:30:51 +0000

Continuing on from the previous article, here is a (probably incomplete) list of the critical changes to Gitano which have been, or will be, worked on during the run toward a 1.0 release. Each of these will have a blog posting to discuss what the changes mean for current and future users. Sometimes I'll aggregate postings, sometimes I won't.

The following are some highlights from the past little while of development which has been undertaken by Richard and myself. Each item is, I feel, important enough to warrant commentary, even for those who already use Gitano.

  • Lace now supports a sub-define syntax: [foo bar] which makes for simpler rulesets.
  • Gitano no longer creates auto_user_XXX and auto_group_XXX Lace predicates.
  • Gitano no longer supports "basic" simple matches of the form user foo but instead requires a match kind such as group prefix bar-.
  • Gitano is gaining i18n/l10n support; though it will not be complete for version 1.0, the basics will be in place.
  • Gitano is gaining a much larger integration test suite using yarn.
  • Deprecated commands have now been removed from Gitano (e.g. no more set-owner).
  • Gitano has gained PGP/GPG signature verification for commits and tags.

Any number of smaller things have been done which fall below some arbitrary barrier for telling you about. If you're aware of any of them and feel they are worthwhile telling the world about, then please prod me and I'll add an article to the series.

Finally it's worth noting that the effort to get all this into Debian Stretch proceeds apace. Of the eight packages needed, at the time of posting: one was already in and has been updated (luxio), three have been accepted into Debian already (supple, clod, lua-scrypt), two are in NEW (gall and lace), and that leaves the newest library (tongue) and then Gitano itself still to go. The Debian FTP team have been awesome in helping me with all this, so thanks go to them.

[...]

Michal Čihař: motranslator 2.0

Fri, 14 Oct 2016 04:00:16 +0000


Yesterday, motranslator 2.0 was released. As the version change suggests, there are some important changes under the hood.

Full list of changes:

  • Consistently use camelCase in the API
  • No longer relies on eval()
  • Depends on symfony/expression-language for calculations

As you can see, the SimpleMath library announced yesterday is not used in the end; I've moved to an existing library instead. Somehow I misunderstood the library's description and thought that it behaves like PHP, which would be a problem for us (or would bring the need to add parentheses around the ternary operator, as we did with eval()). But this is not the case: the ternary operator behaves sanely in ExpressionLanguage, so we're good to use it.

Anyway, if you were using MoTranslator, it might be a good idea to upgrade and check whether the API changes affect you.

Filed under: Debian English phpMyAdmin | 0 comments