Planet Debian

Uwe Kleine-König: Using the switch on Turris Omnia with Debian

Fri, 23 Mar 2018 21:29:00 +0000

After installing Debian on the Turris Omnia, a few more steps are needed to make use of the network switch.

The Armada 385 CPU provides three network interfaces. Two are connected to the switch (but only one of them is used to "talk" to the switch), and one is routed directly to the WAN port.

After booting you might have to issue the following commands to make the devices representing the five external ports of the switch appear and become functional:

# ip link set eth1 up
# modprobe mv88e6xxx

After that you can use the network devices lan0 to lan4 like normal network devices. To make them actually behave as you would expect from a network switch, you have to put them into a bridge. The driver then offloads forwarding between the ports to the switch hardware, so the CPU doesn't have to handle every single packet.
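For a quick manual test (a sketch of my own, not from the original post; not persistent across reboots, using the same interface and bridge names as the systemd-networkd setup below), the bridge can be assembled with iproute2:

# ip link add name brlan type bridge
# ip link set brlan up
# ip link set lan0 master brlan    # repeat for lan1 to lan4
# ip link set lan0 up              # repeat for lan1 to lan4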

To automate setup of the bridged ports I used systemd-networkd as follows:

# echo mv88e6xxx > /etc/modules-load.d/switch.conf
# printf '[Match]\nPath=platform-f1030000.ethernet\n[Link]\n#MACAddress=...\nName=eth1\n' > /etc/systemd/network/
# printf '[NetDev]\nName=brlan\nKind=bridge\n' > /etc/systemd/network/brlan.netdev
# printf '[Match]\nName=brlan\n\n[Network]\nLinkLocalAddressing=ipv6\n' > /etc/systemd/network/
# printf '[Match]\nName=lan[01234]\n\n[Network]\nBridge=brlan\nBindCarrier=eth1\n' > /etc/systemd/network/
# printf '[Match]\nName=eth1\n' > /etc/systemd/network/
# systemctl enable --now systemd-networkd.service
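The printf destinations above end at the /etc/systemd/network/ directory; the actual filenames are not shown. A plausible layout (the names here are assumptions; systemd only cares about the .link/.netdev/.network suffixes and lexical ordering) would be:

/etc/systemd/network/10-eth1.link       # [Match] Path=... / [Link] Name=eth1
/etc/systemd/network/brlan.netdev       # [NetDev] Name=brlan, Kind=bridge
/etc/systemd/network/20-brlan.network   # [Match] Name=brlan
/etc/systemd/network/30-lan.network     # [Match] Name=lan[01234]
/etc/systemd/network/40-eth1.network    # [Match] Name=eth1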

You also might want to mask NetworkManager and/or ifupdown so they don't interfere with the above setup. And obviously you might want to add some more options to configure the addresses used there; see the systemd.network(5) manual page for the available options.

Aigars Mahinovs: Automation of embedded development

Fri, 23 Mar 2018 13:35:10 +0000


I am wondering if there is a standard solution to a problem that I am facing. Say you are developing an embedded Debian Linux device. You want to have a "test farm": a bunch of copies of your target hardware running a lot of tests while development is ongoing. For this to work automatically, your automation setup needs a way to fully re-flash the device, even if the image previously flashed to it does not boot. How would that usually be achieved?

I'd imagine some sort of option in the initial bootloader that would look at some hardware switch (one that your test host could trip programmatically) and, if it is set, boot into a very minimal and very stable "emergency" Linux system. You could then ssh into that, mount the target partitions and rewrite their contents with the new image to be tested.
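As a rough sketch of that idea in U-Boot terms (everything here is made up for illustration: the GPIO number, the partition layout, and the assumption that ${fdt_addr_r} already holds a device tree; note also that U-Boot's gpio input reports the pin level through its exit status, so the polarity may need inverting):

=> setenv bootcmd 'if gpio input 42; then run emergencyboot; else run normalboot; fi'
=> setenv emergencyboot 'ext4load mmc 0:1 ${kernel_addr_r} /boot/zImage; bootz ${kernel_addr_r} - ${fdt_addr_r}'
=> setenv normalboot 'ext4load mmc 0:2 ${kernel_addr_r} /boot/zImage; bootz ${kernel_addr_r} - ${fdt_addr_r}'
=> saveenv

The test host would then drive the GPIO, power-cycle the board, ssh into the emergency system and write the new image onto the normal partitions.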

Are there ready-made solutions that do such a thing? Generically, or even just for some specific development boards? Do people solve this problem in a completely different way? I was unable to find any good info online.

Renata D'Avila: Pushing a commit to a different repo

Fri, 23 Mar 2018 03:49:00 +0000

View from the Barra-Galheta beach trail, in Florianopolis, Brazil

My Outreachy internship with Debian is over. I'm still going to write an article about it, to let everyone know what I worked on towards the end, but I simply haven't had the time yet to sit down and compile all the information. As you might or might not have noticed, right after my last Outreachy activity, I sort of took a week off at the beach. \o/

Mila, a cute stray dog that accompanied us during a whole trail

For the past weeks, I've also been involved in the organization of three events (one of them was a Debian Women meeting in Curitiba that took place two Saturdays ago, and another is Django Girls Porto Alegre, which starts tonight). Because of this last one, I was reviewing their Brazilian Portuguese tutorial and adding some small fixes to the language. After all, we are talking to women who read the tutorial during the workshop, so why should all the mentions of programmers and hackers use the male form in Portuguese? Women program, too!

When I was going to commit my fixes, though, I got an error:

remote: error: GH006: Protected branch update failed for refs/heads/master.
To <repository-url>
 ! [remote rejected] master -> master (protected branch hook declined)

Oops? Yup, as happens more often than not, I forgot to fork the repository before starting to change the files! I just did 'git clone' straight on Django Girls' tutorial repository. But, since I had already done all the steps towards the commit, what could I do to avoid losing the changes? Could I just push this commit to another repository of my own and try to open a Pull Request to DjangoGirls/tutorial?

Of course I had to go and search for that. Isn't that what all programmers do? Go and find someone else who already had the same problem and try to find a solution?

Quick guide to the solution I've found:

  • Fork the original repository to my collection of repos (on Github, just clicking 'Fork' will do).
  • Get the branch and the id of the commit that had been created. For instance, in this case: [master 4d314550] Small fixes for pt-br version. The branch is master and the id is 4d314550.
  • Use the URL for the new repository (your fork), the branch and the commit id for a new git push command, like this:

git push URL_FOR_THE_NEW_REPO commit_id:branch

Example with my repo:

git push <my-fork-url> 4d314550:master

And this was yet another article for future reference. [...]

Dirk Eddelbuettel: RcppCNPy 0.2.9

Fri, 23 Mar 2018 00:07:00 +0000


Another minor maintenance release of the RcppCNPy package arrived on CRAN this evening.

RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers.

There is only one small code change: a path is now checked before an attempt to save. Thanks to Wush for the suggestion. I also added a short new vignette showing how reticulate can be used for NumPy data.

Changes in version 0.2.9 (2018-03-22)

  • The npySave function has a new option to check the path in the given filename.

  • A new vignette was added showing how the reticulate package can be used instead.

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Christoph Egger: iCTF 2018 Spiderman writeup

Thu, 22 Mar 2018 12:06:34 +0000

This is FAUST playing CTF again, this time iCTF. We somehow managed to score an amazing 5th place.

Team: FAUST
Crew: izibi, siccegge
Files: spiderman

spider is a patched python interpreter. man is a pyc but with different magic values (explaining the patched python interpreter for now). Plain decompiling fails due to some (dead) cruft code at the beginning of all methods. It can be patched away, or you do more manual disassembling.

Observations:

  • it does something with RSA
  • the public exponent is slightly uncommon (\(2^{16}+3\) instead of \(2^{16}+1\)) but that should be fine
  • it uses openssl prime -generate to generate the RSA key. It doesn't use -safe, but that should also be fine for RSA purposes
  • you need to do a textbook RSA signature on a challenge to get the flag

Fine so far, nothing obvious to break. When interacting with the service, you will likely notice the Almost Equal function in the Fun menu. According to the bytecode, it takes two integers \(a\) and \(b\) and outputs whether \(a = b \pm 1\), but looking at the gameserver traffic, these two numbers are also considered to be almost equal:

$$ a = 33086666666199589932529891 \\ b = 35657862677651939357901381 $$

So something's strange here. Starting the spider binary gives a python shell where you can play around with these numbers, and you will find that a == b - 1 will actually result in True. So there is something wrong with the == operator in the shipped python interpreter; however, it doesn't seem to be any sort of overflow. The bit representation also doesn't give anything obvious. Lucky guess: why the strange public exponent? Let's try the usual here. And indeed \(a = b - 1 \pmod{2^{16}+1}\). Given this is also used to compare the signature on the challenge, this becomes easily bruteforceable.

#!/usr/bin/env python3
import nclib, sys
from random import getrandbits

e = 2**16+3  # exponent
w = 2**16+1  # wtf

nc = nclib.Netcat((sys.argv[1], 20005), udp=False, verbose=True)
nc.recv_until(b'4) Exit\n')
nc.send(b'3\n')  # Read
nc.recv_until(b'What do you want to read?\n')
nc.send(sys.argv[2].encode() + b'\n')
nc.recv_until(b'solve this:\n')
modulus, challenge = map(int, nc.recv_until(b'\n').decode().split()[:2])
challenge %= w

# Starting at 0 would also work, but using large random numbers makes
# it less obvious that we only bruteforce a small set of numbers
answer = getrandbits(2000)
while (pow(answer, e, modulus)) % w != challenge:
    answer += 1

nc.send(str(answer).encode() + b'\n')
flag = nc.recv_until(b'\n')
nc.recv_until(b'4) Exit\n')
nc.send(b'4\n')

[...]
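Why the brute force terminates quickly (my summary, not part of the original write-up): the service effectively only checks the signature modulo \(w = 2^{16}+1\), i.e. it accepts any \(s\) with

$$ (s^e \bmod n) \equiv c \pmod{w} $$

Treating \(s^e \bmod n\) as roughly uniform modulo \(w\), each candidate matches with probability \(1/w\), so on average about \(2^{16}+1 \approx 65537\) increments of answer are needed, which takes only moments.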

Petter Reinholdtsen: Self-appointed leaders of the Free World

Thu, 22 Mar 2018 10:00:00 +0000

The leaders of the world have started to congratulate the re-elected Russian head of state, and this causes some criticism. I am, though, a little fascinated by a comment from US senator John McCain, cited by The Hill and others:

"An American president does not lead the Free World by congratulating dictators on winning sham elections."

While I totally agree with the senator here, the way the quote is phrased makes me suspect that he is unaware of the simple fact that the USA has not led the Free World since at least before its government kidnapped a completely innocent Canadian citizen in transit on his way home to Canada via John F. Kennedy International Airport in September 2002 and sent him to be tortured in Syria for a year.

The USA might be running ahead, but the path they are taking is not the one taken by any Free World.

Vincent Fourmond: Release 2.2 of QSoas

Wed, 21 Mar 2018 20:46:27 +0000

The new release of QSoas is finally ready! It brings in a lot of new features and improvements, notably greatly improved memory use for massive multifits, a fit for linear (in)activation processes (the one we used in Fourmond et al, Nature Chemistry 2014), a new way to transform "numbers" like peak position or stats into new datasets, and even SVG output! Following popular demand, it also finally brings back the peak area output in the find-peaks command (and the other related commands)! You can browse the full list of changes there.

The new release can be downloaded from the downloads page.

Freely available binary images for QSoas 1.0

In addition to the new release, we are now releasing the binary images for MacOS and Windows for the release 1.0. They are also freely available for download from the downloads page.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. The current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

Julien Danjou: On blog migration

Wed, 21 Mar 2018 18:46:08 +0000


I started my first Web page in 1998, and one could say that it has evolved quite a bit in the meantime. From a Frontpage-designed Web site with frames, it evolved to plain HTML files. I started blogging in 2003, though the archives of this blog only go back to 2007. Truth is, many things I wrote in the first years were short (there was no Twitter) and not that relevant nowadays. Therefore, I never migrated them along the road of the many migrations that the site had.

The last time I switched this site's engine was in 2011, when I switched from Emacs Muse (and my custom muse-blog.el extension) to Hyde, a static Web site generator written in Python.

That taught me a few things.

First, you can't really know for sure which project will be a ghost in 5 years. I had no clue back then that Hyde's author would lose interest and struggle to pass the maintainership to someone else. The community was not big, but it existed. Betting on a horse is part skill and part chance. My skills were probably lower seven years ago, and I also may have had bad luck.

Secondly, maintaining a Web site is painful. I used to blog more regularly a few years ago, as the friction of using a dynamic blog engine was lower than spawning my deprecated static engine. Knowing that it takes 2 minutes to generate a static Web site really makes it difficult to compose and see the result at the same time without losing patience. It took me a few years to decide it was time to invest in the migration. I just jumped from Hyde to Ghost, hosted on their Pro engine, as I don't want to do any maintenance. Let's be honest, I have no will to inflict on myself the maintenance of a JavaScript blogging engine.


The positive side is that this is still Markdown-based, so the migration job was not so painful. Ghost offers a REST API which allows manipulating most of the content. It works fine, and I was able to leverage the Python ghost-client to write a tiny migration script to migrate every post.

I am looking forward to sharing most of the things that I work on during the next months. I really enjoyed reading the content of great hackers these last years, and I've learned a ton of things by reading the adventures of smarter engineers.

It might be my time to share.

Jeremy Bicha: gksu is dead. Long live PolicyKit

Wed, 21 Mar 2018 17:29:14 +0000

Today, gksu was removed from Debian unstable. It was already removed 2 months ago from Debian Testing (which will eventually be released as Debian 10 “Buster”).

It hasn’t been decided yet whether gksu will be removed from Ubuntu 18.04 LTS. There is one blocker bug there.
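For the common case of running a graphical tool as root, the PolicyKit-era replacement is pkexec. Properly packaged applications ship their own PolicyKit policy file, but as a quick ad-hoc stand-in for gksu (a sketch; gedit is just an arbitrary example, not something the post recommends):

pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY gedit /etc/fstab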


Petter Reinholdtsen: Facebooks ability to sell your personal information is the real Cambridge Analytica scandal

Wed, 21 Mar 2018 15:30:00 +0000

So, Cambridge Analytica is getting some well-deserved criticism for (mis)using information it got from Facebook about 50 million people, mostly in the USA. What I find a bit surprising is how little criticism Facebook is getting for handing the information over to Cambridge Analytica and others in the first place. And what about the people handing their private and personal information to Facebook? And last, but not least, what about the government offices who are handing information about the visitors of their web pages over to Facebook? No-one who has looked at the terms of use of Facebook should be surprised that information about people's interests, political views, personal lives and whereabouts would be sold by Facebook.

What I find to be the real scandal is the fact that Facebook is selling your personal information, not that one of the buyers used it in a way Facebook did not approve of when exposed. It is well known that Facebook is selling out their users' privacy, but it is a scandal nevertheless. Of course the information provided to them by Facebook would be misused by one of the parties given access to personal information about the millions of Facebook users. Collected information will be misused sooner or later. The only way to avoid such misuse is to not collect the information in the first place. If you do not want Facebook to hand out information about yourself for the use and misuse of its customers, do not give Facebook the information.

Personally, I would recommend completely removing your Facebook account, and taking back some control of your personal information. According to The Guardian, it is a bit hard to find out how to request account removal (and not just 'disabling'). You need to visit a specific Facebook page and click on 'let us know' on that page to get to the real account deletion screen. Perhaps something to consider? I would not trust the information to really be deleted (who knows, perhaps NSA, GCHQ and FRA already got a copy), but it might reduce the exposure a bit.

If you want to learn more about the capabilities of Cambridge Analytica, I recommend watching the video recording of the one-hour talk Paul-Olivier Dehaye gave to NUUG last April about data collection, psychometric profiling and their impact on politics.

And if you want to communicate with your friends and loved ones, use some end-to-end encrypted method like Signal or Ring, and stop sharing your private messages with strangers like Facebook and Google.

Iustin Pop: Hakyll basics

Wed, 21 Mar 2018 02:00:08 +0000

As part of my migration to Hakyll, I had to spend quite a bit of time understanding how it works before I became somewhat “at-home” with it. There are many posts that show “how to do x”, but not so many that explain its inner workings. Let me try to fix that: at its core, Hakyll is nothing else than a combination of make and m4 all in one. Simple, right? Let’s see :)

Note: in the following, basic proficiency with Haskell is assumed.

Monads and data types

Rules

The first area (the make equivalent), more precisely the Rules monad, concerns itself with the rules for mapping source files into output files, or creating output files from scratch. Key to this mapping is the concept of an Identifier, which is a name in an abstract namespace. Most of the time—e.g. for all the examples in the upstream Hakyll tutorial—this identifier actually maps to a real source file, but this is not required; you can create an identifier from any string value. The similarity, or relation, to file paths manifests in two ways:

  • the Identifier data type, although opaque, is internally implemented as a simple data type consisting of a file path and a “version”; the file path here points to the source file (if any), while the version is rather a variant of the item (not a numeric version!).
  • if the identifier has been included in a rule, it will have an output file (in the Compiler monad, via getRoute).

In effect, the Rules monad is all about taking source files (as identifiers) or creating them from scratch, and mapping them to output locations, while also declaring how to transform—or create—the contents of the source into the output (more on this later). Anyone can create an identifier value via fromFilePath, but “registering” them into the rules monad is done by one of:

  • matching input files, via match or matchMetadata, which take a Pattern that describes, well, matching source files (on the file-system):

    match :: Pattern -> Rules () -> Rules ()

  • creating an abstract identifier, via create:

    create :: [Identifier] -> Rules () -> Rules ()

Note: I’m probably misusing the term “registered” here. It’s not the specific value that is registered, but the identifier’s file path. Once this string value has been registered, one can use a different identifier value with a similar string (value) in various function calls.

Note: whether we use match or create doesn’t matter; only the actual values matter. So a match "" is equivalent to create [""]; match here takes the list of identifiers from the file-system, but does not associate them with the files themselves—it’s just a way to get the list of strings.

The second argument to the match/create calls is another rules monad, in which we’re processing the identifiers and telling how to transform them. This transformation has, as described, two aspects: how to map the file path to an output path, via the Routes data type, and how to compile the body, in the Compiler monad.

Name mapping

The name mapping starts with the route call, which lifts the routes into the rules monad. The routing has the usual expected functionality:

  • idRoute :: Routes, which maps 1:1 the input file name to the output one.
  • setExtension :: String -> Routes, which changes the extension of the filename, or sets it (if there wasn’t any).
  • constRoute :: FilePath -> Routes, which is special in that it will result in the same output filename, which is obviously useful only for rules matching a single identifier.

and a few more options, like building the route based on the identifier (customRoute), building it based on metadata associated to the identifier (metadataRoute), composing routes, match-and-replace, etc. All in all, routes offer all the needed functionality for mapping. Note that how we declare th[...]

Steinar H. Gunderson: Debian CEF packages

Tue, 20 Mar 2018 23:38:00 +0000


I've created some Debian CEF packages—CEF isn't the easiest thing to package (and it takes an hour to build even on my 20-core server, since it needs to build basically all of Chromium), but it's fairly rewarding to see everything fall into place. It should benefit not only Nageru, but also OBS and potentially CasparCG if anyone wants to package that.

It's not in the NEW queue because it depends on a patch to chromium that I hope the Chromium maintainers are brave enough to include. :-)

Reproducible builds folks: Reproducible Builds: Weekly report #151

Tue, 20 Mar 2018 19:59:19 +0000

Here's what happened in the Reproducible Builds effort between Sunday March 11 and Saturday March 17 2018:

  • Mattia Rizzolo updated our patched version of GCC to 7.3.0-11. This includes our patches to support BUILD_PATH_PREFIX_MAP.
  • Chris Lamb added support for comparing Gnumeric spreadsheets to our diffoscope tool (#893311) as well as updated the tests for openjdk-9 (#893183).
  • The arm64 network in our test build framework came back after some protracted downtime caused by a hardware issue.
  • 63 package reviews were added to our package notes, adding to our knowledge about identified issues. In addition, 43 entries were updated and 38 were removed. A timestamps_in_pdf_generated_by_inkscape toolchain issue was also added.
  • Isaac Z. Schlueter updated the npm package manager for JavaScript applications to use a fixed, deterministic modification time when creating archives. Due to a limitation in the ZIP archive format, they opted for 26th October 1985 instead.
  • The latest release of Tails (3.6) is no longer reproducible.
  • Heise reported on a trojaned version of a BitTorrent client that infected 400,000 computers (German). We believe such attacks would be detected quicker with a combination of free software and reproducible builds.
  • Mathieu Boespflug and Théophane Hufschmitt posted about using Bazel and Nix to achieve fully-reproducible builds.
  • Holger Levsen added yocto to our list of partner projects as they mention "Binary Reproducibility" as a feature.

Upcoming events

  • On Tuesday March 20th, Chris Lamb will speak about reproducible builds at the New York Linux Users Group.
  • Chris Lamb will also be presenting at LibrePlanet 2018 on reproducible builds on Saturday 24th March.

Patches submitted

Bernhard M. Wiedemann:

  • epic (sent upstream via email)
  • icinga2 (hostname)
  • kubernetes (parallelism, copyright year)
  • lilypond (use convert -strip)
  • marisa (drop date)
  • mono (date)
  • nautilus-dropbox (date/SOURCE_DATE_EPOCH)
  • pencil (date)
  • perl (Time::Local FTBFS in 2020)
  • python-bjoern (sort, readdir(2))
  • uisp (date/SOURCE_DATE_EPOCH)
  • uperf (merged, date/SOURCE_DATE_EPOCH)
  • wyrd (date/SOURCE_DATE_EPOCH)

Chris Lamb:

  • #893314 filed against inkscape (sent upstream).

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (168)
  • Emmanuel Bourg (2)
  • Pirate Praveen (1)
  • Tiago Stürmer Daitx (1)

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists. [...]

Neil McGovern: ED Update – week 11

Tue, 20 Mar 2018 15:45:32 +0000


It’s time (well, long overdue) for a quick update on the stuff I’ve been doing recently, and some things that are coming up. I’ve worked out a new way of doing these, so they should be more regular now, about every couple of weeks or so.

  • The annual report is moving ahead. I’ve moved up the timelines a bit here from previous years, so hopefully, the people who very kindly help author this can remember what we did in the 2016/17 financial year!
  • GUADEC/GNOME.Asia/LAS sponsorship – elements are coming together for the sponsorship brochure
    • Some sponsors are lined up, and these will be announced by the usual channels – thanks to everyone who supports the project and our conferences!
  • Shell Extensions – It’s been noticed that reviews of extensions have been taking quite some time recently, so I’ve stepped in to help. I still think that part of the process could be automated, but at the moment it’s quite manual. Help is very much appreciated!
  • The Code of Conduct consultation has been useful, and there’s been a couple of points raised where clarity could be added. I’m getting those drafted at the moment, and hope to get the board to approve this soon.
  • A couple of administrative bits:
    • We now have a filing system for paperwork in NextCloud
    • Reviewing accounts for the end of year accounts – it’s the end of the tax year, so our finances need to go to the IRS
    • Tracking of accounts receivable hasn’t been great in the past, probably not helped by GNUCash. I’m looking at alternatives at the moment.
  • Helping out with a couple of trademark issues that have come up
  • Regular working sessions for Flathub legal bits with our lawyers
  • I’ll be at LibrePlanet 2018 this weekend, and I’m giving a talk on Sunday. With the FSF, we’re hosting a SpinachCon on Friday. This aims to do some usability testing and finding those small things which annoy people.

Holger Levsen: 20180319-some-problems

Tue, 20 Mar 2018 12:26:27 +0000


Some problems with Codes of Conduct

shiromarieke took her time and wrote an IMHO very good text about problems with Codes of Conduct, which I wholeheartedly recommend reading.

I'll just quote two sentences which I think are essential:

Quote 1: "This is not a rant - it is a call for action: Let's gather, let's build the structures we need to make all people feel safe and respected in our communities." - in that sense, if you have feedback, please share it with shiromarieke as suggested by her. I'm very thankful she is taking the time to discuss her criticism and work on possible improvements! (I'll likely not discuss this online, though I'll be happy to discuss it offline.) I just wanted to share this link with the Debian communities, as I agree with many of shiromarieke's points and because I want to support efforts to improve this, as I believe those efforts will benefit everyone (as diversity and a welcoming atmosphere benefit everyone).

Quote 2: "Although I don't believe CoC are a good solution to help fix problems I have and will always do my best to respect existing CoC of workplaces, events or other groups I am involved with and I am thankful for your attempt to make our places and communities safer." - me too.

Daniel Pocock: Can a GSoC project beat Cambridge Analytica at their own game?

Tue, 20 Mar 2018 12:15:22 +0000

A few weeks ago, I proposed a GSoC project on the topic of Firefox and Thunderbird plugins for Free Software Habits. At first glance, this topic may seem innocent and mundane. After all, we all know what habits are, don't we? There are already plugins that help people avoid visiting Facebook too many times in one day; what difference will another one make?

Yet the success of companies like Facebook and those that prey on their users, like Cambridge Analytica (who are facing the prospect of a search warrant today), is down to habits: in other words, the things that users do over and over again without consciously thinking about it. That is exactly why this plugin is relevant. Many students have expressed interest and I'm keen to find out if any other people may want to act as co-mentors (more information or email me).

One Facebook whistleblower recently spoke about his abhorrence of the dopamine-driven feedback loops that keep users under a spell.

The game changer

Can we use the transparency of free software to help users re-wire those feedback loops for the benefit of themselves and society at large? In other words, instead of letting their minds be hacked by Facebook and Cambridge Analytica, can we give users the power to hack themselves?

In his book The Power of Habit, Charles Duhigg lays bare the psychology and neuroscience behind habits. While reading the book, I frequently came across concepts that appeared immediately relevant to the habits of software engineers and also the field of computer security, even though neither of these topics is discussed in the book. Most significantly, Duhigg finishes with an appendix on how to identify and re-wire your habits, and he has made it available online. In other words, a quickstart guide to hack yourself: could Duhigg's formula help the proposed plugin succeed where others have failed?

If you could change one habit, you could change your life

The book starts with examples of people who changed a single habit and completely reinvented themselves. For example, an overweight alcoholic and smoker who became a super-fit marathon runner. In each case, they show how the person changed a single keystone habit and everything else fell into place. Wouldn't you like to have that power in your own life? Wouldn't it be even better to share that opportunity with your friends and family?

One of the challenges we face in developing and promoting free software is that every day, with every new cloud service, the average person in the street, including our friends, families and co-workers, is ingesting habits carefully engineered for the benefit of somebody else. Do you feel that asking your friends and co-workers not to engage you in these services has become a game of whack-a-mole?

Providing a simple and concise solution, such as a plugin, can help people to find their keystone habits and then help them change them without stress or criticism. Many people want to do the right thing: if it can be made easier for them, with the right messages, at the right time, delivered in a positive manner, people feel good about taking back control. For example, if somebody has spent 15 minutes creating a Doodle poll and sending the link to 50 people, is there any easy way to communicate your concerns about Doodle? If a plugin could highlight an alternative before they invest their time in Doodle, won't they feel better?

If you would like to provide feedback or even help this project go ahead, you can subscribe here and post feedback to the thread or just email me. [...]

Shirish Agarwal: Debconf 2018, MATE 1.20, libqalculate transition etc.

Tue, 20 Mar 2018 05:20:09 +0000

Dear all,

First up is news on Debconf 2018, which will be held in Hsinchu, Taiwan. Apparently, the CFP or Call for Proposals was made just a few days ago and I probably forgot to share it. Registration has also been opened now. The only thing most people have to figure out is how to get a system-generated certificate; make sure it has an expiry date. I usually have a year; make it at least 6 months, as you would need to put up your proposal for contention and let the content team decide it on the proposal's merit. This may at some point move from alioth to salsa as the alioth service is going away. The best advice I can give is to put your proposal in and keep reworking/polishing it till the end date for applications is near. At the same time, do not over-commit yourself.

From a very Indian perspective, and as somebody who has been to one debconf, you can think of the debconf as a kind of ‘Kumbh’ Mela or gathering, as you will. You can definitely network with all the topics and people you care for, but the most rewarding are those talks which were totally unplanned for. Also it does get crazy sometimes, so it’s nice if you are able to have some sane time for yourself, even if it’s just a 5-10 minute walk.

On the budgeting side of things, things have been going well but could be better. The team has managed to raise probably a bit more than half the target. See the list of the sponsors of Debconf. With so many companies using the products the Debian Developers work hard at maintaining, it would be in the companies’ self-enlightened interest to keep the pot going. There are high hopes that it will be a healthy turnout and will influence hardware, software, and information technology policymakers to have a more open and secure society where people are just not data.

In other news, I’m excited to see MATE 1.20 which is now in testing. I asked people from the mate-team last month for the new packages, and came to know of the gtk3+ port, which was unfortunately postponed to 3.20.1, which is also complete and might be in a little later. I love MATE quite a bit for the functionality and yet low memory usage it provides. I tried to push to have a mate-desktop install CD but was consequently denied. While he didn’t elaborate the reasons, I can hypothesize some of the reasons that might be an influence – a. any -desktop CD would not be for a single architecture but all of the architectures, b. which in turn would bring headaches from storage at the mirror network, c. not to mention making sure that MATE is always at a releasable state, especially in point releases.

I have to admit that I have become a bit of a MATE fanboy since I started using it some time back. The mate-team atm consists of Mike Gabriel and Martin Wimpress, with Martin usually doing the patching work while Mike does the uploading work to the archive. There are well-wishers like me who do chime in from time to time, but it probably needs 1 or 2 dedicated people who make things easier. If you have the technical chops and want to learn packaging, it might be a good way to get into it. It isn’t big and heavy like GNOME, nor is it as light as some of the other competitors in the desktop space. It’s just right. Add to that, it brings in its own unique theming and looks, which make it look unique compared to other distributions. The only thing bad about it is that upstream is a bit secretive about what we can expect in the releases round the corner and in the near/late future; probably a bit of the reason might be constrained resources.

Update – For what it’s worth, they have started the package uploads of the new version having the debian-mate@lists.debia[...]

Jonathan McDowell: First impressions of the Gemini PDA

Mon, 19 Mar 2018 20:41:04 +0000

Last March I discovered the IndieGoGo campaign for the Gemini PDA, a plan to produce a modern PDA with a decent keyboard inspired by the Psion 5. At that point in time the estimated delivery date was November 2017, and it wasn’t clear they were going to meet their goals. As someone who has owned a variety of phones with keyboards, from a Nokia 9000i to a T-Mobile G1, I’ve been disappointed about the lack of mobile devices with keyboards. The Gemini seemed like a potential option, so I backed it, paying a total of $369 including delivery. And then I waited. And waited. And waited. Finally, one year and a day after I backed the project, I received my Gemini PDA.

Now, I don’t get as much use out of such a device as I would have in the past. The Gemini is definitely not a primary phone replacement. It’s not much bigger than my aging Honor 7, but there’s no external display to indicate who’s calling and it’s a bit clunky to have to open it to dial (I don’t trust Google Assistant to cope with my accent enough to have it ring random people). The 9000i did this well with an external keypad and LCD screen, but then it was a brick, so it had the real estate to do such things. Anyway. I have a laptop at home, a laptop at work and I cycle between the 2. So I’m mostly either in close proximity to something portable enough to move around the building, or travelling in a way that doesn’t mean I could use one.

My first opportunity to actually use the Gemini in anger therefore came last Friday, when I attended BelFOSS. I’d normally bring a laptop to a conference, but instead I decided to just bring the Gemini (in addition to my normal phone). I have the LTE version, so I put my FreedomPop SIM into it - this did limit the amount I could do with it due to the low data cap, but for a single day it was plenty for SSH, email + web use. I already have the Pro version of the excellent JuiceSSH, am a happy user of K-9 Mail and tend to use Chrome these days as well. All 3 were obviously perfectly happy on the Android 7.1.1 install.

Aside: Why am I not running Debian on the device? Planet do have an image available from their Linux Support page, but it’s running on top of the crufty 3.18 Android kernel and isn’t yet a first class citizen - it’s not clear the LTE will work outside Android easily and I’ve no hope of ARM opening up the Mali-T880 drivers. I’ve got plans to play around with improving the support, but for the moment I want to actually use the device a bit until I find sufficient time to be able to make progress.

So how did the day go? On the whole, a success. Battery life was great - I’d brought a USB battery pack expecting to need to boost the charge at some point, but I last charged it on Thursday night and at the time of writing it’s still claiming 25% battery left. LTE worked just fine; I had a 4G signal for most of the day with occasional drops down to 3G but no noticeable issues. The keyboard worked just fine; much better than my usual combo of a Nexus 7 + foldable Bluetooth keyboard. Some of the symbols aren’t where you’d expect, but that’s understandable on a scaled-down keyboard. Screen resolution is great. I haven’t used the USB-C ports other than to charge and backup so far, but I like the fact there are 2 provided (even if you need a custom cable to get HDMI rather than it following the proper standard). The device feels nice and solid in your hand - the case is mostly metal plates that remove to give access to the SIM slot and (non-removable but user replaceable) battery.

The hinge mechanism seems robust; I haven’t been w[...]

Vincent Bernat: Integration of a Go service with systemd: socket activation

Mon, 19 Mar 2018 08:28:47 +0000

In a previous post, I highlighted some useful features of systemd when writing a service in Go, notably to signal readiness and prove liveness. Another interesting bit is socket activation: systemd listens on behalf of the application and, on incoming traffic, starts the service with a copy of the listening socket. Lennart Poettering details the advantages in a blog post:

If a service dies, its listening socket stays around, not losing a single message. After a restart of the crashed service it can continue right where it left off. If a service is upgraded we can restart the service while keeping around its sockets, thus ensuring the service is continuously responsive. Not a single connection is lost during the upgrade.

This is one solution to get zero-downtime deployment for your application. Another upside is you can run your daemon with fewer privileges—losing rights is a difficult task in Go.1

Contents: The basics · Handling of existing connections · Waiting a few seconds for existing connections · Waiting longer for existing connections · Waiting longer for existing connections (alternative) · Zero-downtime deployment? · Addendum: decoy process using Go · Addendum: identifying sockets by name

The basics

Let’s take back our nifty 404-only web server:

package main

import (
    "log"
    "net"
    "net/http"
)

func main() {
    listener, err := net.Listen("tcp", ":8081")
    if err != nil {
        log.Panicf("cannot listen: %s", err)
    }
    http.Serve(listener, nil)
}

Here is the socket-activated version, using go-systemd:

package main

import (
    "log"
    "net/http"

    "github.com/coreos/go-systemd/activation"
)

func main() {
    listeners, err := activation.Listeners(true) // ❶
    if err != nil {
        log.Panicf("cannot retrieve listeners: %s", err)
    }
    if len(listeners) != 1 {
        log.Panicf("unexpected number of socket activation (%d != 1)",
            len(listeners))
    }
    http.Serve(listeners[0], nil) // ❷
}

In ❶, we retrieve the listening sockets provided by systemd. In ❷, we use the first one to serve HTTP requests. Let’s test the result with systemd-socket-activate:

$ go build 404.go
$ systemd-socket-activate -l 8000 ./404
Listening on [::]:8000 as 3.

In another terminal, we can make some requests to the service:

$ curl '[::1]':8000
404 page not found
$ curl '[::1]':8000
404 page not found

For a proper integration with systemd, you need two files: a socket unit for the listening socket, and a service unit for the associated service. We can use the following socket unit, 404.socket:

[Socket]
ListenStream = 8000
BindIPv6Only = both

[Install]
WantedBy = sockets.target

The systemd.socket(5) manual page describes the available options. BindIPv6Only = both is explicitly specified because the default value is distribution-dependent. As for the service unit, we can use the following one, 404.service:

[Unit]
Description = 404 micro-service

[Service]
ExecStart = /usr/bin/404

systemd knows the two files work together because they share the same prefix. Once the files are in /etc/systemd/system, execute systemctl daemon-reload and systemctl start 404.socket. Your service is ready to accept connections!

Handling of existing connections

Our 404 service has a major shortcoming: existing connections are abruptly killed when the daemon is stopped or restarted. Let’s fix that!

Waiting a few seconds for existing connections

We can include a short grace period for connections to terminate, then kill remaining ones:

// On signal, gracefully shut down the server and wait 5
// seconds for current [...]
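One practical consequence worth spelling out (my note, not the author’s): since the socket unit owns the listening socket, the service can be restarted without refusing any connections; clients arriving during the restart simply queue in the listen backlog. Assuming the unit names above:

$ systemctl restart 404.service   # 404.socket keeps listening throughout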

Daniel Pocock: GSoC and Outreachy: Mentors don't need to be Debian Developers

Mon, 19 Mar 2018 08:10:06 +0000


A frequent response I receive when talking to prospective mentors: "I'm not a Debian Developer yet".

As student applications have started coming in, now is the time for any prospective mentors to introduce themselves on the debian-outreach list if they would like to help with any of the listed projects or any topics that have been proposed spontaneously by students without any mentor.

It doesn't matter if you are a Debian Developer or not. Furthermore, mentoring in a program like GSoC or Outreachy is a form of volunteering that is recognized just as highly as packaging or any other development activity.

When an existing developer writes an email advocating your application to become a developer yourself, they can refer to your contribution as a mentor. Many other processes, such as requests for DebConf bursaries, also ask for a list of your contributions and you can mention your mentoring experience there.

With the student deadline on 27 March, it is really important to understand the capacity of the mentoring team over the next 10 days so we can decide how many projects can realistically be supported. Please ask on the debian-outreach list if you have any questions about getting involved.

Steve Kemp: Serverless deployment via docker

Mon, 19 Mar 2018 07:01:12 +0000


I've been thinking about serverless-stuff recently, because I've been re-deploying a bunch of services, and some of them are almost microservices. One thing that a lot of my things have in common is that they're all simple HTTP-servers, presenting an API or end-point over HTTP. There is no state, no database, and no complex dependencies.

These should be prime candidates for serverless deployment, but at the same time I don't want to have to recode them for AWS Lambda, or any similar locked-down service. So docker is the obvious answer.

Let us pretend I have ten HTTP-based services, each of which binds to port 8000. To make these available I could just setup a simple HTTP front-end:


We'd need to route the request to the appropriate back-end, so we'd start to present URLs like:


Here any request which has the prefix steve/foo would be routed to a running instance of the docker container steve/foo. In short, the name of the (first) path component performs the mapping to the back-end.

I wrote a quick hack, in golang, which would bind to port 80 and dynamically launch the appropriate containers, then proxy back and forth. I soon realized that this is a terrible idea though! The problem is a malicious client could start making requests for things like:


That would trigger my API-proxy to download the containers and spin them up, allowing arbitrary (albeit "sandboxed") code to run. So taking a step back: we want to use the path-component of a URL to decide where to route the traffic, and each container will bind to :8000 on its private (docker) IP. There's an obvious solution here: HAProxy.

So I started again: I wrote a trivial golang daemon which reacts to docker events - containers starting and stopping - and generates a suitable haproxy configuration file, which can then be used to reload haproxy.
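The generated configuration presumably looks something like this (a sketch; the backend name and container address are invented, not taken from Steve's generator):

frontend http-in
    bind :80
    use_backend foo if { path_beg /foo/ }

backend foo
    server foo1 172.17.0.2:8000 check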

The end result is that if I launch a container named "foo" then requests whose first path component is foo will reach it. Success! The only downside to this approach is that you must manually launch your back-end docker containers - but if you do so they'll become immediately available.

I guess there is another advantage. Since you're launching the containers (manually) you can set up links, volumes, and what-not - much more so than if your API layer spun them up with zero per-container knowledge.

Michael Stapelberg: sbuild-debian-developer-setup(1)

Mon, 19 Mar 2018 07:00:00 +0000

I have heard a number of times that sbuild is too hard to get started with, and hence people don’t use it. To reduce hurdles from using/contributing to Debian, I wanted to make sbuild easier to set up.

sbuild ≥ 0.74.0 provides a Debian package called sbuild-debian-developer-setup. Once installed, run the sbuild-debian-developer-setup(1) command to create a chroot suitable for building packages for Debian unstable.

On a system without any sbuild/schroot bits installed, a transcript of the full setup looks like this:

% sudo apt install -t unstable sbuild-debian-developer-setup
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libsbuild-perl sbuild schroot
Suggested packages:
  deborphan btrfs-tools aufs-tools | unionfs-fuse qemu-user-static
Recommended packages:
  exim4 | mail-transport-agent autopkgtest
The following NEW packages will be installed:
  libsbuild-perl sbuild sbuild-debian-developer-setup schroot
0 upgraded, 4 newly installed, 0 to remove and 1454 not upgraded.
Need to get 1.106 kB of archives.
After this operation, 3.556 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://localhost:3142/ unstable/main amd64 libsbuild-perl all 0.74.0-1 [129 kB]
Get:2 http://localhost:3142/ unstable/main amd64 sbuild all 0.74.0-1 [142 kB]
Get:3 http://localhost:3142/ testing/main amd64 schroot amd64 1.6.10-4 [772 kB]
Get:4 http://localhost:3142/ unstable/main amd64 sbuild-debian-developer-setup all 0.74.0-1 [62,6 kB]
Fetched 1.106 kB in 0s (5.036 kB/s)
Selecting previously unselected package libsbuild-perl.
(Reading database ... 276684 files and directories currently installed.)
Preparing to unpack .../libsbuild-perl_0.74.0-1_all.deb ...
Unpacking libsbuild-perl (0.74.0-1) ...
Selecting previously unselected package sbuild.
Preparing to unpack .../sbuild_0.74.0-1_all.deb ...
Unpacking sbuild (0.74.0-1) ...
Selecting previously unselected package schroot.
Preparing to unpack .../schroot_1.6.10-4_amd64.deb ...
Unpacking schroot (1.6.10-4) ...
Selecting previously unselected package sbuild-debian-developer-setup.
Preparing to unpack .../sbuild-debian-developer-setup_0.74.0-1_all.deb ...
Unpacking sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...
Setting up schroot (1.6.10-4) ...
Created symlink /etc/systemd/system/ → /lib/systemd/system/schroot.service.
Setting up libsbuild-perl (0.74.0-1) ...
Processing triggers for man-db ( ...
Setting up sbuild (0.74.0-1) ...
Setting up sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...

% sudo sbuild-debian-developer-setup
The user `michael' is already a member of `sbuild'.
I: SUITE: unstable
I: TARGET: /srv/chroot/unstable-amd64-sbuild
I: MIRROR: http://localhost:3142/
I: Running debootstrap --arch=amd64 --variant=buildd --verbose --include=fakeroot,build-essential,eatmydata --components=main --resolve-deps unstable /srv/chroot/unstable-amd64-sbuild http://localhost:3142/
I: Retrieving InRelease
I: Checking Release signature
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages
I: Validating Packages
I: Found packages in base already in required: apt
I: Resolving de[...]
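Once the chroot exists, a typical build invocation looks like this (hello_2.10-1.dsc is just a stand-in for whatever source package you want to build):

% sbuild -d unstable hello_2.10-1.dsc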

Dirk Eddelbuettel: RcppSMC 0.2.1: A few new tricks

Sun, 18 Mar 2018 21:16:00 +0000


A new release, now at 0.2.1, of the RcppSMC package arrived on CRAN earlier this afternoon (and once again as a very quick pretest-publish within minutes of submission).

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts.

This release contains a few bug fixes and one minor rearrangement allowing header-only use of the package from other packages, or via an Rcpp plugin. Many of these changes were driven by new contributors, which is a wonderful thing to see for any open source project! So thanks to everybody who helped. Full details below.

Changes in RcppSMC version 0.2.1 (2018-03-18)

  • The sampler now has a copy constructor and assignment overload (Brian Ni in #28).

  • The SMC library component can now be used in header-only mode (Martin Lysy in #29).

  • Plugin support was added for use via cppFunction() and other Rcpp Attributes (or inline) functions (Dirk in #30).

  • The sampler copy ctor/assignment operator is now copy-constructor safe (Martin Lysy in #32).

  • A bug in state variance calculation was corrected (Adam in #36 addressing #34).

  • History getter methods are now more user-friendly (Tiberiu Lepadatu in #37).

  • Use of pow with atomic types was disambiguated to std::pow to help the Solaris compiler (Dirk in #42).

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: control-archive 1.8.0

Sun, 18 Mar 2018 21:14:00 +0000

This is the software that maintains the archive of control messages and the newsgroups and active files. I update things in place, but it's been a while since I made a formal release, and one seemed overdue (particularly since it needed some compatibility tweaks for GnuPG v1).

In code changes, signing IDs with whitespace are now supported, summaries when there is no log file for the summary period no longer produce an error, and gpg1 is now used explicitly with flags to allow weak digest algorithms, since the state of crypto for Usenet control messages is rather dire.
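The flag in question is presumably GnuPG's --allow-weak-digest-algos (my guess; the post doesn't name the exact invocation), which lets gpg1 verify e.g. MD5-based signatures that modern defaults reject:

gpg1 --allow-weak-digest-algos --verify control-message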

On the documentation side, there are multiple fixes to the README.html file that's also shipped with pgpcontrol, updating email addresses, URLs, package versions, and various other details.

For hierarchy changes, the grisbi.* key has been cleaned up a bit for hopefully more reliable verification, and everything related to gov.* has been dropped.

You can get the latest release from the control-archive distribution page.

François Marier: Dynamic DNS on your own domain

Sun, 18 Mar 2018 20:45:00 +0000

I recently moved my dynamic DNS hostnames from Dyn (now owned by Oracle) to No-IP. In the process, I moved all of my hostnames under a sub-domain that I control, in case I ever want to self-host the authoritative DNS server for it.

Creating an account

In order to use my own existing domain, I registered for the Plus Managed DNS service and provided my top-level domain.

Then I created a support ticket to ask for the sub-domain feature. Without that, No-IP expects you to delegate your entire domain to them, whereas I only wanted to delegate the dyn sub-domain.

Once that got enabled, I was able to create hostnames like machine.dyn in the No-IP control panel. Without the sub-domain feature, you can't have dots in hostnames.

I used a bogus IP address for all of the hostnames I created, in order to easily confirm later that the client software is working.

DNS setup

On my registrar's side, here are the DNS records I had to add to delegate anything under to No-IP:

dyn NS
dyn NS
dyn NS
dyn NS
dyn NS

Client setup

In order to update its IP address whenever it changes, I installed ddclient on each of my machines:

apt install ddclient

While the ddclient package won't help you configure your No-IP service during installation or enable the web IP lookup method, this can all be done by editing the configuration after the fact.

I put the following in /etc/ddclient.conf:

use=web,, web-skip='IP Address'

and the following in /etc/default/ddclient:


Then restart the service:

systemctl restart ddclient.service

Note that you do need to change the default update interval or the server will ban your IP address.
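For reference, a complete /etc/ddclient.conf for No-IP usually also names the protocol, update server, credentials and the hostnames to update. A sketch with placeholder values (none of these lines are taken from the original post):

protocol=noip
use=web, web=checkip.dyndns.com, web-skip='IP Address'
server=dynupdate.no-ip.com
login=myusername
password='mypassword'
machine.dyn.example.com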


To test that the client software is working, wait 6 minutes (there is an internal check which cancels any client invocations within 5 minutes of another), then run it manually:

ddclient --verbose --debug

The IP for that machine should now be visible on the No-IP control panel and in DNS lookups:

dig +short

Iustin Pop: New site layout

Sun, 18 Mar 2018 16:50:00 +0000

With the move to Hakyll, I wondered whether to also get rid of the old /~iustin in my homepage address. I don’t remember exactly why I chose that layout - maybe I thought I’d use my domain for other purposes? But there are other ways to do that.

So, from today, my new homepage address is simply the root of my domain.

Iustin Pop: Goodbye Ikiwiki, hello Hakyll!

Sun, 18 Mar 2018 15:05:16 +0000

For a while now, I was somewhat unhappy with Ikiwiki, for very “important” reasons. First, it’s written in Perl, and I haven’t written serious Perl for around 20 years (and that was not serious code). So, me extending it if needed is unlikely, and in all my years of using it I haven’t touched anything except the config file. Second, and these are the real reasons, Ikiwiki is too complex. The templating system is oh-so-verbose. I tried to move to Bootstrap for the styling of my blog (I don’t have time myself to learn enough CSS for responsive, nice and clean sites), but editing the default page template was giving me head-aches. Its wiki origins mean tight integration with the source repository, with the software automatically committing stuff to git as it needed (e.g. new tag pages, calendar updates, etc.), which is overhead for what should basically be a static web site.

So, in the interest of throwing the baby out with the bathwater, I said let’s give Hakyll a go. It’s written in Haskell, so everything will be good, right? And it is indeed. It’s so bare-bones that doing anything non-trivial (as in not just a plain page) requires writing code. The exercise of having a home-page/blog was, during the past week as I worked on converting to it, a programming exercise. Which is quite strange in itself, but for me it works - another excuse for Haskell.

Now, Ikiwiki is a real wiki engine, so didn’t I lose too much by moving to Hakyll, which is just a static site generator? No; I actually stopped using the “live” functionality of Ikiwiki (its cgi-bin script) a long while ago, when I disabled the commenting functionality; there was just too much spam. And live editing of pages was never needed for my use-case.

This migration resulted in some downsides, though:

  • I haven’t yet, and probably will never, import the ~100 or so comments that I had on the old pages; as said, I stopped this a long while ago (around 2013), so…
  • I reorganised the URLs, and a lot of my old posts were not conforming to any scheme, so posts up until mid-2016 do not have the right redirects; I’ll possibly fix this sometime soon. Posts newer than that date already had the date in the URL, and these have a generic redirection in place.
  • Hakyll doesn’t include the tags in the atom/rss feeds “categories”, so this is a downgrade from before.
  • Because Hakyll is more extensible, I can use canned stuff (e.g. Bootstrap, Font Awesome) more easily, which means a bigger site; the previous one was really trivial in terms of size.

Internally (for myself), there are a few more issues: Ikiwiki came with a lot of real functionality that is now missing; as a trivial example, shortcuts like [[!wiki Foobar]], which I now have to replicate.

But with all the above said, there are good parts as well. The entire site is responsive design now, and both old and future posts that include images will behave much more nicely in the non-large-desktop-viewing case. Instead of ~60 lines of (non-commented-out) configuration, I now have 200 lines of Haskell code (ignoring comments, etc.); this is a net win, right? On top of that, because of the lack of built-in things, I had to learn how to use Hakyll, so now I can do (and already did) much more customisation to the html output; random example: for linking to internal pictures, I have a simple macro: $pic("xxx.jpg", "alt-text")$ which [...]

Vincent Bernat: Route-based VPN on Linux with WireGuard

Sun, 18 Mar 2018 01:29:20 +0000

In a previous article, I described an implementation of redundant site-to-site VPNs using IPsec (with strongSwan as an IKE daemon) and BGP (with BIRD) to achieve this. The two strengths of such a setup are: Routing daemons distribute routes to be protected by the VPNs. They provide high availability and decrease the administrative burden when many subnets are present on each side. Encapsulation and decapsulation are executed in a different network namespace. This enables a clean separation between a private routing instance (where VPN users are) and a public routing instance (where VPN endpoints are). As an alternative to IPsec, WireGuard is an extremely simple (less than 5,000 lines of code) yet fast and modern VPN that utilizes state-of-the-art and opinionated cryptography (Curve25519, ChaCha20, Poly1305) and whose protocol, based on Noise, has been formally verified. It is currently available as an out-of-tree module for Linux but is likely to be merged when the protocol is not subject to change anymore. Compared to IPsec, its major weakness is its lack of interoperability. It can easily replace strongSwan in our site-to-site setup. On Linux, it already acts as a route-based VPN. As a first step, for each VPN, we create a private key and extract the associated public key: $ wg genkey oM3PZ1Htc7FnACoIZGhCyrfeR+Y8Yh34WzDaulNEjGs= $ echo oM3PZ1Htc7FnACoIZGhCyrfeR+Y8Yh34WzDaulNEjGs= | wg pubkey hV1StKWfcC6Yx21xhFvoiXnWONjGHN1dFeibN737Wnc= Then, for each remote VPN, we create a short configuration file: [Interface] PrivateKey = oM3PZ1Htc7FnACoIZGhCyrfeR+Y8Yh34WzDaulNEjGs= ListenPort = 5803 [Peer] PublicKey = Jixsag44W8CFkKCIvlLSZF86/Q/4BovkpqdB9Vps5Sk= Endpoint = [2001:db8:2::1]:5801 AllowedIPs =,::/0 A new ListenPort value should be used for each remote VPN. WireGuard can multiplex several peers over the same UDP port but this is not applicable here, as the routing is dynamic. The AllowedIPs directive tells WireGuard to accept and send any traffic. The next step is to create and configure the tunnel interface for each remote VPN: $ ip link add dev wg3 type wireguard $ wg setconf wg3 wg3.conf WireGuard initiates a handshake to establish symmetric keys: $ wg show wg3 interface: wg3 public key: hV1StKWfcC6Yx21xhFvoiXnWONjGHN1dFeibN737Wnc= private key: (hidden) listening port: 5803 peer: Jixsag44W8CFkKCIvlLSZF86/Q/4BovkpqdB9Vps5Sk= endpoint: [2001:db8:2::1]:5801 allowed ips:, ::/0 latest handshake: 55 seconds ago transfer: 49.84 KiB received, 49.89 KiB sent Like VTI interfaces, WireGuard tunnel interfaces are namespace-aware: once created, they can be moved into another network namespace where clear traffic is encapsulated and decapsulated. Encrypted traffic is routed in its original namespace. Let’s move each interface into the private namespace and assign it a point-to-point IP address: $ ip link set netns private dev wg3 $ ip -n private addr add 2001:db8:ff::/127 dev wg3 $ ip -n private link set wg3 up The remote end uses 2001:db8:ff::1/127. Once everything is set up, from one VPN, we should be able to ping each remote host: $ ip netns exec private fping 2001:db8:ff::{1,3,5,7} 2001:db8:ff::1 is alive 2001:db8:ff::3 is alive 2001:db8:ff::5 is alive 2001:db8:ff::7 is alive BIRD configuration is unmodified compared to our previous se[...]
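The preview cuts off at the BIRD section, which the article says is unchanged from the earlier IPsec setup. For orientation only, a minimal bird6 session over such a tunnel could look roughly like this (a sketch with made-up AS numbers, not the actual configuration from the article):

router id;

protocol direct {
    interface "wg*";   # expose the point-to-point /127 tunnel addresses
}

protocol kernel {
    export all;        # install routes learned via BGP into the kernel
}

protocol bgp vpn_wg3 {
    local as 65001;
    neighbor 2001:db8:ff::1 as 65002;
    import all;
    export all;
}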

Russ Allbery: DocKnot 1.03

Sun, 18 Mar 2018 00:42:00 +0000

This is the software that I use to generate documentation for my software. Currently, it just handles README, README.md, and the top-level web page for the package.

This release adds a new metadata file, support/extra, which includes information that should be added to the middle of the normal SUPPORT section of README and README.md files. It also adds an explanatory paragraph about SPDX to the default templates, and adds SPDX license identifiers to the package itself.

I've spent quite some time looking at good ways of maintaining accurate license metadata for my packages (and for Debian packages I maintain), including writing a truly ugly Perl script that generates a Debian copyright-format 1.0 file from a source tree. (There are multiple versions of this; mine is pickier than any other that I'm aware of.) Rather than trying to solve the free-form comment parsing problem, some form of structured metadata that's broadly adopted feels like the correct engineering solution (putting aside the fact that it will be hard to get everyone to adopt it). The SPDX project is trying to solve this, and although it seems very bureaucratic and the spec is almost unreadable, it does seem to be catching on to a degree.

I'm therefore adopting it in my packages at least to the extent of adding SPDX-License-Identifier headers to my source files and using the SPDX-standard identifiers (which annoyingly differ from the Debian copyright-format identifiers). I added a test to check that all the files have these headers and will start adding that to all my packages as I release them.
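Such a check can be approximated with grep's -L option, which lists files that do not contain a match (an illustrative one-liner, not the actual test from DocKnot; adjust the directories to the package layout):

$ grep -rL 'SPDX-License-Identifier:' lib bin t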

I'm still generating the LICENSE file with my messed-up Perl script. I want to switch from that to a better script that supports SPDX and that I don't have to maintain, and will take a look at both the SPDX tooling and cme when I have a chance.

You can get the latest release from the DocKnot distribution page.

Martin Zobel-Helas: Unboxing and commissioning of my new reMarkable Paper tablet

Sat, 17 Mar 2018 13:48:05 +0000

A few days back my reMarkable paper tablet arrived. A couple of friends asked me to do a review of this device, so here we go. The Device It is an E-Ink tablet for reading, writing and sketching using a small stylus. It is very thin (approx. 7mm) and at 360 grams it is very lightweight and fits into every laptop bag. Like any common mobile device today it uses micro USB for charging its 3000mAh battery. Its built-in wireless can be used to sync with the vendor's cloud, and with its 8GB internal storage it provides space for several thousand pages of documents. The device itself is run by an ARM A9 CPU running the vendor's own Linux derivative called ‘Codex’. The vendor publishes the source code for its U-Boot and Linux kernel on its GitHub account. With Linux kernel 4.1.28 they do not run a very recent kernel for a device that has been shipping since September 2017. First steps with Linux Officially the device can only be synced with Windows or macOS, or via the vendor's closed cloud with the Android or iOS app. But this is only partly true. The device, when connected via micro USB to a Linux machine, announces itself as a network device: zobel@gjallar ~ % sudo tail -f /var/log/kern.log Mar 16 20:15:07 gjallar kernel: [52605.362166] usb 1-1: new high-speed USB device number 43 using xhci_hcd Mar 16 20:15:08 gjallar kernel: [52605.512014] usb 1-1: New USB device found, idVendor=04b3, idProduct=4010 Mar 16 20:15:08 gjallar kernel: [52605.512020] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 Mar 16 20:15:08 gjallar kernel: [52605.512024] usb 1-1: Product: RNDIS/Ethernet Gadget Mar 16 20:15:08 gjallar kernel: [52605.512028] usb 1-1: Manufacturer: Linux 4.1.28-fslc+g7f82abb with 2184000.usb Mar 16 20:15:08 gjallar kernel: [52606.078698] cdc_ether 1-1:1.0 usb0: register 'cdc_ether' at usb-0000:00:14.0-1, CDC Ethernet Device, c2:1f:85:68:47:d8 Mar 16 20:15:08 gjallar kernel: [52606.078746] usbcore: registered new interface driver cdc_ether Mar 16 20:15:08 gjallar kernel: [52606.091233] cdc_ether 1-1:1.0 enp0s20f0u1: renamed from usb0 So if you run a DHCP client on that interface, it will be assigned an IP: zobel@gjallar ~ % ip addr sh dev enp0s20f0u1 12: enp0s20f0u1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether c2:1f:85:68:47:d8 brd ff:ff:ff:ff:ff:ff inet brd scope global dynamic noprefixroute enp0s20f0u1 valid_lft 43sec preferred_lft 43sec inet6 fe80::f358:7473:1050:ed9b/64 scope link noprefixroute valid_lft forever preferred_lft forever You can even log into the device. When you click on the “rM” sign in the top left corner, and then click “About”, you get the information on how to log into the device using SSH. The device runs busybox and dropbear. So, here we go, let's log into the device! zobel@gjallar ~ % ssh -l root root@ password: reMarkable ╺━┓┏━╸┏━┓┏━┓ ┏━╸┏━┓┏━┓╻ ╻╻╺┳╸┏━┓┏━┓ ┏━┛┣╸ ┣┳┛┃ ┃ ┃╺┓┣┳┛┣━┫┃┏┛┃ ┃ ┣━┫┗━┓ ┗━╸┗━╸╹┗╸┗━┛ ┗━┛╹┗╸╹ ╹┗┛ ╹ ╹ ╹ ╹┗━[...]
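Since the device shows up with the same USB network address every time, a small ~/.ssh/config entry saves typing (a sketch; the host alias is made up, and the address, commonly reported as, should be checked against the About screen):

$ cat >> ~/.ssh/config <<'EOF'
Host remarkable
    User root
EOF
$ ssh remarkable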

Elana Hashman: ClojureSYNC Talk Resources

Sat, 17 Mar 2018 04:00:00 +0000

At the inaugural ClojureSYNC in 2018, I gave a talk called "apt-get install leiningen: Bootstrapping the Clojure Ecosystem for Debian". This was the culmination of the year of work I put into packaging Leiningen 2.x and other Clojure software for Debian. It was incredibly exciting to present there, and Eric ran a fabulous conference! I wish more tech conferences would send me to New Orleans. apt-get install leiningen: Bootstrapping the Clojure Ecosystem for Debian Talk page, ClojureSYNC website Talk video: not yet posted (see link above for updates) Talk slides (pdf download) All the packages I maintain Clojure Team's GitLab repository Debian Clojure Wiki Debian Clojure Packaging Tutorial Follow-up readings How web bloat affects people with slow connections, by Dan Luu The Ethics of Unpaid Labor and the OSS Community, by Ashe Dryden Debian Social Contract Debian Free Software Guidelines Debian Dunc-Tank Controversy via LWN Image licensing info "There's no NEW queue for talk slides." – lamby, Debian Project Leader Debian logo, copyright 1999 Software in the Public Interest, Inc. used under the Attribution-ShareAlike 3.0 Unported License. Images on "The '90s" and "Walnut Creek GIFs Galore CDROM" slides were obtained from The Internet Archive and included under their Terms of Use. It is believed the inclusion of these images, for illustration of the history of computing and file distribution, constitutes fair use under US copyright law. Dynamic Duo: the Gnu and the Penguin in flight in colour, used under the GNU Free Documentation License, v1.3. Yggdrasil Computing Plug and Play Linux was obtained from Wikipedia. It is believed the inclusion of this image, for illustration of the history of early Linux distributions, constitutes fair use under US copyright law. Debian family tree, authors Andreas Lundqvist, Donjan Rodic, modified by Michaeldsuarez, used under the GNU Free Documentation License, v1.3. [...]

Louis-Philippe Véronneau: Minimal SQL privileges

Sat, 17 Mar 2018 00:45:33 +0000

Lately, I have been working pretty hard on a paper I have to hand out at the end of my university semester for the machine learning class I'm taking. I will probably do a long blog post about this paper in May if it turns out to be good, but for the time being I have some time to kill while my latest boosting model runs. So let's talk about something I've started doing lately: creating issues on FOSS webapp project trackers when their documentation tells people to grant all privileges to the database user. You know, something like: GRANT ALL PRIVILEGES ON database.* TO 'username'@'localhost' IDENTIFIED BY 'password'; I'd like to say I've never done this and always took time to specify a restricted subset of privileges on my servers, but I'd be lying. To be honest, I woke up last Christmas when someone told me it was an insecure practice. When you take a few seconds to think about it, there are quite a few database level SQL privileges and I don't see why I should grant them all to a webapp if it only needs a few of them. So I started asking projects to do something about this and update their documentation with a minimal set of SQL privileges needed to run correctly. The Drupal project does this quite well and tells you to: GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES ON databasename.* TO 'username'@'localhost' IDENTIFIED BY 'password'; When I first reached out to the upstream devs of these projects, I was sure I'd be seen as some zealous nuisance. To my surprise, everyone thought it was a good idea and fixed it. Shout out to Nextcloud, Mattermost and KanBoard for taking this seriously! If you are using a webapp and the documentation states you should grant all privileges to the database user, here is a template you can use to create an issue and ask them to change it: Hi! The installation documentation says that you should grant all SQL privileges to the database user: GRANT ALL PRIVILEGES ON database.* TO 'username'@'localhost' IDENTIFIED BY 'password'; I was wondering what are the true minimal SQL privileges WEBAPP needs to run properly. I don't normally like to grant all privileges for security reasons and would really appreciate it if you could publish a minimal SQL database privileges list. I guess I'm expecting something like [Drupal][drupal] does. GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES ON databasename.* TO 'username'@'localhost' IDENTIFIED BY 'password'; At the database level, [MySQL/MariaDB][mariadb] supports: * `ALTER` * `CREATE` * `CREATE ROUTINE` * `CREATE TEMPORARY TABLES` * `CREATE VIEW` * `DELETE` * `DELETE HISTORY` * `DROP` * `EVENT` * `INDEX` * `INSERT` * `LOCK TABLES` * `REFERENCES` * `SELECT` * `SHOW VIEW` * `TRIGGER` * `UPDATE` Does WEBAPP really need database level privileges like EVENT or CREATE ROUTINE? If not, why should I grant them? Thanks for your work on WEBAPP! [drupal]: [mariadb]: [...]
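When auditing an existing installation, it can help to first see what the user already has, then revoke everything and re-grant only the minimal set (standard MySQL/MariaDB statements; adjust the names and the privilege list to your setup):

SHOW GRANTS FOR 'username'@'localhost';
REVOKE ALL PRIVILEGES ON databasename.* FROM 'username'@'localhost';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER,
      CREATE TEMPORARY TABLES ON databasename.* TO 'username'@'localhost';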

Dirk Eddelbuettel: RcppClassicExamples 0.1.2

Fri, 16 Mar 2018 21:56:00 +0000


Per a CRAN email sent to 300+ maintainers, this package (just like many others) was asked to please register its S3 method. So we did, and also overhauled a few other packaging standards which have changed since the previous uploads in December of 2012 (!!).

No new code or features. Full details below. And as a reminder, don't use the old RcppClassic -- use Rcpp instead.

Changes in version 0.1.2 (2018-03-15)

  • Registered S3 print method [per CRAN request]

  • Added src/init.c with registration and updated all .Call usages taking advantage of it

  • Updated http references to https

  • Updated DESCRIPTION conventions

Thanks to CRANberries, you can also look at a diff to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: RDieHarder 0.1.4

Fri, 16 Mar 2018 21:52:00 +0000


Per a CRAN email sent to 300+ maintainers, this package (just like many others) was asked to please register its S3 method. So we did, and also overhauled a few other packaging standards which have changed since the last upload in 2014.

No NEWS.Rd file to take a summary from, but the top of the ChangeLog has details.

Thanks to CRANberries, you can also look at a diff to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russell Coker: Racism in the Office

Fri, 16 Mar 2018 12:21:13 +0000

Today I was at an office party and the conversation turned to race, specifically the incidence of unarmed Afro-American men and boys who are shot by police. Apparently the idea that white people (even in other countries) might treat non-white people badly offends some people, so we had a man try to explain that Afro-Americans commit more crime and therefore are more likely to get shot. This part of the discussion isn’t even noteworthy, it’s the sort of thing that happens all the time. I and another man pointed out that crime is correlated with poverty and racism causes non-white people to be disproportionately poor. We also pointed out that US police seem capable of arresting proven violent white criminals without shooting them (he cited arrests of Mafia members; I cited mass murderers like the one who shot up the cinema). This part of the discussion isn’t particularly noteworthy either. Usually when someone tries explaining some racist ideas and gets firm disagreement they back down. But not this time. The next step was the issue of whether black people are inherently violent. He cited all of Africa as evidence. There’s a meme that you shouldn’t accuse someone of being racist, it’s apparently very offensive. I find racism very offensive and speak the truth about it. So all the following discussion was peppered with him complaining about how offended he was and me not caring (stop saying racist things if you don’t want me to call you racist). Next was an appeal to “statistics” and “facts”. He said that he was only citing statistics and facts, clearly not understanding that saying “Africans are violent” is not a statistic. I told him to get his phone and Google for some statistics as he hadn’t cited any. I thought that might make him just go away, it was clear that we were long past the possibility of agreeing on these issues. I don’t go to parties seeking out such arguments, in fact I’d rather avoid such people altogether if possible. So he found an article about recent immigrants from Somalia in Melbourne (not about the US or Africa, the previous topics of discussion). We are having ongoing discussions in Australia about violent crime, mainly due to conservatives who want to break international agreements regarding the treatment of refugees. For the record I support stronger jail sentences for violent crime, but this is an idea that is not well accepted by conservatives presumably because the vast majority of violent criminals are white (due to the vast majority of the Australian population being white). His next claim was that Africans are genetically violent due to DNA changes from violence in the past. He specifically said that if someone was a witness to violence it would change their DNA to make them and their children more violent. He also specifically said that this was due to thousands of years of violence in Africa (he mentioned two thousand and three thousand years on different occasions). I pointed out that European history has plenty of violence that is well documented and also that DNA just doesn’t work the way he thinks it does. Of course he tried to shout me d[...]

Daniel Pocock: OSCAL'18, call for speakers, radio hams, hackers & sponsors reminder

Fri, 16 Mar 2018 08:46:29 +0000

The OSCAL organizers have given a reminder about their call for papers, booths and sponsors (ask questions here). The deadline is imminent but you may not be too late. OSCAL is the Open Source Conference of Albania. OSCAL attracts visitors from far beyond Albania (OpenStreetmap), as the biggest Free Software conference in the Balkans, people come from many neighboring countries including Kosovo, Montenegro, Macedonia, Greece and Italy. OSCAL has a unique character unlike any other event I've visited in Europe and many international guests keep returning every year. A bigger ham radio presence in 2018? My ham radio / SDR demo worked there in 2017 and was very popular. This year I submitted a fresh proposal for a ham radio / SDR booth and sought out local radio hams in the region with an aim of producing an even more elaborate demo for OSCAL'18. If you are a ham and would like to participate please get in touch using this forum topic or email me personally. Why go? There are many reasons to go to OSCAL: We can all learn from their success with diversity. One of the finalists for Red Hat's Women in Open Source Award, Jona Azizaj, is a key part of their team: if she is announced the winner at Red Hat Summit the week before OSCAL, wouldn't you want to be in Tirana when she arrives back home for the party? Warm weather to help people from northern Europe to thaw out. For many young people in the region, their only opportunity to learn from people in the free software community is when we visit them. Many people from the region can't travel to major events like FOSDEM due to the ongoing outbreak of immigration bureaucracy and the travel costs. Many Balkan countries are not EU members and incomes are comparatively low. Due to the low living costs in the region and the proximity to larger European countries, many companies are finding compelling opportunities to work with local developers there and OSCAL is a great place to make contacts informally. Sponsors sought Like many free software communities, Open Labs is a registered non-profit organization. Anybody interested in helping can contact the team and ask them for whatever details you need. The Open Labs Manifesto expresses a strong commitment to transparency which hopefully makes it easy for other organizations to contribute and understand their impact. Due to the low costs in Albania, even a small sponsorship or donation makes a big impact there. If you can't make a direct payment to Open Labs, you could also potentially help them with benefits in kind or by contributing money to one of the larger organizations supporting OSCAL. Getting there without direct service from Ryanair or Easyjet These notes about budget airline routes might help you plan your journey. It is particularly easy to get there from major airports in Italy. If you will also have a vacation at another location in the region it may be easier and cheaper to fly to that location and then use a bus to Tirana. Making it a vacation For people who like to combine conferences with their vacations, the Balkans (WikiTravel) offer many opportunities, including be[...]

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, February 2018

Fri, 16 Mar 2018 08:08:41 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS. Individual reports In February, about 196 work hours have been dispatched among 12 paid contributors. Their reports are available: Abhijith PA did 8 hours. Antoine Beaupré did 7.25h (out of 4h allocated + 3.25h remaining). Ben Hutchings did 13 hours (out of 15h allocated, thus keeping 2 extra hours for March). Brian May did 10 hours. Chris Lamb did 18 hours. Emilio Pozuelo Monfort did 2 hours only due to personal issues (out of 23.75 hours allocated + 10.5 hours remaining, he gave back the remaining 32.25 hours). Hugo Lefeuvre did 1.5 hours (out of 23.75 hours allocated, thus keeping 22.25 extra hours for March). Markus Koschany did 23.75 hours. Ola Lundqvist did 9 hours (out of 14 hours allocated, thus keeping 5 extra hours for March). Roberto C. Sanchez did 27.5 hours (out of 23.75 hours allocated + 3.75 hours remaining). Santiago Ruano Rincón did 6 hours (out of 8 hours allocated, thus keeping 2 extra hours for March). Thorsten Alteholz did 23.75 hours. Evolution of the situation The number of sponsored hours did not change but a new platinum sponsor is about to join our project. The security tracker currently lists 60 packages with a known CVE and the dla-needed.txt file 33. The number of open issues increased significantly and we seem to be behind in terms of CVE triaging. Thanks to our sponsors New sponsors are in bold. Platinum sponsors: TOSHIBA (for 29 months) GitHub (for 20 months) Gold sponsors: The Positive Internet (for 45 months) Blablacar (for 44 months) Linode (for 34 months) Babiel GmbH (for 23 months) Plat’Home (for 23 months) Silver sponsors: Domeneshop AS (for 44 months) Université Lille 3 (for 44 months) Trollweb Solutions (for 42 months) Nantes Métropole (for 38 months) Dalenys (for 35 months) Univention GmbH (for 30 months) Université Jean Monnet de St Etienne (for 30 months) Ribbon Communications, Inc. (for 24 months) maxcluster GmbH (for 18 months) Exonet B.V. (for 14 months) Leibniz Rechenzentrum (for 8 months) (for 5 months) Bronze sponsors: David Ayers – IntarS Austria (for 45 months) Evolix (for 45 months) Offensive Security (for 45 months), a.s. (for 45 months) Freeside Internet Service (for 44 months) MyTux (for 44 months) Intevation GmbH (for 42 months) Linuxhotel GmbH (for 42 months) Daevel SARL (for 40 months) Bitfolk LTD (for 39 months) Megaspace Internet Services GmbH (for 39 months) Greenbone Networks GmbH (for 38 months) NUMLOG (for 38 months) WinGo AG (for 38 months) Ecole Centrale de Nantes – LHEEA (for 34 months) Sig-I/O (for 31 months) Entr’ouvert (for 29 months) Adfinis SyGroup AG (for 26 months) GNI MEDIA (for 21 months) Laboratoire LEGI – UMR 5519 / CNRS (for 21 months) Quarantainenet BV (for 21 months) RHX Srl (for 18 months) Bearstech (for 12 months) LiHAS (for 12 months) People Doc (for 9 months) Catalyst IT Ltd (for 7 months) Supagro Demarcq SAS [...]

Norbert Preining: TeX Live 2018 (pretest) hits Debian/experimental

Fri, 16 Mar 2018 04:27:15 +0000


TeX Live 2017 has been frozen and we have entered the preparation phase for the release of TeX Live 2018. Time to bring the Debian packages up to date as well.


The other day I uploaded the following set of packages to Debian/experimental:

  • texlive-bin 2018.20180313.46939-1
  • texlive-base, texlive-lang, texlive-extra 2018.20180313-1
  • biber 2.11-1

This brings Debian/experimental on par with the current status of TeX Live’s tlpretest. After a bit more testing, and once the sources have stabilized, I will upload everything to unstable for broader testing.

This year hasn’t seen any big changes, see the above linked post for details. Testing and feedback would be greatly appreciated.
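If you want to try the pretest packages, apt can pull them from experimental explicitly once experimental is in your sources.list (standard apt usage; the package list here is abbreviated):

$ sudo apt install -t experimental texlive-base texlive-bin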


Louis-Philippe Véronneau: Roundcube fr_FEM locale 1.3.5

Fri, 16 Mar 2018 04:00:00 +0000


Roundcube 1.3.5 was released today and with it, I've released version 1.3.5 of my fr_FEM (French gender-neutral) locale.

This latest version is actually the first one that can be used with a production version of Roundcube: the first versions I released were based on the latest commit in the master branch at the time instead of an actual release. Not sure why I did that.

I've also changed the versioning scheme to follow Roundcube's. Version 1.3.5 of my localisation is thus compatible with Roundcube 1.3.5. Again, I should have done that from the start.

The fine folks at Riseup actually started using fr_FEM as the default French locale on their instance and I'm happy to say the UI integration seems to be working pretty well.

Sandro Knauß (hefee), who is working on the Debian Roundcube package, also told me he'd like to replace the default Roundcube French locale with fr_FEM in Debian. Nice to see people think a gender-neutral locale is a good idea!

Finally, since this was the first time I had to compare two different releases of Roundcube to see if the 20 files I care about had changed, I decided to write a simple script that leverages git to do this automatically. Running ./ -p git_repo -i 1.3.4 -f 1.3.5 -l fr_FR -o roundcube_diff.txt outputs a nice file that tells you if new localisation files have been added and displays what changed in the old ones.
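The heart of such a script is a git diff between the two release tags, restricted to the localisation files (an illustrative command assuming Roundcube's standard layout and tag names, not the script itself):

$ git -C git_repo diff 1.3.4 1.3.5 -- program/localization/fr_FR/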

You can find the locale here.

Clint Adams: Don't feed them after midnight

Thu, 15 Mar 2018 12:51:41 +0000


“Hello,” said Adrian, but Adrian was lying.

“My name is Adrian,” said Adrian, but Adrian was lying.

“Hold on while I fellate this 魔鬼,” announced Adrian.


Posted on 2018-03-15
Tags: bgs

Daniel Powell: Mentorship within software development teams

Thu, 15 Mar 2018 10:45:00 +0000

In response to this email, I wrote a short blog post with some insight into the subject of mentorship.


In my journey to find an internship opportunity through Google Summer of Code, I wanted to give input about the relationship between a mentor and an intern/apprentice. My time as a service manager in the automotive repair industry gave me insight into the design of these relationships.

My recommendation for mentoring programs within a software development team is to have a dual group and private messaging environment for teams of 3 mentors guiding 2 or 3 interns, based on their comfort and experience in a group setting. My rationale for this is as follows:

Every personality does not necessarily engage well with every other. While it's important to learn to work with people you disagree with, I have found that when given the opportunity to float between mentors for different issues, apprentices will learn the most from those they get along with best. If the end goal is for the pupil to learn the most during this experience, and hence also to increase their productivity on a project, then having the dual ability to use a group setting or to PM a specific mentor is ideal. This also gives a mentor the opportunity to recommend asking a question of another mentor whose specialty in the topic area is better, which in turn can help assuage a personality conflict simply through the shared introduction. (Just think about when someone you like or respect recommends you work with someone you thought you didn't get along with - it's a more comfortable situation when you are introduced in this circumstance, in a transparent and positive light.)

Our most successful ratio of mentors to apprentices was 3:2 for technicians who were short on shop experience, but in the scope of this project a 3:3 ratio could be appropriate. I would, however, avoid assigning a mentor as a lead for a student in this format. It makes the barrier to reaching out to the other two mentors too high (especially for those who are relatively new to a team dynamic). You may also change the ratio based on the experience of the students you accept and their team experience. For example, if you have two students who have never worked in a team environment, it may be prudent to move to a 3:2 ratio so as not to overwhelm the mentors. It's nice to have that flexibility, so it may be good to avoid too rigid a structuring of teams.

Sven Hoexter: aput - simple upload script for a flat artifactory Debian repository

Wed, 14 Mar 2018 18:26:37 +0000

At work we're using JFrog Artifactory to provide a Debian repository (among other kinds of repositories). Using the WebUI sucks, and uploading by cut&pasting a curl command is annoying too, so I just wrote down a few lines of shell to upload a single Debian binary package.

It expects a flat repository, and that you edit the variables at the top to provide the repository URL, its name and your API key. So no magic involved.
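The heart of such a script is a single authenticated PUT, which with curl looks something like this (a sketch with placeholder host, repository and file names, not the actual script):

$ curl -H "X-JFrog-Art-Api: $APIKEY" -T foo_1.0-1_amd64.deb \
    "https://artifactory.example.com/artifactory/my-debian-repo/foo_1.0-1_amd64.deb"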

Abhijith PA: Going to FOSSASIA 2018

Wed, 14 Mar 2018 12:43:00 +0000

I will be attending the FOSSASIA summit 2018, happening in Singapore. Thanks to Daniel Pocock, we have a Debian booth there. If you are attending, please add your name to this wiki page or contact me personally. We can hang out at the booth.

Laura Arjona Reina: WordPress for Android and short blog posts

Wed, 14 Mar 2018 06:34:56 +0000


I use for my social network interactions and from time to time I post short thoughts there.

I usually reserve my blog for longer posts including links etc.

That means that it’s harder for me to publish in my blog.

OTOH my daily commute time may be enough to craft short posts. I bring my laptop with me, but it's common that I
open kate, begin to write, and arrive at my destination with my post almost finished but unpublished. Or, second variant, I cannot sit, so I cannot type in the metro, and I pass the time reading or thinking.

I’ve just installed WordPress for Android and hopefully that helps me to write short posts in my commute time and publish quicker. Let’s try and see what happens.


Comment about this post in this thread.

Norbert Preining: Replacing a lost Yubikey

Wed, 14 Mar 2018 06:05:30 +0000

Some weeks ago I lost my purse with everything in it, from residency card, driving license, credit cards, cash cards, all kinds of ID cards, and last but not least my Yubikey NEO. This being Japan, I did expect the purse to show up in a few days, most probably with the money gone but all the cards intact. Unfortunately not this time. So after having finally reissued most of the cards, I also went through the necessary procedures concerning the Yubikey, which contained my GnuPG subkeys and was used as second factor for several services (see here and here). Although the GnuPG keys on the Yubikey are considered safe from extraction, I still decided to revoke them and create new subkeys – one of the big advantages of subkeys: one does not start at zero, but just creates new subkeys instead of running around trying to get signatures again. Another thing that has to be done is removing the old Yubikey from all the services where it has been used as second factor. In my case that was quite a lot (Google, Github, Dropbox, NextCloud, WordPress, …). BTW, you have a set of backup keys saved somewhere for all the services you are using, right? It helps a lot when getting back into the system. GnuPG keys renewal To remind myself of what is necessary, here are the steps:

  • Get your master key from the backup USB stick
  • Revoke the three subkeys that are on the Yubikey
  • Create new subkeys
  • Install the new subkeys onto a new Yubikey, update keyservers

All of that is quite straightforward: use gpg --expert --edit-key YOUR_KEY_ID; after this you select a subkey with key N, followed by revkey. You can select all three subkeys and revoke them at the same time: just type key N for each of the subkeys (where N is the index of the key, starting from 0). Next create new subkeys; here you can follow the steps laid out in the original blog. In the same way you can move them to a new Yubikey NEO (good that I bought three of them back then!). Last but not least you have to update the key-servers with your new public key, which is normally done with gpg --send-keys (again, see the original blog). The trickiest part was setting up and distributing the keys on my various computers: the master key remains as usual on offline media only. On my main desktop at home I have the subkeys available, while on my laptop I only have stubs pointing at the Yubikey. This needs a bit of shuffling around, but should be obvious when looking at the previous blogs. Full disk encryption I also had my Yubikey registered as an unlock device for the LUKS-based full disk encryption. The status before the update was as follows: $ cryptsetup luksDump /dev/sdaN Version: 1 Cipher name: aes .... Key Slot 0: ENABLED ... Key Slot 1: DISABLED Key Slot 2: DISABLED Key Slot 3: DISABLED Key Slot 4: DISABLED Key Slot 5: DISABLED Key Slot 6: DISABLED Key Slot 7: ENABLED ... I was pretty su[...]
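Condensed into a transcript, the subkey dance looks roughly like this (a sketch; the key indices and exact prompts depend on your key):

$ gpg --expert --edit-key YOUR_KEY_ID
gpg> key 1       # select a subkey (repeat with key 2, key 3 to select all)
gpg> revkey      # revoke the selected subkeys
gpg> addkey      # create a replacement subkey
gpg> keytocard   # move a new subkey onto the new Yubikey (select one key first)
gpg> save
$ gpg --send-keys YOUR_KEY_ID   # publish the updated public key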

Louis-Philippe Véronneau: Playing with water

Wed, 14 Mar 2018 04:00:00 +0000

I'm currently taking a machine learning class and although it is an insane amount of work, I like it a lot. I initially had planned to use R to play around with the database I have, but the teacher recommended I use H2o, a FOSS machine learning framework. I was a bit sceptical at first since I'm already pretty good with R, but then I found out you could simply import H2o as an R library. H2o replaces most R functions with its own parallelized ones to cut down on processing time (no more doParallel calls) and uses an "external" server you have to run on the side instead of running R calls directly. I was pretty happy with this situation, that is until I actually started using H2o in R. With the huge database I'm playing with, the library felt clunky and I had a hard time doing anything useful. Most of the time, I just ended up with long Java traceback calls. Much love. I'm sure in the right hands using H2o as a library could be incredibly powerful, but sadly it seems I haven't earned my black belt in R-fu yet. I was pissed for at least a whole day - not being able to achieve what I wanted to do - until I realised H2o comes with a WebUI called Flow. I'm normally not very fond of using web thingies to do important work like writing code, but Flow is simply incredible. Automated graphing functions, integrated ETA when running resource-intensive models, descriptions for each and every model parameter (the parameters are even divided into sections based on your familiarity with the statistical models in question): Flow seemingly has it all. In no time I was able to run 3 basic machine learning models and get actual interpretable results. So yeah, if you've been itching to analyse very large databases using state-of-the-art machine learning models, I would recommend using H2o. Try Flow at first instead of the Python or R hooks to see what it's capable of doing. The only downside to all of this is that H2o is written in Java and depends on Java 1.7 to run... That, and be warned: it requires a metric fuckton of processing power and RAM. My poor server struggled quite a bit even with 10 available cores and 10GB of RAM...
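For those who prefer to stay out of the browser, the Python hook follows the same client/server model as the R library (a minimal sketch with a hypothetical dataset; the R interface is analogous):

import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init(max_mem_size="10G")           # starts, or attaches to, a local H2O server
frame = h2o.import_file("data.csv")    # hypothetical input file
model = H2OGradientBoostingEstimator()
model.train(y="label", training_frame=frame)
print(model)                           # summary of the fitted model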

Dirk Eddelbuettel: Rcpp 0.12.16: A small update

Wed, 14 Mar 2018 00:49:00 +0000

The sixteenth update in the 0.12.* series of Rcpp landed on CRAN earlier this evening after a few days of gestation in incoming/ at CRAN. Once again, this release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, the 0.12.14 release in November 2017, and the 0.12.15 release in January 2018, making it the seventeenth release at the steady and predictable bi-monthly release frequency. Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1316 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor. Compared to other releases, this release contains a relatively small change set, but between Kirill, Kevin and myself a few things got cleaned up and solidified. Full details are below. Changes in Rcpp version 0.12.16 (2018-03-08) Changes in Rcpp API: Rcpp now sets and puts the RNG state upon each entry to an Rcpp function, ensuring that nested invocations of Rcpp functions manage the RNG state as expected (Kevin in #825 addressing #823). The R::pythag wrapper has been commented out; the underlying function has been gone from R since 2.14.0, and ::hypot() (part of C99) is now used unconditionally for complex numbers (Dirk in #826). The long long type can now be used on 64-bit Windows (Kevin in #811 and again in #829 addressing #804). Changes in Rcpp Attributes: Code generated with cppFunction() now uses .Call() directly (Kirill Mueller in #813 addressing #795). Changes in Rcpp Documentation: The Rcpp FAQ vignette is now indexed as 'Rcpp-FAQ'; a stale Gmane reference was removed and an entry on getting compilers under Conda was added. The top-level README.md now has a Support section. The Rcpp.bib reference file was refreshed to current versions. Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings. [...]
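For anyone who has not seen Rcpp before, the canonical flavour of what it enables is the classic exported-function example from the Rcpp documentation (not code from this release):

#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector timesTwo(NumericVector x) {
  return x * 2;   // Rcpp sugar: vectorised multiplication
}

// In R: Rcpp::sourceCpp("timesTwo.cpp"); timesTwo(1:5) gives 2 4 6 8 10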

Reproducible builds folks: Reproducible Builds: Weekly report #150

Tue, 13 Mar 2018 20:25:06 +0000

Here's what happened in the Reproducible Builds effort between Sunday March 4 and Saturday March 10 2018: On Saturday 10th March, Chris Lamb presented at SCALE 16x on Reproducible Builds. Dan Mux posted about moving away from Bazel, referencing reproducibility. Chris Lamb demonstrated that Reproducible Builds can also find quality assurance issues, such as in todoman where a non-fatal missing build-dependency was causing the output to be unreproducible. The Yocto project's feature page lists "Binary Reproducibility" as their number one feature. diffoscope development diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. Mattia Rizzolo backported version 91 to the Debian backports repository. Chris Lamb: Support the case where the python3-xattr package is installed but python3-pyxattr is not. (Closes: #892240) Move documentation for maybe_decode into a docstring on the method itself. Avoid decoding strings by opening source files in binary mode. Mattia Rizzolo: tests: binary: fix test after 934dfff tests: test_dos_mbr: explicitly use utf8 for reading files comparators.utils.file: don't try to decode a string In addition, Juliana — our Outreachy intern — continued her work on parallel processing. Bugs filed Adrian Bunk: #892459 filed against simpleitk. Bernhard M. Wiedemann: racket python-datrie (sort readdir(2)) yubioath-desktop (sort readdir(2)) mango-doc (date, orphaned) lilypond (SOURCE_DATE_EPOCH/date) perl-Glib (update => fixes perl-Goo-Canvas) fbreader (filed upstream: 1, 2 & 3) yudit (SOURCE_DATE_EPOCH/date, upstreamable) python-pycryptopp (sort readdir(2)) autogen (compile-time benchmarking, SOURCE_DATE_EPOCH, .tar.gz) Chris Lamb: #892019 filed against python-meshio. #892020 filed against python-diskimage-builder. #892021 filed against kronosnet. #892419 filed against gnocchi (upstream). #892420 filed against nova (nova). #892425 filed against node-package-preamble. #892496 filed against yt. #892515 filed against meson (upstream). #892565 filed against codespell. node-rollup In addition, package reviews have been added, 44 have been updated and 26 have been removed in this week, adding to our knowledge about identified issues. Lastly, two issue classification types have been added: nondeterminstic_output_in_pkgconfig_files_generated_by_meson (patch sent upstream) timestamps_in_preamble_generated_by_node_package_preamble (patch sent upstream) development Hans-Christoph Steiner (F-Droid): Include newly packaged dependencies. Timeout build jobs after 48 hours. Holger Levsen (F-Droid): Timeout build jobs after 36 hours. Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and rep[...]

Thomas Lange: build service now supports creation of VM disk images

Tue, 13 Mar 2018 16:27:24 +0000


A few days ago I added a new feature to the build service.

In addition to creating an installation image, the build service can now create bootable disk images. These disk images can be booted in a VM like KVM, VirtualBox or VMware, or in OpenStack.

You can define a disk image size, select a language, set a user and root password, select a Debian distribution and enable backports, all with just one click. It's possible to add your public key for access to the root account without a password; this can also be done by just specifying your GitHub account. Several disk formats are supported, like raw (compressed with xz or zstd), qcow2, vdi, vhdx and vmdk. And you can add your own list of packages you want to have inside the OS. After a few minutes the disk image is created and you will get a download link, including a log of the creation process and a link to the FAI configuration that was used to create your customized image.
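Once downloaded, such an image can be test-booted locally with QEMU/KVM (a standard invocation for a qcow2 image; -snapshot discards any writes):

$ qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=disk.qcow2,format=qcow2 -snapshot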

The new service is available at

If you have any comments, feature requests or feedback, do not hesitate to contact me.

Petter Reinholdtsen: First rough draft Norwegian and Spanish edition of the book Made with Creative Commons

Tue, 13 Mar 2018 12:00:00 +0000

I am working on publishing yet another book related to Creative Commons. This time it is a book filled with interviews and histories from those around the globe making a living using Creative Commons.

Yesterday, after many months of hard work by several volunteer translators, the first draft of a Norwegian Bokmål edition of the book Made with Creative Commons from 2017 was complete. The Spanish translation is also complete, while the Dutch, Polish, German and Ukrainian editions need a lot of work. Get in touch if you want to help make those happen, or would like to translate into your mother tongue.

The whole book project started when Gunnar Wolf announced that he was going to make a Spanish edition of the book. I noticed, and offered some input on how to make a book, based on my experience with translating the Free Culture and The Debian Administrator's Handbook books to Norwegian Bokmål. To make a long story short, we ended up working on a Bokmål edition, and now the first rough translation is complete, thanks to the hard work of Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first proof reading is almost done, and only the second and third proof reading remains. We will also need to translate the 14 figures and create a book cover. Once it is done we will publish the book on paper, as well as in PDF, ePub and possibly Mobi formats.

The book itself originates as a manuscript on Google Docs, is downloaded as ODT from there and converted to Markdown using pandoc. The Markdown is modified by a script before it is converted to DocBook using pandoc. The DocBook is modified again using a script before it is used to create a Gettext POT file for translators. The translated PO file is then combined with the earlier mentioned DocBook file to create a translated DocBook file, which finally is given to dblatex to create the final PDF. The end result is a set of editions of the manuscript, one English and one for each of the translations.
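Spelled out as commands, the pipeline looks roughly like this (file names and the fix-up scripts are placeholders for the modifications described above, and po4a stands in for whichever gettext round-trip tooling the project actually uses):

$ pandoc book.odt -t markdown -o book.md
$ ./fixup-markdown book.md                  # placeholder for the Markdown script
$ pandoc book.md -t docbook -o book.xml
$ ./fixup-docbook book.xml                  # placeholder for the DocBook script
$ po4a-gettextize -f docbook -m book.xml -p book.pot
$ po4a-translate -f docbook -m book.xml -p nb.po -l book-nb.xml
$ dblatex book-nb.xml                       # produces the final PDF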

The translation is conducted using the Weblate web based translation system. Please have a look there and get in touch if you would like to help out with proof reading. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Junichi Uekawa: I've been writing js more for chrome extensions.

Mon, 12 Mar 2018 07:54:32 +0000

I've been writing js more for chrome extensions. I write python using pandas for plotting graphs now. I wonder if there's a good graphing solution for js. I don't remember how I crafted R graphs anymore.

Ben Hutchings: Debian LTS work, February 2018

Mon, 12 Mar 2018 00:51:28 +0000


I was assigned 15 hours of work by Freexian's Debian LTS initiative and worked 13 hours. I will carry over 2 hours to March.

I made another release on the Linux 3.2 longterm stable branch (3.2.99) and started the review cycle for the next update (3.2.100). I rebased the Debian package onto 3.2.99 but didn't upload an update to Debian this month.

I also discussed the possibilities for cooperation between Debian LTS and CIP, briefly reviewed leptonlib for additional security issues, and updated the wiki page about the status of Spectre and Meltdown in Debian.

Elena Gjevukaj: CoderGals Hackathon

Sun, 11 Mar 2018 09:04:01 +0000


The CoderGals Hackathon was organized for the first time in my country. This event took place in the beautiful city of Prizren. This hackathon, held over 24 to 48 hours, was an idea that came from two girls majoring in Computer Science, Qendresa and Albiona Hoti.

Thanks to them, we had the chance to work on exciting projects as well as be mentored by key tech people including: Mergim Cahani, Daniel Pocock, Taulant Mehmeti, Mergim Krasniqi, Kolos Pukaj, Bujar Dervishaj, Arta Shehu Zaimi and Edon Bajrami.


We brainstormed for about 3-4 hours to decide on the project. We discussed many ideas, ranging from the Doppler effect to GUI interfaces for phone calls. Finally we ended up making a project that links your PC with your phone, so you don't have to use both when you need to add a contact, make a call or even send text messages. We called it the Phone Client project.


You can check our work online:

Phone Client

It was a challenge for us because it was our first time working on Debian.

Projects that other girls worked on:

Vasudev Kamath: Biboumi - A XMPP - IRC Gateway

Sun, 11 Mar 2018 05:19:00 +0000

IRC is a communication mode (technically a communication protocol) used by many Free Software projects for communication and collaboration. It is serving these projects well even 30 years after its inception. Though I'm pretty much okay with IRC, I had the problem of not being able to use it from my mobile phone. The main problem is the inconsistent network connection, whereas IRC needs to be always connected. This is where I came across Biboumi. Biboumi by itself does not have anything to do with mobile phones; it's just a gateway which allows you to connect to an IRC channel as if it were an XMPP MUC room, from any XMPP client. The benefit of this is that it lets you enjoy some XMPP features in your IRC channel (not all, but those which can be mapped). I run Biboumi with my ejabberd instance, and thereby I can now connect to some of the Debian IRC channels directly from my phone using the Conversations XMPP client for Android. Biboumi is packaged for Debian; though I'm a co-maintainer of the package, most of the hard work of keeping the package in shape is done by Jonas Smedegaard. It is also available for stretch-backports (though slightly outdated, as it's not packaged by us for backports). Once you install the package, copy the example configuration file from /usr/share/doc/biboumi/examples/example.conf to /etc/biboumi/biboumi.cfg and modify the values as needed. Below is my sample file with the password redacted. hostname=biboumi.localhost password=xxx db_name=/var/lib/biboumi/biboumi.sqlite #log_file=/var/log/biboumi/biboumi.log log_level=0 port=8888 realname_customization=true realname_from_jid=false An explanation of all the keys and values in the configuration file is available in the man page (man biboumi). Biboumi is configured as an external component of the XMPP server. In my case I'm using ejabberd to host my XMPP service. Below is the configuration needed to allow biboumi to connect to ejabberd. listen: - port: 8888 ip: "" module: ejabberd_service access: all hosts: "biboumi.localhost": password: xxx The password field in the biboumi configuration should match the password value in your XMPP server configuration. After doing the above configuration, reload ejabberd (or your XMPP server) and start biboumi. The biboumi package provides a systemd service file, so you might need to enable it first. That's it: now you have an XMPP-to-IRC gateway ready. You might notice that I'm using a local host name for the hostname key, as well as in the ip field of the ejabberd configuration. This is because TLS support was added to the biboumi Debian package only after the 7.2 release, as botan 2.x was not available till that point in De[...]
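Joining a channel from the XMPP side then uses biboumi's percent-based addressing: the IRC channel and server form the local part of a MUC JID on the gateway host. With the hostname from the configuration above, joining #debian on OFTC would look like this from your XMPP client:

#debian%irc.oftc.net@biboumi.localhost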

Jeremy Bicha: webkitgtk in Debian Stretch: Report Card

Sat, 10 Mar 2018 17:25:58 +0000

webkitgtk is the GTK+ port of WebKit. webkitgtk provides web functionality for many things including GNOME Online Accounts’ login panels; Evolution’s HTML email editor and viewer; and the engine for the Epiphany web browser (also known as GNOME Web). Last year, I announced here that Debian 9 “Stretch” included the latest version of webkitgtk (Debian’s package is named webkit2gtk). At the time, I hoped that Debian 9 would get periodic security and bugfix updates. Nine months later, let’s see how we’ve been doing. Release History Debian 9.0, released June 17, 2017, included webkit2gtk 2.16.3 (up to date). Debian 9.1 was released July 22, 2017 with no webkit2gtk update (2.16.5 was the current release at the time). Debian 9.2, released October 8, 2017, included 2.16.6 (There was a 2.18.0 release available then but for the first stable update, we kept it simple by not taking the brand new series.) Debian 9.3 was released December 9, 2017 with no webkit2gtk update (2.18.3 was the current release at the time). Debian 9.4 released March 10, 2018 (today!), includes 2.18.6 (up to date). Release Schedule webkitgtk development follows the GNOME release schedule and produces new major updates every March and September. Only the current stable series is supported (although sometimes there can be a short overlap; 2.14.6 was released at the same time as 2.16.1). Distros need to adopt the new series every six months. Like GNOME, webkitgtk uses even numbers for stable releases (2.16 is a stable series, 2.16.3 is a point release in that series, but 2.17.3 is a development release leading up to 2.18, the next stable series). There are webkitgtk bugfix releases, approximately monthly. Debian stable point releases happen approximately every two or three months (the first point release was quicker). In a few days, webkitgtk 2.20 will be released. Debian 9.5 will need to include 2.20.1 (or 2.20.2) to keep users on a supported release. Report Card From five Debian 9 releases, we have been up to date in 2 or 3 of them (depending on how you count the 9.2 release). Using a letter grade scale, I think I’d give Debian a B or B- so far. But this is significantly better than Debian 8 which offered no webkitgtk updates at all except through backports. In my grading, Debian could get a A- if we consistently updated webkitgtk in these point releases. To get a full A, I think Debian would need to push the new webkitgtk updates (after a brief delay for regression testing) directly as security updates without waiting for point releases. Although that proposal has been rejected for Debian 9, I t[...]
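To check which series a given machine is on, query the runtime library package (the version string below is illustrative):

$ dpkg -s libwebkit2gtk-4.0-37 | grep ^Version
Version: 2.18.6-1~deb9u1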

Andrew Shadura: Say no to Slack, say yes to Matrix

Sat, 10 Mar 2018 13:50:00 +0000

Of all proprietary chatting systems, Slack has always seemed one of the worst to me. Not only is it a closed proprietary system with no sane clients, open source or not, but it is not just one walled garden, as Facebook or WhatsApp are, but a constellation of walled gardens, isolated from each other. To be able to participate in multiple Slack communities, the user has to create multiple accounts and keep multiple chat windows open all the time. Federation? Self-hosting? Owning your data? None of those are a thing in Slack. Until recently, it was possible to at least keep the logs of all conversations locally by connecting to the chat using IRC or XMPP if the gateway was enabled. Now, with Slack shutting down gateways, not only can you not keep the logs on your computer, you also cannot use a client of your choice to connect to Slack. They also began changing the bots API, which was likely the reason the Matrix-to-Slack gateway didn’t work properly at times. The issue has since resolved itself, but Slack doesn’t give any guarantees the gateway will continue working, and obviously they aren’t really interested in keeping it working. So, following Gunnar Wolf’s advice (consider also reading this article by Megan Squire), I recommend you stop using Slack. If you prefer an isolated chat system with the features Slack provides, and you can self-host, consider Mattermost or Rocket.Chat. Both seem to provide more or less the same features as Slack, but don’t lock you in, and you can choose to either use their paid cloud offering, or run it on your own server. We’ve been using Mattermost at Collabora since July last year, and while it’s not perfect, it’s not a bad piece of software. If you would prefer a system you can federate, you may be interested to have a look at Matrix. Matrix is an open decentralised protocol and ecosystem, which architecturally looks similar to XMPP, but uses different technologies and offers a richer and more modern baseline, including VoIP, end-to-end encryption, decentralised history and content storage, easy bot integration and more. The web client for Matrix, Riot, is comparable to Slack, but unlike Slack, there are more clients you can use, including Weechat, libpurple, a bunch of Qt-based clients and, importantly, Riot for Android and iOS. You don’t have to self-host a Matrix homeserver, since matrix.org runs one you can use, but it’s quite easy to run one if you decide to, and you don’t even have to migrate your existing chats — you just join them from accounts on your own homeserver, and that’s it[...]

Michael Stapelberg: dput usability changes

Sat, 10 Mar 2018 09:00:00 +0000

dput-ng ≥ 1.16 contains two usability changes which make uploading easier:

  1. When no arguments are specified, dput-ng auto-selects the most recent .changes file (with confirmation).
  2. Instead of erroring out when detecting an unsigned .changes file, debsign(1) is invoked to sign the .changes file before proceeding.

With these changes, after building a package, you just need to type dput (in the correct directory of course) to sign and upload it.
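In practice the whole upload then collapses to two commands (illustrative; assumes dput-ng ≥ 1.16 and hypothetical package names):

$ debuild -us -uc   # build without signing; dput will invoke debsign
$ dput              # auto-selects ../foo_1.0-1_amd64.changes, signs and uploads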

Gunnar Wolf: On the demise of Slack's IRC / XMPP gateways

Sat, 10 Mar 2018 01:23:08 +0000


I have grudgingly joined three Slack workspaces, due to me being part of projects that use it as a communications center for their participants. Why grudgingly? Because there is very little that it adds to well-established communications standards that we have had for years, even decades.

On this topic, I must refer you to the talk and article presented by Megan Squire, one of the clear highlights of my participation last year at the 13th International Conference on Open Source Systems (OSS2017): «Considering the Use of Walled Gardens for FLOSS Project Communication». Please do have a good read of this article.

Thing is, after several years of playing open with probably the best integration gateway I have seen, Slack is joining the Embrace, Extend and Extinguish-minded companies. Of course, I strongly doubt they will manage to extinguish XMPP or IRC, but they want to strengthen the walls around their walled garden...

So, once they have established their presence among companies and developer groups alike, Slack is shutting down their gateways to XMPP and IRC, arguing it's impossible to achieve feature-parity via the gateway.

Of course, I guess all of us recognize and understand there has long not been feature parity. But that's a feature, not a bug! I expressly dislike the abuse of emojis and images inside what's supposed to be a work-enabling medium. Of course, connecting to Slack via IRC, I just don't see the content not meant for me.

The real motivation is they want to control the full user experience.

Well, they have lost me as a user. The day my IRC client fails to connect to Slack, I will delete my user account. They already have a record of all of my interactions using their system. Maybe I won’t be able to move any of the groups I am part of away from Slack – but many of us can help create a flood.

Say no to predatory tactics. Say no to Embrace, Extend and Extinguish. Say no to Slack.

Adnan Hodzic: Hello world!

Fri, 09 Mar 2018 22:08:10 +0000


Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Sven Hoexter: half-assed Oracle JRE/JDK 10 support for java-package

Fri, 09 Mar 2018 18:22:29 +0000

I spent an hour adding very basic support for the upcoming Java 10 to my fork of java-package. It still has some rough edges, and the list of binary executables managed via the alternatives system requires some major cleanup. I think once Java 8 is EOL in September it's a good point to consolidate and strip everything except Java 11 support. If someone requires an older release they can still go back to an earlier version, but by then we won't see any new releases of Java 8, 9 or 10, not to speak of even older stuff.

[sven@digital lib (master)]$ java -version
java version "10" 2018-03-20
Java(TM) SE Runtime Environment 18.3 (build 10+46)
Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10+46, mixed mode)
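For reference, the java-package workflow itself is unchanged: feed make-jpkg the upstream tarball and install the resulting package (the tarball name follows Oracle's Java 10 naming; the generated .deb name may differ):

$ make-jpkg jdk-10_linux-x64_bin.tar.gz
$ sudo dpkg -i oracle-java10-jdk_10_amd64.deb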