Planet Debian - https://planet.debian.org/



Reproducible builds folks: Reproducible Builds: Weekly report #143

Tue, 23 Jan 2018 03:34:56 +0000

Here's what happened in the Reproducible Builds effort between Sunday January 14 and Saturday January 20 2018:

Upcoming events

Packages reviewed and fixed, and bugs filed

During reproducibility testing, 83 FTBFS bugs have been detected and reported by Adrian Bunk.

Reviews of unreproducible packages

56 package reviews have been added, 44 have been updated and 19 have been removed this week, adding to our knowledge about identified issues.

diffoscope development

Furthermore, Juliana Oliveira has been working in a separate branch on parallelizing diffoscope.

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.




Laura Arjona Reina: It’s 2018, where’s my traditional New Year Plans post?

Tue, 23 Jan 2018 00:38:37 +0000

I closed my eyes, opened them again, a new year began, and we're almost finishing January already. Time flies. In this article I'll post some updates about my life with computers, software and free software communities. It's more a "what I've been doing" than a "new year plans" post… it seems that I'm learning not to make so many plans (life comes to break them anyway!).

At home

My home server is still running Debian Jessie. I'm happy that it just works and my services are up, but I'm sad that I couldn't find time for an upgrade to Debian stable (which is now Debian 9 Stretch) and maybe reinstall it with another config. I have lots of photos and videos to upload to my GNU MediaGoblin instances, but also couldn't find time to do it (nor to print some of them, which was a plan for 2017, and the files still sleep in external hard drives or DVDs). So, this is a TODO item that crossed into the new year (yay! now I have almost 12 months ahead to try to complete it!). I'll try to get this done before summer. I am considering installing my own pump.io instance but I'm not sure it's good to place it on the same machine as the other services. We'll see.

I bought a new laptop (well, second hand, but in very good condition), a Lenovo X230, and this is now my main computer. It's an i5 with 8 GB RAM. Wow, a modern computer at home! I'm very, very happy with it, with its screen, keyboard, and everything. It's running a clean install of Debian 9 stable with the KDE Plasma Desktop and works great. It is not heavy at all, so I carry it to work and use it on public transport (when I can sit) for my contributions to free software.

My phone (Galaxy S III with Lineage OS 14, which is Android 7) fell down and the touchscreen broke (I can see the image but it is unresponsive to touch). When booted normally, the phone is recognized by the PC as storage, and thus I could recover most of the data on it, but it's not recognized by adb (as when USB debugging is disabled). It is recognized by adb when booted into recovery (TWRP), though. I tried to enable USB debugging in several ways from adb while in recovery, but couldn't. I could switch off the wifi, though, so when I boot the phone it does not receive new messages, etc. I bought an OTG cable but I have no wireless mouse at home and couldn't make it work with a normal USB mouse. I've given up for now until I find a wireless mouse or have more time, and temporarily returned to my old Galaxy Ace (with CyanogenMod 7, which is Android 2.3.7). I've looked at new phones, but I don't like that all of them have integrated batteries, the screens are too big, and all of them are very expensive (I know they are hi-tech machines, but I don't want to carry such valuable stuff in my pocket all the time), among other things. I still need to find time to go shopping with the list of phones on which I can install Lineage OS (I already visited some stores but wasn't convinced by the prices, or they had no suitable models).

My glasses broke (in a different incident than the phone) and I used old ones for two weeks, because in the middle of getting the new ones made I had some family issues to take care of. So putting time into reading or writing in front of the computer has been a bit uncomfortable and I tried to avoid it in the last weeks. Now I have new glasses and I can see very well, so I'm returning to my computer TODO.

I've given up the battle against iThings at home (I lost). I don't touch them, but other members of the family use them. I'm considering contributing info to Debian about testing things, or maintaining some wiki pages about accessing iThings from Debian etc., but will leave that for summer, maybe later. Now I just try not to get depressed about this.

At work

We still have servers running Debian Wheezy, which is in LTS support until May. I'm confident that we'll upgrade before Wheezy reaches end of life, but frankly, looking at my work plan, I'm not sure when. Every month seems packed with other stuff. I've taken some weeks[...]
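
As a side note for anyone in the same situation: when the touchscreen is dead but TWRP still shows up in adb, files can usually be copied straight from recovery. A minimal sketch (the storage path is an assumption and varies per device/ROM):

# in TWRP recovery the device should be listed with state "recovery"
adb devices

# copy photos off the internal storage
adb pull /sdcard/DCIM ./phone-backup/DCIM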



Benjamin Mako Hill: Introducing Computational Methods to Social Media Scientists

Tue, 23 Jan 2018 00:38:19 +0000

The ubiquity of large-scale data and improvements in computational hardware and algorithms have enabled researchers to apply computational approaches to the study of human behavior. One of the richest contexts for this kind of work is social media datasets like Facebook, Twitter, and Reddit. We were invited by Jean Burgess, Alice Marwick, and Thomas Poell to write a chapter about computational methods for the Sage Handbook of Social Media. Rather than simply listing what sorts of computational research has been done with social media data, we decided to use the chapter both to introduce a few computational methods and to use those methods to analyze the field of social media research.

(Figure: a "hairball" diagram from the chapter illustrating how research on social media clusters into distinct citation network neighborhoods.)

Explanations and Examples

In the chapter, we start by describing the process of obtaining data from web APIs and use as a case study our process for obtaining bibliographic data about social media publications from Elsevier's Scopus API. We follow this same strategy in discussing social network analysis, topic modeling, and prediction. For each, we discuss some of the benefits and drawbacks of the approach and then provide an example analysis using the bibliographic data. We think that our analyses provide some interesting insight into the emerging field of social media research. For example, we found that social network analysis and computer science drove much of the early research, while recently consumer analysis and health research have become more prominent. More importantly though, we hope that the chapter provides an accessible introduction to computational social science and encourages more social scientists to incorporate computational methods in their work, either by gaining computational skills themselves or by partnering with more technical colleagues. While there are dangers and downsides (some of which we discuss in the chapter), we see the use of computational tools as one of the most important and exciting developments in the social sciences.

Steal this paper!

One of the great benefits of computational methods is their transparency and their reproducibility. The entire process, from data collection to data processing to data analysis, can often be made accessible to others. This has both scientific benefits and pedagogical benefits. To aid in the training of new computational social scientists, and as an example of the benefits of transparency, we worked to make our chapter pedagogically reproducible. We have created a permanent website for the chapter at https://communitydata.cc/social-media-chapter/ and uploaded all the code, data, and material we used to produce the paper itself to an archive in the Harvard Dataverse. Through our website, you can download all of the raw data that we used to create the paper, together with code and instructions for how to obtain, clean, process, and analyze the data. Our website walks through what we have found to be an efficient and useful workflow for doing computational research on large datasets. This workflow even includes the paper itself, which is written using LaTeX + knitr. These tools let changes to data or code propagate through the entire workflow and be reflected automatically in the paper itself. If you use our chapter for teaching about computational methods (or if you find bugs or errors in our work), please let us know!
We want this chapter to be a useful resource, will happily consider any changes, and have even created a git repository to help with managing these changes! The book chapter and this blog post were written with Jeremy Foote and Aaron Shaw. You can read the book chapter here. This blog post was originally published on the Community Data Science Collective blog. [...]



Bits from Debian: Mentors and co-mentors for Debian's Google Summer of Code 2018

Mon, 22 Jan 2018 23:50:00 +0000


Debian is applying as a mentoring organization for the Google Summer of Code 2018, an internship program open to university students aged 18 and up.

Debian already has a wide range of projects listed but it is not too late to add more or to improve the existing proposals. Google will start reviewing the ideas page over the next two weeks and students will start looking at it in mid-February.

Please join us and help extend Debian! You can consider listing a potential project for interns or listing your name as a possible co-mentor for one of the existing projects on Debian's Google Summer of Code wiki page.

At this stage, mentors are not obliged to commit to accepting an intern, but it is important for potential mentors to be listed to get the process started. You will have the opportunity to review student applications in March and April, and to give the administrators a definite decision in early April if you wish to proceed.

Mentors, co-mentors and other volunteers can follow an intern through the entire process or simply volunteer for one phase of the program, such as helping recruit students in a local university or helping test the work completed by a student at the end of the summer.

Participating in GSoC has many benefits for Debian and the wider free software community. If you have questions, please come and ask us in the #debian-outreach IRC channel or on the debian-outreach mailing list.




Lars Wirzenius: Ick: a continuous integration system

Mon, 22 Jan 2018 18:30:24 +0000

TL;DR: Ick is a continuous integration or CI system. See http://ick.liw.fi/ for more information. A more verbose version follows.

First public version released

The world may not need yet another continuous integration (CI) system, but I do. I've been unsatisfied with the ones I've tried or looked at. More importantly, I am interested in a few things that are more powerful than what I've ever even heard of. So I've started writing my own.

My new personal hobby project is called ick. It is a CI system, which means it can run automated steps for building and testing software. The home page is at http://ick.liw.fi/, and the download page has links to the source code, .deb packages, and an Ansible playbook for installing it.

I have now made the first publicly advertised release, dubbed ALPHA-1, version number 0.23. It is of alpha quality, and that means it doesn't have all the intended features and, if any of the features it does have work, you should consider yourself lucky.

Invitation to contribute

Ick has so far been my personal project. I am hoping to make it more than that, and invite contributions. See the governance page for the constitution, the getting started page for tips on how to start contributing, and the contact page for how to get in touch.

Architecture

Ick has an architecture consisting of several components that communicate over HTTPS using RESTful APIs and JSON for structured data. See the architecture page for details.

Manifesto

Continuous integration (CI) is a powerful tool for software development. It should not be tedious, fragile, or annoying. It should be quick and simple to set up, and work quietly in the background unless there's a problem in the code being built and tested.

A CI system should be simple, easy, clear, clean, scalable, fast, comprehensible, transparent, reliable, and boost your productivity to get things done. It should not take a lot of effort to set up, require a lot of hardware just for the CI, or need frequent attention to keep working, and developers should never have to wonder why something isn't working.

A CI system should be flexible to suit your build and test needs. It should support multiple types of workers, as far as CPU architecture and operating system version are concerned.

Also, like all software, CI should be fully and completely free software and your instance should be under your control. (Ick is little of this yet, but it will try to become all of it. In the best possible taste.)

Dreams of the future

In the long run, I would like ick to have features like the ones described below. It may take a while to get all of them implemented.

A build may be triggered by a variety of events. Time is an obvious event, as is the source code repository for the project changing. More powerfully, any build dependency changing, regardless of whether the dependency comes from another project built by ick or a package from, say, Debian: ick should keep track of all the packages that get installed into the build environment of a project, and if any of their versions change, it should trigger the project build and tests again.

Ick should support building in (or against) any reasonable target, including any Linux distribution, any free operating system, and any non-free operating system that isn't brain-dead.

Ick should manage the build environment itself, and be able to do builds that are isolated from the build host or the network. This partially works: one can ask ick to build a container and run a build in the container. The container is implemented using systemd-nspawn. This can be improved upon, however. (If you think Docker is the only way to go, please contribute support for that.)

Ick should support any workers that it can control over ssh or a serial port or other such neutral communication channel, without having to install an agent of any kind on them. Ick won't assume that it can have, say, a full Java run time, so that the worker can be, say, a micro c[...]
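
For the curious, an isolated build of the kind described above can be approximated by hand with systemd-nspawn; a rough sketch (this is not ick's actual implementation, and the chroot path is just an example):

# create a minimal Debian build environment once
sudo debootstrap stretch /var/lib/machines/build http://deb.debian.org/debian

# run a build inside it, with the source tree bind-mounted and the network disabled
sudo systemd-nspawn -D /var/lib/machines/build \
    --bind="$PWD":/src \
    --private-network \
    /bin/sh -c 'cd /src && ./configure && make'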



Thomas Lange: FAI.me build service now supports backports

Mon, 22 Jan 2018 13:00:16 +0000


The FAI.me build service now supports packages from the backports repository. When selecting the stable distribution, you can also enable backports packages. The customized installation image will then use the kernel from backports (currently 4.14) and you can add additional packages by appending /stretch-backports to the package name, e.g. notmuch/stretch-backports.
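
For comparison, on an already installed stretch system the same effect is achieved manually roughly like this (a sketch of the standard backports workflow, not of what FAI.me runs internally):

echo 'deb http://deb.debian.org/debian stretch-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt-get update

# install a package from backports, e.g. notmuch
sudo apt-get install -t stretch-backports notmuch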

Currently, the FAI.me service offers images built with Debian stable, stable with backports, and Debian testing.

If you have any ideas for extensions or any feedback, send an email to FAI.me =at= fai-project.org

FAI.me




Dirk Eddelbuettel: Rblpapi 0.3.8: Strictly maintenance

Mon, 22 Jan 2018 12:47:00 +0000


Another Rblpapi release, now at version 0.3.8, arrived on CRAN yesterday. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the eighth release since the package first appeared on CRAN in 2016. This release wraps up a few smaller documentation and setup changes, but also includes an improvement to the (less frequently used) subscription mode which Whit cooked up over the weekend. Details below:

Changes in Rblpapi version 0.3.8 (2018-01-20)

  • The 140 day limit for intra-day data histories is now mentioned in the getTicks help (Dirk in #226 addressing #215 and #225).

  • The Travis CI script was updated to use run.sh (Dirk in #226).

  • The install_name_tool invocation under macOS was corrected (@spennihana in #232).

  • The blpAuthenticate help page has additional examples (@randomee in #252).

  • The blpAuthenticate code was updated and improved (Whit in #258 addressing #257).

  • The jump in version number was an oversight; this should have been 0.3.7.

And only while typing up these notes do I realize that I fat-fingered the version number. This should have been 0.3.7. Oh well.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.




Daniel Pocock: Keeping an Irish home warm and free in winter

Mon, 22 Jan 2018 09:20:20 +0000


The Irish Government's Better Energy Homes Scheme gives people grants from public funds to replace their boiler and install a zoned heating control system.

Having grown up in Australia, I think it is always cold in Ireland and would be satisfied with a simple control switch with a key to make sure nobody ever turns it off, but that isn't what they had in mind for these energy efficiency grants.

Having recently stripped everything out of the house, right down to the brickwork and floorboards in some places, I'm cautious about letting any technologies back in without checking whether they are free and trustworthy.


This issue would also appear to fall under the scope of FSFE's Public Money Public Code campaign.

Looking at the last set of heating controls in the house, they have been there for decades. Therefore, I can't help wondering, if I buy some proprietary black box today, will the company behind it still be around when it needs a software upgrade in future? How many of these black boxes have wireless transceivers inside them that will be compromised by security flaws within the next 5-10 years, making another replacement essential?

With free and open technologies, anybody who is using them can potentially make improvements whenever they want. Every time a better algorithm is developed, if all the homes in the country start using it immediately, we will always be at the cutting edge of energy efficiency.

Are you aware of free and open solutions that qualify for this grant funding? Can a solution built with devices like Raspberry Pi and Arduino qualify for the grant?

Please come and share any feedback you have on the FSFE discussion list (join, reply to the thread).





Norbert Preining: Continuous integration testing of TeX Live sources

Mon, 22 Jan 2018 09:15:48 +0000

The TeX Live sources consist in total of around 15000 files and 8.7M lines (see git stats). They integrate several upstream projects, including big libraries like FreeType, Cairo, and Poppler. Changes come in from a variety of sources: external libraries, TeX-specific projects (LuaTeX, pdfTeX etc.), as well as our own adaptations and changes/patches to upstream sources. For quite some time I have wanted to have continuous integration (CI) testing, but since our main repository is based on Subversion, the usual (easy, or at least the one I know) route via GitHub and one of the CI testing providers didn't come to my mind – until last week.

Over the weekend I have set up CI testing for our TeX Live sources by using the following ingredients: git-svn for checkout, GitHub for hosting, Travis-CI for testing, and a cron job that does the connection. To be more specific:

git-svn: I use git-svn to check out only the source part of the (otherwise far too big) Subversion repository onto my server. This is similar to the git-svn checkout of the whole of TeX Live as I reported here, but contains only the source part.

Github: The git-svn checkout is pushed to the project TeX-Live/texlive-source on GitHub.

Travis-CI: The CI testing is done in the TeX-Live/texlive-source project on Travis-CI (who are offering free services for open source projects, thanks!).

Although this sounds easy, there are a few stumbling blocks: First of all, the .travis.yml file is not contained in the main Subversion repository. So adding it to the master tree that is managed via git-svn does not work, because the history is rewritten (git svn rebase). My solution was to create a separate branch travis-ci which adds only the .travis.yml file and merges master. Travis-CI by default tests all branches, and does not test those not containing a .travis.yml, but to be sure I added an except clause stating that the master branch should not be tested. This way other developers can try different branches, too. The full .travis.yml can be checked on GitHub, here is the current status:

# .travis.yml for texlive-source CI building
# Norbert Preining
# Public Domain
language: c
branches:
  except:
  - master
before_script:
  - find . -name \*.info -exec touch '{}' \;
before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y libfontconfig-dev libx11-dev libxmu-dev libxaw7-dev
script: ./Build

What remains is stitching these things together by adding a cron job that regularly does git svn rebase on the master branch, merges the master branch into the travis-ci branch, and pushes everything to GitHub. The current cron job is here:

#!/bin/bash
# cron job for updating texlive-source and pushing it to github for ci
set -e
TLSOURCE=/home/norbert/texlive-source.git
GIT="git --no-pager"
quiet_git() {
    stdout=$(tempfile)
    stderr=$(tempfile)
    if ! $GIT "$@" >$stdout 2>$stderr; then
        echo "STDOUT of git command:"
        cat $stdout
        echo "************"
        cat $stderr >&2
        rm -f $stdout $stderr
        exit 1
    fi
    rm -f $stdout $stderr
}
cd $TLSOURCE
quiet_git checkout master
quiet_git svn rebase
quiet_git checkout travis-ci
# don't use [skip ci] here because we only build the
# last commit, which would stop building
quiet_git merge master -m "merging master"
quiet_git push --all

With this setup we get CI testing of our changes in the TeX Live sources, and in the future maybe some developers will use separate branches to get testing there, too. Enjoy. [...]
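
As an aside, the initial git-svn mirror described above boils down to something like the following sketch (the Subversion path is my assumption about the TeX Live repository layout; adjust as needed):

# mirror only the source part of the Subversion repository
git svn clone svn://tug.org/texlive/trunk/Build/source texlive-source.git
cd texlive-source.git

# branch that carries only the .travis.yml on top of master
git checkout -b travis-ci master
git add .travis.yml
git commit -m "add Travis CI configuration"

# push both branches to GitHub
git remote add github git@github.com:TeX-Live/texlive-source.git
git push github master travis-ci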



Shirish Agarwal: PrimeZ270-p, Intel i7400 review and Debian – 1

Mon, 22 Jan 2018 05:23:06 +0000

This is going to be a biggish one as well. It is a continuation from my last blog post.

Before diving into the installation, I had been reading Matthew Garrett's work for quite a while. Thankfully most of his blog posts do get mirrored on planet.debian.org, hence it is easy to get some idea of what needs to be done, although I have told him (I think I even shared this here) that he should somehow make his site more easily navigable. Trying to find posts on either 'GPT' or 'UEFI' and to have those posts sorted by date, ascending or descending, is not possible; at least I couldn't find a way to do it. The closest I could come to is using '$keyword' site:https://mjg59.dreamwidth.org/ via a search engine and going through the entries shared therein. This doesn't mean I don't value his contribution. It is in fact the opposite. AFAIK he was one of the first people who drew the community's attention when UEFI came in and only Microsoft Windows could be booted on such machines, nothing else. I may be wrong, but AFAIK he was the first one to talk about having a shim and was part of getting people to be part of the shim process. While I'm sure Matthew's understanding may have evolved significantly from what he had shared before, it was two specific blog posts that I had to re-read before trying to install MS-Windows and then a Debian GNU/Linux system on it.

I went to a friend's house who had Windows 7 running at his end, used diskpart and did the change to GPT following a Windows TechNet article. I had to go the GPT way as I understood that MS-Windows takes all four primary partitions for itself, leaving nothing for any other operating system to use. I did the conversion to GPT and went with MS-Windows 10, as my current motherboard and all future motherboards from Intel Gen7/Gen8 onwards do not support anything less than Windows 10. I did see an unofficial patch floating around on GitHub somewhere but have now lost the reference to it. I had read some of the bug reports of the repo, which seemed to suggest it was still a work in progress.

Now this is where it starts becoming a bit… let's say interesting. A friend/client of mine offered me a job to review MS-Windows 10, with his product keys of course. I was a bit hesitant as it had been a long time since I had worked with MS-Windows and I didn't know if I could do it or not; the other reason was a suspicion that I might like it too much. While I did review it, I found:

a. It is one heck of a piece of bloatware – I had thought MS-Windows would have learned this by now, but no, they still have to learn that adware and bloatware aren't solutions. I still can't get my head wrapped around how 4.1 GB of an MS-Windows ISO gets extracted to 20 GB and you still have to install shit-loads of third-party tools to actually get anything done. Just amazed (and not in a good way). Just to share an example, I still had to get something like Revo Uninstaller, as MS-Windows even to date hasn't learned to uninstall programs cleanly and needs a tool like that to clean the registry and other places to remove the titbits left along the way. Edit/Update – It still doesn't have the Fall Creators Update, which is supposed to be another 4 GB+ ISO, and god only knows how much space that will take.

b. It's still not gold – With all the hoopla around MS-Windows 10 that I had been hearing, and the ads I had been seeing, I was under the impression that MS-Windows 10 had turned gold, i.e. it had a release the way Debian will have 'buster', something around next year, probably around or after DebConf 2019 is held. The next Windows 10 release from Microsoft is expected around July 2018, so it's still a few months off.

c. I had read an insightful article a few years ago by a Junior Microsoft e[...]
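
For reference, the diskpart conversion mentioned above looks roughly like this from an elevated command prompt (a sketch only: clean wipes the selected disk, so this is for an empty or expendable disk, and the TechNet article remains the authoritative source):

diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> clean
DISKPART> convert gpt
DISKPART> exit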



Louis-Philippe Véronneau: French Gender-Neutral Translation for Roundcube

Mon, 22 Jan 2018 05:00:00 +0000


Here's a quick blog post to tell the world I'm now doing a French gender-neutral translation for Roundcube.

A while ago, someone wrote on the Riseup translation list to complain about the current fr_FR translation. French is indeed a very gendered language and it is commonplace in radical spaces to use gender-neutral terminologies.

So yeah, here it is: https://github.com/baldurmen/roundcube_fr_FEM

I haven't tested the UI integration yet, but I'll do that once the Riseup folks integrate it to their Roundcube instance.




Dirk Eddelbuettel: #15: Tidyverse and data.table, sitting side by side ... (Part 1)

Sun, 21 Jan 2018 22:40:00 +0000

Welcome to the fifteenth post in the rarely rational R rambling series, or R4 for short. There are two posts I have been meaning to get out for a bit, and hope to get to shortly---but in the meantime we are going to start something else.

Another longer-running idea I had was to present some simple application cases with (one or more) side-by-side code comparisons. Why? Well, at times it feels like R, and the R community, are being split. You're either with one (increasingly "religious" in their defense of their deemed-superior approach) side, or the other. And that is of course utter nonsense. It's all R after all.

Programming, just like other fields using engineering methods and thinking, is about making choices, and trading off between certain aspects. A simple example is the fairly well-known trade-off between memory use and speed: think e.g. of a hash map allowing for faster lookup at the cost of some more memory. Generally speaking, solutions are rarely limited to just one way, or just one approach. So it pays off to know your tools, and choose wisely among all available options. Having choices is having options, and those tend to have non-negative premiums to take advantage of. Locking yourself into one and just one paradigm can never be better.

In that spirit, I want to (eventually) show a few simple comparisons of code being done two distinct ways. One obvious first candidate for this is the gunsales repository with some R code which backs an earlier NY Times article. I got involved for a similar reason, and updated the code from its initial form. Then again, this project also helped motivate what we did later with the x13binary package which permits automated installation of the X13-ARIMA-SEATS binary to support Christoph's excellent seasonal CRAN package (and website) for which we now have a forthcoming JSS paper. But the actual code example is not that interesting / a bit further off the mainstream because of the more specialised seasonal ARIMA modeling.

But then this week I found a much simpler and shorter example, and quickly converted its code. The code comes from the inaugural datascience 1 lesson at the Crosstab, a fabulous site by G. Elliot Morris (who may be the highest-energy undergrad I have come across lately) focussed on political polling, forecasts, and election outcomes. Lesson 1 is a simple introduction, and averages some polls of the 2016 US Presidential Election.
Complete Code using Approach "TV"

Elliot does a fine job walking the reader through his code so I will be brief and simply quote it in one piece:

## Getting the polls
library(readr)
polls_2016 <- read_tsv(url("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"))

## Wrangling the polls
library(dplyr)
polls_2016 <- polls_2016 %>%
    filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
library(lubridate)
polls_2016 <- polls_2016 %>%
    mutate(end_date = ymd(end_date))
polls_2016 <- polls_2016 %>%
    right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date),
                                              max(polls_2016$end_date), by="days")))

## Average the polls
polls_2016 <- polls_2016 %>%
    group_by(end_date) %>%
    summarise(Clinton = mean(Clinton),
              Trump = mean(Trump))
library(zoo)
rolling_average <- polls_2016 %>%
    mutate(Clinton.Margin = Clinton-Trump,
           Clinton.Avg = rollapply(Clinton.Margin, width=14,
                                   FUN=function(x){mean(x, na.rm=TRUE)},
                                   by=1, partial=TRUE, fill=NA, align="right"))

library(ggplot2)
ggplot(rolling_average) +
    geom_line(aes(x=end_date, y=Clinton.Avg), col="blue") +
    geom_[...]



Russ Allbery: New year haul

Sat, 20 Jan 2018 23:08:00 +0000

Some new acquired books. This is a pretty wide variety of impulse purchases, filled with the optimism of a new year with more reading time.

Libba Bray — Beauty Queens (sff)
Sarah Gailey — River of Teeth (sff)
Seanan McGuire — Down Among the Sticks and Bones (sff)
Alexandra Pierce & Mimi Mondal (ed.) — Luminescent Threads (nonfiction anthology)
Karen Marie Moning — Darkfever (sff)
Nnedi Okorafor — Binti (sff)
Malka Older — Infomocracy (sff)
Brett Slatkin — Effective Python (nonfiction)
Zeynep Tufekci — Twitter and Tear Gas (nonfiction)
Martha Wells — All Systems Red (sff)
Helen S. Wright — A Matter of Oaths (sff)
J.Y. Yang — Waiting on a Bright Moon (sff)

Several of these are novellas that were on sale over the holidays; the rest came from a combination of reviews and random on-line book discussions.

The year hasn't been great for reading time so far, but I do have a couple of things ready to review and a third that I'm nearly done with, which is not a horrible start.




Shirish Agarwal: PC desktop build, Intel, spectre issues etc.

Sat, 20 Jan 2018 22:05:19 +0000

This is and would be a longish one.

I have been using desktop computers for around a couple of decades now. My first two systems were an Intel Pentium III and then a Pentium Dual-core, the first one on a Kobian/Mercury motherboard. The motherboards were actually called Mercury, a brand which was later sold to Kobian, which kept the brand name. The motherboards and the CPUs/processors used to be cheap. One could set up a decentish low-end system with display for around INR 40k/-, which seemed reasonable: as a country we had just come out of the non-alignment movement and had also chosen to come out of isolationist tendencies (technological and otherwise). Most middle-class income families got their first taste of computers after Y2K. There were quite a few Y2K incomes, which prompted the Government to lower duties further.

One of the highlights shown in 1991, when satellite TV came, by CNN (probably CNN International) was the coming down of the Berlin Wall. There were many of us who were completely ignorant of world politics or what is/was happening in other parts of the world. Computer systems at those times were considered a luxury item and duties were sky-high (between 1992 and 2001). The launch of Mars Pathfinder and its subsequent successful landing on the Martian surface also catapulted people's imagination about PCs and micro-processors. I can still recall the excitement among young people of my age first seeing the liftoff from Cape Canaveral and then later the processed images from Spirit's cameras showing a desolate desert-type land. We also witnessed the beginnings of the 'International Space Station' (ISS). A few of my friends and I had drunk a lot of Carl Sagan and many other sci-fi coolaids/stories. Star Trek, the movies and the universal values held/shared by them were a major influence on all our lives.

People came to know about citizen-based/distributed science projects, the Y2K fear appeared to be unfounded; all these factors and probably a few more prompted the Government of India to reduce duties on motherboards, processors and components, as well as taking computers out of the restricted list, which led to competition and finally the common man being able to dream of a system sooner rather than later. Y2K also kick-started the beginnings of the Indian software industry, which is the bread and butter of many middle-class men and women who are in the service industry using technology directly or indirectly.

In 2002 I bought my first system, an Intel Pentium III with an i810 chipset (integrated graphics) and 256 MB of SDRAM, which was supposed to be sufficient for the tasks it was being used for: some light gaming, some web mail, watching movies, etc., running on a Mercury board. I don't remember the code-name, partly because the code-names are/were really weird and partly because it is just too long ago. I remember using Windows '98 and trying to install one of the early GNU/Linux variants on that machine. If memory serves right, you had to flick a jumper (like a switch) to use the extended memory.

I do not know/remember what happened, but I think somewhere within a year or two in that time-frame Mercury India filed for bankruptcy and the name and manufacturing were sold to Kobian. After Kobian took over the ownership, it said it would neither honor the 3/5-year warranty nor even do repairs on the motherboards Mercury had sold. This created a lot of bad will against the company and relegated it to the bottom of the pile for both experienced and new system-builders. Also, Mercury motherboards weren't reputed/known to have a long life, although the one I had gave me quite a decent run. The next machine I purchased was a Pentium Dual-core (around 2009/2010[...]



Dirk Eddelbuettel: Rcpp 0.12.15: Numerous tweaks and enhancements

Sat, 20 Jan 2018 21:53:00 +0000

The fifteenth release in the 0.12.* series of Rcpp landed on CRAN today after just a few days of gestation in incoming/. This release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, and the 0.12.14 release in November 2017, making it the nineteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1288 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a pretty large number of pull requests by a wide variety of authors. Most of these pull requests are very focused on a particular issue at hand. One was larger and more ambitious, with some forward-looking code for R 3.5.0; however this backfired a little on Windows and is currently "parked" behind a #define. Full details are below.

Changes in Rcpp version 0.12.15 (2018-01-16)

Changes in Rcpp API:

  • Calls from exception handling to Rf_warning() now correctly set an initial format string (Dirk in #777 fixing #776).

  • The 'new' Date and Datetime vectors now have is_na methods too (Dirk in #783 fixing #781).

  • Protect more temporary SEXP objects produced by wrap (Kevin in #784).

  • Use public R APIs for new_env (Kevin in #785).

  • Evaluation of R code is now safer when compiled against R 3.5 (you also need to explicitly define RCPP_PROTECTED_EVAL before including Rcpp.h). Longjumps of all kinds (condition catching, returns, restarts, debugger exit) are appropriately detected and handled, e.g. the C++ stack unwinds correctly (Lionel in #789). [ Committed but subsequently disabled in release 0.12.15 ]

  • The new function Rcpp_fast_eval() can be used for performance-sensitive evaluation of R code. Unlike Rcpp_eval(), it does not try to catch errors with tryEval in order to avoid the catching overhead. While this is safe thanks to the stack unwinding protection, this also means that R errors are not transformed to an Rcpp::exception. If you are relying on error rethrowing, you have to use the slower Rcpp_eval(). On old R versions Rcpp_fast_eval() falls back to Rcpp_eval() so it is safe to use against any version of R (Lionel in #789). [ Committed but subsequently disabled in release 0.12.15 ]

  • Overly-clever checks for NA have been removed (Kevin in #790).

  • The included tinyformat has been updated to the current version, Rcpp-specific changes are now more isolated (Kirill in #791).

  • Overly picky fall-through warnings by gcc-7 regarding switch statements are now pre-empted (Kirill in #792).

  • Permit compilation on ANDROID (Kenny Bell in #796).

  • Improve support for NVCC, the CUDA compiler (Iñaki Ucar in #798 addressing #797).

  • Speed up tests for NA and NaN (Kirill and Dirk in #799 and #800).

  • Rearrange stack unwind test code, keep test disabled for now (Lionel in #801).

  • Further condition away protect unwind behind #define (Dirk in #802).

Changes in Rcpp Attributes:

  • Addressed a missing Rcpp namespace prefix when generating a C++ interface (James Balamuta in #779).

Changes in Rcpp Documentation:

  • The Rcpp FAQ now shows Rcpp::Rcpp.plugin.maker() and not the outdated ::: use for accessing non-exported functions.

Thanks to CRANberries, you can also look at a diff to the previous re[...]



Norbert Preining: TLCockpit v0.8

Sat, 20 Jan 2018 14:32:05 +0000


Today I released v0.8 of TLCockpit, the GUI front-end for the TeX Live Manager tlmgr. I spent the winter holidays updating and polishing it, and also debugging problems that users have reported. Hopefully the new version works better for all.

If you are looking for a general introduction to TLCockpit, please see the blog introducing it. Here I only want to introduce the changes made since the last release:

  • add debug facility: It is now possible to pass -d to tlcockpit to activate debugging. There is also -dd for more verbose debugging.
  • select mirror facility: The edit screen for the repository setting now allows selecting from the current list of mirrors (see the screenshot in the original post).
  • initial loading speedup: Till now we used to parse the json output of tlmgr, which included everything the whole database contains. We now load the initial minimal information via info --data (see the sketch after this list) and load additional data on demand when details for a package are shown. This should especially make a difference on systems without a compiled json Perl library available.
  • fixed self update: In the previous version, updating the TeX Live Manager itself was not properly working – it was updated but the application itself became unresponsive afterwards. This is hopefully fixed (although this is really tricky).
  • status indicator: The status indicator has moved from the menu bar (where it was somewhat of a stranger) to below the package listing, and now also includes the currently running command.
  • nice spinner: Only eye-candy, but I added a rotating spinner while loading the database, updates, backups, or doing postactions. The screenshots in the original post also show the new location of the status indicator and the additional information provided.
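
For the curious, the minimal-information load mentioned above corresponds to a tlmgr call along these lines; the exact field names are an assumption from memory and may differ between tlmgr versions:

tlmgr info --data name,localrev,shortdesc,installed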

I hope that this version is more reliable, stable, and easier to use. As usual, please use the issue page of the github project to report problems.

TeX Live should contain the new version starting from tomorrow.

Enjoy.




Eddy Petrișor: Suppressing color output of the Google Repo tool

Fri, 19 Jan 2018 06:51:11 +0000

On Windows, in the cmd shell, the color control characters generated by the Google Repo tool (or its Windows port made by ESRLabs) or git appear as garbage. Unfortunately, the Google Repo tool, besides the fact it has a non-google-able name, lacks documentation regarding its options, so sometimes the only way to find the option I want is to look in the code.
To avoid repeatedly looking over the code to dig this up, future self, here is how you disable color output in the repo tool with the info subcommand:
repo --color=never info
Other options are 'auto' and 'always', but for some reason 'auto' does not do the right thing (tm) on Windows and garbage is still shown.
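
If I remember correctly, repo also honours git's own colour configuration, so an alternative (an assumption worth verifying on your checkout) is to switch colours off globally:

git config --global color.ui false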



Mike Gabriel: Building packages with Meson and Debhelper version level 11 for Debian stretch-backports

Thu, 18 Jan 2018 19:30:13 +0000

More a reminder for myself than a blog post...

If you want to backport a project from unstable based on the meson build system and your package uses debhelper to invoke the meson build process, then you need to modify the backported package's debian/control file slightly:

diff --git a/debian/control b/debian/control
index 43e24a2..d33e76b 100644
--- a/debian/control
+++ b/debian/control
@@ -14,7 +14,7 @@ Build-Depends: debhelper (>= 11~),
                libmate-menu-dev (>= 1.16.0),
                libmate-panel-applet-dev (>= 1.16.0),
                libnotify-dev,
-               meson,
+               meson (>= 0.40.0),
                ninja-build,
                pkg-config,
 Standards-Version: 4.1.3

This forces the build to pull in meson from stretch-backports, i.e. a meson version that is at least 0.40.0.

Reasoning: if you want to build your package against debhelper (>= 11~) from stretch-backports, it will use the --wrap-mode option when invoking meson. However, this option only got added in meson 0.40.0. So you need to make sure that the meson version from stretch-backports gets pulled in, too, for your build. The build will fail when using the meson version that we find in Debian stretch.
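
To double-check which meson will satisfy the build dependency on the backports-enabled build machine before starting the build, something like this helps (the package directory name is just an example):

apt-cache policy meson

cd your-backported-package/
dpkg-checkbuilddeps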




Joey Hess: cubietruck temperature sensor

Thu, 18 Jan 2018 03:47:15 +0000

I wanted to use 1-wire temperature sensors (DS18B20) with my Cubietruck board, running Debian. The only page I could find documenting this is for the sunxi kernel, not the mainline kernel Debian uses. After a couple of hours of research I got it working, so here goes.

wiring

First you need to pick a GPIO pin to use for the 1-wire signal. The Cubietruck's GPIO pins are documented here, and I chose to use pin PG8. Other pins should work as well, although I originally tried to use PB17 and could not get it to work for an unknown reason. I also tried to use PB18 but there was a conflict with something else trying to use that same pin. To find a free pin, cat /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins and look for a line like:

pin 200 (PG8): (MUX UNCLAIMED) (GPIO UNCLAIMED)

Now wire the DS18B20 sensor up. With its flat side facing you, the left pin goes to ground, the center pin to PG8 (or whatever GPIO pin you selected), and the right pin goes to 3.3V. Don't forget to connect the necessary 4.7K ohm resistor between the center and right pins. You can find plenty of videos showing how to wire up the DS18B20 on youtube, which typically also involve a quick config change to a Raspberry Pi running Raspbian to get it to see the sensor. With Debian it's unfortunately quite a lot more complicated, and so this blog post got kind of long.

configuration

We need to get the kernel to enable the GPIO pin. This seems like a really easy thing, but this is where it gets really annoying and painful. You have to edit the Cubietruck's device tree. So apt-get source linux and in there edit arch/arm/boot/dts/sun7i-a20-cubietruck.dts

In the root section ('/'), near the top, add this:

onewire_device {
    compatible = "w1-gpio";
    gpios = <&pio 6 8 GPIO_ACTIVE_HIGH>; /* PG8 */
    pinctrl-names = "default";
    pinctrl-0 = <&my_w1_pin>;
};

In the '&pio' section, add this:

my_w1_pin: my_w1_pin@0 {
    allwinner,pins = "PG8";
    allwinner,function = "gpio_in";
};

Note that if you used a different pin than PG8 you'll need to change that. The "pio 6 8" means letter G, pin 8. The 6 is because G is the 7th letter of the alphabet. I don't know where this is documented; I reverse engineered it from another example. Why this can't be hex, or octal, or symbolic names or anything sane, I don't know.

Now you'll need to compile the dts file into a dtb file. One way is to configure the kernel and use its Makefile; I avoided that by first sudo apt-get install device-tree-compiler and then running, in the top of the linux source tree:

cpp -nostdinc -I include -undef -x assembler-with-cpp \
    ./arch/arm/boot/dts/sun7i-a20-cubietruck.dts | \
    dtc -O dtb -b 0 -o sun7i-a20-cubietruck.dtb -

You'll need to install that into /etc/flash-kernel/dtbs/sun7i-a20-cubietruck.dtb on the cubietruck. Then run flash-kernel to finish installing it.

use

Now reboot, and if all went well, it'll come up and the GPIO pin will finally be turned on:

# grep PG8 /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins
pin 200 (PG8): onewire_device 1c20800.pinctrl:200 function gpio_in group PG8

And if you picked a GPIO pin that works and got the sensor wired up correctly, in /sys/bus/w1/devices/ there should be a subdirectory for the sensor, using its unique ID. Here I have two sensors connected, which 1-wire makes easy to do, just hang them all off the same wire.. er wires.

root@honeybee:/sys/bus/w1/devices> ls
28-000008290227@  28-000008645973@  w1_bus_master1@
root@honeybee:/sys/bus/w1/devices> [...]
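
The excerpt above stops before actually reading a temperature, so for completeness: with the w1_therm driver the value can be read straight from sysfs, roughly like this (sensor ID taken from the listing above; the t= field is in millidegrees Celsius):

cat /sys/bus/w1/devices/28-000008290227/w1_slave

# just the temperature, converted to degrees Celsius
awk -F't=' '/t=/ {printf "%.3f\n", $2/1000}' /sys/bus/w1/devices/28-000008290227/w1_slave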



Thorsten Alteholz: First steps with arm64

Wed, 17 Jan 2018 21:52:55 +0000

As it was Christmas time recently, I wanted to allow myself something special. So I ordered a Macchiatobin from SolidRun. Unfortunately they don't exaggerate with their delivery times and I had to wait about two months for my device. I couldn't celebrate Christmas with it, but fortunately New Year's.

Anyway, first I tried to use the included U-Boot to start the Debian installer on a USB stick. Oh boy, that was a bad idea and in retrospect just a waste of time. But there is debian-arm@l.d.o and Steve McIntyre was so kind as to help me out of my vale of tears.

First I put the EDK2 flash image from Leif on an SD card, set the jumper on the board to boot from it (for the SD card boot, the rightmost jumper has to be set!) and off we went. Afterwards I put the debian-testing-arm64-netinst.iso on a USB stick and tried to start this. Unfortunately I was hit by #887110 and had to use a mini installer from here. Installation went smoothly and as a last step I had to start the rescue mode and install grub to the removable media path. It is an extra menu entry in the installer, so no need to enter cryptic commands :-).
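
For reference, the rescue-mode menu entry corresponds roughly to the following commands run in the installed system (a sketch; the installer takes care of the details):

grub-install --target=arm64-efi --efi-directory=/boot/efi --removable --recheck
update-grub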

Voila, rebooted and my Macchiatobin is up and running.




Matthew Garrett: Privacy expectations and the connected home

Wed, 17 Jan 2018 21:45:48 +0000

Traditionally, devices that were tied to logins tended to indicate that in some way - turn on someone's Xbox and it'll show you their account name, run Netflix and it'll ask which profile you want to use. The increasing prevalence of smart devices in the home changes that, in ways that may not be immediately obvious to the majority of people. You can configure a Philips Hue with wall-mounted dimmers, meaning that someone unfamiliar with the system may not recognise that it's a smart lighting system at all. Without any actively malicious intent, you end up with a situation where the account holder is able to infer whether someone is home without that person necessarily having any idea that that's possible. A visitor who uses an Amazon Echo is not necessarily going to know that it's tied to somebody's Amazon account, and even if they do they may not know that the log (and recorded audio!) of all interactions is available to the account holder. And someone grabbing an egg out of your fridge is almost certainly not going to think that your smart egg tray will trigger an immediate notification on the account owner's phone that they need to buy new eggs.

Things get even more complicated when there's multiple account support. Google Home supports multiple users on a single device, using voice recognition to determine which queries should be associated with which account. But the account that was used to initially configure the device remains as the fallback, with unrecognised voices ending up being logged to it. If a voice is misidentified, the query may end up being logged to an unexpected account.

There are some interesting questions about consent and expectations of privacy here. If someone sets up a smart device in their home then at some point they'll agree to the manufacturer's privacy policy. But if someone else makes use of the system (by pressing a lightswitch, making a spoken query or, uh, picking up an egg), have they consented? Who has the social obligation to explain to them that the information they're producing may be stored elsewhere and visible to someone else? If I use an Echo in a hotel room, who has access to the Amazon account it's associated with? How do you explain to a teenager that there's a chance that when they asked their Home for contact details for an abortion clinic, it ended up in their parent's activity log? Who's going to be the first person divorced for claiming that they were vegan but having been the only person home when an egg was taken out of the fridge?

To be clear, I'm not arguing against the design choices involved in the implementation of these devices. In many cases it's hard to see how the desired functionality could be implemented without this sort of issue arising. But we're gradually shifting to a place where the data we generate is not only available to corporations who probably don't care about us as individuals, it's also becoming available to people who own the more private spaces we inhabit. We have social norms against bugging our houseguests, but we have no social norms that require us to explain to them that there'll be a record of every light that they turn on or off. This feels like it's going to end badly.

(Thanks to Nikki Everett for conversations that inspired this post)

(Disclaimer: while I work for Google, I am not involved in any of the products or teams described in this post and my opinions are my own rather than those of my employer) [...]



Renata D'Avila: Not being perfect

Wed, 17 Jan 2018 19:49:00 +0000

I know I am very late on this update (and also very late on emailing back my mentors). I am sorry. It took me a long time to figure out how to put into words everything that has been going on for the past few weeks.

Let's begin with this: yes, I am so very aware there is an evaluation coming up (in two days) and that it is important "to have at least one piece of work that is visible in the week of evaluation" to show what I have been doing since the beginning of the internship. But the truth is: as of now, I don't have any code to show. And what that screams to me is that I have failed. I didn't know what to say either to my mentors or in here to explain that I didn't meet everyone's expectations. That I had not been perfect. So I had to ask what I could learn from this and how I could keep going and working on this project. Coincidence or not, I was wondering that when I crossed paths (again) with one of the most amazing TED Talks there is: Reshma Saujani's "Teach girls bravery, not perfection".

And yes, that could be me. Even though I had written down almost every step I had taken trying to solve the problem I got stuck on, I wasn't ready to share all that, not even with my mentors (yes, I can see now how that isn't very helpful). I would rather let them go on thinking I am lazy and didn't do anything all this time than send all those notes about my failure and have them realize I didn't know what they expected me to know or... well, that they'd picked the wrong intern.

What was I trying to do?

As I talked about in my previous post, the EventCalendar macro seemed like a good place to start doing some work. I wanted to add a piece of code to it that would allow exporting the events data to the iCalendar format. Because this is sort of what I did in my contribution for github-icalendar, and because the mentor Daniel had suggested something like that, I thought that it would be a good way of getting myself familiarized with how macro development is done for the MoinMoin wiki.

How far did I go?

As I had planned to do, I started by studying EventMacro.py, to understand how it works, and taking notes. EventMacro fetches events from MoinMoin pages and uses Python's Pickle module to serialize and de-serialize the data. This should be okay if you can trust the people editing the wiki (and, therefore, creating the events) enough, but this might not be a good option if we start using external sources (such as third-party websites) for event data - at least, not directly on the data gathered. See the warning below, from the Pickle module docs:

Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.

From the code and from the input from the mentors, I understand that EventMacro is more about displaying the events, putting them on a wiki page. Indeed, this could be helpful later on, but not exactly for the purpose we want now, which is to have some standalone application to gather data about the events, model this data in the way that we want it to be organized, and maybe make it accessible via an API and/or export it as JSON. Then, either MoinMoin or any other FOSS community project could choose how to display and make use of it.

What did go wrong?

But the thing is... even if I had studied the code, I couldn't see it running on my MoinMoin instance. I have tried and tried, but, generally speaking, I got stuck on trying to get macros to[...]




Jonathan Dowland: Announcing "Just TODO It"

Wed, 17 Jan 2018 17:20:34 +0000

(image)

Recently, I wished to use a trivially-simple TODO-list application whilst working on a project. I had a look through what was available to me in the "GNOME Software" application and was surprised to find nothing suitable. In particular I just wanted to capture a list of actions that I could tick off; I didn't want anything more sophisticated than that (and indeed, more sophistication would mean a learning curve I couldn't afford at the time). I then remembered that I'd written one myself, twelve years ago. So I found the old code, dusted it off, made some small adjustments so it would work on modern systems and published it.

At the time that I wrote it, I found (at least) one other similar piece of software called "Tasks" which used Evolution's TODO-list as the back-end data store. I can no longer find any trace of this software, and the old web host (projects.o-hand.com) has disappeared.

My tool is called Just TODO It and it does very little. If that's what you want, great! You can reach the source via that prior link or jump straight to GitHub: https://github.com/jmtd/todo




Dirk Eddelbuettel: RcppMsgPack 0.2.1

Wed, 17 Jan 2018 02:00:00 +0000

(image)

An update of RcppMsgPack got onto CRAN today. It contains a number of enhancements Travers had been working on, as well as one change CRAN asked us to make, namely making the use of a suggested package optional.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages, like JSON, but it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves. RcppMsgPack brings both the C++ headers of MessagePack and the clever code (in both R and C++) Travers wrote to access MsgPack-encoded objects directly from R.
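As a small illustration of the "like JSON, but faster and smaller" claim, here is a sketch using the Python msgpack bindings (my own example; the package described in this post is the R-side RcppMsgPack, not this Python module):

import json
import msgpack

record = {"id": 1, "name": "RcppMsgPack", "tags": ["cran", "serialization"]}

packed = msgpack.packb(record)          # compact binary MessagePack encoding
as_json = json.dumps(record).encode()   # textual JSON encoding of the same data

print(len(packed), "bytes as MessagePack")
print(len(as_json), "bytes as JSON")
print(msgpack.unpackb(packed, raw=False))  # round-trips back to a dict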

Changes in version 0.2.1 (2018-01-15)

  • Some corrections and update to DESCRIPTION, README.md, msgpack.org.md and vignette (#6).

  • Update to c_pack.cpp and tests (#7).

  • More efficient packing of vectors (#8).

  • Support for timestamps and NAs (#9).

  • Conditional use of microbenchmark in tests/ as required for Suggests: package [CRAN request] (#10).

  • Minor polish to tests relaxing comparison of timestamp, and avoiding a few g++ warnings (#12 addressing #11).

Courtesy of CRANberries, there is also a diffstat report for this release.

More information is on the RcppMsgPack page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.




Jamie McClelland: Procrastinating by tweaking my desktop with devilspie2

Tue, 16 Jan 2018 14:51:36 +0000

Tweaking my desktop seems to be my preferred form of procrastination. So, a blog like this is a sure sign I have too much work on my plate.

I have a laptop. I carry it to work and plug it into a large monitor - where I like to keep all my instant or near-instant communications displayed at all times while I switch between workspaces on my smaller laptop screen as I move from email (workspace one), to shell (workspace two), to web (workspace three), etc. When I'm not at the office, I only have my laptop screen - which has to accommodate everything.

I soon got tired of dragging things around every time I plugged or unplugged the monitor and started accumulating a mess of bash scripts running wmctrl and even calling my own python-wnck script. (At first I couldn't get wmctrl to pin a window but I lived with it. But when gajim switched to gtk3 and my openbox window decorations disappeared, then I couldn't even pin my window manually.) Now I have the following simpler setup.

Manage hot plugging of my monitor

Symlink to my monitor status device:

0 jamie@turkey:~$ ls -l ~/.config/turkey/monitor.status
lrwxrwxrwx 1 jamie jamie 64 Jan 15 15:26 /home/jamie/.config/turkey/monitor.status -> /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-DP-1/status
0 jamie@turkey:~$

Create a udev rule to place my monitor to the right of my LCD every time the monitor is plugged in and every time it is unplugged:

0 jamie@turkey:~$ cat /etc/udev/rules.d/90-vga.rules
# When a monitor is plugged in, adjust my display to take advantage of it
ACTION=="change", SUBSYSTEM=="drm", ENV{HOTPLUG}=="1", RUN+="/etc/udev/scripts/vga-adjust"
0 jamie@turkey:~$

And here is the udev script:

0 jamie@turkey:~$ cat /etc/udev/scripts/vga-adjust
#!/bin/bash

logger -t "jamie-udev" "Monitor event detected, waiting 1 second for system to detect change."

# We don't know whether the VGA monitor is being plugged in or unplugged so we
# have to autodetect first. And, it takes a few seconds to assess whether the
# monitor is there or not, so sleep for 1 second.
sleep 1

monitor_status="/home/jamie/.config/turkey/monitor.status"
status=$(cat "$monitor_status")
XAUTHORITY=/home/jamie/.Xauthority

if [ "$status" = "disconnected" ]; then
  # The monitor is not plugged in
  logger -t "jamie-udev" "Monitor is being unplugged"
  xrandr --output DP-1 --off
else
  logger -t "jamie-udev" "Monitor is being plugged in"
  xrandr --output DP-1 --right-of eDP-1 --auto
fi
0 jamie@turkey:~$

Move windows into place

So far, this handles ensuring the monitor is activated and placed in the right position. But, nothing has changed in my workspace. Here's where the devilspie2 configuration comes in:

==> /home/jamie/.config/devilspie2/00-globals.lua <==
-- Collect some global variables to be used throughout.
name = get_window_name();
app = get_application_name();
instance = get_class_instance_name();

-- See if the monitor is plugged in or not. If monitor is true, it is
-- plugged in, if it is false, it is not plugged in.
monitor = false;
device = "/home/jamie/.config/turkey/monitor.status"
f = io.open(device, "rb")
if f then
  -- Read the contents, remove the trailing line break.
  content = string.gsub(f:read "*all", "\n", "");
  if content == "connected" then
    monitor = true;
  end
end

==> /home/jamie/.config/devilspie2/gajim.lua <==
-- Look for my gajim message window. Pin it[...]



Reproducible builds folks: Reproducible Builds: Weekly report #142

Tue, 16 Jan 2018 12:00:48 +0000

Here's what happened in the Reproducible Builds effort between Sunday December 31 and Saturday January 13 2018:

Media coverage

Reproducible builds were mentioned on an episode of a Bryan Lunduke interview with Brendan Eich, the creator of the JavaScript programming language. (link) Julien (jvoisin) Voisin wrote a short blog post detailing their success in reproducing the recent Tails ISO release.

Development and fixes in key packages

Chris Lamb implemented two reproducibility checks in the lintian Debian package quality-assurance tool:

  • Warn about packages that ship Hypothesis example files. (#886101, report)
  • Warn about packages that override dh_fixperms without calling dh_fixperms, as this makes the build vary depending on the current umask(2). (#885910, report)

Packages reviewed and fixed, and bugs filed

  • Adrian Bunk: #886355 filed against libpar-packer-perl, #886361 filed against apertium.
  • Bernhard M. Wiedemann: ibus-typing-booster (gz-mtime), console-setup (random, gz-mtime), fwupd (gz-mtime), libosmo-dsp (drop LaTeX log), libsamplerate (merged, PGO), aelfred (merged, time), jformatstring (merged, time), log4j (merged, upstreamable?, time), tanukiwrapper (merged, time), drbd-utils (enable rb).
  • Chris Lamb: #885909 filed against node-crc32, #886001 filed against node-jquery, #886002 filed against node-deflate-js, #886003 filed against python-pysnmp4, #886100 filed against todoman, #886105 filed against klystrack, #886130 filed against libmsv, #886239 filed against librsvg, #886277 filed against node-promise (filed upstream), #886306 filed against python-pyocr, #886386 filed against mstflint, #886522 filed against python-stdnum (filed upstream), #886523 filed against python-hpack, #886703 filed against normaliz, #886898 filed against dtkwm, #886902 filed against clanlib, #886952 filed against hwinfo (filed upstream), #886988 filed against texlive-extra, #886989 filed against fox1.6.

Reviews of unreproducible packages

60 package reviews have been added, 43 have been updated and 76 have been removed this week, adding to our knowledge about identified issues.

4 new issue types have been added:

  • randomness_in_binaries_generated_by_d_compiler_gdc
  • serial_numbers_in_ogg_via_sox
  • nondeterminism_in_files_generated_by_rime_deployer
  • buildpath_in_binaries_generated_by_d_compiler_gdc

The notes of one issue type were updated: build_dir_in_documentation_generated_by_doxygen: 1, 2

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by: Adam Borowski (2), Adrian Bunk (16), Niko Tyni (1), Chris Lamb (6), Jonas Meurer (1) and Simon McVittie (1).

diffoscope development

Chris Lamb:

  • Bug fixes: Return "unknown" if we can't parse the readelf version number, e.g. for FreeBSD. (#886963) If the LLVM disassembler does not work, try the internal one. (#886736)
  • Features/improvements: comparators.macho: Always strip the filename, not just when by itself. Clarify the "Unidentified file" log message; we tried a lookup via the comparators first.
  • Cleanups: Drop an unnecessary else after return. Drop whitespaces from end of file. flake8 files. Invert some logic as we use unconditional control flow. Tidy some long lines. Ensure block comments start with #. Ensure we use a multiple of 4 spaces. Drop an unused os.path import. Add spaces around operators. Mark special imports as noqa. Tidy ListToolsAc[...]



Benjamin Mako Hill: OpenSym 2017 Program Postmortem

Tue, 16 Jan 2018 03:38:53 +0000

The International Symposium on Open Collaboration (OpenSym, formerly WikiSym) is the premier academic venue exclusively focused on scholarly research into open collaboration. OpenSym is an ACM conference which means that, like conferences in computer science, it's really more like a journal that gets published once a year than it is like most social science conferences. The "journal", in this case, is called the Proceedings of the International Symposium on Open Collaboration and it consists of final copies of papers which are typically also presented at the conference. Like journal articles, papers that are published in the proceedings are not typically published elsewhere.

Along with Claudia Müller-Birn from the Freie Universität Berlin, I served as the Program Chair for OpenSym 2017. For the social scientists reading this, the role of program chair is similar to being an editor for a journal. My job was not to organize keynotes or logistics at the conference—that is the job of the General Chair. Indeed, in the end I didn't even attend the conference! Along with Claudia, my role as Program Chair was to recruit submissions, recruit reviewers, coordinate and manage the review process, make final decisions on papers, and ensure that everything makes it into the published proceedings in good shape.

In OpenSym 2017, we made several changes to the way the conference has been run:

  • In previous years, OpenSym had tracks on topics like free/open source software, wikis, open innovation, open education, and so on. In 2017, we used a single track model.
  • Because we eliminated tracks, we also eliminated track-level chairs. Instead, we appointed Associate Chairs or ACs.
  • We eliminated page limits and the distinction between full papers and notes.
  • We allowed authors to write rebuttals before reviews were finalized. Reviewers and ACs were allowed to modify their reviews and decisions based on rebuttals.
  • To assist in assigning papers to ACs and reviewers, we made extensive use of bidding. This means we had to recruit the pool of reviewers before papers were submitted.

Although each of these things has been tried in other conferences, or even piloted within individual tracks in OpenSym, all were new to OpenSym in general.

Overview

Statistics:

  • Papers submitted: 44
  • Papers accepted: 20
  • Acceptance rate: 45%
  • Posters submitted: 2
  • Posters presented: 9
  • Associate Chairs: 8
  • PC Members: 59
  • Authors: 108
  • Author countries: 20

The program was similar in size to the ones in the last 2-3 years in terms of the number of submissions. OpenSym is a small but mature and stable venue for research on open collaboration. This year was also similar, although slightly more competitive, in terms of the conference acceptance rate (45%—it had been slightly above 50% in previous years).

As in recent years, there were more posters presented than submitted because the PC found that some rejected work, although not ready to be published in the proceedings, was promising and advanced enough to be presented as a poster at the conference. Authors of posters submitted 4-page extended abstracts for their projects which were published in a "Companion to the Proceedings."

Topics

Over the years, OpenSym has established a clear set of niches. Although we eliminated tracks, we asked authors to choose from a set of ca[...]



Russell Coker: More About the Thinkpad X301

Tue, 16 Jan 2018 03:22:44 +0000

Last month I blogged about the Thinkpad X301 I got from a rubbish pile [1]. One thing I didn't realise when writing that post is that the X301 doesn't have the keyboard light that the T420 has. With the T420 I could press the bottom left (FN) and top right (PgUp from memory) keys on the keyboard to turn on a light that illuminates the keyboard. This is really good for typing at night. While I can touch type, the small keyboard on a laptop makes it a little difficult, so the light is a feature I found useful. I wrote my review of the X301 before having to use it at night.

Another problem I noticed is that it crashes after running Memtest86+ for between 30 minutes and 4 hours. Memtest86+ doesn't report any memory errors, the system just entirely locks up. I have 2 DIMMs for it (2G and 4G), I tried installing them in both orders, and I tried with each of them in the first slot (the system won't boot if only the second slot is filled). Nothing changed. Now it is possible that this is something that might not happen in real use. For example it might only happen due to heat when the system is under sustained load, which isn't something I planned for that laptop. I would discard a desktop system that had such a problem because I get lots of free desktop PCs, but I'm prepared to live with a laptop that has such a problem to avoid paying for another laptop.

Last night the laptop battery suddenly stopped working entirely. I had it unplugged for about 5 minutes when it abruptly went off (no flashing light to warn that the battery was low or anything). Now when I plug it in the battery light flashes orange. A quick Google search indicates that this might mean that a fuse inside the battery pack has blown or that there might be a problem with the system board. Replacing the system board would cost much more than the laptop is worth, and even replacing the battery will probably cost more than it's worth. Previously I bought a Thinkpad T420 at auction because it didn't cost much more than getting a new battery and PSU for a T61 [2], and I expect I can find a similar deal if I poll the auction sites for a while.

Using an X series Thinkpad has been a good experience and I'll definitely consider an X series for my next laptop. My previous history of laptops involved going from ones with a small screen that were heavy and clunky (what was available with 90's technology and cost less than a car) to ones that had a large screen and were less clunky but still heavy. I hadn't tried small and light with technology from the last decade; it's something I could really get used to!

By today's standards the X301 is deficient in a number of ways. It has 64G of storage (the same as my most recent phones) which isn't much for software development, 6G of RAM which isn't too bad but is small by today's standards (16G is a common factory option nowadays), a 1440*900 screen which looks bad in any comparison (less than the last 3 phones I've owned), and a slow CPU. No two of these limits would be enough to make me consider replacing that laptop. Even with the possibility of crashing under load it was still a useful system. But the lack of a usable battery in combination with all the other issues makes the entire system unsuitable for my needs. I would be very happy to use a fast laptop with a high resol[...]



Axel Beckert: Tex Yoda II Mechanical Keyboard with Trackpoint

Tue, 16 Jan 2018 02:38:11 +0000

Here’s a short review of the Tex Yoda II Mechanical Keyboard with Trackpoint, a pointer to the next Swiss Mechanical Keyboard Meetup and why I ordered a $300 keyboard with less keys than a normal one. Short Review of the Tex Yoda II Pro Trackpoint Cherry MX Switches Compact but heavy alumium case Backlight (optional) USB C connector and USB A to C cable with angled USB C plug All three types of Thinkpad Trackpoint caps included Configurable layout with nice web-based configurator (might be opensourced in the future) Fn+Trackpoint = scrolling (not further configurable, though) Case not clipped, but screwed Backlight brightness and Trackpoint speed configurable via key bindings (usually Fn and some other key) Default Fn keybindings as side printed and backlit labels Nice packaging Contra It’s only a 60% Keyboard (I prefer TKL) and the two common top rows are merged into one, switched with the Fn key. Cursor keys by default (and labeled) on the right side (mapped to Fn + WASD) — maybe good for games, but not for me. ~ on Fn-Shift-Esc Occassionally backlight flickering (low frequency) Pulsed LED light effect (i.e. high frequency flickering) on all but the lowest brightness level Trackpoint is very sensitive even in the slowest setting — use Fn+Q and Fn+E to adjust the trackpoint speed (“tps”) No manual included or (obviously) downloadable. Only the DIP switches 1-3 and 6 are documented, 4 and 5 are not. (Thanks gismo for the question about them!) No more included USB hub like the Tex Yoda I had or the HHKB Lite 2 (USB 1.1 only) has. My Modifications So Far Layout Modifications Via The Web-Based Yoda 2 Configurator Right Control and Menu key are Right and Left cursors keys Fn+Enter and Fn+Shift are Up and Down cursor keys Right Windows key is the Compose key (done in software via xmodmap) Middle mouse button is of course a middle click (not Fn as with the default layout). Other Modifications Clear dampening o-rings (clear, 50A) under each key cap for a more silent typing experience Braided USB cable Next Swiss Mechanical Keyboard Meetup On Sunday, the 18th of February 2018, the 4th Swiss Mechanical Keyboard Meetup will happen, this time at ETH Zurich, building CAB, room H52. I’ll be there with at least my Tex Yoda II and my vintage Cherry G80-2100. Why I ordered a $300 keyboard (JFTR: It was actually USD $299 plus shipping from the US to Europe and customs fee in Switzerland. Can’t exactly find out how much of shipping and customs fee were actually for that one keyboard, because I ordered several items at once. It’s complicated…) I always was and still are a big fan of Trackpoints as common on IBM and Lenovo Thinkapds as well as a few other laptop manufactures. For a while I just used Thinkpads as my private everyday computer, first a Thinkpad T61, later a Thinkpad X240. At some point I also wanted a keyboard with Trackpoint on my workstation at work. So I ordered a Lenovo Thinkpad USB Keyboard with Trackpoint. Then I decided that I want a permanent workstation at home again and ordered two more such keyboards: One for the workstation at home, one for my Debian GNU/kFreeBSD running ASUS EeeBox (not affected by Meltdown or Spectre, yay! :-) which I often took with me to staff Debian booths at events. Ther[...]



Steinar H. Gunderson: Retpoline-enabled GCC

Mon, 15 Jan 2018 21:28:00 +0000

(image)

Since I assume there are people out there that want Spectre-hardened kernels as soon as possible, I pieced together a retpoline-enabled build of GCC. It's based on the latest gcc-snapshot package from Debian unstable with H.J.Lu's retpoline patches added, but built for stretch.

Obviously this is really scary prerelease code and will possibly eat babies (and worse, it hasn't taken into account the last-minute change of retpoline ABI, so it will break with future kernels), but it will allow you to compile 4.15.0-rc8 with CONFIG_RETPOLINE=y, and also allow you to assess the cost of retpolines (-mindirect-branch=thunk) in any particularly sensitive performance userspace code.

There will be upstream backports at least to GCC 7, but probably pretty far back (I've seen people talk about all the way to 4.3). So you won't have to run my crappy home-grown build for very long—it's a temporary measure. :-)

Oh, and it made Stockfish 3% faster than with GCC 6.3! Hooray.




Cyril Brulebois: Quick recap of 2017

Mon, 15 Jan 2018 11:00:00 +0000

I haven’t been posting anything on my personal blog in a long while, let’s fix that!

Partial reason for this is that I’ve been busy documenting progress on the Debian Installer on my company’s blog. So far, the following posts were published there:

  • Debian Installer: Stretch Alpha 8 released, with details on the release process, and on the debootstrap attempt regarding merged-/usr (granted, that one was from late 2016).
  • Debian Installer: Stretch RC 2 released: wrapping up both RC 1 and RC 2 for Stretch, mentioning major changes instead of all the tiny details one would usually find in the release announcements published on the debian-devel-announce@ mailing list.
  • Debian Installer: Stretch released: aggregating RC 3 to RC 5 this time, since the last few weeks before the Stretch release date were quite busy!

After the Stretch release, it was time to attend DebConf’17 in Montreal, Canada. I’ve presented the latest news on the Debian Installer front there as well. This included a quick demo of my little framework which lets me run automatic installation tests. Many attendees mentioned openQA as the current state of the art technology for OS installation testing, and Philip Hands started looking into it. Right now, my little thing is still useful as it is, helping me reproduce regressions quickly, and testing bug fixes… so I haven’t been trying to port that to another tool yet.

I also gave another presentation in two different contexts: once at a local FLOSS meeting in Nantes, France and once during the mini-DebConf in Toulouse, France. Nothing related to Debian Installer this time, as the topic was how I helped a company upgrade thousands of machines from Debian 6 to Debian 8 (and to Debian 9 since then). It was nice to have Evolix people around, since we shared our respective experience around automation tools like Ansible and Puppet.

After the mini-DebConf in Toulouse, another event: the mini-DebConf in Cambridge, UK. I tried to give a lightning talk about “how snapshot.debian.org helped saved the release(s)” but clearly speed was lacking, and/or I had too many things to present, so that didn’t work out as well as I hoped. Fortunately, no time constraints when I presented that during a Debian meet-up in Nantes, France. :)

Since Reproducible Tails builds were announced, it seemed like a nice opportunity to document how my company got involved into early work on reproducibility for the Tails project.

On an administrative level, I’m already done with all the paperwork related to the second financial year. \o/

Next things I’ll likely write about: the first two D-I Buster Alpha releases (many blockers kept popping up, it was really hard to release), and a few more recent release critical bug reports. [...]



Daniel Pocock: RHL'18 in Saint-Cergue, Switzerland

Mon, 15 Jan 2018 08:02:54 +0000

(image)

RHL'18 was held at the centre du Vallon à St-Cergue, the building in the very center of this photo, at the bottom of the piste:

(image)

People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and skiing. This event is a lot of fun and I would highly recommend that people look out for the next edition. (Subscribe to rhl-annonces on lists.swisslinux.org for a reminder email.)

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:

(image)

It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, distributing stickers and inviting people to future events. My previous blog contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018, please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian, several of them need co-mentors, please contact me if you are interested.




Sean Whitton: lastjedi

Sun, 14 Jan 2018 23:54:05 +0000

A few comments on Star Wars: The Last Jedi.

Vice Admiral Holdo’s subplot was a huge success. She had to make a very difficult call over which she knew she might face a mutiny from the likes of Poe Dameron. The core of her challenge was that there was no speech or argument she could have given that would have placated Dameron and restored unity to the crew. Instead, Holdo had to press on in the face of that disunity. This reflects the fact that, sometimes, living as one should demands pressing on in the face of deep disagreement with others. Not making it clear that Dameron was in the wrong until very late in the film was a key component of the successful portrayal of the unpleasantness of what Holdo had to do. If instead it had become clear to the audience early on that Holdo’s plan was obviously the better one, we would not have been able to observe the strength of Holdo’s character in continuing to pursue her plan despite the mutiny.

One thing that I found weak about Holdo was her dress. You cannot be effective on the frontlines of a hot war in an outfit like that! Presumably the point was to show that women don’t have to give up their femininity in order to take tough tactical decisions under pressure, and that’s indeed something worth showing. But this could have been achieved by much more subtle means. What was needed was to have her be the character with the most feminine outfit, and it would have been possible to fulfill that condition by having her wear something much more practical. Thus, having her wear that dress was crude and implausible overkill in the service of something otherwise worth doing.

I was very disappointed by most of the subplot with Rey and Luke: both the content of that subplot, and its disconnection from the rest of the film.

Firstly, the content. There was so much that could have been explored that was not explored. Luke mentions that the Jedi failed to stop Darth Sidious “at the height of their powers”. Well, what did the Jedi get wrong? Was it the Jedi code; the celibacy; the bureaucracy? Is their light side philosophy too absolutist? How are Luke’s beliefs about this connected to his recent rejection of the Force? When he lets down his barrier and reconnects with the Force, Yoda should have had much more to say. The Force is, perhaps, one big metaphor for certain human capacities not emphasised by our contemporary culture. It is at the heart of Star Wars, and it was at the heart of Empire and Rogue One. It ought to have been at the heart of The Last Jedi.

Secondly, the lack of integration with the rest of the film. One of the aspects of Empire that enables its importance as a film, I suggest, is the tight integration and interplay between the two main subplots: the training of Luke under Yoda, and attempting to shake the Empire off the trail of the Millennium Falcon. Luke wants to leave the training unfinished, and Yoda begs him to stay, truly believing that the fate of the galaxy depends on him completing the training. What is illustrated by this is the strengths and weaknesses of both Yoda’s traditional Jedi view and Luke’s desire to get on with fighting the goo[...]



Dirk Eddelbuettel: digest 0.6.14

Sun, 14 Jan 2018 22:24:00 +0000

(image)

Another small maintenance release, version 0.6.14, of the digest package arrived on CRAN and in Debian today.

digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'crc32', 'xxhash' and 'murmurhash' algorithms) permitting easy comparison of R language objects.

Just like release 0.6.13 a few weeks ago, this release accommodates another request by Luke and Tomas and changes two uses of NAMED to MAYBE_REFERENCED, which helps in the transition to the new reference counting model in R-devel. Thierry also spotted a minor wart in how sha1() tested the type of matrices and corrected that, and I converted a few references to https URLs and corrected one now-dead URL.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.




Mario Lang: I pushed an implementation of myself to GitHub

Sun, 14 Jan 2018 21:22:00 +0000

(image)

Roughly 4 years ago, I mentioned that there appears to be an esoteric programming language which shares my full name.

I know, it is really late, but two days ago I discovered Racket. As a Lisp person, I immediately felt at home. And realizing how the language dispatch mechanism works, I couldn't resist writing a Racket implementation of MarioLANG. A nice play on words and a good toy project to get my feet wet.

Racket programs always start with #lang. How convenient. MarioLANG programs for Racket therefore look something like this:

#lang mario
++++++++++++
===========+:
           ==

So much for abusing coincidences. Phew, this was a fun weekend project! And it has some potential for more challenges. Right now, it is only an interpreter, because it appears to be tricky to compile a 2d instruction "space" to traditional code. MarioLANG does not only allow nested loops as BrainFuck does, it also includes weird concepts like the reversal of the instruction pointer direction. Coupled with the "skip" ([) instruction, this allows creating loops which have two exit conditions and reverse code execution on every pass. Something like this:

@[ some brainfuck [@
====================

And since this is a 2d programming language, this theoretical loop could be entered by jumping onto any of the instructions in between from above. And the heading could be either leftward or rightward when entering.

Discovering these patterns and translating them to compilable code is quite beyond me right now. Let's see what time will bring.




Iustin Pop: SSL migration

Sun, 14 Jan 2018 10:05:08 +0000

SSL migration

This week I managed to finally migrate my personal website to SSL, and on top of that migrate the SMTP/IMAP services to certificates signed by a "proper" CA (instead of my own). This however was more complex than I thought…

Let's encrypt?

I first wanted to do this when Let's Encrypt became available, but the way it works (short-term certificates with automated renewal) put me off at first. The certbot tool needs to make semi-arbitrary outgoing requests to renew the certificates, and on public machines I have a locked-down outgoing traffic policy. So I gave up, temporarily… I later found out that at least for now (for the current protocol), certbot only needs to talk to a certain API endpoint, and after some more research, I realized that the http-01 protocol is very straight-forward, only needing to allow some specific plain http URLs. So then:

Issue 1: allowing outgoing access to a given API endpoint, somewhat restricted. I solved this by using a proxy, forcing certbot to go through it via env vars, learning about systemctl edit on the way, and from the proxy, only allowing that hostname. Quite weak, but at least not an "open policy".

Issue 2: due to how http-01 works, it requires leaving some specific paths on plain http, which means you can't have (in Apache) a "redirect everything to https" config. While fixing this I learned about mod_macro, which is quite interesting (and doesn't need an external pre-processor).

The only remaining problem is that you can't automatically renew certificates for non-externally accessible systems; the dns protocol also needs changing externally-visible state, so it is more or less the same. So:

Issue 3: for internal websites, I still need a solution if my own CA (self-signed, needs certificates added to clients) is not acceptable.

How did it go?

It seems that using SSL is more than SSLEngine on. I learned in this exercise about quite a few things.

CAA

DNS Certification Authority Authorization is pretty nice, and although it's not a strong guarantee (against malicious CAs), it gives some more signals that proper clients could check ("For this domain, only this CA is expected to sign certificates"); also, it is trivial to configure, with the caveat that one would need DNSSEC as well for end-to-end checks.

OCSP stapling

I was completely unaware of OCSP stapling, and yay, it seems like a good solution to actually verify that the certs were not revoked. However… there are many issues with it:

  • there needs to be proper configuration on the webserver to not cause more problems than without; Apache at least needs an increased cache lifetime, disabling the sending of error responses (for transient CA issues), etc.
  • but even more, it requires the web server user to be able to make "random" outgoing requests, which IMHO is a big no-no
  • even the command line tools (i.e. openssl ocsp) are somewhat deficient: no proxy support (while s_client can use one)

So the proper way to do this seems to be a separate piece of software, isolated from the webserver, that does proper/eager refresh of certificates while handling errors[...]
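As a side note (this is not part of the setup described above), a stdlib-only Python sketch like the following can help keep an eye on when a certificate actually expires, which becomes more relevant once short-lived Let's Encrypt certificates are in play; the host name is just a placeholder:

import socket
import ssl

def cert_expiry(host, port=443):
    """Return the 'notAfter' field of the certificate served on host:port."""
    ctx = ssl.create_default_context()  # verifies against the system CA store
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like 'Mar  8 12:00:00 2018 GMT'
    return cert["notAfter"], ssl.cert_time_to_seconds(cert["notAfter"])

not_after, epoch_seconds = cert_expiry("example.org")
print("certificate valid until", not_after)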



Daniel Leidert: Make 'bts' (devscripts) accept TLS connection to mail server with self signed certificate

Sun, 14 Jan 2018 02:46:24 +0000

My mail server runs with a self signed certificate. So bts, configured like this ...


BTS_SMTP_HOST=mail.wgdd.de:587
BTS_SMTP_AUTH_USERNAME='user'
BTS_SMTP_AUTH_PASSWORD='pass'

...lately refused to send mails with this error:


bts: failed to open SMTP connection to mail.wgdd.de:587
(SSL connect attempt failed error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed)

After searching a bit, I found a way to fix this locally without turning off the server certificate verification. The fix belongs in the send_mail() function. When calling the Net::SMTPS->new() constructor, it is possible to add the fingerprint of my self-signed certificate like this (note the added SSL_fingerprint line):


if (have_smtps) {
    $smtp = Net::SMTPS->new($host, Port => $port,
                            Hello => $smtphelo, doSSL => 'starttls',
                            SSL_fingerprint => 'sha1$hex-fingerprint')
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
} else {
    $smtp = Net::SMTP->new($host, Port => $port, Hello => $smtphelo)
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
}

Pretty happy to be able to use the bts command again.
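To obtain the hex fingerprint that goes after "sha1$" in the snippet above, something like this small Python sketch could be used (a hypothetical helper, not part of devscripts or bts; certificate verification is deliberately disabled because the whole point is to fetch a self-signed certificate):

import hashlib
import smtplib
import ssl

def smtp_cert_sha1(host, port=587):
    """Return the SHA-1 fingerprint of the certificate offered via STARTTLS."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # we expect a self-signed certificate,
    ctx.verify_mode = ssl.CERT_NONE   # so verification is skipped on purpose
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.starttls(context=ctx)
        der = smtp.sock.getpeercert(binary_form=True)  # raw DER bytes
    return hashlib.sha1(der).hexdigest()

# Prepend "sha1$" to the result when filling in SSL_fingerprint above.
print(smtp_cert_sha1("mail.wgdd.de"))

Whether the hex digits are expected with or without colons may depend on the IO::Socket::SSL version, so double-check against its documentation.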




Andreas Bombe: Fixing a Nintendo Game Boy Screen

Sun, 14 Jan 2018 00:28:13 +0000

Over the holidays my old Nintendo Game Boy (the original DMG-01 model) has resurfaced. It works, but the display had a bunch of vertical lines near the left and right border that stay blank. Apparently a common problem with these older Game Boys and the solution is to apply heat to the connector foil upper side to resolder the contacts hidden underneath. There’s lots of tutorials and videos on the subject so I won’t go into much detail here.

Just one thing: The easiest way is to use a soldering iron (the foil is pretty heat resistant, it has to be soldered during production after all) and move it along the top at the affected locations. Which I tried at first and it kind of works but takes ages. Some columns reappear, others disappear, reappeared columns disappear again… In someone’s comment I read that they needed over five minutes until it was fully fixed!

So… simply apply a small drop of solder to the tip. That’s what you do for better heat transfer in normal soldering and of course it also works here (since the foil connector back doesn’t take solder this doesn’t make a mess or anything). That way, the missing columns reappeared practically instantly at the touch of the solder iron and stayed fixed. Temperature setting was 250°C, more than sufficient for the task.

This particular Game Boy always had issues with the speaker stopping working, but we never had it replaced, I think because the problem was intermittent. After locating the bad solder joint on the connector and reheating it, this problem was also fixed. Basically this almost 28 year old device is now in better working condition than it ever was.




Norbert Preining: Scala: debug logging facility and adjustment of logging level in code

Fri, 12 Jan 2018 23:18:09 +0000

As soon as users start to use your program, you want to implement some debug facilities with logging, and allow them to be turned on via command line switches or GUI elements. I was surprised that doing this in Scala wasn't as easy as I thought, so I collected the information on how to set it up.

Basic ingredients are the scala-logging library, which wraps up slf4j, the Simple Logging Facade for Java, and a compatible backend; I am using logback, a successor of Log4j. At the current moment, adding the following lines to your build.sbt will include the necessary libraries:

libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.7.2"
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.3"

Next is to set up the default logging by adding a file src/main/resources/logback.xml containing at least the following entry for logging to stdout:

%d{HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n

Note the default level here is set to info. More detailed information on the format of the logback.xml can be found here.

In the Scala code a simple mix-in of the LazyLogging trait is enough to get started:

import com.typesafe.scalalogging.LazyLogging

object ApplicationMain extends App with LazyLogging {
  ...
  logger.trace(...)
  logger.debug(...)
  logger.info(...)
  logger.warn(...)
  logger.error(...)
}

(the above calls are listed in order of increasing seriousness)

The messages will only be shown if the logger call has higher seriousness than what is configured in logback.xml (or INFO by default). That means that anything of level trace and debug will not be shown. But we don't want to always ship a new program with a different logback.xml, so changing the default log level programmatically is pretty much a strict requirement. Fortunately a brave soul posted a solution on stackexchange, namely:

import ch.qos.logback.classic.{Level,Logger}
import org.slf4j.LoggerFactory

LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).
  asInstanceOf[Logger].setLevel(Level.DEBUG)

This can be used to evaluate command line switches and activate debugging on the fly. The way I often do this is to allow flags -q, -qq, -d, and -dd for quiet, extra quiet, debug, and extra debug, which would be translated to the logging levels warning, error, debug, and trace, respectively. Multiple invocations select the maximum debug level (so -q -d does turn on debugging). This can be activated by the following simple code:

val cmdlnlog: Int = args.map( {
    case "-d" => Level.DEBUG_INT
    case "-dd" => Level.TRACE_INT
    case "-q" => Level.WARN_INT
    case "-qq" => Level.ERROR_INT
    case _ => -1
  } ).foldLeft(Level.OFF_INT)(scala.math.min(_,_))
if (cmdlnlog == -1) {
  // Unknown log level has been passed in, error out
  Console.err.println("Unsupported command line argument passed in, terminating.")
  sys.exit(0)
}
// if nothing has been passed on the command line, use INFO
val newloglevel = if (cmdlnlog == Level.OFF_[...]
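Purely for comparison (this is not part of the Scala setup above), the same flag-to-level mapping can be sketched with Python's standard logging module; the function name is made up for illustration:

import logging
import sys

# Mirror the -q/-qq/-d/-dd convention described above.
FLAG_LEVELS = {
    "-d": logging.DEBUG,
    "-dd": 5,                # finer than DEBUG, standing in for "trace"
    "-q": logging.WARNING,
    "-qq": logging.ERROR,
}

def level_from_flags(args, default=logging.INFO):
    levels = [FLAG_LEVELS[a] for a in args if a in FLAG_LEVELS]
    # The numerically lowest level is the most verbose one, so the most
    # verbose flag wins and "-q -d" still enables debugging.
    return min(levels) if levels else default

logging.basicConfig(level=level_from_flags(sys.argv[1:]))
logging.getLogger(__name__).debug("debug logging enabled")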



Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2017

Fri, 12 Jan 2018 14:15:31 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 142 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Antoine Beaupré did nothing (out of 4h allocated + 8.25h remaining, thus keeping 12.25h for January). He intends to catch up in January.
  • Ben Hutchings did 6 hours (out of 14h allocated, thus keeping 8 extra hours for January).
  • Brian May did 10 hours.
  • Chris Lamb did 14 hours.
  • Emilio Pozuelo Monfort did 26.5 hours (out of 14 hours allocated + 13.75 hours remaining, thus keeping 1.25 hours for January).
  • Guido Günther did 13.5 hours (out of 11h allocated + 2.5 extra hours).
  • Hugo Lefeuvre did 14 hours.
  • Markus Koschany did 14 hours.
  • Ola Lundqvist did 7 hours.
  • Raphaël Hertzog did 13 hours (out of 12h allocated + 2 extra hours, the remaining hour has been given back to the pool).
  • Roberto C. Sanchez did 19 hours (out of 14 hours allocated + 5 hours remaining).
  • Thorsten Alteholz did 14 hours.

Evolution of the situation

The number of sponsored hours did not change, staying at 183 hours per month. It would be nice if we could continue to find new sponsors as the amount of work seems to be slowly growing too.

The security tracker currently lists 21 packages with a known CVE and the dla-needed.txt file 16 (we’re a bit behind in CVE triaging apparently). Both numbers show a significant drop compared to last month. Yet the number of DLAs released was not larger than usual (30); instead it looks like December brought us fewer new security vulnerabilities to handle, and at the same time we used this opportunity to handle lower-priority packages that were kept on the side for multiple months.

Thanks to our sponsors

New sponsors are in bold (none this month).

Platinum sponsors: TOSHIBA (for 27 months), GitHub (for 18 months).

Gold sponsors: The Positive Internet (for 43 months), Blablacar (for 42 months), Linode (for 32 months), Babiel GmbH (for 21 months), Plat’Home (for 21 months).

Silver sponsors: Domeneshop AS (for 42 months), Université Lille 3 (for 42 months), Trollweb Solutions (for 40 months), Nantes Métropole (for 36 months), Dalenys (for 33 months), Univention GmbH (for 28 months), Université Jean Monnet de St Etienne (for 28 months), Sonus Networks (for 22 months), maxcluster GmbH (for 16 months), Exonet B.V. (for 12 months), Leibniz Rechenzentrum (for 6 months), Vente-privee.com (for 3 months).

Bronze sponsors: David Ayers – IntarS Austria (for 43 months), Evolix (for 43 months), Offensive Security (for 43 months), Seznam.cz, a.s. (for 43 months), Freeside Internet Service (for 42 months), MyTux (for 42 months), Intevation GmbH (for 40 months), Linuxhotel GmbH (for 40 months), Daevel SARL (for 38 months), Bitfolk LTD (for 37 months), Megaspace Internet Services GmbH (for 37 months), Greenbone Networks GmbH (for 36 months), NUMLOG (for 36 months), WinGo AG (for 36 months), Ecole Centrale de Nantes – LHEEA (for 32 months), Sig-I/O (for 29 months), Entr’ouvert[...]



Jonathan Dowland: Jason Scott Talks His Way Out Of It

Fri, 12 Jan 2018 13:59:43 +0000

(image)

I've been thoroughly enjoying the Jason Scott Talks His Way Out Of It Podcast by Jason Scott (of the Internet Archive and Archive Team, amongst other things) and perhaps you will too.

Scott started this podcast and a corresponding Patreon/LibrePay/Ko-Fi/Paypal/etc funding stream in order to help him get out of debt. He's candid about getting in and out of debt within the podcast itself; but he also talks about his work at The Internet Archive, the history of Bulletin-Board Systems, Archive Team, and many other topics. He's a good speaker and it's well worth your time. Consider supporting him too!

This reminds me that I am overdue writing an update on my own archiving activities over the last few years. Stay tuned…




Norbert Preining: Debian/TeX Live 2017.20180110-1 – the big rework

Fri, 12 Jan 2018 00:28:25 +0000

In short succession a new release of TeX Live for Debian – what could that bring? While there are not a lot of new and updated packages, there is a lot of restructuring of the packages in Debian, mostly trying to placate the voices complaining that the TeX Live packages are getting bigger and bigger and bigger (which is true). In this release we have introduced two measures to allow for smaller installations: optional font package dependencies and a downgrade of the -doc packages to suggests.

Let us discuss the two changes, first the one about optional font packages: till the last release the TeX Live package texlive-fonts-extra depended on a long list of font-* packages, which did amount to a considerable download and install size. There was a reason for this: TeX documents using these fonts via file name use kpathsea, so there are links from the texmf-dist tree to the actual font files. To ensure that these links are not dangling, the font packages were an unconditional dependency.

But current LaTeX packages allow fonts to be looked up not only via file name, but also via font name using the fontconfig library. Although this is a suboptimal solution due to the inconsistencies and bugs of the fontconfig library (OsF and Expert font sets are a typical example of fonts that throw fontconfig into despair), it allows the use of fonts outside the TEXMF trees.

To allow users to reduce the installation size, we have made the following changes:

  • texlive-fonts-extra only recommends the various font packages, but does not depend on them;
  • links from the texmf-dist tree are now shipped in a new package texlive-fonts-extra-links;
  • texlive-fonts-extra recommends texlive-fonts-extra-links, but does not strictly depend on it;
  • only texlive-full depends on texlive-fonts-extra-links to provide the same experience as upstream TeX Live.

With these changes in place, users can decide to only install the TeX Live packages they need, leave out texlive-fonts-extra-links, and install only those fonts they actually need. This is in particular of interest for the build dependencies, which will shrink considerably.

The other change we have implemented in this release is a long requested, but until now always rejected by me, one: the demotion of -doc packages to suggestions instead of recommendations. The texlive-*-doc packages are at times rather big, and with the default setup of installing recommendations this induced a sharp rise in disc/download volume when installing TeX Live. By demoting the -doc packages to suggests they will not be automatically installed.

I am still not convinced that this is a good solution, mostly for two reasons: (i) people will cry out about missing documentation, and (ii) it is a gray area in license terms, due to the requirement of several packages that code and docs are distributed together. Due to the above two reasons I might revert this change in the future, but for now let us [...]



Tianon Gravi: iSCSI in Debian

Thu, 11 Jan 2018 07:00:00 +0000

I’ve recently been playing with Debian’s iSCSI support, and it’s pretty neat. It was a little esoteric to set things up, so I figured I’d write up a quick blog post of exactly what I did both for my own future-self’s sake and for the sake of anyone else trying to do something similar. The most “followable” guide I found was https://www.certdepot.net/rhel7-configure-iscsi-target-initiator-persistently/ (which the below is probably really similar to).

The exact details of what I was trying to accomplish are as follows:

  • 100GB “sparse” file on my-desktop presented as an iSCSI target
  • mounted on my-rpi3 as /var/lib/docker (preferably with discard enabled so the file on my-desktop stays sparse)

On my-desktop, I used the targetcli-fb package to configure my iSCSI target:

$ sudo apt install targetcli-fb

$ # create the sparse file
$ mkdir -p /home/tianon/iscsi
$ truncate --size=100G /home/tianon/iscsi/my-rpi3-docker.img

$ # launch "targetcli" to configure the iSCSI bits
$ sudo targetcli

# create a "fileio" object connected to the new sparse file
/> /backstores/fileio create name=my-rpi3-docker file_or_dev=/home/tianon/iscsi/my-rpi3-docker.img

# enable "emulated TPU" (enable TRIM / UNMAP / DISCARD)
/> /backstores/fileio/my-rpi3-docker set attribute emulate_tpu=1

# create iSCSI storage object
/> /iscsi create iqn.1992-01.com.example.my-desktop:storage:my-rpi3-docker

# create "LUN" assigned to the "fileio" object
/> /iscsi/iqn.1992-01.com.example.my-desktop:storage:my-rpi3-docker/tpg1/luns create /backstores/fileio/my-rpi3-docker

# create an ACL for my-rpi3 to connect
/> /iscsi/iqn.1992-01.com.example.my-desktop:storage:my-rpi3-docker/tpg1/acls create iqn.1992-01.com.example:node:my-rpi3

# and set a CHAP username and password, for security
/> /iscsi/iqn.1992-01.com.example.my-desktop:storage:my-rpi3-docker/tpg1/acls/iqn.1992-01.com.example:node:my-rpi3 set auth userid=rpi3 password=holy-cow-this-iscsi-password-is-so-secret-nobody-will-evvvvvvvvver-guess-it

Additionally, I’ve been experimenting with firewalld on my-desktop, so I had to add the iscsi-target service to my internal zone to allow the traffic from my-rpi3.

On my-rpi3, I used the open-iscsi package to configure my iSCSI initiator:

$ sudo apt install open-iscsi

$ # update "InitiatorName" to match the value from our ACL above
$ sudo vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1992-01.com.example:node:my-rpi3

$ # update "node.startup" and "node.session.auth.*" for our CHAP credentials from above
$ sudo vim /etc/iscsi/iscsid.conf
...
node.startup = automatic
...
node.session.auth.authmethod = CHAP
node.session.auth.username = rpi3
node.session.auth.password = holy-cow-this-iscsi-password-is-so-secret-nobody-will-evvvvvvvvver-guess-it
...

# restart iscsid so all that takes effect (especially the InitiatorName change)
$ sudo systemctl restart iscsid
$[...]



Norbert Preining: Gaming: Monument Valley 2

Thu, 11 Jan 2018 01:34:21 +0000

(image)

I recently found out that Monument Valley, one of my favorite games of 2016, has a successor, Monument Valley 2. Short on time as I usually am, I was nervous that the game would destroy my schedule, but I went ahead and purchased it!
(image)

Fortunately, it turned out to be quite a short game, maybe 2h at most to complete all the levels. The graphics are similar to its predecessor's, that is to say beautifully crafted. The game mechanics are also unchanged, with only a few additions (like the grow-rotating tree; see the image in the lower right corner above). What has changed is that there are now two actors, mother and daughter, and sometimes one has to manage both in parallel, which adds a nice twist.

What I didn’t like too much were the pseudo-philosophical teachings in the middle, like that one
(image)
but then, they are fast to skip over.

All in all again a great game and not too much of a time killer. This time I played it not on my mobile but on my Fire tablet, and the bigger screen was excellent and made the game more enjoyable.

Very recommendable.




Markus Koschany: My Free Software Activities in December 2017

Wed, 10 Jan 2018 19:07:08 +0000

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

I spent some time in December 2017 to revive Hex-a-Hop, a nice (and somehow cute) logic game, which eventually closed seven bugs. Unfortunately this game was not well maintained but it should be up-to-date again now. I released a new version of debian-games, a collection of games metapackages. Five packages were removed from Debian but I could also add eight new games or frontends to compensate for that. I updated a couple of packages to fix minor and normal bugs, namely: dopewars (#633392, #857671), caveexpress, marsshooter, snowballz (#866481), drascula, lure-of-the-temptress, lgeneral-data (#861048) and lordsawar (#885888). I also packaged new upstream versions of renpy and lgeneral. Last but not least: I completed another bullet transition (#885179).

Debian Java

I completed my work on triplea, a collection of strategy games written in Java, and packaged a new upstream release. The transition to Java 9 is still one of the most important Java issues in Debian. I either prepared patches or fixed the bugs in these packages directly: snakeyaml (#874140), mvel (#875775), libjgrapht-java (#740826, #874638), josql (#875587), dbus-java (#762550, #875355), libxmlrpc3-java (#874662), uddi4j (#874129), libjtds-java (#874644), cadencii (#873972), coco-java (#873973), olap4j (#873216, prepared by Chris West), libmatthew-java (#873989), jtb (#873985), db5.3 (#873976) and libusb-java. I worked on bouncycastle, a crypto library, and addressed CVE-2017-13098. More notable changes: I updated undertow to fix a FTBFS bug (#883357), repacked libjchart2d-java because of some files which were not removed by get-orig-source as intended, reassigned and fixed #883387 to eclipselink and updated felix-osgi-obr, felix-shell and felix-shell-tui. New upstream versions this month: jboss-modules.

Debian LTS

This was my twenty-second month as a paid contributor and I have been paid to work 14 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1216-1. Issued a security update for wordpress fixing 4 CVE.
  • DLA-1227-1. Issued a security update for imagemagick fixing 4 CVE.
  • DLA-1231-1. Issued a security update for graphicsmagick fixing 8 CVE. I confirmed that two more CVE (CVE-2017-17783 and CVE-2017-17913) did not affect the version in Wheezy.
  • DLA-1236-1. Issued a security update for plexus-utils fixing 1 CVE.
  • DLA-1237-1. Issued a security update for plexus-utils2 fixing 1 CVE.
  • DLA-1208-1. I released an update for Debian’s reportbug tool to fix bug #878088. The LTS and security teams will be informed from now on when users report regressions due to security updates.

I have also prepared updates[...]



Sean Whitton: Are you a DD or DM doing source-only uploads to Debian out of a git repository?

Wed, 10 Jan 2018 17:54:45 +0000

If you are a Debian Maintainer (DM) or Debian Developer (DD) doing source-only uploads to Debian for packages maintained in git, you are probably using some variation of the following:

% # sbuild/pbuilder, install and test the final package
% # everything looks good
% dch -r
% git commit -m "Finalise 1.2.3-1 upload" debian/changelog
% gbp buildpackage -S --git-tag
% debsign -S
% dput ftp-master ../foo_1.2.3-1_source.changes
% git push --follow-tags origin master

where the origin remote is probably salsa.debian.org. Please consider replacing the above with the following:

% # sbuild/pbuilder, install and test the final package
% # everything looks good
% dch -r
% git commit -m "Finalise 1.2.3-1 upload" debian/changelog
% dgit push-source --gbp
% git push --follow-tags origin master

where the dgit push-source call does the following:

  • Various sanity checks, some of which are not performed by any other tools, such as not accidentally overwriting an NMU, not missing the .orig.tar from your upload, and ensuring that the Distribution field in your changes is the same as your changelog.
  • Builds a source package from your git HEAD.
  • Signs the .changes and .dsc.
  • dputs these to ftp-master.
  • Pushes your git history to dgit-repos.

Why might you want to do this? Well,

  • You don’t need to learn how to use dgit for any other parts of your workflow. It’s entirely drop-in. dgit will not make any merge commits on your master branch, or anything surprising like that. (It might make a commit to tweak your .gitignore.)
  • No-one else in your team is required to use dgit. Nothing about their workflow need change.
  • Benefit from dgit’s sanity checks.
  • Provide your git history on dgit-repos in a uniform format that is easier for users, NMUers and downstreams to use (see dgit-user(7) and dgit-simple-nmu(7)). Note that this is independent of the history you push to alioth/salsa. You still need to push to salsa as before, and the format of that history is not changed.
  • Only a single command is required to perform the source-only upload, instead of three.

Hints

  • If you’re using git dpm you’ll want --dpm instead of --gbp.
  • If the last upload of the package was not performed with dgit, you’ll need to pass --overwrite. dgit will tell you if you need this. This is to avoid accidentally excluding the changes in NMUs.

[...]



Ben Hutchings: Meltdown and Spectre in Debian

Wed, 10 Jan 2018 03:05:52 +0000


I'll assume everyone's already heard repeatedly about the Meltdown and Spectre security issues that affect many CPUs. If not, see meltdownattack.com. These primarily affect systems that run untrusted code - such as multi-tenant virtual hosting systems. Spectre is also a problem for web browsers with JavaScript enabled.

Meltdown

Over the last week the Debian kernel team has worked to mitigate Meltdown in all suites. This mitigation is currently limited to kernels running in 64-bit mode (amd64 architecture), but the issue affects 32-bit mode as well.

You can see where this mitigation is applied on the security tracker. As of today, wheezy, jessie, jessie-backports, stretch and unstable/sid are fixed while stretch-backports, testing/buster and experimental are not.
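
If you want to check whether the kernel you are actually running has the page-table isolation (KPTI) mitigation active, a couple of quick looks are possible. The sysfs file below only exists on sufficiently new kernels, so its absence by itself proves nothing; the boot log message is the more widely available indicator:

$ dmesg | grep -i 'page table'                            # patched kernels log whether isolation is enabled
$ cat /sys/devices/system/cpu/vulnerabilities/meltdown    # only present on newer kernel versions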

Spectre

Spectre needs to be mitigated in the kernel, browsers, and potentially other software. Currently the kernel changes to mitigate it are still under discussion upstream. Mozilla has started mitigating Spectre in Firefox and some of these changes are now in Debian unstable (version 57.0.4-1). Chromium has also started mitigating Spectre but no such changes have landed in Debian yet.
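
If you want to check whether the Firefox you have installed already carries these first mitigations, you can compare the installed version against 57.0.4-1. This assumes the firefox package from unstable; stable users have firefox-esr, whose fixed version will differ:

$ ver=$(dpkg-query -W -f='${Version}' firefox)
$ dpkg --compare-versions "$ver" ge 57.0.4-1 && echo "mitigations included" || echo "older than 57.0.4-1"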




Ben Hutchings: Debian LTS work, December 2017

Wed, 10 Jan 2018 02:12:42 +0000


I was assigned 14 hours of work by Freexian's Debian LTS initiative, but only worked 6 hours so I carried over 8 hours to January.

I prepared and uploaded an update to the Linux kernel to fix various security issues. I issued DLA-1200-1 for this update. I also prepared another update on the Linux 3.2 longterm stable branch, though most of that work was done while on holiday so I didn't count the hours. I spent some time following the closed mailing list used to coordinate backports of KPTI/KAISER.




Lars Wirzenius: On using Github and a PR based workflow

Tue, 09 Jan 2018 16:11:46 +0000


In mid-2017, I decided to experiment with using pull-requests (PRs) on Github. I've read that they make development using git much nicer. The end result of my experiment is that I'm not going to adopt a PR based workflow.

The project I chose for my experiment is vmdb2, a tool for generating disk images with Debian. I put it up on Github, and invited people to send pull requests or patches, as they wished. I got a bunch of PRs, mostly from two people. For a little while, there was a flurry of activity. It has now calmed down, I think primarily because the software has reached a state where the two contributors find it useful and don't need it to be fixed or have new features added.

This was my first experience with PRs. I decided to give it until the end of 2017 before drawing any conclusions. I've found good things about PRs and a workflow based on them:

  • they reduce some of the friction of contributing, making it easier for people to contribute; from a contributor's point of view PRs certainly seem like a better way than sending patches over email or sending a message asking to pull from a remote branch
  • merging a PR in the web UI is very easy

I also found some bad things:

  • I really don't like the Github UI or UX, in general or for PRs in particular
  • especially the emails Github sends about PRs, which seemed useless beyond a basic "something happened" notification, prompting me to check the web UI
  • PRs are a centralised feature, which is something I prefer to avoid; further, they're tied to Github, which is something I object to on principle, since it's not free software
    • note that Gitlab provides support for PRs as well, but I've not tried it; it's an "open core" system, which is not fully free software in my opinion, and so I'm wary of Gitlab; it's also a centralised solution
    • a "distributed PR" system would be nice
  • merging a PR is perhaps too easy, and I worry that it leads me to merging without sufficient review (that is of course a personal flaw)

In summary, PRs seem to me to prioritise making life easier for contributors, especially occasional contributors or "drive-by" contributors. I think I prefer to care more about frequent contributors, and myself as the person who merges contributions. For now, I'm not going to adopt a PR based workflow.

(I expect people to mock me for this.)




Jonathan McDowell: How Virgin Media lost me as a supporter

Tue, 09 Jan 2018 08:39:57 +0000

For a long time I’ve been a supporter of Virgin Media (from a broadband perspective, though their triple play TV/Phone/Broadband offering has seemed decent too). I know they have a bad reputation amongst some people, but I’ve always found their engineers to be capable, their service in general reliable, and they can manage much faster speeds than any UK ADSL/VDSL service at cheaper prices. I’ve used their services everywhere I’ve lived that they were available, starting back in 2001 when I lived in Norwich. The customer support experience with my most recent move has been so bad that I am no longer of the opinion it is a good idea to use their service. Part of me wonders if the customer support has got worse recently, or if I’ve just been lucky.

We had a problem about 6 months ago which was clearly a loss of signal on the line (the modem failed to see anything and I could clearly pinpoint when this had happened as I have collectd monitoring things). Support were insistent they could do a reset and fix things, then said my problem was the modem and I needed a new one (I was on an original v1 hub and the v3 was the current model). I was extremely dubious but they insisted. It didn’t help, and we ended up with an engineer visit - who immediately was able to say they’d been disconnecting noisy lines that should have been unused at the time my signal went down, proceeded to confirm my line had been unhooked at the cabinet, and then, when it was obvious the line was noisy and would have caused problems if hooked back up, patched me into the adjacent connection next door. Great service from the engineer, but support should have been aware of work in the area and been able to figure out that might have been a problem rather than me having a 4-day outage and numerous phone calls when the “resets” didn’t fix things.

Anyway. I moved house recently, and got keys before moving out of the old place, so decided to be organised and get broadband setup before moving in - there was no existing BT or Virgin line in the new place so I knew it might take a bit longer than usual to get setup. Also it would have been useful to have a connection while getting things sorted out, so I could work while waiting in for workmen. As stated at the start I’ve been pro Virgin in the past, I had their service at the old place and there was a CableTel (the Belfast cable company NTL acquired) access hatch at the property border so it was clear it had had service in the past. So on October 31st I placed an order on their website and was able to select an installation date of November 11th (earlier dates were available but this was a Saturday and more convenient). This all seemed fine;[...]



Don Armstrong: Debbugs Versioning: Merging

Tue, 09 Jan 2018 05:25:16 +0000

One of the key features of Debbugs, the bug tracking system Debian uses, is its ability to figure out which bugs apply to which versions of a package by tracking package uploads. This system generally works well, but when a package maintainer's workflow doesn't match the assumptions of Debbugs, unexpected things can happen. In this post, I'm going to:

  • introduce how Debbugs tracks versions
  • provide an example of a merge-based workflow which Debbugs doesn't handle well
  • provide some suggestions on what to do in this case

Debbugs Versioning

Debbugs tracks versions using a set of one or more rooted trees which it builds from the ordering of debian/changelog entries. In the simplest case, every upload of a Debian package has changelogs in the same order, and each upload adds just one version. For example, in the case of dgit, to start with the package has this (abridged) version tree:

The next upload, 3.13, has a changelog with this version ordering: 3.13 3.12 3.11 3.10, which causes the 3.13 version to be added as a descendant of 3.12, and the version tree now looks like this:

dgit is being developed while also being used, so new versions with potentially disruptive changes are uploaded to experimental while production versions are uploaded to unstable. For example, the 4.0 experimental upload was based on the 3.10 version, with the changelog ordering 4.0 3.10. The tree now has two branches, but everything seems as you would expect:

Merge based workflows

Bugfixes in the maintenance version of dgit are also made to the experimental package by merging changes from the production version using git. In this case, some changes which were present in the 3.12 and 3.11 versions are merged using git, which corresponds to a git merge flow like this:

If an upload is prepared with changelog ordering 4.1 4.0 3.12 3.11 3.10, Debbugs combines this new changelog ordering with the previously known tree, to produce this version tree:

This looks a bit odd; what happened? Debbugs walks through the new changelog, connecting each of the new versions to the previous version if and only if that version is not already an ancestor of the new version. Because the changelog says that 3.12 is the ancestor of 4.0, that's where the 4.1 4.0 version tree is connected.

Now, when 4.2 is uploaded, it has the changelog ordering (based on time) 4.2 3.13 4.1 4.0 3.12 3.11 3.10, which corresponds to this git merge flow:

Debbugs adds in 3.13 as an ancestor of 4.2, and because 4.1 was not an ancestor of 3.13 in the previous tree, 4.1 is added as an ancestor of 3.13. This results in the following graph:

Which doesn't seem particularly helpful, because is prob[...]
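
As an aside, if you want to see the exact version ordering Debbugs will read out of an upload before you make it, you can dump it from debian/changelog. A minimal sketch, run from the top of the unpacked source tree with a reasonably recent dpkg:

$ dpkg-parsechangelog --format rfc822 --all | awk '/^Version:/ { print $2 }'

The versions are printed newest first, which is the ordering Debbugs walks through as described above.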



Michael Stapelberg: Debian buster on the Raspberry Pi 3 (update)

Mon, 08 Jan 2018 21:55:00 +0000

I previously wrote about my Debian buster preview image for the Raspberry Pi 3.

Now, I’m publishing an updated version, containing the following changes:

  • WiFi works out of the box. Use e.g. ip link set dev wlan0 up, and iwlist wlan0 scan.
  • Kernel boot messages are now displayed on an attached monitor (if any), not just on the serial console.
  • Root file system resizing will now not touch the partition table if the user modified it.
  • The image is now compressed using xz, reducing its size to 170M.

As before, the image is built with vmdb2, the successor to vmdebootstrap. The input files are available at https://github.com/Debian/raspi3-image-spec.

Note that Bluetooth is still untested (see wiki:RaspberryPi3 for details).

Given that Bluetooth is the only known issue, I’d like to work towards getting this image built and provided on official Debian infrastructure. If you know how to make this happen, please send me an email. Thanks!

As a preview version (i.e. unofficial, unsupported, etc.) until that’s done, I built and uploaded the resulting image. Find it at https://people.debian.org/~stapelberg/raspberrypi3/2018-01-08/. To install the image, insert the SD card into your computer (I’m assuming it’s available as /dev/sdb) and copy the image onto it:

$ wget https://people.debian.org/~stapelberg/raspberrypi3/2018-01-08/2018-01-08-raspberry-pi-3-buster-PREVIEW.img.xz
$ xzcat 2018-01-08-raspberry-pi-3-buster-PREVIEW.img.xz | dd of=/dev/sdb bs=64k oflag=dsync status=progress
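
Since dd will happily overwrite whatever block device you point it at, it is worth double-checking which device node the SD card actually received before running the command above; /dev/sdb is only an example. One way to do that:

$ lsblk -o NAME,SIZE,MODEL,TRAN    # the SD card is usually recognisable by its size and (for card readers) the usb transport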

If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:

$ ssh root@rpi3
# Password is “raspberry”



Jonathan Dowland: Announcing BadISO

Mon, 08 Jan 2018 19:00:56 +0000


For a few years now I have been working on-and-off on a personal project to import data from a large collection of home-made CD-Rs and DVD-Rs. I've started writing up my notes, experiences and advice for performing a project like this; but they aren't yet in a particularly legible state.

As part of this work I wrote some software called "BadISO" which takes a possibly-corrupted or incomplete optical disc image (specifically ISO9660) and, combined with a GNU ddrescue map (or log) file, tells you which files within the image are intact and which are not. The idea is that you have tried to import a disc using ddrescue and some areas of the disc have not read successfully. The ddrescue map file tells you which areas those are in byte terms, but not which files they correspond to. BadISO plugs that gap.
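
For context, the image and map file that BadISO works from come out of a GNU ddrescue run over the disc. A rough sketch; the device name, sector size and retry count are illustrative rather than prescriptive:

$ ddrescue -b 2048 -r 3 /dev/sr0 my_files.iso my_files.map    # -b 2048 matches the CD/DVD sector size, -r 3 retries bad areas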

Here's some example output:

$ badiso my_files.iso
…
✔ ./joes/allstars.zip
✗ ./joes/ban.gif
✔ ./joes/eur-mgse.zip
✔ ./joes/gold.zip
✗ ./joes/graphhack.txt
…

BadISO was (and really, is) a hacky proof of concept written in Python. I have ambitions to re-write it properly (in either Idris or Haskell) but I'm not going to get around to it in the near future, and in the meantime at least one other person has found this useful. So I'm publishing it in its current state.

BadISO currently requires GNU xorriso.

You can grab it from https://github.com/jmtd/badiso.




Jelmer Vernooij: Breezy: Forking Bazaar

Mon, 08 Jan 2018 16:00:00 +0000

A couple of months ago, Martin and I announced a friendly fork of Bazaar, named Breezy.

It's been 5 years since I wrote a Bazaar retrospective and around 6 since I seriously contributed to the Bazaar codebase.

Goals

We don't have any grand ambitions for Breezy; the main goal is to keep Bazaar usable going forward. Your open source projects should still be using Git.

The main changes we have made so far come down to fixing a number of bugs and to bundling useful plugins. Bundling plugins makes setting up an environment simpler and eliminates the API compatibility issues that plagued external plugins in the Bazaar world.
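
If you want to see what your Breezy installation ships with, the plugin listing command inherited from Bazaar should show the bundled plugins; I'm assuming here that the subcommand survived the rename unchanged:

$ brz plugins    # list the plugins (bundled and external) that Breezy has loaded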

Perhaps the biggest effort in Breezy is porting the codebase to Python 3, allowing it to be used once Python 2 goes EOL in 2020.

A fork

Breezy is a fork of Bazaar and not just a new release series.

Bazaar upstream has been dormant for the last couple of years anyway - we don't lose anything by forking.

We're forking because it gives us the independence to make some of the changes we deemed necessary and that are otherwise hard to make for an established project. For example, we're now bundling plugins, taking an axe to a large number of APIs and dropping support for older platforms.

A fork also means independence from Canonical; there is no CLA for Breezy (a hindrance for Bazaar) and we can set up our own infrastructure without having to chase down Canonical staff for web site updates or the installation of new packages on the CI system.

More information

Martin gave a talk about Breezy at PyCon UK this year.

Breezy bugs can be filed on Launchpad. For the moment, we are using the Bazaar mailing list and the #bzr IRC channel for any discussions and status updates around Breezy.




Sean Whitton: dgit push-source

Mon, 08 Jan 2018 07:48:27 +0000


dgit 4.2, which is now in Debian unstable, has a new subcommand: dgit push-source. This is just like dgit push, except that

  • it forces a source-only upload; and
  • it also takes care of preparing the _source.changes, transparently, without the user needing to run dpkg-buildpackage -S or dgit build-source or whatever.

push-source is useful to ensure you don’t accidentally upload binaries, and that was its original motivation. But there is a deeper significance to this new command: to say

% dgit push-source unstable

is, in one command, basically to say

% git push ftp-master HEAD:unstable

That is: dgit push-source is like doing a single-step git push of your git HEAD straight into the archive! The future is now!

The contrast here is with ordinary dgit push, which is not analogous to a single git push command, because

  • it involves uploading .debs, which make a material change to the archive other than updating the source code of the package; and
  • it must be preceded by a call to dpkg-buildpackage, dgit sbuild or similar to prepare the .changes file.

dgit push-source does also involve uploading files to ftp-master in addition to the git push, but because that happens transparently and does not require the user to run a build command, it can be thought of as an implementation detail.

Two remaining points of disanalogy:

  1. git push will push your HEAD no matter the state of the working tree, but dgit push-source has to clean your working tree. I’m thinking about ways to improve this.

  2. For non-native packages, you still need an orig.tar in ... Urgh. At least that’s easy to obtain thanks to git-deborig.
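
For that orig.tar, the git-deborig step mentioned above can look like the following, assuming your upstream source is tagged in git as something like upstream/1.2.3 or v1.2.3 (the package and version names are placeholders):

% git deborig                     # writes ../foo_1.2.3.orig.tar.xz from the upstream tag
% dgit push-source unstable       # then upload as described above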




Mehdi Dogguy: Salsa webhooks and integrated services

Sun, 07 Jan 2018 23:07:00 +0000

For many years now, Debian has been facing an issue with one of its most important services: alioth.debian.org (Debian's forge). It is used by most of the teams and hosts thousands of repositories (of all sorts) and mailing-lists. The service was stable (and still is), but not maintained. So it became increasingly important to find its replacement.

Recently, a team of volunteers organized a sprint to work on the replacement of Alioth. I was very skeptical about the status of this new project until... tada! An announcement was sent out about the beta release of this new service: salsa.debian.org (a GitLab CE instance). Of course, Salsa hosts only Git repositories and doesn't deal with other {D,}VCSes used on Alioth (like Darcs, Svn, CVS, Bazaar and Mercurial) but it is a huge step forward!

I must say that I absolutely love this new service which brings fresh air to Debian developers. We all owe a debt of gratitude to all those who made this possible. Thank you very much for working on this!

Alas, no automatic migration was done between the two services (for good reasons). The migration is left to the maintainers. It might be an easy task for some who maintain a few packages, but it is a depressing task for bigger teams.

To make this easy, Christoph Berg wrote a batch import script to import a Git repository in a single step. Unfortunately, GitLab is what it is... and it is not possible to set team-wide parameters to use in each repository. Salsa's documentation describes how to configure that for each repository (project, in GitLab's jargon) but this click-monkey-work is really not for many of us. Fortunately, GitLab has a nice API and all this is scriptable. So I wrote some scripts to:

  • Import a Git repo (mainly Christoph's script with an enhancement)
  • Set up IRC notifications
  • Configure email pushes on commits
  • Enable webhooks to automatically tag 'pending' or 'close' Debian BTS bugs depending on mentions in commit messages.

I published those scripts here: salsa-scripts. They are not meant to be beautiful, but only to make your life a little less miserable. I hope you find them useful. Personally, this first step was a prerequisite for the migration of my personal and team repositories over to Salsa. If more people want to contribute to those scripts, I can move the repository into the Debian group. [...]
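
As an illustration of the kind of call such scripts boil down to, adding a push webhook to a Salsa project via the GitLab v4 API looks roughly like this; the token, project id and target URL are placeholders, and the actual endpoints Mehdi's scripts use may differ:

$ curl --request POST --header "PRIVATE-TOKEN: $SALSA_TOKEN" \
      --data "url=https://example.org/my-webhook" --data "push_events=true" \
      "https://salsa.debian.org/api/v4/projects/<project-id>/hooks"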



Petter Reinholdtsen: Legal to share more than 11,000 movies listed on IMDB?

Sun, 07 Jan 2018 22:30:00 +0000

I've continued to track down lists of movies that are legal to distribute on the Internet, and have identified more than 11,000 title IDs in The Internet Movie Database (IMDB) so far. Most of them (57%) are feature films from USA published before 1923. I've also tracked down more than 24,000 movies I have not yet been able to map to IMDB title ID, so the real number could be a lot higher. According to the front web page for Retro Film Vault, there are 44,000 public domain films, so I guess there are still some left to identify. The complete data set is available from a public git repository, including the scripts used to create it. Most of the data is collected using web scraping, for example from the "product catalog" of companies selling copies of public domain movies, but any source I find believable is used. I've so far had to throw out three sources because I did not trust the public domain status of the movies listed. Anyway, this is the summary of the 28 collected data sources so far:

 2352 entries (   66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
 2302 entries (  120 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  195 entries (   63 unique) with and   200 without IMDB title ID in free-movies-cinemovies.json
   89 entries (   52 unique) with and    38 without IMDB title ID in free-movies-creative-commons.json
  344 entries (   28 unique) with and   655 without IMDB title ID in free-movies-fesfilm.json
  668 entries (  209 unique) with and  1064 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (   21 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (   19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 6822 entries ( 6669 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
  137 entries (    0 unique) with and     0 without IMDB title ID in free-movies-imdb-externlist.json
 1205 entries (   57 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
   84 entries (   20 unique) with and   167 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries (  135 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (    4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries (  100 unique) with and     0 without IMDB title ID in free-movies-letterboxd-sile[...]



Russ Allbery: Free software log (December 2017)

Sun, 07 Jan 2018 20:27:00 +0000

I finally have enough activity to report that I need to think about how to format these updates. It's a good problem to have.

In upstream work, I updated rra-c-util with an evaluation of the new warning flags for GCC 7, enabling as many warnings as possible. I also finished the work to build with Clang with warnings enabled, which required investigating a lot of conversions between variables of different sizes. Part of that investigation surfaced that the SA_LEN macro was no longer useful, so I removed that from INN and rra-c-util. I'm still of two minds about whether adding the casts and code to correctly build with -Wsign-conversion is worth it. I started a patch to rra-c-util (currently sitting in a branch), but wasn't very happy about the resulting code quality. I think doing this properly requires some thoughtfulness about macros and a systematic approach.

Releases:

  • C TAP Harness 4.2 (includes new is_blob contribution)
  • DocKnot 1.02
  • pam-krb5 4.8
  • podlators 4.10
  • rra-c-util 7.0
  • Tasker 0.4 (final release)

In Debian Policy, I wrote and merged a patch for one bug, merged patches for two other bugs, and merged a bunch of wording improvements to the copyright-format document. I also updated the license-count script to work with current Debian infrastructure and ran it again for various ongoing discussions of what licenses to include in common-licenses.

Debian package uploads:

  • lbcd (now orphaned)
  • libafs-pag-perl (refreshed and donated to pkg-perl)
  • libheimdal-kadm5-perl (refreshed and donated to pkg-perl)
  • libnet-ldapapi-perl
  • puppet-modules-puppetlabs-apt
  • puppet-modules-puppetlabs-firewall (team upload)
  • puppet-modules-puppetlabs-ntp (team upload)
  • puppet-modules-puppetlabs-stdlib (two uploads)
  • rssh (packaging refresh, now using dgit)
  • webauth (now orphaned)
  • xfonts-jmk

There are a few more orphaning uploads and uploads giving Perl packages to the Perl packaging team coming, and then I should be down to the set of packages I'm in a position to actively maintain and I can figure out where I want to focus going forward. [...]



Sven Hoexter: BIOS update Dell Latitude E7470 and Lenovo TP P50

Sat, 06 Jan 2018 22:34:55 +0000

Maybe some recent events led to BIOS update releases by various vendors around the end of 2017. So I set out to update (for the first time) the BIOS of my laptops. Searching the interwebs for some hints I found a lot of outdated information involving USB thumb drives, CDs and FreeDOS variants, but also some useful stuff. So here is the short list of what actually worked in case I need to do it again.

Update: Added a Wiki page so it's possible to extend the list. It seems that some of us have avoided the update hassle so far, but now, with all those Intel ME CVEs and Intel microcode updates, it's likely we'll have to do it more often.

Dell Latitude E7470 (UEFI boot setup)

  1. Download the file "Latitude_E7x70_1.18.5.exe" (or whatever is the current release).
  2. Move the file to "/boot/efi/".
  3. Boot into the one time boot menu with F12 during the BIOS/UEFI start.
  4. Select the "Flash BIOS Update" menu option.
  5. Use your mouse to select the update file visually and watch the magic.
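
Step 2 in shell terms, assuming the ESP has an /etc/fstab entry for /boot/efi and the file from step 1 landed in ~/Downloads (adjust the file name for the current release):

$ mount | grep -q /boot/efi || sudo mount /boot/efi        # mount the ESP if it is not already mounted
$ sudo cp ~/Downloads/Latitude_E7x70_1.18.5.exe /boot/efi/
$ sync                                                     # flush to disk before rebooting into the updater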

So no USB sticks, FreeDOS, SystemrescueCD images or other tricks involved. Whether it's cool that the computer in your computer's computer running Minix (or whatever is involved in this case) updates your firmware is a different topic, but the process is pretty straightforward.

Lenovo ThinkPad P50

  1. Download the BIOS Update bootable CD image from Lenovo "n1eur31w.iso" (Select Windows as OS so it's available for download).
  2. Extract the eltorito boot image from the image "geteltorito -o thinkpad.img Downloads/n1eur31w.iso".
  3. Dump it on a USB thumb drive "dd if=thinkpad.img of=/dev/sdX".
  4. Boot from this thumb drive and follow the instructions of the installer.

I guess the process is similar for almost all ThinkPads.