Planet Debian - http://planet.debian.org/
Petter Reinholdtsen: Easier recipe to observe the cell phones around you

Sun, 24 Sep 2017 06:30:00 +0000

A little more than a month ago I wrote how to observe the SIM card ID (aka IMSI number) of mobile phones talking to nearby mobile phone base stations using Debian GNU/Linux and a cheap USB software defined radio, and thus pinpoint the location of people and equipment (like cars and trains) with an accuracy of a few kilometers. Since then we have worked to make the procedure even simpler, and it is now possible to do this without any manual frequency tuning and without building your own packages.

The gr-gsm package is now included in Debian testing and unstable, and the IMSI-catcher code no longer requires root access to fetch and decode the GSM data collected using gr-gsm.

Here is an updated recipe, using packages built by Debian and a git clone of two Python scripts (collected into a single shell sketch after the list):

  1. Start with a Debian machine running the Buster version (aka testing).
  2. Run 'apt install gr-gsm python-numpy python-scipy python-scapy' as root to install required packages.
  3. Fetch the code for decoding GSM packets using 'git clone github.com/Oros42/IMSI-catcher.git'.
  4. Insert USB software defined radio supported by GNU Radio.
  5. Enter the IMSI-catcher directory and run 'python scan-and-livemon' to locate the frequencies of nearby base stations and start listening for GSM packets on one of them.
  6. Enter the IMSI-catcher directory and run 'python simple_IMSI-catcher.py' to display the collected information.
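For convenience, here is the same recipe collected into a shell session. This is only a sketch of the steps described above, assuming a Buster system with a supported RTL-SDR dongle plugged in; the script names come from the IMSI-catcher repository, and the https:// scheme is added to the clone URL so git can fetch it directly:

# Run as root: install the required packages.
apt install gr-gsm python-numpy python-scipy python-scapy

# As a normal user: fetch the decoding scripts.
git clone https://github.com/Oros42/IMSI-catcher.git
cd IMSI-catcher

# Terminal 1: scan for nearby base stations and start monitoring one of them.
python scan-and-livemon

# Terminal 2: display the IMSI numbers collected so far.
python simple_IMSI-catcher.py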

Note: due to a bug somewhere, the scan-and-livemon program (actually its underlying program grgsm_scanner) does not work with the HackRF radio. It does work with the RTL2832 and other similar USB radio receivers you can get very cheaply (for example from eBay), so for now the solution is to scan using the RTL radio and use the HackRF only for fetching GSM data.

As far as I can tell, a cell phone shows up on only one of the frequencies at a time, so if you are going to track and count every cell phone around you, you need to listen to all the frequencies used. To listen to several frequencies, use the --numrecv argument to scan-and-livemon to use several receivers. Further, I am not sure if phones using 3G or 4G will show up as talking GSM to base stations, so this approach might not see all phones around you. I typically see 0-400 IMSI numbers an hour when looking around where I live.

I've tried to run the scanner on a Raspberry Pi 2 and 3 running Debian Buster, but the grgsm_livemon_headless process seems to be too CPU intensive to keep up. When GNU Radio prints 'O' to stdout, I am told it is caused by a buffer overflow between the radio and GNU Radio, caused by the program being unable to read the GSM data fast enough. If you see a stream of 'O's from the terminal where you started scan-and-livemon, you need to give the process more CPU power. Perhaps someone is able to optimize the code to the point where it becomes possible to set up RPi3-based GSM sniffers? I tried using Raspbian instead of Debian, but there seems to be something wrong with GNU Radio on Raspbian, causing glibc to abort().




Iain R. Learmonth: Onion Services

Sun, 24 Sep 2017 06:15:12 +0000

In the summer 2017 edition of 2600 magazine there is a brilliant article on running onion services as part of a series on censorship resistant services. Onion services provide privacy and security for readers above that which is possible through the use of HTTPS.

Since moving my website to Netlify, my onion service died as Netlify doesn’t provide automatic onion services (although they do offer automated Let’s Encrypt certificate provisioning). If anyone from Netlify is reading this, please consider adding a one-click onion service button next to the Let’s Encrypt button. For now though, I have my onion service hosted elsewhere.

I’ve got a regular onion service (version 2) and also now a next generation onion service (version 3). My setup works like this:

  • A cronjob polls my website’s git repository that contains a Hugo static site
  • Two versions of the site are built with different base URLs set in the Hugo configuration, one for the regular onion service domain and one for the next generation onion service domain
  • Apache is configured for two virtual hosts, one for each domain name
  • tor from the Debian archives is configured for the regular onion service
  • tor from git (to have next generation onion service support) is configured for the next generation onion service

The main piece of advice I have for anyone that would like to have an onion service version of their static website is to make sure that your static site generator is handling URLs for you and that your sources have relative URLs as far as possible. Hugo is great at this and most themes should be using the baseURL configuration parameter where appropriate. There may be some room for improvement here in the polling process, perhaps this could be triggered by a webhook instead.

I’m not using HTTPS on these services as the HTTPS private key for the domain isn’t even controlled by me, it’s controlled by Netlify, so it wouldn’t really be a great method of authentication and Tor already provides strong encryption and its own authentication through the URL of the onion service. Of course, this means you need a secure way to get the URL, so here’s a PGP signed couple of URLs:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

As of 2017-09-23, the website at iain.learmonth.me is mirrored by me at the following onion addresses:

w6d6vblb6vhuqxt6.onion
tvin5bvfwew3ldttg5t6ynlif4t53y3mbmb7sgbyud7h5q6gblrpsnyd.onion

This declaration was written and signed for publication in my blog.
-----BEGIN PGP SIGNATURE-----

iQEzBAEBCgAdFiEEfGEElJRPyB2mSFaW0hedW4oe0BEFAlnG1FMACgkQ0hedW4oe
0BGtTwgAp9PK6x1X9lnPLaeOOEALxn2BkDK5Q6PBt7OfnTh+f53oRrrxf0fmfNMH
Qz/IDY+tULX3TZYbjDsuu+aDpk6YIdOnOzFpIYW9Qhm6jAsX4RDfn1cZoHg1IeM7
bCvrYHA5u753U3Mm+CsLbGihpYZE/FBdc/nE5S6LxYH83QZWLIW19EPeiBpBp3Hu
VB6hUrDz3XU23dXn2U5/7faK7GKbC6TrBG/Z6dUtaXB62xgDIrPEMorwfsAZnWv4
3mAEsYJv9rnIyLbWamXDas8fJG04DOT+2C1NYmZ5CNJ4C7PKZuIYkaoVAp+pzLGJ
6BEBYaRvYIjd5g8xdVC3kmje6IM9cg==
=lUvh
-----END PGP SIGNATURE-----

Note: For the next generation onion service, I do currently have some logging enabled in the tor daemon as I’m running this service as an experiment to uncover any bugs that appear. There is no logging beyond the default for the version 2 hidden service’s tor daemon.

Another note: Current stable releases of Tor Browser do not support next generation onion services, you’ll have to grab an experimental build to try them out.

Viewing my next generation onion service in Tor Browser [...]
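As a rough illustration of the build step described above, here is a minimal sketch of what the cronjob could run. The onion hostnames are the ones published in the post; the source and destination paths are placeholders, not the actual configuration:

#!/bin/sh
# Sketch: rebuild the static site for both onion services after pulling new commits.
set -e
cd /srv/site-src && git pull --ff-only

# One build per base URL; Hugo rewrites absolute links from baseURL.
hugo --source /srv/site-src --baseURL "http://w6d6vblb6vhuqxt6.onion/" \
     --destination /srv/www/onion-v2
hugo --source /srv/site-src --baseURL "http://tvin5bvfwew3ldttg5t6ynlif4t53y3mbmb7sgbyud7h5q6gblrpsnyd.onion/" \
     --destination /srv/www/onion-v3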



Iain R. Learmonth: Free Software Efforts (2017W38)

Sun, 24 Sep 2017 06:15:12 +0000

Here’s my weekly report for week 38 of 2017. This week has not been a great week, as I saw my primary development machine die in a spectacular reboot loop. Thanks to the wonderful community around Debian and free software (which, if you’re reading this, you’re probably part of), I should be back up to speed soon. A replacement workstation is currently moving towards me and I’ve received a number of smaller donations that will go towards video converters and upgrades to get me back to full productivity.

Debian

I’ve prepared and tested backports for 3 packages in the tasktools packaging team: tasksh, bugwarrior and powerline-taskwarrior. Unfortunately I am not currently in the backports ACLs and so I can’t upload these, but I’m hoping this will be resolved soon. Once these are uploaded, the latest upstream release for all packages in the tasktools team will be available either in the stable suite or in the stable backports suite.

In preparation for the shutdown of Alioth mailing lists, I’ve set up a new mailing list for the tasktools team and have already updated the maintainer fields for all the team’s packages in git. I’ve subscribed the old mailing list’s user to the new mailing list in DDPO so there will still be a comprehensive view there during the migration. I am currently in the process of reaching out to the admins of git.tasktools.org with a view to moving our git repositories there.

I’ve also continued to review the scapy package and have closed a couple more bugs that were already fixed in the latest upstream release but had been missed in the changelog.

Bugs closed (fixed/wontfix): #774962, #850570

Tor Project

I’ve deployed a small fix to an update from last week where the platform field on Atlas had been pulled across to the left column. It has now been returned to the right hand column and is not pushed down the page by long family lists. I’ve been thinking about the merge of Compass functionality into a future Atlas and this is being tracked in #23517.

Tor Project has approved expenses (flights and hotel) for me to attend an in-person meeting of the Metrics Team. This meeting will occur in Berlin on the 28th September and I will write up a report detailing outcomes relevant to my work after the meeting. I have spent some time this week preparing for this meeting.

Bugs closed (fixed/wontfix): #22146, #22297, #23511

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

The loss of my primary development machine was a setback; however, I have been donated a new workstation which should hopefully arrive soon. The hard drives in my NAS can now also be replaced as I have budget available for this. I do not see any hardware failures being imminent at this time, but should they occur I would not have budget to replace that hardware; I only have funds to replace the hardware that has already failed. [...]



Enrico Zini: Systemd unit files

Sat, 23 Sep 2017 22:00:00 +0000

These are the notes of a training course on systemd I gave as part of my work with Truelite.

Writing .unit files

For reference, the global index with all .unit file directives is at man systemd.directives.

All unit files have a [Unit] section with documentation and dependencies. See man systemd.unit for documentation.

It is worth having a look at existing units to see what they are like. Use systemctl --all -t unittype for a list, and systemctl cat unitname to see its content wherever it is installed.

For example: systemctl cat graphical.target. Note that systemctl cat adds a line of comment at the top so one can see where the unit file is installed.

Most unit files also have an [Install] section (also documented in man systemd.unit) that controls what happens when enabling or disabling the unit.
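As a minimal illustration (not taken from the course material; the unit name, command and target are arbitrary examples), a small but complete unit with all three common sections could be created like this:

# Sketch: a tiny service unit showing [Unit], [Service] and [Install].
cat > /etc/systemd/system/hello.service <<'EOF'
[Unit]
Description=Example service illustrating the [Unit] and [Install] sections
Documentation=man:systemd.unit(5)

[Service]
Type=oneshot
ExecStart=/bin/echo "hello from systemd"

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload       # make systemd pick up the new unit file
systemctl cat hello.service   # shows the file and where it is installed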


.target units

.target units only contain [Unit] and [Install] sections, and can be used to give a name to a given set of dependencies.

For example, one could create a remote-maintenance.target unit, that when brought up activates, via dependencies, a set of services, mounts, network sockets, and so on.
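A hypothetical sketch of such a target; the name comes from the example above, while the units it pulls in are made-up placeholders:

cat > /etc/systemd/system/remote-maintenance.target <<'EOF'
[Unit]
Description=Remote maintenance mode
# Pull in the services that make up this mode (names are only examples).
Wants=ssh.service rsyslog.service
After=network-online.target

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start remote-maintenance.target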

See man systemd.target

See systemctl --all -t target for examples.

special units

man systemd.special has a list of unit names that have a standard use associated with them.

For example, ctrl-alt-del.target is a unit that is started whenever Control+Alt+Del is pressed on the console. By default it is symlinked to reboot.target, and you can provide your own version in /etc/systemd/system/ to perform another action when Control+Alt+Del is pressed.
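For instance, a sketch of two ways to change that behaviour (the poweroff choice is only an example):

# Replace the default reboot behaviour with poweroff.
ln -sf /lib/systemd/system/poweroff.target /etc/systemd/system/ctrl-alt-del.target
systemctl daemon-reload

# Or disable the key combination entirely.
systemctl mask ctrl-alt-del.target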

User units

systemd can also be used to manage services on a user session, starting them at login and stopping them at logout.

Add --user to the normal systemd commands to have them work with the current user's session instead of the general system.
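A few illustrative commands; the unit name is only an example of a user unit, typically stored in ~/.config/systemd/user/ or shipped by a package:

# Status of the current user's systemd instance and its units.
systemctl --user status

# Enable, start and inspect a user unit.
systemctl --user enable syncthing.service
systemctl --user start syncthing.service
journalctl --user -u syncthing.service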

See systemd/User in the Arch Wiki for a good description of what it can do.




Dirk Eddelbuettel: RcppCNPy 0.2.7

Sat, 23 Sep 2017 19:07:00 +0000


A new version of the RcppCNPy package arrived on CRAN yesterday.

RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers.

This version updates internals for function registration, but otherwise mostly switches the vignette over to the shiny new pinp two-page template and package.

Changes in version 0.2.7 (2017-09-22)

  • Vignette updated to Rmd and use of pinp package

  • File src/init.c added for dynamic registration

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.




Dirk Eddelbuettel: RcppClassic 0.9.8

Sat, 23 Sep 2017 19:06:00 +0000


RcppClassic 0.9.8 is a bug-fix release for the very recent 0.9.7 release; it fixes a build issue on macOS introduced in 0.9.7. No other changes.

Courtesy of CRANberries, there are changes relative to the previous release.

Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.




Iain R. Learmonth: VM on bhyve not booting

Sat, 23 Sep 2017 09:45:00 +0000


Last night I installed updates on my FreeNAS box and rebooted it. As expected my network died, but then it never came back, which I hadn’t expected.

My FreeNAS box provides backup storage space, a local Debian mirror and a mirror of talks from recent conferences. It also runs a couple of virtual machines and one of these provides my local DNS resolver.

I hooked up the VNC console to the virtual machine and the problem looked to be that it was booting from the Debian installer CD. I removed the CD from the VM and rebooted, thinking that would be the end of it, but nope:

(image) The EFI shell presented where GRUB should have been

I put the installer CD back and booted in “Rescue Mode”. For some reason, the bootloader installation wasn’t working, so I planned to reinstall it. The autopartition layout for Debian with EFI seems to use /dev/sda2 for the root partition. When you choose this it will see that you have an EFI partition and offer to mount it for you too.

When I went to install the bootloader, I saw another option that I didn’t know about: “Force GRUB installation in removable media path”. In the work I did on live-wrapper I had only ever dealt with this method of booting; I didn’t realise that there were other methods. The reasoning behind this option can be found in detail in Debian bug #746662. I also found mjg59’s blog post from 2011 useful in understanding this.

Suffice it to say that this fixed the booting issue for me in this case. I haven’t investigated this much further so I can’t be certain of any reproducible steps to this problem, but I did also stumble across this forum post which essentially gives the manual steps that are taken by that Rescue Mode option in order to fix the problem. I think the only reason I hadn’t run into this before now is that the VMs hadn’t been rebooted since their installation.
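For reference, a sketch of what the manual equivalent of that Rescue Mode option looks like on a Debian amd64 system, assuming the EFI system partition is mounted at /boot/efi (run from a chroot into the installed system when in rescue mode); this is an illustration, not the exact steps from the forum post:

# Install GRUB into the removable media path (EFI/BOOT/BOOTX64.EFI on the ESP)
# in addition to the normal vendor directory.
grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable
update-grub

# On Debian, the same behaviour can be made persistent via debconf:
echo "grub-efi-amd64 grub2/force_efi_extra_removable boolean true" | debconf-set-selections
dpkg-reconfigure -f noninteractive grub-efi-amd64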




Russell Coker: Converting Mbox to Maildir

Sat, 23 Sep 2017 03:52:42 +0000

MBox is the original and ancient format for storing mail on Unix systems; it consists of a single file per user under /var/spool/mail with all messages concatenated. Obviously performance is very poor when deleting messages from a large mail store, as the entire file has to be rewritten. Maildir was invented for Qmail by Dan Bernstein and stores a single message per file, giving fast deletes among other performance benefits. An ongoing issue over the last 20 years has been converting Mbox systems to Maildir. The various ways of getting IMAP to work with Mbox only made this more complex.

The Dovecot Wiki has a good page about converting Mbox to Maildir [1]. If you want to keep the same message UIDs and the same path separation characters then it will be a complex task. But if you just want to copy a small number of Mbox accounts to an existing server then it’s a bit simpler.

Dovecot has a mb2md.pl script to convert folders [2].

cd /var/spool/mail
mkdir -p /mailstore/example.com
for U in * ; do
  ~/mb2md.pl -s $(pwd)/$U -d /mailstore/example.com/$U
done

To convert the inboxes, shell code like the above is needed. If the users don’t have IMAP folders (e.g. they are just POP users or use local Unix MUAs) then that’s all you need to do.

cd /home
for DIR in */mail ; do
  U=$(echo $DIR| cut -f1 -d/)
  cd /home/$DIR
  for FOLDER in * ; do
    ~/mb2md.pl -s $(pwd)/$FOLDER -d /mailstore/example.com/$U/.$FOLDER
  done
  cp .subscriptions /mailstore/example.com/$U/subscriptions
done

Some shell code like the above will convert the IMAP folders to Maildir format. The end result is that the users will have to download all the mail again, as their MUA will think that every message has been deleted and replaced. But as all servers with significant amounts of mail or important mail were probably converted to Maildir a decade ago, this shouldn’t be a problem.




Enrico Zini: Systemd on the command line

Fri, 22 Sep 2017 22:00:00 +0000

These are the notes of a training course on systemd I gave as part of my work with Truelite.

Exploring the state of a system

  • systemctl status [unitname [unitname..]] shows the status of one or more units, or of the whole system. Glob patterns also work: systemctl status "systemd-fsck@*"
  • systemctl list-units or just systemctl shows a table with all units, their status and their description
  • systemctl list-sockets lists listening sockets managed by systemd and what they activate
  • systemctl list-timers lists timer-activated units, with information about when they last ran and when they will run again
  • systemctl is-active [pattern] checks if one or more units are in active state
  • systemctl is-enabled [pattern] checks if one or more units are enabled
  • systemctl is-failed [pattern] checks if one or more units are in failed state
  • systemctl list-dependencies [unitname] lists the dependencies of a unit, or a system-wide dependency tree
  • systemctl is-system-running checks if the system is running correctly, or if some unit is in a failed state
  • systemd-cgtop is like top, but processes are aggregated by unit
  • systemd-analyze produces reports on boot time, per-unit boot time charts, dependency graphs, and more

Start and stop services

Similar to the System V service command, systemctl provides commands to start/stop/restart/reload units or services:

  • start: starts a unit if it is not already started
  • stop: stops a unit
  • restart: starts or restarts a unit
  • reload: tells a unit to reload its configuration (if it supports it)
  • try-restart: restarts a unit only if it is already active, otherwise does nothing, to prevent accidentally starting a service
  • reload-or-restart: tells a unit to reload its configuration if supported, otherwise restarts it
  • try-reload-or-restart: tells a unit to reload its configuration if supported, otherwise restarts it. If the unit is not already active, does nothing, to prevent accidentally starting a service.

Changing global system state

systemctl has halt, poweroff, reboot, suspend, hibernate, and hybrid-sleep commands to tell systemd to reboot, power off, suspend and so on. kexec and switch-root also work.

The rescue and emergency commands switch the system to rescue and emergency mode (see man systemd.special). systemctl default switches to the default mode, which also happens when exiting the rescue or emergency shell.

Run services at boot

systemd does not implement runlevels, and services start at boot based on their dependencies. To start a service at boot, you add to its .service file a WantedBy= dependency on a well-known .target unit. At boot, systemd brings up the whole chain of dependencies starting from a default unit, and that will eventually also activate your service.

See systemctl get-default for what unit is currently the default in your system. You can change it via the systemd.unit= kernel command line, so you can configure multiple entries in the boot loader that boot the system running different services. For example systemd.unit=rescue.target for a rescue mode, systemd.unit=multi-user.target for a non-graphical mode, or add your own .target file to implement new system modes.

See systemctl list-units -t target --all for a list of all currently available targets in your system.

  • systemctl enable unitname enables the unit to start at boot, by creating symlinks to it in the .wants directory of the units listed in its WantedBy= configuration
  • systemctl disable unitname removes the symlinks created by enable
  • systemctl reenable unitname removes and re-adds the symlinks, for when you have changed WantedBy=

Notes:

  • systemctl start activates a unit right now, but does not automatically enable it at boot
  • systemctl enable enables a unit at boot, but does not automatically start it right now
  • a disabled unit can still be activated if another unit depends on it

To disable a unit so that it will nev[...]
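A few of the commands above in practice; the unit name is only an example, not taken from the notes:

# Inspect a unit and its dependency chain.
systemctl status ssh.service
systemctl list-dependencies ssh.service

# Which target does the system boot into by default?
systemctl get-default

# Enable a unit at boot and also start it right now (two separate steps,
# since enable alone does not start the unit).
systemctl enable ssh.service
systemctl start ssh.service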



Iain R. Learmonth: It Died: An Update

Fri, 22 Sep 2017 07:30:00 +0000


Update: I’ve had an offer of a used workstation that I’m following up. I would still appreciate any donations to go towards costs for cables/converters/upgrades needed with the new system, but the hard part should hopefully be out of the way now. (:

Thanks for all the responses I’ve received about the death of my desktop PC. As I updated in my previous post, I find it unlikely that I will have to orphan any of my packages as I believe that I should be able to get a new workstation soon.

The responses I’ve had so far have been extremely uplifting for me. It’s very easy to feel that no one cares or appreciates your work when your hardware is dying and everything feels like it’s working against you.

I’ve already received two donations towards a new workstation. If you feel you can help then please contact me. I’m happy to accept donations by PayPal or you can contact me for BACS/SWIFT/IBAN information.

I’m currently looking at an HP Z240 Tower Workstation starting with 8GB RAM and then perhaps upgrading the RAM later. I’ll be transplanting my 3TB hybrid HDD into the new workstation as that cache is great for speeding up pbuilder builds. I’m hoping for this to work for me for the next 10 years, just as the Sun had been going for the last 10 years.

Somebody buy this guy a computer. But take the Sun case in exchange. That sucker's cool: It Died @iainlearmonth http://ow.ly/oLEI30fk0yN
-- @BrideOfLinux - 11:00 PM - 21 Sep 2017

For the right donation, I would be willing to consider shipping the rebooty Sun if you like cool looking paperweights (send me an email if you like). It’s pretty heavy though; I just weighed it at 15kg. (:




Clint Adams: PTT

Thu, 21 Sep 2017 22:32:04 +0000


“Hello,” said Adrian, but Adrian was lying.

“My name is Adrian,” said Adrian, but Adrian was lying.

“Today I took a pic of myself pulling a train,” announced Adrian.

(image)

Posted on 2017-09-21
Tags: bgs



Enrico Zini: Systemd Truelite course

Thu, 21 Sep 2017 22:00:00 +0000

These are the notes of a training course on systemd I gave as part of my work with Truelite. There is quite a lot of material, so I split them into a series of posts, running once a day for the next 9 days.

Units

Everything managed by systemd is called a unit (see man systemd.unit), and each unit is described by a configuration in ini-style format. For example, this unit continuously plays an alarm sound when the system is in emergency or rescue mode:

[Unit]
Description=Beeps when in emergency or rescue mode
DefaultDependencies=false
StopWhenUnneeded=true

[Install]
WantedBy=emergency.target rescue.target

[Service]
Type=simple
ExecStart=/bin/sh -ec 'while true; do /usr/bin/aplay -q /tmp/beep.wav; sleep 2; done'

Units can be described by configuration files, which have different extensions based on what kind of thing they describe:

  • .service: daemons
  • .socket: communication sockets
  • .device: hardware devices
  • .mount: mount points
  • .automount: automounting
  • .swap: swap files or partitions
  • .target: only dependencies, like Debian metapackages
  • .path: inotify monitoring of paths
  • .timer: cron-like activation
  • .slice: group processes for common resource management
  • .scope: group processes for common resource management

System unit files can be installed in:

  • /lib/systemd/system/: for units provided by packaged software
  • /run/systemd/system/: runtime-generated units
  • /etc/systemd/system/: for units provided by system administrators

Unit files in /etc/ override unit files in /lib/. Note that while Debian uses /lib/, other distributions may use /usr/lib/ instead.

If there is a directory with the same name as the unit file plus a .d suffix, any *.conf file it contains is parsed after the unit, and can be used to add or override configuration options. For example:

  • /lib/systemd/system/beep.service.d/foo.conf can be used to tweak the contents of /lib/systemd/system/beep.service, so it is possible for a package to distribute a tweak to the configuration of another package.
  • /etc/systemd/system/beep.service.d/foo.conf can be used to tweak the contents of /lib/systemd/system/beep.service, so it is possible for a system administrator to extend a packaged unit without needing to replace it entirely.

Similarly, a unitname.wants/ or unitname.requires/ directory can be used to extend Wants= and Requires= dependencies on other units, by placing symlinks to other units in them.

See also: Introduction to systemd, systemd on the Debian Wiki, systemd on the Arch Wiki. [...]
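As a sketch of the drop-in mechanism described above, reusing the beep.service name from the notes (the overridden setting is an arbitrary example):

# A system administrator extends the packaged unit without replacing it.
mkdir -p /etc/systemd/system/beep.service.d
cat > /etc/systemd/system/beep.service.d/foo.conf <<'EOF'
[Service]
# Override only this one setting; everything else comes from the packaged unit.
Restart=always
EOF

systemctl daemon-reload
systemctl cat beep.service   # shows the packaged unit plus all drop-ins that apply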



Iain R. Learmonth: It Died

Thu, 21 Sep 2017 09:10:00 +0000


On Sunday, in my weekly report on my free software activities, I wrote about how sustainable my current level of activities is. I had identified the risk that the computer that I use for almost all of my free software work was slowly dying. Last night it entered an endless reboot loop and subsequent efforts to save it have failed.

I cannot afford to replace this machine and my next best machine has half the cores, half the RAM and less than half of the screen real estate. As this is going to be a serious hit to my productivity, I need to seriously consider if I am able to continue to maintain the number of packages I currently do in Debian.

Update: Thank you for all the responses I’ve received on this post. While I have not yet resolved the situation, the level of response has me very confident that I will not have to orphan any packages and I should be back to work soon.

(image) The Sun Ultra 24



Steve Kemp: Retiring the Debian-Administration.org site

Wed, 20 Sep 2017 21:00:00 +0000


So previously I've documented the setup of the Debian-Administration website, and now that I'm going to retire it, I'm planning how that will work.

There are currently 12 servers powering the site:

  • web1
  • web2
  • web3
  • web4
    • These perform the obvious role, serving content over HTTPS.
  • public
    • This is a HAProxy host which routes traffic to one of the four back-ends.
  • database
    • This stores the site-content.
  • events
    • There was a simple UDP-based protocol which sent notices here, from various parts of the code.
    • e.g. "Failed login for bob from 1.2.3.4".
  • mailer
    • Sends out emails. ("You have a new reply", "You forgot your password..", etc)
  • redis
    • This stored session-data, and short-term cached content.
  • backup
    • This contains backups of each host, via Obnam.
  • beta
    • A test-install of the codebase
  • planet
    • The blog-aggregation site

I've made a bunch of commits recently to drop the event-sending, since no more dynamic actions will be possible. So events can be retired immediately. redis will go when I turn off logins, as there will be no need for sessions/cookies. beta is only used for development, so I'll kill that too. Once logins are gone, and anonymous content is disabled there will be no need to send out emails, so mailer can be shutdown.

That leaves a bunch of hosts left:

  • database
    • I'll export the database and kill this host.
    • I will install mariadb on each web-node, and each host will be configured to talk to localhost only (see the sketch after this list).
    • I don't need to worry about four databases receiving diverging content as updates will be disabled.
  • backup
  • planet
    • This will become orphaned, so I think I'll just move the content to the web-nodes.
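A sketch of the database consolidation step referenced above; the database name, file names and paths are placeholders, not the site's actual configuration:

# On the old database host: dump the site database.
mysqldump --single-transaction site_db > site_db.sql

# On each web node: install MariaDB, import the dump and keep it local.
apt-get install mariadb-server
mysql -e 'CREATE DATABASE site_db'
mysql site_db < site_db.sql
# On Debian, /etc/mysql/mariadb.conf.d/50-server.cnf defaults to
# bind-address = 127.0.0.1, so only local connections are accepted.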

All in all I think we'll just have five hosts left:

  • public to do the routing
  • web1-web4 to do the serving.

I think that's sane for the moment. I'm still pondering whether to export the code to static HTML; there's a lot of appeal as the load would drop a lot, but equally I have a hell of a lot of mod_rewrite redirections in place, and reworking all of them would be a pain. I suspect this is something that will be done in the future, maybe next year.




Dirk Eddelbuettel: pinp 0.0.2: Onwards

Wed, 20 Sep 2017 13:00:00 +0000


A first update 0.0.2 of the pinp package arrived on CRAN just a few days after the initial release.

We added a new vignette for the package (see below), extended a few nice features, and smoothed a few corners.


The NEWS entry for this release follows.

Changes in pinp version 0.0.2 (2017-09-20)

  • The YAML segment can be used to select font size, one-or-two column mode, one-or-two side mode, linenumbering and watermarks (#21 and #26 addressing #25)

  • If pinp.cls or jss.bst are not present, they are copied in (#27 addressing #23)

  • Output is now in shaded framed boxen too (#29 addressing #28)

  • Endmatter material is placed in template.tex (#31 addressing #30)

  • Expanded documentation of YAML options in skeleton.Rmd and clarified available one-column option (#32).

  • Section numbering can now be turned on and off (#34)

  • The default bibliography style was changed to jss.bst.

  • A short explanatory vignette was added.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.




Gunnar Wolf: Call to Mexicans: Open up your wifi #sismo

Tue, 19 Sep 2017 21:52:59 +0000


Hi friends,

~3hr ago, we just had a big earthquake, quite close to Mexico City. Fortunately, we are fine, as are (at least) most of our friends and family. Hopefully, all of them. But there are many (as in, tens of) damaged or destroyed buildings; there have been over 50 deceased people, and the numbers will surely rise until the event's full impact has been evaluated.

Mainly in these early hours after the quake, many people need to get in touch with their families and friends. There is a little help we can all provide: Provide communication.

Open up your wireless network. Set it up unencrypted, for anybody to use.

Refrain from over-sharing graphical content — your social network groups don't need to see every video and every photo of the shaking moments and of broken buildings. Downloading all those images takes up valuable capacity on the already saturated cellular networks.

This advice might be slow to spread... The most important moment to act was two or three hours ago, but it still matters now... And we are likely to have aftershocks; we are likely to have panic moments again. Do a little bit to help others in need!




Sylvain Beucler: dot-zed extractor

Tue, 19 Sep 2017 19:29:50 +0000

Following last week's .zed format reverse-engineered specification, Loïc Dachary contributed a POC extractor!
It's available at http://www.dachary.org/loic/zed/; it can list non-encrypted metadata without a password, and extract files with a password (or .pem file).
Leveraging python-olefile and pycrypto, only 500 lines of code (test cases excluded) are enough to implement it.




Reproducible builds folks: Reproducible Builds: Weekly report #125

Tue, 19 Sep 2017 17:45:45 +0000

Here's what happened in the Reproducible Builds effort between Sunday September 10 and Saturday September 16 2017:

Upcoming events

Holger Levsen wrote and published details about our upcoming Berlin summit. Expect a more detailed announcement soon and consider planning your travel!

Reproducibility work in Debian

devscripts/2.17.10 was uploaded to unstable, fixing #872514. This adds a script to report on the reproducibility status of installed packages, written by Chris Lamb.

#876055 was opened against Debian Policy to decide the precise requirements we should have on a build's environment variables.

Bugs filed:

  • Chris Lamb: #875700 filed against gtk+3.0, #875704 filed against gdk-pixbuf, #875792 filed against doit.
  • Vagrant Cascadian: #875711 filed against qemu.

Non-maintainer uploads:

  • Holger Levsen: fonts-dustin/20030517-11 uploaded, fixing #815723 with a patch by Scarlett Clark.

Reproducibility work in other projects

Patches sent upstream by Bernhard M. Wiedemann:

  • cadabra2 build timestamp
  • jimtcl build timestamp, fixed another way
  • itpp build timestamp, merged
  • dunst file list ordering, merged
  • HSAIL-Tools hash table ordering, merged
  • kubernetes hash set ordering, no patch

Reviews of unreproducible packages

16 package reviews have been added, 99 have been updated and 92 have been removed this week, adding to our knowledge about identified issues. 1 issue type has been updated: add build_path_captured_in_assembly_objects.

diffoscope development

  • Juliana Oliveira Rodrigues: Fix comparisons between different container types not comparing inside files. It was caused by falling back to binary comparison for different file types even for unextracted containers. Add many tests for the fixed behaviour. Other code quality improvements.
  • Chris Lamb: Various code quality and style improvements, some of it using Flake8.
  • Mattia Rizzolo: Add a check to prevent installation with python < 3.4.

reprotest development

  • Ximin Luo: Split up the very large __init__.py and remove obsolete earlier code. Extend the syntax for the --variations flag to support parameters to certain variations like user_group, and document examples in README. Add a --vary flag for the new syntax and deprecate --dont-vary. Heavily refactor internals to support > 2 builds. Support > 2 builds using a new --extra-build flag. Properly sanitize artifact_pattern to avoid arbitrary shell execution.

trydiffoscope development

Version 65 was uploaded to unstable by Chris Lamb, including these contributions:

  • Chris Lamb: Packaging maintenance updates. Developer documentation updates.

Reproducible websites development

  • Holger Levsen: Add a page for the Reproducible Builds World Summit 3 in Berlin 2017.
  • Chris Lamb: Moved isdebianreproducibleyet.com to HTTPS. Updated the SSL certificate for buildinfo.debian.net.

tests.reproducible-builds.org

  • Vagrant Cascadian and Holger Levsen: Added two armhf boards to the build farm. #874682
  • Holger also: use timeout to limit the diffing of the two build logs to 30min, which greatly reduced jenkins load again.

Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Daniel Shahaf & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists. [...]



Carl Chenet: The Github threat

Mon, 18 Sep 2017 22:00:54 +0000

Many voices arise now and then against the risks linked to the use of Github by Free Software projects. Yet the infatuation for the collaborative forge of the Octocat Californian start-up doesn't seem to fade away. In recent years, Github and its services have taken an important role in software engineering: they are seen as easy to use and efficient for a daily workload, with interesting functions in enterprise collaborative workflows or within a Free Software project. What are the arguments against using its services, and are they valid? We will list them first, then we'll examine their validity.

1. Critical points

1.1 Centralization

The Github application belongs to a single entity, Github Inc, a US company which manages it alone. So a single company under US legislation controls access to most Free Software source code, which may be a problem for the groups using it when a code source is no longer available, for political or technical reasons.

The Octocat, the Github mascot

This centralization leads to another trouble: having reached critical mass, it becomes more and more difficult not to have a Github account. People who don't use Github, by choice or not, are becoming a silent minority. It is now fashionable to use Github, and not doing so is seen as "out of date". The same phenomenon is a classic, and even the norm, for proprietary social networks (Facebook, Twitter, Instagram).

1.2 A proprietary software

When you interact with Github, you are using proprietary software, with no access to its source code, and which may not work the way you think it does. It is a problem at different levels. First ideologically, but foremost in practice. In the Github case, we send them code we can control outside of their interface. We also send them personal information (profile, Github interactions). And above all, Github forces any project which goes through the US platform to use a crucial proprietary tool: its bug tracking system.

Windows, the epitome of proprietary software, even if others took the same path

1.3 Uniformization

Working with the Github interface seems easy and intuitive to most. Lots of companies now use it as a source repository, and many developers leaving a company find the same Github working environment at the next one. This pervasive presence of Github in Free Software development environments is part of the uniformization of those developers' working space.

Uniforms always bring the Army to my mind, here the Clone army

2. Critical points cross-examination

2.1 Regarding the centralization

2.1.1 Service availability rate

As said above, nowadays Github is the main repository of Free Software source code. As such it is a favorite target for cyberattacks. DDOS attacks hit it in March and August 2015. On December 15, 2015, an outage led to the inaccessibility of 5% of the repositories. The same occurred on November 15. And these are only the incidents reported by Github itself. One can imagine that the mean outage rate of the platform is underestimated.

2.1.2 Chain reactions could block Free Software development

Today many dependency maintenance tools, such as npm for JavaScript, Bundler for Ruby or even pip for Python, can fetch an application's source code directly from Github. With Free Software projects getting more and more linked and co-dependent, if one component is down the whole development process stops. One of the best examples is the npmgate.

Any company could legally demand that Github take down some source code from its repository, which could create a chain reaction and block the development of many Free Software [...]



Russ Allbery: Consolidation haul

Mon, 18 Sep 2017 00:34:00 +0000

My parents are less fond than I am of filling every available wall in their house with bookshelves and did a pruning of their books. A lot of them duplicated other things that I had, or didn't sound interesting, but I still ended up with two boxes of books (and now have to decide which of my books to prune, since I'm out of shelf space). Also included is the regular accumulation of new ebook purchases.

  • Mitch Albom — Tuesdays with Morrie (nonfiction)
  • Ilona Andrews — Clean Sweep (sff)
  • Catherine Asaro — Charmed Sphere (sff)
  • Isaac Asimov — The Caves of Steel (sff)
  • Isaac Asimov — The Naked Sun (sff)
  • Marie Brennan — Dice Tales (nonfiction)
  • Captain Eric "Winkle" Brown — Wings on My Sleeve (nonfiction)
  • Brian Christian & Tom Griffiths — Algorithms to Live By (nonfiction)
  • Tom Clancy — The Cardinal of the Kremlin (thriller)
  • Tom Clancy — The Hunt for the Red October (thriller)
  • Tom Clancy — Red Storm Rising (thriller)
  • April Daniels — Sovereign (sff)
  • Tom Flynn — Galactic Rapture (sff)
  • Neil Gaiman — American Gods (sff)
  • Gary J. Hudson — They Had to Go Out (nonfiction)
  • Catherine Ryan Hyde — Pay It Forward (mainstream)
  • John Irving — A Prayer for Owen Meany (mainstream)
  • John Irving — The Cider House Rules (mainstream)
  • John Irving — The Hotel New Hampshire (mainstream)
  • Lawrence M. Krauss — Beyond Star Trek (nonfiction)
  • Lawrence M. Krauss — The Physics of Star Trek (nonfiction)
  • Ursula K. Le Guin — Four Ways to Forgiveness (sff collection)
  • Ursula K. Le Guin — Words Are My Matter (nonfiction)
  • Richard Matheson — Somewhere in Time (sff)
  • Larry Niven — Limits (sff collection)
  • Larry Niven — The Long ARM of Gil Hamilton (sff collection)
  • Larry Niven — The Magic Goes Away (sff)
  • Larry Niven — Protector (sff)
  • Larry Niven — World of Ptavvs (sff)
  • Larry Niven & Jerry Pournelle — The Gripping Hand (sff)
  • Larry Niven & Jerry Pournelle — Inferno (sff)
  • Larry Niven & Jerry Pournelle — The Mote in God's Eye (sff)
  • Flann O'Brien — The Best of Myles (nonfiction)
  • Jerry Pournelle — Exiles to Glory (sff)
  • Jerry Pournelle — The Mercenary (sff)
  • Jerry Pournelle — Prince of Mercenaries (sff)
  • Jerry Pournelle — West of Honor (sff)
  • Jerry Pournelle (ed.) — Codominium: Revolt on War World (sff anthology)
  • Jerry Pournelle & S.M. Stirling — Go Tell the Spartans (sff)
  • J.D. Salinger — The Catcher in the Rye (mainstream)
  • Jessica Amanda Salmonson — The Swordswoman (sff)
  • Stanley Schmidt — Aliens and Alien Societies (nonfiction)
  • Cecilia Tan (ed.) — Sextopia (sff anthology)
  • Lavie Tidhar — Central Station (sff)
  • Catherynne Valente — Chicks Dig Gaming (nonfiction)
  • J.E. Zimmerman — Dictionary of Classical Mythology (nonfiction)

This is an interesting tour of a lot of stuff I read as a teenager (Asimov, Niven, Clancy, and Pournelle, mostly in combination with Niven but sometimes his solo work). I suspect I will no longer consider many of these books to be very good, and some of them will probably go back into used bookstores after I've re-read them for memory's sake, or when I run low on space again. But all those mass market SF novels were a big part of my teenage years, and a few (like Mote In God's Eye) I definitely want to read again.

Also included is a random collection of stuff my parents picked up over the years. I don't know what to expect from a lot of it, which makes it fun to anticipate. Fall vacation is coming up, and with it a large amount of uninterrupted reading time. [...]



Sean Whitton: Debian Policy call for participation -- September 2017

Sun, 17 Sep 2017 23:04:04 +0000

Here's a summary of the bugs against the Debian Policy Manual. Please consider getting involved, whether or not you're an existing contributor.

Consensus has been reached and help is needed to write a patch

  • #172436 BROWSER and sensible-browser standardization
  • #273093 document interactions of multiple clashing package diversions
  • #299007 Transitioning perms of /usr/local
  • #314808 Web applications should use /usr/share/package, not /usr/share/doc/…
  • #425523 Describe error unwind when unpacking a package fails
  • #452393 Clarify difference between required and important priorities
  • #476810 Please clarify 12.5, "Copyright information"
  • #484673 file permissions for files potentially including credential informa…
  • #491318 init scripts "should" support start/stop/restart/force-reload - why…
  • #556015 Clarify requirements for linked doc directories
  • #568313 Suggestion: forbid the use of dpkg-statoverride when uid and gid ar…
  • #578597 Recommend usage of dpkg-buildflags to initialize CFLAGS and al.
  • #582109 document triggers where appropriate
  • #587991 perl-policy: /etc/perl missing from Module Path
  • #592610 Clarify when Conflicts + Replaces et al are appropriate
  • #613046 please update example in 4.9.1 (debian/rules and DEB_BUILD_OPTIONS)
  • #614807 Please document autobuilder-imposed build-dependency alternative re…
  • #628515 recommending verbose build logs
  • #664257 document Architecture name definitions
  • #682347 mark 'editor' virtual package name as obsolete
  • #685506 copyright-format: new Files-Excluded field
  • #685746 debian-policy Consider clarifying the use of recommends
  • #688251 Built-Using description too aggressive
  • #749826 [multiarch] please document the use of Multi-Arch field in debian/c…
  • #757760 please document build profiles
  • #759316 Document the use of /etc/default for cron jobs
  • #761219 document versioned Provides
  • #767839 Linking documentation of arch:any package to arch:all
  • #770440 policy should mention systemd timers
  • #773557 Avoid unsafe RPATH/RUNPATH
  • #780725 PATH used for building is not specified
  • #793499 The Installed-Size algorithm is out-of-date
  • #810381 Update wording of 5.6.26 VCS-* fields to recommend encryption
  • #823256 Update maintscript arguments with dpkg >= 1.18.5
  • #833401 virtual packages: dbus-session-bus, dbus-default-session-bus
  • #835451 Building as root should be discouraged
  • #838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus
  • #845715 Please document that packages are not allowed to write outside thei…
  • #853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…
  • #874019 Note that the '-e' argument to x-terminal-emulator works like '–'
  • #874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

  • #515856 remove get-orig-source
  • #542288 Versions for native packages, NMU's, and binary only uploads
  • #582109 document triggers where appropriate
  • #610083 Remove requirement to document upstream source location in debian/c…
  • #645696 [copyright-format] clearer definitions and more consistent License:…
  • #649530 [copyright-format] clearer definitions and more consistent License:…
  • #662998 stripping static libraries
  • #682347 mark 'editor' virtual package name as obsolete
  • #683222 say explicitly that debian/changelog is required in source packages
  • #688251 Built-Using description too aggressive
  • #737796 copyright-format: support Files: paragraph with both abbreviated na…
  • #756835 Extension of the [...]



Russ Allbery: Free software log (July and August 2017)

Sun, 17 Sep 2017 20:08:00 +0000

I've wanted to start making one of these posts for a few months but have struggled to find the time. But it seems like a good idea, particularly since I get more done when I write down what I do, so you all get a rather belated one. This covers July and August; hopefully the September one will come closer to the end of September.

Debian

August was DebConf, which included a ton of Policy work thanks to Sean Whitton's energy and encouragement. During DebConf, we incorporated work from Hideki Yamane to convert Policy to reStructuredText, which has already made it far easier to maintain. (Thanks also to David Bremner for a lot of proofreading of the result.) We also did a massive bug triage and closed a ton of older bugs on which there had been no forward progress for many years.

After DebConf, as expected, we flushed out various bugs in the reStructuredText conversion and build infrastructure. I fixed a variety of build and packaging issues and started doing some more formatting cleanup, including moving some footnotes to make the resulting document more readable.

During July and August, partly at DebConf and partly not, I also merged wording fixes for seven bugs and proposed wording (not yet finished) for three more, as well as participated in various Policy discussions.

Policy was nearly all of my Debian work over these two months, but I did upload a new version of the webauth package to build with OpenSSL 1.1 and drop transitional packages.

Kerberos

I still haven't decided my long-term strategy with the Kerberos packages I maintain. My personal use of Kerberos is now fairly marginal, but I still care a lot about the software and can't convince myself to give it up.

This month, I started dusting off pam-krb5 in preparation for a new release. There's been an open issue for a while around defer_pwchange support in Heimdal, and I spent some time on that and tracked it down to an upstream bug in Heimdal as well as a few issues in pam-krb5. The pam-krb5 issues are now fixed in Git, but I haven't gotten any response upstream from the Heimdal bug report. I also dusted off three old Heimdal patches and submitted them as upstream merge requests and reported some more deficiencies I found in FAST support. On the pam-krb5 front, I updated the test suite for the current version of Heimdal (which changed some of the prompting) and updated the portability support code, but haven't yet pulled the trigger on a new release.

Other Software

I merged a couple of pull requests in podlators, one to fix various typos (thanks, Jakub Wilk) and one to change the formatting of man page references and function names to match the current Linux manual page standard (thanks, Guillem Jover). I also documented a bad interaction with line-buffered output in the Term::ANSIColor man page. Neither of these have seen a new release yet.




Dirk Eddelbuettel: RcppClassic 0.9.7

Sun, 17 Sep 2017 19:28:00 +0000


A rather boring and otherwise uneventful release 0.9.7 of RcppClassic is now at CRAN. This package provides a maintained version of the otherwise deprecated first Rcpp API; no new projects should use it.

Once again no changes in user-facing code. But this makes it the first package to use the very new and shiny pinp package as the backend for its vignette, now converted to Markdown---see here for this new version. We also updated three source files for tabs versus spaces as the current g++ version complained (correctly !!) about misleading indents. Otherwise a file src/init.c was added for dynamic registration, the Travis CI runner script was updated to using run.sh from our r-travis fork, and we now strip the library after it has been built. Again, no user code changes.

And to reiterate: nobody should use this package. Rcpp is so much better in so many ways---this one is simply available as we (quite strongly) believe that APIs are contracts, and as such we hold up our end of the deal.

Courtesy of CRANberries, there are changes relative to the previous release.

Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.




Uwe Kleine-König: IPv6 in my home network

Sun, 17 Sep 2017 10:15:00 +0000

I am lucky and get both IPv4 (without CGNAT) and IPv6 from my provider. Recently, after upgrading my desk router (a Netgear WNDR3800 that serves the network on my desk) from OpenWRT to the latest LEDE, I looked into what can be improved in the IPv6 setup for both my home network (served by a FRITZ!Box) and my desk network.

Unfortunately I was unable to improve the situation compared to what I already had before.

Things that work

Making IPv6 work in general was easy, just a few clicks in the configuration of the FRITZ!Box and it mostly worked. After that I have:

  • IPv6 connectivity in the home net
  • IPv6 connectivity in the desk net

Things that don't work

There are a few things however that I'd like to have, that are not that easy it seems:

ULA for both nets

I let the two routers announce a ULA prefix each. Unfortunately I was unable to make the LEDE box announce its net on the wan interface for clients in the home net. So the hosts in the desk net know how to reach the hosts in the home net, but not the other way round, which makes it quite pointless. (It works fine as long as the FRITZ!Box announces a global net, but I'd like to have local communication work independently of the global connectivity.)

To fix this I'd need something like radvd on my LEDE router, but that isn't provided by LEDE (or OpenWRT) any more, as odhcpd is supposed to be used instead, which AFAICT is unable to send RAs on the wan interface. Ok, probably I could install bird, but that seems a bit oversized. I created an entry in the LEDE forum but without any reply up to now.
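For comparison, this is roughly what the missing piece might look like as a radvd configuration announcing a route to the desk net's ULA prefix on the upstream interface, if radvd were available on the router. The interface name and prefix are placeholders, not the actual network values:

# Sketch only: advertise a route to the desk net's ULA prefix towards the home net.
cat > /etc/radvd.conf <<'EOF'
interface eth0.2
{
    AdvSendAdvert on;
    AdvDefaultLifetime 0;          # do not advertise ourselves as a default router
    route fd00:dead:beef::/48      # ULA prefix of the desk net (placeholder)
    {
        AdvRouteLifetime 1800;
    };
};
EOF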

Alternatively (but less pretty) I could setup an IPv6 route in the FRITZ!Box, but that only works with a newer firmware and as this router is owned by my provider I cannot update it.

Firewalling

The FRITZ!Box has a firewall that is not very configurable. I can punch a hole in it for hosts with a given interface-ID, but that only works for hosts in the home net, not the machines in the delegated subnet behind the LEDE router. In fact I think the FRITZ!Box should delegate firewalling for a delegated net also to the router of that subnet. (Hello AVM, did you hear me? At least a checkbox for that would be nice.)

So having a global address on the machines on my desk doesn't allow me to reach them from the internet.




Raphaël Hertzog: Freexian’s report about Debian Long Term Support, August 2017

Sun, 17 Sep 2017 08:24:19 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 189 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Antoine Beaupré did 16h.
  • Ben Hutchings did 10 hours (out of 15h allocated + 1 extra hour, thus keeping 6 extra hours for September).
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 20.5 hours (out of 20.25 hours allocated + 13 hours remaining, thus keeping 12.75 hours for September).
  • Guido Günther did 10 hours.
  • Hugo Lefeuvre did 14h (out of 2h allocated + 12 extra hours).
  • Lucas Kanashiro did 20.25 hours.
  • Markus Koschany did 20.25 hours.
  • Ola Lundqvist did 9h (out of 14h allocated + 16 extra hours; he gave back 14 hours, thus keeping 7 extra hours for September).
  • Raphaël Hertzog did 12 hours.
  • Roberto C. Sanchez did 27.25 hours (out of 20.25 hours allocated + 16 hours remaining, thus keeping 9 extra hours for September).
  • Thorsten Alteholz did 20.25 hours.

Evolution of the situation

The number of sponsored hours is the same as last month. The security tracker currently lists 59 packages with a known CVE and the dla-needed.txt file 60. The number of packages with open issues decreased slightly compared to last month but we’re not yet back to the usual situation. The number of CVEs to fix per package tends to increase due to the increased usage of fuzzers.

Thanks to our sponsors

New sponsors are in bold.

Platinum sponsors:

  • TOSHIBA (for 22 months)
  • GitHub (for 13 months)

Gold sponsors:

  • The Positive Internet (for 38 months)
  • Blablacar (for 37 months)
  • Linode (for 27 months)
  • Babiel GmbH (for 16 months)
  • Plat’Home (for 16 months)

Silver sponsors:

  • Domeneshop AS (for 37 months)
  • Université Lille 3 (for 37 months)
  • Trollweb Solutions (for 35 months)
  • Nantes Métropole (for 32 months)
  • Dalenys (for 28 months)
  • Univention GmbH (for 23 months)
  • Université Jean Monnet de St Etienne (for 23 months)
  • Sonus Networks (for 17 months)
  • UR Communications BV (for 12 months)
  • maxcluster GmbH (for 11 months)
  • Exonet B.V. (for 7 months)
  • Leibniz Rechenzentrum

Bronze sponsors:

  • David Ayers – IntarS Austria (for 38 months)
  • Evolix (for 38 months)
  • Offensive Security (for 38 months)
  • Seznam.cz, a.s. (for 38 months)
  • Freeside Internet Service (for 37 months)
  • MyTux (for 37 months)
  • Intevation GmbH (for 35 months)
  • Linuxhotel GmbH (for 35 months)
  • Daevel SARL (for 33 months)
  • Bitfolk LTD (for 32 months)
  • Megaspace Internet Services GmbH (for 32 months)
  • NUMLOG (for 32 months)
  • Greenbone Networks GmbH (for 31 months)
  • WinGo AG (for 31 months)
  • Ecole Centrale de Nantes – LHEEA (for 27 months)
  • Sig-I/O (for 24 months)
  • Entr’ouvert (for 22 months)
  • Adfinis SyGroup AG (for 19 months)
  • GNI MEDIA (for 14 months)
  • Quarantainenet BV (for 14 months)
  • RHX Srl (for 11 months)
  • Bearstech (for 5 months)
  • LiHAS (for 5 months)
  • People Doc
  • Catalyst IT Ltd

[...]



Dirk Eddelbuettel: pinp 0.0.1: pinp is not PNAS

Sat, 16 Sep 2017 20:07:00 +0000


A brand new and very exciting (to us, at least) package called pinp just arrived on CRAN, following a somewhat unnecessarily long passage out of incoming. It is based on the PNAS LaTeX style offered by the Proceedings of the National Academy of Sciences of the United States of America, or PNAS for short. And there is already a Markdown version in the wonderful rticles package.

But James Balamuta and I thought we could do one better when we were looking to typeset our recent PeerJ Preprint as an attractive looking vignette for use within the Rcpp package.

And so we did, changing a few things (font, color, use of natbib and Chicago.bst for references, removal of a bunch of extra PNAS-specific formalities from the front page) and customizing a number of other things for easier use by vignettes directly from the YAML header (draft-mode watermark, doi or url for packages, easier author naming in the footer, bibtex file, and more).

We are quite pleased with the result which seems ready for the next Rcpp release---see e.g., these two teasers:

(image)

and

(image)

and the pinp package page or the GitHub repo have the full (four double-)pages of what turned a rather dull-looking 27-page manuscript into eight crisp two-column pages.

We have a few more things planned (e.g., switching to single-column mode, turning on line numbers at least in one-column mode).

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.




Ben Hutchings: Debian LTS work, August 2017

Fri, 15 Sep 2017 17:56:08 +0000


I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 1 hour from July. I only worked 10 hours, so I will carry over 6 hours to the next month.

I prepared and released an update on the Linux 3.2 longterm stable branch (3.2.92), and started work on the next update. I rebased the Debian linux package on this version, but have not yet uploaded it.




Chris Lamb: Which packages on my system are reproducible?

Fri, 15 Sep 2017 08:29:10 +0000

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process.

As part of this project I wrote a script to determine which packages installed on your system are "reproducible" or not:

$ apt install devscripts
[…]
$ reproducible-check
[…]
W: subversion (1.9.7-2) is unreproducible (libsvn-perl, libsvn1, subversion)
W: taglib (1.11.1+dfsg.1-0.1) is unreproducible (libtag1v5, libtag1v5-vanilla)
W: tcltk-defaults (8.6.0+9) is unreproducible (tcl, tk)
W: tk8.6 (8.6.7-1) is unreproducible (libtk8.6, tk8.6)
W: valgrind (1:3.13.0-1) is unreproducible
W: wavpack (5.1.0-2) is unreproducible (libwavpack1)
W: x265 (2.5-2) is unreproducible (libx265-130)
W: xen (4.8.1-1+deb9u1) is unreproducible (libxen-4.8, libxenstore3.0)
W: xmlstarlet (1.6.1-2) is unreproducible
W: xorg-server (2:1.19.3-2) is unreproducible (xserver-xephyr, xserver-xorg-core)
282/4494 (6.28%) of installed binary packages are unreproducible.

Whether a package is "reproducible" or not is determined by querying the Debian Reproducible Builds testing framework. The --raw command-line argument lets you play with the data in more detail. For example, you can see who maintains your unreproducible packages:

$ reproducible-check --raw | dd-list --stdin
Alec Leamas
   lirc (U)

Alessandro Ghedini
   valgrind

Alessio Treglia
   fluidsynth (U)
   libsoxr (U)
[…]

reproducible-check is available in devscripts since version 2.17.10, which landed in Debian unstable on 14th September 2017. [...]
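
Not part of Chris's post, but if you want to post-process that output rather than eyeball it, a small sketch along these lines works against the "W:" line format quoted above (the regular expression and the assumption that reproducible-check may exit non-zero are mine):

#!/usr/bin/python3
# Sketch: tally reproducible-check warnings per source package, relying only
# on the "W: <source> (<version>) is unreproducible (<binaries>)" format.
import re
import subprocess

# reproducible-check might exit non-zero when it finds unreproducible
# packages, so don't let that abort the script (an assumption on my part).
result = subprocess.run(["reproducible-check"], stdout=subprocess.PIPE,
                        universal_newlines=True)

pattern = re.compile(r"^W: (\S+) \(([^)]+)\) is unreproducible(?: \((.*)\))?$")
totals = {}
for line in result.stdout.splitlines():
    m = pattern.match(line)
    if not m:
        continue
    source, version, binaries = m.groups()
    totals[source] = len(binaries.split(", ")) if binaries else 1

for source, count in sorted(totals.items(), key=lambda kv: -kv[1]):
    print("{:4d} unreproducible binary package(s) from {}".format(count, source))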



Lior Kaplan: Public money? Public Code!

Thu, 14 Sep 2017 09:30:34 +0000

An open letter published today to the EU government says:

Why is software created using taxpayers’ money not released as Free Software?
We want legislation requiring that publicly financed software developed for the public sector be made publicly available under a Free and Open Source Software licence. If it is public money, it should be public code as well.

Code paid by the people should be available to the people!

See https://publiccode.eu/ for the campaign details.

This makes me think of starting an Israeli version…





James McCoy: devscripts needs YOU!

Thu, 14 Sep 2017 03:18:48 +0000


Over the past 10 years, I've been a member of a dwindling team of people maintaining the devscripts package in Debian.

Nearly two years ago, I sent out a "Request For Help" since it was clear I didn't have adequate time to keep driving the maintenance.

In the meantime, Jonas split licensecheck out into its own project and took over development. Osamu has taken on much of the maintenance for uscan, uupdate, and mk-origtargz.

Although that has helped spread the maintenance costs, there's still a lot that I haven't had time to address.

Since Debian is still fairly early in the development cycle for Buster, I've decided this is as good a time as any for me to officially step down from active involvement in devscripts. I'm willing to keep moderating the mailing list and other related administrivia (which is fairly minimal given the repo is part of collab-maint), but I'll be unsubscribing from all other notifications.

I think devscripts serves as a good funnel for useful scripts to get in front of Debian (and its derivatives) developers, but Jonas may also be onto something by pulling scripts out to stand on their own. One of the troubles with "bucket" packages like devscripts is the lack of visibility into when to retire scripts. Breaking scripts out on their own, and possibly creating multiple binary packages, certainly helps with that. Maybe uscan and friends would be a good next candidate.

At the end of the day, I've certainly enjoyed being able to play my role in helping simplify the life of all the people contributing to Debian. I may come back to it some day, but for now it's time to let someone else pick up the reins.

If you're interested in helping out, you can join #devscripts on OFTC and/or send a mail to the devscripts list at lists.alioth.debian.org.




Shirish Agarwal: Android, Android marketplace and gaming addiction.

Wed, 13 Sep 2017 14:44:16 +0000

This will be a longish piece, so please bear with me and enjoy tea, coffee, beer or anything stronger that you desire while reading.

I had bought an Android phone, a Samsung J5, just before going to DebConf 2016. It was more about being in-trend than really needing it. The one I linked to is the upgraded (recentish) version; the one I have is the 2 GB model, for which I had paid around double the list price. The only reason I bought this model is that it had a removable battery at the price point I was willing to pay. I did notice that Samsung has the same ham-handed issues with audio that Nokia devices used to have: the speakers and microphone are probably the cheapest you can get on the market. Nokia was the same, at least at the lower end of the market, while Oppo has loud ringtones and loud music, perfect for those who are a bit hard of hearing (as yours truly is).

I had been pleasantly surprised by the quality of photos the Samsung J5 was churning out, even though I'm a less-than-average shooter and have never really been into photography, so it was a sort of wake-up call for where camera sensor technology is advancing. And of course with newer phones the kind of detail they can capture is mesmerizing, to say the least, although wide-angle shots would probably still take some time to get right.

If memory serves me right, some time back Laura Arjona Reina (who handles part of debian-publicity and part of debian-women, among other responsibilities) shared a blog post on p.d.o. where she described the troubles she had while exporting data from her phone. She shared that, but I lack the time or the energy to try and find it (the entry is really bookmarkable, at least that specific blog post).

What was interesting, though: a few years ago I had gone to Bangalore, where there is an organization which I like and admire, CIS, great for researchers. They had done a project getting between 10 and 20 phones from the market, of Chinese origin (for almost all mobiles sold in India the fabrication of the CPU, APU etc. is done in China/Taiwan and then assembled here; what is done here at most is assembly, which for all political purposes is called 'manufacturing'). All the mobiles kept quite a bit of info on the device even after you wiped them clean or put some other ROM on them. The CIS site is more than a bit cluttered, otherwise I would have shared the direct link. I do hope to send an e-mail to CIS and hopefully they will respond with the report, and I will share it here as and when they do. It would be interesting to know whether, after people flash a custom ROM, the status quo is the same as it was before. I suspect it would be, as flashing ROMs on phones is still a bit of a specialized subject, at least here in India, with even an average phone costing a month or two's salary or more, and the idea of bricking the phone scares most people (including yours truly).

Anyways, for a long time I was in bed and had the phone. I used two games from the Android marketplace which both mum and I enjoy and enjoyed. Those are Real Jigsaw and Jigsaw Puzzle HD. The permissions dialog which Real Jigsaw, among other games, has is horrible, and part of me freaks out that all such apps have unrestricted access to my storage area. Ideally, what Android should hav[...]



Vincent Bernat: Route-based IPsec VPN on Linux with strongSwan

Wed, 13 Sep 2017 08:20:42 +0000

A common way to establish an IPsec tunnel on Linux is to use an IKE daemon, like the one from the strongSwan project, with a minimal configuration1:

conn V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64
  authby      = psk
  auto        = route

The same configuration can be used on both sides. Each side will figure out if it is "left" or "right". The IPsec site-to-site tunnel endpoints are 2001:db8:1::1 and 2001:db8:2::1. The protected subnets are 2001:db8:a1::/64 and 2001:db8:a2::/64. As a result, strongSwan configures the following policies in the kernel:

$ ip xfrm policy
src 2001:db8:a1::/64 dst 2001:db8:a2::/64
        dir out priority 399999 ptype main
        tmpl src 2001:db8:1::1 dst 2001:db8:2::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir fwd priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir in priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
[…]

This kind of IPsec tunnel is a policy-based VPN: encapsulation and decapsulation are governed by these policies. Each of them contains the following elements: a direction (out, in or fwd2), a selector (source subnet, destination subnet, protocol, ports), a mode (transport or tunnel), an encapsulation protocol (esp or ah), and the endpoint source and destination addresses.

When a matching policy is found, the kernel will look for a corresponding security association (using reqid and the endpoint source and destination addresses):

$ ip xfrm state
src 2001:db8:1::1 dst 2001:db8:2::1
        proto esp spi 0xc1890b6e reqid 4 mode tunnel
        replay-window 0 flag af-unspec
        auth-trunc hmac(sha256) 0x5b68[…]8ba2904 128
        enc cbc(aes) 0x8e0e377ad8fd91e8553648340ff0fa06
        anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[…]

If no security association is found, the packet is put on hold and the IKE daemon is asked to negotiate an appropriate one. Otherwise, the packet is encapsulated. The receiving end identifies the appropriate security association using the SPI in the header. Two security associations are needed to establish a bidirectional tunnel:

$ tcpdump -pni eth0 -c2 -s0 esp
13:07:30.871150 IP6 2001:db8:1::1 > 2001:db8:2::1: ESP(spi=0xc1890b6e,seq=0x222)
13:07:30.872297 IP6 2001:db8:2::1 > 2001:db8:1::1: ESP(spi=0xcf2426b6,seq=0x204)

All IPsec implementations are compatible with policy-based VPNs. However, some configurations are difficult to implement. For example, consider the following proposition for redundant site-to-site VPNs:

A possible configuration between V1-1 and V2-1 could be:

conn V1-1-to-V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64,2001:db8:a6::cc:1/128,2001:db8:a6::cc:5/128
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64,2001:db8:a6::/64,2001:db8:a8::/64
  authby      = psk
  keyexchange = ikev2
  auto        = route

Each time a subnet is modified on one site, the configurations[...]



Reproducible builds folks: Reproducible Builds: Weekly report #124

Wed, 13 Sep 2017 07:48:18 +0000

Here's what happened in the Reproducible Builds effort between Sunday September 3 and Saturday September 9 2017:

Media coverage

isdebianreproducibleyet.com was released and subsequently updated.

GSoC and Outreachy updates

Debian will participate in this year's Outreachy initiative and the Reproducible Builds project is soliciting mentors and students to join this round. For more background please see the following mailing list posts: 1, 2 & 3.

Reproducibility work in Debian

Chris Lamb filed #874102 against texlive-bin to incorporate a proposed upstream change to fix reproducibility issues in generated PDF files. In addition, the following NMUs were accepted: fastforward (#776972) (lamby), dtc-xen (#777322) (lamby), dhcpping (#777320) (lamby), vimoutliner (#776369) (lamby).

Reproducibility work in other projects

The Linux kernel announced support for the randstruct GCC plugin. "Please make the output of gio-querymodules deterministic" was merged upstream. (lamby)

Patches sent upstream:
  • Bernhard M. Wiedemann: gcin (merged): uninitialized stack memory; html5-parser (merged): sorting; gromacs (merged): date; crawl (merged): date; GCompris-gtk: date; heimdal: date, hostname.
  • Chris Lamb: Numpy (#872459) (PR #9652): build path (merged).

Packages reviewed and fixed, and bugs filed

Adrian Bunk: #874186 filed against svgpp.

Reviews of unreproducible packages

3 package reviews have been added, 2 have been updated and 2 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by: Adrian Bunk (15).

diffoscope development

Development continued in git, including the following contributions:
  • Chris Lamb: add support for "binwalking" to find (e.g.) concatenated CPIO archives (Closes: #820631); loosen matching of file(1)'s output to ensure we correctly also match TTF files under file 5.32; check we identify all CPIO fixtures in tests; make failing some flake8 tests cause the testsuite to fail (currently just "undefined name"); countless style fixups, e.g. removing unused imports, removing blank lines from the end of files, etc.; compare None using identity, not equality; diffoscope.diff: correct reference to self.buf; comparators.utils.file: correct reference to path_apparent_size.
  • Juliana Rodrigues: skip the html_visuals test if the 'sng' binary is not available.
  • Mattia Rizzolo: numerous PEP8 fixes (e.g. E122, E302, E713, etc.).

Mattia Rizzolo also uploaded version 86, released last week, to stretch-backports.

reprotest development

  • Santiago Torres: correct string formatting in get_all_servers.
  • Ximin Luo: heavy refactoring towards supporting running more than 2 builds; rename append_command to append_to_build_command.

tests.reproducible-builds.org

  • h01ger: don't update the stretch package sets anymore; update the URL for the Tails packages list; disabled the OpenWrt tests at the request of lynxis as they were broken; if no-one shows up to fix them, we'll probably remove them in the future, as all current development happens within LEDE; renewed the Let's Encrypt SSL certificates.
  • Mattia Rizzolo: avoid stale temporary files[...]



Markus Koschany: My Free Software Activities in August 2017

Tue, 12 Sep 2017 23:04:28 +0000

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you. DebConf 17 in Montreal I traveled to DebConf 17 in Montreal/Canada. I arrived on 04. August and met a lot of different people which I only knew by name so far. I think this is definitely one of the best aspects of real life meetings, putting names to faces and getting to know someone better. I totally enjoyed my stay and I would like to thank all the people who were involved in organizing this event. You rock! I also gave a talk about the “The past, present and future of Debian Games”,  listened to numerous other talks and got a nice sunburn which luckily turned into a more brownish color when I returned home on 12. August. The only negative experience I made was with my airline which was supposed to fly me home to Frankfurt again. They decided to cancel the flight one hour before check-in for unknown reasons and just gave me a telephone number to sort things out.  No support whatsoever. Fortunately (probably not for him) another DebConf attendee suffered the same fate and together we could find another flight with Royal Air Maroc the same day. And so we made a short trip to Casablanca/Morocco and eventually arrived at our final destination in Frankfurt a few hours later. So which airline should you avoid at all costs (they still haven’t responded to my refund claims) ? It’s WoW-Air from Iceland. (just wow) Debian Games There were a lot of GCC-7 bugs to fix this month which claimed most of my games related time. Bug fixes: torcs (RC #853685), berusky2 (RC #853325), lordsawar (RC #853529), simutrans(#869029), gngb (RC #853425), libclaw (RC #853488), funguloids (RC #853408), plee-the-bear (RC #853618), alien-arena (RC #871218), fretsonfire (RC #872934), minetest, (RC #873324) widelands, (RC #871114), pingus (RC #853614) I released version 2.1 of the Debian Games Blend. I completely overhauled the gngb package, a color gameboy emulator. I packaged new upstream versions of freeciv, peg-e and blockattack. I backported the memory leak fix for unknown-horizons and fife to Stretch (#871037). I investigated some graphical glitches in Neverball which appear to be related to OpenGL and the graphic stack in general but I couldn’t find an immediate solution. (#871223) Debian Java I sponsored libimglib2-java for Ghislain Vaillant. New upstream releases: apktool, jboss-modules, jboss-logging-tools, jboss-logmanager. For jboss-xnio I packaged two new build-dependencies which are wildfly-common and wildfly-client-config and they are currently waiting in the NEW queue. The last build-dependency for PDFsam was accepted this month and I was able to upload the new version to experimental. Unfortunately the program is currently not really usable due to a bug in libhibernate-validator-java (#874579) Debian LTS This was my eighteenth month as a paid contributor and I have been paid to work 20,25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: From 31. July until 06. August I was in charge of our LTS frontdesk. [...]



Arturo Borrero González: Google Hangouts in Debian testing (Buster)

Tue, 12 Sep 2017 19:37:00 +0000


Google offers a lot of software components packaged specifically for Debian and Debian-like Linux distributions. Examples are: Chrome, Earth and the Hangouts plugin. Also, there are many other Internet services doing the same: Spotify, Dropbox, etc. I'm really grateful for them, since this makes our lives easier.

The problem is that our ecosystem is rather complex, with many distributions and many versions out there. I guess it is not an easy task for them to keep up with such a big variety of support variations.

In this particular case, it seems Google doesn't support Debian testing in their .deb packages; testing currently means Debian Buster. The same happens with the official Spotify client package.

I’ve identified several issues with them, to name a few:

  • packages depend on lsb-core, which is no longer present in Buster (testing).
  • packages depend on libpango1.0-0, whereas testing contains libpango-1.0-0.

I need to use Google Hangouts, so I've been forced to solve this situation by editing the .deb package provided by Google.

Simple steps:

  • 1) create a temporary working directory
% user@debian:~ $ mkdir pkg
% user@debian:~ $ cd pkg/
  • 2) get the original .deb package, the Google Hangouts talk plugin.
% user@debian:~/pkg $ wget https://dl.google.com/linux/direct/google-talkplugin_current_amd64.deb
[...]
  • 3) extract the original .deb package
% user@debian:~/pkg $ dpkg-deb -R google-talkplugin_current_amd64.deb google-talkplugin_current_amd64/
  • 4) edit the control file, replace libpango1.0-0 with libpango-1.0-0
% user@debian:~/pkg $ nano google-talkplugin_current_amd64/DEBIAN/control
  • 5) rebuild the package and install it!
% user@debian:~/pkg $ dpkg -b google-talkplugin_current_amd64
% user@debian:~/pkg $ sudo dpkg -i google-talkplugin_current_amd64.deb

I have yet to investigate how to work around the lsb-core thing, so I still can't use Google Earth.
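
Not from Arturo's post, but since the interesting part is a one-line change in DEBIAN/control, the whole dance can be scripted. A rough sketch, assuming the directory layout produced by the dpkg-deb -R step above; the commented-out lsb-core line is my own untested guess, per the caveat above:

#!/usr/bin/python3
# Sketch only: automate the libpango rename in the extracted package's
# control file, then rebuild the package with dpkg-deb.
import subprocess

pkgdir = "google-talkplugin_current_amd64"
control_path = pkgdir + "/DEBIAN/control"

with open(control_path) as f:
    control = f.read()

# The substitution from step 4: testing ships libpango-1.0-0.
control = control.replace("libpango1.0-0", "libpango-1.0-0")

# Untested idea for the lsb-core problem mentioned above: simply drop the
# dependency and hope the plugin does not actually need it, e.g.
# control = control.replace("lsb-core, ", "")

with open(control_path, "w") as f:
    f.write(control)

subprocess.check_call(["dpkg-deb", "-b", pkgdir])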




Steinar H. Gunderson: rANS encoding of signed coefficients

Mon, 11 Sep 2017 23:56:00 +0000

I'm currently trying to make sense of some still image coding (more details to come at a much later stage!), and for a variety of reasons, I've chosen to use rANS as the entropy coder. However, there's an interesting little detail that I haven't actually seen covered anywhere; maybe it's just because I've missed something, or maybe because it's too blindingly obvious, but I thought I would document what I ended up with anyway. (I had hoped for something even more elegant, but I guess the obvious would have to do.)

For those that don't know rANS coding, let me try to handwave it as much as possible. Your state is typically a single word (in my case, a 32-bit word), which is refilled from the input stream as needed. The encoder and decoder work in reverse order; let's just talk about the decoder. Basically it works by looking at the lowest 12 (or whatever) bits of the decoder state, mapping each of those 2^12 slots to a decoded symbol. More common symbols are given more slots, proportionally to the frequency. Let me just write a tiny, tiny example with 2 bits and three symbols instead, giving four slots:

Lowest bits   Symbol
00            0
01            0
10            1
11            2

Note that the zero coefficient here maps to one out of two slots (ie., a range); you don't choose which one yourself, the encoder stashes some information in there (which is used to recover the next control word once you know which symbol there is).

Now for the actual problem: When storing DCT coefficients, we typically want to also store a sign (ie., not just 1 or 2, but also -1/+1 and -2/+2). The statistical distribution is symmetrical, so the sign bit is incompressible (except that of course there's no sign bit needed for 0). We could have done this by introducing new symbols -1 and -2 in addition to our three other ones, but this means we'll need more bits of precision, and accordingly larger look-up tables (which is negative for performance). So let's find something better.

We could also simply store it separately somehow; if the coefficient is non-zero, store the bits in some separate repository. Perhaps more elegantly, you can encode a second symbol in the rANS stream with probability 1/2, but this is more expensive computationally. But both of these have the problem that they're divergent in terms of control flow; nonzero coefficients potentially need to do a lot of extra computation and even loads. This isn't nice for SIMD, and it's not nice for GPU. It's generally not really nice.

The solution I ended up with was simulating a larger table with a smaller one. Simply rotate the table so that the zero symbol has the top slots instead of the bottom slots, and then replicate the rest of the table. For instance, take this new table:

Lowest bits   Symbol
000           1
001           2
010           0
011           0
100           0
101           0
110           -1
111           -2

(The observant reader will note that this doesn't describe the exact same distribution as last time—zero has twice the relative freque[...]
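
To make the table trick concrete, here is a small Python sketch (mine, not Steinar's code) that builds the 3-bit signed table from the 2-bit unsigned one as described above, rotating the zero slots to the top and then appending the negated half, and uses it for a branch-free slot lookup:

# 2-bit table from the first example: slots 00,01 -> 0, 10 -> 1, 11 -> 2
UNSIGNED = [0, 0, 1, 2]

# Rotate so the zero symbol occupies the top slots, then replicate the
# table with negated symbols, giving the 3-bit table from the post.
rotated = UNSIGNED[2:] + UNSIGNED[:2]          # [1, 2, 0, 0]
SIGNED = rotated + [-s for s in UNSIGNED]      # [1, 2, 0, 0, 0, 0, -1, -2]
assert SIGNED == [1, 2, 0, 0, 0, 0, -1, -2]

TABLE_BITS = 3

def decode_slot(state):
    # One table lookup, no branch on the sign: this is what keeps the
    # decode path friendly to SIMD and GPU implementations.
    return SIGNED[state & ((1 << TABLE_BITS) - 1)]

print([decode_slot(s) for s in range(8)])      # [1, 2, 0, 0, 0, 0, -1, -2]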



Steve Kemp: Debian-Administration.org is closing down

Sun, 10 Sep 2017 21:00:00 +0000


After 13 years the Debian-Administration website will be closing down towards the end of the year.

The site will go read-only at the end of the month, and will slowly be stripped back from that point towards the end of the year, leaving only a static copy of the articles and content.

This is largely happening due to lack of content. There were only two articles posted last year, and every time I consider writing more content I lose my enthusiasm.

There was a time when people contributed articles, but these days they tend to post such things on their own blogs, on medium, on Reddit, etc. So it seems like a good time to retire things.

An official notice has been posted on the site-proper.




Adnan Hodzic: Secure traffic to ZNC on Synology with Let’s Encrypt

Sun, 10 Sep 2017 16:40:01 +0000

I've been using IRC since the late 1990s, and I continue to do so to this day due to it (still) being one of the driving development forces in various open source communities. Especially in Linux development … and some of my acquaintances I can only get in touch with via IRC :)

My Setup

On my Synology NAS I run ZNC (IRC bouncer/proxy), to which I connect using various IRC clients (irssi/XChat Azure/AndChat) from various platforms (Linux/Mac/Android). In this case ZNC serves as a gateway and no matter which device/client I connect from, I'm always connected to the same IRC servers/chat rooms/settings where I left off. This is all fine and dandy, but connecting from external networks to ZNC means you will hand in your ZNC credentials in plain text. Which is a problem for me, even though we're "only" talking about an IRC bouncer/proxy. With that said, how do we encrypt external traffic to our ZNC?

HowTo: Chat securely with ZNC on Synology using a Let's Encrypt SSL certificate

For reference or a more thorough explanation of some of the steps/topics, please refer to: Secure (HTTPS) public access to Synology NAS using Let's Encrypt (free) SSL certificate.

Requirements:
  • Synology NAS running DSM >= 6.0
  • Sub/domain name with the ability to update DNS records
  • SSH access to your Synology NAS

1: DNS setup

Create an A record for the sub/domain you'd like to use to connect to your ZNC and point it to your Synology NAS external (WAN) IP. For your reference, the subdomain I'll use is: irc.hodzic.org

2: Create Let's Encrypt certificate

DSM: Control Panel > Security > Certificates > Add

Followed by: Add a new certificate > Get a certificate from Let's Encrypt

Followed by adding the domain name the A record was created for in Step 1, i.e:

After the certificate is created, don't forget to configure the newly created certificate to point to the correct domain name, i.e:

3: Install ZNC

In case you already have ZNC installed, I suggest you remove it and do a clean install. Mainly due to some problems with the package in the past, where ZNC wouldn't start automatically on boot, which led to creating projects like: synology-znc-autostart. In the latest version, all of these problems have been fixed and a couple of new features have been added.

ZNC can be installed using Synology's Package Center, if community package sources are enabled. Which can simply be done by adding new package sources:

Name: SynoCommunity
Location: http://packages.synocommunity.com

To successfully authenticate the newly added source, under the "General" tab, "Trust Level" should be set to "Any publisher".

As part of the installation process, ZNC config will be run with the most sane/useful options and an admin user will be created, allowing you access to the ZNC webadmin.

4: Secure access to ZNC webadmin

Now we want to bind our sub/domain created in "Step 1" to the ZNC webadmin, and secure external access to it. This can be done by creating a reverse proxy. As part of this, you need to know which port has been allocated for SSL in the ZNC webadmin, i.e:

In this case, we can see it's 8251. Reverse Proxy[...]



intrigeri: Can you reproduce this Tails ISO image?

Sun, 10 Sep 2017 14:27:00 +0000

Thanks to a Mozilla Open Source Software award, we have been working on making the Tails ISO images build reproducibly. We have made huge progress: for a few months now, ISO images built by Tails core developers and our CI system have always been identical. But we're not done yet and we need your help!

Our first call for testing build reproducibility in August uncovered a number of remaining issues. We think that we have fixed them all since, and we now want to find out what other problems may prevent you from building our ISO image reproducibly. Please try to build an ISO image today, and tell us whether it matches ours!

Build an ISO

These instructions have been tested on Debian Stretch and testing/sid. If you're using another distribution, you may need to adjust them. If you get stuck at some point in the process, see our more detailed build documentation and don't hesitate to contact us:

  • XMPP chatroom
  • email: tails-dev@boum.org (public) or tails@boum.org (private)

Setup the build environment

You need a system that supports KVM, 1 GiB of free memory, and about 20 GiB of disk space.

Install the build dependencies:

sudo apt install \
    git \
    rake \
    libvirt-daemon-system \
    dnsmasq-base \
    ebtables \
    qemu-system-x86 \
    qemu-utils \
    vagrant \
    vagrant-libvirt \
    vmdebootstrap && \
sudo systemctl restart libvirtd

Ensure your user is in the relevant groups:

for group in kvm libvirt libvirt-qemu ; do
    sudo adduser "$(whoami)" "$group"
done

Logout and log back in to apply the new group memberships.

Build Tails 3.2~alpha2

This should produce a Tails ISO image:

git clone https://git-tails.immerda.ch/tails && \
cd tails && \
git checkout 3.2-alpha2 && \
git submodule update --init && \
rake build

Send us feedback!

No matter how your build attempt turned out we are interested in your feedback.

Gather system information

To gather the information we need about your system, run the following commands in the terminal where you've run rake build:

sudo apt install apt-show-versions && \
(
  for f in /etc/issue /proc/cpuinfo
  do
    echo "--- File: ${f} ---"
    cat "${f}"
    echo
  done
  for c in free locale env 'uname -a' '/usr/sbin/libvirtd --version' \
           'qemu-system-x86_64 --version' 'vagrant --version'
  do
    echo "--- Command: ${c} ---"
    eval "${c}"
    echo
  done
  echo '--- APT package versions ---'
  apt-show-versions qemu:amd64 linux-image-amd64:amd64 vagrant \
      libvirt0:amd64
) | bzip2 > system-info.txt.bz2

Then check that the generated file doesn't contain any sensitive information you do not want to leak:

bzless system-info.txt.bz2

Next, please follow the instructions below that match your situation!

If the build failed

Sorry about that. Please help us fix it by opening a ticket:

  • set Category to Build system;
  • paste the output of rake build;
  • attach system-info.txt.bz2 (this will publish that file).

If the build succeeded

Compute the SHA-512 chec[...]



Sylvain Beucler: dot-zed archive file format

Sun, 10 Sep 2017 13:50:48 +0000

TL;DR: I reverse-engineered the .zed encrypted archive format. Following a clean-room design, I'm providing a description that can be implemented by a third party. Interested? (reference version at: https://www.beuc.net/zed/)

.zed archive file format

Introduction

Archives with the .zed extension are conceptually similar to an encrypted .zip file. In addition to a specific format, .zed files support multiple users: files are encrypted using the archive master key, which itself is encrypted for each user and/or authentication method (password, RSA key through certificate or PKCS#11 token). Metadata such as filenames is partially encrypted.

.zed archives are used stand-alone or attached to e-mails with the help of an MS Outlook plugin. A variant, which is not covered here, can encrypt/decrypt MS Windows folders on the fly like ecryptfs.

In the spirit of academic and independent research this document provides a description of the file format and encryption algorithms for this encrypted file archive. See the conventions section for conventions and acronyms used in this document.

Structure overview

The .zed file format is composed of several layers.

The main container uses the MS Compound File Binary format (MS-CFB), which is notably used by MS Office 97-2003 .doc files. It contains several streams:

  • Metadata stream: in OLE Property Set format (MS-OLEPS), contains 2 blobs in a specific Type-Length-Value (TLV) format:
    - _ctlfile: global archive properties and access list. It is obfuscated by means of static-key AES encryption. The properties include the archive's initial filename and a global IV. A global encryption key is itself encrypted in each user entry.
    - _catalog: file list. Contains each file's metadata indexed with a 15-byte identifier. Directories are supported. The full filename is encrypted using AES. The file extension is (redundantly) stored in clear, and so are file metadata such as modification time.
  • Each file in the archive is compressed with zlib and encrypted with the standard AES algorithm, in a separate stream. Several encryption schemes and key sizes are supported. The file stream is split in chunks of 512 bytes, individually encrypted.
  • Optional streams contain additional metadata as well as pictures to display in the application background ("watermarks"). They are not discussed here.

Or as a diagram (truncated in this excerpt): a .zed archive is an MS-CFB container whose first stream holds the MS-OLEPS metadata and whose subsequent streams hold the AES-encrypted file data in 512-byte chunks.
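
Not part of Sylvain's write-up, but since the outer layer is plain MS-CFB, you can peek at a .zed archive's streams from Python with the third-party olefile library (my suggestion; exact stream names will vary by archive):

#!/usr/bin/python3
# Sketch: list the streams of a .zed archive, which is an MS-CFB container.
# Requires the "olefile" module (packaged in Debian, as far as I know).
import sys
import olefile

path = sys.argv[1]                     # e.g. some-archive.zed
ole = olefile.OleFileIO(path)
for entry in ole.listdir():
    name = "/".join(entry)
    print("{:10d}  {}".format(ole.get_size(name), name))
ole.close()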



Charles Plessy: Summary of the discussion on off-line keys.

Sun, 10 Sep 2017 12:06:29 +0000


Last month, there has been an interesting discussion about off-line GnuPG keys and their storage systems on the debian-project@l.d.o mailing list. I tried to summarise it in the Debian wiki, in particular by creating two new pages.




Joachim Breitner: Less parentheses

Sun, 10 Sep 2017 10:10:16 +0000

Yesterday, at the Haskell Implementers Workshop 2017 in Oxford, I gave a lightning talk titled ”syntactic musings”, where I presented three possibly useful syntactic features that one might want to add to a language like Haskell. The talked caused quite some heated discussions, and since the Internet likes heated discussion, I will happily share these ideas with you Context aka. Sections This is probably the most relevant of the three proposals. Consider a bunch of related functions, say analyseExpr and analyseAlt, like these: analyseExpr :: Expr -> Expr analyseExpr (Var v) = change v analyseExpr (App e1 e2) = App (analyseExpr e1) (analyseExpr e2) analyseExpr (Lam v e) = Lam v (analyseExpr flag e) analyseExpr (Case scrut alts) = Case (analyseExpr scrut) (analyseAlt <$> alts) analyseAlt :: Alt -> Alt analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e) You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the Var case. You thus add a parameter to all these functions, and hence an argument to every call: type Flag = Bool analyseExpr :: Flag -> Expr -> Expr analyseExpr flag (Var v) = if flag then change1 v else change2 v analyseExpr flag (App e1 e2) = App (analyseExpr flag e1) (analyseExpr flag e2) analyseExpr flag (Lam v e) = Lam v (analyseExpr (not flag) e) analyseExpr flag (Case scrut alts) = Case (analyseExpr flag scrut) (analyseAlt flag <$> alts) analyseAlt :: Flag -> Alt -> Alt analyseAlt flag (dc, pats, e) = (dc, pats, analyseExpr flag e) I find this code problematic. The intention was: “flag is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, flag should be considered constant.” But this intention is neither easily visible nor enforced. And in fact, in the above code, flag does “change”, as analyseExpr passes something else in the Lam case. The idiom is indistinguishable from the environment idiom, where a locally changing environment (such as “variables in scope”) is passed around. So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: We cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over! The solution I’d like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here: type Flag = Bool section (flag :: Flag) where analyseExpr :: Expr -> Expr analyseExpr (Var v) = if flag then change1 v else change2v analyseExpr (App e1 e2) = App (analyseExpr e1) (analyseExpr e2) analyseExpr (Lam v e) = Lam v (analyseExpr e) analyseExpr (Case scrut alts) = Case (analyseExpr scrut) (analyseAlt <$> alts) analyseAlt :: Alt -> Alt analyseAlt (dc, pats, e) = (d[...]



Lior Kaplan: PHP 7.2 is coming… mcrypt extension isn’t

Sun, 10 Sep 2017 08:56:52 +0000

Early September: it's about 3 months before PHP 7.2 is expected to be released (schedule here). One of the changes is the removal of the mcrypt extension, after it was deprecated in PHP 7.1.

The main problem with the mcrypt extension is that it is based on libmcrypt, which was abandoned by its upstream in 2007. That's 10 years of keeping a library alive, moving the burden to distributions' security teams. But this isn't new; Remi already wrote about this two years ago: "About libmcrypt and php-mcrypt". But with the removal of the extension from the PHP code base (about F**king time), what was only recommended "nicely" till now is being forced. And forcing people means some noise, although an alternative is PHP's own openssl extension. But as with many migrations that require code change – it's going slow.

The goal of this post is to reach out to the PHP ecosystem and map the components (mostly frameworks and applications) that still require/recommend mcrypt, and to pressure them to fix it before PHP 7.2 is released. I'll appreciate the readers' help with this mapping in the comments.

For example, Laravel's release notes for 5.1:

In previous versions of Laravel, encryption was handled by the mcrypt PHP extension. However, beginning in Laravel 5.1, encryption is handled by the openssl extension, which is more actively maintained.

Or, on the other hand, Joomla 3 requirements still mention mcrypt.

mcrypt safe:
  • Drupal 7 and up, see https://www.drupal.org/docs/7/system-requirements/php
  • Laravel 5.1 and up, see https://laravel.com/docs/5.1/releases

mcrypt dependent:
  • Joomla, see https://downloads.joomla.org/technical-requirements
  • Magento, see http://devdocs.magento.com/guides/v2.2/install-gde/system-requirements-tech.html (checking the 2.2 RC release, as it just added support for PHP 7.1)

For those who really need mcrypt, it is part of PECL, PHP's extensions repository. You're welcome to compile it at your own risk.

Filed under: Debian GNU/Linux, PHP [...]
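
A practical footnote from me, not from the post: before mapping frameworks, it is worth checking your own code. A rough sketch that scans a PHP source tree for direct mcrypt_* calls (it will not catch uses hidden behind a framework's own wrappers):

#!/usr/bin/python3
# Sketch: report files in a PHP source tree that call mcrypt_* functions
# directly, as a quick pre-PHP-7.2 audit.
import os
import re
import sys

pattern = re.compile(r"\bmcrypt_\w+\s*\(")
root = sys.argv[1] if len(sys.argv) > 1 else "."

for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        if not name.endswith((".php", ".inc")):
            continue
        path = os.path.join(dirpath, name)
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if pattern.search(line):
                    print("{}:{}: {}".format(path, lineno, line.strip()))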



Russell Coker: Observing Reliability

Sat, 09 Sep 2017 10:18:21 +0000

Last year I wrote about how great my latest Thinkpad is [1] in response to a discussion about whether a Thinkpad is still the “Rolls Royce” of laptops. It was a few months after writing that post that I realised that I omitted an important point.

After I had that laptop for about a year the DVD drive broke and made annoying clicking sounds all the time in addition to not working. I removed the DVD drive and the result was that the laptop was lighter and used less power without missing any feature that I desired. As I had installed Debian on that laptop by copying the hard drive from my previous laptop I had never used the DVD drive for any purpose. After a while I got used to my laptop being like that and the gaping hole in the side of the laptop where the DVD drive used to be didn’t even register to me. I would prefer it if Lenovo sold Thinkpads in the T series without DVD drives, but it seems that only the laptops with tiny screens are designed to lack DVD drives. For my use of laptops this doesn’t change the conclusion of my previous post.

Now the T420 has been in service for almost 4 years which makes the cost of ownership about $75 per year. $1.50 per week as a tax deductible business expense is very cheap for such a nice laptop. About a year ago I installed a SSD in that laptop, it cost me about $250 from memory and made it significantly faster while also reducing heat problems. The depreciation on the SSD about doubles the cost of ownership of the laptop, but it’s still cheaper than a mobile phone and thus not in the category of things that are expected to last for a long time – while also giving longer service than phones usually do.

One thing that’s interesting to consider is the fact that I forgot about the broken DVD drive when writing about this. I guess every review has an unspoken caveat of “this works well for me but might suck badly for your use case”. But I wonder how many other things that are noteworthy I’m forgetting to put in reviews because they just don’t impact my use. I don’t think that I am unusual in this regard, so reading multiple reviews is the sensible thing to do.

[1] https://etbe.coker.com.au/2016/11/06/thinkpad-rolls-royce/



François Marier: TLS Authentication on Freenode and OFTC

Sat, 09 Sep 2017 04:52:47 +0000

In order to easily authenticate with IRC networks such as OFTC and Freenode, it is possible to use client TLS certificates (also known as SSL certificates). In fact, it turns out that it's very easy to set up both on irssi and on znc.

Generate your TLS certificate

On a machine with good entropy, run the following command to create a keypair that will last for 10 years:

openssl req -nodes -newkey rsa:2048 -keyout user.pem -x509 -days 3650 -out user.pem -subj "/CN="

Then extract your key fingerprint using this command:

openssl x509 -sha1 -noout -fingerprint -in user.pem | sed -e 's/^.*=//;s/://g'
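
If you want to double-check that value without the sed incantation, a few lines of Python compute the same fingerprint. This is a sketch of mine using only the standard library; it assumes user.pem contains the certificate alongside the key, as the command above produces:

#!/usr/bin/python3
# Sketch: print the SHA-1 fingerprint of the certificate in user.pem,
# colon-free and uppercase, matching the openssl/sed output above.
import hashlib
import ssl

with open("user.pem") as f:
    pem = f.read()

# user.pem holds both the private key and the certificate; keep only the
# certificate block before converting it to DER.
start = pem.index("-----BEGIN CERTIFICATE-----")
end = pem.index("-----END CERTIFICATE-----") + len("-----END CERTIFICATE-----")
der = ssl.PEM_cert_to_DER_cert(pem[start:end] + "\n")
print(hashlib.sha1(der).hexdigest().upper())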

Share your fingerprints with NickServ

On each IRC network, do this:

/msg NickServ IDENTIFY Password1!
/msg NickServ CERT ADD 

in order to add your fingerprint to the access control list.

Configure ZNC

To configure znc, start by putting the key in the right place:

cp user.pem ~/.znc/users//networks/oftc/moddata/cert/

and then enable the built-in cert plugin for each network in ~/.znc/configs/znc.conf:


    ...
            LoadModule = cert
    ...

    
    ...
            LoadModule = cert
    ...

Configure irssi

For irssi, do the same thing but put the cert in ~/.irssi/user.pem and then change the OFTC entry in ~/.irssi/config to look like this:

{
  address = "irc.oftc.net";
  chatnet = "OFTC";
  port = "6697";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";
}

and the Freenode one to look like this:

{
  address = "chat.freenode.net";
  chatnet = "Freenode";
  port = "7000";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";
}

That's it. That's all you need to replace password authentication with a much stronger alternative.




Vincent Fourmond: Extract many attachement from many mails in one go using ripmime

Fri, 08 Sep 2017 21:42:29 +0000

I was recently looking for a way to extract many attachments from a series of emails. I first had a look at the AttachmentExtractor thunderbird plugin, but it seems very old and not maintained anymore. So I've come up with another very simple solution that also works with any other mail client.

Just copy all the mails you want to extract attachments from to a single (temporary) mail folder, find out which file holds the mail folder and use ripmime on that file (ripmime is packaged for Debian). For my case, it looked like:

~ ripmime -i .icedove/XXXXXXX.default/Mail/pop.xxxx/tmp -d target-directory

Simple solution, but it saved me quite some time. Hope it helps !
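
A side note from me, not from Vincent's post: if ripmime happens not to be available, Python's standard library can do a rough equivalent, since Icedove/Thunderbird local folders are plain mbox files. A sketch, with minimal filename sanitising:

#!/usr/bin/python3
# Sketch: dump every named attachment of every message in an mbox folder
# file (the same file you would point ripmime at) into a target directory.
import mailbox
import os
import re

folder = "path/to/mail-folder-file"     # placeholder, adjust to your setup
outdir = "target-directory"
os.makedirs(outdir, exist_ok=True)

for i, msg in enumerate(mailbox.mbox(folder)):
    for part in msg.walk():
        filename = part.get_filename()
        payload = part.get_payload(decode=True)
        if not filename or payload is None:
            continue
        safe = re.sub(r"[^\w.-]", "_", filename)
        with open(os.path.join(outdir, "{:04d}-{}".format(i, safe)), "wb") as out:
            out.write(payload)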




Sven Hoexter: munin with TLS

Fri, 08 Sep 2017 14:41:12 +0000

Primarily a note for my future self so I don't have to find out what I did in the past once more.

If you're running some smaller systems scattered around the internet, without connecting them with a VPN, you might want your munin master and nodes to communicate with TLS and validate certificates. If you remember what to do, it's a rather simple and straightforward process. To manage the PKI I'll utilize the well-known easyrsa script collection. For this special-purpose CA I'll go with a flat layout. So it's one root certificate issuing all server and client certificates directly. Some very basic docs can also be found in the munin wiki.

master setup

For your '/etc/munin/munin.conf':

tls paranoid
tls_verify_certificate yes
tls_private_key /etc/munin/master.key
tls_certificate /etc/munin/master.crt
tls_ca_certificate /etc/munin/ca.crt
tls_verify_depth 1

A node entry with TLS will look like this:

[node1.stormbind.net]
    address [2001:db8::]
    use_node_name yes

Important points here:

  • "tls_certificate" is a Web Client Authentication certificate. The master connects to the nodes as a client.
  • "tls_ca_certificate" is the root CA certificate.
  • If you'd like to disable TLS connections, for example for localhost, set "tls disabled" in the node block.

For easy-rsa the following command invocations are relevant:

./easyrsa init-pki
./easyrsa build-ca
./easyrsa gen-req master
./easyrsa sign-req client master
./easyrsa set-rsa-pass master nopass

node setup

For your '/etc/munin/munin-node.conf':

tls paranoid
tls_verify_certificate yes
tls_private_key /etc/munin/node1.key
tls_certificate /etc/munin/node1.crt
tls_ca_certificate /etc/munin/ca.crt
tls_verify_depth 1

For easy-rsa the following command invocations are relevant:

./easyrsa gen-req node1
./easyrsa sign-req server node1
./easyrsa set-rsa-pass node1 nopass

Important points here:

  • "tls_certificate" on the node must be a server certificate.
  • You have to provide the CA here as well so we can verify the client certificate provided by the munin master.



Steinar H. Gunderson: Licensing woes

Thu, 07 Sep 2017 23:37:00 +0000


On releasing modified versions of GPLv3 software in binary form only (quote anonymized):

And in my opinion it's perfectly ok to give out a binary release of a project, that is a work in progress, so that people can try it out and coment on it. It's easier for them to have it as binary and not need to compile it themselfs. If then after a (long) while the code is still only released in binary form, then it's ok to start a discussion. But only for a quick test, that is unneccessary. So people, calm down and enjoy life!

I wonder at what point we got here.




Gunnar Wolf: It was thirty years ago today... (and a bit more): My first ever public speech!

Thu, 07 Sep 2017 18:35:30 +0000

I came across a folder with the most unexpected treasure trove: the text for my first ever public speech! (and some related materials)

In 1985, being nine years old, I went to the IDESE school, to learn Logo. I found my diploma over ten years ago and blogged about it in this same space. Of course, I don't expect any of you to remember what I wrote twelve years ago about a (then) twenty-year-old piece of paper!

I add to this very old stuff about Gunnar the four pages describing my game, Evitamono ("Avoid the monkey", approximately). I still remember the game quite vividly, including traumatic issues which were quite common back then; I wrote that «the sprites were accidentally deleted twice and the game once». I remember several of my peers telling about such experiences. Well, that is good if you account for the second system syndrome!

I also found the amazing course material for how to program sound and graphics in C64 BASIC. That was a course taken by ten year old kids. Kids that understood that you had to write [255,129,165,244,219,165,0,102] (see pages 3-5) into a memory location starting at 53248 to redefine a character so it looked as the graphic element you wanted. Of course, it was done with a set of POKEs, as everything in C64. Or that you could program sound by setting the seven SID registers for each of the three voices containing low frequency, high frequency, low pulse, high pulse, wave control, wave length, wave amplitude in memory locations 54272 through 54292... And so on and on and on...

And as a proof that I did take the course: ...I don't think I could make most of my current BSc students make sense out of what is in the manual. But, being a kid in the 1980s, that was the only way to get a computer to do what you wanted. Yay for primitivity! :-D

Attachments:
  • Speech for "Evitamono" (1.29 MB)
  • Course material for sound and graphics programming in C64 BASIC (15.82 MB)
  • Proof that I was there! (4.86 MB)
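
As an aside from me (not in Gunnar's post): those eight POKEd bytes are simply the eight rows of an 8x8 character bitmap, one bit per pixel, so a few lines of Python can visualise what the course was asking the kids to type in:

#!/usr/bin/python3
# Sketch: render the 8x8 character defined by the eight bytes quoted above,
# one byte per row, most significant bit as the leftmost pixel.
ROWS = [255, 129, 165, 244, 219, 165, 0, 102]

for byte in ROWS:
    print("".join("#" if byte & (1 << (7 - bit)) else "." for bit in range(8)))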



Lior Kaplan: FOSScamp Syros 2017 – day 3

Thu, 07 Sep 2017 15:13:48 +0000

The 3rd day should have started with a Debian sprint and then a LibreOffice one, taking advantage of the fact that I was still attending, as that was my last day. But plans don't always work out, and we started 2 hours later. When everybody arrived we got everyone together for a short daily meeting (scrum style). The people were divided into 3 teams for translating: Debian Installer, LibreOffice and Gnome. For each team we made a short list of what was left and what to start with. And in the end – who does what, so there would be no toe stepping. I was really proud of this and felt it was time well spent.

The current translation percentage for Albanian in LibreOffice is 60%. So my recommendation to the team is to translate master only and not touch the help translation. My plan ahead would be to improve the translation as much as possible for LibreOffice 6.0 and, near the branching point (set to November 20th by the release schedule), decide if it's doable for the 6.0 lifetime or to set the goal at 6.1. In the 2nd case, we might try to backport translations back to 6.0. For the translation itself, I mentioned the KeyID language pack to the team and referred them to the nightly builds. These tools should help with keeping the translation quality high.

For the Debian team, after deciding who works on what, I asked Silva to do reviews for the others, as doing it myself started to take more and more of my time. It's also good that the reviewer knows the target language and, unlike me, can catch more than just the syntax mistakes. Another point: she's more easily available to the team while I'm leaving soon, so I hope this role of reviewer will stay as part of the team.

With the time left I mostly worked on my own tasks, which were packaging the Albanian dictionary, resulting in https://packages.debian.org/sid/myspell-sq, and making sure the dictionary is also part of LibreOffice, resulting in https://gerrit.libreoffice.org/#/c/41906/. When it is accepted, I want to upload it to the LibreOffice repository so all users can download and use the dictionary.

During the voyage home (ferry, bus, plane and train), I mailed Sergio Durigan Junior, my NM applicant, with a set of questions. My first action as an AM (:

Overall FOSScamp results for Albanian translation were very close to the goal I set (100%):
  • Albanian (sq) level1 – 99%
  • Albanian (sq) level2 – 25% (the rest is pending at #874497)
  • Albanian (sq) level3 – 100%

That's the result of work by Silva Arapi, Eva Vranici, Redon Skikuli, Anisa Kuci and Nafie Shehu.

Filed under: Debian GNU/Linux, i18n & l10n, LibreOffice [...]



Thomas Lange: My recent FAI activities

Thu, 07 Sep 2017 15:03:30 +0000


During DebConf 17 in Montréal I had a FAI demo session (video), where I showed how to create a customized installation CD and how to create a diskimage using the same configuration. This diskimage is ready for use with a VM software or can be booted inside a cloud environment.

During the last few weeks I have been working on FAI 5.4, which will be released in a few weeks. If you want to test it, use

deb https://fai-project.org/download beta-testing koeln

in your sources.list file.

The most important new feature will be cross-architecture support. I managed to create an ARM64 diskimage on an x86 host and boot this inside Qemu. Currently I am learning how to flash images onto my new Hikey960 board for booting my own Debian images on real hardware. The embedded world is still new for me and very different with respect to the boot process.

At DebConf, I also worked on debootstrap. I produced a set of patches which can speed up debootstrap by a factor of 2. See #871835 for details.

FAI debootstrap ARM




Reproducible builds folks: Reproducible Builds: Weekly report #123

Thu, 07 Sep 2017 09:54:55 +0000

Here's what happened in the Reproducible Builds effort between Sunday August 27 and Saturday September 2 2017: Talks and presentations Holger Levsen talked about our progress and our still-far goals at BornHack 2017 (Video). Toolchain development and fixes The Debian FTP archive will now reject changelogs where different entries have the same timestamps. UDD now uses reproducible-tracker.json (~25MB) which ignores our tests for Debian unstable, instead of our full set of results in reproducible.json. Our tests for Debian unstable uses a stricter definition of "reproducible" than what was recently added to Debian policy, and these stricter tests are currently more unreliable. Packages reviewed and fixed, and bugs filed Patches sent upstream: Bernhard M. Wiedemann: File ordering: klee-uclibc: sort libdnet: sort libinvm-cim: sort libinvm-cli: sort Embedded build-date timestamps: robinhood: SOURCE_DATE_EPOCH support ceph/rocksdb: SOURCE_DATE_EPOCH support hylafax: use changelog modtime gnucash: use changelog modtime Warzone2100, merged: omit timestamps, sort file lists Chris Lamb: glib2.0: sort file lists (old) xorg, merged: SOURCE_DATE_EPOCH support Debian bugs filed: Adrian Bunk: #873608 filed against uhd. #874186 filed against svgpp. Chris Lamb: #873625 filed against glib2.0, filed upstream. #874102 filed against texlive-bin. Debian packages NMU-uploaded: Chris Lamb: bittornado/0.3.18-10.3 from #796212 cgilib/0.6-1.1 from #776935 dict-gazetteer2k/1.0.0-5.4 from #776376 dict-moby-thesaurus/1.0-6.4 from #776375 dtaus/0.9-1.1 from #777321 wily/0.13.41-7.3 from #777360 Reviews of unreproducible packages 25 package reviews have been added, 50 have been updated and 86 have been removed in this week, adding to our knowledge about identified issues. Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and reported by: Adrian Bunk (46) Martín Ferrari (1) Steve Langasek (1) diffoscope development Version 86 was uploaded to unstable by Mattia Rizzolo. It included previous weeks' contributions from: Mattia Rizzolo tests/binary: skip a test if the 'distro' module is not available. Some code quality and style improvements. Guangyuan Yang tests/iso9660: support both cdrtools' genisoimage's versions of isoinfo. Chris Lamb comparators/xml: Use name attribute over path to avoid leaking comparison full path in output. Tidy diffoscope.progress a little. Ximin Luo Add a --tool-prefix-binutils CLI flag. Closes: #869868 On non-GNU systems, prefer some tools that start with "g". Closes: #871029 presenters/html: Don't traverse children whose parents were already limited. Clo[...]



John Goerzen: Switching to xmonad + Gnome – and ditching a Mac

Thu, 07 Sep 2017 02:43:12 +0000

I have been using XFCE with xmonad for years now. I'm not sure exactly how many, but at least 6 years, if not closer to 10. Today I threw in the towel and switched to Gnome.

More recently, at a new job, I was given a Macbook Pro. I wasn't entirely sure what to think of this, but I thought I'd give it a try. I found MacOS to be extremely frustrating and confining. It had no real support for a tiling window manager, and although projects like amethyst tried to approximate what xmonad can do on Linux, they were just too limited by the platform and were clunky. Moreover, the entire UI was surprisingly sluggish; maybe that was an induced effect from animations, but I don't think that explains it. A Debian stretch install, even on inferior hardware, was snappy in a way that MacOS never was. So I have requested to swap for a laptop that will run Debian. The strange use of Command instead of Control for things, combined with the overall lack of configurability of keybindings, meant that I was going to always be fighting muscle memory moving from one platform to another. Not only that, but being back in the world of a Free Software OS means a lot.

Now then, back to the xmonad and XFCE situation. XFCE once worked very well with xmonad. Over the years, this got more challenging. Around the jessie (XFCE 4.10) time, I had to be very careful about when I would let it save my session, because it would easily break. With stretch, I had to write custom scripts because the panel wouldn't show up properly, and even some application icons would be invisible, if things were started in a certain order. This took much trial and error and was still cumbersome.

Gnome 3, with its tightly-coupled Gnome Shell, has never been compatible with other window managers — at least not directly. A person could have always used MATE with xmonad — but a lot of people that run XFCE tend to have some Gnome 3 apps (for instance, evince) anyhow. Cinnamon also wouldn't work with xmonad, because it is simply another tightly-coupled shell instead of Gnome Shell.

And then today I discovered gnome-flashback. gnome-flashback is a Gnome 3 environment that uses the traditional X approach with a separate window manager (metacity of yore, by default). Sweet. It turns out that Debian's xmonad has built-in support for it. If you know the secret:

apt-get install gnome-session-flashback

(OK, it's not so secret; it's even in xmonad's README.Debian these days.)

Install that, plus gnome and gdm3, and things are nice. Configure xmonad with GNOME support and poof – goodness right out of the box, selectable from the gdm sessions list. I still have[...]



Mike Gabriel: MATE 1.18 landed in Debian testing

Wed, 06 Sep 2017 09:04:25 +0000

This is to announce that finally all MATE Desktop 1.18 components have landed in Debian testing (aka buster).

Credits

Again a big thanks to the packaging team (esp. Vangelis Mouhtsis and Martin Wimpress, but also to Jeremy Bicha for constant advice and Aron Xu for joining the Debian+Ubuntu MATE Packaging Team and merging all the Ubuntu zesty and artful branches back to master).

Fully Available on all Debian-supported Architectures

The very special thing about this MATE 1.18 release for Debian is that MATE is now available on all Debian hardware architectures. See the "Buildd" column on our DDPO overview page [1]. Thanks to all the people from the Debian porters realm for providing feedback on my porting questions.

References




Kees Cook: security things in Linux v4.13

Tue, 05 Sep 2017 23:01:20 +0000

Previously: v4.12. Here’s a short summary of some of the interesting security things in Sunday’s v4.13 release of the Linux kernel:

security documentation ReSTification

The kernel has been switching to formatting documentation with ReST, and I noticed that none of the Documentation/security/ tree had been converted yet. I took the opportunity to take a few passes at formatting the existing documentation and, at Jon Corbet’s recommendation, split it up between end-user documentation (which is mainly how to use LSMs) and developer documentation (which is mainly how to use various internal APIs). A bunch of these docs need some updating, so maybe with the improved visibility, they’ll get some extra attention.

CONFIG_REFCOUNT_FULL

Since Peter Zijlstra implemented the refcount_t API in v4.11, Elena Reshetova (with Hans Liljestrand and David Windsor) has been systematically replacing atomic_t reference counters with refcount_t. As of v4.13, there are now close to 125 conversions with many more to come. However, there were concerns over the performance characteristics of the refcount_t implementation from the maintainers of the net, mm, and block subsystems. In order to assuage these concerns and help the conversion progress continue, I added an “unchecked” refcount_t implementation (identical to the earlier atomic_t implementation) as the default, with the fully checked implementation now available under CONFIG_REFCOUNT_FULL. The plan is that for v4.14 and beyond, the kernel can grow per-architecture implementations of refcount_t that have performance characteristics on par with atomic_t (as done in grsecurity’s PAX_REFCOUNT).

CONFIG_FORTIFY_SOURCE

Daniel Micay created a version of glibc’s FORTIFY_SOURCE compile-time and run-time protection for finding overflows in the common string (e.g. strcpy, strcmp) and memory (e.g. memcpy, memcmp) functions. The idea is that since the compiler already knows the size of many of the buffer arguments used by these functions, it can already build in checks for buffer overflows. When all the sizes are known at compile time, this can actually allow the compiler to fail the build instead of continuing with a proven overflow. When only some of the sizes are known (e.g. the destination size is known at compile-time, but the source size is only known at run-time), run-time checks are added to catch any cases where an overflow might happen. Adding this found several places where minor leaks were happening, and Daniel and I chased down fixes for them. One interesting note about this protection is that it only examines the size of the whole object[...]
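As a purely illustrative aside (a toy Python sketch, not the kernel's C implementation), the practical difference between the default "unchecked" counter and a CONFIG_REFCOUNT_FULL-style checked one is roughly this: the unchecked counter wraps silently, while the checked counter warns and saturates so a wrapped count cannot lead to a premature free or use-after-free:

# Toy illustration only: not refcount_t, just the wrapping-vs-saturating idea.
UINT_MAX = 2**32 - 1

class UncheckedRefcount:
    """Behaves like a raw atomic_t: silently wraps on overflow/underflow."""
    def __init__(self, value=1):
        self.value = value

    def inc(self):
        self.value = (self.value + 1) & UINT_MAX

    def dec_and_test(self):
        self.value = (self.value - 1) & UINT_MAX
        return self.value == 0   # a wrapped counter can "free" too early

class CheckedRefcount:
    """Sketch of a checked counter: warns and saturates instead of wrapping."""
    def __init__(self, value=1):
        self.value = value

    def inc(self):
        if self.value in (0, UINT_MAX):
            print("WARN: refcount saturation or use-after-free detected")
            self.value = UINT_MAX  # pin the counter; leak instead of reuse
            return
        self.value += 1

    def dec_and_test(self):
        if self.value == 0:
            print("WARN: refcount underflow detected")
            return False
        self.value -= 1
        return self.value == 0

if __name__ == "__main__":
    r = CheckedRefcount()
    r.dec_and_test()  # drops to zero: the object would be freed here
    r.inc()           # a buggy extra "get": the checked version warns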



Gunnar Wolf: Made with Creative Commons: Over half translated, yay!

Tue, 05 Sep 2017 19:05:48 +0000

(image)

An image speaks for a thousand words...
(image)
And our translation project is worth several thousand words!
I am very happy and surprised to say we have surpassed the 50% mark of the Made with Creative Commons translation project. We have translated 666 out of 1210 strings (yay for 3v1l numbers)!
I really have to thank Weblate for hosting us and allowing collaboration to happen there. And, of course, I have to thank the people who have jumped on board and helped with the translation — we are over halfway there! Let's keep pushing!

(image)

PS - If you want to join the project, just log in to Weblate and start translating right away, either into Spanish or another language! (Polish, Dutch and Norwegian Bokmål are on their way.) If you translate into Spanish, *please* read and abide by the specific Spanish translation guidelines.




Chris Lamb: Ask the dumb questions

Tue, 05 Sep 2017 11:51:02 +0000

In the same way that it is vital to ask the "smart questions", it is equally important to ask the dumb ones. Whilst your milieu might be—say—comparing and contrasting the finer details of commission structures between bond brokers, if you aren't quite sure of the topic, be bold and confident enough to ask: I'm sorry, but what actually is a bond?

Don't consider this to be an all-or-nothing affair. After all, you might have at least some idea about what a bond is. Rather, adjust your tolerance to also ask for clarification when you are merely slightly unsure or uncertain about a concept, term or reference.

So why do this? Most obviously, you are learning something and expanding your knowledge about the world, but a clarification can also avoid problems later if you were mistaken in your assumptions. Not only that, asking "can you explain that?" or admitting "I don't follow…" is not only being honest with yourself; the vulnerability you show in admitting your ignorance also opens you up to others, leading to closer friendships and working relationships.

We clearly have a tendency to want to come across as knowledgeable or―perhaps more honestly―we don't want to appear dumb or uninformed, as it will bruise our ego. But the precise opposite is true: nodding and muddling your way through conversations you only partly understand is unlikely to cultivate true feelings of self-respect and a healthy self-esteem.

Since adopting this approach I have found I've rarely derailed the conversation. In fact, speaking up not only encourages and flatters others that you care about their subject, it has invariably led to related matters which are not only more inclusive but actually novel and interesting to all present.

So push through the voice in your head and be that elephant in the room. After all, you might not be the only person thinking it. If it helps, try reframing it to yourself as helping others… You'll find it effortless soon enough. Indeed, asking the dumb question is actually a positive feedback loop where each question you pose helps you ask others in the future. Excellence is not an act, but a habit. [...]



Junichi Uekawa: It's already September.

Tue, 05 Sep 2017 00:26:34 +0000

It's already September. I didn't write much code last month. I wrote a CSV parser and felt a little depressed after reading RFC 4180. None of my CSV files were in CRLF.
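For reference, RFC 4180 prescribes CRLF record terminators; Python's own csv module writes CRLF by default, but its reader accepts LF-only files just as happily. A small, hypothetical snippet (unrelated to the parser mentioned in the post):

import csv
import io

# csv.writer emits CRLF line endings by default, per RFC 4180.
rows = [["name", "value"], ["pi", "3.14159"]]
buf = io.StringIO()
csv.writer(buf).writerows(rows)
data = buf.getvalue()
print(repr(data))  # note the '\r\n' record terminators

# csv.reader parses an LF-only variant of the same data just as well,
# which matches most CSV files found in the wild.
for row in csv.reader(io.StringIO(data.replace("\r\n", "\n"))):
    print(row)

# When working with real files, open them with newline="" so the csv
# module, not the file object, controls the line endings.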




Jonathan Dowland: Sortpaper: 16:9 edition

Mon, 04 Sep 2017 21:34:09 +0000

(image)
(image)

sortpaper 16:9

Back in 2011 I stumbled across a file "sortpaper.png", which was a hand-crafted wallpaper I'd created some time in the early noughties to help me organise icons on my computer's Desktop. I published it at the time in the blog post sortpaper.

Since then I have rediscovered the blog post, and, as I was looking for an excuse to try out the Processing software, I wrote a Processing sketch to re-create it, but with the size and colours parameterized: sortpaper.pde.txt. The thumbnail above links to an example 1920x1080 rendering.
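The actual sketch is written in Processing and lives behind the link above. Purely as a hypothetical analogue of the same idea (a wallpaper whose size and colours are parameters), something similar could be sketched with Python and Pillow; the layout below is invented for illustration and is not a translation of sortpaper.pde:

# Hypothetical analogue only: render a grid of tinted boxes as a wallpaper,
# with resolution and palette passed in as parameters.
from PIL import Image, ImageDraw

def render(width=1920, height=1080, cols=6, rows=4,
           background=(40, 40, 40),
           palette=((70, 90, 120), (90, 70, 110))):
    img = Image.new("RGB", (width, height), background)
    draw = ImageDraw.Draw(img)
    cell_w, cell_h = width // cols, height // rows
    margin = 8
    for r in range(rows):
        for c in range(cols):
            x0 = c * cell_w + margin
            y0 = r * cell_h + margin
            x1 = (c + 1) * cell_w - margin
            y1 = (r + 1) * cell_h - margin
            draw.rectangle((x0, y0, x1, y1),
                           fill=palette[(r + c) % len(palette)],
                           outline=(200, 200, 200))
    return img

if __name__ == "__main__":
    render().save("sortpaper-like.png")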




Jonathan Dowland: sortpaper.pde

Mon, 04 Sep 2017 21:28:51 +0000