
Planet Debian



Planet Debian - http://planet.debian.org/



 



Michael Prokop: The #newinstretch game: new forensic packages in Debian/stretch

Thu, 25 May 2017 07:48:50 +0000


Repeating what I did for the last Debian releases with the #newinwheezy and #newinjessie games, it’s time for the #newinstretch game:

Debian/stretch AKA Debian 9.0 will include a bunch of packages for people interested in digital forensics. These are the packages maintained within the Debian Forensics team which are new in the Debian/stretch release as compared to Debian/jessie (ignoring jessie-backports):

  • bruteforce-salted-openssl: try to find the passphrase for files encrypted with OpenSSL
  • cewl: custom word list generator
  • dfdatetime/python-dfdatetime: Digital Forensics date and time library
  • dfvfs/python-dfvfs: Digital Forensics Virtual File System
  • dfwinreg: Digital Forensics Windows Registry library
  • dislocker: read/write encrypted BitLocker volumes
  • forensics-all: Debian Forensics Environment – essential components (metapackage)
  • forensics-colorize: show differences between files using color graphics
  • forensics-extra: Forensics Environment – extra console components (metapackage)
  • hashdeep: recursively compute hashsums or piecewise hashings
  • hashrat: hashing tool supporting several hashes and recursivity
  • libesedb(-utils): Extensible Storage Engine DB access library
  • libevt(-utils): Windows Event Log (EVT) format access library
  • libevtx(-utils): Windows XML Event Log format access library
  • libfsntfs(-utils): NTFS access library
  • libfvde(-utils): FileVault Drive Encryption access library
  • libfwnt: Windows NT data type library
  • libfwsi: Windows Shell Item format access library
  • liblnk(-utils): Windows Shortcut File format access library
  • libmsiecf(-utils): Microsoft Internet Explorer Cache File access library
  • libolecf(-utils): OLE2 Compound File format access library
  • libqcow(-utils): QEMU Copy-On-Write image format access library
  • libregf(-utils): Windows NT Registry File (REGF) format access library
  • libscca(-utils): Windows Prefetch File access library
  • libsigscan(-utils): binary signature scanning library
  • libsmdev(-utils): storage media device access library
  • libsmraw(-utils): split RAW image format access library
  • libvhdi(-utils): Virtual Hard Disk image format access library
  • libvmdk(-utils): VMWare Virtual Disk format access library
  • libvshadow(-utils): Volume Shadow Snapshot format access library
  • libvslvm(-utils): Linux LVM volume system format access library
  • plaso: super timeline all the things
  • pompem: Exploit and Vulnerability Finder
  • pytsk/python-tsk: Python Bindings for The Sleuth Kit
  • rekall(-core): memory analysis and incident response framework
  • unhide.rb: Forensic tool to find processes hidden by rootkits (was already present in wheezy but missing in jessie, available via jessie-backports though)
  • winregfs: Windows registry FUSE filesystem
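As a quick taste of the tooling above, installing the forensics-all metapackage and building a recursive hash manifest with hashdeep might look like this (the evidence path is a made-up example):

```shell
# Pull in the essential forensics tools via the metapackage
sudo apt install forensics-all

# Recursively hash an evidence tree with SHA-256 (path is a placeholder)
hashdeep -c sha256 -r /mnt/evidence > hashes.txt

# Later, audit the tree against the recorded hashes
hashdeep -c sha256 -r -a -k hashes.txt /mnt/evidence
```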

Join the #newinstretch game and present packages and features which are new in Debian/stretch.




Jaldhar Vyas: For Downtown Hoboken

Thu, 25 May 2017 04:34:14 +0000


Q: What should you do if you see a spaceman?

A: Park there before someone takes it, man.




Steve Kemp: Getting ready for Stretch

Wed, 24 May 2017 21:00:00 +0000


I run about 17 servers. Of those about six are very personal and the rest are a small cluster which are used for a single website. (Partly because the code is old and in some ways a bit badly designed, partly because "clustering!", "high availability!", "learning!", "fun!" - seriously I had a lot of fun putting together a fault-tolerant deployment with haproxy, ucarp, etc, etc. If I were paying for it the site would be both retired and static!)

I've started the process of upgrading to stretch by picking a bunch of hosts that do things I could live without for a few days - in case there were big problems, or I needed to restore from backups.

So far I've upgraded:

  • master.steve
    • This is a puppet-master, so while it is important, killing it wouldn't be too bad - after all my nodes are currently set up properly, right?
    • Upgrading this host changed the puppet-server from 3.x to 4.x.
    • That meant I had to upgrade all my client-systems, because puppet 3.x won't talk to a 4.x master.
    • Happily jessie-backports contains a recent puppet-client.
    • It also meant I had to rework a lot of my recipes, in small ways.
  • builder.steve
    • This is a host I use to build packages upon, via pbuilder.
    • I have chroots setup for wheezy, jessie, and stretch, each in i386 and amd64 flavours.
  • git.steve
    • This is a host which stores my git-repositories, via gitbucket.
    • While it is an important host in terms of functionality, the software it needs is very basic: nginx proxies to a java application which runs on localhost:XXXX, with some caching magic happening to deal with abusive clients.
    • I do keep considering using gitlab, because I like its runners, etc. But that is pretty resource intensive.
    • On the other hand, if I did switch I could drop my builder.steve host, which might mean I'd come out ahead in terms of used resources.
  • leave.steve
    • Torrent-box.
    • Upgrading was painless, I only run rtorrent, and a simple object storage system of my own devising.

All upgrades were painless, with only one real surprise - the attic-backup software was removed from Debian.

Although I do intend to retry using Lars' excellent obnam in the near future, pragmatically I wanted to stick with what I'm familiar with. Borg backup is a fork of attic I've been aware of for a long time, but I never quite had a reason to try it out. Setting it up pretty much just meant editing my backup-script:

s/attic/borg/g

Once I did that, and created some new destinations all was good:

borg@rsync.io ~ $ borg init /backups/git.steve.org.uk.borg/
borg@rsync.io ~ $ borg init /backups/master.steve.org.uk.borg/
borg@rsync.io ~ $ ..
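The one-line substitution is easy to sanity-check on a copy of the script before touching the real thing — a minimal sketch, with made-up script contents:

```shell
# Fake backup script using the old attic commands (contents are illustrative)
printf 'attic create repo::daily /etc\nattic prune repo --keep-daily=7\n' > backup.sh

# The same rename described above: s/attic/borg/g
sed -i 's/attic/borg/g' backup.sh

cat backup.sh
```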

Upgrading other hosts, for example my website(s), and my email-box, will be more complex and fiddly. On that basis they will definitely wait for the formal stretch release.

But having a couple of hosts running the frozen distribution is good for testing, and to let me see what is new.




Jonathan Dowland: yakking

Wed, 24 May 2017 13:07:49 +0000


I've written a guest post for the Yakking Blog — "A WadC successor in Haskell?". It's mainly on the topic of Haskell, with WadC as a use-case for a thought experiment.

Yakking is a collaborative blog geared towards beginner software engineers that is put together by some friends of mine. I was talking to them about contributing a blog post on a completely different topic a while ago, but that has not come to fruition (there or anywhere, yet). When I wrote up the notes that formed the basis of this blog post, I realised it might be a good fit.

Take a look at some of their other posts, and if you find it interesting, subscribe!




Michal Čihař: Weblate 2.14.1

Wed, 24 May 2017 08:00:22 +0000


Weblate 2.14.1 has been released today. It is a bugfix release fixing possible migration issues, search results navigation and some minor security issues.

Full list of changes:

  • Fixed possible error when paginating search results.
  • Fixed migrations from older versions in some corner cases.
  • Fixed possible CSRF on project watch and unwatch.
  • The password reset no longer authenticates the user.
  • Fixed possible captcha bypass on forgotten password.

If you are upgrading from an older version, please follow our upgrading instructions.
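For a pip-based install, the upgrade typically boils down to something like the following sketch (the paths, the use of pip, and the package name are assumptions about your deployment layout — check the official upgrading instructions first):

```shell
# Back up the database first (command depends on your DB engine)

# Upgrade the Weblate code itself
pip install --upgrade Weblate==2.14.1

# Apply database migrations and refresh static files (standard Django steps)
./manage.py migrate
./manage.py collectstatic --noinput
```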

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate




Dirk Eddelbuettel: Rcpp 0.12.11: Loads of goodies

Tue, 23 May 2017 19:59:00 +0000

The eleventh update in the 0.12.* series of Rcpp landed on CRAN yesterday following the initial upload on the weekend, and the Debian package and Windows binaries should follow as usual. The 0.12.11 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, the 0.12.9 release in January, and the 0.12.10 release in March --- making it the twelfth release at the steady and predictable bi-monthly release frequency. Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1026 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release follows on the heels of R's 3.4.0 release and addresses one or two issues from the transition, along with a literal boatload of other fixes and enhancements. James "coatless" Balamuta was once again tireless in making the documentation better, Kirill Mueller addressed a number of more obscure compiler warnings (triggered under -Wextra and the like), Jim Hester improved exception handling, and much more was done, mostly by the Rcpp Core team. All changes are listed below in some detail.

One big change that JJ made is that Rcpp Attributes also generate the now-almost-required package registration. (For background, I blogged about this one, two, three times.) We tested this, and do not expect it to throw curveballs. Whether you have an existing src/init.c, or do not have registration set in your NAMESPACE, it should cover most cases. But one never knows, and one first post-release buglet related to how devtools tests things has already been fixed in this PR by JJ.
Changes in Rcpp version 0.12.11 (2017-05-20)

  • Changes in Rcpp API:
    • Rcpp::exceptions can now be constructed without a call stack (Jim Hester in #663 addressing #664).
    • Somewhat spurious compiler messages under very verbose settings are now suppressed (Kirill Mueller in #670, #671, #672, #687, #688, #691).
    • Refreshed the included tinyformat template library (James Balamuta in #674 addressing #673).
    • Added printf-like syntax support for exception classes and variadic templating for Rcpp::stop and Rcpp::warning (James Balamuta in #676).
    • Exception messages have been rewritten to provide additional information (James Balamuta in #676 and #677 addressing #184).
    • One more instance of Rf_mkString is protected from garbage collection (Dirk in #686 addressing #685).
    • Two exception specifications that are no longer tolerated by g++-7.1 or later were removed (Dirk in #690 addressing #689).
  • Changes in Rcpp Documentation:
    • Added a Known Issues section to the Rcpp FAQ vignette (James Balamuta in #661 addressing #628, #563, #552, #460, #419, and #251).
  • Changes in Rcpp Sugar:
    • Added sugar function trimws (Nathan Russell in #680 addressing #679).
  • Changes in Rcpp Attributes:
    • Automatically generate native routine registrations (JJ in #694).
    • The plugins for C++11, C++14, C++17 now set the values R 3.4.0 or later expects; a plugin for C++98 was added (Dirk in #684 addressing #683).
  • Changes in Rcpp support functions:
    • The Rcpp.package.skeleton() function now creates a package registration file provided R 3.4.0 or later is used (Dirk in #692).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for[...]



Reproducible builds folks: Reproducible Builds: week 108 in Stretch cycle

Tue, 23 May 2017 18:43:39 +0000

Here's what happened in the Reproducible Builds effort between Sunday May 14 and Saturday May 20 2017:

News and Media coverage

  • We've reached 94.0% reproducible packages on testing/amd64! (NB. without build path variation)
  • Maria Glukhova was interviewed on It's FOSS about her involvement with Reproducible Builds with respect to Outreachy.

IRC meeting

Our next IRC meeting has been scheduled for Thursday June 1 at 16:00 UTC.

Packages reviewed and fixed, bugs filed, etc.

  • Bernhard M. Wiedemann: boost, pytsk, bam, kakoune, newsbeuter, trigger-rally, firebird, povray, zynaddsubfx (fixed), scintilla (merged), cryptopp (merged).
  • Chris Lamb: #862553 filed against vim-command-t, #862588 filed against tkhtml1, #862592 filed against taskcoach, #862676 filed against mp3fs, #862825 filed against golang-github-pkg-profile, #863015 filed against jellyfish, #863054 filed against doxygen.

Reviews of unreproducible packages

35 package reviews have been added, 28 have been updated and 12 have been removed in this week, adding to our knowledge about identified issues. 2 issue types have been added:

  • year_variable_in_documentation_generated_by_doxygen
  • jellyfish_creates_nondeterministic_json

diffoscope development

  • Mattia Rizzolo: Export JUnit-style test report when building on Jenkins.

strip-nondeterminism development

  • Chris Lamb: Only print log messages by default if the file was actually modified. (Closes: #863033)

tests.reproducible-builds.org

Holger wrote a new systemd-based scheduling system replacing 162 constantly running Jenkins jobs which were slowing down job execution in general: "Nothing fancy really, just 370 lines of shell code in two scripts; out of these 370 lines, 80 are comments and 162 are node definitions for those 162 'jobs'. Worker logs are not yet as good as with Jenkins, but usually we don't need real-time log viewing of specific builds. Or rather, it's a waste of time to do it. (Actual package build logs remain unchanged.)"

Builds are a lot faster for the fast archs, but there is not so much difference on armhf. Since April 12 for i386 (and a week later for the rest), the images below are ordered with i386 on top, then amd64, armhf and arm64. Except for armhf it's pretty visible when the switch was made.

Misc.

This week's edition was written by Chris Lamb, Holger Levsen, Bernhard M. Wiedemann, Vagrant Cascadian and Maria Glukhova & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists. [...]



Tianon Gravi: Debuerreotype

Tue, 23 May 2017 06:00:00 +0000

Following in the footsteps of one of my favorite Debian Developers, Chris Lamb / lamby (who is quite prolific in the reproducible builds effort within Debian), I’ve started a new project based on snapshot.debian.org (time-based snapshots of the Debian archive) and some of lamby’s work for creating reproducible Debian (debootstrap) rootfs tarballs. The project is named “Debuerreotype” as an homage to the photography roots of the word “snapshot” and the daguerreotype process which was an early method of taking photographs. The essential goal is to create “photographs” of a minimal Debian rootfs, so the name seemed appropriate (even if it’s a bit on the “mouthful” side). The end-goal is to create and release Debian rootfs tarballs for a given point-in-time (especially for use in Docker) which should be fully reproducible, and thus improve confidence in the provenance of the Debian Docker base images. For more information about reproducibility and why it matters, see reproducible-builds.org, which has more thorough explanations of the why and how and links to other important work such as the reproducible builds effort in Debian (for Debian package builds). In order to verify that the tool actually works as intended, I ran builds against seven explicit architectures (amd64, arm64, armel, armhf, i386, ppc64el, s390x) and eight explicit suites (oldstable, stable, testing, unstable, wheezy, jessie, stretch, sid). I used a timestamp value of 2017-05-16T00:00:00Z, and skipped combinations that don’t exist (such as wheezy on arm64) or aren’t supported anymore (such as wheezy on s390x). I ran the scripts repeatedly over several days, using diffoscope to compare the results. While doing said testing, I ran across #857803, and added a workaround. 
There’s also a minor outstanding issue with wheezy’s reproducibility that I haven’t had a chance to dig deep very deeply into yet (but it’s pretty benign and Wheezy’s LTS support window ends 2018-05-31, so I’m not too stressed about it). I’ve also packaged the tool for Debian, and submitted it into the NEW queue, so hopefully the FTP Masters will look favorably upon this being a tool that’s available to install from the Debian archive as well. 😇 Anyhow, please give it a try, have fun, and as always, report bugs! [...]



Gunnar Wolf: Open Source Symposium 2017

Mon, 22 May 2017 17:21:19 +0000


I travelled (for three days only!) to Argentina, to be a part of the Open Source Symposium 2017, a co-located event of the International Conference on Software Engineering.
(image)
This is, all in all, an interesting although small conference — we are around 30 people in the room. It is quite an unusual conference for me, as it is among the first "formal" academic conferences I have been part of. Sessions have so far been quite interesting.
What am I linking to from this image? Of course, the proceedings! They managed to publish the proceedings via the "formal" academic channels (a nice hard-cover Springer volume) under an Open Access license (which is sadly not usual, and is unbelievably expensive). So, you can download the full proceedings, or article by article, in EPUB or in PDF...
...Which is very very nice :)
Previous editions of this symposium have also their respective proceedings available, but AFAICT they have not been downloadable.
So, get the book; it provides very interesting and original insights into our community, seen from several quite novel angles!




Michal Čihař: HackerOne experience with Weblate

Mon, 22 May 2017 10:00:22 +0000

Weblate started to use HackerOne Community Edition some time ago, and I think it's good to share my experience with that. Do you have an open source project and want to get more attention from the security community? This post will answer how it looks from the perspective of a pretty small project.

I applied with Weblate to HackerOne Community Edition at the end of March and it was approved early in April. Based on their recommendations I started in invite-only mode, but that really didn't bring much attention (exactly zero reports), so I decided to go public. I asked for making the project public just after coming back from a two-week vacation, expecting the approval to take some time during which I'd settle down the things that had popped up during the vacation. In the end it was approved within a single day, so I was immediately under fire from incoming reports. I was surprised that they didn't lie - you really will get a huge number of issues just after making your project public. Most of them were quite simple and repetitive (as you can see from the number of duplicates), but it really provided valuable input. Even more surprisingly, there was a second peak when I started to disclose resolved issues (once Weblate 2.14 had been released).

Overall, the issues could be divided into a few groups:

  • Server configuration, such as lack of Content-Security-Policy headers. This is certainly good security practice and we really didn't follow it in all cases. The situation should be way better now.
  • Lack of rate limiting in Weblate. We really didn't try to do that, and many reporters (correctly) showed that this is something that should be addressed at important entry points such as authentication. Weblate 2.14 has brought a lot of features in this area.
  • Not using https where applicable. Yes, some APIs or web sites did not support https in the past, but now they do and I didn't notice.
  • Several pages were vulnerable to CSRF, as they were using GET where POST with CSRF protection would be more appropriate.
  • Lack of password strength validation. I've incorporated Django password validation into Weblate, hopefully avoiding the weakest passwords.
  • Several issues in authentication using Python Social Auth. I've never really looked at how the authentication works there, and there are some questionable decisions or bugs. Some of the bugs were already addressed in current releases, but there are still some to solve.

In the end it was a really challenging week to be able to cope with the incoming reports, but I think I've managed it quite well. The HackerOne metrics state an average of 2 hours to respond to incoming incidents, which I think will not work in the long term :-).

Anyway, thanks to this, you can now enjoy Weblate 2.14, which is more secure than any release before; if you have not yet upgraded, you might consider doing that now, or look into our support offering for self-hosted Weblate.

The downside of all this was that the initial publishing on HackerOne made our website the target of a lot of automated tools, and the web server was not really ready for that. I'm really sorry to all Hosted Weblate users who were affected by this. This has also been addressed now, but the infrastructure really should have been prepared for this beforehand. To show how it looked, here is the number of requests to the nginx server:

I'm really glad I could make Weblate available on HackerOne, as it will clearly improve its security and the security of the hosted offering we have. I will certainly consider providing swag and/or bounties on further severe reports, but that won't be possible without enough funding for Weblate.

Filed under: Debian English SUSE Weblate [...]
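Checking whether a site already sends hardening headers like the Content-Security-Policy header mentioned above is a one-liner with curl (the domain is a placeholder):

```shell
# Fetch only the response headers and look for the CSP header (case-insensitive)
curl -sI https://example.org/ | grep -i '^content-security-policy'
```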



Ritesh Raj Sarraf: apt-offline 1.8.0 released

Sun, 21 May 2017 21:17:37 +0000

I am pleased to announce the release of apt-offline, version 1.8.0. This release is mainly a forward port of apt-offline to Python 3 and PyQt5. There are some glitches related to Python 3 and PyQt5, but overall the CLI interface works fine. Other than the porting, there's also an important bug fixed, related to a memory leak when using the MIME library. And then there are some updates to the documentation (user examples) based on feedback from users. The release is available from GitHub and Alioth.

What is apt-offline?

apt-offline is an Offline APT Package Manager. It can fully update and upgrade an APT based distribution without connecting to the network, all of it transparent to APT.

apt-offline can be used to generate a signature on a machine (with no network). This signature contains all download information required for the APT database system. This signature file can be used on another machine connected to the internet (which need not be a Debian box and can even be running Windows) to download the updates. The downloaded data will contain all updates in a format understood by APT, and this data can be used by apt-offline to update the non-networked machine.

apt-offline can also fetch bug reports and make them available offline. [...]
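The generate-signature / download / install cycle described above looks roughly like this (the file names are arbitrary examples):

```shell
# On the offline machine: record what APT needs (updates and upgrades)
apt-offline set /tmp/apt-offline.sig --update --upgrade

# On any internet-connected machine (Debian not required): fetch everything
apt-offline get /tmp/apt-offline.sig --bundle /tmp/apt-offline.zip

# Back on the offline machine: feed the downloaded data to APT
apt-offline install /tmp/apt-offline.zip
apt-get upgrade
```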



Holger Levsen: 20170521-this-time-of-the-year

Sun, 21 May 2017 18:26:58 +0000


It's this time of the year again…

So it seems summer has finally arrived here, and for the first time this year I've been offline for more than 24h, despite having wireless network coverage. The lake, the people, the bonfire, the music, the mosquitos and the fireworks at 3:30 in the morning were totally worth it!




Russ Allbery: Review: Sector General

Sun, 21 May 2017 17:21:00 +0000

Review: Sector General, by James White

Series: Sector General #5
Publisher: Orb
Copyright: 1983
Printing: 2002
ISBN: 0-312-87770-6
Format: Trade paperback
Pages: 187

Sector General is the fifth book (or, probably more accurately, collection) in the Sector General series. I blame the original publishers for the confusion. The publication information is for the Alien Emergencies omnibus, which includes the fourth through the sixth books in the series.

Looking back on my previous reviews of this series (wow, it's been eight years since I read the last one?), I see I was reviewing them as novels rather than as short story collections. In retrospect, that was a mistake, since they're composed of clearly stand-alone stories with a very loose arc. I'm not going to go back and re-read the earlier collections to give them proper per-story reviews, but may as well do this properly here.

Overall, this collection is more of the same, so if that's what you want, there won't be any negative surprises. It's another four engineer-with-a-wrench stories about biological and medical puzzles, with only a tiny bit of characterization and little hint to any personal life for any of the characters outside of the job. Some stories are forgettable, but White does create some memorable aliens. Sadly, the stories don't take us to the point of real communication, so those aliens stop at biological puzzles and guesswork. "Combined Operation" is probably the best, although "Accident" is the most philosophical and an interesting look at the founding principle of Sector General.

"Accident": MacEwan and Grawlya-Ki are human and alien brought together by a tragic war, and forever linked by a rather bizarre war monument. (It's a very neat SF concept, although the implications and undiscussed consequences don't bear thinking about too deeply.) The result of that war was a general recognition that such things should not be allowed to happen again, and it brought about a new, deep commitment to inter-species tolerance and politeness. Which is, in a rather fascinating philosophical twist, exactly what MacEwan and Grawlya-Ki are fighting against: not the lack of aggression, which they completely agree with, but with the layers of politeness that result in every species treating all others as if they were eggshells. Their conviction is that this cannot create a lasting peace. This insight is one of the most profound bits I've read in the Sector General novels and supports quite a lot of philosophical debate. (Sadly, there isn't a lot of that in the story itself.) The backdrop against which it plays out is an accidental crash in a spaceport facility, creating a dangerous and potentially deadly environment for a variety of aliens. Given the collection in which this is included and the philosophical bent described above, you can probably guess where this goes, although I'll leave it unspoiled if you can't. It's an idea that could have been presented with more subtlety, but it's a really great piece of setting background that makes the whole series snap into focus. A much better story in context than its surface plot. (7)

"Survivor": The hospital ship Rhabwar rescues a sole survivor from the wreck of an alien ship caused by incomplete safeguards on hyperdrive generators. The alien is very badly injured and unconscious and needs the full attention of Sector General, but on the way back, the empath Prilicla also begins suffering from empathic hypersensitivity. Conway, the protagonist of most of this series, devotes most of his attention to that problem, having delivered the rescued alien to competent surgical h[...]



Adnan Hodzic: Automagically deploy & run containerized WordPress (PHP7 FPM, Nginx, MariaDB) using Ansible + Docker on AWS

Sun, 21 May 2017 16:28:33 +0000

In this blog post, I've described how what started as a simple migration of a WordPress blog to AWS ended up as an automation project consisting of publishing multiple Ansible roles deploying and running multiple Docker images. If you're not interested in reading about my entire journey, cognition gains and how this process came to be, please skim down to the "Birth of: containerized-wordpress-project (TL;DR)" section.

Migrating WordPress blog to AWS (EC2, Lightsail?)

I've been sold on Amazon's AWS idea of cloud computing "services" for a couple of years now, and I've wanted, and been trying, to migrate this (WordPress) blog to AWS, but somehow it never worked out. Moving it to an EC2 instance, with its own EBS volumes, AMI, EIP, Security Group … it just seemed like overkill.

When AWS Lightsail was first released, it seemed that it was the answer to all my problems. But it wasn't, disregarding its somewhat restrictive/dumbed-down versions of the original features. Living in Amsterdam, my main problem with it was that it was only available in a single US region. Regardless, I thought it had everything I needed for a WordPress site, and as a new service, it had great potential. Its regional limitations were also good in a sense, as they made me realize one important thing: once I migrate my blog to AWS, I want to be able to seamlessly move/migrate it across different EC2s and different regions once they become available. If done properly, it meant I could even have it moved across different clouds (I'm talking to you, Google Cloud).

P.S.: AWS Lightsail is now available in a couple of different regions across Europe, in a rollout that was almost seamless.

Fundamental problem of every migration … is migration

Phase 1: Don't reinvent the wheel?

When you have a WordPress site that's not self-hosted, you want everything to work, but you really don't want to spend any time managing the infrastructure it's on. As soon as I started looking at what could fit these criteria, I found that there were pre-configured, out-of-the-box WordPress EC2 images available on AWS Marketplace. Great! But when I took a look, although everything ran out of the box, I wasn't happy with the software stack it was all built on: namely Ubuntu 14.04 and Apache, with all of the services started using custom scripts. Yuck. With this setup, when it was time to upgrade (and it's already that time), you wouldn't be thinking about an upgrade; you'd only be thinking about another migration.

Phase 2: What if I built everything myself?

Installing and configuring everything manually, and then writing a huge HowTo that I would follow when I needed to re-create the whole stack, was not an option. The same goes for scripting the whole process, as the overhead of changes that had to be tracked was way too big. Being a huge Ansible fan, automating this step was the natural next step. I even found an awesome Ansible role which seemed like it was going to do everything I needed. Except I realized I needed to update all the software deployed with it, and customize it, since the configuration it was deployed with wasn't generic enough. So I forked it and got to work. But soon enough, I was knee-deep in making and fiddling with various system changes - something I was trying to get away from in this case, and most importantly something I was trying to avoid when it was time for the next update.

Phase 3: Marriage made in heaven: Ansible + Docker + AWS

The idea to have everything Dockerized was around from the very start. However, it never made a lot of sense until I put Ansible into the same picture. And it was at this point that my final idea and requirements became crystal clear: use Ansible to configure and set up the host, ready for Do[...]



Elena 'valhalla' Grandi: Modern XMPP Server

Sun, 21 May 2017 11:30:54 +0000

I've published a new HOWTO on my website: http://www.trueelena.org/computers/howto/modern_xmpp_server.html

Enrico (http://www.enricozini.org/blog/2017/debian/modern-and-secure-instant-messaging/) already wrote about the Why (and the What, Who and When), so I'll just quote his conclusion and move on to the How:

"I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian."

How

I've decided to install prosody (https://prosody.im/), mostly because it was recommended by the RTC QuickStart Guide (http://rtcquickstart.org/); I've heard that similar results can be achieved with ejabberd (https://www.ejabberd.im/) and other servers.

I'm also targeting Debian (https://www.debian.org/) stable (+ backports); as I write this, that is jessie; if there are significant differences I will update this article when I upgrade my server to stretch.
Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites

You will need to enable the backports repository (https://backports.debian.org/) and then install the packages prosody and prosody-modules.

You also need to set up some TLS certificates (I used Let's Encrypt, https://letsencrypt.org/) and make them readable by the prosody user; see Chapter 12 of the RTC QuickStart Guide (http://rtcquickstart.org/guide/multi/xmpp-server-prosody.html) for more details.

On your firewall, you'll need to open the following TCP ports:

5222 (client2server)
5269 (server2server)
5280 (default http port for prosody)
5281 (default https port for prosody)

The latter two are needed to enable some services provided via http(s), including rich media transfers.

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser alice@example.org

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim, https://en.wikipedia.org/wiki/Messaging_spam).

prosody configuration

You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:

c2s_require_encryption = true
s2s_secure_auth = true

and then, sadly, add to the whitelist any server that you want to talk to that doesn't support the above:

s2s_insecure_domains = { "gmail.com" }

virtualhosts

For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/chat.example.org.cfg.lua with contents like the following:

VirtualHost "chat.example.org"
  enabled = true
  ssl = {
    key = "/etc/ssl/private/example.org-key.pem";
    certificate = "/etc/ssl/public/example.org.pem";
  }

For the domains where you also want to enable MUCs, add the following lines:

Component "conference.chat.example.org" "muc"
  restrict_room_creation = "local"

The "local" value configures prosody so that only local users are allowed to create new rooms (but then everybody can join them, if the room administrator allows it): this may help reduce unwanted usage of your server by random people.

You can also add the following line to enable rich media transfers via http uploads (XEP-0363):

Component "upload.chat.example.org" "http_upload"

The defaults are pretty sane, but see https://modules.prosody.im/mod_http_upload.html for details on what knobs you can configure for this module.

Don't forget to enable the[...]
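Pulled together in one place, the complete /etc/prosody/conf.avail/chat.example.org.cfg.lua described in this post might look like the following sketch (it assumes the certificate paths used earlier; adjust the domain and paths to your layout):

```lua
-- One virtualhost with MUC rooms and HTTP upload (XEP-0363) enabled.
VirtualHost "chat.example.org"
    enabled = true
    ssl = {
        key = "/etc/ssl/private/example.org-key.pem";
        certificate = "/etc/ssl/public/example.org.pem";
    }

-- Only local users may create rooms; anybody may join if the room
-- administrator allows it.
Component "conference.chat.example.org" "muc"
    restrict_room_creation = "local"

-- Rich media transfers via HTTP upload.
Component "upload.chat.example.org" "http_upload"
```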



Neil Williams: Software, service, data and freedom

Sat, 20 May 2017 07:24:00 +0000

Free software, free services, but what about your data? I care a lot about free software, not only as a Debian Developer. The use of software as a service matters as well, because my principal free software development happens on just such a project, licensed under the GNU Affero General Public License version 3. The AGPL helps by allowing anyone who is suitably skilled to install their own copy of the software and run their own service on their own hardware. As a project, we are seeing increasing numbers of groups doing exactly this, and these groups are actively contributing back to the project.

So what is the problem? We've got an active project, an active community and everything is under a free software licence and regularly uploaded to Debian main. We have open code review with anonymous access to our own source code CI and anonymous access to project planning, open mailing list archives as well as an open bug tracker and a very active IRC channel (#linaro-lava on OFTC). We develop in the open, we respond in the open and we publish frequently (monthly, approximately). The code we write defaults to public visibility at runtime, with restrictions available for certain use cases. What else can we be doing?

Well, it was a simple question which started me thinking. The LAVA documentation has various example test scripts, e.g. https://validation.linaro.org/static/docs/v2/examples/test-jobs/qemu-kernel-standard-sid.yaml. These have no licence information, and we've adapted them for a Linux Foundation project, so what licence should apply to these files? (a question from Robert Marshall)

Those are our own examples, contributed as part of the documentation and covered by the AGPL like the rest of the documentation and the software which it documents, so I replied with the same. However, what about all the other submissions received by the service?

Data Freedom

LAVA acts by providing a service to authenticated users.
The software runs your test code on hardware which might not be available to the user or which is simply inconvenient for the test writer to set up themselves. The AGPL covers this nicely. What about the data contributed by the users? We make this available to other users, who will, naturally, copy and paste it for their own tests. In most cases, because the software defaults to public access, anonymous users also get to learn from the contributions of other test writers. This is a good thing and to be encouraged. (One reason why we moved to YAML for all submissions was to allow comments to help other users understand why the submission does specific things.)

Writing a test job submission or a test shell definition from scratch is a non-trivial amount of work. We've written dozens of pages of documentation covering how and how not to do it, but getting a test job to run exactly what the test writer requires can involve substantial effort. (Our documentation recommends using version control for each of these works for exactly these reasons.) At what point do these works become software? At what point do they need licensing? How could that be declared?

Perils of the Javascript Trap approach

When reading up on the AGPL, I also read about Service as a Software Substitute (SaaSS), and this led to The Javascript Trap. I don't consider LAVA to be SaaSS, although it is Software as a Service (SaaS). (Distinguishing between those is best left to the GNU document, as it is an almighty tangle at times.) I did look at the GNU ideas for licensing Javascript, but it seems cumbersome and unnecessary: a protocol designed for the specific purposes of their own service rather than as a solution which could be readily adopted by all such services. The same[...]



Ritesh Raj Sarraf: Patanjali Research Foundation

Sat, 20 May 2017 04:46:01 +0000


PSA: Research in the domain of Ayurveda

http://www.patanjaliresearchfoundation.com/patanjali/
 

I am so glad to see this initiative taken by the Patanjali group. This is a great stepping stone in the health and wellness domain.

So far, Allopathy has been blunt in discarding alternative medicine practices, without much solid justification. The only, repeated, response I've heard is "lack of research". This initiative definitely is a great step in that regard.

Ayurveda (the ancient Hindu art of healing) has a huge potential to touch lives. For the Indian sub-continent, it has the potential of a blessing.

The Prime Minister of India himself inaugurated the research centre.





Clint Adams: Help the Aged

Fri, 19 May 2017 16:10:06 +0000


I keep meeting girls from Walnut Creek who don’t know about the CDROM.

Posted on 2017-05-19
Tags: ranticore



Michael Prokop: Debian stretch: changes in util-linux #newinstretch

Fri, 19 May 2017 08:42:57 +0000

We’re coming closer to the Debian/stretch stable release, and similar to what we had with #newinwheezy and #newinjessie it’s time for #newinstretch! Hideki Yamane already started the game by blogging about GitHub’s icon font, fonts-octicons, and Arturo Borrero Gonzalez wrote a nice article about nftables in Debian/stretch.

One package that isn’t new, but whose tools are used by many of us, is util-linux, providing many essential system utilities. We have util-linux v2.25.2 in Debian/jessie and in Debian/stretch there will be util-linux >=v2.29.2. There are many new options available and we also have a few new tools available.

Tools that have been taken over from other packages:

last: used to be shipped via sysvinit-utils in Debian/jessie
lastb: used to be shipped via sysvinit-utils in Debian/jessie
mesg: used to be shipped via sysvinit-utils in Debian/jessie
mountpoint: used to be shipped via initscripts in Debian/jessie
sulogin: used to be shipped via sysvinit-utils in Debian/jessie

New tools:

lsipc: show information on IPC facilities, e.g.:

root@ff2713f55b36:/# lsipc
RESOURCE DESCRIPTION LIMIT USED USE%
MSGMNI Number of message queues 32000 0 0.00%
MSGMAX Max size of message (bytes) 8192 - -
MSGMNB Default max size of queue (bytes) 16384 - -
SHMMNI Shared memory segments 4096 0 0.00%
SHMALL Shared memory pages 18446744073692774399 0 0.00%
SHMMAX Max size of shared memory segment (bytes) 18446744073692774399 - -
SHMMIN Min size of shared memory segment (bytes) 1 - -
SEMMNI Number of semaphore identifiers 32000 0 0.00%
SEMMNS Total number of semaphores 1024000000 0 0.00%
SEMMSL Max semaphores per semaphore set 32000 - -
SEMOPM Max number of operations per semop(2) 500 - -
SEMVMX Semaphore max value 32767 - -

lslogins: display information about known users in the system, e.g.:

root@ff2713f55b36:/# lslogins
UID USER PROC PWD-LOCK PWD-DENY LAST-LOGIN GECOS
0 root 2 0 1 root
1 daemon 0 0 1 daemon
2 bin 0 0 1 bin
3 sys 0 0 1 sys
4 sync 0 0 1 sync
5 games 0 0 1 games
6 man 0 0 1 man
7 lp 0 0 1 lp
8 mail 0 0 1 mail
9 news 0 0 1 news
10 uucp 0 0 1 uucp
13 proxy 0 0 1 proxy
33 www-data 0 0 1 www-data
34 backup 0 0 1 backup
38 list 0 0 1 Mailing List Manager
39 irc 0 0 1 ircd
41 gnats 0 0 1 Gnats Bug-Reporting System (admin)
100 _apt 0 0 1
65534 nobody 0 0 1 nobody

lsns: list system namespaces, e.g.:

root@ff2713f55b36:/# lsns
NS TYPE NPROCS PID USER COMMAND
4026531835 cgroup 2 1 root bash
4026531837 user 2 1 root bash
4026532473 mnt [...]
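Since mountpoint(1) only changed its shipping package rather than its behaviour, a quick sanity check on a stretch system is easy (a trivial sketch; any path will do):

```shell
# mountpoint is shipped by util-linux in stretch (by initscripts in jessie);
# it exits 0 if the given path is a mount point and non-zero otherwise.
mountpoint /                                    # prints: / is a mountpoint
mountpoint -q /nonexistent || echo "not a mountpoint"
```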



Benjamin Mako Hill: Children’s Perspectives on Critical Data Literacies

Fri, 19 May 2017 00:51:18 +0000

Last week, we presented a new paper that describes how children are thinking through some of the implications of new forms of data collection and analysis. The presentation was given at the ACM CHI conference in Denver, and the paper is open access and online.

Over the last couple of years, we’ve worked on a large project to support children in doing, and not just learning about, data science. We built a system, Scratch Community Blocks, that allows the 18 million users of the Scratch online community to write their own computer programs (in Scratch, of course) to analyze data about their own learning and social interactions. An example of one of those programs, which finds how many of one’s followers in Scratch are not from the United States, is shown below.

Last year, we deployed Scratch Community Blocks to 2,500 active Scratch users who, over a period of several months, used the system to create more than 1,600 projects. As children used the system, Samantha Hautea, a student in UW’s Communication Leadership program, led a group of us in an online ethnography. We visited the projects children were creating and sharing. We followed the forums where users discussed the blocks. We read comment threads left on projects. We combined Samantha’s detailed field notes with the text of comments and forum posts, with ethnographic interviews of several users, and with notes from two in-person workshops. We used a technique called grounded theory to analyze these data.

What we found surprised us. We expected children to reflect on being challenged by, and hopefully overcoming, the technical parts of doing data science. Although we certainly saw this happen, what emerged much more strongly from our analysis was detailed discussion among children about the social implications of data collection and analysis.
In our analysis, we grouped children’s comments into five major themes that represented what we called “critical data literacies.” These literacies reflect things that children felt were important implications of social media data collection and analysis.

First, children reflected on the way that programmatic access to data, even data that was technically public, introduced privacy concerns. One user described the ability to analyze data as “creepy”, but at the same time, “very cool.” Children expressed concern that programmatic access to data could lead to “stalking” and suggested that the system should ask for permission.

Second, children recognized that data analysis requires skepticism and interpretation. For example, Scratch Community Blocks introduced a bug where the block that returned data about followers included users with disabled accounts. One user, in an interview, described to us how he managed to figure out the inconsistency: “At one point the follower blocks, it said I have slightly more followers than I do. And, that was kind of confusing when I was trying to make the project. […] I pulled up a second [browser] tab and compared the [data from Scratch Community Blocks and the data in my profile].”

Third, children discussed the hidden assumptions and decisions that drive the construction of metrics. For example, the number of views received for each project in Scratch is counted using an algorithm that tries to minimize the impact of gaming the system (similar to, for example, YouTube). As children started to build programs with data, they started to uncover and speculate about the decisions behind metrics. For example, they guessed that the view count [...]



Alessio Treglia: Digital Ipseity: Which Identity?

Thu, 18 May 2017 08:50:52 +0000


 

Within the next three years, more than seven billion people and businesses will be connected to the Internet. During this time of dramatic increases in access to the Internet, networks have seen an interesting proliferation of systems for digital identity management (e.g. our SPID in Italy). But what is really meant by “digital identity”? All these systems are implemented in order to have the utmost certainty that the data entered by the subscriber (address, name, birth date, telephone, email, etc.) coincides directly with that of the physical person. In other words, the data is certified to be “identical” to that of the user; there is a perfect overlap between the digital page and the authentic user certificate: an “idem”, that is, an identity.

This identity is our personal records reflected on the net, nothing more than that. Obviously, this data needs to be appropriately protected from malicious attacks by means of strict privacy rules, as it contains so-called “sensitive” information, but this data itself is not sufficiently interesting for the commercial market, except for statistical purposes on homogeneous population groups. What may be a real goldmine for the “web company” is another type of information: user’s ipseity. It is important to immediately remove the strong semantic ambiguity that weighs on the notion of identity. There are two distinct meanings…

<Read More…[by Fabio Marzocca]>




Michael Prokop: Debugging a mystery: ssh causing strange exit codes?

Thu, 18 May 2017 07:29:36 +0000

Recently we had a WTF moment at a customer of mine which is worth sharing. In an automated deployment procedure we’re installing Debian systems and setting up MySQL HA/Scalability. Installation of the first node works fine, but during installation of the second node something weird was going on. Even though the deployment procedure reported that everything went fine, it wasn’t fine at all. After bisecting to the relevant command lines where it was going wrong, we identified that the failure was happening between two ssh/scp commands, which are invoked inside a chroot through a shell wrapper. The ssh command caused a wrong exit code to show up: instead of bailing out with an error (we’re running under ‘set -e‘) it returned with exit code 0 and the deployment procedure continued, even though there was a fatal error. Initially we triggered the bug when two ssh/scp command lines close to each other were executed, but I managed to find a minimal example for demonstration purposes:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

What we’d expect is the following behavior, receiving exit code 1 from the last command line in the chroot wrapper:

# ./ssh_wrapper
return code = 1

But what we actually get is exit code 0:

# ./ssh_wrapper
return code = 0

Uhm?! So what’s going wrong and what’s the fix? Let’s find out what’s causing the problem:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh root@localhost command_does_not_exist >/dev/null 2>&1
exit "$?"
EOF
echo "return code = $?"

# ./ssh_wrapper
return code = 127

OK, so if we invoke it with a binary that does not exist we properly get exit code 127, as expected. What about switching /bin/bash to /bin/sh (which corresponds to dash here) to make sure it’s not a bash bug:

# cat ssh_wrapper
chroot << "EOF" / /bin/sh
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper
return code = 1

Oh, but that works as expected!?
When looking at this behavior I had the feeling that something is going wrong with file descriptors. So what about wrapping the ssh command line within different tools? No luck with `stdbuf -i0 -o0 -e0 ssh root@localhost hostname`, nor with `script -c "ssh root@localhost hostname" /dev/null`, and also not with `socat EXEC:"ssh root@localhost hostname" STDIO`. But it works under unbuffer(1) from the expect package:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
unbuffer ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper
return code = 1

So my bet on something with the file descriptor handling was right. Going through the ssh manpage, what about using ssh’s `-n` option to prevent reading from standard input (stdin)?

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh -n root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper
return code = 1

Bingo! Quoting ssh(1):

-n  Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the b[...]
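The underlying mechanism is easy to reproduce without ssh, chroot or the network at all: any command that reads stdin inside a script fed via a heredoc will slurp up the remaining script lines, including the final exit statement. A minimal sketch, with `cat` standing in for ssh:

```shell
# `cat` plays the role of ssh-without-`-n` here: it reads the rest of
# this heredoc script from stdin, so `exit 1` is consumed as data and
# never executed; the script ends with cat's exit code, 0.
sh <<'EOF'
cat >/dev/null
exit 1
EOF
echo "return code = $?"   # prints: return code = 0
```

Redirecting the inner command's stdin (`cat </dev/null`, analogous to `ssh -n`) lets `exit 1` run and restores the expected "return code = 1".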



Tianon Gravi: My Docker Install Process (redux)

Thu, 18 May 2017 06:00:00 +0000

Since I wrote my first post on this topic, Docker has switched from apt.dockerproject.org to download.docker.com, so this post revisits my original steps, but tailored for the new repo. There will be less commentary this time (straight to the beef). For further commentary on “why” for any step, see my previous post. These steps should be fairly similar to what’s found in upstream’s “Install Docker on Debian” document, but do differ slightly in a few minor ways.

grab Docker’s APT repo GPG key

# "Docker Release (CE deb)"
export GNUPGHOME="$(mktemp -d)"
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 9DC858229FC7DD38854AE2D88D81803C0EBFCD88

# stretch+
gpg --export --armor 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 | sudo tee /etc/apt/trusted.gpg.d/docker.gpg.asc

# jessie
# gpg --export 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 | sudo tee /etc/apt/trusted.gpg.d/docker.gpg > /dev/null

rm -rf "$GNUPGHOME"

Verify:

$ apt-key list
...
/etc/apt/trusted.gpg.d/docker.gpg.asc
-------------------------------------
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb)
sub   rsa4096 2017-02-22 [S]
...

add Docker’s APT source

With the switch to download.docker.com, HTTPS is now mandated:

$ apt-get update && apt-get install apt-transport-https

Set up sources.list:

echo "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable" | sudo tee /etc/apt/sources.list.d/docker.list

Add the edge component for every-month releases and test for release candidates (ie, ... stretch stable edge). Replace stretch with jessie for jessie installs. At this point, you should be safe to run apt-get update to verify the changes:

$ sudo apt-get update
...
Get:5 https://download.docker.com/linux/debian stretch/stable amd64 Packages [1227 B]
...
Reading package lists... Done

(There shouldn’t be any warnings or errors about missing keys, etc.)
configure Docker

This step could be done after Docker’s installed (and indeed, that’s usually when I do it, because I forget that I should until I’ve got Docker installed and realize that my configuration is suboptimal), but doing it before ensures that Docker doesn’t have to be restarted later.

sudo mkdir -p /etc/docker
sudo sensible-editor /etc/docker/daemon.json

(sensible-editor can be replaced by whatever editor you prefer, but that command should choose or prompt for a reasonable default)

I then fill daemon.json with at least a default storage-driver. Whether I use aufs or overlay2 depends on my kernel version and available modules: if I’m on Ubuntu, AUFS is still a no-brainer (since it’s included in the default kernel if the linux-image-extra-XXX/linux-image-extra-virtual package is installed), but on Debian AUFS is only available in either 3.x kernels (jessie’s default non-backports kernel) or recently in the aufs-dkms package (as of this writing, still only available on stretch and sid; no jessie-backports option). If my kernel is 4.x+, I’m likely going to choose overlay2 (or, if that errors out, the older overlay driver). Choosing an appropriate storage driver is a fairly complex topic, and I’d recommend that for serious production deployments, more research on pros and cons is performed than I’m including here (especially since AUFS and OverlayFS are not the only options; they’re [...]
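For reference, a minimal daemon.json along these lines might look like the following (a sketch only; pick the storage driver appropriate for your kernel, as discussed above):

```json
{
	"storage-driver": "overlay2"
}
```

Docker picks this file up from /etc/docker/daemon.json on its next (re)start.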



Daniel Pocock: Hacking the food chain in Switzerland

Wed, 17 May 2017 18:41:52 +0000

A group has recently been formed on Meetup seeking to build a food computer in Zurich. The initial meeting is planned for 6:30pm on 20 June 2017 at ETH (Zurich Centre/Zentrum, Rämistrasse 101).

The question of food security underlies many of the world's problems today. In wealthier nations, we are being called upon to trust a highly opaque supply chain and our choices are limited to those things that major supermarket chains are willing to stock. A huge transport and storage apparatus adds to the cost and CO2 emissions and detracts from the nutritional value of the produce that reaches our plates. In recent times, these problems have been highlighted by the horsemeat scandal, the Guacapocalypse and the British Hummus crisis.

One interesting initiative to create transparency and encourage diversity in our diets is the Open Agriculture (OpenAg) Initiative from MIT, summarised in this TED video from Caleb Harper. The food produced is healthier and fresher than anything you might find in a supermarket and has no exposure to pesticides.

An open source approach to food

An interesting aspect of this project is the promise of an open source approach. The project provides hardware plans, a video of the build process, source code and the promise of sharing climate recipes (scripts) to replicate the climates of different regions, helping ensure it is always the season for your favourite fruit or vegetable.

Do we need it?

Some people have commented on the cost of equipment and electricity. Carsten Agger recently blogged about permaculture as a cleaner alternative. While there are many places where people can take that approach, there are also many overpopulated regions and cities where it is not feasible. Some countries, like Japan, have an enormous population and previously productive farmland contaminated by industry, such as the Fukushima region. Growing our own food also has the potential to reduce food waste, as individual families and communities can grow what they need.
Whether it is essential or not, the food computer project also provides a powerful platform to educate people about food and climate issues and an exciting opportunity to take the free and open source philosophy into many more places in our local communities. The Zurich Meetup group has already received expressions of interest from a diverse group including professionals, researchers, students, hackers, sustainability activists and free software developers.

Next steps

People who want to form a group in their own region can look in the forum topic "Where are you building your Food Computer?" to find out if anybody has already expressed interest.

Which patterns from the free software world can help more people build more food computers? I've already suggested using Debian's live-wrapper to distribute a runnable ISO image that can boot from a USB stick; can you suggest other solutions like this?

Can you think of any free software events where you would like to see a talk or exhibit about this project? Please suggest them on the OpenAg forum.

There are many interesting resources about the food crisis; an interesting starting point is watching the documentary Food, Inc.

If you are in Switzerland, please consider attending the meeting at 6:30pm on 20 June 2017 at ETH (Centre/Zentrum), Zurich.

One final thing to contemplate: if you are not hacking your own food supply, who is? [...]



Reproducible builds folks: Reproducible Builds: week 107 in Stretch cycle

Wed, 17 May 2017 16:08:59 +0000

Here's what happened in the Reproducible Builds effort between Sunday May 7 and Saturday May 13 2017:

Report from the Reproducible Builds Hamburg Hackathon

We were 16 participants from 12 projects: 7 Debian, 2 repeatr.io, 1 ArchLinux, 1 coreboot + LEDE, 1 F-Droid, 1 ElectroBSD + privoxy, 1 GNU R, 1 in-toto.io, 1 Meson and 1 openSUSE. Three people came from the USA, 3 from the UK, 2 from Finland, 1 from Austria, 1 from Denmark and 6 from Germany, plus we had several guests from our gracious hosts at the CCCHH hackerspace as well as a guest from Australia…

We had four presentations:

"Reproducible Builds everywhere" by h01ger
https://in-toto.io by Justin Cappos
http://repeatr.io by Eric Myhre
http://mesonbuild.com by Jussi Pakkanen

Some of the things we worked on:

h01ger did orga stuff for this very hackathon, discussed tests.r-b.o with various non-Debian contributors, filed some bugs and restarted the policy discussion in #844431. He also did some polishing work on tests.r-b.o which shall be covered in the next issue of our weekly blog.

Justin Cappos involved many of us in interesting discussions and started to write an academic paper about Reproducible Builds, of which he shared an early beta on our mailing list.

Chris Lamb (lamby) filed a number of patches for individual packages, worked on diffoscope, merged many changes to strip-nondeterminism and also filed #862073 against dak to upload buildinfo files to external services.

Maria Glukhova (siamezzze) fixed a bug with plots on tests.reproducible-builds.org and worked on diffoscope test coverage.

Lynxis worked on a new squashfs upstream release improving support for reproducible squashfs filesystems, and also had some time to hack on coreboot and show others how to install coreboot on real hardware.

Michael Poehn worked on integrating F-Droid builds into tests.reproducible-builds.org, on the F-Droid verification utility, and also ran some app reproducibility tests.
Bernhard worked on various unreproducible issues upstream and submitted fixes for curl, bzr and ant.

Erin Myhre worked on bootstrapping cleanroom builds of compiler components in Repeatr sandboxes.

Calvin Behling merged improvements to reppl for a cleaner storage format and better error handling, and did design work for the next version of repeatr pipeline execution. Calvin also led the reproducibility testing of restaurant mood lighting.

Eric and Calvin also claim to have had all sorts of useful exchanges about the state of other projects, and learned a lot about where to look for more info about debian bootstrap and archive mirroring from steven and lamby.

Phil Hands came by to say hi and worked on testing d-i on jenkins.debian.net.

Chris West (Faux) worked on extending misc.git:has-only.py, and started looking at Britney.

We had a Debian-focussed meeting where we discussed a number of topics:

IRC meetings: yes, we want to try again to have them, monthly; a poll for a good date is being held.
Debian tests post Stretch: we'll add tests for stable/Stretch.
.buildinfo files, how forward: we need sourceful uploads for any arch:all packages. dak should send .buildinfo files to buildinfo.debian.net.
(pre?) Stretch release press release: we should do that, esp. as our achievements are largely unrelated to Stretch.
Reproducible Builds Summit 3: yes, we want that.
What to do (in notes.git) with resolved issues: keep the issues.
strip-nondeterminism quo vadis: Justin reminded us that strip-no[...]



Jamie McClelland: Late to the Raspberry Pi party

Wed, 17 May 2017 14:46:06 +0000

I finally bought my first Raspberry Pi to set up as a router and wifi access point. It wasn't easy.

I first had to figure out what to buy. I think that was the hardest part. I ended up with:

Raspberry Pi 3 Model B, 1.2GHz 64-bit quad-core ARMv8 CPU, 1GB RAM (model number: RASPBERRYPI3-MODB-1GB)
Transcend USB 3.0 SDHC / SDXC / microSDHC / SDXC Card Reader, TS-RDF5K (Black). I only needed this because I don't have one already and I will need a way to copy a Raspbian image from my laptop to a micro SD card.
Centon Electronics Micro SD Card 16 GB (S1-MSDHC4-16G). This is the micro SD card.
Smraza Clear Case for Raspberry Pi 3/2 Model B with Power Supply, 2pcs Heatsinks and Micro USB with On/Off Switch. This is the box to put it all in.

I already have a Cable Matters USB to ethernet device, which will provide the second ethernet connection so this device can actually work as a router.

I studiously followed the directions to download the Raspbian image and copy it to my micro SD card. I also touched a file on the boot partition called ssh so ssh would start automatically. Note: I first touched the ssh file on the root partition (sdb2) before realizing it belonged on the boot partition (sdb1). And, despite ambiguous directions found on the Internet, lowercase 'ssh' for the filename seems to do the trick.

Then, I found the IP address with the help of nmap (sudo nmap -sn 192.168.69.*) and tried to ssh in, but alas...

Connection reset by 192.168.69.116 port 22

No dice.
So, I re-mounted the sdb2 partition of the micro SD card and looked in var/log/auth.log and found:

May 5 19:23:00 raspberrypi sshd[760]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
May 5 19:23:00 raspberrypi sshd[760]: fatal: No supported key exchange algorithms [preauth]
May 5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May 5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_rsa_key
May 5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May 5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_dsa_key
May 5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May 5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key
May 5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format

How did that happen? And wait a minute...

0 jamie@turkey:~$ ls -l /mnt/etc/ssh/ssh_host_ecdsa_key
-rw------- 1 root root 0 Apr 10 05:58 /mnt/etc/ssh/ssh_host_ecdsa_key
0 jamie@turkey:~$ date
Fri May 5 15:44:15 EDT 2017
0 jamie@turkey:~$

Are the keys embedded in the image? Isn't that wrong? I fixed it with:

0 jamie@turkey:mnt$ sudo rm /mnt/etc/ssh/ssh_host_*
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_rsa_key -N '' -t rsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_dsa_key -N '' -t dsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_ecdsa_key -N '' -t ecdsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_ed25519_key -N '' -t ed25519
0 jamie@turkey:mnt$

NOTE: I just did a second installation and this didn't happen. Maybe something went wrong as I experimented with SSH vs ssh on the boot partition?

Then I could ssh in. I removed the pi user account and added my ssh key to /root/.ssh/authorized_keys and put a new name "mondra[...]
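The per-type ssh-keygen calls above work, but ssh-keygen can also regenerate every missing default host key type in one go with its -A flag. A sketch, using a temporary directory as a stand-in for the mounted image root (point -f at /mnt for the real thing):

```shell
# ssh-keygen -A creates any missing ssh_host_* keys below <prefix>/etc/ssh,
# with empty passphrases and default key types.
root="$(mktemp -d)"        # stand-in for the mounted SD card root (/mnt)
mkdir -p "$root/etc/ssh"
ssh-keygen -A -f "$root"
ls "$root/etc/ssh"         # lists the freshly generated ssh_host_* keys
```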



Michal Čihař: Weblate 2.14

Wed, 17 May 2017 14:00:24 +0000

(image)

Weblate 2.14 has been released today, slightly ahead of the schedule. There are quite a lot of security improvements based on reports we got from the HackerOne program, API extensions and other minor improvements.

Full list of changes:

  • Add glossary entries using AJAX.
  • The logout now uses POST to avoid CSRF.
  • The API key token reset now uses POST to avoid CSRF.
  • Weblate sets Content-Security-Policy by default.
  • The local editor URL is validated to avoid self-XSS.
  • The password is now validated against common flaws by default.
  • Notify users about important activity with their account, such as password changes.
  • The CSV exports now escape potential formulas.
  • Various minor improvements in security.
  • The authentication attempts are now rate limited.
  • Suggestion content is stored in the history.
  • Store important account activity in audit log.
  • Ask for password confirmation when removing account or adding new associations.
  • Show time when suggestion has been made.
  • There is a new quality check for trailing semicolons.
  • Ensure that search links can be shared.
  • Included source string information and screenshots in the API.
  • Allow overwriting translations through API upload.
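The CSV formula escaping mentioned above is worth illustrating. The usual defence, and what the release note describes (I haven't checked Weblate's exact implementation), is to neutralize cells that start with =, +, - or @ so spreadsheets don't evaluate them as formulas; a generic sketch:

```shell
# Prefix any CSV field starting with =, +, - or @ with a single quote
# (\047) so spreadsheet applications treat it as text, not a formula.
# Naive field splitting: assumes no quoted commas in the input.
printf '%s\n' 'user,=2+2,10' |
awk -F, -v OFS=, '{ for (i = 1; i <= NF; i++) if ($i ~ /^[=+@-]/) $i = "\047" $i; print }'
```

With the sample row above this prints user,'=2+2,10, which opens in a spreadsheet as the literal text =2+2 instead of the number 4.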

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who have helped so far! The roadmap for next release is just being prepared, you can influence this by expressing support for individual issues either by comments or by providing bounty for them.

Filed under: Debian English SUSE Weblate




Dirk Eddelbuettel: Upcoming Rcpp Talks

Wed, 17 May 2017 02:37:00 +0000

(image)

I'm very excited about the next few weeks, which will cover a number of R conferences, workshops and classes with talks, mostly around Rcpp, plus one notable exception:

  • May 19: Rcpp: From Simple Examples to Machine learning, pre-conference workshop at our R/Finance 2017 conference here in Chicago

  • May 26: Extending R with C++: Motivation and Examples, invited keynote at R à Québec 2017 at Université Laval in Quebec City, Canada

  • June 28-29: Higher-Performance R Programming with C++ Extensions, two-day course at the Zuerich R Courses @ U Zuerich in Zuerich, Switzerland

  • July 3: Rcpp at 1000+ reverse depends: Some Lessons Learned (working title), at DSC 2017 preceding useR! 2017 in Brussels, Belgium

  • July 4: Extending R with C++: Motivation, Introduction and Examples, tutorial preceding useR! 2017 in Brussels, Belgium

  • July 5, 6, or 7: Hosting Data Packages via drat: A Case Study with Hurricane Exposure Data, accepted presentation, joint with Brooke Anderson

If you are near one of those events, interested and able to register (for the events requiring registration), I would love to chat before or after.




Enrico Zini: Accident on the motorway

Tue, 16 May 2017 21:12:41 +0000

There was an accident on the motorway. Luckily no one was seriously hurt, but a truckful of sugar and a truckful of cereals completely spilled onto the motorway and took some time to clean up.

(image) (image) (image) (image) (image) (image) (image) (image) (image) (image)




Daniel Pocock: Building an antenna and receiving ham and shortwave stations with SDR

Tue, 16 May 2017 18:34:37 +0000

In my previous blog on the topic of software defined radio (SDR), I provided a quickstart guide to using gqrx, GNU Radio and the RTL-SDR dongle to receive FM radio and the amateur 2 meter (VHF) band. Using the same software configuration and the same RTL-SDR dongle, it is possible to add some extra components and receive ham radio and shortwave transmissions from around the world.

Here is the antenna setup from the successful SDR workshop at OSCAL'17 on 13 May. After the workshop on Saturday, members of the OSCAL team successfully reconstructed the SDR and antenna at the Debian info booth on Sunday, and a wide range of shortwave and ham signals were detected. Here is a close-up look at the laptop, RTL-SDR dongle (above laptop), Ham-It-Up converter (above water bottle) and MFJ-971 ATU (on right):

Buying the parts

  • RTL-SDR dongle (~ € 25): Converts radio signals (RF) into digital signals for reception through the USB port. It is essential to buy the dongles for SDR with TCXO; the generic RTL dongles for TV reception are not stable enough for anything other than TV.
  • Enamelled copper wire, 25 meters or more (~ € 10): Loop antenna. Thicker wire provides better reception and is more suitable for transmitting (if you have a license) but it is heavier. The antenna I've demonstrated at recent events uses 1mm thick wire.
  • 4 (or more) ceramic egg insulators (~ € 10): Attach the antenna to string or rope. Smaller insulators are better as they are lighter and less expensive.
  • 4:1 balun (from € 20): The actual ratio of the balun depends on the shape of the loop (square, rectangle or triangle) and the point where you attach the balun (middle, corner, etc). You may want to buy more than one balun, for example, a 4:1 balun and also a 1:1 balun to try alternative configurations. Make sure it is waterproof, has hooks for attaching a string or rope and an SO-239 socket.
  • 5 meter RG-58 coaxial cable with male PL-259 plugs on both ends (~ € 10): If using more than 5 meters, or if you want to use higher frequencies above 30MHz, use thicker, heavier and more expensive cables like RG-213. The cable must be 50 ohm.
  • Antenna Tuning Unit (ATU) (~ € 20 for receive only, or second hand): I've been using the MFJ-971 for portable use and demos because of the weight. There are even lighter and cheaper alternatives if you only need to receive.
  • PL-259 to SMA male pigtail, up to 50cm, RG58 (~ € 5): Joins the ATU to the up-converter. The cable must be RG58 or another 50 ohm cable.
  • Ham It Up v1.3 up-converter (~ € 40): Mixes the HF signal with a signal from a local oscillator to create a new signal in the spectrum covered by the RTL-SDR dongle.
  • SMA (male) to SMA (male) pigtail (~ € 2): Joins the up-converter to the RTL-SDR dongle.
  • USB charger and USB type B cable (~ € 5): Used for power to the up-converter. A spare USB mobile phone charger plug may be suitable.
  • String or rope (€ 5): For mounting the antenna. A lighter and cheaper string is better for portable use, while a stronger and weather-resistant rope is better for a fixed installation.

Building the antenna

There are numerous online calculators for measuring the amount of enamelled copper wire to cut. For example, for a centre frequency of 14.2 MHz on the 20 meter amateur band, the antenna len[...]
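Since the entry is cut off at the calculation, here is the back-of-the-envelope version. A common approximation for a full-wave loop is circumference ≈ 306 / f metres with f in MHz; the constant varies a little between calculators, so treat this as a sketch, not gospel:

```shell
# Approximate wire length for a full-wave loop at 14.2 MHz (20 m band),
# using length_m = 306 / f_MHz. The constant 306 is a rule of thumb.
f_mhz=14.2
awk -v f="$f_mhz" 'BEGIN { printf "%.1f m of wire\n", 306 / f }'
```

This suggests cutting roughly 21.5 m of wire, comfortably within the 25 m on the shopping list above.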



Raphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2017

Tue, 16 May 2017 15:52:19 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, about 190 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Antoine Beaupré did 19.5 hours (out of 16h allocated + 5.5 remaining hours, thus keeping 2 extra hours for May).
  • Ben Hutchings did 12 hours (out of 15h allocated, thus keeping 3 extra hours for May).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 17.5 hours (out of 16 hours allocated + 3.5 hours remaining, thus keeping 2 hours for May).
  • Guido Günther did 12 hours (out of 8 hours allocated + 4 hours remaining).
  • Hugo Lefeuvre did 15.5 hours (out of 6 hours allocated + 9.5 hours remaining).
  • Jonas Meurer did nothing (out of 4 hours allocated + 3.5 hours remaining, thus keeping 7.5 hours for May).
  • Markus Koschany did 23.75 hours.
  • Ola Lundqvist did 14 hours (out of 20h allocated, thus keeping 6 extra hours for May).
  • Raphaël Hertzog did 11.25 hours (out of 10 hours allocated + 1.25 hours remaining).
  • Roberto C. Sanchez did 16.5 hours (out of 20 hours allocated + 1 hour remaining, thus keeping 4.5 extra hours for May).
  • Thorsten Alteholz did 23.75 hours.

Evolution of the situation

The number of sponsored hours decreased slightly and we're now again a little behind our objective. The security tracker currently lists 54 packages with a known CVE and the dla-needed.txt file 37. The number of open issues is comparable to last month.

Thanks to our sponsors

New sponsors are in bold.
Platinum sponsors:

  • TOSHIBA (for 19 months)
  • GitHub (for 10 months)

Gold sponsors:

  • The Positive Internet (for 35 months)
  • Blablacar (for 34 months)
  • Linode (for 24 months)
  • Babiel GmbH (for 13 months)
  • Plat’Home (for 13 months)

Silver sponsors:

  • Domeneshop AS (for 34 months)
  • Université Lille 3 (for 34 months)
  • Trollweb Solutions (for 32 months)
  • Nantes Métropole (for 28 months)
  • Dalenys (for 25 months)
  • Univention GmbH (for 20 months)
  • Université Jean Monnet de St Etienne (for 20 months)
  • Sonus Networks (for 14 months)
  • UR Communications BV (for 9 months)
  • maxcluster GmbH (for 8 months)
  • Exonet B.V. (for 4 months)

Bronze sponsors:

  • David Ayers – IntarS Austria (for 35 months)
  • Evolix (for 35 months)
  • Offensive Security (for 35 months)
  • Seznam.cz, a.s. (for 35 months)
  • Freeside Internet Service (for 34 months)
  • MyTux (for 34 months)
  • Linuxhotel GmbH (for 32 months)
  • Intevation GmbH (for 31 months)
  • Daevel SARL (for 30 months)
  • Bitfolk LTD (for 29 months)
  • Megaspace Internet Services GmbH (for 29 months)
  • Greenbone Networks GmbH (for 28 months)
  • NUMLOG (for 28 months)
  • WinGo AG (for 28 months)
  • Ecole Centrale de Nantes – LHEEA (for 24 months)
  • Sig-I/O (for 21 months)
  • Entr’ouvert (for 19 months)
  • Adfinis SyGroup AG (for 16 months)
  • GNI MEDIA (for 11 months)
  • Laboratoire LEGI – UMR 5519 / CNRS (for 11 months)
  • Quarantainenet BV (for 11 months)
  • RHX Srl (for 8 months)
  • Bearstech
  • LiHAS
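As a quick sanity check on the "about 190 work hours" figure in the individual reports above, the per-contributor allocations can be summed (numbers copied from the list; contributors without an explicit allocation are counted at the hours they did):

```shell
# Sum the April allocations quoted in the report: Antoine 16, Ben 15,
# Brian 10, Chris 18, Emilio 16, Guido 8, Hugo 6, Jonas 4, Markus 23.75,
# Ola 20, Raphaël 10, Roberto 20, Thorsten 23.75.
printf '%s\n' 16 15 10 18 16 8 6 4 23.75 20 10 20 23.75 |
awk '{ s += $1 } END { printf "%.2f hours dispatched\n", s }'
```

The sum comes to 190.50 hours, which matches the "about 190" in the report.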



Francois Marier: Recovering from an unbootable Ubuntu encrypted LVM root partition

Tue, 16 May 2017 04:10:00 +0000

A laptop that was installed using the default Ubuntu 16.10 (xenial) full-disk encryption option stopped booting after receiving a kernel update somewhere on the way to Ubuntu 17.04 (zesty).

After showing the boot screen for about 30 seconds, a busybox shell pops up:

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
Enter 'help' for list of built-in commands.
(initramfs)

Typing exit will display more information about the failure before bringing us back to the same busybox shell:

Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
  - Check rootdelay= (did the system wait long enough?)
  - Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/ubuntu--vg-root does not exist. Dropping to a shell!

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
Enter 'help' for list of built-in commands.
(initramfs)

which now complains that the /dev/mapper/ubuntu--vg-root root partition (which uses LUKS and LVM) cannot be found. There is some comprehensive advice out there, but it didn't quite work for me. This is how I ended up resolving the problem.

Boot using a USB installation disk

First, create a bootable USB disk using the latest Ubuntu installer. Download a desktop image, then copy the ISO directly onto the USB stick (overwriting it in the process):

dd if=ubuntu.iso of=/dev/sdc1

and boot the system using that USB stick (hold the option key during boot on Apple hardware).
Mount the encrypted partition

Assuming a drive which is partitioned this way:

/dev/sda1: EFI partition
/dev/sda2: unencrypted boot partition
/dev/sda3: encrypted LVM partition

Open a terminal and mount the required partitions:

cryptsetup luksOpen /dev/sda3 sda3_crypt
vgchange -ay
mount /dev/mapper/ubuntu--vg-root /mnt
mount /dev/sda2 /mnt/boot
mount -t proc proc /mnt/proc
mount -o bind /dev /mnt/dev

Note: When running cryptsetup luksOpen, you must use the same name as the one that is in /etc/crypttab on the root partition (sda3_crypt in this example).

All of these partitions must be present (including /proc and /dev) for the initramfs scripts to do all of their work. If you see errors or warnings, you must resolve them.

Regenerate the initramfs on the boot partition

Then "enter" the root partition using:

chroot /mnt

and make sure that the lvm2 package is installed:

apt install lvm2

before regenerating the initramfs for all of the installed kernels:

update-initramfs -c -k all
[...]



Gunnar Wolf: Starting a project on private and anonymous network usage

Mon, 15 May 2017 16:43:20 +0000

(image)

I am starting a work with the students of LIDSOL (Laboratorio de Investigación y Desarrollo de Software Libre, Free Software Research and Development Laboratory) of the Engineering Faculty of UNAM:
(image)
We want to dig into the technical and social implications of mechanisms that provide for anonymous, private usage of the network. We will have our first formal work session this Wednesday, for which we have invited several interesting people to join the discussion and help provide a path for our oncoming work. Our invited and confirmed guests are, in alphabetical order:

  • Salvador Alcántar (Wikimedia México)
  • Sandino Araico (1101)
  • Gina Gallegos (ESIME Culhuacán)
  • Juliana Guerra (Derechos Digitales)
  • Jacobo Nájera (Enjambre Digital)
  • Raúl Ornelas (Instituto de Investigaciones Económicas)

  • As well as LIDSOL's own teachers and students.
    This first session is mostly exploratory; we should keep notes and decide which directions to pursue to begin with. Do note that by "research" we are starting from the undergraduate student level — not that we want to start by changing the world. But we do want to empower the students who have joined our laboratory to change themselves and change the world. Of course, helping such goals via the knowledge and involvement of projects (not just the tools!) such as Tor.




Michal Čihař: New projects on Hosted Weblate

Mon, 15 May 2017 16:00:25 +0000

(image)

Hosted Weblate also provides free hosting for free software projects. The hosting requests queue was over one month long, so it was time to process it and include the new projects.

This time, the newly hosted projects include:

We now also host a few new Minetest mods:

If you want to support this effort, please donate to Weblate; recurring donations are especially welcome to keep this service alive. You can make them on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate




intrigeri: GNOME and Debian usability testing, May 2017

Mon, 15 May 2017 12:55:10 +0000

During the Contribute your skills to Debian event that took place in Paris last week-end, we conducted a usability testing session. Six people were tasked with testing a few aspects of the GNOME 3.22 desktop environment and of the Debian 9 (Stretch) operating system. A number of other people observed them and took notes. Then, two observers and three testers analyzed the results, which we are hereby presenting: we created a heat map visualization, summed up the challenges met during the tests, and wrote this blog post together. We will point the relevant upstream projects to our results.

A couple of other people also did some usability testing but went into much more depth: their feedback is much more detailed and comes with a number of improvement ideas. I will process and publish their results as soon as possible.

Missions

Testers were provided a laptop running GNOME on a Debian 9 (Stretch) Live system. A quick introduction (mostly copied from the one we found in some GNOME usability testing reports) was read. Then they were asked to complete the following tasks.

A. Nautilus

Mission A.1 — Download and rename file in Nautilus

Download a file from the web, a PDF document for example. Open the folder in which the file has been downloaded. Rename the downloaded file to SUCCESS.pdf. Toggle the browser window to full screen. Open the file SUCCESS.pdf. Go back to the File manager. Close the file SUCCESS.pdf.

Mission A.2 — Manipulate folders in Nautilus

Create a new folder named cats in your user directory. Create a new folder named to do in your user directory. Move the cats folder to the to do folder. Delete the cats folder.

Mission A.3 — Create a bookmark in Nautilus

Create a folder named unicorns in your personal directory. This folder is important. Add a bookmark for unicorns in order to find it again in a few weeks.

Mission A.4 — Nautilus display settings

Folders and files are usually listed as icons, but they can also be displayed differently.
Configure the File manager to make it show items as a list, with one file per line. You forgot your glasses and the font size is too small for you to see the text: increase the size of the text.

B. Package management

Introduction

On Debian, each application is available as a "package" which contains every file needed for the software to work. Unlike in other operating systems, it is rarely necessary, and almost never a good idea, to download and install software from the author's website. We can rather install it from an online library managed by Debian (like an appstore). This alternative offers several advantages, such as being able to update all the installed software in one single action. Specific tools are available to install and update Debian packages.

Mission B.1 — Install and remove packages

Install the vlc package. Start VLC. Remove the vlc package.

Mission B.2 — Search and install a package

Find a piece of software which can download files with BitTorrent in a graphical interface. Install the corresponding package. Launch that BitTorrent software.

Mission B.3 — Upgrade the system

Make sure the whole system[...]



Steve Kemp: Some minor updates ..

Sun, 14 May 2017 21:00:00 +0000

(image)

The past few weeks have been randomly busy; nothing huge has happened, but there have been several minor diversions.

Coding

I made a new release of my console-based mail-client, with integrated Lua scripting, this is available for download over at https://lumail.org/

I've also given a talk (!!) on using a literate/markdown configuration for GNU Emacs. In brief I created two files:

~/.emacs/init.md

This contains both my configuration of GNU Emacs as well as documentation for the same. Neat.

~/.emacs/init.el

This parses the previous file, specifically looking for "code blocks", which are then extracted and evaluated.

This system is easy to maintain, and I'm quite happy with it :)
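I haven't seen Steve's init.el, so this isn't his code, but the extract-the-code-blocks idea is easy to sketch outside Emacs too; assuming triple-backtick fences in the markdown file, a couple of lines of awk pull out just the code (the function name mdcode and the fence convention are my assumptions):

```shell
# Print only the lines inside ``` fenced code blocks of a markdown file
# (a flag toggles on each fence line). Usage: mdcode init.md > init.el
mdcode() { awk '/^```/ { inb = !inb; next } inb' "$1"; }
```

Steve's real setup does the equivalent in elisp and evaluates the extracted blocks directly; the shell version is handy for checking what would be evaluated.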

Fuzzing

Somebody nice took the time to report a couple of bugs against my simple bytecode-interpreting virtual-machine project - all found via fuzzing.

I've done some fun fuzzing of my own in the past, so this was nice to see. I've now resolved those bugs and updated the README.md file to include instructions on fuzzing it. (Which I started doing myself, after receiving the first of the reports.)

Finally I have more personal news too: I had a pair of CT-scans carried out recently, and apparently here in sunny Finland (that's me being ironic, it was snowing in the first week of May) when you undergo a CT-scan you can pay to obtain your data on CD-ROM.

I'm 100% definitely going to get a copy of my brain-scan data. I'll be able to view a 3d-rendered model of my own brain on my desktop. (Once upon a time I worked for a company that produced software, sold to doctors/surgeons, for creating 3d-rendered volumes from individual slices. I confirmed with the radiologist that handled my tests that they do indeed use the standard DICOM format. Small world.)




Bits from Debian: New Debian Developers and Maintainers (March and April 2017)

Sun, 14 May 2017 12:30:27 +0000

The following contributors got their Debian Developer accounts in the last two months:

  • Guilhem Moulin (guilhem)
  • Lisa Baron (jeffity)
  • Punit Agrawal (punit)

The following contributors were added as Debian Maintainers in the last two months:

  • Sebastien Jodogne
  • Félix Lechner
  • Uli Scholler
  • Aurélien Couderc
  • Ondřej Kobližek
  • Patricio Paez

Congratulations!




Russ Allbery: Review: The Raven and the Reindeer

Sun, 14 May 2017 00:08:00 +0000

Review: The Raven and the Reindeer, by T. Kingfisher Publisher: Red Wombat Tea Company Copyright: 2016 ASIN: B01BKTT73A Format: Kindle Pages: 191 Once upon a time, there was a boy born with frost in his eyes and frost in his heart. There are a hundred stories about why this happens. Some of them are close to true. Most of them are merely there to absolve the rest of us of blame. It happens. Sometimes it's no one's fault. Kay is the boy with frost in his heart. Gerta grew up next door. They were inseparable as children, playing together on cold winter days. Gerta was in love with Kay for as long as she could remember. Kay, on the other hand, was, well, kind of a jerk. There are not many stories about this sort of thing. There ought to be more. Perhaps if there were, the Gertas of the world would learn to recognize it. Perhaps not. It is hard to see a story when you are standing in the middle of it. Then, one night, Kay is kidnapped in the middle of the night by the Snow Queen while Gerta watches, helpless. She's convinced that she's dreaming, but when she wakes up, Kay is indeed gone, and eventually the villagers stop the search. But Gerta has defined herself around Kay her whole life, so she sets off, determined to find him, totally unprepared for the journey but filled with enough stubborn, practical persistence to overcome a surprising number of obstacles. Depending on your past reading experience (and cultural consumption in general), there are two things that may be immediately obvious from this beginning. First, it's written by Ursula Vernon, under her T. Kingfisher pseudonym that she uses for more adult fiction. No one else has quite that same turn of phrase, or writes protagonists with quite the same sort of overwhelmed but stubborn determination. Second, it's a retelling of Hans Christian Andersen's "The Snow Queen." I knew the first, obviously. 
I was completely oblivious to the second, having never read "The Snow Queen," or anything else by Andersen for that matter. I haven't even seen Frozen. I therefore can't comment in too much detail on the parallels and divergences between Kingfisher's telling and Andersen's (although you can read the original to compare if you want) other than some research on Wikipedia. As you might be able to tell from the quote above, though, Kingfisher is rather less impressed by the idea of childhood true love than Andersen was. This is not the sort of story in which the protagonist rescues the captive boy through the power of pure love. It's something quite a bit more complicated and interesting: a coming-of-age story for Gerta, in which her innocence is much less valuable than her fundamental decency, empathy, and courage, and in which her motives for her journey change as the journey proceeds. It helps that Kingfisher's world is populated by less idealized characters, many of whom are neither wholly bad nor wholly good, but wh[...]



Vincent Fourmond: Run QSoas completely non-interactively

Sat, 13 May 2017 22:45:51 +0000

QSoas can run scripts, and, since version 2.0, it can be run completely without user interaction from the command-line (though an interface may be briefly displayed). This possibility relies on the following command-line options:

  • --run, which runs the command given on the command-line;
  • --exit-after-running, which closes automatically QSoas after all the commands specified by --run were run;
  • --stdout (since version 2.1), which redirects QSoas's terminal directly to the shell output.
If you create a script.cmds file containing the following commands:
generate-buffer -10 10 sin(x)
save sin.dat
and run the following command from your favorite command-line interpreter:
~ QSoas --stdout --run '@ script.cmds' --exit-after-running
This will create a sin.dat file containing a sinusoid. However, if you run it twice, an Overwrite file 'sin.dat'? dialog box will pop up. You can prevent that by adding the /overwrite=true option to save. As a general rule, you should avoid all commands that may ask questions in scripts; a /overwrite=true option is also available for save-buffers, for instance.
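Putting the two together, a fully non-interactive version of the example script would look like this (the placement of the option after the command name is my assumption based on the description above):

generate-buffer -10 10 sin(x)
save /overwrite=true sin.dat

With this, repeated runs of the same command line silently overwrite sin.dat instead of popping up a dialog.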

I use this possibility extensively, because I don't like to store processed files: I prefer to store the original data files and run a script to generate the processed data when I want to plot or further process them. It can also be used to generate fitted data from saved parameter files. I use this to run automatic tests on Linux, Windows and Mac for every single build, in order to quickly spot platform-specific regressions.

To help you make use of this possibility, here is a shell function (Linux/Mac users only, add to your $HOME/.bashrc file or equivalent, and restart a terminal) to run directly on QSoas command files:

qs-run () {
        QSoas --stdout --run "@ $1" --exit-after-running
}
To run the script.cmds script above, just run
~ qs-run script.cmds

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capabilities. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. The current version is 2.1.



Ricardo Mones: Disabling "flat-volumes" in pulseaudio

Sat, 13 May 2017 15:12:48 +0000

(image) Today I've just faced another of those happy ideas some people implement in software, which can be useful in some cases, but can also be a bad default behaviour.

The problems caused were already posted to the Debian mailing lists, fortunately, as well as the solution, which in a default Debian configuration basically means:

$ echo "flat-volumes = no" | sudo tee -a /etc/pulse/daemon.conf
$ pulseaudio -k && pulseaudio
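If you'd rather not touch the system-wide file, the same setting can go in a per-user configuration; this is a sketch assuming the standard ~/.config/pulse lookup path of recent PulseAudio versions:

```shell
# Disable flat volumes for the current user only (no root needed);
# PulseAudio must still be restarted afterwards with: pulseaudio -k
mkdir -p "$HOME/.config/pulse"
echo "flat-volumes = no" >> "$HOME/.config/pulse/daemon.conf"
grep flat-volumes "$HOME/.config/pulse/daemon.conf"
```

The per-user file overrides /etc/pulse/daemon.conf, so this also survives system upgrades that replace the packaged configuration.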

And I think the default for Stretch should be set as above: raising the volume to 100% just because of a system notification, while useful for some, is not what common users expect.

Note: edited to fix first command as explained in comments. Thanks!



Steve McIntyre: Fonts and presentations

Fri, 12 May 2017 22:08:00 +0000

(image)

When you're giving a presentation, the choice of font can matter a lot. Not just in terms of how pretty your slides look, but also in terms of whether the data you're presenting is actually properly legible. Unfortunately, far too many fonts are appallingly bad if you're trying to tell certain characters apart. Imagine if you're at the back of a room, trying to read information on a slide that's (typically) too small and (if you're unlucky) the presenter's speech is also unclear to you (noisy room, bad audio, different language). A good clear font is really important here.

To illustrate the problem, I've picked a few fonts available in Google Slides. I've written the characters "1lIoO0" (that's one, lower case L, upper case I, lower case o, upper case O, zero) in each of those fonts. Some of the sans-serif fonts in particular are comically bad for trying to distinguish between these characters.

(image)

It may not matter in all cases whether your audience can read all the characters on your slides and tell them apart, but if you're trying to present scientific or numeric results it's critical. Please consider that before looking for a pretty font.




Daniel Pocock: Thank you to the OSCAL team

Fri, 12 May 2017 13:26:52 +0000

(image)

The welcome gift deserves its own blog post. If you want to know what is inside, I hope to see you at OSCAL'17.

(image)




Martín Ferrari: 6 days to SunCamp

Fri, 12 May 2017 10:21:54 +0000

(image)

Only six more days to go before SunCamp! If you are still considering it, hurry up, you might still find cheap tickets for the low season.

It will be a small event (about 20-25 people), with a more intimate atmosphere than DebConf. There will be people fixing RC bugs, preparing stuff for after the release, or just discussing with other Debian folks.

There will be at least one presentation from a local project, and surely some members of nearby communities will join us for the day like they did last year.

See you all in Lloret!

Comment




Daniel Pocock: Kamailio World and FSFE team visit, Tirana arrival

Fri, 12 May 2017 09:48:10 +0000

This week I've been thrilled to be in Berlin for Kamailio World 2017, one of the highlights of the SIP, VoIP and telephony enthusiast's calendar. It is an event that reaches far beyond Kamailio and is well attended by leaders of many of the well known free software projects in this space.

HOMER 6 is coming

Alexandr Dubovikov gave me a sneak peek of the new version of the HOMER SIP capture framework for gathering, storing and analyzing messages in a SIP network.

Visiting the FSFE team in Berlin

Having recently joined the FSFE's General Assembly as the fellowship representative, I've been keen to get to know more about the organization. My visit to the FSFE office involved a wide-ranging discussion with Erik Albers about the fellowship program and FSFE in general.

Steak and SDR night

After a hard day of SIP hacking and a long afternoon at Kamailio World's open bar, a developer needs a decent meal and something previously unseen to hack on. A group of us settled at Escados, Alexanderplatz, where my SDR kit emerged from my bag and other Debian users found out how easy it is to apt install the packages, attach the dongle and explore the radio spectrum.

Next stop OSCAL'17, Tirana

Having left Berlin, I'm now in Tirana, Albania, where I'll give an SDR workshop and Free-RTC talk at OSCAL'17. The weather forecast is between 26 - 28 degrees celsius, the food is great and the weekend's schedule is full of interesting talks and workshops. The organizing team have already made me feel very welcome here, meeting me at the airport and leaving a very generous basket of gifts in my hotel room. OSCAL has emerged as a significant annual event in the free software world and if it's too late for you to come this year, don't miss it in 2018. [...]



Norbert Preining: Gaisi Takeuti, 1926-2017

Fri, 12 May 2017 00:59:51 +0000

Two days ago, one of the most influential logicians of the 20th century passed away: Gaisi Takeuti (竹内 外史). I had the pleasure to meet this excellent man, teacher, writer and thinker several times while he was the president of the Kurt Gödel Society. I don't want to recall his achievements in mathematical logic, in particular proof theory, because I am not worthy to write about such a genius. I want to recall a few personal stories from my own experience. I came into contact with Prof. Takeuti via his famous book Proof Theory, which my then professor, now colleague and friend, Matthias Baaz used for teaching us students proof theory. Together with Shoenfield's Mathematical Logic, these two books became the foundation of my whole logic education. Now again in print, back then the "Proof Theory" was a rare treasure. Few copies remained in the library, and over the years they disappeared one by one, until the last copy we had access to was my copy, where I had scribbled pages and pages of notes and proofs. Matthias later on used these copies for his lectures; I should have written on the back-side! I remember well my first meeting with Prof. Takeuti: I was at the Conference on Internationalization in 2003 in Tsukuba, long before I moved to Japan. Back then I was just finishing my PhD and without much experience. When I arrived at the hotel, without fail there was a message from Prof. Takeuti inviting me for dinner the following day. We had dinner in a specialty restaurant of his area, together with his lovely wife. I was so nervous about Japanese manners and stuttered Japanese phrases – just to be stopped by Prof. Takeuti pouring himself a glass of sake and telling me: relax, forget the rules and fill your own glass when you want to. I am well aware that this liberal attitude didn't extend to Japanese colleagues, with whom he, a descendant of a Samurai family, was at times extremely strict.
The dinner had already been decided upon, which was not easy since I was still a strict vegetarian back then (now I would have enjoyed the dinner much more!), but for the last course we could choose. I remember with a smile how Prof. Takeuti suggested various sweets in Japanese, just to be interrupted by his wife with "No Gaisi, no!". I asked what was going on and she explained that he wanted to order a Japanese sweet for me – I agreed, and that was probably the worst dish I have had in Japan. Slippery noodles swimming in a cold broth, to be picked up with chopsticks and dipped into a semi-sweet soy sauce. I finished it, but it wasn't good. I should have thought twice when Prof. Takeuti's wife ordered a normal fruit salad. Scientifically he was simply a genius – and famous for not reading a lot but reinventing everything. One of my research areas, Gödel logics, was reinvented by him as "Intuitionistic Fuzzy Logic" (for an overview see my talk at the C[...]



Arturo Borrero González: Debunk some Debian myths

Thu, 11 May 2017 16:21:00 +0000

Debian has a long history, about 25 years already. Over such a long journey of continuously developing our Universal Operating System, some myths, false accusations and bad reputation have arisen. Today I had the opportunity to discuss this topic: I was invited to give a Debian talk at the "11º Concurso Universitario de Software Libre", a Spanish contest for students to develop and dig a bit into free-libre open source software (and hardware). In this talk, I walked through some of the most common Debian myths, and I would like to summarize some of them here, with a short explanation of why I think they should be debunked.

myth #1: Debian is old software

Please use testing or stable plus backports. If you use Debian stable your system will in fact be stable, and that means: updates contain no new software, only fixes.

myth #2: Debian is slow

We compile and build most of our packages with industry-standard compilers and options. I don't see a significant difference in how fast the Linux kernel or MySQL runs on CentOS versus Debian.

myth #3: Debian is difficult

I already discussed this issue back in Jan 2017: Debian is a puzzle: difficult.

myth #4: Debian has no graphical environment

This is, simply put, false. We have GNOME, KDE, Xfce and more. The basic Debian installer asks you what you want at install time.

myth #5: since Debian isn't commercial, the quality is poor

Did you know that most of our package developers are experts in their packages and in their upstream code? Not all, but most of them. Besides, many package developers get paid to do their Debian job. Also, there are external companies which do indeed offer support for Debian (see Freexian, for example).

myth #6: I don't trust Debian

Why? Did we do something to deserve this? If so, please let us know. You don't trust how we build or configure our packages? You don't trust how we work? Anyway, I'm sorry, you have to trust someone if you want to use any kind of computer. Supervising every single bit of your computer isn't practical. Please trust us, we do our best.

myth #7: nobody uses Debian

I don't agree. Many people use Debian. They even run Debian on the International Space Station. Do you count derivatives, such as Ubuntu? I believe this myth is just pointless, but some people out there really think nobody uses Debian.

myth #8: Debian uses systemd

Well, this is true. But you can run sysvinit if you want. I prefer and recommend systemd, though :-)

myth #9: Debian is only for servers

No. See myths #1, #2 and #4.

You may download my slides in PDF and in ODP format (only in Spanish, sorry, English readers). [...]



Jonathan Dowland: Residential IPv6 stability

Thu, 11 May 2017 16:16:39 +0000

(image)

I run some Internet services on my home Internet connection, mostly for myself but also for friends and family. The IPv4 address assigned to my home by my ISP (currently BT Internet) is dynamic and changes from time to time. To get around this, I make use of a "dynamic DNS" service: essentially, a web service that updates a hostname whenever my IP changes.

Since sometime last year I have also had an IPv6 address for my home connection: In fact, lots of them. There are more IPv6 addresses assigned to my home than there are IPv4 addresses on the entire Internet: 4,722,366,482,869,645,213,696 compared to 4,294,967,296 IPv4 addresses for the entire world (of which 3,706,452,992 are usable).

I am relatively new to IPv6 (despite having played with it on and off since around the year 2000). I was curious to find out how stable the IPv6 addresses are, compared to the IPv4 one. It turns out that it's very stable: I've had four IPv4 addresses since February this year, but my IPv6 allocation has not changed.
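The arithmetic behind those figures is easy to check. The 4,722,366,482,869,645,213,696 figure equals 2^72 addresses, which matches a /56 prefix delegation; that the ISP hands out a /56 is my inference from the number, not something stated in the post.

```shell
# Verify the address counts quoted above (python3 is used only for
# arbitrary-precision arithmetic; shell integers overflow at 2^63).
V4_TOTAL=$(python3 -c 'print(2 ** 32)')          # all IPv4 addresses
V6_HOME=$(python3 -c 'print(2 ** (128 - 56))')   # addresses in one /56
RATIO=$(python3 -c 'print((2 ** 72) // (2 ** 32))')

echo "IPv4, entire world: $V4_TOTAL"
echo "IPv6, one home /56: $V6_HOME"
echo "the home /56 is $RATIO times larger"
```

The ratio works out to 2^40, so a single (assumed) /56 delegation holds over a trillion times as many addresses as the whole IPv4 Internet.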




Norbert Preining: BachoTeX 2017

Thu, 11 May 2017 13:37:48 +0000

A week of typesetting, typography, bookbinding, bibliophily, not to forget long chats with good friends and loads of beer. That is BachoTeX, the best conference series I have ever attended. This year BachoTeX was held for the 25th time, and it was merged with the TUG meeting for a firework of excellent presentations and long hours of brainstorming, hacking, music making, dancing, and simply enjoying life! And while it was a bit less relaxing for me than in past years, mostly due to the presence of my little daughter, who demanded attention quite often, it is still the place to be during the Golden Week! Of course I also gave a talk at BachoTeX about our latest changes in the upcoming TeX Live 2017 release: fmtutil and updmap – past & future changes (or: cleaning up the mess). Big thanks to my company, Accelia Inc., for allowing me to attend the conference. We arrived after a long trip: first via train and plane to Vienna, then a two-day break (including research with a colleague), followed by a night train ride to Warsaw, another train ride to Torun and a taxi ride to Bachotek. All in all far too long to be done with a 14-month-old girl. Having finally arrived, we went directly to our hut and found it freezing. Fortunately we could organize a heater, so for the rest of the week we didn't have to live in 5-10 degrees. The second day already brought the traditional bonfire. After a small (by Polish standards) dinner we ignored the rain and met at the fireplace for BBQ, beer, and lots of live music. The rain stopped during the bonfire, probably due to my horrible singing, and on the following days we were blessed with sunshine and warmer temperatures. The forest sparkled in all kinds of greens. For our daughter the trip was a great experience – lots of wild playgrounds, many other kids, and a lake she really wanted to go swimming in. Normally I go swimming there, but this year I had a bad cold so I refrained, and with me also our daughter, to her great disappointment. 
Another day has passed, and the sunset lights up the beautiful lake Bachotek. I cannot imagine a better place for concentrated work paired with great relaxation! During the days the temperatures were really nice, but the mornings were cold, and our morning walk to the breakfast place was quite chilly. Ample coffee and lunch breaks left us enough time to discuss new developments. But the single most effective thing in getting people to talk a lot was the horrible internet connection, a big plus of BachoTeX (though as far as rumors go, it might have been the last time with that advantage!). On the last evening we had a banquet honoring 25 years of BachoTeX, one of the oldest TeX conferences. Live music [...]



Jonathan Dowland: Hof3.java

Thu, 11 May 2017 08:45:04 +0000




Jonathan Dowland: Hof2.java

Thu, 11 May 2017 08:45:04 +0000




Jonathan Dowland: Hof.java

Thu, 11 May 2017 08:45:04 +0000




Daniel Lange: Thunderbird startup hang (hint: Add-Ons)

Thu, 11 May 2017 08:00:00 +0000

(image)

If you see Thunderbird hanging during startup for a minute and then continuing to load fine, you are probably running into an issue similar to what I saw when Debian migrated Icedove back to the "official" Mozilla Thunderbird branding and changed ~/.icedove to ~/.thunderbird in the process (one symlinked to the other).

Looking at the console log (=start Thunderbird from a terminal so you see its messages), I got:

console.log: foxclocks.bootstrap._loadIntoWindow(): got xul-overlay-merged - waiting for overlay-loaded
[.. one minute delay ..]
console.log: foxclocks.bootstrap._windowListener(): got window load chrome://global/content/commonDialog.xul

Stracing confirms that it hangs because Thunderbird loops waiting on a futex until that apparently gets kicked by a XUL core timeout.
(Thanks for the defensive programming, folks!)
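The diagnosis steps above can be sketched as a couple of commands. This is a hedged sketch, assuming Thunderbird and strace are installed; the log path is just an example, and you would interrupt strace with Ctrl-C once you've seen the futex loop.

```shell
# Guard: skip the demo when there is no display or no Thunderbird.
if [ -z "${DISPLAY:-}" ] || ! command -v thunderbird >/dev/null 2>&1; then
  echo "no display or thunderbird not installed; nothing to trace"
else
  # 1. Start Thunderbird from a terminal so its console.log messages
  #    are visible, keeping a copy of them:
  thunderbird 2>&1 | tee /tmp/thunderbird-console.log &

  # 2. Give it a moment, then watch what the oldest Thunderbird
  #    process blocks on; a startup hang shows repeated futex waits:
  sleep 5
  strace -f -e trace=futex -p "$(pgrep -o thunderbird)"
fi
```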

So in my case uninstalling the Add-On Foxclocks easily solved the problem.

I assume other Thunderbird Add-Ons may cause the same issue, hence the more generic description above.




Junichi Uekawa: I tried learning rust but making very little progress.

Wed, 10 May 2017 10:58:35 +0000

(image) I tried learning Rust but am making very little progress.




Clint Adams: Four years

Tue, 09 May 2017 20:45:03 +0000

(image)



Matthew Garrett: Intel AMT on wireless networks

Tue, 09 May 2017 20:18:21 +0000

More details about Intel's AMT vulnerability have been released - it's about the worst case scenario, in that it's a total authentication bypass that appears to exist independent of whether AMT is being used in Small Business or Enterprise modes (more background in my previous post here). One thing I claimed was that even though this was pretty bad it probably wasn't super bad, since Shodan indicated that there were only a few thousand machines on the public internet accessible via AMT. Most deployments were probably behind corporate firewalls, which meant that it was plausibly a vector for spreading within a company but probably wasn't a likely initial vector.

I've since done some more playing and come to the conclusion that it's rather worse than that. AMT actually supports being accessed over wireless networks. Enabling this is a separate option - if you simply provision AMT it won't be accessible over wireless by default; you need to perform additional configuration (although this is as simple as logging into the web UI and turning on the option). Once enabled, there are two cases:

1. The system is not running an operating system, or the operating system has not taken control of the wireless hardware. In this case AMT will attempt to join any network that it's been explicitly told about. Note that in the default configuration, joining a wireless network from the OS is not sufficient for AMT to know about it - there needs to be explicit synchronisation of the network credentials to AMT. Intel provide a wireless manager that does this, but the stock behaviour in Windows (even after you've installed the AMT support drivers) is not to do this.

2. The system is running an operating system that has taken control of the wireless hardware. In this state, AMT is no longer able to drive the wireless hardware directly and counts on OS support to pass packets on. Under Linux, Intel's wireless drivers do not appear to implement this feature. Under Windows, they do. This does not require any application-level support, and uninstalling LMS will not disable this functionality. This also appears to happen at the driver level, which means it bypasses the Windows firewall.

Case 2 is the scary one. If you have a laptop that supports AMT, and if AMT has been provisioned, and if AMT has had wireless support turned on, and if you're running Windows, then connecting your laptop to a public wireless network means that AMT is accessible to anyone else on that network[1]. If it hasn't received a firmware update, they'll be able to do so without needing any valid credentials. If you're a co[...]



Benjamin Mako Hill: Surviving an “Eternal September:” How an Online Community Managed a Surge of Newcomers

Tue, 09 May 2017 16:33:19 +0000

Attracting newcomers is among the most widely studied problems in online community research. However, with all the attention paid to the challenge of getting new users, much less research has studied the flip side of that coin: large influxes of newcomers can pose major problems as well! The most widely known example of problems caused by an influx of newcomers into an online community occurred on Usenet. Every September, new university students connecting to the Internet for the first time would wreak havoc in the Usenet discussion forums. When AOL connected its users to Usenet in 1994, it disrupted the community for so long that it became widely known as "The September that never ended". Our study considered a similar influx in NoSleep—an online community within Reddit where writers share original horror stories and readers comment and vote on them. With strict rules requiring that all members of the community suspend disbelief, NoSleep thrives on the fact that readers experience an immersive storytelling environment. Breaking the rules is as easy as questioning the truth of someone's story. Socializing newcomers represents a major challenge for NoSleep.

[Figure: number of subscribers and moderators on /r/NoSleep over time.]

On May 7th, 2014, NoSleep became a "default subreddit"—i.e., every new user to Reddit automatically joined NoSleep. After gradually accumulating roughly 240,000 members from 2010 to 2014, the NoSleep community grew to over 2 million subscribers within a year. That said, NoSleep appeared to largely hold things together. This reflects the major question that motivated our study: how did NoSleep withstand such a massive influx of newcomers without enduring its own Eternal September? To answer this question, we interviewed a number of NoSleep participants, writers, moderators, and admins.

After transcribing, coding, and analyzing the results, we proposed that NoSleep survived because of three interconnected systems that helped protect the community's norms and overall immersive environment. First, there was a strong and organized team of moderators who enforced the rules no matter what. They recruited new moderators knowing the community's population was going to surge. They utilized a private subreddit for NoSleep's staff. They were able to socialize and educate new moderators effectively. Although issuing sanctions against community members was often difficult, our interviewees explained that NoSleep's moderators were deeply committed and largely uncompromising. That commitment resonates within the second syste[...]



Olivier Berger: Installing a Docker Swarm cluster inside VirtualBox with Docker Machine

Tue, 09 May 2017 12:02:01 +0000

(image)

I’ve documented the process of installing a Docker Swarm cluster inside VirtualBox with Docker Machine. This allows experimenting with Docker Swarm, the simple docker container orchestrator, over VirtualBox.

This allows you to play with orchestration scenarios without having to install Docker on real machines.
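A minimal sketch of the kind of setup the linked write-up covers might look like the following. The machine names are my examples, and the full document covers many more details; docker-machine and VirtualBox must be installed for this to do anything.

```shell
# Sketch only: create VirtualBox VMs with docker-machine and form a swarm.
if ! command -v docker-machine >/dev/null 2>&1; then
  echo "docker-machine not installed; skipping"
else
  # One manager and two workers, each a VirtualBox VM running Docker:
  docker-machine create --driver virtualbox manager1
  docker-machine create --driver virtualbox worker1
  docker-machine create --driver virtualbox worker2

  # Initialise the swarm on the manager...
  MANAGER_IP=$(docker-machine ip manager1)
  docker-machine ssh manager1 docker swarm init --advertise-addr "$MANAGER_IP"

  # ...then join the workers using the token the manager hands out:
  TOKEN=$(docker-machine ssh manager1 docker swarm join-token -q worker)
  for w in worker1 worker2; do
    docker-machine ssh "$w" docker swarm join --token "$TOKEN" "$MANAGER_IP:2377"
  done
fi
```

From there, `docker service create` commands run against the manager let you experiment with orchestration without touching a real host.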

Also, such an environment may be handy for teaching if you don't want to install Docker on the lab's hosts. Installing the Docker engine on Linux hosts for unprivileged users requires some care (refer to the docs about securing Docker), as the default configuration may allow learners to easily gain root privileges (which may or may not be desired).

See more at http://www-public.telecom-sudparis.eu/~berger_o/docker/install-docker-machine-virtualbox.html




Reproducible builds folks: Reproducible Builds: week 106 in Stretch cycle

Tue, 09 May 2017 08:53:45 +0000

Here's what happened in the Reproducible Builds effort between Sunday April 30 and Saturday May 6 2017:

Past and upcoming events

Between May 5th-7th the Reproducible Builds Hackathon 2017 took place in Hamburg, Germany. On May 6th Mattia Rizzolo gave a talk on Reproducible Builds at DUCC-IT 17 in Vicenza, Italy. On May 13th Chris Lamb will give a talk on Reproducible Builds at OSCAL 2017 in Tirana, Albania.

Media coverage

Gunnar Wolf published an article in Spanish entitled "Construcciones Reproducibles".

Toolchain development and fixes

Ximin updated his R patch to fix a few FTBFS issues, and now we have 463/478 reproducible R packages. For more details, see his detailed write-up on this blog. Holger rebuilt dpkg, gcc-6 and r-base for our experimental toolchain for unstable on arm64, i386 and armhf.

Packages reviewed and fixed, and bugs filed

Chris Lamb:

  • #861608 filed against sbt.
  • #861672 filed against libwibble.
  • #861756 filed against pd-pdstring.
  • #861770 filed against fbreader.
  • #861773 filed against armagetronad.
  • #861893 filed against ironic.
  • #861896 filed against manila.
  • #861955 filed against canna.

Reviews of unreproducible packages

93 package reviews have been added, 12 have been updated and 98 have been removed this week, adding to our knowledge about identified issues.

The following issues have been added:

  • timestamps_in_cbd_files_generated_by_canna_mkbindic toolchain issue
  • timestamps_in_manpages_created_by_libwibble toolchain issue

2 issue types have been updated:

  • Add patch for timestamps_in_manpages_created_by_libwibble
  • Add patch for timestamps_in_cbd_files_generated_by_canna_mkbindic

The following issues have been removed: disorderfs_sensitive nondeterministic_ordering_in_desktop_files_by_python_sugar3 randomness_in_swf_files_generated_by_as3compile valac_permutes_get_type_calls docbook_to_man_one_byte_delta ghc_captures_build_path_via_tempdir dict_ordering_in_python_alabaster_sphinx_theme_extra_nav_links gpg_keyring_magic_bytes_differ varnish_vmodtool_random_file_id random_order_in_lua_version_substvar unsorted_lua_versions_in_control nondeterminstic_ordering_in_gsettings_glib_enums_xml random_order_in_init_py_generated_by_python-genpy randomness_in_r_rdb_rds_databases undeterministic_symlinking_by_rdfind random_order_in_ruby_rdoc_indices random_order_in_dh_haskell_substvars plist_weirdness randomness_in_python_setuptools_install_files_txt fileorder_in_gemspec_files_list timestamps_in_pdf_generated_by_reportlab method_may_never_be_called_in_documentatio[...]



Martin Pitt: Cockpit is now just an apt install away

Tue, 09 May 2017 08:51:23 +0000

Cockpit is now in Debian unstable and in Ubuntu 17.04 and devel, which means it's now a simple $ sudo apt install cockpit away for you to try and use. This metapackage pulls in the most common plugins, which are currently NetworkManager and udisks/storaged. If you want/need, you can also install cockpit-docker (if you grab docker.io from jessie-backports or use Ubuntu) or cockpit-machines to administer VMs through libvirt. Cockpit upstream also has a rather comprehensive Kubernetes/OpenStack plugin, but this isn't currently packaged for Debian/Ubuntu as Kubernetes itself is not yet in Debian testing or Ubuntu. After that, point your browser to https://localhost:9090 (or the host name/IP where you installed it) and off you go.

What is Cockpit?

Think of it as an equivalent of a desktop (like GNOME or KDE) for configuring, maintaining, and interacting with servers. It is a web service that lets you log into your local or a remote (through ssh) machine using normal credentials (PAM user/password or SSH keys) and then starts a normal login session just as gdm, ssh, or the classic VT logins would. The left side bar is the equivalent of a "task switcher", and the "applications" (i.e. modules for administering various aspects of your server) run in parallel. The main idea of Cockpit is that it should not behave "special" in any way - it does not have any specific configuration files or state keeping and uses the same operating system APIs and privileges as you would on the command line (such as lvmconfig, the org.freedesktop.UDisks2 D-Bus interface, reading/writing the native config files, and using sudo when necessary). You can simultaneously change stuff in Cockpit and in a shell, and Cockpit will instantly react to changes in the OS, e.g. if you create a new LVM PV or a network device gets added. This makes it fundamentally different from projects like webmin or ebox, which basically own your computer once you use them for the first time. 
It is an interface for your operating system, which is even reflected in the branding: as you see above, this is Debian (or Ubuntu, or Fedora, or wherever you run it), not "Cockpit".

Remote machines

In your home or small office you often have more than one machine to maintain. You can install cockpit-bridge and cockpit-system on those for the most basic functionality, configure SSH on them, and then add them on the Dashboard (I add a Fedora 26 machine here) and from then on can switch between the[...]
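The setup described above boils down to a few commands. This is a hedged sketch: the host name "otherhost" and the COCKPIT_DEMO opt-in switch are my examples, not from the post, and installing packages requires root.

```shell
# Sketch of the Cockpit setup; dry-runs unless explicitly opted in,
# since it installs packages.
if [ "${COCKPIT_DEMO:-0}" != "1" ]; then
  echo "dry run: set COCKPIT_DEMO=1 to actually run the commands"
else
  # On the machine you administer from (Debian unstable / Ubuntu 17.04+):
  sudo apt install cockpit
  # ...then browse to https://localhost:9090 and log in with your
  # normal (PAM password or SSH key) credentials.

  # Each additional machine only needs the basic bits plus SSH access:
  ssh otherhost 'sudo apt install cockpit-bridge cockpit-system'
  # ...after which "otherhost" can be added on the Dashboard.
fi
```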



Bits from Debian: Bursary applications for DebConf17 are closing in 48 hours!

Mon, 08 May 2017 20:30:00 +0000

This is a final reminder: if you intend to apply for a DebConf17 bursary and have not yet done so, please proceed as soon as possible.

Bursary applications for DebConf17 will be accepted until May 10th at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk is taken into account for your bursary; if you have a submission to make, submit it even if it is only sketched out. You will be able to add details later.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the wiki.

Note: For DebCamp we only have on-site accommodation available. The option chosen in the registration system will only be for the DebConf period (August 5 to 12).

See you in Montréal!

(image)