Planet Debian - http://planet.debian.org/
RSS feed: http://planet.debian.org/rss20.xml

Sam Hartman: Kudos to GDB Contributors

Mon, 23 Jan 2017 14:32:05 +0000

I recently diagnosed a problem in Debian's pam-p11 package. This package allegedly permits logging into a computer using a smart card or USB security token containing an ssh key. If you know the PIN and have the token, then your login attempt is authorized against the ssh authorized keys file. This seems like a great way to permit console logins as root to machines without having a shared password. Unfortunately, the package didn't work very well for me. It worked once, then all future attempts to use it segfaulted. I'm familiar with how PAM works. I understand the basic ideas behind PKCS11 (the API used for this type of smart card), but was completely unfamiliar with this particular PAM module and the PKCS11 library it used. The segfault was in an area of code I didn't even expect that this PAM module would ever call. Back in 1994, that would have been a painful slog. GDB has improved significantly since then, and I'd really like to thank all the people over the years who made that possible. I was able to isolate the problem in just a couple of hours of debugging. Here are some of the cool features I used:
  • "target record-full" which allows you to track what's going on so you can go backwards and potentially bisect where in a running program something goes wrong. It's not perfect; it seems to have trouble with memset and a few other functions, but it's really good.
  • Hardware watch points. Once you know what memory is getting clobbered, have the hardware report all changes so you can see who's responsible.
  • Hey, wait, what? I really wish I had placed a breakpoint back there. With "target record-full" and "reverse-continue," you can. Set the breakpoint and then reverse continue, and time runs backwards until your breakpoint gets hit.
  • I didn't need it for this session, but "set follow-fork-mode" is very handy for certain applications. There's even a way to debug both the parent and child of a fork at the same time, although I always have to go look up the syntax. It seems like it ought to be "set follow-fork-mode both," and there was once a debugger that used that syntax, but Gdb uses different syntax for the same concept.
Anyway, with just a couple of hours and no instrumentation of the code, I managed to track down how a bunch of structures were being freed as an unexpected side effect of one of the function calls. Neither I nor the author of the pam-p11 module expected that (although it is documented and does make sense in retrospect). Good tools make life easier.
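For readers who have not played with these features, here is a minimal sketch of how they combine; the program, function and watch expression below are made up for illustration and are not taken from the author's actual session:

$ gdb --args /usr/sbin/test-pam-login root       # hypothetical test program
(gdb) break pam_sm_authenticate                  # stop at the suspected module entry point
(gdb) run
(gdb) target record-full                         # start recording so execution can be replayed backwards
(gdb) continue
Program received signal SIGSEGV, Segmentation fault.
(gdb) watch -l ctx->slots                        # hardware watchpoint on the clobbered memory (made-up expression)
(gdb) reverse-continue                           # time runs backwards until something writes that memory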



Matthew Garrett: Android permissions and hypocrisy

Mon, 23 Jan 2017 07:58:57 +0000

I wrote a piece a few days ago about how the Meitu app asked for a bunch of permissions in ways that might concern people, but which were not actually any worse than many other apps. The fact that Android makes it so easy for apps to obtain data that's personally identifiable is of concern, but in the absence of another stable device identifier this is the sort of thing that capitalism is inherently going to end up making use of. Fundamentally, this is Google's problem to fix.

Around the same time, Kaspersky, the Russian anti-virus company, wrote a blog post that warned people about this specific app. It was framed somewhat misleadingly - "reading, deleting and modifying the data in your phone's memory" would probably be interpreted by most people as something other than "the ability to modify data on your phone's external storage" - although it ends with some reasonable advice that users should ask why an app requires some permissions.

So, to that end, here are the permissions that Kaspersky request on Android:

android.permission.READ_CONTACTS
android.permission.WRITE_CONTACTS
android.permission.READ_SMS
android.permission.WRITE_SMS
android.permission.READ_PHONE_STATE
android.permission.CALL_PHONE
android.permission.SEND_SMS
android.permission.RECEIVE_SMS
android.permission.RECEIVE_BOOT_COMPLETED
android.permission.WAKE_LOCK
android.permission.WRITE_EXTERNAL_STORAGE
android.permission.SUBSCRIBED_FEEDS_READ
android.permission.READ_SYNC_SETTINGS
android.permission.WRITE_SYNC_SETTINGS
android.permission.WRITE_SETTINGS
android.permission.INTERNET
android.permission.ACCESS_COARSE_LOCATION
android.permission.ACCESS_FINE_LOCATION
android.permission.READ_CALL_LOG
android.permission.WRITE_CALL_LOG
android.permission.RECORD_AUDIO
android.permission.SET_PREFERRED_APPLICATIONS
android.permission.WRITE_APN_SETTINGS
android.permission.READ_CALENDAR
android.permission.WRITE_CALENDAR
android.permission.KILL_BACKGROUND_PROCESSES
android.permission.RESTART_PACKAGES
android.permission.MANAGE_ACCOUNTS
android.permission.GET_ACCOUNTS
android.permission.MODIFY_PHONE_STATE
android.permission.CHANGE_NETWORK_STATE
android.permission.ACCESS_NETWORK_STATE
android.permission.ACCESS_LOCATION_EXTRA_COMMANDS
android.permission.ACCESS_WIFI_STATE
android.permission.CHANGE_WIFI_STATE
android.permission.VIBRATE
android.permission.READ_LOGS
android.permission.GET_TASKS
android.permission.EXPAND_STATUS_BAR
com.android.browser.permission.READ_HISTORY_BOOKMARKS
com.android.browser.permission.WRITE_HISTORY_BOOKMARKS
android.permission.CAMERA
com.android.vending.BILLING
android.permission.SYSTEM_ALERT_WINDOW
android.permission.BATTERY_STATS
android.permission.MODIFY_AUDIO_SETTINGS
com.kms.free.permission.C2D_MESSAGE
com.google.android.c2dm.permission.RECEIVE

Every single permission that Kaspersky mention Meitu having? They require it as well. And a lot more. Why does Kaspersky want the ability to record audio? Why does it want to be able to send SMSes? Why does it want to read my contacts? Why does it need my fine-grained location? Why is it able to modify my settings?

There's no reason to assume that they're being malicious here. The reason these permissions exist at all is that there are legitimate reasons to use them, and Kaspersky may well have good reason to request them. But they don't explain that, and they do literally everything that their blog post criticises (including explicitly requesting the phone's IMEI). Why should we trust a Russian company more than a Chinese one?

The moral here isn't that Kaspersky are evil or that Meitu are virtuous. It's that talking about application permissions is difficult and we don't have the language to explain to users what our apps are doing and why they're doing it, and Google are still falling far short of where they should be in terms of making this transparent to users. But the other moral is that you shouldn't complain about the permissions an app requires when you're asking for even more of them, because it just makes you look stupid and bad at your job. [...]
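For what it's worth, an app's requested permissions are declared in its manifest and can be listed straight from the APK with the Android SDK's aapt tool; a quick sketch (the APK file name is only an example):

$ aapt dump permissions example-app.apk                    # list every permission the APK requests
$ aapt dump badging example-app.apk | grep uses-permission # alternative view, alongside other metadata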



Shirish Agarwal: Debian contributions and World History

Sun, 22 Jan 2017 23:06:23 +0000

Beware, this will be slightly longish.

Debian Contributions

In the last couple of weeks, I was lucky enough to put up a patch against debian-policy for something that had been bothering me for a long, long time. The problem statement is simple: man-pages historically were made by software engineers for software engineers. The idea back then, probably, was that you give the user some idea of what the software does and the software engineer would garner the rest from reading the source code. But over a period of time the audience has changed. While there are still software engineers who use GNU/Linux for its technical excellence, the man-pages have not kept up with a new audience that is either not technically sound enough, or does not want to take the trouble of reading the source code to understand how things flow. An 'example' or 'examples' section in a man-page gives us (the lesser mortals) some insight into how the command works and how the logic flows. A good example is the ufw man-page:

EXAMPLES
Deny all access to port 53:
  ufw deny 53
Allow all access to tcp port 80:
  ufw allow 80/tcp
Allow all access from RFC1918 networks to this host:
  ufw allow from 10.0.0.0/8
  ufw allow from 172.16.0.0/12
  ufw allow from 192.168.0.0/16
Deny access to udp port 514 from host 1.2.3.4:
  ufw deny proto udp from 1.2.3.4 to any port 514
Allow access to udp 1.2.3.4 port 5469 from 1.2.3.5 port 5469:
  ufw allow proto udp from 1.2.3.5 port 5469 to 1.2.3.4 port 5469

If we had man-pages like the above, which give examples, then the user can at least try to accomplish whatever s/he is trying to do. I truly believe that not having examples in a man-page costs you 50% of your audience, people who could potentially use your tool.

Personal wishlist: the only thing (and this might be my failure) is that we need a good way to search through a man-page. The only way I know is using '/' and trying a pattern. Lots of times it fails because I, the user, don't know the exact keyword the documenter was using. What would be nice, even great, is some sort of parser where I tell it "this is what I'm looking for" and it tries the pattern plus all its synonyms and tells me whatever seems to be the most relevant passage from the content, in this case a man-page. It would make my life a lot easier while at the same time pushing people to document more and more. I don't know if there has been any research or study on the relationship between good documentation and the popularity of a program. I know there are lots of different small things which make or break a program, one of which would definitely be documentation, and for a command-line tool that means a man-page. A query on Quora gives some indication (https://www.quora.com/How-comprehensible-do-you-find-Unix-Linux-Man-pages), although the low response rate tells its own story. There have been projects like man2html, man2pdf and others which try to make the content more accessible to people who are not used to the man-page interface, but until you have 'Examples' the other things can only go so far. Also, if anybody talks about some project X which claims to solve this problem, it will have to compete with man-pages, which have been around more or less forever.

As can be seen in the patch, I made some rookie mistakes. I also filed a lintian bug at the same time. I hope the patch gets merged into debian-policy at some point, and that a check is then introduced in lintian in some future release. I do agree with anarcat's assertion that it should be at the same level as the existing missing-manpage check.

I am no coder, but finding 14,000 binary packages without a manpage left me both shocked and surprised. I came to know about manpage-alert from the devscripts package, which reports which installed binary packages do not have man-pages. I hope to contribute a manpage or two if I come across a package I'm somewhat comfortable with. I have made a beginning of sorts by running manpage-alert and putting the output in a .txt file which I would grep through manua[...]
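For anyone who wants to repeat that exercise, a rough sketch of the workflow described above (the file name and grep pattern are just examples):

$ sudo apt-get install devscripts            # manpage-alert ships in the devscripts package
$ manpage-alert > manpage-alert.txt          # report installed executables that lack a man page
$ wc -l manpage-alert.txt                    # rough idea of how much is missing
$ grep '/usr/bin/' manpage-alert.txt | less  # browse candidates you might be able to document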



Steinar H. Gunderson: Nageru loopback test

Sun, 22 Jan 2017 22:47:00 +0000

(image)

Nageru, my live video mixer, is in the process of gaining HDMI/SDI output for bigscreen use, and in that process, I ran some loopback tests (connecting the output of one card into the input of another) to verify that I had all the colorspace parameters right. (This is of course trivial if you are only passing a single input straight through bit-by-bit, but Nageru is much more flexible, so it really needs to understand what each pixel means.)

It turns out that if you mess up any of these parameters ever so slightly, you end up with something like this, this or this. But thankfully, I got this instead on the very first try, so it really seems it's been right all along. :-)

(There's a minor first-generation loss in that the SDI chain is 8-bit Y'CbCr instead of 10-bit, but I really can't spot it with the naked eye, and it doesn't compound through generations. I plan to fix that for those with spare GPU power at some point, possibly before the 1.5.0 release.)




intrigeri: My first (public) blog ever

Sun, 22 Jan 2017 20:23:56 +0000

Hi there! Welcome to the first (public) blog I've ever had.




Lars Wirzenius: Improving debugging via email, followup

Sun, 22 Jan 2017 15:50:57 +0000

(image)

Half a year ago I wrote a blog post about debugging over email. This is a follow-up.

The blog post summarised:

  • Have an automated way to collect all the usual information needed for debugging: versions, config and log files, etc.

  • Improve error messages so the users can solve their issues themselves.

  • Give users better automated diagnostics tools.

Based on further thinking and feedback, I add:

  • When a program notices a problem that may indicate a bug in it, it should collect the necessary information itself, automatically, in a way that the user just needs to send to the developers / support.

  • The primary goal should be to help people solve their own problems.

  • A secondary goal is to make the problem reproducible by the developers, or otherwise make it easy to fix bugs without access to the original system where the problem was manifested.

I've not written any code to help with this remote debugging, but it's something I will start experimenting with in the near future.
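Purely as an illustration of the first new point above, here is the kind of self-collected report bundle I imagine; this is my own sketch, not the author's code, and the program name and paths are hypothetical:

#!/bin/sh
# Sketch: bundle the usual debugging information into one archive
# that the user only has to attach to a bug report. "myprog" is hypothetical.
set -e
dir=$(mktemp -d)
myprog --version            > "$dir/version.txt" 2>&1
uname -a                    > "$dir/system.txt"
cp -r ~/.config/myprog        "$dir/config" 2>/dev/null || true
cp -r ~/.cache/myprog/logs    "$dir/logs"   2>/dev/null || true
tar -czf myprog-report.tar.gz -C "$dir" .
echo "Please attach myprog-report.tar.gz to your report."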

Further ideas welcome.




Elena 'valhalla' Grandi: New pajama

Sun, 22 Jan 2017 15:43:53 +0000

New pajama

I may have been sewing myself a new pajama.

(image) http://social.gl-como.it/photos/valhalla/image/81b600789aa02a91fdf62f54a71b1ba0

It was plagued with issues: one of the sleeves is wrong side out, and I only realized it when everything was almost done (luckily the pattern is symmetric and it is barely noticeable); the swirl moved while I was sewing it on (and the sewing machine got stuck multiple times: next time I'm using interfacing, full stop); and it's a bit deformed. But it's done.

For the swirl, I used Inkscape to Simplify (Ctrl-L) the original Debian Swirl a few times, removed the isolated bits, adjusted some spline nodes by hand and printed on paper. I've then cut, used water soluble glue to attach it to the wrong side of a scrap of red fabric, cut the fabric, removed the paper and then pinned and sewed the fabric on the pajama top.
As mentioned above, the next time I'm doing something like this, some interfacing will be involved somewhere, to keep me sane and the sewing machine happy.

Blogging, because it is somewhat relevant to Free Software :) and there are even sources https://www.trueelena.org/clothing/projects/pajamas_set.html#downloads, under a DFSG-Free license :)



Dirk Eddelbuettel: Updated Example Repo for RMarkdown and Metropolis/Mtheme

Sat, 21 Jan 2017 19:23:00 +0000

(image)

During useR! 2016, Nick Tierney had asked on Twitter about rmarkdown and metropolis and whether folks had used RMarkdown-driven LaTeX Beamer presentations. My firm hell yeah answer, based on having used mtheme outright or in local mods for quite some time (see my talks page), led to this blog post of mine describing this GitHub repo I had quickly set up during breaks at useR! 2016. The corresponding blog post and the repo have some more details on how I do this, in particular about local packages (also with sources on GitHub) for the non-standard fonts I use.

This week I got around to updating the repo / example a little: making the default colours (in my example) a little less awful, adding a page on blocks and, most importantly, turning the example into the animated gif below:

(image)

And thanks to the beautiful tint package -- see its repo and CRAN package -- I now know how to create a template package. So if there is interest (and spare time), we could build a template package for RStudio too.

With that, may I ask a personal favour of anybody still reading this post? Please do not hit my Twitter handle with questions for support. All my code is on GitHub, and issue tickets there are much preferred. Larger projects like Rcpp also have their own mailing lists, and it is much better to use those. And if you like neither, maybe ask on StackOverflow. But please don't spam my Twitter handle. Thank you.




Clint Adams: Strangers with candy

Sat, 21 Jan 2017 13:37:08 +0000

(image)

This.

Posted on 2017-01-21
Tags: barks



Joachim Breitner: Global almost-constants for Haskell

Fri, 20 Jan 2017 18:03:17 +0000

(image)

More than five years ago I blogged about the “configuration problem” and a proposed solution for Haskell, which turned into some Template Haskell hacks in the seal-module package.

With the new GHC proposal process in place, I am suddenly much more inclined to write up my weird wishes for the Haskell language in proposal form, to make them more coherent, get feedback, and maybe (maybe) actually get them implemented. But even if the proposal is rejected, it is still a nice forum to discuss these ideas.

So I turned my Template Haskell hack into a proposed new syntactic feature. The idea is shamelessly stolen from Isabelle, including some of the keywords, and would allow you to write

context fixes progName in
  foo :: Maybe Int -> Either String Int
  foo Nothing  = Left $ progName ++ ": no number given"
  foo (Just i) = bar i

  bar :: Int -> Either String Int
  bar 0 = Left $ progName ++ ": zero no good"
  bar n = Right $ n + 1

instead of

foo :: String -> Maybe Int -> Either String Int
foo progName Nothing  = Left $ progName ++ ": no number given"
foo progName (Just i) = bar progName  i

bar :: String -> Int -> Either String Int
bar progName 0 = Left $ progName ++ ": zero no good"
bar progName n = Right $ n + 1

when you want to have an “almost constant” parameter.

I am happy to get feedback at the GitHub pull request.




Michal Čihař: Weblate 2.10.1

Fri, 20 Jan 2017 14:00:21 +0000

(image)

This is the first security bugfix release for Weblate. It had to come at some point; fortunately the issue is not really severe. But Weblate got its first CVE ID today, so it's time to address it in a bugfix release.

Full list of changes:

  • Do not leak account existence on password reset form (CVE-2017-5537).

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the password demo, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!

Filed under: Debian English SUSE Weblate




Matthew Garrett: Android apps, IMEIs and privacy

Thu, 19 Jan 2017 23:36:36 +0000

(image) There's been a sudden wave of people concerned about the Meitu selfie app's use of unique phone IDs. Here's what we know: the app will transmit your phone's IMEI (a unique per-phone identifier that can't be altered under normal circumstances) to servers in China. It's able to obtain this value because it asks for a permission called READ_PHONE_STATE, which (if granted) means that the app can obtain various bits of information about your phone including those unique IDs and whether you're currently on a call.

Why would anybody want these IDs? The simple answer is that app authors mostly make money by selling advertising, and advertisers like to know who's seeing their advertisements. The more app views they can tie to a single individual, the more they can track that user's response to different kinds of adverts and the more targeted (and, they hope, more profitable) the advertising towards that user. Using the same ID between multiple apps makes this easier, and so using a device-level ID rather than an app-level one is preferred. The IMEI is the most stable ID on Android devices, persisting even across factory resets.

The downside of using a device-level ID is, well, whoever has that data knows a lot about what you're running. That lets them tailor adverts to your tastes, but there are certainly circumstances where that could be embarrassing or even compromising. Using the IMEI for this is even worse, since it's also used for fundamental telephony functions - for instance, when a phone is reported stolen, its IMEI is added to a blacklist and networks will refuse to allow it to join. A sufficiently malicious person could potentially report your phone stolen and get it blocked by providing your IMEI. And phone networks are obviously able to track devices using them, so someone with enough access could figure out who you are from your app usage and then track you via your IMEI. But realistically, anyone with that level of access to the phone network could just identify you via other means. There's no reason to believe that this is part of a nefarious Chinese plot.

Is there anything you can do about this? On Android 6 and later, yes. Go to settings, hit apps, hit the gear menu in the top right, choose "App permissions" and scroll down to phone. Under there you'll see all apps that have permission to obtain this information, and you can turn them off. Doing so may cause some apps to crash or otherwise misbehave, whereas newer apps may simply ask you to grant the permission again and refuse to proceed if you don't.
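The same check and revocation can also be done from a computer over adb; a small sketch (the package name is only an example):

$ adb shell dumpsys package com.example.selfieapp | grep READ_PHONE_STATE          # is the permission requested/granted?
$ adb shell pm revoke com.example.selfieapp android.permission.READ_PHONE_STATE    # revoke it (Android 6 and later)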

Meitu isn't especially rare in this respect. Over 50% of the Android apps I have handy request your IMEI, although I haven't tracked what they all do with it. It's certainly something to be concerned about - there are big-name apps that do exactly the same thing. There's a legitimate question over whether Android should be making it so easy for apps to obtain this level of identifying information without more explicit informed consent from the user, but until Google do something to make it more difficult, apps will continue making use of this information. Let's turn this into a conversation about user privacy online rather than blaming one specific example.




Daniel Pocock: Which movie most accurately forecasts the Trump presidency?

Thu, 19 Jan 2017 19:31:22 +0000

(image)

Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

As it turns out, several movies provide thought-provoking insight into what could eventuate. What's more, the two below bear a creepy resemblance to the Trump phenomenon and many of the problems in the world today.

Countdown to Looking Glass

On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far more scary to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the middle east, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become speaker of the house. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary but then you probably won't be able to sleep after watching it.

(image)

The Omen

Another classic is The Omen. The star of this series of four horror movies, Damien Thorn, appears to have a history that is eerily reminiscent of Trump: born into a wealthy family, a series of disasters befall every honest person he comes into contact with, he comes to control a vast business empire acquired by inheritance and as he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice Damien Thorn and Donald Trump even share the same initials, DT?

(image)




Reproducible builds folks: Reproducible Builds: week 90 in Stretch cycle

Wed, 18 Jan 2017 21:52:19 +0000

What happened in the Reproducible Builds effort between Sunday January 8 and Saturday January 14 2017:

Upcoming Events

  • The Reproducible Build Zoo will be presented by Vagrant Cascadian at the Embedded Linux Conference in Portland, Oregon, February 22nd.
  • Dennis Gilmore and Holger Levsen will present about "Reproducible Builds and Fedora" at Devconf.cz on February 27th.
  • Introduction to Reproducible Builds will be presented by Vagrant Cascadian at Scale15x in Pasadena, California, March 5th.

Reproducible work in other projects

  • Reproducible Builds have been mentioned in the FSF high-priority project list.
  • The F-Droid Verification Server has been launched. It rebuilds apps from source that were built by f-droid.org and checks that the results match.
  • Bernhard M. Wiedemann did some more work on reproducibility for openSUSE.
  • Bootstrappable.org (unfortunately no HTTPS yet) was launched after the initial work was started at our recent summit in Berlin. This is another topic related to reproducible builds and both will be needed in order to perform "Diverse Double Compilation" in practice in the future.

Toolchain development and fixes

  • Ximin Luo researched data formats for SOURCE_PREFIX_MAP and explored different options for encoding a map data structure in a single environment variable. He also continued to talk with the rustc team on the topic.
  • Daniel Shahaf filed #851225 ('udd: patches: index by DEP-3 "Forwarded" status') to make it easier to track our patches.
  • Chris Lamb forwarded #849972 upstream to yard, a Ruby documentation generator. Upstream has fixed the issue as of release 0.9.6.
  • Alexander Couzens (lynxis) has made mksquashfs reproducible and is looking for testers. It compiles on BSD systems such as FreeBSD, OpenBSD and NetBSD.

Bugs filed

  • Chris Lamb: #850683 filed against gerbv.
  • Lucas Nussbaum: #850973 filed against shogun.
  • Nicola Corna: #851190 filed against gerbv.

Reviews of unreproducible packages

13 package reviews have been added and 13 have been removed this week, adding to our knowledge about identified issues. 1 issue type has been added: help2man_puts_traceback_in_generated_man_page.

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by: Chris Lamb (3), Lucas Nussbaum (11), Nicola Corna (1).

diffoscope development

Many bugs were opened in diffoscope during the past few weeks, which probably is a good sign as it shows that diffoscope is much more widely used than a year ago. We have been working hard to squash many of them in time for Debian stable, though we will see how that goes in the end…

  • Mattia Rizzolo: Code quality and style improvements.
  • Maria Glukhova:
      • Add image metadata comparison.
      • Zipinfo included in APK files comparison. (Closes: #850502)
      • Remove archive name from apktool.yml and rename it. (Closes: #850501)
  • Chris Lamb:
      • Correctly escape value of href="" elements (re. #849411).
      • Support comparing .ico files using img2txt (Closes: #850730), plus fixes and extra tests in subsequent commits.
      • comparators.utils.file: Include magic file type when we know the file format but can't find file-specific details. (Closes: #850850)
      • And other code quality and style improvements.

reproducible-website development

  • Daniel Shahaf: Add how-to-chair-a-meeting.md.
  • Holger Levsen: Add links to reports by several attendees of berlin2016.

tests.reproducible-builds.org

Ximin Luo and Holger Levsen worked on stricter tests to check that /dev/shm and /run/shm are both mounted with the correct permissions. Some of our build machines currently still fail this test, and the problem is probably the root cause of the FTBFS of some packages (which fails with [...]



Jan Wagner: Migrating Gitlab non-packaged PostgreSQL into omnibus-packaged

Wed, 18 Jan 2017 21:08:02 +0000

With the release of Gitlab 8.15 it was announced that PostgreSQL needs to be upgraded. As I migrated from a source installation, I used to have an external PostgreSQL database instead of using the one shipped with the omnibus package. So I decided to do the data migration into the omnibus PostgreSQL database now, which I had skipped before.

Let's have a look into the databases:

$ sudo -u postgres psql -d template1
psql (9.2.18)
Type "help" for help.
gitlabhq_production=# \l
                                  List of databases
         Name          |       Owner       | Encoding | Collate | Ctype   | Access privileges
-----------------------+-------------------+----------+---------+---------+---------------------------------
 gitlabhq_production   | git               | UTF8     | C.UTF-8 | C.UTF-8 |
 gitlab_mattermost     | git               | UTF8     | C.UTF-8 | C.UTF-8 |
gitlabhq_production=# \q

Dump the databases and stop PostgreSQL. Maybe you need to adjust database names and users for your needs.

$ su postgres -c "pg_dump gitlabhq_production -f /tmp/gitlabhq_production.sql" && \
  su postgres -c "pg_dump gitlab_mattermost -f /tmp/gitlab_mattermost.sql" && \
  /etc/init.d/postgresql stop

Activate the PostgreSQL shipped with Gitlab Omnibus:

$ sed -i "s/^postgresql\['enable'\] = false/#postgresql\['enable'\] = false/g" /etc/gitlab/gitlab.rb && \
  sed -i "s/^#mattermost\['enable'\] = true/mattermost\['enable'\] = true/" /etc/gitlab/gitlab.rb && \
  gitlab-ctl reconfigure

Test whether the connection to the databases works:

$ su - git -c "psql --username=gitlab --dbname=gitlabhq_production --host=/var/opt/gitlab/postgresql/"
psql (9.2.18)
Type "help" for help.
gitlabhq_production=# \q

$ su - git -c "psql --username=gitlab --dbname=mattermost_production --host=/var/opt/gitlab/postgresql/"
psql (9.2.18)
Type "help" for help.
mattermost_production=# \q

Ensure the pg_trgm extension is enabled:

$ sudo gitlab-psql -d gitlabhq_production -c 'CREATE EXTENSION IF NOT EXISTS "pg_trgm";'
$ sudo gitlab-psql -d mattermost_production -c 'CREATE EXTENSION IF NOT EXISTS "pg_trgm";'

Adjust permissions in the database dumps. Please verify whether users and databases need to be adjusted for your setup, too.

$ sed -i "s/OWNER TO git;/OWNER TO gitlab;/" /tmp/gitlabhq_production.sql && \
  sed -i "s/postgres;$/gitlab-psql;/" /tmp/gitlabhq_production.sql
$ sed -i "s/OWNER TO git;/OWNER TO gitlab_mattermost;/" /tmp/gitlab_mattermost.sql && \
  sed -i "s/postgres;$/gitlab-psql;/" /tmp/gitlab_mattermost.sql

(Re)import the data:

$ sudo gitlab-psql -d gitlabhq_production -f /tmp/gitlabhq_production.sql
$ sudo gitlab-psql -d gitlabhq_production -c 'REVOKE ALL ON SCHEMA public FROM "gitlab-psql";' && \
  sudo gitlab-psql -d gitlabhq_production -c 'GRANT ALL ON SCHEMA public TO "gitlab-psql";'
$ sudo gitlab-psql -d mattermost_production -f /tmp/gitlab_mattermost.sql
$ sudo gitlab-psql -d mattermost_production -c 'REVOKE ALL ON SCHEMA public FROM "gitlab-psql";' && \
  sudo gitlab-psql -d mattermost_production -c 'GRANT ALL ON SCHEMA public TO "gitlab-psql";'

Make use of the shipped PostgreSQL:

$ sed -i "s/^gitlab_rails\['db_/#gitlab_rails\['db_/" /etc/gitlab/gitlab.rb && \
  sed -i "s/^mattermost\['sql_/#mattermost\['sql_/" /etc/gitlab/gitlab.rb && \
  gitlab-ctl reconfigure

Now you should be able to connect to all the Gitlab services again.

Optionally remove the external database:

apt-get remove postgresql postgresql-client postgresql-9.4 postgresql-client-9.4 postgresql-client-common postgresql-common

Maybe you also want to purge the old database cont[...]



Michael Stapelberg: manpages.debian.org has been modernized

Wed, 18 Jan 2017 17:20:00 +0000

https://manpages.debian.org has been modernized! We have just launched a major update to our manpage repository. What used to be served via a CGI script is now a statically generated website, and therefore blazingly fast.

While we were at it, we have restructured the paths so that we can serve all manpages, even those whose name conflicts with other binary packages (e.g. crontab(5) from cron, bcron or systemd-cron). Don’t worry: the old URLs are redirected correctly.

Furthermore, the design of the site has been updated and now includes navigation panels that allow quick access to the manpage in other Debian versions, other binary packages, other sections and other languages. Speaking of languages, the site serves manpages in all their available languages and respects your browser’s language when redirecting or following a cross-reference.

Much like the Debian package tracker, manpages.debian.org includes packages from Debian oldstable, oldstable-backports, stable, stable-backports, testing and unstable. New manpages should make their way onto manpages.debian.org within a few hours.

The generator program (“debiman”) is open source and can be found at https://github.com/Debian/debiman. In case you would like to use it to run a similar manpage repository (or convert your existing manpage repository to it), we’d love to help you out; just send an email to stapelberg AT debian DOT org.

This effort is standing on the shoulders of giants: check out https://manpages.debian.org/about.html for a list of people we thank.

We’d love to hear your feedback and thoughts. Either contact us via an issue on https://github.com/Debian/debiman/issues/, or send an email to the debian-doc mailing list (see https://lists.debian.org/debian-doc/).




Jonathan Dowland: RetroPie, NES Classic and Bluetooth peripherals

Wed, 18 Jan 2017 17:14:58 +0000

I wanted to write a more in-depth post about RetroPie, the retro gaming appliance OS for Raspberry Pis, either technically or more positively, but unfortunately I don't have much positive to write. What I hoped for was a nice appliance that I could use to play old games from the comfort of my sofa. Unfortunately, nine times out of ten, I had a malfunctioning Linux machine, and the time I'd set aside for jumping on goombas was being spent trying to figure out why bluetooth wasn't working. I have enough opportunities for that already, both at work and at home.

I feel a little bad complaining about an open source, volunteer project: in its defence I can say that it is iterating fast, and the two versions I tried in a relatively short time span were noticeably different. So hopefully a lot of my woes will eventually be fixed. I've also read that a lot of other people get on with it just fine.

Instead, I decided the Nintendo Classic NES Mini was the plug-and-play appliance for me. Alas, it became the "must have" Christmas toy for 2016 and impossible to obtain for the recommended retail price. I did succeed in finding one in stock at Toys R Us online at one point, only to have the checkout process break and my order not go through. Checking Stock Informer afterwards, that particular window of opportunity was only 5 minutes wide. So no NES classic for me!

My adventures in RetroPie weren't entirely fruitless, thankfully: I discovered two really nice pieces of hardware.

ThinkPad Keyboard

The first is Lenovo's ThinkPad Compact Bluetooth Keyboard with TrackPoint, a very compact but pleasant-to-use Bluetooth keyboard including a trackpoint. I miss the trackpoint from my days as a ThinkPad laptop user. Having a keyboard and mouse combo in such a small package is excellent. My only two complaints would be the price (I was lucky to get one cheaper on eBay) and the fact it's Bluetooth only: there's a micro-USB port for charging, but it would be nice if it could be used as a USB keyboard too. There's a separate, cheaper USB model.

8bitdo SFC30

The second neat device is a clone of the SNES gamepad by HK company 8bitdo, called the SFC30. This looks and feels very much like the classic Nintendo SNES controller, albeit slightly thicker from front to back. It can be used in a whole range of different modes, including attached USB; Bluetooth pretending to be a keyboard; Bluetooth pretending to be a controller; and a bunch of other special modes designed to work with iOS or Android devices in various configurations. The manufacturer seems to be actively working on firmware updates to further enhance the controller. The firmware is presently closed source, but it would not be impossible to write an open source firmware for it (some people have figured out the basis of the official firmware).

I like the SFC30 enough that I spent some time trying to get it working with various versions of Doom. There are just enough buttons to control a 2.5D game like Doom, whereas something like Quake or a more modern shooter would not work so well. I added support for several 8bitdo controllers directly into Chocolate Doom (available from 2.3.0 onwards) and into SDL2, a popular library for game development, which I think is used by Steam, so Steam games may all gain SFC30 support in the future too. [...]



Bálint Réczey: My debian-devel pledge

Wed, 18 Jan 2017 11:33:33 +0000

(image)

(image) I pledge that before sending each email to the debian-devel mailing list I move forward at least one actionable bug in my packages.




Mike Hommey: Announcing git-cinnabar 0.4.0

Wed, 18 Jan 2017 09:21:45 +0000

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.3.2?

  • Various bug fixes.
  • Updated git to 2.11.0 for cinnabar-helper.
  • Now supports bundle2 for both fetch/clone and push (https://www.mercurial-scm.org/wiki/BundleFormat2).
  • Now supports git credential for HTTP authentication.
  • Now supports git push --dry-run.
  • Added a new git cinnabar fetch command to fetch a specific revision that is not necessarily a head.
  • Added a new git cinnabar download command to download a helper on platforms where one is available.
  • Removed upgrade path from repositories used with version < 0.3.0.
  • Experimental (and partial) support for using git-cinnabar without having mercurial installed.
  • Use a mercurial subprocess to access local mercurial repositories.
  • Cinnabar-helper now handles fast-import, with workarounds for performance issues on macOS.
  • Fixed some corner cases involving empty files. This prevented cloning Mozilla’s stylo incubator repository.
  • Fixed some correctness issues in file parenting when pushing changesets pulled from one mercurial repository to another.
  • Various improvements to the rules to build the helper.
  • Experimental (and slow) support for pushing merges, with caveats. See issue #20 for details about the current status.
  • Fail graft earlier when no commit was found to graft.
  • Allow graft to work with git version < 1.9.
  • Allow git cinnabar bundle to do the same grafting as git push.
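As a quick sketch of the basic workflow together with one of the commands above (the repository URL is merely an example of a mercurial repository):

$ git cinnabar download                              # grab a prebuilt helper where one is available
$ git clone hg::https://www.mercurial-scm.org/repo/hg mercurial
$ cd mercurial
$ git pull                                           # later updates; bundle2 is used when the server supports it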



Norbert Preining: Debian/TeX Live January 2017

Wed, 18 Jan 2017 06:37:26 +0000

As the freeze of the next release is closing in, I have updated a bunch of packages around TeX: all of the TeX Live packages (binaries and arch-independent ones) and tex-common. I might see whether I get some updates of ConTeXt out, too.

The changes in the binaries are mostly cosmetic: one removal of a non-free (unclear-free) file, and several upstream patches got cherry-picked (dvips, tltexjp contact email, upmendex, dvipdfmx). I played around with including LuaTeX v1.0, but that breaks horribly with the current packages in TeX Live, so I refrained from it.

The infrastructure package tex-common got a bugfix for updates from previous releases, and for the other packages there is the usual bunch of updates and new packages. Enjoy!

New packages

arimo, arphic-ttf, babel-japanese, conv-xkv, css-colors, dtxdescribe, fgruler, footmisx, halloweenmath, keyfloat, luahyphenrules, math-into-latex-4, mendex-doc, missaali, mpostinl, padauk, platexcheat, pstring, pst-shell, ptex-fontmaps, scsnowman, stanli, tinos, undergradmath, yaletter.

Updated packages

acmart, animate, apxproof, arabluatex, arsclassica, babel-french, babel-russian, baskervillef, beamer, beebe, biber, biber.x86_64-linux, biblatex, biblatex-apa, biblatex-chem, biblatex-dw, biblatex-gb7714-2015, biblatex-ieee, biblatex-philosophy, biblatex-sbl, bidi, calxxxx-yyyy, chemgreek, churchslavonic, cochineal, comicneue, cquthesis, csquotes, ctanify, ctex, cweb, dataref, denisbdoc, diagbox, dozenal, dtk, dvipdfmx, dvipng, elocalloc, epstopdf, erewhon, etoolbox, exam-n, fbb, fei, fithesis, forest, glossaries, glossaries-extra, glossaries-french, gost, gzt, historische-zeitschrift, inconsolata, japanese-otf, japanese-otf-uptex, jsclasses, latex-bin, latex-make, latexmk, lt3graph, luatexja, markdown, mathspec, mcf2graph, media9, mendex-doc, metafont, mhchem, mweights, nameauth, noto, nwejm, old-arrows, omegaware, onlyamsmath, optidef, pdfpages, pdftools, perception, phonrule, platex-tools, polynom, preview, prooftrees, pst-geo, pstricks, pst-solides3d, ptex, ptex2pdf, ptex-fonts, qcircuit, quran, raleway, reledmac, resphilosophica, sanskrit, scalerel, scanpages, showexpl, siunitx, skdoc, skmath, skrapport, smartdiagram, sourcesanspro, sparklines, tabstackengine, tetex, tex, tex4ht, texlive-scripts, tikzsymbols, tocdata, uantwerpendocs, updmap-map, uplatex, uptex, uptex-fonts, withargs, wtref, xcharter, xcntperchap, xecjk, xellipsis, xepersian, xint, xlop, yathesis. [...]



Hideki Yamane: It's all about design

Wed, 18 Jan 2017 02:45:29 +0000

From Arturo's blog
When I asked why not Debian, the answer was that it was very difficult to install and manage.
It's all about design, IMHO.
Installer, website, wiki... It should be "simple", not verbose, not cheap.



Dirk Eddelbuettel: RProtoBuf 0.4.8: Windows support for proto3

Wed, 18 Jan 2017 01:21:00 +0000

(image)

Issue ticket #20 demonstrated that we had not yet set up Windows for version 3 of Google Protocol Buffers ("Protobuf") -- while the other platforms support it. So I made the change, and there is release 0.4.8.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding and serialization library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

The NEWS file summarises the release as follows:

Changes in RProtoBuf version 0.4.8 (2017-01-17)

  • Windows builds now use the proto3 library as well (PR #21 fixing #20)

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.




Arturo Borrero González: Debian is a puzzle: difficult

Tue, 17 Jan 2017 17:10:00 +0000

(image)

Debian is very difficult, a puzzle. This surprising statement was what I got last week when talking with a group of new IT students (and their teachers).

I would like to write down here what I was able to obtain from that conversation.

From time to time, as part of my job at CICA, we open the doors of our datacenter to IT students from all around Andalusia (our region) who want to learn what we do here and how we do it. All our infrastructure and servers are primarily built using FLOSS software (we have some exceptions, like backbone routers and switches), and the most important servers run Debian.

As part of the talk, when I am in such a meeting with a visiting group, I usually ask about which technologies they use and learn in their studies. The other day, they told me they use mostly Ubuntu and a bit of Fedora.

When I asked why not Debian, the answer was that it was very difficult to install and manage. I tried to obtain some facts about this, but I failed; it seems to be a case of bad fame, a reputation problem which has spread among the teachers and therefore among the students. I didn’t detect any brand bias or the like. It just seems to be lack of knowledge, and a bad Debian reputation.

Using my DD powers and responsibilities, I kindly asked for feedback on how to improve our installer or whatever else they may find difficult, but a week later I have received no email.

Then, what I obtain is nothing new:

  • we probably need more new-users feedback
  • we have work to do in the marketing/branding area
  • we have very strong competitors out there
  • we should keep doing our best

I myself recently had to use the Ubuntu installer on a laptop, and it didn’t seem that different to the Debian one: same steps and choices, like in every other OS installation.

Please, spread the word: Debian is not difficult. Certainly not perfect, but I don’t think that installing and using Debian is such a puzzle.




Ritesh Raj Sarraf: Linux Tablet-Mode Usability

Tue, 17 Jan 2017 14:02:36 +0000

In my ongoing quest to get Tablet-Mode working on my hybrid machine, here's how I've been living with it so far. My intent is to continue using Free Software for both use cases. My wishful thought is to use the same software under both use cases.

Browser: On the browser front, things are pretty decent. Chromium has good support for touchscreen input, and most touchscreen use cases work well with it. On the Firefox side, after a huge delay, Firefox finally seems to be catching up. Hopefully, with Firefox 51/52, we'll have a much more usable touchscreen browser.

Desktop Shell: One of the reasons for migrating to GNOME was its touch support. From what I've explored so far, GNOME is the only desktop shell that has touch support done natively. The feature isn't complete yet, but it is fairly usable. Given that GNOME has native touchscreen support, it is obvious to use the GNOME equivalents of tools for common use cases. Most of these tools inherit the touchscreen capabilities from the underlying GNOME libraries.

File Manager: Nautilus has decent support for touch as a file manager. The only annoying bit is a right-click equivalent, or in touch input terms, a long-press.

Movie Player: There's a decent movie player based on the GNOME libs: GNOME MPV. In my limited use so far, this interface seems to have good support. Other contenders: SMPlayer is based on Qt libs, so the initial expectation would be that Qt-based apps have better touch support, but I'm yet to see any serious Qt application with touch input support. Back to SMPlayer: the developer is pragmatic enough to recognize tablet-mode users and as such has provided a so-called "Tablet Mode" view for SMPlayer (the tooltip did not get captured in the screenshot). MPV doesn't come with a UI but has basic management with an OSD, and in my limited usage, the OSD implementation does seem capable of taking touch input.

Books / Documents: GNOME Documents/Books is very basic in what it has to offer, to the point that it is not much use. But since it is based on the same GNOME libraries, it enjoys native touch input support. Calibre, on the other hand, is feature rich, but it is based on (Py)Qt. Touch input is said to work on Windows; for Linux, there's no support yet. The good thing about Calibre is that it has its own UI, which is pretty decent in a Tablet-Mode touch workflow.

Photo Management: With compact digital devices commonly available, digital content (both photos and videos) is on the rise. The most obvious names that come to mind are Digikam and Shotwell. Shotwell saw its reincarnation in the recent past. From what I recollect, it does have touch support but was lacking quite a bit in terms of features compared to Digikam. Digikam is an impressive tool for digital content management. While Digikam is a KDE project, thankfully it does a great job of keeping its KDE dependencies to a bare minimum. But given that Digikam builds on KDE/Qt libs, I haven't had much success in getting a good touch input solution for Tablet Mode. To make it barely usable in Tablet-Mode, one can choose a theme preference with bigger toolbars, labels and scrollbars. This helps make touch input a workable use case. As you can see, I've configured the Digikam UI with text alongside icons for easy touch input.

Email: The most common use case. With Gmail and friends, many believe standalone email clients are no longer a need. But there always are [...]



Elizabeth Ferdman: 6 Week Progress Update for PGP Clean Room

Tue, 17 Jan 2017 00:00:00 +0000

One of the PGP Clean Room’s aims is to provide users with the option to easily initialize one or more smartcards with personal info and pins, and subsequently transfer keys to the smartcard(s). The advantage of using smartcards is that users don’t have to expose their keys to their laptop for daily certification, signing, encryption or authentication purposes.

I started building a basic whiptail TUI that asks users if they will be using a smartcard:

(image)

smartcard-init.sh on Github

If yes, whiptail provides the user with the opportunity to initialize the smartcard with their name, preferred language and login, and change their admin PIN, user PIN, and reset code.

I outlined the commands and interactions necessary to edit personal info on the smartcard using gpg --card-edit, and to send the keys to the card with gpg --edit-key, in smartcard-workflow. There's no batch mode for smartcard operations and there's no "quick" command for it just yet (as in --quick-addkey). One option would be to try this out with command-fd/command-file. Currently, Python bindings for GPGME are under development, so that is another possibility.
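For reference, the interactive flow being wrapped looks roughly like this (the key ID is a placeholder, and each command prompts for its values):

$ gpg --card-edit
gpg/card> admin          # enable admin commands
gpg/card> name           # set the cardholder's name
gpg/card> lang           # preferred language
gpg/card> login          # login data
gpg/card> passwd         # change user PIN, reset code and admin PIN
gpg/card> quit

$ gpg --edit-key 0xDEADBEEF
gpg> key 1               # select the subkey to move
gpg> keytocard           # transfer it to the smartcard
gpg> save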

We can use this workflow to support two smartcards - one for the primary key and one for the subkeys - a setup that would also support subkey rotation.




Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2016

Mon, 16 Jan 2017 14:39:05 +0000

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 175 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Antoine Beaupré did 20.5 hours (out of 13.5 hours allocated + 7 remaining hours).
  • Balint Reczey did 10 hours (out of 13.5 hours allocated, thus keeping 2.5 hours for January).
  • Ben Hutchings did 10 hours (out of 13.5 hours allocated + 2 hours remaining, thus keeping 5.5 extra hours for January).
  • Brian May did 10 hours.
  • Chris Lamb did 13.5 hours.
  • Emilio Pozuelo Monfort did 11 hours (out of 13.5 hours allocated, thus keeping 2.5 extra hours for January).
  • Guido Günther did 8 hours.
  • Hugo Lefeuvre did 11 hours (out of 13.5 hours allocated, thus keeping 2.5 extra hours for January).
  • Jonas Meurer did 5.25 hours (out of 12 hours allocated, thus keeping 6.75 extra hours for January).
  • Markus Koschany did 13.5 hours.
  • Ola Lundqvist did 13.5 hours.
  • Raphaël Hertzog did 10 hours.
  • Roberto C. Sanchez did 13.5 hours.
  • Thorsten Alteholz did 13.5 hours.

Evolution of the situation

The number of sponsored hours did not increase but a new silver sponsor is in the process of joining. We are only missing another silver sponsor (or two to four bronze sponsors) to reach our objective of funding the equivalent of a full time position.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file 27. The situation improved a little bit compared to last month.

Thanks to our sponsors

New sponsors are in bold.

  • Platinum sponsors: TOSHIBA (for 14 months), GitHub (for 5 months).
  • Gold sponsors: The Positive Internet (for 30 months), Blablacar (for 29 months), Linode LLC (for 19 months), Babiel GmbH (for 8 months), Plat’Home (for 8 months).
  • Silver sponsors: Domeneshop AS (for 29 months), Université Lille 3 (for 29 months), Trollweb Solutions (for 27 months), Nantes Métropole (for 23 months), University of Luxembourg (for 21 months), Dalenys (for 20 months), Univention GmbH (for 15 months), Université Jean Monnet de St Etienne (for 15 months), Sonus Networks (for 9 months), UR Communications BV (for 3 months), maxcluster GmbH (for 3 months).
  • Bronze sponsors: David Ayers – IntarS Austria (for 30 months), Evolix (for 30 months), Offensive Security (for 30 months), Seznam.cz, a.s. (for 30 months), Freeside Internet Service (for 29 months), MyTux (for 29 months), Linuxhotel GmbH (for 27 months), Intevation GmbH (for 26 months), Daevel SARL (for 25 months), Bitfolk LTD (for 24 months), Megaspace Internet Services GmbH (for 24 months), Greenbone Networks GmbH (for 23 months), NUMLOG (for 23 months), WinGo AG (for 22 months), Ecole Centrale de Nantes – LHEEA (for 19 months), Sig-I/O (for 16 months), Entr’ouvert (for 14 months), Adfinis SyGroup AG (for 11 months), Laboratoire LEGI – UMR 5519 / CNRS (for 6 months), Quarantainenet BV (for 6 months), GNI MEDIA (for 5 months), RHX Srl (for 3 months).

[...]



Maria Glukhova: APK, images and other stuff.

Mon, 16 Jan 2017 00:00:00 +0000

2 more weeks of my awesome Outreachy journey have passed, so it is time to make an update on my progress.

I continued my work on improving diffoscope by fixing bugs and completing wishlist items. These include:

Improving APK support

I worked on #850501 and #850502 to improve the way diffoscope handles APK files. Thanks to Emanuel Bronshtein for providing a clear description of how to reproduce these bugs and ideas on how to fix them. And special thanks to Chris Lamb for insisting on providing tests for these changes! That part actually proved to be a little more tricky, and I managed to mess up these tests (extra thanks to Chris for cleaning up the mess I created). I hope that also means I learned something from my mistakes. Also, I was pleased to see the F-droid Verification Server as a sign of F-droid progress on the reproducible builds effort - I hope these changes to diffoscope will help them!

Adding support for image metadata

That came from #849395 - a request was made to compare image metadata along with image content. Diffoscope has support for three types of images: JPEG, MS Windows Icon (*.ico) and PNG. Among these, PNG already had good image metadata support thanks to the sng tool, so I worked on .jpeg and .ico file support. I initially tried to use exiftool for extracting metadata, but then I discovered it does not handle .ico files, so I decided to use a bigger force - ImageMagick's identify - for this task. I was glad to see it had that handy -format option I could use to select only the necessary fields (I found their -verbose, well, too verbose for the task) and present them in a defined form, negating the need to filter its output.

What was particularly interesting and important for me in terms of learning: while working on this feature, I discovered that, at the moment, diffoscope could not handle .ico files at all - the img2txt tool, which was used for retrieving image content, did not support that type of image. But instead of recognizing this as a bug and resolving it, I started to think of a possible workaround allowing image metadata to be retrieved even after retrieving image content failed. Definitely not very good thinking. Thanks to Mattia Rizzolo for actually recognizing this as a bug and filing it, and to Chris Lamb for fixing it!

Other work

Order-like differences, part 2

In the previous post, I mentioned Lunar's suggestion to use hashing for finding order-like differences in a wide variety of input data. I implemented that idea, but after discussion with my mentor, we decided it is probably not worth it - this change would alter quite a lot of things in the core modules of diffoscope, and the gain would not be really significant. Still, implementing it was an important experience for me, as I had to hack on the deepest and, arguably, most difficult modules of diffoscope and gained some insight into how they work.

Comparing with several tools (work in progress)

Although my initial motivation for this idea was flawed (the workaround I mentioned earlier for .ico files), it still might be useful to have a mechanism that allows running several commands for finding differences, and then giving the output of those that succeed, failing if and only if they all have failed. One possible case when it might happen is when we use commands coming from different tools, and one of them is not installed[...]
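As an illustration of the identify -format approach mentioned above (the field selection here is my own example, not necessarily the format string diffoscope ends up using):

$ identify -format "Type: %m\nDimensions: %w x %h\nDepth: %z bits\n" icon.ico   # %m/%w/%h/%z are ImageMagick percent escapes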



Sam Hartman: Musical Visualization of Network Traffic

Sun, 15 Jan 2017 23:19:26 +0000

I've been working on a fun holiday project in my spare time lately. It all started innocently enough. The office construction was nearing its end, and it was time for my workspace to be set up. Our deployment wizard and I were discussing. Normally we stick two high-end monitors on a desk. I'm blind; that seemed silly. He wanted to do something similarly nice for me, so he replaced one of the monitors with excellent speakers. They are a joy to listen to, but I felt like I should actually do something with them. So, I wanted to play around with some sort of audio project.
I decided to take a crack at an audio representation of network traffic. The Solaris version of ping used to have an audio option, which would produce sound for successful pings. In the past I've used audio cues to monitor events like service health and build status.
It seemed like you could produce audio to give an overall feel for what was happening on the network. I was imagining a quick listen would be able to answer questions like:

  1. How busy is the network?

  2. How many sources are active?

  3. Is the traffic a lot of streams or just a few?

  4. Are there any interesting events such as packet loss or congestion collapse going on?

  5. What's the mix of services involved?


I divided the project into three segments, which I will write about in future entries:

  • What parts of the network to model

  • How to present the audio information

  • Tools and implementation


I'm fairly happy with what I have. It doesn't represent all the items above. As an example, it doesn't directly track packet loss or retransmissions, nor does it directly distinguish one service from another. Still, just because of the traffic flow, rsync sounds different from http. It models enough of what I'm looking for that I find it to be a useful tool. And I learned a lot about music and Linux audio. I also got to practice designing discrete-time control functions in ways that brought back the halls of MIT.
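
None of the following is Sam's implementation; it is just a toy sketch of the general idea, mapping a made-up series of per-second packet counts to the pitch of short tones and writing the result to a WAV file:

import math
import struct
import wave

RATE = 44100

def tone(freq_hz, seconds=0.25, volume=0.3):
    # one short sine burst; a busier second gets a higher pitch below
    n = int(RATE * seconds)
    return [volume * math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

# hypothetical per-second packet counts observed on an interface
packet_counts = [10, 40, 200, 800, 100]

samples = []
for count in packet_counts:
    samples.extend(tone(200 + 2 * count))

with wave.open("traffic.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))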



Dirk Eddelbuettel: Rcpp 0.12.9: Next round

Sun, 15 Jan 2017 22:26:00 +0000

Yesterday afternoon, the ninth update in the 0.12.* series of Rcpp made it to the CRAN network for GNU R. Windows binaries have by now been generated; and the package was updated in Debian too. This 0.12.9 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, and the 0.12.8 release in November --- making it the thirteenth release at the steady bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 906 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by sixty-three packages over the two months since the last release -- or about a package a day!

Some of the changes in this release are smaller and detail-oriented. We did squash one annoying bug (stemming from the improved exception handling) in Rcpp::stop() that hit a few people. Nathan Russell added a sample() function (similar to the optional one in RcppArmadillo); this required a minor cleanup for a small number of other packages which used both namespaces 'opened'. Date and Datetime objects now have format() methods and << output support. We now have coverage reports via covr as well. Last but not least James "coatless" Balamuta was once more tireless on documentation and API consistency --- see below for more details.

Changes in Rcpp version 0.12.9 (2017-01-14)

Changes in Rcpp API:
  • The exception stack message is now correctly demangled on all compiler versions (Jim Hester in #598).
  • Date and Datetime object and vector now have format methods and operator<< support (#599).
  • The size operator in Matrix is explicitly referenced avoiding a g++-6 issue (#607 fixing #605).
  • The underlying date calculation code was updated (#621, #623).
  • Addressed improper diagonal fill for non-symmetric matrices (James Balamuta in #622 addressing #619).

Changes in Rcpp Sugar:
  • Added new Sugar function sample() (Nathan Russell in #610 and #616).
  • Added new Sugar function Arg() (James Balamuta in #626 addressing #625).

Changes in Rcpp unit tests:
  • Added Environment::find unit tests and an Environment::get(Symbol) test (James Balamuta in #595 addressing issue #594).
  • Added diagonal matrix fill tests (James Balamuta in #622 addressing #619).

Changes in Rcpp Documentation:
  • Exposed pointers macros were included in the Rcpp Extending vignette (MathurinD; James Balamuta in #592 addressing #418).
  • The file Rcpp.bib moved to directory bib which is guaranteed to be present (#631).

Changes in Rcpp build system:
  • Travis CI now also calls covr for coverage analysis (Jim Hester in PR #591).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings. [...]



Mehdi Dogguy: Debian from 10,000 feet

Sun, 15 Jan 2017 17:12:00 +0000

Many of you are big fans of S.W.O.T analysis, I am sure of that! :-) Technical competence is our strongest suit, but we have reached a size and sphere of influence which requires an increase in organisation.

We all love our project and want to make sure Debian still shines in the next decades (and centuries!). One way to secure that goal is to identify elements/events/things which could put that goal at risk. To this end, we've organized a short S.W.O.T analysis session at DebConf16. Minutes of the meeting can be found here. I believe it is an interesting read and is useful for Debian old-timers as well as newcomers. It helps to convey a better understanding of the project's status. For each item, we've tried to identify an action.

Here are a few things we've worked on:

Identify new potential contributors by attending and speaking at conferences where Free and Open Source software is still not very well-known, or where we have too few contributors. Each Debian developer is encouraged to identify events where we can promote FOSS and Debian. As DPL, I'd be happy to cover expenses to attend such events.

Our average age is also growing over the years. It is true that we could attract more new contributors than we already do. We can organize short internships. We should not wait for students to come to us. We can get in touch with universities and engineering schools and work together on a list of topics. It is easy and will give us the opportunity to reach out to more students. It is true that we have tried in the past to do that. We may organize a sprint with interested people and share our experience on trying to do internships on Debian-related subjects. If you have successfully done that in the past and managed to attract new contributors that way, please share your experience with us! If you see other ways to attract new contributors, please get in touch so that we can discuss!

It is not easy to get started in the project. It could be argued that all the information is available, but rather than being easily findable from one starting point, it is scattered over several places (documentation on our website, wiki, metadata on bug reports, etc…). Fedora and Mozilla both worked on this subject and did build a nice web application to make this easier and nicer. The result of this is asknot-ng. A whatcanidofor.debian.org would be wonderful! Any takers? We can help by providing a virtual machine to build this. Being a DD is not mandatory. Everyone is welcome!

Cloud images for Debian. This is a very important point since cloud providers are now major distribution consumers. We have to ensure that Debian is correctly integrated in the cloud, without making compromises on our values and philosophy. I believe this item has been worked on during the last Debian Cloud sprint. I am looking forward to seeing the positive effects of this sprint in the long term. I believe it does help us to build a stronger relationship with cloud providers and gives us a nice opportunity to work with them on a shared set of goals!

During the next DebConf, we can review the progress that has been made on each item and discuss new ones. In addition to this session acting as a health check, I see it as a way for the DPL to discuss, openly and publicly, about the important changes that shoul[...]



Mike Gabriel: UIF bug: Caused by flawed IPv6 DNS resolving in Perl's NetAddr::IP

Sat, 14 Jan 2017 22:16:32 +0000

TL;DR; If you use NetAddr::IP->new6() for resolving DNS names to IPv6 addresses, the addresses returned by NetAddr::IP are not what you might expect. See below for details.

Issue #2 in UIF

Over the last couple of days, I tried to figure out the cause of a weird issue observed in UIF (Universal Internet Firewall [1], a nice Perl tool for setting up ip(6)tables based firewalls). Already a long time ago, I stumbled over a weird issue when resolving DNS names to IPv6 addresses in UIF that I reported as issue #2 [2] against upstream UIF back then. I happen to be co-author of UIF. So, I felt very ashamed all the time for not fixing the issue any sooner. As many of us DDs try to get our packages into shape before the next Debian release these days, I find myself doing the same. I started investigating the underlying cause of issue #2 in UIF a couple of days ago.

Issue #119858 on CPAN

Today, I figured out that the Perl code in UIF is not causing the observed phenomenon. The same behaviour is reproducible with a minimal and pure NetAddr::IP based Perl script (reported as Debian bug #851388 [4]). Thanks to Gregor Herrmann for forwarding the Debian bug upstream (#119858 [3]).

Here is the example script that shows the flawed behaviour:

#!/usr/bin/perl

use NetAddr::IP;

my $hostname = "google-public-dns-a.google.com";
my $ip6 = NetAddr::IP->new6($hostname);
my $ip4 = NetAddr::IP->new($hostname);

print "$ip6 <- WTF???\n";
print "$ip4\n";

exit(0);

... gives ...

[mike@minobo ~]$ ./netaddr-ip_resolv-ipv6.pl
0:0:0:0:0:0:808:808/128 <- WTF???
8.8.8.8/32

In words...

So what happens in NetAddr::IP is that with the new6() "constructor" you initialize a new IPv6 address. If the address is a DNS name, NetAddr::IP internally resolves it into an IPv4 address and converts this IPv4 address into some IPv6'ish format. This bogus IPv6 address is not the one matching the given DNS name.

Impacted Software in Debian

Various Debian packages use NetAddr::IP and may be affected by this flaw; here is an incomplete list (use apt-rdepends -r libnetaddr-ip-perl for the complete list):

  • spamassassin
  • postgrey
  • postfix-policyd-spf-perl
  • mtpolicyd
  • xen-tools
  • fwsnort
  • freeip-server
  • 389-ds
  • uif

Any of the above packages could be affected if NetAddr::IP->new6() is being used. I haven't checked any of the code bases, but possibly the corresponding maintainers may want to do that.

References

[1] https://github.com/cajus/uif/
[2] https://github.com/cajus/uif/issues/2
[3] https://rt.cpan.org/Public/Bug/Display.html?id=119858
[4] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=851388

light+love
Mike
[...]
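
As a contrast sketch (in Python rather than Perl, and not part of UIF), restricting the resolver to the IPv6 address family returns the host's real AAAA records instead of a mangled IPv4 address:

import socket

hostname = "google-public-dns-a.google.com"

# AF_INET6 limits the lookup to genuine IPv6 (AAAA) results
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        hostname, None, socket.AF_INET6):
    print(sockaddr[0])    # e.g. 2001:4860:4860::8888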



Russ Allbery: Review: Enchanters' End Game

Sat, 14 Jan 2017 20:18:00 +0000

Review: Enchanters' End Game, by David Eddings

Series: The Belgariad #5
Publisher: Del Rey
Copyright: December 1984
Printing: February 1990
ISBN: 0-345-33871-5
Format: Mass market
Pages: 372

And, finally, the conclusion towards which everything has been heading, and the events for which Castle of Wizardry was the preparation. (This is therefore obviously not the place to start with this series.) Does it live up to all the foreshadowing and provide a satisfactory conclusion? I'd say mostly. The theology is a bit thin, but Eddings does a solid job of bringing all the plot threads together and giving each of the large cast a moment to shine.

Enchanters' End Game (I have always been weirdly annoyed by that clunky apostrophe) starts with more of Garion and Belgarath, and, similar to the end of Castle of Wizardry, this feels like them rolling on the random encounter table. There is a fairly important bit with Nadraks at the start, but the remaining detour to the north is a mostly unrelated bit of world-building. Before this re-read, I didn't remember how extensive the Nadrak parts of this story were; in retrospect, I realize a lot of what I was remembering is in the Mallorean instead. I'll therefore save my commentary on Nadrak gender roles for an eventual Mallorean re-read, since there's quite a lot to dig through and much of it is based on information not available here.

After this section, though, the story leaves Garion, Belgarath, and Silk for nearly the entire book, returning to them only for the climax. Most of this book is about Ce'Nedra, the queens and kings of the west, and what they're doing while Garion and his small party are carrying the Ring into Mordor— er, you know what I mean.

And this long section is surprisingly good. We first get to see the various queens of the west doing extremely well managing the kingdoms while the kings are away (see my previous note about how Eddings does examine his stereotypes), albeit partly by mercilessly exploiting the sexism of their societies. The story then picks up with Ce'Nedra and company, including all of the rest of Garion's band, being their snarky and varied selves. There are some fairly satisfying set pieces, some battle tactics, some magical tactics, and a good bit of snarking and interplay between characters who feel like old friends by this point (mostly because of Eddings's simple, broad-strokes characterization).

And Ce'Nedra is surprisingly good here. I would say that she's grown up after the events of the last book, but sadly she reverts to being awful in the aftermath. But for the main section of the book, partly because she's busy with other things, she's a reasonable character who experiences some actual consequences and some real remorse from one bad decision she makes. She's even admirable in how she handles events leading up to the climax of the book.

Eddings does a good job showing every character in their best light, putting quite a lot of suspense (and some dramatic rescues) into this final volume, and providing a final battle that's moderately interesting. I'm not sure I entirely bought th[...]



Sven Hoexter: moto g falcon reactivation and exodus mod

Sat, 14 Jan 2017 13:43:36 +0000

I started to reactivate my old moto g falcon during the last days of CyanogenMod in December of 2016. The first step was a recovery update to TWRP 3.0.2-2 so I was able to flash CM13/14 builds. While CM14 nightly builds did not boot at all, the CM13 builds did, but up to the last build wifi connections to the internet did not work. I could actually register with my wifi (Archer C7 running OpenWRT) but all apps claimed the internet connection check failed and I was offline. So bummer, without wifi a smartphone is not much fun.

I was pretty sure that wifi worked when I last used that phone about 1.5 years ago with CM11/12, so I started to dive into the forums of xda-developers to look for alternatives. Here I found out about Exodus. I've a bit of trouble trusting stuff from xda-developers forums but what the hell, the phone is empty anyway so nothing to lose, and I flashed the latest falcon build.

To flash it I had to clean the whole phone, format all partitions via TWRP and then sideload the zip image file via adb (adb from the Debian/stretch adb package works like a charm, thank you guys!). Booted and bäm, wifi works again! Now, Exodus is a really stripped-down mod; to do anything useful with it I had to activate the developer options and allow USB debugging. Afterwards I could install the f-droid and Opera apk via "adb install foo.apk".

Lineage OS

As I could derive from another thread on xda-developers Lineage OS has the falcon still on the shortlist for 14.x nightly builds. Maybe that will be an alternative again in the future. For now Exodus is a bit behind the curve (based on Android 6.0.1 from September 2016) but at least it's functional.




Jonathan McDowell: Cloning a USB LED device

Sat, 14 Jan 2017 11:53:56 +0000

(image)

A month or so ago I got involved in a discussion on IRC about notification methods for a headless NAS. One of the options considered was some sort of USB attached LED. DealExtreme had a cheap “Webmail notifier”, which was already supported by mainline kernels as a “Riso Kagaku” device but it had been sold out for some time.

This seemed like a fun problem to solve with a tinyAVR and V-USB. I had my USB relay board so I figured I could use that to at least get some code to the point that the kernel detected it as the right device, and the relay output could be configured as one of the colours to ensure it was being driven in roughly the right manner. The lack of a full lsusb dump (at least when I started out) made things a bit harder, plus the fact that the Riso uses an output report unlike the relay code, which uses a control message. However I had the kernel source for the driver and with a little bit of experimentation had something which would cause the driver to be loaded and the appropriate files in /sys/class/leds/ to be created. The relay was then successfully activated when the red LED was supposed to be on.

hid-led 0003:1294:1320.0001: hidraw0: USB HID v1.01 Device [MAIL  MAIL ] on usb-0000:00:14.0-6.2/input0
hid-led 0003:1294:1320.0001: Riso Kagaku Webmail Notifier initialized
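
Once the driver is bound, the LED can be driven from userspace through the standard sysfs LED interface. A minimal sketch follows; the LED name used here is hypothetical and depends on the driver instance, so check /sys/class/leds/ for the real one:

def set_led(led_name, value):
    # 0 turns the LED off; anything up to max_brightness turns it on
    with open("/sys/class/leds/%s/brightness" % led_name, "w") as f:
        f.write(str(value))

# hypothetical name; look in /sys/class/leds/ for the actual entry
set_led("usbled:red", 1)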

I subsequently ordered some Digispark clones and modified the code to reflect the pins there (my relay board used pins 1+2 for USB, the Digispark uses pins 3+4). I then soldered a tricolour LED to the board, plugged it in and had a clone of the Riso Kagaku device for about £1.50 in parts (no doubt much cheaper in bulk). Very chuffed.

In case it’s useful to someone, the code is released under GPLv3+ and is available at https://the.earth.li/gitweb/?p=riso-kagaku-clone.git;a=summary or on GitHub at https://github.com/u1f35c/riso-kagaku-clone. I’m seeing occasional issues on an older Dell machine that only does USB2 with enumeration, but it generally is fine once it gets over that.

(FWIW, Jon, who started the original discussion, ended up with a BlinkStick Nano which is a neater device with 2 LEDs but still based on an ATtiny85.)




Jamie McClelland: What's Up with WhatsApp?

Sat, 14 Jan 2017 02:03:02 +0000

Despite my jaded feelings about corporate Internet services in general, I was surprised to learn that WhatsApp's end-to-end encryption was a lie. In short, it is possible to send an encrypted message to a user that is intercepted and effectively decrypted without the sender's knowledge.

However, I was even more surprised to read Open Whisper Systems' critique of the original story, claiming that it is not a backdoor because the WhatsApp sender's client is always notified when a message is decrypted.

The Open Whisper Systems post acknowledges that the WhatsApp sender can choose to disable these notifications, but claims that is not such a big deal because the WhatsApp server has no way to know which clients have this feature enabled and which do not, so intercepting a message is risky because it could result in the sender realizing it.

However, there is a fairly important piece of information missing, namely: as far as I can tell, the setting to notify users about key changes is disabled by default.

So, using the default installation, your end-to-end encrypted message could be intercepted and decrypted without you or the party you are communicating with knowing it. How is this not a back door? And yes, if the interceptor can't tell whether or not the sender has these notifications turned on, the interceptor runs the risk of someone knowing they have intercepted the message. Great. That's better than nothing. Except that there is strong evidence that many powerful governments on this planet routinely risk exposure in their pursuit of compromising our ability to communicate securely. And... not to mention non-governmental (or governmental) adversaries for whom exposure is not a big deal.

Furthermore, a critical reason for end-to-end encryption is so that your provider does not have the technical capacity to intercept your communications. That's simply not true of WhatsApp. It is true of Signal and OMEMO, which require the active participation of the sender to compromise the communication.

Why in the world would you distribute a client that not only has the ability to suppress such warnings, but has it enabled by default?

Some may argue that users regularly dismiss notifications like "fingerprint has changed" and that this problem is the Achilles heel of secure communications. I agree. But... there is still a monumental difference between a user absent-mindedly dismissing an important security warning and never seeing the warning in the first place.

This flaw in WhatsApp is a critical reminder that secure communications doesn't just depend on a good protocol or technology, but on trust in the people who design and maintain our systems.




Elena 'valhalla' Grandi: Modern XMPP Server

Fri, 13 Jan 2017 12:59:12 +0000

Modern XMPP Server

I've published a new HOWTO on my website: http://www.trueelena.org/computers/howto/modern_xmpp_server.html

Enrico Zini (http://www.enricozini.org/blog/2017/debian/modern-and-secure-instant-messaging/) already wrote about the Why (and the What, Who and When), so I'll just quote his conclusion and move on to the How.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.

How

I've decided to install Prosody (https://prosody.im/), mostly because it was recommended by the RTC QuickStart Guide (http://rtcquickstart.org/); I've heard that similar results can be reached with ejabberd (https://www.ejabberd.im/) and other servers.

I'm also targeting Debian (https://www.debian.org/) stable (+ backports); as I write this is jessie; if there are significant differences I will update this article when I upgrade my server to stretch. Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites

You will need to enable the backports repository (https://backports.debian.org/) and then install the packages prosody and prosody-modules.

You also need to set up some TLS certificates (I used Let's Encrypt, https://letsencrypt.org/) and make them readable by the prosody user; you can see Chapter 12 of the RTC QuickStart Guide (http://rtcquickstart.org/guide/multi/xmpp-server-prosody.html) for more details.

On your firewall, you'll need to open the following TCP ports:

  • 5222 (client2server)
  • 5269 (server2server)
  • 5280 (default http port for prosody)
  • 5281 (default https port for prosody)

The latter two are needed to enable some services provided via http(s), including rich media transfers.

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser alice@example.org

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim, https://en.wikipedia.org/wiki/Messaging_spam).

prosody configuration

You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:

c2s_require_encryption = true
s2s_secure_auth = true

and then, sadly, add to the whitelist any server that you want to talk to and doesn't support the above:

s2s_insecure_domains = { "gmail.com" }

virtualhosts

For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/chat.example.org.cfg.lua with contents like the following:

VirtualHost "chat.example.org"
  enabled = true
  ssl = {
    key = "/etc/ssl/private/example.org-key.pem";
    certificate = "/etc/ssl/public/example.org.pem";
  }

For the domains where you also want to enable MUCs, add the following lines:

Component "conference.chat.example.org" "muc" re[...]



Ben Hutchings: Debian 8 kernel security update

Thu, 12 Jan 2017 22:41:10 +0000

(image)

There are a fair number of outstanding security issues in the Linux kernel for Debian 8 "jessie", but none of them were considered serious enough to issue a security update and DSA. Instead, most of them are being fixed through the point release (8.7) which will be released this weekend. Don't forget that you need to reboot to complete a kernel upgrade.

This update to linux (version 3.16.39-1) also adds the perf security mitigation feature from Grsecurity. You can disable unprivileged use of perf entirely by setting sysctl kernel.perf_event_paranoid=3. (This is the default for Debian "stretch".)




Ben Hutchings: Debian LTS work, December 2016

Thu, 12 Jan 2017 22:30:22 +0000

(image)

I was assigned 13.5 hours of work by Freexian's Debian LTS initiative and carried over 2 from November. I worked only 10 hours, so I carry over 5.5 hours.

As for the last few months, I spent all of this time working on the linux (kernel) package. I backported several security fixes and did some testing of the more invasive changes.

I also added the option to mitigate security issues in the performance events (perf) subsystem by disabling use by unprivileged users. This feature comes from Grsecurity and has been included in Debian unstable and Android kernels for a while. However, for Debian 7 LTS it has to be explicitly enabled by setting sysctl kernel.perf_event_paranoid=3.
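
As a small illustration (not part of the update itself), the current level can be inspected and changed at runtime through procfs; making it persistent across reboots would additionally need a sysctl configuration entry:

PATH = "/proc/sys/kernel/perf_event_paranoid"

def perf_event_paranoid(value=None):
    # writing requires root; 3 disables perf use by unprivileged users
    if value is not None:
        with open(PATH, "w") as f:
            f.write("%d\n" % value)
    with open(PATH) as f:
        return int(f.read())

print(perf_event_paranoid())      # read the current level
# perf_event_paranoid(3)          # uncomment as root to apply the mitigation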

I uploaded these changes as linux 3.2.84-1 and then (on 1st January) issued DLA 722-1.




Ritesh Raj Sarraf: Laptop Mode Tools 1.71

Thu, 12 Jan 2017 08:54:02 +0000

I am pleased to announce the 1.71 release of Laptop Mode Tools. This release includes some new modules, some bug fixes, and there are some efficiency improvements too. Many thanks to our users; most changes in this release are contributions from our users. A filtered list of changes is mentioned below. For the full log, please refer to the git repository.

Source tarball, Fedora/SUSE RPM Packages available at: https://github.com/rickysarraf/laptop-mode-tools/releases

Debian packages will be available soon in Unstable.

Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki
Mailing List: https://groups.google.com/d/forum/laptop-mode-tools

1.71 - Thu Jan 12 13:30:50 IST 2017

    * Fix incorrect import of os.putenv
    * Merge pull request #74 from Coucouf/fix-os-putenv
    * Fix documentation on where we read battery capacity from
    * cpuhotplug: allow disabling specific cpus
    * Merge pull request #78 from aartamonau/cpuhotplug
    * runtime-pm: refactor listed_by_id()
    * wireless-power: Use iw and fallback to iwconfig if it not available
    * Prefer available AC supply information over battery state to determine ON_AC
    * On startup, we want to force the full execution of LMT.
    * Device hotplugs need a forced execution for LMT to apply the proper settings
    * runtime-pm: Refactor list_by_type()
    * kbd-backlight: New module to control keyboard backlight brightness
    * Include Transmit power saving in wireless cards
    * Don't run in a subshell
    * Try harder to check battery charge
    * New module: vgaswitcheroo
    * Revive bluetooth module. Use rfkill primarily. Also don't unload (incomplete list of) kernel modules

What is Laptop Mode Tools

Description: Tools for Power Savings based on battery/AC status

Laptop mode is a Linux kernel feature that allows your laptop to save considerable power, by allowing the hard drive to spin down for longer periods of time. This package contains the userland scripts that are needed to enable laptop mode.

It includes support for automatically enabling laptop mode when the computer is working on batteries. It also supports various other power management features, such as starting and stopping daemons depending on power mode, automatically hibernating if battery levels are too low, and adjusting terminal blanking and X11 screen blanking.

laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus is also used on Desktops and Servers to conserve power.

Categories: Debian-Blog, Computing, Tools
Keywords: Laptop Mode Tools, power saving, RHUT

[...]



Steinar H. Gunderson: 3G-SDI signal support

Wed, 11 Jan 2017 19:03:00 +0000

(image)

I had to figure out what kinds of signal you can run over 3G-SDI today, and it's pretty confusing, so I thought I'd share it.

For reference, 3G-SDI is the same as 3G HD-SDI, an extension of HD-SDI, which is an extension of the venerable SDI standard (well, duh). They're all used for running uncompressed audio/video data over regular BNC coaxial cable, possibly hundreds of meters, and are in wide use in professional and semiprofessional setups.

So here's the rundown on 3G-SDI capabilities:

  • 1080p60 supports 10-bit 4:2:2 Y'CbCr. Period.
  • 720p60/1080p30/1080i60 supports a much wider range of formats: 10-bit 4:4:4:4 RGBA (alpha optional), 10-bit 4:4:4:4 Y'CbCrA (alpha optional), 12-bit 4:4:4 RGB, 12-bit 4:4:4 Y'CbCr or finally 12-bit 4:2:2 Y'CbCr (seems rather redundant).
  • There's also a format exclusively for 1080p24 (actually 2048x1080) that supports 12-bit X'Y'Z. Digital cinema, hello. Apart from that, it supports pretty much what 1080p30 does. There's also a 2048x1080p30 (no interlaced version) mode for 12-bit 4:2:2:4 Y'CbCrA, but it seems rather obscure.

And then there's dual-link 3G-SDI, which uses two cables instead of one—and there's also Blackmagic's proprietary “6G-SDI”, which supports basically everything dual-link 3G-SDI does. But in 2015, seemingly there was also a real 6G-SDI and 12G-SDI, and it's unclear to me whether it's in any way compatible with Blackmagic's offering. It's all confusing. But at least, these are the differences from single-link to dual-link 3G-SDI:

  • 1080p60 supports essentially everything that 720p60 supports on single-link: 10-bit 4:4:4:4 RGBA (alpha optional), 10-bit 4:4:4:4 Y'CbCrA (alpha optional), 12-bit 4:4:4 RGB, 12-bit 4:4:4 Y'CbCr and the redundant 12-bit 4:2:2 Y'CbCr.
  • 2048x1080 4:4:4 X'Y'Z' now also supports 1080p25 and 1080p30.

4K? I don't know. 120fps? I believe that's also a proprietary extension of some sort.

And of course, having a device support 3G-SDI doesn't mean at all it's required to support all of this; in particular, I believe Blackmagic's systems don't support alpha at all except on their single “12G-SDI” card, and I'd also not be surprised if RGB support is rather limited in practice.




Sven Hoexter: Failing with F5: using experimental mv feature on a pool causes tmm to segfault

Wed, 11 Jan 2017 17:36:53 +0000

Just a short PSA for those around working with F5 devices:

TMOS 11.6 introduced an experimental "mv" command in tmsh. In the last days we tried it for the first time on TMOS 12.1.1. It worked fine for a VirtualServer, but a mv for a pool caused a segfault in tmm. We're currently working with F5 support to sort it out; they think it's a known issue. The recommendation for now is to not use mv on pools. Just do it the old way: create a new pool, assign the new pool to the relevant VS and delete the old pool.

Possible bug ID at F5 is ID562808. Since I can not find it in the TMOS 12.2 release notes I expect that this issue also applies to TMOS 12.2, but I did not verify that.




Reproducible builds folks: Reproducible Builds: week 89 in Stretch cycle

Wed, 11 Jan 2017 15:04:04 +0000

What happened in the Reproducible Builds effort between Sunday January 1 and Saturday January 7 2017:

GSoC and Outreachy updates

  • Maria Glukhova blogged about Getting to know diffoscope better.

Toolchain development

  • #849999 was filed: "dpkg-dev should not set SOURCE_DATE_EPOCH to the empty string"

Packages reviewed and fixed, and bugs filed

  • Chris Lamb: #849968 filed against pciutils, #849972 filed against yard, #850556 filed against sed.
  • Dhole: #849926 filed against nxt-firmware.

Reviews of unreproducible packages

13 package reviews have been added, 4 have been updated and 6 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been added/updated:

  • Add new randomness_in_documentation_generated_by_yardoc toolchain issue.
  • Add fix for randomness_in_documentation_generated_by_yardoc toolchain issue.

Upstreaming of reproducibility fixes

Merged: calibre, jna, rocksndiamonds, avfs, jackaudio1, jackaudio2, gri

Opened: rockdodger

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by: Chris Lamb (4)

diffoscope development

diffoscope 67 was uploaded to unstable by Chris Lamb. It included contributions from:

[ Chris Lamb ]
* Optimisations:
  - Avoid multiple iterations over archive by unpacking once for an ~8X runtime optimisation.
  - Avoid unnecessary splitting and interpolating for a ~20X optimisation when writing --text output.
  - Avoid expensive diff regex parsing until we need it, speeding up diff parsing by 2X.
  - Alias expensive Config() in diff parsing lookup for a 10% optimisation.
* Progress bar:
  - Show filenames, ELF sections, etc. in progress bar.
  - Emit JSON on the status file descriptor output instead of a custom format.
* Logging:
  - Use more-Pythonic logging functions and output based on __name__, etc.
  - Use Debian-style "I:", "D:" log level format modifier.
  - Only print milliseconds in output, not microseconds.
  - Print version in debug output so that saved debug outputs can standalone as bug reports.
* Profiling:
  - Also report the total number of method calls, not just the total time.
  - Report on the total wall clock taken to execute diffoscope, including cleanup.
* Tidying:
  - Rename "NonExisting" -> "Missing".
  - Entirely rework diffoscope.comparators module, splitting as many separate concerns into a different utility package, tidying imports, etc.
  - Split diffoscope.difference into diffoscope.diff, etc.
  - Update file references in debian/copyright post module reorganisation.
  - Many other cleanups, etc.
* Misc:
  - Clarify comment regarding why we call python3(1) directly. Thanks to Jérémy Bobbio.
  - Raise a clearer error if trying to use --html-dir on a file.
  - Fix --output-empty when files are identical and no outputs specified.

[ Reiner Herrmann ]
* Extend .apk recogni[...]



Dirk Eddelbuettel: R / Finance 2017 Call for Papers

Wed, 11 Jan 2017 11:44:00 +0000

Last week, Josh sent the call for papers to the R-SIG-Finance list making everyone aware that we will have our ninth annual R/Finance conference in Chicago in May. Please see the call for papers (at the link below, or at the website) and consider submitting a paper.

We are once again very excited about our conference, thrilled about upcoming keynotes and hope that many R / Finance users will not only join us in Chicago in May 2017 -- but also submit an exciting proposal.

We also overhauled the website, so please see R/Finance. It should render well and fast on devices of all sizes: phones, tablets, desktops with browsers in different resolutions. The program and registration details still correspond to last year's conference and will be updated in due course.

So read on below, and see you in Chicago in May!

Call for Papers

R/Finance 2017: Applied Finance with R
May 19 and 20, 2017
University of Illinois at Chicago, IL, USA

The ninth annual R/Finance conference for applied finance using R will be held on May 19 and 20, 2017 in Chicago, IL, USA at the University of Illinois at Chicago. The conference will cover topics including portfolio management, time series analysis, advanced risk tools, high-performance computing, market microstructure, and econometrics. All will be discussed within the context of using R as a primary tool for financial risk management, portfolio construction, and trading.

Over the past eight years, R/Finance has included attendees from around the world. It has featured presentations from prominent academics and practitioners, and we anticipate another exciting line-up for 2017.

We invite you to submit complete papers in pdf format for consideration. We will also consider one-page abstracts (in txt or pdf format) although more complete papers are preferred. We welcome submissions for both full talks and abbreviated "lightning talks." Both academic and practitioner proposals related to R are encouraged.

All slides will be made publicly available at conference time. Presenters are strongly encouraged to provide working R code to accompany the slides. Data sets should also be made public for the purposes of reproducibility (though we realize this may be limited due to contracts with data vendors). Preference may be given to presenters who have released R packages.

Financial assistance for travel and accommodation may be available to presenters, however requests must be made at the time of submission. Assistance will be granted at the discretion of the conference committee.

Please submit proposals online at http://go.uic.edu/rfinsubmit. Submissions will be reviewed and accepted on a rolling basis with a final deadline of February 28, 2017. Submitters will be notified via email by March 31, 2017 of acceptance, presentation length, and financial assistance (if requested).

Additional details will be announced via the conference website as they become available. Information on p[...]



Enrico Zini: Modern and secure instant messaging

Wed, 11 Jan 2017 11:43:32 +0000

Conversations is a really nice, actively developed, up to date XMPP client for Android that has the nice feature of telling you what XEPs are supported by the server one is using:

(image)

Some days ago, me and Valhalla played the game of trying to see what happens when one turns them all on: I would send her screenshots from my Conversations, and she would poke at her Prosody to try and turn things on:

(image)

Valhalla eventually managed to get all features activated, purely using packages from Jessie+Backports:

(image)

The result was a chat system in which I could see the same conversation history on my phone and on my laptop (with gajim, https://gajim.org/), and have it synced even after a device has been offline.

We could send each other rich media like photos, and could do OMEMO encryption (same as Signal) in group chats.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.

Valhalla has documented the whole procedure.

If you make a client for a protocol with lots of extensions, do like Conversations and implement a status page with the features you'd like to have on the server, and little green indicators showing which are available: it is quite a good motivator for getting them all supported.




Dirk Eddelbuettel: nanotime 0.1.0: Now on Windows

Wed, 11 Jan 2017 00:49:00 +0000

Last month, we released nanotime, a package to work with nanosecond timestamps. See the initial release announcement for some background material and a few first examples.

nanotime relies on the RcppCCTZ package for high(er) resolution time parsing and formatting: R itself stops a little short of a microsecond. And it uses the bit64 package for the actual arithmetic: time at this granularity is commonly represented as (integer) increments (at nanosecond resolution) relative to an offset, for which the standard epoch of January 1, 1970 is used. int64 types are a perfect match here, and bit64 gives us an integer64. Naysayers will point out some technical limitations with R's S3 classes, but it works pretty much as needed here.

The one thing we did not have was Windows support. RcppCCTZ and the CCTZ library it uses need real C++11 support, and the g++-4.9 compiler used on Windows falls a little short, lacking inter alia a suitable std::get_time() implementation. Enter Dan Dillon, who ported this from LLVM's libc++, which led to Sunday's RcppCCTZ 0.2.0 release. And now we have all our ducks in a row: everything works on Windows too.

The next paragraph summarizes the changes for both this release as well as the initial one last month:

Changes in version 0.1.0 (2017-01-10)

  • Added Windows support thanks to expanded RcppCCTZ (closes #6)
  • Added "mocked up" demo with nanosecond delay networking analysis
  • Added 'fmt' and 'tz' options to output functions, expanded format.nanotime (closing #2 and #3)
  • Added data.frame support
  • Expanded tests

Changes in version 0.0.1 (2016-12-15)

  • Initial CRAN upload.
  • Package is functional and provides examples.

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings. [...]
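
To illustrate the underlying representation (shown here in Python/NumPy rather than R), a nanosecond timestamp is simply a 64-bit integer count of nanoseconds since the 1970-01-01 epoch:

import numpy as np

t = np.datetime64("2017-01-10T12:34:56.123456789")    # nanosecond precision
ns = t.astype("datetime64[ns]").astype(np.int64)       # int64 offset from the epoch
print(ns)                                              # 1484051696123456789
print(np.datetime64(int(ns), "ns"))                    # round-trips to the same instant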



Bálint Réczey: Debian Developer Game of the Year

Tue, 10 Jan 2017 22:03:28 +0000

(image)

(image) I have just finished level one, fixing all RC bugs in packages under my name, even in team-maintained ones. (image)

Next level is no unclassified bug reports, which is going to be harder since I have just adopted shadow with 70+ open bugs. :-\

Luckily I can still go on bonus tracks, which is fixing (RC) bugs in others’ packages, but one should not spend all the time on those tracks before finishing level 1!

PS: Last time I tried playing a conventional game I ended up fixing it in a few minutes instead.




Vincent Fourmond: Version 2.1 of QSoas is out

Tue, 10 Jan 2017 07:47:19 +0000

I have just released QSoas version 2.1. It brings in a new solve command to solve arbitrary non-linear equations of one unknown. I took advantage of this command in the figure to solve the equation for . It also provides a new way to reparametrize fits using the reparametrize-fit command, a new series of fits to model the behaviour of an adsorbed 1- or 2-electron catalyst on an electrode (these fits are discussed in great detail in our recent review (DOI: 10.1016/j.coelec.2016.11.002)), improvements in various commands, the possibility to now compile using Ruby 2.3 and the most recent version of the GSL library, and sketches for an emacs major mode, which you can activate (for QSoas script files, ending in .cmds) using the following snippet in $HOME/.emacs:

(autoload 'qsoas-mode "$HOME/Prog/QSoas/misc/qsoas-mode.el" nil t)
(add-to-list 'auto-mode-alist '("\\.cmds$" . qsoas-mode))

Of course, you'll have to adapt the path $HOME/Prog/QSoas/misc/qsoas-mode.el to the actual location of qsoas-mode.el.

As before, you can download the source code from our website, and purchase the pre-built binaries following the links from that page too. Enjoy !




Sean Whitton: jan17vcspkg

Mon, 09 Jan 2017 20:14:52 +0000

(image)

There have been two long threads on the debian-devel mailing list about the representation of the changes to upstream source code made by Debian maintainers. Here are a few notes for my own reference.

I spent a lot of time defending the workflow I described in dgit-maint-merge(7) (which was inspired by this blog post). However, I came to be convinced that there is a case for a manually curated series of patches for certain classes of package. It will depend on how upstream uses git (rebasing or merging) and on whether the Debian delta from upstream is significant and/or long-standing. I still think that we should be using dgit-maint-merge(7) for leaf or near-leaf packages, because it saves so much volunteer time that can be better spent on other things.

When upstream does use a merging workflow, one advantage of the dgit-maint-merge(7) workflow is that Debian’s packaging is just another branch of development.

Now consider packages where we do want a manually curated patch series. It is very hard to represent such a series in git. The only natural way to do it is to continually rebase the patch series against an upstream branch, but public branches that get rebased are not a good idea. The solution that many people have adopted is to represent their patch series as a folder full of .diff files, and then use gbp pq to convert this into a rebasing branch. This branch is not shared. It is edited, rebased, and then converted back to the folder of .diff files, the changes to which are then committed to git.

One of the advantages of dgit is that there now exists an official, non-rebasing git history of uploads to the archive. It would be nice if we could represent curated patch series as branches in the dgit repos, rather than as folders full of .diff files. But as I just described, this is very hard. However, Ian Jackson has the beginnings of a workflow that just might fit the bill.




Shirish Agarwal: The Great Indian Digital Tamasha

Mon, 09 Jan 2017 13:38:25 +0000

This is an extension to last month’s article/sharing where I had shared the changes that had transpired in the last 2-3 months. Now I am in a position to share the kind of issues a user can go through in case he is looking for support from IRCTC to help him/her go cashless. If you are a new user of IRCTC services you wouldn't go through this trouble. For those who might have TL;DR issues, it's about how hard it can become to get digital credentials fixed in IRCTC (Indian Railway Catering and Tourism Corporation).

a. 2 months back the Indian Prime Minister gave a call incentivizing people to use digital means to do any commercial activities. One of the big organizations which took/takes part is IRCTC, which handles the responsibility for e-ticketing millions of rail tickets for common people. In India, a massive percentage moves by train as it's cheaper than going by air. A typical fare from, say, Pune – Delhi (capital of India) by second class sleeper would be INR 645/- for a distance of roughly 1600 odd kms, and these are monopoly rates; there are no private trains and I'm not suggesting anything of that sort, just making sure that people know. An economy class ticket by air for the same distance would be anywhere between INR 2500-3500/- for a 2 hour flight between different airlines. Last I checked there are around 8 mainstream airlines including flag-carrier Air India. About 30% of the population live on less than a dollar and a half a day, which would come to around INR 100/-. There was a comment some six months back on getting more people out of the poverty line. But as there are lots of manipulations in numbers for who and what denotes above poor and below poor in India, and a lot of it has to do with politics, it's not something which would be easily fixable. There is a lot to be said in that arena, but this is not an appropriate blog post for that. All in all, it's only 3-5% of the population at the most who can travel via air if the situation demands, and around 1-2% who might be frequent, business or leisure travellers. Now while I can thankfully afford an air ticket if the situation so demands, my mother gets motion sickness, so together we can only travel by train.

b. With the above background, I had registered with IRCTC a few years ago with another number (dual-SIM) I had purchased, thinking that I would be using this long-term (seems to have been my first big mistake, hindsight 50:50). This was somewhere in 2006/2007.

c. A few months later I found that the other service provider wasn't giving good service or was not up to the mark. I was using IDEA (the main mobile operator) throughout those times.

d. As I didn't need the service that much, I didn't think to inform them that I want to change to another servi[...]



Petter Reinholdtsen: Where did that package go? — geolocated IP traceroute

Mon, 09 Jan 2017 11:20:00 +0000

Did you ever wonder where the web traffic really flows to reach the web servers, and who owns the network equipment it is flowing through? It is possible to get a glimpse of this from using traceroute, but it is hard to find all the details. Many years ago, I wrote a system to map the Norwegian Internet (trying to figure out if our plans for a network game service would get low enough latency, and who we needed to talk to about setting up game servers close to the users). Back then I used traceroute output from many locations (I asked my friends to run a script and send me their traceroute output) to create the graph and the map. The output from traceroute typically looks like this:

traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
 1  uio-gw10.uio.no (129.240.202.1)  0.447 ms  0.486 ms  0.621 ms
 2  uio-gw8.uio.no (129.240.24.229)  0.467 ms  0.578 ms  0.675 ms
 3  oslo-gw1.uninett.no (128.39.65.17)  0.385 ms  0.373 ms  0.358 ms
 4  te3-1-2.br1.fn3.as2116.net (193.156.90.3)  1.174 ms  1.172 ms  1.153 ms
 5  he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.627 ms  he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48)  3.172 ms  he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.857 ms
 6  ae1.ar8.oslosda310.as2116.net (195.0.242.39)  0.662 ms  0.637 ms  ae0.ar8.oslosda310.as2116.net (195.0.242.23)  0.622 ms
 7  89.191.10.146 (89.191.10.146)  0.931 ms  0.917 ms  0.955 ms
 8  * * *
 9  * * *
[...]

This shows the DNS names and IP addresses of (at least some of) the network equipment involved in getting the data traffic from me to the www.stortinget.no server, and how long it took in milliseconds for a package to reach the equipment and return to me. Three packages are sent, and sometimes the packages do not follow the same path. This is shown for hop 5, where three different IP addresses replied to the traceroute request.

There are many ways to measure trace routes. Other good traceroute implementations I use are traceroute (using ICMP packages), mtr (can do both ICMP, UDP and TCP) and scapy (a python library with ICMP, UDP, TCP traceroute and a lot of other capabilities). All of them are easily available in Debian.

This time around, I wanted to know the geographic location of different route points, to visualize how visiting a web page spreads information about the visit to a lot of servers around the globe. The background is that a web site today often will ask the browser to get from many servers the parts (for example HTML, JSON, fonts, JavaScript, CSS, video) required to display the content. This will leak information about the visit to those controlling these servers and anyone able to peek at the data traffic passing by (like your ISP, the ISP's backbone provider, FRA, GCHQ, NSA a[...]
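
A minimal sketch of the idea, using scapy (which the post mentions) plus the geoip2 package and a locally downloaded GeoLite2 City database; both the database path and the target host are only examples:

from scapy.all import traceroute          # sending the probes typically requires root
import geoip2.database

reader = geoip2.database.Reader("GeoLite2-City.mmdb")   # assumed local database file

ans, _unanswered = traceroute(["www.stortinget.no"], maxttl=20, verbose=0)

# one (ttl, responding IP) pair per hop that answered
for ttl, ip in sorted({(snd.ttl, rcv.src) for snd, rcv in ans}):
    try:
        rec = reader.city(ip)
        place = "%s, %s" % (rec.city.name, rec.country.name)
    except Exception:
        place = "unknown"
    print("%2d  %-15s  %s" % (ttl, ip, place))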



Guido Günther: Debian Fun in December 2016

Mon, 09 Jan 2017 08:24:01 +0000

Debian LTS

November marked the 20th month I contributed to Debian LTS under the Freexian umbrella. I had 8 hours allocated which I used by:

  • some rather quiet frontdesk days
  • updating icedove to 45.5.1 resulting in DLA-752-1 fixing 7 CVEs
  • looking whether Wheezy is affected by xsa-202, xsa-203, xsa-204 and handling the communication with credativ for these (update not yet released)
  • Assessing cURL/libcURL CVE-2016-9586
  • Assessing whether Wheezy's QEMU is affected by security issues in 9pfs "proxy" and "handle" code
  • Releasing DLA-776-1 for samba fixing CVE-2016-2125

Other Debian stuff

  • Allow travis.debian.net to start services when running autopkgtests
  • Usual bunch of libvirt and related uploads
  • Uploaded git-buildpackage 0.8.8 and 0.8.9 to unstable (list of changes)

Some other Free Software activities

  • Some fixes to libvirt's configure.ac for merged /usr
  • Made some progress on getting the merkur board clones to life. The first stage bootloader and contiki are working now.
  • Some ansible_spec fixes for roles in non default directories and robustness
  • Due to excessive XMPP spam started a jabber-spam-blacklist
  • Unbroke ansible-module-foreman
  • Added support to configure Foreman settings via ansible-module-foreman and python-foreman
  • Speed up VM removal with Foreman and VMware ESX
  • Started foreman_serverspec, a Foreman plugin to store serverspec reports in the Foreman. This is in a very early stage.

[...]



Riku Voipio: 20 years of being a debian maintainer

Mon, 09 Jan 2017 08:01:42 +0000

(image)

fte (0.44-1) unstable; urgency=low

* initial Release.

-- Riku Voipio Wed, 25 Dec 1996 20:41:34 +0200
Welp, I seem to have spent the holidays of 1996 doing my first Debian package. The process of getting a package into Debian was quite straightforward then. "I have packaged fte, here is my pgp, can I has an account to upload stuff to Debian?" I think the bureaucracy took until the second week of January until I could actually upload the created package.

uid Riku Voipio
sig 89A7BF01 1996-12-15 Riku Voipio
sig 4CBA92D1 1997-02-24 Lars Wirzenius
A few months after joining, someone figured out that for pgp signatures to be useful, keys need to be cross-signed. Hence young me taking a long bus trip from countryside Finland to the capital Helsinki to meet the only other DD in Finland in a cafe. It would still take another two years until I met more Debian people, and it could be proven that I'm not just an alter ego of Lars ;) Much later an alternative process of phone-calling prospective DD's would be added.



Bits from Debian: New Debian Developers and Maintainers (November and December 2016)

Sun, 08 Jan 2017 23:30:00 +0000

The following contributors got their Debian Developer accounts in the last two months:

  • Karen M Sandler (karen)
  • Sebastien Badia (sbadia)
  • Christos Trochalakis (ctrochalakis)
  • Adrian Bunk (bunk)
  • Michael Lustfield (mtecknology)
  • James Clarke (jrtc27)
  • Sean Whitton (spwhitton)
  • Jerome Georges Benoit (calculus)
  • Daniel Lange (dlange)
  • Christoph Biedl (cbiedl)
  • Gustavo Panizzo (gefa)
  • Gert Wollny (gewo)
  • Benjamin Barenblat (bbaren)
  • Giovani Augusto Ferreira (giovani)
  • Mechtilde Stehmann (mechtilde)
  • Christopher Stuart Hoskin (mans0954)

The following contributors were added as Debian Maintainers in the last two months:

  • Dmitry Bogatov
  • Dominik George
  • Gordon Ball
  • Sruthi Chandran
  • Michael Shuler
  • Filip Pytloun
  • Mario Anthony Limonciello
  • Julien Puydt
  • Nicholas D Steeves
  • Raoul Snyman

Congratulations!




Steve Kemp: Patching scp and other updates.

Sun, 08 Jan 2017 14:45:19 +0000

I use openssh every day, be it the ssh command for connecting to remote hosts, or the scp command for uploading/downloading files. Once a day, or more, I forget that scp uses the non-obvious -P flag for specifying the port, not the -p flag that ssh uses.

Enough is enough. I shall not file a bug report against the Debian openssh-client package, because no doubt compatibility with both upstream, and other distributions, is important. But damnit I've had enough.

apt-get source openssh-client shows the appropriate code:

    fflag = tflag = 0;
    while ((ch = getopt(argc, argv, "dfl:prtvBCc:i:P:q12346S:o:F:")) != -1)
        switch (ch) {
        ..
        ..
        case 'P':
            addargs(&remote_remote_args, "-p");
            addargs(&remote_remote_args, "%s", optarg);
            addargs(&args, "-p");
            addargs(&args, "%s", optarg);
            break;
        ..
        ..
        case 'p':
            pflag = 1;
            break;
        ..
        ..
        ..

Swapping those two flags around, and updating the format string appropriately, was sufficient to do the necessary.

In other news I've done some hardware development, using both Arduino boards and the WeMos D1-mini. I'm still at the stage where I'm flashing lights, and doing similarly trivial things:

Absolute Arduino Basic Tutorial:
  • Recognizing the board.
  • Getting the development environment setup.
  • Running your first project.

Absolute WeMos D1 Mini (ESP8266) Basic Tutorial:
  • Recognizing the board.
  • Getting the development environment setup.
  • Installing the libraries/headers/examples.
  • Writing your first WiFi project.

I have more complex projects planned for the future, but these are on-hold until the appropriate parts are delivered:

  • MP3 playback.
  • Bluetooth-speakers.
  • Washing machine alarm.
  • LCD clock, with time set by NTP, and relay control.

Even with a few LEDs though I've had fun, for example writing a trivial binary display. [...]



Steinar H. Gunderson: SpeedHQ decoder

Sun, 08 Jan 2017 12:06:00 +0000

(image)

I reverse-engineered a video codec. (And then the CTO of the company making it became really enthusiastic, and offered help. Life is strange sometimes.)

I'd talk about this and some related stuff at FOSDEM, but there's a scheduling conflict, so I will be in Ås that weekend, not Brussels.




Jonas Meurer: debian lts report 2016.12

Sun, 08 Jan 2017 10:18:56 +0000

Debian LTS report for December 2016

December 2016 was my fourth month as a Debian LTS team member. I was allocated 12 hours. Unfortunately, it turned out that I had far less time for Debian and LTS work than expected, so I spent only 5.25 of those hours on the following tasks:

  • DLA 732-1: backported CSRF protection to monit 1:5.4-2+deb7u1
  • DLA 732-2: fix a regression introduced in last monit security update
  • DLA 732-3: fix another regression introduced in monit security update
  • Nagios3: port 3.4.1-3+deb7u2 and 3.4.1-3+deb7u3 updates to wheezy-backports
  • DLA 760-1: fix two reflected XSS vulnerabilities in spip



Keith Packard: embedded-arm-libc

Sun, 08 Jan 2017 07:32:10 +0000

Finding a Libc for tiny embedded ARM systems

You'd think this problem would have been solved a long time ago. All I wanted was a C library to use in small embedded systems -- those with a few kB of flash and even fewer kB of RAM.

Small system requirements

A small embedded system has a different balance of needs:

  • Stack space is limited. Each thread needs a separate stack, and it's pretty hard to move them around. I'd like to be able to reliably run with less than 512 bytes of stack.
  • Dynamic memory allocation should be optional. I don't like using malloc on a small device because failure is likely and usually hard to recover from. Just make the linker tell me if the program is going to fit or not (see the sketch following this excerpt).
  • Stdio doesn't have to be awesomely fast. Most of our devices communicate over full-speed USB, which maxes out at about 1MB/sec. A stdio setup designed to write to the page cache at memory speeds is over-designed, and likely involves lots of buffering and fancy code.
  • Everything else should be fast. A small CPU may run at only 20-100MHz, so it's reasonable to ask for optimized code. They also have very fast RAM, so cycle counts through the library matter.

Available small C libraries

I've looked at:

  • μClibc. This targets embedded Linux systems, and also appears dead at this time.
  • musl libc. A more lively project; still, it definitely targets systems with a real Linux kernel.
  • dietlibc. Hasn't seen any activity for the last three years, and it isn't really targeting tiny machines.
  • newlib. This seems like the 'normal' embedded C library, but it expects a fairly complete "kernel" API and the stdio bits use malloc.
  • avr-libc. This has lots of Atmel assembly language, but is otherwise ideal for tiny systems.
  • pdclib. This one focuses on small source size and portability.

Current AltOS C library

We've been using pdclib for a couple of years. It was easy to get running, but it really doesn't match what we need. In particular, it uses a lot of stack space in the stdio implementation, as there's an additional layer of abstraction that isn't necessary. In addition, pdclib doesn't include a math library, so I've had to 'borrow' code from other places where necessary. I've wanted to switch for a while, but there didn't seem to be a great alternative.

What's wrong with newlib?

The "obvious" embedded C library is newlib. Designed for embedded systems, with a nice way to avoid needing a 'real' kernel underneath, newlib has a lot going for it. Most of the functions have a good balance between speed and size, [...]
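As an aside on the "make the linker tell me if the program is going to fit" requirement above: in practice that means preferring statically allocated storage over malloc, so the entire footprint is visible at link time. Below is a minimal, self-contained sketch of that style; the pool size, buffer size, and function names are invented for illustration and are not taken from AltOS:

    /* Fixed-size buffer pool: all storage is declared statically, so the
     * linker and tools like "size" report the full RAM footprint up front. */
    #include <stdint.h>
    #include <stddef.h>

    #define N_PACKETS   4         /* hypothetical sizing for a tiny device */
    #define PACKET_SIZE 64

    static uint8_t packet_pool[N_PACKETS][PACKET_SIZE];
    static uint8_t packet_used[N_PACKETS];

    /* Hand out a buffer from the fixed pool; returns NULL when exhausted,
     * a condition that can be reasoned about at design time. */
    static uint8_t *packet_alloc(void)
    {
        for (size_t i = 0; i < N_PACKETS; i++) {
            if (!packet_used[i]) {
                packet_used[i] = 1;
                return packet_pool[i];
            }
        }
        return NULL;
    }

    static void packet_free(uint8_t *p)
    {
        for (size_t i = 0; i < N_PACKETS; i++) {
            if (packet_pool[i] == p) {
                packet_used[i] = 0;
                return;
            }
        }
    }

    int main(void)
    {
        uint8_t *p = packet_alloc();   /* may return NULL when the pool is full */
        if (p)
            p[0] = 0x42;
        packet_free(p);                /* passing NULL is harmlessly ignored here */
        return 0;
    }

Whether this approach fits depends on the workload, but it is the usual way to keep malloc out of a system with only a few kB of RAM.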



Lars Wirzenius: Hacker Noir, chapter 1: Negotiation

Sat, 07 Jan 2017 20:53:34 +0000


I participated in Nanowrimo in November, but I failed to actually finish the required 50,000 words during the month. Oh well. I plan on finishing the book eventually, anyway.

Furthermore, as an open source exhibitionist I thought I'd publish a chapter each month. This will put a bit of pressure on me to keep writing, and hopefully I'll get some nice feedback too.

The working title is "Hacker Noir". I've put the first chapter up on http://noir.liw.fi/.




Simon Richter: Crossgrading Debian in 2017

Sat, 07 Jan 2017 20:45:11 +0000

So, once again I had a box that had been installed with the kind-of-wrong Debian architecture, in this case powerpc (32 bit, big-endian), while I wanted ppc64 (64 bit, big-endian). So, crossgrade time.

If you want to follow this, be aware that I use sysvinit. I doubt this can be done this way with systemd installed, because systemd has a lot more dependencies for PID 1, and there is also a dbus daemon involved that cannot be upgraded without a reboot.

To make this a bit more complicated, ppc64 is an unofficial port, so it is even less synchronized across architectures than sid normally is (I would have used jessie, but there is no jessie for ppc64).

Step 1: Be Prepared

To work around the archive synchronisation issues, I installed pbuilder and created 32 and 64 bit base.tgz archives:

    pbuilder --create --basetgz /var/cache/pbuilder/powerpc.tgz
    pbuilder --create --basetgz /var/cache/pbuilder/ppc64.tgz \
        --architecture ppc64 \
        --mirror http://ftp.ports.debian.org/debian-ports \
        --debootstrapopts --keyring=/usr/share/keyrings/debian-ports-archive-keyring.gpg \
        --debootstrapopts --include=debian-ports-archive-keyring

Step 2: Gradually Heat the Water so the Frog Doesn't Notice

Then, I added the sources to sources.list, and added the architecture to dpkg:

    deb [arch=powerpc] http://ftp.debian.org/debian sid main
    deb [arch=ppc64] http://ftp.ports.debian.org/debian-ports sid main
    deb-src http://ftp.debian.org/debian sid main

    dpkg --add-architecture ppc64
    apt update

Step 3: Time to Go Wild

    apt install dpkg:ppc64

Obviously, that didn't work, in my case because libattr1 and libacl1 weren't in sync, so there was no valid way to install powerpc and ppc64 versions in parallel. So I used pbuilder to compile the current version from sid for the architecture that wasn't up to date (IIRC, one for powerpc, and one for ppc64), manually installed the libraries, then tried again:

    apt install dpkg:ppc64

Woo, it actually wants to do that. Now, that only half works, because apt calls dpkg twice, once to remove the old version, and once to install the new one. Your options at this point are

    apt-get download dpkg:ppc64
    dpkg -i dpkg_*_ppc64.deb

or, if you didn't think far enough ahead, cursing followed by

    cd /tmp
    ar x /var/cache/apt/archives/dpkg_*_ppc64.deb
    cd /
    tar -xJf /tmp/data.tar.xz
    dpkg -i /var/cache/apt/archives/dpkg_*_ppc64.deb

Step 4: Automate That

Now, I'd like to get this a bit more convenient, so I had to repeat t[...]