suihkulokki rambling








Cross-compiling with debian stretch

2017-06-24T19:03:40.744+03:00

Debian stretch comes with cross-compiler packages for selected architectures:
 $ apt-cache search cross-build-essential
crossbuild-essential-arm64 - Informational list of cross-build-essential packages for
crossbuild-essential-armel - ...
crossbuild-essential-armhf - ...
crossbuild-essential-mipsel - ...
crossbuild-essential-powerpc - ...
crossbuild-essential-ppc64el - ...

Let's have a quick, exact-steps guide. But first - while you could do all this in your desktop PC rootfs, it is wiser to contain yourself. Fortunately, Debian comes with a container tool out of the box:

sudo debootstrap stretch /var/lib/container/stretch http://deb.debian.org/debian
echo "strech_cross" | sudo tee /var/lib/container/stretch/etc/debian_chroot
sudo systemd-nspawn -D /var/lib/container/stretch
Then we set up a cross-building environment for arm64 inside the container:

# Tell dpkg we can install arm64
dpkg --add-architecture arm64
# Add src line to make "apt-get source" work
echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
apt-get update
# Install cross-compiler and other essential build tools
apt install --no-install-recommends build-essential crossbuild-essential-arm64
Now that we have a nice build environment, let's choose something more complicated than the usual kernel/BusyBox to cross-build: qemu.

# Get qemu sources from debian
apt-get source qemu
cd qemu-*
# New in stretch: build-dep works in unpacked source tree
apt-get build-dep -a arm64 .
# Cross-build Qemu for arm64
dpkg-buildpackage -aarm64 -j6 -b
That works perfectly for Qemu. For other packages, challenges may appear. For example, you may have to set the "nocheck" flag to skip build-time unit tests, or some of the build-dependencies may not be multiarch-enabled. So work continues :)
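As an illustration, a sketch of using the "nocheck" flag (DEB_BUILD_OPTIONS is the standard Debian mechanism; whether a given package honours it varies):

# Skip build-time test suites while cross-building
DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -aarm64 -j6 -b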



Deploying OBS

2017-04-11T23:23:27.170+03:00

Open Build Service from SuSE is a web service for building deb/rpm packages. It has recently been added to Debian, so finally there is a relatively easy way to set up PPA-style repositories in Debian. Relatively as in "there is a learning curve, but nowhere near the complexity of replicating Debian's internal infrastructure". OBS gives you both repositories and build infrastructure, with a clickety web UI and a command line client (osc) to manage them. See Hector's blog for quickstart instructions.

Things learned while setting up OBS

With me coming from a Debian background and OBS coming from the SuSE/RPM world, there are some quirks that can take you by surprise.

Well done packaging

Usually web services are a tough fit for distros: a cascade of weird dependencies and build systems means the only practical way to build an "open source" web service is by replicating the upstream CI scripts. Not so in the case of OBS. Being done by distro people shows.

OBS does automatic rebuilds of reverse dependencies

Aka automatic binNMUs when you update a library. This, however, means you need lots of build power around. OBS has its own dependency resolver on the server that recalculates which packages need rebuilding and when - workers just get a list of packages to install for build-depends. This is a major divergence from Debian, where sbuild handles dependencies client-side. The OBS dependency handler doesn't handle virtual packages* / alternative build-deps like Debian does - you may have to add a specific "Prefer: foo-dev" to the OBS project config to resolve alternative choices.

OBS server and worker do http requests in both directions

On startup, workers connect to the OBS server, open a TCP port and wait for requests coming from OBS. Having connections in both directions is a bit of a hassle firewall-wise. On the bright side, no need to set up uploads via FTP here..

Signing repositories is complicated

With Debian 9.0 making signed repositories pretty much mandatory, it is unfortunate that OBS makes signing rather complicated. obs-signd isn't included in Debian, since it depends on a gnupg patch that hasn't been upstreamed. Fortunately I found a workaround: OBS signs release files with /usr/bin/sign -d /path/to/release, and replacing the obs-signd provided sign command with your own script is easy ;)
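For illustration, a minimal sketch of such a replacement script (untested; the key ID is a placeholder, and only the -d detach-sign invocation used above is handled - the real sign(1) has more modes):

#!/bin/sh
# Stand-in for /usr/bin/sign: handle "sign -d <file>" by making a detached signature
[ "$1" = "-d" ] || { echo "only -d supported" >&2; exit 1; }
exec gpg --batch --local-user MYREPOKEY --armor --detach-sign "$2"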

Git integration is rather bolted-on than integrated

OBS provides a method to integrate with git using "services". There is no clickety UI to link to a git repo; instead you create an xml file called _service with osc. There is no way to keep the debian/ tree in git.
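For illustration, a _service file might look roughly like this (a sketch; the URL is made up, and the available service names depend on which obs-service-* packages are installed on the server):

<services>
  <service name="tar_scm">
    <param name="scm">git</param>
    <param name="url">https://example.com/yourpackage.git</param>
  </service>
</services>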

The upstream community is friendly

Including the happiest thanks from an upstream I've seen recently.

Summary

All in all, I'm rather satisfied with OBS. If you have a home-grown Jenkins-or-similar solution for building deb/rpm packages, you should definitely consider OBS. For simpler uses there is no need to install OBS yourself; the openSUSE public OBS will happily build Debian packages for you.

*How useful are virtual packages anymore? "foo-defaults" packages seem to be the go-to solution for most real use cases anyway.




20 years of being a debian maintainer

2017-01-09T10:01:42.092+02:00


fte (0.44-1) unstable; urgency=low

* initial Release.

-- Riku Voipio Wed, 25 Dec 1996 20:41:34 +0200
Welp, I seem to have spent the 1996 holidays making my first Debian package. The process of getting a package into Debian was quite straightforward then: "I have packaged fte, here is my pgp, can I has an account to upload stuff to Debian?" I think the bureaucracy took until the second week of January before I could actually upload the created package.

uid Riku Voipio
sig 89A7BF01 1996-12-15 Riku Voipio
sig 4CBA92D1 1997-02-24 Lars Wirzenius
A few months after joining, someone figured out that for pgp signatures to be useful, keys need to be cross-signed. Hence young me took a long bus trip from countryside Finland to the capital Helsinki to meet the only other DD in Finland in a cafe. It would still take another two years until I met more Debian people, and it could be proven that I'm not just an alter ego of Lars ;) Much later, an alternative process of phone-calling prospective DDs would be added.



Booting ubuntu 16.04 cloud images on Arm64

2016-05-09T15:32:58.945+03:00

For testing kvm/qemu, prebaked cloud images are nice. However, there are a few steps to get started. First we need a recent Qemu (2.5 is good enough), an EFI firmware, and cloud-utils for customizing our VM.

sudo apt install -y qemu qemu-utils cloud-utils
wget https://releases.linaro.org/components/kernel/uefi-linaro/15.12/release/qemu64/QEMU_EFI.fd
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-arm64-uefi1.img
Cloud images are plain - there is no user setup and no default user/password combo, so to log in to the image we need to customize it on first boot. The de facto tool for this is cloud-init. The simplest method of using cloud-init is passing block media with a settings file - of course, for a real cloud deployment you would use one of the fancy network-based initialization protocols cloud-init supports. Enter the following into a file, say cloud.txt:

#cloud-config

users:
  - name: you
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz....
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
This minimal config just sets up a user with an ssh key. A more complex setup can install packages, write files and run arbitrary commands on first boot. In professional setups, you would most likely end up using cloud-init only to start Ansible or another configuration management tool.
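For illustration, a slightly richer cloud-config could look like the sketch below (packages, write_files and runcmd are standard cloud-init modules; the contents are made up):

#cloud-config
packages:
  - htop
write_files:
  - path: /etc/motd
    content: "configured by cloud-init\n"
runcmd:
  - systemctl restart ssh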

cloud-localds cloud.img cloud.txt
qemu-system-aarch64 -smp 2 -m 1024 -M virt -bios QEMU_EFI.fd -nographic \
-device virtio-blk-device,drive=image \
-drive if=none,id=image,file=xenial-server-cloudimg-arm64-uefi1.img \
-device virtio-blk-device,drive=cloud \
-drive if=none,id=cloud,file=cloud.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -redir tcp:2222::22 \
-enable-kvm -cpu host
If you are on an x86 host and want to use qemu to run an aarch64 image, replace the last line with "-cpu cortex-a57". Since the example uses user networking with a tcp port redirect, you can now ssh into the VM:

ssh -p 2222 you@localhost
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-22-generic aarch64)
....



Ancient Linux swag

2016-02-17T22:19:08.371+02:00

(image)

Since I've now been using Linux for 20 years, I've dug up some artifacts from the early journey.

  1. First the book, from late 1995. This is from before Tux, so the penguin on the cover is just a coincidence. The book came with a Slackware 3.0 CD, which was my entrance to Linux. Today, almost all of the book is outdated - Slackware and lilo install? Printing with lpr? mtools and dosemu? ftp and telnet over SLIP dialup? Manually configuring XFree86 and fvwm? How I miss those times!* The only parts of the book that are still valid are the shell and vi guides. I didn't read the latter, and instead imported my favorite editor from DOS, FTE.
  2. Fast forward some years, into my first programming job. Ready to advertise the Linux revolution, I bought the mug on the right. Nobody else would have a Tux mug, so nobody would accidentally take my mug from the office dishwasher. That only worked at my first workplace (a huge and nationally hated IT consultancy). The next workplace was a mobile gaming startup (in 2001 - I was there before it was trendy!), and there were already plenty of Linux mugs when I joined...
  3. While it may be hard to imagine today, in those days using Microsoft office tools was mandatory. That leads to the third piece of memorabilia in the picture. WordPerfect for Linux existed for a brief while, and in the box (can you imagine, software came in physical boxes?) came a Tux plush.

* Wait no, I don't miss those times at all




Using ser2net for serial access.

2015-11-23T21:55:20.516+02:00

Is your table a mess of wires? Do you have multiple devices connected via serial and can't remember which /dev/ttyUSBX is connected to which board? Unless you are an embedded developer, you are unlikely to deal with serial much anymore - in that case you can just jump to the next post in your news feed.

Introducing ser2net

Usually people start with minicom for serial access. There are better tools - picocom, screen, etc. But to easily map multiple serial ports, use ser2net. Ser2net makes serial ports available over telnet.

Persistent usb device names and ser2net

To remember which usb-serial adapter is connected to what, we use the /dev/serial tree created by udev, in /etc/ser2net.conf:

# arndale
7004:telnet:0:'/dev/serial/by-path/pci-0000:00:1d.0-usb-0:1.8.1:1.0-port0':115200 8DATABITS NONE 1STOPBIT
# cubox
7005:telnet:0:/dev/serial/by-id/usb-Prolific_Technology_Inc._USB-Serial_Controller_D-if00-port0:115200 8DATABITS NONE 1STOPBIT
# sonic-screwdriver
7006:telnet:0:/dev/serial/by-id/usb-FTDI_FT230X_96Boards_Console_DAZ0KA02-if00-port0:115200 8DATABITS NONE 1STOPBIT
The by-path syntax is needed if you have many identical usb-to-serial adapters. In that case, a patch from the BTS is needed to support quoting in the serial path. Ser2net doesn't seem very actively maintained upstream - a sure sign of a stagnant project is a homepage still at sourceforge.net... This patch, among other interesting features, can also be found in various ser2net forks on github.
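To discover the stable names udev has created for your adapters, a simple listing works (an illustrative command, not from the original post):

ls -l /dev/serial/by-id/ /dev/serial/by-path/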

Setting easy to remember names

Finally, unless you want to memorize the port numbers, set TCP port to name mappings in /etc/services:

# Local services
arndale 7004/tcp
cubox 7005/tcp
sonic-screwdriver 7006/tcp
Now finally:
telnet localhost sonic-screwdriver
(image) Mandatory picture of serial port connection in action



Migration to Scaleway ARM server

2015-09-04T22:25:40.428+03:00

The C1 Server

(image)

Scaleway started selling ARM-based hosted servers in April. I've intended to blog about this for a while: since an upgrade from wheezy to jessie was timely anyway, why not switch from an x86-based provider to an ARM one at the same time?

In many ways the scaleway node is the opposite of what "Enterprise ARM" people are working on. Each server is based on an oldish ARMv7 quad-core Marvell Armada XP instead of a brand new 64-bit ARMv8 cpu. There is no UEFI, ACPI or any other "industry standard" involved, just a smooth web interface and a command line tool to manage your node(s). And the node is yours; it's not shared with others via virtualization. The picture above is a single node, which is stacked with 911 other nodes into a single rack.

This week, the C1 price was dropped to a very reasonable €2.99 per month, or €0.006 per hour.

Software runs on hardware, news at 11

The performance is more than enough for my needs - shell, email and light web serving. dovecot, postfix, irssi and apache2 are just an apt-get away. Anyone who says you need x86 for Linux servers is forgetting that Linux software is open source and, if not already available, can be compiled for any architecture with little effort. Thus the migration pains came only from my choice to modernize the configuration of dovecot and friends. Details of the new setup shall be left for another post.




Dystopia of Things

2015-06-12T21:42:11.562+03:00

The Thing on Internet

I've now had an "Internet of Things" device for about a year. It is the Logitech Harmony Hub, a universal remote controller. It comes with a traditional remote, but the interesting part is that it allows me to use my smartphone/tablet as a remote over WiFi. With the android app it provides a rather nice user experience, yet I can see the inevitable end in anger.

Bare minimum GPL respect

Today, the GPL sources for the hub are available - at least the kernel and a patch for busybox. The proper GPL release is still only through written offer. The sources appeared online in April this year, while the Hub has been sold for two years already. Even if I ordered the GPL CD, it's unlikely I could build a modified system with it - too many proprietary bits. The whole GPL was invented by someone who couldn't make a printer do what he wanted. The dystopian today, where I have to rewrite the whole stack running on a Linux-based system if I'm not happy with what's running there as provided by the OEM.

App only

The smartphone app is mandatory; the app is used to set up the hub. There is no HTML5 interface or any other way to control the hub - just the bundled remote and the phone apps. Fully proprietary apps, with limited customization options. And if an app store update removes a feature you have used.. well, you can't get it from anywhere anymore. The dystopian today, where "Internet of Things" is actually "Smartphone App of Things".

Locked API

Maybe instead of modifying the official app you could write your own UI? Like one screen with only the buttons you ever use when watching TV? There *is* an API, with the delightful headline "Better home experiences, together". However, not together with me, the owner of the harmony hub. The official API is locked to selected partners. And the API is not for controlling the hub - it's for letting the hub connect to other IoT devices. Of course, for talented people, a locked api is usually just an undocumented api. People have reverse engineered how the app talks to the hub over wifi. Curiously, it is actually Jabber based, with some twists like logging credentials through Logitech servers. The dystopian today, where I can't write programs to remotely control the internet connected thing I own without reverse engineering protocols.

Central Server

Did someone say Logitech servers? Ah yes, all configuring of the remote happens via the myharmony servers, where the database of remote controllers lives. There is some irony in calling the service *my* harmony when it's clearly theirs. The communication with cloud servers leaks, at minimum, back to Logitech what hardware I control with my hub. At worst, it will become an avenue of exploits. And how long will Logitech manage these servers? The moment they are gone, the harmony hub becomes a semi-brick. It will still work, but I can't change any configuration anymore. The dystopian future, where the Internet of Thing will stop working *when* the cloud servers get sunset.

What now

This is not just the Harmony hub - this is a pattern that many IoT products follow: Linux-based gadget, smartphone app, cloud services, monetized apis. After the gadget is bought, the vendor has little incentive to provide any updates. After all, the next chance I'll carry them money is when the current gadget becomes obsolete. I can see two ways out. The easy way is to get IoT gadgets as a monthly paid service. Then the gadget vendor has the right incentive - instead of trying to convince me to buy their next gadget, their incentive is to keep me happily paying the monthly bill.
The polar opposite is to start making open, competing IoTs, and market to people the advantage of being in control yourself. I can see markets for both options. But halfway between is just pure dystopia.[...]



Fastest way to change running dtb

2015-04-22T15:51:00.615+03:00

Tollef posted about using BeagleBone Black for temperature monitoring. There was a passage about patching the DTB (device tree) file:
... This needs to be compiled into a .dtb. I found the easiest way was just to drop the patched .dts into an unpacked kernel tree and then running make dtbs.
There are easier ways. For example, you can get the current device tree file generated from /proc:

apt-get install device-tree-compiler
dtc -I fs -O dts -o current.dts /proc/device-tree/
(Why /proc and not /sys? Because the device tree predates /sys.) Now you can just modify and rebuild the dtb, and install it back to wherever the bootloader reads the dtb from:

vim current.dts
dtc -I dts -O dtb -o new.dtb current.dts
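As an illustration of the kind of edit involved (a hypothetical example - node names and properties vary per board), enabling a disabled device is often just flipping its status property:

/* inside the relevant node in current.dts */
status = "okay";   /* was: status = "disabled" */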
The alternative, of course, is to build a brand new mainline kernel and use the dynamic device tree code now available.



Crowdfunding better GCompris graphics

2014-12-31T23:29:59.008+02:00

GCompris is the most established open source educational game for kids. Here we practice mouse use with an Efika smartbook. In this subgame, the mouse is moved around to uncover an image hidden behind.
(image)
While GCompris is nice, it badly needs nice graphics. Now the GCompris authors are running an Indiegogo crowdfund for exactly that - to get new unified graphics.

Why should you fund it? Apart from the generic "I want to be nice to any oss project", I see a couple of reasons specific to this crowdfund.

First, to show kids that apps can be changed! Instead of just using existing iPad apps as a consumer, GCompris allows you to show kids how games are built and modified. With the new graphics, more kids will play longer, and eventually some will ask if something can be changed/added..

Second, GCompris has recently become Qt/QML based, making it more portable than before. Wouldn't you like to see it on your Jolla tablet or a future Ubuntu phone? The crowdfund doesn't promise new ports, but if you are eager to show your friends nice looking apps on your platform, this is probably one of the easiest ways to help them happen.

Finally, as a nice way to say happy new year 2015 :)



Adventures in setting up local lava service

2014-11-07T11:03:28.571+02:00

Linaro uses LAVA as a tool to test a variety of devices. So far I had not installed it myself, mostly due to assuming it to be enormously complex to set up. But thanks to Neil Williams' work on packaging, installation has become a lot easier. Follow the Official Install Doc and the Official install to debian Doc, roughly looking like:

1. Install Jessie into kvm

kvm -m 2048 -drive file=lava2.img,if=virtio -cdrom debian-testing-amd64-netinst.iso

2. Install lava-server

apt-get update; apt-get install -y postgresql nfs-kernel-server apache2
apt-get install lava-server
# answer debconf questions
a2dissite 000-default && a2ensite lava-server.conf
service apache2 reload
lava-server manage createsuperuser --username default --email=foo.bar@example.com
$EDITOR /etc/lava-dispatcher/lava-dispatcher.conf # make sure LAVA_SERVER_IP is right

That's the generic setup. Now you can point your browser to the IP address of the kvm machine, and log in with the default user and the password you made.

3 ... 1000. Each LAVA instance is site-customized for the boards, network, serial ports, etc. In this example, I now add a single arndale board:

cp /usr/lib/python2.7/dist-packages/lava_dispatcher/default-config/lava-dispatcher/device-types/arndale.conf /etc/lava-dispatcher/device-types/
sudo /usr/share/lava-server/add_device.py -s arndale arndale-01 -t 7001

This generates an almost usable config for the arndale. For site specifics I have usb-to-serial. Outside kvm, I provide access to the serial ports using the following ser2net config:

7001:telnet:0:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
7002:telnet:0:/dev/ttyUSB1:115200 8DATABITS NONE 1STOPBIT

TODO: make ser2net not run as root and ensure usb2serial devices always get the same name..

For automatic power reset, I wanted something cheap, yet something that wouldn't require much soldering (I'm not a real embedded engineer.. I prefer the software side ;). Discussed with Hector, who hinted about prebuilt relay boxes. Chose one from Ebay, a kmtronic 8-port USB relay. So now I have this cute boxed nonsense hack. The USB relay is driven with a short script, hard-reset-1:

stty -F /dev/ttyACM0 9600
echo -e '\xFF\x01\x00' > /dev/ttyACM0
sleep 1
echo -e '\xFF\x01\x01' > /dev/ttyACM0

Sidenote: if you don't have or want an automated power relay for lava, you can always replace this script with something along the lines of "mpg123 puny_human_press_the_power_button_now.mp3".

Both the serial port and the reset script are on a server with the dns name aimless. So we take the /etc/lava-dispatcher/devices/arndale-01.conf that add_device.py created and make it look like:

device_type = arndale
hostname = arndale-01
connection_command = telnet aimless 7001
hard_reset_command = slogin lava@aimless -i /etc/lava-dispatcher/id_rsa /home/lava/hard-reset-1

Since in my case I'm only going to test with tftp/nfs boot, the arndale board only needs to be set up with a u-boot bootloader ready on power-on.

Now everything is ready for a test job. I have a locally built kernel and device tree, and I export the directory using the httpd available by default in debian.. Python!

cd out/
python -m SimpleHTTPServer

Go to the lava web server, select api -> tokens and create a new token. Next we add the token and use it to submit a job:

$ sudo apt-get install lava-tool
$ lava-tool auth-add http://default@lava-server/RPC2/
$ lava-tool submit-job http://default@lava-server/RPC2/ lava_test.json
submitted as job id: 1
$

The first job should now be visible in the lava web frontend, in the scheduler -> jobs part.
If everything goes fine, the relay will click in a moment and the job will finish in a few minutes.[...]



Using networkd for kvm tap networking

2014-11-01T12:21:08.410+02:00

Setting up basic systemd-network was recently described by Joachim, and the post inspired me to try it as well. The twist is that in my case I need a bridge for my KVM with Lava server and arm/aarch64 qemu system emulators...

For background, qemu/kvm support a few ways to provide network to guests. The default is user networking, which requires no privileges, but is slow and based on ancient SLIRP code. The other common option is tap networking, which is fast, but complicated to set up. Turns out, with networkd and qemu bridge helper, tap is easy to set up.


$ for file in /etc/systemd/network/*; do echo $file; cat $file; done
/etc/systemd/network/eth.network
[Match]
Name=eth1
[Network]
Bridge=br0

/etc/systemd/network/kvm.netdev
[NetDev]
Name=br0
Kind=bridge

/etc/systemd/network/kvm.network
[Match]
Name=br0
[Network]
DHCP=yes

Diverging from Joachim's simple example, we replaced "DHCP=yes" with "Bridge=br0" in the ethernet's .network file. Then we define the bridge (in kvm.netdev) and give it an ip via dhcp in kvm.network. On the kvm side, if you haven't used the bridge helper before, you need to give the helper permission (setuid root or cap_net_admin) to create a tap device to attach to the bridge. The helper needs a configuration file telling it which bridge it may meddle with.

# cat > /etc/qemu/bridge.conf <<__END__
allow br0
__END__
# setcap cap_net_admin=ep /usr/lib/qemu/qemu-bridge-helper
Now we can start kvm with bridge networking as easily as with user networking:

$ kvm -m 2048 -drive file=jessie.img,if=virtio -net bridge -net nic,model=virtio -serial stdio
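Once the guest is up, the dynamically created tap device should show up attached to the bridge; a quick illustrative check (assuming the bridge-utils package is installed):

# brctl show br0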
The manpages systemd.network(5) and systemd.netdev(5) do a great job explaining the files. Qemu/kvm networking docs are unfortunately not as detailed.



Booting Linaro ARMv8 OE images with Qemu

2014-08-13T17:36:33.583+03:00

A quick update - Linaro ARMv8 OpenEmbedded images work just fine with qemu 2.1 as well:

$ wget http://releases.linaro.org/14.07/openembedded/aarch64/Image
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ gunzip vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
-kernel Image -append 'root=/dev/vda2 rw rootwait mem=1024M console=ttyAMA0,38400n8' \
-drive if=none,id=image,file=vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.16.0-1-linaro-vexpress64 (buildslave@x86-64-07) (gcc version 4.8.3 20140401 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.04 - Linaro GCC 4.8-2014.04) ) #1ubuntu1~ci+140726114341 SMP PREEMPT Sat Jul 26 11:44:27 UTC 20
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
root@genericarmv8:~#
Quick benchmarking with the age-old ByteMark nbench:

Index      Qemu     Foundation    Host
Memory     4.294    0.712         44.534
Integer    6.270    0.686         41.983
Float      1.463    1.065         59.528

Baseline (LINUX): AMD K6/233*

Qemu is up to 8x faster than the Foundation model on integers, but only about 50% faster on floating point. Meanwhile, the host pc runs native instructions 7-40x faster than it emulates ARMv8.



Testing qemu 2.1 arm64 support

2014-08-05T22:45:55.825+03:00

Qemu 2.1 was released just a few days ago, and is now available in Debian/unstable. Trying out a (virtual) arm64 machine is now just a few steps away for unstable users:

$ sudo apt-get install qemu-system-arm
$ wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-arm64-disk1.img
$ wget https://cloud-images.ubuntu.com/trusty/current/unpacked/trusty-server-cloudimg-arm64-vmlinuz-generic
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt -kernel trusty-server-cloudimg-arm64-vmlinuz-generic \
-append 'root=/dev/vda1 rw rootwait mem=1024M console=ttyAMA0,38400n8 init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring' \
-drive if=none,id=image,file=trusty-server-cloudimg-arm64-disk1.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.13.0-32-generic (buildd@beebe) (gcc version 4.8.2 (Ubuntu/Linaro 4.8.2-19ubuntu1) ) #57-Ubuntu SMP Tue Jul 15 03:52:14 UTC 2014 (Ubuntu 3.13.0-32.57-generic 3.13.11.4)
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
-snip-
...
ubuntu@ubuntu:~$ cat /proc/cpuinfo
Processor : AArch64 Processor rev 0 (aarch64)
processor : 0
Features : fp asimd evtstrm
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant : 0x1
CPU part : 0xd07
CPU revision : 0

Hardware : linux,dummy-virt
ubuntu@ubuntu:~$
The "init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring" is ubuntu cloud stuff that will set the ubuntu user password to "randomstring" - don't use "randomstring" literally there, if you are connected to internets...

For a more detailed writeup on using qemu-system-aarch64, check the excellent one from Alex Bennee.




Arm builder updates

2014-05-08T22:14:14.427+03:00

Debian has recently received a donation of 8 build machines from Marvell. The new machines come with quad-core MV78460 Armada XP CPUs, a DDR3 DIMM slot so we can plug in more memory, and speedy sata ports. They replace the well-served Marvell MV78200 based builders - ones that have been building debian armel since 2009. We are planning a more detailed announcement, but here is a quick summary:

The speed increase provided by the MV78460 can be viewed by comparing build times on selected builds since early April: (image)

Qemu build times.

We can now build Qemu in 2h instead of 16h - 8x faster than before! Certainly a substantial improvement, and impressive kit from Marvell! But not all packages gain this amount of speedup:

(image)

webkitgtk build times.

This example, webkitgtk, builds barely 3x faster. The explanation is found in webkitgtk's debian/rules:


# Parallel builds are unstable, see #714072 and #722520
# ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# MAKEARGUMENTS += -j$(NUMJOBS)
# endif
The old builders are single-core[1], so regardless of parallel building you can easily max out the cpu. The new builders will use only 1 of 4 cores without parallel build support in debian/rules. (image)

In this buildd cpu usage graph, we see that most of the time only one CPU is in use. So for fast package build times.. make sure your package supports parallel building.
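For dh-based packages, a minimal sketch of a debian/rules that honours DEB_BUILD_OPTIONS=parallel=N (the short form; webkitgtk above parses the option by hand instead):

#!/usr/bin/make -f
# let debhelper pass -jN down to make when parallel=N is set
%:
	dh $@ --parallel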

For developers, abel.debian.org is a porter machine with an Armada XP. It has schroots for both armel and armhf. Set "DEB_BUILD_OPTIONS=parallel=4" and off you go.

Finally I'd like to thank Thomas Petazzoni, Maen Suleiman, Hector Oron, Steve McIntyre, Adam Conrad and Jon Ward for making the upgrade happen.

Meanwhile, we have unrelated trouble - a bunch of disks have broken within a few days of each other. I take it the warranty just ran out...

[1] Only from Linux's point of view - the mv78200 actually has 2 cores, just not SMP or coherent. You could run an RTOS on one core while you run Linux on the other.




Where the armel buildd time went

2014-02-21T15:32:28.514+02:00

Wanna-build, wanna-build, which packages spent the most time on armel buildds since the beginning of 2013?

package | sum(build_time)
-------------------------+--------------
libreoffice | 114 09:16:34
linux | 113 02:58:50
gcc-4.8 | 064 01:21:09
webkitgtk | 059 19:09:27
acl2 | 043 16:40:50
gcc-4.7 | 028 14:03:53
iceweasel | 026 19:02:13
gcc-snapshot | 026 01:31:21
openjdk-7 | 020 02:41:53
php5 | 019 16:13:22
llvm-toolchain-3.3 | 017 19:05:38
qt4-x11 | 017 02:57:09
espresso | 016 03:50:37
pypy | 015 07:07:25
icedove | 014 18:57:08
insighttoolkit4 | 014 17:16:43
qtbase-opensource-src | 014 12:39:09
llvm-toolchain-3.4 | 012 03:06:15
mono | 011 22:30:13
atlas | 011 20:40:54
qemu | 011 17:11:09
calligra | 011 16:05:55
gnuradio | 011 15:19:35
resiprocate | 011 10:14:56
llvm-toolchain-snapshot | 011 02:04:44
libav | 010 13:52:03
python2.7 | 009 18:58:33
ghc | 009 18:28:48
gnat-4.8 | 009 13:59:57
axiom | 009 12:40:24
cython | 009 00:47:04
openjdk-6 | 008 16:38:14
oce | 008 10:29:20
eglibc | 008 06:04:26
ppl | 007 20:48:45
root-system | 007 17:32:16
openturns | 007 10:12:53
gcl | 007 08:02:42
gcc-4.6 | 007 02:50:48
k3d | 007 00:36:11
python3.3 | 007 00:25:42
llvm-toolchain-3.2 | 007 00:17:59
vtk | 006 17:53:28
samba | 006 17:17:27
mysql-workbench | 006 14:36:46
kde-workspace | 006 07:31:12
gmsh | 006 04:32:42
psi-plus | 006 04:30:08
octave | 006 04:17:22
paraview | 006 04:13:25
Timeformat is "days HH:MM:SS". Our ridiculously stable mv78x00 buildd's have served well, but has come to become let them rest. Now, to find out how many of these top time consuming packages can build with parallel make and are not doing so already.



Replicant on Galaxy S3

2013-12-20T22:41:42.495+02:00

I recently got myself a Galaxy S3 for testing out Replicant, an android image made out of only open source components.

Why Galaxy S3? It is well supported in Replicant - almost every driver is already open source. The hardware specs are acceptable: 1.4GHz quad core, 1GB ram, microsd, and all the peripheral chips one expects from a phone. The Galaxy S3 has sold insanely well (50 million units supposedly), meaning I won't run out of accessories and aftermarket spare parts any time soon. The massive installed base also means a huge potential user community. The S3 is still available as new, with two years of warranty.

Why not

While the S3 is still available new, it is safe to assume production is ending already - a 1.5 year old product is ancient history in the mobile world! It remains to be seen how much the massive user base will defend against obsolescence. Upstream kernel support for the "old" cpu is an open question; replicant still bases its kernel on the vendor kernel. The bootloader is unlocked, but it can't be changed due to trusted^Wtreacherous computing, preventing things like booting from sd card. Finally, not everything is open source: the GPU (mali) driver, while being reverse engineered, is taking its time - and the GPS hasn't been reversed yet.

Installing replicant

Before install, from the original installation, you might want to take a copy of the firmware files (since replicant won't provide them). Enable developer mode on the S3 and:

sudo apt-get install android-tools
mkdir firmware
adb pull /system/vendor/firmware/
adb pull /system/etc/wifi

After that, just follow the official replicant install guide for the S3. If you don't mind closed source firmwares, post-install you need to push the firmware files back:

adb shell
mount -o remount,rw /system
adb push . /system/vendor/firmware

Here was my first catch: the wifi firmwares from the jelly bean based image were not compatible with the older ICS based replicant.

Using replicant

Booting to replicant is fast, a few seconds to the pin screen. You are treated to the standard android lockscreen; the usual slide/pin/pattern options are available. Basic functions like phone, sms and web browsing have icons on the homescreen and work without a hitch. Likewise the camera seems to work; really the only smartphone feature missing is GPS. Sidenote - this image looks a LOT better on the S3 than on my thinkpad. No wonder people are flocking to phones and tablets when laptop makers use such crappy components.

The grid menu has the standard android AOSP open source applications in the ICS style menu, with the extra of an f-droid icon - which is the installer for open source applications. F-droid is its own project that complements the replicant project by maintaining a catalog of Free Software. F-droid brings hundreds of open source applications not only to replicant, but to any other android users, including platforms with android compatibility, such as Jolla's Sailfish OS. Of course the f-droid client is open source, like the f-droid server (in Debian too). The f-droid server is not just repository management; it can take care of building and deploying android apps.

The WebKit based android browser renders web sites without issues, and if you are not happy with it, you can download Firefox from f-droid. Many websites will notice you are mobile and provide mobile web sites, which is sometimes good and sometimes annoying. Worse, some pages detect you are on android and only offer to load their closed android app for viewing the page.
OTOH I am already viewing their closed source website, so using a closed source app to view it isn't much worse. The keyboard is again the android standard one, but for most [...]



ACPI on ARM storm in teacup

2013-07-22T11:14:14.414+03:00

A recent google+ post by Jon Masters caused some stormy and some less stormy responses.

A lot of BIOS/UEFI/ACPI hate comes from x86, where ACPI is used for everything from suspending devices to reading buttons and setting leds. So when the x86 kernel suspends, it makes magic calls to ACPI and prays that the firmware vendor did not screw them up. Vendors do screw up, hence lots of cursing and ugly workarounds in the kernel follow. My Lenovo has a firmware bug where the FN buttons and the fan stop working if the laptop is attached to the AC adapter for too long. The fan is probably a simple i2c device the kernel could control directly without jumping through ACPI hiding-layer hoops. But the x86 people hold the view that it is better to trust the firmware engineer to control devices instead of having the kernel folk write device drivers to ... control devices!

Now on ARM(64) the idea of using ACPI is to have none of that.

Instead, the idea is to use ACPI only to provide tables enumerating what devices are available on the platform - just like device tree does. Now, if this is the same as device tree, why bother?

The main reason is to let the distribution installer behave the same on x86 / ARM / ARM64. This is crucial for distributions like fedora and RHEL, where a cabal holds the view that x86 distribution development must not be constrained by ARM support. But it is also important for everyone that the method of installing your favorite distribution on an ARM64 server is standard and works the same for any server from any vendor. While UEFI and ACPI are definitely not my preferred solutions, I can accept them as a necessary evil for having a more standard platform.




On behalf of aarch64 porters

2013-02-04T15:36:59.193+02:00

Public service announcement

When porting GNU/Linux applications to a new architecture, such as 64-Bit ARM, one gets familiar with the following error message:


checking build system type... x86_64-pc-linux-gnu
checking host system type... Invalid configuration `aarch64-oe-linux': machine `aarch64-oe' not recognized
configure: error: /bin/sh config.sub aarch64-oe-linux failed

This in itself is trivial to fix - run autoreconf, or just copy in new versions of config.sub and config.guess, as sketched below. However, when bootstrapping a distribution of 12000+ packages, this quickly becomes tiresome.
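For a single unpacked source tree, the one-off fix looks roughly like this (an illustrative sketch; /usr/share/misc is where Debian's autotools-dev package ships current copies):

# refresh the outdated scripts shipped in the tarball, then reconfigure
cp /usr/share/misc/config.sub /usr/share/misc/config.guess .
./configure --host=aarch64-linux-gnu

Thus we have a small request: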

If you are an upstream of software that uses autoconf - please run autoreconf against autotools-dev 20120210.1 or later, and make a release of your software.

Aarch64 porters will be grateful as updated software trickles down to distributions.

This was the most discussed point during my FOSDEM talk "Porting applications to 64-Bit ARM".




My 5 eurocents on the Raspberry Pi opengl driver debacle

2012-10-27T21:59:59.430+03:00

The Linux community was rather unenthusiastic about the open source shim for the Raspberry Pi GPU. I think the backlash is a bit unfair, even if the marketing from Raspberry was pompous - it is still a step in the right direction. More Raspberry Pi hardware enablement code is open source than before.

A step in the wrong direction is "locking firmware", where loadable firmware is made read-only by storing it in a ROM. It has been planned by the GTA04 phone project, and the Raspberry foundation is also considering making a locked GPU ROM device. This is madness that needs to stop. It may fulfill the letter of the FSF guidelines, but it is firmly against the spirit. It does not increase freedom - it takes away the vendor's ability to provide firmware bugfixes, and the community's possibility of reverse engineering the firmware.




F-droid store for android

2012-10-22T12:46:16.731+03:00

Christopher complained that on android you need a user account to install software. That is not true: there is F-droid, a catalog of Free/Open Source Software, with no user account needed.

Lack of multitasking is annoying sometimes, yes, but that is really a poweruser problem. Most users don't load web pages in the background while playing. Most users will simply not bother with web pages that take too long to load. Given the choice, most users will prefer a snappy UI over a UI that allows proper multitasking. The N900 was never snappy: stutters were common, and it would often hang for long times if you had too many apps or browser windows open at the same time. Even for Android there are far more complaints about the non-fluid UI than about the lack of proper multitasking. It is thus only logical for the Android developers to concentrate on developing a snappier UI over improving multitasking.

The fact that Android has put 400+ million Linux computers in the hands of consumers is one of the MOST AMAZING THINGS ever. But others in the FOSS community seem to consider Android a rather unfortunate event, because it's not "Real Linux" or "100% free software" or "because it's not bug free". Err, like Maemo or MeeGo were ever any of those...

It is fair to complain about Android's usability quirks as an end user. But if you believe in Free Software, you should also see the opportunity the open source parts of Android provide to fix it to your needs. After all, isn't the point of Free Software that you don't need to depend on upstream to fix everything for you? Christopher complains that the stock email app is lacking. The Android email app is open source (the Maemo and N9 email apps are not), and it has been forked as K-9 Mail. The app lifecycle is fixable, allowing you to return from the browser to the mail you were at - at least plenty of other apps manage to do that.

The question is: are we consumers or creators? If we just sit waiting for Google to provide us a perfect shrinkwrapped Android, we could just as well use an iPhone or Windows phone. We should instead see Android as an opportunity - when it's 90% open source, fix the remaining closed source bits, like open source 3D drivers and open source replacements for proprietary interfaces.




Adventures in perlsembly

2012-06-07T19:54:08.067+03:00

Some Debian packages are missing important optimizations. This one was noticed when comparing the openssl benchmarks by monkeyiq to the results I got on a pandaboard (OMAP 4430). Being a Cortex-A9, it should clearly have been the same speed as or faster than the N9 in the benchmark (OMAP 3630). And it was, except for the AES benchmarks. Since AES is quite important, that seemed a bit odd. It turns out the Debian/Ubuntu package had some hand-crafted arm assembler optimizations disabled. Enable them, and the results with the openssl speed benchmark were quite nice:

The 'numbers' are in 1000s of bytes per second processed.

benchmark      debian        with patch     +%
sha1           55836.67k ->  73599.08k    +31.811%
aes-128 cbc    18451.11k ->  36305.34k    +96.765%
aes-256 cbc    13552.30k ->  27108.31k   +100.027%
sha256         20092.25k ->  43469.45k   +116.349%
sha512          8052.74k ->  37194.28k   +361.884%
rsa 1024        1904.2v/s ->  3650.5v/s   +91.708%
Curiously, the optimizations live in perl files that output assembler code. This kind of code is affectionately called "perlsembly". A bug with a patch has been filed; hopefully it will be applied soon.
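For illustration, this is roughly how such a perlasm file is turned into assembler (a sketch; the file name is from openssl's source tree, the exact invocation varies between openssl versions, and the build system normally does this for you):

perl crypto/sha/asm/sha1-armv4-large.pl > sha1-armv4-large.S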



Mosh - better remote shell

2012-05-14T13:03:38.903+03:00

In this age of 3d accelerated desktops and all that fancy stuff, one does not expect practical innovation to happen in the remote terminal emulation area. But it just has. It is called Mosh, short for "Mobile Shell".

What does it do better than ssh we have learned to love?

  • Less lag! Being UDP based, it is not prone to TCP congestion effects. Considering that voip, games and everything else latency-critical has been UDP based, it is (almost) surprising that it wasn't done for interactive terminals before...
  • Even less lag! Mosh provides local echo and line editing when the other side is not being responsive. To do this, mosh actually becomes a terminal emulator of its own. This stuff is sweet on unstable 3G and conference wifi networks.
  • Survives suspending. Resume your laptop and *bam*, all your remote mutt and vim sessions are still there, instead of the "connection reset" you get from ssh.
  • Roaming. Got another IP? Moved from wifi to ethernet to 3G? Your sessions are still open! Another thing a TCP based protocol couldn't do easily...
It doesn't replace ssh, as it still borrows authentication from ssh. But that's cool, as you can keep your ssh authorized keys.
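Usage is deliberately ssh-like (an illustrative invocation; mosh first authenticates over ssh, then hands the session over to its own UDP protocol):

$ mosh you@example.com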

Available in Debian unstable, testing and backports today, and on many other systems as well. Hopefully an Android client becomes available soon, as the above-mentioned advantages seem really tailored for android-like mobile systems.

Caveat: This is new stuff, and thus hasn't quite been proven to be secure.




Cross Compiling with MultiArch

2012-04-27T19:33:25.421+03:00

Congrats to the Ubuntu folk for the new LTS release. Incidentally, this is also the first release where our work on MultiArch bears fruit. We can now cross-compile relatively easily, without resorting to hacks like scratchbox, qemu chroot or dpkg-cross/xdeb. Let's show a short practical guide on cross-building Qemu for armhf. The instructions refer to precise, but the goodness is on its way to Debian as well, the biggest missing piece being the cross-compiler packages, for which we have a Summer of Code project. The example is operated in a chroot to avoid potentially messing up your working machine. Qemu-linaro is not a shining example, as cross-building it doesn't work out of the box. But that is educational: it allows me to show what kind of issues you can bump into, and how to fix them.

$ sudo debootstrap --variant=buildd precise /srv/precise

Edit /srv/precise/etc/apt/sources.list to the following (replace amd64 with i386 if you made an i386 chroot):

deb [arch=amd64] http://archive.ubuntu.com/ubuntu precise main universe
deb [arch=armhf] http://ports.ubuntu.com/ubuntu-ports precise main universe
deb-src http://archive.ubuntu.com/ubuntu precise main universe

Edit /srv/precise/etc/dpkg/dpkg.cfg.d/multiarch by adding the following line:

foreign-architecture armhf

Finally, disable installation of recommends by editing /srv/precise/etc/apt/apt.conf.d/10local:

APT::Install-Recommends "0";
APT::Install-Suggests "0";

Install the armhf cross-compiler in the chroot:

$ sudo chroot /srv/precise/
# unset LANG LANGUAGE
# mount -t proc proc /proc
# apt-get update
# apt-get install g++-arm-linux-gnueabihf pkg-config-arm-linux-gnueabihf

Get the sources and try to install the cross-build-deps:

# cd /tmp
# apt-get source qemu-linaro
# cd qemu-linaro-*
# apt-get build-dep -aarmhf qemu-linaro

As we see, the build-dep bombs out ugly, with the theme of "perl" being unable to be installed. This is because apt-get can't figure out whether we should install an armhf or an amd64 version of perl. We don't yet use the required syntax in the Build-Depends line, "perl:any", as dpkg and apt in previously released versions don't support it - backporting would then no longer be possible. One way to fix it would be to drop the perl build-dep, as perl is already pulled in by other build-deps. But let's instead show how to install the build-deps manually. First the build system build-deps, then the target architecture ones:

# apt-get install debhelper texinfo
# apt-get install zlib1g-dev:armhf libasound2-dev:armhf libsdl1.2-dev:armhf libx11-dev:armhf libpulse-dev:armhf libncurses5-dev:armhf libbrlapi-dev:armhf libcurl4-gnutls-dev:armhf libgnutls-dev:armhf libsasl2-dev:armhf uuid-dev:armhf libvdeplug2-dev:armhf libbluetooth-dev:armhf

And try the build[1]:

# dpkg-buildpackage -aarmhf -B

Which sadly errors out. It turns out the cross-build support in debian/rules is broken: instead of --cc we need to feed a --cross-prefix to qemu's ./configure. Edit debian/rules, replacing:

- conf_arch += --cc=$(DEB_HOST_GNU_TYPE)-gcc --cpu=$(QEMU_CPU)
+ conf_arch += --cross-prefix=$(DEB_HOST_GNU_TYPE)-

Optional: since we are cross-compiling on a multicore machine, let's also add parallel building support, by changing the override_dh_auto_build: rule in debian/rules to have --parallel flags as well:

override_dh_auto_build:
	# system build
	dh_auto_build -B system-build --parallel
ifeq ($(DEB_HOST_ARCH_OS),linux)
	# user build
	dh_auto_build -B user-build -[...]



CuBox

2012-01-19T09:54:26.363+02:00


Just recently arrived from DHL: a SolidRun CuBox. I guess nobody who knows me will be surprised when I tell you it features an ARM cpu - specifically a Marvell Armada 510. It features ARMv7 compatibility, with the slight twist of replacing the NEON extensions with iWMMXt extensions. On the boasting side, the Armada 510 promises 1080p video decoding and OpenGL ES graphics acceleration (closed source, unfortunately).

Even more impressive than the tiny form factor of the CuBox is the amount of connectors included:

* Gigabit ethernet
* 2*USB
* eSATA
* HDMI out
* s/pdif optical audio out
* microSD slot
* microUSB serial/jtag port

The last item is important, as it makes the CuBox unbrickable.. Some will probably lament the lack of WiFi/Bluetooth, but you can't get everything in one device ;). Besides, the USB slots are there to be filled..

Getting started was a slightly rough ride, as in the included Ubuntu (10.04 LTS) X refused to start. After wrongly suspecting that my display was at fault, it turned out the included microSD was slightly corrupted and some critical contents of the xkb-data package were garbage. After reinstalling that package, everything worked, including playing Big Buck Bunny in FullHD with totem.

The biggest disappointment so far is the non-mainline kernel, based on the old 2.6.32.9. Some mainline support for the Armada 510 exists, but will it work with the proprietary graphics code?