Subscribe: Rambling around foo
http://ramblingfoo.blogspot.com/feeds/posts/default
Language: English
Tags:
backup  code  git  linux  make  netbsd  nslu  origin  path  remotes origin  root  rsnapshot  sftp  system  time  version 


Rambling around foo



tending to depart from the main point or cover a wide range of subjects



Updated: 2017-11-21T17:16:34.634+02:00

 



LVM: Converting root partition from linear to raid1 leads to boot failure... and how to recover

2017-03-25T17:39:50.891+02:00

I have a system which has 3 distinct HDDs used as physical volumes for Linux LVM. One logical volume is the root partition and it was initially created as a linear LV (vg0/OS).
Since I have PV redundancy, I thought it might be a good idea to convert the root LV from linear to raid1 with 2 mirrors.

WARNING: It seems an LVM raid1 logical volume for / is not supported by grub2, at least not with Ubuntu's 2.02~beta2-9ubuntu1.6 (14.04LTS) or Debian Jessie's grub-pc 2.02~beta2-22+deb8u1!

So I did this:
lvconvert -m2 --type raid1 vg0/OS

Then I restarted to find myself at the 'grub rescue>' prompt.

The initial problem was seen on an Ubuntu 14.04 LTS (aka trusty) system, but I reproduced it on a VM with Debian Jessie.

I downloaded the Super Grub2 Disk and tried to boot the VM. After choosing the option to load the LVM and RAID support, I was able to boot my previous system.

I tried several times to reinstall GRUB, thinking that was the issue, but I always got this  kind of error:


/usr/sbin/grub-probe: error: disk `lvmid/QtJiw0-wsDf-A2zh-2v2y-7JVA-NhPQ-TfjQlN/phCDlj-1XAM-VZnl-RzRy-g3kf-eeUB-dBcgmb' not found.

In the end, after digging for answers for more than 4 hours, I decided I might be able to revert the LV to its linear configuration from the (initramfs) prompt.

Initially the LV was inactive, so I activated it:

lvchange -a y /dev/vg0/OS

Then restored the LV to linear:

lvconvert -m0 vg0/OS

Then I tried to reboot without reinstalling GRUB, just for kicks, and it succeeded.

In order to confirm this was the issue, I redid the whole thing, and indeed, with a raid1 root, I always got the lvmid error.
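For reference, here is a minimal sketch of the recovery sequence from the (initramfs) prompt, assuming the volume group and logical volume are named vg0 and OS as above; the lvs calls are only there to confirm the segment type before and after:

# show the current layout of the LVs (segtype should read raid1 before the revert)
lvs -a -o lv_name,segtype vg0
# activate the root LV if it is inactive
lvchange -a y /dev/vg0/OS
# drop the mirrors, going back to a linear LV
lvconvert -m0 vg0/OS
# confirm the conversion (segtype should now read linear), then continue booting
lvs -a -o lv_name,segtype vg0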

I'll have to check on Monday at work whether I can revert it the same way on the Ubuntu 14.04 system, but I suspect I will have no issues.


Is it true that root on LVM raid1 is not supported?



HOWTO: Setting and inserting/using MS Word 2013 document properties in the body of the document

2015-12-10T20:22:30.459+02:00

I wrote this so I won't forget it and for others to find, if confronted with the same issue.

I hate Microsoft Office in all its incarnations, but I have to use it at work for various things. One of them is maintaining some technical documentation. We now use Office 365 and Office 2013.

Since MS Office Word 2013 is not a technical documentation program, some of the support for such work is clunky. For things such as version numbers or other strings that might repeat throughout the document, (advanced) document properties are the way to go.

To set them, select File > Info > Properties > Advanced Properties > Custom, then fill in 'Name:', 'Type:' and 'Value:', then press Add, then OK.

Once the properties are set, a property can be inserted in the document by selecting its name in the 'Property:' list from the menu: INSERT > Quick Parts > Field... > Categories: Document Information > DocProperty.

After updating the value of any property (from the Advanced Properties dialog), to update all the places where the properties were used in the document, press Ctrl+A, then right click > Update Field > Update entire table > OK.

And, yes, 'Update entire table' will update the values, although its name is stupid.



HOWTO: No SSH logins SFTP only chrooted server configuration with OpenSSH

2015-05-23T17:44:40.914+03:00

If you are in a situation where you want to set up an SFTP server in a more secure way, don't want to expose anything from the server via SFTP and do not want to enable SSH login on the account allowed to sftp, you might find the information below useful.

What do we want to achieve:
  • an SFTP server
  • only a specified account is allowed to connect via SFTP
  • nothing outside the SFTP directory is exposed
  • no SSH login is allowed
  • any extra security measures are welcome

To obtain all of the above we will create a dedicated account which will be chroot-ed, with its home stored on a removable/not always mounted drive (accessing SFTP will not work when the drive is not mounted).

Mount the removable drive which will hold the SFTP area (you might need to add an entry in fstab). Create the account to be used for SFTP access (on a Debian system this will do the trick):

# adduser --system --home /media/Store/sftp --shell /usr/sbin/nologin sftp

This creates the account sftp with login disabled and /usr/sbin/nologin as its shell, and creates the home directory for this user.

Unfortunately the default ownership of this user's home directory is incompatible with chroot-ing in SFTP (which prevents access to other files on the server). A message like the one below will be generated in this kind of case:

$ sftp -v sftp@localhost
[..]
sftp@localhost's password:
debug1: Authentication succeeded (password).
Authenticated to localhost ([::1]:22).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
Write failed: Broken pipe
Couldn't read packet: Connection reset by peer

Also /var/log/auth.log will contain something like this:

fatal: bad ownership or modes for chroot directory "/media/Store/sftp"

The default permissions are visible using the 'namei -l' command on the sftp home directory:

# namei -l /media/Store/sftp
f: /media/Store/sftp
drwxr-xr-x root root    /
drwxr-xr-x root root    media
drwxr-xr-x root root    Store
drwxr-xr-x sftp nogroup sftp

We change the ownership of the sftp directory and make sure there is a place for files to be uploaded in the SFTP area:

# chown root:root /media/Store/sftp
# mkdir /media/Store/sftp/upload
# chown sftp /media/Store/sftp/upload

We isolate the sftp user from other users on the system and configure a chroot-ed environment for all users accessing the SFTP server:

# addgroup sftpusers
# adduser sftp sftpusers

Set a password for the sftp user so password authentication works:

# passwd sftp

Putting all the pieces together, we restrict access only to the sftp user, allow it access via password authentication only to SFTP, but not SSH (and disallow tunneling and forwarding or empty passwords). Here are the changes done in /etc/ssh/sshd_config:

PermitEmptyPasswords no
PasswordAuthentication yes
AllowUsers sftp
Subsystem sftp internal-sftp
Match Group sftpusers
        ChrootDirectory %h
        ForceCommand internal-sftp
        X11Forwarding no
        AllowTcpForwarding no
        PermitTunnel no

Reload the sshd configuration (I'm using systemd):

# systemctl reload ssh.service

Check that the sftp user can't log in via SSH:

$ ssh sftp@localhost
sftp@localhost's password:
This service allows sftp connections only.
Connection to localhost closed.

But SFTP is working and is restricted to the SFTP area:

$ sftp sftp@localhost
sftp@localhost's password:
Connected to localhost.
sftp> ls
upload
sftp> pwd
Remote working directory: /
sftp> put netbsd-nfs.bin
Uploading netbsd-nfs.bin to /netbsd-nfs.bin
remote open("/netbsd-nfs.bin"): Permission denied
sftp> cd upload
sftp> put netbsd-nfs.bin
Uploading netbsd-nfs.bin to /upload/netbsd-nfs.bin
netbsd-nfs.bin [...]
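As an extra step not in the original setup, it may be worth validating the sshd_config changes before reloading, so a typo does not lock you out. sshd's -t flag checks the syntax and -T prints the effective configuration; whether -T accepts a partial connection spec like user=sftp depends on your OpenSSH version, so treat this as a sketch:

# syntax check of /etc/ssh/sshd_config; prints nothing on success
sshd -t
# dump the effective settings for the sftp user and verify the Match block applies
sshd -T -C user=sftp | grep -iE 'chrootdirectory|forcecommand|allowtcpforwarding|x11forwarding'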



Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 2 - RedBoot reverse engineering and APEX hacking

2015-05-23T18:15:40.600+03:00

(continuation of Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 1; meanwhile, my article was mentioned briefly in BSDNow Episode 89 - Exclusive Disjunction, around minute 36:25)

Choosing to call RedBoot from a hacked Apex

As I was saying in my previous post, in order to be able to automate the booting of the NetBSD image via TFTP, I opted for using a 2nd stage bootloader (planning to flash it in the NSLU2 instead of a Linux kernel), and since Debian was already using Apex, I chose Apex, too.

The first problem I found was that the networking support in Apex was relying on an old version of the Intel NPE library which I couldn't find on Intel's site. The new version was incompatible with, and did not build against, the old build wrapper in Apex, so I was faced with 3 options:
  • Fight with the available Intel code and try to force it to compile in Apex
  • Incorporate the NPE driver from NetBSD into a rump kernel to be included in Apex instead of the original Intel code, since the NetBSD driver only needed an easily compilable binary blob
  • Hack together an Apex version that simulates typing the necessary RedBoot commands to load the NetBSD image via TFTP and execute it

After taking a look at the NPE driver buildsystem, I concluded there were very few options less attractive than option 1, among which was hammering nails through my forehead as an improvement measure against the severe brain damage I would probably suffer after dealing with the NPE "build system".

Option 2 looked like the best option I could have, given the situation, but my NetBSD foo was too close to 0 to even dream of endeavouring on such a task. In my opinion, this still remains the technically superior solution to the problem, since it is a very portable and flexible way to ensure networking works in spite of the proprietary NPE code.

But, in practice, the best option I could implement at the time was option 3. I initially planned to pre-fill from Apex my desired commands into the RedBoot buffer that stored the keyboard strokes typed by the user:

load -r -b 0x200000 -h 192.168.0.2 netbsd-nfs.bin
g

Since this was the first time ever I was going to do less than trivial reverse engineering in order to find the addresses and signatures of interesting functions in the RedBoot code, it wasn't bad at all that I had a version of the RedBoot source code.

When stuck with reverse engineering, apply JTAG

The bad thing was that the code Linksys published as the source of the RedBoot running inside the NSLU2 was, in fact, a different code which had some significant changes around the pieces of code I was mostly interested in. That, in spite of the GPL terms.

But I thought that I could manage. After all, how hard could it be to identify the 2-3 functions I was interested in, and 1 buffer? Even if I only had the disassembled code from the slug, it shouldn't be that hard.

I struggled with this for about 2-3 weeks on the few occasions I had during that time, but the excitement of learning something new kept me going. Until I got stuck somewhere between the misalignment between the published RedBoot code and the disassembled code, the state of the system at the time of dumping the contents from RAM (for purposes of disassembly), the assembly code generated by GCC for some specific C code I didn't have at all, and the particularities of ARM assembly.

What was most likely to unblock me was to actually see the code in action, so I decided attaching a JTAG dongle to the slug and doing a session of in-circuit debugging was in order.

Luckily, the pinout of the JTAG interface was already identified in the NSLU2 Linux project, so I only had to solder some wires to the specified places and a 2x20 header to be able to connect through JTAG to the board.

JTAG connections on Kinder (the NSLU2 targeting NetBSD)

After this was done I tried immediately to see if, when using a JTAG debugger, I could break the execution of the code on the system. The answer was, sadly, no.

The chip was identif[...]



Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 1

2015-05-23T18:21:09.580+03:00

About 2 months ago I set a goal to run some kind of BSD on the spare Linksys NSLU2 I had. This was driven mostly by curiosity, after listening to a few BSDNow episodes and becoming a regular listener, but it was a really interesting experience (it was also somewhat frustrating, mostly due to lacking documentation or proprietary code).

Looking for documentation on how to install any BSD flavour on the Linksys NSLU2, I found what appears to be some too-incomplete-to-be-useful-for-a-BSD-newbie information about installing FreeBSD, no information about OpenBSD and some very detailed information about NetBSD on the Linksys NSLU2.

I was very impressed by the NetBSD build.sh script, which can be used to cross-compile the entire NetBSD system - to do that, it also builds the appropriate toolchain - the NetBSD kernel and the base system, even when run on a Linux host. Having some experience with cross compilation for GNU/Linux embedded systems I can honestly say this is immensely impressive, well done NetBSD!

A few failed attempts to properly follow the instructions and lots of hours of (re)building later, I had the kernel and the sets (the NetBSD system is split into several parts which are grouped by functionality, these are the sets), so I was in the position of having to set things up to be able to net boot - kernel loading via TFTP and rootfs on NFS.

But it wouldn't be challenging if the instructions were followed to the letter, so the first thing I wanted to change was that I didn't want to run dhcpd just to pass the DHCP boot configuration to the NSLU2; that seemed like a waste of resources since I already had dnsmasq running.

After some effort and struggling with missing documentation, I managed to use dnsmasq to pass DHCP boot parameters to the slug, but also to use it as a TFTP server - after some time I documented this for future reference on my blog and expect to refer to it in the future.

Setting up NFS wasn't a problem, but, when trying to boot, I found that I had managed to misread at least 3 or 4 times some of the NSLU2 related information on the NetBSD wiki. To be able to debug what was happening, I concluded the slug should have a serial console attached to it, which helped a lot.

Still, the result was that I wasn't able to boot the trunk version of the NetBSD code on my NSLU2.

Long story short, with the help of some people from the #netbsd IRC channel on Freenode and from the port-arm NetBSD mailing list, I found out that I might have a better chance with specific older versions. In practice, what really worked was the code from the netbsd_6_1 branch.

Discussions on the port-arm mailing list, some digging into the (recently found) PRs (problem reports), and a successful execution of the trunk kernel (at the time, version 7.99.4) together with the 6.1.5 userspace led me to the conclusion that the NetBSD userspace for armbe was broken in the trunk branch.

And since I concluded this would be a good occasion to learn a few details about NetBSD, I set out to git bisect through the trunk history to identify when this happened. But that meant being able to easily load kernels and run them from TFTP, which was not how the RedBoot bootloader flashed into the slug behaves by default.

By default, the RedBoot bootloader flashed into the NSLU2 waits 2 seconds for a manual interaction (it waits for a ^C) on the serial console or on the telnet RedBoot prompt, then, if no such event happens, it copies the Linux image it has in flash starting at address 0x50060000 into RAM at address 0x01d00000 (after stripping the Sercomm header) and then executes the copied code from RAM.

Of course, this is not a very handy way to try to boot things from TFTP, so my first idea to overcome this limitation was to use a second stage bootloader which would do the loading of the NetBSD kernel via TFTP, then execute it from RAM. Flashing this second stage bootloader instead of the Linux kernel at 0x50060000 would make sure that no manual intervention except p[...]
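For completeness, here is a sketch of the kind of build.sh invocation used for such a cross build. The -a value and the NSLU2 kernel configuration name are assumptions on my part that depend on the NetBSD version, so check the evbarm documentation before relying on them:

cd /usr/src/netbsd                                  # assumed checkout location of the NetBSD sources
# -U: unprivileged build, -u: update (don't clean), -m/-a: target machine and CPU architecture
./build.sh -U -u -m evbarm -a armeb tools           # build the cross toolchain first
./build.sh -U -u -m evbarm -a armeb kernel=NSLU2    # kernel config name is an assumption
./build.sh -U -u -m evbarm -a armeb sets            # build the binary sets for the rootfs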



Linksys NSLU2 JTAG help requested

2015-04-30T03:42:29.139+03:00

Some time ago I embarked on a journey to install NetBSD on one of my two NSLU2s. I have run into all sorts of hurdles and problems which I finally managed to overcome, except one:

The NSLU2 I am using has a standard 20 pin ARM JTAG connector attached to it (as per this page http://www.nslu2-linux.org/wiki/Info/PinoutOfJTAGPort, only the TDI, TDO, TMS, TCK, Vref and GND signals), but, although the chip is identified, I am unable to halt the CPU:

$ openocd -f interface/ftdi/olimex-arm-usb-ocd.cfg -f board/linksys_nslu2.cfg
Open On-Chip Debugger 0.8.0 (2015-04-14-09:12)
Licensed under GNU GPL v2
For bug reports, read
    http://openocd.sourceforge.net/doc/doxygen/bugs.html
Info : only one transport option; autoselect 'jtag'
adapter speed: 300 kHz
Info : ixp42x.cpu: hardware has 2 breakpoints and 2 watchpoints
Info : clock speed 300 kHz
Info : JTAG tap: ixp42x.cpu tap/device found: 0x29277013 (mfg: 0x009, part: 0x9277, ver: 0x2)
[..]

$ telnet localhost 4444
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
> halt
target was in unknown state when halt was requested
in procedure 'halt'
> poll
background polling: on
TAP: ixp42x.cpu (enabled)
target state: unknown

My main goal is to make sure I can flash the device via JTAG, in case I break it, but it would be ideal if I could use the JTAG to single step through the code.

I have found that other people have managed to flash the device via JTAG without the other signals, and some have even changed the bootloader (and had JTAG confirmed as a backup solution), yet I am stuck.

So if anyone can give some insights into ixp42x / XScale / NSLU2 specific JTAG issues, or hints regarding this issue in OpenOCD or another such tool, I would be really grateful.

Note: I have made a hacked second stage Apex bootloader to load the NetBSD image via TFTP, but the default RedBoot sequence 'boot; exec 0x01d00000' should be 'boot; go 0x01d00000' for NetBSD to work, so I am considering changing the RedBoot partition to alter that command. The gory details can be summed up as: my Apex is calling RedBoot functions to be network enabled (because Intel's current NPE code is not working in Apex) and I have tested this to work with go, but not with exec.[...]



HOWTO: Dnsmasq server for network booting using TFTP and DHCP

2015-04-06T10:27:58.153+03:00

Dnsmasq is a very lightweight server that, besides the expected DNS caching functionality, also offers DHCP and TFTP functionality in a single binary.

This makes it very useful if one needs to network boot a system since you can have the TFTP and DHCP part of the setup done easily, and only add NFS for a complete network boot.

Add to that one extra nice thing dnsmasq has: it can mark specific hosts, addresses or ranges with some internal markers (tags), then use those markers as symbolic names to apply settings to whole classes of devices.

In the configuration snippet below, there is a rule I set up to make sure I would apply the 'netbsd' label to any system connecting through specific ethernet interfaces (one is the interface of the system, the other is a USB NIC I use from time to time):
#You will need a range for static IPs in the main file
dhcp-range=192.168.77.250,192.168.77.254,static

# give the name 'kinder' to any machine connecting through the given ethernet nics and apply 'netbsd' label
dhcp-host=00:1a:70:99:60:BB,00:06:4F:0D:B1:95, kinder, 192.168.77.251, set:netbsd

# Machines tagged 'netbsd' shall use the given NFS root path
dhcp-option=tag:netbsd, option:root-path,/export/netbsd-nslu2/root
# Enable dnsmasq's built-in TFTP server
enable-tftp

# Set the root directory for files available via TFTP.
tftp-root=/srv/tftp
Saving this configuration file as /etc/dnsmasq.d/kinder-netboot will have it picked up by dnsmasq, provided this line is present in /etc/dnsmasq.conf:
conf-dir=/etc/dnsmasq.d
Commenting it out will easily disable the netbsd part.
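One extra step I find useful (my addition, not part of the original setup): dnsmasq can check its own configuration before a restart, which catches typos in the snippet. The restart command assumes a systemd machine with the service named dnsmasq:

# check the syntax of the main file and everything pulled in via conf-dir
dnsmasq --test
# then apply the new snippet
systemctl restart dnsmasq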



HOWTO: Disassemble a big endian Arm raw memory dump with objdump

2015-03-29T16:30:52.235+03:00

This is trivial and very useful for embedded code dumps, but in case somebody (including future me) needs this, here it goes:
arm-none-eabi-objdump -D -b binary -m arm -EB dump.bin | less
The options mean:
  • -D - disassemble
  • -b binary - input file is a raw file
  • -m arm - arm architecture
  • -EB - big endian
By default, endianness is assumed to be little endian, or at least that's what happened with my toolchain.
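A possibly useful variation (my assumption, not from the original note): if you know the address the dump was taken from, objdump can be told to display matching addresses with --adjust-vma, which makes branch targets line up with the real memory map. The address below is hypothetical:

# assume the dump was taken from flash starting at 0x50000000
arm-none-eabi-objdump -D -b binary -m arm -EB --adjust-vma=0x50000000 dump.bin | less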



Net Neutrality

2015-03-29T01:40:37.299+02:00

I have seen this awesomeness way too late, but it is still awesome.



Occasional Rsnapshot v1.3.1

2015-03-06T23:39:57.446+02:00

I was writing in the previous post about Occasional Rsnapshot and  how I ended up writing it.

Just before releasing v1.2.1 I realized it would make sense to adopt Semantic Versioning which, in just a few words, means:
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
I just released Occasional Rsnapshot v1.3.1 which mainly fixed issue 4:
When deciding to do a backup for interval INT, there should be a check that the oldest snapshot in the INT-1 interval is older than the threshold for the INT interval. Otherwise the INT interval will be populated with backups more frequent than desired, and it is possible for older backups in the INT interval to be completely lost.
The condition should be:
ts(oldest(INT-1)) - ts(newest(INT)) >= threshold(INT)
For example:
  • if weekly.0 is from 15th of February and daily.6 is from 17th of February, weekly should not be triggered, but
  • if weekly.0 is from 15th of February and daily.6 is from 23rd of February, weekly shall be triggered
This extra check should probably be added in can_backup_interval.
It was a small bug, but it might have led to losing important older backups, because newer and more frequent backups would be pushed from hourly, interval by interval, up to the yearly interval, even though the distance in time between those backups wouldn't respect the minimum distance required by the upper interval.
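To make the condition concrete, here is a minimal bash sketch of such a check for the weekly interval; the snapshot root, the threshold value and the helper are hypothetical, and occasional_rsnapshot's real can_backup_interval may differ:

#!/bin/bash
# ts(): modification time (epoch seconds) of a snapshot directory, e.g. /backup/weekly.0
ts() { stat -c %Y "$1"; }

SNAPROOT=/backup                 # hypothetical snapshot_root from rsnapshot.conf
THRESHOLD_WEEKLY=$((7*24*3600))  # minimum distance in seconds for the weekly interval

# oldest daily snapshot = highest numeric suffix; newest weekly snapshot = weekly.0
oldest_daily=$(ls -d "$SNAPROOT"/daily.* 2>/dev/null | sort -t. -k2 -n | tail -n1)
newest_weekly="$SNAPROOT/weekly.0"

# only rotate weekly if the oldest daily is old enough relative to the newest weekly
if [ -d "$oldest_daily" ] && [ -d "$newest_weekly" ] && \
   [ $(( $(ts "$oldest_daily") - $(ts "$newest_weekly") )) -ge "$THRESHOLD_WEEKLY" ]; then
    rsnapshot weekly
fi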

There was also a small syntax bugfix, but functionally nothing has changed because bash was doing the right thing even with that small error.

If you have started using Occasional Rsnapshot, you definitely want Occasional Rsnapshot v1.3.1 now. If you haven't started and you don't have backups, please start doing backups, and while you're at it, you might want to try Occasional Rsnapshot (with Rsnapshot).




Occasional Rsnapshot v1.3.0

2015-02-25T21:41:44.312+02:00

It is almost exactly 1 year and a half since I came up with the idea of having a way of making backups using Rsnapshot automatically triggered by my laptop when I have the backup media connected to it. This could mean connecting a USB drive directly to the laptop or mounting an NFS/sshfs share in my home network. Today I tagged Occasional Rsnapshot v1.3.0, the first released version that makes sure that, even when you connect your backup media only occasionally, your Rsnapshot backups are done if and when it makes sense to do them, according to the rsnapshot.conf file and the status of the existing backups on the backup media.

Quoting from the README, here is what Occasional Rsnapshot does:

This is a tool that allows automatic backups using rsnapshot when the external backup drive or remote backup media is connected.
Although the ideal setup would be to have periodic backups on a system that is always online, this is not always possible. But when the connection is done, the backup should start fairly quickly and should respect the daily/weekly/... schedules of rsnapshot so that it accurately represents history.
In other words, if you backup to an external drive or to some network/internet connected storage that you don't expect to have always connected (which is the case with laptops) you can use occasional_rsnapshot to make sure your data is backed up when the backup storage is connected.
occasional_rsnapshot is appropriate for:
  • laptops backing up on: a NAS on the home LAN, or a remote or internet hosted storage location
  • systems making backups online (storage mounted locally somehow)
  • systems doing backups on an external drive that is not always connected to the system
The only caveat is that all of these must be mounted in the local file system tree somehow by any arbitrary tool; occasional_rsnapshot and rsnapshot do not care, as long as the files are mounted.

So if you find yourself in a similar situation, this script might help you to easily do backups in spite of the occasional availability of the backup media, instead of no backups at all. You can even trigger backups semi-automatically when you remember to, or decide it is time to backup, by simply plugging in your USB backup HDD.

But how did I end up here, you might ask?

In December 2012 I was asking about suggestions for backup solutions that would work for my very modest setup with Linux and Windows, so I could back up my and my wife's systems without worrying about loss of data.

One month later I was explaining my concept of a backup solution that would not trust the backup server, and leave to the clients as much as possible the decision to start the backup at their desired time. I was also pondering on the problems I might encounter.

From a security PoV, what I wanted was that:
  • clients would be isolated from each other
  • even in the case of a server compromise:
      • the data would not be accessible, since it would already be encrypted before leaving the client
      • the clients could not be compromised

The general concept was sane, and supplemental security measures such as port knocking and initiation of backups only during specific time frames could be added.

The problem I ran into was that when I set this up in my home network, a single backup cycle would take more than a day, due to the fact that I wanted to do a backup of all of my data and my server was a humble Linksys NSLU2 with a 3TB storage attached on USB.

Even when the initial copy was done by attaching the USB media directly to the laptop, so the backup would only copy changed data, the backup with the HDD attached to the NSLU2 was not finished even after more than 6 hours.

The bottleneck was the CPU speed and the USB speed. I even tried mounting the storage media over sshfs so the tiny XScale processor in the NSLU2 would not be bothered by any of the rsync computation. [...]



uClibc based toolchain using Gentoo for NSLU2

2015-02-09T23:40:54.114+02:00

I was asked in the previous post why I didn't use the Debian armel port for my NSLU2.

My intention was to create a uclibc based system (and a uclibc based toolchain) for my NSLU2. This was not obvious in the post because building the uclibc based toolchain resulted in this error:
crossdev armv5-softfloat-linux-uclibceabi
[..]/var/tmp/portage/cross-armv5-softfloat-linux-uclibceabi/gcc-4.9.2/work/gcc-4.9.2/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:67:21: fatal error: wordexp.h: No such file or directory
 #include <wordexp.h>
                     ^

compilation terminated.
Makefile:416: recipe for target 'sanitizer_platform_limits_posix.lo' failed
make[4]: *** [sanitizer_platform_limits_posix.lo] Error 1
make[4]: *** Waiting for unfinished jobs....

So, yesterday, to not forget what I did, I tried building a glibc based toolchain and posted that.

Today, after looking at the offending error, it seems gcc 4.9.2 assumes wordexp.h is always available on non-Android platforms, and Gentoo does not make that file available when installing uclibc.


I think the problem was introduced in gcc in 2013 with this commit, but I haven't checked in detail. What I know for sure is that gcc 4.8.3 works at this moment with uclibc, and the buildroot guys are still using gcc 4.8.4 in their default uClibc based toolchain. So here is the command that generated the uClibc based toolchain:

crossdev --g 4.8.3 armv5-softfloat-linux-uclibceabi

I hope this helps other people. Yes, I know I should report the issue to GCC/Gentoo after further investigation.
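A quick sanity check after crossdev finishes (my addition; paths and target tuple as above):

# the cross compiler should now be on PATH under the target tuple prefix
armv5-softfloat-linux-uclibceabi-gcc --version
# and a trivial compile should work
echo 'int main(void){return 0;}' > /tmp/t.c
armv5-softfloat-linux-uclibceabi-gcc -o /tmp/t /tmp/t.c && file /tmp/t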



Using Gentoo to create a cross toolchain for the old NSLU2 systems (armv5te)

2015-02-07T21:07:26.258+02:00

This is mostly written so I don't forget how to create a custom (Arm) toolchain the Gentoo way (in a Gentoo chroot).

I have been a Debian user since 2001, and I like it a lot. Yet I have had my share of problems with it, mostly because, due to lack of time, I have very little disposition to try to track unstable or testing, so I am forced to use stable.

This led me to be a fan of Russ Albery's backport script and to create a lot of local backports of packages that are already in unstable or testing.

But this does not help when packages are simply missing from Debian, or for something like creating an arm uclibc based system that should be kept up to date from a security PoV.

I have experience with Buildroot and I must say I like it a lot for creating custom root filesystems and even toolchains. It allows a lot of flexibility that binary distros like Debian don't offer, and it does its designated work, creating root filesystems. But buildroot is not appropriate for a system that should be kept up to date, because it lacks a mechanism by which to update to new versions of packages without recompiling the entire rootfs.

So I was hearing from the guys from the Linux Action Show (and Linux Unplugged - by the way, Jupiter Broadcast, why do I need scripts enabled from several sites just to see the links for the shows?) how Arch is great and all, that it is a binary rolling release, and that you can customize packages by building your own packages from source using makepkg. I tried it, but Arm support is provided only for some specific (modern) devices, my venerable Linksys NSLU2s (I have 2 of them) not being among them.

So I tried Arch in a chroot, then dropped it in favour of a Gentoo chroot, since I was under the feeling that running Arch from a chroot wasn't such a great idea and I don't want to install Arch on my SSD.

I successfully used Gentoo in the past to create an arm-unknown-linux-gnueabi chroot back in 2008, and I always liked the idea of USE flags from Gentoo, so I knew I could do this.

So here it goes:

# create a local portage overlay - necessary for cross tools
export LP=/usr/local/portage
mkdir -p $LP/{metadata,profiles}
echo 'mycross' > $LP/profiles/repo_name
echo 'masters = gentoo' > $LP/metadata/layout.conf
chown -R portage:portage $LP
echo 'PORTDIR_OVERLAY="'$LP' ${PORTDIR_OVERLAY}"' >> /etc/portage/make.conf
unset LP

# install crossdev, setup for the desired target, build toolchain
emerge crossdev
crossdev --init-target -t arm-softfloat-linux-gnueabi -oO /usr/local/portage/mycross
crossdev -t arm-softfloat-linux-gnueabi [...]



The stupidest trend in laptop design is...

2013-10-29T09:52:38.559+02:00

... numpads on laptop keyboards.

Just because a very, very, very restricted segment of the population is into accounting or other jobs needing frequent numeric input, almost all laptop manufacturers feel the stupid urge to place a numpad on laptops with screens over 14''.

Reasons why a numpad on a laptop is a bad idea:
  • can lead to health issues: it forces the user to assume a bad position in front of the screen; this can affect the eyes and the backbone; it's bad enough many people have a bad posture in front of the computer as it is, no need to pump up Scoliosis' position in the laundry list of modern day health risks. The numpad forces eye focus and users' hands to be way off center, and the touchpad looks as if it was thrown to the side by accident.
    (Note how the designer tried to offset the misalignment forced by the numpad by "positioning" the touchpad closer to center, but not in line with the space bar and the keyboard.)
  • ugly design: it simply looks ugly; why do people think Apple didn't jump onto this stupid bandwagon? Because it's ugly design!
    (Without a numpad the position is almost perfect! The symmetry of the laptop improves the design. Notice how the touchpad is also centered.)
  • the numpad is useless for the vast majority of people, and those who need numpads already use them at a desktop (keyboard), or can buy numpads
  • it's more expensive to manufacture (more moving parts, more complex wiring and more expensive materials - plastic is cheaper than plastic+copper+rubber)
  • less resilient: more ways to fail, higher risk of getting liquids inside
  • bad mechanical design: a wider keyboard means less distance from the edge to the keyboard, which can mean a more fragile case
  • makes the laptop heavier: the plastic or material that covers the keys would be enough to cover the insides; the extra rubber, wiring, support etc. add extra weight
  • it kills the opportunity to use the real estate for other useful things (or things which improve the overall design): speakers, volume keys, fingerprint readers, other special function keys, LEDs or other output devices

What's worse is that most resellers do not have filters for this particular mis-feature, so you can't easily exclude laptops corrupted by this horrendous idea.

So, if you're a laptop designer, please stop putting numpads on laptops.

It will make for better looking laptops and you'll have the opportunity to be the one stating the obvious: numpads are generally useless! Even on desktop keyboards![...]



Integrating Beyond Compare with Semanticmerge

2013-09-02T00:51:28.306+03:00

Note: This post will probably not be to the liking of those who think free software is always preferable to closed source software, so if you are such a person, please take this article as an invitation to implement better open source alternatives that can realistically compete with the closed source applications I am mentioning here. I am not going to mention here where the open source alternatives are not up to the same level as the commercial tools; I'll leave that for the readers or for another article.

Semanticmerge is a merge tool that attempts to do the right thing when it comes to merging source code. It is language aware and it currently supports Java and C#. Just today the creators of the software started working on support for C.

Recently they added Debian packages, so I installed it on my system. For open source development Codice Software, the creators of Semanticmerge, offers free licenses, so I decided to ask for one today, and, although it is Sunday, I received an answer and I will get my license on Monday.

When a method is moved from one place to another and changed in a conflicting way in two parallel development lines, Semanticmerge can isolate the offending method and can pass all its incarnations (base, source and destination or, if you prefer, base, mine and theirs) to a text based merge tool to allow the developer to decide how to resolve the merge. On Linux, the Semanticmerge samples are using kdiff3 as the text-based merge tool, which is nice, but I don't use kdiff3, I use Meld, another open source visual tool for merges and comparisons.

OTOH, Beyond Compare is a merge and compare tool made by Scooter Software which provides a very good text based 3-way merge with a 3 sources + 1 result pane, and can compare both files and directories. Two of its killer features are the ability to split differences into important and non-important ones according to the syntax of the compared/merged files, and the ability to easily change or add to the syntax rules in a very user-friendly way. This makes it easy to ignore changes in comments, but also basic refactoring such as variable renaming, or other trivial code-wide changes, which allows the developer to focus on the important changes/differences during merges or code reviews.

Syntax support for usual file formats like C, Java, shell, Perl etc. is built in (but can be modified, which is a good thing), and new file types with their syntaxes can be added via the GUI from scratch or based on existing rules.

I evaluated Beyond Compare at my workplace and we decided it would be a good investment to purchase licenses for it for the people in our department.

Having these two pieces of software separate is good, but having them integrated with each other would be even better. So I decided I would try to see how it can be done. I installed Beyond Compare on my system, too, and looked through the examples. The first thing I discovered is that the main assumption of the Semanticmerge developers was that the application would be called via the SCM when merges are to be done, so passing lots of parameters would not be a problem. I realised that when I saw how one of the samples' starting scripts invoked semanticmergetool:

semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java -edt="kdiff3 \"#sourcefile\" \"#destinationfile\"" -emt="kdiff3 \"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1 \"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\" -o \"#output\"" -e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""

Can you see the problem? It seems Semanticmerge has no persistent knowledge of the user preferences with regards[...]
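Since the post is cut off before the actual integration, here is only a sketch of the direction it points at: the same -edt and -emt switches should accept Beyond Compare instead of kdiff3. The bcompare argument order for a 3-way merge shown below is an assumption on my part, so verify it against Scooter Software's command line documentation before using it:

# hypothetical variant of the sample invocation, with kdiff3 swapped for bcompare
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java \
  -r=/tmp/semanticmergetoolresult.java \
  -edt="bcompare \"#sourcefile\" \"#destinationfile\"" \
  -emt="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\""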



HOWTO add a shell script wrapper and preserve quoting for parameters

2013-09-02T01:44:05.642+03:00

If you ever find yourself in the situation where you have to add a shell script wrapper over a command, but the parameters' quoting gets lost and you end up with the wrong parameters in the wrapped command/tool, you might want to read this post.

On my system I have some command line tools which are Windows only and, in order to easily use the same build system as on Windows on my Linux machine I added a wrapper script which invokes wine on the commands and made symlinks to the wrapper with the file names as the tools, but without the '.exe' suffix.

Of course, I wanted to properly pass the parameters through the wrapper to the tools, so I wrote (note the quoting of "$@"):
#!/bin/sh
wine $0.exe "$@"
So the answer is: use "$@", quoted exactly as in the code above, and the parameters will be passed correctly.
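To make the whole setup concrete, here is a small sketch of creating the wrapper and one symlink; the paths and the tool name are hypothetical, and it assumes the tools are invoked with their path (as build systems usually do), so that $0.exe resolves to the real .exe sitting next to the symlink:

# the wrapper itself (e.g. ~/bin/winewrap)
cat > ~/bin/winewrap <<'EOF'
#!/bin/sh
exec wine "$0.exe" "$@"
EOF
chmod +x ~/bin/winewrap

# one symlink per Windows tool, named without the '.exe' suffix,
# placed next to the actual .exe
ln -s ~/bin/winewrap /path/to/tools/sometool
# now: /path/to/tools/sometool "arg with spaces"  passes the quoting through to sometool.exe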




Update: stbuehler suggested to use exec to replace the shell process with wine with this construct:
Use:
#!/bin/sh
exec wine $0.exe "$@"

Thanks for the suggestion.



HOWTO: git - change branch without touching working copy (at all)

2013-07-24T18:02:51.225+03:00

Did you ever have the need, in a git repository, to change to another branch without altering the working copy AT ALL, and wondered how that's done?

Usual use cases might be when you made some changes to the working copy thinking you were on another branch, or when you double-track in git a directory which is also tracked by another VCS (e.g. ClearCase).

What you need, in fact, is to update the index and not touch the working copy. The command that does that is

git read-tree otherbranch
If you also need to commit the state of your working tree to the otherbranch, you also need to tell git to actually associate the current HEAD with the branch you just switched to:
git symbolic-ref HEAD refs/heads/otherbranch
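Putting the two commands together, a minimal sketch (branch name hypothetical); running git status afterwards shows the difference between the new index and your untouched working copy:

# point the index at otherbranch without touching the files on disk
git read-tree otherbranch
# make HEAD refer to that branch so the next commit lands on it
git symbolic-ref HEAD refs/heads/otherbranch
# the working copy is unchanged; the index now matches otherbranch
git status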
I use this approach at my work place* to develop/experiment with possible code improvements on my machine before considering the merge into the official code.

* The preferred VCS is (Base) ClearCase, and I keep a git repository over the relevant part of the project in the ClearCase Dynamic View, so for synchronisations the files in the working copy are updated by ClearCase and I have to resync my git branch (clearcaseint) with the latest official code from time to time, so I can pull the clearcaseint branch into my local disk git repository and merge it with my experimental changes in my git feature branches.

If people are curious about how I work with ClearCase and git, I can expand on this in another post.



(Not a) GNU Make quirk, or why logs should be provided

2013-07-20T23:26:46.419+03:00

About two months ago I was writing about a quirk I found in GNU Make related to the $(patsubst ) function.

I have just tried this on my Debian Wheezy laptop which has make 3.81, but I wasn't able to reproduce the issue with the version from Debian (3.81-8.2).

The makefile looks like this:

PATH := ../some/prefixCPU12suf/include
CPUINC := $(patsubst ../some/prefix%,%,$(PATH))
CPU := $(patsubst %/include,%,$(CPUINC))

default:
    @echo "PATH   = $(PATH)"
    @echo "CPUINC = $(CPUINC)"
    @echo "CPU    = $(CPU)"

And the result was correct:

0 eddy@heidi /tmp $ make
PATH   = ../some/prefixCPU12/include
CPUINC = CPU12/include
CPU    = CPU12
0 eddy@heidi /tmp $ make --version
GNU Make 3.81
Copyright (C) 2006  Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for x86_64-pc-linux-gnu

The worst part is that I know I tested this issue with 3.82 on Cygwin and on Linux and it failed, but I wasn't able to remember how I did it. I started searching through the directory where I knew the test makefile could be, but I wasn't able to find it, until I remembered what I was trying to achieve.

From a path like ../some/prefixCPU12suf/include I wanted to use % to remove the parts 'some/prefix' and 'suf/include', because in the directory ../CPU12 there were some files that needed to be processed.

The actual issue is that GNU Make's '%' is not analogous to shell's '*', so code like this does not work as I assumed and the 'pref' part is not an anchor:

PATH := ../some/prefCPU12suf/include
CPUINC := $(patsubst pref%,%,$(PATH))
CPU := $(patsubst %suf/include,%,$(CPUINC))

default:
    @echo "PATH   = $(PATH)"
    @echo "CPUINC = $(CPUINC)"
    @echo "CPU    = $(CPU)"

Which leads to these results, no matter the version:

0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ ./make -f /tmp/makefile
PATH   = ../some/prefCPU12suf/include
CPUINC = ../some/prefCPU12suf/include
CPU    = ../some/prefCPU12
0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ ./make --version
GNU Make 3.82
Built for x86_64-unknown-linux-gnu
Copyright (C) 2010  Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ make -f /tmp/makefile
PATH   = ../some/prefCPU12suf/include
CPUINC = ../some/prefCPU12suf/include
CPU    = ../some/prefCPU12
0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ make --version
GNU Make 3.81
Copyright (C) 2006  Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for x86_64-pc-linux-gnu

Not sure if this could be qualified as a true bug, or if the way I expected it to work would be a nice-to-have feature, but, in any case, the behaviour is consistent, unlike my brain, which initially failed to identify the inconsistency in my code:

0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ grep patsubst /tmp/makefile
CPUINC := $(patsubst pref%,%,$(PATH))
CPU := $(patsubst %suf,%,$(CPUINC))
0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ make -f /tmp/makefile
PATH   = ../some/prefCPU12suf/include
CPUINC = ../some/pr[...]



HOWTO: git - remove those pesky remotes/origin/

2013-07-24T11:38:30.642+03:00

Often people create branches to fix various bugs or implement various features, and publish those to a public repository to be pulled into the official repository (e.g. pull requests in the github model).

The problem is that after the fix is merged, the local repo still has the branch that tracks the origin (your public repository).

To remove the local branch it is simple:

git branch -d feature_x

To remove the actual branch in the remote repository:

git push origin --delete feature_x

But still the local tracking branches remain, those pesky 'remotes/origin/feature_x'. Here is an example from one of my repos:

$ git branch -a
  activestate
  master
  next
* py.test
  sysplatformtest
  remotes/activestate/master
  remotes/origin/HEAD -> origin/master
  remotes/origin/appauthor-fallback
  remotes/origin/linux-fixes
  remotes/origin/master
  remotes/origin/next
  remotes/origin/py.test
  remotes/origin/root-directories
  remotes/origin/sysplatformtest
  remotes/origin/wrapper-defaults

All of those can be removed with this command:

git remote prune origin

And the output for my appdirs repository is:

Pruning origin
URL: git@github.com:eddyp/appdirs.git
 * [pruned] origin/appauthor-fallback
 * [pruned] origin/linux-fixes
 * [pruned] origin/next
 * [pruned] origin/root-directories
 * [pruned] origin/sysplatformtest
 * [pruned] origin/wrapper-defaults

Which, finally, removes those damned refs:

$ git branch -a
  activestate
* master
  py.test
  remotes/activestate/master
  remotes/origin/HEAD -> origin/master
  remotes/origin/master
  remotes/origin/py.test

Great!

Update 2013-07-24: There have also been other solutions proposed in the comments section. I am quoting them here for better visibility.

Besides the solution I proposed, people commented about other alternatives for pruning remote branches, by using git fetch --prune or by using git-push's --prune option.

Brice provided a method to remove specific branches one by one with the git branch -dr command:

git branch -dr origin/foo

What is ironic is that in the past I have used this format, and before writing the original article I was looking for this, but haven't found it that easily. I hope this article will help some other poor soul or myself in the future. [...]
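A small addition of mine, not from the post or the comments: the pruning commands have a dry-run mode, which is handy to preview what would be removed before actually deleting anything:

# list the stale remote-tracking branches without deleting anything
git remote prune --dry-run origin
# similar preview when fetching
git fetch --prune --dry-run origin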






GNU Make quirk

2013-07-20T23:24:47.434+03:00

(Updated 2013-07-20: Correctly describe the issue and provide examples)

Make has several useful built-in functions, among which $(patsubst pattern,replacer,string). The pattern can contain the character % which is a wildcard meaning any string and that can be referred in the replacer parameter. Only the first occurrence of % is treated with this special meaning, just as the manual says.

What the manual doesn't say is that only the form %bla works, while the form bla% does NOT work: GNU Make's '%' is not analogous to shell's '*', so you can't expect to remove substrings in the middle of a string without knowing the exact string up to the part you want to remove.

This does not work if you want to obtain '../some/CPU12/include' in OTHERPATH:

PATH := ../some/prefCPU12/include
OTHERPATH := $(patsubst pref%,%,$(PATH))

default:
    @echo "OTHERPATH = $(OTHERPATH)"
The result will be:
 OTHERPATH = ../some/prefCPU12/include
This behaviour is somewhat asymmetric to how subst behaves and to how one would expect it to work, considering '%' is explained as a glob pattern:

0 eddy@heidi /tmp $ cat makefile
PATH      := ../some/prefCPU12/include
OTHERPATH := $(patsubst pref%,%,$(PATH))
SPATH     := $(subst pref,,$(PATH))

default:
    @echo "PATH      = $(PATH)"
    @echo "OTHERPATH = $(OTHERPATH)"
    @echo "SPATH     = $(SPATH)"
0 eddy@heidi /tmp $ make
PATH      = ../some/prefCPU12/include
OTHERPATH = ../some/prefCPU12/include
SPATH     = ../some/CPU12/include
Not sure what the make developers would say about this, and I am not sure if having % work as a glob pattern, and using it instead of invoking sed, would be better for performance, but I sure would like to have the option :-) .



The sorry state of the Android universe

2013-03-28T17:12:37.595+02:00


Update: Corrected the code name - 4.2.2 is Jelly Bean, not Ice Cream Sandwich as I initially stated. I also described in the comments why the calendar application is also crap.

I recently bought a phone based on the Android platform (version 4.2.2 aka Jelly Bean). Before the purchase I had the wrong idea that this platform - Android - is the best thing since sliced bread. Let me tell you, that idea is so wrong, it's a shame anybody thinks or has ever thought that.

My previous phone was a Nokia E71 and with its stock set of applications and in spite of its old and rusty Symbian OS, I still have a hard time to even match the basic functionalities on the Android phone even using the most praised apps from the Play Store.

The dialer is crap and there is no decent speed dialer; the focus is on the apps instead of the phone functionality. The homescreen type-to-search-in-contacts functionality of the E71 is probably impossible due to the retarded decision to forget you're using a phone. Notifications, even important ones, are hidden, to the point that the ones needing attention can be missed (e.g. entering the PIN/confirmation for Bluetooth pairing must be searched for in the notification area). When looking in the agenda there is no one-step way to edit a contact; you must jump to another application and do the edit there. Treating SMS conversations as one-to-one instant messages works until you want to reply to multiple people at once.

Volume for ringing, SMS notification and the headset is controlled all together, so if during the last conversation you turned down the volume because it was too loud, you can miss a call because your phone will ring at a lower volume.

And these are only broken things in the basic functionality (for a post 2008 cell phone).

Android phones are smart phones, so more advanced features are required: playing audio and video files, GPS related applications, podcasting support, email handling and web browsing are among the features that can be expected on such a phone.

Only web related functionality and simple media playing are at a reasonable level compared with my old E71, maybe due to Google being web oriented and the new phones having better screens than the old Nokia.

But I am an avid podcast listener, so I've been searching for an application that can match Nokia's stock Podcasts application, and I've come to the conclusion that Android users who love podcasts either have to wait for Nokia to develop on Android (which seems unlikely) or find one or two talented developers to create a decent application.

I don't understand how such an application can lack a playlist of the downloaded episodes, a way to download all new episodes, or a way to mark all/selected episodes as listened. I have found applications that have at least one of these issues and still average over 4 out of 5 stars in Google Play. Poor users!

In the light of these issues, I'm having so much difficulty coming up with an excuse for Nokia losing its position as a market leader, but it seems technical superiority is not necessary or enough to dominate the mobile phone market. Sadly, that says a lot about our species, and the words aren't nice.



Finally the XDG fixes are in appdirs

2013-03-22T00:34:00.737+02:00

Just a few hours ago my XDG fixes landed in the master branch of ActiveState appdirs.

https://github.com/ActiveState/appdirs/pull/17

They will be part of version 1.3.0, but the maintainers first want to create some tests, since the test framework was just rewritten recently.



Herbalife, a detailed analysis

2013-03-12T23:41:30.632+02:00

You might remember that a while ago I mentioned I got involved in skepticism to the point I even co-host a podcast, Skeptics in Romania (in Romanian).

Among the subjects we tackle there are claims about various miracle fruits, shady dietary advice, various nonscientific health products, and even scams. We try to inform our listeners about ways to identify such dangerous/fake products themselves, how they can inform themselves about the claims they might encounter, and what questions they should ask before considering buying (into) such things.

One sensitive subject is the so called multilevel marketing, especially for people involved in such businesses.

This is a sensitive subject because many of these schemes are actually pyramid schemes, also known as Ponzi schemes. These are illegal in many countries, since they are, in fact, scams designed to lure people with promises of high profits for little work.

One such pyramid system... well, you can judge for yourselves (note that the presentation contains many slides, but it's really captivating):

http://www.businessinsider.com/bill-ackmans-herbalife-presentation-2012-12?op=1j



So, I did the responsible thing and fixed appdirs

2013-03-04T09:39:58.162+02:00

In my previous post I expressed my frustration at the way a perfectly nice and fine idea, a portable way to get the standard configuration and data directory/files, was broken for Linux and BSD, because the authors of appdirs thought the XDG standard was "subject to some interpretation".

Although I said I decided not to use appdirs, I realised that wouldn't help anyone, so I fixed the code.

During the coding phase I discovered that the authors of appdirs broke the XDG standard even more, this time ignoring XDG_DATA_DIRS while talking about XDG_CONFIG_DIRS. When I found this I became convinced the *nix part of the implementation was the subject of continuous irony, since the comment next to this newly found breakage said "Perhaps should *use* that envvar", referring to XDG_CONFIG_DIRS, while the code below it hardcoded:

/etc/xdg/

Sweet, isn't it?
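For reference, the spec-mandated fallback is easy to express; here is a minimal shell sketch of what a compliant implementation should do (the variable handling mirrors the XDG Base Directory specification's documented defaults):

# per the XDG Base Directory spec, fall back to the documented defaults
# only when the variables are unset or empty
config_dirs="${XDG_CONFIG_DIRS:-/etc/xdg}"
data_dirs="${XDG_DATA_DIRS:-/usr/local/share:/usr/share}"
echo "system config search path: $config_dirs"
echo "system data search path:   $data_dirs"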

If you want a fixed version, you can grab it from my repository, on the linux-fixes branch:

https://github.com/eddyp/appdirs/tree/linux-fixes