
An Open Access Peon



Technical postings (predominantly related to Web/document processing) and general musings on Open Access to scientific literature. Very low bandwidth.



Updated: 2018-01-24T07:51:34.800+00:00

 



Using Oracle Thin Driver with Client/Server Authentication SSL

2016-08-16T09:20:03.932+01:00

Oracle Database server supports SSL on-the-wire encryption plus client and server authentication. This can be a bit tricky to set up and, after much exhaustive searching, I've never found a complete description of the client-side configuration steps (or at least, one using a Tomcat Resource).

The following instructions describe how I set up SSL authentication (/encryption) from a Tomcat WebApp to Oracle Database server.

You must use a recent ojdbc6.jar. Older versions (I can't work out exactly which) have a bug relating to parsing passwords from connection properties. Download the latest ojdbc6.jar or ojdbc7.jar from Oracle and place it in tomcat/lib.

You will need the "keytool" from the Java JDK or JRE.

Create a new keystore with self-signed certificate:

keytool -genkey -alias %computername% -keystore keystore.jks -storepass changeme -validity 3650

When prompted, you probably want to use your machine's host name in answer to "What is your first and last name?" (this becomes the CN= part of the certificate subject).

Export the self-signed certificate:

keytool -export -keystore keystore.jks -storepass changeme -alias %computername% -file %computername%.cer

Provide this to your Oracle DBA, who will import the certificate into the database trust store (wallet). The DBA should provide you with a certificate chain for the server. Import these into your Java keystore:

keytool -importcert -noprompt -keystore keystore.jks -storepass changeme -file SERVER.CRT
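
At this point the keystore should contain both your private key entry and the imported server certificate(s); you can sanity-check it with:

keytool -list -v -keystore keystore.jks -storepass changeme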

In your Tomcat server.xml create a new Resource entry under GlobalNamingResources:
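
A minimal sketch of such an entry - the resource name, user, host, service name, paths and passwords are placeholders to adjust for your environment, and note that connectionProperties must be one long line containing no spaces or newlines:

<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
        driverClassName="oracle.jdbc.OracleDriver"
        username="scott" password="tiger"
        url="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=dbhost)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=mydb)))"
        connectionProperties="javax.net.ssl.trustStore=/path/to/keystore.jks;javax.net.ssl.trustStorePassword=changeme;javax.net.ssl.keyStore=/path/to/keystore.jks;javax.net.ssl.keyStorePassword=changeme"/>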


Errors

It is generally easier to debug SSL configuration problems using the sqlplus client tool. You will need an Oracle wallet (orapki tool) to do this, which I won't cover in this blog post. The following may help diagnose problems with your client configuration though.
  • Format error - check that connectionProperties doesn't contain spaces or newlines.
  • IO Error: The Network Adapter could not establish the connection - 1) check you have the correct passwords for trustStorePassword and keyStorePassword; 2) try a newer version of ojdbc6.jar and confirm you are using the version you expect; 3) it may be a genuine network/hostname problem; 4) ensure oracle.net.wallet_location isn't specified in catalina.properties or elsewhere (it seems to override connectionProperties).
  • IO Error: NL Exception was generated - check that the Resource url attribute in server.xml is formatted correctly.



Using Tvheadend as a SAT>IP Server

2016-04-24T20:36:46.311+01:00

The goal here is to set up Tvheadend (TVH) as a SAT>IP server. This allows you to stream satellite channels to tablets and other devices - at least up to the limit of the number of LNB connections you have.

I followed the Ubuntu instructions at https://tvheadend.org/projects/tvheadend/wiki/AptRepository to install TVH and then http://docs.tvheadend.org/configure_tvheadend/ to get the basic configuration set up.

Unfortunately the documentation for configuring SAT>IP is quite sparse and leaves out one important detail if you expect to use it with general SAT>IP clients: SAT>IP clients expect RTSP on port 554, but out of the box TVH will fall back to port 9983.

Enabling Networks for SAT>IP

Go to Configuration - DVB Inputs - Networks. For each network you want to use change the SAT>IP Source Number to 1 (other values are documented).

Enabling SAT>IP Server

Go to Configuration - General. In the SAT>IP Server section enter 554 as the port number and Save Configuration.

Port-Forwarding RTSP

If, after enabling SAT>IP Server, you see the following error in the log (click the double-down arrow bottom right):

2016-04-24 20:19:25.568 satips: RTSP port 554 specified but no root perms, using 9983

Then you will need to allow clients to connect to TVH on port 554. While you can run TVH as root it is probably easier to create a port forward:

# RTSP control connections normally use TCP; redirect UDP too, to be safe
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport rtsp -j REDIRECT --to-port 9983
sudo iptables -t nat -A PREROUTING -i eth0 -p udp --dport rtsp -j REDIRECT --to-port 9983
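
You can confirm the redirect is in place by listing the NAT table:

sudo iptables -t nat -L PREROUTING -n -v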

Following these steps allowed me to stream channels from TVH using the Elgato App from both Android and iOS.



WebDav Client Certificate Challenge

2015-09-01T14:50:16.914+01:00

If you have Office/Word 2010 and open a document via Internet Explorer a WebDav interrogation of the Web site will be performed. This is to determine whether Office can write back the document to the site (e.g. as you would use on Sharepoint).

We encountered a hard-to-trace issue with this interrogation. Opening a document from IE would result in a challenge to pick a client certificate. Cancelling past these dialogues would still allow the document to open. Tracing that this was something to do with WebDav was easy enough, as we could see the WebDav requests coming into the server - indicated by an OPTIONS request at the folder level with subsequent PROPFINDs. These are issued by the user agents:

Microsoft Office Protocol Discovery
Microsoft Office Existence Discovery
Microsoft-WebDAV-MiniRedir/
DavClnt

What was confounding us was that the issue did not occur on our test system. After a comprehensive inspection of the IIS (7.5) server no differences between the production and test systems could be identified.

The issue was eventually uncovered by inspecting the SSL connection coming from both sites. On production, "openssl" showed a hostname in the certificate different from the site being requested. This is because our production system has multiple host aliases, with appropriate redirects in place. The test system has a single host.
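
You can reproduce this check with openssl's s_client: the first command below shows the certificate served to clients that don't send SNI (as the WebDav client effectively saw), the second what is served when SNI is sent (www.example.org is a placeholder):

openssl s_client -connect www.example.org:443 < /dev/null | openssl x509 -noout -subject
openssl s_client -connect www.example.org:443 -servername www.example.org < /dev/null | openssl x509 -noout -subject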

The production aliases are served using Server Name Indication (SNI). This works fine in all modern browsers, including the version of IE being used. Where it doesn't work is in Microsoft's WebDav implementation. When the WebDav client interrogated the server it was getting a certificate name mismatch. Rather than indicating a server certificate error it instead requested a client certificate!

The solution for this site was to change the default certificate to the official Web site name. I don't know if there's a solution for multiple sites sharing the same https connection with SNI.



Routing Wireless LAN over a VPN

2015-08-24T07:43:48.957+01:00

The goal of this set-up is to create a Wireless LAN (WLAN) that gets routed over a VPN. In this way we can create multiple WLANs that route to different places, enabling clients to pick a network to connect to. My specific use-case was to allow house users to connect to a virtual German network or to the local network.

My set-up has Cisco WAP121 Small Business Access Points and a Ubuntu 14.04 server (acting as DHCP server and router). My local network just consists of unmanaged switches, which will forward all VLAN traffic to every port. Make sure Ubuntu is configured to forward IP traffic (a sketch of this step is at the end of this post). The new VLAN will use id 20 and the IP range 192.168.20.0-255.

In the Cisco control panel under Wireless, Networks I created a new Virtual Access Point with a VLAN ID of 20 (the default is 1). The rest of the configuration is in the Ubuntu server.

The VPN in use is OpenVPN but I expect these instructions would work with any device-based VPN. In my system the VPN device is "tun0". Having configured OpenVPN and confirmed that it connects and works correctly, some additional steps are needed. Because I'm not routing all traffic through the VPN I needed to add route-nopull to the OpenVPN configuration. Additional up and down scripts are required to configure firewall and routing, so the following needs adding to the OpenVPN configuration:

route-nopull
script-security 2
up /etc/openvpn/scripts/vpn-up.sh
down /etc/openvpn/scripts/vpn-down.sh

vpn-up.sh:

#!/bin/sh
# block all incoming connections i.e. block access to this box
iptables -A INPUT -m state --state ESTABLISHED,RELATED -i $dev -j ACCEPT
iptables -A INPUT -p icmp -i $dev -j ACCEPT
iptables -A INPUT -i $dev -j DROP
# NAT connections over the tunnel
iptables -t nat -A POSTROUTING -s 192.168.20.0/24 -o $dev -j MASQUERADE
# Route between eth0.20 and the tunnel
iptables -A FORWARD -i eth0.20 -o $dev -j ACCEPT
iptables -A FORWARD -i $dev -o eth0.20 -j ACCEPT
# Start routing traffic over the tun
ip route add default table 20 dev tun0
ip rule add from 192.168.20.0/255.255.255.0 table 20

vpn-down.sh:

#!/bin/sh
# stop routing traffic to the tun
ip rule del from 192.168.20.0/255.255.255.0 table 20
ip route del default table 20 dev tun0
# block all incoming connections i.e. block access to this box
iptables -D INPUT -m state --state ESTABLISHED,RELATED -i $dev -j ACCEPT
iptables -D INPUT -p icmp -i $dev -j ACCEPT
iptables -D INPUT -i $dev -j DROP
# NAT connections over the tunnel
iptables -t nat -D POSTROUTING -s 192.168.20.0/24 -o $dev -j MASQUERADE
# Route between eth0.20 and the tunnel
iptables -D FORWARD -i eth0.20 -o $dev -j ACCEPT
iptables -D FORWARD -i $dev -o eth0.20 -j ACCEPT

We need to create an interface that will listen to VLAN 20. Install VLAN interface support:

apt-get install vlan

Create a new file /etc/network/interfaces.d/eth0:

auto eth0.20
iface eth0.20 inet static
        address 192.168.20.1
        netmask 255.255.255.0
        vlan-raw-device eth0

We need a DHCP server to provide addresses to the VLAN:

apt-get install isc-dhcp-server

We only want DHCP offered on the new VLAN, so we tell dhcpd to bind only to eth0.20 in /etc/default/isc-dhcp-server:

# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?
#       Separate multiple interfaces with spaces, e.g. "eth0 eth1".
INTERFACES="eth0.20"

Create a new subnet entry to offer IPs for in /etc/dhcp/dhcpd.conf:

subnet 192.168.20.0 netmask 255.255.255.0 {
        range 192.168.20.100 192.168.20.200;
        option routers                  192.168.20.1;
        option subnet-mask              255.255.255.0;
        option broadcast-address        192.168.20.255;
        option domain-name              "localdomain";
        option domain-name-servers      8.[...]

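For the IP forwarding step mentioned at the start, a minimal sketch (the sysctl key is the standard one; the file name under /etc/sysctl.d/ is my own choice):

# enable forwarding immediately
sudo sysctl -w net.ipv4.ip_forward=1
# persist across reboots
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-ip-forward.conf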


Phusion Passenger/Nginx Node App Server for Ubuntu

2013-10-25T11:24:14.622+01:00

Download and install Phusion Passenger from https://www.phusionpassenger.com/download#open_source. Install nginx and Passenger:

sudo apt-get install nginx-full passenger

Download and install node from https://launchpad.net/~chris-lea/+archive/node/:

sudo add-apt-repository ppa:chris-lea/node
sudo apt-get update
sudo apt-get install nodejs


In /etc/nginx/nginx.conf uncomment passenger_root and passenger_ruby:

passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/ruby;


In /etc/nginx/sites-available/default add at the end of the server {} block:

server {
...
       passenger_enabled on;
       location /app1/ {
               root /home/user/webapps/app1/public;
               passenger_enabled on;
       }
}


Follow the instructions at https://github.com/phusion/passenger/wiki/Node to create the app structure under /home/user/webapps/app1:

./public
./tmp
./tmp/restart.txt [touch this to re-deploy application]
./app


app is a normal Node Web application (something that creates a server with 'http.createServer()' and then calls 'listen()' on it).
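
A minimal sketch of such an app, written as a heredoc so it can be pasted straight into a shell (the file name app.js follows the linked wiki instructions; the port number is only used when running outside Passenger, which manages the real listening socket itself):

cat > /home/user/webapps/app1/app.js <<'EOF'
// minimal Node web app: respond to every request with plain text
var http = require('http');
var server = http.createServer(function (req, res) {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Hello from app1\n');
});
// under Passenger the listen port is managed for you; 3000 is for standalone testing
server.listen(3000);
EOF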

Restart nginx:

sudo /etc/init.d/nginx restart

Then test your app:

curl http://localhost/app1/



Generating Certificate (CSR) requests

2013-05-21T10:55:52.238+01:00

Here's a short script to generate a server key and PKCS #10 Certificate Request for use with https:

#!/bin/sh

HOSTNAME=$1

if [ "$#" -ne 1 ]; then
        echo "Usage: $0 <hostname>" >&2
        exit 1
fi

if [ ! -f ${HOSTNAME}.key ]; then
        openssl genrsa -out ${HOSTNAME}.key 2048
fi

cp cert.cfg ${HOSTNAME}.cfg
echo >> ${HOSTNAME}.cfg
echo "cn = ${HOSTNAME}" >> ${HOSTNAME}.cfg

certtool --generate-request \
        --load-privkey ${HOSTNAME}.key \
        --outfile ${HOSTNAME}.csr \
        --template ${HOSTNAME}.cfg

if [ -f ${HOSTNAME}.csr ]; then
        echo ${HOSTNAME}.csr
fi


This requires a cert.cfg that provides the basic information for your organisation:

# X.509 Certificate options
#
# DN options

organization = "University of Weevils"

unit = "Department of Creepy Crawlies"

locality = "Winchester"

state = "Hampshire"

country = GB

# Whether this certificate will be used to sign data (needed
# in TLS DHE ciphersuites).
signing_key

# Whether this certificate will be used for a TLS client
tls_www_client

# Whether this certificate will be used for a TLS server
tls_www_server

# Whether this certificate will be used to encrypt data (needed
# in TLS RSA ciphersuites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key


The resulting .csr should be sent to your certificate authority for signing into a certificate (.crt).
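
Before sending it off you can sanity-check the request with certtool (assuming the script above was saved as gencsr.sh):

sh gencsr.sh www.example.org
certtool --crq-info --infile www.example.org.csr

The second command prints back the requested subject and key details, so you can confirm the cn and organisation fields came out as intended.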



Migrating svn to git with sub-directories

2016-08-09T10:23:13.186+01:00

On EPrints we have a slightly odd layout in our svn repository:

branches/
    3.3/
        docs/
        extensions/
        system/
tags/
    ....
trunk/
    docs/
    extensions/
    system/

This works well enough with svn, where you can check out a sub-directory, but git only allows you to check out the entire repository - branches are real branches rather than just named directories. Idiomatic git usage is to have separate repositories for "docs", "extensions" and "system", and to place the contents of each one at the repository root.

The goal of the migration to git is therefore to move trunk/system up to trunk/, branches/3.3/system up to branches/3.3 and so on for branches still in use. (For my own sanity I'm going to ignore tags.)

One approach might be to "git svn clone" the entire svn tree and then move the elements of system/* up a directory. The downside to this approach is every file in the repository will then be touched with that movement, losing the ability to (trivially) see the last time a file was modified.

I also tried using the svn-dump-reloc tool to move directory contents up into their parent. That drove me down a path of despair of broken historical file movements and duplicated directory creations (because moving trunk/system/ to trunk/ duplicates the initial trunk/ creation).

The eventual approach taken was to use git-svn's ability to clone a sub-directory and magically place it in the root of the new git repository. I started with trunk/:


git svn clone -A users.txt https://svn.eprints.org/eprints/trunk/system eprints
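
The -A option maps svn usernames onto full git author identities. users.txt is a standard git-svn authors file, one mapping per line (this entry is illustrative):

loginname = Full Name <user@example.org>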


And the same with the 3.3 branch:


git svn clone -A users.txt https://svn.eprints.org/eprints/branches/3.3/system 3.3


I can then create an empty branch for 3.3 (kudos to http://www.bitflop.dk/tutorials/how-to-create-a-new-and-empty-branch-in-git.html) in my git trunk clone:


cd eprints
git checkout --orphan 3.3
git rm -rf .


git can pull in the content of a remote repository like so:


git remote add -f 3.3 ../3.3/
git merge -s ours 3.3/master


Tidying up:


git remote rm 3.3
git checkout master


This repository can then be pushed up to github using their standard instructions:


git remote add origin https://github.com/...
git push -u origin master


And to push the branch(es):


git push origin 3.3



De-duplicating records based on title

2012-08-07T11:50:57.697+01:00

This is a draft!

I have a large database of metadata records, harvested from repositories using the OAI-PMH. We want to couple that with a database of records from the Thomson ISI Web of Knowledge database, in order to answer the question of what proportion of the ISI records are also available in a repository. (Whether there is an attached full-text will be a later question ...)

The technological problem is matching 700,000 records from ISI against 29,000,000 harvested from repositories. The common fields we have are title, date and author. Each ISI record has a unique ISI-assigned numeric identifier. Each OAI record has a unique numeric identifier assigned by me.

For now, I'm going to ignore the date and author and instead concentrate on matching titles. The technique I'm using to test equivalence is the Jaccard index: the size of the intersection of two sets divided by the size of their union. The elements of those sets are w-shingles, generated by normalising and tokenizing the input title and calculating the digest of each 4-term sequence, e.g.

"She sells sea shells, by the sea shore"

is normalised to:

"she sells sea shells by the sea shore"

which is tokenized (split) into sequences of 4 terms each:

{she, sells, sea, shells}
{sells, sea, shells, by}
{sea, shells, by, the}
{shells, by, the, sea}
{by, the, sea, shore}

from which the MD5 digest is calculated for each shingle, resulting in 5 unique shingles:

{8864a21dca39c33f878dbee52a68e0ed, 9c63cb078bdb03930b244383bc283d74, 67e4f723c2e752bc84da38835aed8790, 84868ccce9218519b8829b4f087ba87c,  06542f2681a2e8fc0bab470f638992aa}
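
The whole shingling process can be sketched as a shell pipeline (the normalisation here - lower-casing and stripping punctuation - is simplistic, and the final cut keeps only the last 4 bytes of each digest, the search-space reduction described below):

echo "She sells sea shells, by the sea shore" |
  tr '[:upper:]' '[:lower:]' | tr -cd '[:alpha:] \n' |
  awk '{ for (i = 1; i <= NF - 3; i++) print $i, $(i+1), $(i+2), $(i+3) }' |
  while read shingle; do
        printf '%s' "$shingle" | md5sum | cut -c25-32
  done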

To reduce the search space only the last 4 bytes of each digest are used. A database table is written that consists of pairs of the 4-byte shingle identifier plus the numeric identifier of the source record. Once all the records are processed we have a sorted table of shingles plus, for each shingle, the list of records that share it.

shingle  | id
---------+---
2a68e0ed | 1
bc283d74 | 1
5aed8790 | 1
087ba87c | 1
638992aa | 1

The overlap between records can be calculated using a self-join on the shingles table:

SELECT a.id a, b.id b FROM shingles a INNER JOIN shingles b ON a.shingle=b.shingle AND a.id < b.id;

Counting the number of distinct a.id/b.id pairs from the above will tell us the number of shared (intersection) shingles:

SELECT x.a, x.b, COUNT(*) c FROM (SELECT a.id a, b.id b...) x GROUP BY x.a, x.b;

The count of the union of shingles between two records can be calculated by:

SELECT COUNT(DISTINCT shingle) FROM shingles WHERE id=a.id OR id=b.id;

Once a likely set of duplicates has been identified (a Jaccard index above 0.5), a more accurate matching can be performed on the candidate titles, authors and dates.



VirtualBox guest with Host-Only Networking and NAT

2012-05-16T11:56:11.879+01:00

Follow-up: the much easier way to allow external access plus access to the VM from the host is to use two Network devices. Set the first network device to be "NAT" and the second to "Host only adapter".

The Host-Only networking in VirtualBox 4.1 allows the host/guest to talk to each other but requires some extra steps to allow the guest to access the outside world. We can use NAT on the host machine to NAT the virtual network through the host's real interface.

Configure the VirtualBox guest's network to use Host-Only Networking.

You will need to set up static networking on the guest O/S because the VirtualBox DHCP server won't assign a gateway or DNS servers. Replace 192.168.56.101 with the guest's IP address and modify the DNS to your local network settings:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
        address 192.168.56.101
        netmask 255.255.255.0
        network 192.168.56.0
        broadcast 192.168.56.255
        gateway 192.168.56.1
        dns-search your-domain.example
        dns-nameservers 8.8.8.8 8.8.4.4

On the Host machine edit /etc/default/ufw:

DEFAULT_FORWARD_POLICY="ACCEPT"

Uncomment net/ipv4/ip_forward and net/ipv6/conf/default/forwarding in /etc/ufw/sysctl.conf:

net/ipv4/ip_forward=1
net/ipv6/conf/default/forwarding=1

Add masquerading rules to the top of /etc/ufw/before.rules:

# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]

# Forward traffic from eth1 through eth0.
-A POSTROUTING -s 192.168.0.0/16 -o eth0 -j MASQUERADE

# commit the NAT rules
COMMIT

(Re)enable the ufw:

sudo ufw disable
sudo ufw enable



Ubuntu 11.10 on Sun X4540 ("Thumper")

2011-10-26T12:54:29.935+01:00

I installed Ubuntu 11.10 Server x64 edition using the Java ILOM client - you must use a 32-bit Java client to attach a CD image. I set up an MD RAID1 across the bootable devices of controllers 0 and 1 (the first disk on each).

Post-install I encountered a blank, black screen at the GRUB 2 stage (i.e. after the kernel selection screen). To fix this, edit the boot parameters on the kernel selection screen:

set gfxpayload=text
...
linux [...] rootdelay=90

To make these changes permanent after booting:

sudo vi /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=90"
        GRUB_GFXMODE=text
sudo update-grub

(reboot to check everything worked correctly)

Installing native ZFS and setting up a ZFS pool

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs

I created a small Perl script to set up a zpool over the remaining 46 disks. The scheme I use is 4 hot spares (the first disk of each of the other 4 controllers) plus 7 raidz vdevs, each built from one disk per controller.

Usage:

Warning! This will destroy any data you have on the disks in your system. Only use this if you *really* know what you're doing.

sudo perl [script.pl] --create

The source code for the script:

#!/usr/bin/perl

use Getopt::Long;
use strict;
use warnings;

my $usage = "$0 [--dry-run --create --destroy --dump]";

GetOptions(
        'dry-run' => \(my $dry_run),
        'create' => \(my $opt_create),
        'destroy' => \(my $opt_destroy),
        'dump' => \(my $opt_dump),
        'help' => \(my $opt_help),
) or die "$usage\n";
die "$usage\n" if $opt_help;

my @DISKS;

open(my $fh, "<", "/var/log/dmesg") or die "Error opening dmesg: $!";
while(<$fh>)
{
        next if $_ !~ /Attached SCSI disk/;
        s/^\[[^\]]+\]\s*//;
        die if $_ !~ /sd\s+(\d+):0:(\d+):0:\s+\[(\w+)\]/;
        my( $c, $t, $dev ) = ($1, $2, $3);
        $DISKS[$c][$t] = "/dev/$dev";
}
shift @DISKS while !defined $DISKS[0];

if( $opt_dump )
{
        foreach my $i (0..$#DISKS)
        {
                print "Controller $i:\n";
                foreach my $j (0..$#{$DISKS[0]})
                {
                        next if !defined $DISKS[$i][$j];
                        print "\t$j\t$DISKS[$i][$j]\n";
                }
        }
}

# system disks
$DISKS[0][0] = undef;
$DISKS[1][0] = undef;

my @spares;
for(@DISKS[2..$#DISKS])
{
        push @spares, $_->[0] or die "Missing disk at $_:0\n";
}

my @pools;
foreach my $i (1..$#{$DISKS[0]})
{
        foreach my $j (0..$#DISKS)
        {
                $pools[$i - 1][$j] = $DISKS[$j][$i] or die "Missing disk at $j:$i\n";
        }
}
for(@pools)
{
        $_ = "raidz @$_";
}

if( $opt_destroy )
{
        cmd("zpool destroy zdata");
}
if( $opt_create )
{
        cmd("zpool create -f zdata @pools spare @spares");
}

sub cmd
{
        my( $cmd ) = @_;
        print "$cmd\n";
        system($cmd) if !$dry_run;
}
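
Once created, you can confirm the pool layout matches the intended scheme with standard ZFS tooling:

sudo zpool status zdata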



IPv6-Only Ubuntu 10.04 LTS

2011-07-05T15:31:12.725+01:00

To bring up eth0 on start-up without a configured IPv4 address, set up /etc/network/interfaces as:


auto eth0
iface eth0 inet manual
up ifconfig eth0 up
# iface eth0 inet dhcp


For the Google IPv6 public DNS servers add to /etc/resolv.conf:


domain localdomain
search google.com
nameserver 2001:4860:4860::8888
nameserver 2001:4860:4860::8844
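
A quick check that IPv6 routing and DNS resolution both work (ipv6.google.com is just a convenient IPv6 target):

ping6 -c 3 ipv6.google.com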



libxml2 and libxslt supported XPath functions

2011-02-11T15:59:47.570+00:00

The following functions are supported by the libxml2 library for use in XPath statements and hence supported in libxslt for use in transforms. For more information see xpath.c in the libxml2 source.

Note: string length is 'string-length' and not just 'length'!

last()
position()
count()
id()
local-name()
namespace-uri()
string()
string-length()
concat()
contains()
starts-with()
substring()
substring-before()
substring-after()
normalize-space()
translate()
not()
true()
false()
lang()
number()
sum()
floor()
ceiling()
round()
boolean()
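
As a quick way to experiment with these from the command line, xmllint (shipped with libxml2) can evaluate XPath expressions directly (assuming a file hello.xml with a root <greeting> element; --xpath needs a reasonably recent libxml2):

xmllint --xpath 'string-length(normalize-space(/greeting))' hello.xml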



CMIS vs Google Documents API vs SWORD

2011-01-19T16:31:06.649+00:00

The Atom protocol is a very simple mechanism for publishing news feeds - that is, date-ordered small bits of information. An Atom feed is a collection of Atom entries. Each entry contains some basic metadata (title, id) and may have links to other resources. Links of particular interest are 'edit' and 'edit-media' which, respectively, refer to the entry's metadata and media file. A minimal entry looks something like:

<entry>
  <title>The Beach</title>
  <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
  <updated>2005-10-07T17:17:08Z</updated>
  <author><name>Daffy</name></author>
</entry>

The Atom Publishing Protocol (or AtomPub) provides a protocol to add new entries (i.e. to publish to feeds). AtomPub uses the HTTP POST, PUT and DELETE methods to, respectively, create, update or delete entries. To create an entry the client POSTs an Atom entry to the feed's URL. To update an entry the client PUTs an Atom entry to the entry's URL (replacing anything already there). And lastly, DELETEing an Atom entry URL destroys that entry. The protocol itself is quite readable so I suggest going there if you're lost!

AtomPub is sufficient if you just want to post small entries in XML, but often the client wants to e.g. publish a photo, which the new Atom entry will then refer to. There are several approaches to this but the simplest is to use the Atom Multipart Media Resource Creation mechanism, which bundles together the Atom entry and the media file into a single POST.

Atom/AtomPub provides us with a fairly simple tool to publish items onto a Web site. As Institutional Repository (IR) developers we, unfortunately, require a more complex model than just a feed of entries containing one file each. We have more complex metadata and multiple files making up an object. An editorial workflow means items uploaded by users must first be checked by editors before they can be published. There are various other aspects to consider that I won't go into here.

So we like the simplicity of Atom/AtomPub but it doesn't fulfil all of our requirements. Fortunately it is easy to extend AtomPub by injecting additional links and metadata into entries. These links can connect to other URLs that allow complex manipulations to be made on the underlying data structure (and hence also to create more complex data structures). OASIS CMIS, SWORD and the Google Documents API are all extensions of AtomPub, better known as "AtomPub Profiles". (I'm sure there are others but these are the obvious candidates for IR use.)

OASIS Content Management Interoperability Services (CMIS) is over 200 pages long but, in part, describes an AtomPub profile. I concur with the sentiment here that being asked to implement CMIS won't make your developers happy. The model underpinning CMIS has a hierarchical folder structure. By supplying a special tag in a POST to a feed, an Atom entry is created that points to another Atom feed (or 'folder'). In this way Atom entries are effectively typed to be either a 'document' or a 'folder'. Atom entries can be moved to other folders by POSTing them to that folder's feed. There is lots more that CMIS adds in, to the extent that I forget what's at the beginning before I get to the end!

root feed
|-- document entry
|-- document entry
|-- folder entry
    |-- folder feed
        |-- doc[...]
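
To make the AtomPub create operation concrete, here is a sketch using curl (the collection URL and entry.xml are placeholders; the Content-Type is AtomPub's standard media type for entries):

curl -X POST -H 'Content-Type: application/atom+xml;type=entry' \
        --data-binary @entry.xml http://example.org/collection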



Ubuntu Maverick 10.10 on Acer Revo 3700

2010-12-03T10:48:18.964+00:00

These are some rough notes on what I needed to get Ubuntu Maverick 32bit working on the Acer r3700.

wireless

While the kernel rt2860pci driver will run the wireless ok it will cause a hard-lock when it is unloaded (e.g. during shutdown). Thanks to Wolfgang Kufner and Marcus Tisoft for providing a solution - replace the kernel driver with a patched driver from Ralink for the rt3090 (comment #9): https://bugs.launchpad.net/ubuntu/+source/linux/+bug/662288.

sound over hdmi (stereo only tested)

Unmute all digital outputs in Alsa (use right cursor + 'M' until they are all green):
alsamixer -c 1
sudo alsactl store


Testing alsa:
speaker-test -D plughw:1,7


Setting pulseaudio to output via alsa:
sudo gedit /etc/pulse/default.pa


Uncomment and modify the line containing module-alsa-sink:
  load-module module-alsa-sink device=hw:1,7


suspend

Fails to suspend ... haven't found a solution yet.



Enabling support for HTTPS in wkhtmltopdf

2010-06-09T12:30:41.449+01:00

If you get the following error when attempting to take a web thumbshot of an HTTPS-based site with wkhtmltopdf on Fedora Core 13:

QSslSocket: cannot call unresolved function SSLv3_client_method
QSslSocket: cannot call unresolved function SSL_CTX_new
QSslSocket: cannot call unresolved function SSL_library_init
QSslSocket: cannot call unresolved function ERR_get_error
QSslSocket: cannot call unresolved function ERR_error_string


You need to add some additional library links. As root do:

cd /usr/lib64
ln -s libssl.so.10 libssl.so
cd /lib64
ln -s libcrypto.so.1.0.0 libcrypto.so


Hopefully that will fix the problem!

NB I resolved this by using strace to find out where wkhtmltopdf was attempting to find libcrypto and libssl (the problem being the static wkhtmltopdf build was looking for different versions than are installed on FC13):

strace wkhtmltopdf https://mail.google.com/ gmail.pdf 2>&1 | less



Ubuntu 10.04 vnc-based login server

2010-06-17T12:31:25.178+01:00

This recipe is for setting up a VNC login server. This allows you to use a VNC client to access a full GUI on a remote server. If instead you want to get VNC access to your desktop (or share with other users) you need to enable remote desktop.

VNC connections are not encrypted so if you connect directly to the VNC server any login details will be sent in the clear.
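
One common mitigation is to tunnel the VNC session over SSH, so nothing crosses the network in the clear (a sketch - user@remote-server is a placeholder, and 5901 matches the xinetd service defined below):

ssh -L 5901:localhost:5901 user@remote-server
vncviewer localhost:5901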

Install the required packages:

sudo apt-get install vnc4server xinetd gdm


Restrict GDM's XDMCP service to local connections only by adding the following to /etc/hosts.allow:

gdm: ip6-localhost


Enable XDMCP in GDM by setting up /etc/gdm/custom.conf as:

# GDM configuration storage

[daemon]

[security]

[xdmcp]
Enable=true
HonorIndirect=false
# following line fixes a problem with login/logout
DisplaysPerHost=2

[greeter]

[chooser]

[debug]


Create a new xinetd service /etc/xinetd.d/Xvnc (adjust geometry to get different screen sizes):

service Xvnc
{
type = UNLISTED
disable = no
socket_type = stream
protocol = tcp
wait = no
user = nobody
server = /usr/bin/Xvnc
server_args = -inetd -query ip6-localhost -geometry 1280x800 -depth 16 -cc 3 -once -SecurityTypes=none
port = 5901
}


Restart gdm (which will close any current logins!) and xinetd:

sudo service gdm restart
sudo /etc/init.d/xinetd restart


You can then connect to the VNC server using:

vncviewer localhost:5901



slapd configuration hints

2010-03-29T13:17:25.628+01:00

I run an OpenLDAP instance to provide common login credentials for Subversion and Trac.

OpenLDAP is one of those tools that can be nightmarish to get working. LDAP may be 'lightweight' but when put in a stack of browser - web server - LDAP it can be very tricky to work out what's going wrong and where. I spent a weekend fruitlessly trying to debug Apache auth when what I needed to do was stop/start rather than restart (stupid I know).

In OpenLDAP 2.4 the configuration has moved from a standard "slapd.conf" to LDIF configuration files (in Ubuntu under /etc/ldap/slapd.d/). These can be changed directly or updated via ldapmodify/ldapadd. You can add configuration files via the ldapi interface:

sudo ldapmodify -c -Y EXTERNAL -H ldapi:/// -f FILENAME.LDIF

Beware that if you add a configuration that breaks slapd it will shut down right after you add the configuration. So it's a good idea to back up /etc/ldap/slapd.d/ before making any untested changes (restore the backup, then start slapd, to recover).

A quick example for querying the LDAP server (note: -ZZ requires TLS; omit it for unsecured connections):

ldapsearch -ZZ -b dc=eprints,dc=org -h localhost -v -D cn=USERNAME,ou=people,dc=eprints,dc=org -w PASSWORD

To start slapd in debugging/console mode do (see the OpenLDAP documentation for values for the -d argument):

slapd -h 'ldap:/// ldaps:/// ldapi:///' -F /etc/ldap/slapd.d/ -d 16383

If you start slapd as root it will write its configuration files as root, so be careful to restore permissions on /etc/ldap/slapd.d/ back to openldap:openldap. If slapd starts as root but not from init.d it may be due to permissions problems (e.g. it can't read a certificate file).

I found it difficult to find a working example of creating a slapd database. Here's what I used to create an HDB-based database (paths are Ubuntu):

# Load modules for database type
dn: cn=module{0},cn=config
objectClass: olcModuleList
cn: module{0}
olcModulePath: /usr/lib/ldap
olcModuleLoad: {0}back_hdb

# Create directory database
dn: olcDatabase={1}hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {1}hdb
olcDbDirectory: /var/lib/ldap
olcSuffix: dc=eprints,dc=org
olcRootDN: cn=admin,dc=eprints,dc=org
## FIXME below!
olcRootPW: xxx
olcLastMod: TRUE
olcDbCheckpoint: 512 30
olcDbConfig: {0}set_cachesize 0 4194304 0
olcDbConfig: {1}set_lk_max_objects 2048
olcDbConfig: {2}set_lk_max_locks 2048
olcDbConfig: {3}set_lk_max_lockers 2048
olcDbIndex: uid pres,eq
olcDbIndex: cn,sn,mail pres,eq,approx,sub
olcDbIndex: objectClass eq

Translating permissions from slapd.conf to LDIF is relatively easy. If, like me, you need a few tries at getting this right, use this as a template (ldapmodify as above will replace *all* existing olcAccess lines):

# Set up permissions on dc=eprints,dc=org
dn: olcDatabase={1}hdb,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to dn.base="" by * read
olcAccess: {1}to dn.base="cn=Subschema" by * read
olcAccess: {2}to dn.subtree="ou=people,dc=eprints,dc=org" by group/groupOfUniqueNames/uniqueMember="cn=superusers,ou=groups,dc=eprints,dc=org" write by self write by users auth by anonymous auth
olcAccess: {3}to * by * none

Your permissions will likely need to be very different to those given here (which I've abbreviated anyway). Because I'm using apache mod_authz_ldap I have to provide a two-stage authentication, which requires having a 'superuser' account that can search for the relevant cn entry before performing the user authentication.

I hope this saves somebody a headache!
[...]
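
To load the database definition above, save it to a file (the name create-hdb.ldif here is my own) and add it over the same ldapi interface used earlier:

sudo ldapadd -Y EXTERNAL -H ldapi:/// -f create-hdb.ldif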



Check your house boundaries

2010-03-19T10:48:39.666+00:00

I'm currently trying to purchase a house. We have an agreed price and the vendor has vacated the property, but we have been stuck on legal issues for going on 4 months.

The issue is that the owner before the current one expanded the garden into unused land, nearly doubling it in size. She then sold the property on with land which wasn't on the title deed, leaving the current vendor trying to sell something she doesn't have title to. A conservatory extension has also been added without a permission required by a deed covenant. These issues only came to light after we had agreed an offer and after I had spent money on a survey and the solicitor's initial searches.

The Home Information Pack (HIP) does include a copy of the land registry entry. It was a simple process to marry up the plot (as indicated on the land registry) with a satellite image of the plot. Having done this it is obvious that the plot does not correspond to the land registry entry, ringing alarm bells. With the benefit of hindsight I could have avoided the financial exposure of survey and solicitor costs by having the vendor sort these problems out first.

In a more general sense, it's unclear who, if anyone, checks that the boundaries for a property are the same as those shown on the land registry. My full structural survey didn't include a boundary survey and the solicitor doesn't visit the site (and is seemingly technology-illiterate).

As it seems with all house-buying processes the best person to actually check these things is you. When viewing a property's surrounding land you should definitely ask the estate agent whether the sale includes the entire plot and whether they have checked that against the land registry (which they should get as part of the HIP).



Generating Flash Video (FLV) using FFMpeg

2009-09-07T10:35:43.147+01:00

I encountered a problem generating Flash Video (.flv) format files using the latest versions of ffmpeg and libmp3lame. The generated files didn't contain a duration, so they wouldn't play nicely in the embedded FLV Player (the player would buffer, play the 5-second buffer, then stop).

After some investigation I found the problem was due to a buffer error in the libmp3lame library which, while it didn't stop a file being generated, prevented ffmpeg embedding the duration in the FLV output. Here's a bug entry for Ubuntu, but the OP erroneously claims the encoding succeeds - it does to a degree, but any post-encoding operations by ffmpeg won't happen (setting the duration for FLV happens after the encoding).

After much fruitless time trying to fix the broken lame library I used the AAC codec instead:

ffmpeg -i FOO.MPG -acodec aac FOO.FLV



Installing Microsoft Access 97 with Office 2007

2009-08-20T11:15:29.308+01:00

Installing Access '97 onto a machine with a later version of Office already installed is troublesome. Before installing '97 follow these steps:

Install the Jet 3.5 update (fixes problem with creating system database file):

Jet 3.5 - http://support.microsoft.com/kb/172733

Follow the instructions here in the "Access 2000 Is Already Installed" section:

http://support.microsoft.com/kb/241141

If you don't do the "hatten.ttf" trick you will get this error: "Microsoft Access can't start because there is no license for it on this machine".

Now install Access '97 as normal (you may want to change its installation directory).



Building xapian-bindings-php under Fedora Core 10

2009-06-29T12:57:39.484+01:00

Fedora Core includes packages for Xapian, an Open Source Search Engine Library. Due to a licensing problem Fedora Core doesn't include the PHP bindings for Xapian. Regardless of licensing spats I need to use Xapian from PHP, so I worked out a patch for the spec file (below) and the steps needed to build the missing xapian-bindings-php RPM:

# rpm building dependencies
sudo yum -y install rpmbuild rpmdevtools autoconf automake libtool zlib-devel gcc-g++

# set up the RPM build tree (and macro file ~/.rpmmacros)
rpmdev-setuptree

# dependencies for building xapian-bindings
sudo yum -y install xapian-core-devel python python-devel ruby ruby-devel php php-devel

# download source RPM
yumdownloader --source xapian-bindings-python

# install source file
rpm -ivh xapian-bindings-*

# copy the patch below to rpmbuild/SPECS/xapian-bindings.spec.diff

# patch the spec file
patch --dry-run -p0 -d rpmbuild/SPECS/ -i xapian-bindings.spec.diff && \
patch -p0 -d rpmbuild/SPECS/ -i xapian-bindings.spec.diff

# build all packages from the installed RPM spec file
rpmbuild -ba rpmbuild/SPECS/xapian-bindings.spec

# RPMs for your platform will be output to rpmbuild/RPMS/

I needed to do this to build a xapian-bindings-php RPM, which required patching of the spec file:

--- xapian-bindings.spec.orig	2008-12-08 13:20:55.000000000 +0000
+++ xapian-bindings.spec	2009-06-23 15:45:05.000000000 +0100
@@ -12,6 +12,7 @@
 BuildRequires: python-devel >= 2.2
 BuildRequires: autoconf automake libtool
 BuildRequires: ruby-devel ruby
+BuildRequires: php-devel php
 BuildRequires: xapian-core-devel == %{version}
 Source0: http://www.oligarchy.co.uk/xapian/%{version}/%{name}-%{version}.tar.gz
 Buildroot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
@@ -42,11 +43,21 @@
 indexing and search facilities to applications. This package provides the
 files needed for developing Ruby scripts which use Xapian
+%package php
+Group: Developer/Libraries
+Summary: Files needed for developing PHP scripts which use Xapian
+
+%description php
+Xapian is an Open Source Probabilistic Information Retrieval framework. It
+offers a highly adaptable toolkit that allows developers to easily add advanced
+indexing and search facilities to applications. This package provides the
+files needed for developing PHP scripts which use Xapian
+
 %prep
 %setup -q -n %{name}-%{version}
 
 %build
-%configure --with-python --with-ruby
+%configure --with-python --with-ruby --with-php
 make %{?_smp_mflags}
 
 %install
@@ -59,6 +70,9 @@
 mv %{buildroot}%{buildroot}/usr/share/doc/%{name}/python %{buildroot}/usr/share/doc/%{name}-python-%{version}
 mv %{buildroot}%{buildroot}/usr/share/doc/%{name}/ruby %{buildroot}/usr/share/doc/%{name}-ruby-%{version}
 rm -rf %{buildroot}%{buildroot}/usr/share/doc/%{name}/
+# Fix location of PHP 5 include file
+mkdir -p %{buildroot}/usr/share/php/
+mv %{buildroot}%{buildroot}/usr/share/php5/xapian.php %{buildroot}/usr/share/php/xapian.php
 
 %clean
 [ "%{buildroot}" != "/" ] && rm -rf %{buildroot}
@@ -74,6 +88,11 @@
 %{ruby_sitearch}/_xapian.so
 %{ruby_sitelib}/xapian.rb
+%files php
+%defattr(-, root, root)
+%doc AUTHORS ChangeLog COPYING NEWS README
+/usr/lib64/php/modules/xapian.so
+/usr/share/php/xapian.php
 %changelog
 * Mon Dec 08 2008 Adel Gadllah 1.0.9-1
[...]



PPTP VPN under Ubuntu Intrepid 8.10

2009-03-10T10:20:17.236+00:00

It seems there are some issues with PPTP VPN under Ubuntu's latest Intrepid release (as of March 2009). When I tried to connect I would get an error in /var/log/syslog of "EAP: peer reports authentication failure" (amongst the general PPTP messages). Depending on your PPTP server you may have the same problem or entirely unrelated issues, but I suspect mine is a very common set-up.

The Ubuntu bug here - https://bugs.launchpad.net/ubuntu/intrepid/+source/network-manager-pptp/+bug/259168 - contained the answer, which is to both enable MPPE and to disable EAP authentication.

Here is what I did:

1. Add a new VPN connection
2. Only add the username (do not set the password in the connection settings):

(image)

3. Under "Advanced" check "Use Point-to-Point Encryption (MPPE)":

(image)

4. Next, run gconf-editor and find your VPN connection and add a new key of "refuse-eap"/String/"yes":

(image)

5. Connect using the VPN and give it your password.

If you ever change settings using the Network Manager applet you will lose the "refuse-eap" setting and will need to re-add it.



Automounting a CIFS share in Ubuntu

2010-12-12T15:11:46.828+00:00

autofs is a tool that mounts and unmounts devices on demand. To use autofs to mount CIFS shares (Windows/Samba) in Ubuntu do the following:

$ sudo apt-get install autofs smbfs smbclient


Then edit /etc/auto.master and uncomment the smb line:

#
# $Id: auto.master,v 1.4 2005/01/04 14:36:54 raven Exp $
#
# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5).
#/misc /etc/auto.misc --timeout=60
/smb /etc/auto.smb
#/misc /etc/auto.misc
#/net /etc/auto.net


Restart autofs:

$ sudo /etc/init.d/autofs restart


You can now view and mount CIFS shares by changing to the directory /smb/HOSTNAME/SHARENAME e.g. to mount and change directory to "photos" on "bart":

$ cd /smb/bart/photos


You can even list the shares on "bart" by doing:

$ ls /smb/bart/


What the default scripts don't provide is the ability to have per-connection mount options. In particular I wanted to mount some hosts read only and using UTF-8. I modified the /etc/auto.smb script as follows:

...

# This file must be executable to work! chmod 755!

key="$1"
mountopts="-fstype=cifs"
smbopts=""
credfile="/etc/auto.smb.$key"
optsfile="/etc/auto.smb.$key.opts"

if [ -e $optsfile ]; then
. $optsfile
fi

...


I then created a connection specific file /etc/auto.smb.bart.opts containing:

mountopts="$mountopts,ro,iocharset=utf8"


Now any mounts on "bart" will be set read only and use the UTF-8 character set.



DG834/DG834G to OpenSwan VPN

2008-09-04T10:42:59.048+01:00

The Netgear DG834 ADSL routers support IPSEC-based Virtual Private Networks (VPN). The DG834 uses the Open Source Openswan software (http://www.openswan.org/). This blog provides the configuration details for how I connected an intranet LAN behind a DG834 to a Ubuntu-based Linux server in another LAN via the Internet.

Here's my setup:

NAS - [192.168.0.1] DG834G [starsky] - {inet}
{inet} - [hutch] ADSL ROUTER [192.168.1.1] - Backup

Where:
The NAS is a Western Digital network drive.
The Backup is a normal PC running Ubuntu with RAID 5 SATA storage.
Both ADSL connections have static IPs.
starsky's public IP is 10.1.1.1.
hutch's public IP is 10.2.2.2.
Backup is set as the DMZ server in ADSL ROUTER.

The goal was to copy data from the NAS to the Backup and to provide Windows share access to the Backup from the 192.168.0.0/24 subnet.

The DG834G is running firmware version V2.10.22. I used the VPN Wizard to set up the VPN endpoint initially and then revised it in the VPN policy editor. The Policy Name can be anything (e.g. "Bob"). As I want this connection to be up all the time I set the "IKE Keep Alive" and set the connection to "Initiator and Responder". The Address Data is the host name of ADSL ROUTER (hutch). The Pre-shared Key should be something difficult to guess.

On Backup I installed Openswan and followed the defaults during installation:

$ sudo apt-get install openswan

I added to /etc/ipsec.conf:

conn Bob
        type=tunnel
        leftid=10.2.2.2 # hutch
        left=%defaultroute
        leftsubnet=192.168.1.0/24
        right=10.1.1.1 # starsky
        rightsubnet=192.168.0.0/24
        keyexchange=ike
        auto=start
        auth=esp
        authby=secret
        pfs=no
        rekey=no
        ike=3des-sha1-modp1024
        esp=3des-sha1

And to /etc/ipsec.secrets (where the pre-shared key is the same as you entered into the DG834):

# secrets for "Bob"
hutch starsky 10.2.2.2 10.1.1.1: PSK "pre-shared key"

I then restarted Openswan:

sudo /etc/init.d/ipsec restart

Once the VPN has established (look in /var/log/auth.log) I can access Backup by browsing to its IP address: //192.168.1.100/SHARENAME.

I'm not sure with the above configuration whether you can access the 192.168.1 subnet from the 192.168.0 subnet. Answers on a postcard ...

ADSL ROUTER is actually a DG834Gv4, but the firmware seems to be buggy. In the default firmware it won't allow Windows shares to be accessed across it, and in the latest firmware (V5.01.09) the VPN connection is unreliable and won't re-establish unless the router is rebooted.

The above "works for me", but I'm by no means an IPSEC or Openswan expert so I welcome any feedback/corrections.
[...]
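
Once it's up, the tunnel can also be inspected from the Linux side with Openswan's status output ("Bob" being the connection name used above):

sudo ipsec auto --status | grep Bob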



Setting the output page size with ps2pdf

2007-09-07T16:14:41.283+01:00

Ghostscript's ps2pdf utility converts Postscript format files into Adobe PDF. One drawback with this utility is that, by default, it always outputs in Ghostscript's default page size, regardless of the postscript document's page layout. This is probably sensible if the document is destined for printing, but at the moment PDF is also the best cross-platform way to express large spaces of graphics (in this case directed-tree graphs).

I've managed to get around this - mostly - as follows:

  1. Use ImageMagick's identify to get the Postscript document dimensions:
    $ identify test.ps
    test.ps PS 609x1528 609x1528+0+0 PseudoClass 256c 909kb 0.020u 0:01

  2. Execute ps2pdf with the -dDEVICEWIDTHPOINTS and -dDEVICEHEIGHTPOINTS arguments
    $ ps2pdf -dDEVICEWIDTHPOINTS=679 -dDEVICEHEIGHTPOINTS=1598 test.ps test.pdf
What I haven't been able to work out is how to suppress the margins in the resulting PDF. To avoid the Postscript document being cropped I added 70 points to the width and height.
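
The two steps can be glued together in a small shell wrapper (a sketch: it assumes identify reports the geometry as its third output field, and hard-codes the same 70-point margin fudge):

size=$(identify test.ps | awk '{print $3}')
w=$(( ${size%x*} + 70 ))
h=$(( ${size#*x} + 70 ))
ps2pdf -dDEVICEWIDTHPOINTS=$w -dDEVICEHEIGHTPOINTS=$h test.ps test.pdf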