
Michael's musings



Michael writes about random things, mostly technology and politics



 



Installing ChromeOS on old EEE-PC

My wife's work is plagued with a bunch of EEE PCs with Windows XP on them. There isn't any reason to have Windows, so we are installing ChromeOS on them. I went to:

http://www.pcadvisor.co.uk/how-to/software/how-install-chromeos-on-old-laptop-3636672/

This tells us to use CloudReady's image. Downloading the 630M file gives us a zipped .bin file, and then one is supposed to use a Chrome extension to write it to a USB key. I had no idea whether that would work on Linux, and the EEE PCs really don't boot (and WinXP is so old, I wouldn't let it on the internet).

So I ran:

kvm -hda ./chromiumos_image.bin

since I had previously run file on the image and seen that it had an x86 boot sector. Up came syslinux and some CloudReady logo images, but it seemed to be in some kind of reboot loop. Having confirmed that the image was in fact bootable, I looked at what to do next.

So, I looked deeper:

sudo losetup -f ./chromiumos_image.bin
sudo kpartx -av /dev/loop0

This resulted in 27 partitions, which is really too many to poke around with.
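The attach/inspect/detach cycle can be scripted like this; a sketch, with the loop device name captured from losetup rather than hard-coded:

```shell
# Attach the image and capture which loop device it got.
LOOPDEV=$(sudo losetup -f --show ./chromiumos_image.bin)

# Map its partitions to /dev/mapper/<loopN>p1, p2, ... (27 in this case).
sudo kpartx -av "$LOOPDEV"
ls /dev/mapper/"$(basename "$LOOPDEV")"p*

# When done poking around, tear it all down again.
sudo kpartx -d "$LOOPDEV"
sudo losetup -d "$LOOPDEV"
```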

I was hoping to put the installer on the USB key that I use for finnix and other rescue stuff, but at 5G, and with it wanting to find a hard drive to install to, it was really too big. Maybe it would boot live from the USB key?

I googled a bit and found:

https://www.howtogeek.com/128575/how-to-run-chrome-os-from-a-usb-drive-and-use-it-on-any-computer/ which suggests running it in a VM. That isn't a super great idea, as it would lose much of the GL possibilities.

So I dd'ed it to a 32G USB key I had around, and booted that in the EEE PC. It installed and worked great.
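The dd step, spelled out; /dev/sdX is a placeholder for the USB key's device node, so check with lsblk first, since dd will cheerfully overwrite the wrong disk:

```shell
# Write the raw image straight to the USB key, then flush.
sudo dd if=./chromiumos_image.bin of=/dev/sdX bs=4M conv=fsync status=progress
sync
```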




Putting a comma in a KVM/QEMU SMBios name

This is not well documented in qemu-system(1): commas separate fields in the -smbios argument, so if you want an SMBios name like "VMware, Inc." you need to escape the comma by doubling it:

kvm -smbios type=1,manufacturer="VMware,, Inc.",product="VMware Virtual Platform",version="None",serial="VMware-77 bb aa bb cc dd ee ff-11 22 33 44 55 66 77 88"

This results in something like:

System Information
        Manufacturer: VMware, Inc.
        Product Name: VMware Virtual Platform
        Version: None
        Serial Number: VMware-stuff

in your dmidecode output, which you may need if you are moving from VMware to another platform and have things inside the guest that care.
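To confirm from inside the guest that the strings took (assuming dmidecode is installed there), the expected values being the ones from the dmidecode output above:

```shell
# Each doubled comma in the -smbios argument arrives as a single comma.
sudo dmidecode -s system-manufacturer    # VMware, Inc.
sudo dmidecode -s system-product-name    # VMware Virtual Platform
```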




Open source costs more
I wrote this message back in 2004, and with the new procurement concerns coming from the Federal government, I want to re-iterate it. The connection is that you can compete on price alone ONLY if you have a full, stable specification. (Pencils are about all I can think of that fit that description.)

Date: Fri, 08 Oct 2004 13:46:19 -0400
From: Michael Richardson

My opinion is that there are few direct cost savings to FLOSS. It may even have higher direct costs. The reason has nothing to do with FLOSS; it has to do with the cost/benefit ratios of outsourcing. FLOSS is primarily about insourcing a large number of things, while off-the-shelf manufactured products represent the ultimate outsourcing.

It is always cheaper to outsource IT, provided that you know what it is that you need done (in sufficient detail that you can write the contract [1]), and that your requirements are at least 90% in common with everyone else's. Outsourcing has the following problems:

  a) lack of flexibility
  b) lack of customization
  c) lack of agility/dexterity (you can't change things quickly)

But we don't expect those things from our governments anyway. Certainly there are no corporations or NGOs that have any need to innovate, adapt to changing economic trends, or react to customer demands.

Note: it doesn't matter what the licensing policy of the software is. The problem is the business relationship, in which you are charged a fixed price for a fixed basket of goods.

In a typical MS-Office desktop environment you have no useful first line of IT people. Even if you have competent system administrators (and I don't mean MCSEs), there is essentially nothing they can do to deal with any major issue. The only thing they can do is click on wizards and call 1-900-Microsoft. As long as you only do things that 90% of Microsoft's other customers also do, you are fine: you will get a fix sooner or later. If you are in the other 10%, and do unusual things, you are SOL.

Since there is no use for senior system administrators (they can't do anything), you might as well hire junior MCSEs, as they are cheap, redundant and easily replaced.

[1] You do need occasional access to a VERY SENIOR, UBER IT/business person to write the contract perfectly. I know very few people who have these skills. You can contract a "consultant", but most of the ones that have the time to deal with MERX-RFP-crap to get the consulting contract have that time because it is a loss-leader to being in a position to sell the solution as well; they are probably too vendor-biased, so you don't get a good contract.

Insourcing is about empowering people to solve their own problems: giving them a multitude of tools (a utility belt), and the ability to adapt and fashion their own tools. This is where the licensing policy matters. If you can take the whole product to another supplier, or easily do the work internally, then you can adapt things. Things that are high risk are usually best done internally, where you can control the risk (== cost). It is also why things like VisualBasic exist, and why VisualBasic is the #1 tool that MS-Office shops use: it provides the flexibility, agility, etc. that you need. But this is, in fact, insourcing!

Note that the lack of [1]-type people in most departments is what actually makes it very hard for FLOSS corporations to offer open source solutions in response to outsourcing requests. Since the buyer doesn't really know what they want, they seldom get what they need, often pay huge amounts to have things "customized", and in any case their needs change often. However, since FLOSS corporations are often made up of a small number of [1]-like people, a better solution is to train the internal people to do most of the work. [...]



Friends of incumbent Broadcasters

Dear Friends of TV,

I didn't know that your organization was a front for Bell owned TV stations.

I thought it was about the CBC. I care about CBC radio. I care about the CBC RSS feed and web site. I was happy to donate $50 for your efforts last year, but I won't be doing that again.

I don't give a shit for TV. Seriously. Yawn.

CBC TV stations were typically privately owned affiliates in the past, and they should return to being that, or fail.

I do not believe that netflix is getting a free "ride". I think that the Bell- and Rogers-owned TV conglomerates have simply failed to innovate, and without netflix we'd be in an even worse situation. Even though CraveTV is now available without the illegal Tied Selling to Bell, it is still a complete flop.

People pay HST on their Internet connection, and they access all manner of content via it: twitter, youtube, the BBC, and more.

Tell me: did ROGERS ever pay their multiple millions they owed to the Cable Programming Fund? Did Rogers ever actually make Third Party Internet Access on their cable network actually work? Do Bell or Rogers actually provide 21st century (IPv6) Internet service to their customers? No. (Telus does, btw)

I am interested in programming, not century-old broadcast television stations. Quality of Internet service is the way forward: Internet service where you can actually do things like host your own TV show from your basement (think Wayne's World if you like) is what local programming is about, along with reasonable (and currently missing) systems that would permit micropayments to be made easily, directly from reader to writer.

Duopolist/monopolist systems of concentrated ownership and century-old intermediation are dead. I don't see why your organization would be supporting horses and buggies.




How to Flash OpenWSN to OpenMote with SEGGER JLink
I have an OpenMote device. One I bought prior to July 2014, so it's got a bootloader that won't let you install via serial UART: it has to be JTAG'ed in. In any case, you might toast yourself and have to start over, or you might want GDB.

First, my setup. It's not a Windows laptop with a USB cable. That would be... cramped and really inefficient to work with. I have a desktop, called obiwan. It does Linux desktop things like run multiple monitors, play music and keep browsers going, and it does so without fan noise, which I like a lot. I have a build server, called herring. It has lots of cross-compilers installed and a three-way RAID mirror (because: consumer grade disks suck); it's in the other room where it can make as much noise as it wants, and it does NFS to the other machines. I have a table/desk full of stuff, with a small form factor (fanless) PIII running Devuan, with two USB hubs connected. It's called "lando". (Stupid muse/pyblosxom has no idea how to set width= on images)

SOHO IoT/6tisch/ROLL lab

The old 10/100 8-port switch shown in the picture was the original idea, but dammit, I ran out of ports, and so the old Dell 24-port 10/100 is there too, with a fan making noise and pissing me off. It turns out the RPi Model Bs have PHYs that just don't do the MII dance correctly with more modern PHYs when driven by uBoot, so the RPis are still plugged into the 15-year-old 10/100 switch.

I installed the SEGGER software on lando, as it's got the USB cable to the JLink.
I extracted the 6tisch Golden Image, which can be found on the page https://github.com/openwsn-berkeley/openwsn-fw/releases/tag/GD_REL-1.2.0 as the download links marked "source code" at the bottom of the page: https://github.com/openwsn-berkeley/openwsn-fw/archive/GD_REL-1.2.0.tar.gz

I extracted this on my build server, because that's where my arm GDB is installed:

herring-[projects/pandora/openmote/images-GD_REL-1.2.0] mcr 1003 %ls -lta
total 1556
drwxr-xr-x 4 mcr mcr   4096 Feb  3 12:22 ../
drwxr-xr-x 2 mcr mcr   4096 Jan 28 11:21 ./
-rw-r--r-- 1 mcr mcr 524256 Jan 28 11:21 GD_ROOT.bin
-rw-r--r-- 1 mcr mcr 524256 Jan 28 11:21 GD_ROOT_SEC.bin
-rw-r--r-- 1 mcr mcr 524256 Jan 28 11:21 GD_SNIFFER.bin

On lando I started JLinkGDBServer:

lando-[~] mcr 10001 %sudo /opt/SEGGER/JLink/JLinkGDBServer -device CC2538SF53
[sudo] password for mcr:
SEGGER J-Link GDB Server V5.10j Command Line Version
JLinkARM.dll V5.10j (DLL compiled Feb  2 2016 19:31:34)

-----GDB Server start settings-----
GDBInit file:                  none
GDB Server Listening port:     2331
SWO raw output listening port: 2332
Terminal I/O port:             2333
Accept remote connection:      yes
Generate logfile:              off
Verify download:               off
Init regs on start:            off
Silent mode:                   off
Single run mode:               off
Target connection timeout:     0 ms
------J-Link related settings------
J-Link Host interface:         USB
J-Link script:                 none
J-Link settings file:          none
------Target related settings------
Target device:                 CC2538SF53
Target interface:              JTAG
Target interface speed:        1000kHz
Target endian:                 little

Connecting to J-Link...
J-Link is connected.
Firmware: J-Link V9 compiled Feb  2 2016 18:43:46
Hardware: V9.30
S/N: 269305618
OEM: SEGGER-EDU
Feature(s): FlashBP, GDB
Checking target voltage...
Target voltage: 3.31 V
Listening on TCP/IP port 2331
Connecting to target...
J-Link found 2 JTAG devices, Total IRLen = 10
JTAG ID: 0x4BA00477 (Cortex-M3)
Connected to target
Waiting for GDB connection...
Then on herring I started the GDB, as explained at http://www.openmote.com/blog/getting-started-with-contiki-and-openmote.html. I thought to load the GD_ROOT.bin directly using:

(gdb) restore GD_ROOT.bin binary 0x200000
Restoring binary file GD_ROOT.bin into memory (0x200000 to 0x27ff[...]



PostgresQL Foreign data wrappers and rails

Rails discovers the available tables and their attributes when it starts. It has no problems with views, but it turns out that foreign data wrappers do not turn up in that list, and so if one of your tables is really a foreign data wrapper (fdw), rails just doesn't believe it exists.

I was using the mysql_fdw to merge the customer lists of a CRM, a ticket system and the SG1 management application. The CRM uses MySQL (only!) while our preferred database is PostgreSQL; mysql_fdw makes them all see a few tables in common, but the auto-discovery did not like it in staging.

Solution: add a layer of indirection! Move the original fdw into a schema, and then:

CREATE OR REPLACE VIEW public.crm_accounts AS (SELECT * FROM suitecrm.crm_accounts);

and lo-and-behold! it all works as expected.
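Spelled out as a script; a sketch, where mydb is a placeholder database name, and the ALTER ... SET SCHEMA step assumes the foreign table originally lived in public:

```shell
psql mydb <<'SQL'
-- Move the foreign table out of the default schema...
CREATE SCHEMA IF NOT EXISTS suitecrm;
ALTER FOREIGN TABLE public.crm_accounts SET SCHEMA suitecrm;

-- ...and put a plain view in its place; Rails' schema
-- discovery has no trouble with views.
CREATE OR REPLACE VIEW public.crm_accounts AS
  (SELECT * FROM suitecrm.crm_accounts);
SQL
```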




Setting pepper in staging/production
The devise authentication plugin for rails has been a standard piece of infrastructure for me for years. One of those things you should not roll on your own. https://github.com/plataformatec/devise

In setting up the staging system for the IP4B SG2 system, I wanted to be able to copy bits of the production database to the staging system so that we can see real data. Logins were not working, and I was mystified until I realized that the pepper values in config/initializers/devise.rb were different: as I upgraded devise, I had started with a new file. I had set a longer pepper in the new revision, probably too long actually, as it might prevent the per-user salt from having any effect.

I then realized that I don't want this pepper value in the git tree at all; it should be set by production.rb, or better, by secrets.yml. I didn't find a clear way to do this, so I had to Use The Source, Luke. (Curiously, in the Percy Jackson novels, Luke is a bad guy.)

I used bundle show devise to tell me where devise was installed, and then I went into that directory with Emacs and used grep -R for "pepper". I was also looking for where the Devise.setup routine is and how it sets up the config. A clue was in test/models_test.rb, where I found:

test 'set a default value for pepper' do
  assert_equal 'abcdef', Configurable.pepper
end

Hmm, what is the Configurable thing? Naw, it's in test/test_models.rb, so it's just for the test cases. The real meat is in lib/devise. I learnt from test_models.rb that I can specify pepper: in the devise model, so if I have to I can put it together in my app. I decided to grep for secrets:

grep -R -nH -e secrets *
rails.rb:33:      if app.respond_to?(:secrets)
rails.rb:34:        Devise.secret_key ||= app.secrets.secret_key_base

Which is nice, as it points to the secret_key_base, which I knew I had to initialize, but that is used for the cookies. I don't need to synchronize the cookies between staging and production.
I was hoping to put the pepper into the secrets.yml. Looking at this, it seems that I ought to be able to add pepper to it, so I tried:

module Devise
  class Engine < ::Rails::Engine
    initializer "devise.pepper" do |app|
      if app.respond_to?(:secrets)
        Devise.pepper ||= app.secrets.pepper
        puts "APP has new pepper: #{Devise.pepper}\n"
      end
    end
  end
end

I monkey patched this into my user.rb file, before I create the User. (I could fork and make a pull request on this, but I'll do that after I'm sure this is the right way.) I wondered what app was; I suspect it comes from the ::Rails::Engine. I stuffed my secrets.yml on my desktop with:

development:
  secret_key_base: mysecret
  pepper: "happyhappyjoyjoy"

and invoked rails console. Good news: it didn't blow up yet! I dumped the pepper:

2.1.5 :001 > Devise.pepper
 => "42e1d47ee85f2bb825429384729345234935924310c651bcdb822a65919d8b4"

Well, that's not the pepper I set. I went and removed it from config/initializers/devise.rb, and restarted rails console.

2.1.5 :001 > Devise.pepper
 => nil

Okay, so there is a change... but it didn't load my monkey patch yet. Maybe if I make it load that code:

2.1.5 :002 > User.first
...
2.1.5 :003 > Devise.pepper
 => nil

Nope. So let's see if my idea was even sound. I stuffed the initializer code into the lib/devise/rails.rb in my local gem repo, and lo and behold, it ran:

obiwan-[nv/clientportaltest/beaumont](2.1.5) mcr 10027 %rails console
APP has new pepper: happyhappyjoyjoy
Loading development environment (Rails 4.2.5)

Conclusion: I monkey patched too late. I put the same code at the top of config/initializers/devise.rb. Too late. How about at the end of config/application.rb? Yes, that worked. I'm not too thrilled by this, I think it ought to go elsewhere, but one step forwar[...]



Libreoffice 5

After updating my desktop from Debian Wheezy to Devuan ascii (http://devuan.org), I have Libreoffice 5. It sure is pretty.

But, the file dialogs are still broken if you have NFS or SSHFS: they want to walk the file system and get stuck on slow remote servers.




Non-Review of Star Wars VII

When I started to write this blog entry, my blog workflow asked me which of the 20-odd categories I wanted to put it in. I saw "defectivebydesign" and thought, "haha, that's a good place". Previously, this category has included reviews of: Amazon Users Tag, Canadian Blood Services, DomainsAtCost and the Huawei 1250.

Star Wars VII was fun. Good use of 3D, although I think that the Millennium Falcon chase scene on the desert planet (what was its name?) could have used more in-cockpit 3D, with Rey's hands in the field of view, like a first-person shooter. Definitely, the later Rey/Finn gun sequence should have totally been first-person-shooter... That is, they should have actually picked a well-known first-person shooter (in a space ship, of course; maybe Lego Star Wars...) and used exactly the same icons, etc.

Female-Yoda-With-Glasses. Okay. But why did they go there? Who is she? They should have at least had a small conversation about that.

Why did they have to have the same band as in the Mos Eisley Cantina? That part was simply LAME. It was at that point that I said... well... this has made a turn for the worse.... and really it never recovered in my opinion.

Planet Killer. Yup. We know from the books that the two Death Stars were not prototypes, that there were more super-weapons hidden away, and that there is a Hyperspace Planet Killer in a Corellian moon which Anakin Solo (Han and Leia's third child in the books) could operate, and did operate by mistake. It takes genetics (thanks to Han's strange past in the aristocracy) and use of the Force... so that part seems canon to me. It would have been more interesting if they had actually had Ben Solo (Kylo Ren) operate and/or aim it... it would have made the character a far more interesting part of the First Order. Otherwise, it's not clear why Ben Solo is needed at all. Aiming a gun that kills things faster than light can travel would be hard... that's where the Force comes in.

Oh, and while it can shoot through hyperspace, and so kill planets faster than light, I see no reason why it should kill multiple planets with a single shot. But, even if it did.... those planets would NOT SEE each other blow up.

As for Kylo Ren; he's clearly not dead, but I'd have liked them to have made that slightly more clear. The planet could have blown up slightly less completely.




Westgate Mall renovations

I went to the Westgate Reconstruction "meeting" shortly after 6pm on Dec. 16.

It was in the basement of the Macy Hotel in a smallish meeting room. It was not a presentation, but rather a poster-board meeting modelled after the worst methods of the city of Ottawa. I THOUGHT we were done with having such a stupid meeting format, but then I realized that this wasn't a city meeting at all.

There is a time and place for a well done PowerPoint presentation followed by an open mic and questions from the floor, and this is actually one of them.

If you missed it, you missed nothing. The posters are all online at:

http://www.cbc.ca/news/canada/ottawa/westgate-shopping-centre-development-riocan-1.3368082 and http://rileybrockington.ca/wp-content/uploads/2015/12/Westgate-Rio-Can-Proposal.pdf

There was really nothing there that isn't in these pictures, except a few ridiculous claims on two posters which aren't in that deck:

  1. that the development will be sustainable.
  2. that the development will be a transit node.
  3. The "Transportation Considerations", which failed to mention walking or cycling, and whose only mention of transit is that the developer thinks the city might build LRT on Carling. (If only the city was that clueful)

The plan is basically:

  1. in ~5 years: knock down Monkey Joe's, put up a tower.
  2. a few years later: knock down Shopper's Drug Mart end of mall, put up another tower.
  3. a few years later: do something with the rest of the mall, and add greenspace.

I skipped out after ten minutes; there weren't enough Rio-Can people there. My friend and neighbour Sharon Body said that there might be a "talk" at some point, but I took off to Rockin' Johnny's for a hotdog. Just before leaving, I checked google maps to see if I could catch a 151 home. Only one 151 per hour goes through Westgate now, and lo and behold, at 7:08pm I could catch the bus, and it was just short of 7pm. I went outside and caught the 151. It seemed early because, well... it was. It was the 151 that goes in the other direction at 7:04pm; I realized this after we got to Kirkwood and turned the wrong way. I walked home, as it was faster than waiting another 5 minutes. (It's not a long walk anyway.)

Why do I mention the 151? Because the real problem that Westgate and the community have is that between OC-Transpo and Rio-Can, they have basically written off all transit use at and through Westgate. It COULD be a significant TRANSFER hub if the mall and OC-Transpo would pay attention to it. It has the 101, the 151, the 176 and the 85. But it's all mis-designed: the shelters are in the wrong places, and the buses simply do not connect.

Were I trying to make Westgate more valuable, I'd want to massively increase the pedestrian traffic in and out. That would involve some kind of walkway between Westgate and Hampton park. I'd prefer an at-grade tunnel under the 417, but some might feel that could be hard to keep secure. I'd go for a third storey walkway over it, but that won't be easily useable by cyclists. We could work something out though, I think, if we had a decent meeting.




Trying out GNU Emacs 23

For at least the last 15 years I have been an Xemacs user. You can read elsewhere about the great GNU vs Xemacs split... I was solidly on the X/Lucid Emacs side of things. But, Xemacs gets no attention, and although it does 97% of what I want, I decided to try out the extra 3%.

For the past week I've been running GNU Emacs 23 at CREDIL. I still login to my desktop at home to run gnuclient -nw, to get access to the Xemacs running there, the one that has my email in it. Once I'm happy with GNU Emacs 23 here, I guess I will try it at home.

GNU Emacs has a bunch of Xemacs upgrade facilities... it reads and understands most of my .xemacs custom stuff, and it seems to update that stuff too, having adopted that entire subsystem.

GNU Emacs has a bunch of git integration, which, while it seems like a smart thing, seems to do an awful lot of file I/O operations and, worse, invokes git diff and a bunch of other things. This is a problem if your source tree happens to be over sshfs. I've turned some of it off, but there are more things it seems to do... maybe a statfs to determine whether the directory is remote, turning that stuff off by default, would be wise.

I cannot convince MUSE mode to accept the changes that I made a few months ago when I moved my blosxom blog from a CGI to a static site; until I do, this entry won't get published.




various blog things

My blog doesn't work with additional path components. I.e. you can visit it at http://www.sandelman.ca/mcr/blog/, but you can't get the RSS feed at http://www.sandelman.ca/mcr/blog/index.rss. This seems to be a problem with AcceptPathInfo, which I haven't figured out.

http://httpd.apache.org/docs/2.2/mod/core.html#acceptpathinfo

In discussing this, I was told I am to run "vimblog", care of: http://www.jukie.net/~bart/blog/ and http://www.dmo.ca/blog/.

AHA. I had done:

ScriptAlias /mcr/blog/ "/home/mcr/cgi-bin/blosxom.cgi"
AcceptPathInfo  on

which doesn't quite work, because a request for mcr/blog gets "spell checked" (redirected) to mcr/blog/, which no longer has a path part to look at.

ScriptAlias /mcr/blog "/home/mcr/cgi-bin/blosxom.cgi"
AcceptPathInfo  on

Does the right thing.