Wade Berrier

a glimpse inside

Updated: 2017-06-18T21:19:07.587-06:00


300 Baud Real Time Software Modem


I did a project during my Computer Science graduate program at Utah State University implementing a 300 baud real time software modem. It was a great project, and given my past as a small town BBS sysop (which deserves a dedicated post, but search the linked page for Wade Berrier), it brought back some great memories.

I was inspired by this youtube video as well as by some of the projects and people I work with.

A video demonstration is posted here and the full write up is posted here.

Note: in the video I say that it was for Summer 2011, but I actually did it in 2012.

Note 2: the write up was done with lout.  Why isn't this more popular?

Anyway, enjoy the video and write up.

Source Code Navigation with Vim


I'm a pretty naive user of Vim.  It started while watching a professor of mine do code examples on a projector during class.  I was amazed at the speed at which he was copying and pasting code, editing text, etc... all without using the mouse.  When I asked him how he did that, he talked about how he had been using those techniques for 20 years.  I had watched other people use emacs, but they were still moving their hands away from the home row.  That alone motivated me to pick up and learn Vi.  (Yeah yeah, I know, a proficient emacs user could have been just as impressive.  More on that later.)

Learning Vim turned out to be a great investment, as I worked on several unix and embedded systems where it was normal to have some form of Vi installed.  It's even usable in the Dvorak layout!  (I'll have to save those details for another post.)

Back to the topic at hand: Vim is also great for reading and navigating source code.  I would see people using an IDE where they could push a button and it would take them to the declaration of a variable, function, etc...  "How did you do that!?"  See, I've never used a "real IDE" on a regular basis, and was unaccustomed to such features (feel free to flog me for this).  I had to figure out how to get those features inside of Vim.  (Another sidenote: I can't use anything besides Vim now.  Even my MS Word docs often have rows of jjjjj kkkkk sporadically placed.  Sidenote #2: one of the eclipse Vim plugins I tried last year made eclipse usable, but I still prefer Vim... I'll have to save that one for another post as well.)

Turns out there were several options to get source code navigation working in Vim: ctags, cscope, an eclipse ipc method, and one I stumbled on because of a co-worker: gnu global.  Anthony swore by global, and indeed, his source code navigation was impressive... even while using emacs!

In any case, after trying all the above tools, I settled on gnu global, although it took some work to get it working with Vim and my setup.  That's what this post is really about: documenting my usage of Vim and gnu global.

First, some settings for .bashrc:

export MAKEOBJDIRPREFIX=$HOME/wa/globaltags
alias maketags='mkdir -p $MAKEOBJDIRPREFIX/$(pwd -P) && gtags -i $MAKEOBJDIRPREFIX/$(pwd -P)'
alias maketags_cpp='GTAGSFORCECPP=1 maketags'

That sets up the command I use to generate the tags: maketags_cpp.

global is great in that you can store the generated tags outside the source tree (in this case $HOME/wa/globaltags/).  Plus, you can be anywhere in your source tree and it correctly finds the tags.  You can also have automatic per-project tag databases.  These are some of the main reasons I chose gnu global over the alternatives.  I don't want to jump to spots in an unrelated code base.

Now, for Vim.  First, .vimrc:

let GtagsCscope_Auto_Load = 1
let GtagsCscope_Auto_Map = 1
let GtagsCscope_Quiet = 1
set cscopetag

Apparently Vim doesn't have a plugin architecture for tagging systems.  So, global provides a cscope adapter in order to make Vim think it's using cscope, when it's really using global.

Next, drop gtags-cscope.vim into $HOME/.vim/plugin.  You may need to make sure you have the correct version for the version of global you're using.  global also has "gtags.vim", but like I said, Vim doesn't expose an api to allow different tagging systems, making the tag stack unavailable (which is one of the best features of this setup).

Another sidenote/tip from Anthony: if you ever google for it, make sure you google "gnu global" and not "global" or "gtags".  I know... "gnu global" is a terrible name, marketing wise.

So, after installing global, the basic workflow is like this:

  • enter the source dir root and run "maketags_cpp"
  • open a file with Vim

Some of the common operations I use:

  • go to the function/variable definition of the identifier under the cursor (pushes onto the tag stack): CTRL-]
  • go back (pops off the tag stack): CTRL-t
  • search for all instances of this i[...]
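The maketags alias keys each tag database off the source tree's absolute path; a self-contained sketch of that layout (temporary directories stand in for a real checkout, and the gtags invocation itself is omitted):

```shell
# Mimic what the maketags alias does, minus running gtags: the tag db for a
# tree at $PWD lives under $MAKEOBJDIRPREFIX/$PWD, so each project gets its
# own database and lookups never cross into unrelated code bases.
MAKEOBJDIRPREFIX=$(mktemp -d)
srcdir=$(mktemp -d)/myproject        # hypothetical source tree
mkdir -p "$srcdir"
cd "$srcdir"
tagdir="$MAKEOBJDIRPREFIX/$(pwd -P)"
mkdir -p "$tagdir"                   # maketags would now run: gtags -i "$tagdir"
echo "$tagdir"
```

A second checkout at a different path gets its own directory under the same prefix, which is why jumping to definitions stays scoped to the current project.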

Network Protocols


I've been looking over the supported protocols in IP and noticed the following protocols:



I've used and been interested in VoIP for several years.  It was really neat to see a precursor implemented in the '70s.

I had seen SCTP mentioned here and there but never really looked into it.

Here's a great overview:

Looks like it will become a widely used protocol at some point.

Learning about this stuff reminds me of when I was learning how to use DOS in the early nineties.  I thought I was sooo cool because I was navigating directories and programming batch files.  I later got into linux and read about its background and history.  "You mean UNIX has been around for 30 years and I had no idea that my beloved DOS was a disgrace?"

USU VPN Linux Client


I needed to connect to the Utah State University Virtual Private Network.  They had a page about connecting with linux, but it didn't work for me.  I ended up writing a simple script to configure, start, and stop the vpn.  You can find it here:

It works for me on Ubuntu 10.10, as long as I installed an updated openswan (as noted in the README).

Qt and Threads


Multithreaded programming is fun, isn't it?

I'm writing a thin Qt wrapper around OpenAMQ so that we can encapsulate a connection in a separate thread without blocking the event loop in the main application thread.

Some guys at work have developed a few nifty tricks to make threaded programming in Qt easy... well, easier... actually, much easier.  But, I was still having some problems (which were mainly caused by some weirdness and wrong documentation in the OpenAMQ client library, but that's a different story).

Other than better understanding how QCoreApplication and QThread event loops interact, here's a lesson I learned:

Don't call deleteLater() within a class that inherits from QThread.  The reason is that you need to call quit() before the QThread object is deleted, and once you do that, the event loop in QThread stops.  Since the event loop in the QThread object has stopped, the deleteLater will never get processed.

This all comes down to: using deleteLater in this scenario will never call your destructor.

Lesson learned.  Check.

svn and tons of ignores


I never realized what a pain it was to maintain svn:ignore properties until I started using git (which makes it really easy to ignore the cruft metadata that collects after a build).

After some searching on the web, I found this script:

# svn-ignore - tell Subversion to ignore a file in certain operations.
# See: []

test -z "$1" && echo "Usage: $0 FILENAME" && exit 1

for fullname in "$@"
do
    dirname=$(dirname "$fullname")
    filename=$(basename "$fullname")

    (svn propget svn:ignore "$dirname" | egrep -v '^$';
     echo "$filename") >/tmp/svn-ignore.$$

    svn propset svn:ignore -F /tmp/svn-ignore.$$ "$dirname"
    rm /tmp/svn-ignore.$$
done

That coupled with:

for i in `svn status` ; do if [ "$i" != "?" ]; then echo "$i" ; svn-ignore "$i" ; fi ; done

really saved me a lot of work.
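That one-liner walks every whitespace-separated word of `svn status`, status letters included; a slightly more targeted sketch of the parsing step, with sample status lines standing in for a real working copy:

```shell
# Keep only the paths on '?' (unversioned) lines; each surviving path would
# then be fed to the svn-ignore script above. The file names are hypothetical.
printf '?       build/foo.o\nM       src/main.c\n?       config.log\n' |
  awk '$1 == "?" { print $2 }'
# prints:
# build/foo.o
# config.log
```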

A New Adventure


Some may have noticed that I haven't been around the Mono irc channels lately. I recently started working at Applied Signal Technology in Salt Lake City on July 7th.

One project I work on is an embedded board with some Xilinx FPGAs, one of which has an embedded ppc processor. It runs embedded linux and does signal processing in the FPGA. The kernel and embedded linux have been long time favorites of mine.

The other project is a state-of-health hardware and environment monitoring system written in C++/Qt. Having used Python for monobuild and been in the Gnome/Mono circle for a few years, it has been quite the shift to use C++. I found an interesting white paper comparing C++/Qt and Java, but haven't formed any opinions yet. Comments?

I wanted to give an update on my disappearance as well as express my gratitude to Novell, my fellow co-workers, and the Mono community. It was a great 3 years and an awesome experience. Thanks especially to Andrew and Marc for taking over the build and release processes. We spent my last week or two at Novell transitioning things over and I've also spent some time since then helping them out. I'm fully confident they will do a superb job with Mono 2.0.

Good luck to everyone with the upcoming release. I wish y'all and the Mono project the best.



I was checking out the openSUSE 11.0 Gnome LiveCD to gather some information about a Mono bug, and accidentally discovered gvfs.

I guess it's a replacement for gnome-vfs. From a quick glance, nautilus seems pretty much the same to me as when it used gnome-vfs. But, lo and behold, when I opened up an sftp:// uri in nautilus, that 'share' was available via fuse in /home/linux/.gvfs!!! How cool is that??

This is probably old news, but I'm pretty excited about this. I guess there'll also be a kio interface. It seems gvfs has some really great potential to bridge the vfs gap.

Great work!

Gmail and IMAP


I've been using google hosted for my personal email for some time now. Cheryl was using their web client and I was fetching all my mail over pop to a local dovecot server.

After I heard they were going to support IMAP, I decided that maybe I would finally migrate all my emails (going back to 1996) to the google servers.

I noticed that messages copied via imap had incorrect dates when viewed from the web client. That stalled my decision for some time, but Andrew mentioned that they were going to eventually fix that. The dates still appear correctly in imap clients, so I wasn't too worried. I'll mostly use an imap client, but it will be nice to be able to check and send mail from a web client.

(When hosting my own mail with dovecot, I had squirrelmail set up, but my mail was often rejected because it was sent from a dynamic ip. The unreleased squirrelmail beta had the option of configuring one authenticated account for outbound smtp, but using that feature with gmail was a little clunky because it seemed the mails weren't masqueraded properly.)

One of the things I really like about using gmail over imap is the ability to tag spam by moving it to the [Gmail]/spam folder. I had pretty good luck with spamassassin, and although it was fun getting it to work, I had some false positives and decided I didn't really want to think about spam any more.

The last of my concerns were answered by this help thread:

I just hope I don't start deleting messages while using other email servers and expect them to be in my 'All Mail' folder :)

The performance is ok, but not as good as using my own dovecot server serving one account. But since I get the above features and I don't have to worry about backups or my computer going down, that's something I'm willing to live with.


Some people have asked how I did the actual migration. I configured two imap servers in Evolution and manually copied messages/folders from one account to the other. This took several hours of babysitting the process for roughly 250MB of mail.

It may be worth looking into imapsync.


Andrew sent me this: google-email-uploader

Accessibility Team looking for packager


Jared Allen asked me to keep an eye out for anyone interested in packaging for Novell's accessibility project. Send me your resume if you're interested.

Novell Hack Week #2


I decided to continue on with my hack week idea from last year. I spent the better part of a day getting the devel environment set up (compiling and setting up myth from HEAD, setting up the latest compiz-fusion from the build service, and gathering some test HD videos for myth) only to find out that it looks like it's been fixed already! We'll have to wait until the next major release of myth, but it's in there. Moving on.

The next item was a leftover idea that had been kicking around from the Tomboy hack night last December. For that event I wrote a little python script that rapidly created notes over the Tomboy dbus interface. I gathered some data about how Tomboy performs with a large number of notes. The main findings were:

  • Start up time was pretty dismal with a large number of notes (even 1000, which isn't that inconceivable)
  • Note creation time steadily increased as the number of notes increased
  • The time it took to delete notes was much longer than desired when you had a large number of notes
  • Tomboy performed quite well during typical use cases, even with a large number of notes

Hence my Hack Week #2 project: add an sqlite backend to Tomboy to help address some of the performance issues in Tomboy.

Boyd had mentioned that Everaldo and crew had done an sqlite backend for the maemo Tomboy port. My first objective was to port that code from the 0.7.x codebase to trunk (0.9.x?). It turns out the maemo port was done mainly to work around a bug in Mono running on the n800. The maemo sqlite port allowed a mechanism for storing multiple notes inside one file in order to work around the aforementioned bug. That alone wouldn't solve the above issues. (In fact, this sqlite backend was significantly slower than the file backend unless delayed writes were enabled for the sqlite db. With delayed writes, they performed roughly the same.)

I spent the rest of hack week getting introduced to git and git-svn (which really rock!), getting my feet wet with C#, reading Tomboy source code, investigating Linq, and writing the C# code to do the db schema creation and schema upgrades. The main conclusive points of interest are:

  • To utilize the sql db, queries are needed to pull only the notes of interest into memory (otherwise, with all notes in memory, I'm guessing that's a main reason why the previous list of shortcomings occurs, especially startup time)
  • Find out if the current note buffering scheme is needed during note editing. If not, the code could be simplified by persisting changes straight to the db.
  • We'll likely need an interface to transparently search and interact with notes in memory or from the db (meaning, I'm guessing the findings from #2 may be futile)
  • Provide note migration from xml to db

I seem to get mixed reactions from those who hear about a change like this. Some say, "Yay, with 200 notes, Tomboy is slow with some things!" Others, "Tomboy will likely move to using a db anyway." And, "With so much work and so little possible gain, why bother?" And lastly, "Eww, no db! Then I won't be able to ssh into my home/work computer and poke at the .xml note files!"

To alleviate at least the last point, Andrew wrote a sweet command line util: Tomboy Remote. (Because you shouldn't be poking at a program's internal data anyways!) Update: Source download.

In conclusion, there's quite a bit of work remaining. The main benefits of this week were that I got some C# exposure (finally!), experienced a great use case for decentralized scms, and got more familiar with the Tomboy codebase. More for next time!

On another hack week semi-related note, I just upgraded my home system. (I got an intel mb, Core 2 Duo (E6550), 4 GB of ram, and an nvidia 6200le card for $300 after rebates. Thanks Joel and Steve!)
Anyway, the onboard sound only has one audio port. [...]

openSUSE Build Service


Note: I drafted this neglected post in Feb '07, and since I'm talking about the build service at the Mono Summit, I decided to post as is.

I'm trying out the build service with the intent of migrating as much of Mono's packaging as possible.

I first heard about this service at BrainShare 2006 and thought it looked really neat. They did a demo build from the web client.

I just discovered the command line client: osc, and it's amazing! You can do local builds of your projects for multiple distributions! Then you can make changes, tweak your files, do a local test build, and then commit your changes to the server. The server will add your packages to the queue and create a repository for download.
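The osc cycle described here looks roughly like the following (the project and package names are hypothetical, and the commands are printed rather than executed since they need a build service account):

```shell
# Sketch of a local edit/build/commit cycle with osc.
cat <<'EOF'
osc checkout home:wberrier mono       # fetch the package sources
osc build openSUSE_Factory x86_64     # local test build for one distro/arch
osc commit -m "update spec file"      # push changes; the server rebuilds
EOF
```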

The package build system that Mono uses has cut out a lot of the manual work with building packages. The problem with it is that no one else besides me can use the system to build packages. (Someone could, but it would require creating jails, setting up ssh authentication, etc...). The great thing about the build service is that anyone that is a maintainer on the package can test a local build and submit changes from their local machine.

I've always been impressed with SuSE's autobuild system. It allowed for local builds and submitting build jobs to be done on the build farm. This is fine for SuSE builds, but I was unable to utilize this service for Mono packaging because I needed to build on several non-SUSE distros.

The buildservice has solved that. There are a few remaining issues that I'll need to sort out before I can move completely over. First of all, only x86 and x86_64 are supported. Plus, I'll need to figure out how to make previous releases available in the build service. (I'm assuming I can create a new namespace for each release, but I haven't looked into this.)

This will also give better testing on the various distros, since for Mono, we only build on a lowest common denominator distro and use it everywhere for that arch.

Good job on SUSE's part, and all I gotta say is, "Wow" :)

Monobuild updates


During the latter part of this week I revamped monobuild to use the .spec files from SuSE's buildservice rather than using Ximian buildbuddy. This was a long overdue move. When our build machine's 700 GB disk crashed, I decided to dive in. There are some nice advantages to this:
  • I'm not using the obsolete buildbuddy

  • I maintain only .spec files now instead of merging changes back and forth in buildbuddy

  • Those spec files can be shared with monobuild, suse build service, and suse autobuild

  • When setting up a new distro chroot, I don't have to rebuild buildbuddy with the new distro info
These spec files have been in the making since I started adding mono to the buildservice several months ago. Another thing that made this move possible: I purged the unsupported distros. Their vendors don't support them, so neither should we.

It is interesting to note that there has been some talk of coming up with a cross linux distro xml description to be used in the buildservice. Kinda funny, since buildbuddy had the ability to build rpm and deb. Oh well...

One of the other monobuild features I finished up is the ability to build rpms on your local machine. Previously you could only build on a machine connected through ssh. It's not real user friendly to get this working, but it's possible. I mainly wanted to implement this to work toward the goal of enabling others to easily create the installers.

The easiest way to build local rpms is definitely with the suse buildservice. It rocks. In fact, it has replaced much of the functionality of monobuild. But, since the build service doesn't support all the platforms or distros that we build on, we'll continue to use monobuild for releases on those missing platforms. Monobuild also works great for continuously building from trunk. (There's no reason monobuild couldn't use the buildservice tools to locally build out of trunk, but there hasn't been a need at this point.)

Novell Hack Week


There are two technologies that I really want to use all the time:

  • PulseAudio
  • Xgl

The problem is that I run mythtv quite a bit, and myth doesn't work very well with either of the aforementioned pieces of software. As a result, I usually don't have PulseAudio nor Xgl running, because it's a pain to constantly switch them on and off. So I decided to hack on mythtv for a week to fix this.

PulseAudio

Rationale: In order to output to pulseaudio from MythTV, you have to use an oss emulation wrapper (padsp). Patch myth to have real pulseaudio support.

Results: I took Monday to set up the myth development environment, set up my usb tuner on my laptop, and get the build infrastructure for PULSE output set up. By Tuesday morning I had unsynchronized audio/video going to the pulse server using the simple api. I assumed that by using this api, a/v would be out of sync. But by trying it out I was able to make sure of this, as well as get the basic framework implemented.

I read some pulse documentation about the asynchronous api, and before diving in, decided to look at fixing the alsa output support to see what that would take. It ended up being really simple to fix alsa: don't use mmap access to the sound device. In case there were objections to my patch because I didn't use mmap, the final patch tries to use mmap, but then falls back to non-mmap. I spent the rest of Tuesday and most of Wednesday reading ALSA documentation, doing the final patch, and making some packman derived mythtv packages that included my patch (which are hosted here, although I'm hoping this will get into the myth sources, so these packages will eventually disappear). Patch posted to the myth bug.

Fixing ALSA was also nice in the fact that no new dependencies were needed for myth. If for some reason there are additional benefits of implementing native pulse support, I might re-address this later.

Xgl

Rationale: I usually don't run Xgl because mythtv crashes Xgl when you try to display video. This needs fixing.

Results: I figure that mplayer works under Xgl using XVideo just fine, and that Myth should be able to do the same. MythTV has a branch called mythtv-vid where they are working on an OpenGL output driver. I spent a while installing this branch and getting the latest xgl and compiz-fusion running, just to make sure this problem wasn't fixed already. It wasn't, and I couldn't get the opengl output on myth working.

At this point I wondered if I should just drop this idea and wait for the mythtv-vid project to finish the GL out support. I decided to do some simple benchmarks with mplayer to compare gl out and xvideo out. This can be done by disabling sound and telling mplayer to spit out the frames as fast as possible. The xvideo out ended up being slightly faster. (This was using ATI's fglrx driver. It would be interesting to run the same test with some different video cards and drivers.) That was Thursday and a little bit of Friday.

The rest of Friday morning I spent debugging the myth sources to find the crash. This went rather slowly because each test run crashed Xgl and I had to constantly re-login. The crash is happening during some xvideo initializations. I've located the X calls that cause the crash, but that's as far as I got. I don't know enough about xvideo to debug this further, so I've got some more digging to do. That took me until about noon on Friday.

The next few hours were spent setting up another machine so that I could demo mythtv running on a computer with synchronized output to 2 computers. The demo was video taped, but it's kind of difficult to experience synchronized output with a video camera :) I finished the rest of the day debugging xgl a little[...]
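The mplayer benchmark mentioned above can be expressed concretely (the clip name is hypothetical; the commands are printed rather than run since they need X and a video file):

```shell
# -benchmark with -nosound decodes and displays frames as fast as possible,
# so comparing wall time between runs roughly measures the -vo output path.
cat <<'EOF'
mplayer -benchmark -nosound -vo xv test-clip.avi   # XVideo output
mplayer -benchmark -nosound -vo gl test-clip.avi   # OpenGL output
EOF
```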



Like Linux needs any more sound libraries/backends... but this one is cool!

PulseAudio has been called the 'compiz' of audio streams.

You can do some really cool things with it: output all sound to another machine, play synchronous audio to multiple clients on a lan, move streams from one sink to another on the fly, virtual surround sound using two soundcards, and stream- and application-specific volume adjustments. (Plus, using avahi, sound servers on the network are automatically discovered.) These may seem like useless esoteric cases, but here are some practical things I use it for:

I've got a few computers in my house. It's cool being able to have banshee playing on one machine, and the output going to all the speakers in the house, all synchronized.

I have two computers side by side that I use in a kvm fashion. One of the computers has much nicer speakers. I mostly use the computer that doesn't have nice speakers. I can have all the sound go through the nice speakers, and I don't have to plug and unplug things all the time.

I like to watch movies on a laptop to have the screen closer. Instead of plugging some speakers in the laptop and having wires all over, I play the sound through the computer's nice speakers.

Some of the computers I have use soundcards with only one sound channel. (Alsa dmix can mix streams into one, but PulseAudio can do the same, as well give me all the above features. Plus, I don't have to worry about dmix not playing well with certain alsa drivers.)
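For reference, these tricks map onto PulseAudio's pactl client roughly as follows (the sink name, stream index, and server address are hypothetical; the commands are printed rather than executed since they need a running daemon):

```shell
cat <<'EOF'
pactl list short sinks                               # enumerate available sinks
pactl move-sink-input 42 alsa_output.nice_speakers   # move a playing stream on the fly
pactl load-module module-tunnel-sink server=192.168.1.10   # add a sink that lives on another machine
EOF
```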

Not all the Pulse utils are shipped in openSUSE 10.2, but Takashi has packaged most everything up to be included in 10.3. (Check here for packages to use in 10.2 and 10.3). The only packages remaining at this point are gstreamer010-pulse and libflashsupport, which supports pulse output for Flash 9. These are both available in the build service.

Seems like a great sound solution and a wonderful fix for all the sound mixing nightmares linux users have had to face over the years. I wish Gnome and openSUSE would configure PulseAudio by default and configure all the shipped applications to output in a pulseaudio compatible way (see Pulse Audio Perfect Setup for which apps and backends can and need to be configured).

Lennart gave a great presentation showing off and advocating PulseAudio:

Video Presentation

(In actuality, this blog posting has been in draft mode for a while, but since my 'Hack Week' project involved PulseAudio, I needed to finish this post first.)

Switch User in openSUSE 10.2


Being able to switch users without logging out has been a long lost favorite. Here's how to enable the "Switch User" button on the gnome screensaver for suse 10.2:

gconftool-2 --set --type bool /apps/gnome-screensaver/user_switch_enabled true

Taken from:

Very very handy when Cheryl and I use the same computer.

Firmware updates without a floppy


I've had to do some firmware updates lately. This isn't a very friendly process being a linux user (unless the manufacturer provides bootable iso images).

The firmware I needed to install came as a win32 self-extracting 'create a set of boot floppies' program. As I started running the program, I realized that the only windows box I had access to at the moment didn't have a floppy drive. Ugh. Searching led me to find this virtual floppy driver:

It worked like a charm. I ended up with a floppy image file. After some more searching I found out how to create a bootable iso image from this floppy image:

From this article:

* As root, make sure there's a /mnt/test directory
* As root, mount -o loop,ro /scratch/linuxinst/m91inst/images/network.img /mnt/test
* The remainder of the steps are done as a regular user
* mkdir /tmp/floppycopy
* cp -Rp /mnt/test/* /tmp/floppycopy
* cp -p /scratch/linuxinst/m91inst/images/network.img /tmp/floppycopy
* mkisofs -pad -b network.img -R -o /tmp/cd.iso /tmp/floppycopy
* cdrecord dev=0,3,0 speed=12 blank=fast -pad -v -eject /tmp/cd.iso

It actually worked. It would be nice if all manufacturers provided bootable iso images for all their updates.

Sidenote: be careful when updating the cd drive's firmware using a cd. IBM had a bootable iso image to update the cd drive's firmware which worked fine, but in my case, using an image on a cd that was designed for a floppy didn't work. Luckily, running it again from a floppy fixed the drive.

New Blog Location


Seems like about a year ago I was migrating services to my little server. Now I'm heading in the opposite direction and trying to migrate services off of it. I guess I shouldn't give my wife such a hard time for wanting totally different wall colors than she did 2 years ago :)

Couple of reasons for doing so:
  1. The power has gone out a few times in the last year, and I'd rather not worry about availability, especially when I can get this blog for free.
  2. For the first time in 2 1/2 years, my qwest dsl was down for a day. I'm not sure exactly how long the outage was since I was out of town, but again, I'd rather not worry about it. Plus, I only had < 1 Mbps of upload capacity.
  3. I didn't really feel like keeping wordpress updated or tracking their security vulnerabilities. (wordpress, or any blog hosting software, doesn't ship in openSUSE)
  4. Most importantly, it seems to be a nice time to follow suit in migrating blogs.

I didn't have that many blog posts for the previous site so I copied and pasted them over.

There weren't that many comments on the old site so I simply pasted them in right along with the post.

Maybe I'll blog more now... maybe not. But this will be its home until the "Google goes Evil" prophecies start to come true (outrageous advertising, fees, etc...), then maybe I'll return to my original color choices.

Finally! Reliable atheros wireless in openSuSE 10.1


Earlier I had mentioned that I was a little concerned that the madwifi drivers were not being shipped with any SuSE products. I quickly found out why madwifi-ng wouldn't be supported: it was very unreliable for me, especially when using it with NetworkManager, which enforces the use of wpa_supplicant. (This seems to be a known problem.) I failed to get madwifi-old working, and quickly gave up on that (my wlan device could never associate with my accesspoint).

I then started scraping the web to find a chipset that had well supported drivers that were included in a stock install of suse, worked with NetworkManager/wpa_supplicant, and supported wpa. That was a couple of frustrating hours :) I was seriously ready to go buy another card, but it seemed the best chipset choices were in cards that were no longer in production. Plus, I had two perfectly working atheros cards... this was getting ridiculous.

Then it dawned on me: why not try using ndiswrapper with my cards? I hadn't had much luck with ndiswrapper a couple of years ago, but now I was starting to get desperate. Turned out that my pcmcia atheros card worked pretty well with my Dell c600. Not real convenient that I had to go find the exact driver from my vendor (as opposed to just using one linux driver), but hey, at least it works and I don't have to go buy another card. I later found out that there's an opensuse wiki page suggesting this same solution: Atheros_ndiswrapper.

Then I tried the ndiswrapper driver on my t42p ibm laptop, which has builtin atheros. This sorta worked, but I would get disconnected every couple of minutes. This wasn't much better than my situation with madwifi-ng. I realized I needed to find out why the madwifi-old driver wasn't working, since this driver had worked flawlessly on Ubuntu for months and months and months. Turns out the solution was this: I got the madwifi-old driver working, but it required that wpa_supplicant be compiled with the headers from madwifi-old, not madwifi-ng.
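For reference, building wpa_supplicant against the madwifi-old headers amounted to flipping a build-time switch in its .config (the source path is hypothetical; the commands are printed rather than executed):

```shell
# wpa_supplicant selects its drivers at compile time; the madwifi driver
# needs the matching madwifi source headers on the include path.
cat <<'EOF'
cd wpa_supplicant
echo 'CONFIG_DRIVER_MADWIFI=y'          >> .config
echo 'CFLAGS += -I/usr/src/madwifi-old' >> .config   # madwifi-old headers, not madwifi-ng
make
EOF
```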
Now, I've finally been using rock solid wireless with wpa and NetworkManager on suse 10.1 without any problems.

Update: Packman packages the madwifi kmp kernel package. This package (along with the stock wpa_supplicant) works wonderfully! Not sure what I did wrong in trying madwifi-ng, but I'm glad that this solution works really well.

Comments:

Simon Geard Says: May 22nd, 2006 at 2:38 pm
I've been putting in a bit of effort in this area myself lately, and you might be glad to know that the madwifi-ng drivers have improved somewhat in the last month or so, to the extent that wpa_supplicant can be used with its generic wireless-extensions support instead of using madwifi-specific code. As such, I find it now works pretty well with NetworkManager. The only remaining problem for me is that they still report signal strength differently from everyone else, so that NM reports a much weaker signal than it actually has. For that reason, I keep the Gnome netstatus applet running as well, since that appears to have workaround code to display a correct signal.

Wade Berrier Says: June 13th, 2006 at 10:04 am
Simon, thanks for the info. Unfortunately, I'm kind of new to this blogging stuff and didn't find your gem of a comment in the midst of the hundred spam comments I had :) I did manage to try packman's madwifi-ng package and that's been working really well. Too bad I didn't notice your comment or else I would have tried it much earlier... [...]



Yay for Zenworks on SuSE RC2!

I previously mentioned that rug and Zenworks in SuSE Beta8 were a great addition, but not quite there yet.

It also wasn’t there in Beta9. Then it occurred to me: the SuSE ‘factory’ is a yum repository. So, I successfully upgraded from beta9 to beta10 with yum. Everything went smoothly.
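In case it's useful, the beta9-to-beta10 jump boiled down to something like this (the baseurl is a placeholder, not the actual factory mirror I used):

```shell
# Point yum at the factory tree (placeholder URL)
cat > /etc/yum.repos.d/factory.repo <<'EOF'
[factory]
name=SUSE Factory
baseurl=http://example.com/pub/suse/factory/
enabled=1
EOF

# Refresh metadata and pull everything up to the factory versions
yum update
```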

Now it was time to try upgrading to factory (which I believe was RC2 at the time). I manually installed libzypp, zmd, and libzypp-zmd-backend from factory. Then I deleted everything from /var/lib/zmd and /var/lib/zypp, as well as /var/cache/zmd.


/etc/init.d/novell-zmd restart

rug sa factory
rug sa factory-e
rug sa --type=zypp packman

rug sub factory
rug sub factory-e
rug sub packman

rug update

Everything went right along. All deps were resolved, all packages were downloaded (1 GB total), and the rpms began installing. zmd quit at about 40% of the installation transaction, but the backend transaction ran to completion. I'm not sure what's going on there.

Anyway, much progress, and I'm counting on rug/zmd being usable. The memory leaks have been fixed, but adding repositories, refreshing, and adding and removing packages are still too slow. It taxes my 2 GHz mobile chip way too much. Hopefully some further improvements will be made.

The cool thing about zmd is that it supports several repository formats:

wberrier@wberrier:~> rug st

Alias    | Name     | Description
---------+----------+--------------------------------------------
yum      | YUM      | A service type for YUM servers
zypp     | ZYPP     | A service type for ZYPP installation source
nu       | NU       | A service type for Novell Update servers
rce      | RCE      | A service type for RCE servers
zenworks | ZENworks | A service type for Novell ZENworks servers
mount    | Mount    | Mount a directory of RPMs

Congrats to the people working on Zenworks and zypp and I’m looking forward to further improvements.


General Conference


There have been some great talks this weekend from the Brethren. The most striking to me have been those about the Atonement and about comfort for the weary.

I've never seen President Monson be so hilarious! Even President Hinckley said something like, "President Monson is a hard act to follow."

President Hinckley boldly denounced racism and encouraged all to be nicer and more generous to everyone.

As with several previous conferences, my in-laws came to visit and it was nice to spend the weekend with them.


Novell Brainshare 2006


This is my official first Mono/work-related blog entry! Let me start with the few pictures that I took. I was able to go to Novell Brainshare at the Salt Palace in Salt Lake City. It was absolutely fantastic! I had a great time and learned a lot about Mono (mostly from listening to the tidbits of information that Paco and 'the Mystery Shopper' always had readily available). It was also nice to get to know some colleagues better, as well as meet lots of people who are excited about Mono. And if they weren't excited when they got to our booth, most of them were excited by the time they left.

My first demo was an interesting experience. I was approached on Monday morning, pretty much right when the Technology Lab opened, by a fellow who started out with, "Hey, uh, what's new in the Mono world?" I went on about how this cool new GUI plugin had just been checked in a few weeks ago, then asked, "Are you familiar with Mono or .NET technologies?" "Yeah, a little." I wrapped up the stetic demo with, "Pretty cool, huh?" In the meantime, Frank (Rego) comes back from his session and says, "Hey, Niel! How's it going?" I look down to find a nametag reading "Niel Bornstein". Wow, was that embarrassing or what? I let him know how tricky that was and he said he had fun being the 'Mystery Shopper'. I ended up bringing my copy of 'Mono: A Developer's Notebook' with me the next day and he signed it.

So, Niel, this is to you: next book you co-author, make sure it's got a picture of you on the back cover. That would really help us poor vulnerable fellows out :) In fact, if I man a booth sometime in the future, I've got a few intro questions up my sleeve now. "Have you heard of Mono?" "Have you used it before?" "Have you written any books on Mono lately?" Niel hung out and helped at the booth for several days. It was great to meet him and to have another person with serious Mono experience there to help answer questions and work the crowd.
Anyway, like others have been blogging about, we did some Winforms demos as well as showed off the new stetic MonoDevelop plugin. I wasn't sure what I should show for demos; I hadn't messed with MD or stetic much, but Lluis integrated things so well that it only took a couple of minutes to put a simple example together. Of course, Dan deserves much credit. It was really easy to show Mono off. I would start out with, "Did you see the SLED [NLD] demo? All of those cool apps Nat and Guy showed off were implemented in Mono!" Then we'd do a simple Gtk# app in Mono with stetic, copy the binary to a win32 box, and run it.

So, Paco is a Mono evangelizing machine! I'd be doing the above demo, and he'd say, "You know… that's cool, but why don't we try that binary on another machine… say my Nokia 770!" We had great fun. We also showed SWF apps on win32 and Linux. And wow, what a difference between and People would ask, "So, how far has Mono come since 1.1.4?" Or, "Will my app work?" "I dunno, let's try it out!"

There were a lot of people there interested in One cool example that I'd never considered was pulling content out of eDirectory. I knew that eDir was fast, scalable, replicatable, etc… but never thought about it until the guys from AppGenie showed me their site ( ) running on Mono using the Novell.Directory.Ldap connector. So then I'd try to answer any questions as best I could. Can't wait for[...]

DIY Projector


I’ve been enjoying my homemade projector for several weeks now and it’s about time to post some pictures.

Projector Screenshots

Some of the pictures are quite blurry, but that's my digital camera's doing. It typically takes good pictures, but they often come out fuzzy when there's a lack of light in the shot.

I first heard of building a projector from Dan Rhimer while living at Wymount on BYU campus. I later saw this article, which showed how to do it with very little construction and easy-to-get parts.

Cheryl and I enjoy watching movies together, and I've always wanted a projector. Being the cheapskate that I am, I jumped aboard the idea. I told my Dad and some family members about what I wanted to do. My first purchase was a flat-panel display at Novell's surplus store in Orem. I bought it as a gamble at $30 without a power supply, and soon bought a universal power supply for LCDs and laptops for another $30. The screen worked pretty well, but I realized that at 18″ it was too big for the surface of an overhead projector.

My Dad loves shopping at places like DI and garage sales and such (surely where I get it from). He found an overhead projector at Another Way for $5. Bingo.
The next piece of the puzzle came when I was visiting Jake Cahoon's office at work. Baha Masoud, a fellow co-worker, had an old monitor that needed new backlight bulbs. Jake's quite a handyman and decided to see what it would take to replace the backlights. After finding out it would be $45, he wasn't sure it was worth it and was going to bag the screen. I happened to show up and told him I was looking for an LCD with a broken backlight, and he didn't object to my acquiring it. I run the screen at 1024×768 because that's the max resolution of the driving laptop, but I believe the LCD can do 1280×1024.
The only missing link now was the bulbs for the overhead. I found them on the net for $5 apiece, rated at 350 watts and lasting 75 hours. (Which, interestingly enough, happens to be about the same cost per hour as commercial projector bulbs.) I was quite impatient one Saturday night and decided to try the setup out with a halogen lamp (the type used for nighttime construction). I had worried whether the image would be bright enough, and I figured that if this lamp wasn't bright enough, nothing would be. I mean, this lamp could practically heat a small home.
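The cost-per-hour claim is easy to sanity-check; the commercial-bulb figures below ($300 for a bulb lasting roughly 3000 hours) are my own ballpark assumption, not from the original notes:

```shell
# $5 overhead bulb rated for 75 hours
awk 'BEGIN { printf "overhead: $%.3f/hour\n", 5 / 75 }'       # about 6.7 cents/hour

# Assumed commercial projector bulb: $300 lasting ~3000 hours
awk 'BEGIN { printf "commercial: $%.3f/hour\n", 300 / 3000 }' # exactly 10 cents/hour
```

So the $5 bulbs really are in the same ballpark, under those assumptions.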

This image ended up being horribly fuzzy and almost indistinguishable. I was quite disappointed, but my ordered bulbs were already in the mail. Oh well, I figured if it didn’t work out I was only out $20.

So, I finally got the bulbs and tried them out. The image looked GREAT!! I was very excited to have my own bigscreen in the comfort of our home.

Cheryl helped me tune the color with methods I learned from this article.
I sometimes notice that the image still isn't bright enough. Luckily, pumping up the brightness does the trick; I'm able to do this with Totem, mplayer, and MythTV. Not bad for $20. Now if I could just figure out where to put it in my house…
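With mplayer, for instance, the brightness bump can be done right from the command line (the filename is just an example, and -brightness only works with video output drivers that support it):

```shell
# Bump brightness at playback time (range is roughly -100..100)
mplayer -brightness 40 movie.avi

# If the video output driver ignores -brightness, the software eq2 filter
# can do the same job (args: gamma:contrast:brightness)
mplayer -vf eq2=1.0:1.0:0.3 movie.avi
```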


HP Laserjet 2100


A little history… after Cheryl and I graduated, our use of the trusty HP Deskjet 842C dropped off sharply. To put it briefly: using an inkjet after it's been sitting for a while is disappointing. My ink dried up, and to print a couple of things I was out almost 40 bucks getting new ink cartridges.

That's fine and dandy, but I realized that this $40 of ink was probably going to dry out as well. That put me on the lookout for an inexpensive laser printer. In fact, I found a Lexmark at DI (one of my favorite places to shop). I didn't end up getting it for one reason or another, and sure enough, the next time I went back it was gone.

I told my friend Andrew about my quest and he had a friend who was selling an HP Laserjet 2100 for $10 (without toner). In fact, he even sent me the search link for a $25 toner from Toner Pirate. I was hoping for a USB printer, but this would certainly do.

So, for about the same price, I got a new printer with a toner cartridge that yields 5000 pages. Compare that with the 600 pages from my $40 of inkjet cartridges (not including the cost of the printer, of course).

You may be saying, “Well, you can’t print in color.” True. For my needs, it’s cheaper to head to Kinko’s for a color print :)

The actual reason for writing this entry: it's a note to self. When using this printer with CUPS, use either the ljet4 or hpijs driver. The recommended pxlmono driver doesn't work. Also, don't turn off the printer while it's printing pages of garbage. Instead, kill CUPS and let the spewing cease on its own.
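A sketch of setting that queue up from the command line; the queue name and device URI are illustrative, and the exact model string for -m depends on which PPDs your CUPS install ships (`lpinfo -m` lists them):

```shell
# Find the ljet4 or hpijs driver entry among the installed PPDs
lpinfo -m | grep -i -e ljet4 -e hpijs

# Create the queue (parallel-attached in this example; use a usb:// URI for USB)
lpadmin -p lj2100 -E -v parallel:/dev/lp0 -m <model-from-lpinfo>

# If a job starts spewing garbage, stop cups rather than power-cycling the printer
/etc/init.d/cups stop
```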

Other noted bonuses of using hpijs: my margins are more accurate, and the dithering is a little fuzzier (which makes pictures look better in greyscale).


Xgl and Compiz fun


I must confess, one of my main reasons for moving my email server to another dedicated box was so that I could try out Xgl and compiz without worrying about hosing my box. I followed this guide. At first, when I tried compiz, I could only see vague, white, shadowish boxes. I saw on this guide that I needed a newer glitz if I had an older Nvidia card.

I must say, this stuff is very impressive! I have a rather old card (GeForce4 MX 440 AGP), but after I used the newer glitz package in Dapper, things went right along! I've posted my screenshots here.
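For reference, the guide's setup boiled down to roughly this (the flags and plugin list varied between Xgl snapshots, so treat it as a sketch rather than the exact commands I ran):

```shell
# Start Xgl on a second display with GLX-accelerated compositing
Xgl :1 -ac -accel glx:pbuffer -accel xv:pbuffer &

# On that display, run compiz with a handful of plugins plus the GNOME decorator
DISPLAY=:1 compiz decoration wobbly fade minimize cube rotate move resize place switcher &
DISPLAY=:1 gnome-window-decorator &
```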

Video didn't work quite as well as I'd hoped. Xv output didn't work at all (it froze), but gl2 or x11 with software zooming worked as long as I wasn't doing any cool compiz effects :) There are also some other minor issues, like middle-clicking a title bar not dropping the window behind all the others.

I'm not quite ready to run Dapper yet, as it's been locking up my computer quite frequently. I'm getting ready to downgrade back to Breezy, and I'm already feeling nostalgic. Guess I'll have to try it on my laptop at work with the fglrx driver.