Subscribe: Random thoughts and experiences with Ubuntu
http://handypenguin.blogspot.com/feeds/posts/default?alt=rss
Language: English
Tags:
build  community  free software  free  irc  kubernetes  linux  open source  open  python  services  software  source  ubuntu 

Random thoughts and experiences with Linux



Just a place to post random experiences, thoughts, and ideas about Linux.



Last Build Date: Mon, 11 Dec 2017 03:12:00 +0000

 



Cloud Foundry and Kubernetes for Beginners

Wed, 09 Aug 2017 08:36:00 +0000

Cloud Foundry and Kubernetes are probably the most prominent technologies for cloud infrastructure development. They have very different sets of goals, and as such they follow significantly different solution design approaches.

Cloud Foundry is a traditional Platform-as-a-Service technology, with a specific design orientation towards enterprise-scale resource and privilege management. It follows a top-down approach, where your primary component is a CF "cloud" instance. Within a CF cloud you create organizations and spaces, which are bound to resource quota plans. Quota plans include both computing resources (CPU/RAM/instances) and external service resources (e.g. database storage). CF users are assigned to organizations and can deploy/monitor their applications based on their roles. There is a list of CF-supported development languages/frameworks, called buildpacks. Developers and release managers can deploy and monitor their applications using the Cloud Foundry command line client. CF application instances run on Linux containers, so on a CF platform you get the same level of scalability/isolation that you can find on most containerized application platforms. CF makes a clear distinction between applications and services: in CF parlance, a service is an abstract resource that can be instantiated and bound to applications. Services are available from a CF service catalog (usually named the "marketplace" on public clouds); examples of CF services are object storage, SQL databases, NoSQL databases, big data / deep learning APIs, messaging, etc. There are many CF-powered PaaS providers; as an IBMer I am most familiar with IBM's offering, BlueMix. Bluemix provides a very large and diverse catalog of services, some of which rely on IBM-exclusive technology. In any case, Cloud Foundry is an open source project, which means you can deploy your own CF instance, exploiting your existing infrastructure and adapting services to your requirements.
Kubernetes is an application container orchestration technology, with a specific design orientation towards application container management and integration. It follows a bottom-up approach, where your primary component is the "pod": a group of one or more containers that can be deployed into a Kubernetes cluster. Pods are most commonly composed from Docker images. There is no default organization structure in a Kubernetes cluster; in order to achieve resource control at an organization level, you will need to set up Kubernetes namespaces with resource quotas and roles. Developers and release managers (who can have namespace-bound roles) can deploy and monitor their container images. There is no Kubernetes-specific list of images for application language/framework support; you will need to select, deploy, and compose the pod from images bundling the required base OS image, SDK, and applications. Kubernetes does not make an explicit distinction between applications and services: a Kubernetes pod can be an application front end (e.g. Node.js), a back end (e.g. PostgreSQL), or both. A Kubernetes service is a network-level abstraction, used to expose a TCP service from a container externally. There are many Kubernetes distributions and service providers, and there are also several PaaS solutions (e.g. Red Hat's OpenShift) built on top of Kubernetes. IBM is also on the Kubernetes train on its cloud platform: Kubernetes clusters are available as a CF service on BlueMix. Kubernetes is also an open source project; you can try it or build your own infrastructure.

Roundup

Cloud Foundry is a Platform-as-a-Service with an explicit organization structure and resource management control system. CF provides officially supported SDKs; services are available at a different level of abstraction, and service instances can be created and bound to applications. CF application instances run within a self-healing, elastic containerized platform. Kubernetes is a container orchestration platform, capable of running service[...]
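The pod-centric model described above can be sketched as plain data. Below is a minimal Pod manifest in the shape Kubernetes expects, built as a Python dict; the names ("web", "nginx:1.25") are illustrative assumptions, not something from this post.

```python
# Minimal sketch of a Kubernetes v1 Pod manifest, expressed as a Python dict.
# The pod name and container image are illustrative assumptions.
def make_pod(name, image, namespace="default"):
    """Return a dict in the shape of a v1 Pod manifest."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            # A pod is a group of one or more containers; here just one.
            "containers": [
                {"name": name, "image": image},
            ]
        },
    }

pod = make_pod("web", "nginx:1.25")
```

Such a manifest is what you would feed to the cluster (typically serialized as YAML) to get the pod scheduled.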



When m(IRC) and ANSI C were popular

Wed, 02 Aug 2017 21:50:00 +0000

It was 2005, and I was amazed by internet chats, both as a user and as a computer programming enthusiast. mIRC, an Internet Relay Chat (IRC) client, was probably the most popular chat app. For some young people at that time, being "online" was not merely about having an internet connection; it was about being online on mIRC. Sometimes people actually scheduled to be "online" together. Having a continuous internet connection at home was still a luxury for many.

As with many early internet services and related software, installing and managing an IRC network was a complex activity; as such, most IRC chat networks were managed by large university groups and internet service providers. This was where I got in: improving the server-side software, making it easier and more flexible to use for everybody.

IRC chat networks provided both chat rooms and private messaging. Users were identified by their chosen nickname, and the chat rooms (called IRC channels) had moderation features. IRC servers kept all of their user and channel information only in memory; when servers were restarted, all this information was lost. To overcome this, many IRC networks implemented IRC "registration" services. These services worked as "robots" which granted control over nicknames and chat rooms, keeping that data in a persistent database. These services were also frequently extended with extra features like offline messaging.

I believed that there was great potential for more advanced IRC services, with web/mail integration features which I was not able to find in the existing software. That was when I decided to develop IRC services software from scratch.

I didn't keep any record of the initial development timeline, and I was not familiar with any open source version control system at the time. "PTlink IRC Services 3" was released around June 2005, containing around 20k lines of ANSI C code.

It featured a C library providing an event-driven API for all the IRC server protocol handling. For example, for an "on connect" message service, you would only need to bind your C function to the NEW_USER event, and from your function you would use irc_SendNotice() to deliver the message.
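The original library was ANSI C; the following is only a rough Python sketch of the same event-binding idea. The NEW_USER event name comes from the post, while the dispatcher and the `outbox` stand-in for the server connection are invented for illustration.

```python
# Toy event dispatcher mirroring the NEW_USER binding described above.
# Everything except the NEW_USER event name is an illustrative assumption.
NEW_USER = "NEW_USER"

_handlers = {}   # event name -> list of bound callbacks
outbox = []      # stands in for notices sent back to the IRC server

def bind(event, func):
    """Register func to be called when event fires."""
    _handlers.setdefault(event, []).append(func)

def fire(event, *args):
    """Invoke every callback bound to event."""
    for func in _handlers.get(event, []):
        func(*args)

def irc_send_notice(nick, text):
    """Queue a NOTICE for delivery (here: just record it)."""
    outbox.append((nick, text))

def on_connect(nick):
    irc_send_notice(nick, "Welcome to the network!")

bind(NEW_USER, on_connect)
fire(NEW_USER, "joe")
```

The appeal of this design is that a service module never parses the IRC protocol itself; it only reacts to named events.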

Services were provided as a set of modules. These modules were implemented as shared object libraries that could be dynamically loaded and reloaded; this is something that you currently find in most software, as modules/plugins support.

Last but not least, the data store back-end was MySQL, while most IRC services were still using file based custom formats. This also allowed the development of a minimal web interface.

In 2006 the development was halted, mostly because I lost ownership of the ptlink.net domain to which the software was bound, and due to the declining popularity of IRC.

It is a bit sad when you spend a few hundred hours of development, especially on open source, and it reaches a dead end. Nevertheless, developing an event-driven C library with a modular IRC services integrator was a very challenging and exciting personal experience.




Adwaita

Sun, 09 Apr 2017 10:45:00 +0000

This weekend I have been playing with an open source project, the Processing language, which seems to be a great tool for computer graphics design/development. It provides a high-level language, abstracting you from the more low-level technical aspects of graphics programming.

I have been looking into how to use it with Python as the development language, and I found two options:

  1. Python Mode for Processing, a language support add-on for Processing's IDE; it works as a wrapper for the Processing Java library and is based on Jython,
  2. the pyprocessing PyPI package, a regular Python package implementing the Processing language, using OpenGL and Pyglet for the rendering


I preferred to go with the PyPI package in order to keep developing in a Python ecosystem without Java dependencies. Unfortunately, I found that pyprocessing development has been abandoned. I found a GitHub repository with the last commit from 5 years ago, which actually fails to install/run because of a missing one-line import statement.

So here I was again, forking yet another repository because I found a project which I believe has great potential but which, for whatever reason, is no longer maintained. Since this is becoming a common case (creating "keep it working" repositories), I gave it some more thought.

There are many abandoned yet useful/interesting open source projects. This most commonly happens when projects are started and developed by a single person, sometimes just as prototypes, and at a given point in time the author loses the interest or capacity to maintain them. There are a lot of people with great technical skills and interest in software development, but very limited community/team-building skills. I have been there.

This is the reason why I have decided to start a new project, named "Adwaita", whose goal is to maintain open source projects' vitality. The primary focus will be on keeping "poorly maintained" open source projects in better condition to be adopted and driven by project-specific communities.

In an "inception" kick-off style, Adwaita will be the first project managed by the Adwaita task force.



Testing OpenSUSE Tumbleweed

Sat, 18 Feb 2017 09:37:00 +0000

It has been a long time since I tested a new distro, so here I am again, now trying openSUSE Tumbleweed. I never tried openSUSE for more than a few days, so hopefully this time I will build my own opinion. I am going for Tumbleweed, the rolling release, since I am a bleeding-edge guy.

Install

I have selected the NET install iso, because I have a decent internet connection, and I like to have a clean desktop, only installing software as needed. The iso is available from http://download.opensuse.org/tumbleweed/iso/ .

I have created a bootable usb with the following procedure:
https://en.opensuse.org/SDB:Create_a_Live_USB_stick_using_Windows#Using_ImageUSB

Post Install Issues

After I installed a third SSD in my desktop computer two years ago, installing any OS resulted in a broken boot system. It was no different with Tumbleweed; I just ended up on a GRUB2 "No such device" error. However, it was quite easy to fix: openSUSE's media has a "Boot from installed system" option which detects an existing install and boots from it. It worked like a charm. Having some technical background on the issue, once I had a fully booted and functional system I installed GRUB to the MBR of all three disks, and it was done. A reboot presented me with a nice graphical boot menu to select the system (openSUSE Tumbleweed or Windows 10).

I have switched from GNOME (2.0) to Cinnamon over the last couple of years; unfortunately, the installer does not offer a Cinnamon install type. I selected "Minimal X Environment" so that I could install Cinnamon from the repositories later, and that got me into another issue. The Minimal X install provides IceWM and YaST (a nice system config management tool); however, while attempting to configure the WiFi network, I found that the system was missing the core packages required for WiFi connectivity (iw; wpa_supplicant). This was kind of blocking, since I don't have a wired network. I had to boot into Windows to fetch the packages from:

I have filed a bug report for this issue:

I am currently finishing this blog post from Tumbleweed; hopefully I will report on a wider experience during the next week :)






Creating a portable Python + VSphere Python SDK for Windows

Thu, 25 Aug 2016 09:40:00 +0000

If you are working with Virtual Center in an enterprise environment, there are high chances that your VC clients are Windows systems running on a secure network (no network access). This article will let you build a portable Python environment with the vSphere Python SDK that you can just copy and use from your vCenter client systems.

Get the required packages on a system having network access:
  • lessmsi from https://github.com/activescott/lessmsi/releases/latest
  • python*.msi from https://www.python.org/downloads/windows/
  • Python packages from pip: six-1.10.0-py2.py3-none-any.whl, requests-2.10.0-py2.py3-none-any.whl, suds-0.4.tar.gz, pyvmomi-6.0.0.2016.6.tar.gz

Create a .bat file that creates the portable Python directory:

  lessmsi-v1.4\lessmsi x python-2.7.12.amd64.msi python\
  cd python\SourceDir
  python -m ensurepip
  Scripts\pip install ../../six-1.10.0-py2.py3-none-any.whl
  Scripts\pip install ../../requests-2.10.0-py2.py3-none-any.whl
  Scripts\pip install ../../suds-0.4.tar.gz
  Scripts\pip install ../../pyvmomi-6.0.0.2016.6.tar.gz

Run it, and you will get your portable content in the "python" folder [...]



More thinking on software bundles for Linux

Wed, 02 Nov 2011 00:18:00 +0000

The post Rethinking the Linux distribution made me revisit some ideas I had in the past, trying to address what are, in my opinion, major limitations of the current main packaging systems:
  • No support for multiple versions of the same software
  • No support for rollbacks
A software bundle composed of the application and all of its "non-core" dependencies can also add some other benefits:
  • Cross Linux distribution delivery
  • Fine grain control of libraries and options used by an application
  • Reduced complexity with the removal of dependencies management
The disadvantages are:
  • Increased disk and RAM usage from software containing different versions of common libraries
  • Applying security fixes to dependencies requires re-creating/re-distributing every affected bundle
Possible approach for implementation:

Compiling
Adapt an existing source base build system like Arch's "makepkg", with the following changes:
  • Run time prefix must be set to /opt/bbundle
  • Build definitions for dependencies must be contained in the master build definition (this will lead to build definition redundancy across bundles, but will remove the risk of breaking builds by sharing dependency build rules)

Bundling
The bundle file format should be a commonly used archive format; since tar does not provide indexing, .zip is a better option. Having an indexed archive will allow reducing download sizes by inspecting the bundle contents prior to the download and skipping the download of common files already found in installed bundles. In order to save on-disk space, the bundle installer should check for identical files across bundles and use hard links instead of duplicating files.
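The hard-link deduplication step could look like the following sketch, assuming bundles are plain directory trees on the same filesystem; the function names are mine, not part of any proposed tool.

```python
# Sketch of the install-time dedup step: replace files whose content is
# identical to an already-seen copy with hard links. Names are illustrative.
import hashlib
import os

def file_digest(path):
    """Content hash used to detect identical files."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def dedup(bundles_root, seen=None):
    """Hard-link duplicate files under bundles_root; return the digest map."""
    if seen is None:
        seen = {}  # digest -> first path holding that content
    for root, _dirs, files in os.walk(bundles_root):
        for name in files:
            path = os.path.join(root, name)
            digest = file_digest(path)
            if digest in seen:
                os.remove(path)
                os.link(seen[digest], path)  # hard link, no duplicated data
            else:
                seen[digest] = path
    return seen
```

Hard links only work within one filesystem, so in practice the bundle store would need to live on a single volume.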

Installing
Bundle installation should be as simple as extracting the bundle archive into /usr/local/bbundle/bundle_name. A watching service must identify .desktop files and other exportable resources and make them available from the host desktop environment.
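The exporter part of that watching service can be sketched as a scan-and-symlink pass; the directory layout and function name below are assumptions for illustration.

```python
# Sketch of the exporter: find .desktop entries under the bundles root and
# symlink them into the host applications directory. Paths are assumptions.
import os

def export_desktop_files(bundles_root, applications_dir):
    """Symlink every .desktop file under bundles_root into applications_dir."""
    os.makedirs(applications_dir, exist_ok=True)
    exported = []
    for root, _dirs, files in os.walk(bundles_root):
        for name in files:
            if name.endswith(".desktop"):
                src = os.path.join(root, name)
                dst = os.path.join(applications_dir, name)
                if not os.path.islink(dst):
                    os.symlink(src, dst)  # expose the entry to the desktop
                exported.append(dst)
    return exported
```

A real service would also watch for removals (e.g. via inotify) and drop stale links when a bundle is uninstalled.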



Life changes

Thu, 12 May 2011 13:40:00 +0000

It has been more than a month since I left the Ubuntu community and started exploring other Linux distros. Meanwhile, increasing personal concerns, added to my country's economic situation, prompted me to re-evaluate where I am investing my lifetime.

In short, I need to engage in profitable activities otherwise I may fail to support my family.

While I will always be a strong FOSS supporter, because I believe in its values, I no longer have the time for significant involvement in non-profitable projects.
I will continue using, and consequently being involved in, the Linux ecosystem because it allows me to be more productive, both at my job and in other projects I may get involved with.



Building RPMs vs Building DEBs

Mon, 14 Mar 2011 15:01:00 +0000

Having extensive experience with Debian package building, it's refreshing to try something else. Last week I learned the basics of RPM package building.
Here are the differences I have found so far and my opinions about them.

The RPM .spec file contains both the package metadata (description, dependencies, etc.) and the compile rules, while on DEB the data is split into different files. I remember that in the beginning it was hard to understand the purpose of all those debian/* files; I found .spec files easier to understand.

The RPM .spec allows conditional building: during build, the target release can be used to dynamically adjust build flags, dependencies, etc. While you can achieve this on Debian using some auto-generation mechanism (debian/control.in), it is not naturally integrated into the build system; debian/* contains metadata and rules for a specific target system.

Not so important, but a nice feature, is the support for translated descriptions/summaries in RPM packages.



Some differences between Fedora and Ubuntu

Sat, 05 Mar 2011 11:00:00 +0000

I have already noted a few technical differences between Fedora and Ubuntu which I am going to comment on.

/tmp cleanup
Fedora does not automatically remove /tmp contents on reboot. I prefer Ubuntu's behavior: applications should not rely on /tmp contents across reboots, and regular users should not be working in /tmp. If there is no other automated cleanup mechanism (I did not check yet), in the long run the user will get a full root file system.

Repository information cache
I don't have a YUM technical background, so please excuse me if I write something terribly wrong here. From a user perspective, I have noted that yum does not have an explicit cache mechanism; you don't need to explicitly update the cache. The good side is that it automatically gets the required information when a new repository is added, freeing the user from a repetitive action. The bad side is that it may introduce some network/time overhead during package management operations.

Software update policy
I did not read Fedora's update policy yet, but I have noted that they provide regular release upgrades for some software; pidgin was updated to 2.7.10 from the regular updates repository.[...]



Impressions from Linux Mint 10 Main Edition

Fri, 04 Mar 2011 23:27:00 +0000

Now that I am confident in the Fedora install, I could afford to use my other partition and try Linux Mint 10, as suggested by a friend.

The out-of-the-box visual experience was one of the best I have had so far with a Linux distribution. The menu (mintmenu) seems highly inspired by MS Windows menus; from my reading it's a fork of the SLAB menu. I really loved the easy navigation and search capability.
Linux Mint 10 is based on and compatible with Ubuntu 10.10; the default configuration points to the Linux Mint repository plus the Ubuntu archives for the usual packages.

Besides the menu, they provide their own set of tools; you can easily identify them with ls /usr/bin/mint*. Some of them are just wrappers or tiny tools, but it shows they are working on more than just cosmetics.

I did not find any documentation about their decision making process or governance in general, but I have sent an email asking for information.

They do care about community feedback, judging from the running poll.
There is a Debian-based version (which I have not tested yet), and it seems they will be deciding about switching to Debian for the other flavors.

I will keep an eye on it, it may be a nice project to join.



First day with Fedora 14

Fri, 04 Mar 2011 08:58:00 +0000

I am starting my adventure searching for a new FOSS project to get involved with. I still think that a Linux distribution is one of the most interesting types of projects. We have so much Free Software that is not yet reaching those who would benefit from it.

Yesterday I switched from Ubuntu 10.10 to Fedora 14. I chose to try Fedora because it has an open governance model with clear leadership. While the sponsor (Red Hat) clearly has special capacities, at the highest level the project is managed by an executive board, composed of a mix of RH-appointed and community-elected members.

Now back to my first-day experience. I was afraid that it would be a hard one; I use Linux on my primary workstation, so it really needs to work.
The install went smoothly. I did find a bug in a specific case of setting up a user with an existing home dir containing invalid symbolic links; nothing serious.

I was able to install all my job-required tools using Fedora 64-bit thanks to the multiarch support, which allows installing both 32- and 64-bit package versions. This is something I could not achieve with Ubuntu 64-bit; ia32-libs is not sufficient for my case.
All the software that I needed was available from the repositories or as an .rpm from upstream: Filezilla, KeepassX, Zim, X-Chat, Skype, VirtualBox, gnome-do, tsocks, eclipse, cairo-dock, geany, dropbox, pidgin.

During a full work day I have found no significant usability differences between Fedora and Ubuntu.

The only issue I had so far was related to yum repository errors; this is one of my next steps, to understand Fedora repository types and package building.

I will also try to understand if the distribution itself is effectively governed as documented.



Stepping down considerately

Wed, 02 Mar 2011 21:36:00 +0000

This is the right time to apply the last guideline from Ubuntu's Code of Conduct.

I started as an Ubuntu user in 2005. I found it a promising project, mostly because it was aimed at "human" users, while most similar projects still had a greater focus on developers or development-oriented aspects.
Getting involved was easy: the developers could be found on IRC, some of them friendlier than others, but always there, a point of connection with the community.
As soon as I had some know-how, I started participating in the forums; each question was an opportunity for teaching, learning, or improving. It was a great experience.

During this stage I found that a lot of questions were about how to get a specific version of some software, or came from people failing to do it. The most frequent answer was teaching how to build from source. That did not seem good to me.
We were promoting all these great things about Free Software, but we were unable to deliver the latest version meeting a particular need without asking the "human" user to acquire application-building skills?

I tried to engage with the packaging (MOTU) team; it just didn't work for me. I was too eager to cover this need, which was only partially addressed by the backports project. I did not find the process appealing, and I had no idea how to improve it; I just had minimal packaging skills.
People were not asking for the proper package; they just needed one that worked without disrupting their system (which they frequently did, by compiling or installing packages from other releases). The GetDeb project was set up, and it delivers packages to thousands of users.

Lately I have mostly participated on AskUbuntu.com, which in my opinion is Ubuntu's most valuable free support center.

I will be looking for other Free Software collaboration opportunities, with strong leadership that practices open governance and a decision-making capacity that comes from transparency, straight discussion, and communication.

Thanks to Mark Shuttleworth for setting up a great project and paying the salaries to so many brilliant people.
Thanks to all the Ubuntu users and developers for helping, and for letting me help build a Free Software and Open Source solution.



Ubuntu Community Council Experience

Tue, 01 Mar 2011 23:00:00 +0000

As reported in a previous post, I asked the Ubuntu Community Council for the definition of a control process for non-technically-driven changes.
Today I attended the CC meeting, and I would like to share my experience.

I was able to express my concerns, and I got lengthy feedback from Mark. As I understood his position, and because no one was able to identify actions worth taking, everything is fine with Ubuntu change management. If you need more information about changes, the recommended process is just to ask Mark; he will point you to the right person.

My feeling about the CC board per se is that, except for Mark, who had Canonical/himself all over his conversation, the Ubuntu community was properly represented.
However, because the CC lacks any executive capacity, the meeting felt mostly like a social event to take the pulse of the Ubuntu community, not an event for effective discussion of ideas.

Logs available at: http://irclogs.ubuntu.com/2011/03/01/%23ubuntu-meeting.html



Free and Open Source Gaming Challenge Idea

Sun, 27 Feb 2011 22:46:00 +0000

Hello,
sometimes I find myself searching for games at playdeb, for my friends, family kids, or just for my own entertainment.
I believe that there is still a lack of awareness about Free and Open Source Games and we can do something to improve it.

The idea is to run a FOS gaming challenge. Unlike traditional network-based, score-based competitions, this challenge would be about achieving a per-game predefined objective for the largest possible number of FOS games. The primary goal is to reward gaming diversity, not expertise.

The idea is presented at:
https://docs.google.com/present/view?id=dnnmb2s_59f64pmhdv

If you like it and have a facebook account, check our facebook page:

If you want to get involved in the discussion and eventual organization of the event please subscribe to the mailing list:
https://lists.sourceforge.net/lists/listinfo/fosgaming-challenge

Please share this idea with those you believe would be interested in participating. If we can gather the resources and sufficient interest, this is likely to become more than just an idea.

Thanks to the GetDeb teammates who helped refine the idea and build up the presentation.



Non technical driven changes to upstream packages

Sun, 27 Feb 2011 22:42:00 +0000

Hello,
today I sent this request to the Ubuntu Community Council, which I believe to be in the interest of the Ubuntu community:

In light of the present Banshee default configuration change, but also taking into account past events (bug 642839), I would like to request the definition and implementation of a control process for non-technically-driven changes.

As far as I understand, such a process is already in place for technical changes, covering mostly stability, security, and miscellaneous integration-driven changes, with review/authorization being granted by the Ubuntu Technical Board when required.

In the absence of a similar review/authorization request process for non-technical changes, I am afraid there is a high risk of changes being introduced without proper assessment and communication.

This request does not seek in any way to limit or condition Canonical's business authority over the Ubuntu trademark and product management; however, such authority must be used in a way which is transparent to the Ubuntu community.



Software Center validating packages quality

Sat, 26 Feb 2011 23:46:00 +0000

Today I found bug 712377; it seems that Software Center is going to check package quality and refuse to install packages that fail the check.

This change is likely to affect many third parties. Does anyone know if it is planned to be enabled on Natty, and where we can find the change specification/discussion?



GetDeb: New build server

Sat, 26 Feb 2011 10:00:00 +0000

The package build server was moved to a new infrastructure; the resource increase decreased the build time significantly.

We have also developed a minimalist report with the names and logs of the recently built packages; you can check it at http://build.getdeb.net/ .



Dear Ubuntu Community Manager

Thu, 24 Feb 2011 22:28:00 +0000

Jono,
could you please provide us some insightful information about what REALLY happened regarding the Banshee music store's default configuration?

We were first informed that there was a negotiation attempt between Canonical and the Banshee developers, which ended with Canonical's terms being rejected.

Now we have your communication, which attributes mishandling responsibility to Cristian Parrino; however, it does not provide a clear understanding of what happened.

Is this new plan a proposal to the Banshee core developers? Was it accepted by both parties?
The perception that the Ubuntu community (which Canonical is part of) followed up failed negotiations by communicating a unilateral plan with different terms gives a sense of questionable intentions.

The free software license of Banshee grants Canonical the right to make the announced changes without involving the Banshee core developers. How was that right used? Did Canonical truly consider establishing mutually accepted terms, or was this just mishandling, setting up a negotiation that was never intended?

Thanks in advance.



Mono and the Open Source Cannibalism

Tue, 15 Feb 2011 13:30:00 +0000

The recent post from Miguel de Icaza demonstrates an incredible sense of business opportunity: Mono delivers what a mobile developer should care about, be everywhere.

Why now ? Has mono reached some significant technical milestone this week ?
Not really, but right now many mobile open source developers, supporters, and business partners are concerned; let's remind them that Mono is here.

Who cares about the people involved in projects like Qt and Meego? They are just a bunch of losers, they are mobile developers so they should be using Mono anyway.



Does Canonical support help?

Tue, 04 Jan 2011 17:17:00 +0000

If you have subscribed to bug 439448, you are probably wondering about the effectiveness, or even usefulness, of bug reporting.
It is a serious usability bug, present in an LTS release, and one year after being reported it is not yet clear whether it is Ubuntu-specific, or which exact component causes it. Because it affects many, mostly non-experienced users, those more than 300 comments on the bug are mostly wild guesses about causes and workarounds; digging through them to find relevant information would be an expensive and futile exercise.

Do you believe that subscribing Canonical's desktop support services would help in cases like this?

If the company can win where the community fails, maybe we can set up a community economic effort to get company-paid support.



Why I am still supporting Free Software?

Mon, 03 Jan 2011 17:36:00 +0000

Today I was debating with a friend the relevance of Free Software. He pointed out that, at the current development rate, it is very unlikely that Linux (the most common Free Software subject) will have a significant end-user market share in 50 years. I reminded him that Linux-powered devices already hold a significant mobile market share. He noted that most mobile users do not care or do not know that they use a Free Software-powered mobile phone; I had to agree with this.

Later today, while providing some answers at askubuntu.com, I kept this debate in my head and asked myself: why am I still supporting Free Software?

When I started getting involved in free software development, about 15 years ago, I was a young programmer wannabe, eager to learn this whole bunch of languages, protocols, libraries, etc. As if that wasn't good enough, I could even get help and help other people, which I always loved to do. I never managed to get a job with free software/open source (and probably never will), but despite having a good job, I always felt that working and programming with free software was closest to what I love doing: knowledge sharing.

Today I no longer do programming, except for a few improvements at getdeb and some scripting at my job; when I look at code these days, it is mostly to identify a problem or feature. For me, (open source) code has lost the magic it had a few years ago. Thanks to my beloved wife, daughter, and friends, I no longer have the required free time or desire to learn and work on what is required to fix bugs or develop new features. I have lost most of the capacity to use one of Free Software's fundamental freedoms, "The freedom to study how the program works, and change it to make it do what you wish (freedom 1)". I can still do many things with the code, but no longer the ones I wish.

Why am I still here, an Ubuntu member, supporting Free Software yet economically dependent on and surrounded by commercial/closed source software?

I have assimilated the values of Free Software, without the radicalism of some of its activists.
I believe that the ability to keep and expand such freedom is still more important than using it.
I do not have the social skills usually required to influence or change minds, but I am sure I can reach others in ways that demonstrate the values of Free Software, which are hard to pass on, especially to most people, who are not developers.

Happy New Year 2011



Ubuntu Bug Fix Wishes for 2011

Thu, 30 Dec 2010 22:24:00 +0000

I would like to see the following bugs fixed during 2011: the first randomly prevents users from logging off, and the second randomly presents an "ugly" unexpected appearance.

https://bugs.launchpad.net/ubuntu/lucid/+source/gnome-panel/+bug/439448
https://bugs.launchpad.net/ubuntu/+source/gnome-settings-daemon/+bug/574296

Both are responsible for a disruptive desktop experience for many users. I also wish that the effort for the new "consistent user experience for desktop" does not maintain or increase our current inability to fix such severe open source problems.



GetDeb mirror pool status

Thu, 11 Nov 2010 23:23:00 +0000

I was able to identify and fix the Python threading deadlock problem; mirror-selector is fully functional now.

I have also added some statistics which allow us to check mirror health and traffic distribution; you can check it at:
http://archive.getdeb.net/status/



Help with Python threading deadlock

Tue, 09 Nov 2010 13:52:00 +0000

Hello,
some months ago I have started developing the mirror-selector daemon.
The main work is done; it is capable of running for a few hours serving some thousands of requests, but it then gets into a thread deadlock scenario (the web server stops responding, and gdb shows all the threads waiting on a semaphore lock).
This was my first project using multithreading in Python, and I am having a bad time finding the bug. I have recently introduced some debug code that I hope will print the stack trace for every thread and give me a hint about the cause. It is most likely related to Queue management.
The mirror-selector code is not that complex; it's available from bzr ("bzr branch lp:mirror-selector"). If you are experienced with Python threading and can spare a few minutes reviewing the code for the possible cause, it would be helpful.
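The per-thread stack trace idea mentioned above can be done in CPython with sys._current_frames(); this is only a minimal sketch (the function name is mine), not the actual debug code in mirror-selector.

```python
# Dump a stack trace for every live thread, the debugging aid described
# above, using CPython's sys._current_frames().
import sys
import threading
import traceback

def dump_all_threads():
    """Return a formatted stack trace for each running thread."""
    names = {t.ident: t.name for t in threading.enumerate()}
    dumps = []
    for ident, frame in sys._current_frames().items():
        header = "Thread %s (%s)" % (ident, names.get(ident, "unknown"))
        dumps.append(header + "\n" + "".join(traceback.format_stack(frame)))
    return dumps

for dump in dump_all_threads():
    print(dump)
```

Hooking something like this to a signal handler (e.g. SIGUSR2) lets you inspect where every thread is blocked once the deadlock occurs, without attaching gdb.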

Thanks



GetDeb/PlayDeb repositories for Ubuntu 10.10 are now open

Mon, 11 Oct 2010 20:14:00 +0000

We had another great Ubuntu release. The GetDeb/PlayDeb repository setup for 10.10 is done, and packages should start landing in the next few days. For those who prefer to stay on the LTS release, we will keep updating the packages for Lucid for at least the current release.