Last Build Date: Sun, 12 Apr 2015 14:29:45 GMT
Sun, 12 Apr 2015 14:29:45 GMT
The mini-Debconf Lyon 2015, in addition to being a great opportunity to meet both friendly and new faces, was also the occasion for me to update and enrich the GNOME for system administrators course.
Tue, 18 Nov 2014 10:00:33 GMT
Disclaimer: I’m not used to writing personal stuff on Debian channels. However, there is nothing new here for those who know me from other public channels.
Thu, 13 Nov 2014 08:45:46 GMT
(image)
Fri, 15 Feb 2013 16:11:18 GMT
Following the DPL game call for players, here are my nominations for the fantastic four (in alphabetical order):
These are four randomly selected people among those who share an understanding of the Debian community, a sense of leadership, and the ability to drive people forward with solutions.
Thu, 07 Feb 2013 11:33:53 GMT
(image)
Tue, 22 Jan 2013 14:31:52 GMT
GNOME 3.4 for Debian wheezy is shaping up quite well. A handful of bugs remain to be fixed, but we are now in a polishing phase, as expected given the freeze status.
With upstream introducing heavy changes in new versions, it is time to think of what will happen with GNOME when we introduce version 3.8 in unstable. Namely, there are two categories of changes that have a heavy impact on Debian:
Upstream is not hostile to people working on making their modules compatible with these setups (non-3D, non-systemd). However, there is a limit to what the Debian GNOME team can do, and people have to make choices.
The consensus in the Debian GNOME team is to focus the extra work we can provide on the fallback code. We are already in touch with other distributions and people who are interested in keeping the differences with upstream GNOME minimal. Our common goal is to be able to provide a GNOME installation for all Linux systems, with or without 3D.
However, none of us is willing to spend time on getting GNOME to work without systemd. We will not work actively against it either, but some components will certainly recommend systemd, and the functionality with other init systems will be degraded. So if people want to keep GNOME fully working on non-Linux systems, now is the time to start hacking on the missing pieces for this to work. For the time being, it does not look infeasible – although we don’t know what jessie will be made of.
Fri, 30 Nov 2012 11:48:09 GMT
Have you ever wondered about one of these questions?
For those who weren’t at Mini-DebConf Paris 2012 last week, let me share here again the slides:
These questions, and many others, will find the beginning of an answer here. At the very least, I hope to show people leads on where to find relevant information for their needs about administrating GNOME systems.
Thu, 22 Nov 2012 10:51:10 GMT
Today I tried to translate a German sentence posted by mistake on an English IRC channel.
For that, I used Google Translate.
(image)
I think there is a message here about what country KDE comes from…
Fri, 04 Nov 2011 17:03:56 GMT
So, there is some history of organisations doing a poor job at managing security bugs.
We saw the “This is not really a security hole” jokes just to avoid having bad statistics on the front page. We saw the “OMFG you must update to the latest version RIGHT NOW and no I’m not telling why” panic.
We still frequently see security fixes hidden in unrelated public commits, just to make them harder to backport for distributors.
But really, there is absolutely no match for that. Kudos for setting a new standard in the worst way of dealing with security issues, guys.
Update: one of the developers has started insulting a pair of professional IT security experts who came and tried to educate him. Awesome reading, don’t forget the popcorn.
Thu, 03 Nov 2011 19:05:00 GMT
When George Papandreou announced his intention to submit the European “help” program to the approbation of the Greek people, I don’t know whether he wanted to scare people, but man, he really achieved something. From Wall Street to the Bundestag, through the Élysée Palace, they are all in a state of advanced panic. There’s a joke that’s been circulating since then: for next Hallowe’en, disguise yourself as a referendum.
Yes, these guys are afraid. Afraid of the people. They are afraid because it is now clear that their interests are not the same as the interests of the people. And what do you do when you are afraid? Well, you find yourself some way out, often by lying.
And indeed, Mrs Merkel and Mr Sarkozy have been repeating over and over something that has then been repeated over and over by most so-called journalists: that the Greeks can only choose between two endings:
That’s it: Mrs Merkel and Mr Sarkozy are outrageous liars.
There is another option for Greek people:
3. they don’t pay their debts to banks and rich people, and they stay in the Euro zone.

It’s as simple as that: nothing in the European treaties can force a country to leave the Euro zone. And nothing in these treaties can force a country to honor its bonds. Greece is a sovereign state and, as such, can choose not to honor its sovereign debt. And it can choose to stay in the Euro zone: why would it want to leave? What does that have to do with the currency those bonds were issued in? If California were to cease payment of its public debt (something not likely to happen at all, hmm?), would it have to abandon the Dollar?
But here is a thing that has long been forbidden in the European treaties: for one country to financially help another pay its debts. This rule was introduced by Jacques Delors (a man who knows what being European means) precisely to avoid the contagion we are currently facing because of stupid “help” plans all across Europe. Yes: the whole idea of Merkozy’s grand plan to “save the Euro” while “helping Greece” (a weird kind of help, starving people, really) is illegal. So in addition to being liars, Mrs Merkel and Mr Sarkozy are delinquents.
So let Greece cease payment of its debt. A few banks will sink: so what? That will create less unemployment than letting our whole economy sink. European states guarantee citizens’ savings up to 30 k€ - that’s another of those clever European rules (some countries guarantee more). Other people, rich people only, will lose their savings. Will that keep you awake at night? Not me. But it might well keep awake a number of friends of Mrs Merkel’s and Mr Sarkozy’s.
And wouldn’t that be a good reason for lying and violating European treaties?
Tue, 03 May 2011 08:12:01 GMT

Since this has been a major request from users for a long time, I can only be cool with the idea of seeing the Debian project support a rolling release. However, I’m not pleased with the proposed ideas, since they don’t actually include any serious plan to make this happen. Sorry guys, but a big GR that says « We want a pony rolling release to happen » doesn’t achieve anything. Let me elaborate.

First of all, discussions have focused a lot on what to do when we’re in a freeze phase. Numerous cool ideas have been proposed, including PPAs (which, again, won’t happen until someone implements them). This is all good, but it is only the tip of the iceberg. Above all, before wondering what can happen during a freeze that lasts 20% of the time, let’s wonder what can happen during the remaining 80%. Once you have something that works in the regular development phase, you can tune it to keep working, even if less optimally, when the distribution is frozen. So let’s not put the cart before the horse.

There are three options if you want to make a rolling release happen:

1. Make unstable usable. To make this happen, you have to prevent the disasters that rarely but unavoidably happen there. You don’t want to make all rolling systems unusable because someone broke grub or uploaded a new version of udev that doesn’t work with the kernel.
2. Make testing usable. This sounds easy since RC-buggy packages are already prevented from migrating, but actually it is not. A large number of RC bugs are discovered at the time of testing migration, when some packages migrate and others don’t. Worse, they require at least several days to be fixed, and very often several months, when one of the packages gets entangled in a transition.
3. Create a new suite for rolling usage.

The proponents of the CUT project obviously believe in option 2. Unfortunately, I haven’t seen many things that could make it happen.
A possible way to fix the situation would be to run large-scale regression testing on several upgrade paths. I don’t know if there are volunteers for this, but it won’t be me. It would also imply making a lot of important bugs RC, since they could have a major effect on usability, but the release team will not be keen on that.

Because of the testing situation, when someone asks me for a rolling release, I point her to unstable with apt-listbugs. As of today, this is the closest thing we have to a rolling release, so we should probably examine option 1 more deeply. Is it that complicated to write a tool to prevent upgrades to broken packages? A 2-day delay in mirror propagation and a simple list of broken packages/versions (like the #debian-devel topic) would be enough. Add an overlay archive that works like experimental, and you can now handle freezes smoothly. Wait… isn’t that aptosid? We would probably gain a lot of insight from the people who invented this, instead of trying to reinvent the wheel.

Finally, option 3 could open new horizons. There’s a risk that it might drive users away from the testing and unstable suites, which makes us wonder how we could keep proper testing for our packages. Still, build a process that would (and that’s really only an example) freeze unstable every month, give people 10 days to fix the most blatant issues, add a way to make security updates flow in from unstable, and you have a really nice rolling distribution.

So overall, it only requires people to make things happen. You want option 2 to happen? Instead of working on GR drafts, start working with maintainers and release managers on ways to avoid breakage in testing. You want option 3 to happen? Start it as a new .debian.net service and see how it works. Personally, I’d be in favor of o[...]
Fri, 01 Apr 2011 18:34:57 GMT

Today we gathered the representatives of the different distributions present at GNOME.Asia to discuss what GNOME could do to improve its support for the distributions that ship it, especially in matters of long-term support. It is kind of sad that there weren’t any representatives from Canonical or Red Hat, but the discussion turned out really interesting and we learned a lot about each other’s packaging habits. Furthermore, several concrete leads were explored, which will lead to proposals from the GNOME Foundation to all distributions.

Helping with long-term support

The most widespread GNOME version in the LTS releases that happened recently is 2.30, which is used by Debian squeeze, Ubuntu LTS 10.04, RHEL 6, and Solaris 11. It looks like an accident, but on the other hand:

- GNOME 2.32 isn’t really suitable as is for an enterprise distribution;
- Linux distributions agreed on a kernel version to support long-term, so this had an impact on their release schedules, and this might well happen again for the next release.

In the future, a decision to use a common GNOME release could, anyway, only come from the distributions themselves, not from GNOME.

A proposal that many people agreed upon was to give distribution maintainers commit access to old branches that GNOME module maintainers don’t touch anymore. This way they could share their patches more easily and make new releases of these old branches. This would imply, of course, setting up rules about what changes are allowed, which distributions would have to agree upon (how to treat feature additions, for example).

Managing bugs

Currently it is hard for a distributor to tell whether other distributions are affected by a bug too, and whether they have released a fix for it. It was agreed that Launchpad’s feature of linking bugs between distributions, including version tracking, would exactly fill that need.
One of the solutions would be to add such a feature to Bugzilla, but that is a lot of work since it currently doesn’t have any kind of version tracking. Another proposal was to deploy a new Launchpad instance to serve as a hub between downstream bug systems and the GNOME Bugzilla. The condition for this to work would be to make it extremely easy to clone bugs between it and Bugzilla, and if possible from the downstream bug systems as well.

On the related topic of how not to drown under bugs, it might be possible to get bugs forwarded with a single command from the Debian BTS to Bugzilla, using the XML-RPC interface. Upstream also considers that bugs sent to Debian are generally of higher quality than those from e.g. Ubuntu, and would be OK with us routing some of them directly to upstream (like we already do for Evolution).

Communicating about the availability of patches

Currently, distributors are hardly ever informed that patches relevant to their distribution have been committed. They often learn of them by sheer luck while lurking on Bugzilla. The distributors-list mailing list is clearly the relevant medium for that purpose, but it is not used enough. It would need to be advertised more, among GNOME module maintainers and downstream maintainers alike.

On this matter, the disappearance of the x.y.3 GNOME releases (starting with 2.28) was evoked. The problem was that most of those releases contained insufficient changes to justify e.g. stable updates in distributions. The proposed solution is to encourage maintainers of modules with bugs to fix to release new versions (through an announcement on desktop-devel-announce), and to send a list of modules with new versions to downstream distributors so that they can integrate them. This spares the GNOME release team the hassle of making a [...]
Fri, 01 Apr 2011 10:21:41 GMT
For the whole week, I’ve been in Bangalore for the GNOME.Asia 2011 hackfest. I’ve been delegated by Stefano to represent Debian here, and my employer EDF has agreed to cover the travel costs, since they are very interested in first-hand information about the future of the Linux desktop and in sharing our work on scientific computing.
It’s been a really exciting week; I’ve spent quite some time packaging missing pieces of GNOME 3.0 (well, the release candidate versions of course) in experimental, together with Fred Peters. I think it’s reaching a usable state now, so we’ll probably soon provide metapackages to make it easily installable.
On the good news front, Vincent Untz also spent a lot of time improving the so-called “legacy mode”, which looks more and more like the Shell without special effects, and with all the features from gnome-panel 2.x that are still there. We will try, in Debian, to cover all the use cases there were for GNOME 2 with GNOME 3 technology, so that panel lovers are not left behind.
I’ve also proposed an update to the dh_gsettings proposal, which will provide the same functionality as dh_gconf and make it easy to set distribution-specific overrides. It is still missing a way to set mandatory settings, which might be a problem for some corporate users, but this is planned for a future version of GSettings.
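As a side note, the override mechanism itself already exists in GSettings: a vendor can drop a .gschema.override file next to the installed schemas and recompile them. A minimal sketch - the file name and the key values below are made-up examples, only the mechanism is standard:

```ini
# Hypothetical /usr/share/glib-2.0/schemas/10_debian-example.gschema.override
# Each section names a schema; each line overrides that schema's default.
[org.gnome.desktop.background]
picture-uri='file:///usr/share/images/desktop-base/default.png'

[org.gnome.desktop.interface]
clock-show-date=true
```

After installing such a file, the schemas need to be recompiled (typically with glib-compile-schemas /usr/share/glib-2.0/schemas) for the new defaults to take effect; users can still change the values, which is exactly why a separate mandatory-settings mechanism is needed.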
Today, we’re having a business track where I and representatives of other companies (Oracle, Lanedo, Dexxa) are sharing experiences about making money with free software. Unfortunately the local organizers didn’t manage to gather many people, despite our being in a city with an incredible number of IT industries.
Tomorrow, the public conference starts, and this should be the opposite: we’re expecting around 1000 people, which is a great achievement for a free software conference.
For an unrelated topic, being around so many GNOME hackers has some interesting side effects; I’ve been added to Planet GNOME. So, hey, hello Planet GNOME readers!
Thu, 31 Mar 2011 19:19:02 GMT
There have been a lot of web browsers embedding the Gecko engine, especially through the gtkmozembed “library” (it was not really a proper library, but let’s call it that). I remember being a happy user of galeon, which lived on as epiphany, but there were also all those small applications that just need a good HTML renderer in one of their widgets, like yelp, or several Python applications using python-gtkmozembed.
Anyone who has had to deal with these applications, especially the most complex ones, could tell you a few things:
So, today, it is official: Mozilla is dropping gtkmozembed from their codebase.
I don’t think this will come as a surprise to anyone. You can’t develop a new version of a behemoth, monolithic application every 3 months while still caring about the interfaces underneath. Embedded applications have been migrating to WebKit in recent years, and those that don’t do it really soon will die.
The interesting part of the announcement is not there, though. It can be found hidden in a bug report: a stable and versioned libmozjs will simply never happen.
What does it mean?
First of all, it means that Debian and Ubuntu will have to go on maintaining their own versioning of libmozjs so that it can be linked to in a decent way by applications using the SpiderMonkey JS engine. It also means that this version will have to be bumped more often.
But it also puts into question the whole future of SpiderMonkey as a separate library. With a shortened release cycle, the Mozilla developers will be tempted to add more specific interfaces to SpiderMonkey, reducing its genericity in favor of its use in Firefox itself. This will produce less and less useful libmozjs versions, until we reach the point when they’ll make the same announcement as above, with s/gtkmozembed/libmozjs/.
One of the reasons for the limited adoption of JSCore is that it currently lives in the same library as WebKit, which is a huge dependency. I’ve been very glad to learn that Gustavo is considering the idea of splitting it out. We need to provide an escape route for applications using libmozjs, and this looks like more than a decent one. I hope that GNOME Shell follows it sooner rather than later.
Tue, 15 Mar 2011 11:49:39 GMT
A few weeks ago, at work, we were looking for a solution to a tricky printing problem: how to manage, in a centralized infrastructure, a large number of locations, workstations and printers?
One of the consultants working for us came up with a great idea. With only a 20-line patch to CUPS, workstations would be able to find out which printers are in the same location. 20 lines of code, instead of a complex virtualisation solution? This is exactly the kind of reason why we use free software: when there’s something wrong, you can fix it. When you need something more, you can code it.
Now, many others could benefit from such an improvement, and we don’t want to maintain a forked version of CUPS, so we forwarded it upstream, who looked interested. But upstream now being Apple, they requested a stupid copyright assignment agreement.
I will leave the complexity of getting such a document signed in a Fortune 500 company with no business ties to Apple to the reader’s imagination. This will, of course, not happen - and if the decision were mine, the answer would be a clear “No.” No, because I want to improve free software, not contribute to Apple’s proprietary version. No, because copyleft is about giving as much as you take.
How many contributions are being kept out of CUPS because of this stupid copyright assignment? It seems to me that such software is doomed to remain crippled as long as companies like Apple are in charge of its maintenance.
There is free software. And there is free software by Apple. And Oracle. And Canonical.
Mon, 07 Mar 2011 12:39:33 GMT
At first, it looked nice:
(image)
But then, it was more like:
(image)
Thu, 24 Feb 2011 07:54:42 GMT
(image)
Sat, 25 Dec 2010 10:09:53 GMT
(image)
My only contribution will be: merry FSMas to all!
Wed, 01 Dec 2010 19:12:39 GMT

We’ve come a long way since the times when you needed to configure 2 X servers in XDM just to be able to use 2 X sessions at once. However, there was still some way to go until recently. A number of bugs that could be wrongly attributed to the X server or to the desktop environment were actually caused by the display manager doing crap.

GDM up to 2.20

Since the introduction of the “flexible X servers” feature, GDM hadn’t evolved much on the matter of user switching. What it used to do was pretty straightforward:

- a specific protocol can be invoked by the gdmflexiserver command;
- the gdm daemon spawns a new X server on an empty console;
- it initiates another login process in it;
- when the session exits, or if the user clicks on “Quit” instead of logging in, the X server exits.

It is interesting to note that VT (console) switching is purely handled by the X server. When starting, the new server switches the current VT to where it is. When exiting, it automatically switches back to the VT from which it was launched.

While very simple, this idea fails to work correctly every time you try to do something more complicated than starting a temporary session for a guest and exiting it. For example, if you start two of them, there is a chance that, when the X server switches back to the console it was run from, there is nothing left running in that console, leaving you with the funny Control-Alt-Fn shortcuts to find your way back to an X server. You will also meet interesting race conditions when trying to switch back to an existing session from the login window.

GDM 2.28 and above

In the process of rewriting the code entirely, the GDM developers tried to address a number of those shortcomings, making use of D-Bus and ConsoleKit. The new design is slightly more complicated, however. The gdmflexiserver tool will first try to look for an existing login window in another X server, and just switch to the VT it is on if it finds one. Otherwise, the daemon starts a new slave process with a new X server and a new login window, in a very similar way to what older versions did. When logging in as a user with an existing session, it switches to the VT that session is on, but leaves the login window and its X server running. When going into a new session, the X server is simply left to die at the end of the session, and to switch back to the VT from which it was launched.

Not killing the X server in some cases partly addresses the problems caused by letting it switch back to the original VT when exiting. However, in several ways the cure is worse than the disease. First of all, it leaves unused X servers around, with all the processes used by the login window - and that makes quite a number of them, with GDM now using a minimal GNOME session. When there is such a login window remaining, ConsoleKit will refuse to let you shut down your computer, being lured into thinking someone else is using it. And it doesn’t solve the inconsistency issue: when you leave a session, you can find either a login window, a screensaver unlock dialog, or a black screen.

Getting it to work

The modular architecture of GDM makes it possible to improve the situation. (Possible but not easy, because of the millefeuille of classes.) However, any fix is merely a band-aid unless you address the root issue: the X server knowing better than you which VT it should switch to when exiting. Fortunately, Xorg now features an option to avoid that behavior: -novtswitch. So the first step is to enable it, and let the GDM daemon (or slave) handle VT switching through Con[...]
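For the curious, the option can be observed with a manual server start; a rough sketch, where the display and VT numbers are arbitrary examples:

```shell
# Start a second X server on display :1, VT 8. With -novtswitch, Xorg does
# not switch VTs on its own at startup or exit: that decision is left to
# whoever launched it (GDM, in the scenario described above).
X :1 vt8 -novtswitch
```

Without the option, this server would yank the console to VT 8 when starting and back to the launching VT when exiting, which is precisely the behavior that causes the black screens and race conditions described above.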
Sat, 13 Nov 2010 11:28:40 GMT
If you build your packages in a disposable chroot, the longest part of the build is, by far, installing the build-dependencies, because dpkg likes to fsync() every file it writes. It’s a good thing it does that on your main system, but in a disposable chroot you really, really don’t care what happens to it if the system crashes. Thanks to Mike, I discovered eatmydata, and tried it with cowbuilder.
If you want to try it out, add this to your pbuilderrc file:
EXTRAPACKAGES="eatmydata"
if [ -z "$LD_PRELOAD" ]; then
    LD_PRELOAD=/usr/lib/libeatmydata/libeatmydata.so
else
    LD_PRELOAD="$LD_PRELOAD":/usr/lib/libeatmydata/libeatmydata.so
fi
export LD_PRELOAD
You will also need to install eatmydata in your chroot, unless you want to regenerate it from scratch. And now you can enjoy your super-fast builds.
Sat, 06 Nov 2010 19:18:25 GMT
My wife has been pestering me for months to get a replacement for our dead Epson inkjet printer (which didn’t last long, mind you). To avoid the nightmare of printer support (which, unless you buy a high-end professional printer that does everything plus the coffee, is usually somewhere between “disaster” and “works sometimes”), we spent a long time on manufacturers’ websites choosing the model wisely.
We chose the HP Laserjet P1102, which, according to HP, has a full support level and is even part of their recommended models.
Yet, after plugging it in, it took me quite some time to understand why it behaved like a brick instead of a printer. First, I thought it was a bug in hplip. Then I discovered that the printer advertised itself as a storage device instead of a printer. What, a buggy firmware?
Thanks to a random question on Launchpad I discovered it’s not a bug, it’s a feature. It’s named HP Smart Install and it turns out it’s yet another stupid idea to support OSes that are too dumb to detect your printers automatically: the printer advertises itself as a CD drive, until you install the driver that will make it switch back to being a printer.
What happens to those who don’t want this “feature” that turns your printer into a 10 kg, read-only USB drive? Well, HP has a solution in the Smart Install FAQ:
25. Can I turn HP Smart Install off or on?
Yes. You can use the HP Smart Install utility to disable/enable HP Smart Install. The utility is stored on the software CD, in the UTIL folder. SIUtility.exe is for 32-bit operating systems and SIUtility64.exe is for 64-bit operating systems.
Bunch of idiots. If I buy a €100 printer, it’s not so that I have to buy a €100 operating system just to activate it.
Tue, 02 Nov 2010 13:57:07 GMT
Several people asked me for the slides I presented Saturday at the Mini Debconf. Until they are available on the Debconf website, here they are:
Despite having gone completely overboard with the timing (let me apologize again to the organizers), the talk seems to have gathered quite some interest. Several people looked surprised to learn Debian is used on such a large scale.
Wed, 06 Oct 2010 12:45:33 GMT
It’s often said that KDE and GNOME are too bloated, too complex, too slow, or whatever. I won’t deny that these criticisms are often justified, and some parts of the code are badly designed. But there can also be reasons behind this bloat: they are called features.
When you want to mount an encrypted USB disk, you can write your own script and even write your own udev rules so that it can be mounted with autofs.
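For comparison, the manual, root-only approach boils down to something like this - a rough sketch, where the device node, mapping name and mount point are examples:

```shell
# Unlock the LUKS volume by hand (cryptsetup prompts for the passphrase),
# then mount the resulting device-mapper node. All of this requires root.
cryptsetup luksOpen /dev/sdb1 usbkey
mount /dev/mapper/usbkey /mnt/usbkey
# ... use the disk, then tear everything down again:
umount /mnt/usbkey
cryptsetup luksClose usbkey
```

Every new disk means repeating (or scripting) this dance as root, which is exactly what the desktop stack described below does away with.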
It looks fun to find a way to use software that has been obsolete and useless for 10 years, in a way that requires administrator rights just to add a new model of USB disk to your system, and that puts the private key in a place readable by anyone stealing the hardware. While it will make an interesting read for those willing to understand how the device mapper and cryptsetup work, I think it’s a bit of a stretch to present it as a correct implementation of an encrypted disk mounting setup.
In etch, GNOME shipped with pmount, a nice utility, still included in Debian, that allows you to mount your USB keys, encrypted or not. In lenny, it shipped with HAL, which allowed LUKS passphrases to be stored securely in the GNOME keyring. Whenever you plug a LUKS-encrypted disk into a lenny system running GNOME, it is immediately made accessible, and that’s all.
In squeeze, things go much further thanks to udisks (the backend) and gnome-disk-utility (the frontend). Roland rightly pointed out that the g-d-u documentation is nonexistent - it consists only of a screenshot, which is outdated. Nevertheless, you will find it practical if you want to encrypt a USB drive, since you can format it, partition it, and create encrypted volumes and the filesystems on them, in a few clicks and without root permissions. If you use nautilus, it will also mount them automatically using the same backend when you plug them in.
I don’t know about you, but I think that is worth a few CPU cycles of my 3 GHz processor and a few dozen megabytes of my 500 GB drive.
Wed, 21 Apr 2010 21:08:16 GMT
For those who haven’t followed and just wondered why “Debian is so late, this is lame, this sucks, Ubuntu is better because they have the latest version, and Fedora is even better because they even have versions that don’t work at all”, here is the short story: the GDM rewrite wasn’t really usable until 2.28 (which is the version with which Ubuntu started to ship it, incidentally). Add to that the time to make a transition plan and to integrate it properly, and that makes actually only 6 months.
Big thanks go to Luca Bruno (Lethalman) who did most of the job. A quick look at the changelog will give you an idea of the amount of work involved to bring it to our quality standards.
Since the rewrite has absolutely zero compatibility with previous versions, it will not be upgraded in place. Therefore, while newly installed systems will get GDM 2.30 by default for squeeze, those upgrading from lenny will keep GDM 2.20. The 2.20 version will be dropped after the squeeze release.
If you want to upgrade your GDM, simply run apt-get install gdm3. It should work for simple setups, and there’s a hack that makes upgrades work even while you are logged into X.
Everyone who needs advanced features (such as the LTSP people) should make sure GDM 2.30 suits their needs during the squeeze cycle, since the old version will no longer be around afterwards.
Finally, here is a call for translations. Anyone can help: just grab the gdm3 sources, get the .pot files and translate them into your language. Beware, there is one file in debian/po for the desktop files and one in debian/po-up for the patches. (I will try to merge them in a later version.) Then submit your translations as bug reports.
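In practice, the workflow could look like this - a sketch assuming deb-src entries in your sources.list, the gettext tools installed, and that the templates follow the usual templates.pot naming convention (French is just an example locale):

```shell
# Fetch and unpack the gdm3 source package
apt-get source gdm3
cd gdm3-*/
# debian/po holds the desktop-file strings, debian/po-up the patch strings;
# msginit creates a fresh .po skeleton for your locale from each template
msginit --no-translator -l fr -i debian/po/templates.pot -o debian/po/fr.po
msginit --no-translator -l fr -i debian/po-up/templates.pot -o debian/po-up/fr.po
# edit both fr.po files, then submit them as bug reports (e.g. with reportbug gdm3)
```

The two msginit calls mirror the two-directory layout mentioned above; once the files are merged in a later version, a single call would do.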