
Updated: 2016-05-12T20:06:53+02:00


My JS1K Demo - The Making Of


If you haven't seen it yet, check out the JS1K demo contest. The goal is to do something neat in 1 kilobyte of JavaScript code. I couldn't resist making one myself, so I pulled out my bag of tricks from my Winamp music visualization days and started coding. I'm really happy with how it turned out. And no, it won't work in Internet Explorer 8 or less.

Edit: OH SNAP! I just rewrote the demo to include volumetric light beams and still fit in 1K.

Now, whenever size is an issue, the best way to make a small program is to generate all data on the fly, i.e. procedurally. This saves valuable storage space. While this might seem like a black art, often it just comes down to clever use of (high school) math. And as is often the case, the best tricks are also the simplest, as they use the least amount of code. To illustrate this, I'm going to break down my demo and show you all the major pieces and shortcuts used. Unlike the actual 1K demo, the code snippets here will feature legible spacing and descriptive variable names.

Initialization

JS1K's rules give you a Canvas tag to work with, so the first piece of code initializes it and makes it fill the window. From then on, it just renders frames of the demo. There are four major parts to this:

- Animating the wires
- Rotating and projecting the wires into the camera view
- Coloring the wires
- Animating the camera

All of this is done 30 times per second, using a normal setInterval timer:

  setInterval(function () { ... }, 33);

Drawing Wires

The most obvious trick is that everything in the demo is drawn using only a single primitive: a line segment of varying color and stroke width. This allows the whole drawing process to be streamlined into two tight, nested loops. Each inner iteration draws a new line segment from where the previous one ended, while the outer iteration loops over the different wires. The lines are blended additively, using the built-in 'lighter' mode, which means they can be drawn in any order. This avoids having to manually sort them back-to-front.

To simplify the perspective transformations, I use a coordinate system that places the point (0, 0) in the center of the canvas and ranges from -1 to 1 in both coordinates. This is a compact and convenient way of dealing with varying window sizes, without using up a lot of code:

  with (graphics) {
    ratio = width / height;
    globalCompositeOperation = 'lighter';
    scale(width / 2 / ratio, height / 2);
    translate(ratio, 1);
    lineWidthFactor = 45 / height;
    ...

I add a correction ratio for non-square windows and calculate a reference line width lineWidthFactor for later. Here, I'm using the with construct to save valuable code space, though its use is generally discouraged.

Then there are the two nested for loops: one iterating over the wires, and one iterating over the individual points along each wire. In pseudo-code they look like:

  For (12 wires => wireIndex) {
    Begin new wire
    For (45 points along each wire => pointIndex) {
      Calculate path of point on a sphere: (x,y,z)
      Extrude outwards in swooshes: (x,y,z)
      Translate and Rotate into camera view: (x,y,z)
      Project to 2D: (x,y)
      Calculate color, width and luminance of this line: (r,g,b) (w,l)
      If (this point is in front of the camera) {
        If (the last point was visible) {
          Draw line segment from last point to (x,y)
        }
      }
      else {
        Mark this point as invisible
      }
      Mark beginning of new line segment at (x,y) [...]
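To make the rotate-and-project step concrete, here's a minimal stand-alone sketch. The function names and the camera distance are mine for illustration, not taken from the actual 1K source:

```javascript
// Rotate a 3D point around the Y axis, then project it to 2D with a
// simple perspective divide. Points at or behind the camera plane
// return null, matching the demo's visibility check.
function rotateY(p, angle) {
  var c = Math.cos(angle), s = Math.sin(angle);
  return {
    x: c * p.x + s * p.z,
    y: p.y,
    z: -s * p.x + c * p.z
  };
}

function project(p, cameraDistance) {
  var z = p.z + cameraDistance;      // translate into camera view
  if (z <= 0) return null;           // behind the camera: invisible
  return { x: p.x / z, y: p.y / z }; // perspective divide onto the -1..1 plane
}

// A point on the +X axis, rotated a quarter turn, lands on the -Z axis
// and projects back to the screen center.
var p = project(rotateY({ x: 1, y: 0, z: 0 }, Math.PI / 2), 2);
```

Each segment endpoint goes through exactly this pipeline once per frame; the 2D results feed straight into the canvas line calls.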

Making Worlds 4 - The Devil's in the Details


Last time I'd reached a pretty neat milestone: being able to render a somewhat realistic rocky surface from space. The next step is to add more detail, so it still looks good up close. Adding detail is, at its core, quite straightforward: I need to increase the resolution of the surface textures, and further subdivide the geometry. Unfortunately I can't just crank both up, because the resulting data is too big to fit in graphics memory. Getting around this will require several changes.

Strategy

Until now, the level-of-detail selection code has only been there to decide which portions of the planet should be drawn on screen. But the geometry and textures to choose from are all prepared up front, at various scales, before the first frame is started. The surface is generated as one high-res planet-wide map, using typical cube map rendering. This map is then divided into a quad-tree structure of surface tiles, which allows me to adaptively draw the surface at several pre-defined levels of detail, in chunks of various sizes. (Source)

This strategy won't suffice, because each new level of detail doubles the work up-front, resulting in exponentially increasing time and memory cost. Instead, I need to write an adaptive system to generate and represent the surface on the fly. This process is driven by the level-of-detail algorithm deciding if it needs more detail in a certain area. Unlike before, it will no longer be able to make snap decisions and instant transitions between pre-loaded data: it will need to wait several frames before higher detail data is available. Uncontrolled growth of increasingly detailed tiles is not acceptable either: I only wish to maintain tiles useful for rendering views from the current camera position. So if a specific detailed portion of the planet is no longer being used, because the camera has moved away from it, it will be discarded to make room for other data.

Generating Individual Tiles

The first step is to be able to generate small portions of the surface on demand. Thankfully, I don't need to change all that much. Until now, I've been generating the cube map one cube face at a time, using a virtual camera at the middle of the cube. To generate only a portion of the surface, I have to narrow the virtual camera's viewing cone and skew it towards a specific point.

This is easy using a mathematical trick called homogeneous coordinates, which are commonly used in 3D engines. They turn 2D and 3D vectors into 3D and 4D ones respectively. Through this dimensional redundancy, we can then represent most geometrical transforms as a 4x4 matrix multiplication. This covers all transforms that translate, scale, rotate, shear and project, in any combination. The right sequence (i.e. multiplication) of transforms will map regular 3D space onto the skewed camera viewing cone. Given the usual centered-axis projection matrix, the off-axis projection matrix is found by multiplying with a scale-and-translate matrix in so-called "screen space", i.e. at the very end. The thing with homogeneous coordinates is that it seems like absolute crazy talk until you get it. I can only recommend you read a good introduction to the concept.

With this in place, I can generate a zoomed height map tile anywhere on the surface. As long as the underlying brushes are detailed enough, I get arbitrarily detailed height textures for the surface. The normal map requires a bit more work, however.

Normals and Edges

As I described in my last entry, normals are generated by comparing neighbouring samples in the height map. At the edges of the height map texture, there are no neighbouring samples to use. This wasn't an issue before, because the height map was a seamless planet-wide cube map, and samples were fetched automatically from adjacent cube faces.
In an adaptive system however, the map resolution varies across the surface, and there's no guarantee that those neighbouring tiles will be available at the desired resolution. The easy way out is to make[...]
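To make the screen-space trick from earlier concrete, here's a small stand-alone sketch. It's my own toy code, not the engine's: a real projection matrix would also handle depth, but the off-axis skew itself is just a scale-and-translate applied last, in homogeneous coordinates:

```javascript
// Multiply a 4x4 matrix (row-major, flat array of 16) with a
// homogeneous 4-vector [x, y, z, w]. With w = 1, a translation
// column actually moves the point, which a 3x3 matrix cannot do.
function transform(m, v) {
  var out = [0, 0, 0, 0];
  for (var row = 0; row < 4; row++)
    for (var col = 0; col < 4; col++)
      out[row] += m[row * 4 + col] * v[col];
  return out;
}

// Screen-space scale+translate that zooms a centered projection into
// the sub-rectangle [cx - sx, cx + sx] x [cy - sy, cy + sy]:
// the off-axis skew, applied at the very end.
function offAxis(sx, sy, cx, cy) {
  return [
    1 / sx, 0,      0, -cx / sx,
    0,      1 / sy, 0, -cy / sy,
    0,      0,      1, 0,
    0,      0,      0, 1
  ];
}

// Zoom into the top-right quadrant: its center (0.5, 0.5) maps to
// the new screen center (0, 0), and its corner (1, 1) stays at (1, 1).
var m = offAxis(0.5, 0.5, 0.5, 0.5);
var center = transform(m, [0.5, 0.5, 0, 1]);
var corner = transform(m, [1, 1, 0, 1]);
```

The same pattern, with smaller sx/sy and an off-center cx/cy, narrows the virtual camera onto any tile of the cube face.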

Making Worlds - Intermission


Today at BazCamp YVR I gave a short presentation and demo of my "Making Worlds" project, as well as an overview of procedural content generation in general.

The slides are available for download.


Making Worlds 3 - That's no Moon...


It's been over two months since the last installment in this series. Oops. Unfortunately, while trying to get to the next stage of this project, I ran into some walls. My main problem is that I'm not just creating worlds, but also learning to work with the Ogre engine and modern graphics hardware in particular.

This presents some interesting challenges: between my own code and the pixels on the screen, there are no less than three levels of indirection. First, there's Ogre, a complex piece of C++ code that provides me with high-level graphics tools (i.e. objects in space). Ogre talks to OpenGL, which abstracts away low-level graphics operations (i.e. commands necessary to draw a single frame). The OpenGL calls are handed off to the graphics driver, which translates them into operations on the actual hardware (processing vertices and pixels in GPU memory). Given this long dependency chain, it's no surprise that when something goes wrong, it can be hard to pinpoint exactly where the problem lies. In my case, an oversight and misunderstanding of an Ogre feature led to several days of wasted time and a lot of frustration that made me put aside the project for a while. With that said, back to the planets...

Normal mapping

Last time, I ended with a bumpy surface, carved by applying brushes to the surface. The geometry was there, but the surface was still just solid white. To make it more visually interesting, I'm going to apply light shading.

The most basic information you need for shading a surface is the surface normal. This is the vector that points straight away from the surface at a particular point. For flat surfaces, the normal is the same everywhere. For curved surfaces, the normal varies continuously across the surface. Typical materials reflect the most light when the surface normal points straight at the light source. By comparing the surface normal with the direction of incoming light (using the vector dot product), you can get a good measure of how bright the surface should be under illumination.

Lighting a surface using its normals.

To use normals for lighting, I have two options. The first is to do this on a geometry basis, assigning a normal to every triangle in the planet mesh. This is straightforward, but ties the quality of the shading to the level of detail in the geometry. A second, better way is to use a normal map. You stretch an image over the surface, as you would for applying textures, but instead of color, each pixel in the image represents a normal vector in 3D. Each pixel's channels (red, green, blue) are used to describe the vector's X, Y and Z values. When lighting the surface, the normal for a particular point is found by looking it up in the normal map. The benefit of this approach is that you can stretch a high resolution normal map over low resolution geometry, often with almost no visual difference.

Lighting a low-resolution surface using high-resolution normals. (Source - Creative Commons Share-alike Attribution)

Normal mapping helps keep performance up and memory usage down.

Finding Normals

So how do you generate such a normal map, or even a single normal at a single point? There are many ways, but the basic principle is usually the same. First you calculate two different vectors which are tangent to the surface at the point in question. Then you use the cross product to find a vector perpendicular to the two. This third vector is unique and will be the surface normal. For triangles, you can pick any two triangle edges as vectors. In my case, the surface is described by a heightmap on a sphere, which makes things a bit trickier and requires some math. I asked my friend Djun Kim, Ph.D. and teacher of mathematics at UBC, for help and he recommended Calculus on Manifolds by Michael Spivak.
This deceptively small and thin book covers all the basics of calculus in a dense and compact way, and quickly became my new favo[...]
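For flat geometry, the tangents-then-cross-product recipe above fits in a few lines. This is a stand-alone sketch with names of my own choosing; the heightmap-on-a-sphere case needs the calculus discussed above:

```javascript
// Surface normal from two tangent vectors via the cross product, then
// a Lambert brightness term via the dot product with the light
// direction (clamped to zero when the light is behind the surface).
function cross(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0]
  ];
}

function normalize(v) {
  var len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

function lambert(normal, lightDir) {
  var d = normal[0] * lightDir[0] + normal[1] * lightDir[1] + normal[2] * lightDir[2];
  return Math.max(0, d);
}

// A flat patch in the XY plane: tangents along X and Y give a normal
// pointing straight up the Z axis; a head-on light gives full brightness.
var n = normalize(cross([1, 0, 0], [0, 1, 0]));
var bright = lambert(n, [0, 0, 1]);
```

For a triangle mesh, the two tangents are simply two edges of each triangle; for a normal map, the same computation runs once per texel using neighbouring height samples.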

Making Worlds 2 - Scaling Heights


Last time, I had a working, smooth sphere mesh. The next step is to create terrain.

Scale

Though my goal is to render at a huge range of scales, I'm going to focus on views from space first. That strongly limits how much detail I need to store and render. Aside from being a good initial sandbox in terms of content generation, it also means I can comfortably keep using my current code, which doesn't do any sophisticated memory or resource management yet. I'd much rather get something interesting up first than work on invisible infrastructure.

That said, this is not necessarily a limitation. The interesting thing about procedural content is that every generator you build can be combined with many others, including a copy of itself. In the case of terrain, there are definite fractal properties, like self-similarity at different levels of scale. This means that once I've generated the lowest resolution terrain, I can generate smaller scale variations and combine them with the larger levels for more detail. This can be repeated indefinitely and is only limited by the amount of memory available. Perlin noise is a celebrated classic procedural algorithm, often used as a fractal generator.

Height

To build terrain, I need to create heightmaps for all 6 cube faces. Shamelessly stealing more ideas from Spore, I'm doing this on the GPU instead of the CPU, for speed. The GPU normally processes colored pixels, but there's no reason why you can't bind a heightmap's contents as a grayscale (one channel) image and 'draw' into it. As long as I build my terrain using simple, repeated drawing operations, this will run incredibly fast. In this case, I'm stamping various brushes onto the sphere's surface to create bumps and pits. Each brush is a regular PNG image which is projected onto the surface around a particular point. The luminance of the brush's pixels determines whether to raise or lower terrain and by how much. (Three example brushes from Spore - source)

However, while the brushes need to appear seamless on the final sphere, the drawing area consists only of the straight, square set of cube map faces. It might seem tricky to make this work so that the terrain appears undistorted on the curved sphere grid, but in fact, this distortion is neatly compensated for by good old perspective. All I need to do is set up a virtual scene in 3D, where the brushes are actual shapes hovering around the origin and facing the center. Then, I place a camera in the middle and take a snapshot both ways along each of the main X, Y and Z directions with a perfect 90 degree field of view. The resulting 6 images can then be tiled to form a distortion-free cube map. (Rendering two different cube map faces. The red area is the camera's viewing cone/pyramid, which extends out to infinity.)

To get started I built a very simple prototype, using Ogre's scene manager facilities. I'm starting with just a simple, smooth crater/dent brush. I generate all 6 faces in sequence on the GPU, pull the images back to the CPU to create the actual mesh, and push the resulting chunks of geometry into GPU memory. This is only done once at the beginning, although the possibility is there to implement live updates as well. Here's a demo showing a planet and the brushes that created it, hovering over the surface. I haven't implemented any shading yet, so I have to toggle back and forth to wireframe mode so you can see the dents made by the brushes.

The cubemap for this 'planet' looks like this when flattened. You can see that I haven't actually implemented real terrain carving, because brushes cause sharp edges when they overlap. The narrow dent on the left gets distorted and angular where it crosses the cube edge. This is a normal consequence of the cubemapping[...]
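The projection from a cube face grid point out to the sphere boils down to a normalize. Here's a toy sketch of the principle for a single face (the +Z one); the names and face choice are mine, and the real pipeline of course does this on the GPU:

```javascript
// Map a pixel (u, v in 0..1) on one cube face to its point on the
// unit sphere: build the 3D point on the cube, then project it out
// from the center by normalizing its length to 1.
function cubeFaceToSphere(u, v) {
  var x = u * 2 - 1, y = v * 2 - 1, z = 1; // point on the +Z cube face
  var len = Math.sqrt(x * x + y * y + z * z);
  return [x / len, y / len, z / len];
}

// The face center projects straight out along +Z; the face corner
// lands at [1, 1, 1] / sqrt(3), still exactly on the unit sphere.
var faceCenter = cubeFaceToSphere(0.5, 0.5);
var faceCorner = cubeFaceToSphere(1, 1);
```

Because every projected point has length 1, the six snapshots tile into a seamless sphere, which is exactly why the 90 degree field-of-view trick works.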

Making Worlds 1 - Of Spheres and Cubes


Let's start making some planets! Now, while this started as a random idea kind of project, it was clear from the start that I'd actually need to do a lot of homework for this. Before I could get anywhere, I needed to define exactly what I was aiming for.

The first step was to shop around for some inspirational art and reference pictures. While there is plenty of space art to be found online, in this case nothing can substitute for the real thing. So I focused my search on real pictures, both of landscapes (terran or otherwise) as well as from space. Hopefully I'll be able to render something similar in a while. At the same time, I eagerly devoured any paper I could find on rendering techniques from the past decade, some intended for real-time rendering, some old enough to be real-time today.

Out of all this, I quickly settled on my goals:

- Represent spherical or lumpy heavenly bodies from asteroids to suns.
- With realistic looking topography and features.
- Viewable across all scales from surface to space.
- At flight-simulator levels of detail.
- Rendered with convincing atmosphere, water, clouds, haze.

For most of these points, I found one or more papers describing a useful technique I could use or adapt. At the same time, there are still plenty of unknowns I'll need to figure out along the way, not to mention significant amounts of fudging and experimentation.

The Spherical Grid

To get started I needed to build some geometry, and to do that I needed to figure out what geometry I should use. After reviewing some options, I quickly settled on a regular spherical displacement map (a.k.a. a heightmap). That is, starting with a smooth sphere, move every surface point up or down, perpendicular to the surface, to create terrain. If these vertical displacements are very small compared to the sphere radius, this can represent the surface of a typical planet (like Earth) at the levels of detail I'm looking for. If the displacements are of the same order as the sphere radius, you can deform it into very irregular potato-like shapes. The only thing heightmaps can't do is caves, tunnels, overhangs and other kinds of holes, which is fine for now.

The big question is: how should the spherical surface be divided up and represented? With a sphere, this is not an easy question, because there is no single obvious way to divide a spherical surface into regular sections or grids. Various techniques exist, each with their own benefits and specific use cases, and I spent quite some time looking into them. Here's a comparison between four different tessellations (source); note that the tessellation labeled ECP is just the regular geographic latitude-longitude grid.

The main features I was looking for were speed and simplicity, so I settled on the 'quadcube'. This is where you start with a cube whose faces have been divided into regular grids, and project every surface point out from the middle to an enclosing sphere. This results in a perfectly smooth sphere, built out of 6 identical round shells with curved edges. This arrangement is better known as the 'cube map' and is often used for storing arbitrary 360 degree panorama views. Here's a cube and its spherical projection, with the projected cube edges indicated in red. Note that the resulting sphere is perfectly smooth and round, even though the grid has a bulgy appearance.

Cube maps are great, because they are very easy to calculate and do not require complicated trigonometry. In reverse, mapping arbitrary spherical points back onto the cube is even simpler and in fact natively supported by GPUs as a texture mapping feature. This is important, because I'll be generating the surface terrain and texture dynamically and will need to index and access each surface 'pixel' efficiently. Using a cube map, I simply identify the corr[...]
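That reverse mapping boils down to dividing by the dominant axis. Here's a sketch of the principle in plain JavaScript; the names are mine, and real GPU cube map conventions differ in sign and face ordering, so treat this as illustration only:

```javascript
// Map a direction on the sphere to a cube face plus 2D coordinates
// on that face, by dividing out the dominant axis. This is the same
// operation GPUs perform natively for cube map texture lookups.
function sphereToCubeFace(d) {
  var ax = Math.abs(d[0]), ay = Math.abs(d[1]), az = Math.abs(d[2]);
  if (ax >= ay && ax >= az)
    return { face: d[0] > 0 ? '+x' : '-x', u: d[1] / ax, v: d[2] / ax };
  if (ay >= ax && ay >= az)
    return { face: d[1] > 0 ? '+y' : '-y', u: d[0] / ay, v: d[2] / ay };
  return { face: d[2] > 0 ? '+z' : '-z', u: d[0] / az, v: d[1] / az };
}

// A direction mostly along +Z lands on the +z face, with u and v
// in the -1..1 range of that face's grid.
var hit = sphereToCubeFace([0.2, -0.1, 0.9]);
```

No trigonometry anywhere: one comparison and one divide per axis, which is what makes the quadcube so cheap to index.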

Making Worlds: Introduction


For the past year or so I've been reacquainting myself with an old friend: C++.

More specifically, I've been exploring graphics programming again, this time with the luxurious flexibility of the modern GPU at my fingertips. To get me started, I shopped around for an open source engine to play with. After trying Irrlicht and finding its promises to be a bit lacking, Ogre turned out to be a really good choice. Though its architecture is a bit intimidating at first, it is all the more sound. More importantly, it seems to have a relatively healthy open-source community around it.

So with Ogre as my weapon of choice, I've started a new project: Making Worlds. More specifically, I want to procedurally generate a 3D planet, viewable from outer space as well as the ground (at flight-sim levels of detail), which can be rendered real-time on recent graphics hardware.

Why? Because I really like procedural content generation. It's an odd discipline where anything goes, and techniques from across mathematics, engineering and physics are applied. Then, you add a good dose of creativity and artistic sense, and perhaps mix in some real-world data too, until you find something that looks right.

Plus, far from being an exercise in pointlessness, procedural content is gaining in popularity, especially for video games.

So, in the style of Shamus Young's excellent Procedural city series, I'm going to start blogging about Making Planets. Unlike him however, I'm not going to adhere to a strict schedule.


Here's a teaser for the first installment.

CSS Sub-pixel Background Misalignments


Update: and now, IE8 adds even more odd behavior to the mix!

A while ago, John Resig pointed out some issues with sub-pixel positioning in CSS. The problem he used is one of percentage-sized columns inside a container, where the resulting column widths don't round evenly to whole pixels or don't sum to the correct total. His conclusion is that browsers each have their own way of dealing with the problem. I've recently been bumping into a related issue however, that shows the situation is even worse: rounding is inconsistent even inside a single browser.

Take the following scenario: a fixed width element that is horizontally centered in a viewport using margin-left: auto; margin-right: auto;. The viewport has a horizontally centered background image, having background-position: 50% 0. This is an extremely common page structure. You'd logically expect the background image and the element to line up, and move as one when the viewport is resized. However, this is not the case. Depending on the viewport width, the background can be offset one pixel to the left or right. This obviously wreaks havoc on many designs.

I decided to investigate this more closely and the results are not pretty. My test case consists of the basic structure described above, repeated in a bunch of mini-viewports. Each background image contains a black box of a certain size, and is overlaid with a grey element that covers this box exactly. If the two pieces align, there should be no black peeking through on the sides, and each box should be fully grey. For full coverage, I vary the following parameters:

- The width of the viewport
- The odd/even size of the box/element
- Background image is bigger/smaller than the viewport
- Background image is padded evenly/unevenly around the box (1px difference). (*)

I tested this in IE6, IE7, Safari 3.1.2, Firefox 3.0.4 and Opera 9.6.2. The result is quite baffling: not a single browser out there rounds background image positions the same as element positions, resulting in misalignments. The tell-tale black lines show up in every browser: in IE6 and IE7, in Safari 3.1 and Opera 9.6, and in Firefox 3.0. What's worse is that there isn't a single case (across viewport sizes) that is handled consistently between all the browsers. So this CSS technique should in fact be considered broken.

Of course this brings up the question: is it really a browser bug or just an implementation quirk? I would argue that at least in the case where the image's width parity matches the element's, you'd expect perfectly matching rounding (i.e. for the first four rows of test cases). The other test cases are more ambiguous, and all you could hope for is consistent behaviour in each browser individually.

I wonder why this hasn't been brought up more, though. A quick sampling of designers around me shows that they have all encountered this bug, but don't really know a fix and just tweak the design or layout structure to mask the effect. If you really do need to align a background image properly, there is an ugly work-around: place your background image on an additional fixed-width element layered behind the center column. Center the background's element using margins rather than the background image itself, and clip it off at the sides using overflow: hidden on an additional wrapper. This causes the background's position to be rounded the same way as the column on top.

(*) Note that there is a choice whether to pad more on the left or on the right. I chose the left. This means that the last 4 rows of test cases are inherently ambiguous: a browser that misaligns all of these in the same fashion is in fact being consistent, just in the opposite direction. [...]
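The mismatch is easy to reproduce numerically. This sketch assumes one hypothetical rounding mix (flooring the element position but rounding the background position); real browsers each pick their own combination, which is exactly the problem:

```javascript
// An odd 355px viewport centering a 100px-wide box: the exact offset
// is 127.5px, which cannot land on a whole pixel. If the element and
// the background image round that half-pixel differently, they end
// up one pixel apart and the black edge peeks through.
function exactCenter(viewportWidth, boxWidth) {
  return (viewportWidth - boxWidth) / 2;
}

var offset = exactCenter(355, 100); // 127.5: the ambiguous case
var elemLeft = Math.floor(offset);  // hypothetical element rounding
var bgLeft = Math.round(offset);    // hypothetical background rounding
var misaligned = elemLeft !== bgLeft;
```

Even viewports (or matching parities of box and viewport) make the offset a whole number, which is why only some columns of the test grid show the black lines.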

Projective Texturing with Canvas


Update: People keep asking me to use this code, apparently unaware this has been made obsolete by CSS 3D. So it's gone now.

The Canvas tag's popularity is slowly increasing around the web. I've seen big sites use it for image rotation, graph plotting, reflection effects and much more.

However, Canvas is still limited to 2D: its drawing operations can only do typical vector graphics with so-called affine transformations, i.e. scaling, rotating, skewing and translation. Though there have been some efforts to try and add a 3D context to Canvas, these efforts are still experimental and only available for a minority of browsers through plug-ins.

So when my colleague Ross asked me if we could build a Cover Flow-like widget with JavaScript, my initial reaction was no... but really, that's just a cop out. All you need are textured rectangles drawn in a convincing perspective: a much simpler scenario than full blown 3D.


Perspective views are described by so-called projective transforms, which Canvas2D does not support. However, it does support arbitrary clipping masks as well as affine transforms of both entire and partial images. These can be used to do a fake projective transform: you cut up your textured surface into a bunch of smaller patches (which are almost-affine) and render each with a normal affine transform. Of course you need to place the patches just right, so as to cover any possible gaps. As long as the divisions are small enough, this looks convincingly 3D.
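The underlying math is a homography: a linear map in homogeneous coordinates followed by a divide. Here's a minimal stand-alone sketch (the matrix values are made up for illustration, not taken from my renderer):

```javascript
// A projective (homographic) map from texture coordinates (u, v) to
// screen coordinates. h is a flat 3x3 matrix. An affine transform is
// the special case where the bottom row is [0, 0, 1], making the
// divide by w a no-op — which is all Canvas2D supports natively.
function projectiveMap(h, u, v) {
  var x = h[0] * u + h[1] * v + h[2];
  var y = h[3] * u + h[4] * v + h[5];
  var w = h[6] * u + h[7] * v + h[8];
  return [x / w, y / w];
}

// A perspective "squeeze": w grows with v, so equal steps in v take
// shrinking steps on screen. That foreshortening is exactly what no
// affine transform can reproduce, hence the patch subdivision.
var h = [1, 0, 0, 0, 1, 0, 0, 1, 1];
var bottomEdge = projectiveMap(h, 1, 0);
var topEdge = projectiveMap(h, 1, 1);
```

Each small patch approximates this map with an affine transform sampled at its corners; the subdivision stops once the approximation error drops below a pixel.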

So some hacking later, I have a working projective transform renderer in JavaScript. The algorithm uses adaptive subdivision to maintain quality and can be tuned for detail or performance. At its core it's really just a lot of linear algebra, though I did have to add a bunch of tweaks to make it look seamless due to annoying aliasing effects.

Unfortunately Safari seems to be the only browser that can render it at an acceptable speed, so this technique is just a curiosity for now. The current code was mostly written for readability rather than performance though, so it's possible it could be optimized to a more useful state. Feel free to browse the code.

A real 3D Canvas in the browser would obviously rock, but you can still do some nifty things if you know the right tricks...

Abusing jQuery.animate for fun and profit (and bacon)


The days of static UIs that only have jarring transitions between pages are pretty much over. With frameworks like CoreAnimation or jQuery, it's easy to add useful animations to applications and webpages. In the case of jQuery, you can easily animate any CSS property, and you get free work-arounds for browser bugs to boot. You can run multiple animations (of arbitrary duration) at the same time, queue animations and even animate complex properties like colors or clipping rectangles.

But what if you want to go beyond mere CSS? You might have a custom widget that is drawn using canvas, whose contents are controlled by internal variables; maybe you're using 3D transformations to scale and position images on a page, and simple 2D tweening just doesn't cut it. In that case, it would seem you are out of luck: jQuery's .animate() method can only be applied to a collection of DOM elements, and relies heavily on the browser's own semantics for processing CSS values and their units. However, thanks to JavaScript's flexibility and jQuery's architecture, we can work around this, and re-use jQuery's excellent animation core for our own nefarious purposes.

Hackity hack hack

First, we need an object to store all the variables we wish to animate. We use an anonymous div element outside of the main document, so that jQuery's DOM calls still work on it. We simply add our own properties to it:

  var vars = $.extend($('<div></div>')[0], {
    foo: 1,
    bar: 2,
    customAnimate: true,
    updated: true
  });

In this case, our properties are foo and bar. We also set customAnimate and updated to identify this object (see below).

Next we need to override jQuery's default step function, which gets called for every step of an animation, and applies new values to an element's CSS properties:

  // jQuery.fx.step._default
  _default: function (fx) {
    fx.elem.style[fx.prop] = fx.now + fx.unit;
  }

We can replace it using the following snippet:

  var $_fx_step_default = $.fx.step._default;
  $.fx.step._default = function (fx) {
    if (!fx.elem.customAnimate) return $_fx_step_default(fx);
    fx.elem[fx.prop] = fx.now;
    fx.elem.updated = true;
  };

With the new step function, jQuery will check for the presence of a customAnimate property on any element it is animating. If present, it will assign the (unit-less) value fx.now to element.foo rather than element.style.foo, and mark the element by setting element.updated to true.

Now we're ready to animate, using the normal $.animate syntax:

  $(vars).animate({ foo: 5, bar: 10 }, { duration: 1000 });

The values vars.foo and vars.bar will now smoothly change over time. You can use any of jQuery's animation abilities as usual.

Now what about that updated variable? To actually use these animated values, you will need some kind of timer or step callback to read them back and draw them on the page. If you're using canvas, you need to redraw your entire widget for every change, but you don't want to be wasting CPU time by constantly refreshing it. Furthermore, if you're running multiple animations at the same time, you'll want to aggregate all your property changes into a single redraw per frame. This is easy with the updated property and a simple timer:

  setInterval(function () {
    if (!vars.updated) return;
    vars.updated = false;
    drawWidget();
  }, 30);

Now your widget will only refresh [...]

Poster design for Interfacultair Theaterfestival 2008


The design is meant to look like the cover of a board game box and accompanies the web site's design.


Drupal's Designer Future


In the past months I've been doing a lot more graphical design, and it's caused me to think about how it relates to Drupal. This prompted me to write a rather long blog piece with some insights and a call to action. If you are interested in the future of Drupal, please read on. The trigger was that I noticed that I'm getting less and less motivated to do graphics work for Drupal. It's not that I don't like design... I loved designing and building that site last month for example. But when it comes to Drupal graphics, the personal reward that I feel from doing it doesn't seem proportional to the effort I put in. This includes designing little banners for the spotlight, doing a t-shirt, making ad buttons, doing the association theme and more. The most recent big example was the Garland theme. When Stefan Nagtegaal showed a work-in-progress version of his 'Themetastic' theme (as it was then called) in September, I was instantly charmed and knew that this was our new default theme in the making and said so clearly. Many others were not convinced though and hammered on details, even though the basic design for the theme was rock solid. Some were not convinced of the theme's potential, or simply didn't see that we needed a theme that was graphically smashing rather than a good base to develop on. At that point, I essentially said "screw the community, this is going to be our default theme" and started refining the theme so it was perfect for core. This took several weeks. Until then, the rest of the community put its eggs in the wrong baskets and got a lot of useless design-by-committee done. These designs, which were in my opinion mediocre at best, were being pushed for inclusion. This may sound a bit harsh, but I honestly believe that if the most popular candidate theme, Deliciously Zen, had become the new default core theme, we'd have been ridiculed for still not 'getting' design after 6 years and Drupal 5 would not have been such a big release. 
Just like with 4.7, most people would not have stuck with Drupal long enough to discover how good it is.

Now that the Garland theme is finally done, everyone has suddenly changed their opinion and congratulated the community on its excellent work. I have to admit this hit a nerve, especially after I'd spent countless days and nights in the two weeks before fixing annoying IE rendering bugs, redoing the CSS layout and adding a whole new layer of Glitz und Glanz to Drupal core.

Only three people did serious work on what became the Garland theme: Stefan Nagtegaal did the original design from scratch and worked with Adrian Rossouw to come up with a proof-of-concept of the recolorable theme. I wrote the color picker, improved the theme and coded what became the color.module based on Adrian's work. Only a handful of people helped with testing of the theme during its development, and only after the main theme was finished; most of the bugs were in the recoloring mechanism. How can such a vital piece of Drupal 5 have had only 3 serious contributors, when the whole release had almost 500 people submitting patches?

To me, this shows that we have a problem in the Drupal community, or rather a knowledge void. Not enough Drupal people are savvy enough about theming and design to help out with even small tasks (like a banner), or to give quality tips and feedback on others' work. The result is that theming and design receive little attention. Most contributed themes and sites could look a lot better with just a bit more theming effort. And getting patches into core that give the defaults a little more oomph is tough, as they are often considered useless embellishments. Still, ever since Drupal started, there has been the recurring cry of doing m[...]

Children of Men Fake Media


I was linked to this video, which contains all the fake media created for the movie 'Children of Men' (see my earlier post).

Aside from sci-fi geek fun, I loved watching them to analyse the graphical designs they used. One of the subjects I'll be talking about in my OSCMS talk about design is branding and style. If you're going to attend, here's a great opportunity to do your homework.

Having an eye for graphical design is as important as creative skill, but luckily it's something you can train. Each of these ads or clips has a different look tailored to the product and its audience. Look at the graphical elements, such as images, colors, typography and animation, and try to figure out why each is appropriate and effective. There's also some public signage in there which has a style of its own.

If you have some time, a good exercise is to take a particular design, look at it for a couple of minutes, then try to reproduce it in a graphics program like Photoshop or Illustrator. When you're done, compare your version with the original and try to figure out what you did differently and whether that makes it better or worse. Look for qualities like readability, alignment, typography, contrast and aesthetics. The designs in the movie are probably a bit too graphics-heavy for this, but you can do the same with logos or web sites.

The clips can be viewed in QuickTime and were done by London-based design studio Foreign Office.

Tip: you can slowly move forwards or backwards in a QuickTime video by scrolling up/down.

DrupalCon - Theme Development presentation


At FOSDEM I gave a presentation about Theme Development for the Drupal Conference, held in the Drupal developers' room. You can take a look at the slides. The topics I covered are:

  • Overview of the Drupal theme system
  • Making the FriendsElectric theme
  • Clean, semantic XHTML/CSS
  • Some examples of sexy Drupal sites

The presentation was filmed; the video is available through BitTorrent. For info on how to play the Ogg Theora files, check these instructions.

The other presentations are linked on

Read on for some useful links related to Drupal theming.

Drupal-related

Some useful CSS links