Ergonomie web, Ruby on Rails et Architecture de l'information (Web ergonomics, Ruby on Rails and Information Architecture)

Fred Thoughts on Startups, UX etc | The UX Ray


The obvious source of failure no one talks about

Fri, 20 May 2016 23:14:44 +0200


Every time I see a startup dying, I can’t help trying to understand what went wrong. Unfortunately, you can’t turn back time or get a time-lapse of a multi-year history.

Unlike success, a startup failure can be hard to understand. Obvious reasons exist: lack of product/market fit, co-founder issues, poor execution, lack of culture, failure to build a great team; but they don’t explain everything.

Before 2011, Twitter was well known for its “fail whale” and inability to scale. Early Google relied on (poor) competitors’ data before it had a big enough index of its own. And Docker was once a Platform as a Service (PaaS) company before it pivoted to focus on Docker itself. All this before they became the success stories we hear about all over the Internet.

Semi-failure is even harder to analyse. How can you know what made a promising company barely survive after 5 or 10 years without diving into an insane amount of data? Besides the company internals, such as product evolution, business model pivots, executive turnover (a sign something’s fishy, not necessarily the reason) and poor selling strategies, you need to analyse the evolution of the market, their clients and, indeed, their competitors.

There’s something else no one talks about when analysing failure. Something so obvious it sounds ridiculous until you face it.

Yesterday I went to see a friend whose startup only has 1 or 2 months of cash left. Yesterday was also an optional bank holiday in France, but I still didn’t expect their office door to be closed.

I was shocked. If my company were about to die, I would spend the 30 or 45 remaining days trying to save it by all means. I’d run everywhere trying to find some more cash. I’d have the product team ship that deal-breaking, differentiating feature. I’d try to find a quick-win pivot. I’d even try to sell to my competitors in order to save some jobs. But I’d certainly not take a bank holiday.

Then I remembered every time I had been there during the past 2 years, sometimes dropping by for a coffee, sometimes using their Internet connection when I was working remotely and did not want to stay at home. There was no one before 10:00 AM, and no one after 7:00 PM. There were always people playing foosball or the PlayStation, or watching a game on TV. Not that they were thousands of people, more like a dozen. I remember late lunches and early parties.

Despite a fair product and a promising business plan, they missed something critical. “Work hard, play harder” reads from left to right, not the other way around. In the startup world, the obvious source of failure no one talks about is the lack of work.

The myth of the "always at 200%" team

Mon, 16 May 2016 11:02:42 +0200


In the past decade, I’ve met many entrepreneurs asking their team to be as dedicated to the job as they are:

“When I hire someone, I want them to be at 200%, 24/7, 365 days a year. If I send them an email at 2:00 AM, I expect an answer within 10 minutes. That’s the way you build a great business.”

They all failed.

I experienced that state of mind first hand, and it didn’t turn out well. Employees of the company would stay awake late to be sure they would not miss an email from the boss. They wanted to be the first ones to answer, to show how reactive and motivated they were. The ugly truth was, none of us was working efficiently at building a great company. We were slacking on the Web late at night, checking our email from time to time, just in case something happened. After one year, we all stopped pretending, and an incredible percentage of the team got divorced.

Building a great team is hard. Keeping it is harder, and a high turnover is already a failure.

Quoting Richard Dalton on Twitter,

Teams are immutable. Every time someone leaves, or joins, you have a new team, not a changed team.

This is even truer for small teams, where losing someone with specific skills can put the whole team at stake.

When one of my guys had to take a 3-month sick leave, I had 2 options. The first one was to split his projects across the whole team, putting more pressure on them, asking them to do extra hours and work during the weekend so we could finish all the projects on time as expected. The second one was to reschedule the less critical projects and explain to my management why we would postpone some of them, and why it was OK to do it.

Had I been in a huge corporation where the Monday morning report meeting is a sacred, well-oiled ceremony, with all your peers looking at each other, expecting you to fall from your pedestal as the leader gets mad at you for sliding projects, I might have picked the first option.

Forcing the whole team to work harder because one of them is missing would have been a huge mistake with terrible results. You can’t expect a team already working under pressure to work even more to catch up with their absent colleague’s projects without getting poor results and resentment towards the missing guy. Worse, it would have led to someone else leaving on sick leave after a few weeks, or leaving for good, destroying the team and all the efforts made to build it over more than a year.

As a manager, your first duty is to make sure the whole team succeeds, not to cover your ass from your management’s ire. For that reason, I decided to reschedule our projects, because the “always at 200%” team is, in the long run, a myth. And a planned failure.

Building an awesome Devops team on the ashes of an existing infrastructure

Mon, 09 May 2016 15:59:57 +0200

5 AM, the cellphone next to my bed rings continuously. On the phone, the desperate CTO of a SAAS company. Their infrastructure is a ruin, and the admin team has left the building with all the credentials in an encrypted file. They’ll run out of business if I don’t agree to join them ASAP, he cries, threatening to hang himself with a WiFi cable. Well, maybe it’s a bit exaggerated. But not that much.

I’ve given my Devops Commando talk about taking over an undocumented, unmanaged, crumbling infrastructure a couple of times lately. My experience in avoiding disasters and building awesome devops teams raised many questions, which led me to write everything down before I forget about it.

In the past 10 years, I’ve been working for a couple of SAAS companies and done some consulting for a few others. As a SAAS company, you need a solid infrastructure as much as a solid sales team. The first one is often considered a cost for the company, while the second is supposed to make it live. Both should actually be considered assets, and many of my clients only realised it too late.

Prerequisites

Taking over an existing infrastructure when everyone has left the building, and turning it into something viable, is a tough, long-term job that won’t happen without the management’s full support. Before accepting the job, ensure a few prerequisites are met or you’ll face certain failure.

Control over the budget

Having tight control of the budget is the most important part of taking over an infrastructure that requires a full replatforming. Since you have no idea of the amount of things you’ll have to add or change, it’s a tricky exercise that needs either some experience or a budget at least twice as big as the previous year’s. You’re not forced to spend all the money you’re allowed, but at least you’ll be able to achieve your mission during the first year.

Control over your team hires (and fires)

Whether you’re taking over an existing team or building one from scratch, be sure you have the final word on hiring (and firing). If the management can’t understand that people who used to “do the job” at a certain time of the company’s life don’t fit anymore, you’ll rush into big trouble. Things get worse when you inherit people who’ve been slacking or under-performing for years. After all, if you’re jumping in, it’s because some things are really fishy, aren’t they?

Freedom of technical choices

Even though you’ll have to deal with an existing infrastructure, be sure you’ll be given free hands on new technical choices when they happen. Being stuck with a manager who’ll block every new technology he doesn’t know about, or being forced to pick up all the newest fancy, not-production-ready things they’ve read about on Hacker News, makes an ops person’s life a nightmare. From my experience, keeping the technologies that work, even though they’re outdated or you don’t like them, can save you lots of problems, starting with managing other people’s egos.

Freedom of tools

Managing an infrastructure requires a few tools, and you’d better pick the ones you’re familiar with. If you’re refused a switch from Trac to Jira, or refused a PagerDuty account for any reason, be sure you’ll get in trouble very soon for anything else you’ll have to change. Currently, my favorite, can’t-live-without tools are Jira for project management, PagerDuty for incident alerting, Zabbix for monitoring and ELK for log management.

Being involved early in the product roadmap

As an ops manager, it’s critical to be aware of what’s going on at the product level. You’ll have to deploy development, staging (we call it theory because “it works in theory”) and production infrastructures, and the sooner you know, the better you’ll be able to work. Being involved in the product roadmap also means that you’ll be able to help the back[...]

The downside of loving your job too much

Sat, 07 May 2016 23:49:45 +0200


A few weeks ago, we were discussing what we’d do after leaving our company, before starting another job. Most of us wanted to take a 1-month break during the summer vacation to enjoy time with our families. Having a tough job including 24/7 on-calls at least once a month, I was more radical:

“When I leave Synthesio after leveraging my stocks for a few millions (I hope), I’ll take a 6-month vacation.”

My wife smiled at me and said: “I don’t believe you. After one month you’ll get so bored that you will start another company and we won’t see you for another 5 years.”

After thinking about it for a second, I told her she was wrong. I’ll probably get bored to tears after 2 weeks.

Migrating a non-Wordpress blog to Medium for the nerdiest

Thu, 28 Apr 2016 19:30:49 +0200

I’ve just finished migrating 10 years of blogging from Publify to Medium. If you’re wondering, I’ll keep this blog online and updated, but I wanted to benefit from the community and the awesome UI Medium brings, since I’ve never been able to do a proper design. In this post, I’ll explain how you can migrate any blog to Medium, since only Wordpress is supported so far.

To migrate your blog to Medium, you need:

- Some dev and UNIX skills.
- A blog.
- A Medium account and a publication.
- A Medium integration API key.
- To ask Medium to remove the publishing API rate limit on your publication.

Installing medium-cli

There are many Medium SDKs around, but most of them are incomplete and won’t allow you to publish in publications. There’s a workaround, though. I’ve chosen to rely on medium-cli, an NPM command line interface that does the trick.

$ (sudo) npm install (-g) medium-cli

Medium-cli does not allow pushing to a publication, so we’ll have to patch it a bit to make it work. Edit the lib/medium file and replace line 38 with:

.post(uri + 'publications/' + userId + '/posts')

Since medium-cli also cleans unwanted arguments, we’ll have to add 2 lines at the end of the clean() function, lines 61-62:

publishedAt: post.publishedAt,
notifyFollowers: post.notifyFollowers

These are 2 important options: publishedAt is an undocumented API feature that allows you to backdate your old posts. notifyFollowers will prevent Medium from spamming your followers with all your new publications.

Setting up medium-cli to post to your publication

To post to Medium with medium-cli, you need:

- Your Medium user id.
- Your Medium publication id.

We’ll get them using the API with some curl foo.
First, get your own information:

$ curl -H "Authorization: Bearer yourmediumapikeyyoujustgenerated" https://api.medium.com/v1/me
{"data":{"id":"1234567890","username":"fdevillamil","name":"Fred de Villamil ✌︎","url":"","imageUrl":"*IKwA8UN-sM_AoqVj.jpg"}}

Now, get your publication id:

$ curl -H "Authorization: Bearer yourmediumapikeyyoujustgenerated" https://api.medium.com/v1/users/1234567890/publications
{"data":[{"id":"987654321","name":"Fred Thoughts","description":"Fred Thoughts on Startups, UX and Co","url":"","imageUrl":"*EqoJ-xFhWa4dE-2i1jGnkg.png"}]}

You’re almost done. Now, create a medium-cli blog and configure it with your API key.

$ medium create myblog
$ medium login
$ rm -rf myblog/articles/*

Edit your ~/.netrc file as follows:

machine [...]
  token yourmediumapikeyyoujustgenerated
  url [...]
  userId 987654321

Here we are (born to be king): we can now export the content of your blog.

Export your blog content

There are many ways to export your blog content. If you don’t have database access, you can crawl it with any script using the readitlater library. For my Publify blog, I’ve written a rake task.

desc "Migrate to medium"
task :mediumise => :environment do
  require 'article'

  dump = "/path-to/newblog/"

  Article.published.where("id > 7079").order(:published_at).each do |article|
    if File.exists? "#{dump}/#{article.id}-#{article.permalink}"
      next
    end

    Dir.mkdir "#{dump}/#{article.id}-#{article.permalink}"

    open("#{dump}/#{article.id}-#{article.permalink}/[...]", 'w') do |f|
      f.puts "---"
      f.puts "title: \"#{article.title}\""
      f.puts "tags: \"#{article.tags[0,2].map { |tag| tag.display_name }.join(", ")}\""
      f.puts "canonicalUrl: [...]#{article.permalink}.html"
      f.puts "publishStatus: public"
      f.puts "license: cc-40-by-n[...]
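The Ruby task above can be approximated for a single post with a few lines of shell. This is a sketch, not the author's actual script: the `<id>-<permalink>/index.md` layout and the field values are assumptions, but the front matter mirrors the fields listed above, including the two patched-in options.

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the rake task above for one article.
# The "<id>-<permalink>/index.md" layout is an assumption.
write_front_matter() {
  local dir="$1" title="$2" canonical="$3" published="$4"
  mkdir -p "$dir"
  # publishedAt backdates the post; notifyFollowers: false avoids
  # spamming followers during the bulk import, as described above.
  cat > "$dir/index.md" <<EOF
---
title: "$title"
canonicalUrl: $canonical
publishStatus: public
publishedAt: $published
notifyFollowers: false
---
EOF
}

# Usage: back-date a 2016 post without notifying followers.
write_front_matter /tmp/export/7080-hello-world "Hello world" \
  https://example.com/hello-world.html 2016-04-28T19:30:49+02:00
```

A loop over your exported articles calling this function rebuilds the whole dump directory medium-cli consumes.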

ElasticSearch cluster rolling restart at the speed of light with rack awareness

Fri, 08 Apr 2016 20:08:13 +0200

Woot, first post in more than one year, that’s quite a thing!

ElasticSearch is an awesome piece of software, but some management operations can be quite a pain in the administrator’s ass. Performing a rolling restart of your cluster without downtime is one of them. On a 30-something server cluster running shards of up to 900GB, it would take up to 3 days. Fortunately, we’re now able to do it in less than 30 minutes on 70 nodes with more than 100TB of data.

If you look up ElasticSearch rolling restarts, here’s what you’ll find:

1. Disable shard allocation on your cluster.
2. Restart a node.
3. Enable shard allocation again.
4. Wait for the cluster to be green again.
5. Go to 2 until you’re done.

Indeed, step 4 is by far the longest, and you don’t have hours, if not days. Fortunately, there’s a solution to fix that: rack awareness.

About Rack Awareness

ElasticSearch shard allocation awareness is a rather overlooked feature. It allows you to add your ElasticSearch nodes to virtual racks so that primary and replica shards are not allocated in the same rack. That’s extremely handy to ensure failover when you spread your cluster between multiple data centers.

Rack awareness requires 2 configuration parameters: one at the node level (restart mandatory) and one at the cluster level (so you can do it at runtime). Let’s say we have a 4-node cluster and want to activate rack awareness. We’ll split the cluster into 2 racks: goldorack and alcorack.

On the first two nodes, add the following configuration option:

node.rack_id: "goldorack"

And on the 2 other nodes:

node.rack_id: "alcorack"

Restart, and you’re ready to enable rack awareness:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent" : {
    "cluster.routing.allocation.awareness.attributes" : "rack_id"
  }
}'

Here we are, born to be king. Your ElasticSearch cluster starts reallocating shards. Wait until the rebalance completes; it can take some time. You’ll soon be able to perform a blazing fast rolling restart.
First, disable shard allocation globally:

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.enable": "none"
  }
}'

Restart the ElasticSearch process on all nodes in the goldorack rack. Once your cluster is complete again, re-enable shard allocation:

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.enable": "all"
  }
}'

A few minutes later, your cluster is all green. Woot!

What happened there? Using rack awareness, you ensure that a full copy of your data is stored on the alcorack nodes. When you restart all the goldorack nodes, the cluster elects the alcorack replicas as primary shards, and your cluster keeps running smoothly since you did not break the quorum. When the goldorack nodes come back, they catch up with the newly indexed data and you’re green in a minute. Now, do exactly the same thing with the other half of the cluster and you’re done.

For the laziest (like me)

Since we’re all lazy and don’t want to ssh into 70 nodes to perform the restart, here’s the Ansible way to do it. In your inventory:

[escluster]
node01 rack_id=goldorack
node02 rack_id=goldorack
node03 rack_id=alcorack
node04 rack_id=alcorack

And the restart task:

- name: Restart elasticsearch on machines located in the rack_id passed as a parameter
  service: name=elasticsearch state=restarted
  when: rack is defined and rack == rack_id

That’s all folks, see you soon (I hope)![...]
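Wrapped as shell functions, the per-rack dance above looks like this. A sketch only, assuming curl and jq are installed and a cluster answering on localhost:9200 (override with ES); the restart of the rack's nodes themselves stays manual or goes through the Ansible task.

```shell
#!/usr/bin/env bash
# Sketch of the per-rack rolling restart described above.
ES="${ES:-localhost:9200}"

set_allocation() {
  # $1 is "none" before restarting a rack, "all" once it is back.
  curl -s -XPUT "http://$ES/_cluster/settings" \
    -d "{\"transient\":{\"cluster.routing.allocation.enable\":\"$1\"}}"
}

wait_for_green() {
  # Poll cluster health until the status is green again.
  until curl -s "http://$ES/_cluster/health" | jq -e '.status == "green"' >/dev/null; do
    sleep 10
  done
}

# Usage, one rack at a time:
#   set_allocation none
#   ...restart elasticsearch on every goldorack node...
#   set_allocation all
#   wait_for_green
#   ...then the same for alcorack...
```

Polling for green instead of sleeping a fixed time is what makes the second rack safe to touch: allocation is confirmed done, not assumed.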

How to fix your Elasticsearch cluster stuck in initializing shards mode?

Mon, 26 Jan 2015 03:00:00 +0100

Elasticsearch is one of my favorite pieces of software. I’ve been using it since 0.11 and have deployed every version since 0.17.6 in production. However, I must admit it’s sometimes a pain in the ass to manage. It can behave unexpectedly and either vomit gigabytes into your logs, or stay desperately silent.

One of those strange behaviours can happen after one or more data nodes are restarted. In a cluster running either lots of nodes or lots of shards, the post-restart shard allocation can take forever and never end. This post is about investigating and eventually fixing this behaviour. You don’t need a deep knowledge of Elasticsearch to understand it.

Red is dead

The first evidence something’s wrong comes from the usual cluster state query.

curl -XGET http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "myescluster",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 20,
  "number_of_data_nodes" : 16,
  "active_primary_shards" : 2558,
  "active_shards" : 5628,
  "relocating_shards" : 0,
  "initializing_shards" : 4,
  "unassigned_shards" : 22
}

This is all but good. The status: red is a sign your cluster is missing some primary shards. It means some data is still completely missing. As a consequence, queries on that data will fail and indexing will take a tremendous amount of time. When all the primary shards are back, the cluster switches to yellow to warn you it’s still working, but all your data is present.

initializing_shards: 4 is what your cluster is currently doing: bringing your data back to life.

unassigned_shards: 22 is where your lost primary shards are. The more you have there, the more data you’re missing and the more conflicts you’re likely to meet.

Baby come back, won’t you pleaaaase come back?

What happened there? When a data node leaves the cluster and comes back, Elasticsearch brings the data back and merges the records that may have been written while that node was away.
Since there may be lots of new data, the process can take forever, even more so when some shards fail at starting. Let’s run another query to understand the cluster state a bit better.

curl -XGET http://localhost:9200/_cat/shards
t37 434 p STARTED      12221982 13.8gb datanode02
t37 434 r STARTED      12221982 13.8gb datanode03
t37 403 p INITIALIZING 21620252 28.3gb datanode02
t37 404 p INITIALIZING  5720596  4.9gb datanode02
t37  20 p UNASSIGNED
t37  20 r UNASSIGNED
t37 468 p INITIALIZING  8313898 12.3gb datanode02
t37 470 p INITIALIZING 38868416 56.8gb datanode02
t37  37 p UNASSIGNED
t37  37 r UNASSIGNED
t37 430 r STARTED      11806144 15.8gb datanode04
t37 430 p STARTED      11806144 15.8gb datanode05
t37 368 p STARTED      34530372   43gb datanode05
t37 368 r STARTED      34530372   43gb datanode03
...

This is indeed a sample of the real output, which is actually 5628 lines long. There are a few interesting things here I want to show you.

Every line has the same form. The first field, t37, is the index name. This one is obviously called t37, and it has a huge number of primary shards to store the gazillion posts I’ve written over the years. The second field is the shard number, followed by either p if the shard is a primary one, or r if it’s a replica. The fourth field is the shard state: either UNASSIGNED, INITIALIZING or STARTED. The first 2 states are the ones we’re interested in. If I pipe the previous query through grep INITIALIZING and grep UNASSIGNED, I get 4 and 22 lines respectively.

If you need a fix… [...]
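Rather than one grep per state, the counting above can be done in a single awk pass over field 4. A self-contained sketch, with a trimmed sample of the output standing in for the real `curl -s localhost:9200/_cat/shards`:

```shell
#!/usr/bin/env bash
# Count shards per state (field 4 of _cat/shards). The here-document is
# a sample; in real life, pipe `curl -s localhost:9200/_cat/shards` in.
summary=$(awk '{ states[$4]++ } END { for (s in states) printf "%s %d\n", s, states[s] }' <<'EOF' | sort
t37 434 p STARTED      12221982 13.8gb datanode02
t37 434 r STARTED      12221982 13.8gb datanode03
t37 403 p INITIALIZING 21620252 28.3gb datanode02
t37 404 p INITIALIZING  5720596  4.9gb datanode02
t37  20 p UNASSIGNED
t37  20 r UNASSIGNED
EOF
)
echo "$summary"
```

On the 5628-line real output, the same one-liner gives the full INITIALIZING/UNASSIGNED breakdown at a glance.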

They’re just somebody that I used to know

Mon, 05 Jan 2015 03:00:00 +0100


I’m very cautious with the people I add and accept on LinkedIn. I’m so cautious I’ve published a request acceptance policy I never deviate from. I’m even more cautious when I have to accept a former colleague.

I have a very good reason for this.

I’m old enough to remember reference checking done the old way in the tech world. The company you were applying to would call your former boss or manager, sometimes asking you for their phone number, to check what kind of employee you were. This led to embarrassing situations if the above-mentioned manager was unaware of your will to check how green the grass was somewhere else, but that was part of applying to a company populated with narrow-brained people.

The rise of professional social networks like Linkedin, Viadeo or Xing has changed the game. By displaying a more or less comprehensive list of your current and past colleagues, they give recruiters privileged access to people you’ve worked with on a daily basis.

This is not a big deal until the day you get a cold call or email from the company your former colleague is applying to, asking you why they left and whether you would recommend them.

And then, you find yourself in that awkward situation where you don’t want to stab your former colleague in the back, but you can’t endorse them either, because you know something’s wrong. Maybe they have a problem with management or hygiene, or they can’t finish what they start, or they’re just an incompetent slacker who barely got through their probation period.

And the person at the other end of the line knows something’s wrong, and they’re playing cat and mouse with you, pushing you into the ropes, trying to get that critical information you’re trying to hide.

And you’re struggling, trying to protect someone you wish you’d never heard of, losing your credibility instead of theirs, in a twisted real-life role playing game. That’s the moment you know the only way out is to either admit you’re a Judas or answer “They’re just somebody that I used to know”. And get away with it whistling a little song.

Nagios called only once last night and it means a lot

Fri, 02 Jan 2015 03:00:00 +0100

Nagios called only once last night. The cellphone hidden under my pillow made its characteristic sound at 25 past midnight on New Year’s Eve, waking me up after a few minutes of sleep. A virtual machine had run out of space after the Gitlab automatic backup filled the entire disk. I had to get up and fix the issue before going back to bed. Nagios called only once last night, and there’s a lot to learn from it.

The first thing is that Nagios shouldn’t have called last night. I got dragged out of my bed for an issue on a development machine, and this should never happen. If there’s one thing I’m sure of about alerting, it’s that it should never get triggered out of office hours unless something business-critical is broken. A development machine is not business-critical, and it could have waited a few hours. It means being able to set priorities, even in production. A full /var on a computing server is not as critical as a database disk reaching 80%. A crashed frontend (behind a properly configured load balancer) is not as critical as a crashed master SQL server.

The second thing is about rethinking alerting. Thinking about your platform alerting from a business point of view means your traditional Nagios is both too much and not enough. It’s too much because 90% of the alerts you get could wait until the next morning, which, by the way, means you get too many of them. It’s not enough because the usual Nagios checks won’t tell you your service is broken, so you need to rely on application-specific probes such as Watchmouse or Pingdom.

Third, in 2015, alerting should only get triggered when your core system gets hurt. In other words, most of your system should self-heal. I had a great experience with hosting on Amazon Web Services because of one feature: the Autoscaling Group. Being able to upscale or downscale groups of nodes according to their load is awesome. Knowing a broken machine will replace itself is even better.
It happened a couple of times, because a Python process crashed and the load balancer check failed, or because of a kernel panic. It was a relief to find the replacement message in my mailbox in the morning instead of being woken by an SMS to restart a frontend process. It means there’s only one thing to worry about: your stored data.

Fourth, this was not the first time this alert was sent, and that should not have happened either. The monitoring sending an alert means something’s wrong. It can be a software or hardware failure, a usage issue (like logs filling a hard drive), a scaling issue or something else. When it happens, you can fix the problem, or fix the situation to ensure the probe won’t complain anymore. Deleting the oldest backups brings the probe back to green until it turns red again. Enforcing a lower local data retention fixes it for good. That difference is critical both in terms of architecture and focus. Building for green probes means you can focus on something else, sleeping for example.

Fifth, it was more than just a hard drive being full. Why did the chicken cross the road? I don’t know, but I know why my hard drive became full. It became full because a local backup filled it. Which means, by the way, that this backup could not complete properly. Which means the data is not backed up (well, it is, another way; this backup was redundant, but anyway). Which means there’s a more global risk to think about.

That’s a lot to think about from a simple SMS received during New Year’s Eve, because getting an alert from the monitoring is something that should never happen. And if I were too lazy to actually fix what’s broken, I’m pretty sure my wife would remind me she hates being woken at night. That’s a better motivation than any uptime contest.[...]
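The priority idea above (dev boxes wait for office hours, production pages anytime) can be sketched in plain Nagios object configuration. Host names and check commands here are illustrative, and the `workhours` and `24x7` timeperiods are the ones shipped in the stock sample config:

```
# Development machine: disk alerts can wait until the morning.
define service {
    use                   generic-service
    host_name             dev-gitlab
    service_description   /var disk space
    check_command         check_disk!/var
    notification_period   workhours     ; no pages at night
}

# Production database: business critical, page anytime.
define service {
    use                   generic-service
    host_name             db-master
    service_description   database disk space
    check_command         check_disk!/data
    notification_period   24x7
}
```

Same check, same thresholds; only the notification window encodes the business priority.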

Intergalactic Radio Station

Thu, 01 Jan 2015 20:04:50 +0100

First of all, let me wish you a happy new year and a great holiday season. Whether you’re a regular reader or just passing by, that’s the least you deserve.

Last week, I spent a few days at my mom’s in Bordeaux. If you haven’t heard of it yet, Bordeaux is a city whose inhabitants still believe they’re the center of the world because it was famous for its wine, philosophers and slave trading during the 18th century. During the few days I spent there, I rediscovered my father’s record collection. He was a fan of Berlioz, Procol Harum and early electronic music. I spent part of my childhood listening to Jean-Michel Jarre, Kraftwerk or Vangelis, before I went way further than he himself could imagine.

One of the reasons dad loved them was their futuristic aspect. If you remember the million-spectator gigs Jarre held everywhere in the world, there was much more to them than a free concert and an impressive light show. The fully digital music and laser-made light effects gave the feeling of coming from a distant future, with the artist as its messenger. My father loved holding that piece of the future in his hands.

There’s something definitely ironic my father probably didn’t get back then. How can you get a glimpse of the future without immediately making it part of the past? That’s an important aspect of the relativity of time, one that had me spending years focusing on a past I couldn’t catch anymore instead of doing what I had to do to build the present.

Amongst all dad’s albums, I rediscovered Vangelis’ Direct. The Greek composer is mostly known for the Chariots of Fire and Blade Runner OSTs, but lots of his albums are worth listening to, starting with Direct or The Friends of Mr Cairo. Direct is interesting in many aspects. It sounds so incredibly outdated for a futuristic album that you would almost expect it to be the soundtrack of some late-1920s science fiction movie a la Metropolis.
Listening to it, you find yourself in the position of an archeologist from a distant future discovering a record from an ancient civilization. This is the exact feeling I had, without being able to translate it into words, when I first read Van Vogt’s books in the mid 90’s. Most of Van Vogt’s books depicted the complex world of the Cold War transposed into a distant future. Discovering them as a post-USSR reader with a schoolbook knowledge of History made them terribly old-fashioned.

While being old-fashioned, Direct also brings a strange, dark vision of the future. Something between a clean dystopia a la Gattaca and some scary post-apocalyptic Dr Bloodmoney, where the frontier between a technology serving or enslaving mankind gets blurry. This perspective gets interesting as 2014 was hot in terms of artificial intelligence, robots and drones becoming mainstream. It’s fascinating to see how the future we shaped diverged from the one we once imagined.

[...]