
Planet Grep



Frank Goossens: Music from Our Tube: Mopo

Sat, 17 Mar 2018 05:14:36 +0000


As heard on; Mopo – Tökkö. The sound is vaguely reminiscent of Morphine, due to the instrumentation no doubt, but this Finnish trio is more into jazz (their bio states: “The band draws their inspiration from jazz, punk and the Finnish nature”). They rock big time nonetheless!

Watch this video on YouTube.

If you have time watch a concert of theirs on YouTube and if you dig them; they’ll be playing in Ghent and Leuven this month!


Xavier Mertens: TROOPERS 18 Wrap-Up Day #2

Thu, 15 Mar 2018 22:31:58 +0000

Hello Readers, here is my wrap-up of the second day. Usually, the second day is harder in the morning due to the social events but, at TROOPERS, they organized a hacker run that started at 06:45 for the most motivated of us. Today, the topic of the 3rd track switched from SAP to Active Directory, but I remained in tracks #1 and #2 to follow a mix of offensive and defensive presentations. By the way, to get a good idea of the TROOPERS atmosphere, have a look at their introduction video for 2018.

As usual, the second day also started with a keynote, this one given by Rossella Mattioli, who works at ENISA. She promotes this European agency across multiple security conferences, but TROOPERS is special because their mottos match: “Make the world (the Internet) a safer place”. ENISA's role is to close the gap between industries, the security community and the EU Member States. Rossella reviewed the projects and initiatives promoted by ENISA, like the development of papers for all types of industries (power, automation, public transport, etc). To demonstrate that, today, security must be addressed at a global scale, she gave the following example: think about your journey to TROOPERS and list all the actions that you performed with online systems. The list is quite long! Another role of ENISA is to make CSIRTs work better together. Did you know that there are 272 different CSIRTs in the European Union? And they don't always communicate in an efficient way. That's why ENISA is working on common taxonomies to help them. Their website has plenty of useful documents that are available for free, just have a look!

After a short coffee break and interesting chats with peers, I moved to the defensive track to follow Matt Graeber, who presented “Subverting Trust in Windows”. Matt warned that the talk was a “hands-on” edition of his complete research, which is available online; the presentation focused on live demos. First, what is “trust” in the context of software? Is the software from a reputable vendor? What is the intent of the software? Can it be abused in any way? What is the protection status of the signing keys? Is the certificate issuer reputable? Is the OS validating signer origin and code integrity properly? Trust maturity levels can be matched with enforcement levels. But what is the intent of code signing? To attest to the origin and integrity of software. It is NOT an attestation of trust or intent! But it can be used as an enforcement mechanism for previously established trust. Some bad assumptions reported by Matt:

  • Signed == trusted
  • Non-robust signature verification
  • No warning/enforcement of known bad certs

One of the challenges is to detect malicious files and, across millions of events generated daily, how do you take advantage of signed code? A signature can be valid but the path suspicious (e.g. C:\Windows\Tasks\notepad.exe). A bad approach is to just ignore a file because it is signed… and Matt demonstrated why! The first attack was based on Subject Interface Package hijacking (attacking the validation infrastructure): he manually added a valid signature to a non-signed PE file. Brilliant! The second attack scenario was certificate cloning and root CA installation. Here again, very nice, but it's trickier to achieve because the victim has to install the root CA on his computer.
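A quick way to see what Windows itself thinks of a file's signature is PowerShell's Get-AuthenticodeSignature; a small example (the path is arbitrary):

PS C:\> Get-AuthenticodeSignature C:\Windows\System32\notepad.exe | Format-List Status, SignerCertificate
# A "Valid" status only attests origin and integrity - it says nothing about intent, or about the file's location being sane.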
The conclusion of the talk: even signed binaries can't be trusted… All the details of the attacks are available in Matt's papers “Subverting Trust in Windows” and “Code Signing Certificate Cloning Attacks and Defenses”.

Then, I switched back to the offensive track to listen to Salvador Mendoza and Leigh-Anne Galloway, who presented “NFC payments: The art of relay and replay attacks“. They started with a recap of the NFC technology. Payments via NFC are not new: the first implementation was in 1996 in Korea (public transport). In 2015, ApplePay was launched. And today, 40% of non-cash operations are performed over NFC! [...]

Mattias Geniar: Enable the slow log in Elastic Search

Thu, 15 Mar 2018 18:30:47 +0000



Elasticsearch is pretty cool: you can just fire off HTTP commands to it to change (most of) its settings on the fly, without restarting the service. Here's how you can enable the slowlog to log queries that exceed a certain time threshold.

These are enabled per index you have, so you can be selective about it.

Get all indexes in your Elasticsearch cluster

To start, get a list of all your Elasticsearch indexes. I'm using jq here for the JSON formatting (get jq here). The examples below assume Elasticsearch listens on localhost:9200.

$ curl -s -XGET 'http://localhost:9200/_cat/indices'
green open index1 BV8NLebPuHr6wh2qUnp7XpTLBT 2 0 425739 251734  1.3gb  1.3gb
green open index2 3hfdy8Ldw7imoq1KDGg2FMyHAe 2 0 425374 185515  1.2gb  1.2gb
green open index3 ldKod8LPUOphh7BKCWevYp3xTd 2 0 425674 274984  1.5gb  1.5gb

This shows you have 3 indexes, called index1, index2 and index3.

Enable slow log per index

Make a PUT HTTP call to change the settings of a particular index. In this case, index index3 will be changed.

$ curl -XPUT -d '{"" : "50ms","": "50ms","index.indexing.slowlog.threshold.index.warn": "50ms"}' | jq

If you pretty-print the JSON payload, it looks like this:

  "" : "50ms",
  "": "50ms",
  "index.indexing.slowlog.threshold.index.warn": "50ms"

Which essentially means: log all queries, fetches and index rebuilds that exceed 50ms with a severity of "warning".
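To double-check that the new thresholds are active, you can read the settings back (again assuming Elasticsearch on localhost:9200):

$ curl -s -XGET 'http://localhost:9200/index3/_settings' | jq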

Enable global warning logging

To make sure those warnings actually get written to your log files, enable that log level in your cluster.

$ curl -XPUT -d '{"transient" : {"" : "WARN", "logger.index.indexing.slowlog" : "WARN" }}' | jq

Again, pretty-printed payload:

  "transient" : {
    "" : "WARN",
    "logger.index.indexing.slowlog" : "WARN"

These settings aren't persisted across restarts; they are only kept in memory and active for the currently running Elasticsearch instance.
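If you do want the slow log logging to survive a restart, the same cluster settings call accepts a "persistent" block instead of "transient" (a sketch, same localhost:9200 assumption as above):

$ curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{"persistent" : {"logger.index.search.slowlog" : "WARN", "logger.index.indexing.slowlog" : "WARN" }}' | jq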

The post Enable the slow log in Elastic Search appeared first on

Dries Buytaert: Canonical URLs

Thu, 15 Mar 2018 01:30:02 +0000


Google Search Console showed me that I have some duplicate content issues on dri.es, so I went ahead and tweaked my use of the rel="canonical" link tag.

When you have content that is accessible under multiple URLs, or even on multiple websites, and you don't explicitly tell Google which URL is canonical, Google makes the choice for you. By using the rel="canonical" link tag, you can tell Google which version should be prioritized in search results.
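The tag itself is a one-liner in the head of each duplicate page, pointing at the preferred URL; a generic example (the URL is a placeholder) looks like this:

<link rel="canonical" href="https://example.com/preferred-version-of-this-page" />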

Doing canonicalization well improves your site's SEO, and doing canonicalization wrong can be catastrophic. Let's hope I did it right!

Dries Buytaert: RSS auto-discovery

Thu, 15 Mar 2018 01:21:11 +0000


While working on my POSSE plan, I realized that my site no longer supported "RSS auto-discovery". RSS auto-discovery is a technique that makes it possible for browsers and RSS readers to automatically find a site's RSS feed. For example, when you enter dri.es in an RSS reader or browser, it should automatically discover that the feed is dri.es/rss.xml. It's a small adjustment, but it helps improve the usability of the open web.

To make your RSS feeds auto-discoverable, add a <link> tag inside the <head> tag of your website. You can even include multiple <link> tags, which allows you to make multiple RSS feeds auto-discoverable at the same time. Here is what it looks like for my site:
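<link rel="alternate" type="application/rss+xml" title="Dries Buytaert" href="https://dri.es/rss.xml" /> <!-- reconstructed: the original snippet was stripped from the feed, so the exact title/href may differ -->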

Pretty easy! Make sure to check your own websites — it helps the open web.

Xavier Mertens: TROOPERS 18 Wrap-Up Day #1

Wed, 14 Mar 2018 23:31:42 +0000

I’m back in Heidelberg (Germany) for my yearly trip to the TROOPERS conference. I really like this event and I’m glad to be able to attend it again (thanks to the crew!). So, here is my wrap-up for the first day. The conference organization remains the same, with a good venue. It’s a multi-track event, and managing your schedule is not easy because you’re always facing critical choices between two interesting talks scheduled in parallel. As in previous editions, TROOPERS is renowned for its badge! This year, it is a radio receiver that you can use to receive live audio feeds from the different tracks but also some hidden information. Just awesome!

After a short introduction to this edition by Nicki Vonderwell, the main stage was given to Mike Ossmann, the founder of Great Scott Gadgets. The introduction was based on his personal story: “I should not be here, I’m a musician”. Mike is usually better at giving technical talks but, this time, it was a completely different exercise for him and he did well! Mike tried to open the eyes of the security community about how to share ideas. Indeed, the best way to learn new stuff is to explore what you don’t know. Assign time to research and you will find. And, once you have found something new, how do you share it with the community? According to Mike, it’s time to change how we share ideas. For Mike, “ideas have value, proportionally to how they are shared…”. Sometimes ideas are found by mistake, or found in parallel by two different researchers. Progress is made by small steps and many people. How do you properly use your idea to get some investors coming to you? It takes work to turn an idea into a product! One of his ideas is to abolish the patents that prevent many ideas from becoming real products. Hackers are good at open source, but they lag behind at writing papers and documentation (compared to academics). Really good and inspiring ideas!

Then I remained in the “offensive” track to attend Luca Bongiorni’s presentation: “How to bring HID attacks to the next level”. Let’s start with the conclusion of this talk: “after this, you will be more paranoid about USB devices“. What are HIDs, or “Human Interface Devices”? Devices like mice, keyboards, game controllers, drawing tablets, etc. Most of the time no driver is required to use them, they’re whitelisted by DLP tools, and they’re not under the AVs’ scope. What could go wrong? Luca started with a review of the USB attack landscape. Yes, such attacks have been around for a while:

  • 1st generation: Teensy (2009) and RubberDucky (2010), both simple keyboard emulators. The RubberDucky was able to change its VID/PID to evade filters (to mimic another keyboard type).
  • 2nd generation: BadUSB (2014), TurnipSchool (2015)
  • 3rd generation: WHID Injector (2017), P4wnP1 (2017)

But, often, the challenge is to entice the victim into plugging the USB device into his/her computer. To help, it’s possible to weaponise cool USB gadgets. Based on social engineering, it’s possible to find what looks interesting to the victim. He likes beer? Just weaponise a USB fridge: there are more chances that he will connect it. Other good candidates are USB plasma balls. The second part of the talk was a review of the cool features and demos of the WHID and P4wnP1. The final idea was to compromise an air-gapped computer. How?
  • Entice the user to connect the USB device
  • It creates a wireless access point that the attacker can use, or it connects to an SSID and phones home to a C2 server
  • It creates a COM port
  • It creates a keyboard
  • A PowerShell payload is injected using the keyboard feature
  • Data is sent to the device via the COM port
  • Data is exfiltrated to the attacker
  • Win!

It’s quite difficult to protect against this kind of attack because people need USB devices! So, never trust USB devices and use a USB condom. There are tools like DuckHunt on Windows or USBGuard/USBdeath on Linux. Good tip: Microsoft has an event log which reports the[...]

Dries Buytaert: Rest in peace, Professor

Wed, 14 Mar 2018 23:18:13 +0000


Stephen Hawking passed away this morning at age 76. He was an inspiration in so many ways: his contributions to science unlocked a universe of exploration and he helped to dismantle stigma surrounding disability. Perhaps most importantly, he dedicated his life to meaningful work that he was deeply passionate about; a message that is important for all. Rest in peace, Professor.

Dries Buytaert: Acquia to increase engineering capacity by 60% in 2018

Tue, 13 Mar 2018 16:45:48 +0000


I'm excited to share that Acquia plans to increase the capacity of our global research and development team by 60 percent in 2018. Last year, we saw adoption of Acquia Lift more than double, and adoption of Acquia Cloud Site Factory nearly double. Increasing our investment in engineering will allow us to support this growth, in addition to accelerating our innovation. Acquia's product teams have an exciting year ahead!

Frank Goossens: Taking over Async JavaScript WordPress plugin

Tue, 13 Mar 2018 05:44:02 +0000


David Clough, author of the Async JavaScript WordPress plugin, contacted me on March 5th to ask if I was interested in taking over ownership of his project. Fast-forward to the present: I will release a new version of AsyncJS on March 13th on wordpress.org, which will:

  • integrate all “pro” features (that’s right, free for all)
  • include some rewritten code for easier maintenance
  • be fully i18n-ready (lots of strings to translate :-) )

I will provide support on the forum (be patient though, I don’t have a deep understanding of the code, functionality & quirks yet). I also have some more fixes/smaller changes/micro-improvements in mind (well, a Trello board really) for the next release, but I am not planning major changes or new functionality. Then again, I vaguely remember saying something similar about Autoptimize a long time ago, and look where that got me …

Anyway, kudos to David for a great plugin with a substantial user base (over 30K active installations) and for doing “the right thing” (as in not putting it on the plugin market in search of the highest bidder). I hope I’ll do you proud, man!


Dries Buytaert: How to use Drupal 8's off-canvas dialog in your modules

Tue, 13 Mar 2018 00:42:54 +0000

The goal of this tutorial is to show how to use Drupal 8.5's new off-canvas dialog in your own Drupal modules. The term "off-canvas" refers to the ability for a dialog to slide in from the side of the page, in addition to resizing the page so that no part of it is obstructed by the dialog. You can see the off-canvas dialog in action in this animated GIF.

This new Drupal 8.5 feature allows us to improve the content authoring and site building experience by turning Drupal outside-in. We can use the off-canvas dialog to enable the content creator or site builder to seamlessly edit content or configuration in place, and see any changes take effect immediately. There is no need to navigate to the administrative backend to make edits. As you'll see in this tutorial, it's easy to use the off-canvas dialog in your own Drupal modules.

I use a custom album module on dri.es for managing my photo albums and for embedding images in my posts. With Drupal 8.5, I can now take advantage of the new off-canvas dialog to edit the title, alt-attribute and captions of my photos. As you can see in the animated GIF above, every photo gets an "Edit"-link. Clicking the "Edit"-link opens up the off-canvas dialog. This allows me to edit a photo in context, without having to go to another page to make changes. So how did I do that?

Step 1: Create your form, the Drupal way

Every image on dri.es has its own unique path, and I can edit each image at that path followed by /edit. For example, one path gives you an image of the Niagara Falls, and if you had the right permissions you could edit that image at the same path plus /edit (you don't 😉). Because you don't have the right permissions, I'll show you a screenshot of the edit form instead.

I created those paths (or routes) using Drupal's routing system, and I created the form using Drupal's regular form API. I'm not going to explain how to create a Drupal form in this post, but you can read more about this in the routing system documentation and the form API documentation. Here is the code for creating the form (the opening of the snippet was stripped in syndication; the class name and buildForm() signature are a reconstruction):

<?php

class AlbumImageEditForm extends FormBase {

  public function buildForm(array $form, FormStateInterface $form_state, $image = NULL) {
    $form['path'] = [
      '#type' => 'hidden',
      '#value' => $image->getUrlPath(), // Unique ID of the image
    ];
    $form['title'] = [
      '#type' => 'textfield',
      '#title' => t('Title'),
      '#default_value' => $image->getTitle(),
    ];
    $form['alt'] = [
      '#type' => 'textfield',
      '#title' => t('Alt'),
      '#default_value' => $image->getAlt(),
    ];
    $form['caption'] = [
      '#type' => 'textarea',
      '#title' => t('Caption'),
      '#default_value' => $image->getCaption(),
    ];
    $form['submit'] = [
      '#type' => 'submit',
      '#value' => t('Save image'),
    ];
    return $form;
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    $values = $form_state->getValues();
    $image = Image::loadImage($values['path']);
    if ($image) {
      $image->setTitle($values['title']);
      $image->setAlt($values['alt']);
      $image->setCaption($values['caption']);
      $image->save();
    }
    $form_state->setRedirectUrl(Url::fromUserInput('/album/' . $image->getUrlPath()));
  }

}
?>

Step 2: Add an edit link to my images

First, I want to overlay an "Edit"-button over my image. If you were to look at the HTML code, the image link uses a tag along these lines (the href value is illustrative):

<a href="/album/niagara-falls/edit" class="edit-button">Edit</a>

Clicking the link doesn't open the off-canvas dialog yet. The class="edit-button" is used to style the button with CSS and to overlay it on top of the image.

Step 3: Opening the off-canvas dialog

Next, we have to tell Drupal to open the form in the off-canvas dialog when the "Edit"-link is clicked. To open th[...]
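The post is cut off at this point in the feed, but the gist of this last step in Drupal 8.5 is to give the link the use-ajax class and the off-canvas data attributes, and to attach the dialog library. A minimal sketch of such a render array (the route path and variable names are assumptions, not taken from the original post):

$build['edit'] = [
  '#type' => 'link',
  '#title' => t('Edit'),
  '#url' => Url::fromUserInput('/album/' . $image->getUrlPath() . '/edit'),
  '#attributes' => [
    // 'use-ajax' makes Drupal handle the click as an AJAX request; the two
    // data attributes tell it to render the response in the off-canvas dialog.
    'class' => ['use-ajax', 'edit-button'],
    'data-dialog-type' => 'dialog',
    'data-dialog-renderer' => 'off_canvas',
  ],
  '#attached' => [
    'library' => ['core/drupal.dialog.ajax'],
  ],
];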

Jeroen Budts: v5: Pelican

Mon, 12 Mar 2018 20:07:00 +0000

It's been 5 years since I wrote my last blogpost. Until today.

The past years my interest in blogging declined. Combined with the regular burden of Drupal security updates, I seriously considered simply scrapping everything and replacing the website with a simple one-pager. Although a lot of my older blog posts are not very useful (to say the least) or are very out of date, some of them bring back memories of long gone times. Needless to say, I didn't like the idea of deleting everything from these past 15 years.

Meanwhile I also have the feeling that social networks, Facebook in particular, have too much control over the content we create. This gave me the idea to start using this blog again. By writing content on my own blog, I can take control back and still easily post it to Facebook, Twitter and so on. A few days later Dries wrote a post about his very similar plans. Apparently this idea even has a name: The IndieWeb movement.

I decided …

Xavier Mertens: [SANS ISC] Payload delivery via SMB

Mon, 12 Mar 2018 11:46:55 +0000

I published the following diary on “Payload delivery via SMB“:

This weekend, while reviewing the collected data for the last days, I found an interesting way to drop a payload to the victim. This is not brand new and the attack surface is (in my humble opinion) very restricted but it may be catastrophic. Let’s see why… [Read more]


[The post [SANS ISC] Payload delivery via SMB has been first published on /dev/random]


Dries Buytaert: That "passion + learning + contribution + relationships" feeling

Sun, 11 Mar 2018 23:01:29 +0000


Talking about the many contributors to Drupal 8.5, a few of them shouted out on social media that they got their first patch into Drupal 8.5. They were excited but admitted it was more challenging than anticipated. It's true that contributing to Drupal can be challenging, but it is also true that it will accelerate your learning, and that you will likely feel an incredible sense of reward and excitement. And maybe best of all, through your collaboration with others, you'll forge relationships and friendships. I've been contributing to Open Source for 20 years and can tell you that the combined "passion + learning + contribution + relationships"-feeling is one of the most rewarding feelings there is.

Dries Buytaert: Many small contributions add up to big results

Sun, 11 Mar 2018 22:49:44 +0000


I just updated my site to Drupal 8.5 and spent some time reading the Drupal 8.5 release notes. Seeing all the different issues and contributors in the release notes is a good reminder that many small contributions add up to big results. When we all contribute in small ways, we can make a lot of progress together.

Mattias Geniar: Laravel & MySQL auto-adding “on update current_timestamp()” to timestamp fields

Sun, 11 Mar 2018 18:37:32 +0000

The post Laravel & MySQL auto-adding “on update current_timestamp()” to timestamp fields appeared first on We hit an interesting Laravel "issue" while developing Oh Dear! concerning a MySQL table. Consider the following database migration to create a new table with some timestamp fields. Schema::create('downtime_periods', function (Blueprint $table) { $table->increments('id'); $table->unsignedInteger('site_id'); $table->foreign('site_id')->references('id')->on('sites')->onDelete('cascade'); $table->timestamp('started_at'); $table->timestamp('ended_at')->nullable(); $table->timestamps(); }); This turns into a MySQL table like this. mysql> DESCRIBE downtime_periods; +------------+------------------+------+-----+-------------------+-----------------------------+ | Field | Type | Null | Key | Default | Extra | +------------+------------------+------+-----+-------------------+-----------------------------+ | id | int(10) unsigned | NO | PRI | NULL | auto_increment | | site_id | int(10) unsigned | NO | MUL | NULL | | | started_at | timestamp | NO | | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP | | ended_at | timestamp | YES | | NULL | | | created_at | timestamp | YES | | NULL | | | updated_at | timestamp | YES | | NULL | | +------------+------------------+------+-----+-------------------+-----------------------------+ 6 rows in set (0.00 sec) Notice the Extra column on the started_at field? That was unexpected. On every save/modification to a row, the started_at would be auto-updated to the current timestamp. The fix in Laravel to avoid this behaviour is to add nullable() to the migration, like this. $table->timestamp('started_at')->nullable(); To fix an already created table, remove the Extra behaviour with a SQL query. MySQL> ALTER TABLE downtime_periods CHANGE started_at started_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP; Afterwards, the table looks like you'd expect: mysql> describe downtime_periods; +------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------+------------------+------+-----+---------+----------------+ | id | int(10) unsigned | NO | PRI | NULL | auto_increment | | site_id | int(10) unsigned | NO | MUL | NULL | | | started_at | timestamp | YES | | NULL | | | ended_at | timestamp | YES | | NULL | | | created_at | timestamp | YES | | NULL | | | updated_at | timestamp | YES | | NULL | | +------------+------------------+------+-----+---------+----------------+ 6 rows in set (0.00 sec) Lesson learned! The post Laravel & MySQL auto-adding “on update current_timestamp()” to timestamp fields appeared first on[...]

Lionel Dricot: Pour l’abolition du Like

Sat, 10 Mar 2018 14:06:02 +0000

The like is cowardly, useless, lazy. It serves no purpose; it keeps us in a state of stupor. I am not brave enough to take a stand, to reshare to my audience. I am not motivated enough to reply to the author. So I like. And I try to increase my number of likes. Or of followers who, subtly, are in fact represented by likes on Facebook pages. All I seek anymore is to flatter my ego and, playing the great prince, I sometimes grant a little cheap ego to others. Here, peasants! Have a like: you amused me, moved me or touched me! Your cause is important and deserves my miserable, useless like.

I want to go fast. I consume without thinking. I like a photo for the emotion it immediately triggers, but without ever really digging, without thinking about what it actually means. My critical mind melts away, disappears, buried under the likes, which is a godsend for advertisers. And then I end up buying likes. Likes that no longer mean anything, followers who don't exist. Inevitably, from the moment an observable (the number of likes) is supposed to represent a much harder concept (popularity, success), the system will do everything to maximize the observable while decorrelating it from what it is meant to represent. Followers and likes are now all fake. They only have the religious meaning we grant them.

We sit in our little bubble, interacting with a few ideas that flatter us, that stroke us the right way, that make us addicted. Perhaps not addicted enough, according to some who take pride in making this addiction even stronger. We have to break out of this infernal spiral. The next social network, the finally-social network, will be anti-like. There will be no likes. Just a reshare, a reply or a voluntary micropayment. No statistics, no reshare counter, no number of “people reached”. I'll go even further: there will be no follower counts! Some followers might even follow you invisibly (you wouldn't know they follow you). No more race for an artificial audience. Our value will come from our content, not from our pseudo-popularity.

When this network sees the light of day, using it will feel distressing, meaningless. We will have the impression that our writings, our words are sinking into a bottomless pit. Am I being read? Am I speaking into the void? Paradoxically, this void might allow us to win back the voice we handed over to advertisers in exchange for a little ego, and to free ourselves from our prisons of populism in 140 characters. And, sometimes, miraculously, a few cents of some cryptocurrency will come to thank us for our shares. Anonymous, trivial. But so meaningful.

The first version of a network of this kind has existed for centuries. It's called “the book”. Initially, it was reserved for a minority elite. As it became popular, it was corrupted, turned into a commercial machine full of statistics that tries to print the same few books, called “best-sellers”, millions of times. But the true book, the anti-like book, still exists: independent, self-published, biding its time to be reborn on our networks, on our e-readers. The book of the 21st century is there, in our imaginations, under our keyboards, waiting for a network that would combine the simplicity of a Twitter with the depth of a fich[...]

Dries Buytaert: Thank you, Tiffany

Thu, 08 Mar 2018 20:17:08 +0000


I recently had the opportunity to read Tiffany Farriss' Drupal Association Retrospective. In addition to being the CEO of Palantir.net, Tiffany also served on the Drupal Association Board of Directors for nine years. In her retrospective post, Tiffany shares what the Drupal Association looked like when she joined the board in 2009, and how the Drupal Association continues to grow today.

What I really appreciate about Tiffany's retrospective is that it captures the evolution of the Drupal Association. It's easy to forget how far we've come. What started as a scrappy advisory board, with little to no funding, has matured into a nonprofit that can support and promote the mission of the Drupal project. While there is always work to be done, Tiffany's retrospective is a great testament to our community's progress.

I feel very lucky that the Drupal Association was able to benefit from Tiffany's leadership for nine years; she truly helped shape every aspect of the Drupal Association. I'm proud to have worked with Tiffany; she has been one of the most influential, talented members of our Board, and has been very generous by contributing both time and resources to the project.

Frank Goossens: Love thy brand

Thu, 08 Mar 2018 17:44:04 +0000


I ordered a mug with the logo of my favorite brand;


Let there be coffee!


Xavier Mertens: [SANS ISC] CRIMEB4NK IRC Bot

Thu, 08 Mar 2018 11:39:41 +0000

I published the following diary on “CRIMEB4NK IRC Bot“:

Yesterday, I got my hands on the source code of an IRC bot written in Perl. Yes, IRC (“Internet Relay Chat”) is still alive! While the chat protocol is less used today to handle communications between malware and their C2 servers, it remains an easy way to interact with malicious bots that provide interesting services to attackers. I had a quick look at the (poorly written) source code and found some interesting information… [Read more]

[The post [SANS ISC] CRIMEB4NK IRC Bot has been first published on /dev/random]


Dries Buytaert: Drupal 8.5.0 released

Thu, 08 Mar 2018 01:45:37 +0000


Earlier today, we released Drupal 8.5.0, which ships with improved features for content authors, site builders and developers.

Content authors can benefit from enhanced media support and content moderation workflows. It is now easier to upload, manage and reuse media assets, in addition to moving content between different workflow states (e.g. draft, archived, published, etc).

Drupal 8.5.0 also ships with a Settings Tray module, which improves the experience for site builders. Under the hood, the Settings Tray module uses Drupal 8.5's new off-canvas dialog library; Drupal module developers are encouraged to start using these new features to improve the end-user experience of their modules.

It's also exciting to see additional improvements to Drupal's REST API. With every new release, Drupal continues to extend investments in being an API-first platform, which makes it easier to integrate with JavaScript frameworks, mobile applications, marketing solutions and more.

Finally, Drupal 8.5 also ships with significant improvements for the Drupal 7 to Drupal 8 migration. After four years of work, 1,300+ closed issues and contributions from over 570 Drupalists, the migrate system's underlying architecture in Drupal 8.5 is fully stable. With the exception of sites with multilingual content, the migration path is now considered stable. Needless to say, this is a significant milestone.

These are just a few of the major highlights. For more details about what is new in Drupal 8.5, please check out the official release announcement and the detailed release notes.

What I'm probably most excited about is the fact that the new Drupal 8 release system is starting to hit its stride. The number of people contributing to Drupal continues to grow and the number of new features scheduled for Drupal 8.6 and beyond is exciting.

In future releases, we plan to add a media library, support for remote media types like YouTube videos, support for content staging, a layout builder, JSON API support, GraphQL support, a React-based administration application and a better out-of-the-box experience for evaluators. While we have made important progress on these features, they are not yet ready for core inclusion and/or production use. The layout builder is available in Drupal 8.5 as an experimental module; you can beta test the layout builder if you are interested in trying it out.

I want to extend a special thank you to the many contributors that helped make Drupal 8.5 possible. Hundreds of people and organizations have contributed to Drupal 8.5. It can be hard to appreciate what you can't see, but behind every bugfix and new feature there are a number of people and organizations that have given their time and resources to contribute back. Thank you!

Johan Van de Wauw: Don't use zonal statistics in ArcGIS

Wed, 07 Mar 2018 22:39:00 +0000

Short version: don't ever use "zonal statistics as a table" in ArcGIS. It manages to get simple calculations, such as calculating an average, wrong.

Example of using zonal statistics between grid cells (from the ArcGIS manual)

Long version: I was hired by a customer to determine why they got unexpected results for their analyses. These analyses led to an official map with legal consequences. After investigating their whole procedure, a number of issues were found. But the major source of errors was one which I found very unlikely: it turns out that the algorithm used by ArcGIS Spatial Analyst to determine the average grid value in a shape is wrong. Not just a little wrong. Very wrong. And no, I am not talking about the no-data handling, which was very wrong as well; I'm talking about how the algorithm compares vectors and rasters.

Interestingly, this seems to be known, as the ArcGIS manual states:

It is recommended to only use rasters as the zone input, as it offers you greater control over the vector-to-raster conversion. This will help ensure you consistently get the expected results.

So how does ArcGIS compare vectors and rasters? In fact, one could invent a number of algorithms:

  • Use the centers of the pixels and compare those to the vectors (most frequently used and fastest).
  • Use the actual area of the pixels.
  • Use those pixels of which the majority of the area is covered by the vector.

None of these algorithms matches the results we saw from ArcGIS, even though the documentation seems to suggest the first method is used. So what is happening? It seems that ArcGIS first converts your vector file to a raster, not necessarily in the same grid system as the grid you compare to. Then it interpolates your own grid (using an undocumented method) and takes the average of those interpolated values whose cells match the raster you supplied. This means pixels outside your shape can have an influence on the result. This mainly seems to occur when large areas are mapped (e.g. Belgium at 5 m).

The average of this triangle is calculated by ArcGIS as 5.47

I don't understand how the market leader in GIS can do such a basic operation so wrong, and this whole search also convinced me how important it is to open the source (or at least the algorithm used) to get reproducible results. Anyway, if you are still stuck with ArcGIS, know that you can install SAGA GIS as a toolbox. It contains sensible algorithms to do a vector/raster intersection, and they are also approximately 10 times faster than the ArcGIS versions. Or you can have a look at how GRASS and QGIS implement this. All of this, of course, only if you consistently want to get the expected results...

And if your government also uses ArcGIS for determining taxes or other policies, perhaps they too should consider switching to a product which consistently gives the expected results.[...]

Frank Goossens: Long overdue: WP YouTube Lyte update

Wed, 07 Mar 2018 17:48:32 +0000


It took me way too long (Autoptimize and related stuff is great fun, but it is eating a lot of time), but I just pushed out an update to my first ever plugin; WP YouTube Lyte. From the changelog:

So there you have it; Lite YouTube Embeds 2018 style and an example Lyte embed of a 1930’s style Blue Monday …

Watch this video on YouTube.


Ruben Vermeersch: Jupyter lab with an Octave kernel

Wed, 07 Mar 2018 17:44:11 +0000


Octave is a good choice for getting some serious computing done (it’s largely an open-source Matlab). But for interactive exploration, it feels a bit awkward. If you’ve done any data science work lately, you’ll undoubtedly have used the fantastic Jupyter.

There’s a way to combine both and have the great UI of Jupyter with the processing core of Octave:


I’ve built a variant of the standard Jupyter Docker images that uses Octave as a kernel, to make it trivial to run this combination. You can find it here.
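Running such an image is then a one-liner; something along these lines (the image name below is hypothetical, check the linked repository for the real one):

$ docker run --rm -p 8888:8888 rubenv/jupyter-octave   # image name is a placeholder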


Xavier Mertens: SMBv1, The Phoenix of Protocols?

Tue, 06 Mar 2018 20:26:45 +0000

Does everybody still remember the huge impact that WannaCry had on many companies in 2017? The ransomware exploited the vulnerability described in MS17-010, which abuses the SMBv1 protocol. One of the requirements to protect against this kind of attack was simply to disable SMBv1 (besides NOT exposing it on the Internet ;-).
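For reference, on recent Windows versions (8/Server 2012 and later) the server side of SMBv1 can be switched off with a single PowerShell command; test the impact before rolling it out fleet-wide:

PS C:\> Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force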

Last week, I was trying to connect an HP MFP (“Multi-Functions Printer”) to an SMB share to store scanned documents. This is very convenient to automate many tasks. The printer saves PDF files in a dedicated Windows share that can be monitored by a script to perform extra processing on documents like indexing them, extracting useful data, etc.

I spent a long time trying to understand why it was not working. SMB server, ok! Credentials, ok! Permissions on the share, ok! Firewall, ok! So what? In such cases, Google often remains your best friend, and I found this post on an HP forum:


So, HP still requires SMBv1 to work properly? Starting with the Windows 10 Fall Creators Update, SMBv1 is no longer installed by default, but this can lead to incompatibility issues. HP support tweeted this:


The link points to a document that explains how to enable SMBv1… Really? Do you think that I’ll re-enable this protocol? HP has an online document that lists all printers and their compatibility with the different versions of SMB. Of course, mine is flagged as “SMBv1 only”. Is it so difficult to provide an updated firmware?

Microsoft created a nice page that lists all the devices/applications that still use SMBv1. The page is called “SMB1 Product Clearinghouse” and the list is quite impressive!

So, how do you “kill” an obsolete protocol if it is still required by many devices/applications? If users must scan and store documents and it does not work, guess what the average administrator will do? Just re-enable SMBv1…

[The post SMBv1, The Phoenix of Protocols? has been first published on /dev/random]


Dries Buytaert: Cooking with Alexa and Drupal

Tue, 06 Mar 2018 18:51:46 +0000

When I'm home, one of the devices I use most frequently is the Amazon Echo. I use it to play music, check the weather, set timers, check traffic, and more. It's a gadget that is beginning to inform many of my daily habits. Discovering how organizations can use a device like the Amazon Echo is a big part of my professional life too. For the past two years, Acquia Labs has been helping customers take advantage of conversational interfaces, beacons and augmented reality to remove friction from user experiences. One of the most exciting examples of this was the development of Ask GeorgiaGov, an Alexa skill that enables Georgia state residents to use an Amazon Echo to easily interact with government agencies.

The demo video below shows another example. It features a shopper named Alex, who has just returned from Freshland Market (a fictional grocery store). After selecting a salmon recipe from Freshland Market's website, Alex has all the ingredients she needs to get started. Alex begins by asking Alexa how to make her preferred salmon recipe for eight people. The recipe on Freshland Market's Drupal website is for four people, so the Freshland Market Alexa skill automatically adjusts the number of ingredients needed to accommodate eight people. By simply asking Alexa a series of questions, Alex is able to preheat the oven, make ingredient substitutions and complete the recipe without ever looking at her phone or laptop. With Alexa, Alex is able to stay focused on the joy of cooking, instead of following a complex recipe.

This project was easy to implement because the team took advantage of the Alexa integration module, which allows Drupal to respond to Alexa skill requests. Originally created by Jakub Suchy (Acquia) and maintained by Chris Hamper (Acquia), the Alexa integration module enables Drupal to respond to custom voice commands, otherwise known as "skills". Once an Amazon Echo user provides a verbal query, known as an "utterance", this vocal input is converted into a text-based request (the "intent") that is sent to the Freshland Market website (the "endpoint"). From there, a combination of custom code and the Alexa module for Drupal 8 responds to the Amazon Echo with the requested information.

Over the past year, it's been very exciting to see the Acquia Labs team build a connected customer journey using chatbots, augmented reality and now, voice assistance. It's a great example of how organizations can build cross-channel customer experiences that take place both online and offline, in store and at home, and across multiple touchpoints. While Freshland Market is a fictional store, any organization could begin creating these user experiences today.

Special thanks to Chris Hamper and Preston So for building the Freshland Market Alexa skill, and thank you to Ash Heath and Drew Robertson for producing the demo videos.[...]
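To give an idea of the shape of the Drupal side: custom code typically answers the incoming intent from an event subscriber. The sketch below is illustrative only; the event name and the request/response methods are assumptions, not the Alexa module's documented API.

<?php

use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class FreshlandSkillSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // Event name is hypothetical; check the Alexa module for the real one.
    return ['alexa.request' => 'onAlexaRequest'];
  }

  public function onAlexaRequest($event) {
    // Intent and slot names are made up for illustration.
    $request = $event->getRequest();
    if ($request->getIntentName() === 'ScaleRecipeIntent') {
      $servings = $request->getSlot('Servings');
      $event->setResponse(t('Here is the recipe for @count people.', ['@count' => $servings]));
    }
  }

}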

Dries Buytaert: How Boston.gov is moving government forms online

Mon, 05 Mar 2018 17:42:00 +0000


Josh Gee, former product manager for the City of Boston's Department of Innovation and Technology, recently shared how Boston.gov identified 425 PDF forms used by citizens and moved 122 of them online. Not only did it improve the accessibility of Boston.gov's government services, it also saved residents roughly 10,000 hours filling out forms. While Boston.gov is a Drupal website, it opted to use SeamlessDocs for creating online forms, which could provide inspiration for Drupal's webform module. Josh's blog provides an interesting view into what it takes for a government to go paperless and build constituent-centric experiences.

Dries Buytaert: Drupal hooks vs Drupal events

Mon, 05 Mar 2018 17:28:07 +0000


Jonathan Daggerhart wrote a fantastic tutorial on Drupal 8's events system. I especially liked the comparison to Drupal's traditional hook system. When reading Jonathan's tutorial, I couldn't help but think how useful it would be to integrate it into Drupal's official documentation.
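For readers who want the one-minute version of the difference: a Drupal 8 event subscriber is an ordinary class that declares which events it listens to, instead of a magically named hook function. A minimal, generic sketch (the class name and service wiring are your own to choose):

<?php

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\GetResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class MyRequestSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // Run on every request, comparable to hook_init() in Drupal 7.
    return [KernelEvents::REQUEST => ['onRequest', 0]];
  }

  public function onRequest(GetResponseEvent $event) {
    // Inspect the incoming request early; replace with something useful.
    $path = $event->getRequest()->getPathInfo();
  }

}

To activate it, the class is registered in your module's services.yml with the event_subscriber tag.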

Xavier Mertens: [SANS ISC] Malicious Bash Script with Multiple Features

Mon, 05 Mar 2018 12:34:56 +0000

I published the following diary on “Malicious Bash Script with Multiple Features“:

It’s not common to find a complex malicious bash script. Usually, bash scripts are used to download a malicious executable and start it. This one was spotted by @michalmalik, who tweeted about it. I had a quick look at it. The script currently has a score of 13/50 on VT. First of all, the script installs some tools and dependencies. ‘apt-get’ and ‘yum’ are used, which means that multiple Linux distributions are targeted… [Read more]

[The post [SANS ISC] Malicious Bash Script with Multiple Features has been first published on /dev/random]


Frank Goossens: Preventing WP Super Cache from caching if no Slimstat in HTML

Mon, 05 Mar 2018 10:58:57 +0000


I was struggling with an occasional loss of reported traffic in SlimStat, due to my pages being cached by WP Super Cache (which is not active for logged-in users) but not having SlimStatParams & the Slimstat JS file in them. I tried changing different settings in Slimstat, but the problem still occurred. As I was not able to pinpoint the exact root cause, I ended up using this code snippet to prevent pages from being cached by WP Super Cache;

function slimstat_checker($bufferIn) {
  // NB: the snippet was truncated in this feed; the marker string and bail-out below are a reconstruction, not necessarily the exact original code.
  if ( strpos($bufferIn, "SlimStatParams") === false ) {
    define('DONOTCACHEPAGE', true); // tell WP Super Cache not to cache this page
  }
  return $bufferIn;
}
add_filter('wpsupercache_buffer', 'slimstat_checker');

Changing the condition on line 3 allows you to stop caching based on whatever is (or is not) present in the HTML.


Lionel Dricot: Le Bitcoin va-t-il détruire la planète ?

Mon, 05 Mar 2018 07:16:25 +0000

Article co-written with Mathieu Jamar. You can't have missed the many articles comparing the energy consumption of the Bitcoin network to that of various countries, all of them insisting on what an ecological catastrophe Bitcoin is. Bitcoin consumes more than all the countries in orange! The horror! If Bitcoin consumes as much electricity as Morocco, that's a catastrophe, right?

No. “Bitcoin pollutes enormously and will destroy the planet” is to Bitcoin what “foreigners are stealing our jobs” is to immigration: a belief that is easy to instill but wrong on so many levels that it becomes hard to enumerate the errors. Which I will nevertheless try to do. To simplify, I'll divide the reasoning errors into four parts:

1. The false simplification that consumption equals pollution
2. Only compare what is comparable
3. No, Bitcoin's energy is not wasted
4. Optimization in engineering

Electricity consumption is not pollution

First of all, consuming electricity does not pollute. What pollutes is certain ways of producing electricity. Same thing, you'll say? Not at all! If it were the same, we would fight against electric cars and encourage petrol cars (which are more efficient, because the energy is produced directly, with a better overall yield). You are probably convinced that electric cars, which consume electricity, are an asset in preserving the environment. Yet, depending on conditions, a Tesla consumes between 5,000 and 10,000 kWh for 20-30,000 km per year. That means that if half the motorists of a small country like Belgium bought a Tesla, that half alone would consume more than the entire consumption of Bitcoin! And I'm only talking about half of one tiny country! Can you imagine the catastrophe if there were more Teslas?

But enough dubious comparisons: why is electricity consumption not necessarily polluting? And why are electric cars ecologically interesting? Firstly, because sometimes the electricity is there, unused. That's the case with solar panels, hydroelectric dams or nuclear plants, which produce electricity no matter what. You can't just switch them ON and OFF. And electricity is, for the moment, hard to transport. According to a study by Bitmex, a large part of the electricity used by Bitcoin today is in fact electricity from under-used hydroelectric infrastructure initially dedicated to aluminium production in China, a production which dropped drastically following a fall in demand for that material. I repeat: Bitcoin has benefited from a large quantity of unused and therefore very cheap electricity, ecologically as well as economically.

In some cases, lowering electricity consumption can even be problematic. During my studies, I was told that the extravagant lighting of the Belgian motorways [...]

Frank Goossens: Want to test automated Critical CSS creation?

Sun, 04 Mar 2018 19:56:19 +0000

Over 3 years ago Autoptimize added support for critical CSS, and one and a half years ago the first “power-up” was released for critical CSS rules creation. But time flies and it's time for a new evolution: automated creation of critical CSS, using a deep integration with criticalcss.com and their powerful API! A first version of the plugin is ready and the admin page looks like this (look to the right of this paragraph).

The plan:

  • beta-test (asap)
  • release as separate plugin on wordpress.org (shooting for April)
  • release as part of Autoptimize 2.5 (target mid 2018)

This new “criticalcss.com” power-up has been tested on a couple of sites already (including this little blog of mine) and we are now looking for a small group of testers to help beta-test for that first target. Beta-testers will be able to use criticalcss.com for free during the test (i.e. for one month). If you're interested, head on up to the contact form and tell me what kind of site you would test this on (main plugins + theme; I'm very interested in advanced plugins like WooCommerce, BuddyPress and some of the major themes such as Avada, Divi, Astra, GeneratePress, …) and I'll get back to you with further instructions.

Possibly related twitterless twaddle: Almost time to announce Autoptimize 2.2 coming your way, care to test? Small experiment; Autoptimize with page cache [...]

Xavier Mertens: [SANS ISC] The Crypto Miners Fight For CPU Cycles

Sun, 04 Mar 2018 11:26:55 +0000

I published the following diary on “The Crypto Miners Fight For CPU Cycles“:

I found an interesting piece of Powershell code yesterday. The purpose is to download and execute a crypto miner, but the code also implements a detection mechanism to find other miners, security tools or greedy processes (in terms of CPU cycles). Indeed, crypto miners make intensive use of your CPUs, and the more CPU resources they can (ab)use, the more money is generated… [Read more]

[The post [SANS ISC] The Crypto Miners Fight For CPU Cycles has been first published on /dev/random]


Dries Buytaert: Relaxing makes me write and writing makes me relax

Sat, 03 Mar 2018 18:07:01 +0000



Spending a few days at the Cayman Islands, where the WiFi is weak and the rum is strong. When I relax, I become creative and reflective. Writing helps me develop these creative thoughts and deepens my personal reflection. It's a virtuous cycle, because when I see my world, myself and my ideas more clearly, it helps me relax even more. Relaxing makes me write and writing makes me relax.

Sven Vermeulen: Automating compliance checks

Sat, 03 Mar 2018 12:20:00 +0000

With the configuration baseline for a technical service being described fully (see the first, second and third post in this series), it is time to consider the validation of the settings in an automated manner. The preferred method for this is to use the Open Vulnerability and Assessment Language (OVAL), which is nowadays managed by the Center for Internet Security, abbreviated as CISecurity. Previously, OVAL was maintained and managed by Mitre under NIST supervision, and Google searches will often still point to the old sites. However, documentation is now maintained on CISecurity's github repositories. But I digress...

Read-only compliance validation

One of the main ideas with OVAL is to have a language (XML-based) that represents state information (what something should be) which can be verified in a read-only fashion. Even more, from an operational perspective, it is very important that compliance checks do not alter anything, but only report. Within its design, OVAL engineering has considered how to properly manage huge sets of assessment rules, and how to document this in an unambiguous manner. In the previous blog posts, ambiguity was resolved through writing style, and not much through actual, enforced definitions. OVAL enforces this. You can't write a generic or ambiguous rule in OVAL. It is very specific, but that also means that it is daunting to implement the first few times. I've written many OVAL sets, and I still struggle with it (although that's because I don't do it enough in a short time-frame, and need to reread my own documentation regularly).

The capability to perform read-only validation with OVAL leads to a number of possible use cases. In the 5.10 specification a number of use cases are provided. Basically, it boils down to vulnerability discovery (is a system vulnerable or not), patch management (is the system patched accordingly or not), configuration management (are the settings according to the rules or not), inventory management (detect what is installed on the system or what the systems' assets are), malware and threat indicator (detect if a system has been compromised or particular malware is active), policy enforcement (verify if a client system adheres to particular rules before it is granted access to a network), change tracking (regularly validating the state of a system and keeping track of changes), and security information management (centralizing results of an entire organization or environment and doing standard analytics on it). In this blog post series, I'm focusing on configuration management.

OVAL structure

Although the OVAL standard (just like the XCCDF standard actually) entails a number of major components, I'm going to focus on the OVAL definitions. Be aware though that the results of an OVAL scan are also in a standardized format, as are results of XCCDF scans for instance.

OVAL definitions have 4 to 5 blocks in them:

  • the definition itself, which describes what is being validated and how. It refers to one or more tests that are to be executed or validated for the definition result to be calculated
  • the test or tests, which are referred to by the definition. In each test, there is at least a ref[...]

Xavier Mertens: [SANS ISC] Reminder: Beware of the “Cloud”

Sat, 03 Mar 2018 12:16:34 +0000

I published the following diary on “Beware of the “Cloud”“:

Today, when you buy a product, chances are that it will be “connected” and use cloud services for at least one of its features. I'd like to tell you about a bad experience that I had this week. Just to raise your awareness… I won't mention any product or service, because the same story could happen with many alternative solutions and my goal is not to blame them… [Read more]

[The post [SANS ISC] Reminder: Beware of the “Cloud” has been first published on /dev/random]


Xavier Mertens: [SANS ISC] Common Patterns Used in Phishing Campaigns Files

Fri, 02 Mar 2018 12:20:28 +0000

I published the following diary on “Common Patterns Used in Phishing Campaigns Files“:

Phishing campaigns remain a common way to infect computers. Every day, I’m receiving plenty of malicious documents pretending to be sent from banks, suppliers, major Internet actors, etc. All those emails and their payloads are indexed and this morning I decided to have a quick look at them just by the name of the malicious files. Basically, there are two approaches used by attackers:

  • They randomize the file names by adding a trailing random string (e.g. aaf_438445.pdf) or by randomizing the complete filename.
  • They make the filename “juicy” to entice the user to open it by using common words.

[Read more]

[The post [SANS ISC] Common Patterns Used in Phishing Campaigns Files has been first published on /dev/random]


Wim Leers: API-First Drupal: what's new in 8.5?

Fri, 02 Mar 2018 11:11:44 +0000

Now that Drupal 8’s REST API has reached the next level of maturity, I think a concise blog post summarizing the most important API-First Initiative improvements for every minor release is going to help a lot of developers. Drupal 8.5.0 will be released next week and the RC was tagged last week. So, let’s get right to it! The REST API made a big step forward with the 5th minor release of Drupal 8 — I hope you’ll like these improvements :) Thanks to everyone who contributed!

Text fields’ computed processed property exposed #2626924 — no more need to re-implement this in consumers, nor work-arounds.

"body":{ "value":"…", "format":"basic_html" }
⬇
"body":{ "value":"…", "format":"basic_html", "processed":"…" }

uri field on File gained a computed url property #2825487

"uri":{"value":"public://cat.png"}
⬇
"uri":{"url":"/files/cat.png","value":"public://cat.png"}

Term POSTing requires non-admin permission #1848686

administer taxonomy permission
⬇
create terms in %vocabulary% permission

Analogously for PATCH and DELETE: you need edit terms in %vocabulary% and delete terms in %vocabulary%, respectively.

Vocabulary GETting requires non-admin permission #2808217

administer taxonomy permission
⬇
access taxonomy overview permission

GET → decode → modify field → encode → PATCH now yields 200 instead of 403 #2824851 — you can GET a response, modify the bits you want to change, and then send exactly that, without needing to remove fields you’re not allowed to modify. Any fields that you’re not allowed to modify can still be sent without resulting in a 403 response, as long as you send exactly the same values. Drupal’s REST API now implements the robustness principle.

4xx GET responses cacheable: more scalable + faster #2765959 — valuable for use cases where you have many (for example a million) anonymous consumers hitting the same URL. Because the response was not cacheable, it could also not be cached by a reverse proxy in front of it, meaning hundreds of thousands of requests hitting origin, which can bring down the origin server.

Comprehensive integration tests + test coverage test coverage — this massively reduces the risk of REST API regressions/changes/BC breaks making it into a Drupal 8 release. It allows us to improve things faster, because we can be confident that most regressions will be detected. That even includes the support for XML serialization, for the handful of you who are using that! We take backwards compatibility seriously. Even better: we have test coverage test coverage: tests that ensure we have integration tests for every entity type that Drupal core’s stable modules ship with! Details at API-First Drupal — really!. Getting to this point took more than a year and required fixing bugs in many other parts of Drupal!

Want more nuance and detail? See the REST: top priorities for Drupal 8.5.x issue on drupal.org. Are you curious wh[...]

Lionel Dricot: How Facebook makes money

Wed, 28 Feb 2018 12:41:26 +0000

Do you know how Facebook makes money? Through advertising, of course. But what is the exact mechanism? Who pays, and in exchange for what? And what are the consequences of this model? Facebook pages: on Facebook, users like you and me are limited to a maximum of 5,000 friends. Becoming friends requires acceptance from both parties. Pages, on the other hand, are not limited to natural persons and can be “liked” by an unlimited number of people. A page’s content is always public, and its goal is to reach as many people as possible. For this reason, Facebook gives page administrators complete statistics on the number of users who interacted with a post. It therefore seems logical for any business, even a local one, to create a page. The same goes for associations, artists, politicians, citizen groups and bloggers. A Facebook page seems to be the ideal tool for making your voice heard. There is, however, a small subtlety. If an individual likes a hundred pages that each publish ten pieces of content per day, it is impossible for that individual to go through a thousand items, not counting those from their friends. Promoting content: to solve this problem, Facebook decides which content is “most interesting” based on what has already been liked by other users. New and original content therefore has very little chance of spreading: to be liked, it must be shown, and to be shown, it must be liked. The page administrator will be sad and disappointed when checking the statistics: 1,000 people like the page, and yet a post was only read a dozen times! At that point, Facebook offers our administrator the option of paying to be seen more often. After all, what is €10 to get 1,000 extra views? When I created my first website, in the 20th century, it was common to have a counter showing the number of visits received. The higher that number, the more successful the site appeared. It was not unusual to refresh pages just to increment your own counter. Facebook has literally managed to make content creators pay to increment their counter, while also providing the counter itself! The consequences of this model: this very profitable system has several consequences. First, it means Facebook has every interest in making sure content from non-paying pages is rarely seen. If you have a Facebook page and you do not pay, like me, your content will be shown very little, even to those who like your page. Except in the exceptional case where a piece of content proves very popular, the people who like your page… will not see your content, because Facebook has no interest in distributing it. Despite 50 likes and 17 shares (which for me is a real success), the number of[...]

Dries Buytaert: Three ways we can improve Drupal's evaluator experience

Tue, 27 Feb 2018 15:41:13 +0000

Last week, Matthew Grasmick stepped into the shoes of a developer who has no Drupal experience and attempted to get a new "Hello world!" site up and running using four different PHP frameworks: WordPress, Laravel, Symfony and Drupal. He shared his experience in a transparent blog post. In addition to detailing the inefficiencies in Drupal's download process and end-user documentation, Matt also shows that out of the four frameworks, Drupal required the most steps to get installed. While it is sobering to read, I'm glad Matthew brought this problem to the forefront. Having a good evaluator experience is critical, as it has a direct impact on adoption rates. A lot goes into a successful evaluator experience: from learning what Drupal is, to understanding how it works, getting it installed and getting your first piece of content published. So how can we make some very necessary improvements to Drupal's evaluator experience? I like to think of the evaluator experience as a conversion funnel, similar to the purchase funnel developed in 1898 by E. St. Elmo Lewis. It maps an end-user journey from the moment a product attracts the user's attention to the point of use. It's useful to visualize the process as a funnel, because it helps us better understand where the roadblocks are and where to focus our efforts. For example, we know that more than 13 million people visited drupal.org in 2017 (top of the funnel) and that approximately 75,000 new Drupal 8 websites launched in production (bottom of the funnel). A very large number of evaluators were lost as they moved down the conversion funnel. It would be good to better understand what goes on in between. The Drupal Association plays an important role at the top of the funnel: from educating people about Drupal, to providing a streamlined download experience on drupal.org, to helping users find themes and modules, and much more. The Drupal Association could do more to simplify the evaluator experience. For example, I like the idea of the Drupal Association offering and promoting a hosted, one-click trial service. This could be built by extending a service like simplytest.me into a hosted evaluation service, especially when combined with the upcoming Umami installation profile. (The existing "Try Drupal" program could evolve into a "Try hosting platforms" program. This could help resolve the expectation mismatch with the current "Try Drupal" program, which is currently more focused on showcasing hosting offerings than on providing a seamless Drupal evaluation experience.) The good news is that the Drupal Association recognizes the same needs, and in the past months we have been working together on plans to improve Drupal's conversion funnel. The Drupal Association will share its 2018 execution plans in the upcoming weeks. As you'll see, the plans address some of the pain points for evaluators (though not necessarily through a hosted trial service, as that could take significant engineering and infrastructure resources)[...]

Mattias Geniar: Clear systemd journal

Mon, 26 Feb 2018 17:30:02 +0000



On any server, the logs can start to add up and take up a considerable amount of disk space. Systemd conveniently stores these in /var/log/journal and ships a journalctl command to help clear them.

Take this example:

$ du -hs /var/log/journal/
4.1G	/var/log/journal/

4.1GB worth of journal files, with the oldest dating back over 2 months.

$ ls -lath /var/log/journal/*/ | tail -n 2
-rw-r-x---+ 1 root systemd-journal 8.0M Dec 24 05:15 user-xxx.journal

On this server, I really don't need that many logs, so let's clean them out. There are generally 2 ways to do this.

Clear systemd journals older than X days

The first one is time-based, clearing everything older than, say, 10 days.

$ journalctl --vacuum-time=10d
Vacuuming done, freed 2.3G of archived journals on disk.

Alternatively, you can limit its total size.

Clear systemd journals if they exceed X storage

This example will keep 2GB worth of logs, clearing everything that exceeds this.

$ journalctl --vacuum-size=2G
Vacuuming done, freed 720.0M of archived journals on disk.

Afterwards, your /var/log/journal should be much smaller.

$ du -hs /var/log/journal
1.1G	/var/log/journal

Saves you some GBs on disk!
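
As an aside (my addition, not part of the original post): if you'd rather cap the journal permanently instead of vacuuming by hand, journald supports a SystemMaxUse setting in its configuration file.

# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=2G

$ sudo systemctl restart systemd-journald

With that in place, journald rotates old entries away on its own once the limit is reached.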

The post Clear systemd journal appeared first on

Dries Buytaert: Posting my phone's battery status to my site

Mon, 26 Feb 2018 01:56:19 +0000

Earlier this month, I uninstalled Facebook from my phone. I also made a commitment to own my own content and to share shorter notes on my site. Since then, I shared my POSSE plan and I already posted several shorter notes. One thing that I like about Facebook and Twitter is how easy it is to share quick updates, especially from my phone. If I'm going to be serious about POSSE, having a good iOS application that removes friction from the publishing process is important. I always wanted to learn some iOS development, so I decided to jump in and start building a basic iOS application to post notes and photos directly to my site. I've already made some progress; so far, my iOS application shares the state of my phone battery on a status page on my site. This is what it looks like: [screenshot] This was inspired by Aaron Parecki, who not only tracks his phone battery but also tracks his sleep, his health patterns and his diet. Talk about owning your own data and liking tacos! Sharing the state of my phone's battery might sound silly, but it's a first step towards being able to publish notes and photos from my phone. To post the state of my phone's battery on my Drupal site, my iOS application reads my battery status, wraps it in a JSON object and sends it to a new REST endpoint on my Drupal site. It took less than 100 lines of code to accomplish this. More importantly, it uses the same web-services approach that posting notes and photos will. In an unexpected turn of events (yes, unnecessary scope creep), I decided it would be neat to keep my status page up to date with real-time battery information. In good tradition, the scope creep ended up consuming most of my time. Sending periodic battery updates turned out to be difficult, because when a user is not actively using an iOS application, iOS suspends it. This means that the application can't listen to battery events or send data using web services. This makes sense: suspending the application helps improve battery life, and allows iOS to devote more system resources to the active application in the foreground. The old Linux hacker in me wasn't going to be stopped though; I wanted my application to keep sending regular updates, even when it's not the active application. It turns out iOS makes a few exceptions and allows certain categories of applications – navigation, music and VOIP applications – to keep running in the background. For example, Waze continues to provide navigation instructions and Spotify will play music even when they are not the active application running in the foreground. You guessed it: the solution was to turn my note-posting-turned-battery application into a navigation application. This requires the application to listen to location update events from my phone's GPS. Doing so prevents the application from being suspended. As a bonus, I can use my location information for future use cases such as geotagging posts. This navigation hack works really well, but the bad[...]
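
To make the web-services part concrete: the request such an app sends could look roughly like this (a sketch of mine — the endpoint path and JSON field names are hypothetical, not Dries' actual code):

$ curl -X POST 'https://example.com/api/battery' \
    -H 'Content-Type: application/json' \
    -d '{"level": 84, "state": "charging"}'

On the Drupal side, a custom REST resource would accept that payload and store it for display on the status page.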

Mattias Geniar: Announcing “Oh Dear!”: a new innovative tool to monitor your websites

Sat, 24 Feb 2018 20:20:10 +0000

We've just completed our first month of public beta at Oh Dear!, a new website monitoring tool I've been working on for the last couple of months with my buddy Freek. I'm excited to show it to you! What is Oh Dear!? Our project is called "Oh Dear!" and it's a dead-simple tool you can use to help monitor the availability of your websites. And we define availability pretty broadly. We offer multi-location uptime monitoring, mixed-content & broken-links detection and SSL certificate & transparency reporting. There's an API and kick-ass documentation too. Think of it as any other monitoring solution, but tailored specifically to site owners that care about every detail of their site. What does it look like? I guess the more obvious first question would be "what can it do?", but since we're pretty damn proud of our layout & design, I want to brag about that first. Here's our homepage. There's extensive documentation on the API, every feature & the different notification methods. A beautiful in-app dashboard gives you a bird's-eye view of the status of all your sites. An overview of the advanced notification features. The whole design just looks & feels right. If you've been following me for a while, you know I have no experience whatsoever with design. That's why we got external help from an Antwerp web agency called Spatie to help lay out our app. I couldn't be happier with the results! What can Oh Dear! do? The heart & soul of Oh Dear! is about monitoring websites. So once you add your first site:

  • We will monitor its uptime from independent world-wide locations (downtime gets verified from another, independent location -- different country, different cloud provider).
  • We validate its HTTPS health: certificate expiration dates, invalid certificate chains, revoked certs, certificate transparency reporting, ...
  • We crawl your site, report any broken links (internal & external) and report errors (like HTTP/500) to you.
  • Everything can be reported to you via HipChat, Slack, Email, Push notifications (Pushover) or SMS (Nexmo).
  • There's an API to automate Oh Dear! from start to finish & webhooks to tie in to your own monitoring systems.

All of this in -- but I'm biased here -- the easiest interface to set this up. Everything is tailored to website developers or maintainers: web agencies or large firms that run multiple sites. We know & feel the pain involved in managing those sites; we built Oh Dear! specifically for that use case. If you have clients or users that can create their own pages in a CMS, create links to other pages, ... you'll eventually get a situation where a user makes an unforeseen error. Oh Dear! can catch those, as we crawl your site and report any errors we find. Can I try it for free? Yes, a no-strings-attached, no credit card required free tri[...]

Dries Buytaert: Build and operate your own CDN

Thu, 22 Feb 2018 22:31:44 +0000


Janos Pasztor built his own Content Delivery Network. While I wouldn't want to operate my own personal CDN, it does sound like a fun project for those interested in web performance.

Dries Buytaert: A Drupal distribution configurator

Thu, 22 Feb 2018 22:11:08 +0000


Today, Commerce Guys shared a new initiative for Drupal Commerce. They launched a "Drupal Commerce distribution configurator": a web-based UI that generates a composer.json file and builds a version of Drupal Commerce that is tailored to an evaluator's needs. The ability to visually configure your own distribution or demo is appealing. I like the idea of a Drupal distribution configurator for at least three reasons: (1) its development could lead to improvements to Drupal's Composer-based workflow, which would benefit all of Drupal, (2) it could help make distributions easier to build and maintain, which is something I'm passionate about, and last but not least, (3) it could grow into a more polished Drupal evaluator tool.

Jeroen De Dauw: Collaboration Best Practices

Thu, 22 Feb 2018 02:59:02 +0000

During 2016 and 2017 I worked in a small 3-person dev team at Wikimedia Deutschland, aptly named the FUN team. This is the team that rewrote our fundraising application and implemented the Clean Architecture. Shortly after we started working on new features, we noticed room for improvement in many areas of our collaboration. We talked about these and created a Best Practices document, the content of which I share in this blog post. We created this document in October 2016 and it focused on the issues we were having at the time or things we were otherwise concerned about. Hence it is by no means a comprehensive list, and it might contain things not applicable to other teams due to culture/value differences or different modes of working. That said, I think it can serve as a source of inspiration for other teams.

Make sure others can pick up a task where it was left:

  • Make small commits
  • Publish code quickly and don’t go home with unpublished code
  • When partially finishing a task, describe what is done and what still needs doing
  • Link Phabricator on Pull Requests and Pull Requests on Phabricator

Make sure others can (start) work(ing) on tasks:

  • Try to describe new tasks in such a way that others can work on them without first needing to inquire about details
  • Respond to emails and comments on tasks (at least) at the start and end of your day
  • Go through the review queue (at least) at the start and end of your day
  • Indicate if a task was started or is being worked on, so everyone can pick up any task without first checking that it is not being worked upon. At least pull the task into the “doing” column of the board; better yet, assign yourself.

Make sure others know what is going on:

  • Discuss introduction of new libraries or tools and creation of new projects
  • Only reviewed code gets deployed or used as critical infrastructure
  • If review does not happen, insist on it rather than adding to big branches
  • Discuss (or pair program) big decisions or tasks before investing a lot of time in them

Shared commitment:

  • Try working on the highest priority tasks (including any support work such as code review of work others are doing)
  • Actively look for and work on problems and blockers others are running into
  • Follow-up commits containing suggested changes are often nicer than comments [...]

Frank Goossens: Music from Our Tube: Jordan Rakei – Eye to Eye

Wed, 21 Feb 2018 13:16:06 +0000


What starts out as a somewhat Buckley-esque guitar-and-voice dreamscape evolves into a modern broken-beat-inspired song. The whole thing is quite intense and somber, and it has had me listening on repeat for the last hour or so. And now it’s your turn!

Watch this video on YouTube.


Claudio Ramirez: Ubuntu 17.10 + Gnome: some hidden configurations

Wed, 21 Feb 2018 12:47:38 +0000

Update 20180216: remark about non-Ubuntu extensions & stability.

I like what the Ubuntu people did when adopting Gnome as the new desktop after the dismissal of Unity. When the change was announced some months ago, I decided to move to Gnome and see if I liked it. I did. It’s a good idea to benefit from the small changes Ubuntu made to Gnome 3. Forking dash-to-dock was a great idea, so untested (e.g. upstream) updates don’t break the desktop. I won’t discuss settings you can change through the “Settings” application (Ubuntu Dock settings) or through “Tweaks”:

$ sudo apt-get install gnome-tweak-tool

It’s a good idea, though, to remove third-party extensions so you are sure you’re using the ones provided and adapted by Ubuntu. You can always add new extensions later (the most important ones are even packaged).

$ rm -rf ~/.local/share/gnome-shell/extensions/*

Working with Gnome 3, and to a lesser extent with MacOS, taught me that I prefer bars and docks to autohide. I never did in the past, but I feel that Gnome (and MacOS) got this right. I certainly don’t like the full-height dock: make it as small as needed. You can use the graphical “dconf Editor” tool to make the changes, but I prefer the safer command line (you won’t make a change by accident). To prevent the Ubuntu Dock from taking all the vertical space (i.e., most of it is just an empty bar):

$ dconf write /org/gnome/shell/extensions/dash-to-dock/extend-height false

A neat dock trick: when hovering over an icon on the dock, cycle through the windows of that application while scrolling (or using two fingers). Way faster than click + select:

$ dconf write /org/gnome/shell/extensions/dash-to-dock/scroll-action "'cycle-windows'"

I set the dock to autohide in the regular “Settings” application. An extension is needed to do the same for the Top Bar (you need to log out, and then enable it through the “Tweaks” application):

$ sudo apt-get install gnome-shell-extension-autohidetopbar

Update: I don’t install extensions any more besides the ones forked by Ubuntu. In my experience, they make the desktop unstable under Wayland. That said, I haven’t seen crashes related to autohidetopbar. Even so, I moved back to Xorg (option at the login screen) because Wayland feels less stable. The next Ubuntu release (18.04) will default to Xorg as well, meaning that at least until 18.10 Wayland won’t be the default session.

Oh, just to be safe (e.g., in case you broke something), you can reset all the Gnome settings with:

$ dconf reset -f /

Have a look at the comments for some extra settings (that I personally do not use, but many do). Some options that I don’t use but people have asked me about (here and elsewhere): especially with the scrolling setting above, you may want to only switch between windows of the same ap[...]
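
One small habit worth adding (my addition, not in the original post): before writing a dconf key, read its current value so you can restore it later:

$ dconf read /org/gnome/shell/extensions/dash-to-dock/extend-height

This prints the key’s current value (or nothing if it is unset).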

Mattias Geniar: Update a docker container to the latest version

Tue, 20 Feb 2018 19:13:30 +0000



Here's a simple one, but if you're new to Docker, it's something you might have to look up. On this server, I run Nginx as a Docker container using the official nginx:alpine version.

I was running a fairly outdated version:

$ docker images | grep nginx
nginx    <none>              5a35015d93e9        10 months ago       15.5MB
nginx    latest              46102226f2fd        10 months ago       109MB
nginx    1.11-alpine         935bd7bf8ea6        18 months ago       54.8MB

In order to make sure I had the latest version, I ran pull:

$ docker pull nginx:alpine
alpine: Pulling from library/nginx
550fe1bea624: Pull complete
d421ba34525b: Pull complete
fdcbcb327323: Pull complete
bfbcec2fc4d5: Pull complete
Digest: sha256:c8ff0187cc75e1f5002c7ca9841cb191d33c4080f38140b9d6f07902ababbe66
Status: Downloaded newer image for nginx:alpine

Now, my local repository contains an up-to-date Nginx version:

$ docker images | grep nginx
nginx    alpine              bb00c21b4edf        5 weeks ago         16.8MB

To use it, you have to launch a new container based on that particular image. The currently running container will still be using the original (old) image.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED
4d9de6c0fba1        5a35015d93e9        "nginx -g 'daemon ..."   9 months ago

In my case, I re-created my HTTP/2 Nginx container like this:

$ docker stop nginx-container
$ docker rm nginx-container
$ docker run --name nginx-container \ 
    --net="host" \
    -v /etc/nginx/:/etc/nginx/ \
    -v /etc/ssl/certs/:/etc/ssl/certs/ \
    -v /etc/letsencrypt/:/etc/letsencrypt/ \
    -v /var/log/nginx/:/var/log/nginx/ \
    --restart=always \
    -d nginx:alpine

And the Nginx/container upgrade was completed.
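
To double-check that the new container really runs the fresh image (my addition; docker inspect is a standard command):

$ docker inspect --format '{{.Image}}' nginx-container

This prints the full sha256 image ID the container was created from; its first 12 characters should now match the new nginx:alpine short ID (bb00c21b4edf in the listing above).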

The post Update a docker container to the latest version appeared first on

Fabian Arrotin: Using newer PHP stack (built and distributed by CentOS) on CentOS 7

Mon, 19 Feb 2018 23:00:00 +0000

One thing one has to like with Enterprise distributions is the stable API/ABI during the distro lifetime. If you have an application that works, you know that it will continue to work. But in parallel, one can't always decide which application to run on that distro with the built-in components. I was personally faced with this recently, when I needed to migrate our bug tracker to a new version. Let's use that example to see how we can use "newer" PHP packages distributed through the distro itself. The application that we use for our bug tracker is MantisBT, and by reading their requirements list it was clear that a CentOS 7 default setup would not work: as a reminder, the default php package for .el7 is 5.4.16, so not supported anymore by "modern" applications. That's where SCLs come to the rescue! With such "collections" one can install those without overwriting the base packages, and so can even run multiple parallel instances of such a "stack", based on configuration. Let's just start simple with our MantisBT example: forget about the traditional php-* packages (including "php", which provides mod_php for Apache): it's up to you to leave those installed if you need them, but in my case I'll default to php 7.1.x for the whole vhost. Also worth knowing: I wanted to integrate php with the default httpd from the distro (to ease the configuration-management side, and to find the .conf files at the usual place). The good news is that those collections are built, tested and released through our CentOS Infra, so you don't have to care about anything else! (kudos to the SCLo SIG!). You can see the available collections here. So, how do we proceed? Easy! First let's add the repository:

yum install centos-release-scl

And from that point, you can just install what you need. For our case, MantisBT needs php, php-xml, php-mbstring, php-gd (for the captcha, if you want to use it), and a DB driver, so php-mysql (if you target mysql, of course). You just have to "translate" that into SCL packages: in our case, php becomes rh-php71 (meta package), php-xml becomes rh-php71-php-xml and so on (one remark though: php-mysql became rh-php71-php-mysqlnd!). So here we go:

yum install httpd rh-php71 rh-php71-php-xml rh-php71-php-mbstring rh-php71-php-gd rh-php71-php-soap rh-php71-php-mysqlnd rh-php71-php-fpm

As said earlier, we'll target the default httpd package from the distro, so we just have to "link" php and httpd. Remember that mod_php isn't available anymore; instead we'll use php-fpm (see rh-php71-php-fpm), so all requests are sent to that FastCGI Process Manager daemon. Let's do this:

systemctl enable httpd --now
systemctl enable rh-php71-php-fpm --now
cat > /etc/httpd/conf.d/[...]
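
The post is truncated right at that config file. For illustration only, a minimal version of such a snippet could look like this (my sketch, not the author's actual file; it assumes rh-php71-php-fpm listening on its default 127.0.0.1:9000 and mod_proxy_fcgi available in the stock httpd):

# /etc/httpd/conf.d/php71-fpm.conf (hypothetical file name)
# Hand every .php request to the rh-php71 FPM daemon over FastCGI
<FilesMatch \.php$>
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>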

Dries Buytaert: Twenty years later and I am still at my desk learning CSS

Mon, 19 Feb 2018 21:24:29 +0000


I was working on my POSSE plan when Vanessa called and asked if I wanted to meet for a coffee. Of course, I said yes. In the car ride over, I was thinking about how I made my first website over twenty years ago. HTML table layouts were still cool and it wasn't clear if CSS was going to be widely adopted. I decided to learn CSS anyway. More than twenty years later, the workflows, the automated toolchains, and the development methods have become increasingly powerful, but also a lot more complex. Today, you simply npm your webpack via grunt with vue babel or bower to react asdfjkl;lkdhgxdlciuhw. Everything is different now, except that I'm still at my desk learning CSS.

Mattias Geniar: Show IDN punycode in Firefox to avoid phishing URLs

Mon, 19 Feb 2018 19:52:42 +0000



Pop quiz: can you tell the difference between these 2 domains?



Both host a version of the popular crypto exchange Binance.

The second image is the correct one; the first one is a phishing link with the letter 'n' replaced by 'n with a dot below' (U+1E47). It's not a piece of dirt on your screen, it's an attempt to trick you into believing it's the official site.

Firefox has a very interesting option called network.IDN_show_punycode. You can enable it in about:config.


Once enabled, it'll make that phishing domain look like this:


Doesn't look that legit anymore, does it?

I wish Chrome offered a similar option though; it could prevent quite a few phishing attempts.
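
If you want to see the ASCII form of a lookalike domain yourself, one quick way (my illustration; it relies on Python's built-in IDNA codec, and the exact output depends on the codec's IDNA version) is:

$ python3 -c "print('biṇance.com'.encode('idna'))"

This prints the xn-- (punycode) representation — the same string Firefox shows in the address bar once the option above is enabled.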


The post Show IDN punycode in Firefox to avoid phishing URLs appeared first on

Dries Buytaert: Get your HTTPS on

Mon, 19 Feb 2018 18:26:56 +0000


"Get your HTTPS on" because Chrome will mark all HTTP sites as "not secure" starting in July 2018. Chrome currently displays a neutral icon for sites that aren't using HTTPS, but starting with Chrome 68, the browser will warn users in the address bar.

Fortunately, HTTPS has become easier to implement through services like Let's Encrypt, which provides free certificates and aims to eliminate the complexity of setting up and maintaining HTTPS encryption.
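
For example, on a typical Apache box, obtaining and installing a certificate is now a one-liner (my example; it assumes the certbot client with its Apache plugin is installed and example.com already points at the server):

$ sudo certbot --apache -d example.com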

Frank Goossens: Introducing zytzagoo’s major changes for Autoptimize 2.4

Sun, 18 Feb 2018 14:18:37 +0000

TL;DR: Autoptimize 2.4 will be a major change. Tomaš Trkulja (aka zytzagoo) has cleaned up and modernized the code significantly, making it easier to read and maintain, switched to the latest and greatest minifiers and added automated testing. And as if that isn’t enough, we’re also adding new optimization options! The downside: we will be dropping support for PHP < 5.3 and for the “legacy minifiers”. AO 2.4 will first be made available as a separate “Autoptimize Beta” plugin via GitHub and will see more releases/iterations with new options/optimizations there before being promoted to the official “Autoptimize”. Back in March 2015 zytzagoo forked Autoptimize, rewriting the CDN-replacement logic (and a lot of autoptimizeStyles::minify, really) and started adding automated testing. I kept an eye on the fork and later that year I contacted Tomaš via GitHub to see how we could collaborate. We have been in touch ever since; some of his improvements have already been integrated and he is my go-to man to discuss coding best practices, bugfixes and security. Fast-forward to the nearby future: Autoptimize 2.4 will be based on Tomaš’ fork and will include the following major changes:

  • New: option to only minify JS/CSS without combining them
  • New: excluded JS- or CSS-files will be automatically minified
  • Improvement: switching to the current version of JSMin and — more importantly — of the YUI CSS compressor PHP port, which has some major performance improvements of its own
  • Improvement: all create_function() instances have been replaced by anonymous functions (PHP 7.2 started issuing warnings about create_function being deprecated)
  • Improvement: all code in autoptimize/classlesses/* (my stuff) has been rewritten into OOP and is now in autoptimize/classes/*
  • Improvement: use of autoload instead of manual conditional includes
  • Improvement: a nice amount of test cases (although no 100% coverage yet), allowing for Travis continuous-integration tests
  • Dropping support for PHP below 5.3 (you really should be on PHP 7.x, it is way faster)
  • Dropping support for the legacy minifiers

These improvements will be released in a separate “Autoptimize Beta” plugin soon (albeit not on wordpress.org, as “beta” plugins are not allowed there). You can already download it from GitHub here. We will start adding additional optimization options there, releasing at a higher pace. The goal is to create a healthy beta-user pool, allowing us to move code from AO Beta to AO proper with more confidence. So what new optimization options would you like to see added to Autoptimize 2.4 and beyond? :-) [corrected 19/02; wordpress.org does not allow be[...]

Xavier Mertens: [SANS ISC] Malware Delivered via Windows Installer Files

Sat, 17 Feb 2018 11:48:48 +0000

I published the following diary on “Malware Delivered via Windows Installer Files“:

For some days, I collected a few samples of malicious MSI files. MSI files are Windows installer files that users can execute to install software on a Microsoft Windows system. Of course, you can replace “software” with “malware”. MSI files look less suspicious and they could bypass simple filters based on file extensions like “(com|exe|dll|js|vbs|…)”. They also look less dangerous because they are Composite Document Files… [Read more]

[The post [SANS ISC] Malware Delivered via Windows Installer Files has been first published on /dev/random]


Dries Buytaert: My POSSE plan for evolving my site

Fri, 16 Feb 2018 09:23:53 +0000

In an effort to reclaim my blog as my thought space and take back control over my data, I want to share how I plan to evolve my website. Given the incredible feedback on my previous blog posts, I want to continue the conversation and ask for feedback. First, I need to find a way to combine longer blog posts and status updates on one site:

  • Update my site navigation menu to include sections for "Blog" and "Notes". The "Notes" section would resemble a Twitter or Facebook livestream that catalogs short status updates, replies, interesting links, photos and more. Instead of posting these on third-party social media sites, I want to post them on my site first (POSSE). The "Blog" section would continue to feature longer, more in-depth blog posts. The front page of my website will combine both blog posts and notes in one stream.
  • Add support for Webmention, a web standard for tracking comments, likes, reposts and other rich interactions across the web (a small protocol sketch follows at the end of this excerpt). This way, when users retweet a post on Twitter or cite a blog post, mentions are tracked on my own website.
  • Automatically syndicate to third-party services, such as syndicating photo posts to Facebook and Instagram or syndicating quick Drupal updates to Twitter. To start, I can do this manually, but it would be nice to automate this process over time.
  • Streamline the ability to post updates from my phone. Sharing photos or updates in real-time only becomes a habit if you can publish something in 30 seconds or less. It's why I use Facebook and Twitter often. I'd like to explore building a simple iOS application to remove any friction from posting updates on the go.
  • Streamline the ability to share other people's content. I'd like to create a browser extension to share interesting links along with some commentary. I'm a small investor in Buffer, a social media management platform, and I use their tool often. Buffer makes it incredibly easy to share interesting articles on social media, without having to actually open any social media sites. I'd like to be able to share articles on my blog that way.

Second, as I begin to introduce a larger variety of content to my site, I'd like to find a way for readers to filter content:

  • Expand the site navigation so readers can filter by topic. If you want to read about Drupal, click "Drupal". If you just want to see some of my photos, click "Photos".
  • Allow people to subscribe by interests. Drupal 8 makes it easy to offer an RSS feed by topic. However, it doesn't look nearly as easy to allow email subscribers to receive updates by interest. Mailchimp's RSS-to-email feature, my current mailing lis[...]
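
Webmention itself is pleasantly small, as promised above: the sender simply POSTs two form-encoded URLs to the endpoint advertised by the receiving page. A sketch (mine; both URLs and the endpoint path are made up):

$ curl -d source=https://example.com/my-reply \
       -d target=https://example.org/original-post \
       https://example.org/webmention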

Xavier Mertens: Imap2TheHive: Support of Attachments

Thu, 15 Feb 2018 21:11:37 +0000

I just published a quick update of my imap2thehive tool. Files attached to an email can now be processed and uploaded as observables attached to a case. It is possible to specify which MIME types to process via the configuration file. The example below will process PDF & EML files:

files: application/pdf,message/rfc822

The script is available here.

[The post Imap2TheHive: Support of Attachments has been first published on /dev/random]


Les Jeudis du Libre: Mons, 15 March: ReactJS – presenting the library and prototyping a “clicker”-style game

Thu, 15 Feb 2018 06:23:45 +0000

This Thursday, 15 March 2018 at 7 p.m., the 67th Mons session of the Jeudis du Libre de Belgique will take place. The topic of this session: ReactJS – presenting the library and prototyping a “clicker”-style game. Theme: Internet|graphics|sysadmin|community. Audience: general public|sysadmins|companies|students|… Speaker: Michaël Hoste (80LIMIT). Venue: the technical campus (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau, 8a, Salle Académique, 2nd building (see the map on the ISIMs website, and here on the Openstreetmap map). Participation is free and only requires registering by name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via the page. The session will be followed by a friendly drink. The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid. If you are interested in this monthly cycle, feel free to consult the agenda and subscribe to the mailing list to automatically receive the announcements. As a reminder, the Jeudis du Libre aim to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized, on their premises and in collaboration, with the Hautes Écoles and university faculties of Mons involved in computer science education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software. Description: JavaScript development is a real jumble of technical solutions. Not a month goes by without a new, supposedly revolutionary framework appearing. When ReactJS came out in 2013, backed by a Facebook not exactly known for releasing open-source solutions, few expected a real challenger. Its peculiarity of mixing HTML and JavaScript code within a single file even went against common sense. Yet, 4 years later, this solution remains ideal for delivering ever more fluid and complex web experiences. ReactJS and its underlying mechanisms have brought a breath of fresh air to the front-end web world. During this session, we will go over the history of ReactJS and explain why this library, although minimalist, addresses the problems of the web in a[...]

Dries Buytaert: Reclaiming my blog as my thought space

Wed, 14 Feb 2018 13:33:49 +0000

Last week, I shared my frustration with using social media websites like Facebook or Twitter as my primary platform for sharing photos and status updates. As an advocate of the open web, this has bothered me for some time, so I made a commitment to prioritize publishing photos, updates and more to my own site. I'm excited to share my plan for how I'd like to accomplish this, but before I do, I'd like to share two additional challenges I face on my blog. These struggles factor into some of the changes I'm considering implementing, so I feel compelled to share them with you. First, I've struggled to cover a wide variety of topics lately. I've been primarily writing about Drupal, Acquia and the Open Web. However, I'm also interested in sharing insights on startups, investing, travel, photography and life outside of work. I often feel inspired to write about these topics, but over the years I've grown reluctant to expand outside of professional interests. My blog is primarily read by technology professionals — from Drupal users and developers, to industry analysts and technology leaders — and in my mind, they do not read my blog to learn about a wider range of topics. I'm conflicted, because I would like my blog to reflect both my personal and professional interests. Secondly, I've been hesitant to share short updates, such as a two-sentence announcement about a new Drupal feature or an Acquia milestone. I used to publish these kinds of short updates quite frequently. It's not that I don't want to share them anymore, it's that I struggle to post them. Every time I publish a new post, it goes out to more than 5,000 people that subscribe to my blog by email. I've been reluctant to share short status updates because I don't want to flood people's inboxes. Throughout the years, I worked around these two struggles by relying on social media; while I used my blog for in-depth blog posts specific to my professional life, I used social media for short updates, sharing photos and starting conversations about a wider variety of topics. But I never loved this division. I've always written for myself, first. Writing pushes me to think, and it is the process I rely on to flesh out ideas. This blog is my space to think out loud, and to start conversations with people considering the same problems, opportunities or ideas. In the early days of my blog, I never considered restricting my blog to certain topics or making it fit specific editorial standards. Om Malik published a blog post last week that echoes my [...]

Philip Van Hoof: Selling at a loss (Verkoop met verlies)

Sat, 10 Feb 2018 23:57:11 +0000

Today I want to draw attention to a Belgian law on selling at a loss. Our country prohibits, by law, any merchant from selling a good at a loss. That is the rule in our Belgium. That rule has (rightly) exceptions. The definition of an exception implies that it is not the rule: selling at a loss is allowed in Belgium only by exception:

  • on the occasion of end-of-season sales or clearance sales;
  • with the aim of disposing of goods prone to rapid spoilage when their preservation can no longer be assured;
  • as a result of external circumstances;
  • for goods that are technically obsolete or damaged;
  • out of the necessity of competition.

I suspect our law exists to combat unfair competition. A merchant thus cannot sell a particular product (e.g. a game console) at a loss in order to gain market dominance for another product in his range (e.g. games), for example with the aim of keeping competitors out of the market. In my view it follows that, should a game console producer sell a console at a loss, this is illegal in Belgium. Let us assume that game console producers active in (sales in) Belgium follow Belgian law. It then follows that they do not sell their game consoles at a loss. So they make a profit. If they did not, they would have to meet the exceptional conditions in the (aforementioned) Belgian law that allow them to make a loss. In all other cases they would be acting unlawfully. That is Belgian law. This means that the purchase of such a game console, as a Belgian consumer, implies that the producer and seller made a certain profit from my purchase. So there is no question of a loss. Unless the producer or seller in Belgium is involved in unlawful dealings. Let us assume that, after purchase, we want to run different software on such a console. Then the producer/seller cannot claim that their profit is made from things that would be sold afterwards (e.g. by means of original software). In other words, their profit has already been made. On the game console itself. If not, the producer or seller would be acting unlawfully (in Belgium). We assume this is not the case, because otherwise they would not have been allowed to sell the good. The good was sold. According to Belgian legislation (right?). If not, then the producer and/or seller is responsible. In no case the [...]

Xavier Mertens: Viper and ReversingLabs A1000 Integration

Fri, 09 Feb 2018 13:48:26 +0000

A quick blog post about a module that I wrote to interconnect the malware analysis framework Viper and the malware analysis platform A1000 from ReversingLabs.

The module can perform two actions at the moment: submit a new sample for analysis and retrieve the analysis results (classification):

viper sample.exe > a1000 -h
usage: a1000 [-h] [-s] [-c]

Submit files and retrieve reports from a ReversingLab A1000

optional arguments:
-h, --help show this help message and exit
-s, --submit Submit file to A1000
-c, --classification Get classification of current file from A1000
viper sample.exe > a1000 -s
[*] Successfully submitted file to A1000, task ID: 393846

viper sample.exe > a1000 -c
[*] Classification
- Threat status : malicious
- Threat name : Win32.Trojan.Fareit dw eldorado
- Trust factor : 5
- Threat level : 2
- First seen : 2018-02-09T13:03:26Z
- Last seen : 2018-02-09T13:07:00Z

The module is available on my GitHub repository.

[The post Viper and ReversingLabs A1000 Integration has been first published on /dev/random]