
Planet SysAdmin





Updated: 2018-04-19T20:20:16+00:00

 



A filesystem for known_hosts - Steve Kemp's Blog

2018-04-19T09:01:00+00:00

The other day I had an idea that wouldn't go away, a filesystem that exported the contents of ~/.ssh/known_hosts.

I can't think of a single useful use for it, beyond simple shell-scripting, and yet I couldn't resist.

 $ go get -u github.com/skx/knownfs
 $ go install github.com/skx/knownfs

Now make it work:

 $ mkdir ~/knownfs
 $ knownfs ~/knownfs

Beneath our mount-point we can expect one directory for each known host, so we'll see entries like these:

 ~/knownfs $ ls | grep \.vpn
 builder.vpn
 deagol.vpn
 master.vpn
 www.vpn

 ~/knownfs $ ls | grep steve
 blog.steve.fi
 builder.steve.org.uk
 git.steve.org.uk
 mail.steve.org.uk
 master.steve.org.uk
 scatha.steve.fi
 www.steve.fi
 www.steve.org.uk

The host-specific entries will each contain a single file, fingerprint, holding the fingerprint of the remote host:

 ~/knownfs $ cd www.steve.fi
 ~/knownfs/www.steve.fi $ ls
 fingerprint
 frodo ~/knownfs/www.steve.fi $ cat fingerprint
 98:85:30:f9:f4:39:09:f7:06:e6:73:24:88:4a:2c:01

I've used it in a few shell-loops to run commands against hosts matching a pattern, but beyond that I'm struggling to think of a use for it.
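For example, here is a minimal Python sketch of that kind of loop (purely illustrative, assuming knownfs is mounted at ~/knownfs as above):

#!/usr/bin/env python3
# Walk a knownfs mount-point and print the fingerprint of every
# host whose name matches a pattern.
import fnmatch
import os

MOUNT = os.path.expanduser("~/knownfs")

for host in sorted(os.listdir(MOUNT)):
    if not fnmatch.fnmatch(host, "*.steve.org.uk"):
        continue
    with open(os.path.join(MOUNT, host, "fingerprint")) as fh:
        print(host, fh.read().strip())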

If you like the idea I guess have a play with it.

It was perhaps more useful and productive than my other recent work - which involves porting an existing network-testing program from Ruby to golang, and in the process making it much more uniform and self-consistent.

The resulting network tester is pretty good, and can now notify via MQ to provide better decoupling too. The downside is of course that nobody changes network-testing solutions on a whim, and so these things are basically always in-house only.

Debian and Free Software



The sensible way to use Bourne shell 'here documents' in pipelines - Chris's Wiki :: blog

2018-04-19T03:10:13+00:00

I was recently considering a shell script where I might want to feed a Bourne shell 'here document' to a shell pipeline. This is certainly possible and years ago I wrote an entry on the rules for combining things with here documents, where I carefully wrote down how to do this and the general rule involved. This time around, I realized that I wanted to use a much simpler and more straightforward approach, one that is obviously correct and is going to be clear to everyone. Namely, putting the production of the here document in a subshell.

(
cat <<EOF
Text of the here document goes here.
EOF
) | some-command

This is not as neat and nominally elegant as taking advantage of the full power of the Bourne shell's arcane rules, and it's probably not as efficient (in at least some sh implementations, you may get an extra process), but I've come around to feeling that that doesn't matter. This may be the brute force solution, but what matters is that I can look at this code and immediately follow it, and I'm going to be able to do that in six months or a year when I come back to the script.

(Here documents are already kind of confusing as it stands without adding extra strangeness.)

Of course you can put multiple things inside the (...) subshell, such as several here documents that you output only conditionally (or chunks of always present static text mixed with text you have to make more decisions about). If you want to process the entire text you produce in some way, you might well generate it all inside the subshell for convenience.

Perhaps you're wondering why you'd want to run a here document through a pipe to something. The case that frequently comes up for me is that I want to generate some text with variable substitution but I also want the text to flow naturally with natural line lengths, and the expansion will have variable length. Here, the natural way out is to use fmt:

(
cat <<EOF
Some text that contains a ${variable} expansion and that we want
reflowed to natural line lengths.
EOF
) | fmt

Using fmt reflows the text regardless of how long the variables expand out to. Depending on the text I'm generating, I may be fine with reflowing all of it (which means that I can put all of the text inside the subshell), or I may have some fixed formatting that I don't want passed through fmt (so I have to have a mix of fmt'd subshells and regular text).

Having written that out, I've just come to the obvious realization that for simple cases I can just directly use fmt with a here document:

fmt <<EOF
Some text with a ${variable} expansion for fmt to reflow.
EOF

This doesn't work well if there's some paragraphs that I want to include only some of the time, though; then I should still be using a subshell.

(For whatever reason I apparently have a little blind spot about using here documents as direct input to programs, although there's no reason for it.)

Recently changed pages in Chris's Wiki :: blog.



Ansible - add apt_key inline - Raymii.org

2018-04-19T00:00:00+00:00

Using the apt_key module one can add an APT key with Ansible. You can get the key from a remote server or from a file, or just use a key ID. I got the request to do some stuff on a machine which was quite restricted (so no HKP protocol) and I was asked not to place too many files on the machine. The APT key was needed but it could not be a file, so using a YAML literal block scalar I was able to add the key inline in the playbook. Not the best way to do it, but one of the many ways Ansible allows it.

The Raymii.org RSS feed, all about GNU/Linux
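The trick works because a YAML literal block scalar ("|") preserves newlines, so an ASCII-armored key survives being pasted straight into the playbook. A quick Python/PyYAML illustration of that behaviour (the task layout and key material below are made-up placeholders, not taken from the original playbook):

# Show that a literal block scalar keeps the newlines of an inline key.
import yaml

snippet = """
- name: Add repository key inline
  apt_key:
    state: present
    data: |
      -----BEGIN PGP PUBLIC KEY BLOCK-----
      mQINBFfakefakefake... (placeholder key material) ...
      -----END PGP PUBLIC KEY BLOCK-----
"""

task = yaml.safe_load(snippet)[0]
print(task["apt_key"]["data"])   # newlines intact, ready for apt-key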



Self-hosted videos with HLS - Vincent Bernat

2018-04-18T19:19:16+00:00

Note This article was first published on Exoscale blog with some minor modifications. Hosting videos on YouTube is convenient for several reasons: pretty good player, free bandwidth, mobile-friendly, network effect and, at your discretion, no ads.1 On the other hand, this is one of the least privacy-friendly solutions. Most other providers share the same characteristics—except the ability to disable ads for free. With the



A CPU's TDP is a misleading headline number - Chris's Wiki :: blog

2018-04-18T06:05:19+00:00

The AMD Ryzen 1800X in my work machine and the Intel Core i7-8700K in my home machine are both 95 watt TDP processors. Before I started measuring things with the actual hardware, I would have confidently guessed that they would have almost the same thermal load and power draw, and that the impact of a 95W TDP CPU over a 65W TDP CPU would be clearly obvious (you can see traces of this in my earlier entry on my hardware plans). Since it's commonly said that AMD CPUs run hotter than Intel ones, I'd expect the Ryzen to be somewhat higher than the Intel, but how much difference would I really expect from two CPUs with the same TDP?

Then I actually measured the power draws of the two machines, both at idle and under various different sorts of load. The result is not even close; the Intel is clearly using less power even after accounting for the 10 watts of extra power the AMD's Radeon RX 550 graphics card draws when it's lit up. It's ahead at idle, and it's also ahead under full load when the CPU should be at maximum power draw. Two processors that I would have expected to be fundamentally the same at full CPU usage are roughly 8% different in measured power draw; at idle they're even further apart on a proportional basis.

(Another way that TDP is misleading to the innocent is that it's not actually a measure of CPU power draw, it's a measure of CPU heat generation; see this informative reddit comment. Generally I'd expect the two to be strongly correlated (that heat has to come from somewhere), but it's possible that something that I don't understand is going on.)

Intellectually, I may have known that a processor's rated TDP was merely a measure of how much heat it could generate at maximum and didn't predict either its power draw when idle or its power draw under load. But in practice I thought that TDP was roughly TDP, and every 95 watt TDP (or 65 watt TDP) processor would be about the same as every other one. My experience with these two machines has usefully smacked me in the face with how this is very much not so. In practice, TDP apparently tells you how big a heatsink you need to be safe and that's it.

(There are all sorts of odd things about the relative power draws of the Ryzen and the Intel under various different sorts of CPU load, but that's going to be for another entry. My capsule summary is that modern CPUs are clearly weird and unpredictable beasts, and AMD and Intel must be designing their power-related internals fairly differently.)

PS: TDP also doesn't necessarily predict your actual observed CPU temperature under various conditions. Some of the difference will be due to BIOS decisions about fan control; for example, my Ryzen work machine appears to be more aggressive about speeding up the CPU fan, and possibly as a result it seems to report lower CPU temperatures under high load and power draw.

(Really, modern PCs are weird beasts. I'm not sure you can do more than putting in good cooling and hoping for the best.)

Recently changed pages in Chris's Wiki :: blog.



Link: Parsing: a timeline - Chris's Wiki :: blog

2018-04-17T04:49:05+00:00

Jeffrey Kegler's Parsing: a timeline (via) is what it says in the title; it's an (opinionated) timeline of various developments in computer language parsing. There are a number of fascinating parts to it and many bits of history that I hadn't known and I'm glad to have read about. Among other things, this timeline discusses all of the things that aren't actually really solved problems in parsing, which is informative all by itself.

(I've been exposed to various aspects of parsing and it's a long standing interest of mine, but I don't think I've ever seen the history of the field laid out like this. I had no idea that so many things were relatively late developments, or of all of the twists and turns involved in the path to LALR parsers.)

Recently changed pages in Chris's Wiki :: blog.



Notes on setting up Raspberry Pi 3 as WiFi hotspot - Errata Security

2018-04-16T12:35:43+00:00

I want to sniff the packets for IoT devices. There are a number of ways of doing this, but one straightforward mechanism is configuring a "Raspberry Pi 3 B" as a WiFi hotspot, then running tcpdump on it to record all the packets that pass through it. Google gives lots of results on how to do this, but they all demand that you have the precise hardware, WiFi hardware, and software that the authors do, so that's a pain.


I got it working using the instructions here. There are a few additional notes, which is why I'm writing this blogpost, so I remember them.
https://www.raspberrypi.org/documentation/configuration/wireless/access-point.md

I'm using the RPi-3-B and not the RPi-3-B+, and the latest version of Raspbian at the time of this writing, "Raspbian Stretch Lite 2018-3-13".

Some things didn't work as described. The first was that it couldn't find the package "hostapd". The solution was to run "apt-get update" a second time.

The second problem was an error message about NAT not working when trying to set the masquerade rule. That's because the 'upgrade' updates the kernel, making the running system out-of-date with the files on the disk. The solution to that is to make sure you reboot after upgrading.

Thus, what you do at the start is:

apt-get update
apt-get upgrade
apt-get update
shutdown -r now

Then it's just "apt-get install tcpdump" and start capturing on wlan0. This will get the non-monitor-mode Ethernet frames, which is what I want.
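If you prefer to stay in Python rather than post-process tcpdump output, a rough scapy equivalent looks something like this (the interface name and packet count are just illustrative assumptions):

# Capture frames from the hotspot interface and save them to a pcap file.
# Requires root and the scapy package; wlan0 matches the setup above.
from scapy.all import sniff, wrpcap

packets = sniff(iface="wlan0", count=1000)   # grab the next 1000 frames
wrpcap("iot-capture.pcap", packets)          # inspect later with tcpdump/Wireshark
print("captured %d packets" % len(packets))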


Advanced persistent cybersecurity



My letter urging Georgia governor to veto anti-hacking bill - Errata Security

2018-04-16T11:42:52+00:00

February 16, 2018

Office of the Governor
206 Washington Street
111 State Capitol
Atlanta, Georgia 30334


Re: SB 315

Dear Governor Deal:

I am writing to urge you to veto SB315, the "Unauthorized Computer Access" bill.

The cybersecurity community, of which Georgia is a leader, is nearly unanimous that SB315 will make cybersecurity worse. You've undoubtedly heard from many of us opposing this bill. It does not help in prosecuting foreign hackers who target Georgian computers, such as our elections systems. Instead, it prevents those who notice security flaws from pointing them out, thereby getting them fixed. This law violates the well-known Kirchhoff's Principle, that instead of secrecy and obscurity, that security is achieved through transparency and openness.

That the bill contains this flaw is no accident. The justification for this bill comes from an incident where a security researcher noticed a Georgia state election system had made voter information public. This remained unfixed, months after the vulnerability was first disclosed, leaving the data exposed. Those in charge decided that it was better to prosecute those responsible for discovering the flaw rather than punish those who failed to secure Georgia voter information, hence this law.

Too many security experts oppose this bill for it to go forward. Signing this bill, one that is weak on cybersecurity by favoring political cover-up over the consensus of the cybersecurity community, will be part of your legacy. I urge you instead to veto this bill, commanding the legislature to write a better one, this time consulting experts, which due to Georgia's thriving cybersecurity community, we do not lack.

Thank you for your attention.

Sincerely,
Robert Graham
(formerly) Chief Scientist, Internet Security Systems

Advanced persistent cybersecurity



Upcoming presentation at LOADays: Varnish Internals – Speeding up a site x100 - ma.ttias.be

2018-04-16T08:38:10+00:00

The post Upcoming presentation at LOADays: Varnish Internals – Speeding up a site x100 appeared first on ma.ttias.be.

I'll be speaking at LOADays next Sunday about Varnish.

If you happen to be around, come say hi -- I'll be there all day!

Varnish Internals -- Speeding up a site x100

In this talk we'll look at the internals of Varnish, a reverse proxy with powerful caching abilities.

We'll walk through an HTTP request end-to-end, and manipulate and change it in ways that no one should ever do in production -- but it'll prove how powerful Varnish can be.

Varnish is a load balancer, caching engine, its own scripting language and a fun way to deep-dive into the HTTP protocol.

Source: Varnish Internals -- Speeding up a site x100 (Mattias Geniar)

The post Upcoming presentation at LOADays: Varnish Internals – Speeding up a site x100 appeared first on ma.ttias.be.

The Web, Open Source, PHP, Security, DevOps & Automation.



Let's stop talking about password strength - Errata Security

2018-04-16T01:57:11+00:00

(image)
Picture from EFF -- CC-BY license
Near the top of most security recommendations is to use "strong passwords". We need to stop doing this.

Yes, weak passwords can be a problem. If a website gets hacked, weak passwords are easier to crack. It's not that this is wrong advice.

On the other hand, it's not particularly good advice, either. It's far down the list of important advice that people need to remember. "Weak passwords" are nowhere near the risk of "password reuse". When your Facebook or email account gets hacked, it's because you used the same password across many websites, not because you used a weak password.

Important websites, where the strength of your password matters, already take care of the problem. They use strong, salted hashes on the backend to protect the password. On the frontend, they force passwords to be a certain length and a certain complexity. Maybe the better advice is to not trust any website that doesn't enforce stronger passwords (minimum of 8 characters consisting of both letters and non-letters).

To some extent, this "strong password" advice has become obsolete. A decade ago, websites had poor protection (MD5 hashes) and no enforcement of complexity, so it was up to the user to choose strong passwords. Now that important websites have changed their behavior, such as using bcrypt, there is less onus on the user.
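For reference, "strong, salted hashes on the backend" boils down to something like the following; a minimal sketch using the Python bcrypt package:

# Hash a password with a per-password salt, then verify a login attempt.
# bcrypt is deliberately slow, which makes offline cracking expensive.
import bcrypt

password = b"correct horse battery staple"
hashed = bcrypt.hashpw(password, bcrypt.gensalt())   # salt is embedded in the hash

print(bcrypt.checkpw(password, hashed))        # True
print(bcrypt.checkpw(b"wrong guess", hashed))  # False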


But the real issue here is that "strong password" advice reflects the evil, authoritarian impulses of the infosec community. Instead of measuring insecurity in terms of costs vs. benefits, risks vs. rewards, we insist that it's an issue of moral weakness. We pretend that flaws happen because people are greedy, lazy, and ignorant. We pretend that security is its own goal, a benefit we should achieve, rather than a cost we must endure.

We like giving moral advice because it's easy: just be "stronger". Discussing "password reuse" is more complicated, forcing us to discuss password managers, writing down passwords on paper, the fact that it's okay to reuse passwords for crappy websites you don't care about, and so on.

What I'm trying to say is that the moral weakness here is us. Rather than giving pertinent advice we give lazy advice. We give advice that shames victims for being weak while pretending that we are strong.

So stop telling people to use strong passwords. It's crass advice on your part and largely unhelpful for your audience, distracting them from the more important things.

Advanced persistent cybersecurity



OpenVMS 7.3 install log with simh VAX on Ubuntu 16.04 - Raymii.org

2018-04-16T00:00:00+00:00

Using a guide I was able to install OpenVMS 7.3 for VAX on simh on Ubuntu 16.04. This is a copy-paste of my terminal for future reference. This is not one of my usual articles, a guide with comprehensive information and background; it is just a log of my terminal.

The Raymii.org RSS feed, all about GNU/Linux



File versioning and deleting on OpenVMS with DELETE and PURGE - Raymii.org

2018-04-15T00:00:00+00:00

I'm now a few weeks into my OpenVMS adventure and my home folder on the [DECUS](http://decus.org) system is quite cluttered with files. More specifically, with different versions of files, since OpenVMS by default has file versioning built in. This means that when you edit a file, or copy a file over an existing file, the old file is not overwritten; a new file with a new version number is written, and the old file is still there. This is one of the best things I've found so far on OpenVMS, in my humble opinion, but it does require maintenance to keep the disk from filling up fast. This article goes into the PURGE and DELETE commands, which help you deal with file versioning and removal.

The Raymii.org RSS feed, all about GNU/Linux



Accelerate NYC Launch Party, Saturday, April 21 - Tom Limoncelli's EverythingSysadmin Blog

2018-04-11T21:00:00+00:00

The NYC launch event for Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations by Nicole Forsgren, PhD, Jez Humble, and Gene Kim will be held this Saturday from 11am-2pm.

All are invited. Space is limited. Please RSVP at EventBrite.

I'm super excited by this book for two reasons: (1) It explains the business case for devops in a way that speaks to executives. (2) It is based on real data with statistical correlation that show real cause and effect.

I'll be at this event. I hope to see you there too!

(image) Thoughts, news and views of Limoncelli, Hogan & Chalup



Bread and data - Steve Kemp's Blog

2018-04-11T09:01:00+00:00

For the past two weeks I've mostly been baking bread. I'm not sure what made me decide to make some the first time, but it actually turned out pretty good, so I've been doing it every day or two ever since.

This is the first time I've made bread in the past 20 years or so - I recall in the past I got frustrated that it never rose, or didn't turn out well. I can't see that I'm doing anything differently, so I'll just write it off as younger-Steve being daft!

No doubt I'll get bored of the delicious bread in the future, but for the moment I've got a good routine going - juggling going to the shops, child-care, and making bread.

Bread I've made includes the following:

(image)

(image)

(image)

(image)

Beyond that I've spent a little while writing a simple utility to embed resources in golang projects, after discovering the tool I'd previously been using, go-bindata, had been abandoned.

In short you feed it a directory of files and it will generate a file static.go with contents like this:

files[ "data/index.html" ] = "....
files[ "data/robots.txt" ] = "User-Agent: * ..."

It's a bit more complex than that, but not much. As expected getting the embedded data at runtime is trivial, and it allows you to distribute a single binary even if you want/need some configuration files, templates, or media to run.
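The generator itself is easy to sketch. Here is a rough Python illustration of the approach (this is not Steve's actual utility; the directory name, package name and hex-escaping are all just assumptions for the sake of the example):

# Emit a static.go that embeds every file under data/ as a Go string literal.
import os

def go_string_literal(data: bytes) -> str:
    # Hex-escape every byte so the output is always a valid Go string literal.
    return '"' + ''.join('\\x%02x' % b for b in data) + '"'

with open("static.go", "w") as out:
    out.write("package main\n\n")
    out.write("var files = map[string]string{}\n\n")
    out.write("func init() {\n")
    for root, _, names in os.walk("data"):
        for name in sorted(names):
            path = os.path.join(root, name).replace(os.sep, "/")
            with open(os.path.join(root, name), "rb") as fh:
                out.write('\tfiles["%s"] = %s\n' % (path, go_string_literal(fh.read())))
    out.write("}\n")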

For example in the project I discussed in my previous post there is an HTTP-server which serves a user-interface based upon Bootstrap. I want the HTML-files which make up that user-interface to be embedded in the binary, rather than distributing them separately.

Anyway it's not unique, it was a fun thing to write, and I've switched to using it now.

Debian and Free Software



ZFS Users Conference, April 19-20, Norwalk, CT - Tom Limoncelli's EverythingSysadmin Blog

2018-04-09T15:54:50+00:00

Datto will be hosting the 2nd annual ZFS User Conference featuring ZFS co-creator, Matt Ahrens! The date is April 19-20 at Datto HQ in Norwalk, CT.

This conference will focus on the deployment, administration, features, and tuning of the ZFS filesystem. Learn about OpenZFS and network with folks running businesses and interesting projects on ZFS.

For more information and registration see http://zfs.datto.com

(I won't be attending as I'm no longer using ZFS, but I'm still a ZFS fanboy so I felt like promoting this.)

(image) Thoughts, news and views of Limoncelli, Hogan & Chalup



Multi-git-status now shows branches with no upstream - Electricmonk.nl weblog

2018-04-08T18:22:17+00:00

Just a quick update on Multi-git-status. It now also shows branches with no upstream. These are typically branches created locally that haven't been configured to track a local or remote branch. Any changes in those branches are lost when the repo is removed from your machine. Additionally, multi-git-status now handles branches with slashes in them properly. For example, "feature/loginscreen". Here's how the output looks now:

 

(image)

You can get multi-git-status from the Github page.
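If you just want the underlying check, a small sketch of the same idea (not necessarily how multi-git-status itself does it) is to ask git which local branches have no configured upstream:

# List local branches of one repository that have no upstream configured.
import subprocess

out = subprocess.run(
    ["git", "for-each-ref", "refs/heads",
     "--format=%(refname:short) %(upstream:short)"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    branch, _, upstream = line.partition(" ")
    if not upstream.strip():
        print("%s: no upstream configured" % branch)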

Ferry Boender's ramblings



zero-knowledge proof: trust without shared secrets - the evolving ultrasaurus

2018-04-07T17:52:58+00:00

In cryptography we typically share a secret which allows us to decrypt future messages. Commonly this is a password that I make up and submit to a Web site, then later produce to verify I am the same person. I missed Kazue Sako's Zero Knowledge Proofs 101 presentation at IIW last week, but Rachel Myers shared an impressively simple retelling in the car on the way back to San Francisco, which inspired me to read the notes and review the proof for myself. I've attempted to reproduce this simple explanation below, also noting additional sources and related articles. Zero Knowledge Proofs (ZKPs) are very useful when applied to internet identity — with an interactive exchange you can prove you know a secret without actually revealing the secret.

Understanding Zero Knowledge Proofs with simple math:

x -> f(x)

A simple one-way function: easy to go one way from x to f(x), but mathematically hard to go from f(x) to x. The most common example is a hash function. Wired: What is Password Hashing? provides an accessible introduction to why hash functions are important to cryptographic applications today.

f(x) = g ^ x mod p

Known (public): g, p. g is a constant and p has to be prime. It is easy to know x and compute g ^ x mod p, but difficult to do in reverse.

Interactive Proof

Alice wants to prove to Bob that she knows x without giving any information about x. Bob already knows f(x). Alice can make f(x) public and then prove that she knows x through an interactive exchange with anyone on the Internet, in this case, Bob.

1. Alice publishes f(x): g^x mod p
2. Alice picks a random number r
3. Alice sends Bob u = g^r mod p
4. Now Bob has an artifact based on that random number, but can't actually calculate the random number
5. Bob returns a challenge e, either 0 or 1
6. Alice responds with v: if e is 0, v = r; if e is 1, v = r + x
7. Bob can now check: if e == 0 he has the random number r as well as the publicly known variables, and can verify that u == g^v mod p; if e == 1, he checks that u*f(x) == g^v (mod p)

I believe step 6 is true based on Congruence of Powers, though I'm not sure that I've transcribed the e == 1 case accurately with my limited ascii representation. If r is truly random, equally distributed between zero and (p-1), this does not leak any information about x, which is pretty neat, yet not sufficient. In order to ensure that Alice cannot be impersonated, multiple iterations are required along with the use of large numbers (see IIW session notes).

Further Reading

Comparing Information Without Leaking It. Ronald Fagin, Moni Naor, Peter Winkler, 1996.
The Knowledge Complexity of Interactive Proof-Systems. The original 1985 paper by Shafi Goldwasser, Silvio Micali, and Charles Rackoff.
How to Explain Zero-Knowledge Protocols to Your Children. Quisquater, Jean-Jacques; Guillou, Louis C.; Berson, Thomas A. (1990). Advances in Cryptology – CRYPTO '89: Proceedings. 435: 628–631.
Applied Kid Cryptography or How To Convince Your Children You Are Not Cheating. Moni Naor, Yael Naor, Omer Reingold, 1999.

[...]

Sarah Allen's reflections on internet software and other topics
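The interactive exchange described in that post is small enough to run end-to-end. Here is a toy Python sketch of it, with deliberately tiny numbers so the arithmetic is easy to follow (a real protocol needs large parameters and, as noted above, many rounds):

# Toy version of the interactive proof: Alice proves knowledge of x
# such that fx = g^x mod p, without revealing x.
import secrets

p = 101            # public prime modulus (tiny, illustration only)
g = 2              # public constant
x = 47             # Alice's secret
fx = pow(g, x, p)  # Alice publishes f(x) = g^x mod p

def one_round() -> bool:
    r = secrets.randbelow(p - 1)      # Alice's random commitment value
    u = pow(g, r, p)                  # sent to Bob
    e = secrets.randbelow(2)          # Bob's challenge: 0 or 1
    v = r if e == 0 else r + x        # Alice's response
    if e == 0:
        return pow(g, v, p) == u
    return pow(g, v, p) == (u * fx) % p

# A cheater passes a single round with probability 1/2, so repeat.
print(all(one_round() for _ in range(20)))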



Hash-based Signatures: An illustrated Primer - A Few Thoughts on Cryptographic Engineering

2018-04-07T15:41:50+00:00

Over the past several years I’ve been privileged to observe two contradictory and fascinating trends. The first is that we’re finally starting to use the cryptography that researchers have spent the past forty years designing. We see this every day in examples ranging from encrypted messaging to phone security to cryptocurrencies. The second trend is that cryptographers are getting ready for all these good times to end.

But before I get to all of that — much further below — let me stress that this is not a post about the quantum computing apocalypse, nor is it about the success of cryptography in the 21st century. Instead I’m going to talk about something much more wonky. This post will be about one of the simplest (and coolest!) cryptographic technologies ever developed: hash-based signatures.

Hash-based signature schemes were first invented in the late 1970s by Leslie Lamport, and significantly improved by Ralph Merkle and others. For many years they were largely viewed as an interesting cryptographic backwater, mostly because they produce relatively large signatures (among other complications). However in recent years these constructions have enjoyed something of a renaissance, largely because — unlike signatures based on RSA or the discrete logarithm assumption — they’re largely viewed as resistant to serious quantum attacks like Shor’s algorithm. First some background.

Background: Hash functions and signature schemes

In order to understand hash-based signatures, it’s important that you have some familiarity with cryptographic hash functions. These functions take some input string (typically of arbitrary length) and produce a fixed-size “digest” as output. Common cryptographic hash functions like SHA2, SHA3 or Blake2 produce digests ranging from 256 bits to 512 bits. In order for a function H to be considered a ‘cryptographic’ hash, it must achieve some specific security requirements. There are a number of these, but here we’ll just focus on three common ones:

1. Pre-image resistance (sometimes known as “one-wayness”): given some output y, it should be time-consuming to find an input x such that H(x) = y. (There are many caveats to this, of course, but ideally the best such attack should require a time comparable to a brute-force search of whatever distribution x is drawn from.)

2. Second-preimage resistance: This is subtly different than pre-image resistance. Given some input x, it should be hard for an attacker to find a different input x' such that H(x) = H(x').

3. Collision resistance: It should be hard to find any two values x, x' such that H(x) = H(x'). Note that this is a much stronger assumption than second-preimage resistance, since the attacker has complete freedom to find any two messages of its choice.

The example hash functions I mentioned above are believed to provide all of these properties. That is, nobody has articulated a meaningful (or even conceptual) attack that breaks any of them. That could always change, of course, in which case we’d almost certainly stop using them. (We’ll discuss the special case of quantum attacks a bit further below.)

Since our goal is to use hash functions to construct signature schemes, it’s also helpful to briefly review that primitive. A digital signature scheme is a public key primitive in which a user (or “signer”) generates a pair of keys, called the public key and private key. The user retains the private key, and can use this to “sign” arbitrary messages — producing a resulting digital signature.
Anyone who has possession of the public key can verify the correctness of a message and its assoc[...]
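The excerpt stops short of the constructions themselves, but the simplest hash-based scheme, Lamport's one-time signature, fits in a short Python sketch using nothing but a hash function. This is a toy version that signs a 256-bit digest; a key pair must never be reused:

# Toy Lamport one-time signature over SHA-256 digests. Illustrative only.
import hashlib, secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def digest_bits(message: bytes):
    d = H(message)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal one secret per digest bit; this is why the key is one-time.
    return [sk[i][b] for i, b in enumerate(digest_bits(message))]

def verify(pk, message: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(digest_bits(message)))

sk, pk = keygen()
sig = sign(sk, b"hello hash-based signatures")
print(verify(pk, b"hello hash-based signatures", sig))  # True
print(verify(pk, b"tampered message", sig))             # False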



Chalk Talk #2: how does Varnish work? - ma.ttias.be

2018-04-06T09:30:08+00:00

The post Chalk Talk #2: how does Varnish work? appeared first on ma.ttias.be.

In the first Chalk Talk video, we looked at what Varnish can do. In this second video, I explain how Varnish does this.

As usual, if you like a Dutch written version, have a look at the company blog.

Next videos will focus more on the technical internals, like how the hashing works, how to optimize your content & site and how to debug Varnish.

The post Chalk Talk #2: how does Varnish work? appeared first on ma.ttias.be.

The Web, Open Source, PHP, Security, DevOps & Automation.



A small web application with Angular5 and Django - Marios Zindilis

2018-04-05T23:00:00+00:00

Django works well as the back-end of an application that uses Angular5 in the front-end. In my attempt to learn Angular5 well enough to build a small proof-of-concept application, I couldn't find a simple working example of a combination of the two frameworks, so I created one. I called this the Pizza Maker. It's available on GitHub, and its documentation is in the README.

If you have any feedback for this, please open an issue on GitHub.

Marios Zindilis's Personal Website



Nested Loops in Ansible - Evaggelos Balaskas - System Engineer

2018-04-05T10:09:02+00:00

Recently I needed to create a nested loop in Ansible. One of the possible issues I had to consider was backward compatibility with both Ansible v1 and Ansible v2. A few days after, Ansible 2.5 introduced the loop keyword and you can read a comprehensive blog entry here: Loop: Plays in the future, items in the past. So here are my notes on the subject:

Variables

Below is a variable yaml file for testing purposes: vars.yml

---
days:
  - Monday
  - Tuesday
  - Wednesday
  - Thursday
  - Friday
  - Saturday
  - Sunday
months:
  - January
  - February
  - March
  - April
  - May
  - June
  - July
  - August
  - September
  - October
  - November
  - December

Ansible v1

Let's start with Ansible v1:

# ansible --version
ansible 1.9.6
  configured module search path = None

Playbook

Below is a very simple ansible-playbook example that supports nested loops:

---
- hosts: localhost
  gather_facts: no
  vars_files:
    - vars.yml
  tasks:
    - name: "This is a simple test"
      debug:
        msg: "Day: {{ item[0] }} exist in Month: {{ item[1] }}"
      with_nested:
        - "{{ days }}"
        - "{{ months }}"

This playbook doesn't do much: it prints a message for every day and every month.

Ansible-Playbook

Run the playbook locally with:

# ansible-playbook nested.yml -c local -l localhost -i "localhost,"

the output:

PLAY [localhost] ******************************
TASK: [This is a simple test] *****************
ok: [localhost] => (item=['Monday', 'January']) => {
    "item": ["Monday", "January"],
    "msg": "Day: Monday exist in Month: January"
}
...
ok: [localhost] => (item=['Sunday', 'December']) => {
    "item": ["Sunday", "December"],
    "msg": "Day: Sunday exist in Month: December"
}
PLAY RECAP *************************************
localhost : ok=1 changed=0 unreachable=0 failed=0

Messages

There are seven (7) days and twelve (12) months, so the output must print 7*12 = 84 messages. Counting the messages:

# ansible-playbook nested.yml -c local -l localhost -i "localhost," | egrep -c msg
84

Time

Measuring the time it needs to pass through the nested loop:

time ansible-playbook nested.yml -c local -l localhost -i "localhost," &> /dev/null
real 0m0.448s
user 0m0.406s
sys 0m0.040s

0.448s, nice!

Ansible v2

Running the same playbook in the latest ansible:

# ansible-playbook nested.yml -c local -l localhost

seems to still work! Compatibility issues: resolved!

Counting the messages:

# ansible-playbook nested.yml | egrep -c msg
84

Time:

# time ansible-playbook nested.yml &> /dev/null
real 0m7.396s
user 0m7.575s
sys 0m0.172s

7.396s !!! That is 7 seconds more than ansible v1.

Complex Loops

The modern way is to use the loop keyword with the nested lookup plugin:

---
- hosts: localhost
  gather_facts: no
  vars_files:
    - vars.yml
  tasks:
    - name: "This is a simple test"
      debug:
        msg: "Day: {{ item[0] }} exist in Month: {{ item[1] }}"
      loop: "{{ lookup('nested', days, months) }}"

Time:

# time ansible-playbook lookup_loop.yml &> /dev/null
real 0m7.975s
user 0m8.169s
sys 0m0.177s

7.623s

Tag(s): ansible

[...]

The sky above the port was the color of television, tuned to a dead channel



Varnish: same hash, different results? Check the Vary header! - ma.ttias.be

2018-04-03T18:45:29+00:00

The post Varnish: same hash, different results? Check the Vary header! appeared first on ma.ttias.be.

I'll admit I get bitten by the Vary header once every few months. It's something a lot of CMS's randomly add, and it has a serious impact on how Varnish handles and treats requests.

For instance, here's a request I was troubleshooting that had these varnishlog hash() data:

-   VCL_call       HASH
-   Hash           "/images/path/to/file.jpg%00"
-   Hash           "http%00"
-   Hash           "www.yoursite.tld%00"
-   Hash           "/images/path/to/file.jpg.jpg%00"
-   Hash           "www.yoursite.tld%00"
-   VCL_return     lookup
-   VCL_call       MISS

A new request, giving the exact same hashing data, would return a different page from the cache/backend. So why does a request with the same hash return different data?

Let me introduce the Vary header.

In this case, the page I was requesting added the following header:

Vary: Accept-Encoding,User-Agent

This instructs Varnish to keep a separate version of each page for every value of Accept-Encoding and User-Agent it finds.

The Accept-Encoding would make sense, but Varnish already handles that internally. A gzipped/plain version will return different data, that makes sense. There's no real point in adding that header for Varnish, but other proxies in between might still benefit from it.

The User-Agent is plain nonsense, why would you serve a different version of a page per browser? If you consider a typical User-Agent string to contain text like Mozilla/5.0 (Macintosh; Intel Mac OS X...) AppleWebKit/537.xx (KHTML, like Gecko) Chrome/65.x.y.z Safari/xxx, that's practically unique per visitor you have.

So, quick hack in this case, I remove the Vary header altogether.

sub vcl_backend_response {
  unset beresp.http.Vary;
  ...
}

No more variations of the cache based on what a random CMS does or says.

The post Varnish: same hash, different results? Check the Vary header! appeared first on ma.ttias.be.

The Web, Open Source, PHP, Security, DevOps & Automation.



How to run Ansible2.5 on CentOS 5 - Evaggelos Balaskas - System Engineer

2018-04-03T13:35:22+00:00

[notes based on a docker centos5]

# cat /etc/redhat-release
CentOS release 5.11 (Final)

Setup Environment

Install compiler:

# yum -y install gcc make

Install zlib headers:

# yum -y install zlib-devel

Install tools:

# yum -y install curl unzip

SSL/TLS Errors

If you are on a CentOS 5x machine, when trying to download files from the internet, you will get this error msg:

This is a brown out of TLSv1 support. TLSv1 support is going away soon, upgrade to a TLSv1.2+ capable client.

or

SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version

that is because CentOS 5x has an old cipher suite that doesn't work with today's standards.

OpenSSL

To bypass these SSL/TLS errors, we need to install a recent version of openssl.

# cd /root/
# curl -LO https://www.openssl.org/source/openssl-1.0.2o.tar.gz
# tar xf openssl*.tar.gz
# cd openssl*
# ./Configure shared linux-x86_64
# make
# make install

The output has some useful info:

OpenSSL shared libraries have been installed in: /usr/local/ssl

So, we have to update the system's library paths, to include this one:

# echo "/usr/local/ssl/lib/" >> /etc/ld.so.conf
# /sbin/ldconfig

Python 2.7

Download the latest Python2.7:

# cd /root/
# curl -LO https://www.python.org/ftp/python/2.7.14/Python-2.7.14.tgz
# tar xf Python*.tgz
# cd Python*

Install Python:

# ./configure --prefix=/opt/Python27 --enable-shared
# make
# make install

PATH

# export PATH=/opt/Python27/bin/:$PATH
# python -c "import ssl; print(ssl.OPENSSL_VERSION)"
OpenSSL 1.0.2o 27 Mar 2018

SetupTools

Download the latest setuptools:

# cd /root/
# export PYTHONHTTPSVERIFY=0
# python -c 'import urllib; urllib.urlretrieve ("https://pypi.python.org/packages/72/c2/c09362ab29338413ab687b47dab03bab4a792e2bbb727a1eb5e0a88e3b86/setuptools-39.0.1.zip", "setuptools-39.0.1.zip")'

Install setuptools:

# unzip setuptools*.zip
# cd setuptools*
# python2.7 setup.py build
# python2.7 setup.py install

PIP

Install PIP:

# cd /root/
# easy_install pip
Searching for pip
Reading https://pypi.python.org/simple/pip/
Downloading https://pypi.python.org/packages/4b/5a/8544ae02a5bd28464e03af045e8aabde20a7b02db1911a9159328e1eb25a/pip-10.0.0b1-py2.py3-none-any.whl#md5=34dd54590477e79bc681d9ff96b9fd39
Best match: pip 10.0.0b1
Processing pip-10.0.0b1-py2.py3-none-any.whl
Installing pip-10.0.0b1-py2.py3-none-any.whl to /opt/Python27/lib/python2.7/site-packages
writing requirements to /opt/Python27/lib/python2.7/site-packages/pip-10.0.0b1-py2.7.egg/EGG-INFO/requires.txt
Adding pip 10.0.0b1 to easy-install.pth file
Installing pip script to /opt/Python27/bin
Installing pip3.6 script to /opt/Python27/bin
Installing pip3 script to /opt/Python27/bin
Installed /opt/Python27/lib/python2.7/site-packages/pip-10.0.0b1-py2.7.egg
Processing dependencies for pip
Finished processing dependencies for pip

Ansible

Now, we are ready to install ansible:

# pip install ansible
Collecting ansible
/opt/Python27/lib/python2.7/site-packages/pip-10.0.0b1-py2.7.egg/pip/_vendor/urllib3/util/ssl_.py:339: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
SNIMissingWarning
Using cached ansible-2.5.0-py2.py3-none-any.whl
Collecting p[...]



Adding rich object data types to Puppet - R.I.Pienaar

2018-04-03T09:23:10+00:00

Extending Puppet using types, providers, facts and functions is well known and widely done. Something new is how to add entire new data types to the Puppet DSL to create entirely new language behaviours. I've done a bunch of this recently with the Choria Playbooks and some other fun experiments, today I'll walk through building a small network wide spec system using the Puppet DSL.

Overview

A quick look at what we want to achieve here: I want to be able to do Choria RPC requests and assert their outcomes, I want to write tests using the Puppet DSL and they should run on a specially prepared environment. In my case I have an AWS environment with CentOS, Ubuntu, Debian and Archlinux machines.

Below I test the File Manager Agent:

Get status for a known file and make sure it finds the file
Create a brand new file, ensure it reports success
Verify that the file exists and is empty using the status action

cspec::suite("filemgr agent tests", $fail_fast, $report) |$suite| {

  # Checks an existing file
  $suite.it("Should get file details") |$t| {
    $results = choria::task("mcollective", _catch_errors => true,
      "action" => "filemgr.status",
      "nodes" => $nodes,
      "silent" => true,
      "fact_filter" => ["kernel=Linux"],
      "properties" => { "file" => "/etc/hosts" }
    )

    $t.assert_task_success($results)

    $results.each |$result| {
      $t.assert_task_data_equals($result, $result["data"]["present"], 1)
    }
  }

  # Make a new file and check it exists
  $suite.it("Should support touch") |$t| {
    $fname = sprintf("/tmp/filemgr.%s", strftime(Timestamp(), "%s"))

    $r1 = choria::task("mcollective", _catch_errors => true,
      "action" => "filemgr.touch",
      "nodes" => $nodes,
      "silent" => true,
      "fact_filter" => ["kernel=Linux"],
      "fail_ok" => true,
      "properties" => { "file" => $fname }
    )

    $t.assert_task_success($r1)

    $r2 = choria::task("mcollective", _catch_errors => true,
      "action" => "filemgr.status",
      "nodes" => $nodes,
      "silent" => true,
      "fact_filter" => ["kernel=Linux"],
      "properties" => { "file" => $fname }
    )

    $t.assert_task_success($r2)

    $r2.each |$result| {
      $t.assert_task_data_equals($result, $result["data"]["present"], 1)
      $t.assert_task_data_equals($result, $result["data"]["size"], 0)
    }
  }
}

I also want to be able to test other things like, let's say, discovery:

cspec::suite("${method} discovery method", $fail_fast, $report) |$suite| {
  $suite.it("Should support a basic discovery") |$t| {
    $found = choria::discover(
      "discovery_method" => $method,
    )

    $t.assert_equal($found.sort, $all_nodes.sort)
  }
}

So we want to make a Spec-like system that can drive Puppet Plans (aka Choria Playbooks) and do various assertions on the outcome. We want to run it with mco playbook run and it should write a JSON report to disk with all suites, cases and assertions.

Adding a new Data Type to Puppet

I'll show how to add the Cspec::Suite data Type to Puppet. This comes in 2 parts: you have to describe the Type that is exposed to Puppet and you have to provide a Ruby implementation of the Type.

Describing the Objects

Here we create the signature for Cspec::Suite:

# modules/cspec/lib/puppet/datatypes/cspec/suite.rb
Puppet::DataT[...]



toolsmith #132 - The HELK vs APTSimulator - Part 2 - HolisticInfoSec™

2018-04-03T07:01:00+00:00

Continuing where we left off in The HELK vs APTSimulator - Part 1, I will focus our attention on additional, useful HELK features to aid you in your threat hunting practice. HELK offers Apache Spark, GraphFrames, and Jupyter Notebooks as part of its lab offering. These capabilities scale well beyond a standard ELK stack, this really is where parallel computing and significantly improved processing and analytics truly take hold. This is a great way to introduce yourself to these technologies, all on a unified platform.

Let me break these down for you a little bit in case you haven't been exposed to these technologies yet. First and foremost, refer to @Cyb3rWard0g's wiki page on how he's designed it for his HELK implementation, as seen in Figure 1.

Figure 1: HELK Architecture

First, Apache Spark. For HELK, "Elasticsearch-hadoop provides native integration between Elasticsearch and Apache Spark, in the form of an RDD (Resilient Distributed Dataset) (or Pair RDD to be precise) that can read data from Elasticsearch." Per the Apache Spark FAQ, "Spark is a fast and general processing engine compatible with Hadoop data" to deliver "lighting-fast cluster computing."

Second, GraphFrames. From the GraphFrames overview, "GraphFrames is a package for Apache Spark which provides DataFrame-based Graphs. GraphFrames represent graphs: vertices (e.g., users) and edges (e.g., relationships between users). GraphFrames also provide powerful tools for running queries and standard graph algorithms. With GraphFrames, you can easily search for patterns within graphs, find important vertices, and more."

Finally, Jupyter Notebooks to pull it all together. From Jupyter.org: "The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more." Jupyter Notebooks provide a higher order of analyst/analytics capabilities, if you haven't dipped your toe in that water, this may be your first, best opportunity.

Let's take a look at using Jupyter Notebooks with the data populated to my Docker-based HELK instance as implemented in Part 1. I repopulated my HELK instance with new data from a different, bare metal Windows instance reporting to HELK with Winlogbeat, Sysmon enabled, and looking mighty compromised thanks to @cyb3rops's APTSimulator.

To make use of Jupyter Notebooks, you need your JUPYTER CURRENT TOKEN to access the Jupyter Notebook web interface. It was presented to you when your HELK installation completed, but you can easily retrieve it via sudo docker logs helk-analytics, then copy and paste the URL into your browser to connect for the first time with a token. It will look like this, http://localhost:8880/?token=3f46301da4cd20011391327647000e8006ee3574cab0b163, as described in the Installation wiki. After browsing to the URL with said token, you can begin at http://localhost:8880/lab, where you should immediately proceed to the Check_Spark_Graphframes_Integrations.ipynb notebook. It's found in the hierarchy menu under training > jupyter_notebooks > getting_started. This notebook is essential to confirming you're ingesting data properly with HELK and that its integrations are fully functioning. Step through it one cell at a time with the play butt[...]



ed(1) mastery is a must for a real Unix person - That grumpy BSD guy

2018-04-01T10:21:00+00:00

ed(1) is the standard editor. Now there's a book out to help you master this fundamental Unix tool.

In some circles on the Internet, your choice of text editor is a serious matter.

We've all seen the threads on mailing lists, USENET news groups and web forums about the relative merits of Emacs vs vi, including endless iterations of flame wars, and sometimes even involving lesser known or non-portable editing environments.

And then of course, from the Linux newbies we have seen an endless stream of tweeted graphical 'memes' about the editor vim (aka 'vi Improved') versus the various apparently friendlier-to-some options such as GNU nano. Apparently even the 'improved' version of the classical and ubiquitous vi(1) editor is a challenge even to exit for a significant subset of the younger generation.

Yes, your choice of text editor or editing environment is a serious matter. Mainly because text processing is so fundamental to our interactions with computers.

But for those of us who keep our systems on a real Unix (such as OpenBSD or FreeBSD), there is no real contest. The OpenBSD base system contains several text editors including vi(1) and the almost-emacs mg(1), but ed(1) remains the standard editor.

Now Michael Lucas has written a book to guide the as yet uninitiated to the fundamentals of the original Unix text editor. It is worth keeping in mind that much of Unix and its original standard text editor were written back when the standard output and default user interface was more likely than not a printing terminal.

To some of us, reading and following the narrative of Ed Mastery is a trip down memory lane. To others, following along the text will illustrate the horror of the world of pre-graphic computer interfaces. For others again, the fact that ed(1) doesn't use your terminal settings much at all offers hope of fixing things when something or somebody screwed up your system so you don't have a working terminal for that visual editor.

ed(1) is a line editor. And while you may have heard mutters that 'vi is just a line editor in drag', vi(1) does offer a distinctly visual interface that only became possible with the advent of the video terminal, affectionately known as the glass teletype. ed(1) offers no such luxury, but as the book demonstrates, even ed(1) is able to display any part of a file's content for when you are unsure what your file looks like.

The book Ed Mastery starts by walking the reader through a series of editing sessions using the classical ed(1) line editing interface. To some readers the thought of editing text while not actually seeing at least a few lines at a time onscreen probably sounds scary. This book shows how it is done, and while the author never explicitly mentions it, the text aptly demonstrates how the ed(1) command set is in fact the precursor of how things are done in many Unix text processing programs.

As one might expect, the walkthrough of ed(1) text editing functionality is followed up by a sequence on searching and replacing which ultimately leads to a very readable introduction to regular expressions, which of course are part of the ed(1) package too. If you know your ed(1) command set, you are quite far along in the direction of mastering the stream editor sed(1), as well as a number of other systems where regular expressions play a crucial role.

After the basic editing functionality and some minor text processing magi[...]



Tarsnap pricing change - Daemonic Dispatches

2018-04-01T00:00:00+00:00

I launched the current Tarsnap website in 2009, and while we've made some minor adjustments to it over the years — e.g., adding a page of testimonials, adding much more documentation, and adding a page with .deb binary packages — the changes have overall been relatively modest. One thing people criticized the design for in 2009 was the fact that prices were quoted in picodollars; this is something I have insisted on retaining for the past eight years.

One of the harshest critics of Tarsnap's flat rate picodollars-per-byte pricing model is Patrick McKenzie — known to much of the Internet as "patio11" — who despite our frequent debates can take credit for ten times more new Tarsnap customers than anyone else, thanks to a single ten thousand word blog post about Tarsnap. The topic of picodollars has become something of an ongoing debate between us, with Patrick insisting that they communicate a fundamental lack of seriousness and sabotage Tarsnap's success as a business, and me insisting that they communicate exactly what I want to communicate, and attract precisely the customer base I want to have. In spite of our disagreements, however, I really do value Patrick's input; indeed, the changes I mentioned above came about in large part due to the advice I received from him, and for a long time I've been considering following more of Patrick's advice.

A few weeks ago, I gave a talk at the AsiaBSDCon conference about profiling the FreeBSD kernel boot. (I'll be repeating the talk at BSDCan if anyone is interested in seeing it in person.) Since this was my first time in Tokyo (indeed, my first time anywhere in Asia) and despite communicating with him frequently I had never met Patrick in person, I thought it was only appropriate to meet him for dinner; fortunately the scheduling worked out and there was an evening when he was free and I wasn't suffering too much from jetlag. After dinner, Patrick told me about a cron job he runs:

Got dinner with @cperciva in Tokyo. At the end of dinner, told him about my cron job that polls Tarsnap looking for picodollars to go away. After laughing then confirming I was serious he suggested I tell you, Twitter, so here you are. I can be patient.
— Patrick McKenzie (@patio11) March 9, 2018

I knew then that the time was coming to make a change Patrick has long awaited: Getting rid of picodollars. It took a few weeks before the right moment arrived, but I'm proud to announce that as of today, April 1st 2018, Tarsnap's storage pricing is 8333333 attodollars per byte-day.

This addresses a long-standing concern I've had about Tarsnap's pricing: Tarsnap bills customers for usage on a daily basis, but since 250 picodollars is not a multiple of 30, usage bills have been rounded. Tarsnap's accounting code works with attodollars internally (Why attodollars? Because it's easy to have 18 decimal places of precision using 64.64 fixed-point arithmetic.) and so during 30-day months I have in fact been rounding down and billing customers at a rate of 8333333 attodollars per byte-day for years — so making this change on the Tarsnap website brings it in line with the reality of the billing system.

Of course, there are other advantages to advertising Tarsnap's pricing in attodollars. Everything which was communicated by pricing storage in picodollars per byte-month is communicated e[...]



Working with Yaml and Jinja2 in Python3 - Evaggelos Balaskas - System Engineer

2018-03-31T18:17:20+00:00

YAML

YAML is a human friendly data serialization standard, especially for configuration files. It's simple to read and use. Here is an example:

---
# A list of tasty fruits
fruits:
  - Apple
  - Orange
  - Strawberry
  - Mango

btw the latest version of yaml is: v1.2.

PyYAML

Working with yaml files in python is really easy. The python module PyYAML must be installed in the system. In an archlinux box, the system-wide installation of this python package can be done by typing:

$ sudo pacman -S --noconfirm python-yaml

Python3 - Yaml Example

Save the above yaml example to a file, eg. fruits.yml. Open the Python3 interpreter and write:

$ python3.6
Python 3.6.4 (default, Jan 5 2018, 02:35:40)
[GCC 7.2.1 20171224] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from yaml import load
>>> print(load(open("fruits.yml")))
{'fruits': ['Apple', 'Orange', 'Strawberry', 'Mango']}
>>>

an alternative way is to write the above commands to a python file:

from yaml import load
print(load(open("fruits.yml")))

and run it from the console:

$ python3 test.py
{'fruits': ['Apple', 'Orange', 'Strawberry', 'Mango']}

Instead of print we can use yaml dump, eg.:

import yaml
yaml.dump(yaml.load(open("fruits.yml")))
'fruits: [Apple, Orange, Strawberry, Mango]\n'

The return type of yaml.load is a python dictionary:

type(load(open("fruits.yml")))
<class 'dict'>

Have that in mind.

Jinja2

Jinja2 is a modern and designer-friendly templating language for Python. As a template engine, we can use jinja2 to build complex markup (or even text) output, really fast and efficiently. Here is a jinja2 template example:

I like these tasty fruits:
* {{ fruit }}

where {{ fruit }} is a variable. Declaring the fruit variable with some value, the jinja2 template can generate the preferable output.

python-jinja

In an archlinux box, the system-wide installation of this python package can be done by typing:

$ sudo pacman -S --noconfirm python-jinja

Python3 - Jinja2 Example

Below is a python3 - jinja2 example:

import jinja2

template = jinja2.Template("""
I like these tasty fruits:
* {{ fruit }}
""")

data = "Apple"
print(template.render(fruit=data))

The output of this example is:

I like these tasty fruits:
* Apple

File Template

Reading the jinja2 template from a template file is a little more complicated than before. Building the jinja2 environment is step one:

env = jinja2.Environment(loader=jinja2.FileSystemLoader("./"))

and Jinja2 is ready to read the template file:

template = env.get_template("t.j2")

The template file t.j2 is a little different than before:

I like these tasty fruits:
{% for fruit in fruits -%}
* {{ fruit }}
{% endfor %}

Yaml, Jinja2 and Python3

To render the template a dict of global variables must be passed. And when parsing the yaml file, yaml.load returns a dictionary! So everything is in place. Combine everything together:

from yaml import load
from jinja2 import Environment, FileSystemLoader

mydata = (load(open("fruits.yml")))
env = Environment(loader=FileSystemLoader("./"))
template = env.get_template("t.j2")
print(template.render(mydata))

and the result is:

$ python3 test.py

I like these tasty fruits:
* Apple
* Orange
* Strawberry
* Mango

Tag(s): python, python3, yaml, jinja, jinja2

[...]

The sky above the port was the color of t[...]



Rewriting some services in golang - Steve Kemp's Blog

2018-03-30T07:00:00+00:00

The past couple of days I've been reworking a few of my existing projects, and converting them from Perl into Golang.

Bytemark had a great alerting system for routing alerts to different engineers, via email, SMS, and chat-messages. The system is called mauvealert and is available here on github. The system is built around the notion of alerts which have different states (such as "pending", "raised", or "acknowledged"). Each alert is submitted via a UDP packet getting sent to the server with a bunch of fields:

Source IP of the submitter (this is implicit).
A human-readable ID such as "heartbeat", "disk-space-/", "disk-space-/root", etc.
A raise-field.
More fields here ..

Each incoming submission is stored in a database, and events are considered unique based upon the source+ID pair, such that if you see a second submission from the same IP, with the same ID, then any existing details are updated. This update-on-receive behaviour is pretty crucial to the way things work, especially when coupled with the "raise"-field. A raise field might have values such as:

+5m    This alert will be raised in 5 minutes.
now    This alert will be raised immediately.
clear  This alert will be cleared immediately.

One simple way the system is used is to maintain heartbeat-alerts. Imagine a system sends the following message, every minute:

id:heartbeat raise:+5m [source:1.2.3.4]

The first time this is received by the server it will be recorded in the database. The next time this is received the existing event will be updated, and crucially the time to raise an alert will be bumped (i.e. it will become current-time + 5m). The next time the update is received the raise-time will also be bumped .. At some point the submitting system crashes, and five minutes after the last submission the alert moves from "pending" to "raised" - which will make it visible in the web-based user-interface, and also notify an engineer.

With this system you could easily write trivial and stateless ad-hoc monitoring scripts like so, which would raise/clear:

curl https://example.com && \
  send-alert --id http-example.com --raise clear --detail "site ok" || \
  send-alert --id http-example.com --raise now --detail "site down"

In short mauvealert allows aggregation of events, and centralises how/when engineers are notified. There's the flexibility to look at events, and send them to different people at different times of the day, decide some are urgent and must trigger SMSs, and some are ignorable and just generate emails.

(In mauvealert this routing is done by having a configuration file containing ruby, this attempts to match events so you could do things like say "If the event-id contains "failed-disc" then notify a DC-person, or if the event was raised from $important-system then notify everybody.")

I thought the design was pretty cool, and wanted something similar for myself. My version, which I set up a couple of years ago, was based around HTTP+JSON, rather than UDP-messages, and written in perl: https://github.com/skx/purple

The advantage of using HTTP+JSON is that writing clients to submit events to the central system could easily and cheaply be done in multiple environments for multiple platforms. I didn't see the need for the efficiency of using binary UDP-based message[...]
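Since the replacement speaks HTTP+JSON, a submission client is only a few lines in any language. A rough Python sketch (the URL and field names are assumptions for illustration, not the actual API of mauvealert or of the project linked above):

# Submit a heartbeat alert to an HTTP+JSON alerting endpoint.
import json
import urllib.request

alert = {"id": "heartbeat", "raise": "+5m", "detail": "cron heartbeat"}
req = urllib.request.Request(
    "http://alerts.example.com/events",            # hypothetical endpoint
    data=json.dumps(alert).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)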



Sequence definitions with kwalifyLZone Blog

2018-03-27T20:59:21+00:00

After a lot of guessing and trying on how to define a simple sequence in kwalify (which I use as a JSON/YAML schema validator), I want to share this solution for a YAML schema.

So my use case is whitelisting certain keys and somehow ensuring their types. Using this I want to use kwalify to validate YAML files. Doing this for scalars is simple, but hashes and lists of scalar elements are not. Most problematic were the lists...

Defining Arbitrary Scalar Sequences

So how to define a list in kwalify? The user guide gives this example:
---
list:
  type: seq
  sequence:
     - type: str
This gives us a list of strings. But many lists also contain numbers, and some contain structured data. For my use case I want to exclude structured data AND allow numbers, so "type: any" cannot be used. "type: any" also wouldn't work because it would require defining the mapping for any, which we cannot know in a validation use case where we just want to ensure the type of the list. The great thing is there is a type "text" which you can use to allow a list of strings or numbers or both, like this:
---
list:
  type: seq
  sequence:
     - type: text
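As an aside, if you happen to use the Python port pykwalify instead of the Ruby kwalify (an assumption on my part, not something the original setup uses; the file names are placeholders), the same schema file can be applied like this:

from pykwalify.core import Core

# Validate data.yaml against the sequence schema above (saved as schema.yaml).
c = Core(source_file="data.yaml", schema_files=["schema.yaml"])
c.validate(raise_exception=True)  # raises an exception on violations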

Building a key name + type validation schema

As already mentioned the need for this is to have a whitelisting schema with simple type validation. Below you see an example for such a schema:
---
type: map
mapping:
  "default_definition": &allow_hash
     type: map
     mapping:
       =:
         type: any

"default_list_definition": &allow_list type: seq sequence: # Type text means string or number - type: text

"key1": *allow_hash "key2": *allow_list "key3": type: str

  =:
     type: number
     range: { max: 29384855, min: 29384855 }
At the top there are two dummy keys "default_definition" and "default_list_definition" which we use to define two YAML references "allow_hash" and "allow_list" for generic hashes and scalar only lists.

In the middle of the schema you see three keys which are whitelisted and using the references are typed as hash/list and also as a string.

Finally for this to be a whitelist we need to refuse all other keys. Note that '=' as a key name stands for a default definition. Now we want to say: default is "not allowed". Sadly kwalify has no mechanism for this that allows expressing something like
---
  =:
    type: invalid
Therefore we resort to an absurd type definition (that we hopefully never use), for example a number that has to be exactly 29384855. All other keys not listed in the whitelist above will hopefully fail to be this number and cause kwalify to throw an error.
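For illustration (keys and values invented here), a document like the following passes the whitelist, while any extra top-level key would have to be that absurd number and therefore fails validation:

---
key1:
  anything: goes here
  nested: true
key2:
  - 42
  - some text
key3: just a string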

This is how the kwalify YAML whitelist works.



PyPI does brownouts for legacy TLSLZone Blog

2018-03-27T20:35:50+00:00

Nice! Reading through the maintenance notices on my status page aggregator I learned that PyPI started intentionally blocking legacy TLS clients as a way of getting people to switch before TLS 1.0/1.1 support is gone for real.

Here is a quote from their status page:

In preparation for our CDN provider deprecating TLSv1.0 and TLSv1.1 protocols, we have begun rolling brownouts for these protocols for the first ten (10) minutes of each hour.

During that window, clients accessing pypi.python.org with clients that do not support TLSv1.2 will receive an HTTP 403 with the error message "This is a brown out of TLSv1 support. TLSv1 support is going away soon, upgrade to a TLSv1.2+ capable client.".


I like this action as a good balance: it hurts just as much as needed to get end users to stop putting off updates.
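If you are unsure whether your own tooling is affected, a quick check of the protocol your Python stack negotiates (a sketch; the host name is simply the one from the notice) looks like this:

import socket
import ssl

# Report the TLS protocol version negotiated with PyPI by this Python/OpenSSL build.
ctx = ssl.create_default_context()
with socket.create_connection(("pypi.python.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="pypi.python.org") as tls:
        print(tls.version())  # e.g. 'TLSv1.2'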



The Virtual Horizon Podcast Episode 2 – A Conversation with Angelo LucianiThe Virtual Horizon

2018-03-26T13:00:31+00:00

On this episode of The Virtual Horizon podcast, we’ll journey to the French Riviera for the 2017 Nutanix .Next EU conference. We’ll be joined by Angelo Luciani, Community Evangelist for Nutanix, to discuss blogging and the Virtual Design Master competition.

Nutanix has two large conferences scheduled for 2018 – .Next in New Orleans in May 2018 and .Next EU in London at the end of November 2018.

Show Credits:
Podcast music is a derivative of Boogie Woogie Bed by Jason Shaw (audionatix.com) Licensed under Creative Commons: By Attribution 3.0 License http://creativecommons.org/licenses/by/3.0/

Virtualization, Automation, and End-User Computing



Integration of a Go service with systemd: socket activationVincent Bernat

2018-03-19T08:28:47+00:00

In a previous post, I highlighted some useful features of systemd when writing a service in Go, notably to signal readiness and prove liveness. Another interesting bit is socket activation: systemd listens on behalf of the application and, on incoming traffic, starts the service with a copy of the listening socket. Lennart Poettering details in a blog post:

If a service dies, its listening socket stays around, not losing a single message. After a restart of the crashed service it can continue right where it left off. If a service is upgraded we can restart the service while keeping around its sockets, thus ensuring the service is continuously responsive. Not a single connection is lost during the upgrade.

This is one solution to get zero-downtime deployment for your application. Another upside is you can run your daemon with less privileges—losing rights is a difficult task in Go.1

  • The basics
  • Handling of existing connections
  • Waiting a few seconds for existing connections
  • Waiting longer for existing connections
  • Waiting longer for existing connections (alternative)
  • Zero-downtime deployment?
  • Addendum: decoy process using Go
  • Addendum: identifying sockets by name

The basics

Let’s take back our nifty 404-only web server:

package main

import (
    "log"
    "net"
    "net/http"
)

func main() {
    listener, err := net.Listen("tcp", ":8081")
    if err != nil {
        log.Panicf("cannot listen: %s", err)
    }
    http.Serve(listener, nil)
}

Here is the socket-activated version, using go-systemd:

package main

import (
    "log"
    "net/http"

    "github.com/coreos/go-systemd/activation"
)

func main() {
    listeners, err := activation.Listeners(true) // ❶
    if err != nil {
        log.Panicf("cannot retrieve listeners: %s", err)
    }
    if len(listeners) != 1 {
        log.Panicf("unexpected number of socket activation (%d != 1)", len(listeners))
    }
    http.Serve(listeners[0], nil) // ❷
}

In ❶, we retrieve the listening sockets provided by systemd. In ❷, we use the first one to serve HTTP requests. Let’s test the result with systemd-socket-activate:

$ go build 404.go
$ systemd-socket-activate -l 8000 ./404
Listening on [::]:8000 as 3.

In another terminal, we can make some requests to the service:

$ curl '[::1]':8000
404 page not found
$ curl '[::1]':8000
404 page not found

For a proper integration with systemd, you need two files: a socket unit for the listening socket, and a service unit for the associated service. We can use the following socket unit, 404.socket:

[Socket]
ListenStream = 8000
BindIPv6Only = both

[Install]
WantedBy = sockets.target

The systemd.socket(5) manual page describes the available options. BindIPv6Only = both is explicitly specified because the default value is distribution-dependent. As for the service unit, we can use the following one, 404.service:

[Unit]
Description = 404 micro-service

[Service]
ExecStart = /usr/bin/404

systemd knows the two files work together because they share the same prefix. Once the files are in /etc/systemd/system, execute systemctl daemon-reload and systemctl start 404.socket. Your service is ready to accept connections!

Handling of existing conn[...]
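Not from the original article, but for comparison, the same single-socket activation contract can be exercised from Python; systemd (and systemd-socket-activate) hands over the listening socket as file descriptor 3 and sets LISTEN_FDS:

import os
import socket

SD_LISTEN_FDS_START = 3  # first file descriptor passed by systemd

# Sanity-check the activation environment before adopting the socket.
if int(os.environ.get("LISTEN_FDS", "0")) != 1:
    raise SystemExit("expected exactly one socket from systemd")

sock = socket.socket(fileno=SD_LISTEN_FDS_START)  # already bound and listening
while True:
    conn, _addr = sock.accept()
    conn.sendall(b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\nConnection: close\r\n\r\n")
    conn.close()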



Restic (backup) deleting old backups is extremely slowElectricmonk.nl weblog

2018-03-18T15:34:13+00:00

Here's a very quick note:

I've been using the Restic backup tool with the SFTP backend for a while now, and so far it was great. Until I tried to prune some old backups. It takes two hours to prune 1 GiB of data from a 15 GiB backup. During that time, you cannot create new backups. It also consumes a huge amount of bandwidth when deleting old backups. I strongly suspect it downloads each blob from the remote storage backend, repacks it and then writes it back.

I've seen people on the internet with a few hundred GiB worth of backups having to wait 7 days to delete their old backups. Since the repo is locked during that time, you cannot create new backups.

This makes Restic completely unusable as far as I'm concerned. Which is a shame, because other than that, it's an incredible tool.

Ferry Boender's ramblings



Route-based VPN on Linux with WireGuardVincent Bernat

2018-03-18T01:29:20+00:00

In a previous article, I described an implementation of redundant site-to-site VPNs using IPsec (with strongSwan as an IKE daemon) and BGP (with BIRD) to achieve this: 🦑 Three sites using redundant IPsec VPNs to protect some subnets. The two strengths of such a setup are:

  • Routing daemons distribute routes to be protected by the VPNs. They provide high availability and decrease the administrative burden when many subnets are present on each side.
  • Encapsulation and decapsulation are executed in a different network namespace. This enables a clean separation between a private routing instance (where VPN users are) and a public routing instance (where VPN endpoints are).

As an alternative to IPsec, WireGuard is an extremely simple (less than 5,000 lines of code) yet fast and modern VPN that utilizes state-of-the-art and opinionated cryptography (Curve25519, ChaCha20, Poly1305) and whose protocol, based on Noise, has been formally verified. It is currently available as an out-of-tree module for Linux but is likely to be merged when the protocol is not subject to change anymore. Compared to IPsec, its major weakness is its lack of interoperability.

It can easily replace strongSwan in our site-to-site setup. On Linux, it already acts as a route-based VPN. As a first step, for each VPN, we create a private key and extract the associated public key:

$ wg genkey
oM3PZ1Htc7FnACoIZGhCyrfeR+Y8Yh34WzDaulNEjGs=
$ echo oM3PZ1Htc7FnACoIZGhCyrfeR+Y8Yh34WzDaulNEjGs= | wg pubkey
hV1StKWfcC6Yx21xhFvoiXnWONjGHN1dFeibN737Wnc=

Then, for each remote VPN, we create a short configuration file:1

[Interface]
PrivateKey = oM3PZ1Htc7FnACoIZGhCyrfeR+Y8Yh34WzDaulNEjGs=
ListenPort = 5803

[Peer]
PublicKey  = Jixsag44W8CFkKCIvlLSZF86/Q/4BovkpqdB9Vps5Sk=
EndPoint   = [2001:db8:2::1]:5801
AllowedIPs = 0.0.0.0/0,::/0

A new ListenPort value should be used for each remote VPN. WireGuard can multiplex several peers over the same UDP port but this is not applicable here, as the routing is dynamic. The AllowedIPs directive tells WireGuard to accept and send any traffic.

The next step is to create and configure the tunnel interface for each remote VPN:

$ ip link add dev wg3 type wireguard
$ wg setconf wg3 wg3.conf

WireGuard initiates a handshake to establish symmetric keys:

$ wg show wg3
interface: wg3
  public key: hV1StKWfcC6Yx21xhFvoiXnWONjGHN1dFeibN737Wnc=
  private key: (hidden)
  listening port: 5803

peer: Jixsag44W8CFkKCIvlLSZF86/Q/4BovkpqdB9Vps5Sk=
  endpoint: [2001:db8:2::1]:5801
  allowed ips: 0.0.0.0/0, ::/0
  latest handshake: 55 seconds ago
  transfer: 49.84 KiB received, 49.89 KiB sent

Like VTI interfaces, WireGuard tunnel interfaces are namespace-aware: once created, they can be moved into another network namespace where clear traffic is encapsulated and decapsulated. Encrypted traffic is routed in its original namespace. Let’s move each interface into the private namespace and assign it a point-to-point IP address:

$ ip link set netns private dev wg3
$ ip -n private addr add 2001:db8:ff::/127 dev wg3
$ ip -n private link set wg3 up

The remote end uses 2001:db8:ff::1/127. Once everything is set up, from one VPN, we sh[...]



Puppet Agent Settings IssueLZone Blog

2018-03-17T20:38:08+00:00

Experienced a strange puppet agent 4.8 configuration issue this week. To distribute the agent runs over time and even out puppet master load, I wanted to configure the splay settings properly. There are two settings:
  • A boolean "splay" to enable/disable splaying
  • A range limiter "splayLimit" to control the randomization
What first confused me was that "splay" is not on by default. Of course, when using the open source version it makes sense to have it off. Having it on by default sounds more like an enterprise feature :-)

No matter the default: after deploying an agent config with settings like this
[agent]
runInterval = 3600
splay = true
splayLimit = 3600
... nothing happened. Runs were still not randomized. Checking the active configuration with
# puppet config print | grep splay
splay=false
splayLimit=1800
revealed that my config settings were not working at all. What was utterly confusing is that even the runInterval was reported as 1800 (which is the default value). But while the splay just did not work, the effective runInterval was 3600!

After hours of debugging it, I happened to read the puppet documentation section that covers the config sections like [agent] and [main]. It says that [main] configures global settings and other sections can override the settings in [main], which makes sense.

But it just doesn't work this way. In the end the solution was using [main] as config section instead of [agent]:
[main]
runInterval=3600
splay=true
splayLimit=3600
and with this config "puppet config print" finally reported the settings as effective and the runtime behaviour had the expected randomization.

Maybe I misread something somewhere, but this is really hard to debug. And INI files are not really helpful in Unix. Overriding works better with default files and with drop dirs.



Target your damned survey reportSysAdmin1138 Expounds

2018-03-14T20:59:16+00:00

StackOverflow has released their 2018 Developer Hiring Landscape report. (alternate source)

This is the report that reportedly is about describing the demographics and preferences of software creators, which will enable people looking to hire such creators to better tailor their offerings. It's an advertising manual, basically.

However, they dropped the ball in a few areas. One of which has been getting a lot of traction on Twitter.

StackOverflow developer survey results has childcare benefits as the lowest priority work benefit that developers value and that ~7% of respondents were women 🙄https://t.co/eIj0oUSHzJ
-- Emma-Ashley (@EmmaAshley) March 13, 2018

It's getting traction for a good reason, and it has to do with how these sorts of reports are written. The section under discussion here is "Differences in assessing jobs by gender". They have five cross-tabs here:

  • All respondents highest-ranked.
  • All respondents lowest-ranked (what the above references).
  • All men highest-ranked.
  • All women highest-ranked.
  • All non-binary highest-ranked (they have this. This is awesome).

I took this survey, and it was one of those classic questions like: rank these ten items from lowest to highest. And yet, this report seems to ignore everything but the 1's and 10's. This is misguided, and leaves a lot of very valuable market-segment targeting information on the floor.

Since 92% of respondents were men, the first and third tabs were almost identical, differing only by tenths of a percent. The second tab is likewise; that's a proxy tab for "what men don't want". We don't know how women or non-binary people differ in their least-liked preferences.

There is some very good data they could have presented, but chose not to. First of all, the number one, two and three priorities are the ones that people are most conscious of and may be willing to compromise one to get the other two. This should have been presented:

  • All respondents top-3 ranked.
  • All men top-3 ranked.
  • All women top-3 ranked.
  • All non-binary top-3 ranked.

Compensation/Benefits would probably be close to 100%, but we would get interesting differences in the number two and three places on that chart. This gives recruiters the information they need to construct their pitches. Top-rank is fine, but you also want to know the close-enoughs. Sometimes, if you don't hit the top spot, you can win someone by hitting everything else.

I have the same complaint for their "What Developers Value in Compensation and Benefits" cross-tab. Salary/Bonus is the top item for nearly everyone. This is kind of a gimmie. The number 2 and 3 places are very important because they're the tie-breaker. If an applicant is looking at a job that hits their pay rank, but misses on the next two most important priorities, they're going to be somewhat less enthusiastic. In a tight labor market, if they're also looking at an offer from a company that misses the pay by a bit and hits the rest, that may be the offer that gets accepted. The 2 through 9 rankings on that chart are important.

This is a company that uses proportional voting for their moderator [...]



No VMware NSX Hardware Gateway Support for CiscoThe Lone Sysadmin

2018-03-14T17:26:30+00:00

I find it interesting, as I’m taking my first real steps into the world of VMware NSX, that there is no Cisco equipment supported as a VMware NSX hardware gateway (VTEP). According to the HCL on March 13th, 2018 there is a complete lack of “Cisco” in the “Partner” category: I wonder how that works out […]

The post No VMware NSX Hardware Gateway Support for Cisco appeared first on The Lone Sysadmin. Head over to the source to read the full post!

(image) Rounding Up IT Outlaws



Making Of “Murdlok”, the new old adventure game for the C64pagetable.com

2018-03-09T17:12:52+00:00

Recently, the 1986 adventure game “Murdlok” was published here for the first time. This is author Peter Hempel‘s “making-of” story, translated here from the German original. (English translation)

In the beginning was the breadbox (“Brotkasten”): The year is 1984, or was it already 1985? I have forgotten over all these years. Computers are still a magic word, even though they have been on the market for years. By now they are so small that they can easily be put on a desk. Microprocessor! And it should have color too, not monochrome as was still common everywhere. The Commodore VC20 appeared in an ad in an illustrated magazine, the “Volkscomputer” (people’s computer), truly a strange name, just like the name of the company that makes it. C=Commodore, what does this computer have to do with seafaring, I ask myself? Well, at least the page had caught my eye. We’ll get that thing, but right away the “big one”, the C64 with 64 KB. We’ll order it by mail order from Quelle. That’s how my buddy approached me. Back then that still meant considerable costs: the computer 799 D-Mark, the floppy 799 D-Mark, and a color screen on top of that, at that time a portable TV for 599 D-Mark.

Once everything had arrived, off we went! Without self-study there was nothing to be done; for me this technology was completely uncharted territory. I also didn’t know anyone who knew their way around it, and neither did my buddy. Technical books were bought! BASIC for beginners! What an exciting story. You type something in and immediately get a result, sometimes an expected one and sometimes an unexpected one. The thing had me hooked, day and night, whenever work and my girlfriend allowed it. At some point the adventure “Zauberschloß” by Dennis Merbach fell into my hands. This kind of game was exactly my thing! Playing and thinking! The idea of building such an adventure myself began to take root in me. “Adventures und wie man sie programmiert” (“Adventures and how to program them”) was the book I consulted. I definitely wanted nice graphics and, of course, as many rooms as possible. I then made up the story, and over the course of programming I changed and improved it quite often.

I had decided to create the graphics with a modified character set. So I typed in the character set editor from the 64’er magazine. Yes, I needed sprites too, so I typed in the sprite editor from the 64’er magazine as well. “Maschinensprache für Anfänger” (“Machine Language for Beginners”), and the small modified loader routine in the disk buffer was done. Developing the new character set then turned out to be a very tedious affair. Change a character and build it into the graphics. Change a character and build it into the graphics... and so on. If it didn’t turn out nice, start over from the beginning. When the listing got too big, I could no longer manage without a printer and had to buy one. At some point I also ran out of bytes and the program code had to be optimized. Now the purchase of the printer had paid off[...]

Some Assembly Required



50 000 Node Choria NetworkR.I.Pienaar

2018-03-07T16:50:04+00:00

I’ve been saying for a while now my aim with Choria is that someone can get a 50 000 node Choria network that just works without tuning, like, by default that should be the scale it supports at minimum.

I started working on a set of emulators to let you confirm that yourself – and for me to use it during development to ensure I do not break this promise – though that got a bit side tracked as I wanted to do less emulation and more just running 50 000 instances of actual Choria, more on that in a future post.

Today I want to talk a bit about an actual 50 000 real nodes deployment and how I got there – the good news is that it’s terribly boring since as promised it just works.

Setup

Network

The network is pretty much just your typical DC network. Bunch of TOR switches, Distribution switches and Core switches, nothing special. Many dom0’s and many more domUs and some specialised machines. It’s flat, there are firewalls between all things, but it’s all in one building.

Hardware

I have 4 machines, 3 set aside for the Choria Network Broker Cluster and 1 for a client; while waiting for my firewall ports I just used the 1 machine for all the nodes as well as the client. It’s a 8GB RAM VM with 4 vCPU, not overly fancy at all. Runs Enterprise Linux 6. In the past I think we’d have considered this machine on the small side for a ActiveMQ network with 1000 nodes.

I’ll show some details of the single Choria Network Broker here and later follow up about the clustered setup. Just to be clear, I am going to show managing 50 000 nodes on a machine that’s the equivalent of a $40/month Linode.

Choria

I run a custom build of Choria 0.0.11, I bump the max connections up to 100k and turned off SSL since we simply can’t provision certificates, so a custom build let me get around all that.

The real reason for the custom build though is that we compile in our agent into the binary, so the whole deployment that goes out to all nodes and broker is basically what you see below, no further dependencies at all. This makes for quite a nice deployment story since we’re a bit challenged in that regard.

$ rpm -ql choria
/etc/choria/broker.conf
/etc/choria/server.conf
/etc/logrotate.d/choria
/etc/init.d/choria-broker
/etc/init.d/choria-server
/etc/sysconfig/choria-broker
/etc/sysconfig/choria-server
/usr/sbin/choria

Other than this custom agent and no SSL we’re about on par with what you’d get if you just install Choria from the repos.

Network Broker Setup

The Choria Network Broker is deployed basically exactly as the docs. Including setting the sysctl values to what was specified in the docs.

identity = choria1.example.net
logfile = /var/log/choria.log

plugin.choria.stats_address = ::
plugin.choria.stats_port = 8222
plugin.choria.network.listen_address = ::
plugin.choria.network.client_port = 4222
plugin.choria.network.peer_port = 4223

Most of this isn’t even needed basically if you use defaults like you should.

Server Setup

The server setup was even more boring:

logger_type [...]



Choria Progress UpdateR.I.Pienaar

2018-03-05T09:42:33+00:00

It’s been a while since I posted about Choria and where things are. There are major changes in the pipeline so it’s well overdue an update. The features mentioned here will become current in the next release cycle – about 2 weeks from now.

New choria module

The current gen Choria modules grew a bit organically and there’s a bit of a confusion between the various modules. I now have a new choria module, it will consume features from the current modules and deprecate them. On the next release it can manage:

  • Choria YUM and APT repos
  • Choria Package
  • Choria Network Broker
  • Choria Federation Broker
  • Choria Data Adapters

Network Brokers

We have had amazing success with the NATS broker: lightweight, fast, stable. It’s perfect for Choria. While I had a pretty good module to configure it, I wanted to create a more singular experience. Towards that there is a new Choria Broker incoming that manages an embedded NATS instance.

To show what I am on about, imagine this is all that is required to configure a cluster of 3 production ready brokers capable of hosting 50k or more Choria managed nodes on modestly specced machines:

plugin.choria.broker_network = true
plugin.choria.network.peers = nats://choria1.example.net:4223, nats://choria2.example.net:4223, nats://choria3.example.net:4223
plugin.choria.stats_address = ::

Of course there is Puppet code to do this for you in choria::broker. That’s it, start the choria-broker daemon and you’re done – and ready to monitor it using Prometheus. Like before it’s all TLS and all that kind of good stuff.

Federation Brokers

We had good success with the Ruby Federation Brokers but they also had issues, particularly around deployment, as we had to deploy many instances of them and they tended to be quite big Ruby processes.

The same choria-broker that hosts the Network Broker will now also host a new Golang based Federation Broker network. Configuration is about the same as before; you don’t need to learn new things, you just have to move to the configuration in choria::broker and retire the old ones.

Unlike the past where you had to run 2 or 3 of the Federation Brokers per node, you now do not run any additional processes, you just enable the feature in the singular choria-broker, and you only get 1 process. Internally it runs 10 instances of the Federation Broker; it’s much more performant and scalable. Monitoring is done via Prometheus.

Data Adapters

Previously we had all kinds of fairly bad schemes to manage registration in MCollective. The MCollective daemon would make requests to a registration agent, you’d designate one or more nodes as running this agent and so build either a file store, mongodb store etc.

This was fine at small size but soon enough the concurrency in large networks would overwhelm what could realistically be expected from the Agent mechanism to manage. I’ve often wanted to revisit that but did not know what approach to take. In the years since then the Stream Processing world [...]



Lurch: a unixy launcher and auto-typerElectricmonk.nl weblog

2018-03-04T08:45:44+00:00

I cobbled together a unixy command / application launcher and auto-typer. I've dubbed it Lurch.

Features:

  • Fuzzy filtering as-you-type.
  • Execute commands.
  • Open new browser tabs.
  • Auto-type into currently focussed window.
  • Auto-type TOTP / rfc6238 / two-factor / Google Authenticator codes (a minimal sketch follows the examples below).
  • Unixy and composable. Reads entries from stdin.

You can use and combine these features to do many things:

  • Auto-type passwords
  • Switch between currently opened windows by typing a part of its title (using wmctrl to list and switch to windows)
  • As a generic (and very customizable) application launcher by parsing .desktop entries or whatever.
  • Quickly cd to parts of your filesystem using auto-type.
  • Open browser tabs and search via google or specific search engines.
  • List all entries in your SSH configuration and quickly launch an ssh session to one of them.
  • Etc.
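The TOTP feature boils down to RFC 6238: an HMAC over a time counter. A minimal sketch of the code generation (standard-library only; the base32 secret below is a made-up example, not anything Lurch ships with) looks like this:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    """Generate an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32.replace(" ", "").upper())
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret, prints a 6-digit code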

You'll need a way to launch it when you press a keybinding. That's usually the window manager's job. For XFCE, you can add a keybinding under the Keyboard -> Application Shortcuts settings dialog.

Here's what it looks like:

(image)

Unfortunately, due to time constraints, I cannot provide any support for this project:

NO SUPPORT: There is absolutely ZERO support on this project. Due to time constraints, I don't take bug or features reports and probably won't accept your pull requests.

You can get it from the Github page.

Ferry Boender's ramblings



Monthly Blog Round-Up – February 2018Dr Anton Chuvakin Blog PERSONAL Blog

2018-03-01T17:19:01+00:00

It is mildly shocking that I’ve been blogging for 13+ years (my first blog post on this blog was in December 2005, my old blog at O’Reilly predates this by about a year), so let’s spend a moment contemplating this fact.

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts based on last month’s visitor data (excluding other monthly or annual round-ups):

  • “New SIEM Whitepaper on Use Cases In-Depth OUT!” (dated 2010) presents a whitepaper on select SIEM use cases described in depth with rules and reports [using now-defunct SIEM product]; also see this SIEM use case in depth and this for a more current list of popular SIEM use cases. Finally, see our 2016 research on developing security monitoring use cases here – and we just UPDATED IT FOR 2018.
  • “Updated With Community Feedback SANS Top 7 Essential Log Reports DRAFT2” is about the top log reports project of 2008-2013; I think these are still very useful in response to “what reports will give me the best insight from my logs?”
  • “Why No Open Source SIEM, EVER?” contains some of my SIEM thinking from 2009 (oh, wow, ancient history!). Is it relevant now? You be the judge. Succeeding with SIEM requires a lot of work, whether you paid for the software, or not. BTW, this post has an amazing “staying power” that is hard to explain – I suspect it has to do with people wanting “free stuff” and googling for “open source SIEM” …
  • Again, my classic PCI DSS Log Review series is extra popular! The series of 18 posts cover a comprehensive log review approach (OK for PCI DSS 3+ even though it predates it), useful for building log review processes and procedures, whether regulatory or not. It is also described in more detail in our Log Management book and mentioned in our PCI book – note that this series is even mentioned in some PCI Council materials.
  • “Simple Log Review Checklist Released!” is often at the top of this list – this rapidly aging checklist is still a useful tool for many people. “On Free Log Management Tools” (also aged quite a bit by now) is a companion to the checklist (updated version).

In addition, I’d like to draw your attention to a few recent posts from my Gartner blog [which, BTW, now has more than 5X of the traffic of this blog]:

Critical reference posts:
  • Important: How to Impress / Annoy an Analyst During a Vendor Briefing? Best / Worst Tips Here!
  • “Tell Us About Your Technology” and More Analyst Briefing Tips

Current research on testing security:
  • How Much of Your Security Gear Is Misconfigured or Not Configured?
  • Security Testing: At What Level?
  • On Negative Pressure or Why NOT Objectively Test Security?
  • The Bane of All Security Tests: Acting on Results
  • Threat Simulation Call to Action for 2018
  • New Research: How to Actually Test Security?

Current research on threat detection “starter kit”:
  • Back to Basics: Indispensable Securit[...]



How to Troubleshoot Unreliable or Malfunctioning HardwareThe Lone Sysadmin

2018-03-01T16:55:20+00:00

My post on Intel X710 NICs being awful has triggered a lot of emotion and commentary from my readers. One of the common questions has been: so I have X710 NICs, what do I do? How do I troubleshoot hardware that isn’t working right? 1. Document how to reproduce the problem and its severity. Is […]

The post How to Troubleshoot Unreliable or Malfunctioning Hardware appeared first on The Lone Sysadmin. Head over to the source to read the full post!

(image) Rounding Up IT Outlaws



Seeking Last Group of ContributorsOpenSSL Blog

2018-03-01T06:00:00+00:00

The following is a press release that we just put out about finishing off our relicensing effort. For the impatient, please see https://license.openssl.org/trying-to-find to help us find the last people; we want to change the license with our next release, which is currently in Alpha, and tentatively set for May.

For background, you can see all posts in the license category. One copy of the press release is at https://www.prnewswire.com/news-releases/openssl-seeking-last-group-of-contributors-300607162.html.

OpenSSL Seeking Last Group of Contributors

Looking for programmers who contributed code to the OpenSSL project

The OpenSSL project, https://www.openssl.org, is trying to reach the last couple-dozen people who have contributed code to OpenSSL. They are asking people to look at https://license.openssl.org/trying-to-find to see if they recognize any names. If so, contact license@openssl.org with any information.

This marks one of the final steps in the project’s work to change the license from its non-standard custom text, to the highly popular Apache License. This effort first started in the Fall of 2015, by requiring contributor agreements. Last March, the project made a major publicity effort, with large coverage in the industry. It also began to reach out and contact all contributors, as found by reviewing all changes made to the source.

Over 600 people have already responded to emails or other attempts to contact them, and more than 98% agreed with the change. The project removed the code of all those who disagreed with the change. In order to properly respect the desires of all original authors, the project continues to make strong efforts to find everyone.

Measured purely by simple metrics, the average contribution still outstanding is not large. There are a total of 59 commits without a response, out of a history of more than 32,300. On average, each person submitted a patch that modified 3-4 files, adding 100 lines and removing 23.

“We’re very pleased to be changing the license, and I am personally happy that OpenSSL has adopted the widely deployed Apache License,” said Mark Cox, a founding member of the OpenSSL Management Committee. Cox is also a founder and former Board Member of the Apache Software Foundation.

The project hopes to conclude its two-year relicensing effort in time for the next release, which will include an implementation of TLS 1.3. For more information, email osf-contact@openssl.org.

-30-



Intel X710 NICs Are CrapThe Lone Sysadmin

2018-02-28T21:29:01+00:00

(I’m grumpy this week and I’m giving myself permission to return to my blogging roots and complain about stuff. Deal with it.) In the not so distant past we were growing a VMware cluster and ordered 17 new blade servers with X710 NICs. Bad idea. X710 NICs suck, as it turns out. Those NICs do […]

The post Intel X710 NICs Are Crap appeared first on The Lone Sysadmin. Head over to the source to read the full post!

(image) Rounding Up IT Outlaws



DevOpsDays New York City 2019: Join the planning committee!Tom Limoncelli's EverythingSysadmin Blog

2018-02-28T16:06:22+00:00

2019 feels like a long way off, but since the conference is in January, we need to start planning soon. The sooner we start, the less rushed the planning can be.

I have to confess that working with the 2018 committee was one of the best and most professional conference planning experiences I've ever had. I've been involved with many conferences over the years and this experience was one of the best!

I invite new people to join the committee for 2019. The best way to learn about organizing is to join a committee and help out. You will be mentored and learn a lot in the process. Nothing involved in creating a conference is difficult, it just takes time and commitment.

Interested in being on the next planning committee? An informational meeting will be held via WebEx on Tuesday, March 6 at 2pm (NYC timezone, of course!).

During this kick-off meeting, the 2018 committee will review what roles they took on, what went well, what could be improved and the timeframe for the 2019 event. Please note, attendance to this meeting doesn't commit you to help organize this event, however, it is hoped by the end that we will be able to firm up who will comprise the 2019 event committee.

Hope you all can make it!

If you are interested in attending, email devopsdaysnyc@gmail.com for connection info.

(image) Thoughts, news and views of Limoncelli, Hogan & Chalup



Importing Pcap into Security OnionTaoSecurity

2018-02-26T17:12:31+00:00

Within the last week, Doug Burks of Security Onion (SO) added a new script that revolutionizes the use case for his amazing open source network security monitoring platform.

I have always used SO in a live production mode, meaning I deploy a SO sensor sniffing a live network interface. As the multitude of SO components observe network traffic, they generate, store, and display various forms of NSM data for use by analysts.

The problem with this model is that it could not be used for processing stored network traffic. If one simply replayed the traffic from a .pcap file, the new traffic would be assigned contemporary timestamps by the various tools observing the traffic.

While all of the NSM tools in SO have the independent capability to read stored .pcap files, there was no unified way to integrate their output into the SO platform. Therefore, for years, there has not been a way to import .pcap files into SO -- until last week!

Here is how I tested the new so-import-pcap script. First, I made sure I was running Security Onion Elastic Stack Release Candidate 2 (14.04.5.8 ISO) or later. Next I downloaded the script using wget from https://github.com/Security-Onion-Solutions/securityonion-elastic/blob/master/usr/sbin/so-import-pcap.

I continued as follows:

richard@so1:~$ sudo cp so-import-pcap /usr/sbin/
richard@so1:~$ sudo chmod 755 /usr/sbin/so-import-pcap

I tried running the script against two of the sample files packaged with SO, but ran into issues with both.

richard@so1:~$ sudo so-import-pcap /opt/samples/10k.pcap
so-import-pcap
Please wait while...
...creating temp pcap for processing.
mergecap: Error reading /opt/samples/10k.pcap: The file appears to be damaged or corrupt
(pcap: File has 263718464-byte packet, bigger than maximum of 262144)
Error while merging!

I checked the file with capinfos.

richard@so1:~$ capinfos /opt/samples/10k.pcap
capinfos: An error occurred after reading 17046 packets from "/opt/samples/10k.pcap": The file appears to be damaged or corrupt.
(pcap: File has 263718464-byte packet, bigger than maximum of 262144)

Capinfos confirmed the problem. Let's try another!

richard@so1:~$ sudo so-import-pcap /opt/samples/zeus-sample-1.pcap
so-import-pcap
Please wait while...
...creating temp pcap for processing.
mergecap: Error reading /opt/samples/zeus-sample-1.pcap: The file appears to be damaged or corrupt
(pcap: File has 1984391168-byte packet, bigger than maximum of 262144)
Error while merging!

Another bad file. Trying a third!

richard@so1:~$ sudo so-import-pcap /opt/samples/evidence03.pcap
so-import-pcap
Please wait while...
...creating temp pcap for processing.
...setting sguild debug to 2 and restarting sguild.
...configuring syslog-ng to pick up sguild logs.
...disabling syslog output in barnyard.
...configuring logstash to parse sguild logs (this may take a few minutes,[...]



Active Directory & Certificates – Which One is Being Used?Ben's Practical Admin Blog

2018-02-26T00:55:52+00:00

So here’s a question I want you to try answering off the top of your head – which certificate is your domain controller using for Kerberos & LDAPS, and what happens when there are multiple certificates in the crypto store?

The answer is actually pretty obvious if you already know it; however, this was the question I faced recently, and I ended up having to do a little bit of poking around to answer it.

The scenario in question for me is that, having built a new multi-tier PKI in our environment, I have reached the point of migrating services to it, including the auto-enrolling certificate templates used on Domain Controllers. For most contemporary Active Directory installs where AD Certificate Services is also used, there are two main certificate templates related to domain controllers:

  • Kerberos Authentication
  • Directory Email Replication

The “Kerberos Authentication” certificate template made its appearance in Windows Server 2008, replacing the “Domain Controller” and “Domain Controller Authentication” templates in earlier versions of ADCS. The “Directory Email Replication” template is used where you use email protocols to replicate AD (I am not quite sure why anyone would want to do this in this day & age).

Getting back to my scenario and question, how do you work out which certificate is in use? In both examples, we’re interested in the certificate serial number.

The first way is to use a network analyser such as Wireshark (or MS Message Analyzer) to trace a connection to port 636 of a domain controller. Using a network analyser is nifty in that you can see the full handshake occurring and the data passed – something crypto-geeks can get excited about. Expanding out the information, we can obtain the serial number: 655dc58900010000e01e

Alternatively, if you have openSSL available, you can use the following command to connect and obtain similar information:

openssl s_client -connect <domain controller>:636

This will connect to the server, and amongst the output will be the offered certificate in base64 format. Copying all text between and including -----BEGIN CERTIFICATE----- & -----END CERTIFICATE----- to a file will give you the public key being offered. You can then run this command:

openssl x509 -in <certificate file> -text -noout

to obtain all the detailed information on the certificate, including the serial number.

From here, it’s just a matter of checking the personal certificate store on the local computer account and finding the certificate with the matching serial.

What Happens for multiple Kerberos Certificates?

Again, looking back at my scenario, I now have two Kerberos Authentication certificates in my store – one from the old CA Infrastructure, and the [...]

Sometimes you don't want to read a whitepaper



listening to very specific eventsthe evolving ultrasaurus

2018-02-25T16:05:01+00:00

The model of declarative eventing allows for listening to very specific events and then triggering specific actions. This model simplifies the developer experience, as well as optimizing the system by reducing network traffic.

AWS S3 bucket trigger

In looking at how AWS explains that changes in S3 can trigger Lambda functions, I found that the AWS product docs focus on the GUI configuration experience. This probably makes it easy for new folks to write a specific Lambda function; however, it is a little harder to see the system patterns before gaining a lot of hands-on experience.

The trigger-action association can be seen more clearly in a Terraform configuration. (Under the hood, Terraform must be using AWS APIs for setting up the trigger.) The configuration below specifies that whenever a json file is uploaded to a specific bucket with the path prefix “content-packages” then a specific Lambda function will be executed:

resource "aws_s3_bucket_notification" "bucket_terraform_notification" {
    bucket = "${aws_s3_bucket.terraform_bucket.id}"

    lambda_function {
        lambda_function_arn = "${aws_lambda_function.terraform_func.arn}"
        events = ["s3:ObjectCreated:*"]
        filter_prefix = "content-packages/"
        filter_suffix = ".json"
    }
}

— via justinsoliz’ github gist

Google Cloud events

To illustrate an alternate developer experience, the examples below are shown with the Firebase JavaScript SDK for Google Cloud Functions, which is idiomatic for JavaScript developers using the Fluent API style, popularized by jQuery. The same functionality is available via command line options using gcloud, the Google Cloud CLI.

Cloud Storage trigger

Below is an example of specifying a trigger for a change to a Google Cloud Storage object in a specific bucket:

exports.generateThumbnail = functions.storage.bucket('my-bucket').object().onChange((event) => {
  // ...
});

Cloud Firestore trigger

This approach to filtering events at their source is very powerful when applied to database operations, where a developer can listen to a specific database path, such as with Cloud Firestore events:

exports.createProduct = functions.firestore
  .document('products/{productId}')
  .onCreate(event => {
    // Get an object representing the document
    // e.g. {'name': 'Wooden Doll', 'description': '...}
    var newValue = event.data.data();

    // access a particular field as you would any JS property
    var name = newValue.name;

    // perform desired operations ...
  });

[...]

Sarah Allen's reflections on internet software and other topics
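The S3 notification above ultimately invokes a Lambda function; purely for illustration (this handler is not from the post), the receiving side in Python would look roughly like this, pulling the bucket and key out of each record of the standard S3 event payload:

import urllib.parse

def handler(event, context):
    """Log each newly created S3 object delivered in the notification event."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"new object: s3://{bucket}/{key}")
    return {"status": "ok"}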



declarative eventingthe evolving ultrasaurus

2018-02-22T13:47:14+00:00

An emerging pattern in server-side event-driven programming formalizes the data that might be generated by an event source, then a consumer of that event source registers for very specific events.

A declarative eventing system establishes a contract between the producer (event source) and consumer (a specific action) and allows for binding a source and action without modifying either.

Comparing this to how traditional APIs are constructed, we can think of it as a kind of reverse query — we reverse the direction of typical request-response by registering a query and then getting called back every time there’s a new answer. This new model establishes a specific operational contract for registering these queries that are commonly called event triggers.

This pattern requires a transport for event delivery. While systems typically support HTTP and RPC mechanisms for local events which might be connected point-to-point in a mesh network, they also often connect to messaging or streaming data systems, like Apache Kafka, RabbitMQ, as well as proprietary offerings.

This declarative eventing pattern can be seen in a number of serverless platforms, and is typically coupled with Functions-as-a-Service offerings, such as AWS Lambda and Google Cloud Functions.

An old pattern applied in a new way

Binding events to actions is nothing new. We have seen this pattern in various GUI programming environment for decades, and on the server-side in many Services Oriented Architecture (SOA) frameworks. What’s new is that we’re seeing server-side code that can be connected to managed services in a way that is almost as simple to set up as an onClick handler in HyperCard. However, the problems that we can solve with this pattern are today’s challenges of integrating data from disparate systems, often at high volume, along with custom analysis, business logic, machine learning and human interaction.

Distributed systems programming is no longer solely the domain of specialized systems engineers who create infrastructure, most applications we use every day integrate data sources from multiple systems across many providers. Distributed systems programming has become ubiquitous, providing an opportunity for interoperable systems at a much higher level.

Sarah Allen's reflections on internet software and other topics



Murdlok: A new old adventure game for the C64pagetable.com

2018-02-20T20:07:25+00:00

Murdlok is a previously unreleased graphical text-based adventure game for the Commodore 64 written in 1986 by Peter Hempel. A German and an English version exist.

(image)

Murdlok – Ein Abenteuer von Peter Hempel

Befreie das Land von dem bösen Murdlok. Nur Nachdenken und kein Leichtsinn führen zum Ziel.

(image)
murdlok_de.d64

(Originalversion von 1986)

Murdlok – An Adventure by Peter Hempel

Liberate the land from the evil Murdlok! Reflection, not recklessness will guide you to your goal!

(image)
murdlok_en.d64

(English translation by Lisa Brodner and Michael Steil, 2018)

The great thing about a new game is that no walkthroughs exist yet! Feel free to use the comments section of this post to discuss how to solve the game. Extra points for the shortest solution – ours is 236 steps!

(image) (image) (image) (image) (image)

Some Assembly Required



A few notes on Medsec and St. Jude MedicalA Few Thoughts on Cryptographic Engineering

2018-02-17T18:27:22+00:00

In Fall 2016 I was invited to come to Miami as part of a team that independently validated some alleged flaws in implantable cardiac devices manufactured by St. Jude Medical (now part of Abbott Labs). These flaws were discovered by a company called MedSec. The story got a lot of traction in the press at the time, primarily due to the fact that a hedge fund called Muddy Waters took a large short position on SJM stock as a result of these findings. SJM subsequently sued both parties for defamation. The FDA later issued a recall for many of the devices. Due in part to the legal dispute (still ongoing!), I never had the opportunity to write about what happened down in Miami, and I thought that was a shame: because it’s really interesting. So I’m belatedly putting up this post, which talks a bit MedSec’s findings, and implantable device security in general. By the way: “we” in this case refers to a team of subject matter experts hired by Bishop Fox, and retained by legal counsel for Muddy Waters investments. I won’t name the other team members here because some might not want to be troubled by this now, but they did most of the work — and their names can be found in this public expert report (as can all the technical findings in this post.) Quick disclaimers: this post is my own, and any mistakes or inaccuracies in it are mine and mine alone. I’m not a doctor so holy cow this isn’t medical advice. Many of the flaws in this post have since been patched by SJM/Abbot. I was paid for my time and travel by Bishop Fox for a few days in 2016, but I haven’t worked for them since. I didn’t ask anyone for permission to post this, because it’s all public information. A quick primer on implantable cardiac devices  Implantable cardiac devices are tiny computers that can be surgically installed inside a patient’s body. Each device contains a battery and a set of electrical leads that can be surgically attached to the patient’s heart muscle. When people think about these devices, they’re probably most familiar with the cardiac pacemaker. Pacemakers issue small electrical shocks to ensure that the heart beats at an appropriate rate. However, the pacemaker is actually one of the least powerful implantable devices. A much more powerful type of device is the Implantable Cardioverter-Defibrillator (ICD). These devices are implanted in patients who have a serious risk of spontaneously entering a dangerous state in which their heart ceases to pump blood effectively. The ICD continuously monitors the patient’s heart rhythm to identify when the patient’s heart has entered this condition, and applies a series of inc[...]



A Life Lesson in Mishandling SMTP Sender VerificationThat grumpy BSD guy

2018-02-17T16:38:00+00:00

An attempt to report spam to a mail service provider's abuse address reveals how incompetence is sometimes indistinguishable from malice.

It all started with one of those rare spam mails that got through. This one was hawking address lists, much like the ones I occasionally receive to addresses that I can not turn into spamtraps. The message was addressed to, of all things, root@skapet.bsdly.net. (The message with full headers has been preserved here for reference).

Yes, that's right, they sent their spam to root@. And a quick peek at the headers revealed that like most of those attempts at hawking address lists for spamming that actually make it to a mailbox here, this one had been sent by an outlook.com customer.

The problem with spam delivered via outlook.com is that you can't usefully blacklist the sending server, since the largish chunk of the world that uses some sort of Microsoft hosted email solution (Office365 and its ilk) have their usually legitimate mail delivered via the very same infrastructure.

And since outlook.com is one of the mail providers that doesn't play well with greylisting (it spreads its retries across no less than 81 subnets; the output of 'echo outlook.com | doas smtpctl spf walk' is preserved here), it's fairly common practice to just whitelist all those networks and avoid the hassle of lost or delayed mail to and from Microsoft customers.

I was going to just ignore this message too, but we've seen an increasing number of spammy outfits taking advantage of outlook.com's seeming right of way to innocent third parties' mail boxes. So I decided to try both to do my best at demoralizing this particular sender and to alert outlook.com to their problem. I wrote a message (preserved here) with a Cc: to abuse@outlook.com where the meat is:

Ms Farell,

The address root@skapet.bsdly.net has never been subscribed to any mailing list, for obvious reasons. Whoever sold you an address list with that address on it are criminals and you should at least demand your money back.

Whoever handles abuse@outlook.com will appreciate the attachment, which is a copy of the message as it arrived here with all headers intact.

Yours sincerely,
Peter N. M. Hansteen

What happened next is quite amazing. If my analysis is correct, it may not be possible for senders who are not themselves outlook.com customers to actually reach the outlook.com abuse team.

Almost immediately after I sent the message to Ms Farell with a Cc: to abuse@outlook.com, two apparently identical messages from staff@hotmail.com, addressed to postmaster@bsdly.net, appeared (preserved here and here), with the main content of both stating

This is an email [...]



Commodore KERNAL Historypagetable.com

2018-02-17T12:38:10+00:00

If you have ever written 6502 code for the Commodore 64, you may remember using “JSR $FFD2” to print a character on the screen. You may have read that the jump table at the end of the KERNAL ROM was designed to allow applications to run on all Commodore 8 bit computers from the PET to the C128 (and the C65!) – but that is a misconception. This article will show:

  • how the first version of the jump table in the PET was designed to only hook up BASIC to the system’s features
  • that it wasn’t until the VIC-20 that the jump table was generalized for application development (and the vector table introduced)
  • that all later machines add their own calls, but later machines don’t necessarily support older calls.

KIM-1 (1976)

The KIM-1 was originally meant as a computer development board for the MOS 6502 CPU. Commodore acquired MOS in 1976 and kept selling the KIM-1. It contained a 2 KB ROM (“TIM”, “Terminal Interface Monitor”), which included functions to read characters from ($1E5A) and write characters to ($1EA0) a serial terminal, as well as code to load from and save to tape and support for the hex keyboard and display.

Commodore asked Microsoft to port their BASIC for 6502 to it, which interfaced with the monitor only through the two character in and out functions. The original source of BASIC shows how Microsoft adapted it to work with the KIM-1 by defining CZGETL and OUTCH to point to the monitor routines:

IFE REALIO-1,

(The values are octal, since the assembler Microsoft used did not support hexadecimal.) The makers of the KIM-1 never intended to change the ROM, so there was no need to have a jump table for these calls. Applications just hardcoded their offsets in ROM.

PET (1977)

The PET was Commodore’s first complete computer, with a keyboard, a display and a built-in tape drive. The system ROM (“KERNAL”) was now 4 KB and included a powerful file I/O system for tape, RS-232 and IEEE-488 (for printers and disk drives) as well as timekeeping logic. Another 2 KB ROM (“EDITOR”) handled screen output and character input. Microsoft BASIC was included in ROM and was marketed – with the name “COMMODORE BASIC” – as the actual operating system, making the KERNAL and the editor merely a device driver package.

Like with the KIM-1, Commodore asked Microsoft to port BASIC to the PET, and provided them with addresses of a jump table in the KERNAL ROM for interfacing with it. These are the symbol definitions in Microso[...]

Some Assembly Required



FreeBSD/EC2 historyDaemonic Dispatches

2018-02-12T19:50:00+00:00

A couple years ago Jeff Barr published a blog post with a timeline of EC2 instances. I thought at the time that I should write up a timeline of the FreeBSD/EC2 platform, but I didn't get around to it; but last week, as I prepared to ask for sponsorship for my work I decided that it was time to sit down and collect together the long history of how the platform has evolved and improved over the years.

Musings from Colin Percival



toolsmith #131 - The HELK vs APTSimulator - Part 1HolisticInfoSec™

2018-02-12T06:56:00+00:00

Ladies and gentlemen, for our main attraction, I give you...The HELK vs APTSimulator, in a Death Battle! The late, great Randy "Macho Man" Savage said many things in his day, in his own special way, but "Expect the unexpected in the kingdom of madness!" could be our toolsmith theme this month and next. Man, am I having a flashback to my college days, many moons ago. :-) The HELK just brought it on. Yes, I know, HELK is the Hunting ELK stack, got it, but it reminded me of the Hulk, and then, I thought of a Hulkamania showdown with APTSimulator, and Randy Savage's classic, raspy voice popped in my head with "Hulkamania is like a single grain of sand in the Sahara desert that is Macho Madness." And that, dear reader, is a glimpse into exactly three seconds or less in the mind of your scribe, a strange place to be certain. But alas, that's how we came up with this fabulous showcase.

In this corner, from Roberto Rodriguez, @Cyb3rWard0g, the specter in SpecterOps, it's...The...HELK! This, my friends, is the s**t, worth every ounce of hype we can muster. And in the other corner, from Florian Roth, @cyb3rops, the Fracas of Frankfurt, we have APTSimulator. All your worst adversary apparitions in one APT mic drop. This...is...Death Battle!

Now with that out of our system, let's begin. There's a lot of goodness here, so I'm definitely going to do this in two parts so as not to undervalue these two offerings.

HELK is incredibly easy to install. It's also well documented, with lots of related reading material, so let me propose that you take the time to review it all. Pay particular attention to the wiki, gain comfort with the architecture, then review the installation steps.

On an Ubuntu 16.04 LTS system I ran:

git clone https://github.com/Cyb3rWard0g/HELK.git
cd HELK/
sudo ./helk_install.sh

Of the three installation options I was presented with, pulling the latest HELK Docker Image from cyb3rward0g dockerhub, building the HELK image from a local Dockerfile, or installing the HELK from a local bash script, I chose the first and went with the latest Docker image. The installation script does a fantastic job of fulfilling dependencies for you; if you haven't installed Docker, the HELK install script does it for you. You can observe the entire install process in Figure 1.

Figure 1: HELK Installation

You can immediately confirm your clean installation by navigating to your HELK KIBANA URL, in my case http://192.168.248.29.

For my test Windows system I created a Windows 7 x86 virtual machine with Virtualbox. The key to success here is ensuring that [...]



The future of configuration management (again), and a suggestionA sysadmin's logbook

2018-02-11T21:03:51+00:00

I attended the Config Management Camp in Gent this year, where I also presented the talk “Promise theory: from configuration management to team leadership“. A thrilling experience, considering that I was talking about promise theory at the same conference, and in the same track, where Mark Burgess, the inventor of promise theory, was holding one of the keynotes! The quality of the conference was as good as always, but my experience at the conference was completely different from the past. Last time I attended, in 2016, I was actively using CFEngine, and that shaped both the talks I attended and the people I hung out with the most. This year I was coming from a different work environment and a different job: I jumped a lot between the different tracks and devrooms, and talked with many people whose experience was very different from mine. And that was truly enriching. I’ll focus on one experience in particular, which led me to see what the future of configuration management could be.

I attended all the keynotes. Mark Burgess’ was, as always, rich in content and a bit hard to process; lots of food for thought, but I couldn’t let it percolate in my brain until someone made it click several hours later. More on that in a minute. Then there was Luke Kanies’ keynote, explaining where configuration management and we, CM practitioners, won the battle, and also where we lost the battle and where we are irrelevant. Again, more stuff accumulated, waiting for something to trigger the mental process to consume the information. There was also the keynote by Adam Jacob about the future of Configuration Management, great and fun as always but not part of this movie; I recommend that you enjoy it on YouTube.

Later, at the social event, I had the pleasure of a conversation with Stein Inge Morisbak, whom I knew from before, as we had met in Oslo several times. With his experience working on public cloud infrastructures like AWS and Google Cloud Platform, Stein Inge was one of the people who attended the conference with a sceptical eye about configuration management and, at the same time, with the open mind that you would expect from the great guy he is. In a sincere effort to understand, he couldn’t really see how CM, “a sinking ship”, could possibly be relevant in an era where public cloud, immutable infrastructure and all the tooling around them are the modern technology of today. While we were talking, another great guy chimed in, namely Ivan Rossi. If you look at Ivan’s LinkedIn page[...]



Moving to the Cloud? Don’t Forget End-User ExperienceThe Virtual Horizon

2018-02-08T15:22:22+00:00

The cloud has a lot to offer IT departments. It provides the benefits of virtualization in a consumption-based model, and it allows new applications to be deployed quickly while waiting for, or even completely forgoing, on-premises infrastructure. This can provide a better time-to-value and greater flexibility for the business. It can help organizations reduce, or eliminate, their on-premises data center footprint. But while the cloud has a lot of potential to disrupt how IT manages applications in the data center, it also has the potential to disrupt how IT delivers services to end users.

In order to understand how cloud will disrupt end-user computing, we first need to look at how organizations are adopting the cloud. We also need to look at how the cloud can change application development patterns, and how that will change how IT delivers services to end users.

The Current State of Cloud

When people talk about cloud, they're usually talking about three different types of services. These services, and their definitions, are:

Infrastructure-as-a-Service: Running virtual machines in a hosted, multi-tenant virtual data center.

Platform-as-a-Service: Allows developers to build applications on subscribed services without having to build the supporting infrastructure. The platform can include some combination of web services, application runtime services (like .NET or Java), databases, message bus services, and other managed components.

Software-as-a-Service: A subscription to a vendor-hosted and managed application.

The best analogy to explain this is comparing the different cloud offerings with different types of pizza restaurants, using the graphic below from episerver.com:

Image retrieved from: http://www.episerver.com/learn/resources/blog/fred-bals/pizza-as-a-service/

So what does this have to do with End-User Computing? Today, it seems like enterprises that are adopting cloud are going in one of two directions. The first is migrating their data centers into infrastructure-as-a-service offerings with some platform-as-a-service mixed in. The other direction is replacing applications with software-as-a-service options. The former is migrating your applications to Azure or AWS EC2; the latter is replacing on-premises services with options like ServiceNow or Microsoft Office 365. Both options can present challenges to how enterprises deliver applications to end users. And the choices made when migrating on-premises applications to the cloud can greatly impact [...]



Using TLS1.3 With OpenSSLOpenSSL Blog

2018-02-08T11:00:00+00:00

Note: This is an updated version of an earlier blog post available here. The forthcoming OpenSSL 1.1.1 release will include support for TLSv1.3. The new release will be binary and API compatible with OpenSSL 1.1.0. In theory, if your application supports OpenSSL 1.1.0, then all you need to do to upgrade is to drop in the new version of OpenSSL when it becomes available and you will automatically be able to use TLSv1.3. However, there are some issues that application developers and deployers need to be aware of. In this blog post I am going to cover some of those things.

Differences with TLS1.2 and below

TLSv1.3 is a major rewrite of the specification. There was some debate as to whether it should really be called TLSv2.0 - but TLSv1.3 it is. There are major changes and some things work very differently. A brief, incomplete summary of some things that you are likely to notice follows:

- There are new ciphersuites that only work in TLSv1.3. The old ciphersuites cannot be used for TLSv1.3 connections.
- The new ciphersuites are defined differently and do not specify the certificate type (e.g. RSA, DSA, ECDSA) or the key exchange mechanism (e.g. DHE or ECDHE). This has implications for ciphersuite configuration.
- Clients provide a “key_share” in the ClientHello. This has consequences for “group” configuration.
- Sessions are not established until after the main handshake has been completed. There may be a gap between the end of the handshake and the establishment of a session (or, in theory, a session may not be established at all). This could have impacts on session resumption code.
- Renegotiation is not possible in a TLSv1.3 connection.
- More of the handshake is now encrypted.
- More types of messages can now have extensions (this has an impact on the custom extension APIs and Certificate Transparency).
- DSA certificates are no longer allowed in TLSv1.3 connections.

Note that at this stage only TLSv1.3 is supported. DTLSv1.3 is still in the early days of specification and there is no OpenSSL support for it at this time.

Current status of the TLSv1.3 standard

As of the time of writing, TLSv1.3 is still in draft. Periodically a new version of the draft standard is published by the TLS Working Group. Implementations of the draft are required to identify the specific draft version that they are using. This means that implementations based on different draft versions do not interoperate with each other. OpenSSL 1.1.1 will not be rel[...]
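Because ciphersuite names and configuration change in TLSv1.3, a quick way to see the new behaviour from the command line is to point s_client at a TLSv1.3-capable server. This is a minimal sketch, not from the post itself: it assumes an OpenSSL 1.1.1 (pre-release) build is first on your PATH, uses example.com as a placeholder host, and only works if your build and the server speak the same draft version:

 # Check which OpenSSL build you are actually running
 openssl version

 # Attempt a TLSv1.3-only connection (placeholder host)
 openssl s_client -connect example.com:443 -tls1_3

 # Restrict the TLSv1.3 ciphersuites offered; the new -ciphersuites
 # option covers TLSv1.3 suites, while the old -cipher option still
 # applies to TLSv1.2 and below
 openssl s_client -connect example.com:443 -tls1_3 \
     -ciphersuites TLS_AES_128_GCM_SHA256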