stillhq.com : Mikal, a geek from Canberra living in Silicon Valley



The life, times, travel and software of Michael Still



 



Nova vendordata deployment, an excessively detailed guide

Thu, 02 Feb 2017 19:49:00 -0800

Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use.

There are three separate groups of people who need to be able to specify metadata for an instance.

User provided data

The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.

Nova provided data

Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.

Deployer provided data

There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance, and that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot -- the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user's behalf.

Nova supports a mechanism to add "vendordata" to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:

StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don't change between instances, such as the location of the corporate puppet server.

DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.

Tell me more about DynamicJSON

Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it's the most interesting bit here. To use DynamicJSON, you configure it like this:

Add "DynamicJSON" to the vendordata_providers configuration option. This can also include "StaticJSON" if you'd like.

Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.

The format for an entry in vendordata_dynamic_targets is name@url, where name is a short string not including the '@' character, and where the URL can include a port number if so required. An example would be: testing@http://127.0.0.1:125
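Putting those two options together, the relevant nova.conf fragment might look something like the sketch below. This is illustrative only: the option names and the example target come from the text above, but which configuration section they belong in has varied between nova releases, so check the documentation for yours.

[DEFAULT]
# Enable both vendordata modules; StaticJSON is optional.
vendordata_providers = StaticJSON,DynamicJSON
# One or more name@url entries, queried per metadata request.
vendordata_dynamic_targets = testing@http://127.0.0.1:125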
Metadata fetched from this target will appear in the metadata service at a new file called vendordata2.json, with a path (either in the metadata service URL or in the configdrive) like this:

openstack/2016-10-06/vendor_data2.json

For each dynamic target, there will be an entry in the JSON file named after that target. For example:

{
    "testing": {
        "value1": 1,
        "value2": 2,
        "value3": "three"
    }
}

Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name. The following data is passed to your REST[...]
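The preview cuts off there, but to make the moving parts concrete, here is a minimal sketch of a DynamicJSON service using only the Python standard library. Everything about it is illustrative rather than the author's actual service, and since the description of the request payload is truncated above, the handler simply ignores the body Nova POSTs to it:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class VendordataHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Nova POSTs a JSON description of the instance; read and
        # discard it, since this sketch returns static metadata.
        length = int(self.headers.get('Content-Length', 0))
        self.rfile.read(length)

        # Whatever JSON we return appears under this target's name
        # ("testing") in vendor_data2.json.
        body = json.dumps({'value1': 1, 'value2': 2, 'value3': 'three'})
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body.encode('utf-8'))

# Port 125 matches the example target above; ports below 1024 need
# root, so a real deployment would pick something higher.
HTTPServer(('127.0.0.1', 125), VendordataHandler).serve_forever()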



Giving serial devices meaningful names

Tue, 31 Jan 2017 12:04:00 -0800

This is a hack I've been using for ages, but I thought it deserved a write up.

I have USB serial devices. Lots of them. I use them for home automation things, as well as for talking to devices such as the console ports on switches and so forth. For the permanently installed serial devices, one of the challenges is having them show up in predictable places, so that the scripts which know how to drive each device are talking to the right one.

For the trivial case, this is pretty easy with udev:

$ cat /etc/udev/rules.d/60-local.rules 
KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="A8003Ye7", \
    SYMLINK+="radish"


This says that for any USB serial device that is discovered (either inserted post boot, or at boot), if the USB vendor ID, product ID, and serial number match the relevant values, udev should symlink the device to "/dev/radish".

You find out the vendor and product ID from lsusb like this:

$ lsusb
Bus 003 Device 003: ID 0624:0201 Avocent Corp. 
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 007 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub


You can play with inserting and removing the device to determine which of these entries is the device you care about.
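The serial number matched by the rule above isn't shown in the lsusb output, though. One way to find it (a detail not covered in the original post) is to ask udevadm for the device's attributes; grep will match one line for the adapter and one for each of its parent devices, and the FTDI adapter's entry is the one you want:

$ udevadm info -a -n /dev/ttyUSB0 | grep '{serial}'
    ATTRS{serial}=="A8003Ye7"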

So that's great, until you have more than one device with the same USB serial vendor and product id. Then things are a bit more... difficult.

It turns out that you can have udev execute a command on device insert to help you determine what symlink to create. So for example, I have this entry in the rules on one of my machines:

KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \
    PROGRAM="/usr/bin/usbtest /dev/%k", \
    SYMLINK+="%c"


This results in /usr/bin/usbtest being run with the path of the device file on its command line for every device detection (of a matching device). The stdout of that program is then used as the name of a symlink in /dev.

So, that script attempts to talk to the device and determine what it is -- in my case either a currentcost or a solar panel inverter.
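The post doesn't include usbtest itself, but a minimal sketch of the idea might look like this in Python with pyserial. The probe details (the baud rate, and the XML chatter CurrentCost meters emit) are assumptions for illustration, not the author's actual script:

import sys

import serial  # pyserial

def identify(path):
    # Open the candidate device and read whatever it is currently
    # sending. CurrentCost energy meters chatter XML continuously,
    # so a short read is enough to recognise one.
    port = serial.Serial(path, 57600, timeout=2)
    banner = port.read(128)
    port.close()
    if b'<msg>' in banner:
        return 'currentcost'
    # Otherwise assume the other known device type.
    return 'inverter'

if __name__ == '__main__':
    # udev passes the device path (e.g. /dev/ttyUSB0) via %k, and
    # uses whatever we print on stdout as the symlink name.
    print(identify(sys.argv[1]))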

Tags for this post: linux udev serial usb usbserial
Related posts: SMART and USB storage; Video4Linux, ov511, and RGB24 palettes; ov511 hackery; Ubuntu, Dapper Drake, and that difficult Dell e310; Roomba serial cables; Via M10000, video, and a Belkin wireless USB thing




A pythonic example of recording metrics about ephemeral scripts with prometheus

Mon, 30 Jan 2017 01:08:00 -0800

In my previous post we talked about how to record information from short lived scripts (I call them ephemeral scripts, by the way) with prometheus. The example there was a script which checked the SMART status of each of the disks in a machine and reported that via pushgateway. I now want to work through a slightly more complicated example.

I think you hit the limits of reporting simple values in shell scripts via curl requests fairly quickly. For example, with the SMART monitoring script, SMART is capable of returning a whole heap of metrics about the performance of a disk, but we boiled that down to a single "health" value. This is largely because writing a parser for all the other values that smartctl returns would be inefficient and fragile in shell. So for this post, we're going to work through an example of how to report a variety of values from a python script. Those values could be the parsed output of smartctl, but to mix things up a bit, I'm going to use a different script I wrote recently.

This new script uses the Weather Underground API to look up weather stations near my house, and then generate graphics of the weather forecast. These graphics are displayed on the various Cisco SIP phones I already had around the house. The forecasts look like this:

(image)

The script to generate these weather forecasts is relatively simple python, and you can see the source code on github. My cunning plan here is to use prometheus' time series database and alert capabilities to drive home automation around my house. The first step for that is to start gathering some simple facts about the home environment so that we can do trending and decision making on them.

The code to do this isn't all that complicated. First off, we need to add the python prometheus client to our python environment, which is hopefully a venv:

pip install prometheus_client
pip install six

That second dependency isn't a strict requirement for prometheus, but the script I'm working on needs it (because it needs to work out what's a text value, and python 3 is bonkers).

Next we import the prometheus client in our code and setup the counter registry. At the same time I record when the script was run:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
Gauge('job_last_success_unixtime', 'Last time the weather job ran',
      registry=registry).set_to_current_time()

And then we just add gauges for any values we want to add to the pushgateway:

Gauge('_'.join(field), '', registry=registry).set(value)

Finally, the values don't exist in the pushgateway until we actually push them there, which we do like this:

push_to_gateway('localhost:9091', job='weather', registry=registry)

You can see the entire patch I wrote to add prometheus support on github if you're interested in an example with more context. Now we can have pretty graphs of temperature and stuff!

Tags for this post: prometheus monitoring python pushgateway
Related posts: Recording performance information from short lived processes with prometheus; Basic prometheus setup; Implementing SCP with paramiko; Mona Lisa Overdrive; Packet capture in python; mbot: new hotness in Google Talk bots
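For reference, those fragments assemble into a short end-to-end script. The field names and values below are hypothetical stand-ins for the parsed Weather Underground data, and it assumes a pushgateway listening on localhost:9091:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()

# Record when the script last ran, so we can spot staleness later.
Gauge('job_last_success_unixtime', 'Last time the weather job ran',
      registry=registry).set_to_current_time()

# Hypothetical readings standing in for the parsed API response.
readings = {
    ('weather', 'temp_c'): 22.5,
    ('weather', 'relative_humidity'): 54.0,
}
for field, value in readings.items():
    Gauge('_'.join(field), '', registry=registry).set(value)

# Nothing appears in the pushgateway until we push.
push_to_gateway('localhost:9091', job='weather', registry=registry)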



Recording performance information from short lived processes with prometheus

Fri, 27 Jan 2017 20:17:00 -0800

Now that I'm recording basic statistics about the behavior of my machines, I want to start tracking some statistics from various scripts I have lying around in cron jobs. In order to make myself sound smarter, I'm going to call these short lived scripts "ephemeral scripts" throughout this document. You're welcome.

The promethean way of doing this is to have a relay process. Prometheus really wants to know where to find web servers to learn things from, and my ephemeral scripts are both not permanently around and also not running web servers. Luckily, prometheus has a thing called the pushgateway which is designed to handle this situation. I can run just one of these, and then have all my little scripts just tell it things to add to its metrics. Then prometheus regularly scrapes this one process and learns things about those scripts. It's like a game of Telephone, but for processes really.

First off, let's get the pushgateway running. This is basically the same as the node_exporter from last time:

$ wget https://github.com/prometheus/pushgateway/releases/download/v0.3.1/pushgateway-0.3.1.linux-386.tar.gz
$ tar xvzf pushgateway-0.3.1.linux-386.tar.gz
$ cd pushgateway-0.3.1.linux-386
$ ./pushgateway

Let's assume once again that we're all adults and did something nicer than that involving configuration management and init scripts.

The pushgateway implements a relatively simple HTTP protocol to add values to the metrics that it reports. Note that the values won't change once set until you change them again; they're not garbage collected or aged out or anything fancy. Here's a trivial example of adding a value to the pushgateway:

echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job

This is stolen straight from the pushgateway README of course. The above command will have the pushgateway start to report a metric called "some_metric" with the value "3.14", for a job called "some_job". In other words, we'll get this in the pushgateway metrics URL:

# TYPE some_metric untyped
some_metric{instance="",job="some_job"} 3.14

You can see that this isn't perfect, because the metric is untyped (what types exist? we haven't covered that yet!), and has these confusing instance and job labels. One tangent at a time, so let's explain instances and jobs first.

On jobs and instances

Prometheus is built for a universe a little bit unlike my home lab. Specifically, it expects there to be groups of processes doing a thing instead of just one. This is especially true because it doesn't really expect things like the pushgateway to be proxying your metrics for you, because there is an assumption that every process will be running its own metrics server. This leads to some warts, which I'll explain in a second. Let's start by explaining jobs and instances.

For a moment, assume that we're running the world's most popular wordpress site. The basic architecture for our site is web frontends which run wordpress, and database servers which store the content that wordpress is going to render. When we first started our site it was all easy, as they could both be on the same machine or cloud instance. As we grew, we were first forced to split apart the frontend and the database into separate instances, and then forced to scale those two independently -- perhaps we have reasonable database performance, so we ended up with more web frontends than we did database servers. So, we go from something like this:

(image)

To an architecture which looks a bit like this:

(image)

Now, in prometheus (i.e. google) terms, there are three jobs here. We have web frontends, database masters (the top one which is getting all the writes), and database slaves (the bottom one which everyone is reading from). For one of the jobs, the frontends, there is more than one instance of the job. To put that into pictures:

(image)

So, the topmost frontend job would be job="fe" and instance="0". Google also had a cool way[...]
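One footnote on those empty instance labels, drawn from the pushgateway documentation rather than the (truncated) post above: you can set grouping labels such as instance in the URL path when you push, so the scraped metric carries them. The hostname here is hypothetical:

echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job/instance/myhost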



Basic prometheus setup

Thu, 26 Jan 2017 21:23:00 -0800

I've been playing with prometheus for monitoring. It feels quite familiar to me because it's based on an internal google technology called borgmon, but I suspect that means it feels really weird to everyone else.

The first thing to realize is that everything at google is a web server. Your short lived tool that copies some files around probably runs a web server. All of these web servers have built in URLs which report the progress and status of the task at hand. Prometheus is built to: scrape those web servers; aggregate the data; store the data into a time series database; and then perform dashboarding, trending and alerting on that data.

The most basic example is to just export metrics for each machine on my home network. This is the easiest first step, because we don't need to build any software to do this. First off, let's install node_exporter on each machine. node_exporter is the tool which runs a web server to export metrics for each node. Everything in prometheus land is written in go, which is new to me. However, it does make running node exporter easy -- just grab the relevant binary from https://prometheus.io/download/, untar, and run. Let's do it in a command line script example thing:

$ wget https://github.com/prometheus/node_exporter/releases/download/v0.14.0-rc.1/node_exporter-0.14.0-rc.1.linux-386.tar.gz
$ tar xvzf node_exporter-0.14.0-rc.1.linux-386.tar.gz
$ cd node_exporter-0.14.0-rc.1.linux-386
$ ./node_exporter

That's all it takes to run the node_exporter. This runs a web server at port 9100, which exposes the following metrics:

$ curl -s http://localhost:9100/metrics | grep filesystem_free | grep 'mountpoint="/data"'
node_filesystem_free{device="/dev/mapper/raidvg-srvlv",fstype="xfs",mountpoint="/data"} 6.811044864e+11

Here you can see that the system I'm running on is exporting a filesystem_free value for the filesystem mounted at /data. There's a lot more than that exported, and I'd encourage you to poke around at that URL a little before continuing on.

So that's lovely, but we really want to record that over time. So let's assume that you have one of those running on each of your machines, and that you have it setup to start on boot. I'll leave the details of that out of this post, but let's just say I used my existing puppet infrastructure.

Now we need the central process which collects and records the values. That's the actual prometheus binary. Installation is again trivial:

$ wget https://github.com/prometheus/prometheus/releases/download/v1.5.0/prometheus-1.5.0.linux-386.tar.gz
$ tar xvzf prometheus-1.5.0.linux-386.tar.gz
$ cd prometheus-1.5.0.linux-386

Now we need to move some things around to install this nicely. I did the puppet equivalent of:

- Moving the prometheus file to /usr/bin
- Creating an /etc/prometheus directory and moving console_libraries and consoles into it
- Creating a /etc/prometheus/prometheus.yml config file, more on the contents of this one in a second
- And creating an empty data directory, in my case at /data/prometheus

The config file needs to list all of your machines. I am sure this could be generated with puppet templating or something like that, but for now here's my simple hard coded one:

# my global config
global:
  scrape_interval: 15s      # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s  # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'stillhq'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The[...]
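The preview truncates the scrape_configs section. For completeness, a typical entry pointing prometheus at a couple of node_exporters looks something like the sketch below; the job name and hostnames are hypothetical, not the author's actual configuration:

scrape_configs:
  # Scrape every node_exporter on its default port, 9100.
  - job_name: 'node'
    static_configs:
      - targets: ['machine1:9100', 'machine2:9100']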



Gods of Metal

Mon, 08 Feb 2016 01:22:00 +1000

(image)


ISBN: 9780141982267
LibraryThing
The reviews online for this book aren't great, and frankly they're right. The plot is predictable, and there isn't much character development. Just lots and lots of blow-by-blow combat. It gets wearing after a while, and I found this book a bit of a slog. Not recommended.

Tags for this post: book william_c_dietz combat halo engineered_human cranial_computer personal_ai aliens
Related posts: Halo: The Fall of Reach; The Last Colony ; The End of All Things; The Human Division; Old Man's War ; The Ghost Brigades