
see shy jo



joey






the cliff

Thu, 05 Jan 2017 19:12:17 -0500

Falling off the cliff is always a surprise. I know it's there; I've been living next to it for months. I chart its outline daily. Avoiding it has become routine, and so comfortable, and so failing to avoid it surprises.

Monday evening around 10 pm, the laptop starts draining down from 100%. The house battery, which had been steady around 11.5-10.8 volts since well before the winter solstice and was lately in the low 10's, has plummeted to 8.5 volts.

With the old batteries, the cliff used to be at 7 volts. Now, with new batteries but fewer of them, it's in a surprising place, something like 10 volts, and I fell off it.

Weather forecast for the week ahead is much like the previous week: Maybe a couple sunny afternoons, but mostly no sun at all.

Falling off the cliff is not all bad. It shakes things up. It's a good opportunity to disconnect, to read paper books, and think long winter thoughts. It forces some flexibility.

I have an auxiliary battery for these situations. With its own little portable solar panel, it can charge the laptop and run it for around 6 hours. But it takes several days of winter sun to charge back up.

That's enough to get me through the night. Then I take a short trip, and glory in one sunny afternoon. But I know that won't get me out of the hole; the batteries need a sunny week to recover. This evening, I expect to lose power again, and probably tomorrow evening too.

Luckily, my goal for the week was to write slides for two talks, and I've been able to do that despite being mostly offline, and sometimes decomputered.

And, in a few days I will be jetting off to Australia! That should give the batteries a perfect chance to recover.

Previously: battery bank refresh, late summer




p2p dreams

Sat, 31 Dec 2016 22:59:29 -0500

In one of the good parts of the very mixed bag that is "Lo and Behold: Reveries of the Connected World", Werner Herzog asks his interviewees what the Internet might dream of, if it could dream. The best answer he gets is along the lines of: The Internet of before dreamed a dream of the World Wide Web. It dreamed some nodes were servers, and some were clients. And that dream became current reality, because that's the essence of the Internet.

Three years ago, it seemed like perhaps another dream was developing post-Snowden: dissolving the distinction between clients and servers, connecting peer-to-peer using addresses that are also cryptographic public keys, so authentication, encryption, and authorization are built in. Telehash is one hopeful attempt at this; others include snow, cjdns, i2p, etc. So far, none of them seem to have developed into a widely used network, although any of them still might get there. There are a lot of technical challenges due to the current Internet dream/nightmare, where the peers on the edges have multiple barriers to connecting to other peers.

But, one project has developed something similar to the new dream, almost as a side effect of its main goals: Tor's onion services. I'd wanted to use such a thing in git-annex, for peer-to-peer sharing and syncing of git-annex repositories. On November 13th, I started building it, using Tor, and I'm releasing it concurrently with this blog post.

git-annex's Tor support replaces its old hack of tunneling git protocol over XMPP. That hack was unreliable (it needed a TCP-on-top-of-XMPP layer), but worse, the XMPP server could see all the data being transferred. And, there are fewer large XMPP servers these days, so fewer users could use it at all. If you were using XMPP with git-annex, you'll need to switch to either Tor or a server accessed via ssh.

Now git-annex can serve a repository as a Tor onion service, and that can then be accessed as a git remote, using an url like tor-annex::tungqmfb62z3qirc.onion:42913. All the regular git and git-annex commands can be used with such a remote.

Tor has a lot of goals for protecting anonymity and privacy. But the important things for this project are just that it has end-to-end encryption, with addresses that are public keys, and allows P2P connections. Building an anonymous file exchange on top of Tor is not my goal -- if you want that, you probably don't want to be exchanging git histories that record every edit to the file and expose your real name by default.

Building this was not without its difficulties. Tor onion services were originally intended to run hidden websites, not to connect peers to peers, and this kind of shows. Tor does not cater to end users setting up lots of onion services. Either root has to edit the torrc file, or the Tor control port can be used to ask it to set one up. But, the control port is not enabled by default, so you still need to su to root to enable it. Also, it's difficult to find a place to put the hidden service's unix socket file that's writable by a non-root user. So I had to code around this, with a git annex enable-tor that su's to root and sets it all up for you.

One interesting detail about the implementation of the P2P protocol in git-annex is that it uses two Free monads to build up actions. There's a Net monad, which can be used to send and receive protocol messages, and a Local monad, which allows only the necessary modifications to files on disk.
Interpreters for Free monad actions can choose exactly which actions to allow, for security reasons. For example, before a peer has authenticated, the P2P protocol is being run by an interpreter that refuses to run any Local actions whatsoever. Other interpreters for the Net monad could be used to support network transports other than Tor.

When two peers are connected over Tor, one knows it's talking to the owner of a particular onion address, but the other peer knows nothing about who's talking to it, by design. This makes authentication harder than it would be in a P2P s[...]
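To make that two-monad design concrete, here's a minimal sketch using the free package. This is not git-annex's actual code; the action names, the single combined functor, and the gating logic are my illustrative assumptions.

    {-# LANGUAGE DeriveFunctor #-}
    import Control.Monad.Free (Free(..), liftF)

    -- Hypothetical protocol actions; git-annex's real Net and Local
    -- monads are separate and much richer than this.
    data ProtoF next
        = SendMessage String next           -- a Net-style action
        | ReceiveMessage (String -> next)   -- a Net-style action
        | WriteContent FilePath String next -- a Local-style action
        deriving Functor

    type Proto = Free ProtoF

    sendMessage :: String -> Proto ()
    sendMessage m = liftF (SendMessage m ())

    receiveMessage :: Proto String
    receiveMessage = liftF (ReceiveMessage id)

    writeContent :: FilePath -> String -> Proto ()
    writeContent f c = liftF (WriteContent f c ())

    -- The interpreter used before a peer authenticates: Net actions
    -- run, but any Local action aborts the protocol.
    runPreAuth :: Proto a -> IO (Maybe a)
    runPreAuth (Pure a) = return (Just a)
    runPreAuth (Free (SendMessage m next)) = do
        putStrLn ("send: " ++ m)  -- stand-in for the real transport
        runPreAuth next
    runPreAuth (Free (ReceiveMessage k)) =
        runPreAuth . k =<< getLine
    runPreAuth (Free (WriteContent _ _ _)) =
        return Nothing  -- refused until authenticated

Because the protocol program is just a data structure, swapping interpreters changes what it is allowed to do without touching the protocol logic itself.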



multi-terminal applications

Mon, 26 Dec 2016 23:58:41 -0500

While learning about and configuring weechat this evening, I noticed a lot of complexity and unsatisfying tradeoffs related to its UI, its mouse support, and its built-in window system. Got to wondering what I'd do differently, if I wrote my own IRC client, to avoid those problems. The first thing I realized is, it is not a good idea to think about writing your own IRC client. Danger, Will Robinson..

So, let's generalize. This blog post is not about designing an IRC client, but about exploring simpler ways that something like an IRC client might handle its UI, and perhaps designing something general-purpose that could be used by someone else to build an IRC client, or be mashed up with an existing IRC client.

What any modern IRC client needs to do is display various channels to the user. Maybe more than one channel should be visible at a time in some kind of window, but often the user will have lots of available channels and only want to see a few of them at a time. So there needs to be an interface for picking which channel(s) to display, and if multiple windows are shown, for arranging the windows. Often that interface also indicates when there is activity on a channel. The most recent messages from the channel are displayed. There should be a way to scroll back to see messages that have already scrolled by. There needs to be an interface for sending a message to a channel. Finally, a list of users in the channel is often desired.

Modern IRC clients implement their own UI for channel display, windowing, channel selection, activity notification, scrollback, message entry, and user list. Even the IRC clients that run in a terminal include all of that. But how much of that do they need to implement, really?

Suppose the user has a tabbed window manager that can display virtual terminals. The terminals can set their title, and can indicate when there is activity in the terminal. Then an IRC client could just open a bunch of terminals, one per channel. Let the window manager handle channel selection, windowing (naturally), and activity notification. For scrollback, the IRC client can use the terminal's own scrollback buffer, so the terminal's regular scrollback interface can be used. This is slightly tricky; it can't use the terminal's alternate display, and it has to handle the UI for the message entry line at the bottom.

That's all the UI an IRC client needs (except for the user list), and most of that is already implemented in the window manager and virtual terminal. So that's an elegant way to make an IRC client without building much new UI at all.

But, unfortunately, most of us don't use tabbed window managers (or tabbed terminals). Such an IRC client, in a non-tabbed window manager, would be a many-windowed mess. Even in a tabbed window manager, it might be annoying to have so many windows for one program. So we need fewer windows.

Let's have one channel list window, and one channel display window. There could also be a user list window. And there could be a way to open additional, dedicated display windows for channels, but that's optional. All of these windows can be separate virtual terminals.

A potential problem: When changing the displayed channel, it needs to output a significant number of messages for that channel, so that the scrollback buffer gets populated. With a large number of lines, that can be too slow to feel snappy. In some tests, scrolling 10 thousand lines was noticeably slow, but scrolling 1 thousand lines happens fast enough not to be noticeable.
(Terminals should really be faster at scrolling than this, but they're still writing scrollback to unlinked temp files.. sigh!)

An IRC client that uses multiple cooperating virtual terminals needs a way to start up a new virtual terminal displaying the current channel. It could run something like this:

x-terminal-emulator -e the-irc-client --display-current-channel

That would communicate with the main process via a unix socket to find out what to display. Or, more generally:

x-terminal-emul[...]
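Here's a minimal sketch of that coordination, under my own assumptions: the socket path, the flag name, and the hardcoded channel are all made up for illustration, and a real client would need cleanup and error handling.

    import Network.Socket
    import System.IO (IOMode(ReadWriteMode), hPutStrLn, hClose)
    import System.Process (spawnProcess)
    import Control.Monad (forever, void)
    import Control.Concurrent (forkIO)

    -- Hypothetical rendezvous point; assumes no stale socket file.
    socketPath :: FilePath
    socketPath = "/tmp/the-irc-client.sock"

    main :: IO ()
    main = do
        -- The main process listens on a unix socket.
        sock <- socket AF_UNIX Stream defaultProtocol
        bind sock (SockAddrUnix socketPath)
        listen sock 5
        -- Open a virtual terminal running a second instance of the
        -- client, which connects back to ask what to display.
        void $ spawnProcess "x-terminal-emulator"
            ["-e", "the-irc-client", "--display-current-channel"]
        forever $ do
            (conn, _) <- accept sock
            void $ forkIO $ do
                h <- socketToHandle conn ReadWriteMode
                hPutStrLn h "#haskell"  -- tell it the current channel
                hClose h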



drought

Wed, 30 Nov 2016 17:09:25 -0500

Drought here since August. The small cistern ran dry a month ago, which has never happened before. The large cistern was down to some 900 gallons. I don't use anywhere near the national average of 400 gallons per day; more like 10 gallons. So I could have managed for a few more months. Still, this was worrying, especially as the area moved from severe to extreme drought according to the US Drought Monitor.

Two days of solid rain fixed it, yay! The small cistern has already refilled, and the large will probably be full by tomorrow.

The winds preceding that same rain storm fanned the flames that destroyed Gatlinburg. Earlier, fire got within 10 miles of here, although that may have been some kind of controlled burn.

Climate change is leading to longer duration weather events in this area. What tended to be a couple of dry weeks in the fall has become multiple months of drought and weeks of fire. What might have been a few days of winter weather and a few inches of snow before the front moved through has turned into multiple weeks of arctic air, with multiple 1 ft snowfalls. What might have been a few scorching summer days has become a week of 100-110 degree temperatures. I've seen all that over the past several years.

After this, I'm adding "additional, larger cistern" to my todo list. Also "larger fire break around house".




Linux.Conf.Au 2017 presentation on Propellor

Wed, 16 Nov 2016 10:13:32 -0500

On January 18th, I'll be presenting "Type driven configuration management with Propellor" at Linux.Conf.Au in Hobart, Tasmania.

Linux.Conf.Au is a wonderful conference, and I'm thrilled to be able to attend it again.

--

Update: My presentation on keysafe has also been accepted for the Security MiniConf at LCA, January 17th.




keysafe with local shares

Thu, 06 Oct 2016 18:37:24 -0400

If your gpg key is too valuable for you to feel comfortable with backing it up to the cloud using keysafe, here's an alternative that might appeal more.

Keysafe can now back up some shares of the key to local media, and other shares to the cloud. You can arrange things so that the key can't be restored without access to some of the local media and some of the cloud servers, as well as your password.

For example, I have 3 USB sticks, and there are 3 keysafe servers. So let's make 6 shares total of my gpg secret key and require any 4 of them to restore it.

I plug in all 3 USB sticks and look at mount to get the paths to them. Then I run keysafe to back up the key, spread among all 6 locations.

keysafe --backup --totalshares 6 --neededshares 4 \
    --add-storage-directory /media/sdc1 \
    --add-storage-directory /media/sdd1 \
    --add-storage-directory /media/sde1

Once it's done, I can remove the USB sticks, and distribute them to secure places.

To restore, I need at least one of the USB sticks. (If some of the servers are down, more USB sticks will be needed.) Again I tell keysafe the paths where USB stick(s) are mounted.

keysafe --restore --totalshares 6 --neededshares 4 \
    --add-storage-directory /media/sdb1
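For intuition about the share arithmetic, here's a tiny sketch (my own illustration, not part of keysafe): the number of USB sticks needed for a restore is the threshold minus however many cloud shares are reachable.

    -- Not keysafe code; just the arithmetic behind "more USB sticks
    -- will be needed" when some servers are down.
    sticksNeeded :: Int -> Int -> Int
    sticksNeeded neededShares serversUp = max 0 (neededShares - serversUp)

    -- sticksNeeded 4 3 == 1  (all 3 servers up: one stick suffices)
    -- sticksNeeded 4 1 == 3  (two servers down: all three sticks)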

Using keysafe this way, physical access to the USB sticks is the first level of defense, and hopefully you'll know if that's breached. The keysafe password is the second level of defense, and cracking that will take a lot of work, leaving plenty of time to revoke your key, etc, if it comes to that.

I feel this is better than the methods I've been using before to back up my most important gpg keys. With paperkey, physical access to the printout immediately exposes the key. With Shamir Secret Sharing and manual distribution of shares, the only second line of defense is the gpg passphrase, which is much easier to crack. Using OpenPGP smartcards is still a more secure option, but you'd need 3 smartcards to reach the same level of redundancy, and it's easier to get your hands on 3 USB sticks than 3 smartcards.

There's another benefit to using keysafe this way. It means that sometimes, the data stored on the keysafe servers is not sufficient to crack a key. There's no way to tell, so an attacker risks doing a lot of futile work.

If you're not using an OpenPGP smartcard, I encourage you to back up your gpg key with keysafe as described above.

Two of the three necessary keysafe servers are now in operation, and I hope to have a full complement of servers soon.

(This was sponsored by Thomas Hochstein on Patreon.)




battery bank refresh

Wed, 05 Oct 2016 18:12:54 -0400

My house entered full power saving mode with fall: lantern light, and all devices shut down at bedtime.

But, it felt early to need to do this. Comparing with my logbook for last year, the batteries were indeed doing much worse.

I had added a couple of new batteries to the bank last winter, and they seemed to have helped at the time, although it's difficult to tell when you have a couple of good batteries among a dozen failing ones.

The bank was set up like this:

+---- house ----
|              |
+( 6v )-+( 6v )-
|              |
+( 6v )-+( 6v )-
|              |
+( 6v )-+( 6v )-
|              |
+( 6v )-+( 6v )-
|              |
+( 6v )-+( 6v )-
|              |
+(   new 12v  )-
|              |
+(   new 12v  )-

As an experiment, I tried disconnecting all the bridges between the old 6v battery pairs. I expected this would mean only the new 12v ones would be in the circuit, and so I could see how well they powered the house. Instead, making this change left the house without any power at all!

On a hunch, I then reconnected one bridge, like this -- and power was restored.

+---- house ----
|              |
+( 6v )-+( 6v )-
|              |
+( 6v )  ( 6v )-
|              |
+( 6v )  ( 6v )-
|              |
+( 6v )  ( 6v )-
|              |
+( 6v )  ( 6v )-
|              |
+(   new 12v  )-
|              |
+(   new 12v  )-

My best guess of what's going on is that the wires forming the positive and negative rails are not making good connections (due to corrosion, rust, broken wires etc), and so batteries further down are providing less and less power. The new 12v ones may not have been able to push power up to the house at all.

(Or, perhaps having partially dead batteries hanging half-connected off the circuit has some effect that my meager electronics knowledge can't account for.)

So I got longer cables to connect the new batteries directly to the house, bypassing all the old stuff. That's working great -- house power never dropped below 11.9v last night, vs 11.1v the night before.

The old battery bank might still be able to provide another day or so of power in a pinch, so I am going to keep them in there for now, but if I don't use them at all this winter I'll be recycling them. Astounding that those batteries were in use for 20 years.




keysafe beta release

Thu, 22 Sep 2016 16:13:21 -0400

After a month of development, keysafe 0.20160922 is released, and ready for beta testing. And it needs servers.

With this release, the whole process of backing up and restoring a gpg secret key to keysafe servers is implemented. Keysafe is started at desktop login; it will notice when a gpg secret key has been created, and prompt to see if it should back it up.

At this point, I recommend only using keysafe for lower-value secret keys, for several reasons:

  • There could be some bug that prevents keysafe from restoring a backup.
  • Keysafe's design has not been completely reviewed for security.
  • None of the keysafe servers available so far or planned to be deployed soon meet all of the security requirements for a recommended keysafe server. While server security is only the initial line of defense, it's still important.

Currently the only keysafe server is one that I'm running myself. Two more keysafe servers are needed for keysafe to really be usable, and I can't run those.

If you're interested in running a keysafe server, read the keysafe server requirements and get in touch.




PoW bucket bloom: throttling anonymous clients with proof of work, token buckets, and bloom filters

Tue, 13 Sep 2016 01:14:47 -0400

An interesting side problem in keysafe's design is that keysafe servers, which run as tor hidden services, allow anonymous data storage and retrieval. While each object is limited to 64 kb, what's to stop someone from making many requests and using it to store some big files? The last thing I want is a git-annex keysafe special remote. ;-)

I've done a mash-up of three technologies to solve this, that I think is perhaps somewhat novel. Although it could be entirely old hat, or even entirely broken. (All I know so far is that the code compiles.) It uses proof of work, token buckets, and bloom filters.

Each request can have a proof of work attached to it, which is just a value that, when hashed with a salt, starts with a certain number of 0's. The salt includes the ID of the object being stored or retrieved.

The server maintains a list of token buckets. The first can be accessed without any proof of work, and subsequent ones need progressively more proof of work to be accessed. Clients will start by making a request without a PoW, and that will often succeed, but when the first token bucket is being drained too fast by other load, the server will reject the request and demand enough proof of work to allow access to the second token bucket. And so on down the line if necessary. At the worst, a client may have to do 8-16 minutes of work to access a keysafe server that is under heavy load, which would not be ideal, but is acceptable for keysafe since it's not run very often.

If the client provides a PoW good enough to allow accessing the last token bucket, the request will be accepted even when that bucket is drained. The client has done plenty of work at this point, so it would be annoying to reject it. To prevent an attacker that is willing to burn CPU from abusing this loophole to flood the server with object stores, the server delays until the last token bucket fills back up.

So far so simple really, but this has a big problem: What prevents a proof of work from being reused? An attacker could generate a single PoW good enough to access all the token buckets, and flood the server with requests using it, and so force everyone else to do excessive amounts of work to use the server.

Guarding against that DOS is where the bloom filters come in. The server generates a random request ID, which has to be included in the PoW salt and sent back by the client along with the PoW. The request ID is added to a bloom filter, which the server can use to check if the client is providing a request ID that it knows about. And a second bloom filter is used to check if a request ID has been used by a client before, which prevents the DOS.

Of course, when dealing with bloom filters, it's important to consider what happens when there's a rare false positive match. This is not a problem with the first bloom filter, because a false positive only lets some made-up request ID be used. A false positive in the second bloom filter will cause the server to reject the client's proof of work. But the server can just request more work, or send a new request ID, and the client will follow along.

The other gotcha with bloom filters is that filling them up too far sets too many bits, and so false positive rates go up. To deal with this, keysafe just keeps count of how many request IDs it has generated, and once it gets to be too many to fit in a bloom filter, it makes a new, empty bloom filter and starts storing request IDs in it.
The old bloom filter is still checked too, providing a grace period for old request IDs to be used. Using bloom filters that occupy around 32 mb of RAM, this rotation only has to be done every million requests or so. But, that rotation opens up another DOS! An attacker could cause lots of request IDs to be generated, and so force the server to rotate its bloom filters too quickly, which would prevent a[...]
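To make the proof-of-work check concrete, here's a minimal sketch. SHA-256, the function names, and the parameters are my illustrative stand-ins, not keysafe's actual code or hash choice.

    import Crypto.Hash (hashWith, SHA256(..))
    import qualified Data.ByteArray as BA
    import qualified Data.ByteString as B
    import qualified Data.ByteString.Char8 as C
    import Data.Bits (testBit)
    import Data.Word (Word8)

    -- A proof of work is valid when hashing the salt (which includes
    -- the object ID and the server's request ID) with the client's
    -- candidate yields a digest starting with enough zero bits.
    validPoW :: Int -> B.ByteString -> B.ByteString -> Bool
    validPoW zeroBits salt candidate =
        all (not . bitAt) [0 .. zeroBits - 1]
      where
        digest :: [Word8]
        digest = BA.unpack (hashWith SHA256 (salt <> candidate))
        bitAt i = testBit (digest !! (i `div` 8)) (7 - (i `mod` 8))

    -- The client brute-forces a candidate; each extra zero bit the
    -- server demands doubles the expected work.
    findPoW :: Int -> B.ByteString -> B.ByteString
    findPoW zeroBits salt =
        head [ c | n <- [0 :: Integer ..]
                 , let c = C.pack (show n)
                 , validPoW zeroBits salt c ]

Since the salt includes the server-chosen request ID, a found candidate can't be reused for a different request; the bloom filters are what let the server remember that cheaply.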



late summer

Tue, 30 Aug 2016 21:15:40 -0400

With days beginning to shorten toward fall, my house is in initial power saving mode. Particularly, the internet gateway is powered off overnight. Still running electric lights until bedtime, and still using the inverter and other power without much conservation during the day.

Indeed, I had two laptops running cpu-melting keysafe benchmarks for much of today, and one of them had to charge up from empty too. That's why the house power is a little low, at 11.0 volts now, despite over 30 amp-hours having been produced on this mostly clear day. (The 1-week average is 18.7 amp-hours.)

September/October is the tricky time where it's easy to fall off a battery depletion cliff and be stuck digging out for a long time. So time to start dusting off the conservation habits after summer's excess.


I think this is the first time I've mentioned any details of living off grid with a bare minimum of PV capacity in over 4 years. Solar has a lot of older posts about it, and I'm going to note down the typical milestones and events over the next 8 months.