stillhq.com : Mikal, a geek from Canberra living in Silicon Valley



The life, times, travel and software of Michael Still



 



I think I found a bug in python's unittest.mock library

Wed, 27 Sep 2017 21:58:00 -0800

Mocking is a pretty common thing to do in unit tests covering OpenStack Nova code. Over the years we've used various mock libraries to do that, with the flavor du jour being unittest.mock. I must say that I strongly prefer unittest.mock to the old mox code we used to write, but I think I just accidentally found a fairly big bug.

The problem is that Python mocks are magical. A mock is an object on which you can call any method name, and the mock will happily pretend it has that method and return None. You can then later ask what "methods" were called on the mock. However, you use the same mock object to make assertions about what was called. Herein lies the problem -- the mock object doesn't know if you're the code under test or the code making assertions. So, if you fat-finger the assertion in your test code, the assertion will just quietly map to a non-existent method which returns None, and your test will pass. Here's an example:

    #!/usr/bin/python3

    from unittest import mock


    class foo(object):
        def dummy(self, a, b):
            return a + b


    @mock.patch.object(foo, 'dummy')
    def call_dummy(mock_dummy):
        f = foo()
        f.dummy(1, 2)

        print('Asserting a call should work if the call was made')
        mock_dummy.assert_has_calls([mock.call(1, 2)])
        print('Assertion for expected call passed')
        print()

        print('Asserting a call should raise an exception if the call wasn\'t made')
        mock_worked = False
        try:
            mock_dummy.assert_has_calls([mock.call(3, 4)])
        except AssertionError as e:
            mock_worked = True
            print('Expected failure, %s' % e)

        if not mock_worked:
            print('*** Assertion should have failed ***')
        print()

        print('Asserting a call where the assertion has a typo should fail, but '
              'doesn\'t')
        mock_worked = False
        try:
            mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
        except AssertionError as e:
            mock_worked = True
            print('Expected failure, %s' % e)
        print()

        if not mock_worked:
            print('*** Assertion should have failed ***')
            print(mock_dummy.mock_calls)
            print()


    if __name__ == '__main__':
        call_dummy()

If I run that code, I get this:

    $ python3 mock_assert_errors.py
    Asserting a call should work if the call was made
    Assertion for expected call passed

    Asserting a call should raise an exception if the call wasn't made
    Expected failure, Calls not found.
    Expected: [call(3, 4)]
    Actual: [call(1, 2)]

    Asserting a call where the assertion has a typo should fail, but doesn't

    *** Assertion should have failed ***
    [call(1, 2), call.typo_assert_has_calls([call(3, 4)])]

So, we should have been told that typo_assert_has_calls isn't a thing, but we didn't notice because it silently failed. I discovered this when I noticed an assertion with a (smaller than this) typo in its call in a code review yesterday. I don't really have a solution to this right now (I'm home sick and not thinking straight), but it would be interesting to see what other people think. A partial mitigation is sketched below.

Tags for this post: python unittest.mock mock testing
Related posts: Terrible pong; paramiko exec_command timeout; Rsyncing everything but the data; Python effective TLD library; Python DNS modules; mbot: new hotness in Google Talk bots

Comment
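One partial mitigation worth knowing about: since Python 3.5, unittest.mock refuses to create attributes whose names begin with "assert" or "assret" (unless the mock was created with unsafe=True), so many assertion typos do fail loudly. It doesn't catch the example above, though, because typo_assert_has_calls doesn't begin with "assert". A minimal sketch of both behaviours:

    #!/usr/bin/python3

    from unittest import mock

    m = mock.Mock()
    m.dummy(1, 2)

    # A typo that still begins with 'assert' raises AttributeError on
    # Python 3.5 and later, instead of silently returning a child mock.
    try:
        m.assert_has_callz([mock.call(1, 2)])
    except AttributeError as e:
        print('Typo caught: %s' % e)

    # A typo with any other prefix is still silently accepted, which is
    # exactly the failure mode described in the post above.
    m.typo_assert_has_calls([mock.call(1, 2)])
    print(m.mock_calls)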



Configuring docker to use rexray and Ceph for persistent storage

Sun, 28 May 2017 18:45:00 -0800

For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working...

First off, I needed to install rexray:

    root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
    Selecting previously unselected package rexray.
    (Reading database ... 177547 files and directories currently installed.)
    Preparing to unpack rexray_0.9.0-1_amd64.deb ...
    Unpacking rexray (0.9.0-1) ...
    Setting up rexray (0.9.0-1) ...
    rexray has been installed to /usr/bin/rexray

    REX-Ray
    -------
    Binary: /usr/bin/rexray
    Flavor: client+agent+controller
    SemVer: 0.9.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    Formed: Thu, 04 May 2017 07:38:11 AEST

    libStorage
    ----------
    SemVer: 0.6.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    Formed: Thu, 04 May 2017 07:36:11 AEST

Which is of course horrid. What that script seems to have done is install a deb'd version of rexray based on an alien'd package:

    root@labosa:~/rexray# dpkg -s rexray
    Package: rexray
    Status: install ok installed
    Priority: extra
    Section: alien
    Installed-Size: 36140
    Maintainer: Travis CI User
    Architecture: amd64
    Version: 0.9.0-1
    Depends: libc6 (>= 2.3.2)
    Description: Tool for managing remote & local storage.
     A guest based storage introspection tool that allows local
     visibility and management from cloud and storage platforms.
     .
     (Converted from a rpm package by alien version 8.86.)

If I was building anything more than a test environment I think I'd want to do a better job of installing rexray than this, so you've been warned.

Next, to configure rexray to use Ceph. The configuration details are cunningly hidden in the libstorage docs, and aren't mentioned at all in the rexray docs, so you probably want to take a look at the libstorage docs on ceph. First off, we need to install the ceph tools, and copy the ceph authentication information from the ceph we installed using openstack-ansible earlier.

    root@labosa:/etc# apt-get install ceph-common
    root@labosa:/etc# scp -rp 172.29.239.114:/etc/ceph .
    The authenticity of host '172.29.239.114 (172.29.239.114)' can't be established.
    ECDSA key fingerprint is SHA256:SA6U2fuXyVbsVJIoCEHL+qlQ3xEIda/MDOnHOZbgtnE.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.29.239.114' (ECDSA) to the list of known hosts.
    rbdmap                                 100%   92     0.1KB/s   00:00
    ceph.conf                              100%  681     0.7KB/s   00:00
    ceph.client.admin.keyring              100%   63     0.1KB/s   00:00
    ceph.client.glance.keyring             100%   64     0.1KB/s   00:00
    ceph.client.cinder.keyring             100%   64     0.1KB/s   00:00
    ceph.client.cinder-backup.keyring      100%   71     0.1KB/s   00:00
    root@labosa:/etc# modprobe rbd

You also need to configure rexray. My first attempt looked like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: ceph

And the rexray output sure made it look like it worked...

    root@labosa:/etc# rexray service start
    ● rexray.service - rexray
       Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2017-05-29 10:14:07 AEST; 33ms ago
     Main PID: 477423 (rexray)
        Tasks: 5
       Memory: 1.5M
          CPU: 9ms
       CGroup: /system.slice/rexray.service
               └─477423 /usr/bin/rexray start -f

    May 29 10:14:07 labosa systemd[1]: Started rexray.
Which looked good, but /var/log/syslog said:

    May 29 10:14:08 labosa rexray[477423]: REX-Ray
    May 29 10:14:08 labosa rexray[477423]: -------
    May 29 10:14:08 labosa rexray[477423]: Binary: /usr/bin/rexray
    May 29 10:14:08 labosa rexray[477423]: Flavor: client+agent+controller
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.9.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch:[...]
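For what it's worth, rexray's libstorage Ceph driver is named "rbd" rather than "ceph", so a working configuration probably looks more like the following. This is a hedged sketch, not verified against this environment, and the defaultPool value is an assumed pool name:

    # /etc/rexray/config.yml -- sketch only
    libstorage:
      service: rbd
    rbd:
      defaultPool: rbd

Once rexray is happy, docker would consume Ceph-backed volumes through the rexray volume driver roughly like this (the volume name, size and mount point are illustrative, not from the original post):

    # create a rexray-managed volume; with the RBD driver this should
    # appear as an RBD image in the configured pool
    docker volume create --driver rexray --opt size=1 --name ceph-test

    # attach it to a container; data written to /data lands in Ceph
    docker run -it --rm --volume-driver rexray -v ceph-test:/data ubuntu bash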



So you want to setup a Ceph dev environment using OSA

Sat, 27 May 2017 18:30:00 -0800

Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I need a Ceph development environment it seems logical to build one as an openstack-ansible Ocata AIO. There were a few gotchas, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I've never seen before called a "Scenario". Basically this means that you need to export an environment variable called "SCENARIO" before running the AIO install. Something like this will do the trick:

    export SCENARIO=ceph
    

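For context, here is roughly where that export fits in the overall AIO build. The script names are the standard ones from the openstack-ansible tree; I'm assuming the usual Ocata AIO workflow here:

    # from a checkout of openstack-ansible
    export SCENARIO=ceph
    ./scripts/bootstrap-ansible.sh
    ./scripts/bootstrap-aio.sh
    # ... apply the pg_num override described below ...
    ./scripts/run-playbooks.sh
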

Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

    --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:55:07.803635173 +1000
    +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:58:30.417019878 +1000
    @@ -338,7 +338,9 @@
     #     foo: 1234
     #     bar: 5678
     #
    -ceph_conf_overrides: {}
    +ceph_conf_overrides:
    +  global:
    +    osd_pool_default_pg_num: 8
     
     
     #############
    @@ -373,4 +375,4 @@
     # Set this to true to enable File access via NFS.  Requires an MDS role.
     nfs_file_gw: true
     # Set this to true to enable Object access via NFS. Requires an RGW role.
    -nfs_obj_gw: false
    \ No newline at end of file
    +nfs_obj_gw: false
    


That of course needs to be done after the Ceph role has been fetched, but before it is executed, so in other words after the AIO bootstrap, but before the install.
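If you'd rather not edit the fetched role, the same override should be expressible in /etc/openstack_deploy/user_variables.yml, since vars files normally take precedence over role defaults in Ansible. I haven't verified this against an Ocata AIO, so treat it as a sketch:

    # /etc/openstack_deploy/user_variables.yml -- untested alternative,
    # relying on user variables overriding the ceph-ansible role default
    ceph_conf_overrides:
      global:
        osd_pool_default_pg_num: 8
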

And that was about it (although of course that took a fair while to work out). I have this automated in my little install helper thing, so I'll never need to think about it again, which is nice.

Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.

    root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
        cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
         health HEALTH_OK
         monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
         osdmap e20: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                102156 kB used, 3070 GB / 3070 GB avail
                      40 active+clean
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
    -1 2.99817 root default                                      
    -2 2.99817     host labosa                                   
     0 0.99939         osd.0        up  1.00000          1.00000 
     1 0.99939         osd.1        up  1.00000          1.00000 
     2 0.99939         osd.2        up  1.00000          1.00000 
    


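As a quick smoke test that the cluster actually accepts data, you can create a small RBD image from inside the monitor container. This is a hypothetical example; the "rbd" pool name and image name are assumptions about what exists in the AIO:

    root@aio1-ceph-mon-container-a3d8b8b1:/# rbd create --size 128 rbd/smoke-test
    root@aio1-ceph-mon-container-a3d8b8b1:/# rbd ls rbd
    smoke-test
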
Tags for this post: openstack osa ceph openstack-ansible
Related posts: Configuring docker to use rexray and Ceph for persistent storage

Comment



The Collapsing Empire

Sat, 23 Apr 2016 01:30:00 +1000



ISBN: 076538888X
LibraryThing
A reading group of managers at work has been reading this book, except for the last chapter, which we were left to read by ourselves. Overall, the book is interesting and very readable. It's a little dated, being all excited about the invention of email and using some unfortunate gender pronouns, but if you can get past those minor things there is a lot of wise advice here. I'm not sure I agree with 100% of it, but I do think the vast majority is of interest. A well written book that I'd recommend to new managers.

Tags for this post: book andy_gove management intel non_fiction
Related posts: Being Geek; Bad Science; Juno nova mid-cycle meetup summary: slots; Sticklers, Sideburns and Bikinis; Bad Pharma; The Bad Popes
Comment Recommend a book