Ned Batchelder's blog
http://www.nedbatchelder.com/blog/rss.xml
Ned Batchelder's personal blog.



 



Rubik's algorithms

2017-02-26T19:43:54-06:00

Recently, a nephew asked about how to solve a Rubik's Cube. I couldn't sit down with him to show him what I knew, so I looked around the web for explanations. I was surprised by two things: first, that all the pages offering solutions seemed to offer the same one, even down to the colors discussed: "Start by making a white cross, ..., finally, finish the yellow side."

Second, that the techniques (or "algorithms") were often given without explanation. They're presented as something to memorize.

My own solving technique uses a few algorithms constructed in a certain way that I describe in Two-Part Rubik's Algorithms. I wrote them up as a resource I hope my nephew will be able to use.


BTW, that page makes use of Conrad Rider's impressive TwistSim library.




Https

2017-02-25T17:37:11-06:00

Someone posted a link to my latest blog post on /r/Python, but somehow got an https link for it. That's odd: my site doesn't even properly serve content over https. People were confused by the broken link.

I should say, my site didn't even serve content over https, because now it does. I'd been meaning to enable https, and force its use, for a long time. This broken link pushed it to the top of the list.

Let's Encrypt is the certificate authority of choice these days, because it is free and automatable. People say it makes the process easy, but I would not have classified this as easy. I'm sure it's easier than it used to be, but it's still a confusing maze of choices, with decision points you are expected to navigate.

Actually getting everything installed either requires sudo, or means using third-party tools and following instructions from obscure blog posts. There's clearly still room for improvement.

Once you have the certificate in place, you need to redirect your http site to https. Then you have to fix the http references in your site. Protocol-relative (or scheme-less) URLs are handy here: a reference written as //example.com/script.js uses whichever protocol the enclosing page was loaded with.

It's all done now: the entire site should always be https. I'm glad I finally got the kick in the pants to do it. If you find something wrong, let me know.




A tale of two exceptions, continued

2017-02-23T06:36:00-06:00

In my last blog post, A tale of two exceptions, I laid out the long drawn-out process of trying to get a certain exception to make tests skip in my test runner. I ended on a solution I liked at the time. But it still meant having test-specific code in the product code, even if it was only a single line to set a base class for an exception. It didn't feel right to say "SkipTest" in the product code, even once.

In that blog post, I said:

    One of the reasons I write this stuff down is because I'm hoping to get feedback that will improve my solution, or advance my understanding. ... a reader might object and say, "you should blah blah blah."

Sure enough, Ionel said:

    A better way is to handle this in coverage's test suite. Possible solution: wrap all your tests in a decorator that reraises with a SkipException.

I liked this idea. The need was definitely a testing need, so it should be handled in the tests. First I tried doing something with pytest to get it to do the conversion of exceptions for me, but I couldn't find a way to make it work.

So: how to decorate all my tests? The decorator itself is fairly simple. Just call the method with all the arguments, and return its value, but if it raises StopEverything, then raise SkipTest instead:

    def convert_skip_exceptions(method):
        """A decorator for test methods to convert StopEverything to SkipTest."""
        def wrapper(*args, **kwargs):
            """Run the test method, and convert exceptions."""
            try:
                result = method(*args, **kwargs)
            except StopEverything:
                raise unittest.SkipTest("StopEverything!")
            return result
        return wrapper

But decorating all the test methods would mean adding a @convert_skip_exceptions line to hundreds of test methods, which I clearly was not going to do. I could use a class decorator, which meant I would only have to add a decorator line to dozens of classes. That also felt like too much to do, and too much to remember to do in the future when I write new test classes.

It's not often I say this, but: it was time for a metaclass. Metaclasses are one of the darkest magics Python has, and they can be mysterious. At heart, they are simple, but in a place you don't normally think to look. Just as a class is used to make objects, a metaclass is used to make classes. Since there's something I want to do every time I make a new class (decorate its methods), a metaclass gives me the tools to do it:

    class SkipConvertingMetaclass(type):
        """Decorate all test methods to convert StopEverything to SkipTest."""
        def __new__(mcs, name, bases, attrs):
            for attr_name, attr_value in attrs.items():
                right_name = attr_name.startswith('test_')
                right_type = isinstance(attr_value, types.FunctionType)
                if right_name and right_type:
                    attrs[attr_name] = convert_skip_except[...]
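The feed cuts the metaclass off there. As a rough, self-contained sketch of how the pieces fit together, assuming the metaclass simply wraps each test_* function and is then used as the base for the test classes (the names here are illustrative, not coverage.py's actual test infrastructure):

    # Illustrative sketch only -- names and details are assumptions, not
    # coverage.py's actual code.
    import types
    import unittest

    class StopEverything(Exception):
        """Stand-in for the product exception that should end a test early."""

    def convert_skip_exceptions(method):
        """Wrap a test method so StopEverything becomes unittest.SkipTest."""
        def wrapper(*args, **kwargs):
            try:
                return method(*args, **kwargs)
            except StopEverything:
                raise unittest.SkipTest("StopEverything!")
        return wrapper

    class SkipConvertingMetaclass(type):
        """Wrap every test_* method of a class as the class is created."""
        def __new__(mcs, name, bases, attrs):
            for attr_name, attr_value in list(attrs.items()):
                if attr_name.startswith('test_') and isinstance(attr_value, types.FunctionType):
                    attrs[attr_name] = convert_skip_exceptions(attr_value)
            return super(SkipConvertingMetaclass, mcs).__new__(mcs, name, bases, attrs)

    # A base class built with the metaclass: every TestCase derived from it
    # gets the conversion automatically, with no per-test decorators.
    SkipConvertingTestCase = SkipConvertingMetaclass('SkipConvertingTestCase', (unittest.TestCase,), {})

    class ExampleTest(SkipConvertingTestCase):
        def test_something(self):
            raise StopEverything()   # reported as a skip, not an error

Run under unittest, test_something here shows up as a skipped test rather than an error, which is the whole point of the exercise.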



A tale of two exceptions

2017-01-22T17:19:00-06:00

It was the best of times, it was the worst of times...

This week saw the release of three different versions of Coverage.py. This is not what I intended. Clearly something was getting tangled up. It had to do with some tricky exception handling. The story is kind of long and intricate, but has a number of chewy nuggets that fascinate me. Your mileage may vary.

Writing it all out, many of these missteps seem obvious and stupid. If you take nothing else from this, know that everyone makes mistakes, and we are all still trying to figure out the best way to solve some problems.

It started because I wanted to get the test suite running well on Jython. Jython is hard to support in Coverage.py: it can do "coverage run", but because it doesn't have the same internals as CPython, it can't do "coverage report" or any of the other reporting code. Internally, there's one place in the common reporting code where we detect this, and raise an exception. Before all the changes I'm about to describe, that code looked like this:

    for attr in ['co_lnotab', 'co_firstlineno']:
        if not hasattr(self.code, attr):
            raise CoverageException(
                "This implementation of Python doesn't support code analysis.\n"
                "Run coverage.py under CPython for this command."
            )

The CoverageException class is derived from Exception. Inside of Coverage.py, all exceptions raised are derived from CoverageException. This is a good practice for any library. For the coverage command-line tool, it means we can catch CoverageException at the top of main() so that we can print the message without an ugly traceback from the internals of Coverage.py.

The problem with running the test suite under Jython is that this "can't support code analysis" exception was being raised from hundreds of tests. I wanted to get to zero failures or errors, either by making the tests pass (where the operations were supported on Jython) or skipping the tests (where the operations were unsupported).

There are lots of tests in the Coverage.py test suite that are skipped for all sorts of reasons. But I didn't want to add decorators or conditionals to hundreds of tests for the Jython case. First, it would be a lot of noise in the tests. Second, it's not always immediately clear from a test that it is going to touch the analysis code. Lastly and most importantly, if someday in the future I figured out how to do analysis on Jython, or if it grew the features to make the current code work, I didn't want to have to then remove all that test-skipping noise.

So I wanted to somehow automatically skip tests when this particular exception was raised. The unittest module already has a way to do this: tests are skipped by raising a unittest.SkipTest exception. If the exception raised for "can't support code analysis" derived from SkipTest, then the tests would be skipped automatically. Genius idea!

So in 4.3.2, the code changed to this (spread across a few files):

    from coverage.backunittest import unittest

    class StopEverything(unittest.SkipTest):
        """An exception that means everything should stop.

        This derives from SkipTest so that tests that spring this trap will be
        skipped automatically, without a lot of boilerplate all over [...]
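As an aside, the "catch the library's base exception at the top of main()" practice mentioned above looks roughly like this. This is a minimal sketch with invented function bodies, not coverage.py's actual command-line code:

    # Minimal sketch of the practice described above; command_line() and its
    # error message are placeholders, not coverage.py's real implementation.
    import sys

    class CoverageException(Exception):
        """Base class for all exceptions the library raises deliberately."""

    def command_line(argv):
        """Parse arguments and run the requested command (stubbed out here)."""
        raise CoverageException("This implementation of Python doesn't support code analysis.")

    def main(argv=None):
        """Entry point for the command-line tool."""
        try:
            status = command_line(argv if argv is not None else sys.argv[1:])
        except CoverageException as err:
            # A deliberate, expected error: print the message, not a traceback.
            print(err)
            status = 1
        return status

    if __name__ == '__main__':
        sys.exit(main())

Because every deliberate error derives from the one base class, the top-level handler can distinguish expected failures (print a clean message) from genuine bugs (let the traceback through).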



Coverage.py 4.3.2 and 4.3.3, and 4.3.4

2017-01-16T18:37:00-06:00

A handful of fixes for Coverage.py today: v4.3.2. Having active contributors certainly makes it easier to move code more quickly.

...and then it turns out, 4.3.2 wouldn't run on Python 2.6. So quick like a bunny, here comes Coverage.py version 4.3.3.

...and then that fix broke other situations on all sorts of Python versions, so Coverage.py version 4.3.4.




Evil ninja module initialization

2017-01-10T06:34:54-06:00

A question about import styles on the Python-Dev mailing list asked about imports like this:

    import os as _os

Understanding why people do this is an interesting lesson in how modules work. A module is nothing more than a collection of names. When you define a name in a .py file, it becomes an attribute of the module, and is then importable from the module.

An underlying simplicity in Python is that many statements are really just assignment statements in disguise. All of these define the name X:

    X = 17

    def X():
        print("look!")

    import X

When you create a module, you can make the name "X" importable from that module by assigning to it, or defining it as a function. You can also make it importable by importing it yourself.

Suppose your module looks like this:

    # yourmodule.py
    import os

    def doit():
        os.something_or_other()

This module has two names defined in it: "doit", and "os". Someone else can now do this:

    # someone.py
    from yourmodule import os

    # or worse, this imports os and doit:
    from yourmodule import *

This bothers some people. "os" is not part of the actual interface of yourmodule. That first import I showed prevents this leaking of your imports into your interface. Importing star doesn't pull in names starting with underscores. (Another solution is to define __all__ in your module.)

Most people, though, don't worry about this kind of name leaking. Import-star is discouraged anyway, and people know not to import os from other modules. The solution of renaming os to _os just makes your code ugly for little benefit.

The part of the discussion thread that really caught my eye was Daniel Holth's winking suggestion of the "evil ninja mode pattern" of module initialization:

    def ninja():
        global exported
        import os
        def exported():
            os.do_something()

    ninja()
    del ninja

What's going on here!? Remember that def is an assignment statement like any other. When used inside a function, it defines a local name, as assignment always does. But an assignment in a function can define a global name if the name is declared as global. It's a little unusual to see a global statement without an explicit assignment at the top-level, but it works just fine. The def statement defines a global "exported" function, because the global statement told it to. "os" is now a local in our function, because again, the import statement is just another form of assignment.

So we define ninja(), and then execute it immediately. This defines the global "exported", and doesn't define a global "os". The only problem is the name "ninja" has been defined, which we can clean up with a del statement.

Please don't ever write code this way. It's a kind of over-defensiveness that isn't needed in typical Python code. But understanding what it does, and why it does it, is a good way to flex your understanding of Python workings.

For more about how names (and values) work in Python, people seem to like my PyCon talk, Python Names and Values. [...]
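For completeness, here is what the __all__ alternative mentioned above might look like, as a small sketch built on the same made-up module:

    # yourmodule.py -- illustrative sketch of the __all__ alternative
    import os

    __all__ = ["doit"]      # "from yourmodule import *" now only brings in doit

    def doit():
        os.something_or_other()   # stand-in call carried over from the example above

Note that __all__ only affects import-star: an explicit "from yourmodule import os" still works, it just advertises which names are meant to be the module's interface.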



No PyCon for me this year

2017-01-05T20:23:38-06:00

2017 will be different for me in one specific way: I won't be attending PyCon. I've been to ten in a row.


This year, Open edX con is in Madrid two days after PyCon, actually overlapping with the sprints. I'm not a good enough traveler to do both. Crossing nine timezones is not something to be taken lightly.

I'll miss the usual love-fest at PyCon, but after ten in a row, it should be OK to miss one. I can say that now, but probably in May I will feel like I am missing the party. Maybe I really will watch talks on video for a change.

I usually would be working on a presentation to give. I like making presentations, but it is a lot of work. This spring I'll have that time back.

In any case, this will be a new way to experience the Python community. See you all in 2018 in Cleveland!




D'oh: Coverage.py 4.3.1

2016-12-28T10:37:00-06:00

Yesterday I released five months' worth of fixes as Coverage.py 4.3, and today I am releasing Coverage.py 4.3.1. This is not because releasing is fun, but because releasing is error-prone.

Two bad problems were very quickly reported by my legions of adoring fans, and they are now fixed. I'll sheepishly tell you that one of them was a UnicodeError in a bit of too-cute code in setup.py.

Perhaps I should have released a 4.3 beta. But my experience in the past is that betas do not get the kind of attention that final releases do. Partly this is just due to people's attention budget: lots of people won't install a beta. But it's also due to continuous integration servers. When a final release is out, hundreds if not thousands of CI servers will install it automatically as part of the next triggered build. They won't install pre-releases.

So there's a not-great choice to make: should I put out a beta, and hope that people try it and tell me what went wrong? Will enough people in enough disparate environments take that step to truly test the release?

Or should I skip that step, jump straight to a final release, and prepare instead to quickly fix whatever problems occur? I chose the latter course for 4.3. I guess I could use meta-feedback about which form of feedback I should pursue in the future...




Coverage.py 4.3

2016-12-27T16:23:00-06:00

The latest Coverage.py release: Coverage.py 4.3 is ready.

This version adds --skip-covered support to the HTML report, implements sys.excepthook support, reads configuration from tox.ini, and contains improvements that close 18 issues. The complete change history is in the source.

A special shout-out to Loïc Dachary: he read my blog post about Who Tests What, and got interested in contributing. And I mean, really interested. Suddenly he seemed to be everywhere, making pull requests and commenting on issues. In a week, I had 122 emails due to his activity. That energy really helped push me along, and is a big reason why this release happened, five months after 4.2.

Random trivia: this is the 30th version on PyPI; it's the 57th if you include pre-releases.




Finding test coupling

2016-12-22T06:22:39-06:00

Before we get started: this is a story about a problem I had and how I solved it. This retelling is leaving out lots of small false trails and hard learnings, which I will summarize at the end. I report these stories not to lecture from on high, but to share with peers, help people learn, and ideally, elicit teachings from others so that I can do it better next time. The main qualities I am demonstrating here are not intelligence and experience, but perseverance, patience, and optimism.

OK, on with the story:

Running our large test suite the other day, we got a test failure. It seemed unrelated to the changes we were making, but you can never be sure, so we investigated. Along the way I used a few techniques to narrow down, widen, and identify suspects.

Running just that one test passed, but running the whole test suite, it failed, and this behavior was repeatable. So we had some kind of coupling between tests. Ideally, all tests would be isolated from each other. Perfect test isolation would mean that no matter what order you ran tests, and no matter what subset of tests you ran, the results would be the same. Clearly we did not have perfect test isolation.

The job now was to find the test we were coupled with, or perhaps one of the many possible tests that we were coupled with.

The test failure itself was a UnicodeError while trying to log a warning message involving a username with a non-ASCII character in it. Apparently this is something that doesn't work well: when warnings are routed through the logging system, if the message is actually logged, and the message has a non-ASCII Unicode string, an exception will happen. That's unfortunate, but we'll have to live with that for the moment.

Our best guess at the moment is that when the test passes, it's because either the warnings settings, or the logging settings, are deciding not to log the warning. When the test fails, it's because some previous test has changed one (or both!) of those settings, causing the message to proceed all the way through the warnings/logging pipeline, to the point of producing the UnicodeError. This is a plausible theory because those settings are global to the process, and would be easy to change without realizing the consequences for test suites.

But we still have to find that test. Here's the command that runs just the one test that failed:

    python ./manage.py lms test --verbosity=2 --with-id --settings=test \
        --xunitmp-file=/edx/app/edxapp/edx-platform/reports/lms/nosetests.xml \
        --with-database-isolation \
        openedx/core/djangoapps/external_auth/tests/test_openid_provider.py:OpenIdProviderTest.test_provider_login_can_handle_unicode_email_inactive_account

This is the Django test runner, using nose. That last line selects one particular test method in one particular class in one specific test file. To try to find a failing combination, we'll widen the scope of our test run by peeling off trailing components. This will give us progressively more tests in the run, and eventually (we hope), the test will fail:

    openedx/core/djangoapps/external_auth/tests/test_openid_provider.py:OpenIdProviderTest
    openedx/core/djangoapps/external_auth/tests/test_openid_provider.py
    openedx/core/djangoapps/external_auth
    openedx/core/djangoapps

This last one finally failed, with 1810 tests. That's still too many to examine manually. We can run those tests again, with nose-randomly to randomize the order of the tests. This gives us an opportunity to run experiments where the randomization can tell us something about coupling. [...]
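To make the kind of coupling being described concrete, here is a toy illustration (the tests and names are invented, not edx-platform code): one test changes process-global warnings/logging state and never restores it, so a later test runs against different global settings depending on whether the first one ran before it.

    # Toy illustration of test coupling through process-global state.
    import logging
    import unittest
    import warnings

    class TestThatLeaksGlobalState(unittest.TestCase):
        def test_routes_warnings_to_logging(self):
            # Changes process-wide state and never restores it.
            logging.captureWarnings(True)
            self.assertTrue(True)

    class TestThatDependsOnGlobalState(unittest.TestCase):
        def test_warns_about_non_ascii_username(self):
            # Whether this warning ends up in the logging pipeline depends on
            # what any earlier test did to the global warnings/logging settings.
            warnings.warn(u"suspicious login for user \u00e9milie")
            self.assertTrue(True)

Run in one order, the warning stays in the warnings machinery; run in the other, it is routed through logging. That order-dependence is exactly what randomizing the test order helps expose.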