
Ned Batchelder's blog

Ned Batchelder's personal blog.


Mac un-installs


The Mac is a nice machine and operating system, but there's one part of the experience I don't understand: software installation and uninstallation. I'm sure the App Store is meant to solve some of this, but the current situation is oddly manual.

Usually when I install applications on the Mac, I get a .dmg file, I open it, and there's something to copy to the Applications folder. Often, the .dmg window that opens has a cute graphic as a background, to encourage me to drag the application to the folder.

Proponents of this say, "it's so simple! The whole app is just a folder, so you can just drag it to Applications, and you're done. When you don't want the application any more, you just drag the application to the Trash."

This is not true. Applications may start self-contained in a folder, but they write data to other places on the disk. Those places are orphaned when you discard the application. Why is there no uninstaller to clean up those things?

As an example, I was cleaning up my disk this morning. Grand Perspective helped me find some big stuff I didn't need. One thing it pointed out to me was in a Caches folder. I wondered how much stuff was in folders called Caches:

sudo find / -type d -name '*Cache*' -exec du -sk {} \; -prune 2>&-

(Find every directory with 'Cache' in its name, show its disk usage in KB, and suppress any errors along the way.) This found all sorts of interesting things, including folders from applications I had long ago uninstalled.

Now I could search for other directories belonging to these long-gone applications. For example:

sudo find / -type d -name '*TweetDeck*' -exec du -sh {} \; -prune 2>&-
 12K    /Users/ned/Library/Application Support/Fluid/FluidApps/TweetDeck
 84K    /Users/ned/Library/Caches/com.fluidapp.FluidApp.TweetDeck
 26M    /Users/ned/Library/Containers/com.twitter.TweetDeck
1.7M    /Users/ned/Library/Saved Application State/com.fluidapp.FluidApp.TweetDeck.savedState
sudo find / -type d -name '*twitter-mac*' -exec du -sh {} \; -prune 2>&-
288K    /private/var/folders/j2/gr3cj3jn63s5q8g3bjvw57hm0000gp/C/com.twitter.twitter-mac
 99M    /Users/ned/Library/Containers/com.twitter.twitter-mac
4.0K    /Users/ned/Library/Group Containers/

That's about 128Mb of junk left behind by two applications I no longer have. In the scheme of things, 128Mb isn't that much, but it's a lot more disk space than I want to devote to applications I've discarded. And what about other apps I tried and removed? Why leave this? Am I missing something that should have handled this for me?
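The same scan can be scripted in Python rather than shelling out to find; this is a sketch of the idea, with function names of my own choosing:

```python
import os

def dir_size_kb(path):
    """Total size of all files under `path`, in KB (like `du -sk`)."""
    total = 0
    for root, dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip unreadable files, like `2>&-` above
    return total // 1024

def leftovers(top, app_name):
    """Yield directories under `top` whose name mentions `app_name`."""
    for root, dirs, files in os.walk(top):
        for d in list(dirs):
            if app_name.lower() in d.lower():
                yield os.path.join(root, d)
                dirs.remove(d)  # like -prune: don't descend into matches

# Example:
#   for path in leftovers(os.path.expanduser("~/Library"), "TweetDeck"):
#       print(dir_size_kb(path), path)
```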

One of Them


I have not written here about this year's presidential election. I am as startled, confused, and dismayed as many others about how Trump has managed to con people into following him, with nothing more than bluster and lies. It feels enormous to take it on in writing.

Nathan Uno also feels as I do, but for different reasons. I've never met Nathan: he's an online friend, part of a small close-knit group who mostly share a religious background, and who enjoy polite intellectual discussions of all sorts of topics. I'm not sure why they let me in the group... :)

Nathan and I started talking about our feelings about the election, and it quickly became clear that he had a much more visceral reason to oppose Trump than I did. I encouraged him to write about it, and he did. Here it is, "One of Them."

•    •    •

One of Them

Armed police came in the middle of the night and in the middle of winter, to take a husband away from his wife and a father away from his children. No explanation was given, and his family was not allowed to see him or even know where he was being held. A few months later the man's wife and children were also rounded up and taken away. They had only the belongings that they could carry with them, leaving everything else to be lost or stolen or claimed by others, including some of the family's most precious possessions.

The family was imprisoned in a camp surrounded by barbed wire and armed soldiers. They had little food and little heat and absolutely no freedom. A few months after the wife and children arrived, they were finally reunited with their husband and father, seven months after he was taken from them in the night. They remained together at the camp for years until being released, given $25 and a bus ticket each, and left to try to put their shattered lives back together.

No member of the family was ever charged with a crime. In fact, no member of the family was ever even suspected of a crime. They were imprisoned, along with tens of thousands of others, simply for being "one of them."

This is the story of my grandfather's family. And my grandmother's family. And tens of thousands of other families of Japanese descent who had the misfortune of living on the Pacific coast of the United States after the attack on Pearl Harbor.

In the 1980s the U.S. government formally apologized, acknowledging its mistake, and financial reparations were made. Growing up I believed that we, as a country, had moved on, had learned a lesson. It never occurred to me that such a thing could happen again. And yet here we are, with a presidential candidate who has openly advocated violence against his opponents and detractors, offered to pay legal fees for those who break the law on his behalf, recommended policies that would discriminate against people based on their ethnicity, religion, or country of ancestry, suggested that deliberately killing women and children might be an appropriate response to terrorism, and yes, even said that he "might have" supported the policies that imprisoned my family.

Xenophobic public policy leaves enduring scars on our society, scars that may not be obvious at first. We have Chinatowns today largely because public policy in San Francisco in the late 1800s pushed Chinese immigrants to live in a specific neighborhood. The proliferation of Chinese restaurants and Chinese laundries in our country can be traced back to the same time period, when policy restricted employment opportunities for Chinese immigrants and pushed them into doing low-paying "women's work," like cooking and cleaning.

I've chosen to make my point with these simple examples from the history of Asian Americans because that's my heritage. But these examples are trivial compared to the deep, ugly scars left on our society by slavery, and Jim Crow, and the near genocide of the Native American peoples. And despite many positive gains, women continue to be at a significant disadvantage from millennia of policies designed to keep[...]

Multi-parameter Jupyter notebook interaction


I'm working on figuring out retirement scenarios. I wasn't satisfied with the usual online calculators. I made a spreadsheet, but it was hard to see how the different variables affected the outcome. Aha! This sounds like a good use for a Jupyter Notebook!

Using widgets, I could make a cool graph with sliders for controlling the variables, and affecting the result. Nice.

But there was a way to make the relationship between the variables and the outcome more apparent: choose one of the variables, and plot its multiple values on a single graph. And of course, I took it one step further, so that I could declare my parameters, and have the widgets, including the selection of the variable to auto-slide, generated automatically.

I'm pleased with the result, even if it's a little rough. You can download retirement.ipynb to try it yourself.

The general notion of a declarative multi-parameter model with an auto-slider is contained in a class:

%pylab --no-import-all inline

from collections import namedtuple
from ipywidgets import interact, IntSlider, FloatSlider

class Param(namedtuple('Param', "default, range")):
    """
    A parameter for `Model`.
    """
    def make_widget(self):
        """Create a widget for a parameter."""
        is_float = isinstance(self.default, float)
        is_float = is_float or any(isinstance(v, float) for v in self.range)
        wtype = FloatSlider if is_float else IntSlider
        return wtype(
            value=self.default,
            min=self.range[0], max=self.range[1], step=self.range[2],
            continuous_update=True,
        )

class Model:
    """
    A multi-parameter model.
    """
    output_limit = None
    num_auto = 7

    def _show_it(self, auto_param, **kw):
        if auto_param == 'None':
            plt.plot(self.inputs, **kw)
        else:
            autop = self.params[auto_param]
            auto_values = np.arange(*autop.range)
            if len(auto_values) > self.num_auto:
                lo, hi = autop.range[:2]
                auto_values = np.arange(lo, hi, (hi-lo)/self.num_auto)
            for auto_val in auto_values:
                kw[auto_param] =[...]

A failed plugin


A different kind of story today: a clever test runner plugin that, in the end, did not do what I had hoped.

At edX, our test suite is large, and split among a number of CI workers. One of the workers was intermittently running out of memory. Something (not sure what) led us to the idea that TestCase objects were holding onto mocks, which themselves held onto their calls' arguments and return values, which could be a considerable amount of memory.

We use nose (but plan to move to pytest Real Soon Now™), and nose holds onto all of the TestCase objects until the very end of the test run. We thought, there's no reason to keep all that data on all those test case objects. If we could scrub the data from those objects, then we would free up that memory.

We batted around a few possibilities, and then I hit on something that seemed like a great idea: a nose plugin that, at the end of a test, would remove data from the test object that hadn't been there before the test started.

Before I get into the details, the key point: when I had this idea, it was a very familiar feeling. I have been here many times before. A problem in some complicated code, and a clever idea of how to attack it. These ideas often don't work out, because the real situation is complicated in ways I don't understand yet. When I had the idea, and mentioned it to my co-worker, I said to him, "This idea is too good to be true. I don't know why it won't work yet, but we're going to find out." (foreshadowing!)

I started hacking on the plugin, which I called blowyournose. (Nose's one last advantage over other test runners is playful plugin names...)

The implementation idea was simple: before a test runs, save the list of the attributes on the test object. When the test ends, delete any attribute that isn't in that list:

from nose.plugins import Plugin

class BlowYourNose(Plugin):
    # `test` is a Nose test object. `test.test` is the
    # actual TestCase object being run.

    def beforeTest(self, test):
        test.byn_attrs = set(dir(test.test))

    def afterTest(self, test):
        obj = test.test
        for attr in dir(obj):
            if attr not in test.byn_attrs:
                delattr(obj, attr)

By the way: a whole separate challenge is how to test something like this. I did it with a class that could report on its continued existence at the end of tests. Naturally, I named that class Booger! If you are interested, the code is in the repo.

At this point, the plugin solved this problem:

class MyLeakyTest(unittest.TestCase):
    def setUp(self):
        self.big_thing = big_thing()

    def test_big_thing(self):
        self.assertEqual(self.big_thing.whatever, 47)

The big_thing attribute will be deleted from the test object once the test is over, freeing the memory it consumed.

The next challenge was tests like this:

@mock.patch('os.listdir')
def test_directory_handling(self, mock_listdir):
    blah blah ...

The patch decorator stores the patches on an attribute of the function, so I updated blowyournose to look for that attribute, and set it to None. This nicely reclaimed the space at the end of the test. But you can see wh[...]
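The core mechanic, recording dir() before the test and deleting anything added afterward, can be shown on any object; here is a minimal standalone sketch (the Widget class and attribute names are illustrative, not from the plugin):

```python
class Widget:
    """Stand-in for a TestCase that accumulates state during a test."""

def scrub_new_attrs(obj, before):
    """Delete any attribute added to `obj` since `before` was recorded."""
    for attr in set(dir(obj)) - before:
        delattr(obj, attr)

w = Widget()
before = set(dir(w))          # like beforeTest
w.big_thing = list(range(100_000))  # state a test might leave behind
scrub_new_attrs(w, before)    # like afterTest
assert not hasattr(w, "big_thing")  # the memory can now be reclaimed
```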

Computing primes with CSS


I've been working on a redesign of this site, so doing more CSS, finally internalizing Sass, etc. During my reading, the nth-child pseudo-class caught my eye. It's oddly specific, providing syntax like "p:nth-child(4n+3)" to select every fourth paragraph, starting with the third. It isn't an arbitrary expression: it has to be of the form An+B, where A and B are integers, possibly negative. An element is selected if it is the An+B'th child of its parent, for some value of n ≥ 0.

It struck me that this is just enough computational power to compute primes with a Sieve of Eratosthenes, so I whipped up a demonstration (see it live here):


The code has only linear sequences of numbers. There are spans for 1 through 999, the candidate numbers. These are arranged so that the number N is the Nth child of their containing div. The CSS has nth-child styles for 2 through 32, the possible factors.

The Sieve will hide numbers that are determined not to be primes with a "display: none" rule. A first-child selector hides 1, which is typical: seems like you always have to treat 1 specially when looking for primes. The other selectors for the display:none rule select the multiples of each number in turn. "nth-child(2n+4)" will hide elements 4, 6, 8, and so on. "nth-child(3n+6)" will hide 6, 9, 12, and so on.

So CSS has two features that together are just enough to implement the Sieve. The nth-child selector accomplishes the marking of factors. The overlapped application of separate rules implements the multiple passes, one for each factor.

Of course, I didn't write this file by hand, I wrote a Python program to do it. It's pretty simple, I won't clog up this post with the whole thing. But, it was my first use of a new feature in Python 3.6: f-strings. The loop that writes the nth-child selectors looks like this:

for i in range(2, 33):
    print(f"span:nth-child({i}n+{2*i}),")

The f"" string has curly-bracketed expressions in it which are evaluated in the current scope. This string in Python 3.6:

f"span:nth-child({i}n+{2*i})"

is equivalent to this in previous Pythons:

"span:nth-child({i}n+{i2})".format(i=i, i2=2*i)

It felt really natural to use this new feature, and really convenient. [...]
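The whole generator can be sketched in a few lines. This is my reconstruction of the idea, not the author's actual script; the limits of 999 spans and factors up to 32 come from the post (32² = 1024 > 999, so larger factors aren't needed):

```python
N = 999  # candidate numbers 1..N

selectors = ["span:first-child"]  # hide 1, which is never prime
for i in range(2, 33):
    # Hide 2i, 3i, 4i, ...: every multiple of i except i itself.
    selectors.append(f"span:nth-child({i}n+{2*i})")

css = ",\n".join(selectors) + " { display: none; }"
spans = "\n".join(f"<span>{n}</span>" for n in range(1, N + 1))
page = f"<style>\n{css}\n</style>\n<div>\n{spans}\n</div>"
```

The spans left visible after the rules apply are exactly the primes up to 999.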

Don't follow me on Instagram


This summer I started taking pictures and posting them on Instagram. It started with a conversation with my son Max, and his assertion that posting more than one picture a day on Instagram was Instaspam. That constraint appealed to me. I like the idea of photography as a way of attending to what I am seeing. So I started trying to look around me to find interesting shots for Instagram posts.

My summer has been different than I expected, so I've had chances to look around places I didn't expect to be. Ironically, thinking about what can go on Instagram can be a way to focus on the here-and-now. You have to see what is immediately around you in order to get a shot.

Normally, thinking about stuff to post on a social network can be the furthest thing from being in the moment. You're thinking about how people will react to your tweet, or who will look at your Facebook status. It's easy to fall into second-guessing what will get the biggest response. You spend time going back to look at what happened to your recent activity.

I have mixed stances toward different social media. I like Twitter, and like having followers. I want my tweets to get widely retweeted. I ignore Facebook, except to find out what my sons are up to. Pinterest and Snapchat might as well not exist. Now I'm putting pictures on Instagram, but not to get followers or tons of likes. The photos have no message, I rarely put any words on them. If I can post a picture I like, and have one other person like it, that's enough.

If you want to follow someone good on Instagram, my brother is an actual photographer who knows what he is doing. Follow him!

Walks in the morning


The summer is wrapping up, and it's been a strange one. On July 4th weekend, we discovered a serious bruise on Nat's chest. We took him to the emergency room to have it properly documented so we could make a formal investigation. The doctor there told us that Nat had a broken rib, and what's more, he had another that had healed perhaps a year ago.

Nat is 26, and has autism. We tried asking him what had happened, but his reports are sketchy, and it's hard to know how accurate they are. We moved him out of his apartment, and back home with us. We ended his day program. He'd had a good experience at a camp in Colorado a few years ago, so we sent him back there, which was expensive, and meant two Colorado trips for us.

The investigation has not come up with any answers. A year ago, he had been acting oddly, very still and reluctant to move. Then, we couldn't figure out why, but now we know: he had a broken rib.

We've found a new day program for Nat which seems really good. It starts full-time on Monday. During the last month, we've been cobbling together things for Nat to do during the day. He has a lot of energy and likes walking, so I've switched my exercise from swimming to doing early-morning walks with Nat before work.

Parenting is not easy. No matter what kind of child(ren) you have, there are challenges. You have to understand their needs, decide what you want for them, and try to make a match. You have to include them in the many forces that shape your days and your life.

This summer has been a challenge that way, figuring out how to fit this complicated man into our day. The walks have been something Nat and I do together, one of the few things we both enjoy. I'll be glad to be back to my swimming routine, but I'm also glad to have had this expansion of our walking together, something that used to only happen on weekends.


We still have to find a place for Nat to live, and we have to make sure the day program takes hold in a good way. I know this is not the last time Nat will need our energy, worry, and attention, and I know we won't always know when those times are coming. This is what it means to be his parent. He needs us to plan and guide his life.

And he needs to walk in the morning.

Lists vs. Tuples


A common beginner Python question: what's the difference between a list and a tuple?

The answer is that there are two different differences, with complex interplay between the two. There is the Technical Difference, and the Cultural Difference.

First, the things that are the same: both lists and tuples are containers, a sequence of objects:

>>> my_list = [1, 2, 3]
>>> type(my_list)
<class 'list'>
>>> my_tuple = (1, 2, 3)
>>> type(my_tuple)
<class 'tuple'>

Either can have elements of any type, even within a single sequence. Both maintain the order of the elements (unlike sets and dicts).

Now for the differences. The Technical Difference between lists and tuples is that lists are mutable (can be changed) and tuples are immutable (cannot be changed). This is the only distinction that the Python language makes between them:

>>> my_list[1] = "two"
>>> my_list
[1, 'two', 3]
>>> my_tuple[1] = "two"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment

That's the only technical difference between lists and tuples, though it manifests in a few ways. For example, lists have a .append() method to add more elements to the list, while tuples do not:

>>> my_list.append("four")
>>> my_list
[1, 'two', 3, 'four']
>>> my_tuple.append("four")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'tuple' object has no attribute 'append'

Tuples have no need for an .append() method, because you can't modify tuples.

The Cultural Difference is about how lists and tuples are actually used: lists are used where you have a homogeneous sequence of unknown length; tuples are used where you know the number of elements in advance, because the position of each element is semantically significant.

For example, suppose you have a function that looks in a directory for files ending with *.py. It should return a list, because you don't know how many you will find, and all of them are the same semantically: just another file that you found:

>>> find_files("*.py")
['file1.py', 'file2.py', 'file3.py', 'file4.py']

On the other hand, let's say you need to store five values to represent the location of weather observation stations: id, city, state, latitude, and longitude. A tuple is right for this, rather than a list:

>>> denver = (44, "Denver", "CO", 40, 105)
>>> denver[1]
'Denver'

(For the moment, let's not talk about using a class for this.) Here the first element is the id, the second element is the city, and so on. The position determines the meaning.

To put the Cultural Difference in terms of the C language, lists are like arrays, tuples are like structs.

Python has a namedtuple facility that can make the meaning more explicit:

>>> from collections import namedtuple
>>> Station = namedtuple("Station", "id, city, state, lat, long")
>>> denver = Station(44, "Denver", [...]
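The post is cut off mid-example; the namedtuple usage it is heading toward presumably looks something like this, reusing the station values from the plain tuple above:

```python
from collections import namedtuple

Station = namedtuple("Station", "id, city, state, lat, long")
denver = Station(44, "Denver", "CO", 40, 105)

# Elements are still positional, like a plain tuple...
assert denver[1] == "Denver"
# ...but can also be read by name, which makes the meaning explicit.
assert denver.city == "Denver"
assert denver.lat == 40
```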

Breaking out of two loops


A common question is, how do I break out of two nested loops at once? For example, how can I examine pairs of characters in a string, stopping when I find an equal pair? The classic way to do this is to write two nested loops that iterate over the indexes of the string:

s = "a string to examine"
for i in range(len(s)):
    for j in range(i+1, len(s)):
        if s[i] == s[j]:
            answer = (i, j)
            break   # How to break twice???

Here we are using two loops to generate the two indexes that we want to examine. When we find the condition we're looking for, we want to end both loops.

There are a few common answers to this. But I don't like them much:

  • Put the loops into a function, and return from the function to break the loops. This is unsatisfying because the loops might not be a natural place to refactor into a new function, and maybe you need access to other locals during the loops.
  • Raise an exception and catch it outside the double loop. This is using exceptions as a form of goto. There's no exceptional condition here, you're just taking advantage of exceptions' action at a distance.
  • Use boolean variables to note that the loop is done, and check the variable in the outer loop to execute a second break. This is a low-tech solution, and may be right for some cases, but is mostly just extra noise and bookkeeping.

My preferred answer, and one that I covered in my PyCon 2013 talk, Loop Like A Native, is to make the double loop into a single loop, and then just use a simple break. This requires putting a little more work into the loops, but is a good exercise in abstracting your iteration. This is something Python is very good at, but it is easy to use Python as if it were a less capable language, and not take advantage of the loop abstractions available.

Let's consider the problem again. Is this really two loops? Before you write any code, listen to the English description again:

How can I examine pairs of characters in a string, stopping when I find an equal pair?

I don't hear two loops in that description. There's a single loop, over pairs. So let's write it that way:

def unique_pairs(n):
    """Produce pairs of indexes in range(n)."""
    for i in range(n):
        for j in range(i+1, n):
            yield i, j

s = "a string to examine"
for i, j in unique_pairs(len(s)):
    if s[i] == s[j]:
        answer = (i, j)
        break

Here we've written a generator to produce the pairs of indexes we need. Now our loop is a single loop over pairs, rather than a double loop over indexes. The double loop is still there, but abstracted away inside the unique_pairs generator.

This makes our code nicely match our English. And notice we no longer have to write len(s) twice, another sign that the original code wanted refactoring. The unique_pairs generator can be reused if we find other places we want to iterate like this, though remember that multiple uses is not a requirement for writing a fun[...]

Coverage.py 4.2
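The standard library can produce the same pairs of indexes; here is a sketch of the loop using itertools.combinations in place of a hand-written generator:

```python
from itertools import combinations

s = "a string to examine"
answer = None
# combinations(range(n), 2) yields exactly the (i, j) pairs with i < j,
# in the same order as unique_pairs(n).
for i, j in combinations(range(len(s)), 2):
    if s[i] == s[j]:
        answer = (i, j)
        break
```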

Coverage.py 4.2 is done.

As I mentioned in the beta 1 announcement, this contains work from the sprint at PyCon 2016 in Portland.

The biggest change since 4.1 is the only incompatible change. The "coverage combine" command now will ignore an existing .coverage data file, rather than appending to it as it used to do. This new behavior makes more sense to people, and matches how "coverage run" works. If you've ever seen (or written!) a tox.ini file with an explicit coverage-clean step, you won't have to any more. There's also a new "--append" option on "coverage combine", so you can get the old behavior if you want it.
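For example, a tox.ini that used to need an explicit clean step before combining can now drop it. This fragment is illustrative, not from the release notes:

```ini
[testenv]
deps =
    pytest
    coverage
commands =
    coverage run -p -m pytest
    coverage combine
    coverage report
```

Under 4.2, "coverage combine" starts fresh each time; "coverage combine --append" restores the old merging behavior.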

The multiprocessing support continues to get the polish it deserves:

  • Now the concurrency option can be multi-valued, so you can measure programs that use multiprocessing and another library like gevent.
  • Options on the command line weren't being passed to multiprocessing subprocesses. Now they still aren't, but instead of failing silently, you'll get an error explaining the situation.
  • If you're using a custom-named configuration file, multiprocessing processes now will use that same file, so that all the processes will be measured the same.
  • Enabling multiprocessing support now also enables parallel measurement, since there will be subprocesses. This reduces the possibility for configuration error.

Finally, the text report can be sorted by columns as you wish, making it more convenient.

The complete change history is in the source.