


Planet Python - http://planetpython.org/



 



Carl Chenet: Cryptocurrencies On the New Social Network Mastodon

Thu, 27 Apr 2017 13:50:22 +0000

As you may know, the new Twitter-like social network Mastodon is growing rapidly, with great features like a Tweetdeck-inspired UI and a decentralized architecture that provides resilience against censorship. But what about cryptocurrencies on Mastodon? My regular readers know cryptocurrencies are one of the topics I care about on this blog, with at least 6 articles already published about Bitcoin, Ethereum, Monero and others. So I was eager to find accounts dedicated to cryptocurrencies on Mastodon. Here are the ones I found:

  • Bitcoin (#btc) account on Mastodon -> publishes the value of Bitcoin in euros and dollars, from various sources (at least Kraken, Poloniex and Bitstamp). Also forwards lots of articles about Bitcoin (Reddit /r/Bitcoin and others).
  • Ethereum (#eth) account on Mastodon -> provides the value of Ether, at least from Kraken and Poloniex. Forwards the official Ethereum blog, Reddit /r/Ethereum and Reddit /r/Eth (and, I guess, other sources too).
  • Monero (#xmr) account on Mastodon -> lots of content about the latest Monero news. Also follows the market value of Monero on Kraken and Poloniex, at least.
  • Duniter -> Duniter is a project to develop a set of tools to produce libre currencies, with today the first implemented libre currency: Ğ1. Follow this account for more information about libre currencies and Ğ1.

That’s only a start. The Mastodon network has just started to grow. It’s only the beginning for this network, and especially for cryptocurrencies on it. I have found only a few accounts so far, which I’m sharing with you today. Don’t hesitate to add to this list in the comments below, so that everyone can benefit from it and enjoy finding information about cryptocurrencies on the Mastodon network. … and finally: you can support my work on Free Software and Open Source projects by donating anything through Liberapay (also possible with cryptocurrencies). That’s a big motivation factor [...]



Python Software Foundation: Brian Costlow, “the quietly amazing rock” Python volunteer: Community Service Award Recipient

Thu, 27 Apr 2017 10:30:08 +0000

Brian began volunteering unofficially when he threw out a pile of pizza boxes. He was at PyOhio in 2009 when, as Brian recalls, “Lunch was delivered pizza and the organizers had to clean up after the conference. I felt bad that I had attended a great event, entirely free, so I started to help them sweep the floors and throw away the pizza boxes." The next two years Brian worked the conference as a volunteer. In 2012 he joined the organizing committee and he went on to chair the conference in 2014, 2015, and 2016. The Python Software Foundation is pleased to present Brian Costlow with the 2017 Quarter 1 Community Service Award for:

RESOLVED, that the Python Software Foundation award the 2017 Q1 Community Service Award to Brian Costlow for his work organizing PyOhio, chairing PyOhio, and for being the head volunteer for PyCon US captioning.

A Jack-Of-All-Trades PyOhio Organizer

“I was working in IT for a large printing and media company when I started to use Python for a number of projects. I joined the Ohio Python Mailing List and when the planning for the first PyOhio began, I made plans to attend,” Brian recalls. Regrettably Brian was unable to attend the inaugural PyOhio conference. He did, however, attend the second PyOhio, where his volunteerism began with the simple act of clearing away empty pizza boxes. Since 2010 Brian has worked alongside PyOhio organizers like Catherine Devlin and Eric Floehr, the founder of the Central Ohio Python User Group, chairing the conference, recruiting workshops, and seeking new speakers.

Katie Cunningham, one of the leaders of the Young Coders project, first met Brian when he invited Young Coders to join PyOhio. During their first year Young Coders encountered a few technical hiccups. Cunningham recalls, “Brian wasn’t fazed. He helped keep the room together." She adds that “he is one of those quietly amazing people who quickly goes from someone you know to someone you couldn't do without. In an industry rife with people who tend to be a bit flighty, he's a rock”.

On a personal note, I first attended PyOhio in 2015 as both a first-time speaker and attendee. I was terrified. Following my talk I had the pleasure of meeting Brian. He was excited to hear about my experience as a novice speaker and we continued our discussion after the conference. Brian has been continuously interested in learning how to recruit more women and underrepresented individuals to both attend and speak at PyOhio. What makes PyOhio unique as a place to begin one’s speaking or Python career is the simple fact that PyOhio is free to attend. According to Brian, PyOhio never deliberately set out to be accessible, “it just happened organically because the Python community really is a community, and we all wanted to give back to the community that gave so much to us”.

Continuing to Give Back at PyCon through Captioning

Brandon Rhodes, PyCon 2016 and 2017 chair, reached out to Brian a year early, in 2015, seeking assistance with captioning. Brian, mindful of Rhodes's support for PyOhio, says, “Brandon has always been a great friend to PyOhio. So when he was selected as chair for PyCon 2016 and 2017, I reached out and said if there was anything I could help him with, I would gladly do it." Captioning was a remote volunteer position in 2015. “Brian first helped out in Montreal and then took the lead in 2016 at Portland. Every year Brian takes part in the process, takes careful observations, and notes what works and what doesn’t,” Ewa Jodlowska explains.
Given some of Brian’s feedback and other lessons learned in 2015, the PyCon organizing team opted to turn the captioning position into a staff position. “We learn lots year after year and make several improvements thanks to Brian's involvement,” Jodlowska adds. Brian continues to work with the captioning staff team. He says, “If someone wants to get involved, just reach out to Ewa or me, we're always open to suggestions for improvement!" Brian Costlow, by his willingness to work behind [...]



PyCon: Python 1994: Recollections from the First Conference

Thu, 27 Apr 2017 08:34:51 +0000

We are happy to announce PyCon 2017’s Sunday morning plenary event — the final day of this year’s main conference will feature Guido van Rossum on a panel of Python programmers who attended the first-ever Python conference back in 1994! Paul Everitt will moderate the panel as they answer questions and share their memories about that first Python conference when the programming language was still young.

At the beginning of 1994, the World Wide Web consisted of less than 1,000 sites. There was no distributed version control. No public issue trackers. Programmers communicated their ideas, issues, and patches in plain text on mailing lists and Usenet newsgroups. The small community of Python programmers was connected through both a mailing list and the comp.lang.python newsgroup, which was busy enough that several new messages were appearing each day.

An exciting announcement blazed out to subscribers of the Python mailing list in September 1994: Guido van Rossum, the Dutch researcher who had invented Python, was going to visit the United States! An impromptu Python workshop was quickly organized for the beginning of November where Python programmers could for the first time meet each other in person.

Attendees of the first Python conference, in a tiny and highly artifacted JPEG that was typical of the era

Of the small group who gathered at NIST over November 1–3, 1994, several will be on stage to share both the triumphs and the mistakes of those early years. The panel is currently slated to include:

  • Guido van Rossum
  • Paul Everitt (moderator)
  • Barry Warsaw
  • Jim Fulton

There is one way that you in the Python community can go ahead and start helping us prepare for the panel:

We need your questions!

You can go ahead and suggest questions by tweeting them with the hashtag #nist1994. The panel will curate your tweeted questions along with questions that they solicit elsewhere, and will have their favorites ready for the session at PyCon.

Thanks to Paul Everitt for organizing the panel, which will aim to spur not only nostalgia for a lost era but lessons, warnings, and inspirations for future generations of Python developers!




DataCamp: Scikit-Learn Cheat Sheet: Python Machine Learning

Thu, 27 Apr 2017 08:02:21 +0000

Most of you who are learning data science with Python will have definitely heard already about scikit-learn, the open source Python library that implements a wide variety of machine learning, preprocessing, cross-validation and visualization algorithms with the help of a unified interface. If you're still quite new to the field, you should be aware that machine learning, and thus also this Python library, belongs to the must-knows for every aspiring data scientist.

That's why DataCamp has created a scikit-learn cheat sheet for those of you who have already started learning about the Python package, but who still want a handy reference sheet. Or, if you still have no idea how scikit-learn works, this machine learning cheat sheet might come in handy to get a quick first idea of the basics that you need to know to get started. Either way, we're sure that you're going to find it useful when you're tackling machine learning problems!

This scikit-learn cheat sheet will introduce you to the basic steps that you need to go through to implement machine learning algorithms successfully: you'll see how to load in your data, how to preprocess it, how to create your own model to which you can fit your data and predict target labels, how to validate your model and how to tune it further to improve its performance. In short, this cheat sheet will kickstart your data science projects: with the help of code examples, you'll have created, validated and tuned your machine learning models in no time.

So what are you waiting for? Time to get started! (Click above to download a printable version or read the online version below.)

Python For Data Science Cheat Sheet: Scikit-learn

Scikit-learn is an open source Python library that implements a range of machine learning, preprocessing, cross-validation and visualization algorithms using a unified interface.

A Basic Example

>>> from sklearn import neighbors, datasets, preprocessing
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import accuracy_score
>>> iris = datasets.load_iris()
>>> X, y = iris.data[:, :2], iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33)
>>> scaler = preprocessing.StandardScaler().fit(X_train)
>>> X_train = scaler.transform(X_train)
>>> X_test = scaler.transform(X_test)
>>> knn = neighbors.KNeighborsClassifier(n_neighbors=5)
>>> knn.fit(X_train, y_train)
>>> y_pred = knn.predict(X_test)
>>> accuracy_score(y_test, y_pred)

Loading The Data

Your data needs to be numeric and stored as NumPy arrays or SciPy sparse matrices. Other types that are convertible to numeric arrays, such as Pandas DataFrames, are also acceptable.

>>> import numpy as np
>>> X = np.random.random((10,5))
>>> y = np.array(['M','M','F','F','M','F','M','M','F','F'])
>>> X[X < 0.7] = 0

Preprocessing The Data

Standardization

>>> from sklearn.preprocessing import StandardScaler
>>> scaler = StandardScaler().fit(X_train)
>>> standardized_X = scaler.transform(X_train)
>>> standardized_X_test = scaler.transform(X_test)

Normalization

>>> from sklearn.preprocessing import Normalizer
>>> scaler = Normalizer().fit(X_train)
>>> normalized_X = scaler.transform(X_train)
>>> normalized_X_test = scaler.transform(X_test)

Binarization

>>> from sklearn.preprocessing import Binarizer
>>> binarizer = Binarizer(threshold=0.0).fit(X)
>>> binary_X = binarizer.transform(X)

Encoding Categorical Features

>>> from sklearn.preprocessing import LabelEncoder
>>> enc = LabelEncoder()
>>> y = enc.fit_transform(y)

Imp[...]
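The excerpt cuts off at "Imp[...]", presumably the imputation of missing values. A hedged sketch of what that step could look like, using the Imputer class from 2017-era scikit-learn (later versions replace it with SimpleImputer):

>>> from sklearn.preprocessing import Imputer
>>> imp = Imputer(missing_values=0, strategy='mean', axis=0)  # fill zeros with column means
>>> imputed_X = imp.fit_transform(X_train)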



Ruslan Spivak: Let’s Build A Simple Interpreter. Part 13: Semantic Analysis.

Thu, 27 Apr 2017 04:41:58 +0000

Anything worth doing is worth overdoing.

Before doing a deep dive into the topic of scopes, I’d like to make a “quick” detour and talk in more detail about symbols, symbol tables, and semantic analysis. In the spirit of “Anything worth doing is worth overdoing”, I hope you’ll find the material useful for building a more solid foundation before tackling nested scopes. Today we will continue to increase our knowledge of how to write interpreters and compilers. You will see that some of the material covered in this article is a much more extended version of what you saw in Part 11, where we discussed symbols and symbol tables. In the final four articles of the series we’ll discuss the remaining bits and pieces. Below you can see the major topics we will cover and the timeline. Okay, let’s get started!

Introduction to semantic analysis

While our Pascal program can be grammatically correct and the parser can successfully build an abstract syntax tree, the program can still contain some pretty serious errors. To catch those errors we need to use the abstract syntax tree and the information from the symbol table. Why can’t we check for those errors during parsing, that is, during syntax analysis? Why do we have to build an AST and something called the symbol table to do that? In a nutshell: for convenience and the separation of concerns. By moving those extra checks into a separate phase, we can focus on one task at a time without making our parser and interpreter do more work than they are supposed to do. When the parser has finished building the AST, we know that the program is grammatically correct, that is, that its syntax is correct according to our grammar rules, and now we can separately focus on checking for errors that require additional context and information that the parser did not have at the time of building the AST.

To make it more concrete, let’s take a look at the following Pascal assignment statement:

x := x + y;

The parser will handle it all right because, grammatically, the statement is correct (according to our previously defined grammar rules for assignment statements and expressions). But that’s not the end of the story, because Pascal has a requirement that variables must be declared with their corresponding types before they are used. How does the parser know whether x and y have been declared yet? Well, it doesn’t, and that’s why we need a separate semantic analysis phase to answer the question (among many others) of whether the variables have been declared prior to their use.

What is semantic analysis? Basically, it’s just a process to help us determine whether a program makes sense, and that it has meaning, according to a language definition. What does it even mean for a program to make sense? It depends in large part on the language definition and the language's requirements. The Pascal language and, specifically, the Free Pascal compiler have certain requirements that, if not followed in a program, would lead to an error from the fpc compiler indicating that the program doesn’t “make sense”, that it is incorrect, even though its syntax might look okay.

Here are some of those requirements (the first two are sketched in code below):

  • Variables must be declared before they are used.
  • Variables must have matching types when used in arithmetic expressions (this is a big part of semantic analysis called type checking that we’ll cover separately).
  • There should be no duplicate declarations (Pascal prohibits, for example, having a local variable in a procedure with the same name as one of the procedure’s formal parameters).
  • A name reference in a call to a procedure must refer to the actual declared procedure (it doesn’t make sense in Pascal if, in the procedure call foo(), the name foo refers to a variable foo of a primit[...]
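To make the first two requirements concrete, here is a toy sketch of our own (not the article's actual analyzer) showing how a symbol table catches both an undeclared variable and a duplicate declaration:

# Hypothetical, simplified semantic checks backed by a symbol table (a dict).
symtab = {}

def declare(name, type_name):
    # Requirement: no duplicate declarations.
    if name in symtab:
        raise NameError('Duplicate identifier: %s' % name)
    symtab[name] = type_name

def lookup(name):
    # Requirement: variables must be declared before they are used.
    if name not in symtab:
        raise NameError('Symbol(identifier) not found: %s' % name)
    return symtab[name]

declare('x', 'INTEGER')
declare('y', 'INTEGER')
# Analyzing "x := x + y;" means looking up every variable it references:
for var in ('x', 'x', 'y'):
    lookup(var)
lookup('z')  # raises NameError, just as fpc would flag an unknown identifier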



Vasudev Ram: Using nested conditional expressions to classify characters

Thu, 27 Apr 2017 00:37:50 +0000

By Vasudev Ram

While writing some Python code, I happened to use a conditional expression, a Python language feature. Conditional expressions are expressions (not statements) that have if/else clauses inside them, and they evaluate to one of two values (in the basic case), depending on the value of a boolean condition. For example:

for n in range(4):
    print n, 'is odd' if n % 2 == 1 else 'is even'

0 is even
1 is odd
2 is even
3 is odd

Here, the conditional expression is this part of the print statement above:

'is odd' if n % 2 == 1 else 'is even'

This expression evaluates to 'is odd' if the condition after the if is True, and evaluates to 'is even' otherwise. So it evaluates to a string in either case, and that string gets printed (after the value of n).

Excerpt from the section about conditional expressions in the Python Language Reference:

[conditional_expression ::= or_test ["if" or_test "else" expression]
expression ::= conditional_expression | lambda_expr
Conditional expressions (sometimes called a “ternary operator”) have the lowest priority of all Python operations.
The expression x if C else y first evaluates the condition, C (not x); if C is true, x is evaluated and its value is returned; otherwise, y is evaluated and its value is returned.]

You can see that the definition of conditional_expression is recursive, since it is partly defined in terms of itself (via the definition of expression). This implies that you can have recursive or nested conditional expressions. Also, since the syntax of the Python return statement is:

return [ expression_list ]

(where expression_list means one or more expressions, separated by commas), it follows that we can use a nested conditional expression in a return statement (because a nested conditional expression is an expression). Here is a small program to demonstrate that:

'''
File: return_with_nested_cond_exprs.py
Purpose: Demonstrate nested conditional expressions used in a return
statement, to classify letters in a string as lowercase, uppercase or neither.
Also demonstrates doing the same task without a function and a return,
using a lambda and map instead.
Author: Vasudev Ram
Copyright 2017 Vasudev Ram
Web site: https://vasudevram.github.io
Blog: https://jugad2.blogspot.com
'''

from __future__ import print_function
from string import lowercase, uppercase

# Use return with nested conditional expressions inside a function,
# to classify characters in a string as lowercase, uppercase or neither:
def classify_char(ch):
    return ch + ': ' + ('lowercase' if ch in lowercase else \
        'uppercase' if ch in uppercase else 'neither')

print("Classify using a function:")
for ch in 'AaBbCc12+-':
    print(classify_char(ch))
print()

# Do it using map and lambda instead of def and for:
print("Classify using map and lambda:")
print('\n'.join(map(lambda ch: ch + ': ' + ('lowercase' if ch in lowercase
    else 'uppercase' if ch in uppercase else 'neither'), 'AaBbCc12+-')))

Running it with:

$ python return_with_nested_cond_exprs.py

gives this output:

Classify using a function:
A: uppercase
a: lowercase
B: uppercase
b: lowercase
C: uppercase
c: lowercase
1: neither
2: neither
+: neither
-: neither

Classify using map and lambda:
A: uppercase
a: lowercase
B: uppercase
b: lowercase
C: uppercase
c: lowercase
1: neither
2: neither
+: neither
-: neither

As you can see from the code and the output, I also used that same nested conditional expression in a lambda function, along with map, to do the same task in a more functional style.

- Vasudev Ram - Online Python training and consulting

Get updates (via Gumroad) on my forthcoming apps and content. Jump to posts: Python * DLang * xtopdf. Subscribe to my blog by email. My ActiveState Code recipes. Follow me on: LinkedIn * Twitter [...]



Daniel Bader: Let’s Program with Python: Functions and Lists (Part 2)

Thu, 27 Apr 2017 00:00:00 +0000

Let’s Program with Python: Functions and Lists (Part 2)

In part two of this four-part Python introduction you’ll see how to write reusable “code building blocks” in your Python programs with functions. In this guest post series by Doug Farrell you’ll learn the basics of programming with Python from scratch. If you’ve never programmed before or need a fun little class to work through with your kids, you’re welcome to follow along. Looking for the rest of the series? Here you go: Let’s Program with Python: Statements, Variables, and Loops (Part 1)

Table of Contents – Part 2:

  • Programmers Are Lazy
  • Introduction to Functions
  • New Turtle Drawing Functions
  • Drawing With Multiple Turtles
  • Grouping Things With Lists
  • Conclusion

Programmers Are Lazy

We mentioned this in the last class, but if you’re going to be a programmer, you have to embrace basic laziness. Programmers don’t like to repeat themselves and always look for ways to write less code rather than more to get the same things done. In our last class we saw how using a for loop could reduce the amount of code we had to write to draw a flower. We used a loop to repeat drawing the “petals” of our flower so we didn’t have to write code for every one. Let’s learn about another tool we can put in our programmer’s toolbelt, called functions.

Introduction to Functions

Functions allow us to use the same set of Python statements over and over again, and even change what the Python code does without having to change the code. We’ve already used functions in the previous session in our turtle program. We used the range() function as part of a for loop. The range() function is built into Python, but what does it do? It generates a range of numbers we can use inside a for loop, as simple as that. Let’s start Idle, get into interactive mode and enter this at the Python command prompt:

>>> range(10)
range(0, 10)

The range(10) function created something that will generate a count from 0 to 9 (that’s 10 numbers in total). Notice we told the range() function how big the range we wanted was by passing 10 as the parameter of the function. Using this in a for loop shows the values generated by range(10):

>>> for x in range(10):
...     print(x)
0
1
2
3
4
5
6
7
8
9

What we’ve done is create a for loop that’s going to assign the range of values generated, one at a time, to the variable x. Then inside the loop we’re just printing the latest value of x. You’ll notice that the value of x goes from 0 to 9, not 10 as you might expect. There are still ten values, but because Python is zero based (starts things at zero, unless told otherwise), the range(10) function goes from 0 → 9.

In our flower drawing turtle program we called range() like this:

>>> range(36)
range(0, 36)

This generated a range of 36 values, from 0 to 35. These two examples demonstrate that we are changing what the range() function does based on the value we give it. The value we give to the range() function is called a parameter, and the value of that parameter is used to change what the range() function does. In the examples above the parameter tells the range() function how many numbers to generate and gives back to our program a way to use them.

We’ve also used functions when we were working with our turtle. For example when I changed the color of my turtle t with the color() function, like this:

>>> t.color("yellow", "red")

I was calling the color() function of the turtle variable t, and passed it two parameters, "yellow" and "red":

  • The "yellow" parameter changed the color of the t turtle and the color it draws with.
  • The "red" parameter changed the color the turtle used when filling a shape.

Flower Drawing Using Functions

Okay, so it’s great that Python provides a bunch of functions we can use to do different things, but how do functions help me be lazy? Well, Python also lets us crea[...]
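The excerpt breaks off just as the article turns to writing your own functions. As a hedged sketch of ours (not necessarily Doug's code), a flower-drawing function could look like this, with the petal count as a parameter, just like the argument we passed to range():

import turtle

def draw_flower(t, petals):
    # Draw one circular petal, then rotate before drawing the next one.
    for _ in range(petals):
        t.circle(50)
        t.left(360 / petals)

t = turtle.Turtle()
t.color("yellow", "red")
draw_flower(t, 36)   # same petal count as the range(36) loop above
turtle.done()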



DataCamp: Pandas Cheat Sheet for Data Science in Python

Wed, 26 Apr 2017 14:48:21 +0000

The Pandas library is one of the most preferred tools for data scientists to do data manipulation and analysis, next to matplotlib for data visualization and NumPy, the fundamental library for scientific computing in Python on which Pandas was built. The fast, flexible, and expressive Pandas data structures are designed to make real-world data analysis significantly easier, but this might not be immediately the case for those who are just getting started with it, exactly because there is so much functionality built into this package that the options are overwhelming.

That's where this Pandas cheat sheet might come in handy. It's a quick guide through the basics of Pandas that you will need to get started on wrangling your data with Python. As such, you can use it as a handy reference if you are just beginning your data science journey with Pandas or, if you haven't started yet, as a guide to make it easier to learn about and use it.

The Pandas cheat sheet will guide you through the basics of the Pandas library, going from the data structures to I/O, selection, dropping indices or columns, sorting and ranking, and retrieving basic information of the data structures you're working with, to applying functions and data alignment. In short, everything that you need to kickstart your data science learning with Python!

Do you want to learn more? Start the Intermediate Python For Data Science course for free now or try out our Pandas DataFrame tutorial! Also, don't miss out on our Bokeh cheat sheet for data visualization in Python and our Python cheat sheet for data science. (Click above to download a printable version or read the online version below.)

Python For Data Science Cheat Sheet: Pandas Basics

Use the following import convention:

>>> import pandas as pd

Pandas Data Structures

Series: a one-dimensional labeled array capable of holding any data type.

>>> s = pd.Series([3, -5, 7, 4], index=['a', 'b', 'c', 'd'])
a    3
b   -5
c    7
d    4

DataFrame: a two-dimensional labeled data structure with columns of potentially different types.

>>> data = {'Country': ['Belgium', 'India', 'Brazil'],
...         'Capital': ['Brussels', 'New Delhi', 'Brasilia'],
...         'Population': [11190846, 1303171035, 207847528]}
>>> df = pd.DataFrame(data, columns=['Country', 'Capital', 'Population'])
     Country    Capital  Population
0    Belgium   Brussels    11190846
1      India  New Delhi  1303171035
2     Brazil   Brasilia   207847528

Please note that the first column (0, 1, 2) is the index, and Country, Capital, Population are the columns.

Asking For Help

>>> help(pd.Series.loc)

I/O

Read and Write to CSV

>>> pd.read_csv('file.csv', header=None, nrows=5)
>>> df.to_csv('myDataFrame.csv')

Read and Write to Excel

>>> pd.read_excel('file.xlsx')
>>> df.to_excel('dir/myDataFrame.xlsx', sheet_name='Sheet1')

Read multiple sheets from the same file:

>>> xlsx = pd.ExcelFile('file.xls')
>>> df = pd.read_excel(xlsx, 'Sheet1')

Read and Write to SQL Query or Database Table (read_sql() is a convenience wrapper around read_sql_table() and read_sql_query()):

>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite:///:memory:')
>>> pd.read_sql('SELECT * FROM my_table;', engine)
>>> pd.read_sql_table('my_table', engine)
>>> pd.read_sql_query('SELECT * FROM my_table;', engine)
>>> df.to_sql('myDf', engine)

Selection

Getting

Get one element:

>>> s['b']
-5

Get a subset of a DataFrame:

>>> df[1:]
  Country    Capital  Population
1   India [...]
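The excerpt truncates in the middle of the Selection section. A hedged sketch of the position- and label-based lookups the full cheat sheet goes on to cover, continuing with the s and df objects defined above:

>>> df.iloc[0, 0]           # select a single value by row/column position
'Belgium'
>>> df.loc[0, 'Country']    # select a single value by row index and column label
'Belgium'
>>> s[~(s > 1)]             # boolean indexing: keep values where the condition is False
b   -5
dtype: int64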



Mike Driscoll: Free Python Resources

Wed, 26 Apr 2017 13:54:45 +0000

There are lots of free resources for learning Python available now. I wrote about some of them way back in 2013, but there are even more now than there were then! In this article, I want to share these resources with you. If I missed anything that you have found helpful, feel free to link to it in the comments.

Blogs and Websites

When I am learning Python, one of the first places I turn to is the official Python documentation:

  • Python 2 documentation (note: Python 2's end of life is 2020)
  • Python 3 documentation

There are also lots of other pieces of documentation that can be found on the Python website. Doug Hellmann has been producing a series called Python Module of the Week (PyMOTW) for years. He now has a version of the series for Python 3 as well. Here are the links:

  • PyMOTW-2 (Python 2)
  • PyMOTW-3 (Python 3)

There are two interesting “Hitchhiker” websites on Python, but I don’t think they’re related except by name:

  • The Hitchhiker’s Guide to Python
  • The Hitchhiker’s Guide to Packaging – learn how to package up your code and distribute it!

DataCamp has lots of good free and paid content on Python for Data Science. They also have a neat blog with Python content. If you are into reading blogs, then Planet Python is for you. Planet Python is basically an RSS aggregator of dozens of Python blogs. You can see links to each blog that is aggregated on the left side of the page.

Free Python Books

Mark Pilgrim’s books have been online for over a decade. He created two versions of Dive Into Python, one for Python 2 and the other for Python 3. I’m just going to link to Python 3 here. Al Sweigart has been putting out Python books for quite a while as well. His latest Python book is Automate the Boring Stuff with Python. It’s a fun book and well worth checking out. You can see his other books over on his website. They are all available for free, but you can purchase them too. WikiBooks has a Python 3 book called Non-Programmer’s Tutorial for Python 3 that is still recommended. While I’ve never read it, I have heard that Full Stack Python is good. If you’d like to learn Test-Driven Development, there’s a book for that too over on Obey the Testing Goat. I will note that this book is heavily focused on web programming with Python and how to test that, so keep that in mind. There is a neat online book called Program Arcade Games with Python and Pygame that is available for free in multiple languages, which is something that the previous books just don’t offer. Finally I thought I would mention my own book, Python 101, which is available for free or pay-what-you-want over on Leanpub.

Wrapping Up

There are tons of other free resources and books on Python too. These are just an overview. If you happen to get stuck in any of these books or resources, then you will be happy to know that there are a lot of helpful people on the following websites who will answer your questions:

  • StackOverflow
  • Reddit’s r/learnpython subreddit
  • comp.lang.python on Google Groups

Have fun learning Python. It’s a great language! [...]



PyCharm: Webinar Recording: Visual Debugging

Wed, 26 Apr 2017 13:52:27 +0000

Our April 25th webinar on Visual Debugging went quite well: it ran just over an hour, was a lot of fun, and covered nearly all of the topics. The recording is now available on YouTube. Additionally, the code and slides used are available in a GitHub repository.

“Visually Debugging” is a theme that we plan to touch on repeatedly during 2017. For example, we have tutorial proposals for DjangoCon and EuroPython that expand on the topics in this webinar, conducted in a hands-on, 3-hour format.

About Visual Debugging

PyCharm puts a visual face on debugging and we’d like to help more developers take the plunge and debug. In this webinar, Paul Everitt, PyCharm Developer Advocate, introduced the debugger and went through many of the essentials in the context of writing a 2d game.

This webinar was aimed at developers who have never used the debugger, but even those familiar with PyCharm’s visual debugging will learn some new tricks and see some deeper usages.

Video Contents

1. (3:05) Debugging without PyCharm’s debugger: print and pdb
2. (6:18) First use of the debugger (and the Cython speedups)
3. (12:18) Interactive use
4. (19:47) Breakpoints
5. (29:16) Stepping
6. (40:43) Watch expressions
7. (44:11) Stack Frames
8. (48:17) Debugging during testing
9. (50:22) Attaching to processes
10. (52:11) Extracting type information
12. (58:52) Extra credit: Debugging JavaScript in Node and Chrome, configure stepping, keyboard shortcuts, show execution points, mute breakpoints, inspect watch value

Keep up with the latest PyCharm news on our blog and follow us on Twitter @PyCharm.




Experienced Django: Deep dive into manage.py

Wed, 26 Apr 2017 13:03:11 +0000

I don’t know about you, but when I started with Django, I wondered about the “manage” script and what it did. But there were too many other things to learn, so I accepted that it worked and moved on, using the recipes I found in examples and documentation. My curiosity has gotten the better of me, so now I’m going to take a break and examine what’s going on under the hood.

Naively, I had assumed that the manage script was some large, static beast with thousands of lines of code. (Too much C and make in my background, I guess.) Of course, this is Python, so manage.py is small, reflective, and quite dynamic. On a brand new project, manage.py weighs in at a whopping 22 lines of code. Now, that’s not really a fair accounting, as all that code is doing is importing a method from a module:

from django.core.management import execute_from_command_line

and then running it:

execute_from_command_line(sys.argv)

The execute_from_command_line function is located in the __init__.py of the django.core.management package in python/site-packages in your virtualenv (you are using virtualenv, right?). This function does several bookkeeping things (managing command line args, deciding if it needs to do “help”, etc.) and then calls the get_commands function. This is a very elegant function (the comment block explaining it is much longer than the code) that creates a { command_name : app_name } dictionary for all of the commands it can find. It starts by adding all of the django core commands by searching the management/commands directory in the core django directory. It then searches the management/commands subdirectory of each installed app.

It took a little digging, but I managed to figure out a bit more detail here, which I found interesting. Django has a list called django.apps.apps. These are the apps listed in the INSTALLED_APPS list in your settings.py file. An entry is made for each app loaded, with some details about the app. For our purposes here, the data that we care about is the path and the name.

Jumping back to the get_commands function and the __init__.py file in the django.core.management module, you can see at the top of the file that it imports this list of apps: from django.apps import apps. This allows the get_commands function to walk the list of apps, getting the name of each app and the path in which it’s stored. It appends the management/commands subdirectory to the path and adds each Python file there as a command to the { command_name : app_name } dictionary listed above. (Note that I’m skipping some details here.) That’s how it figures it out.

So, to complete the loop, we can see that the apps listed in settings.py

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

show up in the lib/python3.4/site-packages/ directory structure:

django/
├── core
│   ├── management
│   │   ├── commands
│   │   │   ├── check.py
│   │   │   ├── compilemessages.py
│   │   │   ├── createcachetable.py
│   │   │   └── [several commands removed]
│   │   │   ├── runserver.py
│   │   │   └── [several commands removed]
├── contrib
│   ├── auth
│   │   ├── management
│   │   │   ├── commands
│   │   │   │   ├── changepassword.py
│   │   │   │   ├── createsuperuser.py
│   ├── contenttypes
│   │   ├── management
│   │   │   ├── commands
│   │   │   │   └── remove_stal[...]
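As a rough sketch of that discovery step (a simplification of ours, close in spirit to Django's internal helper but not a verbatim copy), listing the commands an app contributes boils down to listing the Python files in its management/commands directory:

import os

def find_commands(management_dir):
    # Each non-underscore .py file in management/commands is one command.
    command_dir = os.path.join(management_dir, 'commands')
    try:
        return [f[:-3] for f in os.listdir(command_dir)
                if f.endswith('.py') and not f.startswith('_')]
    except OSError:
        return []  # this app ships no custom commands

# e.g. find_commands('django/core/management') might return
# ['check', 'compilemessages', 'createcachetable', 'runserver', ...]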



A. Jesse Jiryu Davis: Grok the GIL: Write Fast and Thread-Safe Python

Wed, 26 Apr 2017 11:52:37 +0000

(Cross-posted from Red Hat’s OpenSource.com.)

When I was six years old I had a music box. I’d wind it up, and a ballerina revolved on top of the box while a mechanism inside plinked out “Twinkle Twinkle Little Star.” The thing must have been godawful tacky, but I loved it, and I wanted to know how it worked. Somehow I got it open and was rewarded with the sight of a simple device, a metal cylinder the size of my thumb, studded so that as it rotated, it plucked the teeth of a steel comb and made the notes.

Of all a programmer’s traits, curiosity about how things work is the sine qua non. When I opened my music box to see inside, I showed that I could grow up to be, if not a great programmer, then at least a curious one.

It is odd, then, that for many years I wrote Python programs while holding mistaken notions about the Global Interpreter Lock, because I was never curious enough to look at how it worked. I’ve met others with the same hesitation, and the same ignorance. The time has come for us to pry open the box. Let’s read the CPython interpreter source code and find out exactly what the GIL is, why Python has one, and how it affects your multithreaded programs. I’ll show some examples to help you grok the GIL. You will learn to write fast and thread-safe Python, and how to choose between threads and processes.

(And listen, my dear pedantic reader: for the sake of focus, I only describe CPython here, not Jython, PyPy, or IronPython. If you would like to learn more about those interpreters I encourage you to research them. I restrict this article to the Python implementation that working programmers overwhelmingly use.)

Behold, the Global Interpreter Lock

Here it is:

static PyThread_type_lock interpreter_lock = 0; /* This is the GIL */

This line of code is in ceval.c, in the CPython 2.7 interpreter’s source code. Guido van Rossum’s comment, “This is the GIL,” was added in 2003, but the lock itself dates from his first multithreaded Python interpreter in 1994. On Unix systems, PyThread_type_lock is an alias for the standard C lock, mutex_t. It is initialized when the Python interpreter begins:

void
PyEval_InitThreads(void)
{
    interpreter_lock = PyThread_allocate_lock();
    PyThread_acquire_lock(interpreter_lock);
}

All C code within the interpreter must hold this lock while executing Python. Guido first built Python this way because it is simple, and every attempt to remove the GIL from CPython has cost single-threaded programs too much performance to be worth the gains for multithreading.

The GIL’s effect on the threads in your program is simple enough that you can write the principle on the back of your hand: “One thread runs Python, while N others sleep or await I/O.” Python threads can also wait for a threading.Lock or other synchronization object from the threading module; we shall consider threads in that state to be “sleeping”, too.

When do threads switch? Whenever a thread begins sleeping or awaiting network I/O, there is a chance for another thread to take the GIL and execute some Python code. This is “cooperative multitasking.” CPython also has “preemptive multitasking”: If a thread runs uninterrupted for 1000 bytecode instructions in Python 2, or runs 15 milliseconds in Python 3, then it gives up the GIL and another thread may run. Think of this like “time slicing” in the olden days when we had many threads but one CPU. We shall discuss these two kinds of multitasking in detail.

Think of Python as an old mainframe: many tasks share one CPU.
Cooperative multitasking

When it begins a task, such as network I/O, that is of long or uncertain duration and does n[...]
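The excerpt ends here, but the back-of-the-hand principle above is easy to observe yourself. A toy experiment of our own (not from the article): CPU-bound threads contend for the GIL and run serially, while sleeping threads overlap:

import threading
import time

def cpu_bound(n=5000000):
    while n:          # pure Python bytecode: holds the GIL while it runs
        n -= 1

def io_bound():
    time.sleep(1)     # sleeping releases the GIL, so other threads can run

def timed(target, count):
    threads = [threading.Thread(target=target) for _ in range(count)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

print('1 CPU-bound thread:  %.2fs' % timed(cpu_bound, 1))
print('2 CPU-bound threads: %.2fs' % timed(cpu_bound, 2))  # roughly double: serialized by the GIL
print('2 I/O-bound threads: %.2fs' % timed(io_bound, 2))   # about 1s: the sleeps overlap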



PythonClub - A Brazilian collaborative blog about Python: What the Flask? pt 4 - Extensions for Flask

Wed, 26 Apr 2017 10:41:55 +0000

What The Flask - 4/5

Finally!!! After a long wait, What The Flask is back! The idea was to publish part 4 first (about Blueprints) and only then part 5, about creating extensions. But these two topics are closely intertwined, so this article covers both, and part 5 will be the finale, about deployment!

  • Hello Flask: Introduction to web development with Flask
  • Flask patterns: Structuring Flask applications
  • Plug & Use: essential extensions to start your project
  • Magic(app): Creating extensions for Flask (<-- You are here)
  • Run Flask Run: deploying your app on the main web servers and in the cloud

I don't know if you still remember, but we were developing a news CMS using the extensions Flask-MongoEngine, Flask-Security, Flask-Admin and Flask-Bootstrap. In this article we will add one more extension to our CMS; however, instead of using one of the available extensions, we will create one of our own.

Extension or plugin?

By definition, plugins differ from extensions. Plugins are usually external and use some kind of public API to integrate with the application. Extensions, on the other hand, are usually integrated with the application's logic, that is, with the interfaces of the framework itself. Both plugins and extensions increase the usefulness of the original application, but a plugin relates only to the higher-level layers of the application, while extensions are coupled to the framework. In other words, a plugin is something you write with only your own application in mind, and it is tightly coupled to it, while an extension is something that can be used by any application written in the same framework, because it is coupled to the logic of the framework rather than to that of the applications written with it.

When should you create an extension?

It makes sense to create an extension when you identify functionality that can be reused by other Flask applications. That way you benefit from not having to rewrite (copy-paste) that functionality in other apps, and you can also publish your extension as open source, benefiting the whole community and incorporating improvements; in other words, everybody wins!

A practical example

Imagine you are publishing your site but would like to provide a sitemap (a URL that lists all the pages of your site, used by Google to improve your ranking in search results). As we will see in the example below, publishing a sitemap is a fairly simple task, but it is something you will need to do on every site you develop, and it can become more complex as you need to control publication dates and extract URLs automatically.

Example 1 - Publishing the sitemap without an extension:

from flask import Flask, make_response

app = Flask(__name__)

@app.route('/artigos')
def artigos():
    "this endpoint returns the list of articles"

@app.route('/paginas')
def paginas():
    "this endpoint returns the list of pages"

@app.route('/contato')
def contato():
    "this endpoint returns the contact form"

######################################
# This part could be an extension
######################################

@app.route('/sitemap.xml')
def sitemap():
    items = [
        '<url><loc>{0}</loc></url>'.format(page)
        for page in ['/artigos', '/paginas', '/contato']
    ]
    sitemap_xml = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
        '{0}</urlset>'
    ).format(''.join(items)).strip()
    response = make_response(sitemap_xml)
    response[...]
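The excerpt cuts off before the article shows the extension itself. As a hedged sketch (our own class and names, following the standard init_app pattern for Flask extensions, not necessarily the article's final code), the sitemap logic could be packaged for reuse like this:

from flask import Flask, make_response

class Sitemap(object):
    """A tiny, reusable Flask extension that serves /sitemap.xml."""

    def __init__(self, app=None):
        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        self.app = app
        app.add_url_rule('/sitemap.xml', 'sitemap', self.sitemap)

    def sitemap(self):
        # Collect every GET route registered on the app, skipping statics.
        pages = [r.rule for r in self.app.url_map.iter_rules()
                 if 'GET' in r.methods and not r.rule.startswith('/static')]
        items = ''.join('<url><loc>{0}</loc></url>'.format(p) for p in pages)
        xml = ('<?xml version="1.0" encoding="UTF-8"?>'
               '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
               '{0}</urlset>').format(items)
        response = make_response(xml)
        response.headers['Content-Type'] = 'application/xml'
        return response

# Usage in any Flask app:
# app = Flask(__name__); Sitemap(app)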



Wesley Chun: Exporting a Google Sheet spreadsheet as CSV

Wed, 26 Apr 2017 10:12:27 +0000

Introduction

Today, we'll follow up to my earlier post on the Google Sheets API and multiple posts (first, second, third) on the Google Drive API by answering one common question: how do you download a Google Sheets spreadsheet as a CSV file? The "FAQ"ness of the question itself, as well as the various versions of Google APIs, has led to many similar StackOverflow questions: one, two, three, four, five, just to list a few. Let's answer this question definitively and walk through a Python code sample that does exactly that. The main assumption is that you have a Google Sheets file in your Google Drive named "inventory".

Choosing the right API

Upon first glance, developers may think the Google Sheets API is the one to use. Unfortunately that isn't the case. The Sheets API is the one to use for spreadsheet-oriented operations, such as inserting data, reading spreadsheet rows, managing individual tabs/sheets within a spreadsheet, cell formatting, creating charts, adding pivot tables, etc. It isn't meant to perform file-based requests like exporting a Sheet in CSV (comma-separated values) format. For file-oriented operations with a Google Sheet, you would use the Google Drive API.

Using the Google Drive API

As mentioned earlier, Google Drive features numerous API scopes of authorization. As usual, we always recommend you use the most restrictive scope possible that allows your app to do its work. You'll request fewer permissions from your users (which makes them happier), and it also makes your app more secure, possibly preventing modifying, destroying, or corrupting data, or perhaps inadvertently going over quotas. Since we're only exporting a Google Sheets file from Google Drive, the only scope we need is:

'https://www.googleapis.com/auth/drive.readonly' — read-only access to file content or metadata

The earlier post I wrote on the Google Drive API featured sample code that exported an uploaded Google Docs file as PDF and downloaded it from Drive. This post will not only feature a change to exporting a Google Sheets file in CSV format, but will also demonstrate one additional feature of the Drive API: querying.

Since we've fully covered the authorization boilerplate in earlier posts and videos, we're going to skip that here and jump right to the action: creating a service endpoint to Drive. The API name is (of course) 'drive', and the current version of the API is 3, so use the string 'v3' in this call to the apiclient.discovery.build() function:

DRIVE = discovery.build('drive', 'v3', http=creds.authorize(Http()))

Query and export files from Google Drive

While unnecessary, we'll create a few string constants representing the filename and the source and destination file MIME types to make the code easier to understand:

FILENAME = 'inventory'
SRC_MIMETYPE = 'application/vnd.google-apps.spreadsheet'
DST_MIMETYPE = 'text/csv'

In this simple example, we're only going to export one Google Sheets file as CSV, arbitrarily choosing a file named "inventory". So to perform the query, you need both the filename and its MIME type, "application/vnd.google-apps.spreadsheet". Query components are conjoined with the "and" keyword, so your query string will look like this:

q='name="%s" and mimeType="%s"' % (FILENAME, SRC_MIMETYPE)

Since there may be more than one Google Sheets file named "inventory", we opt for the newest one and thus need to sort all matching files in descending order of last modification time, then name if the "mtime"s are identical, via an "order by" clause:

orderBy='modifiedTime desc,name'

Here is the complete call to DRIVE.files().list() to issue the query:

files = DRIVE.files().list(
    q='name="%s" and mimeType="%s"' % (FILENAME, SRC_MIMETYPE),
    orderBy='modifiedTime desc,name'[...]
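The excerpt truncates inside that call. A hedged sketch of how the rest of the flow might look (our completion, not necessarily the post's exact code): finish the query, then export the newest match as CSV with the Drive v3 files().export() method, error handling omitted:

import os

files = DRIVE.files().list(
    q='name="%s" and mimeType="%s"' % (FILENAME, SRC_MIMETYPE),
    orderBy='modifiedTime desc,name').execute().get('files', [])

if files:
    fn = '%s.csv' % os.path.splitext(files[0]['name'])[0]
    # Export the Google Sheet in the destination MIME type and save it locally.
    data = DRIVE.files().export(
        fileId=files[0]['id'], mimeType=DST_MIMETYPE).execute()
    with open(fn, 'wb') as f:
        f.write(data)
else:
    print('!!! ERROR: file not found')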



Gocept Weblog: Last call for take off to the Python 3 wonderland

Wed, 26 Apr 2017 06:56:56 +0000

We are approaching the Zope 2 Resurrection Sprint and hope that all those who are willing to help earl Zope II on his endeavor to port his realm have already prepared their horses and packed the necessary equipment to arrive in Halle (Saale), Germany. To help with the preparations we have set up some means of communication:

Etherpad: In the Etherpad we have collected a fundamental set of obstacles that the immigration authority of the Python 3 wonderland sent to earl Zope II via a mounted messenger. If there are additional problems we need to solve with the immigration or other authorities, feel free to point those out in the pad.

IRC Channel: During the sprint we will have an owl waiting for messages in the #sprint channel on irc.freenode.net, so additional information and questions can be placed there.

General Schedule: In general the gates of the gocept manor in Halle (Saale) are open from 8:00 till 18:00 during the sprint for the squires to help earl Zope II. There will be some refreshments in the morning (8:00 – 9:00) and during lunch time (12:00 – 13:00) in order to keep everyone happy and content. Apart from that, there will be some fixed points in time to meet:

Monday 2017-05-01
  • 19:00 CEST, pre-sprint get-together for early arrivals at Anny Kilkenny. Attention: there will be a bigger political demonstration in Halle which might impact the arrival, so take that into consideration.

Tuesday 2017-05-02
  • 9:00 CEST, official welcome, with sprint planning afterwards
  • 16:30–17:30 CEST, discussion: TBD
  • 18:00 CEST, guided tour through the city of Halle
  • 19:30 CEST, dinner and get-together at Wenzels Bierstuben (separate bills)

Wednesday 2017-05-03
  • 9:00 CEST, daily meeting and review
  • 16:30–18:00 CEST, discussion: TBD
  • 19:00 CEST, BBQ evening in the lovely garden at gocept manor

Thursday 2017-05-04
  • 9:00 CEST, daily meeting and review
  • 16:30–17:30 CEST, discussion: TBD
  • 19:00 CEST, dinner and get-together at Pizzeria “Rote Soße” (separate bills)

Friday 2017-05-05
  • 9:00 CEST, daily meeting and review
  • 13:00 CEST, sprint closing session with review and the possibility to present lightning talks about your projects

We are looking forward to the sprint and hope to overcome the remaining migration problems of earl Zope II. [...]



Carl Chenet: Migrate Feed2tweet from 1.0 to 1.1

Tue, 25 Apr 2017 22:00:58 +0000

Feed2tweet 1.1, your RSS-to-Twitter bot, had a compatibility-breaking change: the format of the cache file changed because of a platform-dependent Python issue in one of its dependencies.

 

It is mandatory to execute the following steps in order to keep the timeline of your Twitter account safe (and, maybe more importantly, the timelines of your followers).

How to migrate Feed2tweet 1.0 to 1.1

We start by commenting out the Feed2tweet entry in the system crontab:

# */10 * * * * feed2tweet feed2tweet -c /etc/feed2tweet/feed2tweet.ini

The next step is to update Feed2tweet:

# pip3 install feed2tweet --upgrade

As we may have an issue during the upgrade, we back up the cache file:

$ mkdir feed2tweet-backup
$ cp /var/lib/feed2tweet/feed2tweet.db feed2tweet-backup/

Now we remove the Feed2tweet cache file:

$ rm -f /var/lib/feed2tweet/feed2tweet.db

So far so good. Let’s regenerate the cache file by getting all the entries but NOT sending them to Twitter:

$ feed2tweet --populate-cache -c /etc/feed2tweet/feed2tweet.ini

Almost finished. We execute the dry-run mode in order to check the result. You should not see any entries displayed, meaning you’re now ready to send the next ones to Twitter:

$ feed2tweet --dry-run -c /etc/feed2tweet/feed2tweet.ini

And of course, to finish, we uncomment the Feed2tweet line in the /etc/crontab file:

*/10 * * * * feed2tweet feed2tweet -c /etc/feed2tweet/feed2tweet.ini

We’re all set! The new RSS entries of your feeds will be automatically posted to Twitter.

More information about Feed2tweet

… and finally

You can help the Feed2tweet Bot by donating anything through Liberapay (also possible with cryptocurrencies). That’s a big motivation factor!





Philip Semanchuk: Thanks, Science!

Tue, 25 Apr 2017 21:20:12 +0000

I took part in the Raleigh March for Science last Saturday. For the opportunity to learn about it, participate in it, photograph it, share it with you — oh, and also for, you know, being alive today — thanks, science!

(photos from the Raleigh March for Science)










DataCamp: Keras Cheat Sheet: Neural Networks in Python

Tue, 25 Apr 2017 18:10:46 +0000

Keras is an easy-to-use and powerful library for Theano and TensorFlow that provides a high-level neural networks API to develop and evaluate deep learning models.

We recently launched one of the first online interactive deep learning courses using Keras 2.0, called "Deep Learning in Python".

Now, DataCamp has created a Keras cheat sheet for those who have already taken the course and who still want a handy one-page reference, or for those who need an extra push to get started.

In short, you'll see that this cheat sheet presents you with the six steps that you can go through to make neural networks in Python with the Keras library.

(image)

In no time, this Keras cheat sheet will make you familiar with how you can load datasets from the library itself, preprocess the data, build up a model architecture, and compile, train, and evaluate it. As there is a considerable amount of freedom in how you build up your models, you'll see that the cheat sheet uses some of the simple key code examples of the Keras library that you need to know to get started with building your own neural networks in Python.

Furthermore, you'll also see some examples of how to inspect your model, and how you can save and reload it. Lastly, you’ll also find examples of how you can predict values for test data and how you can fine-tune your models by adjusting the optimization parameters and early stopping.

Everything you need to make your first neural networks in Python with Keras!

Also, don't miss out on our scikit-learn cheat sheet, NumPy cheat sheet and Pandas cheat sheet!
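As a taste of those six steps, here is a minimal hedged sketch of ours (random stand-in data and a Keras 2.0-era API; not taken from the cheat sheet itself):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# 1. Load data (random stand-in values for a binary classification task).
data = np.random.random((1000, 10))
labels = np.random.randint(2, size=(1000, 1))

# 2.-3. Preprocess (skipped here) and build the model architecture.
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=10))
model.add(Dense(1, activation='sigmoid'))

# 4. Compile, 5. train, 6. evaluate.
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels, epochs=10, batch_size=32)
loss, accuracy = model.evaluate(data, labels)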




Mike Driscoll: Getting Started with pywebview

Tue, 25 Apr 2017 12:30:03 +0000

I stumbled across the pywebview project a couple of weeks ago. The pywebview package “is a lightweight cross-platform wrapper around a webview component that allows to display HTML content in its own native GUI window.” It uses WebKit on OSX and Linux and Trident (MSHTML) on Windows, which is actually what wxPython’s webview widget also does. The idea behind pywebview is that it provides you the ability to load a website in a desktop application, kind of like Electron.

While pywebview claims it “has no dependencies on an external GUI framework”, on Windows it requires pythonnet, PyWin32 and comtypes installed. OSX requires “pyobjc”, although that is included with the default Python installed in OSX. For Linux, it’s a bit more complicated. On GTK3-based systems you will need PyGObject, whereas on Debian-based systems, you’ll need to install PyGObject + gir1.2-webkit-3.0. Finally, you can also use PyQt 4 or 5.

You can use Python micro web frameworks, such as Flask or bottle, with pywebview to create cool applications using HTML5 instead of Python. To install pywebview itself, just use pip:

pip install pywebview

Once installed, and assuming you also have the prerequisites, you can do something like this:

import webview

webview.create_window('My Web App', 'http://www.mousevspython.com')

This will load the specified URL in a window with the specified title (i.e. the first argument). Your new application should end up looking something like this:

The API for pywebview is quite short and sweet and can be found here: https://github.com/r0x0r/pywebview#api

There are only a handful of methods that you can use, which makes them easy to remember. But since you can’t create any other controls for your pywebview application, you will need to do all your user interface logic in your web application.

The pywebview package supports being frozen using PyInstaller for Windows and py2app for OSX. It also works with virtualenv, although there are known issues that you will want to read about before using virtualenv.

Wrapping Up

The pywebview package is actually pretty neat and I personally think it’s worth a look. If you want something that’s a bit more integrated with your desktop, then you might want to give wxPython or PyQt a try. But if all you need to do is distribute an HTML5-based web app, then this package might be just the one for you. [...]
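To illustrate the Flask-plus-pywebview idea mentioned above, here is a hedged sketch of ours (not from the post): serve a minimal Flask app in a background thread and point a pywebview window at it. The port and names are arbitrary:

import threading

import webview
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return '<h1>Hello from Flask inside a native window!</h1>'

def run_server():
    app.run(port=5000)

server = threading.Thread(target=run_server)
server.daemon = True   # don't keep the process alive after the window closes
server.start()

# Blocks until the user closes the window.
webview.create_window('My Web App', 'http://127.0.0.1:5000/')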



Chris Moffitt: Effectively Using Matplotlib

Tue, 25 Apr 2017 12:15:00 +0000

Introduction

The Python visualization world can be a frustrating place for a new user. There are many different options and choosing the right one is a challenge. For example, even after 2 years, this article is one of the top posts that lead people to this site. In that article, I threw some shade at matplotlib and dismissed it during the analysis. However, after using tools such as pandas, scikit-learn, seaborn and the rest of the data science stack in Python, I think I was a little premature in dismissing matplotlib. To be honest, I did not quite understand it and how to use it effectively in my workflow. Now that I have taken the time to learn some of these tools and how to use them with matplotlib, I have started to see matplotlib as an indispensable tool. This post will show how I use matplotlib and provide some recommendations for users getting started, or for users who have not taken the time to learn matplotlib. I do firmly believe matplotlib is an essential part of the Python data science stack and hope this article will help people understand how to use it for their own visualizations.

Why all the negativity towards matplotlib?

In my opinion, there are a couple of reasons why matplotlib is challenging for the new user to learn. First, matplotlib has two interfaces. The first is based on MATLAB and uses a state-based interface. The second option is an object-oriented interface. The whys of this dual approach are outside the scope of this post, but knowing that there are two approaches is vitally important when plotting with matplotlib. The reason two interfaces cause confusion is that in the world of Stack Overflow and tons of information available via Google searches, new users will stumble across multiple solutions to problems that look somewhat similar but are not the same. I can speak from experience. Looking back on some of my old code, I can tell that there is a mishmash of matplotlib code, which is confusing to me (even if I wrote it).

Key Point: new matplotlib users should learn and use the object-oriented interface.

Another historic challenge with matplotlib is that some of the default style choices were rather unattractive. In a world where R could generate some really cool plots with ggplot, the matplotlib options tended to look a bit ugly in comparison. The good news is that matplotlib 2.0 has much nicer styling capabilities and the ability to theme your visualizations with minimal effort.

The third challenge I see with matplotlib is the confusion over when you should use pure matplotlib to plot something vs. a tool like pandas or seaborn that is built on top of matplotlib. Anytime there can be more than one way to do something, it is challenging for the new or infrequent user to follow the right path. Couple this confusion with the two different APIs and it is a recipe for frustration.

Why stick with matplotlib?

Despite some of these issues, I have come to appreciate matplotlib because it is extremely powerful. The library allows you to create almost any visualization you could imagine. Additionally, there is a rich ecosystem of Python tools built around it, and many of the more advanced visualization tools use matplotlib as the base library. If you do any work in the Python data science stack, you will need to develop some basic familiarity with how to use matplotlib. That is the focus of the rest of this post: developing a basic approach for effectively using matplotlib.
Basic Premises If you take nothing else away from this post, I recommend the followin[...]



Django Weblog: DjangoCon Europe 2017 in retrospect

Tue, 25 Apr 2017 09:05:08 +0000

DjangoCon Europe 2017 upheld all the traditions established by previous editions: a volunteer-run event, speakers from all sections of the community and a commitment to stage a memorable, enjoyable conference for all attendees.

Held in a stunning Art Deco cinema in the centre of Florence, this year's edition was host to over 350 Djangonauts.

The team of always-smiling and willing volunteers, led by Emanuela Dal Mas and Iacopo Spalletti under the auspices of the Fuzzy Brains association, created a stellar success on behalf of the whole community.

Of note in this year's conference was an emphasis on inclusion, as expressed in the conference's manifesto. The organisers' efforts to expand the notion of inclusion were visible in the number of attendees from Africa and south Asia, nearly all of whom were also given a platform at the event. This was made possible not only by the financial assistance programme but also through the considerable logistical help the organisers were able to offer.

The conference's opening keynote talk by Anna Makarudze and Humphrey Butau on the growing Python community in Zimbabwe, and an all-woman panel discussing their journeys in technology, were just two examples of a commitment to making more space for voices and stories that are less often heard.

DjangoCon Europe continues to thrive and sparkle in the hands of the people who care about it most, and who step forward each year as volunteers who commit hundreds of hours of their time to make the best possible success of it. Once again, this care has shone through.

On behalf of the whole Django community, the Django Software Foundation would like to thank the entire organising team and all the other volunteers of this year's DjangoCon Europe, for putting on a superb and memorable production.

The next DjangoCons in Europe

The DSF Board is considering bids for DjangoCon Europe 2018-2020. If you're interested in hosting the event in one of these years, we'd like to hear from you as soon as possible.




PyBites: How to Write a Simple Slack Bot to Monitor Your Brand on Twitter

Tue, 25 Apr 2017 09:00:00 +0000

In this article I show you how to monitor Twitter and post alerts to a Slack channel. We built a nice tool to alert us whenever our domain gets mentioned on Twitter. The slacker and twython modules made this pretty easy. We also used configparser and logging.
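A minimal sketch of that pipeline, assuming placeholder config keys, search term and channel name rather than the authors' actual settings, might look like:

    import configparser
    import logging

    from slacker import Slacker
    from twython import Twython

    logging.basicConfig(level=logging.INFO)

    # hypothetical settings.ini layout with [twitter] and [slack] sections
    config = configparser.ConfigParser()
    config.read('settings.ini')

    twitter = Twython(config['twitter']['app_key'],
                      config['twitter']['app_secret'],
                      config['twitter']['oauth_token'],
                      config['twitter']['oauth_token_secret'])
    slack = Slacker(config['slack']['token'])

    # search recent tweets mentioning the domain and forward them to Slack
    for tweet in twitter.search(q='pybit.es', count=10)['statuses']:
        text = '@{}: {}'.format(tweet['user']['screen_name'], tweet['text'])
        logging.info('posting tweet %s to Slack', tweet['id'])
        slack.chat.post_message('#mentions', text)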




S. Lott: Modules vs. Monoliths vs. Microservices:

Tue, 25 Apr 2017 08:00:20 +0000

Dan Bader (@dbader_org)
Worth a read: "Modules vs. Microservices" (and how to find a middle ground) oreilly.com/ideas/modules-…

"don't trick yourself into a microservices-only mindset"

Thanks for sharing.

The referenced post gives you the freedom to have a "big-ish" microservice. My current example has four very closely-related resources. There's agony in decomposing these into separate services. So we have several distinct Python modules bound into a single Flask container.
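As an illustration of that middle ground, here is a minimal sketch of several resource modules bound into one Flask container using blueprints. The resource names are invented, and the post does not say blueprints were the mechanism used; they are just one common way to do it:

    from flask import Blueprint, Flask, jsonify

    # in a real project each Blueprint would live in its own module,
    # e.g. resources/orders.py and resources/invoices.py
    orders = Blueprint('orders', __name__)

    @orders.route('/orders')
    def list_orders():
        return jsonify(orders=[])

    invoices = Blueprint('invoices', __name__)

    @invoices.route('/invoices')
    def list_invoices():
        return jsonify(invoices=[])

    # one deployable service holds all of the closely-related resources
    app = Flask(__name__)
    app.register_blueprint(orders)
    app.register_blueprint(invoices)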

Yes, we lack the advertised static type checking for module boundaries. But that kind of static type checking doesn't actually solve any real problems, since the issues are always semantic and can only be found with unit tests, integration tests and Gherkin-based acceptance testing (see Python BDD: https://pypi.python.org/pypi/pytest-bdd and https://pypi.python.org/pypi/behave/1.2.5).

We walk a fine line. How tightly coupled are these resources? Can they actually be used in isolation? What do the possible future changes look like? Where is the swagger.json going to change?

It's helpful to have both options on the table.



Full Stack Python: Python for Entrepreneurs

Tue, 25 Apr 2017 04:00:00 +0000

Python for Entrepreneurs is a new video course by the creators of Talk Python to Me and Full Stack Python.

Update: The Kickstarter has been funded! Michael and I are hard at work on the course content. Thank you to everyone who supported us as a backer. The course is available in early access mode on training.talkpython.fm until it is fully released.

We are creating this course and running a Kickstarter for it based on feedback that it's still too damn difficult to turn basic Python programming knowledge into a business that generates income as a side or full-time project. Both Michael and I have been able to make that happen for ourselves, and we want to share every difficult lesson we've learned through this course.

The Python for Entrepreneurs videos and content will dive into building and deploying a real-world web application, marketing it to prospective customers, handling search engine optimization, making money through credit card payments, getting help from part-time contractors for niche tasks and scaling up to meet traffic demands.

If this course hits the mark for what you want to do with Python, check out the Kickstarter - we've set up steep discounts for early backers.

If you have any questions, please reach out to Michael Kennedy or me, Matt Makai.




Full Stack Python: Configuring Python 3, Bottle and Gunicorn for Development on Ubuntu 16.04 LTS

Tue, 25 Apr 2017 04:00:00 +0000

The Ubuntu 16.04 Long Term Support (LTS) Linux operating system was released in April 2016. This latest Ubuntu release is named "Xenial Xerus" and it is the first Ubuntu release to include Python 3, instead of Python 2.x, as the default Python installation. We can quickly start a new Bottle web application project and run it with Green Unicorn (Gunicorn) on Ubuntu 16.04.

Tools We Need

Our setup requires the Ubuntu 16.04 release along with a few other code libraries. Don't install these tools just yet since we'll get to them as we go through the walkthrough. Our requirements and their current versions as of April 2017 are:

Ubuntu 16.04.2 LTS (Xenial Xerus)
Python version 3.5.1 (default in Ubuntu 16.04.2)
Bottle web framework version 0.13
Green Unicorn (Gunicorn) version 19.7.1

If you are developing on Mac OS X or Windows, make sure to use virtualization software such as Parallels or VirtualBox with the Ubuntu .iso file. Either the amd64 or i386 version of 16.04 is fine. I use the amd64 version for my own local development. Once Ubuntu boots to the desktop, open a terminal window to install the system packages.

System Packages

We can see the python3 system version Ubuntu comes with, and where its executable is stored, using these commands:

    python3 --version
    which python3

Our Ubuntu installation requires a few system packages. We will get prompted for the superuser password because restricted system access is needed to install packages through apt.

    sudo apt-get install python3-pip python3-dev

Enter y to let the system package installation process do its job. The packages we need are now installed. We can continue on to install our Python-specific dependencies.

Virtualenv

Pip is now installed and we can use it, along with a virtualenv, to handle our application dependencies and to download and install Bottle and Gunicorn. Create a directory for the virtualenvs, then create a new virtualenv:

    # make sure pip and setuptools are the latest version
    pip3 install --upgrade pip setuptools

    # the tilde "~" specifies the user's home directory, like /home/matt
    cd ~
    mkdir venvs

    # create the virtualenv, either with the virtualenv tool,
    # specifying the system python3 installation
    virtualenv --python=/usr/bin/python3 venvs/bottleproj
    # or with the venv module built into Python 3
    python3 -m venv venvs/bottleproj

Activate the virtualenv:

    source ~/venvs/bottleproj/bin/activate

Our prompt will change after we properly activate the virtualenv. Our virtualenv is now activated with Python 3. We can install whatever dependencies we want, in our case Bottle and Gunicorn.

Bottle and Gunicorn

We can now install Bottle and Green Unicorn via the pip command:

    pip install bottle gunicorn

An installation that finishes without errors is a good sign.

Use the mkdir command to create a new directory to keep our Bottle project, then use the cd (change directory) command to move into the new folder:

    mkdir ~/bottleproj
    cd ~/bottleproj

Create a new file named app.py within our bottleproj directory so we can test whether Bottle is working properly. I prefer to use Vim, but Emacs and other development environments work great as well.

Within the new app.py file, write the following code:

    import bottle
    from bottle import route, run, Response

    # a basic URL route to test whether Bottle is responding properly
    @route('/')
    def index():
        return Response("It works!")

    # these two lines are only used for python app[...]
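The feed entry is cut off before the Gunicorn step, but a typical way to finish, sketched here under the assumption that app.py exposes a module-level WSGI object named app (the worker count and port are arbitrary choices, not taken from the original walkthrough), looks like this:

    # at the bottom of app.py, expose Bottle's default WSGI application
    app = bottle.default_app()

Then start Gunicorn from the project directory with the virtualenv activated:

    gunicorn --workers 2 --bind 127.0.0.1:8000 app:app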