Planet Python

Patrick Kennedy: Relational Database Migrations using Flask-Migrate

Fri, 28 Oct 2016 12:42:36 +0000

Introduction

In this blog post, I'll show how to use the Flask-Migrate module to simplify database schema changes for a Flask web application that is using SQLAlchemy. The Flask-Migrate module is written by Miguel Grinberg and utilizes the Alembic module for changing the database schema (i.e. performing database migrations). Flask-Migrate provides a convenient command-line interface for performing database migrations, including versioning of the database schema.

One of the more challenging aspects of working with relational databases is making changes to the database schema, such as adding or deleting columns. During the early development of an application, making changes to a table in a relational database is easy if you're not worried about deleting all of the data in your database. However, once you get into production and are actually storing real data in your relational database, you need to be very cautious when changing a table. While relational databases have a lot of strong points, being able to easily update the underlying schema of a database table is not one of them.

To be explicit, the Flask-Migrate module is intended for Flask applications that are using SQLAlchemy as the ORM (Object-Relational Mapper) for interfacing with the underlying database. Typically, PostgreSQL is a good database choice for a Flask application, and SQLAlchemy is a great tool that lets you work in terms of Python instead of SQL when interfacing with the database.

Why Is a Database Migration Tool Needed?

At first, it may seem like having a tool for doing database migrations is overkill. It very well may be for a simple application, but I'd argue that any application that is going into production (where real data is being stored) should utilize a database migration tool. Let's take a simple example to show how a database migration tool (such as Flask-Migrate) can be beneficial.

Suppose you have a web application with a mature database schema that you've pushed to production, and there are a few users already using your application. As part of a new feature, you want to add the ability to have users be part of groups, so you need to update your database schema to allow users to be associated with groups. While this is a change that you can test out in your development environment with test data, it's a significant change for your production database, as you must ensure that you are not going to delete or alter any existing data. At this point, you could write a script to perform this database migration. OK, not a huge deal to write a single script. But if you have to make three database migrations in two weeks, or seven each week, writing new scripts every time becomes quite tedious. So why not use a tool that was developed for exactly this purpose and has been tested thoroughly?

Configuring Flask-Migrate

OK, hopefully you've been convinced that using a database migration tool (in this case, Flask-Migrate) is a good idea. Let's start off by configuring the Flask-Migrate module for use with an existing Flask application (I'll be using the application that I've been documenting on this blog: Flask Tutorial).
The first step is to install the Flask-Migrate module using pip and then update the listing of modules used by your application (make sure that you are working within your virtual environment!):

    (ffr_env) $ pip install Flask-Migrate
    (ffr_env) $ pip freeze > requirements.txt

If you look at the Flask-Migrate documentation, there are two ways to utilize this module:

1. Including Flask-Migrate directly in your application
2. Creating a separate script for handling database migrations

I've used both methods successfully, but I prefer #1 due to its simplicity. To utilize Flask-Migrate, you'll need to update the configuration of your Flask application by adding the following lines to the file in …/web/ (source code is truncated to just show the updates):

    #################
    #### imports ####
    #################

    from flask import Flask, render_templat[...]
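Since the excerpt cuts off mid-configuration, here is a minimal sketch of what a Flask-Migrate setup of that era typically looks like. It is an illustration only, not the tutorial's actual code: the database URI and the `app`/`db`/`manager` names are assumptions.

```python
# Minimal Flask-Migrate wiring, as commonly done with Flask-Script at the time.
# The database URI and object names below are illustrative.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate, MigrateCommand
from flask_script import Manager

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://localhost/ffr'
db = SQLAlchemy(app)

# Hook Alembic-based migrations into the application.
migrate = Migrate(app, db)

# Expose a "db" command group (init / migrate / upgrade) on the command line.
manager = Manager(app)
manager.add_command('db', MigrateCommand)

if __name__ == '__main__':
    manager.run()
```

With something like this in place, the workflow is typically: run the script once with "db init" to create the migrations directory, then "db migrate -m 'some message'" and "db upgrade" for each schema change.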

Python Software Foundation: Power Of "We": PyCon CZ Keynote Speaker Ana Balica

Thu, 27 Oct 2016 20:46:47 +0000

Next week Ana Balica will give her first keynote, but she isn't nervous. At least, not yet. "I definitely will be before going on stage," she says. "That’s a natural response; you can’t help it." Balica is a Django Developer at Potato, a web agency in London. Although she is only two years out of university, she's making a name for herself as a programmer, community leader, and conference speaker. This month at PyCon CZ in Brno, Czech Republic, she presents "Humanizing Among Coders," a talk that describes five aspects of effective communication. "I hope it will get people talking about how they communicate, and believing they can make a difference by being empathetic."

Python and Ana Balica first met while she was studying software engineering at the Technical University of Moldova. Although the school focuses on C and C++, she tried out Python as a sort of snack on the side, and got hooked. After she graduated she spent her "gap year" enjoying two adventures at once. The first adventure was traveling Asia and Europe. Since she felt free of any responsibility, she went on a second adventure concurrently, expanding her skills with the "Learn IT Girl" mentorship program online. Her project for Learn IT Girl was an alphabet game for children that ran on Android phones. She built the first prototype with Kivy so she could write more Python.

A Shared Network for Women in Computing

Balica was a Google Summer of Code participant twice as a student and once as a mentor. Her project for GSoC was to create a new online platform for Systers. Systers is a community of over 3,000 women in tech, the largest such group in the world. Anita Borg founded Systers in 1987 as an email list for women in "systems", but it has grown and diversified into a dozen subgroups such as Latinas in Computing, Black Women in Computing, and many others. In 2014 Balica began building a Systers Portal with Python and Django to give all these members and subgroups a unified platform to share news. During her gap year, Balica continued to hack on the portal. "While I was traveling I had plenty of free time, so I was constantly doing something, improving things, adding new features." She returned to GSoC the following summer to mentor students who joined the project. Systers leader Rose Robinson was the portal's guiding visionary; she says Balica's coding ability is exceptional, and her leadership was crucial to the project's success. "When teams are in four different time zones, challenges are heightened, but Ana was never fazed."

The Launch of a Speaking Career

Balica's first conference talk was at DjangoCon Europe, in Wales in June 2015. At first, she didn't intend to speak at the conference. She was filling out the registration form when she noticed the offer of free mentorship for new speakers. She could get help inventing a topic or structuring her talk. The DjangoCon organizers wrote on the site, "We don’t want to be proud because we had a lot of superstar speakers at our conference. We want to be proud because we were the conference where you began your superstar speaking career." Ola Sitarska was Balica's mentor for her proposal. Sitarska says, "I was impressed with the level of detail she went into, and only advised about what the organizers might be looking for, as she didn't need much help!" Balica says about Sitarska, "I wrote her a couple of paragraphs and her response was three times longer, with a lot of encouragement and a lot of little tips."
Balica's talk on Django mixins was accepted and she had a blast presenting it to the audience. Ever since that talk, Balica's speaking career has been accelerating. She attributes it to a snowball effect; she'd never intended to give so many talks. PSF director Anna Ossowski saw her speak at DjangoCon the following year, about testing with mocks. "It was excellent to see her on stage. She had a Shakespeare theme and it was more like an acting performance than giving a talk."

Code and Compassion

At PyCon CZ next week, Balica will give h[...]

Peter Bengtsson: Django test optimization with no-op PIL engine

Thu, 27 Oct 2016 12:34:18 +0000

The Air Mozilla project is a regular Django webapp. It's reasonably big for a more or less one-man project: ~200K lines of Python and ~100K lines of JavaScript. There are 816 "unit tests" at the time of writing. Most of them are kinda typical Django tests, like:

    def test_some_feature(self):
        thing = MyModel.objects.create(key='value')
        url = reverse('namespace:name', args=(thing.id,))
        response = self.client.get(url)
        ...

Also, the site uses sorl.thumbnail to automatically generate thumbnails from uploaded images. It's a great library. However, when running tests, you almost never actually care about the image itself. Your eyes will never feast on them. All you care about is that there is an image, that it was resized and that nothing broke. You don't write tests that check the new image dimensions of a generated thumbnail. If you need tests that go into that kind of detail, they best belong somewhere else.

So, I thought, why not fake ALL operations that happen inside sorl.thumbnail to do with resizing and cropping images. Here's the changeset that does it. Note that the trick is to override the default THUMBNAIL_ENGINE that sorl.thumbnail loads. It usually defaults to sorl.thumbnail.engines.pil_engine.Engine and I just wrote my own that does no-ops in almost every instance. I admittedly threw it together quite quickly just to see if it was possible. Turns out, it was.

    # Depends on setting something like:
    #   THUMBNAIL_ENGINE = 'airmozilla.base.tests.testbase.FastSorlEngine'
    # in your settings specifically for running tests.

    from sorl.thumbnail.engines.base import EngineBase


    class _Image(object):
        def __init__(self):
            self.size = (1000, 1000)
            self.mode = 'RGBA'
            self.data = '\xa0'


    class FastSorlEngine(EngineBase):

        def get_image(self, source):
            return _Image()

        def get_image_size(self, image):
            return image.size

        def _colorspace(self, image, colorspace):
            return image

        def _scale(self, image, width, height):
            image.size = (width, height)
            return image

        def _crop(self, image, width, height, x_offset, y_offset):
            image.size = (width, height)
            return image

        def _get_raw_data(self, image, *args, **kwargs):
            return image.data

        def is_valid_image(self, raw_data):
            return bool(raw_data)

So, was it much faster? It's hard to measure because the time it takes to run the whole test suite depends on other stuff going on on my laptop during the long time it takes to run the tests. So I ran them 8 times with the old code and 8 times with this new hack:

    Iteration   Before      After
    1           82.789s     73.519s
    2           82.869s     67.009s
    3           77.100s     60.008s
    4           74.642s     58.995s
    5           109.063s    80.333s
    6           100.452s    81.736s
    7           85.992s     61.119s
    8           82.014s     73.557s
    Average     86.865s     69.535s
    Median      82.869s     73.519s
    Std Dev     11.826s     9.0757s

So roughly 11% faster (comparing medians). Not a lot, but it adds up when you're doing test-driven development or debugging, where you run a suite or a test over and over as you're saving the files/tests you're working on.

Room for improvement

In my case, it just worked with this simple solution. Your site might do fancier things with the thumbnails. Perhaps we can combine forces on this and finalize a working solution into a standalone package. [...]
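The comment at the top of the engine mentions pointing THUMBNAIL_ENGINE at the fake class only while running tests. One common way to do that is a test-only settings module layered over the normal one; the module names below are hypothetical, not Air Mozilla's actual layout.

```python
# settings_test.py -- hypothetical test-only settings module.
# The import path and the dotted engine path are illustrative.
from settings import *  # start from the normal project settings

# Swap the real PIL engine for the no-op engine only during test runs.
THUMBNAIL_ENGINE = 'airmozilla.base.tests.testbase.FastSorlEngine'
```

The test run would then use something like "./manage.py test --settings=settings_test" (or the DJANGO_SETTINGS_MODULE environment variable) so the production settings are untouched.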

Glyph Lefkowitz: What Am Container

Thu, 27 Oct 2016 09:23:00 +0000

Perhaps you are a software developer. Perhaps, as a developer, you have recently become familiar with the term "containers". Perhaps you have heard containers described as something like "LXC, but better", "an application-level interface to cgroups" or "like virtual machines, but lightweight", or perhaps (even less usefully), a function call. You've probably heard of "docker"; do you wonder whether a container is the same as, different from, or part of Docker? Are you bewildered by the blisteringly fast-paced world of "containers"? Maybe you have no trouble understanding what they are - in fact you might be familiar with half a dozen orchestration systems and container runtimes already - but frustrated because this seems like a whole lot of work and you just don't see what the point of it all is? If so, this article is for you. I'd like to lay out what exactly the point of "containers" is, why people are so excited about them, and what makes the ecosystem around them so confusing. Unlike my previous writing on the topic, I'm not going to assume you know anything about the ecosystem in general; just that you have a basic understanding of how UNIX-like operating systems separate processes, files, and networks.1

At the dawn of time, a computer was a single-tasking machine. Somehow, you'd load your program into main memory, and then you'd turn it on; it would run the program, and (if you're lucky) spit out some output onto paper tape. When a program running on such a computer looked around itself, it could "see" the core memory of the computer it was running on and any attached devices, including consoles, printers, teletypes, or (later) networking equipment. This was of course very powerful - the program had full control of everything attached to the computer - but also somewhat limiting. This mode of addressing hardware was limiting because it meant that programs would break the instant you moved them to a new computer. They had to be re-written to accommodate new amounts and types of memory, new sizes and brands of storage, new types of networks. If the program had to contain within itself the full knowledge of every piece of hardware that it might ever interact with, it would be very expensive indeed. Also, if all the resources of a computer were dedicated to one program, then you couldn't run a second program without stomping all over the first one - crashing it by mangling its structures in memory, deleting its data by overwriting its data on disk.

So, programmers cleverly devised a way of indirecting, or "virtualizing", access to hardware resources. Instead of a program simply addressing all the memory in the whole computer, it got its own little space where it could address its own memory - an address space, if you will. If a program wanted more memory, it would ask a supervising program - what we today call a "kernel" - to give it some more memory. This made programs much simpler: instead of memorizing the address offsets where a particular machine kept its memory, a program would simply begin by saying "hey operating system, give me some memory", and then it would access the memory in its own little virtual area. In other words: memory allocation is just virtual RAM. Virtualizing memory - i.e. ephemeral storage - wasn't enough; in order to save and transfer data, programs also had to virtualize disk - i.e. persistent storage.
Whereas a whole-computer program would just seek to position 0 on the disk and start writing data to it however it pleased, a program writing to a virtualized disk - or, as we might call it today, a "file" - first needed to request a file from the operating system. In other words: file systems are just virtual disks. Networking was treated in a similar way. Rather than addressing the entire network connection at once, each program could allocate a little slice of the network - a "port". That way a program could, instead of consuming all network traffic destined for the entire machine, ask the operatin[...]
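The memory / file / port analogy above can be made concrete with a tiny sketch. This is just an illustration of the idea from a single process's point of view, not code from the article; the file name and addresses are arbitrary.

```python
# One process asking the operating system for each of the three virtualized
# resources discussed above, instead of addressing the hardware directly.
import socket

# "Virtual RAM": ask the kernel for memory rather than addressing physical
# core memory (in Python this request happens implicitly on allocation).
buffer = bytearray(1024 * 1024)

# "Virtual disk": ask the operating system for a file rather than seeking to
# a raw position on the disk.
with open('notes.txt', 'w') as f:
    f.write('hello\n')

# "Virtual network": ask for one port rather than owning all traffic destined
# for the machine. Port 0 means "give me any free port".
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('127.0.0.1', 0))
print('the kernel handed this process port', sock.getsockname()[1])
sock.close()
```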

Talk Python to Me: #82 Grokking Algorithms in Python

Thu, 27 Oct 2016 08:00:00 +0000

Algorithms underpin almost everything we do in programming and in problem solving in general. Yet, many of us have partial or incomplete knowledge of the most important and common ones. In this episode, you'll meet Adit Bhargava, the author of the light and playful Grokking Algorithms: An illustrated guide book.

If you struggled to understand and learn the key algorithms, this episode is for you.

Links from the show:

Adit on the web:
Book: Grokking Algorithms: An illustrated guide:
Grokking Algorithms GitHub:
Adit on Twitter: @_egonschiele
High perf search of Talk Python:

Gocept Weblog: Towards RestrictedPython 3

Thu, 27 Oct 2016 06:26:13 +0000

The biggest blocker to porting Zope to Python 3 is RestrictedPython.

What is RestrictedPython?

It is a library used by Zope to restrict Python code at the instruction level to a bare minimum of trusted functionality. It parses the code, filters out disallowed constructs (such as open()), and adds wrappers around each access on attributes or items. These wrappers can be used by Zope to enforce access control on objects in the ZODB without requiring manual checks in the code.

Why is RestrictedPython needed?

Zope allows writing Python code in the Zope management interface (ZMI) using a web browser ("through the web", aka TTW). This code is stored in the ZODB and executed on the server. It would be dangerous to allow a user to execute arbitrary code with the rights of the web server process. That's why the code is filtered through RestrictedPython, to make sure this approach is not a complete security hole. RestrictedPython is used in many places of Zope as part of its security model. An experiment at the Zope Resurrection Sprint showed that it would be really hard to create a Zope version which does not need RestrictedPython, thus removing the TTW approach.

What is the problem porting RestrictedPython to Python 3?

RestrictedPython relies on the compiler package of the Python standard library. This package no longer exists in Python 3 because it was poorly documented, unmaintained, and out of sync with the compiler Python uses itself. (There are whisperings that it was only kept because of Zope.) Since Python 2.6 there is a new ast module in the Python standard library, which is not a direct replacement for compiler, and there is no documentation on how to replace compiler with ast.

What is the current status?

Several people have already worked at various Plone and Zope sprints, and mostly in their spare time, on a Python 3 branch of RestrictedPython to find out how this package works and to start porting some of its functionality as a proof of concept. It seems to be possible to use ast as the new base for RestrictedPython. Probably the external API of RestrictedPython can be kept stable, but packages using or extending some of the internals of RestrictedPython might need to be updated as well.

What are the next steps?

Many Zope and Plone packages depend on RestrictedPython directly (like AccessControl or Products.ZCatalog) or indirectly (like Products.PythonScripts, or even …). When RestrictedPython has successfully been tested against these packages, porting them can start. There is a nice list of all Plone 5.1 dependencies and their status regarding Python 3. Our goal is to complete porting RestrictedPython by the end of March 2017. That opens up the possibility of guiding Zope into the Python 3 wonderland by the end of 2017. This is ambitious, especially since the work is done in spare time besides the daily customer work. You can help us by contributing pull requests via GitHub or by reviewing them. We are planning two Zope sprints in spring and autumn 2017. Furthermore we are grateful for each and every kind of support. [...]
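To make the "restrict Python code at the instruction level" idea concrete, here is a minimal sketch of how RestrictedPython is typically used on Python 2. compile_restricted and safe_builtins are part of RestrictedPython's public API; the source strings and the untrusted file name are made up for illustration and this is not Zope's actual integration code.

```python
# A minimal sketch of compiling and executing restricted code (Python 2 era).
from RestrictedPython import compile_restricted
from RestrictedPython.Guards import safe_builtins

source = "result = 1 + 1"

# compile_restricted parses the source and rejects or rewrites constructs
# that are not allowed in restricted code.
code = compile_restricted(source, '<untrusted>', 'exec')

# Execute with a tightly controlled set of builtins instead of the real ones.
restricted_globals = {'__builtins__': safe_builtins}
exec(code, restricted_globals)
print(restricted_globals['result'])  # 2

# open() is simply not available in the restricted environment, so code that
# tries to use it fails when it runs.
bad = compile_restricted("f = open('/etc/passwd')", '<untrusted>', 'exec')
try:
    exec(bad, {'__builtins__': safe_builtins})
except NameError as exc:
    print('blocked:', exc)
```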

Full Stack Python: Dialing Outbound Phone Calls with a Bottle Web App

Thu, 27 Oct 2016 04:00:00 +0000

Python web apps built with the Bottle web framework can send and receive SMS text messages. In this tutorial we will go beyond texting and learn how to dial outbound phone calls. The calls will read a snippet of text then play an MP3 file, but they can then be easily modified to create conference lines and many other voice features in your Python web apps.

Tools We Need

You should have either Python 2 or 3 installed to create your Bottle app, although Python 3 is recommended for new applications. We also need:

- pip and virtualenv to handle application dependencies
- Ngrok for localhost tunneling to our Bottle application while it's running on our local development environment
- Bottle web framework
- Free Twilio account to use their phone calling web API
- Twilio's Python helper library, which is open source on GitHub and available for download from PyPI

Take a look at this guide on setting up Python 3, Bottle and Gunicorn on Ubuntu 16.04 LTS if you need help getting your development environment configured before continuing on through the remainder of this tutorial. You can snag all the open source code for this tutorial in the python-bottle-phone GitHub repository under the outbound directory. Use and copy the code however you want - it's all open source under the MIT license.

Installing Our Application Dependencies

Our Bottle app needs a helper code library to make it easy to dial outbound phone calls. Bottle and the Twilio helper library are installable from PyPI into a virtualenv. Open your terminal and use the virtualenv command to create a new virtualenv:

    virtualenv bottlephone

Use the activate script within the virtualenv, which makes this virtualenv the active Python installation. Note that you need to do this in every terminal window where you want this virtualenv to be used.

    source bottlephone/bin/activate

The command prompt will change after activating the virtualenv to something like (bottlephone) $. Here is a screenshot of what my environment looked like when I used the activate script. Next use the pip command to install the Bottle and Twilio Python packages into your virtualenv.

    pip install bottle twilio

After the installation script finishes, we will have the required dependencies to build our app. Time to write some Python code to dial outbound phone calls.

Bottle and Twilio

Our simple Bottle web app will have three routes:

- / - returns a text string to let us know our Bottle app is running
- /twiml - responds with TwiML (a simple subset of XML) that instructs Twilio what to do when someone picks up the call to them from our Bottle web app
- /dial-phone/<outbound_phone_number>, where "outbound_phone_number" is a phone number in the format "+12025551234" - this route uses the Twilio helper library to send a POST request to the Twilio Voice API to dial a phone call

We can build the structure of our Bottle app and the first route right now. Create a new file named with the following contents to start our app.

    import os
    import bottle
    from bottle import route, run, post, Response
    from twilio import twiml
    from twilio.rest import TwilioRestClient

    app = bottle.default_app()
    # plug in account SID and auth token here if they are not already exposed as
    # environment variables
    twilio_client = TwilioRestClient()

    TWILIO_NUMBER = os.environ.get('TWILIO_NUMBER', '+12025551234')
    NGROK_BASE_URL = os.environ.get('NGROK_BASE_URL', '')


    @route('/')
    def index():
        """
        Returns a standard text response to show the app is up and running.
        """
        return Response("Bottle app running!")


    if __name__ == '__main__':
        run(host='', port=8000, debug=False, reloader=True)

Make sure you are in the directory where you created the above file. Run the app via the Bottle development server with the following command. [...]

Full Stack Python: How to Build Your First Slack Bot with Python

Thu, 27 Oct 2016 04:00:00 +0000

Bots are a useful way to interact with chat services such as Slack. If you have never built a bot before, this post provides an easy starter tutorial for combining the Slack API with Python to create your first bot. We will walk through setting up your development environment, obtaining a Slack API bot token and coding our simple bot in Python.

Tools We Need

Our bot, which we will name "StarterBot", requires Python and the Slack API. To run our Python code we need:

- Either Python 2 or 3
- pip and virtualenv to handle Python application dependencies
- Free Slack account with a team on which you have API access, or sign up for the Slack Developer Hangout team
- Official Python slackclient code library built by the Slack team
- Slack API testing token

It is also useful to have the Slack API docs handy while you're building this tutorial. All the code for this tutorial is available open source under the MIT license in the slack-starterbot public repository.

Establishing Our Environment

We now know what tools we need for our project, so let's get our development environment set up. Go to the terminal (or Command Prompt on Windows) and change into the directory where you want to store this project. Within that directory, create a new virtualenv to isolate our application dependencies from other Python projects.

    virtualenv starterbot

Activate the virtualenv:

    source starterbot/bin/activate

Your prompt should now look like the one in this screenshot. The official slackclient API helper library built by Slack can send and receive messages from a Slack channel. Install the slackclient library with the pip command:

    pip install slackclient

When pip is finished you should see output like this and you'll be back at the prompt. We also need to obtain an access token for our Slack team so our bot can use it to connect to the Slack API.

Slack Real Time Messaging (RTM) API

Slack grants programmatic access to their messaging channels via a web API. Go to the Slack web API page and sign up to create your own Slack team. You can also sign into an existing account where you have administrative privileges. After you have signed in, go to the Bot Users page. Name your bot "starterbot" then click the “Add bot integration” button. The page will reload and you will see a newly-generated access token. You can also change the logo to a custom design. For example, I gave this bot the Full Stack Python logo. Click the "Save Integration" button at the bottom of the page. Your bot is now ready to connect to Slack's API.

A common practice for Python developers is to export secret tokens as environment variables. Export the Slack token with the name SLACK_BOT_TOKEN:

    export SLACK_BOT_TOKEN='your slack token pasted here'

Nice, now we are authorized to use the Slack API as a bot. There is one more piece of information we need to build our bot: our bot's ID. Next we will write a short script to obtain that ID from the Slack API.

Obtaining Our Bot’s ID

It is finally time to write some Python code! We'll get warmed up by coding a short Python script to obtain StarterBot's ID. The ID varies based on the Slack team. We need the ID because it allows our application to determine if messages parsed from the Slack RTM are directed at StarterBot. Our script also tests that our SLACK_BOT_TOKEN environment variable is properly set. Create a new file named and fill it with the following code.
    import os
    from slackclient import SlackClient

    BOT_NAME = 'starterbot'

    slack_client = SlackClient(os.environ.get('SLACK_BOT_TOKEN'))

    if __name__ == "__main__":
        api_call = slack_client.api_call("users.list")
        if api_call.get('ok'):
            # retrieve all users so we can find our bot
            users = api_call.get('members')
            for user in users:
                if 'name' in user and user.get('na[...]
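Once the bot's ID is known, the main loop typically connects to the RTM API and polls for events. Below is a rough sketch of that pattern with the slackclient library of that era; the BOT_ID handling, the reply text and the mention check are illustrative assumptions, not the tutorial's exact code.

```python
# Hypothetical StarterBot main loop -- a sketch of the RTM pattern the
# tutorial builds toward. BOT_ID is whatever the ID-printing script reported.
import os
import time

from slackclient import SlackClient

BOT_ID = os.environ.get('BOT_ID', '')
AT_BOT = '<@' + BOT_ID + '>'
slack_client = SlackClient(os.environ.get('SLACK_BOT_TOKEN'))

if __name__ == '__main__':
    if slack_client.rtm_connect():
        print('StarterBot connected and running!')
        while True:
            # rtm_read returns the list of events received since the last poll
            for event in slack_client.rtm_read():
                if event.get('type') == 'message' and AT_BOT in event.get('text', ''):
                    slack_client.api_call(
                        'chat.postMessage',
                        channel=event['channel'],
                        text='Hi! You mentioned me.',
                        as_user=True,
                    )
            time.sleep(1)
    else:
        print('Connection failed. Check the Slack token or bot ID.')
```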

Import Python: Apologies Planet Python Subscribers

Thu, 27 Oct 2016 00:00:00 +0000

This is Ankur from Import Python. The day before yesterday, the Planet Python feed aggregated a lot of our old posts. My apologies for the mistake. Code introduced to test new feed generation ended up in production due to an incorrect git merge. I have since taken the feed down, tested it, and fixed the problem.

Zato Blog: Interesting real-world Single Sign-On with JWT, WebSockets and Zato

Wed, 26 Oct 2016 18:07:39 +0000

Overview

In a recent project, an interesting situation occurred that let JSON Web Tokens (JWT) and WebSockets, two newly added features of the Zato middleware server, be nicely employed in practice with great results. The starting point was the architecture as below. Pretty common stuff - users authenticate with a frontend web application serving pages to browsers and at the same time communicating with Zato middleware, which provides a unified interface to further backend systems.

WebSockets

Now, the scenario started to look intriguing when at one point a business requirement meant that, in technical terms, it was decided that WebSocket connections be employed so that browsers could be swiftly notified of events taking place in backend systems. WebSockets are straightforward, come with all modern browsers, and recently Zato grew means to mount services on WebSocket channels - this in addition to REST, SOAP, ZeroMQ, AMQP, WebSphere MQ, the scheduler, and all the other already existing channels.

The gotcha

However, when it came to implementation, it turned out that the frontend web application was incapable of acting as a client of Zato services exposed via WebSockets. That is, it could offer WebSockets to browsers but would not be able itself to establish long-running WebSocket connections to Zato - it had simply been designed to work in a strict request-reply fashion and WebSockets were out of its reach. This meant that it was not possible for Zato to notify the frontend application of new events without the frontend constantly polling for them, which defeated the purpose of employing WebSockets in the first place. Thus, seeing as browsers themselves support WebSockets very well, it was agreed that there was no choice but to have each user's browser connect to Zato directly, and WebSocket channels in Zato would ensure that browsers receive notifications as they happen in backend systems.

Browser authentication

Deciding that browsers connect directly to Zato posed a new challenge, however. Whereas previously users authenticated with the frontend, which had its own application-level credentials in Zato, now browsers connecting directly to Zato would also have to authenticate. Naturally, it was ruled out that suddenly users would be burdened with a new username/password to enter anywhere. At the same time it was not desirable to embed the credentials in HTML served to browsers because that would have to be done in clear text. Instead, JWT was used by the frontend application to securely establish a session in Zato and transfer its ownership to a browser.

How JWT works in Zato

At their core, JWT (JSON Web Tokens) are essentially key-value mappings that declare that certain information is true.
In the context of Zato authentication, when selected services on a Zato server are secured with JWT, the following happens:

- Applications need to obtain a token for the security definition assigned to the service - they do it either by sending a username/password to an endpoint that returns a new token, or by calling a special API service that lets an application generate a token on behalf of another application
- No matter how it was generated, the token will contain information about whom it was generated for and until when it is valid
- Such a token is next encrypted on the server side using Fernet keys (AES-128)
- After obtaining the token, the end application needs to provide it to Zato on each call
- When a request comes in, the process is reversed - the JWT is decrypted on the server side, its validity is confirmed, a service is called and its response is returned to the calling application
- Since tokens would otherwise ultimately expire, their validity is extended each time a call is made to Zato with a JWT, which indicates to servers that the token is still in use

In other words, JWT declares that a us[...]
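As a generic illustration of the "signed key-value claims" idea, the snippet below builds and verifies a token with the PyJWT library. This is not Zato's implementation (Zato additionally encrypts its tokens with Fernet keys, as described above); the secret, subject and expiry values are made up.

```python
# Generic JWT illustration with PyJWT -- not Zato's internals.
import datetime

import jwt

secret = 'application-level-secret'

claims = {
    'sub': 'frontend-app',  # whom the token was issued for
    'exp': datetime.datetime.utcnow() + datetime.timedelta(hours=1),  # until when it is valid
}

token = jwt.encode(claims, secret, algorithm='HS256')

# A holder of the secret can later confirm the token's validity
# and read the claims back.
decoded = jwt.decode(token, secret, algorithms=['HS256'])
print(decoded['sub'])
```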

Mike Driscoll: Creating Graphs with Python and GooPyCharts

Wed, 26 Oct 2016 17:15:49 +0000

Over the summer, I came across an interesting plotting library called GooPyCharts, which is a Python wrapper for the Google Charts API. In this article, we will spend a few minutes learning how to use this interesting package. GooPyCharts follows syntax that is similar to MATLAB and is actually meant to be an alternative to matplotlib. To install GooPyCharts, all you need to do is use pip like this:

    pip install gpcharts

Now that we have it installed, we can give it a whirl!

Our First Graph

Using GooPyCharts to create a chart or graph is extremely easy. In fact, you can create a simple graph in 3 lines of code:

    >>> from gpcharts import figure
    >>> my_plot = figure(title='Demo')
    >>> my_plot.plot([1, 2, 10, 15, 12, 23])

If you run this code, you should see your default browser pop open with the figure displayed. You will note that you can download the figure as a PNG or save the data that made the chart as a CSV file. GooPyCharts also integrates with the Jupyter Notebook.

Creating a Bar Graph

The GooPyCharts package has a nice script included to help you learn how to use the package. Unfortunately it doesn't actually demonstrate different types of charts. So I took one of the examples from there and modified it to create a bar chart:

    from gpcharts import figure

    fig3 = figure()
    xVals = ['Temps', '2016-03-20', '2016-03-21', '2016-03-25', '2016-04-01']
    yVals = [['Shakuras', 'Korhal', 'Aiur'], [10, 30, 40], [12, 28, 41], [15, 34, 38], [8, 33, 47]]

    fig3.title = 'Weather over Days'
    fig3.ylabel = 'Dates'
    fig3.bar(xVals, yVals)

You will note that in this example we create our title using the figure instance's title property. We also set the ylabel the same way. You can also see how to define dates for the chart as well as set an automatic legend using nested lists. Finally you can see that instead of calling plot we need to call bar to generate a bar chart. Here is the result:

Creating Other Types of Graphs

Let's modify the code a bit more and see if we can create other types of graphs. We will start with a scatter plot:

    from gpcharts import figure

    my_fig = figure()
    xVals = ['Dates', '2016-03-20', '2016-03-21', '2016-03-25', '2016-04-01']
    yVals = [['Shakuras', 'Korhal', 'Aiur'], [10, 30, 40], [12, 28, 41], [15, 34, 38], [8, 33, 47]]

    my_fig.title = 'Scatter Plot'
    my_fig.ylabel = 'Temps'

    my_fig.scatter(xVals, yVals)

Here we use most of the same data that we used in the last example. We just need to modify a few values to make the X and Y labels work correctly, and we need to title the graph with something that makes sense. When you run this code, you should see something like this:

That was pretty simple. Let's try creating a quick and dirty histogram:

    from gpcharts import figure

    my_fig = figure()
    my_fig.title = 'Random Histogram'
    my_fig.xlabel = 'Random Values'
    vals = [10, 40, 30, 50, 80, 100, 65]
    my_fig.hist(vals)

The histogram is much simpler than the last two charts we created, as it only needs one list of values to create it successfully. This is what I got when I ran the code:

This is a pretty boring looking histogram, but it's extremely easy to modify it and add a more realistic set of data.

Wrapping Up

While this was just a quick run-through of some of GooPyCharts' capabilities, I think we got a pretty good idea of what this charting package is capable of. It's really easy to use, but it only has a small set of charts to work with. PyGal, Bokeh and matplotlib have many other types of charts that they can create. However if you are looking for something that's super e[...]

Zaki Akhmad: Fail Running Test Code

Wed, 26 Oct 2016 16:47:37 +0000

I was trying to recall how I ran the test code, until I found that I had written down the snippet. I copy-pasted it… and the test code failed. Something was wrong. There had been no changes in the test code since my last commit.

I tried the other test code; it looked OK. I tried renaming the failing test file, but still no good. So, what was the problem?

Until I found this stackoverflow question.

So, I tried to import the modules used in the test code via the shell. Now I finally know which part of the code fails to import the library.
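The same check can be scripted: try each import the failing test file uses and let the traceback point at the one that breaks. The module names below are made up placeholders, not the actual imports from that test file.

```python
# Quick sketch: report which of the test file's imports actually fails.
# Replace the names with whatever the failing test module imports.
for name in ('somelibrary', 'anotherlibrary', 'app.models'):
    try:
        __import__(name)
        print('ok:', name)
    except ImportError as exc:
        print('FAILED:', name, '-', exc)
```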

Continuum Analytics News: Recursion Pharmaceuticals Wants to Cure Rare Genetic Diseases - and We’re Going to Help

Wed, 26 Oct 2016 14:17:33 +0000

Company Blog | Wednesday, October 26, 2016 | Michele Chambers, EVP Anaconda Business Unit & CMO, Continuum Analytics

Today we are pleased to announce that Continuum Analytics and Recursion Pharmaceuticals are teaming up to use data science in the quest to find cures for rare genetic diseases. Using Bokeh on Anaconda, Recursion is building its drug discovery assay platform to analyze layered cell images and weigh the effectiveness of different remedies. As we always say, Anaconda arms data scientists with superpowers to change the world. This is especially valuable for Recursion, since success literally means saving lives and changing the world by bringing drug remedies for rare genetic diseases to market faster than ever before.

It's estimated that there are over 6,000 genetic disorders, yet many of these diseases represent a small market. Pharmaceutical companies aren't usually equipped to pursue the cure for each disease. Anaconda will help Recursion by blending biology, bioinformatics and machine learning, bringing cell data to life. By identifying patterns and assessing drug remedies quickly, Recursion is using data science to discover potential drug remedies for rare genetic diseases. In English - this company is trying to cure big, bad, killer diseases using Open Data Science.

The ODS community is important to us. Working with a company in the pharmaceutical industry, an industry that is poised to convert ideas into life-saving medications, is humbling. With so many challenges, not the least of which include regulatory roadblocks and lengthy and complex R&D processes, researchers must continually adapt and innovate to speed medical advances. Playing a part in that process? That's why we do what we do. We're excited to welcome Recursion to the family and observe as it uses its newfound superpowers to change the world, one remedy at a time.

Want to learn more about this news? Check out the press release, here. [...]

Continuum Analytics News: Recursion Pharmaceuticals Selects Anaconda to Create Innovative Next Generation Drug Discovery Assay Platform to Eradicate Rare Genetic Diseases

Wed, 26 Oct 2016 12:01:10 +0000

News | Wednesday, October 26, 2016

Open Data Science Platform Accelerates Time-to-Market for Drug Remedies

AUSTIN, TX—October 26, 2016—Continuum Analytics, the creator and driving force behind Anaconda, the leading Open Data Science platform powered by Python, today announced that Recursion Pharmaceuticals, LLC, a drug discovery company focused on rare genetic diseases, has adopted Bokeh––a Continuum Analytics open source visualization framework that operates on the Anaconda platform. Bokeh on Anaconda makes it easy for biologists to identify genetic disease markers and assess drug efficacy when visualizing cell data, allowing for faster time-to-value for pharmaceutical companies.

“Bokeh on Anaconda enables us to perform analyses and make informative, actionable decisions that are driving real change in the treatment of rare genetic diseases,” said Blake Borgeson, CTO & co-founder at Recursion Pharmaceuticals. “By layering information and viewing images interactively, we are obtaining insights that were not previously possible and enabling our biologists to more quickly assess the efficacy of drugs. With the power of Open Data Science, we are one step closer to a world where genetic diseases are more effectively managed and more frequently cured, changing patient lives forever.”

By combining interactive, layered visualizations in Bokeh on Anaconda to show both healthy and diseased cells along with relevant data, biologists can experiment with thousands of potential drug remedies and immediately understand the effectiveness of the drug to remediate the genetic disease. Biologists realize faster insights, speeding up time-to-market for potential drug treatments.

“Recursion Pharmaceuticals’ data scientists crunch huge amounts of data to lay the foundation for some of the most advanced genetic research in the marketplace. With Anaconda, the Recursion data science team has created a breakthrough solution that allows biologists to quickly and cost effectively identify therapeutic treatments for rare genetic diseases,” said Peter Wang, CTO & co-founder at Continuum Analytics. “We are enabling companies like Recursion to harness the power of data on their terms, building solutions for both customized and universal insights that drive new value in all areas of business and science. Anaconda gives superpowers to people who change the world––and Recursion is a great example of how our Open Data Science vision is being realized and bringing solid, everyday value to critical healthcare processes.”

Data scientists at Recursion evaluate hundreds of genetic diseases, ranging from one evaluation per month to thousands in the same time frame. Bokeh on Anaconda delivers insights derived from heat maps, charts, plots and other scientific visualizations interactively and intuitively, while providing holistic data to enrich the context and allow biologists to discover potential treatments quickly. These visualizations empower the team with new ways to re-evaluate shelved pharmaceutical treatments and identify new potential uses for them. Ultimately, this creates new markets for pharmaceutical investments and helps develop new treatments for people suffering from genetic diseases.

Bokeh on Anaconda is a framework for creating versatile, interactive and browser-based visualizations of streaming data or Big Data from Python, R or Scala without writing any JavaScript.
It allows for exploration, embedded visualization apps and interactive dashboards, so that users can create rich, contextual plots, graphs, charts and more to enable more comprehensive deductions from images.  For additional information about Continuum Analytics and Anaconda please [...]

A. Jesse Jiryu Davis: Announcing Motor 0.7

Wed, 26 Oct 2016 07:16:27 +0000


Three weeks after I released the beta, I’m proud to present Motor 0.7.

For asynchronous I/O Motor now uses a thread pool, which is faster and simpler than the prior implementation with greenlets. It no longer requires the greenlet package, and now requires the futures backport package on Python 2. Read the beta announcement to learn more about the switch from greenlets to threads.

Install with:

python -m pip install motor

This version updates the PyMongo dependency from 2.8.0 to 2.9.x, and wraps PyMongo 2.9’s new APIs.

Since the beta release, I’ve fixed one fun bug, a manifestation in Motor of the same import deadlock I fixed in PyMongo, Tornado, and Gevent last year.

The next release will be Motor 1.0, which will be out in less than a month. Most of Motor 1.0’s API is now implemented in Motor 0.7, and APIs that will be removed in Motor 1.0 are now deprecated and raise warnings.

This is a large release; please read the documentation carefully:

If you encounter any issues, please file them in Jira.

—A. Jesse Jiryu Davis

Kushal Das: Science Hack Day India 2016

Wed, 26 Oct 2016 07:05:00 +0000

A few months back Praveen called to tell me about the new event he was organizing along with FOSSASIA: Science Hack Day, India. I never even registered for the event, as Praveen told me that he had just added my name and Anwesha's there. Sadly, as Py was sick for the last few weeks, Anwesha could not join us at the event. On the 20th Hong Phuc came down to Pune, and in the evening we had the PyLadies meetup in the Red Hat office. On the 21st, early in the morning, we started our journey. Sayan, Praveen Kumar, and Pooja joined us in my car. This was my longest drive to date (I bought the car around a year back). As everyone suggested, the road in Karnataka was smooth. I am now waiting for my next chance to drive on that road. After reaching Belgaum we decided to follow Google Maps, which turned out to be a very bad decision, as the maps took us to a dead end with a blue gate. Later we found many locals had also followed Google Maps and reached the same dead end. The location of the event was Sankalp Bhumi, a very well maintained resort, full of greenery and nature. We stayed in the room just beside the lake. Later at night Saptak joined us. Siddesh, Nisha + Ira also reached later in the evening.

Day 1

We had a quick inauguration event, all mentors talked about the projects they would be working on, and then we moved towards our area. The main hall slowly filled with school kids who had a build-your-own-solar-light workshop (led by Jithin). Pooja also joined the workshop to help the kids with soldering. I managed to grab the largest table in the hack area. Around 9 people joined me; among them we had college students, college professors, and someone who came in saying she was from a different background than computers. I asked her to try this Python thing, and by the end of the day she was also totally hooked on learning. I later found her daughter was also participating in the kids' section. Before lunch we went through the basics of Python as a programming language. All of the participants had Windows laptops, so it was fun to learn various small things about Windows, but we managed to get going well. Later we started working on MicroPython. We went ahead step by step: first turning on an LED, later moving to DHT11 sensors for temperature and humidity measurements. By late afternoon all of us had managed to write code to read the measurements from the sensors. I had some trouble with the old firmware I was using, but using the latest nightly firmware helped to fix the issue related to MQTT. I kept one of the boards running for the whole night, and Sayan wrote the client to gather the information from the Mosquitto server. In the evening we also had lightning talks; I gave a talk there about the dgplug summer training. The last talk in the evening was from Prof. Pravu, and during that talk I saw someone start a powerful gas stove outside the hut. I was then totally surprised to learn that the fire was not from gas: using water and some innovative design, his team had managed to make a small stove which has 35% efficiency with any biomass; the fire was blue, and there was no smoke. This was super cool. After dinner, there was a special live show of laser lights and sound work done by Praveen. Teachers are an important part of our lives. When we see someone like Praveen, who is taking the learning experience to another level while being in one of the small towns in India, it gives us a lot of pride.
Btw, if you are wondering, he uses Python for most of his experiments :) Day 2 I managed to move to the hack area early morning, and kept the setup ready. My team joined me after breakfast. They decided to keep one of the boards under the Sun beside [...]
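For reference, reading a DHT11 from MicroPython, as described in the Day 1 write-up above, looks roughly like the sketch below. This is a generic example assuming an ESP8266-style board with the sensor's data line on GPIO 4, not the exact code written at the workshop.

```python
# Generic MicroPython sketch for a DHT11 temperature/humidity read.
# The pin number and sleep interval are illustrative.
import time

import dht
import machine

sensor = dht.DHT11(machine.Pin(4))

while True:
    sensor.measure()  # trigger a reading
    print('temperature:', sensor.temperature(), 'C')
    print('humidity:', sensor.humidity(), '%')
    time.sleep(5)
```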

Vasudev Ram: Read from CSV with D, write to PDF with Python

Wed, 26 Oct 2016 03:37:22 +0000

By Vasudev Ram

CSV => PDF

Here is another in my series of applications of xtopdf, my PDF creation toolkit for Python (xtopdf source here). This xtopdf application is actually a pipeline (nothing Unix-specific though; it will work on both *nix and Windows): a D program reading CSV data and sending it to a Python program, which writes the data to PDF.

The D program, read_csv.d, reads CSV data from a .csv file and writes it to standard output. The Python program, StdinToPDF (which is part of the xtopdf toolkit), reads its standard input (which is redirected by the pipeline to come from the D program's standard output) and writes the data it reads to PDF.

Here is the D program, read_csv.d:

    /**************************************************
    File: read_csv.d
    Purpose: A program to read CSV data from a file
             and write it to standard output.
    Author: Vasudev Ram
    Date created: 2016-10-25
    Copyright 2016 Vasudev Ram
    Web site: https://vasudevram.github.io
    Blog: http://jugad2.blogspot.com
    Product store:
    **************************************************/

    import std.algorithm;
    import std.array;
    import std.csv;
    import std.stdio;
    import std.file;
    import std.typecons;

    int main()
    {
        try
        {
            stderr.writeln("Reading CSV data from file.");
            auto file = File("input.csv", "r");
            foreach (record;
                file.byLine.joiner("\n").csvReader!(Tuple!(string, string, int)))
            {
                writefln("%s works as a %s and earns $%d per year",
                    record[0], record[1], record[2]);
            }
        }
        catch (CSVException csve)
        {
            stderr.writeln("Caught CSVException: msg = ", csve.msg,
                " at row, col = ", csve.row, ", ", csve.col);
        }
        catch (FileException fe)
        {
            stderr.writeln("Caught FileException: msg = ", fe.msg);
        }
        catch (Exception e)
        {
            stderr.writeln("Caught Exception: msg = ", e.msg);
        }
        return 0;
    }

The D program is compiled as usual with:

    dmd read_csv.d

I ran it first (only the D program) with an invalid CSV file (it has an extra comma at the start of line 3, which invalidates the data by making "Driver" be in the salary column position), and got the expected error message, which includes the row and column number of the place in the CSV file where the program encountered the error - this is useful for fixing the input data:

    $ type input.csv
    Jack,Carpenter,40000
    Tom,Blacksmith,50000
    ,Jill,Driver,60000

    $ read_csv
    Reading CSV data from file.
    Jack works as a Carpenter and earns $40000 per year
    Tom works as a Blacksmith and earns $50000 per year
    Caught CSVException: msg = Unexpected 'D' when converting from type string to type int at row, col = 3, 3

Then I ran it again, in the regular way, this time with a valid CSV file, and as part of a pipeline, the other pipeline component being StdinToPDF:

    $ read_csv | python StdinToPDF.py csv_output.pdf
    Reading CSV data from file.

And here is a cropped view of the output as seen in Foxit PDF Reader.

Enjoy.

- Vasudev Ram - Online Python training and consulting
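For comparison, the same first pipeline stage can be written in a few lines of Python with the standard csv module. This is only an illustrative alternative, not part of xtopdf; it assumes the same three-column input.csv shown above and prints the same lines, so it could be piped into StdinToPDF.py in the same way.

```python
# Python equivalent of read_csv.d's role in the pipeline (illustrative only).
import csv

with open('input.csv', newline='') as f:
    for name, job, salary in csv.reader(f):
        print('%s works as a %s and earns $%d per year' % (name, job, int(salary)))
```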

Import Python: ImportPython Issue 95

Wed, 26 Oct 2016 01:39:50 +0000

Worthy Read

- Fixing Python Performance with Rust (performance). Excellent post from Armin Ronacher on tackling a CPython performance bottleneck with a custom Rust extension module.
- How to create read-only attributes and restrict setting attribute values on an object in Python? (core python). There are different ways to prevent setting attributes and make attributes read-only on an object in Python. We can use any one of the following to make attributes read-only: 1) Property Descriptor 2) Using descriptor methods __get__ and __set__ 3) Using slots (only restricts setting arbitrary attributes).
- Filestack - Upload Files From Anywhere (Sponsor). The API for file uploads. Integrate Filestack in 2 lines of code. Python library for Filestack.
- … to Deploy a Django Application to Digital Ocean (deployment). In this tutorial we will be deploying …, an empty Django project I created to illustrate the deployment process.
- Asynchronous Scraping with Python. Scraping is often an example of code that is embarrassingly parallel. With some slight changes, our tasks can be done asynchronously, allowing us to process more than one URL at a time. In version 3.2, Python introduced the concurrent.futures module, which is a joy to use for parallelizing tasks like scraping. The rest of this post will show how we can use the module to make our previously synchronous code asynchronous.
- Weekly Python Chat: Class-Based Views in Django (video). Most Django programmers use function-based views, but some use class-based views. Why? Special guest Buddy Lindsey will be joining us this week to talk about how class-based views are different.
- Talk Python to Me: #80 TinyDB: A tiny document db written in Python (podcast). I'm excited to introduce you to Markus Siemens and TinyDB. This is a 100% pure Python, embeddable, pip-installable document DB for Python.
- Handling statuses in Django (django, finite state machine). Whether you're building up a CMS or a bespoke application, chances are that you will have to handle some states / statuses. Let's discuss your options in Django.
- JIRA IT Help Desk & Ticketing (Sponsor). Start a free trial of JIRA Service Desk and get your free Konami Code shirt.
- Upgrading Django - Never Clever (django). General guidelines when upgrading Django.
- My Startling Encounter With Python Debuggers (Part 2) (debugging). Benoit writes about debugging his software using gdb and python-debuginfo.
- Yoda on python dependency (humor). Check the tweet :)
- lptrace (open source project). lptrace is strace for Python programs. It lets you see in real time what functions a Python program is running. It's particularly useful for debugging weird issues in production.
- Static types in Python, oh my(py)! (mypy). In this post, I'll explain how mypy works, the benefits and pain points we've seen in using mypy, and share a detailed guide for adopting mypy in a large production codebase (including how to find and fix dozens of issues in a large project in the first few days of using mypy!).
- sanic (web server). Python 3.5+ web server that's written to go fast.
- Great Dev - Meet Great Jobs (Sponsor). Try Hired and get in front of 4,000+ companies with one application. No more pushy recruiters, no more dead end applications and mismatched companies; Hired puts the power in your hands.

Jobs

- CTO / Lead Developer at Patch - Hoxton, City of London, London, United Kingdom. Patch are hiring a CTO / Lead developer. We are expanding our tech team as part of scaling the company. This is an opportunity to make a big impact on our e-commerce platform and help shape the new services we're creating.

Upcoming Conference / [...]

Import Python: ImportPython Issue 94

Wed, 26 Oct 2016 01:39:50 +0000

Worthy Read

- Review of Djaneiro, a Sublime Text plugin for Django development [django, sublime]: In this review I'll explain how Djaneiro can make your Django development workflow more productive and go over the pros and cons of the plugin as I experienced them. After that I'll take a look at alternatives to Djaneiro in the Sublime Text plugin landscape. At the end I'll share my final verdict and ratings.
- PyData DC 2016 -- The Five Kinds of Python Functions: Talk by Steven F. Lott.
- Python AST explorer [ast]: Write Python code and see what the AST looks like, right in the browser. No installation needed.
- Python cheat sheets: This project collects many small pieces of Python code that make life easier.
- pandasql: Make Python speak SQL [pandasql]: pandasql is a Python package we (Yhat) wrote that emulates the R package sqldf. It's a small but mighty library comprised of just 358 lines of code. The idea of pandasql is to make Python speak SQL. For those of you who come from a SQL-first background or still "think in SQL", pandasql is a nice way to take advantage of the strengths of both languages. (A short usage sketch follows this issue's list.)
- Great Dev - Meet Great Jobs (Sponsor): Try Hired and get in front of 4,000+ companies with one application. No more pushy recruiters, no more dead-end applications and mismatched companies; Hired puts the power in your hands.
- Python Library for Learning Binary Trees: BinaryTree is a minimal Python library which provides a simple API to generate, visualize and inspect binary trees, so you can skip the tedious work of mocking up test trees and dive right into practising your algorithms. Heaps and BSTs (binary search trees) are also supported.
- Patat – Terminal-based presentations using Pandoc: patat (Presentations And The ANSI Terminal) is a small tool that allows you to show presentations using only an ANSI terminal. It does not require ncurses.
- PyCon 2017 site is live [pycon]: The PyCon 2017 (US) site is live. Note: registration starts on Oct 17th. If you are looking to speak or attend, check the Call For Proposals (CFP) dates for talk, tutorial and paper submissions.
- Async Python: The Different Forms of Concurrency - by Abu Ashraf Masnun [concurrency]: In this post we explore the different ways we can achieve concurrency and the benefits and drawbacks of each. With all the buzz about "async" and "concurrency" since the advent of Python 3, one might assume that Python only recently introduced these concepts and capabilities. That would be quite far from the truth: we have had async and concurrent operations for quite some time now. Many beginners also assume that asyncio is the only, or best, way to do async/concurrent work.
- Functional Python: Functional programming is a discipline, not a language feature. It is supported by a wide variety of languages, although those languages can make it more or less difficult to practice the discipline. Python has a number of features that support functional programming, including map/reduce functions, partial application, and decorators.
- A Whirlwind Tour of Python [python3]: Jake VanderPlas explains Python's essential syntax and semantics, built-in data types and structures, function definitions, control flow statements, and more, using Python 3 syntax.
- An Intro to the Python Imaging Library / Pillow: The Python Imaging Library, or PIL, lets you do image processing in Python. Here is a tutorial.
- How to Create a Diff of an Image in Python [image processing, pillow]: For the past couple of years, I've been writing automated tests for my employer. One of the many types of tests that I do is comparing how an a[...]
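For readers curious what "making Python speak SQL" looks like in practice, here is a minimal sketch of the kind of query pandasql enables. The DataFrame and its columns are invented for illustration, the snippet assumes pandas and pandasql are installed, and it is not taken from the article itself.

```python
import pandas as pd
from pandasql import sqldf

# A small, made-up DataFrame standing in for real data.
meat = pd.DataFrame({
    "animal": ["beef", "pork", "chicken", "turkey"],
    "production": [26.0, 23.5, 38.1, 5.4],
})

# sqldf() runs a SQL query against any DataFrame visible in the
# namespace you pass in (here: this module's variables).
query = """
    SELECT animal, production
    FROM meat
    WHERE production > 20
    ORDER BY production DESC
"""
print(sqldf(query, locals()))
```

The appeal the article describes is exactly this: you keep pandas objects, but express the filtering and ordering in plain SQL.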

Import Python: ImportPython Issue 93

Wed, 26 Oct 2016 01:39:50 +0000

Worthy Read

- Setting up Sublime Text for Python Developers [sublime]: We have been sharing Daniel's articles and videos from his YouTube channel for a while now. Daniel Bader just published his book on Sublime Text for Python developers. Have a look at it if you are a Sublime Text user. Here is a 30% discount for all ImportPython subscribers.
- Implementing the "Soft Delete" Pattern with Flask and SQLAlchemy [flask, SQLAlchemy]: You can find lots of reasons to never delete records from your database. The Soft Delete pattern is one of the available options to implement deletions without actually deleting the data. It does so by adding an extra column to your database table(s) that keeps track of the deleted state of each row. This sounds straightforward to implement, and strictly speaking it is, but the complications that derive from the use of soft deletes are far from trivial. In this article I will discuss some of these issues and how I avoid them in Flask and SQLAlchemy based applications. (A minimal sketch of the idea follows this issue's list.)
- A Dramatic Tour through Python's Data Visualization Landscape (including ggplot and Altair) [data visualization]: A comprehensive listing of data visualization packages with small code snippets.
- Database concurrency in Django the right way [django]: Guilherme Caminha explores the on_commit hook, available from Django 1.9 onwards, for running part of a time-consuming task in a Django view while the rest is offloaded to an async process.
- Python has come a long way. So has job hunting. (Sponsor): Try Hired and get in front of 4,000+ companies with one application. No more pushy recruiters, no more dead-end applications and mismatched companies; Hired puts the power in your hands.
- Thinking in coroutines [async-io]: Lukasz Langa uses the asyncio source code to explain the event loop, blocking calls, coroutines, tasks, futures, thread pool executors, and process pool executors.
- Automate generation of man pages for Python click applications [open source project]: Click is my go-to Python package for creating command line applications. click-man will generate one man page per command of your click CLI application, as declared in console_scripts.
- PyDev of the Week: Bryan Van de Ven [interview]: Bryan is a core developer of the Bokeh project, which is a visualization package for Python. He has also helped with the development of Anaconda.
- Flashlight: Flashlight is a lightweight Python library for analyzing and solving quadrotor control problems. It enables you to easily solve for minimum snap trajectories that go through a sequence of waypoints, compute the required control forces along trajectories, execute the trajectories in a physics simulator, and visualize the simulation results.
- Church [open source project]: Church is a library to generate fake data. It's very useful when you need to bootstrap your database.
- 5 music things and Python: Raspberry Pi and Python projects/scripts.
- validr: A simple, fast, extensible Python library for data validation.

Upcoming Conference / User Group Meet

- Santa Cruz Python Meetup
- Python Brasil [12]
- PyCon Ireland 2016
- PyCon Canada 2016
- PyData Cologne 2016
- GeoPython 2017

Projects

- tf-agent - 27 Stars, 1 Fork: TensorFlow reinforcement learning agents for OpenAI Gym environments.
- become - 5 Stars, 0 Forks: Make one object become another.
- python-line-api - 4 Stars, 0 Forks: SDK of the LINE Messaging API.
- [...] - 2 Stars, 0 Forks: Football stats is a system which has the purpose of helping football match analyses. The final goal of the project is to have the capability of ball and playe[...]
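As a rough illustration of the soft delete idea described in the Flask/SQLAlchemy item above (and not the article's actual code), here is a minimal SQLAlchemy sketch with a deleted flag and a query helper. The model, column names and helper functions are invented for the example.

```python
from sqlalchemy import Boolean, Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Post(Base):
    __tablename__ = "posts"
    id = Column(Integer, primary_key=True)
    title = Column(String(200), nullable=False)
    # Soft delete: rows are flagged instead of being removed.
    deleted = Column(Boolean, default=False, nullable=False)


def soft_delete(session, post):
    """Mark a post as deleted without removing the row."""
    post.deleted = True
    session.commit()


def visible_posts(session):
    """Every read path must remember to filter the flag out."""
    return session.query(Post).filter_by(deleted=False)


if __name__ == "__main__":
    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add_all([Post(title="keep me"), Post(title="remove me")])
    session.commit()
    soft_delete(session, session.query(Post).filter_by(title="remove me").one())
    print([p.title for p in visible_posts(session)])  # ['keep me']
```

Even this toy shows the main trade-off: every query has to filter on the flag, which is exactly the kind of complication the article goes on to address.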

Import Python: ImportPython Issue 92 - django-perf-rec track django performance, Mock testing, python alias more

Wed, 26 Oct 2016 01:39:50 +0000

Worthy Read

- Python alias commands that play nice with virtualenv: Over the years, I've come up with my own Python aliases that play nice with virtual environments. For this post, I tried to stay as generic as possible, so that any alias here can be used by every Pythonista.
- Keep detailed records of the performance of your Django code [django, performance]: django-perf-rec is like Django's assertNumQueries on steroids. It lets you track the individual queries and cache operations that occur in your code. This blog post explains the workings of the project.
- Practical ML for Engineers talk at #pyconuk last weekend [machine learning]: Last weekend I had the pleasure of introducing Machine Learning for Engineers (a practical walk-through, no maths) at PyConUK 2016 (video link on the page). My talk covered a practical guide to a two-class classification challenge (Kaggle's Titanic) with scikit-learn, backed by a longer Jupyter Notebook (on GitHub) and further backed by Ezzeri's two-hour tutorial from PyConUK 2014.
- Mocks and Monkeypatching in Python [testing]: This tutorial will help you understand why mocking is important, and show you how to mock in Python with Mock and pytest's monkeypatch. (A small self-contained example follows this issue's list.)
- Abu Ashraf Masnun: Introduction to Django Channels: Yet another introduction to Django Channels. This one is a much clearer, step-by-step tutorial. If you still don't know what Django Channels is or how to get started, read this.
- Python has come a long way. So has job hunting. (Sponsor): Try Hired and get in front of 4,000+ companies with one application. No more pushy recruiters, no more dead-end applications and mismatched companies; Hired puts the power in your hands.
- Python Mocks: a gentle introduction - Part 1 and 2 [testing, mock]: In this series of posts I am going to review the Python mock library and exemplify its use. I will not cover everything you may do with mock, obviously, but hopefully I'll give you the information you need to start using this powerful library. Note that it's a two-part series as of now; here is the second part's URL.
- Decorators: The Function's Function - Weekly Python Chat with Trey Hunner [webcast, video]: Decorators are one of those features in Python that people like to talk about. Why? Because they're different. Because they're a little weird. Because they're a little mind-bending. Let's talk about decorators: how do you make them and when should you use them?
- Simple REST APIs for charts and datasets [charts]: The Plotly V2 API suite is a simple alternative to the Google Charts API. Make a request to a Plotly URL and get a link to a dataset or D3 chart. Python code snippets are included on the page.
- Python Code Review: Unplugged – Episode 2 - by Daniel Bader [code review]: Daniel is doing a series of code review sessions with Python developers. Have a look at the accompanying video, where he gives his opinion on an open source project by Milton.
- Python by the C side [c binding]: CPython, the primary implementation of Python used by millions, is written in C. Python core developers embraced and exposed Python's strong C roots, taking a traditional tack on portability, contrasting with the "write once, debug everywhere" approach popularized elsewhere. The community followed suit with the core developers, developing several methods for linking to C. This has given us a lot of choi[...]
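To give a flavour of what the mocking tutorials above cover, here is a tiny self-contained example using the standard library's unittest.mock. The functions under test are invented for illustration; pytest's monkeypatch fixture could be used in much the same way.

```python
import unittest
from unittest import mock


def fetch_price(symbol):
    """Pretend this talks to a slow external pricing service."""
    raise RuntimeError("no network calls in tests, please")


def portfolio_value(symbols):
    """Sum the current price of every symbol in the portfolio."""
    return sum(fetch_price(s) for s in symbols)


class PortfolioTest(unittest.TestCase):
    def test_portfolio_value_uses_fetched_prices(self):
        # Temporarily replace fetch_price in this module with a mock,
        # so no real "network" call happens during the test.
        with mock.patch(__name__ + ".fetch_price", return_value=10.0) as fake:
            self.assertEqual(portfolio_value(["ABC", "XYZ"]), 20.0)
            self.assertEqual(fake.call_count, 2)


if __name__ == "__main__":
    unittest.main()
```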

Import Python: ImportPython Issue 91 - asynq from quora, python packaging ecosystem and more

Wed, 26 Oct 2016 01:39:50 +0000

Worthy Read

- I will be attending PyCon India 2016 in Delhi [importpython]: Hey guys, this is Ankur, the curator behind ImportPython. I will be attending PyCon India. Happy to meet you all, discuss all things Python, and get your opinion on the newsletter and how to make it better. Ping me at ankur at outlook dot com or just reply to this email; I will respond. See you there.
- Asynchronous Programming in Python at Quora and asynq [async-io]: asynq is a library for asynchronous programming in Python with a focus on batching requests to external services. It also provides seamless interoperability with synchronous code, support for asynchronous context managers, and tools to make writing and testing asynchronous code easier. asynq was developed at Quora and is a core component of Quora's architecture. See the original blog post here.
- Python has come a long way. So has job hunting. (Sponsor): Try Hired and get in front of 4,000+ companies with one application. No more pushy recruiters, no more dead-end applications and mismatched companies; Hired puts the power in your hands.
- The Python Packaging Ecosystem - Nick Coghlan [packaging]: There have been a few recent articles reflecting on the current status of the Python packaging ecosystem from an end user perspective, so it seems worthwhile for me to write up my perspective, as one of the lead architects of that ecosystem, on how I characterise the overall problem space of software publication and distribution, where I think we are at the moment, and where I'd like to see us go in the future.
- Deploying modern Python apps to ancient infrastructure with pkgsrc [infrastructure]: This team is responsible for supplying a variety of web apps built on a modern stack (mostly Celery, Django, nginx and Redis), but has almost no control over the infrastructure on which they run, and boy, is some of that infrastructure old and stinky. We have no root access to these servers, most software configuration requires a ticket with a lead time of 48 hours plus, and there are the watchful eyes of a crusty old administrator and an obtuse change management process. The machines are so old that many are still running on real hardware, and those that are VMs still run some ancient variety of Red Hat Linux with, if we're lucky, Python 2.4 installed.
- Making publication ready Python notebooks [ipython]: The notebook functionality of Python provides a really amazing way of analyzing data and writing reports in one place. However, in the standard configuration, the PDF export of a Python notebook is somewhat ugly and impractical. In the following I will present my choices for creating almost publication-ready reports from within an IPython/Jupyter notebook.
- SF PyBay conference videos: Paul Bailey's "A Guide to Bad Programming" at PyBay2016 was my favourite talk of all. Check out the YouTube channel.
- Compressing and enhancing hand-written notes [image processing]: I wrote a program to clean up scans of handwritten notes while simultaneously reducing file size. Some of my classes don't have an assigned textbook. For these, I like to appoint weekly "student scribes" to share their lecture notes with the rest of the class, so that there's some kind of written resource for students to double-check their understanding of the material. The notes get posted to a course website as PDFs.
- Image Manipulation in Python - Tutorial [image processing]: This tutorial will show you how to transform an image with different filters and techniques to deliver different outputs. These methods are still in use and part of a [...] (A brief Pillow sketch, not from the tutorial itself, follows this issue's list.)
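The image manipulation tutorial listed above works with Pillow-style filters. As a rough idea of the kind of transformations involved, here is a small sketch; the input file name is a placeholder and the specific filters the tutorial uses may differ.

```python
from PIL import Image, ImageFilter, ImageOps

# "photo.jpg" is a placeholder; point this at any image on disk.
img = Image.open("photo.jpg")

# A few common transformations: blur, edge detection, grayscale.
blurred = img.filter(ImageFilter.GaussianBlur(radius=2))
edges = img.convert("L").filter(ImageFilter.FIND_EDGES)
gray = ImageOps.grayscale(img)

blurred.save("photo_blurred.jpg")
edges.save("photo_edges.jpg")
gray.save("photo_gray.jpg")
```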

Import Python: ImportPython Issue 90 - Real-time streaming data pipeline, generators, channels, and more

Wed, 26 Oct 2016 01:39:50 +0000

Worthy Read

- Real-time streaming data pipeline in Python [streaming]: Motorway is a real-time data pipeline, much like Apache Storm, but made in Python :-) We use it over at Plecto and we're really happy with it, and we're continuously developing it. We started this project because we wanted something similar to Storm, but without ZooKeeper and without the need to take the pipeline down to update the topology.
- Working with streaming data: Using the Twitter API to capture tweets [twitter]: This tutorial teaches event-driven programming by making use of the streaming API offered by Twitter.
- Python has come a long way. So has job hunting. (Sponsor): Try Hired and get in front of 4,000+ companies with one application. No more pushy recruiters, no more dead-end applications and mismatched companies; Hired puts the power in your hands.
- Introduction to Python generators [generators]: In this guide we'll cover generators in depth. We'll talk about how and why to use them, the difference between generator functions and regular functions, and the yield keyword, with plenty of examples. The guide assumes you have a basic knowledge of Python (especially regular functions) and works towards solving a problem throughout. (A short example follows this issue's list.)
- Python 3.6.0 beta 1 is now available! [python3]: Python 3.6.0b1 is the first of four planned beta releases of Python 3.6, the next major release of Python, and marks the end of the feature development phase for 3.6. There are quite a few new features; have a look.
- Channels adopted as an official Django project [django]: The Django team is pleased to announce that the Channels project is now officially part of the Django project, under our new Official Projects program. Channels is the effort to bring WebSockets, long-poll HTTP, and other non-request-response protocol and business logic handling to Django, as part of our ongoing effort to establish what makes a useful web framework in 2016.
- Testing dates in Django [testing]: Django makes unit and functional testing easy (especially with WebTest). Tests of routing, permissions, database updates and emails are all straightforward to implement, but how do you test dates and times? You might, for example, want to test regular email notifications.
- A Brief Introduction to Django Channels [django]: The idea behind Channels is quite simple. To understand the concept, let's first walk through an example scenario and see how Channels would process a request.
- Episode 74 - Python at Zalando [podcast, community]: Open source has proven its value in many ways over the years. In many companies that value is purely in terms of consuming available projects and platforms. In this episode Zalando describes their recent move to creating and releasing a number of their internal projects as open source and how that has benefited their business. We also discussed how they are leveraging Python and a couple of the libraries that they have published.
- Book Contest: Win a Copy of Python 201 [books]: To win your copy of this book, all you need to do is leave a comment below highlighting why you would like to win it. Try your luck guys :)
- Machine Learning in a year [machine learning]: The idea that only people with masters degrees or PhDs work with machine learning professionally isn't true. The truth is you don't need much maths to get started with machine learning, and you don't need a degree to use it professionally. Here is Per Harald Borgen's journey. Yes, he is using Python. 12 versions [...]
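For anyone skimming the generators guide above, the core distinction it builds on can be shown in a few lines. The functions and numbers below are arbitrary examples, not taken from the guide.

```python
def squares_list(n):
    """Regular function: builds the whole list in memory before returning."""
    result = []
    for i in range(n):
        result.append(i * i)
    return result


def squares_gen(n):
    """Generator function: yields one value at a time, lazily."""
    for i in range(n):
        yield i * i


print(squares_list(5))       # [0, 1, 4, 9, 16]
print(list(squares_gen(5)))  # [0, 1, 4, 9, 16]

# Because values are produced on demand, a generator can represent
# sequences far too large to hold in memory at once.
big = squares_gen(10**12)
print(next(big), next(big), next(big))  # 0 1 4
```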

Import Python: ImportPython Issue 89

Wed, 26 Oct 2016 01:39:50 +0000

Worthy Read

- Build a Slack bot that mimics your colleagues with Python using Markov Chains [bot]: Imagine in your company Slack team there's this person (we'll call him Jeff). Everything that Jeff says is patently Jeff. Maybe you've even coined a term amongst your group: a Jeffism. What if you could program a Slack bot that randomly generates messages that were undeniably Jeff? (A stripped-down sketch of the Markov chain part follows this issue's list.)
- Does Python have a ternary conditional operator? [core python]: Learn how to use Python's ternary operator to create powerful "one-liners" and improve the logical construction of your code.
- 500 Lines or Less | A Python Interpreter Written in Python: Byterun is a Python interpreter implemented in Python. Through my work on Byterun, I was surprised and delighted to discover that the fundamental structure of the Python interpreter fits easily into the 500-line size restriction. This chapter will walk through the structure of the interpreter and give you enough context to explore it further. The goal is not to explain everything there is to know about interpreters; like so many interesting areas of programming and computer science, you could devote years to developing a deep understanding of the topic.
- Looking for a Better Python Job? (Sponsor): Get offers from companies, not chase them. Try Hired and get in front of 4,000+ companies with one application. No more pushy recruiters, no more dead-end applications and mismatched companies; Hired puts the power in your hands.
- Episode 73 - Alex Martelli: Note from the curator: I met Alex at PyCon Singapore (PyCon APAC, as it was called then) and found him inspirational. We sat down and talked about Java developers' obsession with design patterns. It was a blast; I wonder if he would remember. Here is a podcast where he is interviewed. Alex Martelli has dedicated a large part of his career to teaching others how to work with software. He has the highest number of Python questions answered on Stack Overflow, he has written and co-written a number of books on Python, and he has presented innumerable times at conferences in multiple countries. We spoke to him about how he got started in software, his work with Google, and the trends in development and design patterns that are shaping modern software engineering.
- Machinalis: OCR with Django and Tesseract [django, OCR]: A Django site that integrates with Tesseract to provide an OCR service.
- Using the Messages Framework [django]: A tutorial on how to use Django's messages framework.
- Total of pip packages downloaded, separated by Python versions [benchmark]: Wow, 3.x isn't far behind; a couple of years, maybe. I see more and more companies using the 3.x series for newer projects.
- Continuum Analytics News: Introducing GeoViews: GeoViews is a new Python library that makes it easy to explore and visualize geographical, meteorological, oceanographic, weather, climate, and other real-world data. GeoViews was developed by Continuum Analytics, in collaboration with the Met Office. GeoViews is completely open source, available under a BSD license freely for both commercial and non-commercial use, and can be obtained as described on the GitHub site.
- Mike Driscoll: PyDev of the Week: Reinout van Rees [interview]: This week we welcome Reinout van Rees (@reinoutvanrees) as our PyDev of the Week! Reinout is the creator / maintainer of zest.releaser. He has a nice website that includes a Python blog you might want to check out. I would also recommend checking his GitHub page to see what projects he's a part of. Note [...]
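The Slack bot article above rests on a simple Markov chain built from a colleague's message history. The sketch below shows only that part; the Slack API wiring is omitted, the corpus and function names are invented for illustration, and the article's own implementation may differ.

```python
import random
from collections import defaultdict


def build_chain(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain


def generate(chain, length=12):
    """Walk the chain, picking a random successor at each step."""
    word = random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        out.append(word)
    return " ".join(out)


# A made-up "Jeff" message history standing in for real Slack logs.
jeff_history = (
    "synergy is key to our roadmap and our roadmap is key to synergy "
    "let us circle back on the roadmap after we touch base on synergy"
)
print(generate(build_chain(jeff_history)))
```

A real bot would feed the generated text back into Slack, but the generation step is just this random walk over observed word transitions.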

Import Python: ImportPython Issue 88

Wed, 26 Oct 2016 01:39:50 +0000

Worthy Read

- doctest — Testing Through Documentation — PyMOTW 3 [testing]: doctest tests source code by running examples embedded in the documentation and verifying that they produce the expected results. It works by parsing the help text to find examples, running them, then comparing the output text against the expected value. Many developers find doctest easier to use than unittest because, in its simplest form, there is no API to learn before using it. (A tiny example follows this issue's list.)
- Follow getpy on Twitter [twitter]: If you like this newsletter and you are on Twitter, you will want to follow getpy. It shares a daily selection of 4-5 tweets that are super relevant to Python.
- Deploying Django with Gunicorn and Supervisor [django]: We deploy all Django applications with Gunicorn and Supervisor. I personally prefer Gunicorn to uWSGI because it has better configuration options and more predictable performance. In this article we will be deploying a typical Django application. We won't be using async workers because we're just serving HTML and there are no heavy-lifting tasks in the background.
- Python 3 Patterns, Recipes and Idioms [python3]: What you see here is an early version of the book.
- Rules for Radicals: Changing the Culture of Python at Facebook [video]: Today, services built on Python 3.5 using asyncio are widely used at Facebook. But as recently as May of 2014 it was actually impossible to use Python 3 at Facebook. Come learn how we cut the Gordian Knot of dependencies and social aversion to the point where new services are now being written in Python 3 and existing codebases have plans to move to Python 3.5.
- Brian Okken: 21: Terminology: test fixtures, subcutaneous testing, end to end testing, system testing: Covered in this episode: test fixtures, subcutaneous testing, and end-to-end testing (system testing). Curator's note: of all the podcasts out there, pythontesting is my favourite.
- Looking for a Better Python Job? (Sponsor): Get offers from companies, not chase them. Try Hired and get in front of 4,000+ companies with one application. No more pushy recruiters, no more dead-end applications and mismatched companies; Hired puts the power in your hands.
- Reuven Lerner: Implementing "zip" with list comprehensions [core python]: A simple tutorial with code snippets on zip.
- Automating OSINT: Dark Web OSINT With Python Part Three: Visualization [security]: Welcome back! In this series of blog posts we are wrapping the awesome OnionScan tool and then analyzing the data that falls out of it. If you haven't read parts one and two in this series, you should go do that first. In this post we are going to analyze our data in a new light, visualizing how hidden services are linked together as well as how hidden services are linked to clearnet sites. One of the awesome things that OnionScan does is look for links between hidden services and clearnet sites and make these links available to us in the JSON output. Additionally it looks for IP address leaks or references to IP addresses that could be used for deanonymization.
- Running Django on Container Engine | Python | Google Cloud Platform [django]: How to deploy a Django app on Google Cloud.
- Conda: Myths and Misconceptions: In the four years since its initial release, many words have been spilt introducing conda and espousing its merits, but one thing I have consistently noticed is the number of misconceptions that seem to remain in the (often fervent) discussions surrounding this tool. I hope in this post to do a small p[...]
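As a quick illustration of the doctest workflow described above, here is a tiny function whose docstring doubles as its test suite; run the module directly, or via python -m doctest, to have the examples checked. The function itself is made up for the example.

```python
def average(values):
    """Return the arithmetic mean of a non-empty sequence of numbers.

    >>> average([1, 2, 3, 4])
    2.5
    >>> average([10])
    10.0
    """
    return sum(values) / len(values)


if __name__ == "__main__":
    # Parse the docstrings in this module, run the examples, and
    # report any mismatches between expected and actual output.
    import doctest
    doctest.testmod(verbose=True)
```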