Subscribe: Planet Python
http://www.planetpython.org/rss20.xml
Language: English

Planet Python - http://planetpython.org/

Sandipan Dey: Some more Social Network Analysis with Python

Fri, 22 Sep 2017 00:02:43 +0000

In this article, some more social networking concepts will be illustrated with a few problems. The problems appeared in the programming assignments of the Coursera course Applied Social Network Analysis in Python. The descriptions of the problems are taken from the assignments, and the analysis is done using NetworkX. 1. Measures of Centrality: In this assignment, we explore … Continue reading Some more Social Network Analysis with Python
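
For readers who want to try the centrality measures mentioned above before tackling the assignments, here is a minimal self-contained sketch using NetworkX on a toy graph (the graph is made up for illustration; it is not the course's data):

import networkx as nx

# A small toy friendship graph, invented for illustration
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])

# Three common centrality measures
print(nx.degree_centrality(G))       # fraction of other nodes each node touches
print(nx.closeness_centrality(G))    # inverse of average shortest-path distance
print(nx.betweenness_centrality(G))  # fraction of shortest paths through a node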



Brian Okken: What info do you need to start testing?

Thu, 21 Sep 2017 20:42:04 +0000

I got some feedback today from someone who said they were looking through the pythontesting.net/start-here page and were still a bit confused as to how to start testing. I totally understand that and want to improve it. I want to make the site more friendly to people new to testing. What info would help? (try […]

The post What info do you need to start testing? appeared first on Python Testing.




Continuum Analytics Blog: Back to School: Data Science & AI Open New Doors for Students

Thu, 21 Sep 2017 18:40:09 +0000

School is back in session and now is the time students are thinking about their future. When considering options, science-oriented students are likely thinking about what is arguably today’s hottest technology: artificial intelligence (AI).



Stack Abuse: Flask vs Django

Thu, 21 Sep 2017 15:52:21 +0000

In this article, we will take a look at two of the most popular web frameworks in Python: Django and Flask. We will cover how each framework compares in terms of its learning curve and how easy it is to get started. Next, we'll look at how the two stack up against each other, concluding with when to use each of them.

Getting Started

One of the easiest ways to compare two frameworks is by installing them and taking note of how easily a user can get started with each, which is exactly what we will do next. We will try setting up Django and Flask on a Linux machine and create an app with each to see how easy (or difficult) the process is.

Setting up Django

In this section, we will set up Django on a Linux-powered machine. The best way to get started with any Python framework is by using virtual environments. We will install virtualenv using pip.

$ sudo apt-get install python3-pip
$ pip3 install virtualenv
$ virtualenv --python=`which python3` ~/.virtualenvs/mydev_env

Note: If the pip3 command gives you an error, you may need to prefix it with sudo to make it work.

Once we're done setting up our virtual environment, which we've named mydev_env, we must activate it to start using it:

$ source ~/.virtualenvs/mydev_env/bin/activate

Once activated, we can finally install Django:

$ pip install Django

Suppose our project is called myapp. Make a new directory, enter it, and run the following commands:

$ mkdir myapp
$ cd myapp
$ django-admin startproject myapp
$ ls -la

Once you run the last command, your directory structure will look like this:

myapp/
    manage.py
    myapp/
        __init__.py
        settings.py
        urls.py
        wsgi.py

Let's take a look at what is significant about each of the directories and files that were created:

The root myapp/ directory is the container directory for our project
manage.py is a command-line tool that enables us to work with the project in different ways
myapp/ is the Python package of our project code
myapp/__init__.py is a file which informs Python that the current directory should be considered a Python package
myapp/settings.py contains the configuration properties for the current project
myapp/urls.py is a Python file which contains the URL definitions for this project
myapp/wsgi.py acts as an entry point for a WSGI web server that forwards requests to your project

From here, we can actually run the app using the manage.py tool. The following command does some system checks, checks for database migrations, and a few other things before actually running your server:

$ python manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).

You have unapplied migrations; your app may not work properly until they are applied.
Run 'python manage.py migrate' to apply them.

September 20, 2017 - 15:50:53
Django version 1.11, using settings 'myapp.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Note: Running your server in this way is meant for development only, not production environments.

To check out your app, head to http://localhost:8000/, where you should see a "Welcome to Django" page.

Setting up Flask

Just like with Django, we will use a virtual environment with Flask as well, so the commands for creating and activating a virtual environment remain the same as before. After that, instead of Django, we'll install Flask:

$ pip install Flask

Once the installation completes, we can start creating our Flask application. Now, unlike Django, Flask doesn't have a complicated directory structure. The structure of your Flask project is entirely up to you. Borrowing an example from the Flask homepage, you can create a runnable Flask app from just a single file:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

And running the app is about as easy as setting it up:

$ FLASK_APP=hello.py flask run
 * Running on http:[...]



Continuum Analytics Blog: Anaconda to Present at Strata Data Conference, New York

Thu, 21 Sep 2017 14:11:40 +0000

Anaconda, the most popular Python data science platform provider, today announced that several company experts will present two sessions and one tutorial at The Strata Data Conference on September 26 and 27 at the Javits Center in New York City.



DataCamp: How Not To Plot Hurricane Predictions

Thu, 21 Sep 2017 12:59:53 +0000

Visualizations help us make sense of the world and allow us to convey large amounts of complex information, data and predictions in a concise form. Expert predictions that need to be conveyed to non-expert audiences, whether they be the path of a hurricane or the outcome of an election, always contain a degree of uncertainty. If this uncertainty is not conveyed in the relevant visualizations, the results can be misleading and even dangerous. Here, we explore the role of data visualization in plotting the predicted paths of hurricanes. We explore different visual methods to convey the uncertainty of expert predictions and the impact on layperson interpretation. We connect this to a broader discussion of best practices with respect to how news media outlets report on both expert models and scientific results on topics important to the population at large.

No Spaghetti Plots?

We have recently seen the damage wreaked by tropical storm systems in the Americas. News outlets such as the New York Times have conveyed a great deal of what has been going on using interactive visualizations for Hurricanes Harvey and Irma, for example. Visualizations include geographical visualization of the percentage of people without electricity, amount of rainfall, amount of damage and number of people in shelters, among many other things.

One particular type of plot has understandably been coming up recently and raising controversy: how to plot the predicted path of a hurricane, say, over the next 72 hours. There are several ways to visualize predicted paths, each with its own pitfalls and misconceptions. Recently, we even saw an article in Ars Technica called Please, please stop sharing spaghetti plots of hurricane models, directed at Nate Silver and fivethirtyeight.

In what follows, I'll compare three common ways, explore their pros and cons and make suggestions for further types of plots. I'll also delve into why these types are important, which will help us decide which visual methods and techniques are most appropriate.

Disclaimer: I am definitively a non-expert in meteorological matters and hurricane forecasting. But I have thought a lot about visual methods to convey data, predictions and models. I welcome and actively encourage the feedback of experts, along with that of others.

Visualizing Predicted Hurricane Paths

There are three common ways of creating visualizations for predicted hurricane paths. Before talking about them, I want you to look at them and consider what information you can get from each of them. Do your best to interpret what each of them is trying to tell you, in turn, and then we'll delve into what their intentions are, along with their pros and cons:

The Cone of Uncertainty (from the National Hurricane Center)
Spaghetti Plots, Type I (from the South Florida Water Management District via fivethirtyeight)
Spaghetti Plots, Type II (from The New York Times). Surrounding text tells us: 'One of the best hurricane forecasting systems is a model developed by an independent intergovernmental organization in Europe, according to Jeff Masters, a founder of the Weather Underground. The system produces 52 distinct forecasts of the storm's path, each represented by a line [above].'

Interpretation and Impact of Visualizations of Hurricanes' Predicted Paths

The Cone of Uncertainty

The cone of uncertainty, a tool used by the National Hurricane Center (NHC) and communicated by many news outlets, shows us the most likely path of the hurricane over the next five days, given by the black dots in the cone. It also shows how certain the forecasters are of this path. As time goes on, the prediction is less certain, and this is captured by the cone: there is an approximately 66.6% chance that the centre of the hurricane will fall in the bounds of the cone. Was this apparent f[...]
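
To make the "Type I" idea concrete, here is a minimal matplotlib sketch of a spaghetti plot built from simulated tracks (the data is randomly generated, not a real forecast ensemble):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.RandomState(42)
n_runs, n_steps = 20, 24

# Each "model run" is a random walk drifting north-west, purely for illustration
lon = -70 + np.cumsum(rng.normal(-0.3, 0.15, size=(n_runs, n_steps)), axis=1)
lat = 20 + np.cumsum(rng.normal(0.2, 0.15, size=(n_runs, n_steps)), axis=1)

for i in range(n_runs):
    plt.plot(lon[i], lat[i], color="steelblue", alpha=0.4, linewidth=1)

plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Simulated ensemble tracks (Type I spaghetti plot)")
plt.show()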



PyCharm: PyCharm 2017.3 EAP 2

Thu, 21 Sep 2017 11:44:20 +0000

After a strong start, we continue our early access program (EAP) with its second release. Download EAP 2 now!

Testing RESTful Applications

Many of us work on web applications which expose a RESTful API, or at least an API that pretends to be RESTful. To test these, some of us use cURL, some a browser extension, or some other piece of software. There is a REST client in PyCharm, but we've decided it could use some improvement, so we're making an all-new one.

The new REST client is entirely editor based: you write your request in a file, and then run the request to get a response. Sounds easy enough, right?

To see how it works, we'll take the sample application from Flask-Restplus, which as you might expect exposes a todo API. We'll start out by creating a new todo. This is done by POST-ing to the /todos/ endpoint. To use the new PyCharm REST client, we should start by creating a .http file. If we don't intend to save this, we can create a scratch file. Press Ctrl+Alt+Shift+Insert (Shift+Command+N on macOS) to start creating a scratch file and choose 'HTTP request' as the type. Let's type our request into the file:

### Post a todo
POST http://localhost:5000/todos/
Accept: application/json
Content-Type: application/json

{
  "task": "Create my task!"
}

Now click the green play button next to the first line, and you should see that the task was created. You can see the response in the Run tool window, and you might also notice that PyCharm wrote a new line in our file with the name of a .json file. This file contains the response, so if we Ctrl+Click (Cmd+Click) the filename, or use Ctrl+B (Cmd+B) to go to definition, we see the full response in a separate file.

Those files become really useful when we do the same request a couple of times but get different results. If we use a GET request to get our todo, then use a PUT to change it, and redo our GET, we'll now have two files there. We can then use the blue icon with the arrows to see the difference between the responses.

Now it's your turn! Get the EAP and try it yourself.

Further Improvements

Code completion for CSS now supports more notations for colors (rgb, hsl), added completion for CSS transitions, and more
React and TypeScript code intelligence: TypeScript 2.4 mapped types, and more
Docker improvements: you can now pass --build-arg arguments to a Docker run configuration

To read about all improvements in this version, see the release notes.

As always, we really appreciate your feedback! Please let us know on YouTrack about any issues you experience or suggestions for improvement. You can also reach us on Twitter, or by leaving a comment below. [...]



eGenix.com: eGenix Talks & Videos: Python Idioms Talk

Thu, 21 Sep 2017 08:00:00 +0000

EuroPython 2016 in Bilbao, Basque Country, Spain

Marc-André Lemburg, Python Core Developer, chair of the EuroPython Society (the organization behind the EuroPython conference), and Senior Software Architect, held a talk at EuroPython 2016 showcasing our experience with the valuation of a Python startup.

We have now published the talk as video and also released the presentation slides.

So you think your Python startup is worth $10 million...

Talk given at the EuroPython 2016 conference in Bilbao, Basque Country, Spain, presenting experience gained from valuation of a Python company and its code base.

Click to proceed to the talk video and slides ...

This talk is based on the speaker's experience running a Python-focused software company for more than 15 years, and on a recent consulting project supporting the valuation of a Python startup company in the due diligence phase.

For the valuation we had to come up with metrics, a catalog of criteria analyzing risks, potential and benefits of the startup’s solution, as well as an estimate for how much effort it would take to reimplement the solution from scratch.

In the talk, I am going to show the metrics we used and how they can be applied to Python code, as well as the importance of addressing risk factors, well-designed code and data(base) structures.

By following some of the advice from this talk, you should be able to improve the valuation of your startup or consulting business in preparation for investment rounds or an acquisition.

-- Marc-André Lemburg

More interesting eGenix presentations are available in the presentations and talks community section of our website.

Related Python Coaching and Consulting

If you are interested in learning more about these advanced techniques, eGenix now offers Python project coaching and consulting services to give your project teams advice on how to design Python applications, successfully run projects, or find excellent Python programmers. Please contact our eGenix Sales Team for information.

Enjoy!

Charlie Clark, eGenix.com Sales & Marketing




Fabio Zadrozny: PyDev 6.0: pip, conda, isort and subword navigation

Thu, 21 Sep 2017 05:19:06 +0000

The new PyDev release is now out and offers some really nice features on a number of fronts!

The interpreter configuration now integrates with both pip and conda, showing the installed packages and allowing any package to be installed and uninstalled from inside the IDE.

Also, it goes a step further in the conda integration and allows users to load the proper environment variables from the env. This is off by default; when PyCharm identifies an interpreter as being managed by conda, it can be turned on in the interpreter configuration page by checking the "Load conda env vars before run" option (so, if you have some library which relies on such configuration, you don't have to activate the env outside the IDE).



Another change which is pretty nice is that now when creating a project there's an option to specify that the project should always use the interpreter version for syntax validation.

Previously a default version for the grammar was set, but users could be confused when the version didn't match the interpreter... note that it's still possible to set a different version or even add additional syntax validators, for cases when you're actually dealing with supporting more than one Python version.

The editor now has support for subword navigation: navigating a name such as MyReallyNiceClass with Ctrl+Left/Right will stop after each subword -- i.e., 'My', 'Really', 'Nice', 'Class'. (Remember that Shift+Alt+Up can be used to select the full word for the cases where Ctrl+Shift+Left/Right did it previously.)

This mode is now also consistent among all platforms (previously each platform had its own style based on the underlying platform -- it's still possible to revert to that mode in the Preferences > PyDev > Editor > Word navigation option).

Integration with PyLint and isort was also improved: the PyLint integration now provides an option to search for PyLint in the interpreter which a project is using, and the isort integration now knows about the available packages (i.e., based on the project/interpreter configuration, PyDev knows a lot about which packages should be considered third-party/library packages and passes that information along to isort).

On the unittest front, Robert Gomulka did some nice work: the name of the unittest being run is now properly shown in the run configuration, and it's possible to right-click a given selection in the dialog to run tests (Ctrl+F9) and edit the run configuration (to edit environment variables, etc.) before running it.

Aside from that there were also a number of other fixes and adjustments (see http://pydev.org for more details).

Enjoy!

p.s.: Thank you to all PyDev supporters -- https://www.brainwy.com/supporters/PyDev/ -- who enable PyDev to keep on being improved!

p.s.: LiClipse 4.2.0 already bundles PyDev 6.0, see: http://www.liclipse.com/download.html for download links.




Montreal Python User Group: Montréal-Python 66: Call For Speakers

Thu, 21 Sep 2017 04:00:00 +0000

It's back-to-everything season and Montreal-Python is no exception! We are looking for speakers for our first meetup of the fall.

We are looking for speakers that want to give a regular presentation (20 to 25 minutes) or a lightning talk (5 minutes).

Submit your proposal at team@montrealpython.org

When

October 2nd, 2017 at 6PM

Where

TBD

PyCon Canada Early Bird Tickets

Also, a little reminder that Early Bird tickets for PyCon Canada (which will be held in Montreal on November 18th to 21st) are now available at https://2017.pycon.ca/.

The early bird rates are only for a limited quantity of tickets, so get yours soon!

PyCon Canada Sponsorship

Would you like to become a sponsor for PyCon Canada? Send an email to sponsorship@pycon.ca




Matthew Rocklin: Fast GeoSpatial Analysis in Python

Thu, 21 Sep 2017 00:00:00 +0000

This work is supported by Anaconda Inc., the Data Driven Discovery Initiative from the Moore Foundation, and NASA SBIR NNX16CG43P. This work is a collaboration with Joris Van den Bossche. This blogpost builds on Joris's EuroSciPy talk (slides) on the same topic. You can also see Joris' blogpost on this same topic.

TL;DR: Python's geospatial stack is slow. We accelerate the GeoPandas library with Cython and Dask. Cython provides 10-100x speedups. Dask gives an additional 3-4x on a multi-core laptop. Everything is still rough, please come help. We start by reproducing a blogpost published last June, but with 30x speedups. Then we talk about how we achieved the speedup with Cython and Dask.

All code in this post is experimental. It should not be relied upon.

Experiment

In June, Ravi Shekhar published a blogpost, Geospatial Operations at Scale with Dask and GeoPandas, in which he counted the number of rides originating from each of the official taxi zones of New York City. He read, processed, and plotted 120 million rides, performing an expensive point-in-polygon test for each ride. This took about three hours on his laptop. He used Dask and a bit of custom code to parallelize GeoPandas across all of his cores. Using this combination he got close to the speed of PostGIS, but from Python.

Today, using an accelerated GeoPandas and a new dask-geopandas library, we can do the above computation in around eight minutes (half of which is reading CSV files) and so can produce a number of other interesting images with faster interaction times. A full notebook producing these plots is available: NYC Taxi GeoSpatial Analysis Notebook.

The rest of this article talks about GeoPandas, Cython, and speeding up geospatial data analysis.

Background in Geospatial Data

The Shapely User Manual begins with the following passage on the utility of geospatial analysis to our society:

Deterministic spatial analysis is an important component of computational approaches to problems in agriculture, ecology, epidemiology, sociology, and many other fields. What is the surveyed perimeter/area ratio of these patches of animal habitat? Which properties in this town intersect with the 50-year flood contour from this new flooding model? What are the extents of findspots for ancient ceramic wares with maker's marks "A" and "B", and where do the extents overlap? What's the path from home to office that best skirts identified zones of location-based spam? These are just a few of the possible questions addressable using non-statistical spatial analysis, and more specifically, computational geometry.

Shapely is part of Python's geospatial stack, which is currently composed of the following libraries:

Shapely: manages shapes like points, linestrings, and polygons; wraps the GEOS C++ library
Fiona: handles data ingestion; wraps the GDAL library
Rasterio: handles raster data like satellite imagery
GeoPandas: extends Pandas with a column of shapely geometries to intuitively query tables of geospatially annotated data

These libraries provide intuitive Python wrappers around the OSGeo C/C++ libraries (GEOS, GDAL, ...) which power virtually every open source geospatial library, like PostGIS, QGIS, etc. They provide the same functionality, but are typically much slower due to how they use Python. This is acceptable for small datasets, but becomes an issue as we transition to larger and larger datasets.

In this post we focus on GeoPandas, a geospatial extension of Pandas which manages tabular data that is annotated with geometry information like points, paths, and polygons.

GeoPandas Example

GeoPandas makes it easy to load, manipulate, and plot geospatial data. For ex[...]
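
Since the post's own example is truncated in this preview, here is a minimal sketch of the kind of point-in-polygon counting described above, with made-up rectangles standing in for taxi zones (note: gpd.sjoin used the op= keyword at the time of writing, and needs the rtree package installed):

import geopandas as gpd
from shapely.geometry import Point, Polygon

# Two made-up rectangular "zones"
zones = gpd.GeoDataFrame(
    {"zone": ["A", "B"]},
    geometry=[
        Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
        Polygon([(2, 0), (4, 0), (4, 2), (2, 2)]),
    ],
)

# Three made-up ride pickup points
rides = gpd.GeoDataFrame(
    {"ride_id": [1, 2, 3]},
    geometry=[Point(1, 1), Point(3, 1), Point(3.5, 0.5)],
)

# Point-in-polygon spatial join, then count rides per zone
joined = gpd.sjoin(rides, zones, how="inner", op="within")
print(joined.groupby("zone").size())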



Evennia: Evennia 0.7 released

Wed, 20 Sep 2017 22:44:00 +0000

As of today, Evennia 0.7 is officially out! A big thank you to all collaborators that have helped with code and testing along the way! Here is the hefty forum post that details how you migrate to Evennia 0.7.

Evennia 0.7 comes with a range of changes and updates (these are just the ones merged from the latest devel branch; a lot more has happened since 0.6 that was already in master):

EvMenu formatting functions now part of class - EvMenu no longer accepts formatting functions as inputs; these are now part of the EvMenu class. To override the formatting of EvMenu nodes you should now override EvMenu and replace the format method you want. This brings EvMenu more in line with other services in Evennia, all of which are built around overriding.

Scripts are now valid message senders - Scripts can now act as the "sender" of a Msg, for example to a Channel or a player.

Prefix-ignoring - All default commands are now renamed without their @-prefixes. @examine, +examine or examine will now all point to the same command. You can customize which prefixes Evennia simply ignores when searching for a command. The mechanic is clever though - if you create a command with a specific key "+foo", then that will still work and won't clash with another command named just "foo". Due to the prefix-ignoring above, @desc (builder-level description-setting) was renamed to setdesc to make it clearly separate from desc (used by a player setting their own description). This was actually the only clash we had to resolve this way in the default commands.

Permission hierarchy names change - To make Evennia's permission hierarchy better reflect how Evennia actually works, the old Players, Player Helpers, Builders, Wizards, Immortals hierarchy has changed to Player, Helper, Builder, Admin, Developer. Singular/plural form is now ignored, so Builder and Builders will both work (this was a common source of simple errors). Old permissions will be renamed as part of the migration process. The distribution of responsibilities has not changed: Wizards (which in some other systems was the highest level, which confused some) always had the power to affect player accounts (like an admin), whereas Immortals had server-level access such as @py - that is, they were developers. The new names hopefully make this distinction clearer.

All manager methods now return querysets - Some of these used to return lists, which was a throwback to an older version of Typeclasses. With manager methods returning querysets, one can chain queries onto their results like you can with any django query.

PrettyTable was removed from the codebase - You can still fetch it for your game if you prefer (it's on pypi), but EvTable has more functionality as well as color support.

**kwargs support added to all object at_* hooks - All object at_* hooks, such as at_look, at_give etc., now have an additional **kwargs argument. This allows developers to send arbitrary data to those hooks so they can expand on them without changing the API. (See the sketch after this list.)

Rename Player to Account - This is the big one and the reason for the big migration work. The name "Player" for the OOC entity has been a constant source of confusion for newcomers to Evennia. Renaming it to "Account" should hopefully make it much clearer what its purpose is, while freeing "player" to represent the person behind the keyboard.

Remove {-color tags - The use of {r, {n etc. was deprecated for years and has now been completely removed from Evennia's core in favor of only one form, namely |r, |n etc. However, the color parser is now pluggable, and the {-style as well as the %c-style color tags can be re-added to your game since they are now in a contrib.

Change of Evennia's website design - The default website is now [...]
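
Picking up the **kwargs hook change flagged above, here is a minimal sketch of what the pass-through enables; the exact hook signature is an assumption and may differ in your Evennia version, so treat this as pseudo-realistic rather than copy-paste ready:

from evennia import DefaultObject

class BriefableObject(DefaultObject):
    # Hypothetical override: callers can now pass extra data (e.g. brief=True)
    # through the hook without the hook's API having to change.
    def at_look(self, target, **kwargs):
        if kwargs.get("brief"):
            return target.key  # just the name, skip the full description
        return super(BriefableObject, self).at_look(target, **kwargs)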



DataCamp: New Course: Case Studies in Statistical Thinking!

Wed, 20 Sep 2017 14:30:04 +0000

Hello everyone! We just launched Case Studies in Statistical Thinking by Justin Bois, our latest Python course!

Mastery requires practice. Having completed Statistical Thinking I and II, you developed your probabilistic mindset and the hacker stats skills to extract actionable insights from your data. Your foundation is in place, and now it is time to practice your craft.

In this course, you will apply your statistical thinking skills - exploratory data analysis, parameter estimation, and hypothesis testing - to two new real-world data sets. First, you will explore data from the 2013 and 2015 FINA World Aquatics Championships, where you will quantify the relative speeds and variability among swimmers. You will then perform a statistical analysis to assess the "current controversy" of the 2013 Worlds, in which swimmers claimed that a slight current in the pool was affecting results. Second, you will study the frequency and magnitudes of earthquakes around the world. Finally, you will analyze the changes in seismicity in the US state of Oklahoma after the practice of high-pressure wastewater injection at oil extraction sites became commonplace in the last decade. As you work with these data sets, you will take vital steps toward mastery as you cement your existing knowledge and broaden your abilities to use statistics and Python to make sense of your data.

Take me to chapter 1!

Case Studies in Statistical Thinking features interactive exercises that combine high-quality video, in-browser coding, and gamification for an engaging learning experience that will make you an expert in applied statistical thinking!

What you'll learn

1. Fish sleep and bacteria growth: A review of Statistical Thinking I and II. To begin, you'll use two data sets from Caltech researchers to rehash the key points of Statistical Thinking I and II to prepare you for the following case studies!

2. Analysis of results of the 2015 FINA World Swimming Championships. In this chapter, you will practice your EDA, parameter estimation, and hypothesis testing skills on the results of the 2015 FINA World Swimming Championships.

3. The "Current Controversy" of the 2013 World Championships. Some swimmers said that they felt it was easier to swim in one direction versus another in the 2013 World Championships. Some analysts have posited that there was a swirling current in the pool. In this chapter, you'll investigate this claim! References - Quartz Media, Washington Post, SwimSwam (and also here), and Cornett, et al.

4. Statistical seismology and the Parkfield region. Herein, you'll use your statistical thinking skills to study the frequency and magnitudes of earthquakes. Along the way, you'll learn some basic statistical seismology, including the Gutenberg-Richter law. This exercise exposes two key ideas about data science: 1) as a data scientist, you wander into all sorts of domain-specific analyses, which is very exciting - you constantly get to learn; 2) you are sometimes faced with limited data, which is also the case for many of these earthquake studies. You can still make good progress!

5. Earthquakes and oil mining in Oklahoma. Of course, earthquakes have a big impact on society, and recently have been connected to human activity. In this final chapter, you'll investigate the effect that increased injection of saline wastewater due to oil mining in Oklahoma has had on the seismicity of the region.

Hone your real-world data science skills in our course Case Studies in Statistical Thinking! [...]



Jarrod Millman: NetworkX 2.0 released

Wed, 20 Sep 2017 13:43:32 +0000

I am happy to announce the release of NetworkX 2.0!  NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.

This release supports Python 2.7 and 3.4-3.6 and is the result of over two years of work, with 1212 commits and 193 merges by 86 contributors. We have made major changes to the methods in the Multi/Di/Graph classes. There is a migration guide for people moving from 1.X to 2.0.
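
One of the visible changes covered in the migration guide is that reporting methods such as nodes, edges and degree now return set-like views rather than lists; a quick sketch of the adjustment:

import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3)])

# These are views in 2.0; wrap them to materialize concrete containers
print(list(G.nodes))   # [1, 2, 3]
print(list(G.edges))   # [(1, 2), (2, 3)]
print(dict(G.degree))  # {1: 1, 2: 2, 3: 1}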

For more information, please visit our website and our gallery of examples. Please send comments and questions to the networkx-discuss mailing list.



Andre Roberge: Reeborg's World partially available in Polish

Wed, 20 Sep 2017 09:17:08 +0000

Thanks to Adam Jurkiewicz, Reeborg's World is now partially available in Polish.



Python Bytes: #44 pip install malicious-code

Wed, 20 Sep 2017 08:00:00 +0000




Python Insider: Python 2.7.14 released

Wed, 20 Sep 2017 00:23:22 +0000

The latest bugfix release in the Python 2.7 series, Python 2.7.14, is now available for download.



Python Insider: Python 3.6.3rc1 and 3.7.0a1 now available for testing and more

Tue, 19 Sep 2017 21:53:47 +0000

The Python build factories have been busy the last several weeks preparing our fall lineup of releases. Today we are happy to announce three additions: 3.6.3rc1, 3.7.0a1, and 3.3.7 final, which join last weekend's 2.7.14 and last month's 3.5.4 bug-fix releases and 3.4.7 security-fix update (see all downloads).

1. Python 3.6.3rc1 is the first release candidate for Python 3.6.3, the next maintenance release of Python 3.6. While 3.6.3rc1 is a preview release and, thus, not intended for production environments, we encourage you to explore it and provide feedback via the Python bug tracker (https://bugs.python.org). 3.6.3 is planned for final release on 2017-10-02, with the next maintenance release expected to follow in about 3 months. You can find Python 3.6.3rc1 and more information here: https://www.python.org/downloads/release/python-363rc1/

2. Python 3.7.0a1 is the first of four planned alpha releases of Python 3.7, the next feature release of Python. During the alpha phase, Python 3.7 remains under heavy development: additional features will be added and existing features may be modified or deleted. Please keep in mind that this is a preview release and its use is not recommended for production environments. The next preview release, 3.7.0a2, is planned for 2017-10-16. You can find Python 3.7.0a1 and more information here: https://www.python.org/downloads/release/python-370a1/

3. Python 3.3.7 is also now available. It is a security-fix, source-only release and is expected to be the final release of any kind for Python 3.3.x before it reaches end-of-life status on 2017-09-29, five years after its initial release. Because 3.3.x has long been in security-fix mode, 3.3.7 may no longer build correctly on all current operating system releases and some tests may fail. If you are still using Python 3.3.x, we strongly encourage you to upgrade now to a more recent, fully supported version of Python 3. You can find Python 3.3.7 here: https://www.python.org/downloads/release/python-337/ [...]



DataCamp: Jupyter Notebook Cheat Sheet

Tue, 19 Sep 2017 14:41:48 +0000

You probably already know Jupyter notebooks pretty well - the notebook is one of the most well-known parts of the Jupyter ecosystem! If you haven't explored the ecosystem yet, or if you simply want to know more about it, don't hesitate to go and explore it here!

For those who are new to Project Jupyter, the Jupyter Notebook Application produces documents that contain a mix of executable code, text elements, and even HTML, which makes it the ideal place to bring together an analysis description and its results, as well as to perform data analysis in real time. This, combined with its many useful functionalities, explains why the Jupyter Notebook is one of the data scientist's preferred development environments, allowing for interactive, reproducible data science analysis, computation and communication.

One of the other great things about the Jupyter notebooks?

They're super easy to get started with! You might have already noticed this when you read DataCamp's definitive guide to Jupyter Notebook. However, when you first enter the application, you might have to find your way around the variety of functionalities that are presented to you: from saving your current notebook to adding or moving cells in the notebook or embedding widgets in your notebook - without a doubt, there's a lot out there to discover when you first get started!

That's why DataCamp made a Jupyter Notebook cheat sheet for those who are just starting out and who want some help finding their way around.

Note also that the Jupyter Notebook Application has a handy “Help” menu that includes a full-blown User Interface Tour! – No worries, we also included this in the cheat sheet :)

Check it out here:


In short, this cheat sheet will help you to kickstart your data science projects, however small or big they might be: with some screenshots and explanations, you'll be a Jupyter Notebook expert in no time!

So what are you waiting for? Time to get started!




Continuum Analytics Blog: What to Do When Things Go Wrong in Anaconda

Tue, 19 Sep 2017 14:00:40 +0000

Below is a question that was recently asked on StackOverflow and I decided it would be helpful to publish an answer explaining the various ways in which to troubleshoot a problem you may be having in Anaconda.



Python Data: Python and AWS Lambda – A match made in heaven

Tue, 19 Sep 2017 13:42:41 +0000

In recent months, I've begun moving some of my analytics functions to the cloud. Specifically, I've been moving many of my python scripts and APIs to AWS' Lambda platform using the Zappa framework. In this post, I'll share some basic information about Python and AWS Lambda...hopefully it will get everyone out there thinking about new ways to use platforms like Lambda.

Before we dive into an example of what I'm moving to Lambda, let's spend some time talking about Lambda. When I first heard about it, I was confused...but once I 'got' it, I saw the value. Here's the description of Lambda from AWS' website:

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

Once I realized how easy it is to move code to Lambda to use whenever/wherever I needed it, I jumped at the opportunity. But...it took a while to get a good workflow in place to simplify deploying to Lambda. I stumbled across Zappa and couldn't be happier...it makes deploying to Lambda simple (very simple).

OK. So. Why would you want to move your code to Lambda? Lots of reasons. Here's a few:

Rather than host your own server to handle some API endpoints - move to Lambda.
Rather than build out a complex development environment to support your complex system, move some of that complexity to Lambda and make a call to an API endpoint.
If you travel and want to downsize your travel laptop but still need to access your python data analytics stack, move the stack to Lambda.
If you have a script that you run very irregularly and don't want to pay $5 a month at Digital Ocean - move it to Lambda.

There are many other more sophisticated reasons of course, but these'll do for now. Let's get started looking at python and AWS Lambda. You'll need an AWS account for this.

First, I'm going to talk a bit about building an API endpoint using Flask. You don't have to use Flask, but it's an easy framework to use and you can quickly build an API endpoint with it with very little fuss. For this example, I'm going to use Lambda to host an API endpoint that uses the Newspaper library to scrape a website, pull down the text and return that text to my local script.

Writing your first Flask + Lambda API

To get started, install Flask, Flask-RESTful and Zappa. You'll want to do this in a fresh environment using virtualenv (see my previous posts about virtualenv and vagrant) because we'll be moving this up to Lambda using Zappa:

pip install flask flask_restful zappa

Our flask-driven API is going to be extremely simple and exist in less than 20 lines of code:

from flask import Flask
from newspaper import Article
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class hello(Resource):
    def get(self):
        return "Hello World"

api.add_resource(hello, '/hello')

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5001)

Note: The 'host = 0.0.0.0'[...]
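
Once the app above is saved, Zappa takes it from there. As a rough sketch of the deployment workflow (the stage name dev is just whatever you configure in zappa_settings.json):

$ zappa init        # interactively generates zappa_settings.json
$ zappa deploy dev  # packages the app and pushes it to AWS Lambda + API Gateway
$ zappa update dev  # redeploys after you change the code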



Python Piedmont Triad User Group: PYPTUG monthly meeting: Plotly, dash and company

Tue, 19 Sep 2017 08:48:13 +0000

Come join PYPTUG at our next monthly meeting (September 19th 2017) to learn more about the Python programming language, modules and tools. Python is the ideal language to learn if you've never programmed before, and at the other end, it is also a tool that no expert would do without. Monthly meetings are in addition to our project nights.

What

Meeting will start at 6:00pm.

Main Talk: "Plotly, dash and company" by Francois Dion

Remake of W. Playfair's classic visualization (source: Plot.ly)

Abstract: There are many visualization packages available out there, each best suited to specific scenarios. In the past several years, I've covered Matplotlib, Seaborn, Vincent, ggplot, 3d visualizations through matplotlib, D3 and mpld3, and Bokeh. In this presentation we will cover plotly (for javascript, R and Python) and related packages, and when it makes sense to use it.

Bio: Francois Dion is the founder and Chief Data Scientist of Dion Research LLC, specializing in analytics, data science, IoT and visualization. He is the author of several open source software packages, such as stemgraphic (www.stemgraphic.org), the founder of the Python user group for the Piedmont Triad of North Carolina (www.pyptug.org), and mentors various groups in Python, R and analytics at large. You might have run across his multiple-part series on LinkedIn on data science books, including part V on visualization.

When

Please note, this meeting will be one week earlier in the month compared to our normal schedule: Tuesday, September 19th 2017. Meeting starts at 6:00PM.

Where

Wake Forest University, close to Polo Rd and University Parkway: Manchester Hall, room Manchester 241, Wake Forest University, Winston-Salem, NC 27109. See also the campus map (PDF) and the Parking Map (PDF) (Manchester Hall is #20A on the parking map).

And speaking of parking: parking after 5pm is on a first-come, first-serve basis. The official parking policy is: "Visitors can park in any general parking lot on campus. Visitors should avoid reserved spaces, faculty/staff lots, fire lanes or other restricted areas on campus. Frequent visitors should contact Parking and Transportation to register for a parking permit."

Mailing List

Don't forget to sign up to our user group mailing list: https://groups.google.com/d/forum/pyptug?hl=en. It is the only step required to become a PYPTUG member.

Please RSVP so we have enough food for people attending! RSVP on meetup: https://www.meetup.com/PYthon-Piedmont-Triad-User-Group-PYPTUG/events/242721091/ [...]



S. Lott: Three Unsolvable Problems in Computing

Tue, 19 Sep 2017 08:07:39 +0000

The three unsolvable problems in computing:

Naming
Distributed Cache Coherence
Off-By-One Errors

Let's talk about naming.

The project team decided to call the server component "FlaskAPI". Seriously. It serves information about two kinds of resources: images and running instances of images. (Yes, it's a kind of kubernetes/dockyard lite that gives us a lot of control over servers with multiple containers.)

The feature set is growing rapidly. The legacy name needs to change. As we move forward, we'll be adding more microservices. Unless they have a name that reflects the resource(s) being managed, this is rapidly going to become utterly untenable. Indeed, the name chosen may already be untenable: the name doesn't reflect the resource, it reflects an implementation choice that is true of all the microservices. (It's a wonder they didn't call it "PythonFlaskAPI".)

See https://blogs.mulesoft.com/dev/api-dev/best-practices-for-building-apis/ for some general guidelines on API design. These guidelines don't seem to address naming in any depth. There are a few blog posts on this, but there seem to be two extremes:

Details/details/details. Long paths: class-of-service/service/version-of-service/resources/resource-id kinds of paths. Yes, I get it. The initial portion of the path can then route the request for us. But it requires a front-end request broker or orchestration layer to farm out the work. I'm not enamored of the version information in the path because the path isn't an ontology of the entities; it becomes something more and reveals implementation details. The orchestration is pushed down to the client. Yuck.

Resources/resource. I kind of like this. The versioning information can be in the Content-Type header: application/json+vnd.yournamehere.vx+json. I like this because the paths don't change, only the vx in the header. But how does the client select the latest version of the service if it doesn't go in the path? Ugh. Problem not solved. (A sketch of what such a request might look like follows below.)

I'm not a fan of an orchestration layer. But there's this: https://medium.com/capital-one-developers/microservices-when-to-react-vs-orchestrate-c6b18308a14c - tl;dr: orchestration is essentially unavoidable.

There are articles on choreography: https://specify.io/concepts/microservices - the idea is that an event queue is used to choreograph among microservices. This flips orchestration around a little bit by having a more peer-to-peer relationship among services. It replaces complex orchestration with a message queue, reducing the complexity of the code.

On the one hand, orchestration is simple. The orchestrator uses the resource class and content-type version information to find the right server. It's not a lot of code. On the other hand, orchestration is overhead. Each request passes through two services to get something done. The pace of change is slow.

HATEOAS suggests that a "configuration" or "service discovery" service (with etags to support caching and warning of out-of-date caches) might be a better choice. Clients can make a configuration request, and if the cache is still valid, they can then make the real working request. The client-side overhead is a burden that is - perhaps - a bad idea. It has the potential to make the clients very complex. It can work if we're going to provide a sophisticated client library. It can't work if we're expecting developers to make RESTful API requests to get useful results. Who wants to make the extra meta-request all the time? [...]
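
As a concrete illustration of the header-based versioning idea above, a client request might look like this (the host, path and vendor media type are made up for illustration):

import requests

# Version selected via the media type, not the path
response = requests.get(
    "https://api.example.com/images/1234",
    headers={"Accept": "application/json+vnd.yournamehere.v2+json"},
)
print(response.status_code, response.headers.get("Content-Type"))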



Talk Python to Me: #130 10 books Python developers should be reading

Tue, 19 Sep 2017 08:00:00 +0000

One of the hallmarks of successful developers is continuous learning. The best developers I know don't just keep learning, it's one of the things that drives them. That's why I'm excited to bring you this episode on 10 books Python developers should read.



Catalin George Festila: The numba python module - part 002

Tue, 19 Sep 2017 05:48:39 +0000

Today I tested how fast jit from the numba python module is with a fibonacci math function. You will see the strange output I got for some values.

First example:

import numba
from numba import jit
from timeit import default_timer as timer

def fibonacci(n):
    a, b = 1, 1
    for i in range(n):
        a, b = a+b, a
    return a

fibonacci_jit = jit(fibonacci)

start = timer()
fibonacci(100)
duration = timer() - start

startnext = timer()
fibonacci_jit(100)
durationnext = timer() - startnext

print(duration, durationnext)

The result of this run is:

C:\Python27>python numba_test_003.py
(0.00018731270733896962, 0.167499256682878)
C:\Python27>python numba_test_003.py
(1.6357787798437412e-05, 0.1683614083221368)
C:\Python27>python numba_test_003.py
(2.245186560569841e-05, 0.1758382003097716)
C:\Python27>python numba_test_003.py
(2.3093347480146938e-05, 0.16714964906130353)
C:\Python27>python numba_test_003.py
(1.5395564986764625e-05, 0.17471143739730277)
C:\Python27>python numba_test_003.py
(1.5074824049540363e-05, 0.1847134227837042)

As you can see, the plain fibonacci function runs very fast, while the jit-compiled call looks slow here: the just-in-time compilation happens on the first call and dominates that measurement.

Let's see a new version of the source code, now printing the results, where jit does not work well:

import numba
from numba import jit
from timeit import default_timer as timer

def fibonacci(n):
    a, b = 1, 1
    for i in range(n):
        a, b = a+b, a
    return a

fibonacci_jit = jit(fibonacci)

start = timer()
print fibonacci(100)
duration = timer() - start

startnext = timer()
print fibonacci_jit(100)
durationnext = timer() - startnext

print(duration, durationnext)

The result is this:

C:\Python27>python numba_test_003.py
927372692193078999176
1445263496
(0.0002334994022992635, 0.17628787910376)
C:\Python27>python numba_test_003.py
927372692193078999176
1445263496
(0.0006886307922204926, 0.17579169287387408)
C:\Python27>python numba_test_003.py
927372692193078999176
1445263496
(0.0008105123483657127, 0.18209553525407973)
C:\Python27>python numba_test_003.py
927372692193078999176
1445263496
(0.00025466830415606486, 0.17186550306131188)
C:\Python27>python numba_test_003.py
927372692193078999176
1445263496
(0.0007348174871807866, 0.17523103771560608)

The result for the value 100 is not the same: 927372692193078999176 from the plain function versus 1445263496 from the jit-compiled one.

The first problem is the timing: the first call to the jit-compiled function includes the compilation itself, which is why it looks so slow. The second problem is the wrong output: plain Python integers have arbitrary precision, but the jit-compiled code works with fixed-width machine integers, so the large fibonacci(100) result silently overflows and wraps around to 1445263496.

I tested with the value 5, which is small enough not to overflow, and the result is the same for both:

C:\Python27>python numba_test_003.py
13
13
(0.0007258367409385072, 0.17057997338491704)
C:\Python27>python numba_test_003.py
13
13
(0.00033709872502270044, 0.17213235952108247)
C:\Python27>python numba_test_003.py
13
13
(0.0004836773333341886, 0.17184433415945508)
C:\Python27>python numba_test_003.py
13
13
(0.0006854233828482501, 0.17381272129120037) [...]
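
One way to time the jit-compiled function fairly, given the results above, is to call it once before timing so the one-time compilation cost is excluded. A minimal sketch, staying with this post's fibonacci example (the input is kept small to avoid the overflow discussed above; for such tiny inputs the loop is so cheap that call overhead still matters, so don't expect dramatic numbers):

import numba
from numba import jit
from timeit import default_timer as timer

def fibonacci(n):
    a, b = 1, 1
    for i in range(n):
        a, b = a + b, a
    return a

fibonacci_jit = jit(fibonacci)
fibonacci_jit(40)  # warm-up call: triggers compilation, excluded from timing

start = timer()
fibonacci(40)
duration = timer() - start

start = timer()
fibonacci_jit(40)
duration_jit = timer() - start

print(duration, duration_jit)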