Planet Python
http://planetpython.org/

Subscribe: http://www.planetpython.org/rss20.xml
Language: English

Catalin George Festila: The Google Cloud SDK - part 002 .

Fri, 18 Aug 2017 14:13:25 +0000

The next part of my tutorial series about the Google Cloud SDK covers some details about the project. As before, I used the default App Engine "hello world" standard sample application. The goal is to understand how it works by following Google's documentation and examples. The project folder contains these files:

08/17/2017 11:12 PM    98 app.yaml
08/17/2017 11:12 PM   854 main.py
08/17/2017 11:12 PM   817 main_test.py

Let's see what these files contain. First is app.yaml:

runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app

Next is the main.py file:

# Copyright 2016 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import webapp2


class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('Hello, World!')


app = webapp2.WSGIApplication([
    ('/', MainPage),
], debug=True)

The last file in this folder is main_test.py:

# Copyright 2016 Google Inc. All rights reserved.
# (same Apache License 2.0 header as above)

import webtest

import main


def test_get():
    app = webtest.TestApp(main.app)
    response = app.get('/')
    assert response.status_int == 200
    assert response.body == 'Hello, World!'

The app.yaml file configures your App Engine application's settings for the project. You can also have other application-level configuration files (dispatch.yaml, cron.yaml, index.yaml, and queue.yaml). All of these configuration files live in the top-level app directory (in this case: hello_world).

Let's look at some common gcloud commands:

gcloud app deploy --project XXXXXX - deploy your project;
gcloud app browse - open your running project in your browser;
gcloud components list - show all available components;
gcloud components update - update all gcloud components;
gcloud projects list --limit=10 - show your projects, limited to a given number.

Let's test some changes. First, change the text in main.py to something other than:

self.response.write('Hello, World!')

Now use these commands:

C:\Python27\python-docs-samples\appengine\standard\hello_world>gcloud app deploy
C:\Python27\python-docs-samples\appengine\standard\hello_world>gcloud app browse

The result is shown in your browser. You can read about these files in the Google documentation page - here. Some gcloud commands and reference material can also be found here. [...]



Codementor: Python Practices for Efficient Code: Performance, Memory, and Usability

Fri, 18 Aug 2017 10:51:21 +0000

Explore best practices to write Python code that executes faster, uses less memory, and looks more appealing.



Codementor: How I Learned Python Programming Language

Fri, 18 Aug 2017 08:06:18 +0000

Read about one person's perspective on learning to program using Python.



Anwesha Das: DreamHost fighting to protect the fundamental rights of its users

Thu, 17 Aug 2017 19:13:44 +0000

Habeas data - my data, my right - is the ethos of the right to be a free and fulfilled individual. It allows individuals to be themselves without being monitored. In the United States, there are several safeguards to protect and further this concept.

The First Amendment
The First Amendment (Amendment I) to the United States Constitution establishes the freedom of speech, freedom of the press, free exercise of religion, and the freedom to assemble peaceably.

The Fourth Amendment
The Fourth Amendment (Amendment IV) to the United States Constitution prohibits unreasonable searches and seizures. The right to privacy is protected by the US Constitution, though the exact word "privacy" is never used; a close look at this Amendment in the Bill of Rights shows that it bars the government from issuing a general warrant.

The Privacy Protection Act, 1980
The Act protects the press, journalists, media houses and newsrooms from searches conducted by government officers. It mandates that it shall be unlawful for a government employee to search for or seize "work product" or "documentary materials" that are possessed by a person "in connection with a purpose to disseminate to the public a newspaper, book, broadcast, or other similar form of public communication", in connection with the investigation or prosecution of a criminal offense [42 U.S.C. §§ 2000aa (a), (b) (1996)]. An order or a subpoena is necessary for accessing such information and documents. But the government has time and again violated and disregarded these mandates and stepped outside its boundaries in the name of the security of the state.

The present situation with DreamHost
DreamHost is a private, Los Angeles-based company. It provides web hosting, cloud computing, cloud storage and domain name registration services. For the past few months the company has been fighting a legal battle to protect its own fundamental rights and those of one of its customers, disruptj20.org.

What is disruptj20.org?
The company hosts disruptj20.org on the web. It is a website which organized and encouraged willing individuals to protest against the present US government. Wikipedia says: "DisruptJ20 (also Disrupt J20), a Washington, D.C.-based political organization founded in July 2016 and publicly launched on November 11 of the same year, stated its initial aim as protesting and disrupting events of the presidential inauguration of the 45th U.S. president."

The Search Warrant
A search warrant was issued against DreamHost. It requires them to disclose "the information associated with www.disruptj20.org that is stored at the premises owned, maintained, controlled, or operated by DreamHost" [ATTACHMENT A]. The particular list of information to be disclosed and information to be seized by the government can be seen in ATTACHMENT B.

How does it affect third parties (other than www.disruptj20.org)?
It demands that "all files" related to the website be revealed to the government, which includes the HTTP logs for visitors - meaning the time and date of each visit, the visitor's IP address, the website pages viewed by the visitor (through their IP address), a detailed description of the software running on the visitor's computer, and details of emails sent by third parties to www.disruptj20.org. In response, the company challenged the Department of Justice on the warrant, attempting to quash the demand for seizure and disclosure of the information by due legal process and reason.

Motion to show cause
In the usual course of action, the DOJ would respond to DreamHost's inquiries. But here, instead of answering their inquiries, the DOJ chose to file a motion to show cause in the Washington, D.C. Superior Court, asking for an order to compel DreamHost to produce the records.

The Opposition
DreamHost filed an Opposition seeking the denial of the above-mentio[...]



Catalin George Festila: The Google Cloud SDK - part 001 .

Thu, 17 Aug 2017 14:06:06 +0000

This tutorial covers the following steps in development with the Google Cloud SDK and Python version 2.7:

- install the Google Cloud SDK on your computer;
- make online settings for your Google project to use the Google Cloud SDK;
- run the online Google Cloud SDK project;
- make settings on your computer to run the local project.

First you need to download the Google Cloud SDK and run it. After the GUI install, a command window will ask you to set the default project for your work:

Welcome to the Google Cloud SDK! Run "gcloud -h" to get the list of available commands.
---
Welcome! This command will take you through the configuration of gcloud.
Your current configuration has been set to: [default]
You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic (1/1 checks) passed.
You must log in to continue. Would you like to log in (Y/n)? Y
...

The next step is to start online by deploying a Hello World app ("Deploy a Hello World app"). This starts an online tutorial in the right-hand area of the screen with all the commands and steps for your Google Cloud SDK online project. Follow these steps and at the end you will see the online Google Cloud SDK project showing "Hello, World!" in your browser.

The next step is to make a local project and run it. You can use the python-docs-samples from GoogleCloudPlatform, though they are not identical to the online example. To download the GoogleCloudPlatform samples, use the git command:

C:\Python27>git clone https://github.com/GoogleCloudPlatform/python-docs-samples
Cloning into 'python-docs-samples'...
remote: Counting objects: 12126, done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 12126 (delta 1), reused 10 (delta 1), pack-reused 12106
Receiving objects: 100% (12126/12126), 3.37 MiB | 359.00 KiB/s, done.
Resolving deltas: 100% (6408/6408), done.

C:\Python27>cd python-docs-samples/appengine/standard/hello_world

To deploy this sample into your Google project, use:

C:\Python27\python-docs-samples\appengine\standard\hello_world>gcloud app deploy app.yaml --project encoded-metrics-147522
Services to deploy:
descriptor:      [C:\Python27\python-docs-samples\appengine\standard\hello_world\app.yaml]
source:          [C:\Python27\python-docs-samples\appengine\standard\hello_world]
target project:  [encoded-metrics-147522]
target service:  [default]
target version:  [20170817t234925]
target url:      [https://encoded-metrics-147522.appspot.com]
Do you want to continue (Y/n)? Y
Beginning deployment of service [default]...
#============================================================#
#= Uploading 5 files to Google Cloud Storage               =#
#============================================================#
File upload done.
Updating service [default]...done.
Waiting for operation [apps/encoded-metrics-147522/operations/XXXXXX] to complete...done.
Updating service [default]...done.
Deployed service [default] to [https://XXXXXX.appspot.com]

You can stream logs from the command line by running:
  $ gcloud app logs tail -s default

To view your application in the web browser run:
  $ gcloud app browse

C:\Python27\python-docs-samples\appengine\standard\hello_world>gcloud app browse
Opening [https://XXXXXX.appspot.com] in a new tab in your default browser.
C:\Python27\python-docs-samples\appengine\standard\hello_world>

This will start your application, showing the text "Hello, World!" in your browser at the web address XXXXXX.appspot.com. [...]



Codementor: How to run a script as a background process?

Thu, 17 Aug 2017 08:42:05 +0000

A simple demonstration on how to run a script as a background process in a Debian environment.



Python Bytes: #39 The new PyPI

Thu, 17 Aug 2017 08:00:00 +0000

Mahmoud #1: The New PyPI

  • Donald Stufft and his PyPA team have been hard at work replacing the old pypi.python.org
  • The new site is now handling almost all the old functionality (excepting deprecated features, of course): https://pypi.org/
  • The new site has handled downloads (presently exceeding 1PB monthly bandwidth) for a while now, and uploads as of recently.
  • A nice full-fledged, open-source Python application, eagerly awaiting your review and contribution: https://github.com/pypa/warehouse/
  • More updates at: https://mail.python.org/pipermail/distutils-sig/

Brian #2: CircuitPython Snakes its Way onto Adafruit Hardware

  • Adafruit announced CircuitPython in January
  • New product is Gemma M0:
    • Announced at the end of July.
    • It’s about the size of a quarter and is considered a wearable computer.
    • “When you plug it in, it will show up as a very small disk drive with main.py on it. Edit main.py with your favorite text editor to build your project using Python, the most popular programming language. No installs, IDE or compiler needed, so you can use it on any computer, even ChromeBooks or computers you can’t install software on. When you’re done, unplug the Gemma M0 and your code will go with you."
    • They’re under $10. I gotta get one of these and play with it. (Anyone from Adafruit listening, want to send me one?)
    • Here's the intro video for it: https://www.youtube.com/watch?v=nRE_cryQJ5c&feature=youtu.be
  • Creating and sharing a CircuitPython Library is a good introduction to the Python open source community, including:
    • Creating a library (package or module)
    • Sharing on GitHub
    • Sharing docs on ReadTheDocs
    • Testing with Travis CI
    • Releasing on GitHub
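Back to the Gemma M0's main.py workflow quoted above: purely as an illustration (mine, not from the show notes or Adafruit's docs), a minimal CircuitPython main.py that blinks the board's onboard LED; the D13 pin name is an assumption about the board definition.

import time
import board
import digitalio

# The small "L" LED on the Gemma M0 (assumed to be exposed as board.D13).
led = digitalio.DigitalInOut(board.D13)
led.direction = digitalio.Direction.OUTPUT

while True:
    led.value = True    # LED on
    time.sleep(0.5)
    led.value = False   # LED off
    time.sleep(0.5)

Saving this file back to the little disk drive is all it takes; the board reloads and runs it immediately.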

Mahmoud #3: [...]




Duncan McGreggor: NASA/EOSDIS Earthdata

Thu, 17 Aug 2017 07:05:25 +0000

Update

It's been a few years since I posted on this blog -- most of the technical content I've been contributing to in the past couple of years has been in the following:

- The LFE Blog
- The Clojang Blog

But since the publication of the Mastering matplotlib book, I've gotten more and more into satellite data. The book, it goes without saying, focused on Python for the analysis and interpretation of satellite data (in one of the many topics covered). After that I spent some time working with satellite and GIS data in general using Erlang and LFE. Ultimately though, I found that more and more projects were using the JVM for this sort of work, and in particular, I noted that Clojure had begun to show up in a surprising number of GitHub projects.

EOSDIS

Enter NASA's Earth Observing System Data and Information System (see also earthdata.nasa.gov and EOSDIS on Wikipedia), a key part of the agency's Earth Science Data Systems Program. It's essentially a concerted effort to bring together the mind-blowing amounts of earth-related data being collected throughout, around, and above the world so that scientists may easily access and correlate earth science data for their research. Related NASA projects include EOS, ESDIS, and ESMO. The acronym menagerie can be bewildering, but digging into the various NASA projects is ultimately quite rewarding (greater insights, previously unknown resources, amazing research, etc.).

Clojure

Back to the Clojure reference I made above: I've been contributing to the nasa/Common-Metadata-Repository open source project (hosted on GitHub) for a few months now, and it's been amazing to see how all this data from so many different sources gets added, indexed, updated, and generally made so much more available to anyone who wants to work with it. The private sector always seems to be so far ahead of large projects in terms of tech and continuously improving updates to existing software, so it's been pretty cool to see a large open source project in the NASA GitHub org make so many changes that keep helping their users do better research - and to see users regularly delivered new features in a large, complex collection of libraries and services, thanks in part to the benefits that come from using a functional programming language.

It may seem like nothing to you, but the fact that there are now directory pages for various data providers (e.g., GES_DISC, i.e., the Goddard Earth Sciences Data and Information Services Center) makes a big difference for users of this data. The data provider pages now also offer easy access to collection links such as the UARS Solar Ultraviolet Spectral Irradiance Monitor. Admittedly, the directory pages still take a while to load, but there are improvements on the way for page load times and other related tasks. If you're reading this a month after this post was written, there's a good chance it's already been fixed by now.

Summary

In summary, it's been a fun personal journey from looking at Landsat data for writing a book to working with open source projects that really help scientists do their jobs better :-) And while I have enjoyed using other programming languages to explore this problem space, Clojure in particular has been a delightfully powerful tool for delivering new features to the science community. [...]



Continuum Analytics News: Continuum Analytics to Share Insights at JupyterCon 2017

Wed, 16 Aug 2017 15:12:10 +0000

News | Thursday, August 17, 2017

Presentation topics include Jupyter and Anaconda in the enterprise; open innovation in a data-centric world; building an Excel-Python bridge; encapsulating data science using Anaconda Project and JupyterLab; deploying Jupyter dashboards for datapoints; JupyterLab.

NEW YORK, August 17, 2017 - Continuum Analytics, the creator and driving force behind Anaconda, the leading Python data science platform, today announced that the team will present one keynote, three talks and two tutorials at JupyterCon on August 23 and 24 in New York, NY. The event is designed for the data science and business analyst community and offers in-depth trainings, insightful keynotes, networking events and talks exploring the Project Jupyter platform.

Peter Wang, co-founder and CTO of Continuum Analytics, will present two sessions on August 24. The first is a keynote at 9:15 am, titled "Jupyter & Anaconda: Shaking Up the Enterprise." Peter will discuss the co-evolution of these two major players in the new open source data science ecosystem and next steps to a sustainable future. The other is a talk, "Fueling Open Innovation in a Data-Centric World," at 11:55 am, offering Peter's perspectives on the unique challenges of building a company that is fundamentally centered around sustainable open source innovation.

The second talk features Christine Doig, senior data scientist, product manager, and Fabio Pliger, software engineer, of Continuum Analytics, "Leveraging Jupyter to Build an Excel-Python Bridge." It will take place on August 24 at 11:05 am; Christine and Fabio will share how they created a native Microsoft Excel plug-in that provides a point-and-click interface to Python functions, enabling Excel analysts to use machine learning models, advanced interactive visualizations and distributed compute frameworks without needing to write any code.

Christine will also hold a talk on August 25 at 11:55 am on "Data Science Encapsulation and Deployment with Anaconda Project & JupyterLab." Christine will share how Anaconda Project and JupyterLab encapsulate data science and how to deploy self-service notebooks, interactive applications, dashboards and machine learning.

James Bednar, senior solutions architect, and Philipp Rudiger, software developer, of Continuum Analytics, will give a tutorial on August 23 at 1:30 pm titled "Deploying Interactive Jupyter Dashboards for Visualizing Hundreds of Millions of Datapoints." This tutorial will explore an overall workflow for building interactive dashboards, visualizing billions of data points interactively in a Jupyter notebook, with graphical widgets allowing control over data selection, filtering and display options, all using only a few dozen lines of code.

The second tutorial, "JupyterLab," will be hosted by Steven Silvester, software engineer at Continuum Analytics, and Jason Grout, software developer at Bloomberg, on August 23 at 1:30 pm. They will walk through JupyterLab as a user and as an extension author, exploring its capabilities and offering a demonstration on how to create a simple extension to the environment.

Keynote:
WHO: Peter Wang, co-founder and CTO, Anaconda Powered by Continuum Analytics
WHAT: Jupyter & Anaconda: Shaking Up the Enterprise
WHEN: August 24, 9:15am-9:25am ET
WHERE: Grand Ballroom

Talk #1:
WHO: Peter Wang, co-founder and CTO, Anaconda Powered by Continuum Analytics
WHAT: Fueling Open Innovation in a Data-Centric World
WHEN: August 24, 11:55am-12:35pm ET
WHERE: Regent Parlor

Talk #2:
WHO: Christine Doig, senior data scientist, product manager, Anaconda Powered by Continuum Analytics; Fabio Pliger, software engineer, Anaconda Powered by Continuum Analytics
WHAT: Leveraging Jupyter to Build an Excel-Python Bridge
WHEN: August 24, 11:05am-11:45am ET
WHERE: Murray Hill

Talk #3:
WHO: Christine D[...]



Eli Bendersky: Right and left folds, primitive recursion patterns in Python and Haskell

Wed, 16 Aug 2017 12:48:00 +0000

A "fold" is a fundamental primitive in defining operations on data structures; it's particularly important in functional languages where recursion is the default tool to express repetition. In this article I'll present how left and right folds work and how they map to some fundamental recursive patterns. The article starts with Python, which should be (or at least look) familiar to most programmers. It then switches to Haskell for a discussion of more advanced topics like the connection between folding and laziness, as well as monoids.

Extracting a fundamental recursive pattern

Let's begin by defining a couple of straightforward functions in a recursive manner, in Python. First, computing the product of all the numbers in a given list:

def product(seq):
    if not seq:
        return 1
    else:
        return seq[0] * product(seq[1:])

Needless to say, we wouldn't really write this function recursively in Python; but if we were, this is probably how we'd write it. Now another, slightly different, function. How do we double (multiply by 2) every element in a list, recursively?

def double(seq):
    if not seq:
        return []
    else:
        return [seq[0] * 2] + double(seq[1:])

Again, ignoring the fact that Python has much better ways to do this (list comprehensions, for example), this is a straightforward recursive pattern that experienced programmers can produce in their sleep. In fact, there's a lot in common between these two implementations. Let's try to find the commonalities. As this diagram shows, the functions product and double are really only different in three places:

1. The initial value produced when the input sequence is empty.
2. The mapping applied to every sequence value processed.
3. The combination of the mapped sequence value with the rest of the sequence.

For product:

- The initial value is 1.
- The mapping is identity (each sequence element just keeps its value, without change).
- The combination is the multiplication operator.

Can you figure out the same classification for double? Take a few moments to try for yourself. Here it is:

- The initial value is the empty list [].
- The mapping takes a value, multiplies it by 2 and puts it into a list. We could express this in Python as lambda x: [x * 2].
- The combination is the list concatenation operator +.

With the diagram above and these examples, it's straightforward to write a generalized "recursive transform" function that can be used to implement both product and double:

def transform(init, mapping, combination, seq):
    if not seq:
        return init
    else:
        return combination(mapping(seq[0]),
                           transform(init, mapping, combination, seq[1:]))

The transform function is parameterized with init - the initial value, mapping - a mapping function applied to every sequence value, and combination - the combination of the mapped sequence value with the rest of the sequence. With these given, it implements the actual recursive traversal of the list. Here's how we'd write product in terms of transform:

def product_with_transform(seq):
    return transform(1, lambda x: x, lambda a, b: a * b, seq)

And double:

def double_with_transform(seq):
    return transform([], lambda x: [x * 2], lambda a, b: a + b, seq)

foldr - fold right

Generalizations like transform make functional programming fun and powerful, since they let us express complex ideas with the help of relatively few building blocks. Let's take this idea further, by generalizing transform even more. The main insight guiding us is that the mapping and the combination don't even have to be separate functions. A single function can play both roles.

In the definition of transform, combination is applied to:

1. The result of calling mapping on the current sequence value.
2. The recursive application of the transformatio[...]
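As a minimal sketch (my own, not the article's exact code) of the right fold this section is building toward, here is a version where a single two-argument function plays both the mapping and combination roles:

def foldr(func, init, seq):
    # Apply func right-to-left: func(seq[0], func(seq[1], ... func(seq[-1], init)))
    if not seq:
        return init
    return func(seq[0], foldr(func, init, seq[1:]))

def product_with_foldr(seq):
    return foldr(lambda x, acc: x * acc, 1, seq)

def double_with_foldr(seq):
    return foldr(lambda x, acc: [x * 2] + acc, [], seq)

assert product_with_foldr([2, 3, 4]) == 24
assert double_with_foldr([2, 3, 4]) == [4, 6, 8]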



Catalin George Festila: The DreamPie - interactive shell .

Wed, 16 Aug 2017 12:47:40 +0000

DreamPie was designed to bring you a great interactive Python shell experience.
There are two ways to install DreamPie:
  • cloning the git repository;
  • downloading a release.
You can read about installation and download here.
To run it, just start dreampie.exe with your Python interpreter; I used it with my Python 2.7 installation:
C:\DreamPie>dreampie.exe --hide-console-window c:\Python27\python.exe
Let's see one screenshot of this running command:
(image)
Also, I tested it with Python 3.6.2 and it works well.
The main window is divided into the history box and the code box.
The history box lets you view previous commands and their output.
The code box is where you write your code.
Some keys I used:

  • Ctrl+Enter - run the code;
  • Ctrl+Up / Down arrow - insert the previous / next source code;
  • Ctrl+Space - show code completions;
  • Ctrl+T - open a new code tab;
  • Ctrl+W - close the code tab;
  • Ctrl+S - save your work history to an HTML file.

You can set your font, colors and many other features.
I made the installation into the C:\DreamPie folder, and it comes with all these folders and files:
C:\DreamPie>tree
Folder PATH listing for volume free-tutorials
Volume serial number is 000000FF 0EB1:091D
C:.
├───data
│ ├───language-specs
│ ├───subp-py2
│ │ └───dreampielib
│ │ ├───common
│ │ └───subprocess
│ └───subp-py3
│ └───dreampielib
│ ├───common
│ └───subprocess
├───gtk-2.0
│ ├───cairo
│ ├───gio
│ ├───glib
│ ├───gobject
│ ├───gtk
│ └───runtime
│ ├───bin
│ ├───etc
│ │ ├───bash_completion.d
│ │ ├───fonts
│ │ ├───gtk-2.0
│ │ └───pango
│ ├───lib
│ │ ├───gdk-pixbuf-2.0
│ │ │ └───2.10.0
│ │ │ └───loaders
│ │ ├───glib-2.0
│ │ │ └───include
│ │ └───gtk-2.0
│ │ ├───2.10.0
│ │ │ └───engines
│ │ ├───include
│ │ └───modules
│ └───share
│ ├───aclocal
│ ├───dtds
│ ├───glib-2.0
│ │ ├───gdb
│ │ ├───gettext
│ │ │ └───po
│ │ └───schemas
│ ├───gtk-2.0
│ ├───gtksourceview-2.0
│ │ ├───language-specs
│ │ └───styles
│ ├───icon-naming-utils
│ ├───themes
│ │ ├───Default
│ │ │ └───gtk-2.0-key
│ │ ├───Emacs
│ │ │ └───gtk-2.0-key
│ │ ├───MS-Windows
│ │ │ └───gtk-2.0
│ │ └───Raleigh
│ │ └───gtk-2.0
│ └───xml
│ └───libglade
└───share
├───applications
├───man
│ └───man1
└───pixmaps



PyCharm: Analyzing Data in Amazon Redshift with Pandas

Wed, 16 Aug 2017 11:45:41 +0000

Redshift is Amazon Web Services’ data warehousing solution. They’ve extended PostgreSQL to better suit large datasets used for analysis. When you hear about this kind of technology as a Python developer, it just makes sense to then unleash Pandas on it. So let’s have a look to see how we can analyze data in Redshift using a Pandas script!

Setting up Redshift

If you haven’t used Redshift before, you should be able to get the cluster up for free for 2 months, as long as you make sure that you don’t use more than one instance, and that you use the smallest available instance. To play around, let’s use Amazon’s example dataset, and to keep things very simple, let’s only load the ‘users’ table. Configuring AWS is a complex subject, and they’re a lot better at explaining how to do it than we are, so please complete the first four steps of the AWS tutorial for setting up an example Redshift environment. We’ll use PyCharm Professional Edition as the SQL client.

Connecting to Redshift

After spinning up Redshift, you can connect PyCharm Professional to it by heading over to the database tool window (View | Tool Windows | Database), then use the green ‘+’ button, and select Redshift as the data source type. Then fill in the information for your instance. Make sure that when you click the ‘test connection’ button you get a ‘connection successful’ notification. If you don’t, make sure that you’ve correctly configured your Redshift cluster’s VPC to allow connections from 0.0.0.0/0 on port 5439.

Now that we’ve connected PyCharm to the Redshift cluster, we can create the tables for Amazon’s example data. Copy the first code listing from here, and paste it into the SQL console that was opened in PyCharm when you connected to the database. Then execute it by pressing Ctrl+Enter; when PyCharm asks which query to execute, make sure to select the full listing. Afterward, you should see all the tables in the database tool window. To load the sample data, go back to the query window, and use the Redshift ‘load’ command to load data from an Amazon S3 bucket into the database. The IAM role identifier should be the identifier for the IAM role you’ve created for your Redshift cluster in the second step of the Amazon tutorial. If everything goes right, you should have about 50,000 rows of data in your users table after the command completes.

Loading Redshift Data into a Pandas Dataframe

So let’s get started with the Python code! In our example we’ll use Pandas, Matplotlib, and Seaborn. The easiest way to get all of these installed is by using Anaconda; get the Python 3 version from their website. After installing, we need to choose Anaconda as our project interpreter. If you can’t find Anaconda in the dropdown, you can click the settings “gear” button, and then select ‘Add Local’ and find your Anaconda installation on your disk. We’re using the root Anaconda environment without Conda, as we will depend on several scientific libraries which are complicated to correctly install in Conda environments.

Pandas relies on SQLAlchemy to load data from an SQL data source. So let’s use the PyCharm package manager to install sqlalchemy: use the green ‘+’ button next to the package list and find the package. To make SQLAlchemy work well with Redshift, we’ll need to install both the postgres driver, and the Redshift additions. For postgres, you can use the PyCharm package manager to install psycopg2. Then we need to install sqlalchemy-redshift to teach SQLAlchemy the specifics of working with a Redshift cluster.

This package is unfortunately not available in the default Anaconda repository, so we’ll need to add a custom repository. To add a custom repository click the ‘Manage Reposito[...]
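As a rough sketch of the workflow described above (the cluster endpoint, credentials and query are placeholders, not values from the post), loading the users table into a Pandas DataFrame looks roughly like this, assuming sqlalchemy, psycopg2 and sqlalchemy-redshift are installed:

import pandas as pd
from sqlalchemy import create_engine

# sqlalchemy-redshift registers the "redshift+psycopg2" dialect with SQLAlchemy.
engine = create_engine(
    "redshift+psycopg2://user:password@example-cluster.abc123"
    ".us-east-1.redshift.amazonaws.com:5439/dev"
)

# Pandas hands the query to SQLAlchemy and returns the result as a DataFrame.
users = pd.read_sql_query("SELECT * FROM users", engine)
print(users.head())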



Stéphane Wirtel: PythonFOSDEM 2018

Tue, 15 Aug 2017 19:00:00 +0000

Because I want to be ahead of schedule this year for the organization of PythonFOSDEM 2018, I have worked on the web site. The Call for Proposals will be announced once we have the “Go” from FOSDEM. In fact, for 2018, I do not know if we will have a room at FOSDEM, because during PythonFOSDEM 2017 we received sponsorship from Facebook and the organizers of FOSDEM were angry, because the sponsorship was for 1000 free beers ;-)



PyCharm: Support a Great Partnership: PyCharm and Django Team up Again

Tue, 15 Aug 2017 17:49:50 +0000

Last June (2016) JetBrains PyCharm partnered with the Django Software Foundation to generate a big boost to Django fundraising. The campaign was a huge success. Together we raised a total of $50,000 for the Django Software Foundation!

This year we hope to repeat that success. During the two-week campaign, buy a new PyCharm Professional Edition individual license with a 30% discount code, and all the money raised will go to the DSF’s general fundraising and the Django Fellowship program.

Promotion details

(image) Up until Aug 28th, you can effectively donate to Django by purchasing a New Individual PyCharm Professional annual subscription at 30% off. It’s very simple:

1. When buying a new annual PyCharm subscription in our e-store, on the checkout page, click “Have a discount code?”.
2. Enter the following 30% discount promo code:
ISUPPORTDJANGO
Alternatively, just click this shortcut link to go to the e-store with the code automatically applied.
3. Fill in the other required fields on the page and click the “Place order” button.

All of the income from this promotion code will go to the DSF fundraising campaign 2017 – not just the profits, but actually the entire sales amount including taxes, transaction fees – everything. The campaign will help the DSF to maintain the healthy state of the Django project and help them continue contributing to their different outreach and diversity programs.

Read more details on the special promotion page.

(image)

“Django has grown to be a world-class web framework, and coupled with PyCharm’s Django support, we can give tremendous developer productivity,” says Frank Wiles, DSF President. “Last year JetBrains was a great partner for us in support of raising money for the Django Software Foundation, on behalf of the community, I would like to extend our deepest thanks for their generous help. Together we hope to make this a yearly event!”

If you have any questions, get in touch with Django at fundraising@djangoproject.com or JetBrains at sales@jetbrains.com.

(image)



Django Weblog: Support a Great Partnership: PyCharm and Django Team up Again

Tue, 15 Aug 2017 14:38:34 +0000

Last June (2016) JetBrains PyCharm partnered with the Django Software Foundation to generate a big boost to Django fundraising. The campaign was a huge success. Together we raised a total of $50,000 for the Django Software Foundation!

This year we hope to repeat that success. During the two-week campaign, buy a new PyCharm Professional Edition individual license with a 30% discount code, and all the money raised will go to the DSF’s general fundraising and the Django Fellowship program.

Promotion details

Up until Aug 28th, you can effectively donate to Django by purchasing a New Individual PyCharm Professional annual subscription at 30% off. It’s very simple:

  1. When buying a new annual PyCharm subscription in our e-store, on the checkout page, click “Have a discount code?”.
  2. Enter the following 30% discount promo code:
    IDONATETODJANGO

Alternatively, just click this shortcut link to go to the e-store with the code automatically applied.

  3. Fill in the other required fields on the page and click the “Place order” button.

All of the income from this promotion code will go to the DSF fundraising campaign 2017 – not just the profits, but actually the entire sales amount including taxes, transaction fees – everything. The campaign will help the DSF to maintain the healthy state of the Django project and help them continue contributing to their different outreach and diversity programs.

Read more details on the special promotion page.

“Django has grown to be a world-class web framework, and coupled with PyCharm’s Django support, we can give tremendous developer productivity,” says Frank Wiles, DSF President. “Last year JetBrains was a great partner for us in support of raising money for the Django Software Foundation, on behalf of the community, I would like to extend our deepest thanks for their generous help. Together we hope to make this a yearly event!”

If you have any questions, get in touch with Django at fundraising@djangoproject.com or JetBrains at sales@jetbrains.com.




Fabio Zadrozny: PyDev 5.9.2 released (Debugger improvements, isort, certificate)

Tue, 15 Aug 2017 11:34:20 +0000

PyDev 5.9.2 is now available for download.

This version now integrates the performance improvements which were done in PyDev.Debugger for Python 3.6 (which uses the new hook made available by Python and changes the bytecode to add calls to the debugger, so that there's less overhead during debugging -- note that this only really takes place if breakpoints are added before a given piece of code is loaded; adding or removing breakpoints afterwards falls back to the previous approach of tracing).

Another nice feature in this release is that isort (https://github.com/timothycrosley/isort) can be used as the default engine for sorting imports (it needs to be configured in Preferences > PyDev > Editor > Code Style > Imports -- note that in that same preferences dialog you may save the settings to a project, not only globally).

There were also a number of bug fixes... in particular, one that prevented text searches from working if the user had another plugin which used a different version of Lucene was really nasty... http://www.pydev.org has more details on the changes.

This is also the first release which is signed with a proper certificate (provided by Comodo) -- so it's nice that Eclipse won't complain that the plugin is not signed when it's being installed. Although I discovered that it isn't as useful as I thought... it does work as intended for Eclipse plugins, but for Windows, even signing the LiClipse installer will show a dialog for users (there's a more expensive version with extended validation which could be used, but I didn't go for that one), and on Mac OS I haven't even tried to sign, as it seems Comodo certificates are worthless there (the only choice is having a developer subscription from Apple and using a certificate Apple gives you... the verification they do seems compatible with what Comodo does, which uses a DUNS number, so it's apparently just a matter of them wanting more $$$/control, not really being more secure). So, currently Mac users will still get unsigned binaries (the sha256 is provided for users who want to check that what they download is what's being distributed).
(image)



Martin Fitzpatrick: KropBot: Multiplayer Internet-controlled robot

Tue, 15 Aug 2017 09:00:00 +0000

KropBot is a little multiplayer robot you can control over the internet. Co-operate with random internet strangers to drive around in circles and into walls. If it is online, you can drive the KropBot yourself!

KropBot is dead. 15 minutes after posting to Planet Python, KropBot was mercilessly driven down a flight of stairs. He is no more, he is kaput. He is an ex-robot.

Requirements

If you already have a working 2-motor robot platform you can skip straight to the code. The code shown below will work with any Raspberry Pi with WiFi (Zero W recommended) and a MotorHAT.

- Raspberry Pi Zero W or Raspberry Pi 3B
- Raspberry Pi Camera - if you're using a Zero W you need the specific Pi Zero camera cable (it's smaller on the Pi end)
- MotorHAT - there are official kits from Adafruit but cheaper options are also available
- 2-motor robot platform/chassis - in this example I'm using a rubber tank chassis, but this required some hacking (see below) to get a 4xAA battery pack inside and is not recommended. Try a shock-absorbed robot tank chassis or a 4WD car base. If you want to go off road, avoid bases with 2 wheels + a bumper, which are only suitable for flat, smooth floors.
- 4x AA (or bigger, check motor specifications) battery pack to power the motors
- Li-Ion battery pack for the Pi Zero - I recommend using a USB backup powerbank as they're cheap, easy to charge and will easily run a Pi Zero W for a few hours. If you want longer runtime or a permanent installation you could also look into using multiple 18650 cells with a charging board.

I also needed:

- Short lengths of wire (to extend the short wires in the base)
- Soldering iron, for extending wires and soldering the switch
- Heatshrink tubing to cover solder joints
- Card, bluetack and tape to hold the camera in place

Build

Chassis

The chassis I used came pre-constructed, with motors in place, and seems to be the deconstructed base of a toy tank. The sales photo slightly oversells its capabilities. There was no AA battery holder included, just a space behind a flap labelled 4.8V. The space measured the size of 4xAA batteries (giving 6V total) but the 4xAA battery holder I ordered didn't fit. However, a 6xAA battery pack I had could be cut down to size by lopping off 2 holders and rewiring. Save yourself the hassle and get a chassis with a battery holder. The pack is still a bit too deep, but the door can be closed with a screwdriver to wedge it shut.

An ON/OFF switch is provided in the bottom of the case, which I wanted to be able to use to switch off the motor power (to save battery life when the Pi wasn't running). The power leads were fed through to the upper side, but were too short to reach the switch, so these were first extended before being soldered to the switch.

MotorHAT

The MotorHAT (here using a cheap knock-off) is an extension board for controlling 4 motors (or stepper motors) from a Pi, with speed and direction control. This is a shortcut, but you could also use L293D motor drivers together with PWM on the GPIO pins for speed control. Once wired into the power supply (AA batteries) the MotorHAT power LED should light up. The motor supply is wired in separately to the HAT, and the board keeps this isolated from the Pi supply/GPIO. The + lead is wired in through the switch as already described. Next the motors are wired into the terminals, using the outer terminals for each motor. The left motor goes on M1 and the right on M2. Getting the wires the right way around (so forward=forward) is a cas[...]
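As a rough sketch (mine, not code from the post) of driving the two motors wired to M1 and M2 as described above, using the Adafruit_MotorHAT library; the I2C address, motor numbering and speed values are illustrative:

import time
from Adafruit_MotorHAT import Adafruit_MotorHAT

hat = Adafruit_MotorHAT(addr=0x60)        # default I2C address for the HAT
left = hat.getMotor(1)                    # M1 = left motor, as wired above
right = hat.getMotor(2)                   # M2 = right motor

for motor in (left, right):
    motor.setSpeed(150)                   # 0-255, PWM speed control
    motor.run(Adafruit_MotorHAT.FORWARD)  # both motors forward = drive straight

time.sleep(2)                             # drive for two seconds

for motor in (left, right):
    motor.run(Adafruit_MotorHAT.RELEASE)  # stop and free the motors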



Talk Python to Me: #125 Django REST framework and a new API star is born

Tue, 15 Aug 2017 08:00:00 +0000

APIs were once the new and enabling thing in technology. Today they are table stakes. And getting them right is important. Today we'll talk about one of the most popular and mature API frameworks: Django REST Framework. You'll meet the creator, Tom Christie, and talk about the framework, API design, and even his successful take on funding open source projects.

But Tom is not done here. He's also creating the next generation API framework that fully embraces Python 3's features called API Star.

Links from the show:

Django REST framework: django-rest-framework.org
API Star: github.com/tomchristie/apistar
Tom on Twitter: @_tomchristie



Daniel Bader: Unpacking Nested Data Structures in Python

Tue, 15 Aug 2017 00:00:00 +0000

Unpacking Nested Data Structures in Python - a tutorial on Python’s advanced data unpacking features: how to unpack data with the “=” operator and for-loops.

Have you ever seen Python’s enumerate function being used like this?

for (i, value) in enumerate(values):
    ...

In Python, you can unpack nested data structures in sophisticated ways, but the syntax might seem complicated: Why does the for statement have two variables in this example, and why are they written inside parentheses? This article answers those questions and many more. I wrote it in two parts:

- First, you’ll see how Python’s “=” assignment operator iterates over complex data structures. You’ll learn about the syntax of multiple assignments, recursive variable unpacking, and starred targets.
- Second, you’ll discover how the for-statement unpacks data using the same rules as the = operator. Again, we’ll go over the syntax rules first and then dive into some hands-on examples.

Ready? Let’s start with a quick primer on the “BNF” syntax notation used in the Python language specification.

BNF Notation - A Primer for Pythonistas

This section is a bit technical, but it will help you understand the examples to come. The Python 2.7 Language Reference defines all the rules for the assignment statement using a modified form of Backus-Naur notation. The Language Reference explains how to read BNF notation. In short:

- symbol_name ::= starts the definition of a symbol
- ( ) is used to group symbols
- * means appearing zero or more times
- + means appearing one or more times
- (a|b) means either a or b
- [ ] means optional
- "text" means the literal text. For example, "," means a literal comma character.

Here is the complete grammar for the assignment statement in Python 2.7. It looks a little complicated because Python allows many different forms of assignment. An assignment statement consists of one or more (target_list "=") groups followed by either an expression_list or a yield_expression:

assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)

A target list consists of a target followed by zero or more ("," target) groups followed by an optional trailing comma:

target_list ::= target ("," target)* [","]

Finally, a target consists of any of the following: a variable name, a nested target list enclosed in ( ) or [ ], a class or instance attribute, a subscripted list or dictionary, or a list slice:

target ::= identifier
           | "(" target_list ")"
           | "[" [target_list] "]"
           | attributeref
           | subscription
           | slicing

As you’ll see, this syntax allows you to take some clever shortcuts in your code. Let’s take a look at them now:

#1 - Unpacking and the “=” Assignment Operator

First, you’ll see how Python’s “=” assignment operator iterates over complex data structures. You’ll learn about the syntax of multiple assignments, recursive variable unpacking, and starred targets.

Multiple Assignments in Python: Multiple assignment is a shorthand way of assigning the same value to many variables. An assignment statement usually assigns one value to one variable:

x = 0
y = 0
z = 0

But in Python you can combine these three assignments into one expression:

x = y = z = 0

Recursive Variable Unpacking: I’m sure you’ve written [ ] and ( ) on the right side of an assignment statement to pack values into a data structure. But did you know that you can literally flip the script by writing [ ] and ( ) on the left side? Here’s an example:

[target, target, target, ...] = or (target,[...]
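As a small sketch (mine, not the article's own listing) of the unpacking patterns the article goes on to describe: nested targets, a starred target, and the same rules applied by the for statement:

point = ('origin', (0, 0))
name, (x, y) = point             # recursive (nested) unpacking
assert (name, x, y) == ('origin', 0, 0)

first, *rest = [1, 2, 3, 4]      # starred target (Python 3 only) collects the remainder
assert first == 1 and rest == [2, 3, 4]

values = ['a', 'b', 'c']
for i, value in enumerate(values):   # the for statement unpacks each (i, value) pair
    print(i, value)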



Continuum Analytics News: Five Organizations Successfully Fueling Innovation with Data Science

Mon, 14 Aug 2017 18:12:50 +0000

Company Blog | Tuesday, August 15, 2017
Christine Doig, Sr. Data Scientist, Product Manager

Data science innovation requires availability, transparency and interoperability. But what does that mean in practice? At Anaconda, it means providing data scientists with open source tools that facilitate collaboration; moving beyond analytics to intelligence. Open source projects are the foundation of modern data science and are popping up across industries, making it more accessible, more interactive and more effective. So, who’s leading the open source charge in the data science community? Here are five organizations to keep your eye on:

1. TaxBrain. TaxBrain is a platform that enables policy makers and the public to simulate and study the effects of tax policy reforms using open source economic models. Using the open source platform, anyone can plug in elements of the administration’s proposed tax policy to get an idea of how it would perform in the real world.

"Why public policy is going #opensource via @teoliphant @MattHJensen in @datanami https://t.co/vKTzYtdvGl #datascience #taxbrain" - Continuum Analytics (@ContinuumIO), August 17, 2016

2. Recursion Pharmaceuticals. Recursion is a pharmaceutical company dedicated to finding remedies for rare genetic diseases. Its drug discovery assay is built on an open source software platform, combining biological science with machine learning techniques to visualize cell data and test drugs efficiently. This approach shortens the research and development process, reducing time to market for remedies to these rare genetic diseases. Their goal is to treat 100 diseases by 2026 using this method.

3. The U.S. Government. Under the previous administration, the U.S. government launched Data.gov, an open data initiative that offers more than 197K datasets for public use. This database exists, in part, thanks to the former U.S. chief data scientist, DJ Patil. He helped drive the government’s data science projects forward, at the city, state and federal levels. Recently, concerns have been raised over the Data.gov portal, as certain information has started to disappear. Data scientists are keeping a sharp eye on the portal to ensure that these resources are updated and preserved for future innovative projects.

4. Comcast. Telecom and broadcast giant Comcast runs its projects on open source platforms to drive data science innovation in the industry. For example, earlier this month, Comcast’s advertising branch announced they were creating a Blockchain Insights Platform to make the planning, targeting, execution and measurement of video ads more efficient. This data-driven, secure approach would be a game changer for the advertising industry, which eagerly awaits its launch in 2018.

5. DARPA. The Defense Advanced Research Projects Agency (DARPA) is behind the Memex project, a program dedicated to fighting human trafficking, which is a top mission for the defense department. DARPA estimates that in two years, traffickers spent $250 million posting the temporary advertisements that fuel the human trafficking trade. Using an open source platform, Memex is able to index and cross-reference interactive and social media, text, images and video across the web. This allows them to find the patterns in web data that indicate human trafficking. Memex’s data science approach is already credited with generating at least 20 active cases and nine open indictments.

These are just some of the examples of open source-fueled [...]



PyPy Development: Let's remove the Global Interpreter Lock

Mon, 14 Aug 2017 15:34:26 +0000

Hello everyone.

The Python community has been discussing removing the Global Interpreter Lock for a long time. There have been various attempts at removing it: Jython or IronPython successfully removed it with the help of the underlying platform, and some have yet to bear fruit, like gilectomy. Since our February sprint in Leysin, we have experimented with the topic of GIL removal in the PyPy project. We believe that the work done in IronPython or Jython can be reproduced with only a bit more effort in PyPy. Compared to that, removing the GIL in CPython is a much harder topic, since it also requires tackling the problem of multi-threaded reference counting. See the section below for further details.

As we announced at EuroPython, what we have so far is a GIL-less PyPy which can run very simple multi-threaded, nicely parallelized, programs. At the moment, more complicated programs probably segfault. The remaining 90% (and another 90%) of work is putting locks in strategic places so PyPy does not segfault during concurrent accesses to data structures.

Since such work would complicate the PyPy code base and our day-to-day work, we would like to judge the interest of the community and the commercial partners to make it happen (we are not looking for individual donations at this point). We estimate a total cost of $50k, out of which we already have backing for about 1/3 (with a possible 1/3 extra from the STM money, see below). This would give us a good shot at delivering a good proof-of-concept working PyPy with no GIL. If we can get a $100k contract, we will deliver a fully working PyPy interpreter with no GIL as a release, possibly separate from the default PyPy release. People asked several questions, so I'll try to answer the technical parts here.

What would the plan entail?

We've already done the work on the Garbage Collector to allow doing multi-threaded programs in RPython. "All" that is left is adding locks on mutable data structures everywhere in the PyPy codebase. Since it would significantly complicate our workflow, we require real interest in that topic, backed up by commercial contracts, in order to justify the added maintenance burden.

Why did the STM effort not work out?

STM was a research project that proved that the idea is possible. However, the amount of user effort that is required to make programs run in a parallelizable way is significant, and we never managed to develop tools that would help in doing so. At the moment we're not sure if more work spent on tooling would improve the situation or if the whole idea is really doomed. The approach also ended up adding significant overhead on single-threaded programs, so in the end it is very easy to make your programs slower. (We have some money left in the donation pot for STM which we are not using; according to the rules, we could declare the STM attempt failed and channel that money towards the present GIL removal proposal.)

Wouldn't subinterpreters be a better idea?

Python is a very mutable language - there are tons of mutable state and basic objects (classes, functions, ...) that are compile-time in other languages but runtime and fully mutable in Python. In the end, sharing things between subinterpreters would be restricted to basic immutable data structures, which defeats the point. Subinterpreters suffer from the same problems as multiprocessing with no additional benefits. We believe that reducing mutability to implement subinterpreters is not viable without seriously impacting the semantics o[...]
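For context, the kind of "very simple multi-threaded, nicely parallelized" program referred to above looks roughly like the following illustrative sketch (not from the post): CPU-bound work split across threads, which the GIL serializes today but a GIL-less interpreter could run on separate cores.

import threading

def count_primes(limit):
    # Deliberately naive, CPU-bound work.
    total = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

results = [0, 0]

def worker(index, limit):
    results[index] = count_primes(limit)

threads = [threading.Thread(target=worker, args=(i, 50000)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results))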



Doug Hellmann: statistics — Statistical Calculations — PyMOTW 3

Mon, 14 Aug 2017 13:00:31 +0000

The statistics module implements many common statistical formulas for efficient calculations using Python’s various numerical types (int, float, Decimal, and Fraction). Read more… This post is part of the Python Module of the Week series for Python 3. See PyMOTW.com for more articles from the series. (image)
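A short illustration (mine, not from the PyMOTW article) of the kind of calculations the module provides, using made-up sample data:

import statistics
from decimal import Decimal

data = [2.5, 3.25, 5.0, 1.75, 4.5]
print(statistics.mean(data))      # arithmetic mean
print(statistics.median(data))    # middle value
print(statistics.stdev(data))     # sample standard deviation

# The same functions also accept Decimal (and Fraction) values.
print(statistics.mean([Decimal("0.5"), Decimal("0.75")]))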



Mike Driscoll: PyDev of the Week: Brian E. Granger

Mon, 14 Aug 2017 12:30:20 +0000

This week we welcome Brian E. Granger (@ellisonbg) as our PyDev of the Week! Brian is an early core contributor of the IPython Notebook and now leads the Project Jupyter Notebook team. He is also an Associate Professor of Physics and Data Science at California Polytechnic State University. You can also check out what projects he is working on over at GitHub. Let’s take a few moments to get to know Brian better!

Can you tell us a little about yourself (hobbies, education, etc):

I am going to start with the fun stuff. Since high school I have been playing the guitar, swimming and meditating. It is hard to be disciplined, but I couldn’t survive without a regular practice of these things. Doing intellectual work, such as coding, for long periods of time (decades) is really taxing on the mind, and that spills over to the body. I truly love coding, but these other things are the biggest reason I am still coding productively at 45.

In some ways, I look like a pretty traditional academic, with a Ph.D. in theoretical physics from the University of Colorado, Boulder, followed by a postdoc and now a tenured faculty position in the Physics Department at Cal Poly San Luis Obispo. Along the way, I started building open-source software and that has slowly overtaken my entire professional life. Fernando Pérez (IPython’s creator) and I were classmates in graduate school; I began working on IPython around 2005. Fernando remains a dear friend and the best collaborator I could ever ask for. The vision for the IPython/Jupyter notebook came out of a late night discussion over ice cream with him in 2004. It took us until 2011 to ship the original IPython Notebook. Since then my main research focus has been on Project Jupyter and other open-source tools for data science and scientific computing.

Why did you start using Python?

I first used Python as a postdoc in 2003. My first Python program used VPython to simulate and visualize traffic flow. I had written a previous version of the simulation using C++, and couldn’t believe how Python enabled me to spend more time thinking about the physics and less about the code. Within a short period of time, I couldn’t bring myself to keep working in C++ for scientific work.

What other programming languages do you know and which is your favorite?

I used Mathematica in my physics research during the 1990’s. During graduate school and as a postdoc, I worked in C++. At the time, C++ was still pretty painful. I don’t miss that, but modern C++ actually looks quite nice. Python remains my favorite language, mainly because it is so much fun and has an amazing community. At the same time, these days I am doing a lot of frontend development for JupyterLab in TypeScript. For a large project with many contributors, having static type checking is revolutionary. TypeScript looks a lot like Python 3’s type annotations, and I can’t wait to begin using Python with static type checking.

What projects are you working on now?

Jupyter and IPython continue to take up most of my time. On that side of things I am working hard with the rest of the JupyterLab team to get the first version of JupyterLab released this summer. In 2016, Jake VanderPlas and I started Altair, which is a statistical visualization package for Python based on Vega/Vega-Lite from Jeff Heer’s Interactive Data Lab at the University of Washington. While I spend less time on Altair, it, along with Vega/Vega-Lite, are a critical part of the overall [...]



Sandipan Dey: Dogs vs. Cats: Image Classification with Deep Learning using TensorFlow in Python

Sun, 13 Aug 2017 22:28:19 +0000

The problem: Given a set of labeled images of cats and dogs, a machine learning model is to be learnt and later used to classify a set of new images as cats or dogs. This problem appeared in a Kaggle competition and the images are taken from this Kaggle dataset. The original dataset … Continue reading Dogs vs. Cats: Image Classification with Deep Learning using TensorFlow in Python (image)



Jeremy Epstein: Using Python's namedtuple for mock objects in tests

Sat, 12 Aug 2017 20:56:03 +0000

I have become quite a fan of Python's built-in namedtuple collection lately. As others have already written, despite having been available in Python 2.x and 3.x for a long time now, namedtuple continues to be under-appreciated and under-utilised by many programmers.

# The ol'fashioned tuple way
fruits = [
    ('banana', 'medium', 'yellow'),
    ('watermelon', 'large', 'pink')]

for fruit in fruits:
    print('A {0} is coloured {1} and is {2} sized'.format(
        fruit[0], fruit[2], fruit[1]))

# The nicer namedtuple way
from collections import namedtuple

Fruit = namedtuple('Fruit', 'name size colour')

fruits = [
    Fruit(name='banana', size='medium', colour='yellow'),
    Fruit(name='watermelon', size='large', colour='pink')]

for fruit in fruits:
    print('A {0} is coloured {1} and is {2} sized'.format(
        fruit.name, fruit.colour, fruit.size))

namedtuples can be used in a few obvious situations in Python. I'd like to present a new and less obvious situation that I haven't seen any examples of elsewhere: using a namedtuple instead of MagicMock or flexmock, for mocking objects in unit tests.
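To make that idea concrete, here is a minimal sketch (my own illustration, not code from the post): a namedtuple standing in for a richer object in a test; describe_fruit and the test itself are hypothetical.

from collections import namedtuple

def describe_fruit(fruit):
    # The code under test only needs .name and .colour from the fruit object.
    return '{0} ({1})'.format(fruit.name, fruit.colour)

# A lightweight stand-in for whatever "real" fruit object production code uses.
FakeFruit = namedtuple('FakeFruit', 'name colour')

def test_describe_fruit():
    fake = FakeFruit(name='banana', colour='yellow')
    assert describe_fruit(fake) == 'banana (yellow)'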