Subscribe: Planet Python
http://www.planetpython.org/rss20.xml
Planet Python - http://planetpython.org/



 



Mike Driscoll: Flask 101: Adding, Editing and Displaying Data

Thu, 14 Dec 2017 06:05:37 +0000

Last time we learned how to add a search form to our music database application. Of course, we still haven’t added any data to our database, so the search form doesn’t actually do much of anything except tell us that it didn’t find anything. In this tutorial we will learn how to actually add data, display search results and edit entries in the database. Let’s get started!

Adding Data to the Database

Let’s start by coding up our new album form. Open up the “forms.py” file we created in the last tutorial and add the following class:

```python
class AlbumForm(Form):
    media_types = [('Digital', 'Digital'),
                   ('CD', 'CD'),
                   ('Cassette Tape', 'Cassette Tape')]
    artist = StringField('Artist')
    title = StringField('Title')
    release_date = StringField('Release Date')
    publisher = StringField('Publisher')
    media_type = SelectField('Media', choices=media_types)
```

This defines all the fields we need to create a new Album. Now we need to open “main.py” and add a function to handle what happens when we want to create the new album.
```python
# main.py

from app import app
from db_setup import init_db, db_session
from forms import MusicSearchForm, AlbumForm
from flask import flash, render_template, request, redirect
from models import Album

init_db()


@app.route('/', methods=['GET', 'POST'])
def index():
    search = MusicSearchForm(request.form)
    if request.method == 'POST':
        return search_results(search)

    return render_template('index.html', form=search)


@app.route('/results')
def search_results(search):
    results = []
    search_string = search.data['search']

    if search.data['search'] == '':
        qry = db_session.query(Album)
        results = qry.all()

    if not results:
        flash('No results found!')
        return redirect('/')
    else:
        # display results
        return render_template('results.html', table=table)


@app.route('/new_album', methods=['GET', 'POST'])
def new_album():
    """
    Add a new album
    """
    form = AlbumForm(request.form)
    return render_template('new_album.html', form=form)


if __name__ == '__main__':
    app.run()
```

Here we add an import for our new form at the top and then we create a new function called new_album(). In it we create an instance of our new form and pass it to render_template(), which will render a file called “new_album.html”. Of course, this HTML file doesn’t exist yet, so that will be the next thing we need to create. When you save this new HTML file, make sure you save it to the “templates” folder inside of your “musicdb” folder. Once you have “new_album.html” created, add the following HTML to it: New Album - Flask Music Database

New Album

```
{% from "_formhelpers.html" import render_field %}
{{ render_field(form.artist) }}
{{ render_field(form.title) }}
{{ render_field(form.release_date) }}
{{ render_field(form.publisher) }}
{{ render_field(form.media_type) }}
```

This code will render each field in the form and it also creates a Submit button so we can save our changes. The last thing we need to do is update our “index.html” code so that it has a link that will load our new album page. Basically all we need to do is add the following: New Album So the full change looks like this: &l[...]






Enthought: Cheat Sheets: Pandas, the Python Data Analysis Library

Wed, 13 Dec 2017 22:12:18 +0000

Download all 8 Pandas Cheat Sheets. Learn more about the Python for Data Analysis and Pandas Mastery Workshop training courses.

Pandas (the Python Data Analysis library) provides a powerful and comprehensive toolset for working with data. Fundamentally, Pandas provides a data structure, the DataFrame, that closely matches real-world data, such as experimental results, SQL tables, and Excel spreadsheets, in a way that no other mainstream Python package does. In addition, it includes tools for reading and writing diverse files, data cleaning and reshaping, analysis and modeling, and visualization. Using Pandas effectively can give you superpowers, regardless of whether you’re working in data science, finance, neuroscience, economics, advertising, web analytics, statistics, social science, or engineering.

However, learning Pandas can be a daunting task because the API is so rich and large. This is why we created a set of cheat sheets built around the data analysis workflow illustrated below. Each cheat sheet focuses on a given task. It shows you the 20% of functions you will be using 80% of the time, accompanied by simple and clear illustrations of the different concepts. Use them to speed up your learning, or as a quick reference to refresh your mind. Here’s a summary of the content of each cheat sheet:

  • Reading and Writing Data with Pandas: presents common usage patterns when reading data from text files with read_table, from Excel documents with read_excel, from databases with read_sql, or when scraping web pages with read_html. It also introduces how to write data to disk as text files, into an HDF5 file, or into a database.
  • Pandas Data Structures: Series and DataFrames: presents the two main data structures, the DataFrame and the Series. It explains how to think about them in terms of common Python data structures and how to create them. It gives guidelines about how to select subsets of rows and columns, with clear explanations of the difference between label-based indexing, with .loc, and position-based indexing, with .iloc.
  • Plotting with Series and DataFrames: presents some of the most common kinds of plots together with their arguments. It also explains the relationship between Pandas and matplotlib and how to use them effectively, and highlights the similarities and differences of plotting data stored in Series or DataFrames.
  • Computation with Series and DataFrames: codifies the behavior of DataFrames and Series as following three rules: alignment first, element-by-element mathematical operations, and column-based reduction operations. It covers the built-in methods for the most common statistical operations, such as mean or sum, and also covers how missing values are handled by Pandas.
  • Manipulating Dates and Times Using Pandas: the first part of this cheat sheet describes how to create and manipulate time series data, one of Pandas’ most celebrated features. Having a Series or DataFrame with a Datetime index allows for easy time-based indexing and slicing, as well as for powerful resampling and data alignment. The second part covers “vectorized” string operations, which is the ability to apply string transformations to each element of a column while automatically excluding missing values.
  • Combining Pandas DataFrames: the sixth cheat sheet presents the tools for combining Series and DataFrames together, with SQL-type joins and concatenation. It then goes on to explain how to clean data with missing values, using different strategies to locate, remove, or replace them.
  • Split/Apply/Combine with DataFrames: “group by” operations involve splitting the data based on some criteria, applying a function to each group to aggregate, transform, or filter it, and then combining the results. It’s an incredibly powerful and expressive tool. The cheat sheet also highlights the similarity between “group by” operations and window functions, such as resample, rolling and ewm (exponentially weighted functions). Reshaping[...]
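The label-based vs. position-based indexing distinction mentioned above is worth a tiny illustration. This sketch is not from the cheat sheets themselves; the DataFrame here is invented for demonstration:

```python
import pandas as pd

# A small DataFrame with country names as the (label) index.
df = pd.DataFrame(
    {"hours": [1703.0, 2242.4, 1778.0]},
    index=["Canada", "Mexico", "USA"],
)

# Label-based indexing with .loc selects by index label.
print(df.loc["Mexico", "hours"])   # 2242.4

# Position-based indexing with .iloc selects by integer position.
print(df.iloc[1, 0])               # 2242.4 -- same cell, addressed by position
```

Both expressions address the same cell; .loc stays correct if the rows are reordered, while .iloc always means "second row, first column".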



Reuven Lerner: Focus on the process: Your Python questions, answered

Wed, 13 Dec 2017 18:39:50 +0000

A few weeks ago, I asked subscribers to my free, weekly “Better developers” list to send me their Python problems. I got about 20 responses from around the world, some more complex than others. I promised to answer some of them in video.

Why? Because becoming an expert Python developer means understanding, in a deep way, how Python works. The stronger your mental model of Python’s innards, the better you can use the language to solve problems.  And watching someone solve problems, or work their way through problems in real life, helps to develop and improve those mental models.

A large proportion of my teaching takes place via exercises. (And in the case of Weekly Python Exercise, it’s the overwhelming majority of the teaching, as the name implies.) But as important as it is for my students to work through the exercises, it’s also important that I walk them through the process I use when solving the exercise. Learning the correct process is more important than getting the answer right to a specific problem, because once you start thinking in the right way, with an improved mental model, you’ll be able to solve new and different problems.

I’m trying something similar here: People ask questions, and I try to answer them. Sometimes, I’ll hit a brick wall, or an unexpected detour, or I’ll just be surprised. Guess what? This happens to all of us, and it’s part of the process of solving problems. But it’s also part of the fun and excitement of development.

With that in mind, I present the first two problems/questions that my readers submitted:

If you like these questions and walkthroughs, then you’ll love Weekly Python Exercise, starting on January 2nd, a year-long course for intermediate Python developers.

And if you have Python questions you’d like me to answer, join my list and ask away!  If I choose your question, I’ll give you a coupon for 30% off any of my books or courses.

The post Focus on the process: Your Python questions, answered appeared first on Lerner Consulting Blog.







A. Jesse Jiryu Davis: Join Me at PyTennessee 2018

Wed, 13 Dec 2017 15:59:54 +0000


I’m looking forward to PyTennessee this February in Nashville. It’s a friendly Python conference, not too big, with great talks and smart people. Kind of like the city of Nashville itself: friendly and not too big.

This year I’m giving two talks. One is about keeping our temper with beginners when they ask basic programming questions, the other is a CPython internals talk about destructors.

Let me know if you’re coming, I hope to see you there.


Image: Ben Shahn, Street musicians in Maynardville, Tennessee, 1935.




Codementor: Python Strings

Wed, 13 Dec 2017 11:18:46 +0000

A string is a sequence of characters. To specify a string, we may enclose a sequence of characters between single, double, or triple quotes. Thus, ' Hello world', ' "Ajay " ', "what's ", "'today's...
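The quoting styles the excerpt lists can be shown in a short, self-contained sketch (the strings here are my own examples, not taken from the article):

```python
# Single, double, and triple quotes all create the same str type.
single = 'Hello world'
double = "what's up"           # double quotes let us embed a single quote
nested = '"Ajay"'              # single quotes let us embed double quotes
triple = """A string
spanning multiple lines."""    # triple quotes may span lines

print(type(single).__name__)   # str
print(len(triple.splitlines()))  # 2
```

Whichever quoting style is used, the resulting object is an ordinary str; the choice only affects which characters can appear without escaping.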



Programiz: Python Matrix

Wed, 13 Dec 2017 10:48:10 +0000

In this article we will be learning about Python matrices: how they are created, how to slice a matrix, and how to add or remove elements of a matrix.
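Python has no built-in matrix type, so tutorials like this one typically start from a list of lists, one inner list per row. A minimal sketch (my own example, not from the article):

```python
# A 3x3 matrix as a list of lists: one inner list per row.
matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

print(matrix[1][2])                # element at row 1, column 2 -> 6

# Slicing rows is ordinary list slicing.
print(matrix[0:2])                 # first two rows

# Extracting a column takes a comprehension.
col = [row[1] for row in matrix]
print(col)                         # [2, 5, 8]

# Adding and removing rows uses the usual list methods.
matrix.append([10, 11, 12])
del matrix[0]
print(len(matrix))                 # 3
```

For serious numeric work, NumPy arrays are the usual next step, but the list-of-lists form needs no third-party packages.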



Talk Python to Me: #142 Automating the web with Selenium and InstaPy

Wed, 13 Dec 2017 08:00:00 +0000

Is there some task you find yourself performing frequently and repetitively on the web? With Python and modern tooling, virtually every website has become easily scriptable.



Dataquest: Pandas Concatenation Tutorial

Wed, 13 Dec 2017 07:46:49 +0000

You'd be hard-pressed to find a data science project which doesn't require multiple data sources to be combined. Oftentimes, data analysis calls for appending new rows to a table, pulling additional columns in, or, in more complex cases, merging distinct tables on a common key. All of these tricks are handy to keep in your back pocket so disparate data sources don't get in the way of your analysis! In this tutorial, we will walk through several methods of combining data using pandas. It's geared towards beginner to intermediate levels and will require knowledge of the fundamentals of the pandas DataFrame. Some prior understanding of SQL and relational databases will also come in handy, but is not required. We will walk through four different techniques (concatenate, append, merge, and join) while analyzing average annual labor hours for a handful of countries. We will also create a plot after every step so we visually understand the different results each data combination technique produces. As a bonus, you will leave this tutorial with insights about labor trends around the globe and a sweet-looking set of graphs you can add to your portfolio! We will play the role of a macroeconomic analyst at the Organization for Economic Cooperation and Development (OECD). The question we are trying to answer is simple but interesting: which countries have citizens putting in the longest work hours, and how have these trends been changing over time? Unfortunately, the OECD has been collecting data for different continents and time periods separately. Our job is to first get all of the data into one place so we can run the necessary analysis.

Accessing the data set

We will use data from the OECD Employment and Labour Market Statistics database, which provides data on average annual labor hours for most developed countries dating back to 1950. Throughout the tutorial, I will refer to DataFrames and tables interchangeably.
We will use a Jupyter Notebook with Python 3 (you are welcome to use any IDE (integrated development environment) you wish, but this tutorial will be easiest to follow along with in Jupyter). Once that's launched, let's import the pandas and matplotlib libraries, then use %matplotlib inline so Jupyter knows to display plots within the notebook cells. If any of the tools I mentioned sound unfamiliar, I'd recommend looking at Dataquest's getting started guide.

```python
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```

Next, we will use the pd.read_csv() function to open our first two data files. We will specify that the first column should be used as the row index by passing the argument index_col=0. Finally, we'll display what our initial tables look like.

```python
north_america = pd.read_csv('./north_america_2000_2010.csv', index_col=0)
south_america = pd.read_csv('./south_america_2000_2010.csv', index_col=0)
north_america
```

Country   2000    2001    2002    2003    2004    2005    2006    2007    2008    2009    2010
Canada    1779.0  1771.0  1754.0  1740.0  1760.0  1747    1745.0  1741.0  1735    1701.0  1703.0
Mexico    2311.2  2285.2  2271.2  2276.5  2270.6  2281    2280.6  2261.4  2258    2250.2  2242.4
USA       1836.0  1814.0  1810.0  1800.0  1802.0  1799    1800.0  1798.0  1792    1767.0  1778.0

south_america [...]
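Since the tutorial's CSV files aren't reproduced here, this is a tiny self-contained sketch of the concatenation step the tutorial builds toward; the DataFrames below are invented stand-ins, not the OECD data:

```python
import pandas as pd

# Two tables with the same columns (years) but different rows (countries),
# standing in for the regional CSV files loaded above.
north_america = pd.DataFrame(
    {"2000": [1779.0, 2311.2], "2001": [1771.0, 2285.2]},
    index=["Canada", "Mexico"],
)
south_america = pd.DataFrame(
    {"2000": [2100.0], "2001": [2050.0]},
    index=["Chile"],
)

# pd.concat stacks the tables vertically (axis=0 is the default),
# appending South America's rows below North America's.
americas = pd.concat([north_america, south_america])
print(americas.shape)        # (3, 2)
print(list(americas.index))  # ['Canada', 'Mexico', 'Chile']
```

Because both tables share the same columns, the result simply gains rows; concatenating tables with different columns would instead introduce missing values, which the tutorial goes on to handle.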



A. Jesse Jiryu Davis: Announcing Motor 1.2 Release Candidate, With MongoDB 3.6 Support

Wed, 13 Dec 2017 07:01:32 +0000

MongoDB 3.6 was released December 5. Today I’ve uploaded a release candidate for version 1.2 of Motor, the async Python driver for MongoDB. This will be a big release, so I hope you try the release candidate and tell me if it works for you or if you find bugs. Install the release candidate with pip:

```
python -m pip install motor==1.2rc0
```

Compatibility Changes

  • MongoDB: drop 2.4, add 3.6
  • Python: drop 2.6, continue to support 2.7 and 3.3+
  • Tornado: drop 3, continue to support Tornado 4.5+
  • aiohttp: support 2.0 and later, drop older versions

See the Compatibility Matrix for the relationships among Motor, Python, Tornado, and MongoDB versions.

MongoDB 3.6 Features

Change Streams

There’s a new method MotorCollection.watch to acquire a Change Stream on a collection:

```python
async for change in collection.watch():
    print(change)
```

I’ve written you a little sample app that watches a collection for changes and posts each notification over a websocket using the Tornado webserver: Tornado Change Stream Example

Causal Consistency

There’s a new Session API to support causal consistency, meaning you can read your writes and perform monotonic reads, even reading from secondaries in a replica set or a sharded cluster. Create a session with MotorClient.start_session and use it for a sequence of related operations:

```python
collection = client.db.collection
with (await client.start_session()) as s:
    doc = {'_id': ObjectId(), 'x': 1}
    await collection.insert_one(doc, session=s)
    secondary = collection.with_options(
        read_preference=ReadPreference.SECONDARY)
    # Sessions are causally consistent by default: we can read the doc
    # we just inserted, even reading from a secondary.
    async for doc in secondary.find(session=s):
        print(doc)
```

Array Filters

You can now update subdocuments in arrays within documents, and use “array filters” to choose which subdocuments to change.
Pass the array_filters argument to MotorCollection.update_one, MotorCollection.update_many, MotorCollection.find_one_and_update, or MotorCollection.bulk_write. For example, if your document looks like this:

```
{
  _id: 1,
  points: [
    {x: 0, y: 0},
    {x: 1, y: 0},
    {x: 2, y: 0}
  ]
}
```

You can update all subdocuments where x is greater than zero:

```python
await collection.update_one(
    {'_id': 1},
    {'$set': {'points.$[i].y': 5}},
    array_filters=[{'i.x': {'$gt': 0}}])
```

“mongodb+srv://” URIs

This new URI scheme is convenient for short, maintainable URIs that represent large clusters, especially in Atlas. See PyMongo’s MongoClient for details.

Retryable Writes

Now, with a MongoDB 3.6 replica set or sharded cluster, Motor will safely retry any write that’s interrupted by a network glitch or a primary failover, without any risk of a double write. Just add retryWrites to the URI:

```
mongodb://host1,host2,host3/?replicaSet=rs&retryWrites=true
```

See PyMongo’s MongoClient for details about retryable writes.

Thanks to everyone who contributed to this version:

  • A. Jesse Jiryu Davis
  • Bernie Hackett
  • Jakub Wilk
  • Karol Horowski

Peace,
A. Jesse Jiryu Davis [...]



Mike Driscoll: Flask 101: How to Add a Search Form

Wed, 13 Dec 2017 06:05:01 +0000

In our last article, we added a database to our Flask web application, but didn’t have a way to add anything to it. We also didn’t have a way to view anything, so we basically ended up with a pretty useless web application. This article will take the time to teach you how to do the following:

  • Create a form to add data to our database
  • Use a form to edit data in our database
  • Create some kind of view of what’s in the database

Adding forms to Flask is pretty easy too, once you figure out which extension to install. I had heard good things about WTForms, so I will be using that in this tutorial. To install WTForms you will need to install Flask-WTF. Installing Flask-WTF is pretty easy; just open up your terminal and activate the virtual environment we set up in our first tutorial. Then run the following command using pip:

```
pip install Flask-WTF
```

This will install WTForms and Flask-WTF (along with any dependencies) to your web app’s virtual environment.

Serving HTML Files

Originally when I started this series, all I was serving up on the index page of our web application was a string. We should probably spruce that up a bit and use an actual HTML file. Create a folder called “templates” inside the “musicdb” folder. Now create a file called “index.html” inside the “templates” folder and put the following contents in it: Flask Music Database

Flask Music Database

Now before we update our web application code, let’s go ahead and create a search form for filtering our music database’s results.

Adding a Search Form

When working with a database, you will want a way to search for items in it. Fortunately, creating a search form with WTForms is really easy. Create a Python script called “forms.py” and save it to the “musicdb” folder with the following contents:

```python
# forms.py

from wtforms import Form, StringField, SelectField


class MusicSearchForm(Form):
    choices = [('Artist', 'Artist'),
               ('Album', 'Album'),
               ('Publisher', 'Publisher')]
    select = SelectField('Search for music:', choices=choices)
    search = StringField('')
```

Here we just import the items we need from the wtforms module and then we subclass the Form class. In our subclass, we create a selection field (a combobox) and a string field. This allows us to filter our search to the Artist, Album or Publisher categories and enter a string to search for. Now we are ready to update our main application.

Updating the Main Application

Let’s rename our web application’s script from “test.py” to “main.py” and update it so it looks like this:

```python
# main.py

from app import app
from db_setup import init_db, db_session
from forms import MusicSearchForm
from flask import flash, render_template, request, redirect
from models import Album

init_db()


@app.route('/', methods=['GET', 'POST'])
def index():
    search = MusicSearchForm(request.form)
    if request.method == 'POST':
        return search_results(search)

    return render_template('index.html', form=search)


@app.route('/results')
def search_results(search):
    results = []
    search_string = search.data['search']

    if search.data['search'] == '':
        qry = db_session.query(Album)
        results = qry.all()

    if not results:
        flash('No results found!')
        return redirect('/')
[...]
```



Wingware News: Wing Python IDE 6.0.9: December 13, 2017

Wed, 13 Dec 2017 01:00:00 +0000

This release adds support for Vagrant containers, improves support for Django and Plone/Zope, further improves remote development, fixes startup problems seen on some OS X systems, and makes about 35 other improvements.



Python Anywhere: Introducing the beta for always-on tasks

Tue, 12 Dec 2017 18:47:10 +0000


Today we're starting the beta for a new paid feature on PythonAnywhere: always-on tasks. If you have a paid account and would like to try them out, drop us a line at support@pythonanywhere.com.


With always-on tasks, you can keep a script constantly running. Once the feature's been switched on for your account, they can be set up in the "Tasks" tab -- you specify a bash command (e.g. "python3.6 /home/username/myscript.py"), and the system will start running it. If it exits -- say, if it crashes, or if the server has a problem -- it will be restarted quickly.

We'd be really grateful for any and all feedback people have about this. We've been testing it internally for some months, and a private beta has been running for a while -- so we're reasonably certain it's solid and reliable, but please do remember that it is currently beta, and there may be bugs we don't know about.




Continuum Analytics Blog: Parallel Python with Numba and ParallelAccelerator

Tue, 12 Dec 2017 18:28:18 +0000

With CPU core counts on the rise, Python developers and data scientists often struggle to take advantage of all of the computing power available to them. CPUs with 20 or more cores are now available, and at the extreme end, the Intel® Xeon Phi™ has 68 cores with 4-way Hyper-Threading. (That’s 272 active threads!) To …
Read more →



Python Engineering at Microsoft: North Bay Python 2017 Recap

Tue, 12 Dec 2017 18:00:38 +0000

Last week I had the privilege of attending the inaugural North Bay Python conference, held in Petaluma, California, in the USA. Being part of any community-run conference is always enjoyable, and helping launch a new one was very exciting. In this post, I'm going to briefly tell you about the conference and help you find recordings of some of the best sessions (and also the session that I presented).

Petaluma is a small city in Sonoma County, about one hour north of San Francisco. Known for its food and wine, it was a surprising location to find a conference, including for many locals who got to attend their first Python event. If the photo to the right looks familiar, you probably remember it as the default Windows XP background image. It was taken in the area, inspired the North Bay Python logo, and doesn't actually look all that different from the hills surrounding Petaluma today.

Nearly 250 attendees converged on a beautiful old theatre to hear from twenty-two speakers. Topics ranged from the serious, such as web application accessibility and inclusiveness, through to lighthearted talks on machine learning and Django, and the absolutely hilarious process of implementing merge sort using the import statement. All the videos can be found on the North Bay Python YouTube channel.

Recently I have been spending some of my time working on a proposal to add security enhancements to Python, similar to those already in PowerShell. While Microsoft is known for being highly invested in security, not everyone shares the paranoia. I used my twenty-five minute session to raise awareness of how modern malware attacks play out, and to show how PEP 551 can enable security teams to better defend their networks.
(Image credit: VM Brasseur, CC-BY 2.0) While I have a general policy of not uploading my presentation (slides are for speaking, not reading), here are the important links and content that you may be interested in:

  • Full text of PEP 551 on python.org (with links for further reading)
  • My sample Python code as a gist on GitHub
  • PEP 551 implementation for Python 3.7.0a3 on GitHub
  • PEP 551 implementation for Python 3.6.3 on GitHub

Overall, the conference was a fantastic success. Many thanks to the organizing committee, Software Freedom Conservancy, and the sponsors who made it possible, and I am looking forward to attending in 2018. Until the next North Bay Python though, we would love to have a chance to meet you at the events that you are at. Let us know in the comments what your favorite Python events are and the ones you would most like to have people from our Python team come and speak at. [...]



Django Weblog: DSF travel grants available for PyCon Namibia 2018

Tue, 12 Dec 2017 16:37:59 +0000

About PyCon Namibia

PyCon Namibia held its first edition in 2015.

The conference has been held annually since then, and has been at the heart of a new open-source software movement in Namibia. In particular, through PyNam, the Namibian Python Society, Python has become the focus of self-organised community volunteering activity in schools and universities.

In the last two years, assisted greatly by Helen Sherwood-Taylor, Django Girls has become an important part of the event too.

PyCons in Africa

The conference has also been the direct prompt for further new PyCons across Africa: Zimbabwe in 2016, Nigeria in 2017, and a planned PyCon Ghana next year. In each case, PyCon attendees from another country have returned home to set up their own events.

An important aspect of these events is the opportunity to establish relationships with the international community. Numerous people have travelled from other corners of the world to meet African programmers in their own countries, and many have returned multiple times.

Be a Pythonista, not a tourist

There is enormous value in this exchange, which gives Python/Django programmers from beyond Africa a unique opportunity to encounter African programmers in their own country, and to visit not as passing tourists but as Pythonistas and Djangonauts who will form long-term relationships with their African counterparts. This helps ensure that the international Python community meaningfully includes its members, wherever in the world they may be, and represents a chance like no other to understand them and what Python might mean in Africa.

There is probably no better way to understand what Python might mean in Namibia, for example, than having lunch with a group of Namibian high-school pupils and hearing about their ideas and plans for programming.

This exchange enriches not only the PyCon itself, but also the lives of the Pythonistas that it embraces, from both countries, and the communities they are a part of.

About the travel fund

In order to help maintain this valuable exchange between international Python communities, the Django Software Foundation has set aside a total of US$1500 to help enable travellers from abroad to visit Namibia for next year's PyCon, 20th-22nd February.

The DSF seeks expressions of interest from members of the international Django community who'd like to take advantage of these funds.

Please get in touch with us by email. We'd like to know:

  • who you are
  • why you'd like to participate
  • where you are travelling from and how much you estimate you will need

PyCon Namibia will benefit most from attendees who are interested in developing long-term relationships with its community and attendees.

See the conference website for information about travel and more.




Kushal Das: Qubes OS 4.0rc3 and latest UEFI systems

Tue, 12 Dec 2017 15:19:00 +0000

Last week I received a new laptop, which I am going to use as my primary workstation. The first step was to install Qubes OS 4.0rc3 on it. It is a Thinkpad T470 with 32GB RAM and an SSD drive.

How to install Qubes on the latest UEFI systems?

A few weeks back, a patch was merged to the official Qubes documentation, which explains in clear steps how to create a bootable USB drive on a Fedora system using livecd-tools. Please follow the guide and create a USB drive which will work on these latest machines. Simply using dd will not help.

First steps after installing Qubes

I upgraded dom0 to the current testing packages, and also installed the Fedora 26 template, using the following commands:

```
$ sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing
$ sudo qubes-dom0-update qubes-template-fedora-26
```

One important point to remember is that Fedora 25 is going to be end of life today. So, better to use an updated version of the distribution :)

There was another important thing that happened in the last two weeks. I was in the Freedom of the Press Foundation office in San Francisco. That means not only did I manage to meet my amazing team, I also met many of my personal heroes on this trip. I may write a separate blog post about that later. But for now I can say that I managed to sit near Micah Lee for 2 weeks and learn a ton about various things, including his Qubes workflow. The following two things were the first changes I made to my installation (with his guidance) to make things work properly.

How to modify the copy-paste between domains shortcuts?

Generally Ctrl+Shift+c and Ctrl+Shift+v are used to copy-paste securely between different domains. But those are also the shortcuts to copy-paste from the terminal on all systems.
So, modifying them to a different key combination is very helpful for muscle memory :) Modify the following lines in the /etc/qubes/guid.conf file in dom0; I did a reboot after that to make sure that I am using the new key combination.

```
secure_copy_sequence = "Mod-c";
secure_paste_sequence = "Mod-v";
```

The above configuration modifies the copy-paste shortcuts to Windows+c and Windows+v in my keyboard layout.

Fixing the wireless driver issue in suspend/resume

I also found that if I suspend the system, the wireless device was missing from the sys-net domain after starting it again. Adding the following two modules to the /rw/config/suspend-module-blacklist file in the sys-net domain fixed that:

```
iwlmvm
iwlwifi
```

The official documentation has a section on the same. You can follow my posts on Qubes OS here.[...]



Programiz: Python Arrays

Tue, 12 Dec 2017 14:00:59 +0000

In this article, you’ll learn about Python arrays. Before getting started, you should be familiar with Python, variables, and data types.
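The standard-library array module is a likely subject of such an article: it stores homogeneous numeric values compactly, unlike a plain list. A minimal sketch (my own example, not from the article):

```python
from array import array

# 'i' is the type code for signed integers: every element must be an int.
a = array('i', [10, 20, 30])

a.append(40)        # add an element at the end
a.insert(1, 15)     # insert 15 at index 1
a.remove(20)        # remove the first occurrence of the value 20

print(a.tolist())   # [10, 15, 30, 40]
print(a[0], a[-1])  # 10 40
```

Arrays support the familiar list operations (indexing, slicing, append, insert, remove), but trade flexibility for memory efficiency by restricting elements to one numeric type.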



Continuum Analytics Blog: Anaconda Welcomes Lars Ewe as SVP of Engineering

Tue, 12 Dec 2017 14:00:40 +0000

Anaconda, Inc., provider of the most popular Python data science platform, today announced Lars Ewe as the company’s new senior vice president (SVP) of engineering. With more than 20 years of enterprise engineering experience, Ewe brings a strong foundation in big data, real-time analytics and security. He will lead the Anaconda Enterprise and Anaconda Distribution engineering teams.



PyCon: Python Education Summit celebrates its 6th year in 2018

Tue, 12 Dec 2017 10:55:02 +0000

Teachers, educators, and Python users: come and share your projects, experiences, and tools of the trade you use to teach coding and Python to your students. The Annual Python Education Summit is held in conjunction with PyCon 2018, taking place on Thursday May 10. Our Call for Proposals is open until January 3rd, and we want to hear from you! See https://us.pycon.org/2018/speaking/education-summit/ for more details.

What we look for in Education Summit talks are ideas, experiences, and best practices on how teachers and programmers have implemented instruction in their schools, communities, books, tutorials, and other places of learning by using Python.

  • Have you implemented a program that you've been dying to talk about?
  • Have you tried something that failed but learned some great lessons that you can share?
  • Have you been successful implementing a particular program?

We urge anyone in this space to submit a talk! We’re looking for people who want to share their knowledge and leverage the experience of their peers in bettering the education fields around Python. You do not need to be an experienced speaker to apply!

This year, talks that focus on the challenges and triumphs of implementing programming education are especially encouraged.

About the Python Education Summit

In 2018, the focus will be to bring educators and their experiences from diverse categories to the forefront. Building on last year’s successful participation by young coders, this year we again urge young programmers to submit proposals to speak about their learning experiences or to present demonstrations of their coding projects.

We hope to see you at the Education Summit in 2018! January 3 is the deadline for submissions, so pen down your thoughts and ideas and submit them to us in your dashboard at https://us.pycon.org/2018/dashboard/. For more information about the summit, see https://us.pycon.org/2018/speaking/education-summit/

Registration for the Education Summit will open in January 2018. A formal announcement will be made via @pycon and here on the PyCon blog.

Be on the lookout for more details and we hope to see you there!

Written by Meenal Pant
Edited by Brian Curtin



pgcli: Introducing mssql-cli - A query tool for SQL Server

Tue, 12 Dec 2017 08:00:00 +0000

We are excited to welcome mssql-cli to the dbcli org. This new command-line query tool for SQL Server was developed by Microsoft.


Microsoft's engineers reached out to the dbcli team and pitched the idea of writing a tool for SQL Server based on pgcli and mycli. The new tool, named mssql-cli, will live in the dbcli org on GitHub.

mssql-cli will ship with context-aware auto-completion, syntax highlighting, alias support, paged output, and so on: essentially all the niceties of pgcli that work with SQL Server.

Here's the official announcement from Microsoft.

Backstory:

A couple of months ago, I received an email from a Microsoft PM asking me about pgcli and the dbcli org on GitHub. I hopped on a Skype call with the PM and a few of the MS engineers. They seemed genuinely impressed by the feature set of pgcli and mycli, and I got to see a little demo of the prototype in action.

I was flattered when asked if the tool could be part of the dbcli suite. Microsoft created the tool and offered to maintain it, but it will live under the dbcli org on GitHub.

Now the tool and the code are available for public preview.




PyCon Pune: Welcome Brett Cannon, Our First Keynote Speaker

Tue, 12 Dec 2017 07:44:48 +0000

PyCon Pune 2018 is thrilled to welcome Brett Cannon, our first keynote speaker. Our motto, “Came for the language and stayed for the community”, is the quote that originally sparked the idea of having PyCon Pune. The line was used at two different Python conferences in India in 2017, so apart from Python itself, the love for this quote also binds the Python community in India together. Now is our chance to meet the person behind that thought at PyCon Pune 2018.



Mike Driscoll: Flask 101: Adding a Database

Tue, 12 Dec 2017 06:15:29 +0000

Last time we learned how to get Flask set up. In this article we will learn how to add a database to our music data website. As you might recall, Flask is a micro web framework. That means it doesn’t come with an Object Relational Mapper (ORM) like Django does. If you want to add database interactivity, then you need to add it yourself or install an extension. I personally like SQLAlchemy, so I thought it was nice that there is a ready-made extension for adding SQLAlchemy to Flask called Flask-SQLAlchemy. To install Flask-SQLAlchemy, you just need to use pip. Make sure that you are in the activated virtual environment that we created in the first part of this series before you run the following command, or you’ll end up installing the extension to your base Python instead of your virtual environment:

pip install flask-sqlalchemy

Now that we have Flask-SQLAlchemy installed along with its dependencies, we can get started creating a database!

Creating a Database

Creating a database with SQLAlchemy is actually pretty easy. SQLAlchemy supports a couple of different ways of working with a database. My favorite is its declarative syntax, which allows you to create classes that model the database itself, so I will use that for this example. We will be using SQLite as our backend, although we could easily change that backend to something else, such as MySQL or Postgres, if we wanted to. To start out, we will look at how you create the database file using just normal SQLAlchemy. Then we will create a separate script that uses the slightly different Flask-SQLAlchemy syntax.

Put the following code into a file called db_creator.py:

# db_creator.py

from sqlalchemy import create_engine, ForeignKey
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, backref

engine = create_engine('sqlite:///mymusic.db', echo=True)
Base = declarative_base()


class Artist(Base):
    __tablename__ = "artists"

    id = Column(Integer, primary_key=True)
    name = Column(String)

    def __repr__(self):
        return "{}".format(self.name)


class Album(Base):
    __tablename__ = "albums"

    id = Column(Integer, primary_key=True)
    title = Column(String)
    release_date = Column(String)
    publisher = Column(String)
    media_type = Column(String)

    artist_id = Column(Integer, ForeignKey("artists.id"))
    artist = relationship("Artist", backref=backref(
        "albums", order_by=id))


# create tables
Base.metadata.create_all(engine)

The first part of this code should look pretty familiar to anyone using Python, as all we are doing here is importing the bits and pieces we need from SQLAlchemy to make the rest of the code work. Then we create SQLAlchemy’s engine object, which basically connects Python to the database of choice. In this case, we are connecting to SQLite and creating a file instead of creating the database in memory. We also create a “base class” that we can use to create declarative class definitions that actually define our database tables. The next two classes define the tables we ca[...]
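To see models like these in action, here is a condensed, self-contained sketch of my own (not from the article) that creates the tables in an in-memory SQLite database, inserts a linked artist and album through a session, and reads them back; the names are invented, and it assumes SQLAlchemy 1.4 or newer for the sqlalchemy.orm imports:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import backref, declarative_base, relationship, sessionmaker

# An in-memory SQLite database keeps this sketch self-contained
engine = create_engine("sqlite://")
Base = declarative_base()


class Artist(Base):
    __tablename__ = "artists"
    id = Column(Integer, primary_key=True)
    name = Column(String)


class Album(Base):
    __tablename__ = "albums"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    artist_id = Column(Integer, ForeignKey("artists.id"))
    artist = relationship("Artist", backref=backref("albums", order_by=id))


Base.metadata.create_all(engine)

session = sessionmaker(bind=engine)()
# Adding the album also adds the artist via the default save-update cascade
session.add(Album(title="Abbey Road", artist=Artist(name="The Beatles")))
session.commit()

album = session.query(Album).first()
print(album.title, "-", album.artist.name)  # the relationship links album to artist
```

The backref means you can also walk the other direction: artist.albums gives the list of that artist's albums without any extra configuration.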



Mike Driscoll: Flask 101: Getting Started

Tue, 12 Dec 2017 06:05:15 +0000

The Flask 101 series is my attempt at learning the Flask microframework for Python. For those who haven’t heard of it, Flask is a micro web framework for creating web applications in Python. According to their website, Flask is based on Werkzeug, Jinja 2 and good intentions. For this series of articles, I wanted to create a web application that would do something useful without being too complicated. So, for my learning sanity, I decided to create a simple web application that I can use to store information about my music library. Over the course of multiple articles, you will see how this journey unfolded.

Getting Set Up

To get started using Flask, you will need to install it. We will create a virtual environment for this series of tutorials, as there will be a number of other Flask dependencies that we will need to add, and most people probably don’t want to pollute their main Python installation with a lot of cruft they may not end up using. So before we install Flask, let’s create a virtual environment using virtualenv. First we need to install virtualenv with pip:

pip install virtualenv

Now that we have that installed, we can create our virtual environment. Find a location on your local system where you want to store your web application. Then open up a terminal and run the following command:

virtualenv musicdb

On Windows, you might have to give the full path to virtualenv, which is usually something like C:\Python36\Scripts\virtualenv.exe. Note that starting in Python 3.3, you can also use Python’s built-in venv module to create a virtual environment instead of using virtualenv. Of course, the virtualenv package can also be installed in Python 3, so it’s up to you which one you want to use. They work in pretty much the same way.

Once you have your virtual environment set up, you will need to activate it. To do that, change your directory in your terminal to the folder you just created using the “cd” command:

cd musicdb

If you are on Linux or Mac OS, run the following:

source bin/activate

Windows is a bit different. You still need to “cd” into your folder, but the command to run is this:

Scripts/activate

For more details on activating and deactivating your virtual environment, check out the user guide. You may have noticed that when you created your virtual environment, it copied in your Python executable as well as pip. This means that you can now install packages to your virtual environment using pip, which is the reason so many people like virtual environments. Once the virtual environment is activated, you should see that your terminal prompt is prepended with the name of the virtual environment. Here’s an example screenshot using Python 2.7:

Now we’re ready to install Flask!

Getting Started with Flask

Flask is easy to install using the pip installer. Here’s how you can do it:

pip install flask

This command will install Flask and any of the dependencies that it needs. This is the output I received:

Collecting flask
  Downloading Flask-0.12.2-py2.py3-none-any.whl (83kB)
    100% |████████████████████████████████| 92kB 185kB/s
Collecting itsdangerous>=0.21 (from flask)
  Downloading itsdangerous-0.24.tar.gz (46kB)
    100% |██████[...]
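As mentioned above, Python 3.3+ ships a built-in venv module that can stand in for virtualenv. Here is a stdlib-only sketch of my own (the throwaway directory and the "musicdb" name are illustrative) that creates an environment programmatically instead of from the command line:

```python
import os
import tempfile
import venv

# Create the environment in a throwaway directory; in practice you would
# point this at a folder such as "musicdb" next to your project
target = os.path.join(tempfile.mkdtemp(), "musicdb")
venv.create(target, with_pip=False)  # with_pip=True would also bootstrap pip

# The environment gets its own interpreter and activation scripts
bindir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isdir(os.path.join(target, bindir)))  # → True
```

Running `python -m venv musicdb` from a shell does the same thing; either way you activate the result exactly as described for virtualenv above.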