Subscribe: Planet Python
http://www.planetpython.org/rss20.xml

Planet Python



Planet Python - http://planetpython.org/



 



Abu Ashraf Masnun: Creating an executable file using Cython

Sat, 01 Oct 2016 11:27:23 +0000

Disclaimer: I am quite new to Cython. If you find any part of this post incorrect, or there are better ways to do something, I would really appreciate your feedback. Please feel free to leave your thoughts in the comments section :)

I know Cython is supposed to be used for building extensions, but I was wondering whether we could compile a Python file into an executable binary using Cython. I searched on Google and found this StackOverflow question, which has a detailed and very helpful answer. I tried to follow the instructions and, after (finding and) fixing some paths, I managed to do it. I am going to write down my experience here in case someone else finds it useful as well.

Embedding the Python Interpreter

Cython compiles Python or Cython files into C and then compiles the C code to create the extensions. Interestingly, Cython has a CLI switch --embed which can generate a main function. This main function embeds the Python interpreter for us. So we can just compile the C file and get our single binary executable.

Getting Started

First we need to have a Python (.py) or Cython (.pyx) file ready for compilation. Let's start with a plain old "Hello World" example.

    print("Hello World!")

Let's convert this Python file to a C source file with an embedded Python interpreter.

    cython --embed -o hello_world.c hello_world.py

It should generate a file named hello_world.c in the current directory. We now compile it to an executable.

    gcc -v -Os -I /Users/masnun/.pyenv/versions/3.5.1/include/python3.5m -L /usr/local/Frameworks/Python.framework/Versions/3.5/lib -o hello_world hello_world.c -lpython3.5 -lpthread -lm -lutil -ldl

Please note that you must have the Python headers and dynamic libraries available in order to compile it successfully. I am on OS X and I use pyenv, so I passed the appropriate paths and it compiled fine. Now I have an executable file which I can run:

    $ ./hello_world
    Hello World!

Dynamic Linking

In this case, the executable we produce is dynamically linked against our specified Python version, so it may not be fully portable (the libraries will need to be available on target machines). But this should work fine if we compile against common versions (for example, the default system Python or a version easily obtainable via the package manager).

Including Other Modules

Up until now, I haven't found any easy way to compile third-party pure Python modules (e.g. requests) directly into the binary. However, if I want to split my code into multiple files, I can create other .pyx files and use the include statement with them. For example, here's hello.pyx:

    cdef struct Person:
        char *name
        int age

    cdef say():
        cdef Person masnun = Person(name="masnun", age=20)
        print("Hello {}, you are {} years old!".format(masnun.name.decode('utf8'), masnun.age))

And here's my main file - test.pyx:

    include "hello.pyx"
    say()

Now if I compile test.pyx just like in the above example, it will also include the code in hello.pyx, and I can call the say function as if it were defined in test.pyx itself. Shared libraries like PyQt pose no such issue - we can compile code that uses them as is. So basically we can take any PyQt code example and compile it with Cython - it should work fine![...]



بايثون العربي: GitHub Command and Control (Part Three): Setting Up the Trojan

Sat, 01 Oct 2016 11:15:52 +0000

We want to be able to task our trojan with performing actions over a period of time, which means we need a way to tell it which actions to take and which modules are responsible for executing them. Using a configuration file gives us that level of control; it also lets us put the trojan to sleep (performing no tasks at all).

Every trojan you deploy should have a unique identifier, both so you can sort and retrieve the data it sends back and so you can control which trojan performs a given task. We will configure the trojan to look in the configuration directory, specifically for the file TROJANID.json.

The JSON format makes it easy to change configuration options. Head to the configuration directory, create a new file named abc.json, and add the following lines:


[
{
"module" : "dirlister"
},
{
"module" : "environment"
}
]

This is just a simple list of the modules we want the trojan to run. Later we will see how to read a JSON document like this and iterate over each option to get those modules running. You can also add options and settings, such as an execution duration, the number of times to run a given module, or arguments to pass to it. Now open a terminal and run the following commands from the main repository directory:


$ git add .
$ git commit -m "Adding simple config."
$ git push origin master
Username: ********
Password: ********

This configuration is extremely simple: it provides a list of dictionaries that tell the trojan which modules to fetch and execute. You can of course add as many options and settings as suit you. With this in place, we are fully ready to build the main trojan.
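
To make the configuration concrete, here is a minimal sketch (my own illustration, not the book's actual code) of how the trojan side might load such a JSON file and walk the module list; the get_config helper and the print stand-in are hypothetical:

    import json

    def get_config(config_path="abc.json"):
        # The config is a JSON list of dicts, one per module to run.
        with open(config_path) as f:
            return json.load(f)

    for task in get_config():
        module_name = task["module"]
        # A real trojan would import and execute the module here;
        # this print is just a stand-in for that dispatch step.
        print("Would run module: {}".format(module_name))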







Programming Ideas With Jake: Multi-Line Lambdas in Python

Sat, 01 Oct 2016 05:00:56 +0000

Python can be a slightly disappointing language when it comes to lambdas, which can only be a single expression. Here, we'll see a way around that.



Catalin George Festila: Another learning python post with pygame.

Fri, 30 Sep 2016 16:37:54 +0000

This is a simple Python script using the pygame module.
I made it for educational purposes, for children.
I used Romanian words for the variables, functions, and the two Python classes.
See the tutorial here.



Anwesha Das: My talk about software licenses in PyCon India

Fri, 30 Sep 2016 15:51:28 +0000

The title of my talk was "The trends in choosing licenses in the Python ecosystem". As a lawyer, what interests me is the legal foundation of the open source world: the licenses, which define the reach of the software. Most developers would raise their eyebrows at that; they tend to think of licenses as large, boring, legal, gibberish text. So the aim of my talk was to give them an overview of licenses, why they are important, and what the best practices around them are for developers.

I originally framed my talk (as a lawyer) around the licenses themselves, majorly focusing on: What are the different kinds of licenses? A definition and elaborate explanation of each of the open source licenses. What is the difference between free software and open source software? A little bit about my work around PyPI. Why have developers chosen some particular licenses more than others, and the answers to that. The best practices for developers while choosing a license, in a gist.

Three days before my travel I got a mail from the organizers saying that all the proceedings of the conference, such as the slides and talk videos, would be in the Public Domain. I was shocked and stunned. It took me a lot of time, and mails, to make them really understand that speaking in front of the public does not make anything Public Domain. I was also tweeting at the speakers to please add a license to the slides of their talks. But even then, on Day 0, in the venue itself, I got to know that many speakers had no clue what the licenses for their talks were.

I realized my talk was too lawyerish. It would surely be a super flop talk; people would leave the hall within 5 minutes. So I needed to reframe my talk from a lawyerish way to a developerish way. I had only a few hours (approximately 10) left. Kushal really wanted to stay with his friends, but I dragged him away (sudo wify power :)). We reached the Airbnb where we were staying and I started rearranging my slides. I was now stressing on: My project, and how I did my work the programmerish way. The basic concepts of licenses, without going into much detail, explained with real life examples that everyone would understand. A special stress on the Public Domain (given the problems going on in PyCon India). The major part of my talk then covered Best Practices for Developers.

The slides got ready by 10 PM. We had a quick dinner, and while waiting for it I wrote the blog post. That very day I read a blog post by Zainab Bawa about practicing talks; such perfect timing and a great post. It made me see again that I needed to begin practicing my reframed talk. It was almost 12, and I had no time left. I was tense and scared. I began practicing and of course was not able to do it properly at first, which made me more frightened and nervous. Then started the whole episode of a super attack of inferiority complex, IANAD (I Am Not a Developer) syndrome, crying, yelling at Kushal, and the other regular melodrama before I give any talk. I wasted almost half an hour and then practiced my talk again. It was much better now; at least Kushal liked it much more.

We slept at 2 AM and got up with a shrill alarm at 5:30 AM. I practiced the talk twice more, and then we had to leave for the venue. Once there, I did the PyLadies-related work. My talk was scheduled at 4 PM. After lunch I did my PyLadies table duties for half an hour; thanks to all my PyLadies friends who released me from the duty. I went to the devsprint room and practiced my talk over there.
I reached lecture hall 2, where my talk was scheduled, 5 minutes ahead of time. It took some time to get the setup ready. I started my talk with my memory of PyCon India 2012, which laid the foundation for this talk. I defined software licenses, copyright, the different open source licenses, and FOSS, with various real life examples to help developers easily understand the legal concepts in a luc[...]



François Dion: 5 music things

Fri, 30 Sep 2016 14:37:12 +0000

5 in 5

I like to cover 5 things in 5 minutes for lightning talks. Or one thing. At the local Python user group, sometimes questions or other circumstances turn these 5 in 5 more into a 5 in 10-15...

5 Music Things

Eventually, after a year or two, I'll revisit a subject. I recently noticed that I had not talked about music related things in almost two and a half years, so I did 5 quick Jupyter notebooks and presented that. Interestingly enough, none of these 5 things were covered back then. The github repo includes edited versions of the notebooks, based on the interactions at the meeting during my presentation.

Requirements: All require the following: pip install jupyter

Alphabetically...

1 - Audio
Notebook

2 - libROSA
Here we will need to pip install matplotlib and numpy, and of course librosa.
Notebook

3 - music21
pip install music21
You'll need some external programs: Lilypond and Musescore. You also need launch scripts for each of them. On a Mac, use the provided launch scripts in the mac/ folder of this repo. Make sure you chmod a+x them. Change the path in the notebook to reflect your own user path.
Notebook

4 - python-sonic
pip install python-sonic
You'll need one external program, Sonic Pi, and to start it before running through the notebook.
Notebook

5 - pyKnon
pip install pyknon
You'll need one external program: timidity. It is easily installed: in Linux with apt-get install timidity, on a Mac with brew install timidity. This was mostly an excuse to demo that external command line tools like timidity or sox can be used here.
Notebook

Have fun!

@f_dion - francois(dot)dion(at)gmail(dot)com

P.S.: Github repo at: https://github.com/fdion/5_music_things but for some strange reason, github will not render the first (0-StartHere) notebook. This blog post is basically that notebook, putting things in context.[...]



Semaphore Community: Mocks and Monkeypatching in Python

Fri, 30 Sep 2016 13:59:14 +0000

This article is brought with ❤ to you by Semaphore. Post originally published on http://krzysztofzuraw.com/. Republished with the author's permission.

Introduction

In this post I will look into an essential part of testing: mocks. First of all, what I want to accomplish here is to give you basic examples of how to mock data using two tools - mock and pytest monkeypatch.

Why bother mocking? Some parts of our application may have dependencies on other libraries or objects. To isolate the behaviour of our parts, we need to substitute those external dependencies. Here comes the mocking: we mock an external API to check certain behaviours, such as proper return values, that we previously defined.

Mocking a function

Let's say we have a module called function.py:

    def square(value):
        return value ** 2

    def cube(value):
        return value ** 3

    def main(value):
        return square(value) + cube(value)

Then let's see how these functions are mocked using the mock library:

    try:
        import mock
    except ImportError:
        from unittest import mock

    import unittest

    from function import square, main

    class TestNotMockedFunction(unittest.TestCase):

        @mock.patch('__main__.square', return_value=1)
        def test_function(self, mocked_square):
            # you have to patch the exact place where the mocked function is called
            self.assertEquals(square(5), 1)

        @mock.patch('function.square')
        @mock.patch('function.cube')
        def test_main_function(self, mocked_cube, mocked_square):
            # stacked patch decorators pass mocks bottom-up: cube's mock arrives first
            # the underlying functions are mocks, so main(5) uses their return values
            mocked_square.return_value = 1
            mocked_cube.return_value = 0
            self.assertEquals(main(5), 1)
            mocked_square.assert_called_once_with(5)
            mocked_cube.assert_called_once_with(5)

    if __name__ == '__main__':
        unittest.main()

What is happening here? The try/except at the top makes this code compatible between Python 2 and 3: in Python 3, mock is part of the standard library, whereas in Python 2 you need to install it with pip install mock. In the first test, I patch the square function. You have to remember to patch it in the same place you use it. For instance, I'm calling square(5) in the test itself, so I need to patch it in __main__. This is the case if I'm running this via python tests/test_function.py. If I'm using pytest for that, I need to patch it as test_function.square. In the second test, I patch the square and cube functions in their own module because they are used in the main function. The last two asserts come from the mock library and are there to make sure that the mocks were called with the proper values.

The same can be accomplished using monkeypatching with py.test:

    from function import square, main

    def test_function(monkeypatch):
        monkeypatch.setattr("test_function_pytest.square", lambda x: 1)
        assert square(5) == 1

    def test_main_function(monkeypatch):
        monkeypatch.setattr('function.square', lambda x: 1)
        monkeypatch.setattr('function.cube', lambda x: 0)
        assert main(5) == 1

As you can see, I'm using monkeypatch.setattr to set up a return value for the given functions. I still need to monkeypatch them in the proper places - test_function_pytest and function.

Mocking classes

I have a module called square:

    import math

    class Square(object):

        def __init__(self, radius):
            self.radius = radius

        def calculate_area(self):
            return math.sqrt(self.radius) * math.pi

and mock it using the standard library:

    try:
        import mock
    except ImportError:
        from unittest import mock

    import unittest

    from square import Square

    class TestClass(unittest.TestCase):

        @mock.patch('__main__.Square')  # depends on where this is run from
        def test_mocking_instance(self, mocked_instance)[...]
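
The article is cut off above. As a self-contained illustration of the same idea (my own sketch, not the article's continuation), here is how pytest's monkeypatch can replace a method on the Square class shown earlier:

    from square import Square

    def test_calculate_area_mocked(monkeypatch):
        # Replace the method on the class itself, so every instance gets the stub.
        monkeypatch.setattr(Square, "calculate_area", lambda self: 1)
        assert Square(5).calculate_area() == 1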



Automating OSINT: Dark Web OSINT Part Four: Using Scikit-Learn to Find Hidden Service Clones

Fri, 30 Sep 2016 13:38:57 +0000

Welcome back to the fourth and final instalment in this series. If you haven't read parts one, two or three, definitely feel free to go and do so. This one will be much shorter than the others.

The original inspiration for this post was a @krypti3a blog post called Counterfeiting on the Darknet: USD4U. If you aren't already following his blog you should; there's a lot of interesting stuff there. He details a counterfeiting hidden service at usd4you5sa237ulk.onion that seems to display counterfeit currency. The first thing you see, however, is the big warning at the top of the page warning against clone sites and asserting that what you are viewing is the real site. This got me wondering how we could leverage our OnionScan results to find cloned hidden services, so that we could examine the differences between them or just use them as a jumping-off point for an investigation. In a subsequent conversation, Scot also suggested that finding perfect mirrors would be useful, since a mirror could indicate a site backing itself up or preparing to move to a new hidden service address. The counterfeiting post gives us a great opportunity to try this out. Let's get started.

Getting Scikit-Learn Installed

Scikit-Learn is a machine learning library for Python that has all kinds of cool bits for data analysis and high-powered machine learning tasks. Full disclosure: I know precisely nothing about machine learning. The cool thing is that there are a number of supporting classes and functions in scikit-learn that can be used for other tasks, such as what we are going to be doing. All that being said, the installation of scikit-learn can be a bit of a pain, so just follow these steps carefully.

Windows

We need to download and install scipy, numpy and then scikit-learn. Each of them has a binary download called a "wheel" file that we can grab from the links below. Here is how you choose the right download, using the following example link: scipy-0.18.1-cp27-cp27m-win32.whl. "cp27" indicates that it is for Python 2.7 (this is what I use), and "win32" indicates that it is for 32-bit Windows. Now download the appropriate wheel files for each of the required libraries:

http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy
http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy
http://www.lfd.uci.edu/~gohlke/pythonlibs/#scikit-learn

Once you have them downloaded you can install them using pip. If you have never used pip before you should check out my Python course here. For example, do the following for numpy:

pip install numpy-1.11.1+mkl-cp27-cp27m-win32.whl

Mac OSX / Linux

In my experience, installing the prerequisites from pip works perfectly fine, but your mileage may vary:

sudo pip install scipy
sudo pip install numpy
sudo pip install scikit-learn

Once you have the prerequisites installed we can move on to writing some code!

Coding it Up

Before we start pounding out the code: I began this whole research question by asking Google "similarity between two text documents". It landed me on a great StackOverflow.com thread that explained how to do this in scikit-learn. I do not know a lick of math or machine learning, but I am always up for experimenting with snippets of code that much smarter people post, and I have verified that this technique works great for finding cloned hidden services.
Let’s get started by creating a new Python script called clone_finder.py and start entering the following code:

import argparse
import glob
import json
import os
import sys

from sklearn.feature_extraction.text import TfidfVectorizer

ap = argparse.ArgumentParser()
ap.add_argument("-s", "--hidden-service", required=True, help="The hidden service .onion address you are interested in.")
args = vars(ap.parse_args())

base_hidden_service = args['hidden_[...]
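
For context, here is a minimal, self-contained sketch of the scikit-learn technique the script builds on: computing pairwise cosine similarity between documents with TfidfVectorizer. The sample documents are invented; the real script feeds in the page contents collected by OnionScan:

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "counterfeit usd notes shipped worldwide",   # hypothetical page text
        "counterfeit usd notes shipped worldwide!",  # a near-perfect clone
        "fresh vegetables delivered daily",          # an unrelated page
    ]

    # Build L2-normalized TF-IDF vectors for every document.
    vectors = TfidfVectorizer().fit_transform(docs)

    # For normalized vectors, the dot product is the cosine similarity.
    similarity = vectors * vectors.T
    print(similarity.toarray())  # values near 1.0 indicate likely clones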



Gocept Weblog: Zope Resurrection Part 1 – Reanimation

Fri, 30 Sep 2016 08:43:32 +0000

Now we are helping Zope into the Python 3 wonderland: almost 20 people have started on the reanimation of Zope. We are working mostly on porting important dependencies of Zope to Python 3:

Zope 4 is now based on WSGI by default. Thanks to Hanno, who invested a lot of time to make the WSGI story of Zope much more streamlined.

We found out that the ZTK (Zope Toolkit) is no longer used directly by any of the projects that consume its packages (Zope, Grok, and Bluebream – the last of which is dead). It should be kept to test compatibility between the packages inside the ZTK.





Brian Okken: 23: Lessons about testing and TDD from Kent Beck

Fri, 30 Sep 2016 07:29:37 +0000

Kent Beck’s twitter profile says “Programmer, author, father, husband, goat farmer”. But I know him best from his work on extreme programming, test-first programming, and test-driven development. He’s the one. The reason you know about TDD is because of Kent Beck. I first ran across writings from Kent Beck as I started exploring Extreme […]

The post 23: Lessons about testing and TDD from Kent Beck appeared first on Python Testing.




Semaphore Community: Dockerizing a Python Django Web Application

Fri, 30 Sep 2016 06:09:30 +0000

This article is brought with ❤ to you by Semaphore.

Introduction

This article will cover building a simple 'Hello World'-style web application written in Django and running it in the much talked about and discussed Docker. Docker takes all the great aspects of a traditional virtual machine, e.g. a self-contained system isolated from your development machine, and removes many of the drawbacks, such as system resource drain, setup time, and maintenance.

When building web applications, you have probably reached a point where you want to run your application in a fashion that is closer to your production environment. Docker allows you to set up your application runtime in such a way that it runs in exactly the same manner as it will in production: on the same operating system, with the same environment variables, and any other configuration and setup you require.

By the end of the article you'll be able to:

Understand what Docker is and how it is used,
Build a simple Python Django application, and
Create a simple Dockerfile to build a container running a Django web application server.

What is Docker, Anyway?

Docker's homepage describes Docker as follows: "Docker is an open platform for building, shipping and running distributed applications. It gives programmers, development teams, and operations engineers the common toolbox they need to take advantage of the distributed and networked nature of modern applications."

Put simply, Docker gives you the ability to run your applications within a controlled environment, known as a container, built according to the instructions you define. A container leverages your machine's resources much like a traditional virtual machine (VM). However, containers differ greatly from traditional virtual machines in terms of system resources. Traditional virtual machines operate using hypervisors, which manage the virtualization of the underlying hardware to the VM. This means they are large in terms of system requirements. Containers operate on a shared Linux operating system base and add simple instructions on top to execute and run your application or process. The difference is that Docker doesn't require the often time-consuming process of installing an entire OS to a virtual machine such as VirtualBox or VMware.

Once Docker is installed, you create a container with a few commands and then execute your applications on it via the Dockerfile. Docker manages the majority of the operating system virtualization for you, so you can get on with writing applications and shipping them as you require in the container you have built. Furthermore, Dockerfiles can be shared for others to build containers and extend the instructions within them by basing their container image on top of an existing one. The containers are also highly portable and will run in the same manner regardless of the host OS they are executed on. Portability is a massive plus side of Docker.

Prerequisites

Before you begin this tutorial, ensure the following is installed on your system: Python 2.7 or 3.x; Docker (Mac users: it's recommended to use docker-machine, available via Homebrew-Cask); and a git repository to store your project and track changes.

Setting Up a Django web application

Starting a Django application is easy, as the Django dependency provides you with a command line tool for starting a project and generating some of the files and directory structure for you. To start, create a new folder that will house the Django application and move into that directory.
$ mkdir project
$ cd project

Once in this folder, you need to add the standard Python project dependencies file, which is usually named requirements.txt, and add the Django and Gunicorn dependencies to it. Gunicorn is a production standard web ser[...]
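
As an illustration, a minimal requirements.txt for this setup might look like the following; the pinned versions are placeholders from around the time of writing, so pin whichever releases you actually test against:

    Django==1.10.1
    gunicorn==19.6.0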



Wesley Chun: Migrating SQL data to Google Sheets using the new Google Sheets API

Thu, 29 Sep 2016 21:57:09 +0000

NOTE: The code covered in this post is also available in a video walkthrough. UPDATE (Sep 2016): Removed use of the argparse module & flags (effective as of Feb 2016).

Introduction

In this post, we're going to demonstrate how to use the latest generation of the Google Sheets API. Launched at Google I/O 2016 (full talk here), the Sheets API v4 can do much more than previous versions, bringing it to near-parity with what you can do with the Google Sheets UI (user interface) on desktop and mobile. Below, I'll walk you through a Python script that reads the rows of a relational database representing customer orders for a toy company and pushes them into a Google Sheet. Other API calls we'll make: one to create new Google Sheets and another to read the rows from a Sheet. Earlier posts demonstrated the structure and "how-to" of using Google APIs in general, so more recent posts, including this one, focus on solutions and the use of specific APIs. Once you review the earlier material, you're ready to start with authorization scopes and then see how to use the API itself.

Google Sheets API authorization & scopes

Previous versions of the Google Sheets API (formerly called the Google Spreadsheets API) were part of a group of "GData APIs" that implemented the Google Data (GData) protocol, an older, less secure, REST-inspired technology for reading, writing, and modifying information on the web. The new API version falls under the more modern set of Google APIs requiring OAuth2 authorization, whose use is made easier with the Google APIs Client Libraries.

The current API version features a pair of authorization scopes: read-only and read-write. As usual, we always recommend you use the most restrictive scope possible that allows your app to do its work. You'll request fewer permissions from your users (which makes them happier), and it also makes your app more secure, possibly preventing the modification, destruction, or corruption of data, or perhaps inadvertently going over quotas. Since we're creating a Google Sheet and writing data into it, we must use the read-write scope:

'https://www.googleapis.com/auth/spreadsheets' — Read/write access to Sheets and Sheet properties

Using the Google Sheets API

Let's look at some code that reads rows from a SQLite database and creates a Google Sheet with that data. Since we covered the authorization boilerplate fully in earlier posts and videos, we're going straight to creating a Sheets service endpoint. The API string to use is 'sheets' and the version string to use is 'v4' as we call the apiclient.discovery.build() function:

SHEETS = discovery.build('sheets', 'v4', http=creds.authorize(Http()))

With the SHEETS service endpoint in hand, the first thing to do is to create a brand new Google Sheet. Before we use it, one thing to know about the Sheets API is that most calls require a JSON payload representing the data & operations you wish to perform, and you'll see this as you become more familiar with it. For creating new Sheets, it's pretty simple: you don't have to provide anything, in which case you'd pass in an empty (dict as the) body, but a better bare minimum would be a name for the Sheet, so that's what data is for:

data = {'properties': {'title': 'Toy orders [%s]' % time.ctime()}}

Notice that a Sheet's "title" is part of its "properties," and we also happen to add the timestamp as part of its name.
With the payload complete, we call the API with the command to create a new Sheet [spreadsheets().create()], passing in data in the (eventual) request body:

res = SHEETS.spreadsheets().create(body=data).execute()

Alternatively, you can use the Google Drive API (v2 or v3) to create a Sheet but would also need to pass in th[...]
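
The post is truncated above. As a hedged sketch of the step it leads up to, pushing rows into the new Sheet is typically done with the values().update() call; SHEETS and res come from the snippets above, while the rows data here is invented sample data:

    rows = [
        ['ID', 'Customer', 'Toy', 'Price'],  # header row
        [1, 'Alice', 'Robot', 19.99],
    ]
    SHEET_ID = res['spreadsheetId']  # ID returned by the create() call above
    SHEETS.spreadsheets().values().update(
        spreadsheetId=SHEET_ID,
        range='A1',
        body={'values': rows},
        valueInputOption='RAW',  # or 'USER_ENTERED' to have Sheets parse types
    ).execute()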



PyTennessee: PyTN 2017 Kickoff

Thu, 29 Sep 2016 13:27:17 +0000

Hello everyone! It’s time to kick off the lead-up to a great PyTN 2017! I’m very excited about this year’s event, and I hope you are as well. I’m pleased to welcome a few new organizers to this year’s event: Bill Israel (co-chair) and Jessica Wynn (volunteer chair). I know you’ll find that their extra efforts help the conference feel more coordinated and organized, so please thank them if you see them.

Let’s start with the keynotes, which we’ll be posting full profiles of in the coming weeks. Here are the reasons they are special to me:

- Sophie Rapoport: A software engineer from Nashville who has made a distinct impression on me with her sharp intellect, dedication to learning, and teaching nature. Simply an amazing person.
- Courey Elliott: A software developer whom I first met about 2 years ago, with an amazing capacity for reasoning, a wide array of interests, a constant desire to learn, and a wonderful soul.
- Jurnell Cockhren: An amazingly widely and deeply learned individual inside and outside of technology. Working right at the line between software engineering, operations, and teaching, Jurnell’s sharp mind, excellent sense of humor, and beautiful humanity endeared him to me from the beginning.
- Sarah Guido: One of the people I greatly admire in the data science world, Sarah can take complex concepts and break them down into understandable pieces with ease. Sarah’s impressive engineering skills lead her to solve complex problems in data science and development alike. With a great compassion to empower others, powerful logic, and writing, Sarah is truly an asset to our community.

CFP Opens Saturday Oct 1st!

PyTennessee 2017 is accepting all types of proposals from this Saturday, Oct 1st, through Nov 15th, 2016. Due to the competitive nature of the selection process, we encourage prospective speakers to submit their proposals as early as possible. We’re looking forward to receiving your best proposals for tutorials and talks. Lightning talk sign-ups will be available at the registration desk the day of the event. Note: To submit a proposal, sign up or log in to your account and proceed to your account dashboard!

Special Sponsorship Profile

Intellovations has been the first sponsor of PyTN for all 4 years we have had the conference, and in addition to their financial support, they have been providing words of encouragement behind the scenes for all those years as well. They are truly part of the PyTN family, and for that, we are greatly appreciative and lucky to call them friends.

Young Coders

We will be having Young Coders again this year. Tickets will be available December 15th at noon, and they normally go fast. The program will be led by Brad Montgomery, a masterful Python teacher, again this year. This free program is open to ages 12-17, and all attendees will receive take-home books and a Chromebook.

After Party

Last year, we had a games night put on by Eventbrite, and behind the scenes of that party was a group called Meeple Mountain. We heard from many, MANY of you how much you enjoyed the game night, and it will be returning this year! The Meeple Mountain crew will be on hand, and you can learn more about them and this event at their website.

A Closing Note

Four years ago, a collection of us started PyTN to fill what we felt was a gap in our community. We wanted a place to come together to learn about Python, share our experiences, and add a human touch to what is sometimes a tough profession in which to do that.
This year as we started planning for PyTN, I realized that we had grown from 110 to over 360 people during these years. On behalf of myself, my wife and all the other organizers, we personally want t[...]



Import Python: ImportPython Issue 92 - django-perf-rec track django performance, Mock testing, python alias more

Thu, 29 Sep 2016 10:40:34 +0000

Worthy Read

Python alias commands that play nice with virtualenv
Over the years, I’ve come up with my own Python aliases that play nice with virtual environments. For this post, I tried to stay as generic as possible such that any alias here can be used by every Pythonista.

Keep detailed records of the performance of your Django code. (django, performance)
django-perf-rec is like Django's assertNumQueries on steroids. It lets you track the individual queries and cache operations that occur in your code. This blog post explains the workings of this project: https://tech.yplanapp.com/2016/09/26/introducing-django-perf-rec/

Practical ML for Engineers talk at #pyconuk last weekend (machine learning)
Last weekend I had the pleasure of introducing Machine Learning for Engineers (a practical walk-through, no maths) at PyConUK 2016 (video link on page). My talk covered a practical guide to a 2-class classification challenge (Kaggle’s Titanic) with scikit-learn, backed by a longer Jupyter Notebook (github) and further backed by Ezzeri’s 2-hour tutorial from PyConUK 2014.

Mocks and Monkeypatching in Python (testing)
This tutorial will help you understand why mocking is important, and show you how to mock in Python with Mock and pytest monkeypatch.

Abu Ashraf Masnun: Introduction to Django Channels
Yet another introduction to Django Channels. This one is a much clearer, step-by-step tutorial. If you still don't know what Django Channels is or how to get started, read this.

Python has come a long way. So has job hunting. Try Hired and get in front of 4,000+ companies with one application. No more pushy recruiters, no more dead-end applications and mismatched companies; Hired puts the power in your hands. (Sponsor)

Python Mocks: a gentle introduction - Part 1 and 2 (testing, mock)
In this series of posts I am going to review the Python mock library and exemplify its use. I will not cover everything you may do with mock, obviously, but hopefully I'll give you the information you need to start using this powerful library. Note it's a two-part series as of now; here is the second part's URL: http://blog.thedigitalcatonline.com/blog/2016/09/27/python-mocks-a-gentle-introduction-part-2/#.V-ysf9HhXQo

Decorators: The Function's Function - Weekly Python Chat with Trey Hunner (webcast, video)
Decorators are one of those features in Python that people like to talk about. Why? Because they're different. Because they're a little weird. Because they're a little mind-bending. Let's talk about decorators: how do you make them and when should you use them?

Simple REST APIs for charts and datasets (charts)
The Plotly V2 API suite is a simple alternative to the Google Charts API. Make a request to a Plotly URL and get a link to a dataset or D3 chart. Python code snippets are included on the page.

Python Code Review: Unplugged – Episode 2 - By Daniel Bader (code review)
Daniel is doing a series of code review sessions with Python developers. Have a look at the accompanying video where he gives his opinion on an open source project by Milton.

Python by the C side (c binding)
CPython, the primary implementation of Python used by millions, is written in C. Python core developers embraced and exposed Python’s strong C roots, taking a traditional tack on portability, contrasting with the “write once, debug everywhere” approach popularized elsewhere. The community followed suit with the core developers, developing several methods for linking to C. This has given us a lot of choices for interfacing with C; let us look at them.

Django Tips #15 Using Mixins With Class-Based Views (django)
General rules to use mixins to compose your own view class[...]



Vasudev Ram: Publish Peewee ORM data to PDF with xtopdf

Thu, 29 Sep 2016 03:28:02 +0000

By Vasudev Ram

Peewee => PDF

Peewee is a small, expressive ORM for Python, created by Charles Leifer. After trying out Peewee a bit, I thought of writing another application of xtopdf (my Python toolkit for PDF creation), to publish Peewee data to PDF. I used an SQLite database underlying the Peewee ORM, but it also supports MySQL and PostgreSQL, per the docs. Here is the program, in file PeeweeToPDF.py:

# PeeweeToPDF.py
# Purpose: To show basics of publishing Peewee ORM data to PDF.
# Requires: Peewee ORM and xtopdf.
# Author: Vasudev Ram
# Copyright 2016 Vasudev Ram
# Web site: https://vasudevram.github.io
# Blog: http://jugad2.blogspot.com
# Product store: https://gumroad.com/vasudevram

from peewee import *
from PDFWriter import PDFWriter

def print_and_write(pw, s):
    print s
    pw.writeLine(s)

# Define the database.
db = SqliteDatabase('contacts.db')

# Define the model for contacts.
class Contact(Model):
    name = CharField()
    age = IntegerField()
    skills = CharField()
    title = CharField()

    class Meta:
        database = db

# Connect to the database.
db.connect()

# Drop the Contact table if it exists.
db.drop_tables([Contact])

# Create the Contact table.
db.create_tables([Contact])

# Define some contact rows.
contacts = (
    ('Albert Einstein', 22, 'Science', 'Physicist'),
    ('Benjamin Franklin', 32, 'Many', 'Polymath'),
    ('Samuel Johnson', 42, 'Writing', 'Writer')
)

# Save the contact rows to the contacts table.
for contact in contacts:
    c = Contact(name=contact[0], age=contact[1], \
        skills=contact[2], title=contact[3])
    c.save()

sep = '-' * (20 + 5 + 10 + 15)

# Publish the contact rows to PDF.
with PDFWriter('contacts.pdf') as pw:
    pw.setFont('Courier', 12)
    pw.setHeader('Demo of publishing Peewee ORM data to PDF')
    pw.setFooter('Generated by xtopdf: slides.com/vasudevram/xtopdf')
    print_and_write(pw, sep)
    print_and_write(pw, "Name".ljust(20) + "Age".center(5) +
        "Skills".ljust(10) + "Title".ljust(15))
    print_and_write(pw, sep)
    # Loop over all rows queried from the contacts table.
    for contact in Contact.select():
        print_and_write(pw, contact.name.ljust(20) +
            str(contact.age).center(5) + contact.skills.ljust(10) +
            contact.title.ljust(15))
    print_and_write(pw, sep)

# Close the database connection.
db.close()

I could have used Python's namedtuple feature instead of tuples, but did not do it for this small program. I ran the program with:

python PeeweeToPDF.py

Here is a screenshot of the output as seen in Foxit PDF Reader (click image to enlarge):

- Enjoy.

- Vasudev Ram - Online Python training and consulting
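
As a small aside, Peewee's query API also supports filtering and ordering; here is a quick usage sketch against the Contact model above (my own illustration in the post's Python 2 style, not part of the original program):

# Select only contacts older than 30, ordered by name.
for contact in Contact.select().where(Contact.age > 30).order_by(Contact.name):
    print contact.name, '-', contact.title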



Wesley Chun: Formatting cells in Google Sheets with Python

Wed, 28 Sep 2016 19:32:13 +0000

Introduction

One of the critical things developers have not been able to do in previous versions of the Google Sheets API is format cells... that's a big deal! Anyway, the past is the past, and I choose to look ahead. In my earlier post on the Google Sheets API, I introduced Sheets API v4 with a tutorial on how to transfer data from a SQL database to a Sheet. You'd do that primarily to make database data more presentable rather than deliberately switching to a storage mechanism with weaker querying capabilities. At the very end of that post, I challenged readers to try formatting. If you got stuck, confused, or haven't had a chance yet, today's your lucky day. One caveat is that there's more JavaScript in this post than Python... you've been warned!

Using the Google Sheets API

We need to write (formatting) into a Google Sheet, so we need the same scope as last time, read-write:

'https://www.googleapis.com/auth/spreadsheets' — Read-write access to Sheets and Sheet properties

Since we've covered the authorization boilerplate fully in earlier posts and videos, including how to connect to the Sheets API, we're going to skip that here and jump right to the action.

Formatting cells in Google Sheets

The way the API works, in general, is to take one or more commands and execute them on the Sheet. This comes in the form of individual requests, either to cells, a Sheet, or the entire spreadsheet. A group of requests is organized as a JavaScript array. Each request in the array is represented by a JSON object. Yes, this part of the post may seem like a long exercise in JavaScript, but stay with me here. Continuing... once your array is complete, you send all the requests to the Sheet using the SHEETS.spreadsheets().batchUpdate() command. Here's pseudocode sending 5 commands to the API:

SHEET_ID = . . .

reqs = {'requests': [
    {'updateSheetProperties': . . .
    {'repeatCell': . . .
    {'setDataValidation': . . .
    {'sortRange': . . .
    {'addChart': . . .
]}

SHEETS.spreadsheets().batchUpdate(
    spreadsheetId=SHEET_ID, body=reqs).execute()

What we're executing will be similar. The target spreadsheet will be the one you get when you run the code from the previous post, only without the timestamp in its title, as it's unnecessary. Once you've run the earlier script and created a Sheet of your own, be sure to assign it to the SHEET_ID variable. The goal is to send enough formatting commands to arrive at the same spreadsheet but with improved visuals. Four (4) requests are needed to bring the original Sheet to this state:

Set the top row as "frozen", meaning it doesn't scroll even when the data does
Also bold the first row, as these are the column headers
Format column E as US currency with dollar sign & 2 decimal places
Set data validation for column F, requiring values from a fixed set

Creating Sheets API requests

As mentioned before, each request is represented by a JSON object, cleverly disguised as Python dictionaries in this post, and the entire request array is implemented as a Python list. Let's take a look at what it takes to put together the individual requests:

Frozen rows

Frozen rows is a Sheet property, so in order to change it, users must employ the updateSheetProperties command. Specifically, frozenRowCount is a grid property, meaning the field that must be updated is gridProperties.frozenRowCount, set to 1.
Here's the Python dict (that gets converted to a JSON object) representing this request:

{'updateSheetProperties': {
    'properties': {'gridProperties': {'frozenRowCount': 1}},
    'fields': 'gridProperties.frozenRowCount',
}},

The properties attribut[...]
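
The post is truncated above. For flavor, here is a hedged sketch of the second request it describes (bolding the header row) using the repeatCell command; the structure follows the Sheets v4 request format:

{'repeatCell': {
    'range': {'startRowIndex': 0, 'endRowIndex': 1},  # first row only
    'cell': {'userEnteredFormat': {'textFormat': {'bold': True}}},
    'fields': 'userEnteredFormat.textFormat.bold',
}},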



Richard Tew: Mixing iteration and read methods would lose data

Wed, 28 Sep 2016 17:58:47 +0000

I just upgraded to Python 2.7.12 and my code is now erroring with the rather unhelpful "Mixing iteration and read methods would lose data" error. Googling for this yields a number of StackOverflow posts where people are iterating over a file and then, within the loop, doing calls on the file. This sounds like a good error for their case, but in my case the fixes are somewhat more inane.
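
For reference, the classic case from those StackOverflow posts looks roughly like this minimal sketch; on Python 2 the read() inside the loop raises the ValueError:

    with open('data.txt') as f:
        for line in f:
            f.read(1)  # ValueError: Mixing iteration and read methods would lose data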

This errors:
    persistence.write_dict_uint32_to_list_of_uint32s(output_file, { k: [] for k in read_set_of_uint32s(input_file) })
This fixes that error:
    set_value = persistence.read_set_of_uint32s(input_file)
    persistence.write_dict_uint32_to_list_of_uint32s(output_file, { k: [] for k in set_value })
Another error at another location:
    persistence.write_set_of_uint32s(output_file, persistence.read_set_of_uint32s(input_file))
Some simple rearranging makes this error go away:
    set_value = persistence.read_set_of_uint32s(input_file)
    persistence.write_set_of_uint32s(output_file, set_value)
And finally, the next error at another location:
    def read_string(f):
        s = ""
        while 1:
            v = f.read(1) # errors here
            if v == '\0':
                break
            s += v
        return s
And the fix for this:
    def read_bytes(f, num_bytes):
        return f.read(num_bytes)

    def read_string(f):
        s = ""
        while 1:
            v = read_bytes(f, 1)
            if v == '\0':
                break
            s += v
        return s
This error check seems like a horribly broken idea that pops up in places where it wasn't intended.



Anwesha Das: First time PyLadies presence in Pycon India

Wed, 28 Sep 2016 11:06:15 +0000

The title of this blog post is not something I wrote but is copy-pasted from a tweet by @kushaldas. Kushal has been attending PyCon India since 2010, a year after it started. This one tweet of his says, if not the whole story, then a lot of it. So, let's explore the journey of PyLadies at PyCon India.

August

In the first week of August I mailed the PyCon India organizers requesting a table for PyLadies during PyCon India. But in mid August I got a reply that they would be unable to give us one since we were not sponsors. Then Rupali got into the matter, and she said that Red Hat (the Yoda for PyLadies Pune) would love to give us a space in their booth to promote PyLadies. With that support we started our planning for PyCon. We had a lot of things to do. Prime among them was to break the earlier notions about PyLadies in India. A glibc developer, Siddhesh Poyarekar, actually sponsored Python tee shirts for us. This is an example of the true diversity and beauty of the Open Source community.

Day 0

Went to the venue to check things and the Red Hat booth. Was a little disappointed to see that it is at the far corner, and not in front of the main auditorium. Asked Janki and Dhriti to prepare their lightning talks about their work in Python.

Day 1

Day 1 started with a shrill alarm set for 5:30 in the morning; I had to practice my talk before reaching the venue. At the venue I met Nisha (what a relief to see her). Nisha, Pooja, Trishna, Rupali, Janki and I got the booth set up. Rupali managed to give us a separate table inside the Red Hat booth so that we could keep our stuff. Nisha had done a great job designing the posters; the new PyLadies Pune logo actually fetched the attention of many people. Again, Red Hat sponsored the PyLadies posters too. As we never had the money to print stickers, we made printouts of the PyLadies logo; Trishna made 50 of them. We just cut them out and pinned them to our tee shirts and PyCon India badges. These badges got us noticed. People came and asked for them, but unfortunately we could not give one to everyone (we had only 50 and never thought they would be so popular :)). We had the photo prop, the PyLadies umbrella; I had suggested Nisha bring it over. Thank you Nisha for making it so nicely.

Then came the tough part: going to people and telling them about PyLadies. We are a group that is just starting off, so when I went to people and asked "Do you want to know about PyLadies?", "Are you interested in PyLadies?" or "Do you want to join PyLadies?", here are some of the answers I got:

"Do you have PyLadies in India! Never heard of that."
"What is PyLadies?"
"I've been coming to PyCon India since its inception. Never saw anything happening regarding PyLadies. Does PyLadies exist in India?"
"Do you actually code in PyLadies?"
"I am interested in Python but not in PyLadies."

That was disappointing (but I was not discouraged). We explained PyLadies to them - most importantly, that what binds PyLadies together is the love of coding in Python. Then I explained what we do in our regular PyLadies Pune meetups. In the first half of the meetups we learn some Python syntax, and in the second half we write code using it, so when we go back home we actually have something we created. We also have sessions on things that are important for contributing to Open Source. We listen to talks. We have a mailing list in which we keep posting problems regularly.
After all these explanations[...]



Talk Python to Me: #78 How I built an entire game and toolchain 100% in Python

Wed, 28 Sep 2016 08:00:00 +0000

What kind of applications can you build with Python? You hear me featuring many people on this show who build websites, web services, or some data-science-driven application. Of course, all of those are wonderful, but I know many of you have dreamed of building a game.

This episode I'm interviewing Joseph Cherlin. He created the game Epikos and the entire tool chain entirely in Python. He has a great story about how he came across Python, why he decided to use it in his game, and advice he has for anyone out there taking on a large project like this.

Links from the show:



Abu Ashraf Masnun: Can Cython make Python Great in Programming Contests?

Wed, 28 Sep 2016 02:00:30 +0000

Python is getting very popular as a first programming language, both at home and abroad. I know many Bangladeshi universities have started using Python to introduce beginners to the wonderful world of programming. This also seems to be the case in the US. I have talked to a few friends from other countries and they agree that Python is quickly becoming the language people learn first. A quick Google search can explain why Python is getting so popular among learners.

Python in Programming Contests

Recently Python has been included in the ICPC; before that, Python usually had less visibility and presence in programming contests. And of course there are valid reasons behind that. The de facto implementation of Python, "CPython", is quite slow. It's a dynamic language, and that costs in terms of execution speed. C / C++ / Java are way faster than Python, and programming contests are all about speed / performance. Python lets you solve problems in fewer lines of code, but you may often hit the time limit. Despite this limitation, people have continuously chosen Python to learn programming and to solve problems on numerous programming-related websites. This might have convinced the authorities to include Python in the ICPC. But we do not yet know which flavor (read: implementation) and version of Python will be available to ICPC contestants. From different sources I gather that Python will be supported but the time limit issue remains - it is not guaranteed that a problem can be solved within the time limit using Python. That makes me wonder: can Cython help in such cases?

Introduction to Cython

From the official website: "Cython is an optimising static compiler for both the Python programming language and the extended Cython programming language (based on Pyrex). It makes writing C extensions for Python as easy as Python itself."

With Cython, we can add type hints to our existing Python programs and compile them to make them run faster. But what is more awesome is the Cython language - it is a superset of Python that allows us to write Python-like code which performs like C. Don't trust my words; see for yourself in the Tutorial and Cython Language Basics.

Cython is Fast

When I say fast, I really mean very, very fast. (Image source: http://ibm.co/20XSZ4F - the image, from an IBM developerWorks article, shows how Cython compares to C in terms of speed.) You can also check out these links for random benchmarks from different people: Cython beating C++; Cython being 30% faster than C++; another benchmark. And finally, do try it yourself and benchmark Cython against C++ to see how it performs! Bonus article - Blazing fast Python networking :-)

Cython is Easy to Set Up

OK, so is it easy to make Cython available in contest environments? Yes, it is! The only requirement of Cython is a C compiler installed on your system along with Python. Any computer used for contest programming is supposed to have a C compiler installed anyway. We need just one command to install Cython:

pip install Cython

PS: Many scientific distributions of Python (e.g. Anaconda) already ship Cython.

Cython in Programming Contests

Since Cython is super fast and easy to set up, programming contests could make Cython available along with CPython to let contestants make their programs faster and keep up with Java / C++. It would make Python an attractive choice for serious problem solvin[...]
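
As a flavor of what those type hints look like, here is a tiny sketch in the Cython language (my own illustration, not from the post); the cdef declarations let the loop run at C speed:

# fib.pyx - a sketch, assuming Cython is installed; compile with cythonize
def fib(int n):
    cdef int i
    cdef long a = 0, b = 1
    for i in range(n):
        a, b = b, a + b
    return a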



Python Engineering at Microsoft: Microsoft’s participation in the 2016 Python core sprint

Tue, 27 Sep 2016 19:11:26 +0000

From September 5th to the 9th a group of Python core developers gathered for a sprint hosted at Instagram and sponsored by Instagram, Microsoft, and the Python Software Foundation. The goal was to spend a week working towards the Python 3.6.0b1 release, just in time for the Python 3.6 feature freeze on Monday, September 12, 2016. The inspiration for this sprint was the Need for Speed sprint held in Iceland a decade ago, where many performance improvements were made to Python 2.5. How time flies!

That's the opening paragraph from the Python Insider blog post discussing the 2016 Python core sprint that recently took place. In the case of Microsoft's participation in the sprint, both Steve Dower and I (Brett Cannon) were invited to participate (which meant Microsoft had one of the largest company representations at the sprint). Between the two of us we spent the week completing work on four of our own PEPs for Python 3.6:

Adding a file system path protocol (PEP 519)
Adding a frame evaluation API to CPython (PEP 523)
Change Windows console encoding to UTF-8 (PEP 528)
Change Windows filesystem encoding to UTF-8 (PEP 529)

I also helped review the patches implementing PEP 515 and PEP 526 ("Underscores in Numeric Literals" and "Syntax for Variable Annotations", respectively). Both Steve and I also participated in many technical discussions on various topics and we cleared out our backlog of bug reports.

If you're curious as to what else has made it into Python 3.6 (so far), the rough draft of the "What's New" document for Python 3.6 is a good place to start (feature freeze has been reached, so no new features will be added to Python 3.6 but bugs are constantly being fixed). We also strongly encourage everyone to download Python 3.6.0b1 and try it with their code. If you find bugs, please file a bug report. There are also various features which will stay or be removed based on community feedback, so please do give this beta release a try!

Overall the week was very productive not only for the two of us but for everyone at the sprints and Python as a project. We hope that the success of this sprint will help lead to it becoming an annual event so that Python can benefit from such a huge burst of productivity every year. And if you or your company want to help with these sorts of sprints in the future or Python's development in general, then please consider helping with Python's development and/or sponsoring the Python Software Foundation.



Weekly Python Chat: Decorators: The Function's Function

Tue, 27 Sep 2016 18:30:00 +0000

Decorators are one of those features in Python that people like to talk about.

Why? Because they're different. Because they're a little weird. Because they're a little mind-bending.

Let's talk about decorators: how do you make them and when should you use them?




Mike Driscoll: wxPython Cookbook Available for Pre-Order

Tue, 27 Sep 2016 17:15:59 +0000

I am excited to announce that the wxPython Cookbook is now available for Pre-Order. You can get your digital copy on Gumroad or Leanpub now. You can get a sample of the book on Leanpub if you’d like to “try before you buy”.

There will be over 50 recipes in this book. The examples in my book will work with both wxPython 3.0.2 Classic as well as wxPython Phoenix, which is the bleeding edge of wxPython that supports Python 3. If I discover any recipes that do not work with Phoenix, they will be clearly marked or there will be an alternative example given that does work.


Here is a partial listing of the current set of recipes in no particular order:

  • Adding / Removing Widgets Dynamically
  • How to put a background image on a panel
  • Binding Multiple Widgets to the Same Handler
  • Catching Exceptions from Anywhere
  • wxPython’s Context Managers
  • Converting wx.DateTime to Python datetime
  • Creating an About Box
  • How to Create a Login Dialog
  • How to Create a “Dark Mode”
  • Generating a Dialog from a Config File
  • How to Disable a Wizard’s Next Button
  • How to Use Drag and Drop
  • How to Drag and Drop a File From Your App to the OS
  • How to Edit Your GUI Interactively Using reload()
  • How to Embed an Image in the Title Bar
  • Extracting XML from the RichTextCtrl
  • How to Fade-in a Frame / Dialog
  • How to Fire Multiple Event Handlers
  • Making your Frame Maximize or Full Screen
  • Using wx.Frame Styles
  • Get the Event Name Instead of an Integer
  • How to Get Children Widgets from a Sizer
  • How to Use the Clipboard
  • Catching Key and Char Events
  • Learning How Focus Works in wxPython
  • Making Your Text Flash
  • Minimizing to System Tray
  • Using ObjectListView instead of ListCtrl

You can read more about the project in my Kickstarter announcement article. Please note that the Kickstarter campaign is over.

Related Posts




Continuum Analytics News: Continuum Analytics Joins Forces with IBM to Bring Open Data Science to the Enterprise

Tue, 27 Sep 2016 12:18:35 +0000

News
Tuesday, September 27, 2016

Optimized Python experience empowers data scientists to develop advanced open source analytics on Spark

AUSTIN, TEXAS—September 27, 2016—Continuum Analytics, the creator and driving force behind Anaconda, the leading Open Data Science platform powered by Python, today announced an alliance with IBM to advance open source analytics for the enterprise. Data scientists and data engineers in open source communities can now embrace Python and R to develop analytic and machine learning models in the Spark environment through its integration with IBM's Project DataWorks.

Combining the power of IBM's Project DataWorks with Anaconda enables organizations to build the high-performance Python and R data science models and visualization applications required to compete in today’s data-driven economy. The companies will collaborate on several open source initiatives, including enhancements to Apache Spark that fully leverage Jupyter Notebooks with Apache Spark, benefiting the entire data science community.

“Our strategic relationship with Continuum Analytics empowers Project DataWorks users with full access to the Anaconda platform to streamline and help accelerate the development of advanced machine learning models and next-generation analytics apps,” said Ritika Gunnar, vice president, IBM Analytics. “This allows data science professionals to utilize the tools they are most comfortable with in an environment that reinforces collaboration with colleagues of different skillsets.”

By collaborating to bring about the best Spark experience for Open Data Science in IBM's Project DataWorks, enterprises are able to easily connect their data, analytics and compute with innovative machine learning to accelerate and deploy their data science solutions.

“We welcome IBM to the growing family of industry titans that recognize Anaconda as the de facto Open Data Science platform for enterprises,” said Michele Chambers, EVP of Anaconda Business & CMO at Continuum Analytics. “As the next generation moves from machine learning to artificial intelligence, cloud-based solutions are key to help companies adopt and develop agile solutions––IBM recognizes that. We’re thrilled to be one of the driving forces powering the future of machine learning and artificial intelligence in the Spark environment.”

IBM's Project DataWorks is the industry’s first cloud-based data and analytics platform that integrates all types of data to enable AI-powered decision making. With it, companies are able to realize the full promise of data by enabling data professionals to collaborate and build cognitive solutions by combining IBM data and analytics services and a growing ecosystem of data and analytics partners, all delivered on Apache Spark. Project DataWorks is designed to allow for faster development and deployment of data and analytics solutions with self-service user experiences to help accelerate business value.

To learn more, join Bob Picciano, SVP of IBM Analytics, and Travis Oliphant, CEO of Continuum Analytics, at the IBM DataFirst Launch Event on Sept 27, 2016, at the Hudson Mercantile Building in NYC. The event is also available on livestream.

About Continuum Analytics

Continuum Analytics is the creator and driving force behind Anaconda, the leading Open Data Science platform powered by Python. We put superpowers into the hands of people who[...]