Subscribe: O'Reilly Radar - Insight, analysis, and research about emerging technologies
http://radar.oreilly.com/atom.xml
Language: English

All - O'Reilly Media



All of our Ideas and Learning material from all of our topics.



Updated: 2018-01-23T18:11:51Z

 



Fishing for graphs in a Hadoop data lake

2018-01-23T17:15:00Z

Exploring many small regions of a graph with low latency using specialized graph and multi-model databases.Graphs in the sense of linked data with vertices and edges are everywhere these days: social networks, computer networks, and citations, to name but a few. And the data is generally big. Therefore, a lot of folks have huge amounts of data in graph format, typically sitting in some Hadoop file system as a table file for the vertices and a table file for the edges. At the same time, there are very good analytics tools available, such as Spark with GraphX and the more recent GraphFrames libraries. As a consequence, people are able to run large-scale graph analytics, and thus derive a lot of value out of the data they have. Spark is generally very well suited for such tasks—it offers a great deal of parallelism and makes good use of distributed resources. Thus, it works great for workloads that essentially inspect the whole graph. But, there is even more information to be found in your graphs by inspecting many small regions of the graph, and I’ll present concrete examples in this article. For such smaller and rather ad-hoc workloads, it is not really worthwhile to launch a big job in Spark, even if you have to run many of them. Therefore, we will explore another possibility, which is to extract your graph data—or at least a part of it—and store it in a specialized graph database, which is then able to run many relatively small ad-hoc queries with very low latency. This makes it possible to use the results in interactive programs or graphical user interfaces. In this article, I’ll explain this approach of exploring small regions of a graph using specialized graph databases in detail by discussing graph queries and capabilities of graph databases, while presenting concrete use cases in real-world applications. As an aside, we’ll take a little detour to discuss multi-model databases, which further enhances the possibilities of the presented approach. This article targets data scientists, people doing data analytics, and indeed anybody who has access to large amounts of graph data and would like to get more insight out of it. We in the community believe that exploring these graphs very quickly, using many local queries with graph or multi-model databases, is eminently feasible and greatly enhances our toolboxes to gain insight into our data. Graph data and "graphy" queries Our tour through graph data land begins with a few examples of graphs in real-world data science. The prototypical example is a social network, in which the people in the network are the vertices of our graph, and their "friendship" relation is described by the edges: whenever person A is a friend of a person B, there is an edge in the graph between A and B, so edges are "undirected" (see Example 1 in Figure 1). Note that in another incarnation of this example, edges can be "directed" (see Example 2 in Figure 1)—an edge from A to B can, for example, mean that A "follows" B, and this can happen without B following A. Usually in these examples, there will be one (or at most two in the follower case) edge(s) between any two given vertices. Figure 1. Example 1 friendship (left) vs. Example 2 followers (right). Images courtesy of ArangoDB, used with permission. Another example is computer networks, in which the individual machines and routers are the vertices and the edges describe direct connectivity between vertices. 
A variation of this is that the edges are actual network connection events, which will usually carry a timestamp. In this latter case, there can be many different edges between two given machines over time. Examples are actually so abundant that there is a danger of boring you by presenting too many in too great detail. So, I'll just mention in passing that dependency chains between events can be described by directed edges between them. See Example 3 in Figure 2 for an illustration. Hierarchies can be described by directed edges, which describe the ancestor/descendant relation. Citations betw[...]
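To make the whole-graph analytics side of this concrete, here is a minimal sketch (not from the article) of loading vertex and edge tables from a Hadoop file system into Spark's GraphFrames; the file paths and column names are assumptions for illustration.

    # Minimal PySpark/GraphFrames sketch; the paths and schemas below are hypothetical.
    from pyspark.sql import SparkSession
    from graphframes import GraphFrame  # requires the graphframes Spark package

    spark = SparkSession.builder.appName("graph-analytics").getOrCreate()

    # GraphFrames expects an `id` column on vertices and `src`/`dst` columns on edges.
    vertices = spark.read.parquet("hdfs:///data/graph/vertices")
    edges = spark.read.parquet("hdfs:///data/graph/edges")
    g = GraphFrame(vertices, edges)

    # Whole-graph analytics of the kind Spark handles well, e.g. vertex degrees.
    g.inDegrees.show()

For the many small, ad-hoc neighborhood queries discussed above, the same vertex and edge tables would instead be exported into a graph or multi-model database and queried there with low latency.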



The working relationship between AIs and humans isn't master/slave

2018-01-23T12:00:00Z

We need a new model for how AI systems and humans interact.

A recent article claimed that humans don't trust results from AI systems—specifically, IBM's Watson. They gave the example of a cancer doctor planning a course of treatment. If the AI agrees with the doctor's plan, well, that's to be expected. If the AI disagrees with the doctor's treatment, the doctor assumes that it's "incompetent." Whether the doctor's analysis is based on careful reasoning or instinct, doctors went with their assumptions, rather than an opinion coming from the outside. It seems like a situation where the AI can't win. This isn't terribly surprising, for better or for worse. At least as stated, I don't think the problem will change. And perhaps it shouldn't, because we're expecting the wrong thing from the AI.

Several years ago—shortly after Watson beat the Jeopardy champions—IBM invited me to an event where they showed off Watson's capabilities. What impressed me at the demo wasn't its ability to beat humans, but the fact that it could tell you why it came to a conclusion. While IBM hadn't yet developed the user interface (which was irrelevant to Jeopardy), Watson could show probabilities that each potential answer (sorry, each potential question) was correct, based on the facts that supported each possible answer. To me, that's really where the magic happened. Seeing the rationale behind the result raised the possibility of having an intelligent conversation with an AI. I don't know whether IBM continued to develop this feature. Watson in 2017 certainly differs from the Watson that won Jeopardy. But the ability to expose the rationale behind a recommendation is certainly a key to the problem of trust, whether the application is medicine, agriculture, finance, or something else.

Let's change the doctor's problem slightly. Instead of an AI, imagine another doctor came in on a consult and offered a differing opinion. Would that doctor say, "you're wrong; here's what I think; I'm not going to tell you why; if you disagree, that's up to you."? I don't think so—and if that did happen, I wouldn't be at all surprised if the first doctor stuck with the original treatment plan and dismissed the consulting doctor as incompetent. (And as a jerk.) But that's precisely the position in which we put our AI systems. We expect them to be oracles. We expect them to make a recommendation, and expect the doctor to implement that recommendation. That's not how it works with human doctors, and that's not how it should work with AI. In real life, I would expect the doctors to have a discussion about what the treatment should be, comparing their thought processes in arriving at their recommendations. And I'd expect them to arrive at a better result. The irony is that, while you can't spend a day without reading several articles about the problem of the inscrutability of AI, IBM had this problem solved back in the days of Jeopardy. (And yes, it's an easier problem to solve with a system like the original Watson, and a much tougher problem for deep learning.) The issue in medicine isn't whether treatment A is better than treatment B; it's all about the underlying rationale. Did the second doctor take into account factors that the first didn't notice? Does the first doctor have a better understanding of the patient's history? Does something in the patient suggest that the problem isn't what it seems, and that a completely different diagnosis might be correct? 
That's a discussion that human doctors can have with each other, and that they might be able to have with a machine. I'm not writing another article saying, "we need AI that can explain itself." We already know that. We need something different: we need a new model for how AI systems and humans interact. Whether we're talking about doctors, lawyers, engineers, Go players, or taxi drivers, we shouldn't expect AI systems to give us unchallengeable answers ex silico. We shouldn't be told that we need to "trust AI." What's important is the conversation. AI that "explains[...]



Four short links: 23 January 2018

2018-01-23T11:00:00Z

Facebook Detectron, Near Enemies, Computation for Social Scientists, and JavaScript Treasures

  1. Facebook Detectron -- open source research platform for object detection research, implementing popular algorithms like Mask R-CNN and RetinaNet. ... The goal of Detectron is to provide a high-quality, high-performance codebase for object detection research. It is designed to be flexible in order to support rapid implementation and evaluation of novel research.
  2. Near Enemies (Pam Fox) -- For example: the near enemy of loving-kindness is conditional love—selfish, sentimental attachment—when you wish well for others only when they make you happy. It’s like when you say “I love ice cream.” You are not wishing well for the ice cream; you are wishing for the ice cream to bring your mouth pleasure. ... Ideally, we could measure very specific metrics that truly capture usefulness. Or perhaps, when we come up with metrics, we can think very hard about what non-desirable outcomes they might accidentally measure (what ways that metric might mislead us), and then make sure we also measure those counter-metrics alongside them. Basically, each metric has near enemies, and those get recorded alongside it. This! Don't just measure what you want to have happen—also measure what mustn't be lost as you aim for the former.
  3. Introducing Computational Methods to Social Scientists (Benjamin Mako Hill) -- we hope that the chapter provides an accessible introduction to computational social science and encourages more social scientists to incorporate computational methods in their work, either by gaining computational skills themselves or by partnering with more technical colleagues. The text and code are available CC BY-NC-SA and GPL v3 respectively.
  4. JavaScript Things I Never Knew Existed -- labels, void, comma, with conditional, internationalization API, pipeline operator, ...

Continue reading Four short links: 23 January 2018.




Four short links: 22 January 2018

2018-01-22T11:15:00Z

Corporate Surveillance, Crowds and Decisions, Crawling Robot Infant, and Personal Data Representatives

  1. Corporate Surveillance in Everyday Life -- a quite detailed report on how thousands of companies monitor, analyze, and influence the lives of billions. Who are the main players in today’s digital tracking? What can they infer from our purchases, phone calls, web searches, and Facebook likes? How do online platforms, tech companies, and data brokers collect, trade, and make use of personal data? (via BoingBoing)
  2. Smaller Crowds Outperform Larger Crowds and Individuals in Realistic Task Conditions -- We derive this nonmonotonic relationship between group size and accuracy from the Condorcet jury theorem and use simulations and further analyses to show that it holds under a variety of assumptions. We further show that situations favoring moderately sized groups occur in a variety of real-life situations, including political, medical, and financial decisions and general knowledge tests. These results have implications for the design of decision-making bodies at all levels of policy. Take with the usual kilogram-sized pinch of social science salt. (via Marginal Revolution)
  3. Robotic Crawling Infant -- the least-cute infant robot I've seen this year. Research is on what the babies collect from the carpet/ground as they crawl. (via IEEE Spectrum)
  4. Personal Data Representatives: An Idea (Tom Steinberg) -- it is time to allow people to nominate trusted representatives who can make decisions about our personal data for us, so that we can get on with our lives.

Continue reading Four short links: 22 January 2018.




New releases from O'Reilly for January 2018

2018-01-22T11:00:00Z


Find out what's new in iOS, data visualization, service design, and more.

Get a fresh start on building a new skill or augment what you currently know with one of these five newly released titles from O'Reilly.

Programming iOS 11


If you’re grounded in the basics of Swift, Xcode, and the Cocoa framework, Programming iOS 11 by Matt Neuburg provides a structured explanation of all essential real-world iOS app components. Through deep exploration and copious code examples, you’ll learn how to create views, manipulate view controllers, and add features from iOS frameworks.

Continue reading New releases from O'Reilly for January 2018.




Introducing RLlib: A composable and scalable reinforcement learning library

2018-01-19T12:00:00Z

RISE Lab’s Ray platform adds libraries for reinforcement learning and hyperparameter tuning.In a previous post, I outlined emerging applications of reinforcement learning (RL) in industry. I began by listing a few challenges facing anyone wanting to apply RL, including the need for large amounts of data, and the difficulty of reproducing research results and deriving the error estimates needed for mission-critical applications. Nevertheless, the success of RL in certain domains has been the subject of much media coverage. This has sparked interest, and companies are beginning to explore some of the use cases and applications I described in my earlier post. Many tasks and professions, including software development, are poised to incorporate some forms of AI-powered automation. In this post, I’ll describe how RISE Lab’s Ray platform continues to mature and evolve just as companies are examining use cases for RL. Assuming one has identified suitable use cases, how does one get started with RL? Most companies that are thinking of using RL for pilot projects will want to take advantage of existing libraries. Figure 1. RL training nests many types of computation. Image courtesy of Richard Liaw and Eric Liang, used with permission. There are several open source projects that one can use to get started. From a technical perspective, there are a few things to keep in mind when considering a library for RL: Support for existing machine learning libraries. Because RL typically uses gradient-based or evolutionary algorithms to learn and fit policy functions, you will want it to support your favorite library (TensorFlow, Keras, PyTorch, etc.). Scalability. RL is computationally intensive, and having the option to run in a distributed fashion becomes important as you begin using it in key applications. Composability. RL algorithms typically involve simulations and many other components. You will want a library that lets you reuse components of RL algorithms (such as policy graphs, rollouts), that is compatible with multiple deep learning frameworks, and that provides composable distributed execution primitives (nested parallelism). Figure 2. A few open source libraries for reinforcement learning. Source: Richard Liaw and Eric Liang, used with permission. Introducing Ray RLlib Ray is a distributed execution platform (from UC Berkeley’s RISE Lab) aimed at emerging AI applications, including those that rely on RL. RISE Lab recently released RLlib, a scalable and composable RL library built on top of Ray: Figure 3. Ray, open source platform for emerging AI applications. Image by Ben Lorica. RLlib is designed to support multiple deep learning frameworks (currently TensorFlow and PyTorch) and is accessible through a simple Python API. It currently ships with the following popular RL algorithms (more to follow): Proximal Policy Optimization (PPO) which is a proximal variant of TRPO. The Asynchronous Advantage Actor-Critic (A3C). Deep Q Networks (DQN). Evolution Strategies, as described in this paper. It’s important to note that there is no dominant pattern for computing and composing RL algorithms and components. As such, we need a library that can take advantage of parallelism at multiple levels and physical devices. RLlib is an open source library for the scalable implementation of algorithms that connect the evolving set of components used in RL applications. 
In particular, RLlib enables rapid development because it makes it easy to build scalable RL algorithms through the reuse and assembly of existing implementations (“parallelism encapsulation”). RLlib also lets developers use neural networks created with several popular deep learning frameworks, and it integrates with popular third-party simulators. Figure 4. RLlib offers composability. Image courtesy of Richard Liaw and Eric Liang, used[...]
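As a rough idea of what using the library looks like, here is a minimal training-loop sketch. RLlib's API has shifted across Ray releases, so the module path, the PPOAgent class, and the config keys shown here are assumptions rather than the definitive interface; only ray.init() and the general train-loop pattern are taken as given.

    # Hedged sketch of training PPO with RLlib; class and config names are assumed.
    import ray
    from ray.rllib.agents import ppo  # assumed module layout; check the RLlib docs

    ray.init()  # start Ray's execution engine on the local machine

    config = ppo.DEFAULT_CONFIG.copy()  # assumed default-config object
    config["num_workers"] = 2           # parallel rollout workers

    agent = ppo.PPOAgent(config=config, env="CartPole-v0")
    for i in range(10):
        result = agent.train()  # one training iteration, returns a metrics dict
        print(i, result.get("episode_reward_mean"))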



Four short links: 19 January 2018

2018-01-19T11:50:00Z

Pricing, Windows Emulation, Toxic Tech Culture, and AI Futures

  1. Pricing Summary -- quick and informative read. Three-part tariff (3PT)—Again, the software has a base platform fee, but the fee is $25,000 because it includes the first 150K events free. Each marginal event costs $0.15. In academic research and theory, the three-part tariff is proven to be best. It provides many different ways for the sales team to negotiate on price and captures the most value.
  2. Wine 3.0 Released -- the Windows emulator now runs Photoshop CC 2018! Astonishing work.
  3. Getting Free of Toxic Tech Culture (Val Aurora and Susan Wu) -- We didn’t realize how strongly we’d unconsciously adopted this belief that people in tech were better than those who weren’t until we started to imagine ourselves leaving tech and felt a wave of self-judgment and fear. Early on, Valerie realized that she unconsciously thought of literally every single job other than software engineer as “for people who weren’t good enough to be a software engineer” – and that she thought this because other software engineers had been telling her that for her entire career. This.
  4. The Future Computed: Artificial Intelligence and its Role in Society -- Microsoft's book on the AI-enabled future. Three chapters: The Future of Artificial Intelligence; Principles, Policies, and Laws for the Responsible Use of AI; and AI and the Future of Jobs and Work.

Continue reading Four short links: 19 January 2018.




Build the right thing; build the thing right

2018-01-18T14:10:00Z


How design thinking supports delivering product.

Continue reading Build the right thing; build the thing right.




Convolutional neural networks for language tasks

2018-01-18T14:10:00Z

Though they are typically applied to vision problems, convolutional neural networks can be very effective for some language tasks.

When approaching problems with sequential data, such as natural language tasks, recurrent neural networks (RNNs) typically top the choices. While the temporal nature of RNNs is a natural fit for these problems with text data, convolutional neural networks (CNNs), which are tremendously successful when applied to vision tasks, have also demonstrated efficacy in this space. In our LSTM tutorial, we took an in-depth look at how long short-term memory (LSTM) networks work and used TensorFlow to build a multi-layered LSTM network to model stock market sentiment from social media content. In this post, we will briefly discuss how CNNs are applied to text data while providing some sample TensorFlow code to build a CNN that can perform binary classification tasks similar to our stock market sentiment model.

Figure 1. Sample CNN model architecture for text classification. Image by Garrett Hoffman, based on a figure from “Convolutional Neural Networks for Sentence Classification.”

We see a sample CNN architecture for text classification in Figure 1. First, we start with our input sentence (of length seq_len), represented as a matrix in which the rows are our word vectors and the columns are the dimensions of the distributed word embedding. In computer vision problems, we typically see three input channels for RGB; however, for text we have only a single input channel. When we implement our model in TensorFlow, we first define placeholders for our inputs and then build the embedding matrix and embedding lookup.

    # Define Inputs
    inputs_ = tf.placeholder(tf.int32, [None, seq_len], name='inputs')
    labels_ = tf.placeholder(tf.float32, [None, 1], name='labels')
    training_ = tf.placeholder(tf.bool, name='training')

    # Define Embeddings
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)

Notice how the CNN processes the input as a complete sentence, rather than word by word as we did with the LSTM. For our CNN, we pass a tensor with all word indices in our sentence to our embedding lookup and get back the matrix for our sentence that will be used as the input to our network. Now that we have our embedded representation of our input sentence, we build our convolutional layers. In our CNN, we will use one-dimensional convolutions, as opposed to the two-dimensional convolutions typically used on vision tasks. Instead of defining a height and a width for our filters, we will only define a height, and the width will always be the embedding dimension. This makes sense intuitively, when compared to how images are represented in CNNs. When we deal with images, each pixel is a unit for analysis, and these pixels exist in both dimensions of our input image. For our sentence, each word is a unit for analysis and is represented by the dimension of our embeddings (the width of our input matrix), so words exist only in the single dimension of our rows. We can include as many one-dimensional kernels as we like with different sizes. Figure 1 shows a kernel size of two (red box over input) and a kernel size of three (yellow box over input). We also define a uniform number of filters (in the same fashion as we would for a two-dimensional convolutional layer) for each of our layers, which will be the output dimension of our convolution. 
We apply a relu activation and add a max-over-time pooling to our output that takes the maximum output for each filter of each convolution—resulting in the extraction of a single model feature from each filter. # Define Convolutional Layers with Max Pooling convs = [] for filter_size in filter_sizes: conv = tf.layers.conv1d(inputs=[...]
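The excerpt cuts off mid-snippet; as a rough sketch of the layer just described (not the post's exact code), the one-dimensional convolutions with relu activation and max-over-time pooling could be written as follows in the TensorFlow 1.x API, with the hyperparameter values assumed for illustration.

    import tensorflow as tf  # TensorFlow 1.x, as in the rest of the post

    filter_sizes = [2, 3]  # kernel heights, as in Figure 1 (assumed values)
    num_filters = 64       # filters per convolutional layer (assumed value)

    convs = []
    for filter_size in filter_sizes:
        # `embed` is the [batch, seq_len, embed_size] tensor from the embedding lookup above
        conv = tf.layers.conv1d(inputs=embed,
                                filters=num_filters,
                                kernel_size=filter_size,
                                activation=tf.nn.relu)
        # max-over-time pooling: keep the strongest response of each filter
        convs.append(tf.reduce_max(conv, axis=1))

    # one feature vector per sentence, concatenated across kernel sizes
    features = tf.concat(convs, axis=-1)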



How machine learning can be used to write more secure computer programs

2018-01-18T13:00:00Z


The O’Reilly Data Show Podcast: Fabian Yamaguchi on the potential of using large-scale analytics on graph representations of code.

In this episode of the Data Show, I spoke with Fabian Yamaguchi, chief scientist at ShiftLeft. His 2015 Ph.D. dissertation sketched out how the combination of static analysis, graph mining, and machine learning can be used to develop tools to augment security analysts. In a recent post, I argued for machine learning tools to augment teams responsible for deploying and managing models in production (machine learning engineers). These are part of a general trend of using machine learning to develop and manage the software systems of tomorrow. Yamaguchi’s work is step one in this direction: using machine learning to reduce the number of security vulnerabilities in complex software products.

Continue reading How machine learning can be used to write more secure computer programs.




Four short links: 18 January 2018

2018-01-18T11:55:00Z

Secure Software, Delta Robots, Authenticating Reverse Proxy, and Public Domain is Back

  1. Some Thoughts on Security After 10 Years of qmail 1.0 -- Ten years after the launch of qmail 1.0, and at a time when more than a million of the Internet’s SMTP servers ran either qmail or netqmail, only four known bugs had been found in the qmail 1.0 releases, and no security issues. This paper lays out the principles which made this possible. That's from A Paper A Day's redux.
  2. Harvard's milliDelta Robot Is Tiny and Scary Fast (IEEE) -- researcher Hayley McClintock has designed one of the tiniest delta robots ever. Called milliDelta, it may be small, but it’s one of the fastest moving and most precise robots we’ve ever seen.
  3. oauth2_proxy -- A reverse proxy that provides authentication with Google, GitHub or other provider.
  4. The Public Domain Starts Growing Again Next Year, and It's About Time (EFF) -- Our language is made up of references, and our art should reflect that. Creativity is enriched when the public domain is robust and easily accessed, and we look forward to finally seeing it grow once again in 2019.

Continue reading Four short links: 18 January 2018.




What is a reactive microservice?

2018-01-18T11:00:00Z


Explore the factors that make up a reactive microservice.

One of the key principles in employing a Microservices-based Architecture is Divide and Conquer: the decomposition of the system into discrete and isolated subsystems communicating over well-defined protocols.

Isolation is a prerequisite for resilience and elasticity and requires asynchronous communication boundaries between services to decouple them in:

Continue reading What is a reactive microservice?.




Machine learning tools for fairness, at scale

2018-01-17T15:25:00Z

It’s time to think about how the systems we build interact with our world and build systems that make our world a better place.The problem of fairness comes up in any discussion of data ethics. We've seen analyses of products like COMPASS, we've seen the maps that show where Amazon first offered same-day delivery, and we've seen how job listings shown to women are skewed toward lower-paying jobs. We also know that "fair" is a difficult concept for any number of reasons, not the least of which is the data used to train machine learning models. Kate Crawford’s recent NIPS keynote, The Trouble with Bias, is an excellent introduction to the problem. Fairness is almost always future oriented and aspirational: we want to be fair, we want to build algorithms that are fair. But the data we train with is, by definition, backward-looking, and reflects our history, which frequently isn't fair. Real estate data reflects the effects of racial discrimination in housing, which is still taking place, many years after it became illegal. Employment data reflects assumptions about what men and women are expected to do (and have historically done): women get jobs as nurses, men get jobs as engineers. Not only are our models based on that historical data, they've proven to be excellent at inferring characteristics like race, gender, and age, even when they're supposed to be race, age, and gender neutral. In his keynote for the O'Reilly Strata conference in Singapore (see slides and write-up here), Ben Lorica talked about the need to build systems that are fair, and isolated several important problems facing developers: disparate impact (for example, real estate redlining), disproportionate error rate (for example, recommending incorrect treatments for elderly patients), and unwarranted associations (for example, tagging pictures of people as “gorillas”). The problem is bigger than figuring out how to make machine learning fair; that is only a first step. Simply understanding how to build systems that are fair would help us craft a small number of systems. But in the future, machine learning will be in everything: not just applications for housing and employment, but possibly even in our databases themselves. We're not looking at a future with a few machine learning applications; we're looking at machine learning embedded into everything. This will be a shift in software development that's perhaps as radical as the industrial revolution was for manufacturing, though that's another topic. In practice, what it means is that a company might not have a small number of machine learning models to deal with; a company, even a small one, might well have thousands, and a Google or a Facebook might have millions. We also have to think about how our models evolve over time. Machine learning isn't static; you don't finish an application, send it over to operations, and deploy it. People change, as does the way they use applications. Many models are constantly re-training themselves based on the most recent data, and as we've seen, the data flowing into a model has become a new attack vector. It's a new security problem, and a big one, as danah boyd and others have argued. Ensuring that a model is fair when it is deployed isn't enough. We have to ensure that it remains fair—and that means constantly testing and re-testing it for fairness. Fairness won’t be possible at this scale if we expect developers to craft models one at a time. 
Likewise, we can't expect teams to test and re-test thousands of models once they have been deployed. That artisanal vision of AI development has started us down this road, but it certainly will not survive. Lorica suggests in his keynote that the answ[...]



4 tips to stop reading about AI and start doing AI

2018-01-17T12:15:00Z

How to use AI as a tool in your business.

AI is just a tool. And, like your hammer, it needs you to be productive. OK, yes, it is a cool, advanced, complicated tool. Let’s go with a 3D printer as the better analogy than a hammer: training required, still in its early stages, making great advances every day. So, how do you use this tool in your business? My top four tips:

  1. Understand that AI is not magic; it is just maths – and it needs you. From chess to weather, humans + computers can produce significantly better results than either alone. You should be asking your team, or questioning yourself, about what machine learning algorithm was chosen and why it fits the results you need and the data you have. And, you should be questioning the results; remember, Kasparov lost to Deep Blue due to a bug. Just because an algorithm is right more often than you doesn’t mean its rightness overlaps yours. Again, you and AI are better together.
  2. Get serious about your digital transformation. You likely didn’t take the internet or mobile as seriously as you should have. Internet and mobile are the extroverts of technology; AI is an introvert. As we have learned, introverts don’t speak up as much. Don’t let this keep you from engaging AI. I recommend starting with areas where you have already started statistical optimizations, as you know your data better there and already have measurements and benchmarks. Then, move to empowering innovations with some experience in your sails.
  3. Stop with the excuses: “We don’t have enough data” or “We don’t have the expertise.” One of the foremost AI experts, Google’s Jeff Dean, recently replied to the question on whether mere mortal companies have enough data for AI with an emphatic, “YES!” As for the expertise, focus on best practices in software development and design, a.k.a., read Marty Cagan’s Inspired. The only addition I have to his framework when integrating AI into development is to add a data whisperer to the team – someone who knows your data and how it is generated and used by your company now.
  4. Build a learning org. What if an AI in your ecommerce platform calculated a new product bundle? How fast could your team react? What if the AI created a new bundle in real time? How would you deal with that? What if an AI in your sales force automation calculated it was best to politely ignore requests from who you think is your best customer? How would you assess that learning from your new team member, AI? Intelligence is defined by the ability to acquire and apply knowledge, a.k.a., the ability to learn. You now must learn how to learn from AI. Start here.

In reference to AI, Kevin Kelly, founding executive editor of Wired, said, “There has never been a better time with more opportunities, more openings, lower barriers, higher benefit/risk ratios, better returns, greater upside than now.” The opportunity is in front of you, and accessible to you. Start working with the tool you’ve been given, and start finding your upside.

Continue reading 4 tips to stop reading about AI and start doing AI.[...]



Put machine learning to work in the real world

2018-01-17T12:00:00Z

The sessions and training courses at Strata Data San Jose 2018 will focus on practical use cases of machine learning for data scientists, engineers, managers, and executives.

We're in an empirical era of machine learning. Companies are now building platforms that facilitate experimentation and collaboration. At our upcoming Strata Data Conference in San Jose, we have many tutorials and sessions on “Data Science and Machine Learning” (including two days of sessions on enterprise applications of deep learning), and “Data Engineering & Architecture” (including sessions on streaming/real-time from several open source communities). If you want to understand how companies are using big data and machine learning to reinvigorate their businesses, there are many case studies on the schedule geared toward hands-on technologists, and sessions aimed at managers and executives.

Putting data and machine learning technologies to work

Over the past few years, companies have invested in data gathering and data management technologies, and many have begun unlocking value from their vast repositories. From the inception of Strata, we’ve featured case studies from companies across a wide variety of industries. This marks the second year we will offer a series of executive briefings over two days. These are 40-minute overviews on important topics in big data and data science, aimed at executives and managers tasked with understanding how to incorporate these technologies and techniques into their business operations. Topics will include privacy and security, AI and machine learning, data infrastructure, and culture (including hiring, managing, and nurturing data teams). We also have tutorials and many case studies tailored for managers and executives, including a day-long focus on media and ad tech:

  “Strata Business Summit”
  “Media and AdTech Day”

Data and machine learning are becoming more pervasive—and a greater competitive differentiator

As the use of data technologies and analytics becomes more prevalent, it’s critical to keep up with the latest technologies, architectures, best practices, and methodologies. The increasing importance of data and machine learning means that managers and analysts also need to familiarize themselves with critical tools and technologies. We’ve assembled a series of two-day training courses for engineers and developers, data scientists and analysts, and managers. We’re expanding our suite of courses to cover critical technologies and concepts for a broad set of workers:

  Data science for managers
  Apache Spark programming
  Data science and machine learning with Apache Spark
  Machine learning with TensorFlow
  Real-time systems with Spark Streaming and Kafka
  Machine learning with PyTorch
  Hands-on data science with Python

The importance of data pipelines and data integration

Media coverage of machine learning exploded last year. But anyone who works with machine learning will tell you that (at least for now), everything depends on having large (labeled) data sets. Data used for analytics and in machine learning products typically come from a variety of sources. There are usually a series of steps to combine, validate, and prepare data for use in machine learning. For many data scientists and machine learning engineers, maintaining robust data pipelines remains a critical part of their jobs. 
We’ve assembled a series of sessions to showcase some of the current best practices for building scalable data pipelines:

  The future of ETL isn’t what it used to be
  Radically modular data ingestion APIs in Apache Beam
  How to build leakproof[...]



Four short links: 17 January 2018

2018-01-17T11:45:00Z

Surprise Transmissions, Sam Altman, Video Conferencing, and Microservice Hosting

  1. System Bus Radio -- Transmits AM radio on computers without radio-transmitting hardware. The home run: Run this using a 2015 model MacBook Air. Then use a Sony STR-K670P radio receiver with the included antenna and tune it to 1580 kHz on AM. You should hear the "Mary Had a Little Lamb" tune playing repeatedly. (via Matt Webb)
  2. Sam Altman's Manifest Destiny (New Yorker) -- excellent profile full of thought-provoking soundbites. YC prides itself on rejecting jerks and bullies. “We’re good at screening out assholes,” Graham told me. “In fact, we’re better at screening out assholes than losers. All of them start off as losers—and some evolve.”
  3. Jitsi -- open source video conferencing.
  4. hook.io -- hook.io provides an easy way to create, host, and share microservices. [...] The most basic use case for hook.io is quick and free webhook hosting.

Continue reading Four short links: 17 January 2018.




What defines a known open source vulnerability?

2018-01-16T15:45:00Z


Understanding known vulnerabilities in open source packages.

Introduction

Open source software—the code of which is publicly available to scrutinize and typically free to use—is awesome. As consumers, it spares us the need to reinvent the wheel, letting us focus on our core functionality and dramatically boosting our productivity. As authors, it lets us share our work, gaining community love, building up a reputation, and at times having an actual impact on the way software works.

Because it’s so amazing, open source usage has skyrocketed. Practically every organization out there, from mom-and-pop shops to banks and governments, relies on open source to operate their technical stacks—and their businesses. Tools and best practices have evolved to make such consumption increasingly easier, pulling down vast amounts of functionality with a single code or terminal line.

Continue reading What defines a known open source vulnerability?.




Square off: Machine learning libraries

2018-01-16T12:00:00Z

Top five characteristics to consider when deciding which library to use.Choosing a machine learning (ML) library to solve predictive use cases is easier said than done. There are many to choose from, and each have their own niche and benefits that are good for specific use cases. Even for someone with decent experience in ML and data science, it can be an ordeal to vet all the varied solutions. Where do you start? At Salesforce Einstein, we have to constantly research the market to stay on top of it. Here are some observations on the top five characteristics of ML libraries that developers should consider when deciding what library to use: 1. Programming paradigm Most ML libraries fall into two tribes on a high-level design pattern: the symbolic tribe and the imperative tribe for mathematical computation. In symbolic programs, you define a complex mathematical computation functionally without actually executing it. It generally takes the form of a computational graph. You lay out all the pieces and connect them in an abstract fashion before materializing it with real values as inputs. The biggest advantages of this pattern are composability and abstraction, thus allowing the developers to focus on higher level problems. Efficiency is another big advantage, as it is relatively easy to parallelize such functions. Apache Spark’s ML library, Spark MLlib, and any library built on Spark, such as Microsoft’s MMLSpark and Intel's BigDL, follow this paradigm. Directed acyclic graph (DAG) is their representation of the computational graphs. Other examples of symbolic program ML libraries are CNTK, with static computational graphs; Caffe2, with Net (which is a graph of operators); H2O.ai; and Keras. In imperative programs, everything is execution-first. You write a line of code, and when the compiler reads that line and executes it, it actually runs the numerical computation and moves to the next line of code. This style makes prototyping much easier, as it tends to be more flexible and much easier to debug and troubleshoot. Scikit-learn is a popular Python library that falls into this category. Other libraries such as auto sklearn and TPOT are layers of abstraction on top of scikit-learn, which also follow this paradigm. PyTorch is yet another popular choice that supports dynamic computational graphs, thereby making the process imperative. Clearly, there are tradeoffs with either approach, and the right one depends on the use case. Imperative programming is great for research, as it naturally supports faster prototyping—allowing for repetitive iterations, failed attempts, and a quick feedback loop—whereas symbolic programming is better catered toward production applications. There are some libraries that combine both approaches and create a hybrid style. The best example is MXNet, which allows imperative programs within symbolic programs as callbacks, or uses symbolic programs as a part of imperative programs. Another newer development is Eager Execution from Google’s TensorFlow. Though, originally a Python library with a symbolic paradigm (a static computational graph of tensors), Eager Execution does not need a graph, and execution can happen immediately. Symbolic: Spark MLlib, MMLSpark, BigDL, CNTK, H2O.ai, Keras, Caffe2 Imperative: scikit-learn, auto sklearn, TPOT, PyTorch Hybrid: MXNet, TensorFlow 2. Machine learning algorithms Supervised learning, unsupervised learning, recommendation systems, and deep learning are the common classes of problems that we deal with in machine learning. Agai[...]
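Returning to the programming-paradigm distinction above, here is a small sketch (not from the article) contrasting the two styles: scikit-learn executes each call immediately, while TensorFlow 1.x first describes a computation graph and only runs it inside a session.

    # Imperative style: every line computes as soon as it executes (scikit-learn).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([0, 0, 1, 1])
    model = LogisticRegression().fit(X, y)   # fitting happens right here
    print(model.predict([[1.5]]))

    # Symbolic style: build the graph first, execute it later (TensorFlow 1.x).
    import tensorflow as tf

    a = tf.placeholder(tf.float32)
    b = tf.placeholder(tf.float32)
    total = a + b                            # nothing is computed yet
    with tf.Session() as sess:
        print(sess.run(total, feed_dict={a: 2.0, b: 3.0}))  # computation happens here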



Four short links: 16 January 2018

2018-01-16T11:50:00Z

Forgetful Google, Machine Comprehension, Music from Space-Filling Curves, and Reading Papers

  1. Google Is Losing Its Memory (Tim Bray) -- I've also found Google is losing its effectiveness in finding arcane subjects. And woe betide the knowledge seeker whose topic is also a product's name. I suspect the mission statement has transformed into "indexing the world's information on hot deals near you."
  2. SQuAD: 100,000+ Questions for Machine Comprehension of Text -- described in this paper, it's the test on which Alibaba has a neural network that outperforms humans.
  3. The Sound of Space-Filling Curves -- you map dimensions to scales, then a point on the space-filling curve has notes associated with it. As you move along the curve to new points, the notes change, and this is the music of the curve. Some is beautiful, some is weird, all is interesting.
  4. Strategies for Reading Papers Depend on Academic Career Stage -- Inexperienced readers found the methods and results sections of research papers the most difficult to read, and undervalued the importance of the results section and critical interpretation of data. (via Heather Piwowar)

Continue reading Four short links: 16 January 2018.




Tips for jumpstarting your journey to developing AI apps

2018-01-16T11:30:00Z

Use cases and tips to help businesses take full advantage of AI technology.

"AI won’t replace managers, but managers who use AI will replace those who don’t." So write Erik Brynjolfsson and Andy McAfee in the Harvard Business Review. That’s an important point to which all managers must pay attention: AI isn’t a technology for assisting humans; it’s for augmenting them. For managers, it’s for helping them make better decisions, and for enabling their staff to be more effective. Programmers can use AI to build products that are better at taking their customers’ needs and requirements into account—and they can build new kinds of products that help their customers be more effective. We’re entering an era in which whoever helps others to be the most effective will win. Those solutions will certainly involve AI. But how does an enterprise take advantage of AI? It’s not by pouring magic AI sauce over everything in an effort to get ahead of some mythical curve. That’s a recipe for expensive mistakes. To get ahead with AI, enterprises must make plans that are in line with their business objectives, and that help you to be more effective at what you already do. AI isn’t a strategy; it’s part of an overall strategy. And it isn’t about replacing humans; it’s about augmenting them, making them better at what they do. Likewise, enterprises need to use AI to do what they already do, but better. How can your enterprise take advantage of AI? Here are a few use cases and tips:

Customer service

Most people hate calling customer service: there are long waits, and tired front-line support staff who frequently can’t solve their problems. A chatbot can be a great tool for answering the phone, handling the simple questions (often, most of them), and forwarding the difficult questions to the human support team. Customers don’t wait as long; staff isn’t as burdened, and doesn’t have to spend time on the easy questions; and the AI never gets tired or bored. Autodesk made their customer support staff much more efficient by using IBM’s Watson Conversation to build a chatbot to handle simple requests, like getting software activation codes.

Tip

Remember that AI is an assistive technology. Handling the “easy cases” means cases that can be handled with a simple script. (Watson Conversation has some great tools for building dialogs.) Don’t get bogged down trying to build an extremely complex dialog system; that’s a research problem, and a very difficult one. Escalate to human support sooner, rather than later. Do remember to test the system early, and to test it on people who aren’t familiar with your product or business. Allow plenty of time to integrate the results of your testing into the product.

Inspection

AI systems are excellent at classifying images. There are many enterprise applications of classification, in industries ranging from health care to agriculture. Systems can be built that fly a drone over a field to take pictures of crops, then use AI to determine whether the crops are healthy. Similar systems are used to inspect manufactured items for flaws, or to recognize faces to allow entry. In the last California drought, a company called OmniEarth used Watson’s Visual Recognition Service to identify properties that needed to reduce water consumption by identifying swimming pools, water-intensive landscaping, and other features.

Tip

Training an image classifier requires a lot of data. Make sure you have enough labeled images to train [...]



Four short links: 15 January 2018

2018-01-15T11:55:00Z

Computational vs. Inferential Science, Cost of AI Policy Solutions, Google Brain in 2017, and Lego Worms

  1. On Computational Thinking, Inferential Thinking, and Data Science -- That classical perspectives from these fields are not adequate to address emerging problems in data science is apparent from their sharply divergent nature at an elementary level—in computer science, the growth of the number of data points is a source of "complexity" that must be tamed via algorithms or hardware, whereas in statistics, the growth of the number of data points is a source of "simplicity" in that inferences are generally stronger and asymptotic results can be invoked. On a formal level, the gap is made evident by the lack of a role for computational concepts such as "runtime" in core statistical theory and the lack of a role for statistical concepts such as "risk" in core computational theory. Slides from a similar version of this talk are available.
  2. Beyond the Rhetoric of Algorithmic Solutionism (danah boyd) -- If you ever hear that implementing algorithmic decision-making tools to enable social services or other high-stakes government decision-making will increase efficiency or reduce the cost to taxpayers, know that you’re being lied to. When implemented ethically, these systems cost more. And they should. Points to Automating Inequality, a new book due out next week: Eubanks offers a clear portrait of just how algorithmic systems actually play out on the ground, despite all of the hope that goes into their implementation. (via BoingBoing)
  3. Google Brain Team Looks Back on 2017 -- a useful recap of some of the interesting work that happened in AI last year. The first half is about technical advances, including differential privacy and machine learning, and better interpretability of machine learning systems. Part 2 summarizes their engagements with industries as diverse as detecting cancer, robots that learn from watching, and piano duets.
  4. Worm Brain in a Lego Body -- (simulated a worm brain, because the worm brain is fully mapped). This Lego robot has all the equivalent limited body parts that C. elegans has—a sonar sensor that acts as a nose, and motors that replace the worm's motor neurons on each side of its body. Amazingly, without any instruction being programmed into the robot, the C. elegans virtual brain controlled and moved the Lego robot.

Continue reading Four short links: 15 January 2018.[...]



Engineering well-rounded technology leaders

2018-01-12T21:30:00Z


From developers to CTOs, everyone has a role to play in shaping their own transformation.

One of the greatest drivers of professional development is learning through doing. 2018 marks the fourth year of O’Reilly’s Software Architecture Conference, a software engineering event focused on providing hands-on training experiences for technologists at all levels of an organization—from experienced developers up through CTOs. As the latest installment of this conference approaches, here’s a brief roadmap of the trends, sessions, and tutorials to explore as you write your own story of transformation.

Building evolutionary software architecture

For a variety of reasons, parts of software systems resist change, becoming more brittle and intractable over time. However, the world we inhabit has exactly the opposite characteristic: the software development ecosystem exists in a state of dynamic equilibrium. New tools, techniques, approaches, and frameworks constantly impact this equilibrium in unanticipated ways.

Continue reading Engineering well-rounded technology leaders.




Four short links: 12 January 2018

2018-01-12T11:05:00Z

Sharing Concepts, Augment Engineers, AI for Good, and AI to Spot Lies

  1. One Model to Learn Them All (Paper a Day) -- summary of a nifty paper that has one neural net per input modality (speech, text, pictures, etc.) but a shared encoder to create a unified representation. This lets you share concepts between different modalities, the way that a banana is a banana whether we read about it, talk about it, or see one.
  2. We Need to Build Machine Learning Tools to Augment Machine Learning Engineers (Ben Lorica) -- Your staff of experts will still need to look through issues that arise, but they will need at least some automated tools to help them handle the volume of models in production.
  3. Open AI Challenge: Aerial Imagery of South Pacific -- The winning team(s) will receive public praise and a Certificate of Achievement. More importantly, they will enable the World Bank and partners to significantly accelerate the analysis of aerial imagery before and after major humanitarian disasters. This will help accelerate and improve humanitarian and development efforts across the South Pacific. Winning teams will also have the opportunity to engage in other related projects around the world. (via DIY Drones)
  4. Deception Analysis and Reasoning Engine -- a system for covert automated deception detection using information available in a video. Code on GitHub.

Continue reading Four short links: 12 January 2018.




6 lessons for team leaders

2018-01-11T17:35:00Z


The most important things you can do as a leader of your team.

Continue reading 6 lessons for team leaders.




We need to build machine learning tools to augment machine learning engineers

2018-01-11T16:05:00Z

As the use of analytics proliferates, companies will need to be able to identify models that are breaking bad.

In this post, I share slides and notes from a talk I gave in December 2017 at the Strata Data Conference in Singapore offering suggestions to companies that are actively deploying products infused with machine learning capabilities. Over the past few years, the data community has focused on infrastructure and platforms for data collection, including robust pipelines and highly scalable storage systems for analytics. According to a recent LinkedIn report, the top two emerging jobs are “machine learning engineer” and “data scientist.” Companies are starting to staff to put their data infrastructures to work, and machine learning is going to become more prevalent in the years to come.

Figure 1. Slide by Ben Lorica.

As more companies start using machine learning in products, tools, and business processes, let’s take a quick tour of model building, model deployment, and model management. It turns out that once a model is built, deploying and managing it in production requires engineering skills. So much so that earlier this year, we noted that companies have created a new job role—machine learning (or deep learning) engineer—for people tasked with productionizing machine learning models.

Figure 2. Slide by Ben Lorica.

Modern machine learning libraries and tools like notebooks have made model building simpler. New data scientists need to make sure they understand the business problem and optimize their models for it. In a diverse region like Southeast Asia, models need to be localized, as conditions and contexts differ across countries in the ASEAN.

Figure 3. Slide by Ben Lorica.

Looking ahead to 2018, rising awareness of the impact of bias, and the importance of fairness and transparency, means that data scientists need to go beyond simply optimizing a business metric. We will need to treat these issues seriously, in much the same way we devote resources to fixing security and privacy issues.

Figure 4. Slide by Ben Lorica.

While there’s no comprehensive checklist one can go through to systematically address issues pertaining to fairness, transparency, and accountability, the good news is that the machine learning research community has started to offer suggestions and some initial steps model builders can take. Let me go through a couple of simple examples. Imagine you have an important feature (say, distance from a specific location) of a machine learning model. But there are groups in your population (say, high and low income) for which this feature has very different distributions. What could happen is that your model would have disparate impact across these two groups. A relevant example is a pricing model introduced online by Staples: the model suggested different prices based on location of users.

Figure 5. Slide by Ben Lorica.

In 2014, a group of researchers offered a data renormalization method to remove disparate impact:

Figure 6. Slide by Ben Lorica, with HT to arXiv.org.

Another example has to do with error: once we are satisfied with a certain error rate, aren’t we done and ready to deploy our model to production? Consider a scenario where you have a machine learning model used in health care: in the course of model building, your training data for millennials (in red) is quite large compar[...]
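To make the disparate-impact example above concrete, here is a rough sketch of the common "four-fifths rule" check across two groups; the data and column names are hypothetical, not from the talk.

    # Toy disparate-impact check using the four-fifths (80%) rule; the data is made up.
    import pandas as pd

    df = pd.DataFrame({
        "group":    ["low_income"] * 100 + ["high_income"] * 100,
        "approved": [1] * 40 + [0] * 60 + [1] * 70 + [0] * 30,
    })

    rates = df.groupby("group")["approved"].mean()   # positive-outcome rate per group
    ratio = rates.min() / rates.max()
    print(rates)
    print("disparate impact ratio:", round(ratio, 2))  # below 0.8 is a common warning threshold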



Luciano Ramalho on Python’s features and libraries

2018-01-11T12:25:00Z


The O’Reilly Programming Podcast: A look at some of Python’s valuable, but often overlooked, features.

In this episode of the O’Reilly Programming Podcast, I talk about Python with Luciano Ramalho, technical principal at ThoughtWorks, author of the O’Reilly book Fluent Python, and presenter of the Oriole Fluent Python: The Power of Special Methods.

Continue reading Luciano Ramalho on Python’s features and libraries.




Introduction to LSTMs with TensorFlow

2018-01-11T12:00:00Z

How to build a multilayered LSTM network to infer stock market sentiment from social conversation using TensorFlow.

Long short-term memory (LSTM) networks have been around for 20 years (Hochreiter and Schmidhuber, 1997), but have seen a tremendous growth in popularity and success over the last few years. LSTM networks are a specialized type of recurrent neural network (RNN)—a neural network architecture used for modeling sequential data and often applied to natural language processing (NLP) tasks. The advantage of LSTMs over traditional RNNs is that they retain information for long periods of time, allowing for important information learned early in the sequence to have a larger impact on model decisions made at the end of the sequence. In this tutorial, we will introduce the LSTM network architecture and build our own LSTM network to classify stock market sentiment from messages on StockTwits. We use TensorFlow because it offers compact, high-level commands and is very popular these days.

LSTM cells and network architecture

Before we dive into building our network, let’s go through a brief introduction of how LSTM cells work and an LSTM network architecture (Figure 1).

Figure 1. Unrolled RNN cell structure (top) vs. LSTM cell structure (bottom). Image courtesy of Christopher Olah, used with permission.

\(A\) represents a full RNN cell that takes the current input of the sequence (in our case the current word), \(x_i\), and outputs the current hidden state, \(h_i\), passing this to the next RNN cell for our input sequence. The inside of an LSTM cell is a lot more complicated than a traditional RNN cell. While the traditional RNN cell has a single “internal layer” acting on the current state \((h_{t-1})\) and input \((x_t)\), the LSTM cell has three.

First we have the “forget gate” that controls what information is maintained from the previous state. This takes in the previous cell output \(h_{t-1}\) and the current input \(x_t\) and applies a sigmoid activation layer \((\sigma)\) to get values between 0 and 1 for each hidden unit. This is followed by element-wise multiplication with the current state (the first operation in the “upper conveyor belt” in Figure 1).

Next is an “update gate” that updates the state based on the current input. This passes the same input (\(h_{t-1}\) and \(x_t\)) into a sigmoid activation layer \((\sigma)\) and into a tanh activation layer \((tanh)\) and performs element-wise multiplication between these two results. Next, element-wise addition is performed with the result and the current state after applying the “forget gate” to update the state with new information. (The second operation in the “upper conveyor belt” in Figure 1.)

Finally, we have an “output gate” that controls what information gets passed to the next state. We run the current state through a tanh activation layer \((tanh)\) and perform element-wise multiplication with the cell input (\(h_{t-1}\) and \(x_t\)) run through a sigmoid layer \((\sigma)\) that acts as a filter on what we decide to output. This output \(h_t\) is then passed to the LSTM cell for the next input of our sequence and also passed up to the next layer of our network.

Now that we have a better understanding of an LSTM cell, let’s look at an example LSTM network architecture in Figure 2.

Figure 2. Unrolle[...]
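For reference, the forget, update, and output gates described above correspond to the standard LSTM update equations, sketched here in the usual notation rather than copied from the post:

    \[
    \begin{aligned}
    f_t &= \sigma\left(W_f [h_{t-1}, x_t] + b_f\right) && \text{forget gate} \\
    i_t &= \sigma\left(W_i [h_{t-1}, x_t] + b_i\right) && \text{update (input) gate} \\
    \tilde{C}_t &= \tanh\left(W_C [h_{t-1}, x_t] + b_C\right) && \text{candidate state} \\
    C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{new cell state} \\
    o_t &= \sigma\left(W_o [h_{t-1}, x_t] + b_o\right) && \text{output gate} \\
    h_t &= o_t \odot \tanh(C_t) && \text{output, passed to the next step and the next layer}
    \end{aligned}
    \]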



Four short links: 11 January 2018

2018-01-11T11:10:00Z

Data from PDFs, Unencumbered Media, Unintended Consequences, and AMP Protested

  1. Using pdftabextract to Liberate Tabular Data from PDFs -- amazing how much effort is spent chiseling data out of PDFs.
  2. The Fight For Patent-Unencumbered Media Codecs Is Nearly Won -- Apple joining the Alliance for Open Media is a really big deal. Now all the most powerful tech companies—Google, Microsoft, Apple, Mozilla, Facebook, Amazon, Intel, AMD, ARM, Nvidia—plus content providers like Netflix and Hulu are on board, supporting the AOM AV1 codec.
  3. Legends of the Ancient Web (Maciej Ceglowski) -- Radio was a force of persuasion different in kind from anything the world had seen before. Those who learned to use it first for political ends had a tremendous advantage. In less than four decades, radio had completed the journey from fledgeling technology, to nerdy hobby, to big business, to potent political weapon. [...] It is hard to accept that good people, working on technology that benefits so many, with nothing but good intentions, could end up building a powerful tool for the wicked. But we can't afford to re-learn this lesson every time.
  4. A Letter About Google AMP -- AMP keeps users within Google’s domain and diverts traffic away from other websites for the benefit of Google. At a scale of billions of users, this has the effect of further reinforcing Google’s dominance of the Web.

Continue reading Four short links: 11 January 2018.

(image)



Handling dependency injection using Java 9 modularity

2018-01-11T11:00:00Z

(image)

How to decouple your Java code using a mix of dependency injection, encapsulation, and services.

In this post we will look at how we can mix the Java 9 module system, dependency injection and services to accomplish decoupling between modules.

It’s hard to imagine a Java code base without the use of a dependency injection framework, and for good reason: dependency injection helps a great deal with achieving decoupling. Decoupling is about hiding implementations, and it is key to making code maintainable and easy to extend. In Java, this effectively comes down to programming against interfaces instead of concrete types.
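
The post itself works in Java, but the underlying idea of programming against an interface and injecting the concrete implementation from outside is language-agnostic. As a rough, hypothetical sketch (plain Python rather than Java, with made-up class names for illustration):

    from abc import ABC, abstractmethod

    class Analyzer(ABC):
        """The abstraction callers depend on; no concrete type leaks out."""
        @abstractmethod
        def analyze(self, text: str) -> int: ...

    class WordCountAnalyzer(Analyzer):
        """One hidden implementation; it can be swapped without touching callers."""
        def analyze(self, text: str) -> int:
            return len(text.split())

    class Report:
        """The dependency is injected through the constructor, so Report only
        knows about the Analyzer abstraction, never a concrete class."""
        def __init__(self, analyzer: Analyzer):
            self.analyzer = analyzer

        def summarize(self, text: str) -> str:
            return f"{self.analyzer.analyze(text)} words"

    print(Report(WordCountAnalyzer()).summarize("decoupling through interfaces"))

Swapping WordCountAnalyzer for any other Analyzer implementation requires no change to Report, which is the kind of decoupling the post sets out to achieve with Java 9 modules and services.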

Continue reading Handling dependency injection using Java 9 modularity.

(image)



What is a design pattern?

2018-01-11T11:00:00Z

(image)

Discover what design patterns are and how they can be used to communicate solutions to common problems.

Continue reading What is a design pattern?.

(image)



3 technologies business leaders should watch and explore in 2018

2018-01-11T11:00:00Z

Artificial intelligence, cloud technologies, and blockchain are big growth areas on O’Reilly’s learning platform. A dazzling and sometimes confusing jumble of technologies cries out for attention from executives and managers. The recent growth of things like automation, analytics, and big data alerts us that companies need to move fast into new technologies, but which ones should businesses pursue? And where to start? To help answer those questions, we analyzed search activity on our learning platform to uncover the important topics our users are exploring[1]. These are growing areas where organizations and their employees are investing their resources—and it’s where you might want to invest your efforts as well. Artificial intelligence (AI) is wildly popular In addition to powering consumer technologies that would have seemed magical a decade ago, such as speech interaction by Siri, Alexa, and others, the O’Reilly data suggests organizations believe AI can also improve their bottom lines. You can see this in the year-over-year leap for TensorFlow (146% growth in activity), which is a fairly new AI programming library, along with the more generic terms deep learning (60% growth) and machine learning (42% growth). Analytics are often run on incoming data through Kafka (32% growth) and Spark (down 5% in year-over-year activity, but the No. 7 overall search term). The dominant position held by our learning platform’s top search term, Python, is likely aligned with AI as well, as people dig into Python’s powerful and robust libraries for statistics and data manipulation. This data suggests that AI is not just a collection of statistical techniques, but has moved into production use with tools for quick and straightforward deployment. It also shows that open source technologies are taking the lead over proprietary tools in the AI domain. Figure 1. The top 25 search terms on O’Reilly’s learning platform in 2017. The move to the cloud continues and matures We came to this conclusion based on the particular tools people are searching for on our learning platform. AWS, Amazon's cloud services platform, was the No. 4 search term. Kubernetes (98% growth in activity) and Docker (No. 3 search term) facilitate running programs in the cloud in software packages called containers. Also popular is the term microservices (No. 24 search term), an architectural style well suited to cloud use. All of this suggests that organizations are realizing cloud systems need to be developed, deployed, and managed. Figure 2. The top 25 search terms on O’Reilly’s learning platform and their year-over-year rate of change in search frequency. Blockchain experiences sudden growth Bitcoin grabs the headlines, but blockchain is what’s making its mark among O’Reilly users. The leap in interest on our learning platform around blockchain (107% growth in activity) can be explained by its promise to record transactions securely. The blockchain accumulates transactions in a way that allows them to be verified. If either party to the transaction tries to repudiate it, anyone with access to the blockchain can prove that the party agreed to the transaction. Thus, blockchain goes far [...]



Four short links: 10 January 2018

2018-01-10T11:15:00Z

Attention Arms Race, Robot/AI Principles, GDPR vs. Ad Tech, and Meltdown Code

  1. The Arms Race for Your Attention (Cory Doctorow) -- today’s weaponized attention is tomorrow’s ghost ad. I've been banging on about this for a while: anything designed to catch our attention only works while it's novel because we're designed to "tune out" familiar things. But history is littered with armies of seemingly invincible attention warriors who were out-evolved by their prey, and could not overcome the countermeasures that were begat by repeated exposure to their once-undefeatable tactics.
  2. A Roundup of Robotics and AI Ethics: Part 1, Principles (RoboHub) -- we started with Asimov's three, and people have been working on them ever since. Bloat is obviously a problem: there are 23 Asilomar Principles. I like IEEE's: How can we ensure that A/IS do not infringe human rights? Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable? How can we ensure that A/IS are transparent? How can we extend the benefits and minimize the risks of AI/AS technology being misused?
  3. EU Data Protection Directive Requires A Top-To-Bottom Redo of the Ad Tech Industry (Cory Doctorow) -- Under the new directive, every time a European's personal data is captured or shared, they have to give meaningful consent, after being informed about the purpose of the use with enough clarity that they can predict what will happen to it. Every time your data is shared with someone, you should be given the name and contact details for an "information controller" at that entity. That's the baseline: when a company is collecting or sharing information about (or that could reveal!) your "racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, ... [and] data concerning health or data concerning a natural person’s sex life or sexual orientation," there's an even higher bar to hurdle.
  4. Meltdown Proof-of-Concept -- reading another process's memory, boom. See also image reconstruction using this technique. Wow.

Continue reading Four short links: 10 January 2018.

(image)



Four short links: 9 January 2018

2018-01-09T11:05:00Z

Power With AI, FocusWriter, PC Fonts, and Community Management

  1. What Did I Miss? (Tim O'Reilly) -- Zeynep Tufekci, a professor at the University of North Carolina and author of "Twitter and Tear Gas," perfectly summed up the situation in a tweet from September: "Let me say: too many worry about what AI—as if some independent entity—will do to us. Too few people worry what *power* will do *with* AI." That's a quote that would have had pride of place in the book had it not already been in production.
  2. FocusWriter -- a simple, distraction-free writing environment. It utilizes a hide-away interface that you access by moving your mouse to the edges of the screen, allowing the program to have a familiar look and feel to it while still getting out of the way so that you can immerse yourself in your work. Available for Windows, Mac, and Linux.
  3. The Ultimate Old-School PC Font Pack -- do the time warp again. (My terminal uses Print Char 21 in black and green, because I code better when my brain thinks it's 1992)
  4. The Many Faces of the Community Manager (Dave Neary) -- There are many traditional software and product development roles that a community manager can fill for an open source community. Here are five I'd like to highlight: Marketing; Partner/ecosystem development; Developer enablement; Product/release management; Product strategy. Once you've answered "what do you hope to achieve by open-sourcing your software?", you'll often find you need a person whose job it is to help you get that result; just releasing software is never enough.

Continue reading Four short links: 9 January 2018.

(image)



5 AI trends to watch in 2018

2018-01-09T11:00:00Z

From methods to tools to ethics, Ben Lorica looks at what's in store for artificial intelligence. What will 2018 bring in AI? Here's what's on our radar. Expect substantial progress in machine learning methods, understanding, and pedagogy As in recent years, new deep learning architectures and (distributed) training algorithms will lead to impressive results and applications in a range of domains, including computer vision, speech, and text. Expect to see companies make progress on efficient algorithms for training, inference, and data processing on edge devices. At the same time, collaboration between machine learning experts will produce interesting breakthroughs—examples include work that draws from Bayesian methods and deep learning and work on neuroevolution and gradient-based deep learning. However, as successful as deep learning has been, our level of understanding of why it works so well is still lacking. Both researchers and practitioners are already hard at work addressing this challenge. We anticipate that in 2018 we'll see even more people engage in improving theoretical understanding and pedagogy. New developments and lowered costs in hardware will enable better data collection and faster deep learning Deep learning is computationally intensive. As a result, much of the innovation in hardware pertains to deep learning training and inference (on both the edge and the server). Look for new processors, accompanying software frameworks and interconnects, and optimized systems assembled specifically to allow companies to speed up their deep learning experiments to emerge from established hardware companies, cloud providers, and startups in the West and in China. But the data behind deep learning has to be collected somehow. Many industrial AI systems rely on specialized sensors—LIDAR for instance. Costs will continue to decline as startups produce alternative sensors and new methods for gathering and using data, such as high-volume, low-resolution data from edge devices and sensor fusion. Developer tools for AI and deep learning will continue to evolve TensorFlow remains by far the most popular deep learning library, but other frameworks like Caffe, PyTorch, and BigDL will continue to garner users and use cases. We also anticipate new deep learning tools to simplify architecture and hyperparameter tuning, distributed training, and model deployment and management. Other areas in which we expect progress include: simulators, such as digital twins, which allow developers to speed up the development of AI systems, along with the reinforcement learning libraries that integrate with them (the RL library that's part of RISE Lab's Ray is a great example); developer tools for building AI applications that can process multimodal inputs; and tools that target developers who aren't data engineers or data scientists. We'll see many more use cases for automation, particularly in the enterprise As more companies enter the AI space, they'll continue to find tasks that can be (semi) automated using existing tools and methods. A natural starting point will be low-skilled tasks that consume the time of high[...]



Microservices at scale

2018-01-09T11:00:00Z

(image)

Learn about architectural safety measures, scaling data, caching schemes, service discovery, and more.

When you’re dealing with nice, small, book-sized examples, everything seems simple. But the real world is a more complex space. What happens when our microservice architectures grow from simpler, more humble beginnings to something more complex? What happens when we have to handle failure of multiple separate services or manage hundreds of services? What are some of the coping patterns when you have more microservices than people? Let’s find out.

Failure Is Everywhere

We understand that things can go wrong. Hard disks can fail. Our software can crash. And as anyone who has read the fallacies of distributed computing can tell you, we know that the network is unreliable. We can do our best to try to limit the causes of failure, but at a certain scale, failure becomes inevitable. Hard drives, for example, are more reliable now than ever before, but they’ll break eventually. The more hard drives you have, the higher the likelihood that at least one of them will fail on any given day; at scale, failure becomes a statistical certainty.
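
A quick back-of-the-envelope calculation shows why. Assuming, purely for illustration, independent failures and a 2% annual failure rate per drive:

    # Probability that at least one drive in a fleet fails within a year,
    # assuming (hypothetically) independent failures and a 2% annual rate per drive.
    p_single = 0.02

    for n_drives in (1, 10, 100, 1000, 10000):
        p_any = 1 - (1 - p_single) ** n_drives
        print(f"{n_drives:>6} drives -> P(at least one failure) = {p_any:.3f}")

    # 1 drive -> 0.020, 100 drives -> 0.867, 10,000 drives -> effectively 1.000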

Continue reading Microservices at scale.

(image)



Opportunities and challenges AI will face in the coming year

2018-01-08T11:20:00Z

Experts weigh in on what we can expect from AI in 2018.What will happen in artificial intelligence (AI) in 2018? Four experts–Chad Meley and Stephen Brobst of Teradata Corporation, a leading provider of business and data analytics solutions, and Atif Kureishy and Eliano Marques of Think Big Analytics, a business outcome-led global analytics consultancy–share their predictions on the new technologies, business opportunities, and challenges the AI field will see in the coming year. Chad Meley, vice president of marketing, Teradata Corporation AI infrastructure will improve dramatically. Survey respondents from 260 large enterprises told us their most significant barrier to realizing benefits from AI was “lack of IT infrastructure,” surpassing all other headwinds, such as little or no access to talent, lack of budget, and weak or unknown business cases. Poor software integration between various open source software components, along with GPU environments that do not meet enterprise service levels, have become top priorities to be addressed by vendors and the community. They will respond in 2018 with enterprise-grade AI product and support offerings that overcome the growing pains associated with new advanced technology. This will not only accelerate the value that comes from AI, but will also mark a shift in enterprise AI algorithms from running almost exclusively in the cloud (given its position as the path of least resistance), to a more balanced mix between public cloud and on-premises deployments that more accurately reflect the realities and foreseeable intentions of large enterprises. The first mainstream killer app for enterprise AI will happen in financial services. A large majority of the top 15 banks in the world will begin to share publicly their results in using AI to thwart bad actors across a variety of financial crimes, such as credit card fraud, synthetic identity theft, AML, and others. Unlike the traditional systems in place that are based on manually set rules and light analytics, AI-based systems using a wider spectrum of data along with lightly supervised algorithms will prove to be an order of magnitude more effective in detecting financial malfeasance with respect to accuracy, completeness, and timeliness. One could argue that deep learning is being applied in retail already for next-generation recommendation engines--while true, it’s limited to only the most advanced digital native retailers, and therefore excluded from this “mainstream” characterization. AI use cases from other industries such as health care, manufacturing, and transportation will progress in 2018 with some key one-off wins by select companies, but will not see a widespread break-out in 2018 to the degree we’ll see in financial services. Things are lined up perfectly for the killer AI app for fighting financial crimes: a well-understood and high-impact business case, recognition of the need to fight fire with fire (as the bad actors are using AI against the financial institutions), and a couple of innovative banks that have done it in 2017 and are[...]



Four short links: 8 January 2018

2018-01-08T11:10:00Z

Old Unix, Differentiable Programming, Systems Change, and Resilience

  1. v7/86 -- a port of UNIX* Version 7 to the x86 (IA-32) based PC, is now available. Most of the source code is under a Berkeley-style license.
  2. Deep Learning est mort. Vive Differentiable Programming! (Yann LeCun) -- An increasingly large number of people are defining the network procedurally in a data-dependent way (with loops and conditionals), allowing them to change dynamically as a function of the input data fed to them. It's really very much like a regular program, except it's parameterized, automatically differentiated, and trainable/optimizable.
  3. Better Philanthropy Through Systems Change -- Our focus should be more on solving problems through creative collaboration, and less on the establishment and perpetuation of new institutions. In addition, we need to develop and employ system entrepreneurs who are skilled in coordinating systematic approaches to addressing the complex, large-scale problems of our time. In case you're thinking of doing Stuff That Matters. (via Jo Allum)
  4. Loss, Trauma, and Human Resilience -- resilience in the face of loss or potential trauma is more common than is often believed, and there are multiple and sometimes unexpected pathways to resilience. A thought to keep you going through the dark hours at work. (via Rolf Degen)

Continue reading Four short links: 8 January 2018.

(image)



What’d I miss?

2018-01-08T11:00:00Z

Tim O'Reilly reflects on the stories from 2017 that played out after he finished writing his new book.There's a scene in Lin-Manuel Miranda's Hamilton in which Thomas Jefferson, who has been away as ambassador to France after the American Revolution, comes home and sings, "What'd I miss?" We all have "What'd I miss?" moments, and authors of books most of all. Unlike the real-time publishing platforms of the web, where the act of writing and the act of publishing are nearly contemporaneous, months or even years can pass between the time a book is written and the time it is published. Stuff happens in the world, you keep learning, and you keep thinking about what you've written, what was wrong, and what was left out. Because I finished writing my new book, WTF? What's the Future and Why It's Up to Us, in February of 2017, my reflections on what I missed and what stories continued to develop as I predicted form a nice framework for thinking about the events of the year. Our first cyberwar "We just fought our first cyberwar. And we lost," I wrote in the book, quoting an anonymous US government official to whom I'd spoken in the waning months of the Obama administration. I should have given that notion far more than a passing mention. In the year since, the scope of that cyberwar has become apparent, as has how all of the imagined scenarios we used to prepare turned out to mislead us. Cyberwar, we thought, would involve hacking into systems, denial-of-service attacks, manipulating data, or perhaps taking down the power grid, telecommunications, or banking systems. We missed that it would be a war directly targeting human minds. It is we, not our machines, that were hacked. The machines were simply the vector by which it was done. (The Guardian gave an excellent account of the evolution of Russian cyberwar strategy, as demonstrated against Estonia, Ukraine, and the US.) Social media algorithms were not modified by Russian hackers. Instead, the Russian hackers created bots that masqueraded as humans and then left it to us to share the false and hyperpartisan stories they had planted. The algorithms did exactly what their creators had told them to do: show us more of what we liked, shared, and commented on. In my book, I compare the current state of algorithmic big data systems and AI to the djinni (genies) of Arabian mythology, to whom their owners so often give a poorly framed wish that goes badly awry. In my talks since, I've also used the homelier image of Mickey Mouse, the sorcerer's apprentice of Walt Disney's Fantasia, who uses his master's spell book to compel a broomstick to help him with his chore fetching buckets of water. But the broomsticks multiply. One becomes two, two become four, four become eight, eight sixteen, and soon Mickey is frantically turning the pages of his master's book to find the spell to undo what he has so unwisely wished for. That image perfectly encapsulates the state of those who are now trying to come to grips with the monsters that social media has unleashed. This image also p[...]



Four short links: 5 January 2018

2018-01-05T11:00:00Z

Intolerable Speech Rule, Deep Learning, C++ in Jupyter, and Complexity+Game Theory+Economics

  1. The Intolerable Speech Rule -- The Paradox of Tolerance says that a tolerant society should be intolerant of one thing: intolerance itself. This is because if a tolerant society allows intolerance to take over, it will destroy the tolerant society and there will be no tolerance left anywhere. What this means for tech companies is that they should not support intolerant speech when it endangers the existence of tolerant society itself. A talk that presents the simple rule: If the content or the client is: advocating for the removal of human rights from people based on an aspect of their identity, in the context of systemic oppression primarily harming that group in a way that overall increases the danger to that group...then don’t allow them to use your products.
  2. Deep Learning: A Critical Appraisal (Gary Marcus) -- Ten challenges: Deep learning thus far is data hungry; deep learning thus far is shallow and has limited capacity for transfer; deep learning thus far has no natural way to deal with hierarchical structure; deep learning thus far has struggled with open-ended inference; Deep learning thus far is not sufficiently transparent; deep learning thus far has not been well integrated with prior knowledge; deep learning thus far cannot inherently distinguish causation from correlation; deep learning presumes a largely stable world, in ways that may be problematic; deep learning thus far works well as an approximation, but its answers often cannot be fully trusted; deep learning thus far is difficult to engineer with. (via Gary Marcus)
  3. C++ in Jupyter -- what it says on the box. Nifty.
  4. Complexity Theory, Game Theory, and Economics​ -- This document collects the lecture notes from my mini-course "Complexity Theory, Game Theory, and Economics." Starts well: To an algorithms person (like your lecturer), complexity theory is the science of why you can’t get what you want.

Continue reading Four short links: 5 January 2018.

(image)



7 systems engineering and operations trends to watch in 2018

2018-01-05T11:00:00Z

How edge networks, Kubernetes, serverless and other trends will shape systems engineering and operations.We asked members of the 2018 O'Reilly Velocity Conference program committee for their take on the tools and trends that will change how you work. Below you’ll find the insights that I believe will have the greatest impact on the community in the year ahead. Networking the edge This year was all about the cloud as enterprises continued their migration to public, private, hybrid, and multi-cloud infrastructures to compete with agile, cloud-native competitors who can scale quickly at less cost. But next year, Fastly’s Senior Communications Manager Elaine Greenberg expects we’ll see more companies moving their networks closer to the edge. “Businesses that were previously just making the move to the cloud are beginning to move more of their app logic to the edge (closer to the end user), in new and interesting ways, in order to support speed and scale,” says Greenberg. “We are seeing more and more content about the edge, what exactly it means, and how edge computing is distinct from the more generic ‘serverless’ initiative, especially in the context of IoT and AI scalability. With large cloud vendors investing more effort into edge tooling, there is a growing interest in understanding and leveraging distributed systems work at the edge.” Kubernetes domination Kubernetes came into its own in 2017 and its popularity will only grow in 2018. Edward Muller, engineering manager at Salesforce, predicts that building tools on top of Kubernetes is going to be more prevalent next year. “Previously, most tooling targeted one or more cloud infrastructure APIs,” says Muller. “Recent announcements of Kubernetes as a Service (KaaS?) from major cloud providers is likely to only hasten the shift.” Evolution of the service mesh One trend to watch out for next year will be to see how service meshes evolve—or as imgix Engineer Cindy Sridharan calls it, “the proxy wars.” Sridharan says: “Envoy, Linkerd, NGINX Plus, and HAProxy are all in this space now and are rapidly developing new features to make them first-class citizens in the cloud native ecosystem. It'll be interesting to see how these tools evolve in conjunction with Kubernetes.” Expect to hear a lot more about Istio in 2018. And, Sridharan adds, "As the service mesh architecture gains more traction, expect to hear more about the failure modes of the mesh itself as well as best practices to test, deploy, operate, and debug the mesh and/or its configuration better." Serverless monitoring While most organizations are still trying to figure out where and how to use serverless, the general consensus is not if organizations will embrace serverless, but when. The New York Times CTO Nick Rockwell believes serverless is underhyped and that we’re on the way to a largely serverless world. When that time comes, we’ll need to have a process for monitoring large-scale serverless apps, say[...]



Bringing AI into the enterprise

2018-01-04T14:50:00Z

(image)

The O’Reilly Data Show Podcast: Kris Hammond on business applications of AI technologies and educating future AI specialists.

In this episode of the Data Show, I spoke with Kristian Hammond, chief scientist of Narrative Science and professor of EECS at Northwestern University. He has been at the forefront of helping companies understand the power, limitations, and disruptive potential of AI technologies and tools. In a previous post on machine learning, I listed types of use cases (a taxonomy) for machine learning that could just as well apply to enterprise applications of AI. But how do you identify good use cases to begin with?

Continue reading Bringing AI into the enterprise.

(image)



How a RESTful API represents resources

2018-01-04T11:00:00Z

(image)

Formats, linking, and versioning are important in well-formed RESTful APIs.

In this series of posts on RESTful API design, we started from the spec of a bike rental application and we're moving towards a fully functional API design. In the first article, we talked about how to identify the URL and HTTP method pairs we would need to implement the server-side API for the application. Then, in the second part, we explored how our server should react to incoming requests and communicate its state with status codes. We also learned how HTTP implements authentication, caching, and optimistic locking.

Now, in the third and last part of this example-guided tour, we will cover some of the most controversial topics in the REST community. They are actively debated and always in the spotlight in discussions about RESTful API design: resource representations, Media Types, HATEOAS, and versioning.

Continue reading How a RESTful API represents resources.

(image)



5 ways the web will evolve in 2018

2018-01-04T11:00:00Z

Progressive web apps, offline-first development, customer experience, and other web trends to watch.How will the web world change in 2018? Here's what we're watching. Progressive web apps shift from "on the rise" to "here to stay" Looking into my crystal ball at the beginning of 2017, I predicted that we'd see a rise in interest in progressive web apps (PWA). In June at the O’Reilly Fluent Conference, Addy Osmani discussed how progressive web apps are becoming the new normal for many organizations, including Lyft, Twitter, Lancôme, and Hacker News. Gartner predicts that by 2020 PWAs will replace 50% of consumer-facing apps, given their ability to meld the accessibility of the web with the features of native apps. With cross-browser support and compatibility with mobile-friendly frameworks like Vue, React and Preact, and Angular, we can expect that in 2018 we'll see PWAs become a standard for mobile web strategy. Looking to get started building your own PWA? Tal Ater's Building Progressive Web Apps can get you up and running. Offline matters A fundamental part of progressive web apps is that they work online, offline, and on those shaky, intermittent connections web users inevitably encounter. While mobile first and responsive web design have been widely adopted by the industry, offline first is poised to be one of the next most important design tenets for web developers. Why does offline support matter so much? The stakes are a lot higher than dealing with the Wi-Fi dead zone in your apartment or trying to get your email app to load on the subway. In crisis situations, when people will be contending with low-bandwidth connectivity or no connectivity at all, text-only sites and offline app support become increasingly crucial. CNN launched its text-only site, lite.cnn.io, during Hurricane Irma in response to the need for news sites free from video and ads for low-bandwidth users in the affected areas. In addition to helping in crisis situations, text-only sites offer accessibility and performance improvements, which are both vital in reaching users. When it comes to offline apps, service workers provide a foundation for offline experiences. As developers, we tend to experience our apps and sites in ideal states: from a modern browser with a high-speed connection or on a local or development server. However, it's important to understand that your users don’t often encounter apps and sites in the same way and that low-connectivity or no-connectivity experiences are a part of reality. Offline first is a way to put users first and give them the best possible experience with your web app, even if network connections are less than ideal. Learn more about service workers and embracing offline first on Safari. Customer experience leads the way A major component of progressive web apps and offline first is putting user and customer needs first. Alex Russell's recent res[...]



Four short links: 4 January 2018

2018-01-04T10:45:00Z

State of the World, Implants, Meltdown and Spectre, and Anonymous GitHub

  1. State of the World 2018 -- Bruce Sterling and Jon Lebkowsky take on the state of the world again. I’m always interested in people who make that claim—“We’re the future.” They’re not resentful in Dubai or Estonia, they’re not looking backward; that’s what I appreciate. They don’t yearn to make themselves great again, because they’ve never been great. They’re not trying to take back control of anything, because they never had any control. (via BoingBoing)
  2. A Practical Guide to Microchip Implants -- 50K-100K people have voluntarily microchipped themselves, attempting to have a digital effect with their physical presence.
  3. Meltdown and Spectre -- a lot of bad news at the CPU level. The workaround means slow hardware, and the path to safety may require replacing the chips. Oh, and Mozilla says internal experiments confirm that it is possible to use similar techniques from web content to read private information between different origins. And this is Day 4 of 2018.
  4. Gitmask -- anonymously submit pull requests on GitHub.

Continue reading Four short links: 4 January 2018.

(image)



Reinforcement learning with TensorFlow

2018-01-03T14:55:00Z

Solving problems with gradient ascent, and training an agent in Doom. The world of deep reinforcement learning can be a difficult one to grasp. Between the sheer number of acronyms and learning models, it can be hard to figure out the best approach to take when trying to learn how to solve a reinforcement learning problem. Reinforcement learning theory is not something new; in fact, some aspects of reinforcement learning date back to the mid-1950s. If you are absolutely fresh to reinforcement learning, I suggest you check out my previous article, "Introduction to reinforcement learning and OpenAI Gym," to learn the basics of reinforcement learning. Deep reinforcement learning requires updating large numbers of gradients, and deep learning tools such as TensorFlow are extremely useful for calculating these gradients. Deep reinforcement learning also requires visual states to be represented abstractly, and for this, convolutional neural networks work best. In this article, we will use Python, TensorFlow, and the reinforcement learning library Gym to solve the 3D Doom health gathering environment. For a full version of the code and required dependencies, please access the GitHub repository and Jupyter Notebook for this article. Exploring the environment In this environment, the Doom player is standing on top of acid water and needs to learn how to navigate and collect health packs to stay alive. Figure 1. Environment, courtesy of Justin Francis. One method of reinforcement learning we can use to solve this problem is the REINFORCE with baselines algorithm. Reinforce is very simple—the only data it needs includes states and rewards from an environment episode. Reinforce is called a policy gradient method because it solely evaluates and updates an agent's policy. A policy is the way the agent will behave in a current state. For example, in the game Pong, a simple policy would be: if the ball is moving at a certain angle, the best action would be to move the paddle to a position relative to that angle. On top of using a convolutional neural network to estimate the best policy for any given state, we will use the same network to estimate the value or predicted long-term reward at a given state. We will start by defining our environment using Gym. env = gym.make('ppaquette/DoomHealthGathering-v0') Before trying to get an agent to learn, let’s look at a standard baseline of observing a random agent. It’s clear to see we have a lot to learn. Figure 2. Random agent, courtesy of Justin Francis. Setting up our learning environment Reinforce is considered a Monte Carlo method of learning; this means that the agent will collect data from an entire episode then perform calculations at the end of that episode. In our case, we will gather a batch of multiple episodes to train o[...]
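
For readers who want to see the shape of the policy gradient update, here is a stripped-down TensorFlow 1.x-style sketch. It uses plain dense layers on flattened frames instead of the article's convolutional network, and the placeholder names and shapes are illustrative assumptions, not the article's actual implementation:

    import tensorflow as tf  # TensorFlow 1.x-style API

    n_actions = 3            # e.g., turn left, turn right, move forward

    states  = tf.placeholder(tf.float32, [None, 84 * 84], name="states")   # flattened frames
    actions = tf.placeholder(tf.int32,   [None],          name="actions")  # actions the agent took
    returns = tf.placeholder(tf.float32, [None],          name="returns")  # discounted, baseline-adjusted

    hidden = tf.layers.dense(states, 128, activation=tf.nn.relu)
    logits = tf.layers.dense(hidden, n_actions)                            # policy head

    # REINFORCE: maximize log pi(a|s) * return, i.e., minimize cross-entropy weighted by return.
    neg_log_prob = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=actions, logits=logits)
    loss = tf.reduce_mean(neg_log_prob * returns)
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

Each batch of episodes supplies the observed states, the actions taken, and the discounted returns (minus a baseline); minimizing the weighted cross-entropy nudges the policy toward actions that led to higher returns.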



Four short links: 3 January 2018

2018-01-03T11:15:00Z

Charlie Stross, Pepper's Cone, Proving Correctness, and Chutes and Ladders

  1. Dude, You Broke the Future (Charlie Stross) -- text and video of his keynote at 34C3. tl;dr: don't worry about AI; corporations are already the runaway artificial entity with a single goal.
  2. Pepper's Cone -- This paper describes a simple 3D display that can be built from a tablet computer and a plastic sheet folded into a cone. This display allows naturally viewing a three-dimensional object from any direction over a 360-degree path of travel without the use of a head mount or special glasses. Inspired by the Pepper's Ghost illusion behind the ghosts at Disney's Haunted Mansion.
  3. Frap -- This is an in-progress, open source book by Adam Chlipala, simultaneously introducing the Coq proof assistant and techniques for proving correctness of programs. That is, the game is doing completely rigorous, machine-checked mathematical proofs, showing that programs meet their specifications.
  4. Simulating Chutes and Ladders -- FAR more than you ever thought you wanted to know about the game of Chutes and Ladders (Snakes and Ladders in my country). There's a section titled "Eigenvectors and Stationary States," so suit up before going in.

Continue reading Four short links: 3 January 2018.

(image)



8 fintech trends on our radar for 2018

2018-01-03T11:00:00Z

AI, blockchain, payment regionalization, and other fintech trends to watch. 2017 saw big changes, a lot of investment, and some regulatory challenges in fintech. What will 2018 bring? Here’s what we’ll be watching in the coming year. 1. AI will be implemented across the stack AI is sweeping across all industry sectors, including financial services. AI touches customer interactions (voice services like Siri and dialog systems), fraud detection, trading, and risk management (machine learning), and is being used to automate many back-office tasks (robotic process automation). AI technologies are also giving rise to new fintech startups that use techniques like computer vision to unlock new datasets (e.g., aerial images). 2. New products will make advanced analytics easier Talk to any vendor or startup in big data analytics or cloud computing and they probably have key customers in financial services. This means that many technology providers will create products tailored for finance (most likely products that comply with existing regulations), which lowers the barrier to using advanced analytics. 3. Blockchain technologies will be used in financial services products Blockchain technologies are being used by startups to disrupt existing financial services products, but there are also proof-of-concept projects that rely on blockchain tools taking place in most financial institutions. The technology is promising enough that interest in blockchains isn’t limited to financial services. Within finance, one area where blockchain technologies have grown rapidly is access to business capital (via ICOs). 4. Data partnerships will increase Companies have long sought more data to understand their customers. A typical consumer holds accounts with more than one financial services company. With more companies vying for the attention of consumers, data partnerships are being formed to enable companies to share (complementary) data securely. 5. We’ll see even greater regionalization and regulation in payments Our daily transactions now involve payment systems (mobile, online, and offline) that didn’t exist a decade ago. This is an area where both startups and global companies are flourishing. The technologies, protocols, and players vary by region. I’m amazed by how widely used QR codes are in China (WeChat and Alipay) and the rest of the world (GoPay in Southeast Asia, Paytm in India). For companies operating in the EU, PSD2 is a new 2018 regulation aimed at fostering innovation and competition in the payments industry. 6. Customer experience will be increasingly mobile Judging by how much time users spend on them, platforms like WeChat, Facebook, and the iPhone will increasingly be how customers interact with financial services. This is particularly true in reg[...]



What lies ahead for data in 2018

2018-01-02T11:00:00Z

How new developments in algorithms, machine learning, analytics, infrastructure, data ethics, and culture will shape data in 2018.Here's what we expect to see—or see more of—in the data world in 2018. 1. New tools will make graphs and time series easier, leading to new use cases Graphs and time series have been a crucial part of the explosion in big data. 2018 will see the emergence of a new generation of tools for storing and analyzing graphs and time series at large scale. These new analytic and visualization tools will help product groups devise new offerings, especially for use cases in security and fraud detection. 2. More companies will join data partnerships to share data In 2016, I started hearing companies express interest in data sharing platforms, and startups have now begun to build data exchanges to allow companies to share data across organizational boundaries, while protecting privacy and IP. Ideas from the blockchain world have inspired some of these initiatives, particularly crypto and distributed control. Data partnerships are taking hold in financial services companies, and I anticipate this trend to spread into other industries this year. 3. Expect advances in tools that facilitate ML experimentation and collaboration We're in an empirical era of machine learning. Companies are now building tools that facilitate experimentation and collaboration. There's been a particular focus on data science platforms that allow users to share processing pipelines and features/predictors, use different libraries, and enable end-to-end reproducibility. 4. As well as in onboarding courses and tools for data scientists As more companies add deep learning to the mix of algorithms they use, we'll see new onboarding courses and tools that allow data scientists to share best practices, architectures, and parameters. 5. We'll see new use cases for deep learning as a machine learning method Besides traditional applications in computer vision, speech, and text, companies are actively exploring deep learning for recommenders, search ranking, fraud and anomaly detection, and time series forecasting. 6. Data pipelines that draw on multiple data sources will continue to evolve Machine learning products require data pipelines that draw on disparate data sources, so data integration, data enrichment, and data processing tools continue to be critical. 7. Anticipate new methods for unifying live and historical data In recent months, I've come across startups and open source communities building storage systems that enable analytics on live and long-term historical data. These unified data management systems enable analysts to build applications using a single system rather than querying live and historical data stores separately.[...]



Four short links: 2 January 2018

2018-01-02T10:55:00Z

Public Domain, Training Data, DIY CS Degree, and Collaborative Neural Net Design

  1. What Could Have Entered Public Domain on January 1 (Duke) -- What books would be entering the public domain if we had the pre-1978 copyright laws? We're well into familiar authors and books now: Joseph Heller's Catch-22, Heinlein's Stranger in a Strange Land, and Roald Dahl's James and the Giant Peach to name a few. Of course, the real cost isn't these books, which are still in print, but rather the far greater number of minor works whose fame wasn't enough to keep them in print.
  2. Snorkel -- a system for rapidly creating, modeling, and managing training data, currently focused on accelerating the development of structured or "dark" data extraction applications for domains in which large labeled training sets are not available or easy to obtain. Open source.
  3. Path to a Free Self-Taught Education in Computer Science -- The OSSU curriculum is a complete education in computer science using online materials. It's not merely for career training or professional development. It's for those who want a proper, well-rounded grounding in concepts fundamental to all computing disciplines, and for those who have the discipline, will, and (most importantly!) good habits to obtain this education largely on their own, but with support from a worldwide community of fellow learners.
  4. Fabrik -- collaboratively build, visualize, and design neural nets in browser.

Continue reading Four short links: 2 January 2018.

(image)



Four short links: 1 January 2018

2018-01-01T09:00:00Z

Future Game, Speech Recognition, Theoretical Computer Science, and Assembly Game

  1. Where Do You Stand? (Stuart Candy) -- a simple classroom or workshop game where you orient yourself along two axes according to how bad you think things are, and how much influence you think you have over it. The UR [Upper Right: things are good and getting better] may, for instance, think of themselves as powerful change agents, but then hear from others (moving clockwise) that the LR [Lower Right: things are getting worse but I can act] regard them as being unrealistic or just privileged; the LL [Lower Left: things are getting worse and there's nothing I can do about it] describe them as deluded or hubristic, and the UL see them as the ones who create the world that the LL live in. You can then move people into different quadrants to “see how things look from where others stand”.
  2. wav2letter -- a simple and efficient end-to-end Automatic Speech Recognition (ASR) system from Facebook AI Research.
  3. Intro to Theoretical Computer Science -- lecture notes for an introductory undergraduate course on theoretical computer science. I am using these notes for Harvard CS 121.
  4. Much Assembly Required -- an assembly-programming game, a little CoreWars-esque. Open-sourced.

Continue reading Four short links: 1 January 2018.

(image)



Four short links: 29 December 2017

2017-12-29T12:55:00Z

Signal Processing, Minecraft Economics, OS Design, and Real-time Video Faking

  1. A Pragmatic Introduction to Signal Processing -- tutorials with examples in Matlab and Octave.
  2. Economics of Minecraft -- an absolutely fascinating story about economic adventures on one Minecraft server.
  3. Operating System Design Book Series -- low-level, readable series of books on how operating systems work.
  4. Face2Face: Real-time Face Capture and Reenactment of RGB Videos -- Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. [...] At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. VERY impressive.

Continue reading Four short links: 29 December 2017.

(image)



Sam Newman on building microservices

2017-12-28T13:30:00Z

(image)

The O’Reilly Programming Podcast: How to effectively make the transition from monoliths to microservices.

In this episode of the O’Reilly Programming Podcast, we revisit our June 2017 conversation with Sam Newman, presenter of the O’Reilly video course The Principles of Microservices and the online training course From Monolith to Microservices. He is also the author of the book Building Microservices: Designing Fine-Grained Systems.

Continue reading Sam Newman on building microservices.

(image)



How a RESTful API server reacts to requests

2017-12-28T11:00:00Z

(image)

Learn how to properly design RESTful API communication with clients, accounting for request structure, authentication, and caching.

This series of articles shows you how to derive an easy-to-use, robust, efficient API to serve users on the web or on mobile devices. We are using the principles of RESTful architecture over HTTP. In the first piece, we started from a list of specs for a simple bike rental service, defining URLs and the HTTP methods to serve the app. In this second part, we will talk in more detail about how the server should react to incoming requests with status codes. We will also talk about how to identify the user performing a request (authentication), why Cross-Origin Resource Sharing (CORS) matters for APIs, how caching can improve performance, and how HTTP optimistic locking can prevent inconsistencies in resources.
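
To give a feel for how a few of these ideas fit together in code, here is a minimal Flask sketch. The bike endpoint, the in-memory store, and the version-number-as-ETag scheme are hypothetical illustrations rather than the article's code; the point is only to show status codes for missing resources and credentials, an ETag on reads, and an If-Match check that implements optimistic locking on updates.

    from flask import Flask, jsonify, request, abort, make_response

    app = Flask(__name__)

    # Hypothetical in-memory store: bike id -> current data plus a version counter.
    bikes = {1: {"data": {"model": "city"}, "version": 1}}

    @app.route("/bikes/<int:bike_id>", methods=["GET"])
    def get_bike(bike_id):
        bike = bikes.get(bike_id)
        if bike is None:
            abort(404)                                  # 404 Not Found: no such resource
        resp = make_response(jsonify(bike["data"]), 200)
        resp.headers["ETag"] = str(bike["version"])     # lets clients make conditional updates
        return resp

    @app.route("/bikes/<int:bike_id>", methods=["PUT"])
    def update_bike(bike_id):
        if request.headers.get("Authorization") is None:
            abort(401)                                  # 401 Unauthorized: authentication required
        bike = bikes.get(bike_id)
        if bike is None:
            abort(404)
        # Optimistic locking: apply the update only if the client saw the latest version.
        if request.headers.get("If-Match") != str(bike["version"]):
            abort(412)                                  # 412 Precondition Failed: stale representation
        bike["data"] = request.get_json()
        bike["version"] += 1
        return jsonify(bike["data"]), 200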

Here is where we left off from the last post: we have the URLs (nouns) and the HTTP methods (actions) our API responds to. Each combination of URL and HTTP method corresponds to a functionality available in the bike rental app:

Continue reading How a RESTful API server reacts to requests.

(image)



Four short links: 28 December 2017

2017-12-28T11:00:00Z

Blockchain, Patent Problems, Great Speech, and Inventor Research

  1. Ten Years In, Nobody has Come up with a Use for Blockchain -- you’re relying on single-point encryption — your own private keys — rather than a more sophisticated system that might involve two-factor authorization, intrusion detection, volume limits, firewalls, remote IP tracking, and the ability to disconnect the system in an emergency. [And], price tradeoffs are entirely implausible — the bitcoin blockchain has consumed almost a billion dollars worth of electricity to hash an amount of data equivalent to about a sixth of what I get for my ten dollar a month dropbox subscription. (via Marginal Revolution)
  2. Empirical Research Reveals Three Big Problems With How Patents are Vetted (Ars Technica) -- data-driven analysis that concludes: The United States Patent and Trademark Office (USPTO) is funded by fees—and the agency gets more fees if it approves an application. Unlimited opportunities to refile rejected applications means sometimes granting a patent is the only way to get rid of a persistent applicant. Patent examiners are given less time to review patent applications as they gain seniority, leading to less thorough reviews.
  3. Tacotron 2 -- Our model achieves a mean opinion score (MOS) of 4.53 comparable to a MOS of 4.58 for professionally recorded speech. Software-generated speech that's indistinguishable from human speech. Try the samples, they're wow.
  4. The Social Origins of Inventors -- In this paper we merge three datasets - individual income data, patenting data, and IQ data - to analyze the determinants of an individual’s probability of inventing. We find that: (i) parental income matters even after controlling for other background variables and for IQ, yet the estimated impact of parental income is greatly diminished once parental education and the individual’s IQ are controlled for; (ii) IQ has both a direct effect on the probability of inventing and an indirect impact through education. The effect of IQ is larger for inventors than for medical doctors or lawyers. [...] Finally, we find a positive and significant interaction effect between IQ and father income, which suggests a misallocation of talents to innovation.

Continue reading Four short links: 28 December 2017.[...]



Four short links: 27 December 2017

2017-12-27T11:00:00Z

When to Decentralize, Uncanny Valley, Anti-Adblocker AI, and Biosensors

  1. When To Decentralize Decision-Making and When Not To (HBR) -- author suggests it depends on whether you need responsiveness, reliability, efficiency, or perennity (e.g., the quality of being perennial, or continuing reliably in perpetuity). If you want responsiveness, decentralize and promote immediacy. If you want reliability, centralize and promote compliance. If you want efficiency, centralize and promote syndication (distribute tasks but centralize management). If you want perennity, centralize and detach the operational unit from other departments that might inhibit execution.
  2. Creepiness Creeps In: Uncanny Valley Feelings Are Acquired in Childhood -- Two hundred forty 3- to 18-year-olds viewed one of two robots (machine-like or very human-like) and rated their feelings toward (e.g., "Does the robot make you feel weird or happy?") and perceptions of the robot's capacities (e.g., "Does the robot think for itself?"). Like adults, children older than 9 judged the human-like robot as creepier than the machine-like robot, but younger children did not. Children's perceptions of robots' mental capacities predicted uncanny feelings: children judge robots to be creepy depending on whether they have human-like minds. The uncanny valley is therefore acquired over development and relates to changing conceptions about robot minds. (via BoingBoing)
  3. Measuring and Disrupting Anti-Adblockers Using Differential Execution Analysis -- We want to develop a comprehensive understanding of anti-adblockers, with the ultimate aim of enabling adblockers to bypass state-of-the-art anti-adblockers. In this paper, we present a differential execution analysis to automatically detect and analyze anti-adblockers.
  4. 6 In 1 Biosensor for Smartphones -- The six data points that the sensor package is able to collect are as follows: Heart-rate – heart beats per minute; Heart-rate variability – variation in time between heartbeats; Blood pressure trends – measured range of data over time; Peripheral oxygen saturation (SpO2) – measurement of blood oxygen levels; Electrocardiography (ECG) – electrical activity of the heart over a period of time; Photoplethysmography (PPG) – measurement of blood volume changes.

Continue reading Four short links: 27 December 2017.[...]



Four short links: 26 December 2017

2017-12-26T11:00:00Z

Internet Mea Culpa, Lending for Data, Graphical Projections, and NPS Harmful

  1. My Internet Mea Culpa -- What we’re living in now is not the state the prophets proposed. [...] If I had known in 1994 that this whole internet thing would have brought generations — generations — of pain before the solution came, it would have been a totally different decision process for me to help it out.
  2. China’s New Lenders Collect Invasive Data and Offer Billions (NYT) -- lots of finance apps for cash loans. The new online lending platforms also raise issues of privacy, a new but growing area of public concern in China. Many platforms that track smartphone use have access to data like location services, phone contact lists and call logs that can be used to track and harass delinquent borrowers. “The government has struggled a lot because they realize that consumers’ personal information is everywhere,” said Liu Yue, a partner at the Boston Consulting Group in Beijing. “But they don’t really know how to change that because the data is already being used.”
  3. Game Developer’s Guide to Graphical Projections -- This article series explains one of the fundamentals of drawing: how to draw three-dimensional things correctly. It’s an essential skill for artists, but it’s also a great first topic for coders that want to get started with art. Even better, we’ll learn it all by simply looking at video games. I don't easily picture things in the 3D world, but this was totally followable for this flatlander.
  4. Net Promoter Score Considered Harmful -- It’s easy to game NPS to look like you’ve made experience improvements when you may have made it worse. [...] [T]here’s no one number that represents a company’s customer experience.

Continue reading Four short links: 26 December 2017.

(image)



Four short links: 25 December 2017

2017-12-25T12:35:00Z

Downscaling Attacks, CS50x Online, Input Lag, DIY ICs

  1. Wolf in Sheep's Clothing: The Downscaling Attack Against Deep Learning Applications -- common image scaling algorithms are not designed to handle human-crafted images. Attackers can make the scaling outputs look dramatically different from the corresponding input images.
  2. Introduction to Computer Science: CS50 -- This is CS50x, Harvard University's introduction to the intellectual enterprises of computer science and the art of programming for majors and non-majors alike, with or without prior programming experience. An entry-level course.
  3. Input Lag -- comparing the time it takes the screen to update after a keypress across a bunch of different systems. It’s a bit absurd that a modern gaming machine running at 4,000x the speed of an Apple II, with a CPU that has 500,000x as many transistors (with a GPU that has 2,000,000x as many transistors) can maybe manage the same latency as an Apple II in very carefully coded applications if we have a monitor with nearly 3x the refresh rate.
  4. The High School Student Who’s Building His Own Integrated Circuits (IEEE) -- inspired by Jeri Ellsworth's videos, he's making his own chips. I love that he bought a scanning electron microscope on eBay, negotiating the seller down: The electron microscope was “a broken one from a university that just needed some electrical repairs [...] It was listed for sale at $2,500, but Zeloof persuaded the seller to take “well below that” and ended up spending more on shipping than it cost to buy the microscope.

Continue reading Four short links: 25 December 2017.




Four short links: 22 Dec 2017

2017-12-22T11:00:00Z

Jamming Robot, Online Politeness, AR and Self-Organization, and Open Paperless

  1. Robot Drummer Posts Pictures of Jam Sessions on Facebook -- "This particular research sought to examine whether the relationships that were initially developed face-to-face, but under lab conditions, could be extended to the more open, but virtual, realm of social media." Answer: not so much. Making friends is still hard for robots.
  2. Politeness (Matt Webb) -- Makes me wonder what a similar benevolent, positive philosophy -- pointing inwards at the self and outwards at society -- would be nowadays, and what new modes of interaction it could draw on. The internet I suppose. But how. This is one of the great challenges of our time.
  3. The Augmented Commons: How Augmented Reality Aids Agile Self-Organization -- We provide examples from existing AR applications and conceptualize how AR strengthens self-organization and enables polycentric loci of private governance to emerge, what we call agile self-organization. Examples include information-enhancing overlays, automatic language translation, individualizing and privatizing the provision of personal and worker safety, reducing emergency response times, and enriching education with overlays and holograms. We conclude that AR technologies could erode traditional policy rationales for intervention and allow private governance to take hold and flourish in situations where it has traditionally had difficulty doing so. (via Marginal Revolution)
  4. Open Paperless -- Open source software for scanning, indexing, and archiving paper documents. Could be a useful starting point if you wanted to build something in this space.

Continue reading Four short links: 22 Dec 2017.




How to design a RESTful API architecture from a human-language spec

2017-12-21T19:30:00Z


A process to build RESTful APIs that solve users’ needs with simplicity, reliability, and performance.

Every piece of software exists, directly or indirectly, to solve a real-world problem. Most web APIs are consumed by client applications running on PCs, mobile devices, etc., which are in turn used by humans. Although APIs are consumed directly by machines, they exist to satisfy the needs of human beings, so designing them should follow a user-centered process; often, it doesn't. It's common to find APIs that merely expose database functions without considering what end users will do with them. If your API merely CRUDs (creates, reads, updates, deletes) your database, it may serve machines really well, but not the humans using it.
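
To illustrate the difference, here is a small hedged sketch (mine, not from the article) using Flask; the "orders" resource, the cancellation sub-resource, and the field names are hypothetical. The first route mirrors a database table; the second names the action a human actually wants to perform.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Database-shaped ("CRUD") endpoint: the client edits raw columns and must
    # know which fields to flip to express an intent such as "cancel my order".
    @app.route("/orders/<int:order_id>", methods=["PUT"])
    def update_order(order_id):
        fields = request.get_json(silent=True) or {}
        return jsonify({"order": order_id, "updated_fields": sorted(fields)}), 200

    # User-centered endpoint: the resource names the action itself; the server
    # owns the state transitions, refunds, and notifications behind it.
    @app.route("/orders/<int:order_id>/cancellation", methods=["POST"])
    def cancel_order(order_id):
        body = request.get_json(silent=True) or {}
        return jsonify({"order": order_id, "status": "cancelled",
                        "reason": body.get("reason", "unspecified")}), 201

    if __name__ == "__main__":
        app.run(debug=True)

With the second style, a client expresses the user's goal in one request (POST /orders/42/cancellation) instead of reverse-engineering which columns to update.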

In this article, we'll assume you already have a human-readable specification of a system, and we'll teach you a process for designing a RESTful HTTP API architecture from it. By RESTful API, we mean an API that follows the Representational State Transfer (REST) architectural style. REST is a very popular approach to building APIs because it emphasizes simplicity, extensibility, reliability, and performance.

Continue reading How to design a RESTful API architecture from a human-language spec.




Uncovering hidden patterns through machine learning

2017-12-21T17:55:00Z

Lessons from FizzBuzz for Apache MXNet.

When data scientist Joel Grus wrote an article on using machine learning to solve the "fizzbuzz" problem last year, most people saw it as an exercise in comedy, perhaps with a warning about the inappropriate use of AI. But we saw a deeper lesson. Certainly, you don't need AI to solve fizzbuzz, so long as someone tells you the algorithm underlying the problem. But suppose you discover a seemingly random pattern like fizzbuzz output in nature? Patterns like that exist throughout real life, and no one gives us the algorithm. Machine learning solves such problems.

This summer, I had an opportunity to interview with an AI startup that I really liked. And guess what? I was asked to solve fizzbuzz using deep learning. Long story short, I didn't get the job offer. But this made us think about why fizzbuzz makes sense as an application of deep learning. On the surface, it's a silly problem in integer arithmetic (or number theory, if you like to be pedantic). But it generates interesting patterns, and if you saw a list of inputs and outputs without knowing the underlying algorithm, finding a way to predict the outputs would be hard. Therefore, fizzbuzz is an easy way to generate patterns on which you can test deep learning techniques. In this article, we'll try the popular Apache MXNet tools and find that this little exercise takes more effort than one might expect.

What is fizzbuzz?

According to Wikipedia, fizzbuzz originated as a children's game, but it has long been a popular challenge that interviewers give programming candidates. Given an integer x, the programmer has to produce output according to the following rules (a small code sketch of them appears after this excerpt):

  if x is divisible by 15, the output is "fizzbuzz"
  else if x is divisible by 3, the output is "fizz"
  else if x is divisible by 5, the output is "buzz"
  else, the output is x

A typical output sequence looks like this:

  Input   Output
  1       1
  2       2
  3       "fizz"
  4       4
  5       "buzz"
  6       "fizz"
  7       7
  8       8
  9       "fizz"
  10      "buzz"
  11      11
  12      "fizz"
  13      13
  14      14
  15      "fizzbuzz"
  16      16

The requirements generate a surprisingly complex state machine, so they can reveal whether a novice programming candidate has good organizational skills. But if we know the rules that generate the data, there's really no need for machine learning. Unfortunately, in real life, we only have the data. Machine learning helps us cr[...]
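
For reference, here is a minimal sketch (mine, not the article's code) of the rules above, plus the kind of input encoding a deep learning approach typically starts from: the integer is turned into a binary feature vector and the label into one of four classes. Holding out 1-100 for testing is an assumption borrowed from the usual framing of the exercise.

    import numpy as np

    def fizzbuzz(x):
        """Reference implementation of the rules above."""
        if x % 15 == 0:
            return "fizzbuzz"
        if x % 3 == 0:
            return "fizz"
        if x % 5 == 0:
            return "buzz"
        return str(x)

    def label(x):
        # Class a model would predict: 0 = the number itself, 1 = "fizz",
        # 2 = "buzz", 3 = "fizzbuzz".
        if x % 15 == 0:
            return 3
        if x % 3 == 0:
            return 1
        if x % 5 == 0:
            return 2
        return 0

    def encode(x, num_bits=10):
        # Binary-encode the integer so the network sees digits, not magnitude.
        return np.array([(x >> i) & 1 for i in range(num_bits)], dtype=np.float32)

    # Train on 101..1023 and hold out 1..100 for testing (assumed split).
    X_train = np.stack([encode(x) for x in range(101, 1024)])
    y_train = np.array([label(x) for x in range(101, 1024)])

    print(fizzbuzz(15), X_train.shape, y_train[:10])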