O'Reilly Radar - Insight, analysis, and research about emerging technologies
http://radar.oreilly.com/atom.xml

All - O'Reilly Media



All of our Ideas and Learning material from all of our topics.



Updated: 2016-09-30T10:30:11Z

 



What we take for granted: Examining the barriers to contributing to open source

2016-09-30T10:00:00Z

(image)

Saron Yitbarek explains how socioeconomic status, location, education, and other factors affect the likelihood of being able to contribute to the open source community.

Continue reading What we take for granted: Examining the barriers to contributing to open source.

(image)



Thinking in coroutines

2016-09-30T10:00:00Z

(image)

Lukasz Langa uses asyncio source code to explain the event loop, blocking calls, coroutines, tasks, futures, thread pool executors, and process pool executors.
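As a rough illustration of those same building blocks (my own minimal sketch, not Langa's example, written against Python 3.7+), a coroutine, a task, and a thread pool executor fit together on one event loop like this:

import asyncio
import concurrent.futures
import time

def blocking_io():
    # A blocking call like this would stall the event loop if called directly in a coroutine.
    time.sleep(1)
    return "blocking work done"

async def tick():
    for i in range(3):
        print("tick", i)
        await asyncio.sleep(0.5)  # suspends this coroutine and yields control to the event loop

async def main():
    loop = asyncio.get_running_loop()
    task = asyncio.create_task(tick())  # schedule the coroutine as a task so it runs concurrently
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # run_in_executor returns a future; awaiting it keeps the loop responsive
        result = await loop.run_in_executor(pool, blocking_io)
    print(result)
    await task

asyncio.run(main())

Running it prints the ticks interleaved with the blocking result, because the blocking call is pushed off the event loop's thread.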

Continue reading Thinking in coroutines.

(image)



Microservices for Java developers

2016-09-30T10:00:00Z

(image)

Learn how to build scalable, adaptive, complex systems that help your business adjust to rapidly changing competitive markets.

What Can You Expect from This Book?

This book is for Java developers and architects interested in developing microservices. We start the book with the high-level understanding and fundamental prerequisites that should be in place to be successful with a microservice architecture. Unfortunately, just using new technology doesn’t magically solve distributed systems problems. We take a look at some of the forces involved and what successful companies have done to make microservices work for them, including culture, organizational structure, and market pressures. Then we take a deep dive into a few Java frameworks for implementing microservices. The accompanying source-code repository can be found on GitHub. Once we have our hands dirty, we’ll come back up for air and discuss issues around deployment, clustering, failover, and how Docker and Kubernetes deliver solutions in these areas. Then we’ll go back into the details with some hands-on examples with Docker, Kubernetes, and NetflixOSS to demonstrate the power they bring for cloud-native, microservice architectures. We finish with thoughts on topics we cannot cover in this small book but are no less important, like configuration, logging, and continuous delivery.

Microservices are not a technology-only discussion. Implementations of microservices have roots in complex-adaptive theory, service design, technology evolution, domain-driven design, dependency thinking, promise theory, and other backgrounds. They all come together to allow the people of an organization to truly exhibit agile, responsive, learning behaviors to stay competitive in a fast-evolving business world. Let’s take a closer look.

Continue reading Microservices for Java developers.

(image)



What should Alexa do?

2016-09-29T12:00:00Z

The success of the Amazon Echo’s speech interface shows there's an opportunity for someone to build a completely new mobile operating system.

Much as existing phones seemed curiously inert after your first touch of an iPhone screen in 2007, once you've used an Amazon Echo with Alexa, every device that isn't always listening and ready to respond to your commands seems somehow lacking. Alexa, not Siri, not Google Now, and not Cortana, has ushered in the age of the true intelligent agent, and we are approaching a tipping point where speech user interfaces are going to change the entire balance of power in the technology industry.

Last month, I wrote a piece outlining why Apple, Google, Microsoft, automakers, home electronics manufacturers, and appliance makers (virtually every consumer device developer) should be rethinking their user interfaces in light of the Echo's success, asking themselves “What would Alexa do?” I've continued to think about the impact of speech user interfaces, and it's become clear to me that Alexa challenges the very foundations of today's mobile operating systems.

As I illustrated in that previous piece (do read it if you haven't already) with a comparison of conversations with Alexa on the Echo and with Google's voice recognition on my Nexus 6P Android phone, the fundamental interaction paradigm of the phone isn't well suited to the conversational era. Like the touchscreen, the voice agent simply serves as a launcher. Control is passed to whichever app you launch, and once that app is up and running, the voice agent is out of the picture. I'm back in the touchscreen-oriented paradigm of last generation's apps. By contrast, with the Amazon Echo, I can "stack" multiple apps (music, weather, timers, calls out to independent apps like Uber) while Alexa remains on call, dealing with ongoing questions or commands and passing them along to whichever app seems most appropriate.

The more I thought about it, the more I realized that Alexa on the Echo seems so surprising not because its speech recognition is better (it isn't), nor because it lets you ask for things that neither Siri nor Google can do (it doesn't), but because its fundamental human interface is superior. The agent remains continuously, courteously present, doing its best to help. On the phone, the easiest thing for developers to do is to simply use voice to summon the app, and then let the app's old touchscreen metaphor take over. That's what Apple and Google do, and that's why the interactions seem so flawed whenever they involve a function that Siri or Google can't complete on their own.

In short, Apple and Google will need to completely rethink the iOS and Android operating systems for the age of voice. Not only that, every app will have to be refactored to accept the new interaction paradigm.

I'd already been thinking about the further implications of Alexa. In my first piece, I made the case that every device maker would need to redesign for voice control, but I hadn't taken the thought to its logical conclusion: that there's an opportunity for a completely new mobile OS. The question is whether Apple, Google, Amazon, or some as-yet unknown player will seize this advantage. Given Jeff Bezos' penchant for bold bets, I wouldn't put it past Amazon to be the first to create a phone OS for the conversational era. I doubt that the first Alexa-enabled phone will do this, but the limitations of the handoff to Android or iOS will make clear the opportunity.

P.S. A few nights ago, when I was on stage with Mike George and Toni Reid of the Amazon Alexa team for an interview at the Churchill Awards, Mike said something really important when I asked him for the secret of the Echo's success. "We didn't have a screen," he said. And that's exactly it. In the age of voice, you have to design as if there is no screen, even for devices that have one. When you use an app that relies on the screen, you still have to provide affordances for the[...]



Why you need another database

2016-09-29T10:45:00Z

An analytics database can offer performance and scalability advantages.

Do you really need yet another database? Yes, in the age of big data, you do, and it's called an analytical database. It's the right tool for the job when it comes to transforming large amounts of raw data into actionable intelligence. After all, the purpose of a big data initiative is to leverage data for better business decision-making. That requires getting very fast turnarounds to queries. Yet, with the volumes of data accruing at most enterprises today, such turnaround is getting more and more difficult to achieve with traditional data warehouses. Although relational databases can support data warehouses for analytics and business intelligence (BI), an analytics database offers performance and scalability advantages over traditional database software, especially as data volumes continue to grow faster than IT budgets.

Of course, businesses can go the data lake route by setting up a data repository on Hadoop that can accommodate even the heaviest floods of data. But this won't help much with performing the analytics that are necessary to glean knowledge from all the bits and bytes. For example, Hadoop is an extremely cost-effective way to store and process large volumes of structured or unstructured data. It's also designed to optimize batch jobs. But fast, it is not. When time isn't a constraint, Hadoop can be a boon. For more urgent business analytics tasks, however, it's not a big data panacea.

This is where analytical databases come in. Analytical databases typically sit next to the system of record (whether that's Hadoop, Oracle, or Microsoft) to perform speedy analytics of big data. The analytical database is the right tool for the job of fast, powerful, actionable analytics. You should choose your analytical database based on three factors: the structure of your data, the size of your data, and the types of questions you want to ask of your data.

Criteria for choosing an analytical database

Let's look a little more closely at each of these factors:

Structure: Does your data fit into a nice, clean data model? Or is it unstructured, with a schema that lacks clarity, is dynamic, or both? Different analytical databases have different specialties when it comes to structured or unstructured data.

Size: Is your data "big data," or does it have the potential to grow into big data? How big is big? If your answers are "yes" and "very big," you need an analytics database that can scale appropriately.

Analytics: What questions do you want to ask of the data? Short-running queries, or deeper, longer-running, or predictive queries?

Of course, you have other considerations when choosing an analytical database. You'll want to consider the total cost of ownership (TCO) based on the cost per terabyte. A very important point to consider is your staff's familiarity with the database technology involved. (Hadoop is notoriously difficult and unfriendly to new users.) Finally, you'll want to take into account the openness of the database in question and the size of the user community and ecosystem surrounding it.

What should drive your database investment decision

In the end, what drives your database investment decision are the same forces that drive IT decisions in general. You want to:

Increase revenues: You do this by investing in a big data analytics solution that allows you to reach more customers, develop new product offerings, focus on customer satisfaction, and understand your customers' buying patterns.

Enhance efficiency: You accomplish this by choosing a big data analytics solution that reduces software-licensing costs, performs processes more efficiently, takes advantage of new data sources effectively, and accelerates the speed at which information is turned into knowledge.

Improve compliance: Finally, you must choose an analytics database that helps you to comply with local, state, feder[...]



Tom Greever on articulating design decisions

2016-09-29T10:25:00Z

(image)

The O’Reilly Design Podcast: The CEO button, an IDEAL framework, and converting likes into works.

In this week’s Design Podcast, I sit down with Tom Greever, UX director at Bitovi and author of Articulating Design Decisions. We talk about how to effectively explain your design decisions, avoiding the CEO button, and how saying 'yes' is a facilitation superpower.

Continue reading Tom Greever on articulating design decisions.

(image)



Lili Cheng on bot personalities

2016-09-29T10:15:00Z

(image)

The O’Reilly Bots Podcast: Group interaction through social computing.

In this episode of the O’Reilly Bots podcast, Pete Skomoroch and I speak with Lili Cheng, general manager of FUSE Labs at Microsoft Research. Cheng’s team is responsible for the Microsoft Bot Framework. She’s also a speaker at O’Reilly’s upcoming Bot Day on October 19, 2016, in San Francisco.

Cheng talks about Microsoft’s experimental bots and their goal of making conversations playful and engaging. We also discuss the importance of designing good dialog; the potential of workplace bots; and Xiaoice, Microsoft’s popular Chinese chatbot. We also reflect on the fate and significance of Microsoft’s Tay bot.

Continue reading Lili Cheng on bot personalities.

(image)



Four short links: 29 September 2016

2016-09-29T10:10:00Z

AI and Anthropology, 3D Screen, Move to New Zealand, and Kano 2

  1. Anthropologist on AI -- not just any anthropologist, this is Genevieve Bell (Intel's anthropologist). I loved this talk.
  2. Volume -- a volumetric display, preorder for $1K.
  3. Edmund Hillary Fellowship -- in 2017, NZ will offer three-year open work visas to visionary entrepreneurs, investors, and startup teams to live and work in New Zealand, and to create scalable, positive global impact. Just in time for those who wish to flee President Trump.
  4. Kano 2 -- the Kano is an awesome tinkerable computer, and their second edition looks even better: hardware for camera, screen, and sound, with software that lets kids engage with it.

Continue reading Four short links: 29 September 2016.

(image)



What is the Docker workflow?

2016-09-29T10:00:00Z

(image)

From spinning up containers to shipping software as a team.

The Path to Production Containers

In this chapter, we cover some of the ideas around deploying and testing containers in production. This chapter is intended to show you how you might take containers to production based on our experience doing so. There are a myriad of ways in which you will probably need to tailor this to your own application and environment. The purpose of this chapter is really to provide a starting point and to help you understand the Docker philosophy in practical terms.

Deploying

Deployment, which is often the most mine-ridden of the steps in getting to production, is made vastly simpler by the shipping container model. If you can imagine what it was once like to load goods into a ship to take across the ocean before shipping containers existed, you can get a sense of what most deployment systems look like. In that old shipping model, random-sized boxes, crates, barrels, and all manner of other packaging were all loaded by hand onto ships. They then had to be manually unloaded by someone who could tell which pieces needed to be unloaded first so that the whole pile wouldn’t collapse like a Jenga puzzle.
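As a loose sketch of that standard-box idea (my own illustration using the Docker SDK for Python, not the book's tooling; the image name, tag, and port are made up), building an image once and then running that same immutable artifact anywhere looks roughly like this:

import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build the "shipping container": assumes a Dockerfile exists in the current directory.
image, build_logs = client.images.build(path=".", tag="myapp:1.0")

# Run the same image in any environment; only runtime configuration changes.
container = client.containers.run(
    "myapp:1.0",
    detach=True,
    ports={"8080/tcp": 8080},
    environment={"ENV": "production"},
)
print(container.short_id, container.status)

The point is not the specific API but the workflow: the artifact that was built and tested is exactly the artifact that gets deployed.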

Continue reading What is the Docker workflow?.

(image)



Concurrency in modern apps

2016-09-29T10:00:00Z

(image)

Thriving in the cloud with horizontal scaling.

In Beyond the Twelve-Factor App, I present a new set of guidelines that builds on Heroku’s original 12 factors and reflects today’s best practices for building cloud-native applications. I have changed the order of some to indicate a deliberate sense of priority, and added factors such as telemetry, security, and the concept of “API first” that should be considerations for any application that will be running in the cloud. These new 15-factor guidelines are:

  1. One codebase, one application
  2. API first
  3. Dependency management
  4. Design, build, release, and run
  5. Configuration, credentials, and code
  6. Logs
  7. Disposability
  8. Backing services
  9. Environment parity
  10. Administrative processes
  11. Port binding
  12. Stateless processes
  13. Concurrency
  14. Telemetry
  15. Authentication and authorization

Concurrency (factor 8 in Heroku's original twelve, factor 13 in the list above) advises us that cloud-native applications should scale out using the process model. There was a time when, if an application reached the limit of its capacity, the solution was to increase its size. If an application could only handle some number of requests per minute, then the preferred solution was to simply make the application bigger.

Adding CPUs, RAM, and other resources (virtual or physical) to a single monolithic application is called vertical scaling, and this type of behavior is typically frowned upon in civilized society these days.

A much more modern approach, one ideal for the kind of elastic scalability that the cloud supports, is to scale out, or horizontally. Rather than making a single big process even larger, you create multiple processes, and then distribute the load of your application among those processes.

Most cloud providers have perfected this capability to the point where you can even configure rules that will dynamically scale the number of instances of your application based on load or other runtime telemetry available in a system.

If you are building disposable, stateless, share-nothing processes, then you will be well positioned to take full advantage of horizontal scaling and running multiple, concurrent instances of your application so that it can truly thrive in the cloud.
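As a minimal sketch of what such a share-nothing process can look like (my own illustration, not from the book), here is a worker that keeps nothing in process memory between requests and reads its port from the environment, so any number of identical copies can run behind a load balancer and be added or killed freely:

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Handling is a function of the request plus backing services (database, cache);
        # no per-process session state is stored here, so any instance can serve any request.
        body = f"handled by pid {os.getpid()}\n".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # port binding supplied by the environment
    HTTPServer(("", port), Handler).serve_forever()

Session state or anything else that must survive a single request belongs in a backing service, not in the process.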

Continue reading Concurrency in modern apps.

(image)



Highlights from Strata + Hadoop World in New York 2016

2016-09-28T21:00:00Z

Watch highlights covering data science, big data, data in the enterprise, and more. From Strata + Hadoop World in New York 2016.

Experts from across the data world are coming together in New York for Strata + Hadoop World in New York 2016. Below you'll find links to highlights from the event.

The new dynamics of big data
Mike Olson says a renewed approach focused on where, who, and why can lead to cutting edge data solutions. Watch "The new dynamics of big data."

Decision 2016: What is your data platform?
What’s good for the country is good for your data. Jack Norris says organizations need to consider what their next four years will look like. Watch "Decision 2016: What is your data platform?"

U.S. venture: Risk, values, founder outcomes
Susan Woodward discusses venture outcomes—what fraction make lots of money, which just barely return capital, and which fraction fail completely. Watch "U.S. venture: Risk, values, founder outcomes."

Collaboration and openness drive innovation in artificial intelligence
The power of AI and advanced analytics is realized from the ability to analyze and compute large data sets from varied devices and locations. Martin Hall says collaboration and openness are key elements driving this innovation. Watch "Collaboration and openness drive innovation in artificial intelligence."

Driving open source adoption within the enterprise
Ron Bodkin explains how Teradata spurs open source adoption inside enterprises through a range of initiatives. Watch "Driving open source adoption within the enterprise."

Modern analytics with Dell EMC
With data increasing at an alarming rate, Patricia Florissi says it is essential that it be a component of your deepest insights. Watch "Modern analytics with Dell EMC."

Transforming health care through precision data science: Myths & facts
Sriram Vishwanath outlines areas where data science can have a significant impact on health care, and dispels myths where data science use contradicts realities within the health care ecosystem. Watch "Transforming health care through precision data science: Myths & facts."

Continue reading Highlights from Strata + Hadoop World in New York 2016.[...]



The new dynamics of big data

2016-09-28T20:00:00Z

(image)

Mike Olson discusses the new dynamics of big data and how a renewed approach focused on where, who, and why can lead to cutting edge solutions.

Continue reading The new dynamics of big data.

(image)



Transforming health care through precision data science: Myths & facts

2016-09-28T20:00:00Z

(image)

Sriram Vishwanath outlines areas where data science can have a significant impact on health care, and dispels myths where data science use contradicts realities within the health care ecosystem.

Continue reading Transforming health care through precision data science: Myths & facts.

(image)



Decision 2016: What is your data platform?

2016-09-28T20:00:00Z

(image)

What’s good for the country is good for your data. Consider what the next four years will look like for your organization.

Continue reading Decision 2016: What is your data platform?.

(image)



Collaboration and openness drive innovation in artificial intelligence

2016-09-28T20:00:00Z

(image)

The power of AI and advanced analytics is realized from the ability to analyze and compute large data sets from varied devices and locations. Learn how collaboration and openness are key elements driving this innovation.

Continue reading Collaboration and openness drive innovation in artificial intelligence.

(image)



Modern analytics with Dell EMC

2016-09-28T20:00:00Z

(image)

With your most precious commodity, data, increasing at an alarming rate, it is essential that it be a component of your deepest insights.

Continue reading Modern analytics with Dell EMC.

(image)



U.S. venture: Risk, values, founder outcomes

2016-09-28T20:00:00Z

(image)

Susan Woodward discusses venture outcomes—what fraction make lots of money, which just barely return capital, and which fraction fail completely.

Continue reading U.S. venture: Risk, values, founder outcomes.

(image)



Driving open source adoption within the enterprise

2016-09-28T20:00:00Z

(image)

Ron Bodkin explains how Teradata spurs open source adoption inside enterprises through a range of initiatives.

Continue reading Driving open source adoption within the enterprise.

(image)



Josh Corman on the challenges of securing safety-critical health care systems

2016-09-28T13:15:00Z

(image)

The O’Reilly Security Podcast: Where bits and bytes meet flesh, misaligned incentives, and hacking the security industry itself.

In this episode, I talk with Josh Corman, co-founder of I Am the Cavalry and director of the Cyber Statecraft Initiative for the non-profit organization Atlantic Council. We discuss his recent work advising the White House and Congress on the many issues lurking in safety-critical systems in the health care industry, the misaligned incentives across health care, regulatory bodies and the software industry, and the recent incident between MedSec and St. Jude regarding their medical devices.

Continue reading Josh Corman on the challenges of securing safety-critical health care systems.

(image)



Strata + Hadoop World in New York 2016 livestream

2016-09-28T12:45:00Z

(image)

Watch keynotes from Strata + Hadoop World in New York City.

Continue reading Strata + Hadoop World in New York 2016 livestream.

(image)



Four short links: 28 September 2016

2016-09-28T12:40:00Z

Offline First, Machine Translation, Kernel Security, and Javascript Maps

  1. Offline First -- as Google designs for the Rest of The World, they're learning to build entirely different priorities and assumptions into their software. Meet YouTube Go: a new YouTube app built from scratch to bring YouTube to the next generation of viewers. YouTube Go is designed with four concepts in mind. It’s relatable, with video recommendations and a user interface that is made for you. The app is designed to be offline first and work even when there’s low or no connectivity. It’s also cost effective, providing transparency and reducing data usage. And finally, it’s a social experience, connecting you with the people and content you care about.
  2. Google's Neural Machine Translation System -- On the WMT English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state of the art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% compared to Google's phrase-based production system.
  3. Linux Kernel Security Needs Fixing -- "Cars were designed to run but not to fail," Kees Cook, head of the Linux Kernel Self Protection Project, and a Google employee working on the future of IoT security, said at the summit. "Very comfortable while you're going down the road, but as soon as you crashed, everybody died. That's not acceptable anymore," he added, "and in a similar fashion the Linux kernel needs to deal with attacks in a manner where it actually is expecting them and actually handles gracefully in some fashion the fact that it's being attacked."
  4. Leaflet -- JavaScript library for mobile-friendly interactive maps.

Continue reading Four short links: 28 September 2016.

(image)



Highlights from the O'Reilly AI Conference in New York 2016

2016-09-28T04:00:00Z

Watch highlights covering artificial intelligence, machine learning, intelligence engineering, and more. From the O'Reilly AI Conference in New York 2016.

Experts from across the AI world came together in New York for the O’Reilly AI Conference in New York 2016. Below you'll find links to highlights from the event.

Software engineering of systems that learn in uncertain domains
Building reliable, robust software is hard, says Peter Norvig. It's even harder when we move from deterministic domains, such as balancing a checkbook, to uncertain domains, such as recognizing speech or objects in an image. Watch "Software engineering of systems that learn in uncertain domains."

Why we'll never run out of jobs
Tim O’Reilly explains why we can’t just use technology to replace people; we must use it to augment them so that they can do things that were previously impossible. Watch "Why we'll never run out of jobs."

Artificial intelligence: Making a human connection
Genevieve Bell explores the meaning of “intelligence” within the context of machines and its cultural impact on humans and their relationships. Watch "Artificial intelligence: Making a human connection."

Humanizing AI development: Lessons from China and Xiaoice at Microsoft
Lili Cheng discusses the human aspects of artificial intelligence. Watch "Humanizing AI development: Lessons from China and Xiaoice at Microsoft."

How AI is propelling driverless cars, the future of surface transport
Shahin Farshchi examines the role artificial intelligence will play in driverless cars. Watch "How AI is propelling driverless cars, the future of surface transport."

Obstacles to progress in AI
Yann LeCun says significant progress in AI will require breakthroughs in unsupervised/predictive learning, as well as in reasoning, attention, and episodic memory. Watch "Obstacles to progress in AI."

Minds and brains, and the route to smarter machines
Gary Marcus discusses the machine-human connection. Watch "Minds and brains, and the route to smarter machines."

Thor’s hammer
Jim McHugh shares real-world examples of companies solving problems once thought unsolvable. Watch "Thor’s hammer."

Lessons on building data products at Google
Aparna Chennapragada provides an overview of Google's process for developing data products. Watch "Lessons on building data products at Google."

Deep learning at scale and use cases
Naveen Rao outlines deep learning challenges and explores how changes to the organization of computation and communication can lead to advances in capabilities. Watch "Deep learning at scale and use cases."

Why AI needs emotion
Rana El Kaliouby explores why emotion in AI is critical to accelerating adoption of AI systems. Watch "Why AI needs emotion."

Continue reading Highlights from the O'Reilly AI Conference in New York 2016.[...]



Why AI needs emotion

2016-09-27T20:00:00Z

(image)

Rana El Kaliouby explores why emotion in AI is critical to accelerating adoption of AI systems.

Continue reading Why AI needs emotion.

(image)



Deep learning at scale and use cases

2016-09-27T20:00:00Z

(image)

Naveen Rao outlines deep learning challenges and explores how changes to the organization of computation and communication can lead to advances in capabilities.

Continue reading Deep learning at scale and use cases.

(image)



Minds and brains, and the route to smarter machines

2016-09-27T20:00:00Z

(image)

Gary Marcus discusses the machine-human connection.

Continue reading Minds and brains, and the route to smarter machines.

(image)



Thor’s hammer

2016-09-27T20:00:00Z

(image)

Jim McHugh shares real-world examples of companies solving problems once thought unsolvable.

Continue reading Thor’s hammer.

(image)



Obstacles to progress in AI

2016-09-27T20:00:00Z

(image)

Significant progress in AI will require breakthroughs in unsupervised/predictive learning, as well as in reasoning, attention, and episodic memory.

Continue reading Obstacles to progress in AI.

(image)



Lessons on building data products at Google

2016-09-27T20:00:00Z

(image)

Aparna Chennapragada discusses Google's process for developing data products.

Continue reading Lessons on building data products at Google.

(image)



Four short links: 27 September 2016

2016-09-27T12:10:00Z

Microsoft's Fuzzer, Fed Game, MadLibs Machine, and Secure Time

  1. Microsoft Springfield -- Microsoft's fuzzing tool, now "with AI." Good to see competition not just around keeping software running in the cloud, but now around making sure your code is cloud-safe in the first place. (And by "cloud-safe," I mean against the cybers.)
  2. Chair the Fed -- a monetary policy game. Monetary policy and games, two tastes that taste together. (via Bloomberg)
  3. The Eureka -- Victorian mechanical MadLibs machine.
  4. RoughTime -- secure time synchronization. (via Imperial Violet)

Continue reading Four short links: 27 September 2016.

(image)



Security and performance: Breaking the conundrum ... again!

2016-09-27T10:00:00Z

(image)

Learn how security can be enforced at the browser level through a combination of optimization techniques and security enhancements.

Continue reading Security and performance: Breaking the conundrum ... again!.

(image)



Why we'll never run out of jobs

2016-09-26T23:00:00Z

(image)

Tim O’Reilly explains why we can’t just use technology to replace people; we must use it to augment them so they can do things that were previously impossible.

Continue reading Why we'll never run out of jobs.

(image)



Software engineering of systems that learn in uncertain domains

2016-09-26T20:00:00Z

(image)

Building reliable, robust software is hard. It is even harder when we move from deterministic domains, such as balancing a checkbook, to uncertain domains, such as recognizing speech or objects in an image.

Continue reading Software engineering of systems that learn in uncertain domains.

(image)



Artificial intelligence: Making a human connection

2016-09-26T20:00:00Z

(image)

Genevieve Bell explores the meaning of “intelligence” within the context of machines and its cultural impact on humans and their relationships.

Continue reading Artificial intelligence: Making a human connection.

(image)



Humanizing AI development: Lessons from China and Xiaoice at Microsoft

2016-09-26T20:00:00Z

(image)

Lili Cheng discusses the human aspects of artificial intelligence.

Continue reading Humanizing AI development: Lessons from China and Xiaoice at Microsoft.

(image)



How AI is propelling driverless cars, the future of surface transport

2016-09-26T20:00:00Z

(image)

Shahin Farshchi examines the role artificial intelligence will play in driverless cars.

Continue reading How AI is propelling driverless cars, the future of surface transport.

(image)



O'Reilly AI Conference in New York 2016 livestream

2016-09-26T12:45:00Z

(image)

Watch keynotes from the O'Reilly artificial intelligence conference in New York City.

Continue reading O'Reilly AI Conference in New York 2016 livestream.

(image)



Four short links: 26 September 2016

2016-09-26T11:20:00Z

Linking Records, Encrypted Editing, Neural Photo Editing, and Self-Care Resources

  1. A Bayesian Approach to Graphical Record Linkage and De-duplication (PDF) -- When data about individuals comes from multiple sources, it is often desirable to match, or link, records from different files that correspond to the same individual. Other names associated with record linkage are entity disambiguation, entity resolution, and coreference resolution, meaning that records that are linked or co-referent can be thought of as corresponding to the same underlying entity.
  2. CryptPad -- encrypted group text editing, but the server doesn't know the plaintext being edited. Open source.
  3. Neural Photo Editor -- don't clone; paint with "what looks like it might go here." Bizarre!
  4. selfcare.tech -- a repository of self-care resources for developers & others.

Continue reading Four short links: 26 September 2016.

(image)



Reducing Risk in the Petroleum Industry

2016-09-23T11:00:00Z

(image)

Learn the challenges oil and gas companies face when collecting data and how they mitigate short-term operational risk and optimize long-term reservoir management.

Reducing Risk in the Petroleum Industry: Machine Data and Human Intelligence

Introduction

To the buzzword-weary, Big Data has become the latest in the infinite series of technologies that "change the world as we know it." But amidst the hype, there is an epochal shift: the current exponential growth in data is unprecedented and is not showing any signs of slowing down.

Compared to the short timelines of technology startups, the long history of the petroleum industry provides stark examples to illustrate this change. Seismic research happens early in the exploration and extraction stages. In 1990, one square kilometer yielded 300 megabytes of seismic data. In 2015, this was 10 petabytes—33 million times more, according to Satyam Priyadarshy, chief data scientist at Halliburton. First principles, intuition, and manual arts are overwhelmed by this volume and variety of data. Data-driven models, however, can derive immense value from this data flood. This report gathers highlights from Strata+Hadoop World conferences that showcase the use of data science to minimize risk in the petroleum industry.

Continue reading Reducing Risk in the Petroleum Industry.

(image)



2 major reasons why modern C++ is a performance beast

2016-09-23T10:00:00Z

Use smart pointers and move semantics to supercharge your C++ code base.

Representing the first major update in the 13 years since 1998, the age of "modern" C++ was heralded with the ambitious C++11 standard. Three years later, C++14 emerged to represent the completion of the overall feature set the committee had been aiming for during that original 13-year gestation period. One only needs to do a bit of Googling to see that there are a lot of new features in modern C++. In this article, I’ll focus on two key features that represent major milestones in C++’s performance evolution: smart pointers and move semantics.

Smart pointers

The Prime Directive in the C/C++ continuum has always been performance. As I often tell groups when teaching C++, when I ask a question beginning with "Why" concerning the rationale for a particular C++ language or library feature, they have a 90% chance of getting the answer right by replying with a single word: "Performance." Raw pointers may be fragile and prone to errors, but they’re so close to the machine that code using them runs like a bat out of hell. For decades, there was no better way to satisfy the need for speed demanded by a large class of applications. Memory leaks, segfaults, and torturous debugging sessions were simply the price of doing business if you needed the level of performance raw pointers uniquely provided.

The trouble with raw pointers is that there are too many ways to misuse them, including: forgetting to initialize them, forgetting to release dynamic memory, and releasing dynamic memory too many times. Many such problems can be mitigated or even completely eliminated through the use of smart pointers—class templates designed to encapsulate raw pointers and greatly improve their overall reliability. C++98 provided the auto_ptr template that did part of the job, but there just wasn't enough language support to do that tricky job completely. As of C++11, that language support is there, and as of C++14 not only is there no remaining need for the use of raw pointers in the language, but there's rarely even any need for the use of the raw new and delete operators. The reliability (including exception safety) of code performing dynamic memory allocation goes way up in modern C++, without any corresponding cost in performance whatsoever. This represents perhaps the most visible paradigm shift in the transition to modern C++: the replacement of raw pointers with smart pointers just about everywhere imaginable.

The below code illustrates the fundamental difference between using a raw pointer and a unique_ptr to manage dynamic memory:

// ----------------------------
// Using old-style raw pointers:
// ----------------------------
Widget *getWidget();

void work()
{
    Widget *wptr = getWidget();
    // Exception or return here: Widget never released!
    delete wptr;  // manual release required
}

//-----------------------------
// Using Modern C++ unique_ptr:
//-----------------------------
unique_ptr<Widget> getWidget();

void work[...]



The challenges of holiday performance readiness

2016-09-23T09:30:00Z

Five important things you can do to survive the holiday rush.

Preparing for holidays, especially if you're engaging in commerce, can be a stressful time if you haven't done your homework. Black Friday and other holidays can cause traffic to skyrocket, often exceeding 100x non-holiday peak. Failure during Black Friday or other peak times could put you out of a job, generate bad press for your company, and even impact your company's stock price. To survive, you need to first understand that holiday readiness is primarily a management activity. While technology is involved, it is not the focus. Let's review some of the top ways to ensure you have a stress-free holiday or special event.

Remove the “human” element

Humans are bad at repeatedly performing even basic tasks. From driving a car to eating soup without spilling it on your shirt, humans make mistakes. In our industry, one mistyped character can mean the difference between success and failure. The airline industry has coped with this by making extensive use of automation (autopilot) and checklists. These two principles can be directly applied to holiday readiness.

Automation should be extensively employed to avoid downtime. Auto-scaling should be employed at all levels to avoid mistakes while provisioning hardware and software. The installation and configuration of all software should be scripted. Health checking should be automated. A human's job should be to automate—not to manually perform tasks.

Where automation cannot be employed, checklists should be used—just like pilots use in the cockpit. For example, each team should develop an extensive checklist verifying that their part of the overall system is functioning. Check for file permission issues, web server configuration, iptables rules, etc. In addition to verification checklists, have checklists for what happens in an outage. Whose role is it to communicate with executives? Whose role is it to communicate with each vendor? There shouldn't be any "guessing." You should have checklists for everything.

Manage change properly

To avoid unexpected downtime, you should aim to make as few changes as possible. Changes include deploying new custom code, changing or upgrading supporting software (application servers, databases, firmware, etc.), manually adding new network gear, upgrading hardware, and so on. Every change is a possible cause of an outage. Many retailers freeze their production systems for the months of October and November in preparation for Black Friday, with any change requiring the CIO to sign off. While this puts business and technology progress on hold, it does wonders for stability. Changes that need to go into production close to your special event should go through an extensive change management process. Remember, many have lost their jobs due to outages.

Cache everything

Most of the traffic for a holiday is likely to be for a handful of pages, like the home page, category overview pages, and produc[...]
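As a hypothetical illustration of the "cache everything" advice (my own sketch, not from the article), even a tiny in-process time-based cache keeps a handful of hot pages from being rebuilt on every request during a spike; in production this would typically sit behind a CDN or a shared cache rather than live in one process:

import time
import functools

def ttl_cache(ttl_seconds):
    # Cache a function's results for ttl_seconds, keyed by its positional arguments.
    def decorator(fn):
        cache = {}
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit and now - hit[0] < ttl_seconds:
                return hit[1]          # still fresh: serve the cached result
            value = fn(*args)
            cache[args] = (now, value)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=30)
def render_home_page():
    # Expensive template rendering and database work would happen here.
    return "<html>...</html>"

With a 30-second TTL, the home page is rebuilt at most twice a minute per process, no matter how high traffic climbs.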



Four short links: 23 September 2016

2016-09-23T09:00:00Z

On Reproducibility, Robot Monkey Startup, Stealing Predictive Models, and GPU Equivalence

  1. The Winds Have Changed -- wonderfully constructed rebuttal to a self-serving "those nasty people saying they can't reproduce our media-packaged research findings are just terrible stone-throwers, ignore them" editorial, which builds and builds and should have you reaching for a can of petrol and a lighter by the end.
  2. Kindred AI -- using artificial intelligence and high-tech exoskeleton suits to allow humans—and, at least according to one description of the technology, monkeys, too—to control and train an army of intelligent robots. Planet of the Apes inches its way closer to being.
  3. Stealing Machine Learning Models via Prediction APIs -- Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. (A minimal sketch of the core idea appears after this list.)
  4. GPU Equivalence for Deep Learning -- In our own testing, we've found that one GPU server is about as fast as 400 CPU cores for running the algorithms we're using. The article itself is an unremarkable overview, but this anecdatum leapt out at me.
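To make link 3 concrete, here is a minimal sketch (my own simplification of the paper's equation-solving idea, with an assumed predict_proba API) of why a logistic regression behind a prediction API that returns confidence values can be recovered with only d+1 queries: log(p/(1-p)) = w·x + b is linear in the unknown weights.

import numpy as np

def steal_logistic_regression(predict_proba, d, rng=None):
    # predict_proba(x) stands in for the victim API: it returns the positive-class
    # confidence for a length-d feature vector x (an assumption about the API's shape).
    if rng is None:
        rng = np.random.default_rng(0)
    X = rng.normal(size=(d + 1, d))                # d+1 probe inputs
    p = np.array([predict_proba(x) for x in X])    # confidence values from the API
    logits = np.log(p / (1 - p))                   # invert the sigmoid
    A = np.hstack([X, np.ones((d + 1, 1))])        # unknowns are [w, b]
    wb = np.linalg.solve(A, logits)                # exact solve: d+1 equations, d+1 unknowns
    return wb[:-1], wb[-1]                         # recovered weights and bias

Neural networks and decision trees need more elaborate attacks; this only shows why returning high-precision confidence values makes extraction so easy.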

Continue reading Four short links: 23 September 2016.

(image)



Keynotes from the O'Reilly Velocity Conference in New York 2016

2016-09-22T19:05:00Z

Watch full keynotes covering DevOps, performance, infrastructure, and more. From the O'Reilly Velocity Conference in New York 2016.

Experts from across the web operations and performance worlds came together in New York for the O’Reilly Velocity Conference in New York 2016. Below you'll find links to keynotes from the event.

Don't gamble when it comes to reliability
How do you stay reliable when you can’t keep the whole system in your head? Tom Croucher discusses Uber's approach to reliability. Watch "Don't gamble when it comes to reliability."

DevOps, collaboration, and globally distributed teams
Ashish Kuthiala presents research-based findings on the factors that play the most important roles in accelerating DevOps adoption. Watch "DevOps, collaboration, and globally distributed teams."

Serverless is other people
Rachel Chalmers explores what serverless means for security, networking, support, and culture. Watch "Serverless is other people."

We need a bigger goal than collecting data
Mehdi Daoudi challenges business leaders and IT ops professionals to consider the ROI of analyses. How quickly can we get real insights from our data? Watch "We need a bigger goal than collecting data."

Situation normal: All fouled up
Richard Cook and David Woods examine the problems and potential in Internet-facing business incident response. Watch "Situation normal: All fouled up."

Data science: Next-gen performance analytics
Ken Gardner looks at the latest innovations in performance analytics and how data science can be used in surprising ways to visualize and prioritize improvements. Watch "Data science: Next-gen performance analytics."

Two years of the U.S. Digital Service
Now two years old and including about 150 people spanning a network of federal agencies, the U.S. Digital Service has taken on immigration, education, veterans benefits, and health data interoperability. Watch "Two years of the U.S. Digital Service."

Building bridges with DevOps
Katherine Daniels explains how the principles of effective DevOps environments can be used to create sustainable participation from a wide range of people. Watch "Building bridges with DevOps."

Make performance data—and beyond—accessible
Alois Reitbauer discusses the conversational interface Dynatrace has built to make performance data accessible through natural language questions. Watch "Make performance data—and beyond—accessible."

Turning data into leverage
Ozan Turgut discusses how to use visualization and analytics to apply data to decision making. Watch "Turning data into leverage."

Is ad blocking good for advertisers?
Tony Ralph explains why the rise of ad blocking could incite progress in online advertising. Watch "Is ad blocking good for advertisers?"

Transforming how the world operates software
The problems a[...]



Transforming how the world operates software

2016-09-22T19:00:00Z

(image)

The problems around software aren’t all solved and the story isn’t over. What role will you play?

Continue reading Transforming how the world operates software.

(image)



Turning data into leverage

2016-09-22T19:00:00Z

(image)

Ozan Turgut discusses how to use visualization and analytics to apply data to decision making.

Continue reading Turning data into leverage.

(image)



Security at the speed of innovation: Defensive development for a fast-paced world

2016-09-22T19:00:00Z

(image)

Kelly Lum shares her experiences maintaining a break-neck pace while still producing hacker-resilient code.

Continue reading Security at the speed of innovation: Defensive development for a fast-paced world.

(image)



Is ad blocking good for advertisers?

2016-09-22T19:00:00Z

(image)

Tony Ralph explains why the rise of ad blocking could incite progress in online advertising.

Continue reading Is ad blocking good for advertisers?.

(image)



Make performance data–and beyond–accessible

2016-09-22T19:00:00Z

(image)

Alois Reitbauer discusses the conversational interface Dynatrace has built to make performance data accessible through natural language questions.

Continue reading Make performance data–and beyond–accessible.

(image)



Building bridges with DevOps

2016-09-22T19:00:00Z

(image)

Katherine Daniels explains how the principles of effective DevOps environments can be used to create sustainable participation from a wide range of people.

Continue reading Building bridges with DevOps.

(image)



What is hardcore data science—in practice?

2016-09-22T11:40:00Z

(image)

The anatomy of an architecture to bring data science into production.

Data science has become widely accepted across a broad range of industries in the past few years. Originally more of a research topic, data science has early roots in scientists' efforts to understand human intelligence and create artificial intelligence; it has since proven that it can add real business value.

As an example, we can look at the company I work for: Zalando, one of Europe’s biggest fashion retailers, where data science is heavily used to provide data-driven recommendations, among other things. Recommendations are provided as a back-end service in many places, including product pages, catalogue pages, and newsletters, and for retargeting.

Continue reading What is hardcore data science—in practice?.

(image)



Haakon Faste on designing for a "post-human" world

2016-09-22T11:34:00Z

(image)

The O'Reilly Radar Podcast: perceptual robotics, post-evolutionary humans, and designing our future with intent.

In this Radar Podcast episode, I chat with Haakon Faste, a design educator and innovation consultant. We talk about his interesting career path, including his perceptual robotics work, his teaching approaches, and his mission with the Ralf A. Faste Foundation. We also talk about navigating our way to a "post-human" world and the importance of designing to make the world a more human-centered place.

Continue reading Haakon Faste on designing for a "post-human" world.

(image)



Data architectures for streaming applications

2016-09-22T11:30:00Z

(image)

The O’Reilly Data Show Podcast: Dean Wampler on streaming data applications, Scala and Spark, and cloud computing.

In this episode of the O’Reilly Data Show I sat down with O’Reilly author Dean Wampler, big data architect at Lightbend. We talked about new architectures for stream processing, Scala, and cloud computing.

Continue reading Data architectures for streaming applications.

(image)



Andy Mauro on bot platforms and tools

2016-09-22T11:15:00Z

(image)

The O’Reilly Bots Podcast: A look at some of the technologies behind the chatbot boom.

In this episode of the O’Reilly Bots Podcast, Pete Skomoroch and I speak with Andy Mauro, co-founder and CEO of Automat, a startup whose tools make it easy to build AI-powered bots. (Disclosure: Automat is a portfolio company of O’Reilly AlphaTech Ventures, a VC firm affiliated with O’Reilly Media.) Mauro will be speaking at O’Reilly Bot Day on October 19, 2016, in San Francisco.

Continue reading Andy Mauro on bot platforms and tools.

(image)



Trends shaping the London tech scene

2016-09-22T10:00:00Z

(image)

London's tech scene has not only pervaded all of the city's world-leading activities but has also created a vibrant, independent business environment of its own.

(image)
Figure 1-1. London does have unicorns

Finance, journalism, trade, the arts, government—London has been known as a world leader in these areas for centuries. Computer technology doesn’t arouse such an immediate association with London, but since the mid-2000s it has steadily taken hold in all those areas of London activity and has created a vibrant, independent business environment of its own. While not as "hot" as Silicon Valley (no Facebooks or Apples have been launched in London, and few unicorns), London’s tech scene obeys its own "slow and steady" growth model.

This report aims to be a comprehensive view of the computer technology scene in London: where it stands, some of its origins, who's participating in it, and what feeds its strengths.

Continue reading Trends shaping the London tech scene.

(image)



Evolving architectures of FinTech

2016-09-22T10:00:00Z

(image)

Learn how new FinTech architectures and startups are creating novel types of business models in Africa and Asia, where there are far fewer traditional banks, and in Europe and the US, where financial institutions generally avoid the market for small business loans.

Fintech, or financial technology, is often reduced to breathless sound bites, such as “It’s like having a bank in your smartphone!” or “By this time next year, no one will be carrying cash or writing checks!”

But the fintech phenomenon is broadly misunderstood, mainly because disruption is a sexier headline word than integration. In the vast majority of cases, fintech solutions will be integrated with existing systems of hardware and software. From the perspective of fintech developers, the challenge is integrating new software with old systems. From the perspective of financial services institutions, the challenge is providing operating platforms that are friendly to developers.

Continue reading Evolving architectures of FinTech.

(image)



Four short links: 22 September 2016

2016-09-22T09:00:00Z

Ops Papers, Moral Tests, Self-Powered Computing Materials, and Self-Driving Regulation

  1. Operability (Morning Paper) -- text of a talk that was a high-speed run past a lot of papers that cover ops issues. Great read, which will swell your reading list.
  2. Moral Machine (MIT) -- We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you decide which outcome you think is more acceptable. You can then see how your responses compare with those of other people.
  3. Self-Powered "Materials That Compute" and Recognize Simple Patterns -- “By combining these attributes into a ‘BZ-PZ’ unit and then connecting the units by electrical wires, we designed a device that senses, actuates, and communicates without an external electrical power source,” the researchers explain in the paper.
  4. NHTSA Guidance on Autonomous Vehicles -- requires companies developing self-driving cars to share a lot of data with the regulator.

Continue reading Four short links: 22 September 2016.

(image)



We need a bigger goal than collecting data

2016-09-21T20:00:00Z

(image)

Mehdi Daoudi challenges business leaders and IT ops professionals to consider the ROI of analyses. How quickly can we get real insights from our data?

Continue reading We need a bigger goal than collecting data.

(image)



Data science: Next-gen performance analytics

2016-09-21T20:00:00Z

(image)

Ken Gardner looks at the latest innovations in performance analytics and how data science can be used in surprising ways to visualize and prioritize improvements.

Continue reading Data science: Next-gen performance analytics.

(image)



Two years of the U.S. Digital Service

2016-09-21T20:00:00Z

(image)

Now two years old and including about 150 people spanning a network of federal agencies, the U.S. Digital Service has taken on immigration, education, veterans benefits, and health data interoperability.

Continue reading Two years of the U.S. Digital Service.

(image)



Don't gamble when it comes to reliability

2016-09-21T20:00:00Z

(image)

How do you stay reliable when you can’t keep the whole system in your head? Tom Croucher discusses Uber's approach to reliability.

Continue reading Don't gamble when it comes to reliability.

(image)



Situation normal: All fouled up

2016-09-21T20:00:00Z

(image)

Richard Cook and David Woods examine the problems and potential in Internet-facing business incident response.

Continue reading Situation normal: All fouled up.

(image)