2017-01-18T13:00:00Z

Putting deep learning into practice with new tools, frameworks, and future developments.

Deep learning has made tremendous advances in the past year. Though managers are aware of what's been happening in the research world, we're still in the early days of putting that research into practice. While the resurgence in interest stems from applications in computer vision and speech, more companies can actually use deep learning on data they already have—including structured data, text, and time-series data. All of this interest in deep learning has led to more tools and frameworks, including some that target non-experts already using other forms of machine learning (ML). Many devices will benefit from these technologies, so expect streaming applications to be infused with intelligence. Finally, there are many interesting research initiatives that point to future neural networks, with different characteristics and enhanced model-building capabilities.

Back to machine learning

If you think of deep learning as yet another machine learning method, then the essential ingredients should be familiar. Software infrastructure to deploy and maintain models remains paramount. A widely cited paper from Google uses the concept of technical debt to posit that “only a small fraction of real-world ML systems is composed of ML code.” This means that while underlying algorithms are important, they tend to be a small component within a complex production system. As the authors point out, machine learning systems also need to address ML-specific entanglement and dependency issues involving data, features, hyperparameters, models, and model settings (they refer to this as the CACE principle: Changing Anything Changes Everything). Deep learning has also often meant specialized hardware (often GPUs) for training models.
For companies that already use SaaS tools, many of the leading cloud platforms and managed services already offer deep learning software and hardware solutions. Newer tools, like BigDL, target companies that prefer tools that integrate seamlessly with popular components like Apache Spark and leverage their existing big data clusters, model serving, and monitoring platforms. You’ll also still need (labeled) data—in fact, you’ll need more. Deep learning specialists describe it as akin to a rocketship that needs a big engine (a model) and a lot of fuel (data) in order to go anywhere interesting. (In many cases, data already resides in clusters; thus, it makes sense that many companies are looking for solutions that run alongside their existing tools.) Clean, labeled data requires data analysts with domain knowledge, as well as infrastructure engineers who can design and maintain robust data processing platforms. In a recent conversation, an expert I spoke with joked that with all of the improvements in software infrastructure and machine learning models, “soon, all companies will need to hire are analysts who can create good data sets.” Joking aside, the situation is a bit more nuanced. As an example, many companies are beginning to develop and deploy human-in-the-loop systems, sometimes referred to as “human-assisted AI” or “active learning systems,” that augment the work done by domain experts and data scientists. More so than other machine learning techniques, devising and modifying deep learning models requires experience and expertise. Fortunately, many of the popular frameworks ship with example models that produce decent results for problems across a variety of data types and domains. At least initially, packaged solutions or managed services from leading cloud providers obviate the need for in-house expertise, and I suspect many companies will be able to get by with few true deep learning experts.
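The "active learning" idea mentioned above can be made concrete with a short sketch. This is a toy illustration, not any particular product's implementation: the `select_for_review` helper and the stand-in `confidence` heuristic are invented for this example, with a real model's confidence score taking the place of the toy scoring function.

```python
def select_for_review(examples, confidence_fn, threshold=0.75):
    """Split examples into auto-accepted and human-review queues.

    confidence_fn returns a score in [0, 1]; anything below the
    threshold is considered too uncertain to label automatically
    and is routed to a human annotator.
    """
    auto_labeled, needs_human = [], []
    for example in examples:
        if confidence_fn(example) >= threshold:
            auto_labeled.append(example)
        else:
            needs_human.append(example)
    return auto_labeled, needs_human

# Stand-in confidence function: longer strings count as "easier" here.
confidence = lambda text: min(len(text) / 10.0, 1.0)

auto, review = select_for_review(
    ["ok", "ambiguous?", "a clearly long example"], confidence
)
```

The design point is the loop, not the scoring: domain experts only see the items the model is least sure about, which is how human-in-the-loop systems keep labeling costs proportional to model uncertainty.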
A more sensible option is to hire data scientists with strong software engineering skills who can help deploy machine learning models to production and who understand the nuances of evaluating models. Another common question is the nature of deep lea[...]
The O’Reilly Security Podcast: Human error is not a root cause, studying success along with failure, and how humans make systems more resilient.
In this episode, I talk with Steven Shorrock, a human factors and safety science specialist. We discuss the dangers of blaming human error, studying success along with failure, and how humans are critical to making our systems resilient.
Continue reading Steven Shorrock on the myth of human error.
Continuous Delivery, Three Machines, Chinese Astroturfing, and The Developer Tools Market
Continue reading Four short links: 18 January 2017.
If you're familiar with financial trading and know Python, you can get started with basic algorithmic trading in no time.
Algorithmic trading refers to the computerized, automated trading of financial instruments (based on some algorithm or rule) with little or no human intervention during trading hours. Almost any kind of financial instrument, be it stocks, currencies, commodities, credit products, or volatility, can be traded in such a fashion. Not only that: in certain market segments, algorithms are responsible for the lion’s share of the trading volume. The books The Quants by Scott Patterson and More Money Than God by Sebastian Mallaby paint a vivid picture of the beginnings of algorithmic trading and the personalities behind its rise.
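To make "some algorithm or rule" concrete, here is a deliberately simple sketch (plain Python, invented for illustration; it is not a strategy from the article, nor trading advice): a moving-average crossover that goes long when a short-window average rises above a long-window one.

```python
def moving_average(prices, window):
    """Simple moving average over the trailing `window` prices."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

def crossover_signals(prices, short=3, long=5):
    """Return +1 (go long) or -1 (stay flat) for each day where
    both averages are defined, aligning the two series on the right."""
    short_ma = moving_average(prices, short)[long - short:]
    long_ma = moving_average(prices, long)
    return [1 if s > l else -1 for s, l in zip(short_ma, long_ma)]

# Synthetic prices: a dip followed by a rally.
prices = [10, 11, 12, 11, 10, 9, 10, 12, 14, 15]
signals = crossover_signals(prices)
```

A real system would of course add order routing, position sizing, transaction costs, and backtesting, but the core of many rule-based strategies is exactly this kind of signal function over a price series.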
The barriers to entry for algorithmic trading have never been lower. Not too long ago, only institutional investors with IT budgets in the millions of dollars could take part, but today even individuals equipped only with a notebook and an Internet connection can get started within minutes. A few major trends are behind this development:
Continue reading Algorithmic trading in less than 100 lines of Python code.
This ebook explains how teams in your organization can co-create these diagrams in only a couple of weeks.
Continue reading Rapid techniques for mapping experiences.
Autonomous RDBMS, Rules of Machine Learning, Open Source LTE, and Limits of Evidence-Based Policy
Continue reading Four short links: 17 January 2017.
2017-01-17T11:00:00Z

Results from the O’Reilly Cloud Platform Survey.

Businesses must move fast to remain competitive. Cloud-native applications offer improved speed and scalability over traditional applications, with better resource utilization and lower up-front costs, and make it faster and easier for businesses to deliver and distribute applications in an agile fashion. Migrating applications to the cloud enables businesses to improve their time-to-market and maximize business opportunities. However, the way that applications are developed, deployed, and managed needs to be adapted to the cloud, and new best practices are emerging that enable businesses to deliver more functionality faster and cheaper, without sacrificing reliability.

A Cloud Platform Survey was conducted by O’Reilly Media in collaboration with Dynatrace to gain insight into how businesses are already using, or preparing to move toward, cloud-native technologies and practices. The survey found that 94% of respondents intend to run applications in the cloud within the next five years.

Figure 1. Cloud strategy within the next five years

The biggest challenge, identified by 59% of the survey respondents who are looking to migrate to the cloud, is identifying all of the applications and dependencies in their existing environment. Understanding dependencies by analyzing and mapping connections between applications, services, and cloud components can help businesses identify which parts to migrate first, as well as surface any technical constraints that should be considered during migration.

Figure 2. Challenges migrating to the cloud

Of those who have already adopted cloud technologies, 74% of respondents are currently running applications on Infrastructure-as-a-Service (IaaS) platforms, compared to just 21% who have adopted Platform-as-a-Service (PaaS) technologies in production.
The majority of IaaS adopters are on public cloud platforms, with Amazon EC2 the most widely adopted (by 74% of those on public platforms). However, the results indicate that respondents are converging toward hybrid cloud solutions rather than public or private clouds exclusively: a combined 72% of respondents who already use IaaS indicate that they intend to migrate to a hybrid cloud in the near future. Only 40% of the survey respondents are currently running containers in production, with technology maturity the top-reported concern. However, more than half of the respondents who are not running container-based environments anticipate adopting containers within the next four years, listing flexible scaling of resources, resource efficiency, and migration to microservices as key motivating factors.

Figure 3. Motivations for adopting container technologies

Based on its experience and on conversations with companies at different stages of evolution, Dynatrace has devised a maturity framework for gauging how far along businesses are on the journey to cloud-native practices—from first steps migrating existing applications to a virtualized infrastructure, through to dynamic microservices environments. The first stage involves migrating applications to IaaS via a lift-and-shift approach, and implementing a continuous integration/continuous delivery pipeline to speed up releases. The second stage represents the beginning of microservices, where applications migrate from monolithic architectures toward services running in containers on PaaS platforms. In the third stage, businesses begin to make more efficient use of cloud technologies and shift toward highly decoupled, dynamic microservices. At each step along this journey toward cloud-native, the importance of scheduling, orchestration, auto-scaling, and monitoring increases. Automation of these processes is key to effective management of cloud-native application environments.
For more information on h[...]
An interview with Susan Fowler, Site Reliability Engineer at Uber Technologies.
Continue reading Why you should standardize your microservices.
A look into the unspoken side of software architecture.
Microservices. Continuous delivery. Reactive systems. Cloud native architecture. Many software architects (or even aspiring ones) have at least a passing familiarity with the latest trends and developments shaping their industry. There are plenty of resources available for learning these topics, everything from books and online videos to conference presentations; the ability to go from novice to expert in these areas is right at their fingertips. However, as today’s trends quickly bleed into tomorrow’s standards for development, it’s paramount that software architects become change agents within their organizations by putting into practice the “unspoken” side of their skill set: communication and people skills.
With the arrival of each new year, we all verbalize New Year’s resolutions that fall on deaf ears, including our own. For software architects, these vows may take the shape of getting up to speed on a new form of architecture or finally mastering that cutting-edge technology stack. Either way, it’s the development of these soft skills that define architects as managers—both of large architecture teams and the technology choices they embrace—that’s often neglected in favor of learning the shiniest new technology.
Continue reading 4 essential skills software architects need to have but often don’t.
Learn how to use the git-rm command to remove accidental additions to your Git repository.
Continue reading How do I undo additions to a repository before a Git commit?.
Focus Card, Future Tech Trends, Stoic AI Ethics, and Inverse Kinematics
Continue reading Four short links: 16 January 2017.
Learn how to create local copies of remote branches so that you can make local changes in Git.
Continue reading How do I check out a remote branch with Git?.
CSV Conference, Autonomous Paper Planes, Test Wisely, and Some Silliness
Continue reading Four short links: 13 January 2017.
Learn how to overwrite changes to your local repository with the reset command in Git.
Continue reading How do I force overwrite local branch histories with Git?.
The O'Reilly Radar Podcast: The art and science of fostering serendipity skills.
On this week's episode of the Radar Podcast, O'Reilly's Mac Slocum chats with award-winning author Pagan Kennedy about the art and science of serendipity—how people find, invent, and see opportunities nobody else sees, and why serendipity is actually a skill rather than just dumb luck.
The O’Reilly Data Show Podcast: Greg Diamos on building computer systems for deep learning and AI.
Specialists describe deep learning as akin to a rocketship that needs a really big engine (a model) and a lot of fuel (the data) in order to go anywhere interesting. To get a better understanding of the issues involved in building compute systems for deep learning, I spoke with one of the foremost experts on this subject: Greg Diamos, senior researcher at Baidu. Diamos has long worked to combine advances in software and hardware to make computers run faster. In recent years, he has focused on scaling deep learning to help advance the state-of-the-art in areas like speech recognition.
Continue reading How big compute is powering the deep learning rocket ship.
The O’Reilly Bots Podcast: A universal bot for messaging, mobile voice, and the home.
In this episode of the O’Reilly Bots Podcast, Pete Skomoroch and I speak with Brad Abrams, group product manager of Google Assistant, the company’s new AI-driven bot that lives in many different contexts, including the Pixel phone, the Allo messaging app, and the Google Home voice-controlled speaker.
Continue reading Brad Abrams on Google Assistant.
2017-01-12T12:00:00Z

5 questions for Amy Silvers: Implementing, embedding, and championing user research in design teams.

I recently asked Amy Silvers, a senior product designer and researcher at Nasdaq, to discuss why it’s important to embed user researchers in design teams, how you can train all team members to think like user researchers, and how to achieve buy-in through sound bites. At the O’Reilly Design Conference, Silvers will present a session, The embedded researcher: Incorporating research findings into the design process.

You’re presenting a session on embedding user researchers in design teams. Why is this vital to creating a successful product or service?

Most UX practitioners would agree that talking to customers is an essential part of the design process, but in my experience, many design teams don't carry the voice of the customer through their design decisions. There are many reasons for this—sometimes even good (or at least unavoidable) reasons—but one way to insure against it is to make sure there is someone on the team who represents the voice of the customer and advocates for them at every stage of the design process. "Embedding" a researcher may mean having a researcher (or any member of the team who conducted the research) attend design reviews or be involved in the approval process. It may mean something as simple as pinning the research findings to a whiteboard in the design room or including them in the end-of-sprint review form. But whatever form it takes, it's a necessary step if you want to create products, services, and experiences that meet users' needs and expectations.

What are some practical steps for making certain user research isn't treated like an artifact?

One obvious one is to not just include designers and product managers in user research sessions but have them take notes and report to the rest of the team on what they heard. Quick debriefs right after a research session are also effective.
They can be tricky to arrange if you're doing a lot of research in a short time, but even then, taking 15 minutes for a recap at the end of a day of research is very helpful. We assign someone to do a quick summary, using a simple form, after each interview, and those are great for capturing noteworthy insights before the full findings report is created. We make the summaries easily accessible to everyone on the team, and they can look through a bunch of them in a short time to get an idea of themes that are emerging from the research. I'll have more to say about this in my talk, of course.

For organizations that don't have the resources for user researchers, what advice do you have for design teams that want to make certain user research is an integral part of the design process?

Designers can be their own user researchers—it just takes a bit of practice and guidance. And when they do their own research, it becomes easier for them to design for the people they've been listening to. They gain empathy for and understanding of the people who will be on the receiving end of their designs, and they may start to think about their designs in ways that would never have occurred to them without speaking to users. They can also enlist others in the organization—customer service reps, product managers, salespeople, and others—in customer research. Have them listen in, and even ask them to help structure the research sessions. Those partnerships mean that you have non-designers who are invested in making sure the users' points of view are represented in the design. For designers who want to champion user research[...]
China and Bitcoin, Software Repeatability, Tesla Radar, and Chinese Data Market
Continue reading Four short links: 12 January 2017.
2017-01-12T11:00:00Z

Communication skills make or break the effectiveness of a developer.

Software development often reminds me of the “peanut butter and jelly sandwich” exercise. This is where two people sit back to back: one responsible for making the sandwich, the other responsible for providing precise instructions on how the sandwich is made. Although it sounds like a trivial challenge, there is a twist to it: the person making the sandwich must interpret the instructions literally, even if they’re vague or incomplete. This usually results in silly outcomes like an entire jar of peanut butter being placed on top of a loaf of bread still in its bag. In one demonstration, I have even seen someone use a saber to “slice the bread”—that one was my favorite. The purpose of this exercise (contrived though it is) is to show how difficult communication can be when you don’t have effective two-way feedback, shared context, and frequent reviews of a work-in-progress. Unfortunately, this sort of environment isn’t very far removed from what the typical software development team experiences day-to-day. There is a unique aspect of our work in that eventually each instruction needs to be written up for a mindless automaton to follow. This is literally what the coding process is all about: converting human intentions into digital instructions. Much gets lost in translation because coding is a difficult task in itself, and it takes a continuous effort to keep an eye on the big picture while simultaneously working at the primitive level of modules and functions. The solution to this problem is simple if not always easy to implement: drive up communications quality as much as possible. There are many different tactics that can help create better feedback loops, but at the heart of all of them is the concept of reification: to take an abstract idea and turn it into something tangible.
Going back to the peanut butter and jelly sandwich exercise for a moment, think of a slightly more plausible scenario in which the person making the sandwich is reasonable about interpreting instructions, but simply has never prepared or eaten a PB+J sandwich before. A single picture and the right supplies would probably be enough to explain the entire ‘recipe’ to them. Even in cases where the instructions conflict with the picture, it still provides a shared reference point for communication. This means that relatively simple alterations like “remove the crust” or “toast the bread” would be far more likely to be carried out successfully without much confusion. When building software, we can use wireframe sketches and mockups for similar purposes. For example, suppose someone asks you to build a music video recommendations feature for an application. The following sketch could serve as a useful starting point for further discussion:

Sometimes a sketch will confirm a shared understanding of the problem, while in other cases it’ll reveal fundamental misunderstandings. For example, a music video recommendation system might be designed to work as shown above, but it also might be set up to automatically play one song after another, using thumbs up and thumbs down buttons to give feedback that informs the next selection. Reification is also a progressive process: once you make something even slightly concrete, it is natural to begin filling in some of the more specific details. For example, where are those thumbnail images going to come from? Will the video properly adjust its orientation and aspect ratio when ru[...]
Learn how to view the history of a branch using the “git log” command and perform a checkout in Git.
Continue reading How do I revert a Git repo to a previous commit?.
Stacked Hydrogels, Beginning Measurement, Yubikey Experiments, and Find Lectures
Continue reading Four short links: 11 January 2017.
BioHTP explored the projects the biohacking community is tackling, how the community is organized, and where it's going.
How do we define an open community and what would we want from one? The inaugural BioHack the Planet conference took place this September in Oakland, California. Hosted in Omni Commons, the volunteer collective home to the DIYbio hub Counter Culture Labs, the conference embodied the spirit of the community it sought to bring together. BioHack the Planet (BioHTP) was developed to be "run by BioHackers, designed for BioHackers, with talks solicited from BioHackers." Its two organizers, Karen Ingram and Josiah Zayner, are both veterans of the do-it-yourself (DIY) biology, or biohacking community, in their own right. Ingram is a coauthor of the BioBuilder curriculum for teaching synthetic biology. Zayner's DIY supply store, ODIN, has outfitted many budding scientists with everything from pipette tips and bulk cell media to at-home CRISPR kits. So what happens when biohackers organize themselves to discuss their work? Part symposium, part workshop, and part exhibition, BioHTP explored the projects the biohacking community is tackling, how the community is organized, and where it's going.
While BioHTP isn't the first time the biohacking community has been brought together, it may be the first time it has done so on its own at this scale. Soon after the launch of DIYbio.org in 2008, the FBI co-sponsored the "Building Bridges Around Building Genomes" conference in collaboration with the American Association for the Advancement of Science (AAAS), the U.S. Department of Health and Human Services, and the Department of State to encourage communication between policymakers and academics as well as synthetic biologists working in industry and in community labs. Later, in 2012, the FBI flew in biohackers from around the world to a DIYbio conference to develop its relations with the community. Since then, DIY biologists have organized largely on a continental basis, as in the formation of a European network of do-it-yourself biology; on the web, as in the Hackteria platform; or complementary to larger hobbyist conferences, such as in the Nation of Makers Initiative or South by Southwest.
Continue reading Defining open: Biohack the Planet.
Learn how to resolve a conflict, mark the change as resolved, and commit the branch to the repository with Git.
Continue reading How do I fix a merge conflict in Git?.
Drew Paroski and Gary Orenstein on the rapid spread of machine learning and predictive analytics
Machine learning has been a mainstream commercial field for some time now, but it’s going through an important acceleration. In this podcast episode, I talk about that acceleration with two executives from MemSQL, a company that specializes in in-memory databases: Gary Orenstein, MemSQL chief marketing officer, and Drew Paroski, MemSQL vice president of engineering.
Orenstein and Paroski identify a few crucial inflections in the machine learning landscape: machine learning models have become easier to write; computing capacity on the cloud has increased dramatically; and new sources of data—everything from drones to smart-home devices and industrial controllers—have added new richness to machine learning models.
Computing capacity and software progress have made it possible to train some machine learning models in real time, says Orenstein: “given enough time in computing, you can do just about anything, but only recently have people been able to apply these machine learning models in real time to critical business processes.”
Other discussion topics:
This post and podcast are part of a collaboration between MemSQL and O’Reilly. See our statement of editorial independence.
Continue reading The 2017 machine learning outlook.
AI Ethics, Real Names, Popup VPN, and Serverless COBOL
Continue reading Four short links: 10 January 2017.
We need to take a firm look at whether we’re actually benefiting from technology and why we’re using that technology.
With all disciplines of design, as technology progresses, the restrictions become fewer and fewer until they reach the point at which a wave crashes and there’s nothing holding you back from making anything you can dream of. Architecture became bolder when steel frames could support nearly any structure architects could dream of, without being restricted by the need to cater to gravity to the degree required when building with stone alone. Furniture evolved with advancements in the technology of the materials available to designers, such as fiberglass and bending plywood, as well as technological advancements related to mass manufacturing processes. In graphic design, typefaces became increasingly complex and detailed as printing technologies evolved to the point that those details could be reproduced accurately, and then it went completely insane when computers removed nearly all graphical restrictions.
Wearable technology has been around in one form or another since 17th-century China when someone came up with an abacus that you could wear on your finger. So, what’s so special about wearable technology at this point in time? There’s a reason wearable technology is making a resurgence. It’s because we’ve finally made it to the digital maturity tipping point: the point at which advancements in computing and the supporting infrastructure have lessened the restrictions of the design of technology so that they’re nearly nonexistent.
Continue reading Design follows technology.
Mike Barlow examines the growth of sophisticated cloud-based AI and machine learning services for a growing market of developers and users in business and academia.
When the automobile was introduced, there were rumors that drivers and passengers would suffocate when the speed of their vehicles exceeded 20 miles per hour. Many people believed that cars would never become popular because the noise of passing engines frightened horses and other farm animals.
Nobody foresaw rush-hour traffic jams, bumper stickers, or accidents caused by people trying to text and drive at the same time.
Continue reading Practical artificial intelligence in the cloud.
2017-01-10T11:30:00Z

Focus on your customer’s needs and iterate.

“If I knew where all the good songs came from, I’d go there more often,” so said Leonard Cohen when asked how he wrote classic hits like "Suzanne" and "Hallelujah." Formulating the ideas behind timeless hits is not an easy task—serendipity, stimulation, and skill all equally play their part. Yet, in large organizations, a lack of ideas is rarely the problem. Business leaders and executives are inundated with suppositions, proposals, and pitches on how to increase, invent, and imagine new revenue streams. Most often, the biggest challenge is not conjuring up the concept—it’s killing it as quickly and cheaply as possible. In The Principles of Product Development Flow, Don Reinertsen’s research concluded that about 50% of total product development time is spent on the ‘fuzzy front end’—i.e., the pitching, planning, and funding activities before an initiative starts formal development. In today’s fast-paced digital economy, the thought of spending half of the total time to market on meetings and executive lobbying with no working product to show isn’t just counterproductive and wasteful—it’s ludicrous.

Figure 1. Image by Barry O'Reilly.

Furthermore, the result of all this investment is often an externally researched, expensive, and beautifully illustrated 100-page document to endorse claims of certain success. The punchline presented through slick slide transitions is “All we need is $10 million, two years, 100 people and we’ll save the business!” Science fiction, theater, and fantasy rolled into one. What is really needed is a systematic, evidence-based approach to managing the uncertainty that is inherent at the beginning of any innovation process. Our purpose when commencing new initiatives is to collect information to help us make better decisions while seeking to identify unmet customer needs and respond to them.
New business possibilities are explored by quickly testing and refining business hypotheses through rapid experimentation with real customers. Our goal is to perform this activity as early, quickly, and cheaply as possible, not through lengthy stakeholder consensus building, convoluted funding processes, and hundreds of senior management sign-off sessions. Decisions to stop, continue, or change course must be based on the ‘real world’ findings from the experiments we run, not subjective HiPPO (Highest Paid Person’s Opinion) statements about how they’ve “been in the business for 30 years and know better.” Imagine a world without costly executive innovation retreats, and where the practice of pitching business cases at annual planning and/or budgeting cycle meetings is extinct. Instead, a similarly sized investment is assigned to small cross-functional teams to explore given problems, obstacles, or opportunities throughout the course of a year. Over a short fixed-time period, a team creates a prototyped solution to be tested with real customers to see if they find it valuable or not. We are investing to reduce uncertainty and make better decisions. You are paying for information. The question is really: how much do you want to invest to find out? In his book, How To Measure Anything, Douglas Hubbard studied the varia[...]
An interview with Martin Thompson, High-Performance Computing Specialist at Real Logic.
Continue reading Design pressures lead to better code and better outcomes.
Learn how to use Kubernetes and Prometheus together to reimagine infrastructure and measure "the right things."
Continue reading Kubernetes and Prometheus: The beginning of a beautiful friendship.
Learn this new security fuzz testing technique that leverages browser capabilities to detect cross-site scripting vulnerabilities before production deployment.
Integrating cross-site scripting (XSS) tests into the continuous integration and continuous delivery (CI/CD) pipeline is an effective way for development teams to identify and fix XSS vulnerabilities early in the software development lifecycle. However, due to the nature of the vulnerability, automating XSS detection in the build pipeline has always been a challenge. A common practice is to employ dynamic web vulnerability scanners that perform blind black-box testing on a newly built web application before production deployment. There is another way to detect XSS vulnerabilities in web applications in CI/CD: XSS-Checkmate is a security fuzz testing technique (not a tool) that complements traditional dynamic scanners by leveraging browser capabilities to detect XSS vulnerabilities. Since modern web applications are widely tested with tools such as Selenium, Cucumber, and Sauce Labs to exercise user interactions, this technique can be integrated into those test frameworks with little effort.
XSS is one of the oldest types of security vulnerabilities found in web applications. It enables execution of an attacker-injected malicious script when a victim visits an infected page. The primary reason for XSS is the improper neutralization of user inputs when rendered on a web page. XSS has remained a top threat on OWASP’s top ten list since its first publication in 2004.
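The improper neutralization at the root of XSS can be shown in a few lines of Python using only the standard library. The payload and page template below are illustrative, not taken from the article:

```python
import html

# Attacker-controlled input: a classic script-injection payload.
user_input = '<script>alert("xss")</script>'

template = "<p>Hello, {}!</p>"

# Improper neutralization: raw input is interpolated into the page,
# so a browser rendering this markup would execute the injected script.
vulnerable = template.format(user_input)

# Proper neutralization: HTML-encode the input before rendering,
# turning the payload into inert text.
safe = template.format(html.escape(user_input))

print(vulnerable)  # <p>Hello, <script>alert("xss")</script>!</p>
print(safe)        # <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;!</p>
```

Techniques like XSS-Checkmate effectively probe for the first case: they inject a marker payload through the UI and use the browser to observe whether it executes rather than rendering as text.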
Continue reading Automating XSS detection in the CI/CD pipeline with XSS-Checkmate.
2017-01-10T11:00:00ZFrom "Linux is a cancer" to Windows Subsystem for Linux.Since the early 1990s, when Windows became much more popular in the enterprise, people have been trying to put Unix and Linux into places where they don't want to be, using toolkits that implement just enough of the Portable Operating System Interface (POSIX) standard to feel like Unix. The reasons are pretty simple: a lot of open source tools, especially development tools, are primarily targeted to Unix/Linux platforms. Although the most important ones have been ported to Windows, they are designed to work best when they have access to a Unix/Linux shell scripting environment and the many utilities that come with those platforms. Today, there are many options available for getting Unix or Linux functionality on Windows, and the most recent one, the Windows Subsystem for Linux (WSL), provides a Linux environment that is deeply integrated with Windows. But before I get into that, I'll look at some of the other options. The early days In the early 1990s, you could get a pretty good Unix shell, compiler, and text utilities on DOS using DJGPP. It was also around this time that you could simply install Linux on a computer if you wanted a pure Unix-like environment. Linux had none of the limitations that a Unix-like subsystem had, but it also meant that you had to convert entirely to Linux. So, if you wanted—or needed—to run a Unix-like environment alongside a Microsoft operating system, you needed a subsystem like DJGPP. And in order to comply with US Federal Information Processing Standards and be considered for defense-related projects, Microsoft needed one, too. Figure 1. Running GNU bash under DJGPP in a modern DOS emulator (DOSBox). Windows NT, Microsoft's first foray into true multitasking and multi-user operating systems, is the basis of their current operating system offerings, whether you're running it on a phone, computer, or Raspberry Pi. 
Although it shares superficial traits with Unix and Linux, internally, it is not at all like them, which wasn’t surprising considering when Windows NT was born: in 1993, Unix was something you’d find on a workstation or server. Apple’s Macs were running their venerable System 7, and it would be eight more years before Mac OS X would come out, itself a Unix-based operating system. The difference between Windows and Unix meant that when you ported a Unix application to Windows, there was substantial functionality, such as the fork() system call, that was simply not available. Because of this, the number of Unix programs that could be fully ported to Windows was fairly small, and many programs had reduced functionality as a result. Over the years, Microsoft kept Unix and Linux at arm’s length. To comply with those federal standards, it supplied a bare minimum POSIX subsystem, just enough to satisfy the standards. The POSIX subsystem was eventually replaced with something called the Windows Services for Unix, which provided a Unix shell, a compiler, and a lot of command-line tools. But it wasn’t enough. The age of Linux It didn’t take long for Linux to define what the Unix-like experience should be on [...]
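To make the fork() gap concrete: the call duplicates the running process, and nothing in classic Windows maps cleanly onto it. A minimal sketch in Python, whose os.fork is a thin wrapper over the system call and is therefore available on Unix-like systems but not on native Windows:

```python
import os
import sys

# On native Windows, os.fork does not exist at all -- exactly the
# porting gap described above.
if not hasattr(os, "fork"):
    sys.exit("fork() is unavailable on this platform")

pid = os.fork()  # after this line, two nearly identical processes run
if pid == 0:
    # Child process: exit immediately with a recognizable status code.
    os._exit(7)
else:
    # Parent process: wait for the child and recover its exit status.
    _, status = os.waitpid(pid, 0)
    code = os.waitstatus_to_exitcode(status)
    print(code)  # 7
```

Ports that couldn't rely on fork() had to emulate process duplication with CreateProcess and explicit state hand-off, which is why so many Unix programs arrived on Windows with reduced functionality.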
Learn how to rename a branch using the -m flag in Git.
Continue reading How do I rename a local branch in Git?
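For quick reference, the rename itself is a one-liner; here is a minimal sketch in a throwaway repository (the branch and directory names are illustrative):

```shell
# Set up a throwaway repository to demonstrate in
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Rename the branch you are currently on (-m = move/rename):
git branch -m new-name

# Verify: HEAD now points at the renamed branch
git rev-parse --abbrev-ref HEAD   # prints: new-name
```

To rename a branch you are not currently on, pass both names: `git branch -m old-name new-name`. Note this only renames the local branch; any copy on a remote must be updated separately.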
2017-01-09T12:00:00ZTurning physical resource management into a data and learning problem.A common refrain about artificial intelligence (AI) lauds the ability of machines to pick out hard-to-see patterns from data to make valuable, and more importantly, actionable observations. Strong interest has naturally followed in applying AI to all sorts of promising applications, from potentially life-saving ones like health care diagnostics to the more profit-driven variety like financial investment. Meanwhile, rather ethereal speculation about existential risk and more tangible concern about job dislocation have both run rampant as the topics du jour concerning AI. In the midst of all this, many tend to overlook a more concrete and pretty important, but admittedly less sexy subject—AI’s impact on core infrastructure. We were certainly guilty of this when we organized our first Artificial Intelligence Conference. We did not predict the strong representation that we would see from the telecommunications industry, but we won’t make that mistake next time. This sector now offers a fascinating look into how AI can deliver great value to infrastructure. To understand why, it is instructive to first recap what is happening in the telecom industry. Major shifts usually happen as a confluence of not one, but many trends. That is the case in telecommunications as well. Continuous proliferation and obsolescence of devices means more and more endpoints, often mobile, need to be managed as they come online and offline. Meanwhile, bandwidth usage is exploding as consumers and corporations demand more content and data, and the march toward cloud services continues. All of this means that network infrastructure must have more capacity and be more dynamic and fault tolerant than ever as providers feverishly optimize network configurations and perform load balancing. 
One can imagine the complexity of this when delivering connectivity services to a large portion of an entire country’s online presence. To solve this problem, telecommunications companies are looking to borrow principles from people who have solved similar issues at smaller scales—running data centers with software-defined networking (SDN). By developing SDN for a wide area network (WAN), telecom providers can effectively make their networks reprogrammable by abstracting away hardware complexity into software. Still, even as WANs become software defined, their enormity makes management and operation difficult. While hardware management becomes less complex, running software-defined WANs (SD-WANs) presents highly complex data problems—information flows may exhibit patterns, but they are hard to extract. This is a perfect application for AI algorithms, however. Reinforcement learning is a particularly promising approach that already has history with networking applications. More recently, researchers have combined deep learning with traditional reinforcement learning to develop deep reinforcement learning techniques that will likely find usage in networking. Here is a watered-down version of how that might work. First, consider traditional [...]
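The flavor of that approach can be sketched with the simplest possible case: tabular Q-learning over a single state, where an agent repeatedly picks one of a few candidate paths and is rewarded for low latency. Everything here (path names, latencies, hyperparameters) is illustrative; real SD-WAN controllers would work with far richer state and deep function approximation:

```python
import random

random.seed(0)

# Toy model: three candidate paths with unknown average latencies (ms).
# The agent learns which path to prefer purely from observed rewards.
LATENCY = {"path_a": 40.0, "path_b": 15.0, "path_c": 80.0}
PATHS = list(LATENCY)

q = {p: 0.0 for p in PATHS}   # one Q-value per action (single-state problem)
alpha, epsilon = 0.1, 0.2     # learning rate and exploration rate

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best-known path, sometimes explore.
    if random.random() < epsilon:
        path = random.choice(PATHS)
    else:
        path = max(q, key=q.get)
    # Reward is negative observed latency (lower latency -> higher reward),
    # with Gaussian noise standing in for real traffic variability.
    reward = -(LATENCY[path] + random.gauss(0, 5))
    q[path] += alpha * (reward - q[path])

best = max(q, key=q.get)
print(best)  # the lowest-latency path wins: path_b
```

Deep reinforcement learning replaces the lookup table with a neural network so the policy can generalize across enormous, continuously changing network states.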
"Universal Truths," Getting Stuck, Kafka Dot Com, and Auto Ordering
Continue reading Four short links: 9 January 2017.
Wayne Carter and Ali LeClerc show you how to build a mobile app that has a consistent user experience, both online and offline.
Continue reading The Offline Challenge: Delivering Mobile Apps that Always Work.
Learn 6502, Graphviz in Browser, Messenger Bot Discovery, and Data Shapeshifting
Continue reading Four short links: 6 January 2017.
Learn how to analyze operations data in the presence of “holes” in the time series, how missing data impacts analysis, and a gamut of techniques that can be used to address the missing data issue.
Continue reading Data, data everywhere, not an insight to take action upon.
Oren Etzioni talks about the current AI landscape, projects at the Allen Institute, and why we need AI to make sense of AI.
Continue reading What AI needs is a dose of realism.
The O’Reilly Design Podcast: Identifying use cases for robots, the five laws of robots, and the ethics and philosophy of robotics.
In this week’s Design Podcast, I sit down with Andra Keay, managing director of Silicon Valley Robotics. We talk about the evolution of robots, applications that solve real problems, and what constitutes a good robot.
Continue reading Andra Keay on robots crossing the chasm.
The O’Reilly Hardware Podcast: An all-in-one workshop factory.
In this episode of the O’Reilly Hardware Podcast, Jeff Bleiel and I speak with Joel Johnson, co-founder and CEO of BoXZY, a startup that makes an all-in-one desktop CNC mill, 3D printer, and laser engraver. We discuss the BoXZY device’s software and hardware, including its CAD/CAM software (Autodesk’s Fusion 360) and controller board (Arduino Mega), as well as its shield, steppers, and firmware.
Continue reading Joel Johnson on desktop fabrication.
Crowdsourced Phone, Social Network Analysis of Literature, Sterling and Lebowsky, and Narrative Patterns
Continue reading Four short links: 5 January 2017.
2017-01-05T12:05:00ZHint: It's not just one bright idea, repeated several times.Some people appear to be blessed. They aren't just lucky enough to have a single right idea at the right time; they keep coming up with more bright ideas that make the world better, or at least are valued enough for a profitable business model. While innovation and market success do not always have a strong correlation, there are a few things the creators often have in common. It's one thing to get lucky—to have a bright idea at the exact right moment. But many of the people I admire have been "lucky" several times over, sometimes in different guises. Maybe it's creating a business that evolves from a solo success to a wide range of profitable endeavors under the same corporate umbrella, such as Amazon's Jeff Bezos. Or the spark of creativity may touch different realms, such as Philippe Kahn, whose success began with Borland's Turbo Pascal, continued with camera phones, and now extends to the Internet of Things. I've paid some attention to what these people do differently from the rest of us mere mortals, including the things they don't notice. Because, often, we take our personal strengths for granted; they are, after all, the things that require the least conscious effort. Lesson 1: It's all about technology. Except when it isn't. Always lead with technology, says Philippe Kahn. "I'm passionate about making things that help Ms. and Mr. Everyone. At different times, different things. But it all has to be led by unique technology innovation." You may remember Kahn's legacy at Borland, which drove software development on microcomputers. At LightSurf, it was the camera phone, accompanied by key patents. "Today at Fullpower, it's the leading platform for the smartbed in the IoT smarthome, powered by machine learning and data science," he says, with "a lot of innovation and patents." However, the world is full of technology that is inherently cool but is a solution in search of a problem. 
Whether an innovation is an improvement over existing solutions (the iPod, a faster CPU, a more affordable compiler) or a disruptive game changer (DVDs by mail, crowdsourced classified ads), it answers questions that people immediately realize they had—as soon as the answer appears. The technology breakthrough itself can be a distraction. Even though the market makers may talk about technology publicly, says Saul Kaplan, founder of the Business Innovation Factory, their attention usually is on problem solving and business models. It’s part of their DNA, he says. “When they stand in line at the supermarket, they are considering how to improve the buying experience,” he points out. Indeed, architect and inventor Buckminster Fuller was incensed by the time wasted standing in line when the bank tried to “save money” by limiting the number of[...]
This ebook explores guidelines that can help you make crucial "right item, right time" decisions and assist in the integration of new technology into existing business processes.
There are two kinds of fool. One says, “This is old, and therefore good.” And one says, “This is new, and therefore better.”
Continue reading Guidelines for keeping pace with innovation and tech adoption.
From disclosure to machine learning to IoT, here are the security trends to watch in the months ahead.
When we started planning the inaugural O’Reilly Security Conference, we did so with a unique focus on defensive security. There are many excellent researchers and leaders working to solve the problems of security today, but what we wanted to do was to get down to the nuts and bolts of building better defenses across the board. The Security Conference and our newsletter have received an enthusiastic welcome from our audience—the defenders. And it’s for you, the defenders, that we offer this look forward at the key trends in 2017.
The almost unthinkable happened in 2016: the Department of Defense released its vulnerability disclosure policy, putting the government ahead of a vast percentage of private enterprises when it comes to partnering with well-intentioned hackers instead of viewing them as dangerous adversaries. This should serve as inspiration for other government groups and corporations, all of which would benefit. On top of that, Katie Moussouris worked with ISO to release a free version of the ISO 29147 standard, making best practices freely available to organizations looking to establish their own vulnerability disclosure programs.
Continue reading 5 trends in defensive security for 2017.
2017-01-05T11:00:00ZWhat to watch for in distributed systems, SRE, serverless, containers and more. Forecasting trends is tricky, especially in the fast-moving world of systems operations and engineering. This year, at our Velocity Conference, we have talked about distributed systems, SRE, containerization, serverless architectures, burnout, and many other topics related to the human and technological challenges of delivering software. Here are some of the trends we see for the next year: 1. Distributed Systems We think this is important enough that we re-focused the entire Velocity conference on it. 2. Site Reliability Engineering Site Reliability Engineering—is it just ops? Or is it DevOps by another name? Google's profile for an ops professional calls for heavy doses of systems and software engineering. Spread further into the industry by Xooglers at companies like Dropbox, hiring for SRE positions continues to increase, particularly for web-facing companies with large data centers. In some contexts, the role of SREs becomes more about helping developers operate their own services. 3. Containerization Companies will continue to containerize their software delivery. Docker Inc. itself has positioned Docker as a tool for "incremental revolution," and containerizing legacy applications has become a common use case in the enterprise. What's the future of Docker? As engineers continue to adopt orchestration tools like Kubernetes and Mesos, the higher level of abstraction may make more room for other flavors of containers (like rkt, Garden, etc.). 4. Unikernels Are unikernels the next step after containerization? Are they unfit for production? Some tout the security and performance benefits of unikernels. Keep an eye out for how unikernels evolve in 2017, particularly with an eye to what Docker Inc. does in this area (having acquired Unikernel Systems this year). 5. Serverless Serverless architectures treat functions as the basic unit of computation. 
Some find the term misleading (and reminiscent of "noops"), and prefer to refer to this trend as Functions-as-a-Service. Developers and architects are experimenting with the technology more and more, and we expect to see more applications written in this paradigm. For more on what serverless/FaaS means for operations, check out the free ebook Serverless Ops by Michael Hausenblas. 6. Cloud-Native application development Like DevOps, this term has been used and abused by marketers for a long while, but the Cloud Native Computing Foundation makes a strong case for these new sets of tools (often Google-inspired) that take advantage not just of the cloud, but in particular the strengths and opportunities provided by distributed systems—in short, microservices, c[...]
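The "functions as the basic unit of computation" idea is concrete enough to sketch: a FaaS handler is a stateless function that takes an event and returns a response, and the platform handles everything around it. Here is a Lambda-style illustration in Python; the event shape and handler signature vary by provider, so treat this one as an assumption for the sketch:

```python
import json

def handler(event, context=None):
    """A minimal FaaS-style function: stateless, event in, response out."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the platform's invocation can be simulated with a direct call:
response = handler({"name": "Velocity"})
print(response["body"])  # {"message": "Hello, Velocity!"}
```

The operational appeal is that scaling, process lifecycle, and server maintenance all disappear behind the invocation boundary; the operational challenge is that so does much of your visibility.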
2017-01-04T13:45:00ZFrom systems thinking to voice interfaces, these are the design trends to watch in the months ahead.As awareness of design's value continues to grow, 2017 promises to offer the design community plenty of opportunities and challenges. Here's a look at what designers should pay attention to in the year ahead. 1. Systems thinking Anything created today exists and interacts within a larger ecosystem. Systems thinking is by no means a new discipline, but it has started to work its way into design conversations, and for good reason. Systems thinking is focused on looking at the entirety of a system and understanding the interconnectedness and relationships between elements in a system. In Thinking in Systems, Donella Meadows shares a great quote from a Sufi teaching story that encapsulates the essence of systems thinking: You think that because you understand "one" that you must therefore understand "two" because one and one makes two. But you forget that you must also understand "and." The best applications, services, and products will have designers with a firm systems-thinking mindset behind them. Systems thinking doesn't solve complexity, but it will help designers think about their products as part of an ecosystem. If you're going to learn one new skill in 2017, systems thinking is the one I'd recommend pursuing. 2. Designing for voice Machine learning, natural language processing, and artificial intelligence (AI) are fueling the development of voice user interfaces. In 2016 we witnessed the arrival of voice as a relevant and useful interaction model, with Amazon's Alexa being the most compelling example. Designing for voice is in its early days, and its applications are promising—from chatbots to cars. I expect we will see tremendous experimentation and growth of voice user interfaces in 2017. 
Cathy Pearl, author of Designing Voice User Interfaces and director of user experience at Sensely, says: Many companies and developers are jumping on the voice trend. Amazon, for example, allows any developer to add new "skills" to their Echo. It's important, however, to consider whether the task or app you want to build will actually *benefit* from voice. Voice is great for hands-free, accessibility, and efficiency. It's not so great for public spaces, noisy environments, and privacy. As more people build VUIs, designers will play a crucial role, crafting conversational experiences that are useful and engaging. We already know a lot about good VUI design principles, but we need designers to apply them and to have the tools to prototype and build designs quickly and easily. Just like a poorly designed website can drive users to frustration, a poorly designed VUI will do the same. One [...]
This DIY initiative aims to genotype Pacific salmon to help sustain and improve salmon populations.
Seattle is rich in cultural icons. Think artisanal coffee, grunge rock, the Space Needle, Pike Place Market: they are all instantly recognizable symbols of the city.
But Seattle is more than a city. As the foremost urban center of the Pacific Northwest—or Cascadia, in the preferred regional parlance—its significance and influence are broad. Seattle’s symbols are thus Cascadia’s symbols. And nothing is more emblematic of Cascadia than salmon. The five species of Pacific salmon—chinook, coho, pink, sockeye, and chum—have both economic and emotional significance to the people of the Pacific Northwest. Residents enjoy catching them and eating them, but the fish also serve as indicators for the health of the regional biome. Moreover, they are totems. Cascadia people don’t just value salmon. They revere them.
Continue reading Biohacking with Citizen Salmon.
2017-01-04T12:00:00ZDesigners may well provide the missing link between AI and humanity.For anyone doubting that AI is here, the New York Times recently reported that Carnegie Mellon University plans to create a research center that focuses on the ethics of artificial intelligence. Harvard Business Review started laying the foundation for what it means for management, and CNBC started analyzing promising AI stocks. I made the relatively optimistic case that design in the short term is safe from AI because good design demands creative and social intelligence. But this short-term positive outlook did not alleviate all of my concerns. This year, my daughter started college, pursuing a degree in interaction design. As I began to explore how AI would affect design, I started wondering what advice I would give my daughter and a generation of future designers to help them not only be relevant, but thrive in the future AI world. Here is what I think they should expect and be prepared for in 2025. Everyone will be a designer Today, most design jobs are defined by creative and social intelligence. These skill sets require empathy, problem framing, creative problem solving, negotiation, and persuasion. The first impact of AI will be that more and more non-designers develop their creativity and social intelligence skills to bolster their employability. In fact, in the Harvard Business Review article I mentioned above, advice #4 to managers is to act more like designers. The implication for designers is that more than just the traditional creative occupations will be trained to use “design thinking” techniques to do their work. Designers will no longer hold a monopoly (if that were ever true) on being the most “creative” people in the room. To stay competitive, more designers will need additional knowledge and expertise to contribute in multidisciplinary contexts, perhaps leading to increasingly exotic specializations. 
You can imagine a classroom, where an instructor trained in design thinking is constantly testing new interaction frameworks to improve learning. Or a designer/hospital administrator who is tasked with rethinking the inpatient experience to optimize it for efficiency, ease of use, and better health outcomes. We’re already seeing this trend emerge—the Seattle mayor’s office has created an innovation team to find solutions to Seattle’s most immediate issues and concerns. The team embraces human-centered design as a philosophy, and includes designers and design strategists. Stanford’s d.school has been developing the creative intelligence of non-traditionally trained designers for over a decade. And new programs like MIT[...]
2017-01-04T11:55:00ZScience and Complexity, App Store Farm, Portability over Performance, and Incident Response Docs

Science and Complexity (PDF) -- In the first part of the article, Weaver offers a historical perspective of problems addressed by science, a classification that separates simple, few-variable problems from the "disorganized complexity" of numerous-variable problems suitable for probability analysis. The problems in the middle are "organized complexity," with a moderate number of variables and interrelationships that cannot be fully captured in probability statistics. The second part of the article addresses how the study of organized complexity might be approached. The answer is through harnessing the power of computers and cross-discipline collaboration. Originally published in 1948.

How to Manipulate App Store Rankings the Hard Way -- The photo shows a wall of iPads in front of a woman. Her job is to download, install, and uninstall specific apps over and over again to boost their App Store rankings. (via BoingBoing)

Intel's 10nm Chip Tech -- As has been the case for years already, clock speed isn't liable to increase, though. "It's really power reduction or energy efficiency that's the primary goal on these new generations, besides or in addition to transistor cost reduction," Bohr says. Improved compactness and efficiency will make it more attractive to add more cores to server chips and more execution units onto GPUs, he says.

PagerDuty's Incident Response Documentation -- It is a cut-down version of our internal documentation, used at PagerDuty for any major incidents, and to prepare new employees for on-call responsibilities. It provides information not only on preparing for an incident, but also what to do during and after. It is intended to be used by on-call practitioners and those involved in an operational incident response process (or those wishing to enact a formal incident response process). 
Open sourced, so you can copy it and localize it for your company's systems and outage patterns. (via PagerDuty blog) Continue reading Four short links: 4 January 2017.[...]
The O’Reilly Security Podcast: Sniffing out fraudulent sleeper cells, incubation in money transfer fraud, and adopting a more proactive stance.
In this episode, O’Reilly’s Jenn Webb talks with Fang Yu, cofounder and CTO of DataVisor. They discuss sniffing out fraudulent sleeper cells, incubation in money transfer fraud, and adopting a more proactive stance against fraud.
Continue reading Fang Yu on machine learning and the evolving nature of fraud.
2017-01-04T11:00:00ZAn improved asyncio module, Pyjion for speed, and moving to Python 3 will make for a rich Python ecosystem.Python is everywhere. It's easy to learn, can be used for lots of things, and often is the right tool for the right job. The huge standard library and refreshingly abundant libraries and tools make for a very productive programming biosphere. The 3.6 release was just last month, and included various improvements, such as:

- New features in the asyncio module
- Addition of a file system path protocol
- Formatted string literals

Here I'll offer a few thoughts on some things from this recent release and elsewhere in the Python biome that have caught my eye. This is nowhere near an exhaustive list, so let me know other things you think will be important for Python in 2017. Will Pyjion's boost be just in time? Speeding up runtime performance has been in Python's sights for a while. PyPy has aimed to tackle the issue since its official release in 2007, while many developers have turned to Cython (which compiles a superset of Python to C extensions) to improve speed, and numerous just-in-time (JIT) compilers have been created to step up runtime performance. Pyjion is one new JIT compiler for revving up speed; it accelerates CPython (the Python interpreter) by boosting its stock interpreter with a JIT API from Microsoft's CoreCLR project. The goals of the Pyjion project, as explained in the FAQ section of the Pyjion GitHub repository, are: "... to make it so that CPython can have a JIT plugged in as desired. That would allow for an ecosystem of JIT implementations for Python where users can choose the JIT that works best for their use-case." Such an ecosystem would be a great boost for Python, so I'm hopeful, along with a lot of other folks, that 2017 will see Pyjion or others like it make for faster Python fauna. To explore Cython as one alternative for speedier Python, check out the Learning Cython video course by Caleb Hattingh. 
You can also download Hattingh's free ebook, 20 Python Libraries You Aren’t Using (But Should). Asyncio is no longer provisional. Let the concurrency begin! In Python 3.6, the asyncio module is no longer provisional and its API is now considered stable; it’s had usability and performance tweaks, and several bug fixes. This module supports writing single-threaded concurrent code. I know a lot of Python developers who work with asynchronous code are eager to check out this improved module and see what the new level of concurrency can do for their projects. An uptick in use of the asyncio module will likely draw even more attention t[...]
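As a small taste of the now-stable API, here is a sketch of single-threaded concurrency with asyncio, written with the explicit event-loop calls the 3.6-era API expects (the coroutine names and delays are illustrative):

```python
import asyncio

async def fetch(name, delay):
    # Simulate an I/O-bound task; await yields control to the event loop
    # so other coroutines can run in the meantime.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Run both coroutines concurrently on a single thread; gather
    # returns results in the order the coroutines were passed.
    return await asyncio.gather(fetch("a", 0.2), fetch("b", 0.1))

loop = asyncio.new_event_loop()
try:
    results = loop.run_until_complete(main())
finally:
    loop.close()
print(results)  # ['a done', 'b done']
```

The total runtime here is roughly the longest single delay, not the sum, which is the whole point of cooperative concurrency for I/O-bound work.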
What will innovations like dynamic plugins, serverless Go, and HTTP/2 Push mean for your development this year?
Go 1.8 is due to be released next month, and it's slated to include several new features, including dynamic plugins and HTTP/2 Push.
Which of these new features will have the most impact probably depends on how you and your development team use Go. Since Go 1.0 was released in 2012, the language's emphasis on simplicity, concurrency, and built-in tooling has kept its popularity pointed up and to the right, so the answers to "What is Go good for?" keep multiplying.
Continue reading 5 things to watch in Go programming in 2017.
Fortran MVC, Decent Security, Livecoding a Game, and Data Pipelines
Continue reading Four short links: 3 January 2017.
2017-01-03T12:00:00ZFrom AI to uncertain political outlooks: What's on our radar.Fintech companies large and small face many of the same disruptive trends as every other kind of tech company—especially the rise of artificial intelligence (AI) and uncertain political outlooks in the United States and Europe. Jon Bruner takes a look at what 2017 might hold in the fintech world. 1. The rise of Chinese fintech China’s eight fintech unicorns—privately held startups valued at more than $1 billion—are together valued at nearly $100 billion, three times as much as America’s 14 fintech unicorns. A couple of them are big because they’re tie-ups between several of China’s largest internet companies, but the breathtaking scale of China’s fintech market is still clear. China’s fintech startups benefit from a colossal domestic market made up of consumers who are newly affluent; are in need of services to manage their wealth; and, in many cases, aren’t already committed to an incumbent financial institution. Plus, many Chinese consumers are mobile natives who don’t need to be convinced to manage their finances through smartphone apps. 2. Blockchain for everything Sorry about this one. You’re probably tired of reading about the blockchain on end-of-year fintech watchlists, but there’s reason to keep it here. Put aside bitcoin and all of its unmet hype, and you’re left with a secure-by-design means of authentication that’s gradually creeping into the world’s financial infrastructure. In 2016, the central banks of both Canada and Singapore announced plans to use blockchain-based digital currencies for interbank settlements, as has a consortium that includes Santander, UBS, BNY Mellon, and Deutsche Bank. Expect to see more blockchain-based infrastructure—and regulatory support—in everything from real estate transfers to intellectual property filings. 3. 
Pressure on startups The last two years have seen the proliferation of robo-advisors, which manage portfolios algorithmically and charge minuscule fees—typically well below half a percent of assets, compared with 1-3% for conventional human asset managers. Betterment and Wealthfront, two of the startups that led the robo-advisor boom, looked like they were entering periods of hockey-stick growth in assets under management back in 2014 and 2015. Instead, they’ve eked out more or less linear growth since then (which is to say, declining month-over-month growth rates). The reason: it turne[...]
2017-01-03T11:15:00ZFrom tools, to research, to ethics, Ben Lorica looks at what's in store for artificial intelligence in 2017.2016 saw tremendous innovation, lots of AI investment in both big companies and startups, and more than a little hype. What will 2017 bring? 1. Democratization of tools will enable more companies to try AI technologies. A recent Forrester survey of business and technology professionals found that 58% of them are researching AI, but only 12% are using AI systems. This is partly because practical AI applications are only now starting to be realized, but it's also because right now AI is hard. It requires very specialized skills and a develop-it-yourself attitude. But frameworks like Facebook's Wit.ai and Howdy's Slack bot are competing to become the Visual Basic of AI, promising point-and-click development of intelligent conversational interfaces for relatively unsophisticated developers. Tools like Bonsai, Keras, and TensorFlow (if you don't mind coding) simplify the implementation of deep learning models. And cloud platforms like Google's APIs and Microsoft Azure allow you to create intelligent apps without having to worry about setting up and maintaining the accompanying infrastructure. 2. We'll see many more targeted AI systems. We're not expecting large, general-purpose AI systems—yet. But we do anticipate an explosion in specific, highly targeted AI systems, such as:

- Robotics: personal, industrial, and retail
- Autonomous vehicles (cars, drones, etc.)
- Bots: CRM, consumer (such as Amazon Echo), and personal assistants
- Industry-specific AI for finance, health, security, and retail

3. The economic impact of increased automation will be the subject of discussion. Expect to hear (a little) less about malevolent AI taking over the world and more about the economic impacts of AI. Concerns about AI stealing jobs are nothing new, of course, but we anticipate deeper, more nuanced conversations on what AI will mean economically. 4. 
In an attention economy, systems that help overcome information overload will become more sophisticated. We’re noticing (and applauding) some very interesting developments in AI that help parse information to overcome information overload, especially in the areas of: Natural language understanding Structured data extraction (from “dark data” to “structured information”) Information cartography Automatic summarization (text, video, and audio) 5. AI researchers wil[...]
Open source development, changing infrastructure, machine learning, and customer-first design meet in a perfect storm to shape the next massive digital transformation.
Open source software development, infrastructure disruption and re-assembly, machine learning, and customer-first design are part of a perfect storm shaping the next massive digital transformation: the one creating amazing startups that are upending industries, as Uber and Lyft have done with transportation, Twitter and Facebook have done with communication, and Netflix and Hulu have done with cable television. All of these enterprises have transformed or created industries. Now every enterprise must shake off the dust of older technology and reinvent itself if it wants to stay competitive.
One piece of the puzzle is to incorporate the code and culture of open source. Open source is a central driver behind every one of these disruptive, successful companies; each has woven it in, from the code to the culture. This is the change that is happening, and open source is how you can play a part.
Continue reading 5 software development trends shaping enterprise in 2017.