Subscribe: O'Reilly Radar - Insight, analysis, and research about emerging technologies
http://radar.oreilly.com/atom.xml
Language: English

All - O'Reilly Media



All of our Ideas and Learning material from all of our topics.



Updated: 2017-01-18T13:50:20Z

 



Use deep learning on data you already have

2017-01-18T13:00:00Z

Putting deep learning into practice with new tools, frameworks, and future developments.

Deep learning has made tremendous advances in the past year. Though managers are aware of what's been happening in the research world, we're still in the early days of putting that research into practice. While the resurgence in interest stems from applications in computer vision and speech, more companies can actually use deep learning on data they already have—including structured data, text, and time-series data. All of this interest in deep learning has led to more tools and frameworks, including some that target non-experts already using other forms of machine learning (ML). Many devices will benefit from these technologies, so expect streaming applications to be infused with intelligence. Finally, there are many interesting research initiatives that point to future neural networks, with different characteristics and enhanced model-building capabilities.

Back to machine learning

If you think of deep learning as yet another machine learning method, then the essential ingredients should be familiar. Software infrastructure to deploy and maintain models remains paramount. A widely cited paper from Google uses the concept of technical debt to posit that "only a small fraction of real-world ML systems is composed of ML code." This means that while underlying algorithms are important, they tend to be a small component within a complex production system. As the authors point out, machine learning systems also need to address ML-specific entanglement and dependency issues involving data, features, hyperparameters, models, and model settings (they refer to this as the CACE principle: Changing Anything Changes Everything).

Deep learning has also often meant specialized hardware (often GPUs) for training models. For companies that already use SaaS tools, many of the leading cloud platforms and managed services already offer deep learning software and hardware solutions. Newer tools, like BigDL, target companies that prefer tools that integrate seamlessly with popular components like Apache Spark and leverage their existing big data clusters, model serving, and monitoring platforms.

You'll also still need (labeled) data—in fact, you'll need more. Deep learning specialists describe it as akin to a rocketship that needs a big engine (a model) and a lot of fuel (data) in order to go anywhere interesting. (In many cases, data already resides in clusters; thus, it makes sense that many companies are looking for solutions that run alongside their existing tools.) Clean, labeled data requires data analysts with domain knowledge, and infrastructure engineers who can design and maintain robust data processing platforms. In a recent conversation, an expert I spoke with joked that with all of the improvements in software infrastructure and machine learning models, "soon, all companies will need to hire are analysts who can create good data sets." Joking aside, the situation is a bit more nuanced. As an example, many companies are beginning to develop and deploy human-in-the-loop systems, sometimes referred to as "human-assisted AI" or "active learning systems," that augment the work done by domain experts and data scientists.

More so than other machine learning techniques, devising and modifying deep learning models requires experience and expertise. Fortunately, many of the popular frameworks ship with example models that produce decent results for problems across a variety of data types and domains.
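To make that concrete, here is a minimal, illustrative sketch (my own, not taken from the article) of what a stock "example model" for structured data looks like, using Keras as one of several possible frameworks; the feature matrix and labels below are random placeholders standing in for labeled data you already have.

    # Minimal sketch: a small feed-forward network for tabular data with Keras.
    # X and y are placeholders; in practice you would load your own labeled data.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense, Dropout

    X = np.random.rand(1000, 20)             # 1,000 rows, 20 numeric features
    y = np.random.randint(0, 2, size=1000)   # binary labels

    model = Sequential([
        Dense(64, activation="relu", input_shape=(X.shape[1],)),
        Dropout(0.5),                        # simple regularization
        Dense(1, activation="sigmoid"),      # binary classification output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)

Swapping in another framework (TensorFlow, PyTorch, or BigDL on Spark) changes the API but not the overall shape of the workflow: prepare labeled data, start from a known-good architecture, train, and evaluate.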
At least initially, packaged solutions or managed services from leading cloud providers obviate the need for in-house expertise, and I suspect many companies will be able to get by with few true deep learning experts. A more sensible option is to hire data scientists with strong software engineering skills who can help deploy machine learning models to production and who understand the nuances of evaluating models. Another common question is the nature of deep lea[...]



Steven Shorrock on the myth of human error

2017-01-18T12:20:00Z

(image)

The O’Reilly Security Podcast: Human error is not a root cause, studying success along with failure, and how humans make systems more resilient.

In this episode, I talk with Steven Shorrock, a human factors and safety science specialist. We discuss the dangers of blaming human error, studying success along with failure, and how humans are critical to making our systems resilient.

Continue reading Steven Shorrock on the myth of human error.

(image)



Four short links: 18 January 2017

2017-01-18T12:10:00Z

Continuous Delivery, Three Machines, Chinese Astroturfing, and The Developer Tools Market

  1. Screwdriver -- Yahoo has open-sourced their continuous delivery tool.
  2. The Three Machines (Brad Feld) -- (1) the Product machine, (2) the Customer machine, and (3) the Company machine. An interesting suggestion for organizational design, which SGTM.
  3. How the Chinese Government Fabricates Social Media Posts -- researchers studying the Chinese government's paid social media commentators have determined their purpose is to distract, not debate. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime's strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We infer that the goal of this massive secretive operation is instead to regularly distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. (via Marginal Revolution)
  4. RethinkDB Post-Mortem -- fantastic analysis of why the company failed, from a founder. our users clearly thought of us as an open source developer tools company, because that's what we really were. Which turned out to be very unfortunate, because the open source developer tools market is one of the worst markets one could possibly end up in. Thousands of people used RethinkDB, often in business contexts, but most were willing to pay less for the lifetime of usage than the price of a single Starbucks coffee (which is to say, they weren't willing to pay anything at all). This wasn't because the product was so good people didn't need to pay for support, or because developers don't control budgets, or because of failure of capitalism. The answer is basic microeconomics. Developers love building developer tools, often for free. So, while there is massive demand, the supply vastly outstrips it. This drives the number of alternatives up, and the prices down to zero.

Continue reading Four short links: 18 January 2017.

(image)



Algorithmic trading in less than 100 lines of Python code

2017-01-18T11:00:00Z

(image)

If you're familiar with financial trading and know Python, you can get started with basic algorithmic trading in no time.

Algorithmic Trading

Algorithmic trading refers to the computerized, automated trading of financial instruments (based on some algorithm or rule) with little or no human intervention during trading hours. Almost any kind of financial instrument - be it stocks, currencies, commodities, credit products or volatility - can be traded in such a fashion. Not only that, in certain market segments, algorithms are responsible for the lion’s share of the trading volume. The books The Quants by Scott Patterson and More Money Than God by Sebastian Mallaby paint a vivid picture of the beginnings of algorithmic trading and the personalities behind its rise.

The barriers to entry for algorithmic trading have never been lower. Not too long ago, only institutional investors with IT budgets in the millions of dollars could take part, but today even individuals equipped only with a notebook and an Internet connection can get started within minutes. A few major trends are behind this development:
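As a rough illustration of how little code a first experiment takes (this is a hedged sketch of one common approach, not the article's own example), the following backtests a naive momentum rule on a pandas price series; the synthetic prices are placeholders for market data you would supply yourself.

    import numpy as np
    import pandas as pd

    def backtest_momentum(prices, window=20):
        """Backtest a naive momentum rule: go long (short) when the trailing
        mean log return is positive (negative). Returns cumulative gross return."""
        returns = np.log(prices / prices.shift(1))
        # Use only information available up to the previous day for the signal.
        position = np.sign(returns.rolling(window).mean()).shift(1)
        strategy = (position * returns).dropna()
        return strategy.cumsum().apply(np.exp)

    # Placeholder data: a synthetic random-walk price series on business days.
    idx = pd.date_range("2016-01-04", periods=500, freq="B")
    prices = pd.Series(100 * np.exp(np.random.normal(0, 0.01, len(idx)).cumsum()), index=idx)
    print(backtest_momentum(prices).tail())

Any real strategy would also need transaction costs, risk limits, and out-of-sample validation before it went anywhere near live trading.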

Continue reading Algorithmic trading in less than 100 lines of Python code.

(image)



Rapid techniques for mapping experiences

2017-01-18T11:00:00Z

(image)

This ebook explains how teams in your organization can co-create experience diagrams in only a couple of weeks.

(image)


Continue reading Rapid techniques for mapping experiences.

(image)



Four short links: 17 January 2017

2017-01-17T12:05:00Z

Autonomous RDBMS, Rules of Machine Learning, Open Source LTE, and Limits of Evidence-Based Policy

  1. Peloton -- a relational database management system that is designed for autonomous operation. The system’s integrated planning component not only optimizes the system for the current workload, but also predicts future workload trends before they occur so that the system can prepare itself accordingly.
  2. Google's Rules of Machine Learning (PDF) -- so good. Hard-won lessons, good advice for avoiding mistakes, and rules of thumb. The number of feature weights you can learn in a linear model is roughly proportional to the amount of data you have.
  3. srsUE -- Open source software radio 3GPP LTE UE.
  4. I'm a Data Nerd But... (Keith Ng) -- Governance is about making decisions with imperfect knowledge. They have to be best guesses because inaction can be as catastrophic as incorrect action, and because sometimes solid knowledge is hard to come by. When we use the language of science for governance, we’re setting up a hair-trigger (“but there’s no evidence for that”) that favours doing nothing. The best case scenario is that it’s a sincerely inquisitive process that could be abused by bad actors to keep us locked into inaction. The worst case scenario is that this is inaction by design, that it is ideologically driven conservatism trying to hide behind the language of scientific conservatism.

Continue reading Four short links: 17 January 2017.

(image)



Going cloud native

2017-01-17T11:00:00Z

Results from the O'Reilly Cloud Platform Survey.

Businesses must move fast to remain competitive. Cloud-native applications offer improved speed and scalability over traditional applications, with better resource utilization and lower up-front costs, and make it faster and easier for businesses to deliver and distribute applications in an agile fashion. Migrating applications to the cloud enables businesses to improve their time-to-market and maximize business opportunities. However, the way that applications are developed, deployed and managed needs to be adapted to the cloud, and new best practices are emerging that enable businesses to deliver more functionality faster and cheaper, without sacrificing reliability.

A Cloud Platform Survey was conducted by O'Reilly Media in collaboration with Dynatrace, to gain insight into how businesses are already using or preparing to move towards cloud-native technologies and practices. The survey found that 94% of the respondents indicate that they intend to run applications in the cloud within the next five years.

Figure 1. Cloud strategy within the next five years

The biggest challenge, identified by 59% of the survey respondents who are looking to migrate to the cloud, is in identifying all of the applications and dependencies in their existing environment. Understanding dependencies by analyzing and mapping connections between applications, services, and cloud components can assist businesses in identifying which parts to migrate first as well as surface any technical constraints that should be considered during migration.

Figure 2. Challenges migrating to the cloud

Of those who have already adopted cloud technologies, 74% of respondents are currently running applications on Infrastructure-as-a-Service (IaaS) platforms, compared to just 21% who have adopted Platform-as-a-Service (PaaS) technologies in production. The majority of IaaS adopters are on public cloud platforms, with Amazon EC2 the most widely adopted (by 74% of those on public platforms). However, the results indicate that respondents are converging toward hybrid cloud solutions rather than public or private clouds exclusively, with a combined total of 72% of respondents who already use IaaS indicating that they intend to migrate to a hybrid cloud in the near future.

Only 40% of the survey respondents are currently running containers in production, with technology maturity the top-reported concern. However, more than half of the respondents who are not running container-based environments anticipate adopting containers within the next four years, listing flexible scaling of resources, resource efficiency, and migration to microservices as key motivating factors.

Figure 3. Motivations for adopting container technologies

Based on their experience and from speaking with companies at different stages of evolution, Dynatrace has devised a maturity framework for gauging how far along businesses are on the journey to cloud native practices—from first steps migrating existing applications to a virtualized infrastructure, through to dynamic microservices environments. The first stage involves migrating applications to IaaS via a lift-and-shift approach, and implementation of a continuous integration/continuous delivery pipeline to speed up releases. The second stage represents the beginning of microservices, where applications migrate from monolithic application architectures toward services running in containers on PaaS platforms. In the third stage, businesses begin to make more efficient use of cloud technologies and shift toward highly decoupled, dynamic microservices.

At each step along this journey toward cloud-native, the importance of scheduling, orchestration, auto-scaling, and monitoring increases. Automation of these processes is key to effective management of cloud-native application environments. For more information on h[...]



Why you should standardize your microservices

2017-01-17T11:00:00Z

(image)

An interview with Susan Fowler, Site Reliability Engineer at Uber Technologies.

Continue reading Why you should standardize your microservices.

(image)



4 essential skills software architects need to have but often don’t

2017-01-17T11:00:00Z

(image)

A look into the unspoken side of software architecture.

Microservices. Continuous delivery. Reactive systems. Cloud native architecture. Many software architects (or even aspiring ones) have at least a passing familiarity with the latest trends and developments shaping their industry. There are plenty of resources available for learning these topics, everything from books and online videos to conference presentations; the ability to go from novice to expert in these areas is right at their fingertips. However, as today’s trends quickly bleed into tomorrow’s standards for development, it’s paramount that software architects become change agents within their organizations by putting into practice the “unspoken” side of their skill set: communication and people skills.

With the arrival of each new year, we all verbalize New Year’s resolutions that fall on deaf ears, including our own. For software architects, these vows may take the shape of getting up to speed on a new form of architecture or finally mastering that cutting-edge technology stack. Either way, it’s the development of these soft skills that define architects as managers—both of large architecture teams and the technology choices they embrace—that’s often neglected in favor of learning the shiniest new technology.

Continue reading 4 essential skills software architects need to have but often don’t.

(image)



How do I undo additions to a repository before a Git commit?

2017-01-17T09:00:00Z

(image)

Learn how to use the git-rm command to remove accidental additions to your Git repository.

Continue reading How do I undo additions to a repository before a Git commit?.

(image)



Four short links: 16 January 2017

2017-01-16T13:00:00Z

Focus Card, Future Tech Trends, Stoic AI Ethics, and Inverse Kinematics

  1. The Card I've Carried In My Notebook (DJ Patil) -- Dream in years, plan in months, evaluate in weeks, ship daily. Prototype for 1x, build for 10x, engineer for 100x. What's required to cut the timeline in 1/2? What's required to double the impact?
  2. 13 Future Tech Trends (CB Insights) -- nice to see something other than "AI, IoT, and Blockchain"! The 13 are: customized babies; robotics companions; the rise of personalized food; 3D-printed housing; lab-engineered luxury; solar roads; ephemeral retail; enhanced workers; "botroots" actions; microbe-made chemicals; neuroprosthetics; instant expertise; AI ghosts. The linked report (they want to email it to you) gives examples of companies in the space and unpacks the topic a little.
  3. Stoic Ethics for Artificial Agents -- traditional AI ethics focuses on utilitarian or deontological ethical theories. We relate ethical AI to several core stoic notions, including the dichotomy of control, the four cardinal virtues, the ideal sage, stoic practices, and stoic perspectives on emotion or affect. More generally, we put forward an ethical view of AI that focuses more on internal states of the artificial agent rather than on external actions of the agent.
  4. Inverse Kinematics -- it’s a problem common to any robotic application: you want to put the end (specifically, the “end effector”) of your robot arm in a certain place, and to do that you have to figure out a valid pose for the arm that achieves that. This problem is called inverse kinematics (IK), and it’s one of the key problems in robotics. A gentle introduction with a little maths and some Python. (A minimal two-link example follows this list.)
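For the flavor of link 4 without following it, here is a small self-contained sketch (my own, not taken from the linked article) of the analytic inverse kinematics for a planar two-link arm.

    import math

    def two_link_ik(x, y, l1=1.0, l2=1.0):
        """Joint angles (theta1, theta2), in radians, that place the end effector
        of a planar two-link arm at (x, y); returns None if the target is out of reach."""
        d = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
        if abs(d) > 1:           # target lies outside the reachable workspace
            return None
        theta2 = math.acos(d)    # "elbow-down" solution; -theta2 gives elbow-up
        theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                               l1 + l2 * math.cos(theta2))
        return theta1, theta2

    print(two_link_ik(1.2, 0.8))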

Continue reading Four short links: 16 January 2017.

(image)



How do I check out a remote branch with Git?

2017-01-16T09:00:00Z

(image)

Learn how to create local copies of remote branches so that you can make local changes in Git.

Continue reading How do I check out a remote branch with Git?.

(image)



Four short links: 13 January 2017

2017-01-13T12:00:00Z

CSV Conference, Autonomous Paper Planes, Test Wisely, and Some Silliness

  1. CSV Conference -- A community conference for data makers everywhere, featuring stories about data sharing and data analysis from science, journalism, government, and open source.
  2. Disposable Paper Drones -- an autonomous drone made out of cardboard that can fly twice the distance of any fixed-range aircraft because it’s disposable. The drone only goes one way. Star Simpson's project at OtherLab.
  3. Engineering War Stories -- Tests aren’t free. Be economical. We had a TDA (Test Driven Apocalypse) where our CI builds had crept up to 20 minutes, tests were failing randomly, and development speed was at an all-time low. It was extremely demoralizing waiting 15-20 minutes and getting a random test failure. We called it The Roulette.
  4. The Best Hacker News Conspiracy Theory Ever -- A few nights ago, over a liberal quantity of beers, my friends and I came up with our latest nonsensical conspiracy theory.... This is wonderful fun.

Continue reading Four short links: 13 January 2017.

(image)



How do I force overwrite local branch histories with Git?

2017-01-13T09:00:00Z

(image)

Learn how to overwrite changes to your local repository with the reset command in Git.

Continue reading How do I force overwrite local branch histories with Git?.

(image)



Pagan Kennedy on how people find, invent, and see opportunities nobody else sees

2017-01-12T12:24:00Z

(image)

The O'Reilly Radar Podcast: The art and science of fostering serendipity skills.

On this week's episode of the Radar Podcast, O'Reilly's Mac Slocum chats with award-winning author Pagan Kennedy about the art and science of serendipity—how people find, invent, and see opportunities nobody else sees, and why serendipity is actually a skill rather than just dumb luck.

Continue reading Pagan Kennedy on how people find, invent, and see opportunities nobody else sees.

(image)



How big compute is powering the deep learning rocket ship

2017-01-12T12:20:00Z

(image)

The O’Reilly Data Show Podcast: Greg Diamos on building computer systems for deep learning and AI.

Specialists describe deep learning as akin to a rocketship that needs a really big engine (a model) and a lot of fuel (the data) in order to go anywhere interesting. To get a better understanding of the issues involved in building compute systems for deep learning, I spoke with one of the foremost experts on this subject: Greg Diamos, senior researcher at Baidu. Diamos has long worked to combine advances in software and hardware to make computers run faster. In recent years, he has focused on scaling deep learning to help advance the state-of-the-art in areas like speech recognition.

Continue reading How big compute is powering the deep learning rocket ship.

(image)



Brad Abrams on Google Assistant

2017-01-12T12:15:00Z

(image)

The O’Reilly Bots Podcast: A universal bot for messaging, mobile voice, and the home.

In this episode of the O’Reilly Bots Podcast, Pete Skomoroch and I speak with Brad Abrams, group product manager of Google Assistant, the company’s new AI-driven bot that lives in many different contexts, including the Pixel phone, the Allo messaging app, and the Google Home voice-controlled speaker.

Continue reading Brad Abrams on Google Assistant.

(image)



Applying user research to product development

2017-01-12T12:00:00Z

5 questions for Amy Silvers: Implementing, embedding, and championing user research in design teams.

I recently asked Amy Silvers, a senior product designer and researcher at Nasdaq, to discuss why it's important to embed user researchers in design teams, how you can train all team members to think like user researchers, and how to achieve buy-in through sound bites. At the O'Reilly Design Conference, Silvers will present a session, The embedded researcher: Incorporating research findings into the design process.

You're presenting a session on embedding user researchers in design teams. Why is this vital to creating a successful product or service?

Most UX practitioners would agree that talking to customers is an essential part of the design process, but in my experience, many design teams don't carry the voice of the customer through their design decisions. There are many reasons for this—sometimes even good (or at least unavoidable) reasons—but one way to insure against it is to make sure there is someone on the team who represents the voice of the customer and advocates for them at every stage of the design process. "Embedding" a researcher may mean having a researcher (or any member of the team who conducted the research) attend design reviews or be involved in the approval process. It may mean something as simple as pinning the research findings to a whiteboard in the design room or including them in the end-of-sprint review form. But whatever form it takes, it's a necessary step if you want to create products, services, and experiences that meet users' needs and expectations.

What are some practical steps for making certain user research isn't treated like an artifact?

One obvious one is to not just include designers and product managers in user research sessions but have them take notes and report to the rest of the team on what they heard. Quick debriefs right after a research session are also effective. They can be tricky to arrange if you're doing a lot of research in a short time, but even then, taking 15 minutes for a recap at the end of a day of research is very helpful. We assign someone to do a quick summary, using a simple form, after each interview, and those are great for capturing noteworthy insights before the full findings report is created. We make the summaries easily accessible to everyone on the team, and they can look through a bunch of them in a short time to get an idea of themes that are emerging from the research. I'll have more to say about this in my talk, of course.

For organizations that don't have the resources for user researchers, what advice do you have for design teams that want to make certain user research is an integral part of the design process?

Designers can be their own user researchers—it just takes a bit of practice and guidance. And when they do their own research, it becomes easier for them to design for the people they've been listening to. They gain empathy for and understanding of the people who will be on the receiving end of their designs, and they may start to think about their designs in ways that would never have occurred to them without speaking to users. They can also enlist others in the organization—customer service reps, product managers, salespeople, and others—in customer research. Have them listen in, and even ask them to help structure the research sessions. Those partnerships mean that you have non-designers who are invested in making sure the users' points of view are represented in the design.
For designers who want to champion user research[...]



Four short links: 12 January 2017

2017-01-12T11:55:00Z

China and Bitcoin, Software Repeatability, Tesla Radar, and Chinese Data Market

  1. China's Deep in Bitcoin -- RMB accounted for 98% of global bitcoin trading volume over the past six months. (via Marginal Revolution)
  2. Software Repeatability -- we're rapidly moving into a space where the long tail of data is going to become useless because the software needed to interpret it is vanishing. [...]Second, it's not clear to me that we'll actually know if the software is running robustly, which is far worse than simply having it break.
  3. No, a Tesla Didn't Predict an Accident and Brake For It (Brad Templeton) -- Radar beams bounce off many things, including the road. That means a radar beam can bounce off the road under a car that is in front of you, and then hit a car in front of it, even if you can’t see the car. Because the radar tells you “I see something in your lane 40m ahead going 20mph and something else 30m ahead going 60mph” you know it’s two different things. The Tesla radar saw just that.
  4. China Data for Sale -- Using just the personal ID number of a colleague, reporters bought detailed data about hotels stayed at, flights and trains taken, border entry and exit records, real estate transactions and bank records. All of them with dates, times and scans of documents (for an extra fee, the seller could provide the names of who the colleague stayed with at hotels and rented apartments). Most major Chinese apps report IMEI numbers.

Continue reading Four short links: 12 January 2017.

(image)



The importance of reification in software development

2017-01-12T11:00:00Z

Communication skills make or break the effectiveness of a developer.

Software development often reminds me of the "peanut butter and jelly sandwich" exercise. This is where two people sit back to back: one responsible for making the sandwich, the other responsible for providing precise instructions on how the sandwich is made. Although it sounds like a trivial challenge, there is a twist to it: the person making the sandwich must interpret the instructions literally, even if they're vague or incomplete. This usually results in silly outcomes like an entire jar of peanut butter being placed on top of a loaf of bread still in its bag. In one demonstration, I have even seen someone use a saber to "slice the bread"—that one was my favorite.

The purpose of this exercise (contrived though it is) is to show how difficult communication can be when you don't have effective two-way feedback, shared context, and frequent reviews of a work-in-progress. Unfortunately, this sort of environment isn't very far removed from what the typical software development team experiences day-to-day. There is a unique aspect of our work in that eventually each instruction needs to be written up for a mindless automaton to follow. This is literally what the coding process is all about: converting human intentions into digital instructions. Much gets lost in translation because coding is a difficult task in itself, and it takes a continuous effort to keep an eye on the big picture while simultaneously working at the primitive level of modules and functions.

The solution to this problem is simple if not always easy to implement: drive up communications quality as much as possible. There are many different tactics that can help create better feedback loops, but at the heart of all of them is the concept of reification: to take an abstract idea and turn it into something tangible. Going back to the peanut butter and jelly sandwich exercise for a moment, think of a slightly more plausible scenario in which the person making the sandwich is reasonable about interpreting instructions, but simply has never prepared or eaten a PB+J sandwich before. A single picture and the right supplies would probably be enough to explain the entire 'recipe' to them. Even in the cases where the instructions might conflict with the picture, it still provides a shared reference point for communication. This makes it so that relatively simple alterations like "remove the crust" or "toast the bread" would be far more likely to be carried out successfully without much confusion.

When building software, we can use wireframe sketches and mockups for similar purposes. For example, suppose someone asks you to build a music video recommendations feature for an application. The following sketch could serve as a useful starting point for further discussion:

(image)

Sometimes a sketch will confirm a shared understanding of the problem, where in other cases it'll reveal fundamental misunderstandings. For example, a music video recommendation system might be designed to work as shown above, but it also might be set up to automatically play one song after another, using thumbs up and thumbs down buttons to give feedback that informs the next selection. Reification is also a progressive process… once you make something even slightly concrete, it is natural to begin filling in some of the more specific details. For example, where are those thumbnail images going to come from? Will the video properly adjust its orientation and aspect ratio when ru[...]



How do I revert a Git repo to a previous commit?

2017-01-12T09:00:00Z

(image)

Learn how to view the history of a branch using the “git log” command and perform a checkout in Git.

Continue reading How do I revert a Git repo to a previous commit?.

(image)



Four short links: 11 January 2017

2017-01-11T12:00:00Z

Stacked Hydrogels, Beginning Measurement, Yubikey Experiments, and Find Lectures

  1. Stacked Hydrogels for Implantable Microelectromechanical Systems -- enables development of biocompatible implantable microdevices with a wide range of intricate moving components that can be wirelessly controlled on demand, in a manner that solves issues of device powering and biocompatibility. As with almost all such papers, "enables" isn't the same as "we now can make," but this is a clever step forward. (via Robohub)
  2. The First Four Things You Measure -- great advice for how to start measuring useful things about your service.
  3. Yubikey Handbook -- A collection of guidelines, use cases, and experiments with the Yubikey.
  4. Find Lectures -- tens of thousands of video and audio lectures, search and browsing. (via Werner Vogels)

Continue reading Four short links: 11 January 2017.

(image)



Defining open: Biohack the Planet

2017-01-11T11:00:00Z

(image)

BioHTP explored the projects the biohacking community is tackling, how the community is organized, and where it's going.

How do we define an open community and what would we want from one? The inaugural BioHack the Planet conference took place this September in Oakland, California. Hosted in Omni Commons, the volunteer collective home to the DIYbio hub Counter Culture Labs, the conference embodied the spirit of the community it sought to bring together. BioHack the Planet (BioHTP) was developed to be "run by BioHackers, designed for BioHackers, with talks solicited from BioHackers." Its two organizers, Karen Ingram and Josiah Zayner, are both veterans of the do-it-yourself (DIY) biology, or biohacking community, in their own right. Ingram is a coauthor of the BioBuilder curriculum for teaching synthetic biology. Zayner's DIY supply store, ODIN, has outfitted many budding scientists with everything from pipette tips and bulk cell media to at-home CRISPR kits. So what happens when biohackers organize themselves to discuss their work? Part symposium, part workshop, and part exhibition, BioHTP explored the projects the biohacking community is tackling, how the community is organized, and where it's going.

While BioHTP isn't the first time the biohacking community has been brought together, it may be the first time the community has done so on its own at this scale. Soon after the launch of DIYbio.org in 2008, the FBI co-sponsored the "Building Bridges Around Building Genomes" conference in collaboration with the American Association for the Advancement of Science (AAAS), the U.S. Department of Health and Human Services, and the Department of State to encourage communication between policymakers and academics as well as synthetic biologists working in industry and in community labs. Later, in 2012, the FBI flew in biohackers from around the world to a DIYbio conference to develop their relations with the community. Since then, DIY biologists have organized largely on a continental basis, as in the formation of a European network of do-it-yourself biology, on the web, as in the Hackteria platform, or complementary to larger hobbyist conferences, such as in the Nation of Makers Initiative or South by Southwest.

Continue reading Defining open: Biohack the Planet.

(image)



5 web trends for 2017

2017-01-11T11:00:00Z

What's coming with PWAs, Angular, React, and Vue; the rising tide of functional reactive; looking beyond REST to GraphQL and Falcor; and the future of artificial intelligence on the web.

The rise of Progressive Web Apps

2016 was the year of the question "what are Progressive Web Apps"? Originally developed and coined by Google's Chrome team, Progressive Web Apps (PWAs) are a new form of mobile web development, where greater parity can occur between native and web. This brings together the organic discovery people like about the mobile web and the features of native apps users have come to rely on, like push notifications, offline support, and speed. Gartner predicts that by 2019, 20% of brands will abandon their native mobile apps, so this year may be the time to give PWAs a try.

Native apps will continue to be a part of our daily lives, but currently, mobile users spend 80% of the time on their devices using only their top three apps. On the other hand, a lot of organic discovery is happening on the mobile web, but users tend to spend less time on these mobile sites due to less than optimal browsing experiences. One strategy some companies have used is creating prompts for users to open their site in the native app. However, this comes with the cost of building and maintaining both a mobile site and a native app, and often these prompts simply annoy users.

Making the transition from native to progressive web apps brings many development concerns to the forefront, from accessibility and responsive design, to security and performance. There's also the fact that PWAs are being built on technologies still in active development, such as service workers. Though it's still early days, already major outlets and companies are adopting PWAs for their mobile web experiences, with two major examples being the Washington Post and Flipkart, India's largest ecommerce site.

Stabilization and flexibility among web stacks

Over the past year, there have been major developments for the two leading JavaScript frameworks in the web ecosystem: Angular had its first major release since 2009 (Angular 2), and React had its 15.0 release, which included significant improvements. With the Angular team's meticulous rewrite of the framework, and React's ecosystem finding maturation and wide adoption, we're seeing both communities coalescing around common best practices. Vue has also gained a lot of popularity in the last year and is now in its 2.0 version. More lightweight and less opinionated than Angular, Vue focuses on the view layer only, making it easy to integrate with other libraries. Laravel has adopted Vue as its default frontend JavaScript framework, helping to grow its popularity in the PHP and wider web communities.

While there's still no shortage of churn in the JavaScript tool space, the stabilization in these leading communities highlights some of the opportunities that lie ahead for developers looking to get some of the best of all worlds when it comes to frameworks and tools in their web stack. While a lot of posts and conference talks boil things down to Angular vs. React, the reality is that Angular is a full-fledged framework whereas React is a library, built as a view-layer (in other words, the "V" in the MVC model). Just because they are often pitted against each other doesn't mean you have to pick one or the other—or pick one at all, for that matter. In fact, t[...]



How do I fix a merge conflict in Git?

2017-01-11T09:00:00Z

(image)

Learn how to resolve a conflict, mark the change as resolved, and commit the branch to the repository with Git.

Continue reading How do I fix a merge conflict in Git?.

(image)



The 2017 machine learning outlook

2017-01-10T14:00:00Z

(image)

Drew Paroski and Gary Orenstein on the rapid spread of machine learning and predictive analytics

Machine learning has been a mainstream commercial field for some time now, but it’s going through an important acceleration. In this podcast episode, I talk about that acceleration with two executives from MemSQL, a company that specializes in in-memory databases: Gary Orenstein, MemSQL chief marketing officer, and Drew Paroski, MemSQL vice president of engineering.


Orenstein and Paroski identify a few crucial inflections in the machine learning landscape: machine learning models have become easier to write; computing capacity on the cloud has increased dramatically; and new sources of data—everything from drones to smart-home devices and industrial controllers—have added new richness to machine learning models.

Computing capacity and software progress have made it possible to train some machine learning models in real time, says Orenstein: “given enough time in computing, you can do just about anything, but only recently have people been able to apply these machine learning models in real time to critical business processes.”

Other discussion topics:

  • How the machine learning field overlaps with the data science field
  • How energy companies use real-time predictive analytics to operate wind farms and oil fields
  • What skills managers need to consider as they’re building teams that specialize in machine learning
  • How to build a real-time data pipeline, and how to consider cloud versus on-premise infrastructure

This post and podcast are part of a collaboration between MemSQL and O’Reilly. See our statement of editorial independence.

Continue reading The 2017 machine learning outlook.

(image)



Four short links: 10 January 2017

2017-01-10T12:05:00Z

AI Ethics, Real Names, Popup VPN, and Serverless COBOL

  1. Top 9 Ethical Issues in AI (WEF) -- unemployment, inequality, humanity, mistakes, bias, security, unintended consequences, singularity, and robot rights.
  2. The Real Name Fallacy -- real names don't make for more civility; forcing real names in online communities could also increase discrimination and worsen harassment.
  3. PopUp OpenVPN -- Make a self-hosted OpenVPN server in 15 minutes.
  4. COBOLambda -- Serverless COBOL on AWS Lambda. (via Werner Vogels)

Continue reading Four short links: 10 January 2017.

(image)



Design follows technology

2017-01-10T12:00:00Z

(image)

We need to take a firm look at whether we’re actually benefiting from technology and why we’re using that technology.

With all disciplines of design, as technology progresses, the restrictions become fewer and fewer until they get to the point at which a wave crashes and there’s nothing holding you back from making anything you can dream of. Architecture became bolder when steel frames could support nearly any structure architects could dream of without being restricted by the need to cater to gravity to the degree they needed to when building with stone alone. Furniture evolved with advancements in the technology of the materials available to designers, such as fiberglass and bending plywood, as well as technological advancements related to mass manufacturing processes. In graphic design, typefaces became increasingly more complex and detailed as printing technologies evolved to the point that those details could be reproduced accurately, and then it went completely insane when computers removed nearly all graphical restrictions.

Digital Maturity

Wearable technology has been around in one form or another since 17th-century China, when someone came up with an abacus that you could wear on your finger. So, what's so special about wearable technology at this point in time? There's a reason wearable technology is making a resurgence. It's because we've finally made it to the digital maturity tipping point: the point at which advancements in computing and the supporting infrastructure have lessened the restrictions on the design of technology so that they're nearly nonexistent.

Continue reading Design follows technology.

(image)



Practical artificial intelligence in the cloud

2017-01-10T11:45:00Z

(image)

Mike Barlow examines the growth of sophisticated cloud-based AI and machine learning services for a growing market of developers and users in business and academia.

When the automobile was introduced, there were rumors that drivers and passengers would suffocate when the speed of their vehicles exceeded 20 miles per hour. Many people believed that cars would never become popular because the noise of passing engines frightened horses and other farm animals.

Nobody foresaw rush-hour traffic jams, bumper stickers, or accidents caused by people trying to text and drive at the same time.

Continue reading Practical artificial intelligence in the cloud.

(image)



How to cut your innovation speed in half

2017-01-10T11:30:00Z

Focus on your customer's needs and iterate.

"If I knew where all the good songs came from, I'd go there more often," so said Leonard Cohen when asked how he wrote classic hits like "Suzanne" and "Hallelujah." Formulating the ideas behind timeless hits is not an easy task—serendipity, stimulation, and skill all equally play their part. Yet, in large organizations, a lack of ideas is rarely the problem. Business leaders and executives are inundated with suppositions, proposals, and pitches on how to increase, invent, and imagine new revenue streams. Most often, the biggest challenge is not conjuring up the concept—it's killing it as quickly and cheaply as possible.

In The Principles of Product Development Flow, Don Reinertsen's research concluded that about 50% of total product development time is spent on the 'fuzzy front end'—i.e., the pitching, planning, and funding activities before an initiative starts formal development. In today's fast-paced digital economy, the thought of spending half of the total time to market on meetings and executive lobbying with no working product to show isn't just counterproductive and wasteful—it's ludicrous.

Figure 1. Image by Barry O'Reilly.

Furthermore, the result of all this investment is often an externally researched, expensive, and beautifully illustrated 100-page document to endorse claims of certain success. The punchline presented through slick slide transitions is "All we need is $10 million, two years, 100 people and we'll save the business!" Science fiction, theater, and fantasy rolled into one.

What is really needed is a systematic, evidence-based approach to managing the uncertainty that is inherent at the beginning of any innovation process. Our purpose when commencing new initiatives is to collect information to help us make better decisions while seeking to identify unmet customer needs and respond to them. New business possibilities are explored by quickly testing and refining business hypotheses through rapid experimentation with real customers. Our goal is to perform this activity as early, quickly, and cheaply as possible; lengthy stakeholder consensus building, convoluted funding processes, and hundreds of senior management sign-off sessions are not. Decisions to stop, continue, or change course must be based on the 'real world' findings from the experiments we run, not subjective HiPPO (Highest Paid Person's Opinions) statements about how they've "been in the business for 30 years and know better."

Imagine a world without costly executive innovation retreats, and where the practice of pitching business cases at annual planning and/or budgeting cycle meetings is extinct. Instead, a similarly sized investment is assigned to small cross-functional teams to explore given problems, obstacles, or opportunities throughout the course of a year. Over a short fixed-time period, a team creates a prototyped solution to be tested with real customers to see if they find it valuable or not. We are investing to reduce uncertainty and make better decisions. You are paying for information. The question is really: how much do you want to invest to find out? In his book, How To Measure Anything, Douglas Hubbard studied the varia[...]



Design pressures lead to better code and better outcomes

2017-01-10T11:00:00Z

(image)

An interview with Martin Thompson, High-Performance Computing Specialist at Real Logic.

Continue reading Design pressures lead to better code and better outcomes.

(image)



Kubernetes and Prometheus: The beginning of a beautiful friendship

2017-01-10T11:00:00Z

(image)

Learn how to use Kubernetes and Prometheus together to reimagine infrastructure and measure "the right things."

Continue reading Kubernetes and Prometheus: The beginning of a beautiful friendship.

(image)



Automating XSS detection in the CI/CD pipeline with XSS-Checkmate

2017-01-10T11:00:00Z

(image)

Learn this new security fuzz testing technique that leverages browser capabilities to detect cross-site scripting vulnerabilities before production deployment.

Integrating cross-site scripting (XSS) tests into the continuous integration and continuous delivery (CI/CD) pipeline is an effective way for development teams to identify and fix XSS vulnerabilities early in the software development lifecycle. However, due to the nature of the vulnerability, automating XSS detection in the build pipeline has always been a challenge. A common practice is to employ dynamic web vulnerability scanners that perform blind black-box testing on newly built web applications before production deployment. There is another way to detect XSS vulnerabilities in web applications in CI/CD: XSS-Checkmate is a security fuzz testing technique (not a tool) that complements traditional dynamic scanners by leveraging browser capabilities to detect XSS vulnerabilities. Since modern web applications widely use tools such as Selenium, Cucumber, and Sauce Labs to test user interactions, this technique can be integrated into those test frameworks with relatively little effort.

What is XSS?

XSS is one of the oldest types of security vulnerabilities found in web applications. It enables execution of an attacker-injected malicious script when a victim visits an infected page. The primary reason for XSS is the improper neutralization of user inputs when rendered on a web page. XSS has remained a top threat on OWASP’s top ten list since its first publication in 2004.
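To make the idea concrete, here is a hedged sketch of how a browser-driven probe might be bolted onto an existing Selenium test; the URL, form field name, and marker payload are hypothetical, and this illustrates the general approach rather than the XSS-Checkmate technique itself.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.common.exceptions import TimeoutException

    # Marker payload: if it executes, the page reflected our input without neutralizing it.
    PAYLOAD = "<script>alert('xss-probe-1337')</script>"

    def probe_reflected_xss(base_url, field_name):
        """Submit the payload through a form field and report whether it executed."""
        driver = webdriver.Chrome()  # assumes a local chromedriver is available
        try:
            driver.get(base_url)
            box = driver.find_element(By.NAME, field_name)
            box.send_keys(PAYLOAD)
            box.submit()
            try:
                # A JavaScript alert appearing means the browser actually ran our script.
                WebDriverWait(driver, 3).until(EC.alert_is_present())
                alert = driver.switch_to.alert
                executed = "xss-probe-1337" in alert.text
                alert.accept()
                return executed
            except TimeoutException:
                return False
        finally:
            driver.quit()

    if __name__ == "__main__":
        # Hypothetical staging URL and field name; point this at your own test build.
        print(probe_reflected_xss("https://staging.example.com/search", "q"))

Because the check runs in a real browser, it can ride along with the Selenium or Cucumber suites a team already maintains and execute on every build.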

Continue reading Automating XSS detection in the CI/CD pipeline with XSS-Checkmate.

(image)



How we got Linux on Windows

2017-01-10T11:00:00Z

From "Linux is a cancer" to Windows Subsystem for Linux.

Since the early 1990s, when Windows became much more popular in the enterprise, people have been trying to put Unix and Linux into places where they don't want to be, using toolkits that implement just enough of the Portable Operating System Interface (POSIX) standard to feel like Unix. The reasons are pretty simple: a lot of open source tools, especially development tools, are primarily targeted to Unix/Linux platforms. Although the most important ones have been ported to Windows, they are designed to work best when they have access to a Unix/Linux shell scripting environment and the many utilities that come with those platforms. Today, there are many options available for getting Unix or Linux functionality on Windows, and the most recent one, the Windows Subsystem for Linux (WSL), provides a Linux environment that is deeply integrated with Windows. But before I get into that, I'll look at some of the other options.

The early days

In the early 1990s, you could get a pretty good Unix shell, compiler, and text utilities on DOS using DJGPP. It was also around this time that you could simply install Linux on a computer if you wanted a pure Unix-like environment. Linux had none of the limitations that a Unix-like subsystem had, but it also meant that you had to convert in total to Linux. So, if you wanted—or needed—to run a Unix-like environment alongside a Microsoft operating system, you needed a subsystem like DJGPP. And in order to comply with US Federal Information Processing Standards and be considered for defense-related projects, Microsoft needed one, too.

Figure 1. Running GNU bash under DJGPP in a modern DOS emulator (DOSBox).

Windows NT, Microsoft's first foray into true multitasking and multi-user operating systems, is the basis of their current operating system offerings, whether you're running it on a phone, computer, or Raspberry Pi. Although it shares superficial traits with Unix and Linux, internally, it is not at all like them, which wasn't surprising considering when Windows NT was born: in 1993, Unix was something you'd find on a workstation or server. Apple's Macs were running their venerable System 7, and it would be eight more years before Mac OS X would come out, itself a Unix-based operating system. The difference between Windows and Unix meant that when you ported a Unix application to Windows, there was substantial functionality, such as the fork() system call, that was simply not available. Because of this, the number of Unix programs that could be fully ported to Windows was fairly small, and many programs had reduced functionality as a result.

Over the years, Microsoft kept Unix and Linux at arm's length. To comply with those federal standards, it supplied a bare minimum POSIX subsystem, just enough to satisfy the standards. The POSIX subsystem was eventually replaced with something called the Windows Services for Unix, which provided a Unix shell, a compiler, and a lot of command-line tools. But it wasn't enough.

The age of Linux

It didn't take long for Linux to define what the Unix-like experience should be on [...]



How do I rename a local branch in Git?

2017-01-10T09:00:00Z

(image)

Learn how to rename a branch using the -m flag in Git.

Continue reading How do I rename a local branch in Git?.

(image)



Making telecommunications infrastructure smarter

2017-01-09T12:00:00Z

Turning physical resource management into a data and learning problem.

A common refrain about artificial intelligence (AI) lauds the ability of machines to pick out hard-to-see patterns from data to make valuable, and more importantly, actionable observations. Strong interest has naturally followed in applying AI to all sorts of promising applications, from potentially life-saving ones like health care diagnostics to the more profit-driven variety like financial investment. Meanwhile, rather ethereal speculation about existential risk and more tangible concern about job dislocation have both run rampant as the topics du jour concerning AI. In the midst of all this, many tend to overlook a more concrete and pretty important, but admittedly less sexy subject—AI's impact on core infrastructure. We were certainly guilty of this when we organized our first Artificial Intelligence Conference. We did not predict the strong representation that we would see from the telecommunications industry, but we won't make that mistake next time. This sector now offers a fascinating look into how AI can deliver great value to infrastructure.

To understand why, it is instructive to first recap what is happening in the telecom industry. Major shifts usually happen as a confluence of not one, but many trends. That is the case in telecommunications as well. Continuous proliferation and obsolescence of devices means more and more endpoints, often mobile, need to be managed as they come online and offline. Meanwhile, bandwidth usage is exploding as consumers and corporations demand more content and data, and the march toward cloud services continues. All of this means that network infrastructure must have more capacity and be more dynamic and fault tolerant than ever as providers feverishly optimize network configurations and perform load balancing. One can imagine the complexity of this when delivering connectivity services to a large portion of an entire country's online presence.

To solve this problem, telecommunications companies are looking to borrow principles from people who have solved similar issues at smaller scales—running data centers with software-defined networking (SDN). By developing SDN for a wide area network (WAN), telecom providers can effectively make their networks reprogrammable by abstracting away hardware complexity into software. Still, even as WANs become software defined, their enormity makes management and operation difficult. While hardware management becomes less complex, running software-defined WANs (SD-WANs) presents highly complex data problems—information flows may exhibit patterns, but they are hard to extract.

This is a perfect application for AI algorithms, however. Reinforcement learning is a particularly promising approach that already has history with networking applications. More recently, researchers have combined deep learning with traditional reinforcement learning to develop deep reinforcement learning techniques that will likely find usage in networking. Here is a watered-down version of how that might work. First, consider traditional [...]
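As a generic, illustrative sketch of the reinforcement-learning framing (my own toy example, not the article's), the code below uses a bandit-style Q update to learn which of several candidate network paths has the lowest latency from noisy, simulated measurements.

    import random

    PATHS = ["path_a", "path_b", "path_c"]
    TRUE_LATENCY_MS = {"path_a": 40, "path_b": 25, "path_c": 60}  # hidden from the agent

    def observe_latency(path):
        # Simulated, noisy telemetry standing in for real SD-WAN measurements.
        return random.gauss(TRUE_LATENCY_MS[path], 5)

    q = {p: 0.0 for p in PATHS}   # estimated value (negative latency) per path
    alpha, epsilon = 0.1, 0.1     # learning rate and exploration rate

    for _ in range(2000):
        # Epsilon-greedy: mostly exploit the best-known path, occasionally explore.
        path = random.choice(PATHS) if random.random() < epsilon else max(q, key=q.get)
        reward = -observe_latency(path)
        q[path] += alpha * (reward - q[path])   # single-state (bandit-style) update

    print(max(q, key=q.get))  # converges to the lowest-latency path

A production system would work with far richer state (traffic matrices, link utilization, time of day) and a learned policy rather than a single table, but the feedback loop of act, observe, and update is the same.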



Four short links: 9 January 2017

2017-01-09T11:55:00Z

"Universal Truths," Getting Stuck, Kafka Dot Com, and Auto Ordering

  1. 50 Universal Truths for Success (Business Insider) -- I've been collecting my own "lessons learned" and some look like these. In the early stages of a company, career, or project, you’ll have to say “yes” to a lot of things. In the later stages, you’ll have to say “no.” True for stages of life as well.
  2. How do Individual Contributors Get Stuck? (Camille Fournier) -- top advice. My competence took a massive step forward when I finally realized what caused me to get stuck (and what getting stuck felt like). Noticing how people get stuck is a super power, and one that many great tech leads (and yes, managers) rely on to get big things done. When you know how people get stuck, you can plan your projects to rely on people for their strengths and provide them help or even completely side-step their weaknesses. You know who is good to ask for which kinds of help, and who hates that particular challenge just as much as you do.
  3. Kafka the Entrepreneur (Guardian) -- wanted to write indie European travel guides, came up with a five-page business plan, but was so paranoid he demanded NDAs from publishers before meeting ... so nobody funded him.
  4. TV Anchor says "Alexa Buy Me A Dollhouse" on Air (The Register) -- voice-command purchasing is enabled by default on Alexa devices. Hilarity ensues.

Continue reading Four short links: 9 January 2017.

(image)



The Offline Challenge: Delivering Mobile Apps that Always Work

2017-01-09T11:45:00Z

(image)

Wayne Carter and Ali LeClerc show you how to build a mobile app that has a consistent user experience, both online and offline.

Continue reading The Offline Challenge: Delivering Mobile Apps that Always Work.

(image)



Four short links: 6 January 2017

2017-01-06T12:05:00Z

Learn 6502, Graphviz in Browser, Messenger Bot Discovery, and Data Shapeshifting

  1. Easy 6502 -- take your first steps in programming in the assembly language behind the Apple II and the Commodore 64. It has an inline assembler and 6502 emulator, resulting in a tutorial that reads like an IPython Notebook.
  2. Viz -- Graphviz in your browser.
  3. FB Messenger PM on Discovery -- short version: discovery is not FB's problem; it's your problem as a bot-developer. Have an audience, and point them to your bot. There will be no instant millionaires courtesy of Facebook's discovery system. This leaves me feeling unsatisfied, even as I understand the temptation to avoid the Robert-Scoble-Wearing-It-In-The-Shower-brand death moment of shiny new tech du jour.
  4. Odo: Shapeshifting For Your Data -- It efficiently migrates data from the source to the target through a network of conversions. (A minimal usage sketch follows below.)
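
For a sense of what that "network of conversions" looks like in practice, here is a minimal usage sketch, assuming the odo package (and its SQLAlchemy dependency) is installed and a local accounts.csv file exists; the file and table names are placeholders.

    import pandas as pd
    from odo import odo

    # CSV -> in-memory DataFrame, then DataFrame -> SQLite table, each via one call.
    df = odo("accounts.csv", pd.DataFrame)
    odo(df, "sqlite:///accounts.db::accounts")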

Continue reading Four short links: 6 January 2017.

(image)



Data, data everywhere, not an insight to take action upon

2017-01-06T11:00:00Z

(image)

Learn how to analyze operations data in the presence of “holes” in the time series, how missing data impacts analysis, and a gamut of techniques that can be used to address the missing data issue.
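
As a small taste of the techniques covered, here is a minimal pandas sketch of two common ways to fill holes in an ops time series; the metric and its gaps are made up for illustration.

    import numpy as np
    import pandas as pd

    # Hypothetical hourly CPU metric with missing observations ("holes").
    idx = pd.date_range("2017-01-01", periods=8, freq="H")
    cpu = pd.Series([42.0, np.nan, 47.5, np.nan, np.nan, 55.0, 53.2, np.nan], index=idx)

    filled_ffill = cpu.ffill()            # carry the last observation forward
    filled_interp = cpu.interpolate()     # linear interpolation between known points
    print(pd.DataFrame({"raw": cpu, "ffill": filled_ffill, "interp": filled_interp}))

Carrying the last value forward and interpolating make different assumptions about the underlying signal, which is exactly the kind of trade-off among techniques the piece examines.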

Continue reading Data, data everywhere, not an insight to take action upon.

(image)



What AI needs is a dose of realism

2017-01-05T12:40:00Z

(image)

Oren Etzioni talks about the current AI landscape, projects at the Allen Institute, and why we need AI to make sense of AI.

Continue reading What AI needs is a dose of realism.

(image)



Andra Keay on robots crossing the chasm

2017-01-05T12:30:00Z

(image)

The O’Reilly Design Podcast: Identifying use cases for robots, the five laws of robots, and the ethics and philosophy of robotics.

In this week’s Design Podcast, I sit down with Andra Keay, managing director of Silicon Valley Robotics. We talk about the evolution of robots, applications that solve real problems, and what constitutes a good robot.

Continue reading Andra Keay on robots crossing the chasm.

(image)



Joel Johnson on desktop fabrication

2017-01-05T12:15:00Z

(image)

The O’Reilly Hardware Podcast: An all-in-one workshop factory.

In this episode of the O’Reilly Hardware Podcast, Jeff Bleiel and I speak with Joel Johnson, co-founder and CEO of BoXZY, a startup that makes an all-in-one desktop CNC mill, 3D printer, and laser engraver. We discuss the BoXZY device’s software and hardware, including its CAD/CAM software (Autodesk’s Fusion 360) and controller board (Arduino Mega), as well as its shield, steppers, and firmware.

Continue reading Joel Johnson on desktop fabrication.

(image)



Four short links: 5 January 2017

2017-01-05T12:05:00Z

Crowdsourced Phone, Social Network Analysis of Literature, Sterling and Lebkowsky, and Narrative Patterns

  1. Crowdsourced Phone Gets Release Date and Price -- September, $199, and the two features from the crowdsourced list of "what could we put in a mobile phone?" are eye-tracking navigation and an adhesive case that lets you stick it to surfaces.
  2. Comparative Social Network Analysis of Irish and British Fiction, 1800-1922 -- they just added Portrait of the Artist as a Young Man on the anniversary of its original publication.
  3. State of the World -- Sterling and Lebkowsky go again. Not that I forgive them for cyberspying, but really, getting kicked out of a country is such a personal drag. You've got to sell your house, get rid of your car, fire the babysitter, fill out all kinds of stupid international paperwork... One minute you've got a plum job in Washington, a weird foreign power where things are just getting lively and interesting, and the next day you're packing a valise like some goddamn Syrian refugee. Just because you've been spearphishing VCs, CEOs, and Congressmen, whatever. Sure, it's sneaky and against the host country's national interests, but were you supposed to NOT do that? You're in the freakin' diplomatic corps! It's an existential condition.
  4. NAPA Cards -- narrative patterns. I'm not sure it's useful, but even making me go "are these really examples of the same category of thing?" is making my brain work in interesting ways.

Continue reading Four short links: 5 January 2017.

(image)



3 lessons from serial innovators

2017-01-05T12:05:00Z

Hint: It’s not just one bright idea, repeated several times.Some people appear to be blessed. They aren’t just lucky enough to have a single right idea at the right time; they keep coming up with more bright ideas that make the world better, or at least are valued enough for a profitable business model. While innovation and market success do not always have a strong correlation, there are a few things the creators often have in common. It’s one thing to get lucky—to have a bright idea at the exact right moment. But many of the people I admire have been “lucky” several times over, sometimes in different guises. Maybe it’s creating a business that evolves from a solo success to a wide range of profitable endeavors under the same corporate umbrella, such as Amazon’s Jeff Bezos. Or the spark of creativity may touch different realms, such as Philippe Kahn, whose success began with Borland’s Turbo Pascal, continued with camera phones, and now extends to the Internet of Things. I’ve paid some attention to what these people do differently from the rest of us mere mortals, including the things they don’t notice. Because, often, we take our personal strengths for granted; they are, after all, the things that require the least conscious effort. Lesson 1: It’s all about technology. Except when it isn’t. Always lead with technology, says Philippe Kahn. “I’m passionate about making things that help Ms. and Mr. Everyone. At different times, different things. But it all has to be led by unique technology innovation.” You may remember Kahn’s legacy at Borland, which drove software development on microcomputers. At LightSurf, it was the camera phone, accompanied by key patents. “Today at Fullpower, it’s the leading platform for the smartbed in the IoT smarthome, powered by machine learning and data science,” he says, with “a lot of innovation and patents.” However, the world is full of technology that is inherently cool, but also a solution in search of a problem. Whether an innovation is an improvement over existing solutions (the iPod, a faster CPU, a more affordable compiler) or a disruptive game changer (DVDs by mail, crowdsourced classified ads), it answers questions that people immediately realize they had—as soon as the answer appears. The technology breakthrough itself can be a distraction. Even though the market makers may talk about technology publicly, says Saul Kaplan, founder of the Business Innovation Factory, their attention usually is on problem solving and business models. It’s part of their DNA, he says. “When they stand in line at the supermarket, they are considering how to improve the buying experience,” he points out. Indeed, architect and inventor Buckminster Fuller was incensed by the time wasted standing in line when the bank tried to “save money” by limiting the number of[...]



Guidelines for keeping pace with innovation and tech adoption

2017-01-05T12:00:00Z

(image)

This ebook explores guidelines that can help you make crucial "right item, right time" decisions and assist in the integration of new technology into existing business processes.

Guidelines for Keeping Pace with Innovation and Tech Adoption: Don’t Just Fail Fast—Learn Fast

There are two kinds of fool. One says, “This is old, and therefore good.” And one says, “This is new, and therefore better.”

Dean Inge

Continue reading Guidelines for keeping pace with innovation and tech adoption.

(image)



5 trends in defensive security for 2017

2017-01-05T11:00:00Z

(image)

From disclosure to machine learning to IoT, here are the security trends to watch in the months ahead.

When we started planning the inaugural O’Reilly Security Conference, we did so with a unique focus on defensive security. There are many excellent researchers and leaders working to solve the problems of security today, but what we wanted to do was to get down to the nuts and bolts of building better defenses across the board. The Security Conference and our newsletter have received an enthusiastic welcome from our audience—the defenders. And it’s for you, the defenders, that we offer this look forward at the key trends in 2017.

1. Greater coordination on vulnerability disclosure

Something nearly unexpected happened in 2016: the Department of Defense released its vulnerability disclosure policy, putting the government ahead of the vast majority of private enterprises when it comes to partnering with well-intentioned hackers instead of viewing them as dangerous adversaries. This should serve as inspiration for other government groups and corporations, all of which stand to benefit. On top of that, Katie Moussouris worked with ISO to release a free version of the ISO 29147 standard, providing best practices for organizations looking to establish their own vulnerability disclosure programs.

Continue reading 5 trends in defensive security for 2017.

(image)



Top 8 systems operations and engineering trends for 2017

2017-01-05T11:00:00Z

What to watch for in distributed systems, SRE, serverless, containers and more. Forecasting trends is tricky, especially in the fast-moving world of systems operations and engineering. This year, at our Velocity Conference, we have talked about distributed systems, SRE, containerization, serverless architectures, burnout, and many other topics related to the human and technological challenges of delivering software. Here are some of the trends we see for the next year: 1. Distributed Systems We think this is important enough that we re-focused the entire Velocity conference on it. 2. Site Reliability Engineering Site Reliability Engineering—is it just ops? Or is it DevOps by another name? Google's profile for an ops professional calls for heavy doses of systems and software engineering. Spread further into the industry by Xooglers at companies like Dropbox, hiring for SRE positions continues to increase, particularly for web-facing companies with large data centers. In some contexts, the role of SREs becomes more about helping developers operate their own services. 3. Containerization Companies will continue to containerize their software delivery. Docker Inc. itself has positioned Docker as a tool for "incremental revolution," and containerizing legacy applications has become a common use case in the enterprise. What's the future of Docker? As engineers continue to adopt orchestration tools like Kubernetes and Mesos, the higher level of abstraction may make more room for other flavors of containers (like rkt, Garden, etc.). 4. Unikernels Are unikernels the next step after containerization? Are they unfit for production? Some tout the security and performance benefits of unikernels. Keep an eye out for how unikernels evolve in 2017, particularly with an eye to what Docker Inc. does in this area (having acquired Unikernel Systems this year). 5. Serverless Serverless architectures treat functions as the basic unit of computation. Some find the term misleading (and reminiscent of "noops"), and prefer to refer to this trend as Functions-as-a-Service. Developers and architects are experimenting with the technology more and more, and expect to see more applications being written in this paradigm. For more on what serverless/FaaS means for operations, check out the free ebook on Serverless Ops by Michael Hausenblas. 6. Cloud-Native application development Like DevOps, this term has been used and abused by marketers for a long while, but the Cloud Native Computing Foundation makes a strong case for these new sets of tools (often Google-inspired) that take advantage not just of the cloud, but in particular the strengths and opportunities provided by distributed systems—in short, microservices, c[...]
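
To illustrate the serverless/FaaS point above, here is a minimal sketch of a function as the unit of deployment; the handler signature follows the AWS Lambda Python convention, and the event shape is hypothetical.

    # A single function the platform invokes once per event; there is no server
    # process for the team to provision, patch, or scale.
    def handler(event, context):
        name = event.get("name", "world")
        return {"statusCode": 200, "body": "hello, " + name}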



Building a better web in 2017

2017-01-05T11:00:00Z

O'Reilly editors and web practitioners weigh in on what they hope to see in 2017.Building a better web. Sounds great, doesn't it? But what do we need to do to make it so? And what do we mean by better? Isn't it great as is? This is what we are going to try to unravel and present to you this coming year. The web is at the center of business from the software industry to banking and at the core of the future as a go-to platform for chatbots and virtual reality. The following is a bit about where we are right now and what we are looking into and thinking about as we enter 2017. You'll get some insight from O'Reilly's web content team and from others in our network who are doing the work of building the web. After you read this, let us know what you think by tweeting @FluentConf so we can continue the conversation—and while you're at it, propose a talk for Fluent in San Jose, California, June 19–22; our call for proposals closes January 10. 1. Resilience and scalability The web is accessed from a multitude of end points, from fitness trackers to 18-wheelers, and in more traditional places like the laptop on which I write this and your iPhone. In many cases, it's a platform that at a certain time and place has millions of people riding the zeros and ones to Netflix, for instance, to watch Stranger Things, or just a handful to see a niece riding her bike for the first time on YouTube. The web needs to expand and contract at a millisecond's notice to accommodate not just entertainment but critical business information. How can we make sure that we have a resilient web that is fast and reliable as well as scalable? Do we need to start at the code-line level with reactive programming? Is it about safeguarding against attacks with the right security protocols? It will take a combination of many initiatives that make sense for your team and project to ensure that your application or site can withstand a hack, serve up content in a flash, and never go down. This year we're going to look at providing stories, case studies, examples—whatever you'd like to call them—to make sure that best practices are making their way to the greater community. —Rachel Roumeliotis, strategic content director at O'Reilly Media, chair of Fluent 2017 2. Separating signal and noise 2016 saw an almost endless number of tweets, posts, and rants concerning web development churn, which even coined a new phrase: "JavaScript fatigue." I think it's safe to say we've reached peak fatigue, and in 2017, I believe the pendulum will begin to swing back in favor of more streamlined development workflows with fewer dependencies to weigh down developers, especially those new to the web. While there's no shortage of new tools, frameworks, and projects to lear[...]



5 developments that will shape design in 2017

2017-01-04T13:45:00Z

From systems thinking to voice interfaces, these are the design trends to watch in the months ahead.As awareness of design's value continues to grow, 2017 promises to offer the design community plenty of opportunities and challenges. Here's a look at what designers should pay attention to in the year ahead. 1. Systems thinking Anything created today exists and interacts within a larger ecosystem. Systems thinking is by no means a new discipline, but it has started to work its way into design conversations, and for good reason. Systems thinking is focused on looking at the entirety of a system and understanding the interconnectedness and relationships between elements in a system. In Thinking in Systems, Donella Meadows shares a great quote from a Sufi teaching story that encapsulates the essence of systems thinking: You think that because you understand "one" that you must therefore understand "two" because one and one makes two. But you forget that you must also understand "and." The best applications, services, and products will have designers with a firm systems-thinking mindset behind them. Systems thinking doesn't solve complexity, but it will help designers think about their products as part of an ecosystem. If you're going to learn one new skill in 2017, systems thinking is the one I'd recommend pursuing. 2. Designing for voice Machine learning, natural language processing, and artificial intelligence (AI) are fueling the development of voice user interfaces. In 2016 we witnessed the arrival of voice as a relevant and useful interaction model, with Amazon's Alexa being the most compelling example. Designing for voice is in its early days, and its applications are promising—from chatbots to cars. I expect we will see tremendous experimentation and growth of voice user interfaces in 2017. Cathy Pearl, author of Designing Voice User Interfaces and director of user experience at Sensely, says: Many companies and developers are jumping on the voice trend. Amazon, for example, allows any developer to add new "skills" to their Echo. It's important, however, to consider whether the task or app you want to build will actually *benefit* from voice. Voice is great for hands-free, accessibility, and efficiency. It's not so great for public spaces, noisy environments, and privacy. As more people build VUIs, designers will play a crucial role, crafting conversational experiences that are useful and engaging. We already know a lot about good VUI design principles, but we need designers to apply them and to have the tools to prototype and build designs quickly and easily. Just like a poorly designed website can drive users to frustration, a poorly designed VUI will do the same." One [...]



Biohacking with Citizen Salmon

2017-01-04T12:15:00Z

(image)

This DIY initiative aims to genotype Pacific salmon to help sustain and improve salmon populations.

Seattle is rich in cultural icons. Think artisanal coffee, grunge rock, the Space Needle, Pike Place Market: they are all instantly recognizable symbols of the city.

But Seattle is more than a city. As the foremost urban center of the Pacific Northwest—or Cascadia, in the preferred regional parlance—its significance and influence are broad. Seattle’s symbols are thus Cascadia’s symbols. And nothing is more emblematic of Cascadia than salmon. The five species of Pacific salmon—chinook, coho, pink, sockeye, and chum—have both economic and emotional significance to the people of the Pacific Northwest. Residents enjoy catching them and eating them, but the fish also serve as indicators for the health of the regional biome. Moreover, they are totems. Cascadia people don’t just value salmon. They revere them.

Continue reading Biohacking with Citizen Salmon.

(image)



AI and the future of design: What will the designer of 2025 look like?

2017-01-04T12:00:00Z

Designers may well provide the missing link between AI and humanity.For anyone doubting that AI is here, the New York Times recently reported that Carnegie Mellon University plans to create a research center that focuses on the ethics of artificial intelligence. Harvard Business Review started laying the foundation for what it means for management, and CNBC started analyzing promising AI stocks. I made the relatively optimistic case that design in the short term is safe from AI because good design demands creative and social intelligence. But this short-term positive outlook did not alleviate all of my concerns. This year, my daughter started college, pursuing a degree in interaction design. As I began to explore how AI would affect design, I started wondering what advice I would give my daughter and a generation of future designers to help them not only be relevant, but thrive in the future AI world. Here is what I think they should expect and be prepared for in 2025. Everyone will be a designer Today, most design jobs are defined by creative and social intelligence. These skill sets require empathy, problem framing, creative problem solving, negotiation, and persuasion. The first impact of AI will be that more and more non-designers develop their creativity and social intelligence skills to bolster their employability. In fact, in the Harvard Business Review article I mentioned above, advice #4 to managers is to act more like designers. The implication for designers is that more than just the traditional creative occupations will be trained to use “design thinking” techniques to do their work. Designers will no longer hold a monopoly (if that were ever true) on being the most “creative” people in the room. To stay competitive, more designers will need additional knowledge and expertise to contribute in multidisciplinary contexts, perhaps leading to increasingly exotic specializations. You can imagine a classroom, where an instructor trained in design thinking is constantly testing new interaction frameworks to improve learning. Or a designer/hospital administrator who is tasked with rethinking the inpatient experience to optimize it for efficiency, ease of use, and better health outcomes. We’re already seeing this trend emerge—the Seattle mayor’s office has created an innovation team to find solutions to Seattle’s most immediate issues and concerns. The team embraces human-centered design as a philosophy, and includes designers and design strategists. Stanford’s d.school has been developing the creative intelligence of non-traditionally trained designers for over a decade. And new programs like MIT[...]



Four short links: 4 January 2017

2017-01-04T11:55:00Z

Science and Complexity, App Store Farm, Portability over Performance, and Incident Response Docs

  1. Science and Complexity (PDF) -- in the first part of the article, Weaver offers a historical perspective of problems addressed by science, a classification that separates simple, few-variable problems from the “disorganized complexity” of numerous-variable problems suitable for probability analysis. The problems in the middle are “organized complexity” with a moderate number of variables and interrelationships that cannot be fully captured in probability statistics. The second part of the article addresses how the study of organized complexity might be approached. The answer is through harnessing the power of computers and cross-discipline collaboration. Originally published in 1948.
  2. How to Manipulate App Store Rankings the Hard Way -- photo shows a wall of iPads in front of a woman. Her job is to download, install, and uninstall specific apps over and over again to boost their App Store rankings. (via BoingBoing)
  3. Intel's 10nm Chip Tech -- As has been the case for years already, clock speed isn’t liable to increase, though. “It’s really power reduction or energy efficiency that’s the primary goal on these new generations, besides or in addition to transistor cost reduction,” Bohr says. Improved compactness and efficiency will make it more attractive to add more cores to server chips and more execution units onto GPUs, he says.
  4. PagerDuty's Incident Response Documentation -- It is a cut-down version of our internal documentation, used at PagerDuty for any major incidents, and to prepare new employees for on-call responsibilities. It provides information not only on preparing for an incident, but also what to do during and after. It is intended to be used by on-call practitioners and those involved in an operational incident response process (or those wishing to enact a formal incident response process). Open sourced, so you can copy it and localize it for your company's systems and outage patterns. (via PagerDuty blog)

Continue reading Four short links: 4 January 2017.[...]



Fang Yu on machine learning and the evolving nature of fraud

2017-01-04T11:00:00Z

(image)

The O’Reilly Security Podcast: Sniffing out fraudulent sleeper cells, incubation in money transfer fraud, and adopting a more proactive stance.

In this episode, O’Reilly’s Jenn Webb talks with Fang Yu, cofounder and CTO of DataVisor. They discuss sniffing out fraudulent sleeper cells, incubation in money transfer fraud, and adopting a more proactive stance against fraud.

Continue reading Fang Yu on machine learning and the evolving nature of fraud.

(image)



5 things to watch in Python in 2017

2017-01-04T11:00:00Z

An improved asyncio module, Pyjion for speed, and moving to Python 3 will make for a rich Python ecosystem.Python is everywhere. It’s easy to learn, can be used for lots of things, and often is the right tool for the right job. The huge standard library and refreshingly abundant libraries and tools make for a very productive programming biosphere. The 3.6 release was just last month, and included various improvements, such as new features in the asyncio module, the addition of a file system path protocol, and formatted string literals. Here I’ll offer a few thoughts on some things from this recent release and elsewhere in the Python biome that have caught my eye. This is nowhere near an exhaustive list, so let me know other things you think will be important for Python for 2017. Will Pyjion’s boost be just in time? Speeding up runtime performance has been in Python’s sights for a while. PyPy has aimed to tackle the issue since its official release in 2007, while many developers have turned to Cython (the C extension of Python) to improve speed, and numerous just-in-time (JIT) compilers have been created to step up runtime. Pyjion is one new JIT compiler for revving up speed; it accelerates CPython (the Python interpreter) by boosting its stock interpreter with a JIT API from Microsoft’s CoreCLR project. The goals of the Pyjion project, as explained in the FAQ section of the Pyjion GitHub repository, are: “... to make it so that CPython can have a JIT plugged in as desired. That would allow for an ecosystem of JIT implementations for Python where users can choose the JIT that works best for their use-case.” Such an ecosystem would be a great boost for Python, so I’m hopeful, along with a lot of other folks, that 2017 will see Pyjion or others like it make faster Python fauna. To explore Cython as one alternative for speedier Python, check out the Learning Cython video course by Caleb Hattingh. You can also download Hattingh's free ebook, 20 Python Libraries You Aren’t Using (But Should). Asyncio is no longer provisional. Let the concurrency begin! In Python 3.6, the asyncio module is no longer provisional and its API is now considered stable; it’s had usability and performance tweaks, and several bug fixes. This module supports writing single-threaded concurrent code. I know a lot of Python developers who work with asynchronous code are eager to check out this improved module and see what the new level of concurrency can do for their projects. An uptick in use of the asyncio module will likely draw even more attention t[...]
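
As a quick illustration of two of the 3.6 features mentioned above, here is a minimal sketch combining formatted string literals with the now-stable asyncio module; the coroutine names and delays are arbitrary.

    import asyncio

    async def fetch(name, delay):
        # Pretend to do I/O-bound work without blocking the event loop.
        await asyncio.sleep(delay)
        return f"{name} finished after {delay}s"   # f-string, new in 3.6

    async def main():
        # Run both coroutines concurrently on a single thread.
        results = await asyncio.gather(fetch("feeds", 1), fetch("metrics", 2))
        for line in results:
            print(line)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())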



5 things to watch in Go programming in 2017

2017-01-04T11:00:00Z

(image)

What will innovations like dynamic plugins, serverless Go, and HTTP/2 Push mean for your development this year?

Go 1.8 is due to be released next month and it’s slated to have several new features, including:

  • HTTP/2 Push
  • HTTP Server Graceful Shutdown
  • Plugins
  • Default GOPATH

Which of these new features will have the most impact probably depends on how you and your development team use Go. Since Go 1.0 was released in 2012, its emphasis on simplicity, concurrency, and built-in support has kept its popularity pointed up and to the right, so the answers to “What is Go good for?” keep multiplying.

Continue reading 5 things to watch in Go programming in 2017.

(image)



Four short links: 3 January 2017

2017-01-03T12:05:00Z

Fortran MVC, Decent Security, Livecoding a Game, and Data Pipelines

  1. Fortran.io -- Fortran.io is like PHP: FastCGI runs a script that outputs HTML strings. (via Werner Vogels)
  2. Decent Security -- good advice for people who aren't security experts. (via Taylor Swift)
  3. Handmade Hero -- this chap is writing a game, from the ground up, on Twitch. He explains everything he does, so this is basically a master-class for the true would-be games programmer.
  4. Proof -- A Python library for creating fast, repeatable, and self-documenting data analysis pipelines.

Continue reading Four short links: 3 January 2017.

(image)



Fintech in 2017: 4 things to watch

2017-01-03T12:00:00Z

From AI to uncertain political outlooks: What's on our radar.Fintech companies large and small face many of the same disruptive trends as every other kind of tech company—especially the rise of artificial intelligence (AI) and uncertain political outlooks in the United States and Europe. Jon Bruner takes a look at what 2017 might hold in the fintech world. 1. The rise of Chinese fintech China’s eight fintech unicorns—privately held startups valued at more than $1 billion—are together valued at nearly $100 billion, three times as much as America’s 14 fintech unicorns. A couple of them are big because they’re tie-ups between several of China’s largest internet companies, but the breathtaking scale of China’s fintech market is still clear. China’s fintech startups benefit from a colossal domestic market made up of consumers who are newly affluent; are in need of services to manage their wealth; and, in many cases, aren’t already committed to an incumbent financial institution. Plus, many Chinese consumers are mobile natives who don’t need to be convinced to manage their finances through smartphone apps. 2. Blockchain for everything Sorry about this one. You’re probably tired of reading about the blockchain on end-of-year fintech watchlists, but there’s reason to keep it here. Put aside bitcoin and all of its unmet hype, and you’re left with a secure-by-design means of authentication that’s gradually creeping into the world’s financial infrastructure. In 2016, the central banks of both Canada and Singapore announced plans to use blockchain-based digital currencies for interbank settlements, as has a consortium that includes Santander, UBS, BNY Mellon, and Deutsche Bank. Expect to see more blockchain-based infrastructure—and regulatory support—in everything from real estate transfers to intellectual property filings. 3. Pressure on startups The last two years have seen the proliferation of robo-advisors, which manage portfolios algorithmically and charge minuscule fees—typically well below half a percent of assets, compared with 1-3% for conventional human asset managers. Betterment and Wealthfront, two of the startups that led the robo-advisor boom, looked like they were entering periods of hockey-stick growth in assets under management back in 2014 and 2015. Instead, they’ve eked out more or less linear growth since then (which is to say, declining month-over-month growth rates). The reason: it turne[...]
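
For a rough sense of why the fee gap cited above matters to customers, here is a back-of-the-envelope Python calculation; the starting balance, gross return, and horizon are arbitrary illustrative numbers.

    principal, gross_return, years = 100000, 0.06, 20

    def final_balance(annual_fee):
        # Compound growth with the advisory fee deducted once per year.
        balance = principal
        for _ in range(years):
            balance *= (1 + gross_return) * (1 - annual_fee)
        return balance

    robo, human = final_balance(0.0025), final_balance(0.015)
    print(f"robo-advisor: ${robo:,.0f}   human manager: ${human:,.0f}")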



7 AI trends to watch in 2017

2017-01-03T11:15:00Z

From tools, to research, to ethics, Ben Lorica looks at what’s in store for artificial intelligence in 2017.2016 saw tremendous innovation, lots of AI investment in both big companies and startups, and more than a little hype. What will 2017 bring? 1. Democratization of tools will enable more companies to try AI technologies. A recent Forrester survey of business and technology professionals found that 58% of them are researching AI, but only 12% are using AI systems. This is partially because applied AI applications are only now starting to be realized, but it’s also because right now AI is hard. It requires very specialized skills and a develop-it-yourself attitude. But frameworks like Facebook’s Wit.ai and Howdy’s Slack bot are competing to become the Visual Basic of AI, promising point-and-click development of intelligent conversational interfaces to relatively unsophisticated developers. Tools like Bonsai, Keras, and TensorFlow (if you don't mind coding) simplify the implementation of deep learning models. And cloud platforms like Google’s APIs and Microsoft Azure allow you to create intelligent apps without having to worry about setting up and maintaining accompanying infrastructure. 2. We’ll see many more targeted AI systems. We’re not expecting large, general purpose AI systems—yet. But we do anticipate an explosion in specific, highly targeted AI systems, such as: Robotics: personal, industrial, and retail Autonomous vehicles (cars, drones, etc.) Bots: CRM, consumer (such as Amazon Echo), and personal assistants Industry-specific AI for finance, health, security, and retail 3. The economic impact of increased automation will be the subject of discussion. Expect to hear (a little) less about malevolent AI taking over the world and more about the economic impacts of AI. Concerns about AI stealing jobs are nothing new, of course, but we anticipate deeper, more nuanced conversations on what AI will mean economically. 4. In an attention economy, systems that help overcome information overload will become more sophisticated. We’re noticing (and applauding) some very interesting developments in AI that help parse information to overcome information overload, especially in the areas of: Natural language understanding Structured data extraction (from “dark data” to “structured information”) Information cartography Automatic summarization (text, video, and audio) 5. AI researchers wil[...]
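
As one small example of the democratization point, here is a minimal Keras sketch of the kind of model these frameworks make approachable; the synthetic data and layer sizes are placeholders, and a TensorFlow backend is assumed.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    # Toy binary-classification data standing in for a real labeled data set.
    X = np.random.rand(200, 20)
    y = np.random.randint(2, size=(200, 1))

    # A small feed-forward network: a few lines to define, one to compile, one to train.
    model = Sequential([
        Dense(64, activation="relu", input_shape=(20,)),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=3, batch_size=32, verbose=0)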



5 software development trends shaping enterprise in 2017

2017-01-03T11:00:00Z

(image)

Open source development, changing infrastructure, machine learning, and customer-first design meet in a perfect storm to shape the next massive digital transformation.

Open source software development, infrastructure disruption and re-assembly, machine learning, and customer-first design are part of a perfect storm shaping the next massive digital transformation: the one creating startups that are upending industries, as Uber and Lyft have done with transportation, Twitter and Facebook have done with communication, and Netflix and Hulu have done with cable television. All of these enterprises have transformed or created industries. Now every enterprise must shake off the dust of older technology and reinvent itself if it wants to stay competitive.

Open source continues to shape our world

One piece of the puzzle is to incorporate the code and culture of open source. It is a central driver behind each of these disruptive and successful companies, woven in from the code to the culture. This is the change that is happening, and open source is how you can play a part.

Continue reading 5 software development trends shaping enterprise in 2017.

(image)