O'Reilly Radar - Insight, analysis, and research about emerging technologies
http://radar.oreilly.com/atom.xml

All - O'Reilly Media



All of our Ideas and Learning material from all of our topics.



Updated: 2016-12-04T12:21:09Z

 



Reactive programming vs. Reactive systems

2016-12-02T18:10:00Z


Landing on a set of simple reactive design principles in a sea of constant confusion and overloaded expectations.

Since co-authoring the Reactive Manifesto in 2013, we’ve seen the topic of Reactive go from being a virtually unacknowledged technique for constructing applications—used by only fringe projects within a select few corporations—to becoming part of the overall platform strategy of numerous big players in the middleware field. This article aims to define and clarify the different aspects of “Reactive” by looking at the differences between writing code in a Reactive Programming style and the design of Reactive Systems as a cohesive whole.

Reactive is a set of design principles

One recent indicator of success is that Reactive has become an overloaded term and is now being associated with several different things by different people—in good company with words like “streaming,” “lightweight,” and “real-time.”

Continue reading Reactive programming vs. Reactive systems.




Four short links: 2 December 2016

2016-12-02T12:30:00Z

Engineering Teams, State Machines, Social Media, and Security Feudalism

  1. Building and Motivating Engineering Teams (Camille Fournier) -- My experience has been that most great engineers want to work somewhere that inspires them to achieve. Many of us stop at the idea of hard technical problems when we think about inspiring our engineering teams, but challenging them to partner with people who have different perspectives is another way you can help them grow.
  2. Validating State Machines (Tim Bray) -- I love well-written technical exposition. Tim's a master. Not that many people care about validators and parsers; so, if you think you probably won't be interested in the rest of this piece, you're probably right.
  3. Social Media is Killing Discourse Because It's Too Much Like TV (MIT TR) -- And, Postman argued, when news is constructed as a form of entertainment, it inevitably loses its function for a healthy democracy. "I am saying something far more serious than that we are being deprived of authentic information. I am saying we are losing our sense of what it means to be well informed. Ignorance is always correctable. But what shall we do if we take ignorance to be knowledge?"
  4. Security and Feudalism: Own or be Pwned (YouTube) -- Cory Doctorow's full keynote from O'Reilly Security Conference. Masterful.

Continue reading Four short links: 2 December 2016.




“Wall Street made me do it”

2016-12-01T15:15:00Z

We have to change the incentives that encourage companies to choose boosting their stock price over investing in people and the real economy.

"Can Trump Save Their Jobs?" the headline in a recent New York Times Sunday Business section trumpets, below a photo of two disgruntled workers from a Carrier factory that is leaving Indianapolis for Monterrey, Mexico. The subtitle adds, "The president-elect predicted he would stop a Carrier factory from moving to Mexico. Workers at the plant expect him to follow through."

The focus of the article was the difficulties that Trump may face in coming through on his campaign promises. But that wasn't the most interesting part of the article. I was struck once again, as I was when I first heard the story of Carrier closing their Indianapolis plant and moving the jobs to Mexico, by a major false note in the narrative: Carrier isn't changing its plans...regardless of who is in the Oval Office; manufacturers are seeing relentless pressure, from investors and rival companies, to automate, replacing workers with machines that do not break down or require health benefits and pension plans. Wall Street hedge fund managers are demanding steadily rising earnings from Carrier's parent, United Technologies, even as growth remains sluggish worldwide.

Why is it that we automatically assume that companies must respond when Wall Street or hedge funds "demand rising earnings"? What is it that compels managers to decimate local communities by outsourcing jobs? I'd love to see a deeper analysis of the reasons why companies dance to Wall Street investors' tunes.

If you're an up-and-coming company about to raise capital from Wall Street, you do need to heed its dictates. Uber is in thrall to investors—they need them to believe in the company and its future because they are not yet profitable and need to go to the markets to raise additional funds to grow the business. Existing companies may also need to go to "the market" to raise additional capital for expansion. But for many companies, capital markets are primarily used to enrich management (and investors), not to raise capital for the business. Companies as profitable as United Technologies, the parent of Carrier, or Apple, Facebook, and Google, have no need to raise capital from Wall Street. They generate it from their own operations.

A quick look at UTX (United Technologies) on Google Finance shows that they had a net income of $7.3 billion in 2015. Cash flow from operations was $6.3 billion. An additional $6.5 billion in cash flow came from "investing activities." What did they use this cash for?

  • $2.5 billion went to investments in plant, equipment, or other investments
  • $2.2 billion went to shareholder dividends
  • $8.8 billion went to stock buybacks

That is, of the nearly $13 billion in cash flow from the business, only $2.5 billion was reinvested, and the rest went to Wall Street in one form or another. Why do we accept this massive redirection of resources from the real economy to the financial economy? What purpose does it serve?

In theory, companies care about their stock price because financial markets provide the capital that allows them to invest and expand. But in practice, this is less and less true. The buying and selling of stocks is a form of gambling, not true investment. In theory, investor pressure can lead companies to become more efficient, but in practice, what we see is, effectively, looting of the real economy by the financial economy.
Rana Foroohar has written about this phenomenon in her book Makers and Takers. She traces the roots of the problem to a failed hypothesis that aligning management compensation with stock market performance would make for better companies. Instead, it has made for worse companies and a worse economy, as there is less and less corporate investment in workers, factories, and the stuff of the real economy, because it's easier for management to get rich playing in the financial casino. If you don't have time for the whole bo[...]



Fang Yu on using data analytics to catch constantly evolving fraudsters

2016-12-01T13:40:00Z


The O'Reilly Radar Podcast: Big data for security, challenges in fraud detection, and the growing complexity of fraudster behavior.

This week, I sit down with Fang Yu, cofounder and CTO of DataVisor, where she focuses on big data for security. We talk about the current state of the fraud landscape, how fraudsters are evolving, and how data analytics and behavior analysis can help defend against—and prevent—attacks.

Continue reading Fang Yu on using data analytics to catch constantly evolving fraudsters.




Introducing model-based thinking into AI systems

2016-12-01T13:10:00Z


The O’Reilly Data Show Podcast: Vikash Mansinghka on recent developments in probabilistic programming.

In this episode I spoke with Vikash Mansinghka, research scientist at MIT, where he leads the Probabilistic Computing Project, and co-founder of Empirical Systems. I’ve long wanted to introduce listeners to recent developments in probabilistic programming, and I found the perfect guide in Mansinghka.

Continue reading Introducing model-based thinking into AI systems.




Richard Socher on the future of deep learning

2016-12-01T13:00:00Z


The O’Reilly Bots Podcast: Making neural networks more accessible.

In this episode of the O’Reilly Bots Podcast, Pete Skomoroch and I talk with Richard Socher, chief scientist at Salesforce. He was previously the founder and CEO of MetaMind, a deep learning startup that Salesforce acquired in 2016. Socher also teaches the “Deep Learning for Natural Language Processing” course at Stanford University. Our conversation focuses on where deep learning and NLP are headed, and on interesting current and near-future applications.

Continue reading Richard Socher on the future of deep learning.




Getting the most from your design team

2016-12-01T12:00:00Z

5 questions for Peter Merholz: Building design orgs, eight core design skills, and how culture influences outcomes.

I recently asked Peter Merholz, design executive and author of Org Design for Design Orgs, about what he has learned building and managing design teams. At the O’Reilly Design Conference, Peter Merholz will be running a workshop with Kristin Skinner, Org design for design orgs: The workshop.

You’re teaching a workshop at the Design Conference, and you've written a book on organizational design for design organizations. You've commented that organizations that recognize the value of design aren't realizing the return on their investments. Can you explain what you mean and why you think this is happening?

Companies are building sizable internal design teams. These cost money—headcount, facilities, fancy new MacBooks. I think there’s an expectation in many of these companies that this investment in design will help them get ahead, compete better, possibly even transform the business. However, if these companies treat design like any other corporate function, they will constrain its potential. And they might wonder if it’s worth the investment, not recognizing that simply hiring a bunch of designers is not enough—there are organizational, operational, and managerial matters that must also be addressed in order to get the most out of that team.

What are some practical steps for building a successful design team?

You want to begin with a strong leader. Too often, a company hires as its first designer someone junior whose job is to bang out screens. If the company is a startup, hire a senior designer, someone who is still comfortable with making, but who understands how design works, how to recruit and hire, and how to productively interact with other functions. If the company is a more established enterprise that is only just now establishing an internal design capability (this still happens!), that person should probably be director level, and be given a headcount to grow a team. Either way, the senior-most designer should be no more than two levels from the CEO.

Early on, that leader and their team should make clear a shared sense of purpose for the design team, perhaps by drafting a charter—something that establishes why the design team exists, its role within the organization, and the impact it hopes to have. Without this sense of purpose, as the team grows, it can get lost, resorting to design for design’s sake. With the sense of purpose, the team maintains a point of view that encourages better decision making as the team grows.

In the book, we identified eight core design skills for digital designers: user research, interaction design, visual design, information architecture, writing, prototyping, front-end development, and service design. In building a team, try to establish this breadth of skills as quickly as possible. This doesn’t mean hiring eight specialists—it means hiring two, three, four generalists whose skills complement one another.

What are the organizational models design teams can adopt?

We have an entire chapter about that! Design teams typically operate under one of two models—wholly centralized, or decentralized and embedded within product or business teams. The centralized org resembles a design agency, where designers are assembled on teams for projects throughout the company. This disempowers the designers, as they are brought in after requirements have been set, and are expected to simply execute. It also frustrates their internal ‘clients’, as they have to wait for design to be available. In the decentralized model, designers are embedded within product teams, reporting up through business units. This works well at first—things move faster, as there’s no wait for design resources, and designers are more engaged as they’re involved throughout the product development cycle. Over time, [...]



Four short links: 1 December 2016

2016-12-01T11:55:00Z

AI Fears, Amazon Bot Framework, Data Web App, and Feature Factories

  1. Genevieve Bell: Humanity's Greatest Fear is About Being Irrelevant (Guardian) -- Mori’s argument was that we project our own anxieties and when we ask: “Will the robots kill us?”, what we are really asking is: “Will we kill us?” Coming from a Japanese man who lived through the 20th century, that might not be an unreasonable question. He wonders what would happen if we were to take as our starting point that technology could be our best angels, not our worst.
  2. Introducing Amazon Lex -- Amazon's bot framework, basically building out from Alexa intents.
  3. CyberChef -- GCHQ's web app for encryption, encoding, compression and data analysis, which you can play with straight from GitHub.
  4. 12 Signs You're in a Feature Factory -- I'm crying, it's so accurate. The signs: No measurement. Rapid shuffling of teams. Success theatre. Infrequent (acknowledged) failures. No connection to core metrics. No PM retrospectives. Obsessing about prioritization. No tweaking. Culture of hand-offs. Large batches. Chasing upfront revenue. Shiny objects.

Continue reading Four short links: 1 December 2016.




3 warning signs to help you save your CSS

2016-12-01T11:00:00Z

Find three signs that you should be aware of for maintainable CSS. Catch them early and plan accordingly.

I love avocados! I like them as guacamole, spread on a bagel, or sometimes I'll even just eat them on their own with a spoon. On the other hand, I absolutely hate buying avocados, because they're not usually ripe at the store and they continue to stay that way until all of a sudden one day they're too ripe. CSS is a lot like an avocado, because after a while it can suddenly get very difficult to maintain, especially when there are multiple people involved. As more and more CSS gets written, things can spiral out of control, and an overwhelming feeling can sink in pretty quickly. If you recognize when your CSS is approaching a state where it's becoming more difficult to maintain, you'll be able to course correct with less difficulty and refactor your code to a better state. Following are some of the warning signs that you should be aware of—catch them early and plan accordingly!

Warning sign #1: Lots of small inconsistencies

Inconsistencies make code less predictable. While this might not sound like a big deal, consider that the less predictable your code is, the more time you'll have to spend verifying that you used it correctly. For example, if some of your CSS classes are written using spinal-case (letters using the same case with words joined by hyphens) and others are written in snake_case (letters using the same case with words joined by underscores), it's much easier to mix the two up when writing HTML. If the styles don't appear the way you'd expect, then sure, you could just try the opposite, but over time this can get really annoying. Even worse, you might end up with two classes with the same name that use a different convention to style things differently:

    .product-grid {
      border-bottom: 1px solid #F5F5F5;
      border-top: 1px solid #F5F5F5;
      display: block;
    }

    .product_grid {
      background-color: #F5F5F5;
      border-bottom: 1px solid #333333;
      border-top: 1px solid #333333;
      display: block;
    }

Making a conscious effort to keep your CSS more consistent as you write it is a surefire way to prevent future headaches. Subtle things like consistently naming classes in a way that promotes code reuse, ordering declaration block properties using some convention, and even deciding how to declare colors (three-digit hex, six-digit hex, RGB, HSLa) will make your code more predictable and easier to scan, which will help you detect and work through bugs more easily.

Warning sign #2: Tight deadlines and changing requirements

Tight deadlines are nothing new, and sometimes they can come out of left field. These deadlines can result in people getting in the mindset that they need to "just get it done," even if that means writing code that they are less than proud of. As a result, CSS can be duplicated, given class names that aren't properly thought out for reuse, or be made very brittle through overly specific selectors. Changing requirements can be just as bad as tight deadlines and might even contribute to making a reasonable deadline more difficult to hit than expected. Even worse, if something changes drastically enough, dead code—code that exists but isn't used—might be left behind, which leads to a cluttered code base. While the rhythm and flow varies from organization to organization, it's common for off-the-cuff estimates to become the basis for project schedules. When asked how long a given task will take, you could blurt out whatever you think is accurate at the time, but it's much better to perform some due diligence and add a reasonable amount of time to use for refactoring.

Warning sign #3: The cascade is no longer your friend

Have you ever written CSS expecting to s[...]
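
The first warning sign can even be caught mechanically. Here is a minimal sketch of such a check in Python (my illustration, not the author's; the regexes and command-line handling are assumptions), which scans a stylesheet for class names written in both spinal-case and snake_case:

    import re
    import sys

    # Heuristic scan for mixed class-naming conventions in one stylesheet.
    SPINAL = re.compile(r"\.[a-z]+(?:-[a-z0-9]+)+")   # e.g., .product-grid
    SNAKE = re.compile(r"\.[a-z]+(?:_[a-z0-9]+)+")    # e.g., .product_grid

    def scan(css_text):
        spinal = set(SPINAL.findall(css_text))
        snake = set(SNAKE.findall(css_text))
        # Flag names that exist in both conventions, like the
        # .product-grid / .product_grid pair above.
        collisions = {name for name in spinal
                      if name.replace("-", "_") in snake}
        return spinal, snake, collisions

    if __name__ == "__main__":
        spinal, snake, collisions = scan(open(sys.argv[1]).read())
        if spinal and snake:
            print(f"Mixed conventions: {len(spinal)} spinal-case, "
                  f"{len(snake)} snake_case class names")
        for name in sorted(collisions):
            print(f"Both conventions in use: {name} and {name.replace('-', '_')}")

Run against a stylesheet containing the two .product-grid variants above, it reports the collision immediately, before the mixup ever reaches your HTML.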



The BK BioReactor takes on the Gowanus Canal

2016-11-30T12:00:00Z


The project was spurred by a U.S. Environmental Protection Agency plan to dredge and cap the waterway over the coming decade.

Brooklyn is perhaps New York City’s most vibrant borough, drawing the young, the visionary, and the entrepreneurial. Its restaurants express the cutting edge of culinary theory, and it’s a hotspot for IT and biotech start-ups. Maybe you still go to Manhattan to make money, but you go to Brooklyn to make a difference (and also some money, of course).

But it’s not all high-end brewpubs and succulent tapas for Brooklyn. The borough has some problems, many of them the legacy of long human habitation. Case in point: the Gowanus Canal.

Continue reading The BK BioReactor takes on the Gowanus Canal.




Four short links: 30 November 2016

2016-11-30T11:55:00Z

Robots and Bio, Art and Money, Augmentation vs. Replacement, and Retro Slack

  1. Robot Revolution in Synthetic Biology (IEEE Spectrum) -- Once the robots have created a thousand yeast variants containing different mashups of genes, it’s time to see if the cells are making their rose oil. Mass spectrometry machines crack open the cells and examine all the molecules inside, checking for the product and also determining whether the yeast is healthy.
  2. Adventure Cartoonist (Lucy Bellwood) -- video of her XOXO talk about money, art, success, and being poor enough and rich enough, among much else. So worth watching.
  3. A Civilian at the World of Watson Conference (Simon Ouderkirk) -- Augmentation, not replacement, was clearly the party line. IBM Chairwoman Ginni Rometty focused on it in her keynote: computers as helpers, as assistants, as efficiency aids. [...] [Sebastian] Thrun [of Udacity] took a radically different direction in predicting the future; he used the metaphor of the simple shovel versus a backhoe. When we invent a tool that can do a job better, faster, and more cheaply than using human labor, why not use that tool? Why not free up that labor for other pursuits? We don’t dig foundations by hand, using a backhoe only on the tricky spots. We get in there with the big machine and get it done fast.
  4. Slack Client for C64 -- this is so awesome.

Continue reading Four short links: 30 November 2016.




Your service just went down. Here's what you do next.

2016-11-30T11:00:00Z


Incident management experts explain how to quickly restore service and prevent future outages.

Continue reading Your service just went down. Here's what you do next..




Four short links: 29 November 2016

2016-11-29T12:05:00Z

Data in Type, China Adopts Black Mirror, Thinking Technology, and Wireless Mice

  1. Datalegreya -- a typeface that can interweave data curves with text. (via Waxy)
  2. China's Credit Rating for Everything (WSJ) -- More than three dozen local governments across China are beginning to compile digital records of social and financial behavior to rate creditworthiness. A person can incur black marks for infractions such as fare cheating, jaywalking, and violating family-planning rules. The effort echoes the dang’an, a system of dossiers the Communist party keeps on urban workers’ behavior. Search for that text on Google and click through to the WSJ article if you aren't a subscriber.
  3. Thought as a Technology (Michael Nielsen) -- a thought-provoking essay that mixes two topics I'm interested in: UI and learning new ways of thinking of the world.
  4. Wireless Control of Mice (IEEE) -- Implanted in that mouse’s brain was a device about the size of a peppercorn. When we used our wireless power system to switch it on, the device glowed with a blue light that activated genetically engineered brain cells in the premotor cortex, which sends signals to the muscles. We watched in amazement as the mouse stopped its random motions and began to run in neat circles around the cage.

Continue reading Four short links: 29 November 2016.




Improve the quality of your leadership decisions

2016-11-29T12:00:00Z

Decision science can help you get better at making tough calls.

Several years ago, I was part of a small team charged with overhauling the e-commerce site of an established company. We developed a strong perspective on the overarching taxonomy for the site after looking at overall market trends and best-of-breed sites. Unfortunately, the product groups whose merchandise populated the site had a distinctly different view based largely on internal organizational boundaries, hierarchies, loyalties, and incentives. At the end of the day, it was their revenue on the line. They ultimately paid for the site (and our salaries). And these product groups had far more political clout than any centralized service functions such as ours. In short, although the redesign had been assigned to us, the product groups had enormous influence over the ultimate decisions on organization and visual presentation.

The debate persisted and ultimately escalated to the CEO. Knowing that we were seriously outgunned, we decided to move our argument away from our opinion versus their opinion. We would have lost that battle. Instead, we proposed that data should drive the decision. We knew that the data on search and user pathways supported our approach; customers did not care about how the product groups divided the world. Customers shopped based on their needs. The CEO agreed with our proposition and, in the end, our approach. The relaunch was a great success, with improvements in every key metric.

The point is not that my team was right (although it felt awfully good). It is that we, albeit inadvertently, introduced a bit of decision science into the equation. Unfortunately, although decision-making always makes it onto the list of core competencies for leaders, formal training in decision-making is rarely offered. My team relied on our experience, intuition, and the advice of the outside experts who were helping us with the site. We agreed that our team’s recommendation would be reached by consensus. None of that was necessarily good or bad in and of itself. The problem is that none of it was particularly intentional or disciplined. We were as much lucky as smart.

Since that time, I have further explored the social, cognitive, and other factors that determine the quality of the decisions leaders make. I learned about the emerging wisdom from the study of decision science about how to improve outcomes by approaching decisions large and small more methodically.

Common decision-making mistakes

One of the most frequent pitfalls in learning to make better decisions is outcome bias. This is the tendency to equate a good outcome with a good decision process and a bad outcome with poor decision-making. You might do everything right yet get a poor outcome resulting from factors outside of your control. Or you might fumble through the process and have everything turn out better than expected. With the many variables in your operating environment, there will never be an automatic correlation between process and outcome. Even the most highly skilled athletes sometimes lose.

Separating your analysis of how decisions were made from the end result is the first step in becoming more disciplined about the decision-making process. It will help you improve critical skills necessary for getting it right more consistently. It will also help you better diagnose where things went wrong when the outcome isn’t what you hoped.

Another common mistake is failing to grasp all of the repercussions of your decision. Often ascribed to an unavoidable law of unintended consequences, repercussion blindness can result from failing to consider or solicit input from all of the relevant stakeholders or bowing to pre[...]



How to run an effective design organization

2016-11-29T12:00:00Z


To maximize investments in in-house design teams, companies must put a structure and system in place to get the most out of the team.

Continue reading How to run an effective design organization.




"The Internet is going to fall down if I don't fix this"

2016-11-29T11:00:00Z


An interview with Susan Sons from the Center for Applied Cybersecurity Research at Indiana University.

Continue reading "The Internet is going to fall down if I don't fix this".




Website quality control

2016-11-29T11:00:00Z

Why bother with quality control when building a website?

Introduction

There’s always something professional about doing a thing superlatively well.
Colonel Pickering, in George Bernard Shaw’s Pygmalion

What is a good website? For us web professionals, this is a most important question. Building good websites is part of our professional ethics, stemming from a code of honor that asserts that we can be professionals only if our work is good. But how do we know that our work—that our websites—are good? Many criteria and examinations come to mind, but there is actually an entire field dedicated to informing us: quality management.

Quality management, which can be broken down into quality planning, quality control, quality assurance, and quality improvement, comes with a host of methods to not just identify (control) and fix (improvement) defects, but to avoid them systematically (planning, assurance). This little book, which is the third in a series of books that cover important components of modern web development (after web frameworks and coding standards), focuses mostly on the quality control piece, for if we can’t “see” what’s wrong, we won’t fix or plan to avoid what’s wrong. Still, it’s going to share advice on how to tie quality to our processes, for it is more useful to learn how to fish than to hope to be fed every day. The book will do all of this in a loose and relaxed manner, however, and not to the extent ISO standards would cover quality.

Finally, and although this should matter only in few instances, the book hinges more on websites rather than web apps. That distinction is usually relevant when it comes to standards and development best practices, but there are some differences in how one should go about quality checking of sites as opposed to apps. What follows will work slightly better and allow for more complete quality control of websites. This is a little book, then, because it’s short. Let’s leave the intro behind.

What Is Quality Control?

Wikipedia defines quality control (often, but rarely in this book, abbreviated as “QC”) as “a process by which entities review the quality of all factors involved in production.” ISO 9000, also through Wikipedia, is said to define quality control as “a part of quality management focused on fulfilling quality requirements.” Google, without offering attribution, understands quality control to be “a system of maintaining standards in manufactured products by testing a sample of the output against the specification.”

We want to use a definition that is stricter on the one end and more lenient on the other: “Website quality control entails the means to determine (a) whether our websites meet our expectations and (b) to what degree they meet professional best practices.” “Means,” then, will refer largely to infrastructure—that is, tools. Also, as stated a moment ago, we’ll look at some processes and methods useful to improve, not just measure, the quality of our work.

Why Is Quality Control Important?

Quality control is important because without it we have no robust way of determining whether what we do and produce is any good. Quality control, therefore, is a key differentiator between professional and amateur work. Consistent quality is the mark of the professional. Quality control, finally, saves time and money and sometimes nerves, particularly in the long run. But what are our options to control the quality of our websites? We’ll look at that now in more detail.

Continue reading[...]
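
As a first taste of those options, here is a minimal quality-control script in Python (my sketch, not from the book; the URLs and thresholds are placeholders) that tests a sample of pages against a few basic expectations, very much in the spirit of the definitions above:

    import requests  # third-party HTTP library, assumed installed

    # Placeholder page list; substitute your own site's URLs.
    PAGES = [
        "https://example.com/",
        "https://example.com/about",
    ]

    def check(url):
        """Run a few crude quality checks against one page."""
        resp = requests.get(url, timeout=10)
        problems = []
        if resp.status_code != 200:
            problems.append(f"status {resp.status_code}")
        if "<title>" not in resp.text.lower():
            problems.append("missing <title>")
        if len(resp.content) > 1_000_000:  # arbitrary 1 MB page-weight budget
            problems.append(f"heavy page ({len(resp.content)} bytes)")
        return problems

    for url in PAGES:
        problems = check(url)
        print(f"{url}: {'OK' if not problems else '; '.join(problems)}")

Nothing here replaces real tooling (validators, linters, link checkers), but even a script this small is a "means" in the sense defined above: it turns expectations into checks you can run on every release.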



Four short links: 28 November 2016

2016-11-28T12:05:00Z

Skill Levels, Auto-Provisioning from GitHub, Laptops are Niche, and Vehicular Firehose

  1. Computer Skill Levels -- Across 33 rich countries, only 5% of the population has high computer-related abilities, and only a third of people can complete medium-complexity tasks. (via Greg Linden)
  2. Fodor -- Auto setup and provision GitHub repositories on new DigitalOcean droplets. Genius!
  3. Tablets, PCs, and Office (Ben Evans) -- it was Ben's podcast that really opened my eyes: we laptop users are in a dying market. Mobile won. Jean-Louis Gassée agrees.
  4. Car Wars -- Cory Doctorow short fiction that builds on the trolley problem, commissioned by Deakin University in Australia to illustrate the kinds of research they're doing. When you worked for Huawei, you got access to the firehose: every scrap of telemetry ever gleaned by a Huawei vehicle, plus all the licensed data sets from the other big automotive and logistics companies, right down to the driver data collected from people who wore court-ordered monitors: paroled felons, abusive parents under restraining orders, government employees. You got the post-mortem data from the world’s worst crashes, and you got all the simulation data from the botcaves: the vast, virtual killing-field where the machine learning algorithms duked it out to see which one could generate the fewest fatalities per kilometre.

Continue reading Four short links: 28 November 2016.




Leveraging analytics 1.0 for the analytics 2.0 revolution

2016-11-28T12:00:00Z

Rather than hiring data scientists from outside, consider training your proto data scientists.

Big data and data science are so much in vogue that we often forget there were plenty of professionals analyzing data before it became fashionable. This can be thought of as a divide between Analytics 1.0, practiced by those in traditional roles like data analysts, quants, statisticians, and actuaries, and Analytics 2.0, characterized by data scientists and big data. Many companies scrambling to hire data science talent have begun to realize the wealth of latent analytics talent right at their fingertips — talent capable of becoming data scientists with a little bit of training. In other words, the divide between Analytics 1.0 and 2.0 is not as wide as you might believe.

Analytics 1.0 professionals come from many industries, including finance, health care, government, and technology. But they all share the same core technical skill sets around computation and statistics that make them ideal candidates for training to get them caught up on data science. In addition to possessing the foundational data science skills, these employees already understand the industry needs, often being corporate veterans. Of course, these advantages are not without their challenges, and from my experience, the three main challenges revolve around learning new computational techniques, new statistical techniques, and a new mindset. Let’s walk through each one.

Learning new computational techniques

Analytics 2.0 is defined by the sheer volume and variety of data available. Data scientists need strong computational skills to handle the ever-increasing size of data sets and computational complexity. One such important new skill is parallelization and distributed computing — breaking up a large computational load across a network of computers. But successfully leveraging parallelization requires understanding how to orchestrate an ensemble of computers, and it imposes severe limits on what tasks can and cannot be parallelized. Developing the skills to cope with the sheer scale of data is an integral part of Analytics 2.0.

The variety of data types is also a major challenge. Analytics 1.0 utilizes data sets that are clean, structured, and single-sourced. In contrast, Analytics 2.0 is focused on data sets that are messy, unstructured, and multi-sourced, and practitioners require greater software engineering capabilities to clean, structure, and merge multiple sources together.

Learning new statistical analyses

The uninitiated are often under the mistaken impression that big data is just doing the same analytics on more data. This is often wrong in two major ways. First, larger data allows us to leverage more powerful techniques that are simply not useful for smaller data sets. Understanding highly nuanced customer preferences on relatively narrow segments is completely predicated on having enough data to be able to make statistically sound inferences about subtle effects on narrow user groups. Throwing a deep neural network at a tiny data set is a recipe for disaster (although that doesn’t seem to prevent some managers from asking for it).

Second, even if you’re still running the same analyses mathematically, the sheer scale of the data presents new challenges. How do you take a mean if you can’t fit all your data onto your laptop? How can your analyses keep up if it takes more than 24 hours to analyze 24 hours of data? Parallelizing across multiple machines is expensive and only works in certain cases. Understanding ho[...]
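
To make the "mean without fitting the data on your laptop" challenge concrete, here is a minimal sketch of the map-reduce pattern in Python (mine, not the author's; "values.csv" is a hypothetical one-number-per-line file): each chunk contributes a partial (sum, count), and only those small partials are combined:

    # Streaming/map-reduce style mean: memory use is bounded by the chunk
    # size, never by the size of the full data set.

    def chunks(path, size=100_000):
        """Yield the file in lists of at most `size` lines."""
        with open(path) as f:
            buf = []
            for line in f:
                buf.append(line)
                if len(buf) == size:
                    yield buf
                    buf = []
            if buf:
                yield buf

    total, count = 0.0, 0
    for chunk in chunks("values.csv"):
        nums = [float(line) for line in chunk]   # "map": runs per chunk,
        total += sum(nums)                       # or per machine in a cluster
        count += len(nums)                       # "reduce": combine partials

    print(total / count if count else float("nan"))

The same decomposition works for any statistic with an associative combine step; statistics that lack one (an exact median, say) are exactly where the "only works in certain cases" caveat about parallelization bites.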



Four short links: 25 November 2016

2016-11-25T21:15:00Z

Malware Phylogenetics, Bipartite News, Human-Like AI, and Structuring Work

  1. Cosa Nostra -- a graph-based malware clustering toolkit. Creates phylogenetic trees of malware showing relationships between variants.
  2. Fake News is Not The Only Problem -- neither side of the graph’s frame was false, per se. Rather, each carefully crafted story on either side omitted important detail and context. When this happens constantly, on a daily basis, it systematically and deeply affects people’s perception of what is real. (via BoingBoing)
  3. Building Machines that Think and Learn Like People (Paper a Day) -- summary of a paper that covers why deep learning alone isn't likely to get us human-like AI, and hints at what's needed.
  4. How We Set Up Our Work and Teams at Basecamp (Jason Fried) -- six week cycles, with a few weeks in-between for independent clean-up, pet projects, etc. These are not sprints. I despise the word sprints. Sprints and work don’t go together. This isn’t about running all out as fast as you can; it’s about working calmly, at a nice pace, and making smart calls along the way. No brute force here, no catching our collective breath at the end.

Continue reading Four short links: 25 November 2016.




Four short links: 24 November 2016

2016-11-24T11:55:00Z

Smart Reply, Serverless, Identifying Criminals, and Programmer Ethics

  1. Smart Reply for Email (A Paper a Day) -- The Smart Reply system generates around 12.9K unique suggestions each day, that collectively belong to about 380 semantic clusters. 31.9% of the suggestions actually get used, from 83.2% of the clusters. “These statistics demonstrate the need to go well beyond a simple system with five or 10 canned responses.” Furthermore, out of all used suggestions, 45% were from the 1st position, 35% from the 2nd position, and 20% from the third position.
  2. Why All The Fuss About Serverless (Simon Wardley) -- Billing by the function not only enables me to see what is being used but also to quickly identify costly areas of my program. [...] Everyone talks about “algorithmic management” these days (well, everyone in certain circles I walk in) but if you don’t know the cost of something, its impact on revenue, and how it is changing, then no algorithm is going to help you magic up the answer to the investment decisions you need to make. (A rough sketch of this per-function cost accounting follows the list.)
  3. Neural Network Identifies Criminals By Their Faces -- Xiaolin and Xi say there are three facial features that the neural network uses to make its classification. These are: the curvature of upper lip which is on average 23% larger for criminals than for noncriminals; the distance between two inner corners of the eyes, which is 6% shorter; and the angle between two lines drawn from the tip of the nose to the corners of the mouth, which is 20% smaller. You have to read the paper to learn it has a 6% false-positive rate.
  4. The Future of Programming (Bob Martin) -- we, programmers, are going to have to grow up and define our profession, and that includes ethics, because everything is controlled by software. Democracy, news, jobs, health care, government, police...these jobs have moral dimensions and programmers are being asked to code unethical things. Martin says, “Let’s decide what it means to be a programmer. Civilization depends on us. Civilization doesn’t understand this yet.”
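
To illustrate Wardley's point about per-function billing, here is a rough sketch in Python (my example; the usage numbers and rates are illustrative assumptions, loosely modeled on 2016-era per-request and per-GB-second pricing) that ranks functions by estimated cost:

    # Rank functions by estimated serverless cost. All figures are made up.
    PRICE_PER_REQUEST = 0.20 / 1_000_000    # dollars per invocation (assumed)
    PRICE_PER_GB_SECOND = 0.0000166667      # dollars per GB-second (assumed)

    usage = {
        # function name: (monthly invocations, avg duration in s, memory in GB)
        "resize_image": (40_000_000, 1.20, 1.0),
        "auth_check":   (90_000_000, 0.05, 0.128),
        "send_email":   (2_000_000, 0.30, 0.256),
    }

    def monthly_cost(invocations, seconds, gb):
        compute = invocations * seconds * gb * PRICE_PER_GB_SECOND
        requests = invocations * PRICE_PER_REQUEST
        return compute + requests

    for name, args in sorted(usage.items(),
                             key=lambda kv: -monthly_cost(*kv[1])):
        print(f"{name:>14}: ${monthly_cost(*args):,.2f}/month")

The ranking, not the absolute dollars, is the point: per-function metering gives you exactly the cost-by-area visibility Wardley describes, with no extra instrumentation.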

Continue reading Four short links: 24 November 2016.




Richard Moulds on harnessing entropy for a more secure world

2016-11-23T12:45:00Z


The O’Reilly Security Podcast: Randomness, our dependence on entropy for security and privacy, and rating entropy sources for more effective encryption.

In this episode, I talk with Richard Moulds, vice president of strategy and business development at Whitewood Encryption. We discuss whether random number generation is as random as some might think and the implications that has on securing systems with encryption, how to harness entropy for better randomness, and emerging standards for evaluating and certifying the quality of entropy sources.

Continue reading Richard Moulds on harnessing entropy for a more secure world.




Steph Hay on designing for Alexa

2016-11-23T12:40:00Z


The O’Reilly Design Podcast: Designing for trust in finance, conversational UIs, and the value of a weekly oasis.

In this week’s Design Podcast, I sit down with Steph Hay, head of content, culture, and AI design at Capital One. We talk about designing for voice interactions, connecting with remote team members, and the importance of baking humanity into AI.

Continue reading Steph Hay on designing for Alexa.




Gilad Rosner on privacy in the age of the Internet of Things

2016-11-23T12:20:00Z


The O’Reilly Hardware Podcast: Safeguarding against new privacy risks.

In this episode of the O’Reilly Hardware Podcast, Jeff Bleiel and I speak with Gilad Rosner, a privacy and information policy researcher, and the founder of the Internet of Things Privacy Forum.  Rosner is also the author of the recently-published free O’Reilly ebook, “Privacy and the Internet of Things.”

Continue reading Gilad Rosner on privacy in the age of the Internet of Things.




Four short links: 23 November 2016

2016-11-23T12:05:00Z

Facebook Censorship, Regulating Security, A/B Testing, and Spying Headphones

  1. Facebook Said to Create Censorship Tool to Re-Enter China (NYT) -- Unveiling a new censorship tool in China could lead to more demands to suppress content from other countries. The fake news problem, which has hit countries across the globe, has already led some governments to use the issue as an excuse to target sites of political rivals, or shut down social media sites altogether.
  2. Internet Era of Fun and Games is Over -- It was OK when it was fun and games. But already there’s stuff on this device that monitors my medical condition, controls my thermostat, talks to my car: I just crossed four regulatory agencies, and it’s not even 11 o’clock. This is something that we’re going to need to do something new about. And as with many new technologies in the 20th century, many new agencies were created: trains, cars, airplanes, radio, nuclear power. My guess is that [the internet] is going to be one of them. And that’s because this is different. This is all coming. Whether we like it or not, the technology is coming, and it’s coming faster than we think. I think government involvement is coming, and I’d like to get ahead of it. I’d like to start thinking about what this would look like. Bruce Schneier testifies.
  3. Bayesian A/B Testing Not Immune to Peeking -- dang it's hard to do science!
  4. Your Headphones Can Spy On You (Wired) -- Their malware uses a little-known feature of RealTek audio codec chips to silently “retask” the computer’s output channel as an input channel, allowing the malware to record audio even when the headphones remain connected to an output-only jack and don’t even have a microphone channel on their plug. The researchers say the RealTek chips are so common that the attack works on practically any desktop computer, whether it runs Windows or MacOS, and most laptops, too.

Continue reading Four short links: 23 November 2016.




Bots in society

2016-11-23T11:45:00Z


Lili Cheng provides insights into the planning and release of Microsoft's bot, Tay.

Continue reading Bots in society.




ChatOps: Supercharging DevOps with group chat

2016-11-23T11:00:00Z


The O’Reilly Podcast: Jason Hand discusses how to get your team started with ChatOps.

How can software teams leverage chat to reduce friction and enhance collaboration? This episode of the O’Reilly Podcast features my discussion with Jason Hand, DevOps Evangelist at VictorOps and author of the recent O'Reilly book, ChatOps: Managing Infrastructure In Group Chat. We talk about ChatOps, managing incidents, and what it means to be a learning organization.

The full conversation is available through the embedded audio. Highlights from the discussion are noted below.

Continue reading ChatOps: Supercharging DevOps with group chat.




Strategies to validate your security detections

2016-11-22T15:50:00Z


Machine learning to generate attack data and improve security analytics.

There is a lot of focus on building security detections, and the attention is always on the algorithms. However, as security data scientists are quickly realizing, the tough task is not model building but rather model evaluation: how do you convince yourself that the detections work as intended, and how do you measure efficacy given the lack of tainted data? This article provides strategies—both from the perspective of security and machine learning—for generating attack data to validate security data science detections.

Drought of good quality test data in security analytics

If you attend any security data science presentation, you are most likely to see “lack of labeled data” in the “challenges” section. The difficulty in building credible security analytics is not only the know-how, but also the drought of good quality test data. Textbook applications of machine learning boast a huge corpus of labeled data; for instance, for image recognition problems, the ImageNet data set offers 14 million labeled images, and for speech recognition, the SwitchBoard corpus offers close to 5,800 minutes of labeled data.
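
As a toy illustration of generating your own labeled attack data (my sketch, with a made-up event schema; not the article's method), you can inject synthetic malicious events into benign logs and then measure a detector against the known labels:

    import random

    # Synthetic benign login events (hypothetical schema), label 0.
    benign = [{"user": f"user{i}",
               "failed_logins": random.randint(0, 2),
               "label": 0}
              for i in range(1000)]

    def synth_attack():
        """Simulate a brute-force pattern: an abnormal burst of failures."""
        return {"user": f"user{random.randint(0, 999)}",
                "failed_logins": random.randint(20, 200),
                "label": 1}

    # Inject a known fraction of attacks so efficacy is measurable.
    events = benign + [synth_attack() for _ in range(50)]
    random.shuffle(events)

    # Evaluate a trivial threshold detector against the injected labels.
    def detect(event):
        return event["failed_logins"] >= 10

    tp = sum(1 for e in events if detect(e) and e["label"] == 1)
    fp = sum(1 for e in events if detect(e) and e["label"] == 0)
    fn = sum(1 for e in events if not detect(e) and e["label"] == 1)
    print(f"precision={tp / (tp + fp):.2f} recall={tp / (tp + fn):.2f}")

Real attack generation is far subtler (the article's focus is using machine learning to generate it), but the shape is the same: known injections give you the ground truth that production data lacks.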

Continue reading Strategies to validate your security detections.




An overview of the bot landscape

2016-11-22T11:55:00Z


Are bots your new best friend?

Bots are a growing segment of software that acts as an agent on a human’s behalf, performing tasks that range from ordering online, to making dinner reservations, to handling customer service requests, to helping employees be more productive in the workplace.

Historically, most bots have used simple rules-based approaches to present an output for a given input (such as presenting the weather). But today, with advances in server-side processing power and improvements in implementing artificial intelligence (AI) and machine learning (ML), bots are starting to provide real value to consumers. The tide has finally turned, and bots are entering the mainstream consciousness, especially after the recent announcements at Facebook’s annual conference, F8.
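
A minimal sketch of that rules-based approach in Python (mine, not any particular framework's) shows both its simplicity and the brittleness that AI/ML-based bots aim to move past:

    import re

    # A rules-based bot: ordered (pattern, canned response) pairs; first match wins.
    RULES = [
        (re.compile(r"\bweather\b", re.I),
         "It's 72 degrees and sunny. (canned; a real bot would call a weather API)"),
        (re.compile(r"\b(reserve|reservation|book)\b", re.I),
         "What day and time would you like to book?"),
        (re.compile(r"\b(hi|hello|hey)\b", re.I),
         "Hello! How can I help?"),
    ]

    def reply(message):
        for pattern, response in RULES:
            if pattern.search(message):
                return response
        # The brittleness: anything outside the rules falls through.
        return "Sorry, I didn't understand that."

    print(reply("What's the weather like?"))
    print(reply("Can I book a table for two?"))
    print(reply("Tell me a joke"))   # falls through to the fallback

Every input the rule authors didn't anticipate hits the fallback, which is precisely the ceiling that server-side AI/ML approaches are lifting.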

Continue reading An overview of the bot landscape.




Upcoming bio events

2016-11-22T11:00:00Z


Events and gatherings for the biological revolution.

Science: Disrupt London Sessions - Disrupting the Lab

Science: Disrupt is a podcast, editorial, and events community organization about the innovators, iconoclasts, and entrepreneurs intent on creating change in science. They've recently spoken about health tech, research environments, open science, maths education, science startups, and basic science funding. At Science: Disrupt's next event on Dec. 6, they'll be covering automation, collaborative lab spaces, portable laboratories, and what investors look for in science software startups. Sign up is free.

Details:


Genome Editing with CRISPR-Cas9

Want to learn how to do hands-on genome editing? This is an intensive laboratory class with space limited to 10 slots. In this class you will:

  • Learn how to culture and work with Saccharomyces cerevisiae (brewer's yeast).
  • Build two CRISPR-Cas9 genome editing systems: one for gene deletion (target the ADE2 gene in the adenine biosynthesis pathway, or use a guide RNA of your own design) and another for inserting the gene for a fluorescent protein.
  • Transform yeast cells with the plasmids and confirm the results by DNA sequence analysis.

Prerequisites: You must have taken the CRISPR workshop at Genspace, and you must be familiar with molecular biology lab techniques such as pipetting, gel electrophoresis, use of restriction enzymes, and PCR (Genspace Biotech Crash Course, Biohacker Boot Camp, or equivalent lab experience).

Details:

  • Nov. 29, Dec. 1, Dec. 6, and Dec. 13, 2016
  • Genspace
  • Learn more

Continue reading Upcoming bio events.




Securing application deployments in CI/CD environments

2016-11-22T11:00:00Z


Binu Ramakrishnan highlights current security risks and CI/CD threat modeling and presents security patterns-based techniques to mitigate these risks, including a novel idea called auth events to delegate user privileges to CI/CD workflow jobs.

Continue reading Securing application deployments in CI/CD environments.




Four short links: 22 November 2016

2016-11-22T11:00:00Z

Connections App, Voice UIs, Pentagon's Dystopia, and Robot Armies

  1. James Burke's Connections App -- being Kickstarted as we speak. I backed it like a boss.
  2. Designing Voice User Interfaces -- don’t ask a question if you won’t be able to understand the answer.
  3. Pentagon's Dystopic Sci-Fi Future (The Intercept) -- video made to illustrate a possible future: economic have-nots, megacities, and rogues who enable a proliferation of “digital domains” that facilitate “sophisticated illicit economies and decentralized syndicates of crime to give adversaries global reach at an unprecedented level.” (via Ars Technica)
  4. Who Will Command the Robot Armies? (Maciej Ceglowski) -- We, on the other hand, didn't plan a thing. We just built ourselves a powerful apparatus for social control with no sense of purpose or consensus about shared values. Do we want to be safe? Do we want to be free? Do we want to hear valuable news and offers?

Continue reading Four short links: 22 November 2016.




Why Reactive?

2016-11-22T11:00:00Z


A deep dive into the technical aspects of reactive.

Introduction

It’s increasingly obvious that the old, linear, three-tier architecture model is obsolete.

A Gartner Summit track description

Continue reading Why Reactive?.




What's new in Swift 3

2016-11-22T11:00:00Z


Learn about Swift’s most impactful and interesting new features, and explore Swift’s use on non-Apple platforms.

Introduction

Swift was introduced to the world in 2014 and has rapidly evolved since then, eventually being released as an open source project in late 2015. Swift has been one of the fastest-growing programming languages in history, by a variety of metrics, and is worth serious consideration regardless of your level of programming experience or the size and age of your project’s code.

Swift was designed to be a complete replacement for Objective-C, the language used for all iOS and Mac OS development prior to Swift’s release. Swift is ideal for new projects; additionally, because you can easily use Swift and Objective-C in the same project, you can incrementally convert your existing Objective-C code to Swift.

Continue reading What's new in Swift 3.




Pagan Kennedy on the art and science of serendipity

2016-11-21T20:25:00Z


It's important in this age of big data to return to the original meaning of serendipity and talk about it as a skill.

Continue reading Pagan Kennedy on the art and science of serendipity.




Basic principles for designing voice user interfaces

2016-11-21T12:00:00Z

Challenges and opportunities, how VUIs differ from IVRs, and tools for creating great conversational designs.

In the early 2000s, interactive voice response (IVR) systems were becoming more common. Initially a touch-tone/voice hybrid (“Please press or say one”) and very primitive, they became an expected way to communicate with many companies. IVRs could help callers get stock quotes, book flights, transfer money, and provide traffic information. Many of them were designed poorly, and websites popped up with back doors on how to get transferred immediately to an operator (something many companies actively tried to hide). IVRs got a bad reputation, ending up the subject of satire on Saturday Night Live.

IVRs were created to automate tasks so customers would not always have to speak to a live person to get things done. They were created before the internet became commonly used and before smartphones were invented. Nowadays, many IVRs are used as the “first response” part of a phone call, so that even if the caller ends up speaking with an agent, basic information has been collected (such as a credit card number). For many tasks, even complex ones such as booking a flight, an IVR can do the job. In addition, IVRs are great at routing customers to a variety of different agent pools, so that one phone number can serve many needs. Finally, some users actually prefer using an IVR over speaking with an agent, because they can take their time and ask for information over and over (such as with the early Charles Schwab stock quote IVR) without feeling they’re “bothering” a human agent.

Although some of the design strategies from the IVR world also apply to mobile voice user interface (VUI) design (as well as VUI for devices), mobile VUIs also present a unique set of challenges (and opportunities). This chapter outlines design principles for the more varied and complex world of designing VUI systems. One of the challenges of mobile VUIs is whether or not they will have a visual representation, such as an avatar. In addition, when will your VUI allow the user to speak? Will users be able to interrupt? Will it use push-to-talk? These challenges are discussed later in the book.

One of the opportunities that mobile devices have that IVRs do not is that mobile devices can have a visual component. This can be a big advantage in many ways, from communicating information to the user, to confirming it, even to helping the user know when it’s their turn to speak. Allowing users to interact both via voice and using a screen is an example of a “multimodal” interface. Many of the examples in this book are for multimodal designs. In some cases, the modes live together in one place, such as a virtual assistant on a mobile phone. In others, the main interaction is voice-only, but there is also a companion app available on the user’s smartphone. For example, let’s say you ask Google, “Who are the 10 richest people in the world?” Google could certainly read off a list of people (and their current worth), but that is a heavy cognitive load. It’s much better to display them, as shown belo[...]
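
The multimodal principle lends itself to a simple decision sketch in Python (mine, not the book's; the data is made up): answer with a screenful when a display is present, and keep the spoken answer short when it is not:

    # Multimodal response sketch: choose rendering per device capability.
    RICHEST = ["Person A ($75B)", "Person B ($67B)", "Person C ($61B)"]  # made-up

    def respond(results, has_display):
        if has_display:
            # Screens carry long lists well; voice does not.
            return {"display": results,
                    "speech": "Here are the results, on your screen."}
        # Voice-only: keep the cognitive load low; offer one item at a time.
        return {"speech": f"The top result is {results[0]}. "
                          "Want to hear the next one?"}

    print(respond(RICHEST, has_display=True))
    print(respond(RICHEST, has_display=False))

Trivial as it is, this is the judgment a VUI designer makes constantly: which modality should carry which part of the answer.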



Four short links: 21 Nov 2016

2016-11-21T12:00:00Z

Master Strategy, Writing a MMORPG, Human-Level AI, and Network of Sorrows

  1. How to Master Strategy (Simon Wardley) -- Most companies aren't playing chess when it comes to strategy (despite what you read). At best, most are simply meme copying others or running on gut feel and highest paid person's opinion.
  2. Yegge's Back! -- he's been writing a game in his spare time, and launched it. It runs on the Google Cloud Platform.
  3. Kurzweil Interviews Minsky on Human-Level AI -- resourceful versatile machines. Adjectives you don't normally hear applied to software.
  4. A Network of Sorrows: Small Adversaries and Small Allies (Quinn Norton) -- One media report in the U.S. estimated 8,500 schools in America have been hit with ransomware this year. Now, the reason why I think it’s really interesting to point out the American figures here is this is also a national system where, as of last year, half of all students in U.S. public schools qualify for poverty assistance. Those are the people paying these ransomwares. Full of gold. And everything in our fields is dual-use. Everything that we can use to help somebody can later be repurposed to hurt them. Our sorrows come in battalions these days.

Continue reading Four short links: 21 Nov 2016.




Four short links: 18 November 2016

2016-11-18T12:00:00Z

Bare Metal, Serverless, Deep Learning, and Junkbot Winners

  1. Why Choose Bare Metal? (GitLab) -- There is a threshold of performance on the cloud, and if you need more, you will have to pay a lot more, be punished with latencies, or leave the cloud.
  2. Going Serverless: Compelling Science Fiction -- (CSF is the name of the site, not implying that the idea of serverlessness is itself compell... never mind.) Simple workflow broken into the AWS services and functionality.
  3. Deep Learning Papers -- Papers about deep learning ordered by task, date. Current state-of-the-art papers are labeled.
  4. Junkbot Competition Winners -- wonderful creativity!

Continue reading Four short links: 18 November 2016.




Ben Brown on bot tools

2016-11-17T18:10:00Z


The O’Reilly Bots Podcast: An optimistic look at the future of bots.

In this episode of the O’Reilly Bots Podcast, Pete Skomoroch and I speak with Ben Brown, co-founder and CEO of Howdy.ai, the bot toolmaker behind the Botkit framework. Brown also runs the Talkabot conference, which was held in Austin this past September.

Continue reading Ben Brown on bot tools.




Hilary Mason on the wisdom missing in the AI conversation

2016-11-17T14:15:00Z


The O'Reilly Radar Podcast: Thinking critically about AI, modeling language, and overcoming hurdles.

This week, I sit down with Hilary Mason, who is a data scientist in residence at Accel Partners and founder and CEO of Fast Forward Labs. We chat about current research projects at Fast Forward Labs, adoption hurdles companies face with emerging technologies, and the AI technology ecosystem—what's most intriguing for the short term and what will have the biggest long-term impact.

Continue reading Hilary Mason on the wisdom missing in the AI conversation.

(image)



Building the next-generation big data analytics stack

2016-11-17T13:05:00Z

(image)

The O’Reilly Data Show Podcast: Michael Franklin on the lasting legacy of AMPLab.

In this episode, I spoke with Michael Franklin, co-director of UC Berkeley’s AMPLab and chair of the Department of Computer Science at the University of Chicago. AMPLab is well-known in the data community for having originated Apache Spark, Alluxio (formerly Tachyon), and many other open source tools. Today marks the start of a two-day symposium commemorating the end of AMPLab, and we took the opportunity to reflect on its impressive accomplishments.

Continue reading Building the next-generation big data analytics stack.

(image)



Four short links: 17 November 2016

2016-11-17T12:00:00Z

Hacking Locked Machines, Rules Engine, Trello Tips, and Autonomous Driving Soon

  1. PoisonTap -- siphons cookies, exposes internal router & installs web backdoor on locked computers. Open source, runs on a Raspberry Pi. (via Ars Technica)
  2. Trigger Happy -- open source IFTTT-styled rule engine for remixing ingredients.
  3. 18F Guide to Trello -- I've used Trello for several years, and these tips were still useful.
  4. Tesla and NVIDIA -- Huang continued by saying that autonomous driving is not a “detection problem” but an “AI problem,” and he insists that it’s going to be solved in 2017. Also, yow, every Tesla comes with a supercomputer.

Continue reading Four short links: 17 November 2016.

(image)



Achieving security and speed

2016-11-17T11:00:00Z

The O’Reilly Podcast: Nathan Moore discusses caching, CDNs, and scaling front-end security and performance.

In this episode of the O’Reilly podcast, I talk with Nathan Moore, CDN architect at StackPath. We discuss intelligent caching, building secure CDN solutions, and the future of front-end security and performance. Here are some highlights; the full conversation is available in the embed below.

What is caching?

Back in the old days, everyone would put up their own page. They'd host it somewhere, and then you would just count the hits as they popped up. But in the modern world, we have to scale up. As we serve more and more pages to more and more people, there's a need to handle more than just your individual page. The idea behind a cache is that you have a choice: you can serve that page yourself, or you can take advantage of what's called a proxy, a server that sits between the end user and the web page the end user wants to see. If that proxy can cache the content coming from the origin, then the end user never has to talk to the origin and can always talk to the cache. The higher the cache hit ratio, the more requests can be served directly from that proxy, and the less work the origin has to do. As a result, origins can scale much better, the web in general can run much faster, and end users wind up much happier.

Common trade-offs between speed and security at the front end

One of the ideas that has come up in recent years is that a content delivery network shouldn't just deliver content; it should also be actively engaged in securing that content. It should ensure that the content goes to the intended end user and not to malicious end users. It should also protect the customer's origin server.

Part of that is understanding where and how to encrypt the content. Obviously, if somebody does sniff the traffic, you want the content to be unintelligible to them. Modern cryptography effectively does that. We can deploy SSL along the full path if that's what the end user specifies. One of the problems with SSL is that it carries a processing overhead. Part of the value proposition of a company like StackPath is the ability to provide security while offloading the inevitable performance side effects of that security. In other words, we had better perform while serving encrypted content, and we had better do that very, very well.

The other part is protecting the origin. This is where things like WAFs (web application firewalls) come in, because we want to be able to dynamically view requests and determine what's a legitimate request and what's not. You want to be able to appl[...]
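The cache-hit-ratio idea is easy to see in code. Here is a minimal sketch of a caching proxy, assuming an in-memory dict and a fixed TTL; the names are illustrative and this is not StackPath's implementation.

```python
# Minimal proxy-cache sketch; the dict store, TTL, and helper names are
# assumptions for illustration, not a production CDN design.
import time

cache = {}            # url -> (expires_at, body)
hits = misses = 0

def fetch_from_origin(url):
    # Stand-in for the expensive round trip to the origin server.
    return "<html>content for {}</html>".format(url)

def get(url, ttl=60):
    """Serve from the cache while fresh; otherwise hit the origin and store."""
    global hits, misses
    entry = cache.get(url)
    if entry and entry[0] > time.time():
        hits += 1                       # the proxy answers; the origin rests
        return entry[1]
    misses += 1                         # cache miss: the origin must respond
    body = fetch_from_origin(url)
    cache[url] = (time.time() + ttl, body)
    return body

get("/index.html")
get("/index.html")                      # the second request is a cache hit
print("cache hit ratio:", hits / (hits + misses))   # 0.5 here; higher is better
```

The higher that ratio climbs, the more traffic the proxy absorbs and the better the origin scales, which is exactly the trade Moore describes.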



Highlights from the O'Reilly Software Architecture Conference in San Francisco 2016

2016-11-17T11:00:00Z

Watch highlights covering software architecture, microservices, distributed systems, and more. From the O'Reilly Software Architecture Conference in San Francisco 2016.

Experts from across the software architecture world came together in San Francisco for the O’Reilly Software Architecture Conference. Below you'll find links to highlights from the event.

An introduction to serverless

Mike Roberts introduces the concepts behind serverless architectures, explains how serverless breaks from the past, and provides reasons why it's worthy of some of the hype it's currently receiving. Watch "An introduction to serverless."

Walking the tightrope: Balancing bias to action and planning

Dianne Marsh explains how microservices paved the way for traffic steering at Netflix, and she highlights current challenges. Watch "Walking the tightrope: Balancing bias to action and planning."

Confessions of an enterprise architect

Faced with business and regulatory complexity, Scott Shaw found himself committing some of the software development acts he once condemned. He confesses those sins and explains why they're sometimes necessary. Watch "Confessions of an enterprise architect."

Listening to the design pressures

Martin Thompson explores the architectures that emerge from applying design patterns required for high performance, resilience, security, usability, and other quality-of-service measures. Watch "Listening to the design pressures."

Microservices: Pros and cons

Rachel Laycock and Cassandra Shum take opposing sides in the microservices debate. Watch "Microservices: Pros and cons."

Kubernetes: What you need to know ... and why

Kelsey Hightower offers a quick overview of Kubernetes—the community, the project, and the technology for managing containerized workloads. Watch "Kubernetes: What you need to know ... and why."

Continue reading Highlights from the O'Reilly Software Architecture Conference in San Francisco 2016.



Kubernetes: What you need to know ... and why

2016-11-16T21:00:00Z

(image)

Kelsey Hightower offers a quick overview of Kubernetes—the community, the project, and the technology for managing containerized workloads.

Continue reading Kubernetes: What you need to know ... and why.

(image)



Microservices: Pros and cons

2016-11-16T21:00:00Z

(image)

Rachel Laycock and Cassandra Shum take opposing sides in the microservices debate.

Continue reading Microservices: Pros and cons.

(image)



Listening to the design pressures

2016-11-16T21:00:00Z

(image)

Martin Thompson explores the architectures that emerge from applying design patterns required for high performance, resilience, security, usability, and other quality-of-service measures.

Continue reading Listening to the design pressures.

(image)



Media in the age of algorithms

2016-11-16T15:40:00Z

(image)

The problem of fake news and bad sites trying to game the system is an industry-wide problem — companies should share data and best practices in the effort to combat it.

Since the U.S. election, there’s been a lot of finger pointing, and many of those fingers are pointing at Facebook, arguing that its newsfeed algorithms played a major role in spreading misinformation and magnifying polarization. Some of the articles are thoughtful in their criticism, others thoughtful in their defense of Facebook, while others are full of the very misinformation and polarization that they hope will get them to the top of everyone’s newsfeed. But all of them seem to me to make a fundamental error in how they are thinking about media in the age of algorithms.

Consider Jessica Lessin’s argument in The Information:

Continue reading Media in the age of algorithms.

(image)



Growing a design team

2016-11-16T14:05:00Z

5 questions for Alastair Simpson: Customer empathy, culture’s impact on design, and the power of saying “I don’t know.”

I recently asked Alastair Simpson, design manager at Atlassian, to share insights from leading and growing a design team, what management has taught him about himself, and how to measure design maturity. At the O’Reilly Design Conference, Simpson will be presenting a session on managing design teams: From 6 to 126 in 4 years.

You are presenting a talk at the 2017 Design Conference, titled “From 6 to 126 in 4 years.” What were some of the challenges of growing a team from 6 to 100+ at Atlassian?

When you grow a team that quickly, there are numerous challenges, many of them ongoing ones that you are constantly working to improve. Just like great design, we continue to learn and iterate on our team and the design of our organisational structure as we grow.

Initially, the biggest strategic challenge was ensuring that the company truly valued the practice of design. And when I say “truly valued,” I really mean that. Many organisations have large design teams, yet many of those teams are still treated as a “service” within their organisation, rather than an integral part of how they build great products. Making design matter within the organisation was the first hurdle to overcome; if you don't have that right from day one, everything else can be wasted effort.

Once you start scaling design in an organisation that values it, you bump into much more tactical but no less important problems. Issues like:

- Retaining the core values within the design team as you grow so quickly
- Scaling the design language to ensure that it can be used by a global team of designers
- Giving designers the right tools and frameworks to allow them to integrate successfully with their peers in product management and engineering
- Educating other disciplines about how to reframe problems and understand the core tenets of design thinking

All of these challenges are more tactical in nature, but extremely tough to tackle within any organisation, let alone one growing as quickly as Atlassian.

What has been the hardest part of taking the company culture from engineering driven to experience driven?

One of the toughest challenges has been building up the right level of customer empathy across all product development disciplines within Atlassian. Atlassian has always had a very customer-centric culture, but we have predominantly spoken to the administrators who set up our products, and generally they love us. As we scaled design, we realised that we needed to create more empathy across other disciplines for all of our customers, not just our admins. We put many things in p[...]



The hard thing about deep learning

2016-11-16T13:15:00Z

Deeper neural nets often yield harder optimization problems.

At the heart of deep learning lies a hard optimization problem. So hard that for several decades after the introduction of neural networks, the difficulty of optimization on deep neural networks was a barrier to their mainstream usage and contributed to their decline in the 1990s and 2000s. Since then, we have overcome this issue. In this post, I explore the “hardness” in optimizing neural networks and see what the theory has to say. In a nutshell: the deeper the network becomes, the harder the optimization problem becomes.

The simplest neural network is the single-node perceptron, whose optimization problem is convex. The nice thing about convex optimization problems is that all local minima are also global minima. There is a rich variety of optimization algorithms to handle convex optimization problems, and every few years a better polynomial-time algorithm for convex optimization is discovered. Optimizing weights for a single neuron is easy using convex optimization (see Figure 1). Let’s see what happens when we go past a single neuron.

Figure 1. Left: a convex function. Right: a non-convex function. It is much easier to find the bottom of the surface in the convex function than in the non-convex one. (Source: Reza Zadeh)

The next natural step is to add many neurons while keeping a single layer. For the single-layer, n-node perceptron network, if there exist edge weights so that the network correctly classifies a given training set, then such weights can be found in time polynomial in n using linear programming, which is itself a special case of convex optimization. A natural question arises: can we make similar guarantees about deeper neural networks, with more than one layer? Unfortunately not. Provably solving the optimization problem for general networks with two or more layers would require algorithms that run into some of the biggest open problems in computer science, so there is little hope of machine learning researchers finding algorithms that are provably optimal for deep networks. This is because the problem is NP-hard: provably solving it in polynomial time would also settle thousands of problems that have been open for decades. Indeed, in 1988 J. Stephen Judd showed the following problem to be NP-hard:

Given a general neural network and a set of training examples, does there exist a set of edge weights for the network so that the network produces the correct output for all the training examples?

Judd also showed that the problem remains NP-hard even if it only requires a network to produce the correct output for just two-thirds of the training examples, which i[...]
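To see concretely why the single-neuron case is tractable, here is a small sketch assuming toy data and a ridge-regularized logistic loss (the regularizer makes the global minimum unique); the data, step size, and iteration count are illustrative, not from the article.

```python
# Convex single-neuron sketch: regularized logistic loss is convex, so
# gradient descent reaches the same global minimum from any start.
# Toy data and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 2)                          # toy 2-D inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # linearly separable labels

def loss_and_grad(w, lam=0.1):
    p = 1.0 / (1.0 + np.exp(-X.dot(w)))        # single neuron: sigmoid(w . x)
    loss = -np.mean(y * np.log(p + 1e-12) +
                    (1 - y) * np.log(1 - p + 1e-12)) + 0.5 * lam * w.dot(w)
    grad = X.T.dot(p - y) / len(y) + lam * w   # ridge term keeps the minimum unique
    return loss, grad

for w in (np.zeros(2), np.array([8.0, -8.0])):  # two very different starts
    for _ in range(500):
        w = w - 0.5 * loss_and_grad(w)[1]
    print(round(float(loss_and_grad(w)[0]), 6))  # both starts print the same loss
```

Once a second layer is added, the loss surface becomes non-convex and no such guarantee survives, which is exactly the regime Judd's NP-hardness result addresses.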



Scaling oneself: Delegating as a leader means letting go

2016-11-16T12:10:00Z

It’s important to be willing to delegate tasks to the experts who can best help achieve the company vision.

As a company grows, a difficult but necessary transition is for its visionary to recognize when it’s time to let go of the day-to-day, hands-on tasks. That doesn’t just mean “delegate.” It means: “Let go. Permit other people to contribute to the vision.”

I have boxes full of nostalgia T-shirts from long-gone tech companies. As I look through the corporate tchotchkes I collected, such as the Poqet polo shirt or the LaserMaster coffee mug, I sometimes contemplate how much passion and dedication their founders invested in those companies. And how sad it is that so much of that true innovation was wasted.

Sometimes, technology projects fail because the idea didn’t work, or the funding dried up, or the company lacked expertise in a key business area. But too often, I’ve seen startups fail because their leadership was unable to make an important transition. The stumble is most common when the company reaches an awkward size: too “established” to call it a startup, but not accepted enough to say it isn’t one.

I just saw this happen again. The company is doing well, with a top-tier technology product that businesses want to buy. It has plenty of VC funding and a lot of incredibly smart, competent people among its employees. This business ought to be a rousing success, and in some ways it is—at least, if you judge it only externally. But an inside peek divulges important danger signs. Among them are high staff turnover, particularly among mid-senior staff; missed ship dates; and weak, me-too business messaging.

How can such smart people fail? The answer begins with the CEO. He’s a brilliant guy who is easy to admire. He built the company starting with the proverbial “three guys sitting around a kitchen table,” he taught himself business skills despite primarily being a tech whiz, and he prides himself on being involved in every facet of the business. Every job candidate meets with him during the interview process, even for low-level positions. The CEO is willing and able to do everything, from designing the software to speaking at conferences to writing blog posts.

In fact, that’s the source of the problem. What was once a strength has become a weakness he does not acknowledge. He is not willing to let go.

Oh, he thinks he’s delegating. As I said, he hired top-notch people with laudable achievements. He shares the corporate vision and tells them what he wants them to contribute. …Or he thinks he does. The CEO insists on approving every blog post topic before someone on the staff[...]



Design.bio

2016-11-16T12:00:00Z

(image)

Bringing biology to the design studio.

Biology at the design studio

As scientists' understanding of and ability to manipulate biological systems have improved, there is growing interest in the design community in working with synthetic biology and biological design. In the hands of speculative designers, bacteria turn into gold, furniture is grown out of mushrooms, and specks of meat become leather jackets or are discussed in cooking shows. Many of these bold ideas are realized as creative design statements or symbolic gestures that make us think about the cultural and environmental implications of working with biology. Sometimes, due to limits in knowledge or resources—or simply by intention—these designs do not become part of everyday life and remain intellectual endeavors featured in books, exhibitions, or competitions. This disconnect gives such designs the ability to be radical, unconstrained by regulations, feasibility, cost, or market demands. In return, however, they gain the power to shape public opinion, create discussion and debate, and inspire many more designers to work with biology.

Continue reading Design.bio.

(image)



Learning How to Delegate as a Leader

2016-11-16T12:00:00Z

(image)

This report examines the best ways to delegate work so you not only end up with successfully completed tasks, but also a team of cheerful people—including a cheerful you.

Learning How to Delegate (Without Making People Hate You)

If you want to build a ship, don’t drum up people to collect wood, and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.

Antoine de Saint-Exupéry

Continue reading Learning How to Delegate as a Leader.

(image)



Four short links: 16 November 2016

2016-11-16T11:40:00Z

CRISPR in Humans, VR Sadness, Reasoning, and Robot Dancing

  1. CRISPR Gene Editing Tested in a Person for First Time (Nature) -- On 28 October, a team led by oncologist Lu You at Sichuan University in Chengdu delivered the modified cells into a patient with aggressive lung cancer as part of a clinical trial at the West China Hospital. Milestone.
  2. The Post-VR Sadness -- In the first couple minutes after any VR experience, you feel strange, almost like you’re detached from reality. You will interact with physical objects with special care because, for some reason, you think that you can simply fly through them. Interacting with your smartphone touch screen becomes almost comical because the interface seems so dull and disappointing to you. It’s like your fingers are passing through the touch screen when touching it. This specific feeling usually fades within the first 1–2 hours and gets better over time. It’s almost like a little hangover, depending on the intensity of your VR experience.
  3. Reasoning about Truthfulness of Agents Using Answer Set Programming (PDF) -- The paper illustrates how, starting from observations, knowledge about the actions of the agents, and the normal behavior of agents, one can evaluate the statements made by agents against a set of observations over time. NB Facebook newsfeed software creators.
  4. RHex Dances -- we have now advanced robot technology to the point where robots can dance as well as I can. Brace for impact, humanity—the dance floor is being disrupted!

Continue reading Four short links: 16 November 2016.

(image)



How containers add to a proven cloud-native architecture

2016-11-16T11:00:00Z

(image)

Mike McGarr and Andrew Spyker explain the potential containers have to help Netflix create a more productive development experience while simultaneously deepening its control over resource management.

Continue reading How containers add to a proven cloud-native architecture.

(image)



How to get superior text processing in Python with Pynini

2016-11-16T11:00:00Z

(image)

Regular expressions are the standard for string processing, but did you know you can often get better text untangling with Pynini's finite-state transducers?

It's hard to beat regular expressions for basic string processing. But for many problems, including some deceptively simple ones, we can get better performance with finite-state transducers (or FSTs). FSTs are simply state machines which, as the name suggests, have a finite number of states. But before we talk about all the things you can do with FSTs, from fast text annotation—with none of the catastrophic worst-case behavior of regular expressions—to simple natural language generation, or even speech recognition, let's explore what a state machine is and what state machines have to do with regular expressions.

From gumball machines to state machines

A state machine is any piece of software or hardware whose behavior can be described in terms of transitions between states. For instance, a gumball machine can be in one of two states:
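As an illustrative assumption (not necessarily the article's own pair), take the two states to be awaiting a coin and ready to dispense. A minimal sketch of such a machine:

```python
# Illustrative two-state machine; the state names and transitions here are
# assumptions for the sake of example.
class GumballMachine:
    def __init__(self):
        self.state = "AWAITING_COIN"

    def insert_coin(self):
        if self.state == "AWAITING_COIN":
            self.state = "READY"            # transition: coin accepted

    def turn_crank(self):
        if self.state == "READY":
            self.state = "AWAITING_COIN"    # dispense, then reset
            return "gumball"
        return None                         # no coin, no gumball

m = GumballMachine()
m.insert_coin()
print(m.turn_crank())   # "gumball"
print(m.turn_crank())   # None: the machine is back to awaiting a coin
```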

Continue reading How to get superior text processing in Python with Pynini.

(image)



Understanding reactive architecture through the actor model

2016-11-16T11:00:00Z

The O’Reilly Podcast: Hugh McKee on learning how to think asynchronously to create highly concurrent business systems.

In this episode of the O’Reilly Podcast, I sat down with Hugh McKee, solutions architect at Lightbend. McKee and I discussed why the actor model is an ideal choice for building today’s distributed architectures, how actor-based systems manage requests to perform tasks, the kinds of enterprises already having success with actors, and best practices for getting started with asynchronous systems.

Here are a few highlights from our conversation:

A simple but elegant abstraction layer

The thing I like about actors is the abstraction they provide for building systems. It works at two levels and makes both the architecture and design of the system and the actual implementation of the code really fun. In the architecture and design of the system, the actor model gives us an abstraction layer that is simple and elegant but powerful. With actors, it is almost like you get an infinite box of Lego blocks and you can build things any way you want. The fundamental model is very simple: actors communicate with each other asynchronously by sending messages, and that's it.

On the code side, the fun is that you are able to decompose a system into a collection of actors, from high-level actors that live for a long time to relatively low-level actors that live for only a very short period but are very focused on what they have to do. They do one thing and they do it well—the Unix philosophy. That is nice from the coding standpoint because you can build systems with things like high levels of concurrency.

Asynchronous behavior

We can think of actors like humans texting each other. If you send me a text message asking me to do something, you are free to continue doing whatever you want; you are not stopped waiting for me to respond. When we write synchronous code, of course, the caller waits until the thing that was called responds. In an asynchronous world, in the actor world, you have this more asynchronous behavior, and it introduces all kinds of interesting dynamics in[...]
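The texting analogy maps directly onto code. Below is a minimal actor sketch; McKee works with Lightbend's Akka on the JVM, so this queue-plus-thread Python version is an illustrative assumption rather than the toolkit he describes.

```python
# Minimal actor sketch: a private mailbox plus a thread that processes one
# message at a time. Names and structure are illustrative assumptions.
import queue
import threading
import time

class Actor:
    """An actor owns a mailbox and handles messages one at a time."""
    def __init__(self, handler):
        self._mailbox = queue.Queue()
        self._handler = handler
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self._mailbox.put(msg)     # asynchronous: the sender never blocks

    def _run(self):
        while True:
            self._handler(self._mailbox.get())

greeter = Actor(lambda msg: print("hello,", msg))
greeter.send("world")              # returns immediately, like sending a text
time.sleep(0.1)                    # give the actor's thread a moment to run
```

The sender carries on as soon as send returns, just as you keep doing whatever you want after firing off a text message.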



An introduction to serverless

2016-11-15T21:00:00Z

(image)

Mike Roberts introduces the concepts behind serverless architectures, explains how serverless breaks from the past, and provides reasons why it's worthy of some of the hype it’s currently receiving.

Continue reading An introduction to serverless.

(image)



Confessions of an enterprise architect

2016-11-15T21:00:00Z

(image)

Faced with business and regulatory complexity, Scott Shaw found himself committing some of the software development acts he once condemned. He confesses those sins and explains why they’re sometimes necessary.

Continue reading Confessions of an enterprise architect.

(image)



Walking the tightrope: Balancing bias to action and planning

2016-11-15T21:00:00Z

(image)

Dianne Marsh explains how microservices paved the way for traffic steering at Netflix, and she highlights current challenges.

Continue reading Walking the tightrope: Balancing bias to action and planning.

(image)