The O’Reilly Bots Podcast: Automating “psyops” with AI-driven bots.
In this episode of the O’Reilly Bots Podcast, I speak with Tim Hwang, an affiliated researcher at the Oxford Internet Institute, about AI-driven psyops bots and their capacity for social destabilization.
Continue reading Tim Hwang on bots that cause chaos.
Apple DRM, Automatic Forecasting, Conversation API, and API Idempotency
Continue reading Four short links: 24 Feb 2017.
The O’Reilly Data Show Podcast: Parvez Ahammad on minimal supervision, and the importance of explainability, interpretability, and security.
In this episode of the Data Show, I spoke with Parvez Ahammad, who leads the data science and machine learning efforts at Instart Logic. He has applied machine learning in a variety of domains, most recently to computational neuroscience and security. Along the way, he has assembled and managed teams of data scientists and has had to grapple with issues like explainability and interpretability, ethics, an insufficient amount of labeled data, and adversaries who target machine learning models. As more companies deploy machine learning models into products, it’s important to remember there are many other factors that come into play aside from raw performance metrics.
The O'Reilly Radar Podcast: Turning personalization into a two-way conversation.
In this week's Radar Podcast, O’Reilly’s Mac Slocum chats with Sara Watson, a technology critic and writer in residence at Digital Asia Hub. Watson is also a research fellow at the Tow Center for Digital Journalism at Columbia and an affiliate with the Berkman Klein Center for Internet and Society at Harvard. They talk about how to optimize personalized experience for consumers, the role of machine learning in this space, and what will drive the evolution of personalized experiences.
Continue reading Sara Watson on optimizing personalized experiences.
2017-02-23T12:00:00Z
For design thinking to fulfill its promise, you can’t stop when you’re finished with the thinking.

There are two questions I’d like to try to answer in this post: Can design thinking processes be useful to product makers? What’s missing from current design thinking practices?

Can design thinking be practical?

By definition, design thinking refers to the creative thinking strategies designers utilize during the process of designing. However, design thinking is now used by a broader range of people than just designers, and it has become an approach used to resolve problems more broadly than just in the design environment. Design consultancies, like my firm Fresh Tilled Soil, are hired to apply design thinking to a wide range of business and social issues. My concern, which is shared by other product leaders, is that design thinking is becoming another buzzword associated with theory and not practice. A lot of smart people, like design legends Don Norman and Jared Spool, have written about how the value of design thinking has been misappropriated. Others have suggested replacing these thinking processes with doing processes. I highly recommend reading the awesome suggestions by Tim Malbon, Don Norman, and Nikkel Blasse. The theme running through these reactions to general design thinking is the same: Where are the people? To get practical results out of design thinking, it turns out that you need to have people thinking and doing.

The origins of design thinking

In essence, design thinking is an extension of the scientific method. It’s a proven process by which you make observations, create a hypothesis, and then test its validity.

Figure 1. This diagram is a simplified version of the scientific method. A more detailed version of the process can be found here. Courtesy of Richard Banfield.
This process of making observations, testing theories, and then generalizing those theories to reflect reality has been the basis for almost all scientific discovery. It follows, then, that it would work well for design-driven problem solving. The process is essential to discovering solutions, but its goal is to validate a hypothesis; it doesn’t turn those validations into practical solutions. Design thinking borrows from this scientific rigor and provides us with a reliable way to go from unproven concept to validated solution. That’s a great process to produce a useful outcome. What it doesn’t do is provide us with a way to deliver value to our customers. We need something else to help us produce valuable outcomes.

A small slice of the overall value

Using the scientific method as a starting point, design leaders at places like IDEO have described how teams can validate ideas and get closer to solutions. This has been going on for a few decades now, and it’s been essential in revolutionizing how we get better at design. But these design thinking processes are only half the story. Teams relying exclusively on design thinking exercises are missing out on the best opportunities because those exercises deliver only the first part of the promised value. In general, design thinking frameworks can only be effective if you take the outputs and use them to produce outcomes. This is why we like design sprints so much. They don’t just deliver artifacts (outputs); they also deliver answers (outcomes).

For most, design thinking is like writing a recipe. It’s a guide to what needs to be done, but it’s not the outcome. Unless we actually gather and measure the ingredients, cook the meal, taste the final product, serve it to customers, and gather their responses, we haven’t really delivered the complete value. Recipes are great because they provide the instructions for a guaranteed and repeatable outcome.
If you ordered a slice of pie at a restaurant and they passed you the recipe, you’d walk out disappointed. And yet, t[...]
Arduino & Pi Bundle, Encryption Primer, Travel Mode, and API Design
Continue reading Four short links: 23 Feb 2017.
7 common mistakes that keep teams from achieving optimum mobile performance, and what to do about them.
When it comes to optimizing web performance, mobile devices need a lot of attention. There are several misconceptions among web professionals, from designers to developers and operations staff, that contribute to a diminished user experience. I’ll detail these anti-patterns below and offer some tips for overcoming the most common mistakes.
The first big issue is that teams don’t sufficiently prioritize performance from the very early design stages. Performance considerations should be incorporated throughout the whole process by the entire team, not just executed by a small team after the project is done. When you don't think about performance for mobile devices from the start, you end up working toward the wrong goals, like using a client-side-only solution (compared to a server-side or mixed-side solution) or using only client-side responsive web design techniques. Those aren’t really goals; they’re solutions to problems. When you make performance the goal, you can properly analyze the situation, determine what problems you’re facing, and come up with the right solutions to solve them. Stop applying optimizations at the end—because often by the end, it’s too late.
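To make the client-side versus server-side distinction concrete, here is a small, purely illustrative Python sketch (the bundle names, marker list, and function are my own, not from the article) of the kind of decision a server-side or mixed-side solution makes before any bytes go over the wire:

```python
# Hypothetical sketch: choose an asset bundle on the server based on
# a crude User-Agent check, instead of shipping one heavy
# client-side-only responsive bundle to every device.

MOBILE_MARKERS = ("Mobile", "Android", "iPhone")

def pick_bundle(user_agent: str) -> str:
    """Return the name of the asset bundle to serve for this client."""
    if any(marker in user_agent for marker in MOBILE_MARKERS):
        return "bundle.mobile.min.js"   # smaller payload for mobile
    return "bundle.desktop.js"

print(pick_bundle("Mozilla/5.0 (iPhone; CPU iPhone OS 10_2)"))
```

Real device detection is far more involved (client hints, device databases), but the shape of the decision is the point: it happens before the payload is chosen, not after everything has been downloaded.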
Continue reading Mobile web performance anti-patterns.
A look at three alternate natural sweeteners that could be safe for our health.
Many of us have a sweet tooth and crave sweet treats throughout the day, like chocolate, candies, fudge, ice cream—you name it. Some of us consume these a little above the recommended amounts because sometimes sweet food just makes us feel better.
Sucrose is the major carbohydrate consumed in almost every form of processed food, such as confectionery, dairy products, and soft drinks. A food company may instead add honey (40% fructose and 35% glucose), date sugar (80% sucrose), rice syrup (100% glucose), corn syrup (98% glucose), coconut sugar (70%–80% glucose), or evaporated cane sugar (sucrose). But those are all just sugar in different forms.
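To put those figures in perspective, here is a quick back-of-the-envelope calculation in Python (the dictionary and function are mine, purely illustrative, using the percentages quoted above):

```python
# Sugar composition per the figures quoted above (percent by weight).
composition = {
    "honey": {"fructose": 40, "glucose": 35},
    "date sugar": {"sucrose": 80},
    "rice syrup": {"glucose": 100},
    "corn syrup": {"glucose": 98},
}

def sugar_grams(sweetener: str, grams: float) -> float:
    """Total grams of simple sugar in `grams` of the sweetener."""
    percent = sum(composition[sweetener].values())
    return grams * percent / 100

print(sugar_grams("honey", 100))  # 75.0 g of fructose + glucose
```

Whatever the label says, the arithmetic comes out roughly the same: most of what these sweeteners contribute is sugar.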
Continue reading Amazing, super-sweet natural proteins.
2017-02-22T12:00:00Z
Design thinking helps organizations grow, innovate, and improve financial performance.

Design thinking is a process that uses design principles for solving complex problems. It helps organizations identify opportunities, unlock innovation, and improve their businesses. Market leaders as varied as Apple, IBM, Intuit, Kaiser Permanente, and Nike have used design thinking to gain a competitive advantage, applying it to create innovative products and services. Within an organization, design thinking is a tool for unlocking cultural change. It makes companies more flexible, more responsive to their customers, and ultimately, more successful.

What are the elements of design thinking?

Although the name and number of its key principles may vary depending on how you apply them, the basic elements of design thinking always include some variation on the following: researching and defining the problem, ideating, and prototyping and iterating.

Researching and defining the problem: Design thinking draws upon user-centered research techniques, including ethnographic analysis, for understanding customers and users. During the research phase of the design thinking process, the goal is to understand and empathize with the people for whom you’re designing.

Ideating: During the ideation phase of the design thinking process, the goal is to generate a large number of interesting ideas that represent potential solutions. Techniques for ideation may include sketching, brainstorming, and mind mapping to create high-level concepts.

Prototyping and iterating: Making ideas tangible is critical to the design thinking process, as are the iteration cycles required to test and refine those ideas. Design has a bias toward making things, and prototyping is the technique that pushes the making process forward. You’ll create prototypes for demonstrating and validating your basic designs that are based on the best concepts from your ideation exercises.
To properly evaluate a design concept, you’ll want to prototype it in the same environment and context in which it will eventually function. Prototypes can be low or high fidelity, interactive or static. What matters is that the prototypes must convey the experience flow.

Learn more about the elements of design thinking:

Watch Intuit’s Suzanne Pellican talk about Intuit’s journey From design thinking to design driven at the 2016 O’Reilly Design Conference.

Learn when to use design thinking by reading the chapter “Design thinking” from the book Key MBA Models.

Discover the basics of conducting effective user research in the book UX Research and “The Role of Research in Design Thinking,” a chapter from Design Thinking for Entrepreneurs and Small Businesses: Putting the Power of Design to Work.

Learn how to approach the ideation phase in “Design Thinking, Ideation, and Sketching,” a chapter from The UX Book.

Learn to build and test prototypes in Creating Prototypes to Test Product Market Fit and "Minimum Viable Products and Prototypes," a chapter from Lean UX.

What are the benefits of applying design thinking?

By introducing different ways of problem solving and methods for discovering what people truly need, design thinking helps organizations change their cultures to become more customer centric and collaborative. While every company is different, useful metrics for assessing the impact of design thinking include: cultural measures, such as employee satisfaction, internal engagement, and efficiency; financial measures, such as sales and productivity; and product quality measures, such as customer satisfaction.

What is the value of design thinking?

While the financials alone may paint an incomplete picture, it’s notable nonetheless that design-led companies, such as Apple, IBM, and Nike, outperform t[...]
2017-02-22T12:00:00Z
Machines learn what we teach them. If you don't want AI agents to shoot, don't give them guns.

There's been a lot of buzz about some experiments at DeepMind that study whether AI systems will be aggressive or collaborative when playing a game. Players gather virtual apples; they have the ability to temporarily incapacitate an opponent by "shooting" a virtual "laser." And humans are surprised that AIs at times decide that it's to their advantage to shoot their opponent, rather than peacefully gathering apples.

My question is simple: what does this tell us? The answer is also simple: nothing at all.

If you ask an AI to play a game in which firing lasers at your opponents is allowed, it isn't surprising that the AI fires lasers at opponents, whether the opponents are virtual or physical. You wouldn't expect it to a priori develop some version of Asimov's laws and say, "I can't do this." (If the software doesn't allow it to fire the lasers, well, it won't, but that's hardly interesting.) You wouldn't expect an AI to have a crisis of conscience and say, "no, no, I can't do it." Unless it was programmed with some sort of guilt module, which, as far as I know, doesn't exist.

Humans, after all, do the same. They kill in first-person shooters as well as in real life. We have whole divisions of the government devoted to the organized killing of other people. (We ironically call that "keeping the peace.") And while humans have a guilt module, it usually only engages after the fact.

The only interesting question that a game like this might answer is whether AI systems are more, or less, willing to pull the trigger than humans. I would be willing to bet that:

When computers play humans, the computers win. We've certainly had enough experience losing at chess, Go, and poker.

Humans are more likely to go for the guns because, well, it's what we do.
DeepMind's research suggests that a computer would only shoot if it's part of an efficient strategy for winning; it won't shoot because it's a reflex, because it's scared, or because it's fun. It's up to you whether shooting as part of an efficient strategy for winning is an improvement over human behavior, but it's exactly what I would expect. DeepMind didn't beat Lee Sedol at Go by refusing to be aggressive.

And even then, given that we're only talking about a game, I'm not sure that experiment shows us anything at all. I'd expect an AI to be pretty good at playing a first-person shooter, and I don't see any reason for it to derive Asimov's Laws from first principles when it's only exterminating bits. I certainly wouldn't volunteer to participate in a real-life shooter against some scary Boston Dynamics creation, and I hope nobody plans to run that experiment. Likewise, I don't see any reason for an AI to "learn" that there are things in its universe that aren't just bits. We are fascinated by "machine learning," but in the end, the machines only learn what we tell them to learn. I'm skeptical of singularities, but I will agree that we're facing a singularity when a computer can learn, entirely on its own, that some of the bit patterns coming in through its sensors are humans, and that these bit patterns are qualitatively different from the bit patterns of dogs, cats, or rocks.

In the end, we're back where we started. Fear of AI reflects our fear of ourselves. AI mimics human behaviors because we teach it to do so—in this case, by asking it to play a game with human rules. As I've said, if we want better AI, we have to be better people. If we want an AI that can distinguish between humans and bits, we have to teach it what humans are, and how to behave differently in their presence. ("You can shoot the wolf; you can't shoot the human.") And if we don't want AI agents to shoot at all, we have to build [...]
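The underlying point is mechanical, and a toy sketch makes it visible. This Python fragment (my own illustration, not DeepMind's actual setup) shows a greedy agent that "shoots" exactly when the reward structure we define makes shooting pay:

```python
# Toy illustration: a greedy agent picks whichever action has the
# higher expected reward. Whether it "shoots" is entirely a function
# of the rewards we assign, not of any disposition in the agent.

def choose_action(expected_reward: dict) -> str:
    """Return the action with the highest expected reward."""
    return max(expected_reward, key=expected_reward.get)

# Scarce apples: zapping the opponent frees up more apples for us.
scarce = {"gather": 1.0, "shoot": 1.5}
# Plentiful apples: shooting wastes a turn.
plentiful = {"gather": 1.0, "shoot": 0.4}

print(choose_action(scarce))     # shoot
print(choose_action(plentiful))  # gather
```

Change the numbers and the "aggression" disappears; the agent never had any.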
Delayed Feedback, Post-Human World, APL in R, and Demand-Driven Digitized Markets
Continue reading Four short links: 22 Feb 2017.
Locate data quickly and easily with the SQL Server Management Studio diagram tool.
Learn how to visualize tables and data using SQL Server Management Studio’s graphical query tool.
Continue reading How can I create a Transact-SQL query graphically?.
Learn the formatting possibilities for Transact-SQL queries and develop your own code structure.
Continue reading How should I format Transact-SQL queries?.
Fact Checking, Simulated Universes, Radio Hacking, and Fly Brain Hackathon
Continue reading Four short links: 21 Feb 2017.
2017-02-21T12:00:00Z
Using a data-driven analysis to understand IoT technology adoption.

Although we’ve been talking about it for years, it was in 2016 that the adoption of the Internet of Things (IoT)—among consumers and businesses—rose dramatically. At least in part, this was due to factors like the increased numbers of sensors and connected devices, a growing pool of skilled IoT developers, and real-time data and analytics support for IoT systems. As consumers, we now live with and are helped by IoT devices in our homes, our cars, even our toys. In the business world, the impact is even more pervasive.

One of the industries that comes to mind for many of us when we think about the IoT is manufacturing, where connected technologies can improve safety, maintenance, and efficiency. Globally, the impact of the IoT on manufacturing has been evidenced by national initiatives such as Germany’s Industrie 4.0 and China’s Made in China 2025. But that’s old news. In the past year, we’ve seen businesses use IoT technologies in other ways, and with wider applications. Innovations like advances in deep learning that can be applied to the IoT make it likely that the growth of the IoT will continue to accelerate. If last year the IoT “grew up,” 2017 will mark the year that the IoT starts to become essential to modern business.

A newly published report, The Internet of Things Market, by Aman Naimat, presents a current snapshot of the IoT business landscape, describing a data-driven analysis of the companies, industries, and workers using IoT technologies. Making use of web crawlers, natural language processing, and network analysis to process over 300 TB of data, Naimat uncovers some surprising results. For one, the IoT landscape looks different than the market for big data tools in several ways: which companies are adopting IoT technologies, their sizes, and their locations.
Second, we found that the current use cases that incur the most spending on the IoT are not the ones that have been predicted to become the most valuable: health care and smart cities, for example.

The report opens by looking at the number of companies using IoT technologies and the maturity level of their projects.

Figure 1. Companies that have adopted IoT technologies by level of maturity.

What do we mean by IoT project maturity?

Level 1 projects are still under development, meaning products have not been deployed; strategy, design, scope, and infrastructure are all a work in progress.

Level 2 projects have been deployed either within a specific department or for a single use case, such as inventory control.

Level 3 maturity projects represent companies, like Nest or Amazon, that have made the IoT a strategic directive for their businesses and have deployed IoT tools or products to address multiple use cases.

While the numbers listed in the figure may not seem large, they represent actual adoption of IoT technologies and are, in fact, similar to the current level of adoption of big data technologies. Given that the buzz around big data began as early as the 1990s, much earlier than interest in the IoT began, this points to a faster adoption curve for the IoT. Ever-increasing data sets and more robust compute power and scalability will doubtless lead to more IoT breakthroughs—and therefore business investments—in the future. But to know where we’re going, we need to understand where we currently are. The Internet of Things Market report can help you get your bearings.

This post is a collaboration between O’Reilly and Talend. See our statement of editorial independence.

Continue reading All grown up: The IoT market today.[...]
2017-02-21T12:00:00Z
Bots are made possible by recent advances in artificial intelligence, user interface, and communication.

“Bots are the new apps.” —Satya Nadella, CEO of Microsoft

Bots are a new, AI-driven way to interact with users in a variety of environments. As AI improves and users turn away from single-purpose apps and toward messaging interfaces, bots could revolutionize customer service, productivity, and communication. Getting started with bots is as simple as using any of a handful of new bot platforms that aim to make bot creation easy; sophisticated bots require an understanding of natural language processing (NLP) and other areas of artificial intelligence.

Bots use artificial intelligence to converse in human terms, usually through a lightweight messaging interface like Slack or Facebook Messenger, or a voice interface like Amazon Echo or Google Assistant. Since late 2015, bots have been the subject of immense excitement in the belief that they might replace mobile apps for many tasks and provide a flexible and natural interface for sophisticated AI technology. Bots are promising for a number of use cases currently served by mobile apps, as well as many that have never been well served by mobile apps:

Customer relationship management: Consumer-facing bots can assist customers with difficult transactions, make recommendations, and gather data. For instance, an airline’s bot could answer questions about fees, rebook flights, and suggest add-ons like hotel and car reservations. Matched to a sophisticated data-mining back end, the bot could build up customer profiles that the airline can use to market vacations, travel deals, and additional services.

Productivity: Bots could replace many slow tasks with a simple natural-language interface.
For instance, a bot connected to an electronic medical record system could retrieve information faster than a conventional lookup; just ask, “what was the patient’s blood pressure during his January visit?” Bots are already able to schedule meetings over email, retrieve reports from analytics systems, and facilitate communication among teams.

Publishing and entertainment: Bots provide an engaging, highly dynamic interface for many kinds of content. The New York Times and Quartz, for instance, both offer bots that display articles in a conversational format. Other publishers have experimented with conversational novels, video, and magazine content sent directly to users through popular messaging platforms. Bots are a particularly promising way to reach younger users, many of whom spend enormous amounts of time in messaging apps like Kik, and could be a significant avenue for influencer marketing.

Conversational bots are a response to several larger trends that are reshaping communication and mobile computing:

The mobile app economy is stagnating; it’s getting harder to persuade consumers to download and use new apps. As The Economist pointed out: “The 20 most successful developers grab nearly half of all revenues on Apple’s App Store. Building apps and promoting them is getting more costly. Meanwhile, users’ enthusiasm is waning, as they find downloading apps and navigating between them a hassle. A quarter of all downloaded apps are abandoned after a single use.”

Consumers like conversational interfaces, and companies want to make themselves available on the platforms that their customers enjoy using. Facebook Messenger is the most popular Android app; it and WhatsApp (another Facebook messaging app) each have more than one billion active users.

Artificial intelligence [...]
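What these use cases share is a thin conversational layer over an existing system. As a purely illustrative sketch (not any vendor's API, and far simpler than real NLP), a keyword-based intent matcher for the airline example above might look like this in Python:

```python
# Minimal illustrative bot core: map keywords in a user utterance to
# an intent. A production bot would call an NLP service; the keyword
# table here is a hypothetical stand-in.

INTENTS = {
    "rebook": ("rebook", "change my flight"),
    "baggage_fees": ("fee", "baggage"),
}

def classify(utterance: str) -> str:
    """Return the first matching intent, or a fallback."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "fallback"

print(classify("What's the fee for a second bag?"))      # baggage_fees
print(classify("Please change my flight to Tuesday"))    # rebook
```

Everything hard lives behind the classifier: the dispatch to the booking system, the customer profile, the back end. The conversational surface itself can be this small.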
2017-02-21T11:00:00Z
The better prepared you are to utilize all the data in your data lake, the more likely you are to be successful.

Big data tools and technologies started out by meeting the needs of the analytics community, but they have been evolving ever since. These tools and technologies were born out of the necessity to support large-scale analytics that wouldn’t break the bank. They have since morphed into a set of technologies to support the live operational aspects of a business. From Hadoop, Pig, and Hive, to HBase and other NoSQL point solutions, on to Spark, Flink, Drill, and Kafka—a plethora of technologies has been built, each handling individual aspects of the three V’s of big data (volume, variety, and velocity).

Let’s take a moment to level set. These technologies started out by displacing workloads previously reserved for the traditional data warehouse. Those data warehouses are now actively being augmented or displaced by data lakes. Fundamentally, this is driven by the cost savings of handling large volumes of data. Does this mean that the traditional RDBMS or data warehouse is obsolete? Of course not! At this point in time, this is still primarily a story of co-existence.

Putting big data aside for just a moment, let’s peer into the applications that create data. The applications and services that have been built are still mostly monolithic in nature. This has been due in part to the cost of scaling the messaging layer to build properly decoupled components. The data these applications are generating has been growing by magnitudes in recent years, and making services smaller and more micro in nature means more data moving and more data being stored. Businesses have begun the move to microservices, so the better prepared you are to leverage all of this data with your data lake, the more likely you are to be successful. In the meantime, companies are putting more and more data in their data lakes.
The pattern is familiar: system “A” generates data, process “B” transforms that data, process “C” moves the data to store it in data lake “D,” where it is analyzed by analytics application “E.” Just because the only thing that really changed between the data warehouse and the data lake is the final storage location doesn’t mean we shouldn’t think further into the future.

Time to evolve beyond the data lake

Generally speaking, moving from the data warehouse to the data lake enables new ways to look at the business opportunities available. The processes don’t change a lot by just swapping in the data lake for the data warehouse. It still takes a substantial amount of time to get the data moved from the source to the destination, and the transformations are likely to still be complex. While this technology change may help move the business in the right direction, it likely doesn’t take it far enough.

Companies use software to drive their business. The faster software can be written, tested, and deployed to production, the more nimble and agile a company can be in responding to its business needs. The same goes for the speed at which data can be stored at the final destination and made available to all downstream processes. Instead of just swapping technologies, we should think bigger. Let’s consider for a moment that we built our business application and deployed it directly on top of our data lake. I’m not talking about just some analytics application. I’m talking about applications that require data persistence like a database—maybe key-value, wide-column oriented, or even a document [...]
Learn how to pass data to a command without violating the command pattern in C#.
Continue reading How can I pass parameters to a command in C#?.
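The question is posed for C#, but the idea translates directly to other languages. Here is a sketch in Python (the class and names are my own illustration) of the usual answer: bind the parameters when the command object is constructed, so the invoker can still execute it without arguments:

```python
# Command pattern sketch (shown in Python rather than C#): the
# parameters are captured when the command object is created, so
# the invoker can call execute() with no arguments and the
# pattern's uniform interface is preserved.

class AddItemCommand:
    def __init__(self, cart: list, item: str):
        self._cart = cart    # receiver
        self._item = item    # parameter bound at construction time

    def execute(self) -> None:
        self._cart.append(self._item)

cart = []
command = AddItemCommand(cart, "book")  # parameterized here...
command.execute()                       # ...invoked parameterless
print(cart)  # ['book']
```

The same shape works in C# with a constructor and a parameterless `Execute()` method; the point is that the invoker never needs to know what data the command carries.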
Learn how to create thread-safe instances with the singleton pattern in C#.
Continue reading How do I use the singleton pattern in C#?.
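Again the question targets C#, but the same double-checked-locking shape carries over. A Python sketch (illustrative class name, standard `threading` lock) looks like:

```python
import threading

# Thread-safe singleton sketch (in Python rather than C#): a lock
# guards the one-time construction, so concurrent callers all
# receive the same instance.

class Config:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls) -> "Config":
        if cls._instance is None:          # fast path, no lock taken
            with cls._lock:                # slow path, serialized
                if cls._instance is None:  # re-check inside the lock
                    cls._instance = cls()
        return cls._instance

a = Config.instance()
b = Config.instance()
print(a is b)  # True
```

The second `is None` check inside the lock is the important part: without it, two threads that pass the first check could each build an instance.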
Learn how to correctly implement the repository pattern in C#.
Continue reading How do I use the repository pattern in C#?.
Car Security, Civ Math, Free Mindstorms, and Chinese AI Research
Continue reading Four short links: 20 Feb 2017.
Your company is probably already doing AI and machine learning, but it needs a road map.
Continue reading How to drive shareholder value with artificial intelligence.
Robot Governance, Emotional Labour, Predicting Personality, and Music History
Continue reading Four short links: 17 February 2017.
The O’Reilly Bots Podcast: Slack’s head of developer relations talks about what bots can bring to Slack channels.
In this episode of the O’Reilly Bots Podcast, Pete Skomoroch and I speak with Amir Shevat, head of developer relations at Slack and the author of the forthcoming O’Reilly book Designing Bots: Creating Conversational Experiences.
Continue reading Amir Shevat on workplace communication.
The O’Reilly Design Podcast: The guiding light of strategy, designing Allbirds, and what makes the magic of a brand identity.
In this week’s Design Podcast, I sit down with Simon Endres, creative director and partner at Red Antler. We talk about working from a single idea, how Red Antler is helping transform product categories, and the importance of having a point of view.
Continue reading Simon Endres on designing in an arms race of high-tech materials.
Continue reading Four short links: 16 February 2017.
Use Python's magic methods to amplify your code.
Continue reading How Python syntax works beneath the surface.
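As a taste of what that piece covers, here is a small sequence-like class (my own example, not from the article) whose "magic" dunder methods let built-in syntax such as `len()`, indexing, and `+` work on a user-defined type:

```python
# A small vector-like class: implementing the "magic" (dunder)
# methods lets Python's built-in syntax operate on our own type.

class Vector:
    def __init__(self, *components):
        self._components = list(components)

    def __len__(self):             # enables len(v)
        return len(self._components)

    def __getitem__(self, index):  # enables v[i] and iteration
        return self._components[index]

    def __add__(self, other):      # enables v + w
        return Vector(*(a + b for a, b in zip(self, other)))

    def __repr__(self):
        return f"Vector{tuple(self._components)}"

v = Vector(1, 2) + Vector(3, 4)
print(v, len(v), v[0])  # Vector(4, 6) 2 4
```

Note that `zip(self, other)` works without any explicit `__iter__`: Python falls back to repeated `__getitem__` calls, which is exactly the kind of beneath-the-surface behavior the article's title refers to.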
Docker Data, Smart Broadcasting, Open Source, and Cellphone Spy Tools
Continue reading Four short links: 15 Feb 2017.
The O’Reilly Security Podcast: The problem with perimeter security, rethinking trust in a networked world, and automation as an enabler.
In this episode, I talk with Doug Barth, site reliability engineer at Stripe, and Evan Gilman, Doug’s former colleague from PagerDuty who is now working independently on Zero Trust networking. They are also co-authoring a book for O’Reilly on Zero Trust networks. They discuss the problems with traditional perimeter security models, rethinking trust in a networked world, and automation as an enabler.
Continue reading Doug Barth and Evan Gilman on Zero Trust networks.
How to map out a plan for finding value in data.
Rapping Neural Network, H1B Research, Quantifying Controversy, Social Media Research Tools
Continue reading Four short links: 14 Feb 2017.
David Beyer talks about AI adoption challenges, who stands to benefit most from the technology, and what's missing from the conversation.
Continue reading The dirty secret of machine learning.
Urban Attractors, Millimetre-Scale Computing, Ship Small Code, and C++ Big Data
Continue reading Four short links: 13 Feb 2017.
Microsoft Graph Engine, Data Exploration, Godel Escher Bach, and Docker Secrets
Continue reading Four short links: 10 Feb 2017.
Alex Rice on the importance of inviting hackers to find vulnerabilities in your system, and how to measure the results of incorporating their feedback.
Continue reading Hacker quantified security.
The O'Reilly Radar Podcast: The value humans bring to AI, guaranteed job programs, and the lack of AI productivity.
This week, I sit down with Tom Davenport. Davenport is a professor of Information Technology and Management at Babson College, the co-founder of the International Institute for Analytics, a fellow at the MIT Center for Digital Business, and a senior advisor for Deloitte Analytics. He also pioneered the concept of “competing on analytics.” We talk about how his ideas have evolved since writing the seminal work on that topic, Competing on Analytics: The New Science of Winning; his new book Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, which looks at how AI is impacting businesses; and we talk more broadly about how AI is impacting society and what we need to do to keep ourselves on a utopian path.
Continue reading Tom Davenport on mitigating AI's impact on jobs and business.
2017-02-09T12:00:00Z
5 questions for Aarron Walter: Shaping products, growing teams, and managing through change.

I recently asked Aarron Walter, VP of design education at InVision and author of Designing for Emotion, to discuss what he has learned through his years of building and managing design teams. At the O’Reilly Design Conference, Aarron will be presenting a session, Hard-learned lessons in leading design.

Your talk at the upcoming O'Reilly Design Conference is titled Hard-learned lessons in leading design. Tell me what attendees should expect.

I had the unique opportunity of watching a company grow from just a handful of people to more than 550 over the course of eight years at MailChimp. When I started, we had a few thousand customers, but when I left in February of 2016, there were more than 10 million worldwide. We saw tremendous growth, and I learned so much in my time there. In my talk, I'll be sharing the most salient lessons I learned along the way—how to shape a product, grow a team, how a company changes and how it changes people's careers, and a lot more.

What are some of the challenges that come along with building and leading a design team in a strong growth period?

As a company grows, the people who run it have to grow, too. There's a steep learning curve. When you're a small team, it's easy to make decisions and get things done. But when a company grows, clear processes are needed, more people need to be brought into the planning process, and rapport has to be developed between teams and key individuals. The trick is you never really know what stage the company is in, so there's always uncertainty about whether you're doing the right thing. Everyone has to adapt and change with each new stage, and that can be hard for some people.

What are some of the more memorable lessons you learned along the way?

Early on as the director of UX, I thought my most important job was designing a great product.
That was true but only until we needed to start building teams. Then my most important job was hiring great people. That remained my top priority for years to come, and I see it as my lasting legacy within the company. There are so many smart, talented people at MailChimp. I'm proud to have played a part in hiring and mentoring a number of people who've gone on to lead their own teams. In the early years of the product, we were focused on the future, toward new features and new ideas. But as the product and company matured, we had to master the art of refinement. Feature production is a treadmill: there will always be something else you can build. But if those features are half-baked or unrefined, you can end up with a robust product that is too complicated or too broken to use. Phil Libin said it best, "The best product companies in the world have figured out how to make constant quality improvements part of their essential DNA." You will be speaking about the importance of building a strong design practice. Can you explain what this looks like? A strong design practice has t[...]
The O’Reilly Data Show Podcast: Jason Dai on BigDL, a library for deep learning on existing data frameworks.
In this episode of the Data Show, I spoke with Jason Dai, CTO of big data technologies at Intel, and co-chair of Strata + Hadoop World Beijing. Dai and his team are prolific and longstanding contributors to the Apache Spark project. Their early contributions to Spark tended to be on the systems side and included a Netty-based shuffle, a fair scheduler, and the “yarn-client” mode. Recently, they have been contributing tools for advanced analytics. In partnership with major cloud providers in China, they’ve written implementations of algorithmic building blocks and machine learning models that let Apache Spark users scale to extremely high-dimensional models and large data sets. They achieve scalability by taking advantage of things like data sparsity and Intel’s MKL software. Along the way, they’ve gained valuable experience and insight into how companies deploy machine learning models in real-world applications.
Continue reading Deep learning for Apache Spark.(image)
In-Memory Malware, Machine Ethics, Open Source Maintainer's Dashboard, and Cards Against Silicon Valley
Continue reading Four short links: 9 February 2017.(image)
Becoming a Troll, Magic Paper, HTTPS Interception, and Deep NLP
Continue reading Four short links: 8 February 2017.(image)
Learn how the reduce algorithm, new in C++17, allows for parallelization.
Continue reading What is the new reduce algorithm in C++17?.(image)
Game Theory, Algorithms and Robotics, High School Not Enough, and RethinkDB Rises
Continue reading Four short links: 7 February 2017.(image)
Understanding the FTC’s role in policing analytics.
Continue reading Staying out of trouble with big data.(image)
Learn how to extract data from a structure correctly and efficiently using Python's slice notation.
In this tutorial, we will review the Python slice notation, and you will learn how to effectively use it. Slicing is used to retrieve a subset of values.
The basic slicing technique is to define the starting point, the stopping point, and the step size, also known as the stride.
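The start, stop, and step parameters described above can be sketched in a few lines (the `data` list here is just an illustrative example):

```python
# Slice notation: sequence[start:stop:step]
data = list(range(10))   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

print(data[2:7])     # start=2, stop=7 (exclusive)  -> [2, 3, 4, 5, 6]
print(data[2:8:2])   # step (stride) of 2           -> [2, 4, 6]
print(data[-3:])     # negative indices count back  -> [7, 8, 9]
print(data[::-1])    # a negative step reverses     -> [9, 8, 7, ..., 0]
```

Note that the stopping point is exclusive, and that any of the three parameters may be omitted to use its default.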
Continue reading How do I use the slice notation in Python?.(image)
NPC AI, Deep Learning Math Proofs, Amazon Antitrust, and Code is Law
Continue reading Four short links: 6 February 2017.(image)
Learn how to set up your configuration file to indicate the types of packages you want to install by using the “yum” command.
Continue reading How do you customize packages in a Kickstart installation?.(image)
Stream Alerting, Probabilistic Cognition, Migrations at Scale, and Interactive Machine Learning
Continue reading Four short links: 3 February 2017.(image)
Sara M. Watson from Digital Asia Hub discusses the state of personalization and how it can become more useful for consumers.
Continue reading Personalization's big question: Why am I seeing this?.(image)
Learn how to create and modify a Kickstart configuration file using the anaconda-ks.cfg file.
Continue reading How do you create a Kickstart file?.(image)
Learn how to handle array comparisons using the set_intersection algorithm in C++.
Continue reading How do I use the set_intersection algorithm in C++?.(image)
The O’Reilly Hardware Podcast: Powering connected devices with low-power networks.
In this episode of the O’Reilly Hardware Podcast, Brian Jepson and I speak with Mike Vladimer, co-founder of the Orange IoT Studio at Orange Silicon Valley. Vladimer discusses how Internet of Things devices could benefit from connectivity options other than those provided by well-known technologies (including cellular, WiFi, and Bluetooth), and explains the LoRa wireless protocol, which supports long-range and lower-power applications.
Continue reading Mike Vladimer on IoT connectivity.(image)
How to use the wordcount example as a starting point (and you thought you’d escape the wordcount example).
While Spark ML pipelines have a wide variety of algorithms, you may find yourself wanting additional functionality without having to leave the pipeline model. In Spark MLlib, this isn't much of a problem—you can manually implement your algorithm with RDD transformations and keep going from there. For Spark ML pipelines, the same approach can work, but we lose some of the nicely integrated properties of the pipeline, including the ability to automatically run meta-algorithms, such as cross-validation parameter search. In this article, you will learn how to extend the Spark ML pipeline model using the standard wordcount example as a starting point (one can never really escape the intro to big data wordcount example).
Continue reading Extend Spark ML for your own model/transformer types.(image)
The O’Reilly Design Podcast: Building bridges across disciplines, universal vs. inclusive design, and what playground design can teach us about inclusion.
In this week’s Design Podcast, I sit down with Kat Holmes, principal design director, inclusive design at Microsoft. We talk about what she looks for in designers, working on the right problems to solve, and why both inclusive and universal design are important but not the same.
Physical Authentication, Crappy Robots, Immigration Game, and NN Flashcards
Continue reading Four short links: 2 February 2017.(image)
Learn how to use Kickstart to produce identical configurations across multiple Red Hat Enterprise Linux installations.
Continue reading What is a Kickstart installation and why would you use it?.(image)
Learn how to write shorter, better performing, and easier to read code using standard algorithms with object methods in C++.
Continue reading How do you use standard algorithms with object methods in C++?.(image)
Unhappy Developers, Incident Report, Compliance as Code, AI Ethics
Continue reading Four short links: 1 February 2017.(image)