
All Things Distributed





Updated: 2016-06-28T09:22:48-04:00

 



New Ways to Discover and Use Alexa Skills

2016-06-27T08:00:00-04:00

Introducing New Features That Make It Easier for Customers to Discover and Use Your Alexa Skills

Alexa, Amazon’s cloud-based voice service, powers voice experiences on millions of devices, including Amazon Echo and Echo Dot, Amazon Tap, Amazon Fire TV devices, and devices like Triby that use the Alexa Voice Service. One year ago, Amazon opened up Alexa to developers, enabling you to build Alexa skills with the Alexa Skills Kit and integrate Alexa into your own products with the Alexa Voice Service. Today, tens of thousands of developers are building skills for Alexa, and there are over 1,400 skills for Alexa – including Lyft and Honeywell, which were added today.

A New Experience for Discovering Skills

Today, we announced new ways for customers to discover and use the Alexa skills that developers have built, including a new voice-enablement feature and a completely redesigned Alexa app. Customers can now quickly search, discover, and use skills. Starting today, customers can browse Alexa skills by categories such as “Smart Home” and “Lifestyle” in the Alexa app, apply additional search filters, and access their previously enabled skills via the “Your Skills” section. Also available today, Alexa customers can use their voice to enable your skills: simply say “Alexa, enable NBC News” or “Alexa, enable 7 Minute Workout” and access them instantly. Customers can also find your skills with Amazon’s Skill Finder. To use Skill Finder, simply enable it via voice or in the Alexa app and say "Alexa, ask Skill Finder for the top skills."

One-Year Anniversary: ASK, AVS, and The Alexa Fund

In addition to the new Alexa skill features, June 25th marked the one-year anniversary of our developer services. Last June, we introduced the Alexa Skills Kit (ASK), the Alexa Voice Service (AVS), and the Alexa Fund to help enable anyone to build the experience they wanted for Alexa. Some fun facts about the Alexa Skills Kit, Alexa Voice Service, and Alexa Fund:

  • There are now over 1,400 Alexa skills, and the catalog has grown by 50% in just over one month.
  • Customers have made over 3 million requests using the top 10 most popular Alexa skills.
  • Since January 2016, the selection of Alexa smart home API skills has grown by more than 5x.
  • There are now over 10,000 registered developers using the Alexa Voice Service to integrate Alexa into their products.
  • There are tens of thousands of developers currently working on Alexa projects.
  • The Alexa Fund has invested in 16 startups, with a focus on smart home and wearable products to date. Over the next year, The Alexa Fund will be expanding investments into startups that focus on robotics, developer tools, healthcare, accessibility, and more.
  • Some of the most popular Alexa skills are Jeopardy!, Daily Affirmation, Magic 8 Ball, Fitbit, and The Bartender.

Build a Skill Today - Special Offers

Our skill templates and step-by-step guides are a valuable way to quickly learn the end-to-end process for building and publishing an Alexa skill. You can get started quickly using the flash cards skill template, fact skill template, trivia skill template, or how-to skill template. Plus, if you publish a skill, you’ll receive an Alexa dev t-shirt. Quantities are limited. See Terms and Conditions.
Additional Resources

For more information on getting started with developing for Alexa, check out the following resources:

  • Alexa Developer Platform
  • Alexa Skills Kit (ASK)
  • Alexa Voice Service (AVS)
  • The Alexa Fund
  • ASK Developer Forums
  • Voice Design Education
  • Voice Design 101 (On Demand Webinar)
  • Intro to ASK (On Demand Webinar)
  • Alexa on Udemy
  • Weekly Office Hours [...]



Expanding the Cloud: Introducing the AWS Asia Pacific (Mumbai) Region

2016-06-27T01:00:00-04:00

In June 2015, Amazon Web Services announced that it would launch a new AWS infrastructure region in India. Today, I’m happy to announce that the Asia Pacific (Mumbai) Region is generally available for use by customers worldwide.

The opportunity to revolutionize

A region in India has been highly sought after by companies around the world that want to participate in one of the most significant economic opportunities in the world – India, a rising economy that holds tremendous promise for growth and a thriving technology hub with a rich ecosystem of technology talent. Rapid economic growth in India is creating business opportunities in many areas: distributed ledger technology with blockchains that could drive efficiencies in the real estate market; FinTech innovations such as P2P mobile apps that have the power to change the socioeconomic lives of people through financial inclusion; applying the sharing economy from cabs to other modes of transportation such as two-wheelers and tractors; telemedicine in the remote reaches of the nation with smartphone apps; and enabling the agricultural sector with on-demand diagnostics to improve farm yield, to name just a few.

The platform to revolutionize

Market innovators and change agents need a comprehensive infrastructure platform that can reliably scale on demand. Here are the benefits of a comprehensive platform, with customer examples:

A connected platform to sense the business environment: Examples of continuous sensing are found in the managed cloud platform built by Rachio on AWS IoT to enable the secure interaction of its connected devices with cloud applications and other devices. In addition, Change Healthcare (previously known as Emdeon) uses Amazon SNS to handle millions of confidential client transactions daily to process claims and pharmacy requests, serving over 340K physicians and 60K pharmacies in full compliance with healthcare industry regulations.

Seamless ingestion of large volumes of sensed data: AdiMap uses Amazon Kinesis to process real-time streaming online ad data and job feeds, and processes them for storage in petabyte-scale Amazon Redshift warehouses to glean business insights for jobs, ad spend, or financials for mobile apps.

Advanced problem solving that connects big data with machine learning: BuildFax illustrates a practical use case, using Amazon Machine Learning to provide roof-age and job-cost estimations for insurers and builders, with property-specific values that don’t need to rely on broad, zip code-level estimates.

At-scale computing and visual analysis: DNAnexus deploys its customers’ genomic pipelines on Amazon EC2 for highly complex and sensitive DNA research activities. On a more playful note, for those inclined to look at our serverless compute architecture, I would love to reacquaint you with Dubsmash’s innovative use of AWS Lambda.

A workflow engine to drive business decisions: NASA’s Jet Propulsion Laboratory (JPL) used Amazon SWF as an integral part of several missions, including the MER and the Carbon in the Arctic Reservoir Vulnerability Experiment (CARVE). NASA/JPL engineers used Amazon SWF and integrated the service with the Polyphony pipelines responsible for data processing of Mars images for tactical operations; expressing this with SWF requires a few simple lines of Java code together with AWS Flow Framework annotations.

Let’s build groundbreaking innovations together

I hope these short sketches illustrate our optimism in what the future holds. We sincerely believe that these capabilities enable unique solutions that are not only affordable but also scale reliably, driving meaningful benefits to end users and efficiency into business operations. For more details, see the case studies at All AWS Customer Stories. We are excited to offer a complete portfolio of services, from our foundational service stack for compute, storage, and networking to our more advanced solutions and applications. We look forward [...]



Serverless Reference Architectures with AWS Lambda

2016-06-10T10:00:00-04:00

Building your applications with only managed components has become very popular, and AWS Lambda plays a crucial role in that. I see a tremendous interest in examples of how to build such applications, and articles such as "The Serverless Start-Up - Down With Servers!" about teletext.io are read eagerly around the globe. If you are looking for more examples, there are the Lambda Serverless Reference Architectures that can serve as the blueprint for building your own serverless applications.

Mobile Backend Serverless Reference Architecture

The Mobile Backend reference architecture demonstrates how to use AWS Lambda along with other services to build a serverless backend for a mobile application. The specific example application provided in this repository enables users to upload photos and notes using Amazon Simple Storage Service (Amazon S3) and Amazon API Gateway, respectively. The notes are stored in Amazon DynamoDB, and are processed asynchronously using DynamoDB Streams and a Lambda function to add them to an Amazon CloudSearch domain. In addition to the source code for the Lambda functions, this repository also contains a prototype iOS application that provides examples of how to use the AWS Mobile SDK for iOS to interface with the backend resources defined in the architecture.

Real-time File Processing Serverless Reference Architecture

The Real-time File Processing reference architecture is a general-purpose, event-driven, parallel data processing architecture that uses AWS Lambda. This architecture is ideal for workloads that need more than one data derivative of an object. This simple architecture is described in the "Fanout S3 Event Notifications to Multiple Endpoints" blog post on the AWS Compute Blog. This sample application demonstrates a Markdown conversion application where Lambda is used to convert Markdown files to HTML and plain text.

Web Applications Serverless Reference Architecture

By combining AWS Lambda with other AWS services, developers can build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers, with zero administrative effort required for scalability, backups, or multi-data center redundancy. This example looks at using AWS Lambda and Amazon API Gateway to build a dynamic voting application, which receives votes via SMS, aggregates the totals into Amazon DynamoDB, and uses Amazon Simple Storage Service (Amazon S3) to display the results in real time. The architecture can be created with an AWS CloudFormation template. The template does the following:

  • Creates an S3 bucket to hold your web app
  • Creates a DynamoDB table named VoteApp to store votes
  • Creates a DynamoDB table named VoteAppAggregates to aggregate vote totals
  • Creates a Lambda function that allows your application to receive votes (a hypothetical sketch of such a function appears at the end of this post)
  • Creates a Lambda function that allows your application to aggregate votes
  • Creates an AWS Identity and Access Management (IAM) role and policy to allow Lambda functions to write to Amazon CloudWatch Logs and write and query the DynamoDB tables

IoT Backend Serverless Reference Architecture

The Internet of Things (IoT) Backend reference architecture demonstrates how to use AWS Lambda in conjunction with Amazon Kinesis, Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), and Amazon CloudWatch to build a serverless system for ingesting and processing sensor data. By leveraging these services, you can build cost-efficient applications that can meet the massive scale required for processing the data generated by huge deployments of connected devices. This repository contains sample code for all the Lambda functions depicted in this diagram as well as an AWS CloudFormation template for creating the functions and related resources. There is also a simple webpage that you can run locally to publish sample events and query the data from DynamoDB.

Real-time Stream Processing Serverless Reference Architecture

You can use AWS Lambda and Amazon Kin[...]
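Returning to the voting example, the vote-receiving function is small enough to sketch. The following Node.js handler is a hypothetical illustration rather than code from the reference architecture's repository; the event shape, the VoteId key, and the error handling are assumptions:

    // Hypothetical sketch of the vote-receiving Lambda function for the
    // voting app described above. Assumes an event like { "vote": "red" }
    // and that the VoteApp table uses a "VoteId" hash key.
    var AWS = require('aws-sdk');
    var dynamodb = new AWS.DynamoDB.DocumentClient();

    exports.handler = function (event, context) {
      var params = {
        TableName: 'VoteApp',
        Item: {
          VoteId: context.awsRequestId,  // unique per invocation
          VotedFor: event.vote,          // e.g., the candidate texted in via SMS
          VoteTime: Date.now()
        }
      };
      dynamodb.put(params, function (err) {
        if (err) {
          context.fail(err); // surface the failure to Lambda for retry/logging
        } else {
          context.succeed({ result: 'vote recorded' });
        }
      });
    };

The aggregation function would then tally these items into the VoteAppAggregates table, for example by consuming a DynamoDB stream on the votes table.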



10 Lessons from 10 Years of Amazon Web Services

2016-03-11T09:00:00-05:00

The epoch of AWS is the launch of Amazon S3 on March 14, 2006, now almost 10 years ago. Looking back over the past 10 years, there are hundreds of lessons that we’ve learned about building and operating services that need to be secure, reliable, and scalable, with predictable performance at the lowest possible cost. Given that AWS is a pioneer in building and operating these services worldwide, these lessons have been of crucial importance to our business. As we’ve said many times before, “There is no compression algorithm for experience.” With over a million active customers per month, who in turn may serve hundreds of millions of their own customers, there is no lack of opportunities to gain more experience and perhaps no better environment for continuous improvement in the way we serve our customers. I have picked a few of these lessons to share with you in the hope that they may be of use for you as well.

1. Build evolvable systems

Almost from day one, we knew that the software we were building would not be the software that would be running a year later. The expectation was that with each order of magnitude or two of growth, we would need to revisit and revise the architecture to make sure we could address the issues of scale. But we couldn’t adopt the old-style approach of upgrading systems through a maintenance outage, as many businesses around the world were relying on our platform for 24/7 availability. We needed an architecture that allowed us to introduce new software components without taking the service down. Marvin Theimer, Amazon Distinguished Engineer, once jokingly said that the evolution of Amazon S3 could best be described as starting off as a single-engine Cessna plane, but over time the plane was upgraded to a 737, then a group of 747s, all the way to the large fleet of Airbus 380s that it is now. All the while, we were refueling in midair and moving customers from plane to plane without them even realizing it.

2. Expect the unexpected

Failures are a given, and everything will eventually fail over time: from routers to hard disks, from operating systems to memory units corrupting TCP packets, from transient errors to permanent failures. This is a given, whether you are using the highest quality hardware or the lowest cost components. It becomes an even more important lesson at scale: for example, as S3 processes trillions and trillions of storage transactions, anything that has even the slightest probability of error becomes a reality. Many of those failure scenarios can be anticipated beforehand, but many more are unknown at design and build time. We needed to build systems that embrace failure as a natural occurrence, even if we did not know what the failure might be. Systems need to keep running even if the “house is on fire.” It is important to be able to manage pieces that are impacted without the need to take the overall system down. We’ve developed the fundamental skill of managing the “blast radius” of a failure occurrence such that the overall health of the system can be maintained.

3. Primitives not frameworks

Pretty quickly, we started to realize that the way customers would like to use our services was a work in progress. When customers left the constraining, old world of IT hardware and datacenters behind, they started to develop systems with new and interesting usage patterns that no one had ever seen before. As such, we needed to be ultra-agile to make sure we were catering to our customers’ needs. One of the most important mechanisms we provided was to offer customers a collection of primitives and tools, where they could pick and choose their preferred way to engage with the AWS cloud, instead of only providing one framework that they are forced to use, which includes everything and the kitchen sink. This approach has enabled our customers to become so successful that even later generations of AWS services make use of exactly the same primitive services our customers h[...]



Expanding the Cloud: Introducing the AWS Asia Pacific (Seoul) Region

2016-01-06T18:00:00-05:00

In November, Amazon Web Services announced that it would launch a new AWS infrastructure region in South Korea. Today, I’m happy to announce that the Asia Pacific (Seoul) Region is now generally available for use by customers worldwide.

A region in South Korea has been highly requested by companies around the world who want to take full advantage of Korea’s world-leading Internet connectivity and provide their customers with quick, low-latency access to websites, mobile applications, games, SaaS applications, and more. We’ve also been hearing many requests from Korean companies, including large enterprises like Samsung and Mirae Asset. For example, Samsung Electronic Printing used AWS to deploy its Printing Apps Center in a way that didn’t require them to invest up-front capital and kept total costs quite low. Mirae Asset Global Investments improved its web service environment and reduced annual management costs by 50% by consolidating the management of all web services, including servers, network, database, and security. We believe that with the launch of the Seoul Region, AWS will enable many more enterprise customers in Korea to reduce the cost of their IT operations and innovate faster in critical new areas such as big data analysis, Internet of Things, and more.

Many of these enterprises are assisted by our extensive partner ecosystem in Korea. The rapidly expanding AWS Partner Network (APN) in Korea includes independent software vendors (ISVs) and systems integrators (SIs) who are building innovative solutions and services around the AWS cloud. ISV partners such as AhnLab, IGAWorks, Hancom, TMAXSoft, and Dreamline are providing a variety of software, security, and connectivity solutions that can be used in conjunction with AWS. SIs such as Vsystems, Bespin Global, Megazone, and GS Neotek are helping enterprises to migrate to AWS, deploy mission-critical applications on AWS, or are providing a full range of monitoring, automation, and management services for customers’ AWS environments. More details on these partners and solutions can be found at https://aws.amazon.com/partners/.

The Seoul Region also gives Korean gaming companies the freedom to successfully enable global services. For example, Nexon is Korea’s premier game company, operating 150 games in 150 countries, including major PC games such as FIFA Online 3, MapleStory 2, and Sudden Attack. Nexon uses AWS global infrastructure to manage its IT infrastructure more effectively, and they are now using AWS for their domestic workloads as well. With the Seoul Region now available, Nexon plans to use AWS not just for mobile games but also for latency-sensitive PC online games. All of the top 10 gaming companies in Korea use AWS, and we look forward to continuing to support their global growth and continued success.

Finally, the Seoul Region brings the benefits of the cloud much closer to home for Korean startups. In 2015, we expanded the AWS Activate program in Korea to provide startups with the resources needed to get started on AWS, such as access to guidance and 1:1 time with AWS experts, as well as web-based training, self-paced labs, customer support, third-party offers, and AWS promotional credits.
Through local partnerships with leading venture capitalists (VCs), accelerators, and incubators such as SparkLabs, Primer, Mashup Angels, BonAngels, TheVentures, and Futureplay, 250+ startups in Korea participated in the AWS Activate program this year, and we are excited to see what they are able to achieve with an AWS region in Korea. You can learn more about our growing global infrastructure footprint at http://aws.amazon.com/about-aws/globalinfrastructure. [...]



London Calling! An AWS Region is coming to the UK!

2015-11-05T23:00:00-05:00

Yesterday, AWS evangelist Jeff Barr wrote that AWS will be opening a region in South Korea in early 2016 that will be our 5th region in Asia Pacific. Customers can choose between 11 regions around the world today and, in addition to Korea, we are adding regions in India, a second region in China, and Ohio in 2016. Today, I am excited to add the United Kingdom to that list! The AWS UK region will be our third in the European Union (EU), and we're shooting to have it ready by the end of 2016 (or early 2017). This region will provide even lower latency and strong data sovereignty to local users.

More startups, small and medium businesses, large enterprises, universities, and government organizations all over the world are moving to the AWS Cloud faster than ever before. We are committed to meeting our customers’ increasing needs for capacity and for powerful AWS services that eliminate the heavy lifting of the underlying IT infrastructure -- allowing them to focus more of their precious resources on their core business. Leading UK organizations were among the early adopters of the cloud when we first started AWS back in 2006, and we continue to help them drive increased agility, lower IT costs, and easily scale globally. Here are some examples of how our UK customers are using the AWS platform:

  • Hot Startups – Shazam, Hailo, Omnifone, YPlan, SwiftKey, Aire, GoSquared
  • Mid-sized Organisations – Haven Power, Holiday Extras, Exeter Family Friendly, Royal Opera House, Total Jobs
  • Retail Companies – Shop Direct, Nisa Retail, Kurt Geiger, Sport Pursuit
  • Enterprise Companies – Unilever, ATOC, National Rail Enquiries
  • Media and Entertainment – BBC, Channel 4, ITV, News UK, The FT, Trinity Mirror, The Guardian
  • Public Sector & Not-for-Profit – UCAS, Makewaves, JustGiving

The new region, coupled with the existing AWS regions in Dublin and Frankfurt, will provide customers with quick, low-latency access to websites, mobile applications, games, SaaS applications, big data analysis, Internet of Things (IoT) applications, and more. [...]



Expanding the Cloud: Introducing Amazon QuickSight

2015-10-07T09:00:00-04:00

We live in a world where massive volumes of data are being generated from websites, connected devices, and mobile apps. In such a data-intensive environment, making key business decisions such as running marketing and sales campaigns, logistics planning, financial analysis, and ad targeting requires deriving insights from these data. However, the data infrastructure to collect, store, and process data is geared primarily towards developers and IT professionals (e.g., Amazon Redshift, Amazon DynamoDB, Amazon EMR), whereas insights need to be derived not just by technical professionals but also by non-technical, business users.

In our quest to enable the best data storage options for customers, over the years we have built several innovative database solutions such as Amazon RDS, Amazon RDS for Aurora, Amazon DynamoDB, and Amazon Redshift. Not surprisingly, customers are using them to collect and store massive amounts of data. Yet the process of deriving actionable insights out of this wide variety of data sources is not easy. Traditionally, companies had to invest in a lot of complex tools: tools to discover their data sets, ETL tools to prepare the data for analysis, and separate tools for analyzing the data and providing visually interactive dashboards. Today, I am excited to share with you a brand new service called Amazon QuickSight that aims to simplify the process of deriving insights from a wide variety of data sources quickly, easily, and at a low cost. QuickSight is a very fast, cloud-powered business intelligence service offered at 1/10th the cost of old-guard BI solutions.

Big data challenges

Over the last several years, AWS has delivered on a comprehensive set of services to help customers collect, store, and process their growing volume of data. Today, many thousands of companies—from large enterprises such as Johnson & Johnson, Samsung, and Philips to established technology companies such as Netflix and Adobe to innovative startups such as Airbnb, Yelp, and Foursquare—use Amazon Web Services for their big data needs. Every day, large amounts of data are generated from customer applications running on top of AWS infrastructure, collected and streamed using services like Amazon Kinesis, and stored in AWS relational data sources such as Amazon RDS, Amazon Aurora, and Amazon Redshift; NoSQL data sources such as Amazon DynamoDB; and file-based data sources such as Amazon S3. Customers also use a variety of different tools, including Amazon EMR for Hadoop, Amazon Machine Learning, AWS Data Pipeline, and AWS Lambda, to process and analyze their data.

There’s an inherent gap between the data that is collected, stored, and processed and the key decisions that business users make on a daily basis. Put simply, data is not always readily available and accessible to organizational end users. Most business users continue to struggle to answer key business questions such as “Who are my top customers and what are they buying?”, “How is my marketing campaign performing?”, and “Why is my most profitable region not growing?” While BI solutions have existed for decades, customers have told us that it takes an enormous amount of time, IT effort, and money to bridge this gap. Traditional BI solutions typically require teams of data engineers to spend several months building complex data models and synthesizing the data before they can generate their first report. These solutions lack interactive data exploration and visualization capabilities, limiting most business users to canned reports and pre-selected queries. On-premises BI tools also require companies to provision and maintain complex hardware infrastructure and invest in expensive software licenses, maintenance fees, and support fees that cost upwards of thousands of dollars per user per year. To scale to a larger number of users and support the growth in data volume spurred by social media, web, mobile, IoT, ad-tech[...]



The Startup Experience at AWS re:Invent

2015-09-28T09:00:00-04:00


AWS re:Invent is just over one week away—as I prepare to head to Vegas, I’m pumped up about the chance to interact with AWS-powered startups from around the world. One of my favorite parts of the week is being able to host three startup-focused sessions Thursday afternoon:

The Startup Scene in 2016: a Visionary Panel [Thursday, 2:45 PM]
In this session, I’ll moderate a diverse panel of technology experts who’ll discuss emerging trends all startups should be aware of, including how local governments, microeconomic trends, evolving accelerator programs, and the AWS cloud are influencing the global startup scene. This panel will include:

  • Tracy DiNunzio, Founder & CEO, Tradesy
  • Michael DeAngelo, Deputy CIO, State of Washington
  • Ben Whaley, Founder & Principal Consultant, WhaleTech LLC
  • Jason Seats, Managing Director (Austin), & Partner, Techstars

CTO-to-CTO Fireside Chat [Thursday, 4:15 PM]
This is one of my favorite sessions as I get a chance to sit down and get inside the minds of technical leaders behind some of the most innovative and disruptive startups in the world. I’ll have 1:1 chats with the following CTOs:

  • Laks Srini, CTO and Co-founder, Zenefits
  • Mackenzie Kosut, Head of Technical Operations, Oscar Health
  • Jason MacInnes, CTO, DraftKings
  • Gautam Golwala, CTO and Co-founder, Poshmark

4th Annual Startup Launches [Thursday, 5:30 PM]
To wrap up our startup track, in the 4th Annual Startup Launches event we’ll invite five AWS-powered startups to launch their companies on stage, immediately followed by a happy hour. I can’t share the lineup as some of these startups are in stealth mode, but I can promise you this will be an exciting event with each startup sharing a special offer, exclusive to those of you in attendance.

Other startup activities

Startup Insights from a Venture Capitalist's Perspective [Thursday, 1:30 PM]
Immediately before I take the stage, you can join a group of venture capitalists as they share insights and observations about the global startup ecosystem: each panelist will share the most significant insight they’ve gained in the past 12 months and what they believe will be the most impactful development in the coming year.

The AWS Startup Pavilion [Tuesday – Thursday]
If you’re not able to join the startup sessions Thursday afternoon, I encourage you to swing by the AWS Startup Pavilion (within re:Invent Central, booth 1062) where you can meet the AWS startup team, mingle with other startups, chat 1:1 with an AWS architect, and learn about AWS Activate.

Startup Stop on the re:Invent Pub Crawl [Wednesday evening]
And to relax and unwind in the evening, you won’t want to miss the startup stop on the re:Invent pub crawl, at the Rockhouse within The Grand Canal Shoppes at The Venetian. This is the place to be for free food, drinks, and networking during the Wednesday night re:Invent pub crawl.

Look forward to seeing you in Vegas!




The AWS Pop-up Lofts are opening in London and Berlin

2015-09-08T09:00:00-04:00

Amazon Web Services (AWS) has been working closely with the startup community in London, and across Europe, since we launched back in 2006. We have grown substantially in that time, and today more than two thirds of the UK’s startups with valuations of over a billion dollars, including Skyscanner, JustEat, Powa, FanDuel, and Shazam, are leveraging our platform to deliver innovative services to customers around the world. This week I will have the pleasure of meeting up with our startup customers as we celebrate the opening of the first AWS Pop-up Loft outside of the US, in one of the greatest cities in the world: London. The London Loft opening will be followed in quick succession by our fourth Pop-up Loft opening its doors in Berlin.

Both London and Berlin are vibrant cities with a concentration of innovative startups building their businesses on AWS. The Lofts will give them a physical place not only to learn about our services but also to help cultivate a community of AWS customers that can learn from each other. Every time I’ve visited the Lofts in San Francisco and New York there has been a great buzz, with people getting advice from our solutions architects, getting training, or attending talks and demos. By opening the London and Berlin Lofts we’re hoping to cultivate that same community and expand on the base of loyal startups we have, such as Hailo, YPlan, SwiftKey, Mendeley, GoSquared, Playmob, and Yoyo Wallet, to help them grow their companies globally and be successful. You can expect to see some of the brightest and most creative minds in the industry on hand in the Lofts to help, and I’d encourage all local startups to make the most of the resources at your fingertips, ranging from technology resources to access to our vast network of customers, partners, accelerators, incubators, and venture capitalists, who will all be in the Loft to help you gain the insight you need, provide advice on how to secure funding, and build the ‘softer skills’ needed to grow your businesses.

The AWS Pop-up Loft in London will be open from September 10 to October 29, between 10am and 6pm and later for evening events, Monday through Friday, in Moorgate. You can go online now at http://awsloft.london to make one-on-one appointments with an AWS expert, and register for boot camps and technical sessions, including:

  • Ask an Architect: an hour-long session that can be scheduled with a member of the AWS technical team. Bring your questions about AWS architecture, cost optimisation, services and features, or anything else AWS related. You can also drop in if you don’t have an appointment.
  • Technical Bootcamps: one-day training sessions taught by experienced AWS instructors and solutions architects. You will get hands-on experience using a live environment with the AWS Management Console. There is a ‘Getting started with AWS’ bootcamp as well as a Chef bootcamp, which will show customers how they can safeguard their infrastructure, manage complexity, and accelerate time to market.
  • Self-paced Hands-on Labs: beginners through advanced users can attend the labs, which will help sharpen AWS technical skills at a personal pace and are available for free in the Loft during operating hours.

The London Loft will also feature an IoT Lab with a range of devices running on AWS services, many of which have been developed by our Solutions Architects.
Visitors to the Loft will be able to participate in live demos and Q&A opportunities, as our technical team demonstrates what is possible with IoT on AWS. You are all invited to join us for the grand opening party at the Loft in London on September 10 at 6PM. There will be food, drinks, DJ, and free swag. The event will be packed, so RSVP today if you want to come and mingle with hot startups, accelerators, incubato[...]



Titan Graph Database Integration with DynamoDB: World-class Performance, Availability, and Scale for New Workloads

2015-08-20T09:00:00-04:00

Today, we are releasing a plugin that allows customers to use the Titan graph engine with Amazon DynamoDB as the backend storage layer. It opens up the possibility to enjoy the value that graph databases bring to relationship-centric use cases, without worrying about managing the underlying storage.

The importance of relationships

Relationships are a fundamental aspect of both the physical and virtual worlds. Modern applications need to quickly navigate connections in the physical world of people, cities, and public transit stations as well as the virtual world of search terms, social posts, and genetic code, for example. Developers need efficient methods to store, traverse, and query these relationships. Social media apps navigate relationships between friends, photos, videos, pages, and followers. In supply chain management, connections between airports, warehouses, and retail aisles are critical for cost and time optimization. Similarly, relationships are essential in many other use cases such as financial modeling, risk analysis, genome research, search, gaming, and others. Traditionally, these connections have been stored in relational databases, with each object type requiring its own table. When using relational databases, traversing relationships requires expensive table JOIN operations, causing significantly increased latency as table size and query complexity grow.

Enter graph databases

Graph databases belong to the NoSQL family, and are optimized for storing and traversing relationships. A graph consists of vertices, edges, and associated properties. Each vertex contains a list of properties and edges, which represent the relationships to other vertices. This structure is optimized for fast relationship query and traversal, without requiring expensive table JOIN operations. In this way, graphs can scale to billions of vertices and edges, while allowing efficient queries and traversal of any subset of the graph with consistent low latency that doesn’t grow proportionally to the overall graph size. This is an important benefit for many use cases that involve accessing and traversing small subsets of a large graph. A concrete example is generating a product recommendation based on purchase interests of a user’s friends, where the relevant social connections are a small subset of the total network. Another example is tracking inventory in a vast logistics system, where only a subset of its locations is relevant for a specific item. For us at Amazon, the challenge of tracking inventory at massive scale is not just theoretical, but very real.

Graph databases at Amazon

Like many AWS innovations, the desire to build a solution for a scalable graph database came from Amazon’s retail business. Amazon runs one of the largest fulfillment networks in the world, and we need to optimize our systems to quickly and accurately track the movement of vast amounts of inventory. This requires a database that can quickly traverse the logistics history for a given item or order. Graph databases are ideal for the task, since they make it easy to store and retrieve each item’s logistics history. Our criteria for choosing the right graph engine were:

  • The ability to support a graph containing billions of vertices and edges.
  • The ability to scale with the accelerating pace of new items added to the catalog, and new objects and locations in the company’s expanding fulfillment network.
After evaluating different technologies, we decided to use Titan, a distributed graph database engine optimized for creating and querying large graphs. Titan has a pluggable storage architecture, using existing NoSQL databases as underlying storage for the graph data. While the Titan-based solution worked well for our needs, the team quickly found itself having to devote an increasing amoun[...]
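The JOIN-free traversal property described above is easy to see in miniature. The toy Node.js sketch below uses made-up data and is in no way Titan or the DynamoDB storage plugin; it only illustrates how keeping edges as direct references on each vertex lets a friends-of-friends recommendation touch just the visited subgraph:

    // Toy adjacency-list graph: each vertex holds its edges directly,
    // so traversal follows references instead of joining tables.
    var vertices = {
      alice: { friends: ['bob', 'carol'], bought: ['book'] },
      bob:   { friends: ['alice'],        bought: ['camera'] },
      carol: { friends: ['alice'],        bought: ['tent', 'book'] }
    };

    // Recommend items that friends bought but the user has not.
    function recommend(user) {
      var seen = {};
      vertices[user].bought.forEach(function (item) { seen[item] = true; });
      var result = [];
      vertices[user].friends.forEach(function (friend) {
        vertices[friend].bought.forEach(function (item) {
          if (!seen[item]) { seen[item] = true; result.push(item); }
        });
      });
      return result;
    }

    console.log(recommend('alice')); // -> [ 'camera', 'tent' ]

However large the full graph grows, this query only visits alice, her friends, and their purchases, which is the property that keeps traversal latency from growing with total graph size.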



Under the Hood of Amazon EC2 Container Service

2015-07-20T09:00:00-04:00

In my last post about Amazon EC2 Container Service (Amazon ECS), I discussed the two key components of running modern distributed applications on a cluster: reliable state management and flexible scheduling. Amazon ECS makes building and running containerized applications simple, but how that happens is what makes Amazon ECS interesting. Today, I want to explore the Amazon ECS architecture and what this architecture enables. Below is a diagram of the basic components of Amazon ECS:

How we coordinate the cluster

Let’s talk about what Amazon ECS is actually doing. The core of Amazon ECS is the cluster manager, a backend service that handles the tasks of cluster coordination and state management. On top of the cluster manager sit various schedulers. Cluster management and container scheduling are decoupled from each other, allowing customers to use and build their own schedulers. A cluster is just a pool of compute resources available to a customer’s applications. The pool of resources, at this time, is the CPU, memory, and networking resources of Amazon EC2 instances as partitioned by containers. Amazon ECS coordinates the cluster through the Amazon ECS Container Agent running on each EC2 instance in the cluster. The agent allows Amazon ECS to communicate with the EC2 instances in the cluster to start, stop, and monitor containers as requested by a user or scheduler. The agent is written in Go, has a minimal footprint, and is available on GitHub under an Apache license. We encourage contributions, and feedback is most welcome.

How we manage state

To coordinate the cluster, we need to have a single source of truth on the clusters themselves: EC2 instances in the clusters, tasks running on the EC2 instances, containers that make up a task, and resources available or occupied (e.g., network ports, memory, CPU, etc.). There is no way to successfully start and stop containers without accurate knowledge of the state of the cluster. In order to solve this, state needs to be stored somewhere, so at the heart of any modern cluster manager is a key/value store. This key/value store acts as the single source of truth for all information on the cluster: state, and all changes to state transitions, are entered and stored here. To be robust and scalable, this key/value store needs to be distributed for durability and availability, to protect against network partitions and hardware failures. But because the key/value store is distributed, making sure data is consistent and handling concurrent changes becomes more difficult, especially in an environment where state constantly changes (e.g., containers stopping and starting). As such, some form of concurrency control has to be put in place to make sure that multiple state changes don’t conflict. For example, if two developers request all the remaining memory resources from a certain EC2 instance for their containers, only one container can actually receive those resources and the other would have to be told their request could not be completed.

To achieve concurrency control, we implemented Amazon ECS using one of Amazon’s core distributed systems primitives: a data store built on a Paxos-based transactional journal that keeps a record of every change made to a data entry. Any write to the data store is committed as a transaction in the journal with a specific order-based ID. The current value in a data store is the sum of all transactions made as recorded by the journal. Any read from the data store is only a snapshot in time of the journal.
For a write to succeed, the write proposed must be the latest transaction since the last read. This primitive allows Amazon ECS to store its cluster state information with optimistic concurrency, which is ideal in environments where constantly changi[...]
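The compare-and-set pattern behind this optimistic concurrency can be sketched in a few lines. The toy Node.js store below illustrates the idea only; it is not the ECS data store itself, whose replicated, Paxos-based journal is far more involved:

    // Toy versioned store: reads return a version, and a write is
    // rejected unless it names the version it was based on.
    function VersionedStore() {
      this.data = {}; // key -> { value, version }
    }

    VersionedStore.prototype.read = function (key) {
      var e = this.data[key] || { value: undefined, version: 0 };
      return { value: e.value, version: e.version };
    };

    VersionedStore.prototype.write = function (key, value, basedOnVersion) {
      var e = this.data[key] || { value: undefined, version: 0 };
      if (e.version !== basedOnVersion) {
        return { ok: false, reason: 'stale read; re-read and retry' };
      }
      this.data[key] = { value: value, version: e.version + 1 };
      return { ok: true, version: e.version + 1 };
    };

    // Two schedulers race to claim the remaining memory on an instance:
    var store = new VersionedStore();
    var a = store.read('i-123/free-memory');
    var b = store.read('i-123/free-memory');
    console.log(store.write('i-123/free-memory', 'task-A', a.version)); // ok
    console.log(store.write('i-123/free-memory', 'task-B', b.version)); // rejected

The second write loses exactly as in the example above: it was based on a snapshot that is no longer the latest transaction, so the caller must re-read the state and decide whether its request can still be satisfied.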



Back-to-Basics Weekend Reading - Data Compression

2015-07-17T13:00:00-04:00


Data compression today is still as important as it was in the early days of computing. Although in those days all computer and storage resources were very limited, the objects in use were much smaller than today. We have seen a shift from generic compression to compression for specific file types, especially for images, audio, and video. In this weekend's back-to-basics reading we go back in time, to 1987 to be specific, when Lelewer and Hirschberg wrote a survey paper that covers the first 40 years of data compression research. It has everything we like in a back-to-basics paper: it does not present the most modern results, but it gives you a great understanding of the fundamentals. It is a substantial paper but easy to read.

D.A. Lelewer and D.S. Hirschberg, "Data Compression", ACM Computing Surveys 19(3), 1987, 261-297.




Embrace event-driven computing: Amazon expands DynamoDB with streams, cross-region replication, and database triggers

2015-07-14T13:00:00-04:00

In just three short years, Amazon DynamoDB has emerged as the backbone for many powerful Internet applications such as AdRoll, Druva, DeviceScape, and Battlecamp. Many happy developers are using DynamoDB to handle trillions of requests every day. I am excited to share with you that today we are expanding DynamoDB with streams, cross-region replication, and database triggers. In this blog post, I will explain how these three new capabilities empower you to build applications with distributed systems architecture and create responsive, reliable, and high-performance applications using DynamoDB that work at any scale.

DynamoDB Streams enables your application to get real-time notifications of your tables’ item-level changes. Streams provide you with the underlying infrastructure to create new applications, such as continuously updated free-text search indexes, caches, or other creative extensions requiring up-to-date table changes. DynamoDB Streams is the enabling technology behind two other features announced today: cross-region replication maintains identical copies of DynamoDB tables across AWS regions with push-button ease, and triggers execute AWS Lambda functions on streams, allowing you to respond to changing data conditions. Let me expand on each one of them.

DynamoDB Streams

DynamoDB Streams provides you with a time-ordered sequence, or change log, of all item-level changes made to any DynamoDB table. The stream is exposed via the familiar Amazon Kinesis interface. Using streams, you can apply the changes to a full-text search data store such as Elasticsearch, push incremental backups to Amazon S3, or maintain an up-to-date read cache. I have heard from many of you that one of the common challenges you face is keeping DynamoDB data in sync with other data sources, such as search indexes or data warehouses. In traditional database architectures, database engines often ran a small search engine or data warehouse engine on the same hardware as the database. However, the model of collocating all engines in a single database turns out to be cumbersome, because the scaling characteristics of a transactional database are different from those of a search index or data warehouse. A more scalable option is to decouple these systems and build a pipe that connects these engines and feeds all change records from the source database to the data warehouse (e.g., Amazon Redshift) and Elasticsearch machines.

The velocity and variety of data that you are managing continues to increase, making your task of keeping up with the changes more challenging, as you want to manage the systems and applications in real time and respond to changing conditions. A common design pattern is to capture transactional and operational data (such as logs) that require high throughput and performance in DynamoDB, and provide periodic updates to search clusters and data warehouses. However, in the past, you had to write code to manage the data changes and deal with keeping the search engine and data warehousing engines in sync. For cost and manageability reasons, some developers have collocated the extract job, the search cluster, and data warehouses on the same box, leading to performance and scalability compromises. DynamoDB Streams simplifies and improves this design pattern with a distributed systems approach.

You can enable the DynamoDB Streams feature for a table with just a few clicks using the AWS Management Console, or you can use the DynamoDB API.
Once configured, you can use an Amazon EC2 instance to read the stream using the Amazon Kinesis interface, and apply the changes in parallel to the search cluster, the data warehouse, and any number of data consumers. You can read the changes as they occur in real time or [...]
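To give a flavor of the trigger model, here is a hypothetical Node.js Lambda function attached to a table's stream. The Records, eventName, and NewImage fields follow the DynamoDB Streams event format; the indexing step is left as a comment, since the downstream system is application-specific:

    // Sketch of a Lambda function invoked by a DynamoDB trigger with a
    // batch of item-level change records from the table's stream.
    exports.handler = function (event, context) {
      event.Records.forEach(function (record) {
        // eventName is INSERT, MODIFY, or REMOVE
        if (record.eventName === 'INSERT') {
          var newImage = record.dynamodb.NewImage; // attribute map of the new item
          console.log('New item:', JSON.stringify(newImage));
          // e.g., index newImage into an Elasticsearch cluster here
        }
      });
      context.succeed('Processed ' + event.Records.length + ' records');
    };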



Amazon announces the Alexa Skills Kit, Enabling Developers to Create New Voice Capabilities

2015-06-25T13:00:00-04:00

Today, Amazon announced the Alexa Skills Kit (ASK), a collection of self-service APIs and tools that make it fast and easy for developers to create new voice-driven capabilities for Alexa. With a few lines of code, developers can easily integrate existing web services with Alexa or, in just a few hours, they can build entirely new experiences designed around voice. No experience with speech recognition or natural language understanding is required—Amazon does all the work to hear, understand, and process the customer’s spoken request so you don’t have to. All of the code runs in the cloud — nothing is installed on any user device.

The easiest way to build a skill for Alexa is to use AWS Lambda, an innovative compute service that runs a developer’s code in response to triggers and automatically manages the compute resources in the AWS Cloud, so there is no need for a developer to provision or continuously run servers. Developers simply upload the code for the new Alexa skill they are creating, and AWS Lambda does the rest, executing the code in response to Alexa voice interactions and automatically managing the compute resources on the developer’s behalf. Using a Lambda function for your service also eliminates some of the complexity around setting up and managing your own endpoint:

  • You do not need to administer or manage any of the compute resources for your service.
  • You do not need an SSL certificate.
  • You do not need to verify that requests are coming from the Alexa service yourself. Access to execute your function is controlled by permissions within AWS instead.
  • AWS Lambda runs your code only when you need it and scales with your usage, so there is no need to provision or continuously run servers.

For most developers, the Lambda free tier is sufficient for the function supporting an Alexa skill. The first one million requests each month are free. Note that the Lambda free tier does not automatically expire, but is available indefinitely. AWS Lambda supports code written in Node (JavaScript) and Java. You can copy JavaScript code directly into the inline code editor in the AWS Lambda console or upload it in a zip file. For basic testing, you can invoke your function manually by sending it JSON requests in the Lambda console.

In addition, Amazon announced today that the Alexa Voice Service (AVS), the same service that powers Amazon Echo, is now available to third party hardware makers who want to integrate Alexa into their devices—for free. For example, a Wi-Fi alarm clock maker can create an Alexa-enabled clock radio, so a customer can talk to Alexa as they wake up, asking “What’s the weather today?” or “What time is my first meeting?” Read the press release here.

Got an innovative idea for how voice technology can improve customers’ lives? The Alexa Fund was also announced today and will provide up to $100 million in investments to fuel voice technology innovation. Whether that’s creating new Alexa capabilities with the Alexa Skills Kit, building devices that use Alexa for new and novel voice experiences using the Alexa Voice Service, or something else entirely, if you have a visionary idea, Amazon would love to hear from you. For more details about Alexa you can check out today’s announcements on the AWS blog and Amazon Appstore blog. [...]
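To make "a few lines of code" concrete, here is a minimal hypothetical Node.js skill handler; the HelloIntent name is made up for illustration, while the request and response shapes follow the ASK JSON format:

    // Minimal sketch of a custom Alexa skill running on AWS Lambda.
    exports.handler = function (event, context) {
      var speech;
      if (event.request.type === 'LaunchRequest') {
        speech = 'Welcome. Ask me to say hello.';
      } else if (event.request.type === 'IntentRequest' &&
                 event.request.intent.name === 'HelloIntent') {
        speech = 'Hello from your first Alexa skill.';
      } else {
        speech = 'Goodbye.';
      }
      // Return the spoken response and end the session.
      context.succeed({
        version: '1.0',
        response: {
          outputSpeech: { type: 'PlainText', text: speech },
          shouldEndSession: true
        }
      });
    };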



Back-to-Basics Weekend Reading - The Working Set Model for Program Behavior

2015-06-12T13:00:00-04:00


This weekend we go back in time, all the way to the beginning of operating systems research. At the first SOSP conference in 1967, there were several papers that laid the foundation for the development of structured operating systems. There was, of course, the lauded paper on the THE operating system by Dijkstra, but for this weekend I picked the paper on memory locality by Peter Denning, as this work laid the groundwork for the development of virtual memory systems.

Peter J. Denning, "The Working Set Model for Program Behavior", Proceedings of the First ACM Symposium on Operating Systems Principles (SOSP), October 1967, Gatlinburg, TN, USA.
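For readers new to the paper, the core definition is compact: the working set W(t, τ) is the set of distinct pages a process referenced in the window of its last τ references ending at time t. A toy Node.js sketch of that definition, with a made-up reference string:

    // Working set W(t, tau): distinct pages referenced in the window
    // of the last tau references ending at position t.
    function workingSet(refs, t, tau) {
      var start = Math.max(0, t - tau + 1);
      var set = {};
      for (var i = start; i <= t; i++) {
        set[refs[i]] = true;
      }
      return Object.keys(set);
    }

    var refs = ['a', 'b', 'a', 'c', 'a', 'b', 'd'];
    console.log(workingSet(refs, 6, 4)); // -> [ 'c', 'a', 'b', 'd' ]

Denning's insight was that this window-based set is a good dynamic estimate of the memory a program needs resident, which is what later virtual memory policies built on.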




Back-to-Basics Weekend Reading - Survey of Local Algorithms

2015-05-29T13:00:00-04:00


As we know, the run time of most algorithms increases as the input set increases in size. There is one notable exception: there is a class of distributed algorithms, dubbed local algorithms, that run in constant time, independently of the size of the network. Being highly scalable and fault tolerant, such algorithms are ideal for the operation of large-scale distributed systems. Furthermore, even though the model of local algorithms is very limited, in recent years we have seen many positive results for nontrivial problems. In this weekend's paper, Jukka Suomela surveys the state of the art in the field, covering impossibility results, deterministic local algorithms, randomized local algorithms, and local algorithms for geometric graphs.

Jukka Suomela, "Survey of Local Algorithms", ACM Computing Surveys 45(2), February 2013, Article No. 24.




Join me at the AWS Summit in Paris, Tel Aviv, Berlin, Amsterdam or New York

2015-05-28T19:30:00-04:00


An important way of engaging with AWS customers is through the AWS Global Summit Series. All AWS Summits feature a keynote address highlighting the latest announcements from AWS and customer testimonials, technical sessions led by AWS engineers, and hands-on technical training. You will learn best practices for deploying applications on AWS, optimizing performance, monitoring cloud resources, managing security, cutting costs, and more. You will also have opportunities to meet AWS staff and partners to get your technical questions answered.

At the Summits we focus on education and helping our customers: there are deep technical developer sessions, broad sessions on architectural principles, sessions for enterprise decision makers, and sessions on how best to exploit AWS in public sector or education settings. In all of these, we make sure that it is not just the AWS team talking; we invite customers to join us and tell about their journey.

I am fortunate to join our customers for a number of these Summits. London and Stockholm were already great events, and now we have another series, in Paris, Tel Aviv, Berlin, Amsterdam, and New York, coming up in three very packed weeks starting at the end of June.

Hope to see you in one of these locations!




The AWS Pop-up Loft opens in New York City

2015-05-27T16:30:00-04:00

Over a year ago, the AWS team opened a "pop-up loft" in San Francisco at 925 Market Street. The goal of opening the loft was to give developers an opportunity to get in-person support and education on AWS, to network, get some work done, or just hang out with peers. It became a great success: every time I visit the loft there is a great buzz, with people getting advice from our solution architects, getting training, or attending talks and demos. It became such a hit among developers that we decided to reopen the loft in August of last year, after its initial run of 4 weeks, making sure everyone would have continued access to this important resource.

Building on the success of the San Francisco loft, the team will now also open a pop-up loft in New York City on June 24 at 350 West Broadway. It will extend the concept that was pioneered in SF to give NYC developers access to great AWS resources:

  • Ask an Architect: You can schedule a 1:1, 60-minute session with a member of the AWS technical team. Bring your questions about AWS architecture, cost optimization, services and features, and anything else AWS related. And don’t be shy — walk-ins are welcome too.
  • Technical Presentations: AWS solution architects, product managers, and evangelists deliver technical presentations covering some of the highest-rated and best-attended sessions from recent AWS events. Talks cover solutions and services including Amazon Echo, Amazon DynamoDB, mobile gaming, Amazon Elastic MapReduce, and more.
  • AWS Technical Bootcamps: Limited to 25 participants per bootcamp, these full-day bootcamps include hands-on lab exercises that use a live environment with the AWS console. Usually these cost $600, but at the AWS Pop-up Loft we are offering them for free. Bootcamps you can register for include “Getting Started with AWS — Technical,” “Store and Manage Big Data in the Cloud,” “Architecting Highly Available Apps,” and “Taking AWS Operations to the Next Level.”
  • Self-paced, Hands-on Labs: Beginners through advanced users can attend labs on topics that range from creating Amazon EC2 instances to launching and managing a web application with AWS CloudFormation. Usually $30 each, these labs are offered for free in the AWS loft.

You are all invited to join us for the grand opening party at the loft on June 24 at 7 PM. There will be food, drinks, a DJ, and free swag. The event will be packed, so RSVP today if you want to come and mingle with hot startups, accelerators, incubators, VCs, and our AWS technical experts. Entrance is on a first-come, first-served basis. Both the San Francisco loft and the New York City loft have packed calendars with events. Visit their web pages for more details and to sign up. I have signed up to do two loft events:

  • Fireside Chat with AWS Community Heroes, on June 16 starting at 6pm: Jeremy Edberg (Reddit/Netflix) and Valentino Volonghi (AdRoll) will be joining me at the San Francisco loft for a fireside chat about startups, technology, entrepreneurship, and more. Jeremy and Valentino have been recognized by AWS as Community Heroes, an honor reserved for developers who’ve had a real impact within the community. Following the talk, we’ll kick off a unique networking social including specialty cocktails, beer, wine, food, and party swag!
  • Fireside Chat with NYC Founders, on July 7: a number of startup founders who have gone through NYC accelerators will join me for a conversation about trends in the New York startup scene.

I hope to see you there! [...]



Expanding the Cloud: Amazon Machine Learning Service, the Amazon Elastic Filesystem and more

2015-04-09T14:00:00-04:00

Today was a big day for the Amazon Web Services teams, as a whole range of new services and functionality was delivered to our customers. Here is a brief recap of it:

The Amazon Machine Learning service

As I wrote last week, machine learning is becoming an increasingly important tool to build advanced data-driven applications. At Amazon we have hundreds of teams using machine learning, and by making use of the Machine Learning Service we can significantly reduce the time it takes them to bring their technologies into production. And you no longer need to be a machine learning expert to be able to use it. Amazon Machine Learning is a service that allows you to easily build predictive applications, including fraud detection, demand forecasting, and click prediction. Amazon ML uses powerful algorithms that can help you create machine learning models by finding patterns in existing data, and uses these patterns to make predictions from new data as it becomes available. The Amazon ML console and API provide data and model visualization tools, as well as wizards to guide you through the process of creating machine learning models, measuring their quality, and fine-tuning the predictions to match your application requirements. Once the models are created, you can get predictions for your application by using the simple API, without having to implement custom prediction generation code or manage any infrastructure (a short sketch of the prediction API follows at the end of this post). Amazon ML is highly scalable and can generate billions of predictions, and serve those predictions in real time and at high throughput. With Amazon ML there is no setup cost and you pay as you go, so you can start small and scale as your application grows. Details on the AWS Blog.

The Amazon Elastic File System

AWS has been offering a range of storage solutions (objects, block storage, databases, archiving, etc.) for a while already. Customers have been asking us to add file system functionality to our set of solutions, as much of their traditional software requires an EC2-mountable shared file system. When we designed Amazon EFS, we decided to build along the AWS principles: elastic, scalable, highly available, consistent performance, secure, and cost-effective. Amazon EFS is a fully managed service that makes it easy to set up and scale shared file storage in the AWS Cloud. With a few clicks in the AWS Management Console, customers can use Amazon EFS to create file systems that are accessible to EC2 instances and that support standard operating system APIs and file system semantics. Amazon EFS file systems can automatically scale from small file systems to petabyte scale without needing to provision storage or throughput. Amazon EFS can support thousands of concurrent client connections with consistent performance, making it ideal for a wide range of uses that require on-demand scaling of file system capacity and performance. Amazon EFS is designed to be highly available and durable, storing each file system object redundantly across multiple Availability Zones. With Amazon EFS, there is no minimum fee or setup cost, and customers pay only for the storage they use. Details on the AWS Blog.

The Amazon EC2 Container Service (ECS)

Containers are an important building block in the modern style of software development, and since the launch of Amazon ECS last November it has become a very important tool for architects and developers. Today Amazon ECS moves into General Availability (GA), so you can use it for your certified production systems. With the move to GA, Amazon ECS also delivers a new scheduler to support long-running applications; see my de[...]
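As promised above, here is a hedged sketch of requesting a real-time prediction with the AWS SDK for JavaScript. The model ID, endpoint URL, and record fields are placeholders, not values from any real model:

    // Hypothetical real-time prediction request against Amazon ML.
    var AWS = require('aws-sdk');
    var ml = new AWS.MachineLearning({ region: 'us-east-1' });

    ml.predict({
      MLModelId: 'ml-EXAMPLEMODELID',          // placeholder model ID
      PredictEndpoint: 'https://realtime.machinelearning.us-east-1.amazonaws.com',
      Record: { amount: '42.50', country: 'US' } // feature values as strings
    }, function (err, data) {
      if (err) { return console.error(err); }
      console.log('Prediction:', data.Prediction); // predicted label/value and details
    });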



State Management and Scheduling with the Amazon EC2 Container Service

2015-04-09T13:00:00-04:00

Last November, I had the pleasure of announcing the preview of Amazon EC2 Container Service (ECS) at re:Invent. At the time, I wrote about how containerization makes it easier for customers to decompose their applications into smaller building blocks, resulting in increased agility and speed of feature releases. I also talked about some of the challenges our customers were facing as they tried to scale container-based applications, including challenges around cluster management. Today, I want to dive deeper into some key design decisions we made while building Amazon ECS to address the core problems our customers are facing.

Running modern distributed applications on a cluster requires two key components - reliable state management and flexible scheduling. These are challenging problems that engineers building software systems have been trying to solve for a long time. In the past, many cluster management systems assumed that the cluster was going to be dedicated to a single application or would be statically partitioned to accommodate multiple users. In most cases, the applications you ran on these clusters were limited and set by the administrators. Your jobs were often put in job queues to ensure fairness and increased cluster utilization.

For modern distributed applications, many of these approaches break down, especially in the highly dynamic environment enabled by Amazon EC2 and Docker containers. Our customers expect to spin up a pool of compute resources for their clusters on demand and dynamically change the resources available as their jobs change over time. They expect these clusters to span multiple Availability Zones, and increasingly want to distribute multiple applications - encapsulated in Docker containers - without the need to statically partition the cluster. These applications are typically a mix of long-running processes and short-lived jobs with varying levels of priority. Perhaps most importantly, our customers told us that they wanted to be able to start with a small cluster and grow over time as their needs grew, without adding operational complexity.

A modern scheduling system demands better state management than is available with traditional cluster management systems. Customers running Docker containers across a cluster of Amazon EC2 instances need to know where those containers are running and whether they are in their desired state. They also need information about the resources in use and the remaining resources available, as well as the ability to respond to failures, including the possibility that an entire Availability Zone may become unavailable. This requires customers to store the state of their cluster in a highly available and distributed key-value store. Our customers have told us that scaling and operating these data storage systems is very challenging. Furthermore, they felt that this was undifferentiated heavy lifting and would rather focus their energy on running their applications and growing their businesses. Let's dive into the innovations of Amazon ECS that address these problems and remove much of the complexity and "muck" of running a high-performance, highly scalable, Docker-aware cluster management system.

State Management with Amazon ECS

At Amazon, we have built a number of core distributed systems primitives to support our needs. Amazon ECS is built on top of one of these primitives - a Paxos-based transaction journal that maintains a history of state transitions.
These transitions are offered and accepted using optimistic concurrency control and accept[...]