
All Things Distributed








Back-to-Basics Weekend Reading: Deep learning in neural networks

2017-03-24T21:00:00-07:00


In the past few years, we have seen an explosion in the use of Deep Learning, as its software platforms have matured and the supporting hardware, especially GPUs with larger memories, has become widely available. Even though this explosion is recent, Deep Learning has deep historical roots, tracing all the way back to the sixties, or maybe even earlier. By reading up on its history, we get a better understanding of the current state of the art in Deep Learning algorithms and the neural networks we build with them.

The set of papers to read for a deep dive into this history is so broad that it would take us multiple weekends. Instead, we will be reading an excellent 2014 overview paper by Jürgen Schmidhuber of IDSIA. Jürgen evaluates the current state of the art in deep learning by tracing it back to its roots, giving us excellent historical context.

Enjoy!

"Deep Learning in Neural Networks: An Overview." Jürgen Schmidhuber, in Neural Networks, Volume 61, January 2015, Pages 85-117 (DOI: 10.1016/j.neunet.2014.09.003)




Amazon Makes it Free for Developers to Build and Host Most Alexa Skills Using AWS

2017-03-15T10:00:00-07:00

Amazon today announced a new program that will make it free for tens of thousands of Alexa developers to build and host most Alexa skills using Amazon Web Services (AWS). Many Alexa skill developers currently take advantage of the AWS Free Tier, which offers one million AWS Lambda requests and up to 750 hours of Amazon Elastic Compute Cloud (Amazon EC2) compute time per month at no charge. However, if developers exceed the AWS Free Tier limits, they may incur AWS usage fees each month. Now, developers with a live Alexa skill can apply to receive a $100 AWS promotional credit and can earn an additional $100 per month in AWS promotional credits if they incur AWS usage charges for their skill – making it free for developers to build and host most Alexa skills. Our goal is to free up developers to create more robust and unique skills that can take advantage of AWS services. We can't wait to see what you create.

How It Works

If you have one or more live Alexa skills, you are eligible to receive a $100 AWS promotional credit to be used toward AWS fees incurred in connection with your skills. Additionally, if you continue to incur skill-related AWS charges that exceed the initial $100 promotional credit, you will also be eligible to receive monthly AWS promotional credits of $100. All you need to do is apply once.

Apply Now >

Build and Host Alexa Skills with AWS

With the new program, if you exceed the AWS Free Tier due to growth of your skill, or are looking to scale your skill using AWS services, you will be eligible to receive AWS promotional credits to be applied to AWS services such as Amazon EC2, Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, and Amazon CloudFront. For example, you can use DynamoDB to create more engaging skills that have context and memory. In a game with memory, you could pause for a few hours and then keep going (like The Wayne Investigation, or Sub War). Or, to give your customers a more immersive experience, consider incorporating audio files via Amazon S3 to stream short audio bursts, games, podcasts, or news stories in your skill. Many of our most engaging skills, like Ambient Noise and RuneScape Quests – One Piercing Note, add audio to soothe, and voiceovers and sound effects to make the in-game experience more immersive.

Build a Skill Today - Special Offers

Our skill templates and step-by-step guides are a valuable way to quickly learn the end-to-end process for building and publishing an Alexa skill. You can get started quickly with the city guide template or fact skill template, or use the Alexa SDK for Node on GitHub to create a custom skill. Plus, if you publish a skill, you'll receive an Alexa dev t-shirt. Quantities are limited. See Terms and Conditions.

Additional Resources

For more information on getting started with developing for Alexa, check out the following resources:

  • Voice Design Best Practices
  • Alexa Skills Kit (ASK)
  • Alexa Voice Service (AVS)
  • The Alexa Fund
  • ASK Developer Forums
  • Weekly Developer Office Hours

[...]
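
For developers curious what the Lambda side of a skill looks like, here is a minimal sketch of a Python handler for a custom skill. The intent name GetFactIntent is a made-up example; a real skill branches on the intents defined in its own interaction model.

    def lambda_handler(event, context):
        """Minimal handler for a custom Alexa skill hosted on AWS Lambda."""
        request = event["request"]
        if request["type"] == "LaunchRequest":
            text = "Welcome. Ask me for a fact."
        elif (request["type"] == "IntentRequest"
              and request["intent"]["name"] == "GetFactIntent"):
            # GetFactIntent is a hypothetical intent for this sketch.
            text = "Amazon EC2 became generally available in 2008."
        else:
            text = "Goodbye."
        # Standard Alexa Skills Kit response envelope.
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": text},
                "shouldEndSession": True,
            },
        }

Skills that need the context and memory mentioned above would typically persist session state in a DynamoDB table keyed on the user ID.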



How companies can become magnets for digital talent

2017-03-13T10:00:00-07:00

This article, titled "Wie Unternehmen digitale Talente anziehen" ("How companies attract digital talent"), appeared in German last week in the "Tipps für Arbeitgeber" (tips for employers) section of WirtschaftsWoche.

The rise of digital business models is a huge challenge for recruiting and talent selection. The skills businesses need today are in short supply. Here is how companies can prepare themselves to attract the best talent for shaping their digital business.

Digitalization offers almost endless possibilities to communicate faster, work more efficiently, and be more creative – in real-time. But groundbreaking digital business models need pioneers: creators, forward-looking thinkers and inventors who don't hesitate to leave the beaten path, embody ownership, and understand how to translate customers' wishes into superb new products, services and solutions that evolve with speed. It is a no-brainer that getting the right talent on board can decisively accelerate a company's digital transformation. At the same time, if your daily corporate practice doesn't fulfill their expectations of a vibrant and flexible working culture and a social media-minded environment, digital natives will simply turn their backs on you and go elsewhere.

Finding these kinds of people is not easy. There are probably only a few companies that can say they already have a sufficient number of such employees on their staff. Job openings for machine learning scientists, data analytics experts, IT security experts and developers are already difficult to fill, and the demand for this knowledge will increase significantly in the next few years as customers show their demand for digital engagements. The market for digital skills is "hot," in the U.S. as well as in Germany. And these talents are by no means coveted only by companies that had a digital business model to begin with; suppliers to the automotive industry, financial services companies, and retailers also urgently need product managers and technical staff who can quickly make their organizations digitally attractive to their customers. Recruiting and selection in the digital age therefore need to be tackled more strategically than in the past. So how do you position your company as an attractive employer for digital talent?

Preparing the organization for a new beginning

One way is to eliminate rigid structures, long the enemy of digital thinking. Digitalization means, among other things, that areas that used to be siloed suddenly converge. Take industrial companies. In the past, their sales departments defined specifications according to the customer's wishes, which were then transferred step by step into the manufacturing process. These days, it's expected that everything should happen almost simultaneously. Previously, the top priorities for IT departments were equipping data centers with hardware, purchasing software, and further developing proprietary software. Today, companies take their server capacity and software from the cloud. These changes have to be taken into account when scanning the market for talent. At Düsseldorf-based fashion retailer Peek&Cloppenburg, for example, the business, development and IT functions are increasingly cooperating with each other because they realize that isolated departments and rigid hierarchies can slow down the organization's innovative strength and speed. That is also why employees have more and more room to make decisions themselves. P&C's digital transformation is supported by an in-house consulting team that helps the specialized departments analyze and digitize the processes that strengthen customer touchpoints.

The freedom to create

Another way to make your company attractive to digital talent is to give them as much creative freedom as possible. AutoScout24, a Munich-based online marketplace for car, motorcycle and utility vehicle sales, is a digital native company. Recognizing that it needed faster decision making, AutoScout24 started to empower employees[...]



Back-to-Basics Weekend Reading: The Foundations of Blockchain

2017-03-10T08:00:00-08:00


More and more stories are appearing, like this one in HBR by MIT Media Lab's Joi Ito and colleagues. It praises the power of blockchain as a disruptive technology, on par with how "the internet" changed everything.

I am always surprised to see such far-reaching predictions made without diving into the technology itself. This weekend I would like to read about some of the technologies that predate blockchain, as they are its fundamental building blocks.

Blockchain technology first came on the scene in 2008, as a core component of the bitcoin cryptocurrency. Blockchain provides transactional, distributed ledger functionality that can operate without a centralized, trusted authority. Updates recorded in the ledger are immutable, with cryptographic time-stamping to achieve serializability. Blockchain's robust, decentralized functionality is very attractive for global financial systems, but can easily be applied to contracts, or operations such as global supply chain tracking.

When we look at the foundations of blockchain, there are three papers from the nineties that describe different components whose principles found their way into blockchain. The 1991 paper by Haber and Stornetta describes how to use cryptographic signatures to time-stamp documents. The 1998 paper by Schneier and Kelsey describes how to use cryptography to protect sensitive information in log files on untrusted machines. Finally, the 1996 paper by Ross Anderson describes a decentralized storage system from which recorded updates cannot be deleted.
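
To see how Haber and Stornetta's linking idea becomes a tamper-evident chain, here is a minimal sketch of my own (an illustration, not the paper's full protocol): each certificate embeds the hash of its predecessor, so silently rewriting any document invalidates every later certificate.

    import hashlib, json, time

    def timestamp(document: bytes, prev_cert_hash: str) -> dict:
        """Issue a time-stamp certificate linked to the previous one."""
        cert = {
            "doc_hash": hashlib.sha256(document).hexdigest(),
            "time": time.time(),
            "prev": prev_cert_hash,
        }
        cert["self"] = hashlib.sha256(
            json.dumps(cert, sort_keys=True).encode()).hexdigest()
        return cert

    genesis = timestamp(b"first document", "0" * 64)
    second = timestamp(b"second document", genesis["self"])
    # The chain property that blockchains inherit: each entry commits
    # to the entire history before it.
    assert second["prev"] == genesis["self"]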

I hope these will deepen your fundamental understanding of blockchain technology.

"How to Time-Stamp a Digital Document", Stuart Haber, and W. Scott Stornetta, In Advances in Cryptology – Crypto ’90, pp. 437–455. Lecture Notes in Computer Science v. 537, Springer-Verlag, Berlin 1991.

"Cryptographic Support for Secure Logs on Untrusted Machines", Bruce Schneier, and John Kelsey, in The Seventh USENIX Security Symposium Proceedings, pp. 53–62. USENIX Press, Januar 1998.

"The Eternity Service", Ross J. Anderson. Pragocrypt 1996.




Back-to-Basics Weekend Reading: Why Do Computers Stop and What Can Be Done About It?

2017-03-04T09:00:00-08:00


"Everything fails, all the time." A humble computer scientist once said. With all the resources we have today, it is easier for us to achieve fault-tolerance than it was many decades ago when computers began playing a role in critical systems such as health care, air traffic control and financial market systems. In the early days, the thinking was to use a hardware approach to achieve fault-tolerance. It was not until the mid-nineties that software fault-tolerance became more acceptable.

Tandem Computers was one of the pioneers in building these fault-tolerant, mission-critical systems. They used a shared-nothing multi-CPU approach, where each CPU had its own memory and I/O bus, and all were connected through a replicated shared bus over which the independent OS instances could communicate and run in lockstep. In the late seventies and early eighties, this was considered the state of the art in fault tolerance.

Jim Gray, the father of concepts like the transaction, worked on software fault tolerance at Tandem. To be able to build better systems, he went deep into deconstructing the kinds of failures Tandem customers were experiencing. He wrote up his findings in his "Why Do Computers Stop" report. For a very long time, this would be the only study available on reliability in production computer systems.

As important as the study is, the paper also covers "what can be done about it." Jim, for the first time, introduces concepts like process pairs and transactions as the basis for software fault tolerance. This is one of the fundamental papers of fault tolerance in distributed systems, and I am going to enjoy reading it this weekend. I hope you will, too.
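
As an illustration of the process-pair idea, here is a conceptual sketch of mine (not Tandem's actual implementation): the primary ships state checkpoints, which double as heartbeats, to a backup that takes over once the primary goes silent.

    import time

    class ProcessPair:
        """Sketch of Gray's process pairs: primary checkpoints state to a
        backup, which promotes itself when heartbeats stop arriving."""
        def __init__(self, takeover_timeout=3.0):
            self.timeout = takeover_timeout
            self.backup_state = None
            self.last_heartbeat = time.monotonic()

        def checkpoint(self, state):
            # Primary sends its state and an implicit heartbeat.
            self.backup_state = state
            self.last_heartbeat = time.monotonic()

        def backup_should_take_over(self):
            # The backup takes over after `timeout` seconds of silence,
            # resuming from the last checkpointed state.
            return time.monotonic() - self.last_heartbeat > self.timeout

    pair = ProcessPair(takeover_timeout=3.0)
    pair.checkpoint({"last_committed_txn": 42})
    print(pair.backup_should_take_over())  # False: the heartbeat is fresh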

"Why Do Computers Stop and What Can Be Done About It?", Jim Gray, June 1985, Tandem Technical report 85.7




Back-to-Basics Weekend Reading: Byzantine Generals

2017-02-24T20:00:00-08:00


In reliable distributed systems, we need to handle different failure scenarios. Many of those deal with message loss and process failure. However, there is a class of scenarios involving malfunctioning processes that send out conflicting information. The challenge is to develop algorithms that can reach agreement in the presence of these failures.

Lamport has described how he was frustrated by the attention Dijkstra got for describing a computer science problem as the story of the dining philosophers. He decided the best way to attract attention to a particular distributed systems problem was to present it in terms of a story; hence, the Byzantine generals.

Abstractly, the problem can be described in terms of a group of generals of the Byzantine army, camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others. The problem is to find an algorithm that ensures the loyal generals will reach agreement.

It is shown that, using only oral messages, the problem is solvable if and only if more than two-thirds of the generals are loyal; so a single traitor can confound two loyal generals. With unforgeable written messages, the problem is solvable for any number of generals and possible traitors. This weekend, I will be going back in time and reading three fundamental papers that laid out the problem and its first solutions. In the SIFT paper, the problem is first described; the "Reaching Agreement" paper describes the fundamental result that 3m+1 processors are needed to tolerate m faulty ones; and the last paper reviews and generalizes the previous results.
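
To make the oral-messages result concrete, here is a compact simulation of the recursive OM(m) algorithm from the Lamport, Shostak, and Pease paper. The traitor behavior (a deterministic pseudo-random lie per recipient) is my own stand-in for "arbitrary."

    from collections import Counter

    def send(sender, receiver, value, traitors):
        # A traitor may say different things to different generals.
        if sender in traitors:
            return hash((sender, receiver)) % 2 == 0
        return value

    def om(commander, generals, value, m, traitors):
        """OM(m): returns what each lieutenant decides the commander said."""
        lieutenants = [g for g in generals if g != commander]
        received = {lt: send(commander, lt, value, traitors) for lt in lieutenants}
        if m == 0:
            return received
        # Each lieutenant relays its received value via OM(m-1)...
        relayed = {lt: om(lt, lieutenants, received[lt], m - 1, traitors)
                   for lt in lieutenants}
        # ...and everyone takes the majority of what they heard.
        return {j: Counter([received[j]] + [relayed[i][j]
                for i in lieutenants if i != j]).most_common(1)[0][0]
                for j in lieutenants}

    # Four generals, one traitor: the n = 3m+1 boundary for OM(1).
    print(om(0, [0, 1, 2, 3], True, 1, traitors={3}))

Run as shown, the loyal lieutenants (1 and 2) decide the same value; with only three generals and one traitor, they may not.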

Maybe you will enjoy them as well.

"SIFT: Design and Analysis of a Fault-Tolerant Computer for Aircraft Control", John H. Wensley, Leslie Lamport, Jack Goldberg, Milton W. Green, Karl N. Levitt, P. M. Melliar-Smith, Robert E. Shostak, and Charles B. Weinstock, in Proceedings of the IEEE 66 (10), October 1978.

"Reaching Agreement in the Presence of Faults" M. Pease, R. Shostak, and L. Lamport, 1980, J. ACM 27, 2 (April 1980), 228-234.

"The Byzantine Generals Problem", Lamport, L.; Shostak, R.; Pease, M. (1982), ACM Transactions on Programming Languages and Systems. 4 (3): 382–401. doi:10.1145/357172.357176.




Back-to-Basics Weekend Reading: Monte Carlo Methods

2017-02-10T11:00:00-08:00


I always enjoy looking for solutions to difficult challenges in non-obvious places. That is probably why I like using probabilistic techniques for problems that appear hard, or even impossible, to solve deterministically. The probabilistic approach may not yield a perfect result, but it can get you very close, and much faster than deterministic techniques (which may even be computationally infeasible).

Some of the earliest approaches to using probabilities in physics experiments resulted in the Monte Carlo methods. Their essential idea is to use randomness to solve problems that might be deterministic in principle. They are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results.
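
The canonical toy example of repeated random sampling is estimating π; a short sketch in Python:

    import random

    def estimate_pi(samples: int) -> float:
        """Sample points in the unit square and count how many land
        inside the quarter circle of radius 1; that fraction is pi/4."""
        inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                     for _ in range(samples))
        return 4.0 * inside / samples

    # Converges slowly (error ~ 1/sqrt(N)) toward 3.14159...
    print(estimate_pi(1_000_000))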

The Monte Carlo methods can be traced back to Stanislaw Ulam, John von Neumann, and Nicholas Metropolis at the Los Alamos Scientific Laboratory in the late forties. The Monte Carlo methods were crucial to the simulations of the Manhattan Project, given the limited computational power available in those days.

The paper I will be reading this weekend is the original 1949 paper by Metropolis and Ulam. For fun, I've also decided to add a second paper, by Herbert Anderson, who was a member of the Manhattan Project. Anderson's paper describes the use of Monte Carlo methods, and of computers, in the Manhattan Project.

"The Monte Carlo Method", Nicholas Metropolis and S. Ulam, Journal of the American Statistical Association, Vol. 44, No. 247 (Sep. 1949), pp. 335–341.

"Metropolis, Monte Carlo and the MANIAC", Anderson, Herbert L., Los Alamos Science, (1986) 14: 96–108.




Back-to-Basics Weekend Reading: Bloom Filters

2017-02-03T11:00:00-08:00


Listening to the "Algorithms to Live By" audiobook on my commute this morning, I was once again struck by the beauty of Bloom filters. So, I decided it is time to resurrect the 'Back-to-Basics Weekend Reading' series, as I will be re-reading some fundamental CS papers this weekend.

In the past, I have done some weekend reading about Counting Bloom Filters, but now I am going even more fundamental, and I invite you to join me.

Bloom filters, conceived by Burton Bloom in 1970, are probabilistic data structures used to test whether an item is in a set. False positives are possible, but false negatives are not: if any of the bits an item maps to is unset, you can be sure the item is not in the set; if all of them are set, the item is probably in the set, but it might be a false positive.
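
A minimal sketch of the idea in Python (illustrative only; the parameters m and k are arbitrary here, and real implementations tune them to the expected number of items and the target false-positive rate):

    import hashlib

    class BloomFilter:
        """Minimal Bloom filter: k hash positions over an m-bit array.
        No false negatives; false positives grow as the array fills."""
        def __init__(self, m=1024, k=5):
            self.m, self.k = m, k
            self.bits = [False] * m

        def _positions(self, item: str):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(h, 16) % self.m

        def add(self, item: str):
            for p in self._positions(item):
                self.bits[p] = True

        def might_contain(self, item: str) -> bool:
            return all(self.bits[p] for p in self._positions(item))

    bf = BloomFilter()
    bf.add("dynamo")
    assert bf.might_contain("dynamo")      # added items are always found
    print(bf.might_contain("postgresql"))  # almost certainly False here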

This is a hugely important technique if you need to process and track massive numbers of unique data items, as it is very space-efficient. From Dynamo and PostgreSQL to HBase and Bitcoin, Bloom filters are used in almost all modern distributed systems. This weekend I will be reading the original 1970 paper by Bloom, and a more recent survey paper that describes several variants and applications that have been developed over the years.

"Space/Time Trade-offs in Hash Coding with Allowable Errors", Bloom, Burton H., in Communications of the ACM, 13 (7): 422–426

"Cache-, Hash- and Space-Efficient Bloom Filters", Putze, F.; Sanders, P.; Singler, J., in Demetrescu, Camil, Experimental Algorithms, 6th International Workshop, WEA 200




A survival strategy for the digital transformation

2017-01-29T09:00:00-08:00

This article, titled "Überlebensstrategie für die digitale Transformation" ("A survival strategy for the digital transformation"), appeared in German last week in the "Die Zukunft beginnt heute" (the future starts today) section of WirtschaftsWoche.

Smaller companies have a lot to gain in the digital era – provided they adopt the right mindset. The winners will be those that view their business through the eyes of their customers and understand that fast-paced innovation is the key to long-term growth. With this mindset they can take on even the largest enterprises that are slow to adapt to the fast-moving digital reality.

The digital era is here. Companies that haven't realized that by now will fall behind. In many industry segments and markets, for example platform services, we've already witnessed how start-ups and niche providers have unleashed a revolution. Companies that used to be dominant but stare at the changes around them for too long in a state of paralysis can quickly end up in a struggle to survive – look no further than the entertainment and music industry, where streaming services have taken a significant bite out of the hard-copy providers' business. The better you understand why and how small and medium-sized players can conquer global markets, the better positioned you'll be to come out a winner.

Digitalization allows even the smallest companies to think big, because it puts technology into their hands that was previously hard to access and too costly to acquire. But adopting modern technologies alone is not enough to win the market battle. When new technologies are combined with a passion for putting the interests of the customer at the heart of everything you do, however, they can give agile companies that decisive push to the front of the pack. And Mittelstand companies (Germany's small and medium-sized businesses) have fantastic opportunities, provided they digitize more of their existing business models. Especially in manufacturing-based industries, introducing more software that complements hardware can eliminate fixed costs and allow you to quickly scale up to a global level. Companies that embrace this can rise to become leading players, taking the place previously reserved for the 'big shots' in their industries.

Digitalization starts with having the right mindset: namely, one aimed at creating innovative digital experiences. Continuous customer-centric experimentation has been the leading principle at Amazon from the start, in both our e-commerce activities and Amazon Web Services. We found that by organizing our innovation efforts around customers' needs, we could innovate very fast. Since 2006, Amazon Web Services has introduced far more than 2,500 new services and features, and around 90% of them were the result of wishes articulated directly by customers.

The first requirement for developing an innovation mindset is to adapt your offerings quickly to changing customer behavior. There are great examples of German companies that already do this. One is Vorwerk and its all-in-one cooking machine, the Thermomix. This premium product has been on the market for more than 50 years. But the way people cook today is different than in the 1960s. Today, cooking must be convenient, fast, and healthy. People want to prepare meals without too much effort, yet some appreciate a bit of guidance during the entire cooking process, from picking a new recipe from Vorwerk's cloud-based database to putting the finished meal on the table.

Companies that want to adopt a digital innovation mindset should start leaving their comfort zone – even if they don't (yet) feel any pressure to change. Put another way: they have to develop an inner drive not just to deliver on their customers' changing needs, but to anticipate them. A company that does this very well is SKF, the global market leader in ball bearings and a supplier to many i[...]



Expanding the AWS Cloud: Introducing the AWS Europe (London) Region

2016-12-14T00:00:00-08:00

In November 2015, Amazon Web Services announced that it would launch a new AWS infrastructure region in the United Kingdom. Today, I'm happy to announce that the AWS Europe (London) Region, our 16th technology infrastructure region globally, is now generally available for use by customers worldwide.

UK companies are using AWS to innovate across diverse industries, such as energy, manufacturing, pharmaceuticals, retail, media, and financial services, and the UK is home to some of the world's most forward-thinking businesses. These range from startups like FanDuel, JustEat, and Monzo to enterprises such as British Gas, Trainline, Travis Perkins, News UK, and the Financial Times. The British Government is also helping to drive innovation and has embraced a cloud-first policy for technology adoption. Take Peterborough City Council as an example. The council has deployed IoT weather stations in schools across the city and is using the sensor information collated in a data lake to gain insights on whether weather or pollution play a part in learning outcomes.

London has also established itself as a critical center for the financial services sector and a significant hub for venture capital activity across all of Europe. The city's thriving venture capital and start-up accelerator communities are fueling growth and innovation, making it one of the most important locations in the world to do business. AWS is working with incubators and accelerators such as SeedCamp and Techstars in London; Ignite100 in Newcastle; and DotForge in Sheffield and Manchester to help startups make the most of the cloud.

We believe in our customers and are investing for the long term. With the AWS Europe (London) Region, we look to better serve end users in the UK. With its launch, AWS can enable many more UK enterprise, public sector, and startup customers to reduce IT costs, address data locality needs, and embark on rapid transformations in critical new areas, such as big data analysis and the Internet of Things.

All around us we see that AWS capabilities foster a culture of experimentation with businesses of all sizes. AWS is not only affordable, but it is secure and scales reliably to drive efficiencies into business transformations. I have been humbled by just how much our UK customers have been able to achieve using AWS technology so far. In just this past month we've had HSBC, ARM, Missguided, and most recently at re:Invent 2016, Trainline, talking with us about how they are using AWS to transform and scale their businesses. Following are just a few of the reasons that customers have given us for building their business on the AWS Cloud:

Blend seamlessly into the digital world: With the rising importance of technology-driven business transformation, an emphasis on certain enterprise and consumer-based opportunities emerges. To take advantage of these game-changing opportunities, businesses are looking to blend into the digital world. Take GoSquared, a UK startup that runs all its development and production processes on AWS, as an example. GoSquared provides various analytics services that web and mobile companies can use to understand their customers' behaviors. With AWS, GoSquared can process tens of billions of data points every day from four continents to provide customers with a single view.

Use catalysts for real-time business models: The Internet of Things (IoT) is undoubtedly driving a philosophy of interconnecting people, processes, and machines to create massive volumes of data that have the potential for disruptive change. The BMW Group is using AWS for its new connected-car application that collects sensor data from BMW 7 Series cars to give drivers dynamically updated map information. BMW built its new car-as-a-sensor (CARASSO) service in o[...]



Expanding the AWS Cloud: Introducing the AWS Canada (Central) Region

2016-12-08T10:30:00-08:00

Earlier this year, Amazon Web Services (AWS) announced it would launch a new AWS infrastructure region in Montreal, Quebec. Today, I'm happy to share that the Canada (Central) Region is available for use by customers worldwide. The AWS Cloud now operates in 40 Availability Zones within 15 geographic regions around the world, with seven more Availability Zones and three more regions coming online in China, France, and the U.K. in the coming year.

The Canadian opportunity

Canada has set forth a bold innovation agenda grounded in entrepreneurship, scientific research, growing small and medium-sized businesses with a focus on environmentally friendly technologies, and the transition to a digital economy. This agenda leverages the transformative aspects of technology and encourages Canadian companies, universities, governments, not-for-profits, and entrepreneurs to contribute to building a durable innovation economy. Given this, enterprises, public sector bodies, startups, and small businesses are looking to adopt agile, scalable, and secure public cloud solutions. The new Canada (Central) Region offers a robust suite of infrastructure, management, and developer services that can enable innovators to deploy market-leading applications. Access to secure, scalable, low-cost AWS infrastructure in Canada allows customers to innovate and provides tools to meet privacy, sovereignty, and compliance requirements.

The new AWS Canada (Central) Region also continues the company's focus on delivering cloud technologies to customers in an environmentally friendly way. AWS data centers in Canada will draw from a regional electricity grid that is 99 percent powered by hydropower. For more information about AWS efforts, see AWS & Sustainability.

Some examples of how current customers use AWS are:

Cost-effective solutions

Kik Interactive is a Canadian chat platform with hundreds of millions of users around the globe. It adopted Amazon Redshift, Amazon EMR, and AWS Lambda to power its data warehouse, big data, and data science applications, supporting the development of product features at a fraction of the cost of competing solutions.

Rapid time to market

The Globe and Mail (Globe) is one of Canada's most read newspapers, with a national weekly circulation of 4.7 million. To increase online readership, it worked with AWS Partner Network (APN) Partner ClearScale to develop a personal recommendation capability. The solution, which leverages Amazon Kinesis, Amazon DynamoDB, and Amazon EMR to collect, store, and process the data, as well as AWS CloudFormation and AWS OpsWorks to support the Globe's DevOps environment, was deployed in three months, less than half the time it would have taken had the newspaper built it on-premises.

Enterprise-class services available from Canada

Box is an enterprise content management and collaboration platform used by more than 41 million users and 59,000 businesses, including 59% of the Fortune 500. It relies on the scale and power of Amazon Simple Storage Service (Amazon S3) to deliver in-region storage options to businesses and organizations across the world, in Canada, Japan, Singapore, Australia, Ireland, Germany, and the U.S., as part of its Box Zones ecosystem. Having the ability to provide these services locally enables Box to better serve Canadian enterprises looking for cloud solutions while ensuring their data is stored inside Canada.

Increasing agility

Lululemon Athletica is a Canadian athletic apparel company that is using AWS Lambda, AWS CodePipeline, and AWS CodeDeploy to rapidly build and deploy its digital marketing and e-commerce solutions for the upcoming 2016 holiday season. By using AWS to manage the continuous deployment and delivery of their applications, Lululemon personnel can focus on[...]



Transforming Development with AWS

2016-12-01T12:00:00-08:00

In my keynote at AWS re:Invent today, I announced 13 new features and services (in addition to the 15 we announced yesterday). My favorite parts of James Bond movies are where 007 gets to visit Q to pick up and learn about new tools of the trade: super-powered tools with special features that he can use to complete his missions, and, in some cases, get out of some nasty scrapes. Bond always seems to have the perfect tool for every situation that he finds himself in.

At AWS, we want to be the Q for developers, giving them the super-powered tools and services with deep features in the Cloud. In the hands of builders, the impact of these services has been to completely transform the way applications are developed, debugged, and delivered to customers. I was joined by 32,000 James Bonds at the conference today from all around the world, and we introduced new services focused on accelerating this transformation across development, testing and operations, data and analytics, and computation itself.

Transformation in Development, Testing, & Operations

Although development and operations are often overlooked, they are the engines of agility for most organizations. Companies cannot afford to wait two or three years between releases; customers have found that continually releasing incremental functionality frequently reduces risk and improves quality. Today, we're making available broad new services that let builders prepare and operate their applications more quickly and efficiently, and respond swiftly to changes in both their business and their operating environment. We launched the following new services and features today to help:

AWS OpsWorks for Chef Automate: a fully managed Chef Automate environment, available through AWS OpsWorks, to fuel even more automation and reduce the heavy lifting associated with continuous deployment.

Amazon EC2 Systems Manager: a collection of tools for package installation, patching, resource configuration, and task automation on Amazon EC2.

AWS CodeBuild: a new, fully managed, extensible service for compiling source code and running unit tests, integrated with other application lifecycle management services (such as AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline) to dramatically decrease the time between iterations of software.

AWS X-Ray: a new service to analyze, visualize, and debug distributed applications, allowing builders to identify performance bottlenecks and errors.

Personal Health Dashboard: a new personalized view of AWS service health for all customers, allowing developers to gain visibility into service health issues that may be affecting their applications.

AWS Shield: protective armor against distributed denial of service (DDoS) attacks, available as Shield Standard and Shield Advanced. Shield Standard gives DDoS protection to all customers using API Gateway, Elastic Load Balancing, Route 53, CloudFront, and EC2. Shield Advanced protects against more sophisticated DDoS attacks, with access to help through a 24x7 AWS DDoS response team.

Transformation in Data

In the old world, access to infrastructure resources was a big differentiator for big, wealthy companies. No more. Today, any developer can have access to a wealth of infrastructure technology services that bring advanced technology to their fingertips in the Cloud. The days of differentiation through infrastructure are behind us; the technology is now evenly distributed.
Instead, most companies today and in the future will differentiate themselves through the data that they collect and have access to, and the way in which they can put that data to work for the benefit of their customers. We rolled out three new servi[...]



Bringing the Magic of Amazon AI and Alexa to Apps on AWS.

2016-11-30T10:00:00-08:00

From the early days of Amazon, machine learning (ML) has played a critical role in the value we bring to our customers. Around 20 years ago, we used machine learning in our recommendation engine to generate personalized recommendations for our customers. Today, there are thousands of machine learning scientists and developers applying machine learning in various places, from recommendations to fraud detection, from inventory levels to book classification to abusive review detection. There are many more application areas where we use ML extensively: search, autonomous drones, robotics in fulfillment centers, text processing and speech recognition (such as in Alexa), etc.

Among machine learning algorithms, a class of algorithms called deep learning has come to represent those algorithms that can absorb huge volumes of data and learn elegant and useful patterns within that data: faces inside photos, the meaning of a text, or the intent of a spoken word. After over 20 years of developing these machine learning and deep learning algorithms and the end user services listed above, we understand the needs of both the machine learning scientist community that builds these algorithms and the app developers who use them. We also have a great deal of machine learning technology that can benefit machine learning scientists and developers working outside Amazon.

Last week, I wrote a blog post about helping the machine learning scientist community select the right deep learning framework from among the many we support on AWS, such as MXNet, TensorFlow, Caffe, etc. Today, I want to focus on helping app developers who have chosen to develop their apps on AWS. These developers have in the past built some of the seminal apps of our times on AWS, such as Netflix, Airbnb, and Pinterest, and created internet-connected devices powered by AWS, such as Alexa and Dropcam.

Many app developers have been intrigued by the magic of Alexa and other AI-powered products they see being offered or used by Amazon, and want our help in developing their own magical apps that can hear, see, speak, and understand the world around them. For example, they want us to help them develop chatbots that understand natural language, build Alexa-style conversational experiences for mobile apps, dynamically generate speech without using expensive voice actors, and recognize concepts and faces in images without requiring human annotators. However, until now, very few developers have been able to build, deploy, and broadly scale applications with AI capabilities, because doing so required specialized expertise (think Ph.D.s in ML and neural networks) and access to vast amounts of data. Effectively applying AI involves extensive manual effort to develop and tune many different types of machine learning and deep learning algorithms (e.g., automatic speech recognition, natural language understanding, image classification), collect and clean the training data, and train and tune the machine learning models. And this process must be repeated for every object, face, voice, and language feature in an application.

Today, I am excited to announce that we are launching three new Amazon AI services that eliminate all of this heavy lifting, making AI broadly accessible to all app developers by offering Amazon's powerful and proven deep learning algorithms and technologies as fully managed services that any developer can access through an API call or a few clicks in the AWS Management Console.
These services are Amazon Lex, Amazon Polly, and Amazon Rekognition, and they will help AWS app developers build the next generation of magical, intelligent apps. Amazon AI services make the full power of Amazon's natural language understanding, speech recognition, text-to-speech, and im[...]
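
To give a flavor of the "API call" experience, here is a hedged sketch using boto3; the region, voice, and file names are my own assumptions, and you would need AWS credentials with access to these services.

    import boto3

    # Text-to-speech with Amazon Polly.
    polly = boto3.client("polly", region_name="us-east-1")
    speech = polly.synthesize_speech(
        Text="Hello from Amazon Polly.",
        OutputFormat="mp3",
        VoiceId="Joanna",
    )
    with open("hello.mp3", "wb") as f:
        f.write(speech["AudioStream"].read())

    # Label detection with Amazon Rekognition on a local image file.
    rekognition = boto3.client("rekognition", region_name="us-east-1")
    with open("photo.jpg", "rb") as f:
        result = rekognition.detect_labels(Image={"Bytes": f.read()},
                                           MaxLabels=5)
    for label in result["Labels"]:
        print(label["Name"], label["Confidence"])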



MXNet - Deep Learning Framework of Choice at AWS

2016-11-22T09:00:00-08:00

Machine learning is playing an increasingly important role in many areas of our businesses and our lives, and is being employed in a range of computing tasks where programming explicit algorithms is infeasible. At Amazon, machine learning has been key to many of our business processes, from recommendations to fraud detection, from inventory levels to book classification to abusive review detection. And there are many more application areas where we use machine learning extensively: search, autonomous drones, robotics in fulfillment centers, text and speech recognition, etc.

Among machine learning algorithms, a class of algorithms called deep learning has come to represent those algorithms that can absorb huge volumes of data and learn elegant and useful patterns within that data: faces inside photos, the meaning of a text, or the intent of a spoken word. A set of programming models has emerged to help developers define and train AI models with deep learning, along with open source frameworks that put deep learning in the hands of mere mortals. Some examples of popular deep learning frameworks that we support on AWS include Caffe, CNTK, MXNet, TensorFlow, Theano, and Torch.

Among all these popular frameworks, we have concluded that MXNet is the most scalable framework. We believe that the AI community would benefit from putting more effort behind MXNet. Today, we are announcing that MXNet will be our deep learning framework of choice. AWS will contribute code and improved documentation as well as invest in the ecosystem around MXNet. We will partner with other organizations to further advance MXNet.

AWS and Support for Deep Learning Frameworks

At AWS, we believe in giving choice to our customers. Our goal is to support our customers with the tools, systems, and software of their choice by providing the right set of instances, software (AMIs), and managed services. Just as Amazon RDS supports multiple open source engines like MySQL, PostgreSQL, and MariaDB, in the area of deep learning frameworks we will support all popular frameworks by providing the best set of EC2 instances and appropriate software tools for them.

Amazon EC2, with its broad set of instance types and GPUs with large amounts of memory, has become the center of gravity for deep learning training. To that end, we recently made a set of tools available to make it as easy as possible to get started: a Deep Learning AMI, which comes pre-installed with the popular open source deep learning frameworks mentioned earlier; GPU acceleration through CUDA drivers that are already installed, pre-configured, and ready to rock; and supporting tools such as Anaconda and Jupyter. Developers can also use the distributed Deep Learning CloudFormation template to spin up a scale-out, elastic cluster of P2 instances using this AMI for even larger training runs.

As Amazon and AWS continue to invest in several technologies powered by deep learning, we will continue to improve all of these frameworks in terms of usability, scalability, and features. However, we plan to contribute significantly to one in particular: MXNet.

Choosing a Deep Learning Framework

Developers, data scientists, and researchers consider three major factors when selecting a deep learning framework:

  • The ability to scale to multiple GPUs (across multiple hosts) to train larger, more sophisticated models with larger, more sophisticated datasets. Deep learning models can take days or weeks to train, so even modest improvements here make a huge difference in the speed at which new models can be developed and evaluated.
  • Development speed and programmability, especially the opportunity to use languages they are already familiar w[...]
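
As a small taste of the programmability point, a sketch using MXNet's Python NDArray API (assuming a reasonably recent MXNet build; older versions may lack mx.context.num_gpus()):

    import mxnet as mx

    # Use a GPU context if one is present; otherwise fall back to the CPU.
    try:
        ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()
    except AttributeError:  # older MXNet builds
        ctx = mx.cpu()

    a = mx.nd.ones((1024, 1024), ctx=ctx)
    b = mx.nd.random.uniform(shape=(1024, 1024), ctx=ctx)
    c = mx.nd.dot(a, b)        # queued on MXNet's asynchronous engine
    print(c.asnumpy().mean())  # asnumpy() blocks until the result is ready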



Spice up your Analytics: Amazon QuickSight Now Generally Available in N. Virginia, Oregon, and Ireland.

2016-11-15T14:00:00-08:00

Previously, I wrote about Amazon QuickSight, a new service targeted at business users that aims to simplify the process of deriving insights from a wide variety of data sources quickly, easily, and at a low cost. QuickSight is a very fast, cloud-powered business intelligence service, at one-tenth the cost of old-guard BI solutions. Today, I am very happy to announce that QuickSight is now generally available in the N. Virginia, Oregon, and Ireland regions.

When we announced QuickSight last year, we set out to help all customers—regardless of their technical skills—make sense of their ever-growing data. As I mentioned, we live in a world where massive volumes of data are generated every day from connected devices, websites, mobile apps, and customer applications running on top of AWS infrastructure. This data is collected and streamed using services like Amazon Kinesis and stored in AWS relational data sources such as Amazon RDS, Amazon Aurora, and Amazon Redshift; NoSQL data sources such as Amazon DynamoDB; and file-based data sources such as Amazon S3. Along with data generated in the cloud, customers also have legacy data sitting in on-premises datacenters, scattered on user desktops, or stored in SaaS applications.

There's an inherent gap between the data that is collected, stored, and processed and the key decisions that business users make on a daily basis. Put simply, data is not always readily available and accessible to organizational end users. The data infrastructure to collect, store, and process data is geared primarily towards developers and IT professionals, whereas insights need to be derived not just by technical professionals but also by non-technical business users. Most business users continue to struggle to answer key business questions such as "Who are my top customers and what are they buying?", "How is my marketing campaign performing?", and "Why is my most profitable region not growing?"

While BI solutions have existed for decades, customers have told us that it takes an enormous amount of time, IT effort, and money to bridge this gap. The reality is that many traditional BI solutions are built on top of legacy desktop and on-premises architectures that are decades old. They require companies to provision and maintain complex hardware infrastructure and invest in expensive software licenses, maintenance fees, and support fees that cost upwards of thousands of dollars per user per year. They require teams of data engineers to spend months building complex data models and synthesizing the data before they can generate their first report. To scale to a larger number of users and support the growth in data volume spurred by social media, web, mobile, IoT, ad-tech, and ecommerce workloads, these tools require customers to invest in even more infrastructure to maintain performance. Finally, their complex user experiences are designed for power users and are not suitable for the fast-growing segment of business users. The cost and complexity to implement, scale, and use BI makes it difficult for most companies to make data analysis ubiquitous across their organizations.

Enter Amazon QuickSight

QuickSight is a cloud-powered BI service built from the ground up to address the big data challenges around speed, complexity, and cost.
QuickSight puts data at the fingertips of your business users in an easy-to-use user interface and at one-tenth the cost of traditional BI solutions, even if that data is scattered across various sources such as Amazon Redshift, Amazon RDS, Amazon S3, or Salesforce.com; legacy databases running on-premises; or even user desktops in Microsoft Excel or CSV file formats. Getti[...]



Meet the Teams Competing for the Alexa Prize

2016-11-14T09:00:00-08:00


On September 29, 2016, Amazon announced the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. We received applications from leading universities across 22 countries. Each application was carefully reviewed by senior Amazon personnel against a rigorous set of criteria covering scientific contribution, technical merit, novelty, and ability to execute. Teams of scientists, engineers, user experience designers, and product managers read, evaluated, discussed, argued, and finally selected the twelve teams who would be invited to participate in the competition.

Today, we’re excited to announce the 12 teams selected to compete with an Amazon sponsorship. In alphabetical order, they are:

  • Carnegie-Mellon University: CMU Magnus
  • Carnegie-Mellon University: TBD
  • Czech Technical University, Prague: eClub Prague
  • Heriot-Watt University, UK: WattSocialBot
  • Princeton University: Princeton Alexa
  • Rensselaer Polytechnic Institute: BAKAbot
  • University of California, Berkeley: Machine Learning @ Berkeley
  • University of California, Santa Cruz: SlugBots
  • University of Edinburgh, UK: Edina
  • University of Montreal, Canada: MILA Team
  • University of Trento, Italy: Roving Minds
  • University of Washington, Seattle: HuskyBot

These teams will each receive a $100,000 research grant as a stipend, Alexa-enabled devices, free Amazon Web Services (AWS) services to support their development efforts, access to new Alexa Skills Kit (ASK) APIs, and support from the Alexa team. Teams invited to participate without sponsorship will be announced on December 12, 2016.

We have challenged these teams to create a socialbot, a conversational AI skill for Alexa that converses engagingly and coherently with humans for 20 minutes on popular topics and news events such as Entertainment, Fashion, Politics, Sports, and Technology. This seemingly intuitive task continues to be one of the ultimate challenges for AI.

Teams will need to advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, common sense reasoning, and dialog planning. We will provide students with data and technical support to help them tackle these problems at scale, and live interactions and feedback from Alexa’s large user base to help them test ideas and iterate their algorithms much faster than previously possible.

As teams gear up for the challenge, we invite all of you to think about what you’d like to chat with Alexa about. In April, you and millions of other Alexa customers will be able to test the socialbots and provide feedback to the teams to help them create a socialbot you’ll want to chat with every day. Your feedback will also help select the finalists. In the meantime, follow the #AlexaPrize hashtag and bookmark the Alexa Prize site for updates.




Welcoming Adrian Cockcroft to the AWS Team.

2016-10-24T09:00:00-07:00


I am excited that Adrian Cockcroft will be joining AWS as VP of Cloud Architecture. Adrian has played a crucial role in developing the cloud ecosystem as Cloud Architect at Netflix and later as a Technology Fellow at Battery Ventures. Prior to this, he held positions as Distinguished Engineer at eBay and Sun Microsystems. One theme that has been consistent throughout his career is that Adrian has a gift for seeing the bigger engineering picture.

At Netflix, Adrian played a key role in the company's much-discussed migration to a "cloud native" architecture, and the open sourcing of the widely used (and award-winning) NetflixOSS platform. AWS customers around the world are building more scalable, reliable, efficient and well-performing systems thanks to Adrian and the Netflix OSS effort.

Combine Adrian's big-picture thinking with his excellent educational skills, and you understand why Adrian deserves the respect he receives around the world for helping others be successful on AWS. I'd like to share a few of Adrian's own words about his decision to join us:

"After working closely with many folks at AWS over the last seven years, I am thrilled to be joining the clear leader in cloud computing.The state of the art in infrastructure, software packages, and services is nowadays a combination of AWS and open source tools. -- and they are available to everyone. This democratization of access to technology levels the playing field, and means anyone can learn and compete to be the best there is."

I am excited about welcoming Adrian to the AWS team, where he will work closely with AWS executives and product groups and consult with customers on their cloud architectures -- from start-ups that were born in the cloud to large web-scale companies and enterprises that have an "all-in" migration strategy. Adrian will also spend time engaging with developers in the Amazon-sponsored and supported open source communities. I am really looking forward to working with Adrian again and seeing the positive impact he will have on AWS customers around the world.




Expanding the AWS Cloud: Introducing the AWS US East (Ohio) Region

2016-10-17T09:00:00-07:00


Today I am very happy to announce the opening of the new US East (Ohio) Region. The Ohio Region is the fifth AWS region in the US. It brings the worldwide total of AWS Availability Zones (AZs) to 38, and the number of regions globally to 14. The pace of expansion at AWS is accelerating, and Ohio is our third region launch this year. In the remainder of 2016 and in 2017, we will launch another four AWS regions in Canada, China, the United Kingdom, and France, adding another nine AZs to our global infrastructure footprint.

We strive to place customer feedback first in our considerations for where to open new regions. The Ohio Region is no different. Now customers who have been requesting a second US East region have more infrastructure options for running workloads, storing files, running analytics, and managing databases. The Ohio Region launches with three AZs so that customers can create high-availability environments and architect for fault tolerance and scalability. As with all AWS AZs, the AZs in Ohio each have redundant power, networking, and connectivity, which are designed to be resilient to issues in another AZ.

We are also glad to offer low transfer rates between both US East Regions. Data transfer between the Ohio Region and the Northern Virginia Region is priced the same as data transfer between AZs within either of these regions. We hope this will be helpful for customers who want to implement backup or disaster recovery architectures and need to transfer large amounts of data between these regions. It will also be useful for developers who simply want to use services in both regions and move resources back and forth between them. The Ohio Region also has a broad set of services comparable to our Northern Virginia Region, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), and AWS Marketplace. Check out the Regional Products and Services page for the full list.
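
For developers who want to move resources into the new region, here is a small boto3 sketch (the AMI ID is a placeholder): copying an AMI from Northern Virginia into Ohio is a single call against the destination region.

    import boto3

    # Copy an AMI from N. Virginia (us-east-1) into the new Ohio Region
    # (us-east-2); copy_image is invoked on the destination-region client.
    ohio_ec2 = boto3.client("ec2", region_name="us-east-2")
    copy = ohio_ec2.copy_image(
        Name="my-app-ami-ohio",
        SourceImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        SourceRegion="us-east-1",
    )
    print("New AMI in Ohio:", copy["ImageId"])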

We’ll continue to add new infrastructure to grow our footprint and make AWS as useful as possible for all of our customers around the world. You can learn more about our growing global infrastructure footprint at https://aws.amazon.com/about-aws/global-infrastructure/.




Accelerating Data: Faster and More Scalable ElastiCache for Redis

2016-10-12T22:00:00-07:00

Fast Data is an emerging industry term for information that arrives at high volume and at incredible rates, faster than traditional databases can manage. Three years ago, as part of our AWS Fast Data journey, we introduced Amazon ElastiCache for Redis, a fully managed in-memory data store that operates at sub-millisecond latency. Since then we've introduced Amazon Kinesis for real-time streaming data, AWS Lambda for serverless processing, Apache Spark analytics on EMR, and Amazon QuickSight for high performance business intelligence.

While caching continues to be a dominant use of ElastiCache for Redis, we see customers increasingly use it as an in-memory NoSQL database. Developers love the blazing fast performance and in-memory capabilities provided by Redis, making it among the most popular NoSQL key-value stores. However, until now, ElastiCache for Redis customers could only run single-shard Redis. This limited the workload size and write throughput to that of a single VM, or required application-level sharding.

Today, as a next step in our Fast Data journey, we have extended the ElastiCache for Redis service to support "Redis Cluster," the sharding capability of Redis. Customers can now scale a single deployment to include up to 15 shards, making each Redis-compatible data store up to 3.5 terabytes in size while operating on microsecond time scales. We also do this at very high rates: up to 4.5 million writes per second and 20 million reads per second. Each shard can include up to five read replicas to ensure high availability, so that both planned and unforeseen outages of the infrastructure do not cause application outages.

Building upon Redis

There are some great examples and use cases for Redis, which you can see at companies like Hudl, which offers mobile and desktop video analytics solutions to sports teams and athletes. Hudl is using ElastiCache for Redis to provide millions of coaches and sports analysts with the near real-time data feeds they need to help drive their teams to victory. Another example is Trimble, a global leader in location services, which is using ElastiCache for Redis as its primary database for workforce location, helping customers like DirecTV get the right technician to the right location as quickly and inexpensively as possible, enabling both reduced costs and increased satisfaction for their own subscribers.

Increasingly, ElastiCache for Redis has become a mission-critical in-memory database for customers whose availability, durability, performance, and scale matter to their business. We have therefore been enhancing the Redis engine running on ElastiCache for the last few years, using our own expertise in making enterprise infrastructure scalable and reliable. Amazon's enhancements address many day-to-day challenges with running Redis. By utilizing techniques such as granular memory management, dynamic I/O throttling, and fine-grained replica synchronization, ElastiCache for Redis delivers a more robust Redis experience. It enables customers to run their Redis nodes at higher memory utilization without risking swap usage during events such as snapshotting and replica synchronization. It also offers improved synchronization of replicas under load. In addition, ElastiCache for Redis provides smoother Redis failovers by combining our Multi-AZ automated failover with streamlined synchronization of read replicas. Replicas now recover faster, as they no longer need to flush their data to do a full resynchronization with the primary.
All these capabilities are available to customers at no additional[...]
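
From an application's point of view, connecting to a sharded ("cluster mode enabled") deployment is a one-liner with a cluster-aware client. A sketch using redis-py's cluster client (available in redis-py 4.1 and later; the endpoint below is a placeholder for an ElastiCache configuration endpoint):

    from redis.cluster import RedisCluster

    # Connect through the cluster's configuration endpoint; the client
    # discovers the shard topology and routes each key to the right shard.
    rc = RedisCluster(
        host="my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com",
        port=6379,
    )
    rc.set("session:42", "state")  # routed by the key's hash slot
    print(rc.get("session:42"))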



Introducing the Alexa Prize: It’s Day One for Voice

2016-09-29T10:00:00-07:00

In the past voice interfaces were seen as gimmicks, or a nuisance for driving “hands-free.” The Amazon Echo and Alexa have completely changed that perception. Voice is now seen as potentially the most important interface to interact with the digitally connected world. From home automation to commerce, from news organizations to government agencies, from financial services to healthcare, everyone is working on the best way is to interact with their services if voice is the interface. Especially for the exciting case where voice is the only interface. Voice makes access to digital services far more inclusive than traditional screen-based interaction, for example, an aging population may be much more comfortable interacting with voice-based systems than through tablets or keyboards. Alexa has propelled the conversational interface forward given how natural the interactions are with Alexa-enabled devices. However, it is still Day One, and a lot of innovation is underway in this world. Given the tremendous impact of voice on how we interact with the digital world, it influences how we will build products and services that can support conversations in ways that we have never done before. As such there is also a strong need for fundamental research on these interactions, best described as “Conversational Artificial Intelligence.” Today, we are pleased to announce the Alexa Prize, a $2.5 million university competition to accelerate advancements in conversational AI. With this challenge, we aim to advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. The goal is that through the innovative work of students, Alexa users will experience novel, engaging conversational experiences.   Teams of university students around the world are invited to participate in a conversational AI challenge (see contest rules for details). The challenge is to create a socialbot, an Alexa skill that converses with users on popular topics. Social conversation can occur naturally on any topic, and teams will need to create an engaging experience while maintaining relevance and coherence throughout the interaction. For the grand challenge we ask teams to invent a socialbot smart enough to engage in a fun, high quality conversation on popular societal topics for 20 minutes. As part of the research and judging process, millions of Alexa customers will have the opportunity to converse with the socialbots on popular topics by saying, “Alexa, let’s chat about (a topic, for example, baseball playoffs, celebrity gossip, scientific breakthroughs, etc.).” Following the conversation, Alexa users will give feedback on the experience to provide valuable input to the students for improving their socialbots. The feedback from Alexa users will also be used to help select the best socialbots to advance to the final, live judging phase. The team with the highest-performing socialbot will win a $500,000 prize. Additionally, a prize of $1 million will be awarded to the winning team’s university if their socialbot achieves the grand challenge of conversing coherently and engagingly with humans for 20 minutes. Teams of university students can submit applications now and the contest will conclude at AWS re:Invent in November 2017, where the winners will be announced. Up to ten teams will be sponsored by Amazon and receive a $100,000 stipend, Alexa-enabled devices, free AWS services and support from the[...]