
The Official Google Blog

Insights from Googlers into our products, technology, and the Google culture.

Last Build Date: Sun, 26 Feb 2017 12:00:00 +0000


The Google Assistant is coming to more Android phones (Product Lead, Google Assistant)

Sun, 26 Feb 2017 12:00:00 +0000

Everyone needs a helping hand sometimes. Enter the Google Assistant, which is conversational, personal and helps you get things done, from telling you about your day to taking a selfie. The Assistant is already available on Pixel, Google Home, Google Allo and Android Wear. Now we're bringing it to even more people: starting this week, the Google Assistant is coming to smartphones running Android 7.0 Nougat and Android 6.0 Marshmallow. Whether you need to know how to say “nice to meet you” in Korean or just want a reminder to do laundry when you get home, your Assistant can help. With the Google Assistant on Android phones, you have your own personal, helpful Google right in your pocket.

The Google Assistant on the Samsung Galaxy S7, LG V20 and HTC 10.

And here are a few other things to try out. Just long press on the Home button or say “Ok Google” to get started:

- What’s my confirmation number for my London flight?
- Take me to Museu Picasso.
- Show my photos of sunsets in Tahoe.
- Do I need an umbrella today?
- Turn on the living room lights.

The Google Assistant will begin rolling out this week to English users in the U.S., followed by English in Australia, Canada and the United Kingdom, as well as German speakers in Germany. We’ll continue to add more languages over the coming year.

The Google Assistant will automatically come to eligible Android phones running Nougat and Marshmallow with Google Play Services. You'll also see the Google Assistant on some newly announced partner devices, including the LG G6. If you happen to be in Barcelona, Spain this week at Mobile World Congress, the mobile industry’s largest trade show, stop by the Android Global Village to try out the Google Assistant on a number of Android partner phones, including HTC, Huawei, Samsung and Sony.

Our goal is to make the Assistant available anywhere you need it. It came to Android Wear 2.0, via new smartwatches, just a few weeks ago and, as we previewed in January, the Assistant is also coming to TVs and cars. With this update, hundreds of millions of Android users will now be able to try out the Google Assistant. What will you ask first?

[...] Starting this week, we’ll begin rolling out the Google Assistant to smartphones running Android 7.0 Nougat and Android 6.0 Marshmallow.


The making of “Pearl” (Technical Project Lead, Spotlight Stories)

Fri, 24 Feb 2017 16:30:00 +0000

Spotlight Stories' “Pearl” follows a father and daughter as they travel the country in their beloved hatchback, chasing their dreams. Created and produced as an interactive VR experience, a 360 video, and a theatrical short film, “Pearl” premiered last summer at the Tribeca Film Festival, and is nominated this year for an Oscar for best animated short film.

With the Oscars just a few days away, we asked Director Patrick Osborne, Producer David Eisenmann, Music and Sound Creative Director Scot Stafford, and Technical Art Lead Cassidy Curtis to reflect on the journey of “Pearl.” You can watch “Pearl” on the YouTube app, on Daydream through the YouTube VR app, on the Google Spotlight Stories app for iOS and Android, or on HTC Vive.

Patrick Osborne, Director

My father is an artist and has worked as a toy designer. He loved to draw. He sacrificed a lot, as most parents do, in order to provide the best life for me and my brothers. One of those sacrifices was choosing family over career. “Pearl” was inspired by our relationship. Parents give us much more than material things—they give us taste, passion, their time. The time I spent drawing with my dad as a kid set up a foundation for the career I have today.

I think of “Pearl” as a folk-roadtrip-VR-musical. In 360 and VR, you’re creating a film without the constraint of borders, edges, a frame or control over timing. That means the story is happening all around you, and the audience is free to look anywhere at any time. As a director, giving that control to the audience was a scary prospect. I had to figure out how to tell a story that spanned decades without the typical editing cuts you experience in a traditional film, which make it easy to understand that time has passed. In order to tell this story the way I had envisioned it, I had to truncate time and transport the audience from scene to scene. I made the car the focal point of the story, used the car’s windows to frame and compose shots, and put the audience in the passenger seat.

David Eisenmann, Producer

“Pearl” is a single story made for several mediums at once: as a 2D theatrical film, a 360º interactive story, and fully immersive VR. All of these versions were built from the same core of story, animation, sound and music, yet to make the best possible version for each medium, we had to make different choices along the way. For example, the rhythm of editing from shot to shot was much quicker in 2D than in VR, with almost twice as many cuts between scenes. Working with Evil Eye Pictures, we used each medium’s strengths to help the others: to create the 2D version, Patrick actually “shot” the scenes in 360, using a mobile phone as a camera. Editor Stevan Riley assembled the film from this footage, much as he would do with one of his documentaries. The result is a rare opportunity to see how one filmmaker tells the same story in all these different mediums. While the VR version feels like being there in the passenger seat with the characters, the theatrical version is more like watching their home movies. Different forms of intimacy, but they all bring you closer to these characters’ lives.

Patrick Osborne, Director

As a fan of modern folk and Americana trends in music, I jumped at the chance to wrap the story in a song. "No Wrong Way Home" perfectly complements the visual style of the film, and the lyrics and imagery leave room for the audience to see and hear a little bit of themselves in our story.

Scot Stafford, Music and Sound Creative Director

Patrick wanted the story to evolve through music and for the song to be passed from father to daughter, along with the car. After an extensive search for songwriters, he chose Alexis Harte and JJ Wiesler for their sketch that contained the refrain, “there’s no wrong way home.” It matched perfectly with Patrick’s vision and his early sketch[...]


Delivering RCS messaging to Android users worldwide (Head of RCS)

Fri, 24 Feb 2017 08:00:00 +0000

Whether we’re receiving a boarding pass for a flight or chatting with friends and family, SMS (better known as text messaging) is a universal way for us to stay connected. But despite its ubiquity, SMS hasn’t evolved to take advantage of all the features that smartphones enable today. We believe it’s important to innovate in messaging standards, so we’ve been working with the mobile industry on an initiative to upgrade SMS through a universal standard called RCS (Rich Communication Services), bringing more enhanced features to the standard messaging experience on mobile devices.

Today, we’re taking a significant step toward making RCS messaging universally available to users across the world, with 27 carriers and device manufacturers launching RCS to Android users with Google.

Following our partnerships with Sprint, Rogers, and Telenor, today we’re announcing that Orange, Deutsche Telekom, and Globe are committed to launching RCS messaging powered by the Jibe RCS cloud from Google, and will be preloading Android Messages (formerly called Messenger for Android) as the standard native messaging app for their subscribers. We’re also announcing that the Vodafone Group RCS service is supporting Android Messages and has already launched across 10 markets for Vodafone subscribers globally.

These partners have also committed to interconnecting through the Jibe RCS hub so that RCS messages are delivered to subscribers across carrier networks, helping RCS messaging become truly universal. We’re now partnering with carriers representing more than 1B subscribers worldwide.

Upgrading the default messaging experience for Android

We want to make sure that Android users can access all the features that RCS messaging offers, like group chat, high-res photo sharing, read receipts, and more. So we’re working with mobile device manufacturers to make Android Messages the default messaging app for Android devices. Mobile device brands LG, Motorola, Sony, HTC, ZTE, Micromax, HMD Global - Home of Nokia Phones, Archos, BQ, Cherry Mobile, Condor, Fly, General Mobile, Lanix, LeEco, Lava, Kyocera, MyPhone, QMobile, Symphony and Wiko, along with Pixel and Android One devices, will preload Android Messages as the default messaging app on their devices. With these partners, we’re upgrading the messaging experience for Android users worldwide and ensuring a consistent and familiar experience. We’ll continue to add more partners over time.

Android Messages supports RCS, SMS and MMS, so people can message all their friends regardless of their network or device type. We’ll continue to update and improve Android Messages to bring new features enabled through RCS, such as the ability to search and share all types of content and easily access the messages that are most important to you.

Improving business messaging with RCS

Currently, millions of businesses, service providers, and brands use SMS to communicate with their customers, whether they’re sending a bank fraud alert or a package delivery notification. But while SMS provides a universal way for consumers to connect with businesses, the messages are limited to plain text. RCS will upgrade today’s business messaging experience by enabling brands to send more useful and interactive messages. For example, a message from your airline reminding you to check in for a flight can now take advantage of rich media and interactivity to provide a full check-in experience, complete with boarding pass, visual flight updates, and terminal maps on demand, all directly within the messaging experience. Businesses can also have a branded messaging experience with information about the business and the ability to share content like images, video clips and GIFs.

To make it easier for brands to participate in RCS business messaging, we’re creating an Early Access Program which will allow busines[...]


Celebrating Penpan Sittitrai, Thailand’s master of fruit carving (Marketing, Google Thailand)

Thu, 23 Feb 2017 23:55:00 +0000

Today's Doodle in Thailand celebrates national artist Penpan Sittitrai and the delicate art of fruit carving, which she mastered, skillfully turning every fruit and vegetable she touched into something truly exquisite.

The tradition of fruit carving has been around for centuries, initially carried out to decorate the tables of the Thai royal family. Over time, it has become a staple at most cultural events — something would be amiss at a Thai wedding without one of these carvings as a centerpiece. But it’s at Songkran, the Thai New Year festival, when this custom is especially popular.

Penpan carving a mango (Image source: the family's private photo collection)

Penpan Sittitrai is Thailand’s most famous fruit carving artist. Using nothing but a simple carving knife, she shaped watermelons into delicate leaves and mangoes into elegant swans. Nature was Sittitrai’s favorite theme, and from girlhood through her golden years, Sittitrai practiced her craft, elevating it to a form of fine art.

Penpan left behind many legacies, including her book “The Art of Thai Vegetable and Fruit Carving,” so anyone, anywhere can learn how to turn their apple-a-day into a work of art.

[...] Penpan Sittitrai skillfully turned every fruit and vegetable she touched into something truly exquisite.


Improve your nonprofit’s account security with 2-step verification (Communications & Public Affairs Manager, Google for Nonprofits)

Thu, 23 Feb 2017 17:00:00 +0000

While online accounts allow nonprofits to easily communicate with partners, volunteers and donors across the world, this shared network can also leave your account vulnerable to intruders. As your nonprofit continues to grow its online presence, it’s crucial to keep confidential information (e.g., finances or donors’ information) safe. While passwords have historically been the sole guardian for online account access, research from Google has shown that many passwords and security questions can easily be guessed. That's why we strongly recommend that all nonprofits using G Suite for Nonprofits, or Google products like Gmail, use 2-Step Verification (2SV) as an additional protection on their accounts.

Account hijacking, a process through which an online account is stolen or hijacked by a hacker, constitutes a serious threat to your nonprofit’s operations. Typically, account hijackings are carried out through phishing attempts or by hackers who guess weak passwords. Because of this, it’s especially important for your nonprofit to maintain strong and unique account passwords to keep sensitive data safe.

But 2SV goes beyond just a strong password. It's an effective security feature that combines "something you know" (e.g., a password) and "something you have" (e.g., a text, a prompt, or a Security Key) to protect your accounts. Think of it like withdrawing money from an ATM/cash machine: you need both your PIN and your debit card. Our free Google Authenticator app, available for Android and iOS devices, generates a code for you each time you want to sign in to your account.

Now that you know what 2SV is, head over to our Help Page to start improving your nonprofit’s online security now. (Quick tip: remember to keep your account settings up to date and configure backup options to use if your phone is ever lost or stolen.) Stay safe, nonprofits!

To see if your nonprofit is eligible to participate, review the Google for Nonprofits eligibility guidelines. Google for Nonprofits offers organizations like yours access to Google tools like Gmail, Google Calendar, Google Drive, Google Ad Grants, YouTube for Nonprofits and more at no charge. These tools can help you reach new donors and volunteers, work more efficiently, and tell your nonprofit’s story. Learn more and enroll here.

[...] Nonprofits use the web to communicate with partners, volunteers and donors across the world. That's why it's important for them to keep their online accounts safe and secure.
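For the curious: the one-time codes that authenticator apps generate follow the open TOTP standard (RFC 6238, which builds on RFC 4226's HOTP). The sketch below is a minimal illustration of how such codes are derived, not Google Authenticator's actual implementation; the key shown is the RFC test key.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(secret, t // step)

# RFC 6238 test key; at Unix time 59 the 6-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

Because both sides derive the code from a shared secret and the clock, a stolen password alone is not enough to sign in.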


Doing more for racial justice (Principal, Google.org)

Thu, 23 Feb 2017 15:00:00 +0000

I'm the grandson of a Port of Seattle police officer, the nephew of a Washington State Trooper, and the son of a Snohomish County Detention Chief. The Black men in my family were all engaged in some form of law enforcement, and throughout my lifetime, I’ve seen law enforcement officers be a force for good in communities. But I’ve also borne witness to injustices that have shaken my faith in our criminal justice system. In my work at, I help identify causes and organizations that aim to ultimately help correct many of these injustices.Since 2015, has committed more than $5 million to nonprofits advancing racial justice, and we’ve aimed to get proximate and better understand how racial bias can lead to exclusion from opportunity. Today we’re doubling our previous commitment, and investing $11.5 million in new grants to organizations across the country working to reform our criminal justice system.Mass incarceration is a huge issue in the United States, and a major area of focus for our grants. The U.S. penal population has exploded, growing by 400 percent since 1984 to more than 2 million today, with Black men sentenced at over five times the rate of white men. We have the highest rate of incarceration in the world, dwarfing the rates of every developed country and even surpassing those in highly repressive regimes.Videos of police shooting unarmed people of color have woken many of us up to the impact that racism and internalized bias have on black and brown communities. But we have almost no data on police behavior and criminal sentencing at a national level. Individual agencies and court systems keep track of some information, but aggregated reporting is nearly nonexistent and the data is often not complete enough to identify potential bias. Each agency collects and reports data in their own way, making comparisons between jurisdictions nearly impossible. 
The average rate of police use of force for Black residents is 2.5 times as high as the overall rate and 3.6 times as high as the rate for White residents (Source: CPE’s report The Science of Justice) We believe better data can be can be part of the solution, which is why we’re investing in organizations using data and evidence to reduce racial disparities in the criminal justice system. We’re giving $5 million to support the Center for Policing Equity (CPE), which partners with police agencies and communities by bringing together data science, trainings and policy reforms to address racial disparity. This intersection gives CPE a unique opportunity to both identify the cause of problems, and propose concrete solutions. CPE’s National Justice Database is the first in the nation to track national statistics on police behavior, including stops and use of force, and standardizes data collection across many of the country’s police departments. Soon, Google engineers will be volunteering their time and skills with CPE to help build and improve this platform. We’re also supporting two organizations in California that are focused on ways that data can help bring about more equity in our court systems. Our $1.5 million grant to Measures for Justice aims to create a first-of-its-kind web platform that allows anyone to get a snapshot of how their local justice system treats people based on their offense history and across different categories of race/ethnicity, sex, indigent status and age. And $500,000 to the W. Haywood Burns Institute is helping to ensure this data across each of California’s 58 counties is accessible to criminal justice reform organizations so they can make data-informed decisions. [...]


How Google Maps APIs are fighting HIV in Kenya (Product Marketing Manager)

Thu, 23 Feb 2017 14:00:00 +0000

In 2015, the Joint United Nations Programme on HIV/AIDS (UNAIDS) and mobile analytics solutions provider iVEDiX came together to create the HIV Situation Room, a mobile app designed to help fight the HIV epidemic in Kenya. The app uses Google Maps APIs to create a comprehensive picture of HIV prevention efforts, testing and treatment, and to make this programmatic data accessible both to local staff in clinics and others on the front lines, as well as to policy makers.

We sat down with Taavi Erkkola, senior advisor on monitoring and evaluation for UNAIDS, and Brian Annechino, director of government and public sector solutions for iVEDiX, to hear more about the project and why they chose Google Maps APIs to help them in the fight against HIV.

How did the idea for the UNAIDS HIV Situation Room app come about?

Taavi Erkkola: As of 2015, UNAIDS estimates a total of 36.7 million people living with HIV globally. Of those, 2.1 million are newly infected, with approximately 5,700 new HIV infections a day. Sixty-six percent of all people infected by HIV reside in sub-Saharan Africa, and approximately 400 of those infected per day there are children under age 15. To effectively combat HIV, we need access to up-to-date information on everything from recent outbreaks and locations of clinics, to in-country educational efforts and inventory levels within healthcare facilities. UNAIDS has a Situation Room at our headquarters in Geneva that gives us access to this kind of worldwide HIV data. But we wanted to build a mobile app that provided global access to the Situation Room data, with more detail at the national, county and facility level.

We tested the app in Kenya because the country has a strong appetite for using technology to better its citizens’ health. Kenyan government agencies, including the National AIDS Council, encouraged organizations like the Kenya Medical Supplies Authority (KEMSA) and the Ministry of Health to contribute their disease control expertise and data to the Situation Room solution. Kenya's President Uhuru Kenyatta was an early advocate, and has demonstrated his government’s commitment to making data-driven decisions, especially in the fight against HIV and AIDS.

Why did UNAIDS and iVEDiX choose Google Maps, and how did you use Google Maps APIs to build the HIV Situation Room app?

Brian Annechino: In Kenya, more than 80 percent of adults own a cell phone, and Android is by far the most popular operating system. Google Maps APIs are available across all platforms, including native APIs for Android, and Google Maps also offers the kind of fine-grained detail we needed — for example, the locations of more than 7,500 Kenyan healthcare facilities servicing the HIV and AIDS epidemic. Using data from multiple sources along with Google Maps, we can map things like a clinic’s risk of running out of antiretroviral medicine.

Onix, a Google Premier Partner, identified the right Google Maps components to build the app and helped us procure the licensing we needed. We used the Google Maps Android API to build the main interface. Since it was important to have the most accurate and up-to-date map data for Kenya to support the effort, we used the Street View feature of the Google Maps Android API to let people zoom in to the street level and see clinics that offer HIV services in locations where Street View imagery is available.

TE: These mapping capabilities are critical because we need to give our county-level users as much insight as possible on service delivery at health facilities. Decision-makers in HIV response are at the national and county level. In this app, we’re able to combine multiple data sources to get a more comprehensive picture of HIV prevention efforts, testing and treatment across these levels[...]
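Mapping facilities against a user's position comes down to geographic distance. As a generic illustration only (not iVEDiX's actual code, and with hypothetical facility records), a "nearest clinic" lookup can be sketched with the haversine great-circle formula:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_facility(user, facilities):
    """Return the facility record closest to the user's (lat, lon) pair."""
    return min(facilities, key=lambda f: haversine_km(user[0], user[1], f["lat"], f["lng"]))

# Hypothetical sample records: one clinic in Nairobi, one in Kisumu.
clinics = [
    {"name": "Clinic A", "lat": -1.2921, "lng": 36.8219},
    {"name": "Clinic B", "lat": -0.0917, "lng": 34.7680},
]
print(nearest_facility((-1.28, 36.82), clinics)["name"])  # Clinic A
```

In the real app, markers for facilities like these would be drawn on a map via the Google Maps Android API rather than compared in plain Python.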


When computers learn to swear: Using machine learning for better online conversations (President)

Thu, 23 Feb 2017 12:00:00 +0000

Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime. You’d probably leave the conversation. Unfortunately, this happens all too frequently online as people try to discuss ideas on their favorite news sites but instead get bombarded with toxic comments.

Seventy-two percent of American internet users have witnessed harassment online and nearly half have personally experienced it. Almost a third self-censor what they post online for fear of retribution. According to the same report, online harassment has affected the lives of roughly 140 million people in the U.S., and many more elsewhere.

This problem doesn’t just impact online readers. News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor and time. As a result, many sites have shut down comments altogether. But they tell us that isn’t the solution they want. We think technology can help.

Today, Google and Jigsaw are launching Perspective, an early-stage technology that uses machine learning to help identify toxic comments. Through an API, publishers (including members of the Digital News Initiative) and platforms can access this technology and use it for their sites.

How it works

Perspective reviews comments and scores them based on how similar they are to comments people said were “toxic” or likely to make someone leave a conversation. To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers. Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.

Publishers can choose what they want to do with the information they get from Perspective. For example, a publisher could flag comments for its own moderators to review and decide whether to include them in a conversation. Or a publisher could provide tools to help its community understand the impact of what they are writing, for example by letting commenters see the potential toxicity of their comment as they write it. Publishers could even just allow readers to sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones.

We’ve been testing a version of this technology with The New York Times, where an entire team sifts through and moderates each comment before it’s posted, reviewing an average of 11,000 comments every day. That’s a lot of comments. As a result, the Times has comments on only about 10 percent of its articles. We’ve worked together to train models that allow Times moderators to sort through comments more quickly, and we’ll work with them to enable comments on more articles every day.

Where we go from here

Perspective joins the TensorFlow library and the Cloud Machine Learning Platform as one of many machine learning resources Google has made available to developers. This technology is still developing. But that’s what’s so great about machine learning: even though the models are complex, they’ll improve over time. When Perspective is in the hands of publishers, it will be exposed to more comments and develop a better understanding of what makes certain comments toxic.

While we improve the technology, we’re also working to expand it. Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English, as well as models that can identify other perspectives, such as when [...]
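In practice, a publisher calls Perspective over REST. The sketch below builds an analyze-request body and pulls a toxicity score out of a response; the field names follow the shape Perspective has documented publicly (`comment.text`, `requestedAttributes`, `attributeScores`), but the helper functions and the canned response here are hypothetical, and a real call requires an API key and an HTTP POST to the `comments:analyze` endpoint.

```python
def analyze_request(text, attributes=("TOXICITY",)):
    """Build the JSON body for a Perspective analyze call (shape per public docs)."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def summary_score(response, attribute="TOXICITY"):
    """Extract the 0..1 summary score for one attribute from an analyze response."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]

body = analyze_request("You are a total idiot!")
# Canned example of what a response might look like; a real score comes from the API.
canned = {"attributeScores": {"TOXICITY": {"summaryScore": {"type": "PROBABILITY", "value": 0.92}}}}
print(summary_score(canned))  # 0.92
```

A moderation pipeline could then threshold this score, e.g. queueing anything above 0.8 for human review.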


Gboard for iPhone gets an upgrade (Associate Product Manager)

Thu, 23 Feb 2017 08:00:00 +0000

In May 2016, you first met Gboard, our app that lets you search and send information, GIFs, emoji and more, right from your keyboard. In July, Gboard went global. And today we’re upgrading your Gboard experience on iPhone by adding access to 15 additional languages, Google Doodles, new emoji, and—by popular demand—voice typing.

New languages and new emoji

Gboard will now work in Croatian, Czech, Danish, Dutch, Finnish, Greek, Polish, Romanian, Swedish, Catalan, Hungarian, Malay, Russian, Latin American Spanish and Turkish. To get typing, searching and sharing in these new languages, open the Gboard app and go to “Languages” > “Add Language.”

We’ve also increased support for the universal language—emoji. Now you can search and send all of the latest emoji from iOS 10.👏🕴 💁

Google Doodles

Doodles are one of the Googley-est things about Google. These fun animations honor holidays, anniversaries and notable people, and often teach you about a little slice of history. Now you can access them right from Gboard. On days when there’s a Doodle, you’ll see the “G” button animate, cuing you to quickly tap to open up the day’s Doodle and search for more information about it.

Say it faster

With today’s update, we’ve added voice typing, which allows you to dictate messages directly to Gboard. To tee up your next text, just long press the mic button on the space bar and talk.

To enjoy these updates to Gboard for iPhone, head to the App Store and make sure you’re running the latest version of the app. We’re always working on new features and languages, so please keep sharing your feedback in the app store—we’re listening!

(image) Today’s update adds 15 additional languages, the ability to see Doodles right from Gboard, new emoji and—by popular demand—voice typing.


What do productivity, machine learning and next generation teams have in common? Google Cloud Next ‘17. (Director of Product Management, Google Docs and Drive)

Wed, 22 Feb 2017 18:00:00 +0000

On March 8-10, Google will host one of its largest events ever: Google Cloud Next 2017. In the last year, the Google Cloud team has introduced new products and solutions to help businesses face some of their biggest productivity problems. Next is our way of bringing together customers and partners under one roof to see the results of all these updates. That includes the latest cloud innovations and more than 200 sessions, where you can check out new products and features firsthand.

While I applaud anyone who figures out a way to attend all 200, there are a few sessions that you should definitely see if you want ideas to help boost your team’s productivity.

One that comes to mind is the “Building stronger teams with team-based functionality” session. Think about when you work on a project at home. Now think about how you work on a project at work. Do you find that your work’s success depends on a team of people rather than one person? Most would say yes. Yet, historically, productivity tools have focused on helping individuals get more done, like how you manage your inbox or tackle your to-do list. Since we rely on teams to successfully complete tasks, we need tools that help the group be more productive as a whole. It’s a new concept, and I’m excited that this session will share some of the early work that we’re doing to move beyond individual productivity and, instead, use technology to help entire teams achieve more.

Businesses hear all the time about how machine learning can have a positive impact, and many are interested to see how they can achieve that same impact for their companies. Fortunately, Google has long been at the forefront of machine learning technologies like computer vision, predictive modeling, natural language processing and speech recognition. To that end, I recommend checking out “Machine learning powering the workforce: Explore in Google Docs” to see how machine learning in G Suite can instantly help you tackle everyday tasks and complex business challenges with the click of a button. Then, follow that up with “Introduction to Google Cloud Machine Learning” to learn how you can build your very own custom business applications on Google Cloud Platform (GCP).

Whether it's using the Sheets API to give project managers using Asana a way to do deeper comparison of their projects, or using the Slides API to create a deck in Slides from a Trello board in just one click, the ways in which our customers and partners are automating their processes using G Suite APIs are impressive (and growing). The APIs we’re building across G Suite, as part of the larger Cloud platform, are being tailored to solve the most common business flows, and the “Automating internal processes using Apps Script and APIs for Docs editors” session shows how some folks are already using Apps Script to make their internal processes hum.

These are the sessions that excite me, but you can find the sessions that excite you in the full Next '17 agenda. And if you’re wondering, you can still register. Grab your spot and I’ll see you there! [...]
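To make the "board to deck" idea concrete, here is a minimal sketch of building a Slides API `batchUpdate` body from a list of card titles. The request shapes (`createSlide`, `placeholderIdMappings`, `insertText`) follow the public Slides API reference, but the helper, the IDs, and the sample board are hypothetical, and this is not the partners' actual integration code.

```python
def board_to_slide_requests(cards):
    """Build a Slides API batchUpdate request list: one titled slide per card.
    `cards` is a list of dicts with a "title" key (Trello-style)."""
    requests = []
    for i, card in enumerate(cards):
        title_id = f"card_title_{i}"  # deterministic ID so insertText can target it
        requests.append({
            "createSlide": {
                "objectId": f"card_slide_{i}",
                "slideLayoutReference": {"predefinedLayout": "TITLE_ONLY"},
                "placeholderIdMappings": [{
                    "layoutPlaceholder": {"type": "TITLE", "index": 0},
                    "objectId": title_id,
                }],
            }
        })
        # Put the card's title into the slide's title placeholder.
        requests.append({"insertText": {"objectId": title_id, "text": card["title"]}})
    return requests

board = [{"title": "Launch plan"}, {"title": "Budget review"}]
body = {"requests": board_to_slide_requests(board)}
# `body` would then be sent via slides.presentations().batchUpdate(...).execute()
print(len(body["requests"]))  # 4
```

The same pattern, building a list of small request dicts and sending them in one `batchUpdate`, applies to most Slides and Sheets automation.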


Google Cloud supports $3M in grant credits for the NSF BIGDATA program
Director, Cloud Solutions Architect

Wed, 22 Feb 2017 14:00:00 +0000

Google Cloud Platform (GCP) serves more than one billion end-users, and we continue to seek ways to give researchers access to these powerful tools. Through the National Science Foundation's BIGDATA grants program, we're offering researchers $3M in Google Cloud Platform credits to use the same infrastructure, analytics and machine learning that we use to drive innovation at Google.

About the BIGDATA grants

The National Science Foundation (NSF) recently announced its flagship research program on big data, Critical Techniques, Technologies and Methodologies for Advancing Foundations and Applications of Big Data Sciences and Engineering (BIGDATA). The BIGDATA program encourages experimentation with datasets at scale. Google will provide cloud credits to qualifying NSF-funded projects, giving researchers access to the breadth of services on GCP, from scalable data management (Google Cloud Storage, Google Cloud Bigtable, Google Cloud Datastore), to analysis (Google BigQuery, Google Cloud Dataflow, Google Cloud Dataproc, Google Cloud Datalab, Google Genomics), to machine learning (Google Cloud Machine Learning, TensorFlow).

This collaboration combines NSF's experience in managing diverse research portfolios with Google's proven track record in secure and intelligent cloud computing and data science. NSF is accepting proposals from March 15, 2017 through March 22, 2017. All proposals that meet NSF requirements will be reviewed through NSF's merit review process.

GCP in action at Stanford University

To get an idea of the potential impact of GCP, consider Stanford University's Center of Genomics and Personalized Medicine, where scientists work with data at a massive scale. Director Mike Snyder and his lab have been involved in a number of large efforts, from ENCODE to the Million Veteran Program. Snyder and his colleagues turned to Google Genomics, which gives scientists access to GCP to help secure, store, process, explore and share biological datasets.
With the costs of cloud computing dropping significantly and demand for ever-larger genomics studies growing, Snyder thinks fewer labs will continue relying on local infrastructure. "We're entering an era where people are working with thousands or tens of thousands or even million-genome projects, and you're never going to do that on a local cluster very easily," he says. "Cloud computing is where the field is going."

"What you can do with Google Genomics — and you can't do in-house — is run 1,000 genomes in parallel," says Somalee Datta, bioinformatics director of Stanford University's Center of Genomics. "From our point of view, it's almost infinite resources."[...]


Google Research and Daydream Labs: Seeing eye to eye in mixed reality
Software Engineer

Wed, 22 Feb 2017 00:00:00 +0000

Virtual reality lets you experience amazing things, from exploring new worlds, to painting with trails of stars, to defending your fleet to save the world. But headsets can get in the way. If you're watching someone else use VR, it's hard to tell what's going on and what they're seeing. And if you're in VR with someone else, there aren't easy ways to see their facial expressions without an avatar representation.

Daydream Labs and Google Research teamed up to start exploring how to solve these problems. Using a combination of machine learning, 3D computer vision and advanced rendering techniques, we're now able to "remove" headsets and show a person's identity, focus and full face in mixed reality. Mixed reality is a way to convey what's happening inside and outside a virtual place in a two-dimensional format. With this new technology, we're able to make a more complete picture of the person in VR.

Using a calibrated VR setup including a headset (like the HTC Vive), a green screen and a video camera, combined with accurate tracking and segmentation, you can see the "real world" and the interactive virtual elements together. We used it to show you what Tilt Brush can do and took Conan O'Brien on a virtual trip to outer space from our YouTube Space in New York. Unfortunately, in mixed reality, faces are obstructed by headsets.

Artist Steve Teeple in Tilt Brush, shown in traditional mixed reality on the left and with headset removal on the right, which reveals the face and eyes for a more engaging experience.

The first step to removing the VR headset is to construct a dynamic 3D model of the person's face, capturing facial variations as they blink or look in different directions. This model allows us to mimic where the person is looking, even though it's hidden under the headset. Next, we use an HTC Vive, modified by SMI to include eye-tracking, to capture the person's eye-gaze from inside the headset.
From there, we create the illusion of the person's face by aligning and blending the 3D face model with the camera's video stream. A translucent "scuba mask" look helps avoid an "uncanny valley" effect. Finally, we composite the person into the virtual world, which requires calibrating between the Vive tracking system and the external camera. We're able to automate this calibration and make it highly accurate, so movement looks natural. The end result is a complete view of both the virtual world and the person in it, including their entire face and where they're looking.

Our initial work, focused on mixed reality, is just one potential application of this technology. Seeing beyond VR headsets could help enhance communication and social interaction in VR. Imagine being able to video conference in VR and see the expressions and nonverbal cues of the people you are talking to, or seeing your friend's reactions as you play your favorite game together.

It's just the beginning for this technology, and we'll share more moving forward. But if you're game to go deeper, we've described the technical details on the Google Research blog. This is an ongoing collaboration between Google Research, Daydream Labs and the YouTube team. We're making mixed reality capabilities available in select YouTube Spaces and are exploring how to bring this technology to select creators in the future.[...]
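The final compositing step can be illustrated with a toy alpha blend. This is only a sketch under simplifying assumptions: real mixed-reality compositing works on calibrated, full-resolution camera frames and a rendered face model, while the tiny nested-list "frames," the mask and the 0.6 alpha below are all made up.

```python
# Toy illustration of compositing a semi-transparent rendered face over a
# video frame. Pixels are (R, G, B) tuples; the mask marks the region where
# the headset occludes the real face.

def blend(face_px, video_px, alpha):
    """Standard alpha blend: alpha weights the face layer over the video."""
    return tuple(round(alpha * f + (1 - alpha) * v)
                 for f, v in zip(face_px, video_px))

def composite(face_frame, video_frame, mask, alpha=0.6):
    """Blend only where the mask marks the (headset-occluded) face region."""
    out = []
    for f_row, v_row, m_row in zip(face_frame, video_frame, mask):
        out.append([blend(f, v, alpha) if m else v
                    for f, v, m in zip(f_row, v_row, m_row)])
    return out
```

The translucent "scuba mask" look described above corresponds to an alpha well below 1, so the live video still shows through the rendered face.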


Work hacks from G Suite: Make it automatic
Technical Writer, Transformation Gallery, G Suite

Tue, 21 Feb 2017 17:00:00 +0000

More than a year ago, the Google Cloud Customer team, which focuses on providing helpful information to G Suite users, set out to create the Transformation Gallery, a resource for businesses to search and find tips on how to transform everyday processes in the workplace using Google Cloud tools. As part of a monthly series, we'll highlight some of the best Transformation Gallery tips to help your teams achieve more, faster. Today, we take a look at how managers can save time by automating simple manual processes in industries like retail and financial services.

Speed up approval workflows

Managing the flow of information between employees can be overwhelming. It can get in the way of the actual work you need to do. Whether you're entering paper-form data into a spreadsheet or emailing back and forth for approvals, at some point these manual workflows require a lot of upkeep, or worse, they break. Here are a few steps you can take to automate your day:

1. Think of a process to improve

Look around your desk or inbox for a time-consuming request process. It might be for employee performance evaluations, requesting equipment for a new hire, or collecting daily production reports. Now, think through the steps of the process and map it out. What information do you need to collect or pass on? Who needs to review it or approve it? Who needs to be notified of the status?

2. Use Forms to collect data

With that process in mind, build a survey using Google Forms. Make sure it has fields for all the information you need. You can also collect file uploads directly from participants at the same time you collect data, which makes it easy for employees to submit information without going back and forth. Here's an example where retail teams used Forms to collect store manager feedback.

3. Set up your response spreadsheet

Any data you collect in Forms automatically populates a single spreadsheet in Sheets.
Be sure to share the sheet with those who need to take action once a response is submitted, and have your team set up spreadsheet notifications. That way, everyone knows when responses come in or data changes on the sheet. Add extra columns to the sheet for editors to update the status of an entry, indicate an approval, or add additional details. Now you've got a single electronic record that your team can use to check on status and requests. Here's an example where employees used Sheets to track interoffice transfers.

4. Automate further with Apps Script

If you want to make it even more automatic, use Apps Script. Set up one or more approval workflows, and send notifications and reminders to approvers and requestors through email. You can also program the script to update spreadsheets or other G Suite tools with data on the approval status as it happens. Here's a simple example from The G Suite Show. And if you're interested in a deeper dive on Apps Script, there's a session at Google Cloud Next '17 called "Automating internal processes using Apps Script and APIs for Docs editors" that can help you get familiar with the tool. Register for Next '17 here.

These are just a few ways you can automate workflows, and here are some often overlooked benefits:

  • The approval process is standardized and streamlined
  • Sheets digitally tracks all requests, which is great for historical data and audits (and the sheet can be shared)
  • Notifications are sent automatically for approvals and status
  • Forms creates a simple and consistent way for employees to make[...]
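As a hypothetical sketch of the routing logic step 4 describes, the snippet below decides who should approve a request row and how the extra status columns get filled in. In Apps Script this would read the response Sheet and send email; it's written here as plain Python so the rule itself is easy to see, and the threshold, field names and roles are all invented.

```python
# Invented example: route an equipment-request row to an approver and
# record the decision in the status columns the post suggests adding.

APPROVAL_LIMIT = 500  # made-up threshold: larger requests need a manager

def route_request(response):
    """Return (approver, status) for a newly submitted request row."""
    if response["amount"] <= APPROVAL_LIMIT:
        return ("team-lead", "pending-review")
    return ("manager", "pending-review")

def apply_decision(row, approved, approver):
    """Fill in status columns once the approver has decided."""
    row = dict(row)  # leave the original row untouched
    row["status"] = "approved" if approved else "rejected"
    row["approved_by"] = approver
    return row
```

The same two-step shape (route on submission, record on decision) works whatever your actual fields and roles are.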


Paint with Touch - Tilt Brush is now available on Oculus Rift
Product Manager

Tue, 21 Feb 2017 16:00:00 +0000

Whether you're a first-time doodler tracing lines of fire and stars against the night sky or a concept artist designing a set for a film, the possibilities are endless when you paint in virtual reality with Tilt Brush. Starting today, Tilt Brush is available on the Oculus Rift in addition to the HTC Vive. We brought it to Oculus Rift so more of you with PC-powered systems can create and experience works of art in VR.

No matter what you decide to make in Tilt Brush, painting should be natural, comfortable and immersive. So, we thought a lot about how to customize the app for Rift's platform, hardware and Touch controllers:

  • To make it more convenient to paint, we recently added features that let you rotate and resize your work.
  • We redesigned interactions to take advantage of the Oculus Touch controllers. For example, just by resting your finger on a button, you can highlight it and get an indication of what it does. This makes it easy to see exactly what button you're about to press while using Tilt Brush.
  • Painting isn't just visual. Thanks to the Rift's built-in headphones, you'll be fully immersed from the moment you enter Tilt Brush's virtual canvas. Different brushes create different sound effects, and they become a vivid part of the experience through your headphones. We love using audio reactive mode with Rift headphones and seeing strokes come to life with light and sound.

So, if you have an Oculus Rift and Touch controllers, Tilt Brush is available now. And if you need inspiration for getting started, have a look at some of the creations from our Artist in Residence (AiR) program, many of which you can access right from the app. Happy painting![...]


Google Cloud at HIMSS: engaging with the healthcare and health IT community
Vice President of Healthcare

Fri, 17 Feb 2017 20:00:00 +0000

At Google Cloud, we're working closely with the healthcare industry to provide the technology and tools that help create better patient experiences, empower care teams to work together and accelerate research. We're focused on supporting the digital transformation of our healthcare customers through data management at scale and advancements in machine learning for timely and actionable insights.

Next week at the HIMSS Health IT Conference, we're demonstrating the latest innovations in smart data, digital health, APIs, machine learning and real-time communications from Google Cloud, Research, Search, DeepMind and Verily. Together, we offer solutions that help enable hospital and health IT customers to tackle the rapidly evolving and long-standing challenges facing the healthcare industry. Here's a preview of the Google Cloud customers and partners who are joining us at HIMSS.

For customers like the Colorado Center for Personalized Medicine (CCPM) at the University of Colorado Denver, trust and security are paramount. CCPM has worked closely with the Google Cloud Platform (GCP) team to securely manage and analyze a complicated data set to identify genetic patterns across a wide range of diseases and reveal new treatment options based on a patient's unique DNA. And the Broad Institute of MIT and Harvard has used Google Genomics for years to combine the power, security features and scale of GCP with the Broad Institute's expertise in scientific analysis.

"At the Broad Institute we are committed to driving the pace of innovation through sharing and collaboration. Google Cloud Platform has profoundly transformed the way we build teams and conduct science and has accelerated our research," William Mayo, Chief Information Officer at Broad Institute, told us.

To continue to offer these and other healthcare customers the tools they need, today we're announcing support for the HL7 FHIR Foundation to help the developer community advance data interoperability efforts.
The FHIR open standard defines a modern, web API-based approach to communicating healthcare data, making it easier to securely communicate across the healthcare ecosystem, including hospitals, labs, applications and research studies.

"Google Cloud Platform's commitment to support the ongoing activities of the FHIR community will help advance our goal of global health data interoperability. The future of health computing is clearly in the cloud, and our joint effort will serve to accelerate this transition," said Grahame Grieve, Principal at Health Intersections and FHIR Product Lead.

Beyond open source, we're committed to supporting a thriving ecosystem of partners whose solutions enable customers to improve patient care across the industry. We've seen great success for our customers in collaboration with Kinvey, which launched its HIPAA-compliant application backend as a service on GCP to leverage our cloud infrastructure and integrate its capabilities with our machine learning and analytics services.

"In the past year, we've seen numerous organizations in healthcare, from institutions like Thomas Jefferson University and Jefferson Health that are building apps to transform care, education and research, and startups like iTether and TempTraq that are driving innovative new solutions, turn to Kinvey on GCP to accelerate their journey to a new patient-centric world," said Sravish Sridhar, CEO of Kinvey.

We've also published a new guide for HIPAA compliance on GCP, which describes our approach to data security on GCP and provides best-practice guidance on how to securely bring healthcare workloads to the cloud. Stop by our booth at HIMSS to hear more about how we're working with[...]


Get in the game with NBA VR on Daydream
Head of Entertainment Partnerships for Google AR/VR

Fri, 17 Feb 2017 18:00:00 +0000

Can't get enough dunks, three pointers, and last-second jumpers? Experience the NBA in a whole new way with the new NBA VR app, available on Daydream.

Catch up with highlights in your own virtual sports lounge or watch the NBA’s first original VR series, “House of Legends,” where NBA legends discuss everything from pop culture to the greatest moments of their career. The series tips off today with seven-time NBA Champion Robert Horry. New episodes featuring stars like Chauncey Billups and Baron Davis will debut regularly.

Daydream gives sports fans a new way to connect to the leagues, teams and players they care about most. The NBA VR app joins a lineup that already includes:

  • NFL VR: Get access to the NFL Immersed series featuring 360° behind-the-scenes looks into the lives of players, coaches, cheerleaders, and even fans themselves as they prepare for game day.
  • Home Run Derby VR: Hit monster home runs with the Daydream controller in eight iconic MLB ballparks and bring home the ultimate Derby crown.
  • NextVR: From NBA games and the Kentucky Derby, to the NFL and the US Open, experience your favorite sporting events live or revisit them through highlights.

You're just a download away from being closer than ever to the sporting events and athletes you love!



Bringing digital skills training to more classrooms in Korea
Public Policy and Government Relations Manager, Google Korea

Fri, 17 Feb 2017 10:30:00 +0000

Recently a group of Googlers visited Ogeum Middle School in Seoul, where they joined a junior high school class that had some fun trying out machine learning-based experiments. The students got to see neural nets in action, with experiments that have trained computers to guess what someone's drawing, or that turn a picture taken with a smartphone into a song.

Students at Ogeum Middle School trying out Giorgio Cam, an experiment built with machine learning that lets you make music with the computer just by taking a picture. It uses image recognition to label what it sees, then turns those labels into the lyrics of a song.

We're always excited to see kids develop a passion for technology, because it seeds an interest in using technology to solve challenges later in life. The students at Ogeum Middle School are among the first of over 3,000 kids across Korea we hope to reach through "Digital Media Campus" (or 디지털 미디어 캠퍼스 in Korean), a new digital literacy education program. Through a grant to the Korea Federation of Science Culture and Education Studies (KOSCE), we plan to reach junior high school students in 120 schools across the country this year. Students in their "free semester" (a time when middle schoolers can take up electives to explore future career paths) will be able to enroll in this 32-hour course spanning 16 weeks beginning next month.

KOSCE-trained tutors will show kids how to better evaluate information online and assess the validity of online sources, teach them to use a range of digital tools so they can do things like edit videos and create infographics, and help them experience exciting technologies like AR and VR. By giving them a glimpse of how these technologies work, we hope to excite them about the endless possibilities offered by technology. Perhaps this will even encourage them to consider the world of careers that technology opens up to them. Helping kids to recognize these opportunities often starts with dismantling false perceptions at home.
This is why we're also offering a two-hour training session to 2,000 parents, who'll pick up tips to help their kids use digital media. We ran a pilot of the program last year, and have been heartened by the positive feedback we've received so far. Teachers and parents have told us that they appreciate the skills it teaches kids to be competitive in a digital age. And the students are excited to discover new digital tools and resources that are useful to them in their studies. While we might not be able to reach every student with this program, we hope to play a small role in helping to inspire Korea's next generation of tech innovators.[...]


Three ways to get started with computer science and computational thinking
Professor at University of Canterbury

Thu, 16 Feb 2017 23:00:00 +0000

Editor's note: We're highlighting education leaders across the world to share how they're creating more collaborative, engaging classrooms. Today's guest author is Tim Bell, a professor in the department of Computer Science and Software Engineering at the University of Canterbury and creator of CS Unplugged. Tim is a recipient of CS4HS awards and has partnered with Google in Australia to develop free resources that support teachers around the world in successfully implementing computational thinking and computer science in their classrooms.

My home of New Zealand, like many countries around the world, is fully integrating computer science (CS) into the national curriculum. This change affects all teachers, because the goal of standardizing the CS education curriculum is bigger than CS itself. It's not just about grooming the next generation of computer scientists; it's about equipping every student with an approach to solving problems through computational thinking (CT). This way of thinking can and must be applied to other subjects. Math, science, and even English and history teachers will need to teach CT, and many feel uncertain about the road ahead. Progressing CS + CT education at the national level will only be successful if all teachers feel confident in their ability to get started. This first step can be the most daunting, so I want to share a few simple ways any teacher can bring CS and CT into the classroom.

1. Engage students as builders and teachers

CT is about building new ways to solve problems. These problem-solving methods can be implemented with a computer, but the tool is much less important than the thinking behind it. Offline activities create opportunities for students to explain their thinking, work with others to solve open-ended problems, and learn by teaching their peers. My session during Education on Air showed some of these offline activities in practice.
For example, playing with a set of binary cards, pictured below, can teach students how to explain binary representation.

Year 5 and 6 students learn about binary representation through a CS Unplugged activity.

2. Build lessons around real-world examples

CS is practical: algorithms speed up processes so people don't have to wait, device interfaces need to be designed so they don't frustrate users, and programs need to be written so they don't waste resources like battery power on a mobile phone. Examples like these can help students understand how CS and CT impact the world around them. Consider discussing human interface design as it applies to popular mobile apps as well as real-world systems, like factories and libraries.

As Maggie Johnson, Google's director of education and university relations, wrote last year: "If we can make these explicit connections for students, they will see how the devices and apps that they use everyday are powered by algorithms and programs. They will learn the importance of data in making decisions. They will learn skills that will prepare them for a workforce that will be doing vastly different tasks than the workforce of today."

3. Connect new ideas and familiar subjects

Some of the most successful CS and CT lessons reference other subjects. For example, biology students can reconstruct an evolutionary tree using a string matching algorithm. Students might also apply geometry skills to Scratch programming by using their knowledge of angles to represent polygons with blocks of code.[...]
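The binary-cards activity mentioned in section 1 has a neat algorithmic core that's easy to sketch: choosing which cards to turn face-up is a greedy conversion to binary. A small illustration (the card values match the CS Unplugged activity; the function name is my own):

```python
# Each card shows 1, 2, 4, 8 or 16 dots. Students flip cards face-up so the
# visible dots sum to a target number — the binary representation of it.

CARDS = [16, 8, 4, 2, 1]  # dot counts, largest first, as in the activity

def cards_for(n):
    """Greedily choose which cards to turn face-up to show n (0-31)."""
    face_up = []
    for card in CARDS:
        if card <= n:
            face_up.append(card)
            n -= card
    return face_up
```

For example, cards_for(13) turns up the 8, 4 and 1 cards, matching 13 in binary (1101).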


Shielding you from Potentially Harmful Applications
Communications and Public Affairs Manager

Thu, 16 Feb 2017 22:00:00 +0000

Earlier this month, we shared an overview of the ways we keep you safe, on Google and on the web more broadly. Today, we wanted to specifically focus on one element of Android security, Potentially Harmful Applications, highlighting fraudsters' common tactics and how we shield you from these threats.

"Potentially Harmful Applications," or PHAs, are Android applications that could harm you or your device, or do something unintended with the data on your device. Some examples of PHA badness include:

  • Backdoors: Apps that let hackers control your device, giving them unauthorized access to your data.
  • Billing fraud: Apps that charge you in an intentionally misleading way, like premium SMS scams or call scams.
  • Spyware: Apps that collect personal information from your device without consent.
  • Hostile downloads: Apps that download harmful programs, often through bundling with another program.
  • Trojan apps: Apps that appear benign (e.g., a game that claims only to be a game) but actually perform undesirable actions.

As we described in the Safer Internet post, we have a variety of automated systems that help keep you safe on Android, starting with Verify Apps, one of our key defenses against PHAs. Verify Apps is a cloud-based service that proactively checks every application prior to install to determine if the application is potentially harmful, and subsequently rechecks devices regularly to help ensure they're safe. Verify Apps checks more than 6 billion installed applications and scans around 400 million devices per day.

If Verify Apps detects a PHA before you install it, or finds one already on your device, it will prompt you to remove the app immediately. Sometimes, Verify Apps will remove an application without requiring you to confirm the removal. This is an action we'll take very rarely, but if a PHA is purely harmful, has no possible benefit to users, or is impossible for you to remove on your own, we'll zap it automatically.
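As a toy illustration of the categories above (and emphatically not how Verify Apps actually works), one could map observed app behaviors to PHA categories with a simple rule table. Every behavior name below is invented:

```python
# Invented rule table: which observed behaviors suggest which PHA category.
CATEGORY_RULES = {
    "backdoor": {"remote-shell"},
    "billing-fraud": {"premium-sms"},
    "spyware": {"reads-contacts-no-consent"},
    "hostile-download": {"downloads-unknown-apk"},
}

def classify(behaviors):
    """Return the set of PHA categories whose trigger behaviors were seen."""
    seen = set(behaviors)
    return {cat for cat, triggers in CATEGORY_RULES.items() if triggers & seen}
```

Real PHA detection relies on large-scale analysis and machine learning rather than a fixed table like this; the sketch only shows the category mapping idea.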
Ongoing protection from Verify Apps has ensured that in 2015, over 99 percent of all Android devices were free of known PHAs. Verify Apps is just one of many protections we've instituted on Android to keep billions of people and devices safe. Just as PHAs are constantly evolving their tactics, we're constantly improving our protections. We'll continue to take action when we have the slightest suspicion that something might not be right. And we're committed to educating and protecting people from current and future security threats, on mobile and online in general. Be sure to check that Verify Apps is enabled on your Android device, and steer clear of harmful apps by only installing from a trusted source.[...]


Play a duet with a computer, through machine learning
Coder and Musician

Thu, 16 Feb 2017 17:00:00 +0000

Technology can inspire people to be creative in new ways. Magenta, an open-source project we launched last year, aims to do that by giving developers tools to explore music using neural networks.

To help show what’s possible with Magenta, we’ve created an interactive experiment called A.I. Duet, which lets you play a duet with the computer. Just play some notes, and the computer will respond to your melody. You don’t even have to know how to play piano—it’s fun to just press some keys and listen to what comes back. We hope it inspires you—whether you’re a developer or musician, or just curious—to imagine how technology can help creative ideas come to life. Watch our video above to learn more, or just start playing with it.
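A.I. Duet's responses come from neural networks built with Magenta; as a toy stand-in for the call-and-response idea, here's a sketch that replies to a melody (given as MIDI note numbers) by echoing it transposed up a fifth. The interval choice and the clipping to the MIDI range are my own assumptions, not how A.I. Duet works:

```python
# Toy call-and-response: echo the player's melody transposed by a fixed
# interval, clipped to the valid MIDI note range 0-127.

def respond(melody, interval=7):
    """Return a response melody: the input shifted by `interval` semitones."""
    return [min(127, max(0, note + interval)) for note in melody]
```

A real system like A.I. Duet learns its responses from data instead of applying a fixed rule, which is why its replies feel musical rather than mechanical.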
