
The Official Google Blog

Insights from Googlers into our products, technology, and the Google culture.

Last Build Date: Wed, 22 Feb 2017 00:00:00 +0000


Google Research and Daydream Labs: Seeing eye to eye in mixed reality
Software Engineer

Wed, 22 Feb 2017 00:00:00 +0000

Virtual reality lets you experience amazing things—from exploring new worlds, to painting with trails of stars, to defending your fleet to save the world. But headsets can get in the way. If you're watching someone else use VR, it's hard to tell what's going on and what they're seeing. And if you're in VR with someone else, there aren't easy ways to see their facial expressions without an avatar representation.

Daydream Labs and Google Research teamed up to start exploring how to solve these problems. Using a combination of machine learning, 3D computer vision, and advanced rendering techniques, we're now able to "remove" headsets and show a person's identity, focus and full face in mixed reality. Mixed reality is a way to convey what's happening inside and outside a virtual place in a two-dimensional format. With this new technology, we're able to make a more complete picture of the person in VR.

Using a calibrated VR setup including a headset (like the HTC Vive), a green screen, and a video camera, combined with accurate tracking and segmentation, you can see the "real world" and the interactive virtual elements together. We used it to show you what Tilt Brush can do and took Conan O'Brien on a virtual trip to outer space from our YouTube Space in New York. Unfortunately, in mixed reality, faces are obstructed by headsets.

Artist Steve Teeple in Tilt Brush, shown in traditional mixed reality on the left and with headset removal on the right, which reveals the face and eyes for a more engaging experience.

The first step to removing the VR headset is to construct a dynamic 3D model of the person's face, capturing facial variations as they blink or look in different directions. This model allows us to mimic where the person is looking, even though it's hidden under the headset. Next, we use an HTC Vive, modified by SMI to include eye-tracking, to capture the person's eye-gaze from inside the headset.
From there, we create the illusion of the person's face by aligning and blending the 3D face model with a camera's video stream. A translucent "scuba mask" look helps avoid an "uncanny valley" effect. Finally, we composite the person into the virtual world, which requires calibrating between the Vive tracking system and the external camera. We're able to automate this and make it highly accurate so movement looks natural. The end result is a complete view of both the virtual world and the person in it, including their entire face and where they're looking.

Our initial work on mixed reality is just one potential application of this technology. Seeing beyond VR headsets could help enhance communication and social interaction in VR. Imagine being able to VR video conference and see the expressions and nonverbal cues of the people you are talking to, or seeing your friend's reactions as you play your favorite game together.

It's just the beginning for this technology and we'll share more moving forward. But if you're game to go deeper, we've described the technical details on the Google Research blog. This is an ongoing collaboration between Google Research, Daydream Labs, and the YouTube team. We're making mixed reality capabilities available in select YouTube Spaces and are exploring how to bring this technology to select creators in the future. [...]
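To make the final compositing step concrete, here is a toy sketch of the alpha-blending idea: a rendered face model is mixed into the headset region of a camera frame, with a blend weight below 1.0 producing the translucent "scuba mask" look the post describes. All names, pixel values, and the simple list-of-pixels image format are invented for illustration; the real pipeline works on calibrated camera streams, eye-tracking data, and a learned 3D face model.

```python
# Toy illustration of the compositing step: blend a rendered face over
# the headset region of a video frame. The frame, face render, mask,
# and blend weight are all made-up example data, not the real system.

def blend_face_over_frame(frame, face_render, mask, alpha=0.6):
    """Blend face_render into frame wherever mask is True.

    frame, face_render: lists of [R, G, B] pixels (a flattened image).
    mask: list of booleans marking headset pixels to replace.
    alpha < 1.0 keeps the rendered face translucent, which helps
    avoid an uncanny-valley effect.
    """
    out = []
    for pixel, face_pixel, inside in zip(frame, face_render, mask):
        if inside:
            # Weighted average of the camera pixel and the rendered face.
            out.append([round(alpha * f + (1 - alpha) * p)
                        for p, f in zip(pixel, face_pixel)])
        else:
            out.append(list(pixel))
    return out

frame = [[10, 10, 10], [200, 200, 200]]   # two camera pixels
face = [[100, 50, 50], [100, 50, 50]]     # rendered face pixels
mask = [True, False]                      # only the first is "headset"
print(blend_face_over_frame(frame, face, mask))  # [[64, 34, 34], [200, 200, 200]]
```

The same weighted-average idea scales to full images; the hard parts the researchers solved are the tracking, segmentation, and face modeling that produce accurate inputs to this blend.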

Media Files:

Work hacks from G Suite: Make it automatic
Technical Writer, Transformation Gallery, G Suite

Tue, 21 Feb 2017 17:00:00 +0000

More than a year ago, the Google Cloud Customer team, which focuses on providing helpful information to G Suite users, set out to create the Transformation Gallery — a resource for businesses to search and find tips on how to transform everyday processes in the workplace using Google Cloud tools. As part of a monthly series, we'll highlight some of the best Transformation Gallery tips to help your teams achieve more, faster. Today, we take a look at how managers can save time by automating simple manual processes in industries like retail and financial services.

Speed up approval workflows

Managing the flow of information between employees can be overwhelming, and it can get in the way of the actual work you need to do. Whether you're entering paper-form data into a spreadsheet or emailing back and forth for approvals, at some point these manual workflows require a lot of upkeep, or worse, they break. Here are a few steps you can take to automate your day:

1. Think of a process to improve
Look around your desk or inbox for a time-consuming request process. It might be for employee performance evaluations, requesting equipment for a new hire, or collecting daily production reports. Now, think through the steps of the process and map it out. What information do you need to collect or pass on? Who needs to review it or approve it? Who needs to be notified of the status?

2. Use Forms to collect data
With that process in mind, build a survey using Google Forms. Make sure it has all the fields included for the information you need. You can also collect file uploads directly from participants at the same time you collect data, which makes it easy for employees to submit information without going back and forth. Here's an example where retail teams used Forms to collect store manager feedback.

3. Set up your response spreadsheet
Any data you collect in Forms automatically populates a single spreadsheet in Sheets.
Be sure to share the sheet with those who need to take action once a response is submitted, and have your team set up spreadsheet notifications. That way, everyone knows when responses are in or data changes on the sheet. Add extra columns to the sheet for editors to update the status of an entry, indicate an approval, or add additional details. Now you've got a single electronic record that your team can use to check on status and requests. Here's an example where employees used Sheets to track interoffice transfers.

4. Automate further with Apps Script
If you want to make it even more automatic, use Apps Script. Set up one or more approval workflows, and send notifications and reminders to approvers and requestors through email. You can also program the script to update spreadsheets or other G Suite tools with data on the approval status as it happens. Here's a simple example from The G Suite Show. And if you're interested in a deeper dive on Apps Script, there's a session at Google Cloud Next '17 called "Automating internal processes using Apps Script and APIs for Docs editors" that can help you get familiar with the tool. Register for Next '17 here.

These are just a few ways you can automate workflows, and here are some often overlooked benefits:

  • The approval process is standardized and streamlined
  • Sheets digitally tracks all requests, which is great for historical data and audits (and the sheet can be shared)
  • Notifications are sent automatically for approvals and status
  • Forms creates a simple and consistent way for employees to make requests
  • Employees can use a mobile device to initiate and complete a request

Best of all, by transforming your workflows with these tips, you and your coworkers will save time. Something we can all appreciate. [...]
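The steps above boil down to a simple loop: scan the response sheet, find entries still awaiting review, and notify the right approver. In production that logic would live in Google Apps Script reading a Sheets range and sending mail; here it is sketched in plain Python over a list of response rows so the flow is easy to follow. The column names, emails, and data are all hypothetical.

```python
# Hypothetical sketch of the approval-workflow logic described above.
# Each row mimics a Forms response landing in Sheets: submitted data
# plus a 'status' column that editors update as they review requests.

def pending_notifications(rows):
    """Return (approver_email, request_id) pairs for rows awaiting review."""
    notices = []
    for row in rows:
        if row["status"] == "pending":
            # In Apps Script this is where you'd send a reminder email;
            # here we just collect who needs to be notified about what.
            notices.append((row["approver"], row["id"]))
    return notices

responses = [
    {"id": 1, "item": "standing desk", "approver": "mgr@example.com", "status": "pending"},
    {"id": 2, "item": "monitor", "approver": "mgr@example.com", "status": "approved"},
]
print(pending_notifications(responses))  # [('mgr@example.com', 1)]
```

Running such a scan on a timer is what turns a shared spreadsheet into a lightweight approval system: nobody has to remember to chase requests, because the script surfaces every row still marked pending.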


Paint with Touch - Tilt Brush is now available on Oculus Rift
Product Manager

Tue, 21 Feb 2017 16:00:00 +0000

Whether you're a first-time doodler tracing lines of fire and stars against the night sky or a concept artist designing a set for a film, the possibilities are endless when you paint in virtual reality with Tilt Brush.

Starting today, Tilt Brush is available on the Oculus Rift in addition to the HTC Vive. We brought it to Oculus Rift so more of you with PC-powered systems can create and experience works of art in VR. No matter what you decide to make in Tilt Brush, painting should be natural, comfortable and immersive. So, we thought a lot about how to customize the app for Rift's platform, hardware, and Touch controllers.

To make it more convenient to paint, we recently added features that let you rotate and resize your work. We also redesigned interactions to take advantage of the Oculus Touch controllers. For example, just by resting your finger on a button, you'll see it highlighted on the controller along with an indication of what it does, so you know exactly what you're about to press while using Tilt Brush.

Painting isn't just visual. Thanks to the Rift's built-in headphones, you'll be fully immersed from the moment you enter Tilt Brush's virtual canvas. Different brushes create different sound effects, and they become a vivid part of the experience through your headphones. We love using audio reactive mode with Rift headphones and seeing strokes come to life with light and sound.

So, if you have an Oculus Rift and Touch controllers, Tilt Brush is available now. And if you need inspiration for getting started, have a look at some of the creations from our Artist in Residence (AiR) program, many of which you can access right from the app. Happy painting! [...]


Google Cloud at HIMSS: engaging with the healthcare and health IT community
Vice President of Healthcare

Fri, 17 Feb 2017 20:00:00 +0000

At Google Cloud, we're working closely with the healthcare industry to provide the technology and tools that help create better patient experiences, empower care teams to work together and accelerate research. We're focused on supporting the digital transformation of our healthcare customers through data management at scale and advancements in machine learning for timely and actionable insights.

Next week at the HIMSS Health IT Conference, we're demonstrating the latest innovations in smart data, digital health, APIs, machine learning and real-time communications from Google Cloud, Research, Search, DeepMind and Verily. Together, we offer solutions that help enable hospital and health IT customers to tackle the rapidly evolving and long-standing challenges facing the healthcare industry. Here's a preview of the Google Cloud customers and partners who are joining us at HIMSS.

For customers like the Colorado Center for Personalized Medicine (CCPM) at the University of Colorado Denver, trust and security are paramount. CCPM has worked closely with the Google Cloud Platform (GCP) team to securely manage and analyze a complicated data set to identify genetic patterns across a wide range of diseases and reveal new treatment options based on a patient's unique DNA. And the Broad Institute of MIT and Harvard has used Google Genomics for years to combine the power, security features and scale of GCP with the Broad Institute's expertise in scientific analysis.

"At the Broad Institute we are committed to driving the pace of innovation through sharing and collaboration. Google Cloud Platform has profoundly transformed the way we build teams and conduct science and has accelerated our research," William Mayo, Chief Information Officer at Broad Institute, told us.

To continue to offer these and other healthcare customers the tools they need, today we're announcing support for the HL7 FHIR Foundation to help the developer community advance data interoperability efforts.
The FHIR open standard defines a modern, web API-based approach to communicating healthcare data, making it easier to securely communicate across the healthcare ecosystem, including hospitals, labs, applications and research studies.

"Google Cloud Platform's commitment to support the ongoing activities of the FHIR community will help advance our goal of global health data interoperability. The future of health computing is clearly in the cloud, and our joint effort will serve to accelerate this transition," said Grahame Grieve, Principal at Health Intersections and FHIR Product Lead.

Beyond open source, we're committed to supporting a thriving ecosystem of partners whose solutions enable customers to improve patient care across the industry. We've seen great success for our customers in collaboration with Kinvey, which launched its HIPAA-compliant application backend as a service on GCP to leverage our cloud infrastructure and integrate its capabilities with our machine learning and analytics services.

"In the past year, we've seen numerous organizations in healthcare, from institutions like Thomas Jefferson University and Jefferson Health that are building apps to transform care, education and research, and startups like iTether and TempTraq that are driving innovative new solutions, turn to Kinvey on GCP to accelerate their journey to a new patient-centric world," said Sravish Sridhar, CEO of Kinvey.

We've also published a new guide for HIPAA compliance on GCP, which describes our approach to data security on GCP and provides best-practice guidance on how to securely bring healthcare workloads to the cloud. Stop by our booth at HIMSS to hear more about how we're working with the healthcare industry across Google.
We would love to learn how we can engage with you on your next big idea to positively transform healthcare. [...]
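To give a flavor of the "web API-based approach" FHIR takes, here is a minimal Patient resource sketched from the public FHIR specification: plain JSON with a `resourceType` discriminator, exchanged over REST endpoints such as `GET /Patient/{id}`. The field values below are invented for illustration and this is only a fragment of what a real resource can carry.

```python
import json

# A minimal FHIR "Patient" resource, sketched from the public FHIR
# specification. Values are invented; real resources carry many more
# fields (identifiers, telecom, extensions, and so on).
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-01-01",
}

# FHIR servers exchange documents like this over a REST API, so any
# system that speaks HTTP and JSON can participate in the ecosystem.
print(json.dumps(patient, indent=2))
```

That lightweight, web-native shape is a large part of why the standard lowers the barrier for hospitals, labs, and app developers to interoperate.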


Get in the game with NBA VR on Daydream
Head of Entertainment Partnerships for Google AR/VR

Fri, 17 Feb 2017 18:00:00 +0000

Can't get enough dunks, three pointers, and last-second jumpers? Experience the NBA in a whole new way with the new NBA VR app, available on Daydream.

Catch up with highlights in your own virtual sports lounge or watch the NBA’s first original VR series, “House of Legends,” where NBA legends discuss everything from pop culture to the greatest moments of their career. The series tips off today with seven-time NBA Champion Robert Horry. New episodes featuring stars like Chauncey Billups and Baron Davis will debut regularly.

Daydream gives sports fans a new way to connect to the leagues, teams and players they care about most. The NBA VR app joins a lineup that already includes:

  • NFL VR: Get access to the NFL Immersed series featuring 360° behind-the-scenes looks into the lives of players, coaches, cheerleaders, and even fans themselves as they prepare for game day.
  • Home Run Derby VR: Hit monster home runs with the Daydream controller in eight iconic MLB ballparks and bring home the ultimate Derby crown.
  • NextVR: From NBA games and the Kentucky Derby, to the NFL and the US Open, experience your favorite sporting events live or revisit them through highlights.

You're just a download away from being closer than ever to the sporting events and athletes you love!



Bringing digital skills training to more classrooms in Korea
Public Policy and Government Relations Manager, Google Korea

Fri, 17 Feb 2017 10:30:00 +0000

Recently a group of Googlers visited Ogeum Middle School in Seoul, where they joined a junior high school class that had some fun trying out machine learning-based experiments. The students got to see neural nets in action, with experiments that have trained computers to guess what someone's drawing, or that turn a picture taken with a smartphone into a song.

Students at Ogeum Middle School trying out Giorgio Cam, an experiment built with machine learning that lets you make music with the computer just by taking a picture. It uses image recognition to label what it sees, then it turns those labels into lyrics of a song.

We're always excited to see kids develop a passion for technology, because it seeds an interest in using technology to solve challenges later in life. The students at Ogeum Middle School are among the first of over 3,000 kids across Korea we hope to reach through "Digital Media Campus" (or 디지털 미디어 캠퍼스 in Korean), a new digital literacy education program. Through a grant to the Korea Federation of Science Culture and Education Studies (KOSCE), we plan to reach junior high school students in 120 schools across the country this year. Students in their "free semester"—a time when middle schoolers can take up electives to explore future career paths—will be able to enroll in this 32-hour course spanning 16 weeks beginning next month.

KOSCE-trained tutors will show kids how to better evaluate information online and assess the validity of online sources, teach them to use a range of digital tools so they can do things like edit videos and create infographics, and help them experience exciting technologies like AR and VR. By giving them a glimpse of how these technologies work, we hope to excite them about the endless possibilities offered by technology. Perhaps this will even encourage them to consider the world of careers that technology opens up to them. Helping kids to recognize these opportunities often starts with dismantling false perceptions at home.
This is why we're also offering a two-hour training session to 2,000 parents, who'll pick up tips to help their kids use digital media. We ran a pilot of the program last year, and have been heartened by the positive feedback we've received so far. Teachers and parents have told us that they appreciate the skills it teaches kids to be competitive in a digital age. And the students are excited to discover new digital tools and resources that are useful to them in their studies.

While we might not be able to reach every student with this program, we hope to play a small role in helping to inspire Korea's next generation of tech innovators. [...]


Three ways to get started with computer science and computational thinking
Professor at University of Canterbury

Thu, 16 Feb 2017 23:00:00 +0000

Editor's note: We're highlighting education leaders across the world to share how they're creating more collaborative, engaging classrooms. Today's guest author is Tim Bell, a professor in the department of Computer Science and Software Engineering at the University of Canterbury and creator of CS Unplugged. Tim is a recipient of CS4HS awards and has partnered with Google in Australia to develop free resources to support teachers around the world to successfully implement computational thinking and computer science into classrooms.

My home of New Zealand, like many countries around the world, is fully integrating computer science (CS) into the national curriculum. This change affects all teachers, because the goal of standardizing CS education curriculum is bigger than CS itself. It's not just about grooming the next generation of computer scientists—it's about equipping every student with an approach to solving problems through computational thinking (CT). This way of thinking can and must be applied to other subjects. Math, science, and even English and history teachers will need to teach CT, and many feel uncertain about the road ahead. Progressing CS + CT education at the national level will only be successful if all teachers feel confident in their ability to get started. This first step can be the most daunting, so I want to share a few simple ways any teacher can bring CS and CT into the classroom.

1. Engage students as builders and teachers
CT is about building new ways to solve problems. These problem-solving methods can be implemented with a computer, but the tool is much less important than the thinking behind it. Offline activities create opportunities for students to explain their thinking, work with others to solve open-ended problems, and learn by teaching their peers. My session during Education on Air showed some of these offline activities in practice.
For example, playing with a set of binary cards, pictured below, can teach students how to explain binary representation.

Year 5 and 6 students learn about binary representation through a CS Unplugged activity

2. Build lessons around real-world examples
CS is practical—algorithms speed up processes so people don't have to wait, device interfaces need to be designed so they don't frustrate users, programs need to be written so they don't waste resources like battery power on a mobile phone. Examples like these can help students understand how CS and CT impact the world around them. Consider discussing human interface design as it applies to popular mobile apps as well as real-world systems, like factories and libraries.

As Maggie Johnson, Google's director of education and university relations, wrote last year: "If we can make these explicit connections for students, they will see how the devices and apps that they use everyday are powered by algorithms and programs. They will learn the importance of data in making decisions. They will learn skills that will prepare them for a workforce that will be doing vastly different tasks than the workforce of today."

3. Connect new ideas and familiar subjects
Some of the most successful CS and CT lessons reference other subjects. For example, biology students can reconstruct an evolutionary tree using a string matching algorithm. Students might also apply geometry skills to Scratch programming by using their knowledge of angles to represent polygons with blocks of code. CS can also be combined with non-academic subjects, like physical education. Google's engineering director in Australia, Alan Noble, explained this interdisciplinary approach well: "CS combined with another discipline, brings with it new insights and new ways of approaching things. We call this [...]
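The binary-cards activity above can also be mirrored in a few lines of code: each card shows a power of two (16, 8, 4, 2, 1 dots), and a number is represented by which cards are face up. This is a hedged sketch of the unplugged lesson's idea, not CS Unplugged's own materials.

```python
# The binary-cards activity in code form: a greedy pass over the cards
# from largest to smallest decides which must be face up (1) or
# face down (0) to make the target number.

def cards_for(number, cards=(16, 8, 4, 2, 1)):
    """Return which cards are face up (1) or down (0) for `number`."""
    face_up = []
    for card in cards:
        if number >= card:
            face_up.append(1)   # this card's dots are counted
            number -= card
        else:
            face_up.append(0)   # this card stays face down
    return face_up

print(cards_for(13))  # [0, 1, 1, 0, 1]  ->  8 + 4 + 1 = 13
```

Students who can explain why this greedy choice always works have, in effect, explained binary representation itself.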


Shielding you from Potentially Harmful Applications
Communications and Public Affairs Manager

Thu, 16 Feb 2017 22:00:00 +0000

Earlier this month, we shared an overview of the ways we keep you safe, on Google and on the web more broadly. Today, we wanted to specifically focus on one element of Android security—Potentially Harmful Applications—highlighting fraudsters' common tactics, and how we shield you from these threats.

"Potentially Harmful Applications," or PHAs, are Android applications that could harm you or your device, or do something unintended with the data on your device. Some examples of PHA badness include:

  • Backdoors: Apps that let hackers control your device, giving them unauthorized access to your data.
  • Billing fraud: Apps that charge you in an intentionally misleading way, like premium SMS scams or call scams.
  • Spyware: Apps that collect personal information from your device without consent.
  • Hostile downloads: Apps that download harmful programs, often through bundling with another program.
  • Trojan apps: Apps that appear benign (e.g., a game that claims only to be a game) but actually perform undesirable actions.

As we described in the Safer Internet post, we have a variety of automated systems that help keep you safe on Android, starting with Verify Apps—one of our key defenses against PHAs. Verify Apps is a cloud-based service that proactively checks every application prior to install to determine if the application is potentially harmful, and subsequently rechecks devices regularly to help ensure they're safe. Verify Apps checks more than 6 billion installed applications and scans around 400 million devices per day.

If Verify Apps detects a PHA before you install it, or finds one already on your device, it will prompt you to remove the app immediately. Sometimes, Verify Apps will remove an application without requiring you to confirm the removal. This is an action we'll take very rarely, but if a PHA is purely harmful, has no possible benefit to users, or is impossible for you to remove on your own, we'll zap it automatically.
Ongoing protection from Verify Apps has ensured that in 2015, over 99 percent of all Android devices were free of known PHAs. Verify Apps is just one of many protections we've instituted on Android to keep billions of people and devices safe. Just as PHAs are constantly evolving their tactics, we're constantly improving our protections. We'll continue to take action when we have the slightest suspicion that something might not be right. And we're committed to educating and protecting people from current and future security threats—on mobile and online in general.

Be sure to check that Verify Apps is enabled on your Android device, and steer clear of harmful apps by only installing from a trusted source. [...]


Play a duet with a computer, through machine learning
Coder and Musician

Thu, 16 Feb 2017 17:00:00 +0000

Technology can inspire people to be creative in new ways. Magenta, an open-source project we launched last year, aims to do that by giving developers tools to explore music using neural networks.

To help show what’s possible with Magenta, we’ve created an interactive experiment called A.I. Duet, which lets you play a duet with the computer. Just play some notes, and the computer will respond to your melody. You don’t even have to know how to play piano—it’s fun to just press some keys and listen to what comes back. We hope it inspires you—whether you’re a developer or musician, or just curious—to imagine how technology can help creative ideas come to life. Watch our video above to learn more, or just start playing with it.
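The interaction pattern behind A.I. Duet can be sketched as call-and-response over note sequences. The real experiment answers with a neural network trained on melodies; the toy stand-in below just echoes the player's notes transposed up a fifth, purely to illustrate the "you play, the computer replies" loop. The function, interval, and note values are invented for illustration, not Magenta's API.

```python
# Toy call-and-response sketch, NOT Magenta's model: the "response"
# is simply the input melody shifted up by a fixed interval.

def respond(melody, interval=7):
    """Return a response melody: each MIDI note shifted up `interval` semitones."""
    return [note + interval for note in melody]

player = [60, 62, 64]     # C4, D4, E4 as MIDI note numbers
print(respond(player))    # [67, 69, 71]  ->  G4, A4, B4
```

Swapping this rule for a learned sequence model is precisely what makes the real duet feel responsive rather than mechanical.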


Start shopping with the Google Assistant on Google Home
Product Manager

Thu, 16 Feb 2017 16:00:00 +0000

What do you need to get done today? If picking up paper towels or stocking up on coffee is on your list, consider it done. To help you keep up with your busy schedule and shop for the things you need, we’re introducing shopping with your Google Assistant on Google Home.

Starting today, you can shop for your everyday essentials—from paper towels to vitamins. You'll be able to order from participating Google Express retailers, including Costco, Whole Foods Market, Walgreens, PetSmart, Bed Bath & Beyond and more than 50 other national and locally available retailers.

To get started, just say “Ok Google, how do I shop?” or “Ok Google, order paper towels”.


Through April 30, 2017, when you shop via Google Home, you don't have to worry about additional service or membership fees. And set-up is easy! To get started, go to the Google Home app, navigate to “More settings” and then scroll down to “Payments.” From there, set your default credit card and delivery address, and you’re ready to shop.

Today is just the beginning of what's possible for shopping with the Google Assistant. Over the coming months, we’ll continue to add new features and enable purchases for other apps and services.



Bringing the We Love You Project to Google Arts & Culture
Founder and Photographer

Thu, 16 Feb 2017 14:00:00 +0000

Editor's Note: Today we're launching a new exhibit in Google Arts & Culture featuring the work of photographer Bryon Summers. We've invited Summers to share more about the We Love You Project in this post.

In 2016 I set out to create 1,000+ portraits of Black men of all ages. From the moment we're born, Black boys are bombarded with images that strip us of our humanity. We see Black bodies cast as criminals and predators, implicitly urging viewers of all stripes to believe these characterizations as unwavering truths of Black male identity. What we don't see are the smiling, reassuring, loving faces of our sons, brothers, cousins, husbands and fathers. With the We Love You Project, I wanted to show that even though we may feel as if our bodies are under attack, we're still part of a larger community that loves and supports us.

Bryon takes a photo of Evan Ward at Google's Mountain View campus

The We Love You Project has now surpassed 500 participants, and the groundswell of support and joyful participation from Black men across the country has been one of the most powerful experiences of my artistic career. As we continue to photograph Black men and boys, we want to ensure that our work continues to be seen and drives meaningful conversations about many Black men's experiences in America. This is why we've partnered with Google Arts & Culture to create a digital gallery of more than 500 portraits from the series.

Google also invited us to photograph Black Googlers at its Mountain View headquarters—another huge turning point for the project. Not only is Google helping us reach our goal of 1,000 portraits, the company's participation reflects its commitment to diversity and to being an ally of the Black community.

Evan Ward, Software Engineer
Mike Costa, Senior Counsel
Andy Hinton, VP, Global Ethics & Compliance
Gerald Jean-Baptiste, Staffing Channels Specialist
Emmanuel Matthews, Innovative Experience Generalist

[...]


Partnering with Telenor to launch RCS messaging in Europe and Asia
Head of RCS

Thu, 16 Feb 2017 08:00:00 +0000

Over the past year, we’ve worked with the mobile industry on an initiative to upgrade SMS for people everywhere, providing a more enhanced messaging experience through RCS (Rich Communications Services). Today, we’re excited to announce that we’re partnering with Telenor to enable the launch of RCS messaging to their 214 million subscribers across Europe and Asia, including Norway, Denmark, Sweden, Hungary, Montenegro, Serbia, Bulgaria, Pakistan, Myanmar, Bangladesh, Thailand, Malaysia and India. Subscribers will have access to advanced messaging features as a standard part of their Android device.

Features like group chat, high-res photo sharing, read receipts, and more, will come standard on Android. Subscribers will have their SMS experience upgraded through the Messenger app for Android devices, developed by Google. The service will be powered by the Jibe RCS cloud from Google.


In markets where RCS is launched, Telenor subscribers who already have the Messenger app on their phone will automatically get access to RCS services through an app update. Subscribers who don't have the app can install the Messenger app from the Google Play store. In addition, as part of the partnership with Telenor, many new Android devices will come with Messenger for Android preloaded as the default SMS and RCS messaging app.

This RCS messaging implementation supports the GSMA universal profile—a standard supported by more than 58 carriers and manufacturers collectively covering a subscriber base of 4.7 billion people globally. We’ve launched RCS messaging using the universal profile with carriers in the U.S. and Canada, and plan on launching RCS in more countries in the coming months.

With this partnership, millions more Android users will get an upgraded SMS experience.

How online courses can help teach computational thinking and CS
Research Fellow

Wed, 15 Feb 2017 23:00:00 +0000

Editor’s note: We’re highlighting education leaders across the world to share how they’re creating more collaborative, engaging classrooms. Today’s guest author is Rebecca Vivian, one of the keynote speakers from Education on Air, Google’s free online conference which took place in December 2016. Rebecca, a Research Fellow at the computer science education research group (CSER) at the University of Adelaide in Australia, shares professional development ideas for preparing teachers for a classroom focused on computational thinking and computer science.

These days, we need to prepare the next generation of students to be creators—not just consumers—of digital technology. As the demand for computer science and computational thinking skills increases, countries are integrating these skills into their K-12 curricula. This year, Australia implemented a digital technologies curriculum, incorporating the teaching of computational thinking and CS into curricula from foundation level, and many other countries are rapidly following suit. But teachers need help to implement this type of digitally focussed curriculum.

One of the ways we can support teachers in this area is via "massive open online courses," or MOOCs. In Australia, the computer science education research group (CSER) at the University of Adelaide is partnering with Google to develop online communities and free MOOCs, where K-12 teachers can share their creative ideas and suggest professional development lessons. With these resources, teachers are learning to integrate computational thinking and computer science into their curriculum.

Since launching the digital technologies MOOC in 2014, we’ve been able to scale professional learning across Australia and introduce new learning styles such as algorithmic thinking, which teaches students to develop step-by-step solutions for problems they encounter.
More than 7,200 teachers are engaged in this professional learning program and have shared more than 4,500 resources as a result of these MOOCs. The program isn’t just working for experienced CS teachers: Ellie, a 56-year-old grandmother and primary school teacher with virtually no technology background, created a lesson on binary and data resources after taking our first course. In late 2016, the Australian government decided to invest nearly 7 million dollars over four years to scale our efforts further and support remote and low-income communities.

Connecting teachers to share creativity and insight

The success of Australia’s MOOCs and online teacher community has proven the value of peer-to-peer professional learning. Teachers have embraced our “professional learning in a box” kits—slide decks of instructor notes, videos, and in-person activity ideas that they can customize to deliver professional learning sessions in their school or community. Teachers also love user-generated content in our online communities because they can interact with the teachers who created it, and apply concepts they’ve learned online to their classroom.

Education on Air, which I participated in last year, works much like a MOOC by providing a space for people with a shared interest to come together and learn from one another, no matter where they’re located. In my breakout session, “Making Computational Thinking Visible: Classroom Activities and Google Tools,” I explained algorithmic thinking, demonstrated the way it applies to other learning areas, and shared tips on how Google tools can assist in introducing this framework to students. Teachers left the session with ideas they could implement the next day, including tips for engaging lessons that integrate algorithmic thinking, and ideas for applying this framework to other learning areas.
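To make the idea of algorithmic thinking concrete, here is a minimal, hypothetical classroom example in the spirit of Ellie's binary lesson (the function name and step format are ours, not part of the CSER materials): it converts a number to binary while recording every step, so students can trace the algorithm by hand.

```python
def to_binary(n):
    """Convert a non-negative integer to a binary string, recording
    each division step so students can follow the algorithm."""
    if n == 0:
        return "0", ["0 is just '0'"]
    bits, steps = [], []
    while n > 0:
        bits.append(str(n % 2))
        steps.append(f"{n} divided by 2 is {n // 2}, remainder {n % 2}")
        n //= 2
    # The remainders come out least-significant bit first, so reverse them.
    return "".join(reversed(bits)), steps

binary, trace = to_binary(13)  # binary == "1101"
```

A lesson built around this asks students to predict each remainder before running the code, which is exactly the step-by-step reasoning the curriculum aims to teach.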

Expanding Fact Checking at Google

Wed, 15 Feb 2017 18:00:00 +0000

Over the years we’ve heard from Google News users that our efforts to label stories ranging from local to satire to user-generated have helped expand their view of what is happening in the world. Last October we added a new Fact Check tag to help people find news stories that have been fact checked, so they can understand the value of what they’re reading. Soon after, we introduced the tag in France and Germany. Starting today, people in Brazil, Mexico and Argentina can see fact check tagged articles in the expanded story box on Google News and in the Google News & Weather iOS and Android apps.

Fact Check in Brazil

We’re also launching the fact check tag in these countries in news mode in Search. That means if you do a regular search and click the news tab, fact check articles will be elevated and annotated with the same fact check label that you would see in stories on Google News.

Fact Check in news mode in Search

We’re able to do this work because the fact check industry itself has grown—there are now more than 120 organizations involved in tackling this issue—but our commitment to this area is not new. In Europe over the last couple of years we’ve been working with publishers on a number of efforts focused on fact checking.
Last week, we announced CrossCheck, a joint project involving nearly 20 French newsrooms and the First Draft Coalition to debunk myths pertaining to the upcoming French elections. In addition, as part of the Digital News Initiative Fund, we’ve provided support for more than 10 projects looking at fact checking and authentication, adding six new initiatives at the end of last year:

U.K.-based Full Fact is building an automated fact-checker tailored for journalists.

Scotland’s the Ferret is using funding to build up a formal fact checking operation in their newsroom in the wake of the EU referendum.

Factmata, developed at University College London and the University of Sheffield, will use machine learning to build tools to help readers better understand claims made in digital media content, such as news articles and political speech transcripts.

In Italy, Catchy’s team of scientists and media analysts has created Compass, a fact checking platform to call out misleading stories, rebut bad facts and connect news events to reliable information.

In France, Le Monde’s 13-person fact checking unit, Les Décodeurs, has received funding for their Hoaxbuster Decodex project.

Norway’s ambitious Leserkritikk (“Reader Critic”) project, currently running as a prototype, lets readers give specific and structured feedback on facts, language and mistakes in published content.

These projects clearly illustrate a desire for more of this work, and we’re eager to bring the fact check tag to other countries around the world. In order to make this a reality, we need your help. Publishers who would like to see their work appear with the Fact Check tag should use the open ClaimReview schema from schema.org in their stories. Adding this markup allows Google to find these stories and highlight the fact checking work that has gone into them. For more information, head on over to our help center.
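For publishers wondering what ClaimReview markup looks like in practice, here is an illustrative sketch (the URL, claim text and rating values below are invented placeholders, not real fact checks): a Python dict mirroring the JSON-LD structure, serialized to the string a publisher would embed in an article page.

```python
import json

# Illustrative ClaimReview markup; all values are placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2017-02-15",
    "url": "https://example.com/fact-checks/some-claim",
    "claimReviewed": "An example claim under review",
    "author": {"@type": "Organization", "name": "Example Fact Checkers"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": "1",
        "bestRating": "5",
        "alternateName": "False",
    },
}

# Serialize to the JSON-LD string that would sit inside a
# <script type="application/ld+json"> tag on the article page.
markup = json.dumps(claim_review, indent=2)
```

The `claimReviewed`, `author` and `reviewRating` fields are what allow a crawler to surface who checked which claim and what verdict they reached.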

Making it easier for developers to create spatialized sound with FMOD and Wwise
Software Engineer

Wed, 15 Feb 2017 17:00:00 +0000

Recreating spatialized sound the way humans actually hear it can greatly improve the sense of immersion in any game or app experience. But for developers, battling with various unconnected spatial audio tools can be both confusing and time-consuming. We’ve worked closely with Firelight Technologies and Audiokinetic, creators of the popular audio engines FMOD and Wwise, on a suite of streamlined spatial audio plugins that make it possible to add high-quality, spatialized audio into your apps across desktop, mobile, and VR platforms—including Android, iOS, Windows, OS X and Linux.

The new Google VR FMOD and Wwise plugin suite provides all the features developers need to create highly immersive spatial audio experiences:

Highly accurate rendering of large numbers of spatialized sound sources.
Distance, elevation and occlusion effects, all at minimal overhead.
Room acoustics that react in real time to the listener’s location, smoothly transitioning between different environments.
Playback of immersive ambisonic sound fields, using the same technology that powers spatial audio on YouTube.

These plugins work seamlessly with the FMOD and Wwise integrations into Unity and Unreal Engine. The Unity integration provides an intuitive way to control room acoustics that react instantly to changes in your app or game environments. Changes to room sizes, material types and object positions are all reflected in real time through the Google VR spatialization engine to produce lifelike sound.

Up until today, our spatialization algorithms were primarily optimized for smartphones to have minimal impact on the primary CPU, where mobile apps do most of their work.
Now running on desktop PCs, the new FMOD and Wwise spatial audio plugins offer faster performance, spatializing greater numbers of high-quality sound sources, while continuing to minimize impact to your CPU budget.

To get started, download the latest version of FMOD Studio, which now includes the GVR plugins, or download the plugins for Wwise on GitHub. For more details, check out our developer documentation for FMOD and Wwise.
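As a rough illustration of the distance effects mentioned above, here is a toy inverse-distance gain calculation (this is our own simplification for intuition only, not the algorithm the GVR plugins actually implement):

```python
import math

def distance_gain(source, listener, min_distance=1.0):
    """Toy inverse-distance attenuation for a point sound source.
    Gain is 1.0 inside min_distance and falls off as 1/distance beyond it."""
    d = math.sqrt(sum((s - l) ** 2 for s, l in zip(source, listener)))
    # Clamp so a source sitting on the listener doesn't produce infinite gain.
    return min_distance / max(d, min_distance)

gain = distance_gain((3.0, 0.0, 4.0), (0.0, 0.0, 0.0))  # distance 5 -> gain 0.2
```

Real spatializers layer elevation cues, occlusion and room reverberation on top of this kind of distance falloff, which is why computing many sources cheaply matters.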

By Washington’s teeth! U.S. presidential history, now on Google Arts & Culture
Program Manager

Wed, 15 Feb 2017 17:00:00 +0000

Did you know that the Bush Family has a favorite taco recipe, which First Lady Barbara Bush described as “loved by all who love Mexican food”? Or that George Washington’s dentures were not made of wood as is popularly thought, but actually from human and cow teeth as well as ivory? Or how about that, to celebrate his Inauguration, Theodore Roosevelt received a lock of President Lincoln’s hair as a gift?

No, we’re not presidential scholars; we’re just excited for Presidents’ Day! Today, as a follow-up to our American Democracy collection, Google Arts & Culture is partnering with more than 30 cultural institutions to bring you history from the United States presidency. With over 2,000 new artifacts, photos, pictures and more, and 63 new exhibits (for 158 exhibits total), this collection invites you to remember and celebrate the history, lives and legacies of the 44 U.S. presidents.

Take an immersive tour of presidents’ iconic homes and get a sneak peek into their private lives—from childhood and family life, to favorite pastimes and chefs—in addition to their public accomplishments. Explore the weird world of presidential pets—other than dogs, there have been raccoons, sheep, horses, badgers, and even a pygmy hippopotamus and elephants.

You can view 25 presidential portraits captured using Google’s Art Camera. These gigapixel-quality images allow you to zoom in and explore details of these portraits more thoroughly than you could with the naked eye.

Dwight D. Eisenhower, 34th president of the United States.

We’re making available 17 new 360-degree virtual tours that transport you to places full of presidential history. Using the Google Arts & Culture app (available on iOS and Android) and Google Cardboard, take a virtual tour of places like the home of Franklin D. Roosevelt and the Ulysses S. Grant National Historic Site. And, in addition, educators can use Google Expeditions to take students on a guided tour of the White House, right from their desks!
There are 14 Google Expeditions relating to the Office of the President, including Presidential Museums and work by the First Ladies, all great trips for students across grades and subjects.

Take a virtual reality tour of the White House, right from wherever you are.

Ever wonder what it’s like to travel like POTUS? Take a look at Ronald Reagan’s Air Force One (now housed in his Presidential Library) and other ways presidents have traveled in safety and style.

Our Presidents’ Day collection covers the vast political and personal histories of our U.S. heads of state, full of intriguing and surprising stories that allow anyone with an internet connection to turn into a presidential historian. We hope you enjoy!

Did you know...Google Search now has easy-to-find fun facts?
Product Manager

Wed, 15 Feb 2017 17:00:00 +0000

Did you know a cat can’t chew big pieces of food because its jaw can’t move sideways? Or that hamsters got their name from the German word “hamstern,” which means to hoard? And how do we know this? Starting today on Google Search, you can find fun facts about living creatures from around the world, making you the most interesting person at the dinner party or the reigning champ at trivia. Head to Google, ask for a fun fact about something (think plants, animals, fruits and veggies), and ta-da! A trivia tidbit is delivered right at the top of your search results.

For the animal lovers out there, fun facts might be man’s (new) best friend. It might surprise you to learn that dogs have three eyelids to help protect their eyes and keep them from drying out. Or for the arachnophobes out there: the venom of the black widow spider is apparently 15 times more potent than a rattlesnake’s. The animal kingdom is chock full of wild facts and even wilder beasts!

For those of you still finding a reason to celebrate Valentine’s Day (or perhaps looking to make up for yesterday), stop and smell some fun facts about flowers. Did you know that light red carnations represent admiration, while dark red denotes deep love and affection? Or that the Ancient Greeks considered the violet to be a symbol of love and fertility and an essential ingredient for love potions? A quick search may come in handy before you buy your blooms.

If you’re trying to convince the little ones in your life to eat healthy, fun facts about fruits and veggies are sure to please. After all, who knew that strawberries actually aren’t berries at all? Or that the inner temperature of a cucumber can be up to 20 degrees cooler than the outside air? That’s sure to put your brain in a pickle.

These are just a few of the fun facts out there for you to find on Google. And here’s a pro tip for the trivia lovers out there: some queries have multiple facts, one of which we randomly display when searched.
So if you’re interested in learning more, just hit refresh and another fact may surface. Enjoy your fact finding!
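That refresh behavior can be pictured as a random pick from a per-query list of facts. The sketch below is purely illustrative — the fact lists and lookup are our own, not Google's serving code or data:

```python
import random

# Illustrative per-query fact lists; not Google's actual data.
FACTS = {
    "cat": [
        "A cat can't chew big pieces of food because its jaw can't move sideways.",
        "Cats use their whiskers to judge whether they can fit through a gap.",
    ],
}

def fun_fact(query, rng=random):
    """Return one randomly chosen fact for the query, or None if we have none."""
    facts = FACTS.get(query.lower())
    return rng.choice(facts) if facts else None
```

Each call may return a different entry from the list, which is why refreshing the search can surface a new fact.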

Looking forward to Next ‘17: 8 G Suite sessions you don’t want to miss
Vice President of Apps, Google Cloud

Tue, 14 Feb 2017 19:00:00 +0000

We’re three weeks away from Google Cloud Next, one of the largest events Google has ever hosted. As we get ready to welcome you on March 8th, I’m reminded of how exciting it is to be in the cloud computing industry right now, helping shape how businesses will work together in the coming years.

About six months have passed since we announced G Suite, our set of intelligent apps for business. Since then, we’ve focused on bringing you new collaboration tools, like Team Drives and Jamboard, and have partnered with companies like Box and Slack to help businesses of all sizes unlock productivity across their organizations. At Next, we’ll get a chance to hear from businesses directly about these G Suite additions and collect feedback to shape what we build in the future.

I have the privilege to join some of Google’s top leaders on stage at Next, including Diane Greene, Sundar Pichai and Eric Schmidt. While I look forward to hearing my colleagues unpack the potential of cloud for businesses, I’m especially excited to hear from one of our newest Google Cloud leaders, Fei-Fei Li, about the value that machine learning will bring to the enterprise.

It’s one thing to talk about product innovations; it’s another to try them for yourself. This year’s Next will feature Cloud Showcase, an area for interactive product experiences, allowing each attendee to see and feel the power of Google Cloud products firsthand. We’re opening up the doors for attendees to experience machine learning, application development, collaboration and productivity through interactive installations that are unique to Google Cloud. Besides the keynotes and show floor, there are over 200 sessions at Next this year.
If you need help narrowing down that list, here are some sessions I’m excited for.

If you’re interested in learning more about how machine learning can impact your business or how you can build more agile, productive teams, check out:

Introduction to Google Cloud Machine Learning
Transform your business with machine learning and Explore in Google Docs
Machine learning powering the workforce: Explore in Google Docs

To learn how to create custom apps with G Suite, or integrate your G Suite apps with existing workflows to accomplish more, there’s:

Automating internal processes using Apps Script and APIs for Docs editors
Build powerful custom apps with App Maker on G Suite
New Google Docs integrations to streamline your workflows

For insight into controlling business data, building custom dashboards and running custom queries, be sure to go to:

Gaining full control over your organization’s cloud resources
Getting the most out of Google Admin Reports and BigQuery

Register here to secure your spot at Next ‘17.

Google Cloud Next ‘17 daily programs announced - day passes now available
Vice President of Compute and Developer Services

Mon, 13 Feb 2017 16:04:00 +0000

In early March, thousands of developers, IT decision makers and cloud industry leaders will descend on Moscone Center West in San Francisco for Next ‘17, Google Cloud’s premier annual conference. Today, I would like to share more information on the major themes for each day as well as more speakers and new ticketing options.

The first day (Wednesday, March 8) of Next ‘17 will feature keynotes from Diane Greene, SVP of Google Cloud; Sundar Pichai, CEO of Google; Eric Schmidt, Chairman of Alphabet; and Fei-Fei Li, Chief Scientist for Google Cloud Machine Learning and AI and Professor of Computer Science at Stanford. Our lineup of executives will discuss what Google Cloud offers today and its vision for the future. Attendees will also hear how our customers and partners are embracing the cloud in new and innovative ways. We’re excited that Quentin Hardy (formerly of The New York Times and now with Google Cloud) will be interviewing Marc Andreessen and Vint Cerf on stage. All these keynotes will be followed by a series of fantastic breakout sessions.

On Day 2 (Thursday, March 9), we’ll announce new products for Google Cloud Platform (GCP) and G Suite. Our product and engineering leaders, including Urs Hölzle, Prabhakar Raghavan, Brian Stevens and Chet Kapoor, will share roadmaps of Google Cloud’s future product direction.
We’ll also see exciting product demos and hear from customers about how Google Cloud is helping them compete and succeed.

The final day (Friday, March 10) of Next ‘17 will be dedicated to Google’s commitment to open source and cloud-native architectures, with deep dives on Kubernetes and TensorFlow, including talks from Jeff Dean, Senior Google Fellow and leader of the Google Brain team, along with Rajat Monga, expanding on the progress Google is making with TensorFlow and Google Brain. Start-ups and the venture capital community will come on stage to share how they’re leveraging the cloud to build the next wave of innovative products and services, born in the cloud.

In addition to the speakers we announced in January, we’re delighted to announce that Jim Zemlin, Executive Director of the Linux Foundation; Eric Brewer, creator of the CAP theorem and Vice President of Infrastructure at Google; and Chris Wright, Vice President and Chief Technologist at Red Hat, will also speak at the event, sharing more about the Kubernetes project and discussing open source in the enterprise. Finally, I will be sharing Google’s vision for an open cloud platform and what we believe the future holds.

To make Google Cloud Next ‘17 even more accessible to the cloud community, we’re excited to launch new day passes that allow attendees to attend on their day of interest. One-day passes are available for Day 1 or Day 2 of Next ‘17 for $549 and include $300 in GCP credits.

Ready to register? There couldn’t be a better time to take part in Next ‘17. We look forward to welcoming you in March.

Keep track of your favorite places and share them with friends
Product Manager, Google Maps

Mon, 13 Feb 2017 13:00:00 +0000

Is your bucket list etched in your memory, or scribbled on a dozen post-it notes scattered around your home? Have you ever promised out-of-town guests an email full of your favorite spots, only to never get around to clicking send? Starting today, you can create lists of places, share your lists with others, and follow the lists your friends and family share with you—without ever leaving the Google Maps app (Android, iOS).

Getting started is easy. Simply open the Google Maps app and find that BBQ spot you’ve been wanting to try. Tapping on the place name and then the “Save” icon adds the place to one of several pre-set lists like “Want to Go” or “Favorites.” You can also add the restaurant to a new list that you name yourself, like “Finger Lickin’ BBQ.” To recall the lists you’ve created, go to Your Places (in the side menu) and then open the saved tab. Icons for the places you’ve saved to lists will appear on the map itself, so you’ll always know whether one of your must-try BBQ spots is nearby.

Because sharing is caring, we made it easy to share lists like “Best Views in SF” via text, email, social networks and popular messaging apps. Whenever friends and family come to town, tap the share button to get a link and start flexing your local knowledge muscles. Once you send a link to your out-of-towners, they can tap “Follow” to pull up the list from Your Places whenever they need it. Here’s how it all works in real life:


The lists you follow are with you wherever you take Google Maps and are viewable on mobile and desktop—and even offline. Next time you're on a trip, download offline maps of the area in advance and you'll be able to see all the places you’ve added to lists on the map itself.

With the millions of landmarks, businesses and other points of interest in Google Maps, there’s no shortage of places to try. Now that we’ve got the world mapped, it’s your turn to map your world with Lists—from local hotspots to bucket list destinations worlds away.

Now you can create lists of your favorite places in Google Maps and share them with friends and family.
