The Official Google Blog

Insights from Googlers into our products, technology, and the Google culture.

Last Build Date: Fri, 16 Mar 2018 18:45:00 -0000


Team Pixel is in bloom this spring (Pixel Team)

Fri, 16 Mar 2018 18:45:00 -0000

Our community of photographers is on the rise, and the #teampixel tribe is officially 35,000 members strong (and counting)! This week’s highlights range from colorful plum blossoms in Sakura, Japan to a confetti-filled wedding.

Left: @archibajda - Upside down in Kraków, Poland. Right: @tanyakhanijow - confetti party in portrait mode, India
@zuvamart - tea time in Hyderabad, India
Left: @motivates - plum blossoms in Sakura, Japan. Right: @juicegee - Gardens by the Bay, Singapore
@peter.hudston - neon skies in Annapolis Royal, Nova Scotia

If you’re looking for a daily dose of #teampixel photos, follow our feed on Instagram and keep spreading the love and likes with fellow Pixel photographers. [...]

Check out photos from our growing community of #teampixel photographers.


The High Five: “A Brief History” of this week’s searches (On-air trends expert)

Fri, 16 Mar 2018 18:45:00 -0000

Sifting through the week’s news can feel like sinking into a black hole. Luckily, we have some standout trends this week, gathered with data from Google News Lab. They start with a tribute to legendary physicist and black hole escape artist Stephen Hawking, who passed away Wednesday at age 76.

“Look up”
Stephen Hawking’s intelligence was a cut above the rest, in life and in Search: interest in “Stephen Hawking IQ” was 170 percent higher than “Stephen Hawking quote” over the past week. But of his many memorable quotes, here’s the most searched: “Look up at the stars and not down at your feet. Be curious. And however difficult life may seem, there is always something you can do and succeed at.”

Turbulent times
“What happened on United Airlines?” was a trending question this week. The company faced scrutiny after a French bulldog—the second most searched dog breed this week—suffocated in an overhead compartment and a pet German Shepherd was accidentally shipped to Japan. For those searching for canine breeds this week, Rhodesian Ridgebacks were top dog.

A cue from teens
Search interest in “walkout” has reached an all-time high in the U.S. this month. On Wednesday, students around the country participated in a walkout to call on elected officials to take action on gun laws—the top cities searching for “walkout” were Charlottesville, VA, Fort Smith, AR, and Madison, WI.

It’s bracket season 
March Madness is in full swing, especially for North Carolina, Duke and Kentucky fans, whose teams have been the most searched in the past week. The top-searched celebrity brackets are from basketball commentator Jay Bilas, former President Barack Obama, and Warren Buffett. And the winner is anyone’s guess: Michigan State, favored by both Bilas and Obama, wasn’t among the top 10 teams being searched this week.

Go green
Saturday marks St. Patrick’s Day and, in true spirit, corned beef and cabbage is the top trending St. Patrick’s Day recipe this week, followed by … jello shots 🤔. If you’re feeling lucky, you might be among those searching for lucky horseshoes, lucky cats and lucky clovers (the top searched “lucky” items in the past week). And although New York has the biggest parade and Boston the biggest reputation, the top states searching for the holiday are Connecticut, Kansas, and Delaware. Illinois, where Chicagoans annually dye their river green, comes in at number four.

Check out what’s trending on Google with a look at a few of the top searches from this week.


OK Go makes some noise in the classroom (Lead singer and video director for OK Go)

Thu, 15 Mar 2018 17:00:00 -0000

Editor’s Note: Many of us on Google’s Science Journal team are huge fans of OK Go, the popular rock band and YouTube sensation. Their music videos are a spectacular blend of science, engineering, and creativity—a great formula for engaging classroom activities. So when professor AnnMarie Thomas approached us about the OK Go Sandbox, a collection of materials for K-12 educators, we simply couldn’t pass up the opportunity. OK Go frontman Damian Kulash tells us more in this guest post.

I’m always so proud and excited when I hear from a teacher who uses an OK Go music video in the classroom, and over the years, I’ve heard it more and more frequently—from pre-school teachers to grad school professors. We know our videos are joyful and nerdy (we’ve done a Rube Goldberg machine and a dance in zero gravity, for instance), but we didn’t plan them for the classroom environment. It’s a wonderful surprise to hear they’re sneaking in there on their own, and we want to support that in any way we can.

Last year I met Dr. AnnMarie Thomas, who leads the Playful Learning Lab at the University of St. Thomas. Together we brainstormed ways to open up our videos for classrooms, and we set up a survey to ask educators for their ideas. Within just a few days, nearly a thousand teachers sent us their thoughts, and, with support from Google, we took this feedback and together developed our new OK Go Sandbox. It’s a collection of materials created for and with K-12 educators: design challenges, educator guides, and more.

Here’s Dr. AnnMarie Thomas and me meeting with teachers to go over OK Go Sandbox materials.

It was especially cool to work with Google’s Science Journal team to develop tools that allow students to explore the world around them through music. Their new pitch detection feature makes it possible to make sounds using glasses of water (like we did in the Rube Goldberg machine for “This Too Shall Pass,” and in the musical performance of a robotic car for “Needing/Getting”), and there’s now an option to play data values as pitches, which lets students use their phone’s sensors to compose new sounds and interpret their data in a new way.

So whether we’re exploring frame rates by making flip books, or using a light sensor to make music (with Google’s Science Journal app), we hope that the challenges in the OK Go Sandbox help stoke curiosity and encourage learning through joy and wonder. And we particularly look forward to learning more from educators as this stuff gets into the world.

Educators! Please reach out to us at hello@OKGoSandbox.org with your input and ideas so that we can grow and adapt this to be maximally useful in inspiring your students. The best part of a sandbox is that we can try building lots of new things, even if we occasionally have to knock some things down and start over. [...]

Q&A with OK Go frontman Damian Kulash about the OK Go Sandbox, a collection of materials for K-12 educators.
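As a rough illustration of the “play data values as pitches” idea mentioned above (and not Science Journal’s actual implementation), the sketch below maps a stream of hypothetical light-sensor readings onto frequencies via MIDI note numbers. The note range and the 0 to 1000 lux scale are arbitrary choices for the example.

    # Illustrative sketch: mapping sensor readings onto musical pitches.
    # This is a stand-in for the idea, not Science Journal's code.

    def value_to_frequency(value, lo, hi, note_min=48, note_max=84):
        """Map a sensor value in [lo, hi] to a frequency on a MIDI note scale.

        note_min=48 (C3) and note_max=84 (C6) are arbitrary choices for this sketch.
        """
        # Clamp and normalize the reading to the range 0..1.
        t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
        # Interpolate to a MIDI note number, then convert to Hz (A4 = note 69 = 440 Hz).
        midi_note = note_min + t * (note_max - note_min)
        return 440.0 * 2 ** ((midi_note - 69) / 12)

    # Example: hypothetical light-sensor readings in lux.
    readings = [12, 80, 300, 950, 400, 60]
    for lux in readings:
        print(f"{lux:5d} lux -> {value_to_frequency(lux, 0, 1000):7.1f} Hz")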


Keeping cloud entry points secure with Google Chrome Enterprise (Product Manager)

Thu, 15 Mar 2018 16:00:00 -0000

When we introduced Chrome Enterprise last August, our aim was to provide a single solution that connected employees while giving admins the flexibility and control they needed to keep their businesses protected. Since then, security has only become more of a priority for enterprises. In fact, last year alone, 98% of businesses were affected by malware, and employee endpoints—like laptops, tablets, and smartphones—were increasingly the target of attacks.

Enterprise IT admins know this all too well. With hardware, firmware, browsers, apps and networks to protect, admins now face more risks than ever, while managing more devices than ever. We built our Chrome Enterprise ecosystem with this complex landscape in mind, and today we’re adding new enhancements and partnerships as we continue to make Chrome Enterprise the most secure endpoint solution for businesses in the cloud. Here’s a look at how these updates can help protect businesses, and their data, at every cloud access point.

Offering more ways for businesses to manage their devices from a single unified management solution

For many businesses, managing a broad range of devices within one unified endpoint management solution is a necessity. Last year, we announced our first enterprise mobility management (EMM) partnership with VMware AirWatch, the first third-party solution with the capability to manage Chrome OS. Today, we’re expanding this with four new partnerships with EMM providers, which give IT admins the ability to manage and implement security policies across their full fleet of devices from a single place.

  • Cisco Meraki offers a comprehensive set of solutions that includes wireless, switching, security, endpoint management, and security cameras, all managed through Meraki’s web-based dashboard interface.
  • Citrix XenMobile provides device and application management for comprehensive mobile security, and pairs well with other recent Citrix integrations.
  • IBM MaaS360 with Watson delivers a cognitive approach to unified endpoint management, enabling the management of endpoints, end users and everything in between.
  • ManageEngine Mobile Device Manager Plus (a division of Zoho Corp) is a unified endpoint management console for configuring, managing and securing mobile devices, desktops and apps.

With these partnerships in place, enterprises can pick the solution that fits their business best.

Helping enterprises manage Chrome OS alongside legacy infrastructure with more Active Directory enhancements

Building on our initial integration with Active Directory last August, we’ve added a number of enhancements to help admins manage Chrome OS alongside legacy infrastructure. Administrators can now configure managed extensions directly through Group Policy Objects. Users can authenticate to Kerberos and NTLMv2 endpoints on their local network directly from Chrome OS. We’re also expanding our support for common enterprise Active Directory setups like multiple domain scenarios. And we’ve improved our existing certificate enrollment flows with Active Directory Certificate Services (ADCS).

Continuing to deepen and expand management capabilities in Chrome Browser and Chrome OS

The less time IT has to spend on mundane, manual tasks, the more time it has to focus on business-critical projects. That’s why Chrome Enterprise was designed to give IT admins the ability to grant, manage and adjust user permissions at scale, with fewer repetitive tasks.

Chrome Enterprise already lets admins fine-tune more than 200 security policies and grant secure, authorized employee access to online resources, and we’re continuing to add additional controls to help. In recent months, we’ve added the following controls to help admins:

  • Per-permission extension blacklisting lets admins restrict access to extensions based on the permissions required—for example, extensions that require the use of a webcam. This allows admins to authorize an employee’s access to more extensions in the Google Chrome Web Store but main[...]
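As a concrete illustration of per-permission extension controls, here is a minimal sketch of what such a policy might look like when expressed through Chrome’s ExtensionSettings policy. The key names (“blocked_permissions”, the “*” default entry, “installation_mode”), the permission strings, and the extension ID shown are my assumptions for illustration; verify them against current Chrome Enterprise documentation before relying on them.

    # Hedged sketch of a per-permission extension policy. Key names and
    # permission strings are assumptions, not verified configuration.
    import json

    extension_settings = {
        # "*" applies to every extension without a more specific entry.
        "*": {
            # Block extensions that request webcam or microphone access,
            # the example called out in the post.
            "blocked_permissions": ["videoCapture", "audioCapture"],
        },
        # A specific extension ID (hypothetical) can still be allowed explicitly.
        "aaaabbbbccccddddeeeeffffgggghhhh": {
            "installation_mode": "allowed",
        },
    }

    # Emit the policy as JSON, roughly the shape a policy file would carry.
    print(json.dumps({"ExtensionSettings": extension_settings}, indent=2))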


Android Wear, it’s time for a new name (Director of Product Management)

Thu, 15 Mar 2018 15:45:00 -0000

Android Wear was founded on the belief that wearable technology should be for everyone, no matter what style you wear on your wrist or what phone you have in your pocket. Since then, we’ve partnered with top watch and electronics brands to create more than 50 watches to help you manage your fitness, connect with the people who matter most, and show you the information you care about. The best part: We’re just scratching the surface of what’s possible with wearables and there’s even more exciting work ahead.

As our technology and partnerships have evolved, so have our users. In 2017, one out of three new Android Wear watch owners also used an iPhone. So as the watch industry gears up for another Baselworld next week, we’re announcing a new name that better reflects our technology, vision, and most important of all—the people who wear our watches. We’re now Wear OS by Google, a wearables operating system for everyone. 


You’ll begin to see the new name on your watch and phone app over the next few weeks.

Android Wear is now Wear OS by Google—a new name that better reflects our technology, vision, and the people who wear our watches.

Introducing “wheelchair accessible” routes in transit navigation (Product Manager, Google Maps)

Thu, 15 Mar 2018 13:00:00 -0000

Google Maps was built to help people navigate and explore the world, providing directions worldwide to people traveling by car, bicycle or on foot. But in city centers, buses and trains are often the best way to get around, which presents a challenge for people who use wheelchairs or have other mobility needs. Information about which stations and routes are wheelchair friendly isn’t always readily available or easy to find. To make public transit work for everyone, today we’re introducing “wheelchair accessible” routes in transit navigation to make getting around easier for those with mobility needs.

Adam, Lucy, Omari and Meridyth shared their experience using public transportation.

To access the “wheelchair accessible” routes, type your desired destination into Google Maps. Tap “Directions,” then select the public transportation icon. Then tap “Options,” and under the Routes section you’ll find “wheelchair accessible” as a new route type. When you select this option, Google Maps will show you a list of possible routes that take mobility needs into consideration. Starting today, this feature is rolling out in major metropolitan transit centers around the world, starting with London, New York, Tokyo, Mexico City, Boston, and Sydney. We're looking forward to working with additional transit agencies in the coming months to bring more wheelchair accessible routes to Google Maps.

In addition to making public transportation more accessible, people around the world have been helping us add accessibility information to Google Maps. Last September, Local Guides from around the world gathered at 200 global meet-ups to answer accessibility questions—like whether a place has a step-free entrance or an accessible restroom—for more than 12 million places. Additionally, we’ve been busy capturing and updating Street View imagery of transit stations and city centers so people can preview a place or transit station ahead of time.

Tokyo Station

We built this feature to make life easier for people who use wheelchairs, but accessible routes are also helpful if you’re on crutches or pushing a stroller. With the help of transit agencies around the globe and people like you who contribute local knowledge, we’re making progress toward a more accessible world for everyone. [...]

Today we’re adding “wheelchair accessible” routes to Google Maps transit - making getting around on public transportation a bit easier for those with mobility needs.


Helping 1 million Europeans find a job or grow their business by 2020 (President, Business & Operations)

Thu, 15 Mar 2018 08:00:00 -0000

The world is undergoing a digital transformation, offering enormous opportunities for growth, innovation and jobs. However, digital skills and tools can still seem out of reach to many. That’s why we’re renewing our commitment to the EU Digital Skills and Jobs Coalition, with a new pledge to help 1 million Europeans find a job or grow their business by 2020. This commitment goes beyond our previous pledge to help people develop digital skills to ensure that we support trainees as they put those skills to use in building careers and businesses.

We’ve now trained 3 million Europeans, and more than 2 million people in Africa, in digital skills. This is our “Grow with Google” project, launched in 2015 and localized with expert partners in each country to maximize relevance and results. Our digital skills work, a reflection of the talents of people we have trained, was recognised by the European Commission in 2016 and 2017.

But does digital skills training really translate into economic impact and improved prospects for those people who invest their time? To answer that question, we launched independent research in 2016 and asked Grow with Google trainees about the impact they saw on their career or business 14 weeks after their training.

The research shows that following Grow with Google training, so far over 190,000 Europeans have found a job or started a business—like Ildikó in Hungary, a mother of two who learned how to code and now manages her own business from home. More than half a million European businesses have grown through new customers or revenue, like Ntina from Greece, who during the recession opened a hotel business which now welcomes people from all over the world. And 32,000 small and medium-sized businesses have taken on more staff, such as Mark & Anders from Denmark, who have grown from two to 30 people in the last year.

Our new Grow with Google Impact Report gathers together stories of people such as Ildikó, Ntina, Mark and Anders who have found a job or grown their business. Going forward, we will work with our research partner Ipsos to measure impact, and we’ll publish quarterly updates showing how new skills can translate into opportunities for business owners and job seekers alike.

Grow with Google aims to help everyone in Europe get access to training and products to grow their skills, career, or business, and we’ll continue to partner with governments, city councils, universities, private-sector businesses and nonprofits to achieve this. In Italy, Crescere in Digitale, a partnership with the Ministry of Labour and Chamber of Commerce, will activate 5,000 more internships for young unemployed people at SMBs by 2020, which can lead to full-time employment for people like Cristina at Lux Made-In, a traditional jewellery store. In Spain, we just launched a digital skills employment program with the Government, and in Germany, we continue to work with Fraunhofer IAIS on their Open Roberta program, teaching young women how to code.

Today anyone with a smartphone and an idea can be an entrepreneur, reach customers around the globe, and hire, grow and export. Technology is the toolkit for a world of opportunities—and Grow with Google is about helping everyone put those tools to work. [...]

We’re renewing our commitment to the EU Digital Skills and Jobs Coalition with a new pledge to help 1 million Europeans find a job or grow their business by 2020.


Pinkoi: Sharing love for local craft in a global marketplace

Thu, 15 Mar 2018 02:00:00 -0000

Pinkoi’s founders (from left to right): Mike Lee, Maibelle Lin and Peter Yen.

Editor’s note: As part of our series of interviews with people across the Asia-Pacific who use the internet as a tool to connect, create and grow, we spoke with Peter Yen, the CEO of Pinkoi, Asia’s leading online marketplace for original design and art products. Peter founded Pinkoi seven years ago along with Mike Lee, Pinkoi’s Chief Technology Officer, and Maibelle Lin, Pinkoi’s Chief Product Officer. From a staff of three, Pinkoi has grown to a business of 82 employees serving more than two million customers in 88 countries. The platform is now home to more than 50,000 artisans and designers.

Why did you start Pinkoi?

My wife loves craft fairs and vintage markets. That’s where I first connected with artisans and designers. They produce great original products, but are often unsure about how to promote them or connect with their customers. I also thought that the designer community lacked an online space to share their creative and business experiences.

Some of the 50,000 designers and artisans on Pinkoi.

How did you meet your two co-founders Mike and Maibelle?

It was the internet that brought us together. When I got the idea for Pinkoi, I researched developer blogs extensively and that is how I came across Mike. We chatted and exchanged ideas for the business on Gmail and Hangouts. We connected to Mai through a mutual friend who introduced us online. With the help of the internet, we gradually conceived and developed the idea for Pinkoi, although we did not live in the same place at the time. Although we are all tech geeks at heart, we also shared a common passion for design and helping the designer community. Our passion resulted in us becoming not just business partners, but also good friends!

A few of the 980,000 items for sale on Pinkoi.

What impact do you think the Internet has had on your business?

The internet is the reason why a platform like Pinkoi can work. Pinkoi gives anyone in the world easy access to our designers’ quality products. We think beautiful design is a universal language and should be shared. It’s not just our business, but also the livelihoods of all our designers. With the Internet, our designers have the opportunity of making a living while pursuing their passions. Most designers are hobbyists when they join Pinkoi, but quite a few become full-time entrepreneurs after receiving training from us. In particular, Google is like our oxygen and our business wouldn’t survive without it. Our online business relies on Google’s solutions. We use Google Analytics to understand performance across all acquisition channels and to gain insights into what people are searching for. We also have the ability to advertise to relevant segments of the population with Google AdWords. G Suite and Google Calendar are the backbone of our daily communications.

What’s the best part about working with artists and designers from around the world on Pinkoi?

It’s really empowering to know that you can have a positive impact on livelihoods and lives, even across borders. Our designers also pay that positive impact forward to their customers. Many of them have told us about online customers finding them at offline events to express their appreciation for products they bought on Pinkoi. Pinkoi isn’t just an online marketplace for transactions, it’s a platform to connect real people across the world.

Peter, CEO of Pinkoi: Early in my engineering career, my goal was to become a software architect.
I didn’t think entrepreneurship was in the cards for me. However, I was inspired by many colleagues at Yahoo who left to start their own companies. Like Maibelle, it definitely took some persuading before my family members began to support my decision. They did not believe that an engineer like me could run a business or connect eff[...]


Open sourcing Resonance Audio (Product Manager)

Wed, 14 Mar 2018 21:00:00 -0000

Spatial audio adds to your sense of presence when you’re in VR or AR, making it feel and sound like you’re surrounded by a virtual or augmented world. And regardless of the display hardware you’re using, spatial audio makes it possible to hear sounds coming from all around you.

Resonance Audio, our spatial audio SDK launched last year, enables developers to create more realistic VR and AR experiences on mobile and desktop. We’ve seen a number of exciting experiences emerge across a variety of platforms using our SDK. Recent examples include apps like Pixar’s Coco VR for Gear VR, Disney’s Star Wars™: Jedi Challenges AR app for Android and iOS, and Runaway’s Flutter VR for Daydream, which all used Resonance Audio technology.

To accelerate adoption of immersive audio technology and strengthen the developer community around it, we’re opening Resonance Audio to a community-driven development model. By creating an open source spatial audio project optimized for mobile and desktop computing, any platform or software development tool provider can easily integrate with Resonance Audio. More cross-platform and tooling support means more distribution opportunities for content creators, without the worry of investing in costly porting projects.

What’s included in the open source project

As part of our open source project, we’re providing a reference implementation of YouTube’s Ambisonic-based spatial audio decoder, compatible with the same Ambisonics format (Ambix ACN/SN3D) used by others in the industry. Using our reference implementation, developers can easily render Ambisonic content in their VR media and other applications, while benefiting from Ambisonics’ open source, royalty-free model. The project also includes encoding, sound field manipulation and decoding techniques, as well as head related transfer functions (HRTFs) that we’ve used to achieve rich spatial audio that scales across a wide spectrum of device types and platforms. Lastly, we’re making our entire library of highly optimized DSP classes and functions open to all. This includes resamplers, convolvers, filters, delay lines and other DSP capabilities. Additionally, developers can now use Resonance Audio’s brand new Spectral Reverb, an efficient, high quality, constant complexity reverb effect, in their own projects.

We’ve open sourced Resonance Audio as a standalone library and associated engine plugins, VST plugin, tutorials, and examples with the Apache 2.0 license. This means Resonance Audio is yours, so you’re free to use Resonance Audio in your projects, no matter where you work. And if you see something you’d like to improve, submit a GitHub pull request to be reviewed by the Resonance Audio project committers. While the engine plugins for Unity, Unreal, FMOD, and Wwise will remain open source, going forward they will be maintained by project committers from our partners, Unity, Epic, Firelight Technologies, and Audiokinetic, respectively.

If you’re interested in learning more about Resonance Audio, check out the documentation on our developer site. If you want to get more involved, visit our GitHub to access the source code, build the project, download the latest release, or even start contributing. We’re looking forward to building the future of immersive audio with all of you. [...]

Resonance Audio is transitioning to an open, community-driven development model.
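For readers unfamiliar with the AmbiX (ACN/SN3D) format mentioned above, the snippet below is a textbook-style sketch of encoding a mono source into the four first-order AmbiX channels. It illustrates the format only; it is not Resonance Audio’s API, and the azimuth and elevation conventions are the usual textbook ones.

    # Illustrative first-order Ambisonic (AmbiX: ACN channel order, SN3D
    # normalization) encoding of a mono source. Not Resonance Audio's API.
    import numpy as np

    def encode_first_order_ambix(mono, azimuth_rad, elevation_rad):
        """Return a (4, n_samples) array with channels in ACN order: W, Y, Z, X."""
        gains = np.array([
            1.0,                                          # W: omnidirectional
            np.sin(azimuth_rad) * np.cos(elevation_rad),  # Y
            np.sin(elevation_rad),                        # Z
            np.cos(azimuth_rad) * np.cos(elevation_rad),  # X
        ])
        return gains[:, np.newaxis] * mono

    # Example: a 440 Hz tone placed 90 degrees to the listener's left, at ear level.
    sr = 48000
    t = np.arange(sr) / sr
    tone = 0.5 * np.sin(2 * np.pi * 440 * t)
    ambix = encode_first_order_ambix(tone, azimuth_rad=np.pi / 2, elevation_rad=0.0)
    print(ambix.shape)  # (4, 48000)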


The She Word: how Emily Hanley shares her passion for computer science (Managing Editor)

Wed, 14 Mar 2018 17:00:00 -0000

Editor’s Note: The She Word is a Keyword series all about dynamic and creative women at Google. Last week, the Grow with Google tour—which brings workshops, one-on-one coaching, and hands-on demos to cities across the U.S.—stopped in Lansing, Michigan. Emily Hanley, one of our very own software engineers and a Michigan native, taught introductory coding classes at the event. We spoke to her about returning to her hometown to teach, exposing more kids to computer science, and how her Google Home helps her have more dance parties with her kids.

What was your biggest takeaway from Grow with Google in Lansing?

It was inspiring to see so many people excited about the opportunity to not only interact with Google products, but to try out programming.

What was one memorable moment of the day?

Seeing the “aha” moments when people realized they had actually written code and produced something on their own. The classrooms were packed all day long, and it was so neat to interact with people who realized the potential of what they had just learned to do. People shared their stories of how they were already using technology in their fields, and this class helped them think about how they could do even more.

Grow with Google’s event in Lansing was close to where you grew up. What was it like to go back to your hometown?

It’s amazing to see the investment in towns like Lansing, and to witness the revitalization that’s happening. People are bringing new ideas and technology to industries that have existed in Michigan for decades.

How did you get your start at Google?

I started as an intern in 2007 and have been here ever since—you could say I’ve grown up with Google.

How do you explain your job at a dinner party?

I’m a software engineer—I speak the language of computers. I work on the Chrome browser and make sure that other engineers who write code for Chrome don’t make it slower.

What advice do you have for girls who want to be engineers?

Don’t be afraid to dig in. Sometimes that means failing, but failure is a natural discovery that helps you figure out what you’re good at. Always ask the question that’s on your mind—chances are half the room is thinking the same thing, and more importantly, it’s how you grow.

Tell us about your path to computer science.

I didn’t learn about computers until college. I was more into physics and chemistry, and computers seemed like a black box. That’s part of why I’m so passionate about computer science education—if I can pass on what I’ve learned to the next generation, they can make something even bigger. They’ll do it tenfold. When kids are exposed to CS at a young age, it becomes a crucial tool for them. It’s not just a platform for playing games. And you can use CS no matter what your passion is. If it’s fashion or journalism or something else, CS can be a part of it.

Who has helped you along your journey?

My mom always told me there’s never a dream too big. She was always an advocate and a dreamer. Whenever I’ve felt intimidated, or had less technical experience than others in the room, I thought, “Dang it, I’ll work harder and find the next door to bang down.” My mom taught me that.

How do you pass that advice on to your own (five!) kids?

The biggest thing I want to give all my children is confidence in themselves and their abilities to pursue their passion (I always say “pursue your passion, not a paycheck”). So often people internalize criticisms and roadblocks as indications they aren’t good enough to keep going on that path, instead of seeing those roadblocks as opportunities to grow.

What role does technology play in your family life?

I have five kids under the age of 7. I try to make technology part of our everyday life, but not the main focus of it. We utilize our Google Home for things like dance parti[...]

Experimenting with Light Fields (Senior Researcher)

Wed, 14 Mar 2018 15:00:00 -0000

We’ve always believed in the power of virtual reality to take you places. That’s why we created Expeditions, to transport people around the world to hundreds of amazing, hard-to-reach or impossible-to-visit places. It’s why we launched Jump, which lets professional creators film beautiful scenes in stereoscopic 360 VR video, and it’s why we’re introducing VR180, a new format for anyone—even those unfamiliar with VR technology—to capture life’s special moments.

But to create the most realistic sense of presence, what we show in VR needs to be as close as possible to what you’d see if you were really there. When you’re actually in a place, the world reacts to you as you move your head around: light bounces off surfaces in different ways and you see things from different perspectives. To help create this more realistic sense of presence in VR, we’ve been experimenting with light fields.

Light fields are a set of advanced capture, stitching, and rendering algorithms. Much more work needs to be done, but they create still captures that give you an extremely high-quality sense of presence by producing motion parallax and extremely realistic textures and lighting. To demonstrate the potential of this technology, we’re releasing “Welcome to Light Fields,” a free app available on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. Let’s take a look at how it works.

Capturing and processing a light field

With light fields, nearby objects seem near to you—as you move your head, they appear to shift a lot. Far-away objects shift less, and light reflects off objects differently, so you get a strong cue that you’re in a 3D space. And when viewed through a VR headset that supports positional tracking, light fields can enable some truly amazing VR experiences based on footage captured in the real world.

This is possible because a light field records all the different rays of light coming into a volume of space. To record them, we modified a GoPro Odyssey Jump camera, bending it into a vertical arc of 16 cameras mounted on a rotating platform.

Left: A time lapse video of recording a spherical light field on the flight deck of Space Shuttle Discovery. Right: Light field rendering allows us to synthesize new views of the scene anywhere within the spherical volume by sampling and interpolating the rays of light recorded by the cameras on the rig.

It takes about a minute for the camera rig to swing around and record about a thousand outward-facing viewpoints on a 70cm sphere. This gives us a volume of light rays about two feet in diameter, which determines the size of the headspace that users have to lean around in to explore the scenes once they are processed. To render views for the headset, rays of light are sampled from the camera positions on the surface of the sphere to construct novel views as seen from inside the sphere to match how the user moves their head. They’re aligned and compressed in a custom dataset file that’s read by special rendering software we’ve implemented as a plug-in for the Unity game engine.

Recording the World with Light Fields

We chose a few special places to try out our light field-recording camera rig. We love the varnished teak and mahogany interiors at the Gamble House in Pasadena, the fragments of glossy ceramic and shiny mirrors adorning the Mosaic Tile House in Venice, and the sun-filled stained glass window at St. Stephen’s Church in Granada Hills. Best of all, the Smithsonian Institution’s Air and Space Museum and 3D Digitization Office gave us access to NASA’s Space Shuttle Discovery, providing an astronaut’s view inside the orbiter’s flight deck, which has never been open to the public. And we closed with recording a variety of light fields of people, experimenting with how eye [...]
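As a deliberately simplified picture of the “sample and interpolate rays recorded on the sphere” step described above, the sketch below just blends the images from the recorded viewpoints nearest to the viewer’s eye position. The real pipeline works at the level of individual rays with alignment and compression; the random viewpoint placement, stand-in images, and inverse-distance blending here are assumptions made purely for illustration.

    # Simplified sketch of "find nearby recorded viewpoints and blend" for a
    # light field captured on a sphere. Not the actual rendering pipeline.
    import numpy as np

    rng = np.random.default_rng(0)

    def random_points_on_sphere(n, radius):
        v = rng.normal(size=(n, 3))
        return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

    n_views, radius = 1000, 0.35                 # ~1000 viewpoints on a 70 cm sphere
    camera_positions = random_points_on_sphere(n_views, radius)
    # Stand-in "images": one small RGB image per recorded viewpoint.
    images = rng.random((n_views, 8, 8, 3))

    def render_view(eye, k=4, eps=1e-6):
        """Blend the k nearest recorded images with inverse-distance weights."""
        d = np.linalg.norm(camera_positions - eye, axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + eps)
        w /= w.sum()
        return np.tensordot(w, images[nearest], axes=1)

    # The viewer leans 10 cm to the right inside the captured volume.
    novel_view = render_view(np.array([0.10, 0.0, 0.0]))
    print(novel_view.shape)  # (8, 8, 3)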


An advertising ecosystem that works for everyone (Director of Sustainable Ads)

Wed, 14 Mar 2018 04:00:00 -0000

Digital advertising plays an important role in making the web what it is today—a forum where anyone with a good idea and good content can reach an audience and potentially make a living. In order for this ads-supported, free web to work, it needs to be a safe and effective place to learn, create and advertise. Unfortunately, this isn’t always the case. Whether it's a one-off accident or a coordinated action by scammers trying to make money, a negative experience hurts the entire ecosystem. That’s why for the last 15 years, we’ve invested in technology, policies and talent to help us fight issues like ad fraud, malware and content scammers. Last year, we were able to remove more bad actors from our ad ecosystem than ever before, and at a faster rate.

We removed 100 bad ads per second

In 2017, we took down more than 3.2 billion ads that violated our advertising policies. That’s more than 100 bad ads per second! This means we’re able to block the majority of bad ad experiences, like malvertising and phishing scams, before the scams impact people. We blocked 79 million ads in our network for attempting to send people to malware-laden sites, and removed 400,000 of these unsafe sites last year. And we removed 66 million “trick-to-click” ads as well as 48 million ads that were attempting to get users to install unwanted software.

New technology to better protect advertisers

Last year, we removed 320,000 publishers from our ad network for violating our publisher policies, and blacklisted nearly 90,000 websites and 700,000 mobile apps. We also introduced technology that allows us to better protect our advertisers by removing Google ads from individual pages on a website that violate our policies. Last year, we removed 2 million pages for policy violations each month. This has been critical in scaling enforcement for policies that prohibit monetization of inappropriate and controversial content. In fact, after expanding our policy against dangerous and derogatory content in April 2017 to cover additional forms of discrimination and intolerance, we removed Google ads from 8,700 pages that violated the expanded policy.

Fighting deceptive content online

Many website owners use our advertising platforms, like AdSense, to run Google ads on their sites and content and make money. We paid $12.6 billion to publishing partners in our ad network last year. But in order to make money from Google ads, you have to play by the rules—that means respecting the user experience more than the ads.

Our publisher policies exist to help us maintain that balance, even as trends change online. For example, in recent years, we’ve seen the rise of scammers trying to take advantage of the growing popularity of online news to make money. We prohibit websites in our ad network from serving ads on misrepresentative content. Essentially this means that you can’t serve ads if you’re pretending to be a legitimate news website based in London when you’re actually a content scammer in a different city. In 2017, we found that a small number of publishers were responsible for the majority of these violations. Of the 11,000 websites we reviewed for potentially violating the misrepresentative content policy, we blocked over 650 of those sites and terminated 90 publishers from our network.

More frequently, we see violations of our scraping content policy. This type of policy violation occurs when bad actors try to make money as quickly as possible by copying news or content from other sites. In 2017, we blocked over 12,000 websites for “scraping,” duplicating and copying content from other sites, up from 10,000 in 2016.

Does an ad with the headline “Ellen DeGeneres adopts a baby elephant!” make you want to click on it? You’re not alone. In recent[...]

Grow with Google comes to Lansing (Community Engagement Manager)

Tue, 13 Mar 2018 21:40:00 -0000

Editor’s Note: Grow with Google offers free tools, trainings and events to help people grow their skills, careers, and businesses. The Grow with Google tour brings workshops, one-on-one coaching, and hands-on demos to cities and towns across the United States. Through a series of Keyword posts, we’ll highlight where we’ve been, but you can find out where we’re headed next on our site.

Grow with Google can be a homecoming for Googlers like Emily Hanley, who was born and raised near Lansing. As an Ann Arbor-based engineer, Emily had the chance to return to her hometown to lead introductory coding classes at the event. She explained the importance of introducing computer science to young people, saying, “When kids are exposed to computer science at a young age, it becomes a crucial tool for them. And you can use computer science no matter what your passion is.”

The fifty Michiganders who attended Emily’s class were among the 1,100 job-seekers, small business owners, developers, educators and students who joined Grow with Google on tour at Lansing Community College in Michigan. With two offices (Ann Arbor and Birmingham-Detroit) and over 600 Googlers in Michigan, we are proud to have been able to bring some of them home to Lansing to help out with the two-day event.

In addition to hometown Googlers, we were fortunate to work with the local community, including many Lansing-based organizations focused on education and economic development. One of our partners, the Information Technology Empowerment Center (ITEC), provides after-school and summer programs to build excitement for coursework and careers in STEM fields. As part of our continued efforts in the city, Google announced a $100,000 sponsorship to ITEC to expand their digital skills offerings to even more K-12 students across the region.

Native Michigander and Google engineer Emily Hanley teaches introductory coding.
Ann Arbor-based Googler Taylor Holmes provides one-on-one coaching.
Small business owners are introduced to Google Primer, an app which provides free, quick lessons in digital marketing.
A young Michigander goes exploring with Expeditions in Google Cardboard.

The Grow with Google tour will continue in many more cities and towns throughout 2018. Our next stop is Louisville, Kentucky on March 29th. Learn more at[...]


Watch live performances at The FADER FORT from SXSW in VR180 (Director, VR Video)

Tue, 13 Mar 2018 20:00:00 -0000

For over 15 years, The FADER has introduced the world to new music artists at The FADER FORT, the global media company's annual live music event at South by Southwest (SXSW). FADER FORT has been the breakout, must-do gig for famous artists including Cardi B, Dua Lipa, Drake and many others. The event gives emerging and global artists an intimate stage to experiment on and allows those in attendance to experience performances up close and personal. But because the experience is so intimate, only a lucky few are able to make it into the must-see show, making it one of the most in-demand events at SXSW.


To bring The FADER FORT experience to more fans, we partnered with The FADER to livestream performances by Saweetie, Bloc Boy, Valee, Speedy Ortiz, YBN Nahmir and other special guests in VR180 on YouTube. No matter where you are, you can watch live on YouTube via your desktop or mobile device, or using Cardboard, Daydream View or PlayStation VR.

With VR180, those not in attendance at The FADER FORT in Austin will be able to experience three-dimensional, 4K video of the show, providing a more immersive experience than a traditional video and making you feel like you are there.

From March 14-16th, we’ll livestream the best acts of the day in VR180, plus a cutdown of each set that can be viewed at any time.

Check out the calendar below, grab your headset and get ready to see some of the best new artists on the scene without ever setting foot in Austin. Visit The FADER for the full lineup. See you at the Fort!


Google Station brings better, faster Wi-Fi to more people in Mexico (Google Station engineering team and proud Mexico City native)

Tue, 13 Mar 2018 17:00:00 -0000

Over the last decade, mobile connectivity has gotten much better—and our data consumption has skyrocketed accordingly. We used to send texts and check webpages on our phones; now we scroll through hundreds of photos and watch high-quality videos.

In Mexico, which has the third-highest Internet penetration in Latin America, most people access the web through mobile. But even as data plans become more affordable than ever, people are always looking for ways to enjoy the web without using up their data. And access to information is still a challenge for many.

To bring better Internet access to people in Mexico, we’re working with Internet service provider Sitwifi to convert their existing hotspots to Google Station, our high-speed public Wi-Fi platform that gives partners an easy set of tools to roll out Wi-Fi hotspots in public places. Starting today, Google Station will be available in 60+ high-traffic venues across Mexico City and nationwide, including airports, shopping malls and public transit stations. We plan to reach 100+ locations before the end of the year. 


Mexico is the first country in Latin America to launch Google Station, and the third country globally, after India and Indonesia. Google Station can be found in Mexico City and 44 more cities in the country, so if you’re near one of the locations, go watch a high-quality video (or maybe save some YouTube offline for later)!

To learn more, see

Starting today, Google Station will be available in 60+ locations in Mexico City and nationwide, including airports, shopping malls and public transit stations.


Helping learners develop the digital skills for their futures (School Media Specialist at Belmar Elementary School)

Tue, 13 Mar 2018 17:00:00 -0000

Editor’s note: This guest post is authored by Danielle Arnold, a School Media Specialist at Belmar Elementary School in New Jersey and a Ready to Code librarian. She shared this post, a version of which was originally published here.

At the American Library Association Midwinter Meeting earlier this month, I spoke about how Applied Digital Skills—a new digital literacy program that is part of the Grow with Google initiative—helps me address some of the challenges my learners face. The curriculum uses video-based lessons to help learners create projects with digital applications and is free for everyone. As learners build projects, they practice the new skills they will need for future jobs, as well as other practical skills such as financial literacy, communication, critical thinking, and collaboration. The curriculum includes more than 100 hours of lessons that can be used in a library or classroom, independently or in a group.

As Belmar Elementary School’s school librarian, I’ve been using Applied Digital Skills with fourth through eighth graders. I’m also Belmar’s School Media Specialist, which means that I’m always looking for new ways to support my school’s learners. I’ve used the Applied Digital Skills units to help learners improve in a few areas. The If-Then Adventure Stories unit allows my learners to develop interactive projects both independently and in teams. As learners collaborate in shared documents, they work together to solve problems and motivate each other to stay on track. At Belmar Elementary, we also use the Research and Develop a Topic unit to help learners conduct research more effectively. One seventh grade ELA teacher described how “learners have a hard time deciphering what is important and should be included, and what is not. They have a tough time putting the research into their own words…” In the unit, my learners identify credible sources, evaluate bias of digital information, write about their research, and get feedback.

Many of my colleagues are using Applied Digital Skills as well, applying the curriculum to help our learners develop resumes, search for jobs, plan events, explore local history and more. Our technology facilitator at Belmar noted, "Devising real world scenarios that are relevant to the learners while also being able to teach them specific skills is often difficult. I love the format of the lessons and how the learners can follow the prompts and videos to work at their own pace.”

These are just a few examples of how librarians and educators like me are using the Applied Digital Skills curriculum to offer learners of all ages free, engaging lessons that teach the essential digital skills they’ll need for their futures.

To start using the curriculum in your own library or school, visit the Applied Digital Skills Getting Started Guide. You can also reach out to the curriculum team with any questions. [...]


Get more useful information with captions on Google Images (Tech Lead)

Tue, 13 Mar 2018 16:00:00 -0000

People around the world use Google Images to find visual information online. Whether you’re searching for ideas for your next baking project, how to tie shoelaces so they stay put, or tips on the proper form for doing a plank, scanning image results can be much more helpful than scanning text. Today, we’re sharing more about new changes to Google Images to provide even better visual discovery with more context on the image results page.

By adding more context around images, results can become much more useful. Last year we started showing badges (like “recipe” or “product”) on certain results to aid in the discovery process. Since then, we’ve also added the website’s domain URL for each result to show you where the image is coming from.

This week we’re adding captions to image results, showing you the title of the web page where each image is published. This extra piece of information gives you more context so you can easily find out what the image is about and whether the website would contain more relevant content for your needs. Here’s how it looks:


In this example, the image results give you visual confirmation that you found the right fruit, but captions make results instantly more useful with additional context. For instance, you can learn that this fruit is called carambola or starfruit, and that it’s popular in China. This also helps you choose the result page to click and explore further.

This update underscores our ongoing goal to make Google Images an ever more useful tool to discover and explore more information from the web. Image captions are starting to roll out globally this week on the Google app (Android and iOS) and on mobile browsers.

To help make Google Images even more useful, we’re adding short captions to images that share more information about the image.

Introducing the Google Assistant on iPad (Product Manager)

Tue, 13 Mar 2018 16:00:00 -0000

Last year we brought the Google Assistant to iPhones and today, iPads are joining the party. The Assistant on iPad can do everything the Assistant on your iPhone can do, with the added benefit of a bigger screen that supports both portrait and landscape mode.

Here are some highlights of how the Assistant can help on your iPad while you’re hanging around the house:

  • Set the mood by having the Assistant “dim the lights”
  • Cast to your TV by asking the Assistant to “watch the latest news on the living room TV”
  • Stay in touch by asking the Assistant to “video call mom” or “text Lauren”
  • Keep up with your chores by asking the Assistant to “remind me to take out the recycling at 8 PM”

Plus, you can stay productive by multitasking on iPad with iOS 11, letting you chat with the Assistant while you play a game, plan a trip or check your calendar.


The Assistant on iPad is rolling out today and will be available in English, French, German, Italian, Japanese, Portuguese (Brazil) and Spanish.


Making music using new sounds generated with machine learning (Research Scientist, Magenta, Google Brain)

Tue, 13 Mar 2018 14:00:00 -0000

Technology has always played a role in inspiring musicians in new and creative ways. The guitar amp gave rock musicians a new palette of sounds to play with in the form of feedback and distortion. And the sounds generated by synths helped shape the sound of electronic music. But what about new technologies like machine learning models and algorithms? How might they play a role in creating new tools and possibilities for a musician’s creative process? Magenta, a research project within Google, is currently exploring answers to these questions.

Building upon past research in the field of machine learning and music, last year Magenta released NSynth (Neural Synthesizer). It’s a machine learning algorithm that uses deep neural networks to learn the characteristics of sounds, and then create a completely new sound based on these characteristics. Rather than combining or blending the sounds, NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds—so you could get a sound that’s part flute and part sitar all at once.

Since then, Magenta has continued to experiment with different musical interfaces and tools to make the algorithm more easily accessible and playable. As part of this exploration, Google Creative Lab and Magenta collaborated to create NSynth Super. It’s an open source experimental instrument which gives musicians the ability to explore new sounds generated with the NSynth algorithm.

To create our prototype, we recorded 16 original source sounds across a range of 15 pitches and fed them into the NSynth algorithm. The outputs, over 100,000 new sounds, were precomputed and loaded into NSynth Super. Using the dials, musicians can select the source sounds they would like to explore between, and drag their finger across the touchscreen to navigate the new, unique sounds which combine their acoustic qualities. NSynth Super can be played via any MIDI source, like a DAW, sequencer or keyboard.

Part of the goal of Magenta is to close the gap between artistic creativity and machine learning. It’s why we work with a community of artists, coders and machine learning researchers to learn more about how machine learning tools might empower creators. It’s also why we create everything, including NSynth Super, with open source libraries, including TensorFlow and openFrameworks. If you’re a maker, a musician, or both, all of the source code, schematics, and design templates are available for download on GitHub.

New sounds are powerful. They can inspire musicians in creative and unexpected ways, and sometimes they might go on to define an entirely new musical style or genre. It’s impossible to predict where the new sounds generated by machine learning tools might take a musician, but we're hoping they lead to even more musical experimentation and creativity.

Learn more about NSynth Super at[...]

NSynth Super is an experimental instrument for making music using new sounds generated with machine learning.
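To make the “synthesize from learned characteristics rather than mix waveforms” idea concrete, here is a conceptual sketch of blending two sounds in a latent space. The encode and decode functions are hypothetical placeholders (a crude spectral stand-in for a trained neural encoder and decoder); this is not Magenta’s NSynth API.

    # Conceptual sketch only: NSynth-style blending works in a learned latent
    # space rather than by crossfading waveforms. encode/decode below are
    # hypothetical placeholders, not Magenta's actual API.
    import numpy as np

    def encode(audio):
        # Placeholder: a real encoder maps audio to a learned latent embedding.
        return np.fft.rfft(audio).real[:16]

    def decode(latent, n_samples):
        # Placeholder: a real decoder synthesizes audio from the embedding.
        return np.fft.irfft(latent, n=n_samples)

    def blend(sound_a, sound_b, mix=0.5):
        """Interpolate in latent space, then synthesize, instead of mixing audio."""
        z = (1 - mix) * encode(sound_a) + mix * encode(sound_b)
        return decode(z, n_samples=len(sound_a))

    sr = 16000
    t = np.arange(sr) / sr
    flute_like = np.sin(2 * np.pi * 440 * t)            # stand-in source sounds
    sitar_like = np.sign(np.sin(2 * np.pi * 220 * t))
    new_sound = blend(flute_like, sitar_like, mix=0.5)
    print(new_sound.shape)  # (16000,)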


Understanding the inner workings of neural networks (Research Scientist)

Mon, 12 Mar 2018 16:45:00 -0000

Neural networks are a powerful approach to machine learning, allowing computers to understand images, recognize speech, translate sentences, play Go, and much more. As much as we’re using neural networks in our technology at Google, there’s more to learn about how these systems accomplish these feats. For example, neural networks can learn how to recognize images far more accurately than any program we directly write, but we don’t really know how exactly they decide whether a dog in a picture is a Retriever, a Beagle, or a German Shepherd.

We’ve been working for several years to better grasp how neural networks operate. Last week we shared new research on how these techniques come together to give us a deeper understanding of why networks make the decisions they do—but first, let’s take a step back to explain how we got here.

Neural networks consist of a series of “layers,” and their understanding of an image evolves over the course of multiple layers. In 2015, we started a project called DeepDream to get a sense of what neural networks “see” at the different layers. It led to a much larger research project that would not only develop beautiful art, but also shed light on the inner workings of neural networks. Outside Google, DeepDream grew into a small art movement producing all sorts of amazing things.

Last year, we shared new work on this subject, showing how techniques building on DeepDream—and lots of excellent research from our colleagues around the world—can help us explore how neural networks build up their understanding of images. We showed that neural networks build on previous layers to detect more sophisticated ideas and eventually reach complex conclusions. For instance, early layers detect edges and textures of images, but later layers progress to detecting parts of objects. The neural network first detects edges, then textures, patterns, parts, and objects.

Last week we released another milestone in our research: an exploration of how different techniques for understanding neural networks fit together into a bigger picture. This work, which we've published in the online journal Distill, explores how different techniques allow us to “stand in the middle of a neural network” and see how decisions made at an individual point influence a final output. For instance, we can see how a network detects a “floppy ear,” and then how that increases the probability that the image will be labeled as a Labrador Retriever or Beagle.

In one example, we explore which neurons activate in response to different inputs—a kind of “MRI for neural networks.” The network has some floppy ear detectors that really like this dog! We can also see how different neurons in the middle of the network—like those floppy ear detectors—affect the decision to classify an image as a Labrador Retriever or tiger cat.

If you want to learn more, check out our interactive paper, published in Distill. We’ve also open sourced our neural net visualization library, Lucid, so you can make these visualizations, too. [...]

New research that gives us a deeper understanding of why networks make the decisions they do.
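To ground the claim that early layers detect edges, the small sketch below convolves a synthetic image with a hand-written Sobel-style kernel, roughly the kind of filter a network’s first layer tends to learn on its own. It is a didactic example, not code from the Lucid library mentioned above.

    # A tiny illustration of "early layers detect edges": a hand-written
    # Sobel-style kernel responds strongly at a vertical brightness edge,
    # much like filters learned by a network's first convolutional layer.
    import numpy as np

    def conv2d(image, kernel):
        """Naive 'valid' 2-D correlation for demonstration purposes."""
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    # A synthetic image: dark on the left, bright on the right (a vertical edge).
    image = np.zeros((8, 8))
    image[:, 4:] = 1.0

    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

    edges = conv2d(image, sobel_x)
    print(edges)  # strongest responses line up with the vertical edge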
