
Google Webmaster Central Blog



Official news on crawling and indexing sites for the Google index.



Updated: 2017-11-16T08:05:00.145-08:00

 



Engaging users through high quality AMP pages

2017-11-16T08:05:00.152-08:00

To improve our users' experience with AMP results, we are making changes to how we enforce our policy on content parity with AMP. Starting Feb 1, 2018, the policy requires that the AMP page content be comparable to the (original) canonical page content. AMP is not a ranking signal and there is no change in terms of the ranking policy with respect to AMP.

The open source Accelerated Mobile Pages (AMP) project launched in 2015 and has seen tremendous growth, with over 25M domains having implemented the AMP format. This rapid progress comes with a responsibility to ensure that our users continue to have a great content consumption experience that ultimately leads to more engagement with publisher content.

In some cases, webmasters publish two versions of their content: a canonical page that is not based on AMP and an AMP page. In the ideal scenario, both pages have equivalent content, so the user gets the same content but with a faster and smoother experience via AMP. However, in some cases the content on the AMP page does not match the content on its original (canonical) page.
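For reference, the two versions are tied together with link elements in their HTML: the canonical page declares its AMP counterpart, and the AMP page points back to the canonical. A minimal sketch, with placeholder URLs:

    <!-- On the canonical page (https://www.example.com/article.html) -->
    <link rel="amphtml" href="https://www.example.com/article.amp.html">

    <!-- On the AMP page (https://www.example.com/article.amp.html) -->
    <link rel="canonical" href="https://www.example.com/article.html">

It is this pairing that tells Google which AMP page to check for content parity with which canonical page.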

In a small number of cases, AMP pages are used as teaser pages, which creates a particularly bad user experience since they only contain minimal content. In these instances, users have to click twice to get to the real content. Below is an example of what this may look like: a brief excerpt of the main article, followed by a prompt asking the user to click through to another page to finish reading it.

(image)

AMP was introduced to dramatically improve the performance of the web and deliver a fast, consistent content consumption experience. In keeping with this goal, we'll be enforcing the requirement of close parity between the AMP page and its canonical page for pages that wish to be shown in Google Search as AMP.

Where we find that an AMP page doesn't contain the same critical content as its non-AMP equivalent, we will direct our users to the non-AMP page. This does not affect Search ranking. However, these pages will not be considered for Search features that require AMP, such as the Top Stories carousel with AMP. Additionally, we will notify the webmaster via Search Console with a manual action message and give the publisher the opportunity to fix the issue before the AMP page can be served again. The AMP open source website has several helpful guides to help produce fast, beautiful and high-performing AMP pages.

We hope this change encourages webmasters to maintain content parity between the canonical page and its AMP equivalent. This will lead to a better experience on your site and ultimately happier users.





Make your site's complete jobs information accessible to job seekers

2017-11-15T03:00:27.973-08:00

In June, we announced a new experience that put the convenience of Search into the hands of job seekers. Today, we are taking the next step in improving the job search experience on Google by adding a feature that shows estimated salary information from the web alongside job postings, as well as adding new UI features for users. Salary information has been one of the most requested additions from job seekers. This helps people evaluate whether a job is a good fit, and is an opportunity for sites with estimated salary information to:

  • Increase brand awareness: estimated salary information shows a representative logo from the estimated salary provider.
  • Get more referral traffic: users can click through directly to salary estimate pages when salary information surfaces in job search results.

If your site provides salary estimates, you can take advantage of these changes in the following ways:

Specify actual salary information

Actual salary refers to the base salary information that is provided by the employer. If your site publishes job listings, you can add JobPosting structured data and populate the baseSalary property to be eligible for inclusion in job search results. This salary information will be made available in both the list and the detail views.

Provide estimated salary information

In cases where employers don't provide actual salary, job seekers may see estimated salaries sourced from multiple partners for the same or similar occupation. If your site provides salary estimate information, you can add Occupation structured data to be eligible for inclusion in job search results (a sketch of this markup appears at the end of this post).

Include exact location information

We've heard from users that having accurate, street-level location information helps them focus on opportunities that work best for them. Sites that publish job listings can do this by using the jobLocation property in JobPosting structured data.

Validate your structured data

To double-check the structured data on your pages, we'll be updating the Structured Data Testing Tool and the Search Console reports in the near future. In the meantime, you can monitor the performance of your job postings in Search Analytics. Stay tuned!

Since launching this summer, we've seen over 60% growth in the number of companies with jobs showing on Google and connected tens of millions of people to new job opportunities. We are excited to help users find jobs with salaries that meet their needs, and to route them to your site for more information. We invite sites that provide salary estimates to mark up their salary pages using the Occupation structured data. Should you have any questions regarding the use of structured data on your site, feel free to drop by our webmaster help forums.

Posted by Nick Zakrasek, Product Manager [...]
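As referenced above, a salary estimate page might embed Occupation markup as JSON-LD roughly like this. This is a sketch only: the values are placeholders, the property names follow schema.org's Occupation and MonetaryAmountDistribution types, and you should check the official estimated salary documentation for the exact required and recommended fields.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Occupation",
      "name": "Software Engineer",
      "occupationLocation": [{ "@type": "City", "name": "New York" }],
      "estimatedSalary": [{
        "@type": "MonetaryAmountDistribution",
        "name": "base",
        "currency": "USD",
        "duration": "P1Y",
        "percentile25": 100000,
        "median": 115000,
        "percentile75": 135000
      }]
    }
    </script>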



Enabling more high quality content for users

2017-10-01T21:09:21.425-07:00

In Google's mission to organize the world's information, we want to guide Google users to the highest quality content, a principle exemplified in our quality rater guidelines. Professional publishers provide the lion's share of quality content that benefits users, and we want to encourage their success.

The ecosystem is sustained via two main sources of revenue: ads and subscriptions, with the latter requiring a delicate balance to be effective in Search. Typically, subscription content is hidden behind paywalls, so that users who don't have a subscription don't have access. Our evaluations have shown that users who are not familiar with the high quality content behind a paywall often turn to other sites offering free content. It is difficult to justify a subscription if one doesn't already know how valuable the content is, and in fact, our experiments have shown that a portion of users shy away from subscription sites. Therefore, it is essential that sites provide some amount of free sampling of their content so that users can learn how valuable that content is.

The First Click Free (FCF) policy for both Google web search and News was designed to address this issue. It offers promotion and discovery opportunities for publishers with subscription content, while giving Google users an opportunity to discover that content. Over the past year, we have worked with publishers to investigate the effects of FCF on user satisfaction and on the sustainability of the publishing ecosystem. We found that while FCF is a reasonable sampling model, publishers are in a better position to determine what specific sampling strategy works best for them. Therefore, we are removing FCF as a requirement for Search, and we encourage publishers to experiment with different free sampling schemes, as long as they stay within the updated webmaster guidelines. We call this Flexible Sampling.

One of the original motivations for FCF was to address the issues surrounding cloaking, where the content served to Googlebot is different from the content served to users. Spammers often seek to game search engines by showing interesting content to the search engine, say healthy food recipes, but then showing users an offer for diet pills. This "bait and switch" scheme creates a bad user experience since users do not get the content they expected. Sites with paywalls are strongly encouraged to apply the new structured data to their pages, because without it, the paywall may be interpreted as a form of cloaking, and the pages would then be removed from search results (a sketch of this markup appears at the end of this post).

Based on our investigations, we have created detailed best practices for implementing flexible sampling. There are two types of sampling we advise: metering, which provides users with a quota of free articles to consume, after which paywalls will start appearing; and lead-in, which offers a portion of an article's content without it being shown in full.

For metering, we think that monthly (rather than daily) metering provides more flexibility and a safer environment for testing. The user impact of changing from one integer value to the next is less significant at, say, 10 monthly samples than at 3 daily samples. All publishers and their audiences are different, so there is no single value for optimal free sampling across publishers. However, we recommend that publishers start by providing 10 free clicks per month to Google search users in order to preserve a good user experience for new potential subscribers. Publishers should then experiment to optimize the tradeoff between discovery and conversion that works best for their businesses.

Lead-in is generally implemented as truncated content, such as the first few sentences or 50-100 words of the article. Lead-in allows users a taste of how valuable the content may be. Compared to a page with completely blocked content, lead-in clearly provides more utility and added value to users.

We are excited by this change as it allows the growth of the premium content ecosystem, which ult[...]
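Referring back to the paywalled-content markup mentioned above: the structured data marks the article as not freely accessible and identifies which part of the page sits behind the paywall. A rough sketch, with placeholder values and a placeholder CSS class; see the official documentation on subscription and paywalled content for the exact properties:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "NewsArticle",
      "headline": "Example article headline",
      "isAccessibleForFree": "False",
      "hasPart": {
        "@type": "WebPageElement",
        "isAccessibleForFree": "False",
        "cssSelector": ".paywalled-section"
      }
    }
    </script>

Here ".paywalled-section" would be the class of the page element that actually contains the subscriber-only portion of the article.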



How to move from m-dot URLs to a responsive site

2017-09-14T14:32:59.944-07:00

With more sites moving towards responsive web design, many webmasters have questions about migrating from separate mobile URLs, also frequently known as "m-dot URLs", to using responsive web design. Here are some recommendations on how to move from separate URLs to one responsive URL in a way that gives your sites the best chance of performing well on Google's search results.

Moving to responsive sites in a Googlebot-friendly way

Once you have your responsive site ready, moving is something you can definitely do with just a bit of forethought. Since your URLs stay the same for the desktop version, all you have to do is configure 301 redirects from the mobile URLs to the responsive web URLs.

Here are the detailed steps:

  1. Get your responsive site ready
  2. Configure 301 redirects on the old mobile URLs to point to the responsive versions (the new pages). These redirects need to be done on a per-URL basis, individually from each mobile URL to its responsive URL.
  3. Remove any mobile-URL-specific configuration your site might have, such as conditional redirects or a Vary HTTP header.
  4. As a good practice, set up rel=canonical on the responsive URLs pointing to themselves (self-referential canonicals); see the sketch below.
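For example, once the 301 redirects are configured on the server, the head of each responsive page might look like this (a minimal sketch; the URLs are placeholders):

    <!-- https://www.example.com/page, now served to all devices -->
    <head>
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <!-- Step 4: self-referential canonical on the responsive URL -->
      <link rel="canonical" href="https://www.example.com/page">
    </head>
    <!-- Step 2 happens on the server: https://m.example.com/page
         now returns a 301 redirect to https://www.example.com/page -->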

If you're currently using dynamic serving and want to move to responsive design, you don't need to add or change any redirects.

Some benefits of moving to responsive web design

Moving to a responsive site should make maintenance and reporting much easier for you down the road. Aside from no longer needing to manage separate URLs for all pages, it will also make it much easier to adopt practices and technologies such as hreflang for internationalization, AMP for speed, structured data for advanced search features and more.

As always, if you need more help you can ask a question in our webmaster forum.




Introducing Our New International Webmaster Blogs!

2017-08-23T13:02:55.555-07:00

Join us in welcoming the latest additions to the Webmasters community:

नमस्ते Webmasters in Hindi!
Добро Пожаловать Webmasters in Russian!
Hoşgeldiniz Webmasters in Turkish!
สวัสดีค่ะ Webmasters in Thai!
Xin chào Webmasters in Vietnamese!

We will be sharing webmaster-related updates in our current and new blogs to make sure you have a place to follow the latest launches, updates and changes in Search in your languages! We will share links to relevant Help resources, educational content and events as they become available.

Just a reminder, here are some of the resources that we have available in multiple languages:

  • Google.com/webmasters - documentation, support channels, tools (including a link to Search Console) and learning materials.
  • Help Center - tips and tutorials on using Search Console, answers to frequently asked questions and step-by-step guides.
  • Help forum - ask your questions and get advice from the Webmaster community.
  • YouTube channel - recordings of our Hangouts on Air in different languages.
  • G+ community - another place we announce and share our Hangouts on Air.
  • Testing tools:
      PageSpeed Insights - actionable insights on how to increase your site's performance.
      Mobile-Friendly Test - identify areas where you can improve your site's performance on mobile devices.
      Structured Data Testing Tool - preview and test your structured data markup.

Some other valuable resources (English-only):

  • Developer documentation on Search - a great resource where you can find feature guides, code labs, videos and links to more useful tools for webmasters.

If you have webmaster-specific questions, check our event calendar for the next hangout session or live event! Alternatively, you can post your questions to one of the local help forums, where our talented Product Experts from the TC program will try to answer your questions. Our Experts are product enthusiasts who have earned the distinction of "Top Contributor" or "Rising Star" by sharing their knowledge on the Google Help Forums.

If you have suggestions, please let us know in the comments below. We look forward to working with you in your language! [...]



The new Search Console: a sneak peek at two experimental features

2017-08-01T06:39:34.593-07:00

Search Console was initially launched with just four reports more than a decade ago. Today, the product includes more than two dozen reports and tools covering AMP, structured data, and live testing tools, all designed to help improve your site's performance on Google Search. Now we have decided to embark on an extensive redesign to better serve you, our users. Our hope is that this redesign will provide you with:

  • More actionable insights - We will now group the identified issues by what we suspect is the common "root cause" to help you find where you should fix your code. We organize these issues into tasks that have a state (similar to bug tracking systems) so you can easily see whether the issue is still open, whether Google has detected your fix, and track the progress of re-processing the affected pages.
  • Better support of your organizational workflow - As we talked to many organizations, we've learned that multiple people are typically involved in implementing, diagnosing, and fixing issues. This is why we are introducing sharing functionality that allows you to pick up an action item and share it with other people in your group, like developers who will get references to the code in question.
  • Faster feedback loops between you and Google - We've built a mechanism to allow you to iterate quickly on your fixes, and not waste time waiting for Google to recrawl your site, only to tell you later that it's not fixed yet. Rather, we'll provide on-the-spot testing of fixes and automatically speed up crawling once we see things are OK. Similarly, the testing tools will include code snippets and a search preview, so you can quickly see where your issues are, confirm you've fixed them, and see how the pages will look on Search.

In the next few weeks, we're releasing two exciting BETA features from the new Search Console to a small set of users: the Index Coverage report and the AMP fixing flow.

The new Index Coverage report shows the count of indexed pages, information about why some pages could not be indexed, along with example pages and tips on how to fix indexing issues. It also enables a simple sitemap submission flow, and the capability to filter all Index Coverage data to any of the submitted sitemaps. Here's a peek at our new Index Coverage report.

The new AMP fixing flow

The new AMP fixing experience starts with the AMP Issues report. This report shows the current AMP issues affecting your site, grouped by the underlying error. Drill down into an issue to get more details, including sample affected pages. After you fix the underlying issue, click a button to verify your fix, and have Google recrawl the pages affected by that issue. Google will notify you of the progress of the recrawl, and will update the report as your fixes are validated. As we start to experiment with these new features, some users will be introduced to the new redesign over the coming weeks.

Posted by John Mueller and the Search Console Team [...]



Badges on Image Search help users find what they really want

2017-08-02T05:56:17.051-07:00

When you want to bake cupcakes but don't know what kind, Image Search can help you make a decision. Finding an image with a recipe can be challenging: you might end up on a page that has only pictures of these delicious things, or on a cupcake fan site that has everything about them except recipes.
To help users find exactly what they want, Image Search on mobile devices now includes relevant badges on the thumbnails. Currently we have badges for recipes, videos, products, and animated images (GIFs).

If you have images on your site, you can help users identify the type of content associated with the image by using appropriate structured data on your pages. This helps users find relevant content quickly, and sends better targeted traffic to your site.
If you're publishing recipes, add Recipe markup on your page; for products, add Product markup; and for videos, add Video markup. Our algorithms will automatically badge GIFs, without the need for any markup. While we can't guarantee that badges will always be shown, adding the recommended structured data fields in addition to the required fields may increase the chance of a badge being added to your image search results.
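As an illustration, a recipe page might embed Recipe markup as JSON-LD like this (a minimal sketch with placeholder values; see the structured data documentation for the full list of required and recommended properties):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Recipe",
      "name": "Vanilla Cupcakes",
      "image": "https://www.example.com/images/vanilla-cupcakes.jpg",
      "author": { "@type": "Person", "name": "Example Baker" },
      "recipeIngredient": ["2 cups flour", "1 cup sugar", "2 eggs"],
      "recipeInstructions": "Mix the ingredients, fill the liners, and bake for 20 minutes."
    }
    </script>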
You can use the Structured Data Testing Tool to verify that your pages are free of errors, and therefore eligible for the new Image Search badges. In addition, the Rich Cards report in Search Console can provide aggregate stats on your markup.
If you have questions about the feature, please ask us in the Webmaster Help Forum.



Connect to job seekers with Google Search

2017-07-20T05:55:04.530-07:00

July 20, 2017 update: Starting today, impressions and clicks stats for job listing pages and job details pages are available in the Search Analytics report in Search Console. Read more about how jobs impressions and clicks are counted in the help center. If you have questions, head to the webmaster forums.

At Google I/O this year, we announced Google for Jobs, a new company-wide initiative focused on helping both job seekers and employers, through collaboration with the job matching industry. One major part of this effort is launching an improved experience for job seekers on Google Search. We're happy to announce this new experience is now open for all developers and site owners.

For queries with clear intent like [head of catering jobs in nyc] or [entry level jobs in DC], we'll show a job listings preview, and each job can expand to display comprehensive details about the listing.

For employers or site owners with job content, this feature brings many benefits:

  • Prominent place in Search results: your postings are eligible to be displayed in the new job search feature on Google, featuring your logo, reviews, ratings, and job details.
  • More, motivated applicants: job seekers can filter by various criteria like location or job title, meaning you're more likely to get applicants who are looking exactly for that job.
  • Increased chances of discovery and conversion: job seekers will have a new avenue to interact with your postings and click through to your site.

Get your job listings on Google

Implementation involves two steps:

  1. Mark up your job listings with Job Posting structured data (a sketch appears at the end of this post).
  2. Submit a sitemap (or an RSS or Atom feed) with a date for each listing.

If you have more than 100,000 job postings or more than 10,000 changes per day, you can express interest to use the High Change Rate feature. If you already publish your job openings on another site like LinkedIn, Monster, DirectEmployers, CareerBuilder, Glassdoor, or Facebook, they are eligible to appear in the feature as well.

Job search is an enriched search experience. We've created a dedicated guide to help you understand how Google ranking works for enriched search and practices for improving your presence.

Keep track of how you're doing and fix issues

There's a suite of tools to help you with the implementation:

  • Validate your markup with the Structured Data Testing Tool.
  • Preview your listing in the Structured Data Testing Tool.
  • Keep track of your sitemap status in Search Console.
  • See aggregate stats and markup error examples in Search Console.

In the coming weeks, we'll add new job listings filters in the Search Analytics report in Search Console, so you can track clicks and impressions for your listings. As always, if you have questions, ask in the forums or find us on Twitter!

Posted by Nick Zakrasek, Product Manager [...]
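As referenced in step 1 above, job listing pages typically embed JobPosting markup as JSON-LD. A minimal sketch with placeholder values; check the Job Posting documentation for the exact required properties:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "JobPosting",
      "title": "Head of Catering",
      "description": "<p>We are looking for an experienced head of catering...</p>",
      "datePosted": "2017-07-20",
      "validThrough": "2017-09-20T00:00",
      "employmentType": "FULL_TIME",
      "hiringOrganization": {
        "@type": "Organization",
        "name": "Example Catering Co.",
        "sameAs": "https://www.example.com"
      },
      "jobLocation": {
        "@type": "Place",
        "address": {
          "@type": "PostalAddress",
          "streetAddress": "123 Example Street",
          "addressLocality": "New York",
          "addressRegion": "NY",
          "postalCode": "10011",
          "addressCountry": "US"
        }
      }
    }
    </script>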



Making the Internet safer and faster: Introducing reCAPTCHA Android API

2017-06-09T09:18:24.974-07:00

When we launched reCAPTCHA ten years ago, we had a simple goal: enable users to visit the sites they love without worrying about spam and abuse. Over the years, reCAPTCHA has changed quite a bit. It evolved from distorted text to street numbers and names, then to No CAPTCHA reCAPTCHA in 2014 and Invisible reCAPTCHA in March this year.

(image)

By now, more than a billion users have benefited from reCAPTCHA and we continue to work to refine our protections.

reCAPTCHA protects users wherever they may be online. As the use of mobile devices has grown rapidly, it's important to keep mobile applications and data safe. Today, on reCAPTCHA's tenth birthday, we're glad to announce the first reCAPTCHA Android API as part of Google Play Services.

With this API, reCAPTCHA can better tell humans and bots apart to provide a streamlined user experience on mobile. It will use our newest Invisible reCAPTCHA technology, which runs risk analysis behind the scenes and has enabled millions of human users to pass through with zero clicks every day. Now mobile users can enjoy their apps without being interrupted, while still staying protected from spam and abuse.

(image)

reCAPTCHA Android API is included with Google SafetyNet, which provides services like device attestation and safe browsing to protect mobile apps. Mobile developers can perform both device and user attestation through the same API to mitigate security risks of their apps more efficiently. This adds to the diversity of security protections on Android: Google Play Protect to monitor for potentially harmful applications, device encryption, and regular security updates. Please visit our site to learn more about how to integrate with the reCAPTCHA Android API, and keep an eye out for our iOS library.

The journey of reCAPTCHA continues: we’ll make the Internet safer and easier to use for everyone (except bots).





Better Snippets for your Users

2017-06-02T02:00:14.387-07:00

Before buying a book, people like to get a snapshot of how they're about to spend a few hours reading. They'll take a look at the synopsis, the preface, or even the prologue just to get a sense of whether they'll like the book. Search result snippets are much the same: they help people decide whether or not it makes sense to invest the time reading the page the snippet belongs to. The more descriptive and relevant a search result snippet is, the more likely people are to click through and be satisfied with the page they land on.

Historically, snippets came from three places:

  • The content of the page
  • The meta description
  • DMOZ listings

The content of the page is an obvious choice for result snippets, and the content that can be extracted is often the most relevant to people's queries. However, there are times when the content itself isn't the best source for a snippet. For instance, when someone searches for a publishing company for their book, the relevant homepages in the result set may contain only a few images describing the businesses, a logo, and maybe some links, none of which are particularly useful for a snippet.

The logical fallback in cases when the content of a page doesn't have much textual content for a search result snippet is the meta description. These should be short blurbs that describe the content accurately and precisely in a few words.

Finally, when a page doesn't have much textual content for snippet generation and the meta description is missing, unrelated to the page, or low quality, our fallback was DMOZ, also known as The Open Directory Project. For over 10 years, we relied on DMOZ for snippets because its listings were often of much higher quality than the meta descriptions provided by webmasters, or more descriptive than what the page provided. With DMOZ now closed, we've stopped using its listings for snippeting, so it's a lot more important that webmasters provide good meta descriptions, if adding more content to the page is not an option.

What makes a good meta description?
Good meta descriptions are short blurbs that describe the content of the page accurately. They are like a pitch that convinces the user that the page is exactly what they're looking for. For more tips, we have a handy help center article on the topic. Remember to make sure that both your desktop and your mobile pages include both a title and a meta description.

What are the most common problems with meta descriptions?
Because meta descriptions are usually visible only to search engines and other software, webmasters sometimes forget about them, leaving them completely empty. It's also common, for the same reason, that the same meta description is used across multiple (and sometimes many) pages. On the flip side, it's also relatively common that the description is completely off-topic, low quality, or outright spammy. These issues tarnish our users' search experience, so we prefer to ignore such meta descriptions.

Is there a character limit for meta descriptions?
There's no limit on how long a meta description can be, but search result snippets are truncated as needed, typically to fit the device width.

What will happen with the "NOODP" robots directive?
With DMOZ (ODP) closed, we stopped relying on its data, and thus the NOODP directive is already a no-op.

Can I prevent Google from using the page contents as a snippet?
You can prevent Google from generating snippets altogether by specifying the "nosnippet" robots directive. There's no way to prevent the page contents from being used as a snippet while allowing other sources.
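For reference, the meta description (and, if you want to opt out of snippets entirely, the nosnippet directive) lives in the page's head. A minimal sketch with placeholder text:

    <head>
      <title>Example Press: independent publisher of cookbooks</title>
      <meta name="description" content="Example Press publishes award-winning
        cookbooks and baking guides, and works with first-time authors.">
      <!-- Only if you want no snippet shown at all: -->
      <!-- <meta name="robots" content="nosnippet"> -->
    </head>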
As always, if you have questions, ask in the forums or find us on Twitter!

Posted by Gary, Search Team [...]



A reminder about links in large-scale article campaigns

2017-05-26T05:18:37.646-07:00

Lately we've seen an increase in spammy links contained in articles referred to as contributor posts, guest posts, partner posts, or syndicated posts. These articles are generally written by or in the name of one website, and published on a different one.

Google does not discourage these types of articles in cases where they inform users, educate another site's audience, or bring awareness to your cause or company. However, what does violate Google's guidelines on link schemes is when the main intent is to build links in a large-scale way back to the author's site. Below are factors that, when taken to an extreme, can indicate when an article is in violation of these guidelines:

  • Stuffing keyword-rich links to your site in your articles
  • Having the articles published across many different sites; alternatively, having a large number of articles on a few large, different sites
  • Using or hiring article writers that aren't knowledgeable about the topics they're writing on
  • Using the same or similar content across these articles; alternatively, duplicating the full content of articles found on your own site (in which case use of rel="canonical", in addition to rel="nofollow", is advised; see the sketch at the end of this post)

When Google detects that a website is publishing articles that contain spammy links, this may change Google's perception of the quality of the site and could affect its ranking. Sites accepting and publishing such articles should carefully vet them, asking questions like: Do I know this person? Does this person's message fit with my site's audience? Does the article contain useful content? If there are links of questionable intent in the article, has the author used rel="nofollow" on them?

For websites creating articles made for links, Google takes action on this behavior because it's bad for the Web as a whole. When link building comes first, the quality of the articles can suffer and create a bad experience for users. Also, webmasters generally prefer not to receive aggressive or repeated "Post my article!" requests, and we encourage such cases to be reported to our spam report form. And lastly, if a link is a form of endorsement, and you're the one creating most of the endorsements for your own site, is this putting forth the best impression of your site? Our best advice in relation to link building is to focus on improving your site's content, and everything, including links, will follow (no pun intended).

Posted by the Google Webspam Team [...]
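To illustrate the rel="canonical" and rel="nofollow" advice above, a site republishing a syndicated article might mark it up roughly like this (a sketch only; the URLs are placeholders):

    <!-- On the page republishing the article -->
    <head>
      <!-- Point to the original article when duplicating its full content -->
      <link rel="canonical" href="https://original-site.example.com/articles/full-guide">
    </head>
    <body>
      <p>Article text...
        <!-- Qualify links whose intent you can't vouch for -->
        <a href="https://author-site.example.com/" rel="nofollow">the author's site</a>
      </p>
    </body>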



How we fought webspam - Webspam Report 2016

2017-05-26T05:19:15.521-07:00

With 2017 well underway, we wanted to take a moment and share some of the insights we gathered in 2016 in our fight against webspam. Over the past year, we continued to find new ways of keeping spam from creating a poor quality search experience, and worked with webmasters around the world to make the web better. We do a lot behind the scenes to make sure that users can make full use of what today's web has to offer, bringing relevant results to everyone around the globe, while fighting webspam that could potentially harm or simply annoy users.

Webspam trends in 2016

Website security continues to be a major source of concern. Last year we saw more hacked sites than ever: a 32% increase compared to 2015. Because of this, we continued to invest in improving and creating more resources to help webmasters know what to do when their sites get hacked. We continued to see that sites are compromised not just to host webspam. We saw a lot of webmasters affected by social engineering, unwanted software, and unwanted ad injectors. We took a stronger stance in Safe Browsing to protect users from deceptive download buttons, made a strong effort to protect users from repeatedly dangerous sites, and we launched more detailed help text within the Search Console Security Issues Report.

Since more people are searching on Google using a mobile device, we saw a significant increase in spam targeting mobile users. In particular, we saw a rise in spam that redirects users, without the webmaster's knowledge, to other sites or pages, inserted into webmaster pages using widgets or via ad units from various advertising networks.

How we fought spam in 2016

We continued to refine our algorithms to tackle webspam. We made multiple improvements to how we rank sites, including making Penguin (one of our core ranking algorithms) work in real-time. The spam that we didn't identify algorithmically was handled manually. We sent over 9 million messages to webmasters to notify them of webspam issues on their sites. We also started providing more security notifications via Google Analytics. We performed algorithmic and manual quality checks to ensure that websites with structured data markup meet quality standards. We took manual action on more than 10,000 sites that did not meet the quality guidelines for inclusion in search features powered by structured data.

Working with users and webmasters for a better web

In 2016 we received over 180,000 user-submitted spam reports from around the world. After carefully checking their validity, we considered 52% of those reported sites to be spam. Thanks to all who submitted reports and contributed towards a cleaner and safer web ecosystem! We conducted more than 170 online office hours and live events around the world to audiences totaling over 150,000 website owners, webmasters and digital marketers. We continued to provide support to website owners around the world through our Webmaster Help Forums in 15 languages. Through these forums we saw over 67,000 questions, with a majority of them being identified as having a Best Response by our community of Top Contributors, Rising Stars and Googlers. We had 119 volunteer Webmaster Top Contributors and Rising Stars, whom we invited to join us at our local Top Contributor Meetups in 11 different locations across 4 continents (Asia, Europe, North America, South America).

We think everybody deserves high quality, spam-free search results. We hope that this report provides a glimpse of what we do to make that happen.

Posted by Michal Wicinski, Search Quality Strategist, and Kiyotaka Tanaka, User Education & Outreach Specialist [...]



Similar items: Rich products feature on Google Image Search

2017-04-10T12:25:43.680-07:00

Image Search recently launched “Similar items” on mobile web and the Android Search app. The “Similar items” feature is designed to help users find products they love in photos that inspire them on Google Image Search. Using machine vision technology, the Similar items feature identifies products in lifestyle images and displays matching products to the user. Similar items supports handbags, sunglasses, and shoes and will cover other apparel and home & garden categories in the next few months.

(image)

The Similar items feature enables users to browse and shop inspirational fashion photography and find product info about items they’re interested in. Try it out by opening results from queries like [designer handbags].

Finding price and availability information was one of the top Image Search feature requests from our users. The Similar items carousel gets millions of impressions and clicks daily from all over the world.

To make your products eligible for Similar items, make sure to add and maintain schema.org product metadata on your pages. The schema.org/Product markup helps Google find product offerings on the web and give users an at-a-glance summary of product info.

To ensure that your products are eligible to appear in Similar items:

  • Ensure that the product offerings on your pages have schema.org Product markup, including an image reference. Products with name, image, price & currency, and availability metadata on their host page are eligible for Similar items (see the sketch below).
  • Test your pages with Google's Structured Data Testing Tool to verify that the product markup is formatted correctly.
  • See your images on Image Search by issuing the query "site:yourdomain.com". For results with valid product markup, you may see product information appear once you tap on the images from your site. It can take up to a week for Googlebot to recrawl your website.
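A minimal sketch of such Product markup in JSON-LD, with placeholder values:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Leather Tote Bag",
      "image": "https://www.example.com/images/leather-tote.jpg",
      "offers": {
        "@type": "Offer",
        "price": "149.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock"
      }
    }
    </script>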

Right now, Similar items is available on mobile browsers and the Android Google Search App globally, and we plan to expand to more platforms in 2017.

If you have questions, find us in the dedicated Structured data section of our forum, on Twitter, or on Google+. To prevent your images from showing in Similar items, webmasters can opt out of Google Image Search.

We’re excited to help users find your products on the web by showcasing buyable items. Thanks for partnering with us to make the web more shoppable!




Updates to the Google Safe Browsing’s Site Status Tool

2017-03-30T02:32:37.641-07:00

(Cross-posted from the Google Security Blog)

Google Safe Browsing gives users tools to help protect themselves from web-based threats like malware, unwanted software, and social engineering. We are best known for our warnings, which users see when they attempt to navigate to dangerous sites or download dangerous files. We also provide other tools, like the Site Status Tool, where people can check the current safety status of a web page (without having to visit it).

We host this tool within Google's Safe Browsing Transparency Report. As with other sections in Google's Transparency Report, we make this data available to give the public more visibility into the security and health of the online ecosystem. Users of the Site Status Tool input a webpage (as a URL, website, or domain) into the tool, and the most recent results of the Safe Browsing analysis for that webpage are returned, plus references to troubleshooting help and educational materials.

We've just launched a new version of the Site Status Tool that provides simpler, clearer results and is better designed for the primary users of the page: people who are visiting the tool from a Safe Browsing warning they've received, or doing casual research on Google's malware and phishing detection. The tool now features a cleaner UI, easier-to-interpret language, and more precise results. We've also moved some of the more technical data on associated ASes (autonomous systems) over to the malware dashboard section of the report.

While the interface has been streamlined, additional diagnostic information is not gone: researchers who wish to find more details can drill down elsewhere in Safe Browsing's Transparency Report, while site owners can find additional diagnostic information in Search Console. One of the goals of the Transparency Report is to shed light on complex policy and security issues, so we hope the design adjustments will indeed provide our users with additional clarity.

Posted by Deeksha Padma Prasad and Allison Miller, Safe Browsing [...]



#NoHacked: A year in review

2017-03-20T08:03:47.732-07:00

We hope your year started out safe and secure! We wanted to share with you a summary of our 2016 work as we continue our #NoHacked campaign. Let's start with some trends on hacked sites from the past year.

State of Website Security in 2016

First off, some unfortunate news. We've seen an increase in the number of hacked sites of approximately 32% in 2016 compared to 2015. We don't expect this trend to slow down. As hackers get more aggressive and more sites become outdated, hackers will continue to capitalize by infecting more sites. On the bright side, 84% of webmasters who apply for reconsideration are successful in cleaning their sites. However, 61% of webmasters who were hacked never received a notification from Google that their site was infected because their sites weren't verified in Search Console. Remember to register for Search Console if you own or manage a site. It's the primary channel that Google uses to communicate site health alerts.

More Help for Hacked Webmasters

We've been listening to your feedback to better understand how we can help webmasters with security issues. One of the top requests was easier-to-understand documentation about hacked sites. As a result we've been hard at work to make our documentation more useful. First, we created new documentation to give webmasters more context when their site has been compromised. Here is a list of the new help documentation:

  • Top ways websites get hacked by spammers
  • Glossary for Hacked Sites
  • FAQs for Hacked Sites
  • How do I know if my site is hacked?

Next, we created clean-up guides for sites affected by known hacks. We've noticed that sites often get affected in similar ways when hacked. By investigating the similarities, we were able to create clean-up guides for specific known types of hacks. Below is a short description of each of the guides we created:

  • Gibberish Hack: The gibberish hack automatically creates many pages with nonsensical sentences filled with keywords on the target site. Hackers do this so the hacked pages show up in Google Search. Then, when people try to visit these pages, they'll be redirected to an unrelated page, like a porn site. Learn more on how to fix this type of hack.
  • Japanese Keywords Hack: The Japanese keywords hack typically creates new pages with Japanese text on the target site in randomly generated directory names. These pages are monetized using affiliate links to stores selling fake brand merchandise and then shown in Google Search. Sometimes the accounts of the hackers get added in Search Console as site owners. Learn more on how to fix this type of hack.
  • Cloaked Keywords Hack: The cloaked keywords and link hack automatically creates many pages with nonsensical sentences, links, and images. These pages sometimes contain basic template elements from the original site, so at first glance, the pages might look like normal parts of the target site until you read the content. In this type of attack, hackers usually use cloaking techniques to hide the malicious content and make the injected page appear as part of the original site or a 404 error page. Learn more on how to fix this type of hack.

Prevention is Key

As always it's best to take a preventative approach and secure your site rather than dealing with the aftermath. Remember, a chain is only as strong as its weakest link. You can read more about how to identify vulnerabilities on your site in our hacked help guide. We also recommend staying up to date on releases and announcements from your Content Management System (CMS) providers and software/hardware vendors.

Looking Forward

Hacking behavior is constantly evolving, and research allows us to stay up to date on and combat the latest trends. [...]



Closing down for a day

2017-06-06T05:26:24.576-07:00

Note: This post is specific to Google's organic web search. For Google's other services, please check with the appropriate help center (e.g., for Google Shopping) or help forum.

Even in today's "always-on" world, sometimes businesses want to take a break. There are times when even their online presence needs to be paused. This blog post covers some of the available options so that a site's search presence isn't affected.

Option: Block cart functionality

If a site only needs to block users from buying things, the simplest approach is to disable that specific functionality. In most cases, shopping cart pages can either be blocked from crawling through the robots.txt file, or blocked from indexing with a robots meta tag (see the sketch at the end of this post). Since search engines either won't see or index that content, you can communicate this to users in an appropriate way. For example, you may disable the link to the cart, add a relevant message, or display an informational page instead of the cart.

Option: Always show an interstitial or pop-up

If you need to block the whole site from users, be it with a "temporarily unavailable" message, informational page, or popup, the server should return a 503 HTTP result code ("Service Unavailable"). The 503 result code makes sure that Google doesn't index the temporary content that's shown to users. Without the 503 result code, the interstitial would be indexed as your website's content. Googlebot will retry pages that return 503 for up to about a week before treating it as a permanent error that can result in those pages being dropped from the search results. You can also include a "Retry-After" header to indicate how long the site will be unavailable. Blocking a site for longer than a week can have negative effects on the site's search results regardless of the method that you use.

Option: Switch the whole website off

Turning the server off completely is another option. You might also do this if you're physically moving your server to a different data center. For this, have a temporary server available to serve a 503 HTTP result code for all URLs (with an appropriate informational page for users), and switch your DNS to point to that server during that time.

  1. Set your DNS TTL to a low time (such as 5 minutes) a few days in advance.
  2. Change the DNS to the temporary server's IP address.
  3. Take your main server offline once all requests go to the temporary server.
  4. … your server is now offline …
  5. When ready, bring your main server online again.
  6. Switch DNS back to the main server's IP address.
  7. Change the DNS TTL back to normal.

We hope these options cover the common situations where you'd need to disable your website temporarily. If you have any questions, feel free to drop by our webmaster help forums!

PS: If your business is active locally, make sure to reflect these closures in the opening hours for your local listings too!

Posted by John Mueller, Webmaster Trends Analyst, Switzerland [...]
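For the cart-blocking option above, the robots meta tag route is just one line in the cart page's head (a minimal sketch; the cart URL and page are placeholders):

    <!-- On https://www.example.com/cart only -->
    <head>
      <!-- Keep the cart page out of the index while purchases are paused -->
      <meta name="robots" content="noindex">
    </head>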



Introducing the Mobile-Friendly Test API

2017-01-31T04:53:37.944-08:00

With so many users on mobile devices, having a mobile-friendly web is important to us all. The Mobile-Friendly Test is a great way to check individual pages manually. We're happy to announce that this test is now available via API as well.

The Mobile-Friendly Test API lets you test URLs using automated tools. For example, you could use it to monitor important pages in your website in order to prevent accidental regressions in templates that you use. The API method runs all tests, and returns the same information - including a list of the blocked URLs - as the manual test. The documentation includes simple samples to help get you started quickly.

(image)

We hope this API makes it easier to check your pages for mobile-friendliness and to get any such issues resolved faster. We'd love to hear how you use the API -- leave us a comment here, and feel free to link to any code or implementation that you've set up! As always, if you have any questions, feel free to drop by our webmaster help forum.





Protect your site from user generated spam

2017-01-27T05:49:52.777-08:00

As a website owner, you might have come across some auto-generated content in comments sections or forum threads. When such content is created on your pages, not only can it disrupt those visiting your site, it also shows Google and other search engines content that you may not want to be associated with your site. In this blog post, we will give you tips to help you deal with this type of spam on your site and forum.

Some spammers abuse sites owned by others by posting deceptive content and links, in an attempt to get more traffic to their sites. Comments and forum threads can be a really good source of information and an efficient way of engaging a site's users in discussions. This valuable content should not be buried by auto-generated keywords and links placed there by spammers. There are many ways of securing your site's forums and comment threads and making them unattractive to spammers:

  • Keep your forum software updated and patched. Take the time to keep your software up to date and pay special attention to important security updates. Spammers take advantage of security issues in older versions of blogs, bulletin boards, and other content management systems.
  • Add a CAPTCHA. CAPTCHAs require users to confirm that they are human beings and not automated scripts. One way to do this is to use a service like reCAPTCHA, Securimage, or JCaptcha.
  • Block suspicious behavior. Many forums allow you to set time limits between posts, and you can often find plugins to look for excessive traffic from individual IP addresses or proxies and other activity more common to bots than human beings. For example, phpBB, Simple Machines, myBB, and many other forum platforms enable such configurations.
  • Check your forum's top posters on a daily basis. If a user joined recently and has an excessive amount of posts, you probably should review their profile and make sure that their posts and threads are not spammy.
  • Consider disabling some types of comments. For example, it's a good practice to close some very old forum threads that are unlikely to get legitimate replies. If you plan on not monitoring your forum going forward and users are no longer interacting with it, turning off posting completely may prevent spammers from abusing it.
  • Make good use of moderation capabilities. Consider enabling features in moderation that require users to have a certain reputation before links can be posted, or where comments with links require moderation. If possible, change your settings so that you disallow anonymous posting and make posts from new users require approval before they're publicly visible. Moderators, together with your friends/colleagues and some other trusted users, can help you review and approve posts while spreading the workload. Keep an eye on your forum's new users by looking at their posts and activities on your forum.
  • Consider blacklisting obviously spammy terms. Block obviously inappropriate comments with a blacklist of spammy terms (e.g. illegal streaming or pharma-related terms). Add inappropriate and off-topic terms that are only used by spammers, and learn from the spam posts that you often see on your forum or other forums. Built-in features or plugins can delete or mark comments as spam for you.
  • Use the "nofollow" attribute for links in the comment field. This will deter spammers from targeting your site (see the sketch at the end of this post). By default, many blogging sites (such as Blogger) automatically add this attribute to any posted comments.
  • Use automated systems to defend your site. Comprehensive systems li[...]
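To illustrate the "nofollow" recommendation above, a link inside a user-submitted comment would be rendered roughly like this (a sketch; the URL is a placeholder):

    <!-- A user-submitted comment as rendered by your forum or blog -->
    <div class="comment">
      <p>Great write-up, thanks!
        <a href="https://commenter-site.example.com/" rel="nofollow">my site</a>
      </p>
    </div>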



What Crawl Budget Means for Googlebot

2017-01-27T05:50:18.679-08:00

Recently, we've heard a number of definitions for "crawl budget"; however, we don't have a single term that would describe everything that "crawl budget" stands for externally. With this post we'll clarify what we actually have and what it means for Googlebot.

First, we'd like to emphasize that crawl budget, as described below, is not something most publishers have to worry about. If new pages tend to be crawled the same day they're published, crawl budget is not something webmasters need to focus on. Likewise, if a site has fewer than a few thousand URLs, most of the time it will be crawled efficiently. Prioritizing what to crawl, when, and how much resource the server hosting the site can allocate to crawling is more important for bigger sites, or those that auto-generate pages based on URL parameters, for example.

Crawl rate limit

Googlebot is designed to be a good citizen of the web. Crawling is its main priority, while making sure it doesn't degrade the experience of users visiting the site. We call this the "crawl rate limit," which limits the maximum fetching rate for a given site. Simply put, this represents the number of simultaneous parallel connections Googlebot may use to crawl the site, as well as the time it has to wait between fetches. The crawl rate can go up and down based on a couple of factors:

  • Crawl health: if the site responds really quickly for a while, the limit goes up, meaning more connections can be used to crawl. If the site slows down or responds with server errors, the limit goes down and Googlebot crawls less.
  • Limit set in Search Console: website owners can reduce Googlebot's crawling of their site. Note that setting higher limits doesn't automatically increase crawling.

Crawl demand

Even if the crawl rate limit isn't reached, if there's no demand from indexing, there will be low activity from Googlebot. The two factors that play a significant role in determining crawl demand are:

  • Popularity: URLs that are more popular on the Internet tend to be crawled more often to keep them fresher in our index.
  • Staleness: our systems attempt to prevent URLs from becoming stale in the index.

Additionally, site-wide events like site moves may trigger an increase in crawl demand in order to reindex the content under the new URLs. Taking crawl rate and crawl demand together, we define crawl budget as the number of URLs Googlebot can and wants to crawl.

Factors affecting crawl budget

According to our analysis, having many low-value-add URLs can negatively affect a site's crawling and indexing. We found that the low-value-add URLs fall into these categories, in order of significance:

  • Faceted navigation and session identifiers
  • On-site duplicate content
  • Soft error pages
  • Hacked pages
  • Infinite spaces and proxies
  • Low quality and spam content

Wasting server resources on pages like these will drain crawl activity from pages that do actually have value, which may cause a significant delay in discovering great content on a site.

Top questions

Crawling is the entry point for sites into Google's search results. Efficient crawling of a website helps with its indexing in Google Search.

Q: Does site speed affect my crawl budget? How about errors?
A: Making a site faster improves the users' experience while also increasing crawl rate. For Googlebot a speedy site is a sign of healthy servers, so it can get more content over the same number of connections. On the flip side, a significant number of 5xx errors or connection timeouts signal the opposite, and crawling slows down.
We recommend paying attention to the Crawl Errors report i[...]



Enhancing property sets to cover more reports in Search Console

2016-12-12T01:36:18.572-08:00

Since initially announcing property sets earlier this year, one of the most popular requests has been to expand this functionality to more sections of Search Console. Thanks to your feedback, we're now expanding property sets to more features!

(image)

Property sets help to show how your business is seen by Google across separate websites or apps. For example, if you have multiple international or brand-specific websites, and perhaps even an Android app, it can be useful to see changes for the whole set over time: Are things headed in the expected direction? Are there any outliers that you'd want to drill down into? Similarly, you could monitor your site's hreflang setup across different versions of the same website during a planned transition, such as when you move from HTTP to HTTPS, or change domains.

With Search Console's property sets, you can now just add any verified properties to a set, let the data collect, and then check out features like the Mobile Usability report, review your AMP implementation, double-check rich cards or hreflang / internationalization markup, and more.


We hope these changes make it easier to understand your properties in Search Console. Should you have any questions - or if you just want to help others, feel free to drop by our Webmaster Help Forums.





An update on Google's feature-phone crawling & indexing

2016-11-30T04:36:07.784-08:00

Limited mobile devices, "feature-phones", require a special form of markup or a transcoder for web content. Most websites don't provide feature-phone-compatible content in WAP/WML any more. Given these developments, we've made changes in how we crawl feature-phone content (note: these changes don't affect smartphone content):

1. We've retired the feature-phone Googlebot

We won't be using the feature-phone user-agents for crawling for search going forward.

2. Use "handheld" link annotations for dynamic serving of feature-phone content.

Some sites provide content for feature-phones through dynamic serving, based on the user's user-agent. So that we can recognize this configuration, make sure your desktop and smartphone pages have a self-referential alternate URL link for handheld (feature-phone) devices:
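(The original post showed the link element at this point. It is a plain link annotation on the desktop and smartphone pages whose href is that page's own URL, roughly:)

    <link rel="alternate" media="handheld" href="https://www.example.com/page" />

(Here https://www.example.com/page stands for the URL of the page the element appears on.)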

This is a change from our previous guidance of only using the "vary: user-agent" HTTP header. We've updated our documentation on making feature-phone pages accordingly. We hope adding this link element is possible on your side, and thank you for your help in this regard. We'll continue to show feature-phone URLs in search when we can recognize them, and when they're appropriate for users.

3. We're retiring feature-phone tools in Search Console

Without the feature-phone Googlebot, the special sitemaps extensions for feature-phones, the Fetch as Google feature-phone options, and feature-phone crawl errors are no longer needed. We continue to support sitemaps and other sitemaps extensions (such as for videos or Google News), as well as the other Fetch as Google options in Search Console.


We've worked to make these changes as minimal as possible. Most websites don't serve feature-phone content, and wouldn't be affected. If your site has been providing feature-phone content, we thank you for your help in bringing the Internet to feature-phone users worldwide!

For any questions, feel free to drop by our Webmaster Help Forums!

(image)



Saying goodbye to Content Keywords

2016-11-29T06:29:44.346-08:00

In the early days - back when Search Console was still called Webmaster Tools - the content keywords feature was the only way to see what Googlebot found when it crawled a website. It was useful to see that Google was able to crawl your pages at all, or if your site was hacked.

In the meantime, you can immediately check how Googlebot fetches any page on your website, Search Analytics shows you which queries we've shown your site in Search for, and Google informs you of many kinds of hacks automatically. Additionally, users were often confused about the keywords listed in Content Keywords. And so, the time has come to retire the Content Keywords feature in Search Console.

The words on your pages, the keywords if you will, are still important for Google's (and your users') understanding of your pages. While our systems have gotten better, they can't read your mind: be clear about what your site is about, and what you'd like to be found for. Tell visitors what makes your site, your products and services, special!

What was your most surprising, or favorite, keyword shown? Let us know in the comments!

(image)



Rich Cards expands to more verticals

2016-11-21T14:47:31.068-08:00

At Google I/O in May, we launched Rich Cards for Movies and Recipes, creating a new way for site owners to present previews of their content on the Search results page. Today, we're expanding to two new verticals for US-based sites: Local restaurants and Online courses.

Evolution of search results for queries like [best New Orleans restaurants] and [leadership courses]: with rich cards, results are presented in new UIs, like carousels that are easy to browse by scrolling left and right, or a vertical three-pack that displays more individual courses.

By building Rich Cards, you have a new opportunity to attract more engaged users to your page. Users can swipe through restaurant recommendations from sites like TripAdvisor, Thrillist, Time Out, Eater, and 10Best. In addition to food, users can browse through courses from sites like Coursera, LinkedIn Learning, EdX, Harvard, Udacity, FutureLearn, Edureka, Open University, Udemy, Canvas Network, and NPTEL.

If you have a site that contains local restaurant information or offers online courses, check out our developer docs to start building Rich Cards in the Local restaurant and Online courses verticals.

While AMP HTML is not required for Local restaurant pages and Online courses rich cards, AMP provides Google Search users with a consistently fast experience, so we recommend that you create AMP pages to further engage users. Users consuming AMP'd content will be able to swipe near instantly from restaurant to restaurant or from recipe to recipe within your site. Users who tap on your Rich Card will be taken near instantly to your AMP page, and be able to swipe between pages within your site. Check out our developer site for implementation details.

To make it easier for you to create Rich Cards, we made some changes in our tools:

The Structured Data Testing Tool displays markup errors and a preview card for Local restaurant content as it might appear on Search.
The Rich Cards report in Search Console shows which cards across verticals contain errors, and which ones could be enhanced with more markup.
The AMP Test helps validate AMP pages as well as markup on the page.

What's next? We are actively experimenting with new verticals globally to provide more opportunities for you to display richer previews of your content.

If you have questions, find us in the dedicated Structured data section of our forum, on Twitter or on Google+.

Post by Stacie Chan, Global Product Partnerships [...]
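To make the markup side concrete, here is a minimal, hedged JSON-LD sketch of schema.org Course data of the kind described in the developer docs (names and URLs are placeholders, and the docs remain the authoritative reference for which properties are required):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Course",
      "name": "Introduction to Leadership",
      "description": "A short online course covering the fundamentals of leading teams.",
      "provider": {
        "@type": "Organization",
        "name": "Example University",
        "sameAs": "https://www.example.edu"
      }
    }
    </script>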



Building Indexable Progressive Web Apps

2016-11-09T04:42:17.074-08:00

Progressive Web Apps (PWAs) are taking advantage of new technologies to bring the best of mobile sites and native applications to users -- and they're one of the most exciting new ideas on the web. But to truly have an impact, it's important that they're indexable and linkable. Every recommendation presented in this article is an existing best practice for indexability -- regardless of whether you're building a Progressive Web App or a simple static website. Nonetheless, we have collated these best practices to provide a checklist to guide you:

Make Your Content Crawlable

Why? Historically, websites would always generate or render their HTML on the server, which is the simplest way to ensure your content is directly linkable. Web applications popularised the concept of client-side rendering, in which content is updated dynamically on the page as the user navigates, without requiring the page to be reloaded.

The modern approach is hybrid rendering, in which server-side rendering is used when a user navigates directly to a URL and client-side rendering is used after the initial page load for subsequent navigation and asynchronous requests. Our server-side PWA sample demonstrates pure server-side rendering, while our hybrid PWA sample demonstrates the combined approach. If you are unfamiliar with server-side and client-side rendering terminology, check out these articles on the web.

Best Practice: Use server-side or hybrid rendering so users receive the content in the initial payload of their web request. Always ensure your URLs are independently accessible:

https://www.example.com/product/25/

The above should deep link to that particular resource. If you can't support server-side or hybrid rendering for your Progressive Web App and you decide to use client-side rendering, we recommend using the Google Search Console "Fetch as Google" tool to verify your content successfully renders for our search crawler.

Don't: Redirect users accessing deep links back to your web app's homepage. Additionally, serving an error page to users instead of deep linking should also be avoided.

Provide Clean URLs

Why? Fragment identifiers (#user/24601/ or #!user/24601/) were an effective workaround for browsers to AJAX new content from a server without reloading the page. This design is known as client-side rendering. However, the fragment identifier syntax isn't compatible with some web tools, frameworks and protocols such as Facebook's Open Graph protocol.

The History API enables us to update the URL without fragment identifiers while still fetching resources asynchronously and therefore avoiding page reloads -- it's the best of both worlds. The AJAX crawling scheme (with its #! / escaped-fragment URLs) made sense at its time, but is now no longer recommended. Our hybrid PWA and client-side PWA samples demonstrate the History API.

Best Practice: Provide clean URLs without fragment identifiers (# or #!) such as: https://www.example.com/product/25/ If using client-side or hybrid rendering, be sure to support browser navigation with the History API[...]
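As a rough sketch of the clean-URL guidance above (the element ID and route are hypothetical, and the real wiring depends on your framework), client-side navigation can update the address bar with the History API instead of fragment identifiers:

    <script>
      // Hypothetical sketch: client-side navigation to a clean, crawlable URL.
      var link = document.querySelector('#product-25-link'); // hypothetical element
      if (link) {
        link.addEventListener('click', function (event) {
          event.preventDefault();
          // Update the address bar without a fragment identifier or a full page reload.
          history.pushState({ productId: 25 }, '', '/product/25/');
          // ...fetch and render the product content asynchronously here...
        });
      }
      // Re-render the appropriate content when the user navigates back or forward.
      window.addEventListener('popstate', function (event) {
        // ...render based on event.state or location.pathname...
      });
    </script>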



Mobile-first Indexing

2016-11-04T10:12:55.616-07:00

Today, most people are searching on Google using a mobile device. However, our ranking systems still typically look at the desktop version of a page's content to evaluate its relevance to the user. This can cause issues when the mobile page has less content than the desktop page because our algorithms are not evaluating the actual page that is seen by a mobile searcher.

To make our results more useful, we've begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site's content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results. Of course, while our index will be built from mobile documents, we're going to continue to build a great search experience for all users, whether they come from mobile or desktop devices.

We understand this is an important shift in our indexing and it's one we take seriously. We'll continue to carefully experiment over the coming months on a small scale and we'll ramp up this change when we're confident that we have a great user experience. Though we're only beginning this process, here are a few recommendations to help webmasters prepare as we move towards a more mobile-focused index:

If you have a responsive site or a dynamic serving site where the primary content and markup is equivalent across mobile and desktop, you shouldn't have to change anything.

If you have a site configuration where the primary content and markup is different across mobile and desktop, you should consider making some changes to your site:

Make sure to serve structured markup for both the desktop and mobile version. Sites can verify the equivalence of their structured markup across desktop and mobile by typing the URLs of both versions into the Structured Data Testing Tool and comparing the output. When adding structured data to a mobile site, avoid adding large amounts of markup that isn't relevant to the specific information content of each document.

Use the robots.txt testing tool to verify that your mobile version is accessible to Googlebot.

Sites do not have to make changes to their canonical links; we'll continue to use these links as guides to serve the appropriate results to a user searching on desktop or mobile.

If you are a site owner who has only verified their desktop site in Search Console, please add and verify your mobile version.

If you only have a desktop site, we'll continue to index your desktop site just fine, even if we're using a mobile user agent to view your site.

If you are building a mobile version of your site, keep in mind that a functional desktop-oriented site can be better than a broken or incomplete mobile version of the site. It's better for you to build up your mobile site and launch it when ready.

If you have any questions, feel free to contact us via the Webmaster forums or our public events. We anticipate this change will take some time and we'll update you as we make progress on migrating our systems.

Posted by Doantam Phan, Product Manager [...]
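To illustrate the structured markup recommendation above, here is a hedged sketch (with placeholder values) of a compact JSON-LD block that a site could serve identically on both its desktop and mobile versions, so the Structured Data Testing Tool output matches for the two URLs:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "NewsArticle",
      "headline": "Example headline",
      "datePublished": "2016-11-04",
      "author": { "@type": "Person", "name": "Example Author" }
    }
    </script>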