Subscribe: Google Testing Blog
http://googletesting.blogspot.com/feeds/posts/default

Google Testing Blog



If it ain't broke, you're not trying hard enough.



Updated: 2017-09-20T04:50:16.740-07:00

 



Code Health: Providing Context with Commit Messages and Bug Reports

2017-09-11T14:12:45.960-07:00

This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Chris Lewis

You are caught in a trap. Mechanisms whirl around you, but they make no sense. You desperately search the room and find the builder's original plans! The description of the work order that implemented the trap reads, "Miscellaneous fixes." Oh dear.

Reading other engineers' code can sometimes feel like an archaeology expedition, full of weird and wonderful statements that are hard to decipher. Code is always written with a purpose, but sometimes that purpose is not clear in the code itself. You can address this knowledge gap by documenting the context that explains why a change was needed. Code comments provide context, but comments alone sometimes can't provide enough. There are two key ways to indicate context:

Commit Messages

A commit message is one of the easiest, most discoverable means of providing context. When you encounter lines of code that may be unclear, checking the commit message which introduced the code is a great way to gain more insight into what the code is meant to do.

Write the first line of the commit message so it stands alone, as tools like GitHub will display this line in commit listing pages. Stand-alone first lines allow you to skim through code history much faster, quickly building up your understanding of how a source file evolved over time. Example:

  Add Frobber to the list of available widgets.

  This allows consumers to easily discover the new Frobber widget and
  add it to their application.

Bug Reports

You can use a bug report to track the entire story of a bug/feature/refactoring, adding context such as the original problem description, the design discussions between the team, and the commits that are used to solve the problem. This lets you easily see all related commits in one place, and allows others to easily keep track of the status of a particular problem.

Most commits should reference a bug report. Standalone commits (e.g. one-time cleanups or other small unplanned changes) don't need their own bug report, though, since they often contain all their context within the description and the source changes.

Informative commit messages and bug reports go hand-in-hand, providing context from different perspectives. Keep in mind that such context can be useful even to yourself, providing an easy reminder about the work you did last week, last quarter, or even last year. Future you will thank past you!
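As a sketch of how the two fit together, a commit implementing part of a tracked change could end its message with a reference to the bug report. A trailing "Bug:" line is one common convention; the number below is purely hypothetical:

  Add Frobber to the list of available widgets.

  This allows consumers to easily discover the new Frobber widget and
  add it to their application.

  Bug: 123456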



Code Health: Eliminate YAGNI Smells

2017-08-14T12:19:32.891-07:00

This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Marc Eaddy

The majority of software development costs are due to maintenance. One way to reduce maintenance costs is to implement something only when you actually need it, a.k.a. the "You Aren't Gonna Need It" (YAGNI) design principle. How do you spot unnecessary code? Follow your nose!

A code smell is a code pattern that usually indicates a design flaw. For example, creating a base class or interface with only one subclass may indicate a speculation that more subclasses will be needed in the future. Instead, practice incremental development and design: don't add the second subclass until it is actually needed.

The following C++ code has many YAGNI smells:

  class Mammal {
    ...
    virtual Status Sleep(bool hibernate) = 0;
  };

  class Human : public Mammal {
    ...
    virtual Status Sleep(bool hibernate) {
      age += hibernate ? kSevenMonths : kSevenHours;
      return OK;
    }
  };

Maintainers are burdened with understanding, documenting, and testing both classes when only one is really needed. Code must handle the case when hibernate is true, even though all callers pass false, as well as the case when Sleep returns an error, even though that never happens. This results in unnecessary code that never executes. Eliminating those smells simplifies the code:

  class Human {
    ...
    void Sleep() { age += kSevenHours; }
  };

Here are some other YAGNI smells:
  • Code that has never been executed other than by tests (a.k.a. code that is dead on arrival)
  • Classes designed to be subclassed (have virtual methods and/or protected members) that are not actually subclassed
  • Public or protected methods or fields that could be private
  • Parameters, variables, or flags that always have the same value

Thankfully, YAGNI smells, and code smells in general, are often easy to spot by looking for simple patterns and are easy to eliminate using simple refactorings.

Are you thinking of adding code that won't be used today? Trust me, you aren't gonna need it!
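To see the "always has the same value" smell outside C++, here is a minimal Python sketch; the function, parameter, and output format are hypothetical, not from the original post:

  # Before: a speculative 'verbose' flag that every caller leaves at False.
  def export_report(rows, verbose=False):
      lines = [",".join(map(str, row)) for row in rows]
      if verbose:  # never True in practice: an untested, never-executed branch
          lines.append("# exported %d rows" % len(rows))
      return "\n".join(lines)

  # After: the flag and its dead branch are gone; add them back only when a
  # caller actually needs them.
  def export_report(rows):
      return "\n".join(",".join(map(str, row)) for row in rows)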



Code Health: To Comment or Not to Comment?

2017-07-25T12:56:50.399-07:00

This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Dori Reuveni and Kevin Bourrillion

While reading code, often there is nothing more helpful than a well-placed comment. However, comments are not always good. Sometimes the need for a comment can be a sign that the code should be refactored.

Use a comment when it is infeasible to make your code self-explanatory. If you think you need a comment to explain what a piece of code does, first try one of the following:

Introduce an explaining variable.

  Instead of:
    // Subtract discount from price.
    finalPrice = (numItems * itemPrice) - min(5, numItems) * itemPrice * 0.1;
  write:
    price = numItems * itemPrice;
    discount = min(5, numItems) * itemPrice * 0.1;
    finalPrice = price - discount;

Extract a method.

  Instead of:
    // Filter offensive words.
    for (String word : words) { ... }
  write:
    filterOffensiveWords(words);

Use a more descriptive identifier name.

  Instead of:
    int width = ...; // Width in pixels.
  write:
    int widthInPixels = ...;

Add a check in case your code has assumptions.

  Instead of:
    // Safe since height is always > 0.
    return width / height;
  write:
    checkArgument(height > 0);
    return width / height;

There are cases where a comment can be helpful:

  • Reveal your intent: explain why the code does something (as opposed to what it does).
      // Compute once because it's expensive.
  • Protect a well-meaning future editor from mistakenly "fixing" your code.
      // Create a new Foo instance because Foo is not thread-safe.
  • Clarification: a question that came up during code review or that readers of the code might have.
      // Note that order matters because...
  • Explain your rationale for what looks like a bad software engineering practice.
      @SuppressWarnings("unchecked") // The cast is safe because...

On the other hand, avoid comments that just repeat what the code does. These are just noise:

  // Get all users.
  userService.getAllUsers();

  // Check if the name is empty.
  if (name.isEmpty()) { ... }
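The first technique above, translated to Python as a small hedged sketch (the function and variable names are hypothetical), shows how explaining variables make the comment unnecessary:

  # Comment-driven version: the reader has to trust the comment.
  def final_price_commented(num_items, item_price):
      # Subtract discount from price.
      return (num_items * item_price) - min(5, num_items) * item_price * 0.1

  # Self-explanatory version: explaining variables carry the same information.
  def final_price(num_items, item_price):
      price = num_items * item_price
      discount = min(5, num_items) * item_price * 0.1
      return price - discount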



Evolution of GTAC and Engineering Productivity

2017-07-25T12:55:51.631-07:00

When Google first hosted GTAC in 2006, we didn’t know what to expect. We kicked off this conference with the intention to share our innovation in test automation, learn from others in the industry and connect with academia. Over the last decade we’ve had great participation and had the privilege to host GTAC in North America, Europe and Asia -- largely thanks to the many of you who spoke, participated and connected!

In the recent months, we’ve been taking a hard look at the discipline of Engineering Productivity as a logical next step in the evolution of test automation. In that same vein, we’re going to rethink what an Engineering Productivity focused conference should look like today.  As we pivot, we will be extending these changes to GTAC and because we expect changes in theme, content and format, we are canceling the upcoming event scheduled in London this November. We’ll be bringing the event back in 2018 with a fresh outlook and strategy.

While we know this may be disappointing for many of the folks who were looking forward to GTAC, we’re excited to come back with a new format which will serve this conference well in today’s environment.




Code Health: Too Many Comments on Your Code Reviews?

2017-06-20T10:20:33.423-07:00

This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Tom O'Neill

Code reviews can slow down an individual code change, but they're also an opportunity to improve your code and learn from another intelligent, experienced engineer. How can you get the most out of them?

Aim to get most of your changes approved in the first round of review, with only minor comments. If your code reviews frequently require multiple rounds of comments, these tips can save you time. Spend your reviewers' time wisely—it's a limited resource. If they're catching issues that you could easily have caught yourself, you're lowering the overall productivity of your team.

Before you send out the code review:

  • Re-evaluate your code: Don't just send the review out as soon as the tests pass. Step back and try to rethink the whole thing—can the design be cleaned up? Especially if it's late in the day, see if a better approach occurs to you the next morning. Although this step might slow down an individual code change, it will result long-term in greater average throughput.
  • Consider an informal design discussion: If there's something you're not sure about, pair program, talk face-to-face, or send an early diff and ask for a "pre-review" of the overall design.
  • Self-review the change: Try to look at the code as critically as possible from the standpoint of someone who doesn't know anything about it. Your code review tool can give you a radically different view of your code than the IDE. This can easily save you a round trip.
  • Make the diff easy to understand: Multiple changes at once make the code harder to review. When you self-review, look for simple changes that reduce the size of the diff. For example, save significant refactoring or formatting changes for another code review.
  • Don't hide important info in the submit message: Put it in the code as well. Someone reading the code later is unlikely to look at the submit message.

When you're addressing code review comments:

  • Re-evaluate your code after addressing non-trivial comments: Take a step back and really look at the code with fresh eyes. Once you've made one set of changes, you can often find additional improvements that are enabled or suggested by those changes. Just as with any refactoring, it may take several steps to reach the best design.
  • Understand why the reviewer made each comment: If you don't understand the reasoning behind a comment, don't just make the change—seek out the reviewer and learn something new.
  • Answer the reviewer's questions in the code: Don't just reply—make the code easier to understand (e.g., improve a variable name, change a boolean to an enum) or add a comment. Someone else is going to have the same question later on. (A small sketch of the boolean-to-enum idea follows below.)
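A minimal Python sketch of the boolean-to-enum suggestion (all names are hypothetical): instead of replying "True means public" in the review tool, the answer moves into the code itself.

  from enum import Enum

  class Visibility(Enum):
      PUBLIC = 1
      PRIVATE = 2

  # Before: publish(doc, True) forces the reviewer to ask what True means.
  # After: the call site publish(doc, Visibility.PUBLIC) answers the question.
  def publish(doc, visibility):
      if visibility is Visibility.PUBLIC:
          ...  # push to the public index
      else:
          ...  # keep the document restricted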



Code Health: Reduce Nesting, Reduce Complexity

2017-06-08T11:24:52.287-07:00

This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Elliott Karpilovsky

Deeply nested code hurts readability and is error-prone. Try spotting the bug in the two versions of this code:

Code with too much nesting:

  response = server.Call(request)

  if response.GetStatus() == RPC.OK:
    if response.GetAuthorizedUser():
      if response.GetEnc() == 'utf-8':
        if response.GetRows():
          vals = [ParseRow(r) for r in response.GetRows()]
          avg = sum(vals) / len(vals)
          return avg, vals
        else:
          raise EmptyError()
      else:
        raise AuthError('unauthorized')
    else:
      raise ValueError('wrong encoding')
  else:
    raise RpcError(response.GetStatus())

Code with less nesting:

  response = server.Call(request)

  if response.GetStatus() != RPC.OK:
    raise RpcError(response.GetStatus())

  if not response.GetAuthorizedUser():
    raise ValueError('wrong encoding')

  if response.GetEnc() != 'utf-8':
    raise AuthError('unauthorized')

  if not response.GetRows():
    raise EmptyError()

  vals = [ParseRow(r) for r in response.GetRows()]
  avg = sum(vals) / len(vals)
  return avg, vals

Answer: the "wrong encoding" and "unauthorized" errors are swapped. This bug is easier to see in the refactored version, since the checks occur right as the errors are handled.

The refactoring technique shown above is known as guard clauses. A guard clause checks a criterion and fails fast if it is not met. It decouples the computational logic from the error logic. By removing the cognitive gap between error checking and handling, it frees up mental processing power. As a result, the refactored version is much easier to read and maintain.

Here are some rules of thumb for reducing nesting in your code:

  • Keep conditional blocks short. It increases readability by keeping things local.
  • Consider refactoring when your loops and branches are more than 2 levels deep.
  • Think about moving nested logic into separate functions. For example, if you need to loop through a list of objects that each contain a list (such as a protocol buffer with repeated fields), you can define a function to process each object instead of using a double nested loop (see the sketch below).

Reducing nesting results in more readable code, which leads to discoverable bugs, faster developer iteration, and increased stability. When you can, simplify!
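A minimal sketch of that last rule of thumb in Python; the data shape and function names are hypothetical:

  # Nested version: two levels of looping in one place.
  def count_long_words_nested(documents):
      total = 0
      for doc in documents:            # each doc carries a repeated 'words' field
          for word in doc["words"]:
              if len(word) > 7:
                  total += 1
      return total

  # Flatter version: the inner loop moves into its own small function.
  def count_long_words_in_doc(doc):
      return sum(1 for word in doc["words"] if len(word) > 7)

  def count_long_words(documents):
      return sum(count_long_words_in_doc(doc) for doc in documents)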



GTAC Diversity Scholarship

2017-05-22T07:03:35.032-07:00



by Lesley Katzen on behalf of the GTAC Diversity Committee


We are committed to increasing diversity at GTAC, and we believe the best way to do that is by making sure we have a diverse set of applicants to speak and attend. As part of that commitment, we are excited to announce that we will be offering travel scholarships again this year.
Travel scholarships will be available for selected applicants from traditionally underrepresented groups in technology.

To be eligible for a grant to attend GTAC, applicants must:
  • Be 18 years of age or older.
  • Be from a traditionally underrepresented group in technology.
  • Work or study in Computer Science, Computer Engineering, Information Technology, or a technical field related to software testing.
  • Be able to attend core dates of GTAC, November 14th - 15th 2017 in London, England.
To apply:
You must fill out the following scholarship form and register for GTAC to be considered for a travel scholarship.
The deadline for submission is July 1st. Scholarship recipients will be announced on August 15th. If you are selected, we will contact you with information on how to proceed with booking travel.

What the scholarship covers:
Google will pay for round-trip standard coach class airfare to London for selected scholarship recipients, and 3 nights of accommodations in a hotel near the Google King's Cross campus. Breakfast and lunch will be provided for GTAC attendees and speakers on both days of the conference. We will also provide a £75.00 gift card for other incidentals such as airport transportation or meals. You will need to provide your own credit card to cover any hotel incidentals.

Google is dedicated to providing a harassment-free and inclusive conference experience for everyone. Our anti-harassment policy can be found at:
https://www.google.com/events/policy/anti-harassmentpolicy.html




GTAC 2017 - Registration is open!

2017-05-15T14:40:58.025-07:00

by Diego Cavalcanti on behalf of the GTAC 2017 Committee
The Google Test Automation Conference (GTAC) is an annual test automation conference hosted by Google. It brings together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It is a great opportunity to present, learn, and challenge modern testing technologies and strategies.

We are pleased to announce that this year, GTAC will be held in Google's London office on November 14th and 15th, 2017.

Registration is currently OPEN for attendees and speakers. See more information here.

The schedule for the upcoming months is as follows:
  • May 15, 2017 - Registration opens for speakers and attendees, including applicants for the diversity scholarship.
  • July 1, 2017 - Registration closes for speaker submissions.
  • July 15, 2017 - Registration closes for attendee submissions.
  • August 15, 2017 - Selected speakers and attendees will be notified.
  • November 13, 2017 - Rehearsal day for speakers (not open for attendees).
  • November 14-15, 2017 - GTAC 2017!
As part of our efforts to increase diversity of speakers and attendees at GTAC, we will again be offering travel scholarships for selected applicants from traditionally underrepresented groups in technology. Please find more information here.

Please do not hesitate to contact gtac2017@google.com if you have any questions. We look forward to seeing you in London!




OSS-Fuzz: Five Months Later, and Rewarding Projects

2017-05-08T09:00:08.912-07:00

By Oliver Chang, Abhishek Arya (Security Engineers, Chrome Security), Kostya Serebryany (Software Engineer, Dynamic Tools), and Josh Armour (Security Program Manager)

Five months ago, we announced OSS-Fuzz, Google's effort to help make open source software more secure and stable. Since then, our robot army has been working hard at fuzzing, processing 10 trillion test inputs a day. Thanks to the efforts of the open source community who have integrated a total of 47 projects, we've found over 1,000 bugs (264 of which are potential security vulnerabilities).

(Figure: breakdown of the types of bugs we're finding.)

Notable results

OSS-Fuzz has found numerous security vulnerabilities in several critical open source projects: 10 in FreeType2, 17 in FFmpeg, 33 in LibreOffice, 8 in SQLite 3, 10 in GnuTLS, 25 in PCRE2, 9 in gRPC, and 7 in Wireshark. We've also had at least one bug collision with another independent security researcher (CVE-2017-2801). (Some of the bugs are still view-restricted so links may show smaller numbers.)

Once a project is integrated into OSS-Fuzz, the continuous and automated nature of OSS-Fuzz means that we often catch these issues just hours after the regression is introduced into the upstream repository, so the chances of users being affected are reduced.

Fuzzing not only finds memory safety related bugs, it can also find correctness or logic bugs. One example is a carry propagating bug in OpenSSL (CVE-2017-3732).

Finally, OSS-Fuzz has reported over 300 timeout and out-of-memory failures (~75% of which got fixed). Not every project treats these as bugs, but fixing them enables OSS-Fuzz to find more interesting bugs.

Announcing rewards for open source projects

We believe that user and internet security as a whole can benefit greatly if more open source projects include fuzzing in their development process. To this end, we'd like to encourage more projects to participate and adopt the ideal integration guidelines that we've established. Combined with fixing all the issues that are found, this is often a significant amount of work for developers who may be working on an open source project in their spare time. To support these projects, we are expanding our existing Patch Rewards program to include rewards for the integration of fuzz targets into OSS-Fuzz.

To qualify for these rewards, a project needs to have a large user base and/or be critical to global IT infrastructure. Eligible projects will receive $1,000 for initial integration, and up to $20,000 for ideal integration (the final amount is at our discretion). You have the option of donating these rewards to charity instead, and Google will double the amount.

To qualify for the ideal integration reward, projects must show that:
  • Fuzz targets are checked into their upstream repository and integrated in the build system with sanitizer support (up to $5,000).
  • Fuzz targets are efficient and provide good code coverage (>80%) (up to $5,000).
  • Fuzz targets are part of the official upstream development and regression testing process, i.e. they are maintained, run against old known crashers and the periodically updated corpora (up to $5,000).
  • The last $5,000 is a "l33t" bonus that we may reward at our discretion for projects that we feel have gone the extra mile or done something really awesome.

We've already started to contact the first round of projects that are eligible for the initial reward. If you are the maintainer or point of contact for one of these projects, you may also reach out to us in order to apply for our ideal integration rewards.

The future

We'd like to thank the existing contributors who integrated their projects and fixed countless bugs. We hope to see more projects integrated into OSS-Fuzz, and greater adoption of fuzzing as standard practice when developing software.



Where do our flaky tests come from?

2017-04-17T14:45:07.377-07:00

author: Jeff Listfield

When tests fail on code that was previously tested, this is a strong signal that something is newly wrong with the code. Before, the tests passed and the code was correct; now the tests fail and the code is not working right. The goal of a good test suite is to make this signal as clear and directed as possible.

Flaky (nondeterministic) tests, however, are different. Flaky tests are tests that exhibit both a passing and a failing result with the same code. Given this, a test failure may or may not mean that there's a new problem. And trying to recreate the failure, by rerunning the test with the same version of code, may or may not result in a passing test. We start viewing these tests as unreliable and eventually they lose their value. If the root cause is nondeterminism in the production code, ignoring the test means ignoring a production bug.

Flaky Tests at Google

Google has around 4.2 million tests that run on our continuous integration system. Of these, around 63 thousand have a flaky run over the course of a week. While this represents less than 2% of our tests, it still causes significant drag on our engineers.

If we want to fix our flaky tests (and avoid writing new ones) we need to understand them. At Google, we collect lots of data on our tests: execution times, test types, run flags, and consumed resources. I've studied how some of this data correlates with flaky tests and believe this research can lead us to better, more stable testing practices. Overwhelmingly, the larger the test (as measured by binary size, RAM use, or number of libraries built), the more likely it is to be flaky. The rest of this post will discuss some of my findings. For a previous discussion of our flaky tests, see John Micco's post from May 2016.

Test size - Large tests are more likely to be flaky

We categorize our tests into three general sizes: small, medium and large. Every test has a size, but the choice of label is subjective. The engineer chooses the size when they initially write the test, and the size is not always updated as the test changes. For some tests it doesn't reflect the nature of the test anymore. Nonetheless, it has some predictive value. Over the course of a week, 0.5% of our small tests were flaky, 1.6% of our medium tests were flaky, and 14% of our large tests were flaky [1]. There's a clear increase in flakiness from small to medium and from medium to large. But this still leaves open a lot of questions. There's only so much we can learn looking at three sizes.

The larger the test, the more likely it will be flaky

There are some objective measures of size we collect: test binary size and RAM used when running the test [2]. For these two metrics, I grouped tests into equal-sized buckets [3] and calculated the percentage of tests in each bucket that were flaky. The numbers below are the r² values of the linear best fit [4].

  Correlation between metric and likelihood of test being flaky:

    Metric        r²
    Binary size   0.82
    RAM used      0.76

The tests that I'm looking at are (for the most part) hermetic tests that provide a pass/fail signal. Binary size and RAM use correlated quite well when looking across our tests and there's not much difference between them. So it's not just that large tests are likely to be flaky, it's that the larger the tests get, the more likely they are to be flaky.

I have charted the full set of tests below for those two metrics. Flakiness increases with increases in binary size [5], but we also see increasing linear fit residuals [6] at larger sizes.
The RAM use chart below has a clearer progression and only starts showing large residuals between the first and second vertical lines. While the bucket sizes are constant, the number of tests in each bucket is different. The points on the right with[...]
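The bucket-and-fit analysis described above can be sketched in a few lines of Python. The data below is synthetic and the resulting number is illustrative only; it is not Google's data:

  import numpy as np

  # Synthetic (test_binary_size_mb, was_flaky) records, for illustration only.
  rng = np.random.default_rng(0)
  sizes = rng.uniform(1, 1000, 5000)
  flaky = rng.random(5000) < (0.001 * sizes)   # flakiness grows with size

  # Group tests into equal-width size buckets and compute flakiness per bucket.
  buckets = np.linspace(sizes.min(), sizes.max(), 21)
  idx = np.digitize(sizes, buckets)
  bucket_centers, flaky_rates = [], []
  for b in range(1, len(buckets) + 1):
      mask = idx == b
      if mask.any():
          bucket_centers.append(sizes[mask].mean())
          flaky_rates.append(flaky[mask].mean())

  # r^2 of the linear best fit between bucket size and flakiness rate.
  r = np.corrcoef(bucket_centers, flaky_rates)[0, 1]
  print("r^2 =", round(r * r, 2))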



Code Health: Google's Internal Code Quality Efforts

2017-04-10T17:52:00.887-07:00

By Max Kanat-Alexander, Tech Lead for Code Health and Author of Code Simplicity

There are many aspects of good coding practices that don't fall under the normal areas of testing and tooling that most Engineering Productivity groups focus on in the software industry. For example, having readable and maintainable code is about more than just writing good tests or having the right tools—it's about having code that can be easily understood and modified in the first place. But how do you make sure that engineers follow these practices while still allowing them the independence that they need to make sound engineering decisions?

Many years ago, a group of Googlers came together to work on this problem, and they called themselves the "Code Health" group. Why "Code Health"? Well, many of the other terms used for this in the industry—engineering productivity, best practices, coding standards, code quality—have connotations that could lead somebody to think we were working on something other than what we wanted to focus on. What we cared about was the processes and practices of software engineering in full—any aspect of how software was written that could influence the readability, maintainability, stability, or simplicity of code. We liked the analogy of having "healthy" code as covering all of these areas.

This is a field that many authors, theorists, and conference speakers touch on, but not an area that usually has dedicated resources within engineering organizations. Instead, in most software companies, these efforts are pushed by a few dedicated engineers in their extra time or led by the senior tech leads. However, every software engineer is actually involved in code health in some way. After all, we all write software, and most of us care deeply about doing it the "right way." So why not start a group that helps engineers with that "right way" of doing things?

This isn't to say that we are prescriptive about engineering practices at Google. We still let engineers make the decisions that are most sensible for their projects. What the Code Health group does is work on efforts that universally improve the lives of engineers and their ability to write products with shorter iteration time, decreased development effort, greater stability, and improved performance. Everybody appreciates their code getting easier to understand, their libraries getting simpler, etc. because we all know those things let us move faster and make better products.

But how do we accomplish all of this? Well, at Google, Code Health efforts come in many forms. There is a Google-wide Code Health Group composed of 20% contributors who work to make engineering at Google better for everyone. The members of this group maintain internal documents on best practices and act as a sounding board for teams and individuals who wonder how best to improve practices in their area. Once in a while, for critical projects, members of the group get directly involved in refactoring code, improving libraries, or making changes to tools that promote code health. For example, this central group maintains Google's code review guidelines, writes internal publications about best practices, organizes tech talks on productivity improvements, and generally fosters a culture of great software engineering at Google. Some of the senior members of the Code Health group also advise engineering executives and internal leadership groups on how to improve engineering practices in their areas.
It's not always clear how to implement effective code health practices in an area—some people have more experience than others making this happen broadly in teams, and so we offer our consulting and experience to help make simple code and great developer experiences a reality. In addition to the central group, many products and[...]



Discomfort as a Tool for Change

2017-02-13T08:53:49.827-08:00

by Dave Gladfelter (SETI, Google Drive)

Introduction

The SETI (Software Engineer, Tools and Infrastructure) role at Google is a strange one in that there's no obvious reason why it should exist. The SWEs (Software Engineers) on a project understand its problems best, and understanding a problem is most of the way to fixing it. How can SETIs bring unique value to a project when SWEs have more on-the-ground experience with their impediments?

The answer is scope. A SWE is rewarded for being an expert in their particular area and domain and is highly motivated to make optimizations to their carved-out space. SETIs (and Test Engineers and EngProd in general) identify and solve product-wide problems.

Product-wide problems frequently arise because local optimizations don't necessarily add up to product-wide optimizations. The reason may be the limits of attention, blind spots, or mis-aligned incentives, but a group of SWEs each optimizing for their own sub-projects will not achieve product-wide maxima.

Often SETIs and Test Engineers (TEs) know what behavior they'd like to see, such as more integration tests. We may even have management's ear and convince them to mandate such tests. However, in the absence of incentives, it's unlikely that the decisions SWEs make in response to such mandates will add up to the behavior we desire. Mandates around methods/practices are often ineffective. For example, a mandate of documentation for each public method on an interface often results in "method foo does foo."

The best way to create product-wide efficiencies is to change the way the team or process works in ways that will (initially) be uncomfortable for the engineering team, but that pay dividends that can't be achieved any other way. SETIs and TEs must work to identify the blind spots and negative interactions between engineering teams and change the environment in ways that align engineering teams' incentives. When properly incentivized, SWEs will make optimal decisions enhanced by product-wide vision rather than micro-management.

Common Product-Wide Problems

Hard-to-use APIs

One common example of local optimizations resulting in cross-team de-optimization is documentation and ease-of-use of internal APIs. The team that implements an internal API is not rewarded for making it easy to use except in the most oblique ways. Clients are compelled to use the internal APIs provided to them, so the API owner has a monopoly and will set the price of using it at "you must read all the code and debug it yourself" in the absence of incentives or (rare) heroes.

Big, slow releases

Another example is large and slow releases. Without EngProd help or external pressure, teams will gravitate to the slowest, biggest release possible. This makes sense from the position of any individual SWE: releases are painful, you have to ensure that there are no UI and API regressions, watch traffic and error rates for some time, and re-learn and use tools and processes that are complex and specific to releases. Multiple teams will naturally gravitate to having one big release so that all of these costs can be bundled into one operation for "efficiency." The result is that engineers don't get feedback on features for weeks and versioning of APIs and data stores is ignored (since all the parts of the system are bundled together into one big release). This greatly slows down developer and feature velocity and greatly increases risks of cascading failures when the release fails.

How EngProd fixes product-wide problems

SETIs can nibble around the edges of these kinds of problems by writing tools and automation. TEs can create easy-to-use test environments that facilitate isolating and debugging faults in integration and ambiguities in APIs. We can use fancy technologies to sample l[...]



Testing on the Toilet: Keep Cause and Effect Clear

2017-01-31T08:51:16.154-08:00

by Ben Yu

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Can you tell if this test is correct?

  208:   @Test public void testIncrement_existingKey() {
  209:     assertEquals(9, tally.get("key1"));
  210:   }

It's impossible to know without seeing how the tally object is set up:

  1:   private final Tally tally = new Tally();
  2:   @Before public void setUp() {
  3:     tally.increment("key1", 8);
  4:     tally.increment("key2", 100);
  5:     tally.increment("key1", 0);
  6:     tally.increment("key1", 1);
  7:   }
  // 200 lines away
  208:   @Test public void testIncrement_existingKey() {
  209:     assertEquals(9, tally.get("key1"));
  210:   }

The problem is that the modification of key1's values occurs 200+ lines away from the assertion. Otherwise put, the cause is hidden far away from the effect.

Instead, write tests where the effects immediately follow the causes. It's how we speak in natural language: "If you drive over the speed limit (cause), you'll get a traffic ticket (effect)." Once we group the two chunks of code, we easily see what's going on:

  1:   private final Tally tally = new Tally();
  2:   @Test public void testIncrement_newKey() {
  3:     tally.increment("key", 100);
  5:     assertEquals(100, tally.get("key"));
  6:   }
  7:   @Test public void testIncrement_existingKey() {
  8:     tally.increment("key", 8);
  9:     tally.increment("key", 1);
  10:    assertEquals(9, tally.get("key"));
  11:  }
  12:  @Test public void testIncrement_incrementByZeroDoesNothing() {
  13:    tally.increment("key", 8);
  14:    tally.increment("key", 0);
  15:    assertEquals(8, tally.get("key"));
  16:  }

This style may require a bit more code. Each test sets its own input and verifies its own expected output. The payback is in more readable code and lower maintenance costs.



Happy 10th Birthday Google Testing Blog!

2017-03-22T14:22:42.272-07:00

by Anthony Vallone

Ten years ago today, the first Google Testing Blog article was posted (official announcement 2 days later). Over the years, Google engineers have used this blog to help advance the test engineering discipline. We have shared information about our testing technologies, strategies, and theories; discussed what code quality really means; described how our teams are organized for optimal productivity; announced new tooling; and invited readers to speak at and attend the annual Google Test Automation Conference.

Google Testing Blog banner in 2007


The blog has enjoyed excellent readership. There have been over 10 million page views of the blog since it was created, and there are currently about 100 to 200 thousand views per month.

This blog is made possible by many Google engineers who have volunteered time to author and review content on a regular basis in the interest of sharing. Thank you to all the contributors and our readers!

Please leave a comment if you have a story to share about how this blog has helped you.




Announcing OSS-Fuzz: Continuous Fuzzing for Open Source Software

2016-12-01T09:00:37.877-08:00

By Mike Aizatsky, Kostya Serebryany (Software Engineers, Dynamic Tools); Oliver Chang, Abhishek Arya (Security Engineers, Google Chrome); and Meredith Whittaker (Open Research Lead).

We are happy to announce OSS-Fuzz, a new Beta program developed over the past years with the Core Infrastructure Initiative community. This program will provide continuous fuzzing for select core open source software.

Open source software is the backbone of the many apps, sites, services, and networked things that make up "the internet." It is important that the open source foundation be stable, secure, and reliable, as cracks and weaknesses impact all who build on it. Recent security stories confirm that errors like buffer overflow and use-after-free can have serious, widespread consequences when they occur in critical open source software. These errors are not only serious, but notoriously difficult to find via routine code audits, even for experienced developers. That's where fuzz testing comes in. By generating random inputs to a given program, fuzzing triggers and helps uncover errors quickly and thoroughly.

In recent years, several efficient general purpose fuzzing engines have been implemented (e.g. AFL and libFuzzer), and we use them to fuzz various components of the Chrome browser. These fuzzers, when combined with Sanitizers, can help find security vulnerabilities (e.g. buffer overflows, use-after-free, bad casts, integer overflows, etc), stability bugs (e.g. null dereferences, memory leaks, out-of-memory, assertion failures, etc) and sometimes even logical bugs.

OSS-Fuzz's goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution. OSS-Fuzz combines various fuzzing engines (initially, libFuzzer) with Sanitizers (initially, AddressSanitizer) and provides a massive distributed execution environment powered by ClusterFuzz.

Early successes

Our initial trials with OSS-Fuzz have had good results. An example is the FreeType library, which is used on over a billion devices to display text (and which might even be rendering the characters you are reading now). It is important for FreeType to be stable and secure in an age when fonts are loaded over the Internet. Werner Lemberg, one of the FreeType developers, was an early adopter of OSS-Fuzz. Recently the FreeType fuzzer found a new heap buffer overflow only a few hours after the source change:

  ERROR: AddressSanitizer: heap-buffer-overflow on address 0x615000000ffa
  READ of size 2 at 0x615000000ffa thread T0
  SCARINESS: 24 (2-byte-read-heap-buffer-overflow-far-from-bounds)
    #0 0x885e06 in tt_face_vary_cvt src/truetype/ttgxvar.c:1556:31

OSS-Fuzz automatically notified the maintainer, who fixed the bug; then OSS-Fuzz automatically confirmed the fix. All in one day! You can see the full list of fixed and disclosed bugs found by OSS-Fuzz so far.

Contributions and feedback are welcome

OSS-Fuzz has already found 150 bugs in several widely used open source projects (and churns ~4 trillion test cases a week). With your help, we can make fuzzing a standard part of open source development, and work with the broader community of developers and security testers to ensure that bugs in critical open source applications, libraries, and APIs are discovered and fixed. We believe that this approach to automated security testing will result in real improvements to the security and stability of open source software.
OSS-Fuzz is launching in Beta right now, and will be accepting suggestions for candidate open source projects. In order for a project to be accepted to OSS-Fuzz, it needs to have a large user base and/or be critical to Global IT infrastructure, a g[...]
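As a toy illustration of the core idea of feeding a program random inputs (this is not how OSS-Fuzz or libFuzzer work internally; real fuzzers are coverage-guided and far smarter), a naive random fuzz loop in Python against a hypothetical parser might look like:

  import random

  def parse_record(data: bytes) -> bytes:
      # Hypothetical parser under test: it assumes a one-byte length prefix
      # is always present, so an empty input raises IndexError.
      length = data[0]
      return data[1:1 + length]

  random.seed(0)
  for _ in range(10000):
      size = random.randrange(0, 32)
      fuzz_input = bytes(random.randrange(0, 256) for _ in range(size))
      try:
          parse_record(fuzz_input)
      except Exception as e:  # a crash: report the triggering input
          print("input %r triggered %r" % (fuzz_input, e))
          break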



What Test Engineers do at Google: Building Test Infrastructure

2016-11-18T09:13:57.623-08:00

Author: Jochen Wuttke

In a recent post, we broadly talked about What Test Engineers do at Google. In this post, I talk about one aspect of the work TEs may do: building and improving test infrastructure to make engineers more productive.

Refurbishing legacy systems makes new tools necessary

A few years ago, I joined an engineering team that was working on replacing a legacy system with a new implementation. Because building the replacement would take several years, we had to keep the legacy system operational and even add features, while building the replacement so there would be no impact on our external users.

The legacy system was so complex and brittle that the engineers spent most of their time triaging and fixing bugs and flaky tests, but had little time to implement new features. The goal for the rewrite was to learn from the legacy system and to build something that was easier to maintain and extend. As the team's TE, my job was to understand what caused the high maintenance cost and how to improve on it. I found two main causes:

  • Tight coupling and insufficient abstraction made unit testing very hard, and as a consequence, a lot of end-to-end tests served as functional tests of that code.
  • The infrastructure used for the end-to-end tests had no good way to create and inject fakes or mocks for these services. As a result, the tests had to run the large number of servers for all these external dependencies. This led to very large and brittle tests that our existing test execution infrastructure was not able to handle reliably.

Exploring solutions

At first, I explored if I could split the large tests into smaller ones that would test specific functionality and depend on fewer external services. This proved impossible, because of the poorly structured legacy code. Making this approach work would have required refactoring the entire system and its dependencies, not just the parts my team owned.

In my second approach, I also focussed on large tests and tried to mock services that were not required for the functionality under test. This also proved very difficult, because dependencies changed often and individual dependencies were hard to trace in a graph of over 200 services. Ultimately, this approach just shifted the required effort from maintaining test code to maintaining test dependencies and mocks.

My third and final approach, illustrated in the figure below, made small tests more powerful. In the typical end-to-end test we faced, the client made RPC calls to several services, which in turn made RPC calls to other services. Together the client and the transitive closure over all backend services formed a large graph (not tree!) of dependencies, which all had to be up and running for the end-to-end test.

The new model changes how we test client and service integration. Instead of running the client on inputs that will somehow trigger RPC calls, we write unit tests for the code making method calls to the RPC stub. The stub itself is mocked with a common mocking framework like Mockito in Java. For each such test, a second test verifies that the data used to drive that mock "makes sense" to the actual service. This is also done with a unit test, where a replay client uses the same data the RPC mock uses to call the RPC handler method of the service. This pattern of integration testing applies to any RPC call, so the RPC calls made by a backend server to another backend can be tested just as well as front-end client calls.
When we apply this approach consistently, we benefit from smaller tests that still test correct integration behavior, and make sure that the behavior we are testing is "real". To arrive at this solution[...]
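A minimal Python sketch of the mock-plus-replay pattern described above (the post's example uses Mockito in Java; here unittest.mock stands in, and all class, method, and data names are hypothetical):

  import unittest
  from unittest import mock

  # Hypothetical canned data, shared by both tests so they stay in sync: it
  # drives the client-side mock and is replayed against the real handler.
  CANNED_REQUEST = {"user_id": 42}
  CANNED_RESPONSE = {"display_name": "Ada"}

  class ProfileClient:
      """Client code under test; it talks to an injected RPC stub."""
      def __init__(self, stub):
          self._stub = stub
      def greeting(self, user_id):
          response = self._stub.GetProfile({"user_id": user_id})
          return "Hello, %s!" % response["display_name"]

  class ClientSideTest(unittest.TestCase):
      def test_greeting_uses_profile_service(self):
          stub = mock.Mock()
          stub.GetProfile.return_value = CANNED_RESPONSE
          client = ProfileClient(stub)
          self.assertEqual("Hello, Ada!", client.greeting(42))
          stub.GetProfile.assert_called_once_with(CANNED_REQUEST)

  class ReplayAgainstServiceTest(unittest.TestCase):
      def test_canned_request_makes_sense_to_real_handler(self):
          # Hypothetical real handler invoked directly, with no servers running.
          def handle_get_profile(request):
              assert "user_id" in request
              return {"display_name": "Ada"}
          self.assertEqual(CANNED_RESPONSE, handle_get_profile(CANNED_REQUEST))

  if __name__ == "__main__":
      unittest.main()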



Hackable Projects - Pillar 3: Infrastructure

2016-11-10T11:12:07.469-08:00

By: Patrik Höglund

This is the third article in our series on Hackability; also see the first and second article.

We have seen in our previous articles how Code Health and Debuggability can make a project much easier to work on. The third pillar is a solid infrastructure that gets accurate feedback to your developers as fast as possible. Speed is going to be a major theme in this article, and we'll look at a number of things you can do to make your project easier to hack on.

Build Systems Speed

Question: What's a change you'd really like to see in our development tools?

"I feel like this question gets asked almost every time, and I always give the same answer: I would like them to be faster."
        -- Ian Lance Taylor

Replace make with ninja. Use the gold linker instead of ld. Detect and delete dead code in your project (perhaps using coverage tools). Reduce the number of dependencies, and enforce dependency rules so new ones are not added lightly. Give the developers faster machines. Use distributed build, which is available with many open-source continuous integration systems (or use Google's system, Bazel!). You should do everything you can to make the build faster.

Figure 1: "Cheetah chasing its prey" by Marlene Thyssen.

Why is that? There's a tremendous difference in hackability if it takes 5 seconds to build and test versus one minute, or even 20 minutes, to build and test. Slow feedback cycles kill hackability, for many reasons:

  • Build and test times longer than a handful of seconds cause many developers' minds to wander, taking them out of the zone.
  • Excessive build or release times* make tinkering and refactoring much harder. All developers have a threshold when they start hedging (e.g. "I'd like to remove this branch, but I don't know if I'll break the iOS build") which causes refactoring to not happen.

* The worst I ever heard of was an OS that took 24 hours to build!

How do you actually make fast build systems? There are some suggestions in the first paragraph above, but the best general suggestion I can make is to have a few engineers on the project who deeply understand the build systems and have the time to continuously improve them. The main axes of improvement are:

  • Reduce the amount of code being compiled.
  • Replace tools with faster counterparts.
  • Increase processing power, maybe through parallelization or distributed systems.

Note that there is a big difference between full builds and incremental builds. Both should be as fast as possible, but incremental builds are by far the most important to optimize. The way you tackle the two is different. For instance, reducing the total number of source files will make a full build faster, but it may not make an incremental build faster. To get faster incremental builds, in general, you need to make each source file as decoupled as possible from the rest of the code base. The less a change ripples through the codebase, the less work to do, right? See "Loose Coupling and Testability" in Pillar 1 for more on this subject. The exact mechanics of dependencies and interfaces depends on programming language - one of the hardest to get right is unsurprisingly C++, where you need to be disciplined with includes and forward declarations to get any kind of incremental build performance.

Build scripts and makefiles should be held to standards as high as the code itself. Technical debt and unnecessary dependencies have a tendency to accumulate in build scripts, because no one has the time to understand and fix them. Avoid this by addressing the technical debt as you go.

Continuous Integration and [...]



Hackable Projects - Pillar 2: Debuggability

2016-11-10T11:34:50.205-08:00

By: Patrik Höglund

This is the second article in our series on Hackability; also see the first article.

"Deep into that darkness peering, long I stood there, wondering, fearing, doubting, dreaming dreams no mortal ever dared to dream before." -- Edgar Allan Poe

Debuggability can mean being able to use a debugger, but here we're interested in a broader meaning. Debuggability means being able to easily find what's wrong with a piece of software, whether it's through logs, statistics or debugger tools. Debuggability doesn't happen by accident: you need to design it into your product. The amount of work it takes will vary depending on your product, programming language(s) and development environment. In this article, I am going to walk through a few examples of how we have aided debuggability for our developers. If you do the same analysis and implementation for your project, perhaps you can help your developers illuminate the dark corners of the codebase and learn what truly goes on there.

Figure 1: computer log entry from the Mark II, with a moth taped to the page.

Running on Localhost

Read more on the Testing Blog: Hermetic Servers by Chaitali Narla and Diego Salas

Suppose you're developing a service with a mobile app that connects to that service. You're working on a new feature in the app that requires changes in the backend. Do you develop in production? That's a really bad idea, as you must push unfinished code to production to work on your change. Don't do that: it could break your service for your existing users. Instead, you need some kind of script that brings up your server stack on localhost.

You can probably run your servers by hand, but that quickly gets tedious. In Google, we usually use fancy python scripts that invoke the server binaries with flags. Why do we need those flags? Suppose, for instance, that you have a server A that depends on a server B and C. The default behavior when the server boots should be to connect to B and C in production. When booting on localhost, we want to connect to our local B and C though. For instance:

  b_serv --port=1234 --db=/tmp/fakedb
  c_serv --port=1235
  a_serv --b_spec=localhost:1234 --c_spec=localhost:1235

That makes it a whole lot easier to develop and debug your server. Make sure the logs and stdout/stderr end up in some well-defined directory on localhost so you don't waste time looking for them. You may want to write a basic debug client that sends HTTP requests or RPCs or whatever your server handles. It's painful to have to boot the real app on a mobile phone just to test something.

A localhost setup is also a prerequisite for making hermetic tests, where the test invokes the above script to bring up the server stack. The test can then run, say, integration tests among the servers or even client-server integration tests. Such integration tests can catch protocol drift bugs between client and server, while being super stable by not talking to external or shared services.

Debugging Mobile Apps

First, mobile is hard. The tooling is generally less mature than for desktop, although things are steadily improving. Again, unit tests are great for hackability here. It's really painful to always load your app on a phone connected to your workstation to see if a change worked. Robolectric unit tests and Espresso functional tests, for instance, run on your workstation and do not require a real phone. xcTests and Earl Grey give you the same on iOS. Debuggers ship with Xcode and Android Studio.
If your Android app ships JNI code, it’s a bit trickier, but you can attach GDB to running processes on your ph[...]
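A bare-bones sketch of such a localhost bring-up script in Python, reusing the hypothetical b_serv/c_serv/a_serv binaries and flags from the example above (an illustration, not a script from the post):

  import subprocess
  import time

  # Hypothetical server binaries and flags, matching the example above.
  SERVERS = [
      ["b_serv", "--port=1234", "--db=/tmp/fakedb"],
      ["c_serv", "--port=1235"],
      ["a_serv", "--b_spec=localhost:1234", "--c_spec=localhost:1235"],
  ]

  def main():
      procs = []
      for cmd in SERVERS:
          # Send logs to a well-defined place so nobody has to hunt for them.
          log = open("/tmp/%s.log" % cmd[0], "w")
          procs.append(subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT))
      print("Local stack is up; press Ctrl-C to tear it down.")
      try:
          while True:
              time.sleep(1)
      except KeyboardInterrupt:
          pass
      finally:
          for proc in procs:
              proc.terminate()

  if __name__ == "__main__":
      main()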



Testing on the Toilet: What Makes a Good End-to-End Test?

2016-09-21T15:59:04.593-07:00

by Adam Bender

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

An end-to-end test tests your entire system from one end to the other, treating everything in between as a black box. End-to-end tests can catch bugs that manifest across your entire system. In addition to unit and integration tests, they are a critical part of a balanced testing diet, providing confidence about the health of your system in a near production state. Unfortunately, end-to-end tests are slower, more flaky, and more expensive to maintain than unit or integration tests. Consider carefully whether an end-to-end test is warranted, and if so, how best to write one. Let's consider how an end-to-end test might work for the following "login flow":

(Figure: login flow diagram.)

In order to be cost effective, an end-to-end test should focus on aspects of your system that cannot be reliably evaluated with smaller tests, such as resource allocation, concurrency issues and API compatibility. More specifically:

  • For each important use case, there should be one corresponding end-to-end test. This should include one test for each important class of error. The goal is to keep your total end-to-end count low.
  • Be prepared to allocate at least one week a quarter per test to keep your end-to-end tests stable in the face of issues like slow and flaky dependencies or minor UI changes.
  • Focus your efforts on verifying overall system behavior instead of specific implementation details; for example, when testing login behavior, verify that the process succeeds independent of the exact messages or visual layouts, which may change frequently.
  • Make your end-to-end test easy to debug by providing an overview-level log file, documenting common test failure modes, and preserving all relevant system state information (e.g.: screenshots, database snapshots, etc.).

End-to-end tests also come with some important caveats:

  • System components that are owned by other teams may change unexpectedly, and break your tests. This increases overall maintenance cost, but can highlight incompatible changes.
  • It may be more difficult to make an end-to-end test fully hermetic; leftover test data may alter future tests and/or production systems. Where possible keep your test data ephemeral.
  • An end-to-end test often necessitates multiple test doubles (fakes or stubs) for underlying dependencies; they can, however, have a high maintenance burden as they drift from the real implementations over time.



What Test Engineers do at Google

2016-09-12T08:00:20.149-07:00

by Matt Lowrie, Manjusha Parvathaneni, Benjamin Pick, and Jochen Wuttke

Test engineers (TEs) at Google are a dedicated group of engineers who use proven testing practices to foster excellence in our products. We orchestrate the rapid testing and releasing of products and features our users rely on. Achieving this velocity requires creative and diverse engineering skills that allow us to advocate for our users. By building testable user journeys into the process, we ensure reliable products. TEs are also the glue that brings together feature stakeholders (product managers, development teams, UX designers, release engineers, beta testers, end users, etc.) to confirm successful product launches. Essentially, every day we ask ourselves, "How can we make our software development process more efficient to deliver products that make our users happy?"

The TE role grew out of the desire to make Google's early free products, like Search, Gmail and Docs, better than similar paid products on the market at the time. Early on in Google's history, a small group of engineers believed that the company's "launch and iterate" approach to software deployment could be improved with continuous automated testing. They took it upon themselves to promote good testing practices to every team throughout the company, via some programs you may have heard about: Testing on the Toilet, the Test Certified Program, and the Google Test Automation Conference (GTAC). These efforts resulted in every project taking ownership of all aspects of testing, such as code coverage and performance testing. Testing practices quickly became commonplace throughout the company and engineers writing tests for their own code became the standard. Today, TEs carry on this tradition of setting the standard of quality which all products should achieve.

Historically, Google has sustained two separate job titles related to product testing and test infrastructure, which has caused confusion. We often get asked what the difference is between the two. The rebranding of the Software Engineer, Tools and Infrastructure (SETI) role, which now concentrates on engineering productivity, has been addressed in a previous blog post. What this means for test engineers at Google is an enhanced responsibility of being the authority on product excellence. We are expected to uphold testing standards company-wide, both programmatically and persuasively.

Test engineer is a unique role at Google. As TEs, we define and organize our own engineering projects, bridging gaps between engineering output and end-user satisfaction. To give you an idea of what TEs do, here are some examples of challenges we need to solve on any particular day:

  • Automate a manual verification process for product release candidates so developers have more time to respond to potential release-blocking issues.
  • Design and implement an automated way to track and surface Android battery usage to developers, so that they know immediately when a new feature will drain users' batteries.
  • Quantify if a regenerated data set used by a product, which contains a billion entities, is better quality than the data set currently live in production.
  • Write an automated test suite that validates if content presented to a user is of an acceptable quality level based on their interests.
  • Read an engineering design proposal for a new feature and provide suggestions about how and where to build in testability.
Investigate correlated stack traces submitted by users through our feedback tracking system, and search the code [...]



Hackable Projects - Pillar 1: Code Health

2016-11-10T11:33:37.416-08:00

By: Patrik Höglund

Introduction

Software development is difficult. Projects often evolve over several years, under changing requirements and shifting market conditions, impacting developer tools and infrastructure. Technical debt, slow build systems, poor debuggability, and increasing numbers of dependencies can weigh down a project. The developers get weary, and cobwebs accumulate in dusty corners of the code base. Fighting these issues can be taxing and feel like a quixotic undertaking, but don't worry: the Google Testing Blog is riding to the rescue! This is the first article of a series on "hackability" that identifies some of the issues that hinder software projects and outlines what Google SETIs usually do about them.

According to Wiktionary, hackable is defined as:

Adjective
hackable (comparative more hackable, superlative most hackable)
  1. (computing) That can be hacked or broken into; insecure, vulnerable.
  2. That lends itself to hacking (technical tinkering and modification); moddable.

Obviously, we're not going to talk about making your product more vulnerable (by, say, rolling your own crypto or something equally unwise); instead, we will focus on the second definition, which essentially means "something that is easy to work on." This has become the main focus for SETIs at Google as the role has evolved over the years.

In Practice

In a hackable project, it's easy to try things and hard to break things. Hackability means fast feedback cycles that offer useful information to the developer.

This is hackability:
  • Developing is easy
  • Fast build
  • Good, fast tests
  • Clean code
  • Easy running + debugging
  • One-click rollbacks

In contrast, what is not hackability?
  • Broken HEAD (tip-of-tree)
  • Slow presubmit (i.e. checks running before submit)
  • Builds take hours
  • Incremental build/link > 30s
  • Flaky tests
  • Can't attach debugger
  • Logs full of uninteresting information

The Three Pillars of Hackability

There are a number of tools and practices that foster hackability. When everything is in place, it feels great to work on the product. Basically no time is spent on figuring out why things are broken, and all time is spent on what matters, which is understanding and working with the code. I believe there are three main pillars that support hackability. If one of them is absent, hackability will suffer. They are:

Pillar 1: Code Health

"I found Rome a city of bricks, and left it a city of marble." -- Augustus

Keeping the code in good shape is critical for hackability. It's a lot harder to tinker and modify something if you don't understand what it does (or if it's full of hidden traps, for that matter).

Tests

Unit and small integration tests are probably the best things you can do for hackability. They're a support you can lean on while making your changes, and they contain lots of good information on what the code does. It isn't hackability to boot a slow UI and click buttons on every iteration to verify your change worked - it is hackability to run a sub-second set of unit tests! In contrast, end-to-end (E2E) tests generally help hackability much less (and can even be a hindrance if they, or the product, are in sufficiently bad shape).

Figure 1: the Testing Pyramid.

I've always been interested in how you actually make unit tests happen in a team. It's about education. Writing a product such that it has good unit tests is actually a hard problem. It requires knowledge of dependency injection, testing/mocking frameworks, language idioms and refactoring. The difficulty v[...]
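As a small illustration (not from the original article) of why dependency injection matters for sub-second unit tests, the following Python sketch injects an in-memory fake in place of a slow database-backed dependency; all class and method names here are hypothetical.

import unittest


class FakeUserStore:
    """In-memory stand-in for a database-backed user store."""

    def __init__(self, users):
        self._users = users

    def lookup(self, user_id):
        return self._users.get(user_id)


class Greeter:
    """The store is injected, so tests never need a real database."""

    def __init__(self, user_store):
        self._store = user_store

    def greeting(self, user_id):
        user = self._store.lookup(user_id)
        return "Hello, %s!" % user if user else "Hello, stranger!"


class GreeterTest(unittest.TestCase):
    def test_known_user_is_greeted_by_name(self):
        greeter = Greeter(FakeUserStore({42: "Ada"}))
        self.assertEqual("Hello, Ada!", greeter.greeting(42))

    def test_unknown_user_gets_generic_greeting(self):
        greeter = Greeter(FakeUserStore({}))
        self.assertEqual("Hello, stranger!", greeter.greeting(7))


if __name__ == "__main__":
    unittest.main()

Because the fake is injected through the constructor, both tests run in well under a second, which is exactly the fast feedback loop the article describes.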



The Inquiry Method for Test Planning

2016-08-21T09:30:20.217-07:00

by Anthony Vallone, updated July 2016

Creating a test plan is often a complex undertaking. An ideal test plan is accomplished by applying basic principles of cost-benefit analysis and risk analysis, optimally balancing these software development factors:

  • Implementation cost: The time and complexity of implementing testable features and automated tests for specific scenarios will vary, and this affects short-term development cost.
  • Maintenance cost: Tests and test plans may range from easy to difficult to maintain, and this affects long-term development cost. When manual testing is chosen, this also adds to long-term cost.
  • Monetary cost: Some test approaches may require billed resources.
  • Benefit: Tests are capable of preventing issues and aiding productivity by varying degrees. Also, the earlier they can catch problems in the development life-cycle, the greater the benefit.
  • Risk: The probability of failure scenarios may vary from rare to likely, and their consequences may vary from minor nuisance to catastrophic.

Effectively balancing these factors in a plan depends heavily on project criticality, implementation details, resources available, and team opinions. Many projects can achieve outstanding coverage with high-benefit, low-cost unit tests, but they may need to weigh options for larger tests and complex corner cases. Mission-critical projects must minimize risk as much as possible, so they will accept higher costs and invest heavily in rigorous testing at all levels.

This guide puts the onus on the reader to find the right balance for their project. Also, it does not provide a test plan template, because templates are often too generic or too specific and quickly become outdated. Instead, it focuses on selecting the best content when writing a test plan.

Test plan vs. strategy

Before proceeding, two common methods for defining test plans need to be clarified:

  • Single test plan: Some projects have a single "test plan" that describes all implemented and planned testing for the project.
  • Single test strategy and many plans: Some projects have a "test strategy" document as well as many smaller "test plan" documents. Strategies typically cover the overall test approach and goals, while plans cover specific features or project updates.

Either of these may be embedded in and integrated with project design documents. Both of these methods work well, so choose whichever makes sense for your project. Generally speaking, stable projects benefit from a single plan, whereas rapidly changing projects are best served by infrequently changed strategies and frequently added plans.

For the purpose of this guide, I will refer to both test document types simply as "test plans". If you have multiple documents, just apply the advice below to your set of documents.

Content selection

A good approach to creating content for your test plan is to start by listing all questions that need answers. The lists below provide a comprehensive collection of important questions that may or may not apply to your project. Go through the lists and select all that apply. By answering these questions, you will form the contents for your test plan, and you should structure your plan around the chosen content in any format your team prefers. Be sure to balance the factors as mentioned above when making decisions.

Prerequisites

  • Do you need a test plan? If there is no project design document or a clear vision for the product, it may be too early to write a test plan.
  • Has testability been considered in the project[...]



GTAC 2016 Registration Deadline Extended

2016-11-15T12:09:42.025-08:00

by Sonal Shah on behalf of the GTAC Committee

Our goal in organizing GTAC each year is to make it a first-class conference, dedicated to presenting leading-edge industry practices. The quality of submissions we've received for GTAC 2016 so far has been overwhelming. In order to include the best talks possible, we are extending the deadline for speaker and attendee submissions by 15 days. The new timelines are as follows:

June 15, 2016 (extended from June 1, 2016) - Last day for speaker, attendee and diversity scholarship submissions.
July 15, 2016 (extended from June 15, 2016) - Attendees and scholarship awardees will be notified of selection/rejection/waitlist status. Those on the waitlist will be notified as space becomes available.
August 29, 2016 (extended from August 15, 2016) - Selected speakers will be notified.

To register, please fill out this form.
To apply for the diversity scholarship, please fill out this form.

The GTAC website has a list of frequently asked questions. Please do not hesitate to contact gtac2016@google.com if you still have any questions.




Flaky Tests at Google and How We Mitigate Them

2016-05-27T17:34:03.355-07:00

by John Micco

At Google, we run a very large corpus of tests continuously to validate our code submissions. Everyone from developers to project managers relies on the results of these tests to make decisions about whether the system is ready for deployment or whether code changes are OK to submit. Productivity for developers at Google relies on the ability of the tests to find real problems with the code being changed or developed in a timely and reliable fashion. Tests are run before submission (pre-submit testing), which gates submission and verifies that changes are acceptable, and again after submission (post-submit testing) to decide whether the project is ready to be released. In both cases, all of the tests for a particular project must report a passing result before submitting code or releasing a project.

Unfortunately, across our entire corpus of tests, we see a continual rate of about 1.5% of all test runs reporting a "flaky" result. We define a "flaky" test result as a test that exhibits both a passing and a failing result with the same code. There are many root causes why tests return flaky results, including concurrency, relying on non-deterministic or undefined behaviors, flaky third party code, infrastructure problems, etc. We have invested a lot of effort in removing flakiness from tests, but overall the insertion rate is about the same as the fix rate, meaning we are stuck with a certain rate of tests that provide value but occasionally produce a flaky result. Almost 16% of our tests have some level of flakiness associated with them! This is a staggering number; it means that more than 1 in 7 of the tests written by our world-class engineers occasionally fail in a way not caused by changes to the code or tests.

When doing post-submit testing, our Continuous Integration (CI) system identifies when a passing test transitions to failing, so that we can investigate the code submission that caused the failure. What we find in practice is that about 84% of the transitions we observe from pass to fail involve a flaky test! This causes extra repetitive work to determine whether a new failure is a flaky result or a legitimate failure. It is quite common to ignore legitimate failures in flaky tests due to the high number of false positives. At the very least, build monitors typically wait for additional CI cycles to run the test again to determine whether or not it has been broken by a submission, adding to the delay of identifying real problems and increasing the pool of changes that could be responsible.

In addition to the cost of build monitoring, consider that the average project contains 1000 or so individual tests. To release a project, we require that all these tests pass with the latest code changes. If 1.5% of test results are flaky, 15 tests will likely fail, requiring expensive investigation by a build cop or developer. In some cases, developers dismiss a failing result as flaky only to later realize that it was a legitimate failure caused by the code. It is human nature to ignore alarms when there is a history of false signals coming from a system. For example, see this article about airline pilots ignoring an alarm on 737s. The same phenomenon occurs with pre-submit testing: the same 15 or so failing tests block submission and introduce costly delays into the core development process. Ignoring legitimate failures at this stage results in the submission of broken code.

We have several miti[...]
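As a rough back-of-the-envelope check (not part of the original post), the arithmetic behind the "15 failing tests per run" estimate above can be sketched in a few lines of Python, under the simplifying assumption that each of a project's 1,000 test results is independently flaky 1.5% of the time:

# Illustration only: expected flaky failures per run and the chance of an
# all-green run, assuming 1,000 independent results and a 1.5% flaky rate.
num_tests = 1000
flaky_rate = 0.015

expected_flaky_failures = num_tests * flaky_rate   # ~15 tests per run
prob_all_green = (1 - flaky_rate) ** num_tests     # well under 0.01%

print("Expected flaky failures per run: %.0f" % expected_flaky_failures)
print("Chance of an all-green run: %.5f%%" % (100 * prob_all_green))

Even a modest per-test flaky rate makes a fully green run of a large suite vanishingly unlikely, which is consistent with the re-running and build-monitoring costs described above.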



GTAC Diversity Scholarship

2016-06-03T06:35:40.232-07:00

by Lesley Katzen on behalf of the GTAC Diversity Committee

We are committed to increasing diversity at GTAC, and we believe the best way to do that is by making sure we have a diverse set of applicants to speak and attend. As part of that commitment, we are excited to announce that we will be offering travel scholarships this year.
Travel scholarships will be available for selected applicants from traditionally underrepresented groups in technology.

To be eligible for a grant to attend GTAC, applicants must:

  • Be 18 years of age or older.
  • Be from a traditionally underrepresented group in technology.
  • Work or study in Computer Science, Computer Engineering, Information Technology, or a technical field related to software testing.
  • Be able to attend core dates of GTAC, November 15th - 16th 2016 in Sunnyvale, CA.


To apply:
Please fill out the following form to be considered for a travel scholarship.
The deadline for submission is June 15th (extended from June 1st). Scholarship recipients will be announced on July 15th (previously June 30th). If you are selected, we will contact you with information on how to proceed with booking travel.


What the scholarship covers:
Google will pay for standard coach class airfare for selected scholarship recipients to San Francisco or San Jose, and 3 nights of accommodations in a hotel near the Sunnyvale campus. Breakfast and lunch will be provided for GTAC attendees and speakers on both days of the conference. We will also provide a $50.00 gift card for other incidentals such as airport transportation or meals. You will need to provide your own credit card to cover any hotel incidentals.


Google is dedicated to providing a harassment-free and inclusive conference experience for everyone. Our anti-harassment policy can be found at:
https://www.google.com/events/policy/anti-harassmentpolicy.html