Google Testing Blog

If it ain't broke, you're not trying hard enough.

Updated: 2018-02-19T08:30:43.526-08:00


Testing on the Toilet: Only Verify State-Changing Method Calls


This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

By Dillon Bly and Andrew Trenk

Which lines can be safely removed from this test?

  @Test public void addPermissionToDatabase() {
    new UserAuthorizer(mockUserService, mockPermissionDb).grantPermission(USER, READ_ACCESS);
    // The test will fail if any of these methods is not called.
    verify(mockUserService).isUserActive(USER);
    verify(mockPermissionDb).getPermissions(USER);
    verify(mockPermissionDb).isValidPermission(READ_ACCESS);
    verify(mockPermissionDb).addPermission(USER, READ_ACCESS);
  }

The answer is that the calls to verify the non-state-changing methods can be removed. Method calls on another object fall into one of two categories:

• State-changing: methods that have side effects and change the world outside the code under test, e.g., sendEmail(), saveRecord(), logAccess().
• Non-state-changing: methods that return information about the world outside the code under test and don't modify anything, e.g., getUser(), findResults(), readFile().

You should usually avoid verifying that non-state-changing methods are called:

• It is often redundant: a method call that doesn't change the state of the world is meaningless on its own. The code under test will use the return value of the method call to do other work that you can assert.
• It makes tests brittle: tests need to be updated whenever method calls change. For example, if a test is expecting mockUserService.isUserActive(USER) to be called, it would fail if the code under test is modified to call user.isActive() instead.
• It makes tests less readable: the additional assertions in the test make it more difficult to determine which method calls actually affect the state of the world.
• It gives a false sense of security: just because the code under test called a method does not mean the code under test did the right thing with the method's return value.

Instead of verifying that they are called, use non-state-changing methods to simulate different conditions in tests, e.g., when(mockUserService.isUserActive(USER)).thenReturn(false). Then write assertions for the return value of the code under test, or verify state-changing method calls.

Verifying non-state-changing method calls may be useful if there is no other output you can assert. For example, if your code should be caching an RPC result, you can verify that the method that makes the RPC is called only once.

With the unnecessary verifications removed, the test looks like this:

  @Test public void addPermissionToDatabase() {
    new UserAuthorizer(mockUserService, mockPermissionDb).grantPermission(USER, READ_ACCESS);
    // Verify only the state-changing method.
    verify(mockPermissionDb).addPermission(USER, READ_ACCESS);
  }

That's much simpler! But remember that instead of using a mock to verify that a method was called, it would be even better to use a real or fake object to actually execute the method and check that it works properly. For example, the above test could use a fake database to check that the permission exists in the database rather than just verifying that addPermission() was called. You can learn more about this topic in the book Growing Object-Oriented Software, Guided by Tests. Note that the book uses the terms "command" and "query" instead of "state-changing" and "non-state-changing".
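For illustration, here is a minimal sketch of the recommended stubbing pattern: simulate a condition via a non-state-changing method, then assert only on the state-changing call. It assumes a hypothetical rule, not part of the original example, that grantPermission() does nothing for inactive users.

  // Minimal sketch (assumes Mockito static imports for when/verify/never/mock,
  // and a hypothetical rule that inactive users are not granted permissions).
  @Test public void doesNotAddPermissionForInactiveUser() {
    // Simulate a condition via a non-state-changing method: stub it, don't verify it.
    when(mockUserService.isUserActive(USER)).thenReturn(false);

    new UserAuthorizer(mockUserService, mockPermissionDb).grantPermission(USER, READ_ACCESS);

    // Assert on the state-changing method only.
    verify(mockPermissionDb, never()).addPermission(USER, READ_ACCESS);
  }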

Code Health: Obsessed With Primitives?


This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Marc Eaddy

Programming languages provide basic types, such as integers, strings, and maps, which are useful in many contexts. For example, a string can be used to hold everything from a person's name to a web page URL. However, code that relies too heavily on basic types instead of custom abstractions can be hard to understand and maintain.

Primitive obsession is the overuse of basic ("primitive") types to represent higher-level concepts. For example, this code uses basic types to represent shapes:

  vector<pair<int, int>> polygon = ...
  pair<pair<int, int>, pair<int, int>> bounding_box = GetBoundingBox(polygon);
  int area = (bounding_box.second.first - bounding_box.first.first) *
             (bounding_box.second.second - bounding_box.first.second);

pair is not the right level of abstraction because its generically-named first and second fields are used to represent X and Y in one case and lower-left (er, upper-left?) and upper-right (er, lower-right?) in the other. Worse, basic types don't encapsulate domain-specific code such as computing the bounding box and area.

Replacing basic types with higher-level abstractions results in clearer and better encapsulated code:

  Polygon polygon = ...
  int area = polygon.GetBoundingBox().GetArea();

Here are some other examples of primitive obsession:

• Related maps, lists, vectors, etc. that can be easily combined into a single collection by consolidating the values into a custom higher-level abstraction.

    // Before:
    map<int, string> id_to_name;
    map<int, int> id_to_age;
    // After:
    map<int, Person> id_to_person;

• A vector or map with magic indices/keys, e.g. string values at indices/keys 0, 1, and 2 hold name, address, and phone #, respectively. Instead, consolidate these values into a higher-level abstraction.

    // Before:
    person_data[kName] = "Foo";
    // After:
    person.SetName("Foo");

• A string that holds complex or structured text (e.g. a date). Instead, use a higher-level abstraction (e.g. Date) that provides self-documenting accessors (e.g. GetMonth) and guarantees correctness.

    // Before:
    string date = "01-02-03";
    // After:
    Date date(Month::Feb, Day(1), Year(2003));

• An integer or floating point number that stores a time value, e.g. seconds. Instead, use a structured timestamp or duration type.

    // Before:
    int timeout_secs = 5;
    // After:
    Duration timeout = Seconds(5);

It's possible for any type—from a lowly int to a sophisticated red-black tree—to be too primitive for the job. If you see code that uses a lot of basic types that would be clearer or better encapsulated by using a higher-level abstraction, refactor it or politely remind the author to keep it classy!
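The same idea carries over to other languages. As a hedged Java illustration of the last example (our own sketch, not from the episode; the Timeouts class is made up), java.time.Duration can replace a bare int of seconds:

  import java.time.Duration;

  // A bare "int timeoutSecs = 5;" leaves the unit implicit; Duration carries it.
  class Timeouts {
    static final Duration RPC_TIMEOUT = Duration.ofSeconds(5); // hypothetical constant

    static long rpcTimeoutMillis() {
      return RPC_TIMEOUT.toMillis(); // unit conversion is explicit and checked
    }
  }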

Code Health: IdentifierNamingPostForWorldWideWebBlog


This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Chris Lewis and Bob Nystrom

It's easy to get carried away creating long identifiers. Longer names often make things more readable. But names that are too long can decrease readability. There are many examples of variable names longer than 60 characters on GitHub and elsewhere. In 58 characters, we managed this haiku for you to consider:

  Name variables
  Using these simple guidelines
  Beautiful source code

Names should be two things: clear (know what it refers to) and precise (know what it does not refer to). Here are some guidelines to help:

• Omit words that are obvious given a variable's type declaration.

    // Bad, the type tells us what these variables are:
    String nameString;
    List<Date> holidayDateList;
    // Better:
    String name;
    List<Date> holidays;

• Omit irrelevant details.

    // Overly specific names are hard to read:
    Monster finalBattleMostDangerousBossMonster;
    Payments nonTypicalMonthlyPayments;
    // Better, if there's no other monsters or payments that need disambiguation:
    Monster boss;
    Payments payments;

• Omit words that are clear from the surrounding context.

    // Bad, repeating the context:
    class AnnualHolidaySale {
      int annualSaleRebate;
      boolean promoteHolidaySale() {...}
    }
    // Better:
    class AnnualHolidaySale {
      int rebate;
      boolean promote() {...}
    }

• Omit words that could apply to any identifier.

You know the usual suspects: data, state, amount, number, value, manager, engine, object, entity, instance, helper, util, broker, metadata, process, handle, context. Cut them out.

There are some exceptions to these rules; use your judgment. Names that are too long are still better than names that are too short. However, following these guidelines, your code will remain unambiguous and be much easier to read. Readers, including "future you," will appreciate how clear your code is!
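Applied together, the guidelines compound. This before/after sketch is our own illustration (the HolidaySale and Payment classes are hypothetical, not from the episode):

  // Before: names restate the type, the enclosing class, and add filler words.
  class HolidaySale {
    List<Payment> holidaySalePaymentObjectList;
    int saleRebateAmountValue;
  }

  // After: the type and the class name already carry that information.
  class HolidaySale {
    List<Payment> payments;
    int rebate;
  }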

Code Health: Providing Context with Commit Messages and Bug Reports


This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Chris Lewis

You are caught in a trap. Mechanisms whirl around you, but they make no sense. You desperately search the room and find the builder's original plans! The description of the work order that implemented the trap reads, "Miscellaneous fixes." Oh dear.

Reading other engineers' code can sometimes feel like an archaeology expedition, full of weird and wonderful statements that are hard to decipher. Code is always written with a purpose, but sometimes that purpose is not clear in the code itself. You can address this knowledge gap by documenting the context that explains why a change was needed. Code comments provide context, but comments alone sometimes can't provide enough.

There are two key ways to indicate context:

Commit Messages

A commit message is one of the easiest, most discoverable means of providing context. When you encounter lines of code that may be unclear, checking the commit message which introduced the code is a great way to gain more insight into what the code is meant to do.

Write the first line of the commit message so it stands alone, as tools like GitHub will display this line in commit listing pages. Stand-alone first lines allow you to skim through code history much faster, quickly building up your understanding of how a source file evolved over time. Example:

  Add Frobber to the list of available widgets.

  This allows consumers to easily discover the new Frobber widget and
  add it to their application.

Bug Reports

You can use a bug report to track the entire story of a bug/feature/refactoring, adding context such as the original problem description, the design discussions between the team, and the commits that are used to solve the problem. This lets you easily see all related commits in one place, and allows others to easily keep track of the status of a particular problem.

Most commits should reference a bug report. Standalone commits (e.g. one-time cleanups or other small unplanned changes) don't need their own bug report, though, since they often contain all their context within the description and the source changes.

Informative commit messages and bug reports go hand-in-hand, providing context from different perspectives. Keep in mind that such context can be useful even to yourself, providing an easy reminder about the work you did last week, last quarter, or even last year. Future you will thank past you!

Code Health: Eliminate YAGNI Smells


This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Marc Eaddy

The majority of software development costs are due to maintenance. One way to reduce maintenance costs is to implement something only when you actually need it, a.k.a. the "You Aren't Gonna Need It" (YAGNI) design principle. How do you spot unnecessary code? Follow your nose!

A code smell is a code pattern that usually indicates a design flaw. For example, creating a base class or interface with only one subclass may indicate a speculation that more subclasses will be needed in the future. Instead, practice incremental development and design: don't add the second subclass until it is actually needed.

The following C++ code has many YAGNI smells:

  class Mammal {
    ...
    virtual Status Sleep(bool hibernate) = 0;
  };

  class Human : public Mammal {
    ...
    virtual Status Sleep(bool hibernate) {
      age += hibernate ? kSevenMonths : kSevenHours;
      return OK;
    }
  };

Maintainers are burdened with understanding, documenting, and testing both classes when only one is really needed. Code must handle the case when hibernate is true, even though all callers pass false, as well as the case when Sleep returns an error, even though that never happens. This results in unnecessary code that never executes. Eliminating those smells simplifies the code:

  class Human {
    ...
    void Sleep() { age += kSevenHours; }
  };

Here are some other YAGNI smells:

• Code that has never been executed other than by tests (a.k.a. code that is dead on arrival)
• Classes designed to be subclassed (have virtual methods and/or protected members) that are not actually subclassed
• Public or protected methods or fields that could be private
• Parameters, variables, or flags that always have the same value (illustrated in the sketch below)

Thankfully, YAGNI smells, and code smells in general, are often easy to spot by looking for simple patterns and are easy to eliminate using simple refactorings.

Are you thinking of adding code that won't be used today? Trust me, you aren't gonna need it!
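As a small illustration of that last smell (our own hedged Java sketch, not from the episode; exportReport, Report, log, and writer are hypothetical), a flag that every caller sets the same way can simply be inlined:

  // Before: every call site passes verbose=true, so the flag and its
  // never-taken branch are pure maintenance burden.
  void exportReport(Report report, boolean verbose) {
    if (verbose) {
      log.info("Exporting " + report.id());
    }
    writer.write(report);
  }

  // After: the always-true flag is gone.
  void exportReport(Report report) {
    log.info("Exporting " + report.id());
    writer.write(report);
  }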

Code Health: To Comment or Not to Comment?


This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Dori Reuveni and Kevin Bourrillion

While reading code, often there is nothing more helpful than a well-placed comment. However, comments are not always good. Sometimes the need for a comment can be a sign that the code should be refactored.

Use a comment when it is infeasible to make your code self-explanatory. If you think you need a comment to explain what a piece of code does, first try one of the following:

• Introduce an explaining variable.

    // Before:
    // Subtract discount from price.
    finalPrice = (numItems * itemPrice) - min(5, numItems) * itemPrice * 0.1;
    // After:
    price = numItems * itemPrice;
    discount = min(5, numItems) * itemPrice * 0.1;
    finalPrice = price - discount;

• Extract a method.

    // Before:
    // Filter offensive words.
    for (String word : words) { ... }
    // After:
    filterOffensiveWords(words);

• Use a more descriptive identifier name.

    // Before:
    int width = ...; // Width in pixels.
    // After:
    int widthInPixels = ...;

• Add a check in case your code has assumptions.

    // Before:
    // Safe since height is always > 0.
    return width / height;
    // After:
    checkArgument(height > 0);
    return width / height;

There are cases where a comment can be helpful:

• Reveal your intent: explain why the code does something (as opposed to what it does).
    // Compute once because it's expensive.
• Protect a well-meaning future editor from mistakenly "fixing" your code.
    // Create a new Foo instance because Foo is not thread-safe.
• Clarification: a question that came up during code review or that readers of the code might have.
    // Note that order matters because...
• Explain your rationale for what looks like a bad software engineering practice.
    @SuppressWarnings("unchecked") // The cast is safe because...

On the other hand, avoid comments that just repeat what the code does. These are just noise:

  // Get all users.
  userService.getAllUsers();

  // Check if the name is empty.
  if (name.isEmpty()) { ... }

Evolution of GTAC and Engineering Productivity


When Google first hosted GTAC in 2006, we didn’t know what to expect. We kicked off this conference with the intention to share our innovation in test automation, learn from others in the industry and connect with academia. Over the last decade we’ve had great participation and had the privilege to host GTAC in North America, Europe and Asia -- largely thanks to the many of you who spoke, participated and connected!

In recent months, we've been taking a hard look at the discipline of Engineering Productivity as a logical next step in the evolution of test automation. In that same vein, we're going to rethink what an Engineering Productivity focused conference should look like today. As we pivot, we will be extending these changes to GTAC; because we expect changes in theme, content, and format, we are canceling the upcoming event scheduled in London this November. We'll be bringing the event back in 2018 with a fresh outlook and strategy.

While we know this may be disappointing for many of the folks who were looking forward to GTAC, we’re excited to come back with a new format which will serve this conference well in today’s environment.


Code Health: Too Many Comments on Your Code Reviews?


This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Tom O'Neill

Code reviews can slow down an individual code change, but they're also an opportunity to improve your code and learn from another intelligent, experienced engineer. How can you get the most out of them?

Aim to get most of your changes approved in the first round of review, with only minor comments. If your code reviews frequently require multiple rounds of comments, these tips can save you time. Spend your reviewers' time wisely—it's a limited resource. If they're catching issues that you could easily have caught yourself, you're lowering the overall productivity of your team.

Before you send out the code review:

• Re-evaluate your code: Don't just send the review out as soon as the tests pass. Step back and try to rethink the whole thing—can the design be cleaned up? Especially if it's late in the day, see if a better approach occurs to you the next morning. Although this step might slow down an individual code change, it will result long-term in greater average throughput.
• Consider an informal design discussion: If there's something you're not sure about, pair program, talk face-to-face, or send an early diff and ask for a "pre-review" of the overall design.
• Self-review the change: Try to look at the code as critically as possible from the standpoint of someone who doesn't know anything about it. Your code review tool can give you a radically different view of your code than the IDE. This can easily save you a round trip.
• Make the diff easy to understand: Multiple changes at once make the code harder to review. When you self-review, look for simple changes that reduce the size of the diff. For example, save significant refactoring or formatting changes for another code review.
• Don't hide important info in the submit message: Put it in the code as well. Someone reading the code later is unlikely to look at the submit message.

When you're addressing code review comments:

• Re-evaluate your code after addressing non-trivial comments: Take a step back and really look at the code with fresh eyes. Once you've made one set of changes, you can often find additional improvements that are enabled or suggested by those changes. Just as with any refactoring, it may take several steps to reach the best design.
• Understand why the reviewer made each comment: If you don't understand the reasoning behind a comment, don't just make the change—seek out the reviewer and learn something new.
• Answer the reviewer's questions in the code: Don't just reply—make the code easier to understand (e.g., improve a variable name, change a boolean to an enum) or add a comment. Someone else is going to have the same question later on.

Code Health: Reduce Nesting, Reduce Complexity


This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Elliott Karpilovsky

Deeply nested code hurts readability and is error-prone. Try spotting the bug in the two versions of this code:

Code with too much nesting:

  response = server.Call(request)

  if response.GetStatus() == RPC.OK:
    if response.GetAuthorizedUser():
      if response.GetEnc() == 'utf-8':
        if response.GetRows():
          vals = [ParseRow(r) for r in response.GetRows()]
          avg = sum(vals) / len(vals)
          return avg, vals
        else:
          raise EmptyError()
      else:
        raise AuthError('unauthorized')
    else:
      raise ValueError('wrong encoding')
  else:
    raise RpcError(response.GetStatus())

Code with less nesting:

  response = server.Call(request)

  if response.GetStatus() != RPC.OK:
    raise RpcError(response.GetStatus())

  if not response.GetAuthorizedUser():
    raise ValueError('wrong encoding')

  if response.GetEnc() != 'utf-8':
    raise AuthError('unauthorized')

  if not response.GetRows():
    raise EmptyError()

  vals = [ParseRow(r) for r in response.GetRows()]
  avg = sum(vals) / len(vals)
  return avg, vals

Answer: the "wrong encoding" and "unauthorized" errors are swapped. This bug is easier to see in the refactored version, since the checks occur right as the errors are handled.

The refactoring technique shown above is known as guard clauses. A guard clause checks a criterion and fails fast if it is not met. It decouples the computational logic from the error logic. By removing the cognitive gap between error checking and handling, it frees up mental processing power. As a result, the refactored version is much easier to read and maintain.

Here are some rules of thumb for reducing nesting in your code:

• Keep conditional blocks short. It increases readability by keeping things local.
• Consider refactoring when your loops and branches are more than 2 levels deep.
• Think about moving nested logic into separate functions. For example, if you need to loop through a list of objects that each contain a list (such as a protocol buffer with repeated fields), you can define a function to process each object instead of using a double nested loop (see the sketch below).

Reducing nesting results in more readable code, which leads to discoverable bugs, faster developer iteration, and increased stability. When you can, simplify!
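Here is a minimal Java sketch of that last rule of thumb (our own illustration, not from the episode; Order and Item are hypothetical types): the inner loop moves into a named helper, so no function nests more than one level deep.

  // Before: a double nested loop, two levels deep.
  int countItems(List<Order> orders) {
    int total = 0;
    for (Order order : orders) {
      for (Item item : order.getItems()) {
        total += item.getQuantity();
      }
    }
    return total;
  }

  // After: the per-order logic has a name, and each loop is one level deep.
  int countItems(List<Order> orders) {
    int total = 0;
    for (Order order : orders) {
      total += countItems(order);
    }
    return total;
  }

  int countItems(Order order) {
    int total = 0;
    for (Item item : order.getItems()) {
      total += item.getQuantity();
    }
    return total;
  }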

GTAC Diversity Scholarship


by Lesley Katzen on behalf of the GTAC Diversity Committee

We are committed to increasing diversity at GTAC, and we believe the best way to do that is by making sure we have a diverse set of applicants to speak and attend. As part of that commitment, we are excited to announce that we will be offering travel scholarships again this year.
Travel scholarships will be available for selected applicants from traditionally underrepresented groups in technology.

To be eligible for a grant to attend GTAC, applicants must:
  • Be 18 years of age or older.
  • Be from a traditionally underrepresented group in technology.
  • Work or study in Computer Science, Computer Engineering, Information Technology, or a technical field related to software testing.
  • Be able to attend core dates of GTAC, November 14th - 15th 2017 in London, England.
To apply:
You must fill out the following scholarship form and register for GTAC to be considered for a travel scholarship.
The deadline for submission is July 1st. Scholarship recipients will be announced on August 15th. If you are selected, we will contact you with information on how to proceed with booking travel.

What the scholarship covers:
Google will pay for round-trip standard coach class airfare to London for selected scholarship recipients, and 3 nights of accommodations in a hotel near the Google King's Cross campus. Breakfast and lunch will be provided for GTAC attendees and speakers on both days of the conference. We will also provide a £75.00 gift card for other incidentals such as airport transportation or meals. You will need to provide your own credit card to cover any hotel incidentals.

Google is dedicated to providing a harassment-free and inclusive conference experience for everyone. Our anti-harassment policy can be found at:


GTAC 2017 - Registration is open!


by Diego Cavalcanti on behalf of the GTAC 2017 Committee
The Google Test Automation Conference (GTAC) is an annual test automation conference hosted by Google. It brings together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It is a great opportunity to present, learn, and challenge modern testing technologies and strategies.

We are pleased to announce that this year, GTAC will be held in Google's London office on November 14th and 15th, 2017.

Registration is currently OPEN for attendees and speakers. See more information here.

The schedule for the upcoming months is as follows:
  • May 15, 2017 - Registration opens for speakers and attendees, including applicants for the diversity scholarship.
  • July 1, 2017 - Registration closes for speaker submissions.
  • July 15, 2017 - Registration closes for attendee submissions.
  • August 15, 2017 - Selected speakers and attendees will be notified.
  • November 13, 2017 - Rehearsal day for speakers (not open for attendees).
  • November 14-15, 2017 - GTAC 2017!
As part of our efforts to increase diversity of speakers and attendees at GTAC, we will again be offering travel scholarships for selected applicants from traditionally underrepresented groups in technology. Please find more information here.

Please do not hesitate to contact us if you have any questions. We look forward to seeing you in London!


OSS-Fuzz: Five Months Later, and Rewarding Projects


By Oliver Chang, Abhishek Arya (Security Engineers, Chrome Security), Kostya Serebryany (Software Engineer, Dynamic Tools), and Josh Armour (Security Program Manager)

Five months ago, we announced OSS-Fuzz, Google's effort to help make open source software more secure and stable. Since then, our robot army has been working hard at fuzzing, processing 10 trillion test inputs a day. Thanks to the efforts of the open source community who have integrated a total of 47 projects, we've found over 1,000 bugs (264 of which are potential security vulnerabilities).

[Chart: breakdown of the types of bugs we're finding]

Notable results

OSS-Fuzz has found numerous security vulnerabilities in several critical open source projects: 10 in FreeType2, 17 in FFmpeg, 33 in LibreOffice, 8 in SQLite 3, 10 in GnuTLS, 25 in PCRE2, 9 in gRPC, and 7 in Wireshark. We've also had at least one bug collision with another independent security researcher (CVE-2017-2801). (Some of the bugs are still view-restricted, so links may show smaller numbers.) Once a project is integrated into OSS-Fuzz, the continuous and automated nature of OSS-Fuzz means that we often catch these issues just hours after the regression is introduced into the upstream repository, so that the chances of users being affected are reduced.

Fuzzing not only finds memory safety related bugs, it can also find correctness or logic bugs. One example is a carry propagating bug in OpenSSL (CVE-2017-3732).

Finally, OSS-Fuzz has reported over 300 timeout and out-of-memory failures (~75% of which got fixed). Not every project treats these as bugs, but fixing them enables OSS-Fuzz to find more interesting bugs.

Announcing rewards for open source projects

We believe that user and internet security as a whole can benefit greatly if more open source projects include fuzzing in their development process. To this end, we'd like to encourage more projects to participate and adopt the ideal integration guidelines that we've established. Combined with fixing all the issues that are found, this is often a significant amount of work for developers who may be working on an open source project in their spare time. To support these projects, we are expanding our existing Patch Rewards program to include rewards for the integration of fuzz targets into OSS-Fuzz.

To qualify for these rewards, a project needs to have a large user base and/or be critical to global IT infrastructure. Eligible projects will receive $1,000 for initial integration, and up to $20,000 for ideal integration (the final amount is at our discretion). You have the option of donating these rewards to charity instead, and Google will double the amount.

To qualify for the ideal integration reward, projects must show that:

• Fuzz targets are checked into their upstream repository and integrated in the build system with sanitizer support (up to $5,000).
• Fuzz targets are efficient and provide good code coverage (>80%) (up to $5,000).
• Fuzz targets are part of the official upstream development and regression testing process, i.e. they are maintained, run against old known crashers and the periodically updated corpora (up to $5,000).
• The last $5,000 is a "l33t" bonus that we may reward at our discretion for projects that we feel have gone the extra mile or done something really awesome.

We've already started to contact the first round of projects that are eligible for the initial reward. If you are the maintainer or point of contact for one of these projects, you may also reach out to us in order to apply for our ideal integration rewards.

The future

We'd like to thank the existing contributors who integrated their projects and fixed countless bugs. We hope to see more projects integrated into OSS-Fuzz, and greater adoption of fuzzing as standard practice when developing software.

Where do our flaky tests come from?


By Jeff Listfield

When tests fail on code that was previously tested, this is a strong signal that something is newly wrong with the code. Before, the tests passed and the code was correct; now the tests fail and the code is not working right. The goal of a good test suite is to make this signal as clear and directed as possible.

Flaky (nondeterministic) tests, however, are different. Flaky tests are tests that exhibit both a passing and a failing result with the same code. Given this, a test failure may or may not mean that there's a new problem. And trying to recreate the failure, by rerunning the test with the same version of code, may or may not result in a passing test. We start viewing these tests as unreliable and eventually they lose their value. If the root cause is nondeterminism in the production code, ignoring the test means ignoring a production bug.

Flaky Tests at Google

Google has around 4.2 million tests that run on our continuous integration system. Of these, around 63 thousand have a flaky run over the course of a week. While this represents less than 2% of our tests, it still causes significant drag on our engineers.

If we want to fix our flaky tests (and avoid writing new ones) we need to understand them. At Google, we collect lots of data on our tests: execution times, test types, run flags, and consumed resources. I've studied how some of this data correlates with flaky tests and believe this research can lead us to better, more stable testing practices. Overwhelmingly, the larger the test (as measured by binary size, RAM use, or number of libraries built), the more likely it is to be flaky. The rest of this post will discuss some of my findings. For a previous discussion of our flaky tests, see John Micco's post from May 2016.

Test size - Large tests are more likely to be flaky

We categorize our tests into three general sizes: small, medium and large. Every test has a size, but the choice of label is subjective. The engineer chooses the size when they initially write the test, and the size is not always updated as the test changes. For some tests it doesn't reflect the nature of the test anymore. Nonetheless, it has some predictive value. Over the course of a week, 0.5% of our small tests were flaky, 1.6% of our medium tests were flaky, and 14% of our large tests were flaky [1]. There's a clear increase in flakiness from small to medium and from medium to large. But this still leaves open a lot of questions. There's only so much we can learn looking at three sizes.

The larger the test, the more likely it will be flaky

There are some objective measures of size we collect: test binary size and RAM used when running the test [2]. For these two metrics, I grouped tests into equal-sized buckets [3] and calculated the percentage of tests in each bucket that were flaky. The numbers below are the r2 values of the linear best fit [4].

Correlation between metric and likelihood of test being flaky:

  Metric         r2
  Binary size    0.82
  RAM used       0.76

The tests that I'm looking at are (for the most part) hermetic tests that provide a pass/fail signal. Binary size and RAM use correlated quite well when looking across our tests and there's not much difference between them. So it's not just that large tests are likely to be flaky, it's that the larger the tests get, the more likely they are to be flaky.

I have charted the full set of tests below for those two metrics. Flakiness increases with increases in binary size [5], but we also see increasing linear fit residuals [6] at larger sizes.
The RAM use chart below has a clearer progression and only starts showing large residuals between the first and second vertical lines. While the bucket sizes are const[...]
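For a concrete sense of the bucketing method described above, here is a toy Java sketch (our own, not the author's analysis code; TestStat and its fields are made up) that sorts tests by a size metric, splits them into equal-sized buckets, and computes the flaky percentage per bucket:

  import java.util.ArrayList;
  import java.util.Comparator;
  import java.util.List;

  // Toy sketch: percentage of flaky tests per equal-sized bucket of a size metric.
  record TestStat(long binarySizeBytes, boolean flaky) {}

  class FlakinessByBucket {
    static double[] flakyPercentPerBucket(List<TestStat> tests, int numBuckets) {
      List<TestStat> sorted = new ArrayList<>(tests);
      sorted.sort(Comparator.comparingLong(TestStat::binarySizeBytes));
      double[] percents = new double[numBuckets];
      int bucketSize = sorted.size() / numBuckets; // remainder ignored for simplicity
      for (int b = 0; b < numBuckets; b++) {
        long flakyCount = sorted.subList(b * bucketSize, (b + 1) * bucketSize)
            .stream().filter(TestStat::flaky).count();
        percents[b] = 100.0 * flakyCount / bucketSize;
      }
      return percents;
    }
  }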

Code Health: Google's Internal Code Quality Efforts


By Max Kanat-Alexander, Tech Lead for Code Health and Author of Code Simplicity

There are many aspects of good coding practices that don't fall under the normal areas of testing and tooling that most Engineering Productivity groups focus on in the software industry. For example, having readable and maintainable code is about more than just writing good tests or having the right tools—it's about having code that can be easily understood and modified in the first place. But how do you make sure that engineers follow these practices while still allowing them the independence that they need to make sound engineering decisions?

Many years ago, a group of Googlers came together to work on this problem, and they called themselves the "Code Health" group. Why "Code Health"? Well, many of the other terms used for this in the industry—engineering productivity, best practices, coding standards, code quality—have connotations that could lead somebody to think we were working on something other than what we wanted to focus on. What we cared about was the processes and practices of software engineering in full—any aspect of how software was written that could influence the readability, maintainability, stability, or simplicity of code. We liked the analogy of having "healthy" code as covering all of these areas.

This is a field that many authors, theorists, and conference speakers touch on, but not an area that usually has dedicated resources within engineering organizations. Instead, in most software companies, these efforts are pushed by a few dedicated engineers in their extra time or led by the senior tech leads. However, every software engineer is actually involved in code health in some way. After all, we all write software, and most of us care deeply about doing it the "right way." So why not start a group that helps engineers with that "right way" of doing things?

This isn't to say that we are prescriptive about engineering practices at Google. We still let engineers make the decisions that are most sensible for their projects. What the Code Health group does is work on efforts that universally improve the lives of engineers and their ability to write products with shorter iteration time, decreased development effort, greater stability, and improved performance. Everybody appreciates their code getting easier to understand, their libraries getting simpler, etc. because we all know those things let us move faster and make better products.

But how do we accomplish all of this? Well, at Google, Code Health efforts come in many forms. There is a Google-wide Code Health Group composed of 20% contributors who work to make engineering at Google better for everyone. The members of this group maintain internal documents on best practices and act as a sounding board for teams and individuals who wonder how best to improve practices in their area. Once in a while, for critical projects, members of the group get directly involved in refactoring code, improving libraries, or making changes to tools that promote code health. For example, this central group maintains Google's code review guidelines, writes internal publications about best practices, organizes tech talks on productivity improvements, and generally fosters a culture of great software engineering at Google. Some of the senior members of the Code Health group also advise engineering executives and internal leadership groups on how to improve engineering practices in their areas.
It's not always clear how to implement effective code health practices in an area—some people have more experience than others making this happen broadly in teams, and so we offer our consulting and experience to help make simple code and great dev[...]

Discomfort as a Tool for Change


by Dave Gladfelter (SETI, Google Drive)

Introduction

The SETI (Software Engineer, Tools and Infrastructure) role at Google is a strange one in that there's no obvious reason why it should exist. The SWEs (Software Engineers) on a project understand its problems best, and understanding a problem is most of the way to fixing it. How can SETIs bring unique value to a project when SWEs have more on-the-ground experience with their impediments?

The answer is scope. A SWE is rewarded for being an expert in their particular area and domain and is highly motivated to make optimizations to their carved-out space. SETIs (and Test Engineers and EngProd in general) identify and solve product-wide problems.

Product-wide problems frequently arise because local optimizations don't necessarily add up to product-wide optimizations. The reason may be the limits of attention, blind spots, or mis-aligned incentives, but a group of SWEs each optimizing for their own sub-projects will not achieve product-wide maxima.

Often SETIs and Test Engineers (TEs) know what behavior they'd like to see, such as more integration tests. We may even have management's ear and convince them to mandate such tests. However, in the absence of incentives, it's unlikely that the decisions SWEs make in response to such mandates will add up to the behavior we desire. Mandates around methods/practices are often ineffective. For example, a mandate of documentation for each public method on an interface often results in "method foo does foo."

The best way to create product-wide efficiencies is to change the way the team or process works in ways that will (initially) be uncomfortable for the engineering team, but that pay dividends that can't be achieved any other way. SETIs and TEs must work to identify the blind spots and negative interactions between engineering teams and change the environment in ways that align engineering teams' incentives. When properly incentivized, SWEs will make optimal decisions enhanced by product-wide vision rather than micro-management.

Common Product-Wide Problems

Hard-to-use APIs

One common example of local optimizations resulting in cross-team de-optimization is documentation and ease-of-use of internal APIs. The team that implements an internal API is not rewarded for making it easy to use except in the most oblique ways. Clients are compelled to use the internal APIs provided to them, so the API owner has a monopoly and will set the price of using it at "you must read all the code and debug it yourself" in the absence of incentives or (rare) heroes.

Big, slow releases

Another example is large and slow releases. Without EngProd help or external pressure, teams will gravitate to the slowest, biggest release possible. This makes sense from the position of any individual SWE: releases are painful, you have to ensure that there are no UI and API regressions, watch traffic and error rates for some time, and re-learn and use tools and processes that are complex and specific to releases. Multiple teams will naturally gravitate to having one big release so that all of these costs can be bundled into one operation for "efficiency." The result is that engineers don't get feedback on features for weeks and versioning of APIs and data stores is ignored (since all the parts of the system are bundled together into one big release). This greatly slows down developer and feature velocity and greatly increases risks of cascading failures when the release fails.
How EngProd fixes product-wide problems

SETIs can nibble around the edges of these kinds of problems by writing tools and automation. TEs can create easy-to-use test environments that facilitate isolating and debugging faults[...]

Testing on the Toilet: Keep Cause and Effect Clear


by Ben Yu

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Can you tell if this test is correct?

  208: @Test public void testIncrement_existingKey() {
  209:   assertEquals(9, tally.get("key1"));
  210: }

It's impossible to know without seeing how the tally object is set up:

  1: private final Tally tally = new Tally();
  2: @Before public void setUp() {
  3:   tally.increment("key1", 8);
  4:   tally.increment("key2", 100);
  5:   tally.increment("key1", 0);
  6:   tally.increment("key1", 1);
  7: }

  // 200 lines away

  208: @Test public void testIncrement_existingKey() {
  209:   assertEquals(9, tally.get("key1"));
  210: }

The problem is that the modification of key1's values occurs 200+ lines away from the assertion. Otherwise put, the cause is hidden far away from the effect.

Instead, write tests where the effects immediately follow the causes. It's how we speak in natural language: "If you drive over the speed limit (cause), you'll get a traffic ticket (effect)." Once we group the two chunks of code, we easily see what's going on:

  private final Tally tally = new Tally();

  @Test public void testIncrement_newKey() {
    tally.increment("key", 100);
    assertEquals(100, tally.get("key"));
  }

  @Test public void testIncrement_existingKey() {
    tally.increment("key", 8);
    tally.increment("key", 1);
    assertEquals(9, tally.get("key"));
  }

  @Test public void testIncrement_incrementByZeroDoesNothing() {
    tally.increment("key", 8);
    tally.increment("key", 0);
    assertEquals(8, tally.get("key"));
  }

This style may require a bit more code. Each test sets its own input and verifies its own expected output. The payback is in more readable code and lower maintenance costs.

Happy 10th Birthday Google Testing Blog!


by Anthony Vallone

Ten years ago today, the first Google Testing Blog article was posted (official announcement 2 days later). Over the years, Google engineers have used this blog to help advance the test engineering discipline. We have shared information about our testing technologies, strategies, and theories; discussed what code quality really means; described how our teams are organized for optimal productivity; announced new tooling; and invited readers to speak at and attend the annual Google Test Automation Conference.

Google Testing Blog banner in 2007

The blog has enjoyed excellent readership. There have been over 10 million page views of the blog since it was created, and there are currently about 100 to 200 thousand views per month.

This blog is made possible by many Google engineers who have volunteered time to author and review content on a regular basis in the interest of sharing. Thank you to all the contributors and our readers!

Please leave a comment if you have a story to share about how this blog has helped you.


Announcing OSS-Fuzz: Continuous Fuzzing for Open Source Software


By Mike Aizatsky, Kostya Serebryany (Software Engineers, Dynamic Tools); Oliver Chang, Abhishek Arya (Security Engineers, Google Chrome); and Meredith Whittaker (Open Research Lead).

We are happy to announce OSS-Fuzz, a new Beta program developed over the past years with the Core Infrastructure Initiative community. This program will provide continuous fuzzing for select core open source software.

Open source software is the backbone of the many apps, sites, services, and networked things that make up "the internet." It is important that the open source foundation be stable, secure, and reliable, as cracks and weaknesses impact all who build on it. Recent security stories confirm that errors like buffer overflow and use-after-free can have serious, widespread consequences when they occur in critical open source software. These errors are not only serious, but notoriously difficult to find via routine code audits, even for experienced developers. That's where fuzz testing comes in. By generating random inputs to a given program, fuzzing triggers and helps uncover errors quickly and thoroughly.

In recent years, several efficient general purpose fuzzing engines have been implemented (e.g. AFL and libFuzzer), and we use them to fuzz various components of the Chrome browser. These fuzzers, when combined with Sanitizers, can help find security vulnerabilities (e.g. buffer overflows, use-after-free, bad casts, integer overflows, etc), stability bugs (e.g. null dereferences, memory leaks, out-of-memory, assertion failures, etc) and sometimes even logical bugs.

OSS-Fuzz's goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution. OSS-Fuzz combines various fuzzing engines (initially, libFuzzer) with Sanitizers (initially, AddressSanitizer) and provides a massive distributed execution environment powered by ClusterFuzz.

Early successes

Our initial trials with OSS-Fuzz have had good results. An example is the FreeType library, which is used on over a billion devices to display text (and which might even be rendering the characters you are reading now). It is important for FreeType to be stable and secure in an age when fonts are loaded over the Internet. Werner Lemberg, one of the FreeType developers, was an early adopter of OSS-Fuzz. Recently the FreeType fuzzer found a new heap buffer overflow only a few hours after the source change:

  ERROR: AddressSanitizer: heap-buffer-overflow on address 0x615000000ffa
  READ of size 2 at 0x615000000ffa thread T0
  SCARINESS: 24 (2-byte-read-heap-buffer-overflow-far-from-bounds)
    #0 0x885e06 in tt_face_vary_cvt src/truetype/ttgxvar.c:1556:31

OSS-Fuzz automatically notified the maintainer, who fixed the bug; then OSS-Fuzz automatically confirmed the fix. All in one day! You can see the full list of fixed and disclosed bugs found by OSS-Fuzz so far.

Contributions and feedback are welcome

OSS-Fuzz has already found 150 bugs in several widely used open source projects (and churns ~4 trillion test cases a week). With your help, we can make fuzzing a standard part of open source development, and work with the broader community of developers and security testers to ensure that bugs in critical open source applications, libraries, and APIs are discovered and fixed. We believe that this approach to automated security testing will result in real improvements to the security and stability of open source software.
OSS-Fuzz is launching in Beta right now, and will be accepting suggestions for candidate open source projects. In order for a project to be accepted to OSS-Fuzz, it n[...]

What Test Engineers do at Google: Building Test Infrastructure


Author: Jochen Wuttke

In a recent post, we broadly talked about What Test Engineers do at Google. In this post, I talk about one aspect of the work TEs may do: building and improving test infrastructure to make engineers more productive.

Refurbishing legacy systems makes new tools necessary

A few years ago, I joined an engineering team that was working on replacing a legacy system with a new implementation. Because building the replacement would take several years, we had to keep the legacy system operational and even add features, while building the replacement so there would be no impact on our external users.

The legacy system was so complex and brittle that the engineers spent most of their time triaging and fixing bugs and flaky tests, but had little time to implement new features. The goal for the rewrite was to learn from the legacy system and to build something that was easier to maintain and extend. As the team's TE, my job was to understand what caused the high maintenance cost and how to improve on it. I found two main causes:

• Tight coupling and insufficient abstraction made unit testing very hard, and as a consequence, a lot of end-to-end tests served as functional tests of that code.
• The infrastructure used for the end-to-end tests had no good way to create and inject fakes or mocks for these services. As a result, the tests had to run the large number of servers for all these external dependencies. This led to very large and brittle tests that our existing test execution infrastructure was not able to handle reliably.

Exploring solutions

At first, I explored if I could split the large tests into smaller ones that would test specific functionality and depend on fewer external services. This proved impossible, because of the poorly structured legacy code. Making this approach work would have required refactoring the entire system and its dependencies, not just the parts my team owned.

In my second approach, I also focused on large tests and tried to mock services that were not required for the functionality under test. This also proved very difficult, because dependencies changed often and individual dependencies were hard to trace in a graph of over 200 services. Ultimately, this approach just shifted the required effort from maintaining test code to maintaining test dependencies and mocks.

My third and final approach, illustrated in the figure below, made small tests more powerful. In the typical end-to-end test we faced, the client made RPC calls to several services, which in turn made RPC calls to other services. Together the client and the transitive closure over all backend services formed a large graph (not tree!) of dependencies, which all had to be up and running for the end-to-end test.

The new model changes how we test client and service integration. Instead of running the client on inputs that will somehow trigger RPC calls, we write unit tests for the code making method calls to the RPC stub. The stub itself is mocked with a common mocking framework like Mockito in Java. For each such test, a second test verifies that the data used to drive that mock "makes sense" to the actual service. This is also done with a unit test, where a replay client uses the same data the RPC mock uses to call the RPC handler method of the service. This pattern of integration testing applies to any RPC call, so the RPC calls made by a backend server to another backend can be tested just as well as front-end client calls.
When we apply this approach consistently, we benefit from smaller tests that still test correct integration behavior, an[...]
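As a rough Java sketch of the pattern just described (our own reconstruction, not the original code; Client, UserServiceStub, UserServiceImpl, and the helper methods request(), userNamed(), and fakeBackends() are all hypothetical, with Mockito for the stub mock):

  // Test 1: unit-test the client code that calls the RPC stub, mocking the stub.
  @Test public void clientRendersUserName() {
    UserServiceStub stub = mock(UserServiceStub.class);
    when(stub.getUser(request("id-1"))).thenReturn(userNamed("Ada"));

    assertEquals("Ada", new Client(stub).renderUserName("id-1"));
  }

  // Test 2: a replay client drives the real RPC handler with the same data the
  // mock used, verifying that data still "makes sense" to the actual service.
  @Test public void serviceHandlesReplayedRequest() {
    UserServiceImpl service = new UserServiceImpl(fakeBackends());

    assertEquals(userNamed("Ada"), service.getUser(request("id-1")));
  }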

Hackable Projects - Pillar 3: Infrastructure


By: Patrik Höglund

This is the third article in our series on Hackability; also see the first and second article.

We have seen in our previous articles how Code Health and Debuggability can make a project much easier to work on. The third pillar is a solid infrastructure that gets accurate feedback to your developers as fast as possible. Speed is going to be a major theme in this article, and we'll look at a number of things you can do to make your project easier to hack on.

Build Systems Speed

Question: What's a change you'd really like to see in our development tools?

"I feel like this question gets asked almost every time, and I always give the same answer: I would like them to be faster."
        -- Ian Lance Taylor

Replace make with ninja. Use the gold linker instead of ld. Detect and delete dead code in your project (perhaps using coverage tools). Reduce the number of dependencies, and enforce dependency rules so new ones are not added lightly. Give the developers faster machines. Use distributed build, which is available with many open-source continuous integration systems (or use Google's system, Bazel!). You should do everything you can to make the build faster.

[Figure 1: "Cheetah chasing its prey" by Marlene Thyssen]

Why is that? There's a tremendous difference in hackability if it takes 5 seconds to build and test versus one minute, or even 20 minutes, to build and test. Slow feedback cycles kill hackability, for many reasons:

• Build and test times longer than a handful of seconds cause many developers' minds to wander, taking them out of the zone.
• Excessive build or release times* make tinkering and refactoring much harder. All developers have a threshold when they start hedging (e.g. "I'd like to remove this branch, but I don't know if I'll break the iOS build") which causes refactoring to not happen.

* The worst I ever heard of was an OS that took 24 hours to build!

How do you actually make fast build systems? There are some suggestions in the first paragraph above, but the best general suggestion I can make is to have a few engineers on the project who deeply understand the build systems and have the time to continuously improve them. The main axes of improvement are:

• Reduce the amount of code being compiled.
• Replace tools with faster counterparts.
• Increase processing power, maybe through parallelization or distributed systems.

Note that there is a big difference between full builds and incremental builds. Both should be as fast as possible, but incremental builds are by far the most important to optimize. The way you tackle the two is different. For instance, reducing the total number of source files will make a full build faster, but it may not make an incremental build faster. To get faster incremental builds, in general, you need to make each source file as decoupled as possible from the rest of the code base. The less a change ripples through the codebase, the less work to do, right? See "Loose Coupling and Testability" in Pillar 1 for more on this subject. The exact mechanics of dependencies and interfaces depends on programming language - one of the hardest to get right is unsurprisingly C++, where you need to be disciplined with includes and forward declarations to get any kind of incremental build performance.

Build scripts and makefiles should be held to standards as high as the code itself. Technical debt and unnecessary dependencies have a tendency to accumulate in build scripts, because no one has the time to understand and fix them[...]

Hackable Projects - Pillar 2: Debuggability


By: Patrik Höglund

This is the second article in our series on Hackability; also see the first article.

"Deep into that darkness peering, long I stood there, wondering, fearing, doubting, dreaming dreams no mortal ever dared to dream before." -- Edgar Allan Poe

Debuggability can mean being able to use a debugger, but here we're interested in a broader meaning. Debuggability means being able to easily find what's wrong with a piece of software, whether it's through logs, statistics or debugger tools. Debuggability doesn't happen by accident: you need to design it into your product. The amount of work it takes will vary depending on your product, programming language(s) and development environment. In this article, I am going to walk through a few examples of how we have aided debuggability for our developers. If you do the same analysis and implementation for your project, perhaps you can help your developers illuminate the dark corners of the codebase and learn what truly goes on there.

[Figure 1: computer log entry from the Mark II, with a moth taped to the page]

Running on Localhost

Read more on the Testing Blog: Hermetic Servers by Chaitali Narla and Diego Salas

Suppose you're developing a service with a mobile app that connects to that service. You're working on a new feature in the app that requires changes in the backend. Do you develop in production? That's a really bad idea, as you must push unfinished code to production to work on your change. Don't do that: it could break your service for your existing users. Instead, you need some kind of script that brings up your server stack on localhost. You can probably run your servers by hand, but that quickly gets tedious. In Google, we usually use fancy python scripts that invoke the server binaries with flags.

Why do we need those flags? Suppose, for instance, that you have a server A that depends on a server B and C. The default behavior when the server boots should be to connect to B and C in production. When booting on localhost, we want to connect to our local B and C though. For instance:

  b_serv --port=1234 --db=/tmp/fakedb
  c_serv --port=1235
  a_serv --b_spec=localhost:1234 --c_spec=localhost:1235

That makes it a whole lot easier to develop and debug your server. Make sure the logs and stdout/stderr end up in some well-defined directory on localhost so you don't waste time looking for them. You may want to write a basic debug client that sends HTTP requests or RPCs or whatever your server handles. It's painful to have to boot the real app on a mobile phone just to test something.

A localhost setup is also a prerequisite for making hermetic tests, where the test invokes the above script to bring up the server stack. The test can then run, say, integration tests among the servers or even client-server integration tests. Such integration tests can catch protocol drift bugs between client and server, while being super stable by not talking to external or shared services.

Debugging Mobile Apps

First, mobile is hard. The tooling is generally less mature than for desktop, although things are steadily improving. Again, unit tests are great for hackability here. It's really painful to always load your app on a phone connected to your workstation to see if a change worked. Robolectric unit tests and Espresso functional tests, for instance, run on your workstation and do not require a real phone. xcTests and Earl Grey give you the same on iOS.

Debuggers ship with Xcode and Android Studio. If your Android app ships JNI c[...]
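As a small illustration of the debug client idea above, here is a hedged Java sketch (our own, not from the article; the port and /healthz endpoint are made-up examples) that pokes a server brought up on localhost and prints the reply:

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;

  // Minimal localhost debug client: send one HTTP request and print the reply.
  public class DebugClient {
    public static void main(String[] args) throws Exception {
      HttpRequest request = HttpRequest.newBuilder()
          .uri(URI.create("http://localhost:1234/healthz")) // hypothetical endpoint
          .GET()
          .build();
      HttpResponse<String> response =
          HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
      System.out.println(response.statusCode());
      System.out.println(response.body());
    }
  }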

Testing on the Toilet: What Makes a Good End-to-End Test?


by Adam Bender

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

An end-to-end test tests your entire system from one end to the other, treating everything in between as a black box. End-to-end tests can catch bugs that manifest across your entire system. In addition to unit and integration tests, they are a critical part of a balanced testing diet, providing confidence about the health of your system in a near-production state. Unfortunately, end-to-end tests are slower, more flaky, and more expensive to maintain than unit or integration tests. Consider carefully whether an end-to-end test is warranted, and if so, how best to write one. Consider, for example, how an end-to-end test might exercise a login flow.

To be cost effective, an end-to-end test should focus on aspects of your system that cannot be reliably evaluated with smaller tests, such as resource allocation, concurrency issues, and API compatibility. More specifically:

- For each important use case, there should be one corresponding end-to-end test. This should include one test for each important class of error. The goal is to keep your total end-to-end test count low.
- Be prepared to allocate at least one week a quarter per test to keep your end-to-end tests stable in the face of issues like slow and flaky dependencies or minor UI changes.
- Focus your efforts on verifying overall system behavior instead of specific implementation details; for example, when testing login behavior, verify that the process succeeds independent of the exact messages or visual layouts, which may change frequently (see the sketch after this article).
- Make your end-to-end test easy to debug by providing an overview-level log file, documenting common test failure modes, and preserving all relevant system state information (e.g., screenshots, database snapshots, etc.).

End-to-end tests also come with some important caveats:

- System components that are owned by other teams may change unexpectedly and break your tests. This increases overall maintenance cost, but can highlight incompatible changes.
- It may be more difficult to make an end-to-end test fully hermetic; leftover test data may alter future tests and/or production systems. Where possible, keep your test data ephemeral.
- An end-to-end test often necessitates multiple test doubles (fakes or stubs) for underlying dependencies; they can, however, have a high maintenance burden as they drift from the real implementations over time. [...]
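To make the "verify overall behavior, not implementation details" advice concrete, here is a sketch of a login end-to-end test using Selenium WebDriver. The URL, element IDs, and the post-login /home path are hypothetical; the point is that the test asserts the user ends up logged in, rather than matching exact welcome text or layout, and preserves a screenshot for debugging.

# Sketch of an end-to-end login test using Selenium WebDriver.
# The URL, element IDs, and the /home redirect are hypothetical.
import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

class LoginFlowTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome()

    def tearDown(self):
        # Preserve relevant system state for debugging before shutdown.
        self.driver.save_screenshot("/tmp/login_flow_final_state.png")
        self.driver.quit()

    def test_valid_user_can_log_in(self):
        d = self.driver
        d.get("https://staging.example.com/login")
        d.find_element(By.ID, "username").send_keys("test-user")
        d.find_element(By.ID, "password").send_keys("test-password")
        d.find_element(By.ID, "submit").click()
        # Verify overall behavior (we reached a logged-in page) instead
        # of exact messages or visual layout, which change frequently.
        WebDriverWait(d, 10).until(lambda drv: "/home" in drv.current_url)

if __name__ == "__main__":
    unittest.main()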

What Test Engineers do at Google


by Matt Lowrie, Manjusha Parvathaneni, Benjamin Pick, and Jochen Wuttke

Test engineers (TEs) at Google are a dedicated group of engineers who use proven testing practices to foster excellence in our products. We orchestrate the rapid testing and releasing of products and features our users rely on. Achieving this velocity requires creative and diverse engineering skills that allow us to advocate for our users. By building testable user journeys into the process, we ensure reliable products. TEs are also the glue that brings together feature stakeholders (product managers, development teams, UX designers, release engineers, beta testers, end users, etc.) to confirm successful product launches. Essentially, every day we ask ourselves, "How can we make our software development process more efficient to deliver products that make our users happy?"

The TE role grew out of the desire to make Google's early free products, like Search, Gmail, and Docs, better than similar paid products on the market at the time. Early in Google's history, a small group of engineers believed that the company's "launch and iterate" approach to software deployment could be improved with continuous automated testing. They took it upon themselves to promote good testing practices to every team throughout the company, via some programs you may have heard about: Testing on the Toilet, the Test Certified Program, and the Google Test Automation Conference (GTAC). These efforts resulted in every project taking ownership of all aspects of testing, such as code coverage and performance testing. Testing practices quickly became commonplace throughout the company, and engineers writing tests for their own code became the standard. Today, TEs carry on this tradition of setting the standard of quality which all products should achieve.

Historically, Google has sustained two separate job titles related to product testing and test infrastructure, which has caused confusion. We often get asked what the difference is between the two. The rebranding of the Software Engineer, Tools and Infrastructure (SETI) role, which now concentrates on engineering productivity, was addressed in a previous blog post. What this means for test engineers at Google is an enhanced responsibility: being the authority on product excellence. We are expected to uphold testing standards company-wide, both programmatically and persuasively.

Test engineer is a unique role at Google. As TEs, we define and organize our own engineering projects, bridging gaps between engineering output and end-user satisfaction. To give you an idea of what TEs do, here are some examples of challenges we need to solve on any particular day:

- Automate a manual verification process for product release candidates so developers have more time to respond to potential release-blocking issues.
- Design and implement an automated way to track and surface Android battery usage to developers, so that they know immediately when a new feature will drain users' batteries.
- Quantify whether a regenerated data set used by a product, which contains a billion entities, is of better quality than the data set currently live in production.
- Write an automated test suite that validates whether content presented to a user is of an acceptable quality level based on their interests.
- Read an engineering design proposal for a new feature and provide suggestions about how and where to build in testability.
- Investigate correlated stack tr[...]

Hackable Projects - Pillar 1: Code Health


By: Patrik Höglund

Introduction

Software development is difficult. Projects often evolve over several years, under changing requirements and shifting market conditions, impacting developer tools and infrastructure. Technical debt, slow build systems, poor debuggability, and increasing numbers of dependencies can weigh down a project. The developers get weary, and cobwebs accumulate in dusty corners of the code base. Fighting these issues can be taxing and feel like a quixotic undertaking, but don’t worry — the Google Testing Blog is riding to the rescue! This is the first article of a series on “hackability” that identifies some of the issues that hinder software projects and outlines what Google SETIs usually do about them.

According to Wiktionary, hackable is defined as:

Adjective
hackable (comparative more hackable, superlative most hackable)
1. (computing) That can be hacked or broken into; insecure, vulnerable.
2. That lends itself to hacking (technical tinkering and modification); moddable.

Obviously, we’re not going to talk about making your product more vulnerable (by, say, rolling your own crypto or something equally unwise); instead, we will focus on the second definition, which essentially means “something that is easy to work on.” This has become the main focus for SETIs at Google as the role has evolved over the years.

In Practice

In a hackable project, it’s easy to try things and hard to break things. Hackability means fast feedback cycles that offer useful information to the developer. This is hackability:

- Developing is easy
- Fast build
- Good, fast tests
- Clean code
- Easy running + debugging
- One-click rollbacks

In contrast, what is not hackability?

- Broken HEAD (tip-of-tree)
- Slow presubmit (i.e., checks running before submit)
- Builds take hours
- Incremental build/link > 30s
- Flaky tests
- Can’t attach debugger
- Logs full of uninteresting information

The Three Pillars of Hackability

There are a number of tools and practices that foster hackability. When everything is in place, it feels great to work on the product. Basically no time is spent on figuring out why things are broken, and all time is spent on what matters: understanding and working with the code. I believe there are three main pillars that support hackability. If one of them is absent, hackability will suffer. They are:

Pillar 1: Code Health

“I found Rome a city of bricks, and left it a city of marble.” -- Augustus

Keeping the code in good shape is critical for hackability. It’s a lot harder to tinker with and modify something if you don’t understand what it does (or if it’s full of hidden traps, for that matter).

Tests

Unit and small integration tests are probably the best things you can do for hackability. They’re a support you can lean on while making your changes, and they contain lots of good information on what the code does. It isn’t hackability to boot a slow UI and click buttons on every iteration to verify your change worked; it is hackability to run a sub-second set of unit tests! In contrast, end-to-end (E2E) tests generally help hackability much less (and can even be a hindrance if they, or the product, are in sufficiently bad shape).

Figure 1: the Testing Pyramid.

I’ve always been interested in how you actually make unit tests happen in a team. It’s about education. Writing a product such that it has good unit tests is actually a hard problem. It requires knowledge of dependency inject[...]
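As a concrete illustration of how dependency injection enables those sub-second unit tests, here is a minimal sketch: the class receives its collaborator as a constructor argument, so a test can pass a fake instead of booting a real database or UI. All names below are hypothetical.

# Minimal dependency-injection sketch; all names are hypothetical.
import unittest

class FakeUserStore:
    # In-memory stand-in for a real database client.
    def __init__(self, active_users):
        self._active = set(active_users)

    def is_active(self, user):
        return user in self._active

class Greeter:
    def __init__(self, user_store):
        # The store is injected, so tests never need a real database.
        self._store = user_store

    def greet(self, user):
        if not self._store.is_active(user):
            return "account disabled"
        return "hello, " + user

class GreeterTest(unittest.TestCase):
    def test_greets_active_user(self):
        greeter = Greeter(FakeUserStore(active_users=["ada"]))
        self.assertEqual("hello, ada", greeter.greet("ada"))

    def test_rejects_inactive_user(self):
        greeter = Greeter(FakeUserStore(active_users=[]))
        self.assertEqual("account disabled", greeter.greet("ada"))

if __name__ == "__main__":
    unittest.main()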

The Inquiry Method for Test Planning


by Anthony Vallone
updated: July 2016

Creating a test plan is often a complex undertaking. An ideal test plan is accomplished by applying basic principles of cost-benefit analysis and risk analysis, optimally balancing these software development factors:

- Implementation cost: The time and complexity of implementing testable features and automated tests for specific scenarios will vary, and this affects short-term development cost.
- Maintenance cost: Some tests or test plans may vary from easy to difficult to maintain, and this affects long-term development cost. When manual testing is chosen, this also adds to long-term cost.
- Monetary cost: Some test approaches may require billed resources.
- Benefit: Tests are capable of preventing issues and aiding productivity by varying degrees. Also, the earlier they can catch problems in the development life-cycle, the greater the benefit.
- Risk: The probability of failure scenarios may vary from rare to likely, and their consequences may vary from minor nuisance to catastrophic.

Effectively balancing these factors in a plan depends heavily on project criticality, implementation details, resources available, and team opinions. Many projects can achieve outstanding coverage with high-benefit, low-cost unit tests, but they may need to weigh options for larger tests and complex corner cases. Mission-critical projects must minimize risk as much as possible, so they will accept higher costs and invest heavily in rigorous testing at all levels.

This guide puts the onus on the reader to find the right balance for their project. It does not provide a test plan template, because templates are often too generic or too specific and quickly become outdated. Instead, it focuses on selecting the best content when writing a test plan.

Test plan vs. strategy

Before proceeding, two common methods for defining test plans need to be clarified:

- Single test plan: Some projects have a single "test plan" that describes all implemented and planned testing for the project.
- Single test strategy and many plans: Some projects have a "test strategy" document as well as many smaller "test plan" documents. Strategies typically cover the overall test approach and goals, while plans cover specific features or project updates.

Either of these may be embedded in and integrated with project design documents. Both of these methods work well, so choose whichever makes sense for your project. Generally speaking, stable projects benefit from a single plan, whereas rapidly changing projects are best served by infrequently changed strategies and frequently added plans.

For the purpose of this guide, I will refer to both test document types simply as "test plans". If you have multiple documents, just apply the advice below to your document aggregation.

Content selection

A good approach to creating content for your test plan is to start by listing all the questions that need answers. The lists below provide a comprehensive collection of important questions that may or may not apply to your project. Go through the lists and select all that apply. By answering these questions, you will form the contents of your test plan, and you should structure your plan around the chosen content in any format your team prefers. Be sure to balance the factors mentioned above when making decisions.

Prerequisites

Do you need a test plan? If there is no project design document or a clear vision for the product, it ma[...]