Trish Khoo, Engineering Manager, Google
You're already selling ahead of your roadmap and your dev team is getting pretty big. In this talk, Trish Khoo outlines two approaches to keeping pace and quality high without hiring an army, drawing on a decade of software testing at Campaign Monitor, Google and Microsoft.
Slides, Video & Transcript below
Slides from Trish Khoo’s talk at BoS USA 2015 here
 
Video
Learn how great SaaS & software companies are run
We produce exceptional conferences & content that will help you build better products & companies.
Join our friendly list for event updates, ideas & inspiration.
Transcript
Trish Khoo, Google: Hey, everyone! I'm Trish, I work for Google and I'd like to talk to you about software testing. I'm going to talk to you about quality. First, though, I would like to talk to you about tradeoffs. Have any of you heard of the iron triangle of software development, by any chance? Yeah? There's a couple of people. Ok, for those that don't know, this is basically a triangle that software people use to justify to ourselves why we can't have nice things. It looks like this, yeah. So nice things being things like quality, for instance, in our product. And in order to have things like quality, we have to sacrifice things like time, also known as money, or we have to add more people, also known as money. And guess which of these things is usually the one that takes the hit. It's the one that isn't money. So what I do at Google is I help teams to enter a triangle-free world. I want teams to be able to get all of these things: to have the quality of software that they want, to have that at speed, and not to have to hire an army of people to do it. And it's possible. I used to think it wasn't; it definitely is. And I even know that there are teams that don't even have a QA anymore at Google, Microsoft, Pivotal Labs and many companies around the world.
The dream is real, and additionally, this is how it actually helps you scale. Because I'm at Google, and Google is all about scale. So this is the type of scale that we deal with at Google on a regular basis in the area that I work in. I work in the area of Google that works with geographical data, massive amounts of it. And these are the types of numbers that we deal with on a regular basis. We're talking big traffic, data and impact, and also big risk as well. Things like over a billion monthly active users of Google Maps services, over 2 million apps and services that are depending on the data that we're curating, and we're even getting tens of thousands of data contributions from real-life users as well. Scary stuff! And to top it off, we're also a little bit obsessed with quality, to the point where Google Maps is even accounting for shifts of the tectonic plates underneath the earth. We've got the world mapped: 200 countries and territories. North Korea is even in there somehow. Don't ask me how! Street View imagery is available across 66 countries, including parts of the Arctic and Antarctic, so from the comfort of your living room, you can walk around with the penguins. And satellite imagery is covering a significant amount of the world's surface, at quite a high level of resolution, so that you can see your house from here.
But how do we get to be able to do these amazing Google things and Google quality and scale?
So what I noticed, and what other people have also noticed, such as my friend Simon Monashy, who is the director of a venture capital firm in London – he works with startups, and he said something to me that really resonated, because what he finds when he's doing his due diligence is that small companies often hit a point where the fast technical decisions they've made along the way to get revenue quickly have hurt them once they reach a certain size, because those decisions have created scaling difficulties. And this is something that you saw Jeff talk about earlier, Tall Jeff, when he was talking about technical debt – I know he had this graph where it seemed to go up and there's this wiggly thing happening in the middle, and I think it was the hot coals of software growth. So this is the same thing that I've noticed, and so have investors, and I notice it because it usually has an impact on my inbox. Because about this time is when executives will typically make decisions about quality and quality teams, and I will get something, usually in the form of a job ad, in my inbox, because somebody has made a decision like this.
So: quality problem, quality solution – hire a QA team
It makes sense, right? When I say QA team, I am talking about hiring a team of manual testers, usually, to come in and help assure that the product is the quality that you want. To test it out – and this is usually people using the system like you would imagine a real user would, so they can give you feedback on the overall quality of your product so that you can fix it. Don't do this! So you may have some people already and be thinking of outsourcing this, or hiring a brand new team to wage war against the bad quality of your software. The problem with this is that now you've got a team of people who are accountable for the quality of your software, and they have no power to affect that quality directly. People are reporting issues but they can't fix them. There's an old QA joke, and you're going to find this hilarious, where it goes: how many QA people does it take to fix a light bulb? And the answer is none, because they will just tell you that the room is dark. I thought that it was funny. So this is the problem with them.
The main problem, and the reason it doesn't scale, is that manual testing is very slow, especially once you've gone beyond a simple product. Manual testing is really fast on a simple product, and it is the best way to be testing a simple product. But when you get to a complex product, manual testing becomes really, really slow, because there are so many paths you can now take through your system. Somebody manually exercising all these paths takes a really, really long time, and people have to do this for your entire complex system every time you want to launch something new, in case somebody has broken something when they implemented your new, shiny feature. Because usually they have. So then the next thing turns up in my inbox, the next type of email I get, because another decision has been made: hiring an automation team to fix the testing problem. And this is when we have a team of people who can write automated tests to replace the manual tests that were happening before. This is usually approached along the lines of: I have a whole lot of manual tests, let's make them faster by writing a program that will execute the tests so a human being doesn't have to. It's a lot faster – in theory this works really well, because automated testing is faster than manual testing. The problem is that usually this is approached in a way that isn't terribly scalable. So: automated testing good, automation team bad.
This is typically the mail I get in my inbox, and the reason it makes me cringe is because – well, the blimp I'm all in favour of, I love the blimp, but it's the other things that I'm not a fan of. You can see that low coding skill is OK for these test automation engineer jobs. These are jobs which are seen as not-so-skilled in terms of software development, generally seen as scripting work. Even if you wanted to hire a really good software developer to do this, it's very unlikely that they would, because they'd be making less money doing this, it's a much less glamorous position, and they're going to work with a lot of people whose coding skills are not that great.
So the other thing I see people doing to address this coding problem, if they have a hard time finding good people to do this, is to say: let's teach our manual testers how to code, because they're doing the same job, and apparently it's not that hard. How hard can coding be? It's only 10 lines or whatever. And then they're writing these tests on behalf of the developers, so you end up with this big, unmaintainable piece of software that's testing your software, which is not only really difficult to maintain because the code's a mess, but is usually unreliable too. And this is also because this is a separate team that is taking a really full-stack approach to test automation, which is unreliable, and it's going straight to the UI – it's trying to do the things that users would do. But we don't really have technology that can act like a human being yet. That artificial intelligence is beyond us at this point; hopefully we'll get there one day, and I'm sure the first thing we'll do is apply it to software testing.
So one of my friends recently described this as hiring a team of people to encourage bad practices within your development team. Because what you're doing is getting a group of people to, say, verify that what another person has made works correctly, which doesn't really sound right if you think about it for a minute. So we went through this ourselves; nobody is really excluded from this kind of change, I think. About 7 years ago, Google teams were actually doing a lot of testing for releases. It was a lot of manual testing and it took up the majority of our release time. We realised at that point that we wanted to get faster, and we needed to get faster in order to compete in our market. Once-or-twice-a-month releases were not going to cut it anymore. So after some amount of time, a few years, we've gotten to the point today where teams are releasing even multiple times a day, and there was no decrease in quality as a result.
So what's the secret? The whole team has to commit to quality.
We don't have a separate group of people for whom quality is their job but who can't do anything about it. We have everybody in the team committing to quality as part of their jobs. This means developers, product managers, and you, Miss or Mister CEO. It means everybody has to commit to quality as something that we care about as a team, and we're going to do our best to actually make sure that it happens.
Now let's talk a little bit about this testing phase, because I'm going to tell you why software launches can be really painful if testing is not done in a really great way. Have any of you heard of 'the testing phase', as a leader? It's usually around the time when you're asking, why isn't my project done yet, and people will say it's still in the testing phase, or they will say something like, we're still fixing bugs. And another symptom of this is users asking: when is it going to be done? And usually the answer is something along the lines of, we have no idea, or something more like, yes, definitely by Friday – but you know it really means we have no idea. So this is a symptom of having this find-and-fix kind of thing happening around this time. You can actually try to predict when something will launch if you have a testing phase; it just requires a lot of human hours to manage it, and you will end up hiring somebody almost full time to manage this process, with graphs and fix rates and the amount of bugs we have now, and you will see the graph go down until you have zero bugs, but you know tomorrow it will bounce back. And that's what we call zero bug bounce, and you'll see it bounce. We've got zero bugs! Then we can launch. And that's not even the real amount of bugs, because every day you're doing a triage meeting, and the triage meeting is designed so that you can look at the amount of bugs you have and negotiate with everybody to work out: what is the least crappy product you're willing to live with in production? We just need to get this thing out the door! And as well as that, it's a huge waste of expensive engineering hours, and I will show you why.
Because this is what's happening in the development teams. If we have a look at a typical development team, we've got something like a developer, a tester, and some kind of product owner – often a business analyst of some kind, a project manager, whatever you want to call this person: somebody who cares about product requirements. And what will happen is you'll have the developer giving a new feature to the tester. The tester will have to drop whatever the heck they were doing and have a look at this feature, and then, inevitably, they will find something wrong with it and give it back to the developer. The developer will say, don't bother me, I'm busy! And then when they eventually get around to looking at the bug, they will have to figure out what it means, reproduce it, debug it, and then give it back to the tester and say, did I fix it or not? Because I don't know. And then the tester will usually say something along the lines of: yes, or no, and you introduced 2 more while you were at it. Anyway – we'll just log them and move on. And then that goes back to the product owner, and they will say either: yes, that was exactly what I wanted – good feature, thank you everyone! Or: no, actually, now that I look at it, it's not exactly what I wanted, so can you just go back and make a couple of little tweaks? We've still got time in the schedule, that's fine. And it goes round and round, and you know these loops and feedback loops that you see here? These can be hours, days, weeks. Weeks of launch time. This is weeks between your idea and you getting that idea into production. And a lot of it is just waiting around – waiting for people to know things: that something is broken and they need to fix it, that there is a problem, that something needs to change and a decision needs to be made.
So I'm going to step back a little bit and ask: what is testing, actually, anyway? Because when I talk about being a tester or being in a test-related role, people usually assume that I'm in a role that involves something along the lines of manual testing – as in, I'm doing what he's doing, and I find bugs, and that's kind of it. Professional, sophisticated software testing is an approach that lets people get the information that they need at the time they need it, in order to make timely decisions. And it can be a change to your process, or the introduction of tools that will allow people to do this more easily – it can be a wide range of activities.
Now the easiest and most well-known aspect of testing is called checking. And checking is exactly what it sounds like; it's like having a to-do list. Can I log into the app? Can I buy a hamster on Amazon? Does my company logo still look like underpants? Things like that. Everybody can do checking, ok? It's super easy, and everybody should do checking. This is what I was told in grade 3 of school: check over your own work, make sure it's fine before you hand it in, that kind of thing. What a change that makes! Now we don't have a tester that's making sure everything works along the way. We don't have a separate person checking everything for everybody. The developer is doing their own testing and is writing automated tests as they go. They can instantly know whether or not something is broken once they've implemented it, and they can immediately fix it. It's preventing problems before they happen.
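To make that concrete, a developer-written check is often nothing more than a small automated test that runs on every change. This is a minimal sketch – the `can_log_in` function and its rules are invented purely for illustration, not drawn from any real product:

```python
# Hypothetical feature code: toy login rules, invented for illustration.
def can_log_in(username: str, password: str) -> bool:
    """A user can log in with a non-empty username and an 8+ character password."""
    return bool(username) and len(password) >= 8

# Checks the developer writes alongside the feature and runs on every change,
# so breakage is known instantly, before anyone else ever sees the code.
def test_valid_credentials_accepted():
    assert can_log_in("trish", "s3cretpass") is True

def test_short_password_rejected():
    assert can_log_in("trish", "short") is False

def test_empty_username_rejected():
    assert can_log_in("", "s3cretpass") is False
```

A test runner such as pytest picks these functions up automatically; the point is that the checklist ("can I log into the app?") lives next to the code and runs itself.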
And then the product owner, preferably, is checking over their requirements. The developer is talking to the product owner, making sure that they understand completely what it is that they were supposed to be making, and that's going to save them a lot of back-and-forth in making sure that it's all right. And this saves all of that wasted launch time, and it gets done quicker, without a terrible bug-finding testing phase at the end. You'll notice I took the tester out of there, and it doesn't look great for my own future career prospects, as somebody with testing in the title.
I want to emphasise: checking is really easy; testing is really hard. So the checking part, everybody should do. Having a testing expert in your team is going to help you make that process better overall. And this actually helps you get the most value out of your testing expert, because they're not wasting their time doing these basic checking activities anymore. They are actually looking at your overall process, helping your team work more efficiently, more productively. And they can even make development tools, if you hire an engineer as a testing expert – because there are engineering testing experts out there. These people can help you make internal tools that are going to help your development team become more productive, or if not make these tools, at least research them, help introduce them and configure them properly, to help the team become more productive.
The hard part – change
Ok, here is the hard part. This is the hardest part of my job, because it requires a massive change in culture through the development team and throughout the whole company; and it is the reason why I wanted to come to this conference, not just go to a development-type conference and tell them all to start testing – because this has to come from the top down if it's ever going to change. So these are the types of excuses I typically hear whenever I say to a team: hey, we're going to do things differently now, we're going to have developers testing and PMs checking their own work, with automated tests. I get a lot of excuses in a lot of forms, but if I sit down with them for half an hour and ask what they really mean, it boils down to these three answers. And these three answers are generally solvable at a management level.
First of all: we don't have time. That's a big one. Because I don't usually meet engineers who want to write bad code. But the problem is that when you give an engineer a deadline and say, I just want this thing done, that definition of done is fairly flexible. And I've worked in companies where the developer who has produced the worst, buggiest code you've ever seen is the one getting rewarded for being the best developer on the team, because they are the one delivering fast, apparently. They are the one getting it done quicker, because they've changed the definition of done to: you've just done the minimum you need to do to get it to somebody so they can see what it is, right? So the thing is, if you explain the concept of launch time to them, and the time you're saving in launches – doing that due diligence earlier may take more time to get to the new definition of done, but it's going to get everybody to a more predictable level of done faster overall. And that's the thing that needs to be made transparent to the teams in order for them to buy into this concept of not having time. It also needs to be made apparent at the management level – the leadership and the scheduling people – because initially it's going to seem like we're taking a longer time to get to done.
But what they'll see, if everything goes correctly, is that even though it's taking more time to get to that level of done, we're reducing this big testing phase at the end, so in terms of launch time, we're getting everything done faster. We don't know how – this is a big problem, actually. In the software industry and in computer science degrees, we don't often get taught how to do good testing. I remember from my computer science degree I got taught about unit tests, and that's it. Most people learn testing practices on the job, or if they take a personal interest in it, they might teach themselves. It's not something that sounds exciting enough that you'd go and learn it on your own time, so it just doesn't generally happen that way. But sure, it is going to require a level of skilling up in a team of developers who aren't used to working this way. It's a worthy investment, though, and it's worth hiring new developers who already know how to work this way to train the other developers, because it's difficult to just tell someone, learn this skill, when they don't actually know how.
And of course the last one is: that's not the way we work at this company. Of course, if people don't feel like they're getting rewarded for working in this way, then there's no incentive for them to do it, because it feels at first like it's going slower, and previously it may be that they weren't rewarded for getting things done right, but for getting them done fast. So it needs to become the new norm that people are rewarded for doing things the right way and getting quality things out – that this is the new standard of expectations across the company.
Hiring
So I touched on hiring a little bit. Obviously, if you want developers to start testing, we need to start hiring developers who can test. It's pretty simple to add to the interviews: in the coding interview, we're already asking them to code something, so ask them to test that code too. That's a pretty simple way to at least get started. There are other ways as well, such as looking for people who know TDD, things like that. But what I really want to talk about here is the types of testing experts you can hire, because at Google, we hire two types of testing experts for our teams. So we don't hire full-time manual testers; we hire engineers.
You will notice that the first role there doesn't have the word test or quality in the job title at all, and there's a good reason for that. It's because, like I said before, it's very hard to hire very good engineers who are willing to have the words test and quality in their job title, because across the industry there's a bit of a stigma around that. So what we have is Software Engineer, Tools and Infrastructure, which is a good description of what they do anyway. We hire these engineers to the same standard as regular Google software engineers, which, if anybody knows Google's hiring process, is a pretty damn high bar! And the job of these people is to create new tools and infrastructure for teams to use, but also to sit with development teams as a peer, saying: hey, have you tried writing a test for that? Look how easy that was! And now we can tell if this thing is broken before it gets pushed into some test environment and breaks everybody's day. So this is a critical part of our engineering productivity mission.
Test Engineer is a role that is weighted more heavily towards testing experience. They look at processes and at the entire way that the team is functioning, looking for ways to make everybody more efficient. They're also engineers, with the ability to understand complex system architecture and make recommendations for new tools and approaches that we can use to make things better. Test Engineers are usually hired at not quite as high a bar as the Software Engineers in Tools and Infrastructure in terms of technical ability, but that's because it's counterbalanced by a high expectation of testing ability. And the reason for this is that it's very hard to find somebody who has this level of testing experience and that level of software development experience, because people only have 8 hours of productivity in them a day. So Test Engineers do tend to be – somebody once said to me that they would be a lead developer in just about any other company if it wasn't Google. But again, this is Google's hiring standards, so the emphasis is on the testing experience.
It's important, if you find somebody who can do this job, that they feel they are first class within the team. I say this because these people are very hard to find. I know, because I try to hire them all the time, and if you find one, hold onto them for dear life, because otherwise somebody like me will steal them away from you. As well as that, this person's job is usually to convince people in the team to do what they don't want to do, and in order for them to be listened to, they have to be first-class citizens within the team. Like I said before, people with test or quality in their title are often seen at the same level as manual testers in some regard. It's often not seen as a skilled role, so in order to make them feel like a first-class citizen within the team, it has to be recognised in terms of salary and job title, to make sure they feel they have the authority to change things within the team. They need management to back them up, otherwise anything they say will have no effect and it will all be futile.
So one important thing that we also learned when we changed Google was this: we learned that learning is good.
We don't want to blame people; we want to empower people to learn from mistakes, and let people feel like it's ok to make mistakes as long as we're learning from them. And getting feedback in general was a very good lesson that we learned here.
The way we do this at Google is we have a postmortem culture. A postmortem is a document written by whoever has the most context on a failure incident. It is a factual timeline of events, and it may name names, but in a factual way, not offering any kind of opinions or blame or anything like that. And it will have suggestions after that, with things that we can do to never let this happen again, right?
And then we'll have these practical action items coming at the end of that, so that people will actually take some action as a result. So we're doing something after we learn; we're not just musing on the incident and then not doing anything about it. And the great thing about this is that people learn that it is ok to make mistakes and take risks. The other thing is that if people feel like they're going to get punished or blamed whenever they make a mistake, they're not going to tell you about mistakes that they or other people have made, because they like their co-workers – they're not going to rat them out. So if you have a culture of people who feel like they can actually tell you about mistakes, then you're going to learn about potential problems earlier rather than later.
Launch and iterate. This is what this entire thing allows you to do. Now we're getting to the bit where, having gone through all this work, you get the payoff from it. Faster releases mean faster feedback. If we can release once every day, it means we can get feedback from our customers every single day on what we've released to production. It means we can make changes every day, and we can pivot on those changes, and we know things sooner rather than later, so we don't waste as much time. Releases are cheap, and we can make changes as fast as we can think of them. That's probably a dangerous thing, but it's also a very powerful thing.
One thing that we like to do at Google is experiment. It's a way for us to make changes in production without necessarily taking on the huge risk associated with that. If we want to try out something crazy, we can try it out on a small percentage of our users; we don't have to try it on everybody at once. Let's say we want to figure out if the red button works better than the green button. Do people even notice a red button on a red background? Who knows? Maybe it's a cool new idea! Let's try that out, but we'll only do it for 10% of the users, and the other 90% can have their normal experience, just in case it's a silly idea that the designer had in his sleep.
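A percentage rollout like that 10% red-button experiment can be sketched with a stable hash bucket per user. This is not Google's actual experiment framework – the function names and the experiment are invented for illustration – but it shows the core idea: each user lands deterministically in the same bucket every time, so their experience is consistent across visits:

```python
import hashlib

def in_experiment(user_id: str, experiment: str, percent: float) -> bool:
    """Deterministically assign a user to an experiment at a given rollout percentage.

    Hashing experiment name + user id gives each user a stable bucket in 0..99;
    users whose bucket falls below `percent` see the experimental behaviour.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket 0..99 for this user
    return bucket < percent

def button_color(user_id: str) -> str:
    # 10% of users see the crazy red button; the other 90% keep the normal one.
    return "red" if in_experiment(user_id, "red-button", 10) else "green"
```

Because the bucket is derived from a hash rather than stored state, no database of assignments is needed, and the rollout percentage can be dialled up (or back to zero) without users flip-flopping between variants.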
So we can do that, and we also have things like dogfood. Dogfooding, I think, is a brilliant way of trying out new ideas within your company. Dogfood comes from the phrase 'eating your own dog food'. It's something that sounds unpleasant, but it's really about trying out your company's products internally before you release them to the world. This is a great way for your company to find edge cases and usability issues – not functional issues, which should be found before this point. But it's also a way for your company's employees to get a little bit closer to the product you're making. Obviously it doesn't work for all products, but if you have a chance for your employees to actually be users of your own product, it gives them a sense of owning that quality as well. They have to use it, so they're very much invested in making it a good product.
And let's recap here: what are you getting out of this? Of course, you're getting better-quality software. You can change product direction quickly, but ultimately what this gives you is the freedom to innovate, and innovation is the fun part of this. So that is the best and most compelling reason I could think of as to why you would want to do this. I intentionally left a bit of time for questions, so please, ask me some questions!
Q&A
Audience Question: I win! Hi! Thank you! First of all, Mark, will you please make this video the first one that you post? I have a dev team that needs to see this and a QA team that doesn't want to see this. My question is on regression testing. Now, I can understand how your developers want to actually test what they've been working on, but regression testing is somewhat important too. Are you suggesting that they do that as well – so they kick off their tests to make sure of what they did, and they kick off automated regression tests, or whatever they do? Do you think that's the best way?
Trish Khoo, Google: Yeah, absolutely! In fact, regression testing is the main thing that I want them to be doing. It very much feeds into the checking activity that I described; it's the poster child of checking, basically. It is just a checklist of what still works. Actually, a developer once said to me that he sees automated regression tests as protection for his code against other developers. I know he was kind of joking there, but it's kind of true, because sometimes it's not just about knowing if you've broken the things that you're expecting with the tests that you're running – you're also running other developers' tests. So that's actually protecting you against things that you didn't know about. That's where a lot of regressions come in: often developers are working with code that they haven't worked with before, and they don't know everything about that code and how it's supposed to work. Maybe the whole team has changed and nobody knows how that code works anymore. And your automated regression testing becomes a way for you to automatically check everybody's years' worth of knowledge across that code, and it acts as documentation as well, telling you what that code was ever supposed to do in the first place.
Audience Question: So, I'm a bit of a big fan of the Joel test, and Tall Jeff reminded me of it. We make a SaaS-based application, cloud-based, and we deploy to production 20 times a week. Our developers do their own testing, and we think it's an important part of closing the feedback loop, but there were three things, when he reminded me of the Joel test – specs, testers and a schedule – three things that we don't have, and I felt bad about it. I was talking to Eli here about it; maybe we didn't do them because of the feedback loop and having a separate group and that sort of thing. And those things seem to be three heads of the same beast – specs, testers and schedules – because if you have a spec you have to have a schedule, if you have a schedule you have to have a spec, and if you have a spec you have to have testers, and if you have testers you have to have a spec. They all seem to be related. My question is – when I saw your stuff, I actually went back to where we were, with the faster cycle time and being able to release faster and the developers saying the software is better. And my question to you, and to Tall Jeff if you want to duke it out with him on this, is: is there a common ground here, or is it an either/or?
Trish Khoo, Google: When you say common ground, you mean between having the spec and testers on the team?
Audience Question: Well, you don't have a specific – you took the testers out of your diagram and have them as part of the developers doing it; part of the Joel test is that you have separate testers. So is there – we've kind of chosen to go your route, but I was questioning that at lunch, I was –
Trish Khoo, Google: I think it's interesting that that's on the Joel test, because I agree with most of what is on there. I would reframe the question of "do you have testers" in terms of "how do you do testing". And I would be looking at that response from the team: whether all of the testing is done by the developers, and if it is, how they're doing it. Often I'll get the response back, "oh, we do unit testing", but that's not really enough. You need to look at how they're doing their integration testing and regression testing – really looking at their testing practices in general. Whether or not they're using an outsourced QA team, what kind of testing problems do they have? What kind of bugs do they have? How are they fixing bugs? Are they fixing them as they go, or are they having to allocate a huge chunk of sprint time just for fixing bugs? Just figuring out how this team works in terms of testing activities and bug resolution.
Audience Question: Ok, I see a couple of problems with developers being testers. One is that they might know their own scope, but if you look at the bigger picture, you might have 50 developers and the software needs to work as a single piece. So how do you address that problem of testing the bigger picture, like the integration testing? I'd guess you need a QA team to take care of that. That's one problem, and the second problem is, if a developer understands the functional requirements the wrong way, they're also going to test them the wrong way, right? So how are you going to address this?
Trish Khoo, Google: Yeah, there's a couple of points that I want to make here. What you mentioned about developers misunderstanding their requirements – that is a big problem on a lot of teams. Finding these requirement gaps is a problem I see on many development teams, and this is why there needs to be a lot of good planning for any complex project. You're analysing what the requirements need to be, even if it's just a small set of requirements because you're doing Agile in a per-feature way. It really needs to be thought out in terms of user scenarios – even just writing out user scenarios is a good way for product owners to verify that the way they've described the behaviour is correct. Product owners should be across this, and a testing expert, somebody like a test engineer, can act as an advisor here too, but really it's something that developers and PMs need to recognise as a critical part of the software process: figuring that stuff out at the start. You also mentioned integration testing. One thing I notice a lot of teams do is that they have unit testing and they have end-to-end testing, and they don't have anything in between. Unit testing is great – I love it, I have a lot of unit tests, it's fantastic! But end-to-end testing is a massive pain, and having smaller tests in between that verify how this system works with that system is really crucial. Is this API working as expected? Let's take away the UI and see how the system works underneath it for a second. Debugging is awful when it's end to end: you can find something that's happening intermittently and have no idea where, because you're testing everything at once and there are environmental issues. Save yourselves the headache and break it down into smaller concerns is my recommendation.
And when it comes to the end-to-end test, it's a good test, but if everything has been well tested in small segments up until that point, end-to-end testing should be as simple as the team getting together, trying it out end to end and making sure that it's working well.
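The "in-between" level of testing Trish describes – below end to end, above unit – can be sketched as exercising two real components together with no UI involved. This is an illustrative example, not from the talk: a hypothetical storage layer and its access functions tested against an in-memory SQLite database.

```python
import sqlite3

# Hypothetical components: a storage layer and simple data-access functions.
def create_store():
    # In-memory database keeps the test fast and hermetic.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

def add_user(conn, name):
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def get_user(conn, user_id):
    row = conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return row[0] if row else None

def test_add_then_get_round_trip():
    # Integration test: schema, SQL and access logic are exercised together,
    # with no UI and no full end-to-end environment.
    conn = create_store()
    uid = add_user(conn, "trish")
    assert get_user(conn, uid) == "trish"

test_add_then_get_round_trip()
```

When a test like this fails, the fault is confined to the storage layer, which is exactly the debugging advantage over an intermittent end-to-end failure.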
Audience Question: What percent code coverage do you strive for in your team for your automated tests?
Trish Khoo, Google: That's a contentious one. I think it very much depends on the type of application you're building and the way it lends itself to unit testing, because usually code coverage is measured in terms of unit testing. With integration testing, code coverage becomes a bit of a hairy thing, because you're usually trying to get more coverage at a behavioural level, and code-wide coverage becomes a hazier signal. In general, I don't even want to put a number on it, because I'll be in trouble. But one thing I would say is that for a team that's not doing a lot of unit testing, at the very least setting an incremental code coverage goal is good: if you can measure the lines of new code that have been added and how much of that is covered by tests, that's a nice way to start getting interim achievements for a team that's new to unit testing, rather than trying to hit some absolute goal, whatever that may be. But it very strongly depends.
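The incremental-coverage idea can be stated as a tiny formula: count only lines that are new or changed, so a legacy codebase can set an achievable target for new code without first paying down the whole backlog. This sketch is illustrative; the function name and inputs are invented.

```python
# Illustrative sketch of "incremental" code coverage: the percentage of
# changed/new lines that are exercised by tests, ignoring untouched legacy code.

def incremental_coverage(changed_lines, covered_lines):
    """Percentage of changed lines that are covered by tests."""
    changed = set(changed_lines)
    if not changed:
        return 100.0  # nothing changed, nothing to cover
    return 100.0 * len(changed & set(covered_lines)) / len(changed)

# e.g. 10 new lines in a change, 8 of them exercised by tests
print(incremental_coverage(range(1, 11), [1, 2, 3, 4, 5, 6, 7, 8]))  # 80.0
```

In practice, tools that compare a coverage report against a diff (coverage.py plus diff-cover in the Python world, for example) automate exactly this calculation per commit or per pull request.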
Audience Question: The question that I have for you is: what are the differences in your testing process between releasing to the cloud, where you can release multiple times per day, versus a process where you release apps and have to deliberately space out releases?
Trish Khoo, Google: Yeah, that's really interesting, and something I've found particularly on our web app teams versus our mobile teams. A lot of mobile products have a limitation even on the number of times you can release, based on how long it takes the Apple App Store or the Play Store to accept your submission, so you do have a hard line there. The important thing is that even if you're not releasing to production as frequently, you should have builds that are releasable on a frequent basis. That way your team is at least checking them out and you've got automated tests running on them, and it's important to keep that quality bar even if you don't have the threat hanging over your head of "oh, it's going to production today".
Audience Question: Hey, Trish! I'm working on moving to a combined engineering approach at the moment, and we currently have software engineers and testers embedded in the same teams, doing mostly manual testing on the product teams. One of the worries is that if we don't take the testers out, there will be a kind of recidivist approach – if we leave them in there, nothing will change. Coupled with that, we're wondering what to do about things like test infrastructure and best practices, and especially things like performance or security testing. So we're thinking about pulling the best people out and having a separate QA team to make that change. Would you not recommend that? Is that not the approach that's worked?
Trish Khoo, Google: Just to clarify, you're thinking of having a separate QA team to start looking at performance testing –
Audience Question: To cover the stuff we just can't do in the team. It's dangerous if we leave the testers in the team, because then nothing will change, and we also have this second set of things to cover: who is gonna manage the test infrastructure, who's gonna bring in best practice, who's gonna do things like performance testing, which is not needed by all teams, just a few, and needs to be done by specialists.
Trish Khoo, Google: Right. This is where I think testing experts such as software engineers in test play a big part on the infrastructure side. It's important to get people who know about that in particular. We can't, of course, make the assumption that everybody who is in testing knows absolutely everything about testing, or about infrastructure for that matter. So if somebody has the skills for that, then yeah, definitely – it's a good idea to use that person to create infrastructure for you.
Audience Question: Particularly where they sit – you recommend strongly against pulling all those people into a QA team because you're obviously keeping them embedded with the teams. We don't really have that option, and the danger is that nothing will change if we keep them in the teams.
Trish Khoo, Google: It sounds like you're changing the definition of the work that they're doing. What you want is to no longer have these people doing manual testing on behalf of the developers, but to change the nature of their work to focus more on providing tools and infrastructure. I think that's a great goal, and I think the situation you've got is a tricky one, because first of all you're changing the expectations of one group of people and saying, "your job is now this – not the one you've been doing every day, but something different". But also, the developers are going to keep treating them like "you're still testing all my work". Yeah, that's tricky, and I don't think you can get away with it without making a massive deal of it and saying: everything is changing now, this is going to be different, they're not manually testing your work anymore. From experience, the worst thing to do is frame it as "you're not doing this, they're not doing this, so you have to do this". It really has to come from an approach of: this is going to help – this person is working on a bigger, better infrastructure project that's going to make the team more effective, and because they're doing that they no longer have time for the manual testing. So the development team is going to take that on, and somebody is going to help the development team learn how to do it better, so it's not just a matter of dropping people into this world all of a sudden.
Audience Question: Hello! Thank you! We totally drank the "devs test all the things" Kool-Aid, but we found there are some things that are difficult to test automatically that we do by hand – like, on IE6, does the CSS class get applied correctly? Not IE6, for anybody listening! Right now we're very small and trying to figure out how we scale that. Do we go offshore? Do we find somebody who – right now we've got iPads and iPhones and Androids spread across everybody's desks, and that doesn't seem to scale. So I'm curious: how has Google solved that problem?
Trish Khoo, Google: It sounds like youâre talking about visual testing. Yeah, that is a tricky one.
Audience Question: Something you can't automate?
Trish Khoo, Google: Actually, you can. There are some very interesting tools out there, and a lot of them are publicly available. Look into things like Appium, for instance – Sauce Labs provides a lot in the web app space, and Appium also covers mobile. And there are some really great tools for screen diffing: comparing a golden screen against the current screen. Even if that causes problems with slight differences, sometimes you can tweak these tools so they match within a certain percentage – say, 97% correct. There are a lot of really great tools coming out, and I know mobile is still one of the most difficult things to test across platforms, but the technology is slowly keeping up.
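The threshold-based screen diffing Trish mentions can be illustrated with a toy example. Real tools compare rendered screenshots pixel by pixel; here the "images" are stand-in flat pixel lists, and all names are hypothetical.

```python
# Toy sketch of threshold-based visual diffing: pass if the golden screenshot
# and the current screenshot agree on at least `threshold` percent of pixels.

def match_percent(golden, current):
    """Percentage of pixels that are identical in the two 'images'."""
    if len(golden) != len(current):
        return 0.0  # different dimensions never match
    same = sum(1 for g, c in zip(golden, current) if g == c)
    return 100.0 * same / len(golden)

def screens_match(golden, current, threshold=97.0):
    return match_percent(golden, current) >= threshold

golden = [0] * 100
current = [0] * 98 + [255, 255]   # two of 100 pixels differ: a 98% match
print(screens_match(golden, current))  # True with a 97% threshold
```

Tolerating a small percentage of differing pixels is what keeps such tests from failing on anti-aliasing and minor rendering variation across browsers and devices.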
Audience Question: If you have developers who aren't used to testing, do you have any resources you could point them to, to learn how to test better?
Trish Khoo, Google: Oh, it entirely depends on the platform and the language and tools that they're using, because every single one is going to take a different approach to general testing needs. I think you really just have to find the one that's appropriate. If you do any kind of search for TDD, that's not a bad start, but –
Audience Question: I meant less tools and more methodology, because I find that everyone can learn how to write a unit test, but I've worked with a lot of developers who wrote really bad unit tests because they weren't thinking of the edge cases, so I've started giving them a checklist: you always have to check 1 and 0. There are just different mentalities that a tester has and developers don't, for testing those edge cases to make sure something is well tested. I was just wondering if there are resources out there that say: this is what you need to learn from a mentality perspective, not a tool perspective.
Trish Khoo, Google: I actually have the perfect book for you. It's called Explore It! by Elisabeth Hendrickson, who is QA director at Pivotal Labs. It's specifically designed to teach developers exploratory testing practices – how to think the way expert testers do, to find those edge cases and to mentally model the system so they can find the complex parts.
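The "always check 1 and 0" checklist the questioner mentions can be encoded directly as a table of boundary cases. The function under test here is hypothetical, chosen just to show the tester's mentality of probing zero, one, the edges, just past the edges, and negatives.

```python
# Boundary-value testing sketch: encode the edge-case checklist as data.

def clamp(n, low, high):
    """Clamp n into the inclusive range [low, high]."""
    return max(low, min(n, high))

# Cases an expert tester reaches for: both boundaries, just past each
# boundary, zero, and far-out values.
CASES = [
    (0, 0), (1, 1), (10, 10),      # in range, including both boundaries
    (11, 10), (100, 10),           # above the upper edge
    (-1, 0), (-100, 0),            # below the lower edge
]

for value, expected in CASES:
    assert clamp(value, 0, 10) == expected
print("all boundary cases pass")
```

Keeping the cases as a table makes the checklist visible in the test itself, so the next developer inherits the edge-case thinking along with the code.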
Audience Question: Thank you for this talk, by the way – it's excellent! One thing about coming into testing and test-driven development, having not worked in that discipline: beyond the CYA mentality of making sure that everything works and nothing is broken, I've learned there are additional benefits, things like software engineering and thinking through your problem. If something is really hard to test, you're probably doing it the wrong way – it could be done more simply. Are there other benefits you've seen and can talk about with regards to testing and TDD, outside of just "hey, our code works"?
Trish Khoo, Google: Yeah, absolutely! You've hit the nail on the head there! Usually what I hear most from TDD practitioners is that it's more about design than it is about testing. Writing their tests makes them think upfront about what it is they're building, what the expectations are and how they're going to build it. It forces developers to build their product in a testable way, which makes tests easier to write in the future and avoids a lot of refactoring headaches later on. In addition, it also serves as documentation: if anybody down the line, like a new developer, wonders what this code was ever meant to do, the tests are a plain-text way of saying this behaviour works this way because of these user expectations. It sets the expectations for the code and the tests at the same time. It's very efficient that way!
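TDD's "design first" benefit can be shown in miniature: the test is written before the implementation and pins the expectation, which forces a small, testable design. The function and behaviour here are invented for illustration.

```python
# TDD in miniature: the test comes first and states the design contract.

def test_slugify_replaces_spaces_and_lowercases():
    # Written before slugify existed: it documents the intended behaviour.
    assert slugify("Hello World") == "hello-world"

def slugify(title):
    # The simplest implementation that satisfies the test above.
    return title.lower().replace(" ", "-")

test_slugify_replaces_spaces_and_lowercases()
print("ok")
```

The test doubles as documentation: a reader who has never seen `slugify` learns its contract from the test name and assertion alone.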
Mark Littlewood: And I think that means we're out of time. Let's say thank you very much, Trish! Well done!