What are the components of a software QA infrastructure...
June 4, 2018 12:48 PM
...and how do they interact with each other and the broader development cycle?
I'll be starting a new job where I'm tasked with setting up the QA infrastructure at an online publisher. Currently the devs write unit tests, but the rest of QA is done quite informally, and they're at a point where they need to formalize it to reduce bottlenecks. I am doing a bunch of research, naturally, but I'd love to hear from people in the trenches.
Also, I suspect that the boss has some pretty clear ideas about what he wants to do, and just doesn't have time to actually do it.
The stack includes Node, GraphQL, React, Redux, Relay.
I'll include some more specific guide questions below, but they're mostly there to indicate what I know and don't know.
* Code coverage: Should N% of the code be covered by unit tests?
* Do the devs run their unit tests before committing, or are they run automatically as a condition of successful commit?
* Does the QA department (haha me) write all the UI tests? Can these all be automated? Where in the process do they run? What event is triggered upon failure?
* What tests can't be automated?
Build a good staging environment (or several) that closely tracks / is the same as production.
CI: something like Codeship that runs tests on your repositories for PRs.
Mandatory multi-person reviews of major releases / features.
Require code reviews / passing CI before you merge and deploy.
* Code coverage: Should N% of the code be covered by unit tests?
There's no hard rule here, but most code should be tested.
* Do the devs run their unit tests before committing, or are they run automatically as a condition of successful commit?
Both: run them locally before committing, and have CI enforce them as well.
* Does the QA department (haha me) write all the UI tests? Can these all be automated? Where in the process do they run? What event is triggered upon failure?
No. They should be part of the normal test suite. They should not fail; if they fail and someone deployed anyway, then that someone failed.
* What tests can't be automated?
I'm not sure what you mean. Typically, it's not worth it (to me) to write end-to-end UI tests. The dev should check that the thing they want to deploy works locally, then in staging, and then once it is released to prod. Maintaining end-to-end tests for a UI that's shifting / continuously deployed is not worth it.
posted by durandal at 1:23 PM on June 4, 2018 [1 favorite]
Like everything else in software development, most of these answers are at a strange intersection of science, art, and philosophy. You need to find balance between those three things as you build this out.
* Code coverage: Should N% of the code be covered by unit tests?
Probably. Mentally I shoot for 100%, but that's very difficult to achieve for various reasons; something like 70-75% may be reasonable. Note that code coverage tools can offer a nice way to gamify testing, but at the end of the day the quality of your tests is what you need to focus on. One thing to note here is that you can likely run a coverage-analysis tool during your build/test phase and fail a commit if the coverage percentage is too low.
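As a concrete sketch of that gate, assuming the suite runs on Jest (a common choice for a Node/React stack; the thresholds below are illustrative, not a recommendation), a coverage floor can be enforced straight from the config:

```javascript
// jest.config.js: example coverage gate. Jest is an assumed tool choice here,
// and the percentages are placeholders to adjust for your codebase.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    // Jest exits non-zero when global coverage drops below these floors,
    // which fails the build in CI automatically.
    global: {
      statements: 75,
      branches: 70,
      functions: 75,
      lines: 75,
    },
  },
};
```

Because Jest exits non-zero when a threshold is missed, the same config doubles as the CI gate with no extra wiring.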
* Do the devs run their unit tests before committing, or are they run automatically as a condition of successful commit?
The ideal here is almost always going to be running tests before committing and then having the repo/build server also run those tests. Your goal here is for people to know about issues, and for new contributions to not hang up other people if a test isn't passing.
* Does the QA department (haha me) write all the UI tests? Can these all be automated? Where in the process do they run? What event is triggered upon failure?
You'd probably write these, but if you aren't a developer, maybe there'd be some sort of rotation to place devs on your team. These should be automatable, and would likely run after a commit, just like any other unit test. If they fail, well, that depends on your process. You'd probably want to notify the people responsible for the code so they can fix it. For some interesting discussion vaguely related to this, see "Why are unit tests failing seen as bad?" on the Software Engineering Stack Exchange.
* What tests can't be automated?
In theory you could test anything. In practice, that's not always possible, though proper architecture can set you up for success. One thing to be aware of, if you aren't already, is the difference between "unit tests" and "integration tests"...and the other pieces in between those two ends of the spectrum (see the Software Engineering Stack Exchange discussion). "Unit tests" look at a small logical unit (does my Add method correctly add two numbers?) while integration and end-to-end tests look at the big picture (if I press a number, the add button, and another number on the calculator, does the correct answer show on the screen?).
UI tests may involve both of these aspects - you might verify that a login button calls the login API, but it might actually be more straightforward to use an automated headless browser to type in user credentials, hit the login button, and verify that the page redirects to the account's home page.
But to test that, maybe you need a couple of test users. Or maybe you need an entire development environment where you can test the password-reset functionality. It can get complicated quickly, so make sure you understand why you're testing a given feature and what you're trying to accomplish.
-----
This was a rather platform-agnostic answer, but if you're interested in a few more specifics of what testing for React and Node might look like, add a comment and myself or someone else can probably drop a few pointers.
posted by Nonsteroidal Anti-Inflammatory Drug at 1:28 PM on June 4, 2018 [3 favorites]
You might not use automated e2e tests or this solution in particular for reasons that boil down to cost in time and money, but if you watch some videos on the SauceLabs channel on YouTube, you'll get a really good picture of how neat the process can be. If you're the only person who owns the process, that could be a good argument for giving you a nice tool for it.
posted by Wobbuffet at 1:51 PM on June 4, 2018 [1 favorite]
Make sure you measure what you can.
Measure the defects reported (broken out by severity), the defects fixed, the reopen (recidivism) rate, how many defects are found internally vs. by customers, coverage, etc.
Having these numbers allows you to create a feedback loop to improve the quality of the products.
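As a sketch of what that feedback loop might compute (the record shape and field names are invented for illustration):

```javascript
// Hypothetical defect records, e.g. exported from a bug tracker.
const defects = [
  { id: 1, severity: 'high', foundBy: 'internal', fixed: true, reopened: false },
  { id: 2, severity: 'low', foundBy: 'customer', fixed: true, reopened: true },
  { id: 3, severity: 'high', foundBy: 'customer', fixed: false, reopened: false },
];

// Roll the raw records up into the ratios worth tracking over time.
function qaMetrics(records) {
  const total = records.length;
  const fixed = records.filter(d => d.fixed).length;
  const reopened = records.filter(d => d.reopened).length;
  const internal = records.filter(d => d.foundBy === 'internal').length;
  return {
    fixRate: fixed / total,
    reopenRate: reopened / Math.max(fixed, 1), // share of fixes that didn't stick
    internalCatchRate: internal / total, // found before a customer hit it
  };
}
```

Tracking these ratios per release, rather than as one-off numbers, is what turns them into the feedback loop.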
posted by plinth at 1:59 PM on June 4, 2018 [1 favorite]
Response by poster: Thanks for all the responses so far.
@Nonsteroidal Anti-Inflammatory Drug : I'll take you up on your offer to discuss specifics on testing Node and React, as well as GraphQL and Relay, areas in which I have considerably less experience.
posted by NativeHadzaSpeaker at 2:10 PM on June 4, 2018
I have a lot of strong opinions on this, having been in QA since 2002. Feel free to MeFi mail me for details.
quick summary:
-Have a dedicated bug-finding team and a dedicated test-automation framework team.
-Bug finders don't necessarily need a technical background, but they do need to be creative.
-ALWAYS MAKE IT A GOOD THING THAT A BUG IS FOUND. If someone asks "why wasn't this found?", defend your team.
-Code coverage is a start, not everything. The last 20% can be almost impossible to hit and not worth it. Block vs. arc (branch) coverage also makes a difference.
-Treat your testers with respect. If a dev is rude to testers, talk to the dev manager; don't let it slide.
-Test early, test often.
-Pull with the team; don't just give them orders.
-Beware of burnout.
-Unit tests should always be running, but not written so that they trivially always pass, and if they fail, expect same-day dev turnaround. Devs should write the unit tests.
-Include the test team on code reviews.
-Don't forget to test the design and the spec.
-Offshoring rarely works; offshore testers will only do exactly what you tell them to.
posted by evilmonk at 12:36 PM on June 5, 2018 [1 favorite]
Try to keep UI testing minimal; it's expensive to build and costly to maintain. Try to get a gauge of the team and their current testing practices. A big swath of bugs and problems can be eliminated in the CI/CD pipeline. Beyond that, taking part in the design process can also help focus the product and determine areas of risk to concentrate on. Most high-quality testing is exploratory in nature, using some kind of heuristic to divine where bugs and errors may exist. Automation serves to verify/check that a regression does not occur; it can also be used to assist exploratory tests, performance tests, etc.
If you're the only QA person on the team, focus on the biggest areas of risk reduction for the team. Be mindful of choices that may incur high levels of maintenance in the future.
posted by andendau at 8:11 PM on June 5, 2018 [1 favorite]
oh, and I swear to god, if someone suggests automating image comparison of UI against baseline images to save time, throw them out of your office, kick them down the hall, paint a giant red X on their office door, flush their lunch down the toilet, burn their pants, etc, etc. I have seen so much time and effort wasted on this, where a manual tester will find better bugs for less money.
posted by evilmonk at 10:54 PM on June 5, 2018 [1 favorite]
And remember: automation will only find what you tell it to look for. Nothing else.
posted by evilmonk at 10:55 PM on June 5, 2018 [1 favorite]
I have seen so much time and effort wasted on this, where a manual tester will find better bugs for less money.
This is an important point. Google's Testing Blog has a great article called Just Say No to More End-to-End Tests. I like end-to-end tests but they have their limits.
posted by Nonsteroidal Anti-Inflammatory Drug at 8:11 AM on June 6, 2018 [1 favorite]
If you have a code coverage requirement, make sure it’s obvious early on when it is no longer being met — we had a cloud product once with a code coverage requirement that didn’t come into play until *deploy* time, which resulted in folks scrambling to shoehorn in unit tests later in the process, when they should have been written up front.
posted by Blue Jello Elf at 1:11 PM on June 4, 2018 [1 favorite]