Efficient QA workflows: a checklist for testing your own code
By: Peter Keung | February 7, 2017 | Business solutions, eZ Publish development tips, and Productivity tools
Not all of the burden of testing website code lies with automated tests, a QA team, or the end client. There is a lot you can do as a developer to test your own website code and make sure it is as good as possible before passing it over to someone else or an automated system. At Mugo, we've developed a simple, general checklist that makes "self-testing" a key step in the QA workflow.
Whenever a developer assigns a ticket to another team member or to the client for testing, they should review the following and make notes under these headings:
Implementation notes
- Quick check: did you review every line of your change before committing? To prevent accidentally committing test code or local-only code.
- Provide a short description of the development approach. To make sure it matches the spec and to help orient the tester.
- Whether the implementation is any different from what was originally imagined, and why. Ideally, any changes discovered during development are flagged and discussed before the code is submitted, so this is often just a documentation step.
- Any relevant challenges. So others have a chance to point out any misunderstandings after the fact and for historical purposes.
- Anything else you noticed that would be useful in the future, or something we might consider doing in this ticket. For historical purposes.
- Whether anything affects editors or how the public uses the site. So the appropriate documentation and training can be prepared and provided, and sufficient notice can be given.
Testing notes
- How did you test? To orient the tester and make sure no wrong assumptions were made.
- Which URLs did you use to test, and/or which scripts did you run? To help the tester and ensure you tested your own code!
- What makes those URLs good ones to test (for example, if you need a certain field value or content type)? To make sure we addressed the correct issue and to identify any other use cases to test.
- Is there anything that couldn't be tested? For example, if we rely on a cronjob, ideally the ticket shouldn't be assigned over until the script has been fully tested in the cron environment on staging (so if we need to wait until the next day to test, we wait). Or, if there is a dependency on a third-party account that we couldn't test, we should note that. General paranoia is good. Do everything you can to test your change.
Deployment notes
- Does anything need to be done to deploy the work other than just a code merge? Definitely document which code merge needs to happen, as you might not be the one doing the deployment.
- Even if it's just a code merge, are there other dependencies, such as other tickets or an external account that needs to be configured? Think about the big picture so you don't break anything else or discover only at (or after) deployment that you forgot something.
- Does the client need to know anything specific? Do they need to stop editing part of the site? Is any downtime needed? Think about the human element of the deployment.
Following the checklist above as part of a larger QA workflow helps to prevent unnecessary back-and-forth and duplicate QA efforts, saves time and money, makes people better developers, and ultimately leads to better websites!