Welcome to part 3 of our series! In part 2 of our QA and Testing series, we explored some best practices for planning, organizing, and writing your tests. In this part, we share some tips on speeding up development and troubleshooting.
Speeding up development
Development speed is a factor you should always take into account (not only when deadlines are looming), because optimizing your process has a direct impact on both the quality and the delivery speed of your output. Use our tips to speed up testing so that it always fits into your schedule, with some bonus side effects like minimizing context switching, organizing your thoughts, and future-proofing your code. And ultimately, it removes the possibility of regression bugs, which saves you even more time: the alternative is your clients becoming your QA team in production.
Use live mode
You can speed up development by developing in live mode (as opposed to headless mode), where you can see the changes you make to your local codebase immediately in your browser. This is when two monitors come in handy, because you don't have to switch between your code editor and your browser.
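For example, assuming your tests live in a tests/ directory and you develop in Chrome, a live mode run from the CLI could look like this:

testcafe chrome tests/ --live

Live mode keeps the browser open and restarts your tests whenever you save a file.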
Prepare for scaling
To make your tests future-proof, always write them with the concurrency setting in mind; your future self will thank you once your test suite grows big enough.
If tests run for more than 5 minutes, developers tend to lose focus and switch to different tasks after pushing their code. This means you're wasting time not only on long-running tests, but also on context switching. Some say switching focus from one task to another takes around 15 minutes, others lean towards 30 minutes; either way, it's a lot of time to lose repeatedly. If your tests finish within 2 minutes, it's less likely that the developer will start doing something else.
- Ideally, run tests with a concurrency of 4 every now and then, and see if you have race conditions or other non-atomic tests that need to be addressed. Sometimes the solution is as simple as putting things into an .after hook to ensure that what you are testing always runs after a certain action (see the sketch after this list).
Note: This is probably not the perfect solution, because you don't get proper messages in the console and you are tightly coupling operations, but I don't yet know how to make it much better. If you have some ideas, please ping me at pawel@platform-os.com.
- Only use .wait() if you absolutely have to — a couple of waits can kill your test suite performance and TestCafe is extremely good at waiting when necessary even without them. The only time we had to use .wait() was to wait for some weird async external 3rd party responses. After giving it some thought, it turned out that we were not testing our feature, but external responses, so we just removed that part of the test.
There is one exception to this rule: if your application does something in a background job and you need to wait until that job is done, use .wait(). Avoid this scenario if you can, but if you can't, make sure your wait time is short enough not to slow down your tests unnecessarily, yet long enough not to make your test flaky when your app is slower once a month.
- If your test has a lot of form submissions and file uploads (keep them small), try bundling them to save time on submits.
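To make the .after hook idea concrete, here is a minimal sketch; the selectors and the cleanup URL are made up for illustration:

// Every now and then, run the suite with 4 parallel browser instances:
//   testcafe chrome tests/ -c 4

import { Selector } from 'testcafe';

fixture('Feedback - concurrency-safe example').page(`${process.env.MP_URL}/feedback`);

test('creating feedback adds a new entry', async t => {
  await t
    .typeText('#feedback-message', 'Hello from a concurrent test')
    .click('#feedback-submit')
    .expect(Selector('.feedback-entry').withText('Hello from a concurrent test').exists).ok();
}).after(async t => {
  // Hypothetical cleanup step: it always runs after the test body,
  // so parallel runs don't race on leftover data.
  await t.navigateTo(`${process.env.MP_URL}/feedback/cleanup`);
});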
Use only and skip
Use test.only, test.skip, fixture.only, and fixture.skip to your advantage during development and debugging. They make TestCafe start up faster and let you run your test quickly in isolation.
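For example, .skip lets you park a known-flaky or unfinished test without deleting it (the names below are made up):

fixture('Checkout').page(`${process.env.MP_URL}/checkout`);

test('places an order', async t => {});

// Excluded from the run, but still visible in the codebase as a reminder
test.skip('applies a discount code - work in progress', async t => {});

The next section shows .only in action.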
Write empty test cases
Start writing tests by writing empty test cases. Just like you would write an outline of a long paper, book, or a blog post, it’s good to know the main points you want to cover.
Filling in the blanks is easier and helps you manage everything in your head. Then add .only to only test the case you are developing. This technique coupled with live mode speeds up test development by allowing you to focus on one thing only and get feedback as quickly as possible.
fixture('Feedback - CRUD using Ajax').page(`${process.env.MP_URL}/feedback`);
test.only('customizations_delete_all cleans feedback correctly', async t => {});
test('Create, Read', async t => {});
test('Update, Read', async t => {});
test('Delete, Read', async t => {});
Troubleshooting
Sometimes things go wrong. Sometimes you will recognize what the problem is at first glance because you might have seen this issue before, but other times your terminal goes red and you have no idea why — it should work, right!?
We’ve encountered many of those moments before, so here are our recommended troubleshooting steps, in the order that has saved us the most time during test development.
Make sure your feature works
We can’t stress this enough: make sure your feature works correctly with a mouse and a keyboard (this is how your users usually interact with it) before you start writing automated tests for it.
If some invisible element is covering your form field with padding, don't expect it to behave well just because there are 3 pixels that allow you to click on it and grab focus.
TestCafe simulates a user but faster
Investigate event handlers attached to elements. If a page attaches some very slow JavaScript to every possible user interaction (e.g. keypress, keydown, keyup on a text input) and TestCafe fills that input at a rate of 10 characters per second, TestCafe might simply be too fast for it, and you may have missed the problem while testing manually.
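If you want to confirm that speed is the culprit, you can slow down a single action or change how it fires its events. A hedged sketch (the selectors are made up) using typeText options:

fixture('Slow handlers example').page(`${process.env.MP_URL}/feedback`);

test('typing does not outrun slow event handlers', async t => {
  await t
    // type at a fraction of full speed so every keypress/keyup handler has time to run
    .typeText('#feedback-message', 'Hello', { speed: 0.2 })
    // or insert the whole value at once instead of one event per character
    .typeText('#feedback-title', 'A short title', { paste: true, replace: true });
});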
A picture is worth a thousand words
Do the following to see exactly what TestCafe is doing on the page:
- Turn off the :headless setting in your CLI (for example, run your tests with chrome instead of chrome:headless) so that TestCafe opens a visible browser window.
- Add t.debug() in your code to pause the execution of a test at a certain point; just before it fails is usually a good place to start. After the browser has stopped, you can unblock it and click around yourself, open developer tools, and inspect elements like you normally would (see the sketch after this list).
- Slow down your tests by using the speed settings (--speed on the CLI, or t.setTestSpeed() in code, also shown in the sketch after this list); sometimes TestCafe is faster than the human eye.
- You can also take screenshots and recordings, although we don't use this method much ourselves. Screenshots are useful for CI-specific errors, but running your tests in Docker heavily minimizes the need for them.
- Use console.log() during development to get more insight into what is going on.
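Here is a minimal sketch combining the two in-code options above; the page and the selectors are made up for illustration:

import { Selector } from 'testcafe';

fixture('Troubleshooting example').page(`${process.env.MP_URL}/feedback`);

test('shows the feedback form', async t => {
  await t
    .setTestSpeed(0.3)  // run the remaining actions at 30% of full speed
    .click('#open-feedback-form')
    .debug()            // pause here; the browser stays open so you can click around and open devtools
    .expect(Selector('#feedback-form').visible).ok();
});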
Experiment with async/await more
Knowing when you have to use await in TestCafe can be a little confusing, partly because async/await is still a fairly new JavaScript feature. A good resource on the topic is the video async / await in JavaScript - What, Why and How - Fun Fun Function.
We sometimes just add await automatically and we’re a bit ashamed of it, so this is more of a note to ourselves, but hopefully it will save you some head-scratching as well.
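A rough rule of thumb that works for us: every action or assertion on t returns a promise and needs an await (or needs to be part of an awaited chain), and so does reading a Selector property. A minimal sketch with a made-up selector:

import { Selector } from 'testcafe';

fixture('async/await example').page(`${process.env.MP_URL}/feedback`);

test('lists at least one feedback entry', async t => {
  // Selector property reads (count, exists, textContent, ...) return promises, so await them
  const entries = await Selector('.feedback-entry').count;

  // Actions and assertions on t also return promises; a single await covers the whole chain
  await t.expect(entries).gte(1);
});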
Ask for a second pair of eyes
We can't even count how many times someone fixed a problem with a simple observation after days of frustrating debugging. There is a point in troubleshooting when you just can’t see new approaches anymore.
The first approach is to switch to something different for a while (say, one day) and not think about the problem, so that you can come back to it with a fresh mind.
If you don't have the luxury of setting the problem aside because of time pressure, turn to your team members. Ask a back-end developer for a face-to-face or video meeting; it's very likely that the questions they ask to understand what's going on will lead you both to the solution. Sometimes that solution is the conclusion that what you're doing should be done in a completely different way. Take your ego out of the equation and you will be fine. This advice applies to all phases of programming, not only testing; use it wisely and whatever you build will be technically much better for it. The next step in this approach is pair programming, which has its own advantages.
We hope you find our series about QA and testing helpful. If so, check out part 4 about performance testing with TestCafe.