Filed under: Performance Monitoring
Performing sanity and thorough regression tests against the frontend of any web-based product is a resource-intensive exercise, so AppNeta’s QA team has started to expand into automation using Selenium and Sauce Labs. As any tester should know, test cases should be independent of one another. They should be atomic so that one failed test won’t affect another test’s result; the logic is sound. Yet dependent tests seem to be unavoidable with a recent request.
To understand why I’m open to including dependent tests, we’ll need to first understand the current setup. We’ve set up an image of a database that includes everything from users and organizations to established paths and web path assessments. Every time we want to run a full suite of tests, we simply load the database onto an EC2 instance and let Selenium do its job.
This approach is great: it’s clean and easy to use. We don’t have to clean up our mess after each test. With Jenkins, all of the above steps are executed by kicking off one job. When we add more tests that require premade items (paths, profiles, etc.), we simply update the database image. Unfortunately, there are always scenarios that make dependent tests the only option.
With the progress we’ve made with automation, we were asked: can we deploy these tests to production servers? Yeah, why not? Our premade database, that’s why not. Suddenly our test suite is rendered effectively useless, because production servers don’t have premade paths and such; our tests are trying to act on something that isn’t there.
Testing on production servers is not impossible in our situation, but we have another goal in mind: keep our footprint on these servers as small as possible. We suddenly have to behave ourselves. Our tests now have to create their own resources for testing, and clean up the dummy users and organizations they create.
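One way to keep that footprint small is to guarantee teardown even when a test step fails. Here’s a minimal sketch of that pattern in plain Java; `Server` and `withTestUser` are hypothetical stand-ins for our real provisioning code, not actual AppNeta APIs:

```java
import java.util.HashSet;
import java.util.Set;

public class CleanupSketch {
    // Hypothetical stand-in for resources a test creates on a production server.
    static class Server {
        final Set<String> users = new HashSet<>();
        void createUser(String name) { users.add(name); }
        void deleteUser(String name) { users.remove(name); }
    }

    // Run a test body against a throwaway user, deleting the user afterward
    // even if the body throws -- so no debris is left on the server.
    static void withTestUser(Server server, String name, Runnable body) {
        server.createUser(name);
        try {
            body.run();
        } finally {
            server.deleteUser(name);
        }
    }

    public static void main(String[] args) {
        Server server = new Server();
        withTestUser(server, "qa-dummy-user", () -> {
            // ... the actual Selenium steps would go here ...
        });
        if (!server.users.isEmpty()) {
            throw new AssertionError("test left a footprint behind");
        }
        System.out.println("server is clean");
    }
}
```

The same idea maps onto JUnit’s setup/teardown hooks; the try/finally version just makes the guarantee explicit.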
Consider the test case of creating and deleting a path. On a server without our premade database, we’ll have to create a path in order to test our delete functionality; this introduces a dependency. What is the right approach here? Arrange the test suite so that the creation test comes before the deletion test? I opted to merge the relevant tests into a single test. This way I can preserve the order of the steps with confidence (JUnit does have a cumbersome way to arrange the execution order of tests). It also eliminates a lot of the overhead of starting a test (requesting an instance at Sauce Labs, opening the browser, then navigating to the production servers), which takes up the majority of the elapsed time.
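The merged test reads as one lifecycle: create, verify, delete, verify. A minimal sketch of the shape, with `PathClient` as a hypothetical stand-in for the page actions Selenium would actually drive:

```java
import java.util.ArrayList;
import java.util.List;

public class PathLifecycleSketch {
    // Hypothetical stand-in for the UI actions Selenium performs against the app.
    static class PathClient {
        private final List<String> paths = new ArrayList<>();
        void createPath(String name) { paths.add(name); }
        void deletePath(String name) { paths.remove(name); }
        boolean pathExists(String name) { return paths.contains(name); }
    }

    public static void main(String[] args) {
        PathClient client = new PathClient();
        // One merged test: the delete step depends on the create step,
        // so both live in the same test body and share one browser session.
        client.createPath("smoke-test-path");
        if (!client.pathExists("smoke-test-path")) {
            throw new AssertionError("create step failed");
        }
        client.deletePath("smoke-test-path");
        if (client.pathExists("smoke-test-path")) {
            throw new AssertionError("delete step failed");
        }
        System.out.println("path lifecycle ok");
    }
}
```

The trade-off is deliberate: a failure in the create step masks the delete step, but in exchange the ordering is unambiguous and the Sauce Labs session is paid for once instead of twice.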
Test atomicity is important and no doubt prevents a lot of issues that crop up over time when the principle is ignored. However, there are always scenarios where atomicity cannot be applied and different approaches have to be considered. Even though the occurrence is rare, it is essential to have backup plans to keep test development moving smoothly; otherwise teams will spend hours deliberating how to tackle a new obstacle in a fairly well-defined process.