Organizing tests by repetition or granularity?

24 Sep

I’ve seen this often in my job, both covering test scenarios manually and adapting those tests for automation. Have others come across the same issue?

In manual testing, or more correctly exploratory testing, you as a human tester can choose how you perform your tests, and it is often logical to consolidate areas of testing (or in documentation parlance, “test cases”) around things that would otherwise be repetitive, e.g. reusing the same test data to test slightly varying scenarios. Where a workflow or sequence of test steps is common across several tests, you may as well run them together, multitasking, rather than one after another, repeating the common steps all over again.

One example is testing a workflow that happens to be mostly the same across desktop, tablet, and mobile; only the UX or UI differs. Do you run the test three times? Or run it roughly once, testing all three at the same time where possible, using one path through most of the workflow and verifying across all three paths at the critical points?

But that’s easy to do in exploratory (and undocumented manual) testing. What do you do when it comes time to put it on paper for others to execute, say an offshore team, who are one or more layers further removed from the feature under test than you are? And then, when converting such documented tests to automation?

Do we follow the human (exploratory) convention of consolidating test coverage around the repetitive workflow and shareable test data, minimizing the number of test cases and test scripts to create? Or do we keep granular, individually specific tests for tracking and auditing purposes, and to simplify (or idiot-proof) execution, even though that redundantly repeats test steps and generates more test data that could otherwise have been shared or reused?

One argument for going granular is that automation and/or hardware is cheap. But it turns out that unless your site is 1990s basic HTML, repeated test steps are still slow against modern web applications in modern browsers. Running tests in parallel helps, but it adds complexity and can still take a long time once your test codebase grows large. So in some cases consolidating tests isn’t necessarily bad; you have to weigh the pros and cons with short- and long-term automation coverage and infrastructure in mind.
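As one concrete option (a sketch, not an endorsement): in the Python world, the pytest-xdist plugin spreads an existing pytest suite across worker processes with a single flag, which is often the cheapest parallelism to try first.

```
pip install pytest-xdist
pytest -n auto   # distribute tests across all available CPU cores
```

Even then, parallel workers don’t remove the per-test setup and teardown cost; they only overlap it, which is part of why consolidation can still pay off.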

Just wondering what experiences and approaches folks have taken in the real world.


3 Responses to “Organizing tests by repetition or granularity?”

  1. 朵朵 January 18, 2015 at 7:02 pm #

    Can’t manual/exploratory tests also be automated? Granular tests can also be repetitive. Whenever we have a new feature, we do some exploratory tests manually first and then automate them. I agree that in some cases automating takes longer, so we do manual tests instead. Is that the problem you are trying to solve here?

    • autumnator January 18, 2015 at 8:06 pm #

      Perhaps my blog post was not clear enough in describing example situations. Yes, manual test cases can be automated most of the time. Exploratory tests, once automated, are no longer “exploratory”; that’s the equivalent of automating a manual test.

      But here’s one example to clarify, taken from my blog post: testing a website workflow that happens to be mostly the same across desktop, tablet, and mobile, where only the UX/UI differs via responsive design. Say the workflow is: log in, view a product, add it to the cart. It sounds simple, but let’s assume there are still quite a few steps in between.

      Now, when you document this test manually, do you write one test case that describes the steps through the workflow, targeting the default desktop view, and then at the end say “repeat for the mobile and tablet views”? Or do you clone the “desktop” test case and relabel one copy for mobile and another for tablet, resulting in three test cases that say much the same thing with slightly different scenarios? For simplicity, let’s assume the UI differences between the three views are small enough that you don’t have to describe them in the test case; the common steps (logging in, viewing the product page, adding to the cart) are intuitive in every view.

      I personally prefer the one-test-case approach, but some people prefer the explicitly granular three-test-case approach. When you have these types of scenarios often, choosing granularity leaves you with a huge test suite (three times the number of such scenarios) when you could get the same coverage with a third of the tests by using one test case that simply requires “more” testing within it. What does it matter whether you fail one specific test case out of three, or fail one test case because one of its three scenarios failed (passed desktop and tablet, failed mobile)? Either way you file a bug, the bug gets tracked, and you can link it back to the test case; it doesn’t really matter whether it’s granular or not.

      Perhaps part of the problem is that QA sometimes doesn’t read thoroughly: a single test case covering three scenarios takes more work to read and interpret than three granular test cases. But I’d say QA is meant to “read” and know this stuff, not lazily follow super-granular test cases; that just means more test cases to manage.

      Now adapt that to automation. Do you automate it as three separate Selenium test scripts, run three times, each with its own setup and teardown steps that take extra time and resources? Or could you write it as one Selenium script with a loop that iterates through the three scenarios, changing the user agent each time to test each view? The single test doesn’t even need a loop: it could define the workflow generically, like a template, where at runtime you set the user-agent type and the test runs correctly for the specified view. One test script then covers all three cases by varying the user agent.

      When you write three separate Selenium tests, you end up duplicating code (let’s assume the responsive design uses the same locators across views; it only looks different to the user because of different CSS styling, and the workflow navigation steps are the same in all three views), all just for the granularity of saying you covered these three cases “specifically” in your test coverage documentation. But isn’t avoiding exactly this kind of repetition why data-driven testing was developed? Think of my scenario as a form of data-driven testing: consolidating the three cases into one manual test case and/or one Selenium template script.
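      To make the template idea concrete, here is a minimal sketch using pytest and Selenium. The URL, locators, and user-agent strings are hypothetical placeholders, not a real application:

```python
import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# Placeholder user-agent strings; substitute real ones for your target views.
USER_AGENTS = {
    "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "tablet": "Mozilla/5.0 (iPad; CPU OS 12_0 like Mac OS X)",
    "mobile": "Mozilla/5.0 (iPhone; CPU iPhone OS 12_0 like Mac OS X)",
}

@pytest.fixture(params=sorted(USER_AGENTS))
def driver(request):
    # One browser per view; the test body below stays identical for all three.
    opts = Options()
    opts.add_argument(f"--user-agent={USER_AGENTS[request.param]}")
    drv = webdriver.Chrome(options=opts)
    yield drv
    drv.quit()

def test_login_view_product_add_to_cart(driver):
    # Same locators in every view (per the assumption above); only the CSS differs.
    driver.get("https://shop.example.com/login")  # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login").click()
    driver.get("https://shop.example.com/product/123")
    driver.find_element(By.ID, "add-to-cart").click()
    assert driver.find_element(By.ID, "cart-count").text == "1"
```

      pytest still reports three results from the one script, so you see exactly which view failed, and adding a fourth view later is one more dictionary entry rather than another cloned script.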

      Another example, on the manual-testing side, comes from the telecommunications world. Take the scenario: user A calls user B, and B transfers the call to user C. The call connections A-B and B-C vary by the technology used (VoIP/SIP vs. traditional telephony), and A, B, and C can each be using a different phone type (analog home phone, proprietary digital business phone, VoIP/SIP phone, mobile phone). You can imagine how many combinations we would need to cover all possible call connections and phone types between A, B, and C in that call-transfer scenario.

      Rather than write up a specific test case of written steps for each combination, we created an Excel test matrix of all the cases and checked them off as each test succeeded, like a checklist: no test case IDs or descriptions per combination. It’s easy enough to interpret what the coverage is, and performing a call transfer is straightforward for anyone in telephony QA; besides, our product ships with user documentation the tester can refer to for the procedure instead of a “test case.” What each cell specifically verified was that A, B, and C could talk with no audio issues.
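      And as a rough sketch, that kind of checklist can even be generated rather than hand-built. The connection and phone-type labels below are illustrative, not our product’s real inventory:

```python
import csv
import itertools

# Illustrative labels only; a real matrix would use the product's actual inventory.
CONNECTIONS = ["VoIP/SIP", "traditional"]        # technology of the A-B and B-C legs
PHONES = ["analog", "digital", "SIP", "mobile"]  # phone type for each of A, B, C

rows = []
for ab_leg, bc_leg in itertools.product(CONNECTIONS, repeat=2):
    for a, b, c in itertools.product(PHONES, repeat=3):
        rows.append({
            "A-B connection": ab_leg,
            "B-C connection": bc_leg,
            "A phone": a,
            "B phone": b,
            "C phone": c,
            "pass?": "",  # checked off by hand as each transfer succeeds
        })

# Write the checklist out as a spreadsheet-friendly CSV.
with open("transfer_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

print(len(rows), "combinations")  # 2*2 connection legs x 4*4*4 phones = 256
```

      A new connection type or phone model is then one new list entry and a regenerated checklist, which matters for the point about future changes below.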

      So from these examples, I hope you can see that not all test scenarios are simple to write up. Sometimes you have to spend time deciding whether to go the granular route or to organize by repetition (or similarity, or whatever you want to call it).

      Also, be aware that you may want to think ahead; sometimes you don’t anticipate changes coming later on. For the telephony scenario above, what if another call connection type or another phone type comes along in the future? For the responsive web design scenario, what if you have more device views to support? How you define and architect the tests determines what such additions cost: more complexity and documentation, another iteration of the loop for the additional coverage, or yet more cloned granular test cases and scripts.

      • 朵朵 January 18, 2015 at 9:52 pm #

        Yes, that’s clear to me now. The first scenario you mention sounds more like an A/B-testing style of solution. Thanks for your thorough answer.
