
Test Execution and Reporting - My Notes from a Seminar I attended!

TechGig Webinar by Ashu Chandra - Test Execution and Reporting


Overview

  1. Test Bed Preparation
    1. Environment where to test - ensure we have the right version of the app server/database/application (AUT - application under test)
  2. Execute Tests
  3. Analyze unexpected behavior - this is when we see a difference between the expected result and the actual result
    1. How to drill down
    2. What to look for
    3. not every unexpected behavior is a bug
  4. report bugs
    1. When we get a bug we should report it to dev in a crisp and clear way
    2. so that dev can recreate it quickly
  5. bug lifecycle
    1. What stages does a bug go through
  6. exit criteria (when to stop testing)
    1. how much should we test
  7. report test findings
  8. best practices
Test Bed Preparation
  1. Ensure you have required hardware/operating system
  2. availability of the QA environment of interfacing applications
    1. If there are intermediate systems, we should check their requirements too
  3. set up the environment as per the supported platform (DB version, app server version, etc.)
  4. SUT installation from version control
    1. usually we shouldn't accept anything for testing that isn't version controlled
    2. we should know exactly what we are testing

  5. prepare master data required as per test suite
    1. normally the data is created once and exported to a dump file for reuse
    2. we should take care of all the criteria when creating our test data (a sketch of automating this step follows this list)
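
A minimal sketch of the master-data step, assuming a relational AUT database: a small JDBC loader that replays a version-controlled seed script into the QA schema. The connection URL, credentials, and file path below are placeholders, not from the seminar.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MasterDataLoader {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- point these at the QA database.
        String url = "jdbc:mysql://qa-db-host:3306/aut";
        try (Connection con = DriverManager.getConnection(url, "qa_user", "qa_pass");
             Statement st = con.createStatement()) {
            // Seed script exported earlier as a dump file and kept in version control.
            String seedSql = Files.readString(Paths.get("testdata/master-data.sql"));
            // Naive split on ';' -- fine for simple dumps without semicolons in data.
            for (String stmt : seedSql.split(";")) {
                if (!stmt.isBlank()) {
                    st.execute(stmt);
                }
            }
        }
    }
}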
Execute Tests: 
  1. Execution of test cases can be done: 
    1. manually 
    2. automated test tools like QTP, Selenium, Watir
    3. we can also use a combination of the above
    4. tests that cover the core functionality are good candidates for automation, since they change the least (see the Selenium sketch after this list)
  2. Execute documented test cases
    1. Most important ones first - why?
    2. because we should identify what is most important to the business first, so that dev can start working on fixes for those bugs sooner and has more time for the high-importance ones
  3. Exploratory testing
    1. Apart from what we documented, we should also explore
    2. execution on the fly - it brings out the creativity of the manual tester
    3. Define and execute the test case at the same time
    4. useful when you get new information
    5. discovering a new failure that requires investigation
    6. in the real world we keep getting information about the AUT during the test execution phase itself, so exploratory testing is helpful there
    7. based on the bugs discovered from documented tests, the tester gets a hunch of which areas are bug prone
    8. we should try to find more and more bugs, but we should also stick to our test timelines
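
Since the notes name Selenium as one option, here is a minimal Selenium WebDriver sketch in Java for a core login flow. The URL, element IDs, and expected page title are hypothetical, and it assumes ChromeDriver is set up.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical AUT URL and element IDs.
            driver.get("https://qa.example.com/login");
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // Expected vs. actual: a mismatch here is the "unexpected behavior"
            // the next section talks about analyzing.
            String title = driver.getTitle();
            if (!title.contains("Dashboard")) {
                System.out.println("FAIL: expected Dashboard, got: " + title);
            } else {
                System.out.println("PASS");
            }
        } finally {
            driver.quit();
        }
    }
}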
Analyze unexpected behavior
  1. Tips for analyzing unusual behavior
    1. reproduce - test it again; on other supported platforms (browser/os)
      1. one should reproduce the bug at least three times before filing it (a tiny harness sketch of this follows this section)
    2. isolate - test it differently
      1. try to pinpoint which exact scenario isn't working
      2. see which area is failing
    3. generalize - test it differently
      1. check whether the failure occurs more broadly (other data, other settings)
    4. compare - review results of similar tests
      1. was it working before?
    5. document clearly - crisp sequence of steps, include screenshots, attach application/server logs (as applicable)
    6. review - be sure
  2. Why bugs can be un-reproducible
    1. intermittence (in specific workflow, with specific data)
      1. thus we should log as many details as possible with the bug
    2. inconsistent test/dev environment
      1. we should be well aware of the similarities and differences between the two environments
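
A tiny sketch of the "reproduce at least three times" habit: run the failing steps in a loop and record how often they fail, which also surfaces intermittence. The runFailingScenario() hook is hypothetical, wrap your actual steps in it.

public class ReproHarness {
    public static void main(String[] args) {
        int attempts = 3;   // recreate at least thrice before filing
        int failures = 0;
        for (int i = 1; i <= attempts; i++) {
            boolean passed = runFailingScenario();
            System.out.printf("Attempt %d: %s%n", i, passed ? "passed" : "FAILED");
            if (!passed) failures++;
        }
        // 3/3 failures -> solid repro; 1/3 -> likely intermittent, so log extra
        // detail (data, workflow, environment) in the bug report.
        System.out.printf("Failed %d of %d attempts%n", failures, attempts);
    }

    // Hypothetical hook: wrap the exact steps that showed the unexpected behavior.
    private static boolean runFailingScenario() {
        return false; // replace with the real scenario
    }
}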
Bug Reporting
  1. Importance of bug reporting
    1. tangible output of testing
    2. describes failure mode in SUT (system under test)
    3. communication to developers
      1. will communicate the details of when/how/where we got the bug
      2. in today's business world, the tester and the dev are often geographically distributed - in such a scenario it is very important that the bug reporting is proper
      3. it's possible that the dev and tester are from different organizations too (third-party testing) - bug reporting should be formal there
  2. Quality indicators for good bug report
    1. clear to management
      1. bug report isn't just for dev
      2. managers and senior levels take a look at it too
      3. it largely depicts the progress the team is making
    2. actionable by development
      1. the whole objective of bug reporting is that dev has a good idea of what's at hand and what their action items are
    3. move quickly from opened to closed
      1. with the details given in the bug, the dev should be able to quickly recreate it 
      2. also should be able to pinpoint where the app is going wrong
    4. by giving such bug reports the tester is working constructively towards the team's success
  3. report format - a typical list; there can be more fields
    1. problem title - brief summary or one liner 
    2. description
    3. steps to reproduce
      1. enumerating the steps 
      2. give exact paths - no assumptions
    4. testing environment
      1. jot down all details of the environment
    5. actual results
      1. what we encountered while executing the tests
    6. expected results
      1. what were we expecting from the test
    7. severity - based on the customer's risk 
      1. as testers, we decide what the severity is
    8. priority - urgency to fix
      1. usually given by the dev
    9. apart from the above, we can attach screenshots along with various logs
  4. severity
    1. 0 - critical crash
    2. 1 - functionality not working as designed; no workaround exists
    3. 2 - functionality issue, but a workaround exists
    4. 3 - cosmetic issue, not a functional issue
  5. priority
    1. priority 1 - high : immediate fix needed
    2. priority 2 - medium
    3. priority 3 - low
  6. when severity is high, priority is high - mostly true. In some cases, though, an app crash is high severity, but if it happens in a rarely used feature it is low priority
  7. bug reporting tool 
    1. Bugzilla
    2. QC
    3. Use tools instead of Excel sheets because reports, search, and editing are easier in tools than in Excel - tools also send automatic emails/updates (a hedged sketch of filing a bug through Bugzilla's REST API follows this list)
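
Since Bugzilla is mentioned, here is a hedged sketch of filing a bug through its REST API with Java's built-in HttpClient. The host, API key, and exact field names are assumptions, so verify them against your Bugzilla version's documentation. Note how the description packs in steps, expected vs. actual results, and environment, matching the report format above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FileBug {
    public static void main(String[] args) throws Exception {
        // Assumed field names and values -- check your Bugzilla install's docs.
        String body = """
            {
              "product": "AUT",
              "component": "Login",
              "version": "1.0",
              "summary": "Login page crashes on empty password",
              "description": "Steps: 1. Open /login 2. Leave password blank 3. Click Submit.\\nExpected: validation message. Actual: HTTP 500.\\nEnv: Chrome 120, Windows 11, build 1.0.42.",
              "severity": "critical",
              "priority": "P1"
            }""";
        // Assumed endpoint and api_key auth -- placeholders, not real credentials.
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("https://bugzilla.example.com/rest/bug?api_key=YOUR_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}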
bug life cycle

(The slide at this point was a diagram of the stages a bug moves through; typically New > Assigned > Fixed > Verified > Closed, with Reopened and Rejected as side paths.)
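As a sketch of the same idea, the stages and their legal transitions can be modeled as a small state machine. Stage names vary by tool; these follow common conventions, not anything prescribed in the seminar.

import java.util.EnumSet;
import java.util.Set;

public enum BugState {
    NEW, ASSIGNED, FIXED, VERIFIED, CLOSED, REOPENED, REJECTED;

    // Typical legal transitions; real tools differ in names and rules.
    public Set<BugState> nextStates() {
        switch (this) {
            case NEW:      return EnumSet.of(ASSIGNED, REJECTED);
            case ASSIGNED: return EnumSet.of(FIXED, REJECTED);
            case FIXED:    return EnumSet.of(VERIFIED, REOPENED);
            case VERIFIED: return EnumSet.of(CLOSED, REOPENED);
            case REOPENED: return EnumSet.of(ASSIGNED);
            case CLOSED:   return EnumSet.of(REOPENED);
            default:       return EnumSet.noneOf(BugState.class);
        }
    }
}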
exit criteria - when to stop testing
  1. exit criteria is defined during test planning stage
  2. typical test exit criteria: 
    1. all identified test cases executed
    2. no critical and major bugs open 
  3. prioritize testing so that whenever you stop testing, you have done the best testing possible in the time available
    1. this ensures that even when you are asked to stop testing abruptly, the time given was best utilized (a small sketch of this time-boxed, priority-ordered execution follows this list)
    2. what we mean by an important test case follows an 80-20 rule: the feature is used 80% of the time, or 80% of the users use this feature for sure
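
A minimal sketch of the idea, assuming a hypothetical TestCase model: sort the documented cases by business priority and execute until the time box runs out.

import java.util.Comparator;
import java.util.List;

public class PrioritizedRun {
    // Hypothetical test-case model: higher businessPriority = more important.
    record TestCase(String id, int businessPriority, int minutes) {}

    public static void main(String[] args) {
        List<TestCase> suite = List.of(
                new TestCase("TC-login", 10, 15),
                new TestCase("TC-checkout", 9, 30),
                new TestCase("TC-tooltip-color", 1, 10));

        int budget = 40; // minutes left before the stop-testing call
        List<TestCase> ordered = suite.stream()
                .sorted(Comparator.comparingInt(TestCase::businessPriority).reversed())
                .toList();
        for (TestCase tc : ordered) {
            if (budget < tc.minutes()) break; // time box exhausted
            System.out.println("Executing " + tc.id());
            budget -= tc.minutes();
        }
    }
}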
test reporting - concepts 
  1. testing produces very valuable information
  2. should be communicated effectively
  3. test report covers
    1. current quality
      1. how many features are working as they were designed
    2. test progress
      1. how many features were test complete
    3. test efficiency
      1. it's not a good sign when more bugs are found towards the end of the test cycle
      2. how effectively was the testing time utilized - prioritizing is helpful here
      3. sometimes we overwork during releases to complete the testing because the builds we were given weren't proper - all this can be avoided if we plan properly
    4. coverage
      1. where more bugs were found
      2. which module is stable compared to others
test reporting format
  1. project overview
  2. types of testing done(system, performance, stress)
  3. bug tracking (inflow, outflow, open(cumulative))
    1. this gives an idea of how quickly bugs were caught, how quickly the caught ones were fixed, and how quickly the fixed ones were closed (e.g., 50 bugs filed in, 42 fixed and verified out, so 8 remain open cumulatively)
  4. defect analysis (module/priority/severity/build release)
    1. which areas had more bugs
    2. which module had more severe bugs
    3. helps in prioritizing future builds/bugs
    4. lots of bugs but mostly cosmetic - this tells us there is no reason for alarm
  5. invalid and un-reproducible bugs
    1. we should track these as it helps us identify the areas where the tester should improve
    2. thumb rule: 5-10% invalid bugs is acceptable
    3. it doesn't mean we shouldn't have any
    4. when in doubt whether it's a bug or not, the best solution is to file it - so that we don't miss it
  6. observations about the release
    1. don't give an opinion about the quality
    2. let the data talk
    3. thus put your observations in the form of data - like "20 severity-1 bugs found", etc.
    4. the more such bugs there are, the more time the testing takes
  7. recommendations
    1. the test manager recommends whether we should go for the release or not
    2. giving details on how the TM supports his/her decision
    3. many a time the quality team is blamed for release delays - but they should be seen as the people who test the product in and out and give constructive feedback
best practices of test execution and reporting
  1. find the scary bug first 
    1. test area with high risk first
      1. finding a high severity bug in the last week of testing isn't good for the project at all
  2. crisp communication of findings
    1. capture findings clearly with relevant details
      1. tools now come with dashboards - we can use them
    2. circulate these findings to appropriate parties
      1. this should be done at required intervals
  3. adapt to evolving circumstances
    1. number of bugs cannot be predicted
    2. external dependencies (delayed build, change in priorities etc)
    3. we should be ready to re-plan at short notice because of the issues found
    4. changes in resources/priorities can happen which aren't in our control
    5. thus, again, prioritizing is very effective in utilizing the time frame available
  4. work towards early resolution of blocking issues
    1. a blocker is a bug which doesn't allow us to even initiate the test
    2. thus a blocker blocks all the other workflows related to it
    3. we can always skip this part and test the rest, but when we leave/skip a blocked workflow it remains a grey area for us
    4. so it's always good to resolve blockers as soon as possible
A tester's job isn't just to be the gatekeeper for quality - we should be proactive, gain the confidence of dev and managers, and show that we are also working for the quality of the product

Extra Points Discussed during questions: 
  1. Test Coverage Tools: depend on the language your app is written in
  2. 100% test coverage: it's not possible in practice, given the various aspects involved - so many error paths, limited time, environmental constraints - the objective should be to cover the core functionality
  3. Can there be a scenario where the exit criteria change? - It's a management-level decision, where compromises are made...
  4. Ideally, who should decide the priority of a bug? - In practice this decision is part of the tester's screen when logging the bug

NOTE: Please note that these are my notes and they contain modifications to the content given on the source site. Also, for any content used directly from the source, all the copyrights at the source apply too!
