CS 4320 Software Testing and QA: Ex5 BLACK BOX TEST
===================================================

Platform: UNIX or PC (or paper)
Given:    object code and a functional design specification
Goal:     Use Black Box Test instead of White Box Test.

Black Box Test uses a functional description of the software and then
applies a method such as:

    o equivalence partitioning
    o boundary-value analysis
    o error guessing

In all cases, Black Box Test treats the software as a box with INPUTS and
OUTPUTS, but does not know, or care, about the inside of the box.


EQUIVALENCE PARTITIONING

Equivalence partitioning is a systematic process that identifies, on the
basis of whatever information is available, a set of interesting classes of
input conditions to be tested, where each class is representative of (or
covers) a large set of other possible tests.  If the partitioning is done
well, the product under test behaves in much the same way for every member
of a class, so one test case from the class stands in for all of them.  The
aim is to minimize the number of test cases required to cover the input
conditions.

There are two distinct steps: first identify the equivalence classes (ECs),
then identify a specific test case within each class.

For each external input:

(1) If the input specifies a RANGE of valid values, define one valid EC
    (within the range) and two invalid ECs (one below the range and one
    above the range).

    Example: If the input requires a month in the range 1-12,

        Valid EC:   1..12        Test Case: 4
        Invalid EC: <1           Test Case: -1
        Invalid EC: >12          Test Case: 20

(2) If the input specifies an exact NUMBER (N) of inputs, define one valid
    EC (exactly N inputs) and two invalid ECs (fewer than N, and more
    than N).

    Example: If the input requires 3 book titles,

        Valid EC:   3            Test Case: 3
        Invalid EC: <3           Test Case: 1
        Invalid EC: >3           Test Case: 5

(3) If the input specifies a SET of valid values, define one valid EC
    (within the set) and one invalid EC (outside the set).

    Example: If the input requires one of the names TOM, DICK, or HARRY,

        Valid EC:   in set       Test Case: TOM
        Invalid EC: not in set   Test Case: JOE

(4) If the input specifies a "must be" situation, define one valid EC and
    one invalid EC.

    Example: If the first character of the input must be a digit,

        Valid EC:   0..9         Test Case: 4
        Invalid EC: not 0..9     Test Case: x


BOUNDARY-VALUE ANALYSIS

Boundary-value analysis (BVA) is a variant and refinement of equivalence
partitioning: instead of picking an arbitrary member of each class, the
values at the boundaries of the class are tested, because boundaries are a
good place to look for defects.  For example, suppose N items are expected
as input; a "for loop" in the program might have a "<" sign when it really
needs a "<=" sign.  Although this is a black box test (the source code is
never examined), BVA can discover this problem; the sketch after the
examples below makes this concrete.

(1) If the input specifies a RANGE of valid values, test both ends of the
    range and the values just outside it.

    Example: If the input requires a month in the range 1-12,

        Valid EC:   1..12        Test Cases: 1, 12
        Invalid EC: <1           Test Case:  0
        Invalid EC: >12          Test Case:  13

(2) If the input specifies an exact NUMBER (N) of inputs, test N itself and
    the values just below and just above N.

    Example: If the input requires 3 book titles,

        Valid EC:   3            Test Case: 3
        Invalid EC: <3           Test Case: 2
        Invalid EC: >3           Test Case: 4
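To see why the boundary members are the revealing ones, here is a small
Python sketch (the handout's examples are language-neutral, so Python is
used only for illustration).  The validate_month routine is invented for
this demonstration and is not part of the software under test; it
deliberately contains the "<" versus "<=" mistake described above.

    # Hypothetical month validator, invented for illustration only.
    # The defect is deliberate: '<' should be '<=', so month 12 is rejected.
    def validate_month(month):
        return 1 <= month and month < 12     # off-by-one bug

    # Equivalence partitioning: one arbitrary member per class (see (1) above).
    ep_cases = {"EP valid 1..12": 4, "EP invalid <1": -1, "EP invalid >12": 20}

    # Boundary-value analysis: the edges of each class instead.
    bva_cases = {"BVA valid low": 1, "BVA valid high": 12,
                 "BVA invalid below": 0, "BVA invalid above": 13}

    for name, month in {**ep_cases, **bva_cases}.items():
        expected = 1 <= month <= 12          # what the specification says
        actual = validate_month(month)
        verdict = "DEFECT FOUND" if actual != expected else "ok"
        print(f"{name:18s} input={month:3d} expected={expected!s:5s} "
              f"got={actual!s:5s} {verdict}")

All three equivalence-partitioning picks pass, while the BVA pick month = 12
reports DEFECT FOUND: the boundary, not the arbitrary interior value, is
what exposes the "<" mistake.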
Exercises:

Given: a functional design specification

        % more golfscore.txt

a. Run a sample test file (without regard to the test data)

        % golf 4 5

   Note that BVA1.dat cannot possibly test just one of the cases listed
   above; it might, at the same time, also test 2 golf players.  So it will
   be listed again when you make your test plan for the players.  It is
   good to reuse a .dat file for VALID ECs, but it is not a good idea for
   INVALID ECs: when bad behavior results, what was the cause of it?  You
   should have one .dat per INVALID EC, and it should be a simple, clear
   formulation that demonstrates the bad input.  This will help isolate
   cause and effect.

   Do not start "multiplying" test cases.  You do not need to test 1 golf
   course versus 2 golfers, versus 10 golfers, etc.

   Do not put any BLANK spaces at the end of your input lines.  Trailing
   blanks can make the entire input invalid, and then you cannot observe
   the behavior you actually meant to test.

   Test everything about golfscore.txt, even if something other than BVA is
   required.  Do not trust that the software works.

   If you do not feel comfortable using a Unix editor (Software Engineers
   should be comfortable in Unix), then SFTP the .dat sources from a PC.
   Note: you may have to name the file .txt so it appears nicely in Notepad.

   Given object code, run the test cases:

        % golf <BVA1.dat >BVA1.out

   Repeat for BVA2.dat, etc.  (A small driver sketch at the end of this
   handout automates the repetition.)

c. Look at your *.out files.  What errors do you see?  The best way to
   report these errors is to add comments directly into the .out file
   itself, together with the test plan entry (from BVA.txt) that generated
   the result.  For example:

   (1) Number of golf courses is RANGE 1..4

                          Test Case    Input File    Output File
       Valid EC: 1..4         1         BVA1.dat      BVA1.out

   Be sure to give good observations on the results; this will be part of
   your grade.  Imagine you are the software engineer or, even better, the
   ultimate user of the software.  Given the specs of the software, what
   would you like to see happen when given an Invalid EC?  How well does
   the golf program actually do that?  That is the purpose of this course.

   After editing your .out files, it is easy to make one concatenation:

        % cat *.out >BVA.out

   You can then simply turn in BVA.txt (with the entire test plan) and
   BVA.out (with all of your results).

d. Project Notebook (see print.bat)

        README
        golfscore.txt
        sample.dat
        sample.out
        BVA.txt
        BVA1.dat
        BVA1.out
        BVA2.dat
        BVA2.out
        etc.
        report.txt  (or include the reports in the .out files directly)

Email submission:

        BVA.txt
        BVA.out   (concatenation of all .out files)
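If you would like to script the run instead of typing each command, here is
a minimal Python driver sketch.  It assumes the golf binary is in the
current directory, reads each .dat file on standard input, and writes its
report to standard output, exactly like the commands above; the BVA*.dat
names follow the convention used in this exercise, and the script name is
up to you.

    # Minimal driver sketch: run every BVA*.dat through golf, capture each
    # result in a matching .out file, then concatenate them into BVA.out.
    # Assumes ./golf reads test data on stdin and writes its report to stdout.
    import glob
    import subprocess

    out_names = []
    for dat in sorted(glob.glob("BVA*.dat")):
        out = dat.replace(".dat", ".out")
        with open(dat) as fin, open(out, "w") as fout:
            subprocess.run(["./golf"], stdin=fin, stdout=fout,
                           stderr=subprocess.STDOUT)
        out_names.append(out)

    # Equivalent of "cat *.out >BVA.out"; do this only AFTER you have added
    # your comments and observations to the individual .out files.
    with open("BVA.out", "w") as combined:
        for out in out_names:
            with open(out) as f:
                combined.write(f.read())

Even with a driver, read and annotate each .out file by hand before you
build BVA.out; the observations, not the automation, are what is graded.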