Pass/Fail Criteria provide an accurate pass or fail status at the end of a test. You can also determine the status in the middle of a test, although mid-test results may be less precise and harder to set up. The following topics explain a few methodologies for setting up Pass/Fail Criteria:
For Pass/Fail Criteria to work, you must be able to determine whether a test passes or fails by analyzing its measurements. Measurement criteria can then directly determine the pass or fail status of the test. The Test Case State and Test State or Step criteria are provided for custom control of when measurement criteria are evaluated, expired, or reset. There is little value in using the non-measurement criteria to directly determine the overall status of a test, since each state or step is expected to occur as a matter of course.
The recommended way to set up Pass/Fail Criteria is to first run the test to see how it works and to understand what is reported and what measurement values you observe. Then add Criteria directly from the Report tabs: right-click the measurement and select Add Criterion.
NOTE: Measurements available for selection in the Measurement Criterion editor are not guaranteed to be reported in the running tests. The editor lists include all the reported measurements, plus some additional measurements that may not be reported in some configurations. They do not include Sub-total measurements (Per-session/Per-DMF/Per-Bearer) that are created dynamically when the tests run.
The simplest criterion checks that a single measurement reaches a certain value during a test, using the default conditions (see Start, Stop, Expire, or Reset). This works best with counter measurements that only increase in value over time. For measurements that can cycle, such as Sessions Established, once the value reaches the criterion value the first time, the Criterion reaches its true state and stays in that state until its RESET condition is met, as illustrated in the sketch below.
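The latching behavior can be pictured as a small state machine. The following is a minimal sketch of that idea; the names (`Criterion`, `evaluate`, `reset`) are hypothetical and do not represent the product's API:

```python
# Minimal sketch of a single-measurement criterion's latching behavior.
# Hypothetical names; not the Landslide API.

class Criterion:
    def __init__(self, threshold):
        self.threshold = threshold
        self.state = "PENDING"

    def evaluate(self, value):
        # Once the measurement reaches the threshold, latch the true state,
        # even if a cycling measurement later drops below the threshold.
        if self.state == "PENDING" and value >= self.threshold:
            self.state = "PASS"
        return self.state

    def reset(self):
        # Only an explicit RESET condition returns the criterion to PENDING.
        self.state = "PENDING"

crit = Criterion(threshold=200)
for sessions_established in (50, 150, 200, 120, 0):  # a cycling measurement
    print(sessions_established, crit.evaluate(sessions_established))
# Latches PASS at 200 and stays PASS as the value cycles back down.
```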
The Status of each Criterion is also reported in the Test Log.
System log messages are associated with the conditions, and the value shown in each log message is the measurement value at the time the condition was evaluated. Specifically, the value is not the value at the time the criterion was STOPPED.
For Criteria that “EXPIRE”:

Value=< VALUE WHEN THE CRITERION EXPIRED >

In this case, if a STOP condition occurs before the test ends, the value can continue to change without triggering the criterion. When the criterion Expires, the value might satisfy the condition, but the criterion cannot trigger because it was STOPPED.

For Criteria that “PASS or FAIL” due to the value:

Value=< VALUE WHEN CONDITION WAS TRUE >
NOTE: Averages or Rate type measurements are not good candidates for Pass/Fail Criteria. Subtotals and Per-Interval measurements are not supported at all.
When comparing two measurements, the criterion should not be evaluated until the end of the test cycle, to ensure that both measurements have been fully reported. A typical example is comparing paired counters such as Actual Mobile Node Connects and Actual Mobile Node Detaches.
The best way to accomplish an end-of-cycle evaluation is to set the START and EXPIRE conditions based on End of Test or End of Iteration (see Start, Stop, Expire, or Reset). If the test has multiple test cases running in series under automation control, you can evaluate measurements at the end of each test case by setting up Test Case State or Test State or Step Criteria and selecting them as your START and EXPIRE conditions, as sketched below.
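Conceptually, the comparison only becomes meaningful once both counters are fully reported; evaluating early can produce a false FAIL while one counter lags the other. A minimal sketch of deferring the comparison to End of Test, with hypothetical data structures (not the product API):

```python
# Minimal sketch: evaluate a two-measurement comparison only at End of Test,
# after both counters have been fully reported. Hypothetical structures.

def compare_at_end_of_test(report_intervals):
    verdict = None
    for interval in report_intervals:
        if interval.get("end_of_test"):
            # START and EXPIRE at End of Test: evaluate exactly once, here.
            connects = interval["Actual Mobile Node Connects"]
            detaches = interval["Actual Mobile Node Detaches"]
            verdict = "PASS" if connects == detaches else "FAIL"
    return verdict

intervals = [
    # Mid-test, detaches naturally lag connects; comparing here would FAIL.
    {"Actual Mobile Node Connects": 100, "Actual Mobile Node Detaches": 40},
    {"Actual Mobile Node Connects": 100, "Actual Mobile Node Detaches": 100,
     "end_of_test": True},
]
print(compare_at_end_of_test(intervals))  # PASS
```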
The following Example Test Automation Control Steps table uses three sets of test cases run in series. In this example, there are three sets of UMTS tests, and the goal is to verify that Node Connects == Node Detaches in each test case.
Example Test Automation Control Steps
| # | Function or Test Case | Action | Predecessor | State | Delay |
|---|---|---|---|---|---|
| 1 | UPTS4.5.1.1.1Sim [SGSN Node] / ts1:tc0 | Init | | | 0 |
| 2 | UPTS4.5.1.1.1Sim [SGSN Node] / ts1:tc0 | Start | | | 0 |
| 3 | UPTS4.5.1.1.1 [UMTS] / ts0:tc0 | Init | UPTS4.5.1.1.1Sim [SGSN Node] / ts1:tc0 | RUNNING | 0 |
| 4 | UPTS4.5.1.1.1 [UMTS] / ts0:tc0 | Start | | | 0 |
| 5 | UPTS4.5.1.1.1 [UMTS] / ts0:tc0 | Stop | UPTS4.5.1.1.1 [UMTS] / ts0:tc0 | RUNNING | 0 |
| 6 | UPTS4.5.1.1.1 [UMTS] / ts0:tc0 | Cleanup | UPTS4.5.1.1.1 [UMTS] / ts0:tc0 | STOPPED | 0 |
| 7 | UPTS4.5.1.1.1Sim [SGSN Node] / ts1:tc0 | Stop | | | 0 |
| 8 | UPTS4.5.1.1.1Sim [SGSN Node] / ts1:tc0 | Cleanup | UPTS4.5.1.1.1Sim [SGSN Node] / ts1:tc0 | STOPPED | 0 |
| 9 | UPTS4.5.1.1.2Sim [SGSN Node] / ts1:tc1 | Init | | | 0 |
| 10 | UPTS4.5.1.1.2Sim [SGSN Node] / ts1:tc1 | Start | | | 0 |
| 11 | UPTS4.5.1.1.2 [UMTS] / ts0:tc1 | Init | UPTS4.5.1.1.2Sim [SGSN Node] / ts1:tc1 | RUNNING | 0 |
| 12 | UPTS4.5.1.1.2 [UMTS] / ts0:tc1 | Start | | | 0 |
| 13 | UPTS4.5.1.1.2 [UMTS] / ts0:tc1 | Stop | UPTS4.5.1.1.2 [UMTS] / ts0:tc1 | RUNNING | 0 |
| 14 | UPTS4.5.1.1.2 [UMTS] / ts0:tc1 | Cleanup | UPTS4.5.1.1.2 [UMTS] / ts0:tc1 | STOPPED | 0 |
| 15 | UPTS4.5.1.1.2Sim [SGSN Node] / ts1:tc1 | Stop | | | 0 |
| 16 | UPTS4.5.1.1.2Sim [SGSN Node] / ts1:tc1 | Cleanup | UPTS4.5.1.1.2Sim [SGSN Node] / ts1:tc1 | STOPPED | 0 |
| 17 | UPTS4.5.1.1.3Sim [SGSN Node] / ts1:tc2 | Init | | | 0 |
| 18 | UPTS4.5.1.1.3Sim [SGSN Node] / ts1:tc2 | Start | | | 0 |
| 19 | UPTS4.5.1.1.3 [UMTS] / ts0:tc2 | Init | UPTS4.5.1.1.3Sim [SGSN Node] / ts1:tc2 | RUNNING | 0 |
| 20 | UPTS4.5.1.1.3 [UMTS] / ts0:tc2 | Start | | | 0 |
| 21 | UPTS4.5.1.1.3 [UMTS] / ts0:tc2 | Stop | UPTS4.5.1.1.3 [UMTS] / ts0:tc2 | RUNNING | 0 |
| 22 | UPTS4.5.1.1.3 [UMTS] / ts0:tc2 | Cleanup | UPTS4.5.1.1.3 [UMTS] / ts0:tc2 | STOPPED | 0 |
| 23 | UPTS4.5.1.1.3Sim [SGSN Node] / ts1:tc2 | Stop | | | 0 |
| 24 | UPTS4.5.1.1.3Sim [SGSN Node] / ts1:tc2 | Cleanup | UPTS4.5.1.1.3Sim [SGSN Node] / ts1:tc2 | STOPPED | 0 |
The following table shows the simplest way to set up a Compare Measurements criterion for each test case, with each criterion set to Start and Expire at the end of the test:
| # | Condition | Start | Stop | Expire | Reset |
|---|---|---|---|---|---|
| 0 | [ PASS if (ts0::tc0-Test Summary-Actual Mobile Node Connects == ts0::tc0-Test Summary-Actual Mobile Node Detaches) ] | End of Test | Never | End of Test | Never |
| 1 | [ PASS if (ts0::tc1-Test Summary-Actual Mobile Node Connects == ts0::tc1-Test Summary-Actual Mobile Node Detaches) ] | End of Test | Never | End of Test | Never |
| 2 | [ PASS if (ts0::tc2-Test Summary-Actual Mobile Node Connects == ts0::tc2-Test Summary-Actual Mobile Node Detaches) ] | End of Test | Never | End of Test | Never |
If you would like to monitor the Pass/Fail Criteria live, set up the test so that the Measurement Criteria Start and Expire at the end of each test case execution. The Test Summary tab measurements are reported every second, and most other measurements are reported every 15 seconds; however, all measurements for a given test case are reported during the Cleanup function, so you can use Cleanup as a trigger for your Criteria.
As soon as the Cleanup step completes, all measurements are reported to the TAS. Based on the automation control steps table above, the example below shows Test State or Step Criteria set up at the end of each test case execution, at steps 8 and 16. These steps execute just after the Cleanup functions for the Nodal test cases (steps 6 and 14, respectively). A sketch of this chaining follows the table below.
NOTE: The Test State or Step Criteria in the table below may also be used as the Start/Expire conditions for custom WHEN Criteria with the Compare Measurements Criteria.
| # | Condition | Start | Expire |
|---|---|---|---|
| 0 | [ PASS if ( Test.stateOrStep == 8) ] | End of Test | End of Test |
| 1 | [ PASS if ( Test.stateOrStep == 16) ] | End of Test | End of Test |
| 2 | [ PASS if (ts0::tc0-Test Summary-Actual Mobile Node Connects == ts0::tc0-Test Summary-Actual Mobile Node Detaches) ] | (0) Test.stateOrStep == 8 | (0) Test.stateOrStep == 8 |
| 3 | [ PASS if (ts0::tc1-Test Summary-Actual Mobile Node Connects == ts0::tc1-Test Summary-Actual Mobile Node Detaches) ] | (1) Test.stateOrStep == 16 | (1) Test.stateOrStep == 16 |
| 4 | [ PASS if (ts0::tc2-Test Summary-Actual Mobile Node Connects == ts0::tc2-Test Summary-Actual Mobile Node Detaches) ] | End of Test | End of Test |
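Rows 2 and 3 above chain a measurement criterion off a step criterion: the comparison for tc0 starts and expires when the automation reaches step 8, and the comparison for tc1 when it reaches step 16. A rough sketch of that chaining, with hypothetical names and data (not the product API):

```python
# Rough sketch of chaining: a Test State or Step criterion "occurs" at a given
# automation step, and that occurrence triggers a measurement criterion.
# Hypothetical names and data.

def run_chained(steps, measurements_at_step):
    results = {}
    for step in steps:
        if step == 8:     # criterion (0): Test.stateOrStep == 8
            m = measurements_at_step[8]    # tc0 counters, reported at Cleanup
            results["tc0"] = "PASS" if m["connects"] == m["detaches"] else "FAIL"
        elif step == 16:  # criterion (1): Test.stateOrStep == 16
            m = measurements_at_step[16]   # tc1 counters, reported at Cleanup
            results["tc1"] = "PASS" if m["connects"] == m["detaches"] else "FAIL"
    return results

print(run_chained(
    steps=range(1, 25),
    measurements_at_step={8: {"connects": 500, "detaches": 500},
                          16: {"connects": 500, "detaches": 498}},
))  # {'tc0': 'PASS', 'tc1': 'FAIL'}
```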
To ensure the measurements are reported before the Criterion is started, you could also add a two-second Delay to the Automation Control step that follows the Cleanup.
NOTE: The steps in the table below are repeated from the Example Test Automation Control Steps table for ease of reference.
| # | Function or Test Case | Action | Predecessor | State | Delay |
|---|---|---|---|---|---|
| 6 | UPTS4.5.1.1.1 [UMTS] / ts0:tc0 | Cleanup | UPTS4.5.1.1.1 [UMTS] / ts0:tc0 | STOPPED | 0 |
| 7 | UPTS4.5.1.1.1Sim [SGSN Node] / ts1:tc0 | Stop | | | 2 |
| 8 | UPTS4.5.1.1.1Sim [SGSN Node] / ts1:tc0 | Cleanup | UPTS4.5.1.1.1Sim [SGSN Node] / ts1:tc0 | STOPPED | 0 |
When evaluating measurements that roll, such as in a Session Loading test, you can use measurement criteria against Sessions/Nodes Established to start, expire, and reset other measurement criteria.
The following is an example of a session model. Where point A is the start of sessions establishing, B is when all sessions are established, C is when sessions begin to stop, and D is when all sessions are stopped. To achieve this model, the session pending and hold times must be equal and large enough to allow for the steady state time from point B to C and point D to A. Ideally there will be at least 15 seconds in the steady state part, to allow for all measurements to report fully.
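As a rough back-of-the-envelope check, you can verify that a configuration leaves at least 15 seconds of steady state. The relationship between ramp time and hold time used here is an assumption for illustration, not a product formula:

```python
# Rough, illustrative check of the session model timing. Assumes the
# steady-state window (B to C, or D to A) is roughly the hold/pending time
# minus the ramp time needed to establish or stop all sessions.
# Hypothetical helper; not a product formula.

def steady_state_ok(pending_time, hold_time, ramp_time, min_steady=15):
    if pending_time != hold_time:   # the model requires equal times
        return False
    steady = hold_time - ramp_time  # seconds spent at full (or zero) load
    return steady >= min_steady

print(steady_state_ok(pending_time=60, hold_time=60, ramp_time=30))  # True: 30 s steady
print(steady_state_ok(pending_time=60, hold_time=60, ramp_time=50))  # False: only 10 s
```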
NOTE: If you are only interested in 1-second measurements, then the following is not required.
The graph below shows Client Data Nodes Established in an example IP Application Node test that follows the model.
To determine points B and D, two Log type criteria are set up against Client Data Nodes Established: one at the maximum session count, 200, and one at 0 sessions. They expire and reset each other. The individual test case measurement is used because the Summary measurement only executes on the 15-second interval, whereas individual test case measurements execute on each measurement change. This matters because Client Data Nodes Established reports its measurement every second, and we would otherwise lose granularity. A sketch of the mutual expire/reset behavior follows the criteria table below.
| # | Condition | Start | Stop | Expire | Reset |
|---|---|---|---|---|---|
| 0 | [ LOG if (ts0::tc0-Test Summary-Client Data Nodes Established >= 200) ] | Start of Test | Never | (1) ts0::tc0-Test Summary-Client Data Nodes Established == 0 | (1) ts0::tc0-Test Summary-Client Data Nodes Established == 0 |
| 1 | [ LOG if (ts0::tc0-Test Summary-Client Data Nodes Established == 0) ] | (0) ts0::tc0-Test Summary-Client Data Nodes Established >= 200 | Never | (0) ts0::tc0-Test Summary-Client Data Nodes Established >= 200 | (0) ts0::tc0-Test Summary-Client Data Nodes Established >= 200 |
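The two criteria act like a hysteresis pair: the >= 200 criterion marks point B, its occurrence arms the == 0 criterion, and the == 0 occurrence marks point D and re-arms the first. A minimal sketch of that mutual expire/reset behavior, with hypothetical names (not the product API):

```python
# Minimal sketch of two LOG criteria that expire/reset each other:
# ">= 200" marks point B (all sessions up); "== 0" marks point D (all down).
# Hypothetical names and sample data.

def track_points(samples, max_nodes=200):
    high, low = "PENDING", "PENDING"   # ">= 200" and "== 0" criterion states
    events = []
    for t, nodes in samples:
        if high == "PENDING" and nodes >= max_nodes:
            high, low = "OCCURRED", "PENDING"   # occurrence resets "== 0"
            events.append((t, "point B: all sessions established"))
        elif low == "PENDING" and high == "OCCURRED" and nodes == 0:
            low, high = "OCCURRED", "PENDING"   # occurrence resets ">= 200"
            events.append((t, "point D: all sessions stopped"))
    return events

samples = [(15, 50), (45, 200), (90, 200), (120, 80), (150, 0), (210, 200)]
for event in track_points(samples):
    print(event)
# (45, 'point B: all sessions established')
# (150, 'point D: all sessions stopped')
# (210, 'point B: all sessions established')
```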
The following shows the Pass/Fail history, with each criterion toggling back and forth during the session loading test. These criteria may be used to start, stop, expire, or reset other criteria when the test first reaches a steady state of all sessions running and when all sessions are stopped.
| Interval | Elapsed Time (sec) | Test Iteration | ts0::tc0-Test Summary-Client Data Nodes Established >= 200 | ts0::tc0-Test Summary-Client Data Nodes Established == 0 |
|---|---|---|---|---|
| 1 | 15 | 1 | PENDING | PENDING |
| 2 | 30 | 1 | PENDING | PENDING |
| 3 | 45 | 1 | PENDING | PENDING |
| 4 | 60 | 1 | PENDING | PENDING |
| 5 | 75 | 1 | PENDING | PENDING |
| 6 | 90 | 1 | OCCURRED | PENDING |
| 7 | 105 | 1 | OCCURRED | PENDING |
| 8 | 120 | 1 | OCCURRED | PENDING |
| 9 | 135 | 1 | OCCURRED | PENDING |
| 10 | 150 | 1 | PENDING | OCCURRED |
| 11 | 165 | 1 | PENDING | OCCURRED |
| 12 | 180 | 1 | PENDING | OCCURRED |
| 13 | 195 | 1 | PENDING | OCCURRED |
| 14 | 210 | 1 | OCCURRED | PENDING |
| 15 | 225 | 1 | OCCURRED | PENDING |
| 16 | 240 | 1 | OCCURRED | PENDING |
| 17 | 255 | 1 | OCCURRED | PENDING |
| 18 | 270 | 1 | PENDING | OCCURRED |
| 19 | 285 | 1 | PENDING | OCCURRED |
| 20 | 300 | 1 | PENDING | OCCURRED |
| 21 | 315 | 1 | PENDING | OCCURRED |
| 22 | 330 | 1 | OCCURRED | PENDING |
| 23 | 345 | 1 | OCCURRED | PENDING |
| 24 | 360 | 1 | OCCURRED | PENDING |
| 25 | 375 | 1 | OCCURRED | PENDING |
| 26 | 390 | 1 | PENDING | OCCURRED |
| Current | 399 | 1 | PENDING | OCCURRED |
When a test is set up for multiple iterations, you may check the Criteria on each iteration by using End of Iteration (or an earlier state) for Expire and Start of Iteration for Reset. The following illustrates a multi-iteration setup for the Compare Two Measurements example shown above:
| # | Condition | Start | Stop | Expire | Reset |
|---|---|---|---|---|---|
| 0 | [ PASS if (ts0::tc0-Test Summary-Actual Mobile Node Connects == ts0::tc0-Test Summary-Actual Mobile Node Detaches) ] | End of Iteration | Never | End of Iteration | Start of Iteration |
| 1 | [ PASS if (ts0::tc1-Test Summary-Actual Mobile Node Connects == ts0::tc1-Test Summary-Actual Mobile Node Detaches) ] | End of Iteration | Never | End of Iteration | Start of Iteration |
| 2 | [ PASS if (ts0::tc2-Test Summary-Actual Mobile Node Connects == ts0::tc2-Test Summary-Actual Mobile Node Detaches) ] | End of Iteration | Never | End of Iteration | Start of Iteration |
NOTE: The Start condition is also set to End of Iteration, since End of Test no longer makes sense (End of Test would occur after the criterion Expires at End of Iteration). Each time the test starts a new iteration, the Criteria are reset to the PENDING state. The Failure Count is incremented any time a Criterion fails.
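To make the per-iteration lifecycle concrete, here is a minimal sketch (hypothetical names and data, not the product API) of a criterion that resets at Start of Iteration, evaluates at End of Iteration, and accumulates a Failure Count across iterations:

```python
# Minimal sketch of per-iteration evaluation with a cumulative Failure Count.
# Hypothetical names and data.

def run_iterations(iterations):
    failure_count = 0
    for n, m in enumerate(iterations, start=1):
        state = "PENDING"  # Reset: Start of Iteration returns the criterion to PENDING
        # Start/Expire at End of Iteration: the comparison is evaluated exactly once
        if m["connects"] == m["detaches"]:
            state = "PASS"
        else:
            state = "FAIL"
            failure_count += 1  # Failure Count increments each time a Criterion fails
        print(f"iteration {n}: {state}")
    print(f"Failure Count: {failure_count}")

run_iterations([
    {"connects": 500, "detaches": 500},  # PASS
    {"connects": 500, "detaches": 497},  # FAIL -> Failure Count = 1
    {"connects": 500, "detaches": 500},  # PASS
])
```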