
3 Common QA Misconceptions

Plan Performance Testing with Platform Architecture

If performance testing is planned, both the test tool platform and the test environment need to be robust enough to support it. This requires either a server farm capable of exercising the application at a realistic load or ensuring that the application's performance test interfaces can be accessed by an external cloud-based platform. In the latter case, plan for security testing to be completed before the performance testing begins.

Automated Testing is Worth the Time

Many projects do not use automated testing because of the time involved in creating the tests. That decision is usually based on optimistic estimates of the expected number of defects and the iterations required to resolve them. Ironically, not having automated regression tests increases the number of iterations needed to resolve defects and increases the total effort of testing.

Acceptance Tests are not Regression Tests

Acceptance tests are generally limited in scope to how users are expected to use the system. Regression tests should also include edge cases and dynamically generated inputs, with those inputs recorded whenever defects are identified so that failures can be re-created and test parameters adjusted.
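One way to make dynamically generated inputs reproducible is to seed the generator and record the seed with each run, so a failing input set can be replayed exactly. This is a minimal sketch (the field and class names are illustrative, not from any specific framework):

```java
import java.util.Random;

public class ReproducibleInputs {
    public static void main(String[] args) {
        // Record the seed so a failing run can be replayed exactly.
        long seed = System.currentTimeMillis();
        System.out.println("Test data seed: " + seed);

        Random random = new Random(seed);
        for (int i = 0; i < 3; i++) {
            int amount = random.nextInt(10_000); // e.g., a payment amount field
            System.out.println("input[" + i + "] = " + amount);
        }
        // To re-create a defect, construct the generator with the logged seed:
        // Random random = new Random(loggedSeed);
    }
}
```

Because the same seed always yields the same series of values, logging one number is enough to regenerate every input in the run.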

Bonus Best Practice

Defect summaries should be formatted as TYPE: Component – Functional Error Summary (for example, BUG: Checkout – Discount field accepts expired codes).

If you found this interesting, please share.

© Scott S. Nelson

Random Data post at Logic20/20

Quick summary: Proven successful approaches to generating random test data in JMeter as part of a more complex API testing framework

Random data prevents optimized libraries from caching requests and displaying artificially high performance. It is also useful to test the range of accepted values to ensure that the full range is supported. In addition, random data can help with building negative test cases, which are important both for exercising functionality and for determining whether negative test cases result in performance impacts (which are also a security risk). There are many ways to create random data, and this article provides examples of some approaches I have used with consistently positive results as part of a more complex API testing framework.

Apache functions

There are several functions documented on the Apache site. I find that I need to look beyond the documentation for good examples, and you will need to look beyond this short article for more than one, but here is one I use often: ${__RandomString([length],[characters],[variableName])}:

${__RandomString(${__Random(3,20)},ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz,
              notes.saveSessionNotes.myOrderSessionApi)}
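For readers who want to see the equivalent logic outside JMeter, here is a rough Java sketch of what the call above produces: a random alphabetic string whose length is itself random between 3 and 20. This is an illustration of the behavior, not JMeter's actual implementation:

```java
import java.util.concurrent.ThreadLocalRandom;

public class RandomStringSketch {
    static final String ALPHABET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

    // Build a random alphabetic string between minLen and maxLen characters,
    // mirroring ${__RandomString(${__Random(3,20)}, A..z, ...)}.
    static String randomString(int minLen, int maxLen) {
        int length = ThreadLocalRandom.current().nextInt(minLen, maxLen + 1);
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(
                ThreadLocalRandom.current().nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(randomString(3, 20));
    }
}
```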

Using JSR223 Samplers

JSR223 Samplers are great for taking advantage of the efficiency of the Groovy language in JMeter. There are two types of random data that are well suited for JSR223 Sampler generation. One is generic data, such as numbers, dates, and alpha-numeric characters with an easily defined pattern. The other is a small sample of specific values that can be used randomly.

The static data stored in the JSR223 Sampler should be as realistic as possible. Extracting the data from a production database copy using a query such as

SELECT DISTINCT [FIELD_NAME] from [SCHEMA.TABLE]

is the best way to get realistic and valid values. In cases where an empty string as input returns null as output, weigh the impact of skipping the null inputs against writing assertions complex enough to handle the nulls dynamically.

Random dates

The example below calculates the difference in days between the earliest and latest dates, randomly selects a number within that range of days, and subtracts it from the latest date to create a random test value (in this case, a date of birth):

Date todayDob   = new Date();
Date bottomDate = Date.parse('yyyyMMdd', '19010101');
int  diffDays   = todayDob.minus(bottomDate);
int  rndDiff    = org.apache.commons.lang3.RandomUtils.nextInt(0, diffDays);
Date randDob    = todayDob.minus(rndDiff);
vars.put('myApiPersist.dob', randDob.format('yyyy-MM-dd'));

Random strings from list

For simple lists of values, an object array is easy to read and maintain.

String[] empStatuses  = ['Full time student', 'Unemployed', 'Retired',
                         'Employed', 'Part time student', 'Unspecified'];
int      empStatusIdx = org.apache.commons.lang3.RandomUtils.nextInt(0,
                         empStatuses.size());
vars.put('myApiPersist.employmentStatus', empStatuses[empStatusIdx]);

In the example above, note that org.apache.commons.lang3.RandomUtils is used instead of the Groovy Random class, because Random can repeat the same sequence of values and easily lead to clashes across threads. (There are other functions and approaches as well; I tend to re-use previous code until a compelling reason to change occurs.)

For very small lists, using a random number and the modulus operator is more efficient if the random number is already being generated for another value, as in this example:

// todayDob is re-used from the random date example above
String[] patGender = ['M', 'F'];
vars.put('myApiPersist.gender', patGender[(int) (todayDob.getTime() % 2)]);

Two-dimensional arrays

2D arrays are necessary when values must be used in pairs to be valid. For those unfamiliar with working with multi-dimensional arrays, here is an example:

String[][] nearestLocation = [["Don't know", "UNK"], ["SoCal", "42035005"],
                              ["None within 100 miles", "OTH"],
                              ["Midwest", "20430005"],
                              ["Virtual", "ASKU"],
                              ["America South", "38628009"]];
int nearestLocationIdx = org.apache.commons.lang3.RandomUtils.nextInt(0,
                              nearestLocation.size());
vars.put('myApiPersist.nearestLocation', nearestLocation[nearestLocationIdx][0]);
vars.put('myApiPersist.nearestLocationCode', nearestLocation[nearestLocationIdx][1]);

Maps

Maps are an alternative to the 2D array. Here is an example from a file upload API test:

def fileSize = [1, 5, 10, 20];
def extensionAndMimeType = [tif : 'image/tiff',
                            pdf : 'application/pdf',
                            jpg : 'image/jpeg',
                            docx: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document'];
def extensions = extensionAndMimeType.collect { entry -> entry.key };
String filePrefix = fileSize.get(
    java.util.concurrent.ThreadLocalRandom.current().nextInt(0, fileSize.size()));
String fileExtension = extensions.get(
    java.util.concurrent.ThreadLocalRandom.current().nextInt(0, extensions.size()));
vars.put('session_file.name', String.format('%sMB_File.%s', filePrefix, fileExtension));
vars.put('mime.type', extensionAndMimeType.get(fileExtension));

Random keys from maps

Sometimes you only need the key from a map at certain points in your test. Rather than maintain two samplers, you can access just the keys.

import java.util.concurrent.ThreadLocalRandom

def fileTypes = ['FileAttachment': 1, 'FileAttachmentOpaqueId': 1,
                 'ClinicalNote': 3, 'ClinicalNoteTemplate': 4,
                 'ExternalDocument': 5, 'FaxDocument': 6,
                 'Multiple': 8, 'CCDABatch': 9];
def randomFileTypeKey = (fileTypes.keySet() as List)
    .get(ThreadLocalRandom.current().nextInt(fileTypes.size())).toString();

// Either of the following returns a random value:
def randomFileTypeValue = fileTypes.get((fileTypes.keySet() as List)
    .get(ThreadLocalRandom.current().nextInt(fileTypes.size())).toString());
// or, re-using the key selected above:
// def randomFileTypeValue = fileTypes.get(randomFileTypeKey);

Lists

At a certain (arbitrary) level of complexity, a List implementation may be easier to maintain (or it may justify an exception to use a CSV file instead). Here is an example of a List of Lists being used to generate all possible combinations of three sets of health-related parameters:

List<List> testValues = [
    ["FEVER", "COUGH", "SORE_THROAT"],           // v3SymptomsSet
    ["POSITIVE", "NEGATIVE", "PENDING"],         // testResult
    ["YES", "NO", "INDICATED_BUT_UNAVAILABLE"]]; // v2TestedStatus
List<List> patCOVID19ScreeningApiVars = [];
List temp = [];

void generatePermutations(List<List> lists, List result, int depth, List current) {
    if (depth == lists.size()) {
        result.add(current);
        return;
    }
    for (int i = 0; i < lists.get(depth).size(); i++) {
        generatePermutations(lists, result, depth + 1, current + lists.get(depth).get(i));
    }
}

generatePermutations(testValues, patCOVID19ScreeningApiVars, 0, temp);
vars.putObject("patCOVID19ScreeningApiVars", patCOVID19ScreeningApiVars);
vars.put("patCOVID19ScreeningApiVars.Count",
        (patCOVID19ScreeningApiVars.size() - 1).toString());
SampleResult.setIgnore();

Ctrl+C/Ctrl+V

There are some alternate ways to generate random values described in this Stack Overflow thread.

Some key considerations for options to generate a random number:

  1. java.util.Random is a pseudo-random generator rather than a source of true random values. A test in JMeter resulted in the same series of numbers on every run, which can increase the likelihood of clashes between threads.
  2. While org.apache.commons.lang3.RandomUtils is a wrapper around java.util.Random, it is (like most of the Apache Commons collection) a well-designed wrapper, and in testing it produced a different series of values on every run.
  3. ThreadLocalRandom is recommended by some as more efficient. Given that these efficiencies are measured in sub-milliseconds, and that Apache Commons is a more reliable collection than java.util, the recommendation here is to use RandomUtils.
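The repeatability point is easy to demonstrate: two java.util.Random instances created with the same seed emit identical sequences, while ThreadLocalRandom gives each thread its own generator so concurrent samplers never contend on a shared instance. A minimal sketch (RandomUtils is omitted because it is a third-party dependency):

```java
import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

public class RandomnessDemo {
    public static void main(String[] args) {
        // Same seed => same series on every run: fine for replaying a
        // failure, dangerous if several threads share one seeded instance.
        Random a = new Random(12345);
        Random b = new Random(12345);
        System.out.println(a.nextInt(100) == b.nextInt(100)); // true

        // ThreadLocalRandom keeps one generator per thread, so concurrent
        // samplers do not step on each other's state.
        int value = ThreadLocalRandom.current().nextInt(0, 100);
        System.out.println(value >= 0 && value < 100); // true
    }
}
```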

Random conclusion

The reason many API tests fail to catch defects prior to release is the limited set of inputs used to exercise them. Using random data in conjunction with iterative loops will lower the amount of time spent patching production issues and leave more time for building business capabilities and value.


Originally published at https://www.logic2020.com/insight/tactical/generate-random-test-data-run-time-jmeter


5 tips for using JMeter Fragments for enterprise API testing

Google is the greatest boon to software development since disks replaced paper punch tape … but solutions for individual topics found online often skip the implications of applying them at scale. This blog post will not add new information to the collective knowledge of using fragments in JMeter to test APIs; it will put some important concepts in the perspective of applying them to real-world enterprise solutions.

First things are not always first

Ideally, this article would start with the first tip. I know I generally prefer my information that way, i.e., direct and to the point. If you are reading this, it is because you understand that isn’t always the best way to get to where you need to be. Specifically, a good tip to start with for using Test Fragments in large-scale JMeter API Test Plans is how to plan your taxonomy. The problem with that approach is that you may not understand why that is important until you get to the point where significant refactoring will be required. So, we’ll start with the “why,” then get to the “how.”

Tip 1: Not all fragments are the same

The main purpose of a Test Fragment is to apply the same type of reuse that is encouraged in enterprise design. The other value it provides is the ability to cleanly separate different aspects of a single JMeter Test Plan in support of multiple contributors and source control.

There are two types of Test Fragments in JMeter:

• A node with a Test Plan as the parent, referenced by a Module Controller that is a child of a Thread Group. For convenience, these will be referred to as Test Fragment nodes.

• A JMX file that is created by applying the “Save As Test Fragment” menu option on a Controller and then referenced by an Include Controller that is a child of a Thread Group. These will be referred to as Test Fragment files.

Test Fragment nodes cannot be included in a Test Fragment file, which is why they either need to be saved off as a Test Fragment file or duplicated within the controller that will be exported as a Test Fragment file. This is the first thing to know before creating your source taxonomy.

Tip 2: Refactoring before committing

A key concept in using Test Fragments for large Test Plans that is rarely pointed out is that it is best that Test Fragment nodes are only used during development to avoid redundant refactoring. When specific steps are required multiple times, placing them in a Test Fragment node while working them out is a huge time-saver.

At the completion of development, the contents should be saved as one or more Test Fragment files that will be included in the main JMX file containing the Test Plan and, potentially (though rarely), other Test Fragment files. The value of moving the node to a Test Fragment file before committing to source control is that it minimizes the overlap in Test Fragment files between efforts. When working alone, this doesn't present an issue, but as soon as there are multiple contributors, it quickly becomes very time consuming to manage merge conflicts when too much is in the main JMX file. The use of Test Fragment files greatly simplifies this.

Tip 3: Rules are made to be broken, except when they shouldn’t be

The decision to save a Test Fragment node as a Test Fragment file is based on how many other Test Fragment files may require the Test Fragment. For example, if a Test Fragment node is used a few times in a specific Test Fragment file but not in any other Test Fragment file, then saving as a separate Test Fragment file is of little value. Likewise, if the Test Fragment node is only referenced once or twice and in only two Test Fragment files, it is still not worth creating a separate Test Fragment file, because they should rarely need to be refactored once the API they test is released to production. This is when you must break the rule of re-use and simply duplicate the nodes where necessary once they are stable.

While this type of duplication would not be appropriate in an application developed in an object-oriented language, JMeter is a platform driven by configuration files, where re-use is only possible in a limited way before it becomes more work than it is worth.

There may be cases where a particular process is repeated several times. First, review the use of this process to determine if there is a more elegant solution. If not, continue to develop with the process in a Test Fragment node and later save it as a separate Test Fragment file with the naming convention of [parentFragment.childFragment.jmx] to make the usage and dependency clear.

Finally, it is important to understand that APIs are meant to abstract the underlying code. Even if you know the origin of the code, test fragments should be grouped primarily by functionality and (where appropriate) by common test data and not by the namespaces of the source code.

Tip 4: Plan fragments like they are stand-alone test plans

Remember that part earlier about occasionally needing to include other Test Fragment files in Test Fragment files? One of those occasions is when a fragment depends on a variable that is the result of another fragment that is included in the thread. For one reason or another, that first fragment may later be excluded from the thread. In such a case, the dependent fragment should test for the variable with an IF Controller and then include the Test Fragment file to generate the necessary variable.
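As an illustration of the guard described above (the variable name authToken is hypothetical), the If Controller's condition can test for the missing variable before an Include Controller pulls in the fragment that creates it:

```
${__groovy(vars.get("authToken") == null)}
```

When the condition evaluates to true, the child Include Controller runs the login fragment to generate the variable; otherwise the thread proceeds with the value already set.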

One additional note about Test Fragment files including Test Fragment files: When a Test Fragment file refers to another Test Fragment file, the path is relative to the main JMX file.

Tip 5: The simplest approach to refactoring is to use two files

This last tip raises a lot of eyebrows when first presented. In the likely event that a Test Fragment file has to be updated, whether as a result of debugging or of a change in application behavior, the way to work on it is to copy the nodes into the main JMX, do the work there, and then export the results back to the Test Fragment file. Resist the temptation to make changes directly in the Test Fragment file and test from the main JMX, except in extremely trivial cases such as renaming a variable that is referenced in only a single node. Trying to do all of the work in the Test Fragment file means moving back and forth between the files and takes much longer than anticipated. Just remember to run a final test with the new Test Fragment file and the original main JMX.

Bonus Tip: Taxonomy

Since I clearly pointed out the need for this in the introduction, I feel obligated to include this last tip. At a high level, the recommended taxonomy looks like this (replace [BRACKETED_TEXT] with your own labels and the example text as necessary):


.
+-- [APPLICATION_NAME]_API_TESTS.jmx
+-- Test_Data
|   +-- [COMMON_DATATYPE].csv
|   +-- [FRAGMENT_NAME_01].[DATATYPE].csv
|   +-- [FRAGMENT_NAME_02].[DATATYPE].csv
|   +-- ...
+-- Fragments
|   +-- apis
|   |   +-- FRAGMENT_NAME_01.jmx
|   |   +-- FRAGMENT_NAME_02.jmx
|   |   +-- FRAGMENT_NAME_02.SUB_FRAGMENT.jmx
|   +-- common
|   |   +-- login.jmx
|   |   +-- logout.jmx
|   |   +-- create_account.jmx
|   +-- setup
|   |   +-- environment_prep.jmx
|   |   +-- generate_common_data.jmx
|   |   +-- [OTHER].jmx
|   +-- teardown
|   |   +-- data_clean_up.jmx

More tips to come

This post is based on common guidelines I help enterprises define. It has been my experience that using fragments as described here is the best way to support testing of a large, complex, API-driven application that is continuously evolving. It has also been my experience that ignoring the recommendation to move fragments to files early results in lost productivity, because the extra maintenance time detracts from expanding the testing footprint. But this is not the only important lesson in scaling JMeter API testing. If you benefited from reading it, please let me know what other areas are of most interest to you, and I will base the next article in the series on that.


Originally published at https://logic2020.com/insight/tactical-jmeter-fragments-enterprise-api-testing/


Test automation: 3 things you need to know

Test automation — using automation tools to execute test case suites — delivers numerous benefits, including greater time- and cost-efficiency, the ability to run tests unattended/overnight, and a lower risk of integration and production issues. Particularly well suited to automation are test cases that are:

  • New or modified functionality
  • Business critical
  • Repetitious
  • Tedious for humans to perform
  • Performance sensitive
  • Time consuming

It’s important to recognize that automation can’t eliminate all manual testing because automation is for testing functionality. You still want users doing hands-on testing to ensure the usability of the application.

If your organization is considering automating parts of your testing processes, here are three things to keep in mind.

1. Cost and time requirements will be lower than you think.

Many teams don’t implement test automation because they believe that it takes too much time or too many resources. While this may have been true in the earlier days of test automation, it is no longer the case with modern tools and cloud services. For example, most tools can record user interactions and then allow developers and testers to modify the results for more dynamic testing. Also, most automated build tools can now incorporate many test engines into the build process, catching issues before they are deployed, and can impact live production use.

2. There are no static inputs in the real world.

Teams that do automated testing often use static, hard-coded parameters rather than dynamic parameters. This not only fails to mimic real-world use cases; it also will not properly reflect performance and scalability metrics where caching is in use. Dynamic parameters can either be randomly generated using standard scripting languages or driven from prepared input files. The parameters also need to be as realistic as possible. Using nonsensical values to populate text fields, or simple sequential numbers for complex number fields such as phone or amount, can miss validating even the minimal edge cases.
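As a small illustration of a dynamic, realistic parameter (the format and method name here are hypothetical), a phone number can be generated at run time instead of hard-coding a sequential value:

```java
import java.util.concurrent.ThreadLocalRandom;

public class DynamicParams {
    // Generate a plausible US-format phone number rather than a
    // sequential or nonsensical value.
    static String randomPhone() {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        int area = rnd.nextInt(200, 1000);      // skip invalid 0xx/1xx area codes
        int exchange = rnd.nextInt(200, 1000);
        int line = rnd.nextInt(0, 10_000);
        return String.format("%03d-%03d-%04d", area, exchange, line);
    }

    public static void main(String[] args) {
        System.out.println(randomPhone());
    }
}
```

The same pattern applies to amounts, dates, and identifiers: generate values that pass the field's real-world validation rules so that the test exercises the same code paths production traffic will.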

It is also important to have in place a procedure to reset databases to pre-test conditions for consistent re-testing.

3. Know when to run the automated tests.

In the Test-Driven Development (TDD) approach, the tests are written before the code. Even if your development team doesn't follow TDD, writing tests as soon as the interfaces are created and callable can save hours of manual functional testing. The amount of testing necessary to get from the first stubbed interfaces to ready-for-deployment into an integration environment is almost always underestimated. Not writing the tests as early as possible consistently results in either increased effort during development, as functionality is manually re-tested with every code change (versus a button click or one-line command to run the automated test), or increased time during quality assurance debugging all of the scenarios that were insufficiently tested during development.

Even if you balk at automating functional testing, testing before turning over to QA will save many hours spent in the writing of defect reports, triage reviews, debugging, and re-testing. Most true DevOps approaches include testing with every build-and-deploy cycle.

Conclusion: The bottom line is about the bottom line.

The time spent developing and maintaining test automation will deliver a positive ROI by reducing the number of production issues and shortening QA cycles. The time it takes to realize that ROI will vary based on complexity and technical culture, though it is often much sooner than anticipated even with the best application teams.

As with most cases of process automation, automating testing is neither a silver bullet nor a one-size-fits-all solution. The value of real user testing is still valid and necessary, as there will always be use cases that are not anticipated by designers, developers, and testers. By taking the time to determine all use cases that can be automated, selecting the tool that best meets your organization’s needs, and leveraging best practices like using dynamic parameters and testing as early as possible, you stand the best chance of improving productivity, shortening QA cycles, building customer trust, and achieving positive ROI.


Originally published at https://www.logic2020.com/insight/tactical/test-automation-three-things-to-know
