Package with Zipper

A simple Salesforce Package cheat sheet

As an IT consultant, I frequently change technologies and project roles. The constant shift of focus is great for staying interested, engaged, and marketable. The downside is that many mental-muscle-memory tasks fade, or never take root, when there are long gaps between repetitions. One example is XSLT, which I have had to re-learn three times: it is not difficult to learn, but I only find I need it every five to seven years. Another example is various Linux commands, which I will post on my blog so that I can find them quickly when needed.

Another technique I use, when it applies, is keeping little cheat sheets where the broad strokes are easy to remember and prompt my memory for the details. The example I want to share today is creating Salesforce Unlocked Packages.

Before I start, it is important to understand that a true package-management and -deployment strategy involves a lot more than the ability to create a package. A package strategy requires thinking about how the org is used, who the stakeholders are, how many teams contribute code and metadata to the org and how different their focuses are, and a clear roadmap for an enterprise architecture, with agreement on the direction and commitment to a sustainable governance model. This post is only about taking something built in one org and deploying it into another org as a consulting starter, a demo set, or a shared feature.

In this example, I built a useful little Flow demo that guides CSRs to search for an existing contact before creating a new one, reducing duplication while following standard procedures for customer contact.

Flow module in action

It is a good component to keep handy as a starting point for clients that need a similar feature, so I want to be able to manage it in source control and distribute it to other orgs easily. Or I may want to share it with other developers for enhancements. In either case, I don’t work hands-on with packages often enough to remember how to construct them. No fear! I do remember how to create unmanaged packages, which is a good start. However, it has been a while since I built this component, so looking at the list of flows does not make it clear which is which:

Call in contact flow package definition

I could guess, based on the names, but it is safer to be sure, so I go to the flow itself and look at the Properties to find the API Name:

Flow version properties

Components can have dependencies, and tracking them down can sometimes require test deployments to be sure all have been captured. In this case, the implementation is straightforward, and Salesforce still finds a component that I had forgotten was involved:

Package component dependencies

Now that there is a package definition in the Salesforce org, I want to be able to move it to GitHub where I can work with it. The next step is to go to https://workbench.developerforce.com, navigate to Migration > Retrieve, and select to extract the package by name:

Using Workbench to extract package

This will produce a zip file of the package components. To make these useful, set up a Visual Studio Code project for Salesforce (see How to set up a Salesforce development environment if you need help with this part). Now unzip the package into a folder named “unpackaged” at the same level as [PROJECT_PATH] and, from inside the project folder, run the following command:

sf project convert mdapi --root-dir ../unpackaged --output-dir force-app
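For reference, the unzip step just mentioned is a one-liner on Linux or macOS. This is a sketch that assumes the Workbench download was saved as unpackaged.zip one level above the project; retrieve zips can nest their contents under a top-level folder, so adjust --root-dir above accordingly:

unzip ../unpackaged.zip -d ../unpackaged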

If the above paths were laid out correctly, you will see output listing the components added to your project. Otherwise, check your folder layout and naming and try again. At this point, it is useful to replace the default /manifest/package.xml with the one in the root of the unzipped package from Workbench (the following step assumes this).
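A hedged one-liner for that replacement, assuming the folder layout above (adjust if your names differ):

cp ../unpackaged/package.xml manifest/package.xml

Finally, test that you can push the package contents to another org with: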

sf project deploy start --target-org [USERNAME] --manifest ../[PROJECT_PATH]/manifest/package.xml --test-level RunLocalTests --wait 100

Once all components are deployed, you should see them in your org (they may not be enabled):

sf cli deployment
Package deployed

In this case, the flows require activation before they can be used. Post-deployment steps will depend on your components.

Finally, add your project to source control. The project used in this article can be found at https://github.com/ssnsolutionist/Call-in-Contact-Flow-main.

Some of the steps above included commands from a cheat-sheet I maintain at https://github.com/ssnsolutionist/trailhead1/blob/master/sfdx-cli-common-commands.md

09-08-24 Revision Notes:
  1. The cheat sheet linked above is out of date. I’m working on it, mostly by having Perplexity.ai re-write the commands for me as I need them.
  2. Originally published as a Logic20/20 Insight article; the editors there later stripped out the screenshots, so I am publishing the full version here.
If you found this interesting, please share.

© Scott S. Nelson

5 tips for using JMeter Fragments for enterprise API testing

Google is the greatest boon to software development since disks replaced paper punch tape, but solutions for individual topics found online often skip the implications of applying them at scale. This blog post will not add new information to the collective knowledge of using fragments in JMeter to test APIs; it will put some important concepts in the perspective of applying them to real-world enterprise solutions.

First things are not always first

Ideally, this article would start with the first tip. I generally prefer my information that way: direct and to the point. If you are reading this, it is because you understand that isn’t always the best way to get where you need to be. Specifically, a good tip to start with for using Test Fragments in large-scale JMeter API Test Plans is how to plan your taxonomy. The problem with that approach is that you may not understand why taxonomy matters until you reach the point where significant refactoring is required. So we’ll start with the “why,” then get to the “how.”

Tip 1: Not all fragments are the same

The main purpose of a Test Fragment is to apply the same type of reuse that is encouraged in enterprise design. The other value it provides is the ability to cleanly separate different aspects of a single JMeter Test Plan in support of multiple contributors and source control.

There are two types of Test Fragments in JMeter:

• A node whose parent is the Test Plan, referenced by a Module Controller that is a child of a Thread Group. For convenience, these will be referred to as Test Fragment nodes.

• A JMX file that is created by applying the “Save As Test Fragment” menu option on a Controller and then referenced by an Include Controller that is a child of a Thread Group. These will be referred to as Test Fragment files.

Test Fragment nodes cannot be included in a Test Fragment file, which is why they either need to be saved off as a Test Fragment file or duplicated within the controller that will be exported as a Test Fragment file. This is the first thing to know before creating your source taxonomy.

Tip 2: Refactoring before committing

A key concept in using Test Fragments for large Test Plans, and one that is rarely pointed out, is that Test Fragment nodes are best used only during development, to avoid redundant refactoring. When specific steps are required multiple times, placing them in a Test Fragment node while working them out is a huge time-saver.

At the completion of development, the contents should be saved as one or more Test Fragment files to be included in the main JMX file containing the Test Plan and, potentially (though rarely), in other Test Fragment files. The value of moving the node to a Test Fragment file before committing to source control is that it minimizes the overlap in Test Fragment files between efforts. Working alone, this doesn’t present an issue, but as soon as there are multiple contributors, managing merge conflicts quickly becomes very time consuming when too much is in the main JMX file. The use of Test Fragment files greatly simplifies this.

Tip 3: Rules are made to be broken, except when they shouldn’t be

The decision to save a Test Fragment node as a Test Fragment file is based on how many other Test Fragment files may require the Test Fragment. For example, if a Test Fragment node is used a few times in a specific Test Fragment file but not in any other Test Fragment file, then saving as a separate Test Fragment file is of little value. Likewise, if the Test Fragment node is only referenced once or twice and in only two Test Fragment files, it is still not worth creating a separate Test Fragment file, because they should rarely need to be refactored once the API they test is released to production. This is when you must break the rule of re-use and simply duplicate the nodes where necessary once they are stable.

While this type of duplication would not be appropriate in an application developed in an object-oriented language, JMeter is a platform driven by configuration files, where re-use is possible only to a limited degree before it becomes more work than it is worth.

There may be cases where a particular process is repeated several times. First, review the use of this process to determine if there is a more elegant solution. If not, continue to develop with the process in a Test Fragment node and later save it as a separate Test Fragment file with the naming convention of [parentFragment.childFragment.jmx] to make the usage and dependency clear.

Finally, it is important to understand that APIs are meant to abstract the underlying code. Even if you know the origin of the code, test fragments should be grouped primarily by functionality and (where appropriate) by common test data and not by the namespaces of the source code.

Tip 4: Plan fragments like they are stand-alone test plans

Remember that part earlier about occasionally needing to include Test Fragment files in other Test Fragment files? One of those occasions is when a fragment depends on a variable produced by another fragment included in the thread. For one reason or another, that first fragment may later be excluded from the thread. In such a case, the dependent fragment should test for the variable with an If Controller and, if it is missing, include the Test Fragment file that generates it.
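As a sketch of that guard (the authToken variable and the login fragment are hypothetical, and the __groovy function requires JMeter 3.1 or later), the If Controller condition, with “Interpret Condition as Variable Expression” checked, could be:

${__groovy(vars.get("authToken") == null)}

When the condition evaluates to true, an Include Controller beneath it pulls in the Test Fragment file (e.g., Fragments/common/login.jmx) that generates the variable.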

One additional note about Test Fragment files including Test Fragment files: when a Test Fragment file refers to another Test Fragment file, the path is relative to the main JMX file, not to the including fragment. Using the taxonomy below, for example, a fragment at Fragments/apis/FRAGMENT_NAME_01.jmx that includes the common login fragment references it as Fragments/common/login.jmx, not ../common/login.jmx.

Tip 5: The simplest approach to refactoring is to use two files

This last tip raises a lot of eyebrows when first presented. In the likely event that a Test Fragment file has to be updated, whether as a result of debugging or of a change in application behavior, the way to work on the Test Fragment file is to copy its nodes into the main JMX, do the work there, and then export the results back to the Test Fragment file. The temptation to make the changes in the Test Fragment file and test from the main JMX should be avoided in all but the most trivial cases, such as a renamed variable referenced in a single node. Trying to do all of the work in the Test Fragment file will have you bouncing back and forth between the files and take much longer than anticipated. Just remember to run a final test with the new Test Fragment file and the original main JMX.
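That final check can be run from the command line; a minimal non-GUI example, assuming the Test Plan file name from the taxonomy in the bonus tip below:

jmeter -n -t [APPLICATION_NAME]_API_TESTS.jmx -l final_check_results.jtl

Here -n runs JMeter without the GUI, -t names the Test Plan file, and -l specifies the results log.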

Bonus Tip: Taxonomy

Since I clearly pointed out the need for this in the introduction, I feel obligated to include this last tip. At a high level, the recommended taxonomy looks like this (replace [BRACKETED_TEXT] with your own labels and the example text as necessary):


.
+-- [APPLICATION_NAME]_API_TESTS.jmx
+-- Test_Data
|  +-- [COMMON_DATATYPE].csv
|  +-- [FRAGMENT_NAME_01].[DATATYPE].csv
|  +-- [FRAGMENT_NAME_02].[DATATYPE].csv
|  +-- ...
+-- Fragments
|  +-- apis
|      +-- FRAGMENT_NAME_01.jmx
|      +-- FRAGMENT_NAME_02.jmx
|      +-- FRAGMENT_NAME_02.SUB_FRAGMENT.jmx
|  +-- common
|      +-- login.jmx
|      +-- logout.jmx
|      +-- create_account.jmx
|  +-- setup
|      +-- environment_prep.jmx
|      +-- generate_common_data.jmx
|      +-- [OTHER].jmx
|  +-- teardown
|      +-- data_clean_up.jmx

More tips to come

This post is based on common guidelines I help enterprises define. It has been my experience that using fragments as described here is the best way to support testing of a large, complex, API-driven application that is continuously evolving. It has also been my experience that ignoring the recommendation to move fragments to files early results in productivity loss, requiring more time for maintenance and detracting from expanding the testing footprint. But this is not the only important lesson in scaling JMeter API testing. If you benefited from reading it, please let me know what other areas are of most interest to you, and I will base the next article in the series on that.


Originally published at https://logic2020.com/insight/tactical-jmeter-fragments-enterprise-api-testing/

If you found this interesting, please share.

© Scott S. Nelson

Get Hands-on with VS Code, Salesforce DX and Packages

(Originally published at Logic 20/20 as SFDX, VSCode, and deploying from a package. The editors stripped out all of the links, rendering it an entirely different post, so this is the original version.)

While I do not immediately dislike new tools, I do struggle to adopt them when I find nothing wrong with the old ones, and I delay learning them until I’m forced to. That is the case with Visual Studio Code for Salesforce (Salesforce is no longer supporting the Eclipse IDE and abandoned the DX extension for Eclipse before DX went GA) and with Git (because that is the way the dev world has gone). I find the best way to learn new tools is to write about how to learn them, so here we go.

(In the spirit of working in a low-code platform, we will also see how much of this I can do with just links to existing documentation…)

If you haven’t already, Install Salesforce Extensions for VS Code.

Then Enable Dev Hub in Your Org and Enable Second-Generation Packaging (note that while 2GP is beta as of this writing, this is required to enable Unlocked Packages, which are GA).

Next…Well, that didn’t take long. I cannot find a stand-alone URL for creating an SFDX project, so I’m going to steal a section from a Trailhead lesson (because it is as much typing to say what not to do as it is to re-create it here):

  1. Open VS Code.
  2. From the menu, select View | Command Palette.
  3. In the command palette search box, enter create project.
  4. Select SFDX: Create Project.
  5. Use the same name as your GitHub repo, then press Enter.
  6. Click Create Project.
  7. Create a .gitignore file to ignore hidden directories:
    1. Hover over the title bar for the DX project, then click the New File icon.
    2. Enter .gitignore (if the file already exists, just open it to edit).
    3. In the text editor, add these two hidden directories to be ignored:
.sfdx
.vscode

To foster good habits, I will set up a GitHub repo to store this project (though following a full lifecycle will be another article) by following the excellent documentation at https://help.github.com/en/github/importing-your-projects-to-github/adding-an-existing-project-to-github-using-the-command-line and adding the project to the repository.
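For convenience, a minimal sketch of what those GitHub docs walk through (the repository URL is a placeholder, and your default branch may be master or main depending on your setup):

git init
git add .
git commit -m "Initial SFDX project"
git remote add origin https://github.com/[GITHUB_USER]/[REPO_NAME].git
git push -u origin master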

Now go do some work in Salesforce. For example purposes, let’s do the Build a Simple Flow project.

After you complete the project, follow the instructions to Create and Upload an Unmanaged Package, skipping the Upload part. I named the package TH_Flow_Project; you don’t have to, but I mention it because I will use that name in the example commands.

Salesforce provides a nice reference to Create a Salesforce DX Project from Existing Source. I have some additional thinking around how to go about this part, so I will end the approach of linking to references and switch to my own. If you followed the last link and stopped here, you won’t learn any more about the Salesforce DX capabilities, but you may miss out on some of my shortcuts and wit. With that said…

Authorize the org you created the flow in with the following:

sfdx force:auth:web:login --setalias <MY_SOURCE_ALIAS> --instanceurl <MY_SOURCE_ORG_URL>

Example:

sfdx force:auth:web:login --setalias TH-ORG02 --instanceurl https://infa-ca-wav-dev-ed.my.salesforce.com/

A bit late to mention, but if you are using a Developer org, I highly recommend you Set Up My Domain (Trailhead orgs already have one). If you haven’t, you can probably leave off the instanceurl parameter and it will be picked up from the default configuration for your project (YMMV). Otherwise, use the URL that you log in to your org with.

Next, download the package using the following:

sfdx force:mdapi:retrieve -r ../ -p <PACKAGE_NAME> -u <USERNAME>, ex:

sfdx force:mdapi:retrieve -r ../ -p TH_Flow_Project -u scott@trailh2.org

Let’s break that down just a bit. The first part is the base command to retrieve (sfdx force:mdapi:retrieve). The -r parameter determines where the downloaded zip file will be placed; the example uses a relative path indicating the folder above the DX project. As a best practice, I recommend always staying in the project directory inside the VS Code terminal, with all commands based on paths relative to that location. This way you can maintain a list of commonly used commands that will be re-usable across all projects. The downloaded file name is always unpackaged.zip.

The files need to be unzipped before they can be used (someone should file a feature request for the convert command to work on zip files instead of requiring them to be unpacked first). On Linux, the relevant command is:

unzip ../unpackaged.zip -d ../

Now we add the files from the package to our project using the relative path command:

sfdx force:mdapi:convert -r <PATH_TO_UNZIPPED_PACKAGE> -d <PATH_TO_[/force-app]>, ex:

sfdx force:mdapi:convert -r ../TH_Flow_Project -d force-app

Now all of the files from your package are part of your project.

To add this to your target org, first authorize that org as done previously for the source org, i.e.:

sfdx force:auth:web:login --setalias <MY_TARGET_ALIAS> --instanceurl <MY_TARGET_ORG_URL>

Example:

sfdx force:auth:web:login --setalias TH-ORG02 --instanceurl https://infa-ca-wav-dev-ed.my.salesforce.com/

And (almost) finally, deploy the updates from your project to the target org with:

sfdx force:source:deploy -u <TARGET_USERNAME> -x <PATH-TO-PACKAGE.XML>

sfdx force:source:deploy -u apex@theitsolutionist.com -x ../TH_Flow_Project/package.xml

(Another feature recommendation is to have an alias option instead of only the username.)

And finally (this time for real!) look in your list of flows to see the flow installed in your target org.

Of course, you are doing this with a throw-away org, right? Because I forgot to mention that deploying will overwrite any existing components with the same name.

One final note: we used the package.xml from the downloaded package for the sake of simplicity. Once the package import is validated, you will want to combine the package.xml from the download with the package.xml located in the manifest folder of your project.

The project created from the writing of this article can be found at https://github.com/ssnsolutionist/trailhead1

If you found this interesting, please share.

© Scott S. Nelson

Test automation: 3 things you need to know

Test automation — using automation tools to execute test case suites — delivers numerous benefits, including greater time- and cost-efficiency, the ability to run tests unattended/overnight, and a lower risk of integration and production issues. Particularly well suited to automation are test cases that are:

  • Focused on new or modified functionality
  • Business critical
  • Repetitious
  • Tedious for humans to perform
  • Performance sensitive
  • Time consuming

It’s important to recognize that automation can’t eliminate all manual testing because automation is for testing functionality. You still want users doing hands-on testing to ensure the usability of the application.

If your organization is considering automating parts of your testing processes, here are three things to keep in mind.

1. Cost and time requirements will be lower than you think.

Many teams don’t implement test automation because they believe it takes too much time or too many resources. While this may have been true in the earlier days of test automation, it is no longer the case with modern tools and cloud services. For example, most tools can record user interactions and then allow developers and testers to modify the results for more dynamic testing. Also, most automated build tools can now incorporate test engines into the build process, catching issues before they are deployed and can impact live production use.

2. There are no static inputs in the real world.

Teams that do automated testing often use static, hard-coded parameters rather than dynamic parameters. This not only fails to mimic real-world use cases; it also will not properly reflect performance and scalability metrics where caching is in use. Dynamic parameters can either be randomly generated using standard scripting languages or driven from prepared input files. The parameters also need to be as realistic as possible: using nonsensical values to populate text fields, or simple sequential numbers for complex number fields such as phone or amount, can miss validating even the minimal edge cases.
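As a minimal sketch of the scripted approach (the field names and value ranges here are hypothetical; adapt them to your own schema), a few lines of shell can produce a realistic-looking input file:

# Generate 100 rows of pseudo-realistic test data: name, US-format phone, amount
echo "name,phone,amount" > test_data.csv
for i in $(seq 1 100); do
  name="User$(( RANDOM % 9000 + 1000 ))"
  phone="$(( RANDOM % 800 + 200 ))-555-$(printf '%04d' $(( RANDOM % 10000 )))"
  amount="$(( RANDOM % 9000 + 100 )).$(printf '%02d' $(( RANDOM % 100 )))"
  echo "${name},${phone},${amount}" >> test_data.csv
done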

It is also important to have a procedure in place for resetting databases to pre-test conditions for consistent re-testing.
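As one hedged example of such a procedure, assuming a PostgreSQL test database and a snapshot dump taken before the run (both names are placeholders):

# Restore the test database to its pre-test state from a snapshot
pg_restore --clean --if-exists --dbname=test_db pretest_snapshot.dump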

3. Know when to run the automated tests.

In the Test-Driven Development (TDD) approach, the tests are written before the code. Even if your development team doesn’t follow TDD, writing tests as soon as the interfaces are created and callable can save hours otherwise spent performing manual functional tests. The amount of testing necessary to get from the first stubbed interfaces to ready-for-deployment into an integration environment is almost always underestimated. Not writing the tests as early as possible consistently results in either increased effort during development, from manually testing functionality as the code changes (versus a button click or one-line command to run the automated tests), or increased time during quality assurance, debugging all of the scenarios that were insufficiently tested during development.

Even if you balk at automating functional testing, testing before turning code over to QA will save many hours otherwise spent writing defect reports, sitting in triage reviews, debugging, and re-testing. Most true DevOps approaches include testing with every build-and-deploy cycle.

Conclusion: The bottom-line is about the bottom line.

The time spent developing and maintaining test automation will deliver a positive ROI by reducing the number of production issues and shortening QA cycles. The time it takes to realize that ROI will vary based on complexity and technical culture, though it is often much sooner than anticipated even with the best application teams.

As with most cases of process automation, automating testing is neither a silver bullet nor a one-size-fits-all solution. The value of real user testing is still valid and necessary, as there will always be use cases that are not anticipated by designers, developers, and testers. By taking the time to determine all use cases that can be automated, selecting the tool that best meets your organization’s needs, and leveraging best practices like using dynamic parameters and testing as early as possible, you stand the best chance of improving productivity, shortening QA cycles, building customer trust, and achieving positive ROI.


Originally published at https://www.logic2020.com/insight/tactical/test-automation-three-things-to-know

If you found this interesting, please share.

© Scott S. Nelson