Simplifying Deploy and Destroy with the SF CLI

Personally, I never had any problems with the SFDX CLI, which is why I was surprised they were replacing it with the SF CLI. What I now sometimes struggle with is translating all of my cheat sheet references from one to the other, and sifting through the plethora of search results that still refer to the SFDX way of doing things. OK, I know I can minimize the second part by setting the date range in my Google searches, but when I’m being lazy (or in a hurry, both of which are more common than I prefer) I use Perplexity.ai rather than Google.

Most recently my conversion conundrum cropped up with removing components that I had deployed through the CLI. I found a great SFDX syntax in Perplexity literally one day before it was slated to be retired. Here is the relevant part of the session that got me to the solution:

Question: cli to delete objects from an org

Answer:
To delete objects from a Salesforce org using the CLI, you can use the destructive changes process. Here’s how to do it:

Step 1: Create Destructive Changes XML

Create a file named destructiveChanges.xml with the following structure:
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>ObjectName1__c</members>
        <members>ObjectName2__c</members>
        <name>CustomObject</name>
    </types>
</Package>
Replace ObjectName1__c and ObjectName2__c with the API names of the objects you want to delete.

Step 2: Create Empty Package XML

Create a file named package.xml with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>62.0</version>
</Package>

Step 3: Deploy Destructive Changes

Run the following Salesforce CLI command:
sfdx force:mdapi:deploy -d . -w -1 -u YourOrgAlias
This command deploys the destructive changes to your org:
  • -d . specifies the current directory as the deployment source
  • -w -1 waits for the deployment to finish
  • -u YourOrgAlias specifies the target org alias

Additional Considerations

  • Ensure you have the necessary permissions to delete objects in your org.
  • Deleting objects will also delete all associated records and metadata (fields, validation rules, etc.).
  • Some objects may have dependencies that prevent deletion. You might need to remove these dependencies first.
  • Always test this process in a sandbox environment before applying it to production.
  • Consider backing up your metadata and data before performing destructive changes.
If you need to delete other types of metadata (like Apex classes, Visualforce pages, etc.), you can include them in the destructiveChanges.xml file using the appropriate metadata type names.
Remember, this process permanently deletes the specified objects and their data from your org, so use it with caution.

When I ran it, it worked perfectly and output a warning that it would no longer work the next day (I did not preserve the message content, but I can tell you the date was 11/4/24 because UpNote metadata is useful). As I mentioned, I’m often in a hurry, so I often make mistakes, and what has kept me in technology since the original Wolfenstein was released is always making sure I have a way of undoing whatever I do. So I decided to go find the cool new sf CLI solution, which was not terribly clear to me; I found the documentation a bit confusing. Going back to the old-fashioned Google search (trust me, it pains me more to write “old-fashioned Google search” than it does for you to read it), I found a Trailblazer Community post by Max Goldfarb that greatly clarified how to make the new sf call.

I have since refined the process Max clarified, after reading the documentation in a less rushed state than usual. First, I keep that step 2 XML that Perplexity gave me stored in my repo as antiPackage.xml, the full path being manifest\destructiveArchive\antiPackage.xml. I can then generate the destructive XML with:

sf project generate manifest --source-dir force-app/main/default/[WHATEVER-I-WANT-TO-DESTROY] --type destroy

and then run my simplified killer command:

sf project deploy start --manifest manifest/destructiveArchive/antiPackage.xml --pre-destructive-changes manifest/destructiveArchive/destructiveChangesALL.xml -o [ALIAS]

and boom! stuff removed. I even keep a set of destructive xmls in a special folder (you may have noticed it in the command above). I continued to use this happily for a few weeks.

Today I was messing with a single component that was giving me problems (or, more accurately, I was giving it problems by building it incorrectly) and started by removing just that component. Then I went about making my fixes and, feeling less hectic than usual, decided to have a package.xml just for it so the deployment would go faster (yes, I know I can do that with a right-click, but once I am comfortable with a CLI, I prefer it, and it lets me make sure that my eventual release script will have fewer issues). It was very small, which made the content leap out at me more, and so I realized… there is no difference between the XML output with or without the “--type destroy” switch!

So now I just keep one XML for each context in manifest\deployArchive and reference it for both deploy and destroy.
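
To make that reuse concrete, here is a minimal sketch of the two calls side by side. The context manifest name and org alias below are placeholders I made up for illustration; the flags are the same ones used earlier in this post:

# Deploy the components listed in a context manifest (file name and alias are illustrative)
sf project deploy start --manifest manifest/deployArchive/myContext.xml -o MyOrgAlias

# Remove the same components: the empty antiPackage.xml satisfies the required --manifest,
# while the very same context manifest doubles as the destructive changes list
sf project deploy start --manifest manifest/destructiveArchive/antiPackage.xml --pre-destructive-changes manifest/deployArchive/myContext.xml -o MyOrgAlias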

© Scott S. Nelson

10% innovation, 90% stay out of their way

Google had the “20% Time Project”, which was an initiative to encourage people to innovate unsupervised. According to a page in Wikipedia (found using Perplexity.ai, as with most of my research these days), this resulted in things like AdSense and Google News. The Wikipedia entry and other research show that it had its issues, too. Few people did any percent, and some called it the “120% Project” because the work happened beyond the regular work day.

Official or not, many companies have some type of program like this, and I’ve worked at a few where they were both formal and informal. The formal ones have some type of financial incentive to them, and the informal ones are generally a good way to accelerate career growth. There are many ways to measure the success of such programs: increases in employee engagement; broadening of skills (which increases organizational capabilities); competitive edge as better (or even disruptive) innovations evolve; the attraction of talented new employees; and things people haven’t even thought of yet because of the nature of the topic.

The innovations that pay off go through some stages that are similar to regular old projects. There is the discomfort stage, where something is identified as bothersome at some level and wouldn’t it be great if something were done about that. There is the ideation phase, where someone thinks that they have a way to make it less bothersome (or more cool, which is a legitimate bypass of the discomfort stage). There is the experimental stage where various things are tried to fix the problem or improve the solution (or sometimes both). Then there is the demonstration stage, where someone shows others (sometimes a someone from an earlier stage) what the experiments have yielded. The demonstration and experimental stages may iterate for a while, until either the commitment or abandonment stage occurs. Those last two are not only similar to regular old projects, they are the start of regular old projects.

One interesting thing about the stages of an innovation is that the “someone” at each stage may or may not be the same someone. There are solopreneurs who are the someone from start to finish, or at least starting at the experimental stage. Which final stage comes into play has lots of different influences. Some innovations are clearly awesome from the start, while others seem great in isolation but have impractical issues during or after the ideation stage. Others can be killed by skipping or re-ordering stages. As mentioned, some stages can be skipped because they may not be necessary (though I believe the discomfort stage always occurs and is just not always recognized as such). The biggest problem can occur in re-ordering the stages, especially moving that commitment stage anywhere earlier. It is usually helpful to set a time frame for the demonstration stages. Having an expectation around when the demonstration will happen provides the urgency that is sometimes necessary to shift from motivation to action. Gathering estimates and milestones before that first demo is a good way to steer the lifecycle toward ending in abandonment. It takes away the sense of ownership, playfulness, and excitement and turns it into a job too soon.

This problem has been observed in several psychological studies, none of which I had the foresight to take note of when I heard of them, but Perplexity did manage to find a decent article on the “Overjustification Effect”, which is to say it isn’t just my opinion. One simplification of the phenomenon is that something that is interesting and motivating in and of itself becomes much less so when someone else tells you it is, or tells you how to be interested, or demands that you explain exactly how you are interested.

There is a related effect (which I am too lazy to find references to, but trust me, I’m not making it up) where someone who is instinctively talented at something is studied, through questioning and validation of those unconscious processes, to determine how they do that thing they do, and then ceases to be good at it. Usually it is temporary, but sometimes it is permanent.

All of which is to say that if you want your team to innovate, let them come up with what they will be innovative about, come to an agreement on a time frame where they can demonstrate progress, and then get out of their way. Trying to set detailed steps and milestones for innovation will greatly lower the odds of getting to a point where defining detailed steps and milestones will lead to an innovative solution.

© Scott S. Nelson

Is Your Data Ready for AI?

These days, everyone is either trying AI (rare), considering AI (most common), has tried it before they were ready (with mixed results), or is just AI curious (which doesn’t necessarily preclude the other possibilities). Sooner or later, your organization is going to be in the trying category, and then you will be either in the group of folks that excelled with it or the group that stumbled. One of the key factors that will determine that result is the quality of your data going in, and the integrity of your data moving forward.

Let’s take a little time now to consider the relationship between data quality, data integrity, and generative technologies, and then think about how to improve the odds of landing on the successful-adopter side of the coming AI divide.

The Human Edge: Fuzzy Thinking and Pattern Recognition

The current differentiation between AI and human intelligence lies in our capacity for fuzzy thinking and nuanced pattern recognition. Humans possess an innate ability to identify when information doesn’t fit a pattern or context, a skill that AI systems are still developing. While AI can process vast amounts of data at incredible speeds, it may struggle with contextual understanding and adaptability in novel situations.

This limitation in AI’s cognitive flexibility can lead to inefficiencies, particularly when dealing with complex, real-world scenarios. As AI systems attempt to process and make sense of imperfect or inconsistent data, they will consume more computational resources, leading to higher operational costs.

The Rising Costs of Using AI Inefficiently

The inefficiencies in AI processing are already manifesting at a macro level. Major tech companies and AI research institutions are reporting significant increases in power consumption as they scale up their AI offerings and user base. These escalating costs will (eventually and inevitably) be passed on to consumers, likely in the form of changes to service billing structures. Consider the current model of paying per token: either the cost per token will go up, or the number of tokens required to complete common operations will, or both. Think of how coffee used to be sold in 1-lb bags, and now we pay more per bag while the bag holds only 10 ounces. AI may become the first digital form of shrinkflation.

Garbage In, Garbage Out…More Garbage In?

Recognizing these challenges, forward-thinking organizations are prioritizing data cleanup as an important first step on their AI adoption journey. However, it’s important to note that data integrity is not the result of a one-time effort. It requires ongoing policies, procedures, and processes to support what is likely the most important commodity any organization owns.

When data stores are initially created, they are typically clean and well-structured (don’t get me started on garbage test data, that is a separate article…coming soon!). The data becomes messy over time (how much time depends on many factors) simply through regular use (and sometimes irregular, but that is also beyond the scope of this post). When AI is added to that use, trained on that same use, it will get messier faster unless the processes that led to the mess are also addressed.

It may be tempting to consider this a training issue. Inadequate training can certainly lead to bad data, but good training may not be sufficient to correct the problem. This is because training is costly to create, costly to deliver, will need to be delivered again for every new team member, will likely need to be repeated periodically for all team members, and still may not always be remembered or followed.

The most reliable and cost-effective way to improve those processes is to automate those that can be automated. Automation may cost more to create than the training process, but then it is one-and-done until the process itself needs to change. The key to cost-effective automation is determining when it is still OK to kick an edge case out for a human to deal with, and having a good process for notifying that human and tracking the task to completion.

Automation offers several advantages over traditional training methods:

  1. Consistency: Automated processes perform tasks the same way every time, reducing human error.
  2. Scalability: Once implemented, automated processes can handle increasing volumes of data without proportional increases in cost.
  3. Long-term cost-effectiveness: While initial implementation may be costly, automation provides ongoing benefits without the need for repeated training sessions.

Moving forward

Once the organization’s data has been cleaned up and processes put in place to maintain the integrity of that data, automated where possible, then the opportunity to get ahead of the competition through generative technologies is real for your organization. Like many adventures into new territory, there will be plenty of new challenges that will require urgent attention and decisive action. Preparing for what is known and predictable first will leave more resources for managing the unexpected.

And remember, most people heading into new territory seek the help of an experienced guide. Being new territory, it isn’t so important that the guide be experienced with the specific territory, but that they have experience of venturing into other new areas and have lived to tell about it.


Shout out to Jon Ewoniuk and his new podcast The 360 Salesforce Mastermind Podcast. This article was inspired by his first episode, where his guest spoke about niches (mine being leadership in digital innovation and automation adoption) and the importance of good data to support generative technologies.

© Scott S. Nelson

How Great Candidates Get Missed by Hiring Managers

YouTube shorts are my guilty pleasure for when I am feeling too lazy to focus on anything for more than two minutes. I ran across the one at the bottom of this post the other day, which I found really inspirational given that I treat a lot of information the same way. Specifically, I don’t spend time committing information to long-term memory that is easily looked up.

I have, at times, committed information temporarily to long-term memory for the purpose of receiving certifications that are helpful in credentialing myself in the eyes of others, but I seldom maintain full recall for more than a couple of months past the exam (unless I am currently using it on a regular basis). There are some scripting languages that I have learned multiple times because they fall into disuse between very long term projects.

Before the Einstein quote, I had heard about Henry Ford suing for libel after being called ‘ignorant’. His court testimony was along the lines of “…tell me why I should clutter up my mind with general knowledge, for the purpose of being able to answer questions, when I have men around me who can supply any knowledge I require?” (see Ignorance Is Not Knowing Everything: Henry Ford Model for a good read about this…though I heard about it back when Internet was still capitalized and one had to listen to the tortured scream of a US Robotics™ modem before it could be accessed by most).

In researching the Einstein quote, I ran across a few others on a site I won’t link to because of the nature of the ads they show. Suffice to say, it is a commonly-held sentiment among those considered to be educated, intelligent, and/or innovative.

The point I wish to make, in my frequently-meandering way, is that these quotes and attitudes are all pre-www and even more relevant in light of both the internet as a readily available source and the accelerated speed of change it fosters. Memorizing information pertinent to solving technology issues is detrimental to being highly productive, both because it limits one to a narrower range of knowledge and because it often results in relying on outdated information rather than refreshing one’s understanding (the topic having most likely evolved since it was memorized).

Dennis Miller used to close his monologues (which he called rants) with “But that’s just my opinion. I could be wrong.”, and I used to quote that in my email signature (back when quotes in email signatures were a thing). In this case, I believe if I look it up now, I won’t be (wrong, that is, which I add because even I found that sentence confusing…but I still like the Miller-esque structure of it).

© Scott S. Nelson

Realizing Agile’s Efficiency

(Feature image by Freepik: a rendering of the prompt “6 cats in a line, one whispering to the next, playing the telephone game”)
TL;DR: Fostering a culture of trust that leads to calm collaboration up front will yield the benefits that Agile principles promise.
Preface: While agile is in the title of this post, no claim is made that the post is about how to do agile or how SAFe is or is not agile. It is about how the Manifesto for Agile Software Development is self-clarifying in that it concludes with “while there is value in the items on the right, we value the items on the left more.” (italics mine), and how the value of the items on either side should be measured by their effectiveness in a given organization and the organization’s influence on the “self-organizing teams” referenced in the Principles behind the Agile Manifesto. That said…
The value of architecture, documentation, and design reviews in SAFe was illustrated in a scenario that played out over several weeks.
The situation started with the discovery that a particular value coming from SAP had two sources. Well, not a particular value from the perspective of the source. The value had the same name, was constrained to the same list of options, but could and did have different values depending on the source, both of which were related to the same physical asset. For numerous reasons not uncommon to SAP implementations that have evolved for over a decade, it was much more prudent to fetch these values from SAP in batches and store them locally.
The issue of the incorrect source was identified by someone outside the development team when it was found to be commonly missing from the source selected for work prioritization. For various reasons that will be common across a variety of applications that support human workflow, this was considered something that needed to be addressed urgently.
The developer who had implemented the fetch to the correct source was tapped to come up with a solution. Now, one thing about this particular application is that it was a rewrite of a previous version where the value of “Working software over comprehensive documentation” was adhered to without considering the contextual reality that the team developing release one would neither be the team working on the inevitable enhancements nor ever meet that team. The re-write came about when the system was on its third generation of developers and every enhancement was slowed because there was no way to regression test all of the undocumented parts. Unsurprisingly, the organizational context that resulted in the first version missing documentation also resulted in some table schemas being copied wholesale from the original application and not reviewed because requirements were late, resources were late, and the timeline was unchanged. So, with no understanding of why not to, the developer provided a temporary solution of copying the data from one table to the other, because it had only been communicated that the data from one source was the correct data for the prioritization filter. Users were able to get their correctly prioritized assignments and the long-term fix went to the backlog.
As luck and timing would have it, when the design phase of the long-term fix was picked up by the architect, the developer was on vacation. Further, while this particular developer had often made time to document his designs, the particular service the long-term fix depended on was one of the few that were not documented. Still further, it had been re-designed after another service had been discovered that obtained the same data more reliably. But all of the data currently loaded was from the previous version, so even attempting to reverse engineer the service to get sample data for evaluation was not possible. These kinds of issues can lead to frustration, which in turn dampens creative thinking, which is to say that had the architect looked at the data instead of following the assumption from the story that the data wasn’t yet readily available, he would have discovered that it was already present.
Eventually the source of the correct value was identified and a design created that would favor the correct value over the incorrect value, but use the incorrect value if the correct one was not available, to allow the assignments to continue, because sometimes the two actual values were the same (which is inspiration for a future post discussing the value of MDM). The design also included updating to the correct value if it became available after the initial values were set. The architect, being thorough, noted in the design a concern about what should be done when the correct value came into the system after the record that was prioritized based on that value had been assigned and processed by a user. After much back and forth, it was finally communicated that while the data was retrieved from the same system and labeled with the same name, the two values were not different because one was incorrect, but because they were in fact two separate values meant for two different viewpoints. Which means that the design of attempting to choose and store a single correct value in both tables was invalid, and that the records altered for the work-around were now (potentially) invalid. This made the correct solution a (relatively) simple change to the sorting query.
With the full 20/20 vision of hindsight, it is now clear that if the team had not felt that every issue needed to be treated as an emergency, and all of the product, design, and development stakeholders had discussed the issue prior to taking action, about 80 hours of work would have been reduced to 4 hours. Yes, there were other factors that impacted the need for 80 hours to deal with what is a fairly minor flaw, but those factors would not have come into play had the questions been asked up front and clarity reached through collaboration.
© Scott S. Nelson