I’ve seen several tech buzz cycles, where even the real stuff gets hyped: from BBSes to the .com bubble, shareware to SaaS, DHTML to AJAX to REST, and web first to mobile first to cloud first. In almost every one of those booms, the “first-mover advantage” belonged to the command-and-control mindset: direct, rigid, and strictly instrumental.
As I watch the rolling adoption of Generative AI (GenAI), I see a long-overdue validation of a different skillset.
The technical gap is no longer being closed by the most aggressive “commanders,” but by the most collaborative coordinators. I am delighted to see that women are not just adopting this technology, they are mastering its productivity curve at a rate that confirms what many of us have suspected for years:
When technology becomes conversational, the best communicators win.
A Predictable Shift in the Trenches
In hindsight, this was inevitable. We have moved away from a world where you had to speak “machine” (syntax and code) to a world where the machine finally speaks “human” (semantics and dialogue).
I’m seeing this play out in two very specific ways:
In Engineering: I’ve noticed women developers are often faster to move past using AI as a simple code generator. They are using it as a high-level architectural partner, stress-testing logic and managing edge cases. They aren’t just looking for an output; they are managing a relationship with a complex system.
The Non-Technical Leap: This is one of the most gratifying shifts to watch. I’m seeing women in marketing, HR, and operations become “technical” as a side-effect of AI adoption. They are building automated workflows and custom tools that once required a dedicated IT ticket. They are bridging the gap not through brute-force coding, but through precise, collaborative inquiry.
Why the “Soft” Skill is the New “Hard” Skill
Traditional computing was about giving a machine a rigid command. If you didn’t know the exact syntax, the machine failed.
GenAI is different. It requires a dialogue.
The best results don’t come from a single prompt; they come from a back-and-forth “coaching” session. This requires empathy for the model’s logic, iterative questioning, and the patience to refine an idea rather than just demanding a result. Because women have historically been the primary collaborators and “connectors” in the workplace, they are naturally suited for the dialogic nature of GenAI.
The Data Catches Up to the Reality
The industry is starting to recognize this shift, and the data is backing up what we are seeing in our offices:
Closing the Gap: Deloitte’s TMT Predictions suggest that the rate of GenAI adoption among women has been tripling, on track to equal or even exceed male adoption by the end of this year.
The Quality of Interaction: Recent studies indicate that while men may use the tools more frequently for “one-off” tasks, women often show greater knowledge improvement and higher competence after the interaction. They aren’t just using the tool; they are learning with it.
The Bottom Line
We are witnessing the Collaboration Dividend. For decades, “soft skills” were often sidelined as secondary. Today, they have become the ultimate competitive advantage.
It is a pleasure to see these skills—and the women who have mastered them—finally getting the recognition they deserve. In the age of GenAI, the “cooperator” will almost always outperform the “commander.”
About the Feature Image
One colleague in particular inspired the first spark of this post, and I wanted her to be part of the feature image. Then I began thinking of other women who have shown me the benefits of collaboration, and I added their images as well, as a tribute. And my apologies to those I didn’t think of during the 10 minutes of creating this image prompt, or who are no longer on LinkedIn.
Most readers of this post will be too late to take the exam for free . . . which is why I am writing it.
I’ve been following the Salesforce Quests for years now. I first became aware of them through emails warning that a Quest was ending with a week or less left to finish it; the Quests ran monthly, and each month’s was unique. When I had free time, I would jump in and finish them. Sometimes I would receive some merch several weeks later. Then I received a certification voucher when I only had one cert, so I tracked down the URL where the Quests are announced and set a monthly reminder to check for new ones. The Agentforce Specialist is my sixth certification, and I only paid for the first (technically, not even that one, as I talked my employer into allowing me to expense the exam). The rest I won vouchers for, with the exception of this one, which was free to everyone until the end of 2025.
Wall of Swag
I discovered it was free while working through the Agentblazer series of badges. The final badge, Legend, requires certification, and that is when I learned the exam fee had been waived. It had been free for quite some time, but my employer at the time did not get many Salesforce projects, so I had missed the news. I found out on October 10 and became determined to pass this one, too.
Even though I don’t get to work in the Salesforce ecosystem as much as I would like to, those monthly reminders to check out the latest Quests keep me current on the changes. So when I started on the Agentblazer series of badges, I already had some trails and modules under my belt, and quickly advanced to the Legend level, where I learned of the free certification. Even so, I can honestly say that the Agentblazer Legend quest has been the most difficult I have worked through in almost a decade of questing (disclaimer: at the time of this writing I have not completed the quest, but I will within a day or two . . . check my profile to keep me honest!).
But, truly, my core skill is digressing, and I have digressed from the topic of getting certified, so back to it . . .
First, definitely earn the Agentblazer badges as a foundation. The path to earning them will prepare you for what comes next.
Which is, as I have always recommended for certification preparation, to buy a pack of practice exams with as many quality questions as you can find and work your way through them. For this particular certification exam I used a Udemy course, Salesforce Certified Agentforce Specialist – Practice Exam (currently on sale for $9.99). One of my other blogs is “Cheap, Lazy Investor”, and, true to the cheap part, I did not buy any other practice exams because this one did the trick. It has 365 questions (not all unique) and they covered 95% of the concepts I found on the actual exam, so no complaints and some kudos.
Passing the exam requires a combination of rote knowledge and conceptual knowledge. Of the two, conceptual knowledge will bring the higher score, but you can’t get by with just one. Rote knowledge is necessary for questions where there is clearly only one right answer. Conceptual knowledge is necessary for questions where more than one answer is correct, because one answer is more correct than the other. The “more correct” is driven by understanding what is key to Salesforce and Agentforce: concepts such as security, flexibility, and the principle that the standard option is the best option if it meets all of the requirements. Use the practice exam to get examples.
Interestingly, while the value of LLMs is their ability to manage probabilistic responses, if one exam answer leans toward a probabilistic approach and the other toward a deterministic one, the deterministic answer is most likely the correct one. Picking the most likely answer when your own knowledge isn’t helping is where conceptual knowledge is key.
The deployment lifecycle section of the exam focuses on what is specific to Agentforce. I had a really hard time getting NotebookLM to stick to that scope. After two failed attempts where it produced very detailed preparation around the full Salesforce Application Lifecycle Management, I finally created a new notebook, ran deep research specifically on deployment lifecycle processes and pitfalls related to Agentforce, then added my own missed questions and had it generate a note, which I then added as a source and ran the audio prompt again: “Focus only on making the contents of ‘Missed Practice Exam – Deployment Lifecycle.md’ thorough and memorable to the listener to ensure the reader can correctly answer all questions regarding the Agentforce deployment lifecycle questions in the Salesforce Agentforce Specialist certification exam. Avoid the use of emphatic expressions and emphatic modifiers. This is important.”
One important thing about practice exams: they are not the exam you will be taking. The value of reviewing the questions you missed is in identifying the concepts that are not solid in your thinking. This is one of the reasons why it isn’t too bothersome that NotebookLM goes outside the boundaries of the provided content when generating the podcast audio. And don’t rely on NotebookLM to catch it all, either. If you miss the same question three times on a practice test, go read the material, re-do the Trailhead module, and create some Bionic notes on the topic. Sound like overkill? There are almost always some questions on the exam on topics not covered by the practice exams, so being fully prepared for the questions you can expect will offset any score impact from topics you never heard of until the exam.
I did not use Bionic notes this time. I still think it is a valuable technique.
If you’ve read my other certification articles, you will know that I format my notes on missed questions and key concepts as Bionic Reading® for review. And that I sometimes use my own version, where I bold keywords rather than parts of words to get the concepts to stick. I stand behind this approach, but didn’t do much with it this time.
This time I used NotebookLM. I used advanced search to find links to content, plus links from the Trailhead content and my own study notes exported as markdown from UpNote, to create the source material. Then I incrementally generated AI audio content, posted it on YouTube, and listened to it continuously to drill the concepts into my head.
At the end of the day (or almost the end of the year), I passed the exam.
I usually will write something as a blog post first, but this started as a short LinkedIn post, which received two likes in less than 10 minutes after posting, so I decided to re-post it here.
Here are some thoughts fueled by listening to an enlightening podcast with a neuroscientist host (Andrew Huberman) and a choreographer guest (Twyla Tharp)🧠&🩰:
A fully supported software initiative includes people focused on coding, UI/UX, and testing. In high-performing teams, these specialists interact frequently.
Great solution teams understand that skilled “creatives” have deep grounding in data about human behavior and regularly test their work with users, refactoring based on feedback and practicality 🎨 ; that “testers” need to understand the limits of the technology, imagine behaviors that are not expected, and weigh the likelihood of something happening against the impact of it happening 🧪 ; and that developers who don’t test as they go, or don’t apply creative thinking to meeting business requirements, may produce a lot of code but aren’t really productive 🤠 .
Yet, many organizations keep these experts apart outside of occasional “sync” meetings that don’t result in anything being synchronized but do tend to reduce productivity.
Other organizations recognize that there is overlap in thinking across these specialties and try to cut costs or speed output by removing the specialists and increasing the load of the remaining experts. 🪨
People who have chosen a focus and developed the skills to be good at what they do are happiest and most productive when they are supported and challenged by people with overlapping thought processes and differing skills. 👀 These similarities in thought process and differences in discipline are the basis of highly productive teams that thrive when leadership aligns them on a shared direction. 🛣️
(Leaving out managers and architects is a peril, too, but including them here would require a much longer post).
There is a lot of commentary about how AI initiatives are “failing”. Some measure it as ROI, which is a fair yardstick. Others point at the lack of adoption, which every technology goes through. The blame for these “failures” is often placed on leadership, which is fair given the meaning of the term. Speaking of terms, derivatives of “fail” have been in quotes so far for a reason: if you think of these things as a fail, then you are either sitting on the sidelines or throwing in the towel on the opportunities that AI offers. The only way to fail with AI is to give up on discovering how it will help your business. That will likely be followed by another type of fail.
Like everything else, failure is also a continuum. A prompt that returns an irrelevant result can technically be considered a fail, though even that is challenged by the premise that you can’t truly fail if you take the result as a lesson and do something else. At the other end of the spectrum is an agent that Deletes Company’s Entire Database, which is as close to a true fail as one can get. There is no getting away from the fact that some people, teams, even companies, are just not very good at what they do, and capitalistic Darwinism will deal with them regardless of whether they adopt AI or not (though AI will speed up the process).
Those true fails are a very small percentage of the world. The rest is a result of a type of hubris that (probably not coincidentally) has seen a lot of attention in business and pop-psychology media lately: the Dunning-Kruger effect. Well, actually, something just similar. The Dunning-Kruger effect is mostly about those who think they are doing better than they really are. The precursor to the failures that come from attempting to follow early adopters doesn’t really have a term. However, think of this analogy: a person shows up to open mic night at a comedy club and sees a few folks pull off really good sets. They sign up to give it a try and find out the hard way that there is a lot more to getting an audience to laugh than just telling jokes.
So, lots of companies piled on to the AI bandwagon after seeing others succeed with what looks, from the outside, like little or no effort. From the inside, these front runners had been playing with AI for years before ChatGPT woke the world up to the value of LLMs and Generative AI. They already had goals in mind for what they would do with the next evolution of the technology, and they applied the latest and greatest with the same rigor they had brought to the latest but-maybe-not-the-greatest capabilities before. The Lesson: Have a goal, make a plan, and be flexible moving from current state to target.
Back to the leadership blame. Again, some leadership is just not that great. The rest become great by engaging the company as a whole to define goals, make plans, and support the flexibility needed in achieving those goals when following a roadmap that traverses new territory. With AI, that leadership could start with a survey about “What tasks do you perform that are tedious?”
It’s especially important for leadership to ask this of the company as a whole because leaders are generally insulated from the types of mundane work that take away from the ability to be productive. Some common areas are time and expense reporting. Most time reporting systems have a way to simplify this for recurring entries, but for those with varying tasks it would boost productivity and morale to have an agent that parses emails, IMs, and calendars to draft the weekly time submission and remind the person to review and submit. Or expense reports drafted by uploading receipts along with the meeting invite or email that prompted the expense. The ROI of such agents will vary greatly between enterprises, from a few dollars per person to a few million per year.
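To make the timesheet example a bit more concrete, here is a minimal sketch of the core logic such an agent might use. This is not any particular product, and it is not an implementation I have built; the fetch_calendar_events stub, the PROJECT_KEYWORDS mapping, and the project codes are all hypothetical stand-ins for whatever email, IM, and calendar sources (and timesheet codes) an enterprise actually has.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CalendarEvent:
    subject: str
    start: datetime
    end: datetime


# Hypothetical stand-in for a real calendar/email/IM integration.
def fetch_calendar_events(week_of: str) -> list:
    return [
        CalendarEvent("Project Phoenix standup", datetime(2025, 11, 3, 9, 0), datetime(2025, 11, 3, 9, 30)),
        CalendarEvent("Expense tool design review", datetime(2025, 11, 3, 10, 0), datetime(2025, 11, 3, 11, 0)),
    ]


# Hypothetical mapping from keywords in event subjects to timesheet project codes.
PROJECT_KEYWORDS = {"phoenix": "PHX-001", "expense": "OPS-042"}


def draft_timesheet(week_of: str) -> dict:
    """Group event hours by project code to produce a draft, never a final, submission."""
    hours = defaultdict(float)
    for event in fetch_calendar_events(week_of):
        duration_hours = (event.end - event.start).total_seconds() / 3600
        code = next(
            (code for keyword, code in PROJECT_KEYWORDS.items() if keyword in event.subject.lower()),
            "UNALLOCATED",  # anything the agent can't classify still needs a human decision
        )
        hours[code] += duration_hours
    return dict(hours)


if __name__ == "__main__":
    draft = draft_timesheet("2025-11-03")
    print("Draft timesheet (review before submitting):")
    for code, hrs in sorted(draft.items()):
        print(f"  {code}: {hrs:.1f} h")
```

The point of the sketch is that the review-and-submit decision stays with the person; the agent only removes the tedious assembly work.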
By polling the entire company for input on where AI can free people up from repetitive, simple, and necessary tasks that reduce the time, energy, and interest they could spend on work that is more fulfilling and more profitable, an enterprise can rapidly create a backlog of ROI-driven AI initiatives. The initial ROI may be small and sometimes difficult to measure. There are two key differences these initiatives would have over the 95% that “fail”:
ROI-Driven Goal: The initiative would have an ROI-driven goal from the start, avoiding the trap of, “We’re doing this because everyone is doing it.”
Built-in Adoption: Adoption is built in, because the goal was defined by the very people who will benefit directly and immediately from the results.
Once business success is demonstrated, more ideas will flow. At which point, it’s a good thing you defined some governance first. You did, right? Well, that’s another article.
As an addendum to this post, I asked Google Gemini for some better questions to solicit ideas for AI initiatives. Here is what it responded with:
That is a good starting point, but the question “What tasks do you perform that are tedious?” is limited because it only focuses on negative experience (tedium) and ignores major areas where AI offers massive value.
Here are five better questions to ask when identifying opportunities for AI, along with the reasoning behind why they are more effective:
5 Better Questions for Identifying AI Opportunities
1. “What tasks do you currently perform that involve interpreting or synthesizing a large volume of unstructured data?”
Why It’s Better:
This targets the core strength of modern AI, especially Large Language Models (LLMs) and computer vision. AI excels at processing and making sense of data humans find overwhelming:
Examples: Summarizing thousands of customer feedback emails, extracting key terms from legal contracts, or analyzing unstructured notes from sales calls.
This question moves beyond “tedious” (which a human might not mind doing) and identifies tasks where AI provides superhuman efficiency and insight by handling data volume and complexity.
2. “Where in your process are decisions made based on historical patterns, forecasts, or complex optimization?”
Why It’s Better:
This targets Predictive and Prescriptive AI. These systems go beyond automation; they actively optimize the business for better outcomes.
Examples: Optimizing logistics routes (a complex math problem), predicting equipment failure (preventative maintenance), or determining the best pricing strategy based on market conditions.
This question focuses on driving direct business value (cost savings, revenue growth, risk reduction), which is a stronger justification for AI investment than simply reducing tedium.
3. “What tasks are currently handled by a person that involve repetitive, rules-based digital actions?”
Why It’s Better:
This clearly identifies opportunities for Robotic Process Automation (RPA), which is often the lowest-hanging fruit for AI implementation.
Examples: Copying data between two systems, generating routine reports, or verifying customer identity against a checklist of rules.
The benefit here is accuracy and speed. While these might be “tedious,” they are more importantly tasks where humans introduce errors, meaning AI delivers improved quality and compliance.
4. “What tasks do you perform where the output quality is inconsistent or highly dependent on individual human experience?”
Why It’s Better:
This targets AI’s ability to drive standardization and quality. Humans, no matter how skilled, have variability. AI provides consistent output.
Examples: Writing personalized yet standardized responses to common customer service inquiries, determining the quality of manufactured parts (visual inspection), or classifying documents.
This question focuses on the goal of scaling expertise—making the output of your best employee available to everyone, consistently.
5. “What are the most frequent points of friction or delay in your customer or employee experience?”
Why It’s Better:
This focuses on impact and user experience, rather than just internal tasks. AI is a powerful tool for improving interactions.
Examples: Implementing a chatbot to answer 80% of routine questions instantly (reducing wait times), automatically tagging inbound customer support tickets (speeding up routing), or providing real-time personalized recommendations.
This shifts the focus to strategic business improvement and reveals problems that can be solved with AI-powered interfaces, not just back-office automation.
This one was really frustrating for me, because support kept saying “you have the wrong org”, even after I got a new one with the link they provided four times. Finally, a support rep helped me find a solution (thank you, Kiran Kumar!).
First, instead of going to Flows through Setup, they had me go to Flows through the Apps navigation:
Next, change the view from the default to All Flows:
And then search for Get Celebration Details (and you will):
However, when this all started for me, I apparently had a corrupt org, so it still wasn’t there. Support insisted that I sign up for a new org three times, and the flow was not visible in any of the three. Once they pointed me to the Apps navigation solution, I did find it in one of those orgs. Rather than start the whole module over, I exported the flow from the org where it was and imported it into the org where I had completed the rest of the module.
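For anyone who wants to repeat that export/import step from the command line, here is a rough sketch of one way it could be scripted. It assumes the Salesforce CLI (sf) is installed, that you are working inside a Salesforce DX project, that both orgs are authorized under the hypothetical aliases source-org and target-org, and that the flow’s API name is Get_Celebration_Details (verify the actual API name in your org). This is a sketch under those assumptions, not the exact procedure I followed.

```python
import subprocess

# Hypothetical org aliases; replace with whatever you used when authorizing via `sf org login web`.
SOURCE_ORG = "source-org"   # the org where the flow actually showed up
TARGET_ORG = "target-org"   # the org where the rest of the module was completed
FLOW = "Flow:Get_Celebration_Details"  # assumed API name; verify in your org


def run(cmd):
    """Echo and run a Salesforce CLI command, raising an error if it fails."""
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Pull the flow metadata out of the org that has it...
    run(["sf", "project", "retrieve", "start", "--metadata", FLOW, "--target-org", SOURCE_ORG])
    # ...then push the retrieved source into the other org.
    run(["sf", "project", "deploy", "start", "--metadata", FLOW, "--target-org", TARGET_ORG])
```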
And, for those who don’t want to go through all of that, you can download the flow metadata directly from