How to Foster AI Adoption from the Bottom Up

There is a lot of commentary about how AI initiatives are “failing”. Some measure it as ROI, which is a fair yardstick. Others point at the lack of adoption, which every technology goes through. The blame for these “failures” is often placed on leadership, which is fair given the meaning of the term. Speaking of terms, derivatives of “fail” have been in quotes so far for a reason: if you think of these things as a fail, then you are either sitting on the sidelines or throwing in the towel on the opportunities that AI offers. The only way to fail with AI is to give up on discovering how it will help your business. That will likely be followed by another type of fail.

Like everything else, failure is also a continuum. A prompt that returns an irrelevant result can technically be considered a fail, though, again, it is hard to truly fail if you take the result as a lesson and try something else. At the other end of the spectrum is an agent that Deletes Company’s Entire Database, which is as close to a true fail as one can get. There is no getting away from the fact that some people, teams, even companies, are just not very good at what they do, and capitalistic Darwinism will deal with them regardless of whether they adopt AI or not (though AI will speed up the process).

Those true fails are a very small percentage of the world. The rest are the result of a type of hubris that (probably not coincidentally) has seen a lot of attention in the business- and pop-psychology media lately: the Dunning-Kruger effect. Well, something similar, anyway. The Dunning-Kruger effect is mostly about those who think they are doing better than they really are. The precursor to the failures that come from attempting to follow early adopters doesn’t really have a term. However, think of this analogy: a person shows up to open mic night at a comedy club and sees a few folks pull off really good sets. They sign up to give it a try and find out the hard way that there is a lot more to getting an audience to laugh than just telling jokes.

So, lots of companies piled onto the AI bandwagon after seeing others succeed with what looks, from the outside, like little or no effort. From the inside, those front runners had been playing with AI for years before ChatGPT woke the world up to the value of LLMs and Generative AI. They already had goals in mind for what they would do with the next evolution of the technology, and they applied the latest and greatest with the same rigor they had brought to the latest but-maybe-not-the-greatest capabilities before it. The Lesson: Have a goal, make a plan, and be flexible moving from current state to target.

Back to the leadership blame. Again, some leadership is just not that great. The rest become great by engaging the company as a whole to define goals, make plans, and support the flexibility needed in achieving those goals when following a roadmap that traverses new territory. With AI, that leadership could start with a survey about “What tasks do you perform that are tedious?”

It’s especially important for leadership to ask this of the company as a whole because leaders are generally insulated from the types of mundane work that take away from the ability to be productive. Time and expense reporting are common examples. Most time reporting systems have a way to simplify recurring entries, but for those with varying tasks it would boost productivity and morale to have an agent that parses emails, IMs, and calendars to draft the weekly time submission and remind the person to review and submit. Or expense reports drafted by uploading receipts along with the associated meeting invite or email. The ROI of such agents will vary greatly between enterprises, from a few dollars per person to a few million per year.
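As a rough sketch of what the core of such a timesheet-drafting agent might look like (the event data, project keywords, and function names here are all hypothetical; a real agent would pull events from a calendar or email API and always leave submission to a human):

```python
from collections import defaultdict

# Hypothetical project/keyword mapping; in practice this would be
# maintained per team or learned from past timesheet submissions.
PROJECT_KEYWORDS = {
    "Apollo": ["apollo", "migration"],
    "Internal": ["standup", "all-hands", "training"],
}

def classify(event_title):
    """Map a calendar event title to a project via keyword match."""
    title = event_title.lower()
    for project, keywords in PROJECT_KEYWORDS.items():
        if any(k in title for k in keywords):
            return project
    return "Unclassified"  # flagged for human review

def draft_timesheet(events):
    """events: list of (title, hours) tuples -> {project: total_hours}."""
    totals = defaultdict(float)
    for title, hours in events:
        totals[classify(title)] += hours
    return dict(totals)

week = [
    ("Apollo migration sync", 1.0),
    ("Team standup", 0.5),
    ("Apollo code review", 2.0),
    ("Dentist", 1.0),
]
print(draft_timesheet(week))
```

The important design point is the "Unclassified" bucket: the agent drafts, the person reviews, and anything it can't confidently categorize is surfaced rather than guessed.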

By polling the entire company for input on where AI can free people up from repetitive, simple, and necessary tasks that reduce the time, energy, and interest they could spend on work that is more fulfilling and more profitable, an enterprise can rapidly create a backlog of ROI-driven AI initiatives. The initial ROI may be small and sometimes difficult to measure. There are two key differences these initiatives would have over the 95% that “fail”:

  1. ROI-Driven Goal: The initiative would have an ROI-driven goal from the start, avoiding the trap of, “We’re doing this because everyone is doing it.”
  2. Built-in Adoption: Adoption is built in, because the goal was defined by the very people who will benefit directly and immediately from the results.

Once business success is demonstrated, more ideas will flow. At which point, it’s a good thing you defined some governance first. You did, right? Well, that’s another article.


As an addendum to this post, I asked Google Gemini for some better questions to solicit ideas for AI initiatives. Here is what it responded with:

That is a good starting point, but the question “What tasks do you perform that are tedious?” is limited because it only focuses on negative experience (tedium) and ignores major areas where AI offers massive value.

Here are five better questions to ask when identifying opportunities for AI, along with the reasoning behind why they are more effective:


5 Better Questions for Identifying AI Opportunities

1. “What tasks do you currently perform that involve interpreting or synthesizing a large volume of unstructured data?”

Why It’s Better:

  • This targets the core strength of modern AI, especially Large Language Models (LLMs) and computer vision. AI excels at processing and making sense of data humans find overwhelming.

  • Examples: Summarizing thousands of customer feedback emails, extracting key terms from legal contracts, or analyzing unstructured notes from sales calls.

  • This question moves beyond “tedious” (which a human might not mind doing) and identifies tasks where AI provides superhuman efficiency and insight by handling data volume and complexity.

2. “Where in your process are decisions made based on historical patterns, forecasts, or complex optimization?”

Why It’s Better:

  • This targets Predictive and Prescriptive AI. These systems go beyond automation; they actively optimize the business for better outcomes.

  • Examples: Optimizing logistics routes (a complex math problem), predicting equipment failure (preventative maintenance), or determining the best pricing strategy based on market conditions.

  • This question focuses on driving direct business value (cost savings, revenue growth, risk reduction), which is a stronger justification for AI investment than simply reducing tedium.

3. “What tasks are currently handled by a person that involve repetitive, rules-based digital actions?”

Why It’s Better:

  • This clearly identifies opportunities for Robotic Process Automation (RPA), which is often the lowest-hanging fruit for AI implementation.

  • Examples: Copying data between two systems, generating routine reports, or verifying customer identity against a checklist of rules.

  • The benefit here is accuracy and speed. While these might be “tedious,” they are more importantly tasks where humans introduce errors, meaning AI delivers improved quality and compliance.

4. “What tasks do you perform where the output quality is inconsistent or highly dependent on individual human experience?”

Why It’s Better:

  • This targets AI’s ability to drive standardization and quality. Humans, no matter how skilled, have variability. AI provides consistent output.

  • Examples: Writing personalized yet standardized responses to common customer service inquiries, determining the quality of manufactured parts (visual inspection), or classifying documents.

  • This question focuses on the goal of scaling expertise—making the output of your best employee available to everyone, consistently.

5. “What are the most frequent points of friction or delay in your customer or employee experience?”

Why It’s Better:

  • This focuses on impact and user experience, rather than just internal tasks. AI is a powerful tool for improving interactions.

  • Examples: Implementing a chatbot to answer 80% of routine questions instantly (reducing wait times), automatically tagging inbound customer support tickets (speeding up routing), or providing real-time personalized recommendations.

  • This shifts the focus to strategic business improvement and reveals problems that can be solved with AI-powered interfaces, not just back-office automation.

If you found this interesting, please share.

© Scott S. Nelson

SOLVED: Unable to find “Get Celebration Details” for Service Agent Customization with Prompt Builder module

This one was really frustrating for me, because support kept saying “you have the wrong org,” even after I had gotten a new one with the link they provided four times. Finally, a support rep helped me find a solution (thank you, Kiran Kumar!).

First, instead of going to Flows through Setup, they had me go to Flows through the Apps navigation:

Next, change the view from the default to All Flows:

And then search for Get Celebration Details (and this time you will find it):

However, when this all started for me, I apparently had a corrupt org, so it still wasn’t there. Support insisted that I sign up for a new org 3 times, and the flow was not present in any of the three. Once they pointed me to the App solution, I did find it in one of those orgs. Rather than start the whole module over, I exported the flow from the org where it existed and imported it into the org where I had completed the rest of the module.

And, for those who don’t want to go through all of that, you can download the flow metadata directly from:

https://github.com/ssnsolutionist/ssnsolutionist/blob/main/Get_Celebration_Details.flow-meta.xml 

If you found this interesting, please share.

© Scott S. Nelson

How to Pass the AWS CLF-C02 Cloud Practitioner Exam the First Time

People say the Cloud Practitioner exam is easy. Easy to say if you have used all of the AWS products as an administrator. For me, it took some work. Here’s how I did it.

Updates?

For the record, this is for the CLF-C02 exam, based on having passed it on September 12, 2025. If you’re reading this in the future, check that the exam details haven’t shifted. The process to prepare and pass will be the same, but the details may vary over time.

Despite the name, Cloud Practitioner isn’t all that magical. That said, I think one of the most important aspects of this certification is understanding how to manage your AWS account cost-effectively, which some may see as magical.

The Formula

This is my seventh article on how I passed a certification exam I prepared for, and I have come to a formula that works for me. I still continue to refine it, and I added something new this time (yes, of course it is AI-related).

The current formula is this:

  1. Start with a quality exam prep course.
  2. Find a set of practice exams that has at least 5 times as many questions (total) as the actual exam.
  3. Repeat the practice exams until you consistently score over 90%. (trust me on this one)
  4. (New!) Use NotebookLM to generate a podcast of the material you are weakest on and listen to it repeatedly for a few days before the exam.
  5. Schedule the exam for a time of day when you generally find it easiest to concentrate.

Optional Bio hack

There is one additional ingredient I use, which may not be for everyone. I take focus supplements, sometimes called nootropics, and wash them down with a Starbucks Double Shot. This is the third time I have gone the bio hack route. The first two times, I had missed that step of scheduling at a time of day when I am my sharpest, because work schedules were in the way. Both of those times I felt that I really got a concentration boost. This last time, my schedule was more flexible and I don’t think the combo really helped all that much.

A Scout is Always Prepared

This time around, there was another formula side-step in that I did not find the prep course I took to be of particularly good quality. As such, I’m not going to share it, as I usually do with courses I liked. I was very budget-conscious for this exam and used a course I had access to for free. It did help by giving me exposure to topics I had not had to deal with as part of my regular work, but I credit my long experience in technology, rather than the quality of the content, for being able to extract value from it. For several topics I used Perplexity to fill in gaps I picked up on during the course.

Practice Makes…More Likely to Pass

The practice exams, however, were great. I used AWS Certified Cloud Practitioner Practice Exams CLF-C02 at Udemy. It had 6 full sets of exam questions. Each set felt tougher, possibly because newer questions crept in, and my scores reflected that. I think this is better than most practice exams, where each set has the topics evenly distributed. Of course, it could’ve just been coincidence that my weaknesses aligned with the sequence.

Another thing I really appreciated about the practice exams is that the test review didn’t stop at just the correct answers; it also provided detailed explanations. Other practice sets I have used sometimes only gave a link to the vendor documentation. While the links are more in line with how the vendors would like you to study, I have yet to take one of these exams as a way to learn the material, and for the ones where I already (thought I) knew the material, that level of preparation wasn’t necessary.

Practice Level 1

Another feature of these practice exams I really liked was labeled as a “beta option,” though I seem to recall it has been available on Udemy long enough to no longer be considered “beta.” That feature shows the answer after each question, rather than only at the end of the exam. I used it for my first pass through the practice exams.

Almost for Real

On the second pass I did it “exam style,” getting the answers only with the total score. There were improvements, but not enough. This time, I copied the questions and correct answers into a text file, converted them to Bionic Reading® notes (if you aren’t familiar with Bionic Reading® notes, they make reviewing notes much easier, and I have included mine at the end for reference), and imported the resulting markdown into UpNote for studying.
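The conversion itself is simple enough to approximate in a few lines. This is my rough sketch of the idea, bolding roughly the first half of each word in markdown; it is not the trademarked Bionic Reading® algorithm, just an illustration of the effect:

```python
import re

def bionicize(text):
    """Approximate a Bionic Reading-style emphasis in markdown by
    bolding roughly the first half of each word."""
    def bold_prefix(match):
        word = match.group(0)
        split = max(1, (len(word) + 1) // 2)  # bold at least one letter
        return f"**{word[:split]}**{word[split:]}"
    return re.sub(r"[A-Za-z]+", bold_prefix, text)

print(bionicize("Amazon EBS volumes are replicated"))
# -> **Ama**zon **EB**S **volu**mes **ar**e **repli**cated
```

For real study notes I used the online converter mentioned later in this post; the sketch above just shows why the notes at the end look the way they do.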

One More Time with Feeling

After the third pass, I was almost satisfied with my scores:

Progression in Practice Exam Results

But, my experience with certification exams is that there are always questions in the actual exam that were not covered in my preparation. For this reason, I really prefer to have the practice exams at 100% (which I fell short of this time).

Not Not Necessary (this time)

The practice exams had a few “not” questions which I generally got wrong the first pass and still missed some the second time around. I recently watched an Otherwords video about Why A.I. Struggles with Negative Words, and I still don’t feel better about missing those questions. However, maybe I’m not alone, because there were no such questions in the actual exam I took.

New addition to my standard approach: the NotebookLM podcast

To help improve my memory of the ones I had missed, I went back to the questions I got wrong on the second pass and noted the reference links to AWS documentation. I then fed those links to NotebookLM and had it generate a podcast, selecting the longest format and prompting that it should target an audience preparing to take the CLF-C02 certification exam. The results were incredibly good. But don’t take my word for it; I have posted the podcast here: AWS CLF-C02 Exam Prep Podcast.

Recommended Reading

While I did not use it extensively, I found AWS Certified Cloud Practitioner Study Guide With 500 Practice Test Questions: Foundational (CLF-C02) Exam (Sybex Study Guide) to be a very good resource. I think if I had not been in a rush to get this cert out of the way, I would have started with this and had a better score for it.

If you are studying for this certification, or have already taken it, please share your experience, thoughts, and suggestions in the comments.

My Bionics

Here are the bionic notes I used. Your weak spots may vary, so consider creating your own at https://10015.io/tools/bionic-reading-converter.


Q: What is the benefit of Amazon EBS volumes being automatically replicated within the same availability zone?

A: Durability

Q: Which AWS service can be used to route end users to the nearest AWS Region to reduce latency?

A: Amazon Route 53

Q: What is the main benefit of attaching security groups to an Amazon RDS instance?

A: Controls what IP address ranges can connect to your database instance

Q: What is the recommended storage option when hosting an often-changing database on an Amazon EC2 instance?

A: Amazon EBS

Q: What kind of reports does AWS Cost Explorer provide by default?

A: Utilization

Q: What does the term “Economies of scale” mean?

A: It means that AWS will continuously lower costs as it grows

Q: Which AWS team assists customers in achieving their desired business outcomes?

A: AWS Professional Services

Q: Which of the below options is true of Amazon Cloud Directory?

A: Amazon Cloud Directory allows the organization of hierarchies of data across multiple dimensions

Q: An organization has a legacy application designed using monolithic-based architecture. Which AWS Service can be used to decouple the components of the application?

A: SQS (SNS, and EventBridge)

Q: A company is planning to use Amazon S3 and Amazon CloudFront to distribute its video courses globally. What tool can the company use to estimate the costs of these services?

A: AWS Pricing Calculator

Q: What is the connectivity option that uses Internet Protocol Security (IPSec) to establish encrypted connectivity between an on-premises network and the AWS Cloud?

A: AWS Site-to-Site VPN

Q: Both AWS and traditional IT distributors provide a wide range of virtual servers to meet their customers’ requirements. What is the name of these virtual servers in AWS?

A: Amazon EC2 Instances

Q: A company uses multiple business cloud applications and wants to simplify its employees’ access to these applications. Which AWS service uses SAML 2.0 to enable single sign-on to multiple applications through a central user portal?

A: AWS IAM Identity Center

Q: A small retail business with multiple physical locations is planning to transfer sensor data and store security camera footage in the cloud for further analysis. The total amount of data is around 8 terabytes, and the business’s internet connection is too slow to transfer such a large amount directly to AWS in a reasonable time. Which AWS service would be the most cost-effective to transfer the data to AWS?

A: AWS Snowcone

Q: Which AWS Service is used to manage user permissions?

A: AWS IAM

Q: A company has hundreds of VPCs in multiple AWS Regions worldwide. What service does AWS offer to simplify the connection management among the VPCs?

A: AWS Transit Gateway

Q: Which statement best describes the operational excellence pillar of the AWS Well-Architected Framework?

A: The ability to monitor systems and improve supporting processes and procedures

Q: A company is migrating its on-premises database to Amazon RDS. What should the company do to ensure Amazon RDS costs are kept to a minimum?

A: Right-size before and after migration

Q: A company is planning to host an educational website on AWS. Their video courses will be streamed all around the world. Which of the following AWS services will help achieve high transfer speeds?

A: Amazon CloudFront

Q: What does AWS Health provide? (Choose TWO)

A: 1) Detailed troubleshooting guidance to address AWS events impacting your resources

2) Personalized view of AWS service health

Q: Which of the following services allows customers to manage their agreements with AWS?

A: AWS Artifact

Q: You have set up consolidated billing for several AWS accounts. One of the accounts has purchased a number of reserved instances for 3 years. Which of the following is true regarding this scenario?

A: All accounts can receive the hourly cost benefit of the Reserved Instances

Q: A company is deploying a new two-tier web application in AWS. Where should the most frequently accessed data be stored so that the application’s response time is optimal?

A: Amazon ElastiCache

Q: If you want to register a new domain name, which AWS service should you use?

A: Route 53

Q: If you want to visualize your spending on your AWS account for the past month, which tool can help you?

A: AWS Cost Explorer

Q: If you go for consolidated billing for multiple AWS accounts under three master accounts, what benefit do you get?

A: Combined usage for discounts

Q: For which support plan do you also have AWS support Concierge Service?

A: Enterprise

Q: Which storage option should you use if you are hosting a frequently-changing database on an Amazon EC2 instance?

A: EBS

Q: To get a high throughput to multiple compute nodes, which storage service would you use to host an application on your EC2 instance?

A: EFS

Q: If you want to upload data to S3 at very high speeds, which AWS service takes advantage of the edge locations?

A: S3 Transfer Acceleration

Q: Which one of these can you NOT assign to a user?

A: IAM identity. You cannot directly assign an “IAM identity” to a user because “IAM identity” is a generalized term referring to any entity in IAM (such as users, groups, or roles).

Q: You have been asked to contact AWS support using the chat feature to seek guidance on an ongoing issue. However, when you log in to the AWS support page, you do not see the chat options. What should you do?

A: Live chat support is only available with Business, or Enterprise Support plans

Q: If you want to launch and manage a virtual private server in AWS, which service is the easiest?

A: Lightsail. Lightsail provides pre-configured virtual server instances

Q: What is AWS Athena?

A: AWS Athena is a serverless, interactive query service that enables you to analyze data directly in Amazon S3 using standard SQL

Q: Choose from the options below to filter your incoming traffic request to your EC2 instance.

A: NACLs and Security Groups

Q: Protect from DDoS attacks?

A: NACLs and Security Groups

Q: Where can you find your historical billing information in the AWS console?

A: Billing and Cost Management console

If you found this interesting, please share.

© Scott S. Nelson

Replacing your Proposal Team with ChatGPT

I’ve heard of some businesses that have completely automated their RFP response process using Agentic AI. To reach that level of automation, you either need a very narrow set of services or a very generous budget to address all the quirks and exceptions.

I have neither of those.

Before I go on, I want to point out that while I will definitely continue to use Generative AI as a tool to improve the quality of all of my documentation, I much prefer working with a human team that is AI-augmented rather than just AI. It is a strain being the only one managing the human factor of work that is meant to drive decisions. The title is not a suggestion; it is a description of how to cope when it is necessary.

What I do have is access to a few Generative AI tools. For various reasons I won’t get into here, ChatGPT Projects is the best fit for the workflow I have adopted (and am still refining). Projects are ChatGPT’s (poor) answer to NotebookLM and Perplexity Spaces (see my earlier post about Organizing AI Augmentation with Notebooks).

Projects are useful in that they keep related prompts and files in one place, but they don’t really cross-reference or allow for collaboration. They do come with that fine print at the bottom of the screen stating:

“OpenAI doesn’t use [NAME OF COMPANY PAYING SUBSCRIPTION FEE] workspace data to train its models.”

Which is the main one of those reasons I said I wouldn’t get into (oops!).

I recently worked on a proposal at a time when most of the people who would usually help were busy with other things, so I settled into working mostly with ChatGPT like an eager-but-green proposal teammate (the AI being the green one, not me…no matter what that LLM wrapper says).

Setting the Stage

For this particular proposal, the prep work didn’t look all that different from the old manual process. It started with a short document to capture the proposal’s guiding themes: my company’s strengths, differentiators, and the ideas that needed to shine through in both tone and substance. The document was mostly drafted by practice leadership and refined with a few folks familiar with the client, the project types, or both.

Next came the outline. Depending on the RFP structure, I sometimes let ChatGPT take the first crack at building an outline from the document, then refine it interactively. Other times, the RFP format or flow is not friendly to automated parsing, even for a well-trained AI (or so I assume, as I haven’t attempted to train one that deeply yet). In this case I built the first draft of the outline myself, then handed it to ChatGPT to check against the original RFP. That kind of back-and-forth has become standard practice.

Draft One: Enter the AI Intern

Once the outline was in good shape, ChatGPT proactively offered to populate it, which fits with the persona I have of it as an eager, educated, and inexperienced intern or junior associate. And given the quality of its suggestions, it is tempting to respond with a “Yes” and let ‘er rip. But tempered experience had me opt for prompting it to draft one section at a time and wait for feedback or confirmation before moving on to the next section. In this manner, I was able to put together a pretty decent first draft much faster than doing it entirely on my own (or even with a “real” eager, educated, and inexperienced intern or junior associate, whom I also would not want to do a full draft before getting some feedback).

I would say it was about 50/50 between accepting the first draft of a section and requesting a revision. As with any Generative AI-augmented content generation, most of the issues stemmed from missing levels of detail in my prompts rather than ChatGPT misunderstanding the intent. Speaking of understanding the intent: after it offered to write the proposal for me, I attached the entire RFP (because, like I said, I know notebooks and spaces and projects ain’t the same), along with the outline and the context document, and tempered my response to its offer with “Yes, but…” followed by instructions to do it a section at a time and refer to the files.
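The section-at-a-time loop described above can be sketched in a few lines of Python. Everything here is hypothetical: `generate_section` stands in for the actual LLM call (with attached files in context) and `review` stands in for my human read of each draft; neither is a real API.

```python
def generate_section(heading, context_files, feedback=None):
    """Hypothetical stand-in for an LLM call. A real version would
    prompt the model with the outline heading, the attached files,
    and any revision feedback from the previous attempt."""
    base = f"Draft for '{heading}' grounded in {len(context_files)} files"
    return base + (f" (revised: {feedback})" if feedback else "")

def draft_proposal(outline, context_files, review):
    """Draft one section at a time, pausing for human review before
    moving on; revise until the reviewer has no more feedback."""
    sections = {}
    for heading in outline:
        draft = generate_section(heading, context_files)
        feedback = review(heading, draft)
        while feedback:  # keep revising until the reviewer accepts
            draft = generate_section(heading, context_files, feedback)
            feedback = review(heading, draft)
        sections[heading] = draft
    return sections

# Demo: a reviewer that accepts everything on the first pass.
result = draft_proposal(
    ["Executive Summary", "Approach"],
    ["rfp.pdf", "outline.md", "context.docx"],
    review=lambda heading, draft: None,
)
print(list(result))
```

The point of the structure is that the human review gate sits inside the loop, not after it; "Yes, but…" is the `while feedback:` clause.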

Staying Sane (a.k.a. Breaks Matter)

As many proponents of utilizing Flow will tell you, it can be very beneficial to take breaks every 60 to 120 minutes (while most of the gurus on the topic seem to gravitate to the 90-minute mark, I hold fast that it varies by person and context, mangling Bruce Lee’s advice to “be like water,” in this case by seeking your own level). Without breaks, your ability to be objective about the quality of GenAI outputs will start to degrade and tilt toward your bias. Past one’s threshold of real focus, some will start accepting every output, while others will either keep refining the prompts for the same sections over and over or just re-write them by hand.

The Human Touch

After ChatGPT’s draft, it was time for what passes as human intelligence (I used to call coffee my “artificial intelligence” until the term started being used by everyone to refer to what we currently call AI). I have enough experience (and ego) around writing proposals that I had already made some minor edits to the first AI-generated draft. Once that first draft was completed, I dove in to give it a serious human touch, reading through the entire draft and making notes of changes I thought it needed. Reading through without editing may seem counterintuitive, but it is necessary because something that jumps out at me as incomplete, inaccurate, or just plain wrong may be clarified later in the document. After a top-to-bottom read and making notes of changes, I then work through the notes to actually make the changes, skipping or revising those changes with the full context of the document.

Then it’s ChatGPT’s turn again. I have it go through the document, essentially repeating what I had just done. This is a process I have worked on in other forms of writing as well, and I have a general prompt that I tweak as needed:

Check the attached [PROPOSAL FILENAME] for spelling errors, grammar issues, overall cohesiveness, and that it covers all points expected as a response to [RFP FILENAME].

Only provide detailed descriptions of any corrections or recommended changes so that I can select the changes I agree with. Think hard about this (thanks to Jeff Su‘s YouTube channel for this addition!)

And then I work my way through the response. This same prompt is re-run with updated versions of the proposal until I am satisfied that this stage has yielded as much benefit as it can.

Tightening the Screws

Finally (or almost), I have ChatGPT draft the executive summary. In the case of a really big RFP response, I will first have it draft the section summaries. These summaries are essential to any proposal. In fact, they often make or break it, possibly because they are the only parts the decision makers read, sometimes along with reviews done by others. If the summaries don’t come easy, or don’t sound right based on that original context document, I will go through and collaboratively revise the relevant sections until the summaries flow.

The Final Check

Finally, I try my best to find another human to check the whole of the result. If I’m lucky, I get additional input. If I’m really lucky, they’ve brought their own GenAI-assisted reviews into the mix.

GenAI has had a major impact on my writing output. The flow I use for proposals isn’t all that different from the flow I use to write blog posts or other content. I do a number of stream-of-consciousness sessions (the number varying with the complexity and length of the content), and then start refining. I used that approach before GenAI, and the key difference GenAI has made in my process is that I have learned to do less self-editing during those initial brain dumps, because I know that I have a tireless editor to review and give me feedback during the editing phase. Plus, the editor can be coached in both my intent and style to help me improve beyond just the level of “not clear” and “i before e except after c or when the dictionary says otherwise”.

If you found this interesting, please share.

© Scott S. Nelson

Digging Holes

A Biased Review of an Unbiased Study on Developer Productivity with AI

A long-time friend sent me a link to Does AI Actually Boost Developer Productivity? (100k Devs Study). While writing my response, I realized my reaction was a bit more than a chat reply, so I’m sending him a link to this post and hope he forgives me for the delay…

After watching this video of Yegor Denisov-Blanch, my inner critic wants to jump straight to:
He referred to mid-range engineers at the outset, in the context of who Meta said they were cutting, and it wasn’t clear if the study participants were mid-range. That out of the way, I’ve seen similar studies, though this is the best so far based on number of participants, approach, and level of detail. Those other studies had the boost at zero or less, and I didn’t trust the data but did recognize the premise: that AI is a multiplier, and if a developer tends to go down rabbit holes rather than focusing on the business goals, they will go even deeper down the rabbit hole and become even less productive.

I think another aspect that is lost in these studies is that this is a paradigm shift, which means even the most experienced are still figuring out how to be productive in their use of AI. Since everyone claims to be finding it so easy, no one admits that it takes some getting used to. That will account for some of the productivity hit.

One aspect Denisov-Blanch spends a good amount of time on, and that the mass media usually skims or skips entirely, is the difference between greenfield and brownfield projects. The difference is huge: brownfield productivity gains are much lower. This information is critical to businesses that are planning to reduce their development teams based on published gains, since, for most enterprises, the majority of work is decidedly brownfield.

We also haven’t yet seen the impact of greenfield applications built primarily with GenAI when it comes to long-term maintenance. Yes, we have seen some anecdotal results where they are disastrous, from both a security and CX perspective, but we haven’t seen anything at scale yet. As an architect I am probably biased, but I don’t have much confidence in GenAI to create a reliable and flexible solution for no other reason than most people don’t think to ask for one at the start (except maybe architects😊).

The tools are improving (this is based on anecdotal evidence from people who have both a high degree of skill as developers and have demonstrated critical thinking about tools and processes in the past). The people using the tools are becoming more skilled. So the gains in productivity will likely either climb across the board, or those below mid-range may crawl up from the less-than-zero productivity zone.

Meanwhile, anyone looking to cut their developer workforce in the next couple of years should watch this video, draw their own conclusions, and then revise their estimates.

If you found this interesting, please share.

© Scott S. Nelson