How to Foster AI Adoption from the Bottom Up

There is a lot of commentary about how AI initiatives are “failing”. Some measure failure by ROI, which is a fair yardstick. Others point at the lack of adoption, a phase every technology goes through. The blame for these “failures” is often placed on leadership, which is fair enough given what the word “leader” implies. Speaking of terms, derivatives of “fail” have been in quotes so far for a reason: if you think of these things as failures, then you are either sitting on the sidelines or throwing in the towel on the opportunities that AI offers. The only way to fail with AI is to give up on discovering how it will help your business. That will likely be followed by another type of failure.

Like everything else, failure is also a continuum. A prompt that returns an irrelevant result can technically be considered a fail, though it is hard to call it a true failure if you take the result as a lesson and try something else. At the other end of the spectrum is an agent that Deletes Company’s Entire Database, which is as close to a true fail as one can get. There is no getting away from the fact that some people, teams, even companies, are just not very good at what they do, and capitalistic Darwinism will deal with them whether they adopt AI or not (though AI will speed up the process).

Those true fails are a very small percentage of the world. The rest are the result of a type of hubris that (probably not coincidentally) has seen a lot of attention in the business- and pop-psychology media lately: the Dunning-Kruger effect. Well, actually, something similar. The Dunning-Kruger effect is mostly about those who think they are doing better than they really are. The precursor to the failures that come from attempting to follow early adopters doesn’t really have a term. However, think of this analogy: a person shows up to open mic night at a comedy club and sees a few folks pull off really good sets. They sign up to give it a try and find out the hard way that there is a lot more to getting an audience to laugh than just telling jokes.

So, lots of companies piled onto the AI bandwagon after seeing others succeed with what looks, from the outside, like little or no effort. From the inside, those front-runners had been playing with AI for years before ChatGPT woke the world up to the value of LLMs and Generative AI. They already had goals in mind for what they would do with the next evolution of the technology, and they applied the latest and greatest with the same rigor they had applied to the latest-but-maybe-not-greatest capabilities before it. The Lesson: Have a goal, make a plan, and be flexible moving from current state to target.

Back to the leadership blame. Again, some leadership is just not that great. The rest become great by engaging the company as a whole to define goals, make plans, and support the flexibility needed to achieve those goals when the roadmap traverses new territory. With AI, that leadership could start with a survey asking, “What tasks do you perform that are tedious?”

It’s especially important for leadership to ask this of the company as a whole because leaders are generally insulated from the types of mundane work that take away from the ability to be productive. Time and expense reporting are two common examples. Most time reporting systems have a way to simplify recurring entries, but for people whose tasks vary, it would boost productivity and morale to have an agent that parses emails, IMs, and calendars to draft the weekly time submission and then reminds the person to review and submit it. Or expense reports drafted by uploading receipts and the meeting invite or email that prompted the expense. The ROI of such agents will vary greatly between enterprises, from a few dollars per person to a few million per year.
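
To make that concrete, here is a minimal sketch (in Python) of the deterministic core such a timesheet agent might have. Everything here is a placeholder: the event list stands in for a real calendar API such as Microsoft Graph or Google Calendar, and the keyword match stands in for whatever mapping your time system actually needs.

    from collections import defaultdict

    # Placeholder for a calendar export; a real agent would pull this
    # from a calendar API and from email/IM activity.
    events = [
        {"title": "Sprint planning - Project Apollo", "hours": 1.0},
        {"title": "Code review - Project Apollo", "hours": 2.5},
        {"title": "Status call - Project Zephyr", "hours": 0.5},
    ]

    def draft_timesheet(events, projects):
        """Tally calendar time against known project names for human review."""
        draft = defaultdict(float)
        for event in events:
            for project in projects:
                if project.lower() in event["title"].lower():
                    draft[project] += event["hours"]
        return dict(draft)

    print("DRAFT ONLY - review before submitting:")
    for project, hours in draft_timesheet(events, ["Project Apollo", "Project Zephyr"]).items():
        print(f"  {project}: {hours:g} h")

The important part is the last step: the agent drafts and reminds, and the person still reviews and submits.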

By polling the entire company for input on where AI can free people from the repetitive, simple, but necessary tasks that drain the time, energy, and interest they could spend on more fulfilling and more profitable work, an enterprise can rapidly create a backlog of ROI-driven AI initiatives. The initial ROI may be small and sometimes difficult to measure. Even so, these initiatives would have two key advantages over the 95% that “fail”:

  1. ROI-Driven Goal: The initiative would have an ROI-driven goal from the start, avoiding the trap of, “We’re doing this because everyone is doing it.”
  2. Built-in Adoption: Adoption is built in, because the goal was defined by the very people who will benefit directly and immediately from the results.

Once business success is demonstrated, more ideas will flow. At which point, it’s a good thing you defined some governance first. You did, right? Well, that’s another article.


As an addendum to this post, I asked Google Gemini for some better questions to solicit ideas for AI initiatives. Here is what it responded with:

That is a good starting point, but the question “What tasks do you perform that are tedious?” is limited because it only focuses on negative experience (tedium) and ignores major areas where AI offers massive value.

Here are five better questions to ask when identifying opportunities for AI, along with the reasoning behind why they are more effective:


5 Better Questions for Identifying AI Opportunities

1. “What tasks do you currently perform that involve interpreting or synthesizing a large volume of unstructured data?”

Why It’s Better:

  • This targets the core strength of modern AI, especially Large Language Models (LLMs) and computer vision. AI excels at processing and making sense of data humans find overwhelming.

  • Examples: Summarizing thousands of customer feedback emails, extracting key terms from legal contracts, or analyzing unstructured notes from sales calls.

  • This question moves beyond “tedious” (which a human might not mind doing) and identifies tasks where AI provides superhuman efficiency and insight by handling data volume and complexity.

2. “Where in your process are decisions made based on historical patterns, forecasts, or complex optimization?”

Why It’s Better:

  • This targets Predictive and Prescriptive AI. These systems go beyond automation; they actively optimize the business for better outcomes.

  • Examples: Optimizing logistics routes (a complex math problem), predicting equipment failure (preventative maintenance), or determining the best pricing strategy based on market conditions.

  • This question focuses on driving direct business value (cost savings, revenue growth, risk reduction), which is a stronger justification for AI investment than simply reducing tedium.

3. “What tasks are currently handled by a person that involve repetitive, rules-based digital actions?”

Why It’s Better:

  • This clearly identifies opportunities for Robotic Process Automation (RPA), which is often the lowest-hanging fruit for AI implementation.

  • Examples: Copying data between two systems, generating routine reports, or verifying customer identity against a checklist of rules.

  • The benefit here is accuracy and speed. While these might be “tedious,” they are more importantly tasks where humans introduce errors, meaning AI delivers improved quality and compliance.

4. “What tasks do you perform where the output quality is inconsistent or highly dependent on individual human experience?”

Why It’s Better:

  • This targets AI’s ability to drive standardization and quality. Humans, no matter how skilled, have variability. AI provides consistent output.

  • Examples: Writing personalized yet standardized responses to common customer service inquiries, determining the quality of manufactured parts (visual inspection), or classifying documents.

  • This question focuses on the goal of scaling expertise—making the output of your best employee available to everyone, consistently.

5. “What are the most frequent points of friction or delay in your customer or employee experience?”

Why It’s Better:

  • This focuses on impact and user experience, rather than just internal tasks. AI is a powerful tool for improving interactions.

  • Examples: Implementing a chatbot to answer 80% of routine questions instantly (reducing wait times), automatically tagging inbound customer support tickets (speeding up routing), or providing real-time personalized recommendations.

  • This shifts the focus to strategic business improvement and reveals problems that can be solved with AI-powered interfaces, not just back-office automation.

If you found this interesting, please share.

© Scott S. Nelson

Replacing your Proposal Team with ChatGPT

I’ve heard of some businesses that have completely automated their RFP response process using Agentic AI. To reach that level of automation, you either need a very narrow set of services or a very generous budget to address all the quirks and exceptions.

I have neither of those.

Before I go on, I want to point out that while I will definitely continue to use Generative AI with all of my documentation as a tool to improve quality, I much prefer working with a human team that is AI-augmented rather than with just AI. It is a strain being the only one managing the human factor of work that is meant to drive decisions. The title is not a suggestion; it is a description of how to cope when it is necessary.

What I do have is access to a few Generative AI tools. For various reasons I won’t get into here, ChatGPT Projects is the best fit for the workflow I have adopted (and am still refining). Projects are ChatGPT’s (poor) answer to NotebookLM and Perplexity Spaces (see my earlier post about Organizing AI Augmentation with Notebooks).

Projects are useful in that they keep related prompts and files in one place, but they don’t really cross-reference or allow for collaboration. The feature does come with that fine print at the bottom of the screen stating:

“OpenAI doesn’t use [NAME OF COMPANY PAYING SUBSCRIPTION FEE] workspace data to train its models.”

Which is the main one of those reasons I said I wouldn’t get into (oops!).

I recently worked on a proposal at a time when most of the people who would usually help were busy with other things, so I settled into working mostly with ChatGPT like an eager-but-green proposal teammate (the AI being the green one, not me…no matter what that LLM wrapper says).

Setting the Stage

For this particular proposal, the prep work didn’t look all that different from the old manual process. It started with a short document to capture the proposal’s guiding themes: my company’s strengths, differentiators, and the ideas that needed to shine through in both tone and substance. The document was mostly drafted by practice leadership and refined with a few folks familiar with the client, the project types, or both.

Next came the outline. Depending on the RFP structure, I sometimes let ChatGPT take the first crack at building an outline from the document, then refine it interactively. Other times, the RFP format or flow is not friendly to automated parsing, even for a well-trained AI (or so I assume, as I haven’t attempted to train one that deeply yet). In this case I built the first draft of the outline myself, then handed it to ChatGPT to check against the original RFP. That kind of back-and-forth has become standard practice.

Draft One: Enter the AI Intern

Once the outline was in good shape, ChatGPT proactively offered to populate the template, which fits the persona I have of it as an eager, educated, and inexperienced intern or junior associate. And given the quality of its suggestions, it is tempting to respond with a “Yes” and let ‘er rip. But tempered experience had me opt for prompting it to work one section at a time, waiting for feedback or confirmation before moving on to the next section. In this manner, I was able to put together a pretty decent first draft much faster than doing it entirely on my own (or even with a “real” eager, educated, and inexperienced intern or junior associate, whom I also would not want drafting the whole thing before getting some feedback).

I would say it was about 50/50 between accepting the first draft of a section and requesting a revision. As with any Generative AI-augmented content generation, most of the issues stemmed from missing levels of detail in my prompts rather than from ChatGPT misunderstanding the intent. Speaking of understanding the intent: after it offered to write the proposal for me, I attached the entire RFP (again, because, like I said, I know notebooks and spaces and projects ain’t the same thing), the outline, and the context document, and tempered my response to its offer with “Yes, but…” followed by instructions to work one section at a time and refer to the files.

Staying Sane (a.k.a. Breaks Matter)

As many proponents of Flow will tell you, it can be very beneficial to take breaks every 60 to 120 minutes (most of the gurus on the topic gravitate to the 90-minute mark, but I hold fast that it varies by person and context, mangling Bruce Lee’s advice to “be like water”, in this case by seeking your own level). Without breaks, your ability to be objective about the quality of GenAI outputs will degrade and tilt toward your particular bias: past the threshold of real focus, some people will start accepting every output, while others will keep refining the prompts for the same section over and over, or just re-write it by hand.

The Human Touch

After ChatGPT’s draft, it was time for what passes as human intelligence (I used to call coffee my “artificial intelligence” until the term started being used by everyone to refer to what we currently call AI). I have enough experience (and ego) around writing proposals that I had already made some minor edits to the AI-generated first draft along the way. Once that first draft was completed, I dove in to give it a serious human touch, reading through the entire draft and making notes of changes I thought it needed. Reading through without editing may seem counterintuitive, but it is necessary because something that jumps out at me as incomplete, inaccurate, or just plain wrong may be clarified later in the document. After a top-to-bottom read, I work through the notes to actually make the changes, skipping or revising them with the full context of the document in mind.

Then it’s ChatGPT’s turn again. I have it go through the document, essentially repeating what I had just done. This is a process I have worked on in other forms of writing as well, and I have a general prompt that I tweak as needed:

Check the attached [PROPOSAL FILENAME] for spelling errors, grammar issues, overall cohesiveness, and that it covers all points expected as a response to [RFP FILENAME].

Only provide detailed descriptions of any corrections or recommended changes so that I can select the changes I agree with. Think hard about this. (Thanks to Jeff Su’s YouTube channel for that last addition!)

And then I work my way through the response. This same prompt is re-run with updated versions of the proposal until I am satisfied that this stage has yielded as much benefit as it can.

Tightening the Screws

Finally (or almost so), I have ChatGPT draft the executive summary. In the case of a really big RFP response, I will first have it draft the section summaries. These summaries are essential to any proposal. In fact, they often make or break it, possibly because they are the only parts the decision makers read, sometimes along with reviews done by others. If the summaries don’t come easily, or don’t sound right based on that original context document, I will go through and collaboratively revise the relevant sections until the summaries flow.

The Final Check

Finally, I try my best to find another human to check the whole of the result. If I’m lucky, I get additional input. If I’m really lucky, they’ve brought their own GenAI-assisted reviews into the mix.

GenAI has had a major impact on my writing output. The flow I use for proposals isn’t all that different from the flow I use to write blog posts or other content. I do a number of stream-of-consciousness sessions (the number varying with the complexity and length of the content), and then start refining. I used that approach before GenAI; the key difference GenAI has made in my process is that I have learned to do less self-editing during those initial brain dumps, because I know I have a tireless editor to review and give me feedback during the editing phase. Plus, that editor can be coached in both my intent and my style, helping me improve beyond just the level of “not clear” and “i before e except after c or when the dictionary says otherwise”.

If you found this interesting, please share.

© Scott S. Nelson

Digging Holes

A Biased Review of an Unbiased Study on Developer Productivity with AI

A long-time friend sent me a link to Does AI Actually Boost Developer Productivity? (100k Devs Study). While writing my response, I realized my reaction was a bit more than a chat reply, so I’m sending him a link to this post instead and hope he forgives me for the delay…

After watching this video of Yegor Denisov-Blanch, my inner critic wants to jump straight to: he referred to mid-range engineers at the outset, in the context of who Meta said they were cutting, and it wasn’t clear whether the study participants were mid-range.

With that out of the way, I’ve seen similar studies, though this is the best so far based on the number of participants, the approach, and the level of detail. Those other studies put the boost at zero or less, and I didn’t trust the data but did recognize the premise: AI is a multiplier, and if a developer tends to go down rabbit holes rather than focusing on the business goals, they will go deeper down the rabbit hole and become even less productive.

I think another aspect that is lost in these studies is that this is a paradigm shift, which means even the most experienced are still figuring out how to be productive in their use of AI. Since everyone else seems to be finding it so easy, no one admits that it takes some getting used to. That will account for some of the productivity hit.

One aspect Denisov-Blanch spends a good amount of time on, which the mass media usually skims or skips entirely, is the difference between greenfield and brownfield projects. The difference is huge: brownfield productivity gains are much lower. This is critical information for businesses planning to reduce their development teams based on published gains, since, for most enterprises, the majority of work is decidedly brownfield.

We also haven’t yet seen the long-term maintenance impact of greenfield applications built primarily with GenAI. Yes, we have seen some anecdotal results where they are disastrous, from both a security and a CX perspective, but we haven’t seen anything at scale yet. As an architect I am probably biased, but I don’t have much confidence in GenAI creating a reliable and flexible solution, if for no other reason than that most people don’t think to ask for one at the start (except maybe architects😊).

The tools are improving (this is based on anecdotal evidence from people who have both a high degree of skill as developers and a demonstrated record of critical thinking about tools and processes). The people using the tools are becoming more skilled. So productivity gains will likely climb across the board, or at least those below mid-range may crawl up from the less-than-zero productivity zone.

Meanwhile, anyone looking to cut their developer workforce in the next couple of years should watch this video, draw their own conclusions, and then revise their estimates.

If you found this interesting, please share.

© Scott S. Nelson

Boost Your GenAI Results with One Simple (and Free) Tool

AI is great at summarizing a document or a small collection of documents. When you get to larger collections, the complexity begins to grow rapidly. More complex prompts are the least of it: you need to set up RAG (retrieval-augmented generation) and the accompanying vector stores. For really large repositories, that is going to be necessary regardless. Most of us, though, work in a realm that sits between massive content repositories and a manageable set of documents.

One handy helper application for this is Pandoc (https://pandoc.org/), aptly self-described as “your Swiss Army knife” for converting files between formats (without having to do “File > Open > Save As” to the point of carpal tunnel damage). Most of our files are in people-friendly formats like Word and PDF. To an LLM, these files contain mostly useless formatting instructions and metadata (yes, some metadata is useful, but most of it in these files is not going to help as input to GenAI models). Pandoc will take those files and convert them to Markdown, which is highly readable for GenAI purposes (humans can still parse it, and some even prefer it) and uses 1/10000000th of the markup for formatting (confession: I pulled that number out of thin air to get your attention, but the real number is still big enough to matter).
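
To give a feel for it, here is a minimal sketch of a batch conversion in Python. The folder names are assumptions for illustration; the pandoc command itself is the standard CLI (the one-off shell equivalent is pandoc report.docx -t gfm -o report.md):

    import subprocess
    from pathlib import Path

    src_dir = Path("docs")       # assumed folder of Word files
    out_dir = Path("docs-md")    # assumed destination for Markdown
    out_dir.mkdir(exist_ok=True)

    # Convert each Word document to GitHub-Flavored Markdown.
    for doc in src_dir.glob("*.docx"):
        target = out_dir / doc.with_suffix(".md").name
        subprocess.run(["pandoc", str(doc), "-t", "gfm", "-o", str(target)], check=True)
        print(f"converted {doc.name} -> {target}")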

The conversion may not be perfect, especially as the formatting of most documents is not perfect. You can see this for yourself by using the Outline view in Word. With a random document pulled from SharePoint, odds are you will find empty headings between the real ones, entire paragraphs that are marked as headings, or no headings at all because someone manually formatted text using the Normal style to make it look like a heading.

If you are only converting a few documents, you can use a text editor with regex (provided by your favorite GenAI) to do the find and replace. Otherwise, leave them as is: they are already in a much more efficient format for prompting against, and the LLM will likely figure the glitches out anyway.
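
For the curious, this is the kind of find-and-replace I mean, sketched as two regex passes. The 80-character cutoff for “a paragraph masquerading as a heading” is an arbitrary threshold you would tune to your own documents:

    import re
    from pathlib import Path

    path = Path("docs-md/report.md")   # assumed file from the conversion step
    markdown = path.read_text(encoding="utf-8")

    # Drop headings with no text after the hashes (leftovers from stray
    # Heading styles in the source document).
    cleaned = re.sub(r"(?m)^#{1,6}[ \t]*\n", "", markdown)

    # Demote suspiciously long "headings" (whole paragraphs marked as
    # headings) back to plain text.
    cleaned = re.sub(r"(?m)^#{1,6}[ \t]+(.{80,})$", r"\1", cleaned)

    path.write_text(cleaned, encoding="utf-8")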

You can get fancier with this by incorporating a call to Pandoc as a tool in an agentic workflow, converting the files at runtime before passing them to an LLM for analysis (and if you are a developer, managing the conversions so that they aren’t wastefully repeated). So long as you are being fancy, you can have it try to fix the minor formatting errors too, but you have already made a huge leap forward just by dumping all the formatting (that is just noise to an LLM) so that the neural network is processing what really matters: the content that is going to make you look like a prompting genius.
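
As a rough sketch of that idea, here is a function an agent framework could register as a tool (the cache layout and naming are just one way to do it): it converts with Pandoc only when it hasn’t already converted that exact version of the file.

    import hashlib
    import subprocess
    from pathlib import Path

    CACHE = Path(".md-cache")
    CACHE.mkdir(exist_ok=True)

    def to_markdown(doc_path: str) -> str:
        """Return a document as Markdown, converting with Pandoc only when
        this exact version of the file hasn't been converted before."""
        doc = Path(doc_path)
        # Key the cache on file content so edits trigger a reconversion
        # and unchanged files are never converted twice.
        digest = hashlib.sha256(doc.read_bytes()).hexdigest()[:16]
        cached = CACHE / f"{doc.stem}-{digest}.md"
        if not cached.exists():
            subprocess.run(["pandoc", str(doc), "-t", "gfm", "-o", str(cached)], check=True)
        return cached.read_text(encoding="utf-8")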

If you found this interesting, please share.

© Scott S. Nelson

Upgrading to Windows 11 for Luddites Like Me

tl;dr: If you have serious performance issues after upgrading and have tried all the usual tweaks, check the Power Mode settings.

The last Windows upgrade where I felt better for the experience was Windows 2000. Yes, there have been some marked improvements in media capabilities since then (if not, I’d still be on Windows 2000, except for the security patch problem). The only past upgrade I found difficult (excluding disappointment as a challenge) was from 3.1 to 95. That was hard because of all the disk swapping, since CD-ROMs were still not ubiquitous. So I was a bit put off when I experienced a huge performance hit after the upgrade from 10 to 11. It was an upgrade I only performed because free security updates for Windows 10 end in October (I believe that makes it the shortest-lived Windows ever, in terms of support) and I happened to be between deadlines at the moment. The last thing I wanted was to do the upgrade in the middle of a deliverable deadline, because I expected it to be less than fun.

Expectations met. I spent three days after the upgrade trying to live with it. I knew going in that I needed to fix a lot of the default settings to keep big tech from becoming Big Brother, and I had a list prepared before I even started so I could thwart the privacy pirates promptly. My inner Fox Mulder believes that much of the performance degradation of Windows 10 over the last few years was meant to urge people like us to upgrade. Afterward, I started to think it was more a matter of acclimating us to the problems to come.

After spending way too much time discussing the problem with Perplexity, and then trying to pretend that I could live with a machine that was 50 generations newer than the one I had in 1996 and yet somehow slower, I decided to work on a hobby project that runs on another machine. Did I mention that all of my Windows devices are laptops? When I booted up the hobby machine, I did so before plugging it in and noticed the usual effects of running on battery, with the monitor dimmer and the machine a bit slower. I plugged it in and it perked right up, as did my hind brain. I went back to my recently degraded machine, checked the power settings, and lo and behold, Big Brother had decided that I could get by with a reduction in performance even while plugged in.

Windows 11 Power Mode Settings

I’m convinced that the fix was actually a combination of all of the other changes I made in addition to setting Power Mode to Best Performance when plugged in. I have to believe that, because the alternative, that all of the wasted time (and Tylenol®) was for nothing, is too much to bear.

Speaking of too much to bear, writing up all of those steps is also a bit of a chore. However, I used a Space on Perplexity for the whole journey, so I asked it to write a blog post about it in my style. It still does not have my style right, which is partly on me, for being too brief in the context and skimpy on the training files, and partly just that AI isn’t human and can’t really get me (most humans have the same problem). So as not to let all of that energy-expensive processing go to waste, I’m sharing it with you below.

And now, a word from our AI:

Why I Upgraded to Windows 11 Early—and How You Can Do It with Privacy and a Classic Windows 2000 Feel

Microsoft is ending free security updates for Windows 10 this October. I chose to upgrade now, while I had no pressing deadlines, so I could handle the transition calmly and set everything up for my privacy-first, vintage Windows 2000 style preference. Here’s a streamlined approach to upgrading that respects your privacy and brings back a classic user experience.

Why Upgrade Now?

Waiting to upgrade means potential fees and stress when deadlines loom. Tackling Windows 11 ahead of time lets you customize and troubleshoot without pressure, making the switch smoother.

Step 1: Prepare Your System

  • Full Backup: Use an external drive plus a system image backup. Protect your irreplaceable data and system state.
  • Legacy Software & Drivers: Collect all your essential old-school apps and hardware drivers ahead of time, preferably from manufacturer websites.
  • Create a Recovery Plan: Have Windows 10 installation media ready just in case you want to roll back.

Step 2: Privacy-Centered Installation

  • Use a Local (Offline) Account: Avoid Microsoft’s enforced online sign-ins to maintain privacy.
  • Disable Telemetry and Data Sharing: Go to Settings > Privacy & Security and turn off every form of telemetry, diagnostics, and data collection.
  • Disable or Remove Cortana: Cortana’s integration hampers privacy; uninstall or disable it completely.
  • Block Web-Connected Search: Turn off options that send your search queries online.
  • Limit Microsoft Account Usage: Avoid syncing or using Microsoft accounts wherever possible.

Step 3: Recreate a Windows 2000 Look & Feel

  • Classic Start Menu: Install Open-Shell to bring back the familiar Windows 2000 style Start menu.
  • Taskbar Customizations: Use scripts or apps to move the taskbar to the left, reduce icon spacing, and disable icon grouping.
  • Remove Widgets and News Feeds: Disable or uninstall unnecessary widgets and news to reduce clutter.
  • Restore Classic System Sounds and Icons: Swap in old system sounds and icon sets to enhance the nostalgic experience.

Step 4: Upgrade Process Tips

  • Take Your Time: Don’t rush the prompts; decline any offered “extra” software or cloud features.
  • Monitor Network: Temporarily disconnect from the internet during installation to avoid surprise background downloads or telemetry.
  • Updates on Your Terms: Post-upgrade, disable automatic updates or set them to manual to avoid unwanted changes.

Step 5: Post-Upgrade Privacy and Performance Tuning

  • Verify Privacy Settings: Double-check that all telemetry and data sharing options remain off; Windows sometimes re-enables features after upgrades.
  • Manage Startup Programs: Use Task Manager to disable all non-essential startup apps, speeding boot times and avoiding background tracking.
  • Disable Unnecessary Services: Turn off services that you don’t use or that compromise privacy (e.g., Windows Tips, Feedback Hub).
  • Use Host File Hacks or Firewall Rules: Block known Microsoft telemetry servers for an extra layer of privacy.
  • Schedule Routine Privacy Checks: Set reminders to revisit privacy and UI settings after big Windows updates.

Final Thoughts

Upgrading while you have the bandwidth gives you control and peace of mind before the Windows 10 support deadline passes. By prioritizing privacy, minimizing modern OS distractions, and returning to a classic interface, Windows 11 can feel more comfortable and secure—closer to the straightforward experience of Windows 2000 but with current OS capabilities.

Don’t wait until October and the pressure mounts. Take charge now, protect your privacy, and make Windows 11 work your way. Your future self will be grateful.

If you found this interesting, please share.

© Scott S. Nelson