The New Digital Divide is Analog

Your AI-Driven Digital Transformation is Impeded by Behavioral Challenges

The recent article by CT Crooker, Why Everything You Know is Probably Wrong, is filled with hard truths that everyone in IT needs to consider. It starts by pointing out the evidence supporting the thesis that things are going to be very different.

“Going to be” is the one point where I depart from a lot of recent articles by really brilliant people. When discussing the unprecedented acceleration of new and improved capabilities that come under the media definition of AI, these experts are not only correct in their assessments of the rate of change; they understand the details of those changes better than most.

However, they often present these shifts as a present-tense reality for the masses. For the vast majority of organizations, these changes are still in the “going to be” phase because the experts are focusing on a very active and very small minority.

Then there are people.

  • Most CI pipelines aren’t really continuous and don’t truly integrate.

  • Teams hold stand-ups and manage backlogs that aren’t the least bit Agile.

  • Enterprise CRM systems are treated as glorified address books while the predictive analytics and automation features sit dormant.

  • Smartphones are used for scrolling while the powerful sensors and computing power in our pockets remain largely untouched.

The main impediment to adopting technical solutions is rarely a technical problem. The real culprits are process and culture challenges that act as a silent brake on innovation. This resistance to change usually stems from a deep-seated fear of the unknown or a perceived threat to the status quo.

When a new capability arrives, it doesn’t just offer a faster way to work; it threatens the established hierarchy, the “way we’ve always done it,” and the specialized knowledge that individuals have spent years protecting. These psychological hurdles are the biggest obstacles to adding and improving technical capabilities. It will take significant time before these new tools make it into mainstream IT departments because human behavior does not move at the speed of a GPU.


A Challenge by any Other Name is…Entirely Different

This brings me to my only contention with the article: I disagree with the suggestion that “transformation impedance” is a better way to think about these shifts than “epistemic flexibility under inversion.” While I find that shift in terminology problematic, Crooker’s post is otherwise incredibly thought-provoking and accurate, and the points he raises are essential to consider.

He explains “epistemic flexibility under inversion” as a capability characteristic of both systems and people to adapt to rapid changes and then adopt new approaches as a result. He goes on to suggest that “transformation impedance” may be a better way to think about it.

But branding is more important than most realize. People who take up the call of “transformation impedance” will be more likely to focus on the impedance side, which invites conflict between those who think everyone else should reduce their impedance and those who see no reason to change. I’ll admit there is some room for collaboration on the rate of lowering impedance, but then again, there are still a lot of CI pipelines that are neither continuous nor integrating.

First, I will admit that I had to look up the definition of “epistemic flexibility under inversion” to fully digest it:

“Epistemic flexibility under inversion” is a specialized concept often found at the intersection of Bayesian statistics, cognitive science, and information theory. It refers to a system’s (or a mind’s) ability to maintain a coherent understanding of reality even when the “direction” of information flow or the relationship between cause and effect is flipped.

Once I had this better understanding, I had the same reaction to using “transformation impedance” as an alternative as I do to changing “issue” to “challenge.” (There is a lot more to that definition, of course, and I suggest you talk with your favorite Generative AI LLM to get the rest of the picture.)

The Utility of the Negative

Media tells us we should always be positive and pursue higher goals. We buy into this even though the truth is that using the negative to drive action, specifically addressing an “issue,” is much more likely to succeed than the message of chasing a dream. That’s another hard truth.

I like “issue” better than “challenge” because people will deal with an issue so it will go away. A challenge makes them feel good about pursuing it, and since the pursuit is the reward, completing it removes the reward and thus the incentive. If it is an issue, the incentive needs to be to correct it.

While “epistemic flexibility under inversion” may be harder to understand, it keeps the focus on how we need to change our approach to deal with the changes approaching us. “Transformation impedance,” on the other hand, is a label describing a phenomenon and doesn’t necessitate action until it is too late.

We need to flip our approach and find ways to catch up with change rather than be left behind or run over. We should begin thinking about what problems need to be solved for our businesses, and even our lives, that for whatever reason we thought were too hard before, and then come up with new solutions that take advantage of AI. To do that, we must be willing to set aside the old frameworks that impede our ability to do so.

If you found this interesting, please share.

© Scott S. Nelson
Utopia or Dystopia

The Frictionless Trap: AI’s Greatest Benefit is also a Hidden Risk

I’m a big fan of classic science fiction. I generally avoid dystopian themes, but some are just too good to ignore, from A Boy and His Dog to The Hunger Games. When ChatGPT started getting all that popular press a few years back, I was looking forward to finally living in that shiny future promised by Heinlein, Asimov, Clarke, and Roddenberry, maybe even with a flying car (the current prototypes still aren’t there yet, BTW). But the news of the last few years has had more Brave New World and 1984 vibes.

So when I read a recent NPR report on AI in schools, it felt like another example of how we are engineering frustration out of the human experience. The report describes software that is so sensitive to a student’s frustration that it pivots the curriculum before they even have a chance to get annoyed. On paper, it is a triumph of user experience; in practice, it might be a silent deletion of the very thing that makes a mind grow.

The Lesson of the Eloi

When H.G. Wells sent his Time Traveller into the year 802,701, he didn’t find a high-tech utopia or a charred wasteland. He found the Eloi: beautiful, peaceful, and intellectually vacant creatures living in a world of total automation.

Wells’ speculation in The Time Machine hits quite close to home in the age of generative AI:

“Strength is the outcome of need; security sets a premium on feebleness.”

The Eloi weren’t born “slow” because of biology. They were essentially optimized into that state by an environment that removed every possible hurdle. They had won the game of civilization so thoroughly that they lost the ability to play it.

The parallel to AI-driven education isn’t that the technology is failing, but that it is succeeding too well. If the machine handles every productive struggle (sensing your confusion and immediately smoothing the path), it isn’t just teaching you. It is doing the mental heavy lifting on your behalf. You don’t get stronger by watching your trainer lift the weights, even if the trainer is a hyper-personalized LLM.

The Mirror of “Useful” Atrophy

It isn’t just about the classroom; AI is becoming a universal solvent for friction. History suggests that when we remove friction, we usually lose the muscle that was meant to overcome it.

  • The GPS Effect: We traded the frustration of paper maps for a blue dot that tells us where to turn. The result is that our internal spatial awareness is basically a legacy system. We can get anywhere, but we often have no idea where we are.

  • The Calculator Trade-off: We offloaded long division to a chip. This was a fair trade for most, but it established the precedent: if a machine can do it, the human brain is officially off the clock for that specific skill.

  • The Infinite Search: We stopped memorizing facts because we treat our devices as an external hard drive for our personalities.

Not all of that has been a bad thing, unless we end up living in one of those post-EMP stories (which I avoid reading so I don’t have to remember they aren’t that far-fetched). I, for one, am glad that Einstein said “Never memorize something that you can look up,” because rote memorization is a struggle for me, but I really do enjoy exercising mental muscle memory. This is where using AI the wrong way will lead to an atrophy that doesn’t need a major solar event to make us realize things went too far. AI doesn’t just provide answers; it simulates the thinking.

The Verdict: Designing for Resistance

We should be optimistic about AI’s potential to amplify us, but we have to be wary of the passenger mindset. If we use these tools to abolish difficulty, we aren’t empowering ourselves. Instead, we are prepping for a very comfortable life as Eloi.

The challenge for educators, and for anyone using an AI “intern” in their daily workflow, is to intentionally design productive friction back into the system. We need AI that makes the work more meaningful and not just more invisible.

Mastery requires resistance. If the road is perfectly flat and the bike pedals itself, you aren’t traveling; you are just being delivered.

If you found this interesting, please share.

© Scott S. Nelson

Why Bigger Companies Move Faster than You in the AI Adoption Race

It’s not because they are more innovative.

There is a common myth in tech that smaller, nimbler companies always win the adoption race. But with Generative AI, we are seeing the opposite. While startups are still “tinkering,” enterprises are productionizing. According to recent data shared by Nathaniel Whittemore (a.k.a. NLW, host of the AI Daily Brief and CEO of Super.ai) at the AI Engineer World’s Fair, full production deployment of AI agents in billion-dollar enterprises jumped from 11% to 42% in just the first three quarters of 2024 [03:15]. Why? It comes down to a brutal reality of economics, automation, and what I call the “2% vs. 20% ROI Gap.”

AI is Automation (Just Less Consistent)

Many AI enthusiasts argue that automation isn’t AI. That’s true in the sense that not all fruits are apples, but all apples are fruits. AI is automation. The primary difference? Traditional automation is deterministic (consistent); AI is probabilistic (less consistent, but more capable). Smaller companies are already masters of traditional automation because they have to be. They use it to survive with fewer people. But for a massive corporation, the “low-hanging fruit” of basic automation hasn’t even been picked yet. This creates a massive opportunity for Information Gain—the ability to apply AI to “messy” processes that were previously too expensive to automate.

The Math: The 2% vs. 20% Rule

The biggest “moat” for big business isn’t their data or their brand—it’s their Scale ROI. Because a large company doesn’t need significantly more resources than a small company to build a single AI agent or workflow, the math of deployment looks very different:

  • For the Small Business: To pay for the initial R&D and resource overhead, a new AI tool might need to deliver a 20% improvement in efficiency just to break even.
  • For the Enterprise: Because they are applying that tool across thousands of employees or millions of transactions, a mere 2% improvement creates an ROI that justifies the entire department.
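The gap above is simple break-even arithmetic. Here is a minimal sketch with purely hypothetical numbers (the $200K build cost and the value-of-work figures are my illustrations, not data from NLW’s talk):

```python
def break_even_improvement(build_cost: float, annual_value_of_work: float) -> float:
    """Minimum efficiency gain (as a fraction of the work's value)
    needed to recoup the build cost within a year."""
    return build_cost / annual_value_of_work

# The same hypothetical $200K agent build, applied at two scales:
small = break_even_improvement(200_000, 1_000_000)    # small business
large = break_even_improvement(200_000, 10_000_000)   # enterprise
print(f"small business break-even: {small:.0%}")
print(f"enterprise break-even: {large:.0%}")
```

The same tool clears the enterprise’s bar at one-tenth the improvement; the moat is two lines of arithmetic.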

Furthermore, as NLW points out, these large organizations are moving toward Systemic Adoption [17:00]. They aren’t just doing “spot experiments”; they are thinking cross-disciplinarily. They can afford to go slower, spend more on high-quality resources, and leverage volume discounts that drive their production costs down even further.

The “Risk Reduction” Transformation

Interestingly, while most companies start with “Time Savings” (the default ROI metric), the real “transformational” wins are happening elsewhere. NLW’s study found that Risk Reduction—while the least common primary goal—was the most likely to result in “transformational” impact [14:59]. Large companies have massive back-office, compliance, and risk burdens. AI can handle the sheer volume of these tasks in ways a human team never could [15:17]. This is a “moat” that small businesses simply don’t have to worry about yet.

The Cycle: From Moat to Commodity

This scale is the moat that gives big business a temporary advantage. But here is the irony: The more they use that advantage, the faster the moat shrinks. As enterprises productionize these efficiencies, they effectively commoditize them. What cost a Fortune 500 company $1 million to develop today will be a $20/month SaaS plugin for a small business tomorrow. We are in a cycle of:

  1. Hype: Everyone talks.
  2. Value: Big companies productionize at scale.
  3. Cheap: The tech becomes a commodity.
  4. Reverse Leverage: Small, disruptive players use those same cheap tools to move faster and out-innovate the giants.

The giants are winning the production race today, but they are also building the very tools that the next generation of “disruptors” will use to tear them down.

If you found this interesting, please share.

© Scott S. Nelson

Replacing your Proposal Team with ChatGPT

I’ve heard of some businesses that have completely automated their RFP response process using Agentic AI. To reach that level of automation, you either need a very narrow set of services or a very generous budget to address all the quirks and exceptions.

I have neither of those.

Before I go on, I want to point out that while I will definitely continue to use Generative AI with all of my documentation as a tool to improve quality, I much prefer working with a human team that is AI-augmented rather than just AI. It is a strain being the only one managing the human factor of work that is meant to drive decisions. The title is not a suggestion; it is a description of how to cope when it is necessary.

What I do have is access to a few Generative AI tools. For various reasons I won’t get into here, ChatGPT Projects is the best fit for the workflow I have adopted (and am still refining). Projects are ChatGPT’s (poor) answer to NotebookLM and Perplexity Spaces (see my earlier post about Organizing AI Augmentation with Notebooks).

Projects are useful in that they keep related prompts and files in one place, but they don’t really cross-reference or allow for collaboration. They do come with that fine print at the bottom of the screen stating:

“OpenAI doesn’t use [NAME OF COMPANY PAYING SUBSCRIPTION FEE] workspace data to train its models.”

Which is the main one of those reasons I said I wouldn’t get into (oops!).

I recently worked on a proposal at a time when most of the people who would usually help were busy with other things, so I settled into working mostly with ChatGPT like an eager-but-green proposal teammate (the AI being the green one, not me…no matter what that LLM wrapper says).

Setting the Stage

For this particular proposal, the prep work didn’t look all that different from the old manual process. It started with a short document to capture the proposal’s guiding themes: my company’s strengths, differentiators, and the ideas that needed to shine through in both tone and substance. The document was mostly drafted by practice leadership and refined with a few folks familiar with the client, the project types, or both.

Next came the outline. Depending on the RFP structure, I sometimes let ChatGPT take the first crack at building an outline from the document, then refine it interactively. Other times, the RFP format or flow is not friendly to automated parsing, even for a well-trained AI (or so I assume, as I haven’t attempted to train one that deeply yet). In this case I built the first draft of the outline myself, then handed it to ChatGPT to check against the original RFP. That combination of back-and-forth has become standard practice.

Draft One: Enter the AI Intern

Once the outline was in good shape, ChatGPT proactively offered to populate the template, which fits with the persona I have of it as an eager, educated, and inexperienced intern or junior associate. And given the quality of its suggestions, it is tempting to respond with a “Yes” and let ’er rip. But tempered experience had me opt for prompting it to do so one section at a time, waiting for feedback or confirmation before moving on to the next section. In this manner, I was able to put together a pretty decent first draft much faster than doing it entirely on my own (or even with a “real” eager, educated, and inexperienced intern or junior associate, whom I also would not want to produce a full draft before getting some feedback).

I would say it was about 50/50 between accepting the first draft of a section and requesting a revision. As with any Generative AI augmented content generation, most of the issues stemmed from missing levels of detail in my prompts rather than from ChatGPT misunderstanding the intent. Speaking of understanding the intent: after it offered to write the proposal for me, I attached the entire RFP (again because, like I said, Projects ain’t notebooks or spaces), the outline, and the context document, tempering the response to its offer with “Yes, but…” followed by instructions to do it a section at a time and refer to the files.

Staying Sane (a.k.a. Breaks Matter)

As many proponents of utilizing Flow will tell you, it can be very beneficial to take breaks every 60 to 120 minutes (while most of the gurus on the topic seem to gravitate to the 90-minute mark, I hold fast that it varies by person and context, mangling Bruce Lee’s advice to “be like water,” in this case by seeking your own level). Without breaks, your ability to be objective about the quality of GenAI outputs will start to degrade and tilt toward your bias: past one’s threshold of real focus, some will start accepting every output, while others will either keep refining the prompts for sections over and over or just re-write them by hand.

The Human Touch

After ChatGPT’s draft, it was time for what passes as human intelligence (I used to call coffee my “artificial intelligence” until the term started being used by everyone to refer to what we currently call AI). I have enough experience (and ego) around writing proposals that I made some minor edits to the AI-generated draft as I went. Once that first draft was completed, I dove in to give it a serious human touch, reading through the entire draft and making notes of changes I thought it needed. That read-through without editing may seem counterintuitive, but it is necessary because something that jumps out at me as being incomplete, inaccurate, or just plain wrong may be clarified later in the document. After a top-to-bottom read and making notes of changes, I then worked through the notes to actually make the changes, skipping or revising them with the full context of the document in mind.

Then it’s ChatGPT’s turn again. I have it go through the document, essentially repeating what I had just done. This is a process I have worked on in other forms of writing as well, and I have a general prompt that I tweak as needed:

Check the attached [PROPOSAL FILENAME] for spelling errors, grammar issues, overall cohesiveness, and that it covers all points expected as a response to [RFP FILENAME].

Only provide detailed descriptions of any corrections or recommended changes so that I can select the changes I agree with. Think hard about this. (Thanks to Jeff Su’s YouTube channel for this addition!)

And then I work my way through the response. This same prompt is re-run with updated versions of the proposal until I am satisfied that this stage has yielded as much benefit as it can.

Tightening the Screws

Finally (or almost), I have ChatGPT draft the executive summary. In the case of a really big RFP response, I will first have it draft the section summaries. These summaries are essential to any proposal. In fact, they often make or break it, possibly because they are the only parts the decision makers read, sometimes along with reviews done by others. If the summaries don’t come easy, or don’t sound right based on that original context document, I will go through and collaboratively revise the relevant sections until the summaries flow.

The Final Check

Finally, I try my best to find another human to check the whole of the result. If I’m lucky, I get additional input. If I’m really lucky, they’ve brought their own GenAI-assisted reviews into the mix.

GenAI has had a major impact on my writing output. The flow I use for proposals isn’t all that different from the flow I use to write blog posts or other content. I do a number of stream-of-consciousness sessions (the number varying with the complexity and length of the content), and then start refining. I used that approach before GenAI, and the key difference GenAI has made in my process is that I have learned to do less self-editing during those initial brain dumps, because I know I have a tireless editor to review and give me feedback during the editing phase. Plus, that editor can be coached in both my intent and style to help me improve beyond just the level of “not clear” and “i before e except after c, or when the dictionary says otherwise.”

If you found this interesting, please share.

© Scott S. Nelson

Boost Your GenAI Results with One Simple (and Free) Tool

AI is great at summarizing a document or a small collection of documents. With larger collections, the complexity grows rapidly. More complex prompts are the least of it: you need to set up RAG (retrieval-augmented generation) and the accompanying vector stores. For really large repositories, that is going to be necessary regardless. But most of us work in the realm between massive content repositories and a manageable set of documents.

One handy helper application for this is Pandoc (https://pandoc.org/), aptly self-described as “your Swiss Army knife” for converting files between formats (without having to do “File > Open > Save As” to the point of carpal tunnel damage). Most of our files are in people-friendly formats like Word and PDF. To an LLM, these files contain mostly useless formatting instructions and metadata (yes, some metadata is useful, but most of it in these files is not going to be helpful as input to GenAI models). Pandoc will take those files and convert them to Markdown, which is highly readable for GenAI purposes (and humans can still parse it — and some even prefer it) and uses 1/10000000th of the markup for formatting (confession: I pulled that number out of thin air to get your attention, but the real number is still big enough to matter).

The conversion may not be perfect, especially as the formatting of most documents is not perfect. You can see this for yourself by using the Outline view in Word. With a random document pulled from SharePoint, odds are you will find empty headings between the real ones, entire paragraphs that are marked as headings, or no headings at all because someone manually formatted text using the Normal style to make it look like a heading.

If you are only converting a few documents, you can use a text editor with regex (provided by your favorite GenAI) to do find and replace. Otherwise, leave them as is — it is already in a much more efficient format for prompting against, and the LLM will likely figure it out anyway.
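For a repeatable version of that cleanup, the same find-and-replace passes can live in a short script. A minimal sketch, with patterns that are my assumptions about typical Word-to-Markdown debris (adjust them to what your documents actually contain):

```python
import re

def tidy_markdown(text: str) -> str:
    """Strip common Word-to-Markdown conversion debris."""
    # Drop headings that have no text after the hashes
    # (the empty headings left over from sloppy Word outlines).
    text = re.sub(r"^#{1,6}\s*$", "", text, flags=re.MULTILINE)
    # Collapse runs of blank lines into a single blank line.
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip() + "\n"

sample = "# Title\n\n## \n\n\n\nBody text.\n"
print(tidy_markdown(sample))
```

Real headings and body text pass through untouched; only the empty headings and extra blank lines disappear.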

You can get fancier with this by incorporating a call to Pandoc as a tool in an agentic workflow, converting the files at runtime before passing them to an LLM for analysis (and if you are a developer, managing the conversions so that they aren’t wastefully repeated). So long as you are being fancy, you can have it try to fix the minor formatting errors too, but you have already made a huge leap forward just by dumping all the formatting (that is just noise to an LLM) so that the neural network is processing what really matters: the content that is going to make you look like a prompting genius.
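That agentic step might be sketched as a small wrapper that shells out to Pandoc and skips files whose Markdown is already newer than the source, so conversions aren’t wastefully repeated. This assumes the pandoc binary is on your PATH, and the function names are mine, not part of any framework:

```python
import subprocess
from pathlib import Path

def pandoc_to_md(src: Path, dst: Path) -> None:
    # Pandoc infers the input and output formats from file extensions.
    subprocess.run(["pandoc", str(src), "-o", str(dst)], check=True)

def ensure_markdown(src: Path, out_dir: Path, convert=pandoc_to_md) -> Path:
    """Convert src to Markdown in out_dir, unless an up-to-date
    conversion already exists (so repeated runs don't redo the work)."""
    out_dir.mkdir(parents=True, exist_ok=True)
    dst = out_dir / (src.stem + ".md")
    if not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime:
        convert(src, dst)
    return dst
```

An agent tool call can then pass `ensure_markdown(doc, cache_dir)` to the LLM step instead of the raw Word file.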

If you found this interesting, please share.

© Scott S. Nelson