Utopia or Dystopia

The Frictionless Trap: AI’s Greatest Benefit is also a Hidden Risk

I’m a big fan of classic science fiction. I generally avoid dystopian themes, but some are just too good to ignore, from A Boy and His Dog to The Hunger Games. When ChatGPT started getting all that popular press a few years back, I was looking forward to the shiny future promised by Heinlein, Asimov, Clarke, and Roddenberry finally coming true, maybe even a flying car (the current prototypes still aren’t there yet, BTW). But the news of the last few years has had more Brave New World and 1984 vibes.

So when I read a recent NPR report on AI in schools, it felt like another example of how we are engineering frustration out of the human experience. The report describes software that is so sensitive to a student’s frustration that it pivots the curriculum before they even have a chance to get annoyed. On paper, it is a triumph of user experience; in practice, it might be a silent deletion of the very thing that makes a mind grow.

The Lesson of the Eloi

When H.G. Wells sent his Time Traveller into the year 802,701, he didn’t find a high-tech utopia or a charred wasteland. He found the Eloi: beautiful, peaceful, and intellectually vacant creatures living in a world of total automation.

Wells’ speculation in this passage hits quite close to home in the age of generative AI:

“Strength is the outcome of need; security sets a premium on feebleness.”

The Eloi weren’t born “slow” because of biology. They were essentially optimized into that state by an environment that removed every possible hurdle. They had won the game of civilization so thoroughly that they lost the ability to play it.

The parallel to AI-driven education isn’t that the technology is failing, but that it is succeeding too well. If the machine handles every productive struggle (sensing your confusion and immediately smoothing the path), it isn’t just teaching you. It is doing the mental heavy lifting on your behalf. You don’t get stronger by watching your trainer lift the weights, even if the trainer is a hyper-personalized LLM.

The Mirror of “Useful” Atrophy

It isn’t just about the classroom; AI is becoming a universal solvent for friction. History suggests that when we remove friction, we usually lose the muscle that was meant to overcome it.

  • The GPS Effect: We traded the frustration of paper maps for a blue dot that tells us where to turn. The result is that our internal spatial awareness is basically a legacy system. We can get anywhere, but we often have no idea where we are.

  • The Calculator Trade-off: We offloaded long division to a chip. This was a fair trade for most, but it established the precedent: if a machine can do it, the human brain is officially off the clock for that specific skill.

  • The Infinite Search: We stopped memorizing facts because we treat our devices as an external hard drive for our memories.

Not all of that has been a bad thing, unless we end up living in one of those post-EMP stories (which I avoid reading so I don’t have to remember it isn’t that far-fetched). I, for one, am glad that Einstein said “Never memorize something that you can look up,” because rote memorization is a struggle for me, but I really do enjoy exercising mental muscle memory. Using AI the wrong way leads to an atrophy that doesn’t need a major solar event to make us realize things went too far. It doesn’t just provide answers; it simulates the thinking.

The Verdict: Designing for Resistance

We should be optimistic about AI’s potential to amplify us, but we have to be wary of the passenger mindset. If we use these tools to abolish difficulty, we aren’t empowering ourselves. Instead, we are prepping for a very comfortable life as Eloi.

The challenge for educators, and for anyone using an AI “intern” in their daily workflow, is to intentionally design productive friction back into the system. We need AI that makes the work more meaningful and not just more invisible.

Mastery requires resistance. If the road is perfectly flat and the bike pedals itself, you aren’t traveling; you are just being delivered.

If you found this interesting, please share.

© Scott S. Nelson

3 Lies They’re Telling Us About AI

As a writer who has (rarely) been paid for the craft, I find attribution to be not only important, but a moral obligation. Especially when the quote resurfaces in my thinking time and again.

I first read “There are three kinds of lies: lies, damned lies, and statistics” in a Robert Heinlein novel, where he (through his character) attributed it to Mark Twain. Samuel Clemens actually attributed this truism to someone else, though that provenance has not been agreed upon by scholars (per Wikipedia), so I will leave this instance of noblesse oblige as having made my best effort. But I digress (a specialty of mine).

Despite my failed attempt at correct attribution, it’s still true. And it is being proven once again, in the various reports and claims around AI. Allow me to categorize them as I see them today:

1. Lies

…are that AI will be able to do X by N. No one really knows when. Extrapolating future data based on similar but different prior data has an unknown margin of error. This is why every honest company states at the beginning of their demo, presentation, or pitch that you should not base purchase decisions on functionality not currently released (and I would add not proven for your specific use case).

2. Damned Lies

…are about what people are accomplishing now. The app vibe-coded in a weekend is either going to be feature frozen in the near future, suffer a major failure through bad coding or bad actors, or (most rarely) be the result of extensive planning prior to the weekend of wonder.

“The ‘vibe-coded’ apps that fail are not failures of code, but failures of craft.”

3. Statistics

…are the worst of the three, because anyone with half a brain and fewer scruples can get the same numbers to say entirely different things.

Lies Aside

What we call AI (usually) isn’t actually AI. Still, what everyone is calling AI is the biggest paradigm shift in Information Technology since computers shrank from needing a room to sitting on a desk (and, not long after, a lap).

Impacts on humanity are a physics phenomenon in that every positive improvement has an equal and opposite potential that will eventually be realized. Whenever the many benefit, a very few benefit a lot more. Some deserve those benefits, and some don’t.

The Mirror and the Machine

We find ourselves at a peculiar crossroads where the technology is accelerating faster than our ability to tell the truth about it. We are told AI is a magic wand, a job-thief, or a savior, but the “Lies, Damned Lies, and Statistics” reveal a simpler reality:

“AI is not a destination; it is a mirror.”

The statistics that show executives “saving eight hours a week” while workers save none aren’t an indictment of the technology—they are an indictment of how we value human time. When we strip away the marketing gloss and the manipulated ROI reports, we are left with the same struggle that has defined every Information Technology shift since the first mainframe: the tension between efficiency and agency.

The Paradox of Progress

If my “physics phenomenon” theory holds true, the equal and opposite reaction to the AI boom will be a renewed, premium demand for the irreplaceably human.

“AI makes it remarkably easy to be mediocre at scale.”

The paradigm shift isn’t just about computers moving from desks to our pockets, or from pockets to our cognitive workflows. AI can generate the lies, the damned lies, and the statistics for us in seconds. But it cannot provide the provenance of a thought. It cannot feel the moral obligation of attribution. It cannot understand why a quote from a Heinlein novel matters to a writer’s soul.

The New Bottom Line

We should stop asking when AI will “arrive” or if the pilot programs will finally hit their 100% success rate. Instead, we should ask: Who is being served by the current narrative?

If the many are to benefit—and not just the few—we must look past the “weekend of wonder” and the padded ROI spreadsheets. We must demand a version of progress that doesn’t just prioritize the speed of the output, but the integrity of the outcome.

The lies will continue to evolve, and the statistics will continue to shift. But the truth remains:

“A tool is only as profound as the intent of the person wielding it.”

The question isn’t what AI can do for you by 2030; the question is what you are willing to stand for today while everyone else is busy chasing the vibe.



If you found this interesting, please share.

© Scott S. Nelson

Why Bigger Companies Move Faster than You in the AI Adoption Race

It’s not because they are more innovative.

There is a common myth in tech that smaller, nimbler companies always win the adoption race. But with Generative AI, we are seeing the opposite. While startups are still “tinkering,” enterprises are productionizing. According to recent data shared by Nathaniel Whittemore (a.k.a. NLW, host of the AI Daily Brief and CEO of Super.ai) at the AI Engineer World’s Fair, full production deployment of AI agents in billion-dollar enterprises jumped from 11% to 42% in just the first three quarters of 2024 [03:15]. Why? It comes down to a brutal reality of economics, automation, and what I call the “2% vs. 20% ROI Gap.”

AI is Automation (Just Less Consistent)

Many AI enthusiasts argue that automation isn’t AI. That’s true in the sense that not all fruits are apples, but all apples are fruits. AI is automation. The primary difference? Traditional automation is deterministic (consistent); AI is probabilistic (less consistent, but more capable). Smaller companies are already masters of traditional automation because they have to be. They use it to survive with fewer people. But for a massive corporation, the “low-hanging fruit” of basic automation hasn’t even been picked yet. This creates a massive opportunity for Information Gain—the ability to apply AI to “messy” processes that were previously too expensive to automate.
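
To show what I mean by deterministic versus probabilistic, here is a minimal sketch in Python. The example functions and categories are my own illustrative assumptions, and the “LLM” is simulated with a random choice rather than a real model call:

    # Illustrative sketch: deterministic automation vs. a probabilistic stand-in.
    # Everything here is hypothetical; a real deployment would call an actual model.
    import random

    def classify_expense_deterministic(description: str) -> str:
        """Traditional automation: the same input always yields the same output."""
        rules = {"uber": "Travel", "aws": "Cloud", "staples": "Office"}
        for keyword, category in rules.items():
            if keyword in description.lower():
                return category
        return "Unknown"  # the rule set cannot handle anything "messy"

    def classify_expense_probabilistic(description: str) -> str:
        """Stand-in for an LLM: can take messy input, but answers may vary run to run."""
        candidates = ["Travel", "Cloud", "Office", "Meals"]
        # A real model would weigh context; this mock just samples to show the variance.
        return random.choice(candidates)

    print(classify_expense_deterministic("UBER TRIP 1234"))  # always "Travel"
    print(classify_expense_probabilistic("team offsite, 6 ppl, rideshare + lunch"))  # varies

The consistency of the first function is exactly why small companies already trust it; the flexibility of the second is why the messy, never-automated processes inside large enterprises are suddenly in play.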

The Math: The 2% vs. 20% Rule

The biggest “moat” for big business isn’t their data or their brand—it’s their Scale ROI. Because a large company doesn’t need significantly more resources than a small company to build a single AI agent or workflow, the math of deployment looks very different:

  • For the Small Business: To pay for the initial R&D and resource overhead, a new AI tool might need to deliver a 20% improvement in efficiency just to break even.
  • For the Enterprise: Because they are applying that tool across thousands of employees or millions of transactions, a mere 2% improvement creates an ROI that justifies the entire department.
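
To make that gap concrete, here is a minimal back-of-the-envelope sketch of the break-even math. The dollar amounts and headcounts are my own illustrative assumptions, not figures from NLW’s data:

    # Rough, illustrative sketch of the "2% vs. 20% ROI Gap" described above.
    # All dollar figures and headcounts are hypothetical assumptions.

    def breakeven_improvement(build_cost: float, users: int, value_per_user: float) -> float:
        """Efficiency gain (as a fraction) needed for the tool to pay for itself."""
        return build_cost / (users * value_per_user)

    # Small business: ~$200k to build and run the tool, applied to 10 people
    # doing ~$100k of relevant work each per year.
    small = breakeven_improvement(200_000, 10, 100_000)

    # Enterprise: spends 10x more on a higher-quality build, but applies it
    # to 1,000 employees doing the same kind of work.
    enterprise = breakeven_improvement(2_000_000, 1_000, 100_000)

    print(f"Small business break-even: {small:.0%} efficiency gain")   # 20%
    print(f"Enterprise break-even: {enterprise:.0%} efficiency gain")  # 2%

The only point of the sketch is the ratio: the same class of tool has to clear a roughly ten-times-higher bar for the small shop, because the fixed cost is spread over far fewer people or transactions.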

Furthermore, as NLW points out, these large organizations are moving toward Systemic Adoption [17:00]. They aren’t just doing “spot experiments”; they are thinking cross-disciplinarily. They can afford to go slower, spend more on high-quality resources, and leverage volume discounts that drive their production costs down even further.

The “Risk Reduction” Transformation

Interestingly, while most companies start with “Time Savings” (the default ROI metric), the real “transformational” wins are happening elsewhere. NLW’s study found that Risk Reduction—while the least common primary goal—was the most likely to result in “transformational” impact [14:59]. Large companies have massive back-office, compliance, and risk burdens. AI can handle the sheer volume of these tasks in ways a human team never could [15:17]. This is a “moat” that small businesses simply don’t have to worry about yet.

The Cycle: From Moat to Commodity

This scale is the moat that gives big business a temporary advantage. But here is the irony: The more they use that advantage, the faster the moat shrinks. As enterprises productionize these efficiencies, they effectively commoditize them. What cost a Fortune 500 company $1 million to develop today will be a $20/month SaaS plugin for a small business tomorrow. We are in a cycle of:

  1. Hype: Everyone talks.
  2. Value: Big companies productionize at scale.
  3. Cheap: The tech becomes a commodity.
  4. Reverse Leverage: Small, disruptive players use those same cheap tools to move faster and out-innovate the giants.

The giants are winning the production race today, but they are also building the very tools that the next generation of “disruptors” will use to tear them down.

If you found this interesting, please share.

© Scott S. Nelson

MLL: Metaphorical Language License

That headline is an epic fail at being clever. Clearly no AI there. Going to go with it anyway to make a point or two.

Human perception is just a collection of filters. Adjusting for “Red,
Green, and Blue” in an image is no different than how our brains handle
new tech through Deletion, Distortion, and Generalization.

The AI hype bubble is the ultimate stress test for these filters.

Human Perception as Filtering

The first point is that human perception is the result of filtering.
Actually, all perception is the result of filtering; it is just that
humans are more interested in how it affects them. That, too, is part
of the filter.

Sort of like image filters, where you have adjustments for Red,
Green, and Blue, perception is adjusted through deletion, distortion,
and generalization. Some examples:

  • Deletion: Everyone has at least one thing they do that they
    wouldn’t do if they remembered how difficult it was.
  • Distortion: Media algorithms that zoom in or out
    based on audience bias.
  • Generalization: The core of most learning and both
    a boon and barrier to that very learning.

While generalization is core to individual learning, all three
filters can be seen when groups of people are learning. Here is how
those filters are dialed at a group level:

  • Deletion: Things we learned in the past that would help us
    better adopt and adapt to a new paradigm in technology.
  • Distortion: Knowledge distribution through media
    algorithms that zoom in or out based on audience bias.
  • Generalization: Comparing new paradigms to
    previously familiar concepts; being exposed to high-level concepts as
    parallels to common knowledge to build on iteratively, going deeper each
    time.

The AI hype bubble is a perfect example of the above perceptual
filter settings.

Metaphors for AI Perception

Taking this back to the headline, an anadrome of LLM, here are five
common metaphors being applied to AI (and organized by AI, TBH) that are
worth adding to your own perceptual filters:

  1. The “Alien Intelligence” This metaphor suggests
    that AI doesn’t think like a human; it is a powerful, non-human mind
    that we are trying to communicate with. It highlights the “otherness”
    and unpredictability of Large Language Models (LLMs). Best for:
    Discussing AI safety, alignment, or the surprising ways AI solves
    problems. Source: Popularized by technologist
    and writer Kevin Kelly in Wired, where he argues we should view AI as an
    “artificial alien” rather than a human-like mind.
  2. The “Stochastic Parrot” This is a more critical
    metaphor used to describe LLMs. It suggests that AI doesn’t “know”
    anything; it simply repeats patterns of language it has seen before,
    much like a parrot mimics sounds without understanding the meaning.
    Best for: Explaining how LLMs work, discussing
    hallucinations, or tempering over-hyped expectations.
    Source: From the influential 2021 research paper “On
    the Dangers of Stochastic Parrots,” co-authored by Emily M. Bender and
    Timnit Gebru.
  3. The “Bicycle for the Mind” Originally used by
    Steve Jobs to describe the personal computer, this metaphor has been
    reclaimed for AI. It positions AI as a tool that doesn’t replace the
    human, but rather amplifies our natural capabilities, allowing us to go
    “further and faster.” Best for: Productivity-focused
    content, tutorials, and “AI-as-a-copilot” narratives.
    Source: Originally Steve Jobs (referring to PCs);
    recently applied to AI by figures like Sam Altman (OpenAI CEO) in
    various interviews regarding human-AI collaboration.
  4. The “Infinite Intern” This metaphor frames AI as
    a highly capable, tireless assistant that is eager to please but lacks
    common sense and requires very specific instructions (prompting) to get
    things right. Best for: Business use cases, delegation,
    and explaining the importance of “human-in-the-loop” workflows.
    Source: Widely attributed to Ethan Mollick, a Wharton
    professor and leading voice on AI implementation in education and
    workplace settings.
  5. The “Electric Library” Think of AI not as a
    search engine that gives you a list of links, but as a librarian who has
    read every book in the world and can synthesize that information into a
    single answer for you. Best for: Explaining the shift
    from traditional search to Generative AI search.
    Source: A common conceptual framework used by Ben
    Evans, a prominent technology analyst, to describe the shift in how we
    access and process information.

However you perceive the rise of LLM-based AI, include “Springboard”
in your own collection of metaphors. That is, something that helps
anyone reach higher when approached at high speed with focus…and will
trip you if you come at it the wrong way, even at a slow walk if not
paying attention.


This post was inspired by a post on LinkedIn by Dr. Thomas R. Glück. If
you have read this far, please like, comment on, and share both.

If you get all the way through this post, please tell me if the “MLL”
headline is an epic fail at being clever or if it caught your eye.
(Full disclosure: I use a Gem to review and edit my posts, and
generally ignore 80% of what it suggests, including its advice to lose
that headline.)

 

If you found this interesting, please share.

© Scott S. Nelson

The Collaboration Dividend: Who is really ahead in GenAI Adoption

I’ve seen several tech buzz cycles, where even the real stuff is hyped. From BBS systems to .com bubbles, shareware to SaaS, DHTML to AJAX to ReST, and web first to mobile first to cloud first. In almost every one of those booms, the “first-mover advantage” belonged to the command-and-control mindset: direct, rigid, and strictly instrumental.
As I watch the rolling adoption of Generative AI (GenAI), I see a long-overdue validation of a different skillset.
The technical gap is no longer being closed by the most aggressive “commanders,” but by the most collaborative coordinators. I am delighted to see that women are not just adopting this technology; they are mastering its productivity curve at a rate that confirms what many of us have suspected for years:
When technology becomes conversational, the best communicators win.

A Predictable Shift in the Trenches

In hindsight, this was inevitable. We have moved away from a world where you had to speak “machine” (syntax and code) to a world where the machine finally speaks “human” (semantics and dialogue).
I’m seeing this play out in two very specific ways:
  • In Engineering: I’ve noticed women developers are often faster to move past using AI as a simple code generator. They are using it as a high-level architectural partner, stress-testing logic and managing edge cases. They aren’t just looking for an output; they are managing a relationship with a complex system.
  • The Non-Technical Leap: This is one of the most gratifying shifts to watch. I’m seeing women in marketing, HR, and operations become “technical” as a side-effect of AI adoption. They are building automated workflows and custom tools that once required a dedicated IT ticket. They are bridging the gap not through brute-force coding, but through precise, collaborative inquiry.

Why the “Soft” Skill is the New “Hard” Skill

Traditional computing was about giving a machine a rigid command. If you didn’t know the exact syntax, the machine failed.
GenAI is different. It requires a dialogue.
The best results don’t come from a single prompt; they come from a back-and-forth “coaching” session. This requires empathy for the model’s logic, iterative questioning, and the patience to refine an idea rather than just demanding a result. Because women have historically been the primary collaborators and “connectors” in the workplace, they are naturally suited for the dialogic nature of GenAI.

The Data Catches Up to the Reality

The industry is starting to recognize this shift, and the data is backing up what we are seeing in our offices:
  • Closing the Gap: Deloitte’s TMT Predictions suggest that the rate of GenAI adoption among women has been tripling, on track to equal or even exceed male adoption by the end of this year.
  • The Quality of Interaction: Recent studies indicate that while men may use the tools more frequently for “one-off” tasks, women often show greater knowledge improvement and higher competence after the interaction. They aren’t just using the tool; they are learning with it.

The Bottom Line

We are witnessing the Collaboration Dividend. For decades, “soft skills” were often sidelined as secondary. Today, they have become the ultimate competitive advantage.
It is a pleasure to see these skills—and the women who have mastered them—finally getting the recognition they deserve. In the age of GenAI, the “cooperator” will almost always outperform the “commander.”

About the Feature Image

One colleague in particular inspired the first spark of this post, and I wanted her to be part of the feature image. Then I began thinking of other women who have shown me the benefits of collaboration, and I added their images as a tribute as well. My apologies to those I didn’t think of during the 10 minutes of creating this image prompt, or who are no longer on LinkedIn.

If you found this interesting, please share.

© Scott S. Nelson