MLL: Metaphorical Language License

That headline is an epic fail at being clever. Clearly no AI there. I'm going to go with it anyway to make a point or two.

Human perception is just a collection of filters. Adjusting for “Red,
Green, and Blue” in an image is no different than how our brains handle
new tech through Deletion, Distortion, and Generalization.

The AI hype bubble is the ultimate stress test for these filters.

Human Perception as Filtering

The first point is that human perception is the result of filtering.
In fact, all perception is the result of filtering; it is just that
humans are more interested in how it affects them. That, too, is part
of the filter.

Much as image filters have adjustments for Red, Green, and Blue,
perception is adjusted through deletion, distortion, and
generalization. Some examples:

  • Deletion: Everyone has at least one thing they do that they
    wouldn’t do if they remembered how difficult it was.
  • Distortion: Media algorithms that zoom in or out
    based on audience bias.
  • Generalization: The core of most learning, and both a boon and a
    barrier to that very learning.

While generalization is core to individual learning, all three
filters can be seen when groups of people are learning. Here is how
those filters are dialed in at a group level:

  • Deletion: Things we learned in the past that would help us better
    adopt and adapt to a new technology paradigm.
  • Distortion: Knowledge distribution through media
    algorithms that zoom in or out based on audience bias.
  • Generalization: Comparing new paradigms to
    previously familiar concepts; being exposed to high-level concepts as
    parallels to common knowledge to build on iteratively, going deeper each
    time.

The AI hype bubble is a perfect example of the above perceptual
filter settings.

Metaphors for AI Perception

Taking this back to the headline, an anadrome of LLM, here are five
common metaphors being applied to AI (and organized by AI, TBH) that are
worth adding to your own perceptual filters:

  1. The “Alien Intelligence”: This metaphor suggests that AI doesn’t
    think like a human; it is a powerful, non-human mind that we are
    trying to communicate with. It highlights the “otherness” and
    unpredictability of Large Language Models (LLMs). Best for:
    Discussing AI safety, alignment, or the surprising ways AI solves
    problems. Source: Popularized by technologist and writer Kevin
    Kelly in Wired, where he argues we should view AI as an
    “artificial alien” rather than a human-like mind.
  2. The “Stochastic Parrot”: This is a more critical metaphor used
    to describe LLMs. It suggests that AI doesn’t “know” anything; it
    simply repeats patterns of language it has seen before, much like
    a parrot mimics sounds without understanding the meaning.
    Best for: Explaining how LLMs work, discussing hallucinations, or
    tempering over-hyped expectations. Source: From the influential
    2021 research paper “On the Dangers of Stochastic Parrots,”
    co-authored by Emily M. Bender and Timnit Gebru.
  3. The “Bicycle for the Mind”: Originally used by Steve Jobs to
    describe the personal computer, this metaphor has been reclaimed
    for AI. It positions AI as a tool that doesn’t replace the human,
    but rather amplifies our natural capabilities, allowing us to go
    “further and faster.” Best for: Productivity-focused content,
    tutorials, and “AI-as-a-copilot” narratives. Source: Originally
    Steve Jobs (referring to PCs); recently applied to AI by figures
    like Sam Altman (OpenAI CEO) in various interviews regarding
    human-AI collaboration.
  4. The “Infinite Intern”: This metaphor frames AI as a highly
    capable, tireless assistant that is eager to please but lacks
    common sense and requires very specific instructions (prompting)
    to get things right. Best for: Business use cases, delegation,
    and explaining the importance of “human-in-the-loop” workflows.
    Source: Widely attributed to Ethan Mollick, a Wharton professor
    and leading voice on AI implementation in education and workplace
    settings.
  5. The “Electric Library”: Think of AI not as a search engine that
    gives you a list of links, but as a librarian who has read every
    book in the world and can synthesize that information into a
    single answer for you. Best for: Explaining the shift from
    traditional search to Generative AI search. Source: A common
    conceptual framework used by Ben Evans, a prominent technology
    analyst, to describe the shift in how we access and process
    information.

However you perceive the rise of LLM-based AI, include “Springboard”
in your own collection of metaphors. That is, something that helps
anyone reach higher when approached at high speed with focus… and
that will trip you if you come at it the wrong way, even at a slow
walk, if you’re not paying attention.


This post was inspired by a post on LinkedIn by Dr. Thomas R. Glück. If
you have read this far, please like, comment on, and share both.

If you get all the way through this post, please tell me if the
“MLL” headline is an epic fail at being clever or if it caught your
eye. (Full disclosure: I use a Gem to review and edit my posts, and
generally ignore 80% of what it suggests, including losing that
headline.)

If you found this interesting, please share.

© Scott S. Nelson

The Collaboration Dividend: Who Is Really Ahead in GenAI Adoption

I’ve seen several tech buzz cycles, where even the real stuff gets hyped: from BBSes to the .com bubble, shareware to SaaS, DHTML to AJAX to REST, and web-first to mobile-first to cloud-first. In almost every one of those booms, the “first-mover advantage” belonged to the command-and-control mindset: direct, rigid, and strictly instrumental.
As I watch the rolling adoption of Generative AI (GenAI), I see a long-overdue validation of a different skillset.
The technical gap is no longer being closed by the most aggressive “commanders,” but by the most collaborative coordinators. I am delighted to see that women are not just adopting this technology, they are mastering its productivity curve at a rate that confirms what many of us have suspected for years:
When technology becomes conversational, the best communicators win.

A Predictable Shift in the Trenches

In hindsight, this was inevitable. We have moved away from a world where you had to speak “machine” (syntax and code) to a world where the machine finally speaks “human” (semantics and dialogue).
I’m seeing this play out in two very specific ways:
  • In Engineering: I’ve noticed women developers are often faster to move past using AI as a simple code generator. They are using it as a high-level architectural partner, stress-testing logic and managing edge cases. They aren’t just looking for an output; they are managing a relationship with a complex system.
  • The Non-Technical Leap: This is one of the most gratifying shifts to watch. I’m seeing women in marketing, HR, and operations become “technical” as a side-effect of AI adoption. They are building automated workflows and custom tools that once required a dedicated IT ticket. They are bridging the gap not through brute-force coding, but through precise, collaborative inquiry.

Why the “Soft” Skill is the New “Hard” Skill

Traditional computing was about giving a machine a rigid command. If you didn’t know the exact syntax, the machine failed.
GenAI is different. It requires a dialogue.
The best results don’t come from a single prompt; they come from a back-and-forth “coaching” session. This requires empathy for the model’s logic, iterative questioning, and the patience to refine an idea rather than just demanding a result. Because women have historically been the primary collaborators and “connectors” in the workplace, they are naturally suited for the dialogic nature of GenAI.

The Data Catches Up to the Reality

The industry is starting to recognize this shift, and the data is backing up what we are seeing in our offices:
  • Closing the Gap: Deloitte’s TMT Predictions suggest that the rate of GenAI adoption among women has been tripling, on track to equal or even exceed male adoption by the end of this year.
  • The Quality of Interaction: Recent studies indicate that while men may use the tools more frequently for “one-off” tasks, women often show greater knowledge improvement and higher competence after the interaction. They aren’t just using the tool; they are learning with it.

The Bottom Line

We are witnessing the Collaboration Dividend. For decades, “soft skills” were often sidelined as secondary. Today, they have become the ultimate competitive advantage.
It is a pleasure to see these skills—and the women who have mastered them—finally getting the recognition they deserve. In the age of GenAI, the “cooperator” will almost always outperform the “commander.”

About the Feature Image

One colleague in particular inspired the first spark of this post, and I wanted her to be part of the feature image. Then I began thinking of other women who have shown me the benefits of collaboration, and I added their images as a tribute as well. My apologies to those I didn’t think of during the 10 minutes of creating this image prompt, or who are no longer on LinkedIn.

If you found this interesting, please share.

© Scott S. Nelson

Another Windows 11 Glitch and Fix: Chromium Dark Mode Taskbar Shadow

tl;dr: There is no real fix, and it is apparently random; I wanted to include a screenshot with this post and the glitch refused to cooperate.

I recently switched laptops and was being irritated by what looked like a shadow overlay on the taskbar. While it is only a visual effect, it is really annoying because I spend so much time working with and testing apps that the visual cue of “you can’t touch this right now” was niggling at my unconscious. So I ran to my tech support, Perplexity. Here’s how it went.

Why is there a shadow over the taskbar when a chromium based browser in dark mode is open

The shadow over the taskbar is the window drop shadow from the browser, and it is a known Windows 11 rendering quirk rather than something specific that you misconfigured.

What is happening

  • Windows 11 draws Fluent Design drop shadows around non-maximized app windows, including Chromium-based browsers.

  • In some builds, these shadows are rendered on top of the taskbar instead of behind it, so when a dark-mode browser is near the bottom of the screen, its dark shadow visually dims or “shades” the taskbar area.

  • The effect is more noticeable with dark-mode apps because the shadow blends into the dark UI and makes the taskbar look like it has a dark band or overlay above it.

Why Chromium + dark mode makes it obvious

  • Chromium-based browsers (Chrome, Edge, Brave, etc.) use standard Windows window frames, so their Fluent shadows are quite large and soft; when the window is near the taskbar, that shadow overlaps and darkens the taskbar icons.

  • In light mode the contrast is lower, so many users do not notice, but in dark mode the dark shadow against a dark taskbar stands out as a clear “shadow strip.”

Can it be fixed or changed?

  • There is no per-app toggle to stop shadows just for Chromium; the behavior comes from Windows’ global “Show shadows under windows” visual effect.

  • Disabling window shadows removes the taskbar shading but also removes drop shadows everywhere:

    • Press Win+R → type sysdm.cpl → Enter

    • Advanced tab → Performance → Settings

    • In Visual Effects, uncheck Show shadows under windows → OK. (A scripted version of this same toggle is sketched after this list.)

  • Some users report this as a bug where the taskbar loses its “always-on-top” visual priority and shadows bleed over it; clicking the taskbar often temporarily restores normal appearance.
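
For those who would rather script the Visual Effects change than click through sysdm.cpl, here is a minimal sketch in Python using ctypes. This is my addition, not part of Perplexity’s answer: it calls the documented Win32 SystemParametersInfo action (SPI_SETDROPSHADOW) that backs the “Show shadows under windows” checkbox, and, like the checkbox, it affects shadows globally.

    import ctypes

    # SystemParametersInfo action behind "Show shadows under windows".
    SPI_SETDROPSHADOW = 0x1025
    SPIF_UPDATEINIFILE = 0x01       # persist the change to the user profile
    SPIF_SENDWININICHANGE = 0x02    # notify running applications

    def set_window_shadows(enabled: bool) -> None:
        # For this action, pvParam carries the BOOL and uiParam must be 0.
        ok = ctypes.windll.user32.SystemParametersInfoW(
            SPI_SETDROPSHADOW,
            0,
            ctypes.c_void_p(1 if enabled else 0),
            SPIF_UPDATEINIFILE | SPIF_SENDWININICHANGE,
        )
        if not ok:
            raise ctypes.WinError()

    set_window_shadows(False)  # disable shadows; call with True to restore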

When it will go away

  • Microsoft community threads describe it as a Windows 11 taskbar/window-shadow bug, and not all builds or GPUs show it the same way.

  • Until an OS update changes how shadows are layered relative to the taskbar, the only reliable workaround is to either keep windows maximized, move them slightly away from the taskbar, or disable window shadows globally.

Sources:

  1. https://www.reddit.com/r/WindowsHelp/comments/121ku7m/window_drop_shadow_appearing_over_taskbar/
  2. https://www.reddit.com/r/Windows11/comments/t4wxmf/drop_shadow_appearing_above_taskbar/
  3. https://www.elevenforum.com/t/enable-or-disable-show-shadows-under-windows-in-windows-11.8475/
  4. https://m.majorgeeks.com/content/page/disable_window_shadows.html
  5. https://learn.microsoft.com/en-us/answers/questions/5596717/taskbar-icons-dimmed-by-window-shadows-cumulative
  6. https://www.youtube.com/watch?v=8k6Tp1nYMMY
  7. https://www.reddit.com/r/WindowsHelp/comments/1i1orkv/windows_and_drop_shadows_rendering_over_taskbar/
  8. https://learn.microsoft.com/en-us/answers/questions/3855099/taskbar-is-dimming-shadowed-and-how-to-stop-it-fro
  9. https://learn.microsoft.com/en-us/answers/questions/4134945/taskbar-on-dark-mode-in-win-11-buggy
  10. https://www.youtube.com/watch?v=4P2Y97gYWfM
  11. https://www.youtube.com/watch?v=IFXAmdJtzyA
  12. https://learn.microsoft.com/en-us/answers/questions/4110832/window-shadow-keeps-returning-in-windows-11
  13. https://www.youtube.com/watch?v=MfSYRfKhjL8
  14. https://learn.microsoft.com/en-us/answers/questions/3837842/still-a-problem-on-windows-11-can-we-finally-get-r
  15. https://issues.chromium.org/issues/40923917
If you found this interesting, please share.

© Scott S. Nelson

How to Foster AI Adoption from the Bottom Up

There is a lot of commentary about how AI initiatives are “failing”. Some measure failure by ROI, which is a fair yardstick. Others point at the lack of adoption, which every technology goes through. The blame for these “failures” is often placed on leadership, which is fair given the meaning of the term. Speaking of terms, derivatives of “fail” have been in quotes so far for a reason: if you think of these things as a fail, then you are either sitting on the sidelines or throwing in the towel on the opportunities that AI offers. The only way to fail with AI is to give up on discovering how it will help your business. That will likely be followed by another type of fail.

Like everything else, failure is also a continuum. A prompt that returns an irrelevant result can technically be considered a fail, though even that is challenged by the notion that one cannot truly fail if they take the result as a lesson and try something else. At the other end of the spectrum is an agent that Deletes Company’s Entire Database, which is as close to a true fail as one can get. There is no getting away from the fact that some people, teams, even companies, are just not very good at what they do, and capitalistic Darwinism will deal with them regardless of whether they adopt AI or not (though AI will speed up the process).

Those true fails are a very small percentage of the world. The rest is a result of a type of hubris that (probably not coincidentally) has seen a lot of attention in the business- and pop-psychology media lately: the Dunning-Kruger effect. Well, actually, just something similar. The Dunning-Kruger effect is mostly about those who think they are doing better than they really are. The precursor to the failures that come from attempting to follow early adopters doesn’t really have a term. However, think of this analogy: a person shows up to open mic night at a comedy club and sees a few folks pull off really good sets. They sign up to give it a try and find out the hard way that there is a lot more to getting an audience to laugh than just telling jokes.

So, lots of companies piled onto the AI bandwagon after seeing others succeed with what looks, from the outside, like little or no effort. From the inside, these front-runners had been playing with AI for years before ChatGPT woke the world up to the value of LLMs and Generative AI. They already had goals in mind for what they would do with the next evolution of the technology, and they applied the latest and greatest with the same rigor they had brought to the latest but-maybe-not-the-greatest capabilities previously. The Lesson: Have a goal, make a plan, and be flexible moving from current state to target.

Back to the leadership blame. Again, some leadership is just not that great. The rest become great by engaging the company as a whole to define goals, make plans, and support the flexibility needed in achieving those goals when following a roadmap that traverses new territory. With AI, that leadership could start with a survey about “What tasks do you perform that are tedious?”

It’s especially important for leadership to ask this of the company as a whole because they are generally insulated from the types of mundane work that take away from the ability to be productive. Some common areas are time and expense reporting. Most time-reporting systems have a way to simplify recurring entries, but for those with varying tasks it would boost productivity and morale to have an agent that parses emails, IMs, and calendars to draft the weekly time submission and remind the person to review and submit. Or expense reports drafted by uploading receipts and the driving meeting invite or email. The ROI of such agents will vary greatly between enterprises, from a few dollars per person to a few million per year.
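
As an illustration of that idea, here is a hypothetical sketch of such a timesheet-drafting agent in Python. Every name in it (the event shape, the TimeEntry record, the project field) is invented for this post; a real version would sit on top of whatever calendar and time-tracking systems the enterprise actually runs.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class TimeEntry:  # hypothetical time-tracking record
        day: str
        project: str
        hours: float
        note: str

    def draft_week(events: list[dict]) -> list[TimeEntry]:
        """Turn a week's calendar events into draft entries for human review."""
        drafts = []
        for ev in events:
            drafts.append(TimeEntry(
                day=ev["start"].date().isoformat(),
                project=ev.get("project", "UNASSIGNED"),  # a person confirms this
                hours=(ev["end"] - ev["start"]).total_seconds() / 3600,
                note=ev["subject"],
            ))
        return drafts

    # The agent only drafts; the person still reviews and submits.
    sample = [{"start": datetime(2025, 1, 6, 9), "end": datetime(2025, 1, 6, 10, 30),
               "subject": "Sprint planning", "project": "ACME-42"}]
    print(draft_week(sample))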

By polling the entire company for input on where AI can free people up from repetitive, simple, and necessary tasks that reduce the time, energy, and interest they could spend on work that is more fulfilling and more profitable, an enterprise can rapidly create a backlog of ROI-driven AI initiatives. The initial ROI may be small and sometimes difficult to measure. There are two key differences these initiatives would have over the 95% that “fail”:

  1. ROI-Driven Goal: The initiative would have an ROI-driven goal from the start, avoiding the trap of, “We’re doing this because everyone is doing it.”
  2. Built-in Adoption: Adoption is built in, because the goal was defined by the very people who will benefit directly and immediately from the results.

Once business success is demonstrated, more ideas will flow. At which point, it’s a good thing you defined some governance first. You did, right? Well, that’s another article.


As an addendum to this post, I asked Google Gemini for some better questions to solicit ideas for AI initiatives. Here is what it responded with:

That is a good starting point, but the question “What tasks do you perform that are tedious?” is limited because it only focuses on negative experience (tedium) and ignores major areas where AI offers massive value.

Here are five better questions to ask when identifying opportunities for AI, along with the reasoning behind why they are more effective:


5 Better Questions for Identifying AI Opportunities

1. “What tasks do you currently perform that involve interpreting or synthesizing a large volume of unstructured data?”

Why It’s Better:

  • This targets the core strength of modern AI, especially Large Language Models (LLMs) and computer vision. AI excels at processing and making sense of data humans find overwhelming:

    • Examples: Summarizing thousands of customer feedback emails, extracting key terms from legal contracts, or analyzing unstructured notes from sales calls.

    • This question moves beyond “tedious” (which a human might not mind doing) and identifies tasks where AI provides superhuman efficiency and insight by handling data volume and complexity.
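
As a concrete (and entirely illustrative) sketch of that first example, summarizing a large pile of feedback emails is typically done in two passes: summarize batches that fit in a context window, then synthesize the partial summaries. This assumes the OpenAI Python SDK; the model name and batch size are placeholders.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user",
                       "content": "Summarize the key themes in this customer feedback:\n\n" + text}],
        )
        return resp.choices[0].message.content

    def summarize_feedback(emails: list[str], batch_size: int = 50) -> str:
        # Map: summarize batches small enough for the context window...
        partials = [summarize("\n---\n".join(emails[i:i + batch_size]))
                    for i in range(0, len(emails), batch_size)]
        # ...then reduce: synthesize the partial summaries into one answer.
        return summarize("\n---\n".join(partials))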

2. “Where in your process are decisions made based on historical patterns, forecasts, or complex optimization?”

Why It’s Better:

  • This targets Predictive and Prescriptive AI. These systems go beyond automation; they actively optimize the business for better outcomes.

  • Examples: Optimizing logistics routes (a complex math problem), predicting equipment failure (preventative maintenance), or determining the best pricing strategy based on market conditions.

  • This question focuses on driving direct business value (cost savings, revenue growth, risk reduction), which is a stronger justification for AI investment than simply reducing tedium.

3. “What tasks are currently handled by a person that involve repetitive, rules-based digital actions?”

Why It’s Better:

  • This clearly identifies opportunities for Robotic Process Automation (RPA), which is often the lowest-hanging fruit for AI implementation.

  • Examples: Copying data between two systems, generating routine reports, or verifying customer identity against a checklist of rules.

  • The benefit here is accuracy and speed. While these might be “tedious,” they are more importantly tasks where humans introduce errors, meaning AI delivers improved quality and compliance.
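
A trivial, hypothetical sketch of that third category, with the rule names and record fields invented for illustration; rules-based checks like this are exactly the work where software beats manual review on accuracy.

    # Each rule is a name plus a predicate over a customer record (all invented).
    RULES = [
        ("has_email",  lambda c: "@" in c.get("email", "")),
        ("has_region", lambda c: c.get("region") in {"NA", "EMEA", "APAC"}),
        ("id_present", lambda c: bool(c.get("customer_id"))),
    ]

    def verify(customer: dict) -> list[str]:
        """Return the names of every rule the record fails."""
        return [name for name, check in RULES if not check(customer)]

    print(verify({"email": "pat@example.com", "region": "NA"}))
    # prints ['id_present']: routine, rules-based, and error-prone by hand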

4. “What tasks do you perform where the output quality is inconsistent or highly dependent on individual human experience?”

Why It’s Better:

  • This targets AI’s ability to drive standardization and quality. Humans, no matter how skilled, have variability. AI provides consistent output.

  • Examples: Writing personalized yet standardized responses to common customer service inquiries, determining the quality of manufactured parts (visual inspection), or classifying documents.

  • This question focuses on the goal of scaling expertise—making the output of your best employee available to everyone, consistently.

5. “What are the most frequent points of friction or delay in your customer or employee experience?”

Why It’s Better:

  • This focuses on impact and user experience, rather than just internal tasks. AI is a powerful tool for improving interactions.

  • Examples: Implementing a chatbot to answer 80% of routine questions instantly (reducing wait times), automatically tagging inbound customer support tickets (speeding up routing), or providing real-time personalized recommendations.

  • This shifts the focus to strategic business improvement and reveals problems that can be solved with AI-powered interfaces, not just back-office automation.

If you found this interesting, please share.

© Scott S. Nelson

Replacing your Proposal Team with ChatGPT

I’ve heard of some businesses that have completely automated their RFP response process using Agentic AI. To reach that level of automation, you either need a very narrow set of services or a very generous budget to address all the quirks and exceptions.

I have neither of those.

Before I go on, I want to point out that while I will definitely continue to use Generative AI with all of my documentation as a tool to improve quality, I much prefer working with a human team that is AI-augmented rather than just AI. It is a strain being the only one managing the human factor of work that is meant to drive decisions. The title is not a suggestion; it is a description of how to cope when it is necessary.

What I do have is access to a few Generative AI tools. For various reasons I won’t get into here, ChatGPT Projects is the best fit for the workflow I have adopted (and am still refining). Projects are ChatGPT’s (poor) answer to NotebookLM and Perplexity Spaces (see my earlier post about Organizing AI Augmentation with Notebooks).

Projects are useful in that they keep related prompts and files in one place, but they don’t really cross-reference or allow for collaboration. They do come with that fine print at the bottom of the screen stating:

“OpenAI doesn’t use [NAME OF COMPANY PAYING SUBSCRIPTION FEE] workspace data to train its models.”

Which is the main one of those reasons I said I wouldn’t get into (oops!).

I recently worked on a proposal at a time when most of the people who would usually help were busy with other things, so I settled into working mostly with ChatGPT like an eager-but-green proposal teammate (the AI being the green one, not me…no matter what that LLM wrapper says).

Setting the Stage

For this particular proposal, the prep work didn’t look all that different from the old manual process. It starts with a short document to capture the proposal’s guiding themes: my company’s strengths, differentiators, and the ideas that needed to shine through in both tone and substance. The document was mostly drafted by practice leadership and refined with a few folks familiar with the client, the project types, or both.

Next came the outline. Depending on the RFP structure, I sometimes let ChatGPT take the first crack at building an outline from the document, then refine it interactively. Other times, the RFP format or flow is not friendly to automated parsing, even for a well-trained AI (or so I assume, as I haven’t attempted to train one that deeply yet). In this case I built the first draft of the outline myself, then handed it to ChatGPT to check against the original RFP. That combination of back-and-forth has become standard practice.

Draft One: Enter the AI Intern

Once the outline was in good shape, ChatGPT proactively offered to populate it, which fits with the persona I have of it as an eager, educated, and inexperienced intern or junior associate. And given the quality of its suggestions, it is tempting to respond with a “Yes” and let ‘er rip. But tempered experience had me opt for prompting it to proceed one section at a time, waiting for feedback or confirmation before moving on to the next section. In this manner, I was able to put together a pretty decent first draft much faster than doing it entirely on my own (or even with a “real” eager, educated, and inexperienced intern or junior associate, whom I also would not want drafting the whole thing before getting some feedback).

I would say it was about 50/50 between accepting the first draft of a section and requesting a revision. As with any Generative AI augmented content generation, most of the issues stemmed from missing levels of detail in my prompts rather than ChatGPT misunderstanding the intent. Speaking of understanding the intent: after it asked to write the proposal for me, I attached the entire proposal (again, because, like I said, I know notebooks and spaces and projects ain’t those), the outline, and the context document, tempering the response to its offer with “Yes, but…” followed by instructions to do it a section at a time and refer to the files.

Staying Sane (a.k.a. Breaks Matter)

As many proponents of utilizing Flow will tell you, it can be very beneficial to take breaks every 60 to 120 minutes (while most of the gurus on the topic seem to gravitate to the 90-minute mark, I hold fast that it varies by person and context, mangling Bruce Lee’s advice to “be like water”, in this case by seeking your own level). Without breaks, your ability to be objective about the quality of GenAI outputs will start to degrade and tilt toward your bias; that is, past one’s threshold of real focus, some will start accepting every output, while others will keep refining the prompts for the same sections over and over, or just re-write them by hand.

The Human Touch

After ChatGPT’s draft, it was time for what passes as human intelligence (I used to call coffee my “artificial intelligence” until the term started being used by everyone to refer to what we currently call AI). I have enough experience (and ego) around writing proposals that I made some minor edits to the first AI-generated draft along the way. Once that first draft was completed, I dove in to give it a serious human touch, reading through the entire draft and making notes of changes I thought it needed. Reading through without editing may seem counterintuitive, but it is necessary because something that jumps out at me as being incomplete, inaccurate, or just plain wrong may be clarified later in the document. After a top-to-bottom read and making notes of changes, I then work through the notes to actually make the changes, skipping or revising them with the full context of the document.

Then it’s ChatGPT’s turn again. I have it go through the document, essentially repeating what I had just done. This is a process I have worked on in other forms of writing as well, and I have a general prompt that I tweak as needed:

Check the attached [PROPOSAL FILENAME] for spelling errors, grammar issues, overall cohesiveness, and that it covers all points expected as a response to [RFP FILENAME].

Only provide detailed descriptions of any corrections or recommended changes so that I can select the changes I agree with. Think hard about this (thanks to Jeff Su’s YouTube channel for this addition!)

And then I work my way through the response. This same prompt is re-run with updated versions of the proposal until I am satisfied that this stage has yielded as much benefit as it can.
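
For anyone who would rather script that review loop than paste into the ChatGPT UI, here is a minimal sketch of the same prompt run through the OpenAI Python SDK. The file names and model are placeholders, and it assumes plain-text versions of the proposal and RFP (the Projects workflow described above handles the attachments for you).

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    def review(proposal_path: str, rfp_path: str) -> str:
        prompt = (
            "Check the attached proposal for spelling errors, grammar issues, "
            "overall cohesiveness, and that it covers all points expected as a "
            "response to the RFP. Only provide detailed descriptions of any "
            "corrections or recommended changes so that I can select the "
            "changes I agree with. Think hard about this.\n\n"
            "PROPOSAL:\n" + Path(proposal_path).read_text() + "\n\n"
            "RFP:\n" + Path(rfp_path).read_text()
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model
            messages=[{"role": "user", "content": prompt}])
        return resp.choices[0].message.content

    # Re-run against each updated draft until the suggestions stop being useful.
    print(review("proposal_v3.txt", "rfp.txt"))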

Tightening the Screws

Finally (or almost), I have ChatGPT draft the executive summary. In the case of a really big RFP response, I will first have it draft the section summaries. These summaries are essential to any proposal. In fact, they often make or break it, possibly because they are the only parts the decision makers read, sometimes along with reviews done by others. If the summaries don’t come easily, or don’t sound right based on that original context document, I will go through and collaboratively revise the relevant sections until the summaries flow.

The Final Check

Finally, I try my best to find another human to check the whole of the result. If I’m lucky, I get additional input. If I’m really lucky, they’ve brought their own GenAI-assisted reviews into the mix.

GenAI has had a major impact on my writing output. The flow I use for proposals isn’t all that different from the flow I use to write blog posts or other content. I do a number of stream-of-consciousness sessions (the number varying with the complexity and length of the content), and then start refining. I used that approach before GenAI; the key difference GenAI has made in my process is that I have learned to do less self-editing during those initial brain dumps, because I know that I have a tireless editor to review and give me feedback during the editing phase. Plus, the editor can be coached in both my intent and style to help me improve beyond just the level of “not clear” and “i before e except after c or when the dictionary says otherwise”.

If you found this interesting, please share.

© Scott S. Nelson