The Highest ROI from AI Automation Starts with Augmentation

The start of this post may sound like AI alarmism, but bear with me. My goal is to underline why the right strategy is the key to a positive ROI instead of a reputation that’s KIA.

Before Y2K, Running with Scissors went viral with Postal. It was a bold move that garnered lots of headlines and changed an industry. The name Running with Scissors comes from the admonition that always begins with “Don’t”, because it is dangerous and has major negative consequences. Then naming their debut product Postal, a reference to going postal, was definitely running with scissors, given that it was released when that was something to really be afraid of, with headlines reminding us every few months. Sometimes pulling a bold, dangerous move like that puts you ahead of the pack. And the same people who would say “Don’t” might rightly say “you were just lucky this time”.

Sadly, as with many of my long-winded metaphors, this analogy falls apart quickly when I get to the point I am meandering toward: AI automation.

The Hard Lessons of High Hopes Held for Hype

Photo by Immo Wegmann on Unsplash

While Running with Scissors pulled off their risky play and at worst stood to learn from it, in the world of AI, when you jump in too fast, the cost can be far higher. It’s not just about learning. It’s about real, public, expensive failure.

My favorite printed coffee mug in the ‘90s said “To err is human, but to really screw up, you need a computer.” Now I need a growler tagged with “Computer programs have glitches, AI gives you stitches.” Or, as some reporters and pundits have put it:

If you think those are of the “person bites dog” variety, take a gander at AI Failures, Mistakes, and Errors, which brings a whole new meaning to the term “doom scrolling” for those who have only dabbled in the arts of the black-box algorithms.

The Hype is Real, the Hyperbole is Not

Generative AI feels like science fiction compared to what we could muster half a decade ago. But if your plan is to fire your interns and forgo fresh recruits because you expect AI to pick up the Slack, you may soon have nothing left but cold coffee, hot-tempered customers, and evaporating bonuses.

[Ego Disclaimer #1: I really liked this section but thought it was too short, so I had Perplexity stretch it a bit with the content below…and I don’t think it did too bad of a job, but please comment on this post and tell me what you think.]

It’s tempting, of course. There’s a parade of enthusiastic press releases and budget-slashing slideshows from folks who are convinced that with just the right AI prompt, entire departments can be blissfully replaced. The reality? Not so much. As thrilling as it sounds, swapping out eager humans for untested bots leaves you with a lot of gaps—the kind interns and new hires usually fill by catching the weird edge cases, asking the questions you forgot were important, and, occasionally, refilling the printer paper before that big client call. Turns out, there’s no neural network yet that will run down the hall with a sticky note or spot the project that’s quietly rolling off the rails.

You also lose your organization’s early warning system. Interns and rookies see with fresh eyes; they’ll squint at your wobbly workflows and say, “Wait, why do we do it this way?” That’s not inefficiency, that’s built-in feedback. When you replace every junior with an “intelligent” auto-responder, you’re left with no canaries in the coal mine, just a black box churning out confident guesses. And as the headlines keep reminding us, when you let those black boxes loose without human context or oversight, suddenly it’s not just your coffee getting cold—it’s your reputation going up in smoke.

AI Today Feels a Lot Like IT Last Century

“Computer error” is a term that persisted for decades as a reason why it was dangerous to leave things up to computers. Truth was, it was always human error; the only question was where it fell in the chain, from the decision to “computerize” all the way down to end users who did not RTFM (or the disclaimer).

The adoption was a lot slower last century, as was communication, so many businesses that were early adopters of computers as business tools repeated the same mistakes as others. Step up to this century, and the really smart people are doing things iteratively.

Other people see what these iterators are accomplishing and decide they want that, too. So they rename their current processes to sound like what the iterative people are doing. Some iterators “move fast and break things”, and then fix them. The semi-iterative do the first half, and then blame the “new” process.

Slow is Smooth, Smooth is Fast

It’s not a new saying, but it’s more relevant than ever: “Slow is smooth, smooth is fast.”

Moving fast starts by moving slow, which means building a foundation that can be controlled, and by controlled I mean rebuilt with a single command. Then you can quickly add something on top of that foundation, and if it breaks, you can start over with no loss. When it succeeds, you repeat that success and add it to your foundation.
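To make “rebuilt with a single command” concrete, here is a minimal Python sketch of the idea. Everything in it is hypothetical for illustration (the workspace name, the manifest, the paths); your real foundation might be infrastructure definitions, pipelines, or prompt templates, but the shape is the same: the baseline lives in a known-good source, so tearing it down and rebuilding costs nothing.

```python
# rebuild.py - a sketch of a foundation that rebuilds with one command.
# The manifest below is a hypothetical stand-in for whatever your real
# known-good baseline is (infra definitions, pipelines, prompt templates).
import shutil
from pathlib import Path

WORKSPACE = Path("workspace")  # hypothetical build target

MANIFEST = {  # hypothetical baseline content
    "config/settings.json": '{"model": "baseline", "temperature": 0.2}',
    "prompts/summarize.txt": "Summarize the following text in one paragraph:",
}

def rebuild() -> None:
    """Tear down and recreate the foundation from the manifest."""
    if WORKSPACE.exists():
        shutil.rmtree(WORKSPACE)  # safe to destroy: the manifest is the source of truth
    for rel_path, content in MANIFEST.items():
        target = WORKSPACE / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)

if __name__ == "__main__":
    rebuild()  # the "single command"
    print(f"Foundation rebuilt at {WORKSPACE.resolve()}")
```

If an experiment on top of the foundation breaks, tearing it down and rebuilding gets you back to the known-good state with no loss, which is the whole point.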

Apply this to an AI adoption strategy. It’s been said there is no need to do a Proof of Concept for AI because the concept has been proven, and this is true. Your ability to apply the concept has not been proven. Or, perhaps, you have disproven it in your organization, and now some people think it was the concept that failed rather than the implementation. To prove you can implement, start with a prototype.

A prototype should be something that is simple, valuable, and measurable. Simple because building confidence is one of the results the prototype should yield. Valuable, because people tend to do a sloppy job if there isn’t much value in what they are doing, and there are more than enough bars being fooed in every organization. And measurable, because you need to show some kind of ROI if you are ever going to get the chance to show real ROI.
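For “measurable”, even a plain CSV of observations is enough to get started. Here is a minimal sketch, assuming the prototype is a single automated step whose output a human accepts or rejects; the step name and fields are made up for illustration.

```python
# metrics.py - a sketch of capturing just enough data to show ROI later.
import csv
import time
from pathlib import Path

LOG = Path("prototype_metrics.csv")  # hypothetical metrics store

def record(step: str, seconds: float, accepted: bool) -> None:
    """Append one observation; over time the CSV becomes your ROI evidence."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["step", "seconds", "accepted"])
        writer.writerow([step, f"{seconds:.1f}", accepted])

# Example: time the automated step and note whether the human kept its output.
start = time.monotonic()
draft = "auto-generated summary..."  # stand-in for the AI step's output
record("summarize_ticket", time.monotonic() - start, accepted=True)
```

Acceptance rate and time saved per step are crude numbers, but they are exactly the numbers that get you permission to do the next step.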

Once that first prototype has proven your ability to implement AI in a safe and useful manner, you’re ready for production…right? Right?

Governance and the Human-in-the-Loop

Nope. We skipped a step, which is to establish some governance. Truth be told, in some organizations you won’t be able to get buy-in for governance. Or you’ll get another recurring meeting on too many calendars with the subject “Governance” that fails to complete the same agenda each time (Item 1: What is Governance? Item 2: Who owns it?).

In many orgs you first have to win a champion or get enough people excited with a viable prototype. In either case, make sure governance is in place before going to production, and don’t play Evel Knievel getting into production. Which is to say, don’t jump the Snake River when there is quite enough danger in the regular trail of iteration.

One Thing at a Time: The Power of Measured Progress

That first successful prototype should do one thing, and do it well. If it’s just a single step in a bigger process, perfect. Now do another step—just one. Pick something valuable and measurable, but also something people secretly or not-so-secretly dislike doing. Before you know it, everyone wants what you just gave that team.

“I do one thing at a time, I do it well, and then I move on” –Charles Emerson Winchester III

Automating one step is augmentation. There’s still a human in the loop, even if one part’s now on autopilot. When that step works, take another. Then another. Each time, you push humans higher up the value chain and commoditize AI as a proven automation solution.
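As a rough sketch of that augmentation pattern, assume the one automated step is drafting a reply. The draft_reply function below is a hypothetical stand-in for whatever your real automated step is (in practice, probably an LLM call); the human still decides what ships.

```python
# A sketch of augmentation: the machine drafts, the human disposes.

def draft_reply(ticket: str) -> str:
    """Hypothetical automated step; in a real system this would call a model."""
    return f"Thanks for reaching out about: {ticket}. Here is what we suggest..."

def handle_ticket(ticket: str) -> str:
    draft = draft_reply(ticket)  # the one automated step
    print(f"--- Draft for review ---\n{draft}")
    verdict = input("Send as-is? [y/N/edit]: ").strip().lower()
    if verdict == "y":
        return draft  # human approved
    if verdict == "edit":
        return input("Enter edited reply: ")  # human revised
    return "ESCALATED: needs a fully human reply"  # human rejected

if __name__ == "__main__":
    print(handle_ticket("My invoice total looks wrong"))
```

When the approval rate gets boringly high, that step becomes a candidate for full automation, and you pick the next one.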

If you hit a limit, congratulations! You broke something and learned from it. That is how you find limits: by exceeding them. If you test your limits one step at a time, when you exceed them you can take a step back and still be further along than when you started. If you try to skip steps, there is a place next to Evel Knievel where you might not wind up the first time, but eventually it will hurt. And it might be a headline in the next version of this post.

Start Small, Stay Smart, Iterate Relentlessly

The highest ROI from AI comes not from boldly going where no automation has gone before but from incremental, tested, and measured iterations from augmentation to the most practical level of automation.

And if you break something along the way, remember: you’re already further ahead than if you’d never started.

[Ego Disclaimer #2: I created an outline for this blog post using Perplexity, then threw away most of it and did my usual off-the-cuff rant. Then I had Perplexity edit the draft and rolled back most of the edits]

© Scott S. Nelson

Course Review: Build Intelligent Agents Using DeepSeek & n8n

Today I completed Build Intelligent Agents Using DeepSeek & N8N on Coursera. I was really impressed at the start of the first module when an interactive AI dynamically quizzed me on the content of the material covered. It was more of a showcase of the Coursera Coach than an integral part of the course. I think a lot of people will benefit from this feature; it is available throughout this course, and I have seen it on others, though I’m not sure how ubiquitous it is across the catalog.
If you have ever worked with one of those people who, when you ask them how to do something, send you one or more links, this course is sort of like working with that person. Most Coursera courses have those Recommended Reading sections that provide useful extra material. I like to bookmark them for future reference, but rarely read them during the course. That approach won’t work with this course, as the quizzes cover content from those readings. I found that having Perplexity summarize the links for me was sufficient to start passing the quizzes. Mostly. There were still questions that I did not recognize from either the videos or the summaries, and a couple of times I read the external content directly and still didn’t find the source of the question.
I don’t just take these courses to hang a certificate on the wall (anymore, though there was a time before 2010 when I collected enough to stack over an inch high, partially shown below).
Pile of certificates
Coursera keeps the best score of up to three attempts, so if I get less than 100% I will repeat the quiz until I do get 100%, so that I am certain I have learned the material. In the last Coursera course I completed (Introduction to Artificial Intelligence by IBM), I happily pointed out that the quality of the video instruction led to my consistent 100% scores. For this n8n course, I can honestly say that without many years of experience in enterprise integration architecture I would not have been able to score 100% on at least half of the modules. This is beyond the issue that some of the links to the required reading were broken.
As mentioned, Perplexity summaries were really useful in completing this course, as was some research to fill in the blanks for missing content. For those who are interested, I exported the thread I used for the course and have made it available in PDF format as Perplexity_Research_for_Coursera_Build_Intelligent_Agents_n8n_Course.pdf
You can see the Perplexity thread (including the delete query that started me on this journey) here: https://www.perplexity.ai/search/does-n8n-have-the-ability-to-r-23edr0K1SYqlwvRRk7QGOw
© Scott S. Nelson

Slices of #ITisLikePizza with Perplexity.ai

In early 2022, I had hoped to start a social media thread with the hashtag #ITisLikePizza, but it was clearly a half-baked idea since no one replied to the first four posts:

  • A good cook can make a better pizza with the same ingredients as a bad cook.
  • All pizzas look good from a distance.
  • If you rush the cook, it will not be as good.
  • You always regret consuming too much too fast, no matter how good it is.

Had any engagement occurred, I was ready with nine more:

  • A great-looking pizza can still not taste good.
  • It’s rarely as good as it sounds in a commercial.
  • When it is as good as the commercial, the next time it isn’t.
  • The best pizzas are often found in small, little-known places.
  • When the small, little-known place gets popular, the quality goes down.
  • Some ugly-looking pizzas are the most satisfying.
  • The hungrier you are, the longer it takes to arrive.
  • If you forget about it in the oven, the result may not be salvageable.
  • If you don’t follow the recipe, your results will vary.

Here we are, three years later, and GenAI being all the rage, it occurred to me that maybe I could extend the list with AI. My cloud-based GenAI of choice is Perplexity, so that’s what I tried it with. I originally stuck with Perplexity because it hallucinated less than other options by default, mostly because it provides sources for its outputs, which I find handy when I rewrite those outputs for my various posts. Had this been a true experiment, I would have run the same prompts in ChatGPT and Copilot, but this is just me avoiding my ever-growing to-do list for a few minutes, so it’s going to be short and anecdotal.

So, my first prompt was ‘Expand on this following list of how “IT is Like Pizza”:’ followed by the original baker’s dozen list I had come up with so far. Instead of adding new pithy pizza ponderings, it gave explanations for each. They were actually really good explanations. And no citations were provided, so this was just the LLM responding. Kind of interesting in itself.

So then I tried the usual lame improvement of the prompt with “Don’t explain the lines. Generate new lines following the same concept.” The result this time was more what I was looking for, though it may just be writer’s ego that made me think they all needed some improvement, except those that could just be tossed.

Then I did something I learned somewhere (I subscribe to half-a-dozen AI-specific newsletters, another dozen that cover AI frequently, plus follow a slew of companies and people on LinkedIn—not to mention that YouTube’s algorithm had caught on to my interest—so I can rarely attribute a particular concept because I either heard it multiple times or I forgot to include the attribution when making a note to cogitate on it more later): I asked Perplexity what the literary term was for the original dirty dozen dictums and it told me “analogical aphorisms” (actually, it told me more than that, but I cling to alliteration the way the one topping you don’t like on the family pizza does).

Armed with my fancy new GenAI-generated term, I demanded (in several of those newsletters and videos I have heard that asking with ‘please’ is just a waste of tokens…which I mostly agree with unless you think the best sources are places like Reddit, but more on that another time): “Create ten more analogical aphorisms with the same them of IT is like Pizza”. It was likely more sunk-cost fallacy than truth that this list seemed much more on target, though some were definite deletes, some clearly needed editing, and…yeah, it was definitely a case of time commitment bias.
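For anyone who wants to replay that kind of prompt iteration outside the chat UI, here is a rough Python sketch. It assumes Perplexity’s OpenAI-compatible chat completions endpoint and the sonar model name, both of which you should verify against the current API docs, and the first prompt elides the actual pizza list.

```python
# A sketch of iterating prompts within one conversation via an API.
# Endpoint and model name are assumptions; check Perplexity's API docs.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"}

PROMPTS = [
    'Expand on this following list of how "IT is Like Pizza": ...',  # list elided
    "Don't explain the lines. Generate new lines following the same concept.",
    "Create ten more analogical aphorisms with the same them of IT is like Pizza",
]

def ask(messages: list[dict]) -> str:
    resp = requests.post(API_URL, headers=HEADERS,
                         json={"model": "sonar", "messages": messages})
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

messages: list[dict] = []
for prompt in PROMPTS:
    messages.append({"role": "user", "content": prompt})
    reply = ask(messages)  # each turn keeps prior context, like the UI thread
    messages.append({"role": "assistant", "content": reply})
    print(reply[:300], "...\n")
```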

For the curious, the full thread with all the responses can be found here for however long the link is good for (I expect only slightly longer than my Pro subscription):

https://www.perplexity.ai/search/expand-on-this-following-list-UdB7O3KzSxCkl.WTIKBy4A

Interesting side note: I find I sometimes have to remind Perplexity to follow the context instructions for a Space.

Interesting side note about that side note: I have to specify “Perplexity Space” or it will do all sorts of random stuff that has nothing to do with Perplexity Spaces.

One more interesting side note: The most annoying thing that Perplexity does is anticipate punctuation errors. I use it to check my spelling, grammar, and punctuation because I got tired of finding the mistakes after posting. Here is one of the suggestions (similar ones are common):

  • Add a comma after “Here we are, three years later”
    Original: Here we are, three years later, and GenAI being all the rage, it occurred to me…
    Correction: Here we are, three years later, and with GenAI being all the rage, it occurred to me…

OK, one more side note and that’s it: It’s interesting that Perplexity (and other GenAIs) will properly understand a misspelled prompt and not point it out, but when spell-checking content it does point it out, as in:

  • Change “theme” not “them”
    Original: …with the same them of IT is like Pizza…
    Correction: …with the same theme of IT is like Pizza…

Sorry, can’t resist: The side notes (originally spelled ‘side-note’) were written while editing, and when I ran them through the “Check the following for spelling, grammar, and punctuation…” prompt, it wanted to correct its own spelling, as in:

  1. In “as in: Change ‘theme’ not ‘them’,” add a comma after “theme” so it reads:
    • Correction: Change “theme,” not “them”
© Scott S. Nelson

Falling down the GenAI Research Rabbit Hole

Before GenAI, I would have a general plan for what kind of information I would gather to prepare for a task. Sometimes that plan would be modified as new information led to a better understanding of the subject. GenAI is so much faster and more detailed, with so many fewer unwanted results, that I have to remind myself the research is just the beginning of the task. Otherwise, I would stay in research mode and never get started, whereas before GenAI I was likely to stop out of sheer frustration from sifting through all the crap produced by SEO more focused on views than value.

From “Don’t Be Evil” to Disruption

Google’s early days were defined by a clear mission: organize the world’s information and make it universally accessible and useful. Their unofficial motto, “Don’t be evil,” reflected a user-first approach that made Google the go-to research tool for millions. The results were clean, relevant, and genuinely helpful. Searching felt empowering, and the platform’s focus was on delivering value to users.

But as Google grew, priorities shifted. The drive for revenue and shareholder returns led to an increasing emphasis on advertising and SEO optimization. Search results became cluttered with paid placements and content designed to game the algorithm, rather than serve the user. The once powerful tool for discovery became bogged down by noise, making the research process more frustrating and less productive.

This progressive shift of focus from user value to shareholder value opened the door for disruption. When a company that once prided itself on “not being evil” starts to lose sight of its core values, it creates an opportunity for new technologies to step in and fill the gap.

The GenAI Parallel

GenAI today feels much like Google did in its early years: focused on utility, speed, and user value. The answers are direct, the distractions minimal, and the sense of possibility is fresh. Outside the media buzz there is real value in faster answers, deeper insights, and fewer irrelevant results. But the lesson from Google’s trajectory is clear: success can breed complacency, and the temptation to prioritize profit over usefulness is always lurking.

Just as Google’s early commitment to usefulness made it indispensable, GenAI’s current focus on delivering value is what sets it apart. The challenge will be maintaining that focus as the technology matures and commercial pressures increase.

The Shift in Research

  • Faster answers mean less time wasted on irrelevant results.
  • Deeper insights surface quickly, sometimes revealing connections I wouldn’t have spotted on my own.
  • Fewer distractions mean no more going to page 3 of results because the first two pages were the product of SEO strategies built on clickbait and content farms.

But this abundance is a double-edged sword. The temptation to keep digging, to keep asking “what else?” is strong. Without discipline, I could spend hours exploring every tangent, never moving on to actually apply what I’ve learned.

Hopes for the Future of GenAI Research

As exhilarating as this new era is, I can’t help but wonder what comes next. Will GenAI search maintain its edge, or will it eventually succumb to the same pressures that eroded Google’s utility? The cycle of innovation and decline is a familiar one in tech, and I hope that as GenAI matures, it resists the lure of ad dollars and keeps user value front and center.

  • Transparency in how results are generated will be crucial.
  • User-focused design should always outweigh short-term profits.
  • Continuous improvement based on real user needs (not just engagement metrics) must be the guiding principle.

For now, I’m enjoying the ride, even if it means occasionally reminding myself to climb out of the rabbit hole and get back on track (which may be how Google got to where they are).

© Scott S. Nelson