AI is great at summarizing a document or a small collection of documents. When you get to larger collections, the complexity grows rapidly. More complex prompts are the least of it: you need to set up RAG (retrieval-augmented generation) and the accompanying vector stores. For truly massive repositories that is going to be necessary regardless, but most of us work somewhere in between, with more than a manageable handful of documents and less than a massive content repository.
One handy helper application for this is Pandoc (https://pandoc.org/), aptly self-described as “your Swiss Army knife” for converting files between formats (without having to do “File > Open > Save As” to the point of carpal tunnel damage). Most of our files are in people-friendly formats like Word and PDF. To an LLM, these files contain mostly useless formatting instructions and metadata (yes, some metadata is useful, but most of it in these files is not going to help as input to GenAI models). Pandoc will take the Word files and convert them to Markdown (PDFs need a separate text-extraction step first, since Pandoc writes PDF but does not read it). Markdown is highly readable for GenAI purposes (humans can still parse it, and some even prefer it) and uses 1/10000000th of the markup for formatting (confession: I pulled that number out of thin air to get your attention, but the real number is still big enough to matter).
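For a single file, the conversion is a one-liner. Here is a minimal sketch that shells out to the pandoc CLI from Python; the file names are placeholders, and pandoc is assumed to be installed and on your PATH:

```python
import subprocess

# Convert a Word document to GitHub-flavored Markdown.
# "report.docx" and "report.md" are placeholder file names.
subprocess.run(
    ["pandoc", "report.docx", "-f", "docx", "-t", "gfm", "-o", "report.md"],
    check=True,  # raise an error if pandoc fails
)
```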
The conversion may not be perfect, especially as the formatting of most documents is not perfect. You can see this for yourself by using the Outline view in Word. With a random document pulled from SharePoint, odds are you will find empty headings between the real ones, entire paragraphs that are marked as headings, or no headings at all because someone manually formatted text using the Normal style to make it look like a heading.
If you are only converting a few documents, you can use a text editor with regex (provided by your favorite GenAI) to find and replace the stray formatting; a small sketch of that kind of cleanup follows below. Otherwise, leave them as they are: the converted files are already in a much more efficient format for prompting against, and the LLM will likely figure it out anyway.
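To make that concrete, here is a minimal sketch of one such cleanup. It assumes the converted Markdown uses #-style headings, “report.md” is a placeholder name, and all it does is strip the empty heading lines mentioned above (lines with no text after the # marks):

```python
import re
from pathlib import Path

# Placeholder name for a document already converted to Markdown.
md = Path("report.md").read_text(encoding="utf-8")

# Remove "empty" headings: lines containing only #'s, spaces, or tabs.
cleaned = re.sub(r"(?m)^#{1,6}[ \t]*$\n?", "", md)

Path("report.md").write_text(cleaned, encoding="utf-8")
```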
You can get fancier with this by incorporating a call to Pandoc as a tool in an agentic workflow, converting the files at runtime before passing them to an LLM for analysis (and if you are a developer, managing the conversions so that they aren’t wastefully repeated). So long as you are being fancy, you can have it try to fix the minor formatting errors too, but you have already made a huge leap forward just by dumping all the formatting (that is just noise to an LLM) so that the neural network is processing what really matters: the content that is going to make you look like a prompting genius.
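If you go that route, a conversion tool with a simple “don’t convert the same file twice” cache might look something like the sketch below. Everything here is illustrative rather than tied to any particular agent framework: it assumes pandoc is on the PATH and caches converted Markdown keyed by a hash of the source file’s contents.

```python
import hashlib
import subprocess
from pathlib import Path

CACHE_DIR = Path(".md_cache")  # illustrative location for converted files

def to_markdown(source: Path) -> str:
    """Convert a document to Markdown with pandoc, reusing a cached result
    when the source file's contents have not changed."""
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    cached = CACHE_DIR / f"{digest}.md"
    if cached.exists():
        return cached.read_text(encoding="utf-8")

    CACHE_DIR.mkdir(exist_ok=True)
    subprocess.run(
        ["pandoc", str(source), "-t", "gfm", "-o", str(cached)],
        check=True,
    )
    return cached.read_text(encoding="utf-8")

# Example: feed the returned Markdown to whatever LLM call your workflow uses.
# markdown_text = to_markdown(Path("requirements.docx"))
```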
tl;dr: If you have serious performance issues after upgrading and have tried all the usual tweaks, check the Power Mode settings.
The last Windows upgrade where I felt better for the experience was Windows 2000. Yes, there have been some marked improvements in media capabilities since then (if not, I’d still be on Windows 2000, except for the security patch problem). The only past upgrade I found difficult (excluding disappointment as a challenge) was from 3.1 to 95. That was hard because of all the disk swapping, since CD-ROMs were still not ubiquitous. So I was a bit put off when I experienced a huge performance hit after the upgrade from 10 to 11, an upgrade I only performed because free security updates for Windows 10 end in October (I believe that makes it the shortest lived, in terms of support, ever) and I happened to be between deadlines at the moment. The last thing I wanted was to do the upgrade in the middle of a deliverable, because I expected it to be less than fun.
Expectations met. I spent three days after the upgrade trying to live with it. I knew going in that I needed to fix a lot of the default settings to keep big tech from becoming Big Brother, and had a list prepared before I even started so I could thwart the privacy pirates promptly. My inner Fox Mulder believes that much of the performance degradation of Windows 10 over the last few years was meant to urge people like us to upgrade. After the upgrade, I started to think it was more a matter of acclimating us to the problems to come.
After spending way too much time discussing the problem with Perplexity, and then trying to pretend that I could live with a machine that was 50 generations newer than the one I had in 1996 and yet somehow slower, I decided to work on a hobby project that runs on another machine. Did I mention that all of my Windows devices are laptops? I booted up the hobby machine before plugging it in and noticed the usual effects of running on battery: the monitor dimmer and the machine a bit slower. I plugged it in and it perked right up, as did my hindbrain. I went back to my recently degraded machine, checked the power settings, and lo and behold, Big Brother had decided that I could get by with reduced performance even while plugged in.
Windows 11 Power Mode Settings
I’m convinced that it is actually a combination of all of the other changes I made in addition to setting Power Mode to Best Performance when Plugged in. I have to believe that, because the alternative, that all of that time (and Tylenol®) was wasted, is too much to bear.
Speaking of too much to bear, writing up all of those steps is also a bit of a chore. However, I used a Space on Perplexity for the whole journey, so I asked it to write a blog post about it in my style. It still does not have my style right, which is partly on me, for being too brief in the context and skimpy on the training files, and partly just that AI isn’t human and can’t really get me (most humans have the same problem). So as not to let all of that energy-expensive processing go to waste, I’m sharing it with you below.
And now, a word from our AI:
Why I Upgraded to Windows 11 Early—and How You Can Do It with Privacy and a Classic Windows 2000 Feel
Microsoft is ending free security updates for Windows 10 this October. I chose to upgrade now, while I had no pressing deadlines, so I could handle the transition calmly and set everything up for my privacy-first, vintage Windows 2000 style preference. Here’s a streamlined approach to upgrading that respects your privacy and brings back a classic user experience.
Why Upgrade Now?
Waiting to upgrade means potential fees and stress when deadlines loom. Tackling Windows 11 ahead of time lets you customize and troubleshoot without pressure, making the switch smoother.
Step 1: Prepare Your System
Full Backup: Use an external drive plus a system image backup. Protect your irreplaceable data and system state.
Legacy Software & Drivers: Collect all your essential old-school apps and hardware drivers ahead of time, preferably from manufacturer websites.
Create a Recovery Plan: Have Windows 10 installation media ready just in case you want to roll back.
Step 2: Privacy-Centered Installation
Use a Local (Offline) Account: Avoid Microsoft’s enforced online sign-ins to maintain privacy.
Disable Telemetry and Data Sharing: Go to Settings > Privacy & Security and turn off every form of telemetry, diagnostics, and data collection.
Disable or Remove Cortana: Cortana’s integration hampers privacy; uninstall or disable it completely.
Block Web-Connected Search: Turn off options that send your search queries online.
Limit Microsoft Account Usage: Avoid syncing or using Microsoft accounts wherever possible.
Step 3: Recreate a Windows 2000 Look & Feel
Classic Start Menu: Install Open-Shell to bring back the familiar Windows 2000 style Start menu.
Taskbar Customizations: Use scripts or apps to move the taskbar to the left, reduce icon spacing, and disable icon grouping.
Remove Widgets and News Feeds: Disable or uninstall unnecessary widgets and news to reduce clutter.
Restore Classic System Sounds and Icons: Swap in old system sounds and icon sets to enhance the nostalgic experience.
Step 4: Upgrade Process Tips
Take Your Time: Don’t rush the prompts; decline any offered “extra” software or cloud features.
Monitor Network: Temporarily disconnect from the internet during installation to avoid surprise background downloads or telemetry.
Updates on Your Terms: Post-upgrade, disable automatic updates or set them to manual to avoid unwanted changes.
Step 5: Post-Upgrade Privacy and Performance Tuning
Verify Privacy Settings: Double-check that all telemetry and data sharing options remain off; Windows sometimes re-enables features after upgrades.
Manage Startup Programs: Use Task Manager to disable all non-essential startup apps, speeding boot times and avoiding background tracking.
Disable Unnecessary Services: Turn off services that you don’t use or that compromise privacy (e.g., Windows Tips, Feedback Hub).
Use Hosts File Hacks or Firewall Rules: Block known Microsoft telemetry servers for an extra layer of privacy.
Schedule Routine Privacy Checks: Set reminders to revisit privacy and UI settings after big Windows updates.
Final Thoughts
Upgrading while you have the bandwidth gives you control and peace of mind before the Windows 10 support deadline passes. By prioritizing privacy, minimizing modern OS distractions, and returning to a classic interface, Windows 11 can feel more comfortable and secure: closer to the straightforward experience of Windows 2000, but with current OS capabilities.
Don’t wait until October, when the pressure mounts. Take charge now, protect your privacy, and make Windows 11 work your way. Your future self will be grateful.
The start of this post may sound like AI alarmism, but bear with me. My goal is to underline why the right strategy is the key to a positive ROI instead of a reputation that’s KIA.
Before Y2K, Running with Scissors went viral with Postal. It was a bold move that garnered lots of headlines and changed an industry. The name Running with Scissors comes from the admonition that always began with “Don’t”, because it is dangerous and has major negative consequences. Then naming their debut product Postal, a reference to “going postal”, was definitely running with scissors, given that it was released at a time when that was something to really be afraid of, with headlines reminding us every few months. Sometimes pulling a bold, dangerous move like that puts you ahead of the pack. And the same people who would say “Don’t” might rightly say “you were just lucky this time”.
Sadly, as with many of my long-winded metaphors, this analogy falls apart quickly when I get to the point I am meandering up to: AI automation.
While Running with Scissors pulled off their risky play and at worst stood to learn from it, in the world of AI, when you jump in too fast, the cost can be far higher. It’s not just about learning. It’s about real, public, expensive failure.
My favorite printed coffee mug in the ‘90s said “To err is human, but to really screw up, you need a computer.” Now I need a growler tagged with “Computer programs have glitches, AI gives you stitches.” Or, as some reporters and pundits have put it:
If you think those are of the “person bites dog” variety, take a gander at AI Failures, Mistakes, and Errors, which brings a whole new meaning to the term “doom scrolling” for those who have only dabbled in the arts of the black box algorithms.
The Hype is Real, the Hyperbole is Not
Generative AI feels like science fiction compared to what we could muster half a decade ago. But if your plan is to fire your interns and forgo fresh recruits because you expect AI to pick up the Slack, you may soon have nothing left but cold coffee, hot-tempered customers, and evaporating bonuses.
[Ego Disclaimer #1: I really liked this section but thought it was too short, so I had Perplexity stretch it a bit with the content below…and I don’t think it did too bad of a job, but please comment on this post and tell me what you think.]
It’s tempting, of course. There’s a parade of enthusiastic press releases and budget-slashing slideshows from folks who are convinced that with just the right AI prompt, entire departments can be blissfully replaced. The reality? Not so much. As thrilling as it sounds, swapping out eager humans for untested bots leaves you with a lot of gaps—the kind interns and new hires usually fill by catching the weird edge cases, asking the questions you forgot were important, and, occasionally, refilling the printer paper before that big client call. Turns out, there’s no neural network yet that will run down the hall with a sticky note or spot the project that’s quietly rolling off the rails.
You also lose your organization’s early warning system. Interns and rookies see with fresh eyes; they’ll squint at your wobbly workflows and say, “Wait, why do we do it this way?” That’s not inefficiency, that’s built-in feedback. When you replace every junior with an “intelligent” auto-responder, you’re left with no canaries in the coal mine, just a black box churning out confident guesses. And as the headlines keep reminding us, when you let those black boxes loose without human context or oversight, suddenly it’s not just your coffee getting cold—it’s your reputation going up in smoke.
AI Today Feels a Lot Like IT Last Century
“Computer error” is a term that persisted for decades as a reason why it was dangerous to leave things up to computers. The truth was, it was always human error; the only question was where in the chain it happened, from the decision to “computerize” all the way down to end users who did not RTFM (or the disclaimer).
Adoption was a lot slower last century, as was communication, so many businesses that were early adopters of computers as business tools repeated the same mistakes as others. Step up to this century, and the really smart people are doing things iteratively.
Other people see what these iterators are accomplishing and decide they want that, too. So they rename their current processes to sound like what the iterative people are doing. Some iterators “move fast and break things”, and then fix them. The semi-iterative do the first half, and then blame the “new” process.
Slow is Smooth, Smooth is Fast
It’s not a new saying, but it’s more relevant than ever: “Slow is smooth, smooth is fast.”
Moving fast starts by moving slow, which means building a foundation that can be controlled, and by controlled I mean rebuilt with a single command. Then you can quickly add something on top of that foundation, and if it breaks, you can start over with no loss. When it succeeds, you repeat that success and add it to your foundation.
Apply this to an AI adoption strategy. It’s been said there is no need to do a Proof of Concept for AI because the concept has been proven, and this is true. Your ability to apply the concept has not. Or, perhaps, you have disproven it in your organization, and now some people think it was the concept that failed rather than the implementation. To prove you can implement, start with a prototype.
A prototype should be something that is simple, valuable, and measurable. Simple because building confidence is one of the results the prototype should yield. Valuable, because people tend to do a sloppy job if there isn’t much value in what they are doing, and there are more than enough bars being fooed in every organization. And measurable, because you need to show some kind of ROI if you are ever going to get the chance to show real ROI.
Once that first prototype has proven your ability to implement AI in a safe and useful manner, you’re ready for production…right? Right?
Governance and the Human-in-the-Loop
Nope. We skipped a step, which is to establish some governance. Truth be told, in some organizations you won’t be able to get buy-in for governance. Or you’ll get another recurring meeting on too many calendars with the subject “Governance” that fails to complete the same agenda each time (Item 1: What is Governance? Item 2: Who owns it?).
In many orgs you first have to win a champion or get enough people excited with a viable prototype. In either case, make sure governance is in place before going to production, and don’t play Evel Knievel getting into production. Which is to say, don’t jump the Snake River when there is quite enough danger on the regular trail of iteration.
One Thing at a Time: The Power of Measured Progress
That first successful prototype should do one thing, and do it well. If it’s just a single step in a bigger process, perfect. Now do another step—just one. Pick something valuable and measurable, but also something people secretly or not-so-secretly dislike doing. Before you know it, everyone wants what you just gave that team.
Automating one step is augmentation. There’s still a human in the loop, even if one part’s now on autopilot. When that step works, take another. Then another.
Each time, you push humans higher up the value chain and commoditize AI as a proven automation solution.
If you hit a limit, congratulations! You broke something and learned from it. That is how you find limits: by exceeding them. If you test your limits one step at a time, when you exceed them you can take a step back and still be further along than when you started. If you try to skip steps, there is a place next to Evel Knievel that you might not wind up in the first time, but eventually it will hurt. And it might be a headline in the next version of this post.
Start Small, Stay Smart, Iterate Relentlessly
The highest ROI from AI comes not from boldly going where no automation has gone before but from incremental, tested, and measured iterations from augmentation to the most practical level of automation.
And if you break something along the way, remember: you’re already further ahead than if you’d never started.
[Ego Disclaimer #2: I created an outline for this blog post using Perplexity, then threw away most of it and did my usual off-the-cuff rant. Then I had Perplexity edit the draft and rolled back most of the edits]
Today I completed Build Intelligent Agents Using DeepSeek & N8N on Coursera. I was really impressed at the start of the first module, when an interactive AI dynamically quizzed me on the material just covered. It was more of a showcase of the Coursera Coach than an integral part of the course, but I think a lot of people will benefit from the feature. It is available throughout this course and I have seen it on others, though I’m not sure how ubiquitous it is across the catalog.
If you have ever worked with one of those people who, when you ask them how to do something, send you one or more links, this course is sort of like working with that person. Most Coursera courses have Recommended Reading sections that provide useful extra material. I like to bookmark them for future reference, but rarely read them during the course. That approach won’t work with this course, as the quizzes cover content from those readings. I found that having Perplexity summarize the links for me was sufficient to start passing the quizzes. Mostly. There were still questions that I did not recognize from the videos or the summaries, and a couple of times I read the external content directly and still didn’t find the source of the question.
I don’t just take these courses to hang a certificate on the wall (anymore, though there was a time before 2010 when I collected enough to stack over an inch high, partially shown below).
Pile of certificates
Coursera keeps your best score across up to three attempts, so if I get less than 100% I will repeat the quiz until I do, so that I am certain I have learned the material. For the last Coursera course I completed (Introduction to Artificial Intelligence by IBM), I happily pointed out that the quality of the video instruction led to my consistent 100% scores. For this n8n course, I can honestly say that without many years of experience in enterprise integration architecture I would not have been able to score 100% on at least half of the modules. That is on top of the issue that some of the links to the required reading were broken.
As mentioned, Perplexity summaries were really useful in completing this course, as was some research to fill in the blanks for missing content. For those who are interested, I exported the thread I used for the course and have made it available in PDF format as Perplexity_Research_for_Coursera_Build_Intelligent_Agents_n8n_Course.pdf