Replacing your Proposal Team with ChatGPT

I’ve heard of some businesses that have completely automated their RFP response process using Agentic AI. To reach that level of automation, you either need a very narrow set of services or a very generous budget to address all the quirks and exceptions.

I have neither of those.

Before I go on, I want to point out that while I will definitely continue to use Generative AI with all of my documentation as a tool to improve quality, I much prefer working with a human team that is AI-augmented rather than just AI. It is a strain being the only one managing the human factor of work that is meant to drive decisions. The title is not a suggestion; it is a description of how to cope when it is necessary.

What I do have is access to a few Generative AI tools. For various reasons I won’t get into here, ChatGPT Projects is the best fit for the workflow I have adopted (and am still refining). Projects are ChatGPT’s (poor) answer to NotebookLM and Perplexity Spaces (see my earlier post about Organizing AI Augmentation with Notebooks).

Projects are useful in that they keep related prompts and files in one place, but they don’t really cross-reference or allow for collaboration. They do come with that fine print at the bottom of the screen stating:

“OpenAI doesn’t use [NAME OF COMPANY PAYING SUBSCRIPTION FEE] workspace data to train its models.”

Which is the main one of those reasons I said I wouldn’t get into (oops!).

I recently worked on a proposal at a time when most of the people who would usually help were busy with other things, so I settled into working mostly with ChatGPT like an eager-but-green proposal teammate (the AI being the green one, not me…no matter what that LLM wrapper says).

Setting the Stage

For this particular proposal, the prep work didn’t look all that different from the old manual process. It started with a short document to capture the proposal’s guiding themes: my company’s strengths, differentiators, and the ideas that needed to shine through in both tone and substance. The document was mostly drafted by practice leadership and refined with a few folks familiar with the client, the project types, or both.

Next came the outline. Depending on the RFP structure, I sometimes let ChatGPT take the first crack at building an outline from the document, then refine it interactively. Other times, the RFP format or flow is not friendly to automated parsing, even for a well-trained AI (or so I assume, as I haven’t attempted to train one that deeply yet). In this case I built the first draft of the outline myself, then handed it to ChatGPT to check against the original RFP. That combination of back-and-forth has become standard practice.

Draft One: Enter the AI Intern

Once the outline was in good shape, ChatGPT proactively offered to populate the template, which fits with the persona I have of it as an eager, educated, and inexperienced intern or junior associate. And given the quality of its suggestions, it is tempting to respond with a “Yes” and let ‘er rip. But tempered experience had me opt for prompting it to do so one section at a time, waiting for feedback or confirmation before moving on to the next section. In this manner, I was able to put together a pretty decent first draft much faster than doing it entirely on my own (or even with a “real” eager, educated, and inexperienced intern or junior associate, whom I also would not want writing a full draft before getting some feedback).

I would say it was about 50/50 between accepting the first draft of a section and requesting a revision. As with any Generative AI augmented content generation, most of the issues stemmed from missing levels of detail in my prompts rather than from ChatGPT misunderstanding the intent. Speaking of understanding the intent: after it asked to write the proposal for me, I attached the entire RFP (again, because, like I said, I know notebooks and spaces and projects ain’t those), the outline, and the context document, and tempered my response to its offer with “Yes, but…” followed by instructions to do it a section at a time and refer to the files.

Staying Sane (a.k.a. Breaks Matter)

As many proponents of utilizing Flow will tell you, it can be very beneficial to take breaks every 60 to 120 minutes (while most of the gurus on the topic seem to gravitate to the 90-minute mark, I hold fast that it varies by person and context, mangling Bruce Lee’s advice to “be like water”, in this case by seeking your own level). Without breaks, your ability to be objective about the quality of GenAI outputs will start to degrade and tilt toward your bias: past one’s threshold of real focus, some will start accepting every output, while others will keep refining the prompts for the same sections over and over or just rewrite them by hand.

The Human Touch

After ChatGPT’s draft, it was time for what passes as human intelligence (I used to call coffee my “artificial intelligence” until the term started being used by everyone to refer to what we currently call AI). I have enough experience (and ego) around writing proposals that I had made only minor edits to the AI-generated sections as they were produced. Once that first draft was completed, I dove in to give it a serious human touch, reading through the entire draft and making notes of changes I thought it needed. That read-through without editing may seem counterintuitive, but it is necessary because something that jumps out at me as being incomplete, inaccurate, or just plain wrong may be clarified later in the document. After a top-to-bottom read and making notes of changes, I then work through the notes to actually make the changes, skipping or revising those changes with the full context of the document.

Then it’s ChatGPT’s turn again. I have it go through the document, essentially repeating what I had just done. This is a process I have worked on in other forms of writing as well, and I have a general prompt that I tweak as needed:

Check the attached [PROPOSAL FILENAME] for spelling errors, grammar issues, overall cohesiveness, and that it covers all points expected as a response to [RFP FILENAME].

Only provide detailed descriptions of any corrections or recommended changes so that I can select the changes I agree with. Think hard about this. (Thanks to Jeff Su’s YouTube channel for this addition!)

And then I work my way through the response. This same prompt is re-run with updated versions of the proposal until I am satisfied that this stage has yielded as much benefit as it can.

Tightening the Screws

Finally (or almost so), I have ChatGPT draft the executive summary. In the case of a really big RFP response, I will first have it draft the section summaries. These summaries are essential to any proposal. In fact, they often make or break the proposal, possibly because they are the only parts the decision makers read, sometimes along with reviews done by others. If the summaries don’t come easy, or don’t sound right based on that original context document, I will go through and collaboratively revise the relevant sections until the summaries flow.

The Final Check

Finally, I try my best to find another human to check the whole of the result. If I’m lucky, I get additional input. If I’m really lucky, they’ve brought their own GenAI-assisted reviews into the mix.

GenAI has had a major impact on my writing output. The flow I use for proposals isn’t all that different from the flow I use to write blog posts or other content. I do a number of stream-of-consciousness sessions (the number varying with the complexity and length of the content), and then start refining it. I used that approach before GenAI, and the key difference that GenAI has made in my process is that I have learned to do less self-editing during those initial brain dumps, because I know that I have a tireless editor to review and give me feedback during the editing phase. Plus, the editor can be coached in both my intent and style to help me improve beyond just the level of “not clear” and “i before e except after c or when the dictionary says otherwise”.

© Scott S. Nelson

Boost Your GenAI Results with One Simple (and Free) Tool

AI is great at summarizing a document or a small collection of documents. When you get to larger collections, the complexity begins to grow rapidly. More complex prompts are the least of it: you need to set up RAG (retrieval-augmented generation) and the accompanying vector stores. For really large collections, this is going to be necessary regardless, but most of us work in a realm between massive content repositories and a manageable set of documents.

One handy helper application for this is Pandoc (https://pandoc.org/), aptly self-described as “your Swiss Army knife” for converting files between formats (without having to do “File > Open > Save As” to the point of carpal tunnel damage). Most of our files are in people-friendly formats like Word and PDF. To an LLM, these files contain mostly useless formatting instructions and metadata (yes, some metadata is useful, but most of it in these files is not going to be helpful as inputs to GenAI models). Pandoc will take those files (the Word ones, at least; Pandoc can write PDF but does not read it, so PDFs need a separate extraction step) and convert them to Markdown, which is highly readable for GenAI purposes (and humans can still parse it — and some even prefer it) and uses 1/10000000th of the markup for formatting (confession: I pulled that number out of thin air to get your attention, but the real number is still big enough to matter).
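
Since Pandoc is a command-line tool, the conversion is easy to script. Below is a minimal sketch of a batch conversion, assuming Pandoc is installed and on your PATH; the folder names are placeholders, and PDFs are skipped per the caveat above.

```python
# Minimal sketch: batch-convert Word documents to Markdown with Pandoc.
# Assumes pandoc is installed and on the PATH; "docs" and "markdown"
# are placeholder folder names.
from pathlib import Path
import subprocess

source_dir = Path("docs")
output_dir = Path("markdown")
output_dir.mkdir(exist_ok=True)

for doc in source_dir.glob("*.docx"):
    target = output_dir / doc.with_suffix(".md").name
    subprocess.run(
        ["pandoc", str(doc), "-f", "docx", "-t", "markdown", "-o", str(target)],
        check=True,  # raise if pandoc reports an error
    )
    print(f"Converted {doc.name} -> {target.name}")
```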

The conversion may not be perfect, especially as the formatting of most documents is not perfect. You can see this for yourself by using the Outline view in Word. With a random document pulled from SharePoint, odds are you will find empty headings between the real ones, entire paragraphs that are marked as headings, or no headings at all because someone manually formatted text using the Normal style to make it look like a heading.

If you are only converting a few documents, you can use a text editor with regex (provided by your favorite GenAI) to do find and replace, as in the sketch below. Otherwise, leave them as is — they are already in a much more efficient format for prompting against, and the LLM will likely figure it out anyway.
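
As an example of the kind of regex cleanup I mean, here is a sketch that strips out the empty headings that empty Word heading paragraphs turn into; the file path is a placeholder.

```python
# Sketch: remove empty Markdown headings (a line of #'s with nothing
# after them) left over from empty Word heading paragraphs.
import re
from pathlib import Path

md_file = Path("markdown/example.md")  # placeholder path
text = md_file.read_text(encoding="utf-8")

# Match heading lines that contain only #'s and trailing spaces/tabs.
cleaned = re.sub(r"^#{1,6}[ \t]*\n", "", text, flags=re.MULTILINE)

md_file.write_text(cleaned, encoding="utf-8")
```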

You can get fancier with this by incorporating a call to Pandoc as a tool in an agentic workflow, converting the files at runtime before passing them to an LLM for analysis (and if you are a developer, managing the conversions so that they aren’t wastefully repeated). So long as you are being fancy, you can have it try to fix the minor formatting errors too, but you have already made a huge leap forward just by dumping all the formatting (that is just noise to an LLM) so that the neural network is processing what really matters: the content that is going to make you look like a prompting genius.
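
As a sketch of the “don’t repeat the conversions wastefully” part, you could key a cache on a hash of the source file’s bytes so the workflow only pays the Pandoc cost when a document actually changes. The function and folder names here are illustrative, not any particular framework’s API.

```python
# Sketch: convert-on-demand with caching, keyed by content hash, so an
# agentic workflow can call this as a tool without re-running Pandoc
# for unchanged documents.
import hashlib
import subprocess
from pathlib import Path

CACHE_DIR = Path(".md_cache")
CACHE_DIR.mkdir(exist_ok=True)

def to_markdown(doc: Path) -> str:
    """Return Markdown for doc, running Pandoc only if this exact
    version of the file has not been converted before."""
    digest = hashlib.sha256(doc.read_bytes()).hexdigest()[:16]
    cached = CACHE_DIR / f"{doc.stem}-{digest}.md"
    if not cached.exists():
        subprocess.run(
            ["pandoc", str(doc), "-t", "markdown", "-o", str(cached)],
            check=True,
        )
    return cached.read_text(encoding="utf-8")
```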

© Scott S. Nelson

Upgrading to Windows 11 for Luddites Like Me

tl;dr: If you have serious performance issues after upgrading and have tried all the usual tweaks, check the Power Mode settings.

The last Windows upgrade where I felt better for the experience was Windows 2000. Yes, there have been some marked improvements in media capabilities since then (if not, I’d still be on Windows 2000 — except for the security patch problem). The only past upgrade I found difficult (excluding disappointment as a challenge) was from 3.1 to 95. That was hard because there were all of those disk swaps to do, because CD-ROMs were still not ubiquitous. So I was a bit put off when I experienced a huge performance hit after the upgrade from 10 to 11. An upgrade that I only performed because free security updates for Windows 10 are ending in October (I believe that makes it the shortest lived, in terms of support, ever) and I happened to be between deadlines at the moment. The last thing I wanted was to do the upgrade in the midst of some deliverable date, because I expected it to be less than fun.

Expectations met. I spent three days after the upgrade trying to live with it. I knew going in that I needed to fix a lot of the default settings to keep big tech from becoming Big Brother, and had a list prepared before I even started so I could thwart the privacy pirates promptly. My inner Fox Mulder believes that much of the performance degradation of Windows 10 over the last few years was meant to urge people like us to upgrade. Afterward, I started to think that it was more a matter of acclimating us to problems to come.

After spending way too much time discussing the problem with Perplexity, and then trying to pretend that I could live with a machine that was 50 generations newer than the one I had in 1996 and yet somehow slower, I decided to work on a hobby project that is running on another machine. Did I mention that all of my Windows devices are laptops? When I booted up the hobby machine, I did so before plugging it in and noticed the usual effects of running on battery, with the monitor dimmer and the machine a bit slower. I plugged it in and it perked right up, as did my hindbrain. I went back to my recently degraded machine and checked the power settings, and lo and behold, Big Brother had decided that I could get by with a reduction in performance even while plugged in.

Windows 11 Power Mode Settings

I’m convinced that the fix was actually a combination of all of the other changes I made in addition to setting Power Mode to Best Performance when plugged in. I have to believe that, because the alternative (all of that time, and Tylenol®, wasted) is too much to bear.

Speaking of too much to bear, writing up all of those steps is also a bit of a chore. However, I used a Space on Perplexity for the whole journey, so I asked it to write a blog post about it in my style. It still does not have my style right, which is partly on me, for being too brief in the context and skimpy on the training files, and partly just that AI isn’t human and can’t really get me (most humans have the same problem). So as not to let all of that energy-expensive processing go to waste, I’m sharing it with you below.

And now, a word from our AI:

Why I Upgraded to Windows 11 Early—and How You Can Do It with Privacy and a Classic Windows 2000 Feel

Microsoft is ending free security updates for Windows 10 this October. I chose to upgrade now, while I had no pressing deadlines, so I could handle the transition calmly and set everything up for my privacy-first, vintage Windows 2000 style preference. Here’s a streamlined approach to upgrading that respects your privacy and brings a classic user experience.

Why Upgrade Now?

Waiting to upgrade means potential fees and stress when deadlines loom. Tackling Windows 11 ahead of time lets you customize and troubleshoot without pressure, making the switch smoother.

Step 1: Prepare Your System

  • Full Backup: Use an external drive plus a system image backup. Protect your irreplaceable data and system state.
  • Legacy Software & Drivers: Collect all your essential old-school apps and hardware drivers ahead of time, preferably from manufacturer websites.
  • Create a Recovery Plan: Have Windows 10 installation media ready just in case you want to roll back.

Step 2: Privacy-Centered Installation

  • Use a Local (Offline) Account: Avoid Microsoft’s enforced online sign-ins to maintain privacy.
  • Disable Telemetry and Data Sharing: Go to Settings > Privacy & Security and turn off every form of telemetry, diagnostics, and data collection.
  • Disable or Remove Cortana: Cortana’s integration hampers privacy; uninstall or disable it completely.
  • Block Web-Connected Search: Turn off options that send your search queries online.
  • Limit Microsoft Account Usage: Avoid syncing or using Microsoft accounts wherever possible.

Step 3: Recreate a Windows 2000 Look & Feel

  • Classic Start Menu: Install Open-Shell to bring back the familiar Windows 2000 style Start menu.
  • Taskbar Customizations: Use scripts or apps to move the taskbar to the left, reduce icon spacing, and disable icon grouping.
  • Remove Widgets and News Feeds: Disable or uninstall unnecessary widgets and news to reduce clutter.
  • Restore Classic System Sounds and Icons: Swap in old system sounds and icon sets to enhance the nostalgic experience.

Step 4: Upgrade Process Tips

  • Take Your Time: Don’t rush the prompts; decline any offered “extra” software or cloud features.
  • Monitor Network: Temporarily disconnect from the internet during installation to avoid surprise background downloads or telemetry.
  • Updates on Your Terms: Post-upgrade, disable automatic updates or set them to manual to avoid unwanted changes.

Step 5: Post-Upgrade Privacy and Performance Tuning

  • Verify Privacy Settings: Double-check that all telemetry and data sharing options remain off; Windows sometimes re-enables features after upgrades.
  • Manage Startup Programs: Use Task Manager to disable all non-essential startup apps, speeding boot times and avoiding background tracking.
  • Disable Unnecessary Services: Turn off services that you don’t use or that compromise privacy (e.g., Windows Tips, Feedback Hub).
  • Use Hosts File Hacks or Firewall Rules: Block known Microsoft telemetry servers for an extra layer of privacy (see the sketch after this list).
  • Schedule Routine Privacy Checks: Set reminders to revisit privacy and UI settings after big Windows updates.
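
For the hosts-file item above, here is a minimal sketch of the idea; it must be run as Administrator, and the hostnames are commonly cited examples rather than a vetted or complete list, so research a current one before relying on it.

```python
# Sketch: append block entries for telemetry endpoints to the Windows
# hosts file. Run as Administrator. The hostnames below are examples
# only, not a vetted or complete list.
from pathlib import Path

HOSTS = Path(r"C:\Windows\System32\drivers\etc\hosts")
TELEMETRY_HOSTS = [
    "vortex.data.microsoft.com",
    "settings-win.data.microsoft.com",
    "telemetry.microsoft.com",
]

existing = HOSTS.read_text(encoding="utf-8")
with HOSTS.open("a", encoding="utf-8") as hosts_file:
    for host in TELEMETRY_HOSTS:
        if host not in existing:  # skip entries already present
            hosts_file.write(f"0.0.0.0 {host}\n")
```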

Final Thoughts

Upgrading while you have the bandwidth gives you control and peace of mind before the Windows 10 support deadline passes. By prioritizing privacy, minimizing modern OS distractions, and returning to a classic interface, Windows 11 can feel more comfortable and secure—closer to the straightforward experience of Windows 2000 but with current OS capabilities.

Don’t wait until October, when the pressure mounts. Take charge now, protect your privacy, and make Windows 11 work your way. Your future self will be grateful.

© Scott S. Nelson

The Highest ROI from AI Automation Starts with Augmentation

The start of this post may sound like AI alarmism, but bear with me. My goal is to underline why the right strategy is the key to a positive ROI instead of a reputation that’s KIA.

Before Y2K, Running with Scissors went viral with Postal. It was a bold move that garnered lots of headlines and changed an industry. The studio’s name comes from the admonition that always began with “Don’t”, because running with scissors is dangerous and has major negative consequences. Then naming their debut product Postal, a reference to “going postal”, was definitely running with scissors, given that it was released when that was something to really be afraid of, with headlines reminding us every few months. Sometimes pulling a bold, dangerous move like that puts you ahead of the pack. And the same people who would say “Don’t” might rightly say “you were just lucky this time”.

Sadly, as with many of my long-winded metaphors, this analogy falls apart quickly when I get to the point I am meandering up to: AI automation.

The Hard Lessons of High Hopes Held for Hype

Photo by Immo Wegmann on Unsplash

While Running with Scissors pulled off their risky play and at worst stood to learn from it, in the world of AI, when you jump in too fast, the cost can be far higher. It’s not just about learning. It’s about real, public, expensive failure.

My favorite printed coffee mug in the ‘90s said “To err is human, but to really screw up, you need a computer.” Now I need a growler tagged with “Computer programs have glitches, AI gives you stitches.” Or, as some reporters and pundits have put it:

If you think those are of the “person bites dog” variety, take a gander at AI Failures, Mistakes, and Errors, which brings a whole new meaning to the term “doom scrolling” for those who have only dabbled in the arts of the black box algorithms.

The Hype is Real, the Hyperbole is Not

Generative AI feels like science fiction compared to what we could muster half a decade ago. But if your plan is to fire your interns and forgo fresh recruits because you expect AI to pick up the Slack, you may soon have nothing left but cold coffee, hot-tempered customers, and evaporating bonuses.

[Ego Disclaimer #1: I really liked this section but thought it was too short, so I had Perplexity stretch it a bit with the content below…and I don’t think it did too bad of a job, but please comment on this post and tell me what you think.]

It’s tempting, of course. There’s a parade of enthusiastic press releases and budget-slashing slideshows from folks who are convinced that with just the right AI prompt, entire departments can be blissfully replaced. The reality? Not so much. As thrilling as it sounds, swapping out eager humans for untested bots leaves you with a lot of gaps—the kind interns and new hires usually fill by catching the weird edge cases, asking the questions you forgot were important, and, occasionally, refilling the printer paper before that big client call. Turns out, there’s no neural network yet that will run down the hall with a sticky note or spot the project that’s quietly rolling off the rails.

You also lose your organization’s early warning system. Interns and rookies see with fresh eyes; they’ll squint at your wobbly workflows and say, “Wait, why do we do it this way?” That’s not inefficiency, that’s built-in feedback. When you replace every junior with an “intelligent” auto-responder, you’re left with no canaries in the coal mine, just a black box churning out confident guesses. And as the headlines keep reminding us, when you let those black boxes loose without human context or oversight, suddenly it’s not just your coffee getting cold—it’s your reputation going up in smoke.

AI Today Feels a Lot Like IT Last Century

“Computer error” is a term that persisted for decades as a reason why it was dangerous to leave things up to computers. Truth was, it was always human error, though where it fell in the chain varied, from the decision to “computerize” all the way down to end users who did not RTFM (or read the disclaimer).

Adoption was a lot slower last century, as was communication, so many businesses that were early adopters of computers as business tools repeated the same mistakes as others. Step up to this century, and the really smart people are doing things iteratively.

Other people see what these iterators are accomplishing and decide they want that, too. So they rename their current processes to sound like what the iterative people are doing. Some iterators “move fast and break things”, and then fix them. The semi-iterative do the first half, and then blame the “new” process.

Slow is Smooth, Smooth is Fast

It’s not a new saying, but it’s more relevant than ever: “Slow is smooth, smooth is fast.”

Moving fast starts by moving slow, which means building a foundation that can be controlled, and by controlled I mean rebuilt with a single command. Then you can quickly add something on top of that foundation, and if it breaks, you can start over with no loss. When it succeeds, you repeat that success and add it to your foundation.

Apply this to an AI adoption strategy. It’s been said there is no need to do a Proof of Concept for AI because the concept has been proven, and this is true. Your ability to apply the concept has not been. Or, perhaps, you have disproven it in your organization, and now some people think it was the concept that failed rather than the implementation. To prove you can implement, start with a prototype.

A prototype should be something that is simple, valuable, and measurable. Simple because building confidence is one of the results the prototype should yield. Valuable, because people tend to do a sloppy job if there isn’t much value in what they are doing, and there are more than enough bars being fooed in every organization. And measurable, because you need to show some kind of ROI if you are ever going to get the chance to show real ROI.

Once that first prototype has proven your ability to implement AI in a safe and useful manner, you’re ready for production…right? Right?

Governance and the Human-in-the-Loop

Nope. We skipped a step, which is to establish some governance. Truth be told, in some organizations you won’t be able to get buy-in for governance. Or you’ll get another recurring meeting on too many calendars with the subject “Governance” that fails to complete the same agenda each time (Item 1: What is Governance? Item 2: Who owns it?).

In many orgs you first have to win a champion or get enough people excited with a viable prototype. In either case, make sure governance is in place before going to production, and don’t play Evel Knievel getting into production. Which is to say, don’t jump Snake River when there is quite enough danger in the regular trail of iteration.

One Thing at a Time: The Power of Measured Progress

That first successful prototype should do one thing, and do it well. If it’s just a single step in a bigger process, perfect. Now do another step—just one. Pick something valuable and measurable, but also something people secretly or not-so-secretly dislike doing. Before you know it, everyone wants what you just gave that team.

“I do one thing at a time, I do it very well, and then I move on” –Charles Emerson Winchester III

Automating one step is augmentation. There’s still a human in the loop, even if one part’s now on autopilot. When that step works, take another. Then another. Each time, you push humans higher up the value chain and commoditize AI as a proven automation solution.

If you hit a limit, congratulations! You broke something and learned from it. That is how you find limits: by exceeding them. If you test your limits one step at a time, when you exceed them you can take a step back and still be further along than when you started. If you try to skip steps, there is a place next to Evel Knievel that you might not wind up in the first time, but eventually it will hurt. And it might be a headline in the next version of this post.

Start Small, Stay Smart, Iterate Relentlessly

The highest ROI from AI comes not from boldly going where no automation has gone before but from incremental, tested, and measured iterations from augmentation to the most practical level of automation.

And if you break something along the way, remember: you’re already further ahead than if you’d never started.

[Ego Disclaimer #2: I created an outline for this blog post using Perplexity, then threw away most of it and did my usual off-the-cuff rant. Then I had Perplexity edit the draft and rolled back most of the edits.]

© Scott S. Nelson

Slices of #ITisLikePizza with Perplexity.ai

In early 2022, I had hoped to start a social media thread with the hashtag #ITisLikePizza, but it was clearly a half-baked idea since no one replied to the first four posts:

  • A good cook can make a better pizza with the same ingredients as a bad pizza.
  • All pizzas look good from a distance.
  • If you rush the cook, it will not be as good.
  • You always regret consuming too much too fast, no matter how good it is.

Had any engagement occurred, I was ready with nine more:

  • A great-looking pizza can still not taste good.
  • It’s rarely as good as it sounds in a commercial.
  • When it is as good as the commercial, the next time it isn’t.
  • The best pizzas are often found in small, little-known places.
  • When the small, little-known place gets popular, the quality goes down.
  • Some ugly-looking pizzas are the most satisfying.
  • The hungrier you are, the longer it takes to arrive.
  • If you forget about it in the oven, the result may not be salvageable.
  • If you don’t follow the recipe, your results will vary.

Here we are, three years later, and GenAI being all the rage, it occurred to me that maybe I could extend the list with AI. My cloud-based GenAI of choice is Perplexity, so that’s what I tried it with. I originally stuck with Perplexity because it hallucinated less than other options by default, mostly because it provides sources for its outputs, which I find handy when I rewrite those outputs for my various posts. Had this been a true experiment, I would have run the same prompts in ChatGPT and Copilot, but this is just me avoiding my ever-growing to-do list for a few minutes, so it’s going to be short and anecdotal.

So, my first prompt was ‘Expand on this following list of how “IT is Like Pizza”:’ followed by the original baker’s dozen list I had come up with so far. Instead of adding new pithy pizza ponderings, it gave explanations for each. They were actually really good explanations. And no citations were provided, so this was just the LLM responding. Kind of interesting in itself.

So then I tried the usual lame improvement of the prompt with “Don’t explain the lines. Generate new lines following the same concept.” The result this time was more what I was looking for, though it may just be writer’s ego that made me think they all needed some improvement, except those that could just be tossed.

Then I did something I learned somewhere (I subscribe to half-a-dozen AI-specific newsletters, another dozen that cover AI frequently, plus follow a slew of companies and people on LinkedIn—not to mention that YouTube’s algorithm had caught on to my interest—so I can rarely attribute a particular concept because I either heard it multiple times or I forgot to include the attribution when making a note to cogitate on it more later): I asked Perplexity what the literary term was for the original dirty dozen dictums and it told me “analogical aphorisms” (actually, it told me more than that, but I cling to alliteration the way the one topping you don’t like on the family pizza does).

Armed with my fancy new GenAI-generated term, I demanded (in several of those newsletters and videos I have heard that asking with ‘please’ is just a waste of tokens…which I mostly agree with, unless you think the best sources are places like Reddit, but more on that another time): “Create ten more analogical aphorisms with the same them of IT is like Pizza”. It’s probably more sunk-cost fallacy than truth that this list seemed much more on target, though some were definite deletes, and some clearly needed editing, and…yeah, it was definitely a case of time-commitment bias.

For the curious, the full thread with all the responses can be found here for however long the link is good for (I expect only slightly longer than my Pro subscription):

https://www.perplexity.ai/search/expand-on-this-following-list-UdB7O3KzSxCkl.WTIKBy4A

Interesting side note: I find I sometimes have to remind Perplexity to follow the context instructions for a Space.

Interesting side note about that side note: I have to specify “Perplexity Space” or it will do all sorts of random stuff that has nothing to do with Perplexity Spaces.

One more interesting side note: The most annoying thing that Perplexity does is anticipate punctuation errors. I use it to check my spelling, grammar, and punctuation because I got tired of finding the mistakes after posting. Here is one of the suggestions (similar ones are common):

  • Add a comma after “Here we are, three years later”
    Original: Here we are, three years later, and GenAI being all the rage, it occurred to me…
    Correction: Here we are, three years later, and with GenAI being all the rage, it occurred to me…

OK, one more side note and that’s it: It’s interesting that Perplexity (and other GenAIs) will properly understand a misspelled prompt and not point it out, but in spell-checking content it does point it out, as in:

  • Change “theme” not “them”
    Original: …with the same them of IT is like Pizza…
    Correction: …with the same theme of IT is like Pizza…

Sorry, can’t resist: The side notes (originally spelled ‘side-note’) were written while editing, and when I ran them through the “Check the following for spelling, grammar, and punctuation…” prompt, it wanted to correct its own spelling, as in:

  1. In “as in: Change ‘theme’ not ‘them’,” add a comma after “theme” so it reads:
    • Correction: Change “theme,” not “them”
© Scott S. Nelson