I’ve heard of some businesses that have completely automated their RFP response process using Agentic AI. To reach that level of automation, you either need a very narrow set of services or a very generous budget to address all the quirks and exceptions.
I have neither of those.
Before I go on, I want to point out that while I will definitely continue to use Generative AI as a tool to improve the quality of all of my documentation, I much prefer working with a human team that is AI-augmented rather than AI alone. It is a strain being the only one managing the human factor of work that is meant to drive decisions. The title is not a suggestion; it is a description of how to cope when it is necessary.
What I do have is access to a few Generative AI tools. For various reasons I won’t get into here, ChatGPT Projects is the best fit for the workflow I have adopted (and am still refining). Projects are ChatGPT’s (poor) answer to NotebookLM and Perplexity Spaces (see my earlier post about Organizing AI Augmentation with Notebooks).
Projects are useful in that they keep related prompts and files in one place, but they don’t really cross-reference or allow for collaboration. They do come with that fine print at the bottom of the screen stating:
“OpenAI doesn’t use [NAME OF COMPANY PAYING SUBSCRIPTION FEE] workspace data to train its models.”
Which is the main one of those reasons I said I wouldn’t get into (oops!).
I recently worked on a proposal at a time when most of the people who would usually help were busy with other things, so I settled into working mostly with ChatGPT like an eager-but-green proposal teammate (the AI being the green one, not me…no matter what that LLM wrapper says).
Setting the Stage
For this particular proposal, the prep work didn’t look all that different from the old manual process. It started with a short document capturing the proposal’s guiding themes: my company’s strengths, differentiators, and the ideas that needed to shine through in both tone and substance. The document was mostly drafted by practice leadership and refined with a few folks familiar with the client, the project types, or both.
Next came the outline. Depending on the RFP structure, I sometimes let ChatGPT take the first crack at building an outline from the document, then refine it interactively. Other times, the RFP’s format or flow is not friendly to automated parsing, even for a well-trained AI (or so I assume, as I haven’t attempted to train one that deeply yet). In this case I built the first draft of the outline myself, then handed it to ChatGPT to check against the original RFP. That kind of back-and-forth has become standard practice.
Draft One: Enter the AI Intern
Once the outline was in good shape, ChatGPT proactively offered to populate the template, which fits the persona I hold of it: an eager, educated, and inexperienced intern or junior associate. Given the quality of its suggestions, it is tempting to respond with a “Yes” and let ‘er rip. But tempered experience had me prompt it to proceed one section at a time, waiting for feedback or confirmation before moving on to the next. In this manner, I was able to put together a pretty decent first draft much faster than doing it entirely on my own (or even with a “real” eager, educated, and inexperienced intern or junior associate, whom I also would not want to complete a full draft before getting some feedback).
I would say it was about 50/50 between accepting the first draft of a section and asking for a revision. As with any Generative AI-augmented content generation, most of the issues stemmed from missing levels of detail in my prompts rather than from ChatGPT misunderstanding the intent. Speaking of understanding the intent: when ChatGPT offered to write the proposal for me, I tempered my response with “Yes, but…”, attached the entire RFP (again, because, like I said, Projects ain’t notebooks or spaces), the outline, and the context document, and instructed it to work one section at a time and refer to those files.
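For anyone who wants to script a similar section-at-a-time flow outside the ChatGPT UI, the gating logic is simple to sketch. This is a hypothetical helper I made up for illustration, not anything Projects does for you: it splits a markdown outline on second-level headings and builds one drafting prompt per section, so each section can be reviewed before the next prompt is ever sent.

```python
def split_outline(outline_md: str) -> list[tuple[str, str]]:
    """Split a markdown outline into (heading, body) pairs on '## ' headings."""
    sections = []
    heading, body = None, []
    for line in outline_md.splitlines():
        if line.startswith("## "):
            if heading is not None:
                sections.append((heading, "\n".join(body).strip()))
            heading, body = line[3:].strip(), []
        elif heading is not None:
            body.append(line)
    if heading is not None:
        sections.append((heading, "\n".join(body).strip()))
    return sections


def drafting_prompt(heading: str, body: str) -> str:
    """Build a one-section drafting prompt that refers back to the attached files."""
    return (
        f"Draft only the '{heading}' section of the proposal, following this outline:\n"
        f"{body}\n"
        "Use the attached RFP and context document for tone and substance. "
        "Stop after this section and wait for my feedback."
    )


# Hypothetical two-section outline, just to show the shape of the output.
outline = "## Approach\n- Phased delivery\n## Team\n- Key roles"
prompts = [drafting_prompt(h, b) for h, b in split_outline(outline)]
```

The point of the sketch is the gate at the end of each prompt: the model is explicitly told to stop and wait, which is the scripted version of my “Yes, but…” response.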
Staying Sane (a.k.a. Breaks Matter)
As many proponents of Flow will tell you, it can be very beneficial to take breaks every 60 to 120 minutes. Most of the gurus on the topic gravitate to the 90-minute mark, but I hold fast that it varies by person and context, mangling Bruce Lee’s advice to “be like water”, in this case by seeking your own level. Without breaks, your ability to be objective about the quality of GenAI outputs will degrade and tilt toward your bias: past one’s threshold of real focus, some will start accepting every output, while others will keep refining the prompts for the same sections over and over, or just rewrite them by hand.
The Human Touch
After ChatGPT’s draft, it was time for what passes as human intelligence (I used to call coffee my “artificial intelligence” until the term was taken over by what we currently call AI). I have enough experience (and ego) around writing proposals that I made only minor edits to the AI-generated first draft before diving in to give it a serious human touch: reading through the entire draft and making notes of the changes I thought it needed. Reading through without editing may seem counterintuitive, but it is necessary because something that jumps out at me as incomplete, inaccurate, or just plain wrong may be clarified later in the document. After that top-to-bottom read, I work through the notes to actually make the changes, skipping or revising them with the full context of the document in mind.
Then it’s ChatGPT’s turn again. I have it go through the document, essentially repeating what I had just done. This is a process I have worked on in other forms of writing as well, and I have a general prompt that I tweak as needed:
Check the attached [PROPOSAL FILENAME] for spelling errors, grammar issues, overall cohesiveness, and that it covers all points expected as a response to [RFP FILENAME].
Only provide detailed descriptions of any corrections or recommended changes so that I can select the changes I agree with. Think hard about this. (Thanks to Jeff Su’s YouTube channel for that last addition!)
And then I work my way through the response. This same prompt is re-run with updated versions of the proposal until I am satisfied that this stage has yielded as much benefit as it can.
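Since the same review prompt gets re-run against each new version of the draft, I keep it as a template with the two file references filled in per run. This is just a convenience sketch, and the filenames below are placeholders I made up:

```python
REVIEW_TEMPLATE = (
    "Check the attached {proposal} for spelling errors, grammar issues, "
    "overall cohesiveness, and that it covers all points expected as a "
    "response to {rfp}. Only provide detailed descriptions of any "
    "corrections or recommended changes so that I can select the changes "
    "I agree with. Think hard about this."
)


def review_prompt(proposal: str, rfp: str) -> str:
    """Fill the review template with the current draft and the original RFP."""
    return REVIEW_TEMPLATE.format(proposal=proposal, rfp=rfp)


# Re-run with each updated version of the proposal file.
prompt = review_prompt("proposal_v3.docx", "client_rfp.pdf")
```

Keeping the prompt fixed while only the draft changes makes the successive review passes roughly comparable, which helps me judge when the loop has stopped yielding benefit.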
Tightening the Screws
Finally (or almost so), I have ChatGPT draft the executive summary. For a really big RFP response, I first have it draft section summaries. These summaries are essential to any proposal; in fact, they often make or break it, possibly because they are the only parts the decision makers read, sometimes alongside reviews done by others. If the summaries don’t come easily, or don’t sound right based on the original context document, I go through and collaboratively revise the relevant sections until the summaries flow.
The Final Check
Finally, I try my best to find another human to check the whole of the result. If I’m lucky, I get additional input. If I’m really lucky, they’ve brought their own GenAI-assisted reviews into the mix.
GenAI has had a major impact on my writing output. The flow I use for proposals isn’t all that different from the flow I use for blog posts or other content. I do a number of stream-of-consciousness sessions (the number varying with the complexity and length of the content), and then start refining. I used that approach before GenAI; the key difference GenAI has made is that I have learned to do less self-editing during those initial brain dumps, because I know I have a tireless editor to review and give me feedback during the editing phase. Plus, that editor can be coached in both my intent and style to help me improve beyond the level of “not clear” and “i before e except after c or when the dictionary says otherwise”.
© Scott S. Nelson