This one was really frustrating for me, because support kept saying “you have the wrong org,” even after I signed up for a new one four times using the link they provided. Finally, a support rep helped me find a solution (thank you, Kiran Kumar!).
First, instead of going to Flows through Setup, they had me go to Flows through the Apps navigation:
Next, change the view from the default to All Flows:
And then search for Get Celebration Details (and you will find it):
However, when this all started for me, I apparently had a corrupt org, so it still wasn’t there. Support insisted that I sign up for a new org three times, and the flow was missing in all three. Once they pointed me to the Apps solution, I did find it in one of those orgs. Rather than start the whole module over, I exported the flow from the org where it existed and imported it into the org where I had completed the rest of the module.
And, for those who don’t want to go through all of that, you can download the flow metadata directly from
People say the Cloud Practitioner exam is easy. Easy to say if you have used all of the AWS products as an administrator. For me, it took some work. Here’s how I did it.
Updates?
For the record, this is for the CLF-C02 exam, based on having passed it on September 12, 2025. If you’re reading this in the future, check that the exam details haven’t shifted. The process to prepare and pass will be the same, but the details may vary over time.
Despite the name, Cloud Practitioner isn’t all that magical. That said, I think one of the most important aspects of this certification is understanding how to manage your AWS account cost-effectively, which some may see as magical.
The Formula
This is my seventh article on how I passed a certification exam I prepared for, and I have arrived at a formula that works for me. I continue to refine it, and I added something new this time (yes, of course it is AI-related).
The current formula is this:
Start with a quality exam prep course.
Find a set of practice exams that has at least 5 times as many questions (total) as the actual exam.
Repeat the practice exams until you consistently score over 90%. (trust me on this one)
(New!) Use NotebookLM to generate a podcast of the material you are weakest on and listen to it repeatedly for a few days before the exam.
Schedule the exam for a time of day when you generally find it easiest to concentrate.
Optional Biohack
There is one additional ingredient I use, which may not be for everyone. I take focus supplements, sometimes called nootropics, and wash them down with a Starbucks Double Shot. This is the third time I have gone the biohack route. The first two times, I had missed the step of scheduling at a time of day when I am at my sharpest, because work schedules were in the way. Both of those times, I felt that I really got a concentration boost. This last time, my schedule was more flexible, and I don’t think the combo helped all that much.
A Scout is Always Prepared
This time around, there was another side-step from the formula: I did not find the prep course I took to be of particularly good quality. As such, I’m not going to share it, as I usually do with courses I liked. I was very budget conscious this time and used a course I had access to for free. It did help by exposing me to topics I had not dealt with as part of my regular work, but I credit my long experience in technology, rather than the quality of the content, for being able to extract value from it. For several topics, I used Perplexity to fill in the blanks I picked up on during the course.
Practice Makes…More Likely to Pass
The practice exams, however, were great. I used AWS Certified Cloud Practitioner Practice Exams CLF-C02 at Udemy. It had six full sets of exam questions. Each set felt tougher, possibly because newer questions crept in. My scores reflected that. I think this is better than most practice exams, where each set has the topics evenly distributed. Of course, it could’ve just been coincidence that my weaknesses aligned with the sequence.
Another thing I really appreciated about the practice exams is that the test review didn’t stop at the correct answers; it also provided detailed explanations. Other practice sets I have used sometimes only gave a link to the vendor documentation. While links are more in line with how vendors would like you to study, I have yet to take one of these exams as a way to learn the material, and for the ones where I already (thought I) knew the material, that level of preparation wasn’t necessary.
Practice Level 1
Another feature of these practice exams I really liked was labeled as a “beta option,” though I seem to recall it has been available on Udemy long enough to no longer be considered “beta.” That feature is getting the answer after each question, rather than only at the end of the exam. I did this for the first pass through the practice exams.
Almost for Real
For the second pass, I did it “exam style,” getting the answers only with the total score at the end. There were improvements, but not enough. This time, I copied the questions and correct answers into a text file, converted them to Bionic Reading® notes (if you aren’t familiar with Bionic Reading® notes, they make reviewing much easier, and I have included mine at the end for reference), and imported the resulting markdown into UpNote for studying.
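For those curious what that conversion looks like mechanically, here is a minimal homegrown sketch in Python. It is only my approximation of the bolding style, not the official (trademarked) Bionic Reading algorithm, and the function name is my own:

```python
import math
import re

def bionic_md(text: str) -> str:
    """Bold the first half of each word using Markdown emphasis.

    A homegrown approximation of bionic-style reading notes,
    NOT the official Bionic Reading(R) algorithm.
    """
    def bold(match: re.Match) -> str:
        word = match.group(0)
        cut = math.ceil(len(word) / 2)  # the leading half gets the emphasis
        return f"**{word[:cut]}**{word[cut:]}"

    # Only runs of letters get the treatment; digits and punctuation pass through.
    return re.sub(r"[A-Za-z]+", bold, text)

# Example: bionic_md("Amazon Route 53") -> "**Ama**zon **Rou**te 53"
```

Running the question-and-answer text file through something like this yields Markdown that imports cleanly into UpNote.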
One More Time with Feeling
After the third pass, I was almost satisfied with my scores:
Progression in Practice Exam Results
But, my experience with certification exams is that there are always questions in the actual exam that were not covered in my preparation. For this reason, I really prefer to have the practice exams at 100% (which I fell short of this time).
Not Not Necessary (this time)
The practice exams had a few “not” questions, which I generally got wrong on the first pass and still missed some of the second time around. I recently watched an Otherwords video about Why A.I. Struggles with Negative Words, and I still don’t feel better about missing those questions. However, maybe I’m not alone, because there were no such questions in the actual exam I took.
New addition to my standard approach: NotebookLM podcast
To help improve my memory of the ones I had missed, I went back to the questions I got wrong on the second pass and noted the reference links to AWS documentation. I then fed those links to NotebookLM and had it generate a podcast, selecting the longest format and prompting it to target an audience preparing for the CLF-C02 certification exam. The results were incredibly good. But don’t take my word for it; I have posted the podcast here: AWS CLF-C02 Exam Prep Podcast.
Q: What is the benefit of Amazon EBS volumes being automatically replicated within the same availability zone?
A: Durability
Q: Which AWS service can be used to route end users to the nearest AWS Region to reduce latency?
A: Amazon Route 53
Q: What is the main benefit of attaching security groups to an Amazon RDS instance?
A: Controls what IP address ranges can connect to your database instance
Q: What is the recommended storage option when hosting an often-changing database on an Amazon EC2 instance?
A: Amazon EBS
Q: What kind of reports does AWS Cost Explorer provide by default?
A: Utilization
Q: What does the term “Economies of scale” mean?
A: It means that AWS will continuously lower costs as it grows
Q: Which AWS team assists customers in achieving their desired business outcomes?
A: AWS Professional Services
Q: Which of the below options is true of Amazon Cloud Directory?
A: Amazon Cloud Directory allows the organization of hierarchies of data across multiple dimensions
Q: An organization has a legacy application designed using monolithic-based architecture. Which AWS Service can be used to decouple the components of the application?
A: SQS (SNS and EventBridge also apply)
Q: A company is planning to use Amazon S3 and Amazon CloudFront to distribute its video courses globally. What tool can the company use to estimate the costs of these services?
A: AWS Pricing Calculator
Q: What is the connectivity option that uses Internet Protocol Security (IPSec) to establish encrypted connectivity between an on-premises network and the AWS Cloud?
A: AWS Site-to-Site VPN
Q: Both AWS and traditional IT distributors provide a wide range of virtual servers to meet their customers’ requirements. What is the name of these virtual servers in AWS?
A: Amazon EC2 Instances
Q: A company uses multiple business cloud applications and wants to simplify its employees’ access to these applications. Which AWS service uses SAML 2.0 to enable single sign-on to multiple applications through a central user portal?
A: AWS IAM Identity Center
Q: A small retail business with multiple physical locations is planning to transfer sensor data and store security camera footage in the cloud for further analysis. The total amount of data is around 8 terabytes, and the business’s internet connection is too slow to transfer such a large amount directly to AWS in a reasonable time. Which AWS service would be the most cost-effective to transfer the data to AWS?
A: AWS Snowcone
Q: Which AWS Service is used to manage user permissions?
A: AWS IAM
Q: A company has hundreds of VPCs in multiple AWS Regions worldwide. What service does AWS offer to simplify the connection management among the VPCs?
A: AWS Transit Gateway
Q: Which statement best describes the operational excellence pillar of the AWS Well-Architected Framework?
A: The ability to monitor systems and improve supporting processes and procedures
Q: A company is migrating its on-premises database to Amazon RDS. What should the company do to ensure Amazon RDS costs are kept to a minimum?
A: Right-size before and after migration
Q: A company is planning to host an educational website on AWS. Their video courses will be streamed all around the world. Which of the following AWS services will help achieve high transfer speeds?
A: Amazon CloudFront
Q: What does AWS Health provide? (Choose TWO)
A: 1) Detailed troubleshooting guidance to address AWS events impacting your resources
2) Personalized view of AWS service health
Q: Which of the following services allows customers to manage their agreements with AWS?
A: AWS Artifact
Q: You have set up consolidated billing for several AWS accounts. One of the accounts has purchased a number of reserved instances for 3 years. Which of the following is true regarding this scenario?
A: All accounts can receive the hourly cost benefit of the Reserved Instances
Q: A company is deploying a new two-tier web application in AWS. Where should the most frequently accessed data be stored so that the application’s response time is optimal?
A: Amazon ElastiCache
Q: If you want to register a new domain name, which AWS service should you use?
A: Route 53
Q: If you want to visualize your spending on your AWS account for the past month, which tool can help you?
A: AWS Cost Explorer
Q: If you go for consolidated billing for multiple AWS accounts under three master accounts, what benefit do you get?
A: Combined usage for discounts
Q: For which support plan do you also have AWS support Concierge Service?
A: Enterprise
Q: Which storage option should you use if you are hosting a frequently-changing database on an Amazon EC2 instance?
A: EBS
Q: To get a high throughput to multiple compute nodes, which storage service would you use to host an application on your EC2 instance?
A: EFS
Q: If you want to upload data to S3 at very high speeds, which AWS service takes advantage of the edge locations?
A: S3 Transfer Acceleration
Q: Which one of these can you NOT assign to a user?
A: IAM identity. You cannot directly assign an “IAM identity” to a user because “IAM identity” is a generalized term referring to any entity in IAM (such as users, groups, or roles).
Q: You have been asked to contact AWS support using the chat feature to seek guidance on an ongoing issue. However, when you log in to the AWS support page, you do not see the chat options. What should you do?
A: Live chat support is only available with Business or Enterprise Support plans
Q: If you want to launch and manage a virtual private server in AWS, which service is the easiest?
A: Lightsail. Lightsail provides pre-configured virtual server instances
Q: What is AWS Athena?
A: AWS Athena is a serverless, interactive query service that enables you to analyze data directly in Amazon S3 using standard SQL
Q: Choose from the options below to filter your incoming traffic request to your EC2 instance.
A: NACLs and Security Groups
Q: Protect from DDoS attacks?
A: NACLs and Security Groups
Q: Where can you find your historical billing information in the AWS console?
I’ve heard of some businesses that have completely automated their RFP response process using Agentic AI. To reach that level of automation, you either need a very narrow set of services or a very generous budget to address all the quirks and exceptions.
I have neither of those.
Before I go on, I want to point out that while I will definitely continue to use Generative AI with all of my documentation as a tool to improve quality, I much prefer working with a human team that is AI-augmented rather than just AI. It is a strain being the only one managing the human factor of work that is meant to drive decisions. The title is not a suggestion; it is a description of how to cope when it is necessary.
What I do have is access to a few Generative AI tools. For various reasons I won’t get into here, ChatGPT Projects is the best fit for the workflow I have adopted (and am still refining). Projects are ChatGPT’s (poor) answer to NotebookLM and Perplexity Spaces.
Projects are useful in that they keep related prompts and files in one place, but they don’t really cross-reference or allow for collaboration. ChatGPT does come with that fine print at the bottom of the screen stating:
“OpenAI doesn’t use [NAME OF COMPANY PAYING SUBSCRIPTION FEE] workspace data to train its models.”
Which is the main one of those reasons I said I wouldn’t get into (oops!).
I recently worked on a proposal at a time when most of the people who would usually help were busy with other things, so I settled into working mostly with ChatGPT like an eager-but-green proposal teammate (the AI being the green one, not me…no matter what that LLM wrapper says).
Setting the Stage
For this particular proposal, the prep work didn’t look all that different from the old manual process. It starts with a short document to capture the proposal’s guiding themes: my company’s strengths, differentiators, and the ideas that needed to shine through in both tone and substance. The document was mostly drafted by practice leadership and refined with a few folks familiar with the client, the project types, or both.
Next came the outline. Depending on the RFP structure, I sometimes let ChatGPT take the first crack at building an outline from the document, then refine it interactively. Other times, the RFP format or flow is not friendly to automated parsing, even for a well-trained AI (or so I assume, as I haven’t attempted to train one that deeply yet). In this case, I built the first draft of the outline myself, then handed it to ChatGPT to check against the original RFP. That back-and-forth has become standard practice.
Draft One: Enter the AI Intern
Once the outline was in good shape, ChatGPT proactively offered to populate the template, which fits the persona I hold of it: an eager, educated, and inexperienced intern or junior associate. Given the quality of its suggestions, it was tempting to respond with a “Yes” and let ‘er rip. But tempered experience had me prompt it to draft one section at a time, waiting for feedback or confirmation before moving on to the next. In this manner, I was able to put together a pretty decent first draft much faster than doing it entirely on my own (or even with a “real” eager, educated, and inexperienced intern or junior associate, whom I also would not want to write a full draft before getting some feedback).
I would say it was about 50/50 between accepting the first draft of a section and requesting a revision. As with any Generative AI-augmented content generation, most of the issues stemmed from missing levels of detail in my prompts rather than from ChatGPT misunderstanding the intent. Speaking of understanding the intent: when it offered to write the proposal for me, I attached the entire RFP (again, because, like I said, I know Projects ain’t notebooks or Spaces), the outline, and the context document, tempering my response to its offer with “Yes, but…” followed by instructions to work a section at a time and refer to the files.
Staying Sane (a.k.a. Breaks Matter)
As many proponents of Flow will tell you, it can be very beneficial to take breaks every 60 to 120 minutes (while most of the gurus on the topic seem to gravitate to the 90-minute mark, I hold fast that it varies by person and context, mangling Bruce Lee’s advice to “be like water,” in this case by seeking your own level). Without breaks, your ability to be objective about the quality of GenAI outputs will degrade and tilt toward your biases: past the threshold of real focus, some people start accepting every output, while others keep refining the prompts for a section over and over, or just rewrite it by hand.
The Human Touch
After ChatGPT’s draft, it was time for what passes as human intelligence (I used to call coffee my “artificial intelligence” until the term started being used for what we currently call AI). I have enough experience (and ego) around writing proposals that I had already made some minor edits to the AI-generated first draft. Once that first draft was complete, I dove in to give it a serious human touch, reading through the entire draft and making notes of changes I thought it needed. Reading through without editing may seem counterintuitive, but it is necessary, because something that jumps out at me as incomplete, inaccurate, or just plain wrong may be clarified later in the document. After a top-to-bottom read, I worked through the notes to actually make the changes, skipping or revising them with the full context of the document in mind.
Then it’s ChatGPT’s turn again. I have it go through the document, essentially repeating what I had just done. This is a process I have worked on in other forms of writing as well, and I have a general prompt that I tweak as needed:
Check the attached [PROPOSAL FILENAME] for spelling errors, grammar issues, overall cohesiveness, and that it covers all points expected as a response to [RFP FILENAME].
Only provide detailed descriptions of any corrections or recommended changes so that I can select the changes I agree with. Think hard about this. (Thanks to Jeff Su’s YouTube channel for that last addition!)
And then I work my way through the response. This same prompt is re-run with updated versions of the proposal until I am satisfied that this stage has yielded as much benefit as it can.
Tightening the Screws
Finally (or almost so), I have ChatGPT draft the executive summary. For a really big RFP response, I will first have it draft the section summaries. These summaries are essential to any proposal; in fact, they often make or break it, possibly because they are the only parts the decision makers read, sometimes along with reviews done by others. If the summaries don’t come easily, or don’t sound right against that original context document, I go through and collaboratively revise the relevant sections until the summaries flow.
The Final Check
Finally, I try my best to find another human to check the whole of the result. If I’m lucky, I get additional input. If I’m really lucky, they’ve brought their own GenAI-assisted reviews into the mix.
GenAI has had a major impact on my writing output. The flow I use for proposals isn’t all that different from the flow I use to write blog posts or other content. I do a number of stream-of-consciousness sessions (the number varying with the complexity and length of the content), and then start refining. I used that approach before GenAI; the key difference GenAI has made is that I have learned to do less self-editing during those initial brain dumps, because I know I have a tireless editor to review and give me feedback during the editing phase. Plus, that editor can be coached in both my intent and my style to help me improve beyond the level of “not clear” and “i before e except after c, or when the dictionary says otherwise.”
A long-time friend sent me a link to Does AI Actually Boost Developer Productivity? (100k Devs Study). While writing my response, I realized my reaction was a bit more than a chat reply, so I’m sending him a link to this post and hope he forgives me for the delay…
After watching this video of Yegor Denisov-Blanch, my inner critic wants to jump straight to:
He referred to mid-range engineers at the outset, in the context of who Meta said they were cutting. It wasn’t clear if the study participants were mid-range. That out of the way, I’ve seen similar studies, though this is the best so far, based on the number of participants, the approach, and the level of detail. Those other studies had the boost at zero or less, and I didn’t trust the data, but I did recognize the premise: AI is a multiplier, and if a developer tends to go down rabbit holes rather than focusing on the business goals, they will go deeper down the rabbit hole and become even less productive.
I think another aspect lost in these studies is that this is a paradigm shift, which means even the most experienced are still figuring out how to be productive in their use of AI. Since everyone claims to find it so easy, no one admits that it takes some getting used to. That will account for some of the productivity hit.
One aspect Denisov-Blanch spends a good amount of time on, which the mass media usually skims or skips entirely, is the difference between greenfield and brownfield projects. The difference is huge: brownfield productivity gains are much lower. This information is critical to businesses planning to reduce their development teams based on published gains, since, for most enterprises, the majority of work is decidedly brownfield.
We also haven’t yet seen the impact of greenfield applications built primarily with GenAI when it comes to long-term maintenance. Yes, we have seen some anecdotal results where they are disastrous, from both a security and CX perspective, but we haven’t seen anything at scale yet. As an architect I am probably biased, but I don’t have much confidence in GenAI to create a reliable and flexible solution for no other reason than most people don’t think to ask for one at the start (except maybe architects😊).
The tools are improving (this is based on anecdotal evidence from people who have both a high degree of skill as developers and demonstrated critical thinking about tools and processes in the past). The people using the tools are becoming more skilled. So the gains in productivity will likely either climb across the board, or those below mid-range may crawl up from the less-than-zero productivity zone.
Meanwhile, anyone looking to cut their developer workforce in the next couple of years should watch this video, draw their own conclusions, and then revise their estimates.
AI is great at summarizing a document or a small collection of documents. With larger collections, the complexity grows rapidly. More complex prompts are the least of it: you need to set up RAG (retrieval-augmented generation) and the accompanying vector stores. For really large collections, that is going to be necessary regardless. Most of us, though, work in the realm between massive content repositories and a manageable set of documents.
One handy helper application for this is Pandoc (https://pandoc.org/), aptly self-described as “your Swiss Army knife” for converting files between formats (without having to do “File > Open > Save As” to the point of carpal tunnel damage). Most of our files are in people-friendly formats like Word and PDF. To an LLM, these files contain mostly useless formatting instructions and metadata (yes, some metadata is useful, but most of it in these files is not going to help as input to GenAI models). Pandoc will take those files and convert them to Markdown, which is highly readable for GenAI purposes (humans can still parse it, and some even prefer it) and uses 1/10000000th of the markup for formatting (confession: I pulled that number out of thin air to get your attention, but the real number is still big enough to matter). One caveat: Pandoc reads Word files, but it can only write PDF, not read it, so PDFs need a separate text-extraction step first.
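Converting a folder of Word documents can be scripted in a few lines. Here is a minimal sketch (file and folder names are my own examples) that shells out to Pandoc’s GitHub-flavored Markdown writer; it assumes the pandoc binary is on your PATH:

```python
import subprocess
from pathlib import Path

def pandoc_cmd(src: Path, out_dir: Path) -> list[str]:
    # Build the pandoc invocation: source document in, GitHub-flavored Markdown out.
    dest = out_dir / (src.stem + ".md")
    return ["pandoc", str(src), "-t", "gfm", "-o", str(dest)]

def convert_folder(src_dir: Path, out_dir: Path) -> None:
    # Convert every Word document in src_dir; requires pandoc on the PATH.
    out_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(src_dir.glob("*.docx")):
        subprocess.run(pandoc_cmd(src, out_dir), check=True)
```

The `check=True` keeps a single failed conversion from silently producing an incomplete Markdown set.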
The conversion may not be perfect, especially as the formatting of most documents is not perfect. You can see this for yourself by using the Outline view in Word. With a random document pulled from SharePoint, odds are you will find empty headings between the real ones, entire paragraphs that are marked as headings, or no headings at all because someone manually formatted text using the Normal style to make it look like a heading.
If you are only converting a few documents, you can use a text editor with regex (courtesy of your favorite GenAI) to do find-and-replace cleanup. Otherwise, leave them as is; they are already in a much more efficient format for prompting against, and the LLM will likely figure the rest out anyway.
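If you would rather script that cleanup than run regexes by hand, a small pass over the converted Markdown can catch the two most common artifacts mentioned above. This is a sketch under my own assumptions (ATX-style `#` headings, and a rough 12-word cutoff for “this is really a paragraph”):

```python
import re

def clean_headings(md: str, max_heading_words: int = 12) -> str:
    """Tidy common Word-to-Markdown conversion artifacts:
    drop empty headings, and demote paragraph-length "headings" to body text.
    Assumes ATX-style headings (#, ##, ...)."""
    out = []
    for line in md.splitlines():
        m = re.match(r"^(#{1,6})\s*(.*)$", line)
        if m:
            text = m.group(2).strip()
            if not text:
                continue  # empty heading left over from a blank styled line: drop it
            if len(text.split()) > max_heading_words:
                out.append(text)  # a whole paragraph styled as a heading: demote it
                continue
        out.append(line)
    return "\n".join(out)
```

The word-count threshold is a heuristic, so spot-check a few outputs before trusting it on a whole folder.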
You can get fancier with this by incorporating a call to Pandoc as a tool in an agentic workflow, converting the files at runtime before passing them to an LLM for analysis (and if you are a developer, managing the conversions so that they aren’t wastefully repeated). So long as you are being fancy, you can have it try to fix the minor formatting errors too, but you have already made a huge leap forward just by dumping all the formatting (that is just noise to an LLM) so that the neural network is processing what really matters: the content that is going to make you look like a prompting genius.
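The “don’t wastefully repeat conversions” part can be as simple as comparing file timestamps before calling the converter. Here is one way I might sketch it (the function names are mine; the `convert` callable would wrap a Pandoc subprocess call in practice):

```python
from pathlib import Path
from typing import Callable

def convert_if_stale(src: Path, dest: Path,
                     convert: Callable[[Path, Path], None]) -> bool:
    """Run `convert` (e.g. a subprocess wrapper around pandoc) only when the
    Markdown copy is missing or older than the source. Returns True if it ran."""
    if dest.exists() and dest.stat().st_mtime >= src.stat().st_mtime:
        return False  # cached Markdown is still fresh; skip the conversion
    dest.parent.mkdir(parents=True, exist_ok=True)
    convert(src, dest)
    return True
```

An agentic workflow can call this as its conversion tool, so an unchanged source document never triggers a second Pandoc run.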