Course Review: Build Intelligent Agents Using DeepSeek & n8n

Today I completed Build Intelligent Agents Using DeepSeek & N8N on Coursera. I was really impressed at the start of the first module when an interactive AI dynamically quizzed me on the material covered. It turned out to be more of a showcase of the Coursera Coach than an integral part of the course, but it is a feature a lot of people will benefit from. It is available throughout this course and I have seen it in others, though I’m not sure how ubiquitous it is across Coursera.
If you have ever worked with one of those people who, when you ask them how to do something, just send you one or more links, this course is sort of like working with that person. Most Coursera courses have those Recommended Reading sections that provide useful extra material. I like to bookmark them for future reference, but I rarely read them during the course. That approach won’t work with this course, as the quizzes cover content from those links. I found that having Perplexity summarize the links for me was sufficient to start passing the quizzes. Mostly. There were still questions that I did not recognize from either the videos or the summaries, and a couple of times I read the external content directly and still didn’t find the source of the question.
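For anyone who wants to script that summarization step instead of pasting links one at a time, a minimal sketch against Perplexity’s chat completions API might look like the following. To be clear, this is not what I did (I just used the web UI), and the endpoint, the “sonar” model name, and the response shape are assumptions on my part, so check the current API docs before relying on it.

```python
# Sketch only: summarize a course's recommended-reading links with Perplexity's
# chat completions API. The endpoint, "sonar" model name, and response shape are
# assumptions -- verify against the current API docs before use.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"
API_KEY = os.environ["PERPLEXITY_API_KEY"]  # assumes an API key is set in the environment

def summarize_link(url: str) -> str:
    """Ask the model for a short study summary of a single reading link."""
    payload = {
        "model": "sonar",  # assumed model name; substitute whatever your plan offers
        "messages": [
            {"role": "system", "content": "You summarize web articles for exam preparation."},
            {"role": "user", "content": f"Summarize the key points of {url} in ten bullet points."},
        ],
    }
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    reading_list = ["https://example.com/recommended-reading"]  # placeholder; use the course's links
    for link in reading_list:
        print(summarize_link(link))
```

Swap in the actual reading-list URLs and adjust the prompt toward whatever the quizzes emphasize.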
I don’t just take these courses to hang a certificate on the wall (anymore, though there was a time before 2010 when I collected enough to stack over an inch high, partially shown below).
Pile of certificates
Coursera keeps the best score from up to three attempts, so if I get less than 100% I will repeat the quiz until I do, so that I am certain I have learned the material. For the last Coursera course I completed (Introduction to Artificial Intelligence by IBM), I happily pointed out that the quality of the video instruction led to my consistent 100% scores. For this n8n course, I can honestly say that without many years of experience in enterprise integration architecture I would not have been able to score 100% on at least half of the modules. That is beyond the issue that some of the links to the required reading were broken.
As mentioned, Perplexity summaries were really useful in completing this course, as was some research to fill in the blanks for missing content. For those who are interested, I exported the thread I used for the course and have made it available in PDF format as Perplexity_Research_for_Coursera_Build_Intelligent_Agents_n8n_Course.pdf
You can see the Perplexity thread (including the delete query that started me on this journey) here: https://www.perplexity.ai/search/does-n8n-have-the-ability-to-r-23edr0K1SYqlwvRRk7QGOw
© Scott S. Nelson


Slices of #ITisLikePizza with Perplexity.ai

In early 2022, I had hoped to start a social media thread with the hashtag #ITisLikePizza, but it was clearly a half-baked idea since no one replied to the first four posts:

  • A good cook can make a better pizza with the same ingredients as a bad pizza.
  • All pizzas look good from a distance.
  • If you rush the cook, it will not be as good.
  • You always regret consuming too much too fast, no matter how good it is.

Had any engagement occurred, I was ready with nine more:

  • A great-looking pizza can still not taste good.
  • It’s rarely as good as it sounds in a commercial.
  • When it is as good as the commercial, the next time it isn’t.
  • The best pizzas are often found in small, little-known places.
  • When the small, little-known place gets popular, the quality goes down.
  • Some ugly-looking pizzas are the most satisfying.
  • The hungrier you are, the longer it takes to arrive.
  • If you forget about it in the oven, the result may not be salvageable.
  • If you don’t follow the recipe, your results will vary.

Here we are, three years later, and GenAI being all the rage, it occurred to me that maybe I could extend the list with AI. My cloud-based GenAI of choice is Perplexity, so that’s what I tried it with. I originally stuck with Perplexity because it hallucinated less than other options by default, mostly because it provides sources for its outputs, which I find handy when I rewrite those outputs for my various posts. Had this been a true experiment, I would have run the same prompts in ChatGPT and Copilot, but this is just me avoiding my ever-growing to-do list for a few minutes, so it’s going to be short and anecdotal.

So, my first prompt was ‘Expand on this following list of how “IT is Like Pizza”:’ followed by the original baker’s dozen list I had come up with so far. Instead of adding new pithy pizza ponderings, it gave explanations for each. They were actually really good explanations. And no citations were provided, so this was just the LLM responding. Kind of interesting in itself.

So then I tried the usual lame improvement of the prompt with “Don’t explain the lines. Generate new lines following the same concept.” The result this time was more what I was looking for, though it may just be writer’s ego that made me think they all needed some improvement, except the ones that could just be tossed.

Then I did something I learned somewhere (I subscribe to half-a-dozen AI-specific newsletters, another dozen that cover AI frequently, plus follow a slew of companies and people on LinkedIn—not to mention that YouTube’s algorithm had caught on to my interest—so I can rarely attribute a particular concept because I either heard it multiple times or I forgot to include the attribution when making a note to cogitate on it more later): I asked Perplexity what the literary term was for the original dirty dozen dictums and it told me “analogical aphorisms” (actually, it told me more than that, but I cling to alliteration the way the one topping you don’t like on the family pizza does).

Armed with my fancy new GenAI-generated term, I demanded (in several of those newsletters and videos I have heard that asking with ‘please’ is just a waste of tokens…which I mostly agree with, unless you think the best sources are places like Reddit, but more on that another time): “Create ten more analogical aphorisms with the same them of IT is like Pizza”. It was probably more sunk-cost fallacy than truth, but this list seemed much more on target, though some were definite deletes, some clearly needed editing, and…yeah, it was definitely a case of time commitment bias.
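If you would rather script that back-and-forth than click through the UI, here is roughly the same conversation as a multi-turn API call, with the running history passed back each time so the follow-up prompts build on the earlier answers. Again, this is a sketch, not what I actually ran: the endpoint and “sonar” model name are assumptions, the prompts are lightly paraphrased (typo fixed), and I only include the first two pizza lines for brevity.

```python
# Sketch only: the prompt sequence from this post as a scripted, multi-turn conversation.
# The API endpoint and "sonar" model name are assumptions; I actually used the Perplexity web UI.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"}

ORIGINAL_LIST = "\n".join([
    "A good cook can make a better pizza with the same ingredients as a bad pizza.",
    "All pizzas look good from a distance.",
    # ...the rest of the original thirteen lines...
])

def ask(messages):
    """Send the running conversation and return the assistant's reply."""
    resp = requests.post(
        API_URL,
        json={"model": "sonar", "messages": messages},
        headers=HEADERS,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# First prompt: expand the original list (this is where I got explanations instead of new lines).
history = [{
    "role": "user",
    "content": 'Expand on this following list of how "IT is Like Pizza":\n' + ORIGINAL_LIST,
}]
history.append({"role": "assistant", "content": ask(history)})

# Follow-ups reuse the history so each prompt builds on the previous answers.
for prompt in [
    "Don't explain the lines. Generate new lines following the same concept.",
    "What is the literary term for aphorisms built on analogies like these?",
    "Create ten more analogical aphorisms with the same theme of IT is like Pizza.",
]:
    history.append({"role": "user", "content": prompt})
    history.append({"role": "assistant", "content": ask(history)})
    print(history[-1]["content"], "\n")
```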

For the curious, the full thread with all the responses can be found here for as long as the link stays good (I expect only slightly longer than my Pro subscription):

https://www.perplexity.ai/search/expand-on-this-following-list-UdB7O3KzSxCkl.WTIKBy4A

Interesting side note: I find I sometimes have to remind Perplexity to follow the context instructions for a Space.

Interesting side note about that side note: I have to specify “Perplexity Space” or it will do all sorts of random stuff that has nothing to do with Perplexity Spaces.

One more interesting side note: The most annoying thing that Perplexity does is anticipate punctuation errors. I use it to check my spelling, grammar, and punctuation because I got tired of finding the mistakes after posting. Here is one of the suggestions (similar ones are common):

  • Add a comma after “Here we are, three years later”
    Original: Here we are, three years later, and GenAI being all the rage, it occurred to me…
    Correction: Here we are, three years later, and with GenAI being all the rage, it occurred to me…

OK, one more side note and that’s it: It’s interesting that Perplexity (and other GenAIs) will properly understand a misspelled prompt without pointing it out, but when spell-checking content it does point it out, as in:

  • Change “theme” not “them”
    Original: …with the same them of IT is like Pizza…
    Correction: …with the same theme of IT is like Pizza…

Sorry, can’t resist: The side notes (originally spelled ‘side-note’) were written while editing, and when I ran them through the “Check the following for spelling, grammar, and punctuation…” prompt, it wanted to correct the punctuation of its own earlier suggestion, as in:

  1. In “as in: Change ‘theme’ not ‘them’,” add a comma after “theme” so it reads:
    • Correction: Change “theme,” not “them”
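Since that proofreading prompt has become a habit, here is the same step as a small helper, again sketched against what I believe Perplexity’s chat completions API looks like (the endpoint and “sonar” model name are assumptions; in practice I paste drafts into the web UI). The extra instruction not to rewrite what is already correct is my own attempt to head off the kind of over-correction shown above.

```python
# Sketch only: the "check spelling, grammar, and punctuation" step as a reusable helper.
# The endpoint and "sonar" model name are assumptions; I normally paste drafts into the web UI.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"}

PROOFREAD_PROMPT = (
    "Check the following for spelling, grammar, and punctuation. "
    "List each issue with the original text and a suggested correction. "
    "Do not rewrite anything that is already correct."
)

def proofread(draft: str) -> str:
    """Return the model's list of suggested corrections for a draft post."""
    resp = requests.post(
        API_URL,
        json={
            "model": "sonar",
            "messages": [{"role": "user", "content": f"{PROOFREAD_PROMPT}\n\n{draft}"}],
        },
        headers=HEADERS,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(proofread("Here we are, three years later, and GenAI being all the rage..."))
```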
© Scott S. Nelson

Falling down the GenAI Research Rabbit Hole

Before GenAI, I would have a general plan of what kind of information I would gather to prepare for a task. Sometimes that plan would be modified as new information led to a better understanding of the subject. GenAI research is so much faster, more detailed, and less cluttered with unwanted results that I have to remind myself it is just the beginning of the task. Otherwise, I would stay in research mode and never get started, whereas before GenAI I was more likely to stop out of frustration from sifting through all the crap produced by SEO more focused on views than value.

From “Don’t Be Evil” to Disruption

Google’s early days were defined by a clear mission: organize the world’s information and make it universally accessible and useful. Their unofficial motto, “Don’t be evil,” reflected a user-first approach that made Google the go-to research tool for millions. The results were clean, relevant, and genuinely helpful. Searching felt empowering, and the platform’s focus was on delivering value to users.

But as Google grew, priorities shifted. The drive for revenue and shareholder returns led to an increasing emphasis on advertising and SEO optimization. Search results became cluttered with paid placements and content designed to game the algorithm, rather than serve the user. The once powerful tool for discovery became bogged down by noise, making the research process more frustrating and less productive.

This progressive shift of focus from user value to shareholder value opened the door for disruption. When a company that once prided itself on “not being evil” starts to lose sight of its core values, it creates an opportunity for new technologies to step in and fill the gap.

The GenAI Parallel

GenAI today feels much like Google did in its early years: focused on utility, speed, and user value. The answers are direct, the distractions minimal, and the sense of possibility is fresh. Outside the media buzz there is real value in faster answers, deeper insights, and fewer irrelevant results. But the lesson from Google’s trajectory is clear: success can breed complacency, and the temptation to prioritize profit over usefulness is always lurking.

Just as Google’s early commitment to usefulness made it indispensable, GenAI’s current focus on delivering value is what sets it apart. The challenge will be maintaining that focus as the technology matures and commercial pressures increase.

The Shift in Research

  • Faster answers mean less time wasted on irrelevant results.
  • Deeper insights surface quickly, sometimes revealing connections I wouldn’t have spotted on my own.
  • Fewer distractions mean no more going to page 3 of results because the first two pages were the product of clickbait and content-farm SEO strategies.

But this abundance is a double-edged sword. The temptation to keep digging, to keep asking “what else?” is strong. Without discipline, I could spend hours exploring every tangent, never moving on to actually apply what I’ve learned.

Hopes for the Future of GenAI Research

As exhilarating as this new era is, I can’t help but wonder what comes next. Will GenAI search maintain its edge, or will it eventually succumb to the same pressures that eroded Google’s utility? The cycle of innovation and decline is a familiar one in tech, and I hope that as GenAI matures, it resists the lure of ad dollars and keeps user value front and center.

  • Transparency in how results are generated will be crucial.
  • User-focused design should always outweigh short-term profits.
  • Continuous improvement based on real user needs (not just engagement metrics) must be the guiding principle.

For now, I’m enjoying the ride, even if it means occasionally reminding myself to climb out of the rabbit hole and get back on track (which may be how Google got to where they are).

© Scott S. Nelson
Google and Microsoft battle of the AI Notebooks

Organize AI Augmentation with Notebooks

I threw up a quick post about vibe writing a couple of months ago that did not go viral (similar to my other work). For that session I bounced between the free version of Perplexity.ai, Microsoft Copilot, and Google’s NotebookLM (the latter two with a business license provided by my employer). It was very productive, with the results easily stored in NotebookLM for later reference.

Last week, I noticed a Notebook feature added to the Copilot screen and thought I would give it a whirl.

The two products have a lot in common. You can load sources into the notebooks, you can chat with the GenAI to analyze or reference the content, and they will both generate an audio summary formatted like a podcast. On that last part, NotebookLM has a maturity advantage, both in how long the feature has been available and in the ability to control the output.

Both provide easy access to their associated cloud storage. Again, NotebookLM shows the advantage of experience, having incorporated web search discovery for external references.

Copilot Notebook is part of the full Copilot suite of functionality, making it easier to incorporate earlier AI work and functionality shared within your organization, whereas Google offers only its regular menu, which is a lesser UX IMHO.

The AI space has a lot in common with New England weather. If you don’t like how it is right now, just wait a bit; it will change fairly soon. I’m pretty sure the Copilot Notebook UI changed just in the week between when I discovered it and today, but I can’t say for sure. Today, if I have my choice (and I do), I would go with NotebookLM for research where I don’t need any sensitive Microsoft Office files as input, and Copilot Notebook for things where keeping the secret sauce secret is important. That is very much predicated on Office 365 being the collaborative standard in my organization, so YMMV if it is not yours.

Not to leave out the third participant from my original vibe writing post: I have acquired a Perplexity Pro license since that earlier post and have begun to use its Spaces functionality to maintain contexts similar to the Notebook offerings. It doesn’t have an audio summary option that I’m aware of, but otherwise I like how it will incorporate references from the internet with attributions for verification. It’s my personal Pro account, so I don’t load any work files into it. I do find it useful for writing and research. While it rarely hallucinates, it is limited to the majority of what is posted online (unless I have time to prompt it along). I originally wanted to have it write the majority of this post, but the content it came back with was not entirely accurate about capabilities, so I wrote the first part the old-fashioned way.

I’m including the final draft that Perplexity came up with, as it has some good info that bears sharing but doesn’t bear retyping just to claim it as my own. Where there are discrepancies between what I have already written and what follows, my opinions and observations are in the former.

Microsoft Copilot Notebooks vs Google NotebookLM

Both platforms promise to make knowledge work more efficient, but their philosophies and user experiences diverge in meaningful ways. Microsoft Copilot Notebooks leverages the deep integration and security of the Microsoft 365 ecosystem, offering a persistent, project-based workspace where AI is grounded in your organization’s documents and conversations. Google NotebookLM, by contrast, is built for flexibility and collaboration, allowing users to aggregate a wide variety of sources, query them conversationally, and generate structured outputs like summaries and study guides.

The stakes for choosing the right tool are high: the right architecture can amplify an organization’s collective intelligence, streamline workflows, and unlock new levels of productivity. Below, I explore how each platform approaches the core challenges of knowledge work—aggregation, synthesis, collaboration, and control—before distilling the comparison into focused tables for quick reference.

The Modern Knowledge Workspace: Context and Control

Microsoft Copilot Notebooks is designed for those who want a unified, persistent workspace where every piece of project context—chats, files, meeting notes, and links—lives alongside AI-powered analysis. The AI here is not a generic assistant; it is tightly constrained to the content you provide, ensuring that responses are both relevant and secure. This approach is a natural extension of Microsoft’s enterprise-first philosophy, emphasizing compliance, data privacy, and seamless integration with tools like Teams, Outlook, and SharePoint.

Google NotebookLM, meanwhile, takes a more open-ended approach. Users can upload PDFs, Google Docs, Slides, and even web content or YouTube URLs, then interact with the AI in a conversational manner. The platform excels at generating structured outputs—summaries, FAQs, timelines—grounded in the uploaded sources, with every answer backed by citations. Collaboration is a first-class feature, with advanced sharing controls and analytics available for power users.

AI as a Creative and Analytical Partner

Both platforms position AI as more than a search tool: it’s a creative and analytical partner. In Copilot Notebooks, the AI can identify themes, answer questions, and draft new content, all within the boundaries of your project’s data. NotebookLM, on the other hand, is optimized for rapid synthesis across disparate formats, making it ideal for research-heavy workflows or teams that need to generate insights from a broad array of materials.

The distinction is subtle but important: Copilot Notebooks is about depth—drilling into your organization’s knowledge base—while NotebookLM is about breadth—pulling together insights from a wide range of sources.

Licensing and Ecosystem Considerations

Choosing between these platforms is not just about features; it’s about fit. Copilot Notebooks is available only as part of a Microsoft 365 Copilot license, targeting organizations already invested in the Microsoft stack. NotebookLM offers a more accessible entry point, with free and paid tiers, and is available to most Google Workspace users. Both offer enterprise-grade privacy, but their licensing models reflect their intended audiences and integration philosophies.

Feature Comparison

  • Content Aggregation
    Microsoft Copilot Notebooks: Aggregate chats, Microsoft 365 files, meeting notes, links, and more in one place.
    Google NotebookLM: Upload PDFs, Google Docs, Slides, websites, and YouTube URLs; manage all sources in a unified panel.
  • AI-Powered Insights
    Microsoft Copilot Notebooks: Copilot analyzes notebook content to answer questions, identify themes, and draft new content grounded in your data.
    Google NotebookLM: Conversational AI provides answers with citations and generates summaries, FAQs, timelines, and briefing docs, all grounded in your sources.
  • Audio Overviews
    Microsoft Copilot Notebooks: Generate audio summaries with two hosts walking through key points.
    Google NotebookLM: Audio Overviews with interactive AI hosts, listening on the go, and higher limits in premium tiers.
  • Collaboration
    Microsoft Copilot Notebooks: Currently lacks real-time sharing or collaborative editing.
    Google NotebookLM: Advanced sharing, including “chat-only” mode and notebook analytics in the Pro tier.
  • Integration
    Microsoft Copilot Notebooks: Deep integration with Microsoft 365 apps (Teams, Outlook, Word, PowerPoint, etc.) and seamless import/export.
    Google NotebookLM: Integrates with Google Workspace; supports a wide range of file types and sources.
  • Customization
    Microsoft Copilot Notebooks: AI responses based on notebook content; less customizable chat settings.
    Google NotebookLM: Chat customization, adjustable response styles, and analytics in the Pro tier.
  • Limits
    Microsoft Copilot Notebooks: Governed by Microsoft 365 subscription and license tier.
    Google NotebookLM: Free and Pro tiers; Pro offers 5x more notebooks, sources, queries, and audio overviews.

License Model Comparison

  • Eligibility
    Microsoft Copilot Notebooks: Requires a Microsoft 365 Copilot license; only for business/edu accounts, not for personal/family use.
    Google NotebookLM: Available to most Google Workspace and education accounts; Pro/Enterprise tiers for advanced features.
  • Pricing
    Microsoft Copilot Notebooks: $30/user/month (annual subscription) as an add-on to qualifying Microsoft 365 plans.
    Google NotebookLM: Free basic tier; Pro and Enterprise tiers offer higher limits and premium features, with pricing varying by region and subscription.
  • Trial Availability
    Microsoft Copilot Notebooks: No trial for Copilot; must have a qualifying Microsoft 365 plan.
    Google NotebookLM: 14-day full-featured trial for up to 5,000 licenses in Enterprise.
  • Data Residency/Compliance
    Microsoft Copilot Notebooks: Built on Microsoft 365’s compliance and security standards.
    Google NotebookLM: Multi-region support, including EU and US, with enterprise-grade privacy controls.

© Scott S. Nelson