Google and Microsoft: Battle of the AI Notebooks

Organize AI Augmentation with Notebooks

I threw up a quick post about vibe writing a couple of months ago that did not go viral (similar to my other work). For that session I bounced between the free version of Perplexity.ai, Microsoft Copilot, and Google’s NotebookLM (the latter two under a business license provided by my employer). It was very productive, with the results easily stored in NotebookLM for later reference.

Last week, I noticed a Notebook feature added to the Copilot screen and thought I would give it a whirl.

The two products have a lot in common. You can load sources into the notebooks, you can chat with the GenAI to analyze or reference the content, and both will generate an audio summary formatted like a podcast. On that last point, NotebookLM has a maturity advantage, both in how long the feature has been available and in the ability to control the output.

Both provide easy access to their associated cloud storage. Again, NotebookLM shows the advantage of experience, having incorporated web search discovery for external references.

Copilot Notebook is part of the full Copilot suite of functionality, making it easier to incorporate AI work done earlier and functionality shared within your organization; Google offers only its regular menu, which is a lesser UX, IMHO.

The AI space has a lot in common with New England weather: if you don’t like how it is right now, just wait a bit and it will change fairly soon. I’m pretty sure the Copilot Notebook UI changed in just the week between when I discovered it and today, but I can’t say for sure. Today, if I have my choice (and I do), I would go with NotebookLM for research where I don’t need any sensitive Microsoft Office files as input, and Copilot Notebook for things where keeping the secret sauce secret is important. That is very much predicated on Office 365 being the collaborative standard in my organization, so YMMV if it isn’t in yours.

Not to leave out the third participant of my original vibe-writing post: I have acquired a Perplexity Pro license since that earlier post and have begun to use its Spaces functionality to keep contexts similar to the Notebook offerings. It doesn’t have an audio summary option that I’m aware of, but otherwise I like how it incorporates references from the internet with attributions for verification. It’s my personal Pro account, so I don’t load any work files into it. I do find it useful for writing and research. While I rarely catch it hallucinating, it is largely limited to what is posted online (unless I have time to prompt it along). I originally wanted to have it write the majority of this post, but the content it came back with was not entirely accurate about capabilities, so I wrote the first part the old-fashioned way.

I’m including the final draft that Perplexity came up with, as it has some good info that bears sharing but doesn’t bear retyping to claim as my own. Where the following disagrees with what I have already written, the former reflects my opinions and observations.

Microsoft Copilot Notebooks vs Google NotebookLM

Both platforms promise to make knowledge work more efficient, but their philosophies and user experiences diverge in meaningful ways. Microsoft Copilot Notebooks leverages the deep integration and security of the Microsoft 365 ecosystem, offering a persistent, project-based workspace where AI is grounded in your organization’s documents and conversations. Google NotebookLM, by contrast, is built for flexibility and collaboration, allowing users to aggregate a wide variety of sources, query them conversationally, and generate structured outputs like summaries and study guides.

The stakes for choosing the right tool are high: the right architecture can amplify an organization’s collective intelligence, streamline workflows, and unlock new levels of productivity. Below, I explore how each platform approaches the core challenges of knowledge work—aggregation, synthesis, collaboration, and control—before distilling the comparison into focused tables for quick reference.

The Modern Knowledge Workspace: Context and Control

Microsoft Copilot Notebooks is designed for those who want a unified, persistent workspace where every piece of project context—chats, files, meeting notes, and links—lives alongside AI-powered analysis. The AI here is not a generic assistant; it is tightly constrained to the content you provide, ensuring that responses are both relevant and secure. This approach is a natural extension of Microsoft’s enterprise-first philosophy, emphasizing compliance, data privacy, and seamless integration with tools like Teams, Outlook, and SharePoint.

Google NotebookLM, meanwhile, takes a more open-ended approach. Users can upload PDFs, Google Docs, Slides, and even web content or YouTube URLs, then interact with the AI in a conversational manner. The platform excels at generating structured outputs—summaries, FAQs, timelines—grounded in the uploaded sources, with every answer backed by citations. Collaboration is a first-class feature, with advanced sharing controls and analytics available for power users.

AI as a Creative and Analytical Partner

Both platforms position AI as more than a search tool: it’s a creative and analytical partner. In Copilot Notebooks, the AI can identify themes, answer questions, and draft new content, all within the boundaries of your project’s data. NotebookLM, on the other hand, is optimized for rapid synthesis across disparate formats, making it ideal for research-heavy workflows or teams that need to generate insights from a broad array of materials.

The distinction is subtle but important: Copilot Notebooks is about depth—drilling into your organization’s knowledge base—while NotebookLM is about breadth—pulling together insights from a wide range of sources.

Licensing and Ecosystem Considerations

Choosing between these platforms is not just about features; it’s about fit. Copilot Notebooks is available only as part of a Microsoft 365 Copilot license, targeting organizations already invested in the Microsoft stack. NotebookLM offers a more accessible entry point, with free and paid tiers, and is available to most Google Workspace users. Both offer enterprise-grade privacy, but their licensing models reflect their intended audiences and integration philosophies.

Feature Comparison

| Feature | Microsoft Copilot Notebooks | Google NotebookLM |
| --- | --- | --- |
| Content Aggregation | Aggregate chats, Microsoft 365 files, meeting notes, links, and more in one place. | Upload PDFs, Google Docs, Slides, websites, YouTube URLs; manage all sources in a unified panel. |
| AI-Powered Insights | Copilot analyzes notebook content to answer questions, identify themes, and draft new content grounded in your data. | Conversational AI provides answers with citations; generates summaries, FAQs, timelines, and briefing docs, all grounded in your sources. |
| Audio Overviews | Generate audio summaries with two hosts walking through key points. | Audio Overviews with interactive AI hosts; listen on the go; higher limits in premium tiers. |
| Collaboration | Currently lacks real-time sharing or collaborative editing. | Advanced sharing, including “chat-only” mode and notebook analytics in Pro tier. |
| Integration | Deep integration with Microsoft 365 apps (Teams, Outlook, Word, PowerPoint, etc.); seamless import/export. | Integrates with Google Workspace; supports a wide range of file types and sources. |
| Customization | AI responses based on notebook content; less customizable chat settings. | Chat customization, adjustable response styles, and analytics in Pro tier. |
| Limits | Governed by Microsoft 365 subscription and license tier. | Free and Pro tiers; Pro offers 5x more notebooks, sources, queries, and audio overviews. |

License Model Comparison

| Aspect | Microsoft Copilot Notebooks | Google NotebookLM |
| --- | --- | --- |
| Eligibility | Requires a Microsoft 365 Copilot license; business/edu accounts only, not personal/family use. | Available to most Google Workspace and education accounts; Pro/Enterprise tiers for advanced features. |
| Pricing | $30/user/month (annual subscription) as an add-on to qualifying Microsoft 365 plans. | Free basic tier; Pro and Enterprise tiers offer higher limits and premium features; pricing varies by region and subscription. |
| Trial Availability | No trial for Copilot; must have a qualifying Microsoft 365 plan. | 14-day full-featured trial for up to 5,000 licenses in Enterprise. |
| Data Residency/Compliance | Built on Microsoft 365’s compliance and security standards. | Multi-region support, including EU and US, with enterprise-grade privacy controls. |

Perplexity Sources:

  1. https://www.perplexity.ai/page/writing-your-first-book-with-a-BhWJ_y.MS6KRYuSp00k5ag
  2. https://originality.ai/blog/perplexity-and-burstiness-in-writing
  3. https://www.reddit.com/r/perplexity_ai/comments/1hlu5ev/what_model_on_perplexity_is_considered_the_best/
  4. https://www.geeky-gadgets.com/using-perplexity-ai-the-writing/
  5. https://broadbandbreakfast.com/elijah-clark-a-review-of-perplexity-ai-rewritten-by-perplexity-itself/
  6. https://www.allaboutai.com/ai-how-to/use-perplexity-pages-ai-to-write-articles/
  7. https://community.honeybook.com/all-about-ai-145/ai-prompt-for-copying-writing-style-2042
  8. https://www.youtube.com/watch?v=Ch7UWveEKt4
  9. https://ceur-ws.org/Vol-3740/paper-261.pdf
  10. https://ceur-ws.org/Vol-3551/paper3.pdf
© Scott S. Nelson

Working the Plan is not Working for the Plan

This post started with an entirely different approach.

I wanted to rant about how some statements of work are written with absolute certainty based on assumptions, and when those assumptions are proven wrong the work still proceeds with the obligation of fulfilling the SOW, resulting in a lot of wasted effort spent on things that do nothing to further the goals of the enterprise. These same SOWs are also written with absolute certainty about how much time the job will take, so the time spent on the useless parts robs from the time available to work on what will make everyone’s life easier for the next SOW…provided history doesn’t repeat itself.

Instead, I got caught up in mangling metaphors and exaggerating erroneous errors about plans that are too rigid and wound up writing the following. Somehow it still represents the point I was trying to make… at least to me. Apologies in advance if it’s not as good for you.


Waterfall methodologies like RUP ruled the enterprise IT landscape back when it was mostly green fields, and that made sense. Projects were funded based on clear goals where the ROI had already been calculated. That ROI was calculated from a set of business goals that were frozen once the project got the green light (yes, I know that scope creep existed even back when dinosaurs roamed the server rooms, but stick with me and then tell me if it really matters for the purpose of illustration). These goals were numbered 1 to n, then a person or a team (project sizes and budgets varying) would write functional requirements (FR) that met those business requirements, numbering them 1.1 to n.n. And then there would be development tasks, and test cases, all of which must have a compatible numbering system, and each must tie back to one (and only one) functional requirement. Life was simple, and project schedules were measured in years. For some, enough years to add up to more than a decade.

Even when Pmanagersaurous (the p is silent) ruled the cube halls, there were businesses (and even some rebellious departments within enterprises) that used a different approach. To those who kept getting their ties caught in printers spewing out detailed requirements to be bound, distributed, and shelved, this alternative method seemed like some cavemen cracking their knuckles and banging on keyboards, intent on creating fire or agriculture or quantum computers. Much of what they built has gone the way of GeoCities and MySpace, but some of it went the way of either owning or replacing the big companies with the big projects and the big budgets. And they taught others their secrets. So many that it stopped being secret.

Then the legacy companies decided they wanted some of this high-margin, low-cost, no-longer-secret sauce for themselves, so they hired agile coaches. Of course, the ones that were really good at doing agile were off doing agile and becoming rich and famous. So the coaches would sometimes wing it, or steal from other processes to differentiate themselves. The legacy companies, being legacy, would pay the coaches lots of money, and thank them profusely, and then start requiring a business case before green lighting an “agile” project. The business cases had numbered paragraphs, and the business leaders wanted to know how things were going every moment of the day, so they insisted that the paragraph numbers be included in the “stories”, and it was Epic.

The little agile companies merged with competitors and became big legacy companies. To compete with even bigger legacy companies, they hired their executives, who needed to know everything that was going on so they could “take it to the next level”. So all of the highly skilled, highly productive people began applying half of their skills and productivity in doing what they have always done best, and half matching numbers to lines of code. Working for the plan.

And then AI came along, but that is a post of a different order.


Everybody’s a bit of a noob sometimes…

…and that’s a good thing

I hear that title in my head as being sung to the tune of Steve Coogan’s “Everybody’s a bit of a c@@t”, which is self-deprecation at its extreme, and a good theme for this post. I continue to make slow progress on my journey of becoming deep with Generative AI and expect that if this really does become a series (two may be a sequel, but it takes more than that to be a series, even a limited one), keeping up with the game metaphor titles is going to be tough.

Anyway, work I have to do keeps getting in the way of work I want to do. I’ve created an Ubuntu VirtualBox appliance with Ollama installed and tested on it. The goal of the VM is to be able to run it RAGged while disallowing any access to the internet for (moderately) secure work. Eventually, I will add MCP and some UI. If I get really ambitious, I’ll look into how to let some agents access the web and others keep things to themselves, but that is probably pretty far down the road.

Meanwhile, I installed Joplin on the VM and my local PC, using a shared folder to sync them so that I can maintain notes off the cloud yet still work on them when the VM is not running. I also have Brave installed for a little privacy and anonymity (before I cut the network access). Then there is VSCode, because I expect it will be able to do more and more with MCP and other agent tools, plus UI. And, yes, I realize that trying to do all this without an internet connection will be a pain. I know I’ll figure out a better way as I’m working on it. I haven’t had enough time to really think about sandboxing in depth (yet). Feel free to post suggestions in the comments.
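The host-side lockdown for that setup can be scripted. Here is a minimal sketch assuming VirtualBox’s VBoxManage CLI is on the PATH; the VM name “ubuntu-ai” and the host folder path are my placeholders, not the exact configuration described above:

```python
# Sketch: cut a VirtualBox guest off from the internet and share a notes
# folder with the host (run while the VM is powered off).
import subprocess

VM = "ubuntu-ai"  # placeholder VM name

def vboxmanage(*args: str) -> None:
    """Run a VBoxManage subcommand, raising if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

if __name__ == "__main__":
    # Switch the first NIC to host-only: the guest can still talk to the
    # host, but has no route to the wider internet.
    vboxmanage("modifyvm", VM, "--nic1", "hostonly")
    # Auto-mount a host folder in the guest so Joplin on both sides can
    # sync through the filesystem instead of the cloud.
    vboxmanage("sharedfolder", "add", VM, "--name", "joplin-notes",
               "--hostpath", "/home/me/joplin-notes", "--automount")
```

Joplin’s file-system sync target can then point at that folder on both the host and the guest, which is how the notes stay off the cloud.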

While generative AI has grown in users and attention faster than any previous technology innovation, it hasn’t grown nearly as fast as it could. Especially given that it is literally the tool one would use to adopt a new tool quickly. I suspect this is because I’m not alone in having to spend time doing things the “old” way, because there isn’t enough time to learn how to do it better, a problem that has plagued every new technology since people running from saber-tooth tigers didn’t have time to make spears…until enough people had been eaten that cave management gave them a day to learn flint knapping.

“I’m so busy doing what I must do that I don’t have time for what I ought to do… and I never get a chance to do what I want to do!”
Robert A. Heinlein, Citizen of the Galaxy

If you’re curious why I’m going at this so slowly, there are a few reasons. First, I want to document it as I go so that I can share anything unique in my approach in a manner that can be repeated. Well, that’s not really first. First is because the folks that pay my bills want me spending time doing things for folks that pay their bills. They also think that my having an AWS certification is going to be more profitable for them after my current billable project wraps up, rather than my flexing mad, ninja-level skills with setting up AI infrastructure on anything. I know my crystal ball has more hallucinations than ChatGPT 3.5, so I’ll go with what the bosses say…until I think they are really wrong.

Playing AI Far Cry

My AI Learning Strategy based on Playing Far Cry

There was a brief, glorious time when self-taught technologists could compete in the job market. Yes, there is (or was—the job market changes so fast now) a push by some companies to hire developers without degrees, but the intent seems more about paying less for the same (or better) results rather than valuing capability over pedigree.

Learning on your own works best with intrinsic motivation, which can be quite diverse. Some were looking for a brass ring or golden ticket (both Boomer terms kept alive by Boomers running Hollywood) to trade their way out of a dead-end job or financial poverty. Some were just curious, while others just had a knack for tech and enjoyed the challenge. Or, in my case, all of the above, though it started with playing PC games. I had a low-end, second-hand Packard Bell and a BBS account (look it up). Access to shareware games (look that up, too) was only half the battle for someone on a $0 budget.

Many game developers target their code at the latest and greatest hardware and not a three-year-old department store discount model with ten more payments to go. I learned to tweak system files to boost performance enough to play the games. I became hooked enough on the games to write game reviews for a couple of now-defunct ezines in exchange for a byline and free games. It also revealed that a PC was more than a platform for gaming and desktop publishing and could be used in god mode!

One of the last games I reviewed before becoming a paying consumer was Far Cry 2. It is an open-world game, meaning you can basically do anything you want and go wherever you want, so long as you can survive or access it. It has an interactive plot, where choices made at certain points will affect later options and opportunities. This wasn’t a new approach; Wing Commander had been doing it for years, even featuring plotlines as cut scenes with well-known actors. But it was the first time I had seen it in a first-person shooter, my personal favorite genre because of the immersive experience (and the popularity of Doom shows I’m not alone in this). Acknowledging both Far Cry 2 and the Wing Commander franchise as a whole, the metaphor continues with the next chapter in the Far Cry series: Far Cry 3.

Far Cry 3 introduced acquiring skills, knowledge, and ingredients. You learned not only how to aim a gun but how to steady it for accuracy. Ingredients, in the form of animal pelts and plants, could be accumulated, and then you learned how to make things with them by combining them in specific quantities (called crafting). The first time I played through, I would just try to accumulate and improve weapons or craft upgrades while running through the map to conquer everything and finish a plotline along the way. It wasn’t until very near the end that the power of skills—and the value of applying the right skills to the right situation—became clear.

In every game since, I start by avoiding the plot and pursuing skills. I’ll step into the plot at points when it’s necessary to acquire a skill or sometimes an ingredient, and then go back to building my capabilities. Once I’ve taken that as far as possible, I enter the plot. This strategy leads to some games where I make it from start to finish without a single death (admittedly, that happens only about 2% of the time).

I’ve lately begun to tell people my superpower is digression. I think I’ve just demonstrated that once again.

What does this have to do with how I’m learning AI? While all the prognosticators are talking about how AI will do all our jobs soon, I’ve been too busy doing my own job to spend as much hands-on time with AI as I’d like. But I do have pockets of time to read articles, listen to podcasts, and watch YouTube until I can get hands-on (which should be very soon). It dawned on me the other day that the concepts from articles and podcasts are like collecting recipes and ingredients for later crafting—stuffing them into larger rucksacks and adding tools as I go. Meanwhile, my expanding YouTube subscriptions and playlists are like gathering more accurate and powerful weapons for taking on the more challenging enemy forts, such as building local LLMs, coordinating MCP implementations for private GPTs, and managing secure agentic operations.

I make most of my playlists public on my YouTube channel. There’s a collection of AI Learning playlists there (it’s the second collection down) that will continue to grow. I’ll also be sharing my discoveries on my blog, Medium, and Substack. New posts will be announced on my social accounts, all @scottsnelson1. You won’t find me posting on Instagram yet, because my daughter-the-influencer still hasn’t had time to teach me how to create posts with links (cue Harry Chapin), nor on Pinterest because it’s more work than I have time for right now. And, of course, one of my first MCP solutions should be to manage these posts for me. I’ll let you know when I defeat that fort on the third map.

(Alternate close written by Perplexity.ai)

If you’re an IT stakeholder or technologist, you know that learning new tools and frameworks is a lot like leveling up in a game. You collect knowledge (ingredients), build skills (crafting), and tackle bigger challenges (forts) as you go. I’d love to hear how you approach your own learning journey—drop a comment or connect with me on my channels. Let’s build our skills together and take on the next big challenge in AI and IT!


Buyer Beware Time Savings

People throw around phrases like “a fraction of the time” to express doing something much faster.

They are trading on the similarity to “a fraction of a second,” which is generally understood to be very fast. But 59.9 minutes is also a fraction of an hour, and yet not a time savings you would be willing to spend money on.
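To make the arithmetic concrete, a trivial sketch (the function name is mine):

```python
def fraction_of_original(original_minutes: float, new_minutes: float) -> float:
    """Return the new duration as a fraction of the original one."""
    return new_minutes / original_minutes

# Both of these are, technically, "a fraction of the time":
print(fraction_of_original(60.0, 59.9))  # ~0.998: a 0.2% savings
print(fraction_of_original(60.0, 6.0))   # 0.1: a genuine 10x speedup
```

The phrase only means something once both the numerator and the denominator are on the table.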

Do your own research!
