Digging Holes

A Biased Review of an Unbiased Study on Developer Productivity with AI

A long-time friend sent me a link to Does AI Actually Boost Developer Productivity? (100k Devs Study). While writing my response, I realized my reaction was a bit more than a chat reply, so I’m sending him a link to this post and hope he forgives me for the delay…

After watching this video of Yegor Denisov-Blanch, my inner critic wants to jump straight to:
He referred to mid-range engineers at the outset, in the context of who Meta said they were cutting. It wasn’t clear whether the study participants were mid-range.

That out of the way, I’ve seen similar studies, though this is the best so far, based on number of participants, approach, and level of detail. Those other studies had the boost at zero or less, and I didn’t trust the data but did recognize the premise: that AI is a multiplier, and if a developer tends to go down rabbit holes rather than focusing on the business goals, they will go even deeper down the rabbit hole and become even less productive.

I think another aspect that is lost in these studies is that this is a paradigm shift, which means even the most experienced are still figuring out how to be productive in their use of AI. Since everyone claims it is so easy, no one admits that it takes some getting used to. That will account for some of the productivity hit.

One aspect Denisov-Blanch spends a good amount of time on, where the mass media usually skims or skips entirely, is the difference between greenfield and brownfield projects. The difference is huge, with brownfield productivity gains much lower. This information is critical to businesses that are planning on reducing their development teams based on published gains, since, for most enterprises, the majority of work is decidedly brownfield.

We also haven’t yet seen the impact of greenfield applications built primarily with GenAI when it comes to long-term maintenance. Yes, we have seen some anecdotal results where they are disastrous, from both a security and CX perspective, but we haven’t seen anything at scale yet. As an architect I am probably biased, but I don’t have much confidence in GenAI creating a reliable and flexible solution, if for no other reason than that most people don’t think to ask for one at the start (except maybe architects😊).

The tools are improving (this is based on anecdotal evidence from people who have both a high degree of skill as developers and demonstrated critical thinking about tools and processes in the past). The people using the tools are becoming more skilled. So the gains in productivity will likely either climb across the board, or those below mid-range may crawl up from the less-than-zero productivity zone.

Meanwhile, anyone looking to cut their developer workforce in the next couple of years should watch this video, draw their own conclusions, and then revise their estimates.

© Scott S. Nelson

Upgrading to Windows 11 for Luddites Like Me

tl;dr: If you have serious performance issues after upgrading and have tried all the usual tweaks, check the Power Mode settings.

The last Windows upgrade where I felt better for the experience was Windows 2000. Yes, there have been some marked improvements in media capabilities since then (if not, I’d still be on Windows 2000, except for the security patch problem). The only past upgrade I found difficult (excluding disappointment as a challenge) was from 3.1 to 95. That was hard because there were all of those disk swaps to do, because CD-ROMs were still not ubiquitous. So I was a bit put off when I experienced a huge performance hit after the upgrade from 10 to 11. An upgrade that I only performed because free security updates for Windows 10 are ending in October (I believe that makes it the shortest-lived, in terms of support, ever) and I happened to be between deadlines at the moment. The last thing I wanted was to do the upgrade in the midst of some deliverable date, because I expected it to be less than fun.

Expectations met. I spent three days after the upgrade trying to live with it. I knew going in that I needed to fix a lot of the default settings to keep big tech from becoming Big Brother, and had a list prepared before I even started so I could thwart the privacy pirates promptly. My inner Fox Mulder believes that much of the performance degradation of Windows 10 over the last few years was meant to urge people like us to upgrade. Afterward, I started to think that it was more a matter of acclimating us to problems to come.

After spending way too much time discussing the problem with Perplexity, and then trying to pretend that I could live with a machine that was 50 generations newer than the one I had in 1996 and yet somehow slower, I decided to work on a hobby project that is running on another machine. Did I mention that all of my Windows devices are laptops? When I booted up the hobby machine, I did so before plugging it in and noticed the usual effects of running on battery, with the monitor dimmer and the machine a bit slower. I plugged it in and it perked right up, as did my hindbrain. I went back to my recently degraded machine and checked the power settings, and lo and behold, Big Brother had decided that I could get by with a reduction in performance even while plugged in.

Windows 11 Power Mode Settings

I’m convinced that it was actually a combination of all of the other changes I made in addition to setting Power Mode to Best Performance when plugged in. I have to believe that, because the alternative, all of that wasted time and Tylenol®, is too much to bear.

Speaking of too much to bear, writing up all of those steps is also a bit of a chore. However, I used a Space on Perplexity for the whole journey, so I asked it to write a blog post about it in my style. It still does not have my style right, which is partly on me, for being too brief in the context and skimpy on the training files, and partly just that AI isn’t human and can’t really get me (most humans have the same problem). So as not to let all of that energy-expensive processing go to waste, I’m sharing it with you below.

And now, a word from our AI:

Why I Upgraded to Windows 11 Early—and How You Can Do It with Privacy and a Classic Windows 2000 Feel

Microsoft is ending free security updates for Windows 10 this October. I chose to upgrade now, while I had no pressing deadlines, so I could handle the transition calmly and set everything up for my privacy-first, vintage Windows 2000 style preference. Here’s a streamlined approach to upgrading that respects your privacy and brings back a classic user experience.

Why Upgrade Now?

Waiting to upgrade means potential fees and stress when deadlines loom. Tackling Windows 11 ahead of time lets you customize and troubleshoot without pressure, making the switch smoother.

Step 1: Prepare Your System

  • Full Backup: Use an external drive plus a system image backup. Protect your irreplaceable data and system state.
  • Legacy Software & Drivers: Collect all your essential old-school apps and hardware drivers ahead of time, preferably from manufacturer websites.
  • Create a Recovery Plan: Have Windows 10 installation media ready just in case you want to roll back.

Step 2: Privacy-Centered Installation

  • Use a Local (Offline) Account: Avoid Microsoft’s enforced online sign-ins to maintain privacy.
  • Disable Telemetry and Data Sharing: Go to Settings > Privacy & Security and turn off every form of telemetry, diagnostics, and data collection.
  • Disable or Remove Cortana: Cortana’s integration hampers privacy; uninstall or disable it completely.
  • Block Web-Connected Search: Turn off options that send your search queries online.
  • Limit Microsoft Account Usage: Avoid syncing or using Microsoft accounts wherever possible.

Step 3: Recreate a Windows 2000 Look & Feel

  • Classic Start Menu: Install Open-Shell to bring back the familiar Windows 2000 style Start menu.
  • Taskbar Customizations: Use scripts or apps to move the taskbar to the left, reduce icon spacing, and disable icon grouping.
  • Remove Widgets and News Feeds: Disable or uninstall unnecessary widgets and news to reduce clutter.
  • Restore Classic System Sounds and Icons: Swap in old system sounds and icon sets to enhance the nostalgic experience.

Step 4: Upgrade Process Tips

  • Take Your Time: Don’t rush the prompts; decline any offered “extra” software or cloud features.
  • Monitor Network: Temporarily disconnect from the internet during installation to avoid surprise background downloads or telemetry.
  • Updates on Your Terms: Post-upgrade, disable automatic updates or set them to manual to avoid unwanted changes.

Step 5: Post-Upgrade Privacy and Performance Tuning

  • Verify Privacy Settings: Double-check that all telemetry and data sharing options remain off; Windows sometimes re-enables features after upgrades.
  • Manage Startup Programs: Use Task Manager to disable all non-essential startup apps, speeding boot times and avoiding background tracking.
  • Disable Unnecessary Services: Turn off services that you don’t use or that compromise privacy (e.g., Windows Tips, Feedback Hub).
  • Use Host File Hacks or Firewall Rules: Block known Microsoft telemetry servers for an extra layer of privacy.
  • Schedule Routine Privacy Checks: Set reminders to revisit privacy and UI settings after big Windows updates.
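
The hosts-file option in the list above can be sketched as below. Note that these server names are only commonly cited examples of Microsoft telemetry endpoints, not an official or complete list; verify them against a currently maintained blocklist before relying on them:

```
# C:\Windows\System32\drivers\etc\hosts  (edit as Administrator)
# Route commonly cited telemetry hosts to the null address so they never resolve.
# Example entries only; confirm against a maintained blocklist.
0.0.0.0 vortex.data.microsoft.com
0.0.0.0 telemetry.microsoft.com
0.0.0.0 watson.telemetry.microsoft.com
0.0.0.0 settings-win.data.microsoft.com
```

Some Windows services are reported to bypass the hosts file, which is one reason the firewall-rule option is worth keeping as a second layer.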

Final Thoughts

Upgrading while you have the bandwidth gives you control and peace of mind before the Windows 10 support deadline passes. By prioritizing privacy, minimizing modern OS distractions, and returning to a classic interface, Windows 11 can feel more comfortable and secure, closer to the straightforward experience of Windows 2000 but with current OS capabilities.

Don’t wait until October, when the pressure mounts. Take charge now, protect your privacy, and make Windows 11 work your way. Your future self will be grateful.


Slices of #ITisLikePizza with Perplexity.ai

In early 2022, I had hoped to start a social media thread with the hashtag #ITisLikePizza, but it was clearly a half-baked idea since no one replied to the first four posts:

  • A good cook can make a better pizza with the same ingredients as a bad cook.
  • All pizzas look good from a distance.
  • If you rush the cook, it will not be as good.
  • You always regret consuming too much too fast, no matter how good it is.

Had any engagement occurred, I was ready with nine more:

  • A great-looking pizza can still not taste good.
  • It’s rarely as good as it sounds in a commercial.
  • When it is as good as the commercial, the next time it isn’t.
  • The best pizzas are often found in small, little-known places.
  • When the small, little-known place gets popular, the quality goes down.
  • Some ugly-looking pizzas are the most satisfying.
  • The hungrier you are, the longer it takes to arrive.
  • If you forget about it in the oven, the result may not be salvageable.
  • If you don’t follow the recipe, your results will vary.

Here we are, three years later, and GenAI being all the rage, it occurred to me that maybe I could extend the list with AI. My cloud-based GenAI of choice is Perplexity, so that’s what I tried it with. I originally stuck with Perplexity because it hallucinated less than other options by default, mostly because it provides sources for its outputs, which I find handy when I rewrite those outputs for my various posts. Had this been a true experiment, I would have run the same prompts in ChatGPT and Copilot, but this is just me avoiding my ever-growing to-do list for a few minutes, so it’s going to be short and anecdotal.

So, my first prompt was ‘Expand on this following list of how “IT is Like Pizza”:’ followed by the original baker’s dozen list I had come up with so far. Instead of adding new pithy pizza ponderings, it gave explanations for each. They were actually really good explanations. And no citations were provided, so this was just the LLM responding. Kind of interesting in itself.

So then I tried the usual lame improvement of the prompt with “Don’t explain the lines. Generate new lines following the same concept.” The result this time was more what I was looking for, though it may just be writer’s ego that made me think they all needed some improvement, except those that could just be tossed.

Then I did something I learned somewhere (I subscribe to half-a-dozen AI-specific newsletters, another dozen that cover AI frequently, plus follow a slew of companies and people on LinkedIn—not to mention that YouTube’s algorithm had caught on to my interest—so I can rarely attribute a particular concept because I either heard it multiple times or I forgot to include the attribution when making a note to cogitate on it more later): I asked Perplexity what the literary term was for the original dirty dozen dictums and it told me “analogical aphorisms” (actually, it told me more than that, but I cling to alliteration the way the one topping you don’t like on the family pizza does).

Armed with my fancy new GenAI-generated term, I demanded (in several of those newsletters and videos I have heard that asking with ‘please’ is just a waste of tokens…which I mostly agree with, unless you think the best sources are places like Reddit, but more on that another time): “Create ten more analogical aphorisms with the same them of IT is like Pizza”. It’s likely more sunk-cost fallacy than truth that this list seemed much more on target, though some were definite deletes, and some clearly needed editing, and…yeah, it was definitely a case of time commitment bias.

For the curious, the full thread with all the responses can be found here for however long the link is good for (I expect only slightly longer than my Pro subscription):

https://www.perplexity.ai/search/expand-on-this-following-list-UdB7O3KzSxCkl.WTIKBy4A

Interesting side note: I find I sometimes have to remind Perplexity to follow the context instructions for a Space.

Interesting side note about that side note: I have to specify “Perplexity Space” or it will do all sorts of random stuff that has nothing to do with Perplexity Spaces.

One more interesting side note: The most annoying thing that Perplexity does is anticipate punctuation errors. I use it to check my spelling, grammar, and punctuation because I got tired of finding the mistakes after posting. Here is one of the suggestions (similar ones are common):

  • Add a comma after “Here we are, three years later”
    Original: Here we are, three years later, and GenAI being all the rage, it occurred to me…
    Correction: Here we are, three years later, and with GenAI being all the rage, it occurred to me…

OK, one more side note and that’s it: It’s interesting that Perplexity (and other GenAIs) will properly understand a misspelled prompt and not point it out, but in spell-checking content it does point it out, as in:

  • Change “theme” not “them”
    Original: …with the same them of IT is like Pizza…
    Correction: …with the same theme of IT is like Pizza…

Sorry, can’t resist: The side notes (originally spelled ‘side-note’) were written while editing, and when I ran them through the “Check the following for spelling, grammar, and punctuation…” prompt, it wanted to correct its own spelling, as in:

  1. In “as in: Change ‘theme’ not ‘them’,” add a comma after “theme” so it reads:
    • Correction: Change “theme,” not “them”

Falling down the GenAI Research Rabbit Hole

Before GenAI, I would have a general plan of what kind of information I would gather to prepare for a task. Sometimes that plan would be modified as new information led to a better understanding of the subject. GenAI is so much faster and more detailed, with so many fewer unwanted results, that I have to remind myself that research is just the beginning of the task. Otherwise, I would just stay in research mode and never get started, whereas before GenAI I was likely to stop out of sheer frustration from sifting through all the crap resulting from SEO more focused on views than value.

From “Don’t Be Evil” to Disruption

Google’s early days were defined by a clear mission: organize the world’s information and make it universally accessible and useful. Their unofficial motto, “Don’t be evil,” reflected a user-first approach that made Google the go-to research tool for millions. The results were clean, relevant, and genuinely helpful. Searching felt empowering, and the platform’s focus was on delivering value to users.

But as Google grew, priorities shifted. The drive for revenue and shareholder returns led to an increasing emphasis on advertising and SEO optimization. Search results became cluttered with paid placements and content designed to game the algorithm, rather than serve the user. The once powerful tool for discovery became bogged down by noise, making the research process more frustrating and less productive.

This progressive shift of focus from user value to shareholder value opened the door for disruption. When a company that once prided itself on “not being evil” starts to lose sight of its core values, it creates an opportunity for new technologies to step in and fill the gap.

The GenAI Parallel

GenAI today feels much like Google did in its early years: focused on utility, speed, and user value. The answers are direct, the distractions minimal, and the sense of possibility is fresh. Outside the media buzz there is real value in faster answers, deeper insights, and fewer irrelevant results. But the lesson from Google’s trajectory is clear: success can breed complacency, and the temptation to prioritize profit over usefulness is always lurking.

Just as Google’s early commitment to usefulness made it indispensable, GenAI’s current focus on delivering value is what sets it apart. The challenge will be maintaining that focus as the technology matures and commercial pressures increase.

The Shift in Research

  • Faster answers mean less time wasted on irrelevant results.
  • Deeper insights surface quickly, sometimes revealing connections I wouldn’t have spotted on my own.
  • Fewer distractions: no more going to page 3 of results because the first two were the product of the successful SEO strategies of clickbait and content farms.

But this abundance is a double-edged sword. The temptation to keep digging, to keep asking “what else?” is strong. Without discipline, I could spend hours exploring every tangent, never moving on to actually apply what I’ve learned.

Hopes for the Future of GenAI Research

As exhilarating as this new era is, I can’t help but wonder what comes next. Will GenAI search maintain its edge, or will it eventually succumb to the same pressures that eroded Google’s utility? The cycle of innovation and decline is a familiar one in tech, and I hope that as GenAI matures, it resists the lure of ad dollars and keeps user value front and center.

  • Transparency in how results are generated will be crucial.
  • User-focused design should always outweigh short-term profits.
  • Continuous improvement based on real user needs (not just engagement metrics) must be the guiding principle.

For now, I’m enjoying the ride, even if it means occasionally reminding myself to climb out of the rabbit hole and get back on track (which may be how Google got to where they are).


Working the Plan is not Working for the Plan

This post started with an entirely different approach.

I wanted to rant about how some statements of work are written with absolute certainty based on assumptions, and when those assumptions are proven wrong the work still proceeds under the obligation of fulfilling the SOW, resulting in a lot of wasted effort spent on things that do nothing to further the goals of the enterprise. These same SOWs are also written with absolute certainty about how much time it will take to do the job, so the time spent on the useless parts robs from the time available to work on what will make everyone’s life easier for the next SOW…provided history doesn’t repeat itself.

Instead, I got caught up in mangling metaphors and exaggerating erroneous errors about plans that are too rigid and wound up writing the following. Somehow it still represents the point I was trying to make… at least to me. Apologies in advance if it’s not as good for you.


Waterfall methodologies like RUP ruled the enterprise IT landscape back when it was mostly green fields, and that made sense. Projects were funded based on clear goals where the ROI had already been calculated. That ROI was calculated based on a set of business goals that were frozen once the project got the green light (yes, I know that scope creep existed even back when dinosaurs roamed the server rooms, but stick with me and then tell me if it really matters for the purpose of illustration). These goals were numbered 1 to n, then a person or a team (project sizes and budgets varying) would write functional requirements (FR) that met those business requirements, numbering them 1.1 to n.n. And then there would be development tasks, and test cases, all of which must have a compatible numbering system, and each must tie back to one (and only one) functional requirement. Life was simple, and project schedules were measured in years. For some, enough years to add up to more than a decade.

Even when Pmanagersaurous (the p is silent) ruled the cube halls, there were businesses (and even some rebellious departments within enterprises) that used a different approach. To those who kept getting their ties caught in printers spewing out detailed requirements to be bound, distributed, and shelved, this alternative method seemed like some cavemen cracking their knuckles and banging on keyboards, intent on creating fire or agriculture or quantum computers. Much of what they built has gone the way of GeoCities and MySpace, but some of it went the way of either owning or replacing the big companies with the big projects and the big budgets. And they taught others their secrets. So many that it stopped being secret.

Then the legacy companies decided they wanted some of this high-margin, low-cost, no-longer-secret sauce for themselves, so they hired agile coaches. Of course, the ones that were really good at doing agile were off doing agile and becoming rich and famous. So the coaches would sometimes wing it, or steal from other processes to differentiate themselves. The legacy companies, being legacy, would pay the coaches lots of money, and thank them profusely, and then start requiring a business case before green lighting an “agile” project. The business cases had numbered paragraphs, and the business leaders wanted to know how things were going every moment of the day, so they insisted that the paragraph numbers be included in the “stories”, and it was Epic.

The little agile companies merged with competitors and became big legacy companies. To compete with even bigger legacy companies, they hired their executives, who needed to know everything that was going on so they could “take it to the next level”. So all of the highly skilled, highly productive people began applying half of their skills and productivity in doing what they have always done best, and half matching numbers to lines of code. Working for the plan.

And then AI came along, but that is a post of a different order.
