Realizing Agile’s Efficiency

(Feature image by Freepik, rendered from the prompt "6 cats in a line, one whispering to the next, playing the telephone game")
TL;DR: Fostering a culture of trust that leads to calm collaboration up front will yield the benefits that Agile principles promise.
Preface: While agile is in the title of this post, no claim is made that the post is about how to do agile, or about whether SAFe is or is not agile. It is about how the Manifesto for Agile Software Development is self-clarifying in that it concludes with "while there is value in the items on the right, we value the items on the left more" (italics mine), and about how the value of the items on either side should be measured by their effectiveness in a given organization and by that organization's influence on the "self-organizing teams" referenced in the Principles behind the Agile Manifesto. That said…
The value of architecture, documentation, and design reviews in SAFe was illustrated in a scenario that played out over several weeks.
The situation started with the discovery that a particular value coming from SAP had two sources. Well, not a particular value from the perspective of the source: the value had the same name and was constrained to the same list of options, but could and did have different values depending on the source, both of which related to the same physical asset. For numerous reasons not uncommon to SAP implementations that have evolved over more than a decade, it was much more prudent to fetch these values from SAP in batches and store them locally.
The issue of the incorrect source was identified by someone outside the development team when the value was found to be commonly missing from the source selected for work prioritization. For various reasons that are common across applications that support human workflow, this was considered something that needed to be addressed urgently.
The developer who had implemented the fetch from the correct source was tapped to come up with a solution. Now, one thing about this particular application is that it was a rewrite of a previous version where the value of "Working software over comprehensive documentation" was adhered to without considering the contextual reality that the team developing release one would neither be the team working on the inevitable enhancements nor ever meet that team. The rewrite came about when the system was on its third generation of developers and every enhancement was slowed because there was no way to regression test all of the undocumented parts. Unsurprisingly, the organizational context that resulted in the first version missing documentation also resulted in some table schemas being copied wholesale from the original application and never reviewed, because requirements were late, resources were late, and the timeline was unchanged. So, with no understanding of why not to, the developer provided a temporary solution of copying the data from one table to the other, because it had only been communicated that the data from one source was the correct data for the prioritization filter. Users were able to get their correctly prioritized assignments and the long-term fix went to the backlog.
As luck and timing would have it, when the design phase of the long-term fix was picked up by the architect, the developer was on vacation. Further, while this particular developer had often made time to document his designs, the particular service the long-term fix depended on was one of the few that were not documented. Still further, it had been redesigned after another service had been discovered that obtained the same data more reliably. But all of the data currently loaded was from the previous version, so even attempting to reverse engineer the service to get sample data for evaluation was not possible. These kinds of issues can lead to frustration, which in turn dampens creative thinking, which is to say that had the architect looked at the data instead of following the assumption from the story that the data wasn't yet readily available, he would have discovered that it was already present.
Eventually the source of the correct value was identified and a design created that would favor the correct value over the incorrect value, but use the incorrect value if the correct one was not available so that assignments could continue, because sometimes the two actual values were the same (which is inspiration for a future post discussing the value of MDM). The design also included updating to the correct value if it became available after the initial values were set. The architect, being thorough, noted in the design a concern about what should be done when the correct value came into the system after the record that was prioritized based on that value had been assigned and processed by a user. After much back and forth, it was finally communicated that while the data was retrieved from the same system and labeled with the same name, the two values were not different because one was incorrect, but because they were in fact two separate values meant for two different viewpoints. Which means that the design of attempting to choose and store a single correct value in both tables was invalid, and that the records altered for the work-around were now (potentially) invalid. This made the correct solution a (relatively) simple change to the sorting query.
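To make the difference concrete, here is a minimal, hypothetical sketch (the field names, data shapes, and functions are all invented for illustration; the real system stored these values in tables populated from SAP). The first key function is roughly the fallback design that was ultimately rejected; the second is the shape of the final fix, where the prioritization query simply sorts on the value meant for that viewpoint.

```python
# Hypothetical illustration only; names and structures are invented.
# Each work item carries two similarly named values from SAP: one intended
# for the prioritization (planning) viewpoint and one for the execution
# viewpoint.

def rejected_fallback_key(item: dict) -> str:
    # The abandoned design: treat one value as "correct" and fall back to
    # the other when it is missing (a COALESCE-style merge of the two).
    return item.get("planning_value") or item.get("execution_value") or ""

def final_sort_key(item: dict) -> str:
    # The eventual fix: the two values serve different viewpoints, so the
    # prioritization sort just uses the value meant for prioritization.
    return item.get("planning_value") or ""

def prioritized(work_items: list[dict]) -> list[dict]:
    # Order the work queue by the value intended for prioritization.
    return sorted(work_items, key=final_sort_key)

if __name__ == "__main__":
    queue = [
        {"id": 1, "planning_value": "B", "execution_value": "A"},
        {"id": 2, "planning_value": "A", "execution_value": "C"},
    ]
    print([item["id"] for item in prioritized(queue)])  # [2, 1]
```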
With the full 20/20 vision of hindsight, it is now clear that if the team had not felt that every issue needed to be treated as an emergency, and all of the product, design, and development stakeholders had discussed the issue prior to taking action, about 80 hours of work would have been reduced to 4 hours. Yes, there were other factors that contributed to the need for 80 hours to deal with what is a fairly minor flaw, but those factors would not have come into play had the questions been asked up front and clarity reached through collaboration.
© Scott S. Nelson

Will UpNote replace Evernote?

(Post written in UpNote and feature image generated with Freepik)
I used the free version of Evernote for over a decade. I finally went to the paid version when I needed it on more devices than the free tier supported at the time. Later, the number of devices I needed it on went down and I dropped back to the free plan. Shortly after that, they lowered the number of devices allowed on the free plan and I was forced to go back to premium. Then the price doubled. Then Bending Spoons acquired it and it doubled again. I started looking for an alternative after the first doubling, but wasn't finding anything that worked for me. I looked even harder after the second doubling, and Obsidian was as close as I could get, but not quite what I want, and the premium version ain't cheap. But it is still cheaper than BS (an appropriate acronym for them, given what they have done to Evernote), and I have had "look into an Obsidian migration" in my Evernote To Do list (a feature that was part of the app even before there was a cloud edition). I recently read a good discussion of UpNote on Medium (Don't Use Obsidian) that prompted me to try it again (I'm 90% positive I looked at it after the BS price hike). It is very much like Evernote was when it first moved to the cloud.
So far, here are my comparisons:
  • Tags are case sensitive in UpNote.
  • I will still miss being able to nest them as in Evernote.
  • No reminders in UpNote, but that was a feature I rarely used in Evernote.
  • UpNote only pins notes, not tags.
    • But Evernote is erratic about that feature, sometimes working with sub-tags and sometimes not.
  • I do like how I can make notes narrow again in UpNote.
  • Evernote reduces the paragraph spacing in lists, whereas UpNote provides fine-grained paragraph spacing but doesn't differentiate for lists.
And here is what I intend to try:
  • Export my Evernote notes into UpNote and then back to Evernote, to confirm there is an exit path in case UpNote goes under.
If the above works well, I will probably switch. Unfortunately, I have to subscribe to do that. I can see where a monthly subscription to try it out, and then a switch to the lifetime plan, might be worth it.
© Scott S. Nelson

AI as a mental crutch

(Feature image created with DALL-E, providing feedback on my image prompting skills)

Every couple of years I find myself building a new Linux virtual machine baseline for some project. Even though I've documented the process thoroughly the last few times, there is always some quirk that has me starting mostly from scratch each time. This time I started off by setting the home page in Firefox to perplexity.ai and using it to find all of those mundane commands I forget three weeks after going back to Windows for my day-to-day work.

This time I hit a snag pretty early in that I was getting an error that made no sense to me (the specifics of which aren’t relevant to this train of thought). Perplexed, I asked Perplexity “wtf?” in my best prompt formatting (which, admittedly, is a WIP) and it gave me a few things to try. Some (not all) of them made perfect sense and I gave them a try. They failed.

I compared everything I was looking at against a similar appliance and didn't see any obvious differences. I tried variations of prompts with Perplexity to get a more directly relevant response, which either resulted in what had already been suggested or in even less relevant responses (I did mention my prompting skills need work, right?).

I then tried ChatGPT, which gave me the same answers, differing only in their verbosity and in the longer pauses between response blocks.

Finally, I ran the same search I started with in Google, which returned the usual multiple links to our old friend Stack Overflow. I did what I used to do before relying on chat tools backed by LLMs: I narrowed the time frame down to the last year to eliminate answers 10 years out of date (and sometimes links to my own past explanations that are equally out of date), and found a summary that looked closer to my actual problem than the bulk of the answers (which were clearly the source of the responses from both of the LLM-backed tools I had tried earlier).

And there was my answer. Not just to this one problem, but to the kind of sloppy approach I had fallen into using AI. The thread started with an exact description of the same problem, with lots of the same answers that had been of no help. And then the original poster replied to his own thread with the solution (a habit of frequent Stack Overflow contributors I have always admired and sometimes remember to emulate), along with how he wound up in the situation. Again, the specific error isn't relevant to this tale, but its source was using the first search result that seemed to answer the question rather than reading it all the way through and seeing the subtle difference between what was needed and what was provided.

No AI response will tell you about the screw-ups that caused the problem (are they embarrassed for their human creators, or do they just not think it's relevant?) or about the path to realizing the mistake and then recovering (and learning). But real people will, and that is how we learn from each other.

So having Copilot proof your work is great, and using prompting to get a start on something you're stuck on is a real productivity boost. But relying solely on the technology to do all the work is how we wind up forgetting how to think and learn and build better technology that gives us time to think and learn. In short, don't trade the mental crutch for a creative wheelchair.

© Scott S. Nelson

The Real Problem with Hybrid Agile

Featured image by Gratisography: https://www.pexels.com/photo/man-person-street-shoes-2882/

Before SAFe®, most organizations would do "our brand of agile". IMO, SAFe® takes the most common elements of a plethora of hybrid agile approaches and codifies them into a "standard" (imagine air quotes). My comments today are not about SAFe® but about hybrid agile in general.

The common denominator I see across hybrid agile approaches is that they include the notion of some specific deliverables by a specific date. For the agile purist this isn't agile, because that notion is very not agile. Hats off to the purists that get to work that way; they have already stopped reading by now unless they share the same mental state as people who slow down to look at a bad accident on the freeway (which I feel is not agile, but I'm no purist, so I couldn't say for sure).

So, having target dates for a collection of stories isn't entirely a bad thing, in that there are many organizations that have a legal obligation to appear as if they can reliably predict the future. These target dates are where the problems start. And I will admit here that the title of this post is a lie: it is multiple problems, but I wanted to rope in those who really think there is one thing wrong, because I think they may get the most out of this particular rant.

So, the first problem (the position being arbitrary; I don't have any stats about which problem occurs most) is that if the target is missed, there will be some people who point at the agile side of the hybrid approach as the cause. It could be, but it is much more likely to be the behaviors that result from hybrid approaches, such as skipping documentation entirely, which leads to longer ramp-up time and a lack of adherence to the DRY principle, because if you don't know what's been done, how would you know if you were doing it again?

The next problem (purposely not called the second problem, to avoid people thinking this is a non-arbitrary sequence…beyond an order that helps to communicate the concepts) is that when the targets are missed, the people who are supposed to know what the future looks like look bad, so they get mad at the people who are trying to hit the target. Most people feel bad when people are mad at them (except people with either experience in such things, certain psychological disorders, or a hybrid of the two). No one likes to feel bad (except people with different psychological disorders), so they try to figure out how to prevent that in the future. And we have tons of action-comedies to suggest a way to do this: lower your expectations…lower…lower…that's it. So people stop missing their targets, and Wall Street analysts think the bosses of these people are great prognosticators, when what they have actually done is teach their teams to be great procrastinators.

And the last problem I will point at, before running for my life from hip hybrid folks who will want blood and purists who stuck around and are still looking for blood, is that the people who try to make it happen still miss the mark because they focus on the wrong targets. The long-term goals have this nice, big, shiny definition, where agile aims to complete one small, solid solution. The magic comes from being able to look at the big shiny and build a small solid that is good-enough-for-now, and still in the direction of the big shiny. One definition of magic is "some can and some don't know how", and in the case of balancing these different paths to perfection, some will focus everything on the small solid piece and forget to think about whether it will fit into the big shiny vision. Or they will be so enamored with the big shiny vision that everything they do in the span of a sprint is inadequate for the pieces that need to be solid, making the next sprint slower because they are still waiting on the piece that would let them move faster. Of course, magic is hard, and expecting everyone to produce it is destined for disappointment, which is why the teams that just lower their expectations are more "successful" (Dr. Evil-level air quotes there).

So, at the end of the day, or at least the end of this post, the perception of success is easiest to meet if you succeed at a level far below your potential. You can stress out everyone and sometimes hit the target. Or you can start forgiving your teams for their imperfections, cheer them on for their successes, and teach them to learn from each other so they become more successful every quarter. The problem with that last option is that I will have to write another post to find more problems with hybrid, until they are all resolved.

© Scott S. Nelson