© Scott S. Nelson
Realizing Agile’s Efficiency
Zen and the Art of IT Consulting
(Feature image created with DALL-E, providing feedback on my image proompting skills)
Every couple of years I find myself building a new Linux virtual machine baseline for some project. Even though I’ve documented the process thoroughly the last few times there is always some quirk that has me starting mostly from scratch each time. This time I started off with setting the home page in Firefox to perplexity.ai and using it to find all of those mundane commands I forget three weeks after going back to Windows for my day to day work.
This time I hit a snag pretty early in that I was getting an error that made no sense to me (the specifics of which aren’t relevant to this train of thought). Perplexed, I asked Perplexity “wtf?” in my best prompt formatting (which, admittedly, is a WIP) and it gave me a few things to try. Some (not all) of them made perfect sense and I gave them a try. They failed.
I compared everything I was looking at against a similar appliance and didn’t see any obvious differences. I tried variations of prompts with Perplexity to get a more directly relevant response, which either resulted in what had already been suggested or even less relevant responses (I did mention my prompting skills need work, right?).
I then tried ChatGPT, which gave me the same answers that differed only in their verbosity and longer pauses between response blocks.
Finally, I ran the same search I started with in Google, which returned the usual multiple links from our old friend Stack Overflow. As I did in the days before LLM-backed search, I narrowed the time frame down to the last year to eliminate answers 10 years out of date (and sometimes links to my own past explanations that are equally out of date) and found a summary that looked closer to my actual problem than the bulk of the answers (which were clearly the source of the responses from both GPT sources I had tried earlier).
And there was my answer. Not just to this one problem, but to the kind of sloppy approach I had fallen into using AI. The thread started with an exact description of the same problem, with lots of the same answers that had been of no help. And then the original poster replied to his own thread with the solution (a habit of frequent Stack Overflow contributors I have always admired and sometimes remember to emulate), along with how he wound up in the situation. Again, the specific error isn’t relevant to this tale, but the cause was using the first search result that seemed to answer the question rather than reading it all the way through and seeing the subtle difference between what was needed and what was provided.
No AI response will tell you about the screw ups that caused the problem (are they embarrassed for their human creators or just don’t think it’s relevant?) and the path to realizing the mistake and then recovering (and learning). But real people will and that is how we learn from each other.
So having Copilot proof your work is great, and using prompting to get a start on something you’re stuck on is a great productivity boost. But relying solely on the technology to do all the work is how we wind up forgetting how to think and learn and build better technology to give us time to think and learn. In short, don’t trade the mental crutch for a creative wheelchair.
(Feature Image photo by Brett Sayles: https://www.pexels.com/photo/stack-of-paper-rolls-in-shelf-3720483/)
The longer version of my contribution to a LinkedIn experts article.
There are so many options and variables that any single recommendation will only be helpful to a slice of the organizations that may be thinking about this.
From a meta perspective, minimizing the number of tools required to get work done while remaining efficient and effective is important to fostering collaboration. The days of avoiding vendor lock-in at all costs faded with the growth of SaaS and other cloud-based options. Choosing a file management option that is packaged with, or cleanly integrates with, tools that are used by the entire team should be a primary consideration. Reducing the number of logins required throughout the day helps keep the team efficient, and an option that can be directly linked to requirements, communications, and quality reviews will help ensure that they are linked and maintained.
Another requirement that should be top of mind is versioning. The ability to restore older versions and compare between versions may not always be needed during any given effort, but it will be sorely missed if it is needed.
As a general recommendation, implementing some redundant, external backup should also be considered. Cloud content should be regularly backed up to an on-premises or alternative cloud location, just as on-premises content should be periodically backed up somewhere off-site and secure.
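As a minimal sketch of the versioning-plus-redundancy idea (the paths, snapshot naming, and retention count here are my own illustrative choices, not a recommendation of any particular tool), a scheduled job that takes timestamped copies of a working directory might look like:

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def snapshot(source: str, backup_root: str) -> Path:
    """Copy `source` into a new timestamped folder under `backup_root`.

    Each run produces an independent snapshot, so older versions can be
    restored or compared later -- the versioning requirement above.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%fZ")
    dest = Path(backup_root) / f"snapshot-{stamp}"
    shutil.copytree(source, dest)
    return dest

def prune(backup_root: str, keep: int = 5) -> None:
    """Delete all but the most recent `keep` snapshots."""
    snaps = sorted(Path(backup_root).glob("snapshot-*"))
    for old in snaps[:-keep]:
        shutil.rmtree(old)
```

In practice `backup_root` would point at a mounted off-site or alternate-cloud target and the job would run from a scheduler; real solutions also add encryption, verification, and restore testing.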
(Photo by Johannes Plenio: https://www.pexels.com/photo/spider-web-with-drops-of-water-1477835/)
I’ve always tried to have some basic guidelines around communications to keep myself from straying from the purpose of the conversation. I’ve had a long-standing guideline for how long to wait for someone that is late to a meeting: 3 minutes for non-critical participants; 5 minutes for colleagues who should know better; 10 minutes for people in senior roles that have too many meetings; 15 minutes for executives and customers. After 15 minutes I write it off as a break in an otherwise hectic day and move on to other tasks.
Recently there was a long-running thread of comments in a Jira story between two colleagues that occurred while I was on PTO. Catching up on things, I ran across it and, as someone outside the conversation, realized that the discussion had run so long because each had a different core understanding of the story, and neither was aware of the other’s. Because it had gone on so long, it took longer to come to consensus in the meetings that followed that comment thread.
This is not the first time I have run across such diverging threads, and I am sure you have seen as many or more. I once worked with a very good Project Manager who had a rule that if a thread went more than two responses, it was time for a phone call or meeting. As a developer-turned-architect, most of my work is with people who would rather go to the dentist than a meeting. As a senior director, I know that both are annoying when unnecessary and you always feel better afterwards when useful (though not always immediately).
I will probably revise these in the future, but for now, here is the guideline I am adopting and recommending around written threads (IMs, DMs, texts, or comment sections):
One message is a question
Two messages is a conversation
Three messages is an asynchronous meeting
Four messages probably needs a meeting to complete
For the sake of this discussion, let’s consider a meeting to be any type of verbal exchange over written, i.e., treating phone, video, and in-person equally (because otherwise we are off on a different topic, and I do that easily enough without help).
Like any guideline, these are not absolutes. For a silly-yet-accurate example, consider “can we talk?” as the first message. In general, that should go straight to a meeting. But sometimes the recipient is busy (maybe even in another conversation) and some discussion is required to settle on a time and channel. Another example is when assisting someone with a task where they understand the basics and need help with some advanced or nuanced aspects. Such a thread could go on for dozens of exchanges and be the right way to communicate asynchronously as both parties work on other things in between. In the same context but a different circumstance the thread may be inefficient, and a meeting should be called after the first question. So, no absolutes, just some guidelines to think about when you find yourself in an extended written exchange online.
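Being a developer at heart, I can’t resist restating the guideline as a (tongue-in-cheek) function. The thresholds are the ones from the list above; the function name and wording are just for illustration:

```python
def thread_advice(message_count: int) -> str:
    """Map the length of a written thread to the guideline above."""
    if message_count <= 1:
        return "a question"
    if message_count == 2:
        return "a conversation"
    if message_count == 3:
        return "an asynchronous meeting"
    # Four or more: stop typing and talk.
    return "probably needs a meeting to complete"
```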
What’s your approach? Yes, I’m now encouraging a thread that is longer than four messages and no meeting 🙂
Fair warning: This is more about not having written anything in a while than the value of the topic…and the subject matter is more about drawing your own conclusions than relying on what is easily available, so…
App is one of the most over-used and ill-defined terms in the IT lexicon. This is largely due to it being used by people outside the IT domain. The domain itself has had some whoppers, like the DHTML that was a must-have at the turn of the century even though the only honest definition of the term was that it had no real definition. Microservices runs a close second simply because there is an invisible grey line between SOA and microservices that is a mile wide and an inch short. But I digress, as is often the case.
What I’m really thinking about today is apps in the world of Salesforce.com. Specifically, apps that run inside the Salesforce CRM platform. I started thinking about this because I was looking into CPQ vendors over the weekend to refresh myself on the market to support a project proposal to select the best option for a particular business. It’s a large space, so it always helps to find someone else’s list to start with and someone had given me a list from a major analyst group as that starting point.
Other than analysts, no one likes long lists with lots of details, so I first wanted to narrow it down to those that integrated with Salesforce. It didn’t take me long to remember that Salesforce is the gold standard for CRM and there were only two that didn’t. I didn’t go through the whole list to get to that count because I’ve done these kinds of evaluations before and figured out after the first half dozen that this was not how I was going to narrow the list. The two were just what I noticed while skinning this cat another way.
The first trimming of the list was by industry focus. The potential client is a tech service, sort of SaaSy, and “High-tech products” was one of the categories, which was much closer to what they did than “Financial services” (though they have customers in that domain) or “Industrial products” (which the analyst seemed to think usually includes high-tech, though I’m not sure why).
To spare you the tedium of the several hours of wading through thousands of lines of marketing prose that could have been delivered in a table (ahem, yes, I know, kettle, black, etc.), from just the perspective of Salesforce CRM integration, I found it useful to divide them into three basic styles:
Native: An application that is built entirely in Salesforce
App: An app that runs inside Salesforce that depends on data and/or functionality managed outside of Salesforce.
Connector: An application that runs independently of Salesforce and has a way to share data with Salesforce.
The terms for these distinctions change over time and between sources. These definitions are for clarification of the table below and are purposely simplified, as the deeper distinctions are less relevant to integration than to other aspects.
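To make the “Connector” style concrete, here is a minimal sketch of an external application preparing a record for Salesforce’s REST API. The instance URL, API version, and token are placeholders (a real connector would obtain them via OAuth and add error handling, bulk APIs, and retries); building the request separately from sending it keeps the sketch testable without a live org:

```python
import json

# Placeholders -- a real connector gets these from configuration and OAuth.
SALESFORCE_INSTANCE = "https://example.my.salesforce.com"
API_VERSION = "v58.0"

def build_create_request(sobject: str, record: dict, token: str) -> dict:
    """Assemble (but do not send) a Salesforce REST create request.

    Returns the URL, headers, and JSON body an HTTP client would POST
    to create a record of type `sobject` (e.g. "Account").
    """
    return {
        "url": f"{SALESFORCE_INSTANCE}/services/data/{API_VERSION}"
               f"/sobjects/{sobject}/",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(record),
    }
```

An external app would then hand the result to its HTTP client of choice, e.g. `requests.post(req["url"], headers=req["headers"], data=req["body"])` — the defining trait of the Connector style being that this code runs entirely outside Salesforce.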
In this particular exercise, the ask was to provide some pros and cons to these different styles. My style being one of adapting general terms to technical solutions, I responded with a non-exhaustive list of Benefits and Concerns:
| Integration Styles | Native | App | Connector |
|---|---|---|---|
| Benefits | | | |
| Concerns | | | |
Of course, the next question is usually “which is best”, and I must respond with the standard architect/consultant/writer-needing-a-higher-word-count answer: “it depends”. And it depends on lots of things, such as: who will be maintaining the solution; how are capex and opex prioritized and managed; how do different stakeholders actually need to interact with the solution; and is it clearly understood that this is only one aspect of a vendor selection process, and that all known aspects must be documented and weighted before giving a recommendation?
The real reminder for me when I finished this brief analysis was that context is everything when doing any type of evaluation. The list I started with included products that were questionable as to whether they really belonged in the report, and many of the products were listed as serving domains that got no mention on the vendor’s site, with no compelling reason why the unmentioned domain would want to use them. If I had direct access to the author(s) I might learn something by asking, but the important thing is that I used their input only as a starting point and applied my own analysis, because when the recommendations are provided to a client, those authors’ names will not be on the agenda and they will not be there to answer the questions that hadn’t yet been thought of.