Why People Never Listen

I received my weekly James Clear newsletter today. I almost filed it away with all the other newsletters I subscribe to and never seem to have time to read. But then I remembered that I’m on PTO, and Clear lives up to his name, so I jumped in. Then I stopped in my tracks when I got to:

“You have to work hard to discover how to work smart. You won’t know the best solutions until you’ve made nearly all the mistakes.”

My first thought was “Hmm”, followed by “that makes sense”, and then “but what is the point of trying to teach people how to do things?”.

I mentally spun on this for a while. I know that I learn from others (directly or indirectly via person, print, or video) faster if I have had some experience with the topic and struggled with doing or understanding it, or was at least dissatisfied with the results. I’m also familiar with constructivism and similar concepts, where some level of familiarity, even if only having heard the term in passing, makes it easier for the mind to grasp and incorporate details when they are presented.

Then I realized that what I struggle with is some of the qualitative terms in the statement, i.e.,

“You have to work hard to discover how to work smart. You won’t know the best solutions until you’ve made nearly all the mistakes.”

Is it impossible to learn how to work smart without having worked hard? If a student, mentee, trainee, etc. trusts the person teaching them and is highly and intrinsically motivated to learn, I think they can do so without the prior high level of effort. That said, I think that the skill of teachers (generically rather than academically, and based on the skill of transferring knowledge as well as knowledge of the skill) and the motivation of learners have been steadily declining, and that this kind of exception has become fairly rare.

As to the second of Mr. Clear’s sentences, I might be putting too fine a point on my umbrage. I believe that one will appreciate how much better a solution is than the alternatives if they have witnessed the outcomes of the others, and that appreciation will increase with experience over mere observation. I do, however, take issue with “nearly all” at several levels. The first is the fuzziness of it. Is it all but one? Is it 99%? And I truly believe that some of the mistakes, with that fuzzy level being less than half, is more than adequate for some people, especially those who are either impatient (while motivated) or highly self-critical (a motivation that tries the patience).

My protests aside, I think James Clear’s statement is accurate often enough to be accepted as a general (if not a hard and fast) rule. I am positive in my belief that his experience with people in this context is much broader than my own.

I do tend to wander in my writing, so let’s wrap up the thought on why people never listen. They do, but the absorption, retention, and results of that listening vary between individuals and over time. Some folks will get it right away because they are motivated and trust the source (and sometimes that motivation is just a lack of self-trust); some will have to prove it right before they believe it, and then they may remember the source as their own actions (not all, but some); some will filter it through their own thinking until they reach a threshold where they will try anything, even something that is presented as already proven to be effective; and some will just never get it, for a variety of reasons that could be its own book, let alone blog post.

No matter the motivation or outcome, if you are trying to share wisdom, be wise enough to know that you may need to both vary and repeat your message to eventually be heard. If you are trying to learn something, you may want to apply what you know earlier and more often so that you are well prepared to know more.

© Scott S. Nelson

10% innovation, 90% stay out of their way

Google had the “20% Time Project”, an initiative to encourage people to innovate unsupervised. According to Wikipedia (found using Perplexity.ai, as with most of my research these days), this resulted in things like AdSense and Google News. The Wikipedia entry and other research show that it had its issues, too. Few people did any percent, and some called it the “120% Project” because the work happened beyond the regular work day.

Official or not, many companies have some type of program like this, and I’ve worked at a few where they were both formal and informal. The formal ones have some type of financial incentive, and the informal ones are generally a good way to accelerate career growth. There are many ways to measure the success of such programs: increases in employee engagement; broadening of skills (which increases organizational capabilities); competitive edge as better (or even disruptive) innovations evolve; the attraction of talented new employees; and things people haven’t even thought of yet because of the nature of the topic.

The innovations that pay off go through some stages that are similar to regular old projects. There is the discomfort stage, where something is identified as bothersome at some level and wouldn’t it be great if something were done about that. There is the ideation phase, where someone thinks that they have a way to make it less bothersome (or more cool, which is a legitimate bypass of the discomfort stage). There is the experimental stage where various things are tried to fix the problem or improve the solution (or sometimes both). Then there is the demonstration stage, where someone shows others (sometimes a someone from an earlier stage) what the experiments have yielded. The demonstration and experimental stages may iterate for a while, until either the commitment or abandonment stage occurs. Those last two are not only similar to regular old projects, they are the start of regular old projects.

One interesting thing about the stages of an innovation is that the “someone” at each stage may or may not be the same someone. There are solopreneurs who are the someone from start to finish, or at least starting at the experimental stage. Which final stage comes into play has lots of different influences. Some innovations are clearly awesome from the start, while others seem great in isolation but reveal impractical issues during or after the ideation stage. Others can be killed by skipping or re-ordering stages. As mentioned, some stages can be skipped because they may not be necessary (though I believe the discomfort stage always occurs and is just not always recognized as such). The biggest problem can occur in re-ordering the stages, especially moving the commitment stage anywhere earlier. It is usually helpful to set a time frame for the demonstration stages. Having an expectation around when the demonstration will be provides the urgency that is sometimes necessary to shift from motivation to action. Gathering estimates and milestones before that first demo, on the other hand, is a good way to steer the lifecycle toward ending in abandonment. It takes away the sense of ownership, playfulness, and excitement and turns it into a job too soon.

This problem has been observed in several psychological studies, none of which I had the foresight to take note of when I heard of them, but Perplexity did manage to find a decent article on the “Overjustification Effect”, which is to say it isn’t just my opinion. One simplification of the phenomenon is that something that is interesting and motivating in and of itself becomes much less so when someone else tells you it is, or tells you how to be interested, or demands that you explain exactly how you are interested.

There is a related effect (which I am too lazy to find references to, but trust me, I’m not making it up) where someone who is instinctively talented at something, when studied to determine how they do that thing they do through questioning and validation of those unconscious processes, ceases to be good at it. Usually it is temporary, but sometimes it is permanent.

All of which is to say that if you want your team to innovate, let them come up with what they will be innovative about, come to an agreement on a time frame in which they can demonstrate progress, and then get out of their way. Trying to set detailed steps and milestones for innovation will greatly lower the odds of getting to a point where defining detailed steps and milestones will lead to an innovative solution.


AI as a mental crutch

(Feature image created with DALL-E, providing feedback on my image proompting skills)

Every couple of years I find myself building a new Linux virtual machine baseline for some project. Even though I’ve documented the process thoroughly the last few times, there is always some quirk that has me starting mostly from scratch each time. This time I started off by setting the home page in Firefox to perplexity.ai and using it to find all of those mundane commands I forget three weeks after going back to Windows for my day-to-day work.

This time I hit a snag pretty early in that I was getting an error that made no sense to me (the specifics of which aren’t relevant to this train of thought). Perplexed, I asked Perplexity “wtf?” in my best prompt formatting (which, admittedly, is a WIP), and it gave me a few things to try. Some (not all) of them made perfect sense and I gave them a try. They failed.

I compared everything I was looking at against a similar appliance and didn’t see any obvious differences. I tried variations of prompts with Perplexity to get a more directly relevant response, which either resulted in what had already been suggested or even less relevant responses (I did mention my prompting skills need work, right?).

I then tried ChatGPT, which gave me the same answers that differed only in their verbosity and longer pauses between response blocks.

Finally, I ran the same search I started with in Google, which returned the usual multiple links to our old friend Stack Overflow. I did what I used to do before there were GPTs backed by LLMs and narrowed the time frame down to the last year to eliminate answers 10 years out of date (and sometimes links to my own past explanations that are equally out of date), and found a summary that looked closer to my actual problem than the bulk of the answers (which were clearly the source of the responses from both GPT tools I had tried earlier).

And there was my answer. Not just to this one problem, but to the kind of sloppy approach I had fallen into using AI. The thread started with an exact description of the same problem, with lots of the same answers that had been of no help. And then the original poster replied to his own thread with the solution (a habit of frequent Stack Overflow contributors I have always admired and sometimes remember to emulate), along with how he wound up in the situation. Again, the specific error isn’t relevant to this tale, but the source of it is: using the first search result that seems to answer the question rather than reading it all the way through and seeing the subtle difference between what was needed and what was provided.

No AI response will tell you about the screw ups that caused the problem (are they embarrassed for their human creators or just don’t think it’s relevant?) and the path to realizing the mistake and then recovering (and learning). But real people will and that is how we learn from each other.

So having Copilot proof your work is great, and using prompting to get a start on something you’re stuck on is a great productivity boost. But relying solely on the technology to do all the work is how we wind up forgetting how to think and learn and build better technology to give us time to think and learn. In short, don’t trade the mental crutch for a creative wheelchair.


A proven method to accelerate learning by doing

As a writer, architect, and manager, I am always looking to improve my communication, because I sometimes experience that what I say and how people respond are out of sync, and I firmly believe in the presupposition that “The meaning of any communication is the response it elicits, regardless of the intent of the communicator.” (Robert B. Dilts, et al.). That is why I was watching How to Be More Articulate (Structure Your Thoughts With 1 Framework), where Vicky Zhao mentioned “…Jeff Bezos’ famous Reversibility Decision Making Framework, is asking if I choose to do this. Is the result reversible? If yes, we should do it. If no, then let’s think about it deeply.”

This was the first time I heard of the “Reversibility Decision Making Framework”, though it perfectly describes the approach that I started following on my own at around the same time Bezos was running Amazon from his home. It is how I learned to manage computers, then networks, then scripting, then programming, then team leadership, and (eventually) architecture. I had no formal education in these areas, and at the time of learning them I had insufficient funds for books, let alone training.

I did have a computer (that took two years to pay off), an internet connection (at 14400 baud), and curiosity. After a couple of painful (and unplanned) lessons on how to re-install Windows and restore a network from backup tapes, I began looking for ways to back out mistakes before I made them. After adding that small step to my process, learning moved forward much more rapidly.

We all know that it is much faster to learn by doing. What many people I know fear (all of whom have extensive formal education in IT and related topics) is learning by what is often referred to as “trial and error”, or what I prefer to call “trial and success”. If you have a safe sandbox to work in, doing something is much more efficient and effective than doing nothing until you are certain of the outcome.

The other habit that helps with trial and success is small increments. Sure, in the bad old days of punch cards or paper tape it was necessary to write the entire program before running it. But modern IDEs make it trivial to run small pieces of code, and some simple discipline in planning your work can make it easy to test your code a line or a method at a time. Essentially, when not certain how a particular approach will work, rather than spending hours looking for “proven” solutions (that still might not work in the specific context), take your best guess and see how it works.

If the “proven” approach fails, it is likely one of many steps, and finding which failed where can be a daunting task, whereas figuring out a better solution than the one you wrote two minutes ago is generally much easier and less stressful. Sure, there are few feelings better than writing dozens of lines of code in one session and having it run perfectly, but it feels so good because it happens so rarely. Writing two lines of code and getting a result is motivating, because even if it fails you have an idea of where to go next, and when it succeeds there is still a good feeling. Those little successes will easily total up to a higher overall sense of satisfaction than the one big one.
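To make the habit concrete, here is a minimal sketch of what “trial and success” in small increments can look like. The function names and data are hypothetical, invented purely for illustration; the point is the rhythm of writing a couple of lines and checking them before building the next piece on top.

```python
# Hypothetical example: build a small utility in verified increments.

def normalize(name):
    """Trim whitespace and lowercase a name."""
    return name.strip().lower()

# Check the first increment immediately, before writing anything on top of it.
assert normalize("  Alice ") == "alice"

def unique_normalized(names):
    """Build on the verified piece: normalize, then de-duplicate in order."""
    seen = []
    for n in map(normalize, names):
        if n not in seen:
            seen.append(n)
    return seen

# A second two-line check: if this fails, the bug is in the last few lines,
# not somewhere in a hundred lines of untested code.
assert unique_normalized(["Bob", " bob", "Ann"]) == ["bob", "ann"]
```

Each `assert` is a two-minute experiment: when it fails, the search space is only the increment you just wrote, which is the whole point.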

Pro tip: VirtualBox is a free virtual machine platform with lots of pre-built machines available at no cost. It is easy to learn the basics of how to use one, and once you can, you have endless environments that you can completely destroy and start over again, in the way many games let you respawn where you left off instead of having to start over.
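The “respawn” mechanic comes from snapshots. A rough sketch using VirtualBox’s `VBoxManage` CLI (the VM name “Sandbox” and snapshot name “clean-base” are assumptions for illustration; the same actions are available in the GUI):

```shell
# Take a snapshot of the VM in its known-good state before experimenting.
VBoxManage snapshot "Sandbox" take "clean-base" --description "fresh baseline"

# ...experiment freely, break things...

# Respawn: roll the VM back to the known-good state (VM must be powered off).
VBoxManage snapshot "Sandbox" restore "clean-base"
VBoxManage startvm "Sandbox"
```

Snapshot, break, restore, repeat: the safe sandbox that makes trial and success cheap.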

Humble PS: As illustrated by the feature image for this post, I am still trying to get the hang of prompting for image generation 😛


The Real Problem with Hybrid Agile

Featured image by Gratisography: https://www.pexels.com/photo/man-person-street-shoes-2882/

Before SAFe®, most organizations would do “our brand of agile”. IMO, SAFe® takes the most common elements of a plethora of hybrid agile approaches and codifies them into a “standard” (imagine air quotes). My comments today are not about SAFe® but hybrid agile in general.

The common denominator I see across hybrid agile approaches is that they include the notion of delivering some specific deliverables by a specific date. For the agile purist this isn’t agile, because that notion is very not agile. Hats off to the purists who get to work that way, and they have already stopped reading by now unless they share the mental state of people who slow down to look at a bad accident on the freeway (which I feel is not agile, but I’m no purist, so I couldn’t say for sure).

So, having target dates for a collection of stories isn’t entirely a bad thing, in that there are many organizations that have a legal obligation to appear as if they can reliably predict the future. These target dates are where the problems start. And I will admit here that the title of this post is a lie, it is multiple problems, but I wanted to rope in those who really think there is one thing wrong, because I think they may get the most out of this particular rant.

So, the first problem (position being arbitrary, I don’t have any stats about which problem occurs most) is that if the target is missed, then there will be some people who point at the agile side of the hybrid approach as the blame. It could be, but it is much more likely that it is the behaviors that result from hybrid approaches, such as skipping documentation entirely, which results in longer ramp-up time and a lack of DRY design, because if you don’t know what’s been done, how would you know if you were doing it again?

The next problem (purposely avoiding calling it the second problem, to avoid people thinking this is a non-arbitrary sequence…beyond an order that helps to communicate the concepts) is that when the targets are missed, the people who are supposed to know what the future looks like look bad, so they get mad at the people who are trying to hit the target. Most people feel bad when people are mad at them (except people with either experience in such things, certain psychological disorders, or a hybrid of the two). No one likes to feel bad (except people with different psychological disorders), so they try to figure out how to prevent that in the future. And we have tons of action-comedies to suggest a way to do this: lower your expectations…lower…lower…that’s it. So people stop missing their targets, and Wall Street analysts think the bosses of these people are great prognosticators, when what they have actually done is teach their teams to be great procrastinators.

And the last problem I will point at, before running for my life from hip hybrid folks who will want blood and purists who stuck around and are still looking for blood, is that the people who try to make it happen still miss the mark because they focus on the wrong targets. The long-term goals have this nice, big, shiny definition, where agile aims to complete one small, solid solution. The magic comes from being able to look at the big shiny and build a small solid that is good-enough-for-now, and still in the direction of the big shiny. One definition of magic is “some can and some don’t know how”, and in the case of balancing these different paths to perfection, some will focus everything on the small solid piece and forget to think about whether it will fit into the big shiny vision. Or, they will be so enamored with the big shiny vision that everything they do in the span of a sprint is inadequate for the pieces that need to be solid, making the next sprint slower because they are still waiting on that piece that would let them move faster. Of course, magic is hard, and expecting everyone to produce it is destined for disappointment, which is why the teams that just lower their expectations are more “successful” (Dr. Evil-level air quotes there).

So, at the end of the day, or at least the end of this post, the perception of success is easiest to meet if you succeed at a level far below your potential. You can stress out everyone and sometimes hit the target. Or you can start forgiving your teams for their imperfections, cheer them for their successes, and teach them to learn from each other to be more successful every quarter. The problem with that last option is that I will have to write another post to find more problems with hybrid until they are all resolved.
