
Realizing Agile’s Efficiency

(Feature image by Freepik: rendering of the prompt "6 cats in a line, one whispering to the next, playing the telephone game")
TL;DR: Fostering a culture of trust that leads to calm collaboration up front will yield the benefits that Agile principles promise.
Preface: While agile is in the title of this post, no claim is made that the post is about how to do agile or about whether SAFe is or is not agile. It is about how the Manifesto for Agile Software Development is self-clarifying in concluding with "while there is value in the items on the right, we value the items on the left more" (italics mine), and about how the value of the items on either side should be measured by their effectiveness in a given organization, including that organization's influence on the "self-organizing teams" referenced in the Principles behind the Agile Manifesto. That said…
The value of architecture, documentation, and design reviews in SAFe was illustrated in a scenario that played out over several weeks.
The situation started with the discovery that a particular value coming from SAP had two sources. Well, not a particular value from the perspective of the source: the value had the same name and was constrained to the same list of options, but could and did differ depending on the source, both of which related to the same physical asset. For numerous reasons not uncommon to SAP implementations that have evolved for over a decade, it was much more prudent to fetch these values from SAP in batches and store them locally.
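To make the trap concrete, here is a minimal sketch of the situation. All function, field, and value names are invented for illustration; the point is that two batch extracts from the same system can return a field with the same name and the same option list, yet mean different things.

```python
# Hypothetical illustration -- function, field, and value names are invented.
# Two batch extracts from the same SAP system return a field with the same
# name for the same physical asset, but sourced from different modules.

def fetch_operational_batch(asset_ids):
    """Batch fetch from the module that tracks the asset's operational state."""
    return {asset_id: {"status": "ACTIVE"} for asset_id in asset_ids}

def fetch_planning_batch(asset_ids):
    """Batch fetch from the module used to prioritize work on the asset."""
    return {asset_id: {"status": "PENDING"} for asset_id in asset_ids}

# Same field name, same list of allowed options, different meaning: storing
# either one as "the" status silently discards the other viewpoint.
```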
The issue of the incorrect source was identified by someone outside the development team, when the value was found to be commonly missing from the source selected for work prioritization. For reasons that will be familiar across a variety of applications that support human workflow, this was considered something that needed to be addressed urgently.
The developer who had implemented the fetch from the correct source was tapped to come up with a solution. Now, one thing about this particular application is that it was a rewrite of a previous version where the value of "Working software over comprehensive documentation" was adhered to without considering the contextual reality that the team developing release one would neither be the team working on the inevitable enhancements nor ever meet that team. The rewrite came about when the system was on its third generation of developers and every enhancement was slowed because there was no way to regression test all of the undocumented parts. Unsurprisingly, the organizational context that left the first version without documentation also resulted in some table schemas being copied wholesale from the original application and never reviewed, because requirements were late, resources were late, and the timeline was unchanged.

So, with no understanding of why not to, the developer provided a temporary solution of copying the data from one table to the other, because it had only been communicated that the data from one source was the correct data for the prioritization filter. Users were able to get their correctly prioritized assignments, and the long-term fix went to the backlog.
As luck and timing would have it, when the design phase of the long-term fix was picked up by the architect, the developer was on vacation. Further, while this particular developer had often made time to document his designs, the particular service the long-term fix depended on was one of the few that were not documented. Still further, it had been redesigned after another service was discovered that could obtain the same data more reliably. But all of the data currently loaded was from the previous version, so even attempting to reverse engineer the service to get sample data for evaluation was not possible. These kinds of issues lead to frustration, which in turn dampens creative thinking, which is to say that had the architect looked at the data instead of following the assumption from the story that the data wasn't yet readily available, he would have discovered that it was already present.
Eventually the source of the correct value was identified and a design created that would favor the correct value over the incorrect one, but fall back to the incorrect value when the correct one was not available so that assignments could continue, because sometimes the two actual values were the same (inspiration for a future post discussing the value of MDM). The design also included updating to the correct value if it became available after the initial values were set. The architect, being thorough, noted in the design a concern about what should be done when the correct value came into the system after the record prioritized on that value had already been assigned and processed by a user. After much back and forth, it was finally communicated that while the data was retrieved from the same system and labeled with the same name, the two values differed not because one was incorrect, but because they were in fact two separate values meant for two different viewpoints. That meant the design of attempting to choose and store a single correct value in both tables was invalid, and that the records altered for the work-around were now (potentially) invalid. It also made the correct solution a (relatively) simple change to the sorting query.
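To illustrate the difference between the abandoned design and the final fix, here is a hedged sketch; the table and column names are invented for illustration and are not from the actual system.

```python
# Hypothetical sketch -- table and column names are invented for illustration.

# Abandoned design: treat one value as "correct", fall back to the other when
# it is missing, and back-fill when the preferred value arrives late.
RECONCILING_QUERY = """
SELECT asset_id,
       COALESCE(planning_status, operational_status) AS status
FROM   local_asset_cache
"""

# Actual fix, once it was clear the two values serve two different viewpoints:
# the prioritization query simply reads the value intended for work assignment
# and leaves each value in its own column.
PRIORITIZATION_QUERY = """
SELECT work_item_id, asset_id, planning_status
FROM   work_queue
ORDER  BY planning_status, created_at
"""
```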
With the full 20/20 vision of hindsight, it is now clear that if the team had not felt that every issue needed to be treated as an emergency, and all of the product, design, and development stakeholders had discussed the issue before taking action, about 80 hours of work would have been reduced to 4. Yes, there were other factors that inflated the cost of dealing with what is a fairly minor flaw to 80 hours, but those factors would not have come into play had the questions been asked up front and clarity been reached through collaboration.
© Scott S. Nelson

The Real Problem with Hybrid Agile

Featured image by Gratisography: https://www.pexels.com/photo/man-person-street-shoes-2882/

Before SAFe®, most organizations would do "our brand of agile". IMO, SAFe® takes the most common elements of a plethora of hybrid agile approaches and codifies them into a "standard" (imagine air quotes). My comments today are not about SAFe® but about hybrid agile in general.

The common denominator I see across hybrid agile approaches is that they include the notion of specific deliverables by a specific date. For the agile purist this isn't agile, because that notion is very not agile. Hats off to the purists who get to work that way; they have already stopped reading by now unless they share the same mental state as people who slow down to look at a bad accident on the freeway (which I feel is not agile, but I'm no purist, so I couldn't say for sure).

So, having target dates for a collection of stories isn't entirely a bad thing, in that there are many organizations that have a legal obligation to appear as if they can reliably predict the future. These target dates are where the problems start. And I will admit here that the title of this post is a lie: it is multiple problems, but I wanted to rope in those who really think there is one thing wrong, because I think they may get the most out of this particular rant.

So, the first problem (its position being arbitrary; I don't have any stats about which problem occurs most) is that if the target is missed, some people will point at the agile side of the hybrid approach as the cause. It could be, but it is much more likely the behaviors that result from hybrid approaches, such as skipping documentation entirely, which results in longer ramp-up time and violations of the DRY principle, because if you don't know what's been done, how would you know if you were doing it again?

The next problem (purposely not called the second problem, to avoid anyone thinking this is a non-arbitrary sequence…beyond an order that helps to communicate the concepts) is that when the targets are missed, the people who are supposed to know what the future looks like look bad, so they get mad at the people who are trying to hit the target. Most people feel bad when people are mad at them (except people with either experience in such things, certain psychological disorders, or a hybrid of the two). No one likes to feel bad (except people with different psychological disorders), so they try to figure out how to prevent that in the future. And we have tons of action-comedies to suggest a way to do this: lower your expectations…lower…lower…that's it. So people stop missing their targets, and Wall Street analysts think the bosses of these people are great prognosticators, when what they have actually done is teach their teams to be great procrastinators.

And the last problem I will point at (before running for my life from hip hybrid folks who will want blood, and from purists who stuck around and are still looking for blood) is that the people who try to make it happen still miss the mark because they focus on the wrong targets. The long-term goal has this nice, big, shiny definition, where agile aims to complete one small, solid solution. The magic comes from being able to look at the big shiny and build a small solid that is good-enough-for-now and still in the direction of the big shiny. One definition of magic is "some can and some don't know how", and in the case of balancing these different paths to perfection, some will focus everything on the small solid piece and forget to think about whether it will fit into the big shiny vision. Or they will be so enamored with the big shiny vision that everything they do in the span of a sprint is inadequate as a solid piece, making the next sprint slower because they are still waiting on the piece that would let them move faster. Of course, magic is hard, and expecting everyone to produce it is destined for disappointment, which is why the teams that just lower their expectations are more "successful" (Dr. Evil-level air quotes there).

So, at the end of the day, or at least the end of this post, the perception of success is easiest to meet if you succeed at a level far below your potential. You can stress everyone out and sometimes hit the target. Or you can start forgiving your teams for their imperfections, cheer them for their successes, and teach them to learn from each other to be more successful every quarter. The problem with that last option is that I will have to write another post to find more problems with hybrid until they are all resolved.

© Scott S. Nelson
(Feature image: a path with a cloudy destination)

You can always get there from here

There are many quotes to the effect that perfection is a path and not a location (my wording in this case).  To me, this is the essence of agile vs waterfall (and, to a degree, SAFe).
Agile trusts that high performing teams, following processes that support continual re-evaluation, will produce higher quality deployable results with the same amount of resources.
All methodologies have processes (or ceremonies). Properly followed, they can all produce good results. Whether one methodology will produce better results than another is fairly moot, because it isn’t the methodology alone that influences the results. It is where the focus of the team is while following the methodology that makes the difference.
A team that is focused on a date will almost always have to skip some steps to make that date.
A team that is focused on the completed product is almost always going to miss an important use case (very simple products excepted).
A team that is focused on absolute perfection of every task is going to miss business expectations.
A team that is focused on sticking to an iterative process, and willing to course-correct their approach to improve the next iteration, will always produce better deliverables.
Leadership is less about providing direction and more about communicating where the team should focus to be successful. The goal is to have a shared vision and foster the flow state that will support realizing some version of that vision at regular intervals.
Or, to use another similar quote, “This is the way”.
© Scott S. Nelson

From Agile to Fragile in 60 sprints

Feature image by Elisa Kennemer on Unsplash

The adoption of agile software development methodologies has been a necessary evolution to support the explosive demand for new and expanded capabilities. There is no doubt that without the broad adoption of agile practices, much of the growth in technology, and all of those aspects of everyday life that are driven by technology, simply would not have happened.

Still, too much of a good thing applies. Another old adage that comes to mind is “You can have it better, cheaper, faster. Pick any two”. Many organizations have insisted on all three. How did they do it? They sacrificed the documentation.

I'm not talking about saving shipping costs and trees by making manuals virtual, and then saving bandwidth by replacing the documents downloaded with the install files with links to online documentation (which has its own issues in this world of massive M&A). I'm talking about all those wonderful references that development teams, sometimes backed by technical writers, produced so that others might pick up where they left off to maintain and enhance the final applications. Yes, that documentation.

Self-Documenting Code does not make a Self-Documenting Solution

While no one can honestly disagree with the value put forth in the Manifesto for Agile Software Development, "Working software over comprehensive documentation", I also don't think the intention was to say that documentation impedes working software. Still, the manifesto has fed the meme (the original definition, not the funny GIFs) that "good code is self-documenting". When I hear this, my response is "True; and knowing what code to read for a given issue or enhancement requires documentation". My response lacks the desired impact for two reasons: it doesn't easily fit on a bumper sticker, and it requires putting time and effort into a task that many people do not like to do.
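As a small illustration of what I mean (the module, function, and document names below are invented), even perfectly self-documenting code only explains itself once you have found it; it is the system documentation that tells you where to look:

```python
"""Work prioritization batch job.

The names below tell you *what* this module does; only the system
documentation (e.g., a hypothetical docs/logical-architecture.md) can tell a
newcomer that *this* is the module to read when prioritization looks wrong.
"""

def load_planning_status(asset_ids):
    """Load the planning status for each asset from the local cache."""
    return {asset_id: "PENDING" for asset_id in asset_ids}

def prioritize(work_items, planning_status):
    """Order work items by the status value meant for work assignment."""
    return sorted(work_items, key=lambda item: planning_status[item["asset_id"]])
```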

The danger of little or no documentation is that the application becomes dependent on "tribal knowledge". In a perfect enterprise, this is a dependable approach because employee turnover is low, and when people do depart they always do so with adequate notice and thoroughly train their replacements. I have heard these enterprises exist, though I have never spent any time working with one of them. I did, however, recently work with a business intelligence group whose entire ETL staff departed within a few weeks of each other, after a few years of furiously building hundreds of data integrations in a dozen different business areas, and then spent less than nine hours in "knowledge transfer" sessions with my team, who were tasked with keeping the lights on until a new crew was hired and trained. There was not one page of documentation at the start of the knowledge transfer, and I have yet to find a line of documentation in any of the code.

I’m not advocating the need for waterfall-style detailed design documents. In some ways, those can be worse than no documentation because they are written before the code and configurations they are intended to describe are created and fail to be updated when the actual implementation deviates. In an agile world, writing the documentation after the implementation is a sound approach that will support the manifesto value of “Working software over comprehensive documentation” by being just enough documentation to facilitate maintaining the software in the future.

Meeting between the Lines

How much is just enough? That is going to vary by both application (and/or system) and enterprise. Some applications are so simple that documentation in the code, supplementing the "self-documenting" style, is sufficient. More complex solutions will need documentation that describes things from different aspects, and the number of aspects is affected by whether maintenance is done by the development team or by a separate production support group. The litmus test for whether your documentation is adequate is to look at it from the perspective of someone who has never heard of your application and needs to be productive in maintaining or enhancing it in less than a day. If you have difficulty adopting that point of view (many people do, and twice as many developers), have someone outside your team review the documentation.

I find the following types of documents to be the minimum needed to ensure that a system can be properly managed once released to production (a minimal folder layout follows the list):

  • Logical System Architecture
  • Physical System Architecture
  • Component Relation Diagrams
  • Deployment Procedures
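One possible layout for such a set, with hypothetical file names, might look like this:

```
docs/
  logical-architecture.md     # components, responsibilities, data flows
  physical-architecture.md    # servers, networks, environments
  component-relations.md      # diagrams of runtime and service dependencies
  deployment.md               # step-by-step release and rollback procedures
```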

Again, the level of detail and the need for additional documentation are going to be driven by complexity and experience. Another factor is how common the relevant skills are. If the candidate pool for a particular platform or framework is shallow, more detail should be provided to act as a springboard for people who may be learning the technology in general while diving into the particular implementation.

Yes, there are Exceptions

Conversely, some solutions are true one-offs that fill a very specialized need, are unlikely to evolve, and may have a short lifespan. These implementations only really need sufficient reference material to migrate them to another environment or decommission them to free up resources without negatively impacting other systems. I do caution you to be really sure that an application falls into this category before deciding to minimize the documentation. What comes to my mind when I think of such decisions is the massive amount of resources dedicated in 1999 to dealing with two-digit years in applications that were not expected to still be in use when they were developed 10 or 20 years earlier.

A Final Appeal

At the beginning I agreed with the manifesto value of working code prioritized over comprehensive documentation. In the days when most software life cycles began with tons of documentation, and meetings to review the documents, and meetings to review the results of the review, a great deal of more beneficial build and test activity could have been done in that time instead. My experience in documenting the results of agile and other iterative processes toward the end of the development cycle, and then reviewing that documentation with people outside the team, is that design flaws are discovered when looking at the solution as a whole rather than at the implications of individual stories in a sprint. The broader perspective that waterfall tried to create (and often failed to, since most waterfall documentation does not match the final implementation) can be achieved better, cheaper, and faster by documenting at the end of the epic. In this one case, picking cheaper and faster yields better.

Documenting the fruits of your software and application implementation labors may not be the most exciting part of your team's work, but the results of not documenting can become the most painful experience for those who follow…or your next gig!


Originally published at InfoWorld

© Scott S. Nelson

The First Step of a Journey that Began Five Years Ago

Note: I will update this article with a link to the application once the customer has done their own announcements in accordance with their external communication policies and procedures.

In the Beginning there was BEA WebLogic Portal

In 2008, Oracle acquired BEA Systems. Nineteen days after the official merger, Oracle announced that Premier Support for WebLogic Portal would end in 2014. The current policy document (the latest can be found at http://www.oracle.com/us/support/lifetime-support/index.html) has moved this date out to 2018, though Oracle has stuck to the "no new feature release" policy since the 10.3.2 release in 2010. 10.3.2 was intended to be 11g, except it came out a year later than originally announced at Open World in 2008 and was released as a "dot" release of 10g despite having several major enhancements and new features.

I had been hired by BEA in 2006 as a WebLogic Portal consultant due to my extensive experience with the product as a consultant for netNumina Solutions. In 2009, Oracle released WebCenter 11g and I attended the Masters Training two weeks prior to the GA date where I learned just how very different the two portal products are.

Which Way Do We Go?

I have been unable to find any officially published direction for WebLogic Portal customers who wish to migrate to WebCenter Portal, though I have had numerous conversations with engineers, architects, consultants and product managers about how to go about this. These discussions revealed three general approaches.

One approach is to simply rebuild the portal in WebCenter. This is quite viable for very small portals and avoids the pitfall of the other approaches, which is the need to maintain two architectures. It is not a very practical approach for medium to large portals, as it requires a great deal of effort and expense over a long period of time just to provide the same functionality.

The second and third approaches are both about transition. One method is to create the new WebCenter portal, build all new features there, and link over to the legacy WebLogic Portal for existing features. This is very quick and easy to deliver but difficult to maintain.

The third approach is staged migration. This approach creates a new WebCenter portal where users log in and interact, with the legacy functionality exposed via WSRP. This solution allows for the immediate introduction of the WebCenter architecture and minimizes maintenance cost. By following a policy where any legacy portlet that requires modification is first moved over to WebCenter, business and technology stakeholders can plan the complete retirement of the WebLogic Portal infrastructure as best suits the enterprise as a whole.
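For those unfamiliar with the mechanics, registering the legacy portal as a WSRP producer in WebCenter is typically done through Enterprise Manager or WLST. The sketch below is from memory and should be verified against your WebCenter version's WLST reference; the host names, application name, and producer URL are invented:

```python
# Hedged WLST (Jython) sketch -- verify command and parameter names against
# your WebCenter version's documentation; all names and URLs are invented.
connect('weblogic', 'welcome1', 't3://wc-admin.example.com:7001')

# Register the legacy WebLogic Portal WSRP producer so its portlets can be
# consumed by the new WebCenter Portal application.
registerWSRPProducer(appName='webcenter',
                     name='LegacyWLPProducer',
                     url='http://wlp.example.com:7011/myPortalWeb/producer?wsdl')
```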

Every Journey Starts with a Sprint

This month marks the production deployment of the third approach for my current client. It is a medium-sized, high-complexity portal, and it was brought from inception to production in six months using a mixed-agile approach. It consists of 20 portlets produced by the legacy WebLogic Portal application, plus two legacy JSF portlets that were migrated in two days because they include file download functionality that made them easier to migrate than to dig through the documentation to fix as WSRP. The portal also includes managed content from WebCenter Content and a navigation structure shared with a legacy Struts 1.1 application.

The Enterprise Portal Architecture for the customer is to migrate all legacy functionality from WebLogic Portal to WebCenter Portal over the next year in staged releases that will also include the introduction of new features and functionality.

© Scott S. Nelson