Too big to survive: There is no bailout for technical debt

The only difference between technical debt and financial debt is that the costs are more often known in advance when taking on financial debt. Both types of debt are tools when used intelligently, with purpose and a plan to manage them, and both can take a devastating toll when used recklessly or imposed through misdirection or miscommunication.

Acceptable vs unnecessary debt

The original heading here was “Necessary vs unnecessary debt”. On further reflection, though, I realized that the only good reasons for incurring debt are driven by time. If time is removed as a factor, there is no reasonable need for debt. So it becomes a question of when time is an important enough factor to make debt acceptable. The only context I can think of where time is universally an acceptable driver for debt is an emergency.

Beyond an emergency, the evaluation of whether debt is acceptable because of time becomes a value proposition. In our personal lives, the first car and the first house are generally considered good reasons to accept debt because both cost enough that they are likely to become more expensive over time, making it harder and harder to save for them in a reasonable period.

Similarly, building in-house custom applications rather than waiting for a Commercial Off-The-Shelf (COTS) solution will incur technical debt in minimally reviewed code and inevitable maintenance costs, but it is worth it for functionality that is key to business value. Having worked for software vendors, I can honestly say that if it isn’t already Generally Available (GA) as at least a patch-one release, it should still be considered unavailable as a COTS solution.

The other common time driver, and one that should generally not be an acceptable reason to take on debt, is impatience. Using a home equity loan to buy the latest television is a poor financial decision, and implementing a new solution without a thorough evaluation and proper training is a gamble that will usually result in higher maintenance costs or a potential system failure.

The old adage “patience is a virtue” is not only true, it is a vast understatement of the value of patience.

Stop debt before it happens

The reason technical debt is becoming an increasing concern at many companies is that it tends to grow exponentially, just like financial debt, and for the same reasons. Of the three drivers for debt mentioned previously (emergency, long-term value, and short-sighted impatience), the most frequent cause is the least necessary: impatience. Problems arising from bad habits will grow until the habit has been replaced by actions that have a more positive effect.

Without getting too psychological here, impatience is the result of either wanting very much to move toward a reward or to move away from a loss. For some odd reason, the drive forward doesn’t seem to repeat in the same context nearly as much as the drive to move away. In technology, the drive to move away is so common that the three key emotions associated with escape-driven impatience have an acronym: FUD (fear, uncertainty, doubt). In the case of IT decisions, all three are essentially redundant, or at least a sequence: fear driven by uncertainty and/or doubt. When the decision is about taking on technical debt, the fear is that business owners or customers will be upset if a feature is delayed or reduced, and the uncertainty and doubt are the result of either not asking those stakeholders or asking only half the question.

Asking a stakeholder “Is it a problem if feature X is not in the release?” will frequently get a different answer than “Would you prefer we include feature X in a later release, or risk certain delays to all future feature releases by pushing it out before we have time to include it in a maintainable manner?” My experience is that most of the time neither question is asked and it is simply assumed the world will end if users don’t have immediate access to a new option that only 3% will ever use. It is also my experience that when the tradeoff of reliability and stability versus immediacy is explained to stakeholders, they usually opt for the delay. I know many people believe that businesses have lost sight of long-term implications, and I believe that in many cases it is not because they are deliberately ignoring them but because the people who should tell them when and why to be cautious are afraid of saying anything that will be considered “negative”.

To summarize, the best way to reduce the accumulation of technical debt is to have open, honest communication with stakeholders about when decisions involve technical debt, the consequences of that debt, and the options for avoiding it. Then, if the decision is still to choose the right now over the right way, immediately request buy-in for a plan, timeline and budget to reduce the technical debt. Again, my experience is that when the business is presented with a request to ensure functional reliability, they frequently say yes.

Getting out of unavoidable or accepted debt

Taking on some technical debt is inevitable. This is why the modifiers usually, most often, and frequently were used in the previous section rather than the more comforting yet inaccurate always, definitely, and every time. Even in a theoretically perfect process where the business always opts for debt-free choices and emergencies never happen, there will still be debt-inducing choices made, either from lack of information or from use of imperfect vendor releases.

Where the debt is incurred unknowingly, once it is discovered be sure to document it, communicate it, and plan for its correction. Where the debt is taken on knowingly because it is unavoidable without a much larger cost, such as a vendor change, monitor the item with every project and correct it when a reasonable option to do so appears. I once had to build something that was a bit kludgey because the vendor application clearly missed an implication of how a particular feature was implemented. We created a defect in the defect tracker, which was reviewed in every release. Eighteen months later, the vendor found the error and corrected it, and we replaced the work-around with the better approach in the next release. For major enterprises, it is a good idea to raise a support case with the vendor when such things are identified, which I did not do at the time because the company I was managing this application for was too small to get vendor attention and the feature was not in broad use.
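
As an illustration of what “document it and plan for its correction” can look like at the code level, here is a minimal sketch; the function, fields, and tracker ID are all hypothetical. The point is simply that the work-around is labeled where it lives and tied to the tracker item that gets reviewed every release.

    def calculate_invoice_total(line_items):
        """Sum invoice line items in the presence of a known vendor defect.

        WORKAROUND (tracker item DEBT-1234): the vendor API currently returns
        credit lines as positive amounts, so they are negated here. Remove this
        adjustment once the vendor fix is GA and the tracker item is closed.
        """
        total = 0.0
        for item in line_items:
            amount = item["amount"]
            if item.get("type") == "credit" and amount > 0:
                amount = -amount  # compensate for the vendor defect
            total += amount
        return round(total, 2)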

Originally published at InfoWorld.

If you found this interesting, please share.

© Scott S. Nelson

How Salesforce Supports Citizen Development

Citizen development is really a responsible response to the dilemmas created by Shadow IT. Now that technology is available to those with minimal technical knowledge, business users will implement solutions without the help of the IT department. The best thing IT can do is mentor business users so that what the business is going to do anyway does not lead to enterprise-level headaches. Salesforce is at the forefront of helping business and IT with this new paradigm.

The number of times I revised the title of this post is a sign of the times in technology. Those not steeped in the gray arts of technology may think that, since computers process 1s and 0s, going from thought waves to software is a linear and clearly defined path. The more the technology evolves, the less true that is. I started with the title “How Salesforce Enables Citizen Development”, but a key premise of this post is that citizen development is not a check box in the system administrator’s console, which is what the term “enables” insinuates. “Citizen Development with Salesforce” was considered and rejected because it has a tone that suggests there is no longer a need for highly trained Salesforce administrators, architects and developers. Not only do I disagree with that premise, I even more emphatically caution against the invalid assumption that such a void would result in cost savings. These nuances of title may seem like a lot of over-thinking, except that as both a writer and a reader I am all too aware of the tendency to base a fully formed opinion on the title alone.

I was recently asked to sum up the benefits of citizen development and came up with the following:

  • User-owned Solutions
  • Reduced IT Bottlenecks
  • Streamlined Process
  • Lower Costs to Deliver

Salesforce supports citizen development by providing a platform with capabilities that can be accessed and utilized with a minimum of training and experience. The unbridled optimist will look at the preceding sentence and imagine a world where every business user can build applications that are easy to use and will contribute to productivity at a lower cost.

Citizen Development Bumper Sticker Policies

The realist would (and should) take umbrage at the word “every”. Putting aside the variance in individual capabilities, there are other key factors that make “every business user” a dangerous assumption, the two most important being time and inclination. It takes both to perform any one of the following critical tasks for a successful application:

  • Determine the full range of business requirements the application should address
  • Analyze the variety of technical solutions and appropriately select the best fit for the requirements
  • Review the existing functionality within the organization for potential reuse and impact
  • Train and support other users in the resulting application
  • Maintain proper data governance to ensure both adequate security and cost controls

So, perhaps a better statement of how Salesforce supports citizen development would be “Salesforce provides the tools for an enterprise to enable business users to build applications with little or no IT support when proper governance processes are established and followed”. This phrase doesn’t fit on a bumper sticker as easily as “Clicks not code”. Perhaps “IT doesn’t go away. IT gets out of the way” almost fits, though.

The “lower cost to deliver” benefit is based on the streamlined process of citizen development, i.e., there is no need for the business to create a full specification to hand off to IT for implementation, since the business will own the development. In an enterprise where the IT team is continuously backed up, this will lead to faster delivery as well. In cases where the scenario is simple or common enough to be configured in a generic manner, a great deal of time can be saved. However, this should not be confused with the false assumption that configuration over coding is inherently faster. Sometimes it is and sometimes it is not. Declarative programming must be provided in a way that is maintainable by the vendor and generic for the customer. For a skilled developer, custom development can be completed in far less time than it takes to configure a collection of generic options to do something as simple as looping through a specific set of data looking for a specific output.
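
To give a sense of scale, here is a minimal sketch of the kind of “loop through a specific set of data looking for a specific output” logic in question (written in Python for brevity; on the Salesforce platform this would be Apex or a Flow, and the record fields here are hypothetical):

    def find_overdue_accounts(accounts, min_days_overdue=30):
        """Return the names of accounts that are past due with a balance owing."""
        return [
            account["name"]
            for account in accounts
            if account["days_overdue"] >= min_days_overdue and account["balance"] > 0
        ]

    # Example usage with hypothetical data:
    accounts = [
        {"name": "Acme", "days_overdue": 45, "balance": 1200.00},
        {"name": "Globex", "days_overdue": 10, "balance": 300.00},
    ]
    print(find_overdue_accounts(accounts))  # ['Acme']

A handful of readable lines like these can still become technical debt if no one but the author knows they exist, which is where the roles described below come in.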

If it sounds like citizen development is a bad idea, that is neither the case nor the intention. Application development is like raising a child… it takes a village, with each member contributing their specialty at the best time and in the appropriate context: a governance group to provide guidelines, consider exceptions and enforce adherence; architecture and security specialists to determine the best way to ensure compliance; developers to provide reusable components when they are not readily available from the AppExchange; and trained Salesforce system administrators to enable appropriate permissions, configure necessary integrations, and manage production deployments.

In short, these are all roles that an organization following best practices for platform use will have in place anyway. On the one hand, supporting citizen development adds some tasks for those who support the platform. On the other hand, properly supported citizen development frees platform support personnel to focus on the tasks that most need their skills while improving relations between business and IT by enabling the business to be more self-supporting.

Originally published at InfoWorld

If you found this interesting, please share.

© Scott S. Nelson

From Agile to Fragile in 60 sprints

Feature image by Elisa Kennemer on Unsplash

The adoption of agile software development methodologies has been a necessary evolution to support the explosive demand for new and expanded capabilities. There is no doubt that without the broad adoption of agile practices, much of the growth in technology, and all of the aspects of everyday life that are driven by technology, simply would not have happened.

Still, there can be too much of a good thing. Another old adage comes to mind: “You can have it better, cheaper, faster. Pick any two.” Many organizations have insisted on all three. How did they do it? They sacrificed the documentation.

I’m not talking about saving shipping costs and trees by making manuals virtual, and then saving bandwidth by replacing the documents that once downloaded with the install files with links to online documentation (which has its own issues in this world of massive M&A). I’m talking about all those wonderful references that development teams, sometimes backed by technical writers, produced so that others could pick up where they left off to maintain and enhance the final applications. Yes, that documentation.

Self-Documenting Code does not make a Self-Documenting Solution

While no one can honestly disagree with the value put forth in the Manifesto for Agile Software Development: “Working software over comprehensive documentation”, I don’t think the intention was to suggest that documentation impedes working software. Still, the manifesto has fed the meme (in the original sense of the word, not the funny GIFs) that “Good code is self-documenting”. When I hear this, my response is “True; and knowing what code to read for a given issue or enhancement requires documentation”. My response lacks the desired impact for two reasons: it doesn’t easily fit on a bumper sticker, and it requires putting time and effort into a task that many people do not like to do.
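
To make the distinction concrete, here is a minimal sketch (the module and names are hypothetical). The code reads well enough on its own; what it cannot tell a newcomer is that this is the code to read when, for example, the nightly settlement totals are wrong. That is the job of solution-level documentation.

    # billing/settlement.py (hypothetical module)
    #
    # Readable code, but only the solution-level documentation tells a newcomer
    # that this module produces the nightly settlement totals and is therefore
    # the place to look when those totals are wrong.

    def apply_exchange_rate(amount, rate):
        """Convert an amount to the reporting currency, rounded to cents."""
        return round(amount * rate, 2)

    def settle_day(transactions, rates):
        """Sum a day's transactions in the reporting currency."""
        total = 0.0
        for txn in transactions:
            total += apply_exchange_rate(txn["amount"], rates[txn["currency"]])
        return round(total, 2)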

The danger of little or no documentation is that the application becomes dependent on “tribal knowledge”. In a perfect enterprise, this is a dependable approach because employee turnover is low, and when people do depart they always do so with adequate notice and thoroughly train their replacements. I have heard these enterprises exist, though I have never spent any time working with one. I did, however, recently work with a business intelligence group whose entire ETL staff departed within a few weeks of each other after a few years of furiously building hundreds of data integrations across a dozen business areas, and then spent less than nine hours in “knowledge transfer” sessions with my team, who were tasked with keeping the lights on until a new crew was hired and trained. There was not one page of documentation at the start of the knowledge transfer, and I have yet to find a line of documentation in any of the code.

I’m not advocating the need for waterfall-style detailed design documents. In some ways, those can be worse than no documentation because they are written before the code and configurations they are intended to describe are created, and they fail to be updated when the actual implementation deviates. In an agile world, writing the documentation after the implementation is a sound approach that supports the manifesto value of “Working software over comprehensive documentation” by being just enough documentation to facilitate maintaining the software in the future.

Meeting between the Lines

How much is just enough? That is going to vary by both the application (and/or system) and the enterprise. Some applications are so simple that documentation in the code to supplement the “self-documenting” style is sufficient. More complex solutions will need documentation that describes things from different aspects, and the number of aspects is affected by whether maintenance is done by the development team or a separate production support group. The litmus test for whether your documentation is adequate is to look at it from the perspective of someone who has never heard of your application and needs to be productive in maintaining or enhancing it in less than a day. If you have difficulty adopting that point of view (many people do, and developers doubly so), have someone outside your team review the documentation.

I find the following types of documents to be a minimum to ensure that a system can be properly managed once released to production:

  • Logical System Architecture
  • Physical System Architecture
  • Component Relation Diagrams
  • Deployment Procedures

Again, the level of detail and the need for additional documentation will be driven by complexity and experience. Another factor is how common the relevant skills are. If the candidate pool for a particular platform or framework is shallow, more detail should be provided to act as a springboard for people who may be learning the technology in general while diving into the particular implementation.

Yes, there are Exceptions

Conversely, some solutions are true one-offs that fill a very specialized need that is unlikely to evolve and may have a short lifespan. These implementations only need enough reference material to migrate them to another environment or decommission them to free up resources without negatively impacting other systems. I do caution you to be really sure that an application falls into this category before deciding to minimize the documentation. What comes to mind when I think of such decisions is the massive amount of resources dedicated in 1999 to dealing with two-digit years in applications that were not expected to still be in use when they were developed 10 or 20 years earlier.

A Final Appeal

At the beginning I agreed with the manifesto value of working code prioritized over comprehensive documentation. In the days when most software life cycles began with tons of documentation, meetings to review the documents, and meetings to review the results of the review, a great deal of more beneficial build and test activity could have been done in that time instead. My experience in documenting the results of agile and other iterative processes toward the end of the development cycle, and then reviewing that documentation with people outside the team, is that design flaws are discovered by looking at the solution as a whole rather than at the implications of individual stories in a sprint. The broader perspective that waterfall tried to create (and often failed to, since most waterfall documentation does not match the final implementation) can be achieved better, cheaper and faster by documenting at the end of the epic. In this one case, picking cheaper and faster yields better.

Documenting the fruits of your software and application implementation labors may not be the most exciting part of your team’s work, but the results of not documenting can become the most painful experience for those that follow…or your next gig!


Originally published at InfoWorld

If you found this interesting, please share.

© Scott S. Nelson

Thinking about Key Drivers to Architecture Approaches

For a solution architecture to be of the utmost value, it must address the target business capabilities in a manner that is maintainable, extensible and scalable. Solution architectures follow unstated core drivers that influence the focus of the approach. The most common of these drivers are (in order of how common they are): Initial Cost, Vendor Capability, Total Cost and Business Capabilities. These drivers are not mutually exclusive, and the key driver is what each of the other drivers is weighed against in the solution. Each driver has value to the project and to the enterprise as a whole.

In my opinion, Business Capabilities is the best key driver to have. Business capabilities are what support growth and sustainability and contribute the most to the enterprise. The other drivers should not be completely sacrificed, but when they are given priority the result is frequently a gap between the actual need and the provided solution. They are driven by agendas that are secondary to the overall needs of the enterprise and are better kept at a correspondingly secondary priority.

This is not to say that every business capability requested by an individual or group is valuable to the enterprise as a whole. The business capabilities on which to focus energy and resources need to be carefully chosen by the business, and once a capability is identified as a core need of the enterprise, it should take its place as the key driver.

If you found this interesting, please share.

© Scott S. Nelson

The First Step of a Journey that Began Five Years Ago

Note: I will update this article with a link to the application once the customer has done their own announcements in accordance with their external communication policies and procedures.

In the Beginning there was BEA WebLogic Portal

In 2008, Oracle acquired BEA Systems. Nineteen days after the official merger, Oracle announced that Premier Support for WebLogic Portal would end in 2014. The current policy document (the latest can be found at http://www.oracle.com/us/support/lifetime-support/index.html) has moved this date out to 2018, though they have been sticking to the “no new feature release” policy since the 10.3.2 release in 2010. Version 10.3.2 was intended to be 11g, except it came out a year later than originally announced at OpenWorld in 2008 and was released as a “dot” release of 10g despite having several major enhancements and new features.

I had been hired by BEA in 2006 as a WebLogic Portal consultant due to my extensive experience with the product as a consultant for netNumina Solutions. In 2009, Oracle released WebCenter 11g, and I attended the Masters Training two weeks prior to the GA date, where I learned just how very different the two portal products are.

Which Way Do We Go?

I have been unable to find any officially published direction for WebLogic Portal customers who wish to migrate to WebCenter Portal, though I have had numerous conversations with engineers, architects, consultants and product managers about how to go about this. These discussions revealed three general approaches.

One approach is to simply rebuild the portal in WebCenter. This is quite viable for very small portals and avoids the pitfall of the other approaches, which is the need to maintain two architectures. It is not a very practical approach for medium to large portals, as it involves a great deal of effort and expense over a long period of time just to provide the same functionality.

Both the second and third approaches are about transitions. One method is to create the new WebCenter portal, build all new features there, and link over to the legacy WebLogic Portal for existing features. This is very quick and easy to deliver but difficult to maintain.

The third approach is a staged migration. This approach creates a new WebCenter portal that is where users log in and interact, with the legacy functionality exposed through WSRP. This solution allows for the immediate introduction of the WebCenter architecture and minimizes maintenance cost. By following a policy where any legacy portlet that requires modification is first moved over to WebCenter, business and technology stakeholders can plan the complete retirement of the WebLogic Portal infrastructure as best suits the enterprise as a whole.
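
As an illustration of the mechanics, here is a minimal WLST-style sketch of wiring the legacy producer into the new portal. The host names, credentials, application name, and producer URL are assumptions, and the exact command signatures should be verified against the Oracle WebCenter WLST command reference for your release.

    # register_legacy_producer.py -- run with wlst.sh; all values are illustrative.
    connect('weblogic', 'welcome1', 't3://wcp-admin-host:7001')

    # Register the legacy WebLogic Portal WSRP producer with the WebCenter
    # Portal application so existing portlets can be consumed while they are
    # migrated in stages.
    registerWSRPProducer('webcenter',                # target WebCenter application name
                         'LegacyWlpProducer',        # logical producer name
                         'http://wlp-host:7011/myPortalWeb/producer?wsdl')

    listWSRPProducers('webcenter')                   # confirm the registration
    disconnect()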

Every Journey Starts with a Sprint

This month marks the production deployment of the third approach for my current client. It is a medium-sized, high-complexity portal, and it was brought from inception to production in six months using a mixed-Agile approach. It consists of 20 portlets produced by the legacy WebLogic Portal application and two legacy JSF portlets that were migrated outright in two days, because their file download functionality made them easier to migrate than to dig through the documentation to get working over WSRP. The portal also includes managed content from WebCenter Content and a navigation structure shared with a legacy Struts 1.1 application.

The Enterprise Portal Architecture for the customer is to migrate all legacy functionality from WebLogic Portal to WebCenter Portal over the next year in staged releases that will also include the introduction of new features and functionality.

If you found this interesting, please share.

© Scott S. Nelson