Preparing for (and passing) the Salesforce Certified Platform Developer I Exam (WI19)

(Originally published at InfoWorld, this version has an additional section on test taking tips.)

I’m ambivalent about certifications. Because I spent enough time in school for equivalence-test validation to be embedded in my psyche, I have enough certificates to fill a two-inch binder clip. And because I have been working in the real world long enough to know what most of them truly reflect, I actually display them all in a two-inch binder clip with a sticky note on top that says “Pick One”. Anyone who has multiple certifications knows that not all are of equal value as proof of knowledge. I have one from an enterprise vendor that is a household name, very fancy looking with a hologram in the middle. It is the result of showing up for class every day for two weeks and the check clearing. Yet I learned a great deal achieving it. Another was acquired as a prerequisite for continued employment following a merger, and was easily achieved with no study and only a third of the allotted test time. The next took a solid year of daily study and was taken in hopes of leveraging it to leave the employer that required the previous one.

My most recent certification was one of the more difficult to study for: the Salesforce Certified Platform Developer I (WI19). While I have plenty of practical experience working with Salesforce, there are many aspects of the product that are required knowledge for certification but just haven’t come up in the business requirements I have been fulfilling. However, I enjoy working with the product, and Salesforce has done such a good job of convincing decision makers of the value of certification as an indicator of ability that I wanted it to please those decision makers. So here is how I went about it.

Trailhead

Trailhead is an invaluable resource for learning Salesforce, whether you are seeking certification or not. I use it all the time to keep up on new and updated features, and whenever I run across a requirement where a tool I’m not thoroughly familiar with may be helpful. Also, in 2018, Salesforce moved certification verification to Trailhead, along with the maintenance exams.

If you have not already created a Trailhead account, do so before reading on. If you already have a free Developer org, sign up at Salesforce Trailhead. If not, go to the Developer Edition site and get yourself one first. Pro tip: Don’t use your email address as your user ID for the Developer org, even though that is the default value on the sign-up form. User IDs must be unique across all of Salesforce, not just your org.

On Trailhead, navigate through the menus under Credentials > Certifications to the specific certification you are interested in (or jump to the Platform Developer I (PDI) page). Here you will find the Exam Guide, which is a good way to understand the structure of the exam. There is also a link to a Trailmix. Trailmixes are groupings of Trailhead training modules and superbadges created by Salesforce and other Trailhead users. They are a great way to begin preparation for a certification. If you have been working with Salesforce, many of the modules will cover topics you are already familiar with. Skip those according to your own confidence level. I will add a personal anecdote about skipping Trailmix modules: the second-lowest score I received on the exam was in a category I work with regularly. The exam questions were about aspects that I no longer consciously think about, similar to how it may be hard to give directions with street names for a route you travel daily because you traverse it on autopilot. A refresher may be useful.

Udemy

I used Udemy with great success for the Salesforce Administrator’s exam, taking an excellent preparation course taught by Francis Pindar and then a practice-test course with three practice tests. Preparing for the Developer’s exam was a bit more daunting, mostly because the nature of the exam has evolved in the last couple of years and the courses have not caught up with it.

Before you get sticker shock looking at the Udemy courses, here is a strategy for paying a reasonable price for Udemy courses in general. Create your account on Udemy and take one or two free courses (there are many worth doing). Eventually (within three months at the longest) you will receive an offer for all courses at a flat rate per course that is quite reasonable. If your employer has a discount program that includes Udemy (such as Fond), you can get an even better price. I only paid $9.99 per course through my company’s Fond program.

As of this writing, the best Udemy course I found for the Developer’s exam is Salesforce Platform Developer 1 Certification Course by Deepika Khanna. It seems to be an Apex developer course that was later repurposed for certification prep. As such, most of the content is there, though it may not be clear how it translates to the exam. There are also several course files that are never referenced in the course itself. One of these is a practice exam, in Word format, that includes all of the answers. Most of these questions also appear in Salesforce Certified Platform Developer practice Tests, so I suggest you not read the Word document until after you have gotten everything you can from the practice exams.

I had taken another prep course on Udemy that had a great outline, but I did not find it a good learning resource, as evidenced by the abysmal score I achieved on the first of the two practice exams.

The practice exam on Udemy is not the greatest, though it does reflect the actual exam process well, if not the questions themselves. There are a lot of spelling and grammatical errors in the practice exams, and the mistake I made was to assume that a misspelled answer was automatically wrong. The actual exam has no such spelling issues; they are purely a fault of the practice exam’s author.

Another lesson about practice exams is to avoid the temptation to take them early. There are only so many questions, and you can end up memorizing the answers to those rather than learning the topics well enough to answer similar-but-different questions on the actual exam.

Other practice exam sites

The site I got the most from for drilling on test questions is a ProProfs quiz, appropriately titled Salesforce Platform Developer 1. Questions are added and updated occasionally; there were 131 questions available the final week before I took the exam. The same spelling issues seen on the Udemy practice tests are there, along with many of the same questions. I also noticed questions from the quizzes in the exam preparation course I took, though I am not sure who copied whom there. One thing to be aware of is that not all of the questions have the correct answers marked. Believe it or not, that is a good thing if you use the strategy I did: every time I took the quiz, I would research the questions I missed to better understand the concepts. This helped a lot. I also would save the final page with the answers to a PDF that I stored on my phone and reviewed when idle.

Some other useful practice sites:

  • Salesforce Certification Dev 401 #1 (also on ProProfs Quiz) is for the older exam. Most of the questions are still relevant, as the new exam has more topics than the old one.
  • Salesforce Certified Platform Developer 1 Quiz at Salesforce Tips, Tricks, & Notes is short but some of the questions are really hard. The order of events question was especially helpful in getting this topic down.
  • Simplilearn’s Free Salesforce Platform Developer I Practice Test is very hard, probably because they sell a certification preparation course. It requires some contact info, but I found they only send ads a couple of times. There is no telling whether they sell the info, though, which is why I keep an anonymous account for such registrations.

Key topics to study

  • There are many questions related to Triggers and Order of Execution. Memorize this as best you can.
  • Knowing the Data Model well will boost your score. If you are good at memorizing things, the link will be sufficient. Otherwise, hands-on experience (at work or on Trailhead) is the best way to embed the key points into your subconscious. I studied this the least, and it was my highest-scoring area thanks to a combination of project work and Trailhead modules. YMMV.
  • Apex Testing has a multitude of sub-topics, and there are some overlapping concepts that can be confusing if you don’t regularly use this aspect of Salesforce.
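To make the first and last bullets concrete, here is a minimal sketch of the kind of trigger-plus-test pairing the exam expects you to reason about. The custom object and fields (Exam_Result__c, Score__c, Passed__c) are hypothetical, invented for illustration; they are not from any real org or exam question.

```apex
// Hypothetical example: a "before" trigger on a custom object. Before
// triggers fire prior to system validation rules and the save to the
// database, so field changes made here do not require an explicit DML
// statement -- a detail the Order of Execution questions frequently probe.
trigger ScoreTrigger on Exam_Result__c (before insert, before update) {
    for (Exam_Result__c result : Trigger.new) {
        // Trigger.new holds the incoming records; in a before trigger
        // they are still writable.
        if (result.Score__c != null && result.Score__c >= 65) {
            result.Passed__c = true;
        }
    }
}
```

And the matching test class, illustrating the Apex Testing basics (the @isTest annotation, Test.startTest/Test.stopTest, and re-querying to verify the trigger's effect):

```apex
@isTest
private class ScoreTriggerTest {
    @isTest
    static void passingScoreSetsFlag() {
        Exam_Result__c result = new Exam_Result__c(Score__c = 70);
        Test.startTest();
        insert result;  // fires the before-insert trigger
        Test.stopTest();
        // The trigger ran during the save, so re-query to see its effect.
        result = [SELECT Passed__c FROM Exam_Result__c WHERE Id = :result.Id];
        System.assertEquals(true, result.Passed__c);
    }
}
```

If you can explain why the trigger needs no update statement but the test needs a re-query, you have the core of both topics down.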

Test Taking Tips

(This is a bonus section for readers of my LinkedIn or Solutionist blogs)

The process of answering the test questions is just as important as the approach to preparation in ensuring a passing score. I first go through the test quickly, reading each question and response options and answering those that are immediately obvious to me. I then go through a second time, answering the questions skipped the first time through and marking for review any that I don’t feel 100% confident about. An advantage to this approach is that sometimes one question is worded in such a way that I easily remember the answer and it reminds me of the correct answer to another, related question.

I then go back through all of the questions marked for review, re-reading each question and answer and assessing my confidence. I do this in exam-question order because I might still leave a question marked for review on this pass. Any I am still unsure of after that, I review again. Finally, I go through the test from start to finish, reviewing every answer.

While this may sound very time consuming, I usually still finish with 20 to 30 minutes to spare.

Some final comments about certifications

The Salesforce employment market is heavily slanted towards certified applicants, so if you really like working with Salesforce and aren’t already in your dream job (or are a consultant always pursuing new clients), Salesforce certification is a must-have. I find the Salesforce Administrator certificate the easiest to achieve, and if you are serious about Salesforce development I recommend getting both certificates, because knowing enough administration to be certified will help you design better components.

No matter how hard or how easy a certification is to obtain, almost all are proof only of knowledge, and in general the application of knowledge is where the value lies. If you are pursuing certification, continue your learning after you achieve it. I find that participating in the support discussions and completing Trailhead modules regularly is a good way to grow beyond the day-to-day tasks.

And for employers, please weigh overall experience along with certification achievements. Someone who has years of technical experience on multiple platforms and coding languages will be able to become very proficient in Salesforce in a short period of time, while someone with several certifications who has little experience outside of Salesforce, all within a small variety of orgs, may not be the right fit for a complex implementation.

Finally, my own score on the exam was not in proportion to my actual capabilities. The exam results are broken down by category. In one case I scored very low in an area that I use regularly and frequently advise others on. In another case I scored quite high in an area I rarely use and most of my learning was academic. Having previously passed the administrator’s exam, it is no surprise that my best categories were the areas that overlap.

© Scott S. Nelson

The future is cloudy with a chance of success

(Originally published at InfoWorld.)

I would have titled this post “How to be a rainmaker in the cloud,” except that the term rainmaker often refers to the selling process, which is already succeeding, and that sales success is a key contributing factor to why so many cloud initiatives are all wet. If nothing else, the popularity of cloud services has made the use of metaphors in technical articles much easier!

In this post I want to talk about some of the slipperier aspects of cloud services: how to reap the most benefits, and ways to identify potential pitfalls before a sinking feeling sets in.

AI, big data and the cloud

A great expression going around about AI and big data is that they are “…like teenage sex: everyone talks about it; nobody really knows how to do it; everyone thinks everyone else is doing it; so everyone claims they are doing it”. I want to add “and most of those that are doing it are not having nearly as much fun as they could be”. The same can be said for the cloud, even though it has been around a lot longer and is relatively more mature. And many people actually already have it, though they might not know it, so maybe it is more like insanity.

These three technologies were all drivers for each other. The ease of getting started in the cloud (the quality of which we will ignore for now) was a multiplier for the data accumulation that had already been growing exponentially. The amount of data was so big it needed a separate category of, well, big. Then, trying to manage that much data while it was still of business value (or even determine whether it was of value) quickly became too much for human manipulation or even basic algorithms, so more complex algorithms were created, followed by algorithms that create algorithms, and then AI became a battle cry to save us all from drowning in the data lakes (which we probably wouldn’t have created if we had had true AI to start with).

The lack of quality in the initial cloud forays that we ignored at the start of the last paragraph, combined with the ease of getting into the cloud, is what led to so many being overwhelmed with data. The truth is most cloud initiatives that have real business value are still in their early stages. The early adopters (now trying to hire more data scientists than there are in order to help dam the floods) have given the rest of us a great example of what not to do. So let’s use their hindsight to build our vision.

If you don’t know what it is, don’t handle it with familiarity

Anyone with kids or animals learns fairly early that they should never pick anything up from the floor bare-handed if they are not positive of what it is. Technology deserves the same wary respect. If someone says “everyone is moving to the cloud”, I know they are either misinformed or not being truthful (sometimes both). First, because everyone rarely does anything at the same time, no matter how much marketers and salespeople tell us otherwise. And second, because lots of us have been there for years and just didn’t think of it that way. Financial services is one very common area where storage, processing, and source-of-truth data have been entrusted by customers to a service provider and accessed over the internet since internet was spelled with a capital “I”.

The truth is, there are many different types of cloud services, and the value of a given service and provider varies according to the enterprise’s needs. I know this is really obvious; what I am calling out here is how often it is forgotten after seeing a competitor’s press release or the conclusion of a well-rehearsed and intricately orchestrated sales demo. There is no doubt in my mind that cloud services will benefit most (maybe even all) businesses. But not every type of cloud service is needed by every business, and how one business uses a specific service is not how every business should use it, even if they will benefit from the same service.

Data storage is a great example, because it has become a universal need. There are businesses that can and should keep all of their data in cloud services. The plural is on purpose, because if you are going to commit your entire business to the cloud you need to have some kind of backup. On the opposite end of the spectrum, there are many small companies that have neither the volume nor the budget to properly maintain all data safely in the cloud. In the middle (and the latter example is far more common than the first) are the majority of businesses, which will benefit from storing certain types of data in the cloud and applying both MDM and synchronization to ensure availability and continuity.

The green field is not always greener

The biggest hurdle in guiding enterprise technology is not getting buy-in on new technology; it is managing the false sense of security that comes with having been convinced to adopt the new technology.  

Part of the reason stakeholders expect the latest cloud offering to save the planet is over-selling on the part of IT, vendor sales and marketing, and industry hype (which is greatly fueled by vendor marketing, in much the same manner as a recursive process with a bug).

Another equally significant part is human nature. Once a decision has been made or a belief adopted, the subconscious mind will often vigorously defend that decision or belief whenever it is challenged, deepening the associated conviction. This is why we so often see a mass movement towards solutions as they gain popularity, even when the solution is not what is called for. People who want to slow or redirect the changes for very good reasons seldom take the psychology of newly adopted convictions into account, and the result is a more energized drive in a direction that may not be right (or right at the time).

Measure twice, cut once…every single time!

To avoid the rush to greener cloud pastures, IT and business need to work together to define and agree on business and technical goals. Once the goals are agreed to, an analysis of how a given cloud solution will further those goals should be supported by a pilot or proof of concept before any large commitment is made. The analysis must vigorously seek out any side effects with negative business impact, in addition to assessing how well the goals are met (or not). Finally, if the solution is adopted, the analysis needs to be reviewed and revised for every new case. Again, no rocket science or epiphany here, just common sense that is not so common as to not benefit from repetition.

Walk, don’t run, to your nearest gateway

To offset the cautionary tone of this post, it bears repeating that there are many benefits to cloud services, and some combination of cloud services will very likely benefit any enterprise. The key concepts for realizing the most benefits (and dodging the potential pitfalls of a bad combination) are the ones covered here: agree on goals, prove the fit with a pilot, and revisit the analysis for every new case. The forecast for enterprise architecture is increasing clouds. Enjoy the shade and keep to high ground away from potential flood damage!


Maximize ROI with MVP

(Originally published at InfoWorld.)

I prefer to write about things that have either not been written about previously or where I think the value is still being missed. This article meets the latter criterion, given that the term minimum viable product was coined in 2001 (according to Wikipedia). Like many patterns and processes related to technology, there is more to the use of MVP than the name implies.

Minimum is for targeted effort

The M in MVP is often misconstrued as the minimum to go to market at the start of the effort, though it is better suited to the end of the effort. The definition of minimum should evolve through the life-cycle of design and development.

If you will accept that all functionality is moving something from one point or state to another, then the absolute minimum is being able to get from start to finish. So at the start of an iterative design and development process for a feature or product, this should be the first goal, going no further until the results are reviewed by the product owner and/or users. Another way to look at the first value of minimum is sufficient for demonstration and discussion.

Once the absolute minimum has been achieved, the additional criteria can be added in. These criteria go beyond the bare minimum needed to accomplish the change in value or state, and there can be many such requirements. They must be prioritized by the key stakeholders and then delivered singly unless (in very rare cases) multiple requirements are interdependent. Interdependency should be rare because the requirements should be stand-alone. They may need to be done in a particular order, which should be considered when determining the priority.

In a recent design session, a new feature was required for call center users to follow a script and record responses, with the script branching at points based on the answers given along the way. Some participants wanted to start from the assumption that this functionality would be reused in other processes and begin with a generic approach, even though the expectation was that any such reuse would be far in the future. It is important to acknowledge the potential for reuse and avoid approaches that will prevent it or make it overly complicated. That said, it adds nothing to the initial solution to genericize without knowing what the future requirements are. It only adds to the level of effort in producing the first MVP for stakeholder review and getting to the first production-ready MVP. In this case the difference would have been a couple of weeks in a project already behind schedule.

Viable must be based on agreement

I can think of several well-known enterprise-technology products that have a terrible user experience. Personally, I feel they miss being optimally viable, though I have to admit they are minimally functionally viable. That said, I’m neither the product owner nor the key stakeholder (let’s admit, that is the person paying for the license and not the actual user) and cannot honestly say whether the standards of viability were met.

The most important part about the previous paragraph is acknowledging that it is not my role to determine viability. I can (and should) provide my input about what I think is important about viability, but the ownership belongs to the product owner and (sometimes, and maybe not as often as it should be) the key stakeholders.

Another often-forgotten aspect of viability is the other side of the coin: the product must be maintainable. Product owners often insist on functionality that is difficult to maintain. In some cases this is an acceptable trade-off; in others the maintenance cost outweighs the business value, and that impacts viability from anyone’s point of view. My experience is that product owners asking for high-maintenance features generally do not know that is what they are asking for, and that oftentimes it is the timeline, more than the possible solutions, that makes it a maintenance issue. Delivering products using an MVP design approach is also about continuous communication between owners, designers, and developers. If any one of those roles works without both giving and receiving input, the project is in peril.

Product should be plural

Because minimal is an evolving criterion and viable is the result of consensus, a single outcome (that is, a product that is shippable on the first iteration) is extremely rare, with the possible exception of a product that is a very simple addition to an existing product.

By being willing to iterate and refactor, each version of the minimally viable product will be better than the last, until it at least reaches the level of minimum where it can be delivered.

Minimal valid postscript

Does the minimal viable product approach described here sound like other approaches? Does Agile come to mind? Most good approaches have similarities. It is exceptional when they do not.

A lot of this depends a bit on “perfect world” scenarios. In the real world, sometimes the work needs to forge ahead with assumptions while waiting for stakeholder review. This is where source control management comes into play. The important thing is to not become overly fond of assumptions, as they may prove incomplete or invalid after the stakeholder review and input. This may even happen frequently with new teams, but as the product owners and producers continue to work together, the gaps will become fewer and fewer. Again, I caution against complacency as those gaps narrow, as they will rarely go away completely. The goal for all is the best-functioning product that can be created given the capabilities available, even if that means the occasional solution refactoring.



Too big to survive: There is no bailout for technical debt

The only difference between technical debt and financial debt is that the costs are more often known in advance when taking on financial debt. Both types of debt are tools when used intelligently, with purpose and a plan to manage them, and both can take a devastating toll when used recklessly or imposed through misdirection or miscommunication.

Acceptable vs unnecessary debt

The original heading here was “Necessary vs unnecessary debt”. On further reflection, though, I realized that the only good reasons for incurring debt are time driven. If time is removed as a factor, there is no reasonable need for debt. So it becomes a question of when time is an important enough factor to make debt acceptable. The only context I can think of where time is universally an acceptable driver for debt is an emergency.

Beyond an emergency, the evaluation of whether debt is acceptable because of time becomes a value proposition. In our personal lives, the first car and house are generally considered good reasons to accept debt, because both have a large enough cost that they are likely to become more expensive over time, making it harder and harder to save for them in a reasonable period.

Similarly, building in-house custom applications rather than waiting for a Commercial Off-The-Shelf (COTS) solution, even though it will incur technical debt in minimally reviewed code and inevitable maintenance costs, is worth it for functionality that is key to business value. Having worked for software vendors, I can honestly say that if it isn’t already Generally Available (GA) at patch one or later, it should still be considered unavailable as a COTS solution.

The other common time driver that should generally not be an acceptable reason to take on debt is impatience. Using a home equity loan to buy the latest television is a poor financial decision and implementing a new solution without a thorough evaluation and proper training is a gamble that will usually result in higher maintenance cost or a potential system failure.

The old adage “patience is a virtue” is not only true, it is a vast understatement of the value of patience.

Stop debt before it happens

The reason technical debt is becoming an increasing concern at many companies is that it tends to grow exponentially, just like financial debt, and for the same reasons. Of the three drivers for debt mentioned previously (emergency, long-term value, short-viewed impatience), the most frequent cause is the least necessary: impatience. Problems arising from bad habits will grow until the habit has been replaced by actions that have a more positive effect.

Without getting too psychological here, impatience is the result of wanting very much either to move towards a reward or away from a loss. For some odd reason, the drive forward doesn’t seem to repeat in the same context nearly as much as the drive to move away. In technology, the drive to move away is so common that the three key emotions associated with escape-driven impatience have an acronym: FUD (fear, uncertainty, doubt). In the case of IT decisions, all three are essentially redundant, or at least a sequence: fear driven by uncertainty and/or doubt. When the decision is around taking on technical debt, the fear is that business owners or customers will be upset if the feature is delayed or reduced, and the uncertainty and doubt are the result of either not asking these stakeholders or asking only half the question.

Asking a stakeholder “Is it a problem if feature X is not in the release?” will frequently get a different answer than “Would you prefer we include feature X in a later release, or risk certain delays to all future feature releases by pushing it out before we have time to include it in a maintainable manner?” My experience is that most of the time neither question is asked, and it is just assumed the world will end if users don’t have access right now to a new option that only 3% will ever use. It is also my experience that when the tradeoff of reliability and stability versus immediacy is explained to stakeholders, they usually opt for the delay. I know many people believe that businesses have lost sight of long-term implications, and I believe that in many cases it is not because they are deliberately ignoring them, but because the people who should tell them when and why to be cautious are afraid of saying anything that will be considered “negative”.

To summarize, the best way to reduce the accumulation of technical debt is to have open, honest communication with stakeholders about when decisions involve technical debt, the consequences of that debt, and the options for avoiding taking it on. Then, if the decision is still to choose the right now over the right way, immediately request buy-in for a plan, timeline, and budget to reduce the technical debt. Again, my experience is that when the business is presented with a request to ensure functional reliability, they frequently say yes.

Getting out of unavoidable or accepted debt

Taking on some technical debt is inevitable. This is why the modifiers usually, most often, and frequently were used in the previous section rather than the more comforting yet inaccurate always, definitely, and every time. Even in a theoretically perfect process where the business always opts for debt-free choices and emergencies never happen, there are still going to be debt-inducing choices made, either from lack of information or from use of imperfect vendor releases.

In the case where the debt is incurred unknowingly, once it is discovered, be sure to document it, communicate it, and plan for its correction. In cases where the debt is taken on knowingly because avoiding it would carry a much larger cost, such as a vendor change, monitor the item with every project and, when there is a reasonable option to correct it, do so. I once had to build something a bit kludgey because the vendor application clearly missed an implication of how a particular feature was implemented. We created a defect in the defect tracker, which was reviewed in every release. Eighteen months later, the vendor found the error and corrected it, and we replaced the work-around with the better approach in the next release. For major enterprises it is a good idea to raise a support case with the vendor when such things are identified, which I did not do at the time because the company I was managing this application for was too small to get vendor attention and the feature was not in broad use.

(Originally published at InfoWorld.)



Why you need to change your monolithic architecture

In a perfect world the contents of this section belong at the end of the article, as part of a conclusion. But a key theme to this article is that there is a lot of unintentional imperfection in the world and one of those imperfections is a tendency for some to draw conclusions early, so I will start with the end and see if we can meet in the middle.

There will be people who strongly disagree with this article. There will be others who share the sense of epiphany I experienced formulating the outline, and probably more than a few who will have come to the same conclusion before this article was written.

For everyone else, I ask that you look at your own enterprise and decide for yourself if the architectural decisions that drive your IT solution are based on corporate culture more than the best way of providing business value.

The most commonly stated reasons to migrate

Skim the thousands of recent articles and community postings about enterprises adopting a new architecture or process (Microservices and DevOps are the buzzwords at the time of this writing, and I expect those will change several times before this article is no longer relevant) and the driver behind the move will generally translate with ease to one of the following:

  • Improved operational efficiency
  • Higher reliability
  • Faster time to market
  • Better support of business needs (arguably redundant to the first three items)

All of those are excellent reasons to change how things are done. Moving from the current way of doing things to the new way will yield those benefits in many (though assuredly not all) enterprises. I’ve been in this industry for over 25 years and here are some of the shifts that I have seen made for the exact same reasons:

  • Single tier to two tier architecture
  • Two tier to n-tier architecture
  • Fat client to thin client
  • Single server to redundant services
  • Redundant services to remote procedure calls (RPC)
  • RPC to Web Services
  • Thin client to app

Yes, Virginia, there are exceptions to every rule and observation

Every one of the above-mentioned shifts resulted in some level of success. And every one, except for the last (which I include because irony fascinates me), reflects a cultural shift towards distribution of overall responsibility, isolation of specific responsibility, and increased specialization. I can already hear the exclamations of “There is an increase in demand for full-stack developers, which refutes this observation.” I agree that more companies are looking for and hiring full-stack developers. I have also observed, with some delightful exceptions, that once the person is hired they are pushed into some type of specialization within a couple of years (often less).

The most frequent real reasons a change is needed

There was a behavioral study done almost 100 years ago that resulted in a concept known as the Hawthorne Effect, where changes in worker conditions increased productivity through the expectation of improvement rather than through the change itself (my spin on the conclusions, many of which are still being debated). When an enterprise architecture or IT process is changed, the result is similar.

There are many common examples of why a change is needed to achieve improvements, regardless of what that change is. Here are some that I have seen from working with dozens of different enterprises in several different industries.

The person that wrote that doesn’t work here anymore

My first few IT-related roles were as an FTE and a consultant for companies that were small enough where I was the sole IT resource involved. While I’m proud of the fact that some of my earliest applications are still in use over two decades later, it has dawned on me while writing this article that it may be simply because I had not learned to properly document applications back then and no one has been able to make any changes for fear of putting the company out of business.

I learned about the value of good documentation when I did my first project for a large multi-national manufacturing company, still as an independent consultant. I knew that I would be leaving these folks on their own with the application once my part was done, and that people hired after the project was complete would inherit the code and functionality without the benefit of any knowledge transfer meetings. At that time, I was not unique in providing this service as part of my work. What I have learned since is that, like myself when I first started, many full-time employees either see no need to document their work or do not know how.

In later years, many consultants either reduced or completely stopped providing documentation as a way to ensure more work or (to be fair) decrease costs in an increasingly competitive market.

The string that broke the camel’s back up

Even when best practices are followed in regards to simplicity and reuse for the first release of an application, by the Nth release/enhancement/bug fix the application can reach a state where attempts at any but the most minor modifications result in something else breaking. Did the team’s skill atrophy or is this a result of a less-capable team owning maintenance? No.

Fragility creeps into solutions over time because technical debt piles up. If “technical debt” is a new term for you, I strongly suggest reading up a bit on it. In short, like credit card debt, if it isn’t dealt with early and often it will increase until more effort is allocated to dealing with the problems than to the solutions that caused them.

A culture of identifying, documenting, and correcting potential issues and enhancements throughout the lifecycle of projects will extend the longevity of an application’s value and reduce IT costs by minimizing the frequency of technology refreshes driven by failing systems rather than by added business value.

String theory is an anti-pattern

Another heading could be “Spaghetti and hairballs”.  This driver to move is similar to the previously described scenario except it occurs at a lower level. The architecture may still resemble something that is comprehensible and even sensible, but some of the implementation code and configuration has become unmaintainable. Frequent causes of unmaintainable code are:

  • Changes in personnel with little, poor, or no documentation to reference upon inheritance.
  • Changes in personnel with plenty of documentation and no time allotted in the “project plan” to review it before diving in to the next set of “enhancements”.
  • No change in personnel and no time allotted for code reviews.
  • No change in personnel and no time allotted to address technical debt.

The common theme here is that haste makes waste. The irony is that the haste is always driven by a desire to reduce waste (or perceived waste in the form of costs associated with the activity that would have actually prevented the waste).

Growing Pains

Earlier I mentioned some of the transitions that I have experienced first-hand.  Here is the list again for context:

  • Single tier to two tier architecture
  • Two tier to n-tier architecture
  • Fat client to thin client
  • Single server to redundant services
  • Redundant services to remote procedure calls (RPC)
  • RPC to Web Services
  • Thin client to app

A side-effect of each of these is that they tended to increase the number of teams necessary to build and maintain solutions. By itself, the sharing of responsibility is a good thing. Efficiencies can be realized by having teams focused on specific areas as long as both technical and human interfaces are aligned to support the same goals. Unfortunately, cultures of competition and departmental isolation can also result from the same growth, resulting in a focus to improve efficiency at the expense of the original goal.

Your true story here

I would be flabbergasted if I have exhausted the causes here and would really enjoy it if you were to add your own experiences in the comments section for inclusion in future revisions of this article.

How to delay the changes until they are needed

The phrase “Wherever you go, there you are” applies just as aptly to migrating from one IT solution set to another as it does to trying to leave your troubles behind by relocating. If all of the bad patterns come along for the ride, the new will surely resemble what was just left behind sooner or later.

To be both fair and clear, most (if not all) of the common issues enterprises face today that drive them to move to a new platform to resolve their issues did not crop up because someone deliberately sabotaged the processes…they came about because the intention behind a move in the right direction at some point was forgotten and only the motion was left.

Documentation started falling by the wayside, driven by two trends. The first was more intuitive user interfaces that required minimal or no documentation. This was a great idea with the best of intentions. However, some of the results of this trend are not so great, usually with the end users being the ones to suffer. Many open source projects ditched documentation by initially simplifying their interfaces. As the projects became popular, books, paid consulting, and ad-supported blogs became much more lucrative than documenting the now more complex versions. Since people were used to the software not having documentation (because it originally didn’t need it), this became acceptable.

Within the enterprise, the adoption of Agile practices and the philosophy of documenting no more than necessary eventually evolved into little or no documentation, both because the skills to properly document atrophied and because budget-pressured management convinced themselves it was no longer needed. While I am probably the most vocal about documentation problems resulting from fractionally resembling Agile (frAgile for short), many long-standing Agile proponents have recently been calling BS on how enterprises claim to adopt Agile while actually destroying it by calling what they are doing Agile (or Extreme Programming or Scrum, etc.). Two example posts are The Failure of Agile and Dark Scrum.

Ideally, make it part of your project process to capture opportunities for improvement and to document any technical debt knowingly incurred. Additionally, make it part of your SDLC to review the backlog of technical debt and technical enhancement recommendations at the start of project planning, and make it mandatory to budget for reducing some level of debt and/or including some improvement.
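As a sketch of what that backlog review might look like in practice, debt items can be kept as structured records and surfaced automatically at project planning. The record fields and the review rule here are illustrative assumptions, not a prescribed format; most teams would keep this in their existing tracker rather than in code:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DebtItem:
    """One knowingly incurred technical-debt record (fields are illustrative)."""
    summary: str
    incurred: date          # when the work-around shipped
    reason: str             # why the debt was accepted
    blocked_by_vendor: bool = False

def items_for_planning(backlog, today, max_age_days=365):
    """Return debt items to raise at project planning: anything not blocked
    by the vendor, plus anything older than max_age_days regardless of
    blockers (so long-lived work-arounds are never silently forgotten)."""
    due = []
    for item in backlog:
        age = (today - item.incurred).days
        if not item.blocked_by_vendor or age > max_age_days:
            due.append(item)
    return due

backlog = [
    DebtItem("Kludge around vendor feature gap", date(2018, 3, 1),
             "vendor missed an implication of the feature",
             blocked_by_vendor=True),
    DebtItem("Hard-coded config pending refactor", date(2018, 6, 1),
             "release deadline"),
]

for item in items_for_planning(backlog, today=date(2018, 9, 1)):
    print(item.summary)  # → Hard-coded config pending refactor
```

The point is the mandatory review step, not the tooling: once the vendor-blocked item passes its age threshold, it shows up at planning whether or not anyone remembers it.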

Alternatively, what I have done for most of my consulting career is to keep a running catalog of such items throughout the project. Towards the end, I assemble my notes into a single document (occasionally happily checking off items that were addressed before project completion) as a handoff to management at the completion of each project. Later, I re-circulate the document prior to any follow-on projects.

I’m optimistic enough to expect that there will eventually come a time when this article is no longer relevant, and cynical enough to doubt that it will happen in my lifetime. The way I cope with this is to do things as best I can with the resources I can muster and continue to write articles like this to remind people that technology was supposed to make things simpler and easier so that we could spend time focusing on more interesting problems. Please share your coping mechanisms in the comments section.

Originally published at InfoWorld.

