Salesforce Native vs App vs Connector

 

Fair warning: This is more about not having written anything in a while than the value of the topic…and the subject matter is more about drawing your own conclusions than relying on what is easily available, so…

App is one of the most over-used and ill-defined terms in the IT lexicon, largely because it is used by people outside the IT domain. The domain itself has had some whoppers, like the DHTML that was a must-have at the turn of the century even though the only honest definition of the term was that it had no real definition. Microservices runs a close second, simply because there is an invisible grey line between SOA and microservices that is a mile wide and an inch short. But I digress, as is often the case.

What I’m really thinking about today is apps in the world of Salesforce.com. Specifically, apps that run inside the Salesforce CRM platform. I started thinking about this because I was looking into CPQ vendors over the weekend to refresh myself on the market to support a project proposal to select the best option for a particular business. It’s a large space, so it always helps to find someone else’s list to start with and someone had given me a list from a major analyst group as that starting point.

Other than analysts, no one likes long lists with lots of details, so I first wanted to narrow it to those that integrated with Salesforce. It didn’t take me long to remember that Salesforce is the gold standard for CRM: there were only two that didn’t. I didn’t go through the whole list to get to that count because I’ve done these kinds of evaluations before and figured out after the first half dozen that this was not how I was going to narrow the list. The two were just what I noticed while skinning this cat another way.

The first trimming of the list was by industry focus. The potential client is a tech service, sort of SaaSy, and “High-tech products” was one of the categories, which was much closer to what they did than “Financial services” (though they have customers in that domain) or “Industrial products” (which the analyst seemed to think usually included high-tech, though not sure why).

To spare you the tedium of the several hours of wading through thousands of lines of marketing prose that could have been delivered in a table (ahem, yes, I know, kettle, black, etc.), from the perspective of Salesforce CRM integration alone I found it useful to divide them into three basic styles:

Native: An application that is built entirely in Salesforce
App: An application that runs inside Salesforce but depends on data and/or functionality managed outside of Salesforce.
Connector: An application that runs independently of Salesforce and has a way to share data with Salesforce.

The terms for these distinctions change over time and between sources. These definitions are here to clarify the table below and are purposely simplified, as the deeper distinctions are less relevant to integration than to other aspects.
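As a concrete (if simplified) illustration of the connector style, an external application typically pushes records into Salesforce through its REST API. The sketch below only composes the request rather than sending it; the instance URL, object name, and field names are invented for the example, while the upsert-by-external-ID endpoint shape follows the public Salesforce REST API conventions.

```python
# Illustrative "connector" style integration: an external application
# preparing a Salesforce REST upsert (create-or-update by external ID).
# Nothing here is sent over the wire; we only build the HTTP pieces.

import json

def build_upsert_request(instance_url, object_name, external_id_field,
                         external_id, record):
    """Compose the method, URL, headers, and body for a Salesforce
    REST upsert of one record, keyed by an external ID field."""
    url = (f"{instance_url}/services/data/v58.0/sobjects/"
           f"{object_name}/{external_id_field}/{external_id}")
    # A real call would also carry an OAuth bearer token header.
    headers = {"Content-Type": "application/json"}
    body = json.dumps(record)
    return "PATCH", url, headers, body

# Hypothetical custom object and field names, for illustration only.
method, url, headers, body = build_upsert_request(
    "https://example.my.salesforce.com", "CPQ_Quote__c",
    "Quote_Id__c", "Q-1001", {"Amount__c": 25000})
```

A native application would skip this entirely (the data never leaves the platform), while an app-style product would make similar calls in the other direction, from inside Salesforce out to the vendor’s service.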

In this particular exercise, the ask was to provide some pros and cons to these different styles. My style being one of adapting general terms to technical solutions, I responded with a non-exhaustive list of Benefits and Concerns:

Integration Styles: Benefits and Concerns

Native
Benefits
  • Easily accessible in the sales process context.
  • Seamless integration with other native apps.
  • Has gone through Salesforce security review.
  • No data latency.
Concerns
  • May require additional Salesforce licensing.
  • May have impacts on storage limitations.
  • Frequently limited functionality.

App
Benefits
  • Easily accessible in the sales process context.
  • Access is managed within Salesforce.
  • Has gone through Salesforce security review (only if installed through AppExchange).
Concerns
  • Support may require coordinating the vendor and Salesforce.
  • High potential for latency.
  • Difficult to troubleshoot.

Connector
Benefits
  • Control over storage impacts.
  • Broader range of vendors to choose from.
Concerns
  • Users must use multiple applications.

Of course, the next question is usually “which is best?”, and I must respond with the architect/consultant/writer-needing-higher-word-count answer: “it depends”. And it depends on lots of things, such as: who will be maintaining the solution; how capex and opex are prioritized and managed; how different stakeholders actually need to interact with the solution; and whether it is clearly understood that this is only one aspect of a vendor selection process, and that all known aspects must be documented and weighted before giving a recommendation.

The real reminder for me when I finished this brief analysis was that context is everything when doing any type of evaluation. The list I started with included products that were questionable as to whether they really belonged in the report, and many of the products were listed as serving domains that the vendor’s site never mentioned, with no compelling reason why the unmentioned domain would want to use them. If I had direct access to the author(s) I might learn something by asking, but the important thing is that I used their input only as a starting point and applied my own analysis, because when the recommendations are provided to a client, those authors’ names will not be on the agenda and they will not be there to answer the questions that hadn’t yet been thought of.

© Scott S. Nelson

5 Steps to Enterprise Privacy 

Some of my opinions on how to deal with privacy regulations concluded with a five-step process for managing the technical aspects, recently published at https://www.logic2020.com/insight/tactical/5-step-technical-approach-privacy-protection.

© Scott S. Nelson

The future is cloudy with a chance of success

(Originally published at InfoWorld.)

I would have titled this post “How to be a rainmaker in the cloud” except that the term rainmaker often refers to the selling process, which is already succeeding, and that success is a key contributing factor to why so many cloud initiatives are all wet. If nothing else, the popularity of cloud services has made the use of metaphors in technical articles much easier!

In this post I want to talk about some of the slipperier aspects of cloud services: how to reap the most benefits, and ways to identify potential pitfalls before a sinking feeling sets in.

AI, big data and the cloud

A great expression going around about AI and big data is that they are “…like teenage sex: everyone talks about it; nobody really knows how to do it; everyone thinks everyone else is doing it; so everyone claims they are doing it”. I want to add that most of those that are doing it are not having nearly as much fun as they could be. The same can be said for the cloud, even though it has been around a lot longer and is relatively more mature. And many people actually already have it, though they might not know it, so maybe it is more like insanity.

These three technologies were all drivers for each other. The ease of getting started in the cloud (the quality of which we will ignore for now) was a multiplier for the data accumulation that had already been growing exponentially. The amount of data was so big it needed a separate category of, well, big. Then, trying to manage that much data while it was still of business value (or even to determine whether it was of value) quickly became too much for human manipulation or even basic algorithms. So more complex algorithms were created, followed by algorithms that create algorithms, and then AI became a battle cry to save us all from drowning in the data lakes (that we probably wouldn’t have created if we had true AI to start with).

The lack of quality in those initial cloud forays, combined with the ease of getting into the cloud, is what led to so many being overwhelmed with data. The truth is most cloud initiatives that have real business value are still in their early stages. The early adopters (now trying to hire more data scientists than there are in order to help dam the floods) have given the rest of us a great example of what not to do. So let’s use their hindsight to build our vision.

If you don’t know what it is, don’t handle it with familiarity

Anyone with kids or animals learns fairly early that they should never pick anything up from the floor bare-handed if they are not positive of what it is. Technology deserves the same wary respect. If someone says “everyone is moving to the cloud”, I know they are either misinformed or not being truthful (sometimes both). First, because everyone rarely does anything at the same time, no matter how much marketers and salespeople tell us otherwise. And second, because lots of us have been there for years and just didn’t think of it that way. Financial services is one very common area where storage, processing, and source-of-truth data have been entrusted by customers to the service provider and accessed over the internet since internet was spelled with a capital “I”.

The truth is, there are many different types of cloud services, and the value of a given service and provider varies according to the enterprise needs. I know this is really obvious, and what I am calling out here is how often it is forgotten after seeing a competitor’s press release or the conclusion of a well-rehearsed and intricately orchestrated sales demo. There is no doubt in my mind that cloud services will benefit most (maybe even all) businesses. But not every type of cloud service is needed by every business, and how one business uses a specific service is not how every business should use it, even if they would benefit from the same service.

Data storage is a great example, because it has become a universal need. There are businesses that can and should keep all of their data in cloud services. The plural is on purpose, because if you are going to commit your entire business to the cloud you need to have some kind of backup. On the opposite end of the spectrum there are many small companies that have neither the volume nor the budget to properly maintain all data safely in the cloud. In the middle (and the latter example is far more common than the first) are the majority of businesses, which will benefit from storing certain types of data in the cloud and applying both MDM and synchronization to ensure availability and continuity.
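To make that hybrid middle ground concrete, here is a toy sketch of keeping a cloud copy of selected record types synchronized from an on-premise master that remains the source of truth. The record shapes and the dict-based “stores” are invented for illustration; a real implementation would sit on top of an MDM product and actual storage services.

```python
# Toy illustration of one-way synchronization: replicate chosen record
# types from a master store to a cloud store, with the master winning
# on any conflict. Plain dicts stand in for real storage back ends.

def sync_to_cloud(master, cloud, replicate_types):
    """Copy master records of the chosen types into the cloud store,
    then drop cloud records the master no longer has."""
    for key, record in master.items():
        if record["type"] in replicate_types:
            cloud[key] = dict(record)   # master is the source of truth
    for key in list(cloud):
        if key not in master:
            del cloud[key]              # remove orphaned cloud copies
    return cloud

# Hypothetical records: replicate customers to the cloud, keep orders local.
master = {
    "c1": {"type": "customer", "name": "Acme"},
    "o1": {"type": "order", "total": 99.0},
}
cloud = {"stale": {"type": "customer", "name": "Gone"}}
sync_to_cloud(master, cloud, {"customer"})
```

The design choice worth noticing is that the replication policy (`replicate_types`) is data, not code: deciding *which* types of data belong in the cloud is exactly the business-and-IT conversation the paragraph above is about.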

The green field is not always greener

The biggest hurdle in guiding enterprise technology is not getting buy-in on new technology; it is managing the false sense of security that comes with having been convinced to adopt the new technology.  

Part of the reason stakeholders expect the latest cloud offering to save the planet is over-selling on the part of IT, vendor sales and marketing, and industry hype (which is greatly fueled by vendor marketing, in much the same manner as a recursive process with a bug).

Another equally significant part is human nature. Once a decision has been made or a belief adopted, the subconscious mind will often vigorously defend that decision or belief whenever it is challenged, deepening the associated conviction. This is why we so often see a mass movement towards solutions as they gain popularity, even when the solution is not what is called for. People who want to slow or redirect the changes for very good reasons seldom take the psychology of newly adopted convictions into account, and the result is a more energized drive in a direction that may not be right (or not right at the time).

Measure twice, cut once…every single time!

To avoid the rush to greener cloud pastures, IT and business need to work together to define and agree on business and technical goals. Once the goals are agreed to, an analysis of how a given cloud solution will further those goals should be supported by a pilot or proof of concept before any large commitment is made. The analysis must vigorously seek out any side effects with a negative business impact, in addition to assessing how the goals are met (or not). Finally, if the solution is adopted, the analysis needs to be reviewed and revised for every subsequent case. Again, no rocket science or epiphany here, just common sense that is not so common as to not benefit from repetition.

Walk, don’t run, to your nearest gateway

To offset the cautionary tone of this post, it bears repeating that there are many benefits to cloud services, and some combination of cloud services will very likely benefit any enterprise. The key to realizing the most benefits (and dodging the potential pitfalls of a bad combination) is the kind of deliberate, goal-driven evaluation described above. The forecast for enterprise architecture is increasing clouds. Enjoy the shade and keep to high ground away from potential flood damage!

© Scott S. Nelson

Maximize ROI with MVP

(Originally published at InfoWorld.)

I prefer to write about things that have either not been written about previously or where I think the value is still being missed. This article meets the latter criterion, given that the term minimum viable product was coined in 2001 (according to Wikipedia). Like many patterns and processes related to technology, there is more to the use of MVP than the name implies.

Minimum is for targeted effort

The M in MVP is often misconstrued as the minimum to go to market at the start of the effort, though it is more suited to the end of the effort. The definition of minimum should evolve through the life-cycle of design and development.

If you will accept that all functionality is moving something from one point or state to another, then the absolute minimum is being able to get from start to finish. So at the start of an iterative design and development process for a feature or product, this should be the first goal; go no further until the results are reviewed by the product owner and/or users. Another way to look at this first value of minimum is: sufficient for demonstration and discussion.

Once the absolute minimum has been achieved, additional criteria can be added in. The additional criteria go beyond the bare minimum to accomplish the change in value or state, and there can be many such requirements. These requirements must be prioritized by the key stakeholders and then delivered singly unless (in very rare cases) multiple requirements are interdependent. The reason interdependency should be rare is that the requirements should be stand-alone. They may need to be done in a particular order, which should be considered when determining the priority.
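The prioritize-then-deliver-singly rule above can be sketched in a few lines. The requirement names and priority values are made up for illustration, and a real backlog tool would carry far more state; the point is just that each delivery batch holds one requirement unless an interdependency forces bundling.

```python
# Sketch of MVP-style delivery sequencing: requirements ship one per
# batch in priority order, and a requirement with an unshipped
# dependency ships together with that dependency.

def delivery_order(requirements):
    """Return a list of delivery batches from prioritized requirements.
    Each requirement is a dict with "name", "priority", and an
    optional "depends_on" list of other requirement names."""
    batches, shipped = [], set()
    for req in sorted(requirements, key=lambda r: r["priority"]):
        if req["name"] in shipped:
            continue  # already bundled with an earlier dependent
        batch = [d for d in req.get("depends_on", []) if d not in shipped]
        batch.append(req["name"])
        shipped.update(batch)
        batches.append(batch)
    return batches

# Hypothetical backlog, loosely echoing the call-center script example.
reqs = [
    {"name": "happy-path flow", "priority": 1},
    {"name": "branching logic", "priority": 2},
    {"name": "response export", "priority": 3,
     "depends_on": ["branching logic"]},
]
order = delivery_order(reqs)
```

Because “branching logic” has already shipped by the time “response export” comes up, every batch here ends up holding a single requirement, which is exactly the common case the text argues for.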

In a recent design session, a new feature was required for call center users to follow a script and record responses, with the script branching at points based on answers given through the process. Some participants wanted to start from the assumption that this functionality would be re-used in other processes and begin with a generic approach, even though the expectation was that any such reuse would be far in the future. It is important to acknowledge the potential for reuse and avoid approaches that would prevent it or make it overly complicated. That said, it adds nothing to the initial solution to genericize without knowing what the future requirements are. It only adds to the level of effort in producing the first MVP for stakeholder review and getting to the first production-ready MVP. In this case the difference would have been a couple of weeks in a project already behind schedule.

Viable must be based on agreement

I can think of several well-known enterprise-technology products that have a terrible user experience. For me, personally, I feel they miss being optimally viable, though I have to admit they are minimally functionally viable. That said, I’m neither the product owner nor the key stakeholder (let’s admit, that is the person paying for the license and not the actual user) and cannot honestly say whether the standards of viability were met.

The most important part about the previous paragraph is acknowledging that it is not my role to determine viability. I can (and should) provide my input about what I think is important about viability, but the ownership belongs to the product owner and (sometimes, and maybe not as often as it should be) the key stakeholders.

Another often-forgotten aspect of viability is the other side of the coin: the product must be maintainable. Product owners often insist on functionality that is difficult to maintain. In some cases this is an acceptable trade-off, and in other cases the maintenance cost outweighs the business value, which impacts viability from anyone’s point of view. My experience is that product owners asking for high-maintenance features generally do not know that is what they are asking for, and that oftentimes it is the timeline more than the possible solutions that makes it a maintenance issue. Delivering products using an MVP design approach is also about continuous communication between owners, designers, and developers. If any one of those roles works without both giving and receiving input, the project is in peril.

Product should be plural

Because minimal is an evolving criterion and viable is the result of consensus, a single outcome, i.e., a product that is shippable on the first iteration, is extremely rare, with the possible exception of cases where the product is a very simple addition to an existing product.

By being willing to iterate and refactor, each version of the minimally viable product will be better, until it at least reaches the level of minimum at which it can be delivered.

Minimal valid postscript

Does the minimal viable product approach described here sound like other approaches? Does Agile come to mind? Most good approaches have similarities. It is exceptional when they do not.

A lot of this depends a bit on “perfect world” scenarios. In the real world, sometimes the work needs to forge ahead with assumptions while waiting for stakeholder review. This is where source control management comes into play. The important thing to remember is to not become overly fond of assumptions, as they may prove incomplete or invalid after the stakeholder review and input. This may even happen frequently with new teams, but as the product owners and producers continue to work together, the gaps will become fewer and fewer. Again, I caution against complacency as those gaps narrow, as they will rarely go away completely. The goal for all is the best-functioning product that can be created given the capabilities available, even if that means the occasional solution refactoring.


© Scott S. Nelson