Salesforce Community with Custom Objects in a Flow

Last year I wrote about Revisiting the Question of Build versus Buy for Web Portal Solutions, which was greatly inspired by the reduction in solid portal offerings that began about a decade ago. The dearth of offerings is a result of acquisitions by vendors who prefer that portals work only (or, at least, easily) with their own products. While I’m not a big fan of architectures that lead to vendor lock-in, there are situations where the portal is focused on exposing a certain view of a specific application, and it is best if the portal is built by the same vendor as the application itself. Case in point: Salesforce Community Cloud.

Salesforce has been rapidly improving their Community Cloud product with more features and easier management. I believe this is fueled partly by the demise of pure-play portals that could surface Salesforce functionality using the great Salesforce APIs (Liferay being the exception), and partly by a smart move to increase Salesforce revenue while reducing customer costs for high API traffic. Whatever the drivers, I like where it is going. Though, like any technology on a rapid growth curve, there are some bumps along the way. One such bump is puzzling out all of the different permissions involved to provide various functionalities. Generally, these bumps are easily overcome thanks to the active SFDC user community. Recently I ran across something that was not so easily solved and thought I would share it here.

The requirement is to create a workflow that ties together the standard Case object with a custom object. The goal is to improve the user experience of a long form created previously to support the custom object by breaking its numerous fields into user-friendly chunks, with the end result that a Case is created along with the custom object as a related object. With the normal amount of “oops” and “darns”, I managed to put together a Visual Workflow that takes the inputs, creates the Case object, and then creates the related custom object.

A Visual Workflow to take inputs and create a Case object and related custom object
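As a rough sketch of what the flow’s two create steps amount to, the first builds a Case record and the second builds a related record that points back to the Case via a lookup field. The object and field names below (Intake_Form__c, Case__c, Details__c) are hypothetical stand-ins for illustration, not the actual names from this project:

```python
# Sketch of the two records the flow creates. All object and field
# names here are illustrative placeholders, not from the real project.

def build_case_payload(contact_id, subject):
    """Payload for the Case record the flow creates first."""
    return {"ContactId": contact_id, "Subject": subject, "Origin": "Web"}

def build_custom_object_payload(case_id, form_fields):
    """Payload for the related custom object (e.g., Intake_Form__c),
    linked back to the new Case via a lookup field."""
    payload = dict(form_fields)
    payload["Case__c"] = case_id  # lookup relating the record to the Case
    return payload

case = build_case_payload("003XXXXXXXXXXXX", "Portal intake")
related = build_custom_object_payload("500XXXXXXXXXXXX", {"Details__c": "chunked form input"})
```

The point of the second payload is the lookup field: it is what makes the custom object show up as related to the Case.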

I then created a Community page in a community using the Lightning Community template Napili and placed a Flow component configured with the new workflow.

Adding a Visual Workflow to a Lightning Community

The first bump I hit was that the Case failed to be created. From the various Salesforce user communities, I easily figured out this was because I was using the wrong ID for the contact (the User ID is different from the Contact ID for community users). Once this was fixed, I found that the custom object was not being created. Back to the trusty community posts, where I found the sage advice to grant the profile referenced in the Community Builder Settings access to the object. Well, not so sage, as in my haste I forgot that that field only applies to “guest” users, i.e., users not logged in.
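The wrong-ID trap is easier to catch once you know that every Salesforce record ID begins with a three-character key prefix identifying the object type: “005” for User, “003” for Contact, “500” for Case. A minimal sanity check, sketched in Python with made-up IDs:

```python
# Salesforce record IDs start with a three-character key prefix that
# identifies the object type. Checking it catches the common mistake of
# passing a User ID (005...) where a Contact ID (003...) is required.

PREFIXES = {"005": "User", "003": "Contact", "500": "Case"}

def id_object_type(record_id):
    """Return the object type implied by a Salesforce ID's key prefix."""
    return PREFIXES.get(record_id[:3], "Unknown")

def assert_contact_id(record_id):
    """Fail fast if a non-Contact ID is passed where a Contact ID is needed."""
    if id_object_type(record_id) != "Contact":
        raise ValueError(
            f"Expected a Contact ID (003...), got prefix {record_id[:3]}"
        )
```

A guard like this at the start of the flow (or in supporting Apex) would have surfaced the problem immediately instead of as a silent failure.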

To make a very long story short, the default profile granted to customer community users did not have permission to access the object (even though, during creation, I granted the necessary access to all profiles), and the default profile is read-only. The fix was to clone the default community user profile, grant that profile the necessary permissions, and then use that profile for my community users. Who then could not log in! Oh, yes: even though everything about the default profile was cloned, the new profile still needed to be granted access to the community using the Community Administration view.

Granting access to a new profile in the Community Administration view.

And, finally, the workflow could flow to the end.

Lightning Community Workflow Complete View

Originally published at InfoWorld

If you found this interesting, please share.

© Scott S. Nelson

From Agile to Fragile in 60 sprints

Feature image by Elisa Kennemer on Unsplash

The adoption of agile software development methodologies has been a necessary evolution to support the explosive demand for new and expanded capabilities. There is no doubt that without the broad adoption of agile practices, much of the growth in technology, and all of those aspects of everyday life that are driven by technology, simply would not have happened.

Still, the adage about too much of a good thing applies. Another old adage that comes to mind is “You can have it better, cheaper, faster. Pick any two.” Many organizations have insisted on all three. How did they do it? They sacrificed the documentation.

I’m not talking about saving shipping costs and trees by making manuals virtual, and then saving bandwidth by replacing the documents downloaded with the install files with links to online documentation (which has its own issues in this world of massive M&A). I’m talking about all those wonderful references that development teams, sometimes backed by technical writers, produced so that others could pick up where they left off to maintain and enhance the final applications. Yes, that documentation.

Self-Documenting Code does not make a Self-Documenting Solution

While no one can honestly disagree with the value put forth in the Manifesto for Agile Software Development: “Working software over comprehensive documentation”, I also don’t think the intention was that documentation impedes working software. Still, the manifesto has fed the meme (the original definition, not the funny GIFs) that “Good code is self-documenting”. When I hear this, my response is “True; and knowing what code to read for a given issue or enhancement requires documentation”. My response lacks the desired impact for two reasons: it doesn’t easily fit on a bumper sticker, and it requires putting time and effort into a task that many people do not like to do.

The danger of little or no documentation is that the application becomes dependent on “tribal knowledge”. In a perfect enterprise, this is a dependable approach because employee turnover is low, and when people do depart they always do so with adequate notice and thoroughly train their replacements. I have heard these enterprises exist, though I have never spent any time working with one of them. I did, however, recently work with a business intelligence group whose entire ETL staff departed within a few weeks of each other after a few years of furiously building hundreds of data integrations in a dozen different business areas, and then spent less than 9 hours in “knowledge transfer” sessions with my team, which was tasked with keeping the lights on until a new crew was hired and trained. There was not one page of documentation at the start of the knowledge transfer, and I have yet to find a line of documentation in any of the code.

I’m not advocating the need for waterfall-style detailed design documents. In some ways, those can be worse than no documentation because they are written before the code and configurations they are intended to describe are created and fail to be updated when the actual implementation deviates. In an agile world, writing the documentation after the implementation is a sound approach that will support the manifesto value of “Working software over comprehensive documentation” by being just enough documentation to facilitate maintaining the software in the future.

Meeting between the Lines

How much is just enough? That is going to vary by both application (and/or system) and enterprise. Some applications are so simple that documentation in the code to supplement the “self-documenting” style is sufficient. More complex solutions will need documentation to describe things from different aspects, and the number of aspects is affected by whether maintenance is done by the development team or a separate production support group. The litmus test for whether your documentation is adequate is to take a look at it from the perspective of someone who has never heard of your application and needs to be productive in maintaining or enhancing it in less than a day. If you have difficulty adopting that point of view (many people do, and twice as many developers), have someone outside your team review the documentation.

I find the following types of documents to be a minimum to ensure that a system can be properly managed once released to production:

  • Logical System Architecture
  • Physical System Architecture
  • Component Relation Diagrams
  • Deployment Procedures

Again, the level of detail and need for additional documentation is going to be driven by complexity and experience. Another factor is how common the relevant skills are. If the candidate pool for a particular platform or framework is shallow, more detail should be provided to act as a springboard for people who may be learning the technology in general while diving into the particular implementation.

Yes, there are Exceptions

Conversely, some solutions are true one-offs that are filling a very specialized need that is unlikely to evolve and may have a short lifespan. These implementations only really need sufficient reference to migrate them to another environment or decommission them to free up resources without negatively impacting other systems. I do caution you to really be sure that an application falls into this category before deciding to minimize the documentation. What comes to my mind when I think of such decisions is the massive amount of resources dedicated in 1999 to dealing with two-digit years in applications that were not expected to still be in use when they were developed 10 or 20 years earlier.

A Final Appeal

At the beginning I agreed with the manifesto value of working code prioritized over comprehensive documentation. In the days when most software life cycles began with tons of documentation, and meetings to review the documents, and meetings to review the results of the review, a great deal more beneficial build and test activity could have been done in that time instead. My experience in documenting the results of agile and other iterative processes toward the end of the development cycle, and then reviewing that documentation with people outside the team, is that design flaws are discovered when looking at the solution as a whole rather than at the implications of individual stories in a sprint. The broader perspective that waterfall tried to create (and often failed to, since most waterfall documentation does not match the final implementation) can be achieved better, cheaper and faster when documenting at the end of the epic. In this one case, picking cheaper and faster yields better.

Documenting the fruits of your software and application implementation labors may not be the most exciting part of your team’s work, but the results of not documenting can become the most painful experience for those that follow…or your next gig!


Originally published at InfoWorld


© Scott S. Nelson

5 easy steps to install custom components in Salesforce Trailhead playground orgs

Salesforce Trailhead trainings are a great way to learn Salesforce. Some of the Hands-on Challenges require installing components. If you are using a Developer org to run these, the instructions are easy to follow. However, if you are using a Trailhead Playground org, it is kind of a pain to install some components. There is a link thoughtfully provided by Salesforce for how to do this in eleven steps, which I find a bit too time-consuming and confusing. I have found a slightly different approach that seems (at least to me) simpler. I will leave it to you to decide which you prefer.

The instructions for installing the component (as in the screenshot below) will often be provided well before the challenge, along with the hint to avoid the frustration of trying to log into a Playground org when prompted by the standard component installation URL.

Example Trailhead Package Instruction

Step 1: Decide for yourself whether you will read through the full lesson or skip right to the challenge. When you get to the challenge, open your Trailhead Playground org in a new window by right-clicking on the Launch button (as pictured below).

Open your Trailhead Playground org in a new window by right-clicking on the Launch button

Step 2: Log in to your Trailhead Playground org.

Step 3: Go back to the lesson screen and copy the component installation URL without the domain, i.e., “packaging/installPackage.apexp?p0=04tj0000001mMYP”

Copy the component installation URL without the domain

In some cases the installation instructions will have a link without the URL shown on the page. In this case, right-click on the link and copy the link target to get the path, pasting it into a text editor to extract the portion following the domain.

Right-click on the link and copy the target to get the path
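Whether you copy the path from the page or extract it from a link target, the trimming amounts to keeping everything after the domain. The same operation can be sketched in Python with the standard library (using the package ID from the screenshot above):

```python
from urllib.parse import urlparse

def install_path(package_url):
    """Strip the scheme and domain from a package installation URL,
    keeping the path and query string to paste after your Playground
    org's domain."""
    parts = urlparse(package_url)
    query = "?" + parts.query if parts.query else ""
    return parts.path.lstrip("/") + query

path = install_path(
    "https://login.salesforce.com/packaging/installPackage.apexp?p0=04tj0000001mMYP"
)
# path is "packaging/installPackage.apexp?p0=04tj0000001mMYP"
```

This is just the mechanical version of steps 3 and 4; in practice a text editor works fine.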

Step 4: With the package path in your clipboard, paste it after the domain name of your Playground org in the window you have already logged into and press Enter.

With the package path in your clipboard, paste it after the domain name of your Playground in the window you have already logged into and press Enter.

Step 5: Once the installation screen comes up, you can continue as instructed in the Trailhead lesson.

The installation screen comes up, and you can continue as instructed in the Trailhead lesson.

I have tested this on both Chrome and Firefox running on Windows 7. Your results may vary with a different combination of browser and OS.


Originally published at InfoWorld

01/27/2018 Update: The new Salesforce Trailhead UI may take you to the log in page. No worries. Copy the login URL and strip the characters before and including “startURL=”, paste the result into https://www.urldecoder.org/, and strip all of the characters including and following the first “&” to get the package URL.
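Those manual steps can also be scripted with the standard library, which does the same job as the urldecoder.org round trip. A sketch (the login URL in the comment is a made-up example following the usual Salesforce pattern, not a real one):

```python
from urllib.parse import unquote

def package_url_from_login(login_url):
    """Recover the package installation path from a login URL:
    keep everything after "startURL=", URL-decode it, then drop
    anything from the first "&" onward."""
    encoded = login_url.split("startURL=", 1)[1]
    decoded = unquote(encoded)
    return decoded.split("&", 1)[0]

# Example (hypothetical login URL):
# package_url_from_login(
#     "https://test.salesforce.com/?startURL=%2Fpackaging%2FinstallPackage.apexp%3Fp0%3D04tj0000001mMYP&ec=302"
# )
```

The split/decode/split order matches the manual steps above, so any extra login parameters after the first “&” are discarded.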


© Scott S. Nelson

Finding and Fixing the Cause of 0 Byte Files from FTP Source Events in ICRT

Ran into this recently and wanted to share it with my loyal reader… An Informatica Cloud Application Integration solution that worked wonderfully through months of UAT, and flawlessly for the first few weeks in production, suddenly produced complaints from stakeholders because the files output as the end result of the process were empty.

Looking at the process logs, there were no errors shown. Every technologist’s favorite finding (sarcasm, for those not blooded in debugging production issues).

The application functionality is relatively simple: listen for new files in a remote folder via SFTP, download the files and rename them for processing by a Data Integration Mapping Configuration Task (MCT), then run the MCT. I’ll spare you the details of trying to get the details of the missing files, except that once they were obtained, the pattern I noticed was that the missing files were the largest in the brief history of the application. Not much more deduction was required to figure out that in the source environment the files were being written directly to the outgoing folder rather than being created elsewhere and dropped in when complete. The larger files took longer to write than the polling interval, so they were being brought down in name only (pun intended).

It took a while of reading and re-reading the documentation for the FTP connector to determine that the best strategy for this issue was to use the File Read Lock Settings for the event source, setting the Read Lock value to “changed” and running tests to get the correct values for the other File Read Lock Settings.

ICRT FTP File Read Lock Settings
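For readers who want the gist of the “changed” read lock: it only treats a file as ready when its size stops changing between polls, so a file that is still being written is skipped until the writer finishes. This is not the connector’s actual implementation, just a Python sketch of the idea against a local path:

```python
import os
import time

def wait_until_stable(path, interval=2.0, checks=2):
    """Treat a file as ready only after its size has been unchanged for
    `checks` consecutive polls, `interval` seconds apart. Mimics the
    idea behind a "changed" read lock; tune interval/checks the same
    way the other File Read Lock Settings need tuning."""
    last_size = -1
    stable_polls = 0
    while stable_polls < checks:
        size = os.path.getsize(path)
        if size == last_size:
            stable_polls += 1
        else:
            stable_polls = 0  # still growing; start counting over
            last_size = size
        time.sleep(interval)
    return size
```

The trade-off is latency: the interval must be longer than the slowest pause in the writer’s output, which is why getting the other settings right took testing.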

Problem solved!


© Scott S. Nelson