From Agile to Fragile in 60 sprints

The adoption of agile software development methodologies has been a necessary evolution to support the explosive demand for new and expanded capabilities. There is no doubt that without the broad adoption of agile practices, much of the growth in technology, and all of those aspects of everyday life that are driven by technology, simply would not have happened.

Still, the old adage about too much of a good thing applies. Another adage that comes to mind is “You can have it better, cheaper, faster. Pick any two.” Many organizations have insisted on all three. How did they do it? They sacrificed the documentation.

I’m not talking about saving shipping costs and trees by making manuals virtual, and then saving bandwidth by replacing the documents bundled with the install files with links to online documentation (which has its own issues in this world of massive M&A). I’m talking about all those wonderful references that development teams, sometimes backed by technical writers, produced so that others could pick up where they left off to maintain and enhance the final applications. Yes, that documentation.

Self-Documenting Code does not make a Self-Documenting Solution

While no one can honestly disagree with the value put forth in the Manifesto for Agile Software Development, “Working software over comprehensive documentation,” I don’t think the intention was to say that documentation impedes working software. Still, the manifesto has fed the meme (in the original sense of the word, not the funny GIFs) that “Good code is self-documenting.” When I hear this, my response is “True; and knowing what code to read for a given issue or enhancement requires documentation.” My response lacks the desired impact for two reasons: it doesn’t easily fit on a bumper sticker, and it requires putting time and effort into a task that many people do not like to do.

The danger of little or no documentation is that the application becomes dependent on “tribal knowledge.” In a perfect enterprise, this is a dependable approach because employee turnover is low, and when people do depart they always give adequate notice and thoroughly train their replacements. I have heard these enterprises exist, though I have never spent any time working with one. I did, however, recently work with a business intelligence group whose entire ETL staff departed within a few weeks of each other after a few years of furiously building hundreds of data integrations across a dozen different business areas. They spent less than nine hours in “knowledge transfer” sessions with my team, who were tasked with keeping the lights on until a new crew was hired and trained. There was not one page of documentation at the start of the knowledge transfer, and I have yet to find a line of documentation in any of the code.

I’m not advocating the need for waterfall-style detailed design documents. In some ways, those can be worse than no documentation because they are written before the code and configurations they are intended to describe are created, and they fail to be updated when the actual implementation deviates. In an agile world, writing the documentation after the implementation is a sound approach that supports the manifesto value of “Working software over comprehensive documentation”: just enough documentation to facilitate maintaining the software in the future.

Meeting between the Lines

How much is just enough? That is going to vary by both application (and/or system) and enterprise. Some applications are so simple that documentation in the code, supplementing the “self-documenting” style, is sufficient. More complex solutions will need documentation that describes things from different aspects, and the number of aspects is affected by whether maintenance is done by the development team or a separate production support group. The litmus test for whether your documentation is adequate is to look at it from the perspective of someone who has never heard of your application and needs to be productive in maintaining or enhancing it in less than a day. If you have difficulty adopting that point of view (many people do, and twice as many developers), have someone outside your team review the documentation.

I find the following types of documents to be a minimum to ensure that a system can be properly managed once released to production:

  • Logical System Architecture
  • Physical System Architecture
  • Component Relation Diagrams
  • Deployment Procedures

Again, the level of detail and the need for additional documentation will be driven by complexity and experience. Another factor is how common the relevant skills are. If the candidate pool for a particular platform or framework is shallow, more detail should be provided to act as a springboard for people who may be learning the technology in general while diving into the particular implementation.

Yes, there are Exceptions

Conversely, some solutions are true one-offs that fill a very specialized need, are unlikely to evolve, and may have a short lifespan. These implementations only need enough reference material to migrate them to another environment or decommission them to free up resources without negatively impacting other systems. I do caution you to be really sure that an application falls into this category before deciding to minimize the documentation. What comes to mind when I think of such decisions is the massive amount of resources dedicated in 1999 to dealing with two-digit years in applications that were not expected to still be in use 10 or 20 years after they were developed.

A Final Appeal

At the beginning I agreed with the manifesto value of working software prioritized over comprehensive documentation. In the days when most software life cycles began with tons of documentation, meetings to review the documents, and meetings to review the results of the review, a great deal more beneficial build and test activity could have been done in that time instead. My experience with documenting the results of agile and other iterative processes toward the end of the development cycle, and then reviewing that documentation with people outside the team, is that design flaws are discovered when looking at the solution as a whole rather than at the implications of individual stories in a sprint. The broader perspective that waterfall tried to create (and often failed to, since most waterfall documentation does not match the final implementation) can be achieved better, cheaper, and faster by documenting at the end of the epic. In this one case, picking cheaper and faster yields better.

Documenting the fruits of your software and application implementation labors may not be the most exciting part of your team’s work, but the results of not documenting can become the most painful experience for those that follow…or your next gig!

Originally published at InfoWorld


5 easy steps to install custom components in Salesforce Trailhead playground orgs

Salesforce Trailhead trainings are a great way to learn Salesforce. Some of the Hands-on Challenges require installing components. If you are using a Developer org to run these, the instructions are easy to follow. However, if you are using a Trailhead Playground org, installing some components is a bit of a pain. Salesforce thoughtfully provides a link explaining how to do this in eleven steps, which I find a bit too time-consuming and confusing. I have found a slightly different approach that seems (at least to me) simpler. I will leave it to you to decide which you prefer.

The instructions for installing the component (as in the screenshot below) are often provided well before the challenge, along with a hint to avoid the frustration of trying to log into a Playground org when prompted by the standard component installation URL.

Example Trailhead Package Instruction

Step 1: Decide for yourself whether you will read through the full lesson or skip right to the challenge. When you get to the challenge, open your Trailhead Playground org in a new window by right-clicking the Launch button (as pictured below).

Open your Trailhead Playground org in a new window by right-clicking on the Launch button

Step 2: Log in to your Trailhead Playground org.

Step 3: Go back to the lesson screen and copy the component installation URL without the domain, i.e., “packaging/installPackage.apexp?p0=04tj0000001mMYP”

Copy the component installation URL without the domain

In some cases the installation instructions will show a link rather than the full URL on the page. In that case, right-click the link and copy the link target to get the path, pasting it into a text editor to extract the portion following the domain.

Right-click on the link and copy the target to get the path

Step 4: With the package path in your clipboard, switch to the window where you are already logged in to your Playground, paste the path after the domain name, and press Enter.

Paste the package path after the domain name of your logged-in Playground org and press Enter
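
For example, if your logged-in Playground org is at https://<your-playground-domain> (whatever appears in your browser’s address bar), the assembled address would be:

    https://<your-playground-domain>/packaging/installPackage.apexp?p0=04tj0000001mMYP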

Step 5: The installation screen comes up and you can continue as instructed in the Trailhead lesson.

The installation screen comes up and you can continue as instructed in the Trailhead lesson

I have tested this in both Chrome and Firefox running on Windows 7. Your results may vary with a different combination of browser and OS.


Originally published at InfoWorld


Finding and Fixing the Cause of 0 Byte Files from FTP Source Events in ICRT

Ran into this recently and wanted to share it with my loyal reader… An Informatica Cloud Application Integration solution that worked wonderfully through months of UAT and flawlessly for the first few weeks in production suddenly produced complaints from stakeholders because the files output as the end result of the process were empty.

The process logs showed no errors: every technologist’s favorite finding (sarcasm, for those not blooded in debugging production issues).

The application’s functionality is relatively simple: listen for new files in a remote folder via SFTP, download the files and rename them for processing by a Data Integration Mapping Configuration Task, then run the MCT. I’ll spare you the story of trying to get details about the missing files, except to say that once I had them, the pattern I noticed was that the missing files were the largest in the brief history of the application. Not much more deduction was required to figure out that in the source environment the files were being written directly to the outgoing folder rather than being created elsewhere and dropped in when complete. The larger files took longer to write than the polling interval, so they were being brought down in name only (pun intended).
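
That “create elsewhere and drop in when complete” pattern is worth illustrating, since it avoids the problem entirely when you control the source system. A minimal sketch of the idea (the paths and export command here are hypothetical):

    #!/bin/sh
    # Write the export to a staging directory on the same filesystem first
    STAGING=/data/staging
    OUTGOING=/data/outgoing
    FILE="export_$(date +%Y%m%d%H%M%S).csv"

    generate_export > "$STAGING/$FILE"   # hypothetical command; the slow write happens out of sight

    # mv within a single filesystem is an atomic rename, so the SFTP poller
    # never sees a partially written file
    mv "$STAGING/$FILE" "$OUTGOING/$FILE"

In my case the source system was not mine to change, which is why the fix had to be on the listener side.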

It took a while of reading and re-reading the documentation for the FTP connector to determine that the best strategy for this issue was to use the File Read Lock Settings on the event source, setting the Read Lock value to “changed” and running tests to get the correct values for the other File Read Lock Settings.

ICRT FTP File Read Lock Settings

Problem solved!


The Differences between IT Consultants and Contractors

I try to post original content. Sometimes that originality may only be in the presentation of the information, in which case I am attempting to provide (I hope) a clearer understanding or a simpler approach. Because of this personal rule of conduct, I first researched this topic, one I have thought and spoken about for quite some time, and was very surprised at what I found. What is already out there on the subject of comparing contractor and consultant roles is sometimes contradictory, and some of the distinctions seem to be based only on individuals rather than also encompassing companies that provide both services. Rather than argue the points others have made (which I don’t necessarily disagree with in certain, specific contexts), I will present my thoughts and experience and leave it to you to research further if you wish.

What’s the Difference?

In short, the basic difference between the two is simple: a contractor is an individual who possesses a specific skill set that they will apply to your specification, whereas a consultant is an individual who has experience developing solutions within a domain where you need assistance.

The basic difference is also inadequate for understanding which one you need for a given project (or aspect of a project) and how to work with them to your best advantage, so let’s dive a little deeper into the more subtle differences.

While you may work with both as individuals it is more common to work with them in groups. A group of consultants will be a team assembled on your behalf by a consulting company (AKA partner, group, professional services provider, etc.) and should be self-managing. A group of contractors may come from the same agency but will require management (which may also be contracted).

Consultants can help you define the problem and work with you to develop a plan to get from the current state to the target state. Frequently they also perform and/or manage the tasks and deliverables of the plan. Consultants can direct contractors to execute the plan, and will often provide those contractors as well.

Another difference is that for a contractor to be valuable, they must be deeply familiar with a specific aspect of the project, where consultants need only be familiar with the general domain of the project. One of the best reasons for engaging consultants is their proven ability to navigate through the unknown.

Working with Consultants vs. Contractors

One difference not included above is cost. There are many different fee structures for either, though they can all be broken down (for the sake of comparison) to cost per hour of effort. Consultants almost always come at a higher hourly cost. The difference is usually reflected in the value provided during that time, meaning that you will get more benefit for each hour of consulting. The key word in the previous sentence is usually.

There are two common scenarios where the value is not higher with consultants. The first is when it is the wrong consultant. The wrong consultant can be engaged for any number of reasons, and once this is determined it should be corrected. This, however, is not the most common reason for missing out on the full value of a consultant.

The most common reason for not realizing the maximum value of a consultant or consulting team is working with them as if they were contractors. Consultants should be actively involved at all levels of the project. During requirements definition they can draw on their experience with what similar projects have missed early on, and help determine prioritization through an understanding of the effort involved in delivering a requirement. Consultants will be able to apply experience in planning, knowing which tasks can be done in parallel to support timelines and where risks are most likely to occur, along with mitigation approaches. Once the delivery phase has begun, consultants will recognize issues and opportunities during regular reviews that might go unnoticed by those who have not done similar projects in the past. Every consulting company I have worked with has a project management practice, and when a team of consultants is engaged on a project it will generally yield the most value if part of that team is a project manager who will, among other contributions, help the client realize the maximum benefit of working with the consulting team.

Having one or more consultants on a project and then tasking them the same way as contractors is like rowing a power boat. It can still get from one place to another, but the boat is under-utilized, the journey will take more work than required, and it will not be nearly as much fun!

Which is Best for Your Project?

If your project involves technologies that your enterprise is already comfortably familiar with and you just need more hours in the day, contractors should fill the need nicely. You may be implementing a larger project where an isolated area is outside of your experience, and a contractor can fill that gap and train your people to maintain it afterwards. Or the project you are working on is scaling out your technical landscape and you will need to keep someone on afterwards for maintenance, so contracting can be a “try before you buy” approach to finding the right candidate.

If there is a concern about whether the project is the right thing to do or the technologies are the right ones to use, consultants can bring experience and a fresh viewpoint to increase confidence. If a project will introduce more than one or two completely new aspects to the enterprise, engaging a consultant should certainly be considered. The nature of consulting makes them familiar and comfortable with the unknown. For many organizations, internal teams need to be more focused on the day-to-day operations and introducing change to the technical landscape can be better served by professionals for whom change is the day-to-day operation.


A Quick Tutorial to Migrate Informatica Cloud ICS Objects between Orgs

Screenshots with captions:

Administer > Migrate Objects
In the Target Org, From the Administer Menu, Select Migrate Objects
Start Migration
On the Migrate Objects Page Click Start Migration
Log into Source Org as Admin
Click the Log In… Button
Enter Credentials when Prompted
Enter credentials for an Administrator Account in the Source Org
Click Add Objects
Once logged in you can click the Add Objects button
Select Objects to Migrate.

Save time by selecting objects at the top of related hierarchies: selecting a Task Flow, for example, will automatically select its tasks and any objects required to support them (such as Mappings and Connections), whereas selecting items lower in the hierarchy (such as Connections) will not automatically select their parents. Close the dialog by clicking the OK button (not pictured).

Back on the Migrate Objects Page Click the OK Button
Choose Carefully Whether to Overwrite or Not

If you are migrating updated objects you will see the prompt above. You may wish to rename the existing objects before migrating. You may also want to delete the existing objects as the Overwrite behavior may not be what you expect.

Once the migration is complete, you will need to review your objects to confirm or correct any Org-specific values such as the Secure Agent name or credentials for Connections that require them.


Create a Reusable Informatica Cloud Secure Agent Dev Image

I am going to start this article with a confession. As a system integration and automation consultant I am continuously shifting from one product, language, and platform to another. The frequent changes strengthen the short-term memory and train the long-term memory to manage concepts more than details. A complex set of steps, processes, or system commands that I can perform today without referring to Google, Evernote, or vendor documentation can, within a few months, fade to nothing more than a clear memory of having done them without recall of the specifics. Sometimes in mere weeks (or even days), depending on how different the next project is. So the main reason I write these detailed articles is so that I can repeat a process if it is ever needed again. Being able to share these mental breadcrumbs is just a bonus.

Lately I have been working a great deal with Informatica Cloud, an excellent iPaaS solution. Informatica Cloud uses an on-premises application called a Secure Agent to facilitate secure interactions between internal systems and the cloud. During training and development it is useful to have a Secure Agent running on a developer’s laptop. While the Secure Agent will install on the operating systems of most laptops, it runs better in an environment configured to support server operations rather than day-to-day work. There is also no simple way to switch a Secure Agent between Informatica Cloud instances (known as Orgs), which can be a hassle for a developer or consultant who needs to interact with multiple Orgs. To make life easier in such circumstances, I have taken a baseline VirtualBox image and added some applications and scripts so that the image can be quickly configured for any Org I have access to. Following are the steps I took to build this image, plus instructions for running the scripts that make it reusable.

Customize a Linux Image

If you aren’t familiar with VirtualBox, you can download it for free at https://www.virtualbox.org/wiki/Downloads. Be sure to get the Guest Additions download, too; you will want it later. I don’t have a preference for a specific Linux distribution. This article uses a CentOS 7.0 image available from https://s3-eu-west-1.amazonaws.com/virtualboxes.org/CentOS7-Gnome.ova.torrent, chosen simply because it was the first image I found that was of recent vintage and provided sufficient support for the task (it also has a graphical desktop, which is useful when you want to share the image with people who may not be comfortable with command-line-only interfaces). That said, support for various installation processes varies widely between distributions and builds, so if you choose a different distro you may need to find alternative methods for some of the steps on your own.

The first step is to import the pre-built appliance you have downloaded.  The screenshots below will help if you have never done this before.

Import Appliance

Select Appliance File

Customize Configuration Values

I change the Name of the appliance to be descriptive of the purpose of the image after I have completed configuring it. I also increase the CPU count and RAM total so that it will not be too slow during development test runs. You may need to adjust these values according to your own machine. I recommend no more than half of your available resources.

Hard Disk Space

One reason I use this particular base image is that it has adequate hard drive space, which many lack.  If you have an appliance image you like but the drive space is too small, see my blog post about Resizing the Root Drive on a Linux VirtualBox Guest Image with a Windows Host for a solution to use it anyway.

One value of a VM is the ability to use snapshots to go back to a working point. During this process we will create one to make a reusable appliance; create others as you go, according to your own sense of adventure or caution, and delete old snapshots once you know your current state is stable to save disk space. Creating a snapshot at the start saves re-importing the appliance if there is a misstep early in the process.

Install the Informatica Cloud Secure Agent

The next thing to do is install the Secure Agent. If you plan to reuse this image as a starting point, do not initialize the agent after installation; just start it and verify the install as follows (screenshots follow the steps for those unfamiliar with them):

Download the agent from Informatica Cloud

Download the Secure Agent

Select Linux Agent for this set up

Default save location is the user download directory

Then run the installer:

Default install options

Choosing the default install location is the simplest option.

Finally, add the following to ~/.bash_profile:

Find your .bash_profile
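
The exact lines depend on where you installed the agent. Assuming the default ~/infaagent location used later in this article (adjust to suit your install), a minimal sketch would be:

    # Informatica Cloud Secure Agent (path is an assumption; adjust to your install location)
    export INFA_AGENT_HOME=$HOME/infaagent
    # make the agent scripts callable from any directory
    export PATH=$PATH:$INFA_AGENT_HOME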

Planning for Reuse

For this to be reusable, we will want to change the hostname, especially for a shared dev environment, as we cannot have everyone using “localhost” as their hostname. The following is a bit of an overkill script that handles setting the hostname for a variety of distributions:

changehostname.sh:
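
A minimal sketch of such a script, covering both systemd and older init-style distributions, might look like this (treat it as illustrative rather than a drop-in copy):

    #!/bin/sh
    # Usage: sudo ./changehostname.sh <new-hostname>
    NEW_HOSTNAME="$1"
    if [ -z "$NEW_HOSTNAME" ]; then
      echo "Usage: $0 <new-hostname>" >&2
      exit 1
    fi
    if command -v hostnamectl >/dev/null 2>&1; then
      # systemd distributions (CentOS 7, recent Ubuntu/Debian)
      hostnamectl set-hostname "$NEW_HOSTNAME"
    else
      # older distributions: set it now and persist it
      hostname "$NEW_HOSTNAME"
      echo "$NEW_HOSTNAME" > /etc/hostname
      [ -f /etc/sysconfig/network ] && \
        sed -i "s/^HOSTNAME=.*/HOSTNAME=$NEW_HOSTNAME/" /etc/sysconfig/network
    fi
    # keep name resolution for the new name working
    sed -i "s/^127\.0\.0\.1.*/127.0.0.1   localhost $NEW_HOSTNAME/" /etc/hosts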

This is also a good time to place a readme on the desktop. Here is the one I use:
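
The exact contents will depend on your Orgs and environment; an illustrative outline (every specific here is a placeholder) is:

    1. Run ~/infaagent/changehostname.sh <unique-hostname> and reboot.
    2. Start the Secure Agent and register it against the target Org
       (have the Org login credentials handy).
    3. The shared folder is mounted under /media; the MySQL root password is <...>.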

Save the script in ~/infaagent, shut down the virtual machine and create a snapshot. Then start the machine and follow the readme. If all works well, restore the last snapshot and then export the appliance as your base image.

Export the VM Appliance

Select appliance to export

Set location and format

Add Guest Additions

Shared Folders require Guest Additions

I set up a shared folder to easily exchange files between the guest (virtual machine) and host (laptop), as well as to have a location accessible to both to use for Informatica Cloud flat file connections and logs.

In order to access the shared folder (as well as have a shared clipboard), run the Guest Additions set up after logging in to the new VM.

CentOS 7 does not make this especially easy. There is a good set of instructions at http://lifeofageekadmin.com/how-to-install-virtualbox-5-additions-on-centos-7/. One thing that is not clear in those instructions is that you will want to shut down and restart the VM after the last yum update before running the Guest Additions installation. You may also have to run the installation twice to get everything working.
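
In case that link ever disappears, the preparation generally boils down to something like the following (the package list is the usual set needed to build kernel modules; adjust as needed for your image):

    sudo yum update -y
    sudo yum install -y epel-release
    sudo yum install -y gcc make perl bzip2 kernel-devel kernel-headers dkms
    # reboot so the running kernel matches the freshly installed kernel-devel
    sudo reboot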

Insert and run the virtual CD

You will probably need to press the host key (Right Ctrl by default) to release your mouse from the VM until the Guest Additions are installed.

Once the shared folder shows up under /media, add the centos user to the group for access:
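
Assuming Guest Additions created the usual vboxsf group and the image’s default user is centos, the command is:

    sudo usermod -aG vboxsf centos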

Restart for change to take effect.

Installing MySQL

I often find that MySQL is handy to have on the VM. The following steps will work to get it installed on the CentOS 7.0 image available from https://s3-eu-west-1.amazonaws.com/virtualboxes.org/CentOS7-Gnome.ova.torrent.

First, remove the old install.
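
On the CentOS 7 image used here, the pre-existing install is the bundled MariaDB libraries (worth verifying on your own image before removing anything):

    sudo yum remove -y mariadb-libs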

Then download the latest install from https://dev.mysql.com/downloads/repo/yum/.

The Red Hat Enterprise Linux 7 version seems to work on CentOS 7. Save the download rather than opening it, then go to your Downloads folder and run the following commands:
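
They will look roughly like this; the repo rpm filename is a placeholder for whichever release you downloaded:

    cd ~/Downloads
    # register the MySQL yum repository, then install and start the server
    sudo yum localinstall -y mysql*-community-release-el7-*.rpm
    sudo yum install -y mysql-community-server
    sudo systemctl start mysqld
    sudo systemctl enable mysqld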

When done, verify the installation.
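
For example, either of these confirms the server is installed and running (assuming the default mysqld service name):

    mysql --version
    systemctl status mysqld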

Finally, find and update the database admin password and (optionally) install Workbench using:
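
Recent MySQL releases generate a temporary root password at first start, so the steps are roughly as follows (the Workbench package name may vary by version):

    # locate the temporary root password generated at first start
    sudo grep 'temporary password' /var/log/mysqld.log
    # change it and tighten the other defaults
    mysql_secure_installation
    # optionally, install MySQL Workbench from the same repository
    sudo yum install -y mysql-workbench-community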

Be sure to add the new password to your readme.


Resizing the Root Drive on a Linux VirtualBox Guest Image with a Windows Host

Most of the solutions I design and develop are deployed to a Linux server. Before “DevOps” became a thing, there were always server admins ready, willing, and able to help with setting up the deployment environment and handling the day-to-day maintenance afterwards. Lately I have been left to my own means to get these tasks done and have learned some commands, written several bash scripts for repetitive or automated tasks, and bookmarked enough reference sites to be productive, while still not considering myself an expert and definitely not an administrator.

So, being cautious, I prefer to have a virtual machine that is close to the environment I will be deploying my work to, especially for bash scripts that can bring things down faster than they build them up if there is some errant typo in the right-wrong place. I once built a duplicate virtual machine image from scratch, which I found to be a painful and dissatisfying experience given that I wanted the machine chiefly because of my lack of expertise with the finer points of configuration and administration. The next best thing to building it yourself (or the first best thing, in my case) is to find one that is already pre-built and then add the necessary customizations to it. There are a plethora of free Linux VM images out there, and finding one that is fairly close to the enterprise standard of my current client is usually fairly simple. The one thing that is almost always an issue is that the free images have a small hard drive in their configuration. If another drive can be created and mounted, great. But recently I ran across a production configuration where everything was off the root mount, and I finally figured out how to enlarge the drive on the VM image without too many headaches. Here is how I did it.

First, this is based on using VirtualBox. I have not used VMWare in a long time, but I believe the first stage of enlarging the capacity on VMWare may be even easier than with VirtualBox, which is where we start in the slide show below.

To save writing down the command from the slide show, you can copy and modify the following:

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd "Ubuntu 15_40GB.vdi" --resize 40960


Combine Data and Application Integration Aspects of Informatica Cloud for Fine-Grained Efficiency

Background

Informatica introduced their cloud initiative back in 2006. It has grown to encompass many data-related services including cleansing, EDI, MDM, etc. For the purposes of this article, “Informatica Cloud” refers only to the separate-but-related application- and data-centric aspects, sometimes referred to as the Informatica Cloud Integration Platform as a Service (iPaaS). iPaaS includes the Cloud Data Integration functionality, based on (and separate from) their flagship PowerCenter ETL application, and Cloud Application Integration, based on (and distinct from) the ActiveVOS platform that Informatica acquired back in 2013.

To save bytes, Informatica Cloud Data Integration will be referred to as ICS, and Informatica Cloud Application Integration (a.k.a. Real Time Integration) as ICRT.

Software Silos

Given the background, it is not very surprising that ICS and ICRT are mostly used separately for their key purposes. If there is some data that needs to move from system A to system B, ICS is the tool, and if a workflow needs to happen in real time, ICRT is the way to go. Both of these are valid assumptions, and the fact that ICRT is an additional cost to the default ICS included with iPaaS strengthens this viewpoint.

ICS provides a robust API for managing objects and running tasks. There is a connector in ICRT that provides wizard-driven access to the ICS API. ICRT processes can be exposed as web services that provide both a SOAP and ReST interface. In short, despite their distinct natures, ICS and ICRT can be easily integrated out-of-the-box (or, out-of-the-cloud, in this case).

Gain Elevators

Informatica provides ICS connectors for many third-party systems that are frequently integrated through ICS, such as SAP, Workday and Salesforce, in addition to common protocol connectors like SOAP, ReST and JDBC. In theory, there are very few systems that cannot be integrated in an ETL-manner using ICS, and this is also true in practice. That said, “able to” and “easy to” are important factors to consider when planning an integration project within delivery scope and maintenance goals.

Most of the connectors for ICS are also exposed in ICRT when enabled or installed. ICRT has a very robust architecture for creating Service Connectors to SOAP and ReST services that can be used by Processes that can in turn (as mentioned earlier) be exposed as SOAP or ReST services.

Super Mash

Not all web services are created equal. While some provide a straightforward interface that elicits data in a format ready for inter-platform translation, most are intended for lookups and transactions rather than being a source for batch-loading data. Informatica provides some connectors that wrangle popular APIs, such as Salesforce.com, into a structure that is easy to work with. Other services may have a connector that is more suited to being an ETL target, or meant more for the “citizen developer” to load data into a reporting format. Informatica also provides standard ReST and Web Service adapters, but if the API response is several layers deep it can be complex and confusing to get at the values using a graphical design platform such as ICS.

Fortunately, ICRT provides a way to quickly create a Service Connector for any standard API. The Service Connector provides a wizard to turn the API response into an object that can then be streamlined and simplified for easy management in an ICRT process. The ICRT process can perform further transformations, such as renaming fields and formatting data types, or simply act as a pass-through that outputs the more digestible response format.

Once the ICRT process is connected to the Service Connector, you have the option of beginning your integration in either ICRT or ICS, depending on the nature of the integration. For example, if there is a great deal of processing to be done in ICRT before the data is ready for ICS, it is simpler to initiate the process in ICRT, output the service response to a disk location, and then call ICS to perform the ETL steps with the file as a source. Alternatively, when the response is quick or ICRT is only acting as a proxy to simplify the response, the ICRT process can be exposed as a service and that service called by ICS as the ETL integration source.

For Instance…

Here is a real-world example of where this approach is useful. Informatica provides a perfectly functional connector to Workday. The connector provides full access to the Workday APIs. The Workday APIs, however, are not very simple to use. They provide some control over the response format, but anything beyond limited data in common fields is deeply nested within complex objects. Note in the image below the number of fields available:

Workday ICS Data Source

Using an ICRT Service Connector, we can take this complex response and immediately simplify it:

ICRT Service Connector Simplified XML Object Definition

The Service Connector above can be run by an ICRT Process that will map the fields to a process object with the same names as the target system fields (SAP in this example) and then provide them directly to the mapping as a flat data set:

Simplified Source in ICS from ICRT

Granted, the mapping could still have been accomplished without ICRT. By introducing ICRT as a proxy to the web service, however, development can be done faster by parsing simple XML rather than traversing complex nested objects. And with the field names defined in ICRT, if it ever becomes necessary to redefine the field sources there is no need to trace back through the transformations in a Mapping to locate what may have been impacted.

Avoid a Clash

Only one instance of a Mapping Configuration Task can be running at a time. To avoid the error “The Mapping Configuration task failed to run. Another instance of the task is currently running”, use a unique Mapping Configuration Task per process. As for Data Synchronization Tasks, much of what they do can instead be performed by a mapping, which can have multiple Mapping Configuration Tasks calling it.

Conclusion

In most cases Informatica Cloud Data Integration functionality is all that is needed and desired to integrate data between systems. In some cases where web services are the source and the format of the service response is nested and complex, using Informatica Cloud Application Integration as a proxy service to simplify the response to just the fields needed for transformation can be a time saver both in the creation of the integration and its future maintenance.
