Three Workflow Approaches with WebLogic Portal

This is a blast from the past originally published at Developer.com when they were still interested in portal development. I came across it because I needed a reference to Nested Page Flows and couldn’t find one until I ran across a link to my own article. Deja dude. Anyway, here it is. One day I will clean up the markup, but for now it is still useful for reference, and so long as the link above works you can still see the clean version…

While the disclaimers usually start later, this article merits one up front: These are not the only solutions to creating workflows in WLP. For example, we’re not even going to touch on JSF, or consider the possibility of special considerations in a federated portal architecture. So don’t let yourself be limited by the scope of this article or the author’s experiences and prejudices. What we will examine are some solutions that are known to work and should give you enough of the basics to implement any set of workflow requirements on WLP.

Simple Page Flows

Page flows provide a very straightforward approach to creating a workflow. Using the built-in wizard will quickly generate your page flow controller with the default begin action. This default is a simple action, which doesn’t do much for flow, as all it does is forward to the generated index.jsp.

This is quickly enhanced by right-clicking on the begin action under the Actions folder in the page flow perspective and selecting Convert to a Method.

@Jpf.Action(forwards = { @Jpf.Forward(name = "default", path = "index.jsp") })
public Forward begin()
{
    return new Forward("default");
}

Now you can begin adding workflow logic to your page flow. This approach is good for a simple process where the user enters data in multiple forms and each submit does some level of processing on the newly entered data. You can even provide branching logic, forwarding to an action based on inputs. In either case, a single form bean in the page flow controller serves well to maintain the values, placing “frozen” values into hidden fields to carry them from page to page and action to action.
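For instance, a step’s JSP might carry the frozen values forward in hidden fields. The sketch below uses Beehive NetUI tags; the form bean properties and the action name are assumptions for illustration:

```jsp
<netui:form action="storeAddressInfo">
    <%-- "frozen" values captured in earlier steps (hypothetical properties) --%>
    <netui:hidden dataSource="actionForm.userName"/>
    <netui:hidden dataSource="actionForm.email"/>
    <%-- data collected at this step --%>
    <netui:textBox dataSource="actionForm.street"/>
    <netui:button value="Continue" type="submit"/>
</netui:form>
```

Submitting the form posts the hidden values back along with the new input, so the single form bean arrives at the next action fully populated.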

Below is a series of action stubs that follow a simple workflow to create a web site user account:

/**
 * Check if existing first, last, and email
 * @param form userDataFormBean
 * @return success if new user, error if existing user
 */
@Jpf.Action(forwards = { @Jpf.Forward(name = "success", path = "createUserName.jsp"), @Jpf.Forward(name = "error", path = "index.jsp") })
public Forward processUserNameAndEmail(userDataFormBean form)
{
    Forward forward = new Forward("success");
    return forward;
}

/**
 * Create user name and request address information
 * @param form userDataFormBean
 */
@Jpf.Action(forwards = { @Jpf.Forward(name = "success", path = "getAddress.jsp") })
public Forward createUserName(userDataFormBean form)
{
    Forward forward = new Forward("success");
    return forward;
}

/**
 * Save the snail mail address and offer to subscribe
 * @param form userDataFormBean
 */
@Jpf.Action(forwards = { @Jpf.Forward(name = "success", path = "subscribeNewsletter.jsp") })
public Forward storeAddressInfo(userDataFormBean form)
{
    Forward forward = new Forward("success");
    return forward;
}

/**
 * Save the subscription choice and send to summary page
 * @param form userDataFormBean
 */
@Jpf.Action(forwards = { @Jpf.Forward(name = "success", path = "summaryPage.jsp") })
public Forward offerSubscription(userDataFormBean form)
{
    Forward forward = new Forward("success");
    return forward;
}

What makes this simple is that each JSP uses the same form bean, with the action set to the next action. In a more robust implementation each action would also have an error page to forward to, which could easily be the JSP that submitted the information (as processUserNameAndEmail does) with error messages. This example could be expanded with some simple branching: for example, if the user already exists in the database, the page flow action could forward to a password reminder page instead of simply going back to the index page.
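The branching variant could look like the following sketch. The lookup and the forward names here are assumptions for illustration, not the article’s actual implementation; in the real action the returned name would be wrapped in new Forward(...):

```java
// Hypothetical branching helper for processUserNameAndEmail:
// choose the forward name based on whether the account exists.
class RegistrationBranching {

    // Stand-in for the real database lookup (assumption for illustration)
    static boolean userExists(String email) {
        return "taken@example.com".equals(email);
    }

    // The forward name the action would wrap in new Forward(...)
    static String nextForward(String email) {
        if (userExists(email)) {
            return "passwordReminder"; // existing user: go to the reminder page
        }
        return "success"; // new user: continue to createUserName.jsp
    }
}
```

Each branch still resolves to a @Jpf.Forward declared on the action, so the controller remains the single map of the workflow.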

Nested Page Flows

Nested page flows take planning and coordination between the originating and nested controllers. This makes them most useful when the workflow is predetermined and not expected to change much or often. In other words, the nested page flow approach is best suited to Waterfall projects where most (if not all) requirements are known prior to development.

Nested page flows allow passing control off to another controller while maintaining the state of the originating controller. This can be useful for branching logic or if you are creating multiple workflows that have the same set of steps as part of the flow. You can develop a page flow control that does the common steps, then call it from the controllers that deal with the parts of the workflow that vary. For instance, in our earlier simple page flow we could add a survey in the work flow before the subscription page to determine what types of subscriptions to offer. This survey workflow could also be presented to existing users at log in if their responses were out of date or when there was a new survey. In both the account creation scenario and the login scenario, the survey comes in at the middle of the process, so we want to be able to reuse the survey code without losing the state of either the enrollment or login workflow, so we call the survey flow as a nested flow.

If you know from the beginning that you will call a page flow as a nested flow, you can have the necessary annotations and action methods generated by checking the “Make this a nested page flow” option at the start of the page flow creation wizard. The two key ingredients that make a page flow nested are the controller annotation at the class declaration:

@Jpf.Controller(nested = true)
public class NestedPageFlowAController extends PageFlowController{

And the necessity to have an action with a forward that includes a return action:

@Jpf.Action(forwards = { @Jpf.Forward(name = "done", returnAction = "portlets_nestedPageFlowADone") })
protected Forward done()
{
    return new Forward("done");
}

The return action must be an action method that exists in the controller that called the nested controller. Calling the nested controller is simply a matter of having an action with a forward that resolves to the nested controller (or a method within the controller) like this:

@Jpf.Action(forwards = { @Jpf.Forward(name = "success", path = "subscribeNewsletter.jsp") })
public Forward portlets_nestedPageFlowADone(userDataFormBean form)
{
    return new Forward("success");
}

As noted, this takes a good deal of planning up front. For something more Agile, let’s look at a new approach.

Event Flows

As far as the author knows, this is the first description of using events for this particular purpose. That is probably because the author has less time to read articles than to write them, because it is a fairly intuitive leap from inter-portlet communication (a common use of portal events) to passing control back and forth between specialized controllers, as well as to loading hidden pages used only for special purposes in a workflow.

Events are a handy way of decoupling your controllers and actions. They allow you to move from one controller to another and back again, with the only explicit relationship being to the event rather than the action. If you come up with a better way of handling an event, or your workflow rules change, you can simply change how the event is handled rather than changing all parts of the workflow.

Let’s say we are looking at a login workflow. When the user logs in, the first step would always be to check their credentials. From that point, there are many tasks we may want the user to do. It may be time for them to change their password, or there may be a message we want to show them based on some demographic information. None of these activities are mutually exclusive and could occur in any combination. We could use simple page flows or nested page flows to achieve this, but that would require tight coupling between the actions and/or controllers. Instead, we can fire an event based on an evaluation and send the user off to take a survey (for example). When they have completed the survey we may want them to see a bulletin or not. So rather than having the logic in the survey action as to where to send them to next, we can send them back to the initial action which will then evaluate whether they should just go to the landing page or somewhere else (such as our bulletin) first. The bulletin could either send them back to the same action after the user acknowledges receipt or forward them on to the landing page itself.

Accomplishing this is fairly straightforward. For each page where you want to handle an event, create a .portlet file. While the portlet configuration must have some class defined where it would presumably start, once you add event handling to the configuration you have ultimate control over how to respond to that event. Let’s look at a simple example of how this works.

public Forward begin(userFormBean form)
{
    PortletBackingContext pbc = PortletBackingContext.getPortletBackingContext(getRequest());
    int status = getStatus(form.getUserId());

    switch (status)
    {
        case 0:
            pbc.fireCustomEvent("callDisplayBulletin", form);
            break;
        case 1:
            pbc.fireCustomEvent("callChangePassword", form);
            break;
        case 2:
            pbc.fireCustomEvent("callPresentSurvey", form);
            break;
    }
    return new Forward("default");
}

This logic could go in any action, but for simplicity we put it in the begin action. Since this action method always evaluates the user’s status, we can continue to send users back here to determine where to go next. If the value of the status doesn’t match a case, we simply send them to the forward destination.
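The dispatch decision can be factored out as a pure function, which makes the fall-through behavior explicit. This is a sketch for illustration only (the status codes match the example above; the helper name is an assumption):

```java
// Sketch of the dispatch performed in begin(): map the status code
// to the custom event to fire, or null when no case matches and the
// user should simply take the "default" forward.
class LoginDispatch {

    static String eventForStatus(int status) {
        switch (status) {
            case 0: return "callDisplayBulletin";
            case 1: return "callChangePassword";
            case 2: return "callPresentSurvey";
            default: return null; // no matching case: default forward
        }
    }
}
```

Keeping the mapping in one place means a new workflow step is a one-line change: add a case and register a handler for the new event.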

Each of the events has a portlet event handler registered to listen for it. The handlers can be in as many different portlet definitions as we want, allowing the methods in the same controller to be reused on different pages, or several different controllers to interact with each other through the event framework. Keeping our example simple, we will put the methods in one controller in a single portlet:

<netuix:portlet definitionLabel="eventBasedPageFlow" title="Event Based Page Flow">
    <netuix:handleCustomEvent event="callDisplayBulletin" eventLabel="callDisplayBulletin"
        fromSelfInstanceOnly="false" onlyIfDisplayed="false" sourceDefinitionWildcard="any">
        <netuix:activatePage/>
        <netuix:invokePageFlowAction action="callDisplayBulletin"/>
    </netuix:handleCustomEvent>
    <netuix:handleCustomEvent event="callChangePassword" eventLabel="callChangePassword"
        fromSelfInstanceOnly="false" onlyIfDisplayed="false" sourceDefinitionWildcard="any">
        <netuix:invokePageFlowAction action="changePassword"/>
    </netuix:handleCustomEvent>
    <netuix:handleCustomEvent event="callPresentSurvey" eventLabel="callPresentSurvey"
        fromSelfInstanceOnly="false" onlyIfDisplayed="true" sourceDefinitionWildcard="any">
        <netuix:activatePage/>
        <netuix:invokePageFlowAction action="presentSurvey"/>
    </netuix:handleCustomEvent>
    <netuix:titlebar/>
    <netuix:content>
        <netuix:pageflowContent contentUri="/portlets/eventBasePageFlow/EventBasePageFlowController.jpf"/>
    </netuix:content>
</netuix:portlet>

The above example is kept to a single portlet for the sake of brevity. It is far more likely that these events would be handled by multiple portlets, either due to presentation considerations (such as going from a page full of portlets to a page with a single portlet) or logical separation of functionality (such as a survey controller, bulletin controller, etc.).

In addition to custom events, page flow actions are events that can also be listened for, allowing for the possibility of completely changing the functionality of an action by listening for it and adding additional or new behaviors. The combinations are endless and can often be changed with only a minor configuration update or a single line of code. This simplicity is key to agile methodologies and provides developers with a rapid way to add functionality on an as-needed basis.
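Listening for a page flow action uses the handlePageFlowEvent handler, a sibling of the handleCustomEvent element shown earlier. The fragment below is a sketch; the event and action names are hypothetical:

```xml
<!-- Hypothetical: react to another portlet's createUserName action
     by invoking an auditing action in this portlet's controller. -->
<netuix:handlePageFlowEvent event="createUserName" eventLabel="onCreateUserName"
    fromSelfInstanceOnly="false" onlyIfDisplayed="false" sourceDefinitionWildcard="any">
    <netuix:invokePageFlowAction action="auditNewUser"/>
</netuix:handlePageFlowEvent>
```

The original action never knows it is being observed, which is exactly the loose coupling this approach is after.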
Conclusion
Workflows are a common requirement for portals. While the examples in this article revolved around a simple registration and login process, they were chosen for their commonality. Employment applications, freight logistics, legal document creation, supply requisitioning, and financial transactions are other common workflows that are often required within a portal. Those that are straightforward with little or no deviation are easily implemented with a simple page flow. Nested page flows provide a solution for complex workflows that need to interact, and an opportunity to reuse common sub-flows when a project has well-defined requirements. For a more agile approach, listening for and calling events provides a flexible, loosely-coupled way to call portlet methods within and between controllers without having to know all of the specifics of what future requirements may be.

© Scott S. Nelson

Cleaner Code with the PMD Eclipse Plug-in

Re-post from the original publication on December 4, 2008 at Developer.com.

Most developers like to write clean code. So why is so much code in the world messy? While there are many opinions about this, there are two contributing causes that most everyone will agree on.

One of the reasons is that there is rarely enough time to write code as cleanly as we would like. Even when code starts clean, the continual refactoring from changing requirements, shifting dependencies and the inevitable bug fixes (often a result of the first two factors) leads to messy code just as surely as short deadlines and long hours make an organized person’s desk become littered with piles of unfiled papers and unfinished notes.

The other generally agreed-upon reason that code in the real world is not as clean as it starts out in our minds is that not everyone agrees on what clean code should look like. Some people are certain that their version is cleaner than another’s, and there are people who hold different opinions with equal conviction. Consider, for example, which line the opening brace of a method belongs on. I am fairly certain I am not the only person who has endured endless email threads and inconclusive meetings on that subject. The one, final answer will not be decided in our lifetime.

While the coding standards of an enterprise or project team may begin as a democratic process, they will not be useful as a benchmark until their definition evolves into a benevolent dictatorship (remember, we are discussing business, not government here). Once the standards are defined, a third reason for not meeting them comes into play: there are usually more rules than most folks can memorize, or remember when the time pressure is on or when the rules of one project differ from those of another. For these causes of messy code, I have found PMD to be the best solution based on its flexibility and ease of use. The letters themselves do not really stand for anything; the creator(s) just thought they sounded good together. The PMD home page supplies several “backronyms” to explain it.

About PMD

The PMD home page describes the value of the project simply and concisely: it scans Java source code for potential problems such as possible bugs, dead code, suboptimal code, overcomplicated expressions, and duplicate code.

PMD is more than just an Eclipse plug-in; in fact, it is available as a plug-in for many IDEs. It can also be run from a command line or as an Ant task. This makes it perfect for Agile projects, as it can be integrated into the developer’s IDE and run as part of an automated build process. Even if you aren’t running automated scans at build time, making it part of your IDE will allow you to write cleaner code as a developer and to speed up code reviews as a reviewer.
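As an illustration, a build-time scan via the Ant task might be wired up like the following sketch. The classpath reference, ruleset files, and paths are assumptions for your environment (PMD 4.x-era names shown; check the PMD documentation for your version):

```xml
<!-- Sketch: run PMD during the build and emit an HTML report.
     Paths and ruleset names are assumptions. -->
<taskdef name="pmd" classname="net.sourceforge.pmd.ant.PMDTask" classpathref="pmd.classpath"/>
<target name="pmd-check">
    <pmd rulesetfiles="rulesets/basic.xml,rulesets/unusedcode.xml">
        <formatter type="html" toFile="build/pmd-report.html"/>
        <fileset dir="src">
            <include name="**/*.java"/>
        </fileset>
    </pmd>
</target>
```

Running the same rulesets in the IDE and in the build keeps developers and the continuous build judging code by identical standards.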

Eclipse Plug-in Installation

While Google is a developer’s best friend (though I really miss DejaNews), for the more mature Eclipse plug-ins it pays to read the details of your Google search results. In the early days of Eclipse plug-ins, the use of the update manager was less prevalent. Most plug-ins at that time were made available as downloads. Many of the plug-in projects that have survived since those early days have since moved entirely to the Eclipse project’s preferred method of using the update manager. While this can be annoying to those of us who began using Eclipse in the early versions (especially when maintaining a portable tool kit), it is a much cleaner way for plug-in projects to publish their wares, and it makes it easier for the user to get the correct version for their workspace. I mention this because while preparing for this article my Google search found the original download site at http://sourceforge.net/projects/pmd-eclipse. Even though it wasn’t the highest-ranked result, it did come in third, and the habit of wanting a download rather than an update manager URL is hard to break. Just before downloading the zip file, I noticed that the site was last updated in 2005. Had I installed that version, I would have spent part of my afternoon cleaning up the mess of my highly-personalized workbench.

The currently maintained site is at http://pmd.sourceforge.net/integrations.html, where you will find the update manager URL of http://pmd.sf.net/eclipse. As a refresher from the Building the Perfect Portable Eclipse Workbench article, here are the steps to install PMD through the update manager:

Figure 1: Access the Update Manager from Help > Software Updates > Find and Install

Figure 2: Select Search for new features to install in the Update Manager Options

Figure 3: Add the PMD URL to the Site List

From this point, the standard “Next, Next, Next” steps can be easily followed. Upon success, you will have a new perspective in your workbench:

Figure 4: The PMD Perspective

If you happen to use the PHP version of Eclipse you will need to accept the restart Eclipse option or be annoyed by prompts telling you that your workspace is a mess.

Checking Your Code with PMD

While PMD can be customized to meet your coding standards, you can start using it immediately after installation with the default configuration. For those who don’t already have documented coding standards, these defaults can provide a good starting point.

PMD allows you to check code at any level available in the code view, i.e., at the project, package, or class level. The code check is run by right-clicking on the level you wish to check and selecting Check Code with PMD from the PMD options:

Figure 5: PMD Options in Right-Click Menu

The location and number of violations are then displayed in the PMD view.

Figure 6: Check Code with PMD Results

You can drill down in the Violations Overview from the level at which you ran the check all the way to the individual line and violation description.

If you are introducing PMD mid-stream into a large project, the scan can take a long time. Once PMD has become part of your regular coding environment, getting in the habit of running a scan on each class you have created or updated prior to checking it into source control can save hours of bug hunting, not to mention reducing the possibility of embarrassing comments during code reviews.

After running a code check, the results can be exported by selecting Generate Reports from the PMD menu. A new folder will be added to your project named “reports” where the output will be available in several different formats.

Another cool feature is the ability to search for duplicate code with the Find Suspect Cut and Paste menu selection. The results of this search can help to pin down repeated code that should become part of a utility class.

Customizing PMD

If one reason for messy code is differing opinions about what constitutes clean code, then the expectation that the default rules will be perfect for every project is a bit ambitious. The PMD project takes this into account by making it very easy to customize which rules to enforce and what level of attention they should be given.

The basic view generally won’t need to be changed:

Figure 7: Basic PMD Option

The predefined rules can be edited and removed easily in the preferences view. Changing descriptions to match the text of your coding standards can make the standards themselves easier to remember. As running the code check becomes a habit, most developers will tend to commit fewer violations, since correcting them reinforces how to code to the standards.

Figure 8: Customize Rules with Configuration

One key option is the violation level. In the code check results view there are color coded toggles to set what level of violations to show (see Figure 6). When time for code review is limited, selecting the higher priority level (lower numbered) violations can help developers and reviewers focus on the most urgent violations.

When determining what level a violation should be, it is a good idea to avoid the temptation to go too high, as the top-level violations will prevent compilation of the code.

Figure 9: Error High Prevents Compilation

If you are adopting PMD mid-project, setting too many violations at the highest level can bring project progress to a screeching halt or drive developers to remove the plug-in, both results defeating the purpose of improving quality while saving time. However, if PMD is part of your environment from the first line written, high violation settings can lead to improved code quality throughout the project.

PMD also allows for creating your own rules, a task that is far beyond the scope of this article. Full documentation is available at http://pmd.sourceforge.net/howtowritearule.html. Most teams will find that customizing the descriptions and priorities of the large selection of pre-defined rules will be more than adequate for their needs.

Once the rules have been customized to match your standards, they can be exported for sharing across the enterprise or team.

While the tool itself is simple to install, customize, and use, creating practices and policies to get the most out of it may take a bit more work. My personal preference is to make sure all violations at all levels have been addressed prior to the completion of the QA phase. Even with the best-defined rule sets, there will be some exceptions to the rules, and the project allows for this by providing a “Mark as reviewed” option. Using this option adds an annotation at the end of the line of code that will allow the code check to skip that violation in future checks.
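PMD’s general-purpose suppression marker is a trailing // NOPMD comment, which is the kind of end-of-line annotation described above (the exact comment text the plug-in writes for “Mark as reviewed” may differ by version). A minimal sketch with a deliberately empty catch block:

```java
// A violation (empty catch block) that has been reviewed and
// suppressed with PMD's trailing "// NOPMD" marker.
class ReviewedExample {

    static int parseOrDefault(String value) {
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException ignored) { // NOPMD - reviewed: falling back to default is intended
        }
        return 0;
    }
}
```

Because the marker lives in the source, the suppression survives fresh checkouts and shows up in code review, where the reviewer can judge whether the exception was justified.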

Conclusion

PMD is a great tool for improving code quality, developing good habits, and speeding up code reviews. It is not a panacea that can completely replace manual code review. Code is an art as well as a science, and automated tools have a long way to go before they reach a level of heuristics that is 100% reliable.

About the Author

http://www.linkedin.com/in/enterpriseportalarchitect/

© Scott S. Nelson

Portal Federation with WebLogic Portal WSRP Part 2: Advanced Techniques

Originally published at developer.com

In part 1 of this series you created your first federated portal utilizing WSRP. When the first WSRP spec was released, simply being able to render a portlet from one portal inside another with little or no additional development seemed really exciting. Like all new, cool technologies, once everyone had their gee-whiz moment, they started thinking about what else they wanted. As usual, they wanted a whole lot more, some of which is being addressed by the recently released WSRP 2.0 spec and the next Java portlet spec, JSR-286. Also as usual, neither development groups nor the WebLogic Portal product team waited for the next specification to start delivering the next generation of functionality. The over-arching theme of post-WSRP 1.0 requirements is how to go beyond sharing individual portlets, and begin integrating whole sections of portals together. Starting with version 9.2, the WebLogic Portal (WLP) began including both references on how to leverage existing capabilities as well as new APIs to allow enterprises to fully federate their portal assets. In this installment we’ll examine two of the major features that you can use to meet the expanding requirements you are bound to face once you get that first portlet reused through WSRP.

Federating Pages and Books

As all readers of developer.com are extremely intelligent, I am certain that you have already concluded that if you can place one WSRP portlet in your portal, you can also place a whole page full of them in your portal. While this may be adequate for some portals, there are two scenarios that come immediately to mind (though there are bound to be others) where it would be better to have these portlets already grouped in the producer before integrating them into the consumer portal. One scenario is the division of labor provided by wholesale importing of pages and books. Some portals are huge, containing hundreds of pages and thousands of portlets. Adding one or more pages to a portal administrator’s duties may simply not be practical. The other scenario is where the owners of the producer portal want to maintain a greater degree of control over how their portlets are combined no matter where they are rendered. Federated pages and books fulfill these needs easily.

WLP has made basic page and book federation extremely simple. So simple that we can just walk through the steps and understand why we are doing what we are doing.

In this walk-through we will use a simple taxonomy, keeping in mind that the more well-planned your taxonomy of portal assets is, the easier your portal application will be to maintain. Once you have determined where your first federated page should reside, select the Portal perspective in Workshop, highlight the folder where you want your new page to live, and either select File > New > Other or use the CTRL+N shortcut to start the Workshop Wizard. In the first dialog, expand WebLogic Portal and select Page.

Figure 1: New Page Wizard

The wizard will have your highlighted path pre-selected for you and prompt you for a name for your page. Enter a name and click Finish and you now have a remote-able page where you can add your portlets. While it may seem odd at first to have a page with no book above it in Workshop, it will function the same as pages in the library of your Portal Administration Tool (PAT).

Figure 2: Remote Page Layout

One nice improvement in WLP 9.2 is that the wizards generate the Definition Label of portal assets based on what you named them, rather than the old scheme of portlet_1, portlet_2, etc. While the portlet wizard prompts you for a title, the page wizard does not. With federated portals, some thought should be given to the value to use in the Title field of your remote pages (also for portlets and books). Unlike a Definition Label, the Title is not required by the API to be unique. However, consumer portal administrators will have no way of knowing the difference between three pages (or portlets or books) with the same name, because they will not have the graphical view of them you have in Workshop or on your desktop. Consumer administrators can, however, change the Title field of a remote portal asset in the consumer portal library. Ah, but didn’t we say that one reason for federating pages was to leave the control with the producer? Like many areas of enterprise design and development, the approach needs to fit the situation. Deciding such trivial matters as portal asset titles in the early planning stages of a federated portal architecture will save time during the QA stage.

Adding a remote book to your producer portal starts with almost exactly the same steps, the only difference being that you select the Book wizard rather than the Page wizard. Once your remote book is created, you then add pages as you would in a non-remote-able book. While it is somewhat counter-intuitive, remote pages cannot be used in remote books laid out in Workshop. Attempting to do so results in the remote page looking wrong in Workshop (the path to the page prints out where you would expect to see a layout). While this invalid configuration will build and deploy, the book with the remote page is not included in the published portal. If you have a remote page that you wish to appear in a remote book, you will need to let the administrator of the consumer portal know. As with standard books, pages created in the book will be available individually in the PAT, and there the administrator can assemble them in the way you could not in Workshop.

Note that if a portlet is consumed individually and later included in a remote page (or multiple remote pages are consumed containing the same portlet), a separate instance of the portlet is created on the consumer. If you know in advance that you will consume a portlet both individually and as part of a book or page, it is easiest to maintain your consumer portal by first placing the individual portlet before adding the remote page to your library.

Once the producer has published the remote pages and books, the consumer then needs to add them to the library. This is done in the same fashion as adding a WSRP portlet (as described in The Basics), only you select the Pages or Books section accordingly.

Figure 3: Remote Assets in the Portal Administration Tool

Oddly enough, the sequence of pages in a remote book is not maintained when consumed, so communication must be maintained between the producer developers and consumer portal administrators so that the pages can be arranged as desired once the remote book has been added to the consumer desktop.

Passing Data Between Remote Portlets

Inter-Portlet Communication (IPC) is when an action in one portlet causes a reaction in another portlet. There is an example in the WLP documentation of how to have a receiver react to an Event fired by a sender. In this article we will take IPC one step forward and pass a run-time value from the sender to the receiver.

The important thing to remember when passing data from one remote portlet to another is that the request object is not shared between portlets. While mildly annoying, this makes perfect sense as the collection of remote portlets on a given consumer page may not necessarily be from the same producer. There are multiple solutions to get around this, and in the following example we will use a non-standard approach to illustrate that there are times when requirements remind us that best practices are guidelines rather than rigid rules. I promise to cover the best practices approach in the next installment.

For this example, we are going to assume that our requirement is to maintain the value in session once it has been set. This is mighty convenient for our fictitious development project, as we can accomplish it with the least amount of work. First, we will create a Serializable object to hold our value, and use a session singleton pattern just to keep the example simple:

package common.example;

import javax.servlet.http.HttpSession;

public class SharedData implements java.io.Serializable
{
    private String ipcValue1;

    private static final long   serialVersionUID = 1518508811L;
    private static final String SESSION_DATA_ID  = "sharedSessionData";

    public static SharedData getInstance(HttpSession outerSession)
    {
        if (outerSession.getAttribute(SESSION_DATA_ID) == null)
        {
            outerSession.setAttribute(SESSION_DATA_ID, new SharedData());
        }
        return (SharedData) outerSession.getAttribute(SESSION_DATA_ID);
    }

    private SharedData() {}

    public String getIpcValue1() { return ipcValue1; }

    public void setIpcValue1(String ipcValue1) { this.ipcValue1 = ipcValue1; }
}
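One caveat worth noting: the check-then-set in getInstance() is not atomic, so two concurrent requests in the same session (two portlets rendering at once, say) could each create a SharedData instance. The sketch below, which is my own illustration rather than part of the original example, shows the same singleton pattern made race-free with computeIfAbsent; a plain Map stands in for HttpSession so the class is self-contained (with a real session you could synchronize on the session object instead).

```java
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: a Map stands in for HttpSession. computeIfAbsent makes
// the create-if-missing step atomic, so concurrent callers always get
// the same instance.
public class SharedDataSketch implements Serializable {

    private static final long   serialVersionUID = 1L;
    private static final String SESSION_DATA_ID  = "sharedSessionData";

    private String ipcValue1;

    private SharedDataSketch() {}

    public static SharedDataSketch getInstance(Map<String, Object> session) {
        // Creates the shared instance at most once per "session" map.
        return (SharedDataSketch) session.computeIfAbsent(
                SESSION_DATA_ID, key -> new SharedDataSketch());
    }

    public String getIpcValue1() { return ipcValue1; }

    public void setIpcValue1(String ipcValue1) { this.ipcValue1 = ipcValue1; }

    public static void main(String[] args) {
        Map<String, Object> session = new ConcurrentHashMap<>();
        SharedDataSketch a = SharedDataSketch.getInstance(session);
        SharedDataSketch b = SharedDataSketch.getInstance(session);
        System.out.println(a == b); // both calls return the same instance
    }
}
```

In the WLP version above the occasional duplicate instance is usually harmless (the second one simply wins), but if your shared object is expensive to build or holds state from construction time, the atomic variant is the safer bet.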

Now we will create a page flow controller that uses our SharedData object to store a value submitted by a form. While a form bean is potentially redundant for our needs, we'll cheat and use one anyway, as the JSP wizards in Workshop make building a form from the form bean only a moment's work. As the "Advanced" in the title of this article assumes you already know how to build basic portlets, we'll just look at the relevant parts of the code (you can always download the example application to see the full code):

public Forward begin(IpcDemo3FormBean form)
{
    HttpServletRequest outerRequest =
            ScopedServletUtils.getOuterRequest(getRequest());
    SharedData sharedData =
            SharedData.getInstance(outerRequest.getSession());

    if (form != null && form.getIpcValue() != null)
    {
        sharedData.setIpcValue1(form.getIpcValue());
    }
    return new Forward("default");
}

The outerRequest reference may be unfamiliar if you haven't had to deal with request objects in WLP before. WLP breaks up the request object before providing it to portlets, scoping the request variables down to the portlet level. By using org.apache.beehive.netui.pageflow.scoping.ScopedServletUtils you can gain access to the full request, which is what you need for another portlet to be guaranteed access to your object.
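To make the scoping behavior concrete, here is a small self-contained illustration (my own sketch, not WLP code): maps stand in for portlet-scoped requests and the shared outer session, showing why an attribute set in one portlet's request never reaches another portlet, while the outer session is visible to both.

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only: WLP hands each portlet its own scoped view of the
// request, so request attributes set by portlet A are invisible to
// portlet B. The outer session, reached via
// ScopedServletUtils.getOuterRequest(...).getSession(), is shared.
public class RequestScopingDemo {

    public static void main(String[] args) {
        Map<String, Object> portletARequest = new HashMap<>();
        Map<String, Object> portletBRequest = new HashMap<>();
        Map<String, Object> outerSession    = new HashMap<>();

        // Portlet A stores a value in its own (scoped) request...
        portletARequest.put("ipcValue", "hello");

        // ...but portlet B's scoped request never sees it.
        System.out.println(portletBRequest.get("ipcValue")); // null

        // Storing it in the shared outer session makes it visible to both.
        outerSession.put("ipcValue", "hello");
        System.out.println(outerSession.get("ipcValue")); // hello
    }
}
```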

Finally, let’s create the portlet that will get this data and display it (or whatever else you want to do with the data):

public Forward begin(common.formbeans.IpcDemo3FormBean form)
{
    HttpServletRequest outerRequest =
            ScopedServletUtils.getOuterRequest(getRequest());
    SharedData sharedData =
            SharedData.getInstance(outerRequest.getSession());

    if (sharedData.getIpcValue1() != null)
    {
        form.setIpcValue(sharedData.getIpcValue1());
    }
    return new Forward("success");
}

That seemed too easy. Actually, it is too easy. The most frequent cause of bugs in this type of IPC is a receiver that expects there to always be a value. Even if you have a fairly well-orchestrated workflow to get the portlet a value it can use, our users are frequently adept at finding ways around such good intentions, and then getting mad at us for not anticipating it. In this particular example, we will handle the missing value in the JSP like this:

<netui:label defaultValue="No IPC Value Set" value="${actionForm.ipcValue}" />

Of course, this works only because all we are doing with the value is displaying it. Your mileage may vary, but as long as you remember that the value may be null, your portlet will not crash.
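If the receiver does more than display the value, the same fallback is worth pulling out of the JSP and into Java. A minimal sketch (the helper name is mine, not from the example application) of the defensive default:

```java
// Sketch: supply a placeholder when the sender portlet has not run yet,
// instead of letting a null propagate into the rest of the action logic.
public class NullSafeReceiver {

    static String displayValue(String ipcValue1) {
        return (ipcValue1 != null) ? ipcValue1 : "No IPC Value Set";
    }

    public static void main(String[] args) {
        System.out.println(displayValue(null));       // No IPC Value Set
        System.out.println(displayValue("order-42")); // order-42
    }
}
```

This mirrors what the netui:label defaultValue attribute does for pure display, but works anywhere in the controller.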

So, we place our page flows into portlets, build, deploy, go to the consumer PAT, add the new portlets (or a remote page already containing both) from the remote producer to the library, place them on our desktop, and present users first with the sender portlet's input form:

[Screenshot: sender portlet form]

Then show them what they want:

[Screenshot: receiver portlet displaying the submitted value]

In conclusion, once you have federated your first portlet, you will probably be asked to federate more portlets, then pages of portlets, and possibly books of portlets. Once you start consuming pages and books, you have essentially created a portal within a portal, and someone will want it to act like one, for example by sharing data between remote portlets. Now you can.

Shortly after you demonstrate your ability to all of the above, they will probably want you to have even more inter-portlet communication with more data, get some of that data you are putting into session out of the session once it has been passed, pass some of the data through hyperlinks instead of forms, and then navigate from one remote page to another by choosing an action in a portlet. That’s ok, though. We’ll cover that in part three.

© Scott S. Nelson