In this Document (Doc ID 402144.1)


Purpose

Questions and Answers
 Question 1:  Is it acceptable to modify seeded definitions of workflow process and activities while doing customizations?
 Question 2:  When I make design changes and upload the latest version of a custom workflow definition file to the database using WFLOAD, will existing open workflow items automatically reflect the changes I made in Workflow Builder?
 Question 3:  If my workflow runtime data is huge or voluminous, what could I do to optimize the online or batch performance for Workflow processing?
 Question 4:  What can be the impact of poor conditional looping logic design (that is, infinite loops, or repeated loops with low wait times)?
 Question 5:  How do we design good customizable workflow wait activities?
 Question 6:  How is an infinite loop situation created for FNDWFBG concurrent program?
 Question 7:  How can we avoid infinite loop situations while executing workflow activities in the background?
 Question 8:  What other checks-and-balances best practices should I follow to make sure that poor coding does not cause data corruption or out-of-sync situations between Workflow and Order Management (or any other business module being used)?

References

Applies to:

Oracle Order Management - Version 11.5.2 to 12.1 [Release 11.5.2 to 12.1]
Information in this document applies to any platform.
Checked for relevance on 12-Jul-2010
Checked for relevance on 22-Jan-2012


Purpose

Translated versions:
- Japanese: N/A
- Chinese: N/A
- Korean: Document 1521121.1



These FAQs touch upon some best-practice considerations for designing custom workflows. Most of the questions are generic, but a few are specific to the Order Entry module.

Questions and Answers

Question 1:  Is it acceptable to modify seeded definitions of workflow process and activities while doing customizations?

It is not good practice to modify seeded definitions of WF processes/activities. Instead, they should be copied and the copies modified. If the seeded process/activity definitions are not copied, future OE workflow patches could very well override the customizations of the seeded WF processes or activities, depending on the Protection Level defined in Workflow Builder, and the customizations can be lost.

Question 2:  When I make design changes and upload the latest version of a custom workflow definition file to the database using WFLOAD, will existing open workflow items automatically reflect the changes I made in Workflow Builder?

A point that is often overlooked, or not understood all that well, is that runtime workflow data created under previous versions of the WF design is not automatically synchronized with the new or changed WF design after WFLOAD.

This can cause behaviour differences, ranging from trivial to significant, in the WF runtime data of some OEOL/OEOH WF items across versions. We need to recognize this while analyzing workflow runtime information.
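For reference, a typical WFLOAD upload of a custom definition file looks like the following (the apps password and the .wft path are placeholders; substitute your own). Running it only changes the design-time definition; existing open items keep running on the version they were started with:

```
WFLOAD apps/<apps_password> 0 Y FORCE $XXONT_TOP/patch/115/import/US/xxont_custom.wft
```

FORCE mode uploads the definition regardless of version; UPGRADE and UPLOAD modes honor protection/customization levels and are generally safer for seeded item types.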

Question 3:  If my workflow runtime data is huge or voluminous, what could I do to optimize the online or batch performance for Workflow processing?

To manage the volume or growth of workflow data, you can explore the options described in the "Managing Runtime Data for Performance" section of the Oracle Workflow Administrator's Guide Release 2.6.3, Part Number B10283-02 (http://docs.oracle.com/cd/B14117_01/workflow.101/b10283/perfor02.htm#rundperf).

Another supporting Oracle blog by ATG can be found at http://blogs.oracle.com/schan/2006/11/17#a980

Performance issues associated with large quantities of runtime data can be addressed by:

* Partitioning

You can consider implementing partitioning of the WF_ITEMS, WF_ITEM_ACTIVITY_STATUSES and WF_ITEM_ACTIVITY_STATUSES_H tables to avoid online or batch FNDWFBG performance issues.

It may be a good idea to evaluate composite range-hash partitioning on the WF_ITEMS, WF_ITEM_ACTIVITY_STATUSES and WF_ITEM_ACTIVITY_STATUSES_H tables, with range partitioning on ITEM_TYPE and hash subpartitioning (for example, on ITEM_KEY).

If you run a very active Order Management system, you would also do well to gather table statistics on the WF_ITEMS, WF_ITEM_ACTIVITY_STATUSES and WF_ITEM_ACTIVITY_STATUSES_H tables on a daily basis. You can experiment with a 10% sample, or even a higher percentage (say 70%), and check online response times for OM business flows.
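As a sketch, daily statistics gathering on the three runtime tables can be done with the standard EBS FND_STATS API (the 10% sample below is a starting point to experiment with, not a recommendation; the tables are owned by APPLSYS):

```sql
-- Sketch: gather statistics on the WF runtime tables at a 10% sample.
-- Schedule this (or the equivalent Gather Table Statistics concurrent
-- program) to run daily on an active OM system.
begin
  fnd_stats.gather_table_stats(ownname => 'APPLSYS',
                               tabname => 'WF_ITEMS',
                               percent => 10);
  fnd_stats.gather_table_stats(ownname => 'APPLSYS',
                               tabname => 'WF_ITEM_ACTIVITY_STATUSES',
                               percent => 10);
  fnd_stats.gather_table_stats(ownname => 'APPLSYS',
                               tabname => 'WF_ITEM_ACTIVITY_STATUSES_H',
                               percent => 10);
end;
/
```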

* Purging

The key is "effective" purging.

The Purge Obsolete Workflow Runtime Data concurrent program uses the following query to determine which WF items are eligible to be purged, given an item type and age (in days). It checks whether any ancestor (parent, grandparent, etc.) or descendant (child, grandchild) WF items of the given WF item (say OEOL) are still active, e.g. WFERROR/OMERROR. If so, that OEOH/OEOL WF item is not eligible for purging:

select min(WI.END_DATE), max(WI.END_DATE), count(1)
from WF_ITEMS WI
where WI.ITEM_TYPE = '&itemtype'
and WI.END_DATE <= sysdate - &&age
and exists
  (select null
   from WF_ITEM_TYPES WIT
   where WI.END_DATE + nvl(WIT.PERSISTENCE_DAYS,0) <= sysdate - &&age
   and WI.ITEM_TYPE = WIT.NAME
   and WIT.PERSISTENCE_TYPE = 'TEMP')
and not exists
  (select null
   from WF_ITEMS WI2
   where WI2.END_DATE is null
   start with WI2.ITEM_TYPE = WI.ITEM_TYPE
          and WI2.ITEM_KEY = WI.ITEM_KEY
   connect by prior WI2.ITEM_TYPE = WI2.PARENT_ITEM_TYPE
          and prior WI2.ITEM_KEY = WI2.PARENT_ITEM_KEY
   union all
   select null
   from WF_ITEMS WI2
   where WI2.END_DATE is null
   start with WI2.ITEM_TYPE = WI.ITEM_TYPE
          and WI2.ITEM_KEY = WI.ITEM_KEY
   connect by prior WI2.PARENT_ITEM_TYPE = WI2.ITEM_TYPE
          and prior WI2.PARENT_ITEM_KEY = WI2.ITEM_KEY)
order by WI.END_DATE
/

Additionally, you can check OM patches 5601698 and 5604904, which may make more OM WF items eligible for purging. It is also a good idea to check Note 398822.1 periodically for the generic data fix scripts made available to all customers. Root cause fix patches may also be available through links in Note 398822.1.

See note: How To Use The New Concurrent Program "Purge Order Management Workflow" (Doc ID 878032.1)

Question 4:  What can be the impact of poor conditional looping logic design (that is, infinite loops, or repeated loops with low wait times)?

Loops, specifically unqualified loops, and even loops built from WF activities with low timeout attribute values, are among the biggest enemies of online Order Management WF performance. The more such looping logic you have in your custom WF design, the more records are created in the WF_ITEM_ACTIVITY_STATUSES and WF_ITEM_ACTIVITY_STATUSES_H tables.

A typical example is a loop in a custom process, say XXONT_CUSTOM_ATP, in which a condition-checking activity and a short Wait activity cycle repeatedly until the condition is met.

This is a vicious cycle. The more data you have in the WF_ITEM_ACTIVITY_STATUSES and WF_ITEM_ACTIVITY_STATUSES_H tables, the more time it takes to query and insert into them (also, the more indexes you have, the more time deletes/updates/inserts take, due to the index maintenance required). At times it simply kills online performance.

Instead of having conditional loops with low timeouts, or even infinite/unqualified loops, you should consider using the OE_STANDARD_WF.STANDARD_BLOCK API, which is a much more attractive option. Any OE WF item blocked on OE_STANDARD_WF.STANDARD_BLOCK can be progressed through the OE form via the "Actions" button > Progress Order, which gives the user more flexibility. When OEOL line processing has to be progressed programmatically, the wf_engine.CompleteActivity API is called.

Adopting this approach creates only 1 record in the WF_ITEM_ACTIVITY_STATUSES table and 2 records in the WF_ITEM_ACTIVITY_STATUSES_H table. Contrast this with continuous looping until a condition is met: for each execution of the WF logic, records are created in the WF_ITEM_ACTIVITY_STATUS* tables, adding unnecessary volume that will later come back to haunt performance.
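As an illustration, progressing a blocked OEOL line programmatically is a single wf_engine.CompleteActivity call. In this sketch the item key value and the activity node name XXONT_CUSTOM_BLOCK are placeholders for your own design (the node would be based on OE_STANDARD_WF.STANDARD_BLOCK in the custom process):

```sql
declare
  l_itemkey varchar2(240) := '12345';  -- OEOL item key (line_id); placeholder value
begin
  -- Complete the blocking activity so the line continues past the block.
  -- 'XXONT_CUSTOM_BLOCK' is a hypothetical activity node name.
  wf_engine.completeactivity('OEOL', l_itemkey, 'XXONT_CUSTOM_BLOCK', null);
end;
/
```

The result code is null here because a plain block has no result lookup; if your block activity defines a result type in Workflow Builder, pass the appropriate result code instead.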


Question 5:  How do we design good customizable workflow wait activities?

Whenever possible, the standard Wait activity (wf_standard.wait()) should not be placed in a loop with a small wait time (shorter than several hours); an alternative workflow design should be found.

The standard Wait activity with a wait time of zero or near zero, and the standard Defer activity (wf_standard.defer()), should NEVER be placed in a loop in a workflow process design.

Question 6:  How is an infinite loop situation created for FNDWFBG concurrent program?

Assume there is a standard Wait activity with a very short timeout/wait time in a loop with a condition-checking activity. Each time the condition is checked, the activity is deferred or timed out for a very short time.

Meanwhile, the Workflow Background Process is busy processing all other deferred or timed-out activities. If FNDWFBG has not finished processing those other activities by the time the wait/timeout of the earlier activities expires, the re-timed-out or re-deferred activities are enqueued and processed AGAIN by the very same FNDWFBG job (Workflow Background Process for, say, item_type=OEOL/OEOH) over and over, for as long as the condition (e.g. "line on hold") is met.

This creates an infinite loop: the FNDWFBG process almost never completes, eating up CPU time, generating massive DBMS archive logs (if archiving is turned on), and pumping rows into the WF activity status tables. It also creates unnecessary rows in the WF_ITEM_ACTIVITY_STATUSES_H table, which can become the nemesis of online order entry performance in the 11i EBS system later on.
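One way to spot such runaway loops after the fact is to look for activity nodes with an abnormally high execution count in the history table, since WF_ITEM_ACTIVITY_STATUSES_H keeps one row per re-execution of a node. This is a diagnostic sketch; the threshold of 1000 is arbitrary and should be tuned for your volumes:

```sql
-- Find item types / activity nodes that have looped heavily.
select s.item_type,
       pa.activity_name,
       count(*) as executions
  from wf_item_activity_statuses_h s,
       wf_process_activities pa
 where pa.instance_id = s.process_activity
 group by s.item_type, pa.activity_name
having count(*) > 1000        -- arbitrary threshold
 order by executions desc;
```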

Question 7:  How can we avoid infinite loop situations while executing workflow activities in the background?

It is a good idea for workflow activities in custom workflows to have a wait/timeout time greater than the longest normal execution time of the Workflow Background Process.

Also, a workflow activity with a near-zero Relative (Wait) Time, or with Wait Mode set to Absolute Date but no Absolute Date specified (which is equivalent to a zero Relative Wait Time), can cause the same infinite loop for FNDWFBG runs (Workflow Background Process for, say, item_type=OEOL/OEOH) with Process Deferred = Yes.

Hence, these poor design practices should be avoided. Remember: once runtime data has been created for a particular WF definition, correcting and re-uploading the workflow definition later will not automatically correct the previous runtime data. It is therefore prudent to build in good design considerations before letting your production data run on it.

The best solution is not to use a Wait activity in a loop at all, but to use an alternative design, for example the blocking activity mentioned in the answer to Question 4. Extending the wait time should be used only if that solution is unacceptable for a very good reason.

Question 8:  What other checks-and-balances best practices should I follow to make sure that poor coding does not cause data corruption or out-of-sync situations between Workflow and Order Management (or any other business module being used)?

The Oracle Workflow Developer's Guide clearly states that transaction control statements such as COMMIT and ROLLBACK must not be issued from any of the APIs called by custom workflow activities.

This rule preserves the atomicity and transactional consistency of processing across a series of workflow processes/activities.

Customizations are essentially codelets executed by Oracle's standard routines, which take care of commits and rollbacks appropriately. Committing is done by the WF caller, which is either a concurrent program (Workflow Background Process) or an Apps UI session.
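The standard shape of a custom activity function makes the point: it sets a result and returns, and never issues COMMIT or ROLLBACK, because the engine or the calling form/concurrent program owns the transaction. The package, procedure and result names below are hypothetical:

```sql
-- Hypothetical custom activity function; note there is no COMMIT/ROLLBACK.
procedure xxont_check_credit (itemtype  in varchar2,
                              itemkey   in varchar2,
                              actid     in number,
                              funcmode  in varchar2,
                              resultout in out nocopy varchar2)
is
begin
  if funcmode = 'RUN' then
    -- ... business logic against the order line identified by itemkey ...
    resultout := 'COMPLETE:Y';   -- result code as defined in Workflow Builder
    return;
  end if;
  -- CANCEL / TIMEOUT and other modes: nothing to do in this sketch
  resultout := 'COMPLETE';
exception
  when others then
    -- Standard pattern: record the error context and re-raise so the
    -- engine can transition the item to its error process.
    wf_core.context('XXONT_PKG', 'xxont_check_credit', itemtype, itemkey,
                    to_char(actid), funcmode);
    raise;
end xxont_check_credit;
```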

Incidentally, it is also not prudent to have commits/rollbacks in most other customization methods, including, but not limited to, custom defaulting APIs, CUSTOM.pld modifications and recursive APIs called through it.

Violating this basic tenet can easily interrupt master/detail synchronization between OEOH and child OEOL workflow items, and can leave OEOH/OEOL workflow items stuck in ACTIVE status. Essentially, it will cause either workflow data corruption or out-of-sync situations between Order Management data and the corresponding workflow data.

OM now provides functionality that validates the workflow processes associated with order or line types against common or serious custom design flaws. This functionality is described in Note 312535.1. If you do not already have it, it is best obtained by applying the latest OM Cumulative Patch for 11i9 or 11i10, as specified on the Recommended Patch List for OM.