Tracing Techniques for Pega Data Flows
Introduction
This document captures lessons learned about Data Flow tracing techniques, for future reuse. It shows how to run the Pega Tracer to trace the execution of a Pega Data Flow run, and how to inspect the values populated on the records as they pass through the Data Flow run.
Data Flow
“Data flows allow you to sequence and combine data based on various sources and write the results to a destination. The sequence itself is established through a set of instructions and execution points from source to destination. Source and destination points can be abstract or driven by data sets and other decision data flows. Between source and destination, you can apply compose, convert, and merge strategy execution instructions.
The execution of data flows is done through work items which are instances of Pega-DM-DDF-Work and stored in the pc_work_dsm_batch table. The processing of data flows can be divided into two categories:
- Batch data flows using a database table as main input.
- Real-time data flows that, in active state, continue processing incoming stream data and requests made through the available service interface.
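As noted above, data flow runs are tracked as work items stored in the pc_work_dsm_batch table, so their status can be inspected with a plain database query. The sketch below illustrates this idea against an in-memory SQLite stand-in for that table; the column names (pyID, pxObjClass, pyStatusWork) are standard Pega work columns, but the concrete subclass names and status values shown are illustrative assumptions, not taken from a real environment.

```python
# Illustrative sketch only: an in-memory SQLite table stands in for the
# Pega database's pc_work_dsm_batch table. In a real environment you
# would run an equivalent SELECT via JDBC or your database client.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE pc_work_dsm_batch (
           pyID TEXT,          -- work item ID
           pxObjClass TEXT,    -- concrete work class (assumed names below)
           pyStatusWork TEXT   -- run status
       )"""
)
conn.executemany(
    "INSERT INTO pc_work_dsm_batch VALUES (?, ?, ?)",
    [
        ("DF-1", "Pega-DM-DDF-Work-Batch", "Completed"),
        ("DF-2", "Pega-DM-DDF-Work-Batch", "In Progress"),
        ("DF-3", "Pega-DM-DDF-Work-RealTime", "In Progress"),
    ],
)

# List runs that are still processing, similar to what the
# Data Flows landing page surfaces for active runs.
active = conn.execute(
    "SELECT pyID, pxObjClass FROM pc_work_dsm_batch "
    "WHERE pyStatusWork = 'In Progress' ORDER BY pyID"
).fetchall()
print(active)
```

Querying the table directly is useful when the landing page is unavailable or when you need to correlate run status with other database records during tracing.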
Data flow runs that are initiated through the Data Flows landing page run in the access group context. These data flows always use the checked-in instance of the Data Flow rule and the referenced rules. You can use a checked-out instance of the Data Flow if you initiate a local data flow run (by using the Run action in the Data Flow rule form) or a test run (a run initiated through the API).”
More about Pega Data Flows: https://community.pega.com/sites/default/files/help_v83/procomhelpmain.htm#rule-/rule-decision-/rule-decision-ddf/main.htm#Types_of_data_flows