
Monday 14 November 2011

Process Control / Audit of Workflows in Informatica

1. Process Control – Definition
Process control, or auditing, of a workflow in Informatica means capturing job information such as start time, end time, read count, insert count, update count and delete count. This information is captured and written into an audit table as the workflow executes.
2. Structure of Process Control/Audit table
The table structure of the process control table is given below.

Process Control structure

Column Name    | Data Type   | Length | Description
PROCESS_RUN_ID | Number(p,s) | 11     | A unique number used to identify a specific process run.
PROCESS_NME    | Varchar2    | 120    | The name of the process (this column will be populated with the names of the Informatica mappings).
START_TMST     | Date        | 19     | The date/time when the process started.
END_TMST       | Date        | 19     | The date/time when the process ended.
ROW_READ_CNT   | Number(p,s) | 16     | The number of rows read by the process.
ROW_INSERT_CNT | Number(p,s) | 16     | The number of rows inserted by the process.
ROW_UPDATE_CNT | Number(p,s) | 16     | The number of rows updated by the process.
ROW_DELETE_CNT | Number(p,s) | 16     | The number of rows deleted by the process.
ROW_REJECT_CNT | Number(p,s) | 16     | The number of rows rejected by the process.
USER_ID        | Varchar2    | 32     | The ETL user identifier associated with the process.
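The original post does not give the physical DDL; a minimal Oracle sketch consistent with the structure above might look like the following (precision/scale values and the sequence name are assumptions):

```sql
-- Illustrative Oracle DDL for the process control / audit table.
-- Column names follow the structure above; exact precision and scale are assumptions.
CREATE TABLE process_control
(
    process_run_id   NUMBER(11)      NOT NULL,
    process_nme      VARCHAR2(120)   NOT NULL,
    start_tmst       DATE,
    end_tmst         DATE,
    row_read_cnt     NUMBER(16),
    row_insert_cnt   NUMBER(16),
    row_update_cnt   NUMBER(16),
    row_delete_cnt   NUMBER(16),
    row_reject_cnt   NUMBER(16),
    user_id          VARCHAR2(32),
    CONSTRAINT pk_process_control PRIMARY KEY (process_run_id)
);

-- Assumption: run ids could also come from a database sequence; the mapping described
-- below generates them with an Informatica Sequence Generator instead.
CREATE SEQUENCE process_run_seq START WITH 1 INCREMENT BY 1;
```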

3.  Mapping Logic and Build Steps
The process control flow has two data flows: an insert flow and an update flow. The insert flow runs before the main mapping and the update flow runs after the main mapping; this ordering is set in the “Target Load Plan”. The source for both flows can be a dummy source that returns a single record, for example select ‘process’ from dual or select count(1) from the table above. The following mapping variables need to be created:


Mapping parameters and variables

$$PROCESS_RUN_ID
$$PROCESS_NAME
$$READ_COUNT
$$INSERT_COUNT
$$UPDATE_COUNT
$$DELETE_COUNT
$$REJECT_COUNT




Steps to create the insert flow:
  1. Use “select ‘process’ from dual” as the SQL query in the Source Qualifier
  2. Use a Sequence Generator to create running process_run_id values
  3. In an Expression transformation, SetVariable($$PROCESS_RUN_ID, NEXTVAL) and assign $$PROCESS_NAME to o_process_name, an output-only port
  4. In an Expression transformation, assign $$SessionStartTime to o_start_time, an output-only port
  5. In an Expression transformation, accept the sequence id from the Sequence Generator
  6. Insert into the target ‘Process Control’ table with all the above values
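For clarity, the net effect of the insert flow on the audit table is roughly the statement below. This is illustration only: in the mapping the work is done by the Sequence Generator, Expression and target instance, and the literal values are the sample ones from the image that follows.

```sql
-- Net effect of the insert flow on the audit table (sketch, not how the mapping runs it).
INSERT INTO process_control
    (process_run_id, process_nme, start_tmst, user_id)
VALUES
    (process_run_seq.NEXTVAL,   -- running run id (Sequence Generator NEXTVAL in the mapping)
     'VENDOR_DIM_LOAD',         -- $$PROCESS_NAME
     SYSDATE,                   -- $$SessionStartTime
     'INFA8USER');              -- ETL user associated with the process
```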
Process Control image after the insert flow

Column         | Value
PROCESS_RUN_ID | 1
PROCESS_NME    | VENDOR_DIM_LOAD
START_TMST     | 8/23/2009 12:23
END_TMST       |
ROW_READ_CNT   |
ROW_INSERT_CNT |
ROW_UPDATE_CNT |
ROW_DELETE_CNT |
ROW_REJECT_CNT |
USER_ID        | INFA8USER

Steps in the main mapping:
  1. After the Source Qualifier, increment a read count variable (v_read_count) in an Expression transformation for each record read, and SetMaxVariable($$READ_COUNT, v_read_count)
  2. Before the Update Strategy of the target instances, do the same for the insert/update/delete counts; all the variables are now set with their respective counts


Steps to create the update flow:
  1. Use “select ‘process’ from dual” as the SQL query in the Source Qualifier
  2. Use SetMaxVariable to get the process_run_id created in the insert flow
  3. In an Expression transformation, assign $$INSERT_COUNT to o_insert_count, an output-only port; assign all the other counts in the same way
  4. In an Expression transformation, assign $$SessionEndTime to o_end_time, an output-only port
  5. Update the target ‘Process Control’ table with all the above values where process_run_id equals the process_run_id generated in the insert flow
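Again as illustration only, the net effect of the update flow on the audit table is roughly the update below; the counts are the sample values from the image that follows, and in the mapping this is done through an Update Strategy on the Process Control target rather than direct SQL.

```sql
-- Net effect of the update flow on the audit table (sketch with sample values).
UPDATE process_control
SET    end_tmst       = SYSDATE,   -- $$SessionEndTime
       row_read_cnt   = 1000,      -- $$READ_COUNT
       row_insert_cnt = 900,       -- $$INSERT_COUNT
       row_update_cnt = 60,        -- $$UPDATE_COUNT
       row_delete_cnt = 40,        -- $$DELETE_COUNT
       row_reject_cnt = 0          -- $$REJECT_COUNT
WHERE  process_run_id = 1;         -- run id generated in the insert flow
```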
Process Control image after the update flow

Column         | Value
PROCESS_RUN_ID | 1
PROCESS_NME    | VENDOR_DIM_LOAD
START_TMST     | 8/23/2009 12:23
END_TMST       | 8/23/2009 12:30
ROW_READ_CNT   | 1000
ROW_INSERT_CNT | 900
ROW_UPDATE_CNT | 60
ROW_DELETE_CNT | 40
ROW_REJECT_CNT | 0
USER_ID        | INFA8USER

4. Merits over Informatica Metadata
This information is also available in the Informatica metadata; however, maintaining it within our own system has the following benefits:
  • No need to write complex queries to bring the data in from the metadata tables
  • Job names need not be mapping names and can be user-friendly names
  • Insert/update/delete counts can be audited for all targets together as well as for each individual target
  • The audit information can be maintained outside the metadata security level and can be used by other mappings in their transformations
  • Can be used by mappings that build parameter files
  • Can be used by mappings that govern data volume
  • Can be used by production support to quickly find out the status of a load
You are welcome to share your thoughts/suggestions to improve this “Infa Tip”.


Data Integration Challenge – Capturing Changes

When we receive data from source systems, the data file does not carry a flag indicating whether a record is new or has changed. We need to build a process to determine the changes and then push them to the target table.
There are two steps to this:
  1. Pull the incremental data from the source file or table
  2. Process the pulled incremental data and determine its impact on the target table as an insert, update or delete

Step 1: Pull the incremental data from the source file or table

If the source system has audit columns such as a date, we can identify the new records; otherwise we cannot isolate the new records and have to process the complete data set.
For a source file or table that has audit columns, we would follow the steps below:
  1. While reading the source records for a day (session), find the maximum value of the date (audit field) and store it in a persistent variable or a temporary table
  2. Use this persistent variable value as a filter the next day to pull the incremental data from the source table (see the sketch below)
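As a rough sketch, assuming the source table is named src_orders and its audit column is last_update_dt (both hypothetical names), the incremental pull reduces to a filtered query such as:

```sql
-- Incremental pull sketch: src_orders and last_update_dt are illustrative names.
-- $$LAST_EXTRACT_DATE stands for the persistent mapping variable (or temporary-table value)
-- that holds the maximum audit date captured during the previous run.
SELECT *
FROM   src_orders
WHERE  last_update_dt > TO_DATE('$$LAST_EXTRACT_DATE', 'YYYY-MM-DD HH24:MI:SS');

-- After the session, the variable is refreshed with the new maximum, e.g.
-- SELECT MAX(last_update_dt) FROM src_orders;   -- stored back into $$LAST_EXTRACT_DATE
```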

Step 2: Determine the impact of the record on the target table as Insert/Update/Delete

Following are the scenarios we would face and the suggested approach:
  1. The data file has only incremental data from Step 1, or the source itself provides only incremental data
    • do a lookup on the target table and determine whether it is a new record or an existing record
    • if it is an existing record, compare the required fields to determine whether it is an updated record
    • have a process to find the aged records in the target table and do a cleanup for ‘deletes’
  2. The data file has the full, complete data because no audit columns are present
    • The data is of higher volume
      • keep a backup of the previously received file
      • compare the current file with the prior file and create a ‘change file’ by determining the inserts, updates and deletes; ensure both the ‘current’ and ‘prior’ files are sorted by the key fields (a sketch of this comparison is shown after the list)
      • have a process that reads the ‘change file’ and loads the data into the target table
      • based on the ‘change file’ volume, decide whether to do a ‘truncate & load’ instead
    • The data is of lower volume
      • do a lookup on the target table and determine whether it is a new record or an existing record
      • if it is an existing record, compare the required fields to determine whether it is an updated record
      • have a process to find the aged records in the target table and do a cleanup or delete
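A minimal sketch of the current-versus-prior comparison, assuming both files have been staged into tables stg_current and stg_prior (hypothetical names) keyed on cust_id:

```sql
-- Change detection sketch: stg_current / stg_prior and the column names are assumptions.
-- Rows only in the current set are inserts; only in the prior set are deletes;
-- in both sets with differing attributes are updates.
SELECT COALESCE(c.cust_id, p.cust_id) AS cust_id,
       CASE
           WHEN p.cust_id IS NULL THEN 'INSERT'
           WHEN c.cust_id IS NULL THEN 'DELETE'
           ELSE 'UPDATE'
       END AS change_flag
FROM   stg_current c
       FULL OUTER JOIN stg_prior p
              ON c.cust_id = p.cust_id
WHERE  p.cust_id IS NULL
   OR  c.cust_id IS NULL
   OR  c.cust_name   <> p.cust_name      -- compare the required fields; use a NULL-safe
   OR  c.cust_status <> p.cust_status;   -- comparison (e.g. NVL) for nullable columns
```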

You are welcome to share your thoughts/suggestions to improve this “Infa Tip”.