Sunday, June 1, 2014

How to load data from a single source into four different targets, based on the session run, through a single mapping.


Scenario 1: How to load data from a single source into four different targets, based on the session run, through a single mapping.

Ex: One Source (Emp Table)
      1st run of Session-->EMP_TGT1
      2nd run of Session-->EMP_TGT2
      3rd run of Session-->EMP_TGT3
      4th run of Session-->EMP_TGT4
      5th run of Session-->EMP_TGT1
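In other words, the runs cycle through the four targets: the target number for run N is ((N - 1) mod 4) + 1, so run 5 wraps back to EMP_TGT1. A minimal SQL sketch of the cycle (Oracle-style; :run_number is a hypothetical bind variable used only for illustration):

-- Target number for a given session run; cycles 1, 2, 3, 4, 1, ...
SELECT MOD(:run_number - 1, 4) + 1 AS target_number
FROM dual;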

Solution:

Theoretical
We need:
A) A Mapping Variable to keep a counter of mapping runs.
B) Two instances of the source: the first instance is treated as the actual source for loading data; the second instance feeds the Aggregator transformation.
C) An Aggregator to find the total number of records in the source. Because no port is selected as Group By, it returns the last record along with the total number of records.
D) A Joiner to join both Source Qualifier pipelines on EMPNO. Because it is a MASTER OUTER join, it returns the matching records from the master and all records from the detail pipeline.
E) An Expression to count every record that passes through it. When the running count in the Expression matches the total number of records, it increments the mapping variable counter by 1 (see the SQL sketch just after this list).
F) A Router to route records based on the mapping variable counter value.
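What steps C through E compute is, in effect, a running row count compared against the total row count. As a rough SQL sketch of the same idea (analytic functions over the EMP table; this is only an analogy, not what PowerCenter generates):

-- Attach the total record count and a running count to every EMP row.
-- The row where rec_count = total_record is the last record; that is
-- the point at which the mapping bumps the $$TGT_HIT counter.
SELECT e.*,
       COUNT(*) OVER ()                   AS total_record,
       ROW_NUMBER() OVER (ORDER BY empno) AS rec_count
FROM emp e;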

Practical
1. Create a Mapping Variable $$TGT_HIT (Integer, Aggregation=Count)-->initialize it with 1 (Initial Value=1)

Step 1: Drag your source into the mapping twice.

Step 2: Create an Aggregator and connect the second Source Qualifier to it.
Aggregator
A) Create one port: Total_Record (Output Port)-->COUNT(EMPNO)
B) Do not select any port as a Group By port.

Step 3: Create a Joiner and connect the first Source Qualifier to it.
Joiner
A) Connect all columns from the first Source Qualifier to the Joiner.
B) Connect the Total_Record and EMPNO columns from the Aggregator.
C) Set Join Type=MASTER OUTER JOIN
D) Condition: EMPNO1=EMPNO

Step 4: Create an Expression and connect all columns from the Joiner to it.
Expression
A) Create four ports:
v_rec (Number) (Variable Port)-->v_rec_count + 1
v_rec_count (Number) (Variable Port)-->v_rec
Var_Assign (Number) (Variable Port)-->
IIF(Total_Record = v_rec_count AND $$TGT_HIT = 1, SETVARIABLE($$TGT_HIT, 2),
  IIF(Total_Record = v_rec_count AND $$TGT_HIT = 2, SETVARIABLE($$TGT_HIT, 3),
    IIF(Total_Record = v_rec_count AND $$TGT_HIT = 3, SETVARIABLE($$TGT_HIT, 4),
      IIF(Total_Record = v_rec_count AND $$TGT_HIT = 4, SETVARIABLE($$TGT_HIT, 1)))))

NEXTVAL (Number) (Output Port)-->$$TGT_HIT

Step 5: Create a Router and connect all columns from the Expression to it.
Router
A) Create four groups:
FIRST_TARGET-->NEXTVAL=1
SECOND_TARGET-->NEXTVAL=2
THIRD_TARGET-->NEXTVAL=3
FOURTH_TARGET-->NEXTVAL=4

Step 6: Connect each group to its corresponding target.

Create Session and Workflow and BINGO!!!!


Saturday, November 16, 2013

TARGET UPDATE OVERRIDE - INFORMATICA

When you use an Update Strategy transformation in the mapping, or set the "Treat Source Rows As" session property to Update, the Informatica Integration Service updates a row in the target table whenever it finds a matching primary key in the target table.

The update strategy works only

  • when a primary key is defined in the target definition, and
  • when you want to update the target table based on that primary key.

What if you want to update the target table by a matching column other than the primary key? In this case the update strategy won't work. Informatica provides a feature, "Target Update Override", to update the target even on columns that are not part of the primary key.

You can find the Target Update Override option on the Properties tab of the target definition. The syntax of the update statement to be specified in Target Update Override is:
UPDATE TARGET_TABLE_NAME
SET TARGET_COLUMN1 = :TU.TARGET_PORT1,
    [Additional update columns]
WHERE TARGET_COLUMN = :TU.TARGET_PORT
AND   [Additional conditions]
Here :TU stands for Target Update and is used to reference the target ports.

Example: Consider the EMPLOYEES table. In the EMPLOYEES table, the primary key is EMPLOYEE_ID. Let's say we want to update the salary of the employees whose employee name is MARK. In this case we have to use the Target Update Override. The update statement to be specified is:

UPDATE EMPLOYEES
SET SALARY = :TU.SAL
WHERE EMPLOYEE_NAME = :TU.EMP_NAME
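Following the same syntax, here is a sketch with multiple SET columns and WHERE conditions (the COMM_PCT and DEPT_ID ports are hypothetical, added only to illustrate the [Additional ...] placeholders):

UPDATE EMPLOYEES
SET SALARY = :TU.SAL,
    COMMISSION_PCT = :TU.COMM_PCT
WHERE EMPLOYEE_NAME = :TU.EMP_NAME
AND   DEPARTMENT_ID = :TU.DEPT_ID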

Saturday, September 29, 2012

Regular Expressions

What Are Regular Expressions?
A regular expression is a pattern template you define that a Linux utility uses to filter text. A Linux utility (such as the sed editor or the gawk program) matches the regular expression pattern against data as that data flows into the utility. If the data matches the pattern, it is accepted for processing. If the data doesn't match the pattern, it is rejected. The regular expression pattern makes use of wildcard characters to represent one or more characters in the data stream.
Types of regular expressions:

There are two popular regular expression engines:
  • The POSIX Basic Regular Expression (BRE) engine
  • The POSIX Extended Regular Expression (ERE) engine
Defining BRE Patterns:
The most basic BRE pattern is matching text characters in a data stream.
Eg 1: Plain text

$ echo "This is a test" | sed -n ’/test/p’
This is a test.
$ echo "This is a test" | sed -n ’/trial/p’
$
$ echo "This is a test" | gawk ’/test/{print $0}’
This is a test.
$ echo "This is a test" | gawk ’/trial/{print $0}’
$
Eg 2: Special characters

The special characters recognized by regular expressions are:
.*[]^${}\+?|()
For example, if you want to search for a dollar sign in your text, just precede it with a backslash character:
$ cat data2
The cost is $4.00
$ sed -n '/\$/p' data2
The cost is $4.00
$
Eg 3: Looking for the ending

The dollar sign ($) special character defines the end anchor.

$ echo "This is a good book" | sed -n ’/book$/p’
This is a good book
$ echo "This book is good" | sed -n ’/book$/p’
$
Eg 4: Using ranges

You can use a range of characters within a character class by using the dash symbol. For example, you can match five-digit ZIP codes by specifying a range of digits:
$ sed -n '/^[0-9][0-9][0-9][0-9][0-9]$/p' data8
60633
46201
45902
$
Extended Regular Expressions:

The POSIX ERE patterns include a few additional symbols that are used by some Linux applications and utilities. The gawk program recognizes the ERE patterns, but the sed editor doesn’t.
Eg 1: The question mark

The question mark indicates that the preceding character can appear zero or one time, but that’s all. It doesn’t match repeating occurrences of the character:
$ echo "bt" | gawk ’/be?t/{print $0}’
bt
$ echo "bet" | gawk ’/be?t/{print $0}’
Bet
$ echo "beet" | gawk ’/be?t/{print $0}’
$
$ echo "beeet" | gawk ’/be?t/{print $0}’
$
Eg 2: The plus sign

The plus sign indicates that the preceding character can appear one or more times, but must be present at least once. The pattern doesn't match if the character is not present:
$ echo "beeet" | gawk ’/be+t/{print $0}’
beeet
$ echo "beet" | gawk ’/be+t/{print $0}’
beet
$ echo "bet" | gawk ’/be+t/{print $0}’
bet
$ echo "bt" | gawk ’/be+t/{print $0}’
$
Eg 3: The pipe symbol

The pipe symbol allows you to specify two or more patterns that the regular expression engine uses in a logical OR when examining the data stream. If any of the patterns match the data stream text, the text passes. If none of the patterns match, the data stream text fails.
The format for using the pipe symbol is:
expr1|expr2|...
Here’s an example of this:
$ echo "The cat is asleep" | gawk ’/cat|dog/{print $0}’
The cat is asleep
$ echo "The dog is asleep" | gawk ’/cat|dog/{print $0}’
The dog is asleep
$ echo "The sheep is asleep" | gawk ’/cat|dog/{print $0}’
$

Eg 4: Grouping expressions

When you group a regular expression pattern, the group is treated like a standard character. You can apply a special character to the group just as you would to a regular character.
For example:
$ echo "Sat" | gawk ’/Sat(urday)?/{print $0}’
Sat
$ echo "Saturday" | gawk ’/Sat(urday)?/{print $0}’
Saturday
$

Wednesday, August 1, 2012

Informatica Testing

Unit Test Plan (UTP)

Testing regimens:
1. Unit Testing
2. Functional Testing
3. System Integration Testing
4. User Acceptance Testing

Unit Testing: The testing, by development, of the application modules to verify that each unit (module) itself meets the accepted user requirements and the design and development standards.
Functional Testing: The testing of all the application modules individually to ensure that the modules, as released from development to QA, work together as designed and meet the accepted user requirements and system standards.
System Integration Testing: The testing of all the application modules in the same environment, database instance, network, and inter-related applications, as they would function in production. This includes security, volume, and stress testing.
User Acceptance Testing (UAT): The testing of the entire application by the end users, ensuring that the application functions as set forth in the system requirements documents and that the system meets the business needs.

UTP Template:

Columns: Step #, Description, Test Conditions, Expected Results, Actual Results / Pass or Fail (P or F), Tested By

Interface: SAP-CMS Interfaces

Step 1
Description: Check the total count of records fetched from the source tables against the total records in the PRCHG table for a particular session timestamp.
Test Conditions:
SOURCE:
SELECT count(*) FROM XST_PRCHG_STG
TARGET:
SELECT count(*) FROM _PRCHG
Expected Results: Both the source and target table load record counts should match.
Actual Results: Should be same as the expected
Pass or Fail: Pass
Tested By: Stev

Step 2
Description: Check whether all the target columns are populated correctly with the source data.
Test Conditions:
SELECT PRCHG_ID,
       PRCHG_DESC,
       DEPT_NBR,
       EVNT_CTG_CDE,
       PRCHG_TYP_CDE,
       PRCHG_ST_CDE
FROM T_PRCHG
MINUS
SELECT PRCHG_ID,
       PRCHG_DESC,
       DEPT_NBR,
       EVNT_CTG_CDE,
       PRCHG_TYP_CDE,
       PRCHG_ST_CDE
FROM PRCHG
Expected Results: The MINUS query should return zero records.
Actual Results: Should be same as the expected
Pass or Fail: Pass
Tested By: Stev

Step 3
Description: Check the Insert strategy for loading records into the target table.
Test Conditions: Identify one record from the source which is not in the target table, then run the session.
Expected Results: It should insert the record into the target table with the source data.
Actual Results: Should be same as the expected
Pass or Fail: Pass
Tested By: Stev

Step 4
Description: Check the Update strategy for loading records into the target table.
Test Conditions: Identify one record from the source which is already present in the target table with different PRCHG_ST_CDE or PRCHG_TYP_CDE values, then run the session.
Expected Results: It should update the record in the target table with the source data.

What are the differences between stored procedures and triggers?


Stored procedures are normally used for performing tasks, whereas triggers are normally used for tracing and auditing logs.

A stored procedure must be called explicitly by the user in order to execute, whereas a trigger is invoked implicitly, based on the events defined on the table.

A stored procedure can run independently, whereas a trigger runs only as part of a DML event on its table.

A stored procedure can be executed from a trigger, but a trigger cannot be executed from a stored procedure.

Stored procedures can have parameters, but triggers cannot have any parameters.

Stored procedures are compiled collections of programs or SQL statements in the database. Using a stored procedure we can access and modify data present in many tables, and a stored procedure is not associated with any particular database object. Triggers, on the other hand, are event-driven special procedures attached to a specific database object, such as a table; they are not run explicitly by the user but fire when the associated event occurs.
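As a minimal sketch of the difference (Oracle-style PL/SQL; the EMP and AUDIT_LOG tables and all names here are hypothetical):

-- A stored procedure: compiled, takes parameters, called explicitly.
CREATE OR REPLACE PROCEDURE raise_salary(p_empno NUMBER, p_pct NUMBER) AS
BEGIN
  UPDATE emp SET sal = sal * (1 + p_pct / 100) WHERE empno = p_empno;
END;
/

-- A trigger: no parameters, fires implicitly on a DML event.
CREATE OR REPLACE TRIGGER trg_emp_sal_audit
AFTER UPDATE OF sal ON emp
FOR EACH ROW
BEGIN
  INSERT INTO audit_log (empno, old_sal, new_sal, changed_on)
  VALUES (:OLD.empno, :OLD.sal, :NEW.sal, SYSDATE);
END;
/

-- The procedure is run explicitly, e.g. EXEC raise_salary(7369, 10);
-- the trigger then fires automatically as part of that UPDATE.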

Sunday, July 29, 2012

Snapshot Facts

The snapshot fact contains a reflection of the state of an entity at a given point in time. A classic example of this would be a "daily_balance" fact in a banking system. This would, on a daily basis, record the balance of each account; it would NOT list the individual transactions that happened on the account.
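As a sketch of what such a fact table might look like (all table and column names here are hypothetical):

-- One row per account per day: the state of the account, not its transactions.
CREATE TABLE daily_balance_fact (
  date_key     INTEGER        NOT NULL,  -- FK to the date dimension
  account_key  INTEGER        NOT NULL,  -- FK to the account dimension
  balance_amt  NUMERIC(15,2)  NOT NULL,  -- balance as of that day
  PRIMARY KEY (date_key, account_key)
);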

The differences between a logical data model and physical data model


Logical vs Physical Data Modeling

Logical Data Model: represents business information and defines business rules.
Physical Data Model: represents the physical implementation of the model in a database.

Logical construct-->Physical construct
Entity-->Table
Attribute-->Column
Primary Key-->Primary Key Constraint
Alternate Key-->Unique Constraint or Unique Index
Inversion Key Entry-->Non Unique Index
Rule-->Check Constraint, Default Value
Relationship-->Foreign Key
Definition-->Comment
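As a sketch of how these logical constructs land in physical DDL (a hypothetical CUSTOMER/ORDERS example):

-- Entity --> table; attributes --> columns.
CREATE TABLE customer (
  customer_id  INTEGER       NOT NULL,
  email        VARCHAR(100)  NOT NULL,
  status       CHAR(1)       DEFAULT 'A',            -- Rule --> default value
  CONSTRAINT pk_customer PRIMARY KEY (customer_id),  -- Primary Key --> PK constraint
  CONSTRAINT uq_customer_email UNIQUE (email),       -- Alternate Key --> unique constraint
  CONSTRAINT ck_customer_status
    CHECK (status IN ('A', 'I'))                     -- Rule --> check constraint
);

CREATE TABLE orders (
  order_id     INTEGER NOT NULL,
  customer_id  INTEGER NOT NULL,
  CONSTRAINT pk_orders PRIMARY KEY (order_id),
  CONSTRAINT fk_orders_customer                      -- Relationship --> foreign key
    FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
);

-- Inversion Key Entry --> non-unique index.
CREATE INDEX ix_orders_customer ON orders (customer_id);

-- Definition --> comment.
COMMENT ON TABLE customer IS 'A person or organization that buys goods.';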