Sunday, December 6, 2009

Oracle : Values of Hidden init parameters

Hidden parameter values cannot be accessed through the v$parameter view; they live in the X$ tables, which are visible only when connected as SYS. Use the following script:


SELECT x.ksppinm name,
       y.ksppstvl value,
       decode(ksppity,
              1, 'BOOLEAN',
              2, 'STRING',
              3, 'INTEGER',
              4, 'PARAMETER FILE',
              5, 'RESERVED',
              6, 'BIG INTEGER',
              'UNKNOWN') typ,
       decode(ksppstdf,
              'TRUE', 'DEFAULT VALUE',
              'FALSE', 'INIT.ORA') isdefault,
       decode(bitand(ksppiflg / 256, 1),
              1, 'IS_SESS_MOD(TRUE)',
              'FALSE') isses_modifiable,
       decode(bitand(ksppiflg / 65536, 3),
              1, 'MODSYS(NONDEFERRED)',
              2, 'MODSYS(DEFERRED)',
              3, 'MODSYS(*NONDEFERRED*)',
              'FALSE') issys_modifiable,
       decode(bitand(ksppstvf, 7),
              1, 'MODIFIED_BY(SESSION)',
              4, 'MODIFIED_BY(SYSTEM)',
              'FALSE') is_modified,
       decode(bitand(ksppstvf, 2),
              2, 'ORA_STARTUP_MOD(TRUE)',
              'FALSE') is_adjusted,
       ksppdesc description,
       ksppstcmnt update_comment
FROM   x$ksppi x,
       x$ksppcv y
WHERE  x.inst_id = userenv('Instance')
AND    y.inst_id = userenv('Instance')
AND    x.indx = y.indx
AND    x.ksppinm LIKE '%pga%';
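
To look up a single hidden parameter, swap the LIKE filter for an exact match. A minimal example (the parameter name _pga_max_size is just an illustration; substitute whichever hidden parameter you are after):

SELECT x.ksppinm name,
       y.ksppstvl value
FROM   x$ksppi x,
       x$ksppcv y
WHERE  x.inst_id = userenv('Instance')
AND    y.inst_id = userenv('Instance')
AND    x.indx = y.indx
AND    x.ksppinm = '_pga_max_size';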

Wednesday, November 18, 2009

Configure DBD::ODBC on AIX 5.3

The DBD::ODBC Perl module makes it possible to connect to various data sources using ODBC. To get this module working on AIX, unixODBC needs to be installed first.

1. Download unixODBC-2.2.11
- Version 2.2.14 has issues that cause the installation to fail.
2. Set the compiler environment variables:
export CC=xlc_r
export CCC=xlC_r

3. For a 64-bit build:
export OBJECT_MODE=64
export CFLAGS=-q64

It is possible to install 32-bit binaries on a 64-bit OS; just skip setting the two variables above.

4. Run configure and build:
./configure --prefix=/opt/unixODBC --disable-gui CC=xlc_r CXX=xlC_r
make
make install

5. Generate the shared object libraries by extracting the archives (run in /opt/unixODBC/lib):
ar -x libodbc.a
ar -x libodbcinst.a
ar -x libodbccr.a

6. Set the following environment variables:

export ODBCINI=/usr/scratch/odbc.ini
export ODBCHOME=/opt/unixODBC
export LIBPATH=/opt/unixODBC/lib:${LIBPATH}


If Perl is already installed, just install the DBI module and then DBD::ODBC. For a fresh install, follow the procedure below:

1. Download perl 5.10.1

2. If the OS is 64-bit, use the -Duse64bitall flag:
sh Configure -Duse64bitall -Dprefix=/opt/perl -des
make
make install

To build 32-bit Perl binaries, omit the -Duse64bitall flag.

3. Configure CPAN. First run the default configuration, then adjust the proxy settings:
cpan> o conf init /proxy/
-- specify the ftp_proxy and http_proxy values when prompted
cpan> o conf commit

4. Install DBI module
cpan> install DBI

5. Install DBD::ODBC
cpan> install DBD::ODBC

You should now be all set to use the Perl DBD::ODBC module!

Monday, November 16, 2009

CPAN : reset proxy settings

To reset the proxy settings for CPAN:

cpan> o conf init /proxy/


setting root prompt with sudo access

If you have sudo access to a machine, it is convenient to get a root prompt and then run commands directly, instead of typing "sudo command" every time.


"sudo su -" would get you the root prompt.

Wednesday, October 7, 2009

Removing Oracle 11gR2 software

To clean up installed Oracle 11gR2 software, execute the deinstall script located in the $ORACLE_HOME/deinstall directory.

$ORACLE_HOME/deinstall/deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /scratch/Oracle/OraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################## CHECK OPERATION START ########################
Install check configuration START


Checking for existence of the Oracle home location /scratch/Oracle11gr2/product/11.2.0/dbhome_1
Oracle Home type selected for de-install is: SIDB
Oracle Base selected for de-install is: /scratch/Oracle11gr2
Checking for existence of central inventory location /scratch/Oracle/OraInventory

Sunday, June 7, 2009

Informatica : Setting codepage

The Informatica integration service sometimes fails with the following error when trying to start up:
LM_36011
Code page mismatch. Service process is running in code page [ISO 8859-1 Western European] whereas the service is configured in the Admin Console to run in code page [UTF-8 encoding of Unicode]

Set the following environment variables on the Linux server hosting the Informatica nodes (csh syntax shown; with bash/ksh use export):
setenv LANG en_US.utf8
setenv LC_ALL en_US.utf8
setenv INFA_CODEPAGENAME UTF-8

For Windows, set the environment variables LC_ALL and LANG to us_UTF8.

Restart the node service.

The integration service can now be started with its code page set to UTF-8.

Sunday, April 26, 2009

Oracle : Using WITH Clause for improving performance

Inline views can often wreck execution plans, mainly because the plan chosen for the inline view interferes with the overall execution plan of the query.

To counter this, the inline view can be hoisted to the top of the query inside a WITH clause. But this alone will not help: we also need the /*+ materialize */ hint, as only then does the database treat the view as a physical table. The database creates a temporary table out of the view, which is then used in the query, and the execution plan comes out far better than before.


A query with the following structure showed a nested loop join between tables tab1 and tab2, influenced mainly by the inline view. Specifying the USE_HASH hint did not help either.

SELECT col1,
col2,
Sum(col4)
FROM tab1,
tab2,
(SELECT DISTINCT Last_day(To_date(dates,'DD-MON-YYYY')) end_dt,
Add_months(Last_day(To_date(dates,'DD-MON-YYYY')) + 1,
-1) strt_dt,
To_char(To_date(dates,'DD-MON-YYYY'),'MMYYYY') mon_dt
FROM (SELECT To_char(Add_months(SYSDATE,-10) + (LEVEL - 1),'DD-MON-YYYY') AS dates
FROM dual
CONNECT BY To_number(To_char((Add_months(SYSDATE,-10) + (LEVEL - 1)),'YYYYMMDD')) <= To_number(To_char(SYSDATE,'YYYYMMDD')))) tim
WHERE tab1.assign_dt <= tim.end_dt
AND tab1.assign_dt >= tim.strt_dt
AND tab1.col3 = tab2.col3
GROUP BY col1,
col2

Changing the query to the following, using a WITH clause, leads to the desired execution plan with a HASH join.


WITH tim AS (
SELECT /*+ materialize */ DISTINCT Last_day(To_date(dates,'DD-MON-YYYY')) end_dt,
Add_months(Last_day(To_date(dates,'DD-MON-YYYY')) + 1,
-1) strt_dt,
To_char(To_date(dates,'DD-MON-YYYY'),'MMYYYY') mon_dt
FROM (SELECT To_char(Add_months(SYSDATE,-10) + (LEVEL - 1),'DD-MON-YYYY') AS dates
FROM dual
CONNECT BY To_number(To_char((Add_months(SYSDATE,-10) + (LEVEL - 1)),'YYYYMMDD')) <= To_number(To_char(SYSDATE,'YYYYMMDD')))
)
SELECT col1,
col2,
Sum(col4)
FROM tab1,
tab2,
tim
WHERE tab1.assign_dt <= tim.end_dt
AND tab1.assign_dt >= tim.strt_dt
AND tab1.col3 = tab2.col3
GROUP BY col1,
col2
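
To verify that the hint actually kicked in, check the execution plan for a TEMP TABLE TRANSFORMATION step. A minimal, self-contained check using a toy subquery in place of the real one:

-- Materialize a trivial WITH-clause view and inspect its plan
EXPLAIN PLAN FOR
WITH tim AS (SELECT /*+ materialize */ LEVEL n FROM dual CONNECT BY LEVEL <= 10)
SELECT * FROM tim;

SELECT * FROM TABLE(dbms_xplan.display);

The plan output should show TEMP TABLE TRANSFORMATION followed by LOAD AS SELECT, confirming that the view was written to a temporary table.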


Tuesday, April 14, 2009

DB2 : collecting detailed statistics

To get the best optimizer execution plan, collecting detailed table statistics might help. The first command below generates a runstats statement for every user table into the file runstats.db2; the second executes that file:

db2 -x -r "runstats.db2" "select rtrim(concat('runstats on table ',concat(rtrim(tabSchema),concat('.',concat(rtrim(tabname),' on all columns with distribution on all columns and sampled detailed indexes all allow write access'))))) from syscat.tables where type='T'"
db2 -v -f "runstats.db2"
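
For reference, each line the first command writes into runstats.db2 is a statement of this form (the schema and table name here are only illustrative):

runstats on table SIEBEL.S_ORG_EXT on all columns with distribution on all columns and sampled detailed indexes all allow write access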

Tuesday, April 7, 2009

DB2 : Generating SQL Explain plan

This is how an explain plan can be generated in a DB2 database.

Create a file statements.db2:
select deptno from dept@

db2expln -database U001DB -stmtfile statements.db2 -terminator @ -terminal -user siebel siebel

Section Code Page = 1252

Estimated Cost = 30087412.000000
Estimated Cardinality = 19275950.000000

Access Table Name = SIEBEL.S_ORG_EXT ID = 7,201
| #Columns = 2
| Relation Scan
| | Prefetch: Eligible
| Lock Intents
| | Table: Intent Share
| | Row : Next Key Share
| Sargable Predicate(s)
| | Process Build Table for Hash Join
Left Outer Hash Join
| Early Out: Single Match Per Outer Row
| Estimated Build Size: 7776000
| Estimated Probe Size: 9948879872
| Access Table Name = SIEBEL.S_ORG_EXT ID = 7,201
| | #Columns = 2
| | Relation Scan
| | | Prefetch: Eligible
...
...

Monday, April 6, 2009

INFORMATICA : Truncate Logs

I ran into unusual behavior when using Repository Manager to truncate the existing session logs: the "Truncate Log" option under the "Edit" menu was disabled, and selecting/expanding the required folder did not help either.

Since my Informatica server was on AIX with the client on Windows, I used pmrep to truncate the logs:

$ pmrep
pmrep>connect -r Powercenter -d Domain_server1 -n Administrator -x Administrator

pmrep>truncatelog -t all
truncatelog completed successfully.

Sunday, April 5, 2009

DB2 : Establish remote connection on Unix/AIX

On the server hosting the DB2 database:
db2set DB2COMM=tcpip
db2 update database manager configuration using svcename 50000
db2stop
db2start

On the client machine (with the DB2 client installed):
db2 UNCATALOG DATABASE U001DB
db2 UNCATALOG DCS DATABASE U001DB
db2 UNCATALOG NODE SERVER1

db2 CATALOG TCPIP NODE SERVER1 REMOTE SERVER1 SERVER 50000 REMOTE_INSTANCE DB2 SYSTEM SERVER1 OSTYPE AIX
db2 CATALOG DCS DATABASE U001DB AS U001DB
db2 CATALOG DATABASE U001DB AS U001DB AT NODE SERVER1 AUTHENTICATION DCS

Here, SERVER1 is the host where the DB2 server resides, and U001DB is the DB2 database name.

Test connection:

On the client machine:
db2 connect to U001DB user siebel using siebel

Database Connection Information

Database server = DB2/AIX64 9.1.6
SQL authorization ID = SIEBEL
Local database alias = U001DB

Tuesday, February 24, 2009

Oracle : Outer Join pitfalls

Starting with Oracle 10g, the optimizer processes outer joins much more efficiently. The execution plan now shows "HASH JOIN RIGHT OUTER", which means the table with fewer records is scanned first, and the table it is outer joined to is scanned last. This is especially useful for data warehouse queries where a fact table is outer joined to a lot of dimension tables: the database no longer has to write the fact table records to temp and read them back later to join them to the dimension tables.

But before jumping to the conclusion that this new method is a sure way to win, there are a few things to keep in mind.
1. Ensure NOPARALLEL query execution. The new execution plan for outer joins is severely affected by parallel query: if the dimension tables are scanned in parallel and then joined to the fact tables, the query exhibits its worst performance. To avoid parallel execution, check that no participating table or index has its degree set higher than "1"; a way to check and reset this is sketched after this list. Alternatively, one can disable parallel query execution at the instance level by setting the init parameter "parallel_max_servers" to "0", but this is not advisable, as other kinds of queries might want to leverage parallel execution.

2. Keep huge dimension tables out of the outer joins. If a dimension table (the one being outer joined) is huge, say 1 million records (with a 4 GB PGA), the outer join under the new plan may not help, because the hash tables for all the joined dimensions can no longer be kept in memory and the database has to fall back to a multipass hash join. The "HASH JOIN RIGHT OUTER" execution plan is effective only when the outer joined tables/dimensions are small.

3. Ensure that all the participating tables have statistics collected. Missing statistics on tables can cause the optimizer to fall back to the pre-10g execution plan, and you may not see the new plan for the outer joined query.
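
As referenced in point 1, here is a minimal sketch for checking and resetting the degree of parallelism (the table and index names are illustrative):

-- Check the degree setting on the participating tables and indexes
SELECT table_name, degree FROM user_tables WHERE table_name IN ('FACT_T', 'DIM_T');
SELECT index_name, degree FROM user_indexes WHERE table_name IN ('FACT_T', 'DIM_T');

-- Force serial execution where the degree is not 1
ALTER TABLE dim_t NOPARALLEL;
ALTER INDEX dim_t_pk NOPARALLEL;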

Thursday, January 15, 2009

Informatica 8.6 : issue with pmcmd

On AIX 5.3 pmcmd command fails with following error:

$ ./pmcmd

Could not load program ./pmcmd:
Could not load module /u01/Informatica/PowerCenter8.6/server/bin/libpmser.a.
Dependent module /lib/libz.a could not be loaded.
The module has an invalid magic number.
Could not load module pmcmd.
Dependent module /u01/Informatica/PowerCenter8.6/server/bin/libpmser.a could not be loaded.
Could not load module .

This issue is fixed in one of the later hotfixes.

To work around the issue, rename /lib/libz.a so that the libz.a shipped with Informatica is picked up instead.

Sunday, January 11, 2009

Oracle: Correlated sub-queries effectiveness

During ETL extraction and loading into a star-schema based warehouse, we are generally confronted with the choice between an outer join and a correlated subquery for adding the dimension table primary key to the fact table. To clarify, take the example below:

INSERT INTO w_emp_f
SELECT a.row_wid loc_wid,
b.ename,
b.address
FROM w_emp_fs b
LEFT OUTER JOIN w_location_d a
ON a.datasource_num_id = b.datasource_num_id
AND a.integration_id = b.integration_id



In the example above, row_wid is the unique identifier of a row in the Location dimension and needs to be added to the Employee fact table. In reality there could be 30-40 dimensions that need to be joined to the fact table. Outer joining all of these dimensions is one way, but it can be a big overhead if the dimension tables are very large (e.g. more than 2 million records), and the outer join may not perform.
This is where correlated subqueries come to the rescue. The load query above can be reframed as follows:
INSERT INTO w_emp_f
SELECT (SELECT row_wid
FROM w_location_d a
WHERE a.datasource_num_id = b.datasource_num_id
AND a.integration_id = b.integration_id) loc_wid,
b.ename,
b.address
FROM w_emp_fs b


The correlated query only performs well if the dimension columns used in the join condition are indexed. That way the index stays cached in the database memory, and the dimension row is returned quickly for each record in the fact table.
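
A minimal sketch of such an index for the example above (the index name, and the choice to append row_wid, are my own illustration):

-- Appending row_wid makes the correlated subquery an index-only lookup,
-- so the dimension table itself is never touched
CREATE INDEX w_location_d_lkp
ON w_location_d (datasource_num_id, integration_id, row_wid);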

We can choose to segregate the dimensions attached to the fact table by their size. Depending on the memory configuration of the database system, the size threshold can be found by trial and error. Dimensions below the threshold can be outer joined to the fact table, while the ones above it are pushed into correlated subqueries.
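
A hedged sketch of this hybrid approach, reusing the tables from the example above (w_product_d is a hypothetical small dimension whose columns mirror the earlier queries):

INSERT INTO w_emp_f
SELECT (SELECT row_wid                    -- large dimension: correlated subquery
        FROM w_location_d a
        WHERE a.datasource_num_id = b.datasource_num_id
        AND a.integration_id = b.integration_id) loc_wid,
       p.row_wid prod_wid,                -- small dimension: plain outer join
       b.ename,
       b.address
FROM w_emp_fs b
LEFT OUTER JOIN w_product_d p
ON p.datasource_num_id = b.datasource_num_id
AND p.integration_id = b.integration_id;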