Wednesday, July 9, 2008

Oracle: Finding non-numeric records in a table column

Find all rows that contain at least one non-numeric character:

select ROW_SEQ#, STRING#
from TMP_DATA
where length(STRING#)
      - nvl(length(translate(STRING#,
            chr(1) || translate(STRING#, chr(1) || '1234567890', chr(1)),
            chr(1))), 0) > 0;

Find all rows that contain only numeric characters:

select ROW_SEQ#, STRING#
from TMP_DATA
where length(STRING#)
      - nvl(length(translate(STRING#,
            chr(1) || translate(STRING#, chr(1) || '1234567890', chr(1)),
            chr(1))), 0) = 0;
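The nested TRANSLATE works by first stripping the digits out of the string (leaving only the non-digit characters), then stripping those characters out of the original (leaving only the digits); comparing lengths then tells you whether any non-digit was present. A minimal sketch of the same logic in Python, with made-up sample values:

```python
def non_digit_count(s):
    """Count the characters of s that are not digits 0-9,
    mirroring the nested-TRANSLATE length comparison."""
    non_digits = "".join(ch for ch in s if ch not in "1234567890")  # inner TRANSLATE
    digits_only = "".join(ch for ch in s if ch not in non_digits)   # outer TRANSLATE
    return len(s) - len(digits_only)

rows = ["12345", "12A45", "ABCDE"]
has_non_numeric = [s for s in rows if non_digit_count(s) > 0]   # like the "> 0" query
numeric_only = [s for s in rows if non_digit_count(s) == 0]     # like the "= 0" query
```

Running this, has_non_numeric is ["12A45", "ABCDE"] and numeric_only is ["12345"].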

Friday, April 4, 2008

Oracle: Deleting duplicates from a huge table

delete from t
where rowid in
(select rid
from (select rowid rid,
row_number() over (partition by KEY_FIELDS
order by rowid) rn
from t )
where rn <> 1 );

reference : http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1224636375004
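The same keep-one-row-per-key idea can be sketched in Python (a single key column stands in for KEY_FIELDS, and integer ids play the part of ROWID):

```python
# Keep the first row for each key and collect the rest for deletion,
# mirroring row_number() over (partition by key order by rowid): the
# row with rn = 1 survives, rows with rn <> 1 are deleted.
rows = [(1, "A"), (2, "A"), (3, "B"), (4, "A"), (5, "B")]  # (rowid, key)

seen = set()
to_delete = []
for rid, key in sorted(rows):      # "order by rowid"
    if key in seen:
        to_delete.append(rid)      # duplicate: rn <> 1
    else:
        seen.add(key)              # first row for this key: rn = 1

survivors = [(rid, key) for rid, key in rows if rid not in to_delete]
```

Here to_delete ends up as [2, 4, 5] and survivors as [(1, "A"), (3, "B")], one row per key.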

Tuesday, March 25, 2008

Oracle: ORA-4030 in AIX Environment

If a session tries to allocate more PGA memory than the per-process upper limit set at the OS level, it errors out with:
ORA-04030: out of process memory when trying to allocate 249880 bytes (QERHJ hash-joi,kllcqas:kllsltba)

Check the ulimit settings on AIX:
> ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 256000
stack(kbytes) unlimited
memory(kbytes) 256000
coredump(blocks) 2097151
nofiles(descriptors) 2000

You can see that the data and memory limits are capped at 256000 KB (roughly 250 MB).

These need to be set to unlimited.

Edit the /etc/security/limits file and set the oracle stanza as follows:

oracle:
fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
stack_hard = -1
nofiles = 2000

Here, oracle is the OS user under which the Oracle database is installed. Log back in as this user and restart the instance so the new limits are inherited, and the issue is fixed.
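To verify from inside a process that the data-segment limit really did change, a quick check with Python's standard resource module can help (values are in bytes; RLIM_INFINITY means unlimited):

```python
import resource

# Soft and hard limits on the data segment -- the limit that capped
# process memory allocation and triggered ORA-04030 when set too low.
soft, hard = resource.getrlimit(resource.RLIMIT_DATA)

print("data segment limit:",
      "unlimited" if soft == resource.RLIM_INFINITY else f"{soft} bytes")
```

Any process started after the limits change (by a fresh login) should report the new value.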

Wednesday, January 23, 2008

Oracle: HASH JOIN RIGHT OUTER

Before 10g, queries with an outer join were executed by scanning the driving table first, and the driving table had to be the outer-preserved side, usually the bigger table. This compromises performance: the larger table has to be scanned and hashed first and then joined to the other tables, contrary to the very nature of hash joins, where the smaller table is hashed into memory and the bigger table probes it.

10g introduces a new operation, HASH JOIN RIGHT OUTER (it appears under that name in the execution plan), which hashes the smaller table first and then probes it with the bigger, outer-preserved table. This is a welcome change, as the new plan is clearly superior to the earlier one in terms of performance.

But beware: all tables involved in the joins must have statistics gathered for 10g to pick up the new execution plan.
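Conceptually, the new plan builds the hash table on the smaller side and streams the bigger, outer-preserved table past it, keeping unmatched outer rows with nulls. A toy illustration in Python (the table contents are made up):

```python
# Right-outer hash join: hash the small table in memory (build side),
# then probe it with each row of the big table (probe side). Big rows
# with no match survive with None where the small table's value would be.
small = {10: "ten", 20: "twenty"}        # small table: key -> value
big = [(1, 10), (2, 20), (3, 30)]        # big table: (id, fk)

joined = [(rid, fk, small.get(fk)) for rid, fk in big]
```

Only the small table is held in memory; the big table is read once, which is exactly why this shape beats hashing the big table first.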

Monday, January 21, 2008

Informatica : Lookups Versus Outer Joins

A plain lookup over a dimension (fetching ROW_WID) can be replaced by an outer join to the dimension in the parent SQL itself.

I have created a prototype to demonstrate this.

SILOS.TEST_BULK_SIL mapping is created as a copy of SILOS.SIL_PayrollFact (loads W_PAYROLL_F from W_PAYROLL_FS).

The original mapping had a mapplet, mplt_SIL_PayrollFact, with 12 lookups over various dimensions. It takes input (datasource_num_id, integration_id, etc.) from the parent SQL, looks up the ROW_WID, and loads it into the fact table.

I removed this mapplet completely and incorporated all 12 dimensions in the parent SQL itself, outer joining them to W_PAYROLL_FS. All the expressions built in the mapplet were taken care of in the main mapping itself (some of them may need more polishing).

Following are the results:

Mapping                            Records Loaded (million)   Time Taken (hr.)   RPS (Reader)
SIL_PayrollFact (uses lookups)     183.3                      16.3               3132
TEST_BULK_SIL (uses outer joins)   183.3                      6.02               8429

The results show that the outer-join-based mapping ran approximately 2.7 times faster than the lookup-based one.

That said, lookups involving complex calculations may not be replaceable by an outer join.

Thursday, December 20, 2007

Oracle: hash_area_size and sort_area_size

A higher value for hash_area_size, though desirable, does not affect hashing performance in a big way. The likely reason is that memory is used only for building the hashes, after which the records are flushed to the temp segments on disk.

Sorting is different: because of the complex work of ordering all the records, data moves back and forth between memory and disk multiple times, so a higher sort_area_size minimizes these passes. Temp space requirements also fall to a good extent.

Set workarea_size_policy to MANUAL and specify the desired sort and hash area sizes. If hash_area_size is not defined, it defaults to 2 * sort_area_size.

Thursday, November 29, 2007

Solaris : Troubleshooting Memory & CPU Consumption

Use the following command to see the top 5 processes consuming the most system memory:

prstat -s size -n 5

To show resource statistics for each thread of a single process (here PID 3295):

prstat -L -p 3295

Processes consuming the most CPU resource:

prstat -s cpu -a -n 8