During one of the BI Apps incremental ETL runs, it was observed that most of the time was being consumed gathering statistics over some of the warehouse tables. Three tables took 1 hour each to analyze. When run standalone, gathering statistics on each of these tables took only 3 minutes. So, what was going wrong during the ETL?
Upon closely observing the DAC execution plan, it was found that all three of these tables were being analyzed in parallel. It was not a coincidence; on every run, these three tables ended up being analyzed in parallel. So, the question is: how come?
There was a task group defined containing six tasks that were loading into these three tables, and the Truncate Table option was enabled on this task group. As a result, query indexes were created on the three tables only once all the tasks in the group had finished loading them. Even when one table had all its indexes created while the others were still in progress, statistics gathering on that table did not start. Only after all three tables had all their indexes created was the analyze command fired against them, which caused all three tables to be analyzed in parallel. Is analyzing these three tables in parallel really so resource-intensive that it takes 20 times longer than running them one at a time?
Let's look at the command being used to analyze these tables:
exec DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'W_TAB_D', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO', cascade => false, degree => DBMS_STATS.AUTO_DEGREE);
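With degree => DBMS_STATS.AUTO_DEGREE, Oracle itself picks the degree of parallelism based on the object's size and the instance's parallel settings, so a single gather can end up requesting most of the available parallel servers. A quick way to see the ceiling it works against is to check the standard parallel init parameters (a sketch; the parameter list below is just the ones relevant here):

```sql
-- Instance-wide limits that bound how many parallel servers
-- a single DBMS_STATS call with AUTO_DEGREE can grab.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('cpu_count',
                'parallel_threads_per_cpu',
                'parallel_max_servers');
```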
When I executed the analyze for all three tables in parallel, the longest of the three commands took 26 minutes (during the ETL there were other Informatica jobs also running, which would have slowed it down further to 1 hour). This is what I observed in the v$sysstat view:
Before firing all three commands in parallel:
---------------------------------------------------
---------------------------------------------------
Parallel operations not downgraded 9
Parallel operations downgraded to serial 0
After firing all three commands in parallel:
----------------------------------------------------
Parallel operations not downgraded 10
Parallel operations downgraded to serial 2
This shows that the analyze on one of the tables grabbed all the available parallel servers, and the remaining two were downgraded to run serially. That serial execution is what made the other two analyzes take so much longer.
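The readings above can be taken with a simple query against v$sysstat. Note that these counters are cumulative since instance startup, so it is the delta between the before and after snapshots that matters:

```sql
-- Snapshot the parallel downgrade counters; run once before and once
-- after firing the three analyze commands and compare the values.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('Parallel operations not downgraded',
                'Parallel operations downgraded to serial');
```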
Now, the options were either to segregate the tasks within the task group into three different task groups, or to do something to speed up the analyzes running in parallel over these three tables.
The first option was not enticing, given that this task group is shipped OOTB and any change might impact functionality. Also, the tasks in this task group load different tables in a given order, so changing that order by splitting them into different task groups was not straightforward. The remaining choice was to speed up the statistics gathering itself.
Since the parallel servers were being hogged by the analyze on a single table, it was decided to override the degree parameter in the analyze command and set it to a value small enough that all three concurrent executions would get parallel servers (with degree 4, the three gathers together request about 3 x 4 = 12 servers). We may not match the performance of a standalone execution, but it should still be better than what was observed.
So, the revised syntax is:
exec DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'W_TAB_D', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO', cascade => false, degree => 4);
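While the three analyzes are running, the actual distribution of parallel servers can also be checked from v$px_session: each query coordinator (QCSID) should now hold a few slaves each, instead of one session owning them all (a sketch; the column names are standard Oracle, the expected counts are my assumption):

```sql
-- One row per query coordinator, counting the parallel execution
-- servers currently attached to it.
SELECT qcsid, COUNT(*) AS px_servers
FROM   v$px_session
WHERE  sid <> qcsid          -- exclude the coordinator's own row
GROUP  BY qcsid;
```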
When executed in parallel, the longest of the three analyzes now takes 19 minutes, and v$sysstat shows no new jobs being downgraded to serial (the "downgraded to serial" counter is cumulative and simply retained its earlier value of 2):
Parallel operations not downgraded 16
Parallel operations downgraded to serial 2
This brought a performance improvement of about 27%. Of course, it is yet to be seen how it fares once these changes are incorporated into DAC for the ETL runs. I will post more on what is found.