David Kurtz

This blog contains things about PeopleSoft that DBAs might find interesting.
Or then again they might not!
Non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

nVision Performance Tuning: 11. Excel -v- OpenXML

Thu, 2017-11-30 04:47
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The general objective of the performance tuning changes described in this series of blog posts has been to improve the performance of individual nVision reports, but also to allow many reports to execute concurrently.
However, if you use Excel 2010, Excel 2013 or above, then you may notice that run times are significantly longer than with Excel 2007.  Also, from PeopleTools 8.54, Excel 2007 is no longer certified.
The problem is discussed in Oracle support note E-NV: nVision Performance using Microsoft Office 2010 and above (Doc ID 1537491.1).  Essentially, from Excel 2010 onwards Excel only runs single-threaded, so only one Excel nVision process that is not waiting for a database call to return can be active on any one Windows server at any one time.  If you want to be able to run 10 concurrent nVision reports, you would need to run one on each of 10 process schedulers on 10 different Windows servers.
From PT8.54, OpenXML is the default and preferred engine for executing nVision reports on the process scheduler.  This uses a different PeopleSoft executable (PSNVSSRV).  It does not suffer from the single-threading problem, so multiple processes can run concurrently.  It can also be run on non-Windows environments.
However, there are some limitations with OpenXML:
  • Excel macros are ignored during report generation, although macros can be put into a layout that will execute when the generated report is subsequently opened in Excel.
  • There are problems with nPlosion.  
  • Any print area set in the layout is lost.
  • When rerunning nVision to a file, any pre-existing file is not overwritten.
Therefore, it may be necessary to continue to run some nVision reports on Excel.  This would require:
  • Separate process schedulers configured to run Excel rather than OpenXML on each available Windows server.  Excel is selected by setting the variable Excel Automation = 1 in the nVision section of the process scheduler configuration file, psprcs.cfg (see the example after this list).  
  • A new Excel nVision process type should be configured to run specific layouts or reportbooks on Excel.  
  • That new process type should only run on these additional process schedulers.  It should have a maximum concurrency of 1, or at most 2, on each process scheduler.  These schedulers should be configured to run this new process type (and a single Application Engine so that the purge process can run).
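For example, the nVision section of psprcs.cfg on an Excel-only process scheduler might contain the following (a sketch; the exact layout of the section varies by PeopleTools release):
[nVision]
;Use Excel rather than OpenXML to execute nVision reports
Excel Automation=1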

nVision Performance Tuning: 10. Maintaining Statistics on Non-Partitioned Tree Selector Tables

Tue, 2017-11-28 10:21
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

If you do not partition your selector tables, and if you predominantly use static selectors, then you can maintain the statistics on the selector table as new static selectors are created or updated.  PeopleTools table PSTREESELCTL is used to track static selectors.  It is keyed by selector number.  A row is inserted when a new selector is created, and the version number is updated when an existing selector is re-extracted.  Therefore, a trigger on this table can be used to submit a job to refresh the statistics on the tree selector.
CREATE OR REPLACE TRIGGER sysadm.pstreeselector_stats
BEFORE INSERT OR UPDATE ON sysadm.pstreeselctl
FOR EACH ROW
DECLARE
l_jobno NUMBER;
l_cmd VARCHAR2(1000);
l_table_name VARCHAR2(18);
l_suffix VARCHAR2(2);
BEGIN
l_table_name := 'PSTREESELECT'||LTRIM(TO_CHAR(:new.length,'00'));
l_suffix := SUBSTR(l_table_name,-2);
l_cmd := 'dbms_stats.gather_table_stats(ownname=>user,tabname=>'''||l_table_name||''',force=>TRUE);'
||'dbms_stats.set_column_stats(ownname=>user,tabname=>'''||l_table_name||''',colname=>''RANGE_FROM_'||l_suffix||''',density=>1,force=>TRUE);'
||'dbms_stats.set_column_stats(ownname=>user,tabname=>'''||l_table_name||''',colname=>''RANGE_TO_'||l_suffix||''',density=>1,force=>TRUE);';
dbms_output.put_line(l_cmd);
dbms_job.submit(l_jobno,l_cmd);
EXCEPTION WHEN OTHERS THEN NULL; --exceptions deliberately suppressed so that DML on PSTREESELCTL is never blocked
END;
/
When all the selectors exist in a single segment, the optimizer will not correctly calculate the cardinality of the expressions in the nVision queries on the tree selector tables.  Therefore, I have found it necessary to manually set the density on the range columns to 1 to get effective execution plans at least some of the time.
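Once the job has run, the effect can be verified from the data dictionary; a minimal sketch for the 10-character selector table:
SELECT column_name, num_distinct, density, histogram
FROM all_tab_col_statistics
WHERE owner = 'SYSADM'
AND table_name = 'PSTREESELECT10'
AND column_name LIKE 'RANGE%'
/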
N.B. This trigger should not be used if the selector tables are partitioned.


nVision Performance Tuning: 9. Using Compression without the Advanced Compression Licence

Fri, 2017-11-24 10:42
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Table compression can significantly decrease the size of tables and reduce the volume of I/O required to retrieve data.  Compression of the ledger, ledger budget, and summary ledger tables can significantly improve the performance of scans in nVision.
The Advanced Compression licence enables compression features to be used where segments are still updated by the application.  Without this licence, only simple table compression can be used, although Hybrid Columnar Compression (HCC) can also be used on an engineered system.  Neither form of compression can be used in online transaction processing mode.  A table that is marked for compression, but that is populated in conventional path mode, will not be compressed, and any compressed blocks will no longer be compressed after being updated.
However, in most systems, ledger rows cease to be updated within a month or two of the period closing.  The ledger tables will usually be partitioned by fiscal year and accounting period, so a month after the period has closed the corresponding partition will cease to be updated.  Therefore, it could then be compressed.
I usually compress historical partitions when I introduce partitioning.  The following example comes from an engineered system, so Hybrid Columnar Compression has been used.
  • There is one range partition each for the whole of fiscal years 2014 and 2015.  This data is historical and rarely queried, and then not by a single period.  The more aggressive QUERY HIGH compression has been used.  
  • Monthly partitioning is used for the previous fiscal year, 2016.  These partitions are compressed, but using QUERY LOW, which should deliver better performance with lower CPU overhead than QUERY HIGH (although usually there isn't much in it).
  • Partition LEDGER_2017_05 is not compressed because it is current (at the time of building this script) and could still be updated by the application.
CREATE TABLE sysadm.ps_ledger
(...)
TABLESPACE GLLARGE
PCTFREE 10 PCTUSED 80
PARTITION BY RANGE(FISCAL_YEAR,ACCOUNTING_PERIOD)
SUBPARTITION BY LIST (LEDGER)
(PARTITION ledger_2014 VALUES LESS THAN (2015,0) PCTFREE 0 COMPRESS FOR QUERY HIGH
(SUBPARTITION ledger_2014_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2014_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015 VALUES LESS THAN (2016,0) PCTFREE 0 COMPRESS FOR QUERY HIGH
(SUBPARTITION ledger_2015_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2016_bf VALUES LESS THAN (2016,1) PCTFREE 0 COMPRESS FOR QUERY LOW
(SUBPARTITION ledger_2016_bf_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2016_bf_z_others VALUES (DEFAULT)
)

,PARTITION ledger_2017_05 VALUES LESS THAN (2017,6)
(SUBPARTITION ledger_2017_05_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2017_05_z_others VALUES (DEFAULT)
)

)
ENABLE ROW MOVEMENT
PARALLEL
NOLOGGING
/
As periods close, more partitions can be compressed.  However, it may not be practical to take an outage to rebuild ledger partitions each month, and it certainly isn't practical to rebuild the entire ledger table every month.  Instead, from Oracle 12c, partitions can be rebuilt in place in an online operation.  The compression attribute can only be specified on the partition, and then the sub-partitions can be rebuilt.  The data in the partition can still be read while it is being moved.
ALTER TABLE ps_ledger MODIFY PARTITION ledger_2017_05 PCTFREE 0 COMPRESS FOR QUERY LOW;
ALTER TABLE ps_ledger MOVE SUBPARTITION ledger_2017_05_actuals ONLINE PARALLEL 32;

ALTER TABLE ps_ledger MOVE SUBPARTITION ledger_2017_05_z_others ONLINE PARALLEL 32;
NB: Moving a partition online can fail in 12.1.0.1 and later versions of the database due to bug 2070300, documented in MOS Doc ID 2040742.1.
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00932: inconsistent datatypes: expected NUMBER got BINARY
The bug is fixed in 12.2 and can be addressed in 12.1 by patch 2070300.
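The compression attributes of the resulting subpartitions can be confirmed from the data dictionary; a minimal sketch, assuming the table is owned by SYSADM:
SELECT partition_name, subpartition_name, compression, compress_for
FROM all_tab_subpartitions
WHERE table_owner = 'SYSADM'
AND table_name = 'PS_LEDGER'
ORDER BY partition_name, subpartition_name
/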

nVision Performance Tuning: 8. Interval Partitioning and Statistics Maintenance of Tree Selector Tables

Mon, 2017-11-20 05:55
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The decision to use interval partitioning on the tree selector tables came from the need to have accurate statistics for the optimizer when parsing nVision queries.  It is not possible to introduce hints into nVision SQL.  The dynamic nature of the code means it is not viable to consider any of the forms of database plan stability across the whole application (although it may be possible to use SQL Profiles in limited cases).  Therefore, as far as possible, the optimizer has to choose the best plan on its own.  Without accurate optimizer statistics, I have found that the optimizer will usually not choose to use a Bloom filter.
If the selector tables are not partitioned, then each table will usually contain rows for many different selectors.  Even with perfectly up-to-date statistics, including a histogram on SELECTOR_NUM and extended statistics on SELECTOR_NUM and RANGE_FROM_nn, I have found that Oracle miscosts the join between RANGE_FROM_nn and the corresponding attribute on the ledger table.
I propose that the tree selector tables should be interval partitioned such that each selector goes into its own partition.
CREATE TABLE PSYPSTREESELECT10 
(SELECTOR_NUM INTEGER NOT NULL,
TREE_NODE_NUM INTEGER NOT NULL,
RANGE_FROM_10 VARCHAR2(10) NOT NULL,
RANGE_TO_10 VARCHAR2(10) NOT NULL)
PARTITION BY RANGE (selector_num) INTERVAL (1)
(PARTITION VALUES LESS THAN(2))
TABLESPACE "PTWORK"
STORAGE(INITIAL 128K NEXT 128K)
/
INSERT INTO PSYPSTREESELECT10
( SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10)
SELECT SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10
FROM PSTREESELECT10
/
DROP TABLE PSTREESELECT10
/
ALTER TABLE PSYPSTREESELECT10 RENAME TO PSTREESELECT10
/
  • nVision queries will reference a single selector with a literal value, and therefore Oracle will eliminate all but that single partition at parse time and will use the statistics on that partition to determine how to join it to other tables.
  • Statistics only have to be maintained at partition level, and not at table level. 
  • Now that there is only a single selector number in any one partition, there is no need for extended statistics. 
  • The need to use dynamic selectors, in order to get equality joins between selectors and ledger tables, in order to make use of the Bloom filter, means that statistics on the selector tables would otherwise inevitably be out of date.  The PL/SQL triggers and package that log the selector usage are also used to maintain statistics on the partition. 
  • Partitions do not have to be created in advance. They will be created automatically by Oracle as they are required by the application. 
Compound Triggers on Tree Selector Tables
There is a pair of compound DML triggers on each tree selector table, one for insert and one for delete.
  • The after row section captures the current selector number.  The one for insert also counts the number of rows and tracks the minimum and maximum values of the RANGE_FROM_nn and RANGE_TO_nn columns. 
  • The after statement section updates the selector log. The insert trigger directly updates the statistics on the partition, including the minimum and maximum values of the range columns.
    • It is not possible to collect statistics in a trigger in the conventional manner because dbms_stats includes an implicit commit.  If dbms_stats were called within an autonomous transaction, it could not see the uncommitted insert into the tree selector that fired the trigger.  Hence the trigger calls the XX_NVISION_SELECTORS package, which uses dbms_stats.set_table_stats and dbms_stats.set_column_stats to set values directly. 
    • The trigger then uses dbms_job to submit a job to the database job queue that will collect statistics on the partition in the conventional way.  The job number is recorded on the selector log.  The job will be picked up by the scheduler when the insert commits.  However, there can be a short lag between scheduling the job and it running, so the first query in the nVision report can be parsed before the statistics are available. 
    • The procedure that directly sets the statistics has to make some sensible assumptions about the data.  These mostly lead the optimizer to produce good execution plans.  However, testing has shown that performance can be better with conventionally collected statistics.  Therefore, the trigger both directly sets statistics and submits the database job to collect the statistics.
    • It is important that table level statistics are not maintained by either technique, as this would lead to locking between sessions.  Locking during partition statistics maintenance will not occur because no two sessions populate the same selector number, and each selector is in a different partition.  A table statistics preference for granularity is set to PARTITION on each partitioned tree selector table (see the sketch after this list). 
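The granularity preference can be set with dbms_stats; a minimal sketch for the 10-character selector table:
BEGIN
  dbms_stats.set_table_prefs(
    ownname=>'SYSADM',
    tabname=>'PSTREESELECT10',
    pname=>'GRANULARITY',
    pvalue=>'PARTITION');
END;
/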
The combination of dynamic selectors, single value joins, interval partitioning of selector tables, and logging triggers on the selector tables driving timely statistics maintenance on the partitions delivers execution plans that perform well and make effective use of engineered system features.
However, there are two problems that then have to be worked around. 
Library Cache Contention 
Some data warehouse systems can need new partitions in tables daily or even hourly. If partitions were not created in a timely fashion, the application would either break because the partition was missing, or performance would degrade as data accumulated in a single partition. Oracle intended interval partitions to free the DBA from the need to actively manage such partitioning on a day-to-day basis by creating them automatically as the data was inserted. 
However, on a busy nVision system, this solution could create thousands of new selectors in a single day, and therefore thousands of new partitions. This is certainly not how Oracle intended interval partitioning to be used.  I freely admit that I am abusing the feature.
If you have multiple concurrent nVision reports running, using dynamic selectors, you will have multiple database sessions concurrently inserting rows into the tree selector tables each with a different selector number, and therefore each creating new partitions mostly into the same tables.
The recursive code that creates the new partitions and maintains the data dictionary acquires a lock on the object handle in the library cache to prevent other sessions from changing it at the same time.  As the number of concurrent nVisions increases, you will start to see nVision sessions waiting on the library cache lock event during the insert into the tree selector table while the new partition is being created.  Perversely, as the performance of the nVision queries improves (as you refine tree selector settings), you may find this contention increases. 
The workaround to this is to create multiple database schemas, each with copies of the partitioned tree selector tables (similarly interval partitioned) and the PSTREESELCTL table (to manage static selectors in those schemas). Synonyms will be required for all other tables referenced by nVision queries. 
Then a trigger on the process scheduler request table PSPRCSRQST will arbitrarily set the current schema of the nVision batch processes to one of those schemas.  The nVision reports still connect and run with the privileges of the owner ID (usually SYSADM), but the tables are referenced from the current schema. 
I have used a hash function to distribute nVision processes between schemas.  I suggest the number of schemas should be a power of 2 (i.e. 2, 4, 8, etc.).
CREATE OR REPLACE TRIGGER sysadm.nvision_current_schema
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus IN('7') AND new.prcsname = 'RPTBOOK' AND new.prcstype like 'nVision%')
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET current_schema = NVEXEC'||LTRIM(TO_CHAR(dbms_utility.get_hash_value(:new.prcsinstance,1,8),'00'));
EXCEPTION WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/
Thus different nVision reports use different tree selector tables in different schemas rather than trying to create partitions in the same tree selector table, avoiding the library cache locking.
Limitation on the Maximum Number of Partitions
In Oracle, it is not possible to have more than 1048575 partitions in a table.  That applies to all forms of partitioning.
The tree selector tables are interval partitioned on selector number with an interval of 1 starting with 1. So the highest value of SELECTOR_NUM that they can store is 1048575.
INSERT INTO pstreeselect05 VALUES(1048576,0,' ',' ')
*
ERROR at line 1:
ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions
New selector numbers are allocated sequentially from PSTREESELNUM. Left unattended, the selector numbers used by nVision will increase until they eventually hit this limit, and then nVision and ad-hoc queries that use the tree-exists operator will start to fail with this error.
Therefore, a procedure RESET_SELECTOR_NUM has been added to the PL/SQL package to reset the selector number allocation table PSTREESELNUM back to 0, delete any tree selector entries for which there is no logging entry, and then run the regular selector PURGE procedure in the same package that will drop the unwanted interval partitions.

Recommendation: XX_NVISION_SELECTORS.RESET_SELECTOR_NUM should be scheduled to run sufficiently frequently to prevent the selector number reaching the maximum.  
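A minimal sketch of such a job using the database scheduler (the job name and the weekly frequency are assumptions to be adjusted to your rate of selector consumption):
BEGIN
  dbms_scheduler.create_job(
    job_name=>'RESET_NVISION_SELECTORS', --hypothetical job name
    job_type=>'PLSQL_BLOCK',
    job_action=>'BEGIN sysadm.xx_nvision_selectors.reset_selector_num; END;',
    repeat_interval=>'FREQ=WEEKLY',
    enabled=>TRUE);
END;
/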

nVision Performance Tuning: 7. Analysis of Tree Usage with the Selector Log

Mon, 2017-11-13 11:22
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Over time, the selector log will build up a picture of how each tree is used in a system. Here are two examples of how it can be used.

You may be looking at a piece of SQL generated by nVision; it will have a literal value for the selector number, and you may want to know about that particular selector.
REM treeanal.sql
WITH t as (
SELECT DISTINCT d.tree_name, s.dtl_fieldname, d.tree_acc_method, d.tree_acc_Selector, d.tree_acc_sel_opt
FROM pstreedefn d, pstreestrct s
WHERE d.tree_Strct_id = s.tree_strct_id
), l as (
SELECT *
FROM ps_nvs_treeslctlog l
WHERE l.selector_Num = &selector_num
)
SELECT l.*, t.dtl_fieldname, t.tree_acc_method, t.tree_acc_Selector, t.tree_acc_sel_opt
FROM t, l
WHERE t.tree_name = l.tree_name
/
Now you can see various pieces of information about the selector and the report in which it was used.
  • Process instance of the report, although it is also in the ACTION string
  • Length of the selector.  Here it was 10 characters so it was in PSTREESELECT10.
  • The number of rows inserted into the selector.  This is useful if you want to recreate the conditions at runtime by populating the selector table manually.
  • Time at which the selector was populated.
  • Session module, usually the process name.  
  • Session action string, which contains the process instance, report ID and business unit.
  • The client info string, containing the operator ID, database name, the name of the host running nVision and the name of the executable.
    • PSNVS is nVision running on Microsoft Excel
    • PSNVSSRV is nVision running in OpenXML
    • psae is a PS/Query using the IN TREE operator that has been scheduled to run on the Process Scheduler.
    • PSQED is the windows query client running in 2-tier mode and using an IN TREE operator.
    • PSSAMSRV indicates either nVision or PS/Query running either through the PIA or on the windows client in 3-tier mode.
  • Status of the selector.  X indicates that the selector has been deleted and the partition has been dropped.
  • Name of the tree.
  • Name of the schema in which the selector table is located.
  • Partition name in the tree selector table in which the data was held.  This will be blank if the partition has been dropped.
  • The current values of the three tree selector flags on the tree definition are also reported.
SELECTOR_NUM PROCESS_INSTANCE Len   NUM_ROWS TIMESTAMP                    MODULE       APPINFO_ACTION
------------ ---------------- --- ---------- ---------------------------- ------------ ----------------------------------------------------------------
CLIENT_INFO S TREE_NAME OWNERID PARTITION_NAME JOB_NO DTL_FIELDNAME T T T
---------------------------------------------------------------- - ------------------ -------- -------------------- ---------- ------------------ - - -
10233 1780069 10 362 10-NOV-17 02.40.50.755038 AM RPTBOOK PI=1780069:UKGL123I:UK001
GBNVISION,PSFINPRD,UKLONWIN001,,PSNVSSRV.EXE, X UKGL_ACCOUNT SYSADM 33052 ACCOUNT J D S

The following query aggregates log entries to report the number of times each tree was used over the last 7 days, and provides various statistics about the number of rows extracted from trees into the selector tables, together with the current tree performance options.
REM tree_usage.sql
WITH t AS (
SELECT DISTINCT d.tree_name, s.dtl_fieldname, d.tree_acc_method
, d.tree_acc_Selector, d.tree_acc_sel_opt
FROM pstreedefn d, pstreestrct s
WHERE d.tree_Strct_id = s.tree_strct_id
), l AS (
SELECT tree_name, length
, COUNT(*) num_uses
, MIN(num_rows) min_rows
, AVG(num_rows) avg_rows
, MEDIAN(num_Rows) med_rows
, MAX(num_rowS) max_rows
, STDDEV(num_Rows) stddev_rows
, SUM(num_rows) sum_rows
, COUNT(distinct process_instance) processes
FROM ps_nvs_treeslctlog l
WHERE num_rows>0
AND timestamp >= sysdate-7
GROUP BY tree_name, length
)
SELECT l.*, t.dtl_fieldname, t.tree_acc_method, t.tree_acc_Selector, t.tree_acc_sel_opt
FROM t, l
WHERE t.tree_name = l.tree_name
ORDER BY sum_rows
/
The default recommendation is that all trees should use:
  • Literal values where possible when working with fewer than about 2000 rows in the selector.  Above that, it may be better to join to the selector table due to the parse and execution overhead of each literal criterion.
  • Dynamic selectors 
  • Single Value joins 
This report can help to identify trees where extreme volumes mean that different options should be considered.
                          Num    Min Average Median    Max Std Dev       Sum   Num
TREE_NAME Len Uses Rows Rows Rows Rows Rows Rows Procs DTL_FIELDNAME T T T
------------------ --- ------ ------ ------- ------ ------ ------- --------- ----- ------------------ - - -

CORP_ACCT 10 5 1147 2839 2616 6668 2263 14194 1 ACCOUNT J D S
FUNCTION 10 480 9 32 35 35 8 15474 43 CHARTFIELD2 L D S
INT_SUP 6 7 225 2463 2838 2838 987 17243 1 PRODUCT L D S
STAT_PRODUCT 6 8 2889 2889 2889 2889 0 23112 1 PRODUCT J D S
AFFILIATE 5 43 215 576 509 938 223 24789 15 AFFILIATE L D S
INT_GAAP_CON 5 62 82 486 522 730 225 30153 10 BUSINESS_UNIT L D S
BU_GAAP_CON 5 96 44 619 614 731 115 59461 48 BUSINESS_UNIT L D S
STAT_ACCOUNT 10 45 23 4204 6516 6516 2905 189182 6 ACCOUNT J D S
INT_REP1 10 135 149 1563 1664 1664 379 211005 1 CHARTFIELD1 L D S
COMBO_CODE 10 172 17 1592 1532 2430 809 273846 18 CHARTFIELD1 L D S
UKGL_ACCOUNT 10 2586 2 1713 1147 7797 1793 4430262 110 ACCOUNT J D S
  • The account trees have been set to join the tree to the ledger table rather than use literal values because sometimes in excess of 6000 rows are extracted.  A query with 6000 literal terms would be extremely large, take time to generate in nVision, and take time on the database to parse and execute each criterion. 
  • STAT_PRODUCT has been set to join partly because it is large (it always extracts 2889 rows), but also because the whole tree is extracted every time, so the criteria do not cut down the result set.

nVision Performance Tuning: 6. Logging Selector Usage

Mon, 2017-11-06 13:09
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Static selectors are tracked by entries in the PSTREESELCTL table, which is maintained after the tree is extracted to the selector table.  The version number on PSTREESELCTL is compared with the corresponding version number on PSTREEDEFN to determine whether the extract of the tree to the selector table is still valid, or whether it needs to be re-extracted because the tree has been updated.  Selectors that do not have an entry in PSTREESELCTL are therefore dynamic.
Static selectors are left in the PSTREESELECTnn table after the report completes so that they can be reused.  However, many customers periodically create new effective dated versions of many trees, and so static selectors on old effective dated versions of the tree will accumulate as there is nothing to purge them.
Dynamic selectors are extracted every time the report runs.  They should normally be deleted by nVision before the report finishes.  However, if a report crashes, dynamic selectors can be left behind.  That creates a number of problems:
  • The size of the selector tables will tend to increase over time as old static and dynamic selectors are left behind.
  • That in turn affects the statistics on these tables.  The range of values for SELECTOR_NUM will become wider faster than the number of rows in the table grows.  The minimum value will either be the oldest static selector number, or the oldest dynamic selector left after a crash.  The maximum value will be the last selector number inserted when statistics were collected.
Therefore, it is useful to be able to track creation and deletion of dynamic selectors by the various nVision reports and queries.  I have therefore created a logging table PS_NVS_TREESLCTLOG (see nvision_dynamic_selectors.sql), a PL/SQL package XX_NVISION_SELECTORS and compound DML triggers on every tree selector table.
The column names in the log table have been chosen for compatibility with the PeopleSoft data dictionary, so that a record can be defined in Application Designer.
  • SELECTOR_NUM (NUMBER): Unique identifier for tree selector records.
  • PROCESS_INSTANCE (NUMBER): PeopleSoft process instance number for nVision/Query.
  • LENGTH (NUMBER): Length of the tree selector.
  • NUM_ROWS (NUMBER): Number of rows inserted into the tree selector.  Counted by the AFTER ROW part of the triggers.
  • TIMESTAMP (TIMESTAMP): Time when the rows were inserted.
  • MODULE (VARCHAR2(64)): Module attribute of the session inserting selector rows. 
  • APPINFO_ACTION (VARCHAR2(64)): Action attribute of the session inserting selector rows.
  • CLIENTINFO (VARCHAR2(64)): CLIENT_INFO attribute of the session inserting selector rows.  This will include:
    • PeopleSoft Operator ID.
    • Name of the application server or process scheduler domain.
    • Name of the machine where the client process is executing.
    • Name of the client executable process.
  • STATUS_FLAG (VARCHAR2(1)): I=Selectors Inserted, S=Static Selectors Inserted, D=Selectors Deleted, X=Selectors Deleted and Partition Dropped.
  • TREE_NAME (VARCHAR2(18)): Name of the tree from which the selector was extracted.  Obtained by querying the statement from V$SQL.
  • OWNER_ID (VARCHAR2(8)): Schema under which the nVision report is run.
  • PARTITION_NAME (VARCHAR2(128)): Name of the partition where the selectors are stored.
  • JOB_NO (NUMBER): Database job number to collect statistics on the partition.
All of the logic is kept in the PL/SQL package because it is common to the triggers on all the tree selector tables.  Insert triggers track population of selectors and delete triggers track the successful removal of dynamic selectors.  After row triggers track the selector number and count the number of rows inserted.  After statement triggers call the logging procedures.
CREATE OR REPLACE TRIGGER sysadm.pstreeselect10_insert
FOR INSERT ON sysadm.PSTREESELECT10 compound trigger
l_err_msg VARCHAR2(100 CHAR);
AFTER EACH ROW IS
BEGIN
sysadm.xx_nvision_selectors.rowins(:new.selector_num,:new.range_from_10,:new.range_to_10);
EXCEPTION WHEN OTHERS THEN NULL;
END after each row;

AFTER STATEMENT IS
BEGIN
sysadm.xx_nvision_selectors.logins(10,'SYSADM');
EXCEPTION WHEN OTHERS THEN
l_err_msg := SUBSTR(SQLERRM,1,100);
dbms_output.put_line('Error:'||l_err_msg);
END after statement;
END;
/

CREATE OR REPLACE TRIGGER sysadm.pstreeselect10_delete
FOR DELETE ON sysadm.PSTREESELECT10 compound trigger
l_err_msg VARCHAR2(100 CHAR);
AFTER EACH ROW IS
BEGIN
sysadm.xx_nvision_selectors.rowdel(:old.selector_num);
EXCEPTION WHEN OTHERS THEN NULL;
END after each row;

AFTER STATEMENT IS
BEGIN
sysadm.xx_nvision_selectors.logdel(10);
EXCEPTION WHEN OTHERS
THEN
l_err_msg := SUBSTR(SQLERRM,1,100);
dbms_output.put_line('Error:'||l_err_msg);
END after statement;
END;
/
Once the decision to bear the overhead of triggers on the selector tables is made, there are various pieces of additional information that can be captured and stored in the logging table for very little additional overhead.  It is easy to record the current session attributes such as module, action, and client_info.  The process instance number is captured at process startup and can then be read from the psftapi package.  The tree is identified by scanning V$SQL for the SQL that fired the triggers. 
It is also possible to maintain statistics on the selector tables.
Purging Selectors
The selector log can be used to drive purging of selectors not cleared by nVision processes that failed to complete.  The package includes a purge procedure to clear selectors that are not already marked as having been deleted, either where the logged process instance is no longer running, or, where there is no known process instance, where it is more than 2 days since the selector was inserted.  If the selector table is also interval partitioned, then the partition will be dropped.
A trigger on PSTREESELCTL (see pstreeselctl.sql) logs static selector maintenance.
The purge process is invoked by a trigger on PSPRCSRQST that fires when an nVision process's status is changed away from processing.  The selector purge process should also be scheduled to run daily.
The selector log itself is not purged as it contains useful information about tree usage.
Recommendations
  • If the tree selector tables are not partitioned, create a histogram on SELECTOR_NUM.  However, up to Oracle 11g, this will cease to be effective once the number of distinct values exceeds the maximum number of buckets (254).  Dynamic selectors should be purged before this happens.  From Oracle 12c the maximum number of buckets is 8192, and hybrid histograms can be used.
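The histogram can be requested explicitly when gathering statistics; a minimal sketch for the 10-character selector table (on 12c, a larger SIZE could be specified):
BEGIN
  dbms_stats.gather_table_stats(
    ownname=>'SYSADM',
    tabname=>'PSTREESELECT10',
    method_opt=>'FOR COLUMNS SELECTOR_NUM SIZE 254');
END;
/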

nVision Performance Tuning: 5. Additional Instrumentation of nVision

Thu, 2017-11-02 07:28
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

One of the challenges of tuning and monitoring nVision is to be able to identify each report being run.  Calls to the Oracle instrumentation package dbms_application_info were added to the component processor in PeopleTools 8.50, and to Application Engine in PeopleTools 8.52.  However, COBOL, nVision, and SQR were never instrumented.
Therefore, there is still a place for the psftapi package and trigger.  When a PeopleSoft batch process starts, it sets the status of its request record on the Process Scheduler request table, PSPRCSRQST, to 7, indicating that it is processing.  A trigger on that table fires on that update and calls the psftapi package.  The package sets module and action to the process name and process instance, and also stores the process instance number in a package global variable that can be read with another procedure in the package.  Every scheduled process will thus have module and action set to something meaningful.  Any PeopleSoft instrumentation will simply overwrite these values.  A session's module and action are picked up by Oracle monitoring tools; in particular, they are also stored in the Active Session History (ASH).
However, nVision reports always run under the same process names: NVSRUN for a single report, RPTBOOK for a report book of many reports, or DRILLDWN for an nVision drill-down query.  Knowing the process instance is useful because we can then look up the operator and run control ID.
However, we also need to know the report ID being run.  When each individual nVision report starts, it queries its runtime parameters from the PeopleTools table PS_NVS_REPORT.  There is no update, so it is not possible to capture this with a DML trigger.  Instead, you can create a fine-grained audit policy on the query with a PL/SQL handler; the handler package is then invoked by the audit policy.
BEGIN
DBMS_FGA.ADD_POLICY(
object_schema => 'SYSADM',
object_name => 'PS_NVS_REPORT',
policy_name => 'PS_NVS_REPORT_SEL',
handler_module => 'AEG_FGA_NVISION_HANDLER',
enable => TRUE,
statement_types => 'SELECT',
audit_trail => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
END;
/
The handler package runs in the session that triggered the audit.  It can access the audit record and extract the string of colon-separated bind variables thus obtaining the report ID and business unit.  It updates the session action attribute in the same way as psftapi.sql.
CREATE OR REPLACE PROCEDURE sysadm.aeg_fga_nvision_handler
(object_schema VARCHAR2
,object_name VARCHAR2
,policy_name VARCHAR2)
AS
l_sqlbind VARCHAR2(4000);
l_parm1 VARCHAR2(30);
l_parm2 VARCHAR2(30);
l_parm3 VARCHAR2(30);
l_parm4 VARCHAR2(30);
BEGIN
BEGIN
SELECT x.lsqlbind
, SUBSTR(x.lsqlbind,x.start1,LEAST(30,NVL(x.end1,x.lensqlbind+1)-x.start1)) parm1
, SUBSTR(x.lsqlbind,x.start2,LEAST(30,NVL(x.end2,x.lensqlbind+1)-x.start2)) parm2
, SUBSTR(x.lsqlbind,x.start3,LEAST(30,NVL(x.end3,x.lensqlbind+1)-x.start3)) parm3
, SUBSTR(x.lsqlbind,x.start4,LEAST(30,NVL(x.end4,x.lensqlbind+1)-x.start4)) parm4
INTO l_sqlbind, l_parm1, l_parm2, l_parm3, l_parm4
FROM (
SELECT l.*
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,1,1,'i'),0) start1
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,2,0,'i'),0) end1
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,2,1,'i'),0) start2
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,3,0,'i'),0) end2
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,3,1,'i'),0) start3
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,4,0,'i'),0) end3
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,4,1,'i'),0) start4
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,5,1,'i'),0) end4
, LENGTH(lsqlbind) lensqlbind
FROM sys.fga_log$ l
) x
WHERE x.sessionid = USERENV('SESSIONID')
AND x.entryid = USERENV('ENTRYID')
AND x.obj$name = 'PS_NVS_REPORT';
EXCEPTION
WHEN no_data_found THEN
RAISE_APPLICATION_ERROR(-20000,'AEG_FGA_NVISION_HANDLER: No Audit Row');
END;

IF l_parm4 IS NULL THEN
l_parm4 := l_parm3;
l_parm3 := l_parm2;
l_parm2 := l_parm1;
END IF;

IF l_parm4 IS NULL THEN
l_parm4 := l_parm3;
l_parm3 := l_parm2;
END IF;

IF l_parm4 IS NULL THEN
l_parm4 := l_parm3;
END IF;

dbms_output.put_line(l_sqlbind);
dbms_output.put_line(l_parm1);
dbms_output.put_line(l_parm2);
dbms_output.put_line(l_parm3);
dbms_output.put_line(l_parm4);

dbms_application_info.set_action(SUBSTR('PI='||psftapi.get_prcsinstance()||':'||l_parm4||':'||l_parm3,1,64));
--EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER=''PI='||psftapi.get_prcsinstance()||':'||l_parm4||':'||l_parm3||'''';
END;
/
The action attribute is then picked up by the ASH data.  It is easy to extract the report ID and business unit from the action string with regular expressions, as in this example query.
set lines 160 trimspool on
column module format a12
column action format a32
column client_id format a12
column prcsinstance format a9 heading 'Process|Instance'
column business_unit format a8 heading 'Business|Unit'
column report_id format a10
select DISTINCT module, action, client_id
, REGEXP_SUBSTR(h.action,'[[:digit:]]+') prcsinstance
, substr(regexp_substr(h.action,':([[:alnum:]])+',1,2),2) business_unit
, substr(regexp_substr(h.action,':([A-Za-z0-9_-])+',1,1),2) report_id
from v$active_session_History h
where program like 'PSNVS%'
/
Here you can see how process instance, report ID and business unit are held in action and how they can be extracted.  Now, it is possible to profile ASH data for nVision processes, find the long-running SQL and determine which layout it came from.
                                                           Process   Business
MODULE ACTION CLIENT_ID Instance Unit REPORT_ID
------------ -------------------------------- ------------ --------- -------- ----------
RPTBOOK PI=1780508:GBGL224S:UK001 GBNVISION 1780508 UK001 GBGL113S
RPTBOOK PI=1780509:GBGL010E:UK002 GBNVISION 1780509 UK002 GBGL010E
RPTBOOK PI=1780502:GBGL91PF:UK001 GBNVISION 1780502 UK001 GBGL91PF
RPTBOOK PI=1780502:GBGL91FR:UK001 GBNVISION 1780502 UK001 GBGL91FR
RPTBOOK PI=1780502:GBGL91GB:UK001 GBNVISION 1780502 UK001 GBGL91GB
RPTBOOK PI=1780502:GBGL91DM:UK002 GBNVISION 1780502 UK002 GBGL91DM
RPTBOOK PI=1780506:GBEXP2AM:UK001 GBNVISION 1780506 UK001 GBEXP2AM
RPTBOOK PI=1780509:Processing GBNVISION 1780509 Processing
RPTBOOK PI=1780500:GBGL113S:UK003 GBNVISION 1780500 UK003 GBGL113S
RPTBOOK PI=1780509:GBGL010E:UK000 GBNVISION 1780508 UK000 GBGL010E 
The code in this blog is available on GitHub.
Other recommendations
  • Create an index on SYS.FGA_LOG$ to support the query in the FGA handler package.
CREATE INDEX sys.fga_log$_obj$name
ON sys.fga_log$ (obj$name, sessionid, entryid)
TABLESPACE sysaux PCTFREE 1 COMPRESS 1
/
  • Put a regular purge of the FGA_LOG$ table in place to purge rows after, say, 31 days.  Otherwise, it will grow indefinitely; one row is added for every nVision report run.
DELETE FROM fga_log$ 
WHERE obj$name = 'PS_NVS_REPORT'
AND ntimestamp# < SYSDATE - 31
/
  • Move SYS.AUD$ and SYS.FGA_LOG$ from the SYSTEM tablespace to another ASSM tablespace using the instructions in Oracle support note 1328239.1.
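That note uses the DBMS_AUDIT_MGMT package; a minimal sketch for the FGA audit trail, assuming a suitable ASSM tablespace called PSAUDIT has already been created:
BEGIN
  dbms_audit_mgmt.set_audit_trail_location(
    audit_trail_type=>dbms_audit_mgmt.audit_trail_fga_std,
    audit_trail_location_value=>'PSAUDIT'); --hypothetical tablespace name
END;
/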

nVision Performance Tuning: 4. Partitioning of Ledger, Ledger Budget, and Summary Ledger Tables

Sat, 2017-10-28 14:41
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Note: Partitioning is a licensed option in Oracle RDBMS, and is only available on Enterprise Edition.

nVision queries always contain single value predicates on LEDGER and FISCAL_YEAR.  They will also always have either single value predicate or a range predicate on ACCOUNTING_PERIOD.  Therefore, partitioning the ledger tables on these columns is an effective way to cut down the data to be processed by the query as early as possible.
SELECT … SUM(A.POSTED_BASE_AMT) 
FROM PS_LEDGER A, …
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2015
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 11

I usually partition the ledger, ledger budget and summary ledger tables on the combination of FISCAL_YEAR and ACCOUNTING_PERIOD in a single range.
Most customers have monthly accounting periods, in which case I create 14 partitions for each of the current and previous fiscal years, but only a single partition for each earlier fiscal year:
  • One for each of the 12 accounting periods
  • One for period 0 (brought forward)
  • One for periods 998 and 999 (carry forward and adjustments)
I have seen one system with daily accounting periods that also had 14 partitions per year, in groups of 30 days.  This also worked very well.
I would then consider sub-partitioning on another column depending on the nature of data and the processing.  For example:
  • LEDGER is often a good candidate, in which case I would create one list sub-partition for each of the larger ledgers and a default list sub-partition for the rest.
  • On one global system with various regional, but locally overnight, batch processing windows, the ledger was list sub-partitioned on BUSINESS_UNIT.  A list sub-partition was proposed for each region, containing the top business units for that region.  This not only helped regional reporting but also minimised inter-regional contention.
  • It would even be possible to vary the sub-partitioning in different fiscal years if a change occurred in the business.
This example shows part of the DDL used to create the LEDGER table.
  • There is one range partition for the whole of fiscal year 2014 because it is historical and rarely queried, and then not usually by a single period.
  • Monthly partitioning is used from the previous fiscal year, 2015, onwards.
  • Historical partitions are created without any reservation for free space (PCTFREE 0) as they are closed and won't be updated any further.  They could also be compressed.
CREATE TABLE sysadm.gfc_ledger
(...)
TABLESPACE GLLARGE
PCTFREE 10 PCTUSED 80
PARTITION BY RANGE(FISCAL_YEAR,ACCOUNTING_PERIOD)
SUBPARTITION BY LIST (LEDGER)
(PARTITION ledger_2014 VALUES LESS THAN (2015,0) PCTFREE 0 …
(SUBPARTITION ledger_2014_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2014_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015_bf VALUES LESS THAN (2015,1) PCTFREE 0…
(SUBPARTITION ledger_2015_bf_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_bf_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015_01 VALUES LESS THAN (2015,2) PCTFREE 0 …
(SUBPARTITION ledger_2015_01_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_01_z_others VALUES (DEFAULT)
)

,PARTITION ledger_2015_12 VALUES LESS THAN (2015,13) PCTFREE 0 …
(SUBPARTITION ledger_2015_12_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_12_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015_cf VALUES LESS THAN (2016,0) PCTFREE 0 …
(SUBPARTITION ledger_2015_cf_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_cf_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2016_bf VALUES LESS THAN (2016,1) PCTFREE 0 …
(SUBPARTITION ledger_2016_bf_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2016_bf_z_others VALUES (DEFAULT)
)

)
ENABLE ROW MOVEMENT
PARALLEL
NOLOGGING
/
I usually recommend locally partitioning all indexes.  Even though FISCAL_YEAR and ACCOUNTING_PERIOD are the 24th and 25th columns on the unique index on LEDGER, I would still locally partition it.
CREATE UNIQUE INDEX sysadm.ps_ledger_new ON sysadm.ps_ledger
(business_unit,ledger,account,altacct,deptid
,operating_unit,product,fund_code,class_fld,program_code
,budget_ref,affiliate,affiliate_intra1,affiliate_intra2,chartfield1
,chartfield2,chartfield3,project_id,book_code,gl_adjust_type
,date_code,currency_cd,statistics_code,fiscal_year,accounting_period
) LOCAL
(PARTITION ledger_2014 PCTFREE 0
(SUBPARTITION ledger_2014_actuals

,SUBPARTITION ledger_2014_z_others
)

)
TABLESPACE GLLEDGER_IDX
PCTFREE 5 COMPRESS 3
PARALLEL
/
ALTER INDEX ps_ledger_new NOPARALLEL
/
Maintenance Tasks
The introduction of range partitioning on FISCAL_YEAR brings some regular maintenance tasks.
  • New partitions must be added to the ledger and summary ledger tables for each new fiscal year before it is opened and transactions posted to it.  The LEDGER_BUDG table can be similarly partitioned and partitions should be added before budget entries are made.
  • I deliberately do not create MAXVALUE partitions on ledger tables.  If the application attempted to post data to an accounting period for which new partitions had not been created, it would experience an Oracle error (ORA-14400: inserted partition key does not map to any partition).  Therefore, it is essential to remember to add the partitions in advance.  I think this is preferable to forgetting to add the partitions and having performance degrade as data accumulates in the MAXVALUE partition.
  • As periods close and the application ceases to insert or update rows, their partitions can be compressed using:
ALTER TABLE … MOVE PARTITION … COMPRESS UPDATE INDEXES
  • As and when old fiscal years are no longer needed, they can be archived by simply dropping the partitions, or by exchanging them out to another table.
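For example, dropping the oldest fiscal year might look like this (a sketch; the UPDATE INDEXES clause maintains any global indexes during the drop):
ALTER TABLE sysadm.ps_ledger DROP PARTITION ledger_2014 UPDATE INDEXES
/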
Managing Partitioning in PeopleSoft
Application Designer is not good at managing partitioning.  I have written previously about the limited support for partitioning introduced in PeopleTools 8.54.  It uses Oracle's DBMS_METADATA package to preserve existing settings, including partitioning, but the support for initially implementing partitioning is poor.
It may be reasonable to manually manage partitioning in a single table, but if you also have a number of summary ledgers, and have perhaps also built materialized views on them, you can have a significant number of partitioned objects to manage.  Manual scripting is going to become a significant overhead.  You might want to look at Managing Oracle Table Partitioning in PeopleSoft Applications with GFC_PSPART package.

nVision Performance Tuning: 3. Indexing of Ledger, Budget, and Summary Ledger Tables on Non-Engineered Oracle Systems

Wed, 2017-10-25 12:06
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

If you are on a conventional non-engineered Oracle system, then it is not going to be viable to full scan the ledger tables for every query.  You are going to have to use indexes to find your data.  Every customer's nVision reports are unique to that customer and the nature of their business.  Different customers will analyse their data by different combinations of attributes.  Indexes will be needed to match those analysis criteria.
A survey of the nVision reports and the SQL they produce will reveal the various sets of columns by which ledger data is queried.  The heaviest SQL statements can be identified from Active Session History (ASH) or the Automatic Workload Repository (AWR) (or Statspack if the Diagnostics Pack is not licensed).  Let's take the following two statements as examples and consider the indexes that would be needed to support them.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1
AND L4.SELECTOR_NUM=78948
AND A.CHARTFIELD3>= L4.RANGE_FROM_10 AND A.CHARTFIELD3 <= L4.RANGE_TO_10
AND L4.TREE_NODE_NUM BETWEEN 1000000000 AND 2000000000
AND L2.SELECTOR_NUM=92481 AND A.ACCOUNT=L2.RANGE_FROM_10
AND (A.DEPTID BETWEEN '10000' AND '18999'

OR A.DEPTID='29150'
OR A.DEPTID=' ')
AND L.SELECTOR_NUM=92469 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='USD'
GROUP BY L4.TREE_NODE_NUM
  • nVision queries will always contain single value predicates on LEDGER and FISCAL_YEAR.
  • You may see some queries on a single ACCOUNTING_PERIOD, as in the query above, usually the current period.  Otherwise, they might be on a range of ACCOUNTING_PERIODs, usually the year to date as in the example below.
  • Then there will be predicates on the other analysis columns, in this case DEPTID, ACCOUNT, CHARTFIELD1, CHARTFIELD3.
  • You may see criteria on CURRENCY_CD, however, this may not be very selective.  Generally, most of the queries will be on the base currency, and most of the data will be in the base currency.
  • Sometimes you may see a criterion on STATISTICS_CODE=' '.  Normally, only a very few ledger rows have a statistics code, so there is no benefit in adding STATISTICS_CODE to any index.
  • Generally, single value predicate columns should be placed at the front of the index, followed by the other analysis criteria.
  • However, I would not attempt to overload the index with additional non-selective columns because unless you index all the referenced columns, including the amount columns, you will not be able to satisfy the query from the index only, and you will still have to visit the table.
  • So for this query, you might build an index on the following columns: LEDGER, FISCAL_YEAR, ACCOUNTING_PERIOD, DEPTID, ACCOUNT, CHARTFIELD1, CHARTFIELD3, CURRENCY_CD
This query, from the same system, is on a different combination of columns
SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,L4.TREE_NODE_NUM,SUM(A.POSTED_BASE_AMT) 
FROM PS_LEDGER A, PSTREESELECT05 L1, PSTREESELECT10 L2, PSTREESELECT10 L3, PSTREESELECT10 L4, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS' AND A.FISCAL_YEAR=2015
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 11
AND L1.SELECTOR_NUM=30369 AND A.BUSINESS_UNIT=L1.RANGE_FROM_05
AND L2.SELECTOR_NUM=30374 AND A.CHARTFIELD1=L2.RANGE_FROM_10
AND L3.SELECTOR_NUM=30375 AND A.ACCOUNT=L3.RANGE_FROM_10
AND L4.SELECTOR_NUM=30376 AND A.CHARTFIELD2=L4.RANGE_FROM_10
AND L.SELECTOR_NUM=30372 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='USD'
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,L4.TREE_NODE_NUM
  • In this case, the above query is part of a fiscal year-to-date report, returning the same period in a previous fiscal year.  The system is currently in period 11, so this report is looking at accounting periods 1 to 11.  ACCOUNTING_PERIOD is a much less selective criterion in this case.  If the partitioning option were not available, I might be tempted to demote ACCOUNTING_PERIOD further down the list of indexed columns if the criterion on the next column is more selective.
  • This query uses 5 trees.  A cartesian join of all 5 is likely to produce a product so large that it will not be effective to do index lookups for each member of the product.  The columns with the most selective criteria should be earlier in the index.  It might not even be worth including columns with non-selective criteria in the index.
  • So you might build an index on the following columns: LEDGER, FISCAL_YEAR,  BUSINESS_UNIT, ACCOUNT, CHARTFIELD1, CHARTFIELD2, CURRENCY_CD, ACCOUNTING_PERIOD.
Recommendations
It is easy to see how you might build up a large number of indexes on the ledger and summary ledger tables.  You might have pairs of indexes, one for single accounting periods and perhaps another similar one for year-to-date, for each set of analysis criteria.
However, as you add indexes the overhead of index maintenance experienced by any processes that post to the ledger, or by the summary ledger build process will increase.  There is a balance to be struck. You will have to build as many indexes with as many columns as are necessary, but as few as possible.
If you partition the tables on FISCAL_YEAR and ACCOUNTING_PERIOD and locally partition the indexes, you will only need one of the indexes.
Index leaf block dictionary compression can significantly reduce the size of the index.  Any increase in the CPU overhead of using and maintaining the index will be more than offset by the reduction in I/O overhead.  The optimal prefix length can be calculated using the command ANALYZE INDEX … VALIDATE STRUCTURE.  This feature does not require the Advanced Compression licence.
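A minimal sketch of that calculation (ps_ledger_idx is a hypothetical index name; note that ANALYZE … VALIDATE STRUCTURE locks the index against DML while it runs):
ANALYZE INDEX sysadm.ps_ledger_idx VALIDATE STRUCTURE
/
SELECT name, opt_cmpr_count, opt_cmpr_pctsave
FROM index_stats
/
OPT_CMPR_COUNT is the prefix length to specify in CREATE or ALTER INDEX … COMPRESS n, and OPT_CMPR_PCTSAVE is the expected percentage space saving.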

nVision Performance Tuning: 2. Effects of Performance Options

Sat, 2017-10-21 04:16
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

In this post, I examine the effect of different settings for the tree performance options on how the SQL is generated.  It is common to see different performance options in use on different trees in the same SQL query.  It is important to be able to look at a piece of SQL generated by nVision and work out which performance options have been set on which trees.
  • Access Method: join to tree selector –v- use literal values
  • Tree Selectors: static –v- dynamic
  • Selector Options: single values –v- inequalities/between
Access Method
This option allows you to control how the tree selector table is combined with the ledger table.  The choices of interest are whether to join the tree selector, or whether to use literal values where feasible.
Join
Here, two tree selectors are joined to the ledger table.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A, PSTREESELECT10 L4, … PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017

AND L.SELECTOR_NUM=73130
AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
It is not feasible to use literal values for L4 because the data is grouped by L4.TREE_NODE_NUM; however, it is feasible to use literal values for L because no column from this table is included in either the SELECT or GROUP BY clauses.
Literal Values
When 'use literal values' is selected, nVision replaces the join to the tree selector table with a set of literal values generated from the contents of the tree selector table.
In this example, the join to the CHARTFIELD1 tree selector has been replaced with a series of literal value predicates.  There is one for every selected tree leaf.  A simple equality condition for single value leaves, and a BETWEEN condition for ranged leaves.
SELECT L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A, PSTREESELECT10 L2
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L2.SELECTOR_NUM=101142
AND A.ACCOUNT=L2.RANGE_FROM_10
AND (A.CHARTFIELD1='0070700'
OR A.CHARTFIELD1='0078999'

OR A.CHARTFIELD1 BETWEEN '0091100' AND '0091101'
OR A.CHARTFIELD1='0150204')
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM
Recommendations
In general, the literal values option is very beneficial.
  • It simplifies the queries generated by nVision by removing a table from the FROM clause, and leaves the optimizer with fewer choices when determining the execution plan.
  • Data is filtered on the scan of the ledger table rather than when it is joined to the tree selector.  On an engineered system these predicates can be pushed down to the storage cells.
However, there are some disadvantages:
  • As the number of leaves on a tree increases, so the number of literal predicates in the query increases, and the time taken to parse the SQL increases.  As a rough guideline, I would say that parse time starts to become significant as the tree exceeds 2000 leaves, and you might be better off using the join option.
  • Also, as the number of literal predicates increases, the time taken to evaluate them increases.  This applies to both conventional and engineered systems.
  • Some PeopleSoft customers have found it generally beneficial to raise the value of OPTIMIZER_DYNAMIC_SAMPLING from the default of 2 to 4, so that Oracle collects dynamic statistics where there are expressions in where-clause predicates or multiple predicates on the same table (see Database SQL Tuning Guide -> Managing Optimizer Statistics).  However, this can significantly increase the parse overhead of nVision SQL using many predicates.  If necessary, this parameter can be reset at session level for nVision with a trigger (see the sketch after this list).
  • The nVision client also has to do more work to generate the SQL.
  • Literal values increase the variability of the SQL generated; reports using different tree nodes will generate SQL with different numbers of predicates, and there is effectively no chance that any two statements will be similar enough to use any of Oracle's plan stability technologies (SQL baselines, SQL profiles, SQL plan directives).
  • In theory, cursor sharing could reduce the parse overhead, but my experience is that the optimizer does not correctly cost the predicates, resulting in different execution plans with poorer performance.
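A sketch of such a trigger, modelled on the psprcsrqst trigger technique used elsewhere in this series (the predicates identifying nVision processes are assumptions to be adapted to your system):
CREATE OR REPLACE TRIGGER sysadm.nvision_dyn_sampling
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus IN('7') AND new.prcstype like 'nVision%')
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET optimizer_dynamic_sampling = 2';
EXCEPTION WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/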
Tree Selectors
This option allows you to choose whether a tree selector is managed in a static or dynamic manner.  As you will see, the Tree Selectors performance option is tied in with the Selector Options performance option.
Static
Where a tree uses a static selector, the entire tree is extracted into the tree selector.  There will be no tree node number criteria in the extract statement.
INSERT INTO PSTREESELECT10
(SELECTOR_NUM,TREE_NODE_NUM,RANGE_FROM_10,RANGE_TO_10)
SELECT 78722,L.TREE_NODE_NUM, SUBSTR(L.RANGE_FROM,1,10),SUBSTR(L.RANGE_TO,1,10)
FROM PSTREELEAF L
WHERE L.SETID='GLOBE'
AND L.SETCNTRLVALUE=' '
AND L.TREE_NAME='FUNCTION'
AND L.EFFDT=TO_DATE('1901-01-01','YYYY-MM-DD')
Note that the tree leaf was not joined to the underlying detail table.  Ranged leaves become ranged selectors.  This is because there is no mechanism to determine when the detail table has changed and thus when the static selector would be out of date and need to be refreshed.  This is why single value joins cannot be performed with static selectors.
Tree node number criteria will appear in the nVision queries to identify the required selectors.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1
AND L4.SELECTOR_NUM=89052
AND A.CHARTFIELD3>= L4.RANGE_FROM_10
AND A.CHARTFIELD3 <= L4.RANGE_TO_10
AND L4.TREE_NODE_NUM BETWEEN 1000000000 AND 2000000000
A control row is maintained on the table PSTREESELCTL.  This row indicates that a particular selector is static, and the version number is compared to the version number of PSTREEDEFN to determine whether the selector is valid, or whether the tree has changed.  If the tree has changed it will have a higher version number, and the selector will be extracted again.  The selector number from the query can be looked up on this table to identify the tree.
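The selector number seen in an nVision query can be resolved back to a tree with a query such as the following.  This is a hedged sketch; it assumes the usual PeopleTools key columns on PSTREESELCTL and PSTREEDEFN.
SELECT c.setid, c.setcntrlvalue, c.tree_name, c.effdt
,      c.version selector_version
,      d.version tree_version
FROM   pstreeselctl c
       LEFT OUTER JOIN pstreedefn d
       ON  d.setid         = c.setid
       AND d.setcntrlvalue = c.setcntrlvalue
       AND d.tree_name     = c.tree_name
       AND d.effdt         = c.effdt
WHERE  c.selector_num = 89052 /*selector number taken from the query above*/
/
If selector_version is lower than tree_version, the tree has changed since the selector was extracted, and the selector will be extracted again.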
Dynamic
Dynamic selectors are built on-the-fly by nVision as the report runs.  They select just the rows from the tree that are required by the report.
INSERT INTO PSTREESELECT10
(SELECTOR_NUM,TREE_NODE_NUM,RANGE_FROM_10,RANGE_TO_10)
SELECT DISTINCT 108090,L.TREE_NODE_NUM,D.ACCOUNT,D.ACCOUNT FROM
PS_GL_ACCOUNT_TBL D, PSTREELEAF L
WHERE L.SETID='GLOBE' AND L.SETCNTRLVALUE=' '
AND L.TREE_NAME='ACCOUNT'
AND L.EFFDT=TO_DATE('1901-01-01','YYYY-MM-DD')
AND L.RANGE_FROM<>L.RANGE_TO
AND D.ACCOUNT BETWEEN L.RANGE_FROM AND L.RANGE_TO
AND D.SETID='GLOBE' AND
(L.TREE_NODE_NUM BETWEEN 789473386 AND 793372019
OR L.TREE_NODE_NUM BETWEEN 810915873 AND 812865189

OR L.TREE_NODE_NUM BETWEEN 1089668204 AND 1095516154
OR L.TREE_NODE_NUM BETWEEN 1113060008 AND 1115009324)
All the tree node number predicates are found in the query that populates the selector table and not in the query that generates the report.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='S_USMGT'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L.SELECTOR_NUM=96774
AND A.CHARTFIELD1=L.RANGE_FROM_10 AND
A.CURRENCY_CD='USD' GROUP BY L4.TREE_NODE_NUM
nVision will delete dynamic selectors after they have been used.  However, if the report crashes or times out, rows will be left in the selector table, and this debris can build up over time.  So there is a requirement to manage these tables.
Note that in this example when the selector was extracted, the tree leaf table was joined to the underlying table on which the tree was based (in this case GL_ACCOUNT_TBL).  This occurs when the single value join option is selected.  The tree selector contains a row for each value rather than for each tree leaf.  This will result in larger tree selectors where ranged leaves are used.
PeopleSoft do not provide any way to identify the tree used to create a dynamic selector.  From the SQL query, all we can see is the field to which it was joined.  The above example is a tree related to CHARTFIELD1.
Recommendations
Although static selectors are easier to manage, the decision to use dynamic selectors is driven by the need to use single value joins.
  • It is important to maintain up-to-date statistics on the selector tables (a sketch follows this list).  As new selectors are extracted, a series of ever-increasing selector numbers is allocated from a sequence maintained on the table PSTREESELNUM.  If the statistics are not maintained, the selector numbers will exceed the high value recorded in the column statistics for column SELECTOR_NUM, and the database will not correctly calculate the number of rows expected, resulting in inappropriate execution plans.
  • It is easier to manage optimizer statistics on static selectors because the PSTREESELCTL table records when they have been updated.  In fact, it is possible to put a trigger on that table to maintain statistics.  However, static selectors imply range predicates that bring other problems that I discuss in the next section.
  • Using compound triggers on the selector tables it is possible to maintain a log of selectors, maintain statistics on them, and purge selectors left behind by nVision reports that have not cleaned up after themselves.  This log can also be used to identify the trees used in dynamic selectors.  I will describe this solution later.
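As a minimal sketch (assuming the typical SYSADM owning schema), refreshing the statistics on a selector table is a single DBMS_STATS call, so that the high value recorded for SELECTOR_NUM covers recently allocated selectors:
BEGIN
  sys.dbms_stats.gather_table_stats(
    ownname => 'SYSADM'          /*assumption: typical PeopleSoft owner*/
  , tabname => 'PSTREESELECT10'
  , force   => TRUE);            /*gather even if statistics are locked*/
END;
/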
Recommendation: Prior to Oracle 11g, I recommended using static selectors and ranged joins; with careful management of statistics on the tree selectors and indexing of the ledger tables, good results can be obtained.  The tree selectors are Cartesian joined together before using index lookups on the ledger tables.
However, this approach is generally not effective for queries that reference more than 3 tree selectors, nor for very large trees.  The size of the Cartesian product is the product of the number of rows extracted from each tree selector table.  For example, three selectors with 100 rows each will result in 1 million lookups on the ledger table.  As the size of the Cartesian product grows, the number of index look-ups also grows.  There comes a point when it is better to remove the least selective tree from the Cartesian product and hash join it after looking up the ledger.  Striking the correct balance can be difficult, and leads to using different performance options in different reports.
From Oracle 11g, and especially on engineered systems, I now prefer dynamic selectors because they permit the use of single value joins that can use Bloom filters and extended statistics.  Although, where smaller trees are in use that do not significantly cut down the query result set, range joins may still be preferable, in which case so might static selectors.
Selector Options
This performance option controls how the tree selector is joined to the ledger table.  The choice is between single value equality joins or inequality joins.
Ranged Joins
There are two forms of ranged join, either using the BETWEEN keyword or using a pair of inequalities.  There is no difference to Oracle because the optimizer rewrites BETWEEN as a pair of inequalities.
NB: If you specify single value joins on a static selector you still get an inequality ranged join because only the tree leaves are extracted to the selector.  nVision still does this even if all of the leaves in the tree are single values and not ranges.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1
AND L4.SELECTOR_NUM=89052
AND A.CHARTFIELD3>= L4.RANGE_FROM_10
AND A.CHARTFIELD3 <= L4.RANGE_TO_10

AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
Single Value Joins
If single value joins are selected, nVision generates an equality join between the tree selector and the ledger table.  The tree selector must be dynamic, and nVision extracts all the intermediate values between the tree node ranges from the underlying detail table.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L.SELECTOR_NUM=96774
AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
Recommendations
In general, I would recommend using single value joins and therefore also dynamic selectors.  The main advantage is that they enable two particular optimisations in Oracle.
  • Bloom filters only work with equality predicates, and therefore only with single value joins.  They enable a filter on one column of one table to be roughly mapped to another column of another table upon which there is an equality join.  This filters data earlier in a query, cutting down the size of the eventual join operation.  It is a rough match in that it may return some false positives to the subsequent hash operation, but it will not omit any results.  The hash join then does an exact match of the rows to produce the correct result set.  On an Engineered Oracle system this filter can also be pushed down to the storage cells. 
  • This is an example of the kind of execution plan that we should see when a Bloom filter is used.  A filter has been created on PSTREESELECT10 at line 3, and it is applied to PS_LEDGER at line 6.
--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                   |       |       | 12876 (100)|          |       |       |
|   1 |  HASH GROUP BY                             |                   |   501 | 35571 | 12876  (42)| 00:00:01 |       |       |
|   2 |   HASH JOIN                                |                   |  975K |   66M | 12822  (41)| 00:00:01 |       |       |
|   3 |    JOIN FILTER CREATE                      | :BF0000           |  2577 | 59271 |     7   (0)| 00:00:01 |       |       |
|   4 |     PARTITION RANGE SINGLE                 |                   |  2577 | 59271 |     7   (0)| 00:00:01 | 87968 | 87968 |
|   5 |      INDEX FAST FULL SCAN                  | PS_PSTREESELECT10 |  2577 | 59271 |     7   (0)| 00:00:01 | 87968 | 87968 |
|   6 |    JOIN FILTER USE                         | :BF0000           | 1715K |   78M | 12804  (41)| 00:00:01 |       |       |
|   7 |     PARTITION RANGE ITERATOR               |                   | 1715K |   78M | 12804  (41)| 00:00:01 |    27 |    34 |
|   8 |      MAT_VIEW REWRITE ACCESS STORAGE FULL  | PS_LEDGER         | 1715K |   78M | 12804  (41)| 00:00:01 |    27 |    34 |
--------------------------------------------------------------------------------------------------------------------------------
  • Extended statistics (i.e. statistics on groups of columns) also only work with equality predicates, so single value join queries can benefit from them (see the sketch below).
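As a minimal sketch, a column group is created with DBMS_STATS and then populated by the next statistics gather.  The SYSADM schema and the column pair used here are purely illustrative:
SELECT sys.dbms_stats.create_extended_stats('SYSADM','PS_LEDGER','(CHARTFIELD1,CURRENCY_CD)')
FROM   dual
/
BEGIN /*regather so the new column group statistics are populated*/
  sys.dbms_stats.gather_table_stats(ownname => 'SYSADM', tabname => 'PS_LEDGER');
END;
/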
There are drawbacks to be overcome.
  • Single value joins require the use of dynamic selectors.  That in turn requires management of statistics on the dynamic selectors, and of the debris left in the tree selector tables by failed reports that have not cleaned up after themselves.
  • Single value joins can result in larger tree selector tables with many more rows than tree leaves.  For large trees, or trees with many values in leaf ranges, resulting in more than about 2000 rows on the tree selector tables, it can be advantageous to change the Access Method performance option from literal values to join, because the parse overhead of so many literals becomes significant.

nVision Performance Tuning: 1. nVision Performance Options

Mon, 2017-10-16 14:26
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

At no point when defining an nVision report does the developer directly specify a SQL statement; all the SQL is generated by nVision from the report definition.  That also means that it is not possible to directly intervene and change the SQL statement, for example, to add an optimizer hint.

However, the way that the SQL is generated can be controlled via the nVision performance options. Setting these options appropriately can make a significant difference to nVision performance, but the optimal settings will differ from system to system, from tree to tree, and sometimes even from report to report.
Specification of Performance Options
nVision performance options are specified for named trees.  Since PeopleTools 8, they are defined in the tree definition.
Tree Performance Options in PIA
They are stored in the database in PeopleTools table PSTREEDEFN.
Field Name         Field Value  Description
TREE_ACC_METHOD    D            Use Application Defaults
                   J            Join to Tree Selector
                   L            Use Literal Values
                   S            Sub-SELECT Tree Selector
TREE_ACC_SELECTOR  D            Dynamic Selectors
                   S            Static Selectors
TREE_ACC_SEL_OPT   B            Range of Values (BETWEEN)
                   R            Range of Values (>=…<=)
                   S            Single Values

Performance options can also be specified in individual layouts, and the specific definition in the layout overrides the generic definition in the tree.
Add-Ins->nVision->Layout Options
You can also see the trees defined in a layout file in the Name Manager in Excel.  The performance options are converted into a string of 5 characters.  Below you can see a tree called OPERATING_UNIT. The easiest way to determine whether performance options are set in a layout is to look in the Name Manager, and the only reliable way to delete a performance option from a layout is to delete them from the Name Manager.
Formulas -> Name Manager

The string of flags in the NvsTree formula can be decoded with this table.
Position  Y                                             N                                  S
1         Dynamic Selector                              Static Selector
2         Join to tree selector                         Suppress join; use literal values  Sub-SELECT tree selector
3         Range of values (BETWEEN)                     Range of values (>= … <=)
4         Single Value (overrides position 3)
5         Non-specific node criteria (above 2 billion)
Recommendations
It is rare to need to set different performance options for the same tree in different layouts. Therefore, it is easiest to set performance options at tree level and not at all in layouts unless absolutely necessary.
Tree performance options can even be updated in bulk by SQL script (a sketch follows), though it is essential to maintain the version numbers correctly.  Care should be taken when migrating trees from other environments to ensure that the performance options are still correctly set, as the performance options are part of the tree being migrated.
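This is a hedged sketch of such a bulk update.  The option values and tree name are illustrative, and it assumes the usual PeopleTools convention of incrementing the SYS and TDM counters on PSVERSION and PSLOCK and stamping the tree with the new version:
UPDATE psversion SET version = version + 1 WHERE objecttypename IN('SYS','TDM');
UPDATE pslock    SET version = version + 1 WHERE objecttypename IN('SYS','TDM');
UPDATE pstreedefn
SET    tree_acc_method   = 'L' /*use literal values*/
,      tree_acc_selector = 'D' /*dynamic selector*/
,      tree_acc_sel_opt  = 'S' /*single values*/
,      version           = (SELECT version FROM psversion WHERE objecttypename = 'TDM')
WHERE  tree_name = 'FUNCTION'  /*illustrative tree name*/;
COMMIT;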
It is now possible to give clear advice to developers and especially business users who create nVision reports:
  • Don't use them.
  • If you have specified them in layouts in the past, then remove them unless you really do need to use a different option in a particular report.
Some customers have written Visual Basic macros to work through all their layouts and remove all tree performance options settings.

nVision Performance Tuning: Introduction

Tue, 2017-10-10 15:41
This blog post is the first in a series that discusses how to get good performance from nVision as used in General Ledger reporting.

PS/nVision is a PeopleTools technology that extracts data from the database and places it in an Excel spreadsheet (see PS/nVision Overview).  Although PS/nVision can be used with any PeopleSoft product, it is most commonly used in Financials General Ledger.

The SQL queries generated by nVision are, at least conceptually, similar to data warehouse queries. The ledger, ledger budget or summary ledger tables are the fact tables.

The ledger tables are analysed by their attribute columns. There are always literal conditions on the fiscal year and accounting period, there is usually a literal condition on currency code.  Then there are criteria on some of the other attributes.  I will take an example that analyses the ledger table in three dimensions: BUSINESS_UNIT, ACCOUNT and CHARTFIELD1, but there are many other attribute columns on the ledger tables.  These attributes are defined in lookup tables in the application, but their hierarchies are defined in trees.

nVision reports use the trees to determine which attribute values to report.  A report might report on a whole tree, or particular nodes, or branches of a tree.  nVision joins the tree definition to the attribute table and produces a list of attributes to be reported.  These are put into working storage tree selector tables (PSTREESELECT01 to 30).  The choice of selector table is controlled by the length of the attribute column.  BUSINESS_UNIT is a 5 character column so it goes into PSTREESELECT05. CHARTFIELD1 and ACCOUNT are 10 character columns so they use PSTREESELECT10.  These selector tables form the dimensions in the queries.

Here is an example of a SQL statement generated by nVision.  The tree selector 'dimension' tables are joined to the ledger 'fact' table.

SELECT L.TREE_NODE_NUM,L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A
, PSTREESELECT05 L1
, PSTREESELECT10 L
, PSTREESELECT10 L2
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2016
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 11
AND L1.SELECTOR_NUM=30982 AND A.BUSINESS_UNIT=L1.RANGE_FROM_05
AND L.SELECTOR_NUM=30985 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND L2.SELECTOR_NUM=30984 AND A.ACCOUNT=L2.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L.TREE_NODE_NUM,L2.TREE_NODE_NUM
This SQL looks simple enough, but there are various complexities:
  • The tree selector tables are populated at runtime.  Many dimensions can be stored in each tree selector table, each keyed by a different SELECTOR_NUM.
  • Selectors can be static or dynamic.  In dynamic selectors, the data is only stored temporarily for the lifetime of the report and will be deleted when it completes.  So immediately, there is a challenge of keeping statistics up to date, and even then Oracle doesn't always manage to find an effective execution plan.
  • Different selectors will have different numbers of rows, so the statistics have to describe that skew.
  • Different nVision reports, and even different parts of the same report, generate different statements that can use different combinations of attribute columns.  The number of dimensions can vary; I have seen systems that use as many as five different trees in a single query.
  • Then the database needs to find the relevant rows on the ledger table for the dimensions specified as efficiently as possible.
This very quickly becomes a difficult and complex problem.  This series of articles works through the various challenges and describes methods to overcome them.  Not all of them are applicable to all systems; in some cases, it will be necessary to choose between approaches depending on circumstances.

nVision Performance Tuning: Table of Contents

Tue, 2017-10-10 15:39
This post is an index for a series of blog posts that discuss how to get good performance from nVision as used in General Ledger reporting.  As the posts become available links will be updated in this post.
  • Introduction
  • nVision Performance Options
  • Indexing of Ledger, Budget and Summary Ledger Tables
  • Partitioning of Ledger, Budget and Summary Ledger Tables
  • Additional Oracle Instrumentation for nVision
  • Logging Selector Usage
  • Analysis of Tree Usage with the Selector Log
  • Interval Partitioning and Statistics Maintenance of Selector Tables
  • Compression without the Advanced Compression option
  • Maintaining Statistics on Non-Partitioned Selector Tables
  • Excel -v- OpenXML
The current versions of scripts mentioned in the series will be made available on GitHub.


This week's PeopleSoft Administrator Podcast

Fri, 2017-09-22 09:22
This week's  PeopleSoft Administrator Podcast includes a few minutes of me talking to Dan and Kyle about Oracle Resource Manager.
(22 September 2017) #99 - Oracle Resource Manager You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

PeopleSoft and Adaptive Query Optimization in Oracle 12c

Fri, 2017-06-30 07:44
Adaptive Query Optimization is a significant feature in Oracle 12c. Oracle has made lots of information available on the subject (see https://blogs.oracle.com/optimizer/oracle-database-12c-is-here).
Adaptive Query Optimization is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics…There are two distinct aspects in Adaptive Query Optimization: adaptive plans, which focuses on improving the execution of a query and adaptive statistics, which uses additional information to improve query execution plans.
from Optimizer with Oracle Database 12c Release 2, White Paper June 2017 (see http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-with-oracledb-12c-1963236.pdf)
Adaptive Statistics include:
  • SQL plan directives: created and used to adapt SQL execution plans.
  • Statistics feedback: cardinality from table joins is used to improve SQL execution plans.
  • Performance feedback: improves the degree of parallelism chosen when PARALLEL_DEGREE_POLICY is set to ADAPTIVE
  • Adaptive dynamic sampling for parallel execution: dynamic statistics are gathered for certain parallel queries to improve cardinality estimates. The sample size is determined automatically.
(Parallelism is not used by default within PeopleSoft because all objects are marked NOPARALLEL by Application Designer).
This additional information should help the optimizer make better decisions, but it comes at the price of making the database do more work during SQL parse. Unfortunately, PeopleSoft makes extensive use of dynamically generated SQL, often with literal values, leading to large amounts of parse. Even a small additional overhead during SQL parse can result in a significant overhead for the entire system.
Adaptive Plans include:
  • Nested loop join/Hash join selection: the optimizer chooses between nested loops or hash joins at query runtime.
  • Adaptive parallel distribution method: the parallel distribution method is determined at runtime.
  • Star transformation bitmap pruning: certain bitmap indexes may be removed from the SQL execution plan at runtime if selectivity is significantly poorer than the estimate.
When joining tables, making the best choice between nested loops and hash joins is often critical for good performance. This feature could be of significant benefit to many applications, not just PeopleSoft. Let's take, as an example, this SQL generated by nVision on a 12c release 1 database. Due to the literal values it is very unlikely that the statement will be executed a second time.
SELECT SUM(A.POSTED_TOTAL_AMT) FROM sysadm.PS_LEDGER A WHERE A.LEDGER ='XXXXXXXXXX' AND A.FISCAL_YEAR=2015 AND (A.ACCOUNTING_PERIOD 
BETWEEN 1 AND 12 OR A.ACCOUNTING_PERIOD BETWEEN 998 AND 999) AND A.ACCOUNT IN ('40000001','40000002','40000003','40000004') AND
(A.DEPTID ='001A' OR A.DEPTID ='002A' OR A.DEPTID ='003A' OR A.DEPTID ='004A' OR A.DEPTID ='005B' OR A.DEPTID ='006A' OR A.DEPTID
='007A') AND A.CURRENCY_CD ='USD' AND A.STATISTICS_CODE=' '
This recursive SQL was generated by dynamic statistics processing during SQL parse. Such SQL is easily identifiable by the /* DS_SVC */ comment.
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
optimizer_features_enable(default) no_parallel result_cache(snapshot=3600) */
SUM(C1)
FROM (
SELECT /*+ qb_name("innerQuery") NO_INDEX_FFS( "A") */ 1 AS C1
FROM "SYSADM"."PS_LEDGER" SAMPLE BLOCK(82.1847, 8) SEED(12) "A"
WHERE ("A"."FISCAL_YEAR"=2015) AND ("A"."CURRENCY_CD"='USD') AND
("A"."ACCOUNT"='40000001' OR "A"."ACCOUNT"='40000002' OR
"A"."ACCOUNT"='40000003' OR "A"."ACCOUNT"='40000004') AND
("A"."ACCOUNTING_PERIOD"<=170 AND "A"."ACCOUNTING_PERIOD">=1 OR
"A"."ACCOUNTING_PERIOD">=901 AND "A"."ACCOUNTING_PERIOD"<=905) AND
("A"."DEPTID"='001A' OR "A"."DEPTID"='002A' OR "A"."DEPTID"='003A' OR
"A"."DEPTID"='004A' OR "A"."DEPTID"='005B' OR "A"."DEPTID"='006A' OR
"A"."DEPTID"='007A') AND ("A"."LEDGER"='XXXXXXXXXX') AND
("A"."STATISTICS_CODE"=' ')) innerQuery
It is easy to see that you wouldn't need too many additional queries like this to have a significant impact on system performance. In the early days of Oracle 12c many PeopleSoft customers reported just such an experience. It is no surprise, therefore, that Oracle recommend disabling adaptive optimization in 12c.
optimizer_adaptive_features = FALSE - After upgrading to Oracle Database 12c, many PeopleSoft customers have noticed overall performance degradation which is related to Oracle 12c Optimizer Adaptive Feature (OAF). It is recommended that this value be disabled in all Oracle Databases running PeopleSoft Applications.
from E-ORA Advice for the PeopleSoft Oracle DBA (Doc ID 1445965.1) (see https://support.oracle.com/epmos/faces/DocumentDisplay?id=1445965.1)
In Oracle 12c release 1, the single parameter OPTIMIZER_ADAPTIVE_FEATURES controls all the adaptive optimization features. However, in Oracle 12c release 2 it is replaced by two new parameters: OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS. Significantly, this enhancement is also available in 12c release 1.
There are two patches available:
  • patch for bug# 22652097 introduces the two parameters OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS and removes the parameter OPTIMIZER_ADAPTIVE_FEATURES.
  • patch for bug# 21171382 disables the automatic creation of extended statistics unless the optimizer preference AUTO_STATS_EXTENSIONS is set to ON.
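Once the patches are applied (or on 12c release 2), the two aspects can then be controlled independently.  As a minimal sketch, and in line with the conclusion of this post, that might look like this; test in your own environment before applying it:
ALTER SYSTEM SET optimizer_adaptive_plans      = TRUE  SCOPE=BOTH; -- keep runtime switching of join method
ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE SCOPE=BOTH; -- avoid extra parse overhead on non-sharable SQL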
Here is an example of another query from nVision, this time on a 12.1 system where the two patches have been applied. The optimizer has switched from a nested loop to a hash join in an adaptive plan. In this particular case, this was a good decision and the hash join outperformed the nested loop.
SELECT A.AFFILIATE,SUM(A.POSTED_TOTAL_AMT) FROM PS_XX_XXXXXXX_VW A,
PSTREESELECT10 L1 WHERE A.LEDGER='XXXXXXX' AND A.FISCAL_YEAR=2016 AND
A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 AND L1.SELECTOR_NUM=63382 AND
A.ACCOUNT=L1.RANGE_FROM_10 AND A.BUSINESS_UNIT='XXXXXX' AND
A.CURRENCY_CD='USD' AND A.STATISTICS_CODE=' ' GROUP BY A.AFFILIATE

Plan hash value: 4041461164

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                          | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                                   |                   |       |       |    86 (100)|          |       |       |
|   1 |  HASH GROUP BY                                     |                   |     1 |    67 |    86   (3)| 00:00:01 |       |       |
|   2 |   HASH JOIN                                        |                   |     9 |   603 |    85   (2)| 00:00:01 |       |       |
|-  3 |    NESTED LOOPS                                    |                   |     9 |   603 |    85   (2)| 00:00:01 |       |       |
|-  4 |     STATISTICS COLLECTOR                           |                   |       |       |            |          |       |       |
|   5 |      PARTITION RANGE ITERATOR                      |                   |     9 |   459 |    84   (2)| 00:00:01 |    16 |    28 |
|   6 |       INLIST ITERATOR                              |                   |       |       |            |          |       |       |
|   7 |        PARTITION LIST ITERATOR                     |                   |     9 |   459 |    84   (2)| 00:00:01 |KEY(I) |KEY(I) |
|   8 |         TABLE ACCESS BY LOCAL INDEX ROWID BATCHED  | PS_LEDGER         |     9 |   459 |    84   (2)| 00:00:01 |   KEY |   KEY |
|   9 |          INDEX RANGE SCAN                          | PSCLEDGER         |   117 |       |    52   (2)| 00:00:01 |   KEY |   KEY |
|- 10 |     PARTITION RANGE SINGLE                         |                   |     1 |    16 |     1   (0)| 00:00:01 | 63382 | 63382 |
|- 11 |      INDEX RANGE SCAN                              | PSAPSTREESELECT10 |     1 |    16 |     1   (0)| 00:00:01 | 63382 | 63382 |
|  12 |    PARTITION RANGE SINGLE                          |                   |    23 |   368 |     1   (0)| 00:00:01 | 63382 | 63382 |
|  13 |     INDEX RANGE SCAN                               | PSAPSTREESELECT10 |    23 |   368 |     1   (0)| 00:00:01 | 63382 | 63382 |
--------------------------------------------------------------------------------------------------------------------------------------

Note
-----
- this is an adaptive plan (rows marked '-' are inactive)
Conclusion
Adaptive Optimization is a great feature that does deliver real benefits, but like many Oracle features, it works best on an application that has been developed how Oracle would recommend applications be developed. Since the default value for OPTIMIZER_ADAPTIVE_STATISTICS is FALSE, PeopleSoft is clearly not alone in not being such an application.
Given PeopleSoft's propensity to generate non-sharable SQL, adaptive statistics are not a viable option, and should not be enabled, and in 12c release 1 that has meant sacrificing the benefits of adaptive plans. However, patch 22652097 means that we don't have to wait for release 2, we can now choose which feature to use.
Every system is different; even every PeopleSoft system is different. However, my experience so far with PeopleSoft is that I have not seen Oracle's switching of join method cause a problem. It certainly isn't a panacea either. When the join method has changed, it hasn't always made a big difference, but it has been net beneficial. I would certainly recommend applying these patches.

Interview with PeopleSoft Administrator Podcast: Cost-Based Optimizer Statistics in PeopleSoft

Fri, 2017-05-26 04:56
I recently recorded another interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about management of Cost-Based Optimizer Statistics in PeopleSoft systems.
(19 May 2017) #81 - Database Statistics You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

Running Unix Shell Scripts from the PeopleSoft Process Scheduler

Sun, 2017-02-26 04:42

It is nearly 10 years since I first wrote about how to call Unix shell scripts from the Process Scheduler.  Although very little has changed, I have had a number of questions recently, so I thought it was time I checked the script and updated the posting.  I have used PeopleTools 8.54 in the preparation of this note.
The Process Scheduler is essentially just a mechanism for initiating processes on another server.  Mostly those are other PeopleSoft delivered executables.  The exception is the Application Engine Tuxedo Server process (PSAESRV) where the Process Scheduler submits a service request message that is picked up by one of the server processes that are already running.
NB: although the PSAESRV server is configured by default in the Process Scheduler domain, Oracle recommend that you should only use this when you have lots of very short-lived (runtime less than 30 seconds) application engine processes.  Typically, this only occurs in CRM.
Process Type Definition
First you need to create a new process type. I chose to call it ‘Shell Script’.  It runs a named shell wrapper script, psft.sh.  The wrapper script calls the script that is to be executed. Note that the command line in the process type definition includes the fully qualified path.


Wrapper ScriptThis is the wrapper script, psft.sh, that will be called by the process scheduler.
#!/bin/ksh
# (c) David Kurtz 2007
# Script: psft.sh
#
# Syntax: psft.sh DBNAME ACCESSID ACCESSPSWD PRCSINSTANCE command
# where
# DBNAME is the name of the PeopleSoft database with a corresponding TNS entry
# ACCESSID is the schema containing the PeopleSoft database
# ACCESSPSWD is the password to ACCESSID
# PRCSINSTANCE is the process instance number supplied by PeopleSoft
#
# Purpose: To start Standard UNIX Shell Script from Process Scheduler, and interface with the PeopleSoft Process Scheduler
# 07.09.2007 Initial Version
# 23.02.2017 Remove unnecessary logfiles section
#set -x

if [ $# -lt 4 ]; then
echo "Usage $0: <DBNAME> <ACCESSID> <ACCESSPSWD> <PRCSINSTANCE> <command>"
exit 1
fi

CONNECT=$2/$3@$1   # see note 1
PRCSINSTANCE=$4
shift 4

#
# Function to set status of API aware process instance
#
function prcsapi   # see note 2
{
if [ $# -lt 2 ]; then
echo "Parameter Error in function $0"
exit 1
fi

TIMESTAMPCOL=${1}
STATUS=${2}

if [ ${PRCSINSTANCE} -gt 0 ];then
echo "Setting process request ${PRCSINSTANCE} to status ${STATUS}"
sqlplus -S /nolog <<!
set termout off echo off feedback off verify off
connect ${CONNECT}
UPDATE psprcsque
SET runstatus = ${STATUS}
, sessionidnum = $$ /* see note 3 */
, lastupddttm = SYSTIMESTAMP
WHERE prcsinstance = ${PRCSINSTANCE}
;
UPDATE psprcsrqst
SET runstatus = ${STATUS}
, prcsrtncd = ${PRCSRTNCD}
, continuejob = DECODE(${STATUS},2,1,7,1,9,1,0) /* see note 4 */
, ${TIMESTAMPCOL} = SYSTIMESTAMP
, lastupddttm = SYSTIMESTAMP
WHERE prcsinstance = ${PRCSINSTANCE}
;
COMMIT;
exit
!

RET=$?
if [ $RET -ne 0 ];then
echo "SQL*Plus Error Return Code: $RET"
fi
fi
}

#
# Main Execution Starts Here
#

echo $0:$*
date
uname -a
echo "Current Directory: `pwd`"
echo "Process log files in: ${PSPRCSLOGDIR}"
PRCSRTNCD=0
prcsapi begindttm 7   # set status to Processing (see note 5)
#set
#Run the command
$*
PRCSRTNCD=$?   # capture the command's return code (see note 6)

if [ ${PRCSRTNCD} -ne 0 ]; then
prcsapi enddttm 3 # failure
else
prcsapi enddttm 9 # success
fi

date
    Notes:
    1. The Oracle user ID, password and TNS name for the PeopleSoft database are supplied in the first three parameters to the wrapper script. The PeopleSoft Process Instance number is the 4th command line parameter. These parameters are then removed with the shift command leaving any other parameters that have been specified.
    2. Function prcsapi sets the status on the process request row and updates the appropriate timestamp columns in the Process Scheduler tables.
    3. PSPRCSQUE.SESSIONIDNUM holds the operating system process ID of the shell executing the wrapper script.
    4. When the process completes and an end of process status is set (either 9 for success, 3 for failure or 2 for delete) CONTINUEJOB is set to 1, otherwise it is set to 0.
    5. When the wrapper scripts start it sets the process status on the process request record to 7 indicate that it is processing.  This can be seen in the Process Monitor.
    6. The return code of the executed script is captured.  Later it will be recorded on PSPRCSRQST.PRCSRTNCD.  A non-zero return code indicates an error, and the process status will be set to error.
    Process Definition
    Now I can create a Process Definition that will use the new process type's wrapper script to execute another command or script.
    The first four parameters passed to the wrapper script are the name of the database, the access ID and password, and the process instance. A string of further parameters will be appended in the individual Process Definition that is the specific command and parameters to be executed.
    It is important that this new process type is defined as being API aware.  That means the process interacts with the Process Scheduler by updating the process status.  You can see how the interaction should be done by looking at procedure Update-Process-Status in the delivered SQR library prcsapi.sqc.  If a process is not API aware, the Process Scheduler cannot determine its status; consequently, all API-unaware processes have a run status of Success, indicating only that they were started successfully.

    I have written a silly test script, mybanner.sh, that I want to be executed by the Process Scheduler.  It just prints out the command line parameters as banner text to both standard output and a file called mybanner.log.  This script will be called by psft.sh.
    The Process Scheduler creates a working directory for each process request.  It sets the variable $PSPRCSLOGDIR to the fully qualified location of this directory. Note that mybanner.sh changes the current directory to the location of this variable so that it writes mybanner.log there, and thus it is picked up by the distribution agent and made available via the report repository.  You may wish to do this in your scripts.
    Current working directory can be specified at Process Type or Process definition.  However, during my testing, I found that these settings had no effect.  The working directory of the script did not change, and the value was not found in any environmental variable.
    #!/bin/ksh
    #A silly script to test psft.sh
    #(c) David Kurtz 2017
    #banner function from http://stackoverflow.com/questions/652517/whats-the-deal-with-the-banner-command

    if [ "$PSPRCSLOGDIR" ] ; then
    cd $PSPRCSLOGDIR
    fi

    (
    while [ $# -gt 0 ]
    do
    /opt/oracle/psft/ptdb/custhome/banner $1
    shift
    done
    ) | tee mybanner.log
    exit $?
    I can now create a Process Definition that uses the Shell Script process type and will execute mybanner.sh.  Note that this command line is appended to the existing command line specified in the Process Type definition.

    You can't quite see it in the screen shot, but the parameter list includes the process instance number:
    /opt/oracle/psft/ptdb/custhome/mybanner.sh "Hello World" %%INSTANCE%%
    Process Scheduler System Settings
    During my testing, I found that it was necessary to specify output type settings for process type Other in the Process Scheduler System Settings; otherwise the output files were not posted to the report repository.


    The newly defined Process can be run just as any other process is usually run. Any output from the script on the standard output channel is captured by the Process Scheduler and written to a log file that can then be viewed from the View Log/Trace facility within Process Monitor.
    In this case the standard output was written to OTH_DMKTEST_39950.log, and I also get the mybanner.log that was written to $PSPRCSLOGDIR in the list of available files.

    mybanner.log contains just the three words passed as parameters
    [banner text spelling out "Hello", "World" and "39950"]
    OTH_DMKTEST_39950.log contains the standard output of the entire command, including the additional messages emitted by psft.sh.
    Note that the current directory is reported as being the location of the Process Scheduler Tuxedo domain.
    /opt/oracle/psft/ptdb/custhome/psft.sh:/opt/oracle/psft/ptdb/custhome/mybanner.sh Hello World 39950
    Tue Sep 1 21:59:46 UTC 2015
    Linux hcm.london.go-faster.co.uk 2.6.39-400.215.10.el5uek #1 SMP Tue Sep 9 22:51:46 PDT 2014 x86_64 x86_64 x86_64 GNU/Linux
    Current Directory: /home/psadm2/psft/pt/8.54/appserv/prcs/PRCSDOM
    Process log files in: /home/psadm2/psft/pt/8.54/appserv/prcs/PRCSDOM/log_output/OTH_DMKTEST_39950
    Setting process request 39950 to status 7

    [the same banner text spelling out "Hello", "World" and "39950"]

    Setting process request 39950 to status 9
    Tue Sep 1 21:59:46 UTC 2015
    A more detailed version of this document can be found at http://www.go-faster.co.uk/docs/process_scheduler_shell_scripts.pdf.

    Removing Redundant Indexes in PeopleSoft

    Tue, 2017-01-17 04:15
    This is the second of a two-part article that discusses how to identify and remove redundant indexes from a PeopleSoft system. 
    This article describes a script on my website (psredundantfix.sql) that uses a similar query to that described in the previous article to identify redundant indexes from the metadata in the PeopleTools tables. It uses an anonymous block of PL/SQL so nothing is installed in the database. For each redundant index identified it:
    • Adds the record and redundant index into an Application Designer project called REDUNDANT INDEXES. The record must also be added, otherwise Application Designer will not generate a DDL script to remove the index.
    • Unsets the platform flag on the redundant index if the superset index is active on the same platform. Thus Application Designer will generate a script to drop redundant indexes when the project is built. The redundant index definition remains in the record definition in case there is a need to revert the change.
    • If the superset index is active on Oracle
      • Creates and collects extended optimizer statistics on the combination of columns in the redundant index. If error ORA-00054 occurs, the exception will be ignored and the script will continue.
      • Makes the index invisible. Note that indexes are not dropped. That can be done later by building an alter script for the project. It would be normal to test the consequences of making the indexes invisible for a while, and drop them later (the statements the script effectively issues are sketched below).
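    For a single redundant index, the effect is equivalent to something like the following statements (taken from the PSPROJECTITEM example in the sample output below; the SYSADM schema is an assumption):
    SELECT sys.dbms_stats.create_extended_stats('SYSADM','PSPROJECTITEM','(OBJECTVALUE4,OBJECTTYPE)')
    FROM   dual
    /
    ALTER INDEX PSCPSPROJECTITEM INVISIBLE
    /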
    Sample Output
    Project REDUNDANT INDEXES already exists

    Record PSPROJECTITEM, Redundant Index C already in project
    Creating Extended Statistics on PSPROJECTITEM for OBJECTVALUE4,OBJECTTYPE
    Making index PSCPSPROJECTITEM invisible
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

    Record PSTREENODE, Redundant Index C already in project

    Record TL_RPTD_TIME, Redundant Index C added to project
    Making index PSCTL_RPTD_TIME invisible
    Record TL_TA_BATCH, Redundant Index _ added to project
    Making index PS_TL_TA_BATCH invisible
    Making index PS_TL_TA_BATCH1 invisible

    Making index PS_TL_TA_BATCH48 invisible

    Record WRK_XREF_CALC, Redundant Index A already in project
    Creating Extended Statistics on PS_WRK_XREF_CALC for PROCESS_INSTANCE,SEQ_NBR
    Creating Extended Statistics on PS_WRK_XREF_CALC1 for PROCESS_INSTANCE,SEQ_NBR

    Creating Extended Statistics on PS_WRK_XREF_CALC6 for PROCESS_INSTANCE,SEQ_NBR
    Making index PSAWRK_XREF_CALC invisible
    Making index PSAWRK_XREF_CALC1 invisible

    Making index PSAWRK_XREF_CALC6 invisible
    Note:
    • The script commits any changes it makes. There is also a rollback command at the top to prevent it accidentally committing something else.
    • Having run it once, should you immediately run the script again, nothing will be found because the redundant indexes have already been marked as disabled on Oracle.
    • Should you want to revert changes, mark all indexes in the REDUNDANT INDEXES project as active on Oracle.
    UPDATE psindexdefn
    SET platform_ora = 1
    , activeflag = 1
    WHERE (recname, indexid) IN (
    SELECT objectvalue1, objectvalue2
    FROM psprojectitem
    WHERE objecttype = 1
    AND projectname = 'REDUNDANT INDEXES')
    AND platform_ora = 0;
    COMMIT;
    We can see in Application Designer that index PSAWRK_XREF_CALC is a subset of PS_WRK_XREF_CALC because both start with PROCESS_INSTANCE and SEQ_NBR.


    Index A was marked inactive on all platforms because the superset index _ is active on all platforms.  


    PeopleSoft delivers some indexes for some platforms only. For example, PSETREENODE is only active on Informix. Therefore the script only deactivates the Informix platform flag on PSCTREENODE, and the platform flag changes to 'Some'. The index is still added to the project, but no attempt is made to create extended statistics or to make it invisible, and the index continues to be built on Oracle.

    The comment applied to index C indicates that only the Informix flag was changed.



    The final step, when you are satisfied that making the indexes invisible has no unacceptable consequences, is to drop the redundant indexes using the script generated by Application Designer.

    DROP INDEX PSCPSPROJECTITEM
    /
    DROP INDEX PSEPSPROJECTITEM
    /

    DROP INDEX PSAWRK_XREF_CALC
    /
    DROP INDEX PSAWRK_XREF_CALC1
    /

    DROP INDEX PSAWRK_XREF_CALC6
    /

    Identifying Redundant Indexes in PeopleSoft

    Thu, 2017-01-12 05:57
    This is the first of a two-part series that discusses how to identify and remove redundant indexes from a PeopleSoft system.
    I recently wrote a series of articles on my Oracle blog about identifying redundant indexes on an Oracle database. The same approach can be taken with PeopleSoft, but using the metadata in the PeopleTools tables rather than the database catalogue.
    (This means you could take a similar approach for PeopleSoft on databases other than Oracle, but I can't say anything about the behaviour of other optimizers) 
    The following query returns a report of superset and redundant subset indexes in a PeopleSoft system. It can be downloaded as a script from my website (psredundant.sql).
    The generic Oracle version of this script will identify subset indexes that are unique as redundant because the primary key and unique constraints can be altered to use the superset index.  However, this query does not mark unique subset indexes as redundant because PeopleSoft does not explicitly create unique constraints, only unique indexes.  If I disabled a unique index, I would not be able to maintain the unique constraint via Application Designer.
    Subquery IC returns a list of columns for each index. Subqueries UNI, AS2 and USI retrieve columns for unique/duplicate key, alternate search and user defined indexes respectively. The absolute value of key position is not important for this exercise, it is merely used to sort the column list in the listagg() function in subquery I.
    REM psredundant.sql
    set lines 160 pages 50 trimspool on
    column recname format a15
    column superset_index format a60
    column redundant_index format a60
    break on recname skip 1 on indexid skip 1
    spool psredundant

    WITH uni AS (/*unique indexes*/
    SELECT /*+MATERIALIZE*/ f.recname, i.indexid
    , MIN(i.uniqueflag) OVER (PARTITION BY f.recname) uniqueflag
    , CASE WHEN MAX(CASE WHEN f.recname != f.recname_parent THEN 1 ELSE 0 END) OVER (PARTITION BY f.recname)=1
    THEN f.fieldnum ELSE k.keyposn END keyposn
    , k.fieldname
    FROM psrecfielddb f
    , psindexdefn i
    , pskeydefn k
    WHERE i.recname IN(f.recname,f.recname_parent)
    AND i.recname = k.recname
    AND k.fieldname = f.fieldname
    AND i.indexid = '_' /*unique index*/
    AND k.indexid = i.indexid
    AND bitand(f.useedit,3) > 0 /*unique or dup key*/
    ), as0 AS (/*leading column on alternate search indexes*/
    SELECT f0.recname, k0.indexid, i0.uniqueflag, 0 keyposn, f0.fieldname
    FROM psrecfielddb f0
    , psindexdefn i0
    , pskeydefn k0
    WHERE bitand(f0.useedit,16) = 16 /*alternate search key*/
    AND k0.recname = f0.recname_parent
    AND k0.fieldname = f0.fieldname
    AND i0.recname = k0.recname
    AND i0.indexid = k0.indexid
    AND i0.indexid BETWEEN '0' AND '9' /*alternate search index*/
    ), as1 AS ( /*now add unique columns*/
    SELECT as0.recname, as0.indexid, as0.uniqueflag, as0.keyposn, as0.fieldname
    FROM as0
    UNION ALL /*append unique key index columns*/
    SELECT as0.recname, as0.indexid, as0.uniqueflag, uni.keyposn, uni.fieldname
    FROM as0, uni
    WHERE as0.recname = uni.recname
    ), as2 AS (
    SELECT as1.recname, as1.indexid, as1.uniqueflag, NVL(k.keyposn,as1.keyposn), as1.fieldname
    FROM as1
    LEFT OUTER JOIN pskeydefn k /*to handle custom key order*/
    ON k.recname = as1.recname
    AND k.indexid = as1.indexid
    AND k.fieldname = as1.fieldname
    ), usi AS (/*user indexes*/
    SELECT i.recname, i.indexid, i.uniqueflag, k.keyposn, k.fieldname
    FROM psindexdefn i
    , pskeydefn k
    WHERE k.recname = i.recname
    AND k.indexid = i.indexid
    AND k.indexid BETWEEN 'A' AND 'Z'
    AND i.platform_ora = 1
    ), m AS (/*merge three kinds of index here*/
    SELECT uni.recname, uni.indexid, uni.uniqueflag, uni.keyposn, uni.fieldname
    FROM uni
    UNION ALL
    SELECT as1.recname, as1.indexid, as1.uniqueflag, as1.keyposn, as1.fieldname
    FROM as1
    UNION ALL
    SELECT usi.recname, usi.indexid, usi.uniqueflag, usi.keyposn, usi.fieldname
    FROM usi
    ), ic AS ( /*list of columns, restrict to tables*/
    SELECT r.recname, m.indexid, m.uniqueflag, m.keyposn, m.fieldname
    FROM m
    , psrecdefn r
    WHERE r.rectype IN(0,7)
    AND r.recname = m.recname
    ), i AS ( --construct column list
    SELECT /*+ MATERIALIZE*/
    ic.recname, ic.indexid, ic.uniqueflag
    , count(*) num_columns
    , listagg(ic.fieldname,',') within group (order by ic.keyposn) AS fieldlist
    FROM ic
    GROUP BY ic.recname, ic.indexid, ic.uniqueflag
    )
    SELECT r.recname
    , i.indexid||' ('||i.fieldlist||')' superset_index
    , r.indexid||' ('||r.fieldlist||')' redundant_index
    FROM i
    , i r
    WHERE i.recname = r.recname
    AND i.indexid != r.indexid
    AND r.uniqueflag = 0 /*non-unique redundant*/
    AND i.fieldlist LIKE r.fieldlist||',%'
    AND i.num_columns > r.num_columns
    order by r.recname, r.indexid
    /

    spool off
    This is part of the output for a HR 9.2 demo database. In all it identified 58 indexes on 41 records. However, some of those records are temporary records with several physical instances, so the number of database objects that will be removed is higher.
    RECNAME         SUPERSET_INDEX                                               REDUNDANT_INDEX
    --------------- ------------------------------------------------------------ -----------------------------------------------------------

    HRS_APP_PROFILE C (HRS_AL_LOG_ID,HRS_PERSON_ID) B (HRS_AL_LOG_ID)

    HRS_SJT_JO B (HRS_JOB_OPENING_ID,SCRTY_TYPE_CD) A (HRS_JOB_OPENING_ID)

    HR_PROMOTE_XREF C (EMPLID,EMPL_RCD) A (EMPLID)
    B (EMPLID,EMPL_RCD,ACTION_DT_SS,EFFSEQ) A (EMPLID)
    B (EMPLID,EMPL_RCD,ACTION_DT_SS,EFFSEQ) C (EMPLID,EMPL_RCD)

    HR_SSTEXT_EFFDT _ (OBJECTOWNERID,HR_SSTEXT_SUB_ID,TEXT_ID,EFFDT) A (OBJECTOWNERID,HR_SSTEXT_SUB_ID,TEXT_ID)

    HR_TRANSFR_XREF C (EMPLID,EMPL_RCD) A (EMPLID)
    B (EMPLID,EMPL_RCD,ACTION_DT_SS,EFFSEQ) A (EMPLID)
    B (EMPLID,EMPL_RCD,ACTION_DT_SS,EFFSEQ) C (EMPLID,EMPL_RCD)

    PA_ALIAS_USE 0 (ALIAS_TYPE,ALIAS_NAME) A (ALIAS_TYPE)


    WRK_XREF_CALC _ (PROCESS_INSTANCE,SEQ_NBR,XREF_NUM) A (PROCESS_INSTANCE,SEQ_NBR)
    The articles on my Oracle blog discuss how to identify redundant indexes on an Oracle database, and then how to go about removing them.  Of course, in PeopleSoft, you should only remove indexes by making the change within Application Designer, and that is what the next posting discusses.

    Setting Environment Variables in Application Server/Process Scheduler Tuxedo Domains

    Thu, 2016-07-07 10:54
    The topic of how to manage environment variables was mentioned recently on the PeopleSoft Administrator Podcast.
    Recently I have built a pair of PeopleSoft environments for a proof-of-concept and have faced exactly this problem.  I have two PS_APP_HOMEs (application homes) that share the same PS_HOME (PeopleTools home).  The environment variables need to be correct before I open psadmin to administer the application server and process scheduler domains.
    On Unix, I would usually recommend running domains for different PS_APP_HOMEs in different Unix user accounts.  Thus environment variables can be set up in the .profile or .bashrc scripts.  The processes then run as different Unix users, which makes them easier to monitor.  The Unix users should be in the same Unix group and have group permissions on the PS_HOME directory.  This approach is not possible on Windows.
    The alternative is to have a script or a menu that sets up those variables before you enter psadmin.
    I noticed that there is a new menu in the domain configuration menu in psadmin which permits you to set environmental variables that are then built into the Tuxedo domain configuration.  In fact this has always been possible by editing the psappsrv.ubx and psprcs.ubx files directly to cause variables to be created in the psappsrv.env and psprcs.env files, but now you just have to enter the variables through the menu in psadmin.
    When we start psadmin we can see the relevant environmental parameters.   PS_APP_HOME points to my HR installation.
    PSADMIN -- PeopleTools Release: 8.54.16
    Copyright (c) 1996, 2014, Oracle. All rights reserved.

    --------------------------------
    PeopleSoft Server Administration
    --------------------------------

    PS_CFG_HOME /home/psadm1/psft/pt/8.54
    PS_HOME /opt/oracle/psft/pt/tools
    PS_APP_HOME /opt/oracle/psft/pt/hr91
    PS_CUST_HOME /opt/oracle/psft/pt/hr91/pscust

    1) Application Server
    ...
    But what if I have a Financials installation under the same PS_HOME?  Option 16 of the configuration menu lets me define environment settings.  Something very similar happens in the Process Scheduler configuration.
    ----------------------------------------------
    Quick-configure menu -- domain: RASC3K
    ----------------------------------------------
    Features Settings
    ========== ==========
    1) Pub/Sub Servers : Yes 17) DBNAME :[XXXXXX]
    ...
    Actions
    =========
    14) Load config as shown
    15) Custom configuration
    16) Edit environment settings
    h) Help for this menu
    q) Return to previous menu

    HINT: Enter 17 to edit DBNAME, then 14 to load
    So, I have added PS_APP_HOME, PS_CUST_HOME and INTERFACE_HOME and they have become a part of the configuration of the Tuxedo domain.
    --------------------------------------
    PeopleSoft Domain Environment Settings
    --------------------------------------
    Domain Name: RASC3K

    TEMP :[{LOGDIR}{FS}tmp]
    TMP :[{LOGDIR}{FS}tmp]
    TM_BOOTTIMEOUT :[120]
    TM_RESTARTSRVTIMEOUT :[120]
    TM_BOOTPRESUMEDFAIL :[Y]
    FLDTBLDIR32 :[{$TUXDIR}{FS}udataobj]
    FIELDTBLS32 :[jrep.f32,tpadm]
    ALOGPFX :[{LOGDIR}{FS}TUXACCESSLOG]
    INFORMIXSERVER :[{$Startup\ServerName}]
    COBPATH :[{$PS_APP_HOME}/cblbin:{$PS_HOME}/cblbin]
    PATH :[{$PATH}:{$Domain Settings\Add to PATH}]
    PS_APP_HOME :[/opt/oracle/psft/pt/fin91]
    PS_CUST_HOME :[/opt/oracle/psft/pt/fin91/pscust]
    INTERFACE_HOME :[/opt/oracle/psft/pt/fin91/pscust/interfaces]

    1) Edit environment variable
    2) Add environment variable
    3) Remove environment variable
    4) Comment / uncomment environment variable
    5) Show resolved environment variables
    6) Save
    h) Help for this menu
    q) Return to previous menu

    Command to execute (1-6, h or q) :
    What is going on here?  These variables have been added to the PS_ENVFILE section of psappsrv.ubx.
    # ----------------------------------------------------------------------
    *PS_ENVFILE
    TEMP={LOGDIR}{FS}tmp
    TMP={LOGDIR}{FS}tmp
    TM_BOOTTIMEOUT=120
    TM_RESTARTSRVTIMEOUT=120
    TM_BOOTPRESUMEDFAIL=Y
    FLDTBLDIR32={$TUXDIR}{FS}udataobj
    FIELDTBLS32=jrep.f32,tpadm
    ALOGPFX={LOGDIR}{FS}TUXACCESSLOG
    {WINDOWS}
    COBPATH={$PS_HOME}\CBLBIN%PS_COBOLTYPE%
    INFORMIXSERVER={$Startup\ServerName}
    # Set IPC_EXIT_PROCESS=1 to use ExitProcess to terminate server process.
    # Set IPC_TERMINATE_PROCESS=1 to use TerminateProcess to terminate server process.
    # If both are set, TerminateProcess will be used to terminate server process.
    #IPC_EXIT_PROCESS=1
    IPC_TERMINATE_PROCESS=1
    PATH={$PS_HOME}\verity\{VERITY_OS}\{VERITY_PLATFORM}\bin;{$PATH};{$Domain Settings\Add to PATH}
    {WINDOWS}
    {UNIX}
    INFORMIXSERVER={$Startup\ServerName}
    COBPATH={$PS_APP_HOME}/cblbin:{$PS_HOME}/cblbin
    PATH={$PATH}:{$Domain Settings\Add to PATH}
    {UNIX}
    PS_APP_HOME=/opt/oracle/psft/pt/fin91
    PS_CUST_HOME=/opt/oracle/psft/pt/fin91/pscust
    INTERFACE_HOME=/opt/oracle/psft/pt/fin91/pscust/interfaces
    They then appear in the psappsrv.env file that is generated when the domain is configured.  This file contains fully resolved values of environmental variables that are set by every tuxedo application server process when it starts.
    TEMP=/home/psadm1/psft/pt/8.54/appserv/XXXXXX/LOGS/tmp
    TMP=/home/psadm1/psft/pt/8.54/appserv/XXXXXX/LOGS/tmp
    TM_BOOTTIMEOUT=120
    TM_RESTARTSRVTIMEOUT=120
    TM_BOOTPRESUMEDFAIL=Y
    FLDTBLDIR32=/opt/oracle/psft/pt/bea/tuxedo/udataobj
    FIELDTBLS32=jrep.f32,tpadm
    ALOGPFX=/home/psadm1/psft/pt/8.54/appserv/XXXXXX/LOGS/TUXACCESSLOG
    # Set IPC_EXIT_PROCESS=1 to use ExitProcess to terminate server process.
    # Set IPC_TERMINATE_PROCESS=1 to use TerminateProcess to terminate server process.
    # If both are set, TerminateProcess will be used to terminate server process.
    #IPC_EXIT_PROCESS=1
    IPC_TERMINATE_PROCESS=1
    IPC_TERMINATE_PROCESS=1
    INFORMIXSERVER=
    COBPATH=/opt/oracle/psft/pt/fin91/cblbin:/opt/oracle/psft/pt/tools/cblbin
    PATH=/opt/oracle/psft/pt/fin91/bin:/opt/oracle/psft/pt/fin91/bin/interfacedrivers:/opt/oracle/psft/pt/tools/jre/bin:/opt/oracle/psft/pt/tools/appserv:/opt/oracle/psft/pt/tools/setup:/opt/oracle/psft/pt/bea/tuxedo/bin:.:/opt/oracle/psft/pt/oracle-client/12.1.0.1/bin:/opt/oracle/psft/pt/oracle-client/12.1.0.1/OPatch:/opt/oracle/psft/pt/oracle-client/12.1.0.1/perl/bin:/opt/mf/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/opt/oracle/psft/pt/tools/bin:/opt/oracle/psft/pt/tools/bin/sqr/ORA/bin:/opt/oracle/psft/pt/tools/verity/linux/_ilnx21/bin:/home/psadm1/bin::.
    PS_APP_HOME=/opt/oracle/psft/pt/fin91
    PS_CUST_HOME=/opt/oracle/psft/pt/fin91/pscust
    INTERFACE_HOME=/opt/oracle/psft/pt/fin91/pscust/interfaces
    Warning
    I found that I had to specify fully resolved paths for the variables I defined.  I did try setting variables in terms of other variables,

    PS_APP_HOME :[/opt/oracle/psft/pt/fin91]
    PS_CUST_HOME :[{$PS_APP_HOME}{FS}pscust]
    INTERFACE_HOME :[{$PS_CUST_HOME}{FS}interfaces]

    but I started to get errors.
    ==============ERROR!================
    Value for PS_CUST_HOME: {$PS_APP_HOME}{FS}pscust, is invalid. Your environment
    may not work as expected.
    ==============ERROR!================
    And some variables were not fully resolved in the psappsrv.env file.
    PS_APP_HOME=/opt/oracle/psft/pt/fin91
    PS_CUST_HOME=/opt/oracle/psft/pt/fin91/pscust
    INTERFACE_HOME={$PS_APP_HOME}{FS}pscust/interfaces
    Configuration Settings in the .ubx -v- .cfg
    My only reservation is that there is now environment-specific configuration in the psappsrv.ubx file, rather than the psappsrv.cfg file.  When I have done this in the past, I created additional variables in psappsrv.cfg that were referenced from the psappsrv.ubx file.  Thus the psappsrv.ubx was consistent across environments, and all the configuration was in the main configuration file, psappsrv.cfg.
    However, you can add additional variables in psappsrv.cfg, thus:
    [Domain Settings]
    ;=========================================================================
    ; General settings for this Application Server.
    ;=========================================================================

    Application Home=/opt/oracle/psft/pt/fin91
    Custom Directory=pscust
    Interface Directory=pscust/interfaces

    and then reference them in the variables, and they will resolve correctly in the psappsrv.env.
    PS_APP_HOME={$Domain Settings\Application Home}
    PS_CUST_HOME={$Domain Settings\PS_APP_HOME}{FS}{$Domain Settings\Custom Directory}
    INTERFACE_HOME={$Domain Settings\PS_APP_HOME}{FS}{$Domain Settings\Interface Directory}
    You may experience errors in psadmin:
    ==============ERROR!================
    Value for PS_APP_HOME: {$Domain Settings\Application Home}, is invalid. Your
    environment may not work as expected.
    ==============ERROR!================

    ==============ERROR!================
    Value for PS_CUST_HOME: {$Domain Settings\Application Home}{FS}{$Domain
    Settings\Custom Directory}, is invalid. Your environment may not work as
    expected.
    ==============ERROR!================
    Conclusion
    Using this technique, it does not matter how environment variables are set when you go into psadmin to start the application server; the correct setting is defined in the Tuxedo domain and overrides the environment.
    You have always been able to do this in Tuxedo, but you would have had to edit the psappsrv.ubx file yourself; now the menu allows you to administer it.
    There is no way to view the psappsrv.ubx and psappsrv.env files from within psadmin, only psappsrv.cfg can be opened.  If you want to check your settings have reached the psappsrv.env file, you will need to leave psadmin and look in the files for yourself.
