David Kurtz

This blog contains things about PeopleSoft that DBAs might find interesting.
Or then again they might not!
Non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

nVision Performance Tuning: 1. nVision Performance Options

Mon, 2017-10-16 14:26
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

At no point when defining an nVision report does the developer directly specify a SQL statement; all the SQL is generated by nVision from the report definition.  That also means that it is not possible to intervene directly and change the SQL statement, for example, to add an optimizer hint.

However, the way that the SQL is generated can be controlled via the nVision performance options.  Setting them appropriately can make a significant difference to nVision performance, but the optimal settings will differ from system to system, from tree to tree, and sometimes even from report to report.
Specification of Performance Options
nVision performance options are specified for named trees.  Since PeopleTools 8, they are defined in the tree definition.
Tree Performance Options in PIA
They are stored in the database in the PeopleTools table PSTREEDEFN.

Field Name         Field Value  Description
TREE_ACC_METHOD    D            Use Application Defaults
                   J            Join to Tree Selector
                   L            Use Literal Values
                   S            Sub-SELECT Tree Selector
TREE_ACC_SELECTOR  D            Dynamic Selectors
                   S            Static Selectors
TREE_ACC_SEL_OPT   B            Range of Values (BETWEEN)
                   R            Range of Values (>=…<=)
                   S            Single Values

Performance options can also be specified in individual layouts, and the specific definition in the layout overrides the generic definition in the tree.
Add-Ins->nVision->Layout Options
You can also see the trees defined in a layout file in the Name Manager in Excel.  The performance options are converted into a string of 5 characters.  Below you can see a tree called OPERATING_UNIT.  The easiest way to determine whether performance options are set in a layout is to look in the Name Manager, and the only reliable way to delete a performance option from a layout is to delete it in the Name Manager.
Formulas -> Name Manager

The string of flags in the NvsTree formula can be decoded with this table.
Position  Y                                             N                                   S
1         Dynamic Selector                              Static Selector
2         Join to tree selector                         Suppress join; use literal values   Sub-SELECT tree selector
3         Range of values (BETWEEN)                     Range of values (>= … <=)
4         Single Value (overrides position 3)
5         Non-specific node criteria (above 2 billion)
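As an illustration of the table above, here is a minimal sketch that decodes such a five-character flag string.  This is Python for illustration only, not a PeopleSoft API; the function and dictionary names are my own.

```python
# Decode the 5-character performance option string from an NvsTree name
# formula, following the table above.  Illustrative only, not PeopleSoft code.
SELECTOR = {"Y": "Dynamic Selector", "N": "Static Selector"}
JOIN = {"Y": "Join to tree selector",
        "N": "Suppress join; use literal values",
        "S": "Sub-SELECT tree selector"}
RANGE = {"Y": "Range of values (BETWEEN)", "N": "Range of values (>= ... <=)"}

def decode_nvstree_flags(flags: str) -> dict:
    """Decode a 5-character NvsTree performance option string."""
    if len(flags) != 5:
        raise ValueError("expected exactly 5 flag characters")
    return {
        "selector": SELECTOR.get(flags[0], "unspecified"),
        "join": JOIN.get(flags[1], "unspecified"),
        "range": RANGE.get(flags[2], "unspecified"),
        "single_values": flags[3] == "Y",       # position 4 overrides position 3
        "non_specific_nodes": flags[4] == "Y",  # above 2 billion
    }
```

For example, decoding "NSNYN" reports a static selector with a sub-SELECT tree selector and single values.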

Recommendations
It is rare to need to set different performance options for the same tree in different layouts. Therefore, it is easiest to set performance options at tree level and not at all in layouts unless absolutely necessary.
Tree performance options can even be updated in bulk by SQL script, though it is essential to maintain the version numbers correctly.  Care should be taken when migrating trees from other environments to ensure that the performance options are still correctly set, as the performance options are part of the tree being migrated.
It is now possible to give clear advice to developers and especially business users who create nVision reports:
  • Don't use them.
  • If you have specified them in layouts in the past, then remove them unless you really do need to use a different option in a particular report.
Some customers have written Visual Basic macros to work through all their layouts and remove all tree performance options settings.

nVision Performance Tuning: Introduction

Tue, 2017-10-10 15:41
This blog post is the first in a series that discusses how to get good performance from nVision as used in General Ledger reporting.

PS/nVision is a PeopleTools technology that extracts data from the database and places it in an Excel spreadsheet (see PS/nVision Overview).  Although PS/nVision can be used with any PeopleSoft product, it is most commonly used in Financials General Ledger.

The SQL queries generated by nVision are, at least conceptually, similar to data warehouse queries. The ledger, ledger budget or summary ledger tables are the fact tables.

The ledger tables are analysed by their attribute columns. There are always literal conditions on the fiscal year and accounting period, there is usually a literal condition on currency code.  Then there are criteria on some of the other attributes.  I will take an example that analyses the ledger table in three dimensions: BUSINESS_UNIT, ACCOUNT and CHARTFIELD1, but there are many other attribute columns on the ledger tables.  These attributes are defined in lookup tables in the application, but their hierarchies are defined in trees.

nVision reports use the trees to determine which attribute values to report.  A report might report on a whole tree, or particular nodes, or branches of a tree.  nVision joins the tree definition to the attribute table and produces a list of attributes to be reported.  These are put into working storage tree selector tables (PSTREESELECT01 to 30).  The choice of selector table is controlled by the length of the attribute column.  BUSINESS_UNIT is a 5 character column so it goes into PSTREESELECT05. CHARTFIELD1 and ACCOUNT are 10 character columns so they use PSTREESELECT10.  These selector tables form the dimensions in the queries.
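The choice of selector table described above can be sketched as a simple mapping.  This is a hypothetical Python helper of my own, not PeopleSoft code:

```python
def selector_table(column_length: int) -> str:
    """Return the tree selector table for an attribute column of a given length.

    There are 30 working storage tables, PSTREESELECT01 to PSTREESELECT30,
    chosen purely by the length of the attribute column.
    """
    if not 1 <= column_length <= 30:
        raise ValueError("tree selector tables only exist for lengths 1 to 30")
    return f"PSTREESELECT{column_length:02d}"
```

So the 5-character BUSINESS_UNIT maps to PSTREESELECT05, while the 10-character ACCOUNT and CHARTFIELD1 both map to PSTREESELECT10.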

Here is an example of a SQL statement generated by nVision.  The tree selector 'dimension' tables are joined to the ledger 'fact' table.

SELECT L.TREE_NODE_NUM,L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A
, PSTREESELECT05 L1
, PSTREESELECT10 L
, PSTREESELECT10 L2
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2016
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 11
AND L1.SELECTOR_NUM=30982 AND A.BUSINESS_UNIT=L1.RANGE_FROM_05
AND L.SELECTOR_NUM=30985 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND L2.SELECTOR_NUM=30984 AND A.ACCOUNT=L2.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L.TREE_NODE_NUM,L2.TREE_NODE_NUM
This SQL looks simple enough, but there are various complexities:
  • The tree selector tables are populated at runtime.  Many dimensions can be stored in each tree selector table, each keyed by a different SELECTOR_NUM.
  • Selectors can be static or dynamic.  In dynamic selectors, the data is only stored temporarily for the lifetime of the report and will be deleted when it completes.  So immediately, there is a challenge of keeping statistics up to date, and even then Oracle doesn't always manage to find an effective execution plan.
  • Different selectors will have different numbers of rows, so the statistics have to describe that skew.
  • Different nVision reports and even different parts of the same report generate different statements that can use different combinations of attribute columns.  The number of dimensions can vary; I have seen systems that use as many as five different trees in a single query.
  • Then the database needs to find the relevant rows on the ledger table for the dimensions specified as efficiently as possible.
This very quickly becomes a difficult and complex problem.  This series of articles works through the various challenges and describes methods to overcome them.  Not all of them are applicable to all systems; in some cases, it will be necessary to choose between approaches depending on circumstances.

nVision Performance Tuning: Table of Contents

Tue, 2017-10-10 15:39
This post is an index for a series of blog posts that discuss how to get good performance from nVision as used in General Ledger reporting.  As the posts become available links will be updated in this post.
  • Introduction
  • nVision Performance Options
  • Indexing of Ledger, Budget and Summary Ledger Tables
  • Partitioning of Ledger, Budget and Summary Ledger Tables
  • Additional Oracle Instrumentation for nVision
  • Logging Selector Usage
  • Analysis of Tree Usage with the Selector Log
  • Interval Partitioning and Statistics Maintenance of Selector Tables
  • Compression without the Advanced Compression option
  • Maintaining Statistics on Non-Partitioned Selector Tables
  • Excel -v- OpenXML
The current versions of scripts mentioned in the series will be made available on GitHub.


This week's PeopleSoft Administrator Podcast

Fri, 2017-09-22 09:22
This week's  PeopleSoft Administrator Podcast includes a few minutes of me talking to Dan and Kyle about Oracle Resource Manager.
(22 September 2017) #99 - Oracle Resource Manager You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

PeopleSoft and Adaptive Query Optimization in Oracle 12c

Fri, 2017-06-30 07:44
Adaptive Query Optimization is a significant feature in Oracle 12c.  Oracle has made lots of information available on the subject (see https://blogs.oracle.com/optimizer/oracle-database-12c-is-here).
"Adaptive Query Optimization is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics… There are two distinct aspects in Adaptive Query Optimization: adaptive plans, which focuses on improving the execution of a query, and adaptive statistics, which uses additional information to improve query execution plans."
- from Optimizer with Oracle Database 12c Release 2, White Paper, June 2017 (see http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-with-oracledb-12c-1963236.pdf)
Adaptive Statistics include:
  • SQL plan directives: created and used to adapt SQL execution plans.
  • Statistics feedback: cardinality from table joins is used to improve SQL execution plans.
  • Performance feedback: improves the degree of parallelism chosen when PARALLEL_DEGREE_POLICY is set to ADAPTIVE.
  • Adaptive dynamic sampling for parallel execution: dynamic statistics are gathered for certain parallel queries to improve cardinality estimates. The sample size is determined automatically.
(Parallelism is not used by default within PeopleSoft because all objects are marked NOPARALLEL by Application Designer).
This additional information should help the optimizer make better decisions, but it comes at the price of making the database do more work during SQL parse.  Unfortunately, PeopleSoft makes extensive use of dynamically generated SQL, often with literal values, leading to a large amount of parsing.  Even a small additional overhead during SQL parse can result in a significant overhead for the entire system.
Adaptive Plans include:
  • Nested loop join/Hash join selection: the optimizer chooses between nested loops or hash joins at query runtime.
  • Adaptive parallel distribution method: the parallel distribution method is determined at runtime.
  • Star transformation bitmap pruning: certain bitmap indexes may be removed from the SQL execution plan at runtime if selectivity is significantly poorer than the estimate.
When joining tables, making the best choice between nested loops and hash joins is often critical for good performance. This feature could be of significant benefit to many applications, not just PeopleSoft. Let's take, as an example, this SQL generated by nVision on a 12c release 1 database. Due to the literal values it is very unlikely that the statement will be executed a second time.
SELECT SUM(A.POSTED_TOTAL_AMT) FROM sysadm.PS_LEDGER A WHERE A.LEDGER ='XXXXXXXXXX' AND A.FISCAL_YEAR=2015 AND (A.ACCOUNTING_PERIOD 
BETWEEN 1 AND 12 OR A.ACCOUNTING_PERIOD BETWEEN 998 AND 999) AND A.ACCOUNT IN ('40000001','40000002','40000003','40000004') AND
(A.DEPTID ='001A' OR A.DEPTID ='002A' OR A.DEPTID ='003A' OR A.DEPTID ='004A' OR A.DEPTID ='005B' OR A.DEPTID ='006A' OR A.DEPTID
='007A') AND A.CURRENCY_CD ='USD' AND A.STATISTICS_CODE=' '
This recursive SQL was generated by dynamic statistics processing during SQL parse. Such SQL is easily identifiable by the /* DS_SVC */ comment.
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
optimizer_features_enable(default) no_parallel result_cache(snapshot=3600) */
SUM(C1)
FROM (
SELECT /*+ qb_name("innerQuery") NO_INDEX_FFS( "A") */ 1 AS C1
FROM "SYSADM"."PS_LEDGER" SAMPLE BLOCK(82.1847, 8) SEED(12) "A"
WHERE ("A"."FISCAL_YEAR"=2015) AND ("A"."CURRENCY_CD"='USD') AND
("A"."ACCOUNT"='40000001' OR "A"."ACCOUNT"='40000002' OR
"A"."ACCOUNT"='40000003' OR "A"."ACCOUNT"='40000004') AND
("A"."ACCOUNTING_PERIOD"<=170 AND "A"."ACCOUNTING_PERIOD">=1 OR
"A"."ACCOUNTING_PERIOD">=901 AND "A"."ACCOUNTING_PERIOD"<=905) AND
("A"."DEPTID"='001A' OR "A"."DEPTID"='002A' OR "A"."DEPTID"='003A' OR
"A"."DEPTID"='004A' OR "A"."DEPTID"='005B' OR "A"."DEPTID"='006A' OR
"A"."DEPTID"='007A') AND ("A"."LEDGER"='XXXXXXXXXX') AND
("A"."STATISTICS_CODE"=' ')) innerQuery
It is easy to see that you wouldn't need too many additional queries like this to have a significant impact on system performance.  In the early days of Oracle 12c, many PeopleSoft customers reported just such an experience.  It is no surprise, therefore, that Oracle recommends disabling adaptive optimization in 12c.
"optimizer_adaptive_features = FALSE - After upgrading to Oracle Database 12c, many PeopleSoft customers have noticed overall performance degradation which is related to Oracle 12c Optimizer Adaptive Feature (OAF). It is recommended that this value be disabled in all Oracle Databases running PeopleSoft Applications."
- from E-ORA Advice for the PeopleSoft Oracle DBA (Doc ID 1445965.1) (see https://support.oracle.com/epmos/faces/DocumentDisplay?id=1445965.1)
In Oracle 12c release 1, the single parameter OPTIMIZER_ADAPTIVE_FEATURES controls all the adaptive optimization features.  However, in Oracle 12c release 2 it is replaced by two new parameters: OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS.
Significantly, this enhancement is also available in 12c release 1.  There are two patches available:
  • patch for bug# 22652097 introduces the two parameters OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS and removes the parameter OPTIMIZER_ADAPTIVE_FEATURES.
  • patch for bug# 21171382 disables the automatic creation of extended statistics unless the optimizer preference AUTO_STATS_EXTENSIONS is set to ON.
Here is an example of another query from nVision, this time on a 12.1 system where the two patches have been applied. The optimizer has switched from a nested loop to a hash join in an adaptive plan. In this particular case, this was a good decision and the hash join outperformed the nested loop.
SELECT A.AFFILIATE,SUM(A.POSTED_TOTAL_AMT) FROM PS_XX_XXXXXXX_VW A,
PSTREESELECT10 L1 WHERE A.LEDGER='XXXXXXX' AND A.FISCAL_YEAR=2016 AND
A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 AND L1.SELECTOR_NUM=63382 AND
A.ACCOUNT=L1.RANGE_FROM_10 AND A.BUSINESS_UNIT='XXXXXX' AND
A.CURRENCY_CD='USD' AND A.STATISTICS_CODE=' ' GROUP BY A.AFFILIATE

Plan hash value: 4041461164

--------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 86 (100)| | | |
| 1 | HASH GROUP BY | | 1 | 67 | 86 (3)| 00:00:01 | | |
| 2 | HASH JOIN | | 9 | 603 | 85 (2)| 00:00:01 | | |
|- 3 | NESTED LOOPS | | 9 | 603 | 85 (2)| 00:00:01 | | |
|- 4 | STATISTICS COLLECTOR | | | | | | | |
| 5 | PARTITION RANGE ITERATOR | | 9 | 459 | 84 (2)| 00:00:01 | 16 | 28 |
| 6 | INLIST ITERATOR | | | | | | | |
| 7 | PARTITION LIST ITERATOR | | 9 | 459 | 84 (2)| 00:00:01 |KEY(I) |KEY(I) |
| 8 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_LEDGER | 9 | 459 | 84 (2)| 00:00:01 | KEY | KEY |
| 9 | INDEX RANGE SCAN | PSCLEDGER | 117 | | 52 (2)| 00:00:01 | KEY | KEY |
|- 10 | PARTITION RANGE SINGLE | | 1 | 16 | 1 (0)| 00:00:01 | 63382 | 63382 |
|- 11 | INDEX RANGE SCAN | PSAPSTREESELECT10 | 1 | 16 | 1 (0)| 00:00:01 | 63382 | 63382 |
| 12 | PARTITION RANGE SINGLE | | 23 | 368 | 1 (0)| 00:00:01 | 63382 | 63382 |
| 13 | INDEX RANGE SCAN | PSAPSTREESELECT10 | 23 | 368 | 1 (0)| 00:00:01 | 63382 | 63382 |
--------------------------------------------------------------------------------------------------------------------------------------

Note
-----
- this is an adaptive plan (rows marked '-' are inactive)
Conclusion
Adaptive Optimization is a great feature that does deliver real benefits, but like many Oracle features, it works best on an application that has been developed the way Oracle recommends applications be developed.  Since the default value for OPTIMIZER_ADAPTIVE_STATISTICS is FALSE, PeopleSoft is clearly not alone in not being such an application.
Given PeopleSoft's propensity to generate non-sharable SQL, adaptive statistics are not a viable option, and should not be enabled, and in 12c release 1 that has meant sacrificing the benefits of adaptive plans. However, patch 22652097 means that we don't have to wait for release 2, we can now choose which feature to use.
Every system is different; even every PeopleSoft system is different.  However, my experience so far with PeopleSoft is that I have not seen the switch of join method cause a problem.  It certainly isn't a panacea either: when the join method has changed, it hasn't always made a big difference, but it has been a net benefit.  I would certainly recommend applying these patches.

Interview with PeopleSoft Administrator Podcast: Cost-Based Optimizer Statistics in PeopleSoft

Fri, 2017-05-26 04:56
I recently recorded another interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about management of Cost-Based Optimizer Statistics in PeopleSoft systems.
(19 May 2017) #81 - Database Statistics You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

Running Unix Shell Scripts from the PeopleSoft Process Scheduler

Sun, 2017-02-26 04:42

It is nearly 10 years since I first wrote about how to call Unix shell scripts from the Process Scheduler.  Although very little has changed, I have had a number of questions recently, so I thought it was time I checked the script and updated the posting.  I have used PeopleTools 8.54 in the preparation of this note.
The Process Scheduler is essentially just a mechanism for initiating processes on another server.  Mostly those are other PeopleSoft delivered executables.  The exception is the Application Engine Tuxedo Server process (PSAESRV) where the Process Scheduler submits a service request message that is picked up by one of the server processes that are already running.
NB: although the PSAESRV server is configured by default in the Process Scheduler domain, Oracle recommends that you use it only when you have lots of very short-lived (runtime less than 30 seconds) Application Engine processes.  Typically, this occurs only in CRM.
Process Type Definition
First you need to create a new process type. I chose to call it ‘Shell Script’.  It runs a named shell wrapper script, psft.sh.  The wrapper script calls the script that is to be executed. Note that the command line in the process type definition includes the fully qualified path.


Wrapper Script
This is the wrapper script, psft.sh, that will be called by the Process Scheduler.
#!/bin/ksh
# (c) David Kurtz 2007
# Script: psft.sh
#
# Syntax: psft.sh DBNAME ACCESSID ACCESSPSWD PRCSINSTANCE command
# where
# DBNAME is the name of the PeopleSoft database with a corresponding TNS entry
# ACCESSID is the schema containing the PeopleSoft database
# ACCESSPSWD is the password to ACCESSID
# PRCSINSTANCE is the process instance number supplied by PeopleSoft
#
# Purpose: To start Standard UNIX Shell Script from Process Scheduler, and interface with the PeopleSoft Process Scheduler
# 07.09.2007 Initial Version
# 23.02.2017 Remove unnecessary logfiles section
#set -x

if [ $# -lt 4 ]; then
echo "Usage $0: <DBNAME> <ACCESSID> <ACCESSPSWD> <PRCSINSTANCE> <command>"
exit 1
fi

CONNECT=$2/$3@$1
PRCSINSTANCE=$4
shift 4

#
# Function to set status of API aware process instance
#
function prcsapi
{
if [ $# -lt 2 ]; then
echo "Parameter Error in function $0"
exit 1
fi

TIMESTAMPCOL=${1}
STATUS=${2}

if [ ${PRCSINSTANCE} -gt 0 ];then
echo "Setting process request ${PRCSINSTANCE} to status ${STATUS}"
sqlplus -S /nolog <<!
set termout off echo off feedback off verify off
connect ${CONNECT}
UPDATE psprcsque
SET runstatus = ${STATUS}
, sessionidnum = $$
, lastupddttm = SYSTIMESTAMP
WHERE prcsinstance = ${PRCSINSTANCE}
;
UPDATE psprcsrqst
SET runstatus = ${STATUS}
, prcsrtncd = ${PRCSRTNCD}
, continuejob = DECODE(${STATUS},2,1,7,1,9,1,0)
, ${TIMESTAMPCOL} = SYSTIMESTAMP
, lastupddttm = SYSTIMESTAMP
WHERE prcsinstance = ${PRCSINSTANCE}
;
COMMIT;
exit
!

RET=$?
if [ $RET -ne 0 ];then
echo "SQL*Plus Error Return Code: ${RET}"
fi
fi
}

#
# Main Execution Starts Here
#

echo $0:$*
date
uname -a
echo "Current Directory: `pwd`"
echo "Process log files in: ${PSPRCSLOGDIR}"
PRCSRTNCD=0
prcsapi begindttm 7
#set
#Run the command
$*
PRCSRTNCD=$?

if [ ${PRCSRTNCD} -ne 0 ]; then
prcsapi enddttm 3 # failure
else
prcsapi enddttm 9 # success
fi

date
    Notes:
    1. The Oracle user ID, password and TNS name for the PeopleSoft database are supplied in the first three parameters to the wrapper script. The PeopleSoft Process Instance number is the 4th command line parameter. These parameters are then removed with the shift command leaving any other parameters that have been specified.
    2. Function prcsapi sets the status on the process request row and updates the appropriate timestamp columns in the Process Scheduler tables.
    3. PSPRCSQUE.SESSIONIDNUM holds the operating system process ID of the shell executing the wrapper script.
    4. When the process completes and an end of process status is set (either 9 for success, 3 for failure or 2 for delete) CONTINUEJOB is set to 1, otherwise it is set to 0.
    5. When the wrapper script starts, it sets the process status on the process request record to 7 to indicate that it is processing.  This can be seen in the Process Monitor.
    6. The return code of the executed script is captured.  Later it will be recorded in PSPRCSRQST.PRCSRTNCD.  A non-zero return code indicates an error, and the process status will be set to error.
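The status handling described in the notes can be summarised in a short sketch.  This is Python purely for illustration; the status codes are those used by the wrapper script above:

```python
# Process Scheduler run statuses used by the wrapper script above
# (a sketch, not the full PSPRCSRQST status domain).
RUNSTATUS = {2: "Delete", 3: "Error", 7: "Processing", 9: "Success"}

def continuejob(status: int) -> int:
    """Mirror the DECODE(${STATUS},2,1,7,1,9,1,0) expression in prcsapi."""
    return 1 if status in (2, 7, 9) else 0
```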
Process Definition
Now I can create a Process Definition that will use the process type's wrapper script to execute another command or script.
The first four parameters passed to the wrapper script are the name of the database, the access ID and password, and the process instance.  A string of further parameters will be appended in the individual Process Definition; that is the specific command and parameters to be executed.
It is important that this new process type is defined as being API aware.  That means the process interacts with the Process Scheduler by updating the process status; otherwise, the Process Scheduler cannot determine its status.  Consequently, all API-unaware processes have a run status of Success, indicating only that they were started successfully.  You can see how the interaction should be done by looking at the procedure Update-Process-Status in the delivered SQR library prcsapi.sqc.

I have written a silly test script, mybanner.sh, that I want to be executed by the Process Scheduler.  It just prints out the command line parameters as banner text, both to standard output and to a file called mybanner.log.  This script will be called by psft.sh.
    The Process Scheduler creates a working directory for each process request.  It sets the variable $PSPRCSLOGDIR to the fully qualified location of this directory. Note that mybanner.sh changes the current directory to the location of this variable so that it writes mybanner.log there, and thus it is picked up by the distribution agent and made available via the report repository.  You may wish to do this in your scripts.
The current working directory can be specified in the Process Type or Process definition.  However, during my testing I found that these settings had no effect: the working directory of the script did not change, and the value was not found in any environment variable.
    #!/bin/ksh
    #A silly script to test psft.sh
    #(c) David Kurtz 2017
    #banner function from http://stackoverflow.com/questions/652517/whats-the-deal-with-the-banner-command

    if [ "$PSPRCSLOGDIR" ] ; then
    cd $PSPRCSLOGDIR
    fi

    (
    while [ $# -gt 0 ]
    do
    /opt/oracle/psft/ptdb/custhome/banner $1
    shift
    done
    ) | tee mybanner.log
    exit $?
I can now create a Process Definition that uses the Shell Script process type to execute mybanner.sh.  Note that this command line is appended to the command line specified in the Process Type definition.

    You can't quite see it in the screen shot, but the parameter list includes the process instance number:
    /opt/oracle/psft/ptdb/custhome/mybanner.sh "Hello World" %%INSTANCE%%
Process Scheduler System Settings
During my testing, I found that it was necessary to specify output type settings for process type Other in the Process Scheduler System Settings; otherwise the output files were not posted to the report repository.


    The newly defined Process can be run just as any other process is usually run. Any output from the script on the standard output channel is captured by the Process Scheduler and written to a log file that can then be viewed from the View Log/Trace facility within Process Monitor.
In this case the standard output was written to OTH_DMKTEST_.log, and the list of available files also includes the mybanner.log that was written to $PSPRCSLOGDIR.

    mybanner.log contains just the three words passed as parameters
[ASCII banner art: "Hello", "World", "39950"]
OTH_DMKTEST_39950.log contains the standard output of the entire command, including the additional messages emitted by psft.sh.
Note that the current directory is reported as being the location of the Process Scheduler Tuxedo domain.
    /opt/oracle/psft/ptdb/custhome/psft.sh:/opt/oracle/psft/ptdb/custhome/mybanner.sh Hello World 39950
    Tue Sep 1 21:59:46 UTC 2015
    Linux hcm.london.go-faster.co.uk 2.6.39-400.215.10.el5uek #1 SMP Tue Sep 9 22:51:46 PDT 2014 x86_64 x86_64 x86_64 GNU/Linux
    Current Directory: /home/psadm2/psft/pt/8.54/appserv/prcs/PRCSDOM
    Process log files in: /home/psadm2/psft/pt/8.54/appserv/prcs/PRCSDOM/log_output/OTH_DMKTEST_39950
    Setting process request 39950 to status 7

[ASCII banner art: "Hello", "World", "39950"]

    Setting process request 39950 to status 9
    Tue Sep 1 21:59:46 UTC 2015
    A more detailed version of this document can be found at http://www.go-faster.co.uk/docs/process_scheduler_shell_scripts.pdf.

    Removing Redundant Indexes in PeopleSoft

    Tue, 2017-01-17 04:15
    This is the second of a two-part article that discusses how to identify and remove redundant indexes from a PeopleSoft system. 
    This article describes a script on my website (psredundantfix.sql) that uses a similar query to that described in the previous article to identify redundant indexes from the metadata in the PeopleTools tables. It uses an anonymous block of PL/SQL so nothing is installed in the database. For each redundant index identified it:
    • Adds the record and redundant index to an Application Designer project called REDUNDANT INDEXES. The record must also be added, otherwise Application Designer will not generate a DDL script to remove the index.
    • Unsets the platform flag on the redundant index if the superset index is active on the same platform. Thus Application Designer will generate a script to drop redundant indexes when the project is built. The redundant index definition remains in the record definition in case there is a need to revert the change.
    • If the superset index is active on Oracle:
      • Creates and collects extended optimizer statistics on the combination of columns in the redundant index. If error ORA-00054 occurs, the exception will be ignored and the script will continue.
      • Makes the index invisible. Note that indexes are not dropped. That can be done later by building an alter script for the project. It would be normal to test the consequences of making the indexes invisible for a while, and drop them later. 
    Sample Output
    Project REDUNDANT INDEXES already exists

    Record PSPROJECTITEM, Redundant Index C already in project
    Creating Extended Statistics on PSPROJECTITEM for OBJECTVALUE4,OBJECTTYPE
    Making index PSCPSPROJECTITEM invisible
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

    Record PSTREENODE, Redundant Index C already in project

    Record TL_RPTD_TIME, Redundant Index C added to project
    Making index PSCTL_RPTD_TIME invisible
    Record TL_TA_BATCH, Redundant Index _ added to project
    Making index PS_TL_TA_BATCH invisible
    Making index PS_TL_TA_BATCH1 invisible

    Making index PS_TL_TA_BATCH48 invisible

    Record WRK_XREF_CALC, Redundant Index A already in project
    Creating Extended Statistics on PS_WRK_XREF_CALC for PROCESS_INSTANCE,SEQ_NBR
    Creating Extended Statistics on PS_WRK_XREF_CALC1 for PROCESS_INSTANCE,SEQ_NBR

    Creating Extended Statistics on PS_WRK_XREF_CALC6 for PROCESS_INSTANCE,SEQ_NBR
    Making index PSAWRK_XREF_CALC invisible
    Making index PSAWRK_XREF_CALC1 invisible

    Making index PSAWRK_XREF_CALC6 invisible
    Note:
    • The script commits any changes it makes. There is also a rollback command at the top to prevent it accidentally committing something else.
    • Having run it once, should you immediately run the script again, nothing will be found, because any redundant indexes will have been marked as disabled on Oracle.
    • Should you want to revert changes, mark all indexes in the REDUNDANT INDEXES project as active on Oracle.
    UPDATE psindexdefn
    SET platform_ora = 1
    , activeflag = 1
    WHERE (recname, indexid) IN (
    SELECT objectvalue1, objectvalue2
    FROM psprojectitem
    WHERE objecttype = 1
    AND projectname = 'REDUNDANT INDEXES')
    AND platform_ora = 0;
    COMMIT;
    We can see in Application Designer that index PSAWRK_XREF_CALC is a subset of PS_WRK_XREF_CALC because both start with PROCESS_INSTANCE and SEQ_NBR.
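The subset test here is a leading-prefix comparison of key column lists.  A minimal sketch of that rule, in Python of my own rather than the script's SQL:

```python
def is_redundant(subset_cols, superset_cols):
    """True if subset_cols is a strict leading prefix of superset_cols,
    i.e. the subset index is redundant relative to the superset index."""
    return (len(subset_cols) < len(superset_cols)
            and list(superset_cols[:len(subset_cols)]) == list(subset_cols))
```

So an index on (PROCESS_INSTANCE, SEQ_NBR) is redundant relative to any index whose key columns start with PROCESS_INSTANCE, SEQ_NBR and continue further.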


    Index A was marked inactive on all platforms because the superset index _ is active on all platforms.  


    PeopleSoft delivers some indexes for some platforms only. For example, PSETREENODE is only active on Informix. Therefore the script only deactivates the Informix platform flag on PSCTREENODE, and the platform flag changes to 'Some'. The index is still added to the project, but no attempt is made to create extended statistics or to make it invisible, and the index continues to be built on Oracle.

    The comment applied to index C indicates that only the Informix flag was changed.



The final step, when you are satisfied that making the indexes invisible has no unacceptable consequences, is to drop the redundant indexes using the script generated by Application Designer.

    DROP INDEX PSCPSPROJECTITEM
    /
    DROP INDEX PSEPSPROJECTITEM
    /

    DROP INDEX PSAWRK_XREF_CALC
    /
    DROP INDEX PSAWRK_XREF_CALC1
    /

    DROP INDEX PSAWRK_XREF_CALC6
    /

    Identifying Redundant Indexes in PeopleSoft

    Thu, 2017-01-12 05:57
    This is the first of a two-part series that discusses how to identify and remove redundant indexes from a PeopleSoft system.
    I recently wrote a series of articles on my Oracle blog about identifying redundant indexes on an Oracle database. The same approach can be taken with PeopleSoft, but using the metadata in the PeopleTools tables rather than the database catalogue.
    (This means you could take a similar approach for PeopleSoft on databases other than Oracle, but I can't say anything about the behaviour of other optimizers.)
    The following query returns a report of superset and redundant subset indexes in a PeopleSoft system. It can be downloaded as a script from my website (psredundant.sql).
    The generic Oracle version of this script will identify unique subset indexes as redundant, because the primary key and unique constraints can be altered to use the superset index.  However, this query does not mark unique subset indexes as redundant, because PeopleSoft does not explicitly create unique constraints, only unique indexes.  If I disabled a unique index, I would not be able to maintain the unique constraint via Application Designer.
    Subquery IC returns a list of columns for each index. Subqueries UNI, AS2 and USI retrieve columns for unique/duplicate key, alternate search and user defined indexes respectively. The absolute value of key position is not important for this exercise, it is merely used to sort the column list in the listagg() function in subquery I.
    REM psredundant.sql
    set lines 160 pages 50 trimspool on
    column recname format a15
    column superset_index format a60
    column redundant_index format a60
    break on recname skip 1 on indexid skip 1
    spool psredundant

    WITH uni AS (/*unique indexes*/
    SELECT /*+MATERIALIZE*/ f.recname, i.indexid
    , MIN(i.uniqueflag) OVER (PARTITION BY f.recname) uniqueflag
    , CASE WHEN MAX(CASE WHEN f.recname != f.recname_parent THEN 1 ELSE 0 END) OVER (PARTITION BY f.recname)=1
    THEN f.fieldnum ELSE k.keyposn END keyposn
    , k.fieldname
    FROM psrecfielddb f
    , psindexdefn i
    , pskeydefn k
    WHERE i.recname IN(f.recname,f.recname_parent)
    AND i.recname = k.recname
    AND k.fieldname = f.fieldname
    AND i.indexid = '_' /*unique index*/
    AND k.indexid = i.indexid
    AND bitand(f.useedit,3) > 0 /*unique or dup key*/
    ), as0 AS (/*leading column on alternate search indexes*/
    SELECT f0.recname, k0.indexid, i0.uniqueflag, 0 keyposn, f0.fieldname
    FROM psrecfielddb f0
    , psindexdefn i0
    , pskeydefn k0
    WHERE bitand(f0.useedit,16) = 16 /*alternate search key*/
    AND k0.recname = f0.recname_parent
    AND k0.fieldname = f0.fieldname
    AND i0.recname = k0.recname
    AND i0.indexid = k0.indexid
    AND i0.indexid BETWEEN '0' AND '9' /*alternate search index*/
    ), as1 AS ( /*now add unique columns*/
    SELECT as0.recname, as0.indexid, as0.uniqueflag, as0.keyposn, as0.fieldname
    FROM as0
    UNION ALL /*append unique key index columns*/
    SELECT as0.recname, as0.indexid, as0.uniqueflag, uni.keyposn, uni.fieldname
    FROM as0, uni
    WHERE as0.recname = uni.recname
    ), as2 AS (
    SELECT as1.recname, as1.indexid, as1.uniqueflag, NVL(k.keyposn,as1.keyposn), as1.fieldname
    FROM as1
    LEFT OUTER JOIN pskeydefn k /*to handle custom key order*/
    ON k.recname = as1.recname
    AND k.indexid = as1.indexid
    AND k.fieldname = as1.fieldname
    ), usi AS (/*user indexes*/
    SELECT i.recname, i.indexid, i.uniqueflag, k.keyposn, k.fieldname
    FROM psindexdefn i
    , pskeydefn k
    WHERE k.recname = i.recname
    AND k.indexid = i.indexid
    AND k.indexid BETWEEN 'A' AND 'Z'
    AND i.platform_ora = 1
    ), m AS (/*merge three kinds of index here*/
    SELECT uni.recname, uni.indexid, uni.uniqueflag, uni.keyposn, uni.fieldname
    FROM uni
    UNION ALL
    SELECT as1.recname, as1.indexid, as1.uniqueflag, as1.keyposn, as1.fieldname
    FROM as1
    UNION ALL
    SELECT usi.recname, usi.indexid, usi.uniqueflag, usi.keyposn, usi.fieldname
    FROM usi
    ), ic AS ( /*list of columns, restrict to tables*/
    SELECT r.recname, m.indexid, m.uniqueflag, m.keyposn, m.fieldname
    FROM m
    , psrecdefn r
    WHERE r.rectype IN(0,7)
    AND r.recname = m.recname
    ), i AS ( --construct column list
    SELECT /*+ MATERIALIZE*/
    ic.recname, ic.indexid, ic.uniqueflag
    , count(*) num_columns
    , listagg(ic.fieldname,',') within group (order by ic.keyposn) AS fieldlist
    FROM ic
    GROUP BY ic.recname, ic.indexid, ic.uniqueflag
    )
    SELECT r.recname
    , i.indexid||' ('||i.fieldlist||')' superset_index
    , r.indexid||' ('||r.fieldlist||')' redundant_index
    FROM i
    , i r
    WHERE i.recname = r.recname
    AND i.indexid != r.indexid
    AND r.uniqueflag = 0 /*non-unique redundant*/
    AND i.fieldlist LIKE r.fieldlist||',%'
    AND i.num_columns > r.num_columns
    order by r.recname, r.indexid
    /

    spool off
    This is part of the output for a HR 9.2 demo database. In all it identified 58 indexes on 41 records. However, some of those records are temporary records with several physical instances, so the number of database objects that will be removed is higher.
    RECNAME         SUPERSET_INDEX                                               REDUNDANT_INDEX
    --------------- ------------------------------------------------------------ -----------------------------------------------------------

    HRS_APP_PROFILE C (HRS_AL_LOG_ID,HRS_PERSON_ID) B (HRS_AL_LOG_ID)

    HRS_SJT_JO B (HRS_JOB_OPENING_ID,SCRTY_TYPE_CD) A (HRS_JOB_OPENING_ID)

    HR_PROMOTE_XREF C (EMPLID,EMPL_RCD) A (EMPLID)
    B (EMPLID,EMPL_RCD,ACTION_DT_SS,EFFSEQ) A (EMPLID)
    B (EMPLID,EMPL_RCD,ACTION_DT_SS,EFFSEQ) C (EMPLID,EMPL_RCD)

    HR_SSTEXT_EFFDT _ (OBJECTOWNERID,HR_SSTEXT_SUB_ID,TEXT_ID,EFFDT) A (OBJECTOWNERID,HR_SSTEXT_SUB_ID,TEXT_ID)

    HR_TRANSFR_XREF C (EMPLID,EMPL_RCD) A (EMPLID)
    B (EMPLID,EMPL_RCD,ACTION_DT_SS,EFFSEQ) A (EMPLID)
    B (EMPLID,EMPL_RCD,ACTION_DT_SS,EFFSEQ) C (EMPLID,EMPL_RCD)

    PA_ALIAS_USE 0 (ALIAS_TYPE,ALIAS_NAME) A (ALIAS_TYPE)


    WRK_XREF_CALC _ (PROCESS_INSTANCE,SEQ_NBR,XREF_NUM) A (PROCESS_INSTANCE,SEQ_NBR)
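    Several of the records in this output, such as WRK_XREF_CALC, are temporary records, so each redundant index definition corresponds to several physical indexes, one per instance. As a rough check of how many, the instance counts can be read from the PeopleTools tables. This is a sketch that assumes the delivered view PSTEMPTBLCNTVW (instances requested by Application Engine programs) and the global instance count on PSOPTIONS.

    ```sql
    REM Sketch: number of physical instances of each temporary record.
    REM RECTYPE 7 denotes a temporary record.
    SELECT r.recname
    ,      t.temptblinstances + o.temptblinstances num_instances
    FROM   psrecdefn r
    ,      pstemptblcntvw t
    ,      psoptions o
    WHERE  r.rectype = 7
    AND    t.recname = r.recname
    ORDER  BY r.recname;
    ```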
    The articles on my Oracle blog discuss how to identify redundant indexes on an Oracle database, and then how to go about removing them.  Of course, in PeopleSoft, you should only remove indexes by making the change within Application Designer, and that is what the next posting discusses.

    Setting Environment Variables in Application Server/Process Scheduler Tuxedo Domains

    Thu, 2016-07-07 10:54
    The topic of how to manage environment variables was mentioned recently on the PeopleSoft Administrator Podcast.
    Recently I built a pair of PeopleSoft environments for a proof-of-concept and ran into exactly this problem.  I have two PS_APP_HOMEs (application homes) that share the same PS_HOME (PeopleTools home).  The environment variables need to be correct before I open psadmin to administer the application server and process scheduler domains.
    On Unix, I would usually recommend running domains for different PS_APP_HOMEs in different Unix user accounts.  Thus environmental variables can be set up in the .profile or .bashrc scripts.  The processes then run as different Unix users which makes them easier to monitor.  The Unix users should be in the same Unix group and have group permissions to the PS_HOME directory.  This approach is not possible on Windows.
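    On Unix, the .profile of the account that owns the HR domains might look something like this. This is only a sketch: the paths are the example values shown in the psadmin session later in this post, and the PATH entry is illustrative.

    ```shell
    # Sketch of a .profile for a Unix account dedicated to the HR domains.
    export PS_HOME=/opt/oracle/psft/pt/tools
    export PS_APP_HOME=/opt/oracle/psft/pt/hr91
    export PS_CUST_HOME=$PS_APP_HOME/pscust
    export PATH=$PS_HOME/bin:$PATH
    ```

    A second account owning the Financials domains would have its own .profile pointing PS_APP_HOME at the Financials application home.
    
    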
    The alternative is to have a script or a menu that sets up those variables before you enter psadmin.
    I noticed that there is a new menu in the domain configuration menu in psadmin which permits you to set environmental variables that are then built into the Tuxedo domain configuration.  In fact this has always been possible by editing the psappsrv.ubx and psprcs.ubx files directly to cause variables to be created in the psappsrv.env and psprcs.env files, but now you just have to enter the variables through the menu in psadmin.
    When we start psadmin we can see the relevant environmental parameters.   PS_APP_HOME points to my HR installation.
    PSADMIN -- PeopleTools Release: 8.54.16
    Copyright (c) 1996, 2014, Oracle. All rights reserved.

    --------------------------------
    PeopleSoft Server Administration
    --------------------------------

    PS_CFG_HOME /home/psadm1/psft/pt/8.54
    PS_HOME /opt/oracle/psft/pt/tools
    PS_APP_HOME /opt/oracle/psft/pt/hr91
    PS_CUST_HOME /opt/oracle/psft/pt/hr91/pscust

    1) Application Server
    ...
    But what if I have a Financials installation under the same PS_APP_HOME?  Option 16 of the configuration menu lets me define environmental settings.  Something very similar happens in the Process Scheduler configuration.
    ----------------------------------------------
    Quick-configure menu -- domain: RASC3K
    ----------------------------------------------
    Features Settings
    ========== ==========
    1) Pub/Sub Servers : Yes 17) DBNAME :[XXXXXX]
    ...
    Actions
    =========
    14) Load config as shown
    15) Custom configuration
    16) Edit environment settings
    h) Help for this menu
    q) Return to previous menu

    HINT: Enter 17 to edit DBNAME, then 14 to load
    So, I have added PS_APP_HOME, PS_CUST_HOME and INTERFACE_HOME and they have become a part of the configuration of the Tuxedo domain.
    --------------------------------------
    PeopleSoft Domain Environment Settings
    --------------------------------------
    Domain Name: RASC3K

    TEMP :[{LOGDIR}{FS}tmp]
    TMP :[{LOGDIR}{FS}tmp]
    TM_BOOTTIMEOUT :[120]
    TM_RESTARTSRVTIMEOUT :[120]
    TM_BOOTPRESUMEDFAIL :[Y]
    FLDTBLDIR32 :[{$TUXDIR}{FS}udataobj]
    FIELDTBLS32 :[jrep.f32,tpadm]
    ALOGPFX :[{LOGDIR}{FS}TUXACCESSLOG]
    INFORMIXSERVER :[{$Startup\ServerName}]
    COBPATH :[{$PS_APP_HOME}/cblbin:{$PS_HOME}/cblbin]
    PATH :[{$PATH}:{$Domain Settings\Add to PATH}]
    PS_APP_HOME :[/opt/oracle/psft/pt/fin91]
    PS_CUST_HOME :[/opt/oracle/psft/pt/fin91/pscust]
    INTERFACE_HOME :[/opt/oracle/psft/pt/fin91/pscust/interfaces]

    1) Edit environment variable
    2) Add environment variable
    3) Remove environment variable
    4) Comment / uncomment environment variable
    5) Show resolved environment variables
    6) Save
    h) Help for this menu
    q) Return to previous menu

    Command to execute (1-6, h or q) :
    What is going on here?  These variables have been added to the PS_ENVFILE section of psappsrv.ubx.
    # ----------------------------------------------------------------------
    *PS_ENVFILE
    TEMP={LOGDIR}{FS}tmp
    TMP={LOGDIR}{FS}tmp
    TM_BOOTTIMEOUT=120
    TM_RESTARTSRVTIMEOUT=120
    TM_BOOTPRESUMEDFAIL=Y
    FLDTBLDIR32={$TUXDIR}{FS}udataobj
    FIELDTBLS32=jrep.f32,tpadm
    ALOGPFX={LOGDIR}{FS}TUXACCESSLOG
    {WINDOWS}
    COBPATH={$PS_HOME}\CBLBIN%PS_COBOLTYPE%
    INFORMIXSERVER={$Startup\ServerName}
    # Set IPC_EXIT_PROCESS=1 to use ExitProcess to terminate server process.
    # Set IPC_TERMINATE_PROCESS=1 to use TerminateProcess to terminate server process.
    # If both are set, TerminateProcess will be used to terminate server process.
    #IPC_EXIT_PROCESS=1
    IPC_TERMINATE_PROCESS=1
    PATH={$PS_HOME}\verity\{VERITY_OS}\{VERITY_PLATFORM}\bin;{$PATH};{$Domain Settings\Add to PATH}
    {WINDOWS}
    {UNIX}
    INFORMIXSERVER={$Startup\ServerName}
    COBPATH={$PS_APP_HOME}/cblbin:{$PS_HOME}/cblbin
    PATH={$PATH}:{$Domain Settings\Add to PATH}
    {UNIX}
    PS_APP_HOME=/opt/oracle/psft/pt/fin91
    PS_CUST_HOME=/opt/oracle/psft/pt/fin91/pscust
    INTERFACE_HOME=/opt/oracle/psft/pt/fin91/pscust/interfaces
    They then appear in the psappsrv.env file that is generated when the domain is configured.  This file contains fully resolved values of the environment variables, which are set by every Tuxedo application server process when it starts.
    TEMP=/home/psadm1/psft/pt/8.54/appserv/XXXXXX/LOGS/tmp
    TMP=/home/psadm1/psft/pt/8.54/appserv/XXXXXX/LOGS/tmp
    TM_BOOTTIMEOUT=120
    TM_RESTARTSRVTIMEOUT=120
    TM_BOOTPRESUMEDFAIL=Y
    FLDTBLDIR32=/opt/oracle/psft/pt/bea/tuxedo/udataobj
    FIELDTBLS32=jrep.f32,tpadm
    ALOGPFX=/home/psadm1/psft/pt/8.54/appserv/XXXXXX/LOGS/TUXACCESSLOG
    # Set IPC_EXIT_PROCESS=1 to use ExitProcess to terminate server process.
    # Set IPC_TERMINATE_PROCESS=1 to use TerminateProcess to terminate server process.
    # If both are set, TerminateProcess will be used to terminate server process.
    #IPC_EXIT_PROCESS=1
    IPC_TERMINATE_PROCESS=1
    INFORMIXSERVER=
    COBPATH=/opt/oracle/psft/pt/fin91/cblbin:/opt/oracle/psft/pt/tools/cblbin
    PATH=/opt/oracle/psft/pt/fin91/bin:/opt/oracle/psft/pt/fin91/bin/interfacedrivers:/opt/oracle/psft/pt/tools/jre/bin:/opt/oracle/psft/pt/tools/appserv:/opt/oracle/psft/pt/tools/setup:/opt/oracle/psft/pt/bea/tuxedo/bin:.:/opt/oracle/psft/pt/oracle-client/12.1.0.1/bin:/opt/oracle/psft/pt/oracle-client/12.1.0.1/OPatch:/opt/oracle/psft/pt/oracle-client/12.1.0.1/perl/bin:/opt/mf/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/opt/oracle/psft/pt/tools/bin:/opt/oracle/psft/pt/tools/bin/sqr/ORA/bin:/opt/oracle/psft/pt/tools/verity/linux/_ilnx21/bin:/home/psadm1/bin::.
    PS_APP_HOME=/opt/oracle/psft/pt/fin91
    PS_CUST_HOME=/opt/oracle/psft/pt/fin91/pscust
    INTERFACE_HOME=/opt/oracle/psft/pt/fin91/pscust/interfaces
    Warning
    I found that I had to specify fully resolved paths for the variables I defined.  I did try setting variables in terms of other variables,

    PS_APP_HOME :[/opt/oracle/psft/pt/fin91]
    PS_CUST_HOME :[{$PS_APP_HOME}{FS}pscust]
    INTERFACE_HOME :[{$PS_CUST_HOME}{FS}interfaces]

    but I started to get errors.
    ==============ERROR!================
    Value for PS_CUST_HOME: {$PS_APP_HOME}{FS}pscust, is invalid. Your environment
    may not work as expected.
    ==============ERROR!================
    And some variables were not fully resolved in the psappsrv.env file.
    PS_APP_HOME=/opt/oracle/psft/pt/fin91
    PS_CUST_HOME=/opt/oracle/psft/pt/fin91/pscust
    INTERFACE_HOME={$PS_APP_HOME}{FS}pscust/interfaces
    Configuration Settings in the .ubx -v- the .cfg
    My only reservation is that there is now environment-specific configuration in the psappsrv.ubx file, rather than the psappsrv.cfg file.   When I have done this in the past, I would create additional variables in psappsrv.cfg that were referenced from the psappsrv.ubx file.  Thus the psappsrv.ubx file was consistent across environments, and all the configuration was in the main configuration file, psappsrv.cfg.
    However, you can add additional variables in psappsrv.cfg, thus:
    [Domain Settings]
    ;=========================================================================
    ; General settings for this Application Server.
    ;=========================================================================

    Application Home=/opt/oracle/psft/pt/fin91
    Custom Directory=pscust
    Interface Directory=pscust/interfaces

    and then reference them in the variables, and they will resolve correctly in the psappsrv.env.
    PS_APP_HOME={$Domain Settings\Application Home}
    PS_CUST_HOME={$Domain Settings\PS_APP_HOME}{FS}{$Domain Settings\Custom Directory}
    INTERFACE_HOME={$Domain Settings\PS_APP_HOME}{FS}{$Domain Settings\Interface Directory}
    You may still see errors in psadmin:
    ==============ERROR!================
    Value for PS_APP_HOME: {$Domain Settings\Application Home}, is invalid. Your
    environment may not work as expected.
    ==============ERROR!================

    ==============ERROR!================
    Value for PS_CUST_HOME: {$Domain Settings\Application Home}{FS}{$Domain
    Settings\Custom Directory}, is invalid. Your environment may not work as
    expected.
    ==============ERROR!================
    Conclusion
    Using this technique, it does not matter how the environment variables are set when you go into psadmin to start the application server; the correct settings are defined in the Tuxedo domain and override the environment.
    You have always been able to do this in Tuxedo, but you would have had to edit the psappsrv.ubx file yourself, now the menu allows you to administer this.
    There is no way to view the psappsrv.ubx and psappsrv.env files from within psadmin; only psappsrv.cfg can be opened.  If you want to check that your settings have reached the psappsrv.env file, you will need to leave psadmin and look in the file yourself.

    PeopleSoft on Oracle 12c

    Fri, 2016-05-06 04:59
    I was asked by Dan Iverson from psadmin.io about my experiences of PeopleSoft on Oracle 12c.  I have seen a number of PeopleSoft Financials systems on 12c recently.  Generally the experience is very positive, but one common feature is that they had all disabled Adaptive Query Optimization.

    What is Adaptive Query Optimization?

    Adaptive Query Optimization is a term for a set of new features in Oracle 12c that allow the optimizer to improve the accuracy of execution plans.  It is described in the Optimizer with Oracle 12c white paper.  Put simply, Oracle collects additional statistics at parse time which it can use to generate a better execution plan than if it just used the ordinarily collected statistics, and further statistics at execution time which can be used to change the execution plan for subsequent executions of the same SQL statement.
    Why does it cause a Problem in PeopleSoft?
    Adaptive Optimization is clearly a very useful feature in an application where SQL is shareable and reused.  But there's the rub.  This is another Oracle feature that works best with an application that is written the way that Oracle advise applications be written (Automatic Memory Management and the plan stability/management technologies also come to mind), and PeopleSoft is not such an application.  PeopleSoft applications are SQL parse intensive.
    Parsing SQL is an expensive business in terms of time and CPU.  The library cache preserves the results of previous parses to save the overhead of repeatedly parsing the same SQL for subsequent executions.  If the SQL is different, even just by a literal value, it will have a different SQL_ID and will not be matched.  If SQL statements in the library cache are not being reused, then you incur the cost of parse for each execution.  Adaptive Optimization adds to that overhead, in PeopleSoft sometimes to the point where it can have a significant effect on performance.
    PeopleSoft has several behaviours which lead to different SQL statements being generated from the same source:
    • Application Engine %BIND() variables are resolved to literals in the generated SQL if the ReUseStatement attribute is not set, which by default it is not.  Thus, each iteration of a loop may cause the same Application Engine step to produce a different SQL statement with a different SQL ID (see Performance Benefits of ReUse Statement Flag in Application Engine).  PeopleSoft development have got better at using this attribute in delivered code.  However, there are still many places where it could be set but is not.
    • There are many places in PeopleSoft where SQL is generated dynamically to cater for dynamic application configuration options.  Bind variables may be embedded in the SQL as literals, or more significant changes may be introduced such as completely different criteria or joining different tables.  In Application Engine, ReUseStatement usually cannot be used in conjunction with such steps.
    • Different concurrently executing instances of the same Application Engine programs will be allocated a different non-shared instance of a temporary record, so they reference different tables.  
    Some parts of PeopleSoft do produce shareable SQL statements.  Much of the SQL generated by the component processor and PeopleCode uses bind variables, the exception being where dynamic SQL is generated in PeopleCode.  PeopleSoft COBOL programs also widely uses bind variables, but again some places dynamically generate SQL statements.
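    You can get a feel for how much literal SQL a system generates by grouping the library cache by FORCE_MATCHING_SIGNATURE, which is the same for statements that are identical except for their literal values. This is just a sketch, and it needs SELECT privilege on V$SQL; the threshold of 10 versions is arbitrary.

    ```sql
    REM Sketch: statements in the library cache that differ only by literals.
    SELECT force_matching_signature
    ,      COUNT(*) versions
    ,      SUM(parse_calls) parse_calls
    FROM   v$sql
    WHERE  force_matching_signature > 0
    GROUP  BY force_matching_signature
    HAVING COUNT(*) >= 10
    ORDER  BY versions DESC;
    ```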
    What does Oracle say about it?
    There are a number of documents on the Oracle support website that touch on this (you will need to log in to follow these links):

    Advice for the PeopleSoft Oracle DBA: The source of this document is not clear.  It is not attached to a MOS note, and is not in the usual format for such notes.  It is the only document that I can find that makes specific recommendations for Adaptive Optimization in conjunction with PeopleSoft, and it recommends totally disabling the entire Adaptive Optimization feature:
    • "optimizer_adaptive_features = FALSE - After upgrading to Oracle Database 12c, many PeopleSoft customers have noticed overall performance degradation which is related to Oracle 12c Optimizer Adaptive Feature (OAF). It is recommended that this value be disabled in all Oracle Databases running PeopleSoft Applications."
    • It has always been tempting to enable cursor_sharing with PeopleSoft so that Oracle converts literals in the SQL back to bind variables and thus matching SQLs in the library cache with different literal values.  However, it has no benefit for some dynamic SQL and different temporary table instances.  It can also introduce other optimizer problems.  On the few occasions that I have tried it, I have never had a good experience.  This document also recommends against it.
    How to Approach Issues that Appear Related to Adaptive Features Such as Adaptive Query Optimization (Doc ID 2068807.1): This note acknowledges that "while some issues with adaptive features have been found, in a number of cases, issues manifesting as performance degradation associated with Adaptive Query Optimization simply highlight deficiencies in the system, primarily inaccurate or stale object or system statistics. Due to the nature of the activities that Adaptive Query Optimization performs, the impact of degraded performance can be widespread and can include (but is not limited to):
      • Poor SQL execution performance (where a poor plan is selected)
      • Poor Parse performance (where the optimizer takes more time to determine the optimal access path)
    • The scope of either of the issues may not be limited to individual statements but can impact the whole system in unforeseen ways; for example, both poor SQL execution and parse performance may cause locks to be held for a greater duration than normal, causing lock contention, while poor parse performance may hold latches or mutexes with similar results.
    • Blanket disabling of features: While disabling Adaptive Features with blanket parameters (see: Document 2031605.1) may provide a short term workaround, the loss of their benefits to other queries may be significant. If the underlying cause is some other factor then a better solution is to identify and address that"
    Adaptive Query Optimization (Doc ID 2031605.1) discusses how to disable either the entire Adaptive Query Optimization feature
    OPTIMIZER_ADAPTIVE_FEATURES=FALSE /*disables adaptive optimisation as a whole*/
    or to disable individual sub-features
    _optimizer_adaptive_plans=false /*disables adaptive plans*/
    _optimizer_use_feedback=false /*disables the use of statistics feedback for subsequent executions. Default is true*/
    _px_adaptive_dist_method=off /*disables the adaptive parallel distribution methods*/
    _optimizer_dsdir_usage_control=0 /* disables the optimizer usage of dynamic statistics (sampling) directives. Default value is 126 */
    _optimizer_gather_feedback=false /*disables the gathering of execution feedback in the optimizer*/
    _optimizer_nlj_hj_adaptive_join=false /*disables only the adaptive join from nested loops to hash join*/
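    For example, the whole feature can be switched off with the documented 12.1 parameter, while the hidden underscore parameters must be double-quoted if set individually. A sketch, assuming the instance uses an spfile:

    ```sql
    REM Sketch: disable the whole feature (12.1 parameter name).
    ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE=BOTH;

    REM Or disable a single sub-feature; hidden parameters need double quotes.
    ALTER SYSTEM SET "_optimizer_use_feedback" = FALSE SCOPE=BOTH;
    ```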
    Summary
    I have worked with a number of PeopleSoft Financials systems on Oracle 12c.  All but one of them had already disabled Adaptive Query Optimization, and I had to disable it on that one.  Once disabled, performance improved such that it was at least as good as it had been on 11g.
    There is no question that it is important to maintain accurate statistics and histograms on tables in PeopleSoft.  I have written elsewhere about how to achieve that, especially on temporary tables.  However, I am left with a concern that disabling the entire feature may be excessive, and that there may be areas in PeopleSoft where it can bring some benefits.  I would like to find the opportunity to test whether it is possible to achieve better performance by only disabling certain parts of the feature.

    PS360: A Utility to Extract and Present PeopleSoft Configuration and Performance Data

    Wed, 2016-04-20 12:16
    Introduction
    Most of a PeopleSoft application is itself stored in the database in PeopleTools tables.  Therefore there is a lot of information about the configuration and operation of a PeopleSoft system available in the database.  There are also performance metrics, particularly about batch processes.
    PS360 is a new tool on which I am working.  It just uses SQL scripts to extract that data to HTML files, and packages them up in a zip file so that they can be sent for further analysis.  The style and method are closely modelled on Enkitec's EDB360 by Carlos Sierra, another free tool used for health checks and performance analysis of any Oracle database system.  PS360 aims to gather PeopleSoft-specific information that is not presented by EDB360.  It also runs in Oracle's SQL*Plus tool, and so is only available for use with an Oracle database.
    Every section of PS360 is just the output of a SQL query, though sometimes pre-processing is done in an anonymous PL/SQL block.  It does not install anything into the database, and does not update any table (other than the PLAN_TABLE, which is used for temporary working storage).  Each report is in tabular and/or graphical format.  All the charts are produced with the Google chart API.
    The output falls into three sections.  
    • Configuration: Simple reports of certain configuration tables.
    • Performance Metrics: Process Scheduler and Application Engine timings.
    • Consistency Checks: both within PeopleTools tables and between the PeopleSoft and Oracle catalogues.
    Sample PS360 Index Page generated on PeopleSoft Demo Database
    Instructions
    The tool can be run by anyone with access to the PeopleSoft Owner database user (usually SYSADM).  That user will already have privilege to read the Oracle catalogue.
    Download the tool and unzip it into a directory.  Navigate to the ps360 (master) directory, open SQL*Plus and connect as SYSADM.  Execute the script ps360.sql.  The output will be written to a zip file in the same directory.  Unpack that zip file on your own PC and open the file ps360_[database name]_0_index.html with a browser.
    Feedback and Enhancements
    I am looking for feedback about the tool, and suggestions for further enhancements.
    Please either leave comments here or e-mail me at info@go-faster.co.uk.

    nVision Performance Tuning: Coalescing Tree Leaves

    Tue, 2016-04-19 11:09
    I have blogged recently about the effectiveness of the Tree Performance options in improving the performance of nVision reports.
    However, a consequence of the Tree Performance Access Method 'suppress join; use literal values' is that the resulting SQL in nVision will have a criterion for every leaf on each of the selected nodes on that tree.
    nVision Tree Performance Options


    There will be an equality condition for each single-value leaf. I normally set Selector Options to Ranges of values (BETWEEN), so I get a BETWEEN condition for each ranged leaf. Behind the scenes, Oracle rewrites BETWEEN as a pair of inequalities, so there is no difference, but the SQL generated by nVision is slightly shorter.
    The following is typical of nVision SQL with these performance options set.
    SELECT A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) 
    FROM
    PS_LEDGER A WHERE A.LEDGER='ACTUALS' AND A.FISCAL_YEAR=2015 AND
    A.ACCOUNTING_PERIOD BETWEEN 1 AND 12 AND A.CURRENCY_CD='GBP' AND
    A.STATISTICS_CODE=' ' AND (A.BUSINESS_UNIT=

    ) AND (
    A.DEPTID='C500' OR A.DEPTID='C512' OR A.DEPTID='C117' OR A.DEPTID='C157' OR
    A.DEPTID='C340' OR A.DEPTID='C457' OR A.DEPTID='C510' OR A.DEPTID='A758' OR
    A.DEPTID='8220' OR A.DEPTID='A704' OR A.DEPTID='A121' OR A.DEPTID='A110' OR
    A.DEPTID BETWEEN 'A153' AND 'A154' OR A.DEPTID BETWEEN 'A151' AND 'A152' OR
    A.DEPTID='A724' OR A.DEPTID BETWEEN 'A131' AND 'A133' OR A.DEPTID='A733' OR
    A.DEPTID='A217' OR A.DEPTID='A437' OR A.DEPTID='A130' OR A.DEPTID='A134' OR
    A.DEPTID='A703' OR A.DEPTID='A714' OR A.DEPTID='A218' OR A.DEPTID='A226' OR
    A.DEPTID BETWEEN 'A135' AND 'A138'

    A consequence of all the criteria is that Oracle can take a long time to parse the SQL statement. It may only be a few seconds in the case of a single SQL statement, but an nVision report book can consist of thousands of SQL statements.
    To produce the following performance profile, I enabled Oracle SQL trace for a report book and profiled the trace. SQL Parse accounted for nearly 8% of the total runtime of this report book, so it can be significant, and can vary widely.
    Event Name                           % Time      Seconds      Calls   Avg/Call   Min/Call    Max/Call
    FETCH calls [CPU]                     48.2%  3,699.8440s     16,068    0.2303s    0.0000s   178.0640s
    db file sequential read               22.5%  1,728.2101s  4,413,352    0.0004s    0.0002s     0.1294s
    SQL*Net message from client [idle]     8.0%    617.7042s        926    0.6671s    0.0002s    61.3147s
    PARSE calls [CPU]                      7.9%    605.9340s      5,383    0.1126s    0.0000s    11.0500s
    ...
    Total                                100.0%  7,681.4428s

    Reducing the number of criteria in the SQL will reduce the parse time, but that is determined by the way the tree leaves are defined.
    The leafcoal.sql script seeks to address this by repeatedly merging two consecutive leaves on the same tree node into a single ranged leaf where possible. It performs two checks before merging adjacent leaves on the same tree node:
    • There is no intermediate value on the detail field defined in the tree structure record. For example, if the detail field is DEPT_TBL.DEPTID, the script checks that the merged leaf would not pick up any value of DEPTID on PS_DEPT_TBL that is not selected by the existing leaves.
    • There is not another leaf on another node on the tree that would intersect with the merged leaf.
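    Before running the script, it can be useful to see how many leaves each node carries, and how many of them are single values rather than ranges. The following is only a sketch against PSTREELEAF: I am ignoring the SETID, SETCNTRLVALUE and EFFDT keys for brevity, and assuming that a single-value leaf has RANGE_FROM equal to RANGE_TO.

    ```sql
    REM Sketch: leaves per node on a tree, split into single values and ranges.
    SELECT l.tree_node_num
    ,      COUNT(*) num_leaves
    ,      SUM(CASE WHEN l.range_from = l.range_to THEN 1 ELSE 0 END) single_values
    FROM   pstreeleaf l
    WHERE  l.tree_name = 'XXX_ACCOUNT' /*example tree name from the output below*/
    GROUP  BY l.tree_node_num
    ORDER  BY num_leaves DESC;
    ```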
    Instructions
    leafcoal.sql was written as an anonymous PL/SQL block, so there is nothing to install. It should be run in SQL*Plus connected as the PeopleSoft owner ID (usually SYSADM). It is expected that there are some adjustments to the script that the user may need to make. As delivered, it runs in a test mode that does not update the database but reports on what it would do. Change k_testmode to FALSE to make it update the database.
    k_testmode     CONSTANT BOOLEAN := TRUE; /*set this to FALSE to perform the update*/
    The level of output emitted depends on the variable l_debug_level:
    l_debug_level  INTEGER := 4;
    • 1. end of processing message 
    • 2. start of processing for tree 
    • 3. number of leaves in tree and number of leaves coalesced 
    • 4. details of leaves being compressed 
    • 5. start and end of each procedure 
    • 6. parameters passed to functions 
    • 7. number of rows updated/deleted during coalesce 
    • 8. dynamic SQL statement 
    The script reports on the work it has done. It does not commit its updates. That is left for the user to either commit or rollback.

    .(3)Processing SHARE, ,XXX_ACCOUNT,151201                                       
    .(4)634 nodes, 2636 leaves
    .(4)1358 leaves coalesced (52%)

    (1)Commit changes or rollback
    The query at the end of the script determines which trees will be processed, and may need to be changed. For example, you might choose to coalesce the leaves on

    • specific trees,
    • most recent effective dated trees,
    • trees with the literal values performance option.

    FOR i IN (
        SELECT DISTINCT d.setid, d.setcntrlvalue, d.tree_name, d.effdt
        FROM   pstreedefn d
        ,      pstreestrct s
        ,      psrecfielddb f
        WHERE  d.tree_strct_id = s.tree_strct_id
        AND    s.node_fieldname = 'TREE_NODE'
--      AND    d.tree_acc_method = 'L' /*literal values*/
        AND    s.dtl_recname = f.recname
        AND    s.dtl_fieldname = f.fieldname
--      AND    d.tree_name = 'XXX_ACCOUNT'
    ) LOOP
    Conclusion
    The number of leaves coalesced depends entirely on how the trees have been built. At one customer it produced a reduction of over 50%; at another it was only 10%. The reduction in the number of leaves produces a corresponding reduction in SQL parse time during nVision reports.

    Interview with PeopleSoft Administrator Podcast

    Fri, 2016-03-25 13:59
    I recently recorded an interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast. It has been spread over three episodes. There is lots of other good stuff on the website and other episodes that are well worth listening to.
    (25 March 2016) #21 - Temporary Tables
    (8 April 2016) #23 - The Application Server
    (15 April 2016) #24 - Application Server Tuning
    You can listen to the podcast on psadmin.io, subscribe with your favourite podcast player, or find it in iTunes.

    nVision Performance Tuning: General Principles

    Sun, 2016-03-06 12:20
    Over the years I have dealt with performance problems with nVision reporting on General Ledger on various Financials systems in various kinds of businesses.  Different businesses use nVision very differently and face different challenges, but I have developed an approach that works well at most customers.  I have collected that advice into a document that I have recently published on my website (http://www.go-faster.co.uk/docs.htm#nVision Tuning.Generic.pdf).

    The key points are:
    • Indexing
      • Effective indexing of LEDGER and LEDGER_BUDG tables to match the analysis criteria of the reports.
      • Enhanced indexing of the PSTREESELECT tables, so that the indexes fully satisfy the queries without the need to visit the tables.
    • Collection of statistics and extended statistics on the PSTREESELECT tables.
    • Using the nVision performance options 
      • use static selectors instead of dynamic selectors.  It is difficult to maintain up-to-date optimizer statistics on the selector tables with dynamic selectors.
      • simplify SQL statements by replacing joins with literal criteria
      • updated 11.4.2016: reduce SQL parse time by coalescing leaves on trees.
    • I also suggest use of Oracle Fine Grained Auditing to 
      • enhance instrumentation,
      • detect the use of dynamic selectors.
    • Appropriate partitioning of the LEDGER and LEDGER_BUDG tables.
    • Archiving.
      • If the partitioning option is not available, then I strongly recommend that as much historical data as possible is purged from the LEDGER and LEDGER_BUDG tables.
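    For example, the Fine Grained Auditing suggestion above might be implemented with a policy such as this. This is only a sketch: the selector table and policy name are illustrative, and the real configuration is discussed in the document.

```sql
-- Hypothetical FGA policy that audits every SELECT against one tree selector
-- table, so that nVision SQL (and hence any use of dynamic selectors) is
-- captured in the audit trail.
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'SYSADM',
    object_name     => 'PSTREESELECT10',
    policy_name     => 'NVISION_TS10',
    audit_condition => NULL,       -- audit all statements
    statement_types => 'SELECT');
END;
/
```

The captured statements can then be queried from the FGA audit trail to see which selectors nVision actually used.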
    Caveat: This is a general approach, and the document makes general statements.  Every customer is different, because their data is different and often their method of analysis differs.  There is always something that requires adjustment or an exception to the general approach.  Your mileage will vary!

      Implementing Index Compression (and other Physical Storage Options) via Application Designer

      Thu, 2016-02-11 12:07
      There are some performance improvements that require physical storage options to be set on tables or indexes.
      One particular technique that I will take as an example for this article is index compression.  A good example in PeopleSoft is the tree node table, PSTREENODE.  It drives many security and hierarchical queries.  It is not updated very frequently, only as new trees are brought on.  Many of the indexes are good candidates for compression.
      This compression works by storing repeated column values only once per index leaf block.  Each distinct set of values in the columns up to the prefix length is stored in a symbol table.  The choice of prefix length can significantly affect the compression.  Oracle can calculate the optimal prefix length using
      ANALYZE INDEX … VALIDATE STRUCTURE
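      A minimal sketch of the technique: after the ANALYZE, the session-private INDEX_STATS view holds the calculated optimal prefix length and the estimated space saving for the index just analysed.

```sql
ANALYZE INDEX ps_pstreenode VALIDATE STRUCTURE;

-- INDEX_STATS holds a single row, for the current session only,
-- describing the index just analysed.
SELECT name
,      opt_cmpr_count    -- optimal compression prefix length
,      opt_cmpr_pctsave  -- estimated percentage saving in leaf blocks
FROM   index_stats;
```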
      I have written a script, calc_opt_comp.sql, to make this slightly easier.  This is the output on my demo database, but I get similar results on production systems.
                                            Optimal
                                        Compression               Weighted
                                             Prefix       Current  Average
      Table Name         Index Name          Length FREQ PARTS Blocks Saving %
      ------------------ ------------------ ------- ---- ----- ------ --------
      PSTREENODE         PSAPSTREENODE            4    1     0    280     39.0
                         PSBPSTREENODE            3    1     0    264     30.0
                         PSCPSTREENODE            1    1     0    120      7.0
                         PSDPSTREENODE            4    1     0    256     61.0
                         PSFPSTREENODE            2    1     0    256     67.0
                         PSGPSTREENODE            3    1     0    400     49.0
                         PS_PSTREENODE            4    1     0    256     44.0
      However, I want to make sure that, should the table need to be rebuilt in the future, PeopleTools will generate the DDL with the appropriate settings.  The same principle applies to any other physical storage option.  I would always recommend that the compression prefix lengths be incorporated into the PeopleTools DDL override in Application Designer (figure 1).  While you could extend the DDL model and add another override for compression, I just append it to the PCTFREE setting.
      Figure 1. Index DDL Overrides
      However, there is a catch.  PeopleTools has never examined DDL overrides when determining whether there is a difference between the PeopleSoft and database data dictionaries, even though that comparison must be platform specific.  DDL overrides and DDL models are just strings held in the PeopleTools tables.  They can be extended (or even removed) by customers.  I assume this is the reason: it was not felt possible to reliably check them.
      So, if the build settings (figure 2) are 'recreate index only if modified', which is the default, Application Designer will not generate a DDL script, nor execute any DDL.
      Figure 2. Build Settings
      The workaround has always been to set the index creation option in the build settings to 'recreate index if it already exists'.  However, we then discover the override doesn't appear in the DDL.  As Application Designer has not detected a difference between PeopleTools and the database, it has instead used the Oracle DBMS_METADATA package to generate the storage clause from the index that exists in the database.  Hence the DDL contains additional keywords not in the PeopleSoft DDL model.
      CREATE UNIQUE  INDEX PS_PSTREENODE ON PSTREENODE (SETID,
      SETCNTRLVALUE,
      TREE_NAME,
      EFFDT,
      TREE_NODE_NUM,
      TREE_NODE,
      TREE_BRANCH)
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSINDEX"
      /
      ALTER INDEX PS_PSTREENODE NOPARALLEL LOGGING
      /

      I have only checked this behaviour on PeopleTools 8.54, but use of DBMS_METADATA was introduced in PeopleTools 8.51, so this problem has probably existed since then.
      SELECT dbms_metadata.get_ddl('INDEX','PS_PSTREENODE')
      FROM dual

      DBMS_METADATA.GET_DDL('INDEX','PS_PSTREENODE')
      --------------------------------------------------------------------------------
      CREATE UNIQUE INDEX "SYSADM"."PS_PSTREENODE" ON "SYSADM"."PSTREENODE" ("SETID"
      , "SETCNTRLVALUE", "TREE_NAME", "EFFDT", "TREE_NODE_NUM", "TREE_NODE", "TREE_BRANCH")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSINDEX"

      However, if I drop the index and then regenerate the DDL script in Application Designer,
      DROP INDEX ps_pstreenode
      /
      PeopleTools generates the create index with the compression specified in the PeopleTools table.

      CREATE UNIQUE INDEX PS_PSTREENODE ON PSTREENODE (SETID,
      SETCNTRLVALUE,
      TREE_NAME,
      EFFDT,
      TREE_NODE_NUM,
      TREE_NODE,
      TREE_BRANCH) TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT 100000
      MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 0 COMPRESS 4 PARALLEL
      NOLOGGING
      /
      ALTER INDEX PS_PSTREENODE NOPARALLEL LOGGING
      /

      Rather than drop the index just so you can generate the correct script to recreate it, I suggest implementing the change manually by rebuilding the indexes.
      ALTER INDEX PSAPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
      ALTER INDEX PSBPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 3;
      ALTER INDEX PSCPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 1;
      ALTER INDEX PSDPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
      ALTER INDEX PSFPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 2;
      ALTER INDEX PSGPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 3;
      ALTER INDEX PS_PSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;

      Conclusion
      This makes the business of implementing physical attributes through Application Designer much more complicated.  I would still recommend recording the settings in Application Designer, if only because it provides documentation, but it may be easier to implement the changes manually.

      Measuring Tuxedo Queuing in the PeopleSoft Application Server

      Fri, 2015-09-04 09:13

      Why Should I Care About Queuing?
      Queuing in the application server is usually an indicator of a performance problem rather than a problem in its own right.  Requests back up on the inbound queue because the application server cannot process them as fast as they arrive.  This is usually seen on the APPQ, which is serviced by the PSAPPSRV process, but it applies to other server processes too.  Common causes include (but are not limited to):
      • Poor performance of either SQL on the database or PeopleCode executed within the application server is extending service duration
      • The application server domain is undersized for the load.  Increasing the number of application server domains or application server processes may be appropriate.  However, before increasing the number of server processes, it is necessary to ensure that the physical server has sufficient memory and CPU to support the domain (if the application server CPU is overloaded then requests move from the Tuxedo queues to the operating system run queue).
      • The application server has too many server processes per queue, causing contention in the system calls that enqueue and dequeue requests to and from the IPC queue structure.  A queue with more than 8-10 application server processes can exhibit this contention: there will be a queue of inbound requests, but not all the server processes will be busy.
      When user service requests spend time queuing in the application server, that time is part of the users' response time.  Application server queuing is generally to be avoided (although it may be the least worst alternative). 
      What you do about queuing depends on the circumstances, but it is something that you do want to know about.
      3 Ways to Measure Application Server Queuing
      There are a number of ways to detect queuing in Tuxedo:
      • Direct measurement of the Tuxedo domain using the tmadmin command-line interface.  A long time ago I wrote a shell script, tuxmon.sh.  It periodically runs the printqueue and printserver commands on an application server and extracts comma-separated data to a flat file that can then be loaded into a database.  It would have to be configured for each domain in a system.
      • Direct Measurement with PeopleSoft Performance Monitor (PPM).  Events 301 and 302 simulate the printqueue and printserver commands.  However, event 301 only works from PT8.54 (and at the time of writing I am working on a PT8.53 system).  Even then, the measurements would only be taken once per event cycle, which defaults to every 5 minutes.  I wouldn't recommend increasing the sample frequency, so this will only ever be quite a coarse measurement.
      • Indirect measurement from sampled PPM transactions.  Although this includes time spent on the return queue and time to unpack the Tuxedo message, this technique is what the rest of this article is about.
      Indirectly Measuring Application Server Queuing from Transactional Data
      Every PIA and Portal request includes a Jolt call made by the PeopleSoft servlet to the domain.  The Jolt call is instrumented in PPM as transaction 115.  Various layers in the application server are instrumented in PPM, and the highest point is transaction 400, which is where the service enters the PeopleSoft application server code.  Transaction 400 is always the immediate child of transaction 115.  The difference in the duration of these transactions is the duration of the following operations:
      • Transmit the message across the network from the web server to the JSH, over a persistent TCP socket connection.
      • Enqueue the message on the APPQ queue (including writing the message to disk if it cannot fit on the queue).
      • Time spent in the queue.
      • Dequeue the message from the queue (including reading the message back from disk if it was written there).
      • Unpack the Tuxedo message and pass the information to the service function.
      • Repeat the process for the return message back to the web server via the JSH queue (which is not shown in tmadmin).
      I am going to assume that the majority of this time is spent by the message waiting in the inbound queue, and that time spent on the other activities is negligible.  This is not strictly true, but it is good enough for practical purposes.  Any error means that I will tend to overestimate queuing.
      Some simple arithmetic can convert this duration into an average queue length.  A queue length of n means that n requests are waiting in the queue.  Each second, n seconds of queue time accumulate, so the number of seconds of queue time per second is the same as the queue length.
      I can take all the sampled transactions in a given time period and aggregate the time spent between transactions 115 and 400.  I must multiply it by the sampling ratio, and then divide it by the duration of the time period for which I am aggregating it.  That gives me the average queue length for that period.
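      As a worked example of that arithmetic (the figures are purely illustrative): suppose that in one minute the sampled transactions accumulated 90 seconds of time between transactions 115 and 400, and the PPM sampling rate is 4 (one transaction in four is sampled).

```sql
-- 4 (sampling rate) * 90s (sampled queue time) / 60s (period) = 6,
-- i.e. on average 6 requests were waiting in the queue during that minute.
SELECT 4 * 90 / 60 AS avg_queue_length FROM dual;
```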
      This query aggregates queue time across all application server domains in each system.  It would be easy to examine a specific application server, web server or time period.
      WITH c AS (
      SELECT B.DBNAME, b.pm_sampling_rate
      , TRUNC(c115.pm_agent_Strt_dttm,'mi') pm_agent_dttm
      , A115.PM_DOMAIN_NAME web_domain_name
      , SUBSTR(A400.PM_HOST_PORT,1,INSTR(A400.PM_HOST_PORT,':')-1) PM_tux_HOST
      , SUBSTR(A400.PM_HOST_PORT,INSTR(A400.PM_HOST_PORT,':')+1) PM_tux_PORT
      , A400.PM_DOMAIN_NAME tux_domain_name
      , (C115.pm_trans_duration-C400.pm_trans_duration)/1000 qtime
      FROM PSPMAGENT A115 /*Web server details*/
      , PSPMAGENT A400 /*Application server details*/
      , PSPMSYSDEFN B
      , PSPMTRANSHIST C115 /*Jolt transaction*/
      , PSPMTRANSHIST C400 /*Tuxedo transaction*/
      WHERE A115.PM_SYSTEMID = B.PM_SYSTEMID
      AND A115.PM_AGENT_INACTIVE = 'N'
      AND C115.PM_AGENTID = A115.PM_AGENTID
      AND C115.PM_TRANS_DEFN_SET=1
      AND C115.PM_TRANS_DEFN_ID=115
      AND C115.pm_trans_status = '1' /*valid transaction only*/
      --
      AND A400.PM_SYSTEMID = B.PM_SYSTEMID
      AND A400.PM_AGENT_INACTIVE = 'N'
      AND C400.PM_AGENTID = A400.PM_AGENTID
      AND C400.PM_TRANS_DEFN_SET=1
      AND C400.PM_TRANS_DEFN_ID=400
      AND C400.pm_trans_status = '1' /*valid transaction only*/
      --
      AND C115.PM_INSTANCE_ID = C400.PM_PARENT_INST_ID /*parent-child relationship*/
      AND C115.pm_trans_duration >= C400.pm_trans_duration
      ), x as (
      SELECT dbname, pm_agent_dttm
      , AVG(qtime) avg_qtime
      , MAX(qtime) max_qtime
      , c.pm_sampling_rate*sum(qtime)/60 avg_qlen
      , c.pm_sampling_rate*count(*) num_services
      FROM c
      GROUP BY dbname, pm_agent_dttm, pm_sampling_rate
      )
      SELECT * FROM x
      ORDER BY dbname, pm_agent_dttm
      • Transactions are aggregated per minute, so the queue time is divided by 60 at the end of the calculation because we are measuring time in seconds.
      Then the results from the query can be charted in Excel (see http://www.go-faster.co.uk/scripts.htm#awr_wait.xls).  This chart was taken from a real system undergoing a performance load test.


      Is this calculation and assumption reasonable?
      The best way to validate this approach would be to measure queuing directly using tmadmin.  I could also try this on a PT8.54 system, where event 301 will report the queuing.  This will have to wait for a future opportunity.
      However, I can compare queuing with the number of busy application servers as reported by PPM event 302 for the CRM database.  Around 16:28 queuing all but disappears.  We can see that there were a few idle application servers, which is consistent with the queue being cleared.  Later the queuing comes back, and most of the application servers are busy again.  So it looks reasonable.
      Application Server Activity

      PeopleTools 8.54: Performance Monitor Enhancements

      Tue, 2015-03-10 05:09
      This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
      Transaction History Search Component
      There are a number of changes:
      • You can specify multiple system identifiers.  For example, you might be monitoring Portal, HR and CRM.  Now you can search across all of them in a single search.
        • It has always been the case that when you drill into the Performance Monitoring Unit (PMU), by clicking on the tree icon, you would see the whole of a PMU that invoked services from different systems.
      • You can also specify multiple transaction types, rather than have to search each transaction type individually.
      This is a useful enhancement when searching for a specific transaction or a small number of transactions.  However, I do not think it will save you from having to query the underlying transactions table.
      PPM Archive Process
      The PPM archive process (PSPM_ARCHIVE) has been significantly rewritten in PeopleTools 8.54.  In many places, it still uses this expression to identify rows to be archived or purged:
      %DateTimeDiff(X.PM_MON_STRT_DTTM, %CurrentDateTimeIn) >= (PM_MAX_HIST_AGE * 24 * 60)
      This expands to
      ROUND((CAST(( CAST(SYSTIMESTAMP AS TIMESTAMP)) AS DATE) - CAST((X.PM_MON_STRT_DTTM) AS DATE)) * 1440, 0)
         >= (PM_MAX_HIST_AGE * 24 *  60)
      which has no chance of using an index.  This used to cause performance problems when the archive process had not been run for a while and the high water marks on the history tables had built up.

      Now the archive process works hour by hour, and this will use the index on the timestamp column.
      "... AND X.PM_MON_STRT_DTTM <= SYSDATE - PM_MAX_HIST_AGE 
      and (PM_MON_STRT_DTTM) >= %Datetimein('" | DateTimeValue(&StTime) | "')
      and (PM_MON_STRT_DTTM) <= %DateTimeIn('" | DateTimeValue(&EndTime) | "')"
      Tuxedo Queuing
      Since Performance Monitor was first introduced, event 301 has never reported the length of the inbound message queues in Tuxedo; the reported queue length was always zero.  This may have been fixed in PeopleTools 8.53, but I have only just noticed it.
      Java Management Extensions (JMX) Support
      There have been some additions to Performance Monitor that suggest that it will be possible to extract performance metrics using JMX.  The implication is that the Oracle Enterprise Manager Application Management Pack for PeopleSoft will be able to do this.  However, so far I haven't found any documentation, and the new component is not mentioned in the PeopleTools 8.54 Performance Monitor documentation.
      • New Table
        • PS_PTPMJMXUSER - keyed on PM_AGENTID
      • New Columns
        • PSPMSYSDEFAULTS - PTPHONYKEY.  So far I have only seen it set to 0.
        • PSPMAGENT - PM_JMX_RMI_PORT.  So far I have only seen it set to 1.
      • New Component

        Undocumented Application Engine Parameter: EnableAEMonitoring

        Wed, 2015-03-04 13:57
        This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

        Oracle Support Document 1640355.1: E-AE: Performance Degradation When Using Do Loop Action to Process Large Amount of Data on PT 8.52 & PT 8.53 describes a performance problem in Application Engine when a small loop is executed frequently.  Application Engine calls DBMS_APPLICATION_INFO to set MODULE and ACTION on v$session each time it goes round the loop.

        However, Oracle Bug 10130415 (latch contention on "resmgr group change latch") acknowledges a problem in some versions of Oracle, but it is resolved in 11.2.0.3 and 12c.
        Updated 8 May 2015: Oracle support do not know of any explicit link to this database bug.

        A new parameter was introduced in PT8.54 and backported to PT8.52.23 and PT8.53.13 in the Database Options section of the process scheduler configuration file (psprcs.cfg).  EnableAEMonitor controls whether Application Engine calls DBMS_APPLICATION_INFO.  The default value for this parameter is 0.

        The PeopleSoft support document does not reference the database bug report, but it seems reasonable to infer that the new parameter was introduced to work around the database bug.

        This new parameter is not described in PeopleBooks.  It does appear in the delivered configuration files on at least PT8.53.13, but it is not present in the delivered 8.54.05 configuration file (bug 21055140).  Therefore, by default, Application Engine will not set MODULE and ACTION unless you add the parameter to the configuration file.

        [Database Options]
        ;=========================================================================
        ; Database-specific configuration options
        ;=========================================================================

        ;DMK - added to enable DBMS_APPLICATION_INFO instrumentation
        EnableAEMonitoring=1
        The behaviour is then as it has been since 8.52, described in PeopleTools 8.52 Application Engine sets MODULE and ACTION.
        My Recommendation
        I certainly think that you should add this parameter to all process scheduler configuration files at relevant PeopleTools versions.  Unless you specifically have the problem described in the support note, I recommend that you also set the parameter to 1 as shown above.  I have never seen the problem in affected database versions, and it is fixed in the terminal release of 11g.

        Without setting the parameter, you will lose the ability to relate Enterprise Manager and ASH data to specific Application Engine steps.  If you need to make a code change to achieve a performance improvement, you will have to go through the manual process of finding the SQL in an Application Engine trace.
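        With the parameter set, ASH data can be related to Application Engine steps via MODULE and ACTION.  For example, a sketch (this requires the Diagnostics Pack licence; the exact MODULE/ACTION format is described in the earlier 8.52 article):

```sql
-- Profile recent ASH samples by module/action to find the Application
-- Engine steps and statements consuming the most database time
-- (roughly one sample per second per active session).
SELECT module, action, sql_id
,      COUNT(*) ash_secs
FROM   v$active_session_history
WHERE  sample_time >= SYSDATE - 1/24  -- last hour
GROUP  BY module, action, sql_id
ORDER  BY ash_secs DESC;
```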

        PeopleTools 8.54: Oracle Resource Manager

        Wed, 2015-02-25 04:11
        This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

        Oracle Resource Manager is about prioritising one database session over another, or about restricting the overhead of one session for the good of the other database users.  A resource plan is a set of rules that are applied to some or all database sessions for some or all of the time.  Those rules may be simple or complex, but they need to reflect the business's view of what is most important.  Either way, Oracle Resource Manager requires careful design.
        I am not going to attempt to further explain here how the Oracle feature works, I want to concentrate on how PeopleSoft interfaces with it.
        PeopleTools Feature
        This feature effectively maps Oracle resource plans to PeopleSoft executables.  The resource plan will then manage the database resource consumption of that PeopleSoft process.  There is a new component that maps PeopleSoft resource names to Oracle consumer groups.  For this example I have chosen some of the consumer groups in the MIXED_WORKLOAD_PLAN that is delivered with Oracle 11g.

        • The Oracle Consumer Group field is validated against the names of the Oracle consumer groups defined in the database, using this query on DBA_RSRC_PLAN_DIRECTIVES:
        SELECT DISTINCT group_or_subplan, type
        FROM dba_rsrc_plan_directives
        WHERE plan = 'MIXED_WORKLOAD_PLAN'
        ORDER BY 2 DESC,1
        /

        GROUP_OR_SUBPLAN TYPE
        ------------------------------ --------------
        ORA$AUTOTASK_SUB_PLAN PLAN
        BATCH_GROUP CONSUMER_GROUP
        INTERACTIVE_GROUP CONSUMER_GROUP
        ORA$DIAGNOSTICS CONSUMER_GROUP
        OTHER_GROUPS CONSUMER_GROUP
        SYS_GROUP CONSUMER_GROUP
        If you use Oracle SQL Trace on a PeopleSoft process (in this case PSAPPSRV) you find the following query.  It returns the name of the Oracle consumer group that the session should use.  The entries in the component shown above are stored in PS_PT_ORA_RESOURCE.
        • PS_PTEXEC2RESOURCE is another new table that maps PeopleSoft executable name to resource name.
        SELECT PT_ORA_CONSUMR_GRP 
        FROM PS_PT_ORA_RESOURCE
        , PS_PTEXEC2RESOURCE
        WHERE PT_EXECUTABLE_NAME = 'PSAPPSRV'
        AND PT_ORA_CONSUMR_GRP <> ' '
        AND PS_PT_ORA_RESOURCE.PT_RESOURCE_NAME = PS_PTEXEC2RESOURCE.PT_RESOURCE_NAME

        PT_ORA_CONSUMR_GRP
        ------------------------
        INTERACTIVE_GROUP

        And then the PeopleSoft process explicitly switches its group, thus:
        DECLARE 
        old_group varchar2(30);
        BEGIN
        DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
        END;
        Unfortunately, the consequence of this explicit switch is that it overrides any consumer group mapping rules, as I demonstrate below.
        Setup
        The PeopleSoft owner ID needs some additional privileges if it is to be able to switch to the consumer groups.
        BEGIN
        DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE
        ('SYSADM', 'ADMINISTER_RESOURCE_MANAGER',FALSE);
        END;

        BEGIN
        FOR i IN(
        SELECT DISTINCT r.pt_ora_consumr_grp
        FROM sysadm.ps_pt_ora_resource r
        WHERE r.pt_ora_consumr_grp != ' '
        AND r.pt_ora_consumr_grp != 'OTHER_GROUPS'
        ) LOOP
        dbms_output.put_line('Grant '||i.pt_ora_consumr_grp);
        DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP
        (GRANTEE_NAME => 'SYSADM'
        ,CONSUMER_GROUP => i.pt_ora_consumr_grp
        ,GRANT_OPTION => FALSE);
        END LOOP;
        END;
        /

        The RESOURCE_MANAGER_PLAN initialisation parameter should be set to the name of the plan which contains the directives.
        NAME                                 TYPE        VALUE
        ------------------------------------ ----------- ----------------------
        resource_manager_plan string MIXED_WORKLOAD_PLAN

        I question one or two of the mappings on PS_PTEXEC2RESOURCE.
        SELECT * FROM PS_PTEXEC2RESOURCE …

        PT_EXECUTABLE_NAME               PT_RESOURCE_NAME
        -------------------------------- -----------------
        PSAPPSRV                         APPLICATION SERVE
        PSQED                            MISCELLANEOUS
        PSQRYSRV                         QUERY SERVER

        • PSNVS is the nVision Windows executable.  It is in PeopleTools resource MISCELLANEOUS.  This is nVision running in 2-tier mode.  I think I would put nVision into the same consumer group as query.  I can't see why it wouldn't be possible to create new PeopleSoft consumer groups and map them to certain executables.  nVision would be a candidate for a separate group. 
          • For example, one might want to take a different approach to parallelism in GL reporting having partitioned the LEDGER tables by FISCAL_YEAR and ACCOUNTING_PERIOD
        • PSQED is also in MISCELLANEOUS.  Some customers use it to run PS/Query in 2-tier mode, and allow some developers to use it to run queries.  Perhaps it should also be in the QUERY SERVER group.
        Cannot Mix PeopleSoft Consumer Group Settings with Oracle Consumer Group Mappings
        I would like to be able to blend the PeopleSoft configuration with the ability to automatically associate Oracle consumer groups with specific values of MODULE and ACTION.  Purely as an example, I am trying to move the Process Monitor component into the SYS_GROUP consumer group.
        BEGIN
        DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

        DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
        (attribute => 'MODULE_NAME'
        ,value => 'PROCESSMONITOR'
        ,consumer_group => 'SYS_GROUP');
        DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
        END;
        /

        However, it doesn't work because the explicit setting overrides any rules, and you cannot prioritise other rules above explicit settings.
        exec dbms_application_info.set_module('PROCESSMONITOR','PMN_PRCSLIST');
        SELECT REGEXP_SUBSTR(program,'[^.@]+',1,1) program
        , module, action, resource_consumer_group
        FROM v$session
        WHERE module IN('PROCESSMONITOR','WIBBLE')
        ORDER BY program, module, action
        /

        So I have created a new SQL*Plus session, set the module/action, and the session has automatically moved into SYS_GROUP.  Meanwhile, I have been into the Process Monitor in the PIA; the module and action of the PSAPPSRV sessions have been set, but they remain in the interactive group.
        PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
        ---------------- ---------------- ---------------- ------------------------
        PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
        PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
        sqlplus PROCESSMONITOR PMN_PRCSLIST SYS_GROUP

        If I set the module to something that doesn't match a rule, the consumer group goes back to OTHER_GROUPS which is the default. 
        exec dbms_application_info.set_module('WIBBLE','PMN_PRCSLIST');

        PROGRAM MODULE ACTION RESOURCE_CONSUMER_GROUP
        ---------------- ---------------- ---------------- ------------------------
        PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
        PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
        sqlplus WIBBLE PMN_PRCSLIST OTHER_GROUPS

        Now, if I explicitly set the consumer group exactly as PeopleSoft does my session automatically moves into the INTERACTIVE_GROUP.
        DECLARE 
        old_group varchar2(30);
        BEGIN
        DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
        END;
        /

        PROGRAM MODULE ACTION RESOURCE_CONSUMER_GROUP
        ---------------- ---------------- ---------------- ------------------------
        PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
        PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
        sqlplus WIBBLE PMN_PRCSLIST INTERACTIVE_GROUP

        Next, I will set the module back to match the rule, but the consumer group doesn't change because the explicit setting takes priority over the rules.
        PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
        ---------------- ---------------- ---------------- ------------------------
        PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
        PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
        sqlplus          PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
        You can rearrange the priority of the other rule settings, but explicit must have the highest priority (if you try to change it, you will get ORA-56704). So, continuing with this example, I cannot assign a specific component to a different consumer group while the delivered PeopleSoft configuration for PSAPPSRV is in place.
        Instead, I could create a rule that assigns a consumer group to PSAPPSRV via the program name, and a higher-priority rule that overrides it when the module and/or action is set to a specific value.  However, first I have to disengage the explicit consumer group change for PSAPPSRV by updating the row in PTEXEC2RESOURCE so it no longer references a valid resource group.
        UPDATE ps_ptexec2resource
        SET    pt_resource_name = 'DO_NOT_USE'
        WHERE  pt_executable_name = 'PSAPPSRV'
        AND    pt_resource_name = 'APPLICATION SERVER'
        /
        COMMIT
        /
        BEGIN
          DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
        END;
        /
        BEGIN
          DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
            (attribute      => 'CLIENT_PROGRAM'
            ,value          => 'PSAPPSRV'
            ,consumer_group => 'INTERACTIVE_GROUP');

          DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
            (attribute      => 'MODULE_NAME'
            ,value          => 'PROCESSMONITOR'
            ,consumer_group => 'SYS_GROUP');

          DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING_PRI(
             explicit              => 1,
             oracle_user           => 2,
             service_name          => 3,
             module_name_action    => 4,  -- note: higher than module alone
             module_name           => 5,  -- note: higher than program
             service_module        => 6,
             service_module_action => 7,
             client_os_user        => 8,
             client_program        => 9,  -- note: lower than module
             client_machine        => 10
          );
          DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
        END;
        /
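        After submitting the pending area, the mappings and their precedence can be verified from the data dictionary (a quick sanity check, not part of the configuration itself):

```sql
REM current attribute-to-group mapping rules
SELECT attribute, value, consumer_group
FROM   dba_rsrc_group_mappings
ORDER  BY attribute;

REM precedence of the mapping attributes (1 = highest)
SELECT attribute, priority
FROM   dba_rsrc_mapping_priority
ORDER  BY priority;
```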
        So, you have to choose between the PeopleSoft configuration and the Oracle Resource Manager mapping configuration.  Which is appropriate depends on your requirements, and it is a decision you will have to take when you design your resource management.  Of course, in versions of PeopleTools prior to 8.54, the mapping approach is the only option.

        Conclusion
        I have never seen Oracle Resource Manager used with PeopleSoft, probably because setting it up is not trivial, and it is then difficult to test the resource plan.  I think this enhancement is a great start that makes it very much easier to implement Oracle Resource Manager on PeopleSoft.  However, I think we need more granularity.
        • I would like to be able to put specific processes run on the Process Scheduler into specific consumer groups by name.  For now, you could do this with a trigger on PSPRCSRQST that fires on process start-up and makes an explicit consumer group change (and, for Application Engine, switches it back on completion).
        • I would like the ability to set different resource groups for the same process name in different application server domains.  For example,
          • I might want to distinguish between PSQRYSRV processes used for ad-hoc PS/Queries on certain domains from PSQRYSRVs used to support nVision running in 3-tier mode on other domains.
          • I might have different PIAs for back-office and self-service users going to different application servers.  I might want to prioritise back-office users over self-service users.
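        The trigger workaround suggested above might be sketched as follows.  This is hypothetical: NVSRUN and NVISION_GROUP are illustrative names, and the exact status test would depend on your PeopleTools version and resource plan.  It relies on the process updating its own request row to "Processing" at start-up, so the trigger fires in the process's own session.

```sql
-- Hypothetical sketch: move a named process into a specific consumer
-- group when it marks its own request row as processing.
CREATE OR REPLACE TRIGGER set_prcs_consumer_group
AFTER UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = '7' AND new.prcsname = 'NVSRUN')  -- 7 = Processing; NVSRUN is illustrative
DECLARE
  old_group VARCHAR2(30);
BEGIN
  DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('NVISION_GROUP', old_group, FALSE);
EXCEPTION
  WHEN OTHERS THEN NULL;  -- never let resource management abort the process
END;
/
```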
        Nonetheless, I warmly welcome the new support for Oracle Resource Manager in PeopleTools.  It is going to be very useful for RAC implementations, and I think it will be essential for multi-tenant implementations where different PeopleSoft product databases are plugged into the same container database.
