Friday, November 19, 2010

Utility adrepgen is used to compile Reports. Syntax is given below:

$ adrepgen userid=apps/<apps password> source=$PRODUCT_TOP/srw/filename.rdf dest=$PRODUCT_TOP/srw/filename.rdf stype=rdffile dtype=rdffile logfile=x.log overwrite=yes batch=yes dunit=character
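For instance, to recompile a hypothetical custom report in place (the $XXCUS_TOP product top, file name, and log path are illustrative only):

$ adrepgen userid=apps/<apps password> source=$XXCUS_TOP/srw/XXCUSREP.rdf dest=$XXCUS_TOP/srw/XXCUSREP.rdf stype=rdffile dtype=rdffile logfile=/tmp/xxcusrep.log overwrite=yes batch=yes dunit=character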


Report Review Agent (RRA), also referred to by its executable name FNDFS, is the default text viewer in Oracle Applications 11i for viewing output files and log files. Many apps DBAs are unclear about the difference between the Reports Server and RRA: RRA (FNDFS) only transfers log and output files to the viewer, while the Reports Server (rwmts60) actually runs the reports. To check whether the Reports Server process is running:

ps -ef | grep rwmts60
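The Reports Server can also be controlled with the standard 11i service script adrepctl.sh; a sketch, assuming the usual scripts directory layout (check which options your version supports):

$ cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
$ adrepctl.sh status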

The JSP compiler script ojspCompile.pl is located under $JTF_TOP/admin/scripts. A sample compilation command (the --compile --quiet form commonly used to precompile all JSPs) is:

$ cd $JTF_TOP/admin/scripts
$ perl ojspCompile.pl --compile --quiet

tuning from oracle apps view

$IAS_ORACLE_HOME/Apache/Apache/conf/httpd.conf: all Apache-related parameters are stored here
$IAS_ORACLE_HOME/Apache/Jserv/logs: all Java (JServ) related errors are written here

jserv.conf: all Java-related parameters, such as how many JVMs are needed and the debug level (under $IAS_ORACLE_HOME/Apache/Jserv/etc)
jserv.properties: how much memory is to be allocated to the JVMs (also under $IAS_ORACLE_HOME/Apache/Jserv/etc)
$IAS_CONFIG_HOME: httpd.pid, httpd_pls.pid; these two PID files are deleted when Apache is down
$IAS_ORACLE_HOME/Apache/Apache/logs: error_log, error_log_pls; check here for common errors
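To keep an eye on these logs in real time (the grep pattern is only an example):

$ tail -f $IAS_ORACLE_HOME/Apache/Apache/logs/error_log
$ grep -i exception $IAS_ORACLE_HOME/Apache/Jserv/logs/jserv.log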

If Forms does not open after the login page authenticates you, there is usually a problem with the PL/SQL (mod_plsql) component.
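A quick sanity check for the PL/SQL listener is the standard FND_WEB.PING page; substitute your own host, port, and SID:

http://<hostname>.<domain>:<port>/pls/<SID>/FND_WEB.PING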

oraInventory

Global Inventory?
The Global Inventory holds information about the Oracle products on a machine. These products can be various Oracle components such as the database, Oracle Application Server, Collaboration Suite, SOA Suite, Forms & Reports, or Discoverer Server. The Global Inventory location is determined by the file oraInst.loc in /etc (on Linux) or /var/opt/oracle (on Solaris). If you want to see the list of Oracle products on a machine, check the file inventory.xml under ContentsXML in oraInventory. (Note: if you have multiple Global Inventories on the machine, check all oraInventory directories.)

You will see an entry like:

<HOME NAME="ORA10g_HOME" LOC="/u01/oracle/10.2.0/db" TYPE="O" IDX="1"/>
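The oraInst.loc file itself is just two lines pointing at the inventory directory and the install group; for example (paths illustrative):

inventory_loc=/u01/app/oraInventory
inst_group=dba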



Local Inventory
The inventory inside each Oracle home is called the local inventory, or oracle_home inventory. This inventory holds information relevant to that oracle_home only.

Can I have multiple Global Inventories on a machine?

A quite common question is whether you can have multiple Global Inventories, and the answer is yes, you can. However, if you are upgrading or applying a patch, make sure the inventory pointer oraInst.loc refers to the respective location. If you are using a single Global Inventory and you wish to uninstall any software, remove it from the Global Inventory as well.

What to do if my Global Inventory is corrupted?
No need to worry if your Global Inventory is corrupted; you can recreate the Global Inventory on the machine using the Universal Installer and attach the already installed Oracle homes with the option
-attachHome

./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc \
ORACLE_HOME="<Oracle_Home_Location>" ORACLE_HOME_NAME="<Oracle_Home_Name>" \
CLUSTER_NODES="{}"

opatch apply ... -invPtrLoc [oraInst.loc location]:

Specifies the location of the oraInst.loc file. The -invPtrLoc option is needed with opatch when it was also used during installation. Oracle recommends using the default central inventory for a platform.
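For example, applying a patch against a home registered in a non-default inventory (the oraInst.loc path shown is a placeholder):

$ opatch apply -invPtrLoc /u01/oracle/10.2.0/db/oraInst.loc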

Thursday, November 18, 2010

explain plan with sql id

dbms_xplan.display_cursor(
sql_id IN VARCHAR2 DEFAULT NULL,
cursor_child_no IN INTEGER DEFAULT 0,
format IN VARCHAR2 DEFAULT 'TYPICAL')
RETURN dbms_xplan_type_table PIPELINED;
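For example, to display the plan of a statement still in the cursor cache (the sql_id value is a placeholder):

SELECT * FROM TABLE(dbms_xplan.display_cursor('abcd1234efgh5', 0, 'TYPICAL'));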

Format and display the contents of the execution plan of a stored SQL statement in the AWR
dbms_xplan.display_awr(
sql_id IN VARCHAR2,
plan_hash_value IN INTEGER DEFAULT NULL,
db_id IN INTEGER DEFAULT NULL,
format IN VARCHAR2 DEFAULT 'TYPICAL')
RETURN dbms_xplan_type_table PIPELINED;
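And the AWR variant, again with a placeholder sql_id:

SELECT * FROM TABLE(dbms_xplan.display_awr('abcd1234efgh5'));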

tkprof

tkprof ora_12345.trc output.txt explain=scott/tiger

The following tkprof options are optional. Their meaning is:
explain=username/password: show an execution plan.
table=schema.tablename: use this table for the explain plan.
print=integer: restrict the number of SQL statements shown.
insert=filename: show SQL statements and the data within SQL statements.
sys=no: don't show statements executed under the SYS schema. Most of the time these are recursive SQL statements that are less interesting.
aggregate=no: don't aggregate SQL statements that are executed more than once.
sort=: sort the SQL statements. The option is made up of two parts:
Part 1:
prs Sort on parse values
exe Sort on execute values
fch Sort on fetch values

Part 2:
cnt Sort on number of calls
cpu Sort on CPU usage
ela Sort on elapsed time
dsk Sort on disk reads
qry Sort on consistent reads
cu Sort on current reads
mis Sort on library cache misses
row Sort on number of processed rows

Example:
sort=(prsela,exeela,fchela) (VAX VMS)
sort='(prsela,exeela,fchela)' (UNIX)
Sort on elapsed parse time, elapsed execute time, and elapsed fetch time.
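Putting the options together, a typical invocation that sorts by elapsed times and suppresses SYS recursive SQL (trace and output file names are placeholders):

$ tkprof ora_12345.trc output.txt sys=no sort='(prsela,exeela,fchela)' explain=scott/tiger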

oracle versions

11g:

11.2.0.1

11.1.0.7
11.1.0.6

10g:

10.2.0.4
10.2.0.3
10.2.0.2
10.2.0.1

10.1.0.5
10.1.0.4

grid control:

CRS:


$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.3.0]
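To compare against the installed software version on a node:

$ crsctl query crs softwareversion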

wait events

db file scattered read: multiblock reads scattered into the buffer cache, typically a full table scan (or an index fast full scan).

db file sequential read
In most cases, this event means that a foreground process reads a single block (because it reads a block from an index or because it reads a block by rowid).

log file sync:

The log file sync Oracle metric indicates the process is waiting for LGWR to finish flushing the log buffer to disk. This occurs when a user commits a transaction (a transaction is not considered committed until all of the redo to recover the transaction has been successfully written to disk).

Log Buffer Space


Log Buffer Space wait event occurs when server processes write data into the log buffer faster than the LGWR process can write it out. The LGWR process begins writing entries to the online redo log file if any of the following conditions are true:

* The log buffer reaches the _log_io_size threshold. By default, this parameter is set to one third of the log buffer size.
* A server process performing a COMMIT or ROLLBACK posts to the LGWR process.
* The DBWR process posts to the LGWR process before it begins writing.

As the LGWR process writes entries to disk, user processes can reuse the space in the log buffer for new entries. If the log buffer is too small, user processes wait on "Log Buffer Space" until LGWR flushes the redo information in memory to disk.

Free buffer waits:

The Free Buffer Waits Oracle metric wait event indicates that a server process was unable to find a free buffer and has posted the database writer to make free buffers by writing out dirty buffers. A dirty buffer is a buffer whose contents have been modified. Dirty buffers are freed for reuse when DBWR has written the blocks to disk.
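To see which of these events dominate on your instance, a simple query against v$system_event helps (a minimal sketch; time_waited in these columns is in centiseconds):

SELECT event, total_waits, time_waited
FROM v$system_event
WHERE event IN ('db file scattered read', 'db file sequential read',
                'log file sync', 'log buffer space', 'free buffer waits')
ORDER BY time_waited DESC;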

Wednesday, November 17, 2010

shared appl_top

The shared file system infrastructure is not supported on application tier server nodes running Windows platforms or using Oracle9iAS releases earlier than 1.0.2.2.2. Application tier nodes must be running on the same operating system to implement a shared file system.

* Section 1: Overview
General concepts about sharing the application tier file system.

* Section 2: Installing a shared application tier file system with Rapid Install
Instructions for using Rapid Install to install a new system with a shared application tier file system.

* Section 3: Sharing an existing Applications file system
How to create a shared application tier file system in an existing Oracle Applications 11i installation.

* Section 4: Merging existing APPL_TOPs
How to merge existing APPL_TOPs into a single APPL_TOP in preparation for setting up a shared file system.

* Section 5: Adding a node when using a shared file system
How to add a node to an Applications system using shared files.

* Section 6: Maintaining a shared application tier file system
Describes the impact of a shared application tier file system on system administration.

* Section 7: References
Additional reference information related to sharing the application tier file system.


Section 1: Overview

A traditional multi-node installation of Release 11i requires each application tier node to maintain its own application tier file system, consisting of the APPL_TOP file system (APPL_TOP and COMMON_TOP directories) and the application tier technology stack file system (8.0.6 and iAS Oracle homes).

In 2003, the "shared APPL_TOP" architecture was introduced, which allowed the sharing of a single APPL_TOP file system among all the application tier nodes of a multi-node system. In Release 11.5.10, the ability to share the APPL_TOP file system has been combined with the ability to share the application tier technology stack file system, resulting in the "shared application tier file system" architecture.

In a shared file system, all application tier files are installed on a single shared disk resource that is mounted from each application tier node. Any application tier node can be configured to perform any of the standard application tier services, such as serving forms or web pages, or concurrent processing. All changes made to the shared file system are immediately accessible to all the application tier nodes.

If your system does not already have a single APPL_TOP, you can choose either to share each individual APPL_TOP, or to merge the APPL_TOPs into a single APPL_TOP. Sharing the individual APPL_TOPs will not provide the complete benefits of a shared file system, but does let you immediately scale an existing node type by adding more hardware resources. Merging APPL_TOPs allows the full benefits of the shared file system, and is described in detail in Section 4.

Please note that all machines sharing the file system must be configured to run the same Operating System with the same OS patch level.

Note
User ID and group ID should be consistent between nodes in a shared file system to avoid file access permission issues.
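A quick way to verify this is to compare the APPLMGR account on every node (the user name and ID values shown are illustrative); the uid and gid must match across all nodes sharing the file system:

$ id applmgr
uid=501(applmgr) gid=500(dba) groups=500(dba)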
Section 2: Installing a shared application tier file system with 11.5.10.2 Rapid Install

Rapid Install configures multi-node systems to use a shared application tier file system as the default. Before you run Rapid Install, you must do the following:

1. Determine the shared file system mount points
Rapid Install must be able to access the file system from the same mount point on all the application tier nodes. For example:

* Shared COMMON_TOP: /d01/shared_mount/viscomn
* Shared APPL_TOP: /d01/shared_mount/visappl
* Shared 806 ORACLE_HOME: /d01/shared_mount/visora/8.0.6
* Shared iAS ORACLE_HOME: /d01/shared_mount/visora/iAS

2. Specify mount points in Rapid Install
When prompted by the Rapid Install wizard, define the paths to be shared to be the same for each node in the 11i Applications system. The following paths must be the same on each node where the Rapid Install is executed:
* APPL_TOP
* COMMON_TOP
* 8.0.6 ORACLE_HOME
* iAS 1.0.2.2.2 ORACLE_HOME

Section 3: Sharing an existing Applications file system


Use the instructions in this section to enable a shared application tier file system in an existing Applications 11i system.

1. Verify software versions
Ensure that these minimum versions are present in your system.
Software / Minimum Version / Details:

* Oracle Universal Installer: 2.2. Apply patch 5035661 to each iAS ORACLE_HOME in the application tier.
* TXK AutoConfig Templates: RUP O or higher. Apply patch 5478710 to each APPL_TOP in the application tier.
* TXK Technology Stack: Minipack B. Apply patch 3219567 to each APPL_TOP in the application tier.
* AD: Mini-pack I or higher. Apply patch 4038964.
* iAS Oracle HTTP Server: Rollup 5 or higher. Apply the latest critical patch update available for Oracle E-Business Suite 11i. Refer to the section "Oracle HTTP Server Patches for Oracle E-Business Suite 11i" for instructions.
* Discoverer: 4.1.48.08. See OracleMetaLink Doc ID 139516.1.
* Reports 6i: Developer 6i PS 17 or higher. Refer to OracleMetaLink Note 125767.1 for instructions.
* Zip: 2.3. The Zip utility is available for download from http://www.info-zip.org/Zip.html and the Unzip utility from http://www.info-zip.org/UnZip.html.


2. Update System Configuration

1. Implement AutoConfig
If the Applications system was created with Rapid Install version 11.5.5 or earlier, and you have not migrated to AutoConfig, you must do so now. Refer to document 165195.1 on OracleMetaLink for instructions. After you implement AutoConfig, omit the remaining tasks in this step and go directly to Step 4.

2. Rename the Applications context file
If the Applications system was created with Rapid Install version 11.5.8 or earlier, you must regenerate the Applications context file:

$ cd <COMMON_TOP>/clone/bin
$ perl adclonectx.pl sharedappltop \
contextfile=<absolute path of the existing Applications context file>

Running adclonectx.pl creates a new Applications context file named <SID>_<hostname>.xml.

3. Run AutoConfig to enable shared file system on the primary node
Run the following commands on the primary node where you are enabling shared file system support. After performing this step, re-load your environment settings.


$ cd <COMMON_TOP>/admin/scripts/<CONTEXT_NAME>
$ adstpall.sh <apps username>/<apps password>
$ cd <FND_TOP>/patch/115/bin
$ perl -I <AU_TOP>/perl txkSOHM.pl



Note
The configuration top directory you specify as an input parameter to the shared file system script (txkSOHM.pl) cannot be shared among multiple application middle tier nodes.



The script prompts for the following information:
(Prompt: Response. Comment.)

* Absolute path of Applications context file: <APPL_TOP>/admin/<CONTEXT_NAME>.xml. The name of the Applications context file, including the path.
* Type of Instance: primary. The primary node is the node where the 8.0.6 and iAS ORACLE_HOMEs are installed.
* Absolute path of 8.0.6 shared Oracle home: <8.0.6 ORACLE_HOME location>. Location of the 8.0.6 Oracle home.
* Absolute path of iAS shared Oracle home: <iAS ORACLE_HOME location>. Location of the iAS Oracle home.
* Absolute path of config top: <configuration top location>. Location in which the iAS and 8.0.6 instance-specific configuration files should be stored. Specify a secure location for these files; for example, choose a local directory for storing the configuration files.
* Oracle Applications APPS schema password.

3. Make the Applications files accessible
Mount the shared disk to all application tier nodes.

Note
You must retain the same absolute path for the mount points of the shared APPL_TOP, COMMON_TOP, and ORACLE_HOMEs on each node.
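Purely as an illustration, an NFS entry in /etc/fstab on each application tier node might look like this (server, export, and mount options are hypothetical; follow your storage vendor's recommendations):

nfssrv:/export/apps  /d01/shared_mount  nfs  rw,hard,nointr,rsize=32768,wsize=32768  0 0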

4. Run AutoConfig to enable shared file system on the secondary nodes

Run the following commands on each secondary node where you are enabling shared file system support. After performing this step, re-load your environment settings.

$ cd <FND_TOP>/patch/115/bin
$ perl -I <AU_TOP>/perl txkSOHM.pl

Note
The configuration top directory you specify as an input parameter to the shared file system script (txkSOHM.pl) cannot be shared among multiple application middle tier nodes.



The script prompts for the following information:
(Prompt: Response. Comment.)

* Absolute path of Applications context file: <APPL_TOP>/admin/<CONTEXT_NAME>.xml. The name of the Applications context file, including the path.
* Type of Instance: secondary. (The primary node is the node where the 8.0.6 and iAS ORACLE_HOMEs are installed; all other nodes are secondary.)
* Absolute path of 8.0.6 shared Oracle home: <8.0.6 ORACLE_HOME location>. Location of the 8.0.6 Oracle home.
* Absolute path of iAS shared Oracle home: <iAS ORACLE_HOME location>. Location of the iAS Oracle home.
* Absolute path of config top: <configuration top location>. Location in which the iAS and 8.0.6 instance-specific configuration files should be stored. Specify a secure location for these files; for example, choose a local directory for storing the configuration files.
* Oracle Applications APPS schema password.

Section 4: Merging existing APPL_TOPs into a single APPL_TOP

In preparation for sharing a file system in an existing system with multiple APPL_TOPs, you can use the Rapid Clone utility to merge the multiple APPL_TOP (and COMMON_TOP) file systems into a single APPL_TOP file system. See OracleMetaLink Doc ID 230672.1 for more information. Oracle recommends merging the different APPL_TOPs into a completely new APPL_TOP to facilitate rollback in case of a merge failure.

1. Maintain snapshot information
Log in to each application tier node as the APPLMGR user and run the maintain snapshot task in AD Administration. See Oracle Applications Maintenance Utilities for more information.

2. Merge existing APPL_TOPs

1. Prepare the source system application tier for merging
Choose one of the source system nodes to be the primary node. This document refers to it as "Node A".
* Log in to Node A as the APPLMGR user and run:

$ cd <COMMON_TOP>/admin/scripts/<CONTEXT_NAME>
$ perl adpreclone.pl appsTier merge

* Log in as the APPLMGR user to each of the secondary nodes being merged and run:

$ cd <COMMON_TOP>/admin/scripts/<CONTEXT_NAME>
$ perl adpreclone.pl appltop merge

2. Create a copy of Node A (preferred)
If you want to place the merged APPL_TOPs in a new location instead of using an existing APPL_TOP location, create a copy of Node A (the other nodes in the system do not need to be copied). Include the following directories from Node A:


<APPL_TOP>
<COMMON_TOP>/util
<COMMON_TOP>/clone
<COMMON_TOP>/_pages (when this directory exists)
<806 ORACLE_HOME>
<iAS ORACLE_HOME>


Note
See document 230672.1 on OracleMetaLink for complete instructions on cloning an existing APPL_TOP.

3. Copy the required files for merging
Log in as the APPLMGR user to each source node and recursively copy:

* directory <COMMON_TOP>/clone/appl
- to -
* directory <COMMON_TOP>/clone/appl on Node A
(or the copy of Node A).

Note
Do not copy any other directories from these nodes.

4. Configure the application tier server nodes
* Shut down all services on each application tier node, including Node A. The database and its listener should remain up.
* Log in to the merged APPL_TOP node as the APPLMGR user and execute the following commands:

$ cd <COMMON_TOP>/clone/bin
$ perl adcfgclone.pl appsTier

5. Finishing tasks
* Log in to the target system application tier node as the APPLMGR user.
* Run the following tasks in adadmin for all products:
o generate JAR files
o generate message files
o relink executables
o copy files to destination
* Depending on which tier you chose as the primary node, certain files may be missing. Run adadmin to verify files required at runtime. If any files are listed as missing files, you must manually copy them to the merged APPL_TOP.
* Remove the temporary directory <COMMON_TOP>/clone/appl to reduce disk space usage.

Section 5: Adding a node to a shared file system
This section describes how to add a node to a shared application tier file system.

1. Prepare existing node

1. Execute Rapid Clone on existing node
Log in to the node that is sharing the file system as the APPLMGR user and execute the following commands:

$ cd <COMMON_TOP>/admin/scripts/<CONTEXT_NAME>
$ perl adpreclone.pl appsTier

2. Make the Applications files accessible
Mount the shared file system disk to the node that you want to add.

Note
You must retain the same absolute path for the mount points of the shared file system on each node

3. Copy the Java signature database
Oracle Applications Java and JAR files are signed using digital keys stored in the Java key storage database. If the file identitydb.obj exists in the APPLMGR user's home directory ($HOME/identitydb.obj), copy it from the source node to the new target node.

Note
Skip this step if your AD Utilities use jdk 1.3 or jdk 1.4. You can detect the version used by the AD Utilities with the command $ADJVAPRG -version.

4. Configure the node you want to add
Log in to the node that you want to add as the APPLMGR user and execute the following commands:

$ cd <COMMON_TOP>/clone/bin
$ perl adclonectx.pl sharedappltop \
contextfile=<absolute path of the existing Applications context file>

$ cd <FND_TOP>/patch/115/bin
$ perl -I <AU_TOP>/perl txkSOHM.pl

Note
The configuration top directory you specify as an input parameter to the shared file system script (txkSOHM.pl) cannot be shared among multiple application middle tier nodes.

The script prompts for the following information:

(Prompt: Response. Comment.)

* Absolute path of Applications context file: <APPL_TOP>/admin/<CONTEXT_NAME>.xml. The name of the Applications context file, including the path.
* Type of Instance: secondary. For all additional nodes mounting a shared file system, make sure to use secondary.
* Absolute path of 8.0.6 shared Oracle home: <8.0.6 ORACLE_HOME location>. Location of the 8.0.6 Oracle home.
* Absolute path of iAS shared Oracle home: <iAS ORACLE_HOME location>. Location of the iAS Oracle home.
* Absolute path of config top: <configuration top location>. Location in which the iAS and 8.0.6 instance-specific configuration files should be stored. Specify a secure location for these files; for example, choose a local directory for storing the configuration files.
* Oracle Applications APPS schema password.

Note
If SQL*Net Access security is enabled in the existing system (enabled by default from 11i10), you first need to authorize the new node to access the database through SQL*Net. See "Managed SQL*Net Access from Hosts" in document 281758.1 on OracleMetaLink for instructions on how to achieve this from OAM.

Section 6: Notes on maintaining a shared application tier file system

This section describes how a shared application tier file system affects system administration.

* Application tier environment file
The APPSORA.env environment file is not used in a shared application tier file system; the APPS<CONTEXT_NAME>.env environment file is used instead. Each application tier node has its own unique environment file, located in the shared APPL_TOP. When setting up the environment for a particular node, you must ensure that the file specific to that node is used.

* Applying Applications patches
Applications patches only need to be applied once, on any one of the application tier nodes. The patched files are immediately accessible to all the other application tier nodes.

In addition, Distributed AD can be used to enhance the patch application process of Applications patches by taking advantage of the shared application tier file system.

The AutoConfig utility must be run on all the application tier nodes if you apply patches from TXK/ADX or any other product that potentially brings in a configuration change.

* Applying application tier technology stack patches
When applying a patch to the application tier technology stack (the shared 8.0.6 or iAS Oracle homes), you only need to apply it to one application tier node, but that node must be the primary application tier node. Patched files are immediately accessible to all the secondary application tier nodes.

* Apache Lock and OPM Mutex Files
Oracle HTTP Server and the Oracle Process Manager processes create temporary lock files for their internal operations. The locations of these lock files are specified in httpd.conf by the LockFile and OpmMtxFile directives respectively. You must ensure that the value of the AutoConfig variable s_lock_pid_dir is set to a location on the local file system, to avoid file locking issues on the network file system.
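As an illustration, the relevant httpd.conf entries would then point at a local directory rather than the shared mount (the paths and file names are hypothetical):

LockFile /u01/local/apache_logs/httpd.lock
OpmMtxFile /u01/local/apache_logs/opm.mtx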


* Recommendations for Network Attached Storage or NFS File Systems
Refer to Oracle's published guidelines about using NFS and NAS devices for Oracle software or database files.

Note

Oracle Cluster File System (OCFS2) for Linux is not certified with Oracle E-Business Suite Release 11i application tier file systems.

Oracle Cluster File System (OCFS2) for Linux is certified with Oracle E-Business Suite Release 12 application tier file systems. For information about using OCFS2 with Oracle E-Business Suite Release 12, see Note 384248.1.

Oracle Cluster File System for Windows is not certified with application tier file systems for either Oracle E-Business Suite Release 11i or Oracle E-Business Suite Release 12.