Channel: ckim – Oracle RAC, Virtualization and Exadata Expert

Collaborate 2014 – Extreme Oracle DB Infrastructure As A Service Paper


Abstract

Learn the secrets of the trade to rapidly provision Oracle DB Infrastructure-As-A-Service. This extreme session will cover delivering Linux-As-A-Service, RAC-As-A-Service, ASM-As-A-Service, Database-As-A-Service, Backup-As-A-Service, and even Data-Guard-As-A-Service. Advanced techniques to deploy enterprise RAC and non-RAC databases in an automated fashion will be shared. Save days or even weeks of deployment time by attending this session. There is no reason why you, as a DBA or architect, should not be able to deploy a fully patched RAC environment from bare-metal Linux and create a RAC database in less than one hour. Anyone deploying RAC or even non-RAC systems will learn the secret sauce for deploying mission-critical systems in a way that is repeatable and consistent. Learn to deploy a fully patched (11.2.0.3 or 11.2.0.4 with PSU x, or 12.1 with PSU x) two-node RAC in less than one hour.

Learn how to automate database builds and to leverage golden image database templates.

We can’t forget about multi-tenant deployment of Oracle 12c Pluggable Databases. Learn how to deploy pluggable databases (PDBs), migrate PDBs, and significantly increase your database consolidation density.
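As a taste of what the session covers, creating and later unplugging a PDB takes only a few statements. A minimal sketch follows; the PDB name, admin user, password, and file paths are all hypothetical examples, not values from the session.

```shell
# Hypothetical sketch: create a PDB from the seed, open it, then
# unplug it for migration to another CDB. All names/paths are examples.
sqlplus / as sysdba <<EOF
CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdbadmin IDENTIFIED BY Welcome1
  FILE_NAME_CONVERT = ('/u02/oradata/CDB1/pdbseed/', '/u02/oradata/CDB1/pdb1/');
ALTER PLUGGABLE DATABASE pdb1 OPEN;

-- Later, to migrate: close the PDB and unplug it into an XML manifest
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
EOF
```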

The details of adding nodes to an existing cluster and removing nodes from the cluster will also be disseminated.

Collaborate 2014 – Extreme Oracle DB Infrastructure As A Service.pdf


Applying the April 2015 PSU to Oracle 12.1.0.2


First, download and apply patch 6880880 (the latest version of OPatch) from support.oracle.com or from https://updates.oracle.com/download/6880880.html
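Updating OPatch is a matter of replacing the OPatch directory in the Oracle Home. A rough sketch of the steps, assuming the zip was downloaded to /tmp (the zip file name varies by version and platform; use the one you actually downloaded):

```shell
# Hypothetical sketch: refresh OPatch in the database home.
# The zip name below is illustrative only.
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_2
cd $ORACLE_HOME
mv OPatch OPatch.$(date +%Y%m%d)                 # keep the old version, just in case
unzip -q /tmp/p6880880_121010_Linux-x86-64.zip   # extracts a fresh OPatch/ directory
$ORACLE_HOME/OPatch/opatch version               # confirm the new version
```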

Download the latest PSU from Doc ID 756671.1. Unzip the PSU, cd to the directory, and check for one-off patch conflict detection and resolution.

[oracle@dal66a 20299023]$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.1.0.1.7
Copyright (c) 2015, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_2/oraInst.loc
OPatch version    : 12.1.0.1.7
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0/dbhome_2/cfgtoollogs/opatch/opatch2015-05-16_17-46-46PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[oracle@dal66a 20299023]$ opatch lsinv
Oracle Interim Patch Installer version 12.1.0.1.7
Copyright (c) 2015, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_2/oraInst.loc
OPatch version    : 12.1.0.1.7
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0/dbhome_2/cfgtoollogs/opatch/opatch2015-05-16_17-46-58PM_1.log

Lsinventory Output file location : /u01/app/oracle/product/12.1.0/dbhome_2/cfgtoollogs/opatch/lsinv/lsinventory2015-05-16_17-46-58PM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: dal66a
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1): 

Oracle Database 12c                                                  12.1.0.2.0
There are 1 products installed in this Oracle Home.


There are no Interim patches installed in this Oracle Home.


--------------------------------------------------------------------------------

OPatch succeeded.

Make sure that all databases are down from the Oracle Home that you are patching. Also, ensure that the database listeners are down from the same Oracle Home; otherwise, you will encounter the following error from opatch as you attempt to apply the PSU:

Prerequisite check "CheckActiveFilesAndExecutables" failed.
The details are:


Following executables are active :
/u01/app/oracle/product/12.1.0/dbhome_2/bin/oracle
/u01/app/oracle/product/12.1.0/dbhome_2/lib/libclntsh.so.12.1
UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
Log file location: /u01/app/oracle/product/12.1.0/dbhome_2/cfgtoollogs/opatch/opatch2015-05-16_17-47-06PM_1.log

OPatch failed with error code 73

Shut down all databases and listeners running out of the Oracle Home that you are patching:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@dal66a 20299023]$ ps -ef |grep -i tns
root        15     2  0 14:31 ?        00:00:00 [netns]
oracle    4067     1  0 16:45 ?        00:00:00 /u01/app/oracle/product/12.1.0/dbhome_2/bin/tnslsnr LISTENER -inherit
oracle    4074     1  0 16:45 ?        00:00:00 /u01/app/oracle/product/12.1.0/dbhome_2/bin/tnslsnr DBATOOLS -inherit
oracle    6046  3362  0 17:48 pts/0    00:00:00 grep -i tns
[oracle@dal66a 20299023]$ lsnrctl stop dbatools

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-MAY-2015 17:48:12

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dal66a)(PORT=1522)))
The command completed successfully


[oracle@dal66a 20299023]$ lsnrctl stop listener

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-MAY-2015 17:48:22

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dal66a)(PORT=1521)))
The command completed successfully

Now we can apply the latest PSU:

[oracle@dal66a 20299023]$ opatch apply
Oracle Interim Patch Installer version 12.1.0.1.7
Copyright (c) 2015, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_2/oraInst.loc
OPatch version    : 12.1.0.1.7
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0/dbhome_2/cfgtoollogs/opatch/opatch2015-05-16_17-48-29PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   19769480  20299023  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name: 

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y



Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0/dbhome_2')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying sub-patch '19769480' to OH '/u01/app/oracle/product/12.1.0/dbhome_2'

Patching component oracle.rdbms.deconfig, 12.1.0.2.0...

Patching component oracle.xdk, 12.1.0.2.0...

Patching component oracle.tfa, 12.1.0.2.0...

Patching component oracle.rdbms.util, 12.1.0.2.0...

Patching component oracle.rdbms, 12.1.0.2.0...

Patching component oracle.rdbms.dbscripts, 12.1.0.2.0...

Patching component oracle.xdk.parser.java, 12.1.0.2.0...

Patching component oracle.oraolap, 12.1.0.2.0...

Patching component oracle.xdk.rsf, 12.1.0.2.0...

Patching component oracle.rdbms.rsf, 12.1.0.2.0...

Patching component oracle.rdbms.rman, 12.1.0.2.0...

Patching component oracle.ldap.rsf, 12.1.0.2.0...

Patching component oracle.ldap.rsf.ic, 12.1.0.2.0...

Verifying the update...
Applying sub-patch '20299023' to OH '/u01/app/oracle/product/12.1.0/dbhome_2'
ApplySession: Optional component(s) [ oracle.has.crs, 12.1.0.2.0 ]  not present in the Oracle Home or a higher version is found.

Patching component oracle.tfa, 12.1.0.2.0...

Patching component oracle.rdbms.deconfig, 12.1.0.2.0...

Patching component oracle.rdbms.rsf, 12.1.0.2.0...

Patching component oracle.rdbms, 12.1.0.2.0...

Patching component oracle.rdbms.dbscripts, 12.1.0.2.0...

Patching component oracle.rdbms.rsf.ic, 12.1.0.2.0...

Patching component oracle.ldap.rsf, 12.1.0.2.0...

Patching component oracle.ldap.rsf.ic, 12.1.0.2.0...

Verifying the update...
Composite patch 20299023 successfully applied.
Log file location: /u01/app/oracle/product/12.1.0/dbhome_2/cfgtoollogs/opatch/opatch2015-05-16_17-48-29PM_1.log

OPatch succeeded.

Let’s confirm that the PSU was successfully applied:

[oracle@dal66a 20299023]$ opatch lsinventory |grep ^Patch
Patch  20299023     : applied on Sat May 16 17:49:12 CDT 2015
Patch description:  "Database Patch Set Update : 12.1.0.2.3 (20299023)"

Now let’s load the modified SQL files into the database. We need to execute datapatch to complete the post-install SQL portion of the PSU:

[oracle@dal66a OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Sat May 16 17:56:55 2015
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_7734_2015_05_16_17_56_55/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series PSU:
  ID 3 in the binary registry and not installed in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    20299023 (Database Patch Set Update : 12.1.0.2.3 (20299023))

Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 20299023 apply: SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/20299023/18703022/20299023_apply_TOOLSDEV_2015May16_17_57_25.log (no errors)
SQL Patching tool complete on Sat May 16 17:57:31 2015

Look for a future blog post on applying the same patch against the Oracle Grid Home. That is where the fun really begins, as we will be leveraging opatchauto instead of opatch.

Monitoring the Physical Standby with v$dataguard_stats


The following dg_check_lag.ksh script can be leveraged to monitor a Data Guard environment and to send alerts if the apply lag or transport lag exceeds the threshold specified in the dg_check_lag.conf file. In our example, dg_check_lag.conf specifies a threshold of three hours; if we encounter a redo transport or apply lag that exceeds three hours, we send an alert to our DG DBAs.

Contents of the dg_check_lag.ksh script:

#!/bin/ksh

export DB=$1
export ORACLE_SID=${DB}1
echo $ORACLE_SID
export ORAENV_ASK=NO
export PATH=/usr/local/bin:/bin:/usr/bin:$PATH

. oraenv 
. $HOME/.ORACLE_BASE 
export LOGFILE=/tmp/check_lag_${DB}.log
export FN=`echo $0 | sed s/\.*[/]// |cut -d. -f1`
. $SH/${FN}.conf

sqlplus -s /nolog <<EOF
conn / as sysdba
col name for a13
col value for a20
col unit for a30
set pages 0 head off feed off ver off echo off trims on
spool $LOGFILE
select name||'|'||value from v\$dataguard_stats where NAME IN ('transport lag', 'apply lag');
spool off
exit;
EOF

export ERROR_COUNT=$(grep -c ORA- $LOGFILE)
if [ "$ERROR_COUNT" -gt 0 ]; then
  cat $LOGFILE |mail -s "DG Transport/Apply error for $DB ... ORA- errors encountered!" DGDBAs@viscosityna.com 
fi

TLAG=$(cat $LOGFILE |grep "transport lag" |cut -d'|' -f2)
ALAG=$(cat $LOGFILE |grep "apply lag" |cut -d'|' -f2)

echo "Transport Lag: $TLAG ------ Apply Lag: $ALAG"
export T_DAYS=$(echo $TLAG |cut -d' ' -f1 |sed -e "s/\+//g")
export T_HRS=$(echo $TLAG |cut -d' ' -f2 |cut -d: -f1)
export T_MINS=$(echo $TLAG |cut -d' ' -f2 |cut -d: -f2)
echo $T_DAYS $T_HRS $T_MINS

export A_DAYS=$(echo $ALAG |cut -d' ' -f1 |sed -e "s/\+//g")
export A_HRS=$(echo $ALAG |cut -d' ' -f2 |cut -d: -f1)
export A_MINS=$(echo $ALAG |cut -d' ' -f2 |cut -d: -f2)
echo $A_DAYS $A_HRS $A_MINS

echo "HR_THRESHOLD: $HR_THRESHOLD"
export ALERT_LOG_FILE=/tmp/check_lag_${DB}.alert
[ -f "$ALERT_LOG_FILE" ] && rm $ALERT_LOG_FILE

[ "$T_DAYS" -gt 00 ] && echo "Transport Lag is greater than 1 day!!!" |tee -a $ALERT_LOG_FILE
[ "$T_HRS" -gt $HR_THRESHOLD ] && echo "Transport Lag exceeded our threshold limit of $HR_THRESHOLD hrs .. currently behind $T_DAYS day(s) $T_HRS hrs and $T_MINS mins" |tee -a $ALERT_LOG_FILE

[ "$A_DAYS" -gt 00 ] && echo "Apply Lag is greater than 1 day!!!" |tee -a $ALERT_LOG_FILE
[ "$A_HRS" -gt $HR_THRESHOLD ] && echo "Apply Lag exceeded our threshold limit of $HR_THRESHOLD hrs .. currently behind $A_DAYS day(s) $A_HRS hrs and $A_MINS mins" |tee -a $ALERT_LOG_FILE

[ -s $ALERT_LOG_FILE ] && {
echo "" >> $ALERT_LOG_FILE
echo "--------- MRP Sessions Running -------------" >> $ALERT_LOG_FILE
ps -ef |grep -i mrp |grep -v grep >> $ALERT_LOG_FILE
cat $ALERT_LOG_FILE |mail -s "DG Transport/Apply for $DB is behind ... behind $T_DAYS day(s) $T_HRS hrs and $T_MINS mins " DGDBAs@viscosityna.com  
}
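To run the check on a schedule, a cron entry along these lines could be used (the interval, path, and database name are hypothetical; adjust for your environment):

```shell
# Hypothetical crontab fragment: check Data Guard lag for the PROD
# database every 15 minutes, passing the DB name as the first argument.
*/15 * * * * /u01/app/oracle/general/sh/dg_check_lag.ksh PROD > /tmp/dg_check_lag_cron.log 2>&1
```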

 

Contents of dg_check_lag.conf:
HR_THRESHOLD=03

Contents of .ORACLE_BASE

This file must exist in the oracle user’s $HOME directory. We source the .ORACLE_BASE file because companies differ widely in where they place ORACLE_BASE and where they keep their shell scripts. We keep it simple by leveraging our own configuration file, which points to where ORACLE_BASE is located and where all the shell scripts reside.

$ cat .ORACLE_BASE
export ORACLE_BASE=/u01/app/oracle
export BASE_DIR=/u01/app/oracle
export PATH=/usr/local/bin:/usr/bin:/usr/sbin:$PATH
export SH=$ORACLE_BASE/general/sh
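The lag values returned by v$dataguard_stats are DAY TO SECOND interval strings such as "+00 02:45:10" (days, then HH:MM:SS). The day/hour/minute parsing used in dg_check_lag.ksh can be sanity-checked in isolation against a sample value (the lag value below is hypothetical; the plus sign is stripped with a literal-plus sed expression):

```shell
# Demo of the parsing used in dg_check_lag.ksh, against a sample value.
TLAG="+00 02:45:10"
T_DAYS=$(echo $TLAG | cut -d' ' -f1 | sed -e "s/+//g")   # "+" is literal in BRE
T_HRS=$(echo $TLAG  | cut -d' ' -f2 | cut -d: -f1)
T_MINS=$(echo $TLAG | cut -d' ' -f2 | cut -d: -f2)
echo "Behind ${T_DAYS} day(s) ${T_HRS} hrs ${T_MINS} mins"
```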

v$recovery_progress view to track database recovery operations for Data Guard


Here’s a simple script to look at the v$recovery_progress view:

cat apply_rate.sql
set linesize 200
col Description for a80
col Recovery_Start_Time for a30

col db new_value v_db noprint
select name db from v$database;

Prompt Start of Recovery for this database: &v_db
SELECT MAX(start_time) Recovery_start_Time FROM v$recovery_progress;

SELECT item,
TO_CHAR(sofar)||' '||TO_CHAR(units)||' '|| TO_CHAR(timestamp,'DD-MON-RR HH24:MI:SS') Description
FROM v$recovery_progress
WHERE start_time=(SELECT MAX(start_time) FROM v$recovery_progress);

Here’s a sample output from the apply_rate.sql script:

Start of Recovery for this database: PROD_DG

RECOVERY_START_TIME
------------------------------
04-jul-15 12:00:56

ITEM DESCRIPTION
-------------------------------- ----------------------------------------
Log Files 5190 Files
Active Apply Rate 1337 KB/sec
Average Apply Rate 111 KB/sec
Maximum Apply Rate 64799 KB/sec
Redo Applied 675413 Megabytes
Last Applied Redo 0 SCN+Time 01-SEP-15 21:47:50
Active Time 424248 Seconds
Apply Time per Log 80 Seconds
Checkpoint Time per Log 0 Seconds
Elapsed Time 6218934 Seconds

10 rows selected.
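The Average Apply Rate above can be cross-checked by hand: it is roughly Redo Applied divided by Elapsed Time. A quick sanity check with the sample numbers from this output:

```shell
# Cross-check: 675413 MB of redo applied over 6218934 elapsed seconds
# should land near the reported 111 KB/sec average apply rate.
redo_mb=675413
elapsed_sec=6218934
avg_kb_sec=$(awk -v mb="$redo_mb" -v s="$elapsed_sec" \
  'BEGIN { printf "%d", (mb * 1024) / s }')
echo "Average apply rate: ${avg_kb_sec} KB/sec"
```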

Virtualizing Microsoft SQL Server on VMware Best Practices Part 1


Everyone knows me as an Oracle, Linux and VMware Expert. Few know me as a Certified Microsoft SQL Server expert from days of old. I am venturing into the SQL Server world again and plan on leveraging my expertise from the Oracle database world. I love the fact that Microsoft ported their SQL Server database to Linux. Stay tuned as I write future articles on how to deploy SQL Server on Linux and expose best practices to scale a SQL Server database on Linux.

For the first part of this series on virtualizing Microsoft SQL Server on VMware, let’s focus on the storage aspect of the virtualized infrastructure.

Improper storage configuration is often the culprit behind performance issues. The majority of SQL Server performance issues can be correlated back to storage configuration. Relational databases, especially under production workloads, typically produce heavy I/O. When storage is misconfigured, performance degradation and additional latency can be introduced, especially during heavy I/O workloads.

Storage is always about understanding throughput (IOPS) and disk latency. Understand your workload’s I/O usage patterns, thresholds, and times of high activity; benchmark and confirm you are achieving the true throughput of your hardware. Bad settings and incorrect configurations will keep the system’s true throughput from being achieved. It is important to understand the total IOPS your disk system can handle, using the following formulas:

  • Total Raw IOPS = disk IOPS x number of disks
  • Functional IOPS = (Raw IOPS x Write%) / (RAID write penalty) + (Raw IOPS x Read%)
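As a worked example, here is how those formulas play out for a hypothetical array: 24 disks at 180 IOPS each, RAID-10 (write penalty of 2), with a 70/30 read/write mix. All figures are illustrative, not measurements from any real system.

```shell
# Hypothetical figures: 24 disks x 180 IOPS, RAID-10 write penalty 2,
# 70% reads / 30% writes.
disks=24; disk_iops=180; read_pct=70; write_pct=30; raid_penalty=2

raw_iops=$(( disks * disk_iops ))
functional_iops=$(awk -v t="$raw_iops" -v w="$write_pct" -v r="$read_pct" -v p="$raid_penalty" \
  'BEGIN { printf "%d", (t * w / 100) / p + (t * r / 100) }')

echo "Total Raw IOPS : $raw_iops"        # 24 x 180
echo "Functional IOPS: $functional_iops" # write share divided by the RAID penalty
```

Note how the RAID write penalty cuts the usable write IOPS in half here, which is exactly why write-heavy databases need more spindles than raw capacity alone suggests.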

You need to find a balance between performance and capacity. Larger drives typically correlate to lower performance; the more spindles you have, the more IOPS you can generate. Keep in mind that ESXi host demand is the aggregate demand of all VMs residing on that host at that time. Low-latency, high-I/O SQL Server databases are very sensitive to I/O latency, so storage configuration is critical to achieving an optimal database configuration.

For the best performance, the recommendation is always Eager Zeroed Thick VMDKs created in Independent Persistent mode, to avoid any performance issues. Lazy Zeroed Thick or Thin provisioned VMDKs can be used as long as the storage array is VAAI capable, which improves first-time-write performance for these two types. Regarding VMDKs created in Independent Persistent mode: Persistent means changes are persistently written to disk, and Independent means the VMDK is independent of VM-based snapshots.

vAdmins can thinly provision a virtual disk; thinly provisioned disks equate to storage on demand. Thin provisioning at the storage level and at the virtualization layer is common practice at many companies, as it is a technique used to save space and to over-commit space on the storage array. Make sure you know how your storage is laid out for your SQL Server environments.

Development databases can be provisioned on thinly provisioned disks and can grow on demand; however, for production workloads, make sure you always leverage Eager Zeroed Thick VMDKs.
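On an ESXi host, an eager-zeroed thick VMDK can also be created up front with vmkfstools. A sketch follows; the size, datastore, and path are hypothetical:

```shell
# Hypothetical example: create a 40 GB eager-zeroed thick VMDK for a
# SQL Server data disk. Datastore and path names are examples only.
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/sqlvm/sqlvm_data.vmdk
```

Creating the disk this way pays the zeroing cost once, at creation time, instead of on first write.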

This blog post touches on one of the key elements of virtualization for successfully deploying a highly performant SQL Server environment. For more details, sign up for my upcoming webinar, “Ten Surprising Performance Killers on Microsoft SQL Server,” on Oct 12 at 1:00 PM CST.

http://info.sentryone.com/partner-webinar-surprising-performance-killers

Rolling Patch – OPatch Support for RAC


In a RAC configuration, OPatch supports three different patching methods:

1. All-Node Patch: The patch is applied to the local node first. Then the patch is propagated to all the other nodes, and finally the OraInventory is updated. With this method, all instances in the RAC configuration must be shut down for the entire patching process.

2. Minimum Downtime Patch: With this strategy, OPatch first applies the patch on the local node, then prompts the user for a subset of nodes, which becomes the first subset to be patched. After the initial subset is patched, OPatch propagates the patch to the remaining nodes and finally updates the inventory. The downtime occurs between the shutdown of the second subset of nodes and the startup of the initially patched subset. Here’s an example of the minimum downtime process flow:

. Shutdown all the Oracle instances on node 1 
. Apply the patch to the RAC home on node 1 
. Shutdown all the Oracle instances on node 2 
. Apply the patch to the RAC home on node 2 
. Shutdown all the Oracle instances on node 3 
. At this point, instances on nodes 1 and 2 can be brought up
. Apply the patch to the RAC home on node 3 
. Startup all the Oracle instances on node 3

3. No Downtime (Rolling Patch): With this method, we do not incur any downtime. Each node is patched and brought back up while all the other nodes remain up and running, resulting in no disruption to the system. Some rolling patches may still incur downtime due to post-installation steps, typically SQL scripts that patch the actual database. You must read the patch README to find out whether the post-installation steps require downtime. Here’s how the Rolling Patch process looks:

. Shutdown all the Oracle instances on node 1 
. Apply the patch to the RAC home on node 1 
. Start all the Oracle instances on node 1 
. Shutdown all the Oracle instances on node 2 
. Apply the patch to the RAC home on node 2 
. Start all the Oracle instances on node 2 
. Shutdown all the Oracle instances on node 3 
. Apply the patch to the RAC home on node 3 
. Start all the Oracle instances on node 3
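Conceptually, the rolling flow above can be scripted per node along these lines. This is a hypothetical sketch only: node names, the database/instance names, and the staging directory are invented, and the exact commands depend on your version and the patch README.

```shell
# Hypothetical sketch of a rolling apply across a 3-node cluster
# (admin-managed database "PROD" with instances PROD1..PROD3).
for n in 1 2 3; do
  ssh oracle@node${n} "srvctl stop instance -d PROD -i PROD${n}"
  ssh oracle@node${n} "cd /stage/PATCH_DIR && \$ORACLE_HOME/OPatch/opatch apply -local -silent"
  ssh oracle@node${n} "srvctl start instance -d PROD -i PROD${n}"
done
```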

In order for a patch to be applied in a “rolling fashion”, the patch must be designated as a “rolling updatable patch”, or simply “rolling patch”, in the README. When patches are released, they are tagged as either “rolling” or “not rolling”. In general, patches that can be tagged as rolling are those that do not affect the contents of the database, are not related to the RAC internode communication infrastructure, and change procedural logic without modifying common header definitions of kernel modules. This includes client-side patches that only affect utilities like export, import, SQL*Plus, SQL*Loader, etc.

Only individual patches, not patch sets, will be “rollable”. It should also be noted that a merge of a “rolling patch” and an ordinary patch will not be a “rolling patch”.

From 9.2.0.4 onwards, all patches released are marked as “rolling” or “not rolling”, based on a defined set of rules. Patches released earlier are packaged as “not rolling”.

Because the currently defined rules are very conservative, patches released as “not rolling”, whether before or after 9.2.0.4, may be eligible to be re-released as “rolling patches” after analysis by Oracle Development.
  
If you plan to apply a patch that is marked “not rolling” and want to check whether it is possible to take advantage of the rolling patch strategy, please contact Oracle Support. You can determine whether a patch is a “rolling patch” by executing one of the following commands:

    – 9i or 10gR1: opatch query -is_rolling
    – 10gR2: opatch query -all  [unzipped patch location] | grep rolling
    – 10gR2 on Windows: opatch query -all [unzipped patch location] | findstr rolling
    – Later 10gR2 or 11g: opatch query -is_rolling_patch [unzipped patch location]

Please refer to the patch README to find out whether the patch is a rolling patch. For additional details, see Rolling Patch – OPatch Support for RAC (Doc ID 244241.1).

 

Understanding Best Practices for Virtualizing Oracle


One of the key benefits of virtualization is the ability to achieve a high consolidation ratio, thereby getting higher utilization of the hardware. This is especially true of the CPUs, since software licenses are usually tied to the number of CPUs in the hardware. During times of heavy utilization, the environment needs to be configured to make sure VMs with SLAs have their resources protected.

Remember that best practices are recommendations that depend on their context. You will often find two sources stating best practices that conflict with each other; the difference is usually the context. Here is a perfect example: one best practice is to never overcommit VMs running Oracle applications with stringent performance SLAs. Instead, protect the VMs with features like memory reservations, right-sized virtual CPUs, resource pools, and allocation management mechanisms such as Storage I/O Control (SIOC) and Network I/O Control (NIOC). This is strongly recommended when virtualizing workload-intensive, critical Oracle applications. The virtualized environment should be able to guarantee the resources and the Quality of Service (QoS) needed to meet the business requirements.

Another best practice, in conflict with the previous one, is to allow some level of overcommitment for Oracle environments. This is a great way to leverage all the features of virtualization by squeezing every ounce of utilization from your hardware. vSphere can manage resource sharing with its algorithms for fair-share CPU scheduling, memory entitlement, NIOC, SIOC, and resource pools. However, this approach requires that the virtualization team have deep expertise and experience managing overcommitted resources while still ensuring business SLA requirements are met. Latency-sensitive environments need to perform operations at the millisecond level; without the required skill set, applications will be severely affected, especially as the environment grows or utilization increases.

The best way to approach this issue is to start conservative and grow more aggressive as you attain the required level of confidence with the workloads. The recommended way to approach overcommitment is:

  • Overcommit your development and test environments as much as you can, staying within common sense and meeting defined requirements.
  • Try not to overcommit high-profile production environments unless your expertise is ready for it. Initially, be conservative and do not overcommit production environments with SLAs. Then you can begin overcommitting databases that do not have high utilization or strict performance SLAs. Use this strategy to build success and confidence with your users that the Oracle software will perform well in a VM.
  • Once you develop the right level of expertise, you can overcommit some production environments, provided you have guidelines that ensure SLAs are always met. Your team must be good with resource pools, setting DRS priorities and rules, I/O controls (Storage and Network), SR-IOV, etc. To say you absolutely do not overcommit production environments is a simple answer, but it is not always the correct one. Overcommitting allows much higher utilization of your hardware, but requires you to be smart about when and how you overcommit.

 

Goals of Best Practices for Virtualizing Oracle

A goal of best practices is to reduce the possibility of errors and minimize variables when troubleshooting.

  • Develop virtualization best practices and make sure they are consistently followed.
  • Build analytical skills and metric knowledge around the four areas you are virtualizing: Memory, CPU, Storage, and Networking.
  • Understand dependencies and inter-dependencies of the various layers of the stack.
  • Educate the DBAs about the key metrics they need to understand about the virtual infrastructure, so they can determine if it is a virtualization issue or an Oracle issue.
  • Build custom widgets, using vCOPS for DBAs, so they can look at the virtual infrastructure the same way they would look at storage and networking in physical server environments.
  • Your benchmarking should allow you to create consistent and reproducible results that you can compare against. Metrics should always be quantitative.
  • With VMware, develop best practices around vCenter and vCenter Operations, or whatever management and monitoring software you are using. Understand that this is going to take time, as well as the development of skill and expertise.
  • The notion that DBAs do not need to know about the virtual infrastructure, because the Oracle software has no knowledge of whether the underlying platform is physical or virtualized, will not help solve problems. DBAs and vAdmins need to work together as a team to effectively troubleshoot issues.

What are key metrics for virtualization? With any infrastructure it comes down to people, processes, and technology; learning to understand key metrics with tools like esxtop will be helpful. KISS (Keep It Simple Stupid) applies, because complex systems fail in complex ways. Good design is critical. It’s important to develop internal best practices, management processes, and guidelines for managing and monitoring a virtualization environment. It is vital to ensure your infrastructure management is ready to handle tier one workloads and the dynamics they can create.

 

Posted by Charles Kim, VMware vExpert, Oracle ACE Director

Viscosity Acquires Sumner Technologies


Viscosity North America (Viscosity), a leading Oracle-centric IT solutions and consulting firm, announced that it will acquire Sumner Technologies, an Oracle Application Express (APEX) training, consulting, and solutions firm with locations in Columbus, Ohio and Ashburn, Virginia. Sumner Technologies is one of the industry’s top Oracle APEX services and training firms, having served customers across all verticals in North America. Led by former Oracle APEX product manager Scott Spendolini, Sumner Technologies has successfully implemented many APEX projects, as well as developed and delivered custom Oracle APEX training curricula.
 
“Adding Sumner Technologies’ solutions, experts, and vast knowledge of Oracle solidifies our position as the leader of APEX development in the industry and exponentially increases our presence in Oracle’s full-stack cloud and on-premises solutions,” said Jerry Ward, Chief Operating Officer at Viscosity.
 
One of Viscosity’s pillars of concentration will be delivering SaaS applications on Oracle Cloud. Viscosity specializes in Oracle Enterprise Performance Management (EPM) Cloud, with a heavy focus on Financial Consolidation and Close Cloud and Planning and Budgeting Cloud. With Enterprise Resource Planning (ERP) Cloud, Viscosity focuses on increasing productivity, lowering costs, and improving controls with Financials, Procurement, and Project Portfolio Management Cloud.
 
Viscosity will continue to provide innovative approaches to integrating complex data between Oracle’s SaaS applications and on-premises applications. Platform as a Service (PaaS) will be our de facto standard to extend and bridge SaaS products. “Spendolini is expected to exponentially grow our PaaS for SaaS business by transforming business processes and building robust dashboard solutions with APEX for financial executives and lines of business,” said Charles Kim, Chief Executive Officer.
 
With Spendolini joining Viscosity, the number of books its experts have authored on the Oracle ecosystem grows to twenty. Viscosity provides more intellectual knowledge to the user community than any other company in the world. For more information about Viscosity’s growing APEX practice, visit http://www.viscosityna.com

Viscosity North America was founded by industry experts and published authors with backgrounds in Oracle engineered systems, private/public cloud, application development, big data, and E-Business Suite. As an Oracle Platinum Partner, Viscosity is known as the “Trusted Advisors”, specializing in delivering full-stack solutions and resolving complex data challenges. Its vast experience in verticals such as Oil & Gas, Healthcare, Finance, and Retail provides customers with insight into what is driving IT complexity. Viscosity offers services in the areas of Cloud implementation and integration, Big Data, Analytics, Mobility, Engineered Systems, Middleware, and enterprise apps, as well as full-stack health checks, license assessments, and custom application development.

