Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos

Laying the foundation

The foundation of the diagnostics capabilities is the built-in repository that is installed with every Oracle database from version 10g onwards. Known as the Automatic Workload Repository (AWR), it is stored in the SYSAUX tablespace. Once every hour, by default, the database takes a snapshot of its workload and statistical information and stores it in the AWR. This data is retained for eight days by default.

Different classes of statistical data are stored: base statistics (such as physical reads), SQL statistics (such as reads per SQL statement), and metrics (such as physical reads per second). Historical information about the performance of the database is thus captured automatically, and the DBA no longer has to set up home-grown scripts or tables to do the capture, as was necessary in the past. The AWR is also self-managing and does not have to be micromanaged by the DBA: the space requirements are handled automatically, and old data is purged nightly based on the retention interval.
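If you want to inspect or adjust these settings outside of Enterprise Manager, the snapshot interval and retention period can be managed with the DBMS_WORKLOAD_REPOSITORY package. The following is a minimal sketch; the values shown simply restate the defaults (hourly snapshots and an 8-day retention, both expressed in minutes):

    -- Check the current AWR snapshot interval and retention period
    SELECT snap_interval, retention
      FROM dba_hist_wr_control;

    -- Restate the defaults: hourly snapshots, 8-day retention
    -- (both parameters are expressed in minutes)
    BEGIN
      DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
        interval  => 60,
        retention => 8 * 24 * 60);
    END;
    /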

Using the functionality of the AWR, whether by accessing its views directly or by running the awrrpt.sql script (used by DBAs to generate an AWR report), requires the license for the Enterprise Manager Diagnostics Pack.

AWR includes Active Session History (ASH) as one of its key components. ASH samples the active sessions in the database every second and stores their current state in memory, where the data can be accessed through the V$ACTIVE_SESSION_HISTORY view. Every hour, a subset of these samples is written to the AWR repository.
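As a rough illustration of what ASH captures, the in-memory samples can be queried directly. This sketch summarizes the top wait events over the last five minutes of samples (sessions on CPU have a NULL event, hence the NVL):

    -- Top wait events in the last 5 minutes of ASH samples
    SELECT NVL(event, 'ON CPU') AS event,
           wait_class,
           COUNT(*)             AS samples
      FROM v$active_session_history
     WHERE sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE
     GROUP BY NVL(event, 'ON CPU'), wait_class
     ORDER BY samples DESC;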

The main benefit of ASH is that you can now do performance analysis of very fast, transient issues that may occur for only a few fleeting seconds, since ASH samples active sessions every second. AWR and ASH therefore form the basis of the diagnostics capabilities that can be accessed from Enterprise Manager, and lead on to the tuning capabilities (since, after diagnosing a problem, you will need to fix it). Both AWR and ASH are RAC-aware and include events such as cluster waits.

After logging in to the Cloud Control 12c console, go to Targets | Databases on the menu. This brings up a list of the database targets being monitored by the Enterprise Manager system. Only the targets visible to the currently logged-in administrator can be seen. In this case, we are logged in to Cloud Control as SYSMAN, so all database targets can be seen.


By default, the databases are shown in a Load Map (this can be changed to a Search List, if required). This is a fast, visually color-coded way of finding out which database is placing the most load on a system (the total active load), or which database has the most performance issues. Simply placing your mouse cursor on the largest box in the Load Map immediately shows more performance information about that database, such as the time consumed by database sessions in CPU, I/O, or WAIT (that is, waits in all other wait class categories), as well as the top three ADDM findings.

Automatic Database Diagnostic Monitor (ADDM) is like a doctor in the database: a self-diagnostic engine built into the Oracle database kernel. ADDM looks for performance issues and also supplies recommendations and fixes. It is part of the Diagnostics Pack license for Enterprise Manager.

Now, we can have a look at the performance home page of the database. Log in to the database by clicking on it in the Load Map. In the initial login screen, shown in the following screenshot, we can create a named credential by simply checking the Save As checkbox and supplying a name for the credential. This named credential belongs to the current user and can be reused by other users if they are granted the privileges for this credential.


When we drill down in this way from the Load Map, the performance home page of the database is displayed. The performance page shows the average active sessions in the database over time, broken down by wait class, such as Application, Concurrency, User I/O, and CPU Wait. This is displayed in the following screenshot.


The data on this page comes from v$ views in the database, for example, v$sysmetric. Some of the information also comes from the operating system. The page also shows the host processes, with or without the load average, as can be seen in the following screenshot, to give you an idea of the stress at the server level. It is important for the DBA to know if any other process outside the database is impacting the database's performance.

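If you are curious about the raw source of the chart, the metric views can be queried directly. A minimal sketch follows; the metric names shown are standard, but the available set varies by release, and each metric is reported once per interval group:

    -- Current values of selected system metrics
    SELECT metric_name, value, metric_unit
      FROM v$sysmetric
     WHERE metric_name IN ('Average Active Sessions',
                           'Host CPU Utilization (%)',
                           'Current OS Load')
       AND group_id = 2   -- the 60-second (long duration) group
     ORDER BY metric_name;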

At the bottom of the performance home page, the Throughput tab displays the instance throughput rates, including Logons, Transactions, Physical Reads, and Redo Size in KB per second.


There is also a detailed I/O tab showing the I/O megabytes per second for each I/O function, and the I/O requests per second. But the most interesting tab is the Parallel Execution tab, which shows an overview of the parallel activity happening in the database.

In this tab you can see a red line named Parallel Max Servers, which corresponds to the parallel_max_servers initialization parameter setting in the database. You can also see the active parallel sessions in the database, that is, the parallel slaves that are being created. In the same tab, the DDL/DML/query statements that are being parallelized can be seen, as well as the degree to which parallel operations are being downgraded, broken down by percentage. This can quickly give the DBA an idea of how to tune the parallel operations in the database.
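Much of this can also be approximated from the dynamic performance views; a sketch follows (the exact statistic names can differ slightly between versions):

    -- The ceiling drawn as the red line in the tab
    SELECT value AS parallel_max_servers
      FROM v$parameter
     WHERE name = 'parallel_max_servers';

    -- How often parallel operations ran as requested or were downgraded
    SELECT name, value
      FROM v$sysstat
     WHERE name LIKE 'Parallel operations%';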

Finally, the Services tab on the screen gives you a breakdown of the service-level activity in the database, which is especially useful for multiple services in a RAC database.

You can change the performance page settings by clicking on the Settings button on the performance page. On the performance page settings screen, you can specify the Throughput Chart Settings and I/O Chart Settings, and choose whether the Baseline Display shows the 99th percentile line using the system's moving-window baseline or using a static baseline with computed statistics.

Coming back to the Average Active Sessions section on the performance home page, it is possible to ascertain at a glance the peaks of activity occurring in the database, and if you "click on the big stuff", as Oracle puts it, you will find it is very easy to drill down to the wait event that is most severely affecting your database.

For example, if you click on the CPU Wait section in the graph, you will be shown the exact breakdown of the SQL statements and the sessions in the database that contributed towards that wait event. You can also move the shaded bar over the time period, and the statements and sessions displayed in the lower section will change accordingly. This is displayed in the following screenshot; due to the print size, the screenshot may not be readable, but it is included for the purpose of illustration.


If you place your cursor on the SQL ID, the entire SQL statement is shown. It is possible to drill down further into any of the statements to see their execution plan details along with the plan control information for this statement, such as whether a SQL profile (auxiliary statistics that aid the optimizer) is associated with this statement, or whether a SQL plan baseline (a stored execution plan) has been enabled for this statement. Tuning history and SQL monitoring information for this statement may also be available. The following screenshot shows the execution plan for a particular statement. Either a graphical or a tabular view of the execution plan can be selected. This screenshot may not be readable, but is included for the purpose of illustration.

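Outside the console, the same plan control associations can be checked in the library cache. A minimal sketch, using a hypothetical SQL ID:

    -- Check whether a SQL profile or SQL plan baseline is attached
    -- to a statement ('7b2twsn8vgfsc' is a hypothetical SQL ID)
    SELECT sql_id,
           child_number,
           sql_profile,
           sql_plan_baseline
      FROM v$sql
     WHERE sql_id = '7b2twsn8vgfsc';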

The SQL Worksheet button allows you to re-execute the SQL statement using bind variables, if present, and examine the results. There is an option to execute only the SELECT statements and ignore the DML part of the statement.

If your SQL statement is failing to execute properly, you can run the SQL Repair Advisor utility from this page; this will try to find alternate execution paths that allow the SQL statement to complete successfully. As a workaround, a SQL patch may be recommended for this particular SQL statement to fix the issue.

If the SQL statement executes correctly but you are not happy with its performance, it is possible to simply click on the Schedule SQL Tuning Advisor button. This starts a SQL tuning task for the statement, which invokes the optimizer in tuning mode and generates solutions for your SQL statement.
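The same advisor can also be driven manually through the DBMS_SQLTUNE package. The following sketch uses a hypothetical SQL ID and task name:

    -- Create and run a SQL tuning task for one cached statement
    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                  sql_id     => '7b2twsn8vgfsc',   -- hypothetical
                  scope      => DBMS_SQLTUNE.SCOPE_COMPREHENSIVE,
                  time_limit => 60,
                  task_name  => 'tune_7b2twsn8vgfsc');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
    END;
    /

    -- Review the advisor's findings and recommended solutions
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_7b2twsn8vgfsc')
      FROM dual;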

These solutions may be new indexes to speed up your queries, or a statistics update may be suggested for your tables or indexes if these statistics are missing or stale. Otherwise, SQL restructuring can be suggested if your SQL is improperly formed, for example, Cartesian joins where data from two tables is selected without a proper join clause. The most interesting and useful suggestion is of course the SQL profile.

The SQL profile offers auxiliary information to the optimizer, which aids in the efficient execution of a SQL statement without actually changing the statement. As such, it can tune SQL transparently, without changing the application code.

This makes it very useful for packaged application tuning; it has always been difficult to get vendors to change their application SQL when the DBA found certain statements were inefficient.

Now, there is no need to ask the vendor to change the SQL, as a SQL profile will simply make the SQL run better without changing it. This technology is available only with the Enterprise Manager Tuning Pack.

The profile is persistent; it continues to work after database restarts and upgrades. You can also transport the profile across databases, as described in MOS Note ID 457531.1: How To Move SQL Profiles From One Database To Another Database.
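At a high level, the move described in the note uses a staging table that is packed on the source database and unpacked on the target. The outline below follows the DBMS_SQLTUNE staging procedures; the profile name, schema, and staging table name are illustrative:

    -- On the source database: create and fill a staging table
    BEGIN
      DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(
        table_name  => 'PROFILE_STG',
        schema_name => 'SCOTT');
      DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(
        profile_name         => 'my_sql_profile',
        staging_table_name   => 'PROFILE_STG',
        staging_schema_owner => 'SCOTT');
    END;
    /

    -- Move PROFILE_STG to the target database (for example, with
    -- Data Pump), then unpack it there:
    BEGIN
      DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(
        replace              => TRUE,
        staging_table_name   => 'PROFILE_STG',
        staging_schema_owner => 'SCOTT');
    END;
    /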

When the profile is generated for a SQL statement, it contains auxiliary information that has been collected by the Automatic Tuning Optimizer (ATO) in tuning mode (which takes longer than normal optimizer execution).

This auxiliary information contains customized optimizer settings based on past execution history (such as whether first_rows was used or all_rows). It also contains compensation for missing or stale statistics, and for errors in optimizer estimates; these errors may have occurred due to data skews and correlations, or due to complex filters and joins being used in the SQL statement.

Once the SQL profile is accepted for a particular SQL statement, from then on whenever that statement is run in production, the optimizer in normal mode will use the SQL profile and its auxiliary information in deciding the best execution path possible. The result will be a well-tuned execution plan in which there has been no change to the actual SQL code.
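Acceptance can also be scripted from a completed tuning task; a sketch, reusing the hypothetical task name from the earlier example:

    -- Accept the SQL profile recommended by a completed tuning task
    DECLARE
      l_name VARCHAR2(64);
    BEGIN
      l_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
                  task_name => 'tune_7b2twsn8vgfsc',
                  name      => 'profile_7b2twsn8vgfsc');
    END;
    /

    -- SQL profiles currently stored in the database
    SELECT name, category, status
      FROM dba_sql_profiles;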

Comprehensive analysis or limited analysis may be chosen for this task; limited analysis is faster but will not generate a SQL profile. The results of any task you have submitted can be seen by going to Performance | Advisors Home (also known as Advisor Central). Once the task completes, the SQL Tuning Advisor may recommend collecting optimizer statistics for a table (used in the SQL statement) and its indexes if it finds them stale or missing, creating a SQL plan baseline, and so on. Enterprise Manager further automates the tuning process by allowing you to fulfill these recommendations by simply clicking on the Implement button. In the case of table statistics, an Enterprise Manager task will be scheduled to gather the statistics, and this is done using the DBMS_STATS.GATHER_TABLE_STATS database procedure.
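Under the covers, the statistics-gathering recommendation amounts to a call like the following; the schema and table names here are hypothetical:

    -- Gather statistics for one table and, via cascade, its indexes
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SCOTT',
        tabname          => 'ORDERS',
        cascade          => TRUE,
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /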

You can also implement a new SQL plan baseline or a SQL profile in this manner. Clicking on the magnifying glass icon in the Other Statistics column allows you to select the plan (with the least elapsed time) that you can use to create the SQL plan baseline, as shown in the following screenshot.


Automatic SQL tuning has been available from Oracle Database 11g onwards. This feature can be set up to run automatically in the maintenance window and capture the highest-impact SQL. This SQL is then analyzed, SQL profiles are generated, and these profiles can be implemented automatically if they substantially improve the execution plans. Thus, the application performance can be improved without changing the application.

The automatic implementation is optional and can be set up if needed; however, it is highly recommended. A generated SQL profile can work very well one day, yet the same profile may not help a few days later, depending on how the data in the tables has changed. It is therefore advisable to keep generating and implementing new SQL profiles every day for the highest-impact SQL statements, and this is best automated by automatic SQL tuning. The findings and actions taken by this process are automatically reported to the DBA.

To set up automatic SQL tuning, on the database menu go to Administration | Oracle Scheduler | Automated Maintenance Tasks.

The automated maintenance tasks are enabled by default (in the case of the Enterprise Manager repository database, these tasks must be disabled manually as a prerequisite before the Enterprise Manager installation can start).
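The same enable/disable switches are exposed programmatically through the DBMS_AUTO_TASK_ADMIN package, which is one way the repository prerequisite can be scripted; a sketch:

    -- Disable the automatic SQL tuning task in all maintenance windows
    BEGIN
      DBMS_AUTO_TASK_ADMIN.DISABLE(
        client_name => 'sql tuning advisor',
        operation   => NULL,
        window_name => NULL);
    END;
    /

    -- Current status of the automated maintenance task clients
    SELECT client_name, status
      FROM dba_autotask_client;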

Click on the Configure button on the Automated Maintenance Tasks screen. This displays a page where you can enable or disable the maintenance tasks and also assign the daily maintenance windows to each task, as shown in the following screenshot.


Click on the Configure button against Automatic SQL Tuning. This allows you to confirm that SQL profiles should be automatically implemented, a setting that is No by default. If you set it to Yes, you can also specify the maximum time spent per SQL statement during tuning (in seconds), the maximum number of SQL profiles implemented per execution of automatic SQL tuning, and the maximum number of SQL profiles implemented overall. The default values for these settings are 1200 seconds, 20, and 10,000 respectively.
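These console settings correspond to task parameters on the predefined SYS_AUTO_SQL_TUNING_TASK and can also be set directly with DBMS_SQLTUNE (typically as SYS). The sketch below enables automatic acceptance of profiles and restates the documented defaults for the three limits:

    -- Configure automatic SQL tuning to implement profiles itself
    BEGIN
      DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
        task_name => 'SYS_AUTO_SQL_TUNING_TASK',
        parameter => 'ACCEPT_SQL_PROFILES',
        value     => 'TRUE');
      -- Maximum seconds spent tuning a single statement
      DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
        'SYS_AUTO_SQL_TUNING_TASK', 'LOCAL_TIME_LIMIT', 1200);
      -- Maximum profiles implemented per execution
      DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
        'SYS_AUTO_SQL_TUNING_TASK', 'MAX_SQL_PROFILES_PER_EXEC', 20);
      -- Maximum profiles implemented overall
      DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
        'SYS_AUTO_SQL_TUNING_TASK', 'MAX_AUTO_SQL_PROFILES', 10000);
    END;
    /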