Thursday, December 18, 2008
Backup strategy of CRS and OCR (RAC)
CRS (voting disk): take a fresh backup after
1, a newly completed setup
2, adding/deleting/modifying any node
OCR: take a fresh backup after
1, a newly completed setup
2, adding/deleting/modifying any db/vip/service/listener resource
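A rough sketch of how those backups can be taken (assuming Oracle Clusterware 10g; device names and backup paths are placeholders):
    ocrconfig -showbackup                               # list the automatic OCR backups CRS keeps
    ocrconfig -export /backup/ocr_export.dmp            # manual logical export of the OCR (run as root)
    crsctl query css votedisk                           # list the voting disk locations
    dd if=/dev/raw/raw1 of=/backup/votedisk.bak bs=4k   # raw copy of a voting disk (10g-style backup)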
Quiesced state
ALTER SYSTEM QUIESCE RESTRICTED;
ALTER SYSTEM UNQUIESCE;
select active_state from v$instance;
NORMAL: Normal unquiesced state.
QUIESCING: Being quiesced, but some non-DBA sessions are still active.
QUIESCED: Quiesced; no non-DBA sessions are active or allowed.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/start.htm#sthref603
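Putting those statements together (a sketch; note that QUIESCE RESTRICTED requires the Database Resource Manager to have been active since instance startup):
    SQL>SELECT active_state FROM v$instance;    -- NORMAL
    SQL>ALTER SYSTEM QUIESCE RESTRICTED;        -- waits for active non-DBA transactions to finish
    SQL>SELECT active_state FROM v$instance;    -- QUIESCING, then QUIESCED
        -- ...perform DBA-only maintenance here...
    SQL>ALTER SYSTEM UNQUIESCE;
    SQL>SELECT active_state FROM v$instance;    -- NORMAL again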
RAC status
Symptom: resources show TARGET=ONLINE but STATE=UNKNOWN (e.g. in crs_stat -t output):
TARGET  STATE
------  -------
ONLINE  UNKNOWN
Steps:
1, srvctl stop database -d dbname
2, srvctl stop asm -n nodename
3, srvctl stop nodeapps -n nodename
4, crsctl stop crs
5, rm -rf /tmp/.oracle/
6, rm -rf /var/tmp/.oracle/
7, crsctl start crs
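After step 7, it is worth confirming the stack came back cleanly; a quick check (10g-style commands, dbname as above):
    crsctl check crs                    # CSS, CRS and EVM daemons should report healthy
    crs_stat -t                         # all resources should show TARGET=ONLINE, STATE=ONLINE
    srvctl status database -d dbname    # instances should be running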
Thursday, December 11, 2008
DBA can log in as anyone...
1, Record the password hash from dba_users.password
2, Change the password
3, Immediately log in as that user
4, Immediately run: alter user [username] identified by values '[recorded password hash]' to restore the original password
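A minimal sketch of the trick (SCOTT and temp123 are placeholders; it relies on dba_users.password exposing the hash, which holds through 10g):
    SQL>SELECT password FROM dba_users WHERE username = 'SCOTT';   -- note down the hash
    SQL>ALTER USER scott IDENTIFIED BY temp123;                    -- temporary password
    SQL>CONNECT scott/temp123                                      -- work as SCOTT
    SQL>CONNECT / AS SYSDBA
    SQL>ALTER USER scott IDENTIFIED BY VALUES '<recorded hash>';   -- put the original hash back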
Tuesday, December 9, 2008
Incarnation
Friday, December 5, 2008
Fine-grained audit usage
-ADD_POLICY Procedure
    DBMS_FGA.ADD_POLICY(
       object_schema VARCHAR2,
       object_name VARCHAR2,
       policy_name VARCHAR2,
       audit_condition VARCHAR2,
       audit_column VARCHAR2,
       handler_schema VARCHAR2,
       handler_module VARCHAR2,
       enable BOOLEAN,
       statement_types VARCHAR2,
       audit_trail BINARY_INTEGER IN DEFAULT,
       audit_column_opts BINARY_INTEGER IN DEFAULT);
-DROP_POLICY Procedure
    DBMS_FGA.DROP_POLICY(
       object_schema VARCHAR2,
       object_name VARCHAR2,
       policy_name VARCHAR2 );
-ENABLE_POLICY Procedure
    DBMS_FGA.ENABLE_POLICY(
       object_schema VARCHAR2,
       object_name VARCHAR2,
       policy_name VARCHAR2,
       enable BOOLEAN);
-DISABLE_POLICY Procedure
    DBMS_FGA.DISABLE_POLICY(
       object_schema VARCHAR2,
       object_name VARCHAR2,
       policy_name VARCHAR2 );
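A hedged usage sketch (the HR.EMPLOYEES table, condition and policy name are only illustrations):
    SQL>BEGIN
          DBMS_FGA.ADD_POLICY(
            object_schema   => 'HR',
            object_name     => 'EMPLOYEES',
            policy_name     => 'AUDIT_EMP_SALARY',
            audit_condition => 'department_id = 10',   -- audit only rows in department 10
            audit_column    => 'SALARY',               -- and only when SALARY is referenced
            enable          => TRUE,
            statement_types => 'SELECT,UPDATE');
        END;
        /
    SQL>SELECT db_user, timestamp, sql_text
        FROM dba_fga_audit_trail
        WHERE policy_name = 'AUDIT_EMP_SALARY';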
Flashback query usage
    SQL>alter database add supplemental log data;
-Configure undo so the system gives the best possible undo retention for the undo tablespace:
    UNDO_MANAGEMENT
    UNDO_RETENTION
-Grant flashback privileges to users
    For the DBMS_FLASHBACK package, grant the EXECUTE privilege on DBMS_FLASHBACK
    For Flashback Transaction Query, grant the SELECT ANY TRANSACTION privilege.
    For Flashback Query and Flashback Version Query, grant FLASHBACK and SELECT privileges on specific objects to be accessed during queries or grant the FLASHBACK ANY TABLE privilege to allow queries on all tables.
-Using Flashback Query
    SELECT ... AS OF
    INSERT INTO ...(SELECT ... AS OF)
    CREATE ... AS SELECT ... AS OF
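A few hedged examples of those forms (table names, SCN and the time window are placeholders; hr.employees_history is a hypothetical staging table):
    SQL>SELECT * FROM hr.employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE)
        WHERE employee_id = 100;
    SQL>INSERT INTO hr.employees_history
        SELECT * FROM hr.employees AS OF SCN 1234567;
    SQL>SELECT versions_startscn, versions_operation, salary            -- Flashback Version Query
        FROM hr.employees VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
        WHERE employee_id = 100;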
Logminer usage
    SQL>alter database add supplemental log data;
-Extract dictionary
    There are 3 methods to extract the dictionary:
    In the online catalog:
       SQL>EXECUTE DBMS_LOGMNR.START_LOGMNR(-
       OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    In the redo log files:
       SQL>EXECUTE DBMS_LOGMNR_D.BUILD( -
       OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
    A Flat file:
       (need to set UTL_FILE_DIR of init.ora and restart DB)
       SQL>EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
       '/oracle/database/', -
       DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
-Add redo log files for analysis
    SQL>EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
    LOGFILENAME => '/oracle/logs/log1.f', -
    OPTIONS => DBMS_LOGMNR.NEW);
-Start logminer session
    SQL>Execute DBMS_LOGMNR.START_LOGMNR(options=>…..)
-query v$logmnr_contents
    SQL>SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE...
-End logminer session
    SQL>EXECUTE DBMS_LOGMNR.END_LOGMNR;
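As one concrete illustration of the OPTIONS parameter above (just one common combination; the EMPLOYEES filter is a placeholder):
    SQL>EXECUTE DBMS_LOGMNR.START_LOGMNR( -
        OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
    SQL>SELECT username, timestamp, sql_redo, sql_undo
        FROM v$logmnr_contents
        WHERE table_name = 'EMPLOYEES';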
Logminer vs Fine-grained audit vs Flashback query
Logminer
-Started: 8i
-Data source: redo logs, archived redo logs
-Target user: DBA
-Performance overhead: Nil to low (supplemental logging)
Regular audit trail
-Started: ?
-Data source: DBA_AUDIT_TRAIL (from AUD$)
-Target user: DBA
-Performance overhead: Low to medium (extra logging)
Fine-grained audit
-Started: 9i
-Data source: DBA_FGA_AUDIT_TRAIL (from FGA_LOG$)
-Target user: DBA
-Performance overhead: Low to medium (triggers)
Flashback query
-Started: 9i
-Data source: undo segment, flash recovery area/archive
-Target user: end-users, DBA
-Performance overhead: low (background process FBDA)
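For completeness, a minimal sketch of the regular audit trail row above (assuming the AUDIT_TRAIL initialization parameter is set to DB; the table is a placeholder):
    SQL>AUDIT SELECT, UPDATE ON hr.employees BY ACCESS;
    SQL>SELECT username, action_name, obj_name, timestamp
        FROM dba_audit_trail
        WHERE obj_name = 'EMPLOYEES';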
References:
http://www.oracle.com/technology/pub/articles/oracle-database-11g-top-features/11g-transactionmanagement.html
http://www.oracle.com/technology/deploy/availability/pdf/32494_OW2002_PPT.pdf
http://echo.sharera.com/blog/BlogTopic/5555.htm
http://blog.roodo.com/mywork/archives/4513199.html
Monday, December 1, 2008
Sequential Read vs Scattered Read
DB file sequential read waits: single-block reads into the buffer cache, typically index lookups and access by rowid.
DB file scattered read waits: multiblock reads scattered into non-contiguous buffers, typically full table scans and index fast full scans.
Direct read: blocks read directly into the session's PGA, bypassing the buffer cache (parallel query, sorts/hash joins spilling to temp).
Parameter:
db_file_multiblock_read_count can affect which read method is chosen; however, starting with Oracle 10g it is an auto-tuned parameter if it is not set explicitly.
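A quick way to see how much of each read type an instance is doing (a sketch, instance-wide totals only):
    SQL>SELECT event, total_waits, time_waited
        FROM v$system_event
        WHERE event IN ('db file sequential read', 'db file scattered read', 'direct path read');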
Friday, November 28, 2008
Relationship of some dynamic view
v$session.taddr=v$transaction.addr
v$session.lockwait=v$lock.kaddr
v$session.sql_id,prev_sql_id=v$sql/v$sqltext.sql_id
v$session.sid=v$sess_io.sid
v$session.sid=v$sesstat.sid
I should make a diagram of these later.
With these dynamic views, we can see what's going on in the instance.
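For example, a hedged sketch tying a few of these together, listing sessions with an open transaction and the SQL they are running:
    SQL>SELECT s.sid, s.username, t.used_ublk, q.sql_text
        FROM v$session s
        JOIN v$transaction t ON t.addr = s.taddr
        LEFT JOIN v$sql q ON q.sql_id = s.sql_id AND q.child_number = 0   -- first child only
        WHERE s.username IS NOT NULL;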
Thursday, November 27, 2008
Check execution plan
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=541550 Card=49 Bytes=15092)
1    0   COUNT (STOPKEY)
2    1     INLIST ITERATOR
3    2       TABLE ACCESS (BY INDEX ROWID) OF 'TC_xx_TRADE' (Cost=541550 Card=1462696 Bytes=450510368)
4    3         INDEX (RANGE SCAN) OF 'IDX_TC_REF_TRA_GMT_XIAOER' (NON-UNIQUE) (Cost=4475 Card=1462696)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
898 consistent gets
0 physical reads
0 redo size
15366 bytes sent via SQL*Net to client
689 bytes received via SQL*Net from client
5 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
49 rows processed
Plan Table:
@$ORACLE_HOME/rdbms/admin/utlxpls.sql
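For reference, two common ways to produce such a plan (dual stands in for the statement under test):
    SQL>SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS   -- SQL*Plus prints the plan and statistics as above
    SQL>EXPLAIN PLAN FOR SELECT * FROM dual;         -- or populate PLAN_TABLE explicitly
    SQL>SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);     -- and format it (or @$ORACLE_HOME/rdbms/admin/utlxpls.sql)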
Wednesday, November 26, 2008
SQL Tuning for massive data updates
When a batch job reads and updates a large table row by row, drive the work in ROWID order:
order by table1.rowid;
It can save a lot of I/O read time, because consecutive rowids sit in the same blocks, so each block is read roughly once.
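A minimal sketch of the pattern (big_table, the processed column and the batch size are placeholders):
    SQL>DECLARE
          CURSOR c IS SELECT rowid AS rid FROM big_table ORDER BY rowid;  -- visit rows in block order
          n PLS_INTEGER := 0;
        BEGIN
          FOR r IN c LOOP
            UPDATE big_table SET processed = 'Y' WHERE rowid = r.rid;
            n := n + 1;
            IF MOD(n, 10000) = 0 THEN
              COMMIT;                                                     -- commit in batches
            END IF;
          END LOOP;
          COMMIT;
        END;
        /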
Detail about rowid
http://ningoo.itpub.net/post/2149/226286
Character set
col DESCRIPTION format a40
select * from database_properties where property_name like '%SET';
PROPERTY_NAME                  PROPERTY_VALUE                 DESCRIPTION
------------------------------ ------------------------------ ----------------------------------------
NLS_CHARACTERSET               AL32UTF8                       Character set
NLS_NCHAR_CHARACTERSET         AL16UTF16                      NCHAR Character set
SQL CHAR datatypes:
CHAR, VARCHAR2, CLOB, LONG
Unicode datatypes:
NCHAR, NVARCHAR2, NCLOB
NLS Parameter Setting Precedence
1, Set in SQL function (Current SQL function)
2, Explicit ALTER SESSION statement (Current session)
3, Client environment variable (implicit ALTER SESSION statement) (Current session)
4, Set by server initialization parameter (Instance)
5, Default (Instance)
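A small sketch showing levels 1 and 2 of that precedence (the languages are chosen arbitrarily):
    SQL>ALTER SESSION SET NLS_DATE_LANGUAGE = 'FRENCH';                           -- session level (2)
    SQL>SELECT TO_CHAR(SYSDATE, 'Day') FROM dual;                                 -- uses the session setting
    SQL>SELECT TO_CHAR(SYSDATE, 'Day', 'NLS_DATE_LANGUAGE = GERMAN') FROM dual;   -- function level (1) wins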
Monday, November 24, 2008
OLTP (OS) vs OLAP (DW, BI)
On Line Transaction Processing (Operational System)
-Source of data: Operational data; OLTPs are the original source of the data.
-Purpose of data: To control and run fundamental business tasks
-What the data reveals: A snapshot of ongoing business processes
-Inserts and Updates: Short and fast inserts and updates initiated by end users
-Queries: Relatively standardized and simple queries, returning relatively few records
-Processing Speed: Typically very fast
-Space Requirements: Can be relatively small if historical data is archived
-Database Design: Highly normalized with many tables
-Backup and Recovery: Backup religiously; operational data is critical to run the business, data loss is likely to entail significant monetary loss and legal liability
-Mostly updates
-Many small transactions
-MB-TB of data
-Clerical users
-Up-to-date data
-Consistency, recoverability critical
-Bottleneck: often CPU
On Line Analytical Processing (Data Warehouse)
-Source of data: Consolidation data; OLAP data comes from the various OLTP Databases
-Purpose of data: To help with planning, problem solving, and decision support
-What the data reveals: Multi-dimensional views of various kinds of business activities
-Inserts and Updates: Periodic long-running batch jobs refresh the data
-Queries: Often complex queries involving aggregations
-Processing Speed: Depends on the amount of data involved; batch data refreshes and complex queries may take many hours; query speed can be improved by creating indexes
-Space Requirements: Larger due to the existence of aggregation structures and history data; requires more indexes than OLTP
-Database Design: Typically de-normalized with fewer tables; use of star and/or snowflake schemas
-Backup and Recovery: Instead of regular backups, some environments may consider simply reloading the OLTP data as a recovery method
-Mostly reads
-Queries are long and complex
-GB-TB of data
-Decision-makers, analysts as users
-Summarized, consolidated data
-Bottleneck: often I/O
Others:
HBA (host bus adapter) connects a host system (the computer) to other network and storage devices. The term is primarily used for devices that connect SCSI, Fibre Channel and eSATA devices, but devices for connecting to IDE, Ethernet, FireWire, USB and other systems may also be called host adapters.