Continuing from the previous part: Learning by Example: An Operations Guide to Cross-Platform, Cross-Version Migration with XTTS
7 XTTS Post-Migration Checks
7.1 Change Users' Default Tablespace
Set each user's default tablespace to match the source database, then run the generated script:
@default_tablespace.sql
Generate the script on the source side:
spool default_tablespace.sql
select 'alter user '||username||' default tablespace '||default_tablespace||';' from dba_users where default_tablespace in ('DATATBS');
spool off
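The spool trick above uses SQL to generate SQL: one ALTER USER statement per matching row. The same generation step can be sketched in Python with hypothetical user data (not taken from the source system):

```python
# Emit "alter user ... default tablespace ..." DDL for users whose default
# tablespace should follow the source, mirroring the spooled query above.
def default_tablespace_ddl(users, tablespace="DATATBS"):
    return [f"alter user {user} default tablespace {ts};"
            for user, ts in users
            if ts == tablespace]

stmts = default_tablespace_ddl([("LUOKLE", "DATATBS"), ("SCOTT", "USERS")])
print(stmts)  # ['alter user LUOKLE default tablespace DATATBS;']
```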
Grant unlimited tablespace quotas, then run the generated script:
@unlimited_tablespace.sql
Generate the script on the source database:
select 'alter user '||username||' quota unlimited on '|| default_tablespace||';' from dba_users where default_tablespace in ('DATATBS');
7.2 Recompile Database Objects in Parallel
exec utl_recomp.recomp_parallel(32);
set echo off feedback off timing off verify off heading off
set pagesize 0 linesize 999 trimspool on trimout on
spool compile.sql
select 'alter '||
decode(object_type,'SYNONYM',decode(owner,'PUBLIC','PUBLIC SYNONYM '||object_name,
'SYNONYM '||OWNER||'.'||OBJECT_NAME)||' compile;',
decode(OBJECT_TYPE ,'PACKAGE BODY','PACKAGE',OBJECT_TYPE)||
' '||owner||'.'||object_name||' compile '||
decode(OBJECT_TYPE ,'PACKAGE BODY','BODY;',' ;'))
from dba_objects where status<>'VALID'
order by owner,OBJECT_NAME;
spool off
@compile.sql
The production environment had no invalid objects.
7.3 Compare Database Objects and Data
Run the comparison scripts: create a database link, then compare objects across it. We mainly compared stored procedures, functions, triggers, views, indexes, and tables.
Create the database link to the production environment:
CREATE DATABASE LINK TEST_COMPARE CONNECT TO SYSTEM IDENTIFIED BY xxx USING 'xxxx:1521/xxxx';
Use the following script to compare object counts between the databases:
SELECT OWNER, OBJECT_NAME, OBJECT_TYPE
FROM DBA_OBJECTS@TEST_COMPARE
WHERE OBJECT_NAME NOT LIKE 'BIN%'
AND OBJECT_NAME NOT LIKE 'SYS_%'
AND OWNER IN ('LUOKLE')
MINUS
SELECT OWNER, OBJECT_NAME, OBJECT_TYPE
FROM DBA_OBJECTS
WHERE OBJECT_NAME NOT LIKE 'BIN%'
AND OBJECT_NAME NOT LIKE 'SYS_%'
AND OWNER IN ('LUOKLE');
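The MINUS query reports objects present in production but missing from the migrated database; swapping the operands catches extras as well. The same set difference can be sketched in Python with hypothetical object inventories:

```python
# Compare object inventories, mirroring the MINUS query: each object is
# identified by (owner, object_name, object_type). Sample data is made up.
source = {("LUOKLE", "T1", "TABLE"),
          ("LUOKLE", "IDX_T1", "INDEX"),
          ("LUOKLE", "P1", "PROCEDURE")}
target = {("LUOKLE", "T1", "TABLE"),
          ("LUOKLE", "P1", "PROCEDURE")}

missing = sorted(source - target)   # present at source, absent from target
print(missing)  # [('LUOKLE', 'IDX_T1', 'INDEX')]
```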
Or compare counts by object type (here for the migrated schema LUOKLE):
Source database:
select object_type,count(*) from dba_objects where owner in ('LUOKLE') group by object_type;
Target database:
select object_type,count(*) from dba_objects where owner in ('LUOKLE') group by object_type;
Missing indexes were usually not stored in the transported tablespaces and must be recreated; missing tables are typically global temporary tables, which must be created by hand.
Create them with statements such as:
CREATE INDEX "LUOKLE"."IDX_XXX" ON "LUOKLE"."BI_XXXX" TABLESPACE DATATBS parallel 8;
alter index "LUOKLE"."IDX_XX" noparallel;
CREATE GLOBAL TEMPORARY TABLE "LUOKLE"."TEMP_PAY_BATCH_CREATE_INSTR"
( "BATCH_ID" NUMBER,
"STATUS" CHAR(1)
) ON COMMIT PRESERVE ROWS ;
Comparing data with a hash function
On each side, create a table to hold the hash results:
create table system.get_has_value (dbname varchar2(20),owner varchar2(30),table_name varchar2(100),value varchar2(100),error varchar2(2000));
Create the worklist of tables to verify, dealing them into ten groups:
create sequence system.sequence_checkout_table start with 1 increment by 1 order cycle maxvalue 10 nocache;
CREATE TABLE SYSTEM.checkout_table as select sys_context('USERENV', 'INSTANCE_NAME') dbnme,owner,table_name, system.sequence_checkout_table.NEXTVAL groupid from dba_tables where owner='LUOKLE';
The result:
14:05:21 SQL> SELECT owner, groupid, COUNT (*)
14:05:31 2 FROM SYSTEM.checkout_table
14:05:32 3 GROUP BY owner, groupid,dbnme Order by owner,groupid;
OWNER GROUPID COUNT(*)
------------------------------ ---------- ----------
LUOKLE 1 32
LUOKLE 2 31
LUOKLE 3 31
LUOKLE 4 31
LUOKLE 5 31
LUOKLE 6 31
LUOKLE 7 31
LUOKLE 8 31
LUOKLE 9 31
LUOKLE 10 31
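The cycling sequence (maxvalue 10, cycle) deals tables into ten groups round-robin so the hash jobs can run in parallel; with 311 tables, group 1 ends up with 32 and the other nine with 31, matching the output above. The bucketing logic, sketched in Python with hypothetical table names:

```python
# Round-robin bucketing of 311 tables into 10 groups, mimicking the
# cycling sequence (start with 1, increment by 1, maxvalue 10, cycle).
from collections import Counter

tables = [f"TAB_{i:03d}" for i in range(311)]          # hypothetical names
groups = {t: (i % 10) + 1 for i, t in enumerate(tables)}

counts = Counter(groups.values())
print(counts[1], counts[2], counts[10])  # 32 31 31
```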
Create the hash procedure (SYSTEM first needs select on dba_tab_columns):
grant select on sys.dba_tab_columns to system;
CREATE OR REPLACE PROCEDURE SYSTEM.get_hv_of_data (
avc_owner VARCHAR2,
avc_table VARCHAR2)
AS
lvc_sql_text VARCHAR2 (30000);
ln_hash_value NUMBER;
lvc_error VARCHAR2 (100);
BEGIN
SELECT 'select /*+parallel(a,25)*/sum(dbms_utility.get_hash_value('
|| column_name_path
|| ',0,power(2,30)) ) from '
|| owner
|| '.'
|| table_name
|| ' a '
INTO LVC_SQL_TEXT
FROM (SELECT owner,
table_name,
column_name_path,
ROW_NUMBER ()
OVER (PARTITION BY table_name
ORDER BY table_name, curr_level DESC)
column_name_path_rank
FROM ( SELECT owner,
table_name,
column_name,
RANK,
LEVEL AS curr_level,
LTRIM (
SYS_CONNECT_BY_PATH (column_name, '||''|''||'),
'||''|''||')
column_name_path
FROM ( SELECT owner,
table_name,
'"' || column_name || '"' column_name,
ROW_NUMBER ()
OVER (PARTITION BY table_name
ORDER BY table_name, column_name)
RANK
FROM dba_tab_columns
WHERE owner = UPPER (avc_owner)
AND table_name = UPPER (avc_table)
AND DATA_TYPE IN ('TIMESTAMP(3)',
'INTERVAL DAY(3) TO SECOND(0)',
'TIMESTAMP(6)',
'NVARCHAR2',
'CHAR',
'BINARY_DOUBLE',
'NCHAR',
'DATE',
'RAW',
'VARCHAR2',
'NUMBER')
ORDER BY table_name, column_name)
CONNECT BY table_name = PRIOR table_name
AND RANK - 1 = PRIOR RANK))
WHERE column_name_path_rank = 1;
EXECUTE IMMEDIATE lvc_sql_text INTO ln_hash_value;
lvc_sql_text :=
'insert into system.get_has_value(owner,table_name,value) values(:x1,:x2,:x3)';
EXECUTE IMMEDIATE lvc_sql_text USING avc_owner, avc_table, ln_hash_value;
commit;
DBMS_OUTPUT.put_line (
avc_owner || '.' || avc_table || ' ' || ln_hash_value);
EXCEPTION
WHEN NO_DATA_FOUND
THEN
lvc_error := 'NO DATA FOUND';
lvc_sql_text :=
'insert into system.get_has_value(owner,table_name,error) values(:x1,:x2,:x3)';
EXECUTE IMMEDIATE lvc_sql_text USING avc_owner, avc_table, lvc_error;
commit;
WHEN OTHERS
THEN
lvc_sql_text :=
'insert into system.get_has_value(owner,table_name,error) values(:x1,:x2,:x3)';
EXECUTE IMMEDIATE lvc_sql_text USING avc_owner, avc_table, SQLERRM;
commit;
END;
/
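get_hv_of_data builds, for each table, a SELECT that pipe-joins every hashable column, runs each row through dbms_utility.get_hash_value with a 2^30 bucket count, and sums the row hashes into one table-level value, so identical data yields identical sums regardless of row order. A minimal sketch of the idea outside Oracle, with zlib.crc32 standing in for get_hash_value (the resulting values therefore differ from Oracle's):

```python
import zlib

def table_checksum(rows):
    """Order-insensitive table checksum: hash each row's pipe-joined
    columns and sum the hashes, mirroring
    sum(dbms_utility.get_hash_value(c1||'|'||c2||..., 0, power(2,30)))."""
    total = 0
    for row in rows:
        joined = "|".join("" if v is None else str(v) for v in row)
        total += zlib.crc32(joined.encode()) % 2**30
    return total

a = [(1, "A"), (2, "B")]
b = [(2, "B"), (1, "A")]   # same rows, different order
print(table_checksum(a) == table_checksum(b))  # True
```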
sqlplus system/oracle<<EOF
set heading off linesize 170 pagesize 0 feedback off echo off trimout on trimspool on termout off verify off
exit;
EOF
nohup ./check_source.sh LUOKLE 1 >./source_LUOKLE_cd_1.log 2>&1 &
nohup ./check_source.sh LUOKLE 2 >./source_LUOKLE_cd_2.log 2>&1 &
nohup ./check_source.sh LUOKLE 3 >./source_LUOKLE_cd_3.log 2>&1 &
nohup ./check_source.sh LUOKLE 4 >./source_LUOKLE_cd_4.log 2>&1 &
nohup ./check_source.sh LUOKLE 5 >./source_LUOKLE_cd_5.log 2>&1 &
nohup ./check_source.sh LUOKLE 6 >./source_LUOKLE_cd_6.log 2>&1 &
nohup ./check_source.sh LUOKLE 7 >./source_LUOKLE_cd_7.log 2>&1 &
nohup ./check_source.sh LUOKLE 8 >./source_LUOKLE_cd_8.log 2>&1 &
nohup ./check_source.sh LUOKLE 9 >./source_LUOKLE_cd_9.log 2>&1 &
nohup ./check_source.sh LUOKLE 10 >./source_LUOKLE_cd_10.log 2>&1 &
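Each background job processes one group of tables. The fan-out pattern of the ten nohup invocations can be sketched in Python (check_group below is a hypothetical stand-in for the real sqlplus work):

```python
# Fan out the ten per-group checks concurrently, mirroring the nohup'd
# check_source.sh calls above. check_group is a placeholder for the real work.
from concurrent.futures import ThreadPoolExecutor

def check_group(schema: str, groupid: int) -> str:
    return f"{schema} group {groupid} done"

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(lambda g: check_group("LUOKLE", g), range(1, 11)))

print(len(results))  # 10
```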
checkdata_source.sh
date
sqlplus system/oracle<<EOF
set heading off linesize 170 pagesize 0 feedback off
spool source_check_$1_$2.sql
SELECT 'exec system.get_hv_of_data('''
|| owner
|| ''','''
|| table_name
|| ''')'
FROM system.checkout_table
WHERE owner = UPPER ('$1') and groupid=$2
AND table_name NOT IN (SELECT table_name
FROM dba_tables
WHERE owner = UPPER ('$1')
AND iot_type IS NOT NULL)
AND table_name IN (SELECT table_name
FROM ( SELECT table_name, COUNT (*)
FROM dba_tab_columns
WHERE owner = UPPER ('$1')
AND DATA_TYPE IN ('TIMESTAMP(3)',
'INTERVAL DAY(3) TO SECOND(0)',
'TIMESTAMP(6)',
'NVARCHAR2',
'CHAR',
'BINARY_DOUBLE',
'NCHAR',
'DATE',
'RAW',
'VARCHAR2',
'NUMBER')
GROUP BY table_name
HAVING COUNT (*) > 0))
ORDER BY table_name;
spool off
set serveroutput on
@source_check_$1_$2.sql
exit;
EOF
date
Run the hash-calculation scripts. On Linux, hashing every table under LUOKLE took about 30 minutes for 311 tables in total; 52 tables produced no hash value, and analysis showed those tables were empty:
SQL> select count(*) from LUOKLE.XXXX;
COUNT(*)
----------
0
7.4 Compare and Fix Object Privileges
Compare object-level privileges between the source and target databases; where they differ, run the grant_tab_privs.log generated on the source against the target.
Re-grant SELECT and DML privileges on the objects to the users:
@grant_tab_privs.sql
Generate the script on the source database:
select 'grant ' || privilege || ' on ' || owner || '.' || table_name || ' to ' || grantee || ';' from dba_tab_privs where (grantee in(select username from dba_users where default_tablespace in('DATATBS')) or owner in(select username from dba_users where default_tablespace in('DATATBS'))) and privilege in('SELECT','DELETE','UPDATE','INSERT') and grantable='NO'
union
select 'grant ' || privilege || ' on ' || owner || '.' || table_name || ' to ' || grantee || ' with grant option;' from dba_tab_privs where (grantee in(select username from dba_users where default_tablespace in('DATATBS')) or owner in(select username from dba_users where default_tablespace in('DATATBS'))) and privilege in('SELECT','DELETE','UPDATE','INSERT') and grantable='YES';
7.5 Gather Statistics
To keep the automatic statistics task from kicking in at the same time and consuming resources, disable it in advance:
exec DBMS_AUTO_TASK_ADMIN.DISABLE(client_name => 'auto optimizer stats collection',operation => NULL,window_name => NULL);
Check histogram information:
select count(*),owner,table_name,column_name from dba_tab_histograms
group by owner,table_name,column_name
having count(*) > 2;
Run the collection manually. First enable concurrent gathering:
exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','TRUE');
exec dbms_stats.gather_database_stats(estimate_percent => dbms_stats.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE 1', options => 'GATHER', degree => 8, granularity => 'all', cascade => TRUE);
Monitor the concurrent statistics jobs:
select * from dba_scheduler_jobs where schedule_type = 'IMMEDIATE' and state = 'RUNNING';
Gather data dictionary statistics:
exec DBMS_STATS.GATHER_DICTIONARY_STATS(degree=>16);
Gather fixed-object statistics:
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
Re-enable the automatic collection task and switch concurrency back off:
exec DBMS_AUTO_TASK_ADMIN.ENABLE(client_name => 'auto optimizer stats collection',operation => NULL,window_name => NULL);
exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','false');
The test run went as follows:
13:23:41 SQL> select count(*),owner,table_name,column_name from dba_tab_histograms
13:23:45 2 where owner='LUOKLE'
13:23:46 3 group by owner,table_name,column_name
13:23:46 4 having count(*) > 2;
no rows selected
Elapsed: 00:00:00.10
13:28:06 SQL> exec dbms_stats.gather_database_stats(estimate_percent =>dbms_stats.AUTO_SAMPLE_SIZE,METHOD_OPT=>'FOR ALL COLUMNS SIZE 1',options=> 'GATHER',degree=>8, granularity =>'all',cascade=> TRUE);
PL/SQL procedure successfully completed.
Elapsed: 00:26:51.34
13:55:05 SQL>
Gathering statistics for the entire database took 26 minutes.
13:57:14 SQL> exec DBMS_STATS.GATHER_DICTIONARY_STATS(degree=>16);
PL/SQL procedure successfully completed.
Elapsed: 00:00:18.94
7.6 Modify the Job Parameter
show parameter job_queue_processes;
alter system set job_queue_processes=100 scope=both;
8 XTTS Migration Test Timings (20 TB)
9 XTTS Migration Test Issues
expdp \'/ as sysdba\' directory=xtts dumpfile=expdp_LUOKLE_meta0822.dmp logfile=expdp_LUOKLE_meta0822.log CONTENT=metadata_only SCHEMAS=LUOKLE
The metadata export started at 15:06 and failed with:
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW00" prematurely terminated
ORA-31671: Worker process DW00 had an unhandled exception.
ORA-04030: out of process memory when trying to allocate 3704 bytes (kkoutlCreatePh,kkotbi : kkotbal)
ORA-06512: at "SYS.KUPW$WORKER", line 1887
ORA-06512: at line 2
The metadata export produced large numbers of ORA-04030 errors in the background. Analysis traced them to AMM; switching AMM off and managing memory manually resolved the issue. On the 10g source, sga_target was set to 0.
Errors in file /oracle/app/oracle/diag/rdbms/LUOKLE/orcl1/trace/orcl1_ora_13107324.trc (incident=28001):
ORA-04030: out of process memory when trying to allocate 32808 bytes (TCHK^cadd45dc,kggec.c.kggfa)
Analysis showed the AIX stack limit was set too small; raising the ulimit resolved it.
ERROR IN CONVERSION ORA-19624: operation failed, retry possible
ORA-19505: failed to identify file "/aix_xtts/oradata2/f8rdl6vi_1_1"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-19600: input file is backup piece (/aix_xtts/oradata2/f8rdl6vi_1_1)
ORA-19601: output file is backup piece (/aix_xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_)
CONVERTED BACKUP PIECE /aix_xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_
PL/SQL procedure successfully completed.
ERROR IN CONVERSION ORA-19624: operation failed, retry possible
ORA-19505: failed to identify file "/aix_xtts/oradata2/f9rdl70m_1_1"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-19600: input file is backup piece (/aix_xtts/oradata2/f9rdl70m_1_1)
ORA-19601: output file is backup piece (/aix_xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_)
CONVERTED BACKUP PIECE /aix_xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_
Analysis showed the incremental backups had not been placed in the expected directory.
ORA-19504: failed to create file "/xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 13: Permission denied
Additional information: 1
ORA-19600: input file is backup piece (/xtts/oradata2/f9rdl70m_1_1)
ORA-19601: output file is backup piece (/xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_)
CONVERTED BACKUP PIECE /xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_
PL/SQL procedure successfully completed.
ERROR IN CONVERSION ORA-19624: operation failed, retry possible
ORA-19504: failed to create file "/xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 13: Permission denied
Additional information: 1
ORA-19600: input file is backup piece (/xtts/oradata2/f8rdl6vi_1_1)
ORA-19601: output file is backup piece (/xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_)
CONVERTED BACKUP PIECE /xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_
The NFS directory permissions blocked reads and writes; correcting the permissions resolved it.
NFS mount issue:
mount: 1831-008 giving up on:
192.168.1.100:/xtts
vmount: Operation not permitted.
# nfso -p -o nfs_use_reserved_ports=1
Setting nfs_use_reserved_ports to 1
Setting nfs_use_reserved_ports to 1 in nextboot file
# mount -o cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600 10.20.28.21:/xtts /aix_xtts
10 Summary
XTTS supports cross-platform and cross-version migration and is straightforward to operate. Because the outage window is short, migrations can be completed with relatively little pain, and for large-volume cross-platform, cross-version scenarios it is recommended as the first choice.
Keep the number of incremental rounds in check. More rounds mean each incremental backup carries less data, which shortens the final outage, but every extra round is another opportunity for error. In practice, one incremental backup followed by the final cutover is usually enough; you can even go straight from the initial copy to the cutover.
11 Appendix: xttdriver.pl Script Options
See: 11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
Description of Perl Script xttdriver.pl Options
The following table describes the options available for the main supporting script xttdriver.pl.
-S prepare source for transfer
-S option is used only when Prepare phase method is dbms_file_transfer.
Prepare step is run once on the source system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the source database. This step creates files xttnewdatafiles.txt and getfile.sql.
-G get datafiles from source
-G option is used only when Prepare phase method is dbms_file_transfer.
Get datafiles step is run once on the destination system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database. The -S option must be run beforehand and files xttnewdatafiles.txt and getfile.sql transferred to the destination system.
This option connects to the destination database and runs script getfile.sql. getfile.sql invokes dbms_file_transfer.get_file() subprogram for each datafile to transfer it from the source database directory object (defined by parameter srcdir) to the destination database directory object (defined by parameter dstdir) over a database link (defined by parameter srclink).
-p prepare source for backup
-p option is used only when Prepare phase method is RMAN backup.
Prepare step is run once on the source system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.
This step connects to the source database and runs the xttpreparesrc.sql script once for each tablespace to be transported, as configured in xtt.properties. xttpreparesrc.sql does the following:
1. Verifies the tablespace is online, in READ WRITE mode, and contains no offline datafiles.
2. Identifies the SCN that will be used for the first iteration of the incremental backup step and writes it into file $TMPDIR/xttplan.txt.
3. Creates the initial datafile copies on the destination system in the location specified by the parameter dfcopydir set in xtt.properties. These datafile copies must be transferred manually to the destination system.
4. Creates RMAN script $TMPDIR/rmanconvert.cmd that will be used to convert the datafile copies to the required endian format on the destination system.
-c convert datafiles
-c option is used only when Prepare phase method is RMAN backup.
Convert datafiles step is run once on the destination system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.
This step uses the rmanconvert.cmd file created in the Prepare step to convert the datafile copies to the proper endian format. Converted datafile copies are written on the destination system to the location specified by the parameter storageondest set in xtt.properties.
-i create incremental
Create incremental step is run one or more times on the source system with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.
This step reads the SCNs listed in $TMPDIR/xttplan.txt and generates an incremental backup that will be used to roll forward the datafile copies on the destination system.
-r rollforward datafiles
Rollforward datafiles step is run once for every incremental backup created with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.
This step connects to the incremental convert instance using the parameters cnvinst_home and cnvinst_sid, converts the incremental backup pieces created by the Create Incremental step, then connects to the destination database and rolls forward the datafile copies by applying the incremental for each tablespace being transported.
-s determine new FROM_SCN
Determine new FROM_SCN step is run one or more times with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.
This step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the next incremental backup is created in step 3.1. It reports the mapping of the new FROM_SCN to wall clock time to indicate how far behind the changes in the next incremental backup will be.
-e generate Data Pump TTS command
Generate Data Pump TTS command step is run once on the destination system with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.
This step creates the template of a Data Pump Import command that uses a network_link to import metadata of objects that are in the tablespaces being transported.
-d debug
-d option enables debug mode for xttdriver.pl and RMAN commands it executes. Debug mode can also be enabled by setting environment variable XTTDEBUG=1.