Jul 14, 2015

Oracle Data Pump Import

TABLE_EXISTS_ACTION in Datapump

TABLE_EXISTS_ACTION during IMPORT (IMPDP)

In Data Pump import, the TABLE_EXISTS_ACTION parameter controls what happens when a table being imported already exists in the target database. The default value is SKIP, which means the existing table is left as is, its data is not imported, and the job continues with the next object. However, if CONTENT=DATA_ONLY is specified in the import job, the default is APPEND, and the data is appended to the existing table.

TABLE_EXISTS_ACTION can take the following values (an example command follows the list).

1) SKIP: Leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.

2) APPEND: Loads rows from the source and leaves existing rows unchanged. This is the default if CONTENT=DATA_ONLY is specified.

3) TRUNCATE: Deletes existing rows and then loads rows from the source.

4) REPLACE: Drops the existing table, then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
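
For example, a minimal impdp run that appends rows to tables that already exist; the directory, dumpfile, and schema names below are illustrative, not taken from any job in this post:

$ impdp \"/ as sysdba\" directory=DATA_PUMP_DIR dumpfile=schema1_expdp_%U.dmp \
    logfile=schema1_impdp.log schemas=schema1 table_exists_action=append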

Note
- If you use TRUNCATE or REPLACE, make sure that rows in the affected tables are not targets of any referential constraints.

- If you use SKIP, APPEND, or TRUNCATE, then dependent objects of the existing tables in the source, such as indexes, grants, triggers, and constraints, are ignored. In the case of REPLACE, the dependent objects are dropped and re-created from the source, provided they were not explicitly or implicitly excluded (using EXCLUDE) and they exist in the source dump file or system.

- If you use APPEND or TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table prior to performing any action. If the existing table has active constraints and triggers, it is loaded using the external tables access method. If any row violates an active constraint, the load fails and no data is loaded. You can override this behavior by specifying DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS on the Import command line.

If the data must be loaded but causes constraint violations, you can disable the constraints, import the data, delete the rows that cause the problems, and then re-enable the constraints (see the sketch after these notes).

- When you use APPEND, the data is always loaded into new space; any existing free space in the segment is not reused. After the import operation, you may therefore wish to compress (reorganize) the data to reclaim that space.

- TRUNCATE cannot be used on clustered tables or over network links.
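
As a rough sketch of the disable/load/re-enable approach mentioned in the notes above (the table, column, and constraint names here are hypothetical):

-- Disable the constraint that would be violated
ALTER TABLE scott.emp DISABLE CONSTRAINT emp_dept_fk;

-- Load the data, e.g.:
-- $ impdp \"/ as sysdba\" directory=DATA_PUMP_DIR dumpfile=emp.dmp \
--         tables=scott.emp table_exists_action=append

-- Delete the rows that violate the constraint, then re-enable it
DELETE FROM scott.emp e
 WHERE NOT EXISTS (SELECT 1 FROM scott.dept d WHERE d.deptno = e.deptno);
COMMIT;

ALTER TABLE scott.emp ENABLE CONSTRAINT emp_dept_fk;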

In the case of original import (imp), use the ignore=y option to append data into an existing table. ignore=y suppresses the object-creation errors and messages, so rows are imported into the existing tables.
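
A minimal original-import sketch using ignore=y (the file and owner names are illustrative):

$ nohup imp userid=\'/ as sysdba\' file=emp_exp.dmp log=emp_imp.log fromuser=owner touser=owner ignore=y &
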
How to Copy files from ASM to filesystem and filesystem to ASM
Copy files from ASM to filesystem
---------------------------------
$ps -ef| grep pmon
$. oraenv
ORACLE_SID = [dbinst1] ? +ASM1
The Oracle base for ORACLE_HOME=/opt/app/11.2.0/grid is /opt/app/oracle
$asmcmd

ASMCMD> lsdg
ASMCMD> cd FRA/TESTDB/EXPDP
ASMCMD> ls
dumpfile1.dmp
dumpfile2.dmp
dumpfile3.dmp
ASMCMD> cp dumpfile1.dmp dumpfile2.dmp dumpfile3.dmp /oracle/backup/testdb/expdp
copying +FRA/TESTDB/EXPDP/dumpfile1.dmp -> /oracle/backup/testdb/expdp/dumpfile1.dmp
copying +FRA/TESTDB/EXPDP/dumpfile2.dmp -> /oracle/backup/testdb/expdp/dumpfile2.dmp
copying +FRA/TESTDB/EXPDP/dumpfile3.dmp -> /oracle/backup/testdb/expdp/dumpfile3.dmp
ASMCMD> exit

$cd /oracle/backup/testdb/expdp/
$ls -lrt dumpfile*.dmp
dumpfile1.dmp
dumpfile2.dmp
dumpfile3.dmp

Copy files from filesystem to ASM
---------------------------------
$cd /oracle/backup/testdb/expdp/
$ls -lrt dumpfile*.dmp
dumpfile1.dmp
dumpfile2.dmp
dumpfile3.dmp

$ps -ef| grep pmon
$. oraenv
ORACLE_SID = [dbinst1] ? +ASM1
The Oracle base for ORACLE_HOME=/opt/app/11.2.0/grid is /opt/app/oracle
$asmcmd

ASMCMD> lsdg
ASMCMD> cd FRA/TESTDB/IMPDP
ASMCMD> ls
ASMCMD> cp /oracle/backup/testdb/expdp/dumpfile1.dmp '+FRA/TESTDB/IMPDP/'
copying /oracle/backup/testdb/expdp/dumpfile1.dmp -> +FRA/TESTDB/IMPDP/dumpfile1.dmp
ASMCMD> cp /oracle/backup/testdb/expdp/dumpfile2.dmp '+FRA/TESTDB/IMPDP/'
copying /oracle/backup/testdb/expdp/dumpfile2.dmp -> +FRA/TESTDB/IMPDP/dumpfile2.dmp
ASMCMD> cp /oracle/backup/testdb/expdp/dumpfile3.dmp '+FRA/TESTDB/IMPDP/'
copying /oracle/backup/testdb/expdp/dumpfile3.dmp -> +FRA/TESTDB/IMPDP/dumpfile3.dmp
ASMCMD> ls
dumpfile1.dmp
dumpfile2.dmp
dumpfile3.dmp

Schema Refresh using ASM Diskgroups
How to create Datapump Export Dumps within ASM diskgroups

Schema Status
-------------

select username,account_status,default_tablespace from dba_users where username=upper('&username');

select owner,sum(bytes)/1024/1024 "SIZE in MB" from dba_segments where owner=upper('&owner') group by owner;

select owner,object_type,count(*) from dba_objects where owner=upper('&owner') and object_name not like 'BIN$%' group by object_type,owner order by object_type;


EXPORT (EXPDP)

$ps -ef| grep pmon
$. oraenv
ORACLE_SID = [dbinst1] ? +ASM1
The Oracle base for ORACLE_HOME=/opt/app/11.2.0/grid is /opt/app/oracle
$asmcmd
ASMCMD> cd DATA01/TESTDB
ASMCMD> mkdir EXPDP

1. Create a directory object pointing to the ASM location.

SQL> create or replace directory DATAPUMP_ASM_DIR as '+DATA01/TESTDB/EXPDP';
Directory created.
SQL> GRANT READ,WRITE ON DIRECTORY DATAPUMP_ASM_DIR TO SYSTEM;

2. Create a logfile directory in filesystem, since logfiles can't be stored in ASM.
SQL> create or replace directory DATAPUMP_LOG_DIR as '/oracle/backup/testdb/expdp';
Directory created.
SQL> GRANT READ,WRITE ON DIRECTORY DATAPUMP_LOG_DIR TO SYSTEM;

3. Create the logfile directory on the filesystem at the OS level.
$ cd /oracle/backup/testdb/
$ mkdir expdp
$ chmod -R 755 expdp
$ cd expdp
$pwd
/oracle/backup/testdb/expdp
$ df -h /oracle/backup/testdb/expdp    (or)
$ df -gt /oracle/backup/testdb/expdp

$ sqlplus "/as sysdba"

set pages 50000 lines 32767
col DIRECTORY_PATH for a80;
select * from dba_directories;

OWNER   DIRECTORY_NAME     DIRECTORY_PATH
------- ------------------ ---------------------------------------
SYS     DATAPUMP_ASM_DIR   +DATA01/TESTDB/EXPDP
SYS     DATAPUMP_LOG_DIR   /oracle/backup/testdb/expdp

Where DATAPUMP_ASM_DIR is an ASM-based directory
and   DATAPUMP_LOG_DIR is on a filesystem.

Here, the export dump is written to the ASM-based directory,
      but the log is written to another directory which is on the filesystem.

We can write export dumps to ASM diskgroups, but we can't write the expdp logfile to ASM (as it is a text file).

$vi schema_export_in_ASM.par

userid="/ as sysdba"
directory=DATAPUMP_ASM_DIR
dumpfile=source_schemaname_expdp_%U.dmp
logfile=DATAPUMP_LOG_DIR:schema_export_in_ASM.log
schemas=schema1,schema2
exclude=statistics
parallel=6
compression=all
cluster=n
# (cluster parameter: 11g Release 2 onwards, RAC)

:wq

$nohup expdp parfile=schema_export_in_ASM.par &

$tail -f nohup.out    (or)

$tail -f schema_export_in_ASM.log

=======================================================================

IMPORT (IMPDP)

Schema Status
-------------

select username,account_status,default_tablespace from dba_users where username=upper('&username');

select owner,sum(bytes)/1024/1024 "SIZE in MB" from dba_segments where owner=upper('&owner') group by owner;

select owner,object_type,count(*) from dba_objects where owner=upper('&owner') and object_name not like 'BIN$%' group by object_type,owner order by object_type;

$ps -ef| grep pmon
$. oraenv
ORACLE_SID = [dbinst1] ? +ASM1
The Oracle base for ORACLE_HOME=/opt/app/11.2.0/grid is /opt/app/oracle
$asmcmd
ASMCMD> cd DATA01/TESTDB
ASMCMD> mkdir IMPDP

1. Create a directory object pointing to the ASM location.

SQL> create or replace directory DATAPUMP_ASM_DIR as '+DATA01/TESTDB/IMPDP';
Directory created.
SQL> GRANT READ,WRITE ON DIRECTORY DATAPUMP_ASM_DIR TO SYSTEM;

2. Create a logfile directory in filesystem, since logfiles can't be stored in ASM.

SQL> create or replace directory DATAPUMP_LOG_DIR as '/oracle/backup/testdb/impdp';
Directory created.
SQL> GRANT READ,WRITE ON DIRECTORY DATAPUMP_LOG_DIR TO SYSTEM;

3. Create the logfile directory on the filesystem at the OS level.
$ cd /oracle/backup/testdb/
$ mkdir impdp
$ chmod -R 755 impdp
$ cd impdp
$pwd
/oracle/backup/testdb/impdp

$ sqlplus "/as sysdba"

set pages 50000 lines 32767
col DIRECTORY_PATH for a80;
select * from dba_directories;

OWNER   DIRECTORY_NAME     DIRECTORY_PATH
------- ------------------ ---------------------------------------
SYS     DATAPUMP_ASM_DIR   +DATA01/TESTDB/IMPDP
SYS     DATAPUMP_LOG_DIR   /oracle/backup/testdb/impdp

Where DATAPUMP_ASM_DIR is an ASM-based directory
and   DATAPUMP_LOG_DIR is on a filesystem.

Here, the dump file is read from the ASM-based directory,
      but the log is written to another directory which is on the filesystem.

We can read dump files from ASM diskgroups, but we can't write the impdp logfile to ASM (as it is a text file).

$vi import_schema_export_in_ASM.par

userid="/ as sysdba"
directory=DATAPUMP_ASM_DIR
dumpfile=source_schemaname_expdp_%U.dmp
logfile=DATAPUMP_LOG_DIR:import_schema_export_in_ASM.log
schemas=schema1,schema2
parallel=6
cluster=n
# (cluster parameter: 11g Release 2 onwards, RAC)

If required,
REMAP_SCHEMA=source_schema1:destination_schema1,source_schema2:destination_schema2
REMAP_TABLESPACE=source_tablespace1:destination_tablespace1,source_tablespace2:destination_tablespace2

:wq

$nohup impdp parfile=import_schema_export_in_ASM.par &

$tail -f nohup.out

$tail -f import_schema_export_in_ASM.log


Schema Status
-------------

select username,account_status,default_tablespace from dba_users where username=upper('&username');

select owner,sum(bytes)/1024/1024 "SIZE in MB" from dba_segments where owner=upper('&owner') group by owner;

select owner,object_type,count(*) from dba_objects where owner=upper('&owner') and object_name not like 'BIN$%' group by object_type,owner order by object_type;  
Network Level Import of Schema
How to Take a Schema Import at the Network Level

The NETWORK_LINK parameter initiates a network import. This means that the impdp client initiates the import request, typically to the local database. That server contacts the remote source database referenced by the database link in the NETWORK_LINK parameter, retrieves the data, and writes it directly back to the target database. There are no dump files involved.

1. Create a TNS entry for the remote database
Add an entry for the remote (source) database to the local database's tnsnames.ora file, then create a database link that uses it. The user specified in the database link must have sufficient privileges on the remote database to read the objects being imported.

SOURCE_DATABASE_LINK =
 (DESCRIPTION =
   (ADDRESS = (PROTOCOL = TCP)(HOST = remote-loc)(PORT = 1521))
   (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SID = remote-db)
   )
 )

2. Create the Database Link on the Destination database:

create database link SOURCE_DATABASE_LINK connect to SYSTEM identified by password using 'SOURCE_DATABASE_LINK';
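
A quick sanity check that the link resolves and connects (assuming the link created above):

SQL> select * from dual@SOURCE_DATABASE_LINK;

D
-
X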

3. Script to get database link information:

$ sqlplus "/as sysdba"

DBLINKS DETAILS
---------------
set pages 50000 lines 32767
col owner for a15
col username for a10
col host for a20
col created for a20

COL DB_LINK FORMAT A30

select owner, db_link, username, host, to_char(created,'dd-mon-yyyy hh24:mi:ss') CREATED from DBA_DB_LINKS order by owner, db_link;

OWNER      DB_LINK                        USERNAME HOST    CREATED
---------- ------------------------------ -------- ------- -------

4. PAR file.
$vi schema_network_level_import.par

userid="/ as sysdba"
directory=DATA_PUMP_DIR
logfile=schema_network_level_import.log
schemas=schema1,schema2
NETWORK_LINK=source_database_link
FLASHBACK_TIME=SYSTIMESTAMP
exclude=statistics
parallel=6
cluster=n

:wq

$nohup impdp parfile=schema_network_level_import.par &

$tail -f nohup.out

$tail -f schema_network_level_import.log

Optional Parameters:
------------------

ESTIMATE={BLOCKS | STATISTICS}
FLASHBACK_SCN=scn_number
FLASHBACK_TIME="TO_TIMESTAMP()"

The Import ESTIMATE parameter is valid only if the NETWORK_LINK parameter is also specified.

The FLASHBACK_SCN parameter is valid only when the NETWORK_LINK parameter is also specified. This is because the value is passed to the source system to provide an SCN-consistent data extraction.

The FLASHBACK_TIME parameter is likewise valid only when the NETWORK_LINK parameter is also specified, because the value is passed to the source system to provide a time-consistent import.

FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.
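
For instance, either value could be pinned in the parfile for a consistent extraction; the timestamp and SCN below are placeholders, not values from this system:

FLASHBACK_TIME="TO_TIMESTAMP('14-07-2015 09:00:00','DD-MM-YYYY HH24:MI:SS')"
# or, alternatively (not both):
# FLASHBACK_SCN=1234567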

Longops DATAPUMP
DATAPUMP STATUS
---------------

 set pages 50000 lines 32767
col OPNAME for a10
col UNITS for a10
col TARGET_DESC for a10
col OWNER_NAME for a10
col STATE for a10
col JOB_MODE for a10
col JOB_NAME for a20
col MESSAGE for a30
alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';
select sid,serial#,sofar,totalwork,units,target_desc,start_time,last_update_time,
round(time_remaining/60,2) "REMAINING MINS",round(elapsed_seconds/60,2) "ELAPSED MINS",
round((time_remaining+elapsed_seconds)/60,2)"TOTAL MINS",
dp.owner_name,dp.state,dp.job_name,dp.job_mode,dp.workers,
message
from gv$session_longops sl,gv$datapump_job dp
where sl.opname=dp.job_name and sofar != totalwork
/

Currently Active DataPump Operations
---------------------------------------
set pages 9999 lines 300

TTITLE 'Currently Active DataPump Operations'
COL owner_name          FORMAT A06      HEADING 'Owner'
COL job_name            FORMAT A20      HEADING 'JobName'
COL operation           FORMAT A12      HEADING 'Operation'
COL job_mode            FORMAT A12      HEADING 'JobMode'
COL state               FORMAT A12      HEADING 'State'
COL degree              FORMAT 9999     HEADING 'Degr'
COL attached_sessions   FORMAT 9999     HEADING 'Sess'

SELECT
 owner_name
,job_name
,operation
,job_mode
,state
,degree
,attached_sessions
FROM dba_datapump_jobs
/

Currently Active DataPump Sessions
-------------------------------------
set pages 9999 lines 300

TTITLE 'Currently Active DataPump Sessions'
COL owner_name          FORMAT A06      HEADING 'Owner'
COL job_name            FORMAT A20      HEADING 'Job'
COL osuser              FORMAT A12      HEADING 'UserID'

SELECT
 DPS.owner_name
,DPS.job_name
,S.osuser
,S.sid
,S.serial#
,S.status
FROM
 dba_datapump_sessions DPS
,v$session S
WHERE S.saddr = DPS.saddr
/

DBA_DATAPUMP_JOBS - This view will show the active Data Pump jobs, their state, degree of parallelism, and the number of sessions attached.
-----------------
set pages 9999 lines 300

COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state

select * from dba_datapump_jobs where job_name not like 'BIN$%' order by 1,2;
OWNER_NAME JOB_NAME               OPERATION  JOB_MODE   STATE         DEGREE    ATTACHED_SESSIONS
---------- ---------------------- ---------- ---------- ------------- --------- -----------------

DBA_DATAPUMP_SESSIONS - This view gives the SADDR (session address) values that assist in determining why a Data Pump session may be having problems.
---------------------
set pages 9999 lines 300

COL owner_name FORMAT a10
COL job_name FORMAT a20

select owner_name,job_name,saddr,sid,serial# from v$session s, dba_datapump_sessions d
where s.saddr = d.saddr;

Longops IMPORT
IMPORT STATUS
------------
jobs
or
ps -ef| grep impdp or ps -ef| grep imp or ps -ef|grep <processid>
or
ls -lrth
tail -f nohup.out
tail -f IMPORT_SOURCE_TABLENAME_EXP.log

cat  IMPORT_SOURCE_TABLENAME_EXP.log

(OR)

IMPORT STATUS
------------
$ impdp attach=<JOB_NAME>
username: / as sysdba
Import> STATUS

If required, to kill the IMPORT job, press CTRL+C in the client session running the job (or re-attach with impdp attach=<JOB_NAME> as above) to reach the interactive prompt:

CTRL+C
Import> STATUS
Import> STOP_JOB=IMMEDIATE   (OR)   KILL_JOB


IMPORT STATUS
------------
 set pages 50000 lines 32767
col OPNAME for a10
col UNITS for a5
col TARGET_DESC for a10
col OWNER_NAME for a10
col STATE for a10
col JOB_MODE for a10
col JOB_NAME for a20
col MESSAGE for a30
alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';
select sid,serial#,sofar,totalwork,units,target_desc,start_time,last_update_time,
round(time_remaining/60,2) "REMAINING MINS",round(elapsed_seconds/60,2) "ELAPSED MINS",
round((time_remaining+elapsed_seconds)/60,2)"TOTAL MINS",
dp.owner_name,dp.state,dp.job_name,dp.job_mode,dp.workers,
message
from gv$session_longops sl,gv$datapump_job dp
where sl.opname=dp.job_name and sofar != totalwork
/

IMPORT SPEED
------------

set pages 50000 lines 32767
col TABLE_NAME for a80

SELECT SUBSTR(sql_text, INSTR(sql_text,'INTO "'),120) table_name
, rows_processed
, ROUND( (sysdate-TO_DATE(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes
, TRUNC(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_minute
FROM sys.v_$sqlarea
WHERE sql_text like 'INSERT %INTO "%' AND command_type = 2 AND open_versions > 0
/

(OR)

DBA_DATAPUMP_JOBS - This view will show the active Data Pump jobs, their state, degree of parallelism, and the number of sessions attached.

set pages 50000 lines 32767

COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state

select * from dba_datapump_jobs where job_name not like 'BIN$%' order by 1,2;
OWNER_NAME JOB_NAME               OPERATION  JOB_MODE   STATE         DEGREE    ATTACHED_SESSIONS
---------- ---------------------- ---------- ---------- ------------- --------- -----------------

DBA_DATAPUMP_SESSIONS
 - This view gives the SADDR (session address) values that assist in determining why a Data Pump session may be having problems.

set pages 50000 lines 32767
COL owner_name FORMAT a10
COL job_name FORMAT a20

select owner_name,job_name,saddr,sid,serial# from v$session s, dba_datapump_sessions d
where s.saddr = d.saddr;

select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;

IMP
---
nohup imp userid=\'/ as sysdba\' file=SOURCE_TABLENAME_EXP.dmp log=IMPORT_SOURCE_TABLENAME_EXP.log fromuser=OWNER touser=OWNER tables=TABLENAME ignore=y &

