
Datapump import failed with ORA-04091: table is mutating

Recently, one of my application teams asked me to refresh a few tables in a lower environment database using the production copy of those tables. As usual, I opted to perform a Datapump import (impdp) to refresh (data only) these tables. However, to my surprise, the import failed with an unusual error, as shown below.

##---
##--- impdp failed with ORA-04091 table mutating error ---##
##---
Import: Release 12.1.0.2.0 - Production on Tue May 17 17:28:18 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
Master table "MYAPP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MYAPP"."SYS_IMPORT_TABLE_01":  myapp/********@mypdb1 directory=EXP dumpfile=myapp_exp.dmp logfile=pop_doc.log tables=MYAPP.POPULAR_DOCUMENT table_exists_action=TRUNCATE content=DATA_ONLY
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "MYAPP"."POPULAR_DOCUMENT" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-04091: table MYAPP.POPULAR_DOCUMENT is mutating, trigger/function may not see it
ORA-06512: at "MYAPP.TR_POPULAR_DOCUMENT", line 6
ORA-04088: error during execution of trigger 'MYAPP.TR_POPULAR_DOCUMENT'
Job "MYAPP"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at Tue May 17 17:28:23 2016 elapsed 0 00:00:02

As a first step of the investigation, I quickly looked up the error description with the oerr utility; here is what it states.

##---
##--- ORA-04091 error description ---##
##---
[oracle@cloudserver1 ~]$ oerr ora 04091
04091, 00000, "table %s.%s is mutating, trigger/function may not see it"
// *Cause: A trigger (or a user defined plsql function that is referenced in
//         this statement) attempted to look at (or modify) a table that was
//         in the middle of being modified by the statement which fired it.
// *Action: Rewrite the trigger (or function) so it does not read that table.

As per Oracle documentation:


A mutating table is a table that is currently being modified by an update, delete, or insert statement. You will encounter the ORA-04091 error if you have a row trigger that reads or modifies the mutating table. For example, if your trigger contains a select statement or an update statement referencing the table it is triggering off of, you will receive the error.
Another way that this error can occur is if the trigger has statements to change the primary, foreign or unique key columns of the table the trigger is triggering from.

Table mutation detection acts as a defence that prevents a transaction from accessing inconsistent data while the table is being modified within the same transaction. This way, Oracle guarantees the fundamental principle of data consistency.

As per these descriptions, it seems a trigger (MYAPP.TR_POPULAR_DOCUMENT in our case) attempted to query or modify a table (MYAPP.POPULAR_DOCUMENT in our case) that was already in the middle of being modified by the statement that fired it (in our case, the INSERT from the Datapump import job). Let’s look at the trigger itself to troubleshoot further.

---//
---// row level (before event) trigger enabled for the concerned table //---
---//
SQL> select owner,trigger_name,trigger_type,table_name,status
  2  from dba_triggers where table_name='POPULAR_DOCUMENT';

OWNER      TRIGGER_NAME         TRIGGER_TYPE         TABLE_NAME           STATUS
---------- -------------------- -------------------- -------------------- --------
MYAPP      TR_POPULAR_DOCUMENT  BEFORE EACH ROW      POPULAR_DOCUMENT     ENABLED

---//
---// trigger definition from the database //---
---//
SQL> select dbms_metadata.get_ddl('TRIGGER','TR_POPULAR_DOCUMENT','MYAPP') ddl from dual;

DDL
----------------------------------------------------------------------------------------------------
CREATE OR REPLACE EDITIONABLE TRIGGER "MYAPP"."TR_POPULAR_DOCUMENT"
BEFORE INSERT ON POPULAR_DOCUMENT
FOR EACH ROW
BEGIN
DECLARE
  lng_popular_document_id   NUMBER;
  lng_count                 NUMBER;
BEGIN
  SELECT Count( * )
  INTO   lng_count
  FROM   POPULAR_DOCUMENT
  WHERE  user_id = :NEW.user_id AND
         document_id = :NEW.document_id;
  IF lng_count = 0 THEN
    IF :NEW.popular_id IS NULL OR
       :NEW.popular_id = 0 THEN
      SELECT seq_pop_doc_id.NEXTVAL
      INTO   lng_popular_document_id
      FROM   DUAL;
      :NEW.popular_id := lng_popular_document_id;
    END IF;
  ELSE
    Raise_application_error( -20001, 'Popular already exists in the user''s My Popular' );
  END IF;
END;
END;
ALTER TRIGGER "MYAPP"."TR_POPULAR_DOCUMENT" ENABLE

---//
---// table definition //---
---//
SQL> desc POPULAR_DOCUMENT
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 POPULAR_ID                    NOT NULL NUMBER(10)
 USER_ID                       NOT NULL NUMBER(10)
 DOCUMENT_ID                   NOT NULL VARCHAR2(10 CHAR)
 TITLE                         NOT NULL VARCHAR2(4000 CHAR)
 MODULE_TYPE                   NOT NULL NUMBER(5)
 CREATED_DATE                  NOT NULL DATE
 ACCESSED_DATE                 NOT NULL DATE
 CONTENTDATABASEID                      NUMBER(5)

This is a simple trigger that runs before the INSERT of each row into the table POPULAR_DOCUMENT and checks whether a record with the matching USER_ID and DOCUMENT_ID already exists in the same POPULAR_DOCUMENT table. If no such record exists, it fetches the next value from the sequence SEQ_POP_DOC_ID whenever POPULAR_ID is passed as NULL or 0; if a matching record does exist, it raises an application error.

On its own, this trigger should not cause a mutation error, as there would be a consistent view of the table data each time the trigger executes before a single row insert statement. To validate this, I inserted a few dummy records into the POPULAR_DOCUMENT table, and the records were inserted without any errors, as shown below.

---//
---// records inserted without any mutating errors //---
---//
SQL> insert into POPULAR_DOCUMENT (USER_ID,DOCUMENT_ID,TITLE,MODULE_TYPE,CREATED_DATE,ACCESSED_DATE) 
  2  values (1,'1','Sample Doc','3',sysdate,sysdate);

1 row created.

SQL> insert into POPULAR_DOCUMENT (USER_ID,DOCUMENT_ID,TITLE,MODULE_TYPE,CREATED_DATE,ACCESSED_DATE) 
  2  values (1,'4','Sample Doc','3',sysdate,sysdate);

1 row created.

As we can see, we are able to insert records without causing any table mutation. However, when we run a Datapump import (impdp) to load data, it fails with table mutation errors for this table. This clearly indicates that Datapump (impdp) is doing something different that generates an inconsistent view of the table data within the same transaction and in turn causes the table mutation.

Let’s find out more details about the Datapump import. To help with the diagnosis, I enabled tracing for the Datapump import (TRACE=1FF0300); here are some key findings from the trace file.
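
For reference, this tracing can be enabled by simply adding the (undocumented) TRACE parameter to the same impdp command; a minimal sketch based on the parameters used above:

##---
##--- enabling Datapump trace for the import (sketch) ---##
##---
[oracle@cloudserver1 ~]$ impdp myapp@mypdb1 directory=EXP dumpfile=myapp_exp.dmp logfile=pop_doc.log \
    tables=MYAPP.POPULAR_DOCUMENT table_exists_action=TRUNCATE content=DATA_ONLY TRACE=1FF0300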

##---
##--- Parallel DML is enabled for Datapump import session ---##
##---
SHDW:17:28:22.089: *** DATAPUMP_JOB call ***
META:17:28:22.092: DEBUG set by number
META:17:28:22.092: v_debug_enable set
KUPP:17:28:22.092: Current trace/debug flags: 01FF0300 = 33489664
*** MODULE NAME:(Data Pump Worker) 2016-05-17 17:28:22.093
*** ACTION NAME:(SYS_IMPORT_TABLE_01) 2016-05-17 17:28:22.093

KUPW:17:28:22.093: 0: ALTER SESSION ENABLE PARALLEL DML called.
KUPW:17:28:22.093: 0: ALTER SESSION ENABLE PARALLEL DML returned.
KUPV:17:28:22.093: Attach request for job: MYAPP.SYS_IMPORT_TABLE_01

---
--- output trimmed for readability
---

##---
##--- Datapump opted for an EXTERNAL table load ---##
##---
KUPD:17:28:23.345: KUPD$DATA_INT access method = 4
KUPD:17:28:23.345: Table load using External Table
KUPD:17:28:23.345: Current context
KUPD:17:28:23.345:            index         = 1
KUPD:17:28:23.345:            counter       = 2
KUPD:17:28:23.345:            in_use        = TRUE
KUPD:17:28:23.345:            handle        = 20001
KUPD:17:28:23.345:            schema name   = MYAPP
KUPD:17:28:23.345:            table name    = POPULAR_DOCUMENT
KUPD:17:28:23.345:            file size     = 0
KUPD:17:28:23.345:            process order = 5
KUPD:17:28:23.345:            set param flags= 16896
KUPD:17:28:23.345:            dp flags      = 0
KUPD:17:28:23.345:            scn           = 0
KUPD:17:28:23.345:            job name      = SYS_IMPORT_TABLE_01
KUPD:17:28:23.345:            job owner     = MYAPP
KUPD:17:28:23.345:            data options  = 0
KUPD:17:28:23.345:     table_exists_action  = TRUNCATE
KUPD:17:28:23.345:        alternate method  = 0
KUPD:17:28:23.345: in EtConvLoad
KUPD:17:28:23.345: in Get_Load_SQL
KUPD:17:28:23.346: in Do_Modify with transform MODIFY
KUPD:17:28:23.353: in Setup_metadata_environ
META:17:28:23.353: set xform param: EXT_TABLE_NAME:ET$00123AEA0001
KUPD:17:28:23.353: External table name is ET$00123AEA0001
META:17:28:23.353: set xform param: EXT_TABLE_CLAUSE: ( TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY "EXP" ACCESS PARAMETERS ( DEBUG = (3 , 33489664) DATAPUMP INTERNAL TABLE "MYAPP"."POPULAR_DOCUMENT"  JOB ( "MYAPP","SYS_IMPORT_TABLE_01",5) WORKERID 1 PARALLEL 1 VERSION '12.1.0.2.0' ENCRYPTPASSWORDISNULL  COMPRESSION DISABLED  ENCRYPTION DISABLED TABLEEXISTS) LOCATION ('bogus.dat') )  PARALLEL 1 REJECT LIMIT UNLIMITED
KUPD:17:28:23.353: Setup_metadata_environ: external table clause built
META:17:28:23.353: set xform param: OPERATION_TYPE:IMPORT
META:17:28:23.353: set xform param: TARGET_TABLE_SCHEMA:MYAPP
META:17:28:23.353: set xform param: TARGET_TABLE_NAME:POPULAR_DOCUMENT
META:17:28:23.353: set xform param: EXT_TABLE_SCHEMA:MYAPP

If we scroll further down the trace file, we can see that the Datapump import tried to perform a parallel direct path multi row insert from the external table (ET$00123AEA0001) into the target table (POPULAR_DOCUMENT) using an INSERT INTO ... SELECT statement, as shown below.

##---
##--- Datapump import tried to perform direct path multi row insert ---##
##--- 
KUPD:17:28:23.548: CREATE TABLE "MYAPP"."ET$00123AEA0001"
   (    "POPULAR_ID" NUMBER(10,0),
        "USER_ID" NUMBER(10,0),
        "DOCUMENT_ID" VARCHAR2(10 CHAR),
        "TITLE" VARCHAR2(4000 CHAR),
        "MODULE_TYPE" NUMBER(5,0),
        "CREATED_DATE" DATE,
        "ACCESSED_DATE" DATE,
        "CONTENTDATABASEID" NUMBER(5,0)
   ) ORGANIZATION EXTERNAL
    ( TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY "EXP" ACCESS PARAMETERS ( DEBUG = (3 , 33489664) DATAPUMP INTERNAL TABLE "MYAPP"."POPULAR_DOCUMENT"  JOB ( "MYAPP","SYS_IMPORT_TABLE_01",5) WORKERID 1 PARALLEL 1 VERSION '12.1.0.2.0' ENCRYPTPASSWORDISNULL  COMPRESSION DISABLED  ENCRYPTION DISABLED TABLEEXISTS) LOCATION ('bogus.dat') )  PARALLEL 1 REJECT LIMIT UNLIMITED

KUPD:17:28:23.548: INSERT /*+ APPEND PARALLEL("POPULAR_DOCUMENT",1)+*/ INTO RELATIONAL("MYAPP"."POPULAR_DOCUMENT" NOT XMLTYPE) ("POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABASEID")
   SELECT "POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABASEID"
    FROM "MYAPP"."ET$00123AEA0001" KU$
KUPD:17:28:23.548: in execute_sql
KUPD:17:28:23.548: Verb item: DROP
KUPD:17:28:23.552: Verb item: CREATE
KUPD:17:28:23.552: Start to execute create table stmt
KUPD:17:28:23.559: Just executed dbms_sql.parse
KUPD:17:28:23.559: Verb item: INSERT
KUPD:17:28:23.559: in explain_plan_to_tracefile
KUPD:17:28:23.565: Sql plan statement is EXPLAIN PLAN INTO SYS.DATA_PUMP_XPL_TABLE$ FOR INSERT /*+ APPEND PARALLEL("POPULAR_DOCUMENT",1)+*/ INTO RELATIONAL("MYAPP"."POPULAR_DOCUMENT" NOT XMLTYPE) ("POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABAS
KUPD:17:28:23.565: Sql plan statement is EID")
   SELECT "POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABASEID"
    FROM "MYAPP"."ET$00123AEA0001" KU$
KUPA:17:28:23.578: parse tree dump for ET$00123AEA0001

However, due to the presence of the row level trigger (TR_POPULAR_DOCUMENT) on the target table (POPULAR_DOCUMENT), direct path insert and parallel DML were disabled during the load, and a conventional INSERT AS SELECT (IAS) was executed to load the data from the external table into the target table; this ultimately failed due to table mutation, as shown below.

##---
##--- direct path and parallel DML disabled due to trigger; IAS failed ---##
##---
KUPA: Total datastream length processed is 0
KUPD:17:28:23.594: Explain plan output
KUPD:17:28:23.701: Plan hash value: 4129170041
KUPD:17:28:23.701:
KUPD:17:28:23.701: ------------------------------------------------------------------------------------------------
KUPD:17:28:23.701: | Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
KUPD:17:28:23.701: ------------------------------------------------------------------------------------------------
KUPD:17:28:23.701: |   0 | INSERT STATEMENT            |                  |  8168 |    16M|    29   (0)| 00:00:01 |
KUPD:17:28:23.701: |   1 |  LOAD TABLE CONVENTIONAL    | POPULAR_DOCUMENT |       |       |            |          |
KUPD:17:28:23.701: |   2 |   EXTERNAL TABLE ACCESS FULL| ET$00123AEA0001  |  8168 |    16M|    29   (0)| 00:00:01 |
KUPD:17:28:23.701: ------------------------------------------------------------------------------------------------
KUPD:17:28:23.701:
KUPD:17:28:23.701: Note
KUPD:17:28:23.701: -----
KUPD:17:28:23.701:    - PDML disabled because triggers are defined
KUPD:17:28:23.701:    - Direct Load disabled because triggers are defined
KUPD:17:28:23.702:    - Direct Load disabled because triggers are defined
KUPD:17:28:23.702: executing IAS using DBMS_SQL

---
--- output trimmed for readability
---

KUPD:17:28:23.738: Exception raised
KUPD:17:28:23.738: Sqlcode is -29913
KUPD:17:28:23.738: Drop external table, ET$00123AEA0001
KUPD:17:28:23.751: Table ET$00123AEA0001 dropped
KUPD:17:28:23.751:  Error stack is ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-04091: table MYAPP.POPULAR_DOCUMENT is mutating, trigger/function may not see it
ORA-06512: at "MYAPP.TR_POPULAR_DOCUMENT", line 6
ORA-04088: error during execution of trigger 'MYAPP.TR_POPULAR_DOCUMENT'
KUPD:17:28:23.751: Exception raised
KUPD:17:28:23.751: Sqlcode is -29913
KUPD:17:28:23.751: start_job: external table  dropped
KUPD:17:28:23.751: in free_context_entry
KUPW:17:28:23.753: 1: KUPD$DATA.START_JOB returned. In procedure CREATE_MSG
KUPW:17:28:23.753: 1: ORA-31693: Table data object "MYAPP"."POPULAR_DOCUMENT" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-04091: table MYAPP.POPULAR_DOCUMENT is mutating, trigger/function may not see it
ORA-06512: at "MYAPP.TR_POPULAR_DOCUMENT", line 6
ORA-04088: error during execution of trigger 'MYAPP.TR_POPULAR_DOCUMENT'

To elaborate, with a row level (before event) trigger, a single row insert into the POPULAR_DOCUMENT table gives Oracle a consistent view of the table records within that transaction (the insert is yet to be performed), and hence no mutation error is raised. However, with a multi row insert into the POPULAR_DOCUMENT table, there would be uncommitted records (produced by the earlier rows of the same statement) within the same transaction; the trigger would then see an inconsistent view of the data, which in turn causes the table mutation error.

The following example illustrates this behaviour of a multi row insert with a row level (before event) trigger (I am using the same table POPULAR_DOCUMENT on which the trigger is defined).

---//
---// create table t as a copy of POPULAR_DOCUMENT //---
---//
SQL> create table t as select * from POPULAR_DOCUMENT;

Table created.

---//
---// truncating table to avoid primary key violation //---
---//
SQL> truncate table POPULAR_DOCUMENT;

Table truncated.

---//
---// multi row insert failed with table mutation //---
---//
SQL> insert into POPULAR_DOCUMENT select * from t;
insert into POPULAR_DOCUMENT select * from t
            *
ERROR at line 1:
ORA-04091: table MYAPP.POPULAR_DOCUMENT is mutating, trigger/function may not see it
ORA-06512: at "MYAPP.TR_POPULAR_DOCUMENT", line 6
ORA-04088: error during execution of trigger 'MYAPP.TR_POPULAR_DOCUMENT'

As we can see, a multi row insert can cause table mutation due to the existence of a row level (before event) trigger referring to the same table. To work around this, we can either disable the row level trigger for the duration of the data load (a sketch of this option follows), or opt for the CONVENTIONAL access method while loading the data using Datapump, as demonstrated afterwards.
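
A minimal sketch of the first option (the trigger must be re-enabled once the load completes):

---//
---// option 1 (sketch): disable the row level trigger around the load //---
---//
SQL> alter trigger MYAPP.TR_POPULAR_DOCUMENT disable;

Trigger altered.

-- run the same impdp command as before, then re-enable the trigger:

SQL> alter trigger MYAPP.TR_POPULAR_DOCUMENT enable;

Trigger altered.

The second option keeps the trigger enabled and lets Datapump load the rows one at a time: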

##---
##--- import succeeds with CONVENTIONAL access method ---##
##---
Import: Release 12.1.0.2.0 - Production on Tue May 17 17:54:19 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
Master table "MYAPP"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MYAPP"."SYS_IMPORT_TABLE_01":  myapp/********@mypdb1 directory=EXP dumpfile=myapp_exp.dmp logfile=pop_doc.log tables=MYAPP.POPULAR_DOCUMENT table_exists_action=TRUNCATE content=DATA_ONLY ACCESS_METHOD=CONVENTIONAL 
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "MYAPP"."POPULAR_DOCUMENT"                  28.22 KB     228 rows
Job "MYAPP"."SYS_IMPORT_TABLE_01" successfully completed at Tue May 17 17:54:23 2016 elapsed 0 00:00:02

We are now able to load the data using Datapump import (with the conventional access method). If we trace this Datapump import job, we can see that with the conventional access method, Oracle performs single row inserts (INSERT INTO ... VALUES) rather than a multi row insert using INSERT AS SELECT (IAS), as shown below.

##---
##--- Datapump used single row insert with conventional load ---##
##---
KUPD:17:54:23.330: In routine kupd$data.conventional_load
KUPD:17:54:23.330: Verb item: DROP
KUPD:17:54:23.334: Verb item: CREATE
KUPD:17:54:23.334: CREATE TABLE "MYAPP"."ET$00875BDA0001"
   (    "POPULAR_ID" NUMBER(10,0),
        "USER_ID" NUMBER(10,0),
        "DOCUMENT_ID" VARCHAR2(10 CHAR),
        "TITLE" VARCHAR2(4000 CHAR),
        "MODULE_TYPE" NUMBER(5,0),
        "CREATED_DATE" DATE,
        "ACCESSED_DATE" DATE,
        "CONTENTDATABASEID" NUMBER(5,0)
   ) ORGANIZATION EXTERNAL
    ( TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY "EXP" ACCESS PARAMETERS ( DEBUG = (3 , 33489664) DATAPUMP INTERNAL TABLE "MYAPP"."POPULAR_DOCUMENT"  JOB ( "MYAPP","SYS_IMPORT_TABLE_01",5) WORKERID 1 PARALLEL 1 VERSION '12.1.0.2.0' ENCRYPTPASSWORDISNULL  COMPRESSION DISABLED  ENCRYPTION DISABLED TABLEEXISTS) LOCATION ('bogus.dat') )  PARALLEL 1 REJECT LIMIT UNLIMITED

KUPD:17:54:23.334: Start to execute create table stmt
KUPD:17:54:23.340: Just executed dbms_sql.parse
KUPD:17:54:23.340: Verb item: SELECT
KUPD:17:54:23.341: Long position is 0
KUPD:17:54:23.341: Verb item: INSERT
KUPD:17:54:23.341: Calling kupcls, conventional path load
KUPCL:17:54:23.341: Data Pump Conventional Path Import
KUPCL:17:54:23.341:   Schema: MYAPP Table: POPULAR_DOCUMENT
KUPCL:17:54:23.341:   Long Position: 0
KUPCL:17:54:23.343:   Select Statement: SELECT "POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABASEID"
    FROM "MYAPP"."ET$00875BDA0001" KU$

KUPCL:17:54:23.343:   Insert Statement: INSERT INTO RELATIONAL("MYAPP"."POPULAR_DOCUMENT" NOT XMLTYPE)
   ("POPULAR_ID", "USER_ID", "DOCUMENT_ID", "TITLE", "MODULE_TYPE", "CREATED_DATE", "ACCESSED_DATE", "CONTENTDATABASEID")
   VALUES (:1, :2, :3, :4, :5, :6, :7, :8)

Footnote:

When loading data using tools like Datapump or SQL*Loader (SQLLDR), it is good practice to check for any row level (before event) trigger defined on the target table that queries or modifies the same table within its body, and to opt for a conventional load if required. If a multi row insert (IAS) must be performed on a table whose row level (before event) trigger accesses the same table, the only option is to disable that trigger prior to executing the IAS statement.
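
A quick check along these lines could query DBA_TRIGGERS for enabled row level (before event) triggers on the target table; a sketch (substitute your own table name, and review the trigger source to confirm it actually references the same table):

---//
---// sketch: list enabled BEFORE EACH ROW triggers on the target table //---
---//
SQL> select owner,trigger_name,triggering_event,status
  2  from dba_triggers
  3  where table_name='POPULAR_DOCUMENT'
  4  and trigger_type='BEFORE EACH ROW'
  5  and status='ENABLED';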

Throughout this article, I have stressed the phrase “before event” whenever mentioning the row level trigger. This is because, if we have an AFTER event row level trigger on a table that refers to (queries or modifies) the same table, it will always cause a table mutation error (due to the inconsistent state of the data), irrespective of whether a single row or a multi row insert is performed on the table.
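
To illustrate the point, here is a sketch (not taken from the original session; the trigger name is hypothetical) of an AFTER variant of the same check; per the behaviour described above, with this trigger in place even a plain single row INSERT ... VALUES into POPULAR_DOCUMENT is expected to fail with ORA-04091:

---//
---// sketch: an AFTER (each row) trigger mutates even on single row inserts //---
---//
SQL> create or replace trigger TR_POP_DOC_AFTER
  2  after insert on POPULAR_DOCUMENT
  3  for each row
  4  declare
  5    lng_count number;
  6  begin
  7    -- reading the table that is currently being modified
  8    select count(*) into lng_count
  9    from POPULAR_DOCUMENT
 10    where user_id = :NEW.user_id and document_id = :NEW.document_id;
 11  end;
 12  /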
