Friday, March 07, 2008

ORA-39070: Unable to open the log file.

Always check out the original article at http://www.oraclequirks.com for the latest comments, fixes and updates.

You may see this error when the directory objects required by the Data Pump import have not been properly initialized, when the proper parameters have not been specified on the command line, or when the OS user performing the operation does not hold the necessary privileges on the file system objects involved (the dump file or the directory containing it).
For instance, invoking IMPDP without specifying NOLOGFILE=Y may result in the error discussed in this topic if the LOGFILE parameter has not been specified or contains an invalid value.

IMPDP user/pwd schemas=YOCOYA

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39145: directory object parameter must be specified and non-null
On the other hand, setting NOLOGFILE=Y may or may not solve the problem, depending on other factors such as the privileges held by the db user (see ORA-39000 for a scenario where a user lacks some specific privileges) or the OS permissions required to access the files or directories involved in the data pump operation.

Database users like SYS automatically have access to the default directory pointed to by DATA_PUMP_DIR, which is also used as the default destination for the log file in case you don't specify a different one.
To be more precise, all users who have been granted either the IMP_FULL_DATABASE or the EXP_FULL_DATABASE role (see the dictionary view ROLE_TAB_PRIVS) have access to DATA_PUMP_DIR. Please note that, according to my understanding, this holds for Oracle version 10gR2 (including XE) and above.
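The grants mentioned above can be verified with a dictionary query along these lines (a sketch; on a 10gR2 instance you should see rows for the two roles, but the exact output depends on your release and setup):

```sql
-- Which roles hold READ/WRITE on the default Data Pump directory?
select role, privilege
  from role_tab_privs
 where table_name = 'DATA_PUMP_DIR';
```

On 10gR2 this typically returns READ and WRITE for both EXP_FULL_DATABASE and IMP_FULL_DATABASE.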

On version 10gR1, the DBA first had to create the directory DATA_PUMP_DIR; thereafter, users holding the DBA role could automatically read from and write to it. Apparently it is not necessary to explicitly grant READ and WRITE on DATA_PUMP_DIR to the DBA role: it just works once the directory object has been created.
This behavior seems to differ from 10gR2, where you can actually find READ and WRITE privileges granted to the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles, which are in turn granted to the DBA role. In other words, the situation on 10gR2 is easier to understand from a dictionary point of view.

Standard db users don't even see this directory in the view ALL_DIRECTORIES until they are granted READ and WRITE privileges on it, or the CREATE ANY DIRECTORY system privilege, by the DBA.
Therefore a normal user should always specify the DIRECTORY parameter on the command line along with the LOGFILE parameter, or specify the file as (dir:file) directly in the LOGFILE parameter:
IMPDP user/pwd DIRECTORY=logdir LOGFILE=dump.log
or
IMPDP user/pwd LOGFILE=logdir:dump.log
As already noted in a previous posting, if you hold only the READ privilege on the target directory, you may see it in the view ALL_DIRECTORIES, but when you try to perform the dump, you'll get ORA-31631.

Unix users may encounter further problems depending on the permissions associated with the underlying directories.
For instance, suppose I am logged in as user "flavio" on a Xubuntu server and I am the owner of the dump file, located in the directory impdir (pointing to "/home/flavio/dumps"). If I execute:
IMPDP yocoya/yocoya NOLOGFILE=Y SCHEMAS=yocoya DIRECTORY=impdir DUMPFILE=yocoya.dmp
I am able to import the dump file successfully (assuming there are no other blocking errors...).

However, as soon as I try to specify the same directory as the destination for the log file:
IMPDP yocoya/yocoya LOGFILE=dump.log SCHEMAS=yocoya DIRECTORY=impdir DUMPFILE=yocoya.dmp
the command may fail:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
I guess this happens because IMPDP is trying to perform write operations through the UTL_FILE package, which is probably invoking some low-level program running as the OS user "oracle" (the user under which the database server was installed), but the "oracle" user has no WRITE permission on the OS folder /home/flavio/dumps. Indeed, if I grant write permission on that folder to "others", the operation succeeds.
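The permission fix described above can be sketched with plain shell commands (the path is illustrative, and the Oracle software owner is assumed to be the OS user "oracle"):

```shell
# A minimal sketch of the OS-permission fix; adapt the path to your environment.
DUMPDIR=/tmp/flavio_dumps_demo
mkdir -p "$DUMPDIR"

# 755: the owner can write, but "others" (which is how the "oracle" user
# is classified here, since it neither owns the directory nor shares its
# group) can only read and traverse the directory.
chmod 755 "$DUMPDIR"

# Grant write permission to "others" so the server process
# can create the log file inside the directory.
chmod o+w "$DUMPDIR"

ls -ld "$DUMPDIR"
```

Granting write to "others" is the bluntest fix; a tighter alternative is to make the directory group-owned by the "oracle" user's group and use chmod g+w instead.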

So, let's summarize what to do when you get this type of error:

While logged in as the db user, check whether you have been granted READ and WRITE privileges on the directory object:
select * from all_directories where directory_name = 'DIRECTORY_NAME';

select * from user_tab_privs
where table_name = 'DIRECTORY_NAME';
Alternatively, have you been granted the CREATE ANY DIRECTORY system privilege?

select * from user_sys_privs
where privilege = 'CREATE ANY DIRECTORY';
If you hold the CREATE ANY DIRECTORY privilege but you don't see the directory object in the view ALL_DIRECTORIES, most likely the directory object has not been created (yet).

If you do not hold this system privilege and you don't see the directory in ALL_DIRECTORIES, you must ask the DBA to check for the object (assuming you are not the DBA...).
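If the directory object turns out to be missing, the DBA-side fix can be sketched as follows (a hypothetical example: the directory name, path and grantee are illustrative):

```sql
-- As a suitably privileged user (e.g. SYSTEM): create the directory
-- object and grant access on it to the importing user.
CREATE OR REPLACE DIRECTORY impdir AS '/home/flavio/dumps';
GRANT READ, WRITE ON DIRECTORY impdir TO yocoya;
```

Remember that the path is not validated at CREATE DIRECTORY time: the OS-level permissions still have to be right for the operation to succeed.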

Check the permissions on the dump file: is the OS user who installed the database allowed to read it?

Check the permissions on the OS directory containing the file: is the OS user who installed the database allowed to read from it (and, for the log file, write to it)?

As you can see, there are many variables involved, and finding where the catch is can be a trial-and-error process.
Data Pump is, in this respect, more complex to manage than the old-fashioned IMP/EXP utilities, especially because everything happens on the server side, not on the client side.

I hope I didn't miss anything important; if I did, please let me know.


14 comments:

Dandy said...

Thanks...this was useful...

Byte64 said...

excellent, i'm glad it helped.

Jake said...

thanks for this, it allowed me to eliminate ACL's as an issue.

One more note . . . if you're getting this error on RAC, check and make sure that either 1) you're only logging into one particular node or 2) the target directory exists on all nodes

Byte64 said...

Thanks for your note Jake!

Flavio

Unknown said...

Thank you guys.
This is very useful; it drove me NUTS for a while. I did search Google, but other sites don't explain this as well as this website.
I really appreciate it.
I got my problem "fixed" according to your solution/guideline.

Dominica

Anonymous said...

I had this problem on unix, and it turned out to be directory permissions on the directory above the data pump one.

The thing that confuses me is that I am able to create a file using UTL_FILE, which will be owned by the Oracle process, yet the Data Pump import fails.

cte said...

Thanks

It saved my life

Anonymous said...

Jake, you seem to be right on the money w.r.t RAC.

I gave my home directory...it bombed. I gave a directory visible to all nodes (the same dir that had the dump...bummer)

Solved...thanks for this post and thanks Jake

- jecca

oboe said...

God how I hate Oracle.

Danish said...

Hi, I have one note to add:

if you use a mapped drive as a Data Pump directory you will get the UTL_FILE error; use the UNC path (\\server\share) instead!!

This way impdp and expdp will work correctly!

Bye bye,
Daniele

Matheus Korb said...

Thanks for the helpful post, but I must warn you of a primary error that occurred to me: lack of disk space. IMPDP failed because it couldn't create the log file...
Matheus Korb, out from Porto Alegre - Brasil.

Anonymous said...

Attempting to import a dump file, I created a directory in $HOME, then chmod'd 777 both it and the dump file in it, then chown'd both the directory and the file to oracle:dba - didn't work. Then I moved it to a subdirectory under the Oracle install where the parent directories were owned by oracle:dba - this worked - what a pain!

Angel said...

Thanks Danish, your comment helped me a lot! :)

Shrikanth said...

thanks actually the permission of the directory above the dp directory had to be corrected.
i changed it to 777 and it worked. thanks all
