
Thursday, April 03, 2014

On HTTP 401 Unauthorized (with Oracle EPG)

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

It could happen that suddenly your Apex application that has been working for years starts asking for a username and password in order to access the XDB repository.
You hit the Cancel button and all you get is:
 
401 Unauthorized

No HTML, no images, nothing is returned; your app is blank.
What the hell happened to it?

How could this happen if I didn't change anything in a long time?
Where do I start looking first?

First of all, we need to assess what's gone awry.
If you are a Firefox user, Firebug comes in handy here: check out its "Net" panel and see which component is giving trouble.

In a previous posting, I had problems with static files and images in the repository, but not with the main Apex component, the f function that is processing and returning the main HTML page.

If the page is just "garbled" and some components are missing, the problems are likely caused by permissions set at the XDB resource level (see the previous posting about this).

If HTTP 401 is returned by the main page itself instead, either there is a problem with the XDB configuration file or something else is going on.

In the former case, you should compare the current xdbconfig.xml with a working backup copy (some time ago I also wrote a posting about recovering a corrupted xdbconfig.xml, check it out).
It's absolutely necessary to keep backup copies of working configurations: they will save you time in troubles like these, allowing you to compare files or quickly restore a known-good version, which is much better than guessing whether a certain option was set or unset when the system was running just fine.

Now, supposing the xdbconfig.xml is OK, still the same as before, where else should I turn my attention?
Luckily enough, my sixth sense told me to look next at the database user status.
OK, great, but which one?

Depending on your configuration you may have a couple of Apex database users that have been created over time:
ANONYMOUS and APEX_PUBLIC_USER.
The good one is returned by the following query if the database is XE with EPG (Embedded PL/SQL Gateway) and the DAD is named APEX:

-- DBA user required
select dbms_epg.get_dad_attribute('APEX','database-username') d from dual;
 
D
--------- 
ANONYMOUS


The next step is to check the status of user ANONYMOUS.

select account_status, lock_date, expiry_date
  from dba_users
where username = 'ANONYMOUS';
 

ACCOUNT_STATUS                   LOCK_DATE           EXPIRY_DATE       
-------------------------------- ------------------- -------------------
LOCKED                           01-04-2014 18:37:26                    


Here is the answer. A locked account will make your Apex site look miserable in no time.
Just unlock the account (ALTER USER ANONYMOUS ACCOUNT UNLOCK;) and everything will resume working immediately.
Thereafter you just need to find out what or who suddenly locked that account, but that's another story.
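
If you want to chase that story, a hedged first check is the profile limit that most commonly triggers automatic locking, FAILED_LOGIN_ATTEMPTS (a sketch, assuming DBA access):

-- which profile governs ANONYMOUS and how many failed logins it tolerates
select u.username, u.profile, p.limit as failed_login_attempts
  from dba_users u, dba_profiles p
 where u.profile = p.profile
   and p.resource_name = 'FAILED_LOGIN_ATTEMPTS'
   and u.username = 'ANONYMOUS';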

Friday, March 15, 2013

LPX-00209: PI names starting with XML are reserved

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

I don't know why it took me longer than necessary to understand where the problem was when I hit the following error.

ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00209: PI names starting with XML are reserved

At any rate, in my case the reason was not the same as reported in other sites (like having a duplicate XML declaration).
The trouble was caused by a simple blank character located before the initial XML file declaration.
The error occurred on Oracle 10.2 (XE), while on Oracle 11.2.0.3.0 EE the leading blanks don't seem to bother the parser at all.
I don't know how the Oracle versions in between behave.

You can easily simulate the problem with the following query in SQL Developer:

select xmltype(' ' || dbms_xmlgen.getxml('select 1 from dual')) x from dual;

If you have a CLOB containing an XML document with extra blanks at the beginning and you get this error when attempting to parse it, you can easily fix the problem by TRIMming it first.
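
A minimal sketch of the fix, assuming the document sits in a PL/SQL CLOB variable (TRIM works on CLOBs in 10g and later):

declare
  -- the leading blank is deliberate: on 10.2 it raises LPX-00209
  l_doc clob := ' <?xml version="1.0"?><root/>';
  l_xml xmltype;
begin
  l_xml := xmltype(trim(l_doc)); -- strip the offending whitespace before parsing
  dbms_output.put_line('parsed OK');
end;
/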

See message translations for ORA-19202, LPX-00209 and search additional resources.

Wednesday, December 12, 2012

Find the way out of ORA-12952: The request exceeds the maximum allowed database size of 4 GB

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

The story goes that at a certain point I began seeing ORA-12952 in my logs when Oracle was trying to refresh schema statistics. Then, progressively, I started receiving this error while attempting certain operations involving lots of data, until the other day the db virtually stopped working properly, becoming increasingly sluggish. As I had only some test Apex applications running on it, that wasn't a real tragedy, but I decided it was time to address the problem once and for all.

Here is the typical error log message that you can find in an application logging table:

ORA-00604: error occurred at recursive SQL level 2
ORA-12952: The request exceeds the maximum allowed database size of 4 GB

If you inspect the bdump folder on the database server you might also find trace files like the following:

/usr/lib/oracle/xe/app/oracle/admin/XE/bdump/xe_m000_7483.trc
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORACLE_HOME = /usr/lib/oracle/xe/app/oracle/product/10.2.0/server
System name: Linux
Node name: domU-12-31-39-09-21-12
Release: 2.6.24-10-xen
Version: #1 SMP Tue Sep 8 19:06:53 UTC 2009
Machine: i686
Instance name: XE
Redo thread mounted by this instance: 1
Oracle process number: 28
Unix process pid: 7483, image: oracle@domU-12-31-39-09-21-12 (m000)

*** ACTION NAME:(Auto-Flush Slave Action) 2012-11-26 21:00:21.630
*** MODULE NAME:(MMON_SLAVE) 2012-11-26 21:00:21.630
*** SERVICE NAME:(SYS$BACKGROUND) 2012-11-26 21:00:21.630
*** SESSION ID:(67.10721) 2012-11-26 21:00:21.630
*** KEWROCISTMTEXEC - encountered error: (ORA-12952: The request exceeds the maximum allowed database size of 4 GB
)  
  *** SQLSTR: total-len=500, dump-len=240, 
      STR={insert into wrh$_parameter   (snap_id, dbid, instance_number, parameter_hash, value,    isdefault, ismodified)  select    :snap_id, :dbid, :instance_number, i.ksppihash hash, sv.ksppstvl,    sv.ksppstdf, decode(bitand(sv.ksppstvf,7), 1, 'MO}
*** KEWRAFM1: Error=13509 encountered by kewrfteh

The alert_XE.log will also contain references to the various occurrences of the error.

Now, after acknowledging the fact that we are in trouble, what can we do to remedy the situation?
First of all, we need to assess the situation in terms of *real* space used by data.
In my case, most of the space was unused and fragmented, as can be seen by running a query like the following:

select ceil(sum(bytes) / 1024 / 1024) as used_mb, 0 as free_mb
from dba_segments
union all
select 0 as u, ceil(sum(bytes) / 1024 / 1024)
from dba_free_space;

USED_MB FREE_MB
------- -------
   2585       0 
      0    3033


There is plenty of free space!
Maybe at this point you notice something weird in these figures: how come the total of used and free space is greater than the 4GB limit allegedly imposed on Oracle XE 10g?
It turns out that XE isn't really checking whether user data exceeds 4GB but, according to an OTN legend, whether the total of system data and user data exceeds 5GB (I say legend because, as far as I know, there is no official document stating how this calculation is actually done).
Or perhaps not even that, because in my case the total size of the tablespaces was well beyond that limit.
While discussing how *exactly* Oracle reckons that the db has grown too big is beyond the scope of this posting, I imagine the difference between the real figures and the expected amount lies in the correct interpretation of allocated tablespace extents in contrast with unallocated extents.

Now, consider that the result of the query above comes from a database currently showing no sign of trouble as far as ORA-12952 is concerned. In the database I am talking about, the numbers were higher in the FREE_MB column, which was also a cause of some irritation: how is it possible that Oracle is complaining about the lack of space if it reports having more than 3GB free out of a total of 5.5GB?

The answer is quite simple: the free space is probably scattered around in multiple tablespaces which ultimately means that certain operations will succeed while others will fail, depending on where Oracle is trying to allocate the new segments, that is tables, indexes, LOBs, etc.

The good news is that anyway this database is only 50% full, so there is virtually still a lot of room for new data.
If the total USED_MB were very close to 5GB the attempt to free space would become really hard because we would probably reach some stage in which we cannot move anything around easily, especially if we had some big objects whose size exceeds the available free space.

So, how do we reclaim the unused free space?
And how do we shrink existing datafiles in case we need to allocate a new tablespace?

Basically this is the plan:
  1. compact the used space by moving users' schema objects to a new tablespace
  2. coalesce the resulting free space and gradually shrink the existing datafiles, until you are eventually able to remove the empty datafiles completely or until they are back to the desired size. 
While it looks like a rather simple plan, there are some pitfalls along the way.
First of all, not all Oracle objects can be moved equally easily. Try to move a queue to a different tablespace, for instance: I gave up, as it was far easier to drop it and re-create it from scratch.
Secondly, if you want to be able to shrink the datafile in order to get back some physical storage space, you need to look first at the objects using up space towards the end of the tablespace, that is those segments allocating blocks near the high watermark; this means identifying such objects and working out the list sorted by the highest block id in descending order.
Thirdly, Oracle XE is basically a limited standard edition, not an enterprise edition, so forget about rebuilding indexes online. In many cases, most notably domain indexes, you'll need to drop and recreate them, otherwise they will remain in an unusable state, which may prevent your application from working anyway.
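
After a round of moves, a quick way to spot the indexes left behind in an unusable state (a sketch, DBA views assumed):

-- indexes that will need a REBUILD or a drop/recreate
select owner, index_name, index_type, status
  from dba_indexes
 where status = 'UNUSABLE';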

Before delving into the details, bear in mind that an alternative method to accomplish the task could be to export the database using the data pump, drop all user schemas or remove all their objects, shrink the datafiles, and finally re-import the dump file.
For some reason this time I preferred the iterative approach, but I don't rule out trying the expdp/impdp method on another occasion.

We are now ready to start our journey, so let's begin by building a list of the objects we need to relocate. In my case the vast majority of the objects were located in the USERS tablespace:

select name, bytes, blocks, block_size
from v$datafile
where file# = 4;
 
NAME                                     BYTES      BLOCKS BLOCK_SIZE
---------------------------------------- ---------- ------ ----------
/usr/lib/oracle/xe/oradata/XE/users.dbf  1184890880 144640       8192
 
 
For a slightly more detailed analysis:

select tablespace_name, sum(used_mb) as used_mb, sum(free_mb) as free_mb, sum(used_mb) + sum(free_mb) as total_mb
from (
select tablespace_name,ceil(sum(bytes)/1024/1024) as used_mb, 0 as free_mb
from dba_segments
group by tablespace_name
union all
select tablespace_name,0, ceil(sum(bytes)/1024/1024)
from dba_free_space
group by tablespace_name)
group by rollup(tablespace_name);
 
TABLESPACE_NAME USED_MB FREE_MB TOTAL_MB
--------------- ------- ------- --------
... 
USERS           546     586     1132
... 


The goal is to relocate enough objects so that we can progressively shrink the datafile containing the USERS tablespace. With the following query we find the object having the highest block_id (note that this approach works for tablespaces made up of a single datafile; if the tablespace contains more than one datafile, you have to add some code to get the information for the specific datafile, which is not covered in this example):

select owner, segment_name, segment_type, block_id, blocks 
  from dba_extents a
where block_id = (select max(b.block_id) 
                    from dba_extents b
                   where b.tablespace_name = a.tablespace_name)
  and tablespace_name = 'USERS';


OWNER    SEGMENT_NAME                  SEGMENT_TYPE BLOCK_ID BLOCKS
-------- ----------------------------- ------------ -------- ------
QUIQUICI DR$CTX_QQ_TEMP_RSS_MESSAGES$I TABLE        103433   128

In other words this result means that if our USERS tablespace is made up of 144640 blocks (see the earlier query on the datafile), the high watermark must be located 128 blocks after the block whose ID is 103433, that is 103560. If we convert this number into megabytes (multiply by 8 and divide by 1024), we get 809.0625 MB.
This number theoretically represents the minimum value that we could currently expect to successfully specify in a datafile shrinking operation, but let's stay on the safe side and round up that figure a little bit:

alter database datafile '/usr/lib/oracle/xe/oradata/XE/users.dbf' resize 810m;

If this statement succeeds, then our theory must be correct (or we are very lucky guys).
The successful completion of the command told me that I was right, but just out of curiosity, let's try with 809M.

alter database datafile '/usr/lib/oracle/xe/oradata/XE/users.dbf' resize 809m;
 
Error starting at line 1 in command:
alter database datafile '/usr/lib/oracle/xe/oradata/XE/users.dbf' resize 809m
Error report:
SQL Error: ORA-03297: file contains used data beyond requested RESIZE value

Ok, no surprise, the theory is matching the reality: with ORA-03297 Oracle is telling us that the high watermark lies beyond the first 809MB (but before 810MB as we have seen).

Let's modify the query above in order to get the minimum datafile size anytime we need it (this query needs to be run after each iteration):

select ceil(((block_id + blocks) * 
             (select block_size 
                from dba_tablespaces t 
               where t.tablespace_name = a.tablespace_name))/1024/1024) as MB
  from dba_extents a
where block_id = (select max(b.block_id) 
                    from dba_extents b
                   where b.tablespace_name = a.tablespace_name)
  and tablespace_name = 'USERS';

 
MB
--
810 
 
 
Now, since we can shrink the USERS tablespace datafile by over 300MB (1132 - 810 = 322MB), we can safely create a new tablespace of equivalent size where we can start moving the existing objects. Please note that this process may need several iterations, especially if we move large segments, meaning that every few objects we may need to shrink the USERS tablespace again in order to recover some space for the new tablespace.
It might also be the right time to ALTER the DEFAULT TABLESPACE of the owner(s) of the objects previously created in the USERS tablespace, so that any objects created without specifying a tablespace will not clutter the USERS tablespace any longer. You may also need to assign a QUOTA on the new tablespace, either limited or UNLIMITED, if the user(s) lack the UNLIMITED TABLESPACE privilege.
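
For instance, reusing the owner that shows up in the dba_extents query above (names are obviously to be adapted):

alter user quiquici default tablespace newtablespace;
alter user quiquici quota unlimited on newtablespace;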

How fast we can accomplish this relocation from one fragmented tablespace to a fresh one depends on a variety of factors, but essentially, the bigger the objects, the more difficult the process is going to be. If we had only a great deal of small tables and indexes, we could probably take a very quick and automated approach where we generate and execute the required DDLs dynamically, but as I said at the beginning of this article, you must ensure you have plenty of time and have made a backup first. Do not try this stuff if you have users connected to the database or scheduled jobs running in the background that affect the objects being moved (suspend the jobs first).
Below are some simple scripts that move objects in bulk (tables, LOBs and indexes), supposing you have SYSDBA rights.

set serveroutput on
declare
  l_sql varchar2(4000);
begin
  /* move heap tables to newtablespace (CHANGE THE NAME AS REQUIRED) */ 
  /* use at your own risk! */ 
  for each_tab in (
     select owner||'.'||table_name as tab_name
       from dba_tables 
      where tablespace_name = 'USERS'
        and iot_name is null)
  loop
    l_sql := 'ALTER TABLE ' || each_tab.tab_name || ' MOVE TABLESPACE newtablespace';
    dbms_output.put_line(l_sql);
    execute immediate l_sql;
  end loop;
end;
/
 
declare
  l_sql varchar2(4000);
begin
  /* move LOB columns to newtablespace (CHANGE THE NAME AS REQUIRED) */ 
  /* use at your own risk! */ 
  for each_lob in (
     select l.owner || '.' || l.table_name as tab_name, l.column_name, c.data_type
       from dba_lobs l, dba_tab_cols c
      where l.owner = c.owner /* join on owner too, or same-named tables in other schemas would match */
        and l.table_name = c.table_name
        and l.column_name = c.column_name
        and c.data_type in ('CLOB','BLOB')
        and l.tablespace_name = 'USERS')
  loop
    l_sql := 'ALTER TABLE ' || each_lob.tab_name || ' MOVE LOB(' || each_lob.column_name || ') STORE AS (TABLESPACE newtablespace)';
    dbms_output.put_line(l_sql);
    execute immediate l_sql;
  end loop;
end;
/ 
 
declare
  l_sql varchar2(4000);
begin
  /* move b-tree INDEXES to newtablespace (CHANGE THE NAME AS REQUIRED) */ 
  /* use at your own risk! */ 
  for each_idx in (
     select owner || '.' || index_name as idx_name
       from dba_indexes 
      where tablespace_name = 'USERS'
        and index_type in ('NORMAL','FUNCTION-BASED NORMAL'))
  loop
    l_sql := 'ALTER INDEX ' || each_idx.idx_name || ' REBUILD TABLESPACE newtablespace';
    dbms_output.put_line(l_sql);
    execute immediate l_sql;
  end loop;
end;
/
 
Objects that do not fall into the above sets will need separate handling: if you look at the object in the topmost extent of the tablespace (that DR$CTX_QQ_TEMP_RSS_MESSAGES$I), it's a table that belongs to an Oracle Text context index. After some attempts at relocating these components individually, I came to the conclusion that it was far easier to drop them and create new ones in the new tablespace, because in the end the data they contain is reconstructed automatically. If that is not an option for you, then you'll have to figure out how to accomplish this task.

If you drop certain objects, don't forget to purge them afterwards, otherwise the space will remain allocated. There are several ways of doing it, so check out what is most convenient for you (see the PURGE command in the SQL Reference).
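
For instance:

-- reclaim the space of everything you dropped in your own schema
purge recyclebin;
-- or, as a DBA, for every user:
-- purge dba_recyclebin;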

At the end of each iteration the datafile can be shrunk according to the position of the high watermark, which gradually moves back as we recover space by relocating each segment to its new location. The process ends when we are either happy with the size of the USERS tablespace or it's completely empty (as happened in my case).

Thursday, November 24, 2011

ORA-31038: Invalid enumeration value: "StatelessWithFastRestPackageState"

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

This error is caused by a nice Oracle documentation typo: the enumerated constant StatelessWithFastRestPackageState is misspelled in table 37-19 (in both 10.2 and 11.1 DBMS_EPG chapters of PL/SQL Packages and Types Reference), but is correctly reported elsewhere (too bad I read the wrong one first...).

begin
  dbms_epg.set_dad_attribute('APEX','session-state-management','StatelessWithFastRestPackageState');
  commit;
end;
/
 
Error starting at line 3 in command:
begin
  dbms_epg.set_dad_attribute('APEX','session-state-management','StatelessWithFastRestPackageState');
  commit;
end;
Error report:
ORA-31038: Invalid enumeration value: "StatelessWithFastRestPackageState"
ORA-06512: at "SYS.DBMS_EPG", line 89
ORA-06512: at "SYS.DBMS_EPG", line 222
ORA-06512: at line 2

As you can easily guess, an "e" must be inserted: StatelessWithFastResetPackageState
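
With the correct spelling the call goes through:

begin
  dbms_epg.set_dad_attribute('APEX','session-state-management','StatelessWithFastResetPackageState');
  commit;
end;
/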

By the way, according to the HTTP server documentation (referring to Apache as a matter of fact) the default value for this attribute should be StatelessWithResetPackageState, however the best value for Oracle XE (as well as Oracle 9 and above) should be StatelessWithFastResetPackageState.

And now for the real question: how am I going to measure any benefit from this change?
The short answer is: I don't know, but I hope to see lower elapsed_time values in apex_workspace_activity_log.

See message translations for ORA-31038 and search additional resources.

Thursday, September 08, 2011

ORA-31603: object "XYZ" of type CONSTRAINT not found in schema ...

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

Here is a classical situation where I don't know if I am staring at a bug, at a feature or I am simply drunk.
I am executing DBMS_METADATA.GET_DDL in an Oracle XE (10g) instance in order to get the DDL of table constraints as ALTER statements, but the function throws a weird exception saying that the object is not in my schema. On the contrary, if I execute DBMS_METADATA.GET_DEPENDENT_DDL, everything works normally.

create table master_table (id number, col_1 number, col_2 number);
alter table master_table add constraint pk_master_table primary key(id);

create table detail_table (id number, master_id number, col_1 number, col_2 number);
alter table detail_table add constraint pk_detail_table primary key(id);
alter table detail_table add constraint fk_detail_table_master foreign key(master_id) references master_table(id);

SELECT DBMS_METADATA.GET_DDL('CONSTRAINT', constraint_name) DDL
FROM USER_CONSTRAINTS
WHERE table_name = 'DETAIL_TABLE';

select dbms_metadata.get_dependent_ddl( 'REF_CONSTRAINT', table_name) DDL
FROM USER_TABLES
WHERE table_name = 'DETAIL_TABLE';


table MASTER_TABLE created.
table MASTER_TABLE altered.
table DETAIL_TABLE created.
table DETAIL_TABLE altered.
table DETAIL_TABLE altered.

Error starting at line 8 in command:
SELECT DBMS_METADATA.GET_DDL('CONSTRAINT', constraint_name) DDL
FROM USER_CONSTRAINTS
WHERE table_name = 'DETAIL_TABLE'
Error report:
SQL Error: ORA-31603: object "FK_DETAIL_TABLE_MASTER" of type CONSTRAINT not found in schema "TEST"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_METADATA", line 2805
ORA-06512: at "SYS.DBMS_METADATA", line 4333
ORA-06512: at line 1
31603. 00000 -  "object \"%s\" of type %s not found in schema \"%s\""
*Cause:    The specified object was not found in the database.
*Action:   Correct the object specification and try the call again.

select dbms_metadata.get_dependent_ddl( 'REF_CONSTRAINT', table_name) DDL
FROM USER_TABLES
WHERE table_name = 'DETAIL_TABLE';

DDL
------------------------------------------------
ALTER TABLE "TEST"."DETAIL_TABLE" ADD CONSTRAINT "FK_DETAIL_TABLE_MASTER" FOREIGN KEY ("MASTER_ID")
REFERENCES "TEST"."MASTER_TABLE" ("ID") ENABLE



Interestingly enough, if I restrict the query to the primary key constraint, it works:

SELECT DBMS_METADATA.GET_DDL('CONSTRAINT', constraint_name) DDL
FROM USER_CONSTRAINTS
WHERE table_name = 'DETAIL_TABLE'
and constraint_type = 'P';

DDL
------------------------------------------------
ALTER TABLE "TEST"."DETAIL_TABLE" ADD CONSTRAINT "PK_DETAIL_TABLE" PRIMARY KEY ("ID")
...


But it fails on the foreign key:

Error starting at line 8 in command:
SELECT DBMS_METADATA.GET_DDL('CONSTRAINT', constraint_name) DDL
FROM USER_CONSTRAINTS
WHERE table_name = 'DETAIL_TABLE'
and constraint_type = 'R'
Error report:
SQL Error: ORA-31603: object "FK_DETAIL_TABLE_MASTER" of type CONSTRAINT not found in schema "TEST" 
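
In hindsight, a plausible explanation is that DBMS_METADATA files foreign keys under the separate object type REF_CONSTRAINT rather than CONSTRAINT, so a hedged workaround is to pick the type according to the constraint:

select dbms_metadata.get_ddl(
         decode(constraint_type, 'R', 'REF_CONSTRAINT', 'CONSTRAINT'),
         constraint_name) as ddl
  from user_constraints
 where table_name = 'DETAIL_TABLE';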


See message translations for ORA-31603 and search additional resources.

Thursday, September 01, 2011

ORA-29024: Certificate validation failure

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

Ever tried to run the UTL_HTTP.BEGIN_REQUEST procedure passing a URL that points to an HTTPS (SSL) server?
That's what happened to me recently, as some procedures that had been working for nearly two years without a hitch suddenly stopped working. After reviewing my app logs it turned out that around mid August Oracle changed something in the way you access the OTN forums and now the connection is encrypted. As a result, my log was full of the following error messages:

ORA-29024: Certificate validation failure

This implies that the good old UTL_HTTP.BEGIN_REQUEST doesn't work anymore unless you create a so-called Oracle wallet containing the required SSL certificates (I beg your pardon should I say something technically questionable regarding SSL, but the last thing I ever wished to become was a security protocol expert...).

How do we create this wallet? Piece of cake: just run owm (Oracle Wallet Manager) from the command line. Maybe it's the same on Windows, but I tried this only on Unix.

There is a little catch however: I am running on XE and XE doesn't come with the Oracle Wallet Manager. Oops!

Fortunately things were not as bad as they looked. I could create a wallet on a local Oracle server and move the two resulting files (ewallet.p12 and cwallet.sso) to the desired folder by means of FTP, changing the permissions so that the OS database owner (in my case the classic "oracle" user) could read the files.
Thereafter I just had to add a procedure call before invoking UTL_HTTP.BEGIN_REQUEST, as follows:

UTL_HTTP.SET_WALLET('file:/etc/oracle/owm'); -- path to the wallet in the db server
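
Put together, a minimal sketch of the whole sequence (the URL and the wallet path are illustrative):

declare
  l_req  utl_http.req;
  l_resp utl_http.resp;
begin
  utl_http.set_wallet('file:/etc/oracle/owm'); -- folder holding cwallet.sso
  l_req  := utl_http.begin_request('https://forums.oracle.com/');
  l_resp := utl_http.get_response(l_req);
  dbms_output.put_line('HTTP status: ' || l_resp.status_code);
  utl_http.end_response(l_resp);
end;
/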

Now, one may ask why this works.
Well, it looks like when you create this wallet, OWM stuffs some commonly used certificates into it for our convenience (you can see these default certificates when you are inside OWM).

Note that I didn't provide the second parameter to the procedure, that is the password; my understanding is that Oracle will then read the auto-login (read-only) version of the wallet called cwallet.sso. This gives me the advantage of not hard-coding the password in the procedure in clear text.

It is also worth noting the file: prefix. For some reason my brain decided to ignore this piece of information altogether for at least 15 minutes, with the results described in a subsequent posting. So, to make it short, be sure to specify it in the string, otherwise it won't work.

Another interesting fact is that there are no restrictions as to where to put the wallet: the path can point anywhere as long as the database owner can read the files, so the path specified above does not represent anything special.

So, after completing these steps my procedures resumed working normally.

I can imagine however that in some cases this might not be sufficient, though I don't know whether you'd get the same error or a new one. The default files created by OWM happen to contain valid certificates for the sites I am currently accessing, but perhaps not for others. If that is the case, I imagine you'll need to import the correct certificates into the wallet and then copy the files across the servers again.

And finally the acknowledgements: special thanks to Marcelle Kratochvil who posted the relevant information and the valuable comments of the readers who completed the picture and allowed me to fix this problem really quickly.
It really looked scary initially.

See message translations for ORA-29024 and search additional resources.

Monday, February 07, 2011

Oracle XE 11G anyone?

If you strongly believe, as I do, that Oracle XE (10g) was a great piece of software and it's time to release an update in line with the current database offering, maybe you should sign up for the following feature request:

Oracle XE 11G petition

Long live Oracle XE!

Thursday, January 07, 2010

Dispatcher process taking 99% of CPU on Oracle XE after an Apex request

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

If you are an enthusiastic user of Apex in combination with Oracle XE, as I am, you may have encountered the following issue: suddenly a process called dispatcher (xe_dnnn) starts consuming 99% of CPU time without any apparent reason.
Unfortunately I failed to find a way to reproduce the problem systematically and, even if I could, Oracle XE is not supported, so there is no chance of getting a patch from Oracle.

Although I said there is no apparent reason for this, I have the feeling this problem is triggered by some odd HTTP request made by browsers like Safari and Google Chrome. I am saying this because I am quite sure I was using one of those browsers when this type of event occurred, and it never occurred with Firefox.

Today I managed to find a sort of workaround that, at least, doesn't require bouncing the instance to get things back to normal.
I don't know whether this problem affects the Windows version, as XE is running on top of Ubuntu Linux in my case.

I am assuming that you have already realized that something odd is going on in the database server.
Looking at the running processes in the graphical system monitor of Ubuntu will just report some oracle process taking 99% of the CPU, but still there is no indication of which particular process is the culprit.
You must turn to the console to find out which one is actually consuming most of the CPU:

ps -eo "user,pid,ppid,pcpu,cmd" --sort pcpu | grep xe_


You can also run the following command to get an auto-refreshing snapshot of the situation:

top -u oracle

Now, assuming that the row with the highest CPU usage refers to the oracle process called xe_d000 and provided we have to live with this bug, what can we do to keep the ball rolling?

First of all, we (or the DBA) can kill the dispatcher process:
ALTER SYSTEM SHUTDOWN IMMEDIATE 'D000'; -- the number depends on the output of the ps command
Notice: if this command doesn't work, something that can happen if the process is unresponsive, on a Linux/Unix box you can kill the related OS process:
kill -9 pid
where pid is the OS process id for D000.
I fear that on Windows you cannot kill a single thread as easily.
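
A hedged way to map each dispatcher name to its OS pid from inside the database:

-- join the dispatcher to its server process to get the OS pid (SPID)
select d.name, d.status, p.spid
  from v$dispatcher d, v$process p
 where d.paddr = p.addr;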

While there are no active dispatcher processes available, all web clients will receive HTTP 404 errors when attempting to load an Apex page.
Then we can issue the command to recreate at least one dispatcher process:
ALTER SYSTEM
SET DISPATCHERS =
'(PROTOCOL=TCP)(DISPATCHERS=1)(INDEX=0)';
The default number of dispatchers on Oracle XE is 1, but it can be increased if required, in order to support a higher number of concurrent connections.
You can find some information about shared server processes and dispatchers in the Administrator's Guide for Oracle 10gR2, while waiting for Oracle to release the new express edition based on Oracle 11gR2 (hopefully soon?).

Updated March 9, 2010:
thanks to Jens who reported a missing parenthesis and provided a "roadmap" for restoring the service. See the comments section.

Thursday, August 20, 2009

ORA-31609: error loading file "kuauddef.xsl" from file system directory

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

Sometimes I wish certain things were a little bit easier.
I was attempting to import a data pump export obtained with the DBMS_METADATA "demo" export program I mentioned a while ago, but unfortunately it blows up while processing trigger information.
After investigating the problem, it seemed that, for some reason, there were missing stylesheets that needed to be loaded using a script called initmeta.sql under the $ORACLE_HOME/rdbms/admin folder, according to a user who reported that the problem was fixed after doing so; but to my surprise I could not find such a script in my Oracle XE Linux installation.
So I began another iteration, in pursuit of a solution for this new "problem", and found an OTN forum discussion where someone else could not spot the fatal script, but a DBA published the content of initmeta.sql for Oracle version 10gR1, which is a single line:
EXECUTE SYS.DBMS_METADATA_UTIL.LOAD_STYLESHEETS('/Users/oracle/.../rdbms/xml/xsl');
(whole path omitted)

Apparently I only needed to modify the file path, but first I wanted to have a look at the package specification of DBMS_METADATA_UTIL.
Interestingly enough, in version 10gR2 (XE) something must have changed, because procedure LOAD_STYLESHEETS doesn't take any parameters.
Oh good, I thought, it means they retrieve the path themselves, so I removed the parameter and ran the procedure as SYS:
EXECUTE SYS.DBMS_METADATA_UTIL.LOAD_STYLESHEETS;

ORA-31609: error loading file "kuauddef.xsl" from file system directory
"/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/rdbms/xml/xsl"

ORA-06512: at "SYS.DBMS_METADATA_UTIL", line 1793
ORA-06512: at line 1
31609. 00000 - "error loading file \"%s\" from file system directory \"%s\""
*Cause: The installation script initmeta.sql failed to load
the named file from the file system directory into the database.
*Action: Examine the directory and see if the file is present
and can be read.
And there you go: ORA-31609.
It's curious that SQL Developer explains the error message by mentioning the initmeta.sql script, which didn't run at all; apparently procedure LOAD_STYLESHEETS is expected to be run only on that occasion.

So, here is where I am stuck now: the missing $ORACLE_HOME/rdbms/xml directory.
Maybe I'll update this entry if I manage to work around this problem.

Fortunately there are still the good-ole EXP/IMP utilities to keep the ball rolling.

See message translations for ORA-31609 and search additional resources.

Monday, April 27, 2009

Heap size nnnnnK exceeds notification threshold (2048K)

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

Lately I'm working a lot with XDB.
At the same time, having some spare time to investigate a few outstanding "issues", I decided to search the web for information about alert log messages like the following:
Memory Notification: Library Cache Object loaded into SGA
Heap size 2208K exceeds notification threshold (2048K)
KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
I wasn't particularly worried as this message is marked as a notification, so I was pretty sure it was just informational stuff; however, I wanted to learn more about it and see what others say about it.
I quickly found a useful article on Frits's blog.

The suggested workaround, even though this isn't a serious issue according to the Oracle Support note, consists in increasing the threshold by executing the following statement:
alter system set "_kgl_large_heap_warning_threshold"=8388608 scope=spfile;
followed by a db restart.

The value of 8Mb has no special meaning; I guess we can increase or decrease it as necessary, because the concept of threshold depends on how much memory is allocated for the SGA: the bigger the SGA, the higher the threshold. But I don't recommend making arbitrary assumptions here; it's probably better to adjust the value one step at a time, just to reduce the frequency of the notification.

If we work a lot with large objects in memory and the alert log is cluttered by many of these notifications, but the performances of the database are still ok, we might want to increase this value to avoid logging useless entries. For example, if most of the notification messages refer to objects in the range between 2 and 4Mb, then it makes sense to set the threshold around 4Mb, so that we can still "watch" any objects above that level.
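
A sketch of that adjustment, using the same underscore parameter as above (the value is in bytes, and a restart is needed since it's set in the spfile):

alter system set "_kgl_large_heap_warning_threshold"=4194304 scope=spfile; -- 4MB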

We are interested in peaks, not in average-sized objects, but what "average size" means depends on our applications.

Tuesday, April 21, 2009

ORA-27040: file create error, unable to create file

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

It's not the first time I've written about the stack of errors returned by data pump operations, and ORA-31641 in particular; however, the error messages may vary slightly with the operating system. For instance, the following error stack is returned on XE running on top of Ubuntu, whereas the previously linked page refers to an error condition occurring on XE for Windows.
expdp user/pwd directory=EXPDIR dumpfile=dump.dp tables=emp,dept nologfile=Y

ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/home/flavio/documents/dump.dp"
ORA-27040: file create error, unable to create file
Linux Error: 13: Permission denied
The problem is caused by the missing privileges of user oracle on the directory pointed to by EXPDIR.
EXPDIR is pointing to /home/flavio/documents.
ls -ld /home/flavio/documents

drwxr-xr-x 4 flavio flavio 4096 2009-04-10 19:00 /home/flavio/documents/
As you can see, the directory belongs to user flavio, group flavio; moreover, no write privileges are granted other than to the directory owner. Now, as the Oracle processes run under user oracle, in order to be able to dump a file into this directory we have two possibilities:
  1. to change the other group permissions on the directory i.e. by executing chmod o+w documents.
  2. to add user oracle to group flavio and grant write permissions on documents to group flavio by executing sudo usermod -a -G flavio oracle; chmod g+w documents.
Note that option #1 will work instantly while option #2 requires a db restart. If you do not restart the db, you will still get the permission denied error. In either case, the db user must have READ and WRITE permissions on EXPDIR or hold CREATE ANY DIRECTORY privilege.

Which of the two options above is optimal really depends on the application security requirements.
In the end, maybe it's better to create a public directory where everybody can dump files freely, or a shared directory EXPDIR where only oracle can create files and users belonging to a datapump group can read them. This approach avoids tampering with the existing permissions on users' private folders and doesn't require a db restart.
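
A hedged sketch of that shared-directory setup (paths and grantee are placeholders):

-- as root: mkdir /u01/expdir && chown oracle:dba /u01/expdir
-- then, as a DBA:
create or replace directory expdir as '/u01/expdir';
grant read, write on directory expdir to flavio;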

Final notes: while looking for the Optimal Flexible Architecture document, I realized that dpdump is nowhere to be found in the installation documents through version 11gR1. This directory is created at installation time and it is an OFA directory, no matter what the books say or don't. Directory object DATA_PUMP_DIR, which is the default directory for data pump operations, indeed maps to dpdump.
If I remember correctly, in 10gR2 the directory object is automatically created at installation time, whilst in 10gR1 you need to create it manually.

See message translations for ORA-27040 and search additional resources.

Friday, April 10, 2009

LD_LIBRARY_PATH is not set! in alert_XE.log

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

This morning I discovered the following messages in the alert_XE.log, located in the bdump folder of a xubuntu virtual machine:

Fri Apr 10 15:01:15 2009
Starting ORACLE instance (normal)
Cannot determine all dependent dynamic libraries for /proc/self/exe
Unable to find dynamic library libocr10.so in search paths
RPATH = /ade/aime1_build2101/oracle/has/lib/:/ade/aime1_build2101/oracle/lib/:/ade/aime1_build2101/oracle/has/lib/:
LD_LIBRARY_PATH is not set!
The default library directories are /lib and /usr/lib
Unable to find dynamic library libocrb10.so in search paths
Unable to find dynamic library libocrutl10.so in search paths
Unable to find dynamic library libocrutl10.so in search paths
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =10
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.1.0.
...
After a quick search on the web, I read this posting from Mohammad, so I concluded I was not the first one to see this error...
He suggests setting the LD_LIBRARY_PATH variable in the .bash_profile or .profile of the oracle user; I preferred instead to change the main startup script located in /etc/init.d, which contains the settings for most of the Oracle variables.
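
A hedged sketch of the kind of line involved (the script name and home path are those of a default XE 10g install on Ubuntu; adjust as needed):

# in /etc/init.d/oracle-xe, alongside the other ORACLE_* settings
export LD_LIBRARY_PATH=/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/lib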
After fixing oracle-xe, the alert_XE.log file doesn't show anymore the warning messages:
Fri Apr 10 15:06:43 2009
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =10
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.1.0.
...
Someone may argue that changing the script offers no guarantee against future installations (or re-installations); however, hopefully Oracle will fix the bug in the meantime, so Oracle 11 XE will not be affected by this flaw.

In the end I'd be curious to know which functionality was actually affected by the bug, because this Oracle instance has been running for quite some time without showing any particular problems.

Wednesday, August 27, 2008

Apex upgrade and ORA-01653: unable to extend table SYS.SOURCE$ by 1024 in tablespace SYSTEM

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

If you, like me, have a long trail of Oracle Application Express upgrades over the years, at a certain point you might hit an error like the following while executing the script apexins.sql:
ORA-01653: unable to extend table SYS.SOURCE$ by 1024 in tablespace SYSTEM
This error is returned because you don't have enough room in the SYSTEM tablespace. It happens because you, like me, didn't check the free space left before launching the script, something you can easily do by querying DBA_FREE_SPACE:
select sum(bytes)/(1024*1024) as MB
from dba_free_space
where tablespace_name='SYSTEM';
The same error code may appear against other objects, depending on when the tablespace became full, but in any case a possible solution is to execute the following statement, provided you have enough free disk space:
ALTER DATABASE DATAFILE 'C:\oraclexe\oradata\XE\SYSTEM.DBF' RESIZE new_size M;
where new_size is the current size in megabytes of the datafile holding the SYSTEM tablespace plus the space required for the new apex repository, which is estimated at 85MB as per installation guide instructions. There are also specific requirements for the apex tablespace, but if you are installing it into SYSAUX, the tablespace should be set to autoextend mode, so there shouldn't be any problems unless you have no physical disk space left.
If you are installing in a different tablespace, make sure autoextend is on or verify the available space on this tablespace too.

You also need to ensure that the path C:\oraclexe\oradata\XE\SYSTEM.DBF applies to your machine, or change it accordingly.
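
To find the actual size and path of the datafile(s) behind the SYSTEM tablespace on your machine, a quick check:

select file_name, ceil(bytes/1024/1024) as mb, autoextensible
  from dba_data_files
 where tablespace_name = 'SYSTEM';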

After successfully executing the statement above, and before retrying the upgrade, you should also remove the leftovers from the previous installation attempt; you can easily do that by dropping the newly created apex owner, which in my case was FLOWS_030100:
DROP USER FLOWS_030100 CASCADE;
Be sure not to drop the current apex owner, FLOWS_030000 in my case.

You should be now ready to restart the upgrade script.
While I was at this, I removed older apex repositories like FLOWS_020100 that were just taking up space in the SYSTEM tablespace (executing DROP USER statements like the one above).
Please note that if you have a couple of old apex repositories, you might get back the space required by the new version in this fashion, in other words it could be that you don't need to resize the SYSTEM tablespace at all.

See message translations for ORA-01653 and search additional resources.

Thursday, April 10, 2008

ORA-44412: Invalid ACL: Cyclic ACL inheritance is not allowed

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

I just got this weird oracle error message upon attempting to start up my Oracle XE:
ORA-44412: Invalid ACL: Cyclic ACL inheritance is not allowed
It happened after setting SGA_TARGET to 1GB in the SPFILE, on a windows XP pc where probably the amount of virtual memory was not sufficient to support this request.
I could open the database normally by using a backup PFILE with a lower SGA_TARGET value.

See a previous topic for instructions on how to restore a valid SPFILE.

See message translations for ORA-44412 and search additional resources

Tuesday, April 01, 2008

ORA-00821: Specified value of sga_target 100M is too small, needs to be at least 116M

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

This article applies to Oracle 10g R1 and R2, including XE.
Even if it is going to be published on the 1st of April, it's not a hoax ;-)

Although I conducted my test on a Windows box, I am confident that most of it still holds true on Linux platforms as well, once you have adapted some minor details like the slashes (/) in place of the backslashes (\) and substituted the folder %ORACLE_HOME%/database with its unix counterpart $ORACLE_HOME/dbs.

This error occurs at instance startup when an incompatible combination of values for SGA_TARGET and at least one of the following four parameters has been specified in the relevant SPFILE or PFILE: DB_CACHE_SIZE, JAVA_POOL_SIZE, SHARED_POOL_SIZE and LARGE_POOL_SIZE.

On Oracle 10g for Windows, trace of this error can be found in the file oradim.log, located in %ORACLE_HOME%\database.

So, let's assume here to be working on an XE version, but I've got the same results with 10gR1 on Windows 2000.

Windows Service OracleServiceXE is listed as started; however, the instance is down and ordinary users, obviously, cannot connect to the database (they are probably trying to call you on the phone...).

In order to properly start the database instance, you need to override the default SPFILE containing the bogus values.
Supposing you didn't make a backup of the SPFILE (either in its semi-binary format or in clear text as a PFILE), you can proceed as follows:
  1. open the default SPFILE called SPFILEXE.ORA in %ORACLE_HOME%\database with a text editor;
  2. copy the rows written in clear text to the clipboard and save them in a new text file called safeXE.ora;
  3. change the line(s) containing the faulty parameters (scroll further down for guidance on the values that you should specify);
  4. put this file in the same directory %ORACLE_HOME%\database;
  5. connect to the idle oracle instance from an OS command prompt: sqlplus "sys/pwd as sysdba";
  6. startup pfile=%ORACLE_HOME%\database\safeXE.ora restrict;
Restricted mode will prevent ordinary users from connecting while you are still working on the db.

Once the db is up, save the changes back to the default SPFILE:
create spfile from pfile='%ORACLE_HOME%/database/safeXE.ora';
shutdown immediate
startup
and there you go.

And now for the explanation of the faulty values:
you have specified the parameter SGA_TARGET in the SPFILE (and also in the PFILE).
As a consequence Oracle is trying to allocate the memory for the various buffers automatically, but without exceeding the limit you imposed on the SGA (say 100M).
However, if you got this error, there are probably other parameters specified in the SPFILE like:
DB_CACHE_SIZE, JAVA_POOL_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE.
If any of these is set to a number greater than zero, the value will be considered the lowest acceptable boundary, for instance:

alter system set db_cache_size=50M scope=spfile;
alter system set shared_pool_size=50M scope=spfile;

when you try to restart the db, you'll get:
ORA-00821: Specified value of sga_target is too small, needs to be at least 116M
which means that you cannot use up the whole SGA allowance for the db cache and the shared pool, because Oracle needs another 16Mb of RAM, some of which is needed for other buffers and some for adjusting the number of units to the nearest SGA granule size (a multiple of 4Mb on my machine).
So, you have at least three choices here:

  1. to decrease by 16Mb either of the values (or the sum thereof);
  2. to leave Oracle free to allocate the required buffers by commenting out those two parameters (prepend a pound # sign to the line to transform the text into a comment);
  3. to increase, if you have free memory available, SGA_TARGET and make it at least 116Mb or larger.

Final considerations:
I could not start an instance of Oracle 10gR1 with less than 48M of SGA_TARGET (all other buffers autoallocated).
In order to run Oracle Application Express (Apex) 3.1 or earlier, there is one requirement in terms of minimum buffer size:

SHARED_POOL_SIZE >= 100M

however several articles also add the following (though it is not mentioned in the official documentation):

JAVA_POOL_SIZE >= 150M

which implies a minimum SGA_TARGET of 264Mb (but it could be slightly more or slightly less depending on the platform and the round-up to the nearest multiple of the SGA granule size).

Let's spend a few words on this round-up mechanism.
100Mb is a multiple of 4Mb (25 * 4Mb), but 150Mb isn't a multiple of 4Mb (the granule size on my machine); the nearest multiple is 152Mb (38 * 4Mb). This explains why, even if I set the value to 152Mb, it will still complain and say that the minimum SGA_TARGET is 264Mb, that is:
66 * 4Mb = (25 + 38 + 3) * 4Mb
So I found out the actual value of the overhead: in this case it's 3 granules, for an extra 12Mb.

In the original example both values were 50Mb, neither of which is a multiple of 4Mb, so Oracle rounded up both buffers to 52Mb (104Mb total), plus the overhead of 3 granules, another 12Mb, which gives the final amount of 116Mb.

You can find out more about this overhead by querying the following view:

select * from V$SGAINFO;

Clearly you don't want to run Apex with only 4Mb of buffer cache; it will be necessary to specify some meaningful value for DB_CACHE_SIZE and increase SGA_TARGET accordingly.

See message translations for ORA-00821 and search additional resources.

Monday, March 17, 2008

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

This article has been edited on May 15, 2008.

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
This error can be seen when the listener has been started but the database has not been started (yet), or automatic database service registration has not been performed (yet) or has been disabled. Additionally, the error can be returned when an incomplete or wrong service name is specified in the TNSNAMES.ora file or in the connect descriptor used by tools relying on JDBC drivers, e.g. SQL Developer.

If you are not the database administrator and you are experiencing this error on a remote client, then you should ask the DBA for help.

(see the documentation for various oracle editions about starting up a database)

Note that a bad service name or a bad SID name in the tnsnames.ora file may easily lead to ORA-12514; however, the fact you got ORA-12514 is a confirmation that at least the client could connect to the listener, because otherwise you'd get ORA-12541. The problem must then be either in the service name or in the associated database process.
Given the multiple combinations originated by the various parameters involved, the following table should help you figure out what's wrong and where. Supposing the valid name of our database is ORCL11:



[The table is an image in the original post; it maps each client-side CONNECT_DATA entry (SID or SERVICE_NAME, with or without the domain) against the valid names known to the listener, showing which connections succeed and which error is returned.]

Legend: text in red in the leftmost column represents an entry that is invalid in comparison with either of the two column headings in the upper right part of the table. Valid names are obtained by running the LSNRCTL utility and executing the SERVICES command. Depending on the configuration of the instance, you may see values with or without a domain. On the client side you can use either the SID or the SERVICE_NAME method in the CONNECT_DATA specifier; depending on the method you choose, the error will be different.
The case of ORA-12514 highlighted in yellow represents a special situation where you provided the SERVICE_NAME including a valid domain, but the instance was not configured with the necessary initialization parameters (SERVICE_NAMES and/or DB_DOMAIN), resulting in ORA-12514. Note, however, that if you try to connect using the SID, or if you specify the SERVICE_NAME without the domain portion, the connection will succeed.
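
For reference, a hedged sketch of the two client-side styles in tnsnames.ora (host and port are placeholders):

ORCL11_BY_SERVICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL11)))

ORCL11_BY_SID =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SID = ORCL11)))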

As mentioned at the beginning of the posting, there can be also other situations.
If the listener has been bounced or it has been started after the database, there could be a period of up to 60 seconds during which the database service is not known to the listener.
From the server command line prompt, open a SQL*Plus session and try forcing the automatic registration with the listener by issuing:
ALTER SYSTEM REGISTER;
Note that a listener listening on a non-standard port requires additional information (see some examples for Oracle 10g) in terms of initialization parameters in order to support automatic registration, because the Oracle installer sets up a database expecting to communicate with a local listener on port 1521 (the default port).

On Windows, Oracle services are almost always configured to start up automatically, however there is nothing wrong with changing the start-up mode to "manual", especially in test or development environments where you could have multiple oracle homes and instances.

Be aware that there is also a registry entry called ORA_SID_AUTOSTART, containing either TRUE (default) or FALSE and whose location varies across Oracle versions, that can prevent a database from starting up when the Windows service is started. If it is set to FALSE, you may still see the service status as "started", but you'll have to bring up the database manually, using the SQL*Plus STARTUP command or the database console.

You may also get ORA-12514 in case you have renamed the computer or changed the domain name or workgroup the server was belonging to without updating these references in the relevant configuration files of the Oracle server.
I had to perform this operation yesterday on a virtual machine whose name was conflicting with another node created from the same baseline.

The absolute minimum required to bring the db back up to an operational state consists in opening files listener.ora and tnsnames.ora stored in %ORACLE_HOME%/network/admin (the letter case may be different depending on the OS) and updating the server and/or domain names as needed.

Most likely you'll also have to update all client tnsnames.ora files accordingly, unless you are using a centralized naming system, like a LDAP server, otherwise the clients will get ORA-12541.

Unfortunately these are not the only steps required to restore the situation: the DB console and other services rely on configuration files, hard-coded directory names containing the former server name, and repository data held in the db itself, so putting all the pieces back together can be a rather pesky job; you may well consider reinstalling everything from scratch as a highly advisable option.

To cut it short, I strongly discourage anybody from ever changing a production server name.

See message translations for ORA-12514 and search additional resources.

Tuesday, January 22, 2008

ORA-14071: invalid option for an index used to enforce a constraint

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

This error seems to be the Oracle XE equivalent of ORA-25176, which is apparently returned on Oracle standard editions when you execute a DDL statement like:
CREATE TABLE test_table (
key_a varchar2(10),
key_b number(5,0),
key_c number(10,0),
col_d varchar2(30),
PRIMARY KEY (key_a, key_b, key_c) USING INDEX COMPRESS
) ORGANIZATION INDEX
/

ORA-14071: invalid option for an index used to enforce a constraint
As explained in a previous posting about ORA-00439, it's perfectly possible to specify the COMPRESS option for an Index Organized Table (IOT); however, you must specify this clause in the right place:
CREATE TABLE test_table (
  key_a varchar2(10),
  key_b number(5,0),
  key_c number(10,0),
  col_d varchar2(30),
  primary key(key_a, key_b, key_c)
) ORGANIZATION INDEX
COMPRESS
/
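If you want to double-check that the COMPRESS clause was picked up, you can query the data dictionary (a quick sketch of mine; the IOT's underlying index appears in USER_INDEXES):

select index_name, index_type, compression, prefix_length
  from user_indexes
 where table_name = 'TEST_TABLE';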


See message translations for ORA-14071 and search additional resources

Thursday, January 03, 2008

ORA-06531: Reference to uninitialized collection

A recent comment from a reader about an obscure PL/SQL compiler error suggested that I begin writing a few postings about errors you may come across when working with collections, so this is the first of a series whose length I don't know yet, not counting the errors already described in previous articles.

Let's have a look at one of the most common ones as it is reported by SQLDeveloper:
ORA-06531: Reference to uninitialized collection
ORA-06512: at line 10
06531. 00000 - "Reference to uninitialized collection"
*Cause: An element or member function of a nested table or varray
was referenced (where an initialized collection is needed)
without the collection having been initialized.
*Action: Initialize the collection with an appropriate constructor
or whole-object assignment.

What does this mean?
It's easy to explain; let's take the following PL/SQL block:
DECLARE
  TYPE array_type IS VARRAY(10) OF VARCHAR2(200);
  my_array array_type;
BEGIN
  dbms_output.enable;
  dbms_output.put_line(my_array.COUNT);
END;

ORA-06531: Reference to uninitialized collection
ORA-06512: at line 6
You cannot use the COUNT method without first initializing the collection my_array.
Likewise, you cannot use the LIMIT method either, even though LIMIT refers to the upper bound specified in the declaration and theoretically has little to do with the actual varray content (see the sketch below).
If you need to store the varray size in a variable for easier referencing, or if you don't want to clutter the source with such literal values, you'll need to initialize the array first, as explained later on.
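Here is the sketch mentioned above (an illustration of mine, mirroring the earlier block): referencing LIMIT on the uninitialized varray fails with the very same error:

DECLARE
  TYPE array_type IS VARRAY(10) OF VARCHAR2(200);
  my_array array_type;
BEGIN
  -- LIMIT is known at declaration time, yet it still raises ORA-06531 here
  dbms_output.put_line(my_array.LIMIT);
END;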

my_array has been declared of type array_type, but it has not been initialized; in this situation the collection is said to be atomically null. Atomically null means that there are no items whatsoever in the collection, which is why you cannot count them.
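Note that you can still test an atomically null collection with the IS NULL comparison (EXISTS being the only collection method documented as safe on a null collection); here is a sketch of mine, not from the original posting:

DECLARE
  TYPE array_type IS VARRAY(10) OF VARCHAR2(200);
  my_array array_type;
BEGIN
  -- comparing the collection to NULL does not raise ORA-06531
  IF my_array IS NULL THEN
    dbms_output.put_line('my_array is atomically null');
  END IF;
END;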

In order to initialize a collection you must use the type constructor, that strange beast that is named after the collection type (array_type in my example):
DECLARE
  TYPE array_type IS VARRAY(10) OF VARCHAR2(200);
  my_array array_type;
BEGIN
  my_array := array_type();
  dbms_output.enable;
  dbms_output.put_line(my_array.COUNT);
END;
Alternatively, and as a better programming practice, you can initialize the collection at declaration time:
DECLARE
  TYPE array_type IS VARRAY(10) OF VARCHAR2(200);
  my_array array_type := array_type();
BEGIN
  dbms_output.enable;
  dbms_output.put_line(my_array.COUNT);
END;
Both programs will return the value 0 (zero) in the dbms_output buffer.

So now we have a VARRAY with zero elements, but we declared it to hold up to 10 items.
Let's have a look at how to initialize this collection with a given number of elements.
DECLARE
  TYPE array_type IS VARRAY(10) OF VARCHAR2(200);
  my_array array_type := array_type('a','b');
BEGIN
  dbms_output.enable;
  dbms_output.put_line(my_array.COUNT);
END;
In dbms_output you'll now find the number 2, because we initialized the varray with two elements: a and b.

Imagine however that you have a large number of elements, say 32000: clearly you cannot type all of them in the constructor, so how do you proceed?

If you are tempted to initialize just the last element of the collection, see the next posting: that is not an option.

How do you fully initialize a 32000-element varray?

Before adding anything else, let me just suggest asking yourself whether this is really necessary.
If the answer is yes, then read on; otherwise try to implement some other algorithm that doesn't consume the db's resources so savagely...
Are you absolutely sure that this tiny program will not end up being used by dozens of concurrent users?

All right, so here the EXTEND collection method comes into play, allowing us to initialize the varray with the desired number of null elements:
DECLARE
  TYPE array_type IS VARRAY(32000) OF VARCHAR2(200);
  my_array array_type := array_type();
BEGIN
  my_array.EXTEND(32000);
  dbms_output.enable;
  dbms_output.put_line(my_array.COUNT);
END;
One last note: interestingly enough, the official documentation says that this form of EXTEND cannot be used when you impose a NOT NULL constraint on the array (or table) type but, at least on Oracle XE, this is not true:
DECLARE
  TYPE array_type IS VARRAY(32000) OF VARCHAR2(200) NOT NULL;
  my_array array_type := array_type();
BEGIN
  dbms_output.enable;
  my_array.EXTEND(31000);
  dbms_output.put_line(my_array.COUNT);
  dbms_output.put_line(nvl(my_array(31000),'31000th is null!'));
END;

Indeed, if you try to initialize the array using a non-empty constructor containing nulls, Oracle will complain at parse time:
DECLARE
  TYPE array_type IS VARRAY(32000) OF VARCHAR2(200) NOT NULL;
  my_array array_type := array_type(null,null);
BEGIN
  dbms_output.enable;
  my_array.EXTEND(31000);
  dbms_output.put_line(my_array.COUNT);
  dbms_output.put_line(nvl(my_array(31000),'31000th is null'));
END;

Error report:
ORA-06550: line 3, column 42:
PLS-00567: cannot pass NULL to a NOT NULL constrained formal parameter
So, either I got it wrong or this is a bug...

Last but not least, let's peek at the most sensible and probably most useful way of initializing a collection, that is by using bulk SQL:
DECLARE
  TYPE array_type IS VARRAY(32000) OF VARCHAR2(200);
  my_array array_type := array_type();
  upper_bound pls_integer := my_array.LIMIT;
BEGIN
  dbms_output.enable;

  select 'item_' || n
    bulk collect into my_array
    from (
      select level n
        from dual
      connect by level <= upper_bound);

  dbms_output.put_line(my_array.COUNT);
  dbms_output.put_line(nvl(my_array(31000),'31000th is null'));
END;
Please note that I had to explicitly initialize the array because I used LIMIT to retrieve the array's upper bound, as I didn't want to hardcode the literal 32000 inside the query. If you don't use this kind of approach, you can omit the array initialization: it will be performed automatically when BULK COLLECT is executed.

------------------------------------------------
ORA-06531: Riferimento a collection non inizializzata
ORA-06531: Referencia a una recopilación no inicializada
ORA-06531: Referència a recollida no inicialitzada
ORA-06531: Référence à un ensemble non initialisé
ORA-06531: Nicht initialisierte Zusammenstellung referenziert
ORA-06531: Αναφορά σε μη αρχικοποιημένη συλλογή
ORA-06531: Reference til ikke-initialiseret samling
ORA-06531: Referens till ej initierad insamling
ORA-06531: Referanse til uinitialisert samling
ORA-06531: Viittaus alustamattomaan kokoelma
ORA-06531: Inicializálatlan gyűjtőre való hivatkozás
ORA-06531: Referinţă la o colecţie neiniţializată
ORA-06531: Verwijzing naar niet-geïnitialiseerde verzameling.
ORA-06531: Referência para coleta não-inicializada
ORA-06531: Referência a uma recolha não inicializada
ORA-06531: Ссылка на неинициализированный набор
ORA-06531: Odkaz na neinicializovanou skupinu
ORA-06531: Odkaz na neiniciovanú kolekciu
ORA-06531: Odwołanie do nie zainicjowanej kolekcji
ORA-06531: Başlatılmamış koleksiyona başvuru
ORA-06531: Reference to uninitialized collection

See message translations for ORA-06531 and search additional resources

Sunday, December 16, 2007

On Oracle XDB repository, FTP, WebDAV and firewall issues

After recovering the situation as described in my previous posting, I was finally ready to access my Oracle XE XDB repository via FTP. Or at least, that was what I thought...

Enabling XDB access via the FTP protocol is just a matter of setting a valid port value in the ftp-port parameter inside xdbconfig.xml (theoretically). Typically this value will not be 21, the standard FTP port, because that would conflict with the main FTP server, if any, so the value suggested in many examples is 2100. That is the value I used. If you read the Oracle documentation carefully, there is a note about setting XDB FTP on port 21.
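As an alternative to editing the XML by hand, the port can also be set from SQL*Plus through the DBMS_XDB package (a sketch, assuming port 2100 is free on your server):

-- run as SYS or SYSTEM
begin
  dbms_xdb.setftpport(2100);
end;
/
-- verify the new setting
select dbms_xdb.getftpport() from dual;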

I was ready to test my connection and I wanted to try it out with the built-in FTP client of Windows, but at first I failed to understand how to specify the port number, so, after a couple of unsuccessful attempts, I gave up on ftp.exe and went straight to Filezilla.

But even with Filezilla I soon hit a major problem, because it wasn't able to list the root folder after connecting to XDB. Here is the error message that you can receive in such a situation:
Transfer channel can't be opened.
Reason: A connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed
because connected host has failed to respond.

Then a bell rang in my head and I changed the configuration from "Default" to Active Mode.
Using FTP in active mode is mentioned in the documentation, as I found out after a while.
Whether you can get FTP to work in passive mode or not is hard to say. The documentation says that it depends on the presence of either localhost or 127.0.0.1 as HOSTNAME in listener.ora; however, I tried using passive mode on a server whose listener.ora pointed to an address other than 127.0.0.1 and it didn't work.

After setting active mode, Filezilla eventually started working; however, I find that the default 60-second session timeout is far too short, and the "keep alive" method of Filezilla was giving some trouble, so I disabled it.

Once I got Filezilla to work, I went back to ftp.exe. I just couldn't believe that I could not specify a port. Indeed there is a way, but not from the command line (as far as I know): from the command prompt execute ftp.exe, then from the ftp prompt enter:
open host port
in my case this is:
open localhost 2100
It may take a while for the connection to be established, but then it should work.

So far so good, but what about FTP access from another computer?
Well, it all depends on the firewall(s) in between and/or the firewall rules. When there are no active firewalls between client and server, you just need to repeat the setup already explained, the only difference being the XDB server IP address.

However, if there are active firewalls, it may be tricky to configure the firewall rules for the FTP protocol in active mode, because you don't know exactly on which port XDB is going to talk back to your client. The communication will occur on some port above 1023, but I am not aware of any parameter in xdbconfig.xml where one can force this random port to be chosen in a given range, thus the only option is to open up the whole range between 1024 and 65535.

I made a test on my Mac and, after disabling the firewall, I could connect to XDB without problems. Unfortunately, disabling the firewall altogether doesn't seem like the best idea these days.

Given this situation, the best option is probably to set up a tunnelled FTP connection through SSH. Maybe this will be the subject of a future posting...

At this point I can draw some conclusions:
  • the best option, provided it works flawlessly, is to set up a WebDAV connection to XDB. WebDAV worked out-of-the-box in my case (save for the problem I had when I saved the xdbconfig.xml file on my Mac...) and makes no distinction between local and remote access from a configuration point of view. It's also "firewall friendly" because it uses the same port as Oracle Apex, which means that if Apex is working from a remote machine, then WebDAV access to XDB should be working as well (see the note about HTTP(S) when working on Windows XP SP2).
  • FTP works well when client and server reside on the same machine, but it can be troublesome when the client is located on a remote machine and there are firewalls in between.

Friday, December 14, 2007

ORA-31114: XDB configuration has been deleted or is corrupted

Yesterday, for some reason, I suddenly decided that I wanted to set up an FTP connection to the XDB repository of my XE database running on a xubuntu virtual machine.

After creating the WebDAV connection on my Mac, I edited /xdbconfig.xml because I needed to set the FTP port to some value other than zero (by default, FTP protocol access is disabled on XE).

Upon saving the file I got a file save failure and, guess what? xdbconfig.xml was gone!

If you think that as an Apex developer you can easily get on without that file, you are plain wrong.
Apex was gone as well.
I tried connecting to the Apex login page, but all I got was a wonderful blank page, not even an error, just an empty page; this is easily explained by the fact that on XE all Apex ancillary files are stored in XDB.

Finally, when I tried to execute a simple query on RESOURCE_VIEW, I got the (in)famous error message:

ORA-31114: XDB configuration has been deleted or is corrupted

At this point I had to make a decision: restore the whole database just to recover this tiny bit of information, or find some other solution.

I opted for the latter because I thought I could recover the file from another XE database I had on a different machine and, luckily enough, the operation was easier than expected.

I find that with Oracle, being such a vast product, the main problem is often locating the right tool for a certain operation rather than performing the operation itself. And so it was, because it took me more time to spot the procedure in the Oracle documentation than to restore that damned file.

Before showing how I did it, let me give you a simple piece of advice.
If you are ever going to tamper with xdbconfig.xml, I suggest you make an off-line copy of the file, just in case.
So, how do you extract the current version of xdbconfig.xml without using WebDAV or FTP?

Piece of cake (when everything is working, of course...): connect as SYS or SYSTEM and execute:
select dbms_xdb.cfg_get().getClobVal()
from dual
Copy and paste the result into a text file and call it xdbconfig.backup or whatever you like.
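For instance, from SQL*Plus you might spool it straight to a file instead (a sketch of mine; the target path is just an example):

set long 1000000
set pagesize 0
spool /tmp/xdbconfig.backup
select dbms_xdb.cfg_get().getClobVal() from dual;
spool off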

Now imagine that you need to perform the reverse operation, taking the source of xdbconfig.xml from another database or from your saved copy. Again, you need to perform these operations as user SYS or SYSTEM.

First of all you need to ascertain whether the file exists or not (see the note added at the bottom on December 16):
select * from resource_view
where any_path = '/xdbconfig.xml'
If the resource exists, it means that it has been corrupted in some way, in which case you should be able to overwrite it by using the following procedure:

First of all, open the backup xdbconfig.xml file. If you don't have one, you can try copying the source contained in the Oracle documentation; you can modify some parameters later, if needed.

set define off
set serveroutput on
declare
  xmlf xmltype;
begin
  xmlf := xmltype.CreateXML('-- PASTE HERE THE XML SOURCE COPIED FROM THE FILE --');
  dbms_xdb.cfg_update(xmlf);
end;
/
If the file is missing altogether, you can re-create the resource (this is what I actually had to do...) by using the following script:
set define off
set serveroutput on
declare
  xmlf xmltype;
  res boolean;
begin
  xmlf := xmltype.CreateXML('-- PASTE HERE THE XML SOURCE COPIED FROM THE FILE --');
  res := dbms_xdb.createresource('/xdbconfig.xml', xmlf);
  if res then
    dbms_output.put_line('file created');
  else
    dbms_output.put_line('file not created');
  end if;
end;
/
Comment added on December 16: if the xdbconfig.xml file is corrupt or missing, you will not be able to execute the query on RESOURCE_VIEW. When I wrote up my posting I had already recovered it, which explains why I could perform that query. So, you just need to execute the procedure for re-creating it (the last one); if it fails, then fall back to the update procedure (the second to last one).
-------------------------------------------
ORA-31114: La configurazione XDB è stata eliminata o è danneggiata
ORA-31114: la configuración XDB se ha suprimido o está corrupta
ORA-31114: La configuració XDB s'ha suprimit o està malmesa
ORA-31114: La configuration XDB a été supprimée ou est endommagée.
ORA-31114: XDB-Konfiguration wurde gelöscht oder ist beschädigt
ORA-31114: Η διαμόρφωση XDB έχει διαγραφεί ή έχει αλλοιωθεί
ORA-31114: XDB-konfiguration er slettet eller beskadiget
ORA-31114: XDB-konfigurationen har tagits bort eller är skadad
ORA-31114: XDB-konfigurasjonen er slettet eller ødelagt
ORA-31114: XDB-kokoonpano on poistettu tai virheellinen
ORA-31114: az XDB konfiguráció törölve lett vagy sérült
ORA-31114: Configuraţia XDB a fost ştearsă sau este coruptă
ORA-31114: De XDB-configuratie is verwijderd of beschadigd.
ORA-31114: Configuração de XDB foi deletada ou está danificada
ORA-31114: A configuração de XDB foi apagada ou está corrompida
ORA-31114: Конфигурация XDB удалена или повреждена
ORA-31114: Konfigurace XDB byla odstraněna nebo je porušena
ORA-31114: Konfigurácia XDB je odstránená alebo poškodená
ORA-31114: XDB - konfiguracja została usunięta lub jest uszkodzona
ORA-31114: XDB konfigürasyonu silindi veya bozuk
ORA-31114: XDB configuration has been deleted or is corrupted

See message translations for ORA-31114 and search additional resources
