Wednesday, December 12, 2012

Find the way out of ORA-12952: The request exceeds the maximum allowed database size of 4 GB

Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.

The story goes that at a certain point I began seeing ORA-12952 in my logs whenever Oracle tried to refresh schema statistics. Then, progressively, I started receiving this error while attempting to perform certain operations involving lots of data, until the other day, when the database virtually stopped working properly, becoming increasingly sluggish. As I had only some test Apex applications running on it, that wasn't a real tragedy, but I decided it was time to address the problem once and for all.

Here is the typical error log message that you can find in an application logging table:

ORA-00604: error occurred at recursive SQL level 2
ORA-12952: The request exceeds the maximum allowed database size of 4 GB

If you inspect the bdump folder on the database server you might also find trace files like the following:

/usr/lib/oracle/xe/app/oracle/admin/XE/bdump/xe_m000_7483.trc
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORACLE_HOME = /usr/lib/oracle/xe/app/oracle/product/10.2.0/server
System name: Linux
Node name: domU-12-31-39-09-21-12
Release: 2.6.24-10-xen
Version: #1 SMP Tue Sep 8 19:06:53 UTC 2009
Machine: i686
Instance name: XE
Redo thread mounted by this instance: 1
Oracle process number: 28
Unix process pid: 7483, image: oracle@domU-12-31-39-09-21-12 (m000)

*** ACTION NAME:(Auto-Flush Slave Action) 2012-11-26 21:00:21.630
*** MODULE NAME:(MMON_SLAVE) 2012-11-26 21:00:21.630
*** SERVICE NAME:(SYS$BACKGROUND) 2012-11-26 21:00:21.630
*** SESSION ID:(67.10721) 2012-11-26 21:00:21.630
*** KEWROCISTMTEXEC - encountered error: (ORA-12952: The request exceeds the maximum allowed database size of 4 GB
)  
  *** SQLSTR: total-len=500, dump-len=240, 
      STR={insert into wrh$_parameter   (snap_id, dbid, instance_number, parameter_hash, value,    isdefault, ismodified)  select    :snap_id, :dbid, :instance_number, i.ksppihash hash, sv.ksppstvl,    sv.ksppstdf, decode(bitand(sv.ksppstvf,7), 1, 'MO}
*** KEWRAFM1: Error=13509 encountered by kewrfteh

The alert_XE.log will also contain references to the various occurrences of the error.

Now, after acknowledging the fact that we are in trouble, what can we do to remedy the situation?
First of all, we need to assess the situation in terms of *real* space used by data.
In my case, most of the space was unused and fragmented, as can be seen by running a query like the following:

select ceil(sum(bytes) / 1024 / 1024) as used_mb, 0 as free_mb
from dba_segments
union all
select 0 as u, ceil(sum(bytes) / 1024 / 1024)
from dba_free_space;

USED_MB FREE_MB
------- -------
   2585       0 
      0    3033


There is plenty of free space!
Maybe at this point you notice something weird in these figures: how come the total of used and free space is greater than 4 GB, which is allegedly the limit imposed on Oracle XE 10g?
It turns out that XE isn't really checking whether user data exceeds 4 GB but, according to an OTN legend, whether the total of system data and user data exceeds 5 GB (I say legend because, as far as I know, there is no official document stating how it actually performs this calculation).
Or perhaps not even that, because in my case the total size of the tablespaces was well beyond that limit.
While discussing how *exactly* Oracle reckons that the db has grown too big is beyond the scope of this posting, I imagine the difference between the real figures and the expected amount lies in the correct interpretation of allocated tablespace extents in contrast with unallocated extents.

Now, consider that the result of the query above comes from a database where there is currently no sign of trouble as far as ORA-12952 is concerned. In the database I am talking about, the numbers were higher in the FREE_MB column, which was also a cause of some irritation: how is it possible that Oracle complains about the lack of space while reporting more than 3 GB of free space out of a total of 5.5 GB?

The answer is quite simple: the free space is probably scattered around in multiple tablespaces which ultimately means that certain operations will succeed while others will fail, depending on where Oracle is trying to allocate the new segments, that is tables, indexes, LOBs, etc.

The good news is that this database is, in any case, only 50% full, so there is virtually still a lot of room for new data.
If the total USED_MB were very close to 5 GB, the attempt to free space would become really hard, because we would probably reach a stage where we could not move anything around easily, especially if some big objects exceeded the available free space.

So, how do we reclaim the unused free space?
And how do we shrink existing datafiles in case we need to allocate a new tablespace?

Basically this is the plan:
  1. compact the used space by moving users' schema objects to a new tablespace
  2. coalesce the resulting free space and gradually shrink the existing datafiles, until you are eventually able to remove the empty datafiles completely or until they are back to the desired size. 
While it looks like a rather simple plan, there are some pitfalls along the way.
First of all, not all Oracle objects can be moved equally easily. Try to move a queue to a different tablespace, for instance: I gave up, as it was far easier to drop it and re-create it from scratch.
Secondly, if you want to shrink the datafile in order to get back some physical storage space, you need to look first at the objects using up space towards the end of the tablespace, that is, the segments allocating blocks near the high watermark. This means you need to identify such objects and work out the list sorted by the highest block id in descending order.
Thirdly, Oracle XE is basically a limited standard edition, not an enterprise edition, so forget about rebuilding indexes online. In many cases, most notably domain indexes, you'll need to drop and recreate them, otherwise they will remain in an unusable state, which may prevent your application from working anyway.
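The sorted list mentioned in the second point can be produced with a query like the following sketch against DBA_EXTENTS (it assumes, like the rest of this article, a USERS tablespace made up of a single datafile):

```sql
-- List the segments sitting closest to the high watermark of the USERS
-- tablespace, topmost first: these are the objects to relocate first
-- if we want the subsequent datafile resize to succeed.
select owner, segment_name, segment_type,
       max(block_id + blocks - 1) as top_block
  from dba_extents
 where tablespace_name = 'USERS'
 group by owner, segment_name, segment_type
 order by top_block desc;
```

Working through this list from the top down means each relocated segment immediately lowers the high watermark, allowing another resize.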

Before delving into the details, bear in mind that an alternative method to accomplish the task could be to export the database using Data Pump, drop all user schemas or remove all their objects, shrink the datafiles, and finally re-import the dump file.
For some reason this time I preferred the iterative approach, but I don't rule out trying the expdp/impdp method on another occasion.

We are now ready to start our journey, so let's start to build a list of the objects we need to relocate. In my case the vast majority of the objects were located in the USERS tablespace:

select name, bytes, blocks, block_size
from v$datafile
where file# = 4;
 
NAME                                     BYTES      BLOCKS BLOCK_SIZE
---------------------------------------- ---------- ------ ----------
/usr/lib/oracle/xe/oradata/XE/users.dbf  1184890880 144640       8192
 
 
For a slightly more detailed analysis:

select tablespace_name, sum(used_mb) as used_mb, sum(free_mb) as free_mb, sum(used_mb) + sum(free_mb) as total_mb
from (
select tablespace_name,ceil(sum(bytes)/1024/1024) as used_mb, 0 as free_mb
from dba_segments
group by tablespace_name
union all
select tablespace_name,0, ceil(sum(bytes)/1024/1024)
from dba_free_space
group by tablespace_name)
group by rollup(tablespace_name);
 
TABLESPACE_NAME USED_MB FREE_MB TOTAL_MB
--------------- ------- ------- --------
... 
USERS           546     586     1132
... 


The goal is to relocate enough objects so that we can progressively shrink the datafile containing the USERS tablespace. With the following query we find the object having the highest block_id (note that this approach works for tablespaces made up of a single datafile; if the tablespace contains more than one datafile, you have to add some additional code, not covered in this example, to get the information for the specific datafile):

select owner, segment_name, segment_type, block_id, blocks 
  from dba_extents a
where block_id = (select max(b.block_id) 
                    from dba_extents b
                   where b.tablespace_name = a.tablespace_name)
  and tablespace_name = 'USERS';


OWNER    SEGMENT_NAME                  SEGMENT_TYPE BLOCK_ID BLOCKS
-------- ----------------------------- ------------ -------- ------
QUIQUICI DR$CTX_QQ_TEMP_RSS_MESSAGES$I TABLE        103433   128

In other words, this result means that if our USERS tablespace is made up of 144640 blocks (see the earlier query on the datafile), the high watermark must be located at the last block of that extent, that is 103433 + 128 - 1 = 103560. If we convert this number into megabytes (multiply by 8 and divide by 1024, since the block size is 8192 bytes), we get 809.0625 MB.
This number theoretically represents the minimum value that we could currently expect to successfully specify in a datafile shrinking operation, but let's stay on the safe side and round up that figure a little bit:

alter database datafile '/usr/lib/oracle/xe/oradata/XE/users.dbf' resize 810m;

If this statement succeeds, then our theory must be correct (or we are very lucky).
The successful completion of the command told me that I was right, but just out of curiosity, let's try with 809M.

alter database datafile '/usr/lib/oracle/xe/oradata/XE/users.dbf' resize 809m;
 
Error starting at line 1 in command:
alter database datafile '/usr/lib/oracle/xe/oradata/XE/users.dbf' resize 809m
Error report:
SQL Error: ORA-03297: file contains used data beyond requested RESIZE value

Ok, no surprise, the theory is matching the reality: with ORA-03297 Oracle is telling us that the high watermark lies beyond the first 809MB (but before 810MB as we have seen).

Let's modify the query above so that we can get the minimum datafile size anytime we need it (this query needs to be run after each iteration):

select ceil(((block_id + blocks) * 
             (select block_size 
                from dba_tablespaces t 
               where t.tablespace_name = a.tablespace_name))/1024/1024) as MB
  from dba_extents a
where block_id = (select max(b.block_id) 
                    from dba_extents b
                   where b.tablespace_name = a.tablespace_name)
  and tablespace_name = 'USERS';

 
MB
--
810 
 
 
Now, having shrunk the USERS tablespace datafile by over 300 MB (1132 - 810 = 322 MB), we can safely create a new tablespace of equivalent size where we can start moving the existing objects. Please note that this process may need several iterations, especially if we move large segments, meaning that every few objects we may need to shrink the USERS tablespace again in order to recover some space for the new tablespace.
It might also be the right time to ALTER the DEFAULT TABLESPACE for the owner(s) of the objects that were previously created in the USERS tablespace, so that newly created objects without an explicit tablespace will no longer clutter the USERS tablespace. You may also need to assign a QUOTA, either limited or UNLIMITED, on the new tablespace to any user lacking the UNLIMITED TABLESPACE privilege.
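Put together, these preparation steps might look like the following sketch (the datafile path, the tablespace name newtablespace and the QUIQUICI user are placeholders taken from the examples in this article; adjust them as required):

```sql
-- Create the new tablespace in the space freed by the resize
-- (320 MB here, roughly matching the 322 MB recovered above).
create tablespace newtablespace
  datafile '/usr/lib/oracle/xe/oradata/XE/newtablespace.dbf' size 320m;

-- Make it the default for the schema owner, so new objects land there...
alter user quiquici default tablespace newtablespace;

-- ...and grant a quota in case the user lacks UNLIMITED TABLESPACE.
alter user quiquici quota unlimited on newtablespace;
```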

How fast we can accomplish this relocation from a fragmented tablespace to a fresh one depends on a variety of factors, but essentially, the bigger the objects, the more difficult the process is going to be. If we had only a great deal of small tables and indexes, we could probably take a very quick and automated approach, generating and executing the required DDL dynamically, but as I said at the beginning of this article, you must ensure you have plenty of time and have made a backup first. Do not try this stuff if you have users connected to the database or scheduled jobs running in the background that affect the objects being moved (suspend the jobs first).
Below are some simple scripts that can move objects in bulk (tables, LOBs and indexes), assuming you have SYSDBA rights.

set serveroutput on
declare
  l_sql varchar2(4000);
begin
  /* move heap tables to newtablespace (CHANGE THE NAME AS REQUIRED) */ 
  /* use at your own risk! */ 
  for each_tab in (
     select owner||'.'||table_name as tab_name
       from dba_tables 
      where tablespace_name = 'USERS'
        and iot_name is null)
  loop
    l_sql := 'ALTER TABLE ' || each_tab.tab_name || ' MOVE newtablespace';
    dbms_output.put_line(l_sql);
    execute immediate l_sql;
  end loop;
end;
/
 
declare
  l_sql varchar2(4000);
begin
  /* move LOB columns to newtablespace (CHANGE THE NAME AS REQUIRED) */ 
  /* use at your own risk! */ 
  for each_lob in (
     select l.owner || '.' || l.table_name as tab_name, l.column_name, c.data_type
       from dba_lobs l, dba_tab_cols c
      where l.owner = c.owner  /* join on owner too, or same-named tables in different schemas will match */
        and l.table_name = c.table_name
        and l.column_name = c.column_name
        and c.data_type in ('CLOB','BLOB')
        and l.tablespace_name = 'USERS')
  loop
    l_sql := 'ALTER TABLE ' || each_lob.tab_name || ' MOVE LOB(' || each_lob.column_name || ') STORE AS (TABLESPACE newtablespace)';
    dbms_output.put_line(l_sql);
    execute immediate l_sql;
  end loop;
end;
/ 
 
declare
  l_sql varchar2(4000);
begin
  /* move b-tree INDEXES to newtablespace (CHANGE THE NAME AS REQUIRED) */ 
  /* use at your own risk! */ 
  for each_idx in (
     select owner || '.' || index_name as idx_name
       from dba_indexes 
      where tablespace_name = 'USERS'
        and index_type in ('NORMAL','FUNCTION-BASED NORMAL'))
  loop
    l_sql := 'ALTER INDEX ' || each_idx.idx_name || ' REBUILD TABLESPACE newtablespace';
    dbms_output.put_line(l_sql);
    execute immediate l_sql;
  end loop;
end;
/
 
Objects that do not fall in the above sets will need separate handling: if you look at the object in the topmost extent of the tablespace (that DR$CTX_QQ_TEMP_RSS_MESSAGES$I), it's a table that belongs to an Oracle Text context index. After some attempts at relocating these components on an individual basis, I came to the conclusion that it was far easier to drop them and create new ones in the new tablespace, because in the end the data they contain would be reconstructed automatically. If that is not an option for you, then you'll have to figure out how to accomplish this task.
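For the drop-and-recreate route, the internal DR$ tables of a context index can be directed to the new tablespace through an Oracle Text storage preference. The following is only a sketch under assumptions: my_storage, my_ctx_index, my_table and my_column are hypothetical names, and newtablespace is the placeholder used throughout this article.

```sql
-- Define a storage preference pointing the Oracle Text internal
-- objects at the new tablespace.
begin
  ctx_ddl.create_preference('my_storage', 'BASIC_STORAGE');
  ctx_ddl.set_attribute('my_storage', 'I_TABLE_CLAUSE', 'tablespace newtablespace');
  ctx_ddl.set_attribute('my_storage', 'K_TABLE_CLAUSE', 'tablespace newtablespace');
  ctx_ddl.set_attribute('my_storage', 'R_TABLE_CLAUSE', 'tablespace newtablespace');
  ctx_ddl.set_attribute('my_storage', 'N_TABLE_CLAUSE', 'tablespace newtablespace');
  ctx_ddl.set_attribute('my_storage', 'I_INDEX_CLAUSE', 'tablespace newtablespace');
end;
/

-- Recreate the context index using the preference; Oracle Text
-- rebuilds the DR$ tables, now in the new tablespace.
create index my_ctx_index on my_table (my_column)
  indextype is ctxsys.context
  parameters ('storage my_storage');
```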

If you drop certain objects, don't forget to purge the removed object(s), otherwise the space will remain allocated. There are several ways of doing it, so check out what is most convenient for you (see the PURGE command in the SQL Reference).
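For instance, any of the following forms will do, from the most surgical to the most sweeping (some_dropped_table is a hypothetical name):

```sql
-- Purge a specific dropped table from the recycle bin...
purge table quiquici.some_dropped_table;

-- ...or empty the current user's recycle bin entirely...
purge recyclebin;

-- ...or, connected with SYSDBA rights, the whole database recycle bin.
purge dba_recyclebin;
```

Alternatively, appending the PURGE keyword to the DROP TABLE statement itself skips the recycle bin altogether.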

At the end of each iteration the datafile can be shrunk according to the position of the high watermark, which gradually moves back as we recover space by moving each segment to its new location. The process ends either when we are happy with the size of the USERS tablespace or when it's completely empty (as happened in my case).
