Script to get the size of generated archived logfiles in a certain timeframe

I’m testing an import Data Pump load procedure and need to know the approximate size of the archived logs this load will generate, to determine the size needed for our database recovery area. …

The script below queries the v$archived_log view for this information. …

Please note that I have tested this with ONE and only ONE active/valid log_archive_dest destination. I THINK that when more than one log_archive_dest is active, those archived log entries will be written to v$archived_log as well and the query will no longer ‘work’. …

SQL> SELECT dest_id, status, dest_name, destination
  2    FROM v$archive_dest_status
  3   WHERE status <> 'INACTIVE'
  4  /
   DEST_ID STATUS    DEST_NAME          DESTINATION
---------- --------- ------------------ --------------------------------------------------
         1 DISABLED  LOG_ARCHIVE_DEST_1
         2 VALID     LOG_ARCHIVE_DEST_2 /ora/recoveryarea/overflow/ 
col archives_m for 999,999,999,999,999

select round ( sum ( blocks * block_size ) /1024 /1024 )  archives_m
  from v$archived_log
 where first_time > to_date( '2013-AUG-30-090000', 'YYYY-MON-DD-HH24MISS')
   and first_time < SYSDATE
/

SQL> select round ( sum ( blocks * block_size ) /1024 /1024 )  archives_m
  2    from v$archived_log
  3   where first_time > to_date( '2013-AUG-30-090000', 'YYYY-MON-DD-HH24MISS')
  4     and first_time < SYSDATE
  5  /
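
If more than one log_archive_dest is active and valid, v$archived_log may contain a row per destination and the total above would count every archive more than once. As an untested sketch (assuming dest_id 2 is the single valid destination, as in the listing above), you could restrict on dest_id and optionally break the size down per hour:

col archives_m for 999,999,999,999,999

select trunc( first_time, 'HH24' ) first_hour
,      round ( sum ( blocks * block_size ) /1024 /1024 )  archives_m
  from v$archived_log
 where dest_id = 2
   and first_time > to_date( '2013-AUG-30-090000', 'YYYY-MON-DD-HH24MISS')
   and first_time < SYSDATE
 group by trunc( first_time, 'HH24' )
 order by 1
/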
       
Read more →

SQL script to check available space in your recovery area (db_recovery_file_dest_size)

We’re hitting a lot of ORA-19815, ORA-19809, ORA-00257, ORA-16038 these days on some of our development databases – see here. …


Scroll down for an updated version. The space_reclaimable should be subtracted from space_used.

The script below displays the percentage used in your recovery area(s), as defined by the init parameter ‘db_recovery_file_dest_size’. …

col name for a32
col size_m for 999,999,999
col used_m for 999,999,999
col pct_used for 999

SELECT name
,	ceil( space_limit / 1024 / 1024) SIZE_M
,	ceil( space_used  / 1024 / 1024) USED_M
,	decode( nvl( space_used, 0),
	0, 0
	, ceil ( ( space_used / space_limit) * 100) ) PCT_USED
FROM v$recovery_file_dest
ORDER BY name
/

kudos: http://www.shutdownabort.com/errors/ORA-19809.php …
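
For a breakdown per file type, v$recovery_area_usage can complement the script above. A minimal sketch (assuming a database version recent enough to have this view; the column formats are only a guess):

col file_type for a20
col percent_space_used for 999.99
col percent_space_reclaimable for 999.99

select file_type
,      percent_space_used
,      percent_space_reclaimable
,      number_of_files
  from v$recovery_area_usage
 order by file_type
/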

Update – SUBTRACT reclaimable space from space_used. …

col name for a32
col size_m for 999,999,999
col reclaimable_m for 999,999,999
col used_m for 999,999,999
col pct_used for 999

SELECT name
, ceil( space_limit / 1024 / 1024) SIZE_M
, ceil( space_used  / 1024 / 1024) USED_M
, ceil( space_reclaimable  / 1024 / 1024) RECLAIMABLE_M
, decode( nvl( space_used, 0),
 0, 0
 , ceil ( ( ( space_used - space_reclaimable ) / space_limit) * 100) ) PCT_USED
 FROM v$recovery_file_dest
ORDER BY name
/

SQL> SELECT name
  2  , ceil( space_limit / 1024 / 1024) SIZE_M
  3  , ceil( space_used  / 1024 / 1024) USED_M
  4  , ceil( space_reclaimable  / 1024 / 1024) RECLAIMABLE_M
  5  , decode( nvl( space_used, 0),
  6   0, 0
  7   , ceil ( ( ( space_used - space_reclaimable ) / space_limit) * 100) ) PCT_USED
  8   FROM v$recovery_file_dest
  9  ORDER
Read more →

dbverify in nohup

Suffering from corruption in your database is nearly as challenging as it is in real life… …

The other day we suffered from a corruption in a 500+ gigabyte database. As a sequential dbverify would take about 16+ hours, I wrote the script below, which starts the dbv’s in nohup in parallel. My dbv finished within the hour; system performance will drop though ;) …

[SYSTEM@DB01.REMIDIAN.COM]
SQL> l
  1  select
  2  'nohup dbv' ||
  3   ' file=' || file_name ||
  4   ' blocksize=4096' ||
  5   ' logfile=' || file_id || '_' || substr( file_name, instr(file_name,'/',-1) +1) || '.log &'
  6  from
  7* dba_data_files
[SYSTEM@DB01.REMIDIAN.COM]
SQL> /

'NOHUPDBVFILE='||FILE_NAME||'BLOCKSIZE=4096LOGFILE='||FILE_ID||'_'||SUBSTR(FILE_NAME,INSTR(FILE_NAME,'/',-1)+1)||'.LOG&'
------------------------------------------------------------------------------------------------------------------------
nohup dbv file=C:\ORACLE\ORADATA\DB01\DB01\USERS01.DBF blocksize=4096 logfile=4_C:\ORACLE\ORADATA\DB01\DB01\USERS01.DBF.log &
nohup dbv file=C:\ORACLE\ORADATA\DB01\DB01\SYSAUX01.DBF blocksize=4096 logfile=3_C:\ORACLE\ORADATA\DB01\DB01\SYSAUX01.DBF.log &
nohup dbv file=C:\ORACLE\ORADATA\DB01\DB01\UNDOTBS01.DBF blocksize=4096 logfile=2_C:\ORACLE\ORADATA\DB01\DB01\UNDOTBS01.DBF.log &
nohup dbv file=C:\ORACLE\ORADATA\DB01\DB01\SYSTEM01.DBF blocksize=4096 logfile=1_C:\ORACLE\ORADATA\DB01\DB01\SYSTEM01.DBF.log &
nohup dbv file=C:\ORACLE\ORADATA\DB01\DB01\EXAMPLE01.DBF blocksize=4096 logfile=5_C:\ORACLE\ORADATA\DB01\DB01\EXAMPLE01.DBF.log &
nohup dbv file=C:\ORACLE\ORADATA\DB01\DB01\P1PERG001.DBF blocksize=4096 logfile=7_C:\ORACLE\ORADATA\DB01\DB01\P1PERG001.DBF.log &
nohup dbv file=C:\ORACLE\ORADATA\DB01\DB01\P1PERG002.DBF blocksize=4096 logfile=8_C:\ORACLE\ORADATA\DB01\DB01\P1PERG002.DBF.log &
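
To avoid copy-pasting the generated commands, a variant of the same idea (untested sketch; run_dbv.sh is just a name picked for this example) spools them straight into a shell script from SQL*Plus:

set heading off feedback off pages 0 lines 300 trimspool on
spool run_dbv.sh

select 'nohup dbv' ||
       ' file=' || file_name ||
       ' blocksize=4096' ||  -- should match the block size of the datafiles
       ' logfile=' || file_id || '_' || substr( file_name, instr( file_name, '/', -1) +1) || '.log &'
  from dba_data_files
/

spool off

Then run it from the shell with something like: nohup sh run_dbv.sh &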

Read more →

delete a large number of records with intermediate commits

Have to delete a large number of records and running out of undo tablespace (ORA-30036: unable to extend segment by 8 in undo tablespace ‘UNDO’)? The script below does a delete with an intermediate commit every ‘x’ records. …

declare

  -- commit every 'i_commit' rows
  i_commit   pls_integer := 1000;

  -- row counter, 'i_rowcount' rows deleted
  i_rowcount pls_integer := 0;


begin

  -- define 'infinite' loop
  loop

    -- delete 'i_commit' rows
    delete from obj
    where rownum <= i_commit;

    i_rowcount := i_rowcount + sql%rowcount;

    -- now it's time to exit the loop
    if sql%rowcount = 0 then
       commit;
       exit;
    end if;

    commit;

  end loop;


  -- feedback
  dbms_output.put_line( trim( to_char( i_rowcount, '999999999999')) ||  ' rows deleted') ;

end;
/
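
The block above deletes every row from the (example) table obj. A minimal variation, assuming you only want to purge a subset, adds a predicate next to the rownum limit; the cutoff on the created column below is purely hypothetical:

declare

  -- commit every 'i_commit' rows
  i_commit   pls_integer := 1000;

  -- row counter, 'i_rowcount' rows deleted
  i_rowcount pls_integer := 0;

begin

  loop

    -- hypothetical predicate: only delete rows older than 12 months
    delete from obj
     where created < add_months( sysdate, -12 )
       and rownum <= i_commit;

    i_rowcount := i_rowcount + sql%rowcount;

    -- stop once nothing is left to delete
    exit when sql%rowcount = 0;

    commit;

  end loop;

  commit;

  -- feedback
  dbms_output.put_line( trim( to_char( i_rowcount, '999999999999')) ||  ' rows deleted') ;

end;
/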

Example
[REMI@DB01.REMIDIAN.COM]
SQL> create table obj as select * from dba_objects;

Table created.

[REMI@DB01.REMIDIAN.COM]
SQL> -- get current commit count for this session
[REMI@DB01.REMIDIAN.COM]
SQL> select  s.*, n.name
  2  from  v$mystat s left outer join v$statname n
  3  on  s.statistic# = n.statistic#
  4  where   n.name = 'user commits'
  5  /

    SID STATISTIC#      VALUE NAME
------- ---------- ---------- ----------------------------------------------------------
    130          4     
Read more →