What if you find yourself in a situation where you have to drop one or several datafiles from a tablespace?
Oracle does not provide a simple interface for dropping datafiles the way we drop objects like tables or views, but there are some workarounds and I'll show one of them.
In 10g we have the command ALTER TABLESPACE [TBS_NAME] DROP DATAFILE '[FILE_NAME]', but it can remove only an empty datafile.
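Used on an empty datafile this command is straightforward. A minimal sketch (the tablespace name USERS2 and the file name are just illustrative placeholders):

alter tablespace users2 drop datafile 'C:\ORACLEXE\ORADATA\XE\USERS2_02.DBF';

If the file still holds extents, Oracle refuses to drop it and raises ORA-03262 (the file is non-empty), which is exactly why a workaround is needed for files that contain data.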
One workaround is to create another tablespace and move all objects from the old tablespace to the new one, then drop the old tablespace, recreate it, and move all objects back from the new tablespace to the old one.
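The move itself is done per object with ALTER TABLE ... MOVE and ALTER INDEX ... REBUILD. A hedged sketch for a single table (the tablespace name TEST_NEW and index name are hypothetical):

create tablespace test_new datafile 'C:\ORACLEXE\ORADATA\XE\TEST_NEW.DBF' SIZE 50M;

alter table big_table move tablespace test_new;

-- a table move marks its indexes UNUSABLE, so rebuild them:
alter index big_table_pk rebuild tablespace test_new;

Repeat for every segment in the tablespace, then drop and recreate the old tablespace and move everything back the same way. With many objects this gets tedious, which is why I prefer the export/import approach below.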
For me, perhaps the easiest way is:
- Export the tablespace using the exp command
- Drop the tablespace together with its datafiles
- Create a new tablespace with the needed number of datafiles
- Load the exported data into the newly created tablespace using the imp command
Just a simple test case:
create tablespace test
datafile 'C:\ORACLEXE\ORADATA\XE\TEST1.DBF' SIZE 20M AUTOEXTEND OFF,
         'C:\ORACLEXE\ORADATA\XE\TEST2.DBF' SIZE 20M AUTOEXTEND OFF,
         'C:\ORACLEXE\ORADATA\XE\TEST3.DBF' SIZE 20M AUTOEXTEND OFF,
         'C:\ORACLEXE\ORADATA\XE\TEST4.DBF' SIZE 20M AUTOEXTEND OFF;
Then add some objects to the tablespace (I'll create one big table using a script from Tom Kyte):
create table big_table tablespace "TEST"
as
select rownum id, a.*
  from all_objects a
 where 1=0;

alter table big_table nologging;

system@XE> declare
    l_cnt  number;
    l_rows number := &1;
  begin
    insert /*+ append */ into big_table
    select rownum, a.*
      from all_objects a
     where rownum <= &1;

    l_cnt := sql%rowcount;
    commit;

    while (l_cnt < l_rows)
    loop
      insert /*+ append */ into big_table
      select rownum + l_cnt,
             OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID, OBJECT_TYPE,
             CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY
        from big_table
       where rownum <= l_rows - l_cnt;
      l_cnt := l_cnt + sql%rowcount;
      commit;
    end loop;
  end;
  /
Enter value for 1: 350000
Enter value for 1: 350000

PL/SQL procedure successfully completed.

I have extents in all datafiles:
select a.segment_name, a.tablespace_name, b.file_name,
       sum(a.bytes)/1024/1024 used_mb
  from dba_extents a, dba_data_files b
 where a.segment_name = 'BIG_TABLE'
   and a.file_id = b.file_id
 group by a.segment_name, a.tablespace_name, b.file_name;

SEGMENT_NAME  TABLESPACE_NAME  FILE_NAME                          USED_MB
BIG_TABLE     TEST             C:\ORACLEXE\ORADATA\XE\TEST3.DBF        10
BIG_TABLE     TEST             C:\ORACLEXE\ORADATA\XE\TEST4.DBF        10
BIG_TABLE     TEST             C:\ORACLEXE\ORADATA\XE\TEST1.DBF         9
BIG_TABLE     TEST             C:\ORACLEXE\ORADATA\XE\TEST2.DBF         9

alter tablespace test read only;

I'll export the whole tablespace (metadata & rows):
C:\export>exp userid=system/oracle file=test_exp.dmp log=test_exp.log tablespaces=TEST

Export: Release 10.2.0.1.0 - Production on Wed Feb 11 23:01:48 2009
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Export done in EE8MSWIN1250 character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)

About to export selected tablespaces ...
For tablespace TEST ...
. exporting cluster definitions
. exporting table definitions
. . exporting table                      BIG_TABLE     350000 rows exported
. exporting referential integrity constraints
. exporting triggers
Export terminated successfully without warnings.

Drop the old tablespace, which consists of four datafiles, and create a new one with just one datafile:
system@XE> drop tablespace test including contents and datafiles;

Tablespace dropped.

system@XE> create tablespace test
           datafile 'C:\ORACLEXE\ORADATA\XE\test.dbf' SIZE 50M
           AUTOEXTEND ON NEXT 15M MAXSIZE 100M;

Tablespace created.

Load the data into the new tablespace:
C:\export>imp userid=system/oracle file=test_exp.dmp log=imp_test.log full=Y

Import: Release 10.2.0.1.0 - Production on Wed Feb 11 23:08:34 2009
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Export file created by EXPORT:V10.02.01 via conventional path
import done in EE8MSWIN1250 character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing SYSTEM's objects into SYSTEM
. importing SYSTEM's objects into SYSTEM
. . importing table                    "BIG_TABLE"     350000 rows imported
Import terminated successfully without warnings.

Simple check:
select a.segment_name, a.tablespace_name, b.file_name,
       sum(a.bytes)/1024/1024 used_mb
  from dba_extents a, dba_data_files b
 where a.segment_name = 'BIG_TABLE'
   and a.file_id = b.file_id
 group by a.segment_name, a.tablespace_name, b.file_name;

SEGMENT_NAME  TABLESPACE_NAME  FILE_NAME                         USED_MB
BIG_TABLE     TEST             C:\ORACLEXE\ORADATA\XE\TEST.DBF        38

Now all my data is situated in just one datafile. Cleanup:
system@XE> drop tablespace test including contents and datafiles;

Tablespace dropped.

So be careful! Adding a new datafile to a tablespace is a very easy task, but removing/dropping a datafile from a tablespace can be a little tricky. Plan your storage carefully and try to avoid having to drop files from a tablespace.