Wednesday 15 February 2012

Oracle DBA Interview Questions and Answers


1)    How to transfer a table from one tablespace to another tablespace in the same database?
There are several methods to do this. You can either:
1) export the table, drop the table, create the table definition in the new
   tablespace, and then import the data (imp ignore=y);
2) create a new table in the new tablespace with the "CREATE TABLE x AS
   SELECT * from y" command:

   CREATE TABLE temp_name TABLESPACE new_tablespace AS SELECT * FROM real_table;

   then drop the original table and rename the temporary table as the original:

   DROP TABLE real_table;
   RENAME temp_name TO real_table;

3) or simply move the table in place:

   ALTER TABLE TEST MOVE TABLESPACE SLMDATA;

After #1 or #2 is done, be sure to recompile any procedures that may have been invalidated by dropping the table. I prefer method #1, but #2 is easier if there are no indexes, constraints, or triggers; if there are, you must manually recreate them. Method #3 keeps constraints, grants, and triggers, but it leaves the table's indexes UNUSABLE, so they must be rebuilt afterwards (a detailed walkthrough appears later in this post).
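After an ALTER TABLE ... MOVE in particular, the existing indexes are left in UNUSABLE status and have to be rebuilt. A quick hedged check, assuming a table named TEST and an index named PK_TEST (names are illustrative):

SELECT index_name, status
FROM   user_indexes
WHERE  table_name = 'TEST';

-- rebuild anything reported as UNUSABLE, e.g.
ALTER INDEX pk_test REBUILD;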

2)   How to transfer one Tablespace from one database to another database?
You can use the Transportable Tablespaces feature to copy a set of tablespaces from one Oracle database to another.
This method for transporting tablespaces requires that you place the tablespaces to be transported in read-only mode until you complete the transporting process. If this is undesirable, you can use the Transportable Tablespaces from Backup feature, described in Oracle Database Backup and Recovery User's Guide.
The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the transported tablespaces are not required to be of the same block size as the target database standard block size.
Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use Data Pump to transfer only the metadata of the tablespace objects to the new database.
Note: Beginning with Oracle Database 11g Release 1, you must use Data Pump for transportable tablespaces. The only circumstance under which you can use the original import and export utilities, IMP and EXP, is for a backward migration of XMLType data to a database version 10g Release 2 or earlier. Refer to Oracle Database Utilities for more information on these utilities and to Oracle XML DB Developer's Guide for more information on XMLTypes.
The transportable tablespace feature is useful in a number of scenarios, including:
·         Exporting and importing partitions in data warehousing tables
·         Publishing structured data on CDs
·         Copying multiple read-only versions of a tablespace on multiple databases
·         Archiving historical data
·         Performing tablespace point-in-time-recovery (TSPITR)
These scenarios are discussed in "Using Transportable Tablespaces: Scenarios".
There are two ways to transport a tablespace:
·         Manually, following the steps described in this section. This involves issuing commands to SQL*Plus, RMAN, and Data Pump.
·         Using the Transport Tablespaces Wizard in Enterprise Manager
To run the Transport Tablespaces Wizard:
1.     Log in to Enterprise Manager with a user that has the EXP_FULL_DATABASE role.
2.     Click the Maintenance link to go to the Maintenance tab.
3.     Under the heading Move Database Files, click Transport Tablespaces.
Starting with Oracle Database 10g, you can transport tablespaces across platforms.
Limitations on Transportable Tablespace Use
Be aware of the following limitations as you plan to transport tablespaces:
  • The source and target database must use the same character set and national character set.
  • You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
  • Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.
  • Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain XMLTypes. Beginning with Oracle Database 11g Release 1, you must use only Data Pump to export and import the tablespace metadata for tablespaces that contain XMLTypes.
Additional limitations include the following:
·         Advanced Queues: Transportable tablespaces do not support 8.0-compatible advanced queues with multiple recipients.
·         SYSTEM Tablespace Objects: You cannot transport the SYSTEM tablespace or objects owned by the user SYS. Some examples of such objects are PL/SQL, Java classes, callouts, views, synonyms, users, privileges, dimensions, directories, and sequences.
·         Opaque Types: Types whose interpretation is application-specific and opaque to the database (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must address any endianness issues after these types are moved to the new platform. Types and objects that use these opaque types, either directly or indirectly, are also subject to this limitation.
·         Floating-Point Numbers: BINARY_FLOAT and BINARY_DOUBLE types are transportable using Data Pump.
Table 12-1 Minimum Compatibility Requirements

Transport Scenario                                                        Minimum Compatibility Setting
                                                                          (Source Database / Target Database)
Databases on the same platform                                            8.0 / 8.0
Tablespace with different database block size than the target database   9.0 / 9.0
Databases on different platforms                                          10.0 / 10.0

Transporting Tablespaces Between Databases: A Procedure and Example

The following steps summarize the process of transporting a tablespace. Details for each step are provided in the subsequent example.
1.     For cross-platform transport, check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view.
Ignore this step if you are transporting your tablespace set to the same platform.
2.     Pick a self-contained set of tablespaces.
3.     Generate a transportable tablespace set.
A transportable tablespace set (or transportable set) consists of datafiles for the set of tablespaces being transported and an export file containing structural information (metadata) for the set of tablespaces. You use Data Pump to perform the export.
If you are transporting the tablespace set to a platform with different endianness from the source platform, you must convert the tablespace set to the endianness of the target platform. You can perform a source-side conversion at this step in the procedure, or you can perform a target-side conversion as part of step 4.
Note:
This method of generating a transportable tablespace requires that you temporarily make the tablespace read-only. If this is undesirable, you can use the alternate method known as transportable tablespace from backup. See Oracle Database Backup and Recovery User's Guide for details.
4.     Transport the tablespace set.
Copy the datafiles and the export file to a place that is accessible to the target database.
If you have transported the tablespace set to a platform with different endianness from the source platform, and you have not performed a source-side conversion to the endianness of the target platform, you should perform a target-side conversion now.
5.     Import the tablespace set.
Invoke the Data Pump utility to import the metadata for the set of tablespaces into the target database.
Example
The steps for transporting a tablespace are illustrated more fully in the example that follows, where it is assumed the following datafiles and tablespaces exist:
Tablespace   Datafile
sales_1      /u01/oracle/oradata/salesdb/sales_101.dbf
sales_2      /u01/oracle/oradata/salesdb/sales_201.dbf

Step 1: Determine if Platforms are Supported and Determine Endianness

This step is only necessary if you are transporting the tablespace set to a platform different from the source platform.
If you are transporting the tablespace set to a platform different from the source platform, then determine if cross-platform tablespace transport is supported for both the source and target platforms, and determine the endianness of each platform. If both platforms have the same endianness, no conversion is necessary. Otherwise you must do a conversion of the tablespace set either at the source or target database.
If you are transporting sales_1 and sales_2 to a different platform, you can execute the following query on each platform. If the query returns a row, the platform supports cross-platform tablespace transport.
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
     FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
     WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
The following is the query result from the source platform:
PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Solaris[tm] OE (32-bit)   Big
The following is the result from the target platform:
PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Microsoft Windows NT      Little
You can see that the endian formats are different and thus a conversion is necessary for transporting the tablespace set.

Step 2: Pick a Self-Contained Set of Tablespaces

There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. In this context, "self-contained" means that there are no references from inside the set of tablespaces pointing outside of the tablespaces. Some examples of self-containment violations are:
·         An index inside the set of tablespaces is for a table outside of the set of tablespaces.
Note: It is not a violation if a corresponding index for a table is outside of the set of tablespaces.
·         A partitioned table is partially contained in the set of tablespaces.
The tablespace set you want to copy must contain either all partitions of a partitioned table, or none of them. If you want to transport a subset of a partitioned table, you must exchange the partitions into tables.
·         A referential integrity constraint points to a table across a set boundary.
When transporting a set of tablespaces, you can choose to include referential integrity constraints. However, doing so can affect whether or not a set of tablespaces is self-contained. If you decide not to transport constraints, then the constraints are not considered as pointers.
·         A table inside the set of tablespaces contains a LOB column that points to LOBs outside the set of tablespaces.
·         An XML DB schema (*.xsd) that was registered by user A imports a global schema that was registered by user B, and the following is true: the default tablespace for user A is tablespace A, the default tablespace for user B is tablespace B, and only tablespace A is included in the set of tablespaces.
To determine whether a set of tablespaces is self-contained, you can invoke the TRANSPORT_SET_CHECK procedure in the Oracle-supplied package DBMS_TTS. You must have been granted the EXECUTE_CATALOG_ROLE role (initially assigned to SYS) to execute this procedure.
When you invoke the DBMS_TTS package, you specify the list of tablespaces in the transportable set to be checked for self containment. You can optionally specify if constraints must be included. For strict or full containment, you must additionally set the TTS_FULL_CHECK parameter to TRUE.
The strict or full containment check is for cases that require capturing not only references going outside the transportable set, but also those coming into the set. Tablespace Point-in-Time Recovery (TSPITR) is one such case where dependent objects must be fully contained or fully outside the transportable set.
For example, it is a violation to perform TSPITR on a tablespace containing a table t but not its index i because the index and data will be inconsistent after the transport. A full containment check ensures that there are no dependencies going outside or coming into the transportable set. See the example for TSPITR in the Oracle Database Backup and Recovery User's Guide.
Note:
The default for transportable tablespaces is to check for self containment rather than full containment.
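For a strict (full) containment check, the full-check flag, which is the third argument of TRANSPORT_SET_CHECK, is also set to TRUE. A hedged example using the same tablespaces as below:

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE, TRUE);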
The following statement can be used to determine whether tablespaces sales_1 and sales_2 are self-contained, with referential integrity constraints taken into consideration (indicated by TRUE).
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE);
After invoking this PL/SQL package, you can see all violations by selecting from the TRANSPORT_SET_VIOLATIONS view. If the set of tablespaces is self-contained, this view is empty. The following example illustrates a case where there are two violations: a foreign key constraint, dept_fk, across the tablespace set boundary, and a partitioned table, jim.sales, that is partially contained in the tablespace set.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
 
VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table JIM.EMP in tablespace SALES_1 and table
JIM.DEPT in tablespace OTHER
Partitioned table JIM.SALES is partially contained in the transportable set
These violations must be resolved before sales_1 and sales_2 are transportable. As noted in the next step, one choice for bypassing the integrity constraint violation is to not export the integrity constraints.
See Also: Oracle Database PL/SQL Packages and Types Reference for more information about the DBMS_TTS package 
Oracle Database Backup and Recovery User's Guide for information specific to using the DBMS_TTS package for TSPITR

Step 3: Generate a Transportable Tablespace Set

Any privileged user can perform this step. However, you must have been assigned the EXP_FULL_DATABASE role to perform a transportable tablespace export operation.
Note: This method of generating a transportable tablespace requires that you temporarily make the tablespace read-only. If this is undesirable, you can use the alternate method known as transportable tablespace from backup. See Oracle Database Backup and Recovery User's Guide for details.
After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by performing the following actions:
1.     Make all tablespaces in the set you are copying read-only.

       SQL> ALTER TABLESPACE sales_1 READ ONLY;
       Tablespace altered.

       SQL> ALTER TABLESPACE sales_2 READ ONLY;
       Tablespace altered.

2.     Invoke the Data Pump export utility on the host system and specify which tablespaces are in the transportable set.

       SQL> HOST

       $ EXPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
               TRANSPORT_TABLESPACES = sales_1,sales_2
You must always specify TRANSPORT_TABLESPACES, which determines the mode of the export operation. In this example:
o    The DUMPFILE parameter specifies the name of the structural information export file to be created, expdat.dmp.
o    The DIRECTORY parameter specifies the default directory object that points to the operating system or Automatic Storage Management location of the dump file. You must create the DIRECTORY object before invoking Data Pump, and you must grant the READ and WRITE object privileges on the directory to PUBLIC. See Oracle Database SQL Language Reference for information on the CREATE DIRECTORY command.
o    Triggers and indexes are included in the export operation by default.
If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter, as shown in the following example:
EXPDP system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir 
      TRANSPORT_TABLESPACES=sales_1,sales_2 TRANSPORT_FULL_CHECK=Y
In this example, the Data Pump export utility verifies that there are no dependencies between the objects inside the transportable set and objects outside the transportable set. If the tablespace set being transported is not self-contained, then the export fails and indicates that the transportable set is not self-contained. You must then return to Step 1 to resolve all violations.
Note: The Data Pump utility is used to export only data dictionary structural information (metadata) for the tablespaces. No actual data is unloaded, so this operation goes relatively quickly even for large tablespace sets.
3.     When finished, exit back to SQL*Plus:

       $ EXIT
If sales_1 and sales_2 are being transported to a different platform, and the endianness of the platforms is different, and if you want to convert before transporting the tablespace set, then convert the datafiles composing the sales_1 and sales_2 tablespaces:
1.     From SQL*Plus, return to the host system:

       SQL> HOST

2.     The RMAN CONVERT command is used to do the conversion. Start RMAN and connect to the target database:

       $ RMAN TARGET /

       Recovery Manager: Release 10.1.0.0.0
       Copyright (c) 1995, 2003, Oracle Corporation.  All rights reserved.
       connected to target database: salesdb (DBID=3295731590)

3.     Convert the datafiles into a temporary location on the source platform. In this example, assume that the temporary location, directory /temp, has already been created. The converted datafiles are assigned names by the system.

       RMAN> CONVERT TABLESPACE sales_1,sales_2
       2> TO PLATFORM 'Microsoft Windows NT'
       3> FORMAT '/temp/%U';

       Starting backup at 08-APR-03
       using target database control file instead of recovery catalog
       allocated channel: ORA_DISK_1
       channel ORA_DISK_1: sid=11 devtype=DISK
       channel ORA_DISK_1: starting datafile conversion
       input datafile fno=00005 name=/u01/oracle/oradata/salesdb/sales_101.dbf
       converted datafile=/temp/data_D-10_I-3295731590_TS-ADMIN_TBS_FNO-5_05ek24v5
       channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
       channel ORA_DISK_1: starting datafile conversion
       input datafile fno=00004 name=/u01/oracle/oradata/salesdb/sales_201.dbf
       converted datafile=/temp/data_D-10_I-3295731590_TS-EXAMPLE_FNO-4_06ek24vl
       channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
       Finished backup at 08-APR-03
See Also:
Oracle Database Backup and Recovery Reference for a description of the RMAN CONVERT command
4.     Exit Recovery Manager:

       RMAN> exit
       Recovery Manager complete.

Step 4: Transport the Tablespace Set

Transport both the datafiles and the export file of the tablespaces to a place that is accessible to the target database.
If both the source and destination are file systems, you can use:
·         Any facility for copying flat files (for example, an operating system copy utility or ftp)
·         The DBMS_FILE_TRANSFER package
·         RMAN
·         Any facility for publishing on CDs
If either the source or destination is an Automatic Storage Management (ASM) disk group, you can use:
·         ftp to or from the /sys/asm virtual folder in the XML DB repository
·         The DBMS_FILE_TRANSFER package
·         RMAN
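For example, one hedged way to push a datafile to the target with the DBMS_FILE_TRANSFER package listed above (the directory objects and the database link name are assumptions, not objects defined elsewhere in this example):

BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'SOURCE_DIR',      -- assumed directory object on the source host
    source_file_name             => 'sales_101.dbf',
    destination_directory_object => 'DEST_DIR',        -- assumed directory object on the target
    destination_file_name        => 'sales_101.dbf',
    destination_database         => 'target_db_link'); -- assumed database link to the target database
END;
/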
Caution:
Exercise caution when using the UNIX dd utility to copy raw-device files between databases. The dd utility can be used to copy an entire source raw-device file, or it can be invoked with options that instruct it to copy only a specific range of blocks from the source raw-device file.
It is difficult to ascertain actual datafile size for a raw-device file because of hidden control information that is stored as part of the datafile. Thus, it is advisable when using the dd utility to specify copying the entire source raw-device file contents.
If you are transporting the tablespace set to a platform with endianness that is different from the source platform, and you have not yet converted the tablespace set, you must do so now. This example assumes that you have completed the following steps before the transport:
1.     Set the source tablespaces to be transported to be read-only.
2.     Use the export utility to create an export file (in our example, expdat.dmp).
Datafiles that are to be converted on the target platform can be moved to a temporary location on the target platform. However, all datafiles, whether already converted or not, must be moved to a designated location on the target database.
Now use RMAN to convert the necessary transported datafiles to the endian format of the destination host format and deposit the results in /orahome/dbs, as shown in this hypothetical example:
RMAN> CONVERT DATAFILE 
2> '/hq/finance/work/tru/tbs_31.f',
3> '/hq/finance/work/tru/tbs_32.f',
4> '/hq/finance/work/tru/tbs_41.f'
5> TO PLATFORM="Solaris[tm] OE (32-bit)"
6> FROM PLATFORM="HP TRu64 UNIX"
7> DB_FILE_NAME_CONVERT=
8> "/hq/finance/work/tru/", "/hq/finance/dbs/tru"
9> PARALLELISM=5;
You identify the datafiles by filename, not by tablespace name. Until the tablespace metadata is imported, the local instance has no way of knowing the desired tablespace names. The source and destination platforms are optional. RMAN determines the source platform by examining the datafile, and the target platform defaults to the platform of the host running the conversion.
See Also:
"Copying Files Using the Database Server" for information about using the DBMS_FILE_TRANSFER package to copy the files that are being transported and their metadata

Step 5: Import the Tablespace Set

Note:
If you are transporting a tablespace of a different block size than the standard block size of the database receiving the tablespace set, then you must first have a DB_nK_CACHE_SIZE initialization parameter entry in the receiving database parameter file.
For example, if you are transporting a tablespace with an 8K block size into a database with a 4K standard block size, then you must include a DB_8K_CACHE_SIZE initialization parameter entry in the parameter file. If it is not already included in the parameter file, this parameter can be set using the ALTER SYSTEM SET statement.
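For example, a hedged one-liner for the 8K case just described (the size shown is illustrative only and should be tuned for your system):

ALTER SYSTEM SET DB_8K_CACHE_SIZE = 64M;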
See Oracle Database Reference for information about specifying values for the DB_nK_CACHE_SIZE initialization parameter.
Any privileged user can perform this step. To import a tablespace set, perform the following tasks:
1.     Import the tablespace metadata using the Data Pump Import utility, impdp:

       IMPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
          TRANSPORT_DATAFILES=/salesdb/sales_101.dbf,/salesdb/sales_201.dbf
          REMAP_SCHEMA=(dcranney:smith) REMAP_SCHEMA=(jfee:williams)
In this example we specify the following:
o    The DUMPFILE parameter specifies the exported file containing the metadata for the tablespaces to be imported.
o    The DIRECTORY parameter specifies the directory object that identifies the location of the dump file.
o    The TRANSPORT_DATAFILES parameter identifies all of the datafiles containing the tablespaces to be imported.
o    The REMAP_SCHEMA parameter changes the ownership of database objects. If you do not specify REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the target database. If they do not exist, then the import utility returns an error. In this example, objects in the tablespace set owned by dcranney in the source database will be owned by smith in the target database after the tablespace set is imported. Similarly, objects owned by jfee in the source database will be owned by williams in the target database. In this case, the target database is not required to have users dcranney and jfee, but must have users smith and williams.
After this statement executes successfully, all tablespaces in the set being copied remain in read-only mode. Check the import logs to ensure that no error has occurred.
When dealing with a large number of datafiles, specifying the list of datafile names on the command line can be a laborious process, and it can even exceed the command line limit. In this situation, you can use an import parameter file. For example, you can invoke the Data Pump import utility as follows:
IMPDP system/password PARFILE='par.f'
where the parameter file, par.f contains the following:
DIRECTORY=dpump_dir
DUMPFILE=expdat.dmp
TRANSPORT_DATAFILES="'/db/sales_jan','/db/sales_feb'"
REMAP_SCHEMA=dcranney:smith
REMAP_SCHEMA=jfee:williams
See Also:
Oracle Database Utilities for information about using the import utility
2.     If required, put the tablespaces into read/write mode as follows:

       ALTER TABLESPACE sales_1 READ WRITE;
       ALTER TABLESPACE sales_2 READ WRITE;

Using Transportable Tablespaces: Scenarios

The following sections describe some uses for transportable tablespaces:

Oracle: How to move a table to another schema?

A client asked me, “How can I move a table to another schema in Oracle?” The quick answer I gave him is, “not possible”. You have to rebuild it via “create table as select”. You might ask, justifiably, why would you want to do that anyway? His problem was that the application has been split into 2 parts, and he wanted to have separate schemas for each part, to ensure that there is no cross-schema table access.
The way this should work is like this:
SQL> rename t1 to kutrovsky.t1;
 
ORA-01765: specifying table's owner name is not allowed
Oops! How could you do that without rebuilding the segments, I was wondering. And here’s what came up.
It’s not exactly rename t1 to kutrovsky.t1;, but it gets pretty close.
Let’s assume that t1 is the table we want to move. For demonstration purposes, allow me to create a simple table t1:
create table t1 ( column1 ) as select rownum from user_tables where rownum <=10;
The first step in our process is to create a range partitioned table, in the following example named t1_temp, based on the structure of our table. The name is of no importance, it is only temporary. We use any existing number, date or varchar2 column of our table for the partition key. The ranges also do not matter, since we're not validating them.
create table t1_temp partition by range (column1)
(partition dummy values less than (-1),partition t1 values less than (MAXVALUE))
as select * from t1 where rownum <=0;
As the second step, we create the new table, which will hold the data. Note that we're only creating the layout, no data.
create table kutrovsky.t1 as select * from t1 where rownum <=0;
And now here comes the magical third step:
alter table t1_temp exchange partition dummy with table t1 including indexes without validation;
alter table t1_temp exchange partition dummy with table kutrovsky.t1 including indexes without validation;
The first command "assigns" the data segment to the t1_temp table. The second command "assigns" the data segment to the t1 table in the new owner.
Magic? I don't think so. Here's how and why it works.
When you create a normal table, Oracle creates two things: the logical object (object_id) and the data segment (data_object_id). When you create a partitioned table, Oracle creates one logical object (the table) and multiple data segments (one per partition). In a partitioned table, the logical object (the table) has no physical segment; only its partitions have them.
Oracle has a command to re-assign physical objects (data segments) to compatible logical objects, but it only works between a partition of a table and a non-partitioned table. When you use that command in the two step process as shown above, you essentially re-assign between two tables, by using a partitioned table to work around the limitation of the exchange command.
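The reassignment can be watched from the data dictionary. A hedged query, assuming the owners used in the snapshots below:

SELECT owner, object_name, subobject_name, object_id, data_object_id
FROM   dba_objects
WHERE  object_name IN ('T1', 'T1_TEMP')
  AND  owner IN ('BIG_SCHEMA', 'KUTROVSKY')
ORDER  BY owner, object_name, subobject_name;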
I took three snapshots of our objects of interest.
  1. Before executing any exchanges
  2. After 1st exchange
  3. After 2nd exchange
Notice how the data_object_id travels "down" and gets re-assigned.
Before any exchange operations:

OWNER        OBJECT_NAME   SUBOBJ   OBJECT_ID   DATA_OBJECT_ID
BIG_SCHEMA   T1                     673309      673309  <--
BIG_SCHEMA   T1_TEMP       T1       673312      673312
KUTROVSKY    T1                     673313      673313

After: alter table t1_temp exchange partition dummy with table t1 including indexes without validation;

OWNER        OBJECT_NAME   SUBOBJ   OBJECT_ID   DATA_OBJECT_ID
BIG_SCHEMA   T1                     673309      673312
BIG_SCHEMA   T1_TEMP       T1       673312      673309  <--
KUTROVSKY    T1                     673313      673313

After: alter table t1_temp exchange partition dummy with table kutrovsky.t1 including indexes without validation;

OWNER        OBJECT_NAME   SUBOBJ   OBJECT_ID   DATA_OBJECT_ID
BIG_SCHEMA   T1                     673309      673312
BIG_SCHEMA   T1_TEMP       T1       673312      673313
KUTROVSKY    T1                     673313      673309  <--
This approach also works when the table has indexes. The little detail is that you have to create the same indexes on both the temp partitioned table (as local indexes!) and the final destination table.
Those of you who have been following along attentively will know that you can now drop the t1_temp table.
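For completeness, a hedged sketch of the index handling mentioned above (the index name and column are illustrative, and t1 is assumed to already have a matching index). The two CREATE INDEX statements would be run before the exchange commands, and the staging table is dropped at the end:

CREATE INDEX t1_temp_i1 ON t1_temp (column1) LOCAL;      -- local index on the staging table
CREATE INDEX kutrovsky.t1_i1 ON kutrovsky.t1 (column1);  -- matching index on the destination table
-- after both exchanges have been run:
DROP TABLE t1_temp;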
3)     How to transfer a schema and its objects from one database to another database?
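One hedged way to do this is Data Pump in schema mode (the schema name, directory object, and dump file name are illustrative):

expdp system/password SCHEMAS=scott DIRECTORY=dpump_dir DUMPFILE=scott.dmp
impdp system/password SCHEMAS=scott DIRECTORY=dpump_dir DUMPFILE=scott.dmp

REMAP_SCHEMA and REMAP_TABLESPACE can be added to the import if the schema or tablespace names differ on the target database.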
4)     How to transfer multiple tables from one database to another database and what is the impact on indexes?
5)     How to rename a table or Tablespace or database or indexes?
Renaming a Tablespace
SELECT TABLESPACE_NAME FROM USER_TABLESPACES WHERE TABLESPACE_NAME = 'MYTBS';
ALTER TABLESPACE MYTBS RENAME TO MYTBS2;

Note that the ALTER TABLE examples below follow PostgreSQL syntax; the Oracle equivalents differ in several cases (for example, Oracle uses MODIFY instead of ALTER COLUMN ... TYPE, MOVE TABLESPACE instead of SET TABLESPACE, has no SET SCHEMA clause, and rebuilds indexes without blocking DML using the ONLINE keyword rather than CONCURRENTLY).
To add a column of type varchar to a table:
ALTER TABLE distributors ADD COLUMN address varchar(30);
To drop a column from a table:
ALTER TABLE distributors DROP COLUMN address RESTRICT;
To change the types of two existing columns in one operation:
ALTER TABLE distributors
    ALTER COLUMN address TYPE varchar(80),
    ALTER COLUMN name TYPE varchar(100);
To change an integer column containing UNIX timestamps to timestamp with time zone via a USING clause:
ALTER TABLE foo
    ALTER COLUMN foo_timestamp SET DATA TYPE timestamp with time zone
    USING
        timestamp with time zone 'epoch' + foo_timestamp * interval '1 second';
The same, when the column has a default expression that won't automatically cast to the new data type:
ALTER TABLE foo
    ALTER COLUMN foo_timestamp DROP DEFAULT,
    ALTER COLUMN foo_timestamp TYPE timestamp with time zone
    USING
        timestamp with time zone 'epoch' + foo_timestamp * interval '1 second',
    ALTER COLUMN foo_timestamp SET DEFAULT now();
To rename an existing column:
ALTER TABLE distributors RENAME COLUMN address TO city;
To rename an existing table:
ALTER TABLE distributors RENAME TO suppliers;
To add a not-null constraint to a column:
ALTER TABLE distributors ALTER COLUMN street SET NOT NULL;
To remove a not-null constraint from a column:
ALTER TABLE distributors ALTER COLUMN street DROP NOT NULL;
To add a check constraint to a table and all its children:
ALTER TABLE distributors ADD CONSTRAINT zipchk CHECK (char_length(zipcode) = 5);
To remove a check constraint from a table and all its children:
ALTER TABLE distributors DROP CONSTRAINT zipchk;
To remove a check constraint from a table only:
ALTER TABLE ONLY distributors DROP CONSTRAINT zipchk;
(The check constraint remains in place for any child tables.)
To add a foreign key constraint to a table:
ALTER TABLE distributors ADD CONSTRAINT distfk FOREIGN KEY (address) REFERENCES addresses (address);
To add a (multicolumn) unique constraint to a table:
ALTER TABLE distributors ADD CONSTRAINT dist_id_zipcode_key UNIQUE (dist_id, zipcode);
To add an automatically named primary key constraint to a table, noting that a table can only ever have one primary key:
ALTER TABLE distributors ADD PRIMARY KEY (dist_id);
To move a table to a different tablespace:
ALTER TABLE distributors SET TABLESPACE fasttablespace;
To move a table to a different schema:
ALTER TABLE myschema.distributors SET SCHEMA yourschema;
To recreate a primary key constraint, without blocking updates while the index is rebuilt:
CREATE UNIQUE INDEX CONCURRENTLY dist_id_temp_idx ON distributors (dist_id);
ALTER TABLE distributors DROP CONSTRAINT distributors_pkey,
    ADD CONSTRAINT distributors_pkey PRIMARY KEY USING INDEX dist_id_temp_idx;

6)     How to copy/move data from one table to another table in another database?
insert into MY_DATA.table_name select * from CORE_DATA.table_name;
Assuming that the tables are defined identically in both schemas, the above will copy all records from the table named table_name in CORE_DATA to the table named table_name in MY_DATA.
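If the two tables really are in different databases, as the question asks, a database link is needed on top of this. A hedged sketch, assuming a link named remote_db already points at the source database:

INSERT INTO MY_DATA.table_name SELECT * FROM CORE_DATA.table_name@remote_db;
COMMIT;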

Problem: Each of these tables that are not required is really huge, with 50-80 million records. So, in order to move the tables from one schema to another, I have (as far as I know) just two options.
 
1. Create table schema2.t1 as select * from schema1.t1;
 
2. Export these tables from schema 1 and use import with from_user=schema1 and 
to_user=schema2.
 
exp userid=other_schema owner=other_schema ROWS=N
imp userid=new_schema fromuser=other_schema touser=new_schema IGNORE=Y
 
Currently I have used method 2. But because of these huge tables, it is taking a really long time and also huge disk space to store the dump files temporarily (though I am not too worried about the disk space).

So, what I would like to know is: is there a way in Oracle 8i and above to just move a table from one schema to another? I am looking for some command like "Rename t1 to t2" but in a different schema.

There are a couple of ways in which a table can be moved to a different tablespace:
1) One of them is to perform export/import.
2) Another approach is to use the 'alter table' command with the 'move tablespace' clause.
Let us start by creating a table. Connect to SQL*Plus using proper credentials and run the following SQLs:
CREATE TABLE TEST
(
TEST_ID NUMBER(9) NOT NULL,
TEST_DESC VARCHAR(10),
CONSTRAINT PK_TEST PRIMARY KEY(TEST_ID)
)
/
CREATE INDEX TEST_IND_1 ON TEST(TEST_DESC)
/
The above SQL creates the table and index in the user's default tablespace; in our case it is the USERS tablespace. Run the following SQL to check where these objects were created.
SELECT TABLE_NAME,TABLESPACE_NAME,STATUS
FROM User_Tables
WHERE TABLE_NAME = 'TEST'
UNION ALL
SELECT INDEX_NAME,TABLESPACE_NAME,STATUS
FROM User_Indexes
WHERE TABLE_NAME = 'TEST'
/
And the result is

TABLE_NAME                     TABLESPACE_NAME                STATUS
------------------------------ ------------------------------ --------
TEST                           USERS                          VALID
TEST_IND_1                     USERS                          VALID
PK_TEST                        USERS                          VALID

Since we are using UNION ALL, the two indexes PK_TEST (the primary key index) and TEST_IND_1 are also shown under the TABLE_NAME column. The result shows that the table and its indexes were created in the USERS tablespace and are in VALID status. Now let us insert one record into the table.
INSERT INTO TEST VALUES(1,'Hello');
COMMIT;
Now let us retrieve the value of the TEST_ID column along with the value of the pseudo column ROWID. We will explain later in the article why we are retrieving the ROWID. Following is the SQL, followed by the result.
SQL> SELECT ROWID, TEST_ID FROM TEST;

ROWID                 TEST_ID
------------------ ----------
AAAQ+eAAEAAAA3tAAA          1
Now we will move the TEST table to another tablespace, SLMDATA. We already have the tablespace SLMDATA in place. If you don't have another tablespace where you can move the table, please create one first for testing purposes. The following command will move table TEST to the new tablespace SLMDATA.
SQL> ALTER TABLE TEST MOVE TABLESPACE SLMDATA;
The above command moves the table to the new tablespace SLMDATA. Now let us check the ROWID value again by re-issuing the previous SQL.
SQL> SELECT ROWID, TEST_ID FROM TEST;

ROWID                 TEST_ID
------------------ ----------
AAAQ+hAAHAAAAAsAAA          1
So far we have moved only the table. Moving a table to a new tablespace makes all of the indexes on the table unusable. We can verify this by re-issuing the SQL we executed initially after creating the table and index.

TABLE_NAME                     TABLESPACE_NAME                STATUS
------------------------------ ------------------------------ --------
TEST                           SLMDATA                        VALID
TEST_IND_1                     USERS                          UNUSABLE
PK_TEST                        USERS                          UNUSABLE

The result shows that the table has been moved to the new tablespace SLMDATA and is in VALID status, while the indexes still point to the older tablespace and are in UNUSABLE status. Indexes in an unusable state prevent DML activity on the table. We can verify this by inserting a new record into the TEST table.
INSERT INTO TEST VALUES(2,'World!');
Running the above SQL results in the following error.
ERROR at line 1:
ORA-01502: index 'DECIPHER.PK_TEST' or partition of such index is in unusable
state
Why did this happen? When we moved the table to the new tablespace, each row was moved and received a new ROWID. If you compare the ROWID values before and after the move, you will see that the ROWID for the same TEST_ID is different. The indexes still point to the previous location of the rows, not the current one, which is why they must be rebuilt. If you also want to move an index to the new tablespace, include the TABLESPACE clause in the rebuild; if you want to keep the index in its current tablespace, just issue the REBUILD clause on its own. Following is an example of both.
ALTER INDEX PK_TEST REBUILD;
ALTER INDEX TEST_IND_1 REBUILD TABLESPACE SLMDATA;
The first statement only rebuilds the primary key index without moving it to the new tablespace. The second statement rebuilds the index and also moves it to the other tablespace. Verify it by issuing the following SQL.
SELECT INDEX_NAME,TABLESPACE_NAME,STATUS
FROM User_Indexes
WHERE TABLE_NAME = 'TEST';
Result is

INDEX_NAME                     TABLESPACE_NAME                STATUS
------------------------------ ------------------------------ --------
TEST_IND_1                     SLMDATA                        VALID
PK_TEST                        USERS                          VALID
Once the indexes are rebuilt, all DML operations can be resumed on the table; the insert that failed previously will now work fine. One thing to remember is that MOVE TABLESPACE does not work if the table contains a column of LONG or LONG RAW data type; you will run into 'ORA-00997: illegal use of LONG datatype'. Such tables can be moved to a new tablespace using exp/imp.
Dynamically transfer data from a table in one tablespace to a copy in another tablespace using a stored procedure (here PRM_TSPN is the prefix of the target schema that owns the copy):
PROCEDURE Test (PRM_TSPN IN VARCHAR2, tbl_name IN VARCHAR2)
IS
  TABLE_NAME VARCHAR2(30) := tbl_name;
BEGIN
  -- builds and runs: INSERT INTO <target schema>.<table> SELECT * FROM <table>
  EXECUTE IMMEDIATE 'INSERT INTO ' || PRM_TSPN || '.' || TABLE_NAME ||
                    ' SELECT * FROM ' || TABLE_NAME;
END;
Hi there, I need to export tables from one schema and import them into another test schema for testing some procedures.
Let's suppose we have the A and B schemas and you want to copy the tables from A to B.
First make sure schema B has the necessary grant on each table in schema A that you want to copy. For each such table X, run the following from the A schema:
grant select on X to B;

Afterwards, in the B schema, run the creation script for the table (you can get it from TOAD) and then copy the data:
(B schema): insert into B.X select * from A.X;

How do you move a partitioned table from one tablespace to another in Oracle 11g?
I have a partitioned table that belongs to tablespace report. I want to move it to tablespace record instead.
One possibility is to drop the table and recreate it in the new tablespace, but that is not an option for me, since there is data in the table that needs to survive the move.
I started by checking that the partitions actually belong to tablespace report with:
SELECT * FROM user_tab_partitions WHERE table_name = 'REQUESTLOG';
Then I just tried:
ALTER TABLE requestLog MOVE TABLESPACE record;
But that gives me the error ORA-14511 "cannot perform operation on a partitioned object".
Then I found out that I can move individual partitions using:
ALTER TABLE requestLog MOVE PARTITION "2009-12-29" TABLESPACE record;
But since there are 60 partitions of the table (based on date), and because I may have to do this for several systems, I would like to loop over all the partition names, moving each to the new tablespace. I tried that, but couldn’t quite get the SQL to work.
Even if I move all the existing partitions to the new tablespace, there is still a problem when creating new partitions. The new partitions are still created in the old tablespace report. How do I change so that new partitions are created in the new tablespace record?
You have to consider indexes that may be invalidated as well - to cover your question about resetting the default tablespaces in addition to this, I think this is the full process that you'll want to implement:
1) Move partitions (a PL/SQL loop as per zürigschnäzlets' answer)
These are procedures I use within an anonymous block wrapper that declares sqlStmnt and supplies a_tname and a_destTS - they should give you the general idea:
procedure mvTabPart (a_tname in varchar2, a_destTS in varchar2) is
  cursor pCur(vTname varchar2, vTspName varchar2) is
    select table_name, partition_name
    from user_tab_partitions
    where table_name = vTname
      and tablespace_name not like vTspName
    order by partition_position desc;
begin
  for pRow in pCur(a_tname, a_destTS) loop
    sqlStmnt := 'alter table '||pRow.table_name||
                ' move partition '||pRow.partition_name||
                ' tablespace '||a_destTS;
    execute immediate sqlStmnt;
  end loop;
end mvTabPart;
2) Set table default partition tablespace so new partitions are created there:
    procedure setDefTabPart (a_tname in varchar2, a_destTS in varchar2) is
      cursor tCur(vTname varchar2) is
        select table_name
        from user_part_tables
        where table_name = vTname;
    begin
      for tRow in tCur(a_tname) loop
        sqlStmnt := 'alter table '||tRow.table_name||
                    ' modify default attributes '||
                    ' tablespace '||a_destTS;
        execute immediate sqlStmnt;
      end loop;
    end setDefTabPart;
3) Set index default partition tablespace so new index partitions (if any) are created where you want them:
procedure setDefNdxPart (a_tname in varchar2, a_destTS in varchar2) is
  cursor iCur(vTname varchar2) is
    select index_name
    from user_part_indexes
    where index_name in (select index_name
                         from user_indexes where table_name = vTname);
begin
  for iRow in iCur(a_tname) loop
    sqlStmnt := 'alter index '||iRow.index_name||
                ' modify default attributes '||
                ' tablespace '||a_destTS;
    execute immediate sqlStmnt;
  end loop;
end setDefNdxPart;
4) rebuild any partitioned indexes that need rebuilding and are not in the desired tablespace:
procedure mvNdxPart (a_tname in varchar2, a_destTS in varchar2) is
  cursor ndxCur(vTname varchar2, vTspName varchar2) is
    select i.index_name index_name, ip.partition_name partition_name
    from user_ind_partitions ip, user_indexes i
    where i.index_name = ip.index_name
      and i.table_name = vTname
      and i.partitioned = 'YES'
      and (ip.tablespace_name not like vTspName or ip.status not like 'USABLE')
    order by index_name, partition_name;
begin
  for ndxRow in ndxCur(a_tname, a_destTS) loop
    sqlStmnt := 'alter index '||ndxRow.index_name||
                ' rebuild partition '||ndxRow.partition_name||
                ' tablespace '||a_destTS;
    execute immediate sqlStmnt;
  end loop;
end mvNdxPart;
5) Rebuild any global indexes
procedure mvNdx (a_tname in varchar2, a_destTS in varchar2) is
  cursor ndxCur(vTname varchar2, vTspName varchar2) is
    select index_name
    from user_indexes
    where table_name = vTname
      and partitioned = 'NO'
      and (tablespace_name not like vTspName or status like 'UNUSABLE')
    order by index_name;
begin
  for ndxRow in ndxCur(a_tname, a_destTS) loop
    sqlStmnt := 'alter index '||ndxRow.index_name||
                ' rebuild tablespace '||a_destTS;
    execute immediate sqlStmnt;
  end loop;
end mvNdx;

If this is an option, the easiest way could be to rename the table (ALTER TABLE requestLog RENAME TO requestLogTmp;), create the same table with all indexes in the correct tablespace and copy the data from the old table:
INSERT INTO requestLog SELECT * FROM requestLogTmp;
When everything is up and running, you can drop the old table.
You can either do it with PL/SQL or generate the statements with sql. I decided to generate the alter table statements with simple SQL:
--set linesize
set lines 100

--This query generates the ALTER TABLE statements:
SELECT 'ALTER TABLE '
       ||table_name
       ||' MOVE PARTITION '
       ||partition_name
       ||' TABLESPACE RECORD;'
FROM   all_tab_partitions
WHERE  table_name = 'REQUESTLOG';

You can execute the output from the previous statement.
Every user has a default tablespace. New database objects are created in that default tablespace if nothing else is specified on creation/alteration.
--MOVING ALL TABLES FROM USER / SCHEMA TO ANOTHER TABLESPACE

BEGIN
  FOR i IN (
    SELECT * FROM ALL_tables where owner = :owner
      and (tablespace_name is null or tablespace_name != :tbs)
      and temporary != 'Y'
      and partitioned != 'YES'
    ) LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE '  || i.table_name || ' MOVE TABLESPACE ' || :tbs;
  END LOOP;
END;





--MOVING ALL TABLE PARTITIONS OF USER / SCHEMA TO ANOTHER TABLESPACE

BEGIN
  FOR i IN (
    SELECT * FROM ALL_tab_partitions
    WHERE table_owner = :owner and tablespace_name != :tbs
  ) LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE '
      || i.table_name || ' MOVE PARTITION '
      || i.partition_name ||' TABLESPACE '|| :tbs;
  END LOOP;
END;



--SETTING THE DEFAULT TABLESPACE OF ALL PARTITIONED TABLES OF USER / SCHEMA (so new partitions are created there)

BEGIN
  FOR i IN (
    SELECT * FROM ALL_tables where owner = :owner and partitioned = 'YES'
  ) LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE '
      || i.table_name || ' MODIFY DEFAULT ATTRIBUTES TABLESPACE ' || :tbs;
  END LOOP;
END;
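The loops above do not touch ordinary (non-partitioned) indexes. A hedged loop in the same style for those, using the same :owner and :tbs binds:

--MOVING ALL NON-PARTITIONED INDEXES OF USER / SCHEMA TO ANOTHER TABLESPACE

BEGIN
  FOR i IN (
    SELECT * FROM ALL_indexes
    WHERE owner = :owner
      and tablespace_name != :tbs
      and partitioned = 'NO'
      and index_type = 'NORMAL'
  ) LOOP
    EXECUTE IMMEDIATE 'ALTER INDEX '
      || i.owner || '.' || i.index_name || ' REBUILD TABLESPACE ' || :tbs;
  END LOOP;
END;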

I have a database with multiple tablespaces, and I'm trying to copy rows from a table in one tablespace to a table in a different tablespace. The following command works in SQL*Plus but not in a PL/SQL procedure:

Insert into salesforce1.contacts select * from salesforce2.contacts where contactid > 1000;

When I compile the procedure, an error message says "table or view does not exist".
Your grants (permission to access one or both of those tables) are via a role. In PL/SQL stored objects, roles are automatically disabled, so the user compiling the procedure MUST have DIRECT grants on the objects.
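A hedged example of the direct grant needed here, run as the owner of the source table or as a DBA:

GRANT SELECT ON salesforce2.contacts TO salesforce1;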
Is it possible in Oracle to copy data (tables) from one tablespace to another without using export/import?
eg. User A - default tablespace A - table A
User B - default tablespace B - no table
Is it possible to create a table B in tablespace B with user B which is an exact replica of table A, which resides in tablespace A and is owned by user A?
CREATE TABLE B.FOOBAR TABLESPACE fooB AS SELECT * FROM A.FOOBAR;

7)     What is the basic kernel parameters required in Linux/Unix for Oracle Database?
OS kernel parameters
Oracle’s OS specific installation instructions provide guidelines for the OS configuration, but the settings for the OS parameters can make an enormous difference in Oracle performance.

Because Oracle runs on over 60 different operating systems from a mainframe to a Macintosh, it is impossible to cover every single platform.  However, the common configuration issues for UNIX and Microsoft Windows platforms will be presented.
Server Settings for Windows Servers
Windows servers for Oracle are relatively simple when compared to UNIX-based servers.  There are only a few major points to cover to ensure that the Windows server is optimized for an Oracle database.  The larger Windows servers (e.g. the UNISYS ES7000 servers) can have up to 32 CPUs and hundreds of gigabytes of RAM.  They can support dozens of Oracle instances, but many third party applications can hog server resources, causing Oracle performance issues.
Kernel setting for UNIX and Linux servers
In UNIX and Linux, there is much more flexibility in configuration, and hundreds of kernel settings can benefit database performance.  Table 14.1 lists some of the most common kernel parameters that influence Oracle:

Parameter Name   Description                                                                Default Value   Set By the DBA
shmmax           The maximum size, in bytes, of a single shared memory segment.             1048576         YES
                 For best performance, it should be large enough to hold the entire SGA.
shmmin           The minimum size, in bytes, of a single shared memory segment.             1               YES
shmseg           The maximum number of shared memory segments that can be attached          6               YES
                 (i.e. used) by a single process.
shmmni           Determines how many shared memory segments can be on the system.           100             YES
shmmns           The amount of shared memory that can be allocated system-wide.             N/A             NO

Table 14.1: OS Parameters
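On Linux these limits typically end up in /etc/sysctl.conf. A hedged fragment follows; all values are illustrative only and must be sized for the actual server and SGA:

# /etc/sysctl.conf (illustrative values only)
kernel.shmmax = 4294967295      # largest single shared memory segment, in bytes
kernel.shmmni = 4096            # maximum number of shared memory segments
kernel.shmall = 2097152         # total shared memory pages allowed system-wide
fs.file-max = 6815744           # maximum open file handles

# apply without a reboot:
# sysctl -p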

The OS-specific Oracle installation guide should be consulted for details.  One of the most common problems with Oracle server configuration is sub-optimal I/O.  For example, the most important thing on Linux is enabling direct I/O on the underlying file system. Without it, Linux will cache files both in the system buffer cache and in the SGA; that double caching is unnecessary and deprives the server of RAM resources.  The following section provides a closer look by outlining some of the important Oracle parameters for performance.
 
8)     How can you find the size of a table in a Tablespace/Database? Estimate the expected time to copy a table data of 100GB from one Tablespace to another database different Tablespace?
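A hedged way to get the current size of a single table's segment (the owner and table name are illustrative):

SELECT owner, segment_name, ROUND(bytes/1024/1024) AS size_mb
FROM   dba_segments
WHERE  owner = 'SCOTT'
  AND  segment_name = 'EMP'
  AND  segment_type = 'TABLE';

For the 100GB estimate, a practical approach is to time the copy of a smaller, representative slice of the data and scale up, since the total time depends on I/O, network bandwidth, and how many indexes must be rebuilt.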
9)     What is the default role granted for users in a database? Explain about Privileges and Roles?
10)  What is a Grant Connect, Resource means? How will you check the Users Privileges on particular Tables?
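For the second half of the question, a hedged query against the data dictionary (the owner and table name are illustrative):

SELECT grantee, privilege, grantable
FROM   dba_tab_privs
WHERE  owner = 'SCOTT'
  AND  table_name = 'EMP';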
11)  Explain Import and export commands syntax for data transfer?
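Hedged examples of the classic exp/imp syntax (schema, file, and log names are illustrative; expdp/impdp are the Data Pump equivalents):

exp system/password OWNER=scott FILE=scott.dmp LOG=scott_exp.log
imp system/password FROMUSER=scott TOUSER=scott FILE=scott.dmp LOG=scott_imp.log IGNORE=Y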
12)  Suppose if a database control file is corrupted and there is no backup? How will you retrieve the same?
13)  We have SCN numbers (archived log files) 1 to 15 for a database, and we found that file 10 is corrupted. How will you get the database up and running?
We can recover the database only up to 9; the changes in the corrupted file 10 and everything after it cannot be applied. So either perform incomplete recovery up to that point (and open the database with RESETLOGS), or restore the entire database from an older backup.
14)  How to move Tablespace from one database to another database?
15)  How to rename a Tablespace in an existing database?
        ALTER TABLESPACE users RENAME TO usersts;
16)     How to move a user's tables from one tablespace to another tablespace?
        ALTER TABLE table_name MOVE TABLESPACE new_ts;
 
Here I present a simple query to use when we want to move the tables and 
indexes of several users at once
 
ALTER TABLE xxxxxx MOVE TABLESPACE TEST;
ALTER INDEX xxxxxx REBUILD TABLESPACE TEST;

To move the tables and indexes of multiple owners (OWNER1 and OWNER2 in this example) into tablespace TEST, generate the statements with:
select 'ALTER INDEX ' || OWNER || '.' || INDEX_NAME || ' REBUILD TABLESPACE TEST;' from DBA_INDEXES WHERE OWNER IN ('OWNER1','OWNER2');
select 'ALTER TABLE ' || OWNER || '.' || TABLE_NAME || ' MOVE TABLESPACE TEST;' from DBA_TABLES WHERE OWNER IN ('OWNER1','OWNER2');
How to configure ASM disk space for Oracle 11g
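A hedged sketch of creating an ASM disk group (disk paths and the redundancy level are illustrative; the statement is run in the ASM instance and the disks must already be discoverable):

CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/oracleasm/disks/DISK1',
       '/dev/oracleasm/disks/DISK2';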

How to create DB Link for two databases to access the objects / tables related?
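A hedged sketch for this one (the link name, credentials, and TNS alias are illustrative):

CREATE DATABASE LINK remote_db
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'REMOTEDB';

-- quick test of the link
SELECT * FROM dual@remote_db;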


