Thursday, April 21, 2011

ORA-32321 MV FAST REFRESH after TRUNCATE - example

When you have (fast refreshable on commit) Materialized Views in your schema and want to truncate one or more of the underlying tables, you have a great chance of running into trouble after the truncate. This is when the ORA-32321 error usually appears:

  ORA-32321: REFRESH FAST of "..." unsupported after detail table TRUNCATE


This error occurs because the truncate operation does not record any changes in the MV log and thus a fast refresh will be out of the question for this MV. To solve this, you will have to issue a manual complete refresh on that MV. After that, fast refresh will be possible again.
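
A complete refresh is a one-liner from SQL*Plus; for instance, for the MV used in the example below:

```sql
-- 'C' requests a Complete refresh; afterwards, fast refresh on commit works again
exec dbms_mview.refresh('TRAIN_MIN_MAX_TIMES_MV', 'C');
```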

So far, there is not much news. It gets interesting if you want to truncate and/or delete from multiple tables (e.g. a master-detail pair) that are the basis for an MV. When truncating a master-detail pair of tables, you basically have two choices:
  1. disable the foreign key between the tables, truncate them both and re-enable the foreign key
  2. truncate the detail table and after that, delete from the master table

As I have something against disabling constraints (because you must be really confident that re-enabling them will succeed), I slightly prefer the second option.

There is a small difference in the timing of when the materialized view should be refreshed. In the first scenario, the MV should be completely refreshed after both tables have been truncated and the constraint has been enabled again. In the second scenario, you should refresh the MV right after the truncate and again after the delete and commit on the master table. If you forget the first refresh, you will get the ORA-32321 error on commit of the delete on the master table. Should you forget the second refresh, the MV will not refresh at all, although you will not get an error!

In order to see this in action, I devised a simple example (see below).
  1. we create a master-detail pair of tables, and give them an initial rowset. After that, an MV is created (and at the same time populated).
  2. the detail table is truncated and rows from the master table are deleted. Note the timing of the complete refreshes of the MV
  3. the tables are populated again in two steps, proving that the MV is populated on commit, as expected

If you want to see this in action, run the statements as a script, or use them statement-by-statement to follow every step along the way. To see what happens when you omit one of the refreshes, just try it: leave out the first, the second, or both of the refreshes and spot the differences. Leaving out the first returns an obvious error. Leaving out the second can cause some trouble, as it isn't really obvious that something is wrong. Right up to the point that users start complaining, should this happen in a production environment...

Example - using manual refresh on MVs to re-enable fast refresh on commit


  --
  -- Cleanup
  --

  DROP MATERIALIZED VIEW train_min_max_times_mv;
  DROP TABLE train_times;
  DROP TABLE trains;

  --
  -- Setup for the tables
  --

  CREATE TABLE trains
  ( train_number    number(10)
  , train_date      date
  , description     varchar2(100)
  , CONSTRAINT trains_pk PRIMARY KEY (train_number) USING INDEX
  );

  CREATE TABLE train_times
  ( train_number    number(10)
  , train_time      timestamp(0)
  , description     varchar2(100)
  , CONSTRAINT train_times_pk PRIMARY KEY (train_number, train_time) USING INDEX
  , CONSTRAINT train_times_trains_fk FOREIGN KEY (train_number)
                REFERENCES trains (train_number)
  );

  INSERT INTO trains VALUES (1, sysdate, 'first train');
  INSERT INTO trains VALUES (2, sysdate, 'second train');

  INSERT INTO train_times VALUES (1, systimestamp       , 'station A');
  INSERT INTO train_times VALUES (1, systimestamp+1/1440, 'station B');
  INSERT INTO train_times VALUES (1, systimestamp+2/1440, 'station C');
  INSERT INTO train_times VALUES (1, systimestamp+3/1440, 'station D');

  INSERT INTO train_times VALUES (2, systimestamp+4/1440, 'station X');
  INSERT INTO train_times VALUES (2, systimestamp+5/1440, 'station Y');
  INSERT INTO train_times VALUES (2, systimestamp+6/1440, 'station Z');

  COMMIT;

  --
  -- Create the materialized view and view logs
  --

  CREATE MATERIALIZED VIEW LOG ON trains
    WITH SEQUENCE, ROWID (train_number, train_date)
    INCLUDING NEW VALUES;

  CREATE MATERIALIZED VIEW LOG ON train_times
    WITH SEQUENCE, ROWID (train_number, train_time)
    INCLUDING NEW VALUES;

  CREATE MATERIALIZED VIEW train_min_max_times_mv
    BUILD IMMEDIATE
    REFRESH FAST ON COMMIT
    ENABLE QUERY REWRITE
    AS SELECT tr.train_number
            , tr.train_date
            , min(tt.train_time)    min_train_time
            , max(tt.train_time)    max_train_time
            , count(*)              total_times
         FROM trains       tr
            , train_times  tt
        WHERE tt.train_number = tr.train_number
        GROUP BY tr.train_number
            , tr.train_date;

  --
  -- Truncate detail table, delete master table
  -- and make MV fast refreshable again
  --

  TRUNCATE TABLE train_times;

  exec dbms_mview.refresh('TRAIN_MIN_MAX_TIMES_MV','C');

  DELETE FROM trains;

  COMMIT;

  exec dbms_mview.refresh('TRAIN_MIN_MAX_TIMES_MV','C');

  SELECT * FROM train_min_max_times_mv;

  INSERT INTO trains VALUES (1, sysdate, 'first train');

  INSERT INTO train_times VALUES (1, systimestamp       , 'station A');
  INSERT INTO train_times VALUES (1, systimestamp+1/1440, 'station B');
  INSERT INTO train_times VALUES (1, systimestamp+2/1440, 'station C');
  INSERT INTO train_times VALUES (1, systimestamp+3/1440, 'station D');

  COMMIT;

  SELECT * FROM train_min_max_times_mv;

  INSERT INTO trains VALUES (2, sysdate, 'second train');

  INSERT INTO train_times VALUES (2, systimestamp+4/1440, 'station X');
  INSERT INTO train_times VALUES (2, systimestamp+5/1440, 'station Y');
  INSERT INTO train_times VALUES (2, systimestamp+6/1440, 'station Z');

  COMMIT;

  SELECT * FROM train_min_max_times_mv;


Example - disabling constraints when truncating

The first example can be modified to disable constraints instead, by replacing the "truncate-refresh-delete-commit-refresh" portion of the example with the lines below.

  ALTER TABLE train_times MODIFY CONSTRAINT train_times_trains_fk DISABLE;

  TRUNCATE TABLE train_times;
  TRUNCATE TABLE trains;

  ALTER TABLE train_times MODIFY CONSTRAINT train_times_trains_fk ENABLE;

  exec dbms_mview.refresh('TRAIN_MIN_MAX_TIMES_MV','C');

Wednesday, April 20, 2011

RMAN pipe mode example - a simple interface for remote rman access

When reading the backup and recovery user's guide of the 11gR2 database (again), I came across the RMAN pipe interface. Maybe I came across this before, but I never really paid any attention to it. Until now.

The RMAN pipe interface enables you to interact with RMAN while connected via a different tool or interface, like SQL*Plus. This means that you don't have to be connected to the host on which the database resides, or even connected as sysdba, to execute RMAN commands. Let's first look at how to do this, and then we'll get back to why you would want this (or not...)

According to the documentation, it is fairly easy to set up the pipe mode. Simply start the rman executable like this:

  [oracle@localhost ~]$ rman pipe orcl target / catalog rman@rmancat
  Recovery Manager: Release 11.2.0.2.0 - Production on Wed Apr 20 11:16:05 2011
  Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
  recovery catalog database Password:


After supplying the password, the prompt does nothing. This means that rman is now running in pipe mode and is not accepting any input via the CLI.

If you want to verify that pipe mode is enabled, use sqlplus to check if the pipes are created as private pipes:

  [oracle@localhost ~]$ sqlplus / as sysdba
  SQL*Plus: Release 11.2.0.2.0 Production on Wed Apr 20 12:27:30 2011
  Copyright (c) 1982, 2010, Oracle. All rights reserved.
  Connected to:
  Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
  With the Partitioning, OLAP, Data Mining and Real Application Testing options
  SQL> select type, name from v$db_pipes;
  TYPE    NAME
  ------- ----------------------------------------
  PRIVATE ORA$RMAN_ORCL_IN
  PRIVATE ORA$RMAN_ORCL_OUT


So, now we have the pipes that the documentation told us we would have. How do we use them? The documentation suggests using dbms_pipe.pack_message and dbms_pipe.send_message:

  DECLARE
    l_return integer;
  BEGIN
    dbms_pipe.pack_message('list backup summary;');
    l_return := dbms_pipe.send_message('ORA$RMAN_ORCL_IN');
  END;
  /


Now, we have sent the first command to rman using the "IN" pipe. This means that we should expect some output on the "OUT" pipe as well:

  DECLARE
    l_return integer;
    l_text varchar2(4000);
  BEGIN
    l_return := dbms_pipe.receive_message('ORA$RMAN_ORCL_OUT');
    dbms_pipe.unpack_message(l_text);
    dbms_output.put_line(l_text);
  END;
  /


This returns one line of text. Usually, rman returns more than one line, so we'll have to repeat this until we get all the lines. The last line to expect is "RMAN-00572: waiting for DBMS_PIPE input", which means that rman is waiting for a new command on the pipe.

Example interface

This calls for a procedure to handle some of this stuff for us. Or even better: we create a package. The following is a simplified example of how to handle an rman command using a pipe.

  CREATE OR REPLACE PACKAGE rman_pipe
  AS
    PROCEDURE do_command( p_command IN varchar2 );
  END;
  /

  CREATE OR REPLACE PACKAGE BODY rman_pipe
  AS
    PROCEDURE pack_message (p_command IN varchar2)
    IS
      l_return  integer;
    BEGIN
      --
      -- Pack the command (check for closing semi-colon)
      --
      IF substr(p_command,-1) != ';' THEN
        dbms_pipe.pack_message(p_command||';');
      ELSE
        dbms_pipe.pack_message(p_command);
      END IF;
      --
      -- Send the message
      --
      l_return := dbms_pipe.send_message('ORA$RMAN_ORCL_IN');
      IF l_return = 0 THEN
        dbms_output.put_line('.');
        dbms_output.put_line('RMAN command sent');
        dbms_output.put_line('.');
      ELSIF l_return = 1 THEN
        dbms_output.put_line('Timed Out');
      ELSIF l_return = 3 THEN
        dbms_output.put_line('An Interrupt Occurred');
      ELSE
        dbms_output.put_line('Unknown Return Code '||l_return);
      END IF;
    END pack_message;
    --
    PROCEDURE unpack_message (p_timeout IN integer default dbms_pipe.maxwait)
    IS
      l_return  integer         := 0;
      l_text    varchar2(4000)  := 'No text';
    BEGIN
      --
      -- Check the pipe for messages until RMAN-00572 is received
      --
      WHILE l_return  = 0
      AND   l_text   != 'RMAN-00572: waiting for DBMS_PIPE input'
      LOOP
        l_return := dbms_pipe.receive_message('ORA$RMAN_ORCL_OUT', p_timeout);
        IF l_return = 0 THEN
          dbms_pipe.unpack_message(l_text);
          dbms_output.put_line(l_text);
        ELSIF l_return = 1 THEN
          dbms_output.put_line('Timed Out');
        ELSIF l_return = 2 THEN
          dbms_output.put_line('Record in the pipe is too large for the buffer');
        ELSIF l_return = 3 THEN
          dbms_output.put_line('An Interrupt Occurred');
        ELSE
          dbms_output.put_line('Unknown Return Code '||l_return);
        END IF;
      END LOOP;
    END unpack_message;
    --
    PROCEDURE do_command( p_command IN varchar2 )
    IS
    BEGIN
      unpack_message(0);
      pack_message(p_command);
      unpack_message;
    END do_command;
  END;
  /
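
With the package compiled, sending a command and reading all of its output from SQL*Plus comes down to a single call (assuming the ORA$RMAN_ORCL_* pipes shown earlier exist, and serveroutput is on):

```sql
SET SERVEROUTPUT ON SIZE UNLIMITED

-- Sends the command on ORA$RMAN_ORCL_IN and prints everything rman returns
-- on ORA$RMAN_ORCL_OUT, up to the RMAN-00572 "waiting for input" line
exec rman_pipe.do_command('list backup summary')
```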


This is not as basic as when I started out, though. The problems I ran into, in chronological order:
  • When retrieving the results, you have to check whether you received the RMAN-00572 message
=> after receiving this, stop extracting from the pipe
  • When packing the command, I sometimes forgot to put a semi-colon after the command
=> so I created the part that checks for a semi-colon
  • When issuing the first command, there is something in the buffer:

  connected to target database: ORCL (DBID=1229390655)
  connected to recovery catalog database
  RMAN-00572: waiting for DBMS_PIPE input

This means that the first time you request the messages up to RMAN-00572, you will only get this header. On the next command, you will get the output of the previous command, and so on.
=> first, receive any old messages on the pipe, then execute the command and get the results
  • When no previous messages are on the pipe, you will wait indefinitely for a message
=> use a timeout for the first request for messages on the pipe, so you won't get stuck there

Conclusion

Still, this is a simple interface, but think of the possibilities:
  • You can now grant "execute" privileges to a user, making that user the rman admin for this database
  • No connection via the host or as sysdba is needed

Sure, there are some (major?) drawbacks to overcome at this point:
  • When starting pipe mode as a background process (nohup &), the pipe mode will end when rman receives an error (as with a malformed command)
  • When starting pipe mode as a background process, you have to provide the password on the command line

Friday, April 15, 2011

    BULK COLLECT %FOUND and %NOTFOUND behaviour

    When using the BULK COLLECT feature in PL/SQL, a lot of programmers get confused about using %FOUND and %NOTFOUND in their code. As has been said many times (on various sites, by various people), it seems better to avoid this by using .count, but I think it is good to point out the behaviour of %FOUND and %NOTFOUND in these cases.

    Usually, you will have a code block similar to this:

      OPEN c_cur;
      LOOP
        FETCH c_cur BULK COLLECT INTO t_cur [LIMIT <n>];
        l_found := c_cur%FOUND;
        [PL/SQL code here...]
        EXIT WHEN NOT l_found;
      END LOOP;
      CLOSE c_cur;

    When you are determined to use %FOUND or %NOTFOUND, note the meaning of these when using BULK COLLECT:
  • %FOUND will return TRUE if exactly [limit] rows were retrieved
  • If %FOUND returns TRUE, then there might be more rows to be fetched
  • %FOUND will return FALSE if fewer than [limit] rows were retrieved
  • %FOUND will always return FALSE if [limit] is not set (you retrieve fewer than "unlimited" rows)
  • %NOTFOUND will always return the opposite of %FOUND (as is to be expected...)
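
    For comparison, the .count-based variant of the same loop avoids the %FOUND question altogether. A sketch (fetching from USER_OBJECTS just to have something to bulk collect):

```sql
DECLARE
  CURSOR c_cur IS SELECT object_name FROM user_objects;
  TYPE t_tab IS TABLE OF user_objects.object_name%TYPE;
  t_cur  t_tab;
BEGIN
  OPEN c_cur;
  LOOP
    FETCH c_cur BULK COLLECT INTO t_cur LIMIT 100;
    -- t_cur.COUNT is 0 when the last fetch returned nothing,
    -- so there is no %FOUND/%NOTFOUND subtlety to reason about
    EXIT WHEN t_cur.COUNT = 0;
    FOR i IN 1 .. t_cur.COUNT LOOP
      dbms_output.put_line(t_cur(i));
    END LOOP;
  END LOOP;
  CLOSE c_cur;
END;
/
```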

Monday, March 28, 2011

    Encryption Wallet for TDE: can not change password

    Today, I set up Transparent Data Encryption (TDE) on an 11gR2 test environment. The steps are well documented and not at all hard to do, so there was no problem in the initial setup:

    1. Edit the sqlnet.ora file and include the following:
      ENCRYPTION_WALLET_LOCATION=
        (SOURCE=(METHOD=FILE)(METHOD_DATA=
          (DIRECTORY=/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs)))


    2. Create the encryption wallet from SQL*Plus (sqlplus / as sysdba)
      SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY oracle;

    3. Create a table with an encrypted column, or create an encrypted tablespace
      SQL> CREATE TABLE employees (name varchar2(30), salary number encrypt);
      SQL> CREATE TABLESPACE encrypted
      2 DATAFILE '/u01/oradata/encrypted01.dbf' SIZE 100M
      3 ENCRYPTION DEFAULT STORAGE (ENCRYPT);


    This works fine, as expected. However, I would like to change the password for the wallet, as "oracle" isn't that strong a password after all...

    Using Oracle Wallet Manager (OWM) from the (Linux) command line, I try to open the wallet. It asks for the password, and after providing the password "oracle", OWM gives me the message "The password is incorrect. Try again?". After retrying the password, I suddenly think about the double quotes that should enclose the password. Because I did not enclose the password, the actual password that got stored is ORACLE, not oracle. This can be seen by trying the following in SQL*Plus:

      SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "oracle";
      alter system set encryption wallet open identified by "oracle"
      *
      ERROR at line 1:
      ORA-28353: failed to open wallet

      SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "ORACLE";
      
      System altered


    So, I should use ORACLE as the password for OWM. However, this too gives me the message "The password is incorrect. Try again?".

    After searching for quite a while, I discovered that this message is NOT about the password being invalid as such, but about the password not adhering to the password criteria for OWM.

    I managed to change the password using orapki:

      orapki wallet change_pwd
        -wallet /home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs
        -oldpwd ORACLE -newpwd Oracle.01


    This statement returns without error and after that, the wallet can be maintained successfully using OWM (with the new password). Should you provide a password that doesn't conform to the OWM standards, you will get a "PKI-01002: Invalid password." error from orapki. This is because of the NEW password. If you misspell the old password, you will get "PKI-02003: Unable to load the wallet ...".


    So, be careful when choosing your password when creating the TDE wallet, because if it is not a strong enough password, you will not be able to open and maintain the wallet with OWM, which can lead to much confusion.

    Monday, March 14, 2011

    Oracle Easy Connect - no password in the commandline

    When using Oracle's Easy Connect feature, I (re)discovered some strange behaviour: Easy Connect does not always translate the easy connect string to the right connect descriptor.

    I set up Easy Connect by modifying my sqlnet.ora file on the client. Modifying the line with the directory_path in it should be enough:

      names.directory_path = (TNSNAMES,EZCONNECT)

    After that, I tried to connect using easy connect:

      sqlplus myuser@myhost:1521/myservice

    This returned an ORA-12514 error: "TNS:listener does not currently know of service requested in connect descriptor". At first, I looked this up in the sqlnet.log file to see what it was trying to do:

      Fatal NI connect error 12514, connecting to:
       (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=myhost)(CID=(PROGRAM=sqlplus@myclient)(HOST=myclient)(USER=myuser)))(ADDRESS=(PROTOCOL=TCP)(HOST=)(PORT=1521)))


    So, it just messes up the entire connect string and tries to connect to a service called "myhost". This all disappears when you supply a password on the command line:

      sqlplus myuser/mypassword@myhost:1521/myservice

    According to the documentation, this should not be a problem, but when searching for this on Oracle Support, I came across Note 274757.1, which states that without specifying a password, you must enclose the connect descriptor in double quotes (and, depending on the OS, escape these with \):

      sqlplus myuser@"myhost:1521/myservice"

    And after that, SQL*Plus asks for the password as expected. In this case, I think the documentation could use an update, specifying this behaviour...

    Monday, December 13, 2010

    Enhancing Oracle Reports - create a custom type

    Usually, the report types (destination types) provided by Oracle Reports are sufficient. You can create your report as a file, in the cache, send it to a printer etc. See the list of standard types in the Oracle Documentation.

    It is also possible to create your own destination, but the documentation is a bit vague about this, to put it mildly. Once you know where to look though, it is quite easy to accomplish.

    First of all, you will need to modify the reports server configuration file. You can do this through Enterprise Manager, or modify the file directly ($ORACLE_HOME/reports/conf/<repserv>.conf). You must add a destination tag, just like the many others in this file. It would be sufficient to add something like this:
      <destination destype="myDest" class="myDestClass"/>

    And now, your reports server will not start, or at least it will give you an error that the specified class does not exist. This is the hard part. You must create a Java class that extends the Destination class. So, you find yourself a Java programmer (shouldn't be that hard to find, these days) and have them create your own class. In this class, you can do (almost) anything you can imagine. You will have the XML provided by the reports engine available, and you will need some imagination as to what you want to do with this report.
    After the class is created, compiled and packaged, you add it to the classpath of the reports server. The easiest way to do this is to use Enterprise Manager. After that, the reports server will be able to find the new class and start the engine.

    You could, for example, create an RTF document from the standard XML using a stylesheet and then mail it to one or more recipients. You could send it to a printer and e-mail it, both in the same class. Any requirement you might have for your reports can now be met. It will require your effort to implement all your requirements, though.

    This is a good way to enhance the output options of reports, it is just too bad that it is not very well documented...
    When you start working with this, there will undoubtedly be some other obstacles, but if you have any questions, you know where to find me (i.e. right here ;-)

    Wednesday, December 1, 2010

    Using too many indexes

    When writing about the use of indexes (or rather, the lack thereof), it soon occurred to me that when you tell developers to use indexes, some of them really take that too far. I once saw an 18-column table with a total of 20 indexes on it. That might be useful in a certain DWH database, but this was in an OLTP database.

    But why is it not good to have a lot of indexes? Is there such a thing as too many indexes? From the optimizer's point of view, it just means a few extra access paths to consider; that should not be too difficult. From a DML perspective, on the other hand, when you insert a record or update certain columns, all related indexes must be modified as well. So, instead of inserting one row (1 I/O operation), you could end up inserting one row and 20 index entries (21 I/O operations!).

    So yes, you can certainly have too many indexes.

    What to do? Well, the obvious thing to do is to get rid of all the unnecessary indexes. But not knowing which indexes are useful, you might accidentally drop the wrong ones and end up with terrible performance for your application.

    The best course of action would be to let the database determine what indexes are being used and which indexes are candidates for deletion. You do this with the following statement:

        ALTER INDEX index MONITORING USAGE;

    This starts monitoring whether the index is being used by any statement in the database. You can let this monitoring run as long as you like. I would advise using a representative period of time, so you can be sure that each and every part of your application has been used (every statement out there should have run at least once). You should consider special cases like an end-of-month report. It would be a shame to drop any indexes and then discover that this crucial report takes two days to complete…

    After you have monitored long enough, you should check the V$OBJECT_USAGE view. This view contains a row for each monitored index and has a column "USED". If this is set to YES, the index was used during the monitoring period. If not, the column value will be NO.
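
    For example, listing the candidates for deletion after the monitoring period (note that V$OBJECT_USAGE only shows indexes owned by the current user):

```sql
SELECT index_name
     , table_name
     , used
     , start_monitoring
     , end_monitoring
  FROM v$object_usage
 WHERE used = 'NO';
```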

    Now you can determine whether there are unused indexes in your data model. These are the candidates for deletion. Please remember that index usage is not restricted to query performance: you should not drop primary key or unique key indexes just because they are not used in your monitored workload. They have a different (but important) reason for being there.

    After you complete your analysis, you can disable monitoring:

        ALTER INDEX index NOMONITORING USAGE;

    This way, you keep the minimum number of indexes while retaining optimal performance. You (or rather, the database) will save a lot of effort updating indexes that will most likely never be used, optimizing your performance just that bit more.

    Also see the Oracle documentation for some more details…