Monday, September 6, 2010

"NVL" for SQL*plus commandline parameters

Recently, I had to create an installer SQL*Plus script which could accept input parameters. "So, big deal?" I hear you think. And at first, the task seemed simple enough. Until I started testing it (which is always a good idea in my opinion…)

I created a script "installer.sql" and referenced the input parameters as &1, &2 and &3. I needed these for the username, password and database (TNSNames entry, to be precise). For the sake of simplicity, let’s say this was the entire script:

  PROMPT connecting as &1
  CONNECT &1/&2@&3
  SELECT username from user_users;

and then run the script:

  $ sqlplus /nolog @installer.sql test test test

Simple enough, right? The script does exactly what you'd expect: it prompts that it will connect as user "test", then connects (provided you actually have a user test with password test on database test...) and shows the current user (hopefully "test").

I then tried to use other parameters (&4) to do more than just connecting. Of course, that works fine, but I ran into trouble when trying to leave the fourth parameter empty. I wanted this parameter to be optional and to give it some value based on the contents of the database.

What happens is that the fourth parameter is not set to NULL; it is not recognised at all. The first thing SQL*Plus does is ask you for this parameter:

  Enter value for 4:

So, when parameters are not entered on the command line, SQL*Plus will not recognise them. After a lot of experimenting, I found that you can define your own "parameter 4" using the COLUMN syntax. Using this, you can build a sort of NVL function for the input parameters. I haven't exactly figured out why this works, but so far, all tests are positive:

  COLUMN inputpar01 NEW_VALUE 1 NOPRINT
  select 1 inputpar01
    from dual
   where 1=2;

This fools SQL*Plus into creating the explicit parameter 1 (instead of an implicit parameter), but the nice part is that it actually retains the value of an implicit parameter, if one was defined.
[Edit]: Please note that for sqlplus 11.x and above, you must be connected to a database for this part to work. So you will have to replace the "/nolog" with a valid login. This can be for any database, though.[/Edit]

I then modified the script to, basically, the following:

  COLUMN inputpar01 NEW_VALUE 1 NOPRINT
  COLUMN inputpar02 NEW_VALUE 2 NOPRINT
  COLUMN inputpar03 NEW_VALUE 3 NOPRINT
  COLUMN inputpar04 NEW_VALUE 4 NOPRINT
  select 1 inputpar01
       , 2 inputpar02
       , 3 inputpar03
       , 4 inputpar04
    from dual
   where 1=2;
  --
  PROMPT connecting as &1
  CONNECT &1/&2@&3
  SELECT username from user_users;
  PROMPT value for parameter 4 = &4

You can extend this with extra checks for mandatory parameters, allowed values, etc. This way, you actually have an option to use NVL-like functionality on your input parameters.
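For example, to give the optional fourth parameter an actual default value (NVL in the true sense), you could follow the 1=2 query with a second query that substitutes a default. Note that the default value below is made up, and I can't promise this behaves identically on every SQL*Plus version, so test it on yours first:

```sql
  COLUMN inputpar04 NEW_VALUE 4 NOPRINT
  -- define parameter 4 without overwriting a value
  -- that was passed on the command line
  select 4 inputpar04 from dual where 1=2;
  -- substitute a default when nothing was passed
  select nvl('&4', 'my_default') inputpar04 from dual;
  PROMPT value for parameter 4 = &4
```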

Wednesday, August 4, 2010

The wrong way to build a CLOB

Recently, I came across one of the old CLOB-related problems again. A developer on the team had to create a CLOB and started out to create the pieces for the CLOB (there was some extensive logic applicable to the various pieces). After that, he inserted a row into the table, retrieving the CLOB handle and started to piece together the CLOB.

So far, there is nothing wrong functionally. He just broke the work down and created (a lot of) pieces for the final CLOB. The mistake was inserting an empty CLOB and then appending pieces to it many times. To elaborate, I'll give you an example.

First, we create a simple table:

  create table lob_test
  ( id number
  , text clob
  )
  lob (text) store as (disable storage in row);


And then, we insert 100 CLOBs using piecewise modification of a pre-inserted empty CLOB.

  DECLARE
    l_text CLOB;
    l_piece char(1024) := 'X';
  BEGIN
    -- create 100 CLOBs
    FOR l_id IN 1..100 LOOP
      -- First, create the CLOB in the table
      INSERT INTO lob_test (id, text)
        VALUES (l_id, EMPTY_CLOB())
        RETURNING text INTO l_text;
      -- Then, with the LOB locator,
      -- build your CLOB from pieces
      FOR l_loop IN 1..100 LOOP
        -- 100 pieces of 1k
        dbms_lob.append(l_text, l_piece);
      END LOOP;
    END LOOP;
    --
    COMMIT;
    --
  END;

This took about 22 seconds on my database. Remember this for future reference...

Now, we use a different approach. Instead of inserting an empty CLOB and modifying it, we now create a temporary CLOB, modify that and postpone the insert until we have a fully created CLOB:

  DECLARE
    l_text CLOB;
    l_piece char(1024) := 'X';
  BEGIN
    -- create 100 CLOBs
    FOR l_id IN 101..200 LOOP
      -- First, create a temporary CLOB
      dbms_lob.createtemporary(l_text, TRUE);
      -- Then, with the LOB locator,
      -- build your CLOB from pieces
      FOR l_loop IN 1..100 LOOP
        -- 100 pieces of 1k
        dbms_lob.append(l_text, l_piece);
      END LOOP;
      -- Finally, insert the CLOB
      INSERT INTO lob_test (id, text)
        VALUES (l_id, l_text);
      -- Free the temporary CLOB, so we don't
      -- leak temporary LOB memory on each iteration
      dbms_lob.freetemporary(l_text);
    END LOOP;
    --
    COMMIT;
    --
  END;

This last piece of code does exactly the same (functionally), but it executes in under 2 seconds this time.

The reason the first method takes so much longer is that every call to dbms_lob.append updates the blocks of the CLOB in the table, generating UNDO and REDO for each append. The second method holds the CLOB in memory only and has no blocks to update, so no UNDO and REDO are created until the final insert. This not only speeds things up, but also has a big impact on the amount of logging generated on your production database.
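You can see the difference for yourself by checking the "redo size" session statistic before and after running each block (this requires SELECT privileges on the v$mystat and v$statname dynamic performance views):

```sql
SELECT n.name, s.value
  FROM v$mystat   s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';
```

Run this query in the same session just before and just after each anonymous block; the difference between the two values is the redo generated by that block.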

So, overall, try to keep the updating of a CLOB column to a minimum.

Friday, June 25, 2010

Application Express - using the sort arrows in a tabular form

!! This post is created using Apex 3.x and will not work for newer versions. For Apex 4.x a new blogpost is available !!

When you use APEX, chances are you've come across this feature many times. When developing an application and you create a form or report, you can arrange the order of the columns by using arrows (up/down). This changes the order in which the columns are displayed, just by clicking on the appropriate arrow.



So far, nothing new. But when you think about it: APEX has been built using APEX. That should mean this functionality is also available in your own application. There seems to be one problem: there is no standard item or option to enable it.

Searching the internet, there are some strategies for accomplishing this, but so far I have not found a complete description that is short and easy to implement. Starting from there, I found that you need only three rather simple steps to get this working in your own tabular form.

I will demonstrate this using a simple tabular form:



First step: create the arrows

The first step is to incorporate some standard Javascript into your region. After creating a regular tabular form, you add one piece of Javascript to the footer of the region.

<script type="text/javascript">
  var g_#REGION_ID#;
  var g_rpreview_global = 'report_#REGION_ID#';
  function f_#REGION_ID#(){
   g_#REGION_ID# = new apex.tabular.sort('report_#REGION_ID#');
   g_#REGION_ID#.row.after_move = function(){rpreview('report_#REGION_ID#')}
   rpreview('report_#REGION_ID#');
  }
  addLoadEvent(f_#REGION_ID#);
</script>

After this, the form has the up/down arrows next to every line.


Second step: hide the order column and make it orderable

To enable ordering and make the order column hidden, just take these steps:
  • Edit the form properties and set the order item property "show" unchecked
  • Edit the column properties for "order" and set Element Attributes to 'class="orderby"'

Now we have a simple form which we can order using the arrows.


Third step: adjusting the style

The last step is to make the background and header look like the template we use. Regrettably, I found no really easy (configurable) way to do this. So, we'll do it the hard way.

First of all, you'll have to get the style you're using in the application. The stylesheet is referenced in your application and viewable by just showing the source of your page in your browser. In this example I ran the application, selected "show source" after right-clicking and searched for the stylesheet. This shouldn't be too hard to find (was on line 3 for me).

When you look at this stylesheet (by either downloading it from the application server, or looking it up in an APEX software download), you should be able to find the section of interest by searching for "Standard th". The line you'll find and the next line are to be copied. For me this was:

table.t13Standard th.t13ReportHeader {border-bottom:1px #aaa solid;}
table.t13Standard td.t13data{background:#efefef;border-top:1px #fff solid;border-bottom:1px #ccc solid;font-size:8pt;padding:3px 9px;empty-cells:show;}

Depending on the template you chose, the numbers and settings will be somewhat different. These lines should be modified to a more generic class for th and td. So we'll leave out the specific .t13ReportHeader and .t13data qualifiers.

table.t13Standard th {border-bottom:1px #aaa solid;}
table.t13Standard td {background:#efefef;border-top:1px #fff solid;border-bottom:1px #ccc solid;font-size:8pt;padding:3px 9px;empty-cells:show;}


Finally, we must set the background color for the header. You can find it about six lines higher in the stylesheet, in the th.t13ReportHeader rule, which contains a background. Add that background to the generic th rule and place style tags around the block. We now have:

<style>
table.t13Standard th {border-bottom:1px #aaa solid;background-color:#7b9cbd;}
table.t13Standard td {background:#efefef;border-top:1px #fff solid;border-bottom:1px #ccc solid;font-size:8pt;padding:3px 9px;empty-cells:show;}
</style>


By placing this as a style tag in the HTML header of the page, our tabular form is now ready to go.

Monday, June 21, 2010

ApEx - using multiple database schemas

The parsing schema for an ApEx application

For an ApEx application, you can define one parsing schema per application. For smaller applications, this is usually sufficient. For larger applications, however, there may be a need to address multiple database schemas from one application. Because there will always be only one parsing schema per application, you would have to grant that schema the appropriate privileges to be able to use DML on all the objects needed. This can be a daunting task, especially when you have many objects spread over various schemas.
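As a sketch of what that entails (the schema, object and synonym names here are made up), every object in every other schema needs explicit grants to the parsing schema, and typically a synonym as well:

```sql
-- grant DML on one object in another schema
-- to the application's parsing schema
GRANT SELECT, INSERT, UPDATE, DELETE ON hr.employees TO app_parser;
-- make it addressable without the schema prefix
-- (run as app_parser, or use CREATE ANY SYNONYM)
CREATE SYNONYM app_parser.employees FOR hr.employees;
```

Multiply this by hundreds of objects and several schemas, and the maintenance burden becomes clear.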

The alternative
Instead of using only one application, you can use multiple applications. This may sound ridiculous, but by using more than one application, you can use a different parsing schema for each of them. The real trick is to get these applications to work as one. And that is really very simple. By explicitly setting the cookie name in the authentication scheme (via shared components), you can link the applications in a single workspace so they all use the same cookie. This way, you don't have to authenticate with each application separately; authenticating once gives you access to all applications that share the cookie.

As a simple example, you can create two applications in the same workspace. Set the cookie name (and the path and domain if you like) to the same value (e.g. myApexCookie) for both applications. After authenticating with one application, change the application ID in the URL, and you should be able to access the other application without authenticating. Without the shared cookie, APEX will ask you to log in again.

Things to consider
You should consider your authorization scheme carefully when implementing this SSO for ApEx applications. Each application, and thus each parsing schema, should have an authorization scheme in place, so not everyone can access all schemas.
When creating different applications, it would be a developer's nightmare to maintain the navigation for each application. So consider creating one menu, referencing the right application and page for each menu item and then sharing this menu with all the applications.
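A menu entry in such a shared navigation list can point at a page in another application using a standard f?p URL. The application ID (200) and page number (1) below are just examples; since both applications share the cookie, the session carries over:

```
f?p=200:1:&SESSION.
```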

The famous "first post"

Well, this is it.
The famous "first post" on any blog.
Nothing really interesting yet, just introducing myself (a bit) and announcing that I am planning on posting regularly etc, etc, etc...

I am an Oracle Certified Master DBA (and proud of it) and Oracle Certified Professional on various topics. In fact, I believe that I am one of the most heavily certified professionals on the planet. Having said that, I expect a lot of replies saying that, actually, I am not ;-)
For those interested in comparing notes: I collected 13 Oracle certifications so far.

This blog will not be mainly on DBA activities though. Besides the fact that there are a lot of blogs covering that topic out there, I am more of an application developer DBA. That means that I do a lot of application development and use my knowledge as a DBA to do it better and faster.

And as for the "intention to post": I really do. Seriously. Trust me.