Saturday, September 17, 2016

Resolve ORA-01996 (password file full) without losing your password file entries

As I was playing around with one of my 12c databases, I ran into the ORA-01996 error "GRANT failed: password file '/<>/orapwORCL' is full". The documentation and various other sources are clear about this: you will have to re-create the password file and add all existing entries manually afterwards.

Just when I was about to create a new password file and re-create the entries in there, it occurred to me that 12c has a new feature: the input_file parameter for orapwd. According to the documentation, this parameter is meant for migrating older password file versions to 12c.

But what if we could use this to "migrate" our current 12c password file to a new 12c password file? What if we back up this file and use that backup as the input for our new password file? I created a quick test setup and found out that this is exactly what I needed: I re-created the password file using the input_file parameter and could add new entries, while keeping all the entries that were already there.

The testcase

First of all, I created a new password file with entries=1. In practice this will hold up to 4 entries, because the value is rounded up to the number of entries that fit in one block (as described in the same documentation I mentioned above).

[oracle@DBServer1 ~]$ cd $ORACLE_HOME/dbs
[oracle@DBServer1 dbs]$ orapwd file=orapwORCL entries=1 force=y
Enter password for SYS:
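
As a side note: 12c's orapwd also has a "describe" option that reports the properties of an existing password file (I believe the output mainly shows the file format and may differ per version):

[oracle@DBServer1 dbs]$ orapwd describe file=orapwORCL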


Using SQL*Plus, I verified that all I had was 1 entry:

SQL> select * from v$pwfile_users order by username, con_id;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0


So, now we can start adding users to the password file, until it is full:

SQL> grant sysdba to c##erik container=all;
Grant succeeded.

SQL> grant sysdg to sysdg;
Grant succeeded.

SQL> grant sysbackup to sysbackup;
grant sysbackup to sysbackup
*
ERROR at line 1:
ORA-01996: GRANT failed: password file
'/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwORCL' is full


What do we have in the v$pwfile_users at this point?

SQL> select * from v$pwfile_users order by username, con_id;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
C##ERIK                        TRUE  FALSE FALSE FALSE FALSE FALSE          0
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1

Now, we will go back to $ORACLE_HOME/dbs and re-create the password file, using the input_file parameter. Make sure you back up the current password file first, as you can never be too careful...

[oracle@DBServer1 dbs]$ cp orapwORCL orapwORCL_backup
[oracle@DBServer1 dbs]$ orapwd file=orapwORCL entries=10 input_file=orapwORCL_backup force=y


Immediately after that, check the contents of the password file with SQL*Plus:

SQL> select * from v$pwfile_users order by username, con_id;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
C##ERIK                        TRUE  FALSE FALSE FALSE FALSE FALSE          0
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1

And all entries are still there! Just one last step to prove we have successfully re-created the password file with room for more entries than we had before:

SQL> grant sysbackup to sysbackup;
Grant succeeded.

SQL> select * from v$pwfile_users order by username, con_id;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
C##ERIK                        TRUE  FALSE FALSE FALSE FALSE FALSE          0
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          0
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          1
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1

And I didn't even have to reconnect to or restart the database :-)



Edited: Thanks to Pom for pointing out a small but essential typo

Thursday, December 17, 2015

TNS-12154 on Database Link When Connected Through Listener

I ran into a problem when creating a database link that uses a new TNS entry. The database link seemed to work fine when I created it, but the developers complained about getting TNS-12154 errors when they tried to use it. Checked it again on my side, and the database link worked just fine.
 
After some discussion with the development team, we noticed that the database link did work when I connected locally from the database server, but it threw the TNS-12154 when connecting through the listener. So I focused on why the listener prevented the database link from recognizing my newly created TNS entry. It turns out this has a lot to do with the TNS_ADMIN environment variable, dynamic registration to the listener, and the way the listener handles environment variables.
 
So, to paint a picture of the situation:
  • We have a database on Linux (probably any Linux or UNIX variant will have this problem)
  • We have set the TNS_ADMIN environment variable
  • The listener uses both static and dynamic service registration (because we are using OEM)
 
Solutions proposed so far
 
This problem is described in a lot of places, but until recently I could not find a satisfactory solution. The answers I found so far all have their drawbacks, some more than others. They include:
  • Restart the database. A new TNS entry is not seen by the dynamic service registration to the listener; restarting the database somehow resets this and then the database link also "magically" works.
    • My problem with this: restarting a (production) database is not something I want to do just because I added a new TNS entry for a database link.
  • Use a different naming method when creating the database link. Instead of the USING clause with a TNS entry, you can specify an EZConnect string or the full description of the TNS entry (see the sketch after this list). Both imply that server- and database-specific information ends up in the database link itself.
    • But: I use the tnsnames.ora as a central location for all descriptions and use that to point to any database/service I need. I don't want to repeat server/instance information in every database link I create.
  • Set the ENVS clause of the static registration (SID_LIST_LISTENER) to include TNS_ADMIN.
    • This has several drawbacks: you need to use static registration, and static registration only. Dynamic registration takes precedence over static, and dynamic registration cannot be used to set TNS_ADMIN. And even if you decide to use only static registration, you have to maintain TNS_ADMIN both in the environment variable and in the listener.ora, which means extra maintenance.
  • Don't use TNS_ADMIN. Without TNS_ADMIN everything works fine: the database and the listener both use the default location ($ORACLE_HOME/network/admin), so they can always find the tnsnames.ora and pick up new entries right away.
    • The problem is obvious: we use TNS_ADMIN to create a centralized location for our tnsnames.ora file. Not using TNS_ADMIN would defeat that setup.
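
For completeness, here is a quick sketch of that second option. With an EZConnect string, the connection details move into the link definition itself (the user, password, host and service below are made-up placeholders, and EZConnect must be enabled in NAMES.DIRECTORY_PATH):

create database link dblink_ez
  connect to app_user identified by app_password
  using '//dbserver.example.com:1521/orcldb.world';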
 
My solution
 
The last "solution", however, points the way to a workable one. When the listener (the dynamic service registration) fails to pick up (or refresh, or whatever you like to call it) changes to tnsnames.ora in the TNS_ADMIN location, it will look at the default location. Creating an OS link there allows the listener to find the file anyway and use the new TNS entry, without having to restart the database. And yes, there is a drawback to this solution as well: you need to create this link in every RDBMS home you use and point it to the TNS_ADMIN location. I think this is still slightly better than not using TNS_ADMIN at all, because we keep a centralized tnsnames.ora file and only use links to that file in the RDBMS home directories.
 
Technical “proof”
 
The example sketched below is from an AIX machine, using an 11gR2 database. I also tested this on a 12c database on Linux. Both have the exact same results when it comes to recognizing the new TNS entry.
 
Using the original setup, I modified the tnsnames.ora file in the TNS_ADMIN location:
 
dbserver::../home/oracle:> cd $TNS_ADMIN
dbserver::../oracle/tns:> ls -l
total 16
-rw-r--r--    1 oracle   dba             483 Dec 16 16:37 listener.ora
-rw-r--r--    1 oracle   dba            1581 Dec 16 16:14 tnsnames.ora
dbserver::../oracle/tns:> vi tnsnames.ora
 
I added an “orcldb_self_ref.world” entry, pointing toward my original database. After this, without restarting the database or reloading the listener, I created the database link.
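
For reference, the entry I added looked roughly like this (host, port and service name are placeholders here, not the real values):

ORCLDB_SELF_REF.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcldb.world))
  )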
 
dbserver::../oracle/tns:> sqlplus system@orcldb
 
SQL*Plus: Release 11.2.0.2.0 Production on Thu Dec 17 09:41:34 2015
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 
SQL> create database link dblink_test_erik using 'orcldb_self_ref.world';
 
Database link created.
 
SQL> select * from dual@dblink_test_erik;
select * from dual@dblink_test_erik
                   *
ERROR at line 1:
ORA-12154: TNS:could not resolve the connect identifier specified
 
And there you have it, the TNS-12154. Normally, you would think that we did something wrong defining the TNS entry or something like that. However, when not connecting through the listener, the same database link does work:
 
dbserver::../oracle/tns:> sqlplus system
 
SQL*Plus: Release 11.2.0.2.0 Production on Thu Dec 17 09:41:34 2015
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 
SQL> select * from dual@dblink_test_erik;
 
D
-
X
 
So, on to the next part, creating an OS link:
 
dbserver::../home/oracle:> cd $ORACLE_HOME/network/admin
dbserver::../network/admin:> ln -s $TNS_ADMIN/tnsnames.ora ./tnsnames.ora
dbserver::../network/admin:> ls -l
total 8
drwxr-xr-x  2 oracle dba 256 Oct 23 09:49 samples
-rw-r--r--  1 oracle dba 187 May 07 2007  shrept.lst
lrwxrwxrwx  1 oracle dba  28 Dec 17 09:45 tnsnames.ora -> /u01/oracle/tns/tnsnames.ora
 
Directly after creating the link, our database link works as it should. Just to prove it works right away, I dropped and re-created it.
 
dbserver::../oracle/tns:> sqlplus system@orcldb
 
SQL*Plus: Release 11.2.0.2.0 Production on Thu Dec 17 09:49:10 2015
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 
SQL> drop database link dblink_test_erik;
 
Database link dropped.
 
SQL> create database link dblink_test_erik using 'orcldb_self_ref.world';
 
Database link created.
 
SQL> select * from dual@dblink_test_erik;
 
D
-
X
 
Note that when you are already connected, it might be necessary to re-connect to pick up the changes. But that beats restarting the entire database...
 
Final remarks
 
Though this is my preferred solution, it doesn’t mean that it is the only solution. If any of the other options suits your needs better, then by all means: use them ;-)
 
This solution keeps the existing setup with TNS_ADMIN set and dynamic service registration, both good things in my opinion. Then, by creating a simple link, you make sure that any changes to your TNS entries are usable right away, instead of having to configure other files or restart the database.

Thursday, June 18, 2015

Testing all Oracle database links for all schemas

I had to validate quite a lot of database links this week, and a lot of different schemas were involved. A database link is deemed valid/usable when the statement "select * from dual@db_link" succeeds, so simply executing this statement against every database link would suffice.

However, when connected as one user, you can only validate the database links of that user and any public links. Validating the database links of other users doesn't seem to be possible from that single connection. You can't prefix the database link name with the schema, because database links themselves can contain dots. The schema prefix would simply be treated as part of the link name, causing the statement to fail (because no link can be found under that name).
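
To illustrate with hypothetical names: a statement like the one below does not test the HR_LINK owned by SCOTT; Oracle simply looks for a link whose full name is SCOTT.HR_LINK and, in my experience, fails with an ORA-02019 because no link with that name exists.

select * from dual@scott.hr_link;  -- treated as a link named "SCOTT.HR_LINK"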

Then I figured that a stored procedure created in the owning schema, with definer's rights (the default), could validate the database links in that schema. And because you can use dynamic SQL ("execute immediate" in this case), you can create the procedure in the other schema, invoke it to test the database link in that same schema, and drop it afterwards.

-- Execute as SYS user (because of ORA-01031)

CREATE OR REPLACE PROCEDURE test_any_db_link ( p_owner IN varchar2
                                             , p_link  IN varchar2
                                             )
AS
   l_owner  varchar2(100);
BEGIN
   IF p_owner = 'PUBLIC' THEN
      l_owner := 'SYSTEM';
   ELSE
      l_owner := p_owner;
   END IF;
   --
   BEGIN
     EXECUTE IMMEDIATE 'create procedure '||l_owner||'.test_any_db_link_temp '||
                       'as '||
                       '  l_dummy varchar2(1); '||
                       'begin '||
                       '  select * into l_dummy from dual@'||p_link||'; '||
                       '  dbms_output.put_line(''OK : '||p_owner||'.'||
                                                         p_link||'''); '||
                       'exception '||
                       ' when others then '||
                       '   dbms_output.put_line(''NOK: '||p_owner||'.'||
                                                          p_link||'''||'' ''||
                                                          sqlcode); '||
                       'end;';
     EXECUTE IMMEDIATE 'begin '||l_owner||'.test_any_db_link_temp; end;';
   EXCEPTION
     WHEN OTHERS THEN
       dbms_output.put_line('ERR: '||p_owner||'.'||p_link||' '||sqlcode);
   END;
   EXECUTE IMMEDIATE 'drop procedure '||l_owner||'.test_any_db_link_temp';
END;
/

set serveroutput on size 1000000;

spool test_db_links

BEGIN
  FOR r_stmt IN (select 'begin test_any_db_link('''||owner||''','''||
                                                     db_link||'''); end;' stmt 
                   from dba_db_links 
                  order by owner, db_link
                )
  LOOP
    execute immediate r_stmt.stmt;
  END LOOP;
END;
/

spool off

drop procedure test_any_db_link;

-- done

Main points about this code:
  • You execute it as SYS, because otherwise you run into ORA-01031 (insufficient privileges)
  • The script creates a procedure for every test, and drops it afterwards
  • The status of a link is either OK (everything checks out) or ERR (the procedure could not be created, because the link is not valid). In theory, the status could be NOK, when the link is valid but something else prevents the "select * from dual" from succeeding. So far, I have never seen an NOK status
  • This script was used on database versions ranging from 9.2 to 12.1
  • The error output is very short, you could enhance this with dbms_utility.format_error_stack or something similar
  • When a database link is marked as "ERR", investigate further. I have some "ExtProc" call type database links, and those were marked as "ERR", because there is no dual at the other end
  • For PUBLIC links, I translated the user for the procedure to SYSTEM, as you cannot create a procedure under the "schema PUBLIC"
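
To make the dynamic SQL easier to read, this is roughly what the generated temporary procedure looks like for a hypothetical owner SCOTT with a link named HR_LINK (it is executed and then dropped again right away):

create procedure scott.test_any_db_link_temp
as
  l_dummy varchar2(1);
begin
  select * into l_dummy from dual@HR_LINK;
  dbms_output.put_line('OK : SCOTT.HR_LINK');
exception
  when others then
    dbms_output.put_line('NOK: SCOTT.HR_LINK'||' '||sqlcode);
end;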

Sunday, September 21, 2014

Application Express - using the sort arrows in a tabular form - Updated for Apex 4.x

In a previous post (already a few years back) I demonstrated how to get the Apex sort arrows as seen in Apex builder in your own applications.



This was done in Apex 3.x, and in the meantime someone pointed out that in Apex 4.0 this was not working anymore. Not having the time to go into that in detail, I left it at that. Recently, someone else reported errors with this approach, again for the Apex 4.x versions. So I sat down and figured out the new way to do this for Apex 4.x.

Again, we will be using a simple Tabular Form:



First step: create the arrows

This step is different from what you did in Apex 3.x, because there is now an additional DIV section and an extra TABLE around Tabular Forms. The surrounding section carries the report region ID; the Tabular Form itself has no ID to work with. In order to get an ID, we are going to modify the template on which the Tabular Form is based.
In this case I modify the template directly, but you should really create a new template (copy it from this one) and assign that template to the region... Otherwise, all other reports using this template will get sort arrows, and that might not be what you want.

In the Apex Builder, go to "Shared Components - User Interface / Templates - Report / Standard". Normally, Tabular Forms use this template. If you use another template, modify that one (or rather: copy it and modify the copy). In the "Before Rows" section, modify the last line (change the class and add an ID):
  • from <td><table cellpadding="0" border="0" cellspacing="0" summary="" class="report-standard">
  • to <td><table cellpadding="0" border="0" cellspacing="0" summary="" class="report-standard-sort" ID="report_#REGION_ID#_sortable">

In the "After Rows" section, add this block right after the </table> tag (and before any other code already there):

<script src="/i/libraries/apex/minified/builder.min.js" type="text/javascript"></script>
<script type="text/javascript">
  var g_rpreview_global = 'report_#REGION_ID#_sortable';
  var g_#REGION_ID#_sortable;
  function f_#REGION_ID#_sortable(){
    g_#REGION_ID#_sortable = new apex.tabular.sort(g_rpreview_global);
    g_#REGION_ID#_sortable.row.after_move = function(){rpreview()};
  }
  addLoadEvent(f_#REGION_ID#_sortable);
</script>

Do this for all templates that you will be using for sortable reports. When switching themes, you will need to do this again!
Note the extra JS library that is included: builder.min.js contains the JavaScript needed to implement the sorting.

After this, the form has the up/down arrows next to every line.



Second step: hide the order column and make it orderable

To enable ordering and make the order column hidden, just take these steps:
  • Edit the form properties and set the "order" item property "show" unchecked
  • Edit the column properties for "order" and set "Element Attributes" to class="orderby"

Now we have a simple form which we can order using the arrows.


Third step: adjusting the style

The last step we need to take is to make the background and header look like the template we use. Regrettably, I found no really easy (configurable) way to do this. So, we'll do this the hard way.

First of all, you'll have to get the style you're using in the application. The stylesheet is referenced in your application and viewable by just showing the source of your page in your browser. In this example I ran the application, selected "show source" after right-clicking and searched for the stylesheet. This shouldn't be too hard to find (was on line 34 for me, first mention of themes).

When you look at this stylesheet (by either downloading it from the application server, or looking it up in an APEX software download), you should be able to find the section of interest by searching for "report-standard th.header". You will use both that section and the section for "report-standard th.data".
Depending on the template you chose, the numbers and settings will be somewhat different. These sections will be used to create .report-standard-sort qualifiers (that is why we modified the class in step 1).

We must set the background color for the header, which is not directly mentioned in the section we just found. There is a URL pointing to the background image, but it is relative to the stylesheet itself. Modify it to reflect the theme (theme_2 for me) and make the path relative to the APEX page itself. Place style tags around this block. We now have (after reformatting):

<style>
.report-standard-sort th {color: #ffffff; background-color: #cccccc; padding: 2px 10px; border-bottom: 1px solid #cccccc; background-image: url(../../i/themes/theme_2/images/report_bg.gif); background-repeat: repeat-x;}
.report-standard-sort td { background-color: #f0f0f0; padding: 4px 10px; border-bottom: 1px solid #cccccc;}
</style>


By placing this as a style tag in the HTML header of the page, our tabular form is now ready to go.



Keep in mind:
  • When using a new template, you should also (copy and) modify that template for the region
  • When using a new theme, all templates should be (copied and) modified again
  • When creating a new Tabular Form, change the report region template to the sorting template and copy the HTML header section for the style

Thursday, January 16, 2014

Oracle DataGuard 12: Standby database not starting after failover

I created a 12c DataGuard setup on Windows (2012 Server) using 3 machines:

  • Primary database server (DB1 on server1)
  • Standby database server (DB2 on server2)
  • separate Observer machine


I created the setup "manually" (scripted, not using EM) and everything worked fine (using FAST_START FAILOVER). Everything, except that after a failover from server2 (initially the standby) to server1 (initially the primary), the database on server2 would not start automatically.

Failing over from server1 to server2 works fine:

  • Shutdown server1 (disconnect virtual power)
  • Observer notices failure and starts DB2 as primary
  • Startup server1
  • Observer sees server1 online, re-instates DB1 and starts DB1 as standby

And after re-instatement, the DataGuard configuration works fine again.

When I fail over from server2 to server1, however, DB2 does not restart and is not re-instated after the server comes back online. After such a failover, I had to mount the database manually; only then did the Observer take over and re-instate the database.

After much searching, I found that one value was missing in the Windows registry. Under the key HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\KEY_OraDB12Home1, there was no entry for "ORACLE_SID". I created the entry "ORACLE_SID=DB2" and set "ORA_DB2_AUTOSTART=TRUE". The latter was set to TRUE initially, but had changed to FALSE after a restart of the machine (probably because ORACLE_SID was not set).
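
If you prefer the command line over regedit, something like this should create or repair both values (the home key name and the SID are from my setup, and I am assuming plain REG_SZ string values; adjust them for your environment and run from an elevated prompt):

reg add "HKLM\SOFTWARE\ORACLE\KEY_OraDB12Home1" /v ORACLE_SID /t REG_SZ /d DB2 /f
reg add "HKLM\SOFTWARE\ORACLE\KEY_OraDB12Home1" /v ORA_DB2_AUTOSTART /t REG_SZ /d TRUE /f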

I think that the "oradim" command for creating the service does not work properly: it does not create the ORACLE_SID entry in the registry. This stops the database from autostarting (in mount mode), so DataGuard (the Observer) is not able to re-instate the database. On the (initial) primary server this entry is created automatically, probably by DBCA, which is why that database works fine after a restart.

So, as a quick solution, you should create the ORACLE_SID key in the Windows registry for the standby server in order to start the database automatically. I hope this helps anyone facing this problem!

Monday, November 18, 2013

Oracle Forms (using jacob) and Java 7 update 40 gives a warning that can't be ignored

This week alone, I heard of two customers that were having problems with Oracle Forms using WebUtil and jacob. Especially jacob proves to be a bit of a problem.

Let's back up for a bit: if you're not familiar with WebUtil and jacob, first have a look at Oracle Support note 1093985.1, which tells you how to install and configure a lot of things. If this still doesn't ring any bells, you probably don't use WebUtil/jacob, so you can skip this entire post...

Anyway. After configuring and (most importantly) signing your jacob.jar, everything worked fine. That is, until Java 7 update 40. Before that, you always had the option to "accept and always ignore" the message about the certificate, which was self-signed and therefore not trusted.

Then, with Java 7 update 40, a new security risk profile was implemented, and an unsigned jar file or a jar file signed by an unknown publisher gets you the warning:
"Running applications by UNKNOWN publishers will be blocked in a future release because it is potentially unsafe and a security risk". See this page for details.


You can tick a check-box "I accept ..." and run anyway, but this message will come up EVERY time you start the Forms application. Obviously, application users don't want to tick the check-box and press that Run button every time they start their application. They want their application to start without any interruptions.

So, the way to fix this seems rather straightforward: just make jacob.jar use a trusted certificate. When signing the jar file, almost everyone used a self-signed certificate, because that was easy to do and used to cause no problems at all.

First solution: instead of using a self-signed certificate, just use a real certificate issued by a CA. Drawbacks: this costs (some) money and the certificate will eventually expire. So this is not what we will be focusing on. There is an easier way.

Second solution: make the self-signed certificate trusted. In essence, the steps are:

  • sign the jacob.jar with the provided signing batch file (sign_webutil.bat / sign_webutil.sh)
    • See note 1076945.1
  • reuse the created keystore to extract the certificate
    • use keytool to extract this information
  • import the certificate in your browser as a trusted certificate
    • I placed the resulting crt file on my laptop and double-clicked. Just follow the wizard for importing the certificate
Details of these steps:

I modified the sign_webutil.bat (yes, I know, I used Windows...) to suit my needs (names, passwords, locations, etc.). After that, I issued the following commands:


set CLASSPATH=C:\Oracle\Middleware\as_fr\jdk\bin
set PATH=C:\Oracle\Middleware\as_fr\forms\java; C:\Oracle\Middleware\as_fr\forms\webutil
sign_webutil.bat C:\Oracle\Middleware\as_fr\forms\java\jacob.jar


A keystore ".keystore" was created in my home directory, so I extracted the certificate using keytool:

keytool -export -alias %JAR_KEY% -file %KEYSTORE%.crt -keystore %KEYSTORE% -storepass %JAR_KEY_PASSWORD%

The parameters all come from the original sign_webutil.bat; I mainly copied that file and modified it a bit for ease of use with the export option.

The resulting file ".keystore.crt" was used to import into the client keystore as a trusted certificate (just by double clicking it and following the wizard).
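
If you prefer the command line over the import wizard, a keytool import along these lines should achieve something similar for the client's Java trust store (I have not tested this variant myself; the alias, keystore path and the default "changeit" password are assumptions, so adjust them for your environment):

keytool -importcert -trustcacerts -alias webutil_jacob -file .keystore.crt -keystore "%JAVA_HOME%\jre\lib\security\cacerts" -storepass changeit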



After these steps, the jar file was signed with a certificate that is now trusted. Trusted by my laptop that is, so everyone using this application must also import this certificate. Could be a problem, but most customers I see use Oracle Forms mainly internally, so distributing the certificate will not be much of a problem.

On first access of the application, a new message appears. Somewhat like this:


There is another checkbox this time, saying "Do not show this again for apps from the publisher and location above". Effectively, this gives us the same functionality as before: the "accept and always ignore" we always had. After this, no more messages will be displayed...


Saturday, September 14, 2013

Using sqlplus input parameters to call specific scripts IF - THEN - ELSE style

Quite a while ago, I described using "NVL" for SQL*Plus command-line parameters. Using this technique, you can create scripts that cope with command-line parameters that are not always supplied. You don't always know whether a parameter was supplied or not, and you don't want the script to go asking questions like "Enter the value for 4:" if you didn't supply a value for the fourth parameter.

A modified script made sure that all parameters were properly initialized and could be used in further statements without any prompts from sqlplus:

  COLUMN inputpar01 NEW_VALUE 1 NOPRINT
  COLUMN inputpar02 NEW_VALUE 2 NOPRINT
  COLUMN inputpar03 NEW_VALUE 3 NOPRINT
  COLUMN inputpar04 NEW_VALUE 4 NOPRINT
  select 1 inputpar01
       , 2 inputpar02
       , 3 inputpar03
       , 4 inputpar04
    from dual
   where 1=2;
  --
  PROMPT connecting as &1
  CONNECT &1/&2@&3
  SELECT username from user_users;
  PROMPT value for parameter 4 = &4


Recently, I was asked whether the user could be prompted to enter the value for the fourth parameter if it had been omitted from the command line. So you supply parameters 1, 2 and 3 and leave out parameter 4, which the script actually needs (for whatever reason).
This can be accomplished using extra scripts ("helper" scripts).

I create the first helper script, which asks for a parameter, and call it "ask_parameter.sql". Its contents are very simple:

  accept &1 prompt "What is the value for parameter &2 : "
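
Note that &1 and &2 here are the helper script's own parameters. When the main script calls it as "ask_parameter.sql 4 my_name_for_par4" (as we will do below), the command effectively expands to:

  accept 4 prompt "What is the value for parameter my_name_for_par4 : "

which sets substitution variable 4, i.e. the fourth parameter of the main script.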

A second script "dummy_script.sql" is created and has no content. This script will be used when a value is actually provided for the fourth parameter.

After these two scripts are created, we modify the original script to:

  COLUMN inputpar01 NEW_VALUE 1 NOPRINT
  COLUMN inputpar02 NEW_VALUE 2 NOPRINT
  COLUMN inputpar03 NEW_VALUE 3 NOPRINT
  COLUMN inputpar04 NEW_VALUE 4 NOPRINT
  select 1 inputpar01
       , 2 inputpar02
       , 3 inputpar03
       , 4 inputpar04
    from dual
   where 1=2;
  --
  PROMPT connecting as &1
  CONNECT &1/&2@&3
  SELECT username from user_users;
  PROMPT value for parameter 4 = &4
  --
  -- Check and set the fourth parameter 
  -- (NEWLY ADDED FUNCTIONALITY)
  --
  COLUMN ask_parameter NEW_VALUE ask_command NOPRINT;
  --
  SELECT nvl2( '&4'
             , 'dummy_script.sql'
             , 'ask_parameter.sql 4 my_name_for_par4'
             ) ask_parameter
  FROM dual;
  --
  set verify on
  set feedback on
  set termout on
  --
  -- Run the script determined in the previous step
  --
  @@&ask_command.
  --
  PROMPT value for parameter 4 = &4

The new part does the following:

  • define a new parameter "ask_command" that will contain the name of the script to call
  • based on the value of parameter 4, either select the value "dummy_script.sql" if it already contains a value, or select the value "ask_parameter.sql" (along with two new parameters for that script) if it contains no value
  • run the script selected in the previous step. This is why we need the dummy script: if you just leave this NULL, sqlplus will error on the @@ command.
  • if this is "ask_parameter.sql", then within that script &1 and &2 represent the parameters for that script and not the parameters for the main script. However, if you set "4" using the accept, it will set the fourth parameter for the main script!

In this case, when you call the main script using "testuser testpassword testdatabase" as parameters, you will be prompted:

  What is the value for parameter my_name_for_par4 :

This sets the value for parameter 4 of the main script, after which you can use it for any purpose (like using it as a parameter for another script).

So by using (generic) helper scripts, you can check, replace or prompt for any parameters. Just make sure that the helper scripts are in the same directory as the main script (which is recommended for simplicity), or add a (relative) path to the helper scripts, should you decide to place them in a subdirectory.