Oracle MVA

Tales from a Jack of all trades

Archive for the ‘Installing’ Category

Online Master Key with Oracle Key Vault on a Consolidated Platform


Seems I am writing a series. This is part 2 on Oracle Key Vault (OKV): having two databases use the same OKV.

In part 1 I mentioned an open item: what happens if you run this okvclient in a consolidated environment? Here are my notes.

Please mind: this is written with my little knowledge of Oracle Key Vault. It is likely that I will find out more in the (near) future and have to update this series as a consequence.

If you want to create the master key for TDE in the same virtual wallet for both databases, you can simply create a symbolic link that links the configurations together:

ln -s $ORACLE_BASE/okv/saucer $ORACLE_BASE/okv/alien

Yes, my databases have names that match the db_domain (area51) theme.

The downside to this is that you only have one endpoint, and therefore both databases can read each other's keys. I can imagine this being a problem if you ever decide to move one of the databases to another server. There is also a security risk: if one database is compromised, the second database is automatically compromised too. So this was a no-go.

Snapshots of VMs rock: roll back and proceed with a second okvclient installation.

I created a second endpoint in OKV for the database and enrolled the endpoint. I then scp-ed the okvclient.jar file that was downloaded during enrollment to my database server.

Next I ran the okvclient.jar file as described in the documentation, pointing to the same installation directory as used for the first okvclient:

java -jar okvclient_alien.jar -d /u01/app/oracle/product/12.2.0/okv -v
Oracle Key Vault endpoint software installed successfully.

For the reader that is familiar with the client an immediate problem occurs: no endpoint password is requested! Further investigation showed that only the installation logfile was updated and the configuration was not changed. This means that you do not have any configuration for the new endpoint; basically you are back in the scenario where both databases share keys.

Since I don’t know how to create the configuration manually, I rolled back to the snapshot again (did I already mention that VirtualBox snapshots rock?).

So, I re-enrolled the endpoint and ran the installer again, only now pointing to a new directory:

java -jar okvclient_alien.jar -d /u01/app/oracle/product/12.2.0/okv_alien -v
Detected JAVA_HOME: /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.111.x86_64/jre
Enter new Key Vault endpoint password (<enter> for auto-login):
Confirm new Key Vault endpoint password:
Oracle Key Vault endpoint software installed successfully.

This is not the situation I want: I prefer to have one set of software and multiple configurations. A very short investigation of the configuration setup suggested that there are exactly two differences between the two okvclient installations:

  1. okvclient.ora, where the most prominent differences are CONF_ID and SSL_WALLET_LOC
  2. ewallet.p12, whose password is the endpoint registration password and which contains different keys

So, copying these files to the local configuration directory should get my desired result: one software tree with multiple configurations. First the setup for saucer:

rm /u01/app/oracle/okv/saucer/okvclient.ora
cp /u01/app/oracle/product/12.2.0/okv/ssl/ewallet.p12 /u01/app/oracle/okv/saucer/
cp /u01/app/oracle/product/12.2.0/okv/conf/okvclient.ora /u01/app/oracle/okv/saucer/

Turns out that if you move the files instead of copying them, okvutil does not function anymore. You also have to edit the copied okvclient.ora and point SSL_WALLET_LOC to the new location (/u01/app/oracle/okv/saucer).
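
After this edit the relevant line in okvclient.ora looks roughly like this (just a sketch; all other generated parameters are left untouched):

SSL_WALLET_LOC=/u01/app/oracle/okv/saucer

Then check the configuration: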

SQL> conn/as sysdba
Connected.
SQL> show parameter db_name

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_name 			     string	 saucer

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
FILE		     NOT_AVAILABLE
HSM		     CLOSED

2 rows selected.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome01;

keystore altered.

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
FILE		     NOT_AVAILABLE
HSM		     OPEN

2 rows selected.

Super duper. Proceed with the second okvclient:

mkdir /u01/app/oracle/okv/alien
mv /u01/app/oracle/product/12.2.0/okv_alien/ssl/ewallet.p12 /u01/app/oracle/okv/alien/
mv /u01/app/oracle/product/12.2.0/okv_alien/conf/okvclient.ora /u01/app/oracle/okv/alien/

Now the /u01/app/oracle/product/12.2.0/okv_alien install is obsolete.
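
Since the okv_alien tree is going away, I assume the same okvclient.ora edit applies here as well, pointing SSL_WALLET_LOC to the new per-database directory (again just a sketch):

SSL_WALLET_LOC=/u01/app/oracle/okv/alien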

Proceed with setting the encryption key:

SQL> conn/as sysdba
Connected.
SQL> show parameter db_name

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_name 			     string	 alien

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
HSM		     CLOSED

1 row selected.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome01;

keystore altered.

SQL>  select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
HSM		     OPEN_NO_MASTER_KEY

1 row selected.

SQL> ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "Welcome01";

keystore altered.


SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
HSM		     OPEN

1 row selected.

Presto! Hope this helps.

Written by Jacco H. Landlust

December 20, 2016 at 2:03 am

Oracle Key Vault: Migrating an Existing TDE Wallet to Oracle Key Vault


Currently I am evaluating Oracle Key Vault (OKV) by setting it up in a VirtualBox environment on my laptop. I have run into some small issues that might be specific to me (in which case this post is just a personal reminder) or might be more generic.

My testing environment consists of a single-instance 12c database running on ASM. Before I investigated OKV I had already tested with Transparent Data Encryption (TDE), with the wallet located in ASM. Therefore the scenario described in the OKV documentation for migrating an existing TDE wallet to Oracle Key Vault applies to me.

Registration of the endpoint (database) in OKV went perfectly: I was able to download a jar file and install the OKV software. The jar file writes the configuration and the OKV client software to disk. It is on my list of open items to investigate what happens if you register a second database on the same server; the way the software and configuration are installed makes me wonder if this will fly in a consolidated environment.

The first issue I hit was the action to be performed at bullet 4. The documentation suggests updating the ENCRYPTION_WALLET_LOCATION in sqlnet.ora to

ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM)(METHOD_DATA=(DIRECTORY=wallet_location))) 

It turns out you should leave the DIRECTORY at the current wallet location, in my case +DATA. This is required for the migration at step 8 to run successfully.
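
In other words, something along these lines (a sketch based on my setup, where the existing wallet lives in +DATA):

ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM)(METHOD_DATA=(DIRECTORY=+DATA)))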

When you query V$ENCRYPTION_WALLET as suggested in step 6, you actually get two rows returned whereas you only had one row before you configured HSM as source method. I think the documentation could use an example there.

Since I am running 12c, I can skip directly to step 8 and run the command:

ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "<endpoint password>" MIGRATE USING "<wallet password>" WITH BACKUP;

This took me a little longer to work out too. Turns out that you enter an endpoint password when you register the endpoint, but only if you did not select auto-login. And selecting auto-login is exactly what I did… Only after re-enrolling the endpoint did I realize that I could have passed NULL as described in the 11gR2 instructions, some two lines above the 12c instructions. So after re-enrolling and setting a password, I was able to migrate the encryption key into OKV.

Now all that is left is opening up the keystore using the command:

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "Key_Vault_endpoint_password";

The command executed successfully, given the feedback:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome01;

keystore altered.

However when I checked v$encryption_wallet it showed that the wallet was still closed:

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
ASM		     CLOSED
HSM		     CLOSED

2 rows selected.

Now this was somewhat annoying. On to the alert.log, which showed the following lines:

kzthsmcc1: HSM heartbeat check failed to cache
object handle. Error code: 1014
HSM connection lost, closing wallet

Time to hit the documentation. It showed a clue: “Ensure that the ORACLE_BASE environment variable is set before you start the oracle process manually. This is very important.” And important it is indeed, because without the ORACLE_BASE environment variable OKV cannot find its configuration, and that breaks your connection to the HSM. So I added ORACLE_BASE to the database configuration in CRS:

srvctl setenv database -db saucer -env ORACLE_BASE=/u01/app/oracle
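
To double-check that the environment variable is registered you can query it back (12c srvctl syntax; older releases use -d instead of -db):

srvctl getenv database -db saucer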

This requires a restart of the database (via srvctl!!!!) and:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome01;

keystore altered.

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
ASM		     CLOSED
HSM		     OPEN

2 rows selected.

So now there is only one problem left: the wallet in ASM still exists and still contains the keys. The OKV documentation does not describe what to do next. My suggestion would be to remove the wallet from ASM and update sqlnet.ora:

ENCRYPTION_WALLET_LOCATION=
 (SOURCE=
   (METHOD=HSM))
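
The wallet itself could then be removed with asmcmd, for example (the path below is purely hypothetical; locate your own ewallet.p12 first):

asmcmd ls +DATA/SAUCER/WALLET/tde/
asmcmd rm +DATA/SAUCER/WALLET/tde/ewallet.p12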

That would leave the v$encryption_wallet view in the following state:

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
FILE		     NOT_AVAILABLE
HSM		     OPEN

2 rows selected.

Hope this helps.

Written by Jacco H. Landlust

December 19, 2016 at 12:43 am

could not open jre6\lib\i386\jvm.cfg


Recently I spent some time at a customer that runs Windows, Windows 2008 R2 to be exact. My advice: don't; that UAC business especially costs lots of extra time. Oracle and Windows are not a match made in heaven, but if you happen to end up at a customer running that operating system you might run into the error I used as the title for this blog post too. The error is generic for any Windows system.

When you install a new JDK (don’t forget to run as administrator) into a new JAVA_HOME, this all works perfectly. But when you next open a DOS box by calling cmd (again as administrator), set your JAVA_HOME and PATH, and call java -version, you end up with the dreaded could not open jre6\lib\i386\jvm.cfg error. This is kind of annoying, since no reference to any other JAVA_HOME is in your PATH apart from the one you just set.

It turns out that upon installation of the first JDK, java.exe (and some more java* executables) is copied to c:\windows\system32, and that happens to be the current directory of your cmd box when you open it. Since the current directory is searched first, you run the local java.exe when you call java instead of the java.exe from JAVA_HOME\bin (which is what I expected).
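
A quick way to see the shadowing is the where command, which searches the current directory before the PATH (the JDK path below is just an example):

C:\Windows\system32> where java
C:\Windows\system32\java.exe
C:\Program Files\Java\jdk1.6.0_45\bin\java.exe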

Solution:
Navigate to some other directory before you call java (like cd \), or delete the java.exe file from c:\windows\system32. The latter could give some unexpected results, so navigating to a different directory is what I prefer.

Hope this helps.

Written by Jacco H. Landlust

February 10, 2013 at 9:11 pm

Posted in Installing, Windows

Configuring Fusion Middleware JDBC Data Sources Correctly


The out-of-the-box JDBC settings for a data source in a random Fusion Middleware product (SOA, WebCenter, OIM, etc.; they are all alike) contain guesses about your environment and usage. The same goes for the settings required by RCU when installing a repository.

For a customer I recently wrote a document explaining which settings to set on the database and in WebLogic when configuring data sources for a Fusion Middleware product for production usage while connected to a RAC database.

The document assumes you are running an 11.2 RAC database and WebLogic 10.3.4 or newer. Here’s the document:

Configure JDBC data sources for RAC

Hope this helps.

BTW: if you already downloaded the document, please download it again. Seems I made an error in the distributed lock area.

Written by Jacco H. Landlust

November 17, 2012 at 1:13 am

RCU-6011


After reading the documentation and running RCU from the command line for some time (mostly from a script I built) I felt confident about RCU. For a new environment I had to run RCU manually, so I set up the command:

$ ./rcu -silent -createRepository -connectString scan.area51.local:1521:rcuservice -dbUser SYS -dbRole sysdba -component MDS -component SOAINFRA -component OIM -component IAU -schemaPrefix DEV -f < /home/oracle/pass

and ran into this error:

Processing command line ....
Repository Creation Utility - Checking Prerequisites
Checking Global Prerequisites
RCU-6011:A valid prefix should be specified. Prefix can contain only alpha-numeric characters. It should not start with a number and should not contain any special characters.
RCU-6091:Component name/schema prefix validation failed.

Now this error surprised me to a great extent. The parameters were all there, so what could cause this? Some shuffling around with the parameters taught me that this command does run:

$ ./rcu -silent -createRepository -connectString scan.area51.local:1521:rcuservice -dbUser SYS -dbRole sysdba -schemaPrefix TLTB1 -component MDS -component SOAINFRA -component OIM -component IAU -f < /home/oracle/pass

Turns out that the order of the parameters matters to RCU: -schemaPrefix has to be specified before the -component options.

Hope this helps.

Written by Jacco H. Landlust

October 26, 2011 at 11:34 am

iscsi-targets


I am building a new environment on my testing kit. Instead of downloading OpenFiler, I decided to build my own iSCSI device on OEL 5. The main reason for this exercise is that I want this box to be a DNS server and some more.

Anyway, configuring iSCSI is not an average DBA’s job. I don’t like to type in commands at a prompt when I don’t know what they mean. Every how-to I found keeps calling difficult commands to create an iSCSI LUN, which made me spend lots of time in man pages last night. In the end this was a waste of time, since all you need to do is:

  • add a disk to your VM (let’s say /dev/sdb)
  • install the perl-Config-General and scsi-target-utils RPMs from the ClusterStorage directory on the DVD with your installation media (see the example after this list)
  • edit /etc/tgt/targets.conf and make it look like this:
    <target iqn.2010-08.local.area51:ASM1>
    backing-store /dev/sdb
    </target>
    where area51.local is my domain, ASM1 is my LUN and /dev/sdb is the disk just added to the VM
  • make the tgtd daemon start at boot and start it now:
    chkconfig --level 345 tgtd on; service tgtd start
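
For the RPM installation mentioned in the list, something like this should do, assuming the installation DVD is mounted on /media/cdrom (mount point and package versions will differ):

rpm -ivh /media/cdrom/ClusterStorage/perl-Config-General-*.rpm
rpm -ivh /media/cdrom/ClusterStorage/scsi-target-utils-*.rpm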

Now whenever you restart your server, you will still have the same iSCSI LUN presented to the world. No big man page needed, just a simple configuration file. How about that…

Obviously, when you want to check the LUN, you do need the tgtadm command. This should do the trick:

tgtadm --lld iscsi --op show --mode target

Written by Jacco H. Landlust

August 24, 2010 at 8:10 am

Posted in Installing, Linux

“ignore” means “please restart the process”


I just wasted lots of my precious time (and the time of a support officer at Oracle). When loading a repository using RCU I got an error mentioning that the TSPURGE package was not valid. The options I got were “ignore” and “stop”. Looking some more into the error, it turns out that the TSPURGE package (in the ODS schema) relies on DBMS_JOB. The grant to PUBLIC for DBMS_JOB had been removed on the security advice from OEM though. Just granting execute privileges on DBMS_JOB to the ODS user and hitting “ignore” results in a faulty repository (even though RCU claims all went perfectly). So, “ignore” means “please restart the process from the start”. It’s very interesting that Oracle’s own RCU tool doesn’t handle the security settings suggested by Oracle though.
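
If you restore the grant before rerunning RCU from scratch, the statement itself is trivial (run as SYS):

-- put back the execute privilege that the OEM security hardening removed
GRANT EXECUTE ON DBMS_JOB TO ods;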

Written by Jacco H. Landlust

June 15, 2010 at 4:07 pm