Oracle MVA

Tales from a Jack of all trades

Archive for the ‘RDBMS’ Category

Online Master Key with Oracle Key Vault on a Consolidated Platform


It seems I am writing a series. This is part 2 on Oracle Key Vault (OKV): having two databases use the same OKV server.

In part 1 I mentioned an open item: what happens if you run this okvclient on a consolidated environment? Here are my notes.

Please mind: this is written with my limited knowledge of Oracle Key Vault. It is likely that I will find out more in the (near) future and will have to update this series as a consequence.

If you want to create the master key for TDE in the same virtual wallet for both databases, you can simply create a symbolic link that ties the two configurations together:

ln -s $ORACLE_BASE/okv/saucer $ORACLE_BASE/okv/alien

Yes, my databases have names that match the db_domain (area51).

The downside to this is that you only have one endpoint, and therefore both databases can read each other's keys. I can imagine this being a problem if you ever decide to move one of the databases to another server. There is also a security risk: if one database is compromised, the second database is automatically compromised too. So this was a no-go.
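To see why the symlink means shared keys, here is a quick sketch with throwaway paths standing in for $ORACLE_BASE/okv (the file contents are illustrative, not a real okvclient.ora):

```shell
# Stand-in for $ORACLE_BASE/okv with one real config directory
BASE=$(mktemp -d)
mkdir -p "$BASE/okv/saucer"
echo "CONF_ID=1" > "$BASE/okv/saucer/okvclient.ora"

# Link the second database's config directory to the first one's
ln -s "$BASE/okv/saucer" "$BASE/okv/alien"

# Both databases now resolve to the very same configuration (and wallet)
cat "$BASE/okv/alien/okvclient.ora"   # CONF_ID=1 -- identical endpoint
```

Whatever the second database reads through the link is the first database's endpoint configuration, wallet included.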

Snapshots of VMs rock: roll back and proceed with a second okvclient installation.

I created a second endpoint in OKV for the database and enrolled it. I then scp-ed the okvclient.jar file that was downloaded during enrollment to my database server.

Next, I ran the okvclient.jar file as described in the documentation, pointing to the same installation directory that was used for the first okvclient:

java -jar okvclient_alien.jar -d /u01/app/oracle/product/12.2.0/okv -v
Oracle Key Vault endpoint software installed successfully.

A reader familiar with the client will spot the immediate problem: no endpoint password is requested! Further investigation showed that only the installation logfile was updated and the configuration was not changed. This means you do not have any configuration for the new endpoint; basically you are back in the scenario where both databases share keys.

Since I don't know how to create the configuration manually, I rolled back to the snapshot again. (Did I already mention that VirtualBox snapshots rock?)

So, I re-enrolled the endpoint and ran the installer again, this time pointing to a new directory:

java -jar okvclient_alien.jar -d /u01/app/oracle/product/12.2.0/okv_alien -v
Detected JAVA_HOME: /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.111.x86_64/jre
Enter new Key Vault endpoint password (<enter> for auto-login):
Confirm new Key Vault endpoint password:
Oracle Key Vault endpoint software installed successfully.

This is not the situation I want: I prefer to have one set of software and multiple configurations. A very short investigation of the configuration setup suggested that there are exactly two differences between the two okvclient installations:

  1. okvclient.ora, where the most prominent differences are CONF_ID and SSL_WALLET_LOC
  2. ewallet.p12, whose password is the registration password and which contains different keys
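A quick diff of the two okvclient.ora files makes such differences visible; a sketch with illustrative stand-in contents (a real okvclient.ora holds more settings):

```shell
# Stand-in configs for the two endpoints (contents are illustrative)
DIR=$(mktemp -d)
cat > "$DIR/okvclient_saucer.ora" <<'EOF'
CONF_ID=1
SSL_WALLET_LOC=/u01/app/oracle/product/12.2.0/okv/ssl
EOF
cat > "$DIR/okvclient_alien.ora" <<'EOF'
CONF_ID=2
SSL_WALLET_LOC=/u01/app/oracle/product/12.2.0/okv_alien/ssl
EOF

# diff prints exactly the per-endpoint differences (|| true: diff exits 1)
diff "$DIR/okvclient_saucer.ora" "$DIR/okvclient_alien.ora" || true
```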

So, copying these files to a local configuration directory should give me the desired result: one software tree with multiple configurations. First the setup for saucer:

rm /u01/app/oracle/okv/saucer/okvclient.ora
cp /u01/app/oracle/product/12.2.0/okv/ssl/ewallet.p12 /u01/app/oracle/okv/saucer/
cp /u01/app/oracle/product/12.2.0/okv/conf/okvclient.ora /u01/app/oracle/okv/saucer/
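After the copy, okvclient.ora still carries the SSL_WALLET_LOC of the software tree, so that value needs to point at the per-database directory. A minimal sketch of the edit, with illustrative file contents:

```shell
# Create a stand-in okvclient.ora (contents are illustrative, not complete)
CONF=$(mktemp -d)/okvclient.ora
cat > "$CONF" <<'EOF'
CONF_ID=1
SSL_WALLET_LOC=/u01/app/oracle/product/12.2.0/okv/ssl
EOF

# Point SSL_WALLET_LOC at the new per-database configuration directory
sed 's|^SSL_WALLET_LOC=.*|SSL_WALLET_LOC=/u01/app/oracle/okv/saucer|' \
    "$CONF" > "$CONF.tmp" && mv "$CONF.tmp" "$CONF"

grep SSL_WALLET_LOC "$CONF"   # SSL_WALLET_LOC=/u01/app/oracle/okv/saucer
```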

It turns out that if you move the files, okvutil does not function anymore, hence the copy. You also have to update okvclient.ora and point SSL_WALLET_LOC to the new location (/u01/app/oracle/okv/saucer). Then check the configuration:

SQL> conn/as sysdba
Connected.
SQL> show parameter db_name

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_name 			     string	 saucer

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
FILE		     NOT_AVAILABLE
HSM		     CLOSED

2 rows selected.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome01;

keystore altered.

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
FILE		     NOT_AVAILABLE
HSM		     OPEN

2 rows selected.

Super duper. Proceed with the second okvclient:

mkdir /u01/app/oracle/okv/alien
mv /u01/app/oracle/product/12.2.0/okv_alien/ssl/ewallet.p12 /u01/app/oracle/okv/alien/
mv /u01/app/oracle/product/12.2.0/okv_alien/conf/okvclient.ora /u01/app/oracle/okv/alien/

Now the /u01/app/oracle/product/12.2.0/okv_alien install is obsolete.

Proceed with setting the encryption key:

SQL> conn/as sysdba
Connected.
SQL> show parameter db_name

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_name 			     string	 alien

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
HSM		     CLOSED

1 row selected.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome01;

keystore altered.

SQL>  select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
HSM		     OPEN_NO_MASTER_KEY

1 row selected.

SQL> ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "Welcome01";

keystore altered.


SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
HSM		     OPEN

1 row selected.

Presto! Hope this helps.

Written by Jacco H. Landlust

December 20, 2016 at 2:03 am

Oracle Key Vault: Migrating an Existing TDE Wallet to Oracle Key Vault


Currently I am evaluating Oracle Key Vault (OKV) by setting it up in a VirtualBox environment on my laptop. I have run into some small issues that might be specific to my setup (in which case this post is just a personal reminder), or they might be more generic.

My testing environment consists of a single-instance 12c database running on ASM. Before I investigated OKV I had already tested transparent data encryption (TDE), and the wallet was located in ASM. Therefore the scenario described in the OKV documentation for migrating an existing TDE wallet to Oracle Key Vault applies to me.

Registration of the endpoint (database) in OKV went perfectly: I was able to download a jar file and install the OKV software. The jar file writes both the configuration and the OKV client software to disk. It is on my list of open items to investigate what happens if you register a second database on the same server; the way the software and configuration are installed makes me wonder if this will fly in a consolidated environment.

The first issue I hit was the action to be performed at bullet 4. The documentation suggests updating the ENCRYPTION_WALLET_LOCATION in sqlnet.ora to:

ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM)(METHOD_DATA=(DIRECTORY=wallet_location))) 

It turns out you should leave this at the current wallet_location, in my case +DATA. This is required for the migration in step 8 to run successfully.
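In other words, during the migration my sqlnet.ora kept pointing at the existing wallet location; something along these lines (+DATA is my wallet location, yours will differ):

```
ENCRYPTION_WALLET_LOCATION=
 (SOURCE=
   (METHOD=HSM)
   (METHOD_DATA=
     (DIRECTORY=+DATA)))
```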

When you query V$ENCRYPTION_WALLET as suggested in step 6, you actually get two rows back, whereas you only had one row before you configured HSM as the source method. I think the documentation could use an example there.

Since I am running 12c, I can skip directly to step 8 and run the command:

ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "<endpoint password>" MIGRATE USING "<wallet password>" WITH BACKUP;

This took me a little longer to work out. It turns out that you enter an endpoint password when you register the endpoint, but only if you did not select auto-login. And that is exactly what I did… Only after re-enrolling the endpoint did I realize that I could have passed NULL as described in the 11gR2 instructions some two lines above the 12c instructions. So after re-enrolling and setting a password, I was able to migrate the encryption key into OKV.

Now all that is left is opening the keystore using the command:

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "Key_Vault_endpoint_password";

The command executes successfully, given the feedback:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome01;

keystore altered.

However when I checked v$encryption_wallet it showed that the wallet was still closed:

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
ASM		     CLOSED
HSM		     CLOSED

2 rows selected.

Now this was somewhat annoying. On to the alert.log, which showed the following lines:

kzthsmcc1: HSM heartbeat check failed to cache
object handle. Error code: 1014
HSM connection lost, closing wallet

Time to hit the documentation. It showed a clue: “Ensure that the ORACLE_BASE environment variable is set before you start the oracle process manually. This is very important.” And important it is indeed. Because without the ORACLE_BASE environment variable, OKV cannot find its configuration. And that breaks your connection to the HSM. So I added ORACLE_BASE to the database configuration in CRS:

srvctl setenv database -db saucer -env ORACLE_BASE=/u01/app/oracle

This requires a restart of the database (via srvctl!!!!) and:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome01;

keystore altered.

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
ASM		     CLOSED
HSM		     OPEN

2 rows selected.

So now there is only one problem left: the wallet in ASM still exists and still contains the keys. The OKV documentation does not describe what to do next. My suggestion would be to remove the wallet from ASM and update sqlnet.ora:

ENCRYPTION_WALLET_LOCATION=
 (SOURCE=
   (METHOD=HSM))

Because that would leave the encryption wallet view in the following state:

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
FILE		     NOT_AVAILABLE
HSM		     OPEN

2 rows selected.

Hope this helps.

Written by Jacco H. Landlust

December 19, 2016 at 12:43 am

JDBC statement cache setting


Recently a colleague asked me about the statement cache setting in WebLogic. The reason he asked was that the documentation didn’t make any sense to him in combination with advice given by an external expert. Here’s the doc he was referring to.

The tooltip documentation in WebLogic says:

WebLogic Server can reuse statements in the cache without reloading the statements, which can increase server performance. Each connection in the connection pool has its own cache of statements.

Now this suggests that WebLogic maintains some kind of cache, but really it doesn’t (in combination with an Oracle database). All it does is open a cursor on the Oracle database and reuse that cursor.
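For reference, the setting lives in the jdbc-connection-pool-params section of the data source descriptor (it can also be set through the console); a minimal fragment with illustrative values:

```
<jdbc-connection-pool-params>
  <!-- number of prepared statements cached per pool connection -->
  <statement-cache-size>10</statement-cache-size>
  <statement-cache-type>LRU</statement-cache-type>
</jdbc-connection-pool-params>
```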

To demonstrate what is happening I created a small example: a SQLAuthenticator for WebLogic, allowing users in a database table to authenticate to WebLogic. In this presentation you can find the DDL and DML for the tables and a description of how to set up this SQL authenticator.

So, my initial data source has a statement cache size of 10 (the default). When I restart the database and WebLogic and log in to the WebLogic console, I find the following open cursors:

select hash_value, cursor_type, sql_text
  from v$open_cursor
 where user_name = 'DEMO'
/

HASH_VALUE CURSOR_TYPE SQL_TEXT
---------- ----------- ------------------------------------------------
32127143   OPEN        SELECT 1 FROM DUAL
238104037  OPEN        SELECT G_NAME FROM GROUPMEMBERS WHERE G_MEMBER = :1
3221480394 OPEN        SELECT U_PASSWORD FROM USERS WHERE U_NAME = :1

3 rows selected.

The minute I reconfigure the statement cache to 0 (= disabled), restart the database and WebLogic, and log in to the console, I find the following open cursors:

HASH_VALUE CURSOR_TYPE SQL_TEXT
---------- ----------- ------------------------------------------------
238104037  OPEN        SELECT G_NAME FROM GROUPMEMBERS WHERE G_MEMBER = :1

1 row selected.

This simple test teaches me that a cursor is kept open on the USERS table and on DUAL. The query running on DUAL is actually the test query for the data source.

This suggests that the statement cache keeps track of which statement has been run over which connection. This test is too small to prove that. I also wonder what happens in combination with the pin-to-thread setting of the JDBC driver. Food for a new blogpost 🙂

So, in short: the statement cache of your data source has a direct impact on the number of open cursors. This can (and does) improve performance: you don’t have to create a new cursor when you reuse a statement. Setting the statement cache to 0 (disabling the cache) is in my opinion not a best practice; by default every session to your 11.2 database can have 50 open cursors, so you have plenty to spare. You should tune open_cursors and session_cached_cursors on the database according to your application’s needs.
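Tuning those two parameters is plain ALTER SYSTEM work; the values below are illustrative only, size them for your application (note that session_cached_cursors is static at the system level, hence SCOPE=SPFILE and a restart):

```
-- values are illustrative; size according to your application's needs
ALTER SYSTEM SET open_cursors = 300 SCOPE=BOTH;
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE=SPFILE;
```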

Hope this helps.

Written by Jacco H. Landlust

April 19, 2013 at 10:55 pm

Posted in RDBMS, Weblogic

Oracle Database Appliance With WebLogic Server (ODA X3-2)


On April 3rd the new ODA X3-2 was released. Sadly I was sick from April 1st on, so I had to miss the launch, and I was so well prepared…  others had the scoop. Anyway, as an administrator who manages more than just databases, this release is pretty exciting, since it brings not only virtualization but also WebLogic to the ODA. This would make the ODA a pretty good appliance for some of my customers, so I did a little investigation into the product.

This blogpost is the first result of that investigation. My main focus was the WebLogic part of the box. The questions that arose were answered either by documentation or by product management. Obviously that doesn’t guarantee that I understood everything correctly 🙂 I left out references to documentation on purpose; it would be smart for everyone interested in the product to hit the documentation thoroughly.

The most important slide in the slide deck I received about the ODA launch is this:

[image: oda-slide]

It makes some pretty bold claims that can be verified easily. The three simplified statements call for some clarification. Here are my questions, plus the answers I found:

Simplified provisioning / install

Q: Can we test any of this without ODA?

A: No, although I was able to get a virtual ODA running in a VirtualBox environment. This is by no means supported and requires altering the images that Oracle sends you.

Q: So how do you configure this beast?

A: You install an image on the system with Oracle VM that you can download freely from My Oracle Support. This image contains oakcli which is the cli used to manage the ODA.

Q: Ah, Oracle VM. Where is the Oracle VM Manager?

A: There is none. oakcli deploys all your VMs.

Q: The ODA is two physical machines running OVM; where is the shared storage?

A: The only shared storage available is database shared storage, i.e. DBFS. ARGH… DBFS is already on my todo list! 

Q: So no HA features from OVM?

A: No.

Q: What about the VMs that oakcli deploys? Can I build my own templates?

A: No you cannot. Well, technically you can, but it’s not supported.

Q: Wait a minute, no custom templates? What about adding layered products to the VM?

A: No can do. Currently only WebLogic is supported.

Q: Well, if I can’t define my own templates, what about my WebLogic domain structure?

A: To my understanding that’s fixed too: one Administration Server on its own VM, two managed servers in one cluster (on two VMs), and two Oracle Traffic Director (OTD) VMs.

Q: What is the difference between the ODA-BASE VM and the other domUs?

A: The ODA-BASE VM is the only one that can actually connect to the local disks directly. 

Q: So that means you should preferably run databases in the ODA-BASE VM?

A: Yes.

investment model (a.k.a. licenses)

Q: How does this “pay-as-you-grow” thing work partition wise?

A: It is VM hard partitioning. Not Oracle trusted partition as on ExaLogic. And partitioning only works in multiples of two (2).

Q: So I pay per core, is hyper threading turned on?

A: Yes, but I didn’t find out yet what that means for your licenses….

Q: So I can scale up and down?

A: No. Oracle expects you to grow, not to scale down. You can scale down software, not licenses.

Q: What about this separate administration server?

A: License-wise that should be treated as a managed server (= pay for it).

Q: And those OTD’s? Do I have to pay for them too?

A: No. OTD is included with WebLogic Enterprise Edition and WebLogic Suite.

maintenance

Q: The JDK is in the middleware home, how does that work with upgrades?

A: Oracle will provide patches as needed.

Q: So how does a domain  upgrade work?

A: Currently not supported. So no maintenance version wise.

Q: An EM agent exists on every VM? Which version is that?

A: Currently there is no EM agent installed. Oracle plans to have the agent installed and supported in the next patch releases. This will be a 12c EM agent.

Well. That covers all my findings. Hope it helps you in your investigation of ODA.

Written by Jacco H. Landlust

April 5, 2013 at 9:34 pm

Posted in RDBMS, Weblogic

Shrinking thin provisioned ASM disks results in smaller diskgroup


Just the other day I ran out of space on my laptop. That’s what you get with these “tiny” SSDs nowadays. Anyway, when looking around for where most data was stored, I found that an ASM disk in a VM was taking most of the space. This disk, like all my disks for VMs, was thin provisioned. Usually I use some zero-fill utility to overwrite the free space in a thin-provisioned disk so VirtualBox can shrink it. But this sounded risky to me for ASM, and those utilities need an ext filesystem anyway.

Some googling taught me that I wasn’t the first one with this issue. Oracle actually supports a tool for this called ASRU: ASM Storage Reclamation Utility (ASMSRU must have been too long). So, easy does it: download the utility, fire it up ( ./ASRU -a sysasm DATA ), wait for some time and *presto*: 30 GB back of my 256 GB SSD. How cool is that!

Until about a week later, when I ran into space issues in ASM in the VM I mentioned before. My ASM disk group was full. That sounded hilarious to me, since it was on a 40 GB disk (I checked it in VirtualBox and yes: still 40 GB). Then I remembered ASRU.

Checking v$asm_diskgroup showed something interesting:

SQL> SELECT name, total_mb, usable_file_mb FROM v$asm_diskgroup;

NAME       TOTAL_MB  USABLE_FILE_MB
--------- --------- ---------------
DATA          10090            1288

So, ASRU not only shrunk the disk, it also resized the disk group in ASM. Now how about that? I didn’t read any statement in the README file saying I had to resize my disk group after running ASRU….

SQL> ALTER DISKGROUP data RESIZE ALL;

Diskgroup altered.

SQL> SELECT name, total_mb, usable_file_mb FROM v$asm_diskgroup;

NAME       TOTAL_MB  USABLE_FILE_MB
--------- --------- ---------------
DATA          40954           32152

So, if you happen to use ASRU, don’t forget to resize your disk group, or you will live in interesting times.

Hope this helps.

Written by Jacco H. Landlust

January 13, 2013 at 3:16 pm

Posted in ASM, RDBMS, VirtualBox, VMWare

Configuring Fusion Middleware JDBC Data Sources Correctly


The out-of-the-box JDBC properties for a data source in a random Fusion Middleware product (SOA, WebCenter, OIM, etc.; they are all alike) contain guesses about your environment and usage. The same goes for the settings required by RCU when installing a repository.

For a customer I recently wrote a document explaining which settings to set on the database and in WebLogic when configuring data sources for a Fusion Middleware product for production usage while connected to a RAC database.

The document assumes you are running 11.2 RAC and WebLogic 10.3.4 or newer. Here’s the document:

Configure JDBC data sources for RAC

Hope this helps.

BTW: if you already downloaded the document, please download it again. It seems I made an error in the distributed lock area.

Written by Jacco H. Landlust

November 17, 2012 at 1:13 am

Contention problems with AQ


While doing a routine checkup I found four processes eating up CPUs on one node of a two-node RAC cluster: ora_q000_SID2, ora_q001_SID2, ora_q002_SID2 & ora_q003_SID2. This suggests some extra AQ activity on the SID2 instance. (This is a 10.2.0.4 RDBMS RAC running on 64-bit OEL 4.7, with CRS and ASM on 11.1.0.7.)

So first thing I investigated were the queues:


Written by Jacco H. Landlust

February 26, 2010 at 10:11 am

Posted in RAC, RDBMS