Oracle MVA

Tales from a Jack of all trades

Archive for the ‘documentation errors’ Category

Oracle Key Vault: Migrating an Existing TDE Wallet to Oracle Key Vault


Currently I am evaluating Oracle Key Vault (OKV) by setting it up in a VirtualBox environment on my laptop. I have run into some small issues that might be specific to me (in which case this post is just a personal reminder), or they might be more generic.

My testing environment consists of a single instance 12c database running on ASM. Before I investigated OKV I had already tested with Transparent Data Encryption (TDE), and the wallet was located in ASM. Therefore the scenario described in the OKV documentation for migrating an existing TDE wallet to Oracle Key Vault applies to me.

Registration of the endpoint (database) in OKV went perfectly: I was able to download a jar file and install the OKV software. The jar file writes the configuration as well as the OKV client software to disk. It is on my list of open items to investigate what happens when you register a second database on the same server; the way the software and configuration are installed makes me wonder if this will fly in a consolidated environment.
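For reference, installing the endpoint software from the downloaded jar boils down to a single command along these lines (a sketch; the destination directory is just my choice, adjust to taste):

java -jar okvclient.jar -d /u01/app/oracle/okv -v

If I recall correctly, the install asks for an endpoint password, or lets you press enter for auto-login. That choice becomes relevant later on in this post.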

The first issue I hit was the action to be performed at bullet 4. The documentation suggests updating the ENCRYPTION_WALLET_LOCATION in sqlnet.ora to

ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM)(METHOD_DATA=(DIRECTORY=wallet_location))) 

It turns out you should leave the directory at the current wallet location, in my case +DATA. This is required for the migration in step 8 to run successfully.
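In other words, with the wallet living in +DATA the entry looks roughly like this (a sketch; substitute your own wallet location for the DIRECTORY value):

ENCRYPTION_WALLET_LOCATION=
 (SOURCE=
   (METHOD=HSM)
   (METHOD_DATA=
     (DIRECTORY=+DATA)))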

When you query V$ENCRYPTION_WALLET as suggested in step 6, you actually get two rows returned, whereas you only had one row before you configured HSM as the source method. I think the documentation could use an example there.
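Something along these lines would already help, a query that also shows which row belongs to which wallet (a sketch; I left out the column formatting):

SQL> select wrl_type, wrl_parameter, status from v$ENCRYPTION_WALLET;

One row describes the existing (ASM) wallet, the other one the HSM/OKV configuration, as the outputs further down in this post illustrate.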

Since I am running 12c, I can skip straight to step 8 and run the command

ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "<endpoint password>" MIGRATE USING "<wallet password>" WITH BACKUP;

This took me a little longer to work out too. Turns out that you enter an endpoint password when you register the endpoint, but only if you did not select auto-login. And auto-login is exactly what I selected… Only after re-enrolling the endpoint did I realize that I could have passed NULL, as described in the 11gR2 instructions some two lines above the 12c instructions. So after re-enrolling and setting a password, I was able to migrate the encryption key into OKV.

Now all that is left is opening up the keystore using the command

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "Key_Vault_endpoint_password";

The command executed successfully, given the feedback:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome01;

keystore altered.

However, when I checked v$encryption_wallet, it showed that the wallet was still closed:

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
ASM		     CLOSED
HSM		     CLOSED

2 rows selected.

Now this was somewhat annoying. On to the alert.log, which showed the following lines:

kzthsmcc1: HSM heartbeat check failed to cache
object handle. Error code: 1014
HSM connection lost, closing wallet

Time to hit the documentation. And it showed a clue: “Ensure that the ORACLE_BASE environment variable is set before you start the oracle process manually. This is very important.” And important it is indeed, because without the ORACLE_BASE environment variable OKV cannot find its configuration, and that breaks your connection to the HSM. So I added ORACLE_BASE to the database configuration in CRS:

srvctl setenv database -db saucer -env ORACLE_BASE=/u01/app/oracle

This requires a restart of the database (via srvctl!).
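The restart is nothing fancy; for my single instance it boils down to something like this (a sketch, using the database name from the setenv command above):

srvctl stop database -db saucer
srvctl start database -db saucer

After the restart: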

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome01;

keystore altered.

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
ASM		     CLOSED
HSM		     OPEN

2 rows selected.

So now there is only one problem left: the wallet in ASM still exists and still holds the current keys. The OKV documentation does not describe what to do next. My suggestion would be to remove the wallet from ASM and update sqlnet.ora:

ENCRYPTION_WALLET_LOCATION=
 (SOURCE=
   (METHOD=HSM))

That would leave V$ENCRYPTION_WALLET in the following state:

SQL> select wrl_type, status from v$ENCRYPTION_WALLET;

WRL_TYPE	     STATUS
-------------------- ------------------------------
FILE		     NOT_AVAILABLE
HSM		     OPEN

2 rows selected.
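For the removal itself I would use asmcmd, along these lines. Note that the directory is a guess on my part; the exact location depends on how the wallet was created, so check with ls first, and obviously make sure you have a backup of the wallet before removing anything:

asmcmd ls +DATA/SAUCER/WALLET
asmcmd rm +DATA/SAUCER/WALLET/ewallet.p12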

Hope this helps.

Written by Jacco H. Landlust

December 19, 2016 at 12:43 am

yum exclude list for Exalogic vServers


Recently I have been doing some work on Exalogic. While building a template for vServers on Exalogic I ran into an issue: after executing yum update followed by a reboot, I wasn’t able to connect to the vServers anymore. This is caused by an issue with the network stack which, in the end, is caused by a documentation error.

It seems that the yum exclude list for vServers is not correctly documented; Oracle Support Document 1594674.1 (Exalogic Virtual Environment – Guest vServer Upgrade to Oracle Linux v5.10) also seems to be off. The exclude list that didn’t break the operating system after a yum update is:

exclude=kernel* compat-dapl* dapl* ib-bonding* ibacm* ibutils* ibsim* infiniband-diags* kmod-ovmapi-uek* libibcm* libibmad* libibumad* libibverbs* libmlx4* libovmapi* librdmacm* libsdp* mpi-selector* mpitests_openmpi_gcc* mstflint* mvapich* ofa* ofed* openmpi_gcc* opensm* ovm-template-config* ovmd* perftest* qperf* rds-tools* sdpnetstat* srptools* exalogic* infinibus* xenstoreprovider* initscripts* nfs-utils*
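For completeness: this line goes in the [main] section of /etc/yum.conf on the vServer, so that section ends up looking something like this (a sketch; keep whatever settings are already there and add the exclude line from above):

[main]
# existing settings stay as they are
exclude=kernel* compat-dapl* dapl* ib-bonding* ...   (the full list from above)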

Written by Jacco H. Landlust

January 3, 2014 at 3:17 pm

SOA OIM integration and WebLogic administration port


Recently I set up an Oracle Identity Manager (OIM) environment and I happened to enable the administration port. Mike Fleming wrote an excellent article about why you should enable the administration port of your WebLogic domain, so I won’t repeat his words. I did run into a small issue when I enabled the administration port for OIM which I figured would be interesting for other people too.

As soon as I logged into OIM and clicked on tasks the following error appeared in the oim_server1.out file:

< javax.naming.AuthenticationException [Root exception is java.lang.SecurityException: User 'principals=[weblogic, Administrators]' has administration role. All tasks by adminstrators must go through an Administration Port.]>

Now that is interesting. It seems that the OIM-SOA integration stops working because of the administration port. So I started reading the documentation, but found no clues there. Then I looked a little further and found this document, which states:

“Connections that specify administrator credentials can use only the administration port”

Now there’s the answer for you, just as the logging states: you cannot use an administrator account to integrate OIM and SOA.

So how can I change this? First of all, you need to set up a new account in WebLogic. Navigate to the console and click on Security Realms –> myrealm –> Users and Groups. Then click New, fill in the user details, and click OK.
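If you prefer scripting over clicking, the same user can be created with WLST along these lines (a sketch; the connect URL and all passwords are placeholders, and I assume the default authenticator of the default realm is used):

connect('weblogic', '<admin password>', 't3://adminhost:7001')
serverConfig()
# look up the default authenticator and create the soaadmin user, without any group memberships
atn = cmo.getSecurityConfiguration().getDefaultRealm().lookupAuthenticationProvider('DefaultAuthenticator')
atn.createUser('soaadmin', '<soaadmin password>', 'SOA admin user for OIM integration')
disconnect()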

Do not assign any roles to the user. Next, navigate to EM.

First we will set the password for the soaadmin user in the credential map. Click on WebLogic Domain –> (domain name), then on WebLogic Domain –> Security –> Credentials.

Select oim and then SOAAdminPassword. Click on Edit, change the username from weblogic to soaadmin, and set the password to the one you chose for the soaadmin user.
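The scripted equivalent of this credential map edit is the OPSS updateCred command in WLST, roughly like this (again a sketch with placeholder connection details):

connect('weblogic', '<admin password>', 't3://adminhost:7001')
# point the SOAAdminPassword key in the oim credential map to the new soaadmin user
updateCred(map="oim", key="SOAAdminPassword", user="soaadmin", password="<soaadmin password>", desc="SOA admin credentials")
disconnect()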

Next, click on SOA –> soa-infra, then on SOA Infrastructure –> Security –> Application Roles.

Now click on the button next to the role name input box to find all roles.

Select the SOAAdmin role, click on “Add User”, and select the soaadmin user.
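This role membership can also be granted from WLST with the OPSS grantAppRole command, something like this (a sketch; connection details are placeholders):

connect('weblogic', '<admin password>', 't3://adminhost:7001')
# add the soaadmin WebLogic user to the SOAAdmin application role in the soa-infra stripe
grantAppRole(appStripe="soa-infra", appRoleName="SOAAdmin", principalClass="weblogic.security.principal.WLSUserImpl", principalName="soaadmin")
disconnect()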

Click OK and you have completed the first step. Next you have to configure OIM to use this soaadmin user. This can be done in EM too. Click on Identity and Access –> OIM –> oim (11.1.1.3.0), then on Oracle Identity Manager –> System MBean Browser.

Scroll all the way down, select oracle.iam –> Server: oim_server1 –> Application: oim –> XMLConfig –> Config –> XMLConfig.SOAConfig –> SOAConfig, and change the username (SOA config username) from weblogic to soaadmin.

Finally, log into OIM and create a new user. Click on Administration –> Create User and fill in the form.

Click Save, then on Roles, and assign the administrator role to the soaadmin user.

*presto*. Your OIM SOA integration is fully operational again.

Hope this helps.

Written by Jacco H. Landlust

January 10, 2012 at 2:41 pm

RCU-6011


After reading the documentation and running RCU from the command line for some time (mostly from a script I built) I felt confident about RCU. For a new environment I had to run RCU manually, so I set up the command:

$ ./rcu -silent -createRepository -connectString scan.area51.local:1521:rcuservice -dbUser SYS -dbRole sysdba -component MDS -component SOAINFRA -component OIM -component IAU -schemaPrefix DEV -f < /home/oracle/pass

and ran into this error:

Processing command line ....
Repository Creation Utility - Checking Prerequisites
Checking Global Prerequisites
RCU-6011:A valid prefix should be specified. Prefix can contain only alpha-numeric characters. It should not start with a number and should not contain any special characters.
RCU-6091:Component name/schema prefix validation failed.

Now this error surprised me to a great extent. The parameters were all there, so what could cause this? Some shuffling around with the parameters taught me that this command does run:

$ ./rcu -silent -createRepository -connectString scan.area51.local:1521:rcuservice -dbUser SYS -dbRole sysdba -schemaPrefix TLTB1 -component MDS -component SOAINFRA -component OIM -component IAU -f < /home/oracle/pass

Turns out that the order of the parameters matters to RCU: with -schemaPrefix specified before the -component options, the command runs fine.

Hope this helps.

Written by Jacco H. Landlust

October 26, 2011 at 11:34 am

WLS, nodemanager and startup.properties


It’s been a while since I blogged; I have been way too busy working on a couple of production systems. Anyway, while running an SR with Oracle about the nodemanager and some crash recovery issues (a blog post will follow as soon as a solution is found) I ran into yet another documentation “feature”.

The Fusion Middleware documentation contains lots of “practices” (I wouldn’t call them best 🙂 ) which have little to do with the technical functioning of the product and everything to do with personal preferences (i.e. “it worked for me”). Some engineer setting up a Fusion Middleware environment for some customer and promoting his personal notes to best practices is not the type of “Best Practice” or manual I would like to see from Oracle. A population of one (1) is not a valid sample for a “Best Practice”.

As an example, this part of documentation says:

Step 7: Define the Administration Server Address Make sure that a listen address is defined for each Administration Server that will connect to the Node Manager process. If the listen address for an Administration Server is not defined, when Node Manager starts a Managed Server it will direct the Managed Server to contact localhost for its configuration information.

I think this is incorrect, because the nodemanager checks a file when it starts up a managed server. This file can be found at $DOMAIN_HOME/servers/$SERVER_NAME/data/nodemanager/startup.properties. An example of this file from one of my test servers is:

#Server startup properties
#Sat Feb 05 10:41:39 CET 2011
Arguments=-Djava.net.preferIPv4Stack\=true -Dsb.transports.mq.IgnoreReplyToQM\=true -Xmanagement\:ssl\=false,authenticate\=false,port\=7091 -Djavax.management.builder.initial\=weblogic.management.jmx.mbeanserver.WLSMBeanServerBuilder -Djava.security.egd\=file\:/dev/./urandom -Djava.security.jps.config\=/u01/app/oracle/user_projects/domains/base_domain/config/fmwconfig/jps-config.xml -Xms5g -Xmx5g -XXtlaSize\:min\=2k,preferred\=512k -XXcompaction\:percentage\=20
SSLArguments=-Dweblogic.security.SSL.ignoreHostnameVerification\=true -Dweblogic.ReverseDNSAllowed\=false
RestartMax=2
RestartDelaySeconds=0
RestartInterval=3600
AdminURL=http\://192.168.6.1\:7001
AutoRestart=true
AutoKillIfFailed=false

It contains the AdminURL (192.168.6.1 resolves to the AdminServer of my test setup). This properties file is created upon first startup of the managed server. When you boot the managed server this leads to the following startup parameter for the JVM (found in the .out file of the managed server):
-Dweblogic.management.server=http://192.168.6.1:7001

So I don’t agree that the managed server checks localhost if the AdminServer has no listen address. I think that line in the docs should be corrected as a documentation error (at best it’s incomplete).

Once you learn more about startup.properties, you also realize that the statement that you always need to use the startWebLogic.sh script to start the AdminServer after domain creation is false. Yes, you get an error when you start the AdminServer from the nodemanager if it’s the first time you boot that AdminServer, but if you manually create the startup.properties file, and optionally boot.properties (if you run in production mode), you can start the AdminServer from WLST (which helps when you script your deployments).
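A minimal sketch of what that could look like, with all paths, ports and credentials as placeholders based on my test setup, so adjust them to your own domain. First the two hand-crafted files:

# $DOMAIN_HOME/servers/AdminServer/data/nodemanager/startup.properties
AutoRestart=true
AutoKillIfFailed=false

# $DOMAIN_HOME/servers/AdminServer/data/nodemanager/boot.properties (production mode only)
username=weblogic
password=<admin password>

And then starting the AdminServer through the nodemanager from WLST:

nmConnect('nodemanager', '<nm password>', 'adminhost', 5556, 'base_domain', '/u01/app/oracle/user_projects/domains/base_domain', 'ssl')
nmStart('AdminServer')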

Hope this helps.

Written by Jacco H. Landlust

February 7, 2011 at 11:23 pm

Configure a Database Audit Store for System Components


The documentation for configuring a database audit store for system components is wrong. When you populate the audit store password in the secret store, the docs tell you to run this command:

$ORACLE_HOME/jdk/bin/java -classpath
$ORACLE_HOME/modules/oracle.osdt_11.1.1/osdt_cert.jar:
$ORACLE_HOME/modules/oracle.osdt_11.1.1/osdt_core.jar:
$ORACLE_HOME/jdbc/lib/ojdbc5.jar:
$ORACLE_HOME/modules/oracle.iau_11.1.1/fmw_audit.jar:
$ORACLE_HOME/modules/oracle.pki_11.1.1/oraclepki.jar
-Doracle.home=$ORACLE_HOME -Doracle.instance=$ORACLE_INSTANCE
-Dauditloader.jdbcString=jdbc:oracle:thin:@host:port:sid
-Dauditloader.username=username
-Dstore.password=true
-Dauditloader.password=password
oracle.security.audit.ajl.loader.StandaloneAuditLoader

It should be this instead:

$ORACLE_HOME/jdk/bin/java -classpath
      $MW_HOME/oracle_common/modules/oracle.osdt_11.1.1/osdt_cert.jar:
      $MW_HOME/oracle_common/modules/oracle.osdt_11.1.1/osdt_core.jar:
      $ORACLE_HOME/jdbc/lib/ojdbc5.jar:
      $MW_HOME/oracle_common/modules/oracle.iau_11.1.1/fmw_audit.jar:
      $MW_HOME/oracle_common/modules/oracle.pki_11.1.1/oraclepki.jar
      -Doracle.home=$ORACLE_HOME
      -Doracle.instance=$ORACLE_INSTANCE
      -Dauditloader.jdbcString=jdbc:oracle:thin:@host:port:sid
      -Dauditloader.username=username
      -Dstore.password=true
      -Dauditloader.password=password
      oracle.security.audit.ajl.loader.StandaloneAuditLoader

Hope this helps.

Written by Jacco H. Landlust

June 17, 2010 at 3:40 pm