Oracle MVA

Tales from a Jack of all trades

Archive for the ‘Weblogic’ Category

new SOA HA paper

leave a comment »

Today I was pointed at a brand new SOA HA paper on OTN (thanks Simon). Although I didn’t give any direct input for the paper, it describes the architecture I designed for my largest customer. I am very happy that Oracle recognizes that customers rely on active/active configurations.


Written by Jacco H. Landlust

August 26, 2013 at 10:09 pm

JDBC statement cache setting

leave a comment »

Recently a colleague asked me about the statement cache setting in WebLogic. He asked because the documentation wasn’t making any sense to him in combination with the advice he had been given by an external expert. Here’s the doc he was referring to.

The tooltip documentation in WebLogic says:

WebLogic Server can reuse statements in the cache without reloading the statements, which can increase server performance. Each connection in the connection pool has its own cache of statements.

Now this suggests that WebLogic maintains some kind of cache of its own, but (in combination with an Oracle database) it really doesn’t. All it does is keep a cursor open on the Oracle database and reuse that cursor.

To demonstrate what is happening I created a small example. The example uses a SQLAuthenticator for WebLogic, allowing users stored in a database table to authenticate in WebLogic. In this presentation you can find the DDL and DML for the tables and a description of how to set up this SQL authenticator.

So, my initial configuration has a statement cache size of 10 (the default). When I restart the database and WebLogic and log in to the WebLogic console, I can find the following open cursors:

select hash_value, cursor_type, sql_text
  from v$open_cursor
 where user_name = 'DEMO'
/

HASH_VALUE CURSOR_TYPE SQL_TEXT
---------- --------- ------------------------------------------------
32127143   OPEN      SELECT 1 FROM DUAL
238104037  OPEN      SELECT G_NAME FROM GROUPMEMBERS WHERE G_MEMBER = :1
3221480394 OPEN      SELECT U_PASSWORD FROM USERS WHERE U_NAME = :1

3 rows selected.

The minute I reconfigure the statement cache to 0 (= disabled), restart the database and WebLogic, and log in to the console, I find the following open cursors:

HASH_VALUE CURSOR_TYPE SQL_TEXT
---------- --------- ------------------------------------------------
238104037  OPEN      SELECT G_NAME FROM GROUPMEMBERS WHERE G_MEMBER = :1

1 row selected.

This simple test teaches me that, with the cache enabled, a cursor is kept open on the users table and on dual. The query on dual is actually the test query for the datasource.

This suggests that the statement cache does keep track of which statement has run over which connection. This test is too small to prove that, though. I also wonder what happens in combination with the pin-to-thread setting of the JDBC driver. Food for a new blogpost 🙂

So, in short: the statement cache of your datasource has a direct impact on the number of open cursors. This can (and does) improve performance, since you don’t have to create a new cursor when you reuse a statement. Setting the statement cache to 0 (disabling the cache) is in my opinion not a best practice: by default every session to your 11.2 database can have 50 open cursors, so you have plenty to spare. You should tune open_cursors and session_cached_cursors on the database according to your application’s needs.
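If you want to script this instead of clicking through the console, a WLST edit session can change the setting. A minimal sketch, assuming a datasource named myDS, admin credentials weblogic/welcome1 and an admin server on t3://localhost:7001 (all placeholders):

# minimal WLST sketch: change the statement cache size of a datasource
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit()
startEdit()
# the statement cache is configured on the connection pool of the datasource
cd('/JDBCSystemResources/myDS/JDBCResource/myDS/JDBCConnectionPoolParams/myDS')
print 'current statement cache size:', cmo.getStatementCacheSize()
cmo.setStatementCacheSize(0)   # 0 disables the cache, 10 is the default
activate()
disconnect()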

Hope this helps.

Written by Jacco H. Landlust

April 19, 2013 at 10:55 pm

Posted in RDBMS, Weblogic

Oracle Database Appliance With WebLogic Server (ODA X3-2)

with 5 comments

On April 3rd the new ODA X3-2 was released. Sadly I was sick from April 1st on, so I had to miss the launch, and I was so well prepared… others had the scoop. Anyway, as an administrator who manages more than just databases, this release is pretty exciting since it brings not only virtualization but also WebLogic to the ODA. This would make the ODA a pretty good appliance for some of my customers, so I did a little investigation into the product.

This blogpost is the first result of that investigation. My main focus was the WebLogic part of the box. The questions that came up were answered either by the documentation or by product management. Obviously that doesn’t guarantee that I understood everything correctly 🙂 I left out references to the documentation on purpose; anyone interested in the product would be smart to read the documentation thoroughly.

The most important slide in the slide deck I received about the ODA launch is this:

oda-slide

It makes some pretty smart claims that can be verified easily. The three simplified statements call for some clarification. Here are my questions, plus the answers I found:

Simplified provisioning / install

Q: Can we test any of this without ODA?

A: No, although I was able to get a virtual ODA running in a VirtualBox environment. This is by no means supported and requires altering the images that Oracle sends you.

Q: So how do you configure this beast?

A: You install an image on the system with Oracle VM that you can download freely from My Oracle Support. This image contains oakcli which is the cli used to manage the ODA.

Q: Ah, Oracle VM. Where is the Oracle VM Manager?

A: There is none. oakcli deploys all your VMs.

Q: ODA is 2 physical machines running OVM, where is the shared storage?

A: The only shared storage available is database shared storage, i.e. DBFS. ARGH… DBFS is already on my todo list! 

Q: So no HA features from OVM?

A: No.

Q: What about the VM’s that oakcli deploys, can I build my own templates?

A: No you cannot. Well, technically you can, but it’s not supported.

Q: Wait a minute, no custom templates? What about adding layered products to the VM?

A: No can do. Currently only WebLogic is supported.

Q: Well, if I can’t define my own templates, what about my WebLogic domain structure?

A: To my understanding that’s fixed too: one Administration Server on its own VM, two managed servers in one cluster (on two VMs) and two Oracle Traffic Director (OTD) VMs.

Q: What is the difference between that ODA-BASE VM and the other dom-u’s?

A: The ODA-BASE VM is the only one that can actually connect to the local disks directly. 

Q: So that means you should preferably run databases in the ODA-BASE VM?

A: Yes.

investment model (a.k.a. licenses)

Q: How does this “pay-as-you-grow” thing work partition wise?

A: It is hard partitioning at the VM level, not Oracle trusted partitioning as on Exalogic. And partitioning only works in multiples of two (2).

Q: So I pay per core; is hyper-threading turned on?

A: Yes, but I haven’t yet found out what that means for your licenses…

Q: So I can scale up and down?

A: No. Oracle expects you to grow, not to scale down. You can scale down software, not licenses.

Q: What about this separate administration server?

A: License-wise, that should be treated as a managed server (= you pay for it).

Q: And those OTD’s? Do I have to pay for them too?

A: No. OTD is included with WebLogic Enterprise Edition and WebLogic Suite.

maintenance

Q: The JDK is in the middleware home, how does that work with upgrades?

A: Oracle will provide patches as needed.

Q: So how does a domain upgrade work?

A: Currently not supported. So no maintenance, version-wise.

Q: An EM agent exists on every VM? Which version is that?

A: Currently there is no EM agent installed. Oracle plans to have the agent installed and supported in the next patch releases. This will be a 12c EM agent.

Well. That covers all my findings. Hope it helps you in your investigation of ODA.

Written by Jacco H. Landlust

April 5, 2013 at 9:34 pm

Posted in RDBMS, Weblogic

What happens when you change WebLogic configuration

with 4 comments

This post is a little bit of a brain dump. It is based on some experiences while changing WebLogic configuration. Most of the post is a copy of an email I sent to a friend who was struggling with unexpected configuration changes (changes that got reversed all of a sudden, etc.). All statements are based on investigation; I did not decompile and read the WLS source code. If I am wrong about some statement, please let me know by commenting.

First off, let me state this: you are running in production mode, right? Please always do. Fix setDomainEnv.sh and set production_mode=true (instead of leaving production_mode blank); that changes some memory settings for your JVM and changes some other settings. Most relevant in my opinion: if you are running the Sun JVM, -client is used in development mode and -server in production mode. Also, auto deployment is disabled.
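You can also check and flip this flag with WLST instead of editing setDomainEnv.sh. A minimal sketch (credentials and URL are placeholders, and the new value only takes effect after a domain restart):

# minimal WLST sketch: check and enable production mode on the domain
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit()
startEdit()
cd('/')
print 'production mode enabled:', cmo.isProductionModeEnabled()
cmo.setProductionModeEnabled(true)
activate()
disconnect()
# note: the change only takes effect after restarting the domain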

If you are running in production mode you have to lock the configuration before changing it. A lock file is created in the DOMAIN_HOME (edit.lock). The file contains the (encrypted) username of the user holding the lock, the time the lock was acquired, etc. Whenever you make a change to the configuration, the new configuration files can be found in the subdirectory “pending” in your DOMAIN_HOME. All configuration files that are changed by the edit are temporarily stored there. The current configuration is only overwritten when you click activate.

When you activate the configuration, the console application sends a request to all the running servers in the domain to overwrite the configuration in DOMAIN_HOME/config (config.xml, plus possibly some extra configuration files, e.g. an XML file describing a JDBC datasource). Overwriting the configuration files on the managed servers happens sequentially; the order used for this config push is the order of the servers in your config.xml. I think this is why the admin server is always updated first (it is always the first server listed in config.xml).

Whenever your datasource is not created successfully on one of the running nodes of the domain, the complete configuration change is rolled back. One of the reasons an update in the domain can fail is file locking. This can happen if more than one Java server shares a domain home, e.g. the admin server and managed server 1, or because you set up the domain home on shared storage (so that all managed servers share the same directory of config files). If for some reason the config change did succeed on the administration server but not on a running managed server (possibly because you followed some enterprise deployment guide that had you set up a separate administration server), the console will not give an error. You can find errors in the log files though, usually in standard out.
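The same flow is visible from WLST: startEdit() takes the lock, showChanges() lists what is sitting in the pending directory, and a failed activation can be abandoned. A rough sketch (assuming you are already connected, as in the previous snippets):

edit()
startEdit()
# ... make your configuration change here ...
showChanges()                # lists the changes currently stored in pending
try:
    activate()               # pushes the config to all running servers
except Exception, e:
    print 'activation failed:', e
    cancelEdit('y')          # releases the edit lock and discards the pending changes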

If your managed server is running in managed server independent mode (MSI) it will not receive any configuration changes. For the configuration update process the managed server in MSI mode is considered down. Typically a managed server starts running in MSI mode when your administration server was down during startup of the managed server.

When one or more managed servers are not running when the configuration is changed, the configuration is pushed at startup time of the managed server. Typically you start the managed server from the console. The console application sends a request to the node manager running on the machine where you want to start the managed server. If you start a managed server from the console, a file called startup.properties is created in DOMAIN_HOME/servers/SERVER_NAME/data/nodemanager (if it doesn’t already exist; otherwise it is overwritten, unless you made manual changes to that file before). One of the entries in that properties file is the reference to the administration server.

Whenever you start a managed server from the console, the admin URL setting is checked and/or set. If you start a managed server from node manager directly (e.g. with WLST), the stored value is considered the “truth”. Obviously you have configured startScriptEnabled=true in nodemanager.properties. This startScriptEnabled setting, in combination with the name of the managed server, causes node manager to call the shell script startWebLogic.sh with the machine name and admin server URL as parameters (again: the URL is grabbed from the startup.properties file I just mentioned).
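For completeness, starting a managed server through node manager directly from WLST looks roughly like this (node manager credentials, host, port, domain name and server name are all placeholders):

# minimal WLST sketch: start a managed server via node manager directly
nmConnect('nodemanager', 'Welkom01', 'wls1.area51.local', 5556, 'mydomain')
nmStart('ManagedServer_1')        # boots using startup.properties, including the stored admin URL
nmServerStatus('ManagedServer_1')
nmDisconnect()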

As soon as a managed server boots, it calls the admin server URL and checks for configuration changes. If configuration changes exist, the managed server gets the new config.xml (and the other changed configuration files) and uses those to start up. If you start multiple managed servers at once, and they share the same disk for configuration (shared storage), this can also cause unsuccessful changes (again: file locking). When using shared storage you should always start one managed server first, then you can start the rest as a group (all together). This is because the first one updates the configuration; the servers started later will find that they already have access to the “latest” configuration files.

This also explains why you shouldn’t hack configuration files on your managed servers: they get overwritten when you reboot. But, and this is the tricky part, some files on the managed servers get updated when you start the managed server for the first time. Typical examples are system-jazn-data.xml and cwallet.sso. You can avoid copying those buggers around by sticking them in LDAP (OID) or in the database (reassociateSecurityStore). Which of these options (OID or RDBMS) is valid depends on the middleware product and version you are using.
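For reference, reassociating the security store to a database with WLST looks roughly like the sketch below. Treat it purely as an illustration: the parameter names and supported server types differ per Fusion Middleware version, so check the OPSS documentation for your release.

# illustration only: parameter names/values are assumptions, verify against your FMW version
connect('weblogic', 'welcome1', 't3://adminhost:7001')
reassociateSecurityStore(domain='mydomain', servertype='DB_ORACLE',
                         datasourcename='jdbc/OpssDS', jpsroot='cn=jpsRoot')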

So much for my brain dump. Hope this helps. Should I put this in a presentation for some conference?

Written by Jacco H. Landlust

March 18, 2013 at 10:27 pm

Posted in Weblogic

setting up EDG style HA administration server with corosync and pacemaker

with 5 comments

Most of the Enterprise Deployment Guides (EDGs) for Fusion Middleware products cover setting up WebLogic’s Administration Server for HA. All of these EDGs describe a manual failover. None of my clients find that a satisfactory solution. Usually I advise my clients to use Oracle’s clustering software to automate failover (Grid Infrastructure / Cluster Ready Services). This works fine if your local DBA is managing the WebLogic layer, although the overhead is large for something “simple” like an HA administration server. It also requires the failover node to run in the same subnet (network-wise) as the primary node. All this led me to investigate other options. One of the viable options I POC’d is Linux clustering with CoroSync and PaceMaker. I considered CoroSync and PaceMaker because this seems to be the RedHat standard for clustering nowadays.

This example is configured on OEL 5.8. The example is not production ready; please don’t install this in production without thorough testing (and some more of the usual disclaimers 🙂 ). I will assume basic knowledge of clustering and Linux for this post; not all details will be covered in great depth.

First you need to understand a little bit about my topology. I have a small Linux server running a software load balancer (Oracle Traffic Director) which also functions as an NFS server. When configuring this for an enterprise, these components will most likely be provided for you (F5s or Cisco with some NetApp or the like). In this specific configuration the VIP for the administration server runs on the load balancer. The NFS server on the load balancer machine provides shared storage that hosts the domain home. This NFS share is mounted on both servers that will run my administration server.

Back to the cluster. To install CoroSync and PaceMaker, first install the EPEL repository for packages that don’t exist in vanilla Redhat/CentOS and add the cluster labs repository.

rpm -ivh http://mirror.iprimus.com.au/epel/5/x86_64/epel-release-5-4.noarch.rpm
wget -O /etc/yum.repos.d/pacemaker.repo http://clusterlabs.org/rpm/epel-5/clusterlabs.repo

Then install Pacemaker 1.0+ and CoroSync 1.2+ via yum

yum install -y pacemaker.$(uname -i) corosync.$(uname -i)

When all software and dependencies are installed, you can configure CoroSync. My configuration file is rather straightforward. I run a cluster over the 10.0.0.0 network:

cat /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
	version: 2
	secauth: on
	threads: 0
	interface {
		ringnumber: 0
		bindnetaddr: 10.0.0.0
		mcastaddr: 226.94.1.1
		mcastport: 5405
	}
}

logging {
	fileline: off
	to_stderr: no
	to_logfile: yes
	to_syslog: yes
	logfile: /var/log/cluster/corosync.log
	debug: off
	timestamp: on
	logger_subsys {
		subsys: AMF
		debug: off
	}
}

amf {
	mode: disabled
}

quorum {
           provider: corosync_votequorum
           expected_votes: 2
}

aisexec {
        # Run as root - this is necessary to be able to manage resources with Pacemaker
        user:        root
        group:       root
}

service {
    # Load the Pacemaker Cluster Resource Manager
    name: pacemaker
    ver: 0
}

Now, you can start CoroSync and check the configuration of the cluster.

service corosync start
corosync-cfgtool -s
Printing ring status.
Local node ID 335544330
RING ID 0
	id	= 10.0.0.20
	status	= ring 0 active with no faults


crm status
============
Last updated: Sun Mar  3 21:30:42 2013
Stack: openais
Current DC: wls1.area51.local - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
0 Resources configured.
============

Online: [ wls2.area51.local wls1.area51.local ]

For production usage you should configure STONITH, which is beyond the scope of this example. So for testing purposes I disabled STONITH:

crm configure property stonith-enabled=false
crm configure property no-quorum-policy=ignore

Also, I configure resources not to fail back when the node that was running the resource comes back online:

crm configure rsc_defaults resource-stickiness=100

Now your cluster is ready, although it doesn’t run WebLogic yet. There is no WebLogic cluster resource agent, so I wrote one myself. To keep it separated from other cluster resources I set up my own OCF resource tree (just mkdir and you are done). An OCF resource agent requires certain functions to be in the script; the OCF Resource Agent Developer’s Guide can help you with that one.

Here’s my example WebLogic cluster resource:

cat /usr/lib/ocf/resource.d/area51/weblogic 
#!/bin/bash
#
# Description:  Manages a WebLogic Administration Server as an OCF High-Availability
#               resource under Heartbeat/LinuxHA control
# Author:	Jacco H. Landlust <jacco.landlust@idba.nl>
# 		Inspired on the heartbeat/tomcat OCF resource
# Version:	1.0
#

OCF_ROOT=/usr/lib/ocf

. ${OCF_ROOT}/resource.d/heartbeat/.ocf-shellfuncs
#RESOURCE_STATUSURL="http://127.0.0.1:7001/console"

usage()
{
	echo "$0 [start|stop|status|monitor|migrate_to|migrate_from]"
	return ${OCF_NOT_RUNNING}
}

isrunning_weblogic()
{
        if ! have_binary wget; then
		ocf_log err "Monitoring not supported by ${OCF_RESOURCE_INSTANCE}"
		ocf_log info "Please make sure that wget is available"
		return ${OCF_ERR_CONFIGURED}
        fi
        wget -O /dev/null ${RESOURCE_STATUSURL} >/dev/null 2>&1
}

isalive_weblogic()
{
        if ! have_binary pgrep; then
                ocf_log err "Monitoring not supported by ${OCF_RESOURCE_INSTANCE}"
                ocf_log info "Please make sure that pgrep is available"
                return ${OCF_ERR_CONFIGURED}
        fi
        pgrep -f weblogic.Name > /dev/null
}

monitor_weblogic()
{
        isalive_weblogic || return ${OCF_NOT_RUNNING}
        isrunning_weblogic || return ${OCF_NOT_RUNNING}
        return ${OCF_SUCCESS}
}

start_weblogic()
{
	if [ -f ${DOMAIN_HOME}/servers/AdminServer/logs/AdminServer.out ]; then
		su - ${WEBLOGIC_USER} --command "mv ${DOMAIN_HOME}/servers/AdminServer/logs/AdminServer.out ${DOMAIN_HOME}/servers/AdminServer/logs/AdminServer.out.`date +%Y-%m-%d-%H%M`"
	fi
	monitor_weblogic
	if [ $? = ${OCF_NOT_RUNNING} ]; then
		ocf_log debug "start_weblogic"
		su - ${WEBLOGIC_USER} --command "nohup ${DOMAIN_HOME}/bin/startWebLogic.sh > ${DOMAIN_HOME}/servers/AdminServer/logs/AdminServer.out 2>&1 &"
		sleep 60
		touch ${OCF_RESKEY_state}
	fi
	monitor_weblogic
	if [ $? =  ${OCF_SUCCESS} ]; then
		return ${OCF_SUCCESS}
	fi
}

stop_weblogic()
{
#	monitor_weblogic
#	if [ $? =  $OCF_SUCCESS ]; then
		ocf_log debug "stop_weblogic"
		pkill -KILL -f startWebLogic.sh
		pkill -KILL -f weblogic.Name
		rm ${OCF_RESKEY_state}
#	fi
	return $OCF_SUCCESS
}

meta_data() {
        cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="weblogic" version="0.9">
	<version>1.0</version>
	<longdesc lang="en"> This is a WebLogic Resource Agent </longdesc>
	<shortdesc lang="en">WebLogic resource agent</shortdesc>
        
	<parameters>
		<parameter name="state" unique="1">
			<longdesc lang="en">Location to store the resource state in.</longdesc>
			<shortdesc lang="en">State file</shortdesc>
			<content type="string" default="${HA_VARRUN}${OCF_RESOURCE_INSTANCE}.state" />
		</parameter>
		<parameter name="statusurl" unique="1">
			<longdesc lang="en">URL for state confirmation.</longdesc>
			<shortdesc>URL for state confirmation</shortdesc>
			<content type="string" default="" />
		</parameter>
		<parameter name="domain_home" unique="1">
			<longdesc lang="en">PATH to the domain_home. Should be a full path</longdesc>
			<shortdesc lang="en">PATH to the domain.</shortdesc>
			<content type="string" default="" required="1" />
		</parameter>
		<parameter name="weblogic_user" unique="1">
			<longdesc lang="en">The user that starts WebLogic</longdesc>
			<shortdesc lang="en">The user that starts WebLogic</shortdesc>
			<content type="string" default="oracle" />
		</parameter>
	</parameters>   
        
	<actions>
		<action name="start"        timeout="90" />
		<action name="stop"         timeout="90" />
		<action name="monitor"      timeout="20" interval="10" depth="0" start-delay="0" />
		<action name="migrate_to"   timeout="90" />
		<action name="migrate_from" timeout="90" />
		<action name="meta-data"    timeout="5" />
	</actions>      
</resource-agent>
END
}

# Make the resource globally unique
: ${OCF_RESKEY_CRM_meta_interval=0}
: ${OCF_RESKEY_CRM_meta_globally_unique:="true"}

if [ "x${OCF_RESKEY_state}" = "x" ]; then
        if [ ${OCF_RESKEY_CRM_meta_globally_unique} = "false" ]; then
                state="${HA_VARRUN}${OCF_RESOURCE_INSTANCE}.state"
                
                # Strip off the trailing clone marker
                OCF_RESKEY_state=`echo $state | sed s/:[0-9][0-9]*\.state/.state/`
        else
                OCF_RESKEY_state="${HA_VARRUN}${OCF_RESOURCE_INSTANCE}.state"
        fi
fi

# Set some defaults
RESOURCE_STATUSURL="${OCF_RESKEY_statusurl-http://127.0.0.1:7001/console}"
DOMAIN_HOME="${OCF_RESKEY_domain_home}"
WEBLOGIC_USER="${OCF_RESKEY_weblogic_user-oracle}"

# MAIN
case $__OCF_ACTION in
	meta-data)      meta_data
       	         exit ${OCF_SUCCESS}
	                ;;
	start)          start_weblogic;;
	stop)           stop_weblogic;;
	status)		monitor_weblogic;;
	monitor)        monitor_weblogic;;
	migrate_to)     ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} to ${OCF_RESKEY_CRM_meta_migrate_to}."
			stop_weblogic
			;;
	migrate_from)   ocf_log info "Migrating ${OCF_RESOURCE_INSTANCE} to ${OCF_RESKEY_CRM_meta_migrated_from}."
			start_weblogic
			;;
	usage|help)     usage
			exit ${OCF_SUCCESS}
	       	         ;;
	*)		usage
			exit ${OCF_ERR_UNIMPLEMENTED}
			;;
esac
rc=$?

# Finish the script
ocf_log debug "${OCF_RESOURCE_INSTANCE} $__OCF_ACTION : $rc"

exit $rc

Please mind that you would have to copy the script to both nodes of the cluster.

Next up is configuring the WebLogic resource in the cluster. In my example I mounted the NFS share with the domain homes on /domains and the domain is called ha-adminserver. My WebLogic runs as oracle and the administration server listens on all addresses at port 7001. Therefore the parameter weblogic_user is left at its default (oracle) and the statusurl used to check whether the administration server is running is left at its default too (http://127.0.0.1:7001/console). The domain home is passed to the cluster resource:

crm configure primitive weblogic ocf:area51:weblogic params domain_home="/domains/ha-adminserver" op start interval="0" timeout="90s" op monitor interval="30s"

When the resource is added, the cluster starts it automatically. Please keep in mind this takes some time, so you might not see results instantly. Also, the script has a sleep configured; if your administration server takes longer to boot you might want to fiddle with the values. For an example like the one in this blogpost it works.

Next you can check the status of your resource:

crm status
============
Last updated: Sun Mar  3 21:35:47 2013
Stack: openais
Current DC: wls1.area51.local - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
1 Resources configured.
============

Online: [ wls2.area51.local wls1.area51.local ]

 weblogic	(ocf::area51:weblogic):	Started wls2.area51.local

To check the configuration of your resource, run the configure show command

crm configure show
node wls1.area51.local
node wls2.area51.local
primitive weblogic ocf:area51:weblogic \
	params domain_home="/domains/ha-adminserver" \
	op start interval="0" timeout="90s" \
	op monitor interval="30s" \
	meta target-role="Started"
property $id="cib-bootstrap-options" \
	dc-version="1.0.12-unknown" \
	cluster-infrastructure="openais" \
	expected-quorum-votes="2" \
	stonith-enabled="false" \
	no-quorum-policy="ignore" \
	last-lrm-refresh="1362338797"
rsc_defaults $id="rsc-options" \
	resource-stickiness="100"

You can test failover by stopping CoroSync, killing the Linux node, etc.

Other useful commands:

# start resource
crm resource start weblogic

# stop resource
crm resource stop weblogic

# Cleanup errors
crm_resource --resource weblogic -C

# Move resource to another node. Mind you: that pins the resource to that node and takes control away from the cluster until you give it back below
crm resource move weblogic wls1.area51.local
# give authority over resource back to cluster
crm resource unmove weblogic

# delete cluster resource
crm configure delete weblogic

If you want your WebLogic administration server resource to be bound to a VIP, just google for setting up an HA Apache on PaceMaker. There is plenty of information about that on the web, e.g. this site, which helped me set up the cluster too.
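For illustration, if you would rather have the cluster manage the VIP itself instead of the load balancer, it could look roughly like this (the address, netmask and resource names are placeholders; verify against your own setup):

# add a VIP resource (placeholder address/netmask) and keep it on the same node as weblogic
crm configure primitive adminvip ocf:heartbeat:IPaddr2 params ip="10.0.0.100" cidr_netmask="24" op monitor interval="10s"
crm configure colocation weblogic-with-vip inf: weblogic adminvip
crm configure order vip-before-weblogic inf: adminvip weblogic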

Well, I hope this helps for anyone that is trying to setup an HA administration server.

Written by Jacco H. Landlust

March 3, 2013 at 11:39 pm

storeUserConfig caveats

with 5 comments

While creating a small startup script for nodemanager on my test setup (to prevent me from having to start nodemanager manually all the time) I figured storing the clear text password in the script was a bad practice. Therefore I wanted to use the storeUserConfig command in WLST to store the username and password in a user configuration file and an associated key file.

You would expect that to be not too exciting, but some caveats arose. First you have to connect to node manager:

wls:/offline> nmConnect('nodemanager','Welkom01','wls1.area51.local',5556,'hs_test');
Successfully Connected to Node Manager.

Then you can store the configuration:

wls:/nm/hs_test> storeUserConfig(userConfigFile='/u01/app/oracle/admin/nodemanager/nodemanager.config', userKeyFile='/u01/app/oracle/admin/nodemanager/nodemanager.keyfile');
Currently connected to Node Manager to monitor the domain hs_test.
You will need to be connected to a running WLS or Node Manager to execute this command

mmmm… that was unexpected. It seems you have to pass an extra argument to indicate that you are only connected to node manager:

storeUserConfig(userConfigFile='/u01/app/oracle/admin/nodemanager/nodemanager.config', userKeyFile='/u01/app/oracle/admin/nodemanager/nodemanager.keyfile',nm='true');
Currently connected to Node Manager to monitor the domain hs_test.
Creating the key file can reduce the security of your system if it is not kept in a secured location after it is created. Do you want to create the key file? y or ny
The username and password that were used for this WebLogic NodeManager connection are stored in /u01/app/oracle/admin/nodemanager/nodemanager.config and /u01/app/oracle/admin/nodemanager/nodemanager.keyfile .

Now that is all cool; next you should be able to connect to node manager without specifying a username and password:

wls:/offline> nmConnect(userConfigFile='/u01/app/oracle/admin/nodemanager/nodemanager.config', userKeyFile='/u01/app/oracle/admin/nodemanager/nodemanager.keyfile', host='wls1.area51.local', port=5556, domainName='hs_test');
Connecting to Node Manager ...
Traceback (innermost last):
File "", line 1, in ?
File "", line 123, in nmConnect
File "", line 648, in raiseWLSTException
WLSTException: Error occured while performing nmConnect : Cannot connect to Node Manager. : Access to domain 'hs_test' for user 'weblogic' denied
Use dumpStack() to view the full stacktrace

Ehhrmm… that sort of sucks. It seems that the username is suddenly defaulted to weblogic, which implies that you cannot use the stored configuration if your username is not weblogic (like in my case). Bummer… (I just hope I am wrong)

Hope this helps.

And thanks to Peter van Nes for teaching me how to use the sourcecode setting in WordPress.

Written by Jacco H. Landlust

January 9, 2013 at 1:16 pm

Posted in security, Weblogic

BEA-000362, incomplete error

with 4 comments

While setting up Service Migration in a small test setup on my laptop, I ran into this error:

<BEA-000362> <Server failed. Reason: There is either a problem contacting the database or there is another instance of ManagedServer_2 running>

It took me some time to figure out what the exact problem was. If the message had been complete, like this, troubleshooting would have been easier:

<BEA-000362> <Server failed. Reason: There is either a problem contacting the database or there is another instance of ManagedServer_2 running or the leasing table is missing from the database>

You can find the DDL for the default leasing table, called ACTIVE, in a file called leasing.ddl, which is located at $MW_HOME/wlserver_10.3/db/oracle/920. If you happen to have changed the name of the leasing table, you obviously have to modify the leasing.ddl script accordingly.
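For reference, the table that leasing.ddl creates looks roughly like the sketch below (quoted from memory, so verify against the actual leasing.ddl that ships with your WLS version):

-- sketch of the default leasing table; verify against $MW_HOME/wlserver_10.3/db/oracle/920/leasing.ddl
CREATE TABLE ACTIVE (
  SERVER      VARCHAR2(255) NOT NULL,
  INSTANCE    VARCHAR2(255) NOT NULL,
  DOMAINNAME  VARCHAR2(255) NOT NULL,
  CLUSTERNAME VARCHAR2(255) NOT NULL,
  TIMEOUT     DATE,
  PRIMARY KEY (SERVER, DOMAINNAME, CLUSTERNAME)
);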

Hope this helps.

Written by Jacco H. Landlust

January 9, 2013 at 1:20 am