Oracle MVA

Tales from a Jack of all trades

new SOA HA paper


Today I was pointed at a brand new SOA HA paper on OTN (thanks Simon). Although I didn't give any direct input for the paper, it discusses the architecture I designed for my largest customer. I am very happy that Oracle recognizes that customers rely on active/active configurations.


Written by Jacco H. Landlust

August 26, 2013 at 10:09 pm

Lengthy error message while creating OID


For most of the Fusion Middleware domains or system components I set up for clients I use scripts. Today, while creating an OID instance, I decided to type in the commands manually. This resulted in the following:

$ /u01/app/oracle/admin/instances/XXXXX/oid_instance_XXXXXXXX/bin/opmnctl  createcomponent -componentType OID -componentName area51_oid -Db_info rdbms-scan.area51.local:1521:srv4oid -Host oid.area51.local -Port 3060 -Sport 3131

Command requires login to weblogic admin server (oid.area51.local):
  Username: weblogic
  Password:

Creating empty component directories...Done
Provisioning OID files for area51_oid
  OID onCreate....

Enter ODS password:
  Validating OID input parameters
Enter ODSSM password:
oracle.as.config.ProvisionException: Error deleting credential odssm from CSF
        at oracle.iam.management.oid.install.wls.OIDComponentHelper.delCredFromCSF(OIDComponentHelper.java:2373)
        at oracle.iam.management.oid.install.wls.OIDComponentHelper.removeCreds(OIDComponentHelper.java:2345)
        at oracle.iam.management.oid.install.wls.OIDComponent.onRemove(OIDComponent.java:429)
        at oracle.as.config.impl.OracleASComponentBaseImpl.remove(OracleASComponentBaseImpl.java:287)
        at oracle.as.config.impl.OracleASComponentBaseImpl.remove(OracleASComponentBaseImpl.java:174)
        at oracle.as.config.impl.OracleASComponentBaseImpl.remove(OracleASComponentBaseImpl.java:155)
        at oracle.iam.management.oid.install.wls.OIDComponent.onCreate(OIDComponent.java:227)
        at oracle.as.config.impl.OracleASComponentBaseImpl.createComponent(OracleASComponentBaseImpl.java:597)
        at oracle.as.config.impl.OracleASComponentBaseImpl.create(OracleASComponentBaseImpl.java:106)
        at oracle.as.config.provisioner.commands.CreateComponentCommand.execute(CreateComponentCommand.java:40)
        at oracle.as.config.provisioner.InstallerCmdLine.run(InstallerCmdLine.java:146)
        at oracle.as.config.provisioner.InstallerCmdLine.main(InstallerCmdLine.java:46)
Caused by: java.security.PrivilegedActionException: oracle.as.config.ProvisionException:
        at java.security.AccessController.doPrivileged(Native Method)
        at oracle.iam.management.oid.install.wls.OIDComponentHelper.delCredFromCSF(OIDComponentHelper.java:2354)
        ... 11 more
Caused by: oracle.as.config.ProvisionException:
        at oracle.iam.management.oid.install.wls.OIDComponentHelper$3.run(OIDComponentHelper.java:2367)
        ... 13 more
Caused by: oracle.security.jps.config.JpsConfigurationException: /u01/app/oracle/admin/instances/eoid2/oid_instance_oesv9510/config/JPS/jps-config-jse.xml (No such file or directory)
        at oracle.security.jps.internal.config.xml.XmlConfigurationFactory.initDefaultConfiguration(XmlConfigurationFactory.java:439)
        at oracle.security.jps.internal.config.xml.XmlConfigurationFactory.getDefaultConfiguration(XmlConfigurationFactory.java:338)
        at oracle.security.jps.internal.config.xml.XmlConfigurationFactory.getConfiguration(XmlConfigurationFactory.java:160)
        at oracle.security.jps.internal.core.runtime.JpsContextFactoryImpl.(JpsContextFactoryImpl.java:112)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at java.lang.Class.newInstance0(Class.java:355)
        at java.lang.Class.newInstance(Class.java:308)
        at oracle.security.jps.util.JpsUtil.newInstance(JpsUtil.java:190)
        at oracle.security.jps.JpsContextFactory$1.run(JpsContextFactory.java:74)
        at oracle.security.jps.JpsContextFactory$1.run(JpsContextFactory.java:72)
        at java.security.AccessController.doPrivileged(Native Method)
        at oracle.security.jps.JpsContextFactory.getContextFactory(JpsContextFactory.java:71)
        at oracle.iam.management.oid.install.wls.OIDComponentHelper$3.run(OIDComponentHelper.java:2357)
        ... 13 more
Caused by: java.io.FileNotFoundException: /u01/app/oracle/admin/instances/eoid2/oid_instance_oesv9510/config/JPS/jps-config-jse.xml (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.(FileInputStream.java:120)
        at oracle.security.jps.internal.common.util.XmlSchemaValidationUtil.doValidation(XmlSchemaValidationUtil.java:96)
        at oracle.security.jps.internal.config.xml.XmlConfigurationFactory.initDefaultConfiguration(XmlConfigurationFactory.java:418)
        ... 28 more
    Skipping oesv9510_oid unregistration. It is not currently registered with the adminserver.
    Deleting oesv9510_oid directories
    Invoking opmn reload...Done
Command failed: Exception in onCreate()
Details are logged in /u01/app/oracle/admin/instances/eoid2/oid_instance_oesv9510/diagnostics/logs/OPMN/opmn/provision.log

opmnctl createcomponent: failed.

Now this is a rather lengthy error message and it really surprised me. Since I had just associated the security store for the domain with a database, and this error points towards the JPS configuration, I figured something must be wrong with the reassociateSecurityStore WLST command. So I checked logfiles, My Oracle Support and Google before I checked the provision.log.

The provision log showed me these messages:

SEVERE: Command failed:
oracle.as.config.ProvisionException: Exception in onCreate()
        at oracle.iam.management.oid.install.wls.OIDComponent.onCreate(OIDComponent.java:235)
        at oracle.as.config.impl.OracleASComponentBaseImpl.createComponent(OracleASComponentBaseImpl.java:597)
        at oracle.as.config.impl.OracleASComponentBaseImpl.create(OracleASComponentBaseImpl.java:106)
        at oracle.as.config.provisioner.commands.CreateComponentCommand.execute(CreateComponentCommand.java:40)
        at oracle.as.config.provisioner.InstallerCmdLine.run(InstallerCmdLine.java:146)
        at oracle.as.config.provisioner.InstallerCmdLine.main(InstallerCmdLine.java:46)
Caused by: oracle.as.config.ProvisionException: -Namespace parameter missing
        at oracle.iam.management.oid.install.wls.OIDComponentHelper.validateParams(OIDComponentHelper.java:314)
        at oracle.iam.management.oid.install.wls.OIDComponent.onCreate(OIDComponent.java:158)
        ... 5 more

Aha. So that was an enormous error message that tried to tell me "Hey, you missed the -Namespace parameter".
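For reference, this is what the command should have looked like. The -Namespace value is an assumption on my side (it is the base DN for the directory; pick whatever suits your environment), the rest is the command from above:

$ /u01/app/oracle/admin/instances/XXXXX/oid_instance_XXXXXXXX/bin/opmnctl createcomponent \
    -componentType OID \
    -componentName area51_oid \
    -Db_info rdbms-scan.area51.local:1521:srv4oid \
    -Namespace "dc=area51,dc=local" \
    -Host oid.area51.local \
    -Port 3060 \
    -Sport 3131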

P.S. 11.1.1.7 also introduced a new process called oiddispd. The documentation gives this description:
“Beginning with Oracle Internet Directory 11g Release 1 (11.1.1.7.0), the OIDLDAPD process is separated as the OIDDISPD (dispatcher) process and the OIDLDAPD (server) process. On UNIX and Linux systems, however, the ps -ef command will continue to show both of these processes as OIDLDAPD at runtime.”

If you happen to separate software from configuration like I do (binaries owned by oracle, OID processes run as some other user) you need to chown the $ORACLE_HOME/bin/oiddispd binary and chmod it to 4740.
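In shell terms that boils down to this, run as root (oracle:oinstall is an assumption for the software owner and its group, adjust to your setup):

# 4740 sets the setuid bit, so oiddispd runs as the binary owner
# regardless of which user starts the OID processes
chown oracle:oinstall $ORACLE_HOME/bin/oiddispd
chmod 4740 $ORACLE_HOME/bin/oiddispd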

Hope this helps.

Written by Jacco H. Landlust

July 19, 2013 at 4:03 pm

Posted in Uncategorized

Active Data Guard & Fusion Middleware Repositories.


Last year, while working on a POC, Rob den Braber noticed the following in Disaster Recovery for Oracle Elastic Cloud with Oracle Exadata Database Machine on page 13:

Currently, Oracle Fusion Middleware does not support configuring Oracle Active Data Guard for the database repositories that are a part of the Fusion Middleware topology. However, Active Data Guard can be configured if your custom applications are designed to leverage the technology.
Today this came up in a discussion with Simon Haslam, and he had not heard of this support issue before. So it seems that it is not that well known that Active Data Guard and Oracle Fusion Middleware are not a supported combination.

This makes this blog post a reminder of what is already in the documentation (unless someone can comment and tell me that the "currently" in the quote is not so current anymore).
Hope this helps.
UPDATE:
While reading this brand new SOA HA paper I found this quote today:

The Oracle Active Data Guard Option available with Oracle Database 11g Enterprise Edition enables you to open a physical standby database for read-only access for reporting, for simple or complex queries, or sorting while Redo Apply continues to apply changes from the production database. Oracle Fusion Middleware SOA does not support Oracle Active Data Guard because the SOA components execute and update information regarding SOA composite instances in the database as soon as they are started.

Written by Jacco H. Landlust

April 26, 2013 at 4:43 pm

JDBC statement cache setting


Recently I was asked about the statement cache setting in WebLogic by a colleague. The reason he asked about it was that the documentation didn't make any sense to him in combination with advice given to him by an external expert. Here's the doc he was referring to.

The tooltip documentation in WebLogic says:

WebLogic Server can reuse statements in the cache without reloading the statements, which can increase server performance. Each connection in the connection pool has its own cache of statements.

Now this suggests that WebLogic is maintaining some kind of cache, but really it isn't (at least in combination with an Oracle database). All it does is keep a cursor open on the Oracle database and reuse that cursor.

To demonstrate what is happening I created a small example. The example uses a SQLAuthenticator for WebLogic, allowing users in some database table to authenticate in WebLogic. In this presentation you can find the DDL and DML for the tables and a description of how to set up this SQL authenticator.

So, my initial datasource has a statement cache size of 10 (the default). When I restart the database and WebLogic and log in to the WebLogic console, I can find the following open cursors:

select hash_value, cursor_type, sql_text
  from v$open_cursor
 where user_name = 'DEMO'
/

HASH_VALUE CURSOR_TYPE SQL_TEXT
---------- --------- ------------------------------------------------
32127143   OPEN      SELECT 1 FROM DUAL
238104037  OPEN      SELECT G_NAME FROM GROUPMEMBERS WHERE G_MEMBER = :1
3221480394 OPEN      SELECT U_PASSWORD FROM USERS WHERE U_NAME = :1

3 rows selected.

The minute I reconfigure the statement cache size to 0 (= disabled), restart the database and WebLogic, and log in to the console, I find the following open cursors:

HASH_VALUE CURSOR_TYPE SQL_TEXT
---------- --------- ------------------------------------------------
238104037  OPEN      SELECT G_NAME FROM GROUPMEMBERS WHERE G_MEMBER = :1

1 row selected.

This simple test teaches me that a cursor is kept open on the USERS table and on DUAL. The query running on DUAL is actually the test query for the datasource.

This would suggest that the statement cache keeps track of which statement has run over which connection. This test is too small to prove that, though. Also, I wonder what happens in combination with the pin-to-thread setting of the JDBC driver. Food for a new blog post 🙂

So, in short: the statement cache of your datasource has a direct impact on the number of open cursors. This can (and does) improve performance: you don't have to create a new cursor when you reuse a statement. Setting the statement cache size to 0 (disabling the cache) is in my opinion not a best practice; by default every session to your 11.2 database can have 50 open cursors, so you have plenty to spare. You should tune open_cursors and session_cached_cursors on the database according to your application's needs.
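If you want to see how close your sessions actually get to the open_cursors limit before deciding to disable the cache, a quick check like this works (run as a DBA user; v$sesstat, v$statname and v$parameter are standard views):

$ sqlplus -s / as sysdba <<'SQL'
-- highest number of currently open cursors in any one session,
-- next to the configured open_cursors limit
select max(s.value) as max_open_cursors,
       p.value      as open_cursors_limit
  from v$sesstat s, v$statname n, v$parameter p
 where s.statistic# = n.statistic#
   and n.name = 'opened cursors current'
   and p.name = 'open_cursors'
 group by p.value;
SQL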

Hope this helps.

Written by Jacco H. Landlust

April 19, 2013 at 10:55 pm

Posted in RDBMS, Weblogic

Oracle Database Appliance With WebLogic Server (ODA X3-2)


On April 3rd the new ODA X3-2 was released. Sadly I was sick from April 1st on, so I had to miss the launch, and I was so well prepared…  others had the scoop. Anyway, as an administrator who manages more than just databases, I find this release pretty exciting since it brings not only virtualization but also WebLogic to the ODA. This would make the ODA a pretty good appliance for some of my customers, so I did a little investigation into the product.

This blog post is the first result of that investigation. My main focus was the WebLogic part of the box. The questions that arose were answered either by documentation or by product management. Obviously that doesn't guarantee that I understood everything correctly 🙂 I left out references to documentation on purpose; it would be smart for everyone interested in the product to hit the documentation thoroughly.

The most important slide in the slide deck I received about the ODA launch is this:

[slide: ODA X3-2 launch deck]

It makes some pretty big claims that can be verified easily. The three simplified statements call for some clarification. Here's what my questions were, plus the answers I found:

Simplified provisioning / install

Q: Can we test any of this without ODA?

A: No, although I was able to get a virtual ODA running in a VirtualBox environment. This is by no means supported and requires altering the images that Oracle sends you.

Q: So how do you configure this beast?

A: You install an image on the system with Oracle VM that you can download freely from My Oracle Support. This image contains oakcli, which is the CLI used to manage the ODA.

Q: Ah, Oracle VM. Where is the Oracle VM Manager?

A: There is none. oakcli deploys all your VMs.

Q: The ODA is 2 physical machines running OVM; where is the shared storage?

A: The only shared storage available is database shared storage, i.e. DBFS. ARGH… DBFS is already on my todo list! 

Q: So no HA features from OVM?

A: No.

Q: What about the VMs that oakcli deploys, can I build my own templates?

A: No you cannot. Well, technically you can, but it’s not supported.

Q: Wait a minute, no custom templates? What about adding layered products to the VM?

A: No can’t do. Currently only WebLogic is supported.

Q: Well, if I can’t define my own templates, what about my WebLogic domain structure?

A: To my understanding that's fixed too: one Administration Server on its own VM, two managed servers in one cluster (on two VMs) and two Oracle Traffic Director (OTD) VMs.

Q: What is the difference between the ODA-BASE VM and the other domUs?

A: The ODA-BASE VM is the only one that can actually connect to the local disks directly. 

Q: So that means you should preferably run databases in the ODA-BASE VM?

A: Yes.

investment model (a.k.a. licenses)

Q: How does this “pay-as-you-grow” thing work partition wise?

A: It is VM hard partitioning, not Oracle trusted partitioning as on Exalogic. And partitioning only works in multiples of two (2).

Q: So I pay per core, is hyper threading turned on?

A: Yes, but I haven't found out yet what that means for your licenses…

Q: So I can scale up and down?

A: No. Oracle expects you to grow, not to scale down. You can scale down software, not licenses.

Q: What about this separate administration server?

A: License wise that should be treated as a managed server (= pay for it)

Q: And those OTD’s? Do I have to pay for them too?

A: No. OTD is included with WebLogic Enterprise Edition and WebLogic Suite.

maintenance

Q: The JDK is in the middleware home, how does that work with upgrades?

A: Oracle will provide patches as needed.

Q: So how does a domain upgrade work?

A: Currently not supported. So no version maintenance yet.

Q: An EM agent exists on every VM? Which version is that?

A: Currently there is no EM agent installed. Oracle plans to have the agent installed and supported in the next patch releases. This will be a 12c EM agent.

Well. That covers all my findings. Hope it helps you in your investigation of ODA.

Written by Jacco H. Landlust

April 5, 2013 at 9:34 pm

Posted in RDBMS, Weblogic

What happens when you change WebLogic configuration


This post is a little bit of a brain dump. It is based on some experiences while changing configuration for WebLogic. Most of the post is a copy of an email I sent to a friend who was struggling with unexpected configuration changes (changes that got reversed all of a sudden, etc.). All statements are based on investigation; I did not decompile and read WLS source code. If I am wrong with some statement, please let me know by commenting.

First off, let me state this: you are running in production mode, right? Please always do. Fix setDomainEnv.sh and set production mode to true (instead of leaving it blank); that changes some memory settings for your JVM and some other defaults. Most relevant in my opinion: if you are running the Sun JVM, -client is used for development mode and -server for production mode. Also, auto deployment is disabled in production mode.
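The change itself is a one-liner in $DOMAIN_HOME/bin/setDomainEnv.sh; the stock 10.3.x script ships with an empty assignment (check your own script, the exact variable name may differ per version):

# in $DOMAIN_HOME/bin/setDomainEnv.sh, change
PRODUCTION_MODE=""
# into
PRODUCTION_MODE="true"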

If you are running in production mode you have to lock the configuration before editing it. A lock file is created in the DOMAIN_HOME (edit.lock). The file contains the (encrypted) username of the user holding the lock, the time when the lock was acquired, etc. Whenever you make a change to the configuration, you can find the new configuration files in the subdirectory "pending" in your DOMAIN_HOME. All configuration files that are changed by the edit are temporarily stored there. The current configuration is only overwritten when you click activate.

When you activate the configuration, the console application sends a request to all the running servers in the domain to overwrite the configuration in DOMAIN_HOME/config (config.xml, plus possibly some extra configuration files, e.g. an XML file describing a JDBC datasource). Overwriting of the configuration files on the managed servers happens sequentially; the order used for this config push is the order of the servers in your config.xml. I think this is why the admin server is always updated first (it is always the first server listed in config.xml).
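This lock/pending/activate cycle is the same one you walk through when scripting changes with WLST. A minimal sketch (credentials, URL, server name and the attribute I change are placeholders, not a recommendation):

$ java weblogic.WLST <<'EOF'
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()                       # creates edit.lock; edits land in the pending directory
cd('/Servers/ms1')
cmo.setNotes('changed via WLST')  # some trivial change
save()
activate(block='true')            # pushes the new configuration to all running servers
disconnect()
EOF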

Whenever your datasource is not created successfully on any running node of the domain, the complete configuration change is rolled back. One of the reasons an update in the domain can fail is file locking. This can happen if more than one Java server shares a domain home, e.g. the admin server and managed server 1, or because you set up the domain home on shared storage (and therefore all managed servers share the same directory with config files). If for some reason the config change did succeed on the administration server, but not on a running managed server (possibly because you followed some enterprise deployment guide that guided you into setting up a separate administration server), the console will not give an error. You can find errors in the log files though, usually standard out.

If your managed server is running in Managed Server Independence (MSI) mode it will not receive any configuration changes. For the configuration update process a managed server in MSI mode is considered down. Typically a managed server starts running in MSI mode when your administration server was down during startup of the managed server.

When one or more managed servers are not running when the configuration is changed, the configuration is pushed at startup time of the managed server. Typically you start the managed server from the console. The console application sends a request to the node manager running on the machine where you want to start the managed server. If you start a managed server from the console, a file called startup.properties is created in DOMAIN_HOME/servers/SERVER_NAME/data/nodemanager (if it doesn't already exist; otherwise it is overwritten, unless you made manual changes to that file before). One of the entries in that property file is the reference to the administration server.

Whenever you start a managed server from the console, the admin URL setting is checked and/or set. If you start a managed server from node manager directly (e.g. with WLST) the value is considered the "truth". This assumes you have configured StartScriptEnabled=true in nodemanager.properties. This StartScriptEnabled setting, in combination with the name of the managed server, causes node manager to call the shell script startWebLogic.sh with the machine name and admin server URL as parameters (again: the URL is grabbed from the startup.properties file I just mentioned).
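For completeness, these are the relevant entries in nodemanager.properties (StartScriptName defaults to startWebLogic.sh, so strictly speaking only the first line is needed):

StartScriptEnabled=true
StartScriptName=startWebLogic.sh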

As soon as a managed server boots, it calls the admin server URL and checks for configuration changes. If configuration changes exist, the managed server gets the new config.xml (and other changed configuration files) and uses those to start up. If you start multiple managed servers at once, and they share the same disk for configuration (shared storage), this can also cause unsuccessful changes (again: file locking). When using shared storage you should always start one managed server first; then you can start the rest as a group (all together). This is because the first server will change the configuration, and the servers that are started later will find that they already have access to the "latest" configuration files.

This also explains why you shouldn't hack configuration files on your managed servers: they get overwritten when you reboot. But, and this is the tricky part, some files on the managed servers get updated when you start the managed server for the first time. Typical examples are system-jazn-data.xml and cwallet.sso. You can prevent copying those buggers around by sticking them in LDAP (OID) or in a database (reassociateSecurityStore). Which of these options (OID or RDBMS) is valid depends on the middleware product and version you are using.
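For the OID variant, the WLST call looks roughly like this. All values are placeholders, and the accepted parameters differ between OPSS releases, so treat this as a sketch and check the documentation for your version:

$ java weblogic.WLST <<'EOF'
connect('weblogic', 'welcome1', 't3://adminhost:7001')
# move the domain security store from file-based to OID
reassociateSecurityStore(domain='my_domain',
                         admin='cn=orcladmin',
                         password='welcome1',
                         ldapurl='ldap://oid.area51.local:3060',
                         servertype='OID',
                         jpsroot='cn=jpsroot')
disconnect()
EOF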

That's it for my brain dump. Hope this helps. Should I put this in a presentation for some conference?

Written by Jacco H. Landlust

March 18, 2013 at 10:27 pm

Posted in Weblogic

mandatory columns


**UPDATE: Joaquín González pointed out to me that Carlos Sierra has blogged about this too. It was his presentation where the topic came up.**

While attending Hotsos 2013 I learned of a new 11.2.0.3 feature explained to me by Toon Koppelaars. This new feature has to do with adding a mandatory column to an existing table.

Whenever I wanted to add a mandatory column to a table, I added the column, set the value to whatever I preferred and then set the column to NOT NULL:

JACCO@DEMODB > describe test;

Name                                    Null?    Type
--------------------------------------- -------- ---------------------------------
ID                                      NOT NULL NUMBER

JACCO@DEMODB > select * from test;

ID
----------
1
2

2 rows selected.

JACCO@DEMODB > alter table test add string_value varchar2(100);

Table altered.

JACCO@DEMODB > alter table test modify string_value  not null;

alter table test modify string_value  not null

ERROR at line 1:
ORA-02296: cannot enable (JACCO.) - null values found

JACCO@DEMODB > update test set string_value = 'not null';

2 rows updated.

JACCO@DEMODB > commit;

Commit complete.

JACCO@DEMODB > alter table test modify string_value  not null;

Table altered.

So far so good. As of 11.2.0.3 you can also add a column with a default value and make it NOT NULL directly, without updating the table first:

JACCO@DEMODB > set timing on
JACCO@DEMODB > alter table test add string_value2 varchar2(100) default 'my value' not null;

Table altered.

Elapsed: 00:00:00.02

Very, very fast (and the statement stays very fast, even when you have a lot of rows in the table), and it seems a very convenient statement. Especially with very big tables this seems useful.

If I select from the table, I can actually see the value in there:

JACCO@DEMODB > column string_value format a10
JACCO@DEMODB > column string_value2 format a10
JACCO@DEMODB > select * from test;

ID  STRING_VAL STRING_VAL
--- ---------- ----------
1  not null   my value
2  not null   my value

2 rows selected.

Now this all seems very cool and such, but I think there's a real caveat. Let's say you have a somewhat larger table:

JACCO@DEMODB > create table my_objects as
               (select rownum as id, t.object_name from all_objects t);

Table created.

JACCO@DEMODB > alter table my_objects add constraint pk_my_objects primary key (id);

Table altered.

JACCO@DEMODB > select count(*) from my_objects;

COUNT(*)
----------
66883
1 row selected.

Let's say you add a column to this table and make it mandatory:

JACCO@DEMODB > alter table my_objects add null_test_column char(1) default 'N' not null;

Table altered.

JACCO@DEMODB > select null_test_column, count(*) from my_objects group by null_test_column;

N   COUNT(*)
- ----------
N      66883

1 row selected.

Now let's add another row (to create some skew), add an index and gather stats:

JACCO@DEMODB > insert into my_objects (id,object_name,null_test_column) values (66884,'test','Y');

1 row created.

JACCO@DEMODB > commit;

Commit complete.

JACCO@DEMODB > create index idx_null_test on my_objects (null_test_column);

Index created.

JACCO@DEMODB > exec dbms_stats.gather_table_stats(ownname => user, tabname => 'MY_OBJECTS', cascade => true, method_opt => 'for all columns size auto');

PL/SQL procedure successfully completed.

When I analyze the results, the following appears:

JACCO@DEMODB > l
1  select num_distinct, num_nulls, num_buckets, histogram,
2         utl_raw.cast_to_varchar2(low_value) as low_value,
3         utl_raw.cast_to_varchar2(high_value) as high_value
4*   from user_tab_columns where table_name = 'MY_OBJECTS' and column_name = 'NULL_TEST_COLUMN'
JACCO@DEMODB > /

NUM_DISTINCT  NUM_NULLS NUM_BUCKETS HISTOGRAM       LOW_VALUE       HIGH_VALUE
------------ ---------- ----------- --------------- --------------- ---------------
           2          0           1 NONE            N               Y

1 row selected.

The number of distinct values is correct and the number of nulls sounds correct, but given the skew in the data I would have expected a histogram. So something seems wrong. Also, when I run a query on the table, I get some funny results:

JACCO@DEMODB > explain plan for
2  select * from my_objects where null_test_column = 'Y';

Explained.

JACCO@DEMODB > select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------

Plan hash value: 880823944

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            | 33442 |  1045K|   106   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| MY_OBJECTS | 33442 |  1045K|   106   (1)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("NULL_TEST_COLUMN"='Y')

13 rows selected.

There must be something wrong; this is the wrong access path. Why is this query doing a full table scan when it should use the index? The clue is in the predicate information: it's a filter. Now where does this filter come from? Or is dbms_xplan lying to me again?

To find out I ran an 10053 trace:

JACCO@DEMODB > alter session set events '10053 trace name context forever, level 1';

Session altered.

JACCO@DEMODB > select * from my_objects where null_test_column = 'Y';

ID         OBJECT_NAME                    N
---------- ------------------------------ -
66884      test                           Y

1 row selected.

When I read the trace file I noticed this:

----- Plan Table -----
============
Plan Table
============
---------------------------------------+-----------------------------------+
| Id  | Operation          | Name      | Rows  | Bytes | Cost  | Time      |
---------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT   |           |       |       |   106 |           |
| 1   |  TABLE ACCESS FULL | MY_OBJECTS|   33K | 1045K |   106 |  00:00:02 |
---------------------------------------+-----------------------------------+
Predicate Information:
----------------------
1 - filter(NVL("NULL_TEST_COLUMN",'N')='Y')

Where did that NVL come from? It appeared because of the default value that I set on the column without updating the existing rows first. This is rather unexpected, and it can bite you when you don't expect it (especially when you run into this issue 6 months after the column was added to the table). A very unpleasant side effect of a new feature that seems very promising upon first usage. This shows again that you really have to be careful when using new features.
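If you are stuck with such a column, one workaround worth trying is a function-based index on the exact NVL expression the optimizer generates, so the rewritten filter can match the index again. A sketch (index name and credentials are made up, and I have not tested this against this exact demo):

$ sqlplus jacco/secret@demodb <<'SQL'
-- function-based index on the same expression as the rewritten predicate
create index idx_null_test_fbi on my_objects (nvl(null_test_column, 'N'));
-- regather stats so the hidden virtual column behind the index gets statistics
exec dbms_stats.gather_table_stats(ownname => user, tabname => 'MY_OBJECTS', cascade => true, method_opt => 'for all columns size auto')
SQL

Since the predicate after the rewrite is NVL("NULL_TEST_COLUMN",'N')='Y', an index on that same expression is a candidate access path again.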

Hope this helps.

http://carlos-sierra.net/2013/03/12/interesting-case-where-a-full-table-scan-is-chosen-instead-of-an-index

Written by Jacco H. Landlust

March 13, 2013 at 11:15 pm

Posted in Uncategorized