Our development Oracle servers – and all our other servers – are Windows 2003 virtual machines running on VMware Fusion on a MacPro3.1 (and I’m well aware that this is not an Oracle-supported configuration). Last week our Oracle database became corrupt, and we discovered that all copies and backups had some degree of corruption that could not be fixed. During the reinstallation of Oracle, VMware corrupted the virtual machine several times. Finally, a hardware failure wiped out the first successful rebuild. It took the whole of last week to reinstate the development server.
The only positive aspect was that in the process of recovering from this mess I learnt quite a lot about Oracle, VMware, and the Mac; I was immensely helped by the huge amount of information available on the Internet – especially other people’s blogs.
- Running Oracle 10g Standard Edition on Windows Server 2003 on VMware Fusion will always risk block corruption – even when DB_BLOCK_CHECKING is set to full and even when the disk space for the Virtual Machine is pre-allocated.
- Oracle 10g Enterprise Edition with DB_BLOCK_CHECKING set to full does not develop block corruption.
- Make regular backups with RMAN even when you are using VMware snapshots as your backup.
- In VMware Fusion, the greater the number of snapshots, the greater the risk of fatal corruption.
VMware Fusion 2.0.4 (159196)
Microsoft Windows Server 2003 Standard Edition SP 2
Why run Oracle on virtualised Windows Server 2003/VMware Fusion/MacPro?
We have historical and practical reasons for running Oracle on top of VMware on a MacPro. We used to host the systems we developed for customers ourselves, and when we got out of the hosting business we still owned a computer room with several redundant rack-mount Windows servers. When we started developing in Oracle/APEX we just installed Oracle on one of our old servers. That worked well for 3 years; but then we decided to move to a new office that did not have a separate computer room. We didn’t want to share our office with a server that sounds like a vacuum cleaner – and besides, the Oracle server was by then 6 years old. The quietest server-class machine turned out to be a MacPro, which led us to experiment with Windows 2003/Oracle running on VMware Fusion. We ended up with a whole rack-full of Windows servers virtualised inside a single MacPro3.1 with two 2.8 GHz quad-core processors and 10 GB of memory. Any 4 of the servers would run simultaneously at an acceptable speed, and our virtual development Oracle server was faster than the metal original (and silent). We moved office and happily developed on this setup for the next 9 months.
Oracle error ORA-08103: object no longer exists
This error suddenly appeared every time a function or procedure from a particular long-established package was called. Trying to recompile the package produced the same error. ORA-08103 is the error you expect to see when another process has deleted the object you are in the process of accessing. In our case, I was the sole user and that could not be the problem.
Identifying Block Corruption
It didn’t take long with Google to find the more sinister cause – Oracle block corruption – eg see this Oracle forum posting which also describes how to use RMAN from the command line to populate the system view V$DATABASE_BLOCK_CORRUPTION:
RMAN TARGET=SYS
(the system prompts for the SYS password)
RMAN> SHUTDOWN IMMEDIATE
RMAN> STARTUP MOUNT
(mount the database but don't open it - only "fixed" tables/views are available)
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
RMAN responds with something like:
input datafile fno=00002 name=ORACLE_HOME\UNDOTBS01.DBF
input datafile fno=00003 name=ORACLE_HOME\SYSAUX01.DBF
input datafile fno=00001 name=ORACLE_HOME\SYSTEM01.DBF
with ORACLE_HOME replaced by its actual physical path
Then when logged in as a user with sysdba privileges (no need to open the database because V$DATABASE_BLOCK_CORRUPTION is a “fixed” view):
SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;
Sure enough, I got a display of more than 20 corrupt blocks:
FILE#  BLOCK#  BLOCKS  CORRUPTION_CHANGE#  CORRUPTION_TYPE
13     245811  1       0                   FRACTURED
13     245883  1       0                   FRACTURED
13     650238  1       0                   FRACTURED
13     906713  7       0                   ALL ZERO
The number in the FILE# column relates to one of the numbered data files that RMAN displayed (the number following fno=).
There are usually also alert log entries detailing the corrupt blocks, and similar information is given by the command line tool DBVERIFY:
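The DBVERIFY call itself is a one-liner; a typical invocation (a sketch – the BLOCKSIZE value is an assumption and must match your database block size, commonly 8192) looks like this:

```
dbv FILE=FULL_PATH_TO_THE_DATAFILE BLOCKSIZE=8192
```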
where FULL_PATH_TO_THE_DATAFILE represents the file name as displayed by RMAN.
Types of Block Corruption
CORRUPTION_TYPE FRACTURED and ALL ZERO are explained in this way by Oracle in the documentation for V$DATABASE_BLOCK_CORRUPTION:
FRACTURED – Block header looks reasonable, but the front and back of the block are different versions.
ALL ZERO – Block header on disk contained only zeros. The block may be valid if it was never filled and if it is in an Oracle7 file. The buffer will be reformatted to the Oracle8 standard for an empty block.
I can’t find any detailed explanation of why these problems may occur – perhaps they mean that the block was being read or written to by some other process at the same time as Oracle was writing to it?
Recovering from Block Corruption
Block corruption should be possible to recover from. Oracle 10g RMAN includes the BLOCKRECOVER command; this should recover all blocks that were marked as corrupt in V$DATABASE_BLOCK_CORRUPTION:
RMAN> BLOCKRECOVER CORRUPTION LIST;
That produced the message:
“Block media Recovery requires Enterprise Edition”.
It’s not made at all clear in Oracle’s documentation, but that’s correct – BLOCKRECOVER is available only in Enterprise Edition. Furthermore, BLOCKRECOVER can only recover blocks for which previous versions are available from retained backups. The way to recover from corruption without using BLOCKRECOVER is to restore the complete file(s) containing the corrupt blocks from a good backup. However, rather than taking Oracle backups with RMAN, we had been using VMware’s snapshot capability to store regular copies of the complete virtual machine. But every snapshot, right back to the first one from 8 months ago, showed an Oracle instance with some block corruption.
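For the record, restoring and recovering a single corrupt datafile with RMAN looks something like this (a sketch – datafile 13 is the corrupt file from the earlier listing, and a usable backup is needed for this to work):

```
RMAN> SQL 'ALTER DATABASE DATAFILE 13 OFFLINE';
RMAN> RESTORE DATAFILE 13;
RMAN> RECOVER DATAFILE 13;
RMAN> SQL 'ALTER DATABASE DATAFILE 13 ONLINE';
```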
Reinstalling Oracle on a Virtual Machine
Since we use Oracle’s exp utility to take daily full dumps of the development tablespace, we keep all PL/SQL code in a Subversion source control system, and we make frequent application exports from APEX, I now decided to reinstall Oracle 10g Standard Edition from scratch on a clean server. A few hours later I had a new development system running. But after a few minutes of testing the APEX applications, the old ORA-08103 error reappeared. Sure enough, RMAN again reported block corruption. I again reinstalled Oracle, and this time tested for block corruption at every stage. Corruption first appeared after creating a new development tablespace – before any data had been imported. Setting system initialization parameter DB_BLOCK_CHECKING to FULL had no effect.
As a last resort I then tried installing Oracle Enterprise Edition, reasoning that I should at least be able to recover from block corruption if I could get a non-corrupt backup – but no need; this time there was no block corruption at any stage.
I tested the Enterprise Edition installation by creating and dropping large tablespaces and by forcing automatic tablespace extension up to the 60 GByte disk limit by inserting records into a huge table – at no stage did block corruption develop when DB_BLOCK_CHECKING was set to FULL.
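For reference, the block-checking level can be changed at runtime (assuming the instance uses an spfile):

```sql
ALTER SYSTEM SET DB_BLOCK_CHECKING = FULL SCOPE = BOTH;
```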
There’s no obvious reason why Oracle Enterprise Edition should behave differently from Standard Edition in relation to block corruption – I would expect the relevant code to be exactly the same in both. Specifically, I would expect that setting DB_BLOCK_CHECKING to FULL would ensure that Standard Edition could not write corrupt blocks – however, since our configuration is unsupported, its failure to protect against corruption cannot be described as an Oracle bug. Maybe Enterprise Edition has different default settings for some crucial parameter?
VMware File Missing Error
VMware allows one to take a “snapshot” of the Virtual Machine (VM) state at any time. I had originally thought of these snapshots as similar to normal backups, and by the time I had rebuilt Oracle, applied the 10.2.0.4 patch set, imported the development tablespace, reinstalled APEX, and imported the APEX applications on the main development VM, there were about 60 stored snapshots – the whole VM package occupied more than 270 GBytes on disk.
I then deleted an unwanted snapshot but got an error message about being unable to delete it, which I thought was a fairly trivial problem at the time. I shut down the VM and made a disk copy of the whole package. When I restarted it, VMware complained that a needed file was missing – it named the file as <PACKAGE_NAME>.vmdk, which is the main file representing the VM’s hard disk. When I looked inside the VM package (right click in Finder and select “Show Package Contents”) the file was indeed missing – VMware appeared to have “lost” it. That represented more than a day’s work lost.
More Snapshots, Less Resilience
I soon discovered that my view of snapshots as extra backups was the opposite of the truth – in fact each snapshot creates files which must be added to all the previous snapshot files to recreate the machine state. The VM disk image is fragmented into a base disk file <PACKAGE_NAME>.vmdk and many delta files <PACKAGE_NAME>-nnnnnn.vmdk, where nnnnnn is 000001, 000002, etc. Each file has a unique id number (the CID) in its header, together with the CID of its parent (parentCID). The last file in the chain is <PACKAGE_NAME>.vmdk. Entries in the file vmware.log list the disk files in the order they are opened as the failed VM is started. The last one in the list is the one that should have chained to the missing file. When a snapshot is deleted, its data is merged into the main disk image file (for more detail see eg the blog post VMware Snapshots: Or, VMSD’s, VMSN’s, CID chains, and what happens when you press Delete All).
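The CID chain is visible in each disk file’s embedded text descriptor; the relevant header lines look something like this (the values here are illustrative, not from our machine):

```
# Disk DescriptorFile
version=1
CID=3f2a91c8
parentCID=ffffffff
createType="monolithicSparse"
```

A parentCID of ffffffff marks the base disk, which has no parent; every delta file instead carries the CID of the file it chains to.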
Recovering from VMWare File Missing Error
I made a copy of the VM package, then followed the recovery procedure described on the DRIVE: ACTIVATED blog and in the VMware Knowledge Base: I copied into the package a version of <PACKAGE_NAME>.vmdk from a backup of the same virtual machine made at the time it was first virtualised (so now 8 months out of date). Using 0xED, a hex editor for the Mac, I then edited the header of the last file in the chain so that its parentCID referred to the CID of the new <PACKAGE_NAME>.vmdk. The file was huge (32 GB) but saved successfully in 40 minutes. Miraculously, this process resulted in a working VM.
I spent the next few hours deleting most of the snapshots, 10 at a time, and finally brought the size of the VM down to 68 GBytes.
The final insult – I shut down the VM and copied it from the backup external disk (WD 1 TByte “My Book”) back on to the production disk… and the copy failed with an unrecoverable disk error. I had to repeat the retrieval of the VM; another day lost. We’re going to use a mirrored external disk for backup in future.
Most of our Oracle tables have a surrogate primary key (SPK) whose value is derived in the normal way from an Oracle sequence and inserted via a trigger tied to the “BEFORE INSERT” condition.
It takes some effort to maintain all the sequences and triggers. For every new table an associated sequence and trigger must be created, and if any sequence or trigger is accidentally dropped during development the development system will eventually fail in an unobvious way. We also frequently copy table data between systems – eg from the live to the staging system, so that candidate releases can be tested on live data. When data is copied, all the relevant sequences must be reset to be higher than the new maximum SPK value.
We therefore developed a PL/SQL function ResetSequences provided as part of an Oracle package that would
- Reset the current value of each sequence to be the maximum value of the SPK in the associated table
- Detect and recreate a missing sequence or trigger (or both)
ResetSequences takes as parameters the name of the tablespace in which to operate and either a table prefix which all affected tables should match or a comma separated list of table names. If both the table prefix and list of names are null, it attempts to reset sequences for every table in the given tablespace.
ResetSequences returns as a CLOB a text list of what sequences it reset or created, and what triggers it created.
It can be called from SQL like this:
SELECT APEXTRAS_RESET_TRIG_SEQ.ResetSequences('TABLESPACE', 'TABLE_PREFIX', 'TABLE_LIST') FROM dual;
Or from a PL/SQL procedure:
DECLARE
  l_return CLOB;
BEGIN
  l_return := APEXTRAS_RESET_TRIG_SEQ.ResetSequences(
                p_tablespace  => 'TABLESPACE',
                p_tableprefix => 'TABLE_PREFIX',
                p_tablelist   => 'TABLE_LIST');
END;
Tested with Oracle 10g Release 2 (10.2).
The usual caveats apply about using ResetSequences on a production system – test extensively first. In particular, ResetSequences assumes that any primary key on a numeric column should be derived from a sequence and associated trigger and will create both if they don’t exist. Don’t use it on tables for which that is not true.
The Oracle views all_constraints, all_ind_columns, all_triggers, all_trigger_cols and all_dependencies contain the required information and when joined and filtered give us a list of tables, primary key columns, sequences, and triggers.
all_constraints filtered on all_constraints.constraint_type = ‘P’ gives the name of tables with primary keys.
all_ind_columns gives the referenced column name.
all_dependencies, suitably filtered, gives the names of the trigger and sequence.
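A sketch of the sort of dictionary join involved (this is illustrative only, not the actual package code; the real join conditions and filters in ResetSequences are more involved):

```sql
SELECT c.table_name,
       ic.column_name,                    -- the SPK column
       d.referenced_name AS sequence_name,
       t.trigger_name
FROM   all_constraints  c
JOIN   all_ind_columns  ic ON ic.index_name  = c.index_name
                          AND ic.index_owner = c.owner
JOIN   all_triggers     t  ON t.table_name   = c.table_name
                          AND t.owner        = c.owner
LEFT JOIN all_dependencies d ON d.name            = t.trigger_name
                            AND d.type            = 'TRIGGER'
                            AND d.referenced_type = 'SEQUENCE'
WHERE  c.constraint_type = 'P'
AND    c.owner = USER;
```

The outer join to all_dependencies is what lets a missing sequence show up as a NULL, so it can be detected and recreated.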
If the trigger and sequence exist then the next value of the sequence is reset to be 1 greater than the current maximum value of the SPK column (the sequence is not dropped and recreated). If either or both of the sequence or trigger are missing, they are created from scratch.
The name given to newly created sequences and triggers is based on the table name minus the table prefix concatenated with the SPK column name. If the table and column names start with more than 3 characters in common then only the column name is used.
Eg for a table PREFIX_TEST_TABLE with SPK column TEST_TABLE_ID, the trigger and sequence would both be called TEST_TABLE_ID.
For a table PREFIX_TEST_TABLE with SPK column TAB_ID the trigger and sequence would both be called TEST_TABLE_TAB_ID.
DDL issued by ResetSequences
The parts of the DDL statements inside square brackets ([]) are each replaced by the relevant parameter or calculated value before execution.
CREATE SEQUENCE [TABLESPACE.SEQUENCE_NAME] START WITH [START_VALUE] INCREMENT BY 1 CACHE 20
CREATE OR REPLACE TRIGGER [TABLESPACE.TRIGGER_NAME]
BEFORE INSERT ON [TABLE_NAME]
FOR EACH ROW
WHEN (NEW.[COLUMN_NAME] IS NULL)
BEGIN
  SELECT [SEQUENCE_NAME].NEXTVAL INTO :NEW.[COLUMN_NAME] FROM DUAL;
END;
Resetting the next value of an existing sequence cannot be done in a single statement; it takes a series of commands (in the package these are issued via dynamic SQL):
ALTER SEQUENCE [TABLESPACE.SEQUENCE_NAME] INCREMENT BY 1 NOCACHE;
SELECT [TABLESPACE.SEQUENCE_NAME].NEXTVAL INTO nextval FROM DUAL;
IF (nextval < desired_value) THEN
  IF (nextval < (desired_value - 1)) THEN
    ALTER SEQUENCE [TABLESPACE.SEQUENCE_NAME] INCREMENT BY [desired_value - nextval - 1] MINVALUE 1;
    SELECT [TABLESPACE.SEQUENCE_NAME].NEXTVAL INTO nextval FROM DUAL;
    ALTER SEQUENCE [TABLESPACE.SEQUENCE_NAME] INCREMENT BY 1;
  ELSIF (nextval = (desired_value - 1)) THEN
    SELECT [TABLESPACE.SEQUENCE_NAME].NEXTVAL INTO nextval FROM DUAL;
  END IF;
END IF;
ALTER SEQUENCE [TABLESPACE.SEQUENCE_NAME] INCREMENT BY 1 CACHE [ORIGINAL_CACHE_SIZE];
We often give APEX users the opportunity to have displayed reports emailed to them as PDF attachments. It may take minutes of processing to produce such an email, and it would be completely unacceptable to expect a web user to wait for the processing to complete. The solution is to use the Oracle scheduler, available in versions 10g and above. The call from APEX to schedule the report then takes at most a few seconds.
The Oracle scheduler is designed to run pre-defined scheduler jobs, which are PL/SQL procedures or anonymous blocks of PL/SQL. However all our report production is done by procedures that are defined as part of PL/SQL packages, and we are constantly writing new ones. We didn’t want the overhead of defining and maintaining a separate job for each procedure. Instead, we wrote a generalized interface to the scheduler that allows the creation and scheduling of a new job on demand. From APEX, an emailed report can then be scheduled for immediate or future production from a single PL/SQL call.
For each procedure that is to be called by the scheduler, we create a wrapper procedure with the same parameters. The wrapper procedure is the one that is called from APEX. We also declare a type arglist_t as a table of records to hold the details of each parameter.
In the wrapper procedure we assign the name, type, default value, and actual value of each parameter to elements of a variable of type arglist_t, then call the single procedure ScheduleSPJob.
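Internally, ScheduleSPJob drives DBMS_SCHEDULER. The core of such a procedure might look like this (a sketch only – the variable and parameter names are assumptions, not the actual package code):

```sql
-- l_jobname, p_procname, p_start_date and p_args are assumed names
l_jobname := DBMS_SCHEDULER.GENERATE_JOB_NAME('APEXTRAS_');
DBMS_SCHEDULER.CREATE_JOB(
  job_name            => l_jobname,
  job_type            => 'STORED_PROCEDURE',
  job_action          => p_procname,
  number_of_arguments => p_args.COUNT,
  start_date          => p_start_date,    -- NULL means run immediately
  enabled             => FALSE);          -- jobs with arguments must start disabled
FOR i IN 1 .. p_args.COUNT LOOP
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(l_jobname, i, p_args(i).value);
END LOOP;
DBMS_SCHEDULER.ENABLE(l_jobname);  -- may fail for a job created moments earlier
```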
A specific example – scheduling a call to a procedure ResendEmail(p_email_id IN NUMBER, p_recipients IN VARCHAR2).
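A wrapper for ResendEmail might look like this (a sketch; the arglist_t field names and the ScheduleSPJob signature here are assumptions):

```sql
PROCEDURE ScheduleResendEmail(p_email_id   IN NUMBER,
                              p_recipients IN VARCHAR2) IS
  l_args arglist_t;  -- table of records: one entry per parameter
BEGIN
  l_args(1).name  := 'P_EMAIL_ID';
  l_args(1).type  := 'NUMBER';
  l_args(1).value := TO_CHAR(p_email_id);
  l_args(2).name  := 'P_RECIPIENTS';
  l_args(2).type  := 'VARCHAR2';
  l_args(2).value := p_recipients;
  -- ScheduleSPJob creates the job, sets its arguments, and tries to enable it
  ScheduleSPJob(p_procname => 'RESENDEMAIL', p_args => l_args);
END ScheduleResendEmail;
```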
Ensuring scheduler jobs are enabled
Scheduler jobs with arguments must be created as disabled, then be explicitly enabled after the arguments have been set. Our PL/SQL scheduling procedure ScheduleSPJob tries to enable each job after it is created. But if many jobs are scheduled in a short time – eg from a loop that emails several users – it is likely that some of them will not have been available for enabling at the time they were initially created. They must then be enabled by a separate process call. We use an On Demand Application Process that calls a PL/SQL procedure to do this: APEXTRAS_SCHEDULER.EnableJobsByCreator('APEX_PUBLIC_USER');. In our system, we have found it is sufficient to call this On Demand process once from an APEX Page Process immediately after the Page Process that creates the scheduled jobs.
The current state of all scheduled jobs can be seen in the view user_scheduler_jobs. Jobs created from APEX have JOB_CREATOR set to 'APEX_PUBLIC_USER'.
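For example, to watch the APEX-created jobs:

```sql
SELECT job_name, state, job_creator
FROM   user_scheduler_jobs
WHERE  job_creator = 'APEX_PUBLIC_USER';
```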
We use the PL/PDF package to produce PDF documents that can be mailed on request to APEX web site users. The PDF contents will usually be a version of something that was displayed on screen, for example a list of answers to a questionnaire. Version 2.1.0 of PL/PDF has recently become available and we’ve just upgraded our development and production systems.
PL/PDF does quite a good job of converting simple text or tabular data into reasonable-looking PDFs, but it takes some work to produce more complicated documents, and it lacks the ability to convert an HTML document directly into a PDF.
Pound sign (£)
The FAQs on the PL/PDF web site state that the default encoding cp1252 does not include a pound sign, but that one can be displayed using unistr('\00a3') or chr(49827).
We found those characters display a capital L with a line crossing it, rather than a real pound sign. I wrote a procedure to print every Unicode character just to make sure there wasn’t some other encoding for a real pound sign – but there wasn’t. Instead we had to use an embedded TrueType font rather than one of the built-in fonts.
One of the least friendly aspects of previous PL/PDF versions was the procedure for using TrueType embedded fonts, which involved running a command line program to generate an Adobe Font Metrics (AFM) file, then using SQL*Loader with custom control files to load two tables, and finally running a stored procedure. It took me most of a day’s work to do this for a single TrueType font file, and the resulting embedded font was unusable – it produced a blank document. I never had the time or the inclination to try again. Version 2.1.0 has fixed that – it includes a graphical application (PL/PDF v2.1.0 TTF Loader) that does all the work automatically. In 5 minutes I managed to install all 4 Arial TrueType fonts in a usable form. I had to specify utf16 encoding and unicode in the graphical interface (our Oracle NLS_CHARACTERSET=AL32UTF8, NLS_NCHAR_CHARACTERSET=AL16UTF16).
It’s not immediately obvious how to use the embedded TrueType fonts. They first have to be loaded with a user-defined alias, using the numerical value of an ID field in a PL/PDF table (plpdf_ttf_add.ID). The value of the ID is derived from an Oracle sequence, so it can vary with the server it is running on – to avoid hard-coding it we had to write a procedure to find the ID corresponding to a given font name.
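The lookup itself reduces to something like this (a sketch – the name column of plpdf_ttf_add is an assumption; check the actual table definition in your PL/PDF installation):

```sql
SELECT id
INTO   l_font_id
FROM   plpdf_ttf_add
WHERE  name = p_font_name;
```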
Once loaded, TrueType fonts can be referred to by their aliases. However, the different font styles are each in different TrueType files, and it seems they must therefore have different font names when used in PL/PDF, so the built-in PL/PDF method for specifying font styles won’t work. Eg to print in ArialMT bold, instead of specifying font ‘ArialMT’ and style ‘B’, you need to specify font ‘Arial-BoldMT’ and style NULL.
Other PL/PDF anomalies
These were seen in earlier versions – we’ve not yet had time to retest with version 2.1.0:
- The left margin setting procedure appears not to work; to produce a left indent we have to add spaces to the output text.
- The width of text to be printed, as reported by the GetTextWidth function, is sometimes wrong (by a constant factor) – we have to hold an empirically determined “fudge factor” in a variable and multiply every result from GetTextWidth by it.
SQL*Plus would not run on one of our Windows Oracle/APEX servers, although it was running fine on all the other apparently identical servers. ORACLE_HOME and ORACLE_SID were correctly set.
We were getting one or other of these errors:
ORA-12560: TNS:protocol adapter error
ORA-12557: TNS:protocol adapter not loadable
The problem was the Windows “path” environment variable. The \bin directory associated with each of the two Oracle homes was correctly in the path, but in the wrong order. The path associated with the Oracle/Apache HTTP server preceded the path associated with the Oracle database.
Ie we had: path=C:\oracle\product\10.2.0\http_1\bin;C:\oracle\product\10.2.0\db_1\bin
when we should have had: path=C:\oracle\product\10.2.0\db_1\bin;C:\oracle\product\10.2.0\http_1\bin
(ignoring all the other non-Oracle elements of the path).
Presumably SQL*Plus looks for the configuration files LISTENER.ORA, SQLNET.ORA and TNSNAMES.ORA in the first Oracle home it finds.
With the path correctly set, SQL*Plus would start correctly without the environment variables ORACLE_HOME or ORACLE_SID explicitly set – probably because these were also set in the registry.
To integrate the low-cost YubiKey device we use a custom APEX authentication scheme in which we manage user names and passwords in a table under our control.
To enable this you need to enter “return name_of_your_custom_authentication_function” in the setup screen for the APEX authentication scheme in the “Login Processing/Authentication Function” box. The APEX popup help for this field gives an excellent guide to the requirements for that function. In our demonstration application we called the function APEXTRAS_YUBICO.auth.
The YubiKey outputs a unique string every time it is activated. This is the one-time password (OTP). Its first 12 characters constitute the YubiKey ID, the unique identifier for each YubiKey. In other words, the first 12 characters of a YubiKey’s output are always the same, and are always different from those of any other YubiKey.
It is quite simple to incorporate the YubiKey into an APEX Authentication Scheme.
1. Add a column to the user table to hold the YubiKey ID of the YubiKey issued to each user.
2. Then add a third field to the login screen to take the YubiKey output as well as the login name and password.
3. Perform this pseudocode to log in:
Verify the YubiKey OTP
IF the YubiKey OTP passes verification THEN
  Retrieve the user name corresponding to the YubiKey ID
  IF the retrieved user name matches the one entered THEN
    Make the call to APEX_CUSTOM_AUTH.LOGIN to check the hash of the
    entered password against the stored hash of the user's password
  END IF
END IF
If this succeeds, then the logged-in user:
a) is in possession of a valid YubiKey that is assigned to their account, and
b) knows the password assigned to that account.
To verify the YubiKey OTP you need to submit it (via an HTTP GET) to a YubiKey authentication service. You can either run this somewhere on your own network, or you can use Yubico’s public authentication service.
If you use the Yubico service you will also need two pieces of information about your Yubico account: your API key and your Yubico-issued company user ID: both are available from the Yubico management site. Within our demonstration client the API Key, User ID and the URI of the authentication service need to be entered into the APEXTRAS_YUBICO package as package constants.
The call to the authentication service looks something like this:
- id is your company user ID
- otp is the one-time password you want to verify
- h is a signature of the query string (HMAC-SHA1, keyed on your API key)
The authentication service will respond with a status such as OK, BAD_OTP, REPLAYED_OTP, BAD_SIGNATURE, etc.
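In PL/SQL the verification call can be sketched with UTL_HTTP. The endpoint below is Yubico’s public validation URL as we understand it – treat it, and the exact signing details, as assumptions to verify against Yubico’s documentation:

```sql
DECLARE
  l_user_id   VARCHAR2(20) := '1234';  -- assumption: your Yubico company user ID
  l_otp       VARCHAR2(100);           -- the OTP typed by the YubiKey on the login page
  l_signature VARCHAR2(100);           -- HMAC-SHA1 signature of the query string
  l_response  VARCHAR2(4000);
BEGIN
  l_response := UTL_HTTP.REQUEST(
      'http://api.yubico.com/wsapi/verify?id=' || l_user_id
      || '&otp=' || l_otp || '&h=' || l_signature);
  -- l_response contains lines such as "status=OK" or "status=BAD_OTP"
END;
```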
Sometimes we develop applications that really require more security than is provided by the standard combination of a username and password. We like two-factor/strong authentication, which typically consists of something you know (a password) and something you have (a token). In the past we’ve used RSA’s SecurID, but we’ve recently been evaluating the YubiKey from Yubico.
The YubiKey is a slim device that plugs into the USB port on any computer. To the computer it appears to be a USB keyboard, which means it works across all operating systems and doesn’t require any drivers to be installed.
When you touch the button on the top of a YubiKey it generates a 32-character one-time password (OTP), which appears at the current cursor position: remember, the host computer sees the YubiKey as a keyboard. You can validate this OTP by passing it in a call to a RESTful web service provided by Yubico, or you can pass it to your own authentication servers. Yubico provides Java and PHP versions of the authentication server software under an Open Source license.
We like the concept of the YubiKey because, compared to other two-factor solutions, it is very cost-effective. Each YubiKey costs about $25 (if you were to buy hundreds, that would drop to under $10), and both the public authentication service and the authentication server software are free.
We often develop systems that consist of two APEX applications on top of the same database — a public application and an administration application that provides access to the back-end of the system and to table maintenance tasks. The administration application is typically used by just a handful of users and would really benefit from two-factor authentication. The YubiKey would allow us to incorporate strong authentication into the administration application at a common-sense price.
Our first test upgrade from APEX 3.1 to 3.2 was 100% error-free, but our applications showed up all sorts of strange errors. Then I remembered that we were using Patrick Wolf’s wrapped versions of the v, nv, and dv functions – see here for more details. We needed to replace “FLOWS_030100.V” with “APEX_030200.V” throughout – then everything worked again.
APEX has the ability to log and analyse user activity in some detail. However, the unit of the built-in APEX logs is the APEX page, and the data collected about what a user does on a particular page is limited. In many of our APEX applications, a single APEX page may correspond to multiple web pages. For example, we have implemented a dynamic questionnaire system in which each screen of the questionnaire is rendered on the same APEX page: a PL/SQL routine is called with a parameter representing the page of the questionnaire to be displayed, and it outputs the complete questionnaire screen via htp.p calls. We still need to collect information about “page” visits even though only one APEX page is involved. In addition, there is often a requirement to collect more detailed information about how people are using the system – search terms entered, number of checkboxes ticked, etc.
The solution is to build your own page logging system that inserts a row into a log table for every page visited. You can then record any sort of detail you like. We call the logging code as an Application Process at Process Point “On Load: Before Header (page template header)”. Because APEX doesn’t provide a built-in variable that records the current page alias (and we use page aliases throughout our system), we also need a Before Header Application Computation that fills the Application Item APP_PAGE_ALIAS (see http://oraclequirks.blogspot.com/2008/02/build-and-use-apex-page-alias.html).
The actual insert into the log table is a simple SQL statement that could be executed in the APEX process, but we prefer to keep code that writes to tables in PL/SQL packages, separate from APEX: it’s more maintainable there, error handling can be more rigorously enforced, and we end up with a single PL/SQL procedure to perform each action, rather than several APEX PL/SQL regions doing more-or-less the same thing.
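Such a logging procedure can be very small. A sketch (the table and column names here are invented for illustration):

```sql
CREATE OR REPLACE PROCEDURE LogPageVisit(
  p_app_user   IN VARCHAR2,
  p_page_alias IN VARCHAR2,
  p_detail     IN VARCHAR2 DEFAULT NULL)
IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- logging must never disturb the page's own transaction
BEGIN
  INSERT INTO page_visit_log (visited_at, app_user, page_alias, detail)
  VALUES (SYSTIMESTAMP, p_app_user, p_page_alias, p_detail);
  COMMIT;
END LogPageVisit;
```

The APEX Application Process then reduces to a single call such as LogPageVisit(:APP_USER, :APP_PAGE_ALIAS, ...), passing whatever extra detail the page warrants.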
Oracle 10.2.0.4.0, Windows 32-bit, NLS_CHARACTERSET AL32UTF8.
We have a PL/SQL utility function that retrieves the nth item from a delimited string. The string parameter is a CLOB so that it can be used with input of any length. Normally the delimiters are commas, and the function has worked fine for years.
An APEX user’s sort order preference for a static report is stored in column WWV_FLOW_PREFERENCES$.ATTRIBUTE_VALUE as a string that looks something like fsp_sort_1_desc. We used our function to retrieve the various parts of this string, with ‘_’ as the delimiter, but it always returned a null. To our surprise it turned out that the underscore acted as a wildcard and matched any character in the string. That is expected behaviour in a SQL LIKE clause, but should not happen in PL/SQL. There is in fact a bug report in Oracle Metalink which describes this behaviour. There is no fix for Oracle version 10g – the bug is said to be fixed in Oracle 11.2. Our workaround was to replace all the underscores with another delimiter before calling the function.
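In code terms the workaround looks like this (GetNthItem stands in for our actual utility function, whose real name differs):

```sql
-- Swap the wildcard-prone '_' for a delimiter unaffected by the bug,
-- then extract the nth item as usual
l_part := GetNthItem(REPLACE(l_value, '_', '|'), '|', 3);
```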