Archive for the ‘APEX’ tag
Our development Oracle servers – and all our other servers – are Windows 2003 virtual machines running on VMware Fusion in a MacPro3.1 (and I’m well aware that that is not an Oracle-supported configuration). Last week our Oracle database became corrupt, and we discovered that all copies and backups had some degree of corruption that could not be fixed. During reinstallation of Oracle, VMware corrupted the virtual machine several times. Finally, a hardware failure wiped out the first successful rebuild. It took the whole of last week to reinstate the development server.
The only positive aspect was that in the process of recovering from this mess I learnt quite a lot about Oracle, VMware, and the Mac; I was immensely helped by the huge amount of information available on the Internet – especially other people’s blogs.
- Running Oracle 10g Standard Edition on Windows Server 2003 on VMware Fusion will always risk block corruption – even when DB_BLOCK_CHECKING is set to full and even when the disk space for the Virtual Machine is pre-allocated.
- Oracle 10g Enterprise Edition with DB_BLOCK_CHECKING set to full does not develop block corruption.
- Make regular backups with RMAN even when you are using VMware snapshots as your backup.
- In VMware Fusion, the greater the number of snapshots, the greater the risk of fatal corruption.
VMware Fusion 2.0.4 (159196)
Microsoft Windows Server 2003 Standard Edition SP 2
Why run Oracle on virtualised Windows Server 2003/VMware Fusion/MacPro?
We have historical and practical reasons for running Oracle on top of VMware on a MacPro. We used to host the systems we developed for customers ourselves, and when we got out of the hosting business we still owned a computer room with several redundant rack-mount Windows servers. When we started developing in Oracle/APEX we just installed Oracle on one of our old servers. That worked well for 3 years; but then we decided to move to a new office that did not have a separate computer room. We didn’t want to share our office with a server that sounds like a vacuum cleaner – and besides, the Oracle server was now 6 years old. The quietest server-class machine turned out to be a MacPro, which led us to experiment with Windows 2003/Oracle running on VMware Fusion. We ended up with a whole rack-full of Windows servers virtualised inside a single MacPro3.1 with two 2.8 GHz quad-core processors and 10 GB of memory. Any 4 of the servers would run simultaneously at an acceptable speed, and our virtual development Oracle server was faster than the metal original (and silent). We moved office and happily developed on this setup for the next 9 months.
Oracle error ORA-08103: object no longer exists
This error suddenly appeared every time a function or procedure from a particular long-established package was called. Trying to recompile the package produced the same error. ORA-08103 is the error you expect to see when another process has deleted the object you are in the process of accessing. In our case, I was the sole user and that could not be the problem.
Identifying Block Corruption
It didn’t take long with Google to find the more sinister cause – Oracle block corruption – eg see this Oracle forum posting which also describes how to use RMAN from the command line to populate the system view V$DATABASE_BLOCK_CORRUPTION:
RMAN TARGET=SYS (the system prompts for the SYS password)
RMAN> SHUTDOWN IMMEDIATE
RMAN> STARTUP MOUNT (mount the database but don't open it - only "fixed" tables/views are available)
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
RMAN responds with something like:
input datafile fno=00002 name=ORACLE_HOME\UNDOTBS01.DBF
input datafile fno=00003 name=ORACLE_HOME\SYSAUX01.DBF
input datafile fno=00001 name=ORACLE_HOME\SYSTEM01.DBF
with ORACLE_HOME replaced by its actual physical path
Then when logged in as a user with sysdba privileges (no need to open the database because V$DATABASE_BLOCK_CORRUPTION is a “fixed” view):
SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;
Sure enough, I got a display of more than 20 corrupt blocks:
FILE#    BLOCK#    BLOCKS    CORRUPTION_CHANGE#    CORRUPTION_TYPE
   13    245811         1                     0    FRACTURED
   13    245883         1                     0    FRACTURED
   13    650238         1                     0    FRACTURED
   13    906713         7                     0    ALL ZERO
The number in the FILE# column relates to one of the numbered data files that RMAN displayed (the number following fno=).
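The same mapping can be read straight from the data dictionary while the database is mounted, eg:

```sql
-- Show the physical file name alongside each corrupt block range
SELECT d.name, c.block#, c.blocks, c.corruption_type
  FROM v$database_block_corruption c
  JOIN v$datafile d ON d.file# = c.file#;
```

Both views are “fixed” views, so this works against a mounted-but-not-open database.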
There are usually also alert log entries detailing the corrupt blocks, and similar information is given by the command line tool DBVERIFY (invoked as dbv):

dbv FILE=FULL_PATH_TO_THE_DATAFILE

where FULL_PATH_TO_THE_DATAFILE represents the file name as displayed by RMAN.
Types of Block Corruption
CORRUPTION_TYPE FRACTURED and ALL ZERO are explained in this way by Oracle in the documentation for V$DATABASE_BLOCK_CORRUPTION:
FRACTURED – Block header looks reasonable, but the front and back of the block are different versions.
ALL ZERO – Block header on disk contained only zeros. The block may be valid if it was never filled and if it is in an Oracle7 file. The buffer will be reformatted to the Oracle8 standard for an empty block.
I can’t find any detailed explanation of why these problems may occur – perhaps they mean that the block was being read or written to by some other process at the same time as Oracle was writing to it?
Recovering from Block Corruption
It should be possible to recover from block corruption. Oracle 10g RMAN includes the BLOCKRECOVER command; this should recover all blocks that are marked as corrupt in V$DATABASE_BLOCK_CORRUPTION:
RMAN> BLOCKRECOVER CORRUPTION LIST;
That produced the message:
“Block media Recovery requires Enterprise Edition”.
It’s not made at all clear in Oracle’s documentation, but that’s correct – BLOCKRECOVER is available only in Enterprise Edition. Furthermore, BLOCKRECOVER can only recover blocks when there are previous versions of the block available from retained backups. The way to recover from corruption without using BLOCKRECOVER is to restore the complete file(s) with the corrupt blocks from a good backup. However, rather than taking Oracle backups with RMAN, we had been using VMware’s snapshot capability to store regular copies of the complete virtual machine. But every snapshot, right back to the first one from 8 months ago, showed an Oracle instance with some block corruption.
Reinstalling Oracle on a Virtual Machine
Since we use Oracle’s exp utility to take daily full dumps of the development tablespace, we keep all PL/SQL code in a Subversion source control system, and we make frequent application exports from APEX, I now decided to reinstall Oracle 10g Standard Edition from scratch on a clean server. A few hours later I had a new development system running. But after a few minutes of testing the APEX applications, the old ORA-08103 error reappeared. Sure enough, RMAN again reported block corruption. I again reinstalled Oracle, and this time tested for block corruption at every stage. Corruption first appeared after creating a new development tablespace – before any data had been imported. Setting system initialization parameter DB_BLOCK_CHECKING to FULL had no effect.
As a last resort I then tried installing Oracle Enterprise Edition, reasoning that I should at least be able to recover from block corruption if I could get a non-corrupt backup – but no need; this time there was no block corruption at any stage.
I tested the Enterprise Edition installation by creating and dropping large tablespaces and by forcing automatic tablespace extension up to the 60 GByte disk limit by inserting records into a huge table – at no stage did block corruption develop when DB_BLOCK_CHECKING was set to FULL.
There’s no obvious reason why Oracle Enterprise Edition should behave differently from Standard Edition in relation to block corruption – I would expect the relevant code to be exactly the same in both. Specifically, I would expect the DB_BLOCK_CHECKING parameter, when set to FULL, to ensure that Standard Edition could not write corrupt blocks – however, since our configuration is unsupported, its failure to protect against corruption cannot be described as an Oracle bug. Maybe Enterprise Edition has different default settings for some crucial parameter?
VMware File Missing Error
VMware allows one to take a “snapshot” of the Virtual Machine (VM) state at any time. I had originally thought of these snapshots as similar to normal backups, and by the time I had rebuilt Oracle, applied the 10.2.0.4 patch set, imported the development tablespace, reinstalled APEX, and imported the APEX applications on the main development VM, there were about 60 stored snapshots – the whole VM package occupied more than 270 GBytes on disk.
I now deleted an unwanted snapshot but got an error message about being unable to delete it, which I thought was a fairly trivial problem at the time. I shut down the VM and made a disk copy of the whole package. When I restarted it, VMware complained that a needed file was missing – it named the file as the <PACKAGE_NAME>.vmdk, which is the main file representing the VM’s hard disk. When I looked inside the VM package (right click in Finder and select “Show Package Contents”) the file was indeed missing – VMware appeared to have “lost” it. That represented more than a day’s work lost.
More Snapshots, Less Resilience
I soon discovered that my view of snapshots as extra backups was the opposite of the truth – in fact when a snapshot is taken, it creates files which must be added to all the previous snapshot files to recreate the machine state. The VM disk image is fragmented into a base disk file <PACKAGE_NAME>.vmdk and many delta files <PACKAGE_NAME>-nnnnnn.vmdk where nnnnnn is 000001, 000002 etc. Each file has a unique id number (the CID) in its header together with the CID of its parent (parentCID). The last file in the chain is <PACKAGE_NAME>.vmdk. Entries in file vmware.log list the disk files in the order they are opened as the failed VM is started. The last one in the list is the one that should have chained to the missing file. When a snapshot is deleted, its data is merged into the main disk image file (for more detail see eg blog post VMware Snapshots: Or, VMSD’s, VMSN’s, CID chains, and what happens when you press Delete All).
Recovering from VMware File Missing Error
I made a copy of the VM package, then followed the recovery procedure described on the DRIVE: ACTIVATED blog and in the VMware Knowledge Base: I copied into the package a version of <PACKAGE_NAME>.vmdk from a backup of the same virtual machine made at the time it was first virtualised (so now 8 months out of date). Then, using 0xED, a hex editor for the Mac, I edited the header of the last file in the chain so that its parentCID referred to the CID of the new <PACKAGE_NAME>.vmdk. The file was huge (32 GB) but saved successfully in 40 minutes. Miraculously, this process resulted in a working VM.
I spent the next few hours deleting most of the snapshots, 10 at a time, and finally brought the size of the VM down to 68 GBytes.
The final insult – I shut down the VM and copied it from the backup external disk (WD 1 TByte “My Book”) back on to the production disk… and the copy failed with an unrecoverable disk error. I had to repeat the retrieval of the VM; another day lost. We’re going to use a mirrored external disk for backup in future.
We often give APEX users the opportunity to have displayed reports emailed to them as PDF attachments. It may take minutes of processing to produce such an email, and it would be completely unacceptable to expect a web user to wait for the processing to complete. The solution is to use the Oracle scheduler, available in versions 10g and above. The call from APEX to schedule the report then takes at most a few seconds.
The Oracle scheduler is designed to run pre-defined scheduler jobs, which are PL/SQL procedures or anonymous blocks of PL/SQL. However all our report production is done by procedures that are defined as part of PL/SQL packages, and we are constantly writing new ones. We didn’t want the overhead of defining and maintaining a separate job for each procedure. Instead, we wrote a generalized interface to the scheduler that allows the creation and scheduling of a new job on demand. From APEX, an emailed report can then be scheduled for immediate or future production from a single PL/SQL call.
For each procedure that is to be called by the scheduler, we create a wrapper procedure with the same parameters. The wrapper procedure is the one that is called from APEX. We also declare a type arglist_t as a table of records to hold the details of each parameter.
In the wrapper procedure we assign the name, type, default value, and actual value of each parameter to elements of a variable of type arglist_t, then call the single procedure ScheduleSPJob.
A specific example – scheduling a call to a procedure ResendEmail(p_email_id IN NUMBER, p_recipients IN VARCHAR2).
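The pattern can be sketched like this (the arglist_t field names and the ScheduleSPJob signature are illustrative, not the actual APEXTRAS code):

```sql
-- Hypothetical wrapper for ResendEmail: collect parameter metadata into
-- an arglist_t table of records, then hand everything to ScheduleSPJob.
PROCEDURE ResendEmail (p_email_id IN NUMBER, p_recipients IN VARCHAR2) IS
  l_args arglist_t;
BEGIN
  l_args(1).name     := 'p_email_id';
  l_args(1).datatype := 'NUMBER';
  l_args(1).value    := TO_CHAR(p_email_id);
  l_args(2).name     := 'p_recipients';
  l_args(2).datatype := 'VARCHAR2';
  l_args(2).value    := p_recipients;
  ScheduleSPJob(p_procedure => 'MYPKG.RESENDEMAIL', p_args => l_args);
END ResendEmail;
```

Inside ScheduleSPJob the job is created disabled with DBMS_SCHEDULER.CREATE_JOB (job_type => 'STORED_PROCEDURE', number_of_arguments => p_args.COUNT, enabled => FALSE), each argument is set with DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE, and the job is then enabled with DBMS_SCHEDULER.ENABLE.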
Ensuring scheduler jobs are enabled
Scheduler jobs with arguments must be created as disabled, then be explicitly enabled after the arguments have been set. Our PL/SQL scheduling procedure ScheduleSPJob tries to enable each job after it is created. But if many jobs are scheduled in a short time – eg from a loop that emails several users – it is likely that some of them will not have been available for enabling at the time they were initially created. They must then be enabled by a separate process call. We use an On Demand Application Process that calls a PL/SQL procedure to do this: APEXTRAS_SCHEDULER.EnableJobsByCreator('APEX_PUBLIC_USER');. In our system, we have found it is sufficient to call this On Demand process once from an Apex Page Process immediately after the Page Process that creates the scheduled jobs.
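EnableJobsByCreator can be sketched along these lines (the view and column names are standard; the procedure body is illustrative):

```sql
-- Enable any jobs a given creator left in the DISABLED state.
PROCEDURE EnableJobsByCreator (p_creator IN VARCHAR2) IS
BEGIN
  FOR r IN (SELECT job_name
              FROM user_scheduler_jobs
             WHERE job_creator = p_creator
               AND state = 'DISABLED') LOOP
    BEGIN
      DBMS_SCHEDULER.ENABLE(r.job_name);
    EXCEPTION
      WHEN OTHERS THEN
        NULL;  -- the job may have started or been dropped in the meantime
    END;
  END LOOP;
END EnableJobsByCreator;
```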
The current state of all scheduled jobs can be seen in the view user_scheduler_jobs. Jobs created from Apex have JOB_CREATOR set to ‘APEX_PUBLIC_USER’.
We use the PL/PDF package to produce PDF documents that can be mailed on request to APEX web site users. The PDF contents will usually be a version of something that was displayed on screen, for example a list of answers to a questionnaire. Version 2.1.0 of PL/PDF has recently become available and we’ve just upgraded our development and production systems.
PL/PDF does quite a good job of converting simple text or tabular data into reasonable-looking PDFs, but it takes some work to produce more complicated documents, and it lacks the ability to convert an HTML document directly into a PDF.
Pound sign (£)
The FAQs on the PL/PDF web site state that the default encoding cp1252 does not include a pound sign, but that one can be displayed using unistr('\00a3') or chr(49827).
We found those characters display a capital L with a line crossing it, rather than a real pound sign. I wrote a procedure to print every Unicode character just to make sure there wasn’t some other encoding for a real pound sign – but there wasn’t. Instead we had to use an embedded TrueType font rather than one of the built-in fonts.
One of the least friendly aspects of previous PL/PDF versions was the procedure for using TrueType embedded fonts, which involved running a command line program to generate an Adobe Font Metrics (AFM) file, then using SQL*Loader with custom control files to load two tables, and finally running a stored procedure. It took me most of a day’s work to do this for a single TrueType font file, and the resulting embedded font was unusable – it produced a blank document. I never had the time or the inclination to try again. Version 2.1.0 has fixed that – it includes a graphical application (PL/PDF v2.1.0 TTF Loader) that does all the work automatically. In 5 minutes I managed to install all 4 Arial TrueType fonts in a usable form. I had to specify utf16 encoding and unicode in the graphical interface (our Oracle NLS_CHARACTERSET=AL32UTF8, NLS_NCHAR_CHARACTERSET=AL16UTF16).
It’s not immediately obvious how to use the embedded TrueType fonts. They first have to be loaded under a user-defined alias using the numerical value of an ID field in a PL/PDF table (plpdf_ttf_add.ID). The value of the ID is derived from an Oracle sequence, so it can vary with the server it is running on – to avoid hard-coding it we had to write a procedure to find the ID corresponding to a given font name.
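Our lookup can be sketched like this, assuming the loader stores the font name alongside the ID in the plpdf_ttf_add table (the name column is a guess from our installation):

```sql
-- Find the PL/PDF internal ID for an embedded TrueType font by name,
-- so the ID is never hard-coded (it comes from a sequence and differs
-- between servers).
FUNCTION font_id (p_font_name IN VARCHAR2) RETURN NUMBER IS
  l_id NUMBER;
BEGIN
  SELECT id
    INTO l_id
    FROM plpdf_ttf_add
   WHERE name = p_font_name;
  RETURN l_id;
  -- NO_DATA_FOUND propagates if the font was never loaded
END font_id;
```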
Once loaded, TrueType fonts can be referred to by their aliases. However, the different font styles are each in different TrueType files, and it seems they must therefore have different font names when used in PL/PDF, so the built-in PL/PDF method for specifying font styles won’t work. Eg to print in ArialMT bold, instead of specifying font ‘ArialMT’ and style ‘B’, you need to specify font ‘Arial-BoldMT’ and style NULL.
Other PL/PDF anomalies
These were seen in earlier versions – we’ve not yet had time to retest with version 2.1.0:
- The left margin setting procedure appears not to work; to produce a left indent we have to add spaces to the output text.
- The width of text to be printed, as reported by the GetTextWidth function, is sometimes wrong by a constant factor – we declare a variable to hold an empirically determined “fudge factor” by which we multiply every result from GetTextWidth.
To integrate the low-cost YubiKey device we use a custom APEX authentication scheme in which we manage user names and passwords in a table under our control.
To enable this you need to enter “return name_of_your_custom_authentication_function” in the setup screen for the APEX authentication scheme in the “Login Processing/Authentication Function” box. The APEX popup help for this field gives an excellent guide to the requirements for that function. In our demonstration application we called the function APEXTRAS_YUBICO.auth.
The YubiKey outputs a unique string every time it is activated. This is the one-time password (OTP). Its first 12 characters constitute the YubiKey ID, the unique identifier for each YubiKey. In other words, the first 12 characters of a YubiKey’s output are always the same, and are always different from those of any other YubiKey.
It is quite simple to incorporate the YubiKey into an APEX Authentication Scheme.
1. Add a column to the user table to hold the YubiKey ID of the YubiKey issued to each user.
2. Then add a third field to the login screen to take the YubiKey output as well as the login name and password.
3. Perform this pseudocode to log in:
Verify the YubiKey OTP
IF the YubiKey OTP passes verification THEN
  Retrieve the user name corresponding to the YubiKey ID
  IF the retrieved user name matches the one entered THEN
    Make the call to APEX_CUSTOM_AUTH.LOGIN to check the hash of the
    entered password against the stored hash of the user's password
  END IF
END IF
If this succeeds, then the logged-in user
a) is in possession of a valid YubiKey that is assigned to their account, and
b) knows the password assigned to that account.
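Put together as a custom authentication function, the scheme looks roughly like this (the page item name, the app_users table, and the helpers verify_otp and password_ok are all hypothetical stand-ins):

```sql
-- Sketch of a custom authentication function in the spirit of
-- APEXTRAS_YUBICO.auth: two-factor check combining the YubiKey OTP
-- with the normal username/password check.
FUNCTION auth (p_username IN VARCHAR2, p_password IN VARCHAR2)
  RETURN BOOLEAN
IS
  l_otp      VARCHAR2(64) := v('P101_YUBIKEY_OTP');  -- the third login field
  l_key_user VARCHAR2(255);
BEGIN
  -- 1. verify the OTP with the validation service
  IF NOT verify_otp(l_otp) THEN
    RETURN FALSE;
  END IF;
  -- 2. the first 12 characters identify the individual YubiKey
  SELECT username
    INTO l_key_user
    FROM app_users
   WHERE yubikey_id = SUBSTR(l_otp, 1, 12);
  IF UPPER(l_key_user) <> UPPER(p_username) THEN
    RETURN FALSE;
  END IF;
  -- 3. finally check the password hash in the usual way
  RETURN password_ok(p_username, p_password);
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN FALSE;  -- no user owns this YubiKey
END auth;
```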
To verify the YubiKey OTP you need to submit it (via an HTTP GET) to a YubiKey authentication service. You can either run this somewhere on your own network, or you can use Yubico’s public authentication service.
If you use the Yubico service you will also need two pieces of information about your Yubico account: your API key and your Yubico-issued company user ID: both are available from the Yubico management site. Within our demonstration client the API Key, User ID and the URI of the authentication service need to be entered into the APEXTRAS_YUBICO package as package constants.
The call to the authentication service looks something like this (the URL below is Yubico’s public validation service):

https://api.yubico.com/wsapi/verify?id=YOUR_ID&otp=THE_OTP&h=THE_SIGNATURE

- id is your company user ID
- otp is the one time password you want to verify
- h is an HMAC signature (HMAC-SHA1, keyed on your API key) of the query string
The authentication service will respond with a status such as OK, BAD_OTP, REPLAYED_OTP, BAD_SIGNATURE etc
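The HTTP GET itself can be made from PL/SQL with UTL_HTTP. A sketch against Yubico’s public service follows; the signature (the h parameter) is assumed to be computed elsewhere, and the function name is ours:

```sql
-- Hypothetical helper: submit the OTP to the validation service and
-- return TRUE when the service answers status=OK.
FUNCTION verify_otp (p_id IN VARCHAR2, p_otp IN VARCHAR2, p_h IN VARCHAR2)
  RETURN BOOLEAN
IS
  l_response VARCHAR2(32767);
BEGIN
  l_response := UTL_HTTP.REQUEST(
      'https://api.yubico.com/wsapi/verify'
      || '?id='  || p_id
      || '&otp=' || p_otp
      || '&h='   || UTL_URL.ESCAPE(p_h, TRUE));  -- signature is base64; escape it
  RETURN INSTR(l_response, 'status=OK') > 0;
END verify_otp;
```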
Sometimes we develop applications that really require more security than is provided by the standard combination of a username and password. We like two-factor/strong authentication which typically consists of something you know (a password) and something you have (a token). In the past we’ve used RSA’s SecureID but we’ve recently been evaluating the Yubikey from Yubico.
The Yubikey is a slim device that plugs into the USB port on any computer. To the computer it appears to be a USB keyboard which means it works across all operating systems and doesn’t require any drivers to be installed.
When you touch the button on the top of a Yubikey it generates a 32 character one-time password (OTP) which appears at the current cursor position: remember the host computer sees the Yubikey as a keyboard. You can validate this OTP by passing it in a call to a restful web service provided by Yubico or you can pass it to your own authentication servers. Yubico provides Java and PHP versions of the authentication server software under an Open Source license.
We like the concept of the Yubikey because, compared to other two-factor solutions, it is very cost effective. Each Yubikey costs about $25 (if you were to buy hundreds that would drop to under $10) and both the public authentication service and the authentication server software are free.
We often develop systems that consist of two APEX applications on top of the same database — a public application and an administration application that provides access to the back-end of the system and to table maintenance tasks. The administration application is typically used by just a handful of users and would really benefit from two-factor authentication. The Yubikey would allow us to incorporate strong authentication into the administration application at a common-sense price.
Our first test upgrade from APEX 3.1 to 3.2 was 100% error-free, but our applications showed up all sorts of strange errors. Then I remembered that we were using Patrick Wolf’s wrapped versions of the v, nv, and dv functions – see here for more details. We needed to replace “FLOWS_030100.V” with “APEX_030200.V” throughout – then everything worked again.
APEX has the ability to log and analyse user activity in some detail. However, the unit of the built-in APEX logs is the APEX page, and the data collected about what a user does on a particular page is limited. In many of our APEX applications, a single APEX page may correspond to multiple web pages. For example we have implemented a dynamic questionnaire system in which each screen of the questionnaire is rendered on the same APEX page. A PL/SQL routine is called with a parameter representing the page of the questionnaire to be displayed, and it outputs the complete questionnaire screen via htp.p calls. We still need to collect information about “page” visits although only one APEX page is involved. In addition, there is often a requirement to collect more detailed information about how people are using the system – search terms entered, number of checkboxes ticked, etc.
The solution is to build your own page logging system that inserts a row into a log table for every page visited. You can then record any sort of detail you like. We call the logging code as an Application Process at Process Point “On Load: Before Header (page template header)”. Because APEX doesn’t provide a built-in variable that records the current page alias (and we use page aliases throughout our system), we also need a Before Header Application Computation that fills the Application Item APP_PAGE_ALIAS (see http://oraclequirks.blogspot.com/2008/02/build-and-use-apex-page-alias.html).
The actual insert into the log table is a simple SQL statement that could be executed in the APEX process, but we prefer to keep code that writes to tables in PL/SQL packages, separate from APEX: it’s more maintainable there, error handling can be more rigorously enforced, and we end up with a single PL/SQL procedure to perform each action, rather than several APEX PL/SQL regions doing more-or-less the same thing.
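The packaged logging procedure can be sketched like this (the table and all names are illustrative, not our actual code):

```sql
-- Hypothetical page-visit logger, kept in a PL/SQL package and called
-- from an APEX Application Process at On Load: Before Header.
PROCEDURE log_page_visit (
  p_page_alias IN VARCHAR2,              -- from Application Item APP_PAGE_ALIAS
  p_app_user   IN VARCHAR2,              -- :APP_USER
  p_detail     IN VARCHAR2 DEFAULT NULL  -- eg search terms, boxes ticked
) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- logging must never disturb the page transaction
BEGIN
  INSERT INTO page_visit_log (visited_on, page_alias, app_user, detail)
  VALUES (SYSDATE, p_page_alias, p_app_user, p_detail);
  COMMIT;
END log_page_visit;
```

The Application Process body then reduces to a single call such as mylog_pkg.log_page_visit(:APP_PAGE_ALIAS, :APP_USER);.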
Oracle 10.2.0.4.0, Windows 32-bit, NLS_CHARACTERSET AL32UTF8.
We have a PL/SQL utility function that retrieves the nth item from a delimited string. The string parameter is a CLOB so that it can be used with input of any length. Normally the delimiters are commas, and the function has worked fine for years.
An APEX user’s sort order preference for a static report is stored in column WWV_FLOW_PREFERENCES$.ATTRIBUTE_VALUE as a string that looks something like fsp_sort_1_desc. We used our function to retrieve the various parts of this string, with ‘_’ as the delimiter, but it always returned a null. To our surprise it turned out that the underscore acted as a wildcard and matched any character in the string. That is expected behaviour in a SQL LIKE clause, but should not happen in PL/SQL. There is in fact a bug report in Oracle Metalink which describes this behaviour. There is no fix for Oracle version 10g – the bug is said to be fixed in Oracle 11.2. Our workaround was to replace all the underscores with another delimiter before calling the function.
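The workaround in sketch form (get_nth_item stands in for our unnamed utility function; ‘|’ is just a delimiter that does not occur in the data):

```sql
-- Replace the troublesome '_' delimiter with '|' before parsing, so
-- the 10g wildcard bug cannot bite.
DECLARE
  l_pref VARCHAR2(100) := 'fsp_sort_1_desc';
BEGIN
  l_pref := REPLACE(l_pref, '_', '|');
  -- now parse on '|' instead of '_'
  dbms_output.put_line(get_nth_item(l_pref, '|', 2));  -- the second item
END;
```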
We are used to working in more formal coding environments than our in-house Oracle/Apex development setup, for which we have a dedicated Oracle server and no full-time DBA. When we had to start making periodic application releases to a remote live server, we expected to be able to write a script that would do the release automatically. It proved a little more difficult than we thought it would – we needed a PL/SQL package, a PERL script, and an APEX command line utility. If you’re faced with the same problem, here’s how we did it.
We used Windows servers for this project, so the final product was two Windows batch files – however it wouldn’t be difficult to write PERL or shell scripts using the same methods.
Of course you will need to customise the batch files and possibly the PL/SQL package for your own environment, and you should test the deployment process to destruction before you let it loose on a live server.
Why did we need an Apex deployment script?
We were coming to the end of a large Apex project developed and to be run on Microsoft Windows Servers. The data was in a single dedicated Oracle workspace but there were three separate Apex applications and 17 PL/SQL packages amounting to nearly 30,000 lines of code.
There were three dedicated Windows Oracle servers – a development server at our office and staging and live servers at a remote co-location facility. We were making formal code and Apex applications releases from the development server to the staging server for user acceptance testing, then releasing from the staging to the live server when the system was approved. Each version of the project had to be retained so that releases could be rolled back if there was any problem.
The manual process
We started off doing both deployments (development->stage and stage->live) manually.
- Export each of the 3 Apex applications from the development server to a text file with a name that represents the new version number, then import it into the staging server.
- Copy each PL/SQL package from the development to the staging server, taking care not to overwrite sections of code that are site-specific; that includes SMTP server names and addresses, sender email addresses, and database links. We did this using cut and paste (carefully) between PL/SQL developer windows.
- Save the current state of the PL/SQL packages under the new version name – we used Subversion for this.
- Copy the style sheet and images from a directory on the development server to one on the staging server. It was easy to forget this step.
This process was complicated, time-consuming, and inherently risky – after several Apex versions had been generated and saved on disk it was easy to pick the wrong one for import. It was also essential to know how to edit PL/SQL code so that the site-specific code sections were retained – which meant that only a programmer could make a deployment.
The automated solution
Our goal was a single script (ie a Windows batch file) to run on the originating server that would export a project version in the form of SQL scripts to the target server, and a single script to run on the target server to import the new version.
A sample export batch file can be found here. It relies on a PL/SQL package, a PERL script, and the APEX utility APEXExport.class, which is in the standard APEX 3.1 distribution under apex\utilities\oracle\apex.
To run this, you’d save it as a Windows batch file (.bat or .cmd extension) on the source Oracle server then invoke it from the Windows command line with the version number as the only parameter. The source and destination file locations, and the numbers of the Apex applications to be deployed, are hard-coded in the batch file. The names of the packages, functions, and procedures to be deployed are package-level constants in the PL/SQL package, as is the name of the Oracle directory for output PL/SQL code. The directory name and its physical path on disk need to be set up (once) from Oracle – see below.
Here’s a corresponding sample import batch file. You’d save it as a Windows batch file on the target Oracle server then run it with the same version number as was given to the export batch file.
And here’s some more detail about how it works:
1. Exporting PL/SQL packages, functions and procedures (your code)
Where to write the export file containing your code
Oracle can write to external files by means of the UTL_FILE package. In order to use this, we needed to define an external directory on the Oracle server:
CREATE OR REPLACE DIRECTORY CODE_SAVE AS 'C:\MYDIRECTORY\ETC';
Where to get your code from
The standard Oracle view USER_SOURCE contains all the package, function, and procedure code visible to the current user. To turn the code into executable SQL scripts we just needed to surround it with “CREATE OR REPLACE” and “/”.
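The export step can be sketched like this, assuming the CODE_SAVE directory above (the file name is illustrative; our real package also filters on the constant list of unit names):

```sql
-- Write every visible package/function/procedure as one runnable script.
DECLARE
  l_file UTL_FILE.FILE_TYPE;
  l_last VARCHAR2(200);
BEGIN
  l_file := UTL_FILE.FOPEN('CODE_SAVE', 'code_v1.sql', 'w', 32767);
  FOR r IN (SELECT name, type, text
              FROM user_source
             ORDER BY name, type, line) LOOP
    IF l_last IS NULL OR l_last <> r.name || '.' || r.type THEN
      IF l_last IS NOT NULL THEN
        UTL_FILE.PUT_LINE(l_file, '/');  -- terminate the previous unit
      END IF;
      UTL_FILE.PUT(l_file, 'CREATE OR REPLACE ');
      l_last := r.name || '.' || r.type;
    END IF;
    UTL_FILE.PUT(l_file, r.text);  -- each USER_SOURCE row ends with a newline
  END LOOP;
  IF l_last IS NOT NULL THEN
    UTL_FILE.PUT_LINE(l_file, '/');
  END IF;
  UTL_FILE.FCLOSE(l_file);
END;
/
```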
2. Site-specific code
We used a simple mechanism for identifying conditional code sections. Each such section was given a start and finish label on a separate Oracle commented line. We arbitrarily chose ‘!~’ and ‘~!’ as begin and end delimiters for the labels because those strings didn’t occur anywhere else in our code.
The labels consisted of two parts separated by an underscore(_): an identifier that specifies the server for which the code section should be live, and the word BEGIN or END. In the exported code the relevant section for the target server is uncommented and all other sections remain commented out.
Thus a package specification could contain the following on a development server:

--!~DEVELOPMENT_BEGIN~!
smtp_host CONSTANT VARCHAR2(256) := 'smtp.developmentdomain.com';
--!~DEVELOPMENT_END~!
--!~LIVE_BEGIN~!
/*
smtp_host CONSTANT VARCHAR2(256) := 'smtp.livedomain.com';
*/
--!~LIVE_END~!

Which would be rendered on a live server as:

--!~DEVELOPMENT_BEGIN~!
/*
smtp_host CONSTANT VARCHAR2(256) := 'smtp.developmentdomain.com';
*/
--!~DEVELOPMENT_END~!
--!~LIVE_BEGIN~!
smtp_host CONSTANT VARCHAR2(256) := 'smtp.livedomain.com';
--!~LIVE_END~!
3. Exporting Apex Applications
Apex comes with a command-line export tool written in Java. This produces a SQL script that can be directly invoked from SQL*Plus to reimport the application. It was not easy to make the export tool run, but after some trial and error, we found that (at least in our setup) the CLASSPATH environment variable needed to point to both the Oracle Java library and the location of the Apex export script.
To import an exported application into a different server from the one that exported it, the security group id needs to be changed to that of the target server. That involves rewriting the security group id set near the top of the exported script. We did this by processing the output Apex script with a simple PERL script. Instead of substituting a literal security group ID, we rewrote the line so that the ID is looked up from the workspace name at import time. That allowed the import to work on any server with the correct workspace name.
Of course you can use any other text substitution tool to do the replacement instead of a PERL script, as long as it can be invoked from the command line.
This should work on any recent version of Oracle and Apex, but it has been tested only on:
Java is likely to be already installed on your Windows server, but PERL may not be.
We used ActiveState ActivePerl.
32-bit Windows download:
64-bit Windows download:
Java for Windows (currently Version 6 Update 11) download is available from: