Later this summer, the computer system for the 311 dispatch center in Austin, Texas, will take on more work, and the information technology office is looking to move some of the system's less-urgent duties off the primary database system.

One potential source of relief: the failover database system. The city is considering running the reports it must generate from the failover database rather than the primary one, said Wesley Jackson, manager of business applications at the city's communication and technology management office.

The idea of using the failover system or other backup databases for duties beyond disaster recovery is not a new one. It's a feature users have been requesting for a while, said Juan Loaiza, senior vice president of database development at Oracle.

"In order to get protection from site failures, they have to set up an entire other data center, this entire other computer system, this entire other database, and feed it this information, but they'll never use it except in case of a disaster," he said. "Nobody wants to do that, because site disasters don't happen every day."

Yet as replication and failover technologies improve, accessing and using that data is becoming easier, and organizations are finding uses for it and for the databases that hold it. "The data that resides on a failover database has some value," said Bill Cooper, vice president of data warehouse system provider Teradata, a division of NCR.

Like all cities and towns, Austin gets plenty of calls about such things as trash not being picked up, dogs on the loose and other nonemergencies. And like other cities, it must keep metrics on how those calls are handled, everything from how long it takes to complete them to where they are coming from.

"This is a very high-profile system, and we have to be able to provide reports about our call volumes," Jackson said.

Such reports are based on the service requests filled out by the call center operators. When a new call comes into the center, employees log the information, which the database stores. When reports are needed, they are generated by querying that database. The database servers, running instances of Oracle 9i, are located at the call center.

This summer, the 311 center will take on the additional duty of answering nonemergency police calls that previously went to the 911 call center. "We're about to have a major ramp-up of the number of calls that we take," Jackson said.

The change is expected to double or even triple the traffic coming into the call center, and the city's IT department is rethinking the idea of running reports from the primary database. With the increased traffic, the city can't afford to let the database be slowed by report generation.

"We don't want to have a major incident where we have to pull reports for a 311 call center [and] at the same time have our service reps take in [a surge of] support calls," he said. "We had this failover database at this other location, so we thought, 'Why not point to that failover database and run our reports from there?'"

When Jackson first looked into the possibility, however, he found that it could not be done, at least not without splitting the software that provides the replication services between the two databases.
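In application terms, what Jackson describes amounts to pointing the reporting code at a different connection string. Below is a minimal sketch of the idea in Python, using Oracle's python-oracledb driver; the connection strings, table and column names are placeholders rather than Austin's actual schema, and the sketch assumes the failover copy will accept read-only queries at all.

    import os
    import oracledb  # Oracle's Python driver, formerly cx_Oracle

    PRIMARY_DSN = "calls-primary.example.gov/svc311"   # takes the live call traffic
    STANDBY_DSN = "calls-standby.example.gov/svc311"   # failover copy at the other site

    def monthly_call_volume(dsn):
        """Count service requests by call type for the current month."""
        with oracledb.connect(user="report_user",
                              password=os.environ["REPORT_DB_PASSWORD"],
                              dsn=dsn) as conn:
            with conn.cursor() as cur:
                cur.execute("""
                    SELECT call_type, COUNT(*)
                      FROM service_requests
                     WHERE opened_at >= TRUNC(SYSDATE, 'MM')
                     GROUP BY call_type
                """)
                return cur.fetchall()

    # Point the report at the failover copy so the primary stays free for call takers.
    rows = monthly_call_volume(STANDBY_DSN)

Whether the standby will actually answer such a query depends entirely on how replication between the two servers is configured, which is the wall Austin ran into.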
Now the city is looking into ways around the problem or, should those fail, other ways of generating reports, perhaps by setting up a third database.

Austin is not alone in confronting these issues. Other organizations are eyeing their failover or backup databases for additional duties. Many organizations have already set up secondary databases. Some simply keep a backup copy of the data. Others are kept in failover status, meaning they can take over as the primary database should the primary one fail. In fact, current software, such as that offered by SunGard or Oracle, can switch between the primary and backup databases so quickly that users don't even notice.

Loaiza said Oracle has made great strides in making failover and backup databases more readily available for additional purposes. This is not an easy job, given how tricky it is to read from a database while it is being updated.

In the most basic configuration, the primary database writes a log of its changes, which is then shipped to the secondary database and applied there to keep the two identical. Oracle has long offered this approach, which it calls Critical Standby. Users could query the Critical Standby database when it wasn't being updated with new material, but the two operations couldn't happen at the same time.

"So what people would do is switch back and forth," Loaiza said. "Maybe during the day they'd read the database, and at night they'd apply the changes."

That proved unsatisfactory for some users, so about seven years ago the company introduced the Logical Standby database, which allows reads and writes at the same time. "It's more flexible," Loaiza said. The downsides, he said, are that it isn't as fast, because much more data conversion is required, and that it is more complicated for administrators.

With the upcoming release of Oracle 11g, a new feature called Readable Physical with Real Time Queries will offer the best of both Critical Standby and Logical Standby, Loaiza said. Reading and writing can happen nearly simultaneously, and the standby can keep pace with the primary system.

The agility of tools such as Readable Physical with Real Time Queries gives organizations far more flexibility. For instance, an organization can take the primary system down to apply patches and updates while the failover system runs in its place, then bring the secondary system down for its own upgrades once the primary is back.

"Organizations like Amazon can't be down at all, so this is a technique that they use," said Ari Kaplan, president of the Independent Oracle Users Group.

In addition to patching, organizations can use failover databases to take backup chores off the primary database server, Kaplan said. Backing up a system can slow its service by 10 to 40 percent; backing up the failover server rather than the primary removes that lag for users of the primary system.

The failover system can also be used for testing, Loaiza said. An organization may want to make a change in the production environment but worry that the change will cause havoc. The change can be made on the failover server first and, only if it proves successful, applied to the production database.
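That rehearse-first pattern is straightforward to picture in code. Here is a rough sketch, again in Python with placeholder names; it assumes the failover copy can temporarily accept changes while it plays the test role, which in practice depends on the replication product in use.

    import os
    import oracledb

    STANDBY_DSN = "calls-standby.example.gov/svc311"
    PRIMARY_DSN = "calls-primary.example.gov/svc311"

    # A hypothetical change someone is nervous about pushing straight to production.
    CHANGE = "CREATE INDEX ix_requests_opened ON service_requests (opened_at)"

    def apply_change(dsn, statement):
        with oracledb.connect(user="dba_user",
                              password=os.environ["DBA_PASSWORD"],
                              dsn=dsn) as conn:
            with conn.cursor() as cur:
                cur.execute(statement)

    def change_behaves(dsn):
        """Placeholder verification step: time the reports, inspect query plans, etc."""
        return True

    # Rehearse on the failover server; touch production only once the change checks out.
    apply_change(STANDBY_DSN, CHANGE)
    if change_behaves(STANDBY_DSN):
        apply_change(PRIMARY_DSN, CHANGE)

The same structure applies to any change an administrator would rather not try on production first, from new indexes to configuration tweaks.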
Increasingly, though, organizations are generating reports against these secondary databases.

In many cases, the data an organization needs to report on exists only in the database in which it was created, Kaplan said. But such databases are optimized for transactions, so running reports against them slows the system unduly for its users. "Organizations are stuck in that mixed, dual-purpose environment," Kaplan said. "If you have everything in one environment, it hurts performance."

Because failover databases don't carry the performance demands that primary databases do, they can offer quicker responses when queried by business intelligence software. "You can do the heavy reporting in one system and the updates on the original," Kaplan said.

One thing to keep in mind with this approach is that the reporting database is always slightly less current than the production database, Kaplan said. Unless the two database servers are fully synchronous, meaning both are updated at the same time, the secondary server may not be as complete as the primary one.

"That might be OK if you're doing a report about what happened last month," he said. In other cases, having differing versions of the data may not be acceptable.
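One way to respect that caveat is to have the reporting job measure how far behind the failover copy is before trusting it, and fall back to the primary when freshness matters. The sketch below relies on a hypothetical replication_heartbeat table kept current on both servers; it illustrates the idea and is not a feature of any particular replication product.

    import os
    import oracledb

    def newest_heartbeat(dsn):
        """Timestamp of the last replicated change visible on this copy."""
        with oracledb.connect(user="report_user",
                              password=os.environ["REPORT_DB_PASSWORD"],
                              dsn=dsn) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT MAX(updated_at) FROM replication_heartbeat")
                return cur.fetchone()[0]

    def pick_report_source(primary_dsn, standby_dsn, max_lag_seconds=300):
        lag = newest_heartbeat(primary_dsn) - newest_heartbeat(standby_dsn)
        # A monthly roll-up can tolerate a few minutes of lag; a report that must
        # match the production data exactly cannot, so it falls back to the primary.
        if lag.total_seconds() <= max_lag_seconds:
            return standby_dsn
        return primary_dsn

Routine roll-ups can then run against the standby, while anything that must match the production data exactly still queries the primary.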