Army Research Lab deploys cloud for app testing
What do you do if your application is so large that it would require all the nodes on your network just to test? If you're the Army Research Laboratory, you turn to cloud computing.
"We were in between a rock and a hard place," said Dennis Reedy, a system architect and advisor for ARL supporting contractor Altus Engineering, speaking at the JavaOne conference held earlier this month in San Francisco. The lab wanted to road-test the next version of its much-awaited modeling and simulation system, The Modular Unix-based Vulnerability Estimation Suite (MUVES).
"We need to field and validate the system, test the scalability, test the ability for the system to fail over," Reedy said.
The trouble was that ARL had no servers to spare. What capacity was available could be used only during off-hours, such as the middle of the night or on weekends.
Instead, the lab uploaded the software to the Amazon Elastic Compute Cloud (EC2), automating the build and test process through an open-source cloud management service called Elastic Grid.
"By using cloud computing, we were able to test on a number of machines we would have never been able to acquire," said ARL computer scientist Ronald Bowers, who also spoke at the JavaOne presentation.
The software being tested is version three of MUVES, a complete rewrite of a general-purpose modeling and simulation application that ARL has used for the past 20 years. The agency uses the software to measure, among other things, how much damage bullets, bombs and other projectiles can do to vehicles.
Analysts who use the current version of the software, a single-threaded application, have complained about its performance. Bowers chalked the sluggishness up to how the software handles persistence, specifically the large amount of material it keeps in working memory. All told, about 100 analysts use the software on their workstations on a regular basis.
Unlike the previous version of MUVES, the new software will have a distributed architecture, one in which different functions of the application are broken into different tiers. The software will be "composed over numerous services on a local network," Bowers said. The client software, which resides on the user's workstation, interacts with gateway software, which pieces together the needed components from various services elsewhere on the network, including other workstations. To confront the persistence problem, material that is not currently needed is moved to storage.
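The tiered pattern Bowers describes can be sketched in plain Java. The class and method names below are illustrative stand-ins, not actual MUVES code: a client talks only to a gateway, the gateway locates services registered on the network, and idle material is offloaded from working memory to a storage tier.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a tiered, service-composed design.
// None of these names come from MUVES itself.

interface SimulationService {
    String process(String material);
}

class Gateway {
    private final Map<String, SimulationService> services = new HashMap<>();
    private final Map<String, String> storage = new HashMap<>(); // stand-in for the storage tier

    void register(String name, SimulationService svc) {
        services.put(name, svc);
    }

    // Piece together the needed service for a client request.
    String handle(String serviceName, String material) {
        return services.get(serviceName).process(material);
    }

    // Material not currently needed is moved out of working memory.
    void offload(String key, String material) {
        storage.put(key, material);
    }

    String recall(String key) {
        return storage.get(key);
    }
}

public class TieredSketch {
    public static void main(String[] args) {
        Gateway gw = new Gateway();
        gw.register("vulnerability", m -> "assessed:" + m);

        // The client interacts only with the gateway, never with
        // the services directly.
        System.out.println(gw.handle("vulnerability", "vehicle-panel"));

        gw.offload("scenario-42", "large geometry data");
        System.out.println(gw.recall("scenario-42"));
    }
}
```

Because the client sees only the gateway, the services behind it can live anywhere on the network, which is what makes the design portable to a cloud environment.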
Such an architecture lends itself well to a cloud infrastructure, Reedy noted.
"Although we're not doing virtualized compute resources, our application architecture represents a lot of what you'd like to see in a cloud — we're doing real-time provisioning, service-level agreements, ... dynamically monitoring services," he said.
While ARL computers were in short supply, the development team found that it could test MUVES in Amazon's cloud offering. This way, it could run the application under a full workload, testing its scalability and gauging how much hardware it would require to run efficiently. Although the specific problems ARL works on could not be placed in the cloud, even with security measures in place, the testing team could use the service to test most of the MUVES component stack, which is generic in nature.
One time-consuming aspect of testing in the cloud was the amount of preparation needed to ready the components for cloud use. Ideally, the components should not have to be altered to work in a cloud environment. "We want to minimize the changes to the technology we created. We really wanted to transparently switch from our [Local Area Network]-based platform" to the cloud, Reedy said.
Using Elastic Grid helped in this regard, Reedy said. The service let the testing team mirror the in-house setup. The developers could upload the entire project to a secure site, and the software would spin up however many virtual machines were needed, in some cases anywhere from 100 to 150. After the tests completed, the developers could download the results.
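The cycle Reedy describes (upload the project, spin up the needed machines, run the tests, collect the results, release the machines) amounts to a simple orchestration loop. The sketch below simulates that loop locally; the `Provisioner` interface and its implementation are hypothetical stand-ins, not Elastic Grid's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative orchestration loop for cloud-based testing:
// provision N machines, run the workload on each, gather results,
// then release the machines. Everything here is a local simulation.

interface Provisioner {
    List<String> provision(int count);   // returns node identifiers
    void release(List<String> nodes);
}

class SimulatedProvisioner implements Provisioner {
    public List<String> provision(int count) {
        List<String> nodes = new ArrayList<>();
        for (int i = 0; i < count; i++) nodes.add("node-" + i);
        return nodes;
    }
    public void release(List<String> nodes) {
        nodes.clear(); // stand-in for terminating the instances
    }
}

public class TestRunSketch {
    static String runTests(String node) {
        return node + ":passed"; // placeholder for the real test workload
    }

    public static void main(String[] args) {
        Provisioner p = new SimulatedProvisioner();
        List<String> nodes = p.provision(150);  // "anywhere from 100 to 150"
        List<String> results = new ArrayList<>();
        for (String n : nodes) results.add(runTests(n));
        p.release(nodes);
        System.out.println(results.size() + " results collected");
        System.out.println(results.get(0));
    }
}
```

The point of the abstraction is the transparency Reedy asked for: the same loop runs whether the provisioner backs onto a LAN test bed or a cloud service, so the components under test need not change.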