A winding path to unified storage
The Air Force Center for Engineering and the Environment upgrades its storage with a network-attached storage system and gateway that let it keep existing systems in the mix.
For years, Ralph Miles managed storage for the Air Force Center for Engineering and the Environment with a mix of direct-attached storage (DAS) and storage-area network (SAN) systems.
But when it came time to unify the storage networks and prepare for future growth, Miles chose network-attached storage. NAS offered the scalability he needed and, via a gateway from OnStor, let him keep the old DAS and SAN arrays in service as part of the new network.
Headquartered in Brooks City-Base, Texas, outside San Antonio, the 740-employee Air Force center specializes in building amenities, such as commissaries, and cleaning up military bases after they have been decommissioned.
It works with contractors and environmental engineers from across the country who collaborate with one another and store their files online. The agency also submits a lot of reports to the Environmental Protection Agency.
Over the years, the center had collected a number of seemingly incompatible storage systems. In 1994, as network administrator, Miles was asked to provide a way for employees to back up their personal computer files.
So he set up a series of DAS servers that offered each user as much as 2G of storage.
But using a DAS approach proved to be cumbersome. Every time a server needed to be upgraded, Miles had to physically move the disks, along with the SCSI controller, to the new server. It was the only way to save the data.
“I needed the data to be external to the server,” Miles said.
He stuck with DAS but put the hard drives in external enclosures and increased the capacity per user to 6G. He arranged the disks in a Redundant Array of Independent Disks (RAID) configuration and built a 60G cabinet of 10 drives.
That approach liberated the storage from the individual servers, though the storage space could not be reallocated across servers.
By 1998, the agency had outgrown that technology and took the next step, spending $75,000 on a three-shelf, 1.5T SAN array from Legato (now owned by EMC), which should have provided more than enough space. The array had Fibre Channel drives and switches.
Miles said he liked the reliability and flexibility of a SAN but soon realized the benefits came at a price — specifically, the cost of maintenance and upgrades.
“I couldn’t afford it,” he said. Moreover, users didn’t need the ultra-fast retrieval and write times the SAN offered.
So when the time came to upgrade, Miles went back to DAS. The agency bought a 6T DAS unit from a local supplier that cost only $24,000. The unit was so much cheaper than a SAN in part because it used Serial Advanced Technology Attachment drives, which were slower but less expensive than Fibre Channel.
“My users couldn’t tell the difference between Fibre Channel” and Serial ATA, Miles said.
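The center's own purchase figures illustrate the gap, though the two buys came years apart, so this is a rough comparison rather than a like-for-like one:

```python
# Cost per gigabyte, using the purchase figures quoted in this article.
# The 1998 SAN and the later DAS unit were bought years apart, so this is
# an illustration of the gap rather than a strict comparison.
san_cost, san_capacity_gb = 75_000, 1_500    # $75,000 for the 1.5T Fibre Channel SAN
das_cost, das_capacity_gb = 24_000, 6_000    # $24,000 for the 6T Serial ATA DAS unit

print(f"SAN: ${san_cost / san_capacity_gb:.0f} per gigabyte")  # about $50 per gigabyte
print(f"DAS: ${das_cost / das_capacity_gb:.0f} per gigabyte")  # about $4 per gigabyte
```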
The approach worked well until Miles got the order to expand the storage pool once again to provide space for disaster recovery. That effort involved backing up desktop PCs and servers at all the center’s locations.
Miles at first tried to expand the existing DAS, but he ran into problems. By that time, he had 22T of data on three trays of DAS storage.
Using the Microsoft Windows NT File System (NTFS), which could only allocate 2.2T per partition, Miles divided the disaster recovery storage into three 1.6T partitions. But he soon ran into a problem: Checkdisk.
The Checkdisk routine tests each sector of a hard drive to verify that the drive is still good.
“Ever try to run Checkdisk on a 1.6T partition?” Miles asked.
It took about a week to check each partition and three weeks to test all three.
“That was three weeks my disaster recovery system was down,” he said.
During one run of Checkdisk, the center missed three weeks of incremental backups, which did not please its executives.
“It was painful to report to the chief financial officer that his prized disaster recovery system was not in play,” Miles said. “I could not afford to go through that again, so I will not use NTFS schema for my terabyte-sized storage.”
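For those who have not run it, Checkdisk (chkdsk) is invoked against a single volume, and the full surface scan is what makes the pass so slow. A minimal sketch, assuming a Windows host and a hypothetical E: volume holding one of the 1.6T partitions:

```python
# Minimal sketch, assuming a Windows host; the E: drive letter is hypothetical.
# chkdsk /f fixes file system errors; /r additionally reads every sector to
# find bad blocks, the pass that ran roughly a week per 1.6T partition here.
import subprocess

subprocess.run(["chkdsk", "E:", "/f", "/r"], check=False)
```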
At that point, he talked to his local reseller, and they agreed that a new storage system was necessary and that it should not be based on NTFS.
SAN wouldn’t work, not only because it is expensive but also because it is block-based, which was not optimal for saving user files. SAN is good for large applications that can manage their own storage space.
On the other hand, DAS sitting behind a file server is presented to users as files, which generally makes it well suited to people who need a place to store documents. But that approach wouldn't work well for the Air Force center because it would require too many servers, and data recovery would be difficult if a file server went down.
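The distinction shows up in how an application addresses the storage. A minimal, self-contained sketch of the two access styles; the in-memory "LUN" and the file name are stand-ins, not the center's configuration:

```python
# Block-style access (SAN): the application reads and writes raw byte offsets
# and manages its own layout, which suits databases and large applications
# more than end users saving documents.
import io

BLOCK = 4096
lun = io.BytesIO(bytes(BLOCK * 10))   # in-memory stand-in for a raw SAN LUN
lun.seek(BLOCK * 3)                   # jump straight to an arbitrary block
lun.write(b"application-managed record")

# File-style access (file server or NAS): the application opens a named file
# and the storage system decides where the bytes actually live.
with open("cleanup-report.txt", "w") as doc:
    doc.write("draft report for the EPA")
```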
Center officials decided to buy four 12T NAS storage arrays from OnStor. Each of the company's Pantera storage units holds 16 hot-swappable 750G Serial ATA drives in a RAID configuration. And the NAS setup meant officials could easily add more storage.
The center also invested in a Bobcat 2240 gateway from OnStor that could handle the NAS traffic and the Fibre Channel SAN traffic on the back end. The gateway allowed the agency to manage its existing SAN and DAS units as components of the NAS system.
“When clients write to the Bobcat gateway, they are reading and writing files,” said Narayan Venkat, vice president of marketing at OnStor. “But behind the gateway, the gateway can write blocks to block storage.”
With that approach, the center can continue using the SAN and DAS units until they are phased out. And rather than treating them as isolated pools of storage that must be managed separately, officials can add the units into the center’s overall storage configuration.
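A rough sketch of the idea Venkat describes, illustrating file-to-block translation in general rather than OnStor's implementation:

```python
# Illustrative only: a gateway that accepts file-level reads and writes from
# clients and stores the data as fixed-size blocks on back-end block storage,
# the way a NAS gateway fronts SAN or DAS capacity. Not OnStor's implementation.
BLOCK_SIZE = 4096

class FileToBlockGateway:
    def __init__(self):
        self.blocks = {}     # block number -> bytes; stands in for a SAN/DAS LUN
        self.file_map = {}   # file name -> list of block numbers, in order
        self.next_block = 0

    def write_file(self, name, data):
        """Clients see a file write; the gateway chops it into blocks."""
        numbers = []
        for offset in range(0, len(data), BLOCK_SIZE):
            self.blocks[self.next_block] = data[offset:offset + BLOCK_SIZE]
            numbers.append(self.next_block)
            self.next_block += 1
        self.file_map[name] = numbers

    def read_file(self, name):
        """Reassemble the file from its blocks on the block back end."""
        return b"".join(self.blocks[n] for n in self.file_map[name])

gateway = FileToBlockGateway()
gateway.write_file("site-survey.pdf", b"survey data " * 1000)
assert gateway.read_file("site-survey.pdf").startswith(b"survey data")
```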
“You derive a lot of economies of scale by consolidating storage,” Venkat said.