Storage Migration Costs Can Exceed Array Prices


In the latest in his series of Wikibon Professional Alerts on Software-led Infrastructure, “Moving to Virtualized Clustered Storage”, Wikibon CTO David Floyer examines the full cost of storage array replacement and finds that the costs involved in data migration can exceed the price of the original array itself. This expense is often treated as depreciated staffing or a sunk cost and is therefore not managed as part of the cost of replacing an old array. But, he says, the cost and complexity of moving data result in data being left on suboptimal arrays and can delay the move to software-led storage.

Migration costs include:

  • The overlapping cost of the old array (five months of a three-year lease),
  • The operational cost of planning and executing the migration of volumes from the old array to the new one, and,
  • The cost of buying the new array five months earlier than required, based on a decline in storage costs of about 35% per year.

These costs, Floyer’s analysis shows, typically amount to 45% of the total cost of ownership of the array, presuming the migration can be completed in five months. If, as sometimes happens, the migration takes a full year, the migration cost can exceed the price of the original array.
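To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. All figures (array prices, lease term, labor cost) are hypothetical assumptions for illustration, not numbers from the Alert, and the early-purchase premium is one plausible reading of the 35%-per-year price decline:

    # Illustrative model of the three migration cost components Floyer
    # identifies. Every dollar figure here is a hypothetical assumption
    # for illustration, not a number from the Alert itself.

    ARRAY_PRICE = 500_000        # assumed price of the old array ($)
    NEW_ARRAY_PRICE = 500_000    # assumed price of the replacement ($)
    LEASE_MONTHS = 36            # three-year lease
    OVERLAP_MONTHS = 5           # old and new arrays run in parallel
    ANNUAL_PRICE_DECLINE = 0.35  # storage prices fall ~35% per year
    MIGRATION_LABOR = 120_000    # assumed planning/execution labor ($)

    # 1. Overlapping lease cost: five months of a three-year lease.
    overlap_cost = ARRAY_PRICE * OVERLAP_MONTHS / LEASE_MONTHS

    # 2. Operational cost of planning and executing the volume moves.
    labor_cost = MIGRATION_LABOR

    # 3. Premium for buying five months early: waiting 5/12 of a year
    #    would have cost price * (1 - 0.35) ** (5 / 12), so the premium
    #    is the difference (about 16% of the new array's price).
    early_premium = NEW_ARRAY_PRICE * (
        1 - (1 - ANNUAL_PRICE_DECLINE) ** (OVERLAP_MONTHS / 12))

    total = overlap_cost + labor_cost + early_premium
    print(f"Overlap cost:      ${overlap_cost:,.0f}")
    print(f"Migration labor:   ${labor_cost:,.0f}")
    print(f"Early-buy premium: ${early_premium:,.0f}")
    print(f"Total:             ${total:,.0f} "
          f"({total / ARRAY_PRICE:.0%} of the array price)")

With these assumed inputs the total lands near half the array's price, consistent in spirit with Floyer's 45%-of-TCO figure; the real proportion depends entirely on the actual lease, labor, and pricing terms.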

The complexity comes from moving live data without shutting down the associated applications. This difficulty is a major inhibitor to providing a scale-out, software-led infrastructure.
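The Alert does not detail how any particular product handles this, but the generic technique behind non-disruptive migration is iterative copying with dirty-block tracking: copy everything once, then keep re-copying only the blocks the running application has changed until the remaining delta is small enough for a brief cutover. The following is a toy sketch of that idea; the Volume class and all of its methods are hypothetical stand-ins, not any vendor's API:

    # Toy sketch of non-disruptive ("live") volume migration using
    # iterative copying with dirty-block tracking.
    import random

    class Volume:
        def __init__(self, num_blocks):
            self.blocks = [f"data-{i}" for i in range(num_blocks)]
            self.dirty = set()  # blocks written since tracking reset

        def read(self, i):
            return self.blocks[i]

        def write(self, i, data):
            self.blocks[i] = data
            self.dirty.add(i)

    def simulate_app_writes(vol, n=8):
        # Stand-in for the live application dirtying blocks mid-copy.
        for _ in range(n):
            vol.write(random.randrange(len(vol.blocks)), "new-data")

    def live_migrate(source, target, max_passes=10, cutover_threshold=4):
        # First pass copies every block; later passes re-copy only the
        # blocks dirtied while the previous pass was running.
        pending = set(range(len(source.blocks)))
        for _ in range(max_passes):
            source.dirty = set()          # start tracking new writes
            for block in pending:
                target.write(block, source.read(block))
            simulate_app_writes(source)   # app keeps running meanwhile
            pending = source.dirty
            if len(pending) <= cutover_threshold:
                break
        # Cutover: quiesce I/O briefly, flush the last dirty blocks,
        # then redirect the application to the new array.
        for block in pending:
            target.write(block, source.read(block))

    src, dst = Volume(1024), Volume(1024)
    live_migrate(src, dst)
    assert dst.blocks == src.blocks       # target is now consistent

Each pass shrinks the outstanding delta only if copying outruns the application's write rate, which suggests one reason migrations of busy production volumes stretch over months.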

Vendors have not drawn attention to these costs and issues because doing so would create a negative marketing perception. That does not mean, however, that they are unaware the problem needs fixing. Several major vendors, including IBM (SAN Volume Controller), Hitachi (High Availability Manager), EMC (VPLEX), HP (3PAR), and NetApp (ONTAP 8.1), have fielded federated storage systems that can potentially cut the cost and complexity of migration, providing a technical foundation for the move to software-led storage.

Of these, Floyer says NetApp ONTAP 8.1 is the strongest technology. The present version works only within a single data center, but he predicts that future releases will extend it across metro distances and include data storage on solid-state devices in servers.

He recommends that CIOs, CTOs, and senior storage specialists push their storage vendors to lay out a clear pathway to the scale-out architectures, equipment, and software that will enable data-as-a-service.

As with all Wikibon research, this Alert is available in its entirety on the public Wikibon Web site. IT professionals are invited to register for membership in the Wikibon community. Membership allows them to comment on research and publish their own Professional Alerts, tips, questions, and relevant white papers. It also brings invitations to the periodic Peer Incite meetings, at which their peers discuss the solutions they have found to real-world problems, and a subscription to the Peer Incite Newsletter, in which Wikibon and outside experts analyze aspects of the subjects discussed in those meetings.

About Bert Latamore

Bert Latamore is a journalist and freelance writer with 30 years of experience in the IT industry, including four years at Gartner and five at META Group. He is presently the editor at Wikibon.org and associate editor at Seybold Publishing. He follows the mobile computing market, including PDAs and tablet computing, and related subjects, both as a user of PDAs and tablet computers for more than 20 years and as a strategic analyst. He was the first person at Gartner to carry a pocket computer, in 1989.