IT Today is brought to you by Auerbach Publications

Data Center Storage: Migration and Retiring Aging Systems

by Hubbert Smith

Why migrate? If it's not broken, don't fix it, right? Wrong. Sooner or later, we must migrate: user demand grows, application demand grows, use of the data expands as it integrates with other applications, the data itself grows, and hardware ages. It is in your best interest to organize the task of migration well and to use it frequently. The cost of the hardware is only a fraction of the total cost of the system; maintenance, people, backup, and so forth make up the rest.

  • Replace aging server or storage hardware. Aging hardware gets outgrown and becomes increasingly unreliable; eventually it must be replaced.
  • Standardize IT platforms. Hardware and applications age and drift away from standard configurations. Standardizing hardware, software, and processes improves uptime, improves your ability to fix problems quickly, reduces server and storage sprawl, and reduces the expense of the people and service contracts needed to manage a multivendor environment.
  • Replace servers using direct attached storage (DAS) with consolidated storage. Improve server performance by offloading backup and snapshot workloads from the servers onto the storage array.
  • Improve performance of primary storage. This can be accomplished by load-balancing across several systems or by migrating aging or nonessential data to Tier 2 to offload Tier 1 and improve its performance.
  • Reduce the costs of storage and backup. A tiered storage approach, governed by SLAs, helps achieve this goal.
  • Consolidate data. This reduces overprovisioning, improves RPO/RTO with snapshots, and saves expense through consolidated backup.

Migrating from DAS (direct attached storage) to SAN (external shared storage) is the first place to improve where and how you spend on storage. It's harsh-reality time: To achieve these benefits, not only must you buy and install the new systems; you must also retire olde systems (yes, I used that spelling intentionally).

The challenge is coaching your people not to repeatedly indulge in bad habits, such as leaving lots of data on Tier 1 storage and simply buying more Tier 1 whenever demand grows. At the core of any business are people, processes, and tools (also known as technologies). Frequently, once people and processes are established, they become habits, often bad habits. IT operations people simply carry out the migration, backup, or expansion without considering alternatives such as migrating stale data to cheaper systems or retiring aging data to archive. These alternatives could improve operations in many ways:

  • Removing aging data from Tier 1 systems reduces the time to backup, and coincidentally creates an opportunity for improved SLA for the remaining (non-stale) Tier 1 data. A smaller capacity Tier 1 can be backed up more frequently; the Tier 2 data needs to be backed up much less frequently.
  • Removing aging data from Tier 1 storage systems also reduces the total I/O workload and can improve performance for (non-stale) Tier 1 data. A modest reduction of I/O workload from an overfull server or storage array can deliver a pronounced improvement in performance. For instance, reducing an I/O load by ten percent on a fully loaded system can offer dramatically better system performance and response.
  • Removing aging data from Tier 1 systems also serves to avoid overhead for replication and WAN data charges, and reduces competition with other WAN traffic for limited bandwidth.
  • Moving data from the Tier 1 infrastructure onto affordable (more GB per dollar) Tier 2 storage hardware creates further cost efficiencies. Archival to tape completely removes stale data from spinning disk drives that take energy to run and to cool and that consume limited data center floor space. Putting that data onto tape means it will consume no power, generate no heat, and occupy significantly less expensive floor space.
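Identifying which data is stale enough to demote or archive is the first practical step. The following is a minimal sketch, not the author's tooling, assuming a POSIX file share and using last-modified time as the staleness signal; the share path and the 365-day cutoff are illustrative assumptions only:

```python
import os
import time

def find_stale_files(root, max_age_days=365):
    """Walk a directory tree and return (path, size) pairs for files whose
    last-modification time is older than the cutoff. These are the
    candidates for demotion to Tier 2 or retirement to archive."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_mtime < cutoff:
                stale.append((path, st.st_size))
    return stale

if __name__ == "__main__":
    # Hypothetical share path; report how much Tier 1 capacity stale data holds.
    stale = find_stale_files("/shares/engineering", max_age_days=365)
    total_gb = sum(size for _, size in stale) / 1e9
    print(f"{len(stale)} stale files, {total_gb:.1f} GB candidate for Tier 2")
```

A report like this gives end users something concrete to review before any data actually moves, which helps with the buy-in problem discussed below.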

Without question, there are justifiable situations that merit Tier 1 storage spending. But every proposal to spend more money on Tier 1 storage should (indeed, must) be accompanied by a matching plan to conserve Tier 1 capacity by moving data to Tier 2 or archiving stale data.

Migration projects fall into two very separate categories:

  • File: Network attached storage (NAS), shared folders, anything using a file system such as New Technology File System (NTFS), Zettabyte File System (ZFS), or ext, accessed over a NAS protocol such as Network File System (NFS) or Common Internet File System (CIFS).
    • NTFS (New Technology File System) is Microsoft's NT file system
    • ZFS (Zettabyte File System) is a Sun/Oracle-developed file system
    • NFS (Network File System) is a network attached storage protocol originally developed by Sun Microsystems to share files
    • CIFS (Common Internet File System) is a network attached storage protocol developed by Microsoft to share files, printers, and more
  • Block: Databases, e-mail, SharePoint, and similar projects in which the application includes its own data structure instead of a file system.

11.1 File Migration Project Planning

The goal (as always) is to reduce storage expenses and improve storage service levels. To overcome resistance to change, risk avoidance, and the like, it's important to set the goals in front of the team. Make sure they all understand that leaving the file data (the shared folders) in place and simply expanding the storage increases both risks and costs for the company. To motivate the team, the project plan should include a review of current costs compared to new costs and should plan for both growth of incoming data and retirement of aging data.

Expect both end users and frontline IT people to resist moving any of the shared folders. Moving data usually results in end users not finding the data they seek, which leads to calls to the IT help desk, unpleasant for everyone involved. However, the alternative, leaving the data in its current location (presumably Tier 1 storage) and simply expanding, is far more costly.

Moving the data itself is not such a big task. The central problem is that all the users have static NFS or CIFS mount points to their data. To see this, find a shared folder on your PC and open its properties to view the mount point: basically a server name and a folder. Moving the static pointers means changing every client and every server so they no longer point at the old mount point but at the new one.
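On Linux or UNIX clients, those static pointers often live in /etc/fstab, so the repointing can be scripted. A hedged sketch follows (the server and export names are hypothetical, and Windows clients would instead need their mapped drives or DFS targets updated):

```python
def remap_fstab(text, old_source, new_source):
    """Rewrite fstab-style mount entries, replacing the old NFS source
    (server:/export) with the new one. Returns the rewritten text."""
    out = []
    for line in text.splitlines():
        fields = line.split()
        # fstab format: <source> <mountpoint> <fstype> <options> <dump> <pass>
        if fields and not line.lstrip().startswith("#") and fields[0] == old_source:
            fields[0] = new_source  # point the client at the new server:/export
            out.append("\t".join(fields))
        else:
            out.append(line)  # comments, blanks, and other mounts pass through
    return "\n".join(out)

# Dry run against a copy first; only rewrite /etc/fstab during the cut-over window.
before = "oldnas:/vol/shared  /mnt/shared  nfs  defaults  0 0"
print(remap_fstab(before, "oldnas:/vol/shared", "newnas:/vol/shared"))
```

Having a script like this, plus its inverse as a rollback, directly addresses the "what happens when I break the client side?" fear described next.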

This causes the IT people to think, "What happens when I break the client side? What happens if I mess up the server setup or permissions? What is this going to do to performance? What is this going to do to the network? What is this going to do for backup and restore? I'll mess it up and look stupid and incompetent."

And then the IT people think, "This task will go on forever: more servers, and repeated relocation of aging data off Tier 1 and onto Tier 2. There is just too much work and risk!" Expect resistance.

11.1.1 File Migration: Plan, Discover, Test, QA, Provision, and Cut-Over

  • Size current growth (capacity growth in GB, growth in the number of active users, and growth in demand for scalable performance), and have a plan for expanding storage to accommodate it. Usually a spreadsheet with past-present-future capacity and user-performance loads for each application will suffice. The plan does not need to be perfect; it can (and should) be revisited and updated.
  • Plan on ongoing data retirement. If end users really need to keep data online longer, then include that in the SLA and show the cost to the company. Avoid migrating forward all your stale and unused data. Retire it to archive media, or, if that's not possible, demote it from Tier 1 to Tier 2.
  • Perform risk management. Take a look at what might go wrong and determine a remedy or workaround. Make conscious decisions regarding acceptable risk, or change the plan when encountering an unacceptable risk. The risk of someone complaining or making a fuss is very different from a risk of real business impact; separate perceived or emotional issues from real, concrete ones. For those emotional end users (and we all have them), have a discussion prior to the migration to reduce the drama.
  • Establish a plan of record that includes scope, schedule, and resources. If there needs to be a change of plan, do that consciously; communicate a change of plan of record and update the written project plan. Make that plan of record well known, review it with key stakeholders, make it accessible to your end users. Making a migration go smoothly with low drama will make the next migration easier to accomplish.
  • Prior to executing the project, establish a quality assurance plan with well-understood acceptance criteria. Review the acceptance criteria and the plan of record with the team up front. Review the acceptance criteria again, immediately prior to the migration, as a final approval.
  • Perform discovery, classification, and cleansing. Discovery can be conducted manually with spreadsheets plus scripting. Every system has stale, obsolete, or otherwise useless data. Either make it useful or get rid of it; involve your end users in the process. Only migrate data that is in use. Migration should first retire stale data. Use the migration as an event to do some housekeeping.
  • Do a practice or "dry run." Without disrupting production, do a test run with sample production data, then confirm the run copied the data properly. Compare before-and-after log files, directory listings, and file sizes to verify folders and files moved successfully. Compare dry-run results to the acceptance criteria.
  • Discover clients, discover storage. Catalog the old servers: IP addresses, folder structures, user/workgroup permissions, systems software (including versions).
  • Have a full backup and plan to recover if things go drastically wrong.
  • Provision and set up new storage: hardware, RAID volumes, LUNS, and folders.
  • Conduct a pre-migration test before moving the data itself. Conduct a pilot project, followed by phase 1, phase 2, and phase 3.
  • Create test scripts for clients and servers to change mount points. Prepare a rollback script in case things go wrong.
  • Define acceptance criteria and provide quality assurance. Establish an acceptance checklist, then conduct a QA review to walk through the plan and testing results and confirm the acceptance checklist is met.
  • Perform the cutover: move the data and run the scripts.
  • Leave the old data in place on the old hardware. Once we are convinced the new migration is properly operating, back up and archive data from the old server and storage.
  • Retire or repurpose old servers and storage.

Key considerations to pave the way to the future:

  • Improve migration automation: scripts to update clients, scripts to automate discovery.
  • Drive to an IT operation based on standardized configurations. Retire that weird hardware. Use migration to improve consistency and reduce complexity in the IT infrastructure.
  • Use migration to establish a two-tier storage infrastructure. Manually migrate aging data (usually at the LUN level) from Tier 1 to Tier 2.
  • Use file abstraction to ease future manual migration and automated tiered migration. On the server side, replace hard-coded pointers to storage mount points with dynamic pointers. File abstraction allows you to move the data and then update the pointer without touching the clients or servers involved. It also allows aging data to move automatically from Tier 1 to Tier 2 without administrator involvement (a huge savings), a topic covered later in this section.
  • Use migration to set up adoption of managed hosting or cloud storage in the future.
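The verification steps in the checklist above (comparing directory listings and file sizes before and after the copy) can be automated. A minimal sketch, assuming both the old and new trees are mounted locally; adding per-file checksums would give a stronger guarantee at the cost of a full read of both copies:

```python
import os

def tree_manifest(root):
    """Map each file's path (relative to root) to its size in bytes."""
    manifest = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            manifest[os.path.relpath(path, root)] = os.path.getsize(path)
    return manifest

def compare_trees(src, dst):
    """Return (missing, extra, size_mismatch) between two copies of a tree.
    All three lists empty means the copy passed this acceptance check."""
    a, b = tree_manifest(src), tree_manifest(dst)
    missing = sorted(set(a) - set(b))               # in source, not in target
    extra = sorted(set(b) - set(a))                 # in target, not in source
    mismatch = sorted(p for p in set(a) & set(b) if a[p] != b[p])
    return missing, extra, mismatch
```

Run the comparison immediately after the dry run and again after the cut-over; three empty lists is the acceptance signal for this step of the checklist.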

Expect resistance. But providing an alternative is the financially responsible thing to do. Allowing unfettered growth of unstructured file data (especially on Tier 1) is not financially responsible. See sections on static tiering and dynamic tiering for more important and related information before finalizing your migration plan.

11.1.2 Aging Data and the Role of Near-Line Archive and Search
Perhaps the central problem of shifting data from Tier 1 to Tier 2 to Archive is the issue of end users and access to that data. The end users rely on the data to do their jobs, so moving it to places where they may have trouble finding the data has a big downside. Establishing a search capability (like a private Google) can largely mitigate that risk.

11.1.3 Block (Database, E-mail, etc.) Application Data Migration
There is good news here. Block applications, such as databases and e-mail (e.g., Oracle, Exchange), all have tools to manage data and to manage user access to that data. Most importantly, these applications have tools and information about the data to separate the hot data from the aging, cooler data; in other words, applications like Oracle have built-in archiving features.

The only open question is your approach: let all the data keep piling up on Tier 1, or push the aging data more aggressively to Tier 2 and then to archive. See Table 11.1.

Table 11.1 Before and After Migration: Tier 1 Only versus Tier 1 and Tier 2

Before: 100% on Tier 1. After: 25% on Tier 1, 75% on Tier 2.

Performance
  Before: 30,000 IOPS peak; 5 MB/s peak.
  After: Tier 1, 20,000 IOPS peak and 2 MB/s peak; Tier 2, 10,000 IOPS peak and 3 MB/s peak.
  Notice Tier 1 carries heavier random I/O (higher IOPS) and Tier 2 heavier sequential I/O (higher MB/s) due to the different workloads. The hardware can be selected and configured for each.

Data Growth
  Before: 20 TB total, growing at 2 TB/yr.
  After: Tier 1, 10 TB, growing at 0 TB/yr; Tier 2, 15 TB, growing at 2 TB/yr.
  Notice Tier 1 has zero growth; the growth lands on the less expensive Tier 2 storage. Adding Tier 2 capacity costs 10% to 20% of the cost of adding Tier 1 capacity.

Power
  Before: 20 TB on 7-watt 300 GB drives is 84 HDDs: 588 watts unburdened, 1470 watts burdened.
  After: 10 TB on 7-watt 300 GB drives is 42 HDDs: 294 watts unburdened, 740 watts burdened. 10 TB on 8-watt 2 TB drives is 6 drives: 48 watts unburdened, 120 watts burdened. 860 watts total.
  After migration, the system consumes around half the power.

Backup/Recovery Cost
  Before: 20 TB incremental backup every 12 hours: $4,000 per month in staff time plus $2,000 per month in tape and tape management, $6,000 per month total.
  After: 10 TB incremental backup every 12 hours plus 10 TB incremental backup every 24 hours: $4,500 per month total.
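The power row of Table 11.1 follows from simple drive arithmetic. A sketch, taking "burdened" power (the drive plus its share of controller, enclosure, and cooling overhead) as roughly 2.5 times the raw drive wattage, an assumed factor that reproduces the table's figures to within rounding:

```python
def array_power(drive_watts, drive_count, burden_factor=2.5):
    """Return (unburdened, burdened) watts for a shelf of identical drives.
    The burden factor approximates controller, enclosure, and cooling overhead."""
    unburdened = drive_watts * drive_count
    return unburdened, unburdened * burden_factor

# Before: 20 TB on 7-watt 300 GB drives (84 HDDs including RAID/spare overhead)
before = array_power(7, 84)   # -> (588, 1470.0)
# After: 10 TB on 42 of the same drives (the table lists 740 W burdened, i.e. rounded up)
tier1 = array_power(7, 42)    # -> (294, 735.0)
# ...plus 10 TB of cold data on six 8-watt 2 TB drives
tier2 = array_power(8, 6)     # -> (48, 120.0)
print(before, tier1, tier2)
```

The same arithmetic, rerun with your own drive counts and wattages, is an easy way to put a power figure into the migration proposal's cost comparison.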

11.2 Migration Overview (Virtualized Storage)

Your starting point with a virtual machine (VM) is its associated home folder, which contains a virtual machine disk file.

Create a new home folder in the new target location. Most virtualization systems have tools to migrate virtual servers and virtual disks automatically. Migration still involves a small amount of downtime. The process is as follows: Move the virtual machine operating files (swap, scratch, config, and log files). Then copy the entire virtual machine disk file to the new home folder in the target location. Compare the contents of the old and new folders (same file names, same file sizes, same permissions). Then bring the applications back online pointing at the new location.

There is an alternative method that involves only a few moments of downtime. It employs snapshot technology and leaves the applications online. Use snapshots to establish a point-in-time copy, then copy the data to the new target location while the applications continue to read and write to the original folder; the snapshot technology keeps track of storage changed at the original location. Once the copy is complete, the VM and applications are momentarily shut down while the snapshot technology updates the target location with the changes made since the copy began. Finally, delete the original virtual machine home folder.
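The snapshot method boils down to a bulk copy plus a short final delta pass. A toy sketch of that control flow follows, with dictionaries standing in for block storage and a write log standing in for the snapshot layer's change tracking; a real migration would use the hypervisor's or array's snapshot tools rather than anything like this:

```python
def snapshot_migrate(source_blocks, write_log):
    """Simulate snapshot-assisted migration.
    source_blocks: dict mapping block id -> data at snapshot time.
    write_log: (block_id, data) writes made by the still-running application
    during the bulk copy, as tracked by the snapshot layer."""
    # Phase 1: bulk copy of the point-in-time image while the app stays online.
    target = dict(source_blocks)
    # Meanwhile the app keeps writing to the source; the snapshot layer records it.
    for block_id, data in write_log:
        source_blocks[block_id] = data
    # Phase 2: brief shutdown, then replay only the blocks changed during the copy.
    changed = {block_id for block_id, _ in write_log}
    for block_id in changed:
        target[block_id] = source_blocks[block_id]
    return target

src = {0: "a", 1: "b", 2: "c"}
final = snapshot_migrate(src, [(1, "b2")])
# final now matches the source, including the write made during the copy
```

The key property the sketch illustrates is that downtime scales with the size of the delta, not with the size of the VM's disk, which is why the applications can stay online for all but the final pass.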



From Data Center Storage: Cost-Effective Strategies, Implementation, and Management by Hubbert Smith. Auerbach Publications, 2011.

© Copyright 2011-2013 Auerbach Publications