IT Performance Improvement




Dividing Data after a Merger or Acquisition

By David Gibson, VP of Strategy, Varonis

There are many problems and challenges facing an organization that is about to merge with another organization or sell a subsidiary. Divesting a part of your company is rather like carrying out an elaborate surgical transplant - the correct parts of the existing entity have to be identified, isolated, and then meticulously extracted to ensure that nothing extraneous is inadvertently transferred from the source to the destination.

This article examines the problem of how to migrate and separate your data during a merger, acquisition or sale without harming the patient.

The business side of M&A can be gruelling - countless meetings between executives, lawyers, and bankers, as well as a mountain of paperwork and red tape. But the division of data is becoming a near-impossible step in the M&A process as rapidly expanding data overwhelms shrinking IT teams.

Why is this? Because, according to IDC, 80% of organizational data is unstructured, and unstructured data is a mystery for most organizations. They don't know which data is used or not used, who is using it or not using it, who owns it, what it contains, what is sensitive, and who should or should not have access.

Even if some of these questions can be answered, moving the data without introducing service disruption, corrupting data, or putting it at risk of leakage is no easy task. Terabytes and petabytes of information take time to move around, so either technology is required to move "live data" safely, or the data has to be moved while no one is using it (and when is that?).
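To get a feel for the timescales involved, a back-of-the-envelope calculation helps (the throughput figure below is an assumption for illustration, not a measured benchmark):

```python
def transfer_time_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Naive lower bound on copy time: total bytes / sustained throughput.
    Ignores verification passes, retries, and metadata overhead, so real
    migrations take longer."""
    total_mb = data_tb * 1024 * 1024          # 1 TB = 1,048,576 MB (binary units)
    return total_mb / throughput_mb_s / 3600  # seconds -> hours

# At an assumed sustained 100 MB/s, 1 TB needs roughly 2.9 hours of pure copying;
# 100 TB needs roughly 291 hours - about 12 days - before any downtime window
# is even negotiated.
print(transfer_time_hours(1, 100))
print(transfer_time_hours(100, 100))
```

Numbers like these are why the choice between moving live data and waiting for a quiet window matters so much.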

Permissions do not transfer easily between domains or across platforms, so technology is needed to help with that, or permissions need to be recreated during the transition. A single terabyte of data typically holds 50,000 folders, about 2,500 of which have unique permissions; each uniquely permissioned folder has 3 to 5 Active Directory groups, and each group has between 5 and 50 members, so manual re-creation might take a bit more time than you think. And permissions are not necessarily where they should be: only 21% of organizations in our recent survey on data migrations report that they regularly make sure that folders and SharePoint sites are safe from global access groups, like Everyone and Domain Users.
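Plugging those figures into a quick calculation shows why manual re-creation doesn't scale (the per-folder effort estimate is an assumption for illustration):

```python
unique_folders_per_tb = 2500          # uniquely permissioned folders per TB (cited above)
groups_per_folder = 4                 # midpoint of the 3-5 range
members_per_group = (5, 50)           # range cited above

# Group-to-folder links an admin must recreate per terabyte
group_links = unique_folders_per_tb * groups_per_folder      # 10,000

# Upper-bound membership entries, assuming folders don't share groups
memberships = (group_links * members_per_group[0],           # 50,000 ...
               group_links * members_per_group[1])           # ... to 500,000

# Assume an admin can verify and recreate one folder's ACL in 2 minutes:
hours_per_tb = unique_folders_per_tb * 2 / 60                # ~83 hours per terabyte
print(group_links, memberships, round(hours_per_tb))
```

Even under these generous assumptions, a single terabyte costs two working weeks of ACL re-creation - before anyone checks the result.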

There are other reasons, of course, for the need to migrate data, such as the purchase of new storage devices, the retiring of legacy storage, the adoption of new platforms, the cleanup of stale data, and the removal of specific content. In fact, according to the same survey, 95% of organizations move data around at least once per year, and 44% move data more than 5 times per year.

A successful data migration requires you to identify exactly what content is going to be moved, decide whether to move it all at once or gradually, determine when to move it, and work out what to do about permissions. You'll also need to identify the data owners, determine who uses that data, and establish whether your data migration will affect those users while the data is moved to your new network attached storage device, domain, or SharePoint server. These are all vital tasks, and they take a lot of time to do manually. Could you do all of this automatically?

In order to do so, you would first need metadata to identify data that should or should not be moved; e.g., data that is stale, data that is created, accessed, or accessible by certain groups or individuals, data that contains specific content, or any combination of those metadata attributes. Once these data sets are identified, automation is needed to move or archive them - whether their destination is a server in another domain or even on a completely different platform.

The following boxes must be ticked in order to move data securely and intelligently:

  • Schedule one-time or ongoing migrations
  • Perform incremental migrations for large data sets
  • Automatically maintain, enhance, and/or translate permissions for migrated data
  • Migrate data between servers in different domains
  • Simulate migrations and monitor them in real time to avoid unwanted surprises
  • Report in detail on migrated data, permissions, and new groups
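As a rough illustration, that checklist maps onto a migration job description along these lines (a hypothetical schema sketched for this article, not any vendor's actual API):

```python
from dataclasses import dataclass

@dataclass
class MigrationJob:
    source: str                         # UNC path on the old domain
    destination: str                    # UNC path on the new domain
    incremental: bool = True            # seed first, then copy only deltas at cutover
    translate_permissions: bool = True  # map source-domain groups to destination groups
    schedule: str = "daily 02:00"       # run in an off-peak window
    dry_run: bool = True                # simulate and report before touching data

# Hypothetical paths, for illustration only
job = MigrationJob(source=r"\\olddomain\finance",
                   destination=r"\\newdomain\finance")
```

A safe workflow runs the job with dry_run enabled, reviews the simulation report, and only then flips the flag for the real move.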

Does such technology exist? Yes. Enterprises are now able to configure one-time or ongoing migrations, defining the destination path, folder, permissions translation, and when the migration should take place. This technology enables the rapid, safe execution of complex data migrations. Users can easily implement and enforce policies for data retention and location based on content, accessibility, and activity. The same metadata used to facilitate the migration also helps identify and remediate exposed sensitive data and excessive permissions, identify owners and stale data, and determine who has access and who should or should not have it.

Intelligent, automated migration of large-scale data sets can reduce complexity and improve IT service by limiting user disruption and avoiding data breaches. Automatically identifying data sets for migration based on path, permissions, actual access, and content classification simplifies the implementation and enforcement of compliance policy while providing the flexibility to meet the business goals of any migration project.

Until recently, splitting large, complex data sets was about as precise as medieval surgery - the surgeon was as likely to take off his own thumb, and a few errant limbs from the hapless patient, during the process. In some cases the procedure was even less accurate - using the sort of Solomon approach in which a 'virtual' baby is divided down the middle. Now the procedure is as accurate and scientific as keyhole surgery, and for the first time data migration can be automatic, accurate, and swift.

Related Reading

Why Flexibility Is Key to a Successful Data Migration

Data Center Storage: Migration and Retiring Aging Systems

© Copyright 2012 Auerbach Publications