Posts tagged "archive"

What Data, Exactly, are We Legally Required to Retain?

What are your data preservation requirements?  If you are setting up an archive, chances are you need to find out.  The challenge is that finding a reliable list of legal and regulatory requirements for data preservation sounds a lot easier than it is.

Sungard, a purveyor of hot site services a decade or so back, had a staff member dedicated to maintaining a list of regulatory requirements for data preservation. The version of the list we had on hand dates to circa 2008.

In that year, analysts projected that some $70 billion had been spent on regulatory compliance, mostly on consulting services to identify relevant laws and regulations and to establish retention policies. At the time, the big problem confronting firms was discovering that they needed to dip into the till again to develop compliant deletion policies.

Alas, the list has not been kept up to date since I last checked, and finding a coherent compilation of data preservation requirements via Internet search engines is a pain. The concept of data preservation to satisfy regulatory requirements is conflated with lots and lots of rants from folks who, rightly or wrongly, believe that their government, internet service provider, or telco is collecting information about them and preserving it for use against them at some future date.

Clearly, different market verticals have different data retention/preservation requirements. There are also state and national rules and regulations to consider, especially in Europe, where the movement to enable on-request erasure of personal identity data from corporate and governmental databases (the "right to be forgotten") has gathered steam.

Watch this space to learn about additional post-2008 retention and deletion rules as we uncover them. And if you or your business is required to retain certain types of data because of a regulation or law, please use the comment section to let us know. We hope to compile a full listing of all regulations and legal requirements related to data preservation and deletion for use by DMI members and visitors.

Thanks.


Looking for Data Management Tools that Work: Watch this Space

Data management has always labored under the perception that it is just too difficult a task to take on. Face it: there is a lot of data recorded on storage media in most firms. It mostly consists of files created by users or applications that made no effort to identify the contents of the file in an objectively intelligible way.

Some of this data may have importance or value, but much does not. So just beginning the data management exercise -- or one of its subordinate tasks, like developing an information security strategy, a data protection strategy, or an archive strategy -- first requires segregating data into classes: what's important, what's required to be retained in accordance with assorted laws or regulations (and do you even know which regulations or laws apply to you?), what needs to be retained and for how long, and so on.

Sorting through the storage "junk drawer" is considered a laborious task that absolutely no one wants to be assigned. And, assuming you do manage to sort your existing data, it is never enough. There is another wave of data coming behind the one that created the mess you already have. Talk about the Myth of Sisyphus.

What?  You are still reading.  Are you nuts?

Of course, everyone is hoping that data management will get easier, that wizards of automation will deliver tools to help corral and segregate all the bits.

Some offer a rip-and-replace strategy: rip out your existing file system and replace it with object storage. With object storage, all of your data is wrapped in a database construct that is rich with metadata. It sounds like just the thing, but it is a strategy that is easiest to deploy in a "greenfield" situation -- not one that is readily deployed after years of amassing undifferentiated data.
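To make the idea concrete, here is a minimal sketch in Python, using the boto3 client for the S3 API, of what storing a file as a metadata-rich object looks like. The bucket name, object key, and metadata values are hypothetical placeholders, not a recommendation of any particular product.

# Minimal sketch: store a file as an object carrying descriptive,
# user-defined metadata, via the S3 API (boto3). Bucket, key, and
# tag values are hypothetical.
import boto3

s3 = boto3.client("s3")

with open("q3_forecast.xlsx", "rb") as f:
    s3.put_object(
        Bucket="corporate-archive",        # hypothetical bucket
        Key="finance/q3_forecast.xlsx",
        Body=f,
        Metadata={                         # travels with the object
            "department": "accounting",
            "retention-class": "7-years",
            "owner": "jdoe",
        },
    )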

Another strategy is to deduplicate everything.  That is, use software or hardware data reduction to squeeze more anonymous bits into a fixed amount of storage space.  This may fix the capacity issue associated with the data explosion...but only temporarily.
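The underlying idea is easy to sketch: compute a content hash for each file and flag repeats. This toy, file-level version is just to show the principle; real products deduplicate at the block or chunk level and replace copies with references.

# Toy file-level deduplication: hash contents, report repeats.
import hashlib
import os

def find_duplicates(root):
    seen = {}        # content digest -> first path seen
    duplicates = []  # (duplicate path, original path)
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            digest = h.hexdigest()
            if digest in seen:
                duplicates.append((path, seen[digest]))
            else:
                seen[digest] = path
    return duplicates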

Another strategy is to find all files that haven't been accessed in 30, 60 or 90 days, then just export those files into a cheap storage repository somewhere.  If any of the data is ever needed again -- say, for legal discovery -- just provide a copy of this junk drawer, whether on premises or in a cloud, and let someone else sort through it all.
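That sweep is simple enough to sketch. The version below assumes a POSIX filesystem where last-access times are trustworthy -- many volumes mount with atime updates disabled, which would defeat it -- and a flat destination; a real tool would preserve directory structure and handle name collisions.

# Sketch: move files not accessed in CUTOFF_DAYS to a cheap repository.
import os
import shutil
import time

CUTOFF_DAYS = 90  # arbitrary policy choice; 30 or 60 are also common

def sweep_stale_files(root, repository):
    cutoff = time.time() - CUTOFF_DAYS * 86400
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:  # last access before cutoff
                shutil.move(path, os.path.join(repository, name))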

Bottom line: just getting data into a manageable state is a pain. What's needed are tools that can apply policies to data automatically, based on metadata. At a minimum, we should have automated tools to identify duplicates and dreck so they can be deleted, and other tools that can place the remaining data into a low-cost archive for later re-reference. This isn't perfect, but it is possible with what we have today.
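As a toy illustration of what "apply policies based on metadata" means, a rules table can map a file's classification tag to a disposition. Every tag and action name here is made up for the example.

# Hypothetical policy table: classification tag -> disposition.
POLICIES = {
    "duplicate":  "delete",
    "dreck":      "delete",
    "accounting": "archive-7-years",
    "unknown":    "archive-default",
}

def disposition_for(tag):
    # Unrecognized tags fall back to the default archive action.
    return POLICIES.get(tag, POLICIES["unknown"])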

Going forward, we need to set up a strategy for marking files in a more intelligent way. That may involve adding a step to the workflow in which the file creator enters keywords and tags when saving a file -- a step that can't be skipped or overridden by the user! Virtually every productivity app lets the user enter granular descriptions of files, and some actually save this data about the data to a metadata construct appropriate to the file system or object model used to format the data itself.
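As one sketch of how such tags can persist with the file today: on Linux, with an xattr-capable filesystem such as ext4 or XFS, user-entered keywords can be stored as extended attributes. The attribute names and values below are illustrative.

# Sketch: persist user-entered tags as extended attributes (Linux only).
import os

def tag_file(path, department, keywords):
    os.setxattr(path, b"user.department", department.encode())
    os.setxattr(path, b"user.keywords", ",".join(keywords).encode())

def read_tags(path):
    # Collect only the user-namespace attributes set above.
    return {name: os.getxattr(path, name).decode()
            for name in os.listxattr(path)
            if name.startswith("user.")}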

If that seems too "brute force," another option is to mark files transparently as they are saved. Link file classification to the identity of the user who created the file, based on a user ID or login. If the user works in accounting, treat all of his or her output as accounting data and apply a policy appropriate to accounting data. That can be done by referencing an access control system like Active Directory to identify the department-qua-subnetwork in which the user works.
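A minimal sketch of that lookup, assuming an Active Directory domain reachable over LDAP and using the Python ldap3 library. The server address, base DN, service account, and policy names are hypothetical, and a production tool would escape the username and handle errors.

# Sketch: classify by the creator's AD department (ldap3 library).
from ldap3 import ALL, Connection, Server

def department_of(username):
    server = Server("ldap://dc01.example.com", get_info=ALL)  # hypothetical DC
    conn = Connection(server, user="svc_reader@example.com",  # hypothetical account
                      password="********", auto_bind=True)
    conn.search("dc=example,dc=com",
                "(sAMAccountName={})".format(username),
                attributes=["department"])
    return str(conn.entries[0].department) if conn.entries else "unknown"

# Files from anyone in accounting inherit the accounting retention policy.
policy = {"accounting": "retain-7-years"}.get(department_of("jdoe"), "default")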

Another approach might be to tag the data based on the workstation used to create the file. Microsoft opened up its File Classification Infrastructure a few years ago. That builds on the idea behind the attributes you see when you right-click a file name -- HIDDEN, READ-ONLY, ARCHIVE, and so on. With FCI opened up for modification, each PC in the shop can be customized with additional attributes (like ACCOUNTING) that will be stored with data created on that workstation.
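FCI itself is configured with Windows Server tooling, but the workstation-based idea can be illustrated platform-neutrally: look up the machine's hostname and stamp its department tag on everything saved there. The hostname-to-department table is a hypothetical stand-in for an FCI or asset-management lookup.

# Sketch: derive a classification tag from the saving workstation.
import socket

WORKSTATION_DEPARTMENTS = {   # hypothetical asset inventory
    "acct-ws-01": "ACCOUNTING",
    "acct-ws-02": "ACCOUNTING",
    "eng-ws-07":  "ENGINEERING",
}

def workstation_tag():
    # Fall back to a generic tag for machines not in the inventory.
    return WORKSTATION_DEPARTMENTS.get(socket.gethostname().lower(), "GENERAL")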

Whether you mark the file by user role or by workstation/department, it isn't as effective as manually entering granular metadata for every file that is created. So it won't be as effective as, say, deploying an object storage solution and manually migrating files into it while editing the metadata of each file. You will get a lot of "false positives," and these will undercut the efficiency of your storage, your archive, or whatever.


Unfortunately, the tools for data management are difficult to get information on. As reported in another blog post, doing an internet search for data management solutions yields a bunch of stuff that really has nothing to do with the metadata-based application of storage policy to files and objects. Many of the tools are bridges to cloud services, or they are backup software tools that their vendors are trying to teach new tricks, like archiving. Others are just a wholesale effort by the vendor to grab you by your data, figuring that your hearts and minds will follow.

We believe that cognitive data management is the future. Take tools for storage resource management and monitoring, for storage service management and monitoring, and for global namespace creation and monitoring, then integrate the information contained in all three (all of it continuously updated) so that the right data is stored on the right storage and receives the right services (privacy, protection, and preservation) based on a policy created by business and technology users who are in a position to know what the data is and how it needs to be handled.
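As a conceptual sketch only -- every class, field, and policy name below is hypothetical, not a description of any shipping product -- here is how those three feeds might be joined into a placement decision.

# Conceptual sketch: join namespace, resource, and service information
# to place data per a business-authored policy.
from dataclasses import dataclass

@dataclass
class FileRecord:            # from the global namespace feed
    path: str
    department: str

@dataclass
class StorageTarget:         # from resource/service monitoring feeds
    name: str
    free_tb: float
    services: set            # e.g., {"encryption", "replication"}

POLICY = {                   # authored by business and technology users
    "accounting": {"encryption", "replication"},
    "marketing": set(),
}

def place(record, targets):
    required = POLICY.get(record.department, set())
    # Keep targets that deliver every required service; prefer most free space.
    candidates = [t for t in targets if required <= t.services]
    return max(candidates, key=lambda t: t.free_tb, default=None)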

Such cognitive data management tools are only now beginning to appear in the market.  Watch this space for the latest information on what the developers are coming up with to simplify data management.

Archive is the Killer App for Tape

During our recent visit to IBM in Tucson, AZ, we were honored to meet with tape experts Lee Jesionowski, Calline Sanchez, Tony Pearson, and Ed Childers about tape futures and the drivers behind the current renaissance of the technology. One message that came through loud and clear: current concerns about malware, ransomware, and unauthorized disclosures of private data have built a fire under planners to consider ways to secure their data better. That includes the use of tape.


Between the natural air gap provided by tape and the pervasive data encryption service delivered on tape drives from IBM and other vendors, tape rules when it comes to security.

Thanks to IBM for having us out to the Executive Briefing Center and good luck with today's announcement of LTO-8 technology.


Barry M. Ferrite Warns of Z-Pocalypse, Recommends Archiving

In a couple of public service announcements made last year, Barry M. Ferrite, DMI's "trusted storage AI," warned of a coming Z-Pocalypse (zettabyte apocalypse). Archiving, he argued, is the only solution for dealing with the data deluge.

These PSAs provided some "edutainment" to help folks get started with their archive planning. We hope they help.


Continuing the message, Barry returned with additional information in a second PSA.


The tone is amusing, but the message is serious. We hope to add more guidance from Barry in the future on the topics of archive and data management.