The No. 1 Rule of Secure Cloud Migration: Know Your Unstructured and Dark Data and Where It Is Located
- LANGUAGE: English DATE: Tuesday, May 21, 2019 TIME: 4:00pm CEST, 10:00am EDT, 7:00am PDT
With the volume of enterprise data growing rapidly, cloud migration has become the solution of choice. A necessary stage in migrating data to the cloud is putting it in order. This is particularly important for unstructured, so-called dark data: undermanaged files and documents (Excel files of budget estimates, PDFs containing important patents, Word documents holding personal employee or customer information), in general, data that is not governed in the orderly fashion of a structured database. It is precisely this kind of data that tends to be misplaced, misused, abandoned, hacked, or leaked in cloud and local server repositories, putting enterprises at risk.
Because of this risk, categorizing and classifying data is essential before, during, and after migration. However, formatting, organizing, and validating terabytes or even petabytes of information can present far bigger challenges than actually moving the data to the cloud. This critical issue raises several questions:
- How can you get rid of redundant data effectively?
- How can you tag and track your business-critical data across platforms, ensuring that sensitive information is located and classified correctly while remaining secure?
- How can you scan extremely large data sets in a more efficient, fast, and automated manner so that you can govern them?
The key to every problem is identifying the best solution and technology for the job and leveraging its benefits. Fortunately, the market today offers a wide range of applications that fit various scenarios. In this respect, the webinar will focus on the following key points:
- A new generation of solutions for classifying and tracking all unstructured sensitive data, their relevance to your organization, and recommendations for deployment.
- Automated data labeling and protection processes leveraging AI and Machine Learning.
- A secure and efficient process for migrating data to the cloud.
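To make the automated labeling idea concrete, here is a minimal, hypothetical sketch of rule-based sensitive-data classification in Python. The patterns and function names are illustrative assumptions only; products such as DataTracker rely on far richer AI/ML detection rather than simple regular expressions:

```python
import re

# Illustrative PII patterns only; real classifiers combine ML models,
# context analysis, and checksums rather than bare regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_document(text: str) -> dict:
    """Count each PII category found and derive a sensitivity label."""
    hits = {name: len(pat.findall(text)) for name, pat in PII_PATTERNS.items()}
    label = "sensitive" if any(hits.values()) else "unclassified"
    return {"label": label, "matches": hits}
```

A document containing an email address or a social security number would be labeled "sensitive" and could then be routed to an appropriately protected repository before migration.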
In the first part of the webinar, Martin Kuppinger, Founder and Principal Analyst at KuppingerCole, will talk about the regulations and other drivers requiring businesses to understand where which data resides, the various vectors businesses can use to get a grip on that data, and the required interplay between the business (people, organization) and adequate tooling (technology).
In the second part of the webinar, Yaniv Avidan, Co-Founder & CEO at MinerEye, will talk about how to easily categorize and identify sensitive data before, during, or after migration, and how to stay on the safe side and avoid data breaches using new technologies. He will discuss MinerEye's product DataTracker, outlining benefits such as scanning large amounts of data and detecting outliers and abnormal behavior, and will advise on whether cross-platform, unified data classification and tracking is relevant for your company environment and, if it is, how to deploy it correctly.
Powered by Interpretive AI™, MinerEye continually tracks data wherever it resides and in whatever form. Companies can discover, organize, and track vast information assets quickly and continuously by scanning enterprise data repositories at the byte and pixel level. Sensitive data is mapped, tagged, and secured according to data protection and compliance regulations (GDPR, HIPAA, PCI-DSS, SOC 2, EU-US Privacy Shield). With MinerEye's flagship product DataTracker™, companies see and control all of their data, including previously undetected and unclassified data; they can fast-track cloud migration and protect their information assets from security breaches.
Determined cyber attackers will nearly always find a way into company systems and networks using tried and trusted techniques. It is therefore essential to assume breach and to have the capability to identify, analyze, and neutralize cyber attacks before they can do serious damage.