Almost one and a half years after the introduction of the GDPR (EU General Data Protection Regulation), some companies still struggle to implement appropriate measures for handling Personally Identifiable Information (PII) in a compliant fashion. Last week, Maja Smoltczyk, the Commissioner for Data Protection and Freedom of Information of the city-state of Berlin, imposed a €195,000 fine on the German food delivery provider Delivery Hero for a series of data protection violations committed by its subsidiaries Foodora, Lieferheld and Pizza.de. It is Germany’s highest GDPR-related fine to date.
According to the Commissioner’s press release, the majority of the privacy breaches showed disregard for the rights of the affected parties. In ten cases, the delivery provider had not deleted the personal data of former customers, even though they had been inactive on the platform for years. Among other things, this led to marketing emails being sent without the recipients’ consent. In a statement to the data protection authority, Delivery Hero argued that some violations could be traced back to technical glitches and employee mistakes, but “due to the high number of repeated violations a general, structural organizational problem was assumed.” Delivery Hero was acquired by the Dutch company Takeaway.com at the end of last year and states that all violations happened prior to the takeover.
Having understood early how crucial GDPR compliance is for companies, KuppingerCole Analysts published a Leadership Brief as early as May 2017 in which Senior Analyst Mike Small identified six key actions that IT needs to take to prepare for compliance. He stressed that the Data Controller or Data Processor must ensure that Personally Identifiable Information (PII) is “only accessed in accordance with the consent given by the data subject”. This was obviously not the case here: as stated above, in most of the breaches the rights of data subjects were disregarded.
Another point of emphasis in the Leadership Brief is that “organizations must have processes and technology to track the consent lifecycle for each data subject”. By admitting technical glitches, employee accidents and a lack of adequate structure and organization behind the data lifecycle process, Delivery Hero essentially made a confession of grave data negligence.
Since Delivery Hero was not in comprehensive control of its internal processes, employees and technologies, it can be assumed that the company was, and perhaps still is, insufficiently prepared for a potential data breach and would be unable to react to an incident without undue delay.
Other companies should take this case as a learning opportunity and, in order to comply with regulations such as the GDPR, implement reliable processes and technologies that do not depend on the diligence of individual employees.
Nevertheless, the human factor should not be ignored altogether: all employees should be trained on the GDPR requirements relevant to their specific work tasks.
KuppingerCole offers a wide variety of research, blog posts and recorded webinars covering many different aspects of GDPR that can support you and your company in achieving and maintaining compliance. For example, there are several technical solutions for locating and classifying structured and unstructured data. These can assist companies in determining where PII and other regulatory information is located. KuppingerCole constantly investigates these markets and provides guidance.
If you have any specific questions, please do not hesitate to get in touch with us. KuppingerCole Advisory Services can efficiently support you in establishing appropriate processes and their technical implementation, strengthened by long-term practical experience and comprehensive market knowledge.
Cyberattack resilience requires way more than just protective and defensive security tools and training. Resilience is about being able to recover rapidly and thus must include BCM (Business Continuity Management) activities. It is time to redefine the role of CISOs. I made this point in yesterday’s webinar on cybersecurity budgeting. If you missed it, you can watch the webcast here.
Prevention is key to limiting cyberattacks. The Chief Information Security Officer is responsible for prevention. Employees following best practices are responsible for prevention. From the top down, the conversation surrounding cybersecurity has always been about how to prevent an attack. Yet, despite all this prevention, cyberattacks occur more frequently than ever before, and with more severe intensity.
Attacks will not only continue; they are continually evolving to exploit new vectors with new tools. Don’t assume that no one will attack you: attacks are happening constantly. So, is prevention enough?
What are the crown jewels? What would happen to your business if they were attacked? How would you get them up and running again? And how do you prepare your C level for crisis communication?
A far more realistic ambition is to be able to react so that business as usual can resume as quickly as possible. Detect, respond, recover, and improve. How can a business react to an attack while still planning for its future? By not segregating preventative action and BCM. Do not fall prey to the blame game, in which the BCM team blames the CISO for a failed prevention. A fusion of both teams’ expertise will mitigate an attack and streamline the recovery.
My suggestion for every CISO, CIO, SOC and CDC: Extend the scope of what you’re doing. It’s more than just traditional cybersecurity. Business continuity is part of the picture. Even more so, BCM is key to cybersecurity. Take a step back and reflect about your cybersecurity portfolio. You can’t manage a portfolio that is too complex.
This will definitely be a hot topic at our cybersecurity events in Washington, D.C. and Berlin. If you want to take your cybersecurity portfolio under scrutiny, you should check out our Portfolio Compass service which is explained in our Advisory Services flyer. We have a lot of current research on cybersecurity issues on our new content platform KC PLUS.
Regulation has the uncomfortable task of limiting untapped potential. I was surprised when I recently received the advice to think of life like a box. “The walls of this box are all the rules you should follow. But inside the box, you have perfect freedom.” Stunned as I was at the irony of having complete freedom to think inside the box, those at the forefront of AI development and implementation are faced with the irony of limiting projects with undefined potential.
Although Artificial General Intelligence – the ability of a machine to intuitively react to situations that it has not been trained to handle in an intelligent, human way – is still unrealized, narrow AI that enables applications to independently complete a specified task is becoming a more accepted addition to a business’ digital toolkit. Regulations that address AI are built on preexisting principles, primarily data privacy and protection against discrimination. They deal with the known risks that come with AI development. In 2018, biometric data was added to the European GDPR framework to require extra protection. In both the US and Europe, proposals are currently being discussed to monitor AI systems for algorithmic bias and govern facial recognition use by public and private actors. Before implementing any AI tool, companies should be familiar with the national laws for the region in which they operate.
These regulations have a limited scope, and in order to address the future unknown risks that AI development will pose, a handful of policy groups have published guidelines that attempt to set a model for responsible AI development.
The major bodies of work include:
- The Montreal Declaration for Responsible Development of AI from the University of Montreal and Fonds de Recherche du Quebec (published December 2018)
- Guidelines on Artificial Intelligence and Data Protection from the Council of Europe (published January 2019)
- Ethics Guidelines on Trustworthy AI from The EU Commission (published April 2019)
- The OECD Principles on AI from the OECD (published May 2019)
The principles developed by each body are largely similar. The main principles that all guidelines address are the need for developers and AI implementers to protect human autonomy, obey the rule of law, prevent harm and promote inclusive growth, maintain fairness, develop robust, prudent, and secure technology, and ensure transparency.
The single outstanding feature is that only one document provides measurable and immediately implementable actions. The EU Commission included an assessment for developers and corporate AI implementers to conduct to ensure that AI applications become and remain trustworthy. The assessment is currently in a pilot phase and will be updated in January 2020 to reflect comments from businesses and developers. The other guidelines offer compatible principles but are general enough to allow any of the public, private, or individual stakeholders interacting with AI to deflect responsibility.
This collection of guidelines from the international community is not a set of legally binding restrictions but a porous barrier that allows sufficiently cautious and responsible innovations to grow and expand as the trustworthiness of AI increases. The challenge in regulating an intensely innovative industry is to build in flexibility and the ability to mitigate unknown risks without compromising creative license. These guidelines attempt to set an ethical example to follow, but it is essential to use tools like the EU Commission’s assessment, which establishes appropriate responsibility regardless of one’s status as developer, implementer, or user.
Alongside the caution from governing bodies comes a clear recognition that AI development can bring significant economic, social, and environmental growth. The US issued an executive order in February 2019 to prioritize AI R&D projects, while the EU takes a more cautiously optimistic approach, recognizing the opportunities but prioritizing the building and maintenance of a uniform EU strategy for AI adoption.
If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.
Oracle OpenWorld 2019 wrapped up yesterday, and if there is a single word that can describe my impressions of it, it would be “different”. Immediately noticeable was the absence of the traditional Oracle Red spilling into the streets around the Moscone Center in San Francisco; the reason behind this is the new corporate design system called Redwood. You can already see its colors and patterns applied to the company’s website, but more importantly, it defines new UI controls for Oracle applications and cloud services.
Design, however, is far from Oracle’s biggest change. It appears that the company has finally reached the stage where a radical cultural shift is inevitable. To adapt to the latest market challenges and to extend its reach towards new customer demographics, Oracle needs to seriously reconsider many of its business practices, just like Microsoft did years ago. And judging by the announcements at this year’s OOW, the company is already making major strides in the right direction.
It’s an open secret that for years, Oracle has been struggling to position itself as one of the leading cloud service providers. Unfortunately, for a latecomer to this market, playing catch-up with more successful competitors is always a losing strategy. It took the company some time to realize that, and now Oracle is trying a different game: learning from others’ mistakes, understanding the challenges and requirements of modern enterprises, and in the end offering a lean, yet complete stack of cloud services that provide the highest level of performance, comprehensive security and compliance controls and, last but not least, intelligent automation for any business process.
The key concept in this vision is “autonomy”. To eliminate human labor from cloud management is to eliminate human error, the most common cause of data breaches. Last year we saw the announcement of the self-patching and self-tuning Autonomous Database. This time, Autonomous Linux was presented: an operating system that can update itself (including kernel patches) without downtime. It seems that the company’s strategic vision is to make every service in its cloud autonomous in the same sense. Combined with the Generation 2 cloud infrastructure, designed specifically to eliminate many network-based attack vectors, this lends additional weight to Oracle’s claim of having a cloud ready to run the most business-critical workloads.
Oracle Data Safe was announced as well: a cloud-based service that improves Oracle database security by identifying risky configurations, users and sensitive data, allowing customers to closely monitor user activities and ensure data protection and compliance for their cloud databases. Oracle cloud databases now include a straightforward, easy-to-use, free service that helps customers protect their sensitive data from security threats and compliance violations.
It is also worth noting that the company is finally starting to think “outside the box” with regard to its business strategy as well, or rather outside the “Oracle ecosystem” bubble. Strategic partnerships with Microsoft (establishing low-latency interconnections between Azure and Oracle Cloud datacenters) and VMware (allowing businesses to lift and shift their entire VMware stacks to the cloud while maintaining full control over them, which is impossible in other public clouds) demonstrate this major paradigm shift in the company’s cloud roadmap.
Arguably even more groundbreaking is the introduction of the new Always Free tier for cloud services, which is exactly what it says on the tin: an opportunity for every developer, student or even a corporate IT worker to use Autonomous Databases, virtual machines, and other core cloud infrastructure services for an unlimited time. Of course, the offer is restricted by allocated resources, but all functional benefits are still there, and not just for testing. Hopefully, Oracle will soon start promoting these tools outside of Oracle events as well. Seen any APEX evangelists around recently?
Today, the German Federal Government announced its Blockchain Strategy. What might sound like a great thing falls short, for a number of reasons.
One is timing: the strategy arrives after the initial hype, somewhere in the phase of disillusionment. It should have come much earlier, specifically with the intent of gaining or keeping a leading position. And, notably, it would be more important to foster innovation by supporting start-ups with simplified regulations and administration for that type of business, and a far better ecosystem for venture and growth financing.
A second objection: it is too much about technology and not enough about use cases. Blockchain technology in itself is not what we should look at. We need to understand the specific benefits, as outlined in our brand-new report “Demystifying the Blockchain”: for example, the need for a distributed (in contrast to central) ledger, immutability, sequenced data, and time-stamped data. Only then can blockchain technology deliver benefits by enabling better business solutions. Thus, this should be a strategy for fostering use cases (with some being listed in the document), not a certain technology.
The document also falls short when it comes to Blockchain ID. It focuses only on authentication, does not reflect the current state of Blockchain ID, and ignores the immense potential for fostering data protection and control over individual data, commonly referred to as Self-Sovereign Identity (SSI). And honestly: A German-only Blockchain ID stands in stark contrast to the concepts of Blockchain ID.
Talking about a state-powered Blockchain infrastructure appears strange to me. There are many variants of Blockchains and other Distributed Ledgers with different consensus mechanisms and other specifics. One Blockchain doesn’t serve all use cases.
Also, reading through the appendix and all the actions the German Government intends to take, it is mainly about research, which should have happened some years ago. To summarize: it is good that the Federal Government is funding new technologies, but this strategy is too late, unfocused and too technology-oriented. The focus should rather be on promoting the economy (and if blockchain technology helps, that’s great), not on endorsing a particular set of technologies.
As cybercrime and concerns about cybercrime grow, tools for preventing and interdicting cybercrime, specifically for reducing online fraud, are proliferating in the marketplace. Many of these new tools bring real value, in that they do in fact make it harder for criminals to operate, and such tools do reduce fraud.
Several categories of tools and services compose this security ecosystem. On the supply side there are various intelligence services. The forms of intelligence provided may include information about:
- Users: Users and associated credentials, credential and identity proofing results, user attributes, user history, behavioral biometrics, and user behavioral analysis. Output format is generally a numerical range.
- Devices: Device type, device fingerprint from Unified Endpoint Management (UEM) or Enterprise Mobility Management (EMM) solutions, device hygiene (operating system patch versions, presence and versions of anti-malware and/or UEM/EMM clients, and Remote Access Trojan detection results), Mobile Network Operator carrier information (SIM, IMEI, etc.), jailbreak/root status, and device reputation. Output format is usually a numerical range.
- Cyber Threat: IP and URL blacklisting status and mapped geo-location reputation, if available. STIX and TAXII are standards used for exchanging cyber threat intel. Besides these standards, many proprietary exchange formats exist as well.
- Bot and Malware Detection: Analysis of session and interaction characteristics to assess the likelihood of manipulation by bots or malware. Output format can be Boolean, or a numerical range of probabilities, or even text information about suspected malware or botnet attribution.
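To make the STIX exchange format concrete: threat intel shared via STIX is structured JSON. The minimal STIX 2.1 indicator below, flagging a blacklisted IP, is purely illustrative; the identifier, timestamps and IP address are made-up values, not real threat data.

```python
import json

# Illustrative (not real) STIX 2.1 Indicator object for a blacklisted IP.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--8e2e2d2b-17d4-4cbf-938f-98ee46b3cd3f",  # made-up UUID
    "created": "2019-09-26T12:00:00.000Z",
    "modified": "2019-09-26T12:00:00.000Z",
    "name": "Known fraud source IP",
    "pattern": "[ipv4-addr:value = '198.51.100.23']",  # documentation IP range
    "pattern_type": "stix",
    "valid_from": "2019-09-26T12:00:00Z",
}

print(json.dumps(indicator, indent=2))
```

An object like this would typically be delivered to consumers over a TAXII collection endpoint; proprietary feeds carry equivalent information in their own formats.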
Risk-adaptive authentication and authorization systems are the primary consumers of these types of intelligence. Conceptually, risk-adaptive authentication and authorization functions can be standalone services or can be built into identity and web access management solutions, web portals, VPNs, banking apps, consumer apps, and many other kinds of applications.
Depending on the technical capabilities of the authentication and authorization systems, administrators can configure risk engines to evaluate one or more of these different kinds of intelligence sources in accordance with policies. For example, consider a banking application. In order for a high-value transaction (HVT) to be permitted, the bank requires a high assurance that the proper user is in possession of the proper registered credential, and that the requested transaction is intended by this user. To accomplish this, the bank’s administrators subscribe to multiple “feeds” of intelligence which can be processed by the bank’s authentication and authorization solutions at transaction time.
The results of a runtime risk analysis that yields ‘permit’ may be interpreted as “yes, there is a high probability that the proper user has authenticated using a high assurance credential from a low risk IP/location, the request is within previously noted behavioral parameters for this user, and the session does not appear to be influenced by malware or botnet activity.”
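As an illustrative sketch only (the function name, score semantics and thresholds are my assumptions, not any vendor’s API), the runtime evaluation described above could be reduced to something like this:

```python
# Hypothetical sketch of a risk engine deciding on a high-value transaction.
# All scores are assumed to be normalized to 0-99; thresholds are illustrative.
def evaluate_transaction(user_score, device_score, bot_probability,
                         hvt_threshold=80, bot_limit=20):
    if bot_probability >= bot_limit:
        return "deny"        # session likely driven by a bot or malware
    if user_score >= hvt_threshold and device_score >= hvt_threshold:
        return "permit"      # high assurance on both user and device
    return "step-up"         # request an additional authentication factor

# High user/device assurance, low bot probability: transaction is permitted.
print(evaluate_transaction(user_score=92, device_score=88, bot_probability=3))
```

A real deployment would of course evaluate many more signals (geo-location, behavioral parameters, credential assurance) per configured policy, but the shape of the decision is the same.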
This is great for the user and for the enterprise. However, it can be difficult for administrators to implement, because there are few standards for representing the results of intelligence gathering and risk analysis. The numerical ranges mentioned above vary from service to service. Some vendors provide scores from 0 to 99 or 999. Others range from -100 to 100. What do the ranges mean? How can the scores be normalized across vendors? Does a score of 75 from intel source A mean the same as 750 from intel source B?
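One simple (and admittedly naive) way to approach the normalization question is a linear rescaling of each vendor’s range onto a common 0-99 scale; the ranges used below are the ones mentioned above:

```python
# Sketch: map a vendor-specific score range linearly onto a common 0-99 scale.
# This assumes the vendor scores are comparable after rescaling, which real
# deployments would need to validate per feed.
def normalize(score, lo, hi, out_max=99):
    return round((score - lo) / (hi - lo) * out_max)

print(normalize(75, 0, 99))      # 75: already on the common scale
print(normalize(750, 0, 999))    # 74: roughly, but not exactly, comparable
print(normalize(50, -100, 100))  # 74: a "+50" on a -100..100 scale
```

So under a purely linear mapping, 75 from source A and 750 from source B land close together, but whether the underlying risk semantics actually match is exactly the standardization gap discussed here.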
Perhaps there is room for a little more standardization. What if a few attribute name-value pairs were introduced and ranges limited, to improve interoperability and make it easier for policy authors to implement? Consider the following claims set, which could be translated into formats such as JWT, SAML, XACML, etc.:
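The claims set is reproduced here as an illustrative reconstruction based on the description that follows; the subject and the concrete values are hypothetical, and the exact figures in the original sample may have differed:

```python
import json
import time

# Illustrative intelligence claims set; could be serialized as a JWT payload
# or mapped onto SAML/XACML attributes. Subject and values are hypothetical.
now = int(time.time())
claims = {
    "iss": "IntelSource",        # issuer of the intelligence assertion
    "iat": now,                  # issued-at timestamp
    "exp": now + 300,            # short expiry
    "aud": "RiskEngine",         # intended consumer
    "sub": "jdoe@example.com",   # user ID (hypothetical)
    "UserAssuranceLevel": 95,    # 0-99; high = user looks legitimate
    "DeviceAssuranceLevel": 91,  # 0-99; high = trusted, healthy device
    "BotProbability": 2,         # 0-99; low = likely a human session
}

print(json.dumps(claims, indent=2))
```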
The above example* shows an Issuer of “IntelSource”, with timestamp and expiry, an Audience of “RiskEngine”, a Subject (user ID), and three additional attributes: “UserAssuranceLevel”, “DeviceAssuranceLevel”, and “BotProbability”. These new attributes are composites of the information types listed above for each category. Ranges for all three attributes are 0-99. In this example, the user looks legitimate. Low user and device assurance levels and/or a high bot probability would make the transaction look like a fraud attempt.
KuppingerCole believes that standardization of a few intelligence attributes as well as normalization of values may help with implementation of risk-adaptive authentication and authorization services, thereby improving enterprise cybersecurity posture.
*Thanks to http://jwtbuilder.jamiekurtz.com/ for the JWT sample.
Europe’s consumers have been promised for some years now that strong customer authentication (SCA) was on its way. And the rules as to when this should be applied in e-commerce are being tightened. The aim is to better protect the customers of e-commerce services.
This sounds like a good development for us all, since we are all regular customers of online merchants or providers of online services. A look at the details of SCA reinforces this impression: logins with only username and password are theoretically a thing of the past, and the risk of fraud based on compromised credentials is considerably reduced.
The Payment Services Directive (PSD2) requires Multi-Factor Authentication (MFA), i.e. any approach involving more than one factor, as the implementation of SCA for all payments over €10. The most common variant is Two-Factor Authentication (2FA), i.e. the use of two factors. There are three classes of factors: knowledge, possession and biometrics, or “what you know”, “what you have”, “what you are”. For each factor, there can be various “means”, e.g. username and password for knowledge, a hard token or a phone for possession, fingerprint or iris for biometrics.
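As a concrete example of how a possession factor typically works under the hood, time-based one-time passwords (TOTP, RFC 6238) derive a short code from a secret shared with the registered device and the current time. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at_time=None, step=30, digits=6):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 seconds
print(totp(b"12345678901234567890", at_time=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and never travels with the password, a phished password alone is no longer enough, which is exactly the point of SCA.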
The use of SCA results in improved protection for virtually all parties involved: e-commerce sites, payment processors and customers can be more confident that transactions are legitimate and trustworthy.
A short look at the history: On November 16, 2015, the Council of the European Union passed the PSD2 and gave Member States two years to transpose the Directive into their national laws and regulations. It should be expected that the broad and comprehensive implementation of SCA as part of the PSD2 will be achieved in a timely manner, as the benefits are obvious. Of course, purchasing processes become a little more complex, because card data and account number or username and password for payment services are no longer enough for checkout. A second, different feature such as a fingerprint or an SMS to your own registered smartphone becomes necessary to increase security.
But shouldn’t we value this significantly increased security and the trust that goes with it? On the contrary, retailers, for example in Germany, are far from positive about stricter security standards. Every change and especially increase in complexity of the purchasing process is regarded as an obstacle, a potential point for dropping out of the customer journey.
And yet the development now emerging was not unexpected. As early as July 2019, the European Banking Authority (EBA) stated that some players were not sufficiently prepared for the PSD2, SCA and thus the required protection of consumers.
As a measure, the member states were offered an extension of the deadline. First and foremost, this was used extensively by the UK, but also by some other countries. In Germany, the new regulations for payments without cash will enter into force on 14 September 2019, almost four years after the European Directive PSD2 was approved. This means that only payment services that implement SCA and are therefore PSD2 compliant can be used for online purchases using credit cards.
And, you guessed it, just recently BaFin (Germany’s financial watchdog) announced in a press release that “As a temporary measure, payment service providers domiciled in Germany will still be allowed to execute credit card payments online without strong customer authentication after 14 September 2019”.
This not only means an immense delay of unclear duration; the otherwise rather homogeneous European market is now being chopped up into a multitude of different regulations and exceptions. The direct opposite of what was planned has been achieved, since it is unclear when and where which requirements will apply, both in the European Union and in a global Internet. The obvious losers are the customers, online security and trust in reliable online purchases, at least in the short to mid-term.
Forward-looking organizations that value their customers and their security and trust can implement SCA now, even without BaFin checks. Companies that benefit from the short delay should quickly seize this opportunity to meet the PSD2 requirements and join that group. But those companies that, since the release of PSD2 and its requirements, have preferred to complain about more complex payment processes and lament EU regulations should reconsider their relationship to security and customer satisfaction (and thus to their customers). And they should rapidly start on a straight path to comprehensive PSD2 compliance. Because temporary measures and extended deadlines are exactly that: temporary, and deadlines.
To meet them successfully and in time, KuppingerCole Analysts can support organizations by providing expertise through our research in the areas of PSD2, SCA and MFA. And our Advisory Services are here to support you in identifying and working towards your individual requirements while maintaining user experience, meeting business requirements and achieving compliance. And our upcoming Digital Finance World event in Frankfurt next week is the place to be to learn from experts and exchange your thoughts with peers.
Thanks to an incessant desire to remove repetitive tasks from our to-do lists, researchers and companies are developing AI solutions for HR, namely to streamline recruiting, improve the employee experience, and assess performance.
AI driven HR management will look different in small businesses than in large companies and multinationals. There are different barriers that will have to be navigated, but also different priorities and opportunities that small businesses will have with AI.
Smaller budgets create price barriers to implementing an AI system, and likely psychological barriers as well, as the self-made CEO resists delegating tasks that would otherwise rely on his or her gut instinct. Access to a sufficient quantity of data to optimize algorithms is perhaps the largest challenge that small businesses will face when integrating AI into their HR practices. Companies typically gather data from their own databases, assembling a wide range of hiring documents, employee evaluations, and so on. Large companies have decades of stored HR data from thousands of employees and clearly have an advantage when it comes to gathering a large volume of usable data.
In terms of priorities, there is a huge divide between the value proposition that AI offers to large and small businesses. Big companies need to leverage time-saving aspects, especially to create a customized connection for thousands of employees. Routine communication, building employee engagement, and monitoring employee attrition are all aspects that minimize repetitive work and save time. In a sense, the goal is to give institutional bureaucracy a personal touch – like a small business has. A small company’s strengths come from its unique organizational culture, which is heavily dependent on natural, human interaction and well-designed teams. It is this “small company” feel that large companies try to imitate with AI customization features.
Of course, small companies also need to save time, especially because many do not have a dedicated HR department – in some cases, the department consists of one person dividing time between their main role and HR tasks. Their time is limited, so instead of implementing FAQ chatbots that make the organization feel small and accessible, small businesses should focus on another area which consumes too much time: recruiting and promoting visibility.
Finding qualified and competitive candidates is challenging when a firm’s circle of influence is geographically limited. A factor often contributing to success in small firms is the ability to hire for organizational fit, thus building tightly knit teams to deliver agile service. To increase the chances of attracting highly qualified candidates, small businesses should focus on using AI systems to support recruiting and hiring for organizational fit.
Small businesses are always under pressure to do more with less. When implementation costs are high and internal resources limited, small businesses can consider plug-and-play tools which rely on external datasets. Those open to experimentation can look for AI projects that overlap with their goals. For example, socially minded companies looking to attract more diverse applicants can participate in studies like AI-enabled refugee resettlement, placing people in areas where they will be most likely to find employment. A project like this could offset setup costs for implementing new technology and achieve wider HR goals that the company may have, like gaining employees with specific skills that are not common in the area, opening up more opportunities for innovation through diversity, gaining different language capabilities, and so on.
The risk of using AI technologies to support hiring has already played out in the case of Amazon. Despite the best intentions, the research team designing a hiring tool to select the most qualified candidates based on their resumes noticed that their algorithm had learned to value traits indicating that a candidate was male and to penalize indicators that a candidate was female. The cause was embedded in the input data: the CVs and associated data the system learned from were shaped by years of gendered hiring practices. The project was quietly put to rest. Luckily, this was only a pilot and never the deciding factor in any application, but it provides a valuable lesson to developers and adopters of recruitment AI: maintaining transparency throughout development and beyond will reveal weaknesses over time. Robust checks by outside parties will be necessary, because one’s own biases are the most difficult to see.
AI can have a role to play in small business HR strategies just as much as the large corporations. But as with any strategy, the decision should be aimed at delivering clear advantages with a plan to mitigate any risks.
Earlier this week, Germany’s Federal Office for Information Security (popularly known as BSI) released its Digital Barometer 2019 (in German), a public survey of private German households measuring their opinions on and experience with cybersecurity. Looking at the results, one cannot but admit that they do not look particularly inspiring, and they probably represent the average situation in any other developed country…
According to the study, every fourth respondent has been a victim of cybercrime at least once, most commonly through online shopping fraud, phishing attacks or viruses. A further 30% of participants expressed strong concerns, believing that the risk of becoming such a victim is very high for them. Somewhat unsurprisingly, these concerns do not translate into consistent protection measures: only 61% of surveyed users have an antivirus program installed, less than 40% update their computers regularly, and only 5% opt for such “advanced” technologies as a VPN.
I’m not entirely sure, by the way, how to interpret these results. Did BSI count users running Windows and thus having a very decent antivirus installed by default as protected? And what about iPhone owners who are not given any opportunity to secure their devices even if they wished to do so? Also, it’s quite amusing that the creators of the survey consider email encryption a useful cybersecurity measure. Even weirder is the inclusion of regular password change (a practice that has long been proven useless and is no longer recommended by NIST, for example) but a notable lack of any mentions of multi-factor authentication.
More worrying statistics, however, show that although the vast majority of users have strong concerns about their online safety, very few consider themselves sufficiently informed about the latest developments in this area, and even fewer actually implement the security measures recommended to them.
The results also clearly indicate that victims of cybercrime have little faith in the authorities and mostly deal with the consequences themselves or turn to friends and family. Less than a third of such crimes end up reported to the police, which means we should take the official cybercrime statistics (which, incidentally, show that the rate of such crimes in Germany grew by 8% last year) with a grain of salt – the real number might be much higher.
The rest of the report talks about various measures the government, BSI and police should develop to tackle the problem, but I don’t think that many users will see any notable changes in that regard: their online safety is still largely their own concern… So, what recommendations could KuppingerCole give them?
- Do not blindly spend money on security tools without understanding your risks and how those tools can (or cannot) mitigate them. Most home users do not really need another antivirus or firewall – the ones built into Windows are already quite good. Corporate users, however, require an efficient, multi-level security approach; defining a tailored security portfolio is therefore an important challenge.
- In fact, investing in a reliable off-site backup solution would make much more sense: even if your device is compromised and your files are destroyed by ransomware, you could always restore them quickly. A good backup will also protect from many other risks and prevent you from losing an important document to simple negligence or a major natural disaster. And by the way: Dropbox and Google Drive are not backup solutions.
- Activating multi-factor authentication for your online services will protect you from the vast majority of hackers and fraudsters. It is crucial to do it consistently: not just for your online banking, but for email and social media platforms as well. By making your accounts far harder to hijack, you’re protecting not just yourself, but your online friends too.
- Quite frankly, the best security tool is your own common sense. Checking a suspicious-looking email for obvious indicators of fraud, or asking your colleague whether they actually used an obscure website to send you an urgent document before opening it: in most cases, this simple vigilance will help you more than any antivirus or firewall.
For more complicated security-related questions, you can always talk to us!
It seems that there is simply no end to Facebook’s long series of privacy blunders. This time, a security researcher stumbled upon an unprotected server hosting several huge databases containing the phone numbers of 419 million Facebook users from different countries. Judging by the screenshot included in an article by TechCrunch, this looks like another case of a misconfigured MongoDB server exposed to the Internet without any access controls. Each record in those databases contains a Facebook user’s unique ID, which can be easily linked to an existing profile, along with that user’s phone number. Some records also contained additional data such as name, gender or location.
Facebook has denied that it has anything to do with those databases, and there is no reason to doubt that; the sheer negligence of the case rather points to a third party lacking even basic security competence, perhaps a former Facebook marketing partner. This is far from the first case of user data being harvested off Facebook by unscrupulous third parties, the biggest to date being the notorious Cambridge Analytica scandal of early 2018. After that, Facebook disabled access to users’ phone numbers for all its partners, so the data leaked this time is probably not the most current.
Still, the huge number of affected users and the company’s apparent inability to find any traces of the perpetrators clearly indicate that Facebook hasn’t done nearly enough to protect its users’ privacy in recent times. Until further details emerge, we can only speculate about the leak itself. What we can do today, however, is try to figure out what users can do to protect themselves from this leak and to minimize the impact of similar data breaches in the future.
First of all, the most common advice “don’t give your phone number to Facebook and the likes” is obviously not particularly helpful. Many online messaging services (like WhatsApp or Telegram) use phone numbers as the primary user identities and simply won’t work without them. Others (like Google, Twitter or even your own bank) rely on phone numbers to perform two-factor authentication. Second, for hundreds of millions of people around the world, this advice comes too late – their numbers are already at the disposal of spammers, hackers, and other malicious actors. And those guys have a few lucrative opportunities to exploit them…
Besides the obvious use of these phone numbers for unsolicited advertising, they can be used to expose people who use pseudonyms on social media and link those accounts to real people – whether to suppress political dissent or simply to further improve online user tracking. Alas, the only sensible method of preventing this breach of privacy is to use a separate, dedicated phone number for your online services, which can be cumbersome and expensive (not to mention that it would have had to be done before the leak!).
Unfortunately, in some countries (including the USA), leaked phone numbers can also be used for SIM swap attacks, where a fraudster tricks a mobile operator into issuing them a new SIM card with the same number, effectively taking full control over your “mobile identity”. With that card, they can pose as you in a phone call, intercept text messages containing one-time passwords, and thus easily take over any online service that relies on your mobile number as the means of authentication.
Can users do anything to prevent SIM swap attacks? Apparently not, at least not until mobile operators are forced by governments to collaborate with police and banks on fighting this type of fraud. Again, the only sensible way to minimize its impact is to move away from phone-based (not-so-)strong authentication methods and adopt a more modern MFA solution: for example, invest in a FIDO2-based hardware key like a YubiKey, or at least switch to an authenticator app like Authy. And if your bank still offers no alternative to SMS OTP, maybe today is the right time to switch to another bank.
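What makes authenticator apps immune to SIM swapping is that no phone number is involved at all: the app and the service share a secret, and codes are derived locally from that secret plus the current time using the standard TOTP algorithm (RFC 6238). As a minimal illustration (a sketch of the core derivation, not a production implementation), the whole mechanism fits in a few lines of Python:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """Derive a TOTP code (RFC 6238) from a shared secret.

    Time is divided into fixed steps; the step counter is HMAC-SHA1'd
    with the secret and dynamically truncated (RFC 4226) to `digits`.
    """
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp) // step                 # which 30-second window we are in
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59
print(totp(b"12345678901234567890", timestamp=59, digits=8))  # -> 94287082
```

Since the secret never travels over the mobile network, there is nothing for a SIM swapper to intercept; a real implementation would additionally compare codes in constant time and tolerate small clock drift by checking adjacent time windows.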
Remember, in the modern digital world, your phone number is the key to your online identity. Keep it secret, keep it safe!