Blog posts by Paul Fisher
Artificial intelligence (AI) and machine learning tools are already disrupting other professions. Journalists are concerned about automation being used to produce basic news and weather reports. Retail staff, financial workers and some healthcare staff are also at risk, according to the US public policy research organization Brookings.
However, it may come as a surprise to learn that Brookings also reports that lawyers have a 38% chance of being replaced by AI services soon. AI is already being used to conduct paralegal work: due diligence, basic research and billing services. A growing number of AI-based law platforms are available to assist with contract work, case research and other time-consuming but important back-office legal functions. These platforms include LawGeex, RAVN and the IBM Watson-based ROSS Intelligence.
While these may threaten lower-end legal positions, they would also free up lawyers to spend more time analyzing results, thinking, and advising their clients with deeper research to hand. Jobs may well be added as law firms seek to hire AI specialists to develop in-house applications.
But what about introducing AI into the criminal justice system? This is where the picture becomes more complicated and raises ethical questions. There are those who advocate using AI to select potential jurors. They argue that AI could gather data about jurors, including accident history, whether they have served before and the verdicts of those trials, and, perhaps more controversially, a juror’s political affiliations. AI could also be used to analyze facial reactions and body language that indicate how a potential juror feels about an issue, revealing a positive or negative bias. Proponents of AI in jury selection say it could optimize this process, facilitating greater fairness.
Others worry that rushing into such usage could have the opposite effect. Song Richardson, Dean of the University of California-Irvine School of Law, says that people often view AI and algorithms as objective without considering the origins of the data used in the machine-learning process. “Biased data is going to lead to biased AI. When training people for the legal profession, we need to help future lawyers and judges understand how AI works and its implications in our field,” she told Forbes magazine.
Autonomous vehicles are a good example. Where does the legal blame lie for an accident: with the driver, the car company, the software vendor or another third party? These are questions best answered by human legal experts who understand the impact of AI and IoT on our changing society.
Perhaps a good way to illustrate the difference between human thinking and AI is Go: AI now routinely beats human champions because, while it plays according to the formal rules of the game, it does so in ways no human would ever choose.
If AI oversaw justice it might very well “play by the rules” too, but this would likely mean a strict interpretation of the law in every case, with no room for the nuance and consideration that experienced human lawyers and judges bring. Our jails might fill up very quickly!
Assessing guilt or innocence, cause and motive in criminal cases requires empathy and instinct as well as experience – something only humans can provide. At the same time, it is not unknown for skilled lawyers to win acquittals for guilty parties through their own charisma, theatrics and the resources available to them. Greater involvement of AI could lead to a more fact-based and logical criminal justice system, but it is unlikely that robots will take the place of prosecution or defence lawyers in a courtroom. At some point AI may well be used in court, but its reasoning would still have to be weighed and checked against a tool like IBM Watson OpenScale to validate its results.
For the foreseeable future, AI in the legal environment is best used to enhance research, and even then we should not trust it blindly, but understand what it is doing, whether its results are valid and, as far as possible, how they are achieved.
The wider ethical debate around AI in law should not prevent us from using it right now in those areas where it will bring immediate benefit and open up new legal services and applications. Today, AI could benefit those seeking legal help. Time-saving, AI-based research tools will drive down the cost of legal services, making them accessible to those on lower incomes. It is not hard to envisage AI-driven, cloud-based legal services that provide advice to consumers without any human involvement, either from startups or as add-ons to traditional legal firms.
For now, the impact of AI on the legal profession is undeniably positive if it reduces costs and frees up lawyers to do more thinking and communicating with clients. With further development it may soon play a more high-level role in legal environments, in tandem with human legal experts.
It’s not been a good couple of weeks for Apple. The company that likes to brand itself as superior to rivals in its approach to security has been found wanting. Early in August it was forced to admit that contractors had been listening in to conversations on its Siri network. It has now temporarily stopped the practice, claiming that only “snippets” of conversations were captured to improve the service.
At the end of last week, a much more serious security and privacy threat was made public. Google researchers revealed that hackers had been planting monitoring implants on iPhones for years, affecting thousands of users per week. The hacking operation, which started in 2017, used several websites to deliver malware onto iPhones. Users did not have to interact with the sites: just visiting was enough. From there, criminals were able to siphon passwords and chat histories from WhatsApp, iMessage and Telegram – bypassing the encryption designed to protect the integrity of these messaging apps. According to the researchers, attackers used five different exploit chains covering 14 vulnerabilities.
This is undoubtedly a major incident. It severely undermines Apple’s reputation for securing users’ devices and the personal data residing on them. In an age when all tech companies face criticism for misuse of customer data, it comes as a body blow to Apple’s security management expertise – an area in which it has consistently portrayed itself as superior.
Worse still is the revelation that Apple was made aware of the flaw in the iPhone in February this year. Apple did release a patch, but why did it not make a far more urgent public announcement back in February warning all iPhone users to update their iOS software? This is Apple’s real failure: trying to make everyone believe it has the best security controls but not delivering. It’s not the first time that Apple’s culture of secrecy has undermined security, as a previous blog by Martin Kuppinger illustrates.
Not surprisingly, others were making hay at Apple’s expense on social media last week. “This is a huge find by Google’s team,” said Alex Stamos, Facebook’s former security chief and now a researcher at Stanford University, while Marcus Hutchins, a security researcher who helped stop the WannaCry attack in 2017, wrote: “Maybe I’m missing something, but it feels like Apple should have found this themselves.”
Apple did not fail to patch, but it did fail to act swiftly and communicate the flaw adequately, and now it finds itself on the back foot. Was all this the result of hubris or carelessness? Either way, it is not a good look as the company gears up to launch the iPhone 11 and promote its new credit card as a secure alternative to conventional bank cards. As ever, the best advice for users of iPhones or any device is to check regularly that you have the most up-to-date operating system installed.
Reports of a data breach against Mastercard began surfacing in Germany early last week, with Sueddeutsche Zeitung (in German) among the first news outlets to report on the loss. As is often the case in major corporate breaches, the company was slow to react officially. On Monday it said only that it was aware of an “issue”. The next day the company had someone to blame: a third-party provider that, it said, had lost data including usernames, addresses and email addresses, but no credit card details.
By Wednesday, however, this statement was proved incorrect when persons unknown uploaded an Excel file with full credit card numbers to the Internet, albeit without CVV codes or expiration dates. Even so, a credit card number with a name and address is still a highly valued and dangerous item on the dark web. It took until the end of the week before Mastercard admitted that 90,000 customers had been affected and reported the incident to the German Data Protection Authority (DPA). Mastercard confirmed that a third party running its German rewards program Priceless Specials had been attacked.
The company said that the breach had no connection to Mastercard’s payment transaction network, and it was “taking every possible step to investigate and resolve the issue,” including informing and supporting cardholders. The company shut down the German Specials website.
There are two lessons from this breach. First, it took Mastercard five days to fully admit it had been attacked. Not only does this potentially contravene the GDPR, which requires notification within 72 hours; it also left customers without any information and unsure of their exposure. This suggests a failure or absence of incident response management policies and processes at Mastercard, which should be put into action at the first sign of a potential breach. It cannot be emphasised enough that companies must scrupulously prepare for disasters and incidents, including PR and executive response strategies, to avoid telling conflicting stories.
Secondly, the fact that the breach occurred at a service provider proves once again that oversight and due diligence are essential when confidential data is at stake. The GDPR states quite clearly that the data controller remains responsible for a breach at a third-party provider, and this case is a perfect example of how Mastercard may be judged to have failed in that regard when the DPA investigates.
After the recent Capital One breach, some commentators have suggested that cloud security is fundamentally flawed. Like many organizations today, Capital One uses Amazon Web Services (AWS) to store data, and it was this data that was targeted and stolen.
In the case of Capital One it was process, not technology, that failed. The company failed on three counts to secure its data properly using the extended tool sets that AWS provides. It relied only on the default encryption settings in AWS, suggesting a lack of product knowledge or complacency in its security teams. Its access control policies had not been properly configured and allowed anonymous access from the web. Finally, the breach was not discovered until four months after it happened, because Capital One had not turned on the real-time monitoring capabilities in AWS. This last point would have put the company in a tricky position if any of the data had belonged to EU citizens – in this case it looks like only US citizens were affected.
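The access-control failure described above is the kind of misconfiguration that can be caught programmatically. As a simplified sketch (the helper below is illustrative, not Capital One’s actual setup or an AWS tool), this checks an S3-style bucket policy, in AWS’s documented JSON format, for statements that grant access to everyone:

```python
import json

def allows_anonymous_access(policy_json: str) -> bool:
    """Return True if any Allow statement grants access to everyone.

    In AWS bucket-policy JSON, a Principal of "*" (or {"AWS": "*"})
    means anonymous/public access - the kind of misconfiguration
    described above.
    """
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may appear as a bare object
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        if principal == "*":
            return True
        if isinstance(principal, dict) and "*" in str(principal.get("AWS", "")):
            return True
    return False

# Example: a policy that mistakenly allows public reads
public_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})
print(allows_anonymous_access(public_policy))  # True
```

Running a check like this against every storage policy, as part of regular configuration audits, is exactly the sort of process discipline that was missing here.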
The lesson from the incident is not that cloud security is unfit for the job. Certainly, putting data in the cloud without protection is foolish, but modern cloud platforms such as AWS and Azure have advanced configuration controls to defend robustly against breach attempts. The cloud is here to stay; the digital transformation essential to modern business depends on it. To suggest we curtail its usage because of security concerns is to shirk our responsibility, and our ability, to secure it with the tools at our disposal.
To learn how KuppingerCole Analysts can assist you in establishing a compliant and secure cloud strategy, please download our Advisory Services brochure.
A new strain of Sodinokibi ransomware is being used against companies in the United States and Europe. Already notable for a steep increase in the ransoms demanded ($500,000 on average), the malware can now activate itself, bypassing the need for users to click a phishing link, for example. In addition, the Financial Times reports that criminals are targeting Managed Service Providers (MSPs) to find backdoors into their clients’ data, as well as attacking companies directly. “They are getting into an administration system, finding lists of client privileged credentials and then installing Sodinokibi on all the clients’ systems,” the report warns.
Ransomware has proven highly effective for cyber criminals, as many companies have no alternative but to pay up after being locked out of their own systems. This is particularly true of smaller companies, which often have no cyber insurance to cover their losses. Criminal hackers have also become more ruthless – sometimes refusing to unlock systems even after the ransom has been paid.
But the sophistication of this new strain of Sodinokibi and the inflated ransom demands tell us that the criminal developers and distributors have raised the bar. The ransomware does not need to find vulnerabilities, as it gains “legitimate” access to data through stolen credentials. Left unchecked, Sodinokibi threatens to be as damaging as its notorious predecessor, Petya.
Even Managed Security Service Providers (MSSPs) are not immune. According to reports, one such MSSP was attacked through an unpatched version of the Webroot Management Console, enabling attackers to spread the ransomware to all its clients. Webroot responded by sending out a warning email to all its customers, saying it had logged out everyone and activated mandatory two-factor authentication.
Webroot’s warning email after one of its MSSP customers was attacked by Sodinokibi
Notwithstanding the fact that MSSP clients should expect their providers to take robust and regular proactive security steps as part of an SLA, this incident shows that diligent use of IAM and authentication controls can do much to prevent ransomware from doing its worst. But it is privileged accounts that are the true nectar for cyber criminals, as these unlock so many doors to critical data and services. This is why PAM (Privileged Account Management) is essential in today’s complex, hybrid organizations – and doubly so if this responsibility is outsourced to MSPs or MSSPs. (For more on PAM, please see our recent Leadership Compass and Whitepaper research documents.)
The success of any ransomware, which is not a complex piece of code in itself, depends on a lack of preparedness by organizations and a lack of due diligence in patching the systems it uses to reach its intended targets. In the case of Sodinokibi, its new ability to execute unaided makes this more important than ever.
When too many users have access to critical data and systems, it makes life much easier for ransomware. A properly configured and up-to-date PAM platform, either on premises or at an MSP, will do much to stop this and prevent the situation found at many organizations, where privileged accounts and admins often have too much access. Best practice in today’s enterprise environments is to issue credentials for single tasks, keep them strictly time-limited, and set two-factor authentication as the default for privileged accounts. This would stop ransomware from spreading too far into an organization. Another useful concept for MSPs and MSSPs is fully automated administration of client services with well-tested runbooks, and no personalized access to the systems at all.
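The single-task, time-limited credential model described above can be sketched in a few lines. The broker below is a hypothetical illustration, not the API of any real PAM product: it checks out a token that is valid only for one named task and expires after a hard time limit, after which access checks fail automatically.

```python
import secrets
import time

class CredentialBroker:
    """Minimal sketch of single-task, time-limited privileged access."""

    def __init__(self):
        self._leases = {}  # token -> (task name, expiry timestamp)

    def check_out(self, task: str, ttl_seconds: int = 900) -> str:
        """Issue a one-off token valid only for this task and TTL."""
        token = secrets.token_hex(16)
        self._leases[token] = (task, time.time() + ttl_seconds)
        return token

    def is_valid(self, token: str, task: str) -> bool:
        """A token is valid only for its own task and before its expiry."""
        lease = self._leases.get(token)
        if lease is None:
            return False
        lease_task, expiry = lease
        return lease_task == task and time.time() < expiry

broker = CredentialBroker()
token = broker.check_out("restart-db", ttl_seconds=60)
print(broker.is_valid(token, "restart-db"))    # True
print(broker.is_valid(token, "read-payroll"))  # False: wrong task
```

Because every lease is scoped to one task and dies on its own, stolen credentials of this kind are far less useful to an attacker than the standing admin passwords that the Sodinokibi operators harvested.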
Of course, a management platform should be patched to stop any form of ransomware reaching those credentials in the first place – patches for the vulnerabilities Sodinokibi exploits are widely available – but as we have seen, organizations cannot rely on that happening. Given what happened with the Webroot platform, there is a strong argument for organizations to host IAM on premises, at least for privileged account management, so that they retain control over patch management. A robust IAM and PAM solution will prevent “access creep” by ensuring the consistent application of rules and policies across an organization. After all, hackers can’t demand a ransom if they can’t get access to your critical systems.
Register now for KuppingerCole Select and get your free 30-day access to a great selection of KuppingerCole research materials and to live trainings.
AI for the Future of your Business: Effective, Safe, Secure & Ethical
Everything we admire, love and need to survive, and that brings us further in creating a better future with a human face, is and will be a result of intelligence. Synthesizing and amplifying our human intelligence therefore has the potential to lead us into a new era of prosperity like we have never seen before, if we succeed in keeping AI safe, secure and ethical. Since the very beginning of industrialization, and even before, we have been striving to structure our work in a way that makes it accessible for [...]