Blog posts by Paul Fisher
AI's role in reducing the impact of future pandemics
As the coronavirus spreads fear and panic across the world, it’s perhaps timely to take a step back and consider the future of healthcare and how AI will help. But first let’s consider that the coverage and spread of the virus shows us precisely why reliable data is needed to help us cope with new diseases. At the time of writing, most official advice on coronavirus is not based on hard, data-led evidence about how the virus spreads, the best way to contain it, who is most vulnerable, what the incubation period is, and so on. Instead we have been left with mostly guesswork and conflicting stories in newspapers.
My colleague wrote an excellent blog on the don'ts of IT in times of crisis; check it out for an overview of what to do and what to avoid.
Imagine instead that an AI platform had been built and ready to analyze the outbreak of the virus when it was first discovered in Wuhan, China. If algorithms had been prepared earlier to screen those falling ill at an early stage of a new virus – who they were, their movements and so on – we might have gone further in containing the virus some two months ago. Of course, data and privacy are inextricably linked, and societies need to balance the safety and health of all in emergencies against possible infringement of data privacy laws. Careful use of AI and enhanced data security, along with consensual use of tracking apps, could help us achieve the first and avoid the second – but one feels this is work that needs to be done to prepare for any future pandemic.
The future should give us hope
Even without a global pandemic to worry about – and of course it will pass – AI is increasingly going to play a large part in providing better, more cost-effective healthcare in major economies, whatever model of funding is in place. And it’s something not lost on developers and investors looking for the next big opportunity. According to the Financial Times, some 367 US AI healthcare start-ups received around $4bn in funding in 2019. Many of these start-ups are developing AI-assisted applications and tools that help with the diagnosis and treatment of conditions, or allow sensors to be placed on non-critical patients, freeing up a hospital bed.
Real-time data from potentially thousands of patients can be put in front of algorithms to spot early signs of deterioration in a patient, rather than waiting for the patient to turn up in the ER. At the same time, the accumulated data from connected patients will be invaluable for creating future algorithms and learning more about how diseases behave, and why some people may be more at risk.
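To make the idea concrete, here is a minimal sketch of how streamed vital-sign readings could be screened against a rolling baseline to flag early deterioration. This is an illustration, not a clinical algorithm: the window size, threshold and sample data are all invented.

```python
from collections import deque
from statistics import mean, stdev

def deterioration_alerts(readings, window=5, threshold=2.0):
    """Flag readings that deviate sharply from the recent baseline.

    readings: iterable of (timestamp, value) vital-sign samples.
    Returns the timestamps whose value lies more than `threshold`
    standard deviations from the rolling-window mean.
    """
    baseline = deque(maxlen=window)
    alerts = []
    for ts, value in readings:
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                alerts.append(ts)
        baseline.append(value)  # the flagged reading joins the baseline afterwards
    return alerts

# Simulated heart-rate samples: stable readings, then a sudden spike.
samples = [(t, hr) for t, hr in enumerate([72, 74, 73, 75, 72, 74, 73, 110])]
print(deterioration_alerts(samples))  # → [7]
```

A production system would of course use richer models over many vital signs, but the pattern – continuous streams compared against learned baselines, alerting before the patient reaches the ER – is the same.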
How AI in healthcare should help us reduce waste
AI can help us not just with clinical issues in the health industry but also with reducing waste and improving efficiencies – the bane of all health systems. Appointments and clinic attendance are a regular bottleneck in delivering efficient care to outpatients. Resources are spent on phone calls, appointment letters and then the routine check-in/check-out at the appointment. The process has not changed in decades, and clinics still suffer from abandoned appointments, which then have to be followed up manually, and from people turning up at the wrong time. Altogether it’s one of the biggest avoidable costs in healthcare, yet the biggest advance has been to send SMS messages reminding patients of upcoming appointments.
However, we have no data on missed appointments, repeat offenders, how far people may have to travel to get to appointments, or even whether they need an appointment at all. Many are routine, periodic checks, often done to fulfill insurance rules, with no prior data on the condition of the patient. Again, sensors that can read vital signs may rule out the need for routine appointments and instead recommend appointments only when conditions signal risk to the patient. The advice often given to patients is to turn up between appointments if they “feel ill”, resulting in more people in clinics with inaccurate self-diagnoses, wasting resources and time.
Clustering of data from patients
Preventative medicine has been talked about for many years, but this has mostly meant educating people to change their lifestyle: cut down on alcohol, get more exercise, stop smoking and so on. But AI will help societies deliver genuine advances in preventative medicine by monitoring data from patients that already have a disease or may be at risk of developing one. Such data can also be used to discover disease clusters that may develop due to environmental or social conditions. While we know (or think we know) that poor diet or lifestyle choices can increase the risk of life-threatening conditions, without AI we know little about how other factors come into play. For example, a group may use the same public transport networks, live in a certain type of building, perform a certain occupation or share similar activities.
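As a toy illustration of cluster discovery, the sketch below groups case locations into spatial hot spots using a minimal k-means. The coordinates are invented, and a real system would cluster over many more dimensions (occupation, transport use, housing type and so on) with far more robust tooling.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: group (x, y) records into k spatial clusters."""
    centroids = list(points[:k])  # naive seeding, fine for a sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y in points:
            # Assign each point to its nearest centroid (squared distance).
            nearest = min(
                range(k),
                key=lambda i: (x - centroids[i][0]) ** 2 + (y - centroids[i][1]) ** 2,
            )
            clusters[nearest].append((x, y))
        # Move each centroid to the mean of its cluster.
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Invented example: case coordinates form two geographic hot spots.
cases = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (8.0, 8.1), (7.9, 8.0), (8.1, 7.9)]
print(sorted(len(c) for c in kmeans(cases, k=2)))  # → [3, 3]
```

The point of the sketch is the output, not the algorithm: once cases are grouped, analysts can ask what the members of a cluster share – a transport network, a building type, an occupation.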
The potential for AI to reveal hitherto hidden health data and patterns is massive; there is just so much we don’t know about what factors contribute to good and poor health. By adopting AI technologies now, we can take a big step forward towards better healthcare for all, including dealing with future pandemics.
Learning more about AI
KuppingerCole has an increasing body of research on the impact AI will have on other sectors and on its integration with legacy architecture.
Reports include the following:
- Leadership Brief on Explainable AI;
- Assessing the Maturity of Core AI disciplines and
- AI in the Legal Industry plus many more.
You can also contact one of our expert analysts for more specific information on an AI application or trend.
Microsoft is currently running ads extolling the virtues of AI and IoT sensors in helping farmers produce more and better crops, with less waste and higher yields. Elsewhere, in manufacturing, supply chain management is being transformed with digital maps of goods and services that reduce waste and logistical delays.
In Finland, a combination of AI and IoT is making life safer for pedestrians. The City of Tampere and Tieto built a pilot system that automatically detects when a pedestrian is planning to cross the street at an intersection. Cameras at intersections feed algorithms trained to detect the shape of pedestrians with 99% accuracy, which then activate the traffic lights to stop traffic.
Low Latency, High Expectations
There is a common thread in all these examples: sensors at the edge send data to algorithms in the cloud, which trigger a response. All the while, the data is collected to improve the algorithms, extrapolate trends and improve future systems. These examples show that IoT and AI already work well when they respond to pre-scripted events, such as a pedestrian appearing near a crossing or soil drying out. The machines have already learnt how to deal with situations that would be expected in their environment. They are not so much replacing the human decision-making process as removing the chore of having to make the right decision. All good.
Low latency is essential in any AI and IoT application for industry or agriculture if the right response is to be sent promptly to the edge from an existing library of algorithms. But what if the edge devices had to learn very quickly how to deal with a situation they had not experienced before, such as an out-of-control wildfire or unprecedented flooding on agricultural plains? Here latency is only part of the equation. The other part is the availability at the edge of the massive amounts of data needed to decide what to do – but edge devices by their nature cannot typically store or process such levels of data.
IBM has written a research paper on how edge devices – in this case drones sent to monitor a wildfire – could perform a complex learning operation: simultaneously modelling, testing and ranking many algorithms before deciding on the appropriate analytics to deploy to the edge, allowing firefighters to respond. This is much closer to a truly intelligent model of IoT deployment than our earlier examples.
In the IBM example, Cognitive Processing Elements (CPE) are used in sequence to assist in making the right decisions to help stop the fire spreading, and to understand how wildfires behave in extremis – in itself a poorly understood phenomenon. Can we, therefore, create a hybrid IoT/AI/cloud architecture that can intelligently process data at appropriate points in the system depending on circumstances? It may help not just in natural disasters but in another great hope for AI and IoT: the fully autonomous vehicle.
Who Goes First, Who Goes Second?
Currently, driverless cars are totally reliant on pre-existing algorithms and learnings – such as a red light or the shape of a pedestrian in the headlights – to make decisions. We remain a long way from fully autonomous vehicles; in fact, some researchers are now sceptical about whether we will ever reach that point. The reason is that human car drivers already act like the intelligent drones featured in IBM’s research paper – but uber versions of them. They not only have access to massive levels of intelligence but can process it at the edge in real time to make decisions based on their experience, intelligence and, crucially, learnt social norms.
Consider the following example, which occurs millions of times every day on Europe’s narrow, crowded suburban streets, to see how this works. Cars will invariably be parked on both sides with only a gap for one vehicle to pass in the middle. What happens when two cars approach? One or the other must give way – but which one? And how many cars are let through once one driver takes the passive role? Somehow, in 99.9% of incidents, it just works. One day we may be able to say the same when two autonomous vehicles meet each other on a European street!
The importance of privileged accounts to digital organizations and their appeal to cyber attackers have made Privileged Access Management (PAM) an essential component of an identity and access management portfolio. Quite often, customers will see this purely as a security investment, protecting the company’s crown jewels against theft by organized crime and against fraudulent use by insiders. More successful cyber-attacks are now enabled by attackers gaining access to privileged accounts.
However, that is only part of the story. Organizations also must worry about meeting governance and compliance demands from governments and industry bodies. Central to these are penalties, often quite stringent, that punish organizations that lose data or fail to meet data usage rules. These rules are multiplying; the most recent is the new California Consumer Privacy Act (CCPA), which joins GDPR (personal data), PCI-DSS (payment data) and HIPAA (medical data) in affecting organizations across the globe.
Along came the GDPR
In the run-up to GDPR in 2018, alert security and governance managers realised that better control of identity and access management in an organization went some way towards achieving compliance. Further, there was a realisation that PAM could give more granular control of the highly privileged accounts that criminals were now actively targeting, and which were putting organizations in danger of falling foul of compliance laws.
Digital transformation ushered in an era of growth in cloud, big data, IoT and containers as organizations sought to gain a competitive advantage. This led to an increase in data and access points, and privileged accounts multiplied. Those accounts that had access to personal and customer data were at particular risk.
Digital transformation brings new challenges
The PAM market in 2020 is set for change as vendors realise that customers need to protect privileged accounts in new environments, such as DevOps and cloud infrastructures, that are part of digital transformation. Increasingly, organizations will grant privileged access on a Just in Time (JIT) or One Time Only (OTO) basis to reduce reliance on vaults for storing credentials, simplify session management and, above all, speed up business processes. However, this acceleration of the privileged access process introduces new compliance risks if PAM solutions are not able to secure the new processes.
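The JIT idea can be illustrated with a small sketch: instead of a standing credential sitting in a vault, a privileged grant is minted on request with a built-in expiry. The class and parameter names here are hypothetical, chosen for illustration; real PAM products wrap this pattern in policy, approval workflows and session recording.

```python
import secrets
import time

class JITGrant:
    """A short-lived privileged credential: issued on demand and expiring
    automatically, so no standing secret sits in a vault."""

    def __init__(self, account, ttl_seconds):
        self.account = account
        self.token = secrets.token_hex(16)           # fresh one-time credential
        self.expires_at = time.time() + ttl_seconds  # hard expiry

    def is_valid(self, now=None):
        """The grant is only usable until its expiry time."""
        return (now if now is not None else time.time()) < self.expires_at

grant = JITGrant("db-admin", ttl_seconds=300)
print(grant.is_valid())                          # valid while fresh
print(grant.is_valid(now=grant.expires_at + 1))  # invalid after expiry
```

The compliance appeal is that there is nothing long-lived to steal: the credential exists only for the duration of the approved task.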
The good news is that vendors are responding to these demands: established players are introducing new modules for DevOps and JIT deployment in their PAM suites, while smaller start-ups are spotting niches in the market and acting accordingly with boutique PAM solutions for more digital environments.
PAM reduces risk but does not guarantee compliance
None of this means that an organization will be fully compliant just because it beefs up its PAM solutions across the board. Done well, this will reduce the risk of data loss through infiltration of privileged accounts by some percentage points, and along the way tick some boxes in every CISO’s favourite security standard, ISO 27001. An organization also needs to harden data centres, improve web security and improve auditing – among other tasks.
At VMworld Europe 2019, Pat Gelsinger, CEO of VMware, said security is fundamentally broken and that the overabundance of vendors is making the problem worse. I’m not sure this is true. Gelsinger had some good lines: that applications which are not updated and patched on a regular basis should be outlawed by legislation, and that security is too threat-based.
Making security less threat-focused is a good thing
The solution, according to VMware, is simple: we need to build more security into the platform, with the supreme goal of a single security agent running across the entire enterprise. Security therefore should be built-in, unified and focused on the applications, not the threat. That part is true: security should be less threat-focused, but I believe that the security of an organization should also be built on risk-based identity management.
When large platform vendors start talking about simplifying security, it inevitably revolves around their platform – in this case a widely used and trusted one. So, what is VMware’s solution? Not surprisingly, it consists of putting apps and data at the center of access points, endpoint, identity, workload, cloud and the network – all protected by the “intrinsic security” layer, also known as Carbon Black, which VMware has now fully acquired. VMware believes this will succeed because it will use big data analytics with a single agent that monitors all endpoints, with IAM lifecycle management built into the infrastructure.
“The Carbon Black platform will deliver a highly differentiated intrinsic security platform across network, endpoint, workload, identity, cloud and analytics. We believe this will bring a fundamentally new paradigm to the security industry,” said Gelsinger.
It ain’t what you do, it's the way that you do it
It’s obviously a compelling prospect, but is it realistic? VMware is right to suggest that two major blocks to security are bolted-on solutions and siloed platforms. But it would be more accurate to say that badly chosen bolted-on solutions are a problem, and that solutions running within silos are the result of little or no risk assessment and bad planning. There are indeed thousands of security vendors out there, which VMware illustrated with a couple of slides featuring hundreds of logos (pity the poor guy who had to put that together).
The fundamental reason that so many solutions exist is that so many security and identity challenges exist, and these vary with the type and size of organization. Digital transformation has now added extra challenges. The demands of securing data, identity and authentication are fluid and require innovation in the market, which is why we cover it. Gelsinger was right to say that consolidation must come, within organizations and in the vendor space – that is normal, and VMware’s acquisition is a good example of it. But consolidation is often followed by market innovation from startups that serve the new demands the process of consolidation leaves behind.
Super solutions are not a new idea
Which brings us to the crux of this so-called intrinsic security proposition. In simple terms, chucking a semi-intelligent big data analytics engine around your cloud and virtualised infrastructures sounds great. The real-time analysis engine keeps all the bad stuff out without relying solely on old-fashioned AV and signature-based protection. Except I don’t think that is possible. It will not solve all the granular problems around IAM, such as privileged accounts and credentials embedded in code. Intrinsic Security sounds very much like a super firewall solution for VMware – useful to have, but it won’t stop organizations that run on VMware from eventually going back to that slide with so many other vendor logos...
For more on Infrastructure as a Service please see our Leadership Compass report.
There is more to the cloud than AWS, Azure, IBM and Google, according to OVHCloud – the new name for OVH as it celebrates its 20th anniversary. While the big four have carved up the public cloud between them, the French cloud specialist believes that business needs are changing, which gives it an opportunity in the enterprise market it is now targeting. In short, OVHCloud believes there is a small but discernible shift back to the private cloud – for security and compliance imperatives.
That does not mean that OVHCloud is abandoning the public cloud to the Americans. At October’s OVHCloud Summit in Paris, CEO Michel Paulin spoke forcefully of the need for Europe (for that, read France) to compete in this space. “We believe we can take on the US and Chinese hegemony. Europe has all the talents needed to build a digital channel that can rival all the other continents.” he said.
OVHCloud needs to shift focus and mature
The company is growing, with 2,200 employees and revenue estimated at around $500m. For comparison, AWS posted revenue of $9bn for its third quarter of 2019 – spot the difference. OVHCloud is doubling down on the Software as a Service (SaaS) market, with 100 SaaS products announced for a new dedicated marketplace. The company says the focus will be on collaboration and productivity tools, web presence and cloud communication. On the security front, OVHCloud is promising the following soon: Managed Private Registry, Public Cloud Object Storage Encryption and K8s private network.
If OVHCloud is to take even a chunk of the Big Four’s market, it needs to shift focus and mature. It believes it can, by moving from what it terms the “startup world” of digital native companies into the traditional enterprise sector (without neglecting its cloud native customers). Customers gained so far include insurance, aviation, big IT services and some finance and retail companies. OVHCloud believes the enterprise market is lagging behind its traditional customers in digital innovation and transformation.
Security reasons and better data oversight bring customers back to private clouds
Crucially, the company thinks that enterprise customers are coming back to private clouds for security reasons and better oversight of data in the age of big compliance. At the same time, it predicts that the future of cloud should remain open and multi-cloud, something I and others would agree with.
In terms of business strategy, OVHCloud is moving from a product approach to a solution approach along with the shift towards enterprise customers – this makes sense. OVHCloud makes much of its ability to build its own servers and cooling systems, and sees this as a USP, claiming the industry’s lowest TCO for energy usage. Such an advantage depends on scale, however, and in an open multi-cloud, multi-vendor market the cost savings may make little difference to enterprise customers. But the green message may play well in today’s climate-conscious market for some buyers in the startup crowd, and potentially in the more digital parts of larger enterprises.
For more insight into the enterprise cloud market please read our reports or contact one of our analysts.
Much is written about the growth of AI in the enterprise and how, as part of digital transformation, it will enable companies to create value and innovate faster. At the same time, cybersecurity researchers are increasingly looking to AI to enhance security solutions to better protect organizations against attackers and malware. What is overlooked is the same determination by criminals to use AI to assist them in their efforts to undermine organizations through persistent malware attacks.
The success of most malware directed at organizations depends on an opportunistic model: sent out by bots in the hope that it infects as many organizations as possible and then executes its payload. In business terms, while relatively cheap, it represents a poor return on investment and is easier for conventional anti-malware solutions to block. On the other hand, malware that is targeted and guided by human controllers at a command and control point (C2) may well result in a bigger payoff if it manages to penetrate privileged accounts, but it is expensive and time-consuming for criminal gangs to operate.
Imagine if automated malware attacks were to benefit from embedded algorithms that have learned how to navigate to where they can do the most damage; this would deliver scale and greater profitability to the criminal gangs. Organizations are facing malware that learns how to hide and perform non-suspicious actions while silently exfiltrating critical data without human control.
AI-powered malware will change tactics once inside an organization. It could, for example, automatically switch to lateral movement if it finds its path blocked. The malware could also sit undetected, learn from regular data flows what is normal, and emulate this pattern accordingly. It could learn which devices the infected machine communicates with, its ports and protocols, and the user accounts which access it. All done without the current need for communication back to C2 servers – thus further protecting the malware from discovery.
It is access to user accounts that should worry organizations – particularly privileged accounts. Digital transformation has led to an increase in the number of privileged accounts in companies, and attackers are targeting those directly. The use of intelligent agents will make it easier for them to discover privileged accounts, such as those accessed via a corporate endpoint. At the same time, malware will learn the best times and situations in which to upload stolen data to C2 servers by blending into legitimate high-bandwidth operations such as videoconferencing or legitimate file uploads. This may not be happening yet, but all of it is feasible given the technical resources that state-sponsored cyber attackers and cash-rich criminal gangs have access to.
To prove what’s possible, IBM research scientists created a proof-of-concept AI-powered malware called DeepLocker. The malware contained hidden code to generate keys which could unlock malicious payloads if certain conditions were met. It was demonstrated at a Las Vegas technology conference last year, using a genuine webcam application with embedded code to deploy ransomware when the right person looked at the laptop webcam. The code was encrypted to conceal its payload and to prevent reverse engineering by traditional anti-malware applications.
IBM also said in its presentation that current defences are obsolete and new defences are needed. This may not be true. AI is not yet magic. As in the corporate world, much AI-assisted software benefits from the learning capabilities of its algorithms, which automate tasks that humans have previously performed. In the criminal ecosystem this includes directing malware towards privileged accounts. Therefore, it makes sense that if Privileged Access Management (PAM) does a good job of deflecting human-led attempts to hijack accounts, then it should do the same when confronted with the same techniques orchestrated by algorithms. Already the best PAM solutions are smart enough to monitor M2M communications and DevOps processes that need access to resources on the fly.
But we must not stop there. Future IAM and PAM solutions must be able to detect hijacked accounts or erroneous data flows in real time and shut them down so that even AI cannot do its work. Despite the sophistication that AI will bring to malware, its target will remain the same in many attacks: business-critical data that is accessed by privileged account users, which will include third parties and machines. It is one more way in which identity – of people, data and machines – is taking centre stage in securing the digital organizations of the future. For more on KuppingerCole’s research into identity and the digital enterprise please see our most recent reports.
Car buyers gathering at the Frankfurt Motor Show last month will have witnessed the usual glitz as car makers went into overdrive launching new models, including of course many new electric vehicles reflecting big change in the industry. Behind the glamour of the show, the world’s biggest car makers are heavily investing in new technologies to remain competitive, including Artificial Intelligence (AI) and Machine Learning. While perfecting algorithms for self-driving cars is a longer-term goal and grabs the headlines, much is being done with AI to improve the design, manufacture and marketing of cars.
In an industry characterized by high costs and low margins, car makers (OEMs) are turning to AI to improve efficiencies, improve quality control and understand their markets and buyers better. Five years ago, Volkswagen opened its Data:Lab in Munich. It is now the company’s main research base for AI, with around 80 IT specialists, data scientists, programmers, physicists and mathematicians researching and developing applications in machine learning and AI. Volkswagen goes as far as to say that AI will fundamentally change the company’s value chain, as it will now begin, not end, with the production of the vehicle.
An area of focus is applying AI to market research and marketing to pre-empt changes in demand and consumer choice outside of OEMs’ traditional seven-year model cycle. Any manufacturer that can be ahead of the curve in marketing will have a significant advantage. Volkswagen is using AI to create precise market forecasts containing a multitude of variables, including economic development, household income, customer preferences, model availability and price.
With this kind of insight, it is possible that the company could configure model choice (specs, optional extras, engine sizes etc) and order production to meet buyer preferences on a smaller regional or even hyper local level. For example, a Golf special edition that appeals to specific buyers in London or an Amarok truck configured for the needs of farmers in the Rhineland.
Volkswagen’s German rivals are also scaling investment in AI technologies and are keen to be seen doing so with positive statements on their websites, and active recruitment drives to get the best developer talent. All three of Germany’s OEMs are aware that they need to be technological leaders in IT as much as engineering as cars become more connected and software driven.
At its factory in Stuttgart, Daimler has created a knowledge base that stores all the company’s existing vehicle designs, which any new engineer can tap into. More than this, the algorithm has been trained to suggest that a new engineer contact a more experienced colleague for human advice in certain circumstances. A good example of how AI can be trained to interact with human workers.
At the final inspection area at BMW’s Dingolfing plant, an AI application compares the vehicle order data with a live image of the model designation of the newly produced car. If the live image and order data don’t correspond – for example, if a designation is missing – the final inspection team receives a notification. This frees up human employees to work elsewhere. Algorithms are also being taught to tell the difference between a hairline crack in sheet metal and simple dust particles, something that is beyond the scope of human eyesight. Meanwhile, in paint shops, AI and analytics applications offer the potential to detect sources of error at much earlier stages of the process. If no dust attaches to the car body before painting in the first place, none needs to be polished off later.
While these examples of AI applications may lack the sci-fi appeal of self-driving cars, they are presently more important to the future survival of the car industry, not just in Germany but across the globe. AI is being used effectively to meet the three fundamental challenges of the industry’s survival: improved quality, cost and waste reduction, and customer demands.
If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.
A visit to HP Labs offices in central Bristol, about 120 miles west of London, was a chance to catch up with the hardware part of the former Hewlett Packard conglomerate, which split in two four years ago. The split also meant that there are now two HP Labs, one for the HP business and the other for Hewlett Packard Enterprise.
Perhaps to position itself as a serious B2B vendor, HP told us it is an “endpoint infrastructure company”, which kind of works, but its US, Chinese and Taiwanese competitors could conceivably claim the same.
To counter this, HP is tapping into the shared legacy of the research and development focus that the original Mr. Hewlett and Mr. Packard founded in that famous garage in Palo Alto – hence the trip to HP Labs. A single floor of an office block in Bristol lacks some of the wow factor of the campus feel of the old, bigger, joined-up HP Labs; on the other hand, the ideas that came out of those Labs did not always see practical application.
The focus then was on HP’s security credentials for innovations that have found their way into products. In a series of demonstrations of its Sure Suite technologies, HP made a case for why its line of laptops and PCs are better equipped to withstand attacks on the endpoint.
Sure Start protects the BIOS from attack each time the PC or laptop is booted on a network or standalone and automatically validates the integrity of the BIOS code. Once the PC is operational, runtime intrusion detection monitors memory. In the case of an attack, the PC can self-heal using an isolated “golden copy” of the BIOS. The live demo on the day showed a laptop that had been locked by ransomware being brought back to operable life. Sure Recover is a tool squarely aimed at the SMB market allowing end users to recover their operating system even after it has been wiped out by an attack, without recourse to IT. It uses HP’s chip-based Endpoint Security Controller (ESC) to image the latest OS using a wired network connection.
A new announcement on the day was HP Sure Admin, which extends the automation of secure endpoint management into the corporate domain and builds on the user-friendly technologies of Sure Start and Sure Recover to reduce the attack surface created by remote management tools. Traditionally, BIOS updates on endpoint PCs have been administered through passwords, which are at risk of theft or interception. Sure Admin uses public/private key cryptography to authorise remote BIOS changes. For local access, Sure Admin runs as an app on a smartphone secured by a private key, which then generates a one-time PIN for an admin to access an endpoint that needs maintenance or recovery. Also demonstrated was HP Sure Sense, which uses AI to recognise unknown malware to mitigate zero-day attacks, with a detection time of less than 20 milliseconds claimed by HP.
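HP has not published the details of Sure Admin's PIN scheme here, so as a generic illustration of how a one-time PIN can be derived, the sketch below uses the standard time-based OTP construction from RFC 6238 with Python's standard library. It is a stand-in for the idea, not HP's implementation: both sides hold a shared secret and independently derive the same short-lived PIN from the current time window.

```python
import hashlib
import hmac
import struct
import time

def one_time_pin(secret, interval=30, digits=6, now=None):
    """Derive a time-based one-time PIN from a shared secret
    (the standard TOTP construction from RFC 6238)."""
    counter = int((now if now is not None else time.time()) // interval)
    # HMAC over the big-endian 8-byte time-step counter.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset chosen by the last nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10**digits:0{digits}d}"

# RFC 6238 test vector: secret "12345678901234567890", time 59s, 8 digits.
print(one_time_pin(b"12345678901234567890", now=59, digits=8))  # → 94287082
```

The PIN is only valid within its 30-second window, so even an intercepted PIN is useless moments later – the same property that makes one-time PINs attractive for authorising local BIOS access.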
Any kind of demo must be viewed objectively, and these technologies will only prove their mettle in the wild. The other issue is how well any of these endpoints so equipped would embed into an existing corporate environment. Sure Admin needs a serious examination of how it can be integrated into the wider enterprise IT, access management and security portfolio.
This is not to disparage the progress HP has made, and my feeling at the end of the day was that HP is using its Labs for real-world security applications. But these are currently more efficient iterations of existing technologies rather than great leaps forward. However, endpoint protection is essential for business environments that are more open, extended and connected than before. HP’s recent acquisition of endpoint security start-up Bromium will no doubt feed into its plans to improve these technologies further.
Artificial intelligence (AI) and machine learning tools are already disrupting other professions. Journalists are concerned about automation being used to produce basic news and weather reports. Retail staff, financial workers and some healthcare staff are also in danger, according to US public policy research organization, Brookings.
However, it may come as a surprise to learn that Brookings also reports that lawyers have a 38% chance of being replaced by AI services soon. AI is already being used to conduct paralegal work: due diligence, basic research and billing services. A growing number of AI-based law platforms are available to assist in contract work, case research and other time-consuming but important back office legal functions. These platforms include LawGeex, RAVN and the IBM Watson-based ROSS Intelligence.
While these may threaten lower-end legal positions, they would also free up lawyers to spend more time analyzing results, thinking, and advising their clients with deeper research to hand. Jobs may well be added as law firms seek to hire AI specialists to develop in-house applications.
But what about adding AI into the criminal justice system itself? This is where the picture becomes more complicated and raises ethical questions. There are those who advocate AI to select potential jurors. They argue that AI could gather data about jurors, including accident history, whether they have served before and the verdict of those trials, and, perhaps more controversially, a juror’s political affiliations. AI could also be used to analyze facial reactions and body language indicating how a potential juror feels about an issue, demonstrating a positive or negative bias. Proponents of AI in jury selection say it could optimize this process, facilitating greater fairness.
Others are worried that rushing into such usage could have the opposite effect. Song Richardson, Dean of the University of California-Irvine School of Law, says that people often view AI and algorithms as being objective without considering the origins of the data being used in the machine-learning process. “Biased data is going to lead to biased AI. When training people for the legal profession, we need to help future lawyers and judges understand how AI works and its implications in our field,” she told Forbes magazine.
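Richardson's point is easy to demonstrate with a hypothetical toy example. Suppose a model is fitted to historical prosecution records in which district "A" was simply policed more heavily; even a trivially simple model faithfully "learns" the skew in the labels. All data and names below are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical historical records: (district, was_prosecuted).
# The labels reflect past over-policing of district "A", not actual guilt.
history = ([("A", True)] * 80 + [("A", False)] * 20
         + [("B", True)] * 20 + [("B", False)] * 80)

def train(records):
    """'Learn' the majority outcome per group -- a stand-in for any
    model fitted to these labels."""
    outcomes = defaultdict(Counter)
    for group, label in records:
        outcomes[group][label] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(history)
print(model)  # {'A': True, 'B': False} -- the bias in the data is now the model
```

A more sophisticated algorithm would not fix this: whatever pattern is in the training labels, biased or not, is exactly what the model is rewarded for reproducing.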
A good example would be autonomous vehicles. Where does the legal blame lie for an accident? The driver, the car company, the software vendor or another third party? These are questions that are best answered by human legal experts who can understand the impact of AI and IoT on our changing society.
Perhaps a good way to illustrate the difference between human thinking and AI is that AI usually wins at the game of Go because, while it plays according to the formal rules, it does so in a way no human would ever choose.
If AI oversaw justice it might very well “play by the rules” also, but this would involve a strict interpretation of the law in every case, with no room for the nuances and consideration that experienced human lawyers and judges possess. Our jails may fill up very quickly!
Assessing guilt or innocence, cause and motive in criminal cases needs empathy and instinct as well as experience – something that only humans can provide. At the same time, it is not unknown for skilled lawyers to get an acquittal for guilty parties due to their own charisma, theatrics and the resources available to them. Greater involvement of AI could potentially lead to a more fact-based and logical criminal justice system, but it’s unlikely robots will take the place of prosecution or defence lawyers in a court room. At some point, AI may well be used in court, but its reasoning would still have to be weighed and validated with a tool like IBM Watson OpenScale to confirm its results are sound.
For the foreseeable future, AI in the legal environment is best used to enhance research, and even then, we should not trust it blindly, but understand what happens, whether results are valid and, as far as possible, how they are achieved.
The wider ethical debate around AI in law should not prevent us from using it right now in those areas where it will bring immediate benefit and open up new legal services and applications. Today, AI could benefit those seeking legal help. Time-saving AI-based research tools will drive down the cost of legal services, making them accessible to those on lower incomes. It is not hard to envisage AI-driven, cloud-based legal services that provide advice to consumers without any human involvement, either from startups or as add-ons to traditional legal firms.
For now, the impact of AI on the legal profession is undeniably positive if it reduces costs and frees up lawyers to do more thinking and communicating with clients. And with further development it may soon play a more high-level role in legal environments, in tandem with human legal experts.
It’s not been a good couple of weeks for Apple. The company that likes to brand itself as superior to rivals in its approach to security has been found wanting. Early in August it was forced to admit that contractors had been listening in to conversations on its Siri network. It has now temporarily stopped the practice, claiming that only “snippets” of conversations were captured to improve the service.
At the end of last week, a much more serious security and privacy threat was made public. Google researchers revealed that hackers had been putting monitoring implants into iPhones for years, affecting thousands of users per week. The hacking operation, which started in 2017, used several websites to deliver malware onto iPhones. Users did not have to interact with the site: just visiting was enough. From there, criminals were able to siphon passwords and chat histories from WhatsApp, iMessage and Telegram – bypassing the encryption designed to protect the integrity of these messaging apps. According to the researchers, attackers used five different exploits across 14 pieces of malware.
This is undoubtedly a major incident. It strongly undermines Apple’s reputation for securing users’ devices and the (personal) data residing on them. In an age where all tech companies are facing criticism for misuse of customer data, it comes as a body blow to Apple’s security management expertise – an area in which it has consistently portrayed itself as superior.
What is worse is the revelation that Apple was made aware of the flaw in the iPhone in February this year. Apple did release a patch for the flaw, but why did it not make a much more urgent public announcement back in February to warn all iPhone users to update their iOS software? This is Apple’s real failure: trying to make everyone believe it has the best security controls but not delivering. It’s not the first time that Apple’s culture of secrecy has undermined security, as a previous blog by Martin Kuppinger illustrates.
Not surprisingly, others were making hay at Apple’s expense on social media last week. “This is a huge find by Google’s team,” said Alex Stamos, Facebook’s former security chief and now a researcher at Stanford University, while Marcus Hutchins, a security researcher who helped stop the WannaCry attack in 2017, wrote, “Maybe I’m missing something, but it feels like Apple should have found this themselves.”
Apple did not fail to patch, but it failed to act swiftly and communicate the flaw adequately, and now it finds itself on the back foot. Was all this the result of hubris or carelessness? Either way it’s not a good look as it gears up to launch the iPhone 11 and promote its new credit card as a secure alternative to conventional bank cards. As ever, the best advice for users of iPhones or any device is to check regularly that you have the most up-to-date operating system installed.