Regulation has the uncomfortable task of limiting untapped potential. I was surprised when I recently received the advice to think of life like a box. “The walls of this box are all the rules you should follow. But inside the box, you have perfect freedom.” I was stunned at the irony of having complete freedom only to think inside the box. Those at the forefront of AI development and implementation face a similar irony: limiting projects with undefined potential.
Although Artificial General Intelligence – the ability of a machine to intuitively react to situations that it has not been trained to handle in an intelligent, human way – is still unrealized, narrow AI that enables applications to independently complete a specified task is becoming a more accepted addition to a business’ digital toolkit. Regulations that address AI are built on preexisting principles, primarily data privacy and protection against discrimination. They deal with the known risks that come with AI development. In 2018, biometric data was added to the European GDPR framework to require extra protection. In both the US and Europe, proposals are currently being discussed to monitor AI systems for algorithmic bias and govern facial recognition use by public and private actors. Before implementing any AI tool, companies should be familiar with the national laws for the region in which they operate.
These regulations have a limited scope, and in order to address the future unknown risks that AI development will pose, a handful of policy groups have published guidelines that attempt to set a model for responsible AI development.
The major bodies of work include:
- The Montreal Declaration for Responsible Development of AI from the University of Montreal and Fonds de Recherche du Quebec (published December 2018)
- Guidelines on Artificial Intelligence and Data Protection from the Council of Europe (published January 2019)
- Ethics Guidelines on Trustworthy AI from The EU Commission (published April 2019)
- The OECD Principles on AI from the OECD (published May 2019)
The principles developed by each body are largely similar. All of the guidelines address the need for developers and AI implementers to protect human autonomy, obey the rule of law, prevent harm, promote inclusive growth, maintain fairness, develop robust, prudent, and secure technology, and ensure transparency.
The single outstanding feature is that only one document provides measurable and immediately implementable action. The EU Commission included an assessment for developers and corporate AI implementers to conduct to ensure that AI applications become and remain trustworthy. The assessment is currently in a pilot phase and will be updated in January 2020 to reflect comments from businesses and developers. The other guidelines offer compatible principles but are general enough to allow any of the public, private, or individual stakeholders interacting with AI to deflect responsibility.
This collection of guidelines from the international community is not a set of legally binding restrictions, but rather of porous barriers that allow sufficiently cautious and responsible innovations to grow and expand as the trustworthiness of AI increases. The challenge in creating regulations for an intensely innovative industry is to build in flexibility and the ability to mitigate unknown risks without compromising creative license. These guidelines attempt to set an ethical example to follow, but it is essential to use tools like the EU Commission’s assessment, which establishes appropriate responsibility regardless of one’s status as developer, implementer, or user.
Alongside the caution from governing bodies comes a clear recognition that AI development can bring significant economic, social, and environmental growth. The US issued an executive order in February 2019 to prioritize AI R&D projects, while the EU takes a more cautiously optimistic approach: recognizing the opportunities, but prioritizing the building and maintenance of a uniform EU strategy for AI adoption.
If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.
Oracle OpenWorld 2019 wrapped up yesterday, and if there is a single word that can describe my impressions of it, it would be “different”. Immediately noticeable was the absence of the traditional Oracle Red spilling into the streets around the Moscone Center in San Francisco; the reason behind it is the new corporate design system called Redwood. You can already see its colors and patterns applied to the company’s website, but more importantly, it defines new UI controls for Oracle applications and cloud services.
Design, however, is far from Oracle’s biggest change. It appears that the company has finally reached the stage where a radical cultural shift is inevitable. To adapt to the latest market challenges and to extend its reach towards new customer demographics, Oracle needs to seriously reconsider many of its business practices, just like Microsoft did years ago. And looking at the announcements of this year’s OOW, the company is already making major strides in the right direction.
It’s an open secret that for years, Oracle has been struggling to position itself as one of the leading cloud service providers. Unfortunately, for a latecomer to this market, playing catch-up with more successful competitors is always a losing strategy. It took the company some time to realize that, and now Oracle is trying a different game: learning from others’ mistakes, understanding the challenges and requirements of modern enterprises, and in the end offering a lean, yet complete stack of cloud services that provide the highest level of performance, comprehensive security and compliance controls and, last but not least, intelligent automation for any business process.
The key concept in this vision for Oracle is “autonomy”. To eliminate human labor from cloud management is to eliminate human error, thus preventing the most common cause of data breaches. Last year, we saw the announcement of the self-patching and self-tuning Autonomous Database. This time, Autonomous Linux was presented – an operating system that can update itself (including kernel patches) without downtime. It seems that the company’s strategic vision is to make every service in its cloud autonomous in the same sense. Combined with the Generation 2 cloud infrastructure designed specifically to eliminate many network-based attack vectors, this adds weight to Oracle’s claim of having a cloud ready to run the most business-critical workloads.
Oracle also announced Oracle Data Safe, a cloud-based service that improves Oracle database security by identifying risky configurations, users, and sensitive data, allowing customers to closely monitor user activities and ensure data protection and compliance for their cloud databases. Oracle cloud databases now include a straightforward, easy-to-use, and free service that helps customers protect their sensitive data from security threats and compliance violations.
It is also worth noting that the company is finally starting to think “outside of the box” with regards to their business strategy as well; or rather outside of the “Oracle ecosystem” bubble. Strategic partnerships with Microsoft (to establish low-latency interconnections between Azure and Oracle Cloud datacenters) and VMware (to allow businesses lift and shift their entire VMware stacks to the cloud while maintaining full control over them, impossible in other public clouds) demonstrate this major paradigm shift in the company’s cloud roadmap.
Arguably even more groundbreaking is the introduction of the new Always Free tier for cloud services – which is exactly what it says on the tin: an opportunity for every developer, student, or even corporate IT worker to use Autonomous Databases, virtual machines, and other core cloud infrastructure services for an unlimited time. Of course, the offer is restricted by allocated resources, but all functional benefits are still there, and not just for testing. Hopefully, Oracle will soon start promoting these tools outside of Oracle events as well. Seen any APEX evangelists around recently?
Today, the German Federal Government announced its Blockchain Strategy. What might sound like a great thing falls short, for a number of reasons.
One is that it is late: it comes after the first hype, somewhere in the phase of disillusionment. This should have happened much earlier, specifically with the intent of gaining or keeping a leading position. Notably, it would be more important to foster innovation by supporting start-ups with simplified regulations and administration for this type of business, and a far better ecosystem for venture and growth finance.
A second objection: it is too much about technology, and not enough about use cases. Blockchain technology in itself is not what we should look at. We need to understand the specific benefits, as outlined in our brand-new report “Demystifying the Blockchain”. These include, for example, the need for a distributed (in contrast to central) ledger, immutability, sequenced data, and time-stamped data. Only then can blockchain technology deliver benefits by enabling better business solutions. Thus, this should be a strategy for fostering use cases (with some being listed in the document), not a certain technology.
The document also falls short when it comes to Blockchain ID. It focuses only on authentication, does not reflect the current state of Blockchain ID, and ignores the immense potential for fostering data protection and control over individual data, commonly referred to as Self-Sovereign Identity (SSI). And honestly: A German-only Blockchain ID stands in stark contrast to the concepts of Blockchain ID.
Talking about a state-powered Blockchain infrastructure appears strange to me. There are many variants of Blockchains and other Distributed Ledgers with different consensus mechanisms and other specifics. One Blockchain doesn’t serve all use cases.
Also, when reading through the appendix and all the actions the German Government intends to take, it is mainly about research, which should have happened some years ago. To summarize: it is good that the Federal Government is funding new technologies, but this strategy is too late, insufficiently focused, and too technology-oriented. The focus should rather be on promoting the economy (and if blockchain technology helps, that's great), not on endorsing a particular set of technologies.
As cybercrime and concerns about cybercrime grow, tools for preventing and interdicting cybercrime, specifically for reducing online fraud, are proliferating in the marketplace. Many of these new tools bring real value: they do in fact make it harder for criminals to operate and thus reduce fraud.
Several categories of tools and services compose this security ecosystem. On the supply side there are various intelligence services. The forms of intelligence provided may include information about:
- Users: Users and associated credentials, credential and identity proofing results, user attributes, user history, behavioral biometrics, and user behavioral analysis. Output format is generally a numerical range.
- Devices: Device type, device fingerprint from Unified Endpoint Management (UEM) or Endpoint Mobility Management (EMM) solutions, device hygiene (operating system patch versions, anti-malware and/or UEM/EMM clients presence and versions, and Remote Access Trojan detection results), Mobile Network Operator carrier information (SIM, IMEI, etc), jailbreak/root status, and device reputation. Output format is usually a numerical range.
- Cyber Threat: IP and URL blacklisting status and mapped geo-location reputation, if available. STIX and TAXII are standards used for exchanging cyber threat intel. Besides these standards, many proprietary exchange formats exist as well.
- Bot and Malware Detection: Analysis of session and interaction characteristics to assess the likelihood of manipulation by bots or malware. Output format can be Boolean, or a numerical range of probabilities, or even text information about suspected malware or botnet attribution.
Risk-adaptive authentication and authorization systems are the primary consumers of these types of intelligence. Conceptually, risk-adaptive authentication and authorization functions can be standalone services or can be built into identity and web access management solutions, web portals, VPNs, banking apps, consumer apps, and many other kinds of applications.
Depending on the technical capabilities of the authentication and authorization systems, administrators can configure risk engines to evaluate one or more of these different kinds of intelligence sources in accordance with policies. For example, consider a banking application. In order for a high-value transaction (HVT) to be permitted, the bank requires a high assurance that the proper user is in possession of the proper registered credential, and that the requested transaction is intended by this user. To accomplish this, the bank’s administrators subscribe to multiple “feeds” of intelligence which can be processed by the bank’s authentication and authorization solutions at transaction time.
The results of a runtime risk analysis that yields ‘permit’ may be interpreted as “yes, there is a high probability that the proper user has authenticated using a high assurance credential from a low risk IP/location, the request is within previously noted behavioral parameters for this user, and the session does not appear to be influenced by malware or botnet activity.”
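The transaction-time evaluation described above can be sketched in a few lines. This is an illustrative sketch only: the field names, thresholds, and feed structure are hypothetical assumptions, not any specific vendor's API.

```python
# Hypothetical sketch of a risk engine evaluating multiple intelligence
# feeds at transaction time. All field names and thresholds are
# illustrative, not taken from any real product.

def evaluate_transaction(intel: dict) -> str:
    """Return 'permit' only when every policy condition holds."""
    conditions = [
        intel["user_assurance"] >= 80,        # high-assurance credential
        intel["device_assurance"] >= 80,      # healthy, registered device
        intel["ip_reputation"] != "blacklisted",
        intel["behavior_in_profile"],         # within noted behavioral parameters
        intel["bot_probability"] < 20,        # no sign of bot/malware influence
    ]
    return "permit" if all(conditions) else "deny"

# Example feed results for a legitimate-looking high-value transaction
feeds = {
    "user_assurance": 91,
    "device_assurance": 85,
    "ip_reputation": "clean",
    "behavior_in_profile": True,
    "bot_probability": 3,
}
print(evaluate_transaction(feeds))  # → permit
```

In a real deployment the policy would be configurable rather than hard-coded, but the shape of the decision is the same: all configured conditions must hold for the transaction to proceed.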
This is great for the user and for the enterprise. However, it can be difficult to implement by the administrators because there are few standards for representing the results of intelligence-gathering and risk analysis. The numerical ranges mentioned above vary from service to service. Some vendors provide scores from 0 to 99 or 999. Others range from -100 to 100. What do the ranges mean? How can the scores be normalized across vendors? Does a score of 75 from intel source A mean the same as 750 from intel source B?
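In the absence of a standard, administrators end up writing this normalization themselves. A minimal sketch, assuming each vendor at least documents its own score range, is a linear mapping onto a common scale:

```python
# Hypothetical sketch: mapping differently scaled vendor risk scores
# onto a common 0-99 range so one policy can compare them. Assumes
# scores are linear within each vendor's documented range, which real
# vendors do not guarantee.

def normalize(score: float, lo: float, hi: float) -> float:
    """Linearly map a vendor score from [lo, hi] onto [0, 99]."""
    return (score - lo) / (hi - lo) * 99

# Vendor A scores 0..99, vendor B scores 0..999, vendor C scores -100..100
print(normalize(75, 0, 99))      # 75.0
print(normalize(750, 0, 999))    # ~74.3 -- close to A's 75, but not identical
print(normalize(50, -100, 100))  # 74.25
```

Even this simple mapping shows the problem: a 75 from source A and a 750 from source B land near, but not on, the same normalized value, and nothing says the two vendors mean the same thing by those numbers in the first place.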
Perhaps there is room for a little more standardization. What if a few attribute name-value pairs were introduced and ranges limited to improve interoperability and to make it easier for policy authors to implement? Consider the following claims set, which could be translated into formats such as JWT, SAML, XACML, etc.:
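The following is a reconstruction of the claims set under discussion, built here with the Python standard library for readability; the concrete subject and values are illustrative. In practice the payload would be signed and serialized as a JWT (or expressed in SAML or XACML).

```python
# Reconstructed claims set for the example discussed in the text.
# The subject ID and score values are hypothetical placeholders.
import json
import time

now = int(time.time())
claims = {
    "iss": "IntelSource",        # issuer: the intelligence service
    "iat": now,                  # issued-at timestamp
    "exp": now + 300,            # short expiry
    "aud": "RiskEngine",         # audience: the consuming risk engine
    "sub": "user12345",          # subject: the user ID (hypothetical)
    "UserAssuranceLevel": 92,    # 0-99; high = likely the proper user
    "DeviceAssuranceLevel": 88,  # 0-99; high = trustworthy device
    "BotProbability": 2,         # 0-99; low = likely a human session
}
print(json.dumps(claims, indent=2))
```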
The above example* shows an Issuer of “IntelSource”, with timestamp and expiry, Audience of “RiskEngine”, Subject (user ID), and 3 additional attributes: “UserAssuranceLevel”, “DeviceAssuranceLevel”, and “BotProbability”. These new attributes are composites of the information types listed above for each category. Ranges for all 3 attributes are 0-99. In this example, the user looks legitimate. Low user and device assurance levels and/or high bot probability would make the transaction look like a fraud attempt.
KuppingerCole believes that standardization of a few intelligence attributes as well as normalization of values may help with implementation of risk-adaptive authentication and authorization services, thereby improving enterprise cybersecurity posture.
*Thanks to http://jwtbuilder.jamiekurtz.com/ for the JWT sample.
Europe’s consumers have been promised for some years now that strong customer authentication (SCA) was on its way. And the rules as to when this should be applied in e-commerce are being tightened. The aim is to better protect the customers of e-commerce services.
This sounds like a good development for us all, since we are all regular customers of online merchants or providers of online services. And if you look at the details of SCA, this impression is further reinforced. Logins with only username and password are theoretically a thing of the past, and the risk of fraud based on compromised credentials is considerably reduced.
The second Payment Services Directive (PSD2) requires multi-factor authentication (MFA) as the implementation of SCA for all remote payments over €30. MFA covers all approaches involving more than one factor; the most common variant is two-factor authentication (2FA), i.e. the use of two factors. There are three classes of factors: knowledge, possession, and biometrics – or “what you know”, “what you have”, “what you are”. For each factor, there might be various “means”, e.g. username and password for knowledge, a hard token or a phone for possession, fingerprint or iris for biometrics.
This results in improved protection for virtually all parties involved: e-commerce sites, payment processors, and customers can be more confident that transactions are legitimate and trustworthy.
A short look at the history: on November 16, 2015, the Council of the European Union passed PSD2 and gave Member States two years to transpose the Directive into their national laws and regulations. One would expect the broad and comprehensive implementation of SCA as part of PSD2 to be achieved in a timely manner, as the benefits are obvious. Of course, purchasing processes become a little more complex, because card data and account number or username and password for payment services are no longer enough for checkout. A second, different factor such as a fingerprint or an SMS to your own registered smartphone becomes necessary to increase security.
But shouldn’t we value this significantly increased security and the trust that goes with it? On the contrary, retailers, for example in Germany, are far from positive about stricter security standards. Every change and especially increase in complexity of the purchasing process is regarded as an obstacle, a potential point for dropping out of the customer journey.
And yet the development now emerging was not unexpected. As early as July 2019, the European Banking Authority (EBA) stated that some players were not sufficiently prepared for the PSD2, SCA and thus the required protection of consumers.
As a measure, the member states were offered an extension of the deadline. First and foremost, this was used extensively by the UK, but also by some other countries. In Germany, the new regulations for cashless payments will enter into force on 14 September 2019, almost four years after the European Directive PSD2 was approved. This means that only payment services that implement SCA and are therefore PSD2-compliant can be used for online purchases using credit cards.
And, you guessed it, just recently BaFin (Germany’s financial watchdog) announced in a press release that “As a temporary measure, payment service providers domiciled in Germany will still be allowed to execute credit card payments online without strong customer authentication after 14 September 2019”.
This does not only mean an immense delay of unclear duration; the otherwise rather homogeneous European market is now being chopped up into a multitude of different regulations and exceptions. The direct opposite of what was planned has been achieved, since it is unclear when and where which requirements will apply, in the European Union and in a global Internet. The obvious losers are the customers and online security and trust in reliable online purchases, at least for the short to mid-term.
Forward-looking organizations that value their customers and their security and trust are now able to implement security through SCA, even without BaFin checks. Those companies that benefit from this short delay should quickly seize the opportunity to meet PSD2 requirements and join that group. But those companies that, since the release of PSD2 and its requirements, have preferred to complain about more complex payment processes and lament EU regulations should reconsider their relationship to security and customer satisfaction (and thus to their customers). And they should rapidly start on a straight path to comprehensive PSD2 compliance. Because temporary measures and extended deadlines are exactly that: they are temporary, and they are deadlines.
To meet them successfully and in time, KuppingerCole Analysts can support organizations by providing expertise through our research in the areas of PSD2, SCA and MFA. And our Advisory Services are here to support you in identifying and working towards your individual requirements while maintaining user experience, meeting business requirements and achieving compliance. And our upcoming Digital Finance World event in Frankfurt next week is the place to be to learn from experts and exchange your thoughts with peers.
Thanks to an incessant desire to remove repetitive tasks from our to-do lists, researchers and companies are developing AI solutions for HR – namely to streamline recruiting, improve the employee experience, and assess performance.
AI driven HR management will look different in small businesses than in large companies and multinationals. There are different barriers that will have to be navigated, but also different priorities and opportunities that small businesses will have with AI.
Smaller budgets create price barriers to implementing an AI system, and likely psychological barriers as well, as the self-made CEO resists delegating tasks that would otherwise rely on his or her gut instinct. Access to a sufficient quantity of data to optimize algorithms is perhaps the largest challenge that small businesses will face when integrating AI into their HR practices. Companies typically gather data from their own databases, assembling a wide range of hiring documents, employee evaluations, and so on. Large companies have decades of stored HR data from thousands of employees and clearly have an advantage when it comes to gathering a large volume of usable data.
In terms of priorities, there is a huge divide between the value proposition that AI offers to large and small businesses. Big companies need to leverage time-saving aspects, especially to create a customized connection for thousands of employees. Routine communication, building employee engagement, and monitoring employee attrition are all aspects that minimize repetitive work and save time. In a sense, the goal is to give institutional bureaucracy a personal touch – like a small business has. A small company’s strengths come from its unique organizational culture, which is heavily dependent on natural, human interaction and well-designed teams. It is this “small company” feel that large companies try to imitate with AI customization features.
Of course, small companies also need to save time, especially because many do not have a dedicated HR department – in some cases, the department consists of one person dividing time between their main role and HR tasks. Their time is limited, so instead of implementing FAQ chatbots that make the organization feel small and accessible, small businesses should focus on another area which consumes too much time: recruiting and promoting visibility.
Finding qualified and competitive candidates is challenging when a firm’s circle of influence is geographically limited. A factor often contributing to success in small firms is the ability to hire for organizational fit, thus building tightly knit teams to deliver agile service. To increase the chances of attracting highly qualified candidates, small businesses should focus on using AI systems to support recruiting and hiring for organizational fit.
Small businesses are always under pressure to do more with less. When implementation costs are high and internal resources limited, small businesses can consider plug-and-play tools which rely on external datasets. Those open to experimentation can look for AI projects that overlap with their goals. For example, socially minded companies looking to attract more diverse applicants can participate in studies like AI-enabled refugee resettlement, placing people in areas where they will be most likely to find employment. A project like this could offset setup costs for implementing new technology and achieve wider HR goals that the company may have, like gaining employees with specific skills that are not common in the area, opening up more opportunities for innovation through diversity, gaining different language capabilities, and so on.
The risk of using AI technologies to support hiring has already played out in the case of Amazon. With the best intentions, the research team designing a hiring tool to select the highest qualified candidates based on their resumes noticed that their algorithm had learned to value traits that indicated the candidate was male, and penalize indicators that the candidate was female. The cause was embedded in their input data: the CVs and associated data given to the system to learn from were influenced by years of gendered hiring practices. The project was quietly put to rest. Luckily, this example was only a pilot and wasn’t the deciding factor in any applications, but it provides a valuable lesson to developers and adopters of recruitment AI: maintaining transparency throughout development and beyond will illuminate weaknesses over time. Robust checks by outside parties will be necessary, because one’s own biases are the most difficult to see.
AI can have a role to play in small business HR strategies just as much as the large corporations. But as with any strategy, the decision should be aimed at delivering clear advantages with a plan to mitigate any risks.
Earlier this week, Germany’s Federal Office for Information Security (better known as BSI) released its Digital Barometer 2019 (in German), a public survey of private German households measuring their opinions on and experience with matters of cybersecurity. Looking at the results, one cannot but admit that they do not look particularly inspiring and that they probably represent the average situation in any other developed country…
According to the study, every fourth respondent has been a victim of cybercrime at least once, most commonly through online shopping fraud, phishing attacks, or viruses. A further 30% of participants expressed strong concerns, believing that the risk of becoming such a victim is very high for them. Somewhat unsurprisingly, these concerns do not translate into consistent protection measures. Only 61% of surveyed users have an antivirus program installed, less than 40% update their computers regularly, and only 5% opt for such “advanced” technologies as a VPN.
I’m not entirely sure, by the way, how to interpret these results. Did BSI count users running Windows and thus having a very decent antivirus installed by default as protected? And what about iPhone owners who are not given any opportunity to secure their devices even if they wished to do so? Also, it’s quite amusing that the creators of the survey consider email encryption a useful cybersecurity measure. Even weirder is the inclusion of regular password change (a practice that has long been proven useless and is no longer recommended by NIST, for example) but a notable lack of any mentions of multi-factor authentication.
More worrying statistics, however, show that although the absolute majority of users have strong concerns about their online safety, very few actually consider themselves sufficiently informed about the latest developments in this area and even fewer actually implement those recommendations.
The results also clearly indicate that victims of cybercrime do not have much faith in the authorities and mostly deal with the consequences themselves or turn to friends and family. Less than a third of such crimes end up reported to the police, which means that we should take the official cybercrime statistics (which incidentally show that the rate of such crimes in Germany grew by 8% last year) with a grain of salt – the real number might be much higher.
The rest of the report talks about various measures the government, BSI, and police should develop to tackle the problem, but I don’t think that many users will see any notable changes in that regard: their online safety is still largely their own concern… So, what recommendations could KuppingerCole give them?
- Do not blindly spend money on security tools without understanding your risks and how those tools can (or cannot) mitigate them. Most home users do not really need another antivirus or firewall – the ones built into Windows are quite good already. However, corporate users require an efficient, multi-level security approach. Defining a tailored security portfolio therefore is an important challenge.
- In fact, investing in a reliable off-site backup solution would make much more sense: even if your device is compromised and your files are destroyed by ransomware, you could always restore them quickly. A good backup will also protect from many other risks and prevent you from losing an important document to simple negligence or a major natural disaster. And by the way: Dropbox and Google Drive are not backup solutions.
- Activating multi-factor authentication for your online services will protect you from the vast majority of hackers and fraudsters. It is crucial to do it consistently: not just for your online banking, but for email and social media platforms as well. By making your accounts much harder to hijack, you’re protecting not just yourself, but your online friends as well.
- Quite frankly, the best security tool is your own common sense. Checking a suspicious-looking email for obvious indicators of fraud, or asking your colleague whether they actually used an obscure website to send you an urgent document before opening it: in most cases, this simple vigilance will help you more than any antivirus or firewall.
For more complicated security-related questions, you can always talk to us!
It seems that there is simply no end to a long series of Facebook’s privacy blunders. This time, a security researcher has stumbled upon an unprotected server hosting several huge databases containing phone numbers of 419 million Facebook users from different countries. Judging by the screenshot included in an article by TechCrunch, this looks like another case of a misconfigured MongoDB server exposed to the Internet without any access controls. Each record in those databases contains a Facebook user’s unique ID that can be easily linked to an existing profile along with that user’s phone number. Some also contained additional data like name, gender, or location.
Facebook has denied that it has anything to do with those databases, and there is no reason to doubt that; the sheer negligence of the case rather points to a third party lacking even basic security competence, perhaps a former Facebook marketing partner. This is far from the first case of user data being harvested off Facebook by unscrupulous third parties, perhaps the biggest one being the notorious Cambridge Analytica scandal of early 2018. After that, Facebook disabled access to users’ phone numbers for all its partners, so the data leaked this time is perhaps not the most current.
Still, the huge number of affected users and the company’s apparent inability to find any traces of the perpetrators clearly indicate that Facebook hasn’t done nearly enough to protect its users’ privacy in recent times. Until further details emerge, we can only speculate about the leak itself. What we can do today, however, is try to figure out what users can do to protect themselves from this leak and to minimize the impact of similar future data breaches.
First of all, the most common advice “don’t give your phone number to Facebook and the likes” is obviously not particularly helpful. Many online messaging services (like WhatsApp or Telegram) use phone numbers as the primary user identities and simply won’t work without them. Others (like Google, Twitter or even your own bank) rely on phone numbers to perform two-factor authentication. Second, for hundreds of millions of people around the world, this advice comes too late – their numbers are already at the disposal of spammers, hackers, and other malicious actors. And those guys have a few lucrative opportunities to exploit them…
Besides the obvious use of these phone numbers for unsolicited advertising, they can be used to expose people who post under pseudonyms on social media and link those accounts to real identities – whether to suppress political dissent or simply to further improve online user tracking. Alas, the only sensible way to prevent this breach of privacy is to use a separate, dedicated phone number for your online services, which can be cumbersome and expensive (not to mention that it would have had to be done before the leak!)
Unfortunately, in some countries (including the USA), leaked phone numbers can also be used for SIM swap attacks, where a fraudster tricks a mobile operator into issuing a new SIM card with the same number, effectively taking full control over your “mobile identity”. With that card, they can pose as you in a phone call, intercept text messages with one-time passwords and thus easily take over any online service that relies on your mobile number as the means of authentication.
Can users do anything to prevent SIM swap attacks? Apparently not – at least until mobile operators are forced by governments to collaborate with police and banks on fighting this type of fraud. Again, the only sensible way to minimize their impact is to move away from phone-based (not-so-)strong authentication methods and adopt a more modern MFA solution: for example, invest in a FIDO2-based hardware key like a YubiKey, or at least switch to an authenticator app like Authy. And if your bank still offers no alternative to SMS one-time passwords, maybe today is the right time to switch to another bank.
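Why is an authenticator app better than SMS? Because the one-time codes are derived locally from a shared secret and the current time (the TOTP scheme of RFC 6238), so there is nothing for a SIM swapper to intercept. A minimal sketch of how such an app computes its codes, using only the Python standard library (the secret below is the published RFC 6238 test value, not a real credential):

```python
# Minimal RFC 6238 TOTP sketch - this is how authenticator apps derive codes.
import base64
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Compute a time-based one-time password (HOTP over a time counter)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", Unix time 59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # -> "287082"
```

Since the secret never leaves your device and the phone network plays no part in the exchange, stealing your number gains the attacker nothing.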
Remember, in the modern digital world, your phone number is the key to your online identity. Keep it secret, keep it safe!
The other day I found a notebook on a train. It was in a compartment on the seat of a first-class car. The compartment was empty, no more passengers to see, no luggage, nothing.
And no, it wasn't a laptop or tablet, it was a *notebook*. One made of paper, very pretty, with the name of a big consulting company printed on it. So, it was either a promotional gift or one that employees use. Two thirds of it had been used, which could be seen from the edge of the paper.
Everyone knows these notebooks, from simple A4 college pads with cheap ballpoint pens to expensive, leather-bound prestige models combined with an equally expensive writing device such as a fountain pen.
They serve as brain extensions in meetings, for planning and for conducting conversations. They contain details about their owner. And they contain sketches, meeting minutes, information about contact persons (--> GDPR), your business and the business of your partners. You can find sales figures, business plans, product developments, vulnerability analyses and architectural plans; the private mobile phone number of an important point of contact; even passwords to company infrastructure along with computer addresses. Confidential and critical data is thoughtlessly recorded on paper and then elaborated on the way home on the train, at home on the couch or the next day in the office.
Everyone worries about the loss of their computer or of the still ubiquitous, unencrypted USB stick. Rightly so. And today you also have to think about the cloud, because it bears a multitude of risks, which you have to address consistently, comprehensively and correctly (and yes, we can help you with that, but that's not the point here).
However, leakage of sensitive data does not necessarily require a nation state hacker or a violation of the confidentiality of credentials. Clumsiness, haste and forgetfulness can sometimes be enough. And that's why you should be particularly concerned about your paper notes.
You can encrypt a USB stick (yes, you can). You can encrypt whole computers, too – your corporate laptop should be encrypted anyway, and the encryption of your private computers and data carriers is your own personal responsibility. Most mobile phones and tablets today come with biometrics and built-in encryption.
But this notebook is still beautiful and has so many free pages – so on to the next meeting? Let me ask you instead: what is written in your current notebook? Would you have wanted me to read all of it on the train? Feeling a bad conscience now? Rightly so.
Paper cannot be encrypted. So there are only two main approaches to mitigating these risks: data avoidance and data deletion. Give the next promotional notebook to a child for drawing (--> avoidance). Destroy all the notebooks you still have (and possibly still use) with your home or office shredder (--> deletion). Whatever is still important can be scanned beforehand and stored safely – encrypted, of course.
I did not open this notebook and instead handed it over to the conductor and thus to the Deutsche Bahn "lost and found" service. But we can't expect everyone to handle it that way.
As a recommendation for the future: for all notes that go beyond your private poems (and perhaps, for your own protection, for those as well), use mechanisms that meet your company’s security requirements. Paper notebooks certainly don’t.
Artificial intelligence (AI) and machine learning tools are already disrupting other professions. Journalists are concerned about automation being used to produce basic news and weather reports. Retail staff, financial workers and some healthcare staff are also in danger, according to the US public policy research organization Brookings.
However, it may come as a surprise to learn that Brookings also reports that lawyers have a 38% chance of being replaced by AI services soon. AI is already being used to conduct paralegal work: due diligence, basic research and billing services. A growing number of AI-based legal platforms are available to assist with contract work, case research and other time-consuming but important back-office legal functions. These platforms include LawGeex, RAVN and ROSS Intelligence, which is based on IBM Watson.
While these may threaten lower-end legal positions, they would free up lawyers to spend more time analyzing results, thinking and advising their clients, with deeper research to hand. Jobs may well be added as law firms seek to hire AI specialists to develop in-house applications.
What about adding AI into the criminal justice system, however? This is where the picture becomes more complicated and raises ethical questions. There are those who advocate using AI to select potential jurors. They argue that AI could gather data about jurors, including their accident history, whether they have served before and the verdicts of those trials, and, perhaps more controversially, a juror’s political affiliations. AI could also be used to analyze facial reactions and body language that indicate how a potential juror feels about an issue, revealing a positive or negative bias. Proponents of AI in jury selection say it could optimize the process, facilitating greater fairness.
Others worry that rushing into such usage might have the opposite effect. Song Richardson, Dean of the University of California-Irvine School of Law, says that people often view AI and algorithms as objective without considering the origins of the data used in the machine-learning process. “Biased data is going to lead to biased AI. When training people for the legal profession, we need to help future lawyers and judges understand how AI works and its implications in our field,” she told Forbes magazine.
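Richardson’s point can be illustrated with a deliberately tiny sketch (all data here is invented for illustration): a perfectly “neutral” learning rule that simply reproduces historical outcome rates ends up encoding whatever bias those records contain.

```python
# Toy illustration with invented data: if group "B" was historically
# over-policed, its recorded positive rate is inflated, and a model that
# faithfully learns the historical rates inherits that bias.
from collections import defaultdict

# Hypothetical records: (group, recorded outcome), 100 people per group.
history = ([("A", 0)] * 90 + [("A", 1)] * 10 +
           [("B", 0)] * 60 + [("B", 1)] * 40)

def risk(records):
    """Learn each group's historical positive rate - a 'neutral' rule."""
    stats = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, outcome in records:
        stats[group][0] += outcome
        stats[group][1] += 1
    return {g: pos / total for g, (pos, total) in stats.items()}

scores = risk(history)
print(scores)                                 # {'A': 0.1, 'B': 0.4}

# A decision threshold then flags every member of one group and nobody
# from the other, even for otherwise identical individuals.
flagged = {g: s >= 0.25 for g, s in scores.items()}
print(flagged)                                # {'A': False, 'B': True}
```

The algorithm contains no explicit prejudice; the bias lives entirely in the training data, which is exactly why understanding the data’s origins matters as much as understanding the model.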
A good example is autonomous vehicles. Where does the legal blame lie for an accident? With the driver, the car company, the software vendor or another third party? These are questions that are best answered by human legal experts who understand the impact of AI and IoT on our changing society.
Perhaps a good way to illustrate the difference between human thinking and AI is the game of Go: AI usually wins because, while it plays according to the formal rules of Go, it does so in ways no human player would ever choose.
If AI oversaw justice, it might very well “play by the rules” too, but this might involve a strict interpretation of the law in every case, with no room for the nuance and consideration that experienced human lawyers and judges possess. Our jails might fill up very quickly!
Assessing guilt or innocence, cause and motive in criminal cases requires empathy and instinct as well as experience – something that only humans can provide. At the same time, it is not unknown for skilled lawyers to win acquittals for guilty parties thanks to their own charisma, theatrics and the resources available to them. Greater involvement of AI could potentially lead to a more fact-based and logical criminal justice system, but it is unlikely that robots will take the place of prosecution or defence lawyers in a courtroom. At some point, AI may well be used in court, but its reasoning would still have to be weighed and checked with a tool like IBM Watson OpenScale to validate its results.
For the foreseeable future, AI in the legal environment is best used to enhance research. Even then, we should not trust it blindly, but understand what it does, whether its results are valid and, as far as possible, how they are achieved.
The wider ethical debate around AI in law should not prevent us from using it right now in those areas where it will bring immediate benefit and open up new legal services and applications. Today, AI could already benefit those seeking legal help. Time-saving AI-based research tools will drive down the cost of legal services, making them accessible to those on lower incomes. It is not hard to envisage AI-driven, cloud-based legal services that provide advice to consumers without any human involvement, whether from startups or as add-ons to traditional legal firms.
For now, the impact of AI on the legal profession is undeniably positive if it reduces costs and frees up lawyers to do more thinking and communicating with clients. And with further development, it may soon play a more high-level role in legal environments, in tandem with human legal experts.