Science fiction may have made you more familiar with Artificial General Intelligence (AGI), which is different from the “AI” that is being implemented in business use cases today.
Artificial General Intelligence is the ability of a machine to react intuitively and intelligently, in a human way, to situations it has not been trained to handle.
An example of AGI would be a system that experiments with its surroundings to gain new insight and applies it correctly to a completely different situation.
You should know that this does not exist yet.
In fact, there will probably have to be a huge change to the way AI is currently designed before we reach a human-like general intelligence for machines.
Today, we work with Narrow AI.
The name sums it up; instead of being intelligent enough to figure out what to do in any situation, Narrow AI is a machine or system that can (semi-)autonomously complete a task that it has been trained to do.
And only that.
It can continue to learn and get better at doing that one task it has been trained to do, but it isn’t going to go rogue and seduce your husband or wife.
Read this blog for more information on the differences between Narrow AI and AGI.
Narrow AI (usually just called AI) can be focused on many different types of tasks.
The common ones include recommendation engines, anomaly detection, image classification, natural language processing (NLP) and much more.
And actually, AI is an umbrella term that includes many different disciplines and technologies, including NLP, Machine Learning (ML), Machine Reasoning, Robotics, and Computer Vision.
Each one of these disciplines is its own research field, but they are often brought together to work in concert.
If you’re starting to wonder why all of these very different technologies are referred to as AI, you’re not alone.
AI is a far too ambiguous term, and we’ll get into that while debunking some of the myths around AI.
A side effect of being able to call nearly everything AI is that it gives the impression that AI can do anything.
Take for example those humanoid robots that can see you, listen to you, and think of things to say all on their own.
Narrow AI functionalities can be layered one on top of the other to create the appearance of “intelligence”.
Combining robotics, image classification, natural language understanding and generation, and computer vision, for example, will produce something that sees, hears, and responds to you.
But the basic difference between Artificial General Intelligence and systems like these is that the vision, hearing, and generating language components are doing only what they have been taught and what they are learning within each domain.
This is very impressive, but there are still many limitations to what AI can accomplish.
John McCarthy, a major influence in computer science, described the phenomenon very well: as soon as something works, no one calls it AI anymore.
And this is true.
As any AI capability becomes more mature, it is more often referred to by a more specific name, and loses the AI mysticism.
“Image classification” does not sound nearly as impressive as AI, but there you go.
There is a lot of buzz about the new levels of efficiency and cost savings that AI will bring to the enterprise itself.
This also stirs up fears that AI will replace humans in the workplace, and that large numbers of people will be without jobs because a system will be able to do it faster and more reliably.
There is a grain of truth to this, in that people’s jobs will change because of the development of AI.
What is more likely is that certain tasks will be supported by AI systems, automating the redundant aspects and delivering predictive analysis on those tasks to enable the people in those jobs to make better decisions.
But making a compassionate transition to using AI to support your employees and operations should be considered critically before adopting any AI technology, especially to ensure your employees are trained for any new skills they may need and that they understand how best to use a new AI tool.
This decision has to be divorced from the hype, and based on concrete business benefits that consider any human cost.
Although there is very little concrete regulation for AI development and use, there are several regulating bodies that are tackling this challenge.
Additionally, there are multiple frameworks that are publicly available to help govern the use and development of AI.
Even though developing regulation is a notoriously slow process, there is measurable progress on the AI topic.
Most cybersecurity products collect huge amounts of data, which can swamp the human security analyst.
Their time is better used to address active threats, not trudge through endless data.
For security analysts, AI for cybersecurity means being able to automate the most tedious parts of their job, augmenting their ability to do the more creative and productive tasks.
AI can be used to detect abnormal patterns and filter out false positive Indicators of Compromise (IOCs).
AI can also be used to help organize data before it comes to the security analyst, especially to deal with unstructured data or data in real time.
This can make SIEM results more useful by ranking alerts by risk scores or providing mitigation recommendations.
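To make this concrete, here is a minimal sketch of how alerts might be ranked by a risk score, combining a static severity with a simple z-score anomaly term over event volume. All names (`Alert`, `risk_score`, `rank_alerts`) and the weighting scheme are hypothetical illustrations, not a real SIEM's API.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Alert:
    source: str
    severity: int         # 1 (low) .. 5 (critical)
    events_per_hour: int  # raw event volume behind the alert

def risk_score(alert, baseline_counts):
    """Combine static severity with how anomalous the event volume is
    compared to a historical baseline (simple z-score)."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1.0  # guard against zero spread
    z = (alert.events_per_hour - mu) / sigma
    # Severity dominates; the anomaly term boosts unusually high volumes.
    return alert.severity * 10 + max(z, 0.0)

def rank_alerts(alerts, baseline_counts):
    """Return alerts sorted so the analyst sees the riskiest first."""
    return sorted(alerts, key=lambda a: risk_score(a, baseline_counts),
                  reverse=True)
```

A real product would learn such weights from data rather than hard-code them, but the principle is the same: surface the alerts most worth an analyst's limited time.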
Cognitive technologies can assist in providing proactive protection against phishing attacks or generating intelligence reports from unstructured data like formal research or dark web communications.
“Explainability” serves many purposes for AI.
It is a form of accountability, of security, and communication.
The risk of more complex machine learning and neural networks is that they become too abstract, dealing with thousands of features to yield a recommendation.
These models are often called “black box” models, because all the user sees is the input data and a recommendation with no insight into why the recommendation is what it is.
Explainability can be “added” to models that are not inherently interpretable; such techniques are most developed for classification models.
For example, a classification model trained to identify trees in images labels an image of a bush as “not a tree”.
To explain why the model reached that decision, a mathematical technique like LIME can be applied that highlights the pixels that are most relevant to the decision, creating a “heat map” that visually represents what areas of the picture were most influential to the model when it made its decision.
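The core perturbation idea behind LIME-style explanations can be sketched in a few lines: randomly mask parts of the input, observe how the prediction changes, and attribute the change to the masked parts. This is a simplified illustration, not the actual LIME algorithm (which fits a local surrogate model); `predict` and `segments` are hypothetical stand-ins for a classifier and image regions.

```python
import random

def explain_by_perturbation(predict, segments, n_samples=200, seed=0):
    """Estimate each segment's influence on a classifier's score by
    randomly masking segments and measuring the prediction drop --
    the perturbation idea underlying LIME-style heat maps."""
    rng = random.Random(seed)
    influence = [0.0] * len(segments)
    counts = [1e-9] * len(segments)
    base = predict(segments)  # score with the full input
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in segments]
        perturbed = [s if keep else None for s, keep in zip(segments, mask)]
        drop = base - predict(perturbed)
        for i, keep in enumerate(mask):
            if not keep:  # attribute the drop to the masked segments
                influence[i] += drop
                counts[i] += 1
    # Average drop per segment: higher = more influential ("hotter" pixels).
    return [inf / c for inf, c in zip(influence, counts)]
```

In the tree example, the segments covering the trunk and canopy would receive the highest influence scores, which is exactly what the heat map visualizes.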
Explainability for different types of models is at varying stages of maturity, with development coming from both academia and the private sector.
Developing better explainability for machine learning models is a must for the AI industry.
Explainability is a key step towards accountability, governance, and communication with end users as well as business users.
AI is definitely not new, with origins in the 1950s.
But AI was launched back into the spotlight with advancements in cloud computing, which made the computations behind AI much more feasible for enterprise use.
Despite the abundance of computing power, there is a significant knowledge gap and lack of AI and machine learning experts.
This means that a market has opened up to deliver AI Service Clouds – “build-it-yourself” toolkits that make AI more accessible to different business users.
The vendors that deliver these types of tools include AWS, Cognino, Google, IBM, Kortical, Microsoft, Salesforce, and many others. There are, of course, countless other vendors that deliver specific AI solutions.
AI is able to support IAM by harnessing its ability to process large volumes of data and yield an actionable insight.
AI can be particularly helpful in automating repetitive processes and in detecting anomalies, which can be helpful to deliver adaptive authentication products, identity governance and administration (IGA) solutions, or privileged access management (PAM) solutions.
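Adaptive authentication is a good example of this pattern in IAM: contextual signals are turned into a risk score that decides whether to allow, challenge, or block a login. The signals, weights, and thresholds below are entirely hypothetical, a toy sketch of the decision logic rather than any vendor's implementation.

```python
def auth_risk(signals):
    """Toy adaptive-authentication risk score: each contextual
    signal observed at login time adds a fixed weight."""
    weights = {
        "new_device": 30,
        "impossible_travel": 50,
        "off_hours": 10,
        "failed_attempts": 20,
    }
    return sum(weights.get(s, 0) for s in signals)

def auth_decision(signals, step_up_threshold=40):
    """Allow, challenge with MFA, or block based on the risk score."""
    score = auth_risk(signals)
    if score >= 80:
        return "block"
    if score >= step_up_threshold:
        return "mfa_challenge"
    return "allow"
```

Production systems typically learn these risk models from behavioral data rather than using static weights, but the flow – score the context, then step up authentication only when the risk warrants it – is the same.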
AI is also often used in facial recognition and matching tasks which are used in identity verification solutions, in supporting digital identity schemes, and as part of biometric authentication.
Artificial Intelligence is seen as a key enabler of digital transformation – of the business, of societal structures such as cities, and of entire economies.
Its ability to transform large volumes of data into actionable insights, recommendations, or forecasts for nearly countless applications means that it can be a powerhouse for change.
Among its benefits in the enterprise are cost saving, the ability to redesign and automate processes, and the ability to generate data-driven insights quickly without specialized knowledge in data science, and of course the wide range of AI-supported products and services.
High-level support for innovation in this space exists to encourage digital transformation for companies, cities, and countries.
Calls for caution and regulation also come from these high-level bodies, producing uncertainty and stalling AI development.
These calls are justified, but must be accompanied by clear and reasonable regulation so that enterprises may proceed with experimenting with and implementing artificial intelligence applications.