Artificial Intelligence has become a nearly ubiquitous term. A Deloitte survey revealed that most companies – a staggering 90 percent of those approached by the research and consultancy firm – consider cognitive technologies to be of crucial strategic importance, and over 80 percent of those were either already using them on some level or planning to implement them in the near future.
That is hardly surprising considering the dramatic efficiency savings that adoption can bring. According to Bill Eggers, executive director of Deloitte’s Center for Government Insights, in the U.S. alone, federal employees spend about 4.3 billion hours per year on mundane tasks such as recording and handling information. He estimates that currently available AI and robotic process automation could free up about 1.3 billion of those hours by automating such tasks, enabling quantum leaps in productivity as AI allows institutions to anticipate problems rather than merely react to them.
Yet in spite of its pervasiveness, and the fact that it has been around for many decades, AI still falls, paradoxically, into the “emerging technology” category – one that constantly presents new facets and developments that bring us ever closer to the worlds imagined by the likes of Isaac Asimov.
As such technologies mature, sub-categories also emerge that address their more specific applications, to the point where it makes as little sense to refer to AI as a single entity as it would to bundle all computers into one homogeneous category.
So we turned to industry stakeholders for help in defining three of these major branches of AI – assisted, augmented, and autonomous intelligence – and for practical examples that help us understand their current (and future) applications.
Assisted Intelligence
Assisted Intelligence is a basic level of AI that primarily consists of automating repetitive and mundane tasks and procedures. This – at least in theory – frees up humans to perform more complex and creative assignments, delivering efficiency savings by allowing employees to focus on high-value tasks that are not so easily automated.
To work well, assisted intelligence requires clearly defined input and output parameters. It is most useful in applications such as monitoring, where it produces alerts and directions that can then be checked by an actual person. Typical applications for this type of AI include GPS navigation programs and manufacturing robots.
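To make this concrete, here is a minimal sketch of an assisted-intelligence monitor in Python. The sensor names and thresholds are hypothetical, invented for illustration – the point is the pattern: clearly defined inputs, clearly defined outputs, and a human making the final call.

```python
# A rules-based monitor: clearly defined inputs (sensor readings) and
# outputs (alerts routed to a human). Sensor names and thresholds are
# hypothetical, for illustration only.
THRESHOLDS = {"temperature_c": 90.0, "vibration_mm_s": 7.0}

def check_reading(sensor, value):
    """Return an alert message if a reading exceeds its threshold, else None."""
    limit = THRESHOLDS.get(sensor)
    if limit is not None and value > limit:
        return f"ALERT: {sensor} reading {value} exceeds limit {limit}"
    return None  # within normal range; no human attention needed

# Every alert goes to a person for review; the system never acts on its own.
for sensor, value in [("temperature_c", 95.2), ("vibration_mm_s", 3.1)]:
    alert = check_reading(sensor, value)
    if alert:
        print(alert)  # in practice: push to an operator's dashboard
```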
Augmented Intelligence
Augmented intelligence, as the name suggests, augments and supports – rather than replaces – human intelligence, yet it is more complex and versatile than the basic assisted AI we described above.
This is a symbiotic relationship in which humans and machines ideally learn from one another and work collaboratively, layering machine learning over existing human systems to accelerate those human-led processes and improve their accuracy. In other words, it enables us, at least in theory, to make better and faster decisions than we would otherwise.
Among the techniques included under this umbrella are machine learning, natural language processing, image recognition, and neural networks, with typical use cases encompassing healthcare, public safety, and fraud detection.
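As a hedged sketch of that feedback loop, consider a simplified fraud-review workflow: a model scores each transaction, a human analyst makes the final call, and the analyst's decisions are kept as new training data. Everything here – the scoring rule, the amounts, the function names – is hypothetical, standing in for a real trained model.

```python
# Augmented intelligence sketch: the machine accelerates triage,
# the human retains the decision, and the human's decisions feed
# back into future training. All values are illustrative.

def score_transaction(tx):
    """Placeholder risk score in [0, 1]; a real system would use a trained model."""
    return min(1.0, tx["amount"] / 10_000)

labeled_examples = []  # feedback loop: human decisions become training data

def review(tx, analyst_decides):
    risk = score_transaction(tx)          # machine: fast, consistent scoring
    decision = analyst_decides(tx, risk)  # human: final judgment
    labeled_examples.append((tx, decision))
    return decision

# Example: an analyst confirms a high-risk transaction as fraudulent.
is_fraud = review({"amount": 9_500}, lambda tx, risk: risk > 0.9)
print(is_fraud, len(labeled_examples))  # True 1
```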
Autonomous Intelligence
Autonomous intelligence allows machines, bots, and systems to act independently of human intervention. This is the most complex form of AI, not only in terms of the technology itself, its implementation, and its effectiveness, but also in terms of the moral implications of effectively handing over decision-making to machines. What accountability frameworks are needed if we are indeed to pursue that path? It is, quite simply, the fascinating and scary realm of science fiction, except that reality is rapidly catching up, and it remains to be seen whether the dystopian visions presented in the Matrix or Terminator franchises will come to life in some form.
“What happens when humans cannot establish all the constraints, boundaries, and limits, because of our lack of vision and imperfections as human beings?” asks Lucas Werthein, Co-founder and Head of Technology at Design Innovation firm Cactus. “Machines can go through all potential scenarios in seconds, but we as humans cannot necessarily create boundaries for things we don’t know,” he explains, citing a fascinating study that demonstrated how AI can adapt in ways that are utterly baffling to humans.
An AI Spectrum?
These definitions essentially help us envisage the different levels of interaction between humans and machines. They answer the question of how much autonomy and agency AI does (or should) have.
Yet although defining categories is helpful on some levels, the lines between them are not usually well defined, nor do they remain static. It is perhaps more appropriate, therefore, to think of them as a spectrum or continuum rather than attempting to pigeonhole use cases into one category or another.
Josh Browning, machine-learning architect at AMP Robotics, agrees that it may be more accurate to view these categories in terms of progression. “AI often starts in the simplest phases (assisted) and progresses to a fully autonomous phase. This is more of a spectrum, rather than individual examples that fit into each category. For example, consider spam filtering. The initial application will be simple and rules-based: filter emails as spam if they have all capitalized titles, many misspelled words, etc. As data is collected, it moves into the range of augmented intelligence: the application can make reasonable decisions on its own, but requires human feedback to continue to learn. Once it has achieved a certain level of performance, it may be moved into the final category of being fully autonomous,” he concludes.
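Browning's progression can be roughly sketched in code. The rules, thresholds, and function names below are hypothetical illustrations, not AMP Robotics' implementation:

```python
# Stage 1 – assisted: simple hand-written rules.
def rules_based_is_spam(subject, body):
    return subject.isupper() or body.count("!") > 10

# Stage 2 – augmented: a learned score, with a human reviewing borderline
# cases; the human's labels are collected so the model keeps learning.
training_data = []

def augmented_is_spam(score, ask_human):
    if 0.4 < score < 0.6:               # model is unsure
        label = ask_human()             # human feedback drives learning
        training_data.append((score, label))
        return label
    return score >= 0.5

# Stage 3 – autonomous: once measured accuracy clears a bar,
# the model decides entirely on its own.
AUTONOMY_THRESHOLD = 0.99

def autonomous_is_spam(score, measured_accuracy):
    assert measured_accuracy >= AUTONOMY_THRESHOLD, "not ready for full autonomy"
    return score >= 0.5
```

The hand-off is the point of the sketch: rules give way to a learned score, human feedback narrows the model's uncertainty, and full autonomy is granted only once performance clears an agreed bar.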
The usefulness of establishing a common lexicon for technologies like AI is that it enables more constructive conversations about them and perhaps leaves us better equipped to deal with the hurdles and pitfalls along the way.
As Mike Nager, Business Development Manager at Festo Didactic, says, the path forward is not likely to be hindered by technological progress, but rather by how well society accepts further non-human interactions and how well regulations and laws keep up – allowing beneficial technologies to flourish while safeguarding human life and privacy.
And it is that human touch that we must not lose sight of, according to Benjamin Cox, Director of Product Marketing at H2O.ai.
“By keeping humans in the loop, companies can better determine the level of automation and augmentation they need and control the ultimate effect of AI on their workforce. As a result, enterprises can massively mitigate their risk and develop a deeper understanding of what types of situations may be the most challenging for their AI deployments and machine learning applications,” he concludes.