Dr Atter comments on the Royal Academy iHuman report and its implications for human learning
The publication of the Royal Academy’s research paper, iHuman, represents a pivotal moment in the debate over the cluster of “AI” technologies and their implications for all our futures. By linking the development of AI-related technologies to striking advances in medical science, the authors highlight the paradox that the greatest challenge to our humanity may very well come from the technologies we find most beneficial.
The Public Debate
Thus far the debate has been polarised between techno-evangelists on the one hand and post-apocalyptic disaster theorists on the other. As a result, it has been confined to the margins of public discourse and public policy formulation. This is about to change.
The authors of iHuman provide a very useful summary of how these technologies are combining to create both positive effects and serious dangers, well beyond their potential seen in isolation. They write:
Opportunity and Danger
It is when AI algorithms become linked together with machine learning, neural networks, advanced robotics, new battery forms, material sciences, social media penetration, and so on, that the greatest opportunities arise but also the greatest dangers.
How we respond as humans will define our very existence. Such technologies have the power to liberate or enslave us. We have passed the moment where strict regulation is even possible. Many of these technologies are developed in small labs and tech start-ups, spread around the world, and super-fuelled with private finance. This provides a quite different context from prior advances, such as biotechnology, where developments were by and large more geographically concentrated and confined to larger institutions.
iHuman is a conscious effort to mature the debate and ensure a readily available balance between scientific input and broader perspectives from the humanities and social sciences. For example, the report highlights the dangers of reducing the mind to mere brain function (“neuro-essentialism”).
At the heart of their thesis is the idea that “AI” technologies will arrive on the wings of highly valued medical breakthroughs, which will transform the lives of disabled people through neural networks. Micro-devices even offer the prospect of replacing pharmaceuticals, with their uneven effects and threats of dependency.
The Ethical Landscape
The authors make important points about the ethics and social implications of these technologies. Here are just a few of their highlights:
The risk of a beneficial technology being transferred to quite different contexts, such as surveillance
The potential for weaponisation of AI, through the use of drones, exoskeletons, etc.
The implications not only for data privacy, but for our very sense of being a private person with an individual consciousness.
Banning these technologies might also remove our capability to provide countervailing measures or solutions, should a malevolent or negligent institution or country unleash harmful AI.
Neural networks represent the “Internet of Hackable Things”, highlighting the acute security risks arising from “wet wiring” devices to our brains.
Such technologies have the potential to create unbridgeable inequality between humans, with the wealthy literally being given the option to upgrade themselves (and their kids).
Unquestionably, the AI cluster, including neural networks, offers tremendous potential for humans to overcome suffering, improve health, reduce inequality, and generate increased productivity. Such research is vital and must be continued, if not reinforced.
However, our response must be to learn faster than AI, and to learn in ways that AI finds very difficult. If we are to preserve any semblance of our current understanding of what it is to be human, learning itself must become our vital existential challenge.
We must learn to be better humans, able to address complex ethical concerns and to organise society on fairer and more inclusive lines. AI technologies must be democratised, and government and institutional funding must be geared towards high-value, highly ethical technological outcomes.
As individuals, we must constantly advance our human learning, especially in areas that AI cannot replicate. This includes ethical reasoning, social intelligence, imagination, and creativity.
Dr Andrew Atter, Founder & CEO, Pivomo
Andrew is a tech entrepreneur, executive coach and researcher. His interests include entrepreneurial leadership, technology transformation and applied research methods.