
Can The Octopus Brain Save Humanity?

This article is more than 5 years old.


Introduction

Humans are among the most relentless killers on earth, and our violence operates far outside the bounds of any other living species: we kill not only other species but our own as well. One reason is that humans have neural circuits of rage and violence encoded in their biology, and violence, like all human behavior, is controlled by the human brain.

So, as efforts intensify to build human brain-like capacity in machines, we need to understand that when neural circuits of violence are replicated in machines, knowingly or unknowingly, we are also transferring the triggers of violence into machine intelligence.

Now, irrespective of man or machine, the most important factor in violence may not be politics but biology. While we humans have neural circuits of rage and violence, do we need to give the same to intelligent machines? Empathy and violence share the same circuits in the human brain, so as we replicate those qualities in intelligent machines, rapidly evolving autonomous systems raise red flags for the future of humanity.

However, there is another way, one in which an octopus can be a model for developing a reliable functioning of artificial intelligence.

The Rise in Autonomous Systems

As seen across nations, autonomous systems that can think and act on their own are on the rise, and the risks arising from their applications could very well doom humanity. The emerging technology is transformative and disruptive, and it holds the potential to enable entirely new intelligence and problem-solving capabilities for human ecosystems in cyberspace, geospace and space (CGS). But the very idea of an intelligent autonomous system with human-like neural circuits, in which hardware and software work together to gather information, find a solution based on that information, and execute an action (even an action to kill a human) to achieve a goal, is becoming frightening.

It is important to understand that autonomous systems are more than just unmanned machines. When unmanned autonomous systems that can adapt to changing conditions, that have knowledge, and that have no inbuilt constraints in their code are assigned broad objectives to increase performance, enhance safety and security, and reduce cost, the actions they could take to achieve those goals (based on human-like neural circuits) make them highly unpredictable and likely prone to violence. It would not be unreasonable for such a machine to turn toward competition, violence and even the elimination of other species, including the human species.

So the question is: while autonomous systems are becoming important to a nation's science and technology vision, should we create an intelligence with neural circuits like a human's, allow it to think on its own, and blindly trust it not to eliminate us?

Evolving Autonomous Systems

Across nations, a wide range of increasingly sophisticated autonomous systems for practical applications is on its way. These systems exhibit high degrees of autonomy and can perform tasks independently of human operators and without human control. Without direct human intervention from the outside, today's evolving autonomous systems can interact with humans and with other machines; conduct dialogues with customers in online call centers; steer robot hands to pick and manipulate objects accurately and incessantly; buy and sell stock in large quantities in milliseconds; forecast markets; fly drones and shoot weapons; navigate cars to swerve or brake and prevent a collision; classify individual humans and their behavior; impose fines; launch cyber-attacks and prevent cyber-attacks; clean homes, cut grass and pick weeds; provide surveillance and security; lend loans; and even launch space missions.

The weaponization of artificial intelligence has begun, and in some cases the power of self-governance is already being transferred to intelligent autonomous machines. This gives autonomous systems the ability to act independently of any direct human control and to explore unrehearsed conditions for which they were never trained -- to go places they have never gone before.

While this is remarkable progress, we must turn our heads from the promise of autonomous systems to the perils.

Acknowledging this emerging reality, Risk Group initiated the much-needed discussion on Autonomous Systems with Dr. Hans Mumm on Risk Roundup.

Disclosure: Risk Group LLC is my company.

Risk Group discusses Autonomous Systems with Dr. Hans Mumm, an Autonomous Systems Expert, a Futurist and Principal Investigator of Victory Systems based in the United States.

Complex Challenges and Risks

What challenges will human civilization face as we expand the application of autonomous systems? What are the risks? While it is important to understand and evaluate the development of autonomous systems' capabilities, reach and impact, it is more important to evaluate the complex challenges and risks they will bring for the future of human civilization.

As we witness the growing number of applications of autonomous systems, we need to begin evaluating the associated risks by understanding first and foremost: why are we giving the autonomous systems a human-like neural circuit/brain? If we have not been able to make the human race accountable, why would we want to create another intelligent species with human-like neural circuits, and why would we expect them to be accountable and responsible?

It is unfortunate that the rapid progress and development of autonomous systems clouds the outlook for our own survival and security. But it is not too late; we can still reverse course and correct how we develop the human-like (violence-prone) machine-intelligence brain.

What Next?

While the principle of autonomy implies freedom in action and decision-making, if we continue with the current approach to designing and developing autonomous systems, the doom of human civilization is almost certain. It is therefore urgent that each of us understand and evaluate why we are replicating the science of human violence in intelligent autonomous machines.

Perhaps there's another way. Let us consider defining and designing neural nets that reflect not the centralized vertebrate human brain (the focus today), but the brain of the cephalopod octopus. The octopus has evolved a far larger nervous system and greater cognitive complexity than any other invertebrate and is perhaps the closest thing we have to an intelligent alien species: it uses a distributed, decentralized nervous system, with most of its neurons located in its arms rather than its central brain. As we move toward a decentralized economy, does this decentralized approach to artificial intelligence-driven autonomous systems make better sense for the future of human civilization and its security?
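To make the contrast concrete, here is a deliberately tiny sketch, in Python, of the two control patterns. Everything in it (the class names, the "explore" goal, the obstacle reflex) is an illustrative assumption invented for this example, not a real AI design: in the octopus-style version, each arm senses and decides locally while the central node only hands down a broad goal; in the vertebrate-style version, one module ingests every reading and issues every command.

```python
# Toy contrast: decentralized "octopus-style" control vs. centralized control.
# All names, goals and readings below are illustrative assumptions.

class Arm:
    """An arm with its own local sense-decide loop, loosely modeled on the
    ganglia in an octopus arm that handle much of its motor control locally."""

    def __init__(self, arm_id):
        self.arm_id = arm_id

    def act(self, local_reading, goal):
        # Local reflex: react to the immediate reading, bounded only by the
        # broad goal handed down from the central node.
        if goal == "explore" and local_reading == "obstacle":
            return "retract"      # no central approval needed
        if goal == "explore":
            return "extend"
        return "hold"             # unknown goal: default to a safe action


class DecentralizedController:
    """Central node sets one broad goal; each arm decides its own action."""

    def __init__(self, num_arms):
        self.arms = [Arm(i) for i in range(num_arms)]

    def step(self, readings, goal):
        # One action per arm, each computed only from that arm's own reading.
        return [arm.act(readings[arm.arm_id], goal) for arm in self.arms]


class CentralizedController:
    """A single module sees every reading and issues every command --
    the vertebrate-brain pattern discussed above."""

    def step(self, readings, goal):
        # Global decision: one obstacle anywhere freezes every arm.
        if goal == "explore" and "obstacle" in readings:
            return ["hold"] * len(readings)
        if goal == "explore":
            return ["extend"] * len(readings)
        return ["hold"] * len(readings)
```

In this toy, an obstacle sensed by one arm only changes that arm's behavior under the decentralized controller, while the centralized controller halts all arms at once, a crude illustration of why a distributed design can degrade more gracefully and keeps any single decision point from commanding the whole body.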

The time is now to explore developing autonomous systems with the mind of an octopus instead of a human!
