
Prospects of AI surpassing human intelligence
By Bhushan Patwardhan and Indu Ramchandani

The possibility looms as a formidable existential threat, questioning our future control over these technological creations. We are reminded that our choices and actions today will shape our scientific journey ahead and the future of our planet.
This journey not only demands our ingenuity and ambition but also our wisdom and foresight to ensure that the path we choose benefits humanity and the planet. And this is something only conscious human beings can do!
Here is the core link in the chain of “Genome to Om”. Several contemporary eminent scientists, technologists, and thinkers have expressed serious concerns about AI.
Stephen Hawking, Elon Musk, and Sam Altman are among those who have warned about the consequences of the unchecked advancement of AI, particularly in the development of autonomous weapons and the potential for super-intelligent AI systems that could act in ways that are contrary to human values.
These concerns are not merely theoretical; recent advances in AI capabilities, such as generative adversarial networks (GANs) and deep learning, demonstrate the technology's rapid progress towards increasingly complex and autonomous functionality.
Elon Musk, the CEO of Tesla and SpaceX, is perhaps the most outspoken critic of AI. He has stated plainly that unless safeguards are built in, AI systems might replace humans, rendering the species irrelevant or even extinct: "Human consciousness is a precious flicker of light in the universe, and we should not let it be extinguished."
In the last few years of his life, the famous physicist Stephen Hawking repeatedly warned us about the threats of climate change, artificial intelligence, overpopulation, and hostile aliens. "The development of full artificial intelligence could spell the end of the human race," Hawking told BBC News in 2014. He advocated strict ethical guidelines for AI research, fearing that the technology could evolve beyond human control.
Sam Altman, another prominent figure in the tech industry and former president of Y Combinator, has voiced several concerns and criticisms regarding AI. While he acknowledges its tremendous potential benefits, he recommends a cautious and proactive approach to its development to ensure that society as a whole can actually benefit from these advancements.
One of his main criticisms is the potential for AI to exacerbate income inequality and to concentrate wealth and power in the hands of a few individuals or organizations. Altman warns that AI could disrupt traditional industries and job markets, leading to widespread job displacement and economic upheaval.
He raises concerns about the possibility of large segments of the population being left behind as AI-driven automation replaces many jobs, particularly those that involve repetitive or routine tasks.
Microsoft co-founder Bill Gates has spoken of automation's impact on society and the loss of jobs, though he also says, "AI risks are real but nothing we can't handle."
Sam Harris, the neuroscientist, is convinced that “with the advance of AI, there will evolve a machine superintelligence with powers that far exceed those of the human mind”.
This he sees as "something that is not merely possible, but rather a matter of inevitability".
One of the primary challenges is ensuring that the goals of AI systems remain aligned with human values and ethics. It is difficult to specify these goals in a way that cannot be misinterpreted, especially as AI gains the ability to learn and evolve beyond its initial programming. This alignment problem is central to the debate: a misaligned AI could produce unforeseen consequences, ranging from economic displacement due to automation to catastrophic scenarios involving autonomous weapons or existential risks posed by super-intelligent systems.
The concerns extend to the realms of regulation and ethics, emphasizing the need for a proactive and international approach to AI governance. Scholars and policymakers propose developing ethical guidelines and safety standards for AI research and deployment.
This includes transparency in AI development, mechanisms for accountability, and including diverse stakeholders in decision-making processes to ensure that AI technologies are beneficial to all of humanity. Recent studies and initiatives aim to address these challenges by exploring frameworks for safe and ethical AI development.
For instance, research studies published in journals like AI & Society and Ethics and Information Technology examine the implications of AI for privacy, security, and social welfare, proposing strategies for responsible AI innovation.
Moreover, organizations such as the Future of Life Institute and the Partnership on AI bring together academics, industry leaders, and civil society to collaborate on the best practices for developing AI and mitigating the risks associated with advanced AI technologies.
The debate about AI’s potential to replace humans or render the species irrelevant reflects broader concerns about technology’s role in society and the future of humanity. Addressing these challenges requires a multidisciplinary approach that balances innovation with caution, ensuring that AI serves as a tool for enhancing human life rather than a threat to our existence.
As this field continues to evolve, ongoing dialogue, research, and policy development will be crucial in navigating the complexities of the AI era.
Yoshua Bengio, Geoffrey Hinton, and Yann LeCun were honoured with the 2018 Turing Award for developing deep learning technologies that revolutionized AI.
Their work laid the foundation for many of the AI systems that are now integral to various sectors, including healthcare, transportation, and communication. However, the discussions around AI are not limited to its technological feats; they extend into the ethical and societal domains, as highlighted by the experiences of researchers like Timnit Gebru and the warnings from people like Bengio.
Yoshua Bengio has voiced concerns about the potentially catastrophic risks associated with the technology. His apprehensions revolve around the misuse of AI in areas such as autonomous weaponry and surveillance, which could have dire consequences for humanity if not properly regulated.
He suggests a more cautious approach to the development of AI, emphasizing the importance of ethical considerations and the potential need for regulatory frameworks to prevent misuse. Bengio's stance reflects a growing awareness within the AI research community of the dual-use nature of these technologies: capable of great benefit but also of significant harm.
Timnit Gebru lost her job as co-lead of Google's ethical AI team.
She reportedly had several disputes at Google, but the suppression of her paper was the most serious. Gebru's controversial paper, which questioned the ethics of large language AI models, raised important concerns about the environmental impact of training such models, their tendency to perpetuate biases, and the lack of transparency in their development.
Gebru's work and subsequent dismissal from Google sparked a broader debate on the need for ethical guidelines and accountability in AI research and development.
Ensuring that the benefits of scientific progress are equitably distributed is a moral imperative. This demands international cooperation: developed and developing nations must find common ground in sharing knowledge, resources, and technologies for the greater good. The way forward requires careful decision-making and a collective effort involving scientists, technologists, policymakers, ethicists, and the public.
(Excerpted from Genome to Om: Evolving Journey of Modern Science to Meta-science. Published by BluOne Ink)