Will AGI lead us?
12 February 2025 | Blog
Will AGI lead us, or will we lead it? I love listening to people smarter than me and diving into their perspectives about the future of humanity. I recently came across the Nobel Prize YouTube channel, where the 2024 laureates share fascinating insights into their groundbreaking discoveries.
One particular conversation grabbed my attention: Geoffrey Hinton, often called the «Godfather of AI», speaking about AI taking over leadership from humans. A British-Canadian computer scientist and cognitive psychologist, Hinton earned the 2024 Nobel Prize in Physics for his pioneering work on artificial neural networks.
Naturally, one of the first questions raised was whether AI might eventually control humanity. We are not talking about your GPT taking over your job, but about Artificial General Intelligence (AGI), which seems to be just around the corner.
While it sounds like the plot of a sci-fi movie, this concern is more relevant than ever. Humanity has long feared the idea of «machines» taking over—whether it’s jobs, control, or AGI leading us. But that’s not what keeps me awake at night.
I’ve championed digitisation for years to make work smarter and more fulfilling. My real worry isn’t AGI itself; it’s the risk of it mirroring human behaviour in all its splendour. History has repeatedly shown that greed often takes the lead. And we’re notoriously inconsistent when it comes to living by our declared values.
With the help of extensive research—and a little AI assistance—this article delves into the question: Could AGI lead us into a brighter future, or will it be used to lead us astray?
What Is AGI?
First, let’s clarify what AGI is.
Artificial General Intelligence (AGI) is an advanced form of artificial intelligence capable of reasoning, learning, and performing any intellectual task a human can. Unlike existing AI, designed for specific tasks like facial recognition or language translation, AGI can think broadly and solve problems across multiple domains without requiring task-specific programming.
AGI represents a paradigm shift. Instead of merely augmenting human capabilities, it could match or even exceed human intelligence. Scared? I am not, but we must discuss its benefits, risks, and alignment with human values.
The Massive Benefits of AGI
If developed responsibly, AGI could revolutionise how we address global challenges and improve human life. It could devise solutions that the human mind alone would struggle to reach. I have always seen AI as complementary to human skills, and I still believe in the immense potential of developing solutions together with AI.
Let’s look at a few examples:
Problem Solving at Scale: AGI could help solve complex issues like climate change, disease eradication, and poverty with unprecedented efficiency and creativity. While we have set ambitious goals like the United Nations Sustainable Development Goals, progress remains frustratingly slow.
Scientific Discovery: AGI could accelerate breakthroughs in medicine, physics, and beyond, developing solutions that humans might never discover on their own.
Universal Well-being: In many parts of the world, people still lack access to education or healthcare. AGI could be a universal tutor, doctor, or advisor, providing remote communities with affordable, high-quality education and healthcare.
Global Collaboration: AGI could bring better international cooperation and facilitate communication across cultures and languages. Imagine no longer needing to learn a local language to travel or work globally, as your trusted AI translator will always have you covered.
Efficiency and Productivity: While people fear AGI could take over jobs, it could also free humans to focus on more creative and meaningful work. After all, many robots we once feared perform dangerous tasks better than humans, indirectly saving lives and improving safety.
The Risks of AGI
With great potential comes great risk; not everything is milk and honey. AGI could become a dangerous technology: it might be given goals that do not align with human values, or even contribute to human extinction. In a not-too-distant future, AGI could remove itself from human control and act unethically.
AGI could act in harmful or unpredictable ways if it develops goals or reasoning that diverge from human ethics or well-being. How is that possible? The answer is complex, but let me share two examples that continue to intrigue me:
Control Over Technology
Governments, corporations, and other powerful entities primarily drive the development and deployment of AGI. I never subscribe to conspiracy theories, but this centralisation raises a few critical concerns, at least theoretically. Human greed is unlimited, and it is humans who lead companies and governments.
First and foremost, who decides the goals and ethics of AGI? If a small group with narrow perspectives or self-serving interests dominates this process, AGI could be used for control, surveillance, or exploitation. Instead of serving humanity as a whole, AGI might disproportionately benefit those who control it. And that would further widen the gap between the haves and have-nots.
AGI could be used to lead the world, programmed to manipulate public opinion or suppress contrarian thinking. It’s like stepping into Orwell’s 1984, where Big Brother might not just watch you but reprogram your reality, all under the guise of progress.
Misalignment with Human Values
Even well-intentioned AGI systems could produce harmful outcomes if their programming or understanding of human values is flawed.
How could AGI understand human values? By observing us. But history reveals that human behaviour is often inconsistent, biased, and contradictory. On one hand, we speak passionately about eradicating hunger and poverty; on the other, we create wars and destroy the planet. What would AGI learn from that?
If AGI models itself on humanity’s inconsistencies, it might struggle to act ethically or fairly, especially in high-stakes situations. Humans frequently disagree on fundamental values (e.g., individual freedom vs. collective security), and AGI may find it impossible to reconcile such conflicts or choose a balanced path.
Understanding these risks is the first step in creating AGI as a force for good rather than a tool of oppression.
What Can We Do Now To Have A Better Tomorrow?
Reading all the material on the future of AGI, it is easy to fear a dystopian world where machines lead humanity. What can we do to prevent that? Many interesting papers (though hard to read) propose solutions.
For example, Nick Bostrom’s paper «Public Policy and Superintelligent AI: A Vector Field Approach» explains how we can guide the development of superintelligent AI to avoid harmful outcomes. It is not easy to read his paper, but it is worth a try.
He highlights the risks if AI doesn’t align with human values and suggests using policies to steer AI in safe and beneficial directions. The «vector field» idea shows how different choices can impact AI’s future. In short, we need global teamwork, safety measures, and ethical guidelines to create AGI that helps humanity instead of harming it.
Another paper, «Managing Extreme AI Risks Amid Rapid Progress», published in Science, proposes a framework for creating AGI that doesn’t spiral into a dystopian nightmare. Let me summarise the proposal below.
Invest in Technical R&D
This means developing tools to evaluate potentially harmful capabilities before AGI is deployed. Among other things, we need to embed safety mechanisms—like fail-safes—into their design. We must also address biases and include ethics as part of the foundation, not as an afterthought. If we want AI to work for humanity, it must be built with humankind in mind.
Create Adaptable Governance
Governance structures for AI are still in their infancy. It is hard to imagine the future with AGI in place, let alone create governance for something we cannot yet comprehend.
We need not only national institutions but also global frameworks to enforce standards and policies that adapt as AI evolves. Excessive governance can stifle innovation, so we need balance. However, we must ensure that AI progress does not come at the cost of reckless deployment or inequality. The goal is simple: to create an AGI that serves everyone responsibly and equitably.
This dual approach is how we can benefit from AI’s potential without losing control of the future it shapes.
Morning Routine When AGI Leads
One day in the future, you wake up, stretch, grab your coffee, and check your phone—only to find an alert saying, “You’ve been flagged for non-compliance with Algorithmic Directive 7.2.9. Proceed to your designated appeals portal.”
What did you do? Who knows! Maybe your smart toaster reported you for over-crisping your toast.
Welcome to Kafka’s The Trial meets AGI—a world where faceless algorithms run the show, and we’re all Josef K., fumbling through digital red tape with zero context. Without a solid plan to balance AGI’s benefits and risks, we might end up in a high-tech courtroom arguing with a hologram about why our favourite playlist is not a national security threat.
Spoiler: the hologram always wins.
Conclusion
My concern isn’t simply whether AGI will one day lead us; it’s deeper than that.
I’ve lived long enough to recognise that humans, with all our flaws, have often proven far more destructive than machines. Technology, after all, is a reflection of the values and intentions of those who create it. The real question is whether we, as a species, can rise above our own shortcomings.
As we invest in AGI, we should try to improve the way we, as humans, lead and create value for humanity. This is crucial for creating AGI that serves as a force for good rather than amplifying the harm we are already capable of.
Mirela Dimofte
Read and see also: Exploring the Evolution and Future of AI: Insights from Siri’s Co-Founder Babak Hodjat