AI’s Hidden Risks: Why Experts Say It’s Not Really “Intelligent”
Artificial intelligence has become part of our everyday lives, from chatbots to recommendation systems to automated decision-making. But according to new research from Charles Darwin University (CDU), the way AI is being developed and deployed may pose a serious risk to human dignity on a global scale.
Dr. Maria Randazzo, a legal scholar at CDU, warns that while AI feels cutting-edge and transformative, it isn’t truly intelligent in the way humans are. Instead, it’s a powerful engineering tool that relies on pattern recognition, without qualities like empathy, memory, or reasoning. And this distinction matters more than most people realize.
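To see what “pattern recognition without understanding” looks like, consider a toy sketch (the training text and the word-level Markov approach are illustrative assumptions, not the study’s method). The model below produces fluent-seeming output purely by replaying statistical patterns, with no grasp of what the words mean:

```python
import random
from collections import defaultdict

# A word-level Markov chain: it "learns" only which word tends to
# follow which, and generates text by sampling those patterns.
corpus = (
    "the court denied the appeal because the evidence was weak "
    "the court granted the appeal because the evidence was strong"
).split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    """Emit a plausible-looking sequence with zero comprehension."""
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))  # e.g. "the appeal because the evidence was strong"
```

Modern systems are incomparably larger, but the underlying mechanism is still statistical prediction over patterns, not comprehension, empathy, or judgment.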

The “Black Box” Problem
One of the biggest concerns highlighted in Randazzo’s study is AI’s lack of transparency, often called the “black box problem.” Many advanced systems, like those built on deep learning, make decisions in ways even their creators can’t fully explain.
This means that when an AI tool denies someone a loan, influences a hiring decision, or even shapes a criminal sentence, the affected person may have no clear path to understanding why. Without transparency, accountability becomes nearly impossible: if people can’t trace how a decision was made, they can’t challenge it when it violates their rights or dignity.
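To make that opacity concrete, here is a minimal sketch (the loan scenario, the features, and the data are invented for illustration; this is not the study’s example). A small neural network learns an approve/deny rule, but the only “explanation” it can offer for any individual decision is a pile of learned numeric weights:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for applicant features (income, debt, history are
# made-up labels): the point is the opacity, not the data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 0.7]) + rng.normal(size=500) > 0).astype(int)

# A small deep-learning-style model: two hidden layers of 32 units each.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000).fit(X, y)

applicant = np.array([[0.2, 1.1, -0.4]])
print("decision:", "approve" if model.predict(applicant)[0] == 1 else "deny")

# The only "reason" the model can surface is its raw parameters.
print("opaque weights:", sum(w.size for w in model.coefs_))  # 1,152 numbers
```

A rejected applicant, or a regulator auditing the system, has nothing human-readable to inspect here, which is exactly the accountability gap the study describes.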
Human Rights at Risk
Randazzo argues that current regulations don’t adequately protect basic rights such as privacy, autonomy, and freedom from discrimination. Instead, AI’s rapid spread has already begun to reinforce systemic inequalities and undermine democratic values.
It’s not just about faulty technology; it’s about how governments and institutions are handling AI’s rise. Too often, policies lag behind innovation, leaving individuals vulnerable to unfair treatment and exploitation.
Competing Global Approaches
Interestingly, the world’s three biggest digital powers—the United States, China, and the European Union—are each pursuing very different AI strategies:
- United States: Market-driven, where innovation and corporate interests dominate.
- China: State-driven, with government control shaping AI’s direction.
- European Union: Human-centric, focusing on ethics and dignity.
Randazzo suggests that while the EU’s approach seems the most promising for safeguarding human dignity, it won’t be enough without global cooperation. AI is a global force, and inconsistent standards across regions may allow harmful practices to slip through.
Why It Matters
At its core, this research challenges the idea that AI is “intelligent” in any meaningful sense. Randazzo stresses that AI has no awareness, no empathy, and no understanding—qualities that are central to being human. Treating it as if it does risks reducing people to mere data points.
If humanity doesn’t anchor AI development in values like empathy, compassion, and human choice, Randazzo warns that we could end up with systems that devalue our humanity instead of enhancing it.
Her work is part of a broader trilogy on the topic, beginning with this study published in the Australian Journal of Human Rights. It’s a call for the world to recognize that while AI is impressive, it is not wise, thoughtful, or moral. Only people are.
Final Thoughts
This research doesn’t suggest abandoning AI altogether. Instead, it urges societies to think carefully about how we integrate these systems into our daily lives and legal frameworks. AI can be a powerful ally, but only if we insist on clear rules, transparency, and human-centered design.
In other words, the real question isn’t just what AI can do, but what it should do, and how we ensure it serves people rather than undermining them.