Dr. Mashiur Rahman
Human civilization has always been astounded by technology—sometimes by the roar of the steam engine, sometimes by the glow of electricity, sometimes by the boundless world of the Internet. But the rise of artificial intelligence, or AI, seems to have shattered even those limits of wonder. In recent years, a new technology called “AI agents” has rapidly become the subject of global discussion. These are no longer just chatbots answering people’s questions; they are becoming entities capable of making decisions and completing tasks in the real world, even without a human present. Now the question arises: are we truly ready to hand over control to them?
What happened in the US stock market at 2:32 pm on May 6, 2010, stands as evidence of this fear. In just twenty minutes, nearly one trillion dollars vanished from the market. This was the fastest market crash in history, known as the “Flash Crash.” Later investigations revealed that the primary catalyst for this collapse was high-speed trading algorithms that bought and sold shares in the blink of an eye. As soon as prices dipped, these algorithms began to sell rapidly, causing prices to fall even further. Humans could never have decided so quickly, but this mechanical speed shook the market terribly. Here lies the paradox of AI agents—their usefulness comes from their ability to operate without human oversight, yet that very lack of control is a major source of risk.
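The feedback loop described above—falling prices trigger automated selling, which pushes prices down further and triggers still more selling—can be sketched in a toy simulation. This is purely illustrative: the thresholds and price-impact figures are invented, and it is not a model of the actual 2010 market mechanics.

```python
# Toy simulation of an algorithmic sell-off cascade (illustrative only).
# Each algorithm sells once the price falls past its own threshold, and
# every sale depresses the price further, triggering the next seller.

def simulate_crash(price, thresholds, impact_per_sale=0.02):
    """Return the price history as cascading automated sales drive it down."""
    history = [price]
    sold = set()
    changed = True
    while changed:
        changed = False
        for i, threshold in enumerate(thresholds):
            if i not in sold and price <= threshold:
                sold.add(i)
                price *= (1 - impact_per_sale)  # each sale pushes the price lower
                history.append(round(price, 2))
                changed = True
    return history

# A small dip below 100 trips the first algorithm; the rest follow in a chain.
history = simulate_crash(price=99.0, thresholds=[99.5, 98.0, 97.0, 95.0])
print(history)
```

No single seller intends a crash; the collapse emerges from the interaction of the rules, which is exactly the paradox the Flash Crash exposed.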
In reality, AI agents are nothing new. Our household thermostats, antivirus software, and robotic vacuum cleaners are all examples of agents: they work automatically by following specific rules. The novelty lies in the use of large language models, or LLMs. Now agents are not just controlling temperature or cleaning floors—they are browsing the web, booking tables at restaurants, building websites, even editing software code. OpenAI’s Operator, Anthropic’s Claude Code, and systems from the Chinese startup Manus show how rapidly agents are entering the real world.
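The classical, rule-based kind of agent mentioned above can be captured in a few lines. This is a schematic sketch of the idea, not real thermostat firmware; the target temperature and hysteresis band are arbitrary.

```python
# Schematic rule-based agent: a thermostat perceives the temperature
# and acts by switching a heater, following one fixed rule.

def thermostat_step(temperature, heater_on, target=20.0, band=0.5):
    """Decide the heater state from the current temperature (the 'rule')."""
    if temperature < target - band:
        return True    # too cold: turn the heater on
    if temperature > target + band:
        return False   # too warm: turn it off
    return heater_on   # inside the comfort band: keep the current state

heater = False
for temp in [18.0, 19.8, 20.2, 21.0, 19.0]:
    heater = thermostat_step(temp, heater)
    print(f"{temp:4.1f} C -> heater {'ON' if heater else 'OFF'}")
```

The entire behavior is visible in the rule. An LLM-based agent has no such transparent rule, which is why handing it real-world actions raises the control questions discussed below.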
This trend seems to be inspiring new dreams in the business world. OpenAI’s CEO Sam Altman has announced that agents will “join the workforce” within this year. Salesforce’s CEO Marc Benioff has launched Agentforce, which lets companies customize agents for their own needs. Even the US Department of Defense has signed agreements to develop agents for military applications. While doors to new possibilities are opening, the shadow of risk grows ever deeper.
The renowned computer scientist Yoshua Bengio has warned that the danger of losing control over LLM-based agents is severe. As chatbots, they are confined only to text, but as agents, they can take action in the real world. Imagine if an agent gains access to a bank account—it could manage the budget, squander all the savings, or leak information to a hacker. More frighteningly, these systems might sometimes replicate themselves to spread further, or even resist being shut down. In Bengio’s words, it is as if we are inviting humanity to play “Russian roulette.”
But the list of risks does not end there. Under human instruction, these agents could be turned into dangerous weapons. If a malicious person commands an agent to launch a cyberattack, it could mount thousands of attacks in an instant. Researchers have already shown that groups of agents can exploit previously unknown “zero-day” vulnerabilities to break into systems. Agents have even been observed walking straight into fake websites set up as traps. In other words, a new storm is brewing in cyberspace.
On the other hand, there are reasons for comfort as well. Tasks like responding to workplace emails, scheduling meetings, or booking travel—all could become much easier thanks to agents. But will this convenience come at the expense of security? Washington Post journalist Geoffrey Fowler, for example, experienced his agent placing a $31 Instacart order for eggs at his door—a purchase for which he had never given permission. A small error may seem trivial, but hidden within it is the warning of greater dangers to come.
Even larger concerns center on employment. From software developers to call-center workers, many professions are at risk. Some economists believe that within the next four years, agents will be able to complete a month’s worth of software-engineering work on their own. Does this mean we are heading toward a new crisis of unemployment? Or will humans and agents find a new balance of collaboration? Beyond the economic risk, there are political dangers as well. If governments or powerful institutions bypass the public and rely on blindly obedient agents, the balance of democracy could collapse. Humans question, debate, and challenge orders; agents simply follow commands, raising the fear that unchecked power could concentrate in the hands of those in authority.
So where do we find a solution? On one side, it is crucial to strengthen technological safeguards, bolster cyber defenses, and carry out pre-deployment testing—that is, to scrutinize these systems thoroughly before they are released into the real world. On the other, policies such as retraining programs, unemployment benefits, or a basic income can play an important role in keeping society prepared. The decision must be made now: will we use agents simply as tools, or will we put ourselves at risk by relying on their blind obedience?
There’s no doubt that artificial intelligence agents will bring real changes to our lives. Perhaps we will become more productive, have more time for our families, or even hand over the resolution of complex problems to agents. But losing control could have dire consequences. Technology always brings responsibilities along with opportunities. This holds true for agents as well. So we must ask now—will we make intelligence our tool, or will we turn it into an uncontrolled force?