One of the greatest dreams in modern robotics and AI research is to create assistive machines that can make their own decisions, detect their own errors, and work quietly and safely alongside humans.
Many people have seen videos of Japan’s robot restaurants. Some robots carry food on trays, some stop at designated tables, and others respond to people’s calls. From the outside, it looks as if robots can do everything. But Dr. Alimur Reza draws an important line here: in many of these examples, the robots are not fully autonomous. In some cases, humans are giving instructions; in others, robots are being guided along predefined paths; and sometimes, human-robot interaction (assistance or intervention) is unavoidable. In other words, while the robots are working, their ability to understand their surroundings and make independent decisions is still limited.
This limitation can be understood with a simple comparison. If you visit a new building, you might first ask someone where the stairs, the elevator, or a particular office is. But within a few minutes, you can navigate the environment on your own. This ability comes from human experience, memory, and common sense. Robots have no such inherent common sense. They must be taught everything: what a road is, what an obstacle is, what a person is, what a glass door is, what a shadow is. This teaching process is closely linked to Dr. Alimur Reza’s field of research.
So, a completely autonomous agent doesn’t just mean “moving on its own.” It means navigating while understanding the environment. If you want a robot to work in your home, it must know where it is safe to walk, where it might slip, where children are playing, where there is soft carpet or a hard floor. While a wrong decision by a human may cause minor discomfort, a wrong decision by a robot may be dangerous. That’s why the questions of safety, reliability, and ethics become intertwined with robotic automation.
The main driving force behind this automation is computer vision and machine learning. Computer vision means teaching machines to understand images or videos, and machine learning means learning from data to make decisions. But the real world doesn’t always stay the same. Lighting conditions change, new objects enter a room, people move quickly, sometimes pets walk in front of the robot. An autonomous agent must work reliably amidst all these changes. This is the big challenge for researchers: how to teach machines in such a way that they can adapt not only to familiar environments, but also to unfamiliar ones.
According to Dr. Alimur Reza, the goal of current AI research is to create automated agents that can operate with minimum human intervention. The word “minimum” is very important here, because for machines involved in human life, a completely human-free system may not always be reasonable. In some cases, monitoring is necessary; in others, boundaries must be defined; elsewhere, emergency stop mechanisms are needed. So responsible design, policy, and practical realities are just as important as autonomy.
This is where Dr. Alimur Reza’s message carries special value for students. He reminds us that building a robot is not just about assembling hardware or mechanical parts; it is a kind of intellectual training. Teaching a machine about its environment, building the logic for decision making, and reteaching after mistakes: this whole process advances through a combination of mathematics, programming, and curiosity. The math you learn today, the logic you practice, and the small bits of code you write and test are what will build the future autonomous assistants that quietly stand by humans.
In the full interview, Dr. Alimur Reza discusses in greater detail his educational journey, the specifics of his research, the future of robotics, and practical questions surrounding AI implementation. Read Dr. Alimur Reza’s full interview below or watch it on YouTube.