Who could have predicted that artificial intelligence would reach its current stage of development? The year 2025 is at our door, and we keep advancing in our quest for innovation and knowledge. Imagine, for example, sitting in your house and having everything done at the snap of a finger.
Artificial intelligence includes technologies such as deep learning, machine learning, natural language processing (NLP), machine reasoning and computer vision. According to Tractica, artificial intelligence is an information system that mimics a typical biological system, creatively designed to equip computers with human-like capacities such as seeing, hearing, learning and reasoning.
As research advances, it is worth recalling what Anthony Pipe of the Bristol Robotics Laboratory once said: robotics is about augmenting people rather than making them obsolete. This is a view that reflects a future where robots and human beings work hand in hand. We should not confuse this with the singularity, where machines self-learn and become the bosses of the human race. Rather, it is a future where robots and humans can relate, with robots aiding human capabilities.
Notably, there are several areas where robots will help us, and in those areas we can expect remarkable growth.
Manuela Veloso of Carnegie Mellon University supports the notion that robots and machines will aid our daily routines. For example, Carnegie Mellon built CoBots, robots that give people directions and escort them through the university campus. These robots can even ask students and visitors for help when they need it, such as when they need someone to press an elevator button.
In one way or another, robots will help us, whether in automated vehicle driving, grocery delivery, warehouse picking, or mail delivery (already in place, though not yet efficiently maximized).
Essentially, robots will have limitations. Limitation is inevitable in the robotic world: just as we humans are limited in so many ways, so are robots. As Manuela Veloso noted, we will need to keep building algorithms so that robots, too, are able to ask for help. Meanwhile, there has been research into developing artificial intelligence for sensation, such as the sense of touch, chiefly so that a robot can probe tissue and recognize lumps and bumps in humans. The hard problem is making the sensation a robot picks up transferable to whoever is operating it.
This limitation, however, is not insurmountable. MuBot, created at the Bristol Robotics Laboratory, can relay signals between the doctor's fingertips and its exterior. It works almost like a puppet: everything the doctor performs with his or her fingers is performed by the MuBot, and sensations at its surface are felt by the doctor. For instance, when the robot touches an internal organ such as the intestine during an operation, the surgeon feels that the organ is being touched, which makes him or her more careful.
This is one of the areas where artificial intelligence is currently making waves. Doctors will be able to detect and remove cancers and tumors early enough, and this will be a groundbreaking testimony in the years to come.
MuBot has performed impressively over the years, and more research is being carried out to bring smarter, more efficient artificially intelligent robots into the health sector.