Robots Without Social Containment

Proxemics. Do you know what this is? According to Wiktionary, it is defined as “The study of the effects of the physical distance between people in different cultures and societies.”

Suddenly, proxemics has gained relevance in our lives, not so much as the analysis of the physical distance individuals maintain for social and cultural reasons, but mainly because of changes in the distance prescribed by the new social reality in which we live.

Several robotics researchers have been working on proxemics for many years, in support of the development of social robots. Paradoxically, although robots are represented in the public imagination as machines lacking social capabilities, we live in times when, on the one hand, robots have become desirable in scenarios where physical distance between humans is important (hospitals, restaurants), and, on the other hand, it has become more important that, in these and other scenarios, they behave in a way that the humans they interact with find socially acceptable.

There are many ways for a robot, or a set of robots, to exhibit social behavior. Consider first the robots that do not need it: robots in factory automation, or robots performing tasks in which social interaction with humans is irrelevant, such as cleaning a space or monitoring intrusions along a property fence. Their decisions are expected to maximize efficiency, and they can remain opaque as long as they work and have an immediate effect.

A social robot, on the other hand, may make decisions that are merely satisfactory, even if not optimal according to efficiency criteria. It must interact more than act. It must be able to explain its decisions. Additionally, it must try to ensure that the impact of its decisions weighs mainly the longer-term effects, in the context of a prolonged relationship. For this reason, co-bots (collaborative robots) are not limited to helping humans: they can ask a human for help when they are unable to solve a problem on their own ("please call the elevator," asks a robot without arms or hands of a person waiting with it in the elevator lobby).

How is all this accomplished in a social robot? Consider first a non-social robot moving between two rooms in a house: it executes a control loop in which it estimates its location in the house and determines the speed commands that take it toward the destination room.
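Such a loop can be sketched as follows, under illustrative assumptions: a planar robot, a simple proportional heading controller, and parameters invented for the example (not taken from any particular system).

```python
import math

def go_to_goal(pose, goal, max_speed=0.5, gain=1.5):
    """One step of a simple go-to-goal controller.

    pose: (x, y, heading) estimated by the robot's localization.
    goal: (x, y) of the destination room.
    Returns (linear_speed, angular_speed) commands.
    """
    dx, dy = goal[0] - pose[0], goal[1] - pose[1]
    distance = math.hypot(dx, dy)
    # Heading error, normalized to (-pi, pi].
    heading_error = math.atan2(dy, dx) - pose[2]
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    # Move forward only when roughly facing the goal; always turn toward it.
    linear = min(max_speed, distance) if abs(heading_error) < 0.5 else 0.0
    angular = gain * heading_error
    return linear, angular
```

The robot would call this repeatedly, feeding each command to its wheels and re-estimating its pose, until `distance` falls below some threshold.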

The route is computed taking into account the proximity of objects and people detected as obstacles, which the robot must avoid. A social robot, on the other hand, distinguishes humans from mere obstacles, toward which it has no social obligations.

For example, a social robot does not come too close to a person to whom it is going to ask a question, nor does it cross between the television and the sofa where a person is sitting watching a movie. To guarantee these social rules, the robot integrates proxemic rules into the algorithm that determines its movement commands.
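One common way to encode this is a cost function over candidate path points that treats mere obstacles as hard constraints, while humans additionally project a "personal space" penalty. The sketch below is illustrative: the personal-space radius and penalty shape are assumptions for the example, not values from any specific system.

```python
import math

PERSONAL_SPACE = 1.2   # metres; assumed personal-zone radius (illustrative)
ROBOT_RADIUS = 0.3     # metres; assumed robot footprint

def cell_cost(point, obstacles, humans):
    """Cost of placing the robot at `point`, given obstacle and human positions."""
    x, y = point
    cost = 0.0
    for ox, oy in obstacles:
        if math.hypot(x - ox, y - oy) < ROBOT_RADIUS:
            return float("inf")        # collision with an obstacle: forbidden
    for hx, hy in humans:
        d = math.hypot(x - hx, y - hy)
        if d < ROBOT_RADIUS:
            return float("inf")        # collision with a person: forbidden
        if d < PERSONAL_SPACE:
            # Smoothly penalize intrusion into personal space, so the
            # planner prefers routes that keep a comfortable distance.
            cost += (PERSONAL_SPACE - d) / PERSONAL_SPACE
    return cost
```

A path planner minimizing the sum of such costs along a route will detour around people even where a purely geometric planner would squeeze past them.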

The use of social rules in Robotics is not limited to movement decisions. When deciding how to allocate tasks with a human, or how to coordinate its actions with the humans it interacts with, the robot has to use rules, known in the literature as task assignment (or synchronization) and teamwork rules, which may or may not include social norms.
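Task assignment can take many forms; one simple mechanism from the multi-robot literature is a sequential auction, sketched here with invented robot and task names and a caller-supplied cost function.

```python
def assign_tasks(robots, tasks, cost):
    """Greedy sequential auction: each task goes to the cheapest free robot.

    robots: list of robot identifiers.
    tasks: list of task identifiers, auctioned in order.
    cost(robot, task): the robot's bid, e.g. estimated travel distance.
    """
    assignment = {}
    free = set(robots)
    for task in tasks:
        if not free:
            break                       # more tasks than robots
        winner = min(free, key=lambda r: cost(r, task))
        assignment[task] = winner
        free.remove(winner)             # one task per robot in this sketch
    return assignment
```

Real systems refine this in many ways (re-auctioning, multiple tasks per robot, human preferences in the cost), but the bidding skeleton is the same.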

For example, a collective of humans and robots can use mediation mechanisms inspired by institutions. An institution can be seen, in a broad sense, as an artifact (say, a roundabout) with a material component (a roundabout is an obstacle in the middle of a road that vehicles must avoid) and a mental component (driving rules stating that every vehicle must always go around the roundabout on a given side).

In this sense, the institution can be used as a mediation device between robots, facilitating their coordination; for example, autonomous vehicles following the Highway Code can negotiate a roundabout without accidents. If the institutional rules used by robots are the social rules used by humans, then those institutions can conveniently mediate the interaction between humans and robots, similar to what traditional institutions such as Parliament or the Courts do among humans alone.
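A minimal sketch of this idea: the rule is public and shared, so every agent, human-driven or autonomous, derives the same maneuver from it and no pairwise negotiation is needed. The rule encoding and the counterclockwise convention (as in right-hand-traffic countries) are illustrative choices.

```python
# The material component is the roundabout itself (an obstacle to avoid);
# the mental component is this shared convention.
ROUNDABOUT_RULE = {"direction": "counterclockwise", "yield_to": "traffic inside"}

def plan_through_roundabout(rule=ROUNDABOUT_RULE):
    """Derive a maneuver from the public institutional rule."""
    turn = "left" if rule["direction"] == "counterclockwise" else "right"
    return {"enter_after": rule["yield_to"], "circulate_turning": turn}
```

Because every participant consults the same rule, the coordination problem reduces to each agent following it correctly.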

However, cooperation (between robots, or between robots and humans in a collective) requires more than simple coordination. In addition, there must be teamwork. This can be formalized through the obligation of any member of the team to inform their colleagues when the team's objective has been reached, when they realize that it is unattainable, or when it has become irrelevant.

Teamwork rules become social rules when team goals involve humans. We know this well when we arrange to meet friends, for example, for lunch at a given place. If we are running late, we coordinate with the others through a phone call informing them of our delay. But we are only acting in a socially acceptable and cooperative way if, in addition, we undertake to inform the rest that we have just arrived and found the restaurant closed, or that someone has decided the lunch is off. In this case, we are cooperating and being sociable. Robots can comply with these rules through wireless communication with each other or, by voice or email, with humans.
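The notification obligation described above can be sketched in code. This is an illustrative toy inspired by joint-intention-style teamwork rules, not any specific robot framework; the class and channel names are invented.

```python
class TeamMember:
    """A member (robot or human proxy) bound by the teamwork rule:
    whoever discovers the goal's status must tell everyone else."""

    def __init__(self, name, team):
        self.name = name
        self.team = team          # shared list of all members
        self.inbox = []
        team.append(self)

    def observe_goal_status(self, status):
        # The three cases the teamwork obligation covers.
        assert status in ("achieved", "unachievable", "irrelevant")
        for mate in self.team:
            if mate is not self:
                # Delivery could be wifi between robots, or voice/email
                # toward humans; here it is just an inbox append.
                mate.inbox.append((self.name, status))
```

For instance, if robot `r1` discovers the restaurant is closed (`"unachievable"`), every other member's inbox receives that news, so nobody keeps working toward a dead goal.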

Teams of robots and humans cooperating to achieve useful goals are today a more active topic in robotics research than the development of robots that replace humans in tasks with the sole aim of precision and efficiency, and that are less social in their interaction with them.

Some of this work, especially when it involves teams with multiple robots and humans, has part of its origin in research on swarm robotics, which studies how populations with a large number of robotic agents, each with limited and simple individual behaviors, can exhibit complex team behaviors through rules of interaction between team members.

One of the best-known cases is the set of rules that researcher Craig Reynolds created for simulating the artificial creatures he called Boids. Their behavior, following these rules, resembles that of flocks of birds and schools of fish. It is another example of interdisciplinarity between Robotics and other areas of knowledge, in this case Biology.
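Reynolds's three classic rules are separation (steer away from crowding neighbors), alignment (match neighbors' velocity), and cohesion (steer toward neighbors' center). A minimal 2-D step is sketched below; the weights and radii are illustrative assumptions, not Reynolds's original values.

```python
def boids_step(positions, velocities, radius=5.0, sep_d=1.0,
               w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One update of every boid's velocity from its neighbours within `radius`."""
    new_vels = []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        sep = [0.0, 0.0]; ali = [0.0, 0.0]; coh = [0.0, 0.0]; n = 0
        for j, (q, u) in enumerate(zip(positions, velocities)):
            if i == j:
                continue
            dx, dy = q[0] - p[0], q[1] - p[1]
            d = (dx * dx + dy * dy) ** 0.5
            if d < radius:
                n += 1
                ali[0] += u[0]; ali[1] += u[1]          # alignment: neighbours' velocity
                coh[0] += dx;  coh[1] += dy             # cohesion: toward neighbours
                if 0 < d < sep_d:
                    sep[0] -= dx / d; sep[1] -= dy / d  # separation: away when too close
        if n:
            vx = v[0] + w_sep * sep[0] + w_ali * (ali[0] / n - v[0]) + w_coh * coh[0] / n
            vy = v[1] + w_sep * sep[1] + w_ali * (ali[1] / n - v[1]) + w_coh * coh[1] / n
        else:
            vx, vy = v                                  # no neighbours: keep going
        new_vels.append((vx, vy))
    return new_vels
```

Iterating this step (and moving each boid by its velocity) produces flock-like motion from purely local rules, which is the point of the swarm approach: no boid knows the flock's shape, yet the flock emerges.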

Researchers in these fields often use bio-inspired robots to test scientific hypotheses that they cannot test on animals. In Social Robotics, this interdisciplinarity extends to the Social Sciences and Psychology, and systems with multiple robots are beginning to be used to test hypotheses that characterize human societies.