Game engines and multi-agent reinforcement learning for bricklaying construction
This research proposes a framework that couples Unity and ROS through a communication interface to support real-world distributed robotic construction, deploying reinforcement learning in a game-engine-based simulation. The Unity scene provides the environment in which agents observe, act, learn, and receive feedback. We draw on state-of-the-art reinforcement learning techniques for multi-agent execution-plan generation by connecting the environment to the Python API and Python trainer; all learning algorithms are set up on the TensorFlow platform and communicate with the Unity model. Unity then passes the collected information to ROS, namely the poses of the robot, the target object, the target location, and the motion plan. In turn, ROS returns a trajectory message to Unity that corresponds to the real robot's feedback, which drives further simulation of the remaining task. This research project is led by Xinghui Xu.
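The observe-act-learn-feedback cycle described above can be sketched in miniature without Unity or TensorFlow. Below is a toy stand-in, assuming a drastically simplified "bricklaying order" task: a row of slots must be filled left to right, and a single agent learns the correct placement order with tabular Q-learning. The environment, reward shaping, and hyperparameters are all illustrative assumptions, not the project's actual ML-Agents setup.

```python
import random

N_SLOTS = 4  # toy stand-in for the Unity scene: a row of brick slots

def step(state, action):
    """Fill a slot; reward +1 only if it is the leftmost empty one
    (an assumed simplification of a valid bricklaying order)."""
    filled = list(state)
    target = filled.index(0) if 0 in filled else -1
    correct = (action == target)
    reward = 1.0 if correct else -0.1
    if correct:
        filled[action] = 1
    return tuple(filled), reward, 0 not in filled

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy environment."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state, done = (0,) * N_SLOTS, False
        while not done:
            if rng.random() < eps:
                action = rng.randrange(N_SLOTS)
            else:
                action = max(range(N_SLOTS),
                             key=lambda a: q.get((state, a), 0.0))
            nxt, reward, done = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in range(N_SLOTS))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (
                reward + gamma * (0.0 if done else best_next) - old)
            state = nxt
    return q

def greedy_rollout(q, max_steps=20):
    """Execute the learned greedy policy and return the placement order."""
    state, order, done = (0,) * N_SLOTS, [], False
    while not done and len(order) < max_steps:
        action = max(range(N_SLOTS), key=lambda a: q.get((state, a), 0.0))
        order.append(action)
        state, _, done = step(state, action)
    return order
```

In the actual framework, the Unity scene replaces `step`, and the TensorFlow trainers behind the ML-Agents Python API replace the tabular update, but the loop structure is the same.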
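The Unity-to-ROS hand-off above is a message round trip: scene state out, trajectory back. The sketch below models that exchange with JSON payloads; the field names, the JSON transport (e.g. a rosbridge-style channel), and the stub planner are illustrative assumptions, not the project's actual message definitions.

```python
import json

def make_request(robot_pose, object_pose, target_pose, motion_plan):
    """Serialize the scene state Unity would send to ROS.
    Poses are assumed to be [x, y, z, qx, qy, qz, qw] lists."""
    return json.dumps({
        "robot_pose": robot_pose,
        "object_pose": object_pose,
        "target_pose": target_pose,
        "motion_plan": motion_plan,  # ordered list of waypoints
    })

def fake_ros_planner(request_json):
    """Stub for the ROS side: time-stamp the received motion plan and
    return it as a trajectory message, the way a real planner would
    answer with a trajectory_msgs/JointTrajectory-like structure.
    The 0.5 s waypoint spacing is an assumed placeholder."""
    req = json.loads(request_json)
    points = [{"t": i * 0.5, "pose": wp}
              for i, wp in enumerate(req["motion_plan"])]
    return json.dumps({"trajectory": points})

# Round trip: Unity sends the scene state, receives the timed
# trajectory, and would replay it on the simulated robot.
request = make_request(
    robot_pose=[0, 0, 0, 0, 0, 0, 1],
    object_pose=[0.4, 0.1, 0.0, 0, 0, 0, 1],
    target_pose=[1.0, 0.0, 0.2, 0, 0, 0, 1],
    motion_plan=[[0.4, 0.1, 0.3], [1.0, 0.0, 0.3], [1.0, 0.0, 0.2]],
)
response = json.loads(fake_ros_planner(request))
```

On the real system, the returned trajectory reflects the physical robot's feedback rather than an echo of the requested plan, which is what lets the simulation continue from the robot's actual state.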