Website : https://ps.is.tuebingen.mpg.de/employees/aahmad

Abstract: In this talk I will present some new insights into the effects of shared autonomy on the success of human-robot collaborative search and rescue (SAR) missions within a multi-robot cooperative perception scenario. Multiple robots in a team can generate richer information about the surroundings than a single robot, thereby increasing the chances of locating the survivors. However, the actual task of locating the survivors depends on two crucial factors: i) the human operator's ability to correctly detect and classify survivors, assuming that such classification is the operator's job in the collaborative SAR mission, and ii) whether the robot team is searching for the survivors in the right locations. The second factor itself depends on the human operator's ability to control the robots, either as a team or each robot individually, unless the robots survey the accident site fully autonomously. A novel and systematic psychophysical experiment for human-multi-robot interaction, involving simulated search and rescue, is designed to determine how the success rate of SAR missions depends on the human operator's level of control within a shared-autonomy control scheme. We present results from 30 human operators and over 100 hours of human-multi-robot interaction experiments.
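To give a concrete, if simplified, reading of what "level of control" can mean in such a scheme, here is a minimal sketch of shared-autonomy command arbitration (an illustration with assumed names and a blending parameter, not the experimental platform used in the study):

```python
import numpy as np

def shared_autonomy_command(u_human, u_auto, alpha):
    """Blend operator and autonomous velocity commands for one robot.

    alpha = 1.0 -> fully manual control, alpha = 0.0 -> fully autonomous.
    Intermediate values correspond to different levels of shared control.
    """
    u_human = np.asarray(u_human, dtype=float)
    u_auto = np.asarray(u_auto, dtype=float)
    return alpha * u_human + (1.0 - alpha) * u_auto

# Example: operator steers sideways while the autonomous planner heads to a waypoint.
u = shared_autonomy_command(u_human=[0.0, 1.0], u_auto=[1.0, 0.0], alpha=0.3)
print(u)  # [0.7 0.3]
```

Varying a parameter like alpha across experimental conditions is one simple way to expose operators to different levels of control.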

Website : http://amazonpickingchallenge.org/

Amazon Robotics builds the world’s largest mobile robotic fleet, in which many thousands of robots deliver inventory shelves to pick operators in e-commerce warehouses. Each Amazon warehouse holds millions of items of inventory, most customer orders represent a unique combination of several items, and many orders need to be shipped within a couple of hours of being placed to meet delivery promises. This talk will describe how mobile robots and human operators collaborate to solve this challenging problem and enable Amazon to ship millions of orders every day.

Website : http://homepages.laas.fr/afranchi/

In this seminar I will introduce the flying hand concept, a system conceived and developed together with Prof. Domenico Prattichizzo from the University of Siena. The flying hand is a visionary concept that sits at the crossroads of several disciplines in robotics, in particular human-robot interfaces, hand grasping, multi-robot systems, and aerial robotics. In its essence, the flying hand is a multi-robot system composed of two or more aerial (flying) robots. Each robot is endowed with a rigid tool that acts in the same way a finger of a real hand would, i.e., by exerting a unilateral force on an object to be grasped and transported. The group of robots can be collectively teleoperated by a real human hand using either a haptic interface or any other gesture-capturing device. In this way the flying hand represents a unique concept in the panorama of human-multi-robot interfacing systems. In this talk I will briefly introduce and present the control, human-interface, and experimental-implementation aspects of this innovative robotic concept.
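As a rough illustration of the hand-to-robots mapping such a system needs (my own sketch under assumed names and scaling, not the actual interface code of the flying hand), tracked fingertip positions can be mapped to desired contact positions for the aerial "fingers":

```python
import numpy as np

def fingertips_to_robot_setpoints(fingertips, hand_center, object_center, scale=3.0):
    """Map tracked fingertip positions (one per aerial robot) to desired
    contact positions around the object being grasped.

    fingertips    : (N, 3) fingertip positions from the gesture-capturing device
    hand_center   : (3,) palm/hand reference position
    object_center : (3,) centroid of the object to grasp and transport
    scale         : amplification from the hand workspace to the robot workspace
    """
    fingertips = np.asarray(fingertips, dtype=float)
    offsets = fingertips - np.asarray(hand_center, dtype=float)
    return np.asarray(object_center, dtype=float) + scale * offsets

# Example: two "fingers" (two aerial robots) closing around an object hovering at 1.5 m.
setpoints = fingertips_to_robot_setpoints(
    fingertips=[[0.05, 0.0, 0.0], [-0.05, 0.0, 0.0]],
    hand_center=[0.0, 0.0, 0.0],
    object_center=[0.0, 0.0, 1.5],
)
print(setpoints)
```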

Website : https://robotics.nus.edu.sg/sge/

Robots are expected to participate in and learn from intuitive, long-term interaction with humans, and to be safely deployed in myriad social applications ranging from elderly care and entertainment to education. They are also envisioned to collaborate and co-work with human beings in the foreseeable future for productivity, service, and operations with guaranteed quality. In all these applications, robots that are stiff and tightly position-controlled will face problems such as saturation, instability, and physical failure when they interact with unknown environments. In this talk, I will present some of our recent works on controlling robots that interact with unknown environments and on physical human-robot interaction.
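One common way to avoid the problems of stiff position control during contact is to render compliant behavior, e.g., through impedance control. The following minimal sketch illustrates the idea (an assumed textbook formulation, not necessarily the controllers presented in the talk):

```python
import numpy as np

def impedance_accel(x, xd, x_des, xd_des, f_ext, M, D, K):
    """Commanded Cartesian acceleration for an impedance-controlled robot
    (desired acceleration assumed zero).

    Target behavior: M*(xdd - xdd_des) + D*(xd - xd_des) + K*(x - x_des) = f_ext,
    i.e., contact forces deflect the motion like a mass-spring-damper instead of
    being fought by a stiff position loop.
    """
    e = np.asarray(x_des, dtype=float) - np.asarray(x, dtype=float)        # position error
    e_d = np.asarray(xd_des, dtype=float) - np.asarray(xd, dtype=float)    # velocity error
    M_inv = np.linalg.inv(M)
    return M_inv @ (K @ e + D @ e_d + np.asarray(f_ext, dtype=float))

# Example: a 1-D contact pushing back with 5 N against a 0.1 m tracking error
# reduces the commanded acceleration toward the setpoint instead of causing a fight.
M = np.diag([2.0]); D = np.diag([20.0]); K = np.diag([100.0])
a = impedance_accel(x=[0.0], xd=[0.0], x_des=[0.1], xd_des=[0.0],
                    f_ext=[-5.0], M=M, D=D, K=K)
print(a)  # [2.5] m/s^2
```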

Website : http://www.irisa.fr/lagadic/team/Paolo.Robuffo_Giordano.html

This talk will give an overview of some recent theoretical and experimental results in the field of collective control of multiple quadrotor UAVs under partial control of a human operator. In particular, we will discuss some possible shared-control frameworks in which the operator provides motion inputs and receives visual and force feedback informative of the group's status. The talk will illustrate the nature and kind of problems addressed within this research line, focusing on both theoretical analyses and experimental implementations, and then discuss some future research directions.
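As a toy illustration of such a shared-control loop (a minimal sketch with assumed gains and obstacle model, not the actual framework presented), the operator's command drives the group's velocity while the haptic cue reflects how far the group deviates from it, e.g., because of obstacle avoidance:

```python
import numpy as np

def group_step(positions, v_op, obstacle_gradient, k_obs=1.0, k_force=2.0, dt=0.05):
    """One shared-control step for a group of UAVs.

    positions         : (N, 3) current UAV positions
    v_op              : (3,) operator's desired group velocity (from the haptic device)
    obstacle_gradient : function mapping a position to a repulsive velocity term
    Returns the updated positions and the force cue fed back to the operator.
    """
    positions = np.asarray(positions, dtype=float)
    v_cmd = np.array([v_op + k_obs * obstacle_gradient(p) for p in positions])
    positions = positions + dt * v_cmd
    # Force cue: mismatch between what the operator asked for and what the group did.
    v_group = v_cmd.mean(axis=0)
    force_feedback = k_force * (np.asarray(v_op, dtype=float) - v_group)
    return positions, force_feedback

# Example: in free space (no obstacles) the group follows the operator exactly,
# so the force cue is zero; near obstacles the cue grows and "pushes back".
pos, f = group_step(np.zeros((3, 3)), v_op=[1.0, 0.0, 0.0],
                    obstacle_gradient=lambda p: np.zeros(3))
print(f)  # [0. 0. 0.]
```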

Website : https://www.sheffield.ac.uk/acse/staff/ak

The study of human-swarm interaction (HSI) is a novel research field that is motivated by the difficulty of effectively controlling large robot swarms. This talk aims to contribute to the foundations of HSI by proposing and discussing core concepts and open questions. In particular, we will discuss the properties of swarms that are most relevant for HSI, such as the cognitive complexity of solving tasks with swarm systems, challenges and solutions relating to human-swarm communication, state estimation and visualization, and effective human control of swarms. For the latter, we present a taxonomy of swarm control methods. We will present selected results from the state of the art, as well as highlight remaining challenges and open problems for human-swarm interaction.

Website : http://www.arscontrol.unimore.it/site/home/people/valeria-villani.html

In this talk we will describe methodologies for letting a human operator interact with a multi-robot system (MRS) in a natural manner, without the need for any dedicated infrastructure, and while keeping his/her hands free. This is achieved using a wearable device, namely a smartwatch or a sensorized wristband, that recognizes wrist motion. Recognized motion is then translated, in a natural manner, into commands for the MRS. Feedback to the user is provided in the form of a frequency-modulated signal. We will characterize, in a systematic manner, the features that make an interaction system "natural" and how these enhance the user's interaction experience.
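To give a flavor of such a pipeline (a minimal sketch with assumed thresholds, gesture labels, and commands, not the recognition system described in the talk), wrist-motion samples from the wearable's inertial sensors can be reduced to coarse features and mapped to team-level commands:

```python
import numpy as np

# Hypothetical mapping from recognized wrist gestures to multi-robot commands.
GESTURE_TO_COMMAND = {
    "flick_up": "takeoff",
    "flick_down": "land",
    "roll_left": "spread_formation",
    "roll_right": "tighten_formation",
    "still": "hold",
}

def classify_wrist_motion(gyro_window, thresh=1.5):
    """Very coarse gesture classifier over a window of gyroscope samples (rad/s).

    gyro_window: (T, 3) angular rates; column 0 is wrist roll, column 1 is pitch.
    """
    g = np.asarray(gyro_window, dtype=float)
    mean_roll, mean_pitch = g[:, 0].mean(), g[:, 1].mean()
    if mean_pitch > thresh:
        return "flick_up"
    if mean_pitch < -thresh:
        return "flick_down"
    if mean_roll > thresh:
        return "roll_left"
    if mean_roll < -thresh:
        return "roll_right"
    return "still"

# Example: a sustained upward pitch of the wrist commands the team to take off.
window = np.tile([0.0, 2.0, 0.0], (50, 1))
print(GESTURE_TO_COMMAND[classify_wrist_motion(window)])  # takeoff
```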

Website : http://yue6.people.clemson.edu/

Human-robot collaboration integrates the best of human intelligence with the advantages of autonomous robotic systems. This talk will begin with a discussion of human trust in robots, and of modeling and measurement approaches for trust in real-time robotic operations. We consider the scenario where a human operator supervises a multi-robot team by teleoperating a selected robot while the other robots coordinate with each other autonomously. Based on these results, we present our recent works on trust-based motion planning, scheduling, and control of multi-robot systems. We will first introduce trust-based symbolic motion planning for robots, which is the process of specifying and planning robot tasks in a discrete space, then carrying them out in a continuous space in a manner that preserves the discrete-level task specifications. We then present co-design approaches for the scheduling and control of multi-robot systems such that human trust in each robot is kept within an acceptable range. Last but not least, we show a trust-based leader selection strategy for multi-robot bilateral haptic teleoperation.
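As a rough illustration of the kind of real-time trust model such approaches build on (a simple performance-driven update with assumed gains and threshold, not the specific model presented in the talk), trust in each robot can be updated from its recent performance and used to decide where the operator should intervene:

```python
def update_trust(trust, performance, prev_performance, a=0.1, b=0.3, c=0.2):
    """One-step linear trust update for a single robot.

    trust            : current trust level in [0, 1]
    performance      : current task performance in [0, 1]
    prev_performance : performance at the previous step
    """
    new_trust = (1 - a) * trust + b * performance + c * (performance - prev_performance)
    return min(1.0, max(0.0, new_trust))  # keep trust within [0, 1]

def select_robot_to_teleoperate(trusts, threshold=0.4):
    """Pick the robot with the lowest trust, if it has fallen below the threshold."""
    worst = min(range(len(trusts)), key=lambda i: trusts[i])
    return worst if trusts[worst] < threshold else None

# Example: robot 1's performance drop drags its trust below the threshold,
# so the operator is directed to teleoperate it while the others stay autonomous.
trusts = [0.8, update_trust(0.5, performance=0.1, prev_performance=0.7), 0.9]
print(trusts[1], select_robot_to_teleoperate(trusts))  # 0.36 1
```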

Website : http://users.ece.gatech.edu/~fumin/

Significant recent advances in human-robot interaction call for convenient mobile platforms that move in three-dimensional space to support experiments and demonstrations. Unmanned aerial vehicles (UAVs) such as quad-rotors and multi-copters have become popular for this purpose. However, the indoor use of these UAVs is limited by flight duration per battery charge (typically less than 20 minutes) and by safety concerns for humans sharing the same lab space. Safety nets or cages provide human protection, but sacrifice the potential for human-robot interaction experiments. We have developed the Georgia Tech Miniature Autonomous Blimp (GT-MAB) as a flying vehicle for indoor experiments that supports safe interaction between humans and robot swarms. The GT-MAB has a relatively long flight duration of up to two hours per battery charge. Furthermore, the blimps are naturally cushioned and do not cause any pain when they collide with humans, and they offer a fun experience that often encourages physical contact. We have developed vision-based feedback control laws that enable the GT-MAB to detect and track humans. An online learning algorithm is developed to select robot movements that generate interesting motion patterns reacting to human movements.
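A minimal sketch of the kind of vision-based feedback such a blimp can use (illustrative only; the gains, camera geometry, and detector interface are assumptions, not the GT-MAB implementation) keeps a detected person centered in the onboard camera image and at a comfortable distance:

```python
def human_tracking_command(bbox, image_width, image_height,
                           k_yaw=0.002, k_climb=0.002, k_fwd=0.5, desired_frac=0.05):
    """Compute (yaw_rate, vertical_speed, forward_speed) commands from one detection.

    bbox: (x_min, y_min, x_max, y_max) of the detected human, in pixels.
    """
    x_min, y_min, x_max, y_max = bbox
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    # Pixel offsets from the image center drive yaw and altitude corrections.
    yaw_rate = k_yaw * (cx - image_width / 2.0)
    vertical_speed = -k_climb * (cy - image_height / 2.0)  # image y grows downward
    # The fraction of the image filled by the detection is a rough distance proxy.
    area_frac = ((x_max - x_min) * (y_max - y_min)) / float(image_width * image_height)
    forward_speed = k_fwd * (desired_frac - area_frac)
    return yaw_rate, vertical_speed, forward_speed

# Example: a person detected off-center and far away -> turn toward them and move closer.
print(human_tracking_command((400, 100, 460, 220), image_width=640, image_height=480))
```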