Bruno Siciliano, University of Naples Federico II, Italy
Nonprehensile manipulation of deformable objects: Achievements and perspectives from the RoDyMan project
The state of the art of robotic manipulation is still rather far from human dexterity in the execution of complex motions, such as those required in dynamic manipulation tasks. Dynamic manipulation is considered the most complex category of manipulation, requiring ad-hoc controllers and specialized hardware. In the case of non-prehensile manipulation or non-rigid objects, the task becomes even more challenging. This limits the opportunities for wide adoption of robots in environments co-habited with humans.
This talk presents the results achieved within the RoDyMan project on planning and control strategies for robotic non-prehensile manipulation. The project aims to advance the state of the art in non-prehensile dynamic manipulation of rigid and deformable objects, further enhancing the possibility of employing robots in anthropic environments. The final demonstrator of the RoDyMan project will be an autonomous pizza maker. The lessons learned so far are highlighted to pave the way towards future research directions and critical discussion.
Bio: Professor Bruno Siciliano is Director of the Interdepartmental Center for Advances in RObotic Surgery (ICAROS), as well as Coordinator of the Laboratory of Robotics Projects for Industry, Services and Mechatronics (PRISMA Lab), at the University of Naples Federico II. A Fellow of the scientific societies IEEE, ASME, and IFAC, he has received numerous international prizes and awards, and was President of the IEEE Robotics and Automation Society from 2008 to 2009. Since 2012 he has been on the Board of Directors of the European Robotics Association. He has delivered more than 150 keynotes and has published more than 300 papers and 7 books. His book "Robotics" is among the most widely adopted academic texts worldwide, while his edited volume "Springer Handbook of Robotics" received the highest recognition for scientific publishing: the 2008 PROSE Award for Excellence in Physical Sciences & Mathematics. Over the last ten years his research team has secured 18 projects funded by the European Union, for a total of €10M in grants, including an Advanced Grant from the European Research Council.
Antonio Bicchi, Istituto Italiano di Tecnologia/University of Pisa, Italy
From Human Robot Interaction to Human Robot Integration
Recent research advancements in the field of robotics have made it possible to build machines that not only approach or surpass the computational intelligence of humans, but are also capable of ever more natural motion, exploiting the "physical" intelligence embodied in their structure. Informed by neuroscientific models of human behavior in interaction with the physical world, new robots can safely touch humans and the environment to physically act on them. New sensing and display tools make it possible for senses other than vision to share information about the world between a robot and a human. The union of such technologies, together with a deeper understanding of how to interface humans and machines, is enabling a new relationship between humans and robots, one that is really more an integration than an interaction in the classical sense. We will consider examples of partial integration, as in prosthetics and rehabilitation, augmentation with supernumerary limbs, augmentation with exoskeletons, and robotic avatars, with the robot executing the human's intended actions and the human perceiving the context of their actions and their consequences.
Bio: Antonio Bicchi is a scientist interested in Automatic Control (the science and engineering of systems), in Haptics (the science and technology of the sense of touch), and in Robotics (i.e., the machines that are not here yet). After graduating from the University of Bologna, he worked at the MIT AI Lab in Cambridge, USA, and is now Professor of Robotics at the University of Pisa. Since 2009 he has led the Soft Robotics Lab at the Italian Institute of Technology in Genoa, and since 2013 he has been Adjunct Professor at Arizona State University in Tempe, Arizona.
He has published more than 400 peer-reviewed papers in international journals, books, and refereed conference proceedings (h=60 on Google Scholar). In 2016 he founded the IEEE Robotics and Automation Letters, which within two years became the largest journal in the field. His 2012-2017 ERC Advanced Grant "SoftHands" established the basis for the theory of soft synergies in human manipulation and led to the design of a new generation of robotic and prosthetic hands.
He has served as Vice President for Publications of the IEEE Robotics and Automation Society (RAS), as President of the Italian Society of Researchers in Automatic Control, as Editor-in-Chief of the Conference Editorial Board for IEEE RAS, and as VP Membership and Distinguished Lecturer of IEEE RAS. He is Editor-in-Chief of the series "Springer Briefs on Control, Automation and Robotics," and has served on the editorial boards of top-ranked journals in robotics (the International Journal of Robotics Research, the IEEE Transactions on Robotics and Automation, the IEEE Transactions on Automation Science and Engineering, and the IEEE Robotics and Automation Magazine). He organized and co-chaired the first WorldHaptics Conference (2005) and Hybrid Systems: Computation and Control (2007). He is the recipient of several awards and honors, and has been a Fellow of IEEE since 2005.
Yaochu Jin, University of Surrey, UK
An evolutionary developmental perspective of brain-like intelligence
This talk discusses the organizational principles of neural systems from the evolutionary developmental perspective. We first provide a brief introduction to the evolution and development of the human brain and nervous system. Then, computational models for brain-body co-evolution and co-development are presented. Our experimental results reveal that energy minimization is a main principle behind the self-organization of nervous systems, and that there is a close coupling between body and brain in evolution and development. Finally, we present computational models of neural plasticity embedded in reservoir computing and discuss their influence on the learning performance of spiking neural networks.
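The reservoir-computing setting mentioned above can be sketched as a minimal echo state network whose recurrent weights adapt with a local, Oja-style plasticity rule. This is an illustrative toy under assumed parameters (reservoir size, spectral radius, learning rate, the plasticity rule itself), not the models presented in the talk:

```python
import numpy as np

# Toy echo state network (reservoir computing) with an optional Oja-style
# local plasticity rule on the recurrent weights. All sizes, rates, and the
# plasticity rule are illustrative assumptions, not the talk's models.
rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u_seq, eta=0.0):
    """Drive the reservoir with inputs u_seq; eta > 0 enables plasticity."""
    global W
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        if eta > 0:  # Oja-like update: Hebbian term with a decay that bounds W
            W += eta * (np.outer(x, x) - (x ** 2)[:, None] * W)
        states.append(x.copy())
    return np.array(states)

# Train only a linear readout (ridge regression) to predict the next input
u = np.sin(0.2 * np.arange(300))
X = run_reservoir(u[:-1])
y = u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out

# A short run with plasticity enabled changes W while keeping it bounded
_ = run_reservoir(u[:50], eta=1e-3)
```

The design point this illustrates is the one exploited in reservoir computing: only the readout is trained by regression, while the recurrent dynamics are fixed or shaped by unsupervised plasticity.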
Bio: Yaochu Jin received the B.Sc., M.Sc., and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 1988, 1991, and 1996, respectively, and the Dr.-Ing. degree from Ruhr University Bochum, Germany, in 2001. He is a Professor in Computational Intelligence with the Department of Computer Science, University of Surrey, Guildford, U.K., where he heads the Nature Inspired Computing and Engineering Group. He is also a Finland Distinguished Professor at the University of Jyvaskyla, Finland, and a Changjiang Distinguished Professor at Northeastern University, China. His research interests lie in the interdisciplinary areas that bridge the gap between computational intelligence, computational neuroscience, and computational systems biology. He is also particularly interested in nature-inspired, real-world-driven problem solving. Dr Jin is an IEEE Distinguished Lecturer (2017-2019) and was Vice President for Technical Activities of the IEEE Computational Intelligence Society (2014-2015). He received the 2014 and 2016 IEEE Computational Intelligence Magazine Outstanding Paper Awards, the 2017 IEEE Transactions on Evolutionary Computation Outstanding Paper Award, and the Best Paper Award of the 2010 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology. He is a Fellow of IEEE.
Sylvain Calinon, Idiap Research Institute, Switzerland
Acquisition of robot skills from few human demonstrations
Many human-centered robotic applications would benefit from the development of robots that can acquire new movements and skills from human demonstration, and that can reproduce these movements in new situations. Such learning and adaptation challenges require the development of intuitive interfaces to acquire meaningful demonstrations, of movement primitive representations that can efficiently exploit the structure and geometry of the acquired data, and of control techniques that can exploit the possible variations and coordination patterns in the movement. Moreover, the developed models need to serve several purposes (recognition, prediction, generation) and be compatible with different learning strategies (imitation, exploration). I will present an approach combining model predictive control, statistical learning, and differential geometry to pursue this goal. I will illustrate the proposed approach with various human-robot interaction applications, including robots that are close to us (human-robot collaboration, robots for dressing assistance), part of us (interaction with a prosthetic hand), or far from us (shared control of a teleoperated bimanual robot in deep water).
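The statistical-learning ingredient above can be illustrated with a toy sketch: a few synthetic noisy demonstrations of the same motion are encoded with a time-driven Gaussian basis, and the motion is reproduced by Gaussian mixture regression-style blending. The demonstrations, the number of components, and the bandwidth are all assumptions made for the example, not the models used in the talk:

```python
import numpy as np

# Toy learning-from-demonstration sketch: encode three noisy demonstrations
# with Gaussian components placed along time, then reproduce the motion by
# GMR-style blending of the per-component output means. Illustrative only.
rng = np.random.default_rng(1)
T = 100
t = np.linspace(0.0, 1.0, T)

# Three noisy demonstrations of the same (synthetic) reaching motion
demos = [np.sin(np.pi * t) + 0.02 * rng.standard_normal(T) for _ in range(3)]

K, sigma = 8, 0.08                      # components along time, bandwidth
centers = np.linspace(0.0, 1.0, K)

def reproduce(t_query):
    """Blend locally estimated outputs by time-based responsibilities."""
    tt = np.concatenate([t] * len(demos))
    xx = np.concatenate(demos)
    # responsibilities of each component for the training points
    h = np.exp(-0.5 * ((tt[None, :] - centers[:, None]) / sigma) ** 2)
    h /= h.sum(axis=0, keepdims=True)
    mu = (h * xx).sum(axis=1) / h.sum(axis=1)  # per-component output mean
    # blend component means at the query times
    hq = np.exp(-0.5 * ((t_query[None, :] - centers[:, None]) / sigma) ** 2)
    hq /= hq.sum(axis=0, keepdims=True)
    return (hq * mu[:, None]).sum(axis=0)

traj = reproduce(t)  # smooth reproduction averaging out demonstration noise
```

The reproduced trajectory averages the variability across demonstrations; in richer models the retained covariances also indicate where the movement allows variation, which a controller can then exploit.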
Bio: Dr Sylvain Calinon is a Senior Researcher at the Idiap Research Institute (http://idiap.ch). He is also a lecturer at the Ecole Polytechnique Federale de Lausanne (EPFL), and an external collaborator at the Department of Advanced Robotics (ADVR), Italian Institute of Technology (IIT). From 2009 to 2014, he was a Team Leader at ADVR, IIT.
From 2007 to 2009, he was a Postdoc at the Learning Algorithms and Systems Laboratory, EPFL, where he obtained his PhD in 2007. He is the author of 100+ publications at the crossroads of robot learning, adaptive control, and human-robot interaction, with recognition including Best Paper Awards in the Journal of Intelligent Service Robotics (2017) and at IEEE Ro-Man'2007, as well as Best Paper Award Finalist at ICRA'2016, ICIRA'2015, IROS'2013, and Humanoids'2009. He currently serves as Associate Editor for IEEE Transactions on Robotics (T-RO), IEEE Robotics and Automation Letters (RA-L), Intelligent Service Robotics (Springer), and Frontiers in Robotics and AI. Personal website: http://calinon.ch