Figure 3.

Position-Based Visual Servoing

In a position-based visual servoing system, also called 3D visual servoing, the camera acts as a sensor used for pose estimation.

Figure 4.

Image-Based Visual Servoing

In contrast to position-based visual servoing, image-based visual servoing is better suited when a geometric model of the task to be performed is not available.

Figure 5.

Figure 6.
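The image-based scheme can be sketched in a few lines. The following is a minimal illustration (not code from any surveyed work), assuming normalized point features with a known depth estimate Z; the interaction matrix is the classical one for an image point:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, Z, gain=0.5):
    """Camera velocity twist v = -gain * L^+ * e for stacked point features."""
    e = (features - desired).reshape(-1)                      # image feature error
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in features])
    return -gain * np.linalg.pinv(L) @ e

# Four image points, one slightly off its desired position.
s  = np.array([[0.11, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
sd = np.array([[0.10, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
v = ibvs_velocity(s, sd, Z=1.0)
print(v.shape)  # 6-DOF camera velocity twist
```

When the error is zero the commanded velocity is zero; the depth Z used here is an assumed estimate, which is exactly the quantity the online-depth-estimation works discussed later address.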
The Visibility Problem

When the camera-object depth is estimated online, the image features tend to follow straight-line trajectories during the visual servoing task [29]. The selection of the parameters of the weighting function is discussed in [41]. The use of panoramic cameras may avoid the appearance of outliers.

The Problem of Finding Adequate Visual Features

An important issue in the temporal efficiency of visual servoing systems is the complexity of the objects in the scene.
Stereo Visual Servoing

Stereo rig configurations have been widely applied in the literature to obtain 3D information from the scene.

Tracking of Objects: Movement Estimators

Previous research on real-time object motion tracking, such as [68,69], has driven the development of new algorithms designed for processing high-velocity image sequences.

Table 1. Summary of Spanish research on visual servoing.
Sensors | Technique | Application

[9] Eye-in-hand configuration. Feedforward neural network. Industrial inspection.
Reinforcement learning-based neural network. Grasping of an object on a table. Autonomous submarine for underwater cable inspection tasks.
[12] Eye-in-hand configuration. Discrete-time cellular neural networks. Test of the proposed visual servoing scheme.
Image-based visual servoing. Visual servoing open architecture.
Position-based visual servoing with change of the camera-object frame. Simple tests of the proposed approach.
Test of the proposed algorithms. Testbed for a classic position-based scheme.
Position-based visual servoing. Test of the proposed controller.
Internet Tele-Lab for learning visual servoing techniques.
Testbed for an autonomous satellite repairer.
Position-based direct visual servoing. Visual control of a 2-degree-of-freedom robot.
RoboTenis: a parallel robot playing table tennis.
Image-based visual servoing with an interaction matrix estimated online using the properties of epipolar geometry. Test of the proposed online interaction matrix estimation.
Image-based visual servoing with online camera calibration. Test of the proposed algorithm.
Image-based visual servoing solving the visibility problem. Test of visual servoing tasks with outliers.
Image-based visual servoing with panoramic cameras. Safety issues for a robot arm moving in close proximity to human beings.
Image-based visual servoing with structured-light external visual features. Plane-to-plane positioning tasks.
Image-based visual servoing based on homography decomposition. Simulation of the proposed control scheme.
Tracking of predefined paths.
Tracking of predefined paths in the change of a faulty light bulb.
Stereo image-based visual servoing with grasping-point features. Grasping of different objects.
Visual servoing of an autonomous helicopter. [71] Eye-to-hand configuration. Automatic chaser car in a slot game.
Peg-in-hole task in motion. Tests of the proposed motion estimator. Tracking of a desired path in the image. Tracking of a mobile object placed on a turntable.

Force Control

With the current trend toward ever more autonomous robot manipulators, control of the physical interaction between the robot and the environment is essential.
Figure 7.

Indirect Force Control

One of the most widely used indirect force controllers is impedance control [94].

Figure 8.

Direct Force Control

Direct force control aims to regulate the forces and moments in a robot interaction task.

Table 2. Summary of Spanish research on force control.
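As an illustration of the impedance concept (a minimal one-dimensional sketch, not any specific controller from the surveyed works), the target dynamics M x'' + B x' + K (x - x_d) = f_ext make the end-effector behave like a mass-spring-damper around the desired pose, so at steady state the virtual spring balances the contact force:

```python
import numpy as np

def simulate_impedance(M=1.0, B=20.0, K=100.0, x_d=0.0, f_ext=5.0,
                       dt=1e-3, steps=5000):
    """Simulate M*x'' + B*x' + K*(x - x_d) = f_ext with explicit Euler.

    The (critically damped, with these gains) virtual mass-spring-damper
    settles where the spring force K*(x - x_d) balances f_ext.
    """
    x, xdot = x_d, 0.0
    for _ in range(steps):
        xddot = (f_ext - B * xdot - K * (x - x_d)) / M
        xdot += dt * xddot
        x += dt * xdot
    return x

x_final = simulate_impedance()
# Steady state: K*(x - x_d) = f_ext, i.e. x = x_d + f_ext/K = 0.05
print(round(x_final, 3))
```

The stiffness K sets how much the robot yields per unit of contact force, which is the tuning knob that distinguishes a compliant interaction from a stiff position controller.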
Neural networks. Fine motion assembly tasks. Impedance control. Test of a contact force estimator.
Test of a self-calibrated contact force estimator. Bone drilling in a surgical repairing task. Proportional pure force control. Control of a climbing and walking robot. Control of free and constrained motion of a flexible robot. Geometric analytical models. Admittance control with the force controller in the joint space. Control of a legged robot. Open software architecture to test robot interaction tasks. Test architecture for the analysis of the mechanical response in car seats.
Screwing in an assembly task. Humanoid robot for common household furniture tasks. Service robot for shaving and feeding tasks. Neural networks. Fine motion assembly tasks.

Tactile Control

Tactile sensing is a technique that determines the physical properties of objects through their contact with the world [ ].

Table 3. Classification of tactile sensors.

Figure 9.

Tactile Sensing for Object Identification

Identification of the object the robot is touching can be implemented by two different techniques: geometric modeling and neural network classification.
Tactile Sensing for Manipulation Control

In the approaches described in the previous section, tactile information is processed in order to classify objects according to a physical property (shape, stiffness, ...).

Table 4. Summary of Spanish research on tactile control.

Intrinsic tactile sensing for normal vector computation. Pipe crawling robot.
Method with three phases: noise cancellation, image processing, and classification by an LVQ network. Classification of the local shapes of objects gripped by a robotic hand. Neural network organized as a topographic map of joint positions and contact forces. Grasping of objects of different stiffness with a predefined force. Force-pressure control law for controlling the applied force and maximizing the contact surface.
Robotic assistant that picks up books in a library. Control algorithm which detects grasping events from sensor data and generates feedback for the user. Clinical prosthesis which provides the user with feedback. Slipping detection alarm for manipulation tasks.

Multi-Sensor Control

The previous sections have presented the research developed by Spanish researchers in visual servoing, force control and tactile control separately.

Figure

Visual-Force-Tactile Control

The combination of visual, force and tactile information is the most complete strategy for controlling robots that interact with the environment.
Table 5. Summary of Spanish research on multi-sensor control.

Change of a faulty bulb in a streetlamp. Different interaction tasks tracking a desired path. Different interaction tasks tracking a desired path in contact with an object. Shared visual-force control. Disassembly task. Service robot opening a door of a wardrobe. Library assistant robot. Neural networks with VAM structure which relate visual and tactile data to joint positions. Reaching and grasping tasks of unknown objects. Position-vision-tactile hybrid control modified by an impedance force control. Service robot which opens a sliding door.
Conclusions

This paper presents a detailed review of the control strategies developed by Spanish researchers to control the movements of robotic systems based on the information registered by sensors.

References and Notes

1. Christensen H. Sensing and Estimation. In: Siciliano B. Handbook of Robotics. Springer-Verlag; Berlin Heidelberg, Germany. Xie M. Fundamentals of Robotics: Linking Perception to Action.
Hill J. Real time control of a robot with a mobile camera. Proceedings of the 9th. Kopacek P. Intelligent, flexible disassembly. Shirai Y. Guiding a robot by visual feedback in assembling tasks. Pattern Recogn. Hutchinson S. A tutorial on visual servo control. IEEE Trans. Chaumette F. Visual Servoing and Visual Tracking. Arbib M. Wells G. Promising research—vision-based robot positioning using neural networks.
Image Vis. Martinez-Marin T. Robot docking by reinforcement learning in a visual servoing framework. December, ; pp. El-Fakdi A. Policy gradient based Reinforcement Learning for real autonomous underwater cable tracking. September, ; pp. Lopez-Garcia J. July, ; pp. Sanderson A. Image-based visual servo control using relational graph error signals. Cervera E. Distributed visual servoing: A cross-platform agent-based implementation. August, ; pp. A cross-platform network-ready visual servo simulator. October, ; pp. Visual servoing with indirect image control and a predictable camera trajectory.
Improving image-based visual servoing with three-dimensional features. Stereo visual servoing with oriented blobs. June, ; pp. Vargas M. Modelling and control of a visual servoing system.
Bachiller M. A modular scheme for controller design and performance evaluation in 3D visual servoing. Wirz R. Remote programming of an Internet Tele-Lab for learning visual servoing techniques: a case study. Abderrahim M. Experimental simulation of satellite relative navigation using computer vision. RoboTenis: optimal design of a parallel robot with high performance. Sebastian J. Parallel robot high speed object tracking. Visual servoing of a parallel robot system.
European Control Conference; Kos, Greece. July ; pp. Montijano E. Visual servo control. Basic approaches. IEEE Robot. Marchand E. Avoiding robot joint limits and kinematic singularities in visual servoing. Potential problems of stability and convergence in image-based and position-based visual servoing. In: Kriegman D. The confluence of vision and control.
Springer-Verlag, Inc. Pari L. Image based visual servoing: A new method for the estimation of the image jacobian in dynamic environments. Uncalibrated visual servoing using the fundamental matrix. Image based visual servoing: Estimated image Jacobian by using fundamental matrix VS analytic Jacobian. Echegoyen Z. Modeling a legged robot for visual servoing.
In: Gervasi O. August, ; Berlin, Germany: Springer-Verlag; Pomares J. Adaptive visual servoing by simultaneous camera calibration. April, ; pp. Mezouar Y. Path planning for robust image-based control. Malis E. Visual servoing invariant to changes in camera-intrinsic parameters. Garcia-Aracil N. Continuous visual servoing despite the changes of visibility in image features. Perez C. The visibility problem in visual servoing. Parameters selection and stability analysis of invariant visual servoing with weighted features.
Safety for a robot arm moving amidst humans by using panoramic vision. May, ; pp. Combining pixel and depth information in image-based visual servoing. Pages J. Plane-to-plane positioning from image-based visual servoing and structured light. Optimizing plane-to-plane positioning tasks by image-based visual servoing and structured light.
Robust decoupled visual servoing based on structured light. An approach to visual servoing based on coded light. Visual servoing based on an analytical homography decomposition. Lopez-Nicolas G. Switched homography-based visual control of differential drive vehicles with field-of-view constraints. Switching visual control based on epipoles for mobile robots.
Nonholonomic epipolar visual servoing. Merino L. Vision-based multi-UAV position estimation. Becerra H. Ortiz A. A vision system for an underwater cable tracker. Movement flow-based visual servoing to track moving objects. Time independent tracking using 2-D movement flow-based visual servoing. Garcia G. A new time-independent image path tracker to guide robots using visual servoing. Automatic robotic tasks in unstructured environments using an image path tracker. Control Eng. Schramm F. Ensuring visibility in calibration-free path planning for image-based visual servoing.
Maru N. Manipulator control by visual servoing with stereo vision. Stacking Jacobians properly in stereo visual servoing. Is 3D useful in stereo visual control? Recatala G. Filter-based control of a gripper-to-object positioning movement. Mejias L. Visual servoing of an autonomous helicopter in urban areas using feature tracking. Field Robot. Campoy P. Nickels K. Model-based tracking of complex articulated objects. Isard M. Papanikolopoulos N. Adaptive robotic visual tracking: theory and experiments.
Visual servoing and force control fusion for complex insertion tasks. Improvement of the visual servoing task with a new trajectory predictor—The Fuzzy Kalman Filter. Bensalah F. Compensation of abrupt motion changes in target tracking by visual servoing. Improving tracking trajectories with motion estimation.
Perez-Vidal C. Visual control of robots with delayed images. Kobayashi A. Handbook on Experimental Mechanics. Society for Experimental Mechanics. Perception-based learning for motion in contact in task planning. Garcia J. Sensor fusion for compliant robot motion control. Contact force estimation for compliant robot motion control. Self-calibrated robotic manipulator force observer. Uchiyama M. Dynamic force sensing for high-speed robot manipulation using Kalman filtering techniques. Automatic calibration procedure for a robotic manipulator force observer.
Fraile J. Experiences in the development of a robotic application with force control for bone drilling. Galvez J. A force controlled robot for agile walking on rough terrain. In: Ollero A. Montes H. Reliable, built-in, high-accuracy force sensing for legged robots; Proceedings of the 7th International Conference on Climbing and Walking Robots; Madrid, Spain. Garcia A. Experimental testing of a gauge based collision detection mechanism for a new three-degree-of-freedom flexible robot.
Payo I. Force control of a very lightweight single-link flexible arm based on coupling torque feedback. Nabulsi S. High-resolution indirect feet-ground interaction measurement for hydraulic-legged robots. Jinjun S. Design for robust component synthesis vibration suppression of flexible structures with on-off actuators. Suarez R. Using configuration and force sensing in assembly task planning and execution.
Villani L. Force control. Springer; Berlin, Germany: De Fazio T. The instrumented remote center of compliance. Hogan N. Impedance control—an approach to manipulation. ASME J. Lawrence D. Impedance control stability properties in common implementations.
A new legged-robot configuration for research in force distribution. Design and validation of an open architecture for an industrial robot. Valera A. Development of an experimental test bench by means of force control in an industrial robot for the analysis of the mechanical response in car seats. Puente S. Automatic screws removal in a disassembly process. Raibert M. Mason M. Compliance and force control for computer controlled manipulators. Man Cybern. Bruyninckx H. Prats M. Compliant interaction in household environments by the armar-III humanoid robot. Amat J. Human robot interaction from visual perception.
Patarinski S. Robot force control: a review. Canny J. New lower bound techniques for robot motion planning problems. Sensor-based learning for practical planning of fine motions in robotics. Lee M. Review Article Tactile sensing for mechatronics—a state of the art survey. Cutkosky M. Force and tactile sensors. Springer-Verlag; Berlin, Germany: Howe R.
Dynamic tactile sensing: perception of fine surface features with stress rate sensing. Puangmali P. State-of-the-art in force and tactile sensing for minimally invasive surgery.
IEEE Sens. Dahiya R. Tactile sensing for robotic applications. In: Rocha J. Tegin J. Tactile sensing in intelligent robotic manipulation—a review. Bicchi A. Intrinsic contact sensing for soft fingers. Intrinsic tactile sensing for the optimization of force distribution in a pipe crawling robot. Jimenez A. Featureless classification of tactile contacts in a gripper using neural networks. Sens. Actuators A: Phys.
Pedreno-Molina J. A neural estimator of object stiffness applied to force control of a robotic finger with opponent artificial muscles.

His research focuses on developing rigorous but practical tools for nonlinear systems analysis and control. These have included key advances and experimental demonstrations in the contexts of sliding control, adaptive nonlinear control, adaptive robotics, machine learning, and contraction analysis of nonlinear dynamical systems.

Tutorial 2: Benjamin Recht.

Title: Optimization Perspectives on Learning to Control.

Given the dramatic successes in machine learning over the past half decade, there has been a resurgence of interest in applying learning techniques to continuous control problems in robotics, self-driving cars, and unmanned aerial vehicles.
Though such applications appear to be straightforward generalizations of reinforcement learning, it remains unclear which machine learning tools are best equipped to handle decision making, planning, and actuation in highly uncertain dynamic environments. This tutorial will survey the foundations required to build machine learning systems that reliably act upon the physical world. The primary technical focus will be on numerical optimization tools at the interface of statistical learning and dynamical systems. We will investigate how to learn models of dynamical systems, how to use data to achieve objectives in a timely fashion, how to balance model specification and system controllability, and how to safely acquire new information to improve performance.
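One of the simplest instances of learning a model of a dynamical system, offered here as an editorial toy illustration rather than material from the tutorial itself, is least-squares identification of a linear system x_{t+1} = A x_t + B u_t from trajectory data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth (unknown to the learner) linear system x_{t+1} = A x_t + B u_t.
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])

# Collect a trajectory driven by random exploratory inputs.
T = 200
X = np.zeros((T + 1, 2))
U = rng.normal(size=(T, 1))
for t in range(T):
    X[t + 1] = A_true @ X[t] + B_true @ U[t]

# Least squares: stack [x_t, u_t] as regressors and regress x_{t+1} on them.
Z = np.hstack([X[:-1], U])                       # (T, 3) regressor matrix
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta[:2].T, Theta[2:].T          # recovered system matrices
```

With noise-free data and persistently exciting inputs, the recovered (A_hat, B_hat) match the true system up to numerical precision; the interesting questions the tutorial raises start once noise, nonlinearity, and limited data enter.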
We will close by listing several exciting open problems that must be solved before we can build robust, reliable learning systems that interact with an uncertain environment.

Ben's research group studies the theory and practice of optimization algorithms with a focus on applications in machine learning, data analysis, and controls.

Tutorial 3: Emo Todorov.

Title: Sensorimotor intelligence via model-based optimization.

Model-free reinforcement learning has produced surprisingly good results for a brute-force method.
However, it appears to be reaching an asymptote that is not competitive with model-based optimization. Furthermore, it is mostly limited to simulation, where a model is available by definition. So we might as well take full advantage of that model, and reserve model-free methods for fine-tuning on real data.
In this tutorial I will discuss state-of-the-art methods that become available once we admit that we have a model. As with any other form of optimization, the single most important ingredient is access to analytical derivatives. This is standard in supervised learning, for example, but general-purpose physics simulators are difficult to differentiate.
Nevertheless this is now possible in MuJoCo as well as some more limited simulators, opening up possibilities for much more efficient optimization. Another essential ingredient in the control context is inverse dynamics. This enables trajectory optimization methods where the consistency between states and controls no longer needs to be enforced numerically, and instead one has to enforce under-actuation constraints which are lower-dimensional. Another challenge, specific to problems with contact dynamics, is that contacts result in very complex optimization landscapes that can be difficult to navigate even for a full Newton method.
Unlike the situation in neural networks where saddle points appear to be the problem, here the problem is harder: the gradient is large yet it changes rapidly in non-linear ways that are not captured by the Hessian and we don't have 3rd-order methods. This can be alleviated using continuation methods, where the physics model is smoothed early in optimization and gradually made harder while tracking the solution.
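The continuation idea can be shown on a toy nonsmooth objective (an editorial example, not the contact-dynamics case): replace |x - target| by the smooth surrogate sqrt((x - target)^2 + eps^2), solve with Newton steps, then repeatedly shrink eps while warm-starting from the previous solution:

```python
import numpy as np

def smooth_abs(x, target, eps):
    """sqrt((x-target)^2 + eps^2): a smoothed version of |x - target|."""
    d = x - target
    r = np.sqrt(d * d + eps * eps)
    return r, d / r, (eps * eps) / r**3          # value, gradient, Hessian

def continuation_minimize(target, x0=5.0, eps0=10.0):
    """Newton on the smoothed problem; halve eps and warm-start each stage."""
    x, eps = x0, eps0
    while eps > 1e-6:
        for _ in range(50):                      # Newton steps on the smooth surrogate
            _, g, h = smooth_abs(x, target, eps)
            x -= g / h
        eps *= 0.5                               # harden the problem, keep the warm start
    return x

x_star = continuation_minimize(target=2.0)
```

Starting with a large eps keeps each Newton solve inside its basin of convergence; attacking the nearly nonsmooth problem (tiny eps) directly from a distant initial guess makes the same Newton iteration diverge, which is precisely the motivation for tracking the solution through a smoothing schedule.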
The algorithm best suited for this problem class is Gauss-Newton. Putting all the ingredients together, one iteration of trajectory optimization can be performed in a few milliseconds on a single computer. The same machinery can be used to solve state estimation problems and system identification problems, in addition to control problems. This is done by modifying the cost function and keeping everything else the same.
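A generic Gauss-Newton iteration for a nonlinear least-squares cost can be sketched as follows (an illustration of the algorithm itself, not of any particular trajectory-optimization code; the curve-fitting example is an editorial choice):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=50):
    """Minimize 0.5*||r(x)||^2 by solving (J^T J) dx = -J^T r each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)  # Gauss-Newton step
        x = x + dx
    return x

# Example: fit y = exp(a*t) + b to noise-free synthetic data.
t = np.linspace(0.0, 1.0, 50)
y = np.exp(0.7 * t) + 0.3

def residual(p):
    a, b = p
    return np.exp(a * t) + b - y

def jacobian(p):
    a, b = p
    return np.column_stack([t * np.exp(a * t), np.ones_like(t)])

p_hat = gauss_newton(residual, jacobian, x0=[0.0, 0.0])
```

The appeal for the problems described above is that only first derivatives of the residual are needed, while the J^T J term supplies a positive semidefinite curvature approximation that is exact at a zero-residual solution.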
A downside of this framework is that it involves more mathematics, physics, optimization and software engineering than the community has gotten used to. A possible solution is to produce software that does it automatically, leaving parameter tuning to the user. We are in the process of developing such software called Optico and will show demos at the tutorial. Currently he is on leave from academia, to develop the MuJoCo physics simulator as well as model-based optimization software built on top of it.
Tutorial 4: Masashi Sugiyama.

Title: Machine learning from weak supervision: towards accurate classification with low labeling costs.
Recent advances in machine learning with big labeled data allow us to achieve human-level performance in various tasks such as speech recognition, image understanding, and natural language translation. On the other hand, there are still many application domains, including robotics, where human labor is involved in the data acquisition process and thus the use of massive labeled data is prohibited. In this tutorial, I will introduce recent advances in classification techniques from weak supervision, including classification from two sets of unlabeled data, classification from positive and unlabeled data, and a novel approach to semi-supervised classification.
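As a small numerical illustration of the positive-unlabeled setting (an editorial sketch, not code from the tutorial): with a known class prior pi, the risk of a classifier can be estimated from positive and unlabeled data alone, using the identity (1 - pi) R_n(-1) = R_u(-1) - pi R_p(-1), where R_p, R_n, R_u denote expected losses over the positive, negative, and unlabeled distributions:

```python
import numpy as np

rng = np.random.default_rng(1)

def zero_one_loss(scores, labels):
    """0-1 loss of sign(score) against labels in {+1, -1}."""
    return float(np.mean(np.sign(scores) != labels))

# Synthetic 1-D data: positives ~ N(+1, 1), negatives ~ N(-1, 1).
pi = 0.4                                     # class prior P(y = +1), assumed known
n_p, n_u = 20000, 50000
x_p = rng.normal(+1.0, 1.0, n_p)             # labeled positive sample
is_pos = rng.random(n_u) < pi                # latent labels of the unlabeled pool
x_u = np.where(is_pos, rng.normal(+1.0, 1.0, n_u), rng.normal(-1.0, 1.0, n_u))

g = lambda x: x                              # a fixed linear scorer g(x) = x

# PU risk estimate: R = pi*R_p(+1) + max(0, R_u(-1) - pi*R_p(-1)),
# with the max(0, .) as a non-negative correction of the negative-part term.
risk_pu = pi * zero_one_loss(g(x_p), +1) + max(
    0.0, zero_one_loss(g(x_u), -1) - pi * zero_one_loss(g(x_p), -1))

# Oracle risk computed with the latent labels (unavailable in practice).
risk_oracle = zero_one_loss(g(x_u), np.where(is_pos, 1, -1))
```

On this synthetic example the PU estimate agrees closely with the oracle risk, which is the mechanism that lets such methods train classifiers without a single labeled negative example.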