Source Themes

Calculating the Support Function of Complex Continuous Surfaces With Applications to Minimum Distance Computation and Optimal Grasp Planning
The support function of a surface is a fundamental concept in mathematics and a crucial operation for algorithms in robotics, such as those for collision detection and grasp planning. The support function of a convex body can be calculated in closed form. For complex continuous surfaces, however, especially non-convex ones, the calculation is far more difficult, and no general solution has been available so far, which limits the applicability of related algorithms. This paper presents a branch-and-bound (B&B) algorithm to calculate the support function of complex continuous surfaces. An upper bound of the support function over a surface domain is derived. When a surface domain is divided into subdomains, the upper bound of the support function over any subdomain is proven to be no greater than the one over the original domain. As the B&B algorithm then sequentially refines the partition by splitting the subdomain with the greatest upper bound, the maximum upper bound over all subdomains decreases monotonically and converges to the exact value of the desired support function. Furthermore, with the aid of the B&B algorithm, this paper derives new algorithms for the minimum distance between complex continuous surfaces and for globally optimal grasps on objects with continuous surfaces. A number of numerical examples demonstrate the effectiveness of the proposed algorithms.
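As an illustration, the B&B scheme described in the abstract can be sketched in a few lines. The sketch below is ours, not the paper's code: it assumes a parametric surface f(u, v), maximizes the objective ⟨d, f(u, v)⟩, and builds the upper bound over each rectangular subdomain from a Lipschitz constant of the objective, which is assumed known.

```python
import heapq
import math

def support_function_bb(f, d, domain, lip, tol=1e-6, max_iter=400000):
    """Branch-and-bound sketch for h_S(d) = max over (u, v) of <d, f(u, v)>.

    f      : parametrization of the surface, f(u, v) -> (x, y, z)
    d      : query direction
    domain : (u_lo, u_hi, v_lo, v_hi) rectangle in parameter space
    lip    : Lipschitz constant of (u, v) -> <d, f(u, v)> (assumed known)
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def upper(box):
        # value at the box center plus a Lipschitz slack over the box radius
        u_lo, u_hi, v_lo, v_hi = box
        uc, vc = 0.5 * (u_lo + u_hi), 0.5 * (v_lo + v_hi)
        r = 0.5 * math.hypot(u_hi - u_lo, v_hi - v_lo)
        c = dot(d, f(uc, vc))
        return c + lip * r, c

    ub0, best = upper(domain)           # best = lower bound from sampled centers
    heap = [(-ub0, domain)]             # max-heap on the upper bound
    for _ in range(max_iter):
        neg_ub, box = heapq.heappop(heap)
        if -neg_ub - best <= tol:       # global gap closed: best is within tol
            return best
        u_lo, u_hi, v_lo, v_hi = box
        um, vm = 0.5 * (u_lo + u_hi), 0.5 * (v_lo + v_hi)
        # split the box with the greatest upper bound into four children;
        # each child's upper bound cannot exceed the parent's
        for sub in ((u_lo, um, v_lo, vm), (um, u_hi, v_lo, vm),
                    (u_lo, um, vm, v_hi), (um, u_hi, vm, v_hi)):
            ub, c = upper(sub)
            best = max(best, c)
            heapq.heappush(heap, (-ub, sub))
    return best
```

For a unit sphere and a unit direction the support function is 1, which the sketch recovers to the requested tolerance.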
Object-Agnostic Dexterous Manipulation of Partially Constrained Trajectories
We address the problem of controlling a partially constrained trajectory of the manipulation frame (an arbitrary frame of reference rigidly attached to the object), where the desired motion of this frame is often underdefined. This arises, for example, when the task requires control only over the translational dimensions of the manipulation frame, disregarding the rotational dimensions. This scenario complicates the computation of the grasp frame trajectory, as the mobility of the mechanism is likely limited by the constraints imposed by the closed kinematic chain. In this letter, we address this problem by combining a learned, object-agnostic manipulation model of the gripper with Model Predictive Control (MPC). This combination enables simple vision-based control of robotic hands with generalized models, allowing a single manipulation model to extend to different task requirements. By tracking the hand-object configuration through vision, the proposed framework accurately controls the trajectory of the manipulation frame along translational, rotational, or mixed trajectories. We provide experiments quantifying the utility of this framework, analyzing its ability to control different objects over varied horizon lengths and optimization iterations, and finally, we implement the controller on a physical system.
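A minimal sketch of how a learned forward model can be combined with sampling-based MPC so that only the constrained dimensions of the manipulation frame enter the cost: all names below are ours, and random-shooting optimization is one simple choice, not necessarily the letter's implementation.

```python
import numpy as np

def mpc_action(model, state, target, mask, horizon=5, n_samples=128,
               act_dim=2, rng=None):
    """Random-shooting MPC sketch (hypothetical names, illustrative only).

    model  : learned forward model, model(state, action) -> next_state
    mask   : boolean vector selecting the constrained dimensions of the
             manipulation-frame pose; unconstrained dimensions are ignored
    """
    rng = np.random.default_rng() if rng is None else rng
    best_cost, best_first = np.inf, None
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, act_dim))
        s, cost = state, 0.0
        for a in seq:
            s = model(s, a)
            # penalize tracking error only on the constrained dimensions
            cost += float(np.sum((s[mask] - target[mask]) ** 2))
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first  # receding horizon: execute only the first action
```

Run in closed loop (re-plan after every executed action), this drives the masked dimensions toward the target while leaving the unmasked, underdefined dimensions free.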
End-to-End Nonprehensile Rearrangement with Deep Reinforcement Learning and Simulation-to-Reality Transfer
Nonprehensile rearrangement is the problem of controlling a robot to interact with objects through pushing actions in order to reconfigure the objects into a predefined goal pose. In this work, we rearrange one object at a time in an environment with obstacles using an end-to-end policy that maps raw pixels as visual input to control actions without any form of engineered feature extraction. To reduce the amount of training data that needs to be collected with a real robot, we propose a simulation-to-reality transfer approach. In the first step, we model the nonprehensile rearrangement task in simulation and use deep reinforcement learning to learn a suitable rearrangement policy, which requires hundreds of thousands of example actions for training. Thereafter, we collect a small dataset of only 70 episodes of real-world actions as supervised examples for adapting the learned rearrangement policy to real-world input data. In this process, we make use of newly proposed strategies for improving the reinforcement learning process, such as heuristic exploration and the curation of a balanced set of experiences. We evaluate our method in both simulated and real settings using a Baxter robot to show that the proposed approach can effectively improve the training process in simulation and efficiently adapt the learned policy to the real-world application, even when the camera pose differs from simulation. Additionally, we show that the learned system not only provides adaptive behavior to handle unforeseen events during execution, such as distracting objects, sudden changes in the positions of the objects, and obstacles, but can also deal with obstacle shapes that were not present during training.
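One of the training strategies mentioned above, curating a balanced set of experiences, can be sketched as a replay buffer that buckets experiences by outcome so that rare events are not drowned out by common ones. This is our guess at the general idea, with hypothetical names, not the paper's code.

```python
import random
from collections import deque, defaultdict

class BalancedReplayBuffer:
    """Sketch of outcome-balanced experience curation (illustrative only)."""

    def __init__(self, capacity_per_bucket=1000):
        # one bounded FIFO bucket per outcome label
        self.buckets = defaultdict(lambda: deque(maxlen=capacity_per_bucket))

    def add(self, experience, outcome):
        self.buckets[outcome].append(experience)

    def sample(self, batch_size, rng=random):
        # draw (near-)equally from every outcome bucket, however rare
        keys = list(self.buckets)
        batch = []
        for i in range(batch_size):
            bucket = self.buckets[keys[i % len(keys)]]
            batch.append(rng.choice(list(bucket)))
        return batch
```

With 100 "success" pushes and only 5 "collision" pushes stored, a sampled batch still contains both outcomes in equal proportion, which keeps the learner exposed to the rare failure cases.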
Hand-object configuration estimation using particle filters for dexterous in-hand manipulation
We consider the problem of dexterous manipulation with a focus on unknown or uncertain hand-object parameters, such as the hand configuration, the object pose within the hand, and the contact positions. In this work, we formulate a generic framework for hand-object configuration estimation, using underactuated hands as an example. Due to the passive reconfigurability of, and lack of joint encoders in, such hands, it is challenging to estimate, plan, and actively control underactuated manipulation. By modeling the grasp constraints, we present a particle filter-based framework to estimate the hand configuration. Specifically, given an arbitrary grasp, we start by sampling a set of hand configuration hypotheses and then randomly manipulate the object within the hand. Observing the object’s movements as evidence with an external camera, which need not be calibrated to the hand frame, our estimator calculates the likelihood of each hypothesis to iteratively estimate the hand configuration. Once converged, the estimator is used to track the hand configuration in real time for subsequent manipulations. Building on this estimator, we develop an algorithm to precisely plan and control underactuated manipulation, moving the grasped object to desired poses. In contrast to most other dexterous manipulation approaches, our framework requires neither tactile sensing nor joint encoders and can directly operate on novel objects without an a priori object model. We implemented our framework on both the Yale Model O hand and the Yale T42 hand. The results show that the estimation is accurate for different objects and that the framework can be easily adapted across different underactuated hand models. Finally, we evaluated our planning and control algorithm on handwriting tasks and demonstrated the effectiveness of the proposed framework.
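The estimation loop described above (sample hypotheses, observe object motion, weight each hypothesis by its likelihood, iterate) has the standard particle-filter shape. The sketch below is a generic, one-dimensional illustration with names of our choosing; the paper's actual state and observation models are richer.

```python
import numpy as np

def estimate_configuration(observe_motion, predict_motion, n_particles=500,
                           n_steps=50, noise=0.05, rng=None):
    """Particle-filter sketch for a fixed configuration parameter theta.

    observe_motion(step)         -> measured object displacement at this step
    predict_motion(theta, step)  -> displacement the grasp model predicts
                                    (vectorized over an array of theta values)
    """
    rng = np.random.default_rng() if rng is None else rng
    particles = rng.uniform(0.0, 1.0, n_particles)   # hypotheses for theta
    for step in range(n_steps):
        z = observe_motion(step)
        pred = predict_motion(particles, step)
        # Gaussian likelihood of the observation under each hypothesis
        w = np.exp(-0.5 * ((z - pred) / noise) ** 2)
        w /= w.sum()
        # resample in proportion to the weights, then jitter for diversity
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx] + rng.normal(0.0, noise * 0.1, n_particles)
    return float(particles.mean())
```

Each observed motion reweights the hypotheses, so the particle cloud contracts around the configuration that best explains the evidence.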
Benchmarking Protocol for Grasp Planning Algorithms
Numerous grasp planning algorithms have been proposed since the 1980s. The grasping literature has expanded rapidly in recent years, building on greatly improved vision systems and computing power. Methods have been proposed to plan stable grasps on known objects (an exact 3D model is available), familiar objects (e.g., exploiting a priori known grasps for different objects of the same category), or novel object shapes observed during task execution. Few of these methods have ever been compared in a systematic way, and objective performance evaluation of such complex systems remains problematic. Difficulties and confounding factors include the different assumptions and amounts of a priori knowledge in different algorithms; the different robots, hands, vision systems, and setups in different labs; and the different choices of grasped objects driven by application needs. Grasp planning can also use different grasp quality metrics (including empirical or theoretical stability measures) or other criteria, e.g., computational speed or the combination of grasps with reachability considerations. While acknowledging and discussing the outstanding difficulties surrounding this complex topic, we propose a methodology for reproducible experiments that compare the performance of a variety of grasp planning algorithms. Our protocol improves the objectivity with which different grasp planners are compared by minimizing the influence of key components in the grasping pipeline, e.g., vision and pose estimation. The protocol is demonstrated by evaluating two different grasp planners: a state-of-the-art model-free planner and a popular open-source model-based planner. We show results from real-robot experiments with a 7-DoF arm and 2-finger hand, as well as simulation-based evaluations.
Perching and resting: A paradigm for UAV maneuvering with modularized landing gears
Perching helps small unmanned aerial vehicles (UAVs) extend their time of operation by saving battery power. However, most strategies for UAV perching require complex maneuvering and rely on specific structures, such as rough walls for attaching or tree branches for grasping. Many perching strategies also neglect the UAV’s mission, so that saving battery power interrupts the mission itself. We suggest equipping UAVs with the capability of making and stabilizing contacts with the environment, which allows a UAV to consume less energy while retaining its altitude, in addition to the previously proposed perching capability. We term this new capability “resting.” To this end, we propose a modularized and actuated landing gear framework that stabilizes the UAV on a wide range of structures by perching and resting. Modularization allows our framework to adapt to specific structures for resting through rapid prototyping with additive manufacturing. Actuation allows switching between different modes of perching and resting during flight and additionally enables perching by grasping. Our results show that this framework can be used to perform UAV perching and resting on a set of common structures, such as street lights and the edges or corners of buildings. We show that the design is effective in reducing power consumption, improves pose stability, and preserves a large vision range while perching or resting at height. In addition, we discuss the potential applications facilitated by our design, as well as the potential issues to be addressed for deployment in practice.