Article written by Basil Jesudason, Principal Engineer and Software Lead of the Sharp Autonomous Development Project at Sharp Labs of America (SLA), and Robolliance Expert
This year I had the opportunity to attend ICRA 2016. Over three days, more than 1,200 robotics research papers were presented across concurrent interactive conference sessions. There were five plenary lectures, 14 keynote speeches, and 41 workshops and tutorials. For example, sessions included Aerial Robotics, Manipulators, Medical Robotics, Autonomous Navigation, Marine Robotics & Bio-inspired Aquatics, and Haptics for Neuroscience.
In this article, I will highlight some of the research being done in the area of aerial robotics. In the next edition, I will focus on various aspects of autonomous navigation of all-terrain vehicles using only cameras, without GPS.
Aerial robotics - Introduction
What are some of the motivating factors, and why do we need to study this field? In some of the main sessions there was a lot of discussion on what it takes to build low-cost, low-power platforms. In other words, what are the challenges in implementing some of the core business use cases?
There was a lot of discussion around building the right platform for the right task, and choosing the right kind of sensors for carrying out that task. For navigation and for the tasks themselves, we need to build the right algorithms by fusing together the data from different sensors; the goal is to localize the aerial robot. The second aspect of this work requires planning and control.
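As a toy illustration of what fusing sensor data means, the snippet below combines two noisy altitude readings by inverse-variance weighting; the sensor names and the numbers are invented for illustration, not drawn from any particular platform.

```python
def fuse(z1, var1, z2, var2):
    """Fuse two noisy scalar measurements of the same quantity by
    inverse-variance weighting (the optimal linear combination)."""
    w1 = var2 / (var1 + var2)   # more weight goes to the less noisy sensor
    w2 = var1 / (var1 + var2)
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# Hypothetical barometer (noisy) and rangefinder (cleaner) altitude readings:
alt, var = fuse(z1=10.4, var1=4.0, z2=10.0, var2=1.0)
print(round(alt, 2), var)  # 10.08 0.8 (estimate leans toward the cleaner
                           # sensor; fused variance beats both sensors)
```

The same weighting idea, applied recursively over time, is the core of the Kalman filters commonly used on these platforms.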
Aerial robotics - Control
One of the primary aspects of aerial robotics is control. The following are some aspects that need to be taken into consideration when it comes to control:
- What are the different types of aerial robots?
- How do we model and design robots?
- What are the key system design challenges?
Of course, there are different kinds of aerial robots, and many can be equipped with powerful sensors to research and study an array of use cases for business and industry. There is the fixed-wing aircraft, such as an airplane, which is capable of flight using wings that generate lift from the vehicle's forward airspeed and the shape of the wings. There is also the flapping-wing aircraft, a smaller but still quite interesting field of research; these designs are biologically inspired. Finally, there is the standard rotorcraft, where most current research and work is taking place. The standard quadcopter falls into this category.
- Quadrotor control is achieved by differential control of the thrust generated by each rotor.
- Heave (total thrust) is the sum of thrust generated by each rotor.
- Pitch and roll are obtained by differential thrust along the north-south (NS) or east-west (EW) axis.
- Yaw control is obtained by differential control of the NS rotors compared to the EW rotors such that the total thrust is constant.
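A minimal sketch of this mixing logic for a plus-configuration quadrotor, assuming simple linear mixing; the function and rotor names are my own, not any particular autopilot's API.

```python
import numpy as np

def mix(total_thrust, d_pitch, d_roll, d_yaw):
    """Map collective thrust and differential pitch/roll/yaw commands to
    the four rotor thrusts of a plus-configuration quadrotor.
    Rotors: [north, south, east, west]. The NS pair spins opposite to the
    EW pair, so a yaw torque comes from unbalancing NS vs. EW thrust."""
    base = total_thrust / 4.0
    north = base + d_pitch + d_yaw
    south = base - d_pitch + d_yaw
    east  = base + d_roll  - d_yaw
    west  = base - d_roll  - d_yaw
    return np.array([north, south, east, west])

# Pure yaw command: NS rotors speed up, EW rotors slow down,
# and heave (the sum of the four rotor thrusts) is unchanged.
rotors = mix(total_thrust=8.0, d_pitch=0.0, d_roll=0.0, d_yaw=0.5)
print(rotors.sum())  # 8.0
```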
Aerial robotics - Visual navigation
While aerial robotics is an exciting new field with a lot of potential to help consumers in various real-world applications, one aspect that is particularly challenging is autonomous navigation of aerial robots. GPS sensors are commonly used, but without RTK (Real Time Kinematics) augmentation it is difficult to get the accuracy necessary for precise navigation. Areas that do not get a good GPS signal are also problematic.
So the questions before us are the following:
- How do robots estimate their position, orientation and velocity in the real world, both in the presence and absence of GPS?
- How does one use cameras to estimate the position, orientation, and velocity of a flying vehicle? The position and orientation together are commonly known in the robotics literature as the pose of the flying robot.
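Concretely, a pose is often stored as a rotation plus a translation in a 4x4 homogeneous matrix; the sketch below composes incremental motions the way a frame-to-frame estimator would. The helper name and the yaw-only rotation are simplifying assumptions of mine.

```python
import numpy as np

def make_pose(yaw, position):
    """Build a 4x4 homogeneous pose: rotation about z plus a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = position
    return T

# Composing incremental camera motions frame to frame:
T_world_cam = make_pose(0.0, [0.0, 0.0, 1.0])   # start 1 m above the origin
T_step = make_pose(np.pi / 2, [1.0, 0.0, 0.0])  # move 1 m forward, turn 90 deg
T_world_cam = T_world_cam @ T_step
print(np.round(T_world_cam[:3, 3], 3))  # [1. 0. 1.]
```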
One core sensor that is becoming very common, and holds a lot of promise for aerial roboticists, is the camera: either a single camera or a stereo pair, where two cameras are used simultaneously to perceive depth. One commonly used algorithm to improve the estimates and to localize a robot is visual odometry (VO). The following are some of the key ideas behind visual odometry:
- VO focuses on estimating the 3D motion of the camera sequentially (as each new frame arrives), with an emphasis on real-time performance.
- VO can be used as a building block of SLAM. SLAM, short for simultaneous localization and mapping, is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of the robot's location within it.
- We can look at VO as SLAM before closing the loop.
- Often corner detectors and blob detectors are used in the feature detection phase.
Aerial robotics - Manipulation
One area of research and study that holds promise is manipulation. A whole host of new problems is introduced when we add robotic arms to aerial robots.
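Returning to the visual odometry notes above for a moment: the corner-detection step can be illustrated with a small Harris-style corner response computed in plain NumPy on a synthetic image. This is a sketch of the idea, not a production detector.

```python
import numpy as np

def harris_response(img, k=0.05, win=3):
    """Harris-style corner response: high where image gradients vary in
    two directions (corners), low on edges and in flat regions."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Sum gradient products over a small window (wrap-around borders).
        out = np.zeros_like(a)
        r = win // 2
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# Synthetic image: a bright square on a dark background.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
# The response at a corner of the square beats an edge midpoint
# and a flat interior point.
print(R[5, 5] > max(R[5, 10], R[10, 10]))  # True
```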
Areas of discussion
Aerial robotics - Perching by hanging from cylindrical objects
- Can aerial robots be used for tasks involving contact with the physical environment?
- How do we get robots to carry payloads and how do they adapt to changing dynamics?
- Hierarchical task composition is one way to solve some of these issues. For example, we could define subtasks in the Cartesian space, in the image space, and in the joint space.
- Vision-based localization and servoing (vision-based robot control, abbreviated VS) for quadrotors, to enable autonomous perching by hanging from cylindrical structures using only a monocular camera
- The focus is on the problem of vision-based control to enable the grasping of a cylinder with an overhead gripper, allowing a robot to hang from branches or poles
- The problems of relative pose estimation, control, and trajectory planning for maneuvering a robot relative to cylinders with unknown orientations
- The axis of the cylinder and the point on the axis closest to the camera, βc, can be estimated directly from the lines representing the boundaries of the cylinder in the image
- Use Image Based Visual Servoing (IBVS).
- Control architecture:
- The first layer (MULTI-VEHICLE COORDINATED CONTROL) is centralized. This layer is in charge of computing the reference motion, in Cartesian space, of each UAVM (unmanned aerial vehicle-manipulator) end-effector, on the basis of the given coordinated task
- The second layer (AERIAL MANIPULATOR COORDINATED CONTROL) is local to each aerial manipulator: it receives the motion references from the upper layer and, via an inverse kinematic algorithm, computes the reference motion in terms of the controlled variables, i.e., the manipulator joints as well as the position and yaw angle of the vehicle
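The classical IBVS control law mentioned above drives the camera so that image features move toward their desired positions: v = -lambda * pinv(L) * (s - s*), where L is the interaction matrix. Below is a minimal sketch for a single point feature, restricting the camera to pure translation and holding the depth Z fixed; both are simplifications for illustration, and the names are my own.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Translational part of the point-feature interaction matrix:
    relates camera translational velocity to feature motion in the image."""
    return np.array([[-1.0 / Z, 0.0, x / Z],
                     [0.0, -1.0 / Z, y / Z]])

s = np.array([0.3, -0.2])      # current feature (normalized image coords)
s_star = np.array([0.0, 0.0])  # desired feature position
Z, lam, dt = 2.0, 0.5, 1.0     # depth (held fixed here), gain, time step

for _ in range(50):
    L = interaction_matrix(s[0], s[1], Z)
    v = -lam * np.linalg.pinv(L) @ (s - s_star)  # camera velocity command
    s = s + L @ v * dt                           # simulate feature motion

print(np.linalg.norm(s - s_star) < 1e-3)  # True: the feature converges
```

Because L has full row rank here, each step shrinks the image error by a constant factor, which is why the feature converges to its desired position.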