OBJECT DETECTION BY MONOCULAR CAMERA AND LASER SENSORS FOR SMALL UNMANNED AERIAL VEHICLE

In this paper, we present selected options for the detection and avoidance of obstacles by small unmanned aerial vehicles (UAVs). The problem is complicated mainly because UAVs have limited payload capacity and limited energy sources; sensors used for this task must therefore meet strict weight and power requirements. Based on these requirements, we decided to use a monocular camera combined with a laser sensor. The size of the obstacle is determined by the SURF algorithm and the Harris corner detector.


INTRODUCTION
One possibility for creating an anti-collision system for an Unmanned Aerial Vehicle (UAV) is to use a relative navigation system. The disadvantage of this solution is that such a system can only work within a communication network. Further, in accordance with [24], we present selected options for the detection and avoidance of obstacles by small UAVs. A UAV, as the name implies, is an aircraft without a pilot on board. It can fly autonomously according to a pre-programmed mission or be controlled manually by a pilot on the ground. UAVs have great potential to replace manned aircraft in many tasks and missions (e.g. surveillance, search and rescue, attack missions, data collection, etc.) [24]. For example, from 1946 to 1948 the United States Air Force and Navy used unmanned B-17s and F6Fs to fly into nuclear clouds within minutes of bomb detonation to collect radioactive samples [1], [24]. Operating UAVs at low altitude, whether indoors or outdoors, is very demanding, and the number of small and micro air vehicles is growing. Achieving a robust level of autonomy is challenging, yet it is an important need for modern UAVs. An obstacle detection and avoidance system depends strongly on the sensors employed on the UAV; the most researched and most popular choices are vision-based and laser-based systems. The selection of the detection and avoidance system is closely tied to the UAV itself: choosing a proper on-board sensor plays a critical role, and each technique has its own advantages and disadvantages. For example, a vision-based method provides rich information about the bearing angle of detected obstacles, while the distance from the vehicle to the obstacles is poorly recognized [24].
On the other hand, a laser-based method can accurately measure the distance from the obstacles to the UAV, but is weak in determining the bearing angle. To counter these disadvantages, different types of sensors can be fused to form a reliable detection and avoidance system. Integration of different sensors (e.g. laser and optical) allows the extraction of information that cannot be acquired by a single sensor [2], [24], and increases the system's ability to estimate the attitude of the obstacles. The concept of multi-sensor fusion can be traced back to humans and animals, which naturally perform sensor fusion for many tasks (e.g. identification of threats, assessment of the surrounding environment, etc.).
For example, animals can precisely evaluate substances by combining different senses such as sight, touch, smell and taste [3], [24]. Nowadays, commercial UAVs are already equipped with a single on-board camera (monocular vision) and can be pre-programmed to use feature-based detection techniques. Hence, this research project focuses on monocular-vision feature-based techniques taken from computer vision. Considering size, weight, power consumption and the ability to extract useful information, the camera is the most competitive sensor for a UAV [3], [4], [24]. In this paper we present the possibilities of obstacle detection and avoidance by a UAV system, considering the integration of multiple sensors, namely a combination of optical and laser sensors. Our goal is to achieve autonomous UAV operation. We try to estimate the size of obstacles for the purpose of safely avoiding them. The use of UAVs in the field of transport can bring great economic benefits [14], [15], [16].

CURRENT STATE OF THE RESEARCHED ISSUE
The possibilities of obstacle detection and avoidance by a UAV system are described e.g. in [24]. An obstacle detection and avoidance system is desirable for a small, lightweight UAV, but it is a challenging problem: payload constraints often allow only a monocular camera as the sensor for detecting obstacles. Even though monocular cameras cannot measure the depth of a scene directly, various cues can be computed to work around this limitation. The author of [5] uses expansion cues to detect a frontal obstacle with a single camera, matching Speeded Up Robust Features (SURF) in combination with template matching to compare relative obstacle sizes [24]. Abdulla Al-Kaff et al. [7] estimate the size ratios of approaching obstacles from a sequence of image frames: SIFT [8] feature points are detected and a convex shape is constructed from them, after which size changes of the convex shape are observed as obstacles approach. In [9], MOPS features are used to detect the outline or edges of the obstacles, and SIFT features are then applied to the obstacle image to detect the internal outline. Cooper Bills et al. [10] proposed a method to fly a UAV in indoor environments using perspective cues; however, their experiments are strictly limited to corridor and stair regions. Detection of obstacles based on texture variation has been demonstrated by G.C.H.E. de Croon [11], [24], exploiting the fact that when an observer approaches an object, two effects occur: the object's size in the image increases, and its detailed texture becomes more and more visible. Another monocular-vision method is optical flow. It is a natural solution to the navigation and obstacle avoidance problem, motivated by insect and bird flight [12].
Green and Oh [13] built an MAV platform that can quickly transition from cruise flight into hovering mode to avoid collisions using this method. However, they state that optical flow has limitations when approaching an obstacle head-on (a frontal obstacle), because detection is then very poor. Simon Zingg et al. [14] use the method to navigate a UAV through indoor corridors. Zufferey et al. [15] use optical flow on a microflyer weighing only 30 g to fly in small indoor environments. In [16], fusion of ultrasonic (US) and infrared (IR) sensors was developed to obtain reliable range data for obstacle detection. The results showed that sensor fusion provided accurate range estimation by reducing the noise and errors present in the individual sensor measurements. In [17], a combination of Ka-band radar and optical cameras is used as the detection system. Tomic et al. [18] use the full capability of both laser-based and vision-based sensors.
They integrate a wide-angle LIDAR sensor and a stereo vision camera on their UAV. Shadib et al. [19] employed optical and ultrasonic sensors for their autonomous mobile robot, determining the distance to obstacles by measuring the time difference between the transmission of an ultrasonic wave and the reception of its echo.

TECHNIQUE OF DETECTION BY MONOCULAR CAMERA AND LASER SENSORS
Next, in accordance with [24], we discuss a technique for detecting obstacles that combines feature-based detection by a monocular camera with the reception of reflected waves by laser sensors [24]. Feature-based detection is a computer vision concept for extracting interesting or distinctive features of an image, such as edges, corners and blobs. It is generally used for finding an object (e.g. matching features from one image to another) and for tracking [24]. Two widely known feature-based detection techniques, SURF, created by Herbert Bay [20], [21], [22], and the Harris corner detector by Chris Harris [23], [24], are employed in this project.
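To illustrate the blob-detection idea that SURF builds on, the following is a minimal sketch (in Python with numpy, standing in for the Matlab tooling used later in this paper) of a determinant-of-Hessian blob response at a single scale. Note this is only the underlying principle, not the SURF implementation itself, which uses box filters and integral images for speed.

```python
import numpy as np

def hessian_blob_response(img, sigma):
    """Determinant-of-Hessian blob response at one scale: blobs whose
    size matches sigma produce strong responses (the idea behind SURF's
    detector). Uses Gaussian smoothing plus finite differences."""
    # Separable Gaussian smoothing with a truncated kernel.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    smooth = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
    smooth = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, smooth)
    # Second derivatives by repeated central differences.
    Lxx = np.gradient(np.gradient(smooth, axis=1), axis=1)
    Lyy = np.gradient(np.gradient(smooth, axis=0), axis=0)
    Lxy = np.gradient(np.gradient(smooth, axis=0), axis=1)
    # Scale-normalised determinant of the Hessian.
    return sigma**4 * (Lxx * Lyy - Lxy**2)

# A bright disc of radius 5 on a dark background: the response peaks
# near the disc centre at a roughly matching scale.
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 32)**2 + (xx - 32)**2 < 25] = 1.0
resp = hessian_blob_response(img, sigma=3.0)
peak = np.unravel_index(np.argmax(np.abs(resp)), resp.shape)
```

Because the response grows with the match between blob size and filter scale, the scale of a detected keypoint carries size information, which is exactly the property exploited later to separate obstacle keypoints from the background.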
The presented obstacle detection method is based on combining the output of different sensors, namely a camera and a laser sensor. It allows the length of obstacles to be measured from far away, enabling earlier decision making and safe avoidance [24]. It is desirable for an obstacle detection and avoidance system to have a good approximation of the obstacle's length rather than relying on a fixed tolerance. This is especially true when a UAV has to fly through tight spaces and environments filled with obstacles [24].

Laser sensor to measure the distance to obstacles
Laser sensors can be used to measure the distance to obstacles with very high accuracy [24]. Moreover, their detection range is very large compared to ultrasonic and infrared sensors. However, most capable long-range laser sensors, also known as LIDAR (e.g. Hokuyo, SICK, etc.), are heavy and large, and thus not suitable for small UAVs [24]. To measure distance, we used a small and compact LIDAR that determines the distance with a single beam. This sensor performs the initial detection: first, it captures the appropriate distance to the obstacle and instructs the camera to capture the reference image frame; second, it informs the camera to capture the target image frame once the avoidance distance has been reached [24] (see figure 1).
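The two-stage capture logic driven by the LIDAR range readings can be sketched as a small state machine. The threshold values below (10 ft for the reference frame, 4 ft for the target frame) are taken from the experiment section; in a real system they would be tuned to the vehicle's speed and stopping distance.

```python
REF_DIST_FT = 10.0   # range at which the reference image frame (RIF) is captured
AVOID_DIST_FT = 4.0  # range at which the target image frame (TIF) is captured

def lidar_trigger(distance_ft, state):
    """Process one LIDAR range reading.

    state transitions: 'searching' -> 'tracking' -> 'avoiding'
    Returns (new_state, command), where command is None,
    'capture_rif' or 'capture_tif'.
    """
    if state == "searching" and distance_ft <= REF_DIST_FT:
        return "tracking", "capture_rif"   # obstacle first within reference range
    if state == "tracking" and distance_ft <= AVOID_DIST_FT:
        return "avoiding", "capture_tif"   # avoidance distance reached
    return state, None

# Simulated approach: readings shrink as the UAV closes on the obstacle.
state, commands = "searching", []
for d in [25.0, 14.0, 9.5, 6.0, 3.8]:
    state, cmd = lidar_trigger(d, state)
    if cmd:
        commands.append(cmd)
# commands == ['capture_rif', 'capture_tif']
```

The one-way state progression ensures each frame is captured exactly once per approach, even if the range readings are noisy around a threshold.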

SURF detection
Since the system is equipped with a LIDAR sensor to measure the distance to obstacles, instead of detecting incoming obstacles by size expansion we used the SURF algorithm to recognize and identify the obstacle size, particularly its length. The SURF algorithm can also be considered a blob detector, because the scale it produces is directly proportional to the size of the detected blob, in other words, to the size of a particular object in the image. We took advantage of this characteristic to distinguish the detected obstacles from the background of the image, which is necessary for calculating the obstacle's length. However, we must determine the correct scale change between the obstacle captured in the reference image frame (RIF) and the one in the target image frame (TIF), as shown in figure 2 [22]. Several experiments were performed to determine the appropriate scale changes by varying the distance between the RIF and the TIF. For these experiments, the TIF distance was fixed at 4 ft. This is because, if the TIF distance kept increasing, the obstacle (blob) would become small in the image compared to its surroundings; as a result, the SURF algorithm would produce many keypoints on the background rather than on the obstacle itself. On the other hand, if the TIF distance were too small, avoidance planning by the UAV would become a problem. The resulting scale changes are those able to separate the obstacles from the background of an image.
Table 1 Scale changes [24]
In adjusting the RIF distance, the number of matching feature keypoints between frames should be taken into account: the more matching keypoints detected, the closer the approximated obstacle length will be to the true value. The SURF algorithm generates feature keypoints with two pieces of data: the scale of the feature and its location in the image.
After the obstacle feature keypoints have been separated from the background of the image, we need to find the extrema (minimum and maximum) locations in the X-direction of the keypoints located on the obstacle. Any feature keypoints that lie outside the boundary of these extrema are removed.
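The scale-based separation and X-extrema step described above can be sketched as follows (Python/numpy rather than the Matlab used in the paper). Keypoints are rows of (x, y, scale); the scale threshold is a placeholder standing in for the values of Table 1, and the exact boundary rule is an assumption about the intent of the text.

```python
import numpy as np

def obstacle_keypoints(kps, min_scale):
    """Keep keypoints whose SURF scale marks them as part of the large,
    close obstacle blob rather than the small, distant background, then
    report the X-direction extrema bounding the obstacle."""
    kps = np.asarray(kps, dtype=float)
    on_obstacle = kps[kps[:, 2] >= min_scale]      # scale-based separation
    x_min = on_obstacle[:, 0].min()                # X-direction extrema of the
    x_max = on_obstacle[:, 0].max()                # surviving keypoints
    return on_obstacle, (x_min, x_max)

kps = [(40, 60, 9.0),    # on the obstacle (large scale)
       (180, 65, 8.5),   # on the obstacle
       (300, 20, 2.0),   # background (small scale)
       (15, 200, 1.5)]   # background
obs, (x_min, x_max) = obstacle_keypoints(kps, min_scale=6.0)
# obs keeps 2 rows; the obstacle spans x = 40 .. 180 in the image
```

The span x_max - x_min in pixels is what the later pixel-to-real-dimension step converts into an obstacle length.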

Binary image conversion
The image frame in the TIF is converted from greyscale to binary. A binary image contains only two pixel values, 0 (black) and 1 (white), while greyscale pixel values range from 0 to 255. To perform this conversion we use a luminance level of 0.5, meaning any greyscale pixel higher than 127.5 is replaced by white, and all others by black. After this conversion, the edge of the obstacle is localized, i.e. transformed into many step-like corners (see figure 3). We use this to find the edge coordinates in the image by applying the Harris corner detector.
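The thresholding step above is a one-liner in array terms; the sketch below (Python/numpy, standing in for the Matlab toolbox used in the paper) applies the stated 0.5 luminance level, i.e. 127.5 on a 0..255 scale, with white represented as True.

```python
import numpy as np

def to_binary(grey, threshold=0.5):
    """Convert a uint8 greyscale image (values 0..255) to binary:
    True (white) where the pixel exceeds threshold * 255, else False
    (black), matching the 0.5 luminance level described above."""
    return grey > threshold * 255.0

grey = np.array([[10, 200],
                 [130, 125]], dtype=np.uint8)
binary = to_binary(grey)
# 130 > 127.5 becomes white, 125 does not:
# [[False, True],
#  [True, False]]
```

A hard global threshold like this works for a high-contrast obstacle against the background; scenes with uneven lighting would instead call for an adaptive threshold.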

Determining discontinuities
If the Harris corner detector is applied to the binary image directly, it produces many corner points in the background as well as on the obstacle. Thus, a method is needed to extract corner points only at the edge of the obstacle, because the corner-point coordinates are later used to find the relation between pixels and real dimensions. A small window, or patch, is created to find discontinuities among the Harris corner points in the image. The patch is 30 x 40 pixels in size and searches for discontinuities with a step size of 1 pixel from both sides (left and right) of the image. If a discontinuity is found, the point of discontinuity is assumed to be an edge coordinate of the obstacle (see figure 4).
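The sliding-patch search can be sketched as below (Python/numpy). For brevity the sketch scans only along the X-axis with the 40-pixel patch width and 1-pixel step from the text; the paper's full 30 x 40 patch additionally constrains the vertical extent, so treat this as a simplified illustration of the idea.

```python
import numpy as np

PATCH_W = 40  # horizontal patch extent in pixels, per the text
STEP = 1      # search step in pixels

def find_edges(corner_xs, image_width):
    """Scan inward from both image borders; the first patch position
    containing Harris corner points marks a discontinuity, taken as an
    edge coordinate of the obstacle. Returns (left_edge_x, right_edge_x)."""
    corner_xs = np.asarray(corner_xs)
    left = right = None
    # Scan from the left border inward.
    for x in range(0, image_width - PATCH_W + 1, STEP):
        if np.any((corner_xs >= x) & (corner_xs < x + PATCH_W)):
            left = int(corner_xs[corner_xs >= x].min())
            break
    # Scan from the right border inward.
    for x in range(image_width - PATCH_W, -1, -STEP):
        if np.any((corner_xs >= x) & (corner_xs < x + PATCH_W)):
            right = int(corner_xs[corner_xs < x + PATCH_W].max())
            break
    return left, right

xs = [210, 230, 400, 415, 600]  # hypothetical corner x-coordinates
left, right = find_edges(xs, image_width=1280)
# left == 210, right == 600
```

The returned pair (left, right) gives the obstacle's extent in pixels, which feeds directly into the length calculation of the next subsection.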

Detection of the length
As previously stated, the pixel-to-real-dimension relation is used to find the real dimension (length) of the obstacles, and a simple trigonometric function is incorporated in the calculation.
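The paper does not spell out the formula, so the sketch below shows one plausible form of the pixel-to-real-dimension relation, assuming a pinhole camera: with the LIDAR-measured distance d, the 62-degree horizontal field of view and the 1280-pixel image width of the AR.Drone camera from the setup section, the scene width imaged at distance d is 2 d tan(FOV/2), and each pixel maps to an equal slice of it.

```python
import math

HFOV_DEG = 62.0        # horizontal field of view of the AR.Drone camera
IMAGE_WIDTH_PX = 1280  # horizontal image resolution

def obstacle_length(pixel_width, distance_m):
    """Approximate real length (metres) of an obstacle spanning
    pixel_width pixels at the LIDAR-measured distance_m, under a
    pinhole-camera assumption with the obstacle perpendicular to
    the optical axis."""
    scene_width = 2.0 * distance_m * math.tan(math.radians(HFOV_DEG / 2.0))
    return pixel_width * scene_width / IMAGE_WIDTH_PX

# A hypothetical obstacle spanning 390 px at the 4 ft (~1.22 m)
# TIF distance works out to roughly 0.45 m.
length = obstacle_length(390, 1.22)
```

The perpendicularity assumption matters: a tilted image plane shortens the apparent pixel span, which is consistent with the underestimated length reported for obstacle 3 in the results section.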

System configuration used for this experiment
The platform used in the experiments is the AR.Drone 2.0 Elite Edition Snow, a low-cost MAV built by the company Parrot. This vehicle was selected because it is a low-cost commercial UAV available on the market, flies stably and has a suitable size. The camera attached to the AR.Drone has a resolution of 1280 x 720 pixels and a horizontal field of view of 62 degrees. The LIDAR-Lite from Garmin acts as the laser distance sensor in this project. Its low weight of 22 g makes it suitable as an additional sensor for UAV applications, and its range of about 40 m is very impressive. The main software used for analysis and calculation was Matlab 2016a, whose Computer Vision System Toolbox makes it a suitable development tool for the obstacle detection and avoidance system. All algorithms were processed on a ground laptop with a quad-core Intel i7 running the Windows operating system [21], [24].

EXPERIMENT RESULTS AND GOALS TO THE FUTURE
Experiments were performed by moving the UAV from the RIF position to the TIF position and recording the captured image frames. Three obstacles of different sizes were used for detection by the algorithm. For the initial experiment and algorithm validation, we carried out the experiment only with the UAV starting at an RIF position of 10 ft and stopping at a TIF position of 4 ft. In future experiments, it would be appropriate to test the algorithm at different RIF positions to produce the scale changes indicated in table 1. The approximate length of each obstacle produced by the algorithm was validated against its real length. In the experiment, the algorithm was able to detect and determine the lengths of all three obstacles. The accuracy of the algorithm is remarkable: the length generated by the algorithm differs from the real length only by between -0.4 and 3.6 cm. For obstacle 3, the difference is slightly greater than for the rest because, during the experiment, the image frame captured by the camera at the TIF position may have been tilted slightly to the left or right, i.e. not exactly perpendicular to the obstacle. As a consequence, the edge coordinates of obstacle 3 detected by the Harris corner detector were underestimated, and the approximated length is less than the real length.

CONCLUSION
In this project, multi-sensor integration for an obstacle detection and avoidance system has been proposed. Combining a monocular camera with a LIDAR sensor helps increase the robustness and accuracy of the system. Using this system together with the SURF algorithm and the Harris corner detector, we are able to determine the approximate size of obstacles. Experiments were performed with three obstacles of different sizes, and the results show that the algorithm closely approximates their lengths. In the future, the algorithm should be implemented in a real-time environment, including the development of an avoidance algorithm that uses the output of the detection algorithm. It will also be necessary to investigate the detection and avoidance decision mechanism when multiple obstacles are introduced. In the next verification, all RIF positions and a variety of obstacles will be tested to further validate the algorithm. The relative navigation principle can also be used for positioning and obstacle detection: relative navigation algorithms are able to work at significantly greater distances when detecting surrounding traffic. A big limitation in this case is that all objects must be active components of the aviation communication network [28], [29].