Landing Pad Detection and Computing Direction of Motion for Autonomous Precision Landing Quadcopter

Abstract. This paper presents an algorithm that enables a quadcopter to perform autonomous precision landing. The research focuses on designing the quadcopter so that it can land precisely on a landing pad using image-processing algorithms. First, the captured image is converted to grayscale; thresholding is then applied, followed by a morphological process to eliminate noise and produce a clean image. The detected pad is displayed in a frame, and the distance from its centre to the frame's middle point is calculated. This distance is used as the Pulse Width Modulation (PWM) input that adjusts the quadcopter's direction of motion so that it can land autonomously. The algorithm was tested on landing pads of several colors placed on grass, sand, and cluttered ground, to evaluate its accuracy and precision. The experimental results show an accuracy of 94.76% and a precision of 96.59%, with an average landing time of 19 seconds and an average detection time of 8.55 milliseconds.


Introduction
An Unmanned Aerial Vehicle (UAV) is an unmanned aircraft driven by battery-electric power or gas-powered engines. Compared to manned aircraft, UAVs are cheaper to design and manufacture [1]. UAVs also offer higher mobility, security, and flexibility than manned aircraft. Some UAVs do not require a specific airfield for takeoff and landing, whereas manned aircraft always do. In case of technical difficulties, if a UAV falls to the ground the chance of lost lives is essentially zero, whereas the same event with a manned plane can be disastrous [2].
Aircraft intended for onboard-pilotless operation are used in domains and agencies such as aerial imaging, emergency services, armed forces, logistics, and agriculture. In these last two fields in particular, the growing popularity of this concept reflects a shared desire to use highly autonomous aircraft and eliminate the need for human intervention. This vision arises from a limitation of current systems: in most cases, device autonomy ends when power is exhausted, at which point the drone must land and recharge. In this section, we describe the various properties and aspects of a system that facilitates proper landing at a charging station. The proposed system is designed to simplify the operation of drones and, possibly, enable fully autonomous vehicles capable of repeated flights [3].
Due to these life-saving features and widespread use, UAVs have attracted considerable attention in the research community, particularly in terms of their landings. Although some UAVs do not require a specific airfield to land, they do require a flat, suitable surface that is free of obstructions. Currently, many researchers are actively working on landing UAVs in unknown places using both vision-based and non-vision-based techniques. In quadcopter operation, one of the basic mechanisms that must be carried out is Vertical Take-Off and Landing (VTOL). To perform this mechanism, the first requirement is stabilization: all quadcopter angles with respect to the x, y, and z axes, hereinafter referred to as the roll, pitch, and yaw angles, must be stabilized at 0 (zero) radians.
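Stabilizing each axis at zero radians is typically done with a feedback controller. A minimal PID sketch is shown below; the gains are illustrative assumptions, not this paper's tuned values:

```python
def pid_step(error, prev_error, integral, dt, kp=1.0, ki=0.0, kd=0.2):
    """One PID update driving an angle (roll, pitch, or yaw) toward 0 rad.

    `error` is the current angle minus the 0-rad setpoint; the gains
    kp, ki, kd are illustrative, not the paper's values.
    """
    integral += error * dt                   # accumulate error for the I term
    derivative = (error - prev_error) / dt   # rate of change for the D term
    output = kp * error + ki * integral + kd * derivative
    return output, integral
```

The returned output would be fed to the motor mixer as a corrective torque command; the same routine runs once per axis each control cycle.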
An example of a vision problem that is currently popular and under development on several unmanned aircraft is target detection with digital image processing through an integrated camera, contour detection, and object recognition. This shows that progress in the field has made it possible to embed artificial intelligence for a particular purpose. The goal to be achieved takes the form of a mission, such as a landing mission. Automatic landing is one of the missions commonly applied to unmanned aircraft; with automatic landing, undesirable events during the landing mechanism can be minimized. A method using computer vision with an algorithm to detect two differently colored ground marks and land on the target is presented in [4].
In this study, an automatic landing system based on image processing uses a camera to detect a basic object, applying edge detection to a circular target. First, the RGB (Red, Green, Blue) color space of the object is converted to grayscale. It is then converted to HSV (Hue, Saturation, Value), followed by a search for the appropriate contour values of the object so that the quadcopter can recognize the object's shape from the edges obtained. The result is displayed on a frame divided into several grid sections, and the quadcopter moves automatically according to the grid position in which the object is detected, until it reaches an automatic landing.

Method
This section discusses the real-time software-in-the-loop (SITL) simulation and the detailed steps for integrating SITL with image processing to implement autonomous landing.

Software-in-the-loop simulation
A UAV is a flight vehicle that operates without human interaction. Real experiments using UAVs are expensive; therefore, the performance of UAV systems should be analyzed before deployment [5]. For this reason, we utilize Software In The Loop (SITL), a simulation that makes it possible to run Plane, Copter, or Rover without the physical hardware. It consists of a build of the autopilot code (written in C++, with Python tooling) that runs the autopilot directly on a computer for testing; accordingly, the simulation is self-contained. SITL is an exceedingly practical tool, since an end product can misbehave in flight; it helps avoid hazardous situations and keeps costly equipment from being damaged. Two ports can be used to connect to SITL: port 14550 with the UDP protocol or port 57600 with the TCP protocol. The second component of the UAV system is the Ground Control Station (GCS), whose main features are as follows:
• Mission planning: the GCS prepares the mission plans and paths for the UAV according to the environment and mission requirements; the UAV then has to accomplish the mission following the planned trajectories.
• Navigation and position control: during the mission, UAVs are placed at several positions and different altitudes to survey the target area. The GCS therefore has to display and control the movements of the UAVs so the mission succeeds.

• Communication and data exchange: the GCS and UAVs should have direct, bi-directional communication. The GCS sends commands to the UAV according to the mission, and the UAV sends telemetry and data (images, videos, etc.) to the control station. The communication links between the various nodes are a necessary component of the flight system. There are two types of links, UAV-UAV and UAV-GCS; the UAV-UAV link ensures collaboration and coordination between UAVs to improve performance.
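The two connection options described above can be captured in a small helper that builds a MAVLink-style endpoint string; the host address and the string format (which follows common GCS conventions) are assumptions for illustration:

```python
def sitl_endpoint(protocol="udp", host="127.0.0.1"):
    """Build a MAVLink-style connection string for SITL.

    Port 14550 is the UDP option and port 57600 the TCP option;
    localhost is an assumed default for a same-machine SITL.
    """
    if protocol == "udp":
        port = 14550
    elif protocol == "tcp":
        port = 57600
    else:
        raise ValueError("protocol must be 'udp' or 'tcp'")
    return f"{protocol}:{host}:{port}"
```

A GCS or mission script (for example, via pymavlink or DroneKit) would then connect to SITL using this string.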
Experiment Results

Simulation experiment
The simulation follows the same mission sequence as the real-world experiment. First the copter takes off and flies to waypoint 1, hovers for a few seconds, and then opens the camera to scan for the landing pad; if no landing pad is detected, the copter stays hovering in the air. When the landing pad is detected, the copter starts to approach it; when the camera is centred on the middle of the landing pad and the height is < 1 m, the camera closes and the mode changes to LAND. The landing sequence then starts: the copter reduces motor rpm to descend onto the landing pad, and when it reaches height = 0 the copter disarms and the motors shut down.
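The mission sequence above can be expressed as a small state machine. The state names and the 20-pixel centring tolerance below are illustrative assumptions, not values from the paper:

```python
def next_state(state, pad_offset, height_m):
    """Advance the landing state machine one step.

    `pad_offset` is the (dx, dy) pixel offset of the pad from the frame
    centre, or None when no pad is detected; `height_m` is the altitude.
    """
    if state == "TAKEOFF":
        # Climb to the waypoint, then hover and open the camera.
        return "HOVER"
    if state == "HOVER":
        # Stay hovering until the landing pad is detected.
        return "APPROACH" if pad_offset is not None else "HOVER"
    if state == "APPROACH":
        centred = (pad_offset is not None
                   and max(abs(pad_offset[0]), abs(pad_offset[1])) < 20)
        # Switch to LAND only when centred on the pad and below 1 m.
        return "LAND" if centred and height_m < 1.0 else "APPROACH"
    if state == "LAND":
        # Reduce rpm until touchdown, then disarm.
        return "DISARM" if height_m <= 0.0 else "LAND"
    return state
```

Running this transition function once per camera frame reproduces the hover-until-detected, approach-until-centred, land-and-disarm behaviour described above.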

Fig. 2. Thresholding
In figure 2, we search for the best detection values through the HSV threshold parameters; from testing, the best values for the red landing pad are LH = 0, LS = 80, LV = 182, UH = 255, US = 255, and UV = 255. For the simulation, we run SITL on Linux (Ubuntu) and connect it to the GCS, in this case ArduPilot running on Windows; once the simulation has started and SITL is connected to the GCS, the simulation experiment can begin.
In figure 6, we begin the simulation experiment by running the Python script on the SITL Ubuntu machine; the mission sequence is the same as stated in figure 1. When the script starts, the vehicle condition becomes ARMED, meaning the UAV is ready to fly and has received the mission; the motors then start and the UAV begins ascending to the setpoint altitude. After reaching the setpoint altitude, the UAV opens the camera to approach the landing pad; once the landing pad is detected, the UAV maneuvers toward it.
Fig. 8. Landing sequence
After the approach is done and the position is correct (camera centred and height < 1 m), the UAV begins the landing sequence: it closes the camera and sets the mode to LAND, as shown in figure 8.
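The approach maneuver steers the copter using the pixel offset between the pad centre and the frame centre. A minimal proportional mapping from that offset to a PWM command is sketched below; the neutral value, span, and gain are assumptions, not the paper's tuned parameters:

```python
def offset_to_pwm(offset_px, frame_dim_px, pwm_mid=1500, pwm_span=100):
    """Map a pixel offset from the frame centre to an RC PWM value.

    1500 us is the conventional neutral stick value; the command
    saturates at +/- pwm_span so the copter never receives an
    extreme correction. Gain and span are illustrative assumptions.
    """
    gain = pwm_span / (frame_dim_px / 2)  # full deflection at the frame edge
    pwm = pwm_mid + gain * offset_px
    return int(max(pwm_mid - pwm_span, min(pwm_mid + pwm_span, pwm)))
```

One such mapping per axis (horizontal offset to roll, vertical offset to pitch) drives the copter toward the pad centre until the landing condition is met.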

Hardware Implementation
The next step is to test the system on real hardware. In this test we use landing pads of two different colors in two different testing areas: a grass field and an asphalt field. As shown in figure 9, we used a DIY UAV for the experiment, and as shown in figure 10, we use two colors, red and orange, with an inner diameter of 10 cm and an outer diameter of 40 cm.

Conclusions
From the experimental results and the discussion above, it can be concluded that Software In The Loop (SITL) simulation plays an important role in experiments, because UAV conditions can be simulated virtually with no physical hardware, and scripts can be combined to perform an experiment. The combination of the masking-threshold and contour methods produces fast detection results, allowing the autonomous experiments to perform well, with an average detection time of 8.55 ms. The experiments were carried out successfully, with an average object-detection accuracy of 94.76%, a good precision of 96.59%, and an average time from detection to landing of 19 seconds. These values indicate that the landing-pad color does not have a significant effect on autonomous landing.

Table 1. Red Landing Pad, Grass Field

Table 2. Red Landing Pad, Asphalt Field

Table 3. Red Landing Pad, Asphalt Field

Table 4. Red Landing Pad, Asphalt Field
From Tables 1-4, the comparison of each experiment can be seen below: