Real-time debris flow detection using deep convolutional neural network and Jetson Nano

Abstract. This study aims to develop a system for real-time detection of debris flow motion using a deep convolutional neural network (CNN) and image processing techniques. The system, consisting of a pre-trained CNN model, an NVIDIA Jetson Nano, and a camera, was used to identify debris flow movement. The CNN model was trained on an image dataset derived from 12 debris flow videos obtained from small flume tests, large flume tests, and several recorded debris flow events. Applied to flume tests in the laboratory, the proposed system reached an F1 score of 72.6 to 100%. The real-time processing speed of the CNN model ranged from 2 to 21 frames per second (FPS) on the Jetson Nano. Both the accuracy and the processing speed depend on the resolution of the video input and the input size of the CNN model. The CNN model with an input size of 320 × 320 pixels running on video with a resolution of 800 × 480 pixels gives high accuracy (F1 = 99.2%) at an adequate processing speed (20 FPS) and is considered the optimal configuration for the Jetson Nano; thus, it can be applied in early detection and warning systems.


Introduction
Debris flows are moving mixtures of loose mud, soil, rocks, and water in steep channels that move downstream with high destructive potential [1,2]. An early debris flow detection system is therefore essential. To monitor and identify debris flow movement, several previous studies [3][4][5][6][7][8][9] employed sensor devices installed near torrents to measure ground vibration waves and infrasound waves. However, such sensor alarm systems are not yet widely used in many parts of the world owing to the high costs of installation, operation, and maintenance. Recording debris flow events with a camera may offer a practical alternative for debris flow detection and monitoring. The Jetson Nano device, developed by NVIDIA, is widely applied in the field of artificial intelligence because it is small, low-cost, power-efficient, and suited to embedded Internet of Things (IoT) applications; it is therefore well suited to remote monitoring of debris flow events in mountainous areas. The objective of this study is to present a potential method to detect debris flow motion using recorded data from a camera and a Jetson Nano. We built a CNN model based on the YOLO framework to detect and localize debris flow in the view of digital image recording devices. When a debris flow occurs, digital cameras installed at monitoring stations record images and transmit them to a program running on the Jetson Nano, which automatically analyses each image and identifies the appearance of debris flow. The system then sounds warnings or sends messages through the integrated system on the IoT platform. With this combination of artificial intelligence and image processing techniques, this study contributes a new approach to real-time debris flow detection for early warning and monitoring systems.

* Corresponding author: yuntkim@pknu.ac.kr

Methodology
The proposed approach for debris flow detection in this study is illustrated in Figure 1. To adopt the proposed method for debris flow motion detection and velocity calculation, some requirements must be met: (1) a trained CNN model for debris flow detection must be available; (2) a digital camera must be located in front of the flow and connected to the Jetson Nano processor; and (3) the proposed model only works in bright conditions and cannot operate at night.
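The pipeline described above (camera frames in, per-frame detection, alert out) can be sketched in Python. The detector below is a stand-in stub, not the paper's trained YOLOv4 model, and the thresholding logic and function names are illustrative assumptions only:

```python
def detect_debris_flow(frame):
    """Stand-in for the trained CNN detector (hypothetical stub).

    Returns a list of (confidence, bounding_box) detections. A real
    system would run YOLOv4 inference here; the stub flags "bright"
    frames so the control flow can be shown without a GPU or camera.
    """
    brightness = sum(frame) / len(frame)
    return [(0.9, (10, 10, 50, 50))] if brightness > 128 else []

def monitoring_loop(frames, alert):
    """Process a stream of frames; call alert() whenever debris flow
    is detected (e.g. to sound a siren or push an IoT message)."""
    detections_per_frame = []
    for frame in frames:
        boxes = detect_debris_flow(frame)
        detections_per_frame.append(len(boxes))
        if boxes:
            alert(boxes)
    return detections_per_frame
```

On the actual device the `frames` iterable would come from the camera feed and `alert` would trigger the warning sound or IoT message described in the introduction.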

Architecture of YOLO model
The proposed CNN model must identify debris flow movement from camera views. In addition to the accuracy of debris flow detection, the computational speed of the model is a major priority: the desired model must be able to operate in real time for monitoring and warning. After careful examination of many CNN algorithms, we selected YOLOv4 as the benchmark model [10]. Figure 2 presents the YOLOv4 network architecture used and the detailed parameters of its output features. The architecture of YOLOv4 is composed of a CSPDarknet53 backbone to extract features [11] and a detection layer to predict debris flow classes and bounding boxes.
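Detectors such as YOLOv4 output bounding boxes, and predicted boxes are commonly matched to ground-truth boxes by intersection over union (IoU); the paper does not give its matching code, so the sketch below is only the standard computation under the assumption of `(x1, y1, x2, y2)` corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the overlapping region (empty if boxes are disjoint)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted box is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.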

Metrics to evaluate the performance of the real-time system
Precision (Eq. 1), recall (Eq. 2), and F1 score (Eq. 3) were selected in this study to evaluate the performance of the model [12,13,14], where TP, FP, and FN denote true positives, false positives, and false negatives, respectively:

Precision = TP / (TP + FP)    (1)

Recall = TP / (TP + FN)    (2)

F1 = 2 × Precision × Recall / (Precision + Recall)    (3)

In addition, the number of frames per second (FPS) is used to evaluate the speed at which the system can process video.
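The three evaluation metrics, together with the FPS measure, translate directly into code; this is a minimal sketch of the standard definitions, with the function names chosen here for illustration:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1 score from counts of
    true positives, false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def frames_per_second(n_frames, elapsed_seconds):
    """Processing speed: frames handled per second of wall-clock time."""
    return n_frames / elapsed_seconds
```

For example, 80 correct detections with 20 false alarms and no missed frames give a precision of 0.8, a recall of 1.0, and an F1 score of about 0.89.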

Experimental setup
We designed an experimental system, shown in Figure 3, to test the proposed system in real time. The system consists of four main parts: (1) a small flume to produce debris flow motion; (2) a digital camera (a normal camera or a Pi camera) with a resolution of 3640 × 2160 pixels placed in front of the flume to record data; (3) a program written and installed on the Jetson Nano to process the images; and (4) a monitor connected to the Jetson Nano to display the results.

Dataset
To prepare a dataset for training the CNN model, images were derived from 12 debris flow videos: 5 small flume tests [15], 3 large flume tests selected from experiments by the United States Geological Survey [16], and 4 recorded debris flow events that occurred in the Illgraben area, Switzerland [17,18].

Results and discussion
The accuracy and speed of the model on three test videos are shown in Table 1 and Figure 5. Figure 6 presents examples of debris flow motion detection on the flume test. In test 1 (video 1), the proposed CNN model achieved a precision of 77.6 to 100%, a recall of 85.2 to 100%, an F1 score of 81.2 to 100%, and a processing speed of 2 to 2.8 FPS. In test 2 (video 2), the model achieved a precision of over 80.6%, a recall of over 88.5%, an F1 score of 87.3 to 100%, and a processing speed of 5 to 7.2 FPS. In test 3 (video 3), the model achieved a precision of over 62.4%, a recall of over 86.9%, an F1 score of 72.6 to 100%, and a processing speed of 10 to 21 FPS. The results of the three tests show that the real-time processing speed of the model depends on the resolution of the video input and the input size of the CNN model: the higher the resolution, the slower the processing, and the larger the CNN input size, the slower the processing. According to the test results on 3 camera resolutions and 5 CNN model sizes, the CNN model with an input size of 320 × 320 pixels running on video with a resolution of 800 × 480 pixels gives high accuracy (F1 = 99.2%) and a fast enough processing speed (20 FPS). Therefore, this configuration is considered optimal for the Jetson Nano device.
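The selection of the optimal configuration described above amounts to picking the most accurate model among those fast enough for real-time use. The sketch below illustrates that trade-off; only the 320 × 320 model at 800 × 480 (F1 = 99.2%, 20 FPS) is reported in the text, and the other rows and the 15 FPS real-time threshold are hypothetical placeholders:

```python
# Measured accuracy/speed per (CNN input size, video resolution) pair.
# Only the last row comes from the text; the others are hypothetical.
configs = [
    {"model": 608, "resolution": "high", "f1": 100.0, "fps": 2},
    {"model": 416, "resolution": "mid",  "f1": 99.5,  "fps": 7},
    {"model": 320, "resolution": "800x480", "f1": 99.2, "fps": 20},
]

def pick_optimal(configs, min_fps=15):
    """Among configurations fast enough for real-time monitoring,
    return the one with the highest F1 score (None if none qualify)."""
    fast_enough = [c for c in configs if c["fps"] >= min_fps]
    return max(fast_enough, key=lambda c: c["f1"]) if fast_enough else None
```

With these numbers, only the 320 × 320 model meets the speed requirement, so it is selected despite its slightly lower F1 score, matching the paper's conclusion.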

Conclusions
The goal of this study was to introduce a novel method for detecting debris flow motion. The main conclusions are as follows: The proposed CNN model successfully detected debris flow motion from camera views. The results indicated that the detection model achieved an F1 score of 72.6 to 100% on the small flume tests. The accuracy of the model depends on the input size of the CNN model.
The real-time processing speed of the model ranged from 2 to 21 FPS on the Jetson Nano. The processing speed depends on the resolution of the video input and the input size of the CNN model: the higher the resolution, the slower the processing, and the larger the CNN input size, the slower the processing.
The CNN model with an input size of 320 × 320 pixels running on video with a resolution of 800 × 480 pixels gives high accuracy (F1 = 99.2%) at an adequate processing speed (20 FPS) and is considered the optimal configuration for the Jetson Nano device.