Radar-Based Precipitation Nowcasting Using Deep Learning Techniques

Abstract. Nowcasting is an emerging area of meteorology that focuses on accurately anticipating the severity of short-term rainfall for a particular location, and it is essential to many facets of society. Owing to its significance, researchers have experimented with neural network approaches for predicting short-term rainfall. This study proposes a novel method that merges Convolutional Neural Network and Long Short-Term Memory neural networks on a radar echo dataset. The model was first tested on the synthetic Moving MNIST dataset before being applied to the actual radar image dataset. Given the previous radar images, the model successfully predicted future image sequences and obtained an accuracy of more than 90%.


Introduction
Precipitation nowcasting can help in rainfall prediction and alert areas that are prone to heavy rainfall. Many models have previously been used for precipitation nowcasting, such as classical machine learning methods and CNNs [1]. Here we use ConvLSTM, which can remember the previous input sequence of radar images and thus help produce better predictions. The main target is to estimate the most probable sequence of subsequent frames based on a set of input frames [2]. Radar reflections of precipitation will be used as the input sequence for nowcasting [3] [4]. Because the radar pictures utilized are sequential radar echo frames at 6-minute intervals, a single-image forecast 6 minutes into the future is termed nowcasting, according to the definition of the term. Experiments were done to estimate 4-6 frames into the future, which amounts to roughly 42 minutes to 1 hour. A Convolutional Long Short-Term Memory architecture was used to make the predictions.

Literature Survey
1. In paper [1], the authors employed a network model called con3D-GRU, which can receive 3D pictures as input while retaining specific radar map properties, addressing the challenge of using a GRU directly. This model predicts the intensity of upcoming rainfall over short time intervals, such as the next 1-2 hours, with increasing accuracy.
2. In paper [2], the authors forecasted short-term rainfall using combined neural network architectures on a radar dataset. Because the radar pictures used are successive time-interval radar echo frames, an image can be forecast up to several minutes into the future; they aimed to estimate images around 40 minutes into the future. 3. The authors of [3] created the RainNet convolutional neural network model by combining U-Net and SegNet, which aids training and testing. Their model takes precipitation images at the t-15, t-10, t-5, and t minute intervals and predicts the t+5-minute precipitation using the U-Net and RainNet models.

4. The authors in [4] developed a deep learning system to enhance training and assessment. They adopted the U-Net segmentation architecture for this purpose, which allowed them to swiftly train on a big dataset, and the model tried to reduce computational complexity. 5. The design proposed in [5] is based on and extends the U-Net architecture, which is made up of a U-shaped encoder-decoder structure; the encoder employs max-pooling and double convolution for better results. 6. The authors in [6] presented a DL application to the problem of precipitation forecasting, namely high-resolution short-term precipitation prediction. They approach prediction as a frame-to-frame translation problem, making use of the capabilities of the prevalent U-Net convolutional neural network. This surpasses three commonly used models: optical flow, image persistence, and NOAA's one-hour numerical High-Resolution Rapid Refresh precipitation forecast. 7. Paper [7] presented PredRNN, a unique recurrent network for spatiotemporal forecasting that takes into account both temporal and spatial deformations.
Memory states crisscross between time steps and stacked LSTM layers. They also presented a novel spatiotemporal LSTM unit with a gate-controlled dual memory structure as a fundamental component of PredRNN; the proposed model greatly reduced computational complexity. 8. In [8], the authors forecasted rainfall by observing radar echoes to construct motion fields and extrapolate rainfall regions over the next several hours. The study experimented with numerous severe rainfall instances and summarizes multi-year verification findings.
3 Proposed Work

Dataset
The dataset we utilized will be made available on Harvard Dataverse. It was created from radar reflections of the Weather Surveillance Radar in Shanghai, China. It includes 170,000 Doppler radar intensity images recorded between the last quarter of 2015 and the third quarter of 2018, with a minimum gap of 6 minutes between successive frames. Every image is a 501 x 501-pixel picture covering an area of almost 501 x 501 square kilometers. Despite the enormous number of radar pictures, the dataset had many photos with no forecasting content, which contributed nothing to model training; the data also contained noise and non-continuous stretches, which resulted in incorrect predictions from the baseline model. To address these issues, we chose a single day and 80 sequenced images with a reasonable depiction of forecasting content and little or no radar noise for training and testing our models. The 501 x 501 images were preprocessed by downscaling to 64 x 64 pixels (mostly to improve efficiency), zero-centering, and cropping. The next sequence of images was predicted using a sliding window spanning 16 images, or 96 minutes.
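The preprocessing steps above (crop, downscale to 64 x 64, zero-center, sliding window of 16 frames) can be sketched as follows. This is a minimal sketch under assumed array shapes; the function names and the block-averaging downscale are our own choices, since the paper does not publish its code:

```python
import numpy as np

def preprocess(frames):
    """frames: (N, 501, 501) uint8 radar echoes (assumed layout)."""
    # Centre-crop 501x501 -> 448x448 so the image divides evenly into 64x64 blocks
    off = (501 - 448) // 2
    crop = frames[:, off:off + 448, off:off + 448].astype(np.float32)
    # Downscale by averaging 7x7 blocks -> 64x64 (one simple way to downscale)
    small = crop.reshape(-1, 64, 7, 64, 7).mean(axis=(2, 4))
    # Scale to [0, 1] and zero-centre
    small /= 255.0
    small -= small.mean()
    return small[..., np.newaxis]  # add channel axis -> (N, 64, 64, 1)

def sliding_windows(seq, length=16):
    """Split a frame sequence into overlapping windows of `length` frames
    (16 frames x 6 min = 96 minutes, as described in the text)."""
    return np.stack([seq[i:i + length] for i in range(len(seq) - length + 1)])
```

With the 80 selected frames, this yields 65 overlapping 16-frame training windows.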

Design Methodology
Convolutional networks excel mainly at recognizing patterns and extracting spatial and temporal visual characteristics, while LSTM neural networks have what is termed "memory" [5]. This arrangement is anticipated to detect the patterns in radar picture sequences and predict precipitation. The model we built uses not only the present input sample but also the previously interpreted input [6]. Using hidden states, the model maintains information from the sequence of pictures that has already passed through it. LSTMs outperform typical feed-forward CNNs and RNNs in that they preferentially "remember" recognized patterns for longer periods of time. Firstly, we took radar echo images from the Doppler dataset (WSR-88D, NEXRAD dataset). We chose a single day and 80 sequenced images with a decent depiction of forecasting content and little or no radar noise for training and testing our base models [7]. We then fed the input sequence of radar images to the ConvLSTM model, which tries to predict the next image sequence, and we compared this sequence with the ground truth images present in the dataset for the same time [8]. The logcosh function is then applied to compare the predicted images, as shown in Fig. 1. Finally, we evaluate the performance using accuracy, FAR, and POD.
➢ The first two ConvLSTM2D layers are used for encoding the network.
➢ These two layers have a (5,5) filter size and 128 kernels with same padding, and take the output of the Input layer as input.
➢ The next two layers are used in the forecasting network; they also have a (5,5) filter size and 128 kernels with same padding.
➢ The third convolution layer takes the output of the encoding network as input.
➢ The outputs of the second and third layers are then added and given as input to the fourth ConvLSTM layer.
➢ The last layer is given the concatenation of the outputs of the third and fourth layers as input.
➢ The last layer uses Conv3D with a (5,5,5) filter size and same padding.
➢ In the last layer, the Adam optimizer with a sigmoid activation function is used.

➢ The sigmoid function, shown in Equation (1) as σ(x) = 1 / (1 + e^(-x)), is useful for mapping output values into the range 0 to 1; in this way the model's output can be interpreted as a probability.
➢ The logcosh loss function shown in Equation (2), L(y, ŷ) = Σ log(cosh(ŷ - y)), is used because it gives better predictions than MSE and is capable of enduring outliers in a dataset. Table 1 gives the metric values after evaluation of the deep learning models, along with their respective loss functions and optimizers. We see that the ConvLSTM model obtained an accuracy of nearly 92% over 80 epochs, while the Encoder-Decoder model gives better accuracy, 92.90%, over a run of 25 epochs.
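The layer stack in the bullets above can be sketched in Keras as follows. This is a minimal sketch, not the authors' code: the exact wiring of the skip connections, the `return_sequences` settings, and the default hyperparameters are assumptions based on the textual description.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_convlstm(frames=16, size=64, filters=128):
    inp = layers.Input(shape=(frames, size, size, 1))
    # Encoding network: two ConvLSTM2D layers, (5,5) filters, same padding
    x1 = layers.ConvLSTM2D(filters, (5, 5), padding="same", return_sequences=True)(inp)
    x2 = layers.ConvLSTM2D(filters, (5, 5), padding="same", return_sequences=True)(x1)
    # Forecasting network: the third ConvLSTM layer takes the encoder output
    x3 = layers.ConvLSTM2D(filters, (5, 5), padding="same", return_sequences=True)(x2)
    # Outputs of the second and third layers are added and fed to the fourth layer
    x4 = layers.ConvLSTM2D(filters, (5, 5), padding="same", return_sequences=True)(
        layers.Add()([x2, x3]))
    # Last layer: Conv3D with a (5,5,5) filter and sigmoid activation, fed by
    # the concatenation of the third and fourth layers' outputs
    out = layers.Conv3D(1, (5, 5, 5), padding="same", activation="sigmoid")(
        layers.Concatenate()([x3, x4]))
    model = models.Model(inp, out)
    # logcosh loss with the Adam optimizer, as described in the text
    model.compile(optimizer="adam", loss="log_cosh", metrics=["accuracy"])
    return model
```

The sigmoid output keeps predicted pixel intensities in [0, 1], matching the zero-to-one scaling applied during preprocessing.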

5.3 Visualizing the Evaluation Metrics
➢ The Probability of Detection (POD) is a statistical measure of how effectively an inspection process finds important events.
➢ The False Alarm Ratio (FAR, also known as fall-out or false positive ratio) is the likelihood of incorrectly rejecting the null hypothesis for a particular test when using multiple comparisons in statistics. It is computed as the ratio of the number of negative events that were mistakenly classified as positive (false positives) to the total number of actual negative events.
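These scores can be computed from a binary contingency table; a small sketch follows (the function and array names are hypothetical). Note that the definition in the text, false positives over all actual negatives, corresponds to the false positive rate (fall-out), whereas the FAR conventionally reported in nowcasting is false alarms over all predicted positives; both are computed here:

```python
import numpy as np

def detection_scores(pred, truth, threshold=0.5):
    """POD, FAR and false positive rate for binarised precipitation fields."""
    p = pred >= threshold
    t = truth >= threshold
    hits = np.sum(p & t)            # true positives
    misses = np.sum(~p & t)         # false negatives
    false_alarms = np.sum(p & ~t)   # false positives
    correct_negs = np.sum(~p & ~t)  # true negatives
    pod = hits / (hits + misses)                        # observed rain that was detected
    far = false_alarms / (hits + false_alarms)          # predicted rain that was wrong
    fpr = false_alarms / (false_alarms + correct_negs)  # fall-out
    return float(pod), float(far), float(fpr)
```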

Conclusion
Past models have used Numerical Weather Prediction (NWP), which is less suitable for short-term forecasting, whereas optical flow methods give more accurate predictions but have limitations. Other architectures have also been used for precipitation nowcasting, such as ConvGRU, which uses a Gated Recurrent Unit along with convolutional layers to give better performance. In our model we used a Convolutional LSTM, which tries to remember the previous input sequences; this helps the model understand the features present in the input radar echo images and predict the future 6-minute image sequences, which translates to 42 minutes to 1 hour into the future.

Fig. 1. The architecture of the proposed method

Fig. 2. Input and predicted sequence with respective ground truth
The sequence of images shown in Fig. 2 is a representation of consecutive radar images with a time-interval gap of 6 minutes between images. The first four images are given as input for training the model: on the left side is the input sequence, and on the right side are the ground truth images taken from the radar dataset for comparison. The next four images show the predicted sequence, reaching 24 minutes into the future, on the left, along with the ground truth values at the respective time intervals on the right.

Fig. 3. Model accuracy on training vs. testing data
➢ The accuracy metric is plotted as a graph for visualization; the training and testing accuracy values are shown in Fig. 3.
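Curves like those in Fig. 3 (and the loss curves in Fig. 4) can be produced from a Keras-style training history; a minimal sketch, where the history dictionary and the output file name are hypothetical:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is needed
import matplotlib.pyplot as plt

def plot_metric(history, metric="accuracy", out_path="model_accuracy.png"):
    """history: dict of per-epoch values, e.g. {'accuracy': [...], 'val_accuracy': [...]}."""
    plt.figure()
    plt.plot(history[metric], label="train")
    plt.plot(history["val_" + metric], label="test")
    plt.xlabel("epoch")
    plt.ylabel(metric)
    plt.title("Model " + metric)
    plt.legend()
    plt.savefig(out_path)
    plt.close()
    return out_path
```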

Fig. 4. Model loss on training vs. testing data
➢ In Fig. 4 the model loss on the training and testing data is plotted for comparison; the loss is lower on the test data than on the training data.
E3S Web of Conferences 405, 04003 (2023), ICSTCE 2023. https://doi.org/10.1051/e3sconf/202340504003

Table 1. Evaluation metrics for Model 1 and Model 2