Prediction Method of Wax Deposition Rate in Crude Oil Pipeline Based on RBF Neural Network and Support Vector Machine

Wax-bearing crude oil precipitates wax crystals during pipeline transportation, which creates hidden safety dangers and reduces the economic benefit of the pipeline. In order to study the complex wax deposition on the pipe wall and to calculate deposition under other operating conditions, this paper uses an RBF neural network and a support vector machine to predict wax deposition data from the Huachi operation area. The results show that the errors of both methods meet the requirements; because the support vector machine is suited to modeling finite samples, its accuracy is found to be the higher of the two.


Introduction
Because waxy crude oil crystallizes during pipeline transportation, a wax deposition layer adheres to the pipe wall; by reducing the flow cross-section of the pipe, it increases the pressure required to transport the oil and hurts the economic benefit of pipeline transportation [1]. When the wax layer grows too thick, or even plugs the pipeline, it may cause a gelled-pipeline accident. In addition, in actual production and operation, pig launchers and receivers are usually installed at the beginning and end of the pipeline, and a pig or pig ball is put into the pipeline; carried along by the crude oil flow, it scrapes off the wax crystals along the pipeline [2]. The number of pigging runs is proportional to the cost, so mastering the wax deposition law of a pipeline and predicting deposition from that law, in order to plan the wax removal cycle scientifically, is of great significance.

Influencing factors of wax deposition in crude oil pipelines
The mechanisms of wax deposition are mainly molecular diffusion, shear diffusion, Brownian diffusion and gravity settling; the molecular diffusion mechanism is the one accepted by most scholars. The factors influencing the wax deposition rate include the oil flow temperature in the pipe, the ambient temperature along the pipeline, the temperature difference between the oil flow and the pipe wall, the crude oil flow rate, the crude oil composition and the deposition time [3].
The annular (loop) wax deposition experiment can keep the pumping conditions of the experiment consistent with those of the actual pipeline, so its deposits match those formed in the pipe. In this paper, seven relevant factors affecting wax deposition, namely wall temperature, crude oil temperature, viscosity, flow rate, wall shear stress, wall temperature gradient and wax molecular concentration gradient at the wall, were selected as the input data for simulation, and the wax deposition rate prediction model was established on this basis [4].

Introduction of RBF neural network
RBF (Radial Basis Function) networks are a kind of feedforward neural network; Broomhead and Lowe introduced radial basis functions into neural network design in 1988. The RBF neural network approximates complex nonlinear functions well and converges quickly, so it is widely used for data classification and prediction [5].

Structure principle of RBF neural network
An RBF neural network has three layers: an input layer, a hidden layer and an output layer. The role of each layer is shown in Table 1, and the specific structure is shown in Fig. 1. The 'basis' of each hidden unit is a radial basis function; together they form the hidden-layer space, and the connection between the first two layers is a nonlinear transformation. The sample vector is mapped directly through the input layer into the hidden-layer space, lifting the low-dimensional input data into a high-dimensional space. The hidden-layer outputs are then linearly weighted to give the output of the RBF network. The radial basis functions in an RBF network produce only local responses and adjust only locally important weights, which differs greatly from neural networks with global responses and global weight adjustment (e.g. the BP neural network).
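As an illustration of this structure, the following is a minimal sketch of an exact-interpolation RBF network in Python (the paper's own computations were done with Matlab's newrb; the class and function names here are hypothetical). Each training sample becomes a hidden-unit center, the Gaussian basis plays the role of the hidden layer, and the linear output weights are obtained by solving the resulting linear system:

```python
import math

def gaussian(r, spread):
    # Gaussian radial basis: phi(r) = exp(-(r / spread)^2)
    return math.exp(-(r / spread) ** 2)

def solve(A, y):
    # Gaussian elimination with partial pivoting for the small system A w = y.
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

class RBFNet:
    """Exact-interpolation RBF network: one hidden unit per training sample."""
    def __init__(self, spread=1.0):
        self.spread = spread

    def fit(self, X, y):
        self.centers = [x[:] for x in X]
        # Hidden-layer design matrix: Phi[i][j] = phi(||x_i - c_j||)
        Phi = [[gaussian(self._dist(x, c), self.spread) for c in self.centers]
               for x in X]
        self.weights = solve(Phi, y)
        return self

    def predict(self, x):
        # Output layer: linear combination of hidden-layer activations.
        return sum(w * gaussian(self._dist(x, c), self.spread)
                   for w, c in zip(self.weights, self.centers))

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
```

Matlab's newrb differs in that it adds neurons incrementally until the error goal is met rather than using one center per sample, but the layer roles are the same.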

Practical calculation
The experimental data are taken from Wang Xueliang's [6] experiments on crude oil from the Huachi area and comprise 38 groups of 8 columns: seven columns correspond to the seven input parameters, and the eighth column is the measured wax deposition rate.
Following the RBF neural network workflow, the data are first imported into Matlab and normalized with the command mapminmax(a',0,1). The model is then built with the newrb(X, T, GOAL, SPREAD, MN, DF) statement, with the parameters selected as in Table 2. The root mean square error of the model as neurons are added is shown in Fig. 2, and the absolute and relative errors are shown in Table 3. From Table 3 it can be seen that the errors of the RBF model on the eight groups of data in the prediction set vary considerably: the maximum relative error is 33.39%, the minimum 5.89%, and the average 14.37%. This error level is acceptable, so the model is feasible for predicting the wax deposition rate.
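For readers without Matlab, the normalization step can be reproduced in a few lines. The sketch below mirrors mapminmax's linear rescaling to [0, 1] for a single feature column (the function names are ours, not Matlab's; in practice each of the seven input columns is rescaled independently):

```python
def mapminmax(values, ymin=0.0, ymax=1.0):
    """Rescale values linearly to [ymin, ymax], like Matlab's mapminmax.
    Returns the scaled list plus (xmin, xmax) so the mapping can be inverted."""
    xmin, xmax = min(values), max(values)
    scale = (ymax - ymin) / (xmax - xmin)
    return [ymin + (v - xmin) * scale for v in values], (xmin, xmax)

def reverse(scaled, settings, ymin=0.0, ymax=1.0):
    """Invert the mapping, as Matlab's mapminmax('reverse', ...) does,
    to recover predictions in the original physical units."""
    xmin, xmax = settings
    return [xmin + (s - ymin) * (xmax - xmin) / (ymax - ymin) for s in scaled]
```

The inverse mapping matters here because the network's outputs are in normalized units and must be converted back to deposition rates before computing the relative errors quoted above.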
The amount of experimental data in this paper is limited, and for a neural network the prediction accuracy generally grows with the amount of training data [7]. However, a single wax deposition loop experiment has a long cycle, and the loop apparatus carries systematic error, so if the experimental conditions are set too close together the differences between measured values are blurred. To address this, the support vector machine is introduced to predict the wax deposition rate.

Introduction to Support Vector Machine
The Support Vector Machine (SVM) is a machine learning method based on the VC-dimension concept of statistical learning theory and the structural risk minimization principle. It is well suited to small-sample, nonlinear problems. In addition, the support vector machine largely avoids the 'curse of dimensionality', in which the cost of vector computation grows exponentially with the dimension.

Structural Principle of Support Vector Machine
The structure of the support vector machine is shown in Fig. 3.
The support vectors of an SVM are the sample points closest to the separating hyperplane; maximizing the minimum distance between these points and the hyperplane is the core problem that fixes the hyperplane's position. When the SVM is used for prediction, the trained model maps a new set of input parameters to a result, i.e. it places new prediction points relative to the hyperplane. Combined with the kernel method, the support vector machine can also handle complex nonlinear relationships involving many influencing factors.
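The prediction step just described reduces, for support vector regression, to evaluating the standard decision function f(x) = Σᵢ coefᵢ · K(svᵢ, x) + b over the trained support vectors. The Python fragment below is only an illustration of that formula (the paper itself trains with libsvm in Matlab), with an RBF kernel assumed:

```python
import math

def svr_predict(x, support_vectors, coefs, b, g):
    """epsilon-SVR decision function: f(x) = sum_i coef_i * K(sv_i, x) + b,
    using the RBF kernel K(u, v) = exp(-g * ||u - v||^2)."""
    k = lambda u, v: math.exp(-g * sum((a - c) ** 2 for a, c in zip(u, v)))
    return sum(a * k(sv, x) for a, sv in zip(coefs, support_vectors)) + b
```

Here support_vectors, coefs and b are the quantities the SVM trainer produces; only the samples with nonzero coefficients (the support vectors) contribute to the sum.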

Example Calculation and Comparison
In Matlab, the training statement svmtrain(c, b, '-s -t -c -g -p') is used to train the support vector machine model, and the choice of the parameters -s, -t, -c, -g and -p affects the prediction accuracy of the model.
-s is an integer in [0,4]: 0 and 1 select classification (C-SVC and ν-SVC), 2 selects the one-class SVM, and 3 and 4 select regression (ε-SVR and ν-SVR). -t is the type of kernel function; after testing, this paper selects the radial basis kernel function, as shown in Equation (2):

K(x_i, x_j) = exp(−‖x_i − x_j‖² / r)    (2)
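A small Python helper (not part of the paper's Matlab code) shows how this kernel is evaluated, written in the equivalent gamma form exp(−g‖x_i − x_j‖²) used by libsvm's -g option:

```python
import math

def rbf_kernel(xi, xj, g):
    """Radial basis kernel K(xi, xj) = exp(-g * ||xi - xj||^2),
    i.e. libsvm's '-t 2' kernel with g supplied via the '-g' option.
    Equivalently exp(-||xi - xj||^2 / r) with r = 1/g."""
    sq = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-g * sq)
```

Note that the kernel equals 1 when the two points coincide and decays toward 0 as they move apart, which is what gives the RBF kernel its local character.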
where r is the reciprocal of the -g parameter. In this paper, the grid method is used to optimize the SVM parameters: it traverses every point in the specified grid range and selects the grid point with the smallest error as the optimal parameter set. Searching over a larger range therefore prolongs the computation time. The model parameters are listed in Table 4, and the model's 29 support vectors are shown in Table 5. From Table 7 it can be seen that the maximum, minimum and average relative errors of the SVM model on the eight prediction groups are 31.54%, 3.40% and 13.49% respectively, all smaller than the corresponding errors of the RBF neural network. This model is feasible for predicting the wax deposition rate and, unlike the network, yields an explicit expression that can be evaluated directly after training.
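The grid traversal described above can be sketched as follows. The SVM training itself is omitted: cv_error stands in for the cross-validation call that libsvm would provide, and the toy error surface and grid ranges are purely illustrative assumptions.

```python
import math

def grid_search(cv_error, c_exps, g_exps):
    """Traverse every (C, g) grid point and keep the one with the smallest
    cross-validation error, as described in the text. cv_error is a
    caller-supplied function (in practice it would wrap SVM training
    plus cross-validation, e.g. libsvm's '-v' option)."""
    best = (None, None, math.inf)
    for ce in c_exps:
        for ge in g_exps:
            C, g = 2.0 ** ce, 2.0 ** ge  # libsvm grids usually use powers of two
            err = cv_error(C, g)
            if err < best[2]:
                best = (C, g, err)
    return best

# Toy stand-in error surface with a known minimum at C = 2^3, g = 2^-1.
toy = lambda C, g: (math.log2(C) - 3) ** 2 + (math.log2(g) + 1) ** 2
C, g, err = grid_search(toy, range(-2, 6), range(-4, 3))
```

Because the cost is the product of the two grid sizes times one cross-validated training run per point, widening either range lengthens the search linearly, which is the trade-off the text notes.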

Conclusions
In terms of error, both the RBF neural network model and the support vector machine model can predict the wax deposition rate of crude oil pipelines. The errors of the support vector machine model are slightly lower than those of the RBF neural network, though the difference is small; because the support vector machine requires fewer samples than a neural network, its accuracy is higher under limited-sample conditions.