Development of a neural network control system for an isopentane-isoamylene fraction rectification column

This article presents the development of a neural network control system for a distillation column separating the isopentane-isoamylene fraction, with the aim of increasing the efficiency of rectification process control using intelligent technologies. The feasibility of controlling the column parameters with a neural network controller is justified. The research results were processed using the "MatLab" software.


Introduction
Mitigating the carbon footprint is a paramount challenge of our era: it brings human influence on the environment closer to a sustainable level and helps alleviate the consequences of global climate change.
The carbon footprint is the cumulative footprint of human activities that affect global climate change. It includes the footprint of industrial enterprises, agriculture, and people's private lives [1]. Human activity releases gases that, together with those already present in the atmosphere, intensify the greenhouse effect. These include CO2 (carbon dioxide, which accounts for the largest share of emissions), methane (smaller in volume but a powerful greenhouse gas), nitrous oxide, refrigerants, and others.
In contemporary Russia, notable attention is paid to improving fuel quality. Consequently, the standards for the quality and safety of the end product rise substantially, which strongly affects its competitiveness. The extraction of the isopentane-isoamylene fraction is a pivotal stage in the technological sequence for producing fuel additives [2].
The focus of this article is the creation of an adaptive system for managing column parameters during rectification of the isopentane-isoamylene fraction. The aim is to enhance the efficiency of rectification processes through the integration of intelligent technologies. The rationale for employing intelligent approaches, particularly a neural network-based controller, to oversee column parameters is substantiated. The research findings are analyzed and processed using the "MatLab" software.

Basic process patterns and definition of input and output data
To maintain the stability of the rectification process, it is advisable to control the process parameters in the columns. Based on numerous studies, an artificial neural network was developed to control the main parameters of the distillation column, namely the level in the column bottom (pos. LC-23-1), the cube temperature (pos. TC-26-1), the flow rate of the S5-S6 fraction (pos. FC-10-1), and the steam flow to the reboilers (pos. FC-30-1) (Figure 1) [3].
Inside the cube of the distillation column, the temperature (pos. 26-1) must be maintained between 40 and 75 °C; the steam flow to the reboilers (pos. 30-1) must be in the range 0 to 50 m³/h; the level in the column bottom (pos. 23-1) must remain at 20 to 80%; and the flow rate of the overflowing bottom liquid (pos. 10-1) varies from 0 to 8 m³/h.
The rectification process is continuous, in-line. The S5-S6 fraction is fed to trays 10, 13, and 15 of column pos. 17/1; its flow rate is controlled by valve pos. 10-3. To maintain the rectification process, steam at a pressure of 10 kgf/cm² is supplied to reboilers pos. 18-1 and 18-2; its flow rate is controlled by valve pos. 30-3. The temperature in the column bottom is recorded by instrument pos. 26-1 and controlled by valve pos. 26-3. The level in the column bottom is maintained by a pump controlled by frequency converter pos. 23-3. Isopentane, as the bottom liquid, is pumped out, while lighter hydrocarbons are withdrawn from the top of column pos. 17/1.
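The operating limits listed above can be captured as a simple range check. This is an illustrative sketch only; the variable names and the dictionary layout are assumptions, not part of the plant's actual control system.

```python
# Operating ranges for the column parameters, as stated in the text.
LIMITS = {
    "cube_temperature_C": (40.0, 75.0),   # pos. 26-1
    "steam_flow_m3h":     (0.0, 50.0),    # pos. 30-1
    "bottom_level_pct":   (20.0, 80.0),   # pos. 23-1
    "bottom_liquid_m3h":  (0.0, 8.0),     # pos. 10-1
}

def out_of_range(readings: dict) -> list:
    """Return the names of parameters outside their permitted ranges."""
    violations = []
    for name, value in readings.items():
        lo, hi = LIMITS[name]
        if not (lo <= value <= hi):
            violations.append(name)
    return violations
```

Such a check would typically run upstream of any controller, neural or conventional, to flag readings that leave the permitted operating envelope.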
Conventional PID controllers are insufficient for this task, as they cannot account for the nonlinear nature of the process and the interdependencies among its parameters.
Currently, the most promising approach is to employ neural network-based controllers for overseeing technological parameters. Developing such a system yields several benefits: it enhances the controller's adaptability, raises the quality of control, and thereby improves the overall quality of the technological process [4].
To solve the problem with a neural network, data must first be collected for training. The output parameters, the degrees of opening of valves pos. 10-3 and 30-3, depend on the input parameters according to the following relationships. The outputs are calculated by formulas in which F1 is the degree of opening of the valve supplying steam for heating the column bottom; G is the water vapor flow rate, kg/h; P is the pressure at the valve inlet, kgf/cm²; T is the medium temperature at the valve inlet, °C; and F2 is the degree of opening of the valve supplying the flowing isopentane-isoamylene fraction.
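As a sketch of this data-collection step, the measured inputs and the corresponding valve openings can be arranged into input and target matrices, one column per example, which is the layout Matlab's nntool expects. The numeric values below are invented for illustration and are not taken from the plant.

```python
import numpy as np

# Each sample: four measured inputs (cube temperature T, bottom level L,
# fraction flow F, steam flow Q) mapped to two targets (valve openings
# pos. 30-3 and pos. 10-3). Values are illustrative only.
samples = [
    # (T, degC; L, %; F, m3/h; Q, m3/h) -> (F1, %; F2, %)
    ((55.0, 45.0, 3.2, 22.0), (16.6, 4.2)),
    ((60.0, 50.0, 4.0, 25.0), (18.1, 5.0)),
    ((48.0, 38.0, 2.5, 18.0), (14.3, 3.1)),
]

# Column-per-example layout: inputs (4, n_samples), targets (2, n_samples).
X = np.array([s[0] for s in samples]).T
Y = np.array([s[1] for s in samples]).T
```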

Review of existing neural networks
In the realm of artificial neural networks, various architectures and learning algorithms have been developed to tackle a wide range of tasks. In this review, we explore several notable neural network models and methodologies, each with its own characteristics and applications. Our focus will be on two specific types of networks: the NARX (nonlinear autoregressive with exogenous inputs) network and the feed-forward backpropagation network. These networks can learn to predict time series using past values of the same series, feedback inputs, and external time series. Before delving into these specific networks, let us briefly introduce each of the neural network models and methods discussed in the following sections [5].
1. Cascade-forward network. A cascade-forward neural network is similar to a feed-forward backpropagation network, but additionally connects the input data to the layer producing the resulting values. Comparative analysis of the neural networks used was carried out in terms of the number of training steps N (epochs) and the approximation accuracy.
2. Competitive network. Competitive learning is an unsupervised learning method for artificial neural networks in which nodes compete to respond to a specific subset of the input data. This form of training, related to Hebbian learning, enhances the distinctiveness of each node in the network and is particularly effective at identifying clusters in datasets.
3. Elman network with error backpropagation (Elman backprop). Elman networks with one or more hidden layers can effectively capture dynamic input-output relationships, provided the hidden layers contain enough neurons.
4. Feed-forward network with error backpropagation (feed-forward backprop). The central idea of this approach is to transmit error signals from the network's outputs back towards its inputs, in the direction opposite to the usual forward propagation of signals.
5. Feed-forward time-delay network. Time-delay networks are similar to feed-forward networks, except that a tapped delay line is associated with the input weight. This gives the network a finite dynamic response to time-series inputs.
6. Generalized regression network. The generalized regression neural network (GRNN) shares similarities with the probabilistic neural network (PNN), but is tailored to regression rather than classification tasks. As in the probabilistic neural network, a Gaussian kernel function is placed at the location of each training data point. Each data point can be viewed as expressing confidence in the height of the response surface at that location, with confidence diminishing with distance from the point. In essence, the GRNN stores all training data points and uses them to estimate the response at arbitrary points: the network's output is computed as a weighted average of the outputs of all training points, with weights determined by the proximity of those points to the evaluation location [6].
7. Hopfield neural network. A Hopfield network is a fully connected neural network with a symmetric weight matrix. It is designed so that its response to the stored reference "images" consists of those images themselves: if a slightly distorted image is applied to the input, it is restored, and the original image is obtained as the response. The Hopfield network thus corrects errors and interference.
8. Network for classification of input vectors (LVQ). The LVQ network is a two-layer network. The first layer uses negdist weighting, netsum accumulation, and compet activation; the second layer uses dotprod weighting, netsum accumulation, and purelin activation. The layers have no biases. The weights of the first layer are initialized with the midpoint function; the weights of the second layer are set so that each output neuron corresponds to a single hidden-layer neuron. Adaptation and training are performed with the adaptwb and trainwb1 functions, which modify the weights of the first layer using predetermined learning functions, which may be learnlv1 or learnlv2.
9. NARX network. A nonlinear autoregressive network with exogenous inputs (NARX) is a recurrent dynamic network with feedback connections spanning several layers. The NARX model is based on the linear ARX model, which is commonly used in time-series modeling.
10. Series-parallel NARX network (NARX Series-Parallel). Series-parallel NARX networks differ from conventional NARX networks only in that the output is fed back to the input of the feed-forward neural network as part of the standard NARX architecture.
11. Perceptron. These networks are known for their speed and reliability in the tasks they can handle. Moreover, understanding how perceptrons work lays a solid foundation for grasping more intricate networks.
12. Probabilistic network. Probabilistic neural networks find application in classification tasks. On presentation of an input, the first layer computes the distances between the input vector and the training input vectors, forming a vector that represents the proximity of the input to the training data. The next layer sums these contributions for each input class, yielding a probability vector as its net output. Finally, the compet transfer function of the second layer selects the highest of these probabilities, producing an output of 1 for that class and 0 for the remaining classes [7].
13. Radial basis (activation function). The radial basis function reaches a peak of 1 when its input is 0. As the distance between w and p decreases, the output rises. In this way, the radial basis neuron acts as a detector, producing an output of 1 whenever the input p matches its weight vector w.
14. Radial basis network with a minimal number of neurons (Radial basis (fewer neurons)). It differs from the previous network only in having fewer neurons.
15. Self-organizing map. Used both for clustering data and for reducing its dimensionality. These maps are inspired by the sensory and motor maps in mammalian brains, which also appear to organize information topologically in an automatic way.
Of this entire list, our study requires the NARX and feed-forward backprop networks. They can learn to predict a time series from past values of the same series, feedback inputs, and other, external (exogenous) time series.
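Since a NARX network predicts a time series from its own past values and external inputs, its training data take the form of lagged regressors. A minimal sketch of forming such regressors follows; the lag orders ny and nu are illustrative assumptions, not values from the article.

```python
import numpy as np

def narx_regressors(y, u, ny=2, nu=2):
    """Build rows [y[t-1..t-ny], u[t-1..t-nu]] for predicting y[t].

    y: output time series; u: exogenous input series (same length).
    Returns (X, T): regressor matrix and the targets y[t].
    """
    start = max(ny, nu)
    rows, targets = [], []
    for t in range(start, len(y)):
        # Most recent lag first, as is conventional for ARX-style models.
        rows.append(np.concatenate([y[t - ny:t][::-1], u[t - nu:t][::-1]]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)
```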

Development of neural network
To develop a self-learning neural network, the Matlab software environment with the Neural Network Toolbox installed was used.
To create a neural network, the necessary input data must first be selected. After this preparatory work, the artificial neural network can be designed. Once the neural network is created, it is trained according to previously specified parameters. After ANN training is completed, the network is tested on a sample of examples [8].

Construction and training of the ANN in Matlab
To control the rectification process, a NARX network with feedback is selected, consisting of two layers: a hidden layer and an output layer.
For the hidden layer, the required number of neurons was experimentally set to 50. Network training is carried out using the error backpropagation algorithm. We set the upper limit for the learning epochs to 5000, defining the number of epochs after which training will cease, and choose an interval of five epochs between displays. We also define the target (convergence) criterion: 0.0001, the threshold value at which training is deemed complete [9].
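These settings (50 hidden neurons, a 5000-epoch limit, and a 10^-4 MSE goal) can be sketched outside Matlab as follows. This is a minimal NumPy stand-in trained with plain gradient descent on toy data; the article itself trains on plant data with Matlab's nntool and the Levenberg-Marquardt method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the plant data: one input, one output, y = 0.5 * x.
X = rng.uniform(-1.0, 1.0, (1, 200))
Y = 0.5 * X

# One hidden layer of 50 tanh neurons and a linear output, as in the text.
n_hidden, lr = 50, 0.01
W1 = rng.normal(0.0, 0.5, (n_hidden, 1)); b1 = np.zeros((n_hidden, 1))
W2 = rng.normal(0.0, 0.1, (1, n_hidden)); b2 = np.zeros((1, 1))

goal, max_epochs = 1e-4, 5000          # convergence criterion and epoch limit
mse = float("inf")
for epoch in range(1, max_epochs + 1):
    H = np.tanh(W1 @ X + b1)           # hidden-layer activations
    out = W2 @ H + b2                  # linear output layer
    err = out - Y
    mse = float(np.mean(err ** 2))
    if mse <= goal:                    # stop once the goal is reached
        break
    # Gradient-descent update (the paper uses Levenberg-Marquardt instead).
    n = X.shape[1]
    dW2 = 2.0 * err @ H.T / n
    db2 = 2.0 * err.mean(axis=1, keepdims=True)
    dH = (W2.T @ err) * (1.0 - H ** 2)
    dW1 = 2.0 * dH @ X.T / n
    db1 = 2.0 * dH.mean(axis=1, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```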

NARX Neural Network
We implement the neural network in Matlab. To do this, the "nntool" command opens the "Neural Network/Data Manager" window, the initial configuration tool for neural networks. In the "Input Data" and "Target Data" items, the input and output (target) data are loaded using the "Import" button. The "New" button then creates the NARX neural network, and the "Open" button opens the Neural Network Training tab [10].
Figure 2 shows the selected structure of the artificial neural network. The input of the neural network receives the signal x; in our case, this comprises the column cube temperature (pos. 26-1), the column bottom level (pos. 23-1), the flow rate of the S5-S6 fraction (pos. 10-1), and the steam flow (pos. 30-1). The adder "+" multiplies each input by its weight wi and sums the weighted inputs. The value then passes through the activation function of the corresponding layer, and the outputs are calculated: the opening of the steam supply valve (pos. 30-3) and the opening of the valve supplying the S5-S6 fraction (pos. 10-3). The training results are strongly influenced by the choice of the network's initial weights. Initial values close to the optimum are considered ideal: they not only avoid delays at local minima but can also significantly speed up training. Unfortunately, there is no universal weighting method that guarantees the best starting point for any problem. For this reason, most practical implementations use random weights uniformly distributed over a given interval. The values of the weights and biases found for all layers can be viewed and edited in the View/Edit Weights tab.
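The uniform random initialization described above can be sketched as a one-line helper. The interval (-0.5, 0.5) and the layer size are illustrative assumptions; the article does not state the interval actually used.

```python
import numpy as np

rng = np.random.default_rng(42)

def init_weights(n_out: int, n_in: int, interval=(-0.5, 0.5)) -> np.ndarray:
    """Initial layer weights drawn uniformly from a given interval."""
    lo, hi = interval
    return rng.uniform(lo, hi, size=(n_out, n_in))
```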
Figure 3 demonstrates that after 5000 iterations the mean square error reaches 10^-6. This indicates that the examined values are closely clustered, so the error among them is minimal. The training process employs early stopping as a strategy against overfitting [7]. The mean square error (MSE) serves as an indicator of network performance, reflecting the mean squared deviation [11].
An alternative way to evaluate the results of neural network training is to build regression functions from the outcomes (Figure 4).
The regression graph (Figure 4) shows the linear regression of the network's learning results on three analyzed subsets (training, validation, test) and on all sets (all). For each result, the correlation coefficient R is calculated, a graph is plotted, and a regression equation of the form Output = a × Target + b is produced; if the network outputs coincide completely with the target values (R = 1, a = 1, b = 0), the network approximates the function perfectly. As Figure 4 shows, the neural network approximates the function with remarkable accuracy [12]. The correlation coefficient R is 1, signifying a strong relationship between the variables and underscoring the high precision of the constructed neural network. The training status graphs depicted in Figure 5 are also generated. The first graph illustrates that a gradient coefficient closer to zero corresponds to more precise training and testing of the neural network. In the validation failure graph (depicting iterations where the validation MSE increased), the validation error trends towards 0 over 5000 iterations. The magnitude of this error signifies the precision of the model's fit on the training dataset [13].
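The regression check described above amounts to a short calculation: fit Output = a × Target + b and compute the correlation coefficient R. The sketch below is a generic reimplementation, not the article's Matlab code.

```python
import numpy as np

def regression_check(target: np.ndarray, output: np.ndarray):
    """Fit output = a*target + b and return (a, b, R).

    A perfect network gives a = 1, b = 0, R = 1.
    """
    a, b = np.polyfit(target, output, 1)   # least-squares line fit
    r = np.corrcoef(target, output)[0, 1]  # correlation coefficient R
    return a, b, r
```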
The mu graph shows the change in the learning parameter µ under the Levenberg-Marquardt method. On reaching the 5000th iteration, the parameter reaches 10^-7.
Gradient is the value of the gradient at each iteration, shown on a logarithmic scale. The value 0.000031986 means that the lower point of a local minimum of the criterion function has been reached [14].

Feed-forward backprop neural network
We implement the neural network in Matlab. To do this, the "nntool" command opens the "Neural Network/Data Manager" window, the initial configuration tool for neural networks. In the "Input Data" and "Target Data" items, the input and output (target) data are loaded using the "Import" button. The "New" button then creates the feed-forward backprop neural network, and the "Open" button opens the Neural Network Training tab [15]. The view of the neural network in Matlab, with 2 hidden layers and 15 neurons, is shown in Figure 7. The training results depend on the choice of the network's initial weights; initial values close to the optimum are considered ideal. As practice shows, there is no universal weighting method that guarantees the best starting point for any problem. The values of the weights and biases found for all layers can be viewed and edited on the View/Edit Weights tab [16].
Figure 8 shows the training process of this neural network. From Figure 9 it can be observed that after 3314 iterations the mean square error reaches 0.00018583. This suggests that the investigated values are closely clustered, so the error among them is minimal. The training process employs early stopping as a strategy against overfitting. The mean square error (MSE) serves as a metric of network performance, reflecting the mean squared deviation [17].
The regression graph (Figure 10) shows the linear regression of the network's learning results on three analyzed subsets (training, validation, test) and on all sets (all). For each result, the correlation coefficient R is calculated, a graph is plotted, and a regression equation of the form Output = a × Target + b is produced; if the network outputs coincide completely with the target values (R = 1, a = 1, b = 0), the network approximates the function perfectly. The training progress charts presented in Figure 5 have been generated. The first graph reveals that as the gradient coefficient approaches zero, the training and testing of the neural network become more precise. In the validation failure graph (which illustrates iterations where the validation MSE increased), the error converges towards zero over 1000 iterations. The magnitude of this error indicates the model's accuracy on the training dataset [18].
The mu graph shows the change in the learning parameter µ under the Levenberg-Marquardt method. On reaching the 5000th iteration, the µ parameter reaches 10^-5.
Gradient is the value of the gradient at each iteration, shown on a logarithmic scale. The value 4.7382 means that the bottom of a local minimum of the objective function has been reached [19].

Neural network testing
To test the neural network, we apply 4 values to the input (Table 3) with the command sim(net, [T; L; F; Q2]). After executing the command, 2 values were obtained (16.64; 4.17). The proximity of the obtained values to the given result (16.57; 4.17) indicates the applicability of the network. In the future, it can be used to control the parameters of the isopentane-isoamylene fraction rectification process [20].
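The acceptance criterion used here, closeness of the predicted valve openings to the reference values, can be expressed as a small helper. The tolerance of 0.1 is an assumed figure, not one stated in the article, and the function stands in for comparing the result of Matlab's sim call with the targets.

```python
def close_enough(predicted, expected, tol=0.1):
    """Accept the network if every output is within tol of its target."""
    return all(abs(p - e) <= tol for p, e in zip(predicted, expected))
```

For the values reported in the text, close_enough((16.64, 4.17), (16.57, 4.17)) accepts the network, since both deviations are within the assumed tolerance.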

Conclusion
Using a NARX neural network with forward signal propagation and error backpropagation, a model of the functional relationships between technological factors and product quality indicators in the complex process of isopentane-isoamylene fraction rectification has been built. Based on the obtained model, this process can be controlled with the NARX neural network. The described algorithm for constructing a mathematical model of the isopentane production process also facilitates the construction of mathematical models of other technological processes, allowing them to be applied successfully to a variety of problems and contributing to improved product quality through more accurate control.

Fig. 2. Neural network structure. In the network training process window, clicking the Performance button displays the network training plot, which shows the behavior of the learning error (Figure 3) [3].

Figure 11 illustrates that the network closely approximates the function with remarkable accuracy. The correlation coefficient R is 1, signifying a robust relationship between the variables and underscoring the high precision of the constructed neural network.

Table 1. Input data set.