Analysis of neural network results based on experimental data during indentation

Abstract. The article is devoted to the development of machine learning methods for a class of technical problems that includes determining the properties of materials. According to the authors, a neural network approximation algorithm is able to account for the behavior of materials under various experimental conditions. The article provides illustrative examples of how a neural network with a single hidden layer can approximate a function of several variables to a given accuracy. A series of experimental measurements was carried out as part of the study, and the structure of the neural network and its main components are described.


Introduction
Conducting an experiment on real objects provides a low degree of control over its course, since a non-laboratory environment is not isolated from extraneous influences. As a result, existing methods of non-destructive testing of material properties suffer from low accuracy, repeatability, and statistical reliability, and it becomes necessary to use methods that are less dependent on the experimental conditions. One such approach is the use of neural networks for processing the research results: a neural network is able to take the behavior of materials under different conditions into account. Formally, training a neural network for forecasting problems is formulated as an approximation problem. It is necessary to build a neural network (an approximating function) that takes the required values (to a given accuracy) not only on the data involved in training (the approximate interpolation problem), but also on a control set that did not participate in training.
One of the most important advantages of neural networks is their ability to form an accurate approximation of nonlinear functions of arbitrary complexity.
This point of view is reflected in the work of T. V. Filatova [1], who argues that the use of neural networks provides a high quality of approximation and can be applied to analyze and predict the state of an object.
Noteworthy are the works of the group of specialists led by Yu. N. Katsuba. In [2], the authors note that one of the most important qualities of neural networks is their ability to learn the dynamics of nonlinear systems automatically, provided that the network architecture contains at least three layers.
It follows from the above that special attention in this area should be paid to approaches for choosing the structure of a neural network, methods for training it, determining the optimal number of neurons in the hidden layer, etc.
For example, in [3, 4], researchers developed a technology for processing multidimensional data using neural networks. The authors note that it is difficult to find the optimal network structure and learning algorithm for the task at hand.
Numerous studies [5, 6, 7, 8] show that a neural network with a single hidden layer can approximate any continuous function of several variables to a given accuracy, provided that the network has a sufficient number of neurons and the initial values of the weight coefficients are chosen correctly.
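As a toy illustration of this single-hidden-layer approximation claim (this example is not from the article; the target function, layer width, and learning rate are arbitrary assumptions), a small network with one tanh hidden layer can be trained by plain gradient descent to fit a continuous function of two variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function of two variables (chosen arbitrarily)
def f(x):
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

X = rng.uniform(-1, 1, size=(200, 2))
y = f(X).reshape(-1, 1)

# One hidden layer of 30 tanh neurons, linear output
n_hidden = 30
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)   # loss before training

lr = 0.1
for _ in range(2000):
    h, pred = forward(X)
    err = (pred - y) / len(X)        # MSE gradient w.r.t. the prediction
    grad_W2 = h.T @ err; grad_b2 = err.sum(0)
    dh = err @ W2.T * (1 - h ** 2)   # backpropagate through tanh
    grad_W1 = X.T @ dh; grad_b1 = dh.sum(0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)      # loss after training
```

With enough hidden neurons and a reasonable weight initialization, the training loss drops far below its initial value, which is the practical content of the approximation result cited above.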
An analysis of the literature in this area [9, 10, 11, 12, 13, 14, 15] leads to the conclusion that it is advisable to use a neural network algorithm to determine the properties of materials during impact indentation. The aim of the study is the statistical processing of the data obtained in the experiment, investigation of the dependencies among the material characteristics of the metals, and selection of optimal neural network parameters.

Materials and methods
Experimental data for the study were obtained during impact indentation of the surface of several metal samples.
To solve this problem, we consider two approaches that differ in the nature of the neural network's input data: in the first approach, the source data for each type of metal are split into training and control samples (figure 1); in the second approach, several groups of metals that are not involved in network training are used as the control sample (figure 2). The values are normalized to the range [0, 1] using the formula

x_bar = (x - x_min) / (x_max - x_min),

where x_bar is the normalized value, x_min is the minimum value of the parameter over the entire sample, and x_max is the maximum value of the parameter over the entire sample. The statistical distribution of the sample was studied, and the numerical characteristics of the distribution (mathematical expectation, variance) were determined.
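The normalization and the distribution statistics described above can be sketched in Python; the function name is our own, and the sample values are taken from the 30HGSA source row reported later in the article:

```python
import numpy as np

def min_max_normalize(x):
    """Normalize a sample to [0, 1]: x_bar = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

# Source hardness measurements, HB, for the 30HGSA group
sample = np.array([205, 207, 197, 214, 199, 192, 193, 203])
normalized = min_max_normalize(sample)

# Numerical characteristics of the distribution
mean = sample.mean()            # estimate of the mathematical expectation
variance = sample.var(ddof=1)   # sample variance
```

After normalization the minimum of the sample maps to 0 and the maximum to 1, so all inputs to the network lie in the same range regardless of the metal group.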
The neural network was implemented in Google Colaboratory using the TensorFlow and Keras libraries.
The neural network has a four-layer structure, with 50, 30, 12, and 1 neurons in the layers, respectively.
For each type of metal, between 9 and 11 experimental values were obtained during indentation.
The output of the neural network is the Brinell hardness parameter (HB).
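The described architecture (dense layers of 50, 30, 12, and 1 neurons, with the single output predicting HB) can be sketched with Keras. The article does not specify the number of input features, the activation functions, or the optimizer, so those below are assumptions:

```python
from tensorflow import keras

# Assumed number of input features per indentation experiment (not stated in the article)
N_FEATURES = 3

model = keras.Sequential([
    keras.Input(shape=(N_FEATURES,)),
    keras.layers.Dense(50, activation="relu"),  # activation is an assumption
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(12, activation="relu"),
    keras.layers.Dense(1),                      # output: Brinell hardness (HB)
])
model.compile(optimizer="adam", loss="mse")     # optimizer and loss are assumptions
```

Training would then call `model.fit` on the normalized indentation data for the chosen number of epochs (10, 20, 30, 50, 100, or 500 in the experiments below).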
Results
Hardness values, HB, for the 30HGSA group at different numbers of training epochs (approach 2):

Hardness group, HB: 30HGSA
  Source       205  207  197  214  199  192  193  203
  10 epochs    267  211  188  222  218  227  180  187
  20 epochs    230  217  202  229  202  223  191  203
  30 epochs    238  231  214  219  201  232  195  194
  50 epochs    250  232  230  211  215  199  186  201
  100 epochs   212  206  190  213  196  212  176  203
  500 epochs   227  198  199  205  216  209  199  205

Visually, the results of approach 2 are shown in figure 2. Thus, this article discusses the results of a neural network with different input parameters. The results of the study allow us to draw the following conclusions:
- the neural network can approximate the function with sufficient accuracy under various conditions of obtaining the experimental data;
- the results of the first approach are more accurate than those of the second, as can be seen in the example of the 97 HB group; at the same time, the second approach remains suitable for practical application, given that its test set consists of entire groups of metals that do not participate in training;
- the largest deviation from the expected values is observed at 10 training epochs, and the smallest at 100 training epochs.