Predictive analysis of two bijectively related families of functions in $L_2$, expressed as tuple pairs

This article proposes a method for predicting a function's image from its preimage. The method is based on regression analysis of pairs of function images and preimages in $L_2$. The procedure for applying the prediction model is described, and pseudocode of the algorithm is given.

Situations with insufficient data often arise in practice (due to technical and other difficulties), and then a regression model cannot be applied directly. In such cases, various sampling and composition methods are used [11][12][13][14]. One of these methods is proposed in this article. Our problem is to obtain a predictive expression of an image function $y = y(s)$, given as tuple pairs $(s_j, y_j)$, $j = \overline{0, m}$, from its preimage function $x = x(t)$, given as tuple pairs $(t_j, x_j)$, $j = \overline{0, n}$.

Data preparation
An important stage of data preparation is normalization. We normalize the tuple pairs:
$$t^* = \frac{t - t_{\min}}{t_{\max} - t_{\min}}, \qquad x^* = \frac{x - x_{\min}}{x_{\max} - x_{\min}},$$
and similarly for the image pairs $(s, y)$. There are many types of grids that are denser near some points of $[0, 1]$. In any case we obtain two grids: $\{\tau_k\}_{k=0}^{n}$ for the preimage functions and $\{\sigma_r\}_{r=0}^{m}$ for the image functions, with $\tau_0 = \sigma_0 = 0$ and $\tau_n = \sigma_m = 1$. After the grids have been chosen, we complete the definition of the values of the function $x$ and the function $y$ at the grid nodes.
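As an illustration, the min-max normalization of a tuple pair can be sketched as follows (a minimal sketch; the helper name `normalize` and the NumPy-based implementation are our own, not part of the article):

```python
import numpy as np

def normalize(values):
    """Min-max normalization: map values linearly onto [0, 1]."""
    values = np.asarray(values, dtype=float)
    vmin, vmax = values.min(), values.max()
    return (values - vmin) / (vmax - vmin)

# A tuple pair (t_j, x_j) representing one preimage function.
t = np.array([2.0, 3.0, 5.0, 8.0])
x = np.array([10.0, 14.0, 12.0, 20.0])
t_star, x_star = normalize(t), normalize(x)
```

Both coordinates of the pair are normalized independently, so the normalized tuple always spans $[0, 1]$ in each coordinate.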
At the boundary nodes we set
$$x(\tau_0) = x_0^*, \qquad x(\tau_n) = x_n^*.$$
At the inner nodes we use the following rule. We fix a number $k$ and examine all values of the number $j$. If $\tau_k \neq t_j^*$ for all $j$, then a unique number $j_0$ exists such that $t_{j_0}^* < \tau_k < t_{j_0+1}^*$. In this case we apply linear interpolation:
$$x(\tau_k) = x_{j_0}^* + \frac{x_{j_0+1}^* - x_{j_0}^*}{t_{j_0+1}^* - t_{j_0}^*}\,\bigl(\tau_k - t_{j_0}^*\bigr).$$
Otherwise there is a number $j_0$ such that $\tau_k = t_{j_0}^*$, and in this case we have $x(\tau_k) = x_{j_0}^*$. Similarly, we do the calculation for the function $y$: we fix a number $r$ and examine all values of the number $j$. If $\sigma_r \neq s_j^*$ for all $j$, then a unique number $j_0$ exists such that $s_{j_0}^* < \sigma_r < s_{j_0+1}^*$, and we apply linear interpolation:
$$y(\sigma_r) = y_{j_0}^* + \frac{y_{j_0+1}^* - y_{j_0}^*}{s_{j_0+1}^* - s_{j_0}^*}\,\bigl(\sigma_r - s_{j_0}^*\bigr).$$
Otherwise there is a number $j_0$ such that $\sigma_r = s_{j_0}^*$, and we have $y(\sigma_r) = y_{j_0}^*$. We carry out this procedure for all $k = \overline{0, n}$ and $r = \overline{0, m}$. Note that we use linear interpolation only to simplify the narrative; any interpolation can be used [3,15,16]. Now we have tuples defined on the grids. Further, we will deal with two matrices, $\Phi$ and $\Psi$. The rows of the matrix $\Phi$ are the normalized values of the preimage functions at the grid nodes $\tau_k$; the rows of the matrix $\Psi$ are the normalized values of the image functions at the grid nodes $\sigma_r$. The map induces a bijection between the $i$-th row of $\Phi$ and the $i$-th row of $\Psi$.
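The completion of function values at the grid nodes by linear interpolation can be sketched with NumPy's `np.interp`, which implements exactly this rule (exact values where a grid node coincides with a data node, linear interpolation in between); the variable names are ours:

```python
import numpy as np

# A normalized tuple pair (t*, x*) and a uniform grid on [0, 1].
t_star = np.array([0.0, 0.3, 0.7, 1.0])
x_star = np.array([0.0, 0.5, 0.4, 1.0])
tau = np.linspace(0.0, 1.0, 11)  # grid nodes tau_k

# Values of the preimage function completed at all grid nodes.
x_on_grid = np.interp(tau, t_star, x_star)
```

Any other interpolation (splines, etc.) could be substituted here, as the text notes.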

Model parameters
We insert a column of ones at the end of $\Phi$ and denote the resulting matrix by $\Phi_1$. Let $I$ and $J$ be multi-indexes. Let $\Phi_{I,J}$ be the matrix consisting of the elements of $\Phi_1$ in the $I$-th rows and $J$-th columns. Let $\Psi_I$ be the matrix consisting of the $I$-th rows of $\Psi$, and denote by $\Psi_I^r$ the $r$-th column of $\Psi_I$.

Definition 1. We say that a partial regression problem is the overdetermined system of equations
$$\Phi_{I,J}\,\beta = \Psi_I^r.$$
Note. If this system is determined or underdetermined, then the model is overtrained. For the above system to be overdetermined, the inequality $\dim I > \dim J + 1$ is necessary.
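A partial regression problem is an overdetermined linear system, which can be solved in the least-squares sense. A minimal sketch (the matrix shapes, the index values, and the use of `np.linalg.lstsq` are our assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed data: 20 preimage functions on an 11-node grid,
# image functions on a 9-node grid.
Phi = rng.random((20, 11))
Psi = rng.random((20, 9))

I = [0, 2, 3, 5, 7, 9, 11]   # multi-index of rows (nearest neighbors)
J = [1, 4, 6, 10]            # multi-index of columns
r = 2                        # chosen column of Psi

# Phi_{I,J} with the appended column of ones; the system is
# overdetermined since dim I = 7 > dim J + 1 = 5.
A = np.hstack([Phi[np.ix_(I, J)], np.ones((len(I), 1))])
b = Psi[I, r]
beta, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The solution `beta` holds the regression coefficients of the partial problem, with the last component corresponding to the column of ones (the intercept).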

Definition 2. For a partial regression problem, we say that the regression dimension is $\dim J$.
Let $x$ and $x'$ be two preimage functions expressed as tuple pairs. Their distance is estimated at the grid nodes:
$$\rho(x, x') = \Bigl(\sum_{k=0}^{n} \bigl(x(\tau_k) - x'(\tau_k)\bigr)^2\Bigr)^{1/2}.$$
For model training we solve a series of partial regression problems for the multi-index $I$ of the nearest neighbors of the given preimage function. We number these problems by the multi-index $J$; therefore we must choose a multi-index iteration method.
Definition 5. One of these methods is complete enumeration of all multi-indexes $J$ with fixed length $\dim J$. The number of these multi-indexes is large: it is given by the binomial coefficient $C_{n+1}^{\dim J}$.
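Complete enumeration of fixed-length multi-indexes can be sketched with `itertools.combinations`; the count equals the binomial coefficient and grows quickly even for modest sizes (the column count and length below are assumed values for illustration):

```python
from itertools import combinations
from math import comb

n_cols = 12      # assumed number of columns to draw from
dim_J = 4        # fixed length of the multi-index J

multi_indexes = list(combinations(range(n_cols), dim_J))
# The count matches the binomial coefficient C(n_cols, dim_J) = 495.
assert len(multi_indexes) == comb(n_cols, dim_J)
```

Already at this toy size there are 495 partial regression problems per column of $\Psi$, which motivates the cheaper iteration method of Definition 6.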

Definition 6. Another multi-index iteration method is enumeration in the ring $\mathbb{Z}_{n+1}$. For this method the dimension $\dim J$ must be a divisor of $n + 1$. We choose the multi-indexes by the following rule: with step $q = (n+1)/\dim J$, the multi-index with starting residue $s$ is
$$J_s = \{\, s,\; s + q,\; s + 2q,\; \ldots,\; s + (\dim J - 1)\,q \,\}, \qquad s = \overline{0, q-1}.$$
The number of these multi-indexes is $(n+1)/\dim J$, which is not large.
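The ring enumeration of Definition 6 can be sketched as follows (a reconstruction of the rule under the stated divisibility condition; the function name and the arithmetic-progression form are our assumptions):

```python
def ring_multi_indexes(n_plus_1, dim_j):
    """Multi-indexes chosen in the ring Z_{n+1} as arithmetic progressions.

    dim_j must divide n_plus_1; the step is q = n_plus_1 // dim_j,
    with one multi-index per starting residue s = 0..q-1.
    """
    assert n_plus_1 % dim_j == 0
    q = n_plus_1 // dim_j
    return [tuple((s + k * q) % n_plus_1 for k in range(dim_j))
            for s in range(q)]

idx = ring_multi_indexes(12, 4)
# Only 12 / 4 = 3 multi-indexes, versus C(12, 4) = 495
# under complete enumeration.
```

The progressions cover all residues of the ring exactly once, so every column participates in exactly one partial regression problem.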
There are other multi-index iteration methods as well. Finally, we have the following model parameters: 1) the dimension of the partial regression problem $\dim J$, 2) the number of nearest neighbors $\dim I$ of the preimage function, 3) the multi-index iteration method.

Model training
Step 0. We fix the model parameters: $\dim I$, $\dim J$, and the iteration method.
Step 1. We choose a preimage function $x_i$ (the $i$-th row of $\Phi$) and create the multi-index of its nearest neighbors $I = \{i_1, i_2, \ldots, i_{\dim I}\}$.
Step 2. We choose a column $\Psi_I^r$.
Step 3. We choose a multi-index $J$.
Step 4. We solve the partial regression problem $\Phi_{I,J}\,\beta = \Psi_I^r$ and compute the predictive value $y_r^{\mathrm{pred}}(J)$.
Step 5. We average the predictive values $y_r^{\mathrm{pred}}(J)$ over all multi-indexes $J$. We get the predictive value of the image function at the $r$-th node of the grid $\{\sigma_r\}$; this value does not depend on the multi-index $J$. Denote it by $y_r^{\mathrm{pred}}$. We calculate the prediction error $\delta_r = |y_r^{\mathrm{pred}} - y_r|$. Go to step 2: we choose the column $\Psi_I^{r+1}$ and repeat steps 3 to 5. We repeat step 2 for all columns of the matrix $\Psi$.
Go to step 1: we choose the function $x_{i+1}$ and repeat steps 2 to 5. We repeat step 1 for all rows of the matrix $\Phi$.
Step 6. We average the prediction errors $\delta_r$ over $r$ and $i$ and obtain $\bar{\delta}$, the averaged prediction error of the model for the given set of parameters. Go to step 0: we choose new values of the model parameters and repeat the whole procedure. We repeat step 0 for all sets of model parameters.
End of model training. We find the set of parameters with the smallest error value and call this set of parameters the optimal one. This completes the model training.
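The training procedure for one parameter set (steps 1 through 6) can be sketched as follows. This is a schematic under our own assumptions: nearest neighbors are found by Euclidean distance between rows of $\Phi$, complete enumeration is used for the multi-indexes $J$, and each partial regression problem is solved by least squares:

```python
import numpy as np
from itertools import combinations

def train_error(Phi, Psi, n_neighbors, dim_j):
    """Averaged prediction error for one parameter set (steps 1-6)."""
    m, n_cols = Phi.shape
    Phi1 = np.hstack([Phi, np.ones((m, 1))])  # append column of ones
    errors = []
    for i in range(m):                        # step 1: choose function x_i
        dist = np.linalg.norm(Phi - Phi[i], axis=1)
        I = np.argsort(dist)[1:n_neighbors + 1]  # neighbors, excluding x_i
        for r in range(Psi.shape[1]):         # step 2: choose column
            preds = []
            for J in combinations(range(n_cols), dim_j):  # step 3
                cols = list(J) + [n_cols]     # include the ones column
                A = Phi1[np.ix_(I, cols)]
                beta, *_ = np.linalg.lstsq(A, Psi[I, r], rcond=None)
                preds.append(Phi1[i, cols] @ beta)  # step 4: predict
            pred = np.mean(preds)             # step 5: average over J
            errors.append(abs(pred - Psi[i, r]))
    return np.mean(errors)                    # step 6: average over r and i

# Toy data with assumed sizes.
rng = np.random.default_rng(1)
Phi = rng.random((8, 5))
Psi = rng.random((8, 4))
err = train_error(Phi, Psi, n_neighbors=5, dim_j=2)
```

Step 0 then amounts to calling `train_error` over a grid of parameter values and keeping the set with the smallest returned error.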
Model application

We begin as at the data preparation stage. Given a new preimage function as a tuple pair, we normalize the tuple $(t_j, x_j)$, linearly interpolate the values $x(t)$ at the nodes of the grid $\{\tau_k\}$, and normalize the values $x(\tau_k)$ taking into account all values in $\Phi$. As a result we get the tuple $x_0, x_1, \ldots, x_n$. We create the multi-index $I$ of its nearest neighbors, then we solve the partial regression problems for all multi-indexes $J$ and all columns $\Psi_I^r$. As a result we calculate the predictive values $y_r^{\mathrm{pred}}(J)$.
We average these values over $J$ and get the predictive value $y_r^{\mathrm{pred}}$ of the image function at the $r$-th node of the grid $\{\sigma_r\}$ for every $r$. We obtain the tuple of image function predictive values $y_0^{\mathrm{pred}}, y_1^{\mathrm{pred}}, \ldots, y_m^{\mathrm{pred}}$ on the grid.
After these calculations we apply the transformation that is inverse to the normalization transformations. For the values we put
$$s_r = s_{\min} + \sigma_r\,(s_{\max} - s_{\min}), \qquad y_r = y_{\min} + y_r^{\mathrm{pred}}\,(y_{\max} - y_{\min}).$$
The resulting tuple pairs $(s_r, y_r)$ are the expression of the predicted image function $y$.
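The inverse (denormalizing) transformation can be sketched as follows (a minimal sketch; the helper name and the example bounds are ours):

```python
import numpy as np

def denormalize(values_star, vmin, vmax):
    """Inverse of min-max normalization: map [0, 1] back to [vmin, vmax]."""
    return vmin + np.asarray(values_star, dtype=float) * (vmax - vmin)

# Predicted normalized image values at the grid nodes sigma_r.
y_pred_star = np.array([0.0, 0.25, 0.5, 1.0])
y_pred = denormalize(y_pred_star, vmin=10.0, vmax=30.0)
```

Applying the same transformation to the grid nodes $\sigma_r$ with the bounds of $s$ recovers the tuple pairs of the predicted image function in the original scale.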