Application of business modeling tools in the context of the innovation development of the digital economy in China

With the development of China's digital economy, a large number of high-tech enterprises have emerged, and the application of business model tools in these enterprises has developed accordingly. Big data technology is now widely used in business process management (BPM). This article examines the innovative application of business modeling tools from the perspective of big data. It first analyzes the current state of data-driven research in the field of business process management and explains how big data analysis methods can be used to build a BPM knowledge base that guides, optimizes, and predicts the flow of business processes, so that the results of data analysis drive process execution. The article then identifies and summarizes the core links in building a data-driven BPM and, on this basis, describes in detail the implementation process for establishing one. Finally, it outlines the development trends and research challenges of data-driven BPM. The findings are of considerable reference value for the business process management of Chinese high-tech enterprises in the environment of the digital economy. They also offer some reference for the innovation of business model tools in Russia, helping enterprises to manage more effectively. Analyzing the application and development of business modeling tools from the perspective of big data is in line with the current direction of development. Most of the existing literature studies the construction of BPM processes; this article focuses instead on optimizing processes after construction and on processing the data within them, and presents the development trends and research challenges of data-driven BPM.


Introduction
With the development of computer application technology, business process management has become an important supporting technology and foundation for the agile and efficient operation of major enterprises and institutions.
Business process management (BPM) is a systematic method centered on the standardized construction of excellent end-to-end business processes and aimed at continuously improving an organization's business performance. For a company, BPM is not only the flow of its business; it can also be seen, in a sense, as a summary of its management experience and characteristics, since its management processes condense the essence of its management.
In the field of enterprise BPM, many scholars and engineers are devoted to research on workflow engine technology, in particular on making workflows broadly adaptable.
The research of Luo Haibin (2010) can support dynamic changes to a process to a certain extent. However, the workflow built on this basis is still process-oriented and control-driven: the execution sequence of activities is fixed when the process is established, and this strong dependence requires extensive modification of the process definition when complex structural changes are needed [1].
Manfred Reichert (2005) pointed out that part of a process can be replaced when an error occurs or when the process needs to be modified, but the requirement that the processes be similar during the mapping is unrealistic in some cases. This work put forward the concept of data-driven BPM [2].
Cutler R (2000) divides activity execution states in FreeFlow into a user state and a system state: the user state means that the user needs to supplement data, while the system state means that the process can run automatically [3]. Although this method can achieve targeted data fusion and structure the entire process, compared with earlier workflow-driven methods the development cycle is too long, and the designer needs a deep understanding of the corresponding field, which is unrealistic in many cases.
Among data-driven approaches, the work of WMP van der Aalst (2005) is particularly influential. His method changes the execution mode of the workflow from "what should be done" to "what can be done", driving the operation of the workflow with all available information rather than only part of it. However, this method does not demonstrate support for complex objects and data structures [4].
Another influential line of data-driven workflow research is that of Dominic Muller (2007). This method borrows the concept of objects from object-oriented programming and introduces it into workflow technology, but it offers no effective solution to problems that arise during operation [5].
Chen Yisong (2019) proposed a relatively flexible way of designing processes that realizes loose coupling between activities, improves the flexibility and robustness of the process, and is conducive to automatic process construction [6].
At present, almost all the literature focuses on the construction of the BPM process engine, while the optimization and reconstruction of processes after construction, and the processing of the data within them, are rarely addressed.
Many large domestic enterprises and institutions have built BPM systems and accumulated large amounts of data. With the development of big data technology, it has become possible to mine enterprise management knowledge from BPM big data and use it to drive the optimized and efficient operation of enterprise BPM; that is, through the analysis of BPM big data, the goal of business process reengineering can be achieved.
The purpose of process reengineering is to make existing processes more effective, efficient, and adaptable. Dr. Michael Hammer, a representative of the process reengineering school, holds that an excellent process has four characteristics: it is correct, inexpensive, simple, and fast.
In the era of big data, if a method can show that certain data have a strong correlation, that correlation can be treated as a fixed relationship: the data are saved, and the relationship is used as a basis to suggestively drive the data flow. This can improve the efficiency of the process when handling data [7].

Data-driven BPM approach strategy
For BPM big data, applying big data analysis methods can make processes more efficient in at least two ways: on the one hand, by optimizing the process itself; on the other hand, by finding commonalities in how certain data are processed, for example by using the historical data of the BPM to give suggestions on the handling of certain data at certain links, making the flow of the process more efficient [8].

Big data-driven BPM strategy
After an enterprise establishes BPM and operates it for a period of time, it typically accumulates a considerable amount of data across its various businesses. For this BPM big data, analysis strategies can be formulated and mining analysis conducted in both the vertical and horizontal directions, with the results confirmed and used for process optimization and reengineering. Horizontal data analysis refers to analysis across the various businesses of the enterprise BPM, especially of data that must be processed by multiple businesses [9]; vertical data analysis refers to the further refinement, classification, and analysis of the data of a single business process. Both directions share a common purpose: from the perspective of the data, to find connections or potential commonalities between data items in order to optimize processes and improve the efficiency of business processing [10]. The analysis strategy shown in Figure 1 below can therefore be constructed.
Data analysis: the acquired data are sorted and analyzed vertically and horizontally, applying big data analysis methods to uncover the inherent relationships in BPM data. If an association is found to be universal, it is stored as an analysis result in the knowledge base and used to guide data flow processing.
Result confirmation: the analysis result is confirmed in one of two ways: either by an expert, where a person proficient in the professional business confirms the results of computer analysis, or by the computer itself, through comparison with the knowledge already in the knowledge base. If a result is incorrect or unreasonable, it is discarded but recorded in the knowledge base, and similar results are automatically blocked in later analyses; if a result is deemed correct, it is recorded in the knowledge base and the process is adjusted and optimized accordingly [11]. The knowledge base thus records two distinct types of knowledge, correct and incorrect. Correct knowledge guides the flow of the process and improves efficiency; incorrect knowledge serves as "lessons learned", enabling the computer to automatically block similar errors in future analyses.
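The confirmation mechanism can be sketched in code. The structure below is purely illustrative (the entry fields, the blocking rule, and the example patterns are assumptions, not part of any existing system): it records both confirmed and rejected results, and blocks re-admission of results that were already rejected.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeEntry:
    """One analysis result stored in the BPM knowledge base."""
    pattern: tuple    # data association found by the analysis module
    is_valid: bool    # True: confirmed correct; False: rejected ("lessons learned")

class KnowledgeBase:
    def __init__(self):
        self.entries = []

    def record(self, pattern, is_valid):
        self.entries.append(KnowledgeEntry(tuple(pattern), is_valid))

    def is_blocked(self, pattern):
        """Block re-admission of results already rejected during confirmation."""
        return any(e.pattern == tuple(pattern) and not e.is_valid
                   for e in self.entries)

    def valid_patterns(self):
        """Correct knowledge, available to guide the process flow."""
        return [e.pattern for e in self.entries if e.is_valid]

kb = KnowledgeBase()
kb.record(("invoice", "auto_approve"), is_valid=True)   # confirmed correct
kb.record(("invoice", "skip_review"), is_valid=False)   # rejected by the expert
```

In this sketch the rejected association stays in the base as a negative record, so a later analysis producing the same pattern can be suppressed before it reaches the expert again.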
Application of results: the correct results stored in the knowledge base are applied to guide process optimization and to give reasonable handling opinions on the business data being processed.

BPM data analysis process
Under normal circumstances, BPM data can be analyzed and processed according to the steps shown in Figure 2.

Fig. 2. Principles of BPM data analysis
Similarity search: the amount of BPM data is huge, and most of it is descriptive rather than quantitative. To analyze such a large volume of non-quantitative data effectively and find the correlations within it, classification must first be carried out [12].
Ordinary classification usually requires a clear classification standard. However, although BPM processes themselves are clearly defined, the categories of the data within a given process are not, and these data need to be analyzed from different angles.
Therefore, clustering is needed: according to a chosen clustering method, the relationships between data items are sought from different angles. Similarity search over the objects in a data set is the basis for clustering, and clustering is the basis for big data processing. Clustering differs from classification [13] in that the classes into which the data are divided are not known in advance.
Similarity search is then performed on the BPM big data. Vertical data analysis can draw on fast search algorithms for time-series similarity: the time series within a given process is divided into subsequences by sliding windows, fractal interpolation approximation, or other methods; each subsequence is fitted linearly; the Euclidean distance between the query sequence and each subsequence is computed; and subsequences that satisfy the distance threshold are taken as similar. Horizontal data analysis can take the data in one process as the base point and search other processes for matching values (values within a certain Euclidean distance are considered similar).
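As an illustration of the vertical analysis step, the sketch below splits a series into sliding-window subsequences and keeps those within a Euclidean distance threshold of a query sequence. The window step, the threshold, and the toy series are assumptions; the linear fitting of segments is omitted for brevity.

```python
import math

def sliding_windows(series, width, step=1):
    """Split a time series into fixed-width subsequences with a sliding window."""
    return [series[i:i + width]
            for i in range(0, len(series) - width + 1, step)]

def euclidean(a, b):
    """Euclidean distance between two equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similar_subsequences(series, query, threshold):
    """Return (offset, distance) for each window within the distance threshold."""
    hits = []
    for i, window in enumerate(sliding_windows(series, len(query))):
        d = euclidean(window, query)
        if d <= threshold:
            hits.append((i, d))
    return hits

# Toy process metric: the query pattern occurs exactly at offset 1
# and approximately at offset 5.
series = [1.0, 2.0, 3.0, 2.0, 1.0, 2.1, 3.1, 2.0]
query = [2.0, 3.0, 2.0]
matches = similar_subsequences(series, query, threshold=0.5)
```

A real implementation would normalize the subsequences before comparison, but the thresholding logic is the same.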
Data clustering: after the similarity search, the clustering standard, that is, the angle from which the data are classified, is formed. With this standard, BPM data can be clustered into different types of data clusters in preparation for subsequent analysis [14].
Similarity analysis: the classified data undergo similarity analysis and further cleaning in preparation for subsequent analysis. For example, a Euclidean distance can be defined according to the clustering standard and used to clean the data further, so that the clustered data are more consistent and data associations can be found efficiently in subsequent processing.
Correlation analysis: for the processed data, the computational speed of the computer is exploited to search for correlations one by one.
Analysis result processing: the degree of correlation found between data can be expressed as a probability: a high degree of correlation corresponds to a large probability value and a low degree to a small one, with the value 1 denoting linear correlation and the value 0 denoting no correlation. When the probability value exceeds a specified threshold, the data can be regarded as correlated; the threshold is chosen according to the specific problem. All results are stored in the knowledge base, and effective results can be displayed in a suitable way to guide the relevant personnel in optimizing business processes.
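A minimal sketch of this step, using the Pearson coefficient as one concrete way to express the degree of correlation on the 0-to-1 scale described above. The column names, the data, and the threshold value are illustrative assumptions.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient: 1 -> linear correlation, 0 -> none."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def correlated_pairs(columns, threshold=0.8):
    """Check every pair of data columns and keep those above the threshold."""
    names = list(columns)
    results = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(columns[a], columns[b])
            if abs(r) >= threshold:
                results[(a, b)] = r
    return results

# Hypothetical BPM metrics: volume and processing time move together,
# the weekday column is unrelated noise.
data = {
    "order_volume":    [10, 20, 30, 40, 50],
    "processing_time": [12, 21, 33, 39, 52],
    "weekday":         [1, 5, 2, 4, 3],
}
strong = correlated_pairs(data, threshold=0.9)
```

Only the pairs that clear the threshold would be passed on to result confirmation and the knowledge base.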

Similarity search algorithm
There are many similarity search algorithms, which mainly address two issues: first, how to define similarity, and second, how to measure the distance between the data to be analyzed and the base-point data (which can also be regarded as reference data). Similarity is mainly measured by distance, and the main methods include Euclidean distance, non-Euclidean distances, and dynamic time warping (DTW) [15].
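Of the measures listed, dynamic time warping is the least standard, so a minimal textbook implementation may be useful. It aligns two series that express the same shape at different speeds, which point-by-point Euclidean comparison cannot do; the example series are invented.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric series."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = minimal alignment cost of a[:i] against b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# The same peak traced at two different speeds: DTW warps the time axis
# and finds a perfect alignment, so the distance is zero.
slow = [0, 0, 1, 2, 3, 3, 2, 1, 0, 0]
fast = [0, 1, 2, 3, 2, 1, 0]
d = dtw_distance(slow, fast)
```

The quadratic table makes this O(nm); production systems use banded or lower-bounded variants, but the recurrence is the same.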

Clustering method
Cluster analysis is a statistical method for comprehensively classifying objects according to characteristics from multiple aspects [7]. In traditional taxonomy, people have relied mainly on professional knowledge or experience to classify qualitatively; many such classifications are inevitably subjective and arbitrary and cannot reveal the essential differences and connections of objective things, or they classify things by one-sided characteristics only. Although these classifications can reflect differences in certain aspects, they often fail to capture the comprehensive differences between kinds of things. Cluster analysis effectively solves the multi-factor, multi-index classification problems encountered in scientific research [8].

Fig. 3. Clustering diagram
Clustering methods fall mainly into five categories: partitioning (division) methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods, each of which can effectively solve different clustering problems.
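As one concrete example, the sketch below implements a minimal k-means, a representative of the partitioning (division) family. The deterministic first-k initialization and the toy 2-D points are simplifying assumptions.

```python
import math

def kmeans(points, k, iterations=20):
    """Minimal k-means: alternate assignment and centroid-update steps."""
    centroids = list(points[:k])   # simple deterministic initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(v) / len(cluster)
                                     for v in zip(*cluster))
    return centroids, clusters

# Two well-separated groups of 2-D feature vectors.
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
          (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
centroids, clusters = kmeans(points, k=2)
```

For BPM data, the feature vectors would be numeric encodings of the descriptive attributes produced by the similarity-search step.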
Outcome prediction
Prediction is one of the important purposes of big data mining; in BPM it is reflected in the optimization of the process and of the efficiency with which the process handles data [17].
Improving data processing efficiency
Using the association relationships found by the above analysis methods, when the data in a process are very similar to data in the knowledge base (with similarity defined by Euclidean distance), the future trend and processing results of the data can be predicted, and more reasonable processing suggestions can be given to improve the efficiency of data flow.
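As a minimal illustration of such trend prediction, the sketch below fits a first-order autoregressive model by least squares and rolls it forward. The queue-length series is invented toy data, and a real system would choose among the richer model families discussed below.

```python
def fit_ar1(series):
    """Fit x[t] = a * x[t-1] + b by ordinary least squares."""
    xs = series[:-1]
    ys = series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def forecast(series, steps, a, b):
    """Roll the fitted model forward to predict future values."""
    out = []
    last = series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

# Toy series: documents waiting in an approval queue, shrinking by 10% a day.
history = [100.0, 90.0, 81.0, 72.9, 65.61]
a, b = fit_ar1(history)
predicted = forecast(history, steps=2, a=a, b=b)
```

On this exactly geometric series the fit recovers the decay factor, so the forecast continues the trend.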
Commonly used forecasting techniques can be divided into traditional linear time-series forecasting (autoregressive models, autoregressive moving average models, autoregressive integrated moving average models, seasonal adjustment models, etc.), nonlinear time-series forecasting (embedded space methods, neural-network-based time prediction, chaotic time-series prediction, etc.), and other techniques (sliding-window quadratic autoregressive models, time-series prediction based on the cloud model, etc.) [9].
Automatic recommendation of reasonable processes
Traditional BPM processes are mostly based on an enterprise's existing management rules, agreements, or experience, with the topological structure diagrams of the various business activities formulated in advance, as shown in Figure 4 below. The control information and data information of the workflow are embedded in the entire process, and the process only needs to determine its running direction according to its control information. The process design method of Chen Yisong (2019) abandons the visual process topology and maximizes process flexibility, but the article does not explain how data can automatically drive process activities or whether this is feasible. Big data mining technology makes "autonomous" data-driven processes a realistic possibility.
With the help of big data, it is possible to learn what kinds of data exist (data clusters) and what kinds of methods are suitable for processing them (loosely coupled processes). Through big data analysis, an association between data styles and processing methods, that is, between data clusters and processing procedures, is established in the knowledge base.
When business data need to be processed, the attributes of the data are first identified automatically; these attributes are then compared with the cluster types in the knowledge base (the comparison method must be modeled according to the actual business) [18]; finally, the similarity is used to match the data to the most likely process.
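This matching step can be sketched as a nearest-prototype lookup. The attribute vectors, cluster prototypes, and process names below are hypothetical; as noted above, a real comparison method would be modeled on the actual business.

```python
import math

def match_process(attributes, knowledge_base):
    """Compare a data item's attribute vector with each cluster prototype
    and return the process associated with the closest one."""
    best_process, best_dist = None, float("inf")
    for cluster in knowledge_base:
        d = math.dist(attributes, cluster["prototype"])
        if d < best_dist:
            best_process, best_dist = cluster["process"], d
    return best_process, best_dist

# Hypothetical knowledge base: cluster prototypes and their usual processes.
kb = [
    {"prototype": (1.0, 0.0, 2.0), "process": "expense_approval"},
    {"prototype": (8.0, 5.0, 1.0), "process": "contract_review"},
]
process, dist = match_process((1.1, 0.2, 2.1), kb)
```

The returned distance can also serve as a confidence measure: if it exceeds a threshold, no process is recommended and the data fall back to manual routing.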

Process link optimization
Enterprises are not static: even for the same kind of data, the processing procedure will change, especially when the organizational structure of the enterprise is adjusted. It is therefore necessary to optimize enterprise processes to improve efficiency [19]. Some optimizations are explicit, such as changes following an adjustment of the corporate organizational structure, while others are implicit and hard to detect, such as the duplication of certain functions across departments, which is easily overlooked. These hidden redundant links need to be uncovered, so the knowledge in the knowledge base obtained by big data analysis must be optimized regularly, realizing automatic optimization of process links. The optimization results must, of course, be confirmed by experts, since the redundant links or activities discovered from the data are often implicit.
Based on the effective data types obtained by clustering in the knowledge base, the processing procedures and results of each data type in the historical database are analyzed one by one. The steps are as follows:
(1) Find all cluster types in the knowledge base.
(2) Check whether all types have been analyzed. If not, select an unanalyzed type A and go to step (3); otherwise, the analysis ends.
(3) For the selected cluster A, obtain all processing procedures and processing results for data of this type from the business process big data.
(4) Classify and summarize the processing results of each link in the process: effective processing (e.g., substantive processing opinions, refinement of values) versus invalid processing (only brief instructions with short processing times); positive processing (e.g., consent, approval) versus negative processing (e.g., return, denial).
(5) Identify the links (or activities) in the business data related to category A in which the proportion of invalid processing (the ratio of invalid processing at a link to the total number of cases at that link) exceeds 80%, forming a list L. If L is non-empty, go to step (6); otherwise, return to step (2).
(6) Present the processing results of list L and cluster A to a business expert, in particular to determine whether each activity in L is necessary for processing data of cluster A. If an activity is deemed unnecessary, remove the association between that activity and cluster A in the knowledge base. After the expert's confirmation, return to step (2).
If, in step (6), the expert's selection criteria and method are quantified and modeled, fully automatic process optimization becomes possible.
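The loop above can be sketched as follows, with the knowledge base, history statistics, and expert check reduced to simple in-memory structures. All names and the sample data are illustrative assumptions.

```python
def optimize_links(kb, history, expert_confirms, threshold=0.8):
    """Sketch of the link-optimization loop.

    kb:      {cluster: set of associated activities}
    history: {cluster: {activity: (invalid_count, total_count)}}
    expert_confirms(cluster, activity) -> True if the activity is unnecessary.
    """
    for cluster in list(kb):                 # iterate over all cluster types
        stats = history.get(cluster, {})     # processing data for this cluster
        # Activities whose share of invalid processing exceeds the threshold.
        suspect = [act for act, (invalid, total) in stats.items()
                   if total and invalid / total > threshold]
        # The expert decides which associations to remove.
        for act in suspect:
            if expert_confirms(cluster, act):
                kb[cluster].discard(act)
    return kb

# Toy data: the "stamp" activity is invalid in 95% of cases for cluster A.
kb = {"A": {"review", "stamp", "archive"}}
history = {"A": {"stamp": (95, 100), "review": (10, 100)}}
result = optimize_links(kb, history, expert_confirms=lambda c, a: True)
```

Replacing the `expert_confirms` callback with a quantified model is exactly the step that would make the optimization fully automatic.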

Discovery of new process models
The method of Chen Yisong (2019) makes processes highly flexible: not only can it implement true data-driven operation and process optimization, it can also discover new processes from the data.
The discovery procedure is similar to the optimization procedure in the previous section:
(1) Find all cluster types in the knowledge base.
(2) Check whether all types have been analyzed. If not, select an unanalyzed type A and go to step (3); otherwise, the analysis ends.
(3) For the selected cluster A, obtain all recent processing procedures for data of this type (over, say, a calendar month or a quarter) from the business process big data.
(4) Categorize the processing flows obtained in step (3) to produce a classified list ListA of the processing flows of type-A data, including each flow and its recent usage count.
(5) Obtain the processing flows associated with type-A data from the knowledge base and compare them with the flows in ListA. This shows which recent processing flows of type-A data are covered by the knowledge base and which are not. If an uncovered flow has appeared frequently in the recent period (for example, with a frequency above 50%) and tends to increase over time, it is regarded as a new business process associated with type-A data and is added to the knowledge base. Then return to step (2).
In this way, data-driven discovery of new processes can be achieved.
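The discovery loop can be sketched in the same style. The process "signatures", the 50% share threshold, and the toy data are assumptions; the trend-over-time check is omitted for brevity.

```python
from collections import Counter

def discover_new_processes(kb, recent_runs, min_share=0.5):
    """Sketch of the discovery loop.

    kb:          {cluster: set of known process signatures}
    recent_runs: {cluster: list of process signatures observed recently}
    A signature is a tuple of activity names, e.g. ("submit", "approve").
    """
    for cluster, runs in recent_runs.items():
        counts = Counter(runs)               # ListA: flows with usage counts
        total = len(runs)
        for signature, n in counts.items():  # compare with the knowledge base
            uncovered = signature not in kb.get(cluster, set())
            if uncovered and n / total > min_share:
                kb.setdefault(cluster, set()).add(signature)
    return kb

# Toy data: a new flow with a "pre_check" step dominates recent runs.
kb = {"A": {("submit", "approve")}}
recent = {"A": [("submit", "pre_check", "approve")] * 6
               + [("submit", "approve")] * 4}
updated = discover_new_processes(kb, recent)
```

The uncovered flow appears in 60% of recent runs, so it is admitted as a new process associated with cluster A.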

Fig. 6. BPM optimization method
To establish a data-driven BPM optimization function, a big data analysis module, a BPM knowledge base, and a BPM process optimization interface module must be built while preserving the existing BPM functions as far as possible [20].
The big data analysis module reads data from the BPM database to obtain BPM big data, and then analyzes it to form the BPM knowledge base. The knowledge base stores the analysis results, including valid results and confirmed invalid results. By recording invalid results, the analysis module can suppress similar results in the future and improve analysis efficiency.
The BPM data flow optimization interface module is called with the data circulating in the BPM system and performs pattern matching against the data in the BPM knowledge base. If the match succeeds, a reasonable proposal for the process is given, and the outcome of subsequent processing of the process data is predicted.

Big data analysis process
The processing flow of BPM big data analysis is shown in Figure 7 below.
Optimized business process data processing flow
The results obtained from BPM big data analysis can be used as knowledge to give reasonable processing opinions on data in circulation and to predict the processing results. The process relationships and processing flow in the existing BPM are shown in Figure 8 below.

Fig. 8. BPM business process data optimization
The data circulating in BPM can be searched for similarity in the knowledge base according to the process and the meaning of the data themselves, that is, by pattern matching. If a match is found, then on the basis of the analysis results recorded in the knowledge base, combined with the process data themselves, processing opinions can be given for the current link and predictable results for subsequent links, greatly improving process efficiency.
If such a system is established, a plausible scenario is as follows. In the plan approval process of a company, when a planner submits a plan, pattern matching is performed on the content of the submitted plan and the plan process. If the approximate search succeeds, that is, the pattern matching succeeds, the system can indicate whether the plan is currently adequate and whether there are improprieties (based on the historical rejections of similar plans), and can give reasonable suggestions. At the same time, it can predict the approval opinions at the key links of the subsequent approval process, and even predict the implementation effect of the current plan from the implementation of similar plans. In this way, the planner receives more reasonable opinions and predictable prospects when drafting the plan and can make targeted adjustments; a more reasonable plan in turn improves the processing efficiency of subsequent links.

Conclusion
This article takes China's digital economy as its research background and the business process management of high-tech enterprises as its research object, analyzing the application and innovative development of current enterprise business model tools. The article first analyzes the current state of data-driven research in the field of business process management, reviews several data-driven business process methods, proposes a data-driven business process optimization method based on big data, and presents the key research methods. It then analyzes and summarizes the core aspects of building a data-driven BPM and, on this basis, describes in detail the implementation process for establishing one. Finally, it presents the development trends and research challenges of data-driven BPM, describes the system structure and processing flow for establishing such a system, and discusses its prospects.
This research has important reference value for the business process management of Chinese high-tech enterprises in the environment of the developing digital economy. It also offers some reference for the innovation of Russian business model tools, helping enterprises to manage more effectively.
The amount of BPM data is large, and most of it is non-quantitative, which makes analysis more difficult. Nevertheless, with the current level of computer applications and computing power, this analysis function can be fully realized. The application of this method is likely to change the current state of BPM and provide new impetus and prospects for its development.