«Abstract. In recent decades, people have had to accumulate more and more data in different areas. Nowadays many organizations are able to solve the ...»
However, any discovery technique requires such a representational bias. It helps to limit the search space of possible candidate models, and it can also be used to give preference to particular types of models. It is important to observe that process discovery is, by definition, restricted by the expressive power of the target language. The representational bias is, therefore, the target language selected for representing and constructing process mining results. Because no notation is universal and each has its limitations (e.g., regarding silent steps, duplicate activities, concurrency, loops, etc.) and benefits, it is recommended to use different variants for a correct interpretation of the process.
The following are the main characteristics of process discovery algorithms:
1) Representational bias:
• Inability to represent concurrency
• Inability to deal with (arbitrary) loops
• Inability to represent silent actions
• Inability to represent duplicate actions
• Inability to model OR-splits/joins
• Inability to represent non-free-choice behavior
• Inability to represent hierarchy
2) Ability to deal with noise
3) Completeness notion assumed
4) Used approaches: direct algorithmic approaches (the α-algorithm), two-phase approaches (TS, Markov model → WF-net), computational intelligence approaches (genetic algorithms, neural networks, fuzzy sets, swarm intelligence, reinforcement learning, machine learning, rough sets), partial approaches (mining of sequential patterns, discovery of frequent episodes), etc.
2.5. Four quality criteria
Completeness and noise refer to qualities of the event log and do not say much about the quality of the discovered model. In fact, there are four competing quality criteria:
• Fitness. The discovered model should allow for the behavior seen in the event log. A model with good fitness allows for most of the behavior seen in the event log. A model has a perfect fitness if all traces in the log can be replayed by the model from beginning to end.
• Precision. The discovered model should not allow for behavior completely unrelated to what was seen in the event logs (avoid underfitting). Underfitting is the problem that the model overgeneralizes the example behavior in the log (i.e., the model allows for behaviors very different from what was seen in the log).
• Generalization. The discovered model should generalize the example behavior seen in the event logs (avoid overfitting). Overfitting is the problem that a very specific model is generated whereas it is obvious that the log only holds example behavior (i.e., the model explains the particular sample log, but a next sample log of the same process may produce a completely different process model).
• Simplicity. The discovered model should be as simple as possible.
Figure 8 presents the key criteria for evaluating the quality of process models, together with the questions that explain the meaning of each criterion.
Balancing fitness, simplicity, precision and generalization is challenging. This is the reason that most of the more powerful process discovery techniques provide various parameters. Improved algorithms need to be developed to better balance the four competing quality dimensions. Moreover, any parameters used should be understandable by end-users.
2.6. Conformance Checking
PM is not limited to discovery techniques. Once one obtains an appropriate process model, the most interesting and necessary part of the analysis (for stakeholders) begins.
The model may have been constructed by hand or may have been discovered.
Moreover, the model may be normative or descriptive. Conformance checking relates events in the event log to activities in the process model and compares both. It needs an event log and a model as input. The goal is to find commonalities and discrepancies between the modeled behavior and the observed behavior. This type of PM is relevant for business alignment and auditing. Different types of models can be considered: conformance checking can be applied to procedural models, organizational models, declarative process models, business rules/policies, laws, etc.
Generally, conformance checking is used for:
• improving the alignment of business processes, information systems and organizations;
• repairing models;
• evaluating process discovery algorithms;
• connecting event log and process model.
The following description of conformance checking techniques will mainly focus on one of the quality criteria – fitness – because the other three quality criteria are less relevant here.
2.6.1. Conformance checking using token-based replay
The idea of this method is to count tokens while replaying the log, i.e., to simply count the fraction of cases that can be "parsed completely" (the proportion of cases corresponding to firing sequences leading from start to end). While replaying on top of, for example, a WF-net, we maintain four counters: p (produced tokens), c (consumed tokens), m (missing tokens – consumed while not there) and r (remaining tokens – produced but not consumed). Initially, p = c = 0; then the environment produces a token for place start. At the end, the environment consumes a token from place end. For instance, if we replay the trace σ = ⟨a, d, c, e, h⟩ on top of the given model, the final state of the replay looks like Figure 9. For this case, we obtain the counters p = 6, c = 6, m = 1, r = 1.
Figure 9. Replaying trace σ = ⟨a, d, c, e, h⟩ on top of the WF-net (final state)
The fitness of a trace σ on a WF-net N is computed by the formula:
fitness(σ, N) = ½ (1 − m/c) + ½ (1 − r/p)   (3)
With the help of this method we can analyze and detect compliance problems, as shown in the figure below.
Figure 10. Detection of problems by token-based replay.
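The replay procedure above can be sketched in a few lines of Python. The WF-net below is a minimal hypothetical example (not the net of Figure 9); each transition is mapped to its input and output places, and the four counters p, c, m, r are maintained exactly as described.

```python
from collections import Counter

# Hypothetical WF-net: transition -> (input places, output places).
NET = {
    "a": (["start"], ["p1"]),
    "b": (["p1"], ["end"]),
}

def replay_fitness(trace, net, start="start", end="end"):
    """Replay `trace` on `net`, counting produced (p), consumed (c),
    missing (m) and remaining (r) tokens, then apply formula (3):
    fitness = 1/2 * (1 - m/c) + 1/2 * (1 - r/p)."""
    marking = Counter()
    p = c = m = 0
    marking[start] += 1          # environment puts a token in the source place
    p += 1
    for activity in trace:
        inputs, outputs = net[activity]
        for place in inputs:
            c += 1
            if marking[place] > 0:
                marking[place] -= 1
            else:
                m += 1           # token consumed while not there
        for place in outputs:
            marking[place] += 1
            p += 1
    c += 1                       # environment consumes the token from the sink
    if marking[end] > 0:
        marking[end] -= 1
    else:
        m += 1
    r = sum(marking.values())    # tokens produced but never consumed
    return 0.5 * (1 - m / c) + 0.5 * (1 - r / p)

print(replay_fitness(["a", "b"], NET))  # perfectly fitting trace -> 1.0
print(replay_fitness(["a"], NET))       # b is skipped -> 0.5
</antml1>

For the trace that skips b, one token is missing (in place end) and one remains (in p1), so both halves of the formula are penalized.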
2.6.2. Conformance checking using causal footprints
Conformance analysis based on footprints is only meaningful if the log is complete with respect to the "directly follows" relation. By counting differences (viz. Figure 10) we can compute fitness. A footprint is a matrix showing causal dependencies such as:
Direct succession: x > y iff for some case x is directly followed by y.
Causality: x → y iff x > y and not y > x.
Parallel: x || y iff x > y and y > x.
Choice: x # y iff not x > y and not y > x.
This method allows for log-to-model comparisons, i.e., it can be checked whether a model and an event log "agree" on the ordering of activities. The same approach can also be used for log-to-log and model-to-model comparisons.
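The four footprint relations can be derived directly from the "directly follows" pairs of a log. A minimal sketch, using a made-up two-trace log:

```python
from itertools import product

# A tiny hypothetical event log: each trace is a sequence of activities.
LOG = [["a", "b", "c"], ["a", "c", "b"]]

def footprint(log):
    """Build the footprint matrix from the 'directly follows' relation:
    '->' causality, '<-' reversed causality, '||' parallel, '#' choice."""
    activities = sorted({a for trace in log for a in trace})
    succ = {(t[i], t[i + 1]) for t in log for i in range(len(t) - 1)}
    matrix = {}
    for x, y in product(activities, repeat=2):
        if (x, y) in succ and (y, x) not in succ:
            matrix[(x, y)] = "->"
        elif (y, x) in succ and (x, y) not in succ:
            matrix[(x, y)] = "<-"
        elif (x, y) in succ and (y, x) in succ:
            matrix[(x, y)] = "||"
        else:
            matrix[(x, y)] = "#"
    return matrix

fp = footprint(LOG)
print(fp[("a", "b")], fp[("b", "c")], fp[("a", "a")])  # -> || #
</antml1>

Comparing two such matrices cell by cell and counting the differing cells gives the footprint-based fitness mentioned above.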
2.6.3. Conformance checking using alignments
An alignment provides a "closest matching path" through the process model for any trace in the event log; it is also required for performance analysis. In Figure 12, the first row shows a possible trace in the model and the second one shows moves in the log only. The cost of an alignment reflects the differences between model and log. The goal is to find an optimal alignment (with minimal cost).
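A strongly simplified sketch of alignment cost: here we assume the model's behavior is given as a small set of complete traces (real tools search the model's state space instead), a synchronous move costs 0, and a "move in log only" or "move in model only" each cost 1. The minimal cost is found with dynamic programming, much like edit distance without substitutions.

```python
def align_cost(log_trace, model_trace):
    """Minimal alignment cost between one log trace and one model trace."""
    n, m = len(log_trace), len(model_trace)
    # dp[i][j] = minimal cost of aligning log_trace[:i] with model_trace[:j]
    dp = [[i + j if i == 0 or j == 0 else 0 for j in range(m + 1)]
          for i in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sync = dp[i - 1][j - 1] if log_trace[i - 1] == model_trace[j - 1] else float("inf")
            dp[i][j] = min(sync,            # synchronous move, cost 0
                           dp[i - 1][j] + 1,  # move in log only
                           dp[i][j - 1] + 1)  # move in model only
    return dp[n][m]

def optimal_alignment_cost(log_trace, model_traces):
    """Cheapest alignment of the log trace with any trace of the model."""
    return min(align_cost(log_trace, t) for t in model_traces)

MODEL = [["a", "b", "d"], ["a", "c", "d"]]  # hypothetical model language
print(optimal_alignment_cost(["a", "b", "d"], MODEL))  # fits the model -> 0
print(optimal_alignment_cost(["a", "x", "d"], MODEL))  # x log-only + b model-only -> 2
</antml1>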
Figure 12. Aligning event log and process model.
2.7. Model enhancement
A process model is extended or improved using information extracted from some log. As seen before, event logs contain much more information that goes far beyond control-flow, namely information about resources, time, data attributes, etc.
Organizational mining can be used to get insight into typical work patterns, organizational structures, and social networks. Timestamps and frequencies of activities can be used to identify bottlenecks and diagnose other performance-related problems. Case data can be used to better understand decision-making and analyze differences among cases. The different perspectives can be merged into a single integrated process model for subsequent simulation and "what if" analysis to explore different redesigns and control strategies. Figure 13 presents an approach to arrive at a fully integrated model covering the organizational, time, and case perspectives.
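As a small illustration of the time perspective, the sketch below (with made-up timestamps, measured in hours) estimates the average delay between consecutive activities; the pair with the largest average delay is a candidate bottleneck.

```python
from collections import defaultdict

# Hypothetical log: each case is a list of (activity, timestamp in hours).
LOG = [
    [("a", 0), ("b", 4), ("c", 5)],
    [("a", 0), ("b", 6), ("c", 7)],
]

def avg_delays(log):
    """Average delay between each pair of consecutive activities."""
    sums = defaultdict(lambda: [0, 0])   # (x, y) -> [total delay, count]
    for case in log:
        for (x, tx), (y, ty) in zip(case, case[1:]):
            sums[(x, y)][0] += ty - tx
            sums[(x, y)][1] += 1
    return {pair: total / n for pair, (total, n) in sums.items()}

delays = avg_delays(LOG)
print(max(delays, key=delays.get))  # the slowest hand-over: ('a', 'b')
</antml1>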
2.8. Refined Process Mining Framework
Today much data is updated in real-time, and sufficient computing power is available to analyze events when they occur. Therefore, PM should not be restricted to off-line analysis and can also be used for online operational support.
Figure 14 shows the refined PM Framework (which can be extended). Provenance refers to the data that is needed to be able to reproduce an experiment. Data in event logs are partitioned into "pre mortem" and "post mortem". "Post mortem" data is information about cases that have completed; it can be used for process improvement and auditing, but not for influencing these cases. "Pre mortem" data refers to cases that have not yet completed and can be exploited to ensure the correct or efficient handling of these cases.
The refined PM Framework also distinguishes between two types of models: "de jure models" and "de facto models". The former are normative, the latter descriptive.
Figure 14. Refined PM Framework
Let us now consider the categories of PM activities in more detail.
PM results can be seen as "maps" describing the operational processes of organizations. The Cartography group includes three activities:
Discover. This activity is concerned with the extraction of (process) models.
Enhance. When existing process models (either discovered or hand-made) can be related to events logs, it is possible to enhance (extend and repair) these models.
Diagnose. This activity does not directly use event logs and focuses on classical model-based analysis.
The Auditing group comprises a set of activities used to check whether business processes are executed within certain boundaries set by managers, governments, and other stakeholders.
Detect. Compares de jure models with current “pre mortem” data. The moment a predefined rule is violated, an alert is generated (online).
Check. The goal of this activity is to pinpoint deviations and quantify the level of compliance (offline).
Compare. De facto models can be compared with de jure models to see in what way reality deviates from what was planned or expected.
Promote. Promote parts of the de facto model to a new de jure model.
The last category is Navigation. It is forward-looking and helps in supporting and guiding process execution (unlike Cartography and Auditing).
Explore. The combination of event data and models can be used to explore business processes at run-time.
Predict. By combining information about running cases with models, it is possible to make predictions about the future, e.g., the remaining flow time and the probability of success.
Recommend. The information used for predicting the future can also be used to recommend suitable actions (e.g. to minimize costs or time).
This category will be described in more detail in the next part, because it can be used for online analysis and for influencing running processes.
2.9. Operational support
2.9.1. Detect
The figure below illustrates this type of operational support. Users interact with some enterprise information system. Based on their actions, events are recorded. The partial trace of each case is continuously checked by the operational support system, which immediately generates an alert if a deviation is detected.
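A minimal detection sketch: every incoming event of a running case is checked against a hypothetical de jure rule ("d must never directly follow b"), and a violation triggers an alert immediately.

```python
# Hypothetical rule set: pairs (x, y) where y may not directly follow x.
FORBIDDEN = {("b", "d")}

def check_event(partial_trace, new_event, forbidden=FORBIDDEN):
    """Return an alert string if appending new_event violates a rule."""
    if partial_trace and (partial_trace[-1], new_event) in forbidden:
        return f"ALERT: '{new_event}' directly after '{partial_trace[-1]}'"
    return None

alerts, trace = [], []
for event in ["a", "b", "d"]:      # events of a running case arrive one by one
    alert = check_event(trace, event)
    if alert:
        alerts.append(alert)
    trace.append(event)
print(alerts)
</antml1>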
2.9.2. Predict
We again consider the setting in which users interact with some enterprise information system (viz. Figure 16). The events recorded for cases can be sent to the operational support system in the form of partial traces. Based on such a partial trace and some predictive model, a prediction is generated.
Figure 16. Both the partial trace of a running case and some predictive model are used to provide a prediction.
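A naive predictive model as a sketch: the remaining flow time of a running case is estimated from "post mortem" data as the average remaining time observed historically after the same last activity. All data below is made up.

```python
# Completed cases: each is a list of (activity, timestamp in hours).
HISTORY = [
    [("a", 0), ("b", 2), ("c", 5)],
    [("a", 0), ("b", 3), ("c", 9)],
]

def predict_remaining(partial_trace, history):
    """Average historical remaining time after the case's last activity."""
    last = partial_trace[-1][0]
    remaining = []
    for case in history:
        end = case[-1][1]
        for act, ts in case:
            if act == last:
                remaining.append(end - ts)
    return sum(remaining) / len(remaining) if remaining else None

running = [("a", 0), ("b", 1)]           # partial trace of a running case
print(predict_remaining(running, HISTORY))  # average of (5-2) and (9-3) = 4.5
</antml1>

Real predictive models condition on much more than the last activity (e.g., on an abstraction of the whole prefix), but the input/output contract is the same: partial trace in, prediction out.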
2.9.3. Recommend
The setting is similar to prediction. However, the response is not a prediction but a recommendation about what to do next (viz. Figure 17). To provide such a recommendation, a model is learned from "post mortem" data. A recommendation is always given with respect to a specific goal, for example, to minimize the remaining flow time or the total cost, or to maximize the fraction of cases handled within 4 weeks.
Figure 17. A model based on historic data is used to provide recommendations for running cases.
2.10. Tools
All techniques described above are realized in software such as ProM. ProM is an extensible framework that supports a wide variety of process mining techniques in the form of plug-ins.
The main characteristics of ProM:
Aims to cover the whole process mining spectrum.
Notations supported: Petri nets (many types), BPMN, C-nets, fuzzy models, transition systems, Declare, etc.
Also supports conformance checking and operational support.
Many plug-ins are experimental prototypes and not user friendly.
It is an extremely powerful instrument, but can be confusing for newcomers. There are already about 600 plug-ins, and this number keeps growing.
There is also the commercial software Disco, which has the following characteristics:
Focus on discovery and performance analysis (including animation).
Powerful filtering capabilities for comparative process mining and ad-hoc checking of patterns.
Uses a variant of fuzzy models, etc.
Does not support conformance checking or operational support.
Easy to use and excellent performance.
Disco can be used by inexperienced people and has an intuitive, user-friendly interface.
Tools are available (Figure 18), but process mining is still a relatively young discipline. New tools will appear in coming years and process mining functionality will be embedded in more BI/BPM/DM suites.