
F. Belli, Ch. J. Budnik, N. Nissanke: Finite-State Modeling, Analysis and Testing of System Vulnerabilities. In Proc. of Organic and Pervasive Computing Workshops (ARCS) 2004, Lecture Notes in Informatics (LNI) vol. 41, Augsburg, Germany, pp. 19–33.

# Finite-State Modeling, Analysis and Testing of System Vulnerabilities

Fevzi Belli, Christof J. Budnik
Dept. of Computer Science, Electrical Engineering and Mathematics
University of Paderborn
Warburger Str. 100, D-33098 Paderborn
belli@upb.de, budnik@adt.upb.de

Nimal Nissanke
School of Computing Science, Electrical Engineering and Mathematics
South Bank University
103 Borough Road, UK-London SE1 0AA
nissanke@sbu.ac.uk

**Abstract:** Man-machine systems have several desirable properties, such as user friendliness, reliability, safety, security or other global system attributes. The potential for the lack, or breach, of any such property constitutes a system vulnerability, which can lead to a situation that is undesirable from the user's point of view. This undesired situation could be triggered by special events in the form of intended, or unintended, attacks from the system's environment. We view the undesirable system features as the sum of the situations which are, mathematically speaking, complementary to the desirable ones and which must be taken into account from the very beginning of the system development to achieve stable system behavior and robust operation. This work is about the modeling, analysis and testing of both desirable and undesirable system features, which can be viewed as relations between the system and its operation.

1. Critical Features and Vulnerabilities of Embedded Systems

In terms of behavioral patterns, the relationships between the system and its environment, i.e., the user, the natural environment, etc., can be described as proactive, reactive or interactive. In the case of proactivity, the system generates the stimuli, which evoke the activity of its environment. In the case of reactivity, the system behavior evolves through its responses to stimuli generated by its environment. Most computer-human interfaces are nowadays interactive in the sense that the user and the system can be both pro- and reactive. In this particular kind of system, the user interface (UI) can have another environment, that is, the plant, the device or the equipment, which the UI is intended for and embedded in, controlling technical processes.

When categorizing behavioral aspects of human-computer systems, it is a good principle to start with the ergonomic aspects, which concern the extent of comfort, or discomfort, experienced by the system and its environment in each other's company.

Any vulnerability is always accompanied by threats.

In the case of safety, the threat originates from within the system due to potential failures and their spill-over effects, causing potentially extensive damage to its environment. In the face of such failures, the environment could be a helpless, passive victim. The goal of the system design in this case is to prevent faults that could potentially lead to such failures or, in the worst case, to mitigate the consequences at run-time should such failures ever occur.

In the case of security, the system is exposed to external threats originating from the environment, causing losses to the proprietor of the system. In this case, the environment, typically a user who may be unauthorized, can be malicious or deliberately aggressive. The goal of the system design is then to ensure that the system can protect itself against such malicious attacks.

In contrast to safety and security, the lack of user friendliness of a UI is a milder form of system vulnerability. A failure to be user-friendly is still a failure to fulfill a system attribute, though it typically causes only annoyance or irritation to the user, possibly leading to confusion and complaints. However, disastrous consequences cannot be entirely ruled out if the user is under stress and the UI mediates between a safety-critical application and its operator. Since safety is considered here as a system vulnerability in its own right, we limit ourselves to the UIs of modest applications designed to guide and help the user to navigate through, and explore, the system functionality.

Having identified several differences and distinctions between certain system vulnerabilities, let us abstract away from these peculiarities and treat them all in a generic manner in the following section. Further sections will illustrate our approach through a case study, considering user interactions as a special case.

An earlier version of the approach was introduced in [BE01]. The present study generalizes the approach and extends its scope of application.

2. Finite-State Modeling of System Vulnerabilities

**2.1 Modeling the System and Its Environment**

The functionality of an application is described here as a deterministic finite-state automaton (FSA)

M = (S, E, α, f)  (1)

where

- S is a set of states,
- E, a subset of S×S, is the set of state transitions,
- α is a set of input signals, and
- f is a total function from E to α.

The vertices of the corresponding transition diagrams represent the states (the elements of S); the directed arcs (the elements of E) represent the state transitions, which are labeled by input symbols (the elements of α). This notation differs slightly from the common ones used in the literature. We focus here on the state transitions caused by inputs and determined by f, which, in conjunction with E, defines the next-state function.

E has a distinguished unlabelled element denoting the entry from the exterior (environment) of M and another distinguished unlabelled element denoting the exit to the exterior of M. S and E are such that the field of f equals S.

The detailed structure of M in (1) can be suppressed by using a transition diagram of inputs described in terms of a grammar G equivalent to M based solely on α, in terms of L(M)=L(G), where G can be represented by a regular expression R. Thus, we can merge inputs and states, leading to considerably more efficient algorithms for analysis and test.

This view allows us to focus on input sequences, generated as strings of L(G), or L(R), instead of input-state sequences, as is necessary in the case of Moore automata. A similar interpretation was introduced by Myhill [My57] long ago.
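As an illustration (the states s0–s2 and inputs a, b, c are hypothetical examples, not taken from the paper's figures), the FSA M = (S, E, α, f) can be represented as a next-state table over (state, input) pairs, and deciding whether an input sequence belongs to L(M) reduces to a simple walk through the table:

```python
# Sketch of M = (S, E, alpha, f) as an acceptor of input sequences.
# The table combines E and the labeling f into one next-state map:
# (source state, input symbol) -> target state.
delta = {
    ("s0", "a"): "s1",
    ("s1", "b"): "s2",
    ("s2", "c"): "s0",
    ("s1", "c"): "s1",
}

def accepts(delta, entry, exits, word):
    """Return True iff `word` drives M from the entry state to an exit
    state, i.e. word is a legal input sequence of L(M)."""
    state = entry
    for symbol in word:
        if (state, symbol) not in delta:
            return False          # illegal transition: word not in L(M)
        state = delta[(state, symbol)]
    return state in exits

print(accepts(delta, "s0", {"s2"}, ["a", "b"]))       # True
print(accepts(delta, "s0", {"s2"}, ["a", "c", "b"]))  # True (s0->s1->s1->s2)
print(accepts(delta, "s0", {"s2"}, ["b"]))            # False
```

The distinguished unlabelled entry and exit elements of E are modeled here simply by designating an entry state and a set of exit states.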

**2.2 Strings to Model the Behavioral Patterns of the System – Analysis and Test Aspects**

System functions, as well as the threats to a chosen system vulnerability attribute, may each be described using two disjoint subsets of strings over α.

Legal state transitions represent desirable events, leading to input sequences belonging to L(M), producing system functions; illegal transitions represent the undesirable events, leading to input sequences not belonging to L(M), causing breaches to vulnerabilities.

Let us denote the set of strings corresponding to the system functions by F and the set of strings corresponding to the vulnerability threats by V.

Depending on the chosen system vulnerability attribute, the strings corresponding to vulnerability threats can be grouped in accordance with their length n. This assumes that all threats can be unequivocally identified by patterns of n consecutive symbols, i.e., strings of length n over α. It is obvious that we can utilize the grammar G, or its regular expression R, to generate test cases to verify whether

- system functions are fulfilled, and/or

- vulnerability threats occur.
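A minimal sketch of such test generation, reusing the hypothetical transition table from above: legal words of L(M) (up to a bounded length) exercise the system functions, while event pairs that no state admits consecutively yield sequences outside L(M), i.e., threat patterns of length n = 2:

```python
from itertools import product

# Hypothetical example transitions (not from the paper's figures).
delta = {
    ("s0", "a"): "s1",
    ("s1", "b"): "s2",
    ("s2", "c"): "s0",
}
alpha = {"a", "b", "c"}

def legal_words(delta, entry, exits, max_len):
    """Enumerate all words of L(M) up to length max_len: test cases
    that should exercise system functions (the set F)."""
    words, frontier = [], [(entry, [])]
    while frontier:
        state, word = frontier.pop()
        if state in exits and word:
            words.append(word)
        if len(word) < max_len:
            for (s, sym), t in delta.items():
                if s == state:
                    frontier.append((t, word + [sym]))
    return words

def faulty_pairs(delta, alpha):
    """Event pairs that no state admits consecutively; any sequence
    containing such a pair is not in L(M) (threat patterns, n = 2)."""
    legal = set()
    for (s, a), t in delta.items():
        for (s2, b), _ in delta.items():
            if s2 == t:
                legal.add((a, b))
    return sorted(set(product(alpha, repeat=2)) - legal)

print(sorted(legal_words(delta, "s0", {"s2"}, 5)))
# [['a', 'b'], ['a', 'b', 'c', 'a', 'b']]
print(faulty_pairs(delta, alpha))
```

Here the legal pairs are (a,b), (b,c) and (c,a); the remaining six pairs over α are faulty and serve as test cases for threat detection.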

It is important to note that several vulnerability attributes may simultaneously apply to a given application. In this case, F will remain the same in the study of every vulnerability attribute, but V will vary from one vulnerability attribute to another. To avoid confusion, the set of threats relevant to each attribute att will be denoted by V_att.

**2.3 Ordering the Vulnerability Risk, Countermeasures**

The threats constitute only a part of the specification of system vulnerability. The remainder consists of a vulnerability risk ordering relation, or briefly, a risk ordering relation ⊑ on S×S; see [ND02]. Given the states s1, s2 ∈ S, the risk ordering ⊑ is defined such that s1 ⊑ s2 is true if and only if the risk level of s1 to the chosen system vulnerability is known to be less than, or equal to, the risk level of s2. In other words, the risk level quantifies the degree of unacceptability of an event, on the grounds of hazardousness, exposure to breaches of security, the lack of usability or user-friendliness and so forth.

The risk ordering relation ⊑ is intended as a guide to decision making upon the detection of a threat, whether internal or external, and on how to react to it. The required response to breaches of vulnerability needs to be specified in terms of a defense matrix D, which is a partial function from S×V to S. The defense matrix utilizes the risk ordering relation to revert the system state from its current one to a less, or the least, risky state. The actual definition of this matrix is the responsibility of a domain expert specializing in the risks to a given vulnerability. Reversion of the system state from its current one to a less risky state is to be implemented in the form of exception handlers X. The complete model can thus be summarized as the tuple

(S, E, α, f, G, F, V, ⊑, X)

with S, E, α, f, G, F, V, ⊑ and X as defined above.
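The interplay of the risk ordering ⊑, the defense matrix D and the exception handlers X can be sketched as follows; the states, threat names and numeric risk levels are illustrative assumptions, not part of the paper's model:

```python
# Assumed example: risk levels assigned by a domain expert, inducing
# the risk ordering s1 ⊑ s2 iff level(s1) <= level(s2).
risk_level = {"safe": 0, "warning": 1, "critical": 2}

def less_or_equal_risk(s1, s2):
    """s1 ⊑ s2: the risk level of s1 is at most that of s2."""
    return risk_level[s1] <= risk_level[s2]

# Defense matrix D as a partial function from S×V to S; entries with
# no specified countermeasure are simply absent.
defense_matrix = {
    ("critical", "unauthorized_input"): "safe",
    ("warning",  "unauthorized_input"): "safe",
}

def handle_threat(state, threat):
    """Exception handler X: apply D if it is defined for (state, threat),
    checking that the reversion never increases the risk level."""
    target = defense_matrix.get((state, threat))
    if target is None:
        return state                      # D undefined: no countermeasure
    assert less_or_equal_risk(target, state), "D must not raise the risk"
    return target

print(handle_threat("critical", "unauthorized_input"))  # safe
```

The partiality of D is captured by the dictionary lookup: for (state, threat) pairs the expert has not specified, the handler leaves the state unchanged.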

The following sections are intended to refine the above model, making it more precise and relating it to practically relevant real-life scenarios.

The objective of our research is to demonstrate the above ideas in very different key application areas, such as user friendliness, safety and security, by adhering to the same principles and, broadly speaking, striving to achieve similar goals. However, this paper presents our results on the modeling, analysis and testing of user interactions only.

**2.4 Event Sequence Graphs to Specify the System Functions**

FSA (finite-state automata) will be conveniently represented by their state transition diagrams (STDs). STDs are directed graphs with an entry node and an exit node, and at least one path from entry to exit (we will use the notions “node” and “vertex” interchangeably). The edges connect adjacent nodes and are labeled by inputs and outputs, in accordance with the next-state function.

**Fig. 1: State Transition Diagram (STD) of a Finite-State Automaton (FSA)**

For representing UIs, we interpret the elements of the input set of the FSA as operations on identifiable objects that can be perceived and controlled by input/output devices, i.e., elements of WIMPs (Windows, Icons, Menus, and Pointers). Any chain of edges from one vertex of the STD to another, realized by a sequence of “initial user inputs – (interim) system responses – (interim) user inputs – … – (final) system response”, defines an interaction sequence (IS). Note that there could also be no interim system responses, so that an IS may consist only of user inputs followed by the final system response.

For the user, the system response can be an expected, desirable event. Undesirable events are responses which the user does not want, i.e., a faulty result, or an unexpected result that surprises the user. An IS is complete if the (final) response is a desirable event; such an IS is called a complete IS (CIS) [WA00]. Thus, the set of CISs specifies the system function F as introduced in Section 2.2; in other words, the CISs are the legal words of the language defined by the FSA that models the system.

Following this interpretation, we merge the inputs and states, assigning them to the vertices of the STD. The next-state function is interpreted accordingly, thus inducing the next input that will be merged with the due state. This is in accordance with the conventional interpretation of finite automata as acceptors of input sequences, having a single initial state and final (accepting/rejecting) state(s) [Gi62, Sa69]. In other words, we abstract from states and concentrate on inputs. The result is a simplified version of the STD that we call an Event Sequence Graph (ESG). ESGs are digraphs and can be compared with Myhill graphs [My57], which have also been used as computation schemes [Ia58], or as syntax diagrams, e.g., to define the syntax of Pascal [Je85, Te02].
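This merging step can be sketched as a small transformation (the transitions are hypothetical, not those of Fig. 1): each labeled transition becomes an event node, and event b may follow event a in the ESG exactly when b's transition starts in the state where a's transition ends:

```python
# Hypothetical STD as a list of (source state, input, target state).
std = [
    ("s0", "a", "s1"),
    ("s1", "b", "s2"),
    ("s2", "c", "s0"),
]

def esg_edges(std):
    """Derive the ESG edges: pairs of events (inputs) that may occur
    consecutively. States are abstracted away; only the event order
    they induce is kept."""
    edges = set()
    for (_, a, t1) in std:
        for (s2, b, _) in std:
            if t1 == s2:          # b's transition starts where a's ends
                edges.add((a, b))
    return sorted(edges)

print(esg_edges(std))  # [('a', 'b'), ('b', 'c'), ('c', 'a')]
```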

The difference between the Myhill graphs and our ESG is that the symbols which label the nodes of an ESG will be interpreted not as symbols and meta-symbols of a language, but as operations of an event set. Fig. 2 represents the ESG of the FSA given in Fig. 1.

**Fig. 2: Event sequence graph of the FSA given in Fig. 1**

ESGs have the same computational power as FSA (type-3 grammars, producing regular languages), having the recognizing capabilities that we need, e.g., to answer the essential question “Is a (final) event desirable?”, which can be reduced to the word recognition problem that is decidable for type-3 grammars.

To sum up, we use the notions “state” and “input” on one hand, and “state” and “output” (as “system response”) on the other hand, synonymously, because the user is interested in the external behavior of the system, and not its internal states and mechanisms. Thus, we are strongly focusing on the needs and expectations of the user.

**Fig. 3: Example of a GUI**

3. Modeling User Interactions

The approach described here has been used in different environments, including industrial applications. Thus, we could extend and deepen our theoretical view and gain practical insight by examining several applications. Below, we summarize our experiences with the approach. Instead of a full documentation, which would exceed the space available in a brief report and exhaust the patience of the reader, we will display some highlights, clarifying some relevant aspects and focusing on the fault detection capabilities of the introduced method. We chose this example from a broad variety of applications to emphasize the versatility of the approach.