A framework for design tradeoffs


A Framework for Design Tradeoffs

Anneliese Andrews
Dept. of EECS, Washington State University, USA
[email protected]

Ed Mancebo
Amazon.com, Seattle, WA, USA
[email protected]

Per Runeson
Dept. of Communications Systems, Lund University, Sweden
[email protected]

Robert France
Dept. of Computer Science, Colorado State University, USA
[email protected]

Abstract

Designs almost always require tradeoffs between competing design choices to meet system requirements. We present a framework for evaluating design choices with respect to meeting competing requirements. Specifically, we develop a model to estimate the performance of a UML design subject to changing levels of security and fault-tolerance. This analysis gives us a way to identify design solutions that are infeasible. Multi-criteria decision making techniques are applied to evaluate the remaining feasible alternatives. The method is illustrated with two examples: a small sensor network and a system for controlling traffic lights.

1 Introduction

When faced with competing requirements, designers must be able to recognize where tradeoffs exist and decide how to deal with them [40]. Early analysis of tradeoffs is needed to make design decisions that reflect priorities between requirements. Two research questions are at hand:

1. Can design information be used to characterize design tradeoffs in the presence of competing requirements?
2. Can the value of design alternatives be codified to rank them by preference?

Although many competing requirements can be identified early in the process [41, 28], it may be difficult to determine how much they compete until later.

The ability to estimate how much one requirement must be compromised to attain a certain level of another requirement is valuable knowledge. If this knowledge is available during design, it can be used to determine whether a design is overly ambitious in trying to support competing concerns. In other words, we want to determine if a set of requirements can be feasibly satisfied using a given design.

Design decisions are influenced by requirements: one design is better than another if it supports requirements to a greater extent [2]. If this is true, then a design specifies how to realize requirements. For example, most systems are designed with certain performance requirements in mind. The designer consequently must analyze a resulting design to determine if these performance requirements can be satisfied.

During design, we are forced to choose between competing design options that may differ in how they meet requirements. In practice, projects may have a large number of requirements, and there may be multiple design options to realize each requirement. This can lead to a large number of choices, too many to handle by simple inspection. For example, even if there are only three requirements (performance, fault tolerance, and security) and four design options for each, this leads to 4^3 = 64 different design solutions. In addition, a design choice in one area may affect fulfilling a requirement in another. These two complications can make an ad hoc decision error-prone; the best design solution may be far from obvious. To deal with a large number of alternatives judged on multiple criteria, it is useful to adopt a systematic, automatable decision-making approach. We apply multi-criteria decision making techniques to rank design alternatives based on preference.

Given that this evaluation occurs during design, we can assume that requirements and requirements priorities are stable enough to proceed to design. While we realize that there may be problems obtaining stable requirements and requirements priorities, this should not be an excuse to forgo a systematic evaluation during design; we simply consider the best available knowledge at the time. This paper thus does not address setting requirements and requirements priorities, the influence of stakeholders in defining priorities, or the types of changes requirements undergo before design. However, since the analysis of design choices is automated, it is not difficult to perform sensitivity analysis if requirements and requirements priorities do change.

We introduce a framework for evaluating tradeoffs during design. The framework includes a model to characterize competing concerns using performance, fault-tolerance, and security requirements in a UML design. Section 2 introduces background material in multi-criteria decision making and performance analysis during design. Section 3 presents the method. We apply the method to two examples: the first provides a detailed illustration of each step of the method using a simplified sensor network (Section 4); the second applies the method to a more complex system for controlling traffic lights (Section 5). Section 6 concludes with lessons learned and needs for future work.

2 Background

The work presented in this paper draws from many areas of software engineering, including multi-criteria decision making (MCDM), requirements prioritization, performance analysis, and architectural tradeoffs. MCDM refers to the theory and methodology of systematically assessing decisions based on multiple criteria; an overview of the principles can be found in [25] or [10]. Multi-criteria analysis involves three activities: (1) the decomposition of a decision problem into multiple criteria, (2) the evaluation of available alternatives based on the criteria chosen, and (3) the synthesis of this evaluation with preferences and priorities to determine the most desired option.

Many of these issues are addressed by existing work in requirements prioritization, since each criterion is derived from requirements. An extensive method to collect, quantify, and analyze a multitude of requirements is presented by von Mayrhauser [40]. It includes specifying levels of functionality, reliability, performance, security, usability, portability, adaptability, etc. Each is ranked on three levels of importance (hard requirement, important, nice but optional). Next, constraints and benefits for each set of solutions are identified. Then priorities between types of requirements are identified to allow for tradeoffs between them. Benefits for each set of solutions are determined, in addition to their feasibility with respect to project constraints. This analysis is done at the requirements level and does not include consideration of detailed design level issues as we do in this paper.

Several other methods exist to prioritize requirements; for a survey of existing approaches see [6]. These methods can be divided into two categories: ranking and weighted composition. Ranking methods attempt to define an ordinal ordering of requirements by priority. A complete ordering can be defined, or a list of "top-ten" requirements can be collected from each stakeholder and compared for consistency. Weighted composition methods assign ratio-level measures to each requirement that denote their relative importance. For example, a requirement with a weight of 0.5 is deemed twice as important as a requirement weighted at 0.25. Assuming these two requirements are evaluated on the same scale, this relationship means that we are willing to sacrifice one unit of the first requirement to gain two units of the second.

Weights are often assigned using subjective judgement. This can be effective when dealing with a small set of requirements. When dealing with many requirements, systematic methods like the analytic hierarchy process (AHP) are available to determine weights. AHP uses pair-wise comparison to obtain an ordered list. It has been used before for software requirements by Karlsson and Ryan [23]: requirements are evaluated based on value and cost.

Shepperd and Cartwright [37] used it to estimate project effort when there was little objective data. AHP allows weighting the requirements items by quantifying their relative importance. This set of weights is then used with levels for individual requirements items to determine the value of a candidate solution. While AHP is not defined for rank order measures, it has the advantage that it calculates a consistency index that evaluates the degree to which priorities have been set consistently. The complexity of trading off a multiplicity of requirements items makes such a capability desirable. Svahnberg et al. [36] use a weighted composition method to evaluate options for software architecture.

There are numerous applications of multi-criteria analysis in the literature. A rather comprehensive one is the Defect Detection and Prevention (DDP) process [13, 14, 15]. DDP involves quantifying the comparative importance of requirements, failure modes, and their impact on requirements, as well as the impact of mitigation actions and their cost. Other approaches that have been used to evaluate requirements threats, mitigation options, and design choices include Quality Function Deployment (QFD) [22], Softgoal Analysis [31], WinWin [9], and KAOS [11]. None of these are directly applicable as a design tradeoff strategy, either because they are risk-driven (we are interested in finding a combination of design solutions that together satisfy our requirements, rather than emphasizing risks), or because they cover only a small aspect of what we need in a tradeoff evaluation strategy.

Requirements are often imprecise [28]. This adds to the complexity of tradeoff analysis when conflicts between requirements occur. Yen and Tiao [41] and Liu and Yen [28] proposed a tradeoff analysis that combines methods from Decision Science and Fuzzy Logic. These are useful during requirements definition and analysis, because they help in evaluating conflicting requirements. By contrast, our approach trades off design choices with respect to how well they satisfy requirements. Thus the result of applying their approach provides part of the input to our analysis at the design level. This paper evaluates design options with regard to achieving certain levels of a requirement (specifically performance, security, and fault-tolerance).

Software performance is an important property of a software system, and its boundaries are often set by early architectural design decisions. In order to manage software performance requirements effectively and efficiently, a proactive approach is needed [38]. Smith and Williams [38] present an approach to performance analysis based on key performance scenarios expressed in terms of UML use case diagrams. They build queuing models for analysis of the performance of a system. Another approach combining use cases, UML models, and queuing models is given in [1]. An approach based on use case maps is presented by Petriu and Woodside [34]. In addition to the modeling approaches, data collection for performance evaluation is needed [3, 21]. Finally, performance evaluation depends on approaches to effectively choose which performance sequences to execute [7, 8]. For a survey of model-based performance prediction techniques, refer to Balsamo et al. [4].

Architecture analysis is closely related to our method. The goal is to analyze architectural decisions by observing the effect they have on certain properties (quality attributes) of the architecture. ATAM [24] is a current method for architecture analysis. While the focus of ATAM is to make tradeoffs visible, we go a step farther by providing a framework for evaluating them.

3 A Value-based Analysis Framework for Design Decisions

A general framework for managing tradeoffs involves the definition of decision factors, priorities, value functions, and constraints. We present a process for defining these elements and using them to evaluate tradeoffs (Figure 1). The knowledge used by each step in the process is shown in the left column, the analysis steps are shown in the middle column (numbered in chronological order), and the right-hand column shows the results produced along the way.

1. Define decision factors. Decision factors during design reflect functional and non-functional properties of a design that may cause tradeoffs with one another. We refer to them as design factors. While they are derived from requirements (for example, requirements related to security give rise to design factors related to security), we distinguish factors from requirements because there may not be a one-to-one mapping between the two. For example, a security requirement may lead to design decisions about the types of authentication and encryption mechanisms used, which could be treated as two separate factors, because they constitute two different design choices: one choice is to select the type of authentication, the other the type of encryption. For a tradeoff problem with n factors, we label them f_1 ... f_n.

2. Determine value levels for factors. Each factor has an associated set of levels or alternatives. Since factors are typically measured on different scales or in non-commensurate units, levels must be mapped to a common measure of overall usefulness. This mapping is called a utility or value function. Since we are trying to establish tradeoffs between design choices, it is more important to establish tradeoffs by the value of a design choice, rather than by risk or cost. This approach is similar to the utility analysis for various assessment and tradeoff situations that occur in software engineering as described by von Mayrhauser [40]. Utility assessment can be done either by the category method or by the direct method. It associates a utility level, usually between 0 (no value) and 1 (perfect), with each level of a factor. Utility functions can be discrete (as for the levels of security and fault-tolerance) or continuous (as for performance). They can be stepwise, linear, concave, convex, or follow an S-curve. For example, if basic authentication is all that is needed for the security of the system, one would associate this level with a value of 1.
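As an illustration (not from the paper), the two kinds of utility functions described above can be sketched as follows; the level names, numbers, and the exponential shape are assumptions chosen only to show the idea.

```python
import math

# Discrete utility: each level of a factor maps to a value in [0, 1].
security_utility = {
    "none": 0.0,        # no authentication
    "basic_auth": 1.0,  # basic authentication is all that is needed here
}

# Continuous, concave utility: value increases at a decreasing rate,
# e.g. for a performance-related factor measured on a 0-100 scale.
def concave_utility(x, scale=25.0):
    """Utility in [0, 1) that rises steeply at first and then flattens."""
    return 1.0 - math.exp(-x / scale)

print(security_utility["basic_auth"])   # 1.0
print(round(concave_utility(50), 2))    # about 0.86
```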

[Figure 1: Evaluation Framework. Knowledge inputs (requirements, preferences, historical data, domain knowledge, models) feed six analysis steps: 1. Determine Decision Factors, 2. Determine Value Levels, 3. Determine Constraint Model, 4. Determine Feasible Solutions, 5. Determine Factor Priorities, 6. Determine Optimal Solution(s). The results produced along the way are solutions, constraints, feasible solutions, priorities, and the optimal solution(s).]

Defining levels for each factor establishes the decision space for the tradeoffs problem. Let L_fi be the set of levels for factor f_i. Choosing one level for each factor constitutes a solution in the decision space, written (l_f1, l_f2, ..., l_fn) where l_fi ∈ L_fi. The set of all possible solutions can be enumerated by computing the Cartesian product of the levels for each factor, L_f1 × L_f2 × ... × L_fn (in practice, however, there are too many solutions to enumerate directly). Levels for each factor are ordered by preference. For a factor f_a with k levels, we define an ordering l_fa,1 ≺ l_fa,2 ≺ ... ≺ l_fa,k. The term l_fa,1 ≺ l_fa,2 reads "l_fa,1 is lower than l_fa,2", or "l_fa,2 is preferred to l_fa,1".

3. Develop a model for constraints on factors. Factors, value functions, and priorities (defined later) are sufficient to rank the solutions by preference, but what if requirements compete to the point where some solutions are not feasible? For example, a database implementation may not be able to achieve optimal value levels for both time and space efficiency, or the most useful off-the-shelf software component may not be available at the ideal price. This illustrates that it is usually not possible to achieve the best levels for all factors; a database design with the highest levels of time and space efficiency may not be technically feasible. We say that the highest levels of these two factors conflict in this case. We develop a constraint model to filter out such infeasible solutions.

The goal of our constraint model is to determine if a given solution (l_f1, l_f2, ..., l_fn) is feasible. This is accomplished by defining a predicate Feasible such that Feasible(l_f1, l_f2, ..., l_fn) holds if and only if the solution has no conflicting levels. The Feasible predicate may be simple or sophisticated depending on the types of factors involved. We give steps for defining this predicate using performance, security, and fault-tolerance factors in a UML design.

What is the relationship between these factors? We hypothesize that more sophisticated security and fault-tolerance mechanisms will have a larger effect on performance. Performance is bounded by security and fault-tolerance: the highest attainable level of performance for a solution depends on the corresponding levels of security and fault-tolerance. Let l_s, l_t, and l_p be levels for security, fault-tolerance, and performance. We define a function pmax, where pmax(l_s, l_t) computes the highest attainable level of performance in a solution with l_s and l_t as the levels of security and fault-tolerance. The solution (l_s, l_t, l_p) is feasible if pmax(l_s, l_t) ⪰ l_p. In other words, the level of performance in a solution must be less than or equal to the highest attainable level allowed by the levels of security and fault-tolerance. This gives us a means to define the Feasible predicate: Feasible(l_s, l_t, l_p) = pmax(l_s, l_t) ⪰ l_p.

We need a way to compute the function pmax. We define a six step process for computing pmax(l_s, l_t) for performance related factors:

1. Derive workload units from levels of factors l_s and l_t.
2. Map workload units to workflows (e.g., message paths).
3. Associate size parameters with workload units.
4. Define performance functions for workflows (e.g., message paths).
5. Determine operational profile.

6. Calculate performance bound.

Performing this six step process requires information from three levels of representation (Figure 2):

(i) Requirements level. Levels of fault-tolerance and security are considered in determining the units of work the system must perform. We also define how big each unit of work is (size parameters), and how often each unit must be performed (operational profile).

(ii) Logical design level. To perform a unit of work, the finished system will execute a sequence of method calls (which we call a message path). Workload units are mapped to message paths on the logical level. Message paths are determined by studying the UML design.

(iii) Physical level. On this level we define functions that calculate the load of each message path. Load is an abstract quantity that represents the relative execution time of a method compared to other methods in the system. We use load to define the performance limitations of a system; a system has a certain load capacity, and the combined load introduced by each message path must be within this capacity. Technology-specific and platform-specific information may be required to determine loads. As the tradeoff method is intended to be applied at the design stage, the physical level may not be available yet. This is no different than in other situations where expected performance must be predicted during software design; for relevant techniques see [38]. As more precise and accurate information becomes available, the evaluation can be repeated with that information. For methods and tools to analyze performance, given the presence of a physical level, see Balsamo et al. [4].

(1) Derive workload units from levels of factors l_s and l_t. The first step is to characterize the work the system needs to do. A workload unit describes an amount of work, typically in the form of a use case or task. Use cases are best for systems with a lot of interaction between users and adjacent systems, for example an online bookstore or a desktop word processor. Some systems are easier to model using tasks, like an embedded system that quietly runs three tasks continuously, without prompting or external stimuli.

We have claimed that the highest level of performance attainable for a given solution depends on the levels of security and fault-tolerance. This is because the amount of work done by the system depends on these levels; namely, the system performs more work with security and fault-tolerance mechanisms implemented. Workload units are defined considering the levels l_s and l_t. For example, imagine a 3D chess game for two players. With no error-checking implemented (l_s = 0), one unit of work might be "making a move". With error-checking implemented (l_s = 1), the program would verify that a move is legal before allowing a player to make it; then the workload unit "making a move" would be divided into "making a legal move" and "making an illegal move". We distinguish legal moves from illegal ones because these two cases may occur at different frequencies and have different performance impacts.

[Figure 2: The entities defined in each level of representation. Requirements level: levels of factors, size parameters, operational profile. Logical level: workload units, message paths. Physical level: performance functions. The numbered arrows correspond to steps in the process.]

At the completion of this step we have n workload units, W_1 ... W_n.

(2) Map workload units to message paths. Workload units represent a conceptual decomposition of a system's functionality on the requirements level. To create a performance model, we need to map these units of work to measurable quantities on the logical level (cf. Figure 2). To perform a unit of work, the finished system will execute a message path. We establish a one-to-one mapping between workload units and message paths. The UML provides a convenient way to express message paths through the use of sequence diagrams. However, mapping workload units to sequence diagrams is not adequate for our purposes because a single sequence diagram may characterize multiple workload units. This is especially true when the sequence diagram contains conditions.

Returning to our chess example, Figure 3 shows the sequence diagram for making a move. Different message paths are taken depending on whether the move is legal or not. This sequence diagram actually captures two workload units: "making a legal move" and "making an illegal move". To preserve the one-to-one mapping between workload units and message paths, we need to extract the sequence of method calls that corresponds to each workload unit from the sequence diagram. "Making a legal move" and "making an illegal move" can be represented by the message paths isLegal() → updateState() → redraw() and isLegal() → printError(), respectively.

With n workload units W_1 ... W_n, we should also have n corresponding message paths Λ_1 ... Λ_n. For a given message path Λ_i with j method invocations, the sequence of methods is written m_i1 → m_i2 → ... → m_ij.

[Figure 3: A sequence diagram representing the methods invoked when a player makes a move m. The Game calls a:=isLegal(m) on the Board; if a=true, updateState(m) and redraw() follow, otherwise printError().]

(3) Associate size parameters with workload units. The size or complexity of a workload unit is represented by parameterizing it with dimensions of "size" or "complexity". For each workload unit W_i (i = 1...n) we associate a size measure S_i (i = 1...n), including a range of values for S_i from low to high, i.e., (low_i, high_i). This analysis is a rough estimate with the goal of identifying boundaries. If a significant non-linear relation is expected, mid values should be investigated as well.

Recall the message path in the chess example that corresponds to making a legal move: isLegal() → updateState() → redraw(). This sequence of method calls may take a different amount of time depending on how many pieces remain on the board. Thus, the size measure S for this message path is the number of pieces on the board. A low value might be 5 (during the endgame) and a high value 32 (during the opening).

(4) Define performance functions for message paths. Each message path is a sequence of method calls. Using information from the physical level, we define performance functions to map each method call to an estimated amount of load. The load of a method m_ij is written L(m_ij, s), where s is a size parameter defined in step 3. The size parameter will only affect the load of some methods. Consider the methods updateState() and redraw() from the chess example. The updateState() method changes the position of one piece on the board; its load should not depend on the number of remaining pieces.

On the other hand, the load of the redraw() method may vary significantly depending on how many pieces must be rendered. The load of a message path Λ_i with k method invocations is computed by summing the loads of each method in the sequence, using a given size parameter s (Eq. 1):

L(Λ_i, s) = Σ_{j=1}^{k} L(m_ij, s)    (1)

(5) Determine operational profile. An operational profile is used to obtain an estimate for the total system load by composing the loads caused by each workload unit. We estimate the frequency of each workload unit and, within each workload unit, the frequency of the various size levels. We also need to determine the volume of use in a target period (such as an hour, a day, etc.). The frequencies for workload units W_1 ... W_n are labeled q_1 ... q_n. For each workload unit W_i, we use α_i to represent the percentage of W_i invocations that are expected to have a low size parameter.

Using the chess example, a player may make a legal move every 10 seconds (q = 1 move every 10 seconds). If we reason that the endgame contains the most moves, then the low size parameter would be more common and α might have the value 0.7. What does this mean? In a five minute game, we expect 30 moves to be made; approximately 21 of these moves will be made with a low number of pieces on the board (and the remaining 9 with a high number of pieces).

Given message paths Λ_1 ... Λ_n corresponding to workload units W_1 ... W_n, the total load of the system can be calculated as follows:

load_total = Σ_{i=1}^{n} α_i q_i L(Λ_i, low_i) + Σ_{i=1}^{n} (1 − α_i) q_i L(Λ_i, high_i)    (2)

Eq. 2 sums the load for each message path multiplied by the frequency of its occurrence in a target period. The units of load_total are load units per time period (the time period is the same as the one used to specify the frequencies q_1 ... q_n). This load calculation is an approximation of the more general case in which not only high and low size values are considered, but possibly a whole spectrum of size values and their frequencies in the operational profile. If a more exact workload definition is required, one needs to define for a size measure S a range of values (s_1, ..., s_k) as well as associated normalized frequencies (α_1i, ..., α_ki) for workload i. Eq. 2 then generalizes to

load_total = Σ_{j=1}^{k} Σ_{i=1}^{n} α_ji q_i L(Λ_i, s_j),  with  Σ_{j=1}^{k} α_ji = 1    (3)
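To make the load model concrete, the sketch below computes Eq. 1 and Eq. 2 for the chess example; the per-method load numbers and the operational profile figures are invented for illustration and are not from the paper.

```python
# Illustrative sketch of Eqs. 1 and 2, with assumed (hypothetical) load numbers.

# Per-method load functions L(m, s); s = number of pieces on the board.
def load_isLegal(s):     return 10
def load_updateState(s): return 5
def load_redraw(s):      return 2 * s      # redraw cost grows with piece count
def load_printError(s):  return 1

# Message paths as sequences of method load functions.
legal_move   = [load_isLegal, load_updateState, load_redraw]
illegal_move = [load_isLegal, load_printError]

def path_load(path, s):
    """Eq. 1: sum the loads of the methods in a message path for size s."""
    return sum(m(s) for m in path)

# Operational profile: frequency q (per hour) and fraction alpha of "low" size calls.
profile = [
    # (message path, q, alpha, low size, high size)
    (legal_move,   360, 0.7, 5, 32),
    (illegal_move,  40, 0.7, 5, 32),
]

# Eq. 2: total load per hour over all workload units.
load_total = sum(a * q * path_load(p, lo) + (1 - a) * q * path_load(p, hi)
                 for p, q, a, lo, hi in profile)
print(load_total)
```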

(6) Calculate performance bound. The result of the pmax(l_s, l_t) function is a performance bound, the highest level of performance attainable given l_s and l_t. The last step is to calculate this upper bound. First we need to establish levels for l_p. We assume that the system has a known load capacity c.

If this capacity is expressed in load units, it can be compared directly with load_total; otherwise we need to map load units to comparable units of capacity (using information from the physical level). When measuring performance using capacity, there are two possible levels for l_p: (1) l_p1, load exceeds capacity, and (2) l_p2, load is within capacity. Given these levels for l_p, pmax returns either l_p1 or l_p2. We calculate load_total using Eq. 2. If load_total < c, then l_p2 is attainable; otherwise, a solution with l_s and l_t can do no better on performance than to attain l_p1.

4. Determine feasible solutions. By applying the Feasible predicate to each solution in the decision space, we determine the set of feasible solutions. After setting priorities between factors, we evaluate tradeoffs between these remaining solutions.

5. Set priorities between factors. Priorities are needed to balance tradeoffs between competing factors. Each factor f_i is given a weight a_i indicating its relative importance compared to other factors. Weights must be assigned such that Σ a_i = 1. We recommend using AHP [35] to derive these weights in a consistent way.

6. Determine optimal solution. After removing infeasible solutions from the decision space, we apply utilities and priorities to rank the remaining solutions by preference. Given factors f_1 ... f_n with value levels l_f1 ... l_fn, minimum value levels l_f1^min ... l_fn^min, and weights a_1 ... a_n, the optimal solution can be found by applying a weighted scoring model, as in Eq. 4:

utility = max ( Σ_{i=1}^{n} a_i l_fi )   s.t.  l_fi ≥ l_fi^min,  Σ a_i = 1,  a_i > 0    (4)
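As a minimal sketch of steps 4–6 (not the paper's tooling), the following Python code enumerates a decision space, filters it with a Feasible predicate, and ranks the remaining solutions with the weighted scoring model of Eq. 4; the factors, utilities, weights, and feasibility rule are illustrative assumptions.

```python
from itertools import product

# Hypothetical factors: each maps level names to utilities in [0, 1].
factors = {
    "security":        {"none": 0.0, "auth": 1.0},
    "fault_tolerance": {"none": 0.0, "checking": 1.0},
    "performance":     {"over_capacity": 0.0, "within_capacity": 1.0},
}
weights = {"security": 0.25, "fault_tolerance": 0.25, "performance": 0.5}

def feasible(sol):
    # Assumed constraint model: enabling both mechanisms exceeds the load capacity,
    # so such a solution cannot also claim "within_capacity" performance.
    if sol["security"] == "auth" and sol["fault_tolerance"] == "checking":
        return sol["performance"] == "over_capacity"
    return True

def utility(sol):
    """Eq. 4: weighted sum of the utilities of the chosen levels."""
    return sum(weights[f] * factors[f][lvl] for f, lvl in sol.items())

# Enumerate the Cartesian product of levels (the decision space), filter, and rank.
names = list(factors)
space = [dict(zip(names, combo)) for combo in product(*(factors[n] for n in names))]
ranked = sorted((s for s in space if feasible(s)), key=utility, reverse=True)
for s in ranked:
    print(round(utility(s), 2), s)
```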

4 Sensor Network Case Study

In this section we present an example case study for a simple system with a number of probes monitoring a network. The system is a scaled-down version of a real system [18]. The probes need to be configured, and they need to perform data collection and post analysis. In addition, the host system will perform post analysis off line. The steps of the method defined in Section 3 are followed below.

1. Define decision factors. Our analysis contains three decision factors: authentication (f_1), error-checking (f_2), and performance (f_3).

2. Determine value levels for factors. Each factor has two levels. Authentication and error-checking mechanisms are either implemented or not. Performance is defined in terms of load capacity; the system has a certain capacity, and a solution can either be within that capacity or not. These types of "all or nothing" value levels yield binary utility functions, as shown in Table 1.

3. Develop a model for constraints on factors. Our goal is to define the predicate Feasible(l_f1, l_f2, l_f3) = pmax(l_f1, l_f2) ⪰ l_f3. We follow the six steps from Section 3 to define pmax.

Factor                 Level     Utility   Description
Authentication (f_1)   l_f1,1    0         No authentication
                       l_f1,2    1         Authentication implemented
Error-checking (f_2)   l_f2,1    0         No error-checking
                       l_f2,2    1         Error-checking implemented
Performance (f_3)      l_f3,1    0         Load exceeds capacity
                       l_f3,2    1         Load within capacity

Table 1: Levels for each factor are mapped to utilities. These mappings can be expressed in functional form using a utility function u, e.g., u(l_f1,2) = 1.

(1) Derive workload units from levels of factors l_f1 and l_f2. The workload units for this system are derived from four use cases: "configure probes", "capture data", "check probe", and "post analysis". Different levels of authentication and error-checking can be incorporated into each use case; the options are:

• "Configure probes" can be implemented with authentication or not.
• "Capture data" and "Check probe" can be implemented with error-checking or not.
• "Post analysis" uses neither error-checking nor authentication.

Table 2 shows the resulting workload units if we implement error-checking and authentication whenever possible (i.e., l_f1 = l_f1,2 and l_f2 = l_f2,2). These are the second design choice for factors f_1 and f_2.

W_1   Configure probes with successful authentication
W_2   Configure probes with failed authentication
W_3   Capture data with successful error checking
W_4   Capture data with failed error checking
W_5   Check probe with successful error checking
W_6   Check probe with failed error checking
W_7   Post analysis

Table 2: Workload units.

If l_f1 = l_f1,1 (no authentication) instead, then W_1 and W_2 would be replaced by a single workload unit, a plain "Configure probes". To illustrate the rest of the example, we show the calculations following the workload units shown in Table 2.

(2) Map workload units to message paths. To map workload units to message paths, we need UML diagrams from the logical level. Levels of authentication and error-checking are represented in the UML design. Figure 4 shows a design solution using aspect-oriented design (AOD) [26] and UML.

[Figure 4: Static model for the example system. Top: the primary model (Host with configProbes(n), transfer(data), postAnalysis(); Probe with config(), captureData(data), filter(data), shutdown(); User manages the Host). Middle: aspects for authentication and error-checking (SecurityMgrRole with authenticate(user):bool; CheckControlRole with errorCheck:bool; CheckerRole with errorCheck(data):bool and checkStream(data):bool). Bottom: the composed (woven) model with authentication and error checking.]

Each workload unit W_1 ... W_n is mapped to a message path Λ_1 ... Λ_n. We study existing sequence diagrams (Figure 5) to determine the sequence of method calls associated with each workload unit.

[Figure 5: The sequence diagram for the "configure probes" use case. On the left is the primary model; on the right, authentication is woven into the model (Host' calls a:=authenticate(u) and, if a=true, config() in a loop over the probes, otherwise error()).]

From Figure 5 we can derive message paths for the two workload units associated with configuring probes (W_1, W_2):

Λ_1 = configProbes(n) → authenticate() → config()_1 → ... → config()_n    (5)
Λ_2 = configProbes(n) → authenticate() → error()    (6)

The config() function is called once for every probe that needs to be configured. We determine Λ_3 ... Λ_7 in the same way, using sequence diagrams for the other use cases.

(3) Associate size parameters with workload units. Each workload unit W_1 ... W_7 is given a size measure S_1 ... S_7. Workload units derived from the same use case can share the same size measures. Table 3 groups the workload units and maps them to size measures with low and high values.

Workload Units   Size Measure                                   Low value       High value
W_1, W_2         S_1, S_2: Number of probes to configure        5               100
W_3, W_4         S_3, S_4: Rate of capture                      kHz             MHz
W_4, W_5         S_4, S_5: Probability of error on channel      10^-6           10^-9
W_7              S_7: Amount of data to analyze                 Hour of data    Month of data

Table 3: Each workload unit has a size measure with low and high values.

(4) Define performance functions for message paths. We first define performance functions for each method m_ij of message path Λ_i; the estimated load of a given message path Λ_i can then be computed using Eq. 1. The estimated load functions for each method are shown below, parameterized with a size measure s. The domain of the size parameter is specified where it is used.

L(configProbes, s)  = 1000 + 200s,  s ∈ S_1
L(transfer, s)      = 5
L(postAnalysis, s)  = 100 + 10s,  s ∈ S_7
L(authenticate, s)  = 200
L(checkStream, s)   = 20
L(config, s)        = 200
L(captureData, s)   = 5
L(filter, s)        = 5
L(shutdown, s)      = 25
L(errorCheck, s)    = 50

Now we apply Eq. 1 to define L(Λ_i, s) for i = 1...7. For example, in step 2 we defined Λ_1 = configProbes(n) → authenticate() → config()_1 → ... → config()_n. Configuring n probes requires 1 call to configProbes, 1 call to authenticate, and n calls to config. Thus, L(Λ_1, s) = L(configProbes, s) + L(authenticate, s) + s · L(config, s), where s is the number of probes to configure (i.e., s ∈ S_1). Doing the calculations for the low and high size parameters (from Table 3), we find that L(Λ_1, 5) = 3200 load units and L(Λ_1, 100) = 41200 load units. Load functions are defined for the remaining message paths in the same way.

(5) Determine operational profile. We specify the frequency of invoking each workload unit and, for each workload unit, the percentage of invocations with the low size parameter. These numbers are shown in Table 4. Recall that α is the percentage of workload unit invocations that are given the "low" size parameter.

Workload Unit   Frequency per hour (q)   α
W_1             6                        0.833
W_2             0.1                      0.833
W_3             3.6 · 10^9               0.001
W_4             10^-3                    0.001
W_5             720000                   0.5
W_6             10^-3                    0.5
W_7             10.02                    0.998

Table 4: Operational profile given over a time period of an hour.

We notice at this point that the load due to some workload units is negligible. For example, capturing data with failed error-checking (W_4) happens so infrequently (once every thousand hours) that the load added to the system from the invoked message path can be disregarded.
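The Λ_1 load calculation above can be reproduced with a few lines of Python; this is only an illustrative sketch of Eq. 1 and the Eq. 2 contribution of the configure-probes paths, and the error() load (not given in the paper) is an assumed placeholder.

```python
# Load functions from the list above (load units).
def L_configProbes(s): return 1000 + 200 * s
L_authenticate = 200
L_config       = 200
L_error        = 10          # assumed placeholder; not specified in the paper

def load_path1(s):
    """Lambda1 = configProbes(n) -> authenticate() -> config() x n  (Eq. 1)."""
    return L_configProbes(s) + L_authenticate + s * L_config

def load_path2(s):
    """Lambda2 = configProbes(n) -> authenticate() -> error()."""
    return L_configProbes(s) + L_authenticate + L_error

print(load_path1(5))    # 3200, matching the text
print(load_path1(100))  # 41200, matching the text

# Eq. 2 contribution of W1 and W2 (per hour), using q and alpha from Table 4.
q1, q2, alpha = 6, 0.1, 0.833
low, high = 5, 100
contribution = (alpha * q1 * load_path1(low) + (1 - alpha) * q1 * load_path1(high)
                + alpha * q2 * load_path2(low) + (1 - alpha) * q2 * load_path2(high))
print(round(contribution))
```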

(6) Calculate performance bound. The highest level of performance attainable in a solution using levels l_f1 and l_f2 for authentication and error-checking is computed using the function pmax. pmax is defined by comparing the total load of a solution with the load capacity of the system. The load of a solution (load_total from Eq. 2) is calculated using the constants and functions defined in steps 1–5. The system capacity c is known to be 5.2 · 10^12 load units. Alg. 1 shows the calculation of the upper bound pmax.

pmax(l_f1, l_f2):
    calculate load_total given l_f1, l_f2;
    c = 5.2 · 10^12;
    if load_total > c then
        return l_f3,1;
    else
        return l_f3,2;
    end

Algorithm 1: Calculating the upper bound on the level of performance attainable given l_f1 and l_f2.

For l_f1 = l_f1,2 and l_f2 = l_f2,2, load_total = 2.3 · 10^13. Since the load exceeds capacity (load_total > c), the performance bound is l_f3,1. Any solution with these levels of authentication and error-checking can achieve no higher level of performance than l_f3,1. In other words, we cannot implement error-checking and authentication for each relevant use case and still meet performance requirements.

4. Determine feasible solutions. Feasible solutions are determined by applying the Feasible predicate to each solution in the decision space, as shown in Table 5. Since there are only two levels for each factor, we can represent the results as a binary truth table.

5. Set priorities between factors. The weights for the factors are based on the importance of the requirements they represent. We used the following weights: a_1 = 0.25 (authentication), a_2 = 0.25 (error checking), and a_3 = 0.5 (performance). What is the meaning of these weights? Since we are using binary utility functions, it is easy to experiment. For instance, given these weights, a solution that meets performance requirements but does not implement error-checking or authentication is equally preferred to one that implements error-checking and authentication but fails to meet performance. We can also see that a solution implementing error-checking and meeting performance is equally preferred to one that implements authentication instead and still meets performance.

6. Determine optimal solution. Using Eq. 4 we calculate the utility of each feasible solution (Table 6). The two infeasible solutions were excluded from the table. In this case, the best option is to implement authentication (u(l_f1) = 1) but not error-checking (u(l_f2) = 0), thus meeting performance requirements (u(l_f3) = 1).

f_1   f_2   f_3   load_total      Feasible(f_1, f_2, f_3)?
0     0     0     3.8 · 10^12     T
0     0     1     3.8 · 10^12     T
0     1     0     3.8 · 10^12     T
0     1     1     2.3 · 10^13     F
1     0     0     3.8 · 10^12     T
1     0     1     3.8 · 10^12     T
1     1     0     3.8 · 10^12     T
1     1     1     2.3 · 10^13     F

Table 5: The solution space for the tradeoffs problem, with infeasible solutions marked false. A zero represents the lower level for a factor, and a 1 the higher level. Recall that Feasible(l_f1, l_f2, l_f3) = pmax(l_f1, l_f2) ⪰ l_f3.

f_1   f_2   f_3   Utility
0     0     0     0
0     0     1     0.5
0     1     0     0.25
1     0     0     0.25
1     0     1     0.75
1     1     0     0.5

Table 6: The utility of each feasible solution.
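A short script can reproduce this feasibility filtering and scoring; this is a sketch rather than the authors' spreadsheet or Algorithm 1 tooling, and it takes the load totals directly from Table 5 instead of recomputing them from the operational profile.

```python
from itertools import product

# Load totals as listed in Table 5 (load units), indexed by (f1, f2, f3).
load_total = {
    (0, 0, 0): 3.8e12, (0, 0, 1): 3.8e12, (0, 1, 0): 3.8e12, (0, 1, 1): 2.3e13,
    (1, 0, 0): 3.8e12, (1, 0, 1): 3.8e12, (1, 1, 0): 3.8e12, (1, 1, 1): 2.3e13,
}
c = 5.2e12                       # system load capacity
a = (0.25, 0.25, 0.5)            # weights: authentication, error checking, performance

feasible_solutions = []
for sol in product((0, 1), repeat=3):
    f1, f2, f3 = sol
    p_max = 0 if load_total[sol] > c else 1          # Algorithm 1
    if p_max >= f3:                                  # Feasible(f1, f2, f3)
        utility = a[0] * f1 + a[1] * f2 + a[2] * f3  # Eq. 4 with binary utilities
        feasible_solutions.append((utility, sol))

for utility, sol in sorted(feasible_solutions, reverse=True):
    print(sol, utility)          # best: (1, 0, 1) with utility 0.75
```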

5 Traffic Control Case Study

Our second case study illustrates a different way to apply the method. We use middleware configurations instead of aspects to represent different levels of fulfillment for requirements. Enabling more sophisticated security and fault-tolerance options has the disadvantage of incurring higher overhead, creating tradeoffs against performance. We investigate these tradeoffs for a system for controlling traffic lights.

Many believe that strategic control of traffic signals can reduce congestion and pollution in urban environments. During periods of low traffic, lights should be timed to minimize the average wait time at intersections and maximize bandwidth. In oversaturated traffic, the timing strategy should be modified to increase serviceability (reduce queue length) and manage critical sections (mitigate congestion) [32]. Because the optimal timing strategy depends on traffic conditions, engineers have devised fully responsive real-time systems for traffic control.

These systems use a sensor network to monitor traffic conditions at each intersection. The results are used to tweak timing parameters periodically so the timing plan evolves as traffic conditions change. Our case study analyzes a design along this paradigm. Based on the system described in [39], we sketched the architecture shown in Figure 6.

[Figure 6: System Architecture. Intersection controllers, each with an attached Machine Vision Processor (MVP), are connected by a token-ring network to a master controller, which communicates with a terminal at the traffic control center.]

Each intersection has a dedicated microcontroller for controlling lights and gathering traffic data. In turn, each microcontroller is interfaced to a Machine Vision Processor (MVP) which interprets images from several cameras to determine traffic parameters [33]. The microcontrollers are part of a network overseen by a single master controller responsible for calculating the global timing plan based on data from each intersection. The master controller also communicates with a remote terminal at a traffic control center to display the status of the system.

Each microcontroller has three tasks to perform: (1) monitoring the amount of traffic through an intersection, (2) calculating a local timing plan, and (3) controlling the traffic lights. A distributed systems approach is a natural fit because the information needed to calculate the timing plan is scattered across numerous controllers.

Our design uses MicroQosCORBA [29] to support the need for distributed objects. MicroQosCORBA is a middleware framework designed for embedded systems, with tailorable support for quality-of-service properties such as fault-tolerance, security, and timeliness. This is ideal for our analysis, since different levels of security and fault-tolerance can be represented by enabling different middleware options. Once again we apply the steps explained in Section 3.

1. Define decision factors. As before, the competing concerns are security, fault-tolerance, and performance. We divide security into authentication (f_1) and encryption (f_2). Fault-tolerance refers to data integrity (f_3). Performance is modeled in terms of the processor utilization needed for each task. Experimenting with the amount of time spent monitoring (f_4) vs. calculating (f_5) could improve the management of traffic; however, a constant amount of time must be reserved for controlling the lights, since a missed deadline for changing the color of a light cannot be tolerated. Since no tradeoffs can be made with the time reserved for controlling lights, we treat this time as a constraint rather than a decision factor.

2. Determine value levels for factors. MicroQosCORBA allows the developer to choose which mechanisms will be used to perform authentication, encryption, and integrity checking on messages passed over the network. The options provided become discrete levels for f_1 ... f_3 (Table 7).

Factor                 Level     Utility   Technique
Authentication (f_1)   l_f1,1    0         None
                       l_f1,2    1         Shared secrets
Encryption (f_2)       l_f2,1    0         None
                       l_f2,2    0.7       AES-128
                       l_f2,3    0.7       DES-56
                       l_f2,4    1.0       AES-256
Data Integrity (f_3)   l_f3,1    0         None
                       l_f3,2    0.5       Parity
                       l_f3,3    0.5       MD5
                       l_f3,4    0.5       SHA1
                       l_f3,5    1.0       SHA2-512

Table 7: Levels for authentication, encryption, and data integrity are mapped to utilities.

Levels of processor utilization for factors f_4 and f_5 are defined along a continuous domain using exponential functions. In both cases, utility increases at a decreasing rate with more processor utilization (Figure 7). We justified this as follows: (1) Calculation of the optimal timing plan is computationally very hard [32]. In practice, a sub-optimal plan must be used, so it is possible that a better plan could be calculated given more processing time.

(2) The task that monitors traffic conditions takes samples from the MVP. If more samples produce a closer depiction of the traffic conditions, then the monitoring task also improves with more processing time.

[Figure 7: Utility functions for the performance of the monitoring and calculation tasks, measured as processor utilization (0–100%); both curves rise steeply at low utilization and level off toward a utility of 1.]

3. Develop a model for constraints on factors. Our goal is to define the predicate Feasible(l_f1 ... l_f5). Two conditions must be met for a solution to be feasible:

1. The monitoring and calculation tasks must have enough processor time to do a minimum amount of work: l_f4 > monitor_min and l_f5 > calc_min.
2. The combined processor time used for all three tasks cannot exceed 100%: l_f4 + l_f5 + control_min ≤ 100%.

Given these constraints, the Feasible predicate can be defined:

Feasible(l_f1 ... l_f5) = (l_f4 > monitor_min) ∧ (l_f5 > calc_min) ∧ (l_f4 + l_f5 + control_min ≤ 100%)    (7)

To complete this definition, we just need to specify values for monitor_min, calc_min, and control_min. These lower bounds are not constants; they depend on the levels of authentication, encryption, and data integrity. For example, the monitoring task needs to take a minimum number of samples in a given time period; if messages between the microcontroller and the MVP take longer due to extensive encryption and integrity checking, then the time required to take a sample increases. Thus, monitor_min depends on the levels of f_1 ... f_3 and can be written as monitor_min(l_f1 ... l_f3). Similar arguments can be made for the control and calculation tasks.
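A minimal sketch of the constraint model in Eq. 7 follows; it is illustrative only, and the way the lower bounds scale with the per-message delay, as well as the base bounds and control reserve, are assumptions rather than the paper's calibration.

```python
# Hypothetical per-message overheads (ms) by chosen level, echoing Table 8's structure.
AUTH_MS       = {"none": 0.0, "shared_secrets": 10.0}
ENCRYPT_MS    = {"none": 0.0, "aes128": 1.5, "des56": 2.4, "aes256": 1.9}
INTEGRITY_MS  = {"none": 0.0, "parity": 0.31, "md5": 2.4, "sha1": 2.9, "sha2_512": 115.0}
NW_LATENCY_MS = 3.8

def msg_delay_ms(auth, enc, integ):
    """Eq. 8: network latency plus the overhead of each enabled mechanism."""
    return NW_LATENCY_MS + AUTH_MS[auth] + ENCRYPT_MS[enc] + INTEGRITY_MS[integ]

def feasible(auth, enc, integ, monitor_util, calc_util,
             control_min=5.0, base_monitor_min=10.0, base_calc_min=20.0):
    """Eq. 7, with assumed lower bounds that grow with the per-message delay."""
    # Assumption: minimum utilizations scale with message delay (slower messages
    # mean each sample or plan exchange costs more processor time).
    scale = msg_delay_ms(auth, enc, integ) / NW_LATENCY_MS
    monitor_min = base_monitor_min * scale
    calc_min = base_calc_min * scale
    return (monitor_util > monitor_min and
            calc_util > calc_min and
            monitor_util + calc_util + control_min <= 100.0)

print(feasible("shared_secrets", "aes256", "sha2_512", 30, 60))
print(feasible("none", "none", "none", 30, 60))
```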

The process for defining these three lower bounds based on l_f1 ... l_f3 is a little different from the process used to define pmax in the sensor network example. Changes in the levels of security and fault-tolerance do not affect the workload units, because we are no longer weaving error-checking and authentication functions into the primary design; additional functionality to support these concerns is handled in the background by MicroQosCORBA. Instead, we use experimental data collected for MicroQosCORBA to estimate the overhead delay on remote messages caused by the levels chosen for f_1 ... f_3 [30]. This delay is factored into the performance functions that map the execution of each message path to a time delay. There are six steps, shown below.

(1) Define workload units. The workload units in this example correspond directly to the monitoring (W_1), calculation (W_2), and control (W_3) tasks. We do not need to further decompose these workload units to accommodate failed authentication, encryption, and data integrity (as in the sensor network example) because the control flow for doing this is handled by the middleware layer.

(2) Map workload units to message paths. Each message path can be directly derived from a sequence diagram. The mapping is straightforward with one exception; we need to mark methods that are invoked remotely, because a remote invocation takes longer than a local one. In the sequence diagrams for each task (Figures 8 and 9), remote calls are represented by a slanted arrow which indicates the passage of time. In our message paths, we use a subscript r to denote remote method invocations.

Λ_1 = TakePicture() → ProcessImage() → SetStats()_r
Λ_2 = CalcLocalOpt() → SetLocalPlan()_r → CalcGlobalOpt() → SetPlan()_r → Synchronize()_r
Λ_3 = SetState()_r → UpdateSigState()_r → UpdateState()_r → UpdateState()_r → DisplayState()

(3) Associate size parameters with workload units. The traffic control software operates as a steady-state system; lights are controlled and intersections are monitored on a continuous basis. A new timing plan is calculated periodically, regardless of traffic saturation. This simplifies our analysis: workload units are not associated with measures of "low" and "high" size.

(4) Derive performance functions for message paths from levels of factors l_f1 ... l_f3. In this step we determine the time it takes to execute each message path Λ_1 ... Λ_3. T(Λ_i) is the delay in ms that results from executing the i-th message path. Each message path includes remote method invocations, which can vary in cost depending on the levels of authentication, encryption, and integrity checking. Thus, the amount of delay indicated by T(Λ_i) depends on l_f1 ... l_f3. Eq. 8 represents the overhead incurred by a remote method call.

[Figure 8: Cycle length calculation task (left: CalcLocalOpt(), SetLocalPlan(), CalcGlobalOpt(), SetPlan(), Synchronize() between slave and master) and monitoring task (right: TakePicture(), ProcessImage(), SetStats() between slave and MVP).]

[Figure 9: Control task: SetState(), UpdateSigState(), UpdateState(), UpdateState(), DisplayState() across slave, signal, intersection, master, and terminal.]

T_msg(l_f1, l_f2, l_f3) = nwLatency + T(l_f1) + T(l_f2) + T(l_f3)    (8)

nwLatency = 3.8 ms is a constant representing the average network latency. The functions T(l_fi) represent the overhead penalty for implementing a given level of the i-th factor. These overhead costs are shown in Table 8. The value of T_msg increases with higher levels for f_1 ... f_3.

The load for a message path is calculated by summing the load for each method in the path, as in the sensor network example. First we estimate the relative load for the significant method calls (we exclude methods that have negligible load):

L(TakePicture)   = 500
L(ProcessImage)  = 2000
L(CalcLocalOpt)  = 10000
L(CalcGlobalOpt) = 1000

Factor                 Level     Technique        Overhead (ms)
Authentication (f_1)   l_f1,1    None             0
                       l_f1,2    Shared secrets   10
Encryption (f_2)       l_f2,1    None             0
                       l_f2,2    AES-128          1.5
                       l_f2,3    DES-56           2.4
                       l_f2,4    AES-256          1.9
Data Integrity (f_3)   l_f3,1    None             0
                       l_f3,2    Parity           0.31
                       l_f3,3    MD5              2.4
                       l_f3,4    SHA1             2.9
                       l_f3,5    SHA2-512         115

Table 8: Overhead cost on remote messages for implementing various levels of authentication, encryption, and integrity checking.

Using the message paths defined in step 2, we can define L(Λ_i), i = 1...3. The load for Λ_3 is zero because it does not contain any method invocations that require a significant amount of load.

L(Λ_1) = L(TakePicture) + L(ProcessImage)
L(Λ_2) = L(CalcLocalOpt) + L(CalcGlobalOpt)
L(Λ_3) = 0

Next we integrate the remote message delay with the load for executing each method. First we define a constant ρ = 0.001 s/unit that converts load units to units of time. The definition of T(Λ_i) for each Λ_i can be derived by counting the number of remote method calls in each message path:

T(Λ_1) = T_msg + ρ L(Λ_1)
T(Λ_2) = 3 · T_msg + ρ L(Λ_2)
T(Λ_3) = 4 · T_msg + ρ L(Λ_3)

If we enabled the highest levels of authentication, encryption, and integrity checking, then the delay functions would have the values T(Λ_1) = 2.53 s, T(Λ_2) = 11.08 s, and T(Λ_3) = 0.11 s. If all of these features were disabled, then T(Λ_1) = 2.50 s, T(Λ_2) = 11.01 s, and T(Λ_3) = 0.015 s.
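A small sketch of the delay model (Eq. 8 and the T(Λ_i) definitions above); the only assumption is packaging the Table 8 overheads and the load estimates into Python, and the disabled-mechanisms case reproduces the 2.50 s / 11.01 s / 0.015 s figures quoted above.

```python
NW_LATENCY = 3.8          # ms
RHO = 0.001               # seconds per load unit

# Overheads in ms, from Table 8.
AUTH  = {"none": 0, "shared_secrets": 10}
ENC   = {"none": 0, "aes128": 1.5, "des56": 2.4, "aes256": 1.9}
INTEG = {"none": 0, "parity": 0.31, "md5": 2.4, "sha1": 2.9, "sha2_512": 115}

# Path loads in load units (L(Lambda_i)) and remote calls per path.
PATH_LOAD    = {1: 500 + 2000, 2: 10000 + 1000, 3: 0}
REMOTE_CALLS = {1: 1, 2: 3, 3: 4}

def t_msg(auth, enc, integ):
    """Eq. 8: per-remote-call delay, converted to seconds."""
    return (NW_LATENCY + AUTH[auth] + ENC[enc] + INTEG[integ]) / 1000.0

def t_path(i, auth, enc, integ):
    """T(Lambda_i) = (#remote calls) * T_msg + rho * L(Lambda_i), in seconds."""
    return REMOTE_CALLS[i] * t_msg(auth, enc, integ) + RHO * PATH_LOAD[i]

for i in (1, 2, 3):
    print(i, round(t_path(i, "none", "none", "none"), 3))   # about 2.504, 11.011, 0.015
```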

(5) Determine operational profile. We specify the frequency of invocation q_1 ... q_3 for each workload unit W_1 ... W_3. α values do not need to be specified because we are not using size parameters. The frequency of invocation for the monitoring and control tasks is proportional to the number of lights in the system. The control task runs every time a light changes color, which happens three times to make a complete cycle:

q_3 = 3 · nLights / minCycleTime    (9)

where nLights = 40 is the number of traffic lights in the system, and minCycleTime = 60 s is the shortest possible cycle time (in the US, a cycle is the period of time over which each light in the system goes through a complete red-green-yellow transition). The monitoring task needs to take samples periodically. If samples need to be taken once every 5 seconds at minimum (12 times per minute), then the frequency of the monitoring workload unit is:

q_1 = 12 · nLights / minCycleTime    (10)

Calculation of the timing plan is done at each intersection once a cycle, which yields the equation:

q_2 = nIntersections / minCycleTime    (11)

where nIntersections = 10 is the maximum number of intersections in the system.

(6) Calculate lower bounds for tasks. The minimum utilization for each task is calculated by multiplying the delay of each message path by its frequency of invocation:

monitor_min = q_1 · T(Λ_1)
calc_min    = q_2 · T(Λ_2)
control_min = q_3 · T(Λ_3)

The units of each lower bound are "seconds spent executing" over "seconds in a cycle". We interpret this ratio directly as processor utilization.

4. Determine feasible solutions. Levels of authentication, encryption, and integrity checking are specified on discrete levels. To simplify the optimization problem, we take discrete samples of the exponential utility functions for processor utilization. The total number of solutions in the decision space is finite and can be calculated by multiplying the number of alternatives for each factor. If we use eleven discrete intervals to represent monitoring and calculation utilization (0–100 in steps of 10), then there are 52,240 solutions to choose from. We wrote a small MATLAB program to apply the Feasible predicate to each solution. We found that 2,240 of these solutions were feasible.

5. Set priorities between factors. Table 9 shows the relative weight of each factor. Roughly half the utility is weighted on the optimization of processor utilization; the other half is distributed across the quality-of-service properties, with an emphasis on integrity checking.

Factors                         Weights
f_1: Authentication             a_1: 0.10
f_2: Encryption                 a_2: 0.20
f_3: Integrity                  a_3: 0.25
f_4: Monitoring Utilization     a_4: 0.20
f_5: Calculation Utilization    a_5: 0.25

Table 9: Factors in the tradeoffs problem, f_1 ... f_5, and their relative weights a_1 ... a_5.

6. Determine optimal solution. Using Eq. 4, we calculated the utility of the remaining feasible solutions. One advantage of using a weighted-scoring method is that utilities can be calculated programmatically (a necessity when evaluating thousands of solutions). The feasible solution with the highest utility is shown in Table 10.

Factor                    Chosen Level     Utility
Authentication            Shared secrets   1.0
Encryption                AES-256          0.8
Integrity                 SHA2-512         1.0
Monitoring Utilization    30%              0.95
Calculation Utilization   60%              0.95

Table 10: The optimal solution according to our choice of utilities and priorities. The utility column shows the value achieved for each individual factor.

The combined utility of this solution is 0.9375. The most preferred techniques for authentication, encryption, and data integrity were selected; these concerns did not compete with performance. Tradeoffs were made on the amount of processor utilization given to monitoring and calculation. Since the utility function for monitoring utilization was steeper, it was more advantageous to extend calculation utilization.

6 Practical Considerations

We presented a framework to systematically identify and analyze design choices and their impact on satisfying competing requirements. Within this framework, we presented methods for analyzing the impact of design decisions related to three types of design factors: security, fault tolerance, and performance. The analysis technique flags infeasible design choices (those that do not satisfy requirements) and ranks the remaining feasible design choices as to their value.


In practice, one would use such an approach when the value of design choices is not obvious, either because there are many choices or because the tradeoff between them is not clear, and when finding the right answer is important. As with any other systematic, quantitative approach, it must be possible to quantify key factors. In our case, we need to be able to measure the effect of design choices as well as their value with respect to fulfilling requirements. This may require extra modeling work. For example, we modeled the performance impact of our design choices by estimating load and the resulting delays. In other design situations, existing results might be used, or one might have to simulate the effect of design decisions via a test bed. An example of such a situation is described in [29, 30]: McKinnon et al. determine the effects of various Quality of Service levels for MicroQosCORBA on fault tolerance and performance. Others can use these results to make design decisions related to service levels for their designs. Relevant performance data may also be available from existing systems, making collection and organization of such data of practical importance [21]. It thus may not always be necessary to build detailed performance models if related results already exist.

There may be situations when some requirements and the preferences between them have not stabilized enough by the time key design decisions have to be made. In this case, it is important to be able to ask "what if" questions. To accommodate this, it is important to have as much of the analysis automated as possible, so that changes in priorities and changes in the value associated with achieving specific requirements levels can be evaluated quickly. We built a spreadsheet calculation tool for workload estimation and a tool using MATLAB functions to automate the evaluation and to easily accommodate sensitivity analysis for "what if" questions related to changes in requirements and requirements priorities.

Another practical consideration relates to an important assumption about the overall utility function used with the framework. The assumption is that the overall utility function is decomposable into the weighted sum of individual utilities. Under certain conditions [27] a multilinear utility function needs to be used. This does not change our basic framework; it merely means that a different form of utility function needs to be substituted in Eq. 4. For more detail on decomposable and multilinear utility functions see [27, 25]. Several more types of utility curves are presented in [12, 42]. [25] also proves that certain multilinear utility functions can be represented as the product of utility functions of the individual factors (pp. 238, 324). [27] explains that even when two factors interact, the overall utility function can still be measured by establishing unidimensional utility curves for each individual factor, which somewhat simplifies the practical use of utility functions. [27] also notes that independence of factor F1 from factor F2 does not imply the converse. The example given is from [20], who found in a study of a computing environment that the utility of response time was independent of availability. On the other hand, the utility of availability was not independent of response time: if response time is bad, availability is not critical.

The evaluation framework we proposed uses estimates of performance measures, which are necessarily associated with uncertainty. As development progresses, it will be possible to iteratively re-analyze and obtain more accurate data. Since our approach is automated, this is not difficult. The advantage of re-evaluating is that we may catch a situation where a solution starts becoming only borderline feasible with respect to the requirements, at a time when it is still possible to do something about it.

In summary, while the overall framework for our tradeoff analysis is fairly general, the specifics of each step may change. This applies to the constraint model (we used a workload model) as well as to the form of the utility function. The framework can be used during sensitivity analysis and for re-evaluation later in the development process. We would not recommend using this approach if the value of design choices is obvious, if it cannot be quantified, or if key competing requirements and priorities are still quite unstable (although that may also make design itself a questionable activity). We also think that automation (such as with our Matlab tool) is key, since manual analysis can become cumbersome.
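To make the re-evaluation point above concrete, the sketch below (again Python, with hypothetical requirement limits and delay estimates) shows how refined performance estimates can move an alternative from clearly feasible to borderline feasible, which is exactly the situation the iterative re-analysis is intended to catch early.

```python
# Illustrative sketch (hypothetical numbers): re-checking feasibility as
# performance estimates are refined, and flagging borderline alternatives.

REQUIRED_MAX_DELAY_MS = 150.0   # assumed performance requirement
BORDERLINE_MARGIN = 0.10        # flag anything within 10% of the limit

def classify(delay_estimate_ms):
    if delay_estimate_ms > REQUIRED_MAX_DELAY_MS:
        return "infeasible"
    if delay_estimate_ms > REQUIRED_MAX_DELAY_MS * (1.0 - BORDERLINE_MARGIN):
        return "borderline"
    return "feasible"

# Early vs. refined delay estimates for two hypothetical design alternatives.
estimates = {"A": {"early": 100.0, "refined": 142.0},
             "B": {"early": 120.0, "refined": 118.0}}

for name, est in estimates.items():
    print(name, "early:", classify(est["early"]), "| refined:", classify(est["refined"]))
# Alternative A moves from clearly feasible to borderline once estimates improve,
# signalling that it deserves attention while changes are still cheap.
```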

7 Conclusions

Designing software systems often involves decisions about tradeoffs between different requirements, such as functionality, performance, and security. In this paper we presented an approach to systematically evaluate these design choices and the tradeoffs involved in deciding between them. Specifically, we answered the two research questions posed in the introduction as follows: 1. Design information can be used to characterize design tradeoffs; we used design information from UML diagrams and from a workload model. 2. The value of design alternatives can be codified to rank them; we presented a systematic, automated analysis process. The framework has potential for evaluating other competing factors, provided the relationship between factors is understood well enough to determine when a given solution is infeasible. Specifically, factors need to be quantifiable and value functions need to be defined, so that equation (4) can be used. The constraint model will probably differ with each new tradeoff problem. In both examples our analysis helped to clarify tradeoff issues and to identify both feasible and infeasible solutions. The sensor network system forced tradeoffs between fault tolerance and performance. In the traffic control example, tradeoffs were related to processor utilization, because the processor was shared between two tasks. Some factors identified in each problem do not compete significantly with other concerns; constructing the model identified the ones that do.

While this framework was successfully used in the two examples, it needs to be further investigated and empirically evaluated. In addition, future work includes selecting representations for the workload properties, and for the linking between workload units and methods, that enable more efficient tool support for the calculations. We also intend to investigate more advanced performance models and to study the sensitivity of the results to uncertainties in the underlying estimates of cycle times and operational profiles.

Acknowledgement

We are grateful to the Fulbright Commission for making Prof. Runeson’s sabbatical visit to Washington State University possible, during which part of this work was performed. This research was partially supported by NSF award #CCR-0203285 and the Swedish Research Council under grant #6222004-552.

References

[1] A. Alsaadi, “A Performance Analysis Approach Based on the UML Class Diagram”, Proceedings of the 4th International Workshop on Software and Performance, January 14–16, Redwood Shores, CA, pp. 254–260, 2004.
[2] A. Andrews, P. Runeson, R. France, “Requirements Tradeoffs During UML Design”, Proceedings 11th International Conference on Engineering of Computer Based Systems (ECBS 2004), Brno, Czech Republic, May 24–27, pp. 282–291, 2004.
[3] A. Avritzer, J. Kondek, D. Liu, E. J. Weyuker, “Software Performance Testing Based on Workload Characterization”, Proceedings of the 3rd International Workshop on Software and Performance, July 24–26, Rome, Italy, pp. 17–24, 2002.
[4] S. Balsamo, A. D. Marco, P. Inverardi, “Model-based Performance Prediction in Software Development: A Survey”, IEEE Transactions on Software Engineering, 30(5):295–310, 2004.
[5] L. Bass, P. Clements, R. Kazman, Software Architecture in Practice, Addison Wesley Longman Inc., Massachusetts, USA, 1998.
[6] P. Berander, A. Andrews, in Engineering and Managing Software Requirements, Springer Verlag, 2005.
[7] T. Berling, P. Runeson, “Application of Factorial Design to Validation of System Performance”, Proceedings 7th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS), Edinburgh, Scotland, UK, April 3–7, pp. 318–326, 2000.
[8] T. Berling, P. Runeson, “Efficient Evaluation of Multi-Factor Dependent System Performance using Fractional Factorial Design”, IEEE Transactions on Software Engineering, 29(9):769–781, 2003.
[9] B. Boehm, P. Bose, E. Horowitz, M. Lee, “Software Requirements as Negotiated Win Conditions”, Proceedings 1st International Conference on Requirements Engineering, Colorado Springs, CO, pp. 74–83, 1994.
[10] P. Bogetoft, P. Pruzan, Planning with Multiple Criteria, North-Holland, New York, 1991.
[11] R. Darimont, E. Delor, P. Massonet, A. van Lamsweerde, “GRAIL/KAOS: An Environment for Goal-Driven Requirements Engineering”, Proceedings 19th International Conference on Software Engineering, May 17–23, pp. 612–613, 1997.
[12] A. Easton, Complex Managerial Decisions Involving Multiple Objectives, Wiley, 1973.
[13] M. Feather, “A Quantitative Risk-based Model for Reasoning over Critical System Properties”, Proceedings International Workshop on Requirements for High Assurance Systems, Essen, Germany, pp. 11–18, September 2002.
[14] M. Feather, S. Cornford, J. Dunphy, H. Hicks, “A Quantitative Risk Model for Early Lifecycle Decision Making”, Proceedings Integrated Design and Process Technology (IDPT-2002), June 23–28, Pasadena, CA, Society for Design and Process Science, 2002.
[15] M. Feather, S. Cornford, “Quantitative Risk-based Requirements Reasoning”, Requirements Engineering Journal, Springer Verlag, 8(4):248–265, 2003.
[16] M. Fowler, K. Scott, UML Distilled – A Brief Guide to the Standard Object Modeling Language, Addison-Wesley, 2nd edition, 1999.
[17] R. France, G. Georg, Modeling Fault Tolerant Concerns Using Aspects, Technical Report 02-102, Computer Science Department, Colorado State University, 2002.
[18] G. Georg, R. France, I. Ray, An Aspect-based Approach to Modeling Security Concerns, Technical Report 03-111, Computer Science Department, Colorado State University, 2003; also in Proceedings of the Workshop on Critical System Development with UML, Dresden, Germany, 2002.
[19] J. Gray, T. Bapty, S. Neema, J. Tuck, “Handling Crosscutting Constraints in Domain-specific Modeling”, Communications of the ACM, 44(10):87–93, October 2002.
[20] J. M. Grochow, “A Utility Theoretic Approach to Evaluation of a Time-Sharing System”, in W. Freiberger (ed.), Statistical Computer Performance Evaluation, Academic Press, 1972.
[21] E. Johansson, F. Wartenberg, “Proposal and Evaluation for Organizing and Using Available Data for Software Performance Estimations in Embedded Platform Development”, Proceedings 10th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 2004), May 25–28, pp. 156–163, 2004.
[22] Y. Akao, Quality Function Deployment, Productivity Press, Cambridge, MA, 1990.
[23] J. Karlsson, K. Ryan, “A Cost-Value Approach for Prioritizing Requirements”, IEEE Software, 14(5):67–74, 1997.
[24] R. Kazman, M. Klein, P. Clements, “ATAM: Method for Architecture Evaluation”, Technical Report CMU/SEI-2000-TR-004, Carnegie Mellon University – Software Engineering Institute, August 2000.
[25] R. Keeney, H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, John Wiley & Sons, New York, 1976.
[26] G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C. V. Lopes, J.-M. Loingtier, J. Irwin, “Aspect-oriented Programming”, Proceedings of the European Conference on Object-Oriented Programming (ECOOP 97), Lecture Notes in Computer Science, vol. 1241, pp. 220–242, June 1997.
[27] J. P. C. Kleijnen, Computers and Profits: Quantifying Financial Benefits of Information, Addison-Wesley, 1980.
[28] X. F. Liu, J. Yen, “An Analytic Framework for Specifying and Analyzing Imprecise Requirements”, Proceedings International Conference on Software Engineering (ICSE 18), March 25–30, Berlin, Germany, pp. 60–69, 1996.
[29] A. D. McKinnon, K. Dorow, T. Damania, et al., “A Configurable Middleware Framework with Multiple Quality of Service Properties for Small Embedded Systems”, Second IEEE International Symposium on Network Computing and Applications, April 16–18, pp. 197–204, 2003.
[30] A. D. McKinnon, Supporting Fine-grained Configurability with Multiple Quality of Service Properties in Middleware for Embedded Systems, Ph.D. dissertation, Washington State University, 2003.
[31] J. Mylopoulos, L. Chung, S. Liao, H. Wang, E. Yu, “Exploring Alternatives during Requirements Analysis”, IEEE Software, 18(1):92–96, 2001.
[32] OECD Road Research Group, Traffic Control in Saturated Situations, OECD, 1981.
[33] D. P. Panda, “An Integrated Video Sensor Design for Traffic Management and Control”, Third International Multiconference on Circuits, Systems, Communications, and Computers (IMACS IEEE CSCC), July 4–8, Athens, Greece, pp. 176–185, 1999.
[34] D. Petriu, M. Woodside, “Analysing Software Requirements Specifications for Performance”, Proceedings of the 3rd International Workshop on Software and Performance, July 24–26, Rome, Italy, pp. 1–9, 2002.
[35] T. Saaty, The Analytic Hierarchy Process, McGraw-Hill, 1980.
[36] M. Svahnberg, C. Wohlin, L. Lundberg, M. Mattsson, “A Quality-Driven Decision Support Method for Identifying Software Architecture Candidates”, International Journal of Software Engineering and Knowledge Engineering, 13(5):547–573, 2003.
[37] M. Shepperd, M. Cartwright, “Predicting with Sparse Data”, IEEE Transactions on Software Engineering, 27(11):987–999, November 2001.
[38] C. U. Smith, L. G. Williams, Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software, Addison-Wesley, Pearson Education, 2002.
[39] K. Tavladakis, N. C. Voulgaris, “Development of an Autonomous Adaptive Traffic Control System”, Proceedings of European Symposium on Intelligent Techniques, June 3–4, Crete, Greece, 1999.
[40] A. von Mayrhauser, Software Engineering: Methods and Management, Academic Press, 1990.
[41] J. Yen, W. A. Tiao, “Systematic Tradeoff Analysis for Conflicting Imprecise Requirements”, Proceedings Third IEEE Symposium on Requirements Engineering, January 6–10, IEEE CS Press, pp. 87–96, 1997.
[42] M. Zeleny, “The Attribute-Dynamic Attitude Model (ADAM)”, Management Science, 23(1):12–26, 1976.

