A Near Real-Time System for Security Assurance Assessment





A Near Real-Time System for Security Assurance Assessment*

Nguyen Pham, Loic Baud, Patrick Bellot, Michel Riguidel
Computer Science and Networking Department (INFRES), Institut TELECOM, Telecom ParisTech (ENST), Paris, France

Abstract—Building systems that are guaranteed to be secure, or to remain secure over time, is still an unachievable goal. A tool that helps to determine the security assurance level of a system is therefore vital in order to maintain and improve overall security. This paper introduces our system for assessing the overall security assurance of a large, networked, IT-driven system by means of a dedicated evaluation infrastructure based on multi-agent technology. We use an attack graph approach to compute an attackability metric value, and we define other metrics for anomaly detection, thereby assessing both the static and dynamic visions of the system under study. The implemented software system is described, and examples of experiments for evaluating network component, sub-network, and network security assurance levels are considered.

Index Terms—security assurance, assessment, security assurance evaluation.


I. INTRODUCTION

Until now, building systems that are guaranteed to be secure, or to remain secure over time, has remained an unachievable goal. Hence, the security of information systems has naturally become a crucial concern for governments and worldwide companies. Security assurance (SA) is defined as the objective confidence that an entity meets its security requirements [2], [4]. This confidence is based on specific evidence provided by the application of assurance techniques such as formal methods, testing, or third-party reviews. Currently, there is no framework to assess and manage the SA level of a large, complex system. Existing methodologies such as ISO 17799, ISO 15408 [4] and, recently, ISO 27004 are not applicable to large deployed systems at network scale; they focus more on organizations or small systems. Some specific tools exist for detecting weaknesses such as open ports and vulnerabilities [15], for example, but they do not take into account combinations and compositions of different elements. Recent advances in security evaluation using attack graphs [7][8][9][11] suggest a promising approach to determine the security level of a system in terms of detected vulnerabilities and the relations between them. However, existing work focuses only on the static

* This work has been granted through the IST FP6 DESEREC Project (CN: 026600) of the European Union.

vision, i.e., the description of the system under study as implemented, and does not take into account the system's dynamic behavior. A tool capable of verifying the SA level of a system in a near real-time manner is therefore vital in order to maintain and improve overall security in day-to-day operation. Such a tool must take into account both the static and dynamic visions of the evaluated system. We introduce in this paper a solution to this urgent demand: an additional evaluation infrastructure based on multi-agent technology. We use an attack graph approach to compute an attackability metric value (the likelihood that an entity will be successfully attacked) for static evaluation and define other metrics for anomaly detection to assess the dynamic view of the evaluated system. The actual SA value of a system entity is a combined value reflecting these two visions. This work extends our previous contributions [17], in which the relations among vulnerabilities were omitted.

The rest of this paper is organized as follows. In Section II, we outline related work in the field. The different steps of the whole SA assessment process are discussed in Section III. We introduce in Section IV the application of attack graphs to compute an attackability metric for static evaluation. Next, Section V describes the implemented SA assessment system along with the identified metrics and some experimental results. Finally, some concluding remarks are presented in Section VI.

II. RELATED WORK

Existing work can be classified into three major categories: product/system evaluation, evaluation based on tree-based metrics taxonomies, and evaluation based on modeling techniques. The first category, conducted by human evaluators, uses standards such as the Common Criteria (CC) [4] to evaluate the security functionalities of products. This approach is time consuming and cannot be directly applied to continuous assessment.
Furthermore, this standard focuses on the generic product integration level and not on system integration. It does not provide appropriate means to capture the emergent effects of the composed entities. An effort to apply the CC for automated evaluation is described in [6]. However, the security strength of a system entity measured by this work does not take into account the effects of interactions with other entities. Hence, the proposed method is not practical in an operational environment.

Using metrics taxonomies is the main idea of the work in the second category. An early attempt introduced in [1] divides the metric space into three non-disjoint sub-spaces: organizational, technical, and operational. The taxonomy described in [24] takes the work of [1] one step further by suggesting sub-groups within the main metrics groups, but no measurable metrics are presented. A comprehensive taxonomy is suggested in [22][23], based on a management, technical, and organizational classification, with detailed examples of metrics and means to assign a value to each of these metrics. In spite of that, these metrics are applied only over long periods (e.g., a month or half a year), and therefore no real-time evaluation seems to be possible. In [14][21], the authors define a taxonomy for network systems and suggest a linear framework to obtain a single value representing the information assurance level of the system in question based on weighted-sum operators. Since only one predefined weight is assigned to each metric to reflect its relative importance, this approach is not flexible enough to represent real systems, where the importance of each metric evolves over time. In a recent book [12], the author describes seventy-five different strategic metrics that organizations use to assess their security posture, diagnose issues, and measure the security activities associated with their infrastructure. However, as with other contributions in this category, a general problem is the lack of modeling techniques, which are crucial for building a tool that evaluates the assurance of a system.

The existing work in the last category decomposes the real system into entities of concern and models the interactions between these entities in different ways, such as system devices and access paths [15], network components with physical and logical connections [5], or security contributing factors and functional relationships [28].
While this is interesting and offers different views to study the system under evaluation, these approaches often rely on a fixed formula to determine system-wide values. Thus, they are not applicable to non-linear systems. In [28] and [25], some functional relationships are presented, but emergent effects are not considered. Recent advances in security evaluation using attack graphs, notably the work described in [7][8][9][11][26][27], suggest a promising approach to determine the security level of a system in terms of detected vulnerabilities and the relations between them. However, current efforts focus only on the static vision, i.e., the description of the system under study as implemented, and do not take into account the dynamic vision. Computational complexity is another problem to consider when applying attack-graph-based evaluation to a large, complex system. We adapt the work presented in [7][9][11] which, to our knowledge, is the only approach scaling nearly linearly as the size of a typical network increases. However, we concentrate not only on the complexity of graph generation but also on the useful information that can be extracted directly from the obtained attack graphs to compute an attackability metric value for network components and to give different recommendations. The advantages of the first two categories can be used to build an SA evaluation tool. For example, system entities can be

evaluated by the former, while the taxonomies in the second category are used to develop the sets of SA attributes that need to be evaluated.

III. SECURITY ASSURANCE ASSESSMENT

Given a complex system, direct measurement of SA attributes is desirable, but not always possible. In such a case, decomposition of the system under study into measurable parts is required. Aggregation methods are then needed as a backward process to combine all the measured SA values of entity attributes into system-wide values, taking into account the relations between these attributes. We distinguish five steps in the SA assessment process:
• Modeling: decompose the system into SA-relevant irreducible entities and capture SA-related properties of the system structure.
• Metrics assignment: assign metrics to each identified entity.
• Measurement: measure the SA level of irreducible entities by means of the selected metrics.
• Aggregation: combine the measured results to derive an SA level per entity and for the system.
• Interpretation: evaluate the assurance posture of the system based on the aggregated values and visualize the results.
A general discussion of these five steps can be found in [17]. This paper concentrates on the second to fourth steps of the above process; hence the other steps are only briefly examined. We define different metrics for the identified entities and describe the implemented instruments to obtain the values of these metrics. Then we apply the theoretical basis of the aggregation step introduced in [18] to determine the SA level of each entity at different levels of abstraction.

A. Modeling

Although there is a common objective to model the SA-relevant characteristics of systems, the model may be at different levels of abstraction. To actually enable the modeling of systems, where system elements can be modeled hierarchically, the set of possible system entities and relations has to be specified.
To limit complexity, we concentrate on service-related entities and classify the system entities along two dimensions: relevant/dedicated to assurance, and directly/indirectly observable. This classification preserves service functional and non-functional properties while ignoring irrelevant hardware and software parts [20]. Since the connections between identified entities are made by physical means such as cable, radio, or infrared, as discussed in [18], the interactions between the system entities are modeled as a set of logical relations between their SA attributes, according to the topology of the system under consideration. We use hierarchical model composition and refinement techniques to determine the SA value of the identified entities at different levels of abstraction. Each entity has its own SA attributes characterizing its SA posture. Output values from

one level are then used as parameters of the next higher level.

B. Metrics assignment

When SA is assessed, it is essential to be able to define what is actually meant and how to measure it. We consider a metric as a standard of measurement, or a function that describes how far one SA-related thing is from another. Researchers have highlighted [1][12][14] that a combination of various metrics must be used to quantify the assurance level of a system. A good metric should meet the following criteria: conceptually specific, quantitatively measurable, practically attainable, consistently measured without subjective criteria, and time-dependent. Any system can be viewed through one of two visions [20]: static and dynamic. The static vision characterizes the system as implemented, including technical, organizational, and individual aspects, aiming at an estimated level to which the system can be secured during operation. One can gain information about system vulnerabilities, for example, by taking into account the configuration of the deployed hardware and software and the relations between system components with respect to the selected architecture. The dynamic vision describes the system's characteristics when it is in use. This is the analysis of the balance between the load and the operation of available resources with respect to time. Correspondingly, we can identify two types of metrics, static and dynamic. The operational SA value of the system at a given time is a combination of the values of its static metrics and its dynamic ones at that time. The value of a static metric changes only when the configuration is changed or when a component is added to or removed from the system. Meanwhile, the dynamic metrics change over time. Since SA cannot be directly measured, its assessment must be based on factors that affect the SA level or on consequences that result from a certain SA level. With the static vision, both the factors and the consequences can be used.
Meanwhile, with the dynamic vision, we can only use the factors, since the focus is on prediction.

C. Measurement

Measurement involves three main steps: data collection, data validation, and data processing. Data collection consists in the definition of what to collect and how to collect the data. The kind of data to be collected is directly linked to the kind of behavior to be analyzed and to the quantitative measures to be evaluated to characterize such behavior. This links directly to the definition of metrics. Data validation consists in analyzing the collected data for correctness, consistency, and completeness. For example, in some cases Nessus [15] relies solely on the banner of scanned hosts to detect a vulnerability. Further examination of the evaluated host for the presence of the corresponding patches is required to determine whether the vulnerability really exists. Data processing consists in normalizing the validated data to evaluate quantitative measures that characterize the SA of the corresponding attribute.
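To make the data-processing step concrete, the following sketch performs min-max normalization of a raw measurement onto a [0, 1] scale, the range used for SA values in this paper. The worst/best bounds in the example are illustrative assumptions, not values from our test bed.

```python
def normalize(value, worst, best):
    """Map a raw measurement onto the [0, 1] SA scale: 0 means the SA
    attribute is absent, 1 means it is perfect."""
    if worst == best:
        raise ValueError("worst and best bounds must differ")
    score = (value - worst) / (best - worst)
    # Clamp so that out-of-range raw values still land in [0, 1].
    return max(0.0, min(1.0, score))

# Example: an SNMP-derived error rate where 500 errors/s is the worst
# acceptable level and 0 is perfect (bounds are hypothetical).
print(normalize(125, worst=500, best=0))  # 0.75
```

The clamping makes data from differently scaled sources comparable, which is exactly the role normalization plays in the aggregation that follows.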

Lowest-level SA data obtained from different sources, for example values obtained via the Simple Network Management Protocol (SNMP) or from special scripts, may be on different scales, in different ranges, and distributed differently. In other words, they may be totally incomparable. This is where normalization plays its role, making relevant data comparable across input systems. The normalized SA values described herein are in the range [0, 1], where 0 means that the SA attribute is not present in the entity and 1 means that it is perfect.

D. Aggregation

After the modeling, metrics assignment, and measurement steps, the obtained result is a hierarchical graph (Fig. 1) in which the nodes are SA attributes of corresponding entities and the edges are either emergent or composition relations between these entities. The emergent relations represent the dependencies between SA attributes that cannot be decomposed into relations of lower-level attributes. We distinguish two steps in calculating the SA value of a node in the attributes-dependency graph [18]: 1) calculate the composition value from its child nodes; 2) use the obtained result to determine the actual operational SA value by taking into account the emergent relations with other nodes.

Fig. 1. The attribute-dependency graph obtained from system decomposition. The rectangle, oval, and round shapes represent the SA attributes at different levels of decomposition.

From the values of the leaves of the attributes-dependency graph, the values of the nodes in the upper levels can be obtained by applying these two steps. That is, the basic idea of our aggregation approach is to abstract the different relations between system entity attributes as aggregation operators such as min, max, weighted-sum, etc. Along with the identification of the accurate attributes, finding the correct emergent relations between nodes, or the appropriate decomposition relations between nodes and their parent node in the attributes-dependency graph, is therefore crucial for choosing appropriate aggregation operators. Since the aggregation process must detect not only the

anomalies of individual attribute SA values (for example, the output packet rate is too high) but also the anomalies of correlated attribute values (e.g., the ratio of the number of input packet errors to the number of input packets), it takes the form of an anomaly detection process. More discussion of the aggregation process is available in our previous work [18].

E. Interpretation

Methods for system SA assessment are expected to provide aggregated results with SA values of the studied system. These could be the posture of the evaluated system with respect to three pre-defined postures (low, normal, high) or an n-dimensional chart describing the n considered metrics. In this work, we represent the SA posture of a system entity as a single real number in the range [0, 1].

IV. ATTACK GRAPHS FOR STATIC EVALUATION

As discussed in Section III.B, the static vision describes the system as implemented, aiming at an estimated level to which the system can be secured during operation. This level depends on the system's vulnerabilities, on the configurations of the deployed hardware and software, and especially on the relations between system components with respect to the selected architecture. Attack graphs provide a comprehensive solution that covers all of these aspects. They supplement vulnerability scanners and firewall and router rule-set evaluators with the missing information about relationships among vulnerabilities. Analyzing the correlated vulnerabilities thus provides a clear picture of what attacks might happen in a network and of their consequences. Attack graphs therefore allow us to consider potential attacks in a particular context relevant to the given network.
Although different definitions of attack graphs currently exist in the literature, we can consider an attack tree as a structure in which each possible exploit chain ends in a leaf state that satisfies the attacker's goals, and an attack graph as a consolidation of the attack tree in which some or all common states are merged [2]. As such, nodes and arcs in an attack graph represent actions the attacker takes and changes in the network state caused by these actions [10]. The goal of these actions is for the attacker to obtain normally restricted privileges on one or more network components, such as users' computers, routers, or firewalls, by taking advantage of vulnerabilities in software or communication protocols. In most cases, intermediate actions that compromise separate hosts are required in large attack graphs to reach the target host. We adapt the work introduced in [7] which, to our knowledge, is the only approach scaling nearly linearly as the size of a typical network increases. This is an important property when applying this approach to large, complex systems. However, as detailed later, the algorithm presented in [7] can miss some attack paths originating from hosts that cannot be reached from already compromised hosts. We provide a solution to overcome this problem. In addition, we concentrate not only on the complexity of graph generation but also on the useful information that can be extracted directly from the obtained

attack graphs to compute an attackability metric for network components and to give different recommendations based on the obtained information.

A. Definitions and Assumptions

We borrow some notions presented in [7]: each host has one or more interfaces, which have zero or more open ports accepting connections from other hosts. A port has zero or more vulnerability instances. Depending on each vulnerability instance, an attacker is able to obtain one of four access levels (vulnerability post-conditions) on a host: "root" or administrator access, "user" or guest access, "DoS" or denial-of-service, or "other", a confidentiality and/or integrity loss. The combination of a host and an access level is an attacker state. A state may provide the attacker with zero or more credentials, i.e., any information used for access control, such as passwords or private keys. A vulnerability instance may require zero or more credentials. Vulnerability locality (remotely or locally exploitable) and credentials serve as pre-conditions to the exploitation of a vulnerability instance. We also adopt the definition of the term "vulnerability" described in [7], which is broader than the conventional meaning. A vulnerability is considered as any way an attacker could gain access to a system, such as software flaws, trust relationships, and server misconfigurations. Therefore, a feature corresponding to a normal service or functionality, such as remote login with the use of a private key, is a vulnerability from the point of view of an attacker. Such features must be included in the construction of attack graphs because, although they are not intended for that purpose, they may help attackers escalate their privileges when combined with other vulnerabilities. This fact also implies that not all vulnerabilities can be removed when hardening a network, so measuring the relative SA of different configurations becomes important.
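The notions above (states, credentials, localities, and pre-conditions) can be sketched as simple data structures. This is only an illustration of the model borrowed from [7]; all names and the `can_exploit` helper are ours, not part of the cited work.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vulnerability:
    """Any way to gain access, in the broad sense used here: flaws,
    trust relationships, misconfigurations, or even normal features."""
    name: str
    locality: str                    # "remote" or "local"
    required_credentials: frozenset  # pre-condition credentials
    post_condition: str              # "root", "user", "DoS", or "other"

@dataclass(frozen=True)
class AttackerState:
    """Combination of a host and an access level; may carry credentials."""
    host: str
    access_level: str
    credentials: frozenset

def can_exploit(state, vuln, port_reachable):
    # Pre-conditions: a remote exploit needs a reachable port, and the
    # attacker must hold every required credential.
    if vuln.locality == "remote" and not port_reachable:
        return False
    return vuln.required_credentials <= state.credentials

# A normal feature counts as a vulnerability: remote login with a key.
ssh_login = Vulnerability("ssh-key-login", "remote",
                          frozenset({"alice-private-key"}), "user")
attacker = AttackerState("hostA", "user", frozenset({"alice-private-key"}))
print(can_exploit(attacker, ssh_login, port_reachable=True))  # True
```

The frozen dataclasses make states hashable, which is convenient when states later become graph nodes.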
If a vulnerability exists at a reachable vulnerable port, then we assume that the attacker can successfully exploit the vulnerability to its fullest extent. The reason for using such a worst-case attacker model is twofold. First, we do not need to model exploitation details as in [8][26][28], because a vulnerability can be exploited in different ways, by different scripts/tools. Second, this assumption prevents false negatives and requires no additional information about the potential threat [7]. Our approach relies on an explicit assumption of monotonicity, which, in essence, states that the pre-condition of a given vulnerability is never invalidated by the successful application of another vulnerability. In other words, the attacker never needs to backtrack. The monotonicity assumption is used in almost every other paper with a working prototype/tool, for example [7][8]; it greatly reduces the complexity of the analysis problem from exponential to polynomial, thereby bringing even very large networks within reach of analysis. An attack can only proceed to new victims that can be reached from compromised hosts; hence, critical for building attack graphs is a reachability analysis that determines which hosts are reachable from an already compromised host and

which communication ports can be used to connect to and compromise new vulnerable hosts [9]. Determining the reachability between hosts in a network requires analyzing which outbound interfaces of the currently evaluated host can reach which active inbound ports of all other hosts, with respect to the network's topology, routing, and filtering policies, to decide whether a logical connection is possible. As a consequence, reachability determination requires downloading and analyzing the configuration files of different filtering devices such as firewalls, routers, switches, and personal firewalls, and also the configuration files of virtual private networks [10]. Although this is a real practical problem, we assume in this work that the reachability information of the evaluated system is known.
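Once the filtering configurations have been distilled into allow rules, the reachability relation itself is a straightforward computation. The sketch below assumes rules have already been reduced to (source, destination, port) triples; this flat rule format is our simplification, not a format used by any particular device.

```python
def build_reachability(hosts, allow_rules):
    """Compute which (source, target) host pairs can communicate, and on
    which ports. allow_rules is a set of (src, dst, port) triples distilled
    from firewall/router/VPN configurations (format assumed here)."""
    reach = {}
    for src in hosts:
        for dst in hosts:
            if src == dst:
                continue
            # Collect every port the filtering policy leaves open src -> dst.
            ports = {p for (s, d, p) in allow_rules if s == src and d == dst}
            if ports:
                reach[(src, dst)] = ports
    return reach

# Illustrative rules: the firewall permits C and D to reach E on port 22.
rules = {("C", "E", 22), ("D", "E", 22), ("A", "B", 80)}
print(build_reachability({"A", "B", "C", "D", "E"}, rules))
```

A real implementation would derive `allow_rules` from device configurations; the quadratic pair enumeration above is acceptable because it runs once per configuration change, not per assessment cycle.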

Fig. 2. Sample network.

B. Building Attack Graph

Taking the example of the simple network illustrated in Fig. 2, A is the starting position of the attacker. The other hosts have a single open port and, except for G, a single remotely exploitable vulnerability. The firewall FW permits C and D to reach E and drops all other traffic. G has a personal firewall preventing other hosts from reaching it, but it can reach F. The attacker from host A can directly compromise hosts B, C, and D. From C or D, the attacker can traverse the firewall and compromise host E. From E, the attacker can compromise F, completing the process. Part of the full graph for the sample network is shown on the left side of Fig. 3 (the children of the B and C nodes at the top level are not shown in the figure), in which nodes are states and edges correspond to vulnerability instances. As discussed in [7], full graphs contain much redundant information (e.g., the sub-trees from C to E and to F are repeated twice in the example illustrated in Fig. 3), and hence the computational complexity quickly becomes challenging as the network size increases. In addition, full graphs depend heavily on the starting position of the attacker. With the illustrated example, internal attacks originating from G to E cannot be represented. Some other types of attack graphs, such as predictive graphs [11], avoid much of the redundant structure of the full graph and are much faster to build, but they still include redundant structure in some cases [9] and are unable to model multiple pre-conditions of vulnerabilities.

Fig. 3. Full graph (left) and MP graph (right) for sample network.

Fig. 4. Pseudo code for MP graph generation [7].

The multi-prerequisite graph (MP graph) presented in [7] for the example network is shown on the right side of Fig. 3. It has contentless edges and three types of nodes: state nodes (circles), pre-condition nodes (rectangles), and vulnerability instance nodes (triangles). In the example used, the only pre-condition is reachability. Outbound edges from state nodes can go only to pre-condition nodes, which in turn can point only to vulnerability instance nodes; finally, outbound edges from vulnerability instance nodes can go only to state nodes. This ordering reflects the fact that a state node provides post-conditions, which are used as pre-conditions to exploit vulnerability instances, which in turn provide more states to the attacker. MP graphs are built using a breadth-first technique in which no node is explored more than once, and a node only appears in the graph if the attacker can successfully obtain it. The pseudo-code for graph generation is shown in Fig. 4. A weak point of the MP graph is that it cannot represent all possible positions of the attacker. In other words, G is not accounted for in the constructed graph: since the algorithm described in Fig. 4 considers only the vulnerable nodes that can be reached from the currently evaluated node, it ignores the case in which an internal attacker starting at G can compromise E. We use the MP graph as an intermediate step to build the

host-based attack graph shown in Fig. 5, in which the edges are vulnerability instances and the nodes are the corresponding hosts. The above limitation of the MP graph can be solved by first building the graph with an attacker position outside the considered network (external attacker). The second step involves examining all the hosts that have not been explored by the algorithm shown in Fig. 4 and building MP graphs from these hosts until all edges to the hosts already explored in the first step have been taken into consideration. The host-based graph therefore really shows all hosts that can be compromised from any host the attacker has compromised. We can take advantage of this property to generate "all sources, all targets" host-based graphs, showing every malicious action that could take place from any host or attacker starting location to any host in the network.
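A hedged sketch of this two-step construction follows. Vulnerability exploitation is collapsed here into a single "reachable and vulnerable implies compromised" test (the worst-case attacker model), and the example reachability sets are illustrative, loosely modeled on Fig. 2 rather than taken from it.

```python
from collections import deque

def bfs_compromised(start, reach, vulnerable, edges):
    """Breadth-first exploration from one starting position: any reachable
    vulnerable host is assumed compromised to its fullest extent."""
    explored, queue = {start}, deque([start])
    while queue:
        src = queue.popleft()
        for dst in reach.get(src, ()):
            if dst in vulnerable:
                edges.add((src, dst))          # one exploitable attack path
                if dst not in explored:
                    explored.add(dst)
                    queue.append(dst)
    return explored

def host_based_graph(hosts, reach, vulnerable, attacker="external"):
    """Two-step construction: explore from an attacker outside the network,
    then re-run the exploration from every host the first pass never
    reached, so attack paths from internal-only hosts are kept."""
    edges = set()
    explored = bfs_compromised(attacker, reach, vulnerable, edges)
    for h in hosts - explored:
        explored |= bfs_compromised(h, reach, vulnerable, edges)
    return edges

# Illustrative topology: the external attacker reaches A; G is unreachable
# from everyone but can itself reach F (reachability sets are assumptions).
reach = {"external": {"A"}, "A": {"B", "C", "D"},
         "C": {"E"}, "D": {"E"}, "E": {"F"}, "G": {"F"}}
vulnerable = {"A", "B", "C", "D", "E", "F"}
graph = host_based_graph({"A", "B", "C", "D", "E", "F", "G"}, reach, vulnerable)
print(("G", "F") in graph)  # True: the internal-only host G is not lost
```

The second loop is what realizes the fix described above: without it, any edge originating at a host unreachable from the first pass would be silently dropped.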

Fig. 5. Host-based attack graph for sample network.

C. Static Evaluation Based on Attack Graph

As discussed in the previous sub-section, host-based graphs can show every malicious action that could take place from any host or attacker starting location to any host in the network. The number of inbound edges of a node in the host-based attack graph therefore represents all the direct attack paths through which other hosts in the considered network can compromise that node. This number can be used to compute the attackability metric of a host in the context of the system under study. The metric is based on intuitive properties derived from common sense. For example, our metric indicates reduced confidence when more inbound edges exist, and an increased SA value in the reverse case. We calculate the attackability metric, which represents the likelihood that a node will be successfully attacked, as follows:

SA_attackability = 1 - (nb_inbound_edges / Σ nb_inbound_edges)

where nb_inbound_edges is the number of inbound edges of the considered node and Σ nb_inbound_edges is the total

number of inbound edges over all nodes in the graph (equal to the number of edges in the graph).

V. SYSTEM IMPLEMENTATION AND NEAR REAL-TIME ASSESSMENT

We have set up an experimental test bed, shown in Fig. 6, using the resources of the Computer Science and Networking Department (INFRES), Telecom ParisTech, to validate our approach under the guidance of network administrators. The test bed consists of network components in daily use: three Foundry routers, five sub-networks, about 44 Sun workstations, and 4 Sun servers. Two of the servers act as perimeter servers between sub-networks. The routers are used as the core network's backbone routers. All network component IP addresses begin with the prefix 137.194. All the usual network services and applications, such as web browsing, email, ftp, and ssh, are running on the test bed. We use information flows to identify SA-related entities and decompose the whole system into three levels of abstraction: network, sub-network, and network components. On each station and server, an SA Evaluator agent runs in normal user mode to monitor its availability posture. Three router evaluators, five sub-network evaluators, and a network evaluator each operate on a randomly chosen station or server to avoid the single-point-of-failure problem. Periodically, or on demand, an agent collects data, communicates with the other agents it relates to, calculates the values of its assigned metrics, and finally sends the obtained results to an overlay network running above, named ROSA. This overlay network is responsible for isolating or reconfiguring the stations, servers, or sub-networks depending on the obtained SA values of the entities under consideration. The calculated values are then sent by ROSA to the GUI. The whole system is implemented in Java; the CPU and network utilization are 0.1%-0.25% and 50 Kbytes for each evaluator agent. That is, the operation of an evaluated entity is not influenced by the monitoring agents.
The architecture of each evaluator agent is shown in Fig. 7. We use the Nessus scanner [15] to obtain information about the vulnerabilities of each station and server. The reachabilities of all stations and servers are assumed to be equal in this case study: each host can reach all ports of the other machines. We then use this information to build the host-based attack graph and calculate an SA_attackability value for each host as described in Section IV. We perform this static evaluation every 7 days.
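Given the host-based attack graph, the attackability formula of Section IV.C reduces to a few lines. The sketch below takes the graph as a set of (source, target) edges; this edge-set representation is our assumption about the data structure, not the paper's implementation.

```python
def attackability(graph_edges):
    """SA_attackability(h) = 1 - inbound(h) / total_edges, where the total
    equals the number of edges in the host-based graph (Section IV.C)."""
    total = len(graph_edges)
    hosts = {h for edge in graph_edges for h in edge}
    inbound = {h: 0 for h in hosts}
    for _, dst in graph_edges:
        inbound[dst] += 1
    # More inbound edges -> lower confidence; no inbound edges -> 1.0.
    return {h: 1 - inbound[h] / total for h in hosts}

# Illustrative graph: E can be compromised along 2 of the 4 edges.
edges = {("A", "B"), ("A", "C"), ("C", "E"), ("D", "E")}
scores = attackability(edges)
print(scores["E"])  # 0.5
```

Note that a host that appears only as a source (like A above) scores 1.0: within this static view, no other host has a direct path to compromise it.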

In our previous work [17], we described different metrics for local processing and network capabilities and selected the aggregation operators used to calculate the SA value of each identified entity at different levels of abstraction. Table 1 shows the chosen metrics for the network capability of a station. For each metric selected in [17], there is a maximum value representing the allowed state in which the component is in functioning condition. The calculated values of the metrics therefore depict how far their current states differ from their maximum allowed state. For example, suppose the measured packet collision rate is 4% while the allowed rate is 10%. A rate greater than this allowed value indicates that the network interface is overloaded and traffic needs to be reduced or redistributed to other network interfaces or servers. Here, the calculated SA value for the packet collision rate is 0.6. The maximum values may differ for each entity under consideration, depending on its configuration. For instance, the limit on the output packet rate of a station is 10000, while for a server with three network interfaces this value is 30000.

Fig. 6. Architecture of test network.
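The packet-collision example above can be sketched as follows. The linear scoring against the allowed maximum is our reading of how the text derives 0.6 from a 4% measured rate and a 10% limit; treat it as an illustration rather than the exact formula of [17].

```python
def metric_sa(measured, allowed_max):
    """SA value of a rate-style metric: how far the current state sits
    below its maximum allowed state. Beyond the limit, the score floors
    at 0 (the anomalous region)."""
    return max(0.0, 1.0 - measured / allowed_max)

# Packet collision example from the text: 4% measured, 10% allowed -> 0.6.
print(metric_sa(4.0, 10.0))
# A different entity may have a different limit, e.g. output packet rate:
print(metric_sa(12000, 30000))  # a 3-interface server is still healthy
```

Because each entity carries its own `allowed_max`, the same measured value can yield different SA scores on a workstation and on a multi-interface server, which is exactly the configuration dependence described above.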

Fig. 7. SA Evaluator architecture.

It is worth noting that the security requirements may vary depending on whose perspective we consider. Within this work, we take the role of an internal user whose primary concern is the ability to send and receive information successfully. The obtained values of the system entities therefore describe the capability of a user to exchange information when he uses the network components, sub networks or the whole network under consideration.

A. Near Real-time Assessment

The SA value of a network component is calculated by combining its local processing capability (e.g. CPU utilization, allocated memory), its network capability (e.g. number of established connections, number of packets received) and its attackability. Since a failure of either the local processing capability or the network capability results in the inability of the component to carry out its functionalities, we take the min operator to represent this relation:

OSAVcomponent = min (OSAVlocal, OSAVnetwork, SAattackability)

That is, when the component operates normally, OSAVcomponent = SAattackability.
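The min combination above can be stated directly; only the function name is our own.

```python
def osav_component(osav_local, osav_network, sa_attackability):
    """Overall SA value of a network component: the minimum of its local
    processing capability, its network capability and its attackability,
    since the failure of any one disables the component."""
    return min(osav_local, osav_network, sa_attackability)
```

When the component operates normally (both capability values at 1.0), the result equals SAattackability, as the text notes.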

Table 1. Network capability metrics for a computer station.

Metric                                                        Frequency    Data source
Input packet rate                                             10 seconds   Script
Input packet error rate                                       10 seconds   Script
Output packet rate                                            10 seconds   Script
Output packet error rate                                      10 seconds   Script
Packet collision rate                                         10 seconds   Script
Number of UDP ports                                           30 seconds   SNMP data
Number of datagrams discarded because no route
  could be found rate                                         30 seconds   SNMP data
Number of received UDP datagrams for which there was
  no application at the destination port rate                 30 seconds   SNMP data
Number of TCP connections rate                                1 minute     Script

A weak point of the anomaly detection approach used in [17] is that it is based only on separate univariate tests. This can lead to cases where the values of individual metrics indicate normal states while the correlation between them is abnormal. For example, high SA values for the input packet rate and the input packet error rate express that each is in a normal state, but if the ratio of the input packet error rate to the input packet rate exceeds 0.25%, the considered host has a packet dropping problem. Initially, we intended to use statistical multivariate analysis [13], such as the Hotelling T2 control chart [19], to detect these kinds of correlated relations between individual metrics. However, our experiments have shown that the computational complexity and the false alarm rates (both positive and negative) of these approaches are high, so they cannot be applied to near real-time assessment; reducing this complexity is future work. Instead, we deal with this problem by grouping all correlated metrics together and computing a combined value. For example, the input packet error rate and the input packet rate discussed above are grouped together to determine how far their ratio is from the threshold of 0.25%. Another example is that the ratio between the packet collision rate and the output packet rate must be below 10%; otherwise it indicates an overloaded network or hardware problems. The actual SA value of the network capability, for example, is the min of the individual metric values and of the values of the identified groups of correlated metrics, since a low value for any of them indicates that the considered host will experience issues. With the same principle, we calculate the SA value of the local processing capability and, finally, the SA value of the evaluated station or server as described above. We follow the equations introduced in [17] to compute the SA values of the high-level entities (sub networks and network).

B. Experimental Results

We have used our system to monitor the INFRES network for more than 2 months. During this period, different scenarios of a system in daily use took place, such as reconfiguration of hosts and overloads in terms of local processing and network capabilities, and the system responded well to these situations. Fig. 8 shows the monitored SA values of two stations and a perimeter server with an interval of 10 seconds between two assessments. Detailed analysis of the individual metric values of 137.194.204.30 shows that the values of the output packet rate and CPU utilization at points 60-63 are rather high, making the overall SA value of this station decrease. For 137.194.160.106, examination of the metric values indicates that the station does not respond to network requests at points 75-126. In these cases, its SA value drops to the minimum value, 0.
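The grouped correlated-metric checks described in Section V.A (the 0.25% error-to-input ratio and the 10% collision-to-output ratio) can be sketched as follows. Scoring the ratio with the same linear normalization as individual metrics is our assumption, not a formula quoted from the paper.

```python
def group_sa(numerator, denominator, threshold):
    """SA value of a group of correlated metrics, scored on the ratio
    between them against its allowed threshold (assumed linear form,
    mirroring the single-metric normalization)."""
    if denominator == 0:
        return 1.0  # no traffic, nothing to flag
    return max(0.0, 1.0 - (numerator / denominator) / threshold)

def network_capability_sa(individual_sa_values, group_sa_values):
    """Network-capability SA: min over individual metrics and over the
    identified groups of correlated metrics."""
    return min(list(individual_sa_values) + list(group_sa_values))
```

For instance, 1 input packet error per 1000 input packets is a 0.1% ratio, 40% of the 0.25% threshold, giving a group SA value of 0.6 under this assumed scoring.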

Fig. 8. Experimental results of some stations and servers: SA values over time for 137.194.192.2 (perimeter server), 137.194.160.106 and 137.194.204.30.

VI. CONCLUSION

In this paper, we present a tool that helps to determine the SA level of an underlying network system in a near real-time manner. This tool takes into account both the static and the dynamic visions. The metrics used to evaluate the dynamic vision currently focus on the availability aspect of security. Future work is to extend them to other aspects of security and to propose a metrics taxonomy that can be used for assessing a network system. The hardest part of the SA assessment process is to identify correct correlation relations between the identified metrics. We group all correlated metrics into one group and then calculate the combined value. This approach is very useful when the number of metrics is small and we have deep knowledge about the system to be evaluated. However, when we do not know these correlated relations a priori, statistical multivariate analysis is a good approach, but its computational complexity and false alarm rates are still open questions.

REFERENCES

[1] ACSA, ed. (2002) Proceedings of the Workshop on Information Security System Scoring and Ranking (WISSSR), May 21-23, 2001, Williamsburg, Virginia.
[2] Ammann, P., Wijesekera, D., Kaushik, S. (2002) Scalable, Graph-Based Network Vulnerability Analysis, Proceedings of the 9th ACM Conference on Computer and Communications Security, Washington, DC, November 2002, pp. 217-224.
[3] Bishop, M. (2002) Computer Security: Art and Science, Addison Wesley Professional, December 2002.
[4] CC (2005) Common Criteria for Information Technology Security Evaluation. Part 1: Introduction and general model, v2.3. Part 2: Security functional requirements, v2.3. Part 3: Security assurance requirements, v2.3.
[5] Clark, K., Tyree, S., Dawkins, J. & Hale, J. (2004) Qualitative and Quantitative Analytical Techniques for Network Security Assessment, Proceedings of the 2004 IEEE Workshop on Information Assurance, June 10-11, 2004, West Point, New York.
[6] Hunstad, A., Hallberg, J., Andersson, R. (2004) Measuring IT Security - a Method Based on Common Criteria's Security Functional Requirements, Proceedings of the 2004 IEEE Workshop on Information Assurance, West Point, NY, June 2004.
[7] Ingols, K., Lippmann, R., Piwowarski, K. (2006) Practical Attack Graph Generation for Network Defense, 22nd Annual Computer Security Applications Conference, Florida, 2006.
[8] Jajodia, S., Noel, S., O'Berry, B. (2003) Topological Analysis of Network Attack Vulnerability, Chapter 5, Kluwer Academic Publishers, 2003.
[9] Lippmann, R. et al. (2005) Evaluating and Strengthening Enterprise Network Security Using Attack Graphs, Technical Report, MIT Lincoln Laboratory, Lexington, MA, 2005.
[10] Lippmann, R. et al. (2005) An Annotated Review of Past Papers on Attack Graphs, Technical Report, MIT Lincoln Laboratory, Lexington, MA, 2005.
[11] Lippmann, R., Ingols, K., Scott, C., Piwowarski, K., Kratkiewicz, K., Artz, M., Cunningham, R. (2006) Validating and Restoring Defense in Depth Using Attack Graphs, Military Communications Conference, Washington, 2006.
[12] Jaquith, A. (2007) Security Metrics - Replacing Fear, Uncertainty and Doubt, Addison Wesley, March 2007.
[13] Montgomery, D.C. (2000) Design and Analysis of Experiments, 5th Edition, Wiley.
[14] Nandy, B., Pieda, P., Seddigh, N., Lambadaris, J., Matrway, A., Hatfield, A. (2005) Information Assurance Metrics, Technical Report, Public Safety and Emergency Preparedness Canada, 2005.
[15] Nessus security scanner, http://www.nessus.org
[16] Oman, P., Krings, A., Conte de Leon, D. & Alves-Foss, J. (2004) Analyzing the Security and Survivability of Real-time Control Systems, Proceedings of the 2004 IEEE Workshop on Information Assurance, June 10-11, 2004, West Point, New York.
[17] Pham, N., Baud, L., Bellot, P., Riguidel, M. (2008) Towards a Security Cockpit, Proceedings of the 2nd International Conference on Information Security and Assurance (ISA 2008), Busan, Korea, 2008.
[18] Pham, N., Riguidel, M. (2007) Security Assurance Aggregation for IT Infrastructures, Proceedings of the Second International Conference on Systems and Networks Communications, August 25-31, 2007, Cap Esterel, France.
[19] Qu, G., Hariri, S., Yousif, M. (2005) Multivariate Statistical Analysis for Network Attacks Detection, Proceedings of the ACS/IEEE 2005 International Conference on Computer Systems and Applications.
[20] Riguidel, M., Hecker, A. & Simon, V. (2006) Armature for Critical Infrastructure, Proceedings of the 2006 IEEE International Conference on Systems, Man, and Cybernetics, October 8-11, 2006, Taipei, Taiwan.
[21] Seddigh, N., Pieda, P., Mattrawy, A., Nandy, B., Lambadaris, J. & Hatfield, A. (2004) Current Trends and Advances in Information Assurance Metrics, Proceedings of the Second Annual Conference on Privacy, Security and Trust, October 13-15, 2004, New Brunswick, Canada.
[22] Swanson, M. (2001) Security Self-Assessment Guide for Information Technology Systems, National Institute of Standards and Technology, Special Publication 800-26, November 2001.
[23] Swanson, M., Bartol, N., Sabato, J., Hash, J. & Graffo, L. (2003) Security Metrics Guide for Information Technology Systems, National Institute of Standards and Technology, Special Publication 800-55, July 2003.
[24] Vaughn, R., Henning, R. & Siraj, A. (2003) Information Assurance Measures and Metrics - State of Practice and Proposed Taxonomy, Proceedings of the 36th Annual Hawaii International Conference on System Sciences, January 6-9, 2003, Hawaii.
[25] Walter, M. & Trinitis, C. (2005) Quantifying the Security of Composed Systems, Proceedings of the 6th International Conference on Parallel Processing and Applied Mathematics, September 11-14, 2005, Poland.
[26] Wang, L., Singhal, A., Jajodia, S. (2007) Measuring the Overall Security of Network Configurations Using Attack Graphs, Proceedings of the 21st Annual IFIP WG 11.3 Working Conference on Data and Applications Security, July 8-11, 2007, CA, USA.
[27] Wang, L., Singhal, A., Jajodia, S. (2007) Toward Measuring Network Security Using Attack Graphs, Proceedings of the 3rd International Workshop on Quality of Protection, October 29, 2007.
[28] Wang, C. & Wulf, W. (1997) A Framework for Security Measurement, Proceedings of the National Information Systems Security Conference, October 7-10, 1997, Baltimore, Maryland.
