Specifying High-Assurance Services


Colin Atkinson, Daniel Brenner, Giovanni Falcone, and Monika Juhasz, University of Mannheim

An enhanced approach to service specification strikes a better balance between machine processibility and human readability, and emphasizes testing- as well as reasoning-based assurance techniques. Built-in tests check the compatibility of interacting services at runtime and automatically pinpoint contract mismatches.

Service orientation has become the dominant paradigm for architecting enterprise systems because it greatly enhances their flexibility and evolvability. However, the very advantages that make service-oriented architectures (SOAs) easy to assemble and change also make their verification more difficult. While software engineers can integrate a traditional system's components under the controlled conditions of a development environment and can test them together before deploying the system, they do not bring the components in a SOA—services—together until they configure the system in its runtime environment, and the component interconnectivity can change dynamically as the system executes.

There is thus no guarantee that quality assurance measures like tests performed at development and deployment time will remain valid as the system's configuration changes. The risk of failures from contract misunderstanding is therefore much higher in SOAs than in traditional systems, where developers can test all contracts at development time. This, in turn, makes SOAs difficult and costly to use for high-assurance systems that must exhibit high levels of dependability with known levels of confidence.

SEMANTIC SERVICE COMPOSITION

Most SOA technology vendors and researchers see semantic service composition as the best way to address this problem. At its core, this approach seeks to describe services so precisely that the correctness of service compositions can be assured dynamically by automated "reasoning" techniques rather than by traditional verification techniques. This is best supported by languages that not only have well-defined semantics, founded on predicate logic, but also well-known computational properties that can support efficient, automated reasoning. Semantic service description approaches such as the Web Ontology Language for Services (OWL-S; www.w3.org/Submission/2004/07) or the Web Service Modeling Ontology (WSMO; www.w3.org/Submission/WSM) therefore use languages based on description logic that has been carefully designed to maximize reasoning efficiency.

Semantic service composition will play an important role in service-oriented development. However, as long as human engineers are involved in the service composition process, its focus on reasoning-based assurance has a downside. Not only does semantic service composition rely on languages with concrete representations optimized for reasoning efficiency at the expense of human readability, but the resulting specifications also have no inherent support for dynamic verification techniques, such as testing, which are still the mainstay of human verification activities. Thus, although the approach works well for the relatively small systems amenable to automated reasoning, working with semantic service descriptions is difficult for the large or complex systems that require human involvement. However, it is precisely these larger, more complex systems that most need quality assurance and human involvement.


[Figure 1, reproduced in outline. The ShoppingCartService class offers the operations:
  createNewCart(username : String, password : String) : String
  addProduct(cartID : String, p : Product, number : Integer)
  removeProduct(cartID : String, p : Product)
  checkout(cartID : String, card : CreditCard)
  totalCost(cartID : String) : Integer
  numberOfItems(cartID : String, p : Product) : Integer
  numberOfProducts(cartID : String) : Integer
  isCheckedOut(cartID : String) : Boolean
It is associated with the classes ShoppingCart (ID : String, /totalCost : Integer, checkedOut : Boolean, plus operations such as addItem, removeItem, totalCost, numberOfProducts, numberOfItems, confirmPurchase, and isCheckedOut), Product (ID : String, price : Integer, description : String, noOfItems : Integer; inv: noOfItems >= 0), and CreditCard (details : String, number : String, limit : Integer; inv: limit >= 0).]

Figure 1. Structural view of ShoppingCartService. The service has eight operations: four changers that alter the service's state, and four inspectors that reveal information about the service's state but do not change it.

In the software engineering group at the University of Mannheim we have been exploring a complementary approach to service specification that strikes a different balance between machine processibility and human readability, emphasizing testing-based as well as reasoning-based assurance techniques. We believe that human software engineers will be involved in the composition and verification of complex, service-oriented systems for a long time to come and that the use of reasoning techniques must be balanced with support for human development and verification activities. We therefore base our service specification approach on the Unified Modeling Language (UML) and the Object Constraint Language (OCL)—languages that have been carefully engineered for human usability—augmented with a new technique for test description. Although they are not optimized for efficient reasoning, a range of model-checking and theorem-proving techniques1 can analyze UML/OCL models. They are also the foundation for the vast range of model transformation and language customization technologies offered by the model-driven development industry.

We illustrate our approach in terms of a small case study from the e-commerce domain—a shopping cart service that lets users collect and purchase items from an online shop.

Orthogonal Views of Services

We use UML and OCL to describe the externally visible properties of services from three orthogonal viewpoints. The specific viewpoints are based on those in the KobrA approach,2 which, in turn, are based on the viewpoints popularized in early object-oriented analysis approaches such as the object modeling technique (OMT).3 The structural viewpoint describes the types associated with the service and its visible relationships to other services or resources. The operational viewpoint describes the effects of the service's operations. The behavioral viewpoint describes any externally visible states the service exhibits and the acceptable sequences of operation invocations—the interaction protocol.

Figure 1 shows how the structural view of our sample shopping cart service might look using this approach. The ShoppingCartService class, which represents the service itself (as indicated by its stereotype), shows that the service has eight operations: four "changers" that change the state of the service and four "inspectors" that reveal information about the service's state but do not change it. The diagram also shows that the shopping cart has two data types, CreditCard and Product, used to pass information as operation parameters, and a third data type, ShoppingCart, representing the main abstraction clients use.
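To make the structural view more tangible, a Java rendering of the service's provided interface might look as follows. The operation and attribute names are taken from Figure 1; the mapping itself (a plain Java interface, with int and boolean standing in for UML's Integer and Boolean) is purely illustrative and not part of the specification.

// Illustrative Java rendering of the structural view in Figure 1.
public interface ShoppingCartService {

    // Changers: alter the externally visible state of the service.
    String createNewCart(String username, String password);   // returns a cart ID
    void   addProduct(String cartID, Product p, int number);
    void   removeProduct(String cartID, Product p);
    void   checkout(String cartID, CreditCard card);

    // Inspectors: reveal state without changing it.
    int     totalCost(String cartID);
    int     numberOfItems(String cartID, Product p);
    int     numberOfProducts(String cartID);
    boolean isCheckedOut(String cartID);
}

// Data types passed as operation parameters (attributes as in Figure 1).
class Product {
    String id;
    int price;
    String description;
    int noOfItems;       // invariant in Figure 1: noOfItems >= 0
}

class CreditCard {
    String details;
    String number;
    int limit;           // invariant in Figure 1: limit >= 0
}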

The functional view describes the functionality of the operations the service offers. As Figure 2 shows, it takes the form of a set of OCL-based operation specifications, each describing the effects of an operation in the form of pre- and postconditions written against the classes in the structural model. According to Figure 2, if the addProduct operation is supplied with a valid cartID parameter, it will (a) increase the totalCost attribute by the product of the number of items and the product price, (b) increase the noOfItems attribute in the product by the appropriate amount, and (c) store the product in the ShoppingCartService. The figure also shows how developers can specify simple QoS thresholds such as the maximum acceptable response time. However, more complex QoS statements might need to be defined in an auxiliary view using a QoS description language such as the Component Quality Modeling Language.4

Name           addProduct
Description    Adds a specified number of items of a specified product type to a specified shopping cart
Receives       cartID : String, prod : Product, number : Integer
Changes        Cart : ShoppingCart = ShoppingCart->select(ID = cartID)->asSequence()->first()
Precondition   ShoppingCart.ID->includes(cartID)
               Cart.checkedOut = false
Postcondition  Cart.totalCost = Cart.totalCost@pre + prod.price * number
               prod.noOfItems = prod.noOfItems@pre + number
               Cart.Product->includes(prod)
QoS            Max time = 2.0 seconds

Figure 2. ShoppingCartService's addProduct operation. If the operation is supplied with a valid cartID parameter, it will (a) increase the totalCost attribute by the product of the number of items and the product price, (b) increase the noOfItems attribute in the product by the appropriate amount, and (c) store the product in the ShoppingCartService.

The third view, the behavioral view, displays the externally visible states of the service in the form of a UML state machine diagram. However, as is often the case in SOAs, the ShoppingCartService has no interesting externally visible states of its own because it is a data-driven service designed to serve multiple simultaneous users.

Built-in Tests

Our UML/OCL-based diagrams provide a human-readable yet formally precise specification of the service's externally visible properties. Although not optimized for reasoning, the models can be subject to various forms of theorem proving and model checking, such as determining the Liskov substitutability of one service for another based on comparisons of their operation preconditions and postconditions.5 Like semantic service specifications, however, these models have no inherent support for testing.

Developers can generate tests automatically from UML/OCL descriptions of components,6,7 but the approaches for doing so use coverage criteria oriented toward verification—internal defect detection—rather than validation. This makes them unsuitable for lightweight checking of contract conformance at runtime.

We assume that the core services in a SOA have been adequately tested during development and any identified faults removed. When services are connected at runtime, therefore, any problems arising in their interaction will more likely be due to contract misunderstandings than implementation errors within service providers. Runtime tests aimed at uncovering such problems should therefore be validation-oriented rather than verification-oriented. To paraphrase the well-known idiom Barry Boehm8 uses to distinguish verification and validation: when a service user is connected to a service provider, it needs to determine whether it is using the "right" service rather than whether the service it is using is "right."

Over the past few years, researchers have been refining the notion of "built-in tests" based on this premise.9,10 The idea behind such tests is to furnish service clients with the ability to test their service providers at runtime to validate that they do what their specifications state they do. Built-in tests therefore have the same goal of establishing composition correctness as semantic composition technologies, but in a different, complementary way. To ensure that they can perform this role reliably, it is important that humans can define built-in tests using a simple and intuitive notation.
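As a rough illustration of what such a built-in test could look like in code, the following Java fragment checks part of the addProduct contract from Figure 2 (its effect on totalCost and numberOfItems, and its response-time threshold) against a bound provider at runtime. It assumes the illustrative ShoppingCartService interface sketched earlier; the class name and the way mismatches are reported are likewise only illustrative.

// Hypothetical built-in test a client could run against a newly bound provider.
// It validates observable behavior (the "right" service), not internal correctness.
class AddProductBuiltInTest {

    /** Returns null on success, or a description of the contract mismatch. */
    static String run(ShoppingCartService service, Product sample) {
        String cartID = service.createNewCart("testUser", "testPassword");

        int costBefore  = service.totalCost(cartID);
        int itemsBefore = service.numberOfItems(cartID, sample);

        long start = System.nanoTime();
        service.addProduct(cartID, sample, 3);
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        // Postconditions from Figure 2, rephrased as runtime checks.
        if (service.totalCost(cartID) != costBefore + sample.price * 3)
            return "totalCost did not increase by price * number";
        if (service.numberOfItems(cartID, sample) != itemsBefore + 3)
            return "numberOfItems did not increase by the number added";
        // Simple QoS threshold from Figure 2 (max time = 2.0 seconds).
        if (elapsedMillis > 2000)
            return "addProduct exceeded the 2.0 s response-time threshold";
        return null; // contract appears to be honored for this scenario
    }
}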


[Figure 3, reproduced in outline only. The front sheet is the input test sheet and the sheet behind it is the result test sheet. Rows 1-5 call createNewCart and then check that totalCost, numberOfItems, numberOfProducts, and isCheckedOut return their initial values (0, 0, 0, FALSE); rows 23-33 exercise removeProduct, create a further Product, and re-check the four inspectors. Input cells reference earlier results through spreadsheet-style coordinates such as F1, and mismatching result cells (for example, F24, F25, and F26) show both the expected and the returned value.]

Figure 3. Single-scenario test sheets. These are essentially tables, but surrounded by spreadsheet-like row and column labels. The sheet at the front is an input test sheet and the one behind is a result test sheet. Rows 6 through 22 are not shown due to lack of space.

Single-Scenario Test Sheets

The mainstream test description approach that comes closest to fulfilling these requirements is the FIT approach from the agile development community.11 Designed to support agile system specification and validation, FIT allows customers to specify the required properties of a system in a simple, tabular fashion by defining the desired relationships between the system's inputs and outputs.

However, FIT has certain limitations that make it unsuitable for defining built-in tests, such as the inability to describe relationships between the parameters and results of different operation invocations. We therefore refined the FIT technique into a more general approach, known as test sheets, which are more suitable for specifying and validating services.

As Figure 3 shows, a test sheet is essentially a table, like a FIT test definition, but surrounded by spreadsheet-like row and column labels. Each row in the table represents an invocation of an operation of the object under test (in this case the ShoppingCartService) or of one of its data types. The execution flow proceeds sequentially from top to bottom, with column A identifying the object or class (in the case of object creation) and column B identifying the operation. The purple zone in the figure contains the input parameters for each operation invocation; the blue zone contains the output parameters or results.

A test specification is known as an "input" test sheet because it defines the set of test cases to be executed: the invoked operations, input arguments, and expected outputs. The front table in Figure 3 is an example of an input sheet. The rear table shows the corresponding "result" test sheet, which displays the results obtained by applying the test to a specific test object. It is identical to the input sheet except that any nonempty cells in the input test sheet's blue zone are instead colored red or green (following the FIT style) to indicate whether the actual returned value matched the expected value. When it doesn't, the cell is colored red and both the expected and returned values are shown, as cells F24, F25, and F26 of the result sheet show.
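As a rough illustration of the execution semantics, the following Java fragment mimics how rows 1 through 5 of the input test sheet could be run programmatically, with an ordinary variable standing in for result cell F1. The row numbers and expected values follow Figure 3, but the harness itself, including the product argument assumed for numberOfItems, is only a sketch built on the illustrative interface given earlier.

// Illustrative execution of rows 1-5 of the input test sheet (Figure 3).
// The variable f1 plays the role of result cell F1 in the sheet.
class SingleScenarioSheet {
    static void run(ShoppingCartService svc, Product p) {
        String f1 = svc.createNewCart("U1", "P");          // row 1: createNewCart("U1", "P")

        check(2, 0,     svc.totalCost(f1));                // row 2: totalCost(F1) = 0
        check(3, 0,     svc.numberOfItems(f1, p));         // row 3: numberOfItems(F1, p) = 0 (product argument assumed)
        check(4, 0,     svc.numberOfProducts(f1));         // row 4: numberOfProducts(F1) = 0
        check(5, false, svc.isCheckedOut(f1));             // row 5: isCheckedOut(F1) = FALSE
    }

    // A green/red cell in the result sheet corresponds to pass/fail here.
    static void check(int row, Object expected, Object actual) {
        String colour = expected.equals(actual) ? "green" : "red";
        System.out.printf("row %d: expected %s, got %s -> %s%n", row, expected, actual, colour);
    }
}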

The spreadsheet column and row labels offer the advantage that they can represent arbitrary relationships between the input and output arguments of different invocations. This is not possible in FIT. Thus, "F1" in cell C2 indicates that the input parameter to that invocation of the totalCost operation is the value returned by the first invocation in the sequence (createNewCart).

A test sheet allows an arbitrary sequence of invocations of ShoppingCartService operations to be defined. But we must also determine the sequence of operations that should be included to adequately validate whether a service conforms to a contract. To do this, we turn to an alternative style of formal specification, algebraic specification.12 This specification approach has fallen out of favor in recent years because it is generally more cumbersome for large systems than model-based specification based on explicit representations of system state.

[Figure 4, reconstructed as a list of axioms. Each axiom defines the effect of a changer operation (createNewCart, addProduct, removeProduct, checkout) in terms of an inspector operation (totalCost, numberOfItems, numberOfProducts, isCheckedOut), where C is a shopping cart, P, J, and Q are products, N is an item count, and CC is a credit card:]

createNewCart:
  totalCost(createNewCart) = 0
  numberOfItems(createNewCart, P) = 0
  numberOfProducts(createNewCart) = 0
  isCheckedOut(createNewCart) = FALSE

addProduct:
  totalCost(addProduct(C, P, N)) = totalCost(C) + P.price * N
  numberOfItems(addProduct(C, P, N), J) = if P = J then numberOfItems(C, P) + N else numberOfItems(C, J)
  numberOfProducts(addProduct(C, P, N)) = if numberOfItems(C, P) = 0 then numberOfProducts(C) + 1 else numberOfProducts(C)
  isCheckedOut(addProduct(C, P, N)) = isCheckedOut(C)

removeProduct:
  totalCost(removeProduct(C, P)) = totalCost(C) - numberOfItems(C, P) * P.price
  numberOfItems(removeProduct(C, P), Q) = if P = Q then 0 else numberOfItems(C, Q)
  numberOfProducts(removeProduct(C, P)) = if numberOfItems(C, P) = 0 then numberOfProducts(C) else numberOfProducts(C) - 1
  isCheckedOut(removeProduct(C, P)) = isCheckedOut(C)

checkout:
  totalCost(checkout(C, CC)) = totalCost(C)
  numberOfItems(checkout(C, CC), P) = numberOfItems(C, P)
  numberOfProducts(checkout(C, CC)) = numberOfProducts(C)
  isCheckedOut(checkout(C, CC)) = if CC.limit < totalCost(C) then FALSE

Figure 4. Algebraic specification axioms for the ShoppingCartService. These define the ShoppingCartService operations' effects from the client's perspective, including behavior that might be unexpected and thus a likely cause of contract misunderstandings.

However, algebraic specifications are an ideal foundation for defining validation tests because they precisely identify the invariant relationships between a service's operations. Ideally, an algebraic specification should define the effects of each changer operation in terms of each inspector operation, as shown in the ShoppingCartService algebraic specification axioms in Figure 4. The signature definition part of the algebraic specification is not shown because the structural view of the component in Figure 1 essentially provides the same information. The only difference is that all the changer operations need to return the ID of the changed shopping cart, like the createNewCart operation.

The axioms in Figure 4 define the ShoppingCartService operations' effects from the client's perspective, including behavior that might be unexpected and thus a likely cause of contract misunderstandings. For example, the axiom defining the effect of removeProduct in terms of numberOfItems makes it clear that all items of a product type are removed when the product is removed, rather than just one, as might be expected.

To ensure that a test sheet incorporates all the semantic information in the algebraic specification axioms, we systematically map each axiom into a corresponding sequence of operations. Thus, the first five operation invocations in the test sheet check the first row of the algebraic specification, the next 11 invocations (6 to 16) check the second row of Figure 4, the next six (17 to 22) check the fourth row, and the remaining invocations (23 to 33) check the third row. The fourth row is checked before the third simply to avoid having to refill the ShoppingCartService with Products after the effects of removeProduct have been checked.
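As an example of how an axiom translates into executable checking code, the following sketch exercises the removeProduct/numberOfItems axiom just discussed. The operation names come from Figures 1 and 4, and the interface is the illustrative one sketched earlier; the helper class itself is purely illustrative.

// Checks the axiom numberOfItems(removeProduct(C, P), P) = 0 from Figure 4:
// removing a product must remove *all* items of that product type, not just one.
class RemoveProductAxiomCheck {
    static boolean holdsFor(ShoppingCartService svc, Product p) {
        String cart = svc.createNewCart("axiomUser", "pw");
        svc.addProduct(cart, p, 4);             // put several items of p into the cart
        svc.removeProduct(cart, p);             // apply the changer under test
        return svc.numberOfItems(cart, p) == 0; // the inspector must now report zero items of p
    }
}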

A test sheet not only defines how to validate a service, but also constitutes a human-writeable and -readable specification of that service. Thus, when test-sheet specifications of required services are built into processes and services, the deployed components can test the service providers to which they are connected and validate that they fulfill their contracts. If a service provider fails a test built into one of its clients, the system can flag an error and take appropriate countermeasures.

Multiscenario Test Sheets

The test sheet in Figure 3 explicitly describes one specific sequence of operation invocations representing one specific usage scenario. We have carefully designed this sequence to test the invariant relationships between operations defined in the service's algebraic specification. However, it means that the test sheet can only be used as an "all or nothing" qualitative test of acceptability rather than as a quantitative measure of the service's reliability. We have therefore defined an enhanced form of test sheet—the multiscenario test sheet—which specifies the effects of a service's operations invariantly over all allowed scenarios.

Unlike a single-scenario test sheet, the method invocation rows in a multiscenario test sheet are not assumed to execute sequentially. Instead, a special life-cycle zone, shown in pink in Figure 5, defines their execution order. This is essentially a tabular representation of a finite state machine or Markov chain representing the algorithm or usage model driving the service's invocation. Each row in the pink part of Figure 5 represents a state, while each nonempty cell represents a transition.

[Figure 5, reproduced in outline only. Rows 1-12 are invocation rows for the ShoppingCartService operations (createNewCart, addProduct, removeProduct, checkOut, totalCost, numberOfProducts, isCheckedOut, and numberOfItems), and rows 13-14 create Product and CreditCard instances with attribute values drawn at random from Poisson and uniform distributions. Input cells use access operators such as F1.last and F13.last, and expected-result cells are written as conditional expressions over the lc operator (for example, "if lc = 6 and F7.last > E14.last FALSE" in cell F9). Rows 15-18 form the pink life-cycle zone, whose transition expressions (such as "40% -> 17/13.2. (7, 8, 9, 10)") define the probabilities and operation sequences of the usage model.]

Figure 5. Multiscenario test sheet. This is essentially a tabular representation of a finite state machine or Markov chain representing the algorithm or usage model driving the service's invocation.

The first part of a transition expression defines its probability, the second part, after the arrow, identifies the target state, and the list of numbers after the "/" defines the operations executed before entering the target state. Numbers separated by dots are executed in sequence, while numbers appearing within parentheses can execute in any order.
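The following minimal sketch shows how such a life-cycle zone could drive scenario generation: it repeatedly samples the outgoing transitions of a state and reports which invocation rows would be executed. The probabilities and action lists echo row 17 of Figure 5, but the data structure and the single-state simplification are illustrative assumptions rather than part of the test-sheet notation.

import java.util.Random;

// Simplified walk over a usage model like the life-cycle zone of Figure 5.
// Each state owns a set of outgoing transitions whose probabilities sum to 1.
class LifeCycleWalk {
    record Transition(double probability, int targetState, String actions) {}

    static final Random RNG = new Random();

    // Choose the next transition of a state by sampling its probability distribution.
    static Transition choose(Transition[] outgoing) {
        double roll = RNG.nextDouble(), cumulative = 0.0;
        for (Transition t : outgoing) {
            cumulative += t.probability();
            if (roll <= cumulative) return t;
        }
        return outgoing[outgoing.length - 1];
    }

    public static void main(String[] args) {
        // State 17 of Figure 5, approximated: add or remove items, or check out.
        Transition[] state17 = {
            new Transition(0.40, 17, "13.2.(7,8,9,10)"),   // add new item
            new Transition(0.10, 17, "11.3.(7,8,9,12)"),   // add old item
            new Transition(0.05, 17, "13.4.(7,8,9,10)"),   // remove new item
            new Transition(0.10, 17, "11.5.(7,8,9,12)"),   // remove old item
            new Transition(0.30, 18, "14.11.6.(7,8,9)"),   // checkout
            new Transition(0.05, 18, "")                   // quit
        };
        int state = 17;
        while (state != 18) {                              // row 18 marks the end state
            Transition t = choose(state17);
            System.out.println("execute rows " + t.actions() + " then go to state " + t.targetState());
            state = t.targetState();
        }
    }
}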

Thus, the transition expression "40% -> 17/13.2. (7, 8, 9, 10)" in cell A17 of Figure 5 indicates that the corresponding transition to the state represented by row 17 has a 40 percent probability of being selected and first executes operation invocation 13, followed by operation invocation 2, followed by 7, 8, 9, and 10 in any order.

The other main difference from single-scenario test sheets is that input parameters are generated at random according to specified distribution functions, and the required results in the result zone are written in a scenario-independent way. This is achieved by using (a) a special operator, lc, that returns the row number of the most recently invoked changer operation and (b) parameter or result access operators that return specific or randomly selected values from the sets of parameter or result values accumulated as a scenario unfolds. Thus, the result expression "if lc = 6 and F7.last > E14.last FALSE" in cell F9 states that if the most recently invoked changer operation is 6, and the value returned by the most recent execution of 7 is greater than the value supplied as the third parameter of the most recent invocation of 14 (E14), the result must be FALSE. The negation of this condition does not imply that the result must be TRUE, because a credit card also requires confirmation of creditworthiness by some credit card checking service.

Note that some of the operations of the ShoppingCartService are invoked in more than one row. This is because each row invokes the operation under a different circumstance that leads to a different result in one of the inspector operations. For example, row 2 invokes addProduct with a Product that is guaranteed not to be in the shopping cart because it has just been freshly created. Row 3, on the other hand, invokes addProduct with a Product that might already be in the shopping cart.

Another interesting aspect of Figure 5 is the use of numberOfItems as a probe to determine whether or not a Product is in the shopping cart. Row 11 does not specify an expected result for the numberOfItems operation because the test sheet uses it to set up a probe. It uses the result obtained in F7, for example, to determine if the Product passed to invocations of addProduct (row 3) or removeProduct (row 5) is already in the shopping cart.

If a stateful object like a service is exercised with enough scenarios that mirror its real usage profile, the likelihood that its reliability is above a given threshold, r, can be determined to a certain level of confidence, c. The minimum number of scenarios that must be executed to do this is roughly equal to ln(1 - c)/ln(1 - r).13 Given that the system can use a multiscenario test sheet to generate an arbitrary number of scenarios based on the usage profile, the sheet can be used to estimate a service's reliability from a given client's viewpoint.
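As a quick arithmetic check of this bound (an illustration only, reading the formula with the tolerated per-scenario failure probability p = 1 - r in the denominator), demonstrating reliability above 0.99 with 95 percent confidence requires roughly 300 failure-free scenarios:

// Rough arithmetic illustration: minimum number of failure-free scenarios needed
// to demonstrate that the per-scenario failure probability is below p with
// confidence c, that is, that the reliability is above r = 1 - p.
class ScenarioCount {
    static long minimumScenarios(double p, double c) {
        return (long) Math.ceil(Math.log(1 - c) / Math.log(1 - p));
    }

    public static void main(String[] args) {
        // Reliability above 0.99 (p = 0.01) with 95 percent confidence.
        System.out.println(minimumScenarios(0.01, 0.95));   // prints 299
    }
}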

SOA engineers can use our approach to specify services in an intuitive, visually oriented notation that simplifies the use of dynamic, test-oriented techniques for quality assurance. We intend it to supplement, not replace, the kind of reasoning-oriented techniques that semantic service composition technologies emphasize. We achieve the former by using UML/OCL to specify services from three orthogonal viewpoints, and the latter by using a new form of test specification.

Test sheets are not derived from other views, as in most model-driven approaches to testing, but are meant to be primary specification artifacts that humans write and read. They are also semantically self-contained and can be directly executed at runtime to qualitatively validate a service provider's contract compliance or to quantitatively assess a service provider's reliability. ■

Acknowledgments
Parts of the work described in this article were supported by the German state of Baden-Württemberg through the Aristaflow, MORABIT, and ECOMODIS projects.

References
1. B. Beckert, R. Hähnle, and P.H. Schmitt, eds., Verification of Object-Oriented Software: The KeY Approach, Springer, 2007.
2. C. Atkinson et al., "Modeling Components and Component-Based Systems in KobrA," A. Rausch et al., eds., The Common Component Modeling Example: Comparing Software Component Models, LNCS 5153, Springer, 2008, pp. 54-84.
3. J. Rumbaugh et al., Object-Oriented Modeling and Design, Prentice-Hall Int'l, 1991.
4. J.Ø. Aagedal and E.F. Ecklund Jr., "Modelling QoS: Towards a UML Profile," Proc. 5th Int'l UML Conf.—The Unified Modeling Language, LNCS 2460, Springer, 2002, pp. 275-289.
5. B.H. Liskov and J.M. Wing, "A Behavioral Notion of Subtyping," ACM Trans. Programming Languages and Systems, Nov. 1994, pp. 1811-1841.
6. L.C. Briand, Y. Labiche, and M.M. Sówka, "Automated, Contract-Based User Testing of Commercial-Off-the-Shelf Components," Proc. Int'l Conf. Software Engineering, ACM Press, 2006, pp. 92-101.
7. S. Beydeda and V. Gruhn, Testing COTS Components and Systems, Springer, 2005.
8. B. Boehm, "Verifying and Validating Software Requirements and Design Specifications," IEEE Software, vol. 1, no. 1, 1984, pp. 75-88.
9. D. Brenner et al., "Reducing Verification Effort in Component-Based Software Engineering through Built-In Testing," Information Systems Frontiers, Springer, 2007, pp. 151-162.
10. H-G. Gross, Component-Based Software Testing with UML, Springer, 2005.
11. R. Mugridge and W. Cunningham, FIT for Developing Software: Framework for Integrated Tests, Robert C. Martin Series, Prentice Hall, 2005.
12. K.J. Turner, Using Formal Description Techniques—An Introduction to Estelle, LOTOS and SDL, John Wiley and Sons, 1993.
13. W. Ehrenberger, Software-Verifikation: Verfahren für den Zuverlässigkeitsnachweis von Software, Hanser Fachbuch, 2002.

Colin Atkinson heads the Software Engineering Group at the University of Mannheim. His research interests include model-driven development, service-oriented architecture, and dependable systems. Atkinson received a PhD in computer science from Imperial College, London. Contact him at [email protected].

Daniel Brenner is a research assistant in the Software Engineering Group at the University of Mannheim. His research interests include software testing, component-based development, and dependable systems. Brenner received a diploma in computer science and business administration from the University of Mannheim. Contact him at [email protected].

Giovanni Falcone is a research assistant in the Software Engineering Group at the University of Mannheim. His research interests include software reuse, component-based development, and software metrics. Falcone received a diploma in computer science and mathematics from the University of Mannheim. Contact him at [email protected].

Monika Juhasz is a research assistant in the Software Engineering Group at the University of Mannheim. Her research interests include model-based software engineering and usage modeling. Juhasz received a diploma in computer science and business administration from the University of Mannheim. Contact her at [email protected].

