Bootstrap: Fine-Tuning Process Assessment


Volkmar Haase and Richard Messnarz, University of Technology, Graz
Günter Koch, European Software Institute
Hans J. Kugler, K&M Technologies
Paul Decrinis, University of Technology, Graz

Bootstrap was a project done as part of the European Strategic Program for Research in Information Technology (ESPRIT). Its goal was to develop a method for software-process assessment, quantitative measurement, and improvement. In executing that goal, Bootstrap enhanced and refined the Software Engineering Institute's process-assessment method and adapted it to the needs of the European software industry, including nondefense sectors like banking, insurance, and administration. This adaptation provided a method that could be applied to a variety of software-producing units (SPUs): small to medium software companies, or departments that produce software within a large company.

The Bootstrap method assesses both the SPU and its projects, thus providing answers to two important questions: Is the SPU providing all the necessary resources to do the projects, and are the projects using the resources efficiently? The result of applying the Bootstrap assessment method is a profile of process quality that represents the SPU's strengths and weaknesses. This quality profile serves as a quantitative foundation for decisions about process improvements.

In this article, we describe the elements of the Bootstrap assessment tool, how it differs from other process and quality guidelines and models, how it can be used to attain ISO 9001 certification, and how it has been implemented in two case studies.

ELEMENTS OF THE BOOTSTRAP METHOD

The Bootstrap method comprises three major elements:

♦ a detailed hierarchy of process-quality attributes based on attributes from the International Organization for Standardization's ISO 9000-3 guidelines for software-quality assurance and the European Space Agency's PSS-05 software-engineering standards,
♦ a refinement of the SEI algorithm that lets you calculate a maturity level for each process attribute, and
♦ an enhanced SEI questionnaire that can be used to determine an organization's capabilities for 30-plus quality attributes.

Quality attributes. To model process quality, an organization must first define a quality goal. Quality attributes are assigned to this goal, and quality factors are, in turn, assigned to each attribute. Quality factors might be broken into more quality factors, and so on. In the Bootstrap method, this attribute breakdown is captured through the quality-attribute hierarchy shown in Figure 1. An attribute is broken into clusters of attributes that contain either additional attribute clusters or elementary attributes. The hierarchy takes into account the three main dimensions in a process: organization, methodology, and technology. The Bootstrap team performed a study to determine technologies that can support the methodology attributes and, for each methodology attribute, defined a technology attribute.

Refined algorithm. By assigning a software metric to each attribute and evaluating it according to that metric, you get a set of measured values that represents process quality. The metric used in Bootstrap is the Bootstrap maturity-level algorithm [1], an enhanced and refined version of the SEI maturity-level algorithm. Like the SEI method, the Bootstrap algorithm differentiates among five levels of software processes. However, unlike the SEI method, the Bootstrap method calculates a maturity level for each attribute of the process-quality model.

Figure 1. Bootstrap's hierarchy of process-quality attributes. The hierarchy ensures that the quality-process model accounts for individual activities as well as broad process areas.

Bootstrap questionnaires. For each object in the quality-attribute hierarchy, we created checklists, which resulted in the Bootstrap questionnaires: one for organizations (SPUs) and one for projects. (We describe later how these differ.) The purpose of these questionnaires is to identify the degree to which an organization's or project's quality attributes satisfy the criterion of a particular maturity level, from 2 to 5. (An organization or project is assumed to have no process in place at level 1.) Because the questionnaire is based on ISO 9000-3 guidelines, organizations can use it to interpret the ISO 9001 standard and determine their readiness for certification. The box "ISO Certification Profiles" describes how this works.

On the basis of the ISO 9000-3 guidelines and the ESA PSS-05 standards, we identified 31 quality attributes dealing with organization and methodology (as opposed to six attributes in the SEI '87 questionnaire [3]) that relate to the ISO attributes. We also added questions on establishing a quality system, including process-modeling requirements and risk management. This allows organizations to check, for example, if reviews (joint, design, management, and progress-control) are conducted.

The philosophy of the questionnaire follows the philosophy of the ISO 9000-3 guidelines: If an organization conducts reviews, there is most likely a standard software-engineering process in place. Reviews can then be used to reveal if the defined methods and procedures are being followed and used effectively within projects. If there is no standard structure and format for design descriptions and if there is no checklist to investigate the design, then no objective evaluation is possible. The goal behind this philosophy is to encourage organizations to create a software-engineering process model, including standards and methods for both management and development, before they attempt to set up a quality system to check if these standards and methods are effective. Our assessment approach is also helpful here because it is based on the ESA PSS-05 standards, which describe a software-engineering process model, and

the ISO 9000-3 guidelines, which specify the attributes required to set up a quality system.

COMPARISON WITH EXISTING METHODS AND MODELS

Existing methods and models for ensuring quality processes include SEI '87, the CMM, and the ISO 9000-3 guidelines. We realize that SEI has recently come out with a new questionnaire [4], but SEI '87 continues to be widely used. The CMM has also been revised [5], but much of the underlying structure, which is the basis for our comparison, remains the same. In all instances of the CMM, we refer to version 1.0 (1991) [6].

SEI '87. There are two major differences between the Bootstrap questionnaire and the SEI '87 questionnaire [7]:
♦ SEI '87 is organized as a flat sequence of questions with no detailed hierarchy of process-quality attributes, and
♦ SEI '87 allows a question to be answered only by yes or no. Bootstrap, seeking to obtain more detailed and precise results, uses a four-point scale of linguistic terms: absent/weak, basic/present, significant/fair, and extensive/complete. Each term is then translated into a percentage value: absent, 0 percent; basic, 33 percent; significant, 66 percent; and extensive, 100 percent. We adopted a four-point scale because an expert in psychology involved in Bootstrap performed a study on interview techniques and found that, with a three- or five-point scale, assessors are biased toward the middle value.
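The scoring scheme is easy to mechanize. Below is a minimal sketch of the translation, assuming the straightforward reading of the scale; the function and variable names are illustrative, not part of the Bootstrap tooling.

```python
# Bootstrap's four-point answer scale translated to percentages
# (a sketch; names and structure are assumptions, not Bootstrap's tool).
SCORE = {"absent": 0, "basic": 33, "significant": 66, "extensive": 100}

def attribute_satisfaction(answers):
    """Average the percentage scores of the answered questions
    that belong to one quality attribute."""
    scores = [SCORE[a] for a in answers]
    return sum(scores) / len(scores) if scores else 0.0

# Example: one attribute covered by four questions.
print(attribute_satisfaction(["basic", "significant", "significant", "extensive"]))  # 66.25
```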

CMM. The CMM differentiates among five levels of software-process maturity: initial, repeatable, defined, managed, and optimizing. For each level, it defines key process areas, with each area containing key practices that must be performed to rate that particular maturity level. A key practice specifies key indicators, which directly relate to at least one question. Thus, while the CMM provides useful guidelines for any organization wanting to find out what has to be done to reach the next maturity level, its use of a single maturity measure for an entire process does not sufficiently support a quantitative analysis of all a process's strengths and weaknesses [3].

Figure 2 shows how the Bootstrap structure differs from the CMM structure. The CMM improvement approach is based on the maturity level. It defines different key process areas for each maturity level, so each CMM key process area (which corresponds to Bootstrap's "attribute") is valid only within one maturity level. Thus you cannot map a single CMM attribute (key process area) onto a maturity-level scale. The Bootstrap method, on the other hand, is attribute-based, evaluating each attribute separately on the maturity-level scale. This lets organizations and projects check activities within a range of maturity levels. Organizations can then pinpoint strengths and weaknesses and establish more meaningful improvement plans based on individual attributes.

Figure 2. (A) The CMM structure, which is maturity-level based, and (B) the Bootstrap structure, which is attribute based. Bootstrap tends to give a more in-depth picture of an SPU's strengths and weaknesses because it lets you zero in on particular attributes. The CMM, on the other hand, is useful in giving general guidelines on how to progress from one maturity level to the next.

ISO 9000. The ISO 9000 standard calls for important process-quality attributes such as quality policy, strategic planning, resource allocation, quality planning, quality control, and quality assurance. For each quality attribute an organization must prove that a method is being used and that all projects are using it. ISO 9000 also provides a procedure for choosing the appropriate quality system.

We analyzed the attribute structures of ISO 9001 (Design, Development, Production, Installation, Servicing) and ISO 9000-3 (Guidelines for the Application of ISO 9001 to the Development, Supply and Maintenance of Software). We then assigned all Bootstrap questions (from questionnaire version 2.22, released in 1993) to ISO attributes, deriving information about the comparability of Bootstrap and ISO.

[Table 1. How the Bootstrap attribute clusters organization and life-cycle-independent functions map to ISO 9001 attributes, with the degree of coverage in percent. Representative recoverable rows: quality policy and responsibility and authority map to Bootstrap's quality system; verification of resources and personnel maps to personnel selection; contract review is covered 50 percent; most process-control, quality-record, internal-audit, and statistical-technique attributes are covered 100 percent.]

Table 1 shows how two of the Bootstrap attribute clusters in Figure 1, organization and life-cycle-independent functions, map to the ISO 9001 attributes and to what degree the Bootstrap attributes cover ISO 9001 attributes. For example, the ISO 9001 attributes quality policy and responsibility and authority relate to the Bootstrap attribute quality system. The ISO 9001 attributes verification of resources and personnel and training relate to Bootstrap's attributes personnel selection and education and training.

To get the coverage percentage, we assigned all the questions in the Bootstrap questionnaire to attributes in both ISO 9001 and ISO 9000-3 (so as to correctly interpret the ISO 9001 attributes). Bootstrap checks, for instance, if contract reviews are performed and if quality requirements are defined, but it does not specifically investigate if the customer is integrated into this review procedure. ISO 9000-3 does check for this integration through its joint reviews and mutual cooperation attributes. Thus, as Table 1 shows for contract reviews, Bootstrap has a coverage value of only 50 percent.

However, for most of the attributes under process-related functions (and life-cycle functions, which are not shown in the table but are listed in Figure 1), Bootstrap checks more methods and activities than are required for an ISO 9001 certification. For example, ISO does not cover modern programming practices (like library management and reuse and prototyping of user interfaces) and critical system elements, and it only very generally covers measurement, whereas Bootstrap provides more than 30 checkpoints in measurement. Bootstrap also covers risk management; the ISO guidelines do not.
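As an illustration, coverage can be thought of as the share of an ISO attribute's checks that at least one Bootstrap question addresses. The exact formula may differ, and the checkpoint names below are invented purely for this sketch.

```python
# Hypothetical illustration of the coverage idea (checkpoint names invented;
# the real assignment was done question by question from questionnaire v2.22).
iso_checks = {
    "contract review": ["quality requirements defined", "reviews performed",
                        "joint reviews with customer", "mutual cooperation"],
}
bootstrap_checks = {
    "contract review": {"quality requirements defined", "reviews performed"},
}

def coverage(attr):
    required = iso_checks[attr]
    covered = bootstrap_checks.get(attr, set())
    return 100.0 * sum(c in covered for c in required) / len(required)

print(f"contract review: {coverage('contract review'):.0f}%")  # 50% in this toy example
```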

BOOTSTRAP EVALUATION METHOD

To understand the Bootstrap evaluation algorithm, it is helpful to compare it in depth with the SEI algorithm. The Bootstrap evaluation method differs from SEI's in three essential aspects: its profiling approach, its lack of dependence on individual questions, and its dynamic scale.

Profiling approach. The SEI algorithm is strictly sequential. It takes into account the scores on the next higher level (i+1) only if nearly all key questions on level i are answered "yes" (specifically, 11 of 12 on level 2, 12 of 13 on level 3, and 11 of 12 on level 4) and if level i is satisfied by a minimum of about 80 percent for levels 2 through 4 (79 percent on level 2, 81 percent on level 3, and 81 percent on level 4) and 100 percent for level 5.

The Bootstrap method takes into account scores that the SPU or project gained on the next higher level. This ensures that we do not get a flat profile, one that simply characterizes an organization or project at "level 2," but one that reflects the organization's particular strengths and weaknesses. If an SPU is below level 2, why shouldn't we take into account scores on level 3 when looking at a particular attribute? Organizations and projects that plan and stagger innovation over time find it difficult if the measured process capability does not take into account innovations in individual attributes. For example, suppose an organization already has an efficient design methodology and a standardized way of creating design documents, but it has no efficient project-management method. With the SEI method, it would rate a level 1 overall; with the Bootstrap method, it would rate a level 1 for project management but a level 3 for design. This kind of detailed profiling makes it easier to plan and execute innovations and improvements.

Lack of dependence on individual questions. The SEI algorithm is highly dependent on single key questions. Yet, if a question can be answered only by yes or no, there will be a maximum deviation of 50 percent for the evaluation of each question. Thus, there is no reliable statistical difference between answering 10 of 12 or 11 of 12 questions with yes. In contrast, the Bootstrap algorithm relies on key clusters of questions (attributes), which must be satisfied by a percentage greater than 50 to satisfy the current level. Hence, it minimizes dependence on the assessor's behavior in judging individual questions.

Dynamic scale. The SEI algorithm is based only on percentages, resulting in equal distances between the levels of the maturity scale, even though there is a different number of questions for each level.

In the SEI questionnaire the number of questions decreases as the level increases. About 18 questions on level 3 represent 50 percent on the maturity-level scale (0.5), yet only five questions on level 4 represent the same value. The Bootstrap algorithm defines the distances between levels by the number of applicable questions, d[i], where i is a value from 2 to 5, for each level. The distance d[2] between levels 1 and 2, for example, is defined by the number of applicable questions on level 2. Because different numbers of questions might be applicable from one SPU to the next, the d[i] are variable, not constant.

The Bootstrap method can be visualized as climbing a mountain. Every SPU tries to master a number of steps to get as close as possible to the peak of the mountain (level 5), with certain intermediate stations along the way (levels 2 through 4). Each question represents a step, which can be satisfied by 0, 33, 66, or 100 percent. On the basis of how the SPU has satisfied each question, we calculate the number of steps it has achieved for a certain maturity level, with d[i] representing the maximum number of steps that can be achieved on maturity level i.

Figure 3 shows how the total number of steps the SPU or project has fulfilled is mapped onto a step scale to obtain a maturity level. The bottom scale (1,...,5) represents the maturity-level scale. The top scale shows the distances between levels. The arrow represents the total number of steps that is mapped onto the step scale to get a maturity level. The total number of steps is only a first measure and is refined according to four rules:

1. If all questions on level i are satisfied by a percentage p[i] that is greater than 80 percent, level i is fully satisfied.
2. If an SPU or project attribute is between levels i and i+1 after calculating the total steps, the calculation must be based only on the steps achieved on levels 2 to i+2.
3. To reach the next higher level, an SPU or project must satisfy all key attributes on the current level by a minimum of 50 percent.
4. To calculate the maturity of entire processes for either an SPU or a project, rules 1 through 3 must be applied. To calculate the maturity level of an individual attribute, only rules 1 and 2 must be applied, to achieve a detailed profile of strengths and weaknesses.

Figure 3. How steps are distributed across maturity levels. Distances d[i] between levels are defined by the number of applicable questions per level. Each step corresponds to a question in the Bootstrap questionnaire. In the SPU questionnaire, there are 28 questions on level 2, 54 on level 3, 19 on level 4, and 5 on level 5. Each step is satisfiable by 0, 33, 66, or 100 percent. This step scale contrasts with the SEI's percentage scale, which has a fixed distance between levels.
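To make the step scale concrete, the sketch below implements only the basic step accumulation, using the SPU-questionnaire question counts quoted in Figure 3; refinement rules 1 through 4 are omitted, and all names are illustrative rather than taken from the Bootstrap tools.

```python
# Core of the Bootstrap step scale (a sketch; rules 1-4 above are omitted).
D = {2: 28, 3: 54, 4: 19, 5: 5}  # d[i]: applicable questions per level (SPU questionnaire)

def maturity_level(scores):
    """scores: {level: [question scores in percent: 0, 33, 66, or 100]}.
    Each fully satisfied question counts as one step; the total is mapped
    onto a scale whose level widths are the d[i]."""
    total_steps = sum(s / 100 for lvl in D for s in scores.get(lvl, []))
    level = 1.0                       # no process in place below level 2
    for lvl, d in sorted(D.items()):
        if total_steps >= d:          # climbed this level's full distance
            total_steps -= d
            level = float(lvl)
        else:                         # partway between two levels
            return level + total_steps / d
    return level

# Example: level 2 fully satisfied, level 3 partially, little beyond.
example = {2: [100] * 28, 3: [66] * 30 + [33] * 24, 4: [33] * 19, 5: [0] * 5}
print(round(maturity_level(example), 2))  # about 2.63
```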

FORMING AN ACTION PLAN

Having obtained a quality profile through attribute rating, an organization's next step is to form an action plan: a systematic way to examine and interpret the strengths and weaknesses of existing processes. Quality profiles contain methodology attributes. For each of these, there is a technology attribute, so after determining if an efficient method is in place, an organization or project can go on to see how effectively that method is being supported. Naturally, this will differ for SPUs and projects.

Comparing SPUs and projects. Because Bootstrap has a separate questionnaire for SPUs and projects, organizations can compare their profiles with their projects' profiles. Attributes for which the SPU's level is lower than the project's illustrate that the technology-transfer function did not work properly, and that available resources, methods, and technologies were not effectively provided and used.

Figure 4 shows part of a sample quality profile for SPUx and its project PRJx1. The maturity levels of single attributes are interpreted as shown in Table 2. The major difference between the attributes is that SPUs are rated according to recommendations or provisions for improvements; projects are rated according to their use of the recommendation.

Table 2. Interpretation of attribute maturity levels.

Level | SPU attribute (recommendation) | Project attribute (use)
1 | No methodology provided | No methodology employed
2 | Efficient methodology provided | Efficient methodology used
3 | Most effective methodology provided; standardized across all projects | Efficient methodology used as a standard and extensively documented
4 | Recommendation for measuring the efficiency of the methodology and its effect on quality | Collection and analysis of data about the methodology's efficiency
5 | Providing concepts to ensure ongoing improvement of a particular attribute | Ongoing improvement of this attribute

Figure 4. Part of a sample quality profile for SPUx and its project PRJx1, for attributes that are part of the cluster life-cycle-independent functions.

The quartiles in Figure 4 can be interpreted in terms of the four linguistic terms given earlier: 1.25 is weak, 1.5 is basic, 1.75 is significant, and 2.0 is effective. Thus, the method is effectively used but weakly documented and standardized at level 2.25, basically documented and standardized at level 2.5, and so on.

The figure also shows some interesting comparisons of the SPU and its project. For example, because SPUx does not provide a method for risk management, PRJx1 also lacks one. For project management, SPUx provides only a very weak method, so PRJx1 does not use an effective method either. However, project ratings do not always match SPU ratings. In quality management, for example, SPUx provides an effective method, but PRJx1 could have done a better job exploiting resources, so its rating is lower. Conversely, while SPUx provides an effective method for configuration management, PRJx1 does a better job of standardizing the method within the project, so its rating is higher. SPUx has so far not standardized methods for quality management and configuration management across all projects, so it has no value above 2.

Technology and method. With the SEI assessment method, technology is rated either A (low, below 50 percent) or B (high, above 50 percent). But what about the case in which an SPU has answered 49 percent or 51 percent of the technology questions with a "yes"? In the first instance, the technology level would be A; in the second, B, yet there is only a two-percentage-point difference! In the Bootstrap assessment method, we tried to solve this problem by comparing both methodology and technology simultaneously.

Figure 5 shows how this is done for PRJx1 to evaluate management methodology and technology. Each methodology attribute is mapped onto a maturity level on the left axis, while each technology attribute is mapped onto a percentage scale on the right axis. Both the technology and methodology questions are answered in linguistic terms like "weak," but technology answers are translated into percentages instead of maturity levels: below 33 percent is weak, between 33 and 66 percent is basic, and above 66 percent is complete.

As Figure 5 shows, project PRJx1 is not using an effective method for project management even though the technology is completely in place (percentage is above 66). This tells us that, in this organization, the process to introduce technology is not as effective as it might be. As it turns out, the organization bought a technology and then started to learn the method. Moreover, we learned from Figure 4 that the organization's project-management maturity is weak (1.25). Again, as it turns out, the organization bought the project-management tool without knowing anything about project-management methods and procedures. As Figure 5 shows, we can make similar observations about configuration and change management. The method the project uses is effective (2.75) and is basically supported by the technology (50 percent).

Figure 5. Comparison of management methodology with support of technology. By comparing methodology and technology simultaneously, an organization can avoid missing a good rating by only a few percentage points.
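The banding and the side-by-side comparison are simple to express in code. This is an illustrative sketch only; the function names are assumptions, and the two example attributes use the ratings quoted for PRJx1 above.

```python
# Sketch of the technology bands described above (names are assumptions).
def technology_band(pct):
    """Map a technology-satisfaction percentage onto the linguistic bands."""
    if pct < 33:
        return "weak"
    if pct <= 66:
        return "basic"
    return "complete"

def compare(attribute, method_level, tech_pct):
    """Report methodology maturity and technology support together,
    instead of collapsing technology into a binary A/B rating."""
    print(f"{attribute}: method level {method_level}, "
          f"technology {tech_pct}% ({technology_band(tech_pct)})")

compare("project management", 1.25, 70)        # tool in place, method weak
compare("configuration management", 2.75, 50)  # effective method, basic support
```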

DATA ON SPUs AND PROJECTS

The Bootstrap team has published data from 23 SPUs and 49 projects [8]. Their conclusion was that project maturity levels seem to be marginally better than those of the SPUs, a conclusion they based on calculating the average maturity value of SPUs and projects. However, calculating average values can sometimes yield wrong results. If the project average is higher than the SPU average, for example, it may be simply that more reported projects were done by above-average SPUs. To correct this weakness, we have since sought to represent the data in a form more conducive to analysis, as suggested by Andy Huber of Data General [9]. In response to Huber's suggestions, we have developed a prototype database that stores all assessment data and aids in performing refined evaluations.

We refined the approach for comparing SPUs with projects by comparing the maturity levels calculated for the organization and methodology attributes. As Figures 1 and 2 show, organizational maturity is derived from the maturity of about five attributes, and the maturity of the methodology is derived from 20-plus attributes. We believe that organization is most important and that methodology is more important than technology in determining overall process maturity (a fool with a tool is still a fool). Thus, we measure organization and methodology on a maturity-level scale, as shown in Figure 4, and we evaluate technology through a detailed comparison of organization/methodology maturity and technology satisfaction, as shown in Figure 5.

Figure 6. Evaluation of 23 SPUs and 49 projects to rate maturity for the (A) organization attributes and (B) methodology attributes. Bootstrap believes that the maturity of the entire process is a combination of the maturity of these two groups of attributes.

Figure 6 shows the new representation for comparing SPUs' and projects' organization and methodology. The horizontal axis is the SPU's maturity level; the vertical axis is the project's maturity level. Any point on the x=y line indicates equal maturity for the SPU and the project. Any point above this line represents a project whose maturity level is higher than that of the SPU. We identified many multiple points, each of which represents three to four projects on the organization chart and about four to five projects on the methodology chart. About 70 percent of the projects are within |x - y| < 0.25 for methodology. About 20 percent of the projects have considerably higher maturity levels compared to the respective SPUs for organization. However, in either case, the samples are not big enough to draw reliable conclusions.

Figure 7 shows that the maintenance maturity of the projects is marginally better than the design maturity. Each multiple point in Figure 7 represents three to four projects. It seems that companies invest a lot of resources in maintenance management but do not really try to solve the problem by using methodologies to get better quality, obviating huge maintenance efforts. In fact, a study reported by Hewlett-Packard [10] shows that the use of proper design methods can reduce defects by half.

Figure 7. Evaluation of 49 projects to rate maturity levels for design and maintenance methods. For projects, the maturity of the maintenance method is slightly higher than design maturity because companies often invest resources in maintenance management but do not try to use methods that improve quality, which would reduce the resources needed for maintenance.
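The comparison behind Figure 6 reduces to checking each (SPU, project) maturity pair against the x=y line. A small sketch follows; the maturity pairs are invented purely for illustration.

```python
# Sketch of the Figure 6 comparison; the maturity pairs below are invented.
pairs = [(2.0, 2.1), (1.5, 1.6), (2.5, 2.4), (1.0, 1.8), (3.0, 3.1)]  # (SPU, project)

near_diagonal = sum(abs(spu - prj) < 0.25 for spu, prj in pairs)
project_ahead = sum(prj - spu > 0.25 for spu, prj in pairs)

print(f"{100 * near_diagonal / len(pairs):.0f}% of projects within |x - y| < 0.25")
print(f"{100 * project_ahead / len(pairs):.0f}% considerably more mature than their SPU")
```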

ISO CERTIFICATION PROFILES

Many organizations are finding it beneficial to have an ISO 9001 certification. There is no point on the maturity-level scale that corresponds to the point at which organizations are guaranteed a certification. On the one hand, ISO defines attributes such as statistical techniques that cover some aspects of level 4. On the other hand, it does not cover risk management. Moreover, it leaves completely open how efficient a methodology must be: an organization has only to prove that it has a methodology and that the methodology is used and documented. Thus, a maturity profile with risk management at level 1, design at 2.25, and process control at 2.5, with many single values below 3, will probably pass an ISO audit. So although an organization cannot measure certification readiness with an overall maturity value, it can measure the minimum profile required to fulfill ISO requirements.

To assist in the profiling, we are currently developing prototype tools that can calculate a certification level for 85 percent of the ISO attributes. ISO 9001 comprises about 35 attributes that are essential for setting up a quality system within a software-management and -development organization. For each ISO attribute we calculate a certification level on a five-point scale:
♦ 0: For this attribute we cannot calculate a certification level.
♦ 1: Failed certification.
♦ 2: Will pass certification, but only with process modifications.
♦ 3: Will pass certification as is.
♦ 4: Will pass certification and fulfills additional Bootstrap issues.

To determine certification, we use a transformation T based on two functions, F and E, with T = E(F(·)). F maps all Bootstrap questions to ISO attributes. It does not assign one question to exactly one attribute, because the same question sometimes relates to more than one attribute. F is defined by F: BQ → P(I), where BQ ⊆ Q × L × S; Q is the set of all Bootstrap questions; L is the set of all levels assigned to questions (2,...,5); S is the scoring of questions (0, 33, 66, or 100); and P(I) is the set of all possible subsets of I, which in turn is the set of ISO 9001 or 9000-3 attributes (z being a single attribute). E then assigns each attribute z a certification level based on the questions C_z mapped to it:
♦ E(z) = (z, 1) if any C_z question was answered with less than significant and the total satisfaction percentage for the C_z questions is below 50 percent.
♦ E(z) = (z, 2) if any C_z question was answered with less than significant and the total satisfaction percentage for the C_z questions is greater than 50 percent.
♦ E(z) = (z, 3) if each C_z question was answered with significant or extensive.
♦ E(z) = (z, 4) if each C_z question was answered with significant or extensive and the additional Bootstrap questions assigned to z are also satisfied.
♦ E(z) = (z, 0) for attributes for which we cannot calculate a certification level.

Figure A. Part of a sample ISO certification profile for SPUx. The certification profile is consistent with the quality profile in Figure 4 in the main text.
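As a rough illustration of how such a certification level might be computed for a single ISO attribute, the sketch below follows the E rules as given above; the 80 percent cut-off separating levels 3 and 4 is an assumed value, not a published Bootstrap constant.

```python
# Sketch of a certification-level calculation for one ISO attribute
# (rule thresholds partly assumed; see the lead-in above).
def certification_level(scores):
    """scores: percentage answers (0, 33, 66, 100) of the Bootstrap
    questions mapped to one ISO attribute. Returns a level from 0 to 4."""
    if not scores:
        return 0                          # cannot calculate for this attribute
    avg = sum(scores) / len(scores)
    if all(s >= 66 for s in scores):      # every answer significant or extensive
        return 4 if avg >= 80 else 3      # 80% threshold is an assumption
    return 2 if avg > 50 else 1

print(certification_level([66, 100, 66]))   # 3: passes as is (avg ~77)
print(certification_level([100, 100, 66]))  # 4: also fulfills extra issues (avg ~89)
print(certification_level([33, 66, 100]))   # 2: passes only with modifications
```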

CASE STUDIES

We describe two case studies to illustrate our experience with the partial implementation of Bootstrap action plans. The first is for a tool to visualize, oversee, and control industrial processes; the second is for a desktop-publishing technology for a newspaper production system. For each study, we describe the problems found, the improvement activities and their goals, and the effort and cost of improvement.

Case study 1: Redesign and training. The tool consisted of 1 Mbyte of executable code and 2 Mbytes of source code, both in Modula-2. The effort level was 12 person-years.

The Bootstrap assessors found that professional design was lacking and that the engineers had problems understanding the complexity of their own product. It took an average of 12 person-days to fix an error because there was no graphical or textual representation of the module structure, the links among modules, and the product interfaces. Because the product had been developed by a team of highly qualified programmers, who were prepared to work overtime, the organization could overcome these quality problems, but it was highly dependent on individuals. The organization purchased a project-management tool, but the projects never used it because the engineers were never trained in project-management methodologies and so did not readily accept a tool based on them. Consequently, there were no detailed workflow charts, and cost and effort estimates were not based on sound planning data.


We made the following improvements to solve the problem:

♦ We introduced a structured-design method and CASE technology and integrated a Bootstrap expert into the project to redesign the module structure and interfaces. The goal of this activity was to increase customer satisfaction. The redesign provided a detailed overview of all modules and interfaces and formed the basis for making decisions about where to make modifications during maintenance. The benefits were that the redesign effort was done with the project team's cooperation and took only four person-months. After the redesign, we reduced the maintenance effort to only four person-days per maintenance activity, about a third of the previous effort. Moreover, the redesign effort was amortized in about six maintenance activities.

♦ The engineers used the new design methodology in a follow-up project supported by the European Community within the European Systems and Software Initiative (ESSI). The goal of this activity was to increase the project's productivity. There were several benefits. First, the detailed graphical and textual design let them clearly decompose the product into different units that will finally be integrated into one product. Second, they could develop the units in parallel, so we had in effect given them the ability to implement tasks in parallel. This reduced implementation time by half with the same amount of effort.

♦ We trained the engineers to use project-management methodologies and to produce detailed workflow charts as a basis for making estimations. The goal of this activity was to produce reliable estimates. The engineers learned to use the project-management tool and used their knowledge in a subsequent ESSI project. They decomposed work into concrete activities and assigned effort and resources to those activities. This gave them a technically sound basis for making estimates. The main benefit of this activity was that engineers could now estimate resources and effort within 10 percent.

Case study 2: Technology transfer. The desktop-publishing system consisted of 1.5 million lines of code in C. The effort was 35 person-years. The project manager had built up a module repository with a size of 800,000 lines of code, enabling more than half the developed software to be reused in other projects. Because of this repository, the project was three to four times more productive than other projects. Moreover, because the team had integrated already-tested modules from the repository, they achieved a lower error rate. Finally, the project manager was using a professional project-management and design method; other projects had a much lower maturity in these attributes. The problem was how to get this kind of productivity and error rate for all projects in the organization.

We made the following improvements:

♦ We recommended establishing a company-wide repository. The goal was to enhance productivity and quality across the organization. The recommendation was accepted, and the organization enhanced and refined its repository management and found a way to adapt the developed product for many additional customers. Within a year they had considerably increased their income.

♦ We suggested that each project manager use a project folder with guidelines for project management and design. This data was to be based on the experience of the project manager for the investigated project. The goal was to increase the efficiency of other project managers. All project managers were made to use the successful methodologies, with effective results. For example, the project managers established a quality circle to deal with the improvement of management capabilities. Continuously thinking about how to improve management processes has become an integral part of their company philosophy and strategy.

♦ We used the ISO certification functions and algorithms to provide information about the organization's ability to satisfy ISO attributes. The goal of this activity was to achieve ISO certification. This enabled the organization to establish an action plan that would lead to ISO certification. Managers used the Bootstrap profiling technique to identify where to start with improvements and began incorporating techniques such as the project folder. The folder has become part of the organization's quality manual, which is required for ISO 9001 certification.

The Bootstrap project was completed in 1993. However, since 1991 we have been field-testing the methodology in parallel with the project so as to base its evolution on practical experience and customer feedback. For example, we received much feedback from Robert Bosch, a German electronics and telecommunications company with about 180,000 employees. The company participated in the Bootstrap project and provided field testing and benchmarking for the development of the Bootstrap method. Also, Siemens has been using the profiling technique since 1992 at several sites and has provided feedback on its results.

We expect the Bootstrap methodology to continue evolving. The European Software Institute, founded at the end of 1993 to promote technology-transfer and improvement methodologies within Europe, plans to use Bootstrap as a strategic tool to assess and analyze software processes and to establish improvement programs. This will provide much experiential data, since the ESI is financed by 18 large European software houses. There is also the cooperation of ESPRIT projects in software metrics within the International Software Consulting Network [11], which will greatly help spread these methodologies in the European software industry. Finally, Bootstrap is supported by the European System and Software Initiative, a European program for supporting the introduction of new methodologies and technologies into industry.

Our immediate plans for Bootstrap include enhancing the profiling technique and addressing market sectors other than software production. Additionally, the ISCN will work on a more comprehensive process model, in which assessment is only the starting point and improvement processes represent the focus. This effort will combine the results of many ESPRIT projects that deal with process measurement and improvement.

REFERENCES

1. R. Cachia and M. Maiocchi, Middle Management Briefing, Deliverables 10 and 20, Bootstrap ESPRIT 5441, Commission of European Communities, 1992.
2. V. Haase, R. Messnarz, and R. Cachia, "Software Process Improvement by Measurement," in Shifting Paradigms in Software Engineering, R. Mittermeir, ed., Springer-Verlag, Berlin, 1992, pp. 32-41.
3. T. Bollinger and C. McGowan, "A Critical Look at Software Capability Evaluations," IEEE Software, July 1991, pp. 25-41.
4. The Software Process Maturity Questionnaire, Software Eng. Inst., Pittsburgh, 1994.
5. M. Paulk et al., "Capability Maturity Model for Software: Version 1.1," Tech. Report CMU/SEI-93-TR-25, Software Eng. Inst., Pittsburgh, 1993.
6. M. Paulk, B. Curtis, and M. Chrissis, Capability Maturity Model for Software, Software Eng. Inst., Pittsburgh, 1991.
7. W. Humphrey and W. Sweet, "A Method for Assessing the Software Engineering Capability of Contractors," Tech. Report CMU/SEI-87-TR-23, Software Eng. Inst., Pittsburgh, 1987.
8. Bootstrap Team, "Bootstrap: Europe's Assessment Method," IEEE Software, May 1993, pp. 93-95.
9. A. Huber, "A Better Way to Represent Bootstrap Data," IEEE Software, Sept. 1993, p. 10.
10. R. Grady, Practical Software Metrics for Project Management and Process Improvement, Prentice-Hall, Englewood Cliffs, N.J., 1992.
11. M. Biro, H. Kugler, R. Messnarz et al., "Bootstrap and ISCN: A Current Look at a European Software Quality Network," in The Challenge of Networking, D. Sima and G. Haring, eds., Oldenbourg, Vienna, 1993, pp. 97-105.


Volkmar Haase is a professor in applied informatics at the University of Technology in Graz, Austria. He is one of the chief developers of the Bootstrap method. His professional interests are software-process evaluation and applications of artificial-intelligence methods (especially fuzzy logic). Haase received a PhD in physics from the University of Graz. He is on the board of the Austrian Software Industry Association and the Austrian ISO 9000 Certification Agency and is a representative on various committees of the International Federation for Automatic Control.


Günter Koch is director of the European Software Institute and was manager of Bootstrap. He is a consultant in information-technology research and development for the European Community and has managed several of their projects. His interests are in system analysis, system structures, process analysis, and process improvement. Koch received an MSc in computer science at the University of Karlsruhe in Germany. He is a member of the IEEE and ACM.

Hans J. Kugler is with K&M Technologies. His interests include technology transfer and software-technology applications in telecommunications. Kugler received an MSc in computer science from the University of Dortmund, Germany, and an MA from Trinity College, Dublin. He is a founding member of ISCN, the International Software Consulting Network.

Paul Decrinis is a master's student of computer science at the University of Technology in Graz. His thesis is a comparison of Bootstrap and ISO. His main interest is in process analysis and modeling using workflow-management methods.

Address questions about this article to Messnarz at Institute for Software Technology, University of Technology, Graz, Austria; rmess@ist-tu-graz.ac.at.
