Evaluability Assessment of a Faculty Development Program


Balachandra Vishnu Adkoli(1), Vinod Kumar Paul(2), Kusum Verma(3)
(1) Educationist, (2) Co-ordinator, (3) Professor, In-charge
KL Wig Centre for Medical Education & Technology, All India Institute of Medical Sciences, New Delhi, India 110 029

Abstract

Faculty development programs in medical education have received impetus in the South East Asian Region. However, there is a need for a systematic approach to comprehensively evaluate the effectiveness and impact of these programs. A systems approach to program evaluation enables evaluators to conduct an evaluability assessment of these programs as a prelude to further evaluative activities. This paper presents an evaluability assessment of an on-going faculty development program at the All India Institute of Medical Sciences (AIIMS), India. The implications of the study for conducting a comprehensive evaluation of this and similar programs are discussed.

Key words: medical education, program evaluation, evaluability assessment

Introduction

The quality of health care depends to a large extent upon the quality of training and education of medical graduates(1). The faculty of medical colleges occupy a strategic role in shaping the professional competence of medical graduates(3). They have a threefold responsibility of teaching, patient care and research. While medical teachers may be highly proficient in their own branches of specialization, their skills in teaching often remain doubtful(2).

The KL Wig Centre for Medical Education and Technology at the All India Institute of Medical Sciences (AIIMS), New Delhi was established in 1990 with two major responsibilities, viz., to promote faculty development activities at the institutional and national level, and to provide a central facility for media production. As part of its faculty development activity, the Centre has been holding in-house as well as national workshops periodically to sensitize medical teachers to medical education technology. The present study pertains to the in-house workshop, whose target group consists of AIIMS faculty members who have not previously undergone any training in educational technology. The main aim of the workshop is to sensitize faculty members to educational technology as applicable to medical education, so that they are able to improve instruction and provide educational leadership.

Faculty development programs have become increasingly popular. However, a weakness of such programs is the evaluation component. It is difficult to judge whether the program goals have been attained and, if so, to what extent; what the strengths and limitations of the program are; and whether the program has made any impact on the practice of the participants. Recent interest in the field of program evaluation has enabled program organizers to clarify their evaluation questions, and to obtain answers to these questions in a more systematic and scientific manner(4). Program evaluation is a systematic effort which runs throughout the program, right from its inception, through planning and implementation, to the assessment of outcome at the end of the program and the assessment of impact after the program.

Evaluability assessment is a method for examining a program prior to conducting other definitive evaluation activities(5). As a pre-evaluation activity, the evaluator works with the principal stakeholders to clarify program intent and to define feasible evaluation goals and methods that will maximize the usefulness of the intended evaluation. An evaluability assessment provides an early opportunity to review program history and describe actual program activities. Thus, it helps to identify key stakeholders, clarify evaluation questions, and adopt appropriate methods of evaluation based on the program realities and the constraints of resources and time. In some cases, it serves as a formative evaluation in itself and may generate sufficient information for stakeholders to fine-tune a program at an early stage(5).

Materials and Methods

An evaluability study of the medical education technology program was conducted in accordance with the guidelines developed by Rossi(6). The following strategy was adopted: i) document analysis; ii) interviews with past participants and their Heads of Departments; iii) informal communication with other centres of medical education; and iv) review of the literature in the field of program evaluation.

The authors, by virtue of their close involvement in the program, had access to all the program documents, viz., the genesis of the program, the reports of earlier workshops, communications with the participants and the Heads of Departments who nominate faculty members for the workshops, and the evaluation reports furnished by past participants. In-depth interviews were held with six participants who had attended earlier workshops, in order to assess the relevance and utility of the program in their educational practice. For reviewing the course contents, opinion was elicited from two experts in the field who run similar programs in select centres in the country. Finally, an extensive review of the literature on program evaluation was carried out, especially for developing a conceptual model which helped in analyzing the inter-relationships between the various components of the program.


Results and Discussion

A program can be conceptualized as a system of interrelated components working towards the goal of producing a desirable outcome, which in turn is expected to make an impact on the practice of the participants (Table 1). The program in medical education technology is an intervention which is expected to produce a desirable outcome, viz., an increase in the competency of participants, which in turn would influence their day-to-day educational practice.

Inputs: These refer to all kinds of resources, including physical, technical, financial and human resources, which contribute to the program. The entry behaviour of the participants (their previous knowledge, skills, disciplinary affiliation and motivation) constitutes a major input variable. Other inputs are the quality of instructional support by the resource persons, the relevance and quality of the course material, and infrastructure variables, viz., venue, physical facilities, audio-visual equipment, seating plan, and the time and cost involved in each item.

Process: This refers to the set of activities in which the inputs are utilized in pursuit of the results expected from the program. It includes implementation, monitoring and the identification of strengths and limitations, so that corrective action can be taken to improve the program. It also includes the teaching-learning strategies employed in the program, including the learning climate.

Outcomes: Outcomes refer to the results obtained at the end of the program as well as after the program, i.e., in the practice setting of the participants. Outcomes can be operationalized in terms of a) gains made by the participants in knowledge and skills at the end of the program; b) changes in the attitude of participants, and their satisfaction level; and c) unanticipated outcomes of the program.

Impact: Impact refers to the changes taking place in the relevant behaviour and practice of the participants which can be attributed to the intervention, i.e., the program. Impact measurement is usually done after a period of six months or more.

Analysis of various components

Program goals - the need for stakeholders' involvement: Interviews held with past participants revealed that the program goals have been identified mainly by the organizers, based on the latter's perception of the needs of the faculty members. It is desirable to conduct a training need assessment from the point of view of the participants, the Heads of Departments and the students. The emerging needs may be further prioritized according to the constraints of time and resources. An exercise of clarifying the evaluation issues jointly by the stakeholders would not only help in ensuring user participation, but also make the evaluation process more meaningful and feasible.

Table 1: Program in Medical Education Technology: A Conceptual Model

INPUT
  Individual variables: previous knowledge; age/sex/designation; discipline; motivation.
  Training inputs: resource personnel; course material; physical facilities, infrastructure and cost.

PROCESS
  Teaching-learning strategies; monitoring of implementation.

OUTCOME (program level)
  Increased competency (knowledge/skill); participant satisfaction; unanticipated outcomes.

IMPACT (practice environment)
  Increased application and practice of knowledge/skill; climate for facilitating changes.
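As an aside, the conceptual model of Table 1 can be held as a simple data structure to which indicators and data sources can later be attached. The following minimal sketch (Python; not part of the original program, and the structure and names are illustrative only) shows one way of doing so:

    # Illustrative sketch (not part of the original program): the conceptual
    # model of Table 1 expressed as a simple data structure, so that indicators
    # and data sources can later be attached to each component.
    conceptual_model = {
        "input": {
            "individual_variables": ["previous knowledge", "age/sex/designation",
                                     "discipline", "motivation"],
            "training_inputs": ["resource personnel", "course material",
                                "physical facilities, infrastructure, cost"],
        },
        "process": ["teaching-learning strategies", "monitoring of implementation"],
        "outcome (program level)": ["increased competency: knowledge/skill",
                                    "participant satisfaction",
                                    "unanticipated outcomes"],
        "impact (practice environment)": ["increased application and practice of knowledge/skill",
                                          "climate for facilitating changes"],
    }

    for component, variables in conceptual_model.items():
        print(component.upper(), "->", variables)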

Subject to a detailed analysis, the following tentative conclusions can be drawn:
1. It is necessary to examine whether the objectives of the program are coherent with the actual needs of the participants.
2. The objectives framed should be further supported by criteria of attainment (indicators). For example, how do we measure the extent of application of knowledge and skills by the participants, or the educational leadership emanating from workshop participation? It is therefore necessary to probe into this issue and come out with a list of success criteria (indicators) which would be useful in conducting the evaluation(6,7).

Input Evaluation

Participants' profile and selection: The participants are nominated by the Heads of Departments (HODs). There are no specific criteria for selection. For each workshop, not more than 24-30 participants are chosen on a "first come, first served" basis. Here, an assumption is made that the participants' individual characteristics (e.g., previous knowledge, motivation level, experience) do not play a major role in how they receive the program intervention, which may not be true. This requires further deliberation. However, there is reason to believe that a well-designed pre-test could measure previous knowledge, which would form a baseline for measuring the gains made during the program.

Resource persons: The resource persons for the workshop are drawn from amongst the adjunct faculty of the CMET, who have been trained in educational technology. No attempt is made to assess the effectiveness of the resource persons, although the program evaluation questionnaire includes a question about the usefulness of individual sessions, thus indirectly touching on the effectiveness of a resource person. It is necessary to address this issue.


Relevance and quality of course content: An outline of the contents of the medical education technology workshop is shown in Table 2.

Table 2: Specimen Programme of the Workshop on Medical Education Technology

Day 1
  Introductory Session
  Curriculum: outline and educational objectives
  Hands on: "Formulation of objectives"
  Overview of Assessment Strategies in Medical Education
  Essay Type Questions; Short Answer Questions
  Hands on: Essay/SAQ
  MCQs; Hands on: MCQs
  Presentation/Feedback: Essay/SAQs/MCQs
  Micro teaching (two groups)

Day 2
  OSCE/OSPE - Introduction
  Demonstration of OSCE/OSPE
  Comments of participants
  Principles of Teaching-Learning
  Overview of Media in Learning: OHP, 35 mm Slides
  Hands on: OHP/35 mm Slides
  Presentation of OHP/Slides

Day 3
  Instructional Text Designing
  Video lecture demonstration
  Computer in Education
  Multimedia & Internet in Medical Education
  Program Evaluation & Valedictory Session

The opinion of the experts reveals that the program covers a wide band of topics, ranging from curriculum planning and the formulation of educational objectives to teaching-learning methodology, the role of audio-visual media, and assessment strategies as applicable to undergraduate education. Another source of evidence on the adequacy of the content is the program evaluation questionnaire, which elicits the opinion of the participants on the 'relevance of the topics' and the 'balance of theoretical versus practical' content. It is necessary to conduct a detailed content analysis of the topics vis-a-vis the training need analysis, to see whether the contents are matched with the needs. This can be done by soliciting expert judgment.


Venue and physical facilities: The opinion of the participants as expressed in the program evaluation questionnaire is the only source of evidence. The earlier documentation does not throw much light on the cost efficiency of these inputs. It is desirable to analyze these inputs in terms of a cost-benefit analysis.

Process Evaluation

The program is conducted in a workshop format, so as to facilitate experiential learning. Didactic presentations are kept to a minimum. The participatory techniques include individual tasks, group discussion/group work, case studies, problem-solving exercises, demonstrations and, sometimes, role play. Each session is conducted by a resource person, who briefly introduces the topic and assigns a task or a problem to be performed by the participants. The participants work in small groups, and a rapporteur chosen from amongst the group presents its findings to the whole group. The participants are provided with course materials and handouts during the sessions. Audio-visual equipment, viz., the overhead projector (OHP), 35 mm slides, video and computers, is used extensively during the workshop. The participants are provided with efficient secretarial and technical assistance, especially in the case of "hands on" sessions requiring them to prepare a learning resource.

Daily evaluation: One source of feedback from the participants is the Daily Evaluation Proforma, which is administered to the participants at the end of each day (Table 3). The objective of the Daily Evaluation Proforma is to make a rapid survey of the extent of usefulness of each session held during the day, the factors which facilitated learning, and the factors which hindered it, along with comments and suggestions.


Table 3: Daily Evaluation Proforma

1. Please comment on the usefulness of the various sessions held today (Highly useful / Useful / Fairly useful / Not useful):
   Session 1:
   Session 2:
   Session 3:
   Session 4:
2. Mention the factors which facilitated your learning today:
3. Mention the factors which hindered your learning today:
4. Any other comments/suggestions:

On each day of the workshop, a rapporteur is appointed from amongst the participants to record a summary of the entire proceedings of the day. The rapporteur also helps to compile the Daily Evaluation Proforma administered to the participants, which is presented to the participants on the next day before the start of the fresh sessions.

The information gathered through the protocols mentioned above is insufficient for judging the strengths and deficiencies of the various sessions. The participants, and even the rapporteur, are unlikely to reveal adverse comments in particular; there is also a fear of being 'exposed' in the presence of others. It is necessary to evaluate the extent to which the various sessions facilitate active participation. This requires participant observation by the evaluator, based on a scientifically developed observation instrument. Such an instrument should help in measuring the degree of participation and in identifying the strengths and weaknesses of the sessions so that mid-course corrections can be made.

Outcome Evaluation

The main outcomes expected at the end of the program are an increase in the knowledge and skills of the participants, increased application of educational technology, and the development of educational leadership.

Pre-test/post-test: The participants are administered a pre-test at the beginning of the program and a post-test at the end to assess the relative improvement in their performance. These tests are essentially knowledge based. A Program Evaluation Questionnaire is administered to the participants on the last day of the workshop to elicit their opinion on various aspects of the process (Table 4). The responses of the participants to the post-workshop questionnaire are analyzed and included in the proceedings of the workshop. The participants are encouraged to give their free and frank opinions, as their responses are kept anonymous.

Table 4: Post-Workshop Questionnaire

Dear Participant,
The purpose of this questionnaire is to obtain your feedback on the effectiveness of the sessions which you underwent during the workshop. Your response will help us in improving such activity in future.

Questions 1-7 (Yes / No / Not sure):
1. Were the objectives of the workshop largely achieved?
2. Do you find the workshop useful for your professional activities?
3. Did the workshop elicit your active participation?
4. Were the audio-visual arrangements made during the workshop satisfactory?
5. Do you think you will be able to implement the techniques learnt during the workshop in your practice?
6. Were the arrangements made during the workshop satisfactory?
7. Do you recommend the organization of similar activity for the benefit of your colleagues?
8. Does the program have a balance of theory and practical? (Too much theory / Too much practical / Optimum theory and practical)
9. Was the time estimation satisfactory? (Program was too tight / Program was too relaxed / Program was optimum)
Mention sessions which were found especially useful:
Mention sessions which, you think, are not necessary:
Any other comments/suggestions:
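By way of illustration only (the scores and responses below are invented, not drawn from the program's records), the pre-test/post-test gains and the Yes/No/Not sure items of the questionnaire above could be summarized along the following lines; the paired t statistic is simply one plausible summary of the gain:

    # Illustrative sketch with hypothetical data: summarizing paired
    # pre-/post-test scores and one Yes/No/Not sure questionnaire item.
    from collections import Counter
    from math import sqrt
    from statistics import mean, stdev

    # Hypothetical paired pre-/post-test scores (marks out of 20), one pair per participant.
    pre  = [8, 10, 7, 12, 9, 11, 6, 10]
    post = [14, 15, 12, 16, 13, 17, 11, 15]

    gains = [after - before for before, after in zip(pre, post)]
    mean_gain = mean(gains)
    # Paired t statistic: mean gain divided by its standard error.
    t_stat = mean_gain / (stdev(gains) / sqrt(len(gains)))
    print(f"Mean gain: {mean_gain:.2f} marks; paired t = {t_stat:.2f} (n = {len(gains)})")

    # Hypothetical responses to question 1 of the post-workshop questionnaire (Table 4).
    q1_responses = ["Yes", "Yes", "Not sure", "Yes", "No", "Yes", "Yes", "Not sure"]
    counts = Counter(q1_responses)
    for option in ("Yes", "No", "Not sure"):
        print(f"Objectives largely achieved - {option}: {counts[option]}/{len(q1_responses)}")

Such a summary addresses knowledge gains and opinions only; as discussed below, skill development and unintended effects require other methods.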


The participants are also encouraged to give informal feedback during tea and lunch breaks, or in any other free time between the deliberations. The valedictory session of the workshop includes a slot for the participants to express their views on the workshop.

As regards outcome evaluation, the present approach (e.g., pre-test/post-test) is inadequate to measure gains in competence, especially skill development. In addition, unintended effects are totally overlooked. Some of the unintended effects reported by past participants are increased utilization of media services, positive interpersonal relations amongst participants, and decreased resistance to change. Resistance to change is inherent in any organization, and is linked with a lack of awareness of, or involvement in, a given activity. The workshop participants, by virtue of a clear perception of the advantages of educational strategies, are likely to act as "change agents". This needs an in-depth probe.

Impact Evaluation

The evaluation of impact is grossly missing in the present evaluation. One cannot be sure whether the knowledge and skills learnt during the program are translated into actual practice. Where changes are produced, one cannot ascertain that they can be attributed to the intervention. If no changes are brought about, it would be necessary to investigate the reasons for the lack of change. The lack of effort to address impact issues appears to be a problem of feasibility. There is also a general feeling that a period of 5-6 years is inadequate for any discernible change to appear in the practice environment.

The Need for a Comprehensive Evaluation

An extensive treatise on current trends in the field of program evaluation by Shadish and Cook suggests that no single model of evaluation can fit in total in a given evaluation situation(4). Peter Rossi's ideas on comprehensive evaluation, tailored evaluation and theory-driven evaluation appear to provide a conceptual framework for carrying out evaluation activities in the future(6). According to Rossi, evaluation should be comprehensive, addressing all three stages, viz., i) conceptualization and design, ii) monitoring and implementation, and iii) assessment of program utility, including effects, impact and cost efficiency. Rossi's idea of tailored evaluation enables evaluators to fit the evaluation strategy to a particular phase of the program, viz., an innovative (new) program, a program undergoing fine-tuning (implementation), or an established program (existing for quite some years). The concept of theory-driven evaluation helps evaluators to identify and establish causal links between the program intervention and the resultant outcome and impact.


As regards the choice of evaluation method, the field has witnessed a healthy debate between those who advocate quantitative, experimental methods and those who recommend naturalistic inquiry and qualitative methods(4,8). The consensus appears to be that the choice of evaluation methodology depends upon the kind of evaluation questions posed by the stakeholders. In order to evaluate input, process, outcome and impact, no single method of evaluation would be adequate. It is therefore proposed to utilize multiple methods drawn from quantitative and qualitative approaches(4,9). Such a comprehensive evaluation should: a) respond to the information needs of all its stakeholders, which should be reflected in the program goals; b) include built-in mechanisms for identifying and overcoming deficiencies (process evaluation); c) encompass both intended and unintended outcomes at the end of the program and their effects (outcome evaluation); and d) address the impact of training on the practice of the participants (impact evaluation). Based on the plausible questions, a strategy for the evaluation of the various components has been worked out (Table 5).


Table 5: Outline of Evaluation Issues and Tools/Techniques

Focus of evaluation: Conceptualization stage (planning)
  Questions: Are the goals and objectives in conformity with the needs as identified by the stakeholders?
  Tools & techniques: Document analysis (review of policy documents); interviews with stakeholders (organizers, users, Heads of Departments).

Focus of evaluation: Designing and implementation stage
  Questions: To what extent is the program implemented according to the plan? What problems arise in the implementation? What aspects are deficient? What mid-course corrections are necessary?
  Tools & techniques: Questionnaires (Daily Evaluation Proforma); focus group discussion with some participants; interviews (formal and informal); observation of the sessions using a schedule.

Focus of evaluation: Outcome evaluation stage (end of the program)
  Questions: To what extent have the program objectives been attained?
  Tools & techniques: Program evaluation questionnaire; competence testing by pre- and post-tests.

Focus of evaluation: Outcome evaluation stage (after the program; follow-up 3-6 months after the program)
  Questions: To what extent are the participants able to implement the knowledge? To what extent did the program have unintended effects?
  Tools & techniques: Interviews with participants and users (HODs); observation of the practice.

Focus of evaluation: Impact assessment
  Questions: Are the changes attributable to the program intervention?
  Tools & techniques: Quasi-experiment with pre-/post-test matched control group design, with a time-series design with multiple observations.
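As a purely illustrative sketch (with invented numbers, not data from the program), the quasi-experimental design listed in the last row of Table 5 could be summarized with a simple difference-in-differences calculation, comparing the change in a hypothetical practice indicator among workshop participants with the change in a matched control group:

    # Illustrative sketch (hypothetical data): a difference-in-differences
    # style summary for the quasi-experimental design proposed in Table 5.
    from statistics import mean

    # Hypothetical practice indicator (e.g., structured assessments used per term),
    # measured before and after the workshop for participants and matched controls.
    participants_pre  = [2, 1, 3, 2, 2, 1]
    participants_post = [5, 4, 6, 5, 4, 5]
    controls_pre      = [2, 2, 1, 3, 2, 2]
    controls_post     = [2, 3, 2, 3, 2, 3]

    change_participants = mean(participants_post) - mean(participants_pre)
    change_controls     = mean(controls_post) - mean(controls_pre)
    impact_estimate     = change_participants - change_controls  # difference-in-differences

    print(f"Change among participants: {change_participants:+.2f}")
    print(f"Change among controls:     {change_controls:+.2f}")
    print(f"Impact estimate (difference-in-differences): {impact_estimate:+.2f}")

In practice, the choice of indicator, the matching procedure and the number of observation points in the time series would be decided jointly with the stakeholders, as outlined in Table 5.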

It can be seen that the proposed strategy covers most of the evaluative questions and outlines the methodology most suitable for answering them. This entails extensive work on the part of the evaluator, right from the inception of the program up to the period of follow-up and practice by the participants in their work settings.


It is necessary to develop a set of valid and reliable tools for measuring the variables. Indicators should also be developed for judging the success criteria. Once the tools are developed and administered and the data obtained, the evaluator should analyze the data and again sit with the stakeholders to provide them with a meaningful interpretation of the strengths and weaknesses of the program.

Conclusions

The evaluability study of the medical education technology program conducted at AIIMS has brought into focus the need for sharpening program evaluation. There is a need to ask the right type of questions, and to administer the right kind and combination of tools to gather information with a view to choosing among decision alternatives. There is a case for adopting a comprehensive evaluation covering the inputs, the process, the outcome and the impact, on a short-term and a long-term basis. These strategies are bound to be useful in strengthening future programs conducted by the Centre. At the same time, they are likely to provide new insight into the evaluation of similar programs conducted by other agencies under similar circumstances.

Acknowledgements

The authors are grateful to the Director, AIIMS, New Delhi, and the adjunct faculty members of the KL Wig Centre for Medical Education & Technology for their co-operation in the study. The suggestions received from Dr. Mamota Das, Professor and Head, Department of Education, Annamalai University, Dr. Kameshwar Prasad, Dr. P.T. Jayawickramarajah, and Mr. J. Seaberg are gratefully acknowledged.

References

1. Government of India. National health policy. New Delhi: Ministry of Health and Family Welfare, Nirman Bhavan, 1983:1-17.
2. Sajid AW, McGuire CH, Veach RM, et al., eds. International handbook of medical education. Westport, Connecticut: Greenwood Press, 1994:1-12.
3. Bajaj JS. Draft national education policy for health sciences. Indian Journal for Medical Education 1989; 29:36-55.
4. Shadish WR, Cook TD, Leviton LC. Foundations of program evaluation: theories of practice. London: Sage Publications, 1991.
5. INCLEN XIII. Program evaluation & the future of INCLEN: workshop handbook. Canada: McMaster University, 1996.
6. Rossi PH, Freeman HE. Evaluation: a systematic approach. London: Sage Publications, 1985.
7. Bertrand JT, Magnani RJ, Rotenberg N. Evaluating family planning programs - with adaptation of reproductive health. Chapel Hill, NC: The Evaluation Project, 1996.
8. Bertrand JT, Tsui A, eds. Indicators for reproductive health program evaluation. Chapel Hill, NC: The Evaluation Project, 1995.


9. Jayawickramarajah PT. How to evaluate educational programs in the health professions. Medical Teacher 1992; 14:159-166.
10. Patton MQ. Qualitative evaluation and research methods. London: Sage Publications, 1990.
