A new COBRAS framework to evaluate e-government services: a citizen centric perspective


tGov Workshop’11 (tGOV11), March 17-18, 2011, Brunel University, West London, UB8 3PH

A NEW COBRAS FRAMEWORK TO EVALUATE E-GOVERNMENT SERVICES: A CITIZEN CENTRIC PERSPECTIVE

Ibrahim H. Osman, American University of Beirut, Olayan School of Business, Lebanon. [email protected]
Abdel Latef Anouze, American University of Beirut, Olayan School of Business, Lebanon. [email protected]
Zahir Irani, Brunel Business School, Brunel University, UK. [email protected]
Habin Lee, Brunel Business School, Brunel University, UK. [email protected]
Asım Balcı, Türksat, Selçuk University, Turkey. [email protected]
Tunç D. Medeni, Türksat, Middle East Technical University, Turkey. [email protected]
Vishanth Weerakkody, Brunel Business School, Brunel University, UK. [email protected]

Abstract

E-government services involve a large number of stakeholders. The literature contains a large number of fragmented papers on e-government models that aim to evaluate an e-government service's efficiency and effectiveness from a general perspective, but little effort has been made to provide a holistic evaluation model from a specific stakeholder's perspective. In this paper, a holistic evaluation framework (COBRAS) is proposed based on the measurement factors that most strongly affect user satisfaction with an e-government service. These factors are identified from the published literature, classified into four groups and validated with the help of e-government experts and users: Cost, Opportunity, Benefit, and Risk Analysis for Satisfaction. The framework balances the user's cost and risk of engaging with an e-service against the associated benefit and opportunity of that e-service. The balanced analysis determines the degree of user satisfaction, which ultimately ascertains the success of e-service take-up. A validated 49-item questionnaire was administered to a sample of 2785 users of the Türksat e-government portal in Turkey and analyzed using confirmatory factor analysis and structural equation modeling to establish the hypothesized relationships.
The proposed framework is demonstrated as a useful tool for evaluating user satisfaction and the success of e-government services.

Keywords: e-government service evaluation, cost-benefit analysis, risk-opportunity analysis, structural equation modeling, user satisfaction, COBRAS model.

1. INTRODUCTION

E-government services involve many stakeholders, such as citizen and business users; government employees; information technology developers; and government policy makers, public administrators and politicians (Rowley, 2011). Each stakeholder has different interests, costs, benefits and objectives that affect the success and take-up of e-government services. Moreover, e-government is a dynamic socio-technical system encompassing several issues, starting from governance; policy


development; societal trends; information management; interaction; and technological change; to human factors (Dawes, 2009). In the literature, there have been a large number of models/frameworks to evaluate e-government success for different purposes or from different perspectives (Jaeger and Bertot, 2010). Although these models aim to help policy makers and practitioners evaluate and improve e-government services, little effort has been made to provide a holistic framework for evaluating e-services and their interaction with citizens (Wang et al., 2005). Moreover, e-government success is a complex concept, and its measurement is multi-dimensional in nature (Wang and Liao, 2008; Irani et al., 2007; 2008; Weerakkody and Dhillon, 2008). Therefore, in this study, a new conceptual framework to measure success from the users' satisfaction perspective is proposed. The framework development methodology follows a grounded theory approach in which an extensive literature review of existing e-government measurement models is conducted to identify the various fragmented success factors (key performance indicators, KPIs). The aim is to propose a holistic framework that can be re-used for evaluating any e-service in any country. The identified measures are then grouped into four main categories: cost; benefit; risk; and opportunity. The proposed holistic assessment model measures a user's satisfaction in terms of the user's cost-benefit and risk-opportunity from engaging with an e-service. This approach is in line with the recent literature that considers stakeholders' costs, benefits, outcomes, outputs and impacts in conceptual e-service evaluations (Millard, 2008; Lee et al., 2008; Rowley, 2011). The current evaluation takes into account the operational assessment of an e-government service's efficiency and the output and outcome effectiveness of the service delivery.
Hence, policy makers would gain an overall understanding of an e-government service's (e-service's) capability, and consequently better improvement policies can be developed for unsuccessful e-services. The remainder of the paper is organized as follows. Section 2 presents background on the evaluation of e-government success models and frameworks. Section 3 introduces the new framework and its assessment components. Section 4 discusses the methodological approach for the validation process, data collection, and data analysis on a selected sample of e-services in Turkey. The final section concludes with managerial implications and further research directions.

2. BACKGROUND ON THE EVALUATION OF E-GOVERNMENT SUCCESS MODELS

There have been numerous attempts by e-government researchers and practitioners alike to present a set of guidelines to bridge the gap between theory and practice in e-government architectural design (Meneklis and Douligeris, 2010). An investigation of the literature on conceptual models/frameworks for evaluating user satisfaction with e-government services covers numerous publications (Rowley, 2011; Jaeger and Bertot, 2010; Verdegem and Verleye, 2009; Hammer and Al-Qahtani, 2009; Irani et al., 2008; Wang and Liao, 2008; Esteves and Joseph, 2008; van Dijk et al., 2008; Nour et al., 2008; Ghapanchi et al., 2008; Zarei and Ghapanchi, 2008; Azad and Faraj, 2008; Irani et al., 2007; Kim et al., 2007; Gouscos et al., 2007; Grant and Chau, 2005; Moon et al., 2005; Evans and Yen, 2005; Gupta and Jana, 2003; Holliday, 2002; Mechling, 2002; Federal CIO Council, 2002). These models and frameworks can be classified into three categories by evaluation perspective: e-government value evaluation models; e-government success evaluation models; and e-government service quality evaluation models.

2.1 E-government Value Measurement Models

According to Mechling and Hamilton (2002), the e-government Value Measurement Model (VMM) was introduced by Harvard University in response to a request from the US government and was released by the Best Practices Committee of the US Federal CIO Council (2002) to assess the value and usage of e-government websites and projects based on a multidimensional analysis of cost/benefit, social, and political factors. The VMM framework includes five value factors: direct user value; social/public value; government financial value; government operational/foundational value; and strategic/political value (Foley, 2006). It starts by developing a set of values for each factor, including costs, risks, tangible returns, and intangible returns for each service. These values are measured through a set of dimensions/elements, and scores are then assigned to each element/dimension. Accordingly, it becomes possible to make yes/no decisions in a fairly objective and repeatable manner


for each element. The VMM approach allows comparison of different values (cost; risk; return) among e-government services. The US Federal CIO Council has developed a model based on VMM to assess the value of US e-services. Mechling and Hamilton (2002) extended the VMM model to include six essential factors, among them cost/benefit analyses and a project's political and social value, to assess e-government projects. Gouscos et al. (2007) proposed a different model to assess the quality and performance of one-stop e-government service offerings. Gupta and Jana (2003) suggested a different methodology in terms of the tangible and intangible economic benefits that an e-government service can produce. It should be noted that Gupta and Jana's model can be considered a subset of the first two models. Moreover, the VMM model was designed to provide policy makers with qualitative data that help in assessing the potential benefits of using e-services. Although the published VMM studies shed light on the performance of e-government services from both user and government perspectives, none of them considered monitoring and evaluating performance at the level of an individual e-service or across a number of e-services.

2.2 E-government Success Models

E-government success (or maturity) models build on the information systems success model introduced by DeLone and McLean (1992); the D&M model was later updated by DeLone and McLean (2003) to measure the success of any e-commerce information system. It consists of six dimensions of success factors: system quality; information quality; service quality; system use; user satisfaction; and net benefits. Information quality involves features such as accuracy, relevancy, precision, reliability, completeness, and currency. System quality refers to ease of use, user friendliness, system flexibility, usefulness, and reliability. Based on this evaluation model, any online service can be evaluated in terms of information, system, and service quality. These dimensions affect subsequent use or intention to use and user satisfaction; as a result of using the online services, certain benefits will be achieved. The benefits will (positively or negatively) influence user satisfaction and further use of the information system. Many researchers have adopted the D&M model to assess e-government success, including Wang and Liao (2008); Chen (2010); Floropoulos et al. (2010); and Jang (2010). Jang (2010) employed the updated D&M model to measure e-Government Procurement (e-GP) system success. Results showed that information quality, system quality, and service quality had a significant effect on individual performance through usage of and user satisfaction with an e-GP system. In addition, the key antecedents to user satisfaction and system usage differed between high and low computer self-efficacy users. Floropoulos et al. (2010) adopted the D&M model to measure the success of the Greek Tax Information System. The results provided evidence of strong connections among the success constructs: all hypothesized relationships were supported, except the relationship between system quality and user satisfaction. Furthermore, Chen (2010) used the D&M model to evaluate the online tax-filing system in Taiwan.
Structural equation modelling results confirmed that the quality antecedents strongly influence taxpayer satisfaction with the online tax-filing system; the information and system quality factors were more important than service quality in measuring taxpayer satisfaction. Wang and Liao (2008) adopted the D&M model to assess the success of e-government systems in Taiwan; their results showed that the hypothesized relationships between the six success factors were significantly supported by the data, except the link from system quality to use. Unlike the VMM models, the D&M models pay more attention to the quality of technology and to user benefits, with less attention to other dimensions such as cost, risk and opportunity that are very important to users' satisfaction in the VMM view. Hence, combining both D&M and VMM measurement factors should provide a more accurate understanding of overall e-government success, as is verified in the current work.

2.3 E-government Service Quality Models

The best-known e-government service quality model is SERVQUAL, proposed by Parasuraman et al. (1988; 2005). The Parasuraman et al. (1988) SERVQUAL model consists of 22 service quality measures organized in five dimensions: tangibles (appearance of physical facilities, equipment, personnel and communication materials); reliability (ability to perform the promised service dependably and accurately); responsiveness (willingness to help customers and


provide prompt service); assurance (knowledge and courtesy of employees and ability to convey trust and confidence); and empathy (provision of caring, individualized attention to customers). A huge number of research papers have expanded or updated the SERVQUAL model. For instance, Iwaarden et al. (2003) expanded the SERVQUAL model; the resulting model includes five quality dimensions corresponding to those of the initial SERVQUAL model, with their meaning adapted to the specificities of websites: tangibles (appearance of the website, navigation, search options and structure); reliability (ability to judge the trustworthiness of the offered service and the organization performing the service); responsiveness (willingness to help customers and provide prompt service); assurance (ability of the website to convey trust and confidence in the organization behind it with respect to security and privacy); and empathy (appropriate user recognition and customization). Later, Parasuraman et al. (2005) developed and tested E-S-QUAL as a new measure of e-service website quality. E-S-QUAL is a 22-item scale with four dimensions: efficiency; fulfilment; system availability; and privacy. Moreover, Parasuraman et al. (2005) developed another model (E-RecS-QUAL) directed only at non-routine website users. It contains 11 items in three dimensions: responsiveness, compensation, and contact. Other researchers have proposed new service quality models.
For instance, Huang and Chao (2001) asserted that e-government websites should be evaluated based on usability principles, i.e., websites should follow a user-centred design that allows users to effectively reach the information they desire. Holliday (2002) proposed a set of evaluation criteria for the usefulness of e-government websites, including factors such as the amount of information about the government, contact information, feedback options, search capabilities, and related links. Balog et al. (2008) proposed an e-ServEval model for quality evaluation of e-services, while Papadomichelaki and Mentzas (2009) developed an e-government service quality model (e-GovQual) that consists of 25 quality attributes classified into four quality dimensions: reliability (the feasibility and speed of accessing, using and receiving the site's services, measured by 6 items); efficiency (ease of using the site and the quality of information it provides, measured by 11 items); user support (the ability to get help when needed, measured by 4 items); and trust (the degree to which the user believes the site is safe from intrusion and protects personal information, measured by 4 items). Liu et al. (2010) established an e-government website evaluation index system using the analytic hierarchy process (AHP). The components of the index system are: content (practical, comprehensive, accurate, timely, transparent and unique); function (network office, online communication, online monitoring, opinion survey); technology (convenience, availability, security); and other (website content protection, adaptability).
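To illustrate the AHP step used in such index systems: priority weights for the top-level criteria are derived from a reciprocal pairwise-comparison matrix. The sketch below uses the common geometric-mean approximation of the principal eigenvector; the judgment values are hypothetical and are not taken from Liu et al. (2010).

```python
import math

def ahp_weights(pairwise):
    """Derive priority weights from a reciprocal pairwise-comparison
    matrix using the geometric-mean (row geometric mean) method."""
    n = len(pairwise)
    geo = [math.prod(row) ** (1.0 / n) for row in pairwise]  # geometric mean of each row
    total = sum(geo)
    return [g / total for g in geo]  # normalize so weights sum to 1

# Hypothetical judgments on Saaty's 1-9 scale for four criteria:
# content, function, technology, other.
matrix = [
    [1,     3,     5,     7],
    [1 / 3, 1,     3,     5],
    [1 / 5, 1 / 3, 1,     3],
    [1 / 7, 1 / 5, 1 / 3, 1],
]
weights = ahp_weights(matrix)
```

With these judgments, content receives the largest weight and the weights sum to one, which is the property the evaluation index relies on.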
Furthermore, building on previous e-services research, Fassnacht and Koese (2006) developed a broadly applicable hierarchical quality model for e-services consisting of three dimensions and nine subdimensions: environment quality (graphic quality, clarity of layout); delivery quality (attractiveness of selection, information quality, ease of use, technical quality); and outcome quality (reliability, functional benefit, emotional benefit). Rowley (2006) proposed a framework that includes website features; security; communication; information; accessibility; delivery; reliability; customer support; responsiveness; and personalization. The model of Halaris et al. (2007) for assessing the quality of e-government services consists of four layers: a back office performance layer (including factors from quality models for traditional government services); a website technical performance layer (website performance, such as reliability and security); a website quality layer (interface and usability); and a user's overall satisfaction layer. Esteves and Joseph (2008) suggested a three-dimensional ex-post framework for the assessment of e-government initiatives. The three dimensions are e-government maturity level, stakeholders, and assessment levels; the assessment levels consider the technological, strategic, organizational, operational, service, and economic aspects. Jansen et al. (2010) proposed a Contextual Benchmark Method (CBM) based on the Modelling Approach for Designing Information Systems (MADIS) framework by Essink (1988).
CBM consists of three levels and five aspects. The first level is the group of organizations involved in the benchmarking (benchmark


partners). The second level is the individual organization involved in the benchmarking exercise (organization). The third level is the e-government services that are analysed (service). The five aspects are: goal (CBM is an organized set of elements and relationships between them, focused on achieving a set of organizational goals); respondents (users who evaluate an electronic service using indicators); indicators (the indicators that should be measured in a benchmarking exercise); methods (the different methods to be used to produce the knowledge that is needed); and infrastructure (the availability of hardware and software). On the other hand, a few researchers adopted ISO/IEC 9126 to evaluate e-service quality, such as Behkamal et al. (2009) and Chutimaskul et al. (2008). Behkamal et al. (2009) proposed six quality dimensions: functionality (suitability, accuracy, interoperability, security, traceability); reliability (maturity, fault tolerance, recoverability, availability); usability (understandability, learnability, operability, attractiveness, customizability, navigability); efficiency (time behavior, resource utilization); maintainability (analyzability, changeability, stability, testability); and portability (adaptability, installability, co-existence, replaceability), whereas Chutimaskul et al. (2008) integrated ISO/IEC 9126 with the D&M model to measure Thailand's e-government development success. Finally, a few user-centric models have recently been suggested to address the shortfall of the three previously mentioned categories. For instance, Rowley (2011) argued that any successful e-government service should deliver the following user benefits: ease of use; accessibility and inclusivity; and confidentiality and privacy. Magoutas and Mentzas (2010) proposed the SALT (Self Adaptive quaLity moniToring) model to monitor user satisfaction and the quality of e-government services.
Jaeger and Bertot (2010) argued that any attempt to create user-centered e-government services must account for a number of essential elements. These elements range from basic issues related to the ability to use e-government to building trust and tying e-government to established social and institutional requirements, such as: access needs; information and service needs; technology needs; information and technology literacy; government literacy; availability of appropriate content and services; usability and functionality; meeting user expectations; information concerns; social institutions providing access to e-government; trust; e-government 2.0; lifelong e-government usage; and understanding how users actually use e-government. Pazalos et al. (2010) proposed and validated a structured methodology for assessing and improving e-services developed in digital cities. The proposed methodology assesses the various types of value generated by an e-service, as well as the relationships among them, allowing a more structured evaluation and a deeper understanding of the value generation process.

3. THE COBRAS FRAMEWORK AND COMPONENTS

The dimensions of the previously reviewed models, with associated indicators and the analytical tests performed, are presented in Tables a, b and c in the appendix. It is clear that the evaluation of e-government success has been approached from different directions, with a recent interest in user-centered satisfaction. However, evaluation of user satisfaction depends exclusively on the user's experience and interaction with an e-service and the values it generates. Existing methodologies show that VMM is based on the rational thinking of policy makers, using fixed weights assigned to indicators for evaluation. This rationality encourages the development of e-government services from the users' perspective based on users' costs, benefits and risks, which previous performance evaluation models used separately rather than simultaneously. These evaluation models also ignored the value of the opportunities and impact that can be obtained from using e-services. The SERVQUAL-based models account for the service quality of the system, which includes some benefit and risk aspects, but ignore the cost and opportunity aspects, whereas the updated D&M models account for users' benefits but overlook cost, risk and opportunity. Consequently, our proposed evaluation framework builds on previous models and extends them into a holistic assessment model for e-government services. The various fragmented performance factors are integrated and updated based on the following observations on user satisfaction: the user's experience during the execution of and interaction with an e-service; the efficiency of the e-system; the effectiveness of the delivered e-service; and the post-impact of the delivered e-service. The new framework is based on theoretical cause-effect relationships between the cost-benefit analysis and the risk-opportunity analysis on the one hand, and user satisfaction on the other.
The observed causal relationships among constructs and the various


performance indicators in the literature are grouped into four sets of dimensions/constructs: Cost; Benefit; Risk; and Opportunity. The cost and benefit variables are mostly tangible and often easy to measure, whereas risk and opportunity are mostly intangible. The expected directions of the hypothesized cause-effect relationships among the five constructs of the new framework, called COBRAS (Costs, Opportunities, Benefits, Risks Analysis for Satisfaction), are presented in Figure 1. COBRAS is developed by analogy to a strategic management tool known as SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis. SWOT analysis has recently been used in combination with data envelopment analysis to reduce the subjectivity of weight assignments in evaluation models like VMM (Dyson, 2000). Moreover, SWOT analysis is often used in academia for the development of business projects and the improvement of operations. In our analogy, strengths correspond to benefits, weaknesses to costs, threats to risks, and opportunities remain the same. Normally, costs and benefits are internal factors of an e-service whereas opportunities and risks are external to the e-service. Like SWOT analysis, COBRAS can be quite subjective. These factors are elaborated next.
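As a rough illustration of the expected directions of these relationships (negative for cost and risk, positive for benefit and opportunity), the sign of each construct's association with satisfaction can be checked with simple Pearson correlations. The data below are synthetic and generated to embody those directions; the paper's actual analysis uses confirmatory factor analysis and structural equation modeling on survey data.

```python
import math
import random
import statistics

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)
n = 500
# Synthetic construct scores on a 1-5 scale (illustrative only).
cost = [random.uniform(1, 5) for _ in range(n)]
benefit = [random.uniform(1, 5) for _ in range(n)]
risk = [random.uniform(1, 5) for _ in range(n)]
opportunity = [random.uniform(1, 5) for _ in range(n)]
# Satisfaction generated with the hypothesized signs plus noise.
satisfaction = [
    3 - 0.4 * c + 0.5 * b - 0.3 * r + 0.4 * o + random.gauss(0, 0.5)
    for c, b, r, o in zip(cost, benefit, risk, opportunity)
]
signs = {
    "cost": pearson(cost, satisfaction),
    "benefit": pearson(benefit, satisfaction),
    "risk": pearson(risk, satisfaction),
    "opportunity": pearson(opportunity, satisfaction),
}
```

A correlation-sign check of this kind only probes direction; estimating the strength of each path, as done in the paper, requires the full structural model.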

Figure 1: The COBRAS Model for User Satisfaction

3.1 Cost-Benefit Analysis

Logically, users compare e-service costs with the associated benefits to decide whether to use or reuse the e-service. Cost-benefit analysis is a well-known concept in management and economics, where managerial decisions to select a project are based on the highest ratio of benefits to costs among competing alternatives.

The Cost construct has been implicitly addressed by e-government researchers including Verdegem and Verleye (2009); Foley (2008); Bertot et al. (2008); and Kertesz (2003). The cost variables are often tangible and measurable, like the actual money and time spent to complete a requested e-service. Some cost variables include:
1. Access time: the number of attempts to find the requested service on the site; the length of time to find the requested service on the site (access time; downloading time; response waiting time; searching time).
2. Post-interaction time: time to receive confirmation of submissions; waiting time to receive a service (visa, passport, driving license).
3. Authorization requirements: authorization code and associated costs; registration with the site (username and password) for authentication.
The cost to satisfaction hypothesis, H1: the lower the e-service cost, the higher the user satisfaction.

The Benefit construct represents the value of using an e-service. It measures, among others, the total values of information availability, service quality and system quality. This set of measures has been

tGov Workshop’11 (1GOV11) March 17-18, 2011, Brunel University, West London, UB83PH

widely used in several models, such as the DeLone and McLean (2003) model, SERVQUAL, e-SQ and EGOVSAT. Some benefit variables include:
1. Tangible benefits, such as saving time or saving money.
2. Intangible benefits, such as information quality (information availability, adequacy, accuracy, relevancy, reliability, understandability, completeness); service quality (design, well-organized site, quick delivery, accessibility, ease of navigation); and system quality (quick loads, responsiveness, visual attractiveness, adequacy of links, good organization).
The benefit to satisfaction hypothesis, H2: the higher the e-service benefit, the higher the user satisfaction.
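The cost-benefit logic above amounts to ranking competing alternatives by their benefit-to-cost ratio. A minimal sketch, with hypothetical service names and monetized benefit/cost values chosen purely for illustration:

```python
def benefit_cost_ratio(benefit, cost):
    """Ratio of total benefit to total cost; higher is preferred."""
    return benefit / cost

# Hypothetical alternatives: (monetized benefit, monetized cost).
alternatives = {
    "online tax filing": (120.0, 30.0),
    "e-passport renewal": (80.0, 40.0),
    "paper-based baseline": (50.0, 50.0),
}

# Rank alternatives from the highest to the lowest benefit-cost ratio.
ranked = sorted(
    alternatives,
    key=lambda name: benefit_cost_ratio(*alternatives[name]),
    reverse=True,
)
```

With these illustrative figures, the online tax filing service (ratio 4.0) ranks above the paper-based baseline (ratio 1.0), mirroring the selection rule described above.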

3.2 Risk-Opportunity Analysis

Using e-government services may involve risks generated by sending personal information that is stored electronically: third parties can intercept, read and modify such information, and in electronic burglary large quantities of sensitive information can be stolen or destroyed easily without the public's consent (Horst et al., 2007). Therefore, citizens need to trust the government and other involved agencies (Evangelidis et al., 2004). At the same time, an e-service provides opportunities and impact to users; some researchers include opportunities under the benefit construct because there is no clear definition of opportunities.

The Risk construct: risks arise when conditions in the external environment jeopardize the reliability of e-services. They compound vulnerability when they relate to low cost, and they are often uncontrollable. Users are concerned about their personal and credit card information. Trust in the technology infrastructure and in those managing it reduces risk, with a strong impact on the adoption of a technology (Colesca, 2009). This risk dimension has been addressed by a few researchers, including Kertesz (2003); Rotchanakitumnuai (2008); Udo et al. (2008); and Zhang and Prybutok (2005). Some risk variables include:
1. Privacy risk: arises from the use of personal data for other purposes.
2. Financial audit risk: the storage of personal information and documents for a long period may worry users about being audited again and asked for an additional tax payment.
3. Time and technology risk: users may feel they are wasting time when online services fail and require additional professional support to retrieve or re-enter data into the e-service.
4. Social risk: users may have less interaction with their friends during social events due to continuous engagement with e-services, or may feel exposed to damage to their social image (non-users of e-government may feel embarrassed for not using e-services or feel inferior to citizens who do use them).
The risk to satisfaction hypothesis, H3: the lower the e-service risk, the higher the user satisfaction.

The Opportunity construct: opportunities are presented by the environment/country within which an e-service operates. They arise when a user can take advantage of conditions for using e-services that make him/her better off. Users can gain personal advantage by making use of opportunities, and should recognize and grasp them whenever they arise. Opportunities may come from environmental, government and technology incentives. Some opportunity variables include:
1. Service support: ease of access at any time (24x7 flexibility in time); access anywhere (flexibility in place).
2. Technological support: error correction; up-to-date information on progress; provision of e-service access in public areas (public library, cafe); and follow-up facilities using email and media tools.
3. Technological advances in the e-service process and provision, such as making use of personalized e-services.
4. Bypassing third-party providers and avoiding bureaucratic processes.
The opportunity to satisfaction hypothesis, H4: the higher the e-service opportunity, the higher the user satisfaction.
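Because the cost (H1) and risk (H3) hypotheses run in the opposite direction from benefit (H2) and opportunity (H4), cost and risk items are commonly reverse-coded before being averaged into construct scores. A minimal sketch, assuming a 5-point Likert scale; the item responses are hypothetical and the paper's own 49-item instrument is not reproduced here.

```python
def reverse_code(response, scale_min=1, scale_max=5):
    """Flip a Likert response so a higher value always means 'more favourable'."""
    return scale_max + scale_min - response

def construct_score(item_responses, reverse=False):
    """Average one respondent's item responses into a single construct score."""
    items = [reverse_code(r) for r in item_responses] if reverse else list(item_responses)
    return sum(items) / len(items)

# One hypothetical respondent; three Likert items per construct.
respondent = {
    "cost": [4, 5, 4],          # high perceived cost (unfavourable)
    "benefit": [4, 4, 5],
    "risk": [2, 1, 2],          # low perceived risk (favourable)
    "opportunity": [5, 4, 4],
}
scores = {
    name: construct_score(items, reverse=name in ("cost", "risk"))
    for name, items in respondent.items()
}
```

After reverse-coding, all four scores share the same polarity, so each is hypothesized to relate positively to satisfaction.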


4. METHODOLOGY AND DATA ANALYSIS

4.1 Methodology

This research utilizes several modes of data collection based on questionnaires validated by experts, as well as advanced data analysis based on Structural Equation Modeling (SEM) to test the theorized model. Although we could have validated the measurement and the structural relationships concurrently (testing reliability and validity), this approach is not advisable (Perry et al., 2008); hence the analyses were conducted separately in this section. Items (defined as measurable variables) of the constructs/factors are mainly adapted from different sources in the literature, such as the updated D&M IS success model; SERVQUAL; Verdegem and Verleye (2009); Foley (2008); Bertot et al. (2008); Rotchanakitumnuai (2008); and Udo et al. (2008). Items were developed for each of the factors and dimensions listed in Tables b and c based on their practical importance to researchers, with modification and re-wording by ourselves and from expert feedback. All items were measured on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree).
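Before fitting a structural model, the internal consistency of multi-item Likert constructs of this kind is commonly checked with Cronbach's alpha. The formula below is the standard one; the item responses are hypothetical and not drawn from the study's data.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a set of items measuring one construct.

    item_scores: list of k lists, each holding one item's responses
    across the same n respondents (e.g. 5-point Likert values).
    """
    k = len(item_scores)
    item_vars = [statistics.variance(col) for col in item_scores]
    totals = [sum(vals) for vals in zip(*item_scores)]  # per-respondent sums
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three hypothetical Likert items answered by six respondents.
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 1, 5],
]
alpha = cronbach_alpha(items)
```

Values of alpha above roughly 0.7 are conventionally taken to indicate acceptable reliability; the illustrative items above are deliberately highly consistent.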

Data Collection and Online Survey

An online survey was designed to include questions related to the newly proposed model in addition to demographic data. The survey was developed in stages. Two workshops were conducted, in the United Kingdom and in Turkey, to ascertain the validity of the content and the relevance of the proposed questionnaire to the objective of the study. In Turkey, a workshop was held on the day after the ICEGEG conference on explorations in e-government and e-governance (Antalya, March 2010); twenty experts from e-government public administration, private IT institutions and professional research were invited. At this workshop, the conceptual COBRAS framework was presented and the questionnaire, then comprising 60 initial questions, was distributed to participants for review. The updated questionnaire was reduced to 49 questions, which were validated again at the second t-Government workshop (London, March 2010). Face validity was also assessed to evaluate the appearance of the questionnaire in terms of feasibility, readability, consistency of style and formatting, and clarity of the language used. Thirty MBA students at the American University of Beirut were selected to conduct the face-validity check. The students assessed each question in terms of clarity of wording, the likelihood that the target audience would be able to answer it, and the layout and style of the questionnaire. In addition to the 49 questions, open-ended questions invited general comments for content analysis. Data were collected in all Turkish cities (such as Ankara and Istanbul) from Turkish users of the TurkSat portal over the period from July 2010 to December 2010. Users were asked to fill in the questionnaire voluntarily immediately after using an e-service. A total of 3506 responses were collected, of which only 2785 (79.44%) were valid because of incomplete questionnaires. The remaining sample size is deemed sufficient for the analysis.
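The screening step described above (discarding incomplete questionnaires) can be sketched as follows. The data structures, item names and the all-49-items completeness rule are illustrative assumptions; only the reported counts (2785 of 3506) come from the paper.

```python
# A response is treated as valid only if all 49 Likert items were answered.
N_ITEMS = 49

def is_complete(response):
    """response: dict mapping item name -> answer; None marks a skipped item."""
    return sum(1 for v in response.values() if v is not None) == N_ITEMS

# Two toy responses: one complete, one with the last item skipped.
responses = [
    {f"q{i}": 3 for i in range(1, N_ITEMS + 1)},
    {f"q{i}": (4 if i < N_ITEMS else None) for i in range(1, N_ITEMS + 1)},
]
valid = [r for r in responses if is_complete(r)]
print(len(valid))  # 1 of the 2 toy responses survives screening

# Reproducing the paper's reported retention rate:
valid_rate = 2785 / 3506
print(f"{valid_rate:.2%}")  # 79.44%
```

The same filter, applied to the 3506 collected responses, yields the 2785 valid cases analyzed in the remainder of the paper.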
From the demographic data, it was found that around half of the respondents (45%) had a bachelor's degree or higher; 67% had experience working with a computer or the Internet and had used e-government websites; 12.7% of respondents had poor computer skills, with the majority reporting at least an average level of computer proficiency; and 94.4% had used the current e-government services at least once a month, whereas the remainder had used them once or several times per year.

4.3 E-services Types

E-services in Turkey are heterogeneous in terms of functionality and maturity level. An attempt was made to divide them into three categories, each containing services that are homogeneous in functionality from the user's point of view. The three groups are as follows:
1. Informational e-services provide public content and do not require any authentication from users; 2258 respondents used such e-services.
2. Interactive/transactional e-services require authentication and allow users to download forms, contact officials and make appointments; 243 respondents used them.
3. Personalized e-services require authentication and allow users to customize the content of e-services and conduct financial transactions online, for example paying for e-services; 284 respondents used them.
The above grouping differs from the maturity model of Layne and Lee (2001). It is more in line with the view of Coursey and Norris (2008), who stated that maturity models do not accurately describe or predict the development of e-government.

4.4 Reliability and Validity of Measures

All the constructs/dimensions measured by the 49 items/indicators are presented in Table 1. The construct validity of the measures was assessed using confirmatory factor analysis, whereas the internal consistency reliability of the measures was tested using Cronbach's alpha. Construct validity tests the degree to which the items/questions in the questionnaire relate to the relevant theoretical construct/factor, here using principal component analysis (PCA). The factor loadings of the final PCA solution and their factorial weights are shown in Table 1: all items have a loading ≥ 0.5, the acceptable norm, except two items with loadings of 0.46 and 0.43, which are not far from 0.5 and were kept in the model because they are important to the relevant factor. The Kaiser-Meyer-Olkin (KMO) test has a value of 0.98, indicating high sampling adequacy for the factor analysis. Moreover, Bartlett's test of sphericity indicates the appropriateness of the factor model: it rejects the hypothesis that the correlation matrix is an identity matrix at a highly significant level.
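As a minimal sketch of the internal-consistency check, Cronbach's alpha can be computed directly from an item-score matrix. The toy scores below are hypothetical, not the survey data; the function implements the standard formula α = k/(k-1) · (1 - Σσ²ᵢ/σ²ₜ).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for items: an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: three Likert items that move together yield high consistency.
scores = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [2, 2, 3],
    [1, 2, 1],
    [3, 3, 3],
])
print(round(cronbach_alpha(scores), 2))  # 0.96
```

In practice a per-construct alpha above the conventional 0.7 threshold supports treating the items as one reliable scale, which is the role Cronbach's alpha plays in the analysis above.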