
Evaluation culture within institutional and methodological context: the case of EU Structural Funds in the Czech Republic

Martin Pělucha, Viktor Květoň

Abstract

Evaluation culture in Central and Eastern European countries began to develop in the early 2000s in connection with the spending of EU pre-accession funds and, later, the Structural Funds. The environment of evaluations is significantly influenced by institutional and methodological aspects. One of the prerequisites of evaluation culture is the ability of Managing Authorities to apply evaluation recommendations.

The main goal of this paper is to provide an assessment of evaluation culture from institutional and methodological perspectives with a focus on EU funds in the Czech Republic. The paper presents a literature review of key terms with a combination of descriptive analyses of current developments in terms of evaluation practice. The main findings, conclusions and policy implications highlight the need to ensure institutional memory in public administration, human capital development in evaluations, sharing data in public administration, and methodological weaknesses in the evaluation culture of the Czech Republic.

Keywords

Evaluation culture, evaluation practice, institutions, methodology

Funding & Acknowledgement

This work was funded by the Czech Science Foundation (GA ČR) under Grant 17-12372S, “Theoretical and Methodological Perspectives of the EU's Neoproductivist Rural Development Policy”. The authors would like to thank Dr. Matthew Copley for preliminary proofreading of the text.

  1. Introduction

The countries of Central and Eastern Europe that joined the European Union (EU) in 2004 and 2007 began to spend large amounts of financial resources within the EU regional policy. With these expenditures, however, it was necessary to ensure adequate absorption capacity. This relates to the readiness of the public administration to administer EU funds, as well as to the availability of a sufficient number of projects eligible for funding. Moreover, the public administration has to operate a monitoring and evaluation system (Píšová, Grolig and Hládek, 2004; Šumpíková et al., 2005). During EU integration, these requirements had an exogenous and top-down nature for these countries because the evaluation of public spending had not previously been applied in such a comprehensive manner (Blažek and Vozáb, 2006; Mihalache, 2010). While absorption capacity and monitoring have a more technical character within implementation systems, the issue of evaluation holds substantial development potential for the implementation environment in particular countries. In this regard, Kozak stated that “the quality of evaluation, though, depends to a large extent on evaluation culture” (Kozak, 2016, p. 146). Because of its role in creating evaluation culture (Ferry, 2009, p. 14), we have concentrated our effort solely on EU cohesion policy, although we are aware of the role of official development assistance in the creation of evaluation culture.

The environment in which evaluation is shaped is very complex. The most significant groups of factors are institutional and methodological aspects. Institutional aspects comprise mainly traditions, culture, politics, national specificities and the development of political and institutional environments. These are associated with the functioning of public administration, the fragmentation or concentration of subsidies, and regional clientelism (Bachtler and Wren, 2006; Pělucha and Shutt, 2014). Methodological aspects relate to the ability of authorities to define the object of an evaluation and to reason through it, and to the ability of evaluators to identify and apply appropriate evaluation methods (Gombitová, Slintáková and Potluka, 2010; Potluka and Brůha, 2013; Cerulli, 2015). Evaluation culture can be developed only through gradual professionalization (Meyer, 2015) and the ability of authorities to apply the recommendations of evaluations (Molle, 2007; Olejniczak, Raimondo and Kupiec, 2016). It is therefore a complex process that constantly faces new challenges.

The main goal of this paper is to provide an assessment of the evaluation culture in the Czech Republic from institutional and methodological perspectives. The paper presents a literature review of key terms combined with descriptive analyses of current developments in evaluation practice. The analytical part of the paper is based on existing Czech data and provides a detailed analysis grounded in the authors' evaluation practice across programs financed by all major EU funds[1]. The paper does not include data on the development of the evaluation culture that has formed in the field of international development cooperation in the Czech Republic.

This article is discussional in character and provides a critical review of the initial challenges of evaluation culture defined by Píšová, Grolig and Hládek (2004) and by Blažek and Vozáb (2006). The authors of this paper divide these challenges into two groups (institutional and methodological) and elaborate a synthesis of the progress of evaluation culture since EU accession. Over recent decades, the Czech Republic has undergone political and, in part, economic development similar to that of other Central European countries. The findings and conclusions are thus also relevant to discussions in Poland, Slovakia, and Hungary.

The paper is organized as follows: this first section has introduced the topic. The second section presents the theoretical background of the definition of evaluation culture with respect to the delimitation of related factors. The third section describes institutional aspects of the evaluation culture in the Czech Republic. The fourth section reflects on the development of the methodologies that have been applied within the context of evaluation culture development. The fifth section provides a synthesis and discussion of key findings relevant to the evaluation system. The conclusion consists of policy implications for the further development of evaluation culture.

  2. Theoretical background of evaluation culture – delimitation and contextual issues

To understand the definition of evaluation culture, its theoretical background must be examined. Mihalache (2010) noted that “the term evaluation culture is often used interchangeably with other terms that are part of the evaluation discourse, such as evaluation capacity, evaluation practice, evaluation code of conduct or code of ethics” (Mihalache, 2010, p. 324). For example, a United States General Accounting Office (2003) study defined evaluation culture as one of the key elements of evaluation capacity, i.e. evaluation culture as a subset of the evaluation capacity concept. Stame (2012) understands evaluation culture as a combination of evaluation capacity, ethics and practice, i.e. institutions (identifiable evaluation units within the public administration), values (transparency and independence) and practices (the choice of relevant methodologies). Similarly, Forss and Rebien (2014) define evaluation culture as “norms, values and attitudes, and related organizational arrangements, structures and processes” (Forss and Rebien, 2014, p. 468). Mayne (2008) stresses more procedural characteristics of evaluation culture, i.e. leadership, organizational support structures and a learning focus. These characteristics are closely related to the quality of human capital, through which it is possible to build an evaluation culture in institutional terms. With a well-developed and stable institutional base for evaluation culture, it is then possible to develop methodological issues, whose quality is important for the creation of evaluation recommendations and their transfer into practice (Ivaldi, Scaratti and Nuti, 2015).

Trochim (2006) presents a very detailed explanation of evaluation culture and distinguishes twelve characteristics of its ideal type. According to these characteristics, an evaluation culture should be action and learning oriented, inclusive and participatory, responsive and fundamentally non-hierarchical, oriented towards diversity and innovation, scientifically rigorous, interdisciplinary, self-critical, honest and impartial, ethical and democratic, forward-looking, and transparent. The delimitation of these characteristics is highly beneficial for the assessment of evaluations. The question, however, is whether Trochim's approach opens space for a distinction between a “culture of evaluations” and “evaluation culture”. His characteristics tend to focus more on the concept of a “culture of evaluations”, which is well described by Patton (2012). In Patton's view, the culture of evaluations is dominantly influenced by the differing characteristics of evaluators and by the cultures of different countries. These factors affect the form of evaluation research and the results it achieves. Barbier and Hawkins (2012) complement these factors by adding the importance of political culture, which also affects the setting of evaluation research. Evaluation culture is a broader term. It must clearly include the institutional environment in which evaluation is carried out, as well as the capacity of evaluation practitioners (i.e. evaluators and delegates of contracting authorities) to formulate relevant recommendations and to transform them into the real practice of public expenditure programs. In this regard, Mesquita (2016) defines general criteria for the assessment of evaluation culture, e.g. the existence of evaluation skills installed in organizations, the degree of evaluation institutionalization, monitoring capacity, the diversity of evaluations, and the existence of an organization or association of professional evaluators. With these parameters, it is possible to characterize and assess the level of evaluation culture in different countries.

The development and quality of evaluation culture have a significantly evolutionary character. The basic issues are described by Toulemonde (2000), in the sense that evaluation culture was directly related to the process of introducing evaluations into particular countries. The initial basis is evident “in the United States along with Planning-Programming-Budgeting-System (PPBS). It was imported in the 1970s into most northern European countries where agencies, units or commissions were created to carry out policy analysis” (Toulemonde, 2000, p. 351). Evaluation culture in different countries thus started to evolve from the level and ability of public authorities to promote evaluation activities.

Within the EU countries, a certain degree of convergence in evaluation culture is evident, given the unified system of implementing the EU Structural Funds (now the European Structural and Investment Funds). The phases of the integration process and the gradual geographical enlargement of the EU cause differences in national evaluation cultures and in the dissemination of the basic elements of an evaluation culture. In this context, one might expect the EU-15 countries to be homogenous in the development of evaluation culture and at a higher level than the countries of Central and Eastern Europe. This assumption is not entirely true. Within the EU-15, there exists a “north-south” division in the degree of evaluation culture development. The Netherlands, the United Kingdom, Germany, and the Nordic countries belong among the states with a traditional and advanced evaluation culture. By contrast, Italy, Spain and Portugal comprise the second category of evaluation culture development (Taylor et al., 2001; Bachtler and Wren, 2006; Barbier and Hawkins, 2012; Forss and Rebien, 2014; Ahonen, 2015).

How far evaluation culture develops exogenously depends on a country's previous experience with evaluations in its national system. Taking into account the different levels of development of evaluation culture in EU countries, Boyle, McNamara and O'Hara (2012) discussed the driving forces that shape evaluation culture, particularly in Ireland. Their findings are generalizable to other EU countries. They distinguished international forces (i.e. the OECD, the World Bank, the European Commission, professional and evaluation networks, management consultancies) from national ones (i.e. the national context of politics, public administration, the level of centralization, and the partnership and problem-solving approach). This breakdown shows that countries with a short tradition of applying accountability to public programs develop their evaluation culture more under the influence of exogenous international forces and are rather less able to develop an evaluation culture according to their own national specifics. However, Bachtler (2006) draws attention to the specific impact of EU cohesion policy on evaluations, in the sense that “the evaluation obligations of EU Cohesion policy have acted as a ‘driver’ of policy and evaluation in the Member States. The EU evaluation requirements and practice have influenced policy choices, enhanced the role of evaluation as part of the policy process and stimulated policy learning” (Bachtler, 2006, p. 149). With regard to the objectives of EU cohesion policy, the flexibility of EU countries is logically limited. On the one hand, this contributes to the universality of evaluation culture in the EU; on the other hand, national specificities are insufficiently reflected and, where they are not fully in line with EU targets, cannot be applied.

Although there is no stable and uniform definition of evaluation culture, two sets of factors that shape it can be defined quite clearly. First, there are institutional aspects, comprising mainly the state and development of the political and institutional environment of public expenditure programs. This is accompanied by the degree of development of human capital in the field of evaluations (on the side of both contracting authorities and evaluators). The second set of factors is the state of the methodological environment. It relates to the quality of evaluation tenders (demand), the quality of services provided by evaluation companies (supply), properly chosen evaluation methods (process), and understandable recommendations applicable in implementation practice (evaluation results). These two specific areas of evaluation culture are the focus of the following sections of this paper.

Figure 1 (EvCul_Obr1.jpg)

  3. Institutional aspects of the evaluation culture in the Czech Republic: development and current state

In the case of the Czech Republic, evaluation culture is a very complex issue. It has been dominantly influenced by institutional aspects, namely the political-institutional environment and the quality of human resources, both those of contracting authorities and those of evaluation teams.

During the 1990s, a characteristic feature of the Czech Republic was the political elite's low interest in a comprehensive solution to regional development problems. The main reasons were the absence of significant regional differences in the first half of the 1990s, the effort to focus attention on the key steps of economic transformation, and the unwillingness of liberal governments to introduce redistributive fiscal policy elements (Blažek and Vozáb, 2006). The situation changed during the late 1990s. In this period, the policies and programs introduced began to reflect the first inter-regional differences (especially in unemployment) that arose as a result of the transformation processes (Blažek, 2000). However, these programs were not comprehensive or long-term, and therefore there was no space for the development of any evaluation culture.

The situation changed when the Czech Republic became a candidate state for EU accession in 1998, and in particular during the period of the Czech Republic's active preparation for EU membership. The following section describes the development of an evaluation culture in the Czech Republic according to the main periods common to all ten countries that acceded to the EU in 2004. This breakdown into specific periods has already been used in several publications (e.g. Kozak, 2016; Gombitová, Slintáková and Potluka, 2010), but not in direct relation to the assessment of evaluation culture:

1)   Pre-accession period 2000–2004

      Although the pre-accession period can be counted from 1998, when the Czech Republic acquired the status of an EU candidate state, the actual development of know-how in implementing EU funds, and of the corresponding need for their evaluation, began only in the 2000–2006 programming period. In this period, the Czech Republic was in the same situation as Poland, which Kozak (2016, p. 147) described as a period of “few studies, no methodology adjusted to pre-accession programs, painful shortage of monitoring and monitoring specialists”. Typically, the knowledge base of the evaluation culture was built through twinning projects that engaged foreign experts, usually from the EU-15 countries (especially the United Kingdom, Italy and Austria). Their contribution, however, was questioned in terms of the cost of their services and their inadequate knowledge of the specifics of the Czech Republic (Píšová, Grolig and Hládek, 2004). Partial experience was obtained through the evaluation of pre-accession instruments. The role of evaluation in cross-border cooperation (CBC) programs was rather marginal, and evaluation culture during this period was at an embryonic stage.

2)   The first short programming period of 2004–2006

      After the Czech Republic's EU accession in May 2004, the foundations of evaluation culture were laid. Their design was explicitly shaped by the requirements of the EU. Evaluation activities were a relatively new branch for the Czech public administration, which led to problems. As in Poland and the Slovak Republic, the Czech Republic established a central evaluation unit for the Community Support Framework (CSF) in the Ministry for Regional Development. The main problems in developing evaluation culture were connected with the ability to create high-quality and coherent strategic documents (Blažek and Vozáb, 2006), i.e. the National Development Plan, the CSF, and the operational programs. A lack of experience with implementing similar tools over the medium or long term resulted in inadequately set target values of monitoring indicators and in incoherent links between indicators within operational programs and superior documents. In this period, attention was focused primarily on the ability to spend the available allocated funds. Evaluation culture was not developed comprehensively; rather, assorted ad hoc and partial problems were addressed as they arose with the gradual implementation of operational programs. A partial exception was the effort to apply an impact evaluation through the Hermin macroeconomic model (Bradley, 2006). On both sides – contracting authorities and evaluators – inexperience was evident at the methodological and implementation levels (i.e. in the application of evaluation recommendations and findings).

 

3)   The first full programming period 2007–2013

      The setting of this programming period was shaped by two key factors that influenced the subsequent fragmentation of subsidies and evaluation practices. The first was the parliamentary elections of 2006, which dictated the “regional climate” for the setting of the 2007–2013 programming period (Shutt, Koutsoukos and Pělucha, 2010, p. 192). The authors of this paper were members of the expert team that managed the preparation of the Czech National Development Plan 2007–2013[2]. During the negotiations, it emerged that the expected change in the political leadership of the country was perceived by representatives of various ministries as a threat to their existence. This risk was reflected in their efforts to ensure the future stability of their own operational program (OP) or to control a significant part of an OP's administration. The result was a high number of sectoral operational programs with a broad and sometimes vague focus, formally set values of monitoring indicators, and the associated complications with management and spending. This did not, however, significantly dampen the obligation to create evaluation plans. The second factor was the pressure for regional authorities to manage their own regional operational programs. This setting of the political-institutional environment had a significant impact not only on the aforementioned fragmentation of aid, but also on the creation of regional clientelism. Poland experienced a similar situation at the regional level (Komorowska, 2009; Dabrowski, 2013).

A serious problem was the lack of institutional memory associated with high staff turnover in the managing authorities. This was also reflected in the relationship between contracting authorities and evaluators. The weaknesses of the Czech Republic's evaluation culture were highlighted by a survey (Remr, 2011) focused on the nature of evaluations performed in the Czech Republic. In general, the main effort was to deliver evaluation reports matching the preconceptions of contracting authorities, with an excessive number of recommendations of very low practical use. This issue raises the question of the ability of politicians to understand the results of evaluations, which should feed into the decision-making process. Evaluations are tools to preserve and benchmark positive results in progress; conversely, negative results limit the maneuvering space for further decision-making (Hill, 2012, p. 93).

Additionally, Remr's (2011) survey identified that the system completely lacked the use of experimental designs and the implementation of meta-evaluations. This situation was caused by a change in the evaluation market. Many evaluation services were procured with price as the sole criterion used by commissioners. Moreover, the number of companies on the evaluation market rose sharply, which led to a reduction in the quality of the evaluations produced (Kváča, 2015).

Nevertheless, there were evident attempts to deepen methodological expertise (see the next section of this paper). For the first time, Czech evaluation culture started to develop internally, mainly through cooperation between evaluators and representatives of the public administration, e.g. the establishment of the Czech Evaluation Society, which issued its “Code of Ethics” in 2011 (ČES, 2011) and its “Standards for Conducting Evaluations” in 2013 (ČES, 2013). Additionally, guidelines and evaluation procedures were created for specific areas and themes. An example is the “Draft guidelines on evaluating programs of targeted support for research, development and innovation and the necessary systemic changes” (Srholec, 2015[3]).

4)   Programming period of 2014–2020

In the setting of this programming period, there was a cross-cutting political consensus on the concentration and centralization of sectoral and regional operational programs. In 2013, however, during key moments of the negotiations on the form of the programs, the government fell. The ensuing political instability resulted in programming documents of a standard form, acceptable to the European Commission (EC) but lacking the vigor to achieve political and developmental goals (Pělucha and Shutt, 2014). In terms of evaluation culture, the situation in the Czech Republic had already stabilized by this programming period. There was an obvious exogenous influence of the EC, typical not only of this country but of all EU countries. This evident attempt to shift attention to a comprehensive implementation of EU funds involved not only the representatives of implementation, but also the evaluators. The European Commission (2014, p. 17) stated that “an evaluation process needs to be use and user oriented from the beginning. The communications between evaluator and commissioner of evaluations on purpose, methods and use should start before any real work is undertaken.” Thus, in the current programming period, the principle of monitoring and evaluation is “formally” accented by the EC. Moreover, the duty to evaluate applies not only at the program level, but also at the project level.

These efforts, however, face another extreme. The majority of final beneficiaries have no evaluation skills, yet they usually carry out these activities themselves. As a result, the broader public may come to devalue the evaluation process. Evaluation of financial spending under the n+3 rule[4] is newly complemented by an evaluation of milestones formulated in monitoring indicators and by attempts to apply a result-oriented policy and evaluation. In this sense, the Czech Republic merely reproduces the requirements set by the European Commission, but the needed evaluation approaches are beginning to form in this area.

 

5)   The future programming period of 2021+

The next programming period will probably come under pressure from significant changes in the forms of providing financial resources from EU funds, whose volume will be reduced compared to the current period (European Commission, 2017; Petzold, 2017). The main reason is the expected Brexit process (i.e. the departure of the UK, a significant net payer to the EU budget), followed by the declining willingness of other net payers to finance regional development, primarily in the countries of Central and Eastern Europe. The significance of financial instruments will definitely grow, which means a paradigmatic change in the delivery of financial resources. This change should be reflected in new forms of evaluation and in the preparedness of human resources with appropriate knowledge and skills. Moreover, even the financial resources that will continue to be allocated in the standard form of subsidies are likely to come under more pressure regarding their efficiency and effectiveness; the evaluation process will therefore also be under pressure to deliver conclusions of a high methodological and interpretative standard.

In the Czech Republic, the development of evaluation culture underwent various changes of differing intensity. These changes were mainly related to the preparation, setting and actual implementation of the Structural Funds during particular programming periods. Overall, the evaluation culture was formed not only under the influence of external factors and conditions (e.g. the European Commission's directives for the preparation of programming documents, including their evaluation), but also in the context of building evaluation capacity (on the part of both contracting authorities and evaluators) in the Czech Republic. With a certain degree of critical perspective, one can say that it was the influence of external conditions and factors that caused the boom of evaluations in the Czech Republic. Nowadays, it is a process significantly formed and shaped by contracting authorities and administrators of programs/funds in the Czech Republic and by professional institutions such as the Czech Evaluation Society. The evaluation culture in the Czech Republic has overcome the initial shortcomings associated with the introduction of any new process and has stabilized.

  4. Methodological and empirical issues of evaluation culture assessment

The importance of methodological aspects for the assessment of an evaluation culture was identified by Gombitová, Slintáková and Potluka (2010, p. 97), who stated that “the efficiency of public expenditure programs is mainly given by the evaluation culture and poor evaluation methodology of such interventions. The evaluation methods used and the quality of the methods applied have been developing as evaluation culture has been developing too”. Evaluation methods and practices are intertwined with evaluation culture and jointly affect the quality and impact of evaluations. For this reason, attention is given to the key methodological transformations that affected the evaluation culture.

Methodological aspects of evaluation culture are linked to the ability to identify and use appropriate evaluation methods and techniques, both on the part of contracting authorities (the definition of expectations) and on the side of evaluators (real knowledge and practical application of methods). A more significant shift in the use of various evaluation approaches can be observed from the middle of the 2007–2013 programming period. In those years, the academic and professional communities intensely debated public policy evaluation in terms of the evaluation methods applied (e.g. Martini, 2009, 2011; Gaffey, 2009; Bradley and Untiedt, 2011). Evaluation should be perceived as a tool able to reveal what really works, what the effects of a given policy are, and which regional or local development trends should be eliminated.

In the first half of the 2007–2013 programming period, evaluators used qualitative methods significantly more than econometric evaluation methods. The European Commission (EC), academics and experts therefore steered the discourse towards more representative results and towards an overall effort to mathematize the field. The European Commission (2010, p. 19) stated that it is not possible “to deliver all the evidence on the performance of cohesion policy and therefore encouraged EU member states to use some of the more rigorous methods in their own evaluations”. The evaluation community thus started to use quantifiable econometric models more intensively. Gaffey (2013, p. 33) states that these discussions led to methodological wars in the professional sphere over the relative use of qualitative and quantitative methods in evaluations. In Gaffey's view, as a representative of the EC, there is no ideal evaluation method. It is always necessary to combine different approaches through which it is possible to answer the evaluation's questions and hypotheses. These connections refer specifically to the assessment of causality, i.e. the causal links between implemented interventions and achievements, at both the micro- and macroeconomic levels. The general reasons for increasing the explanatory power of evaluation studies, as summarized by the European Commission (DG Agri, 2010, p. 20), are as follows:

 

  • Databases in general (e.g. the statistics of Eurostat or the Czech Statistical Office) and the programs' monitoring systems are not a sufficient basis for an adequate quantitative analysis of a program's achievements; the data are completely absent, or the values of the monitoring indicators are merely predictive and weak;
  • The implementation of qualitative surveys is often constrained by the low willingness of potential respondents to participate in evaluations, surveys and structured interviews, which biases the results to be interpreted (e.g. the problem of non-response bias);
  • The result is a situation in which the chosen methodology is often referred to as a “second-best solution”, a compromise between what is to be detected and what is realistically identifiable in the evaluation.

 

Since 2010, efforts to solve the aforementioned problems have emphasized the triangulation of techniques, i.e. the verification of findings obtained with one method against other evaluation methods. This emphasis has often been explicitly stated in the tender documentation for evaluations. Some methods, however, were completely groundbreaking, mainly the application of counterfactual impact evaluation through specific statistical methods. Some Czech evaluators, together with Italian and British experts, were pioneers of this method, not only at the national level but also at the EU level (Potluka, Květoň and Pělucha, 2012; Potluka and Brůha, 2013). It must be noted, however, that this was a very demanding process in the development of a methodological evaluation culture in the Czech Republic. It developed through “learning by doing” on the part of both the evaluators and the users of the outputs of this type of evaluation (especially contracting authorities).
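To illustrate the logic of counterfactual impact evaluation discussed above, the following sketch estimates a treatment effect by propensity score matching on synthetic data. It is an illustrative example only: all variables and the data are hypothetical, and real studies of supported projects would use administrative or monitoring data and far more careful diagnostics (covariate balance checks, sensitivity analyses).

```python
# Illustrative sketch only: a minimal counterfactual impact evaluation via
# propensity score matching on synthetic data. All variable names and the
# data itself are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
n = 1000

# Hypothetical firm covariates: log size and age.
size = rng.normal(10, 1, n)
age = rng.uniform(1, 30, n)

# Selection into support depends on covariates (non-random assignment).
p_support = 1 / (1 + np.exp(-(0.8 * (size - 10) - 0.03 * (age - 15))))
treated = rng.random(n) < p_support

# Outcome depends on covariates plus a true treatment effect of +2.0.
outcome = 0.5 * (size - 10) + 0.05 * age + 2.0 * treated + rng.normal(0, 1, n)

# Naive comparison is biased because supported firms differ systematically.
naive = outcome[treated].mean() - outcome[~treated].mean()

# Step 1: estimate propensity scores from the covariates.
X = np.column_stack([size, age])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to its nearest untreated neighbour
# on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = outcome[~treated][idx.ravel()]

# Step 3: average treatment effect on the treated (ATT).
att = (outcome[treated] - matched_controls).mean()
print(f"naive difference: {naive:.2f}, matched ATT: {att:.2f} (true: 2.0)")
```

The point of the sketch is the contrast between the naive comparison of supported and unsupported units, which is biased when support is not assigned randomly, and the matched comparison, which approximates the missing counterfactual.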

Quantitative data collection techniques (e.g. questionnaires or structured interviews) were supplemented by qualitative methods in the Czech Republic (e.g. qualitative comparative analysis, Outcome Mapping/Harvesting, Most Significant Change, Process Tracing). These qualitative evaluation methods allow in-depth analysis, with the potential to better understand the context and the internal and external factors behind the “quantitative numbers practice”. A specific case is qualitative comparative analysis (QCA), which is applied at the interface between quantitative and qualitative methods. It was originally used in comparative politics, but has already been successfully tested in evaluations (Blackman, Wistow and Byrne, 2013).
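A minimal sketch of the first step of a crisp-set QCA is shown below, assuming hypothetical binary data on evaluated projects; the condition names and cases are invented for illustration, and the Boolean minimization step of a full QCA is omitted.

```python
# Illustrative sketch only: building a QCA truth table from hypothetical
# project data. Conditions and the outcome are binary (0/1); each truth-table
# row is one configuration of conditions, with its consistency (the share of
# cases in that configuration showing the outcome).
import pandas as pd

# Hypothetical cases: experienced team (EXP), stable funding (FUND),
# political support (POL), and project success (OUT).
cases = pd.DataFrame({
    "EXP":  [1, 1, 1, 0, 0, 1, 0, 1, 0, 1],
    "FUND": [1, 1, 0, 1, 0, 1, 1, 0, 0, 1],
    "POL":  [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "OUT":  [1, 1, 0, 0, 0, 1, 0, 0, 0, 1],
})

conditions = ["EXP", "FUND", "POL"]
truth_table = (
    cases.groupby(conditions)["OUT"]
    .agg(n_cases="size", consistency="mean")
    .reset_index()
    .sort_values("consistency", ascending=False)
)
print(truth_table)
```

Configurations with consistency close to 1 are candidate (combinations of) sufficient conditions for the outcome; this bridge between case-oriented qualitative reasoning and a compact quantitative summary is what makes QCA attractive in evaluation.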

Another significant limitation of the evaluation culture in the Czech Republic is its weak data base, particularly as regards administrative data. The unsatisfactory state of data sharing within the public administration is a critical aspect in this regard. A great potential use of administrative data lies in assessing the causality of interventions. If the appropriate anonymity of individual data were ensured, supported projects could be assessed comprehensively, before and after their completion, through cooperation between managing authorities (which administer public policies) and the tax and insurance system (e.g. tax offices, the social security administration, and audit institutions). Examining the causal relationship between an intervention and the supported final beneficiaries (or representatives of target groups) may enable evaluators to assess the effectiveness of policy over the long term. By aggregating the achieved values at the regional (or national) level, it is possible to compare the situation with wider socio-economic indicators (e.g. supported firms with increasing revenues, or newly employed persons, in the context of the socio-economic situation of a particular region or locality). In connection with the European Commission's pressure to provide ways of assessing the factual results of public expenditure programs in the Czech Republic, a partial positive shift can be identified in the cooperation between the Ministry of Labor and Social Affairs and the Czech Social Security Administration.[5]
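One way such cooperation can respect the anonymity requirement discussed above is keyed pseudonymization, sketched below under stated assumptions: the identifier fields mirror the CSSA example in footnote 5 (name, surname, date of birth), but the function name, the shared secret and the sample records are hypothetical, and a production system would additionally need normalization rules, key management and a legal basis.

```python
# Illustrative sketch only: keyed pseudonymization of personal identifiers so
# two institutions can link records on the same person without exchanging
# names or birth dates. All names and records here are hypothetical.
import hmac
import hashlib

SHARED_SECRET = b"rotate-me-regularly"  # held only by the linking parties

def pseudonym(name: str, surname: str, birth_date: str) -> str:
    """Derive a stable pseudonym from identifying fields via HMAC-SHA256."""
    canonical = f"{name.strip().lower()}|{surname.strip().lower()}|{birth_date}"
    return hmac.new(SHARED_SECRET, canonical.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Managing authority side: set of supported persons.
supported = {pseudonym("Jana", "Nováková", "1985-03-14")}

# Social security side: labour-market status keyed by the same pseudonym.
status = {pseudonym("Jana", "Nováková", "1985-03-14"): "employed"}

# Linkage works on pseudonyms alone; no raw identifiers change hands.
for p in supported:
    print(p[:12], "->", status.get(p, "unknown"))
```

The design choice worth noting is the keyed hash (HMAC) rather than a plain hash: without the key, identifiers cannot be re-derived by third parties, yet both institutions compute the same pseudonym for the same person and can therefore link their records.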

From the procurement perspective, a shift in the tendering and processing of evaluations in the Czech Republic can be identified in recent years. In addition to process-mapping evaluations, there has been an evident effort (particularly towards the end of the 2007–2013 period, i.e. around 2015) to assess actual results and impacts. This shift is particularly associated with the demand for quantitative approaches in evaluations. In the case of ministries with low staff turnover in their evaluation units in particular, there is an apparent attempt to specify the tender documentation in detail, including a precise formulation of evaluation questions. This process has contributed to the increased application of specific, often quantitative, methods. At the same time, there is an evident effort to improve the qualitative approaches used in evaluations. Examples include the elaboration of case studies according to clear standards and rules, the testing of new methods such as Outcome Harvesting/Mapping, and the application of QCA to selected interventions. In general, the development and introduction of new methods in the Czech Republic is a “contracting authority-driven process”, or one driven by enthusiastic individuals from outside the evaluation branch. Only in limited cases do evaluators come up with entirely new approaches and methods. After the successful introduction of such new methods, however, they visibly diffuse into the common methodological standards of the services offered. With the further development of new knowledge-intensive methods, selective development can be expected on the part of evaluators, with their profiling according to the specific knowledge of their personnel.

  5. Discussion

It is evident that evaluation culture is evolving and changing. We have therefore synthesized current knowledge on the development of the Czech Republic's evaluation culture. The major institutional and methodological challenges of evaluation culture were defined by Píšová, Grolig and Hládek (2004) and by Blažek and Vozáb (2006). The authors of this paper divided these challenges into two areas, institutional and methodological aspects. Table 1 presents a synthesis of the progress of Czech evaluation culture since EU accession.

 

Table 1: Synthesis of the progress of Czech evaluation culture since EU accession (EvCul_Tab1.jpg)

  6. Conclusion

The main goal of this paper was to provide an assessment of the evaluation culture in the Czech Republic from institutional and methodological perspectives. From both perspectives, an evolutionary character is evident, i.e. gradual improvement that is still hampered by a number of internal and external factors.

The main challenge for further research on the institutional aspects lies in the potential of existing evaluation analyses, on the one hand (i.e. evaluation as an analytic and cognitive tool), and in the options for realistically affecting policy-making, on the other (i.e. evaluation as a tool for improving implementation). A major obstacle in this sense is the willingness of politicians to listen and to accept both positive and negative findings. Hill (2012) notes the reluctance of politicians to have very precise information about the implementation of certain types of policies; a possible reason is the limited maneuverability it leaves within their decision-making. The second challenge is improving institutional memory, i.e. ensuring the ability of public authorities to apply evaluation recommendations, especially where there is a clear willingness (but no obligation) to apply the results of evaluations. These issues dramatically affect the evaluation culture of each country.

In the methodological aspects of evaluation culture, there are significant efforts to search for new techniques and methods applicable in evaluations. Examples include efforts to use counterfactual approaches, QCA, Outcome Harvesting, Most Significant Change, Process Tracing, etc. The application of such methodological approaches was completely missing 6–8 years ago; today, they are being introduced into evaluation practice with varying degrees of intensity. This also brings increased demands for knowledge and skills on the part of contracting authorities and evaluators. It can therefore be expected that the further methodological development of evaluations will act as a selective mechanism on evaluation capacities (technical capacity on the part of contracting authorities and evaluators). On the side of contracting authorities, this process may contribute to the professional stabilization of their evaluation units, because it will increasingly demand professionals with a higher level of knowledge and skills. On the side of the evaluators, this process could contribute to selection and increased specialization among suppliers (compared to the current situation, in which evaluators often take on any evaluation with a “move heaven and earth” approach).

The paper also highlighted many methodological weaknesses of evaluation culture that still persist and hinder the development of evaluation: the poor availability of relevant data, the low intensity and low rigor of impact evaluations, the prevalence of high-level description within evaluations, and poor feedback from contracting authorities, with the associated limited transfer of evaluation conclusions and recommendations into practice. The sharing of data is especially noteworthy. As part of streamlining the evaluation of public expenditure programs, it will be necessary to increase the availability of data and information within the public administration. After appropriate data anonymization, not only the localization of interventions but also their causal effects could be effectively evaluated. All the solutions discussed above, not only the methodological ones, represent a major challenge for the further development of an evaluation culture in the Czech Republic.

 

References

  1. AHONEN, P. Aspects of the institutionalization of evaluation in Finland: Basic, agency, process and change. Evaluation, 2015, Vol. 21, No. 3, pp. 308-324.
  2. BACHTLER, J. and C. WREN. Evaluation of European Union Cohesion Policy: Research Questions and Policy Challenges. Regional Studies, 2006, Vol. 40, No. 2, pp. 143-153.
  3. BARBIER, J. C. and P. HAWKINS. Evaluation Cultures – Sense-making in complex times. New Brunswick, New Jersey: Transaction Publishers, 2012. ISBN 978-1-4128-4942-5.
  4. BLACKMAN, T., J. WISTOW and D. BYRNE. Using Qualitative Comparative Analysis to understand complex policy problems. Evaluation, 2013, Vol. 19, No. 2, pp. 126-140.
  5. BLAŽEK, J. (In)consistency and (in)efficiency of the Czech regional policy in the 1990s. Informationen zur Raumentwicklung [online]. 2000, No. 7/8, pp. 373-380. [cit. 2016-12-15]. Available on-line: http://www.bbsr.bund.de/BBSR/EN/Publications/IzR/2000/7_8Blazek.pdf?__blob=publicationFile&v=3
  6. BLAŽEK, J. and J. VOZÁB. Ex-ante Evaluation in the New Member States: The Case of the Czech Republic. Regional Studies, 2006, Vol. 40, No. 2, pp. 237-248.
  7. BOYLE, R., G. McNAMARA and J. O'HARA. Riding the Celtic Tiger: Forces Shaping Evaluation Culture in Ireland in Good Times and Bad. In: BARBIER, J. C. and P. HAWKINS. Evaluation Cultures – Sense-making in complex times. New Brunswick, New Jersey: Transaction Publishers, 2012, pp. 19-44. ISBN 978-1-4128-4942-5.
  8. BRADLEY, J. Evaluating the Impact of European Union Cohesion Policy in Less-developed Countries and Regions. Regional Studies, 2006, Vol. 40, No. 2, pp. 189-199.
  9. BRADLEY, J. and G. UNTIEDT. The Future of EU Cohesion Policy in a Time of Austerity; What’s New and What Works in the EU Cohesion Policy 2007–13: Discoveries and Lessons for 2014–2020. Vilnius: 3–4 March 2011.
  10. CERULLI, G. Econometric Evaluation of Socio-Economic Programs: Theory and Applications. Berlin: Springer-Verlag GmbH, 2015. ISBN 978-3-662-46404-5.
  11. ČES. Etický kodex evaluátora. Finální verze schválená Kongresem České evaluační společnosti. In: Czecheval [online]. 2011 [cit. 2016-12-15]. Available on-line: http://www.czecheval.cz/standardy_kodex/ces_pdf
  12. ČES. Formální standardy provádění evaluací. Finální verze schválená Kongresem České evaluační společnosti. In: Czecheval [online]. 2013 [cit. 2016-12-15]. Available on-line: http://www.czecheval.cz/standardy_kodex/ces_formalni_standardy_evaluaci_short_5__.pdf
  13. DABROWSKI, M. EU cohesion policy: how to deliver more tangible results? Regional Insights, 2013, Vol. 4, No. 2, p. 3. ISSN 2042-9843.
  14. DG AGRI. Approaches for assessing the impacts of the Rural Development Programmes in the context of multiple intervening factors. Findings of a Thematic Working Group established and coordinated by The European Evaluation Network for Rural Development. March 2010.
  15. EUROPEAN COMMISSION. Evaluating Regional Policy – Insights and Results. Panorama Inforegio, Brussels: European Commission, Directorate-General for Regional Policy, 2010. ISSN 1608-389X.
  16. EUROPEAN COMMISSION. The Programming period 2014–2020: Guidance Document on monitoring and evaluation. Brussels: European Commission, Directorate-General for Regional Policy, March 2014. ISBN 978-92-79-45496-7.
  17. EUROPEAN COMMISSION. Reflection paper on the future of EU finances. COM(2017) 358 of 28 June 2017. In: europa [online]. 2017. Available on-line: https://ec.europa.eu/commission/sites/beta-political/files/reflection-paper-eu-finances_en.pdf
  18. FERRY, M. Cohesion Policy Evaluation Systems in the Visegrad States: An Overview. In: BIENIAS, S. and I. LEWANDOWSKA (eds.). Evaluation systems in the Visegrad Member States. Warsaw: Ministry of Regional Development, 2009.
  19. FORSS, K. and C. REBIEN. Is there a Nordic/Scandinavian evaluation tradition? Evaluation, 2014, Vol. 20, No. 4, pp. 467-470.
  20. GAFFEY, V. Methods for Assessing Impacts of Structural Interventions: Lessons from Ex Post Evaluation, 2000–2006. Evaluation of EU Structural Funds: Reinforcing Quality and Utilisation. Vilnius: 26–27 March 2009.
  21. GAFFEY, V. Methods For Evaluating Cohesion Policy Programmes. European Structural and Investment Funds Journal – EStIF, October 2013, First issue, approx. 60 pages. ISSN 2196-8268.
  22. GOMBITOVÁ, D., B. SLINTÁKOVÁ and O. POTLUKA. Evaluations in Visegrad countries. In: POTLUKA, O. et al. Impact of EU Cohesion Policy in Central Europe. Lepziger Universitätsverlag, 2010. ISBN 978-3-86583-541-3.
  23. HILL, B. Understanding the Common Agricultural Policy. Routledge, Earthscan, 2012. ISBN 978-1-84407-777-9.
  24. IVALDI, S., G. SCARATTI and G. NUTI. The practice of evaluation as an evaluation of practices. Evaluation, 2015, Vol. 21, No. 4, pp. 497-512.
  25. KOMOROWSKA, K. A. Cohesion Policy-Making. Is There a Space for Regions? Contemporary European Studies, 2009, Vol. 1, No. 1, pp. 58-77.
  26. KOZAK, M. Why does policy learning have limited impact on policy changes? In: DOTTI, N. F., (ed.). Learning from implementation and evaluation of the EU Cohesion Policy: Lessons from a research-policy dialogue. Brussels: RSA Research Network on Cohesion Policy, 2016. pp. 139-152. ISBN 978-2-9601879-0-8.
  27. KVÁČA, V. Rozhovor – V programovém období 2014–2020 by evaluacím mělo být věnováno mnohem více prostoru. Oko NOKu, Bulletin Ministerstva pro místní rozvoj ČR, Národního orgánu pro koordinaci fondů EU, winter 2015, pp. 4-6.
  28. MARTINI, A. Counterfactual impact evaluation: what it can (and cannot) do for cohesion policy New Methods for Cohesion Policy Evaluation: Promoting Accountability and Learning. Warsaw: 30 November – 1 December 2009.
  29. MARTINI, A. The Sounds of Evaluation Pipe Organs or Bagpipes?; What’s New and What Works in the EU Cohesion Policy 2007–13: Discoveries and Lessons for 2014-2020. Vilnius: 3–4 March 2011.
  30. MAYNE, J. Building an Evaluative Culture for Effective Evaluation and Results Management. ILAC Working Paper 8. Rome: Institutional Learning and Change Initiative, 2008.
  31. MEYER, W. Does Evaluation Become a Global Profession? In: STOCKMANN, R. and W. MEYER. The Future of Evaluation: Global Trends, New Challenges, Shared Perspectives. Palgrave Macmillan, 2016. ISBN 978-1-349-57553-4.
  32. MESQUITA, J. Evaluation culture in the development cooperation sector in Portugal. [Powerpoint presentation]. MECC Maastricht, the Netherlands: 12th EES Biennial Conference, Evaluation Futures in Europe and beyond Connectivity, Innovation and Use, 26–30 September 2016.
  33. MIHALACHE, R. A Developing Evaluation Culture in Romania: Myths, Gaps and Triggers. Evaluation, 2010, Vol. 16, No. 3, pp. 323-332.
  34. MOLLE, W. European Cohesion Policy. Routledge, 2007. ISBN 0-415-43812-8.
  35. OLEJNICZAK, K., E. RAIMONDO and T. KUPIEC. Evaluation units as knowledge brokers: Testing and calibrating an innovative framework. Evaluation, 2016, Vol. 22, No. 2, pp. 168-189.
  36. PATTON, M. Q. Essentials of Utilization-Focused Evaluation. SAGE Publications, Inc., 2012. ISBN 978-1-4129-7741-8.
  37. PĚLUCHA, M. and J. SHUTT. Preparing for 2014–2020: Can the Czech Republic improve its performance by learning more from the United Kingdom? Regions – Quarterly Magazine of the Regional Studies Association, 2014, No. 293, pp. 25-28. ISSN 1367-3882.
  38. PĚLUCHA, M. Výzvy a rizika v nastavení programového období 2014–2020 pro Českou republiku: kontext strukturální politiky, politiky rozvoje venkova a Společné rybářské politiky EU. In: Sborník příspěvků z mezinárodní vědecké konference Region v rozvoji společnosti 2014 [CD-ROM]. Brno, 23. 10. 2014. Brno: Mendelova univerzita v Brně, 2014, pp. 688-696. ISBN 978-80-7509-139-0.
  39. PETZOLD, W. EU Cohesion Policy: A Short History of its Future. In: Reynolds, L. Great Regional Awakenings: New Directions. Abstract Book, Annual Conference 4th –7th June 2017. Trinity College Dublin, Ireland: 2017. p. 177. ISBN 978-1-897721-60-5.
  40. PÍŠOVÁ, E., D. GROLIG and M. HLÁDEK. Evaluation in Czech Republic. Working paper. Evaluation units from V4 countries & EC working meeting, Valtice, December 2–3, 2004. In: strukturalni-fondy [online]. 2004 [cit. 2016-12-15]. Available on-line: www.strukturalni-fondy.cz/.../1103037159working_paper_cz_en
  41. POTLUKA, O., V. KVĚTOŇ and M. PĚLUCHA. Možnosti aplikace counterfactual impact evaluation v České republice. Regionální studia. 2012, No. 2, pp. 45-51. ISSN 1803-1471.
  42. POTLUKA, O. and J. BRŮHA. Zkušenosti s kontrafaktuální dopadovou evaluací v České republice. Evaluační teorie a praxe, 2013, Vol. 1, No. 1, pp. 53–68.
  43. REMR, J. Současné praktiky provádění evaluací - Stručná informace o meta-evaluačním šetření provedeném v ČR. [přednáška]. Praha: Prezentováno na konferenci České evaluační společnosti, 2. 6. 2011.
  44. SHUTT, J., S. KOUTSOUKOS and M. PĚLUCHA. The Czech Republic’s Structural Funds 2007–13: Critical Issues for Regional Regeneration, In: DIAMOND, J., J. LIDDLE, A. SOUTHERN and P. OSEI (ed.). “Urban Regeneration Management: International Perspectives”. Routledge – Taylor & Francis Group (United Kingdom), 2010. ISBN 978-0-415-45193-2.
  45. SRHOLEC, M. Návrh obecných zásad hodnocení programů účelové podpory výzkumu, vývoje a inovací a potřebných systémových změn. Prague: MŠMT, 2015, 57 pages.
  46. STAME, N. The Culture of Evaluation in Italy. In: BARBIER, J. C. and P. HAWKINS. Evaluation Cultures – Sense making in complex times. New Brunswick, New Jersey: Transaction Publishers, 2012. pp. 19–44. ISBN 978-1-4128-4942-5.
  47. ŠUMPÍKOVÁ, M. et al. Veřejné výdajové programy a jejich efektivnost. Eurolex Bohemia, s. r. o., 2005. ISBN 80-86861-77-5.
  48. TOULEMONDE, J. Evaluation Culture(s) in Europe: Differences and Convergence between National Practices. Vierteljahrshefte zur Wirtschaftsforschung [online]. 2000, Vol. 69, No. 3, pp. 350-357. [cit. 2016-12-15]. Available on-line: http://ejournals.duncker-humblot.de/doi/pdf/10.3790/vjh.69.3.350
  49. TROCHIM, W. An Evaluation Culture. The Research Methods Knowledge Base. Last revised: 10/20/2006. In: socialresearchmethods [online]. [cit. 2016-12-15]. Available on-line: http://www.socialresearchmethods.net/kb/evalcult.php
  50. TAYLOR S., J. BACHTLER and L. POLVERARI. Structural Fund evaluation as a programme management tool: comparative assessment and reflections on Germany. Informationen zur Raumentwicklung [online]. 2001, No. 6/7, pp. 341-357. [cit. 2016-12-15]. Available on-line: http://www.bbsr.bund.de/BBSR/EN/Publications/IzR/2001/6_7TaylorBachtlerPolverari.pdf?__blob=publicationFile&v=3
  51. United States General Accounting Office. Program Evaluation – an Evaluation Culture and Collaborative Partnerships Help Build Agency Capacity. Report to Congressional Committees. GAO-03-454. May 2003.

 

[1] ESF (European Social Fund), ERDF (European Regional Development Fund) and EAFRD (European Agricultural Fund for Rural Development)

[2] This document was later condensed into the main document negotiated with the European Commission (i.e. the National Strategic Reference Framework 2007–2013).

[3] The work was finally published in 2015, but its assignment and funding officially belong to the 2007–2013 period.

[4] Financial spending is controlled under the n+3 rule, which means that the managing authorities of operational programmes need to spend their annual allocations within three years of the date of commitment by the programme; for example, an allocation committed in 2014 must be spent and claimed by the end of 2017. Otherwise, they face a risk of decommitment of funds.

[5] Since 2011, an approach to detecting the labour-market status of persons supported within the Operational Programme Human Resources and Employment has been applied, based on verifying their situation in the database of the Czech Social Security Administration (CSSA). Using basic identification data (e.g. name, surname and date of birth), the CSSA is able to identify the labour-market status of persons (i.e. employed, self-employed or unemployed) and, after the anonymisation of these microdata, provides the results to the responsible institutions. The problem is thus resolved by increased cooperation among central public authorities, which together hold relatively wide coverage of information on the population of the Czech Republic. After the treatment of the relevant records, it is possible to comply with the personal data protection act and to provide an adequate basis for assessing the degree of policy-instrument effectiveness.

