
Program Evaluation in the Municipal Sector – Case Study Slovakia and Canada

Dana Švihlová, Robert Shipley

Abstract

In most developed countries, planning initiatives intended to improve people’s lives are implemented through programs planned and delivered locally. Those responsible for such programs ought to be concerned about whether or not a program is having the desired outcome. In some sectors such as public health and education it appears that program results are carefully evaluated. But what about the programs initiated and managed by regional and local authorities? To address this question, programs in Slovakia and Canada were studied. Examples of housing and waste management programs in eight municipal authorities in each country were surveyed regarding their practices. The study therefore comprised a total of thirty-two cases. It was discovered that while the performance of most programs was being monitored, not all programs were being evaluated and that most often the evaluation focused on performance rather than outcomes. Some conclusions are drawn and recommendations offered.

Keywords

Evaluation, public values, municipal and regional programs, waste management, housing

Acknowledgement

The study was conducted by a Slovak – Canadian international team in 2015 as part of the "Mobility – Enhancing Research, Science and Education" project co-financed by the European Social Fund under the Operational Program Education at Matej Bel University in Banská Bystrica (ITMS code: 26110230082). The project had two purposes: to foster academic interaction among countries and, where appropriate, to transfer beneficial know-how and experience from one country to another. The authors would like to thank their colleague Ing. Zuzana Rigova, PhD, for her assistance.

  1. Introduction

In modern democratic countries around the world it is generally assumed that the primary role of governments at all levels is to try constantly to improve the lives of their citizens, to serve what we commonly refer to as "the public good" (Brooks, 2002). This principle is so fundamental that it is seldom even stated. Governments serve the public good by developing and implementing policies. A generation ago, Patton and Sawicki defined a policy as "a settled course of action to be followed by a government body or institution", and that meaning holds true today (1993, p. 66). Often included in the definition of policy is the notion that there is a clearly intended goal or purpose (OECD, 2010, p. 31). An example would be to increase accessibility for the disabled (Imrie, 2012). Policies such as accessibility are actually realized by creating programs, and once again Patton and Sawicki provide a time-tested definition: a program consists of "specific steps that must be taken to achieve or implement a policy" (1993, p. 66). For example, if the policy is to increase accessibility, then a program might be to require wheelchair ramps in buildings (Government of Ontario, 2012). The goals of particular programs hopefully come together to achieve the overall goal or purpose of the policy.

In the same way that serving the public good is taken as an obvious role of government, actually knowing whether or not programs are having their desired effect should also be a given. In fact, in at least some areas of public policy there is a well-developed field of study and practice, program evaluation, dedicated precisely to that enterprise. We have already defined "program" and there are several ways of stating the meaning of "evaluation": "the systematic process of collecting data in order to determine whether and to what degree objectives have been or are being achieved" (Boulmetis and Dutwin, 2000, p. 4); "the systematic determination of the quality or value of something" (Davidson, 2005, p. 1); "the systematic assessment of a program (sic) or policy using absolute (merit-based) or relative (worth-based) criteria" (McDavid, Huse and Hawthorn, 2013, p. 484).

It is important to make a further distinction between "monitoring", which is the systematic collecting of information, and evaluation, which involves analyzing and making sense of the information that has been collected. Evaluation can be very complex but generally falls into one of two categories: "formative", having to do with the process of carrying out a program, and "summative", which examines the results of the program, whether intended or unintended (Rossi, Lipsey and Freeman, 2004). Evaluation can be either ex-ante, the examination of a program prior to implementation, or ex-post, carried out during and after the implementation of a program (Laubli Loud and Mayne, 2014).

Turning from the broad realm of public policy and administration to the more local and specific, the importance and utility of evaluating municipal[1] planning programs is also well recognized (Bryson, Crosby and Bloomberg, 2014). A relatively new term, "public value", has emerged in the theoretical literature for discussing such programs (Andersen, Jørgensen, Kjeldsen, Pedersen and Vrangbæk, 2012; Chazdon and Paine, 2014). What is needed, however, is more empirical study, which legitimately asks the question: what is actually happening in practice? There may be monitoring (often a requirement of funding agencies), but is there any subsequent analysis of the resulting data concerning regional and local initiatives, and does that lead to modifications in the delivery of programs to improve them?

  2. Current State of Program Evaluation Practice

Probably because such large amounts of money are involved and because the programs are so close to people's day-to-day lives, there are two sectors of public administration where program evaluation is most advanced: public health and education. For example, in reviewing articles published between 2004 and 2013 about the evaluation of reproductive health, Casey (2015) initially identified 5,667 papers, of which 36 met a narrower set of criteria for her particular study. Similarly, in the area of education, Stern, Powell and Hill (2014) reviewed 66 articles that dealt strictly with evaluating environmental education programs. However, when we look for articles on the evaluation of municipal programs there are only a handful, and they are not very recent (Baum, 2001; Edwards and Clayton Thomas, 2005; Goodlad et al, 2005; Hoernig and Seasons, 2004; Laurian et al, 2004; Seasons, 2003; Shipley et al, 2004a; Shipley et al, 2004b). A cursory look at the titles of some of these articles ("How Should We Evaluate Community Initiatives?" and "Effectiveness at What?") indicates that a decade ago work in this area was just beginning.

Notable successes of health programs, such as the dramatic reduction in smoking, are even reported in the popular press (Shute, 2001). Similarly, most of the books on the subject of program evaluation focus on education and public health (Rossi et al, 2004; Bamberger, Rugh and Mabry, 2012; McDavid et al, 2013; Laubli Loud, 2014). University courses in countries all over the world educate future practitioners in the theory and techniques of evaluating health and education programs.

What is taught in the academic world and outlined in its textbooks are procedures for making sure, first of all, that programs are carried out according to plan and, more importantly, that their "outcomes" are well considered. We can see that health and education programs are being evaluated, but what about the many other areas of public policy that also consume large sums of government expenditure? Senior levels of government often set policy goals and fund initiatives, but the actual planning and delivery of programs is mandated to lower levels of regional, city and other municipal authorities. These policy areas can include energy efficiency, infrastructure such as water supply and sewage treatment, public transit, air quality, recreation, housing and waste management.

It was the observation of the authors from their previous experience that while the evaluation of programs in the local authority planning realm might be desirable and would certainly be useful from a program development point of view, such evaluation is perhaps not occurring at the level one might expect.

  3. Research Design and Methods

A research funding opportunity, part of the "Mobility – Enhancing Research, Science and Education" project co-financed by the European Social Fund, presented itself to the authors in 2012. In order to respond to the question of whether any useful transfer of know-how could be realized, it was decided to conduct research into program evaluation initiatives in two countries. Slovakia and Canada were chosen, in spite of their obvious differences, since it was the process of evaluation rather than specific details that was under investigation.

For the study, two policy areas were chosen: housing and waste management. It is in these two areas of municipal responsibility that the greatest similarity exists between approaches in the two nations. To be clear, the focus of the study was on program evaluation and not on housing and waste management per se. Regardless of what the programs in these two policy areas do or do not do or how they are operated, our questions concerned how and to what extent specific programs are and have been evaluated.

In Slovakia waste management policy is set at the national level and implemented locally while housing policy has previously been delegated to local governments. In Canada these two subject areas are the constitutional responsibility of provincial governments and so Ontario, Canada’s largest province, was selected as the appropriate jurisdiction for the study. In each country, eight municipal governments were selected for each of the two policy areas. That means there were eight housing agencies in Slovakia, eight in Canada and eight waste management agencies in each country for a total of 32 study sites. Since the research protocol being followed promises anonymity to respondents, the actual study site selection process cannot be revealed as it might allow identification of participants[2]. However, the same selection process was followed in both countries and assurance can be given that the sites all meet the same criteria in terms of relative size, rank order, geographic distribution and other characteristics.

Once the study sites were selected, the appropriate official in each place was identified. These people were contacted, first by email and then by phone, and asked to complete a questionnaire in their own language. The officials were given free rein to select a specific program under their direction to use as an example. In most cases the contact was the senior manager of housing or waste management, although in some instances those people delegated the response to other employees in their organizations. For example, if within housing policy the municipal official selected a rent rebate program for low-income families, they might have asked the employee in charge of that particular program to complete the questionnaire. In the end, 30 of the 32 study sites responded, with one housing department in each country failing to submit questionnaires. The reasons for the two instances of failure to respond were typical of research experience with local authority administration and will be addressed in the analysis section.

The discussions that took place during the process of recruiting municipal officials to complete the study questionnaire were essentially informal interviews. The responses and notes made by our informants were recorded and have been used to expand on the formal answers given in response to the questionnaire.

In the introductory correspondence to local authority officials, we identified the particular senior government policy, regulation or legislation on housing or waste management we were focusing on. In the housing field in Ontario it was The Places to Grow Act (2005). In the case of waste, it was the provincial Waste Reduction Strategy (2013). Under these policies the municipal initiatives that were reported on included programs such as Investment in Affordable Housing, Down Payment Assistance to Low to Moderate-Income Residents, Blue Bin Recycling and Curbside Collection to name just a few. In Slovakia the waste management program is created at the national level (Slovak National Parliament, 2001) with municipalities applying national goals to the programs they are responsible for implementing. Housing, on the other hand, has long been a responsibility assigned to municipalities. Their housing programs become part of the physical plans or social and economic plans.

It was established that under these senior government directives or delegated powers, the local authorities were responsible for delivering specific programs. Thus, we wanted to know the following: Were these local programs being monitored? Were they being evaluated and if so, how? We also wanted to know about the preferred values of program implementers. While there were some questions of clarification, all of the respondents clearly understood the context of our inquiry and commented on programs that fit within the parameters that we outlined.

The survey, which was administered using Google Forms, posed eight questions in three categories: monitoring, evaluation and values. In the questionnaire, monitoring was clearly defined as regular activity focused on the systematic collection, aggregation and saving of relevant information for examining the operation of a program and its evaluation. If the respondent indicated that there was no monitoring, they were instructed to skip the rest of the questions about monitoring and continue to the next set of questions.

In the questionnaire, evaluation was clearly defined as the systematic process for assessing whether and to what extent the program goals have been reached and what benefits, impacts and outcomes have resulted from the program activities.

In developing the last question, concerning values, we turned to the work of several theorists including Chazdon and Paine. In their 2014 article "Evaluating for Public Value", they suggested a framework intended to integrate the idea of public value with program evaluation based on the Public Values Strategic Triangle, which was in turn derived from Moore's three types of management processes: (1) identify the public purpose of the program; (2) manage upward, toward the political arena, to gain legitimacy and support for their purpose; and (3) manage downward, toward improving the organization's ability to achieve its desired purposes (Chazdon and Paine, 2014, p. 101).

The six statements contained in question 8 are derived in part from this Public Values Strategic Triangle. Responses 8 (a) and 8 (b) concern the public purpose of the program. Responses 8 (c) and (d) have to do with the political arena while responses 8 (e) and (f) refer to organizational ability. The reason for having more than just the three statements is that the questionnaire was also developed in part from our exploration of other literature sources and from interviews with practitioners.

  4. Findings

The responses to this questionnaire are outlined here. When asked if the programs the municipal officials had identified were monitored, the response from the Canadian examples was definitive with all 15 respondents reporting that they were (see Figure 1). In the case of the Slovak sample, the response was somewhat less complete. Two of the eight waste management cases and one of the seven housing programs were not monitored.

When asked how the information from monitoring was used, participants responded in several ways (see Figure 2). All of the Canadian respondents and two thirds of the Slovak housing program implementers indicated that they used the information "in reporting on the operation of the program". Only one of the Slovak waste managers gave this answer. More than half of the Canadian and a third of the Slovak housing program implementers said they used the monitoring information "for altering or updating the program during its implementation". Two thirds of the Slovak and three quarters of the Canadian waste managers indicated monitoring information would help in "preparing a new program".

[Figures 1 and 2: PE_Obr1a2.jpg]

Where monitoring took place, employees from within the organizations were involved in all cases but in all instances, except housing programs in Slovakia, people from outside the agencies were also involved. Sometimes the outside monitors were from higher levels of government.

Question 4 inquired about evaluation. The responses indicated that seven out of eight of the waste management initiatives in Canada had been evaluated, while only three out of eight similar programs in Slovakia had been (see Figure 3). Officials in the five Slovak cases that were not currently being evaluated indicated that the programs would be evaluated only when they were completed. The monitoring being undertaken in these cases did not appear to be linked to the evaluation that was to happen after program completion. Housing policy programs in both countries showed the same results, with five out of seven being evaluated in each country.

In the cases where evaluation was carried out, employees from within the organizations were involved in every instance. This result was similar to the responses concerning monitoring. However, there was also outside involvement in the evaluation cases of waste management in both countries and housing policy in Canada. Only in the case of housing policy in Slovakia was there no outside participation in evaluation.

 

[Figure 3: PE_Obr3.jpg]

When asked what the purposes of the evaluation were, the responses varied, but the most common was to determine how the program met its targets or goals. This was the answer from 80 % to 100 % of the waste management programs in both Canada and Slovakia and from the housing sector in Canada. Only the housing programs in Slovakia reported less emphasis on evaluation to determine whether goals had been reached.

The next two most common reasons for evaluation were to determine if the program was implemented as intended and to see if the program was designed and structured in a way that made it possible to achieve its goals. From half to 100 % of the programs cited these reasons except, as before, housing programs in Slovakia. Determining how programs influenced changes in attitudes, behavior and policies was near the bottom of the list of reasons given for evaluation, but was a bit more common than comparing the costs of the program with the outputs, such as meetings held, literature distributed and so on. None of the program implementers in Slovakia did evaluation to compare the costs of undertaking the program with the outcomes achieved, where outcome means the longer-term results. In Canada this was a reason for less than half the housing program evaluations but was a justification for over three quarters of the Canadian waste management programs.

The number of programs that were not evaluated at all is very small and the responses inconclusive, but the reasons most often given were that evaluation was not explicitly required by the funding agencies and that no guidelines were provided. Somewhat troubling is the notion expressed by some implementers that evaluating the program would not change anything, since they were not allowed to alter the implementation guidelines. We will return to this idea in the analysis section below.

The final question in the survey dealt with what we might see as the values underpinning the practice of program evaluation. Respondents were asked what their priorities would be if they were given the opportunity to decide the focus of a program evaluation. There were six statements that program implementers were asked to rank in order of importance.

 

[Figure 4: PE_Obr4.jpg]

 

Figure 4 shows that a third of respondents in both countries thought that looking at long-term economic, environmental and social change would be most important to them if they had a choice. A similar percentage of Canadian program implementers felt that efficiency and cost effectiveness would be most important. Beyond that there were four other possibilities, which received little support. Less than a quarter of the Slovak program managers and none of the Canadians gave first priority to finding out "if the resources available to the program implementers and their organizational capacity were adequate and best suited to the task". Smaller percentages of respondents listed the other options as primary questions.

In Figure 5, however, we have grouped the six statements posed in question 8 according to Moore and Chazdon and Paine’s Public Values Strategic Triangle. The three sides of that triangle are public purpose, the political arena and organizational ability. Figure 5 merges the responses to the pairs of statements from question 8 of the questionnaire.

 

[Figure 5: PE_Obr5.jpg]

In this configuration a clearer picture emerges. In both countries, public purpose goals are seen as the most important reasons for conducting program evaluation. Learning about the implementing organization's own abilities and competence is next in importance, with political considerations showing up as least important.

  5. Analysis

The sample we chose was limited but representative. In each of the two countries, senior officials from the eight departments responsible for designing and delivering mandated programs in each of the two policy areas, waste management and housing, were surveyed. Of the thirty-two agencies, thirty responded. While this is a sub-statistical sample and while there were no overwhelmingly clear trends, as a descriptive study it shows some important factors and logically points to some useful recommendations.

The first general observation is that virtually all of the survey respondents were at least somewhat familiar with the language of program evaluation. This was true of the surveys both in Slovak and in English. There were no questions, either in correspondence or in phone conversations, about terms such as monitoring, input, output and outcome. In the cases of the two departments that did not respond to the survey, other factors were at issue, and failure to understand the questionnaire was not among them. In one instance the department, being in the largest municipality in the country, was simply overworked, while in the other case there was a personnel transition underway that left no one available to respond. We can conclude that senior public officials are to some degree knowledgeable about the concept of program evaluation. This is true in spite of the fact that neither the concept nor the practice is part of the normal study curriculum that leads people into the kinds of positions occupied by our respondents.

Secondly, all of the Canadian and most of the Slovak programs identified were being monitored. That means that some form of data gathering was usual and in many cases was part of the organizational culture of the departments surveyed. Fewer of the programs were being formally evaluated. For example, in the cases of the five Slovak waste management programs that were to be evaluated on completion, there did not appear to be a plan in place to gather information to be used in that evaluation. The monitoring data might have fulfilled that purpose, but if that was the plan it was not clear.

In all cases both the monitoring and the evaluation were conducted largely in-house by department employees, but in some cases outside personnel, specifically senior government officials, were involved. Most of the data, however, is used to report on operations, which is what we would call process or formative evaluation. A considerable number of respondents indicated that monitoring information was used for altering or updating the program during implementation and/or for preparing new programs. When it comes to evaluation, however, it appears that once again the emphasis is on process issues such as how the program targets have been met. The focus seems to be on measuring outputs rather than outcomes.

In conversations with program implementers in the Canadian context, there was a recognition that their traditional focus had been on immediately measurable targets, such as the amount of waste collected or the number of housing units provided, and not on the longer-range outcomes or impacts, such as behavioral change in the populations being served. As one respondent put it, "we are experiencing a change from counting what we do to measuring the impact it is having". With regard to a new government initiative, one person responsible for implementation said her department was "currently developing a housing program and struggling to come up with suitable indicators of impact rather than just measuring outputs".

The desire to do a better job of evaluating in order to improve program performance is clearly reflected in the responses to the final, value-oriented question in the survey. Emphasis there was very much on understanding the long-term changes brought about by programs and, in particular, on going beyond economics to include environmental and social changes. Measuring effectiveness, efficiency and organizational capacity were also expressed as desirable values. Political matters were considered least important. At the same time, more esoteric ideas, such as understanding what might have happened without the program, are less valued by program implementers. What appears to be a frustration for department officials charged with delivering programs is the reality that even if they do evaluate initiatives, they still have to follow rules dictated by senior government officials and are not able to make changes to improve delivery. This was given by some as the reason for not undertaking formalized evaluation. In some cases programs with large budgets had to be delivered according to inflexible procedures, but smaller, locally conceived and funded programs could be improved based on what was learned from evaluation.

A third observation from the surveys is that there is somewhat more evaluation going on in Canada than in Slovakia. This can be explained by the fact that a) the agencies and departments delivering programs in Canada have generally been in place longer and have deeper administrative traditions, b) the government structures in which the departments work are more established and c) there are more financial and staff resources available in Canada. Local authorities with their current responsibilities have virtually all been established in Slovakia since 1993 and consequently do not have the same stability as their Canadian counterparts that have operated for many decades.

 

  6. Conclusion and Recommendations

It is clear that the utility of evaluating the programs intended to implement public policy with the idea of improving and optimizing them is understood and accepted in modern democracies. The current research shows, however, that there are some serious issues with realizing the benefits of program evaluation outside the fields of public health and education. There are at least four areas where improvements can be made: 1) educating program implementers and other municipal government staff, 2) providing adequate funds, 3) giving proper guidance, and 4) delegating authority to evaluate and improve programs.

The first point concerns education. Local or regional governments should either require prospective program implementers to have previous training in program evaluation, provide in-service training, or hire specialists who design and conduct evaluation across different departments. In a few of the municipalities surveyed for this study, there were such evaluation specialists in place. While program evaluation courses are offered at various universities in Canada, they are usually taught in departments of public health or education and not in schools of planning and public administration. No university study programs in Slovakia offer training in program evaluation[3]. Practitioners contacted as part of this study in both countries generally had some knowledge of program evaluation, but most recognized their own limitations, as reflected in the use of words such as "struggling".

The second point concerns adequate funding for evaluation. Respondents reported that too often the evaluation function is left for the end of the program and is often the first budget item to be cut when funding is limited. Regular, thorough and professional evaluation should be built into every program from the beginning. Not only should there be adequate funding for evaluation, but the third important point is that agencies that design and implement programs should also have clear direction and guidelines. Several versions of such guidelines exist and are promulgated by institutions such as the World Bank, the European Union and the OECD, but too often these evaluative structures are not provided to implementers and are not necessarily suitable for use at the local level.

The last and perhaps most important issue concerns subsidiarity, which is the principle of pushing responsibility for any action down to the level closest to those whom the policy and program are intended to benefit. It was disconcerting in this study to find implementers who were not evaluating the impact of their programs because, they said, they were not free to improve their performance even if they could see ways to do so.

 

[Appendix 1: PE_Apx1.jpg]

[Appendix 2: PE_Apx2.jpg]


 

 

References

  1. ANDERSEN, L. B., T. B. JØRGENSEN, A. M. KJELDSEN, L. H. PEDERSEN and K. VRANGBÆK. Public value dimensions: Developing and testing a multi-dimensional classification. International Journal of Public Administration, 2012, Vol. 35, No. 11, pp. 715-728.
  2. BAMBERGER, M., J. RUGH and L. MABRY. RealWorld Evaluation: Working under budget, time, data and political constraints. Thousand Oaks: SAGE Publications, Inc., 2012.
  3. BAUM, H. How Should We Evaluate Community Initiatives? Journal of the American Planning Association, 2001, Vol. 67, No. 2, pp. 147-158.
  4. BOULMETIS, J. and P. DUTWIN. The ABCs of Evaluation: Timeless Techniques for Program and Project Managers. 3rd ed. San Francisco: Jossey-Bass/Wiley, 2000.
  5. BROOKS, M. P. Planning theory for practitioners. Chicago: APA Planners Press, 2002.
  6. BRYSON, J. M., B. C. CROSBY and L. BLOOMBERG. Public Value Governance: Moving Beyond Traditional Public Administration and the New Public Management. Public Administration Review, 2014, No. 74, pp. 445-456.
  7. CASEY, S. E. Evaluations of reproductive health programs in humanitarian settings: A systematic review. Conflict and Health [online]. 2015, Vol. 9, Suppl. 1, S1. doi: http://dx.doi.org/10.1186/1752-1505-9-S1-S1.
  8. CHAZDON, S. and N. PAINE. Evaluating for Public Value: Clarifying the Relationship Between Public Value and Program Evaluation. Journal of Human Sciences and Extension, 2014, Vol. 2, No. 2, pp. 100-119.

  9. DAVIDSON, E. J. Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation. Thousand Oaks, CAL: SAGE Publications, Inc., 2005.
  10. ORGANIZATION FOR ECONOMIC COOPERATION AND DEVELOPMENT (OECD). Glossary of Key Terms in Evaluation and Results Based Management. Paris: OECD, 2010.
  11. COUNCIL OF ONTARIO UNIVERSITIES. Final Proposed Accessible Built Environment Standard. In: Accessiblecampus [online]. 2010 [cit. 2015-09-25]. Available on-line: http://www.accessiblecampus.ca/administrators/built-environment
  12. EDWARDS, D. and J. CLAYTON THOMAS. Developing a Municipal Performance-Measurement System: Reflections on the Atlanta Dashboard. Public Administration Review, 2005, Vol. 65, No. 3, pp. 369-376.
  13. GOODLAD, R., P. BURTON and J. CROFT. Effectiveness at what? The processes and impact of community involvement in area-based initiatives. Environment and Planning C: Government and Policy, 2005, Vol. 23, No. 6, pp. 923-938.
  14. GOVERNMENT OF ONTARIO. Accessibility for Ontarians with Disabilities Act, 2005. In: E-laws.gov.on [online]. Service Ontario e-Laws, 2012 [cit. 2015-11-15]. Available on-line: http://www.e-laws.gov.on.ca/html/regs/english/elaws_regs_060350_e.htm
  15. HOERNIG, H. and M. SEASONS. Indicators for monitoring local and regional planning practice: concepts and issues. Planning Practice and Research, 2004, Vol. 19, No. 1, pp. 81-99.
  16. IMRIE, R. From Universal to Inclusive Design in the Built Environment. In: SWAIN J., S. FRENCH, C. BARNES and C. THOMAS (Eds.). Disabling Barriers – Enabling Environments. 2nd ed. London: Open University, 2004, pp. 279-284.
  17. LAUBLI LOUD, M. L. and J. MAYNE (Eds.). Enhancing Evaluation Use: Insights from Internal Evaluation Units. Thousand Oaks, CAL: SAGE Publications, Ltd., 2014.
  18. LAURIAN, L., M. DAY, P. BERKE, N. ERICKSEN, M. BACKHURST, J. CRAWFORD and J. DIXON. Evaluating Plan Implementation. Journal of the American Planning Association, 2004, Vol. 70, No. 4, pp. 471-480.
  19. MCDAVID, J. C., I. HUSE and L. R. L. HAWTHORN. Program Evaluation and Program Performance Measurement. An Introduction to Practice. Thousand Oaks, CAL: SAGE Publications, Ltd., 2013.
  20. PATTON, C. and D. SAWICKI. Basic Methods of Policy Analysis and Planning. Upper Saddle River NJ: Prentice Hall, 1993.
  21. ROSSI, P. H., M. W. LIPSEY and H. E. FREEMAN. Evaluation: A systematic approach. Thousand Oaks, CA: SAGE Publications, 2004.
  22. SEASONS, M. L. Monitoring and Evaluation in Municipal Planning: Considering the Realities. Journal of the American Planning Association, 2003, Vol. 69, No. 4, pp. 430-440.
  23. SHIPLEY, R., A. REEVE, S. WALKER, P. GROVER and B. GOODEY. Townscape Heritage Initiative Evaluation. Environment and Planning C: Government and Policy, 2004, Vol. 22, No. 4, pp. 523-542.
  24. SHIPLEY, R., R. FEICK, B. HALL and R. EARLEY. Evaluating Municipal Visioning. Planning, Practice & Research, 2004, Vol. 19, No. 2, pp. 193-207.
  25. SHUTE, N. Kicking the habit. In: U.S. News & World Report [serial online]. December 17, 2001, Vol. 131, No. 25, p. 48.
  26. SLOVAK NATIONAL PARLIAMENT. Act No. 223/2001 Coll. of Laws on Waste and on Amendment of Certain Acts, published in the Collection of Laws of the Slovak Republic. In: eionet.europa [online]. 2001 [cit. 2015-11-03]. Available on-line: http://scp.eionet.europa.eu/facts/factsheets_waste/2009_edition/factsheet?country=SK
  27. STERN, M. J., R. B. POWELL and D. HILL. Environmental education program evaluation in the new millennium: What do we measure and what have we learned? Environmental Education Research, 2014, Vol. 20, No. 5, pp. 581-611.
  28. WORLD HEALTH ORGANIZATION (WHO). Disabilities. In: WHO [online]. [cit. 2015-10-27]. Available on-line: http://www.who.int/topics/disabilities/en/

[1] We have used the terms “municipal” and “local authority” interchangeably.

[2]  Authorization for the project was received after review by the University of Waterloo Research Ethics Office.

[3]  The current authors have developed such a course for Matej Bel University Faculty of Economics and it was offered in 2014 and 2015.

