ORGAPET Section B2:
Assessing the Content (Logic and Coherence) and Failure Risk of Action Plans

Raffaele Zanoli and Daniela Vairo
Polytechnic University of Marche, Ancona, IT

Nic Lampkin
Aberystwyth University, UK

Version 6, April 2008

B2-1    Introduction

The aim of this section is to illustrate techniques for evaluating programme content (logic and coherence, internal and external synergies and conflicts, prioritisation) as well as failure risk, drawing on approaches such as logical analysis, cross-impact matrices and failure mode and effect analysis. This section is linked closely to Section C1, which focuses on the definition and clarification of objectives as a basis for evaluating programme outputs. It also develops some of the design and implementation process themes examined in Section B1 and the programme theory issues addressed in Section A3.

B2-2    Structuring objectives and logical frameworks

As a starting point for the analysis of programme content, it is necessary to have clearly specified objectives. Section C1-1 considers how objectives can be specified in a SMART way, as well as the need for implicit as well as explicit objectives to be considered in the evaluation process. This section focuses on the logical analysis of programmes in terms of the relationship between programme goals, objectives and specific action points. There should be a high level of internal coherence (i.e. correspondence or logical links) between objectives and, in particular, between the operational objectives (actions) and the more global objectives. The assessment of this requires a structured approach to objective hierarchies and cause and effect relationships. The Evalsed framework suggests concept mapping and logic models as approaches that can be used in this context, building on and using programme theory to explain the links between objectives, policy instruments and outcomes. Europeaid also provides a good description of logical analysis procedures.

B2-2.1    Objective hierarchies

The issues of relevance and coherence imply that there needs to be some context which makes the objective meaningful. This may just be with respect to a currently existing situation, but usually implies that there are other objectives involved. Is the objective an end in itself, or a means to another end? If the question:

Why is this objective relevant/important?

can be answered with another higher-level objective (aims or goals) then this also needs to be considered in an evaluation. Similarly, within a hierarchy of objectives, if the question:

How can this objective be achieved?

can be answered:

by doing a particular task or action

then this represents a movement down a hierarchy of objectives to lower-level or sub-objectives. These may include, as in organic production standards, guiding principles (e.g. working within closed cycles) or practices (e.g. not using synthetic nitrogen) which, taken in isolation, serve no obvious purpose and can only be made meaningful with reference to the hierarchy of objectives.

The hierarchy of objectives may be structured using an objectives tree, with the level expressing the place of the objectives in the cause and effect system. In Figure B2-1, row one represents the top-level, global objective; each subordinate objective is related to one objective in the row above; and interactions between objectives in the same row and feedback links (i.e. effects becoming causes and vice versa) are not represented.

Figure B2-1: Standard objectives tree

Source: Europeaid

In this model, a long-term, global strategic objective (the first-level objective) is fulfilled through the completion of a range of second-level objectives. This holds even where the first-level objective appears straightforward. Each second-level objective depends on the completion of several third-level objectives, and so on, down to the operational objectives. An objective is therefore usually understood as a means of achieving a higher-level aim, while itself depending on the completion of subordinate means or objectives. The objectives system is the presentation of all objectives at all levels, with their respective links.
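The question-based movement up and down the hierarchy can be made concrete with a small data structure. The following is a minimal sketch in Python; the Objective class and all objective names are hypothetical illustrations, not part of ORGAPET:

```python
# A minimal sketch of an objectives hierarchy as a tree structure.
# Asking "why?" of an objective moves up the hierarchy; asking "how?"
# moves down to its sub-objectives. All objective names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Objective:
    name: str
    sub_objectives: list["Objective"] = field(default_factory=list)

def print_tree(obj: Objective, level: int = 1) -> None:
    """Print each objective with its level in the cause-and-effect system."""
    print("  " * (level - 1) + f"Level {level}: {obj.name}")
    for sub in obj.sub_objectives:
        print_tree(sub, level + 1)

# Hypothetical example loosely modelled on an organic action plan
tree = Objective("Develop the organic sector", [
    Objective("Increase consumer demand",
              [Objective("Run an information campaign")]),
    Objective("Improve production standards",
              [Objective("Harmonise inspection procedures")]),
])
print_tree(tree)
```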

What actually happens in thematic, sector-based or geographical evaluations is more complex than that illustrated in objectives systems. Indeed, the strategies and programmes under evaluation do not systematically result from objective-based planning. Even if they do, the logical classification of objectives may result from decisions based on circumstances, rather than from a rational selection derived from the fundamental issues, so that the formal objectives system does not fully capture the strategy actually pursued.

The use of standard objectives trees in evaluations assumes that the definition of the objectives is rational and that the strategic scope can be reduced to a simplified illustration. These limitations explain why objectives diagrams, whose shape can vary more freely, are favoured over strictly codified objectives trees.

Figure B2-2: Complex objectives diagram

Source: Europeaid

B2-2.2    Logical frameworks

This process can be linked to the specification of logical frameworks (Table B2-1) which link different objective levels to indicators and cause-effect assumptions (programme theory), and which may be developed in a participatory process (Figure B2-3). In this context, the logical framework is intended as a learning and negotiation tool, although the assumption that a consensus can be reached for objectives and strategies among stakeholders may not be valid.

Table B2-1: Structure of logical framework

              | Narrative summary | Objectively verifiable indicators | Means of verification | Important assumptions
   Goal       |                   |                                   |                       |
   Purpose    |                   |                                   |                       |
   Outputs    |                   |                                   |                       |
   Activities |                   |                                   |                       |

Source: Europeaid

Figure B2-3: Analysis steps in developing a logical framework

Source: Europeaid

Once the hierarchical structure and logical framework have been established, it is possible to progress to the definition of impact statements and indicators (Section C2), as well as to the evaluation of synergies and conflicts between objectives.

B2-3    Evaluating programme synergies and conflicts

A key purpose of the action plan approach is to integrate different policy measures so that synergies between measures are maximised. This should produce an impact which is greater than the sum of the impacts that the measures would produce if implemented in isolation. Synergy generally refers to positive impacts, but measures may also contradict or conflict with each other, or involve wasteful duplication. Such conflicts may be referred to as negative synergy or anti-synergy and, in some cases, may be a major factor in the implementation failure of policy programmes (see also Section B1). Therefore, action plans should also aim to minimise conflicts or negative synergy. The coherence of the action plan thus relies not just on a logical structure, but also on the promotion of desirable synergies and the avoidance or elimination of negative ones.

This section considers how synergies and conflicts can be identified and evaluated, using examples of stakeholder workshops (Annex B1-1 and Annex C3-5) and cross-impact matrices (Annex B2-1) as techniques for evaluating internal and external conflicts and synergies.

B2-3.1    Stakeholder workshops

As part of the ORGAP project, two series of eight national workshops were held, which examined external synergies and conflicts between the EU Organic Action Plan and national organic farming policies as one potential source of implementation failure risk.

In the first series of workshops (Annex C3-5), participants discussed the synergies between the EU action plan on organic food and farming and national action plans and policies, in preparation for a more in-depth study of the issues arising in the second series. Participants were encouraged to rate the synergies between eight topics from the EU action plan, which together summarised its 21 actions, and their own national action plan or policy. Response patterns and synergy ratings were then evaluated at both country and stakeholder level. Within individual member states there were clear patterns in the ratings and in the comments made by participants: most participants in a member state often agreed on how to rate the synergies, even though facilitators encouraged a diversity of views. Across member states, however, agreement between stakeholders on the synergy ratings was found in only a few cases, and was too unsystematic to be evaluated further. Full details of the methods used and the results from the individual workshops can be found in Annex C3-5.

The main focus of the second series of workshops was the potential for implementation failure and coping strategies with respect to the implementation of the EU action plan at national level, building on the synergy and conflict issues identified in the first series. The theoretical background for the workshops was implementation research and its focus on conflicts between various actors as the main explanation for implementation success or failure. Conflicts are expected to influence implementation success through stakeholders’ willingness, capability and comprehension in relation to the policy to be implemented. This illustrates that internal and external synergies and conflicts with respect to programme content may be part of the broader interaction between stakeholders and institutions. The full results are summarised in Section B1-4 and in Annex B1-1.

B2-3.2    Cross-impact matrices

The application of cross-impact matrices as a means of analysing synergies and conflicts between objectives, described here, is based on the MEANS approach (EC, 1999: Vol. 4, Part III). According to this approach, internal synergy derives from three elements of internal coherence: interdependence of the objectives, complementarity between the measures and objectives, and co-ordination between the measures (in time and space). As in the workshop examples considered above, external synergy relates to the interactions between programmes at EU level, or between national and EU level programmes.

Depending on the structure of the programme concerned, it may be more relevant to analyse synergy between the objectives, the measures, the actions or the projects. The level of analysis chosen obviously depends on the number of programme components at each level. Some programmes consist of only a few projects, which makes it possible to analyse synergy rapidly at that level. If the number of projects is very high, it may be preferable to analyse synergy at the measures level. The choice of level of analysis can be made by referring to the objectives diagram.

Once a level of analysis has been chosen, the matrix of cross-impacts is constructed with as many lines and columns as there are programme components at that level. Table B2-2 shows an example of a cross-impacts matrix for the EU Organic Action Plan, which was used as a basis for analysis as part of the ORGAP project (see below and Annex B2-1).

Experts involved in the evaluation process (the evaluation team) should identify any synergy which may exist between pairs of measures. Only the BOTTOM half of the matrix (below the main diagonal) should be filled in, unless experts strongly support cases of asymmetrical synergy (a relationship of non-reciprocal interdependence). The main diagonal should NOT be filled in.

Table B2-2: Matrix of cross-impacts on 21 actions of the EU Organic Action Plan (an example)

 

In order to ensure convergence of opinions, the rating of the effects of synergies or conflicts could be performed in two consecutive rounds.

First round

The evaluation team should compare pairs of measures to identify any synergies which may exist.

When some kind of synergy seems possible, a value on the following scale should be chosen, corresponding to the size of the effect:

 2:  particularly strong positive synergy;
 1:  weaker positive synergy;
-1:  weaker negative synergy (conflict);
-2:  strong negative synergy (conflict).

(If there is no synergy, the cell can be left blank; a zero value is implicit.)

Second round

In a second round, the evaluation team should discuss and validate assumptions concerning the positive and negative synergies identified in the matrix.

After validation of these ratings, the 'synthetic' coefficient of synergy should be calculated, in order to evaluate the overall level of synergy/conflict within the action plan. Cs+ and Cs- represent the synthetic coefficients of positive and negative synergy for each measure. If all potential synergies (conflicts) between measures had received the maximum score, the coefficient would be equal to 1.0 (-1.0); the coefficient would be equal to 0.0 if neither positive nor negative synergies exist. Consistent with these properties, the coefficients for measure i can be written as:

C_{s,i}^{+} = \frac{\sum_{j \neq i} \max(s_{ij}, 0)}{2(n-1)} \qquad C_{s,i}^{-} = \frac{\sum_{j \neq i} \min(s_{ij}, 0)}{2(n-1)}

where s_{ij} \in \{-2, -1, 0, 1, 2\} is the validated score for the pair of measures (i, j) and n is the number of measures being compared.

In order to have a global picture, total average Cs+ and Cs- could be calculated as the average synthetic coefficients for each measure across all expert judgements.
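As an illustration, the following minimal Python sketch computes Cs+ and Cs- for a single expert's matrix, using the normalisation above. The matrix values are invented for the example; in the full procedure, coefficients would also be averaged across all expert judgements:

```python
import numpy as np

# A minimal sketch of the synthetic coefficients of synergy, assuming the
# normalisation described above. S holds the elicited lower-triangle scores
# in {-2, -1, 0, 1, 2}; the figures are illustrative, not the ORGAP results.
n = 4                                   # number of measures
S = np.zeros((n, n))
S[1, 0], S[2, 0], S[2, 1], S[3, 1] = 2, 1, -1, 1
S = S + S.T                             # mirror the lower triangle (diagonal stays 0)

max_total = 2 * (n - 1)                 # score if every pair were rated at the maximum
cs_pos = np.clip(S, 0, None).sum(axis=1) / max_total
cs_neg = np.clip(S, None, 0).sum(axis=1) / max_total

for i in range(n):
    print(f"Measure {i + 1}: Cs+ = {cs_pos[i]:.2f}, Cs- = {cs_neg[i]:.2f}")

# Overall averages across measures for this expert's matrix
print(f"Average Cs+ = {cs_pos.mean():.2f}, average Cs- = {cs_neg.mean():.2f}")
```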

In addition, the coefficient of variation of Cs+ and Cs- could be calculated. The coefficient of variation (CV) is a measure of the dispersion of values within a sample, defined as the ratio of the standard deviation to the absolute value of the mean:

\mathrm{CV} = \frac{\sigma}{|\mu|}

If the standard deviation is equal to the absolute value of the mean, the coefficient of variation is equal to 1. Distributions with CV < 1 are considered low-variance, while those with CV > 1 are considered high-variance.

More specifically: if CV < 1, there is relative agreement on synergies/conflicts among the experts concerning a specific measure; if CV > 1, there is little agreement on synergies/conflicts among the experts concerning that measure.
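A minimal sketch of this agreement check, assuming each expert's Cs+ value per measure is available (all values are invented for illustration):

```python
import numpy as np

# A minimal sketch of the agreement check: the coefficient of variation of
# the experts' Cs+ ratings for each measure. Rows are experts, columns are
# measures; all values are illustrative.
ratings = np.array([
    [0.50, 0.30, 0.10],
    [0.55, 0.10, 0.40],
    [0.45, 0.60, 0.05],
])

mean = ratings.mean(axis=0)
std = ratings.std(axis=0, ddof=1)       # sample standard deviation
cv = std / np.abs(mean)

for m, c in enumerate(cv, start=1):
    verdict = "relative agreement" if c < 1 else "little agreement"
    print(f"Measure {m}: CV = {c:.2f} -> {verdict}")
```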

The evaluation team could be asked to match quantitative evaluations on conflicts and synergies with qualitative comments and explanations of the ratings given.

Where significant synergies are identified that are of major interest for the continuation or success of the programme, empirical validation should be considered, as this will provide a basis for recommending improvements or triggering synergy effects within the programme. Identification of specific 'windows of opportunity' within the cross-impact matrix can provide the basis for interviews or focus groups with addressees of the programme (beneficiaries) to review/validate the assumptions and observations. 

B2-3.3    Synergy and conflict between EU Organic Action Plan action points

As part of the ORGAP project, experts from the project partners and the project advisory committee were asked to evaluate the overall level of synergy/conflict between the 21 actions of the EU Organic Action Plan (Annex B2-1). Figure B2-4 illustrates the results of this synergy and coherence analysis of the EU plan. Synergies between actions largely prevail, while opinions on conflicting actions are not shared by all members of the team, as shown by the larger standard error bars.

Figure B2-4: Synergy/conflict between EU Organic Action Plan action points


The analysis suggests that Actions 9 and 10 are essential for the success of the EU Organic Action Plan, given their synergetic effects; they also have synergies with many other actions. Action 13 is also of interest, with a high coefficient of synergy and a large number of measures with which it interacts. By contrast, Action 4 appears to be a stand-alone measure, since it enters into synergy with an average of only three other actions. Action 16 is somewhat peculiar: it has a fairly weak coefficient of synergy (0.59) yet exhibits synergies with many other actions (68). Its potential for synergy is weak despite these numerous interactions, because the interactions are individually weak; moreover, Action 16 combines positive and negative synergy effects, even if the conflicts appear very weak.

B2-4    Priority evaluation

The determination of priorities for allocation of resources within a programme is also an important issue. Resources (finances and staff) are seldom available in sufficient quantity to enable all actions to be implemented simultaneously or objectives to be achieved to their maximum possible extent. Given that external circumstances are likely to have changed in the intervening period, it might well be that priorities of the interested parties have shifted over time, so that a re-evaluation of priorities could help to identify how changed priorities might influence the interpretation of the evaluation results.

Simulated market place (priority evaluator) techniques can be utilised both in developing an action plan and in evaluating the content of the plan after the event. These have some similarities with techniques for revealing preferences in multi-criteria and cost-benefit analysis (Section D1-5) and also with the budget allocation and voting systems described in Section A4-4.

The priority evaluator method, pioneered by Hoinville and Berthoud (1970), usually involves the use of social surveys to collect information but can also be applied in a stakeholder workshop context. The method is an attempt to combine economic theories with survey techniques in order to value unpriced commodities, such as development or environmental conservation, in situations where there is likely to be a conflict of interest between different people or interest groups, and the choice of any option will require a trade-off. The priority evaluator technique is also used to collect information on stated preferences, usually as part of the work to design initiatives which will best meet the aspirations of the intended target groups.

Respondents are allocated a hypothetical budget and are offered a set of items they could purchase, each with a hypothetical price. Values are then derived from the preferences respondents reveal in spending their budget on the items. The use of a hypothetical budget means that the choices are not limited by the imperfections and limitations of the market, including both inequalities in income distribution and lack of information about actual prices for the choices on offer. For example, respondents could be offered five 'goods' in three different quantities (say 1, 2 and 3 units), providing fifteen possible choices, each reflecting a relative price for each good. The experiment is repeated for each individual with different relative prices until the ratio of the observed frequency of selection to the expected frequency is one for all fifteen combinations, thus revealing the true preferences and marginal valuations.
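The budget-allocation logic can be illustrated with a minimal sketch of a single round. The items, prices and preference order below are entirely hypothetical, and the full method would repeat the exercise with varied relative prices:

```python
# A minimal sketch of one round of a priority-evaluator exercise: a respondent
# spends a hypothetical budget on items at hypothetical prices, and the
# resulting allocation is read as a revealed priority. Items, prices and the
# preference order are illustrative only.
budget = 100
prices = {"demand-push measures": 20, "supply-pull measures": 25, "research": 15}

# The respondent buys units in a (hypothetical) order of preference
# until the budget no longer covers the next purchase.
preference_order = ["demand-push measures", "research", "supply-pull measures",
                    "demand-push measures", "research"]

spent = {item: 0 for item in prices}
remaining = budget
for item in preference_order:
    if prices[item] <= remaining:
        spent[item] += prices[item]
        remaining -= prices[item]

for item, amount in spent.items():
    print(f"{item}: spent {amount} of {budget} -> priority weight {amount / budget:.2f}")
```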

The method may be subject to sampling and design bias, especially where respondents are influenced by the approximate values proposed to them, or where there is inadequate detail on the effects discussed and/or misleading statements. There is also potential for hypothetical bias: the decision posed in the question does not involve real market behaviour, so there is no real incentive for respondents to think through and give answers that reflect their true valuations. In some instances, for example in relation to environmental amenity, respondents may have imperfect information and a lack of experience of the impact on the utility being offered to them.

In an organic action plan evaluation context, the prioritisation by different stakeholders of resource allocation to demand-push or supply-pull measures could be evaluated using this approach.

B2-5    Failure risk evaluation

The development of a logically consistent, coherent and prioritised set of objectives and action points is still no guarantee of successful implementation. Several factors can contribute to implementation failure, including the comprehension, capability and willingness of those involved to participate effectively in the implementation process (Section B1-3). It is also possible (and desirable) to undertake a systematic assessment of the risks of failure with respect to individual objectives and actions and, where appropriate, to put in place risk management plans to avoid potential problems.

B2-5.1 Failure mode and effect analysis (FMEA)

In order to provide an early assessment of potential risks and problems associated with the implementation of an action plan, an adapted version of (process) Failure Mode and Effect Analysis (FMEA; McAndrew and O'Sullivan, 1993) can be used, drawing on both internal and external expertise. FMEA is an engineering technique used to define, identify and eliminate known and/or potential failures, problems and errors from a system, design, process and/or service (Omdahl, 1988). When applied well, it makes it possible to identify potential failure modes, trace their causes and effects, and prioritise corrective action before problems occur.

The Risk Priority Number (RPN), the product of the frequency of occurrence, the severity of impact and the likelihood of detection, allows the most relevant problem areas to be ranked.

FMEA therefore offers a structure for thinking through the likelihood, seriousness and probability of detection of potential implementation problems and for the prioritisation of actions to address potential problems. Figure B2-5 shows the steps in the FMEA process as it was applied to the assessment of the failure risks in the EU organic action plan (see below and Annex B2-1).

Figure B2-5: Flow chart of the FMEA method

The first task in FMEA is to identify and rank the most relevant problem areas of action plan implementation. The experts should use a special laddering questionnaire to elicit what can go wrong (a list of problems) and to define the logical cause-effect structure of each problem by identifying all its possible causes. Laddering is an in-depth probing approach from means-end theory which attempts to uncover the links between different levels of a subject's knowledge, in order to reconstruct the structure of the subject's cognitive network. One way to do this is to apply an adapted version of the method illustrated by Reynolds and Gutman (1988) and later applied to goal structures by Pieters et al. (1995). Reynolds and Gutman's original approach used one-to-one in-depth interviews to elicit the components of the cognitive network; in FMEA, laddering can be performed more quickly with a paper-and-pencil approach, and a specific laddering questionnaire has been developed for this task. A series of direct probes helps the respondent to 'climb up the ladder' and link the chosen problems with their potential causes.

The analysis of the raw responses gathered through the laddering questionnaire is made up of several steps (Gengler and Reynolds, 1995). Specifically, responses should be coded into 'chunks' of meaning, if possible by at least two independent coders. These chunks should then be listed in 'ladder format' following the iterative coding procedure suggested by Reynolds and Gutman (1988), which yields ladders composed of links between causes and effects. The two independent coders should then classify each of the chunks, using a jointly-developed set of codes.

A cognitive map can then be created in order to identify visually the links between causes and effects (failure modes). Specific software is available to ease this task, e.g. MecAnalyst+ by Skymax-DG. The result is very similar to a tree diagram of the type discussed in Section B2-2.
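Where specific software is not available, the link-counting step behind such a map can be sketched very simply: coded ladders are aggregated into counts of direct cause-effect links (an implication matrix). The cause/effect codes below are illustrative, loosely echoing Table B2-3:

```python
from collections import Counter

# A minimal sketch of aggregating coded ladders into an implication matrix:
# counts of direct cause -> effect links across respondents, the usual input
# for drawing a cognitive map. The codes are illustrative.
ladders = [
    ["lack of information", "lack of political interest", "lack of financial resources"],
    ["lack of stakeholder involvement", "lack of capacity-building"],
    ["lack of information", "lack of political interest"],
]

links = Counter((cause, effect)
                for ladder in ladders
                for cause, effect in zip(ladder, ladder[1:]))

for (cause, effect), count in links.most_common():
    print(f"{cause} -> {effect}: {count} link(s)")
```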

The second task consists of evaluating the failure modes. Based on the results of the laddering exercise, a specific questionnaire should be submitted to the experts. Using 10-point Likert-type scales, for each failure mode the team should estimate: the severity of its effect; its probability of occurrence; and its probability of detection.

Each scale ranges from 1 to 10, where 1 refers to no effect (severity), a nearly impossible occurrence and an almost certain probability of detection, and 10 refers to an extremely severe effect, an extremely high probability of occurrence and absolute uncertainty of detection.

Once all the experts have filled in the questionnaire, a Risk Priority Number (RPN) is calculated for each failure mode as the product: detection probability × severity × probability of occurrence. The RPN enables ranking of the most important problem areas; the minimum possible RPN is 1 and the maximum is 1000.
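A minimal sketch of this calculation and ranking, with invented expert scores (not the ORGAP results):

```python
# A minimal sketch of the RPN calculation: each expert scores severity (S),
# probability of occurrence (O) and detection probability (D) on 1-10 scales,
# RPN = S x O x D (so it ranges from 1 to 1000), and failure modes are ranked
# by their mean RPN. All scores are illustrative.
failure_modes = {
    "Lack of stakeholder involvement": [(7, 6, 5), (8, 5, 5)],    # (S, O, D) per expert
    "Inadequate information campaigns": [(6, 5, 4), (5, 6, 5)],
}

ranked = sorted(
    ((mode, sum(s * o * d for s, o, d in scores) / len(scores))
     for mode, scores in failure_modes.items()),
    key=lambda pair: pair[1], reverse=True)

for mode, mean_rpn in ranked:
    print(f"{mode}: mean RPN = {mean_rpn:.1f}")
```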

B2-5.2 Identification of potential implementation problems with the EU Organic Action Plan

In the context of the ORGAP project, the FMEA described above was used to identify failure modes, their potential severity and their probability of occurrence with respect to the EU Organic Action Plan. The likelihood of detection was estimated with respect to the provisional set of ORGAPET generic indicators, which were subsequently revised and improved (Section C3). The purpose of this task was to verify whether the main indicators of the ORGAP toolbox were able to capture the logical cause-effect structure of potential problems concerning the implementation of the EU Organic Action Plan. Table B2-3 reports the failure modes and the respective mean RPNs. A quick inspection reveals that no single failure mode is particularly risky, since the maximum mean value is 210, while the theoretical maximum is 1000.

Table B2-3: Failure modes and Risk Probability Numbers for the EU Organic Action Plan and ORGAPET generic indicators

Cause                                              Effect                                        Mean RPN   Standard deviation
Lack of stakeholder involvement                    Lack of capacity-building                     210.0      137.5
Inadequate information and promotion campaigns     Lack of knowledge and awareness of organic    162.8       84.1
                                                   farming (OF)
Lack of information                                Lack of political interest to support OF      159.4       86.9
Weak lobbying for organic farming                  No mandatory implementation of OF             146.6       84.6
Insufficient development of research               Lack of importance given to OF                133.1       90.1
Conventional interests against the organic lobby   Lack of financial resources                   132.2       81.5
Different priorities among member states           General implementation problems               130.8       84.4
Different interests between EU and member states   Inadequate rules/procedures                   130.1       82.6

The RPNs include information about the probability that the proposed indicators would detect the failure modes. The mean detection values (not shown for conciseness) range from 3.5 (high to moderately high chance of detection) to 4.8 (moderately high to moderate chance of detection), indicating that, in general, the ORGAPET indicators may perform sufficiently well for the selected failure modes.

B2-6    Conclusions

The assessment of programme content and failure risks is an important part of understanding the reasons for success or failure in terms of results and impacts. A poorly-designed programme could prove to be ineffective in terms of uptake, and inefficient in terms of resource use. Both these factors might impact negatively on stakeholder perceptions and affect future development potential of the organic sector. A well-designed programme should have well-specified objectives with a clear logical relationship between the objectives and the measures and actions intended to achieve them. Opportunities to maximise positive synergy between programme elements should be exploited. Clear priorities should be identified. Potential failure risks should be identified and measures put in place to reduce those risks. Evaluators should seek to identify whether these issues were addressed as part of the programme development and to identify issues in the design of the programme that might impact on, or help interpret, the eventual outcomes of the programme.

B2-7    Checklist

  1. If necessary, clarify the programme objectives using the procedures and SMART objectives approach set out in Section C1.

  2. Structure the different objectives chosen at global, intermediate and operational levels in an objectives diagram, identifying cause and effect relationships and, where relevant, additional implicit objectives.

  3. Identify the logical framework for the programme. Is there a high level of internal coherence between the operational objectives and the more global objectives?

  4. Using stakeholder workshops and/or cross-impact matrices, identify the potential synergies (positive and negative) between different objectives.

  5. Identify the priorities allocated to different parts of the programme, and whether/how these have changed over time.

  6. Identify the risk, severity and possible causes of potential implementation failure modes that could occur.

  7. What are the implications of these results for the future development or implementation of the programme?

B2-8    References

EC (1999) The MEANS Collection: “Evaluating Socio-Economic Programmes”. Office for Official Publications of the European Communities, Luxembourg.

Gengler, C. E. and T. J. Reynolds (1995) Consumer Understanding and Advertising Strategy: Analysis and Strategic Translation of Laddering Data. Journal of Advertising Research, July/August:19-33.

Hoinville, G. and R. Berthoud (1970) Identifying Preference Values: Report on Development Work. Social and Community Planning Research, London.

McAndrew, I. and J. O’Sullivan (1993) FMEA (TQM Practitioner). Nelson Thornes Ltd.

Omdahl, T. P. (ed.) (1988) Reliability, Availability, and Maintainability Dictionary. ASQC Quality Press, Milwaukee, WI.

Pieters R., H. Baumgartner and D. Allen (1995) A means-end chain approach to consumer goal structures. International Journal of Research in Marketing.

Reynolds, T. and J. Gutman (1988) Laddering theory, method, analysis, and interpretation. Journal of Advertising Research, 28(1):11-31.

B2-9    Annexes

Annex B2-1:    ORGAP project European action plan synergies and failure risk assessment