🌻 EES2026 bites

24 Jan 2026

The conference theme “Evaluation for Vibrant Democracies” highlights the vital role of evaluation in strengthening democratic values, public accountability, inclusive decision-making, and learning across complex systems. As societies face intersecting political, social, environmental, and technological challenges, evaluation plays a critical role in providing credible evidence, amplifying diverse perspectives, and supporting informed public debate. Call B proposals should be aligned either with one of the strands selected through Call A (to be published before Call B opens) or with one of the general conference strands:

  • New Methods – Innovations in AI, digital data, and mixed methods

  • Systemic Learning – Evaluation for complexity, transformation, and systems change

  • Responsiveness – Adaptive, inclusive, and democratically grounded evaluation practice

Modality: individual paper

▶ 5. Submission Requirements

Each submission must be in English and include:

  • Title and session modality/type

  • Selected strand

  • Abstract:

      • Up to 500 words for sessions and panels

      • Up to 300 words for individual papers

  • Rationale and objectives

  • Presenter(s) or coordinator(s):

      • Names, affiliations, and short biographies

  • Session structure or presentation approach, including interactive or innovative elements

  • 5–10 keywords

If a submission exceeds these limits, it may be rejected or removed from the programme.

▶ 7. Review Process and Criteria

All individual submissions are subject to a double-blind peer review conducted by members of the EES Conference Programme Committee and strand coordinators. Meta-review will be conducted by reviewers not involved in strand coordination.

  • Relevance & Public Interest – Alignment with the conference theme and selected strand

  • Quality – Clarity, coherence, and rigor of the proposal

  • Innovation – Originality of ideas, approaches, or methods

  • Engagement – Level of audience interaction and participatory design

Note: Relevance and quality will carry the greatest weight in the review process. Appropriate representation of diversity among presenters and topics (e.g., gender, geographic location, cultural context) is expected.

▶ 8. Registration and Practical Information

  • All accepted presenters and co-speakers must register for the conference and pay the applicable registration fee.

  • EES does not provide funding for presenters. All participants are responsible for their own travel, accommodation, and registration costs.

  • Once submitted, proposals cannot be edited directly. Please contact us at ees2026@kuonitumlare.com to request a change.

▶ 9. Contact & Support

  • Visit the FAQ section on the EES 2026 Conference official website.

  • If you need any assistance with submitting your proposal(s), please do not hesitate to reach out to us at ees2026@kuonitumlare.com.

  • For questions related to content or expertise, please contact the EES Conference Programme Coordinators: Marta Semplici or John LaVelle.


S03 Responsiveness: Democracies are dynamic – and so is evaluation. Responsiveness means listening to stakeholders, adapting to shifting contexts, embracing diversity, and empowering voices often left unheard. This is key to making evaluation relevant and actionable in today’s world.

Our submission so far

Non-mosquitoes do not cause non-mosquito bites: People do not speak in variables.

Abstract

To be truly responsive, evaluation must listen to stakeholders in their own language as they describe the worlds they live in and as they explain their understanding of "what causes what". Systems mapping and causal mapping are often used to record ("code") and aggregate these stories at scale. To capture thousands of phrases like "mosquitoes cause mosquito bites", most approaches would code "mosquitoes" and "bites" as variables, things that can take different values, say between -1 and 1, and link them with an arrow which can be given a positive or negative strength; other systems try to use boolean (true/false) variables and categorise causes as necessary and/or sufficient conditions. This paper argues that coding with variables, firstly, hardly ever corresponds to how people think and talk and, secondly, brings with it unnecessary mathematical and logical complexities (for example, the statement "mosquitoes cause mosquito bites" may get treated as identical to "the absence of mosquitoes causes the absence of mosquito bites").

Our experience of coding qualitative data for evaluation projects shows that we can often do just fine with “minimalist coding”: We use a realist, generative understanding of causality (akin to Michael Scriven’s “Causation: The relation between mosquitoes and mosquito bites”). We do not initially worry about whether people’s claims are true or accurate, because first and foremost we want to know what they think. We code just bare, undifferentiated causal claims – links between events or states, not variables – and don’t initially worry about matching up events and their opposites (getting COVID, not getting COVID, presence/absence of mosquitoes). Coding causal factors “as is” makes causal coding substantially easier and more natural.

But sometimes it is indeed useful to match up presences and absences, heat and cold, poverty and wealth. What to do then?

We present some (we think, genuinely new) ideas for doing this which can code and preserve the difference between "mosquitoes cause mosquito bites" and the unusual but not unheard-of claim "the absence of mosquitoes causes the absence of mosquito bites." We also show how to code challenging claims like “the river levels failed to improve in spite of the drainage project”.

Rationale and Objectives

This paper is aligned with the Responsiveness strand because it asks how evaluation methods can stay close to the language and causal reasoning of stakeholders rather than translating their accounts too quickly into analyst-defined variables. In democratic and participatory evaluation, this matters because stakeholders do not usually speak in the language of parameters, scales, or symmetric variable relations. They describe concrete happenings: a pump enabled irrigation, a barking dog kept someone inside, heat caused fatigue, cold caused fatigue. If evaluation is to amplify voices rather than overwrite them, its coding practices should preserve that ordinary-language logic as far as possible.

This is increasingly important for democracy because evaluators are now under pressure to listen to larger and more diverse publics than before, often across many interviews, meetings, submissions, and digital texts. If we want to bring more citizen input into evaluation without reducing it to a thin set of pre-defined variables, we need methods that can absorb many voices while still preserving what people meant. A more responsive causal coding approach can therefore help evaluation contribute to public reasoning by making wider participation more analytically manageable without silencing difference.

The paper argues for a minimalist or barefoot approach to causal coding. In this approach, the basic unit is a quoted causal claim, recorded simply as one factor influencing another, with provenance retained. We code claims before we judge whether they are true. This makes the method easier to apply at scale, easier to audit, and more faithful to what respondents actually said. It also avoids over-specifying qualitative material by forcing it into models of necessity, sufficiency, polarity, or variable covariance when the source text does not warrant those assumptions.
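As a rough sketch (not part of the submission itself, and with hypothetical field names), the minimalist coding unit described above could be represented as a plain record: one quoted causal claim, one factor influencing another, with provenance retained and no polarity, strength, necessity, or sufficiency attached at coding time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CausalClaim:
    """One quoted causal claim: a bare cause factor influencing a bare
    effect factor, with provenance retained. No polarity, strength,
    necessity, or sufficiency is recorded at coding time."""
    cause: str      # factor "as is", in the respondent's words
    effect: str     # likewise an event or state, not a variable
    quote: str      # the verbatim source text
    source_id: str  # provenance: which interview or document

# Coding the canonical statement without any variable machinery:
claim = CausalClaim(
    cause="mosquitoes",
    effect="mosquito bites",
    quote="mosquitoes cause mosquito bites",
    source_id="interview-01",
)
```

Because the record carries the verbatim quote and a source identifier, every coded link stays auditable back to what the respondent actually said.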

The core methodological argument is illustrated through Michael Scriven's "mosquito bite" example. If causation is treated too quickly as a relation between variables, evaluators can drift into the absurd idea that "non-mosquitoes cause non-mosquito bites." Minimalist coding avoids this. It does not treat absences as causes unless they are explicitly asserted in the data. This asymmetry is not a defect but a realistic reflection of how people usually talk and think about causation.

The paper also addresses the cases where minimalist coding needs extension. Sometimes evaluators do need to connect opposites such as employment and unemployment, or distinguish improvement from deterioration. Sometimes they need to represent claims such as "the river levels failed to improve despite the drainage project," where a causal power is represented as blocked or unsuccessful rather than simply absent. The paper sets out simple conventions for handling opposites, sentiment, and "despite" claims without collapsing their differences into a single variable frame.
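One possible convention for these extensions, purely illustrative and using invented flag names, is to add minimal optional markers to the bare link: explicit-absence flags for cause and effect, and a "blocked" flag for "despite" claims where a causal power is asserted but unsuccessful.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedLink:
    """A bare causal link with minimal optional extensions.

    negated_cause / negated_effect mark explicitly asserted absences,
    so "the absence of mosquitoes causes the absence of mosquito bites"
    stays distinct from "mosquitoes cause mosquito bites". blocked
    marks a causal power asserted as unsuccessful ("despite" claims)."""
    cause: str
    effect: str
    negated_cause: bool = False
    negated_effect: bool = False
    blocked: bool = False

# "mosquitoes cause mosquito bites"
plain = CodedLink("mosquitoes", "mosquito bites")

# "the absence of mosquitoes causes the absence of mosquito bites"
absence = CodedLink("mosquitoes", "mosquito bites",
                    negated_cause=True, negated_effect=True)

# "the river levels failed to improve despite the drainage project"
despite = CodedLink("drainage project", "river levels improving",
                    blocked=True)

assert plain != absence  # the two claims are not collapsed together
```

The key design point is that all flags default to false: absences and blockages are only recorded when the source text explicitly asserts them, matching the asymmetry argued for above.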

Objectives:

  • To show why variable-based causal coding often misrepresents stakeholder language and weakens responsiveness.
  • To define a minimalist coding approach for qualitative causal claims that preserves provenance and ordinary-language meaning.
  • To show how this approach supports transparency and democratic accountability by making coded outputs more recognisable to participants.
  • To present simple extensions for opposites, sentiment, and "despite" claims that preserve nuance without overcomplicating the method.
  • To clarify that counts of coded links indicate breadth of evidence in a corpus, not effect size in the world.
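The last objective can be illustrated with a toy aggregation (invented data, not from any real corpus): counting how many sources assert each bare link measures breadth of evidence in the corpus, not effect size in the world.

```python
from collections import Counter

# Hypothetical coded corpus: (cause, effect, source_id) triples.
coded_links = [
    ("mosquitoes", "mosquito bites", "interview-01"),
    ("mosquitoes", "mosquito bites", "interview-02"),
    ("heat", "fatigue", "interview-02"),
    ("cold", "fatigue", "interview-03"),
]

# How often each link is asserted across the corpus: breadth of
# evidence, saying nothing about the strength of the causal effect.
link_counts = Counter((cause, effect) for cause, effect, _ in coded_links)

link_counts[("mosquitoes", "mosquito bites")]  # → 2
```

Note that opposites remain separate rows here: "heat causes fatigue" and "cold causes fatigue" are counted as distinct claims rather than being folded into one temperature variable.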

Session Structure & Presentation Approach

As an individual paper, the presentation will be structured as a focused 8-minute contribution built around one clear problem, one core methodological proposal, and a small set of worked examples. It will begin by locating the argument in the conference theme, Evaluation for Vibrant Democracies, and in the Responsiveness strand: if evaluation is meant to hear diverse voices and support inclusive judgement, then its analytic language should not routinely replace stakeholder meanings with over-formalised abstractions.

The presentation will then introduce the minimalist coding approach and contrast it with more conventional variable-based causal mapping. The "mosquito bite" case will be used as a simple anchor example to show why symmetry between presence and absence is often analytically misleading. Further short examples will show how the approach handles ordinary causal statements more naturally, and where additional conventions are needed for opposites, sentiment, and "despite" claims.

The paper will close by discussing the practical implications for large-scale qualitative and AI-assisted analysis. In particular, it will show how minimalist coding can improve transparency, speed, and auditability while avoiding false precision. It will also briefly argue that such methods matter for democracy because they make it more feasible to bring larger volumes of citizen and user input into evaluative reasoning without flattening those voices into analyst-friendly abstractions. The remaining discussion time can then test the argument against participants' own practice, especially where they face pressure to aggregate complex stakeholder narratives quickly without losing responsiveness to meaning.