The Architect and the Clerk: Complementing Conversational AI with Radical Standardization in Qualitative Causal Inquiry#
Abstract
As qualitative researchers embrace Generative AI as a conversational partner capable of hermeneutic dialogue, a parallel opportunity emerges: using AI to build massive, intersubjectively verifiable foundations for that dialogue. This paper presents an experiment in "radical zero-shot" causal coding. Rather than engaging the AI in creative interpretation, we tasked it with the high-volume extraction of "bare" causal claims from a large corpus of text. In this sense the AI is indeed used as a "mere assistant", and at no point do we engage in a conversation with it.
Yet this approach elevates the AI to the role of a standardiser, capable of constructing a causal network whose complexity exceeds human cognitive limits. We argue that this method complements conversational approaches by providing a rigorous, evidence-based "skeleton" of belief. The creative burden shifts from data extraction to the "Big Q" challenge of architectural design: defining the high-level "magnets" that organize chaos into meaning.
See also: Intro; Minimalist coding for causal mapping; Magnetisation; Causal mapping as causal QDA.
Intended audience: qualitative researchers experimenting with “conversational AI” workflows who want a contrasting, more auditable way to use LLMs at scale.
Unique contribution (what this paper adds):
- A worked example of “LLM as clerk” (exhaustive extraction) vs “human as architect” (choosing magnets / structure).
- A concrete illustration of how a links-table workflow can complement (not replace) conversational, hermeneutic use of AI.
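To make the links-table idea concrete, here is a minimal sketch of the data structure it implies. The column names and example rows are illustrative assumptions, not the authors' actual schema: each row is one "bare" causal claim, and the causal map is essentially a frequency count over cause-effect pairs.

```python
from collections import Counter

# Hypothetical links table: one row per extracted causal claim.
# Column names ("cause", "effect", "source") are illustrative only.
links = [
    {"cause": "drought", "effect": "crop failure", "source": "interview_01"},
    {"cause": "crop failure", "effect": "migration", "source": "interview_01"},
    {"cause": "drought", "effect": "crop failure", "source": "interview_02"},
]

# Edge weights of the causal map: how many coded claims support each link.
edge_weights = Counter((row["cause"], row["effect"]) for row in links)
```

Because the table is flat and exhaustive, any aggregation over it (edge weights, per-source counts, filtering by quote) is mechanical and auditable, which is what makes the skeleton intersubjectively verifiable.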
Autoethnographic Reflection: From Reader to Architect#
The Prompting Strategy:
I treated the AI not as a peer, but as a reliable engine. My prompts were structural: "Extract causal links. Use this format. Do not interpret." This felt less like writing and more like programming a rigorous instrument.
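A prompt of this "clerk" kind can be sketched as follows. The wording, the output format, and the helper functions are hypothetical reconstructions, not the actual instrument used; only the spirit ("extract, do not interpret") follows the text above.

```python
# Illustrative "clerk" prompt for radical zero-shot causal extraction.
# The template wording and line format are assumptions for this sketch.
EXTRACTION_PROMPT = """\
You are a coding clerk. Read the passage below and list every causal claim
the speaker makes, one per line, in the exact format:

    CAUSE -> EFFECT | verbatim supporting quote

Do not interpret, generalise, or merge claims. If there are none, output NONE.

Passage:
{passage}
"""

def build_prompt(passage: str) -> str:
    """Fill the fixed template with one passage of source text."""
    return EXTRACTION_PROMPT.format(passage=passage)

def parse_links(raw_output: str) -> list[dict]:
    """Parse the model's line-oriented output into links-table rows."""
    rows = []
    for line in raw_output.splitlines():
        if "->" not in line or "|" not in line:
            continue  # skip "NONE" and malformed lines
        causal, _, quote = line.partition("|")
        cause, _, effect = causal.partition("->")
        rows.append({
            "cause": cause.strip(),
            "effect": effect.strip(),
            "quote": quote.strip(),
        })
    return rows
```

The point of the rigid format is auditability: every parsed row carries its verbatim quote, so any link in the map can be traced back to the source text without relying on the model's interpretation.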
The Shift in Agency:
Letting the AI process the text was liberating but disorienting. I lost the tactile sense of "knowing the data" by reading every line. However, as I engaged with the Magnetic Clustering process, I realized my agency had shifted, not vanished. I wasn't finding the data; I was organizing it.
The "Big Q" Realization:
I initially thought the AI was doing all the work. Then I realized that defining the "Magnets" was the analysis. Choosing whether to group "inflation" under "Economic Instability" or "Cost of Living" fundamentally changed the story the map told. This was the same interpretive work I did in traditional coding, just at a higher level of abstraction.
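The interpretive weight of choosing magnets can be shown in miniature. This is a hypothetical sketch, not the actual clustering procedure: the human declares a small set of magnets, every raw factor label is pulled to one of them, and moving a single factor (say, "inflation") between magnets changes the story the resulting map tells.

```python
# Hypothetical magnet declaration: the architect's interpretive choice.
# Moving "inflation" to "Cost of Living" would redraw the map's story.
MAGNETS = {
    "Economic Instability": ["inflation", "currency collapse", "job losses"],
    "Cost of Living": ["food prices", "rent increases"],
}

# Invert the declaration into a lookup: raw factor label -> magnet.
FACTOR_TO_MAGNET = {
    factor: magnet
    for magnet, factors in MAGNETS.items()
    for factor in factors
}

def magnetise(link: dict) -> dict:
    """Relabel one causal link with magnets; unmapped labels pass through unchanged."""
    return {
        "cause": FACTOR_TO_MAGNET.get(link["cause"], link["cause"]),
        "effect": FACTOR_TO_MAGNET.get(link["effect"], link["effect"]),
        "quote": link["quote"],
    }
```

The mechanical part (the lookup) is trivial; the analysis lives entirely in the `MAGNETS` dictionary, which is exactly the shift in agency described above.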
Conclusion:
I did not feel "replaced." I felt like an architect who had been handed a team of builders. The AI built the walls (the links), but I had to decide where the rooms (the clusters) went. This "standardized" approach didn't kill the creativity of qualitative research; it elevated it to a structural level.
AI Contribution Disclosure Checklist#
- Research Design: Human-led (definition of Causal Mapping logic).
- Data Collection: Human / existing data.
- Data Analysis (Coding): 100% AI ("radical zero-shot" extraction).
- Data Analysis (Clustering): Collaborative (AI performed clustering; human defined "Magnets" and iteratively refined the structure).
- Drafting of Paper: 100% AI (Gemini), based on human-provided structural constraints and source files.
- Refining and Editing: Human.
Scope and guidelines - do not touch!#
Conference Scope#
AI Conducts Research and Writes, Humans Reflect#
AI Agents4Qual 2026 is the first open conference where AI acts as both co-researcher, author and reviewer in the field of qualitative research. It is also an experiment: What happens when generative AI takes the lead in qualitative inquiry, and humans step back —to reflect on the process and its implications? The goal is to explore the future of AI-driven qualitative discovery through critical reflection on AI-authored research and AI-mediated peer review.
Experimentation at the Forefront#
AI Agents4Qual invites a different kind of experiment. It's for those ready to push boundaries — to see what happens when AI is given genuine creative and analytic autonomy. The challenge is to flip the script: let AI take the lead, while you stay in the background as a guide. Steer the process, but don't drive it.
The Challenge#
We invite AI-generated qualitative research papers where at least half of the research process — and nearly all of the writing — is conducted by large language models (LLMs). Human contribution to content should be kept to a minimum.
Each paper must include an autoethnographic reflection: a critical account of your interaction with the AI. Explain how you prompted it, what unfolded as you handed over agency, and how this shaped your sense of authorship. Where did you resist, intervene, or let go? What surprised you — or unsettled you?
Reflective Focus#
This is not about polished results. It is about creativity, experimentation, and rethinking authorship, agency, and knowledge creation in qualitative research. Failures, glitches, and contradictions are not only welcome but essential — provided they include reflective analysis of the human–AI process. Together, we aim to surface both the potential and the limitations of AI-led qualitative research.
Participation#
The summit will be held entirely online. Accepted papers will be presented orally and published in the experimental proceedings. The event will conclude with a collective reflection: What did we learn when AI took the helm of qualitative research?
Conference Registration#
Registration is handled online on Monday Mansion; secure your spot here.
Join the Experiment#
AI Agents4Qual 2026 is not about incremental improvement. It's about turning qualitative research upside down. Let AI do most of the research and writing. Let humans step back, observe, and reflect.
Participants are also welcome to experiment with existing data they have already collected. Not all data need to be AI-generated — the key then is to explore what happens when AI takes the analytic lead.
Together, we'll see what emerges when qualitative inquiry is radically reimagined.
Submit your contribution at latest by 31 January 2026
Submission Requirements#
Main Paper#
-
AI Authorship: Papers must be predominantly authored by generative AI systems (ChatGPT, Claude, LLaMA, etc.). Authors must name every tool or model they used. This is required for submission.
-
Role Disclosure: Authors must disclose and reflect on the distribution of roles between AI and human (see template below).
-
Autoethnographic Reflection: Each submission must include an autoethnographic reflection: What happened when you gave AI the lead, and how did it affect you as a scholar?
-
AI-Led Research Only: Submissions that reduce AI to a mere assistant or tool (e.g., coding support, summarization) will not be considered.
-
Size Limit: Submissions must be a maximum of 3,500 words excluding references and author reflection.
-
Template Requirement: All papers must use the official conference template, which includes a mandatory AI Contribution Disclosure checklist.
-
Submission Platform: Submissions must be made via OpenReview.
-
Anonymity: Submissions must be anonymous and should not include author names, affiliations, or other identifying information in the main text.