The AI Coding panel provides a streamlined AI coding workflow inside the normal left pane. It keeps the app structure familiar while reducing clutter.
You can activate it with the "AI coding" switch just below the Sources bar. If you opted in to AI features when you signed up, AI coding is on by default; other users can switch it on at any time.
AI usage consumes credits (see Responses Panel); credits renew monthly and do not roll over. Costs depend on model and workflow, but very roughly you might autocode around 30 pages for about 1 credit.
Users with dedicated AI plans receive a larger batch of AI credits each month; other users receive 10 free AI credits per month (the free credits do not stack with paid plans).
The AI Workflow#
When AI coding is active:
- the Sources bar is hidden (to keep focus),
- the right-hand output tabs stay available,
- the Create/Filter sub-tabs are replaced by one combined AI workflow panel.
The workflow is broken down into six straightforward sections:
- One-click coding: runs the whole pipeline, with a set-up modal first. Press One-click coding to choose the level of effort (Flash vs Pro for the AI model slots), Code all sources, Skip coded, Filter on finish, and (if the project has links) whether to delete every link in the project or only links on the sources in scope; then confirm Run.
- Pre-steps: before the set-up modal opens, One-click coding clears Filter Links and switches the filter pipeline on.
- Scope: With Code all sources off, only the current Sources bar selection counts (empty bar = all sources). The modal explains how many sources Auto-code will run on, including when Skip coded removes already-coded sources (and Run is disabled if nothing is left).
- Auto-code prompt: One-click coding uses the current Auto-code panel prompt and settings. The prompt is shown in the set-up modal before you run.
- Single source in scope: only Auto-code runs (Revise codebook and Recode are skipped so labels are not merged across several AI passes). Filter on finish defaults off in that path but you can turn it on.
- Several sources in scope: Auto-code, then Revise codebook, then Recode.
- Recode target suffix: Choose blank (simpler: synthesised labels go straight into cause/effect) or a suffix such as `_recoded` (keeps the raw labels and writes synthesised ones to temporary columns so you can compare).
- Per-step Run buttons still work on their own; the modal suppresses the extra confirms for the sequenced run after you confirm Run.
- Background: Give the AI project context before coding. A status tick indicates whether enough background text is set.
- Auto-code: This is where the AI reads your documents and extracts causal links.
- You can choose to process a small sample first (e.g., `1` or `5` sources) to test your prompt, or process `100%` of them.
- The "Skip coded" switch ensures you don't waste time and money re-processing documents that already have links.
- Default model is Qwen Flash.
- Revise codebook: Once you have some causal links, the AI can review them and suggest a cleaner, more consistent list of factor labels (a "codebook"). The header tick shows whether the Recode codebook area currently contains suggestions.
- Includes a Target clusters slider; see Target clusters.
- Optional Use automatic pre-clustering switch (default OFF).
- When pre-clustering is OFF, the AI tries to find the clusters directly from the factor list using the standard Revise codebook prompt. This prompt supports macro replacement: use `[number]` (or `[cluster_count]`) and the effective target cluster count is injected at run time (same as the slider logic below).
- When pre-clustering is ON, the app first groups factor labels semantically using embeddings, then sends those clustered groups to the AI with a separate labelling prompt plus a Representatives per cluster slider (`8` to `20`, default `8`).
- Pre-clustering is more systematic than asking the AI to find all clusters "in its head" from a long raw list. It reduces the black-box / WEIRD-data risk a bit, and may make it easier to preserve more unusual or divergent concepts instead of collapsing them into whatever the model finds most typical.
- Default model is Gemini 3 Flash Preview.
- Recode: Apply the AI's suggested, cleaned-up labels back to your existing causal links. Paste the codebook (from Revise codebook or your own), add a recode instruction, and run.
- The AI returns index mappings (row → codebook item) rather than full label text, reducing tokens and improving reliability.
- Default instruction: "For each raw label give me the NUMBER of the best-matching codebook item by meaning. Use 0 when no codebook item fits. Return only codebook label numbers, never words. Never invent labels."
- Skip recoded: When on, only processes links that have at least one unrecoded label (cause or effect). Use this when recoding again to focus on remaining work.
- Links limit (1, 5, 20%, 50%, 100%): When not 100%, a random sample of links is recoded. Non-sampled links keep their existing recoded values (or stay blank on first run).
- The header progress bar is segmented: grey = empty recoded fields, orange = recoded equals original cause/effect, green = recoded non-empty and different.
- Default model is Qwen Flash.
- Filter links: The normal Filter Links panel appears as the final section of the same accordion, so filtering is part of one continuous simple flow.
- When Filter on finish is on in the One-click set-up, completing the run applies these analysis filters to the pipeline: Factor Frequency (top `12`, counted by citations) → Link Frequency (top `30`, counted by citations). The global Label set controls which `cause`/`effect` columns Recode writes to (no separate "recode suffix" in this panel).
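The grey/orange/green segmentation of the recode progress bar described above can be summarised as a simple per-field rule. This is an illustrative sketch (the function name is hypothetical, not part of the app):

```python
def recode_segment(original: str, recoded: str) -> str:
    """Classify one recoded field for the segmented progress bar."""
    if not recoded:
        return "grey"    # recoded field is still empty
    if recoded == original:
        return "orange"  # recoded equals the original cause/effect
    return "green"       # recoded, and different from the original
```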
One-click coding (AI)#
- Sequencer for the AI pipeline, with one set-up modal (see step 1 under The AI Workflow).
- Run starts Auto-code; with more than one source in scope it continues with Revise codebook then Recode, stopping on the first non-successful stage. With exactly one source in scope, it stops after Auto-code.
- Auto-code in One-click uses the same Auto-code panel prompt/settings as the separate Auto-code Run button, while skipping the extra run confirmation after you confirm the one-click set-up modal.
- Optional link deletion is configured in the modal (project-wide, or scoped to the sources in scope), rather than a single "delete everything" confirmation.
- Recode target: Use the global Label set below the Sources bar. Create a new suffix there first if you want Recode to fill `cause_suffix`/`effect_suffix` instead of only the default columns.
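The scope rules above (selection counts only when Code all sources is off, an empty selection means all sources, and Skip coded removes already-coded sources) can be sketched as follows. This is a minimal illustration with hypothetical names, not the app's actual code:

```python
def sources_in_scope(all_sources, selected, code_all, skip_coded, coded_ids):
    """Illustrative scope rule for the One-click set-up modal."""
    # Selection only counts when Code all sources is off; empty bar = all
    scope = all_sources if (code_all or not selected) else list(selected)
    if skip_coded:
        # Skip coded removes sources that already have links
        scope = [s for s in scope if s not in coded_ids]
    return scope  # Run is disabled when this list is empty
```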
Background (AI)#
- Sets shared project context used by AI coding prompts.
- The status tick indicates whether enough background text is present.
Auto-code (AI)#
- Runs AI coding across selected/all sources using your prompt and model.
- Layout (top → bottom): Model, Skip coded, Add source prompt, and source limit row; then the Prompt sections editor; then Advanced (chunk size, concurrency, temperature, thinking, etc.).
- Use source limit + skip coded options to test quickly and avoid rework.
- Add source prompt (switch): when ON, each source’s optional Source Prompt (edit above the source text when viewing a source) is prepended to your main Auto-code prompt for that source. Saved per project in the browser. Use when sources need different context; skip when one background prompt in Background is enough.
- Status line under the settings shows progress, per-chunk detail, and stop — same behaviour as the former “Code with AI” card.
- Prompt sections: use Add section / Remove in the UI to split one saved prompt into reproducible iterations. Internally this is still stored as one prompt with `====` separator lines. Later sections see prior user/assistant turns. Only the last iteration's result is written to links; all iterations appear in Responses. This is best for workflows where coding is genuinely better in stages, such as first building the network, then adding columns like Time or Certainty, then running a checking pass.
- Rerun from here: each prompt section has a small rerun button. Use it to continue a stable multi-section prompt without paying again for earlier successful stages, not as an open-ended chat workflow for coding maps. Section 1 reruns normally. Later sections reuse the latest successful earlier iteration history only when the earlier source text, prompt sections, chunk bounds, and Holistic setting are unchanged; otherwise the run fails loudly. You can add new sections under a successful run and rerun from the first new section.
- Holistic first pass: when enabled, only the first iteration asks the model for a Mermaid causal network with quote-backed edge labels. The server parses that Mermaid into the standard links JSON format before passing it to later iterations or saving final links. If the Mermaid cannot be parsed into links, the run fails loudly.
- The confirmation shown before a run lists the model, chunking, word count, and a cost estimate. Stop cancels after the current chunk tasks finish.
- Timeouts scale by model and iteration count (cap ~540s total). Concurrency (1–5) is in Advanced; raise for speed, lower if you see 429/timeouts.
- The prompt field offers tips and a prompt history (same chrome as other prompt fields).
- Default model is Qwen Flash.
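Since a multi-section prompt is stored as one string with `====` separator lines, splitting it back into iterations is straightforward. A minimal sketch, assuming a line of four or more equals signs marks each boundary (the exact separator handling in the app may differ):

```python
import re

def split_prompt_sections(saved_prompt: str) -> list[str]:
    """Split one saved prompt into its iteration sections.

    Assumes a line consisting of ==== (four or more '=') separates
    sections; blank sections are dropped.
    """
    parts = re.split(r"(?m)^={4,}\s*$", saved_prompt)
    return [p.strip() for p in parts if p.strip()]
```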
Holistic first pass (Auto-code)#
- What it does: keeps Auto-code on the normal single `process_chunk` route, but changes iteration 1 so it asks for a connected Mermaid causal network before converting that network to standard links JSON in code.
- When to use it: helpful when asking directly for a links table gives fragmented networks and you want the first pass to reason about the whole causal story.
- Switch:
- ON = iteration 1 uses Mermaid instructions, then server-side parsing converts the result to links JSON.
- OFF = all sections use the standard links JSON instructions.
- Extra columns: if your prompt asks for extra columns such as sentiment or mood, the Mermaid edge labels must include them as safe `key=value` fields. The parser writes those fields into the resulting links.
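To make the Mermaid-to-links step concrete, here is a toy parser for edges of the form `A -->|"quote" key=value| B`. It is not the app's actual parser, only a sketch of the idea: edges become links, `key=value` fields become extra columns, and the remaining label text is treated as the quote:

```python
import re

EDGE = re.compile(r"(\w+)\s*-->\s*\|([^|]*)\|\s*(\w+)")

def mermaid_to_links(mermaid: str, nodes: dict) -> list[dict]:
    """Toy conversion of Mermaid edges into the standard links shape."""
    links = []
    for src, label, dst in EDGE.findall(mermaid):
        link = {"cause": nodes.get(src, src), "effect": nodes.get(dst, dst)}
        # pull out safe key=value fields as extra columns
        link.update(dict(re.findall(r"(\w+)=(\S+)", label)))
        # whatever remains of the edge label is the supporting quote
        quote = re.sub(r"\w+=\S+", "", label).strip().strip('"')
        if quote:
            link["quote"] = quote
        links.append(link)
    return links
```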
Revise codebook (AI)#
- Suggests a cleaner consolidated codebook from existing links.
- Use this after you have enough coded links for a representative sample.
- Header tick indicates whether the Recode codebook area currently has content.
- Target clusters: see Target clusters.
- Optional Use automatic pre-clustering switch (default OFF).
- With pre-clustering OFF, the AI clusters the factor list directly from the Revise codebook prompt. That prompt supports `[number]`/`[cluster_count]`.
- With pre-clustering ON, embeddings are used first to group labels semantically, then the AI only has to label those grouped clusters. This is a bit more systematic, less dependent on the AI doing all clustering internally as a black box, and may help preserve unusual or divergent concepts.
- Pre-clustering also adds a Representatives per cluster slider (8-20, default 8) and uses a separate labelling prompt.
- Default model is Gemini 3 Flash Preview.
Target clusters (Revise codebook)#
- The Target clusters control is a slider with 50 positions. The far left is Default; moving right sets an explicit target of 2 through 50 clusters (one step per cluster count).
- Default (far left): the app derives a target count \(K\) from the number of unique factor labels \(n\) in the current filtered pipeline (same scope as Revise codebook): \(K = \min(\lfloor n/3 \rfloor, 25)\) — at most 25, or roughly one label in three as clusters, whichever is smaller.
- Explicit positions (not Default): the requested \(K\) is the number shown by the slider (2–50). If \(K\) is greater than \(n\), the run uses \(n\) instead (you cannot have more clusters than distinct labels); the app may show a short notice when that cap applies.
- Pre-clustering, embedding clustering, and `[number]`/`[cluster_count]` in prompts all use this effective \(K\).
Recode (AI)#
- Applies your codebook back onto existing links, turning raw factor labels into cleaner synthesised ones.
- Recoding (radio buttons): AI — the model maps each raw label to a codebook line by index; Magnetic — embedding similarity to codebook lines (same magnet machinery as the pipeline’s soft-recode path; similarity threshold). Both write hard updates to links; the names refer to the mapping method, not “soft recoding” in the filter sense.
- Recode target: the global Label set (`default` = standard `cause`/`effect`; a suffix = read/write `cause_suffix`/`effect_suffix` in `metadata.custom_columns`, with top-level `cause`/`effect` holding the default-set pair).
- Supports sampled recoding and skip-recoded behavior (skip-recoded only applies when using a non-default label set).
- Header bar shows recode coverage mix across all cause/effect recoded fields.
- Default model is Qwen Flash.
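Because the AI returns index mappings rather than label text, applying a recode is just a lookup. A minimal sketch (the function name is hypothetical, and keeping the raw label when the index is 0 is an assumption, not documented behaviour):

```python
def apply_index_mapping(raw_labels, codebook, mapping):
    """Apply AI index mappings: one 1-based codebook index per raw label.

    Index 0 means no codebook item fits; here we keep the raw label
    in that case (an assumption for illustration).
    """
    out = []
    for raw, idx in zip(raw_labels, mapping):
        out.append(codebook[idx - 1] if 1 <= idx <= len(codebook) else raw)
    return out
```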
Filter links (AI)#
- This is the same Filter Links workflow, embedded as the final section of the AI workflow accordion.
- Use it to refine/select links before reviewing outputs on the right.
- When One-click coding finishes with Filter on finish enabled, the app applies top-12 factor frequency (citations), then top-30 link frequency (citations) (no longer injects the deprecated Temporary Factor Labels filter).
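The Factor Frequency step can be sketched as a citation count per factor. This is illustrative only, assuming each link contributes one citation for its cause and one for its effect (the app's exact counting may differ):

```python
from collections import Counter

def top_n_factors(links, n=12):
    """Keep the n most-cited factor labels across cause and effect."""
    counts = Counter()
    for link in links:
        counts[link["cause"]] += 1
        counts[link["effect"]] += 1
    return [label for label, _ in counts.most_common(n)]
```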
Advanced Settings#
Each section header is clickable and opens/collapses its settings panel. Section headers also include contextual Help buttons. The advanced sections are inline (not flyouts), and only one section is expanded at a time.
Inside advanced panels you can:
- Edit the exact Prompt the AI uses.
- View your prompt history and load previous prompts.
- Change the AI Model (e.g., switch to a "Pro" model for complex reasoning, or a "Flash" model for speed).
- Tweak technical settings like chunk size, concurrency, and temperature.