# Working with AI Agents
This page documents patterns for AI agents (Claude, GPT, etc.) using vgrid in automated workflows. The key principle: always use `--no-fail` and parse structured output — never rely on exit codes to determine whether data was returned.
## Inspecting workbooks

`sheet inspect` provides structured output designed for LLM agents working with multi-sheet workbooks.
```sh
# 1) Discover sheets in a workbook
vgrid sheet inspect model.sheet --sheets --json

# 2) Dump sparse formulas from a target sheet
vgrid sheet inspect model.sheet --sheet "Forecast" --non-empty --json

# 3) Inspect a specific range (sparse — no empty cells)
vgrid sheet inspect model.sheet --sheet 1 A1:M100 --non-empty --json

# 4) Stream large sparse output (one cell per line)
vgrid sheet inspect model.sheet --sheet "Forecast" --non-empty --ndjson

# 5) Apply a change script and verify
vgrid replay build_forecast.lua --verify
```

Key points:

- `--sheets` returns index, name, dimensions, and cell count for each sheet
- `--non-empty` returns only populated cells (sparse), avoiding large grids of empties
- `--ndjson` emits one JSON object per line — streamable, no memory spike on large sheets
- `--sheet` accepts both 0-based indices and case-insensitive names
- If `--sheet` fails, the error message lists available sheet names
- `--lightweight` queries SQLite directly — no workbook load, no formula recompute. Use this on memory-constrained servers or for preview generation
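As an illustration, an agent-side consumer of the `--ndjson` stream might look like the following Python sketch. The per-cell field names used here (`address`, `value`, `formula`) are an assumption for the example, not a documented schema; check the actual output of your vgrid version.

```python
import json

def iter_cells(stream):
    """Yield one cell dict per NDJSON line, skipping blank lines."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Hypothetical sample of what an --ndjson stream might contain;
# the field names are an assumption, not a documented schema.
sample = (
    '{"address": "A1", "value": 120.5, "formula": null}\n'
    '{"address": "B2", "value": 241.0, "formula": "=A1*2"}\n'
)

cells = list(iter_cells(sample.splitlines()))
formulas = [c for c in cells if c.get("formula")]
```

Because each line is an independent JSON object, the loop runs in constant memory even on very large sheets; materializing `cells` with `list(...)` is only for this small sample.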
## Server-side / lightweight inspection

On constrained servers (e.g. 2 GB RAM Docker containers), full workbook loading can OOM on large `.sheet` files. Use `--lightweight` to query SQLite directly:
```sh
# Discover sheets (SQL aggregates — near-instant)
vgrid sheet inspect model.sheet --sheets --lightweight --json

# Preview range (bounded SQL query — constant memory)
vgrid sheet inspect model.sheet A1:Z101 --lightweight --json
```

Formula cells return their last-saved cached values (computed by the desktop app before upload). This is correct for preview purposes since VisiGrid always recomputes before saving.
## Quick file inspection

`peek` provides fast, read-only inspection of any tabular file — CSV, TSV, XLSX, ODS, or `.sheet`. For agents, `--shape` and `--plain` are the most useful modes (no TUI, deterministic stdout).
```sh
# Shape summary (rows, cols, sheets) — fast, no data loaded
vgrid peek report.xlsx --shape

# Plain table output for parsing
vgrid peek data.csv --headers --plain --max-rows 100

# XLSX with specific sheet
vgrid peek report.xlsx --sheet "Raw Data" --plain --max-rows 50

# Recompute formulas before display (slow but accurate)
vgrid peek report.xlsx --recompute --plain
```

When to use `peek` vs `sheet inspect`:

- `peek` — quick look at any tabular file; shape/plain output for quick checks
- `sheet inspect` — structured JSON output of individual cells, formulas, and metadata from `.sheet` files
## Reconciling datasets

Reconciliation is a join problem. Normalize schemas first, then diff, then export unmatched rows for the next pass. Using `--export`, `--rename`, and `--no-fail`, agents can complete multi-pass reconciliation without leaving vgrid.
### Two-pass reconciliation playbook

This is the canonical agent pattern. Copy it verbatim.
```sh
# 0) Orient — understand the shape
vgrid peek remittance.xlsx --shape
vgrid sheet inspect remittance.xlsx --headers --calc "SUM(Amount)" --json

# 1) Normalize schemas (rename headers to match)
vgrid convert invoice_line_items.csv -t csv --headers \
  --rename 'order_number:Invoice,amount:Amount' \
  --select 'Invoice,Amount,description' \
  -o ledger.csv

vgrid convert remittance.csv -t csv --headers \
  --select 'Invoice,Amount' \
  -o remit.csv

# 2) Pass 1: exact key match
vgrid diff remit.csv ledger.csv \
  --key Invoice --tolerance 0.01 --no-fail --out json \
  --export only_left:/tmp/unmatched.csv \
  --export matched:/tmp/matched.csv

# 3) Pass 2: fuzzy match unmatched rows by description
vgrid diff /tmp/unmatched.csv ledger.csv \
  --key Invoice --match contains \
  --contains-column description \
  --key-transform digits --tolerance 0.01 \
  --on-ambiguous report --no-fail --out json \
  --export only_left:/tmp/still_unmatched.csv \
  --export ambiguous:/tmp/ambiguous.csv

# 4) Audit totals on matched subset
vgrid sheet inspect /tmp/matched.csv --headers \
  --calc "SUM(Amount)" --json
```

Each `--export` writes a clean CSV (headers + data) that feeds directly into the next step. No JSON wrangling.
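The control flow between passes can be sketched as a small Python decision function. This is a sketch of the playbook's logic, not part of vgrid itself; the summary keys (`only_left`, `ambiguous`) come from the diff JSON output, and the `/tmp` paths match the playbook above.

```python
def next_step(summary: dict) -> str:
    """Decide what to do after a diff pass, based on its JSON summary.

    Mirrors the two-pass playbook: ambiguous matches need review,
    unmatched left rows feed a fuzzy pass, otherwise audit the matches.
    """
    if summary.get("ambiguous", 0) > 0:
        return "review /tmp/ambiguous.csv"
    if summary.get("only_left", 0) > 0:
        return "run fuzzy pass on /tmp/unmatched.csv"
    return "audit /tmp/matched.csv"

# After pass 1: five left rows are unmatched, so the fuzzy pass comes next.
print(next_step({"matched": 45, "only_left": 5, "ambiguous": 0}))
# -> run fuzzy pass on /tmp/unmatched.csv
```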
### When to use which flags

| Scenario | Flags |
|---|---|
| Exact key match (IDs are clean) | `--match exact` (default) |
| Left key appears inside right text | `--match contains --contains-column <col>` |
| IDs have punctuation/formatting differences | `--key-transform alnum` or `digits` |
| Agent workflow (must not crash on diffs) | `--no-fail` |
| Extract unmatched rows for next pass | `--export only_left:/tmp/unmatched.csv` |
| Full audit artifact with both sides | `--export matched:... --export-side both` |
| Need to review ambiguous matches | `--on-ambiguous report --export ambiguous:...` |
| Rounding tolerance for financial data | `--tolerance 0.01` |
## Exporting subsets with `--export`

`--export STATUS:PATH` writes rows matching a status to a CSV file. Repeatable.
```sh
# Export unmatched left rows (default: left-side columns only)
--export only_left:/tmp/unmatched.csv

# Export matched rows with both sides + metadata columns
--export matched:/tmp/matched.csv --export-side both

# Multiple exports in one invocation
--export only_left:/tmp/unmatched.csv --export matched:/tmp/matched.csv
```

Valid statuses: `only_left`, `only_right`, `matched`, `diff`, `ambiguous`.

`--export-side` controls which columns appear:

- `left` (default): left file’s columns only. Directly feedable into a second `vgrid diff` pass.
- `right`: right file’s columns only.
- `both`: metadata prefix (`_status`, `_key`, etc.) + left columns + `right_`-prefixed right columns. For ambiguous rows, one row per candidate.
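A sketch of consuming a `--export-side both` file with the Python stdlib. The sample row and any metadata columns beyond `_status` and `_key` are assumptions for illustration; only those two prefixed names are documented above.

```python
import csv
import io

# Hypothetical sample of a `--export matched:... --export-side both` file.
sample = io.StringIO(
    "_status,_key,Invoice,Amount,right_Invoice,right_Amount\n"
    "matched,100154,INV-100154,250.00,100154,250.00\n"
)

rows = list(csv.DictReader(sample))
left_amount = float(rows[0]["Amount"])
right_amount = float(rows[0]["right_Amount"])

# Same tolerance the diff itself used (--tolerance 0.01).
assert abs(left_amount - right_amount) <= 0.01
```

Because left columns keep their original names and right columns gain a `right_` prefix, both sides of each match are addressable from one `DictReader` row.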
## Key transforms for ID normalization

| Transform | Input | Output |
|---|---|---|
| `trim` | `" INV-123 "` | `INV-123` |
| `digits` | `Order #100154` | `100154` |
| `alnum` | `Order #O2025-X` | `ORDERO2025X` |

`alnum` is the best default for agent workflows — it handles most formatting differences (hyphens, spaces, #, $) without losing alphabetic context.
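For local experimentation before running a diff, the three transforms can be approximated in Python. This is an approximation of the table above, not vgrid's implementation; in particular, the uppercasing in `alnum` is inferred from the example output rather than a documented spec.

```python
import re

def trim(s: str) -> str:
    return s.strip()

def digits(s: str) -> str:
    # Keep only digit characters.
    return re.sub(r"\D", "", s)

def alnum(s: str) -> str:
    # The table shows "Order #O2025-X" -> "ORDERO2025X", so strip
    # non-alphanumerics and uppercase what remains.
    return re.sub(r"[^A-Za-z0-9]", "", s).upper()

print(trim("  INV-123  "))      # INV-123
print(digits("Order #100154"))  # 100154
print(alnum("Order #O2025-X"))  # ORDERO2025X
```

Running candidate keys through these locally shows whether two files' IDs will collide onto the same normalized key before you commit to a `--key-transform` choice.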
## Reading diff output

The JSON output always includes a summary object:

```json
{
  "summary": {
    "matched": 45,
    "only_left": 5,
    "only_right": 3,
    "diff": 12,
    "diff_outside_tolerance": 8,
    "ambiguous": 2
  }
}
```

Agents should check `summary.matched`, `summary.only_left`, and `summary.ambiguous` to decide next steps — not the exit code.
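In Python, that check is a straight `json.loads` on captured stdout. A minimal sketch, using the sample summary values from above as a stand-in for real output:

```python
import json

# Stand-in for stdout captured from `vgrid diff ... --no-fail --out json`.
stdout = (
    '{"summary": {"matched": 45, "only_left": 5, "only_right": 3,'
    ' "diff": 12, "diff_outside_tolerance": 8, "ambiguous": 2}}'
)

summary = json.loads(stdout)["summary"]
needs_second_pass = summary["only_left"] > 0  # unmatched left rows remain
needs_review = summary["ambiguous"] > 0       # a human or agent must disambiguate
```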
## Never do this

Don’t pipe `vgrid peek` into `vgrid calc`. `peek` is for human inspection (formatted tables); `calc` expects raw data. Use `vgrid sheet inspect --calc` for computing values, or `vgrid convert` to extract data, then pipe into `calc`.

`--compare` enforces name matching when headers exist. If you write `--compare Amount` and the right file doesn’t have a column named `Amount`, vgrid will refuse to run and tell you which columns the right file has. This is intentional — it prevents silently comparing unrelated columns. Fix: rename first with `vgrid convert right.csv --headers --rename 'total:Amount' -t csv -o fixed.csv`. If you use a positional spec like `--compare B` or `--compare 2`, positional behavior is used instead.

Don’t parse diff JSON to extract row subsets. Use `--export only_left:/tmp/unmatched.csv` instead. It’s one flag vs. a Python script.
## Exit code handling

By default, `vgrid diff` exits non-zero when differences are found (like Unix `diff`). This is useful for CI but causes agents to interpret valid output as a crash.

Always use `--no-fail` in agent workflows. This makes `diff` exit 0 whenever it runs successfully, regardless of whether differences were found.
| Outcome | With `--no-fail` | Without `--no-fail` |
|---|---|---|
| Differences found | 0 | 1 |
| Ambiguous matches | 0 | 4 |
| Parse error (bad CSV) | 5 | 5 |
| Invalid arguments | 2 | 2 |

`--no-fail` only suppresses semantic outcomes (diffs, ambiguity). True errors (bad files, invalid columns, malformed input) still exit non-zero.
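An agent-side wrapper can encode that table directly. A sketch that maps only the codes listed above; anything else is treated as an unknown error:

```python
def classify_exit(code: int, no_fail: bool = True) -> str:
    """Map a vgrid diff exit code to an outcome, per the exit-code table."""
    if code == 0:
        return "ok"  # with --no-fail this also covers diffs and ambiguity
    if code == 2:
        return "invalid arguments"
    if code == 5:
        return "parse error"
    if not no_fail:
        # Semantic outcomes only surface as non-zero without --no-fail.
        if code == 1:
            return "differences found"
        if code == 4:
            return "ambiguous matches"
    return "unknown error"

print(classify_exit(0))                   # ok
print(classify_exit(1, no_fail=False))    # differences found
print(classify_exit(5))                   # parse error
```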
## Preparing data with convert

If column names differ between files, rename first. This is always the first move:

```sh
# Minimal: rename one column to align schemas
vgrid convert right.csv -t csv --headers \
  --rename 'amount:Amount' \
  -o right_fixed.csv
```

More examples:
```sh
# Rename multiple columns
vgrid convert vendor_export.csv -t csv --headers \
  --rename 'order_number:Invoice,total:Amount' \
  -o vendor.csv

# Extract specific columns and filter rows
vgrid convert raw_export.csv -t csv --headers \
  --select 'Invoice,Amount,Date' \
  --where 'Status=Pending' \
  -o prepared.csv

# Convert XLSX to CSV for diffing
vgrid convert report.xlsx -t csv --sheet "Orders" -o orders.csv

# Rename + select in one step
vgrid convert remittance.csv -t csv --headers \
  --rename 'order_number:Invoice,amount:Amount' \
  --select 'Invoice,Amount' \
  -o remit.csv
```

`--rename` applies before `--select` and `--where`, so you can use the new names in those flags.