An AI IDE for ATE engineering

An AI co-engineer that cites every line it writes — and compiles.

ATE·IQ reads your IG-XL workbook, your spec, your IPC-2581 designs, and your STDF lots, then writes test code grounded in a typed 287-entry knowledge base, validated by 31 rules, and compiled through IG-Link before it reaches your reviewer. Local-first. Single Windows installer. Sales-led.

IG-XL: 92% covered · IPC-2581: circuit-json · STDF V4: native · local-first: 0 cloud upload · single .exe
Manager-grade evidence

Every number below is grounded against a real working IG-XL programme — 23 sheets, 37 VBA modules, 788 instrument call sites. Not a synthetic benchmark.

IG-XL coverage
92%
VBT-API call sites in a real working programme recognised by the typed knowledge base.
API entries
287
Typed VBT-API surface — every entry cited to its source documentation page.
Validator rules
23 + 8
VBA quality rules (R01–R23, dev/prod profiles) plus cross-sheet rules that catch Flow / TestInst / Limits / Binning mismatches.
Instruments
14
Detected on a real digital programme. Each carries identity, electrical envelope, settle times, and prohibitions.
Cloud upload
0
Bring your own LLM keys — Anthropic, OpenAI-compatible, or Azure OpenAI. Your provider, your network.
Installer
1 .exe
Single Windows installer. Electron + bundled SQLite. No Postgres, no Docker on the engineer's bench.
Why ATE·IQ, not a chatbot

A co-engineer your reviewers will sign off on — because everything is cited, validated, and grounded.

An ATE programme runs on a tester. A wrong line costs a week of yield. ATE·IQ is built around the assumption that AI output must be reviewable end-to-end before it goes near silicon.

01 · Cited

Every AI output points at its source.

Reuse a prior design? The pilot emits a <cite kind="design"/> chip that links to /pcb/<id>. Quote a spec? <cite kind="spec"/> deep-links to the requirement line. Reference a part? <cite kind="part"/> points at the parts library. No claim without a citation.
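
To make the chip concrete, here is a minimal sketch of what a cite chip's payload could look like, as a TypeScript discriminated record. The type and field names are illustrative assumptions, not ATE·IQ's actual schema; the example id is hypothetical.

```typescript
// Hypothetical shape of a cite chip payload — illustrative, not the shipped schema.
type CiteKind = "design" | "spec" | "part";

interface CiteChip {
  kind: CiteKind;
  /** Deep link the chip resolves to, e.g. /pcb/<id> for a design cite. */
  href: string;
  /** Human-readable label rendered on the chip. */
  label: string;
}

// Example: a design-reuse citation pointing at a prior board (id is made up).
const chip: CiteChip = {
  kind: "design",
  href: "/pcb/brd-0042",
  label: "Prior design: brd-0042",
};
```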

02 · Validated

Twenty-three VBA rules + eight cross-sheet rules, on every generation.

The cross-sheet validator runs between the LLM output and the IG-Link compile step: arg-ceiling checks, an opcode allowlist, Flow → TestInst / Limits / Binning resolution, sheet-ref resolution, Tdelay sanity, and the tf* procedure-prefix convention. Failures land in the Validation tab of the run viewer.
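
A minimal sketch of one such rule — Flow → TestInst resolution — under assumed types. The rule id "X01", the field names, and the function signature are all illustrative, not the product's real rule set.

```typescript
// Sketch of a cross-sheet rule: every Flow row must resolve to a TestInst entry.
interface Workbook {
  flow: { instanceName: string; row: number }[]; // rows from the Flow sheet
  testInstances: Set<string>;                    // names defined on TestInst
}

interface Finding {
  rule: string;                 // e.g. "X01" — hypothetical rule id
  severity: "failure" | "warning";
  message: string;
}

function checkFlowResolvesToTestInst(wb: Workbook): Finding[] {
  const findings: Finding[] = [];
  for (const { instanceName, row } of wb.flow) {
    if (!wb.testInstances.has(instanceName)) {
      findings.push({
        rule: "X01",
        severity: "failure",
        message: `Flow row ${row}: '${instanceName}' has no TestInst entry`,
      });
    }
  }
  return findings;
}
```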

03 · Grounded

Typed knowledge base from a real working programme.

287 VBT-API entries, every one citing its UltraFLEX documentation page. 14 instruments detected on the reference programme. Eight idiom-derived best practices. The pilot doesn't pattern-match against the public internet; it reasons over this typed surface.
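
As a sketch of what "typed surface" means in practice, here are hypothetical shapes for a knowledge-base entry and an instrument profile. Every field name here is an assumption for illustration, not the shipped schema.

```typescript
// Hypothetical knowledge-base record shapes — illustrative only.
interface VbtApiEntry {
  name: string;       // fully qualified VBT-API name
  signature: string;  // typed argument list
  docPage: string;    // UltraFLEX documentation page this entry cites
}

interface InstrumentProfile {
  identity: string;                               // detected instrument model
  electricalEnvelope: { vMax: number; iMaxA: number };
  settleTimeMs: number;
  prohibitions: string[];                         // operations the pilot must never emit
}
```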

04 · Reviewable

The diff, the run, the validation — all in-app.

Every generation produces a run with status (curate → draft → compile → verify), artefacts grouped by kind, a Validation tab with failures and warnings collapsible by rule, and one-click download. No JSON spelunking. No "check the logs".
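
A sketch of the run record such a viewer might render, with the statuses taken from the copy above; the field names are assumptions, not ATE·IQ's actual data model.

```typescript
// Hypothetical run record — statuses mirror curate → draft → compile → verify.
type RunStatus = "curate" | "draft" | "compile" | "verify";

interface Artefact {
  kind: string;        // artefacts are grouped by kind in the viewer
  filename: string;
  downloadUrl: string; // one-click download
}

interface ValidationFinding {
  rule: string;        // findings collapse by rule in the Validation tab
  severity: "failure" | "warning";
  message: string;
}

interface Run {
  id: string;
  status: RunStatus;
  artefacts: Artefact[];
  validation: ValidationFinding[];
}
```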

Four intelligences, one tool

Every surface reads from the same project context. No silos.

The Excel task pane and the standalone IDE are two surfaces on the same engine — chat with your open workbook from inside Excel, then review the same run in the IDE without re-uploading anything.

On the bench, not in the cloud

Your IP never leaves the host. By design.

Loopback only

All MCP servers and the Next.js shell bind to 127.0.0.1. Nothing is reachable from the LAN, nothing from the corp VPN.
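
For the curious: this is what a loopback-only posture looks like in Node.js. A minimal sketch, not ATE·IQ's code; the port and handler are illustrative.

```typescript
// Binding explicitly to 127.0.0.1 keeps the socket off 0.0.0.0 —
// only processes on this machine can connect.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  res.writeHead(200, { "content-type": "text/plain" });
  res.end("ok\n");
});

server.listen(8731, "127.0.0.1", () => {
  console.log("listening on 127.0.0.1:8731 (loopback only)");
});
```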

Bring your own LLM

Anthropic, OpenAI-compatible, or Azure OpenAI. Your keys, your provider, your egress policy. Switchable at runtime from the Settings panel.
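
A sketch of what a runtime-switchable provider config could look like, assuming a simple discriminated union; the type, fields, and example values are illustrative, not the product's settings schema.

```typescript
// Hypothetical provider config — "your keys, your provider" as a union type.
type LlmProvider =
  | { kind: "anthropic"; apiKey: string; model: string }
  | { kind: "openai-compatible"; baseUrl: string; apiKey: string; model: string }
  | { kind: "azure-openai"; endpoint: string; apiKey: string; deployment: string };

// Switching at runtime is then just swapping the active record.
let active: LlmProvider = {
  kind: "openai-compatible",
  baseUrl: "https://llm.internal.example", // your egress policy decides this
  apiKey: process.env.LLM_API_KEY ?? "",
  model: "your-model",
};
```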

IP stays at the bench

Workbook, VBA, schematics, STDF lots — none of it leaves the host. Only the prompt does, and you decide where that goes.

Talk to us

Bring an .xlsm. Leave with a working IG-Link draft.

30-minute scoping call. We follow up with a quote within one business day.