Solution
Catch problems before a recall catches you
Darwin doesn't just capture and trace — it analyzes your data in real time to detect anomalies, predict risks and answer complex compliance questions. AI that explains every alert by citing the applicable regulation.
Two pillars
AI applied to regulated compliance
More than dashboards: actionable intelligence
Traceability generates a lot of data. The value is in having someone — or something — review that data 24/7 and warn you before an auditor flags a problem or a retailer rejects a shipment.
Anomaly detection
A pipeline that cross-references a deterministic rules engine with semantic analysis over embeddings. It spots ghost lots, cold-chain breaks, shrinkage and route deviations, each with a natural-language explanation.
Agentic compliance
An agent that answers questions like "Does LOT-8901 qualify for export to the US?" by cross-referencing on-chain data, current regulation and risk analysis. Output: a compliance report with gap analysis and risk score.
Pillar 1 — Anomaly detection
5 categories, 16+ rules, automatic explanation
Every alert comes with the exact reason, the cited regulation and the recommended action — so your team knows what to do without reading code.
Data integrity
Missing KDEs, duplicate lots, TLCs with inconsistent format between actors. Caught before they reach the audit.
Temporal
Out-of-order event sequences (e.g., Pack recorded before Harvest), on-chain backdating, temporal gaps with no events, and transits that are physically impossible given the distance and travel speed.
Chain of custody
Ghost lots (receiving without shipping), custody gaps, unauthorized actors for that event type. The costliest failures to catch manually.
Quantity / mass balance
Excessive shrinkage between shipment and receiving, inflated yield on transformations, unit-of-measure mismatch. Early signals of fraud or error.
IoT / environmental
Cold-chain breaks, GPS route deviations, temperature out of range for commodity. Detected in real time from integrated IoT sensors.
Contextualized explanation
Every alert arrives with a severity (CRITICAL/HIGH/MEDIUM/LOW), the FSMA 204 section violated, the lot history and a concrete action to take, all generated by an LLM.
Pillar 2 — Agentic compliance
Ask in natural language, get a compliance report
From junior to senior analyst, in seconds
Your team doesn't need to memorize 400 pages of regulation or navigate 10 systems: they ask the agent and get the answer, along with a trace of how it was obtained. Full explainability for audits.
Natural-language query
E.g.: "Does this organic spinach lot meet the requirements to export to the US?" — the agent routes the query across multiple data sources.
Automatic multi-source crosscheck
The agent queries on-chain data (chain of custody, lab results, actors), current regulation (FSMA 204, EUDR, private certifications) and risk modules (anomalies + rules).
Structured compliance report
Output with status (COMPLIANT / NON_COMPLIANT / REQUIRES_REVIEW), gap analysis (what's missing), risk scoring, regulation citations and concrete recommended actions.
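As a sketch, the structured report could be represented like this. Field names are assumptions for illustration, not Darwin's actual schema, and the FSMA citation is an example:

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceReport:
    status: str                # COMPLIANT / NON_COMPLIANT / REQUIRES_REVIEW
    risk_score: float          # 0.0 (no risk) to 1.0 (critical), assumed scale
    gaps: list[str] = field(default_factory=list)       # what's missing
    citations: list[str] = field(default_factory=list)  # regulation sections
    actions: list[str] = field(default_factory=list)    # recommended next steps

report = ComplianceReport(
    status="REQUIRES_REVIEW",
    risk_score=0.42,
    gaps=["Missing cooling KDE for CTE 'Initial Packing'"],
    citations=["FSMA 204 (21 CFR 1.1330, initial packing records)"],
    actions=["Request the cooling record from the packer before export"],
)
print(report.status)  # REQUIRES_REVIEW
```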
Pipeline
How it works end-to-end
1. Event ingestion
CTEs captured by Captia and anchored by Tracium enter the pipeline — with temporal, geographic and relational enrichment.
2. Parallel detection
Rules engine (16+ deterministic rules) + centroid-based anomaly detection (semantic, unsupervised) run on each event.
3. Combined score
If the rules engine and the semantic detector both flag an event, it gets high severity. If only one flags it, it stays on watch. This filtering minimizes false positives.
4. Explanation with LLM + RAG
The LLM generates an explanation citing FSMA 204, lot history and actor profile. All indexed in Qdrant for retrieval.
5. Alerts and feedback
Notifications routed by severity (Slack/email/webhook). When an operator marks an alert as a false positive, that feedback flows back into the model. Confirmed anomalies enrich future context.
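Steps 2, 3 and 5 above can be sketched in a few lines. The vectors, the distance threshold and the severity labels are stand-ins for illustration; real embeddings would come from a text-embedding model:

```python
import math

def centroid(vectors):
    """Mean vector of the 'normal' events (the unsupervised baseline)."""
    dims = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dims)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def combined_severity(event_vec, normal_vecs, rule_flagged, threshold=1.0):
    """Step 3: escalate only when the rules engine and the semantic signal agree."""
    semantic_flagged = distance(event_vec, centroid(normal_vecs)) > threshold
    if rule_flagged and semantic_flagged:
        return "HIGH"    # both detectors agree
    if rule_flagged or semantic_flagged:
        return "WATCH"   # one signal only: keep on watch
    return "OK"

# Step 5: events an operator marked as false positives are excluded from the
# centroid so they don't contaminate the baseline.
history = [([0.1, 0.0], False), ([0.0, 0.1], False), ([3.0, 3.0], True)]
normal = [vec for vec, marked_false_positive in history if not marked_false_positive]

print(combined_severity([2.0, 2.0], normal, rule_flagged=True))   # HIGH
print(combined_severity([0.1, 0.1], normal, rule_flagged=False))  # OK
```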
FAQs
Frequently asked questions
Do I need to train a model with labeled data?
No. Semantic detection uses centroids over embeddings (unsupervised), so there's no need for a history of labeled anomalies. The rules engine uses explicit FSMA 204 rules. Both work from day one.
How does the system explain anomalies?
The LLM combines the anomalous event data with RAG over FSMA 204 regulation, lot history, actor profile and similar past anomalies. Output: severity, regulation section violated, and concrete action — not an opaque score.
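As a rough illustration of that RAG assembly, the retrieved context could be combined into a prompt like this. The function and template are hypothetical, not Darwin's actual prompt:

```python
def build_explanation_prompt(event, regulation_snippets, lot_history, similar_cases):
    """Assemble the anomalous event plus retrieved context into one LLM prompt."""
    context = "\n".join(regulation_snippets + lot_history + similar_cases)
    return (
        "You are a food-safety compliance analyst.\n"
        f"Anomalous event: {event}\n"
        f"Context (FSMA 204 excerpts, lot history, similar past anomalies):\n{context}\n"
        "Return: severity, the regulation section violated, and one concrete action."
    )

prompt = build_explanation_prompt(
    event="Receiving recorded for LOT-8901 with no matching shipping event",
    regulation_snippets=["FSMA 204: receivers must keep shipping-linked KDEs"],
    lot_history=["LOT-8901 packed 2024-05-02 by Packer A"],
    similar_cases=["Ghost lot at Distributor B, confirmed 2024-03"],
)
```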
Can I mark false positives?
Yes. Every alert allows feedback. False positives are excluded from future centroid computation (so they don't contaminate). Confirmed anomalies are indexed as historical cases to improve future explanations.
Does it run in real time or batch?
By default, a daily or weekly batch pipeline, which is enough for most regulatory cases. Critical cases (e.g., cold chain, IoT events) can be configured for streaming with immediate alerts.
Can it run self-hosted with no external APIs?
Yes. We support a self-hosted mode with Ollama (local LLMs), Qdrant and local embeddings. Ideal for customers with data-sovereignty requirements or government projects. An API-based mode (faster and cheaper) is also available for customers without those restrictions.
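A self-hosted deployment of this shape might be wired together with a configuration like the following sketch. The keys are illustrative, not Darwin's actual config format; the ports shown are Ollama's and Qdrant's defaults:

```yaml
# Hypothetical self-hosted configuration (illustrative schema)
llm:
  provider: ollama
  base_url: http://localhost:11434   # Ollama's default local endpoint
  model: llama3.1
vector_store:
  provider: qdrant
  url: http://localhost:6333         # Qdrant's default REST port
embeddings:
  provider: local
  model: nomic-embed-text            # embedding model served locally via Ollama
```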
What sets this apart from a BI dashboard?
A BI dashboard shows what you already know to look for. This tells you what you didn't know was happening, and explains it. The agent answers compound queries that today require an analyst cross-referencing three systems and reading the regulation.
Let AI watch over your chain
24/7, never tired, with an explanation for every alert. Let's talk about deploying it on top of your current data.