TLDR
Episode 13 of PAPER TRAIL aligns six evidence domains — bank documents, wire transfers, FedEx shipments, flight logs, corporate filings, and institutional findings — into a unified event registry of 232,083 events with 143,791 entity-event bridge rows. Nine cross-domain leads were identified and three ACH hypotheses scored positive, yet zero findings were declared: once the confidence penalty from Chao1's 63.7% completeness estimate was applied, every lead fell below the 0.75 adjusted threshold (PAPER TRAIL Project, 2026a).
The Silo Problem
For twelve episodes, each evidence domain was analyzed in isolation. EP04 examined Deutsche Bank compliance. EP06 mapped FedEx shipments. EP08 traced aircraft. EP10 processed emails. Each domain revealed patterns, but patterns within a single domain can be coincidental. EP13 asks what happens when the silos connect — when a wire transfer, a FedEx shipment, and a corporate filing all involve the same entity within the same 48-hour window (PAPER TRAIL Project, 2026a).
Script 25b built the unified event registry. Each event — a wire transfer, a shipment, a bank document, a flight, a corporate filing, an institutional finding — receives a standardized record with timestamp, entity identifiers, domain label, and source document reference. The entity-event bridge maps 898 entities verified across multiple domains, creating cross-domain profiles that no single data source could produce (PAPER TRAIL Project, 2026b).
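The registry and bridge described above can be sketched as a small data structure. This is a minimal illustration, not Script 25b itself: the field names (`event_id`, `domain`, `entity_ids`, `source_doc`) and helper functions are assumptions standing in for whatever schema the pipeline actually uses.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """One standardized registry record (field names are illustrative)."""
    event_id: str
    timestamp: str        # ISO-8601 date string
    domain: str           # e.g. "wire", "shipment", "filing"
    entity_ids: tuple     # entities involved in the event
    source_doc: str       # source document reference

def build_bridge(events):
    """Map each entity to its (event_id, domain) rows across all events."""
    bridge = {}
    for ev in events:
        for ent in ev.entity_ids:
            bridge.setdefault(ent, set()).add((ev.event_id, ev.domain))
    return bridge

def multi_domain_entities(bridge):
    """Entities that appear in more than one evidence domain."""
    return {ent for ent, rows in bridge.items()
            if len({domain for _, domain in rows}) > 1}
```

An entity returned by `multi_domain_entities` is the kind of cross-domain profile the episode counts 898 of: it exists only because independently sourced events were joined on a shared identifier.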
ACH and Monte Carlo
Three Analysis of Competing Hypotheses matrices test the dominant explanations for three observed patterns. Why did FedEx shipments stop in October 2005? Spoliation scores +5.20, ahead of channel migration at +1.90. Why did Deutsche Bank compliance fail for over a decade? Willful blindness scores +6.40, ahead of revenue capture at +4.50. What was the Butterfly Trust used for? Asset concealment scores +4.70, ahead of structured payments at +3.60 (PAPER TRAIL Project, 2026a).
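An ACH matrix scores each hypothesis by summing consistency weights across evidence items: positive weight where the evidence is consistent with the hypothesis, negative where it is inconsistent. The sketch below uses hypothetical evidence rows and weights chosen so the totals reproduce the reported +5.20 vs +1.90 verdict on the FedEx question; the actual evidence items and weights are not in the source.

```python
def ach_score(matrix):
    """Sum consistency weights per hypothesis across all evidence rows.

    matrix maps evidence item -> {hypothesis: weight}, with positive
    weights for consistent evidence and negative for inconsistent.
    """
    totals = {}
    for weights in matrix.values():
        for hyp, w in weights.items():
            totals[hyp] = totals.get(hyp, 0.0) + w
    return totals

# Hypothetical evidence rows and weights (illustrative only):
matrix = {
    "shipments_stop_abruptly": {"spoliation": 2.0, "channel_migration": 0.5},
    "no_successor_carrier":    {"spoliation": 1.7, "channel_migration": -0.3},
    "timing_vs_investigation": {"spoliation": 1.5, "channel_migration": 1.7},
}
scores = ach_score(matrix)
# scores["spoliation"] totals 5.2, scores["channel_migration"] 1.9
```

The winning hypothesis is simply the largest total; the margin between totals is what the Monte Carlo step then stress-tests.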
Monte Carlo simulation (5,000 iterations with Beta distributions) confirmed all three ACH verdicts at 100% stability — no iteration reversed the winning hypothesis. Bayesian Belief Network posteriors corroborated the verdicts: Spoliation P=0.90, Willful Blindness P=0.77, Asset Concealment P=0.69. Ten cross-domain contradictions were detected, and none proved to be a true contradiction — all ten are explained by the FedEx October 2005 data cutoff (PAPER TRAIL Project, 2026b).
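The stability check can be illustrated as follows: rescale every hypothesis score by a Beta-drawn factor each iteration and count how often the winner changes. This is a sketch under assumptions — the source does not specify which quantities the Beta distributions perturb or their parameters, so the symmetric Beta(20, 20) factor with mean 1.0 here is illustrative.

```python
import random

def stability(base_scores, iterations=5000, a=20.0, seed=13):
    """Fraction of iterations each hypothesis keeps the top score when
    every score is rescaled by a Beta(a, a)-drawn factor (mean 1.0)."""
    rng = random.Random(seed)
    wins = {}
    for _ in range(iterations):
        perturbed = {h: s * 2.0 * rng.betavariate(a, a)
                     for h, s in base_scores.items()}
        winner = max(perturbed, key=perturbed.get)
        wins[winner] = wins.get(winner, 0) + 1
    return {h: n / iterations for h, n in wins.items()}

# The reported ACH totals for the FedEx question:
result = stability({"spoliation": 5.20, "channel_migration": 1.90})
```

With a margin this wide, virtually no perturbation flips the verdict, which is what a 100% stability report means in practice: the conclusion is insensitive to reasonable noise in the evidence weights.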
Zero Findings
The episode's most important number is zero. Despite nine cross-domain leads, three positive ACH verdicts, and 898 multi-domain entity profiles, the pipeline declares zero findings. Every lead falls below the 0.75 adjusted confidence threshold because the Chao1 completeness penalty reduces all confidence scores proportionally. When you have observed only 63.7% of the entity population, the missing 36.3% could change any conclusion (PAPER TRAIL Project, 2026a).
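The completeness penalty can be made concrete with the standard bias-corrected Chao1 estimator, which extrapolates total population size from how many entities were seen once (singletons) or twice (doubletons). The proportional penalty function below is an assumption — the source states only that confidence is reduced proportionally by the 63.7% completeness estimate.

```python
def chao1_richness(s_obs, f1, f2):
    """Bias-corrected Chao1 estimate of total population size from the
    observed count s_obs, singleton count f1, and doubleton count f2."""
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def adjusted_confidence(raw, completeness):
    """Proportional completeness penalty (the pipeline's exact penalty
    function is an assumption here)."""
    return raw * completeness

THRESHOLD = 0.75
# At 63.7% completeness, even a strong raw lead misses the bar:
# adjusted_confidence(0.9, 0.637) = 0.5733 < 0.75
```

This is why nine leads can all score positive yet none survive: a raw confidence would need to exceed roughly 0.75 / 0.637 ≈ 1.18, which is impossible, so no lead can clear the threshold until completeness improves.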
This is the distinction between a lead and a finding. A lead is a pattern worth investigating. A finding is a pattern that meets the confidence threshold with known error bounds. The pipeline identifies patterns. The determination of what those patterns mean belongs to authorized investigators with access to the 42% of documents that remain unreleased.
Why This Episode Matters
EP13 demonstrates what cross-domain synthesis looks like when applied honestly. The patterns are real — 575 Lexington Avenue appears in three independent domains, Indyke and Kahn thread through financial and shipping records, the Butterfly Trust spans wires, bank documents, and corporate filings. But the pipeline refuses to call them findings until the math supports it. That refusal is the methodology working as designed.
References
PAPER TRAIL Project. (2026a). EP13 slide content: Cross-domain synthesis, ACH matrices, convergence profiles [Presentation]. communications/ep13_slides/
PAPER TRAIL Project. (2026b). Cross-domain synthesis engine output [Data set]. _exports/synthesis/
This research is sponsored by Subthesis.