Formal Methods in AI Safety · 2026
The complete paper, including all proofs and algorithms
Download Full Paper (PDF): Eventizing_Traditionally_Opaque_Binary_Neural_Networks_as_1-safe_Petri_net_Models.pdf
Binary neural networks (BNNs) offer computational efficiency but remain opaque to formal analysis. We present a constructive framework that translates any BNN into an equivalent 1-safe Petri net, making the full toolbox of Petri net theory (reachability analysis, invariant computation, and model checking) applicable to neural network verification. The construction preserves the input-output semantics of the original network while guaranteeing the 1-safety property. We demonstrate the approach on MNIST and CIFAR-10 classifiers, obtaining formal safety certificates where previous methods could not. Complete details and proofs are given in the PDF.
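To make the idea concrete, here is a minimal illustrative sketch (not the paper's construction, whose details are in the PDF) of how a single binarized neuron might be encoded as a 1-safe Petri net. The encoding convention, the place names such as `x0=1` and `y=1`, and the helper functions `bnn_neuron`, `neuron_to_petri`, and `fire` are all assumptions made for this example: each Boolean signal is represented by two complementary places, so every reachable marking carries at most one token per place.

```python
# Hypothetical sketch: one binarized neuron (inputs in {0,1}, weights in
# {-1,+1}, threshold activation) encoded as a 1-safe Petri net.
# Each Boolean signal v gets complementary places "v=1" / "v=0"; exactly
# one is marked, so markings are 1-safe by construction. One transition
# per input assignment produces the matching output token (exponential
# in the input count -- fine for illustration only).
from itertools import product

def bnn_neuron(x, w, b):
    # Binarized neuron: map input bit 0 to -1, 1 to +1, then threshold.
    s = sum(wi * (1 if xi else -1) for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

def neuron_to_petri(w, b):
    # Enumerate all input assignments; each becomes one transition
    # (preset, postset) over place names like "x0=1" and "y=0".
    transitions = []
    for bits in product([0, 1], repeat=len(w)):
        pre = frozenset(f"x{i}={bi}" for i, bi in enumerate(bits))
        y = bnn_neuron(bits, w, b)
        post = pre | {f"y={y}"}  # inputs are read back (self-loop style)
        transitions.append((pre, post))
    return transitions

def fire(marking, pre, post):
    # 1-safe firing rule: enabled iff the preset is marked and the newly
    # produced tokens do not collide with tokens left in the marking.
    if not pre <= marking:
        return None
    if (post - pre) & (marking - pre):
        return None
    return (marking - pre) | post

# Example: a 2-input neuron with weights (+1, +1) and bias 0.
net = neuron_to_petri((1, 1), 0)
m0 = frozenset({"x0=1", "x1=0"})  # initial marking encoding input (1, 0)
results = [fire(m0, pre, post) for pre, post in net if pre <= m0]
```

Because each transition's preset is a complete input assignment, at most one transition is enabled from any valid input marking, and the token placed on `y=1` or `y=0` agrees with the neuron's output, mirroring the input-output equivalence the paper claims for the full construction.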