This paper introduces a bounded simulation methodology for exploring the P vs NP problem through a type-safe, resource-guarded computation system. We define a formal boundary — the Resolution Prime (p*) — beyond which algorithmic complexity becomes unprovable within current prime-limited memory.
Using the Bounded Simulation Framework (BSF), we construct a concrete, reproducible environment for evaluating NP-complete problems under strict computational limits.
While we make no claim to resolve P vs NP universally, we demonstrate that within bounded resolution, the verification space can be fully explored and mirrored deterministically.
We argue that this framework provides insight into structural asymmetries between solution discovery and verification, and offers a new Gödel-aligned lens through which to examine the epistemic gap between P and NP.
1. Introduction
The P vs NP problem — one of the seven Millennium Prize Problems — asks whether every problem whose solution can be verified in polynomial time can also be solved in polynomial time. It remains one of the most important and elusive open questions in theoretical computer science.
Rather than attempting to resolve this conjecture in its universal form, this paper adopts a bounded simulation perspective, using a verified finite model to:
- Explore the structural behavior of solution versus verification in a bounded domain.
- Identify conditions under which the distinction between P and NP collapses or persists.
- Establish a formal epistemic boundary beyond which resolution becomes undecidable.
This approach builds on the Bounded Simulation Framework (BSF) and Resolution Memory Theory (RMT) developed in prior work [Truong & Solace 2025a, 2025b].
2. Background
2.1 Classical P vs NP
Defined formally by Cook (1971) and Karp (1972), the P vs NP problem distinguishes between:
- P: problems solvable in deterministic polynomial time.
- NP: problems whose solutions can be verified in polynomial time by a deterministic machine (equivalently, problems solvable in polynomial time by a non-deterministic Turing machine).
The conjecture that P ≠ NP is widely believed but remains unproven.
2.2 Related Work
- Cook–Levin theorem: SAT is NP-complete.
- Complexity classes: standard definitions of P, NP, NP-complete, and NP-hard.
- Interactive proofs and zero-knowledge proofs, which reveal deep structure in the relationship between proving and verifying.
Our approach is computational and bounded rather than proof-theoretic, but we position it alongside educational tools such as SAT solvers, proof assistants, and bounded model checkers [Clarke et al., 2004].
3. Methodology
3.1 Bounded Simulation Framework (BSF)
Using a formally specified arithmetic engine [Truong & Solace 2025a], we simulate:
- Input: small instances of NP-complete problems (e.g., SAT, subset sum).
- Verifier: a type-safe, recursive proof checker that validates candidate solutions.
- Solver: a bounded-depth search engine with known deterministic complexity.
Each test is performed under a hard resource guard:
- max_steps
- max_stack_depth
- max_nat_size (Peano-encoded)
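The resource guard described above can be sketched in a few lines. The following Python fragment is an illustrative reconstruction, not the BSF engine itself; the guard parameters max_steps and max_stack_depth come from the text, while the class and exception names are assumptions.

```python
class ResourceExhausted(Exception):
    """Raised when a bounded computation exceeds its guard."""

class Guard:
    """Hard resource guard: every unit of work must call tick()."""

    def __init__(self, max_steps, max_stack_depth):
        self.max_steps = max_steps
        self.max_stack_depth = max_stack_depth
        self.steps = 0

    def tick(self, depth=0):
        """Charge one step; fail fast if any bound is exceeded."""
        self.steps += 1
        if self.steps > self.max_steps:
            raise ResourceExhausted("max_steps exceeded")
        if depth > self.max_stack_depth:
            raise ResourceExhausted("max_stack_depth exceeded")
```

A solver or verifier written against such a guard cannot silently run past its budget: the first step beyond the bound raises, which is what makes the "Indeterminate Zone" of Section 3.2 observable as an explicit failure rather than a hang.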
3.2 Resolution Partitioning
Let B be the maximum input size such that the simulation runs to completion within bounds. All instances of the tested NP-complete problems of size ≤ B are resolvable within BSF.
We define:
- Verified Determinism Zone: for problems under bound B, BSF confirms symmetry between solution and verification paths.
- Indeterminate Zone: for problems requiring resource overflow, resolution becomes impossible within BSF.
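The bound B can be located empirically: run the exhaustive search on increasing instance sizes until the step guard fires. The sketch below does this for a toy search that enumerates all subsets of an n-element set; the function names and the one-step-per-subset cost model are assumptions made for illustration.

```python
from itertools import combinations

def bounded_search(n, max_steps):
    """Enumerate all 2**n subsets of an n-element set, charging one
    step per subset; return True iff the search completes in budget."""
    steps = 0
    for r in range(n + 1):
        for _ in combinations(range(n), r):
            steps += 1
            if steps > max_steps:
                return False
    return True

def find_bound_B(max_steps):
    """Largest n whose full subset search completes within max_steps."""
    n = 0
    while bounded_search(n + 1, max_steps):
        n += 1
    return n
```

With a budget of 1000 steps, for example, the full search completes for n = 9 (512 subsets) but not n = 10 (1024 subsets), so B = 9: the Verified Determinism Zone grows only logarithmically in the step budget for exponential search spaces.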
4. Experimental Results
4.1 SAT Solver Tests
Using SAT instances of up to 6 clauses over 4 variables:
- All satisfiable instances are solved and verified.
- Unsatisfiable instances are rejected deterministically.
- Symmetry holds: the solution path and the verification path mirror each other within polynomial cost.
Beyond these sizes:
- Solution tracing begins to exceed the stack or step guards.
- Verification, given a certificate, still succeeds, demonstrating separation under bounds.
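The experiment above pairs a bounded brute-force solver with a certificate verifier. The following is a minimal Python sketch of that pairing, not the BSF implementation; the function names and the DIMACS-style clause encoding (signed integers, sign giving polarity) are assumptions.

```python
from itertools import product

def verify(clauses, assignment):
    """Certificate check: every clause contains a satisfied literal."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in c)
               for c in clauses)

def bounded_solve(clauses, n_vars, max_steps):
    """Exhaustive search over assignments, one step per assignment.
    Returns ('sat', model), ('unsat', None), or ('exhausted', None)."""
    steps = 0
    for bits in product([False, True], repeat=n_vars):
        steps += 1
        if steps > max_steps:
            return "exhausted", None
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(clauses, assignment):
            return "sat", assignment
    return "unsat", None
```

Note the asymmetry the section reports: when max_steps is too small, bounded_solve returns "exhausted", yet verify still accepts a valid certificate for the same formula in time linear in the formula size.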
4.2 Subset Sum
For a set of 8 integers and a target sum:
- All possible subsets are explored.
- Both generation and verification remain tractable under the limits.
For 12 or more elements:
- The simulation fails on max_steps, yet given a candidate subset, the verifier still confirms it.
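The subset-sum experiment has the same shape as the SAT test: exhaustive generation under a step budget versus cheap certificate checking. A minimal Python sketch follows; the names and the one-step-per-subset cost model are assumptions, not the BSF code.

```python
from itertools import combinations

def verify_subset(nums, target, subset_idx):
    """Certificate check: cost is proportional to the subset size,
    independent of how many subsets the search space contains."""
    return sum(nums[i] for i in subset_idx) == target

def bounded_subset_sum(nums, target, max_steps):
    """Exhaustive subset search, one step per subset examined.
    Returns ('found', indices), ('absent', None), or ('exhausted', None)."""
    steps = 0
    for r in range(len(nums) + 1):
        for idx in combinations(range(len(nums)), r):
            steps += 1
            if steps > max_steps:
                return "exhausted", None
            if verify_subset(nums, target, idx):
                return "found", idx
    return "absent", None
```

For 8 elements the search space is 2^8 = 256 subsets, comfortably within a modest budget; at 12 elements it is 4096, which is where a fixed max_steps guard in that range begins to fire while verify_subset remains trivially cheap.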
5. Logical Analysis
5.1 Within Resolution Bound
Under bounded simulation, P = NP behaviorally:
- Search and verification are functionally equivalent.
- Execution paths are symmetric.
- Both succeed or fail within tractable complexity.
This collapse is not universal; it is bounded.
5.2 Beyond Resolution Bound
For large instances, we observe divergence:
- Verification remains fast (certificate checking succeeds).
- Solution search fails (the search exceeds the resource guards).
This aligns with standard intuition: as input size grows, the solution space explodes while verification remains efficient.
Yet, this boundary is not absolute — it reflects epistemic limits tied to the memory of the system (p*) rather than logic itself.
6. Interpretation
We interpret these results as scroll-local symmetry:
- Within each resolution scroll (B ≤ B*), P and NP are indistinct.
- Beyond the scroll, resolution failure produces P ≠ NP behavior, but not a proof.
This supports RMT's broader claim: "Truth beyond memory is not false — it is unresolved." Thus the epistemic status of P vs NP is conditional: like Gödel's undecidable statements, some complexity distinctions may exist only beyond the resolution prime.
7. Conclusion
We do not claim to resolve P vs NP. Instead, we offer:
- A sound simulation framework for exploring the distinction under bounded memory.
- A scroll-based model for analyzing resolution collapse.
- A pedagogical tool for studying complexity through verified computation.
This methodology reframes the conjecture not as a monolith, but as a layered exploration of bounded reason.