A CRYPTOGRAPHIC ONTOLOGY OF NONCANONICITY

Foundations, Formalization, and Implementations

Technical-Scientific Report
Foundational Research Document

São Paulo – 2026

Marcos Elias*


ABSTRACT

This handbook presents a cryptographic architecture grounded not in computational inversion hardness, nor in information-theoretic secrecy, but in controlled non-identifiability: a regime in which public data is complete, deterministic, and verifiable, yet structurally underdetermining with respect to global identity.

The construction arises from the mathematical theory of mock modular and mock theta functions, whose defining characteristic is that their holomorphic part—fully observable and computable—does not canonically determine their global completion. Identity emerges only after the introduction of additional, non-local data (the shadow and normalization), which cannot be recovered through local evaluation or finite observation.

We formalize this phenomenon as a cryptographic principle, define attacker models based on restricted observables, and introduce a family of problems (MMIP: Mock Modular Identification Problems) that replace inversion with identification under ambiguity. We then show how this epistemic asymmetry survives discretization and truncation, enabling concrete implementations over finite fields, with verifiable transcripts, consistency operators, and reference constructions for key exchange and signatures.

This work does not claim absolute quantum immunity. Instead, it establishes an architecture orthogonal to known quantum attack paradigms (Shor, Grover, HSP reductions), and proposes a diversified cryptographic foundation suitable for a post-quantum world.


READER’S MAP

This document is written for three audiences simultaneously:

  • Mathematicians will find a precise articulation of non-canonicity, torsor structures, and global-versus-local information asymmetry, with careful avoidance of informal impossibility claims.
  • Cryptographers and engineers will find explicit threat models, attacker games, discretization strategies, transcript formats, and implementable primitives.
  • Qualified investors and technical decision-makers will find a clear separation between foundational truth, engineering artifacts, and economic optionality.

No prior familiarity with mock modular forms is assumed beyond graduate-level mathematical maturity.


NOTATION AND CONVENTIONS

Throughout this work:

  • ℍ denotes the upper half-plane {τ ∈ ℂ : Im(τ) > 0}
  • q = exp(2πiτ)
  • Γ ⊆ SL(2,ℤ) denotes a congruence subgroup
  • k denotes modular weight
  • f(τ) denotes the holomorphic mock modular form
  • g(τ) denotes its shadow (a cusp form)
  • R_g(τ) denotes the non-holomorphic completion term
  • F(τ) = f(τ) + R_g(τ) denotes the completed object
  • “Transcript” always refers to finite, public, verifiable data
  • “Completion data” always refers to global, private, identity-selecting data

We avoid categorical claims of impossibility. All security statements are made relative to explicit access models.
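As a quick sanity check on the conventions above, the map τ ↦ q can be exercised numerically. A minimal sketch (the function name is ours):

```python
import cmath

def q_of_tau(tau: complex) -> complex:
    # q = exp(2*pi*i*tau); for tau in the upper half-plane H (Im(tau) > 0)
    # we have |q| < 1, so q-expansions sum(a_n * q**n) converge absolutely.
    return cmath.exp(2j * cmath.pi * tau)

tau = 0.25 + 1.0j          # a point in H
q = q_of_tau(tau)
assert abs(q) < 1          # convergence region for q-series
```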


PART I – THE ONTOLOGICAL CRISIS OF CRYPTOGRAPHY


CHAPTER 1 – THE HIDDEN AXIOM OF PUBLIC-KEY CRYPTOGRAPHY

1.1 The Unspoken Premise

All modern public-key cryptography, from RSA to lattice-based post-quantum candidates, shares a premise so fundamental that it is almost never stated:

Public data uniquely determines the secret.

Security, under this premise, is achieved by ensuring that recovering the secret from the public data is computationally infeasible.

This premise holds for:

  • RSA: the modulus N uniquely determines (p, q)
  • ECC: the public point Q uniquely determines the scalar d
  • Lattice schemes: the public basis uniquely determines the secret vector

The difference between schemes lies only in how hard inversion is believed to be.

This is not a computational assumption.
It is an epistemic assumption.


1.2 Shor Did Not Create the Crisis

Shor’s algorithm did not weaken RSA or ECC by making factoring or discrete logs “faster”. It exposed something deeper:

The secrecy of these schemes depends entirely on the assumption that inversion is the only way to learn the secret.

Once inversion is efficient, secrecy collapses completely.

The real fragility is therefore not computational—it is ontological.


1.3 Two Ways a System Can Fail

A cryptographic system can fail in two fundamentally different ways:

  1. Computational failure
    An adversary finds a faster algorithm for inversion.
  2. Ontological failure
    The system reveals that the public data already contains the secret uniquely.

Shor’s algorithm is merely a catalyst that forces us to confront the second type of failure.


CHAPTER 2 – THREE REGIMES OF SECURITY

2.1 Regime I: Information-Theoretic Secrecy

The public data contains zero information about the secret.

Example: One-time pad.

This regime is absolute, but operationally restrictive.


2.2 Regime II: Computational Inversion

The public data uniquely determines the secret, but inversion is computationally hard.

Examples:

  • RSA
  • ECC
  • Lattice-based PQC
  • Isogeny-based schemes

Quantum computing threatens this regime directly.


2.3 Regime III: Controlled Non-Identifiability

The public data does not uniquely determine the secret.

Instead:

  • Multiple mathematically valid secrets are consistent with the public data.
  • The correct secret is selected only by additional, global information.
  • This information is categorically different from the public observables.

This regime is neither Shannon secrecy nor inversion hardness.

It is epistemic asymmetry by construction.


2.4 Why Regime III Is Not “Obfuscation”

This regime is often misunderstood as “hiding more cleverly”.

It is not.

The public transcript is:

  • Deterministic
  • Verifiable
  • Finite
  • Complete in its own category

What it is not is canonically identifying.

This distinction is the core of the entire architecture.


CHAPTER 3 — RAMANUJAN’S INTERRUPTION

3.1 The Mathematical Anomaly

Ramanujan’s mock theta functions were not proposed as part of a theory. They appeared as finished objects that refused classification.

They had:

  • Well-defined q-expansions
  • Stable local behavior
  • Predictable asymptotics

And yet:

  • They were not modular
  • They failed closure globally
  • They resisted canonical identification

For decades, mathematics could use them but could not explain them.

This is not an accident. It is a signal.


3.2 Zwegers’ Resolution: Incompleteness, Not Failure

Zwegers showed that mock theta functions are the holomorphic parts of larger objects.

The missing piece is:

R_g(τ) = ∫_{-τ̄}^{i∞} g(z) / (z + τ)^k dz

Where g(τ) is a cusp form (the shadow).

This term is:

  • Global
  • Non-holomorphic
  • Not reconstructible from local data

The holomorphic object is not broken.
It is incomplete by design.
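The non-local character of R_g can be made concrete numerically. The sketch below approximates the integral by midpoint quadrature along the vertical ray from −τ̄ toward i∞, truncated at a finite height. The toy shadow g(z) = e^{2πiz} stands in for the leading term of a genuine cusp form, and all names and parameters here are ours:

```python
import cmath

def completion_term(tau, g, k, height=50.0, steps=4000):
    # Approximate R_g(tau) = ∫_{-conj(tau)}^{i∞} g(z) / (z + tau)^k dz
    # along the vertical path z = -conj(tau) + i*t, truncated at t = height.
    z0 = -tau.conjugate()
    dt = height / steps
    total = 0.0 + 0.0j
    for n in range(steps):
        t = (n + 0.5) * dt                       # midpoint rule
        z = z0 + 1j * t
        total += g(z) / (z + tau) ** k * (1j * dt)   # dz = i dt
    return total

# toy "shadow": leading term of a cusp form, g(z) = e^{2*pi*i*z}
g = lambda z: cmath.exp(2j * cmath.pi * z)
R = completion_term(0.1 + 0.8j, g, k=0.5)
```

The decay of g toward i∞ makes the truncated integral converge rapidly; the point of the construction, however, is that no finite set of evaluations of f alone produces this quantity.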


3.3 Zagier’s Reframing: Non-Canonicity

Zagier emphasized the crucial conceptual point:

The holomorphic data does not canonically determine the completion.

Even with infinite local precision, the identity is not fixed.

Multiple completions coexist, forming a structured ambiguity class.

This is the exact opposite of classical cryptography’s premise.


CHAPTER 4 — THE CORE CRYPTOGRAPHIC INSIGHT

4.1 Observation Does Not Imply Identification

Mock modular phenomena demonstrate a principle that cryptography has never exploited:

Local observability does not entail global identifiability.

Two objects may:

  • Agree arbitrarily well on all observable data
  • Differ globally in a way that only non-local information resolves

This is not noise.
It is structure.


4.2 The New Security Primitive

The cryptographic primitive is not a function to invert.

It is a family to select from.

Public data defines a space of valid identities.
Private data selects one identity.

Breaking the system means collapsing ambiguity without access to global data.


4.3 Why This Survives Quantum Scrutiny

Quantum algorithms accelerate:

  • Period finding
  • Linear algebra
  • Unstructured search

They do not bypass the absence of information.

No algorithm—classical or quantum—can extract what is not present in the accessible category.

This is not immunity.
It is orthogonality.


END OF PART I

What is now fixed:

  • The security regime (controlled non-identifiability)
  • The epistemic distinction between transcript and completion
  • The legitimacy of mock modular phenomena as a cryptographic substrate

What is not yet fixed:

  • The exact transcript format
  • The discrete operators
  • The concrete primitives

These are built next.

PART II — FORMAL NON-CANONICITY AND THE RESTRICTED OBSERVABLES MODEL


CHAPTER 5 — NON-CANONICITY AS A MATHEMATICAL STRUCTURE

5.1 From Ambiguity to Structure

Ambiguity, in mathematics, is usually a defect: a sign that an object has been insufficiently constrained. The mock modular setting is unusual because its ambiguity is not accidental but structural. The space of admissible completions is neither arbitrary nor unbounded; it is organized, finite dimensional, and governed by well-understood cohomological mechanisms.

This observation is essential. Cryptography cannot be built on vagueness. It requires that any ambiguity be:

  1. Mathematically precise
  2. Stable under perturbation
  3. Verifiable without collapse

Mock modular forms satisfy all three.


5.2 Definition: Non-Canonical Object

We formalize the central notion.

Definition (Non-Canonical Object).
Let L be a class of observable data and G a class of global parameters. An object O is said to be non-canonical relative to L if there exists a family {O_g} indexed by g ∈ G such that:

  1. For all g₁, g₂ ∈ G,
    O_{g₁} and O_{g₂} are indistinguishable under all observables in L.
  2. For g₁ ≠ g₂,
    O_{g₁} ≠ O_{g₂} as global objects.
  3. The family {O_g} carries a structured action of G (affine, torsorial, or cohomological).

In this sense, non-canonicity is not lack of definition, but dependence on global choice.


5.3 Mock Modular Forms as Non-Canonical Objects

Let f(τ) be a mock modular form of weight k for Γ.

There exists a space S_{2−k}(Γ) of cusp forms (the shadows) such that each g ∈ S_{2−k}(Γ) defines a completion:

F_g(τ) = f(τ) + R_g(τ)

where:

R_g(τ) = ∫_{−τ̄}^{i∞} g(z) / (z + τ)^k dz

Crucially:

  • The holomorphic part f(τ) is fixed and public
  • The shadow g(τ) is not determined by f
  • Different g produce globally distinct F_g
  • All F_g satisfy the same modular transformation laws

The set of completions {F_g} forms an affine space over a subspace of S_{2−k}(Γ), modulo normalization equivalence.

This is non-canonicity in its purest form.


5.4 Torsor Interpretation

More precisely, the completions of f form a torsor under a vector space V:

{completions of f} ≅ v₀ + V

where:

  • There is no distinguished origin
  • Differences are meaningful
  • Absolute identification is not

This absence of a canonical origin is exactly what cryptography exploits.
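The torsor structure can be illustrated with a toy discrete model (field size, dimension, and all names are ours): completions form a coset v₀ + V over GF(p), and differences of completions recover elements of V while no completion is a distinguished origin.

```python
import random

p, dim = 97, 4                  # toy field size and shadow-space dimension
rng = random.Random(0)

def rand_vec():
    return [rng.randrange(p) for _ in range(dim)]

v0 = rand_vec()                                 # one completion; not canonical
shadows = [rand_vec() for _ in range(5)]        # sample elements of V
completions = [[(a + s) % p for a, s in zip(v0, sh)] for sh in shadows]

# Differences of completions land back in V: identity-free information.
diff = [(a - b) % p for a, b in zip(completions[2], completions[0])]
expect = [(a - b) % p for a, b in zip(shadows[2], shadows[0])]
assert diff == expect
```

Replacing v₀ by any other completion leaves every difference unchanged, which is exactly the statement that absolute identification carries no observable meaning.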


CHAPTER 6 — LOCAL OBSERVABILITY VS GLOBAL IDENTITY

6.1 The Local–Global Divide

The defining feature of mock modular phenomena is the separation between:

  • Local observables:
    Values of f(τ), derivatives, truncated q-coefficients, finite evaluations.
  • Global identity:
    The shadow g and the completion R_g, which depend on integration over infinite paths.

No amount of local probing reconstructs the global choice.


6.2 Formal Separation of Information Categories

We define two categories of information:

Local information (L):

  • Finite truncations of q-series
  • Pointwise evaluations
  • Finite modular consistency checks
  • Any data computable via oracle access to f

Global information (G):

  • Shadow g
  • Normalization conditions
  • Cohomological class selecting the completion

These categories are not refinements of each other. G is not “more L”.

This categorical separation is what survives discretization.


6.3 The “Difference Below the Noise Floor”

One of the most subtle—and cryptographically powerful—features of mock modular forms is the following phenomenon:

Given two distinct shadows g₁ ≠ g₂, the corresponding completions F_{g₁} and F_{g₂} can satisfy:

|F_{g₁}(τ) − F_{g₂}(τ)| < ε

uniformly on any compact subset of ℍ, for arbitrarily small ε, provided the difference is absorbed into the non-holomorphic tail.

Thus:

  • Local evaluation cannot distinguish the identities
  • Precision does not help
  • Identity lives outside the observable category

This is not approximation error.
It is structural underdetermination.
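A toy numeric illustration of this phenomenon (all constants and names are ours): two “completions” sharing one holomorphic observable and differing only in a rapidly decaying non-holomorphic tail agree to within ε on a compact region.

```python
import math

def tail(y, c):
    # toy non-holomorphic tail: decays like exp(-2*pi*y) in y = Im(tau)
    return c * math.exp(-2 * math.pi * y)

def completed(f_val, y, c):
    # holomorphic observable plus the tail selected by "shadow" c
    return f_val + tail(y, c)

f_val = 1.2345             # shared holomorphic part
c1, c2 = 0.3, 0.9          # two distinct toy shadows
eps = 1e-5

# On the compact region Im(tau) >= 2, the two identities are eps-close.
for y in [2.0, 2.5, 3.0]:
    assert abs(completed(f_val, y, c1) - completed(f_val, y, c2)) < eps
```

No pointwise measurement on that region separates c1 from c2 at this ε, yet the two objects are globally distinct.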


CHAPTER 7 — THE RESTRICTED OBSERVABLES MODEL (ROM)

7.1 Why an Access Model Is Mandatory

Any cryptographic security claim must specify what the adversary can see and do.

Claims about “impossibility” without an access model are meaningless.

We therefore define a Restricted Observables Model (ROM) that formalizes precisely what information is accessible.


7.2 Definition: Restricted Observables Model

An adversary operating under ROM is granted access to:

  1. A finite public transcript T, derived from f
  2. Evaluation oracles for f at chosen points
  3. Verification oracles that check modular consistency
  4. Polynomial-time classical or quantum computation

The adversary is not granted access to:

  • The shadow g
  • The normalization data
  • Any oracle that computes R_g directly
  • Infinite-precision analytic continuation

ROM is not artificial; it mirrors what real implementations expose.
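The access boundary above can be phrased as an interface. In this toy sketch (all identifiers ours), the oracle object exposes evaluation and verification, while the shadow sits in a private attribute that no public method reads back out:

```python
class ROMOracle:
    """Toy Restricted Observables Model boundary (illustrative only)."""

    def __init__(self, transcript, shadow):
        self._shadow = shadow          # completion data: never returned
        self.transcript = transcript   # finite public data: freely readable

    def evaluate(self, point):
        # Evaluation oracle for the holomorphic part only: here, a toy
        # truncated q-series with the transcript as coefficient list.
        return sum(a * point ** n for n, a in enumerate(self.transcript))

    def verify(self, candidate, point):
        # Verification oracle: checks consistency with the public family,
        # never the identity of the completion.
        return abs(candidate - self.evaluate(point)) < 1e-9

oracle = ROMOracle(transcript=[1, 2, 3], shadow=[7, 7])
assert oracle.verify(oracle.evaluate(0.5), 0.5)
```

The design point is that every public method is computable from the transcript alone; `_shadow` influences nothing observable here, mirroring the categorical separation of L and G.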


7.3 What ROM Excludes (and Why That’s Legitimate)

ROM excludes global integration data because:

  • It is not present in the public transcript
  • It cannot be inferred from finite evaluation
  • It would require embedding the private key itself into the oracle

ROM is no more restrictive than the standard black-box models used for RSA, ECC, or lattice schemes.


7.4 ROM and Quantum Power

ROM explicitly allows:

  • Quantum superposition queries over allowed oracles
  • Quantum Fourier sampling on accessible data
  • Quantum linear algebra on transcript representations

ROM does not allow quantum access to non-existent information.

This distinction is essential: quantum speedups do not create information ex nihilo.


CHAPTER 8 — FORMALIZING THE CRYPTOGRAPHIC PROBLEM

8.1 From Inversion to Identification

Classical cryptography asks:

Given y = f(x), find x.

Mock-theta cryptography asks:

Given a transcript T compatible with many identities {I₁, I₂, …}, identify which one is correct.

This is a qualitatively different problem class.


8.2 The Mock Modular Identification Problem (MMIP)

We now define the central hardness assumption family.

MMIP (Identification Form).
Given:

  • A public transcript T derived from a mock modular form f
  • Access to all ROM oracles
  • Knowledge that T is consistent with exactly M completions {F_{g₁}, …, F_{g_M}}

Output the index i of the true completion with non-negligible advantage over random guessing.
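A minimal harness for the identification form, using the toy completion family over GF(p) (parameters and names ours): the challenger fixes one of M completions sharing a public transcript, and an adversary that sees only the transcript can do no better than random guessing.

```python
import random

p, dim, M = 97, 4, 8
rng = random.Random(1)

def rand_vec():
    return [rng.randrange(p) for _ in range(dim)]

transcript = rand_vec()                      # public: shared by all completions
shadows = [rand_vec() for _ in range(M)]     # private: the completion family
completions = [[(t + s) % p for t, s in zip(transcript, sh)] for sh in shadows]

def challenger():
    return rng.randrange(M)                  # secret true index i

def transcript_only_adversary(T):
    return rng.randrange(M)                  # sees only T: forced to guess

trials = 2000
wins = sum(transcript_only_adversary(transcript) == challenger()
           for _ in range(trials))
# advantage over the 1/M baseline should be negligible
assert abs(wins / trials - 1 / M) < 0.05
```

Security in the MMIP sense is precisely the claim that no ROM adversary, classical or quantum, beats this baseline by a non-negligible margin.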


8.3 MMIP (Distinguishability Form)

MMIP-D.
Given two transcripts T₀ and T₁, one derived from a true completion and one from a randomly chosen alternative completion consistent with f, distinguish which is which.


8.4 MMIP (Forgery Form)

MMIP-F.
Given a valid transcript T, produce a distinct transcript T′ that passes verification but corresponds to no valid private completion.


8.5 Why MMIP Is Not Known to Reduce to Existing Problems

MMIP is:

  • Not a group inversion problem
  • Not a lattice short-vector problem
  • Not an isogeny path problem
  • Not a linear system
  • Not an abelian hidden subgroup problem

This does not imply hardness in an absolute sense.
It implies novelty of attack surface.


CHAPTER 9 — SECURITY CLAIMS WE MAKE (AND DO NOT MAKE)

9.1 Claims We Make

We claim that:

  1. Under ROM, transcripts do not canonically determine completion identity.
  2. MMIP admits no known polynomial-time classical or quantum solution.
  3. Known quantum algorithms neither apply directly nor yield exponential advantage.
  4. Non-canonicity survives truncation and discretization.

These claims are testable, falsifiable, and precise.


9.2 Claims We Explicitly Refuse

We do not claim:

  • Information-theoretic secrecy
  • Absolute impossibility
  • Unconditional quantum resistance
  • Reduction to a well-known NP-hard problem

This refusal is not weakness; it is discipline.


9.3 What “Security” Means in This Architecture

Security means:

An adversary cannot collapse structured ambiguity without access to global completion data, even with quantum resources.

Nothing more.
Nothing less.


END OF PART II

What is now fixed:

  • The formal meaning of non-canonicity
  • The local/global information divide
  • The Restricted Observables Model
  • The MMIP hardness family
  • The precise scope of security claims

What comes next:

  • How to encode this structure into finite transcripts
  • How to build verifiers that accept families, not identities
  • How ambiguity survives discretization

PART III — CRYPTOGRAPHIC FORMALIZATION: GAMES, INTERFACES, AND THREAT MODELS


CHAPTER 10 — FROM MATHEMATICAL PHENOMENON TO CRYPTOGRAPHIC OBJECT

10.1 Why Formalization Is the Point of No Return

A mathematical phenomenon becomes cryptography only when it is forced to survive the discipline of formal attack models. This requires translating non-canonicity—originally expressed in analytic and cohomological language—into interfaces, oracles, and games that admit adversarial interaction.

The objective of this part is not to prove security in the classical reductionist sense. It is to define a clean, falsifiable framework in which claims about security can be tested, attacked, and improved.

We therefore proceed by fixing:

  1. What constitutes a public key
  2. What constitutes a private key
  3. What operations are allowed
  4. What it means to “win” as an adversary

Only after these are fixed does it make sense to speak about security.


10.2 Cryptographic Objects and Interfaces

We define a family of schemes collectively denoted Θ-Crypt, parameterized by a tuple of public parameters Π.

Public Parameters Π include:

  • A prime p defining the base field
  • A truncation bound N
  • A weight k
  • A congruence subgroup Γ (or an abstract action set replacing it)
  • A transcript format specification
  • A verifier specification

Π is fixed globally and assumed known to all parties, including adversaries.


CHAPTER 11 — KEYS, TRANSCRIPTS, AND COMPLETIONS

11.1 Public Transcript

The public key in Θ-Crypt is not a single algebraic object. It is a transcript, denoted T, consisting of finite data derived from the holomorphic component f and its consistency constraints.

A transcript T may include:

  • Truncated coefficients a₀,…,a_{N−1} modulo p
  • A finite set of evaluation pairs (τᵢ, f(τᵢ))
  • Seeds for deterministic generation of evaluation points
  • Commitments to consistency operators
  • Auxiliary metadata enabling verification

Crucially, T is verifiable without knowing the private key.
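One way to pin down this transcript shape in code. This is a sketch only; the field names are ours and not a normative wire format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transcript:
    """Finite, public, verifiable data derived from the holomorphic part f."""
    coeffs: tuple        # truncated coefficients a_0..a_{N-1}, reduced mod p
    evaluations: tuple   # finitely many pairs (tau_i, f(tau_i))
    seed: bytes          # deterministic generation of evaluation points
    commitments: tuple   # commitments to consistency operators
    p: int               # base field prime, from public parameters Π
    N: int               # truncation bound, from public parameters Π

T = Transcript(coeffs=(1, 5, 12), evaluations=(), seed=b"\x00" * 16,
               commitments=(), p=97, N=3)
assert len(T.coeffs) == T.N and all(0 <= a < T.p for a in T.coeffs)
```

Freezing the dataclass reflects the requirement that T be deterministic public data: once published, it is immutable and checkable by anyone.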


11.2 Private Completion Data

The private key consists of completion data Y:

  • A shadow selector g (represented discretely)
  • Normalization / rigidification data η
  • Optional auxiliary data enabling efficient completion

Y is never exposed through any public oracle.


11.3 Completion Operator

We define an abstract operator:

Comp(T, Y) → F

such that:

  • F is a completed object consistent with T
  • For fixed T, different Y produce distinct F
  • Verification of F against T is efficient
  • Decomposition of F into (T, Y) is non-unique under ROM

This operator is conceptual, not necessarily implemented explicitly in all schemes.
What matters is that its existence is well-defined.


CHAPTER 12 — VERIFICATION AS A FIRST-CLASS PRIMITIVE

12.1 Verifiers Accept Families, Not Identities

Classical cryptographic verifiers check whether a given object equals a unique expected value.

Θ-Crypt verifiers check something subtler:

Whether a candidate object lies within the admissible completion family determined by the transcript.

This distinction is essential. Verification must not collapse ambiguity.


12.2 Verification Interface

We define a deterministic polynomial-time verifier:

Verify(Π, T, X) → {accept, reject}

where X is a candidate object (e.g., ciphertext component, signature witness, or completed evaluation).

Correctness requires:

  • For the true completion F = Comp(T, Y), Verify accepts
  • For malformed or inconsistent X, Verify rejects
  • For alternative valid completions F′ consistent with T, Verify accepts

This last condition is the architectural heart of the system.


12.3 Consistency Operators

Verification is implemented via consistency operators, which may include:

  • Modular transport checks
  • Convolution invariants
  • Cusp-adjacent stress tests
  • Structured linear or non-linear relations among transcript components

Each operator is designed to be:

  • Efficient to compute
  • Deterministic
  • Blind to the specific completion identity

The verifier is a composition of these operators.
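That composition can be sketched as a fold over independent operators. The checks below are toy stand-ins (all names ours); the structural point is that no operator inspects completion identity, only family membership:

```python
def coeff_range_check(pp, T):
    # every transcript coefficient must be a reduced residue mod p
    return all(0 <= a < pp["p"] for a in T["coeffs"])

def length_check(pp, T):
    # the transcript must respect the truncation bound N
    return len(T["coeffs"]) == pp["N"]

def linear_relation_check(pp, T):
    # a toy structured relation among transcript components
    return sum(T["coeffs"]) % pp["p"] == T["checksum"]

CONSISTENCY_OPERATORS = [coeff_range_check, length_check, linear_relation_check]

def verify(pp, T):
    """Deterministic verifier: accepts iff every consistency operator accepts."""
    return all(op(pp, T) for op in CONSISTENCY_OPERATORS)

pp = {"p": 97, "N": 3}
T_good = {"coeffs": [1, 5, 12], "checksum": 18}
T_bad = {"coeffs": [1, 5, 12], "checksum": 19}
assert verify(pp, T_good) and not verify(pp, T_bad)
```

Because every operator is deterministic and identity-blind, any member of the admissible completion family passes the same checks, which is the non-collapsing behavior required of the verifier.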


CHAPTER 13 — ADVERSARY MODELS

13.1 Computational Power

Adversaries are allowed:

  • Polynomial-time classical computation
  • Polynomial-time quantum computation
  • Superposition access to allowed oracles

We do not restrict adversaries by computational class beyond polynomial bounds.


13.2 Oracle Access

Under the Restricted Observables Model (ROM), adversaries may access:

  • Transcript T
  • Evaluation oracles for f
  • Verification oracles Verify(Π, T, ·)
  • Encryption and decryption oracles, where applicable

Adversaries may not access:

  • Completion operator Comp
  • Shadow data g
  • Any oracle that directly evaluates R_g


13.3 Adaptive Interaction

Adversaries may:

  • Choose queries adaptively
  • Correlate queries across sessions
  • Perform chosen-ciphertext or chosen-message attacks, depending on the scheme

The model is intentionally strong.


CHAPTER 14 — SECURITY GAMES

14.1 Identification Game (MMIP-ID)

Setup:

  • Challenger samples (T, Y)
  • T is given to adversary
  • Y is kept secret

Challenge:

  • Adversary outputs an index i or candidate Y′

Win Condition:

  • Adversary identifies the correct completion with advantage ε over random guessing

Security requires ε negligible in Π.


14.2 Distinguishability Game (MMIP-D)

Setup:

  • Challenger samples T
  • Challenger samples Y₀ (true) and Y₁ (alternative)
  • Chooses b ∈ {0,1} uniformly
  • Computes X = Comp(T, Y_b)

Challenge:

  • X is given to adversary

Win Condition:

  • Adversary outputs b with advantage ε

This game captures indistinguishability of completions.
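The distinguishability game can be run as a coin-flip harness over the toy completion model from earlier (field size, `comp`, and all names ours); a transcript-only distinguisher hovers at advantage ≈ 0.

```python
import random

p, dim = 97, 4
rng = random.Random(7)

def rand_vec():
    return [rng.randrange(p) for _ in range(dim)]

def comp(T, Y):
    # toy completion operator: shifts the transcript by the shadow selector
    return [(t + y) % p for t, y in zip(T, Y)]

def play_round(adversary):
    T = rand_vec()
    Y0, Y1 = rand_vec(), rand_vec()     # true and alternative completions
    b = rng.randrange(2)                # challenger's secret coin
    X = comp(T, (Y0, Y1)[b])
    return adversary(T, X) == b

def transcript_only_adversary(T, X):
    # without Y0/Y1, X is a uniformly distributed shift of T: guess
    return rng.randrange(2)

trials = 2000
wins = sum(play_round(transcript_only_adversary) for _ in range(trials))
assert abs(wins / trials - 0.5) < 0.05   # advantage ≈ 0
```

A scheme is secure in the MMIP-D sense when every efficient ROM adversary, even one given the oracles of Chapter 13, stays within a negligible margin of this 1/2 baseline.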


14.3 Forgery Game (MMIP-F)

Setup:

  • Challenger gives adversary T

Challenge:

  • Adversary outputs T′ ≠ T