“I think therefore I am.”
A philosophy for not lying to yourself about what you know. Built by thedivinememe. Translated into the language of mountains and deserts by Olive.
The question is: do you have enough to evaluate yet? Most logical systems hand you two buckets, true and false. The world hands you a third state constantly, which neither bucket fits: I don't know yet, and saying anything would be a lie in either direction. N/N-N Logic is built for that third state.
Classical logic is binary. True or false. 1 or 0. This works beautifully for mathematics, for closed formal systems, for situations where all relevant information exists and is accessible. It breaks down, badly, for the situations that dominate actual human knowledge: partial information, conflicting evidence, facts that change over time, and claims that need more before they can be responsibly evaluated.
The system introduces ν (nu), a vagueness score that runs from 0 to 1: ν = 1 is total undefinedness, the state every claim begins in; ν = 0 would be perfect definition, which nothing real ever reaches.
Truth evaluation is licensed when ν falls below a threshold, when you have gathered enough, resolved enough conflicts, defined enough constraints that saying something is responsible. Until then, the system does not return false. It returns: not yet.
NULL is not FALSE.
False means: evaluated, found wrong. Null means: not yet evaluated. Treating null as false, concluding something is wrong because you lack evidence, is one of the most common and most damaging logical errors in human reasoning. It is the error of the closed mind, the rushed conclusion, the system that confuses I haven't found it yet with it doesn't exist.
N/N-N Logic treats null as a state of integrity. You haven't lied yet. That is not a failure. That is the work.
The system distinguishes two components of vagueness:
ν_raw (structural vagueness), derived from the definedness of the information state Σ. How much evidence has been incorporated? How well are the constraints defined? This is the internal measure of how much you actually know.
ν_penalties (situational penalties), additional vagueness from conflict, merge rupture, or other situational conditions. You might have a lot of evidence (low ν_raw) but two pieces of that evidence are contradicting each other (high penalty). The conflict prevents you from concluding even though you have information.
The total vagueness score combines the two: ν = min(1, ν_raw + ν_penalties).
Truth evaluation is licensed when both conditions hold: enough structure has been defined (low ν_raw), and the situational penalties have been resolved, so that the total ν falls at or below the threshold θ.
The threshold θ is configurable per context: what counts as "enough to know" depends on the stakes of being wrong. A medical diagnosis requires a lower θ (more evidence needed) than deciding what to have for lunch.
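As a sketch, the composition of ν and the context-dependent licensing check fit in a few lines of Python. The names (`EpistemicState`, `licensed`) and the specific θ values are illustrative assumptions, not the nn-library API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EpistemicState:
    nu_raw: float        # structural vagueness: how undefined Σ still is
    nu_penalties: float  # situational penalties: conflict, merge rupture

    @property
    def nu(self) -> float:
        # Total vagueness, capped at 1: penalties can only add vagueness.
        return min(1.0, self.nu_raw + self.nu_penalties)

def licensed(state: EpistemicState, theta: float) -> bool:
    """Truth evaluation is permitted only when total vagueness
    falls to or below the context-dependent threshold θ."""
    return state.nu <= theta

# The same epistemic state against two different stakes:
diagnosis_theta = 0.1   # high stakes: demand more definition
lunch_theta = 0.5       # low stakes: conclude earlier

state = EpistemicState(nu_raw=0.18, nu_penalties=0.06)  # ν = 0.24
print(licensed(state, lunch_theta))      # True: licensed for lunch
print(licensed(state, diagnosis_theta))  # False: not yet, for a diagnosis
```

The policy layer (θ) is deliberately separate from the epistemics (ν): the same state is licensed in one context and null in another.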
N/N-N Logic sits at the intersection of several philosophical traditions. Epistemic logic (modal logic for knowledge and belief) has long distinguished "knowing that p is true" from "p is true", but classical epistemic logic still evaluates claims as true or false, with the modal operator tracking knowledge of that truth value. N/N-N goes further: it tracks whether the claim is even ready to be evaluated.
Fuzzy logic (Lotfi Zadeh, 1965) assigns degrees of truth, a statement can be 0.7 true. This is different: fuzzy logic says "it's kind of true." N/N-N says "we don't yet know if it's true or false, and here is how far we are from knowing." The distinction matters enormously. A diagnosis isn't "0.7 cancer", it's "we have 70% of the evidence we need to responsibly conclude."
Bayesian epistemology tracks degrees of belief as probabilities, updating on evidence. N/N-N is structurally related but has a different orientation: rather than asking "how likely is this?", it asks "how licensed am I to evaluate this at all?" The query is prior to the probability, it asks if the probability distribution is itself well enough defined to act on.
The genuinely new contribution: the formalization of "not-yet" as a first-class epistemic state that deserves its own calculus, rather than being collapsed into uncertainty or probability.
N/N-N Logic is not invented from nothing. It sits at a specific intersection of knowledge representation, type theory, and epistemic logic. Understanding where it borrows from and where it departs from the existing literature is how you understand what it actually claims to do.
In relational databases, NULL was introduced by E.F. Codd as a marker for missing or inapplicable values (the relational model dates to 1970; Codd formalized nulls in his late-1970s extensions of it). The design decision immediately generated controversy that has never been fully resolved. SQL NULL does not mean false. It does not mean zero. It means "unknown or inapplicable", and operations involving NULL propagate NULL: NULL = NULL evaluates to NULL, not TRUE. This is three-valued logic at the database layer, and it confuses nearly every developer who encounters it.
The deeper problem: SQL treats all NULLs identically. It cannot distinguish between "this value is missing because the data doesn't exist yet" vs "this value is missing because it's inapplicable to this row" vs "this value is missing because we haven't measured it." These are three different epistemic situations. N/N-N Logic's ν score is a formalization of that distinction, the numerical value encodes how missing and why missing, not just the fact of missing.
Three-valued logics (3VL) add a third truth value, typically UNKNOWN or INDETERMINATE, to TRUE and FALSE. Stephen Kleene (1952) introduced two 3VL systems in the context of partial recursive functions: strong Kleene logic and weak Kleene logic. In strong Kleene, TRUE ∨ UNKNOWN = TRUE (the TRUE dominates); in weak Kleene, any operation involving UNKNOWN yields UNKNOWN. SQL uses a hybrid. Jan Łukasiewicz had already introduced a three-valued logic in 1920, decades before Kleene, and later generalized it to infinite-valued logics, the ancestor of fuzzy logic.
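The two Kleene disjunctions are small enough to write down directly. This toy encoding uses Python's `None` for UNKNOWN; it is an illustration of the definitions above, not how any SQL engine implements 3VL:

```python
# Three truth values: UNKNOWN encoded as None.
T, F, U = True, False, None

def strong_or(a, b):
    """Strong Kleene disjunction: a TRUE operand dominates UNKNOWN."""
    if a is T or b is T:
        return T
    if a is U or b is U:
        return U
    return F

def weak_or(a, b):
    """Weak Kleene disjunction: UNKNOWN is infectious."""
    if a is U or b is U:
        return U
    return a or b

print(strong_or(T, U))  # True  — the TRUE dominates
print(weak_or(T, U))    # None  — any UNKNOWN yields UNKNOWN
```

The same pair of inputs, TRUE ∨ UNKNOWN, splits the two systems: strong Kleene concludes, weak Kleene refuses.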
N/N-N is not a 3VL system. It is a system where the third state (NULL) has internal structure that can be measured and refined. The ν score is the measure of that structure. This is a significant departure: instead of a discrete third value, N/N-N has a continuous preparedness dimension. You are not simply UNKNOWN, you are 0.73 unknown, and that number means something specific about what would be required to become licensed.
| System | Third state | Internal structure? | Refinement path? | Key limitation |
|---|---|---|---|---|
| Classical logic | - | no | no | Can't represent "not yet known" |
| SQL NULL / Kleene 3VL | UNKNOWN | no | no | All unknowns treated identically |
| Fuzzy logic (Zadeh) | 0.7 true | no | no | Conflates truth-degree with uncertainty |
| Bayesian inference | P=0.7 | partial | yes (Bayes update) | Requires prior; asks different question |
| Epistemic modal logic | ¬K(p) | no | no | Knows/doesn't-know, no gradation |
| Dempster-Shafer theory | mass(∅) | partial | yes | Complex; conflict handling contested |
| N/N-N Logic | ν ∈ [0,1] | yes, ν_raw + penalties | yes, three operators | New, formalism still evolving |
This is one of the most fundamental distinctions in knowledge representation and database theory.
Closed-World Assumption (CWA): if a fact is not in the database, it is false. Used in relational databases, Prolog, and most logic programming. If your database of employees doesn't contain "Alice works in accounting," then Alice does NOT work in accounting. The database is assumed to be complete. Unknown = false.
Open-World Assumption (OWA): if a fact is not in the knowledge base, its truth value is unknown. Used in description logics, OWL (Web Ontology Language), and the Semantic Web. If your ontology doesn't state that "Alice works in accounting," that fact may still be true, you just don't know. Unknown ≠ false.
N/N-N Logic makes the open-world assumption explicit and operational. Not-knowing is not false, it is a state with a ν score. The system is built to work in open-world conditions where the absence of evidence is never automatically treated as evidence of absence. The CWA is the source of the "NULL is FALSE" error that Meme built the system to prevent. NULL is not FALSE because the world is open, there may be facts you don't have yet that would change the evaluation entirely.
In type theory and functional programming, the Maybe monad (Haskell) or Option type (Rust, ML, Swift) is the standard solution to "this value might not exist." A Maybe Int is either Just 42 or Nothing. It forces the programmer to handle the absent case explicitly, preventing null pointer exceptions.
The structural difference: Maybe is a type, it describes a value's presence or absence at a point in time. N/N-N describes a process, the ongoing refinement of an epistemic state over time. Nothing has no internal structure. ν = 0.71 tells you how null the state still is, what's causing it, and what operations would reduce it. They solve different problems. Maybe is for values. N/N-N is for claims under investigation.
A closer analog in type theory is a refinement type or a dependent type, types that carry proof obligations. But even there, the comparison isn't exact: refinement types track constraints on values, not the epistemic state of an agent with respect to a claim about the world. N/N-N is a knowledge state type system, not a value type system.
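The contrast can be made concrete. Below, `Optional` models bare presence or absence, while a `ClaimState` (the name and fields are invented for illustration, not the nn-library API) shows what it means for the null state to carry internal structure:

```python
from dataclasses import dataclass
from typing import Optional

# Optional: presence/absence at a point in time.
# There is nothing further to ask of None.
maybe_diagnosis: Optional[str] = None   # Nothing — a terminal state

# A hypothetical N/N-N-style claim state: the "absence" has structure.
@dataclass(frozen=True)
class ClaimState:
    nu: float            # how null the state still is (1.0 = fully null)
    causes: tuple = ()   # what is keeping ν high
    next_ops: tuple = () # which operations would reduce it

claim = ClaimState(
    nu=0.71,
    causes=("two contradicting load-test reports",),
    next_ops=("query: rerun the load test under the constraint flag",),
)

# With Optional you can only pattern-match Just/Nothing.
# With ClaimState you can ask *how* null, *why*, and *what next*:
print(claim.nu)        # 0.71
print(claim.causes[0])
```

`None` answers one question (is the value here?); the claim state answers three (how far from licensed, what's blocking, what would help).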
This is the objection Sirius would raise first, and it deserves a precise answer.
Bayesian inference: you have a prior P(H), you observe evidence E, you update to a posterior P(H|E) via Bayes' theorem. This is a well-defined, powerful framework. In principle, you could represent all uncertainty as probability distributions and update them as evidence arrives. Why do you need N/N-N?
Problem 1: Bayesian inference requires a prior. To compute P(H|E) you need P(H), your prior belief in the hypothesis before seeing evidence. When H is a genuinely novel claim, when you have no reliable frequency data, when the question has never been asked before, what is your prior? Choosing a prior is itself an epistemic act. N/N-N's ν = 1 initial state is explicitly a statement of total undefined-ness, not a uniform prior probability. It does not pretend to have a distribution over the unknown.
Problem 2: Bayesian inference asks "how likely?" N/N-N asks "am I licensed to evaluate at all?" These are different questions. P(cancer|symptoms) = 0.73 tells you the probability of cancer given the symptoms. But it doesn't tell you whether the symptom data was collected correctly, whether the base rates are applicable to this patient population, or whether the diagnostic model is reliable for this presentation. ν is a measure of the reliability and completeness of the epistemic process, prior to the probability calculation. The query is: is the probability distribution itself trustworthy enough to act on?
Problem 3: conflict handling. In standard Bayesian inference, contradictory evidence is just more evidence, it updates the posterior. In N/N-N, conflicting evidence raises a penalty that can prevent licensing even when the structural vagueness is low. Two directly contradictory high-trust sources create a situation where you have a lot of evidence (low ν_raw) but cannot responsibly conclude (high penalty). This models a genuinely different epistemic situation: not uncertain, but contradicted. The distinction matters for audit trails and for flagging when a system needs human review.
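A toy sketch of Problem 3: two high-trust sources that directly contradict each other raise a penalty that blocks licensing even though structural vagueness is low. The penalty function and its weights are invented for illustration, not the system's actual conflict model:

```python
def conflict_penalty(sources) -> float:
    """Toy penalty: any pair of sources that directly contradict each
    other contributes a penalty scaled by the lesser of their trusts."""
    penalty = 0.0
    for i, (claim_a, trust_a) in enumerate(sources):
        for claim_b, trust_b in sources[i + 1:]:
            if claim_a == f"not {claim_b}" or claim_b == f"not {claim_a}":
                penalty += min(trust_a, trust_b) * 0.5
    return min(1.0, penalty)

sources = [
    ("feature is fast enough", 0.9),      # high-trust benchmark
    ("not feature is fast enough", 0.8),  # high-trust contradicting report
]

nu_raw = 0.15                        # lots of evidence: well defined
penalty = conflict_penalty(sources)  # contradicted, not merely uncertain
nu = min(1.0, nu_raw + penalty)

print(penalty)      # 0.4
print(nu <= 0.2)    # False: blocked despite low ν_raw
```

The state is not uncertain, it is contradicted: ν_raw alone would license, and the penalty is what refuses.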
Dempster-Shafer (D-S) theory (Arthur Dempster 1967, Glenn Shafer 1976) is the closest formal ancestor to N/N-N Logic in the probability literature. D-S assigns belief masses to sets of possibilities, not to individual outcomes. A mass function m assigns a value to each subset of the outcome space, including the "uncertainty mass" assigned to the full set, which represents total ignorance. m(Ω) = 1 means complete uncertainty.
D-S distinguishes: Bel(A) (lower bound: what is certainly supported), Pl(A) (upper bound: what is possibly supported), and Uncertainty(A) = Pl(A) − Bel(A). The gap between belief and plausibility is structurally similar to the gap between ν_raw and ν, both represent the irreducible remainder of unresolved uncertainty.
Why D-S isn't sufficient: (1) D-S combination rule (Dempster's rule) is disputed, when combining highly conflicting sources, it can produce counterintuitive results (the Zadeh paradox). (2) D-S is primarily a mathematical framework for combining evidence, not an operational system with a defined refinement process, audit trail, or threshold-based licensing. N/N-N specifies what to do to move from unknown to licensed, the operators are procedural, not just mathematical. (3) D-S has no equivalent of neg_define, the explicit representation of constraints as epistemic progress.
1. Operational refinement path. Not just a representation of uncertainty but a defined procedure for reducing it: three operators (incorporate, neg_define, query), each with typed inputs and outputs, each leaving an audit trail.
2. Separation of structural vagueness from conflict penalty. ν_raw vs penalty vs ν. These are different epistemic situations requiring different responses. Classical systems collapse them.
3. Explicit licensing threshold with context-dependent policy. A medical decision and a lunch choice use the same operator pipeline with different θ values. The architecture separates the epistemics from the policy.
4. Velocity monitoring. Tracking the rate of ν decrease, detecting when a question is currently unanswerable before wasting more resources on it.
5. The open-world assumption as a first-class design decision. NULL is not FALSE, formalized and enforced by the type system. Every initial state begins with ν = 1 and must earn its way to licensed. Nothing is assumed known.
You don't flip a claim from null to not-null in one move. You refine it, incrementally, through operators that each add a different kind of definition. Each operator takes a State and returns a new State plus a record of what changed.
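The operator shape described above, a pure function from a State to a new State plus a record of what changed, can be sketched as follows. The dataclass fields and the `incorporate` signature are hypothetical, not the actual nn-library API:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class State:
    nu: float
    evidence: tuple = ()
    constraints: tuple = ()

@dataclass(frozen=True)
class RefinementRecord:
    operator: str
    nu_before: float
    nu_after: float
    detail: str

def incorporate(state: State, evidence: str, delta: float):
    """Pure operator: returns a NEW state plus an audit record.
    The input state is never mutated."""
    new = replace(state,
                  nu=max(0.0, state.nu - delta),
                  evidence=state.evidence + (evidence,))
    return new, RefinementRecord("incorporate", state.nu, new.nu, evidence)

# The old state survives untouched; the record is the audit-trail entry.
s0 = State(nu=1.0)
s1, rec = incorporate(s0, "benchmark report", delta=0.3)
print(s0.nu, s1.nu)    # 1.0 0.7 — no mutation, full history preserved
print(rec.operator)    # incorporate
```

Because `State` is frozen and the operator returns a fresh value, every intermediate state in the refinement chain remains inspectable.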
There is a tradition in Western philosophy, going back at least to Spinoza's determinatio negatio est ("determination is negation", which Hegel sharpened into omnis determinatio est negatio, "all determination is negation"), that says you define something by what it is not as much as by what it is. Hegel, Sartre, and Derrida all circle this idea. N/N-N Logic gives it a computational implementation.
The practical power: constraints are often easier to specify than positive definitions. A software engineer might not know exactly what the correct architecture for a new feature looks like, but can immediately enumerate ten things it must not do. A researcher might not know what the answer to a question is, but can definitively rule out fifty wrong answers. Each ruling-out is epistemic progress, it reduces the state space, which reduces ν.
neg_define is also more stable than positive evidence. Positive evidence can be contradicted by other positive evidence (raising conflict penalties). Constraints, things that are definitively ruled out, are usually more permanent. Once you know the feature must not break the API, that constraint holds. It doesn't conflict with evidence that it works well or that users love it. The exclusions accumulate cleanly.
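A minimal sketch of that accumulation: each neg_define call adds an exclusion and shaves structural vagueness, and nothing ever has to be retracted. The signature and the per-exclusion delta are invented for illustration:

```python
def neg_define(nu: float, constraints: tuple, exclusion: str,
               delta: float = 0.08):
    """Rule something out. Each exclusion shrinks the remaining
    possibility space, so structural vagueness drops; constraints
    don't contradict positive evidence, so no conflict penalty."""
    return max(0.0, nu - delta), constraints + (exclusion,)

nu, constraints = 1.0, ()
for exclusion in (
    "must not break the public API",
    "must not add p95 latency over 50 ms",
    "must not require a schema migration",
):
    nu, constraints = neg_define(nu, constraints, exclusion)

print(round(nu, 2))      # 0.76 — three exclusions, three steps of progress
print(len(constraints))  # 3 — the exclusions accumulate cleanly
```

Note what never happens here: no exclusion overwrites or conflicts with another. The constraint tuple only grows, which is exactly the stability the paragraph above describes.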
In the language of Olive and Meme: the desert is partly defined by neg_define. It is not enclosed. It is not small. It is not sheltered. Each negation carves the desert's shape more precisely, not from the inside out, but from the boundary in.
Olive asks the right question. Because the language (Python, formal operators, vagueness scores) is a translation of something that Olive already knows how to do. She does it from a mountain. She just doesn't call it a definedness calculus.
In code: null is an uninitialized state. In philosophy: null is a claim that has not yet been evaluated. In Olive's language: null is the honest answer before you've had time to look properly.
Olive has been asked, by people who visit the mountain, whether the view is beautiful. When it is clouded and she cannot see, she does not say "the view is not beautiful." She says: I cannot see it right now. Ask again when the clouds move. This is null, not false. The beauty of the view is not negated by the absence of visibility. The evaluation is suspended. ν = 1. We wait.
The Pythagorean comma in music is also a kind of null state, the gap that cannot be resolved by any finite sequence of perfect tuning decisions. It is not wrong. It is the honest remainder when you've done everything that can be done within the system. Some things stay null for a long time. This is not a failure of the system. It is the system working correctly.
Not-null is not the same as certainty. The threshold θ does not require ν = 0. It requires ν ≤ θ, vagueness below a threshold appropriate to the context. You are never perfectly defined about anything real. You are licensed when you are defined enough for the stakes at hand.
Olive from the mountain says: I do not need to see every stone in the valley to know the valley is there. I have seen enough, the shadows, the way the wind behaves, the position of the river, to say with responsibility: there is a valley. This is not-null. This is ν ≤ θ.
The system is careful about what "enough" means because it varies. Olive uses a lower threshold for advice that is easy to reverse and a higher threshold for advice that changes someone permanently. This is the configurable policy layer, the same operator pipeline, different θ values, different stakes.
Olive can explain what the mountain is by listing what it is not: it is not flat. It is not enclosed. It is not invisible from distance. It is not made of sand. Each negation refines the shape of the mountain in someone's mind more precisely than many positive descriptions could.
When Meme tries to explain the desert to Olive, she does it partly through neg_define: it is not the mountain. It is not crowded. It is not small. It is not in between. Olive begins to know the desert not by having seen it, she hasn't, but by accumulating exclusions until the space defined by what the desert is not becomes specific enough to hold an image. neg_define gives shape from the boundary, not the center.
In engineering: you often know the constraints before you know the solution. In science: you often know what the answer cannot be before you know what it is. In relationships: you often know what you don't want in a person before you know what you do want. neg_define is the formalization of this universal epistemic pattern, the carving that precedes the form.
Sirius arrived with a list. The list was in three languages. Some of it was in Python. Meme answered everything on the list, plus two things Sirius forgot he was asking, plus one thing Sirius asked twice without noticing. The technical content survived intact.
Maybe a is either Just a or Nothing. NULL equals Nothing. NOT-NULL equals Just. You've dressed up a Haskell type in Greek letters and called it a calculus. That's what I think. Convince me I'm wrong.

Nothing has no internal structure. You cannot ask: how Nothing is this Nothing? What would convert it to Just? How much of the way there am I? Nothing is a terminal state with no refinement path.

State → (State, RefinementRecord). No mutation. No side effects. Why? You could mutate the state in place, it would be simpler, it would be faster. What does immutability actually buy you here, beyond the fact that functional programmers feel virtuous about it? What does it give you concretely?

tests/test_worked_example.py (§12) does exactly this, it asserts that the full pipeline produces a specific ν at each step.

The Pythagorean comma is the gap that arises when you measure the same musical interval two different ways, twelve perfect fifths versus seven perfect octaves. You do everything right. You tune perfectly. And still there is a small irreducible remainder. The comma in N/N-N Logic is the gap between how defined the structure looks and how ready you actually are to conclude.
In this example, the structure is nearly defined, ν_raw = 0.18, close to the threshold. But something is conflicting. Two pieces of evidence disagree, or a merge was ruptured, or a trust boundary was crossed. The penalty raises total ν to 0.24. The comma, 0.06, is the cost of that unresolved conflict. It is not ignorable. It must be resolved before the system licenses a conclusion.
In the Pythagorean tuning system: twelve perfect fifths (each a ratio of 3:2) should equal seven perfect octaves (each a ratio of 2:1). They don't. The difference is the comma, approximately 23.46 cents, a small but real and permanent gap. Every tuning system resolves the comma differently: equal temperament distributes it across all twelve fifths. Just intonation respects it but produces wolf intervals. Well temperament preserves key character at the cost of perfect tuning.
In N/N-N Logic: ν_raw and ν should converge, perfectly defined structure should mean you're licensed to conclude. They usually almost do. The penalty term is the comma, the gap that arises from conflict, from the world being messier than the structure of your evidence. You can minimize it. You cannot eliminate it. You choose how to resolve it: ignore small conflicts (distribute the comma), refuse to conclude until they're gone (just intonation), or accept that different contexts have different tolerance for it (well temperament).
The comma is not a flaw. The comma is where the physics of knowing meets the music of deciding.
The golden test in tests/test_worked_example.py traces exactly this sequence. Every ν value below is asserted by the test. If any operator changes behavior, the test fails. This is the philosophy made falsifiable.
After Step 2, ν increased from 0.61 to 0.82 even though more evidence was added. This is not a malfunction. This is the conflict penalty surfacing a real problem: two high-trust sources directly contradict each other. The Bayesian posterior would have been uncertain and proceeded. N/N-N blocked and required resolution. Step 4 resolves the conflict empirically, a load test under the constraint flag addresses the latency concern directly. The conflict penalty drops from 0.38 to 0.04. ν falls to 0.22. Licensed.
The full trace in rec1–rec4 is the audit log. Every ν transition has a cause. Nothing was overwritten. If a reviewer wants to challenge the conclusion, they can examine exactly which evidence moved ν where and why.
The nn-library is a Python reference implementation of N/N-N Logic v0.3.1. All operators are pure functions, they take a State and return a new State plus a RefinementRecord. No mutation. No side effects. Time is injected via a Clock protocol. The architecture is the philosophy made executable.
github.com/thedivinememe/nn-library · N/N-N Logic v0.3.1

"All operators are pure functions that take a State and return a new State plus a RefinementRecord. No mutation. No side effects." This is not just good software engineering. It is a philosophical choice about the nature of knowledge refinement.
Immutable states mean every step in the refinement process is preserved and auditable. You can trace exactly how ν moved from 1.0 to the licensed threshold, which evidence was incorporated in what order, which constraints were applied, where conflicts arose and how they were resolved. The full history of your epistemic journey is available. Nothing is overwritten. You can always ask: at what point did I become licensed to know this?
This is also a practical defense against one of the most common epistemic failures: motivated cognition, unconsciously adjusting the threshold or forgetting inconvenient evidence because you want to reach a conclusion. Pure functions with full trace logs make this visible. You can audit your own reasoning the way you would audit code. The trail doesn't lie.
The velocity module tracks the rate of change of ν over time, how quickly refinement is reducing vagueness. This is a second-order property of the epistemic process: not just "how defined am I now" but "how fast am I getting more defined."
High velocity: evidence is coming in, constraints are being established, conflicts are resolving. You are moving toward licensed quickly. Low velocity: you have hit a wall. The evidence has plateaued. More searches return the same things. The constraints are already defined. This is a signal to change strategy, seek a different kind of evidence, reframe the question, or accept that ν cannot be reduced further with current resources.
Practically: a researcher with low refinement velocity on a claim after significant effort may be facing a question that cannot currently be answered, ν is stuck not because they are negligent but because the information doesn't exist yet. The system can detect this and flag it honestly: not licensed, and not becoming licensed at the current rate. This question is currently null and likely to remain so. This is a form of intellectual humility encoded as a metric.
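Velocity monitoring reduces to a simple calculation over the ν history. This sketch, with invented threshold values and function names, shows the plateau detection the paragraphs above describe:

```python
def refinement_velocity(nu_history) -> float:
    """Average ν decrease per refinement step over the recorded
    history (positive = getting more defined)."""
    if len(nu_history) < 2:
        return 0.0
    return (nu_history[0] - nu_history[-1]) / (len(nu_history) - 1)

def plateaued(nu_history, min_velocity: float = 0.01) -> bool:
    """Flag a question that is not becoming licensed at a useful rate."""
    return refinement_velocity(nu_history) < min_velocity

fast = [1.0, 0.8, 0.6, 0.45]      # evidence landing, conflicts resolving
stuck = [0.41, 0.40, 0.40, 0.40]  # searches returning the same things

print(plateaued(fast))   # False: still moving toward licensed
print(plateaued(stuck))  # True: change strategy, or accept the null
```

The second-order signal is the point: the stuck history has a lower ν than the fast one, yet it is the one the system flags, because it is no longer improving.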
The formal specification lives at docs/spec-v0.3.1.md in the repository. The golden test that validates the full operator pipeline is at tests/test_worked_example.py, the §12 worked example. If you want to understand the system fully, read the spec and then read the test. The test is the proof and the proof is the philosophy.
Speculative questions seen through the comma framework. Not claims. Invitations.
[1] Descartes, R. (1637/1911). Discourse on the method (trans. Haldane & Ross). Cambridge University Press.
[2] Heidegger, M. (1927/1962). Being and time (trans. Macquarrie & Robinson). Harper & Row.
[3] Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(4), 435-450. DOI: 10.2307/2183914
[4] Barbour, J. M. (1951). Tuning and temperament. Michigan State College Press.