In formal logic, what is the difference between ‘proof’ and ‘entailment’?

In logic we try to avoid relying on natural language alone to describe things, because it quickly gets confusing. I presume that here you are differentiating between:

* Syntactic consequence (⊢, ‘proves’, TeX `\vdash`, single turnstile)
* Semantic consequence (⊨, ‘entails’, TeX `\models`, double turnstile)

I’ll try to ELI5 as much as I can, but note that a typical course on formal logic is beyond the introductory courses of a math degree.

Formal logic is studied in terms of ‘well-formed formulas’, which, to oversimplify, are the mathematical expressions that ‘make sense to us’. We want things like `a+b` or `P→Q`, and not `ab+` or `→PQ`. There is a proper recursive definition, but it really is just to make sure that we only deal with stuff that makes sense.
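To make ‘well-formed’ concrete, here is a minimal sketch in Python (the tuple representation and the names are my own for illustration, not standard) for a tiny propositional language with only atoms, ‘not’ and ‘→’. A formula is well-formed exactly when it can be built by those rules:

```python
# A minimal sketch of 'well-formed', assuming a tiny propositional language with
# only atoms, "not" and "->". Formulas are hypothetical nested tuples: an atom is
# a string like "P", and compound formulas are ("not", A) or ("->", A, B).

def is_wff(formula):
    if isinstance(formula, str):                        # an atom such as "P"
        return True
    if isinstance(formula, tuple):
        if len(formula) == 2 and formula[0] == "not":
            return is_wff(formula[1])
        if len(formula) == 3 and formula[0] == "->":
            return is_wff(formula[1]) and is_wff(formula[2])
    return False                                        # anything else is rejected

print(is_wff(("->", "P", "Q")))           # True:  P -> Q
print(is_wff(("->", ("not", "P"), "Q")))  # True:  (not P) -> Q
print(is_wff(("->", "P")))                # False: -> is missing an argument
```

(The sketches further down reuse this same tuple representation.)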

Syntactic consequence comes from syntax: manipulating symbols according to the rules of a system in order to prove something. A system here is made up of axioms and deduction rules. (And by symbols here we really mean *meaningless symbols*, as we do not ascribe any truth value to them.) We say that `Γ ⊢ φ` (Gamma proves phi) if there exists some finite sequence of formulas, ending in `φ`, where each term is in `Γ`, is an axiom, or is deduced from previous terms by the rule(s) of the system. In other words, we can produce a **step-by-step derivation** of `φ` from our hypotheses (the starting premises) and from what we have already derived, just by doing rule-based replacements. For examples, see Wikipedia pages such as Double negation or Hypothetical syllogism.
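As a toy illustration (not any standard proof system), here is a sketch of that definition assuming modus ponens is the only deduction rule and there are no axioms: a derivation from `Γ` is a list of formulas where every line is a hypothesis or follows from two earlier lines, and it proves whatever its last line is.

```python
# A toy derivation checker, assuming modus ponens is the only rule and there are
# no axioms. Formulas are the hypothetical nested tuples from the sketch above:
# "P", ("->", "P", "Q"), and so on.

def follows_by_modus_ponens(line, earlier):
    """True if some earlier lines are A and ("->", A, line)."""
    return any(
        isinstance(imp, tuple) and imp[0] == "->" and imp[1] in earlier and imp[2] == line
        for imp in earlier
    )

def is_derivation(gamma, phi, lines):
    """Check that `lines` is a step-by-step derivation of phi from hypotheses gamma."""
    for i, line in enumerate(lines):
        earlier = lines[:i]
        if line in gamma or follows_by_modus_ponens(line, earlier):
            continue                                  # this line is justified
        return False                                  # an unjustified line: not a derivation
    return bool(lines) and lines[-1] == phi           # and it must end in phi

# Example: {P, P -> Q, Q -> R} proves R, step by step.
gamma = ["P", ("->", "P", "Q"), ("->", "Q", "R")]
steps = ["P", ("->", "P", "Q"), "Q", ("->", "Q", "R"), "R"]
print(is_derivation(gamma, "R", steps))               # True
```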

Semantic consequence comes from *truth*. This is how we’d usually think of things in math (e.g. we know that if `not (P → Q)`, then `P` must be true and `Q` must be false). It’s a little more complicated here because the ⊨ symbol is overloaded. In propositional logic, the simplest form of logic (and the one that is introduced earliest), we say that `Γ ⊨ φ` (Gamma entails phi) if, no matter how we assign truth values (i.e. true or false) to the symbols, whenever all the formulas in `Γ` are true, `φ` is also true. The emphasis here is on the **assignment** or **valuation**. We can actually easily (but not quickly) verify an entailment by considering every possible assignment, i.e. computing the *truth table*, which is just brute force. In first-order logic, which allows `∀` (for all) and `∃` (there exists) in its formulas, semantic consequence requires talking about structures and valuation functions, which I won’t go into. But the idea is the same: if the left side is satisfied, then the right side is satisfied.
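Here is a sketch of that brute-force check, again using the hypothetical tuple formulas from above: enumerate every assignment of true/false to the atoms and look for a countermodel where everything in `Γ` is true but `φ` is false.

```python
from itertools import product

# Brute-force semantic consequence for propositional logic: Gamma |= phi iff no
# truth assignment makes everything in Gamma true while making phi false.
# Formulas are the hypothetical nested tuples from the earlier sketches.

def evaluate(formula, assignment):
    if isinstance(formula, str):                       # an atom such as "P"
        return assignment[formula]
    if formula[0] == "not":
        return not evaluate(formula[1], assignment)
    if formula[0] == "->":
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown connective: {formula[0]!r}")

def atoms(formula):
    if isinstance(formula, str):
        return {formula}
    return set().union(*(atoms(part) for part in formula[1:]))

def entails(gamma, phi):
    """Gamma |= phi, checked by computing the whole truth table."""
    names = sorted(set().union(atoms(phi), *(atoms(g) for g in gamma)))
    for values in product([True, False], repeat=len(names)):
        assignment = dict(zip(names, values))
        if all(evaluate(g, assignment) for g in gamma) and not evaluate(phi, assignment):
            return False                               # found a countermodel
    return True

# Example: {P, P -> Q} |= Q holds, but {P -> Q} |= Q does not.
print(entails(["P", ("->", "P", "Q")], "Q"))   # True
print(entails([("->", "P", "Q")], "Q"))        # False
```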

The relationship between syntax and semantics is fundamental in formal logic. We want to know:

* Soundness: If `Γ ⊢ φ`, then `Γ ⊨ φ`. Are all the things that we can prove in fact true?
* Completeness: If `Γ ⊨ φ`, then `Γ ⊢ φ`. Can all true things be proven?

It turns out that in both propositional logic and first-order logic, we have both these properties.
