if you're actually curious, i can speak a bit about buddhist three value logic, where 'unknowable' is a proper state and has information
I am genuinely curious, please elaborate.
If you don't mind. I would like that. I feel so uneducated.
Even a link to something. I like logic. I guess I don't know what it is...
You shouldn't feel uneducated, I don't think familiarity with formal logic is particularly common.
Logic is essentially the study of the structure of arguments. Arguments generally consist of a number of propositions (typically called premises), which are claimed to entail another statement, called the conclusion. The chief concern of the logician is whether a given conclusion follows from the given premises or not. An argument is said to be valid if, whenever all the premises are true, the conclusion must be true as well.
An example of a valid argument is as follows:
1. If it is raining, then it is wet outside.
2. It is raining.
Therefore:
C. It is wet outside.
In propositional logic we use the letters 'P', 'Q', 'R'... to represent propositions. In constructing a formal proof of the valid argument just given, we will use P to stand for 'it is raining' and Q to stand for 'it is wet outside'. Whilst explaining the rest of the syntax of the propositional calculus I will take P and Q to stand for these propositions.
In order to express 'if...then...' statements, we place '-->' between the two propositional variables, thus, in order to represent the statement 'if it is raining, then it is wet' we shall write P-->Q. Such a statement is referred to as a conditional proposition. We refer to the proposition before '-->' (in this case P) as the antecedent and we refer to the proposition after '-->' (in this case Q) as the consequent.
In order to express that a proposition is not the case, we use the negation sign '~'. So, 'it is not the case that it is raining' may be expressed as ~P.
If we want to express that more than one proposition is true, we use '&' as the connective for 'and'. Thus to say 'it is raining and it is wet outside' we will write P & Q. This proposition is called the conjunction of P and Q, and P and Q are the conjuncts of the conjunction P & Q. We can also express the statement 'it is not raining and it is wet outside' as ~P & Q.
As polymath mentioned previously, 'or' statements in logic are standardly inclusive. Thus, to say 'it is raining or it is wet outside' does not exclude the possibility that it is both raining and wet outside. We use 'v' as the connective for 'or', so in order to express 'it is raining or it is wet outside' we write P v Q.
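Since each of these connectives is truth-functional, they can be modelled directly as small Python functions (a sketch; the function names here are my own, not standard notation):

```python
from itertools import product

# Truth-functional definitions of the four connectives.
def not_(p):        return not p            # ~P
def and_(p, q):     return p and q          # P & Q
def or_(p, q):      return p or q           # P v Q (inclusive)
def implies(p, q):  return (not p) or q     # P --> Q (material conditional)

# Truth table for P v Q: the row where both P and Q are True still
# comes out True, which is what makes 'v' the *inclusive* or.
for p, q in product([True, False], repeat=2):
    print(p, q, or_(p, q))
```

Printing the table makes the inclusive reading of 'v' visible at a glance.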
When expressing more complex statements it is necessary to use bracketing; the bracketing shows which logical connective ('-->', 'v', '&', '~') is the main connective. In the system of logic I use, each rule of inference generally allows the introduction or elimination of a main connective. As I am running short on time I will explain this in more detail, as well as the basic rules of inference and how to set out a proof, in another post.
In formal logic an argument is expressed in a form known as a sequent. The argument I presented at the outset of this post should be expressed as follows: P-->Q, P ⊦ Q
The formulas to the left of the turnstile symbol (called the assertion sign), separated by commas, are the assumptions on which the conclusion rests; the formula to the right of the assertion sign is the conclusion of the argument. The assertion sign can conveniently be read as saying 'therefore'. The idea behind expressing an argument in this manner is that if we assume the formulas on the left, we can apply the rules of inference to obtain the formula on the right.
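The rules of inference below are purely syntactic, but validity itself can also be checked semantically: list every assignment of True and False to P and Q and confirm the conclusion is true whenever all the assumptions are. A quick brute-force sketch in Python, reading '-->' as the material conditional:

```python
from itertools import product

def implies(p, q):
    # Material conditional: P --> Q is false only when P is true and Q is false.
    return (not p) or q

# Check the sequent P-->Q, P |- Q: in every row of the truth table
# where both assumptions hold, the conclusion Q must hold too.
valid = all(
    q
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p
)
print(valid)  # True
```

Only one of the four rows satisfies both assumptions, and Q is true in it, so the check passes.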
The first rule is called the rule of Assumptions. According to this rule we can assume whatever we like at any stage in a proof. We simply note the line number on the left and cite Ass. on the right to indicate the rule applied. Here is an example:
1 (1) P Ass.
The above is technically a proof of the following sequent: P ⊦ P
In order to show a formal proof of the argument I presented at the outset of my explanation I will explain the rule 'modus ponens' (MP). Given a conditional (for example 'P-->Q') and its antecedent (in this case 'P') we may derive the consequent (in this case 'Q').
The valid argument I presented at the outset is of the form: P-->Q, P ⊦ Q
An argument presented in this manner is called a sequent. A proof of the above sequent is as follows:
1 (1) P-->Q Ass.
2 (2) P Ass.
1,2(3) Q 1,2 MP
The next rule is modus tollens: given a conditional and the negation of its consequent, we can infer the negation of its antecedent. Here is an example of modus tollens used to prove the following simple sequent: P-->Q, ~Q ⊦ ~P
1 (1) P-->Q Ass.
2 (2) ~Q Ass.
1,2(3) ~P 1,2 MT
On the left we pool the assumptions on which the inference rests; on the right we note the lines to which the rule was applied and write MT for modus tollens. Later I will show that modus tollens is a redundant rule: we can prove the same sequent through the use of and-introduction and reductio ad absurdum.
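Modus tollens can be checked by the same brute-force method: in every assignment where P-->Q and ~Q both hold, ~P holds as well. A sketch:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# Semantic check of modus tollens: P-->Q, ~Q |- ~P.
valid = all(
    not p
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and not q
)
print(valid)  # True
```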
Next we have and-introduction. This rule is very simple: given any two formulas we can infer the conjunction of those formulas, as in the following sequent:
P, Q ⊦ P & Q
1 (1) P Ass.
2 (2) Q Ass.
1,2(3) P & Q 1,2 &I
On the left we pool the assumptions on which the inference rests. On the right we note the lines to which the rule was applied and note &I for and-introduction.
Next up is and-elimination. Given the conjunction of any two formulas we can infer either of the formulas alone, as in the following sequent: P & Q ⊦ Q
1 (1) P & Q Ass.
1 (2) P 1 &E
1 (3) Q 1 &E
On the left we pool the assumption on which the inference rests. On the right we note the line to which the rule was applied and note &E for and-elimination.
It should be noted that this sequent could easily have been proven in two lines; there was no need to infer P before inferring Q. I chose to set out the proof in this way to demonstrate that either inference is valid, and in the course of constructing a proof it is often the case that you need to isolate both conjuncts of a particular conjunction.
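Both conjunction rules can be confirmed semantically in the same brute-force fashion:

```python
from itertools import product

vals = [True, False]

# &I: in every assignment where P and Q each hold, P & Q holds.
and_intro_ok = all(p and q for p, q in product(vals, repeat=2) if p and q)

# &E: in every assignment where P & Q holds, each conjunct holds alone.
and_elim_left = all(p for p, q in product(vals, repeat=2) if p and q)
and_elim_right = all(q for p, q in product(vals, repeat=2) if p and q)

print(and_intro_ok, and_elim_left, and_elim_right)  # True True True
```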
I will now show the rule or-introduction. Given any formula we may infer its disjunction with any other formula, as in the following sequent: P ⊦ P v Q
1 (1) P Ass.
1 (2) P v Q 1 vI
As always, we pool the assumption(s) on which the inference rests and note them on the left. On the right we note the line to which the rule is being applied and write vI to indicate that or-introduction has been applied.
Next are the rules for double negation. With double negation introduction we can infer the double negation of any formula, as in the following sequent: P ⊦ ~~P
1 (1) P Ass.
1 (2) ~~P 1 DNI
Double negation elimination allows us to eliminate two negation symbols prefixed at the beginning of a formula, as in the following sequent: ~~~P ⊦ ~P
1 (1) ~~~P Ass.
1 (2) ~P 1 DNE
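Semantically, the two double negation rules reflect the fact that, classically, P and ~~P have the same truth value in every assignment; a one-line check:

```python
# DNI (P |- ~~P) and DNE (~~P |- P) are both semantically valid
# in classical logic because negating twice returns the original value.
dn_ok = all((not (not p)) == p for p in [True, False])
print(dn_ok)  # True
```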
Digression: The rule DNE is not allowed in a form of non-classical logic called intuitionistic logic. This is because intuitionistic logicians reject the law of excluded middle, and thus don't want it to be derivable in their system. I will demonstrate the classical proof of the law of excluded middle when I explain the rule RAA.
Before proceeding to the remaining rules I should say something more general about the notion of an assumption-discharging rule. When applying the rules I have demonstrated so far we must pool all the assumptions on which the inference rests and note them on the left-hand side of the proof. The following rules are what are referred to as assumption-discharging rules: in their application, one or more of the assumptions on which the inference rests will not be noted on the left, but instead on the right, where we write the lines to which the rule applies. During the course of a proof it is sometimes necessary to make assumptions which do not occur in the sequent in order to derive the conclusion; assumption-discharging rules allow us to soundly get rid of these so that the conclusion is not resting on unwanted assumptions.
You might have noticed that in the rules I have demonstrated so far, if they apply to more than one line I have separated the notation of the two lines with a comma. When applying an assumption-discharging rule we use a hyphen to indicate that an assumption has been discharged.
The first assumption-discharging rule I will demonstrate is called Conditional Proof. Given any derived formula, we can infer a conditional whose antecedent is any assumption on which that formula rests; applying the rule discharges the given assumption, so that the derived conditional no longer rests on it. We may apply Conditional Proof in order to prove the following sequent: P-->Q, Q-->~R ⊦ P-->~R
1 (1) P-->Q Ass.
2 (2) Q-->~R Ass.
3 (3) P Ass.
1,3(4) Q 1,3 MP
1,2,3(5) ~R 2,4 MP
1,2 (6) P-->~R 3-5 CP
On lines (1) and (2) I assumed the formulas to the left of the assertion sign. On line (3) I assumed P, because there is no way to derive the desired conclusion using only the initial two assumptions. On line (4) I derived Q from lines (1) and (3), noting these lines and the rule applied on the right; because these lines are both assumptions, I note 1,3 on the left. On line (5) I derived ~R from lines (2) and (4), again noting these lines and the rule applied on the right. On the left I pool the assumptions 1, 2 and 3, because line (2) is an assumption and line (4) rests on 1 and 3. Finally, on line (6) I applied the rule Conditional Proof in order to derive the desired conclusion. I noted 3-5 on the right because I discharged the assumption (P) made on line (3), now the antecedent of my conditional, and the other line to which the rule was applied is (5).
There is nothing about this rule which says I must stop here; I could continue applying the rule like so:
1 (7) (Q-->~R)-->(P-->~R) 2-6 CP
If I were to stop here I would have a proof of the sequent P-->Q ⊦ (Q-->~R)-->(P-->~R)
I could also apply it again:
(8) (P-->Q)-->((Q-->~R)-->(P-->~R)) 1-7 CP
This final formula, which rests on no assumptions, is called a theorem. Theorems express statements which are logically always true. If we wanted to express this theorem in a sequent we simply write ⊦ (P-->Q)-->((Q-->~R)-->(P-->~R))
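A theorem, resting on no assumptions, must come out true under every assignment. That can be confirmed by brute force over all eight combinations of truth values for P, Q and R (again reading '-->' as the material conditional):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# Check that (P-->Q)-->((Q-->~R)-->(P-->~R)) is true in all eight rows,
# i.e. that it is a tautology and thus a theorem.
theorem = all(
    implies(implies(p, q),
            implies(implies(q, not r), implies(p, not r)))
    for p, q, r in product([True, False], repeat=3)
)
print(theorem)  # True
```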
Note to self: Mention: Uniform substitution to generalise from P, Q, R... to any wff. <-->, RAA and vE. Redundancy of MT.