
The Logicians Corner

Yeah, and then there are even more forms of three-valued logic, like one where something is both true and false at the same time (I don't remember what it's called; something like 'dialetheism', I believe). It's suggested by a guy named Priest as a solution to the liar's paradox. His reasoning is that the 'neither true nor false' solution can easily be defeated by 'this statement is either false, or neither true nor false', while his solution does not have that problem.
Though it has the problem of being somewhat puzzling. There are multiple ways in which you could interpret this. But it's, I think, an example of how on a meta level the boundaries between correctness and completeness can become a bit blurry.
 
Let me challenge this statement logically: suppose the only source of sound in the room produces a sound of extremely low intensity, 0.001 dB. Is this sound amplified to 9.001 dB in the corners? I don't think so.

You must not understand how the dB system works. It is logarithmic, not linear: every 3 dB increase roughly doubles the power, every 6 dB doubles the pressure amplitude, and an increase of about 10 dB is perceived as roughly twice as loud. So if you have a low-frequency sound of 0 dB (which is possible; at some frequencies humans can hear slightly below 0 dB), it should measure 9 dB in a corner, which is roughly 8x the power. But keep in mind 40 dB is still only a whisper, so it would be hard to notice a difference at such low levels.
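To make the arithmetic above concrete, here is a small Python sketch of the standard decibel conversions (the function names are my own):

```python
def db_to_power_ratio(db):
    """Power ratio corresponding to a level difference in dB."""
    return 10 ** (db / 10)

def db_to_amplitude_ratio(db):
    """Pressure-amplitude ratio corresponding to a level difference in dB."""
    return 10 ** (db / 20)

print(round(db_to_power_ratio(3), 2))      # ~2.0: +3 dB doubles the power
print(round(db_to_amplitude_ratio(6), 2))  # ~2.0: +6 dB doubles the amplitude
print(round(db_to_power_ratio(9), 2))      # ~7.94: a +9 dB corner boost is ~8x power
```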
 
^ Oh, I didn't consider that a very weak sound has a negative number of decibels. Then it makes sense. There wasn't much acoustics in the physics education we had where I studied physics, and they talked only about the linear scale of the energy flux contained in sound. I know the difference between logarithmic and linear scales; the pH scale is another logarithmic scale, and the Richter scale for earthquakes is, too.

E: apparently different light intensities are compared with a decibel scale, too.
 
Hey polymath, I've got a somewhat philosophical question: do you think that logic dictates the laws of physics, or is it actually the other way round, and would/could logic be different if we were living in another universe? I suppose it's a bit of both, actually. I mean, clearly the laws of physics put limitations on what computers (even the wet ones) are capable of: physical limitations on what can be thought of or what could be imagined. But of course we would not be in a position to imagine what could not physically be imagined.
 

Most scientists tend to think that logic and mathematics are just useful tools that we have invented to be able to formulate laws of nature precisely. The question whether the laws of logic and numbers are just our own creations, or real natural laws that exist independently of us, is a difficult philosophical question. See this blog post and this thread. Logic is in a sense more fundamental than any physical theory, because no theory can contain logical contradictions: in a contradictory theory, any statement would be both true and false, and the theory couldn't predict anything.

Actually, someone could describe our own universe using three-valued logic in a lame, trivial way, by just saying that "there exists a third truth value in addition to true and false, but you never really need it". But that would be against the principle of Occam's razor, which says that we should not include in our theories anything that is unnecessarily complicated. Some nutcase scientist could probably construct an imaginary universe that would really be simpler to describe with some logic other than our conventional one, but that would be so speculative that it wouldn't really be science.
 
Yeah, there's the debate between intuitionism and, for instance, Platonism. But isn't there something even more fundamental going on, because doing computations is always a physical process? I mean, even if we had the most advanced supercomputer that is theoretically possible, there would still be computations that would be physically impossible for this machine. So wouldn't that have implications for any reasoning that involves subjects like infinity? Cantor's theorem would be a good example. Is it valid, or is it more like Zeno's paradox? In order to answer that question you would need an infinite list of infinite numbers, so it could never actually be done. Does that affect the validity of Cantor's reasoning? Should we just accept that our minds aren't physically capable of ever comprehending the concept of infinity, so any reasoning involving infinity is destined to be uncertain?
 
^ It is possible to play with infinity if you know what you're doing. You have to completely discard the idea that infinity is a number that you can do calculations with. For example, the expressions x and 2x both approach infinity as x grows without bound. What does the quotient

x / (2x)

approach when x grows without bound? If you think that infinity is a real number, you could mistakenly say that

x / (2x) approaches ∞ / ∞ = 1

because "a number divided by itself is 1". But that is not true. The quotient approaches 1/2 when x goes to infinity, because the denominator grows twice as fast as the numerator.
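Since x/(2x) is exactly 1/2 at every finite x, a slightly perturbed quotient (my own variation, not from the post above) makes the convergence visible numerically; a quick Python sketch:

```python
def ratio(x):
    # Both the numerator x + 1 and the denominator 2x grow without bound,
    # yet the quotient settles down to 1/2 rather than 1.
    return (x + 1) / (2 * x)

for x in [10, 1_000, 1_000_000]:
    print(x, ratio(x))  # 0.55, then 0.5005, then 0.5000005
```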

About the physical limitations of computational speed... There obviously exist some calculations that would be physically impossible to perform because they would take so much time that the universe would reach heat death before any imaginable computer could perform them.

But I think there are also situations where you can use pure logic to deduce that some computation is physically impossible. In the era of classical physics, most scientists believed in a deterministic universe, where everything is pre-determined by the laws of physics. They imagined a hypothetical entity called "Laplace's demon" that would have such a large computational capacity that it could predict every future event in the universe. I think there is some way to prove logically that even if our world obeyed classical mechanics, the entity would get stuck in some kind of loop if it attempted to predict its own future behavior, but I'm not sure how exactly this could be done.
 
Yeah, I've thought about Laplace's demon as well. I think the initial problem is that any prediction would add new information and, no matter how slight the influence, the notorious 'butterfly effect' would cause it to eventually be way off with its predictions. So it would have to consider the effect of its own predictions, and you'd have an infinite regress going on. But maybe that regress would stop if at some point an equilibrium were reached between world and machine, where you have a prediction that is the best possible answer to the state of affairs in the world, and vice versa. That would technically be a literally self-fulfilling prophecy.
 
I mean, like when you have a sort of Nash equilibrium between a computer and a chess player. The computer would know its best possible strategy against the chess player, and it would also know the chess player's best possible answer. So if there were a Nash equilibrium between the player's strategy and the computer's, the computer could actually predict the outcome of the game. And by doing so, this prediction would actively cause the outcome to happen.
 
Wait till humans can perform quantum computations instantly.
 
If you're actually curious, I can speak a bit about Buddhist three-valued logic, where 'unknowable' is a proper state and carries information.

I am genuinely curious, please elaborate. :)

If you don't mind. I would like that. I feel so uneducated.

Even a link to something. I like logic. I guess I don't know what it is...

You shouldn't feel uneducated, I don't think familiarity with formal logic is particularly common.

Logic is essentially the study of the structure of arguments. Arguments generally consist of a number of propositions (typically called premises), which are claimed to entail another statement, called the conclusion. The chief concern of the logician is whether a given conclusion follows from the given premises. An argument is said to be valid if, whenever all of its premises are true, its conclusion must also be true.

An example of a valid argument is as follows:

1. If it is raining, then it is wet outside.
2. It is raining.
Therefore:
C. It is wet outside.

In propositional logic we use the letters 'P', 'Q', 'R'... to represent propositions. In constructing a formal proof of the valid argument just given, we will use P to stand for 'it is raining' and Q to stand for 'it is wet outside'. Whilst explaining the rest of the syntax of the propositional calculus I will take P and Q to stand for these propositions.

In order to express 'if...then...' statements, we place '-->' between the two propositional variables, thus, in order to represent the statement 'if it is raining, then it is wet' we shall write P-->Q. Such a statement is referred to as a conditional proposition. We refer to the proposition before '-->' (in this case P) as the antecedent and we refer to the proposition after '-->' (in this case Q) as the consequent.

In order to express that a proposition is not the case, we use the negation sign '~'. So, 'it is not the case that it is raining' may be expressed as ~P.

If we want to express that more than one proposition is true, we use '&' as the connective for 'and'. Thus to say 'it is raining and it is wet outside' we will write P & Q; this proposition is called the conjunction of P and Q, and P and Q are the conjuncts of the conjunction P & Q. We can also express the statement 'it is not raining and it is wet outside' as ~P & Q.

As polymath mentioned previously, 'or' statements in logic can be inclusive. Thus, to say 'it is raining or it is wet outside' does not exclude the possibility that it is raining and it is wet outside. We use 'v' as the connective for or, so in order to express 'it is raining or it is wet outside' we write P v Q.

When expressing more complex statements it is necessary to use bracketing; the bracketing shows which logical connective ('-->', 'v', '&', '~') is the main connective. In the system of logic I use, each rule of inference generally allows the introduction or elimination of a main connective. As I am running short on time, I will explain this in more detail, as well as the basic rules of inference and how to set out a proof, in another post.

In formal logic an argument is expressed in a form known as a sequent. The argument I presented at the outset of this post should be expressed as follows: P-->Q, P ⊦ Q

The formulas to the left of the turnstile symbol (called the assertion sign), separated by commas, are the assumptions on which the conclusion rests; the formula to the right of the assertion sign is the conclusion of the argument. The assertion sign can conveniently be read as saying 'therefore'. The idea behind expressing an argument in this manner is to say that if we assume the formulas on the left, we can apply the rules of inference in order to obtain the formula on the right.

The first rule is called the rule of Assumptions. According to this rule we can assume whatever we like at any stage in a proof. We simply note the line number on the left and cite Ass. on the right to indicate the rule applied. Here is an example:

1 (1) P Ass.

The above is technically a proof of the following sequent P ⊦ P

In order to show a formal proof of the argument I presented at the outset of my explanation I will explain the rule 'modus ponens' (MP). Given a conditional (for example 'P-->Q') and its antecedent (in this case 'P') we may derive the consequent (in this case 'Q').

The valid argument I presented at the outset is of the form: P-->Q, P ⊦ Q

An argument presented in this manner is called a sequent. A proof of the above sequent is as follows:

1 (1) P-->Q Ass.
2 (2) P Ass.
1,2(3) Q 1,2 MP

The next rule is modus tollens: given a conditional and the negation of its consequent, we can infer the negation of its antecedent. Here is an example of modus tollens used to prove the following simple sequent: P-->Q, ~Q ⊦ ~P

1 (1) P-->Q Ass.
2 (2) ~Q Ass.
1,2(3) ~P 1,2 MT

On the left we pool the assumptions on which the inference rests; on the right we note the lines to which the rule was applied and write MT for modus tollens. Later I will show that modus tollens is a redundant rule: we can prove the same sequent through the use of and-introduction and reductio ad absurdum.
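The validity of these sequents can also be checked semantically, by brute force over truth assignments. A minimal Python sketch (the helper `implies` and the function names are my own, not part of the proof system described above):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a --> b is false only when a is true and b is false."""
    return (not a) or b

def valid(premises, conclusion, n_vars=2):
    """An argument is valid iff every truth assignment that makes all
    the premises true also makes the conclusion true."""
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False
    return True

# Modus ponens: P-->Q, P |- Q
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))          # True

# Modus tollens: P-->Q, ~Q |- ~P
print(valid([lambda p, q: implies(p, q), lambda p, q: not q],
            lambda p, q: not p))      # True

# Affirming the consequent (invalid): P-->Q, Q |- P
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))          # False
```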

Next we have and-introduction. This rule is very simple: given any two formulas we can infer the conjunction of those formulas, as in the following sequent:

P, Q ⊦ P & Q
1 (1) P Ass.
2 (2) Q Ass.
1,2(3) P & Q 1,2 &I

On the left we pool the assumption on which the inference rests. On the right we note the lines to which the rule was applied and note &I for and-introduction.

Next up is and-elimination. Given the conjunction of any two formulas we can infer either of the formulas alone, as in the following sequent: P & Q ⊦ Q

1 (1) P & Q Ass.
1 (2) P 1 &E
1 (3) Q 1 &E

On the left we pool the assumption on which the inference rests. On the right we note the line to which the rule was applied and note &E for and-elimination.

It should be noted that this sequent could easily have been proven in two lines; there was absolutely no need to infer P before I inferred Q. I chose to set out the proof this way to demonstrate that either inference is valid; in the course of constructing a proof it is often the case that you need to isolate both conjuncts of a particular conjunction.

I will now show the rule or-introduction. Given any formula we may infer its disjunction with any other formula, as in the following sequent: P ⊦ P v Q

1 (1) P Ass.
1 (2) P v Q 1 vI

As always we pool the assumption(s) on which the inference rests and note them on the left. On the right we note the line to which the rule is being applied and write vI to indicate that the rule or-introduction has been applied.

Next are the rules for double negation. With double negation introduction we can infer the double negation of any formula, as in the following sequent: P ⊦ ~~P

1 (1) P Ass.
1 (2) ~~P 1, DNI

Double negation elimination allows us to eliminate two negation symbols prefixed at the beginning of a formula, as in the following sequent: ~~~P ⊦ ~P

1 (1) ~~~P Ass.
1 (2) ~P 1, DNE

Digression: The rule DNE is not allowed in a form of non-classical logic called intuitionistic logic. This is because intuitionistic logicians reject the law of excluded middle, and thus don't want it to be derivable in their system. I will demonstrate the classical proof of the law of excluded middle when I explain the rule RAA.

Before proceeding to the remaining rules I should say something more general about the notion of an assumption-discharging rule. When applying all the rules I have demonstrated so far, we must pool all the assumptions on which the inference rests and note them on the left-hand side of the proof. The following rules are what are referred to as assumption-discharging rules: in their application, one or more of the assumptions on which they rest will not be noted on the left; instead they are noted on the right, where we write the lines to which the rule applies. During the course of a proof it is sometimes necessary to make assumptions which do not occur in the sequent in order to derive the conclusion; assumption-discharging rules allow us to soundly get rid of these so that the conclusion does not rest on unwanted assumptions.

You might have noticed that in the rules I have demonstrated so far, if they apply to more than one line I have separated the notation of the two lines with a comma. When applying an assumption-discharging rule we use a hyphen to indicate that an assumption has been discharged.

The first assumption-discharging rule I will demonstrate is called Conditional Proof. Given any derived formula we can infer a conditional in which the antecedent is any assumption on which the formula rests, application of this rule will discharge the given assumption so that the derived conditional does not rest on it. We may apply the rule Conditional Proof in order to prove the following sequent: P-->Q, Q-->~R ⊦ P-->~R

1 (1) P-->Q Ass.
2 (2) Q-->~R Ass.
3 (3) P Ass.
1,3(4) Q 1,3 MP
1,2,3(5) ~R 2,4 MP
1,2 (6) P-->~R 3-5 CP

On line (1) and (2) I assumed the formulas to the left of the assertion-sign. On line (3) I assumed P, this was because there is no way to derive the desired conclusion using only the initial two assumptions. On line (4) I derived Q from lines (1) and (3), noting these lines and the rule applied on the right. Because these lines are both assumptions I note 1,3 on the left. On line (5) I derived ~R from lines (2) and (4), again noting these lines and the rule applied on the right. On the left I pool the assumptions 1,2 and 3, this is because line (2) is an assumption and line (4) rests on 1 and 3. Finally, on line (6) I applied the rule Conditional Proof in order to derive the desired conclusion. I noted 3-5 on the right because I have discharged the assumption (P) resting on line (3), now the antecedent of my conditional, and the other line on which the rule was applied to is (5).

There is nothing about this rule which says I must stop here, I could continue applying the rule like so:

1 (7) (Q-->~R)-->(P-->~R) 2-6 CP

If I were to stop here I would have a proof of the sequent P-->Q ⊦ (Q-->~R)-->(P-->~R)

I could also apply it again:

(8) (P-->Q)-->((Q-->~R)-->(P-->~R)) 1-7 CP

This final formula which rests on no assumptions is called a theorem. Theorems express statements which are logically always true. If we wanted to express this theorem in a sequent we simply write ⊦ (P-->Q)-->((Q-->~R)-->(P-->~R))
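A theorem should come out true under every assignment of truth values, which is easy to confirm by brute force; a small Python sketch (the helper `implies` is my own):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# The theorem rests on no assumptions, so it should be true under
# every assignment of truth values to P, Q and R:
theorem_holds = all(
    implies(implies(p, q),
            implies(implies(q, not r), implies(p, not r)))
    for p, q, r in product([True, False], repeat=3)
)
print(theorem_holds)  # True
```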

Note to self: Mention: Uniform substitution to generalise from P, Q, R... to any wff. <-->, RAA and vE. Redundancy of MT.
 

How would someone correctly express in modal logic the statement "it's possible that it's wet outside even if it's not raining"? Should I write it as

◇(Q & ~P)

or

Q --> ◇~P

(the diamond symbol ◇ means the modality "it is possible that")?
 
It's been quite a while since I've done modal logic, but at first glance I'd say that the two are equivalent and indeed a correct translation. I think you could also write Q --> ~□P (if it's wet outside, it's not necessarily raining), since ◇~P is equivalent to ~□P. But again, I may be wrong, since it's been years since I've done ML.
 
That is an interesting question; I am not really familiar with modal logic. It seems to me that the two formulas are roughly equivalent, but because I don't know the rules of derivation for modal logic it is difficult to say which is preferable (or whether one is at all).

Sorry I can't be of more use, if everything goes to plan I will be studying modal logic next semester and will hopefully be able to provide a more satisfactory answer to this sort of question in a couple of months.
 
Modal logic seems to be quite useful after all...

On websites about ML, there seems to be talk about the "set of possible worlds". In this case, if we haven't said anything about rain and wetness yet, the set of possibilities seems to be {P&Q, (~P)&Q, P&(~Q), (~P)&(~Q)}. Once we say that "P->Q", only one of these possibilities is eliminated, namely P&(~Q)... If we then give the additional info that "P", it leaves only one possible world, where "P&Q".

I think one could somehow define a numerical "strength" of a statement, for example as the percentage of possible worlds that the statement eliminates...
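That idea is easy to prototype over the four worlds above; a minimal Python sketch (the function names and the particular strength measure are my own, just one way to formalise it):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# A "world" is a truth assignment to (P, Q); there are four of them.
worlds = list(product([True, False], repeat=2))

def surviving(constraint, ws):
    return [w for w in ws if constraint(*w)]

# "P --> Q" eliminates exactly one world, the one where P & ~Q:
after_conditional = surviving(lambda p, q: implies(p, q), worlds)
print(len(after_conditional))  # 3

# Adding the information "P" leaves the single world where P & Q:
after_p = surviving(lambda p, q: p, after_conditional)
print(after_p)  # [(True, True)]

# "Strength" of a statement: the fraction of worlds it eliminates.
def strength(constraint, ws):
    return 1 - len(surviving(constraint, ws)) / len(ws)

print(strength(lambda p, q: implies(p, q), worlds))  # 0.25
```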
 
This is indeed an interesting thread. I'm not familiar with formal logic per se; however, I have actually practiced it in some form. Do you guys know the video game Minecraft? Among other things, it's possible to construct binary logical systems in it (similar to what goes on in a real computer), and I spent hundreds of hours building an extremely simple "computer" some 4-5 years ago, using logical gates (AND, OR, NOT and so on), which allow one to make memory cells, store bits of binary data (1 or 0) and do certain manipulations with the data (add, subtract and so on). Thinking about it now, I wonder if making a more in-depth thread about it in S&T would be worth it. What do you guys think?
 

Minecraft is a fun game.

You should definitely post in depth in S&T. It would be an interesting read.
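For anyone curious what that kind of gate-level construction looks like in code rather than redstone, here is a minimal Python sketch (the gate and adder functions are my own illustration, not anything specific to Minecraft):

```python
# Build everything from a handful of primitive boolean gates, the same
# way a redstone "computer" is built up from simple components.

def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """Add two bits: returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    """Add two bits plus a carry: returns (sum, carry_out)."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

print(half_adder(1, 1))     # (0, 1): 1 + 1 = binary 10
print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = binary 11
```

Chaining full adders bit by bit gives exactly the add circuit described in the post above.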
 