David said:
That is the difference between 0.1 and 1. Going over 1, the universe spins outward uncontrollably and falls apart; beneath it, a 'gravitational' effect takes over and the universe collapses upon itself, returning to a singularity. Of course this is only one model, and there are others out there, and some even disagree with the Big Bang, but that's always going to be the case.
You keep saying that, yet you have only just specifically referred to what this magical "0.1 to 1" is. Until that point it was just "And if it's bigger than 1 it spreads infinitely!!"
Perhaps you'd like to elaborate on that somewhat? Or is it part of your theory and you worry we'll steal your ideas/thoughts?

sexyanon2 said:
But isn't it all relative? On a smaller scale, wouldn't that difference be a big one? And even if it isn't a big difference, doesn't that show its imperfection?
As I said before, there's an easy way to check. Put Pi = 3.14 into your equations and find your values for various cosmological constants. Then try 3.15 (the "real" value of Pi is between those two) and find your values of those constants. Now see if there has been a sufficient change for the equations to suddenly say something different.
Keep increasing the number of decimal places in your value for Pi and see if you can get rid of this sudden change. You reach this point WAY before 1.2 trillion decimal places.
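To make that check concrete, here's a minimal Python sketch. The universe's radius is an assumed round figure and the "constant" computed is just a circumference; the only point is how fast the error shrinks per extra digit of Pi:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
# Pi to 35 decimal places, enough for this illustration
PI = Decimal("3.14159265358979323846264338327950288")

R = Decimal("4.4e26")  # assumed rough radius of the observable universe, in metres

# Rounding Pi to n places perturbs the computed circumference by roughly
# a factor of 10 less for every extra digit kept.
for places in (2, 5, 10, 20):
    rounded_pi = PI.quantize(Decimal(1).scaleb(-places))
    error = 2 * R * abs(PI - rounded_pi)
    print(places, error)
```

Each extra digit shrinks the error by about a factor of 10, which is why any "sudden change" in an equation's behaviour would show up long before a trillion places.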
sexyanon2 said:
That is true. But you're defining one abstract idea with another abstract idea. Ending numbers (like 0.12 or 15) are concrete. Pi is an abstract idea because it doesn't have an ending number. If you define one abstract idea with another one, the former isn't proven to be concrete.
You are confusing mathematics and physics. The "proof" that something is concrete is that the series which defines it converges. There are many ways to test for convergence, and many ways to define various types of convergence. If something converges, then it is concrete.
Check my gallery. There is an equation for Pi in there which is an expression involving 4 fractions. That is an expression for Pi in Base 16, as opposed to our Base 10. If you give me ANY "decimal" place you want, I can tell you what it is in Base 16. You want the 5th place? No problem. You want the billionth place? Again, no problem. As you can see, that formula works for ANY n; there is no sudden "it doesn't work", so the value of Pi must be a well defined one.
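For the curious: the four-fraction formula being described is the Bailey–Borwein–Plouffe (BBP) formula, and the digit extraction it allows can be sketched in a few lines of Python. This is a rough float-based version, reliable for modest positions:

```python
def _frac_sum(j, d):
    """Fractional part of sum over k of 16^(d-k)/(8k+j), via modular exponentiation."""
    s = 0.0
    for k in range(d + 1):
        s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = d + 1
    while True:
        term = 16.0 ** (d - k) / (8 * k + j)
        if term < 1e-17:        # remaining tail is below float precision
            break
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digit(d):
    """The (d+1)-th hexadecimal digit of Pi after the point."""
    x = (4 * _frac_sum(1, d) - 2 * _frac_sum(4, d)
         - _frac_sum(5, d) - _frac_sum(6, d)) % 1.0
    return "%x" % int(16 * x)
```

Pi in hex starts 3.243F6A88..., so `pi_hex_digit(0)` gives '2' and `pi_hex_digit(3)` gives 'f'; the key point is that any position can be computed directly, without computing the digits before it.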
If you don't like Base 16 (mmm... hexadecimal) then numerous tests exist to prove Pi has a unique, concrete value. Unfortunately, the requirements to show you rigorously are at least 1st year university mathematics, and some notation these forums don't support. If you are truly interested, and have a passing knowledge of mathematics (so it doesn't sail over your head), I can knock something up for you.
Suffice to say, there is a precise value for Pi.
sexyanon2 said:
If you define one abstract idea with another one, the former isn't proven to be concrete.
It is, provided you proved the concreteness of the previous step. This works backwards, again and again and again, till you reach the axioms of mathematics: as few statements as possible taken to be "self-evident". I'm sure I can dredge up what they are if you want.
Suffice to say that "decimal expansions" are built up over hundreds of logical arguments from these axioms, and contrary to what most people think, 1+1=2 is not "self-evident". The arguments which get from the axioms of mathematics to these results have been analysed, reanalysed, checked and rechecked by the best logicians ever; the logic is flawless.
Except to David, who one day plans to flick through the 2000 pages of Principia Mathematica and pick out all the flaws 8) Despite it being quite clear his own logic leaves a lot to be desired.
sexyanon2 said:
That is true. But you're defining one abstract idea with another abstract idea. Ending numbers (like 0.12 or 15) are concrete. Pi is an abstract idea because it doesn't have an ending number. If you define one abstract idea with another one, the former isn't proven to be concrete.
You're defining the definition of a number with an abstract concept too.
What is wrong with 1/3? I have 3 apples; I take away 1; I've got 2 left. The total number has decreased by 1/3. Now, by your logic, that isn't right, there is an error, because 1/3 = 0.33333..., which I can therefore never use exactly. Is there an error in my calculations? Is taking 1 apple from 3 not removing 1/3 of the total?
Similarly, what number do I multiply 7 by to get 1? I define it to be "1/7", so by definition 7*(1/7) = 1. A computer might do it via a decimal version and get 1.00000000000000000000000000000001192. In fact, if you get C code to print out 300 decimal places for just the number "1", after about 100 it goes into random digits. Now, by your logic, that's the mathematics saying 7*(1/7) = 1.00000000000000000000000000000001192. Is it? No, it's technical limitations within the computer's memory and random errors within the CPU caused by imperfections (and, increasingly, quantum effects).
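That split between computer arithmetic and exact arithmetic is easy to see in Python: floats carry roundoff, while the standard library's Fraction type does the exact version with no error at all:

```python
from fractions import Fraction

# Binary floating point cannot store most decimal numbers exactly
print(0.1 + 0.2 == 0.3)        # False: the stored values carry roundoff
print(format(0.1, ".25f"))     # the digits drift after roughly 17 places

# Exact rational arithmetic has no such problem: 7 * (1/7) really is 1
print(Fraction(7) * Fraction(1, 7) == 1)   # True
```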
In the "mathematical universe", 7*(1/7) = 1; it's absolute. Do you deny that this is right? If so, please point out where the error is.
Similarly, let's consider 2Pi. It's the circumference of a unit circle. Now, I do not know the decimal expansion of Pi to infinite accuracy. Does this mean any calculation I do with it is therefore in error in some way? In a computer it will be, but real mathematicians don't use calculators, and anyone who thinks mathematics is defined by what Microsoft Calculator says needs help. (2Pi)/Pi = 2. No error. Did I need to know Pi's decimal expansion? No; that would have been just as valid if I'd done "2X/X = 2".
Do I need to know the decimal expansion of a number to use it in mathematics? Of course not; the equations which define a number tell me EVERYTHING I need to know about it. I can derive its decimal expansion if I so wish, and I can check whether it's rational, imaginary, etc.
sexyanon2 said:
You're defining it. What's the numerical value of sqrt(2)?
In mathematics, you do not need to know the decimal expansion of a number to use it. Since I know x^2 - 2 = 0 for "root 2", if I ever get (root 2)^2 - 2, I know I can instantly just write "0" down instead. Have I made an error because I don't know the decimal expansion? Of course not.
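A quick Python illustration of the same point: the floating-point "root 2" carries an error, while using the defining equation x^2 - 2 = 0 gives exactly zero, no decimal expansion in sight:

```python
import math

r = math.sqrt(2)       # a ~16-digit approximation, not "root 2" itself
print(r * r - 2)       # a tiny nonzero residue, purely from floating point

# Working from the defining equation instead: (root 2)^2 - 2 is exactly 0,
# by definition, with no decimal expansion required.
```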
sexyanon2 said:
But isn't it all relative? On a smaller scale, wouldn't that difference be a big one? And even if it isn't a big difference, doesn't that show its imperfection?
If someone asks you what time it is, do you just say "half past 3"? You could say "3.27pm", or "3.27 and 32 seconds". If you had a fancy atomic clock you could give the "exact" time to 9 decimal places of a second (though by the time you said it, it would be wrong). Technology (and ultimately quantum mechanics) is a limit on our ability to measure things. You cannot tell me the time to better than 10^-43 s, a "Planck time". In only about 50 orders of magnitude you go from 1 year to the smallest time which has any meaning.
What about Pi? We know 1.2 TRILLION orders of magnitude of its decimal expansion, since each new place is an order of magnitude smaller than the last.
If you took a quantum fluctuation in a single atom and blew it up to the size of the visible universe, you'd make it 40 orders of magnitude bigger (10^-14 m to 10^26 m). Hence, if you were measuring the size of the visible universe and wanted to take quantum fluctuations into account, you'd have to measure to 40 decimal places. Pi's decimal expansion we know to not 40, not 4000, not 1 billion, but over 1 trillion places!! That is enough to go from the shortest meaningful length in the universe (the Planck length, about 10^-35 m) to the size of the entire visible universe (10^26 m) roughly 20 BILLION times over. The vastness of the difference in size is beyond our ability to grasp.
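The arithmetic behind that claim is a one-liner to check (taking the Planck length as roughly 10^-35 m and the trillion-digit figure from the text):

```python
digits_known = 1.2e12          # decimal places of Pi known (figure from the text)
span_in_orders = 26 - (-35)    # Planck length ~1e-35 m up to ~1e26 m: 61 orders
print(digits_known / span_in_orders)   # ~2e10, i.e. roughly 20 billion spans
```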
If David is truly worried about the next calculated decimal place of Pi suddenly tipping the scales (which, as I mentioned, can easily be checked), then he might want to be more worried that we don't know the size of the universe to within 10^-35 m!!
David said:
and even the thoughts of gravitation are up for grabs
I thought you said we understood black holes perfectly, and since black holes are very much gravity-based, surely that means our knowledge of gravity is perfect? Or do you now think that our knowledge of black holes is less than perfect?
David said:
You've never seen a real physics debate, have you? It's worse than that.
I would wager neither have you. If you call your claims, and us constantly shooting them down with obvious errors, "a debate", then you might want to get a new dictionary (though that was true after your "perfect" definition of a black hole).
sexyanon2 said:
or I can cut 6.15 mm out of my ruler.
Following your own logic, no you can't. If you think that "1/3" doesn't exist, then you are not cutting 6.15 mm off your ruler. Are you sure it's exactly 6.15 mm? What if it's 6.1500001 mm?
The "error" goes deeper than that. I define a new length. I've never liked metres, so I define "1 AlphaNumeric" (AN for short), a new unit of length which has the incredible coincidence of being 3 times the size of a metre. Now, by your logic, I can cut 0.5 ANs off a ruler, and I can cut 0.25 ANs off a ruler, but I cannot cut 1/3 AN off a ruler. But 1/3 AN is a metre. So, by your logic, a metre does not exist, because a metre is 1/3 of some larger distance.
There is nothing "special" about a metre or a mile. They are lengths we have picked up during our history. IIRC, 1 metre is the distance from the North Pole to the Equator through Paris, divided by 10 million. Doesn't sound a particularly well defined distance. I define a "Bob" to be the distance from the North Pole to the Equator, via Paris, divided by 30 million.
I now have 1 Bob = 1/3 metre. Do you deny the existence of my "Bob"?
You cannot claim that "our current method of measuring is fine, so long as it's a length with a terminating decimal expansion", because what is so great about our units of length? Our unit of length is 1/3 of some other length; does that suddenly make our unit of length wrong?
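If you want to see "1/3 of a unit" behaving perfectly well, Python's Fraction type (exact rational arithmetic) handles the hypothetical AN and Bob units from above without any error:

```python
from fractions import Fraction

metre_in_AN = Fraction(1, 3)     # 1 metre = 1/3 AN (the AN is 3 m by definition)
bob_in_metres = Fraction(1, 3)   # 1 Bob = 1/3 m

print(3 * metre_in_AN == 1)      # True: three thirds make exactly one AN
print(9 * bob_in_metres)         # exactly 3 metres, no rounding anywhere
```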
This is where the problem in conception lies. If you start claiming "Pi will tip the scales" or "1/3 of a metre doesn't exist", then you rapidly develop numerous other, more striking problems. If you step back and realise that 1/3 and Pi are simply mental constructs, applied to what are in reality "imperfect entities", the problems do not arise. A computer cannot do 3*(1/3) exactly; it cannot give every decimal place of Pi, or even of 1/7 for that matter. It can't even store most terminating decimals exactly: to a computer, 0.1 is really some value extremely close to 0.1, like 0.100000000000000006. You can reduce this by increasing memory or computing power, but to a computer most numbers are just values extremely close to the ones you typed in.
When I program in C I can't write "if (k == 1)" when k is a double; I have to write "if (fabs(k - 1) < 1e-9)", because the computer doesn't realise k should be an integer (it's defined as a double), and numerous roundoff errors during computation have created these slight discrepancies. (The tolerance has to be bigger than double precision, about 2e-16, or the comparison would never match.)
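The same pitfall exists in any language that uses doubles; a small Python sketch of why the tolerance comparison is needed:

```python
k = 0.0
for _ in range(10):
    k += 0.1                    # "obviously" k should end up exactly 1.0

print(k == 1.0)                 # False: roundoff has crept in
print(abs(k - 1.0) < 1e-9)      # True: compare against a tolerance instead
```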
In mathematics, every number can be, and is, defined by the equations it solves. To deny that is either because you have insufficient schooling in mathematics (which is not a crime, and so asking for clarification is fine) or a result of ignorance. Since David claims to be well versed in areas such as primes, it would be a logical conclusion that he says what he says from ignorance.
In reality, there are ALWAYS errors, but not from mathematics. You want Pi to 1 billion places? No problem, maths will tell you. 1000 trillion places? No problem, maths will tell you. 10^1000000000 places? No problem, maths will tell you. You want the radius of the universe to 100 places? Tough, can't be done. The errors in the universe from uncertainty prevent it.
David seems to fail to realise this, despite being "well versed" in both basic number theory (primes) and having sufficient cosmology knowledge to have "solved relativity".
sexyanon2 said:
As AN said - math isn't based on reality. Now if math isn't based on reality, then why would you "want to represent reality with it?"
Why do you represent reality using words? If I say a word like "tree" to you, you think of an object which is a plant, tall, with leaves, a trunk, etc. Yet the word "tree" is not based in reality; there is nothing special about the word "tree", any more so than the French "arbre". But because you have made the mental connection between the symbols on this screen, "tree", and the living thing, you consider "tree" to describe a... well, a tree.
Now, if I say "the derivative", I think of a mathematical operation. But what happens if I make the additional connection and say "the (negative) derivative of the potential"? Then I have a description of what is otherwise known as "force".
Mathematics is a language, a mental construction whose development is not shaped by the world around us. However, this does not prevent us from giving additional meanings to some of it, as in "if m is mass, and F is force, then what's this a?", whereupon it's found to be excellent for describing things in quantitative ways.
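The "derivative of the potential" connection can be made concrete in a few lines; the quadratic potential here is just an illustrative choice, not anything from the thread:

```python
# Force as the negative derivative of a potential: F = -dV/dx.
# A central difference approximates the derivative numerically.
def force(V, x, h=1e-6):
    return -(V(x + h) - V(x - h)) / (2 * h)

k = 2.0                          # spring constant for V(x) = 0.5*k*x^2
V = lambda x: 0.5 * k * x * x
print(force(V, 1.5))             # close to -k*x = -3.0
```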
The error in people's understanding seems to be that they think the errors in physics come from errors in the construction of the very mathematics behind it. The error is in the connections we make between the mathematical symbols and the physical quantities. No one will come along and prove differentiation and integration wrong tomorrow (if you consider that statement wrong, you have a poor understanding of logic), but they are not the errors in physics; their application is. Sometimes you don't need to take the derivative, or applying a certain idea to a problem is the wrong way to approach it. That is where problems arise in physics. Much of theoretical physics research at the moment is trying to find the right mathematical tools to apply to current problems. Previous tools were not "wrong", just inappropriate.
Wow, one of my longest posts ever. Not bad for 10.15am on a Saturday morning.
