• Philosophy and Spirituality
  • P&S Moderators: JackARoe | Cheshire_Kat

Computer Language vs Human Language - similarities/differences?

Mr Wobble
Bluelighter · Joined: Nov 1, 2009 · Messages: 294 · Location: Wilf's Quality Meats
A while ago I had one of those internet arguments with regard to the similarities or otherwise between computer and human language.

I argued that there were obvious similarities - they are both dealing with the transfer of information, and they both employ a kind of grammar and syntax.

I agreed with my 'opponent' that there are differences insofar as computer language is a set of precise instructions which do not convey meaning but elicit a precise change of state in the receiving computer, whereas human language is not a precise set of instructions - it is nuanced and carries meaning/semantic content.

Where we disagreed was upon how fundamental these differences are.

My opponent argued that the semantic content of human language means that the two are fundamentally different.

My argument was that semantic content could be considered a higher-level set of instructions that elicits a change of state (comprehension/understanding) within the receiving computer - the human brain. Okay, it's not precise, but its function is comparable with that of computer language insofar as the outcome is a change of state of the receiving computer.
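A toy sketch of that analogy (purely illustrative - the class, the "states", and the rule for choosing them are my own inventions, not a model of the brain): whether the input is a precise opcode or a nuanced sentence, the outcome in both cases is a state transition in the receiver.

```python
# A minimal "receiving computer": input changes its internal state.
class Receiver:
    def __init__(self):
        self.state = "idle"

    def receive(self, message):
        # crude, hypothetical "comprehension": the message determines
        # the next state, however imprecisely
        if "?" in message:
            self.state = "considering"
        else:
            self.state = "informed"
        return self.state

r = Receiver()
r.receive("the cat is on the mat")    # state -> "informed"
r.receive("is the cat on the mat?")   # state -> "considering"
```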

Any thoughts, comments?
 
Code:
section .data
	hello:     db 'not really the same is it?',10   ; string plus newline
	helloLen:  equ $-hello                          ; length, computed at assembly time
section .text
	global _start
_start:
	mov eax,4          ; sys_write
	mov ebx,1          ; fd 1 = stdout
	mov ecx,hello      ; address of the string
	mov edx,helloLen   ; length (an equ constant, so no brackets)
	int 80h
	mov eax,1          ; sys_exit
	mov ebx,0          ; exit status 0
	int 80h
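For contrast, here is the same program sketched in a higher-level language (Python, one of many possible choices) - the instructions are still there, just further from the machine:

```python
import sys

# The same "program" as the assembly above: write a fixed string to
# stdout, then exit. The syscalls (sys_write, sys_exit) are hidden
# behind the runtime, but the shape - data, write, exit - is unchanged.
hello = "not really the same is it?\n"
sys.stdout.write(hello)    # roughly: mov eax,4 ... int 80h
# (the interpreter's normal exit plays the part of sys_exit)
```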
 
the problem is all in how you define "language." If you use the definition accepted by linguists and which is generally the definition most dictionaries give, in which "language" is used as a shorthand for "human language," then there are some problems with the comparison. Programming languages don't match up with that definition in important ways.

Human languages include:
- spontaneous transmission of the whole from parents to children;
- used as the primary means to communicate arbitrary information between people;
- used as the (or a) primary means for human beings to consciously think and reason

Programming languages
- have a purpose (they exist to make programs run) that is not found in human languages;
- are typically "interpreted" or "compiled" in a way that seems counter-intuitive with regard to human languages. (we can think "in" English, at least on quick glance; computers "think" in assembly or machine language rather than Pascal or C++)
- typically are not used, given their restricted vocabularies, to express arbitrary information between people (instead, programs serve as containers to pass such information; but does anybody use C++ to say "i'm going to the store to get some milk"--literally, not figuratively?) Is there a "word" in Java for "milk"? or for "Islam"? (you can quote these English words in Java; but are there Java words for them? I say no.)
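that last distinction can be shown directly (a Python sketch, though the same holds for Java or C++): a programming language can *quote* the English word "milk", but the quoted word is just data, with no more built-in meaning than an arbitrary string. The language's own "words" - its keywords - are fixed and tiny in number.

```python
import keyword

milk = "milk"                       # English word, quoted as mere data
print(keyword.iskeyword("milk"))    # False - not a word *of* Python
print(keyword.iskeyword("if"))      # True  - one of Python's few words
print(len(keyword.kwlist))          # a few dozen, vs. the ~100,000+ words of English
```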

There are many similarities, but many of these exist because programming languages are created specifically to adapt parts of human language in such a way as to make the machine components more accessible to humans. In the 1950s, programming languages were basically math. They could have stayed that way and nobody would have claimed they were like English. But a variety of engineers worked to make them more accessible, slapped the term "language" on them (they were called "codes" before), and confusion has followed ever since.

Now, at a finer-grained level, there are obviously many similarities in function between given samples of programming languages and human languages.

Sorry for going on so long--it's possible i have more than a passing investment in this issue.
 
@watsons - never said they were the same now did I?

What I did say was that the idea that one functions as a set of instructions that elicits a change of state in a computer, whilst the other one doesn't, might not necessarily be the case (that semantic content is in fact a set of instructions that elicits a change of state in a - biological - computer).
 
one functions as a set of instructions that elicits a change of state in a computer, whilst the other one doesn't, might not necessarily be the case (that semantic content is in fact a set of instructions that elicits a change of state in a - biological - computer).

good, good. this is a really good area to think about. i tend to think of these as "code" functions, and they are widely distributed in nature and culture. i think you can push this even further: both programming languages (PL) and human languages (HL) can "do" algorithms. historically, PLs were built to process them, but that's because human beings using HLs wrote and discovered the idea of algorithms (ie Turing machine).

now it turns out that algorithms are found everywhere in nature--in physics, in astrophysics, in biology. it's very cool. it is one of the central functions of any HL, and i agree with you that it almost entirely overlaps with the functions of PLs.
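a concrete case of an algorithm living comfortably in both: Euclid's gcd procedure, stated in ordinary prose in the Elements over two thousand years ago, mechanizes directly (a Python sketch):

```python
# Euclid's algorithm, which the Elements describes in natural language:
# repeatedly take away the smaller number from the larger until only
# the common measure remains. The prose recipe and the program are
# the same algorithm.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b   # replace the pair with (smaller, remainder)
    return a

print(gcd(48, 18))   # 6
```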

what PLs seem to me to lack is a lot of the stuff that is non-algorithmic about HLs, so that even if a change in brain state occurs, this is only part of the "reason" the language was used. language is used to create intimacy between individuals, to soothe, to shock, to save for posterity--does the language in a book exist to change the state of something? (in the same way that the language -- not the function -- of a game program does?)

besides, i'm sure you know, the idea that the brain is a computer and language produces changes in state is mildly :) controversial. everything produces changes in state in the brain, so even if language is participating in changes of state, it might not be right to say it's producing or directing them the way a program does; further, neural imaging typically fails to find localization of particular changes with regard to cognitive processing--in other words, from my reading of the research, a single person's brain can appear to physically be in the "same state" while "thinking about" different things, and in "different states" while thinking about the same thing.
 
Some interesting ideas there xoqqiy.

language is used to create intimacy between individuals, to soothe, to shock, to save for posterity

But in such cases language still involves cognitive processes (changing brain states) - though I take your point, there is something else going on, emergent patterns of behaviour, or something along those lines?

And, yes, I see how there are problems associated with attributing discrete, physically located changes of brain state to the comprehension of meaning.

I guess this kind of thing is at, or beyond, the frontier of current neuroscience - we might make a better guess in 50 years time if/when the human brain has been reverse engineered and digitally modelled?
 
here is a fun book that i agree with a lot that touches on these issues in an effort to show that the mind/self goes far beyond "the brain's sensory input," raising (for me) really strong questions about what we mean by "mind" when we invoke "Artificial Intelligence" and the brain being a computer--"mind" is a lot, lot more than "intelligence." Mr Kurzweil could pay closer attention to his so-far interchangeable use of these completely non-equivalent terms.

"Mind is a lot more than intelligence"--i would say the main reason i frequent BL has something to do with my belief in that ;)

Alva Noe, Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (Hill & Wang, 2009)
http://www.amazon.com/gp/product/0809074656/ref=cm_rdp_product

(it's a trippy book and he's a Berkeley Philosophy prof--i find it quite BL-ish in feeling, frankly!)
 
I quite like this idea that consciousness and mind are a process that happens between the human brain and its environment/sensory inputs.

I'm not sure that this negates the concept of the brain as a biological computer that could theoretically be reverse engineered - but I get the point that reverse engineering a brain does not equal reverse engineering the mind.
 
Some interesting ideas there xoqqiy.



But in such cases language still involves cognitive processes (changing brain states) - though I take your point, there is something else going on, emergent patterns of behaviour, or something along those lines?

right. and all kinds of ordinary stuff.

And, yes, I see how there are problems associated with attributing discrete, physically located changes of brain state to the comprehension of meaning.

I guess this kind of thing is at, or beyond, the frontier of current neuroscience - we might make a better guess in 50 years time if/when the human brain has been reverse engineered and digitally modelled?

at one level, i completely agree. we will learn to model all sorts of features of the brain, to connect with them more directly, to extend them, etc.

at another, i think we already know the answer to these questions, more than some views (including a lot of current practitioners of brain imaging) want to admit. the mind is everything about our selves in our world--we are, for example, "aware of having a biological, fleshy arm" in a way that isn't useful to think about from a machine perspective unless we simulate the entire embodied arm. which is possible, i think, in a certain way (maybe not the fleshy part the way i think about it)--but on my view probably (a) not desirable; and (b) circular with regard to the original point--that computational machines and minds are altogether or wholly the same.

i don't think there's any big mystery. brains and computers are partly the same. then they each have parts specifically adapted to their function. computers might become "conscious," but that doesn't mean they have to become conscious the same way we are. in fact that seems restrictive to me. why wouldn't they "become conscious"--whatever that might mean--in a way that follows from their place and function in their environment?
 
thx for such an interesting conversation! i happen to be writing a paper right now that touches on some of this stuff so it's in my operating RAM :)

I suspect Mr Noe knows a thing or two about the core BL subjects (along with quite a few other Berkeley cog sci types)
 
computers might become "conscious," but that doesn't mean they have to become conscious the same way we are. in fact that seems restrictive to me. why wouldn't they "become conscious"--whatever that might mean--in a way that follows from their place and function in their environment?
The interesting, entirely speculative, question for me, and as you've already mentioned Kurzweil, is what happens to human consciousness if and when we start to significantly augment our cognitive processing, memory, and sensory abilities by artificial means?
 
The interesting, entirely speculative, question for me, and as you've already mentioned Kurzweil, is what happens to human consciousness if and when we start to significantly augment our cognitive processing, memory, and sensory abilities by artificial means?

the part of kurzweil i agree with is that this is already happening, all over the place! although who knows how far it will go. have you read much about augmented reality (& that is just the available stuff 6 mos ago--just googled quickly)?

not to go book crazy, but here's one more that i love on this topic (& of course amazon is "smart" enough to associate the two books together):

Andy Clark, Supersizing the Mind: Embodiment, Action, and Cognitive Extension (Oxford 2008).

http://www.amazon.com/Supersizing-Mind-Embodiment-Cognitive-Philosophy/dp/0195333217/ref=pd_sim_b_2

I think Clark is an even better philosopher than Noe, actually.
 
have you read much about augmented reality (& that is just the available stuff 6 mos ago--just googled quickly)?
Not read anything, have heard a bit - saw a short vid on the interwebs the other day - the concept has gradually been creeping into my awareness over the last year or so. Intriguing stuff and developing at a pace, it seems.

The bit where I most strongly agree with Kurzweil is that no one really knows what the consequences will be, even if one accepts that the trends from which he extrapolates will hold true.

not to go book crazy, but here's one more that i love on this topic (& of course amazon is "smart" enough to associate the two books together):

Andy Clark, Supersizing the Mind: Embodiment, Action, and Cognitive Extension (Oxford 2008).
Nothing wrong with going book crazy - I just wish I could directly upload the info to my biological hard drive without all of the tedious business of reading. ;)
 
I'm guessing that you've read 'The Fabric of Reality: Towards a Theory of Everything' -
by David Deutsch?

http://www.amazon.co.uk/Fabric-Reality-Towards-Theory-Everything/dp/0140146903

A while since I read it, but from what I remember his view is that potentially everything, including mind, can be perfectly simulated given a sufficiently powerful computer (and that the Universe may in fact be considered a computer calculating its own outcome in real time). He also touches on epistemology, quantum computers, many worlds theory and lots more...
 
I'm guessing that you've read 'The Fabric of Reality: Towards a Theory of Everything' -
by David Deutsch?

http://www.amazon.co.uk/Fabric-Reality-Towards-Theory-Everything/dp/0140146903

A while since I read it, but from what I remember his view is that potentially everything, including mind, can be perfectly simulated given a sufficiently powerful computer (and that the Universe may in fact be considered a computer calculating its own outcome in real time). He also touches on epistemology, quantum computers, many worlds theory and lots more...

no, i haven't, so thanks for that reference. but i know something about this view--among its current advocates is stephen wolfram, who is a little bit more on his rocker than ray k. (imho):

http://singularityhub.com/2010/05/12/stephen-wolfram-is-computing-a-theory-of-everything-video/

and this you don't have to read. i don't like how hard it is to read now (other than reading the web every minute of the day as we both are doing obv.). i used to read books a lot more and i loved it, and if i can ever force myself to do it now i still love it. could go on about that too..

i should add: the idea that the universe is doing computation is effing fascinating. i don't think it "explains everything," but if you read around in current astrophysics (for example, scientific american-level articles) there is a steady stream of interesting material about how stuff happening at very large distances (and at very small ones!) may be using/"doing" computation.
 
no, i haven't, so thanks for that reference. but i know something about this view--among its current advocates is stephen wolfram, who is a little bit more on his rocker than ray k. (imho):

http://singularityhub.com/2010/05/12/stephen-wolfram-is-computing-a-theory-of-everything-video/

Wolfram's 'A New Kind of Science' is an interesting read and is available free online:

http://www.wolframscience.com/nksonline/toc.html

It will be interesting to see how his theory of everything pans out - he claims it will be something like a paradigm shift in understanding, though many in the wider scientific community are understandably skeptical (but then isn't that the way with paradigm shifts - as per Thomas Kuhn's sociological slant on the scientific process?).

And I agree re Kurzweil. Some of his predictions are a bit bizarre, or, in many cases, just too specific (who knows what 2045 will really be like?) - that, alongside his daily intake of platefuls of vitamins and dietary supplements, does make one question his credentials as a rational observer...
 