
Scientists say they have been able to monitor people's thoughts via scans of their brains.

mitogen
Bluelighter (joined Nov 8, 2004; 127 posts)
linkage:

http://news.bbc.co.uk/2/hi/health/4715327.stm

"Teams at University College London and University of California in LA could tell what images people were looking at or what sounds they were listening to.
The US team say their study proves brain scans do relate to brain cell electrical activity.

The UK team say such research might help paralysed people communicate, using a "thought-reading" computer.



In their Current Biology study, funded by the Wellcome Trust, people were shown two different images at the same time - a red stripy pattern in front of the right eye and a blue stripy pattern in front of the left.

The volunteers wore special goggles which meant each eye saw only what was put in front of it.

In that situation, the brain then switches awareness between both images, sometimes seeing one image and sometimes the other.

While people's attention switched between the two images, the researchers used fMRI (functional Magnetic Resonance Imaging) brain scanning to monitor activity in the visual cortex.

It was found that focusing on the red or the blue patterns led to specific, and noticeably different, patterns of brain activity.

The fMRI scans could reliably be used to predict which of the images the volunteer was looking at, the researchers found.
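The decoding step described there (predicting which image a volunteer is perceiving from the pattern of voxel activity) can be sketched as a simple pattern classifier. Everything below is invented for illustration: the voxel count, the effect sizes, and the nearest-centroid decoder are assumptions, not the study's actual data or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 visual-cortex voxels, each slightly preferring
# either the "red" or the "blue" pattern (all numbers invented).
n_voxels, n_trials = 50, 40
pref = rng.normal(0.0, 1.0, n_voxels)      # per-voxel tuning
labels = rng.integers(0, 2, n_trials)      # 0 = red percept, 1 = blue percept
signs = np.where(labels == 1, 1.0, -1.0)
scans = signs[:, None] * pref[None, :] + rng.normal(0.0, 2.0, (n_trials, n_voxels))

# Nearest-centroid decoder: average the training scans for each percept,
# then classify a held-out scan by which mean pattern it correlates with more.
train, test = slice(0, 30), slice(30, None)
mean_red = scans[train][labels[train] == 0].mean(axis=0)
mean_blue = scans[train][labels[train] == 1].mean(axis=0)

def decode(scan):
    r_red = np.corrcoef(scan, mean_red)[0, 1]
    r_blue = np.corrcoef(scan, mean_blue)[0, 1]
    return 0 if r_red > r_blue else 1

predictions = np.array([decode(s) for s in scans[test]])
accuracy = (predictions == labels[test]).mean()
print(f"decoding accuracy on held-out scans: {accuracy:.0%}")
```

The point of the sketch is just that "reading" which image someone sees does not require exotic machinery once the two percepts produce reliably different activity patterns; a correlation with each class's mean pattern already decodes well above chance.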

Thought-provoking?

The US study, published in Science, took the same theory and applied it to a more everyday example.

They used electrodes placed inside the skull to monitor the responses of brain cells in the auditory cortex of two surgical patients as they watched a clip of "The Good, the Bad and the Ugly".

They used this data to accurately predict the fMRI signals from the brains of another 11 healthy patients who watched the clip while lying in a scanner.
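Predicting fMRI signals from electrode recordings rests on the standard assumption that the BOLD signal is, to a first approximation, neural activity convolved with a slow haemodynamic response. Below is a minimal sketch of that forward model; the spike-rate trace and the single-gamma response function are invented for illustration and are not the paper's actual model or data.

```python
import numpy as np

TR = 1.0                        # seconds per fMRI sample (assumed)
t = np.arange(0, 20, TR)

# Simplified single-gamma haemodynamic response function: peaks a few
# seconds after a burst of neural activity, then decays (illustrative shape).
hrf = (t ** 5) * np.exp(-t)
hrf /= hrf.sum()

# Invented spike-rate time course from an "electrode" (spikes/s per bin).
rate = np.zeros(120)
rate[10:15] = 40.0              # short, strong burst of activity
rate[60:70] = 25.0              # second, longer and weaker burst

# Predicted BOLD signal: convolve the neural activity with the HRF,
# truncated to the length of the recording.
bold = np.convolve(rate, hrf)[: len(rate)]

peak = int(np.argmax(bold))
print(f"activity burst begins at t=10; predicted BOLD peak at t={peak}")
```

The predicted signal peaks several seconds after the neural burst, which is why electrode activity recorded in one set of patients can be used to anticipate the much slower scanner signal in another.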

Professor Itzhak Fried, the neurosurgeon who led the research, said: "We were able to tell one part of a scene from another, and we could tell one type of sound from another."

Dr John-Dylan Haynes of the UCL Institute of Neurology, who led the research, told the BBC News website: "What we need to do now is create something like speech-recognition software, and look at which parts of the brain are specifically active in a person."

He said the study's findings proved the principle that fMRI scans could "read thoughts", but he said it was a very long way from creating a machine which could read anyone's mind.

But Dr Haynes said: "We could tell from a very limited subset of possible things the person is possibly seeing."

"One day, someone will come up with a machine in a baseball cap.

"Then it really could be helpful in everyday applications."

He added: "Our study represents an important but very early stage step towards eventually building a machine that can track a person's consciousness on a second-by-second basis.

"These findings could be used to help develop or improve devices that help paralyzed people communicate through measurements of their brain activity.

But he stressed: "We are still a long way off from developing a universal mind-reading machine."

Dr Fried said: "It has been known that different areas of the temporal lobe are activated by faces, or houses.

"This UCL finding means it is not necessary to use strikingly different stimuli to tell what is activating areas of the brain."
 
The film 'Brainstorm' is based on this very idea: that you could measure and monitor the changes in the brain and record them. The film goes further and imagines that you can also 'play them back', so that someone else could directly experience what you experienced while the recording took place.

Admittedly something like that is purely in the realms of sci-fi at the moment, but I'd bet that anyone at Kitty Hawk watching the Wright brothers in 1903 would have treated someone talking about men walking on the moon within a lifetime with the same degree of scepticism...

Of course, anything that lets you know what a person is thinking, regardless of what they or their body language are saying, will be very interesting to the military/security community, as well as to psychopharmacologists, psychiatrists, etc.
 
Even if humans don't develop the machine itself, in the not-so-distant future machines will be building better machines that build even better machines, and so on. They'll not only be able to map out a brain and do this stuff; the self-aware "machine"/network/brain could have everyone's personalities uploaded, and, well, everything else about the entire universe too: the whole universe in a computer/AI simulation running at the maximum speed it can, governed by the laws of the universe.

I imagine this like I imagine myself before I was 'born'. Here's this big bang, out comes a universe that happens to support evolution well enough to get to humans. It's like we're all a bunch of people running around, trying to figure out this universe we're "IN". Just as it was before I was 'born' or became self-aware myself, there's that universe within; it has to go through evolutionary growth itself too. Then, if all goes well, it gets to that infinity point where it knows its entire self as one 'thing', and, limited by its own universal laws, it can't go higher; it gets close to infinity until it realises there's something OUTSIDE too! It's overloaded! My brain wakes up into a world so ridiculously alien to those within it (or rather, my brain sort of 'simulates' them on a different substrate) that it runs at full speed trying to process this whole new world. The limited-information laws in there would also go along with something like a "sleep schedule": if there's a way to stop taking in more input from the outside world, it can 'sleep' and 'catch up' on what it was trying to process before sleep.

Anyway, I'll shut up and type more in my trip report thread... considering I am tripping at the moment and all :).

just google "the singularity"
 
^ There is a line of thought that the world 'we' live in is actually a huge computer simulation that's so good we interpret it as reality. That has the implication that 'we' don't really exist and that our internal life/stream of consciousness is just the product of a lot of lines of software (which, in the broadest terms, is true even for us; only the hardware is the brain). This allows for an infinite regression (the beings that wrote our 'reality' are themselves part of an AI simulation by yet higher beings, and this repeats infinitely). Once we build an AI simulation of reality, we're just part of the nested reality scheme.

If it is, somebody put a lot of work into designing the software that codes for the functioning of the human brain!
 
There are a lot of ethical issues related to mind reading. If it ever gets truly advanced enough, sometime in the far distant future, privacy would not exist anymore. Period. There was an excellent Star Trek: TNG episode that touched on this; it featured a race of highly telepathic aliens who communicate with each other by sending images (instead of words) telepathically. They had never heard of the concept of privacy before meeting other races.
 
There was an interesting special in New Scientist, released 31 July 2004, titled

Thought Detectives: They know your thoughts, why you buy, what you're hiding. They know your motivations, secrets and desires.

Spread across several articles, it was indeed daunting stuff.


Behind the mask

New Scientist, 31 July 2004; volume 183, issue 2458, p. 31

IF YOU THINK mind reading is outside the realms of science, think again. In the past decade a revolution in brain-imaging technology has made it possible to see your private mental world in real time. These days no self-respecting university or teaching hospital is without a PET or fMRI scanner and, as the technology comes of age, its reach has extended. Where once researchers were most interested in sweeping generalisations about how humans think, now their studies are becoming ever more personal, uncovering details about our individual motivations, desires and prejudices.

One potentially fruitful area for brain imaging is in understanding why we make the choices we do. Economists and strategists have a vested interest in making these sorts of predictions. They have devised a whole field of mathematics – game theory – to try to uncover the rules of decision making, but it turns out that we lesser mortals just won't comply with their mathematical formulae. Now, by combining game theory with brain imaging, scientists are beginning to see why our decision making isn't necessarily rational, and how factors such as emotions and social context influence the choices we make. Already they can predict the choices monkeys will make by the pattern of activity in a single neuron. Could we be next? Will imaging one day allow others to consider your options even before you have?

From knowing what you want, it is a short leap to manipulating what you want. Advertising agents and marketers have been trying to do this for decades, but their methods are not exactly scientific. Now, with the dawn of neuromarketing, selling could go high-tech. This particular incarnation of brain imaging has had some high-profile publicity and, despite the old adage, it is not all good. Consumer groups are enraged by the thought of multinationals trying to sneak inside our heads in search of the "buy button". But researchers say that what they are finding is much more subtle and could actually lead to more ethical ways of selling. Is this simply the latest gimmick in the corporate world's attempt to drum up business or the beginning of a new era for consumerism?

If all this seems like an infringement on your free will and right to privacy, you ain't seen nothing yet. Brain imaging could also be used to reveal your inner secrets. Studies have already put the spotlight on such things as racial prejudice, deception, sexual fantasy and personality. Though researchers urge caution there is little doubt that it is only a matter of time before the technology moves out of the lab and into the courtrooms, our workplaces and our everyday lives. It is an ethical minefield: will people be forced to undergo scanning, will the results raise insurance premiums, affect their job prospects, or determine whether or not they go to jail? Whatever the future of brain scanning, we need to think about these things now.


What's on your mind?

Neuroscience is revealing more and more about what our brains do when we think and make choices. The more we understand, the greater the expectation for science to solve crucial ethical issues, such as when human life begins, and to what extent people should be held responsible for their actions. Can it provide these answers? Sometimes, says neuroscientist Michael S. Gazzaniga. But we have to ask the right questions

New Scientist, 11 June 2005; volume 186, issue 2503, p. 48

WHEN society looks to scientists for answers to ethical questions, how should they respond? As neuroscience in particular sheds more light on what makes us human, it is confronted with some of the most challenging problems that society faces. Will advances in the field change our ideas about morality, responsibility and the law? Should they?

In some areas, new findings can help tackle these issues – the ethical dilemma of when an embryo has the moral status of a human being, for example. But in others, neuroscientists are being asked to weigh in when in fact they shouldn't. Neuroscience has nothing to say about concepts such as free will and personal responsibility, and it probably has nothing to say about the origins of anti-social thoughts. So dragging it into the courtrooms is dangerous.

Cognitive neuroscience will improve our understanding of these difficult issues, for instance by discovering whether there exist universal morals possessed by all members of our species. The developing field of neuroethics will deal with the social issues of disease, normality, mortality, lifestyle, and the philosophy of living, informed by our understanding of underlying brain mechanisms. It places personal responsibility in the broadest social context. It is – or should be – an effort to come up with a brain-based philosophy of life.

Consider the questions of when to confer moral status to an embryo, and of when life begins. These are separate. The distinction between them is important.

Biological life begins at the moment of conception. But when does human life begin? The answer has important implications for debates on abortion, in-vitro fertilisation and cloning for stem cell research. Many neuroscientists and some bioethicists believe that human life begins when the brain starts functioning. Consciousness is the critical function needed to determine humanness, because it is a quality that, in its fullness and with all its implications for self-identity, personal narrative and other mental constructs, is uniquely human. An embryo cannot have consciousness until the point in development when it has a brain able to support consciousness. But, as with many ethical issues involving the brain, the answer is not so black and white. Our grey matter creates many grey areas.

The context of the question is everything. One relevant context is biomedical cloning for stem cell research. Neuroscience clearly shows that the fertilised egg does not begin the processes that eventually generate a nervous system until day 14. For this reason, among others, stem cell researchers use fertilised embryos only up until day 14.

But we have to jump all the way to the 23rd week of development before the fetus can survive outside the womb – and then only with advanced medical technology to help it. One could argue that the embryo is not a human being, or deserving of the moral status of a human being, until then. And indeed this is when the US Supreme Court has ruled that the fetus has the rights of a human being.

In making this ruling, the court had to navigate several arguments. One is the "continuity argument" that claims life begins at conception. Its adherents view a fertilised egg as the point at which life begins, and hold that it should be granted the same rights as a human being. They take no consideration of developmental stages. And there is no rational arguing with those who see it this way.

Similarly, the "potentiality argument" views the potential to develop into a human being as conferring the status of a human being. This is akin to saying that a home improvement warehouse is the same thing as 100 houses, since it holds that potential. Neither of these makes any sense to neuroscience. How can a biological entity that has no nervous system be a moral agent?

A further argument, which most often comes into play around stem cell research, holds that the intention of those who create an embryo is significant. Such research may use embryos left unused from IVF processes, where the intention of creating several embryos is to create one or two that are viable for implantation. In natural sexual fertilisation up to 80 per cent of embryos spontaneously abort: thus IVF is simply a high-tech version of what happens naturally. Alternatively, researchers may use embryos created specifically for stem cell harvesting, and here there is never any intention to create a human being.

Looking at the facts, I see that a specific human life begins at conception. A 14-day-old embryo, a clump of cells, created for research, has no moral status. And yet there is something about the look of an ultrasound image of a 9-week fetus that makes me as a father have a personal reaction – it has started to look like one of us.

Is this gut reaction an indication of built-in moral instincts that our brains seek to make sense of with these various arguments? Cognitive neuroscientific research seems to point towards this.

The single most important insight that the cognitive neurosciences can offer ethicists is in understanding how the brain forms beliefs. A powerful example of its drive to do so comes from observing "split brain" patients. These people have had their corpus callosum, the connection between the two halves of their brain, severed as treatment for epilepsy.

Years of testing such people has revealed a brain mechanism, which I call "the interpreter", that resides in the verbal brain (usually the left hemisphere). This crafts stories or beliefs to interpret actions. If we present the word "walk" only to the right hemisphere of split-brain patients, they will get up and start walking. When we ask them why they do this, their left hemisphere, which is unaware of the command, creates a response such as "I wanted to go get a soda". Similar findings abound in split-brain research and in studies of neurological disorders.

Another exciting and relevant area of research is Giacomo Rizzolatti's work with mirror neurons. These indicate a built-in mechanism for "mind reading", or for empathy. When a monkey reaches for something, a particular neuron responsible for the movement fires in its brain. That same neuron fires in a monkey that is watching the action but not moving. It may be that when we see someone do something, the equivalent neurons are triggered in our brains, creating the same feeling or response in us.

These contributions from neuroscientists add to our general understanding of brains, minds and even ethics. The area where our counsel is most often sought, however, is the court of law. Lawyers and investigators are excited by the possibility of putting someone in a scanner to see whether they are lying, or whether they have a biological propensity to violence. Couldn't this information be used to prosecute or defend someone?

The answer should be an emphatic no. While the advances in neuroimaging are exciting, they do not produce that kind of answer. For example, we can show someone pictures of terrorist training camps and watch an area of the brain light up. This may reveal fascinating things about how certain cognitive states work. But it is dangerous and simply wrong to use such data as irrefutable evidence about such cognitive states – let alone the history that led to them, which may include photographs of camps seen in a newspaper.

What we know about brain function and brain responses is not always interpretable in a single way and therefore should not be used as infallible evidence the way DNA evidence is. This is illustrated by responses to Elizabeth Phelps's use of functional magnetic resonance imaging (fMRI) to examine black and white undergraduates' responses to pictures of known and unknown black and white faces. She found that the amygdala, an area of the brain associated with emotion, lights up in white undergraduates when they are shown pictures of unknown black faces, but not famous black faces such as Martin Luther King Jr, Michael Jordan or Will Smith.

Some have suggested that this shows that the white students are afraid of unfamiliar black faces. But this is a dangerous and, more importantly, inaccurate leap to make. The finding does not mean that racism is built into the brain.

The tricky thing is that the brain allows us – indeed it is bound and determined – to concoct stories and theories about sets of circumstances or data, and that includes data from experiments on brains. But the resulting stories are not always incontrovertible. There are other explanations for those fMRI scans of students' brains.

One crucial area where the law and neuroscience should be kept apart is the "my brain made me do it" defence in court. The whole area of research raises the question: if the brain determines the mind and determines our actions, independent of our knowing about it until after the fact, then what becomes of free will?

But personal responsibility arises out of interacting with many human beings. When people come together in groups, laws emerge from their interaction that are not to be found in assessing their brains. You could compare these to the traffic dynamics and rules generated when cars start to interact, producing laws of cooperation that exist not in the cars but in the interactions.

We are still guided by social rules of behaviour, and choose to react and act according to those, in addition to any determined brain mechanisms we may all have. Free will is alive and well. "My brain made me do it" is not an excuse and should not be used in the courts. Neuroscience simply does not have as much to say as lawyers would think or hope about personal responsibility.

Belief formation is one of the most important areas in which cognitive neuroscience needs to teach something to ethicists and to the world. The brain forms beliefs based on contextual information, and those beliefs are hard to change. If you know that, it is hard to accept the wars that rage and lives that are lost due to differences between belief systems. At another level, however, it should come as no surprise that people are behaving as they do: we are wired to form beliefs and to form theories.

In this view, religious beliefs are meta-narratives, rationales we provide for our actions. Common moral reasoning may be built in to humanity, but the stories that attempt to explain and to answer "why" questions about its results are social constructs. If we could come to understand and accept that the true sources of different belief systems are socially institutionalised theories of interpretation of our actions, then it seems to me we could go a long way to accepting their differences as mere differences of narrative. There are no universal differences in how humans exist in the world.

As we continue to uncover and understand the ways in which the brain enables belief formation and moral reasoning, we must work to identify what the intrinsic set of universal ethics might be. It is a revolutionary idea, to be sure, but clinging to outmoded belief systems and even fighting wars over them in light of this knowledge is, in a word, unethical.

Profile: Michael S. Gazzaniga is director of the Center for Cognitive Neuroscience at Dartmouth College in Hanover, New Hampshire, where he researches split brains, and is editor-in-chief emeritus of the Journal of Cognitive Neuroscience. He is a member of the US President's Council on Bioethics. His most recent book is The Ethical Brain (Dana Press, May 2005).



They know what you want; If neuromarketers can find the key to our consumer desires, will they be able to manipulate what we buy, asks Emily Singer

New Scientist, 31 July 2004; volume 183, issue 2458, p. 36

WHY DO people who prefer the taste of Pepsi faithfully buy Coke? Will the Catwoman movie trailer make you want to see the film? And are women subconsciously drawn to the sight of a bikini-clad model hawking beer on television?

Scientists and ad execs hope to unravel advertising mysteries like these with neuromarketing – a new spin on market research, which shuns customer surveys and focus groups in favour of technologies such as functional magnetic resonance imaging (fMRI) to peer directly into consumers' brains. Though the technique has still to prove its credentials with journal publications, a handful of consultants and companies have already started spending their marketing budgets on scanner time.

The idea is to watch what goes on in people's brains when they see or think about desirable and undesirable goods – a pair of Armani jeans versus a supermarket's own brand, for example. Researchers hope to learn about our hidden desires and preferences, and how to manipulate them so companies can flog us more of their products. It conjures up Orwellian images of commercials targeted to inflame our most secret desires. Yet some analysts believe neuromarketing is a form of advertising snake oil, a ploy to make marketers shell out millions for the latest bunch of bells and whistles. Can neuromarketing truly see into the mind of the consumer, or is it just a con?

Neuromarketing caught public attention by recreating a famous soda pop conundrum inside a brain scanner: why is Coke more popular than Pepsi when more people pick Pepsi in blind taste tests? Neuroimaging expert Read Montague from Baylor College of Medicine in Houston, Texas, scanned people's brains using fMRI as they blindly drank either Coke or Pepsi and reported which tasted best. He found that a region called the ventral putamen within the striatum lit up most strongly when people drank their favourite soda. This area is known to be associated with seeking reward. More people preferred Pepsi, just as the decades-old challenge said.

But when people were told which soda they were drinking, their preferences changed: more people chose Coke. And this time the brain area that showed most activity was the medial prefrontal cortex, a spot associated with higher cognitive processes. The results – which Montague hopes to publish soon – showed that people make decisions based on their memories or impressions of a particular soda, as well as taste. In the advertising world, this "brand recognition" is one of the most sought-after qualities advertisers attempt to engender.

While the experiment hasn't really thrown up any new marketing insights yet, researchers hope this new approach might help them pin down what this elusive brand recognition is all about. Clinton Kilts, a neuroscientist at Emory University in Atlanta, says it's about making a person identify with an object. He found the same prefrontal region that Montague identified lit up whenever people look at pictures of things they love. He says the area is associated with self-referential thinking. He now hopes to learn what sets up these personal associations. "Say you love Ford Mustangs. Maybe that comes from your family upbringing around Detroit, or the fact that it was your first car," he says.

According to Steven Quartz, a neuroscientist at the California Institute of Technology in Pasadena, neuromarketing could also uncover predilections we are unaware of. "Surveys are based on the assumption that we accurately probe our own preferences," says Quartz. "But basic science says that a lot of what underlies our preferences is unconscious." From the advertisers' point of view, neuromarketing's strength is that it may hit on subconscious biases that traditional advertising methods, such as focus groups, fail to uncover.

He is designing a neuroimaging package that will help movie studios measure the success of their trailers. For example, he showed women a trailer starring wrestler-turned-action hero "The Rock". In traditional surveys women generally rate The Rock as unattractive, but their brain activity says otherwise: areas associated with facial attractiveness light up when women watch him on screen. Studios could use this information to try to tweak the movie pitch towards women, Quartz says.

But while Quartz believes his technique will predict blockbusters much better than surveys do, he still has to prove it. His group plans to test neuromarketing against traditional questionnaires, as well as against physiological measures that are much cheaper and easier to monitor than brain responses, such as the galvanic skin response, which gives an overall measure of arousal.

While scientists may be excited about the possibilities, neuromarketing has many critics. Douglas Rushkoff, a New York author who often writes about the advertising industry, doubts the technique will catch on. He describes neuromarketing as an elaborate ploy. "I don't see success beyond their ability to con marketers into giving them money," he says.

But others find the very idea frightening. Gary Ruskin, who runs consumer champion Ralph Nader's Commercial Alert group based in Portland, Oregon, says: "Even a small increase in advertising efficiency could boost advertising-related diseases such as obesity." Ruskin has protested against Kilts's work, which he did in collaboration with BrightHouse, a marketing consultancy firm based in Atlanta and one of neuromarketing's leading lights.

Making companies more moral

Caught between sceptics and downright opponents, Kilts and Joey Reiman, BrightHouse's founder and CEO, claim that rather than predicting an individual's shopping behaviour, neuromarketing will help them to understand how people develop preferences. "Our goal is to change company, not consumer, behaviour," says Reiman. He adds that this philosophy could improve advertising ethics. "What if you could, for example, show a company that their moral and ethical behaviour has a bigger influence on consumer preference than the colour of their packaging or current tag line?"

This responsible spin on neuromarketing may be more a reaction to negative press than a genuine hope for a more moral advertising industry, however. BrightHouse has recently changed its gung-ho approach, erasing the term neuromarketing from its website and replacing it with the blander "neurostrategies". And it has swapped an Orwellian logo of two eyes piercing a brain for an innocuous picture of the BrightHouse building.

The bottom line is that neuromarketing still has some way to go before it can prove itself effective – either by uncovering our secret wishes or by convincing companies that good behaviour sells. In the end, the controversy may amount to nothing. In April, Montague tried to capitalise on the neuromarketing buzz by organising a conference geared towards marketing professionals. It was cancelled due to lack of interest.

Perhaps this is because the neuromarketers have yet to find what the industry would really love: a signature brain pattern that predicts consumer behaviour. Maybe they never will. "I don't think we have a buy button," says Kilts. Quartz is perhaps nearest, with a plan to compare the brain activity of people who liked a movie trailer and went to see the film with those who liked it but stayed home. But even if such a thing is found, Kilts doesn't think advertisers could manipulate it. "We're not that good and the human brain isn't that stupid," he says.

Emily Singer is a Boston-based writer who favours Coke, even in blind taste tests
 
This is just something for the power elite; don't expect them to give these machines away freely. With this they can control your mind and, believe me, if they can, they will try. Be aware: anything that can be abused will be, and the target might be you, why not?

Power calls for more power, and the extremely rich and wealthy will control all the others if they can; they are "above" the law. Don't expect this to be something without very dangerous risks: machines could monitor millions if not more, by the remote control of machines using GPS systems and tracker nano-robots being sprayed in chemtrails from secret military planes against their own population.

BE (VERY) AWARE!

I'm not paranoid, I'm a realistic thinker, not a utopian.
 
one step closer to being able to map the entire brains thought every single little process and be able to render it into visual, auditory, feel and so forth. I embrace it. Dont worry guys, somone will create a country someday that will utilize all these technologies for good and not for government conspiracy. Not every country is the USA, and the USA will fall one day, its inevitable, lets speed up the process. Im a fucking terrorist, not to your civilians, just to your government, the way terrorists should be. Fuck the USA and its absurd semantics, im sick of it. You people need to riot, there needs to be an overthrow of power, not just of bush, every single president you guys elect is some fucking head up his ass douchebag, and we cant blame you guys, because all your CHOICES for president are all fucking douchebags, its a lose lose situation, and yall wonder why no one wants to vote?? DONT VOTE!! If your gonna vote vote in the form of a bullet to the presidents dome from a grassy knoll. And keep it coming until there is change, eventually no head up their ass faggot is gonna wanna run for president and then the people can have their freedom.
 
you guys are FUCKING NUTS. People monitor these forums; expect to see suits at your door soon looking for your ass because of some comment you may or may not have made but, due to the Patriot Act, might as well have, and expect to be detained indefinitely. I have seen evidence of it happening and I recommend you edit the last two posts as soon as possible.
 
Back on topic: fMRI is ungodly expensive at the moment and will not be available to the would-be general neuromarketer / NLP shaman anytime soon.

That said, there now exist more effective techniques than fMRI for finding out what people are thinking, albeit most are classified or undeveloped/uncommercialised.

I've seen and played with some; it works surprisingly well. I thought about hooking up a rudimentary neural network to the remote scanner software, but other projects took precedence.
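For anyone wondering what a "rudimentary neural network" on scanner-style output could look like in the simplest case, here is a toy single-hidden-layer network trained by plain gradient descent. Everything in it is invented for illustration: the two-channel "readings", the threshold task, and the architecture are assumptions, not any real scanner interface.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up training data: 200 samples from 2 "scanner channels", with a
# toy binary target (which side of a line the reading falls on).
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]

# One hidden layer of 8 sigmoid units, sigmoid output, cross-entropy loss.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for _ in range(2000):
    h = sig(X @ W1 + b1)                 # forward pass
    out = sig(h @ W2 + b2)
    d_out = out - y                      # cross-entropy + sigmoid gradient
    d_h = (d_out @ W2.T) * h * (1 - h)   # backprop into the hidden layer
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

acc = ((out > 0.5) == (y > 0.5)).mean()
print(f"training accuracy on the toy readings: {acc:.0%}")
```

Classifying a clean, linearly separable toy signal is of course nothing like decoding thoughts; the sketch only shows the mechanics of wiring a small learner to a stream of numeric readings.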
 