What are feelings, really?
June 2, 2014 9:08 AM

How does the brain's circuitry actually create consciousness? What is actually going on at the terminus of all the chemical reactions and processes that creates sensory experience? What about feelings such as pain, happiness, sadness, or anger? Yep, I know these are the million-dollar questions and there are no definitive answers (yet!), so I'm interested in hearing your ideas or any research/theories you find particularly interesting on the topic. Cheers!
posted by turnips to Science & Nature (14 answers total) 23 users marked this as a favorite
 
Some breakthrough research into how memories are made and stored is being done right now:

The brain's RAM: Rats, like humans, have a 'working memory'

How to erase a memory – and restore it: Researchers reactivate memories in rats

How memories are REALLY made: Incredible new video captures their formation in the brain
(All about the same study, but the last one has a COOL video of the event happening!)
posted by IAmBroom at 9:21 AM on June 2, 2014


Consciousness - my favorite current theory is from Antonio Damasio. He researched 'vegetative' patients and 'locked-in syndrome' patients to find the key differences in the brain that caused one to be "alive but can't move" vs. "alive but unaware" and extrapolated that to understand consciousness as it pertains to physical areas of the brain.
(FYI I had to watch that video several times to understand his theories.)
posted by St. Peepsburg at 9:39 AM on June 2, 2014


The University of Arizona's Center for Consciousness Studies would be a good place to start. They have abstracts and video interviews from their recent conference.
posted by Sophont at 9:43 AM on June 2, 2014


How does the brain's circuitry actually create consciousness?

We're a long way from answering this question. And if you make a distinction between the subjective experience of consciousness (which is what you seem to be asking about) and the outwardly observable phenomena of physical awakeness and cognitive alertness (which is what the majority of neurological research is interested in), an argument can be made that the question is actually impossible to answer at all. David Chalmers is a good place to start if this kind of contrarian position interests you; his book The Conscious Mind is an accessible read.
posted by my favorite orange at 10:01 AM on June 2, 2014 [1 favorite]


Consciousness isn't caused by brain circuitry, though its contents are affected by it.
posted by Obscure Reference at 10:10 AM on June 2, 2014


How do you know that consciousness isn't caused by brain circuitry, OR? See, the problem with discussions like this is that the target is moving all the time.
posted by Guy_Inamonkeysuit at 10:37 AM on June 2, 2014


What you seem to be asking about is what is known as the "hard problem of consciousness" or the "explanatory gap." Chalmers and Dennett, referenced above, are basically the polar extremes on this question. Dennett says, I think, that there is no hard problem and that we already have an explanation for consciousness. (I am unable to state succinctly what he thinks that explanation is.) Chalmers says that there is a hard problem and that the best way to solve it is with some form of dualism: i.e., to posit that there are two kinds of "stuff" in the world, physical and mental, that somehow work together to produce consciousness.

I think the most interesting theories fall in between these two extremes. These theories, as I understand them, say that consciousness is most likely produced by ordinary physical matter without assistance from immaterial mental "stuff," but that we are incapable of explaining how that happens. The idea is that there is a gap in our ability to understand nature. No matter how much we learn about the physical brain, we will never be satisfied that we have "explained" how the brain produces subjective experience. Philosophers who espouse views along these lines include Colin McGinn and Joseph Levine.
posted by crLLC at 11:32 AM on June 2, 2014 [3 favorites]


Personally I have no problem with the idea that brains are representation engines, and that this "I" is the representation that this particular engine makes of itself. I don't think there's anything essentially mysterious going on here, though it is insanely complicated and detailed.

How would a general purpose, bodily autonomous representation engine with this degree of abstraction power and adaptability operate, if not like this?

As for subjectivity: given that the representation engine in charge of driving these fingers right now is the only one with access to its own innards, it would be deeply deeply weird if its own representations of its own processes were not fundamentally dissimilar to its representations of anything outside itself and were not almost entirely self-contained and private.

The clue is the lack of a continuous experiential "I": every time I go to sleep, "I" goes missing. The "I" that it's convenient to assume continues to exist while I sleep is a conceptual construct, a mere placeholder for that-which-is-typing-to-you-now.

I think of this "I" rather like the apparently still image produced by a stroboscope: every time I check to see whether I exist, that very act of checking produces a "flash" that reveals itself to itself. The overall impression is of a definite thing with continuous existence, but in fact there is no way to find out via pure introspection whether an "I" continues to exist between the flashes, or whether there are absences similar to those that occur as I sleep (but shorter) as the engine gets on with all the things it does other than self-examination.

We learn object persistence early: anything that's still there every time we look at it is assumed to have the property of continuous existence, because that's how most things work. But in the special case of an "I" brought into momentary existence by the very act of looking for/at itself, I'm unconvinced that the assumption of something like object persistence is justifiable. And without it, it seems to me that the "hard problem" simply evaporates.
posted by flabdablet at 12:07 PM on June 2, 2014 [12 favorites]


flabdablet mentioned Daniel Dennett above, so here's a recommendation for The Mind's I.
posted by RobotVoodooPower at 7:16 PM on June 2, 2014


That's some really interesting food for thought, flabdablet, thanks.
posted by Drexen at 5:05 AM on June 3, 2014


Response by poster: Thanks for your responses, everyone. They are, as Drexen put it, really interesting food for thought. Personally, I'm on the same page as flabdablet (or at least I am if I understood him correctly). I'm a material reductionist through and through, and I'm really fascinated by what the physical makeup/manifestation of something like the experience of pain, vision, etc. could be. And also by what it would mean if that makeup could be reconstructed outside the brain itself. Anyways, I'm gonna check out your recommendations. Cheers.
posted by turnips at 11:35 PM on June 3, 2014


Possibly of interest
posted by flabdablet at 12:43 AM on June 4, 2014


I'm a material reductionist through and through

I'm not a reductionist, because I've seen, in all kinds of physical systems, enough weird and interesting emergent behaviors that no amount of time spent understanding the constituent parts would ever let anybody predict. Some things are most easily understood without drilling down through too many levels of description. Surprisingly often, emergent behaviors turn out not to depend terribly sensitively, or even at all, on the detailed makeup of their constituent parts.

Also not strictly a materialist, except possibly in the "no true Scotsman" sense where "material" gets redefined as needed to cover any objectively observable and/or experimentally replicable phenomenon.

Quite fond of Occam's razor, though.
posted by flabdablet at 12:48 AM on June 4, 2014 [1 favorite]

