Your brain does not process information and it is not a computer | Aeon Essays (aeon.co)

Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer

the-podcast guy recently linked this essay. it's old, but i don't think it's significantly wrong (despite gpt evangelists). also read weizenbaum, libs, for the other side of the coin

174 comments
  • As a REDACTED who has published in a few neuroscience journals over the years, this was one of the most annoying articles I've ever read. It abuses language and deliberately misrepresents (or misunderstands?) certain terms of art.

    As an example,

    That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.

    The neuronal circuitry that accomplishes the solution to this task (i.e., controlling the muscles to catch the ball), if it's actually doing some physical work to coordinate movement in a way that satisfies the given condition, is definitionally doing computation and information processing. Sure, there aren't algorithms in the usual way people think about them, but the brain in question almost surely has a noisy/fuzzy representation of its visual input and its own position in space, if not also that of the ball they're trying to catch.
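
    For concreteness, here's a toy sketch of the regularity that heuristic exploits (my own illustration, with made-up launch numbers and flat 2-D geometry, not anything from McBeath's paper): tan(elevation angle) rises at a constant rate only if you're standing where the ball will land. Nulling that optical acceleration is exactly the kind of quantity-tracking I'd call computation over a representation.

    ```python
    import numpy as np

    # Toy 2-D geometry behind the catching heuristic (made-up numbers).
    # Chapman's observation: tan(elevation angle) of a fly ball rises at
    # a constant rate only when you stand where it will land, so a fielder
    # can succeed by nulling optical acceleration, no trajectory solving.
    g, vx, vz = 9.81, 25.0, 18.0       # gravity, horizontal/vertical launch speed
    T = 2 * vz / g                     # flight time (launch and land at z = 0)
    R = vx * T                         # landing distance from the batter

    t = np.linspace(0.05, 0.8 * T, 6)  # sample times during the flight
    x = vx * t                         # ball's horizontal position
    z = vz * t - 0.5 * g * t**2        # ball's height

    for d in (R - 15, R, R + 15):      # fielder short of / at / beyond the landing spot
        tan_theta = z / (d - x)        # the quantity the fielder's eye tracks
        rate = np.diff(tan_theta) / np.diff(t)
        print(f"fielder at {d:5.1f} m: d/dt tan(theta) = {np.round(rate, 3)}")
    # Only the middle fielder sees a constant rate; it grows when the ball
    # will land behind you and shrinks when it will land in front.
    ```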

    For another example,

    no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain

    in any sense?? really? what about the physical sense in which aspects of a visual memory can be decoded from visual cortical activity after the stimulus has been removed?

    Maybe there's some neat philosophy behind the seemingly strategic ignorance of what certain terms of art actually mean, but I can't see past the obvious failure to articulate what the scientific theories in question nominally purport to be able to explain.

    help?

  • Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.

    I mean, protip: if you ask people to discard all of their language for discussing a subject, they're not going to be able to discuss the subject. This isn't a gotcha. We interact with the world through symbols and metaphors. Computers are the symbolic language with which we discuss the mostly incomprehensible function of about a hundred billion weird little cells squirting chemicals and electricity around.

    Yeah, I'm not going to finish this, but it just sounds like god-of-the-gaps contrarianism. We have a symbolic language for discussing a complex phenomenon, and the phenomenon doesn't really reflect the symbols we use to discuss it. We don't know how memory encoding and retrieval work. The author doesn't either, and it really just sounds like they're peeved that other people don't treat memory as an irreducibly complex mystery never to be solved.

    Something they could have talked about: our memories change over time because, afaik, recalling a memory uses the same machinery as creating one. What I'm told is that we're experiencing the remembered event again, and because we're basically doing a live performance in our head, the act of remembering can also change the memory. It's not a hard drive; there are no ones and zeroes in there. It's a complex, messy biological process that arose under the influence of evolution, aka totally bonkers bs. But there is information in there. People remember strings of numbers, names, locations, literal computer code. We don't know for sure how it's encoded, retrieved, manipulated, "loaded into RAM", but we know it's there. As mentioned, people with some training can recall enormous amounts of information verbatim. There are, contrary to the dollar experiment, people who can reproduce images with high detail and accuracy after one brief viewing. There's all kinds of weird eidetic memory and outliers.

    From what I understand, most people are moving towards a system model: memories aren't encoded in a cell, or as a pattern of chemicals; it's a complex process that involves a whole lot of shit and can't be discretely observed by looking at an isolated piece of the brain. You need to know what the system is doing. To deliberately poke fun at the author: it's like trying to read the binary of a fragmented hard drive; it's not going to make any sense. You've got to load it into memory so the index that knows where all the pieces of the file are stored on the disk can assemble them into something useful. Your file isn't "stored" anywhere on the disk. Binary is stored on the disk. A program is needed to take that binary and turn it into readable information.

    "We're never going to be able to upload a brain" is just whiny contrarian nonsense; it's god of the gaps. We don't know how it works now, so we'll never know how it works. So we need to produce a 1:1 scan of the whole body and all its processes? So what, maybe we'll have tech to do that some day. Maybe we'll, you know, skip the whole "upload" thing and figure out how to hook a brain into a computer interface directly, or integrate the meat with the metal. It's so unimaginative to just throw your hands up and say "it's too complicated! digital intelligence is impossible!"

    Like come on, we know you can run an intelligence on a few pounds of electrified grease. That's a known, unquestionable thing. The machine exists, it's sitting in each of our skulls, and every year we're getting better and better at understanding and controlling it. There's no reason to categorically reject the idea that we'll some day be able to copy it, or alter it in such a way that it can be copied. It doesn't violate any laws of physics, and it doesn't require goofy exists-only-on-paper exotic particles. It's just electrified meat.
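
    To make the fragmented-disk bit above concrete, a toy sketch (invented blocks and index, not any real filesystem):

    ```python
    # Toy version of the fragmented-disk analogy (invented blocks and
    # index, not a real filesystem): the "file" exists nowhere on disk as
    # a contiguous thing; only scattered fragments plus an index do, and
    # a program is needed to assemble them into something readable.
    disk = {2: b"wor", 5: b"hel", 7: b"ld!", 9: b"lo "}  # fragments at arbitrary blocks
    index = [5, 9, 2, 7]                                 # this file's block order

    def read_file(disk, index):
        """Reassemble the file by following the index, not by scanning the disk."""
        return b"".join(disk[block] for block in index)

    print(read_file(disk, index))                   # b'hello world!'
    print(b"".join(disk[b] for b in sorted(disk)))  # b'worhelld!lo ' (gibberish)
    ```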

    Also, if bozo could please explain how trained oral historians and poets can recall thousands of stanzas of poetry verbatim with few or no errors, I'd love to hear that, because it raises some questions about the dollar bill "experiment".

  • This essay is ridiculous; it's arguing against a concept that nobody with even the most minimal understanding of or interest in the brain actually holds. He's arguing that because you cannot go find the picture of a dollar bill in any single neuron, the brain is not storing the "representation" of a dollar bill.

    I am the first to argue the brain is more than just a plain neural network; it's highly diversified and works in ways we don't yet understand. But this is just silly. The brain obviously stores the understanding of a dollar bill in patterns across sets of neurons (like a neural network). The brain quite obviously has to store the representation of a dollar bill, and we will probably find a way to isolate this in a brain within the next 100 years. It's just that, like a neural network, the information is stored in complex multi-layered systems rather than as in traditional computing, where a specific bit of memory lives at a specific address.
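
    To illustrate what that kind of distributed storage can look like (a toy Hopfield-style sketch of my own, with a random vector standing in for the dollar bill, not a claim about actual cortex): the memory lives in the whole weight matrix, no single weight is "the" stored bit, and a corrupted cue still recovers the pattern.

    ```python
    import numpy as np

    # Minimal Hopfield-style network: the stored pattern lives in the whole
    # weight matrix, not at any single address, and a corrupted cue is
    # enough to recover it (content-addressable, distributed storage).
    rng = np.random.default_rng(0)
    pattern = rng.choice([-1, 1], size=32)        # stand-in for "the dollar bill"

    W = np.outer(pattern, pattern).astype(float)  # Hebbian one-shot storage
    np.fill_diagonal(W, 0)                        # no self-connections

    cue = pattern.copy()
    flip = rng.choice(32, size=8, replace=False)  # corrupt a quarter of the bits
    cue[flip] *= -1

    state = cue
    for _ in range(5):                            # let the network settle
        state = np.sign(W @ state)

    print(np.array_equal(state, pattern))         # True: recalled, not "read out"
    ```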

    Author is half arguing a point absolutely nobody makes, and half arguing that "human brains are super duper special and can never be represented by machinery because magic", which is a very tired philosophical argument. Human brains are amazing and continue to exceed our understanding, but they are just shifting information around in patterns, and that's a simple physical process.

  • Meh, this is basically just someone being Big Mad about the popular choice of metaphor for neurology. Like, yes, the human brain doesn't have RAM or store bits in an array to represent numbers, but one could describe short-term memory with that metaphor and be largely correct.
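
    For instance, a deliberately crude sketch of that metaphor (mine, with the classic "magical number seven" as the buffer size, which is a folk figure rather than a mechanism):

    ```python
    from collections import deque

    # A deliberately crude model of the short-term-memory metaphor: a
    # fixed-capacity buffer where new items displace the oldest ones.
    short_term_memory = deque(maxlen=7)

    for digit in "2025551234":          # try to hold a 10-digit phone number
        short_term_memory.append(digit)

    print("".join(short_term_memory))   # '5551234': the area code fell out
    ```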

    Biological cognition is poorly understood primarily because the medium it runs on is incomprehensibly complex. Mapping out the neurons in a single cubic millimeter of a human brain takes literal petabytes of storage, and that's just a static snapshot. But ultimately it is something that occurs in the material world under the same rules as everything else, and it does not have some metaphysical component that makes it impossible to simulate in software, much the same way we'd model a star's life cycle or galaxy formation; it's just unimaginable with current technology.

  • A spectre is haunting Hexbear — the spectre of UlyssesT.

  • We really don't know enough about the brain to make any sweeping statements about it at all beyond "it's made of cells" or whatever.
    Also, Dr. Epstein? Unfortunate.

  • On a more serious note, techbros' understanding of the brain as a computer is just their wish to bridge subjectivity and objectivity. They want to be privy to your own subjectivity, perhaps even more privy to your own subjectivity than you yourself. This desire stems from their general contempt for humanity and life in general, which pushes them to excise the human out of subjectivity. In other words, if you say that the room is too hot and you want to turn on the AC, the techbro wants to be able to pull out a gizmo and say, "uh aktually, this gizmo read your brain and it says that your actual qualia of feeling hot is different from what you're feeling right now, so aktually you're not hot."

    Too bad for the techbro you can never bridge subjectivity and objectivity. The closest is intersubjectivity, not sticking probes into people's brains.

  • This was a really cool and insightful essay, thank you for sharing. I've always conceptualized the mind as a complex physical, chemical, and electrical pattern (edit: and a social context). If I were to write a sci-fi story about people trying to upload their brain to a computer, I would really emphasize how they can copy the electrical part perfectly, but then the physical and chemical differences would basically kill "you" instantly, creating a digital entity that is something else. That "something else" would be so alien to us that communication with it would be impossible, and we might not even recognize it as a form of life (although maybe it is?).

  • My brain is a pentium overdrive without a fan and I am overheating

  • I'm glad he mentioned that we aren't just our brains, but also our bodies and our historical and material contexts.

    A "mind upload" would basically require a copy of my entire brain, my body, and a detailed historical record of my life. Then some kind of witchcraft would be done to those things to combine them into the single phenomenal experience of me. Basically:

  • here are some more relevant articles for consideration from a similar perspective, just so we know it's not literally just one guy from the 80s saying this. some cite this article as well but include other sources. the authors are probably not 'based' in a political sense; i don't condone the people, just the arguments in some parts of the quoted segments.

    https://medium.com/@nateshganesh/no-the-brain-is-not-a-computer-1c566d99318c

    Let me explain in detail. Go back to the intuitive definition of an algorithm (remember this is equivalent to the more technical definition): "an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input." Now if we assume that the input and output states are arbitrary and not specified, then the time evolution of any system becomes computing its time-evolution function, with the state at every time t becoming the input for the output state at time (t+1); that is too broad a definition to be useful. If we want to narrow the usage of the word computers to systems like our laptops, desktops, etc., then we are talking about those systems in which the input and output states are arbitrary (you can make Boolean logic work with either physical voltage high or low as Boolean logic zero, as long as you find suitable physical implementations) but are clearly specified (voltage low = Boolean logic zero, generally, in modern-day electronics), as in the intuitive definition of an algorithm… with the most important part being that those physical states (and their relationship to the computational variables) are specified by us! All the systems that we refer to as modern-day computers, and want to restrict our usage of the word computers to, are in fact created by us (or by our intelligence, to be more specific); we decide what the input and output states are.

    Take your calculator for example. If you wanted to calculate the sum of 3 and 5 on it, it is your interpretation of the pressing of the 3, 5, + and = buttons as inputs, and of the number that pops up on the LED screen as output, that allows you to interpret the time evolution of the system as a computation, and that imbues the calculator with its computational property. Physically, nothing about the electron flow through the calculator circuit makes the system's evolution computational. This extends to any modern-day artificial system we think of as a computer, irrespective of how sophisticated the I/O behavior is. The input and output states of an algorithm in computing are specified by us (and we often have agreed-upon standards for what these states are, e.g. voltage lows/highs for Boolean logic lows/highs). If we miss this aspect of computing and then think of our brains as executing algorithms (that produce our intelligence) like computers do, we run into the following:

    (1) a computer is anything which physically implements algorithms in order to solve computable functions.

    (2) an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.

    (3) the specific input and output states in the definition of an algorithm, and the arbitrary relationship between the physical observables of the system and the computational states, are specified by us because of our intelligence, which is the result of… wait for it… the execution of an algorithm (in the brain).

    Notice the circularity? The process of specifying the inputs and outputs needed in the definition of an algorithm is itself supposed to be defined by an algorithm!! That process is of course a product of our intelligence/ability to learn: you can't specify the evolution of a physical CMOS gate as a logical NAND if you haven't already learned what NAND is, nor are you capable of learning it in the first place. And any attempt to describe it as an algorithm will always suffer from this circularity.
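
    the observer-relative part is easy to make concrete. a toy sketch (mine, not from the article): the same physical voltage table reads as NAND under one bit convention and as NOR under the other, so the computational description lives in the mapping we chose, not in the electrons.

    ```python
    # The same physical voltage behaviour, read under two human-chosen
    # conventions, implements two different logic gates. 'H' and 'L' are
    # voltages, not bits; the bit assignment is ours.
    physical_gate = {                 # output is LOW only when both inputs are HIGH
        ('L', 'L'): 'H',
        ('L', 'H'): 'H',
        ('H', 'L'): 'H',
        ('H', 'H'): 'L',
    }

    def interpret(volts_to_bits):
        """Translate the voltage table into a Boolean truth table."""
        return {(volts_to_bits[a], volts_to_bits[b]): volts_to_bits[out]
                for (a, b), out in physical_gate.items()}

    active_high = {'H': 1, 'L': 0}    # the usual convention
    active_low  = {'H': 0, 'L': 1}    # equally physically valid

    print(interpret(active_high))     # NAND truth table
    print(interpret(active_low))      # the very same circuit, now NOR
    ```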

    https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness

    And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.

    In 2017, the French neuroscientist Yves Frégnac focused on the current fashion of collecting massive amounts of data in expensive, large-scale projects and argued that the tsunami of data they are producing is leading to major bottlenecks in progress, partly because, as he put it pithily, “big data is not knowledge”.

    The neuroscientists Anne Churchland and Larry Abbott have also emphasised our difficulties in interpreting the massive amount of data that is being produced by laboratories all over the world: "Obtaining deep understanding from this onslaught will require, in addition to the skilful and creative application …"

    https://www.forbes.com/sites/alexknapp/2012/05/04/why-your-brain-isnt-a-computer/?sh=3739800f13e1

    Adherents of the computational theory of mind often claim that the only alternative theories of mind would necessarily involve a supernatural or dualistic component. This is ironic, because fundamentally, this theory is dualistic. It implies that your mind is something fundamentally different from your brain - it's just software that can, in theory, run on any substrate.

    By contrast, a truly non-dualistic theory of mind has to state what is clearly obvious: your mind and your brain are identical. Now, this doesn't necessarily mean that an artificial human brain is impossible - it's just that programming such a thing would be much more akin to embedded systems programming rather than computer programming. Moreover, it means that the hardware matters a lot - because the hardware would have to essentially mirror the hardware of the brain. This enormously complicates the task of trying to build an artificial brain, given that we don't even know how the 300 neuron roundworm brain works, much less the 300 billion neuron human brain.

    But looking at the workings of the brain in more detail reveals some more fundamental flaws with computational theory. For one thing, the brain itself isn't structured like a Turing machine. It's a parallel processing network of neural nodes - but not just any network. It's a plastic neural network that can in some ways be actively changed through influences of will or environment. For example, so long as some crucial portions of the brain aren't injured, it's possible for the brain to compensate for injury by actively rewriting its own network. Or, as you might notice in your own life, it's possible to improve your own cognition just by getting enough sleep and exercise.

    You don't have to delve into the technical details too much to see this in your life. Just consider the prevalence of cognitive dissonance and confirmation bias. Cognitive dissonance is the ability of the mind to believe what it wants even in the face of opposing evidence. Confirmation bias is the ability of the mind to seek out evidence that conforms to its own theories and simply gloss over or completely ignore contradictory evidence. Neither of these aspects of the brain are easily explained through computation - it might not even be possible to express these states mathematically.

    What's more, the brain simply can't be divided into functional pieces. Neuronal "circuitry" is fuzzy and, from a hardware perspective, it's "leaky." Unlike the logic gates of a computer, the different working parts of the brain impact each other in ways that we're only just beginning to understand. And those circuits can also be adapted to new needs. As Mark Changizi points out in his excellent book Harnessed, humans don't have portions of the brain devoted to speech, writing, or music. Rather, these are emergent - they're formed from parts of the brain that were adapted to simpler visual and hearing tasks.

    If the parts of the brain we think of as being fundamentally human - not just intelligence, but self-awareness - are emergent properties of the brain, rather than functional ones, as seems likely, the computational theory of mind gets even weaker. Think of consciousness and will as something that emerges from the activity of billions of neural connections, similar to how a national economy emerges from billions of different business transactions. It's not a perfect analogy, but it should give you an idea of the complexity. In many ways, the structure of a national economy is much simpler than that of the brain, and despite the fact that it's a much more strictly mathematical proposition, it's incredibly difficult to model with any kind of precision.

    The mind is best understood, not as software, but rather as an emergent property of the physical brain. So building an artificial intelligence with the same level of complexity as that of a human intelligence isn't a matter of just finding the right algorithms and putting it together. The brain is much more complicated than that, and is very likely simply not amenable to that kind of mathematical reductionism, any more than economic systems are.

  • If our brains were computers we wouldn't have computers.

  • A sidenote but you may like a book called Action in Perception. It's more of a survey of contemporary cognitive science in relation to perception, but still relevant to perceptual consciousness.

  • I just think of brains like they're anything else, a lot of arrangements of molecules, but animated by other molecules which are arranged in ways that self-replicate (RNA essentially). As a physical organ its structure changes as a result of feedback from continued survival and the environment. Just really complicated patterns of matter and energy that have emerged from the conditions on Earth.

  • I am going to write a paper titled "Subjectivity: A Culture-Bound Psychosis? And Can It Be Cured?" This is a deliberate act of violence and I am going to mail a hard copy to every philosophy department in the English-speaking world.
