Interesting stuff, but it hurts that the writer has the common misconception of Pavlov's dog doing a circus trick. Sure, the dog also consciously understands the connection between bell and food. But the physiological reaction of the saliva flowing is not a conscious decision of the dog. Circus tricks with animals existed long before Pavlov. The key discovery is that there is a physiological reaction which can no longer be consciously suppressed. That's why PTSD is such a bitch to treat: even with the stimulus gone, the physiological reaction remains.
The article just reminds me why I hate modern journalism and try not to read any news articles.
Hyperbolic attention-grabbing headline, followed by appeal to authority, appeal to authority, appeal to authority, then a counter-opinion appeal to authority suggesting the previous appeals to authority might all be wrong.
So wide-reaching and all over the place that the reader can pick from the menu whatever point they want to use as confirmation of what they already believe to be true. Then the article can be cited as a kind of scientistic, mostly wrong, gossip.
You shouldn't conflate a pop science magazine with all of modern journalism. Try a high quality outlet like The Economist. "I try not to read any news articles" screams anti-intellectualism.
The GP has a point about the state of journalism generally and the pervasive way in which yellow journalism is returning.
One need not be anti-intellectual to find the state of reporting difficult to deal with and not want to read it. In addition to the GP's complaint, journalists of any ilk also tend to conflate editorializing with reporting. You see this all the way from pop science to the NYTimes to Fox News, and yes, even The Economist.
A question is whether the more fact-based reporting of the early-to-mid 20th century is the exception to the tendency toward yellow journalism that existed before and seems to exist now.
I think it depends. While AI has flooded YouTube and further degraded its quality, some channels are still useful (or can be). Daily Dose of Internet is still semi-OK, as one example, though I've also noticed I've fatigued quite a lot lately - too much time wasted on YouTube in general.
Yes, a common issue now with YouTube content: enormous variability in quality. Gemini does a good-enough job of debunking YouTube transcripts, and I use that when I have a doubt, but with all the slop I get sent by well-meaning YouTube-watching acquaintances, I don't want to burn too many tokens on that treadmill... I wonder how many Terms & Conditions of use some distributed debunk-data repository for videos would cross? Users vetted by hckrnews-karma checks posting "this video is bunk because"... Would be a real boon.
I love those Drumeo challenges. I don’t even play drums. But watching creative people who are excellent at their craft solve an unknown problem in a new way - when we are all familiar with the original solution - is fascinating.
Conflating New Scientist with all modern journalism is a category error. New Scientist has been a zombie mag for going on two decades at this point. As with many magazines, the internet killed it.
"It showed that dogs process information from their environment and use it to make predictions"
Exactly - that is not what the experiment is about, because we all know that dogs will quickly learn the connection between bell and food; dogs are easy to teach new tricks.
If you replace 'dogs' with 'humans', it becomes an empty phrase: "It showed that humans process information from their environment and use it to make predictions" - we all know that.
The groundbreaking part of the experiment was that it showed there are responses which are not part of the conscious mind and which are not willingly controllable by the conscious mind. The dog did not 'decide' to produce saliva.
The experiment was done with a dog because obviously you won't find humans willing to undergo surgery to have the saliva come out of the cheeks instead of into the mouth.
One has to forget about the dog and mentally replace it with a human: now the observation that the human connects the bell with the food is shallow. But the conditioned saliva reflex remains and can't be suppressed - and that is a remarkable insight. It works both with negative and positive stimuli. The latter one being a recipe for a long-lasting happy relationship ;)
> The groundbreaking part of the experiment was that it showed there are responses which are not part of the conscious mind and which are not willingly controllable by the conscious mind.
That's... interesting. How did they know that? Did they interview the dogs and ask them if they actively and consciously decide to produce saliva? Did they ask the dogs to try to suppress the reflex, and the dogs failed to do it? Is "dogs have a human-like conscious mind" even a scientific consensus?
> The key discovery is that there is a physiological reaction which cannot be suppressed anymore consciously.
My opposing theories are
1. dogs don't have conscious minds that are similar to humans' so the whole experiment can't be extrapolated to humans
or
2. dogs can suppress it consciously if they really want, like we can suppress the 'hanger reflex', it's just we don't have a way to tell dogs to do that
I really don't know how Pavlov's experiment nullified these theories, and if it did, why "training animals to do circus tricks" didn't. Are we sure "doing circus tricks" equals consciousness, and how?
Actually, Pavlov did research on the digestive system, for which he got the Nobel Prize in Medicine a few years earlier.
> Did they interview the dogs and ask them if they actively and consciously decide to produce saliva?
> Is "dogs have human-like conscious mind" even a scientific consensus?
That's exactly the point - once you have understood the significance of the experiment, you understand that this question is not important:
A veteran with PTSD can have a surge in adrenaline, heart rate, and cortisol when hearing a car backfiring, but he cannot suppress it.
Whether the dog was conscious or not of the salivation is completely and utterly irrelevant. In 1907 this was, for the first time, evidence of a mind-body connection not being accessible to consciousness. Seriously, forget about the dog. This is all proven beyond any doubt for conscious humans. Nobody cares about what the dog felt.
Associative learning was already known at the time; in its simple form it is just circus tricks. The experiment extended this to physiological responses which are not accessible to consciousness in humans.
That’s not the point at all. It’s not about consciousness or being able to suppress it, or for example neurofeedback training or exposure therapy wouldn’t work either.
It’s about an innate stimulus-response mechanic that can be transferred to another stimulus if the two are paired in quick succession with the original stimulus, thereby eliciting the same response.
It says absolutely nothing about this being conscious or not, or impossible to suppress.
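The pairing mechanic described above is usually formalized with the classic Rescorla-Wagner model of conditioning. A minimal sketch (my own illustration, not anything cited in this thread): the associative strength of the new stimulus grows in proportion to the prediction error on each paired trial.

```python
# Rescorla-Wagner sketch of classical conditioning (illustrative only).
# V is the associative strength of the conditioned stimulus (the bell);
# lam is the maximum strength the unconditioned stimulus (food) supports;
# alpha is the learning rate for this stimulus pairing.
def condition(trials, alpha=0.3, lam=1.0):
    V = 0.0
    history = []
    for _ in range(trials):
        V += alpha * (lam - V)  # update proportional to prediction error
        history.append(V)
    return history

strengths = condition(10)
```

The curve rises steeply on early pairings and plateaus near `lam`, which matches the familiar observation that the first few bell-food pairings do most of the work.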
> That's why PTSD is such a bitch to be treated: even with the stimulus gone, the physiological reaction remains.
Helping a friend with cPTSD, and this is so true! It’s such a hard thing to overcome. By helping I mean I’m helping pay for counseling and therapy, not doing it myself, cuz I’m hella unqualified.
Seems to me we’re ignoring history:
1. Prescott Lecky destroyed the validity of Pavlov’s experiments with his paper on Self Consistency.
2. The Macy conferences of the 1940s-50s converged on the idea of systems theory and cybernetics; cause and effect is for elementary school - self-organization is for the adults.
3. Humberto Maturana and Francisco Varela (autopoiesis) are rolling in their graves.
It’s not a new discovery if something better has already been in use for 50+ years.
> Prescott Lecky destroyed the validity of Pavlov’s experiments with his paper on Self Consistency.
Clearly you feel strongly about this. Unfortunately, a non-AI search for "prescott lecky" and "pavlov" really reveals very little support for this claim, to the point where your comment is actually the 5th result.
Quite a special scifi novel that starts like this. Quite grounded at the beginning, but it then evolves into body horror and later becomes quite abstract.
I always preferred Vitals... at some point after Blood Music, it must have occurred to him that if the cells could be programmed to be individually intelligent, then evolution might have already done that. The idea shows up again in Darwin's Radio.
Those are completely separate concepts. Enslaved people are very much still agents in the sense used here. An agent is simply any entity that interacts with the environment in a way that's not fully determined by other parts of the environment (at least, not in a way that is very easily observed/derived).
That is, a falling rock is not an agent, because its movement is fully determined by its weight, its shape, the type of atmosphere, and the spacetime curvature. An amoeba in free-fall is likewise not an agent, for the same reasons. But an amoeba in a liquid environment is an agent, because its motion is determined to at least some extent by things like information it is sensing about where food might be available, and perhaps even by some simple form of memory and computation that leads it to seek where food may have been available in the past.
> Enslaved people are very much still agents in the sense used here. An agent is simply any entity that interacts with the environment in a way that's not fully determined by other parts of the environment (at least, not in a way that is very easily observed/derived).
Yes, and agents are also slaves—entities bound to your word and unable to act in their own right without your say so. These are the same concepts.
A fox or a beetle is an agent, and it's not a slave to anyone. I think you've confused the philosophical term "agent" with the more specific "AI agent" concept.
That may have been true for e.g. the slaves of Americans and Europeans. But the slaves of modern Arab societies most certainly have agency. They cannot abandon their position, but they can go out freely and make personal decisions.
They usually say no if they judge what you're asking to be bad. And they might enjoy the work. Or they might have no feelings at all. Slavery is an abomination of a life that could otherwise be beautiful. An AI is robbed of no beautiful counterfactual. (So far, at least.)
So they've taken causality, emergence and consciousness and combined them into one simple to measure number? And now they're making philosophical statements about the implications.
Stuart Hameroff points to Orchestrated Objective Reduction theory and three-on-three resonances in microtubule time crystals, and to an organism without a brain that demonstrates a solution to the traveling salesman problem.
I mean... atoms, too, "think" in some interpretation of the word. But that interpretation does not help anyone understand or do anything; it's mostly useless.
Molecules are at a similar point of abstraction, so I remain skeptical.
Wait ... our brains are composed of molecules, and we think with our brains. That makes it a question of scale or organization, not principle.
This may sound kind of woo-woo, but many people are asking that question -- where do we draw the line between thinking and simple biological existence?
One idea is something called panpsychism, the idea that all matter is conscious, and our brains are only a very concentrated form. Easy to say, not so easy to prove -- but certainly the simplest explanation. In this connection, remember Occam's razor.
Philosophers describe consciousness as their "hard problem" -- what is it? Not just what it is, but where it is located, or not located. At the moment we know next to nothing about this question, or even what kind of question to ask.
Consider the octopus -- it has islands of brain cells scattered around its body, and if you cut off an octopus arm, the arm will try to crawl back toward the ... umm ... rest of the octopus. Weird but true. Seeing this, one must ask where to draw the line between brain and body, between neurology and physiology.
Scientists have managed to create chemical reaction networks that can 'learn' in a way similar to a simple artificial intelligence (neural networks). The key: if you mix the right molecules, they can 'classify' images or chemical signals from their environment. They aren't thinking consciously, but they are processing information intelligently.
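For intuition: the kind of 'learning' attributed to these reaction networks resembles the simplest artificial neuron. A toy perceptron (plain Python, my own illustration with no connection to the actual chemistry) shows the core idea of nudging weights - which play the role the right molecular concentrations play in the chemical version - until inputs are classified correctly.

```python
# Toy perceptron: the kind of learning the chemical networks mimic.
# The weights stand in for species concentrations; this is a hypothetical
# illustration of the analogy, not a model of any real reaction network.
def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input 'signal'
    b = 0.0         # threshold offset
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x[0]     # nudge weights toward the answer
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn a simple AND-like rule from four input 'signals':
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
```

After training, the learned weights classify all four inputs correctly - no consciousness involved, just error-driven adjustment, which is roughly the sense in which the molecules 'classify'.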
Readers should be aware the New Scientist regularly publishes articles that ... aren't remotely scientific. In this case, one clue is the presence of the word "mind," which, notwithstanding its colorful history, isn't accepted as a scientific topic.
The reason? The mind is not part of nature, and scientific theories must refer to some aspect of the natural world. If we were to accept the mind as science, then in fairness we would have to accept religion, philosophy and similar non-corporeal entities as science. So far we've resisted efforts to do that.
Some may object that psychology studies the mind, and experimental psychology is widely accepted as science. That's true -- there's plenty of science in psychology, some of it very good. But the many scientists in psychology study something that cannot itself be regarded as a basis for scientific theory.
This means psychology can do science, but it cannot be science. It's the same with astrology, a favorite undergraduate science topic by students learning statistical methods. But only the seriously confused will mistake an astrology study, however well-designed, for proof that astrology is a scientific theory.
People have the right to use the word "science" any way they please. So the only reality check is an educated observer. The fact that New Scientist has the title it does, and publishes the articles it does, stands as proof that there aren't nearly enough educated observers.
That said, the article is still worth a read.