AI firm Anthropic has started a research program to look at AI 'welfare' - as it says AI can communicate, relate, plan, problem-solve, and pursue goals, along with other human characteristics.

www.anthropic.com
Exploring model welfare

When AI gets to the point of recursively improving itself, is that a version of 'life' as we know it? Perhaps with humans as the ultimate parent? In a sense, those AIs would be our descendants.
My problem with Big Tech leading these efforts is that they are so often anti-human-welfare; why would we trust them with anyone else's welfare? Big Tech's desire for zero regulation is one expression of how little concern they have for other humans. The ease with which all the Big Tech firms help the military slaughter tens of thousands of civilians is another. I can't help thinking they'll use any effort to elevate AI 'welfare' to harm the interests of inconvenient humans, which, to them, means most of us.
This is marketing, pure and simple. There is no reasoning AI within any of these companies' grasp.
I disagree. There are definitely people who sincerely believe in AI 'consciousness'. Ironically, they are usually the first to throw around terms like 'woo woo' in any discussion of human consciousness.