I'm not saying humans are always aware of when they're correct, merely how confident they are. You can still be confidently wrong and know all sorts of incorrect info. LLMs aren't aware of anything like self-confidence.
That's fine. I do that often. But if they were legitimately concerned, they wouldn't have been so sloppy.
The key difference is humans are aware of what they know and don't know, and when they're unsure of an answer. We haven't cracked that for AIs yet. When AIs do say they're unsure, that's their understanding of the problem, not an awareness of their own knowledge.
AIs use a lot fewer resources rn, but humans are also constantly doing a hundred other things beyond answering questions
And how do you think it predicts that? All that complex math can be clustered into higher-level structures. One could almost call it... thinking. Besides, we have reasoning models now, so they can emulate thinking if nothing else
Some sort of universal microtransaction layer is the dream. I believe there's also a proposed web standard for it. Scroll was also making it work before they got bought by Twitter
Yeah I think we're going to be grappling with this issue for at least the next decade. The traditional web model falls apart under AI
"Alright, I want to rip a video from a specific website, and this program I found says it can do exactly that for me. Sweet! I just love the convenience of modern technology!"
What do you define as thinking if not a bunch of signals firing in your brain?