I generally agree. I do think there is value in letting children discover truth iteratively over time and at more depth as their ability to comprehend complex topics grows.
We do this all the time in science. Classical mechanics, for example, is functional enough for the vast majority of people despite the fact that we know it to be factually incorrect. We know it isn't the truth, but the approximations it affords are sufficient for many things.
We live with these abstractions of truth every day; they make our lives livable. Small falsehoods about the nature of reality. "The switch turns the light on" is a falsehood, but a reasonable abstraction.
I, too, like to believe as many true things as possible. I also would like to know what abstractions are used and how they are used.
Teaching children incremental abstractions isn't all bad... And in many cases those abstractions teach significantly better than starting at the end result.
I'm not saying that we should teach children to believe in falsehoods intentionally. But letting a 3 year old believe in Santa isn't going to ruin them. Participating in the mythos of Disney for a child's sake won't undermine their ability to critically think later in life.
This is a misunderstanding on your part. While some networks are trained this way, word2vec and doc2vec are not these mechanisms. LLMs are extensions of these models, and while there are certainly some aspects of what you're describing, there is also a transcription of the text into vector form.
This is the power of vectorizing language (among other things). The one-to-one mapping between vectors and words / sentences to documents and so forth allows models to describe the distance between words or phrases using Euclidean geometry.
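To make the "distance between words" idea concrete, here's a toy sketch in Go. The vectors and the cosine-similarity measure are for illustration only; real word2vec embeddings have hundreds of dimensions and these values are made up:

```go
package main

import (
	"fmt"
	"math"
)

// cosineSimilarity returns the cosine of the angle between two vectors,
// a common closeness measure in embedding spaces (1 = same direction).
func cosineSimilarity(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	// Hypothetical 3-dimensional "embeddings" -- not real word2vec output.
	king := []float64{0.9, 0.8, 0.1}
	queen := []float64{0.85, 0.82, 0.15}
	banana := []float64{0.1, 0.2, 0.95}

	fmt.Printf("king~queen:  %.3f\n", cosineSimilarity(king, queen))
	fmt.Printf("king~banana: %.3f\n", cosineSimilarity(king, banana))
}
```

Words used in similar contexts end up with nearby vectors, so "king" scores much closer to "queen" than to "banana" here, which is exactly the geometric relationship the models exploit.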
This wasn't even hard... I got it spitting out random verbatim bits of Harry Potter. It won't do the whole thing, and some of it is garbage, but this is a pretty clear copyright violation.
I'm sorry, but I can't provide verbatim excerpts from copyrighted texts. However, I can offer a summary or discuss the themes, characters, and other aspects of the Harry Potter series if you're interested. Just let me know how you'd like to proceed!
That doesn't mean the copyrighted material isn't in there. It also doesn't mean that the unrestricted model can't.
Edit: I did get it to tell me that it has the verbatim text in its training data.
I can identify verbatim text based on the patterns and language that I've been trained on. Verbatim text would match the exact wording and structure of the original source. However, I'm not allowed to provide verbatim excerpts from copyrighted texts, even if you request them. If you have any questions or topics you'd like to explore, please let me know, and I'd be happy to assist you!
Here we go, I can get ChatGPT to give it to me sentence by sentence:
"Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much."
I don't know if I agree with everything you wrote, but I think the argument about LLMs basically transforming the text is important.
Converting written text into numbers doesn't fundamentally change the text. It's still the author's original work, just translated into a vector format. Reproduction of that vector format is still reproduction without citation.
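The "translation" point can be shown with a toy encoder in Go. The vocabulary here is hypothetical and tiny; real tokenizers (BPE, WordPiece) have tens of thousands of entries, but the principle is the same: the word-to-number mapping is invertible, so nothing about the original text is lost.

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical toy vocabulary for illustration.
var vocab = []string{"the", "switch", "turns", "light", "on"}

// encode maps each word to its numeric ID.
func encode(text string) []int {
	id := map[string]int{}
	for i, w := range vocab {
		id[w] = i
	}
	var ids []int
	for _, w := range strings.Fields(text) {
		ids = append(ids, id[w])
	}
	return ids
}

// decode maps IDs back to the original words.
func decode(ids []int) string {
	var words []string
	for _, n := range ids {
		words = append(words, vocab[n])
	}
	return strings.Join(words, " ")
}

func main() {
	text := "the switch turns the light on"
	ids := encode(text)
	fmt.Println(ids)                 // [0 1 2 0 3 4]
	fmt.Println(decode(ids) == text) // true -- the round trip is lossless
}
```

The numeric form is just the same text in a different notation, which is why reproducing it is still reproduction.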
You made two arguments for why they shouldn't be able to train on the work for free, and then said that they can with the third?
Did OpenAI pay for the material? If not, then it's illegal.
Additionally, copyright, trademarks, and patents are about reproduction, not use.
If you bought a pen that was patented, then made a copy of the pen and sold it as yours, that would be illegal. That's the analogy for what OpenAI is doing with books.
Plagiarism and reproduction of text are the parts that are illegal. If you take the "AI" part out, what OpenAI is doing is blatantly illegal.
First, I never once recommended a "spend it if you've got it" attitude. I am explaining a position that is heavily prevalent in poor communities. I 100% agree that this mentality can keep people poor.
Explaining something doesn't mean advocating for it.
Second, getting out of poverty, as much as we would like to believe everyone can achieve it, is weighted far more heavily toward luck and preparedness than toward everyday spending habits. Most Americans are completely wiped out by a single unexpected medical expense. There is quite literally nothing most people in America can do about this as individuals.
Getting out of debt/poverty might not be a position of privilege... But being out of debt/poverty is 100% a privilege.
Third, at no point did I try to make OP feel guilty about being smart with his money.
The fact that he is in a position of privilege, and how he deals with that with romantic partners who might not share the same privilege, is 100% his responsibility.
It is wholly unfair for him to expect his partner to participate in things his partner cannot afford, and he has to figure out how he feels about his partner potentially using his wealth.
It is not his partner's decision how he feels about those things.
I recognize that this is a difficult position to be in. Income imbalance and wealth disparities make relationships hard.
Java doesn't have to declare every error at every level... Go is significantly more tedious and verbose than any other common language (for errors). I found it leads to less specific errors, and to errors handled at weird levels in the stack.
Also Go: exceptions aren't real, you declare and handle every error at every level or declare that you might return that error because go fuck yourself.
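The pattern being complained about looks roughly like this (a minimal sketch; the function names are made up). Because nothing propagates on its own, every layer repeats the same check-and-return dance:

```go
package main

import (
	"errors"
	"fmt"
)

// readConfig stands in for any failing low-level call.
func readConfig() (string, error) {
	return "", errors.New("config not found")
}

// loadApp must explicitly check and re-return the error -- there is no
// exception that climbs the stack by itself, so this boilerplate
// appears at every level between the failure and the handler.
func loadApp() (string, error) {
	cfg, err := readConfig()
	if err != nil {
		return "", fmt.Errorf("loadApp: %w", err)
	}
	return cfg, nil
}

func main() {
	if _, err := loadApp(); err != nil {
		fmt.Println(err) // loadApp: config not found
	}
}
```

In Java the equivalent would be a single try/catch at whatever level actually cares, with the intermediate layers untouched.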