Yes, I think you are right. And I think it is borderline a mental illness if you can't stop lashing out. As I understand it, she somehow thinks that by bashing trans women she is doing something good for women, that trans women are somehow taking away her womanhood or something like that. I have read something like this from Rowling several times, but I have no clue how trans women could do that. But Rowling is obsessed with it, for whatever reason.
This is mental illness by now! Seriously, wtf? Why is this so important to her that she can't stop talking about it? If I had some irrational hatred of trans women, I would not go on about it in public all the time.
Don't we have more important problems than bashing people who are so unhappy with their body that they are willing to take hormones and let people operate on their genitals?
This is such a simple thought, everybody should be able to think it, right? But on the other hand, she is not the only one who hates transgender women or men. I mean, it is not right to hate people for that. But if I hated trans people, I would just not invite them to dinner and would stop talking about them all the time.
It must be some form of mental illness; I have no other explanation.
LLMs are neural networks! Yes, they are trained with meaningful text to predict the following word, but they are still NNs. And after they are trained with human-generated text, they can also be trained further with other sources and in other ways. The question is how an interaction between LLMs should be evaluated. When does an LLM find one good word, or a series of them? I have not described this, and I am also not sure what would be a good way to evaluate it.
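Just to make the "predict the following word" part concrete, here is a minimal sketch of one training step in PyTorch-style Python. `model`, `optimizer` and `token_batch` are placeholders I made up, not any specific LLM's API, and a real training setup is of course far more involved:

```python
import torch
import torch.nn.functional as F

def next_token_training_step(model, optimizer, token_batch):
    # token_batch: integer tensor of shape (batch, sequence_length)
    inputs = token_batch[:, :-1]     # every token except the last
    targets = token_batch[:, 1:]     # the same sequence shifted by one position
    logits = model(inputs)           # assumed shape: (batch, seq_len - 1, vocab_size)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),   # flatten all positions
        targets.reshape(-1),                   # compare predicted vs. actual next token
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```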
Anyway, I am sad now. I was looking forward to having some interesting discussions about LLMs. But all I get is downvotes and comments like yours that tell me I am an idiot without telling me why.
Maybe I did not articulate my thoughts well enough. But it feels like people want to misinterpret what I'm saying.
Yes, that is true. The last 10-20% are usually the hardest. I think LLMs will only become slightly better with each generation at first. My prediction is that there will be another big step forward towards AGI when these models can learn from interacting with themselves. And this might also result in a potentially dangerous AGI.
Well, getting a concept of how physics works (balancing, in your example) only from being trained on (random?) still images is a lot to ask imo. But these picture-generating NNs can produce "original" pictures. They can draw a spider riding a bike. It might not look very good, but it is no easy task. LLMs aren't very smart compared to a human. But they have a huge amount of knowledge stored in them that they can access and also combine to a degree.
Yes, well, today's LLMs would not produce anything if they talked to each other. They can't learn persistently from any interaction. But if they become able to in the future, that is where I think it will go in the direction of AGI.
Well, LLMs don't learn from any interaction at the moment. They are trained, and after that one can interact with them, but they don't learn anymore. You can fine-tune the model with recorded interactions later, but they do not learn directly. So what I am saying is: if this changes and they keep learning from interactions, as we do, there will be a breakthrough. I don't understand why you are saying that's not how it works when I am clearly talking about how it might work in the future.
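To illustrate what I mean by "keep learning from interactions": something like the following loop, where the weights are updated after every single exchange instead of staying frozen. This is purely hypothetical; `model.generate` and `model.next_token_loss` are made-up placeholder methods, and no deployed LLM works this way today:

```python
def chat_with_online_learning(model, optimizer, tokenizer, user_turns):
    for user_text in user_turns:
        # normal inference: produce a reply with the current weights
        reply = model.generate(tokenizer(user_text))
        # hypothetical online update: immediately train on this one exchange
        tokens = tokenizer(user_text + reply)
        loss = model.next_token_loss(tokens)   # same objective as pre-training
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                       # the model has now changed a little
```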
I also don't understand why you get upvoted for this and I get downvoted just for posting my thoughts about LLMs. To be clear, it is totally fine to disagree with my thoughts, but why downvote them?
Well, our natural languages have developed over thousands of years. They are really good! We can use them to express ourselves, and we can use them to express the most complicated things humans are working on. Our natural languages are not holding us back! Or maybe the better take is: if a language is not sufficient, we expand it as necessary. We develop new special words and meanings for special subjects. We developed math to express and work with the laws of nature in a very compact and efficient way.
Understanding and working with language is the key to AGI.
Yes, big NNs use a lot of power at the moment. A funny example: when DeepMind's AlphaGo engine beat one of the best human players, the human mind was running on something like 20 W, while AlphaGo needed something like a thousand times that. And the human even won a game with his 20 W :)
And yes, you are right, AI systems learn very inefficiently compared to a human brain. They need a lot more data/examples to learn from. When the AlphaZero chess engine learned by playing against itself, it played tens of millions of chess games in a matter of days. A lot more than a human could ever play in a lifetime.
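A rough back-of-the-envelope comparison (the human numbers are just my assumptions, and the AlphaZero figure is the roughly 44 million self-play chess games reported for it):

```python
human_games = 10 * 365 * 60      # say 10 games a day, every day, for 60 years
alphazero_games = 44_000_000     # approx. self-play chess games reported for AlphaZero
print(human_games)                            # 219,000 games in a very active lifetime
print(round(alphazero_games / human_games))   # roughly 200x more, in days instead of decades
```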
Well, of course there is a lot of hype around it. And it probably is overhyped at the moment. But there will be a next breakthrough in AI/LLMs. I don't know when, but I think it will be when AIs learn by interacting with other AIs.
Well, me as a human, yes! We all constantly have an inner dialogue that helps us solve problems. And LLMs could do this as well. In principle it is not so different from playing chess against yourself. As far as I know, these chess NNs play against older versions of themselves to learn, so it doesn't have to be a game against an exact copy of itself.
Some of the training of image generators is done by two different AIs. AI-1 learns to differentiate between generated and real images, and AI-2 tries to trick AI-1 by generating images that AI-1 can't distinguish from real ones. They both train each other! And the result is that AI-2 can create images that are very close to real images, all without any human interaction. But they do need real images as training data.
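This setup is known as a generative adversarial network (GAN). Here's a minimal sketch of one training step in PyTorch-style Python; the two networks and their optimizers are placeholders, I'm assuming the discriminator outputs a probability per image, and a real GAN training loop has many more details:

```python
import torch
import torch.nn.functional as F

def gan_training_step(generator, discriminator, g_opt, d_opt, real_images, noise_dim=100):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # AI-1 (discriminator): learn to tell real images from generated ones
    noise = torch.randn(batch, noise_dim)
    fake_images = generator(noise).detach()      # don't update the generator in this step
    d_loss = (F.binary_cross_entropy(discriminator(real_images), real_labels)
              + F.binary_cross_entropy(discriminator(fake_images), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # AI-2 (generator): learn to produce images the discriminator labels as "real"
    noise = torch.randn(batch, noise_dim)
    g_loss = F.binary_cross_entropy(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```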
First of all, the take that LLMs are just parrots that can't think for themselves is dumb. They can, in a limited way! And they are an impressive step compared to what we had before them.
Secondly, there is the take that LLMs are dumb and make mistakes that take more work to correct than doing the work yourself from the start. That is something I often hear from programmers. That might be true for now!
But the important question is how they will develop! And now my take, which I have not seen anywhere else, even though it is quite obvious imo.
For me, the most impressive thing about LLMs is not how smart they are. The impressive thing is how much knowledge they have and how they can access and work with this knowledge. And they can do this with a neural network of only a few billion parameters.

The major flaw at the moment is their inability to know what they don't know and what they can't answer. They hallucinate instead of answering a question with "I don't know." or "I am not sure about this." The other flaw is how they learn. It takes a shit ton of data and a lot of time and computing power for them to learn. And more importantly, they don't learn from interactions; they learn from static data.

This is similar to what the company DeepMind did with their chess and go engines (also neural networks). They trained these engines with a shit ton of games that were played by humans, and they became really good that way. But then the second generation of their NN game engines did not look at any games played before. They only knew the rules of chess/go and then started to learn by playing against themselves. It took only a few days and they could beat their predecessors, which had needed a lot of human games to learn from.
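The self-play idea, very roughly, looks like this; `engine` and `new_game` are hypothetical placeholders, and this is nowhere near DeepMind's actual implementation, just the shape of the loop:

```python
def self_play_training(engine, new_game, iterations=100, games_per_iteration=1000):
    for _ in range(iterations):
        records = []
        for _ in range(games_per_iteration):
            game = new_game()                          # starts from the rules only
            while not game.is_over():
                game.play(engine.select_move(game))    # the current network picks every move
            records.append((game.moves(), game.result()))
        engine.update(records)                         # train on the outcomes of its own games
```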
So that is my take: the breakthrough will come when LLMs start to learn while interacting with humans, but more importantly with themselves. Teach them the rules (that is, the language) and then let them talk, or more precisely, let them play a game of asking and answering. It is more complicated than it sounds (how do you evaluate the winner of this game, for example?), but it can be done.
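One round of that asking-and-answering game could look something like this sketch; `asker`, `answerer` and `judge` are made-up placeholders, and how the judge should actually score an answer is exactly the open evaluation problem I mentioned:

```python
def question_answer_round(asker, answerer, judge):
    question = asker.generate("Ask a hard but answerable question.")
    answer = answerer.generate(question)
    score = judge.score(question, answer)          # 0.0 .. 1.0, the hard part to define
    answerer.learn(question, answer, score)        # reward good answers
    asker.learn(question, answer, 1.0 - score)     # reward questions that were hard to answer
    return question, answer, score
```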
And this is where AGI will come from in the future. It is only a question of how big these NNs need to be to become really smart and how much time they need to train. But this is also when AI can get dangerous: when they interact with themselves and learn from that without outside control.
The main problem right now is that they are slow, as you can see when you talk to them. And they need a lot of data, or in this case a lot of interactions, to learn. But they will surely get better at both in the near future.
What do you think? Would love to hear some feedback. Thanks for reading!
Abolish it completely and pay out the additional tax revenue as child benefit!
How did you know? :) 😘
At least you can't accuse the civil servants of not having heard the bang ...
I use Proton VPN as well as Proton Mail, Calendar, and Drive (Proton's cloud storage). I am also on Arch and quite satisfied with Proton.
That makes sense, thank you!
Is this with or without the Steam Deck?
Not that I don't like the Steam Deck, I think it is really great for Linux adoption. I am just curious.
A lot of good suggestions here already. Try to eliminate the mosquitoes in your house as much as possible. I installed mosquito nets on my windows a few years ago, and it helped a lot. I am now asking myself why I didn't do it earlier.
But I still get one or two bites a day, because I also like to be outside in my garden, and sometimes a mosquito still finds a way into the house.
So there is no way to prevent all bites. But the good thing is, you can treat them really well and really easily with heat! I do this when I have a cup of tea: I just press the hot tea cup onto the bite for a short while. There are also special pen-like devices called electronic insect bite healers or something similar, for about 10-20 euros. They work just as well and are probably safer and easier to handle.
Heat breaks down the anticoagulant that mosquitoes inject, which is what makes the bites so itchy. The bites I get itch only once: I treat them with heat and they are basically gone. Try not to scratch, because you might spread the anticoagulant further. Just treat them right away!
Yeah I am not sure. He might not be that well known by younger folks.
I just jumped on the Hyprland bandwagon (4 weeks ago). Very pleased with it so far!