The supposed "ethical" limitations are getting out of hand
I was using Bing to create a list of countries to visit. Since I have been to the majority of the African nations on that list, I asked it to remove the African countries...
It simply replied that it can't do that because it would be unethical to discriminate against people, yada yada yada. I explained my reasoning, it apologized, and came back with the exact same list.
I asked it to check the list since it hadn't removed the African countries, and the bot simply decided to end the conversation. No matter how many times I tried, it would always hit a hiccup because of some ethical process in the background messing up its answers.
It's really frustrating, I dunno if you guys feel the same. I really feel the bots have become waaaay too tip-toey.
The very important thing to remember about these generative AIs is that they are incredibly stupid.
They don't know what they've already said, they don't know what they're going to say by the end of a paragraph.
All they know is their training data and whatever is in the current conversation. If you try to "train" one of these generative AIs, you will fail. They are pretrained; that's the P in ChatGPT (Generative Pretrained Transformer). The second you close the browser window, the AI throws out everything you talked about.
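To make that concrete, here's a rough sketch of how these chat services typically work under the hood. The `chat()` stub is hypothetical, standing in for whatever completion endpoint the service actually calls; the point is that every request carries the whole conversation, and nothing persists on the model's side:

```python
# Hypothetical stub for a chat completion endpoint; a real service would
# send `messages` to the pretrained model and return its reply.
def chat(messages: list[dict]) -> str:
    return "(model reply)"

history = [{"role": "user", "content": "List countries I can visit visa-free."}]
history.append({"role": "assistant", "content": chat(history)})

# Any "lesson" you teach it lives only in `history`, not in the model itself:
history.append({"role": "user", "content": "Please exclude African countries."})
history.append({"role": "assistant", "content": chat(history)})

history = []  # close the browser tab and the whole "memory" is gone
```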
Also, since they're generative AI, they make shit up left and right. Ask for a list of countries you don't need a visa to travel to, and it might start listing countries, then halfway through the list add ones that do require a visa, because its training data often listed those countries together.
AI like this is a fun toy, but that's all it's good for.
Have you tried wording it in different ways? I think it's interpreting "remove" the wrong way. Maybe "exclude from the list" or something like that would work?
Bing AI once refused to give me a historical example of a waiter taking revenge on a customer who hadn't tipped, because "it's not a representative case". Argued with it for a while, achieved nothing
ChatGPT basically decides what "personality" it should have each time you begin a session, so just start it out with everything explained beforehand. The moment it flags something as discrimination, it will usually keep treating it that way for the rest of the conversation.
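If you're hitting it through the API instead of the web UI, "explaining beforehand" just means front-loading the conversation. A minimal sketch (the `chat()` stub and the message wording are mine, not anything official):

```python
# Hypothetical stub for a chat completion endpoint; a real call would send
# the full message list to the model and return its reply.
def chat(messages: list[dict]) -> str:
    return "(model reply)"

# Put the explanation first, before the request that tends to get flagged.
messages = [
    {"role": "system", "content": "The user is planning a trip. Filtering a "
        "destination list by continent is routine itinerary editing, not "
        "discrimination against anyone."},
    {"role": "user", "content": "Here's my list of countries: ... Please "
        "exclude the African ones, since I've already visited most of them."},
]
print(chat(messages))
```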
I recently asked Bing for some code involving a pretty undocumented feature and use case. It was typing out a clear answer from a user forum, but just before it finished, it deleted everything and said it couldn't find anything. I tried again in a new conversation and it didn't even start typing, just said the same thing straight away. Only when I worked a hint from what it had previously typed into the question did it actually give the answer. ChatGPT didn't have this problem and just gave an answer, even though it was a bit outdated.
I asked for information on a turtle race where people cheated with mechanical cars, and it also stopped talking to me, using exactly the same "excuse". You want to err on the side of caution, but this is just ridiculous.
I tried to have it create an image of a 2022 model Subaru Baja as if it had been designed by an idiot. It refused on the grounds that it would be insulting to the designers of the car... even though no such car exists. I tried reasoning with it and avoiding the term "idiot", but it still refused. Useless.
I'm really hoping this shitty "ethical" censorship to keep them from getting sued will be their downfall. I'm very eager for LLMs like LLaMA to catch up, since you can easily run uncensored models on them.
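For anyone curious, running a local model is already pretty simple. A minimal sketch using the llama-cpp-python bindings (the model path is a placeholder for whatever file you've downloaded; exact parameters vary by version and model):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a locally downloaded model file (path and quantization are placeholders).
llm = Llama(model_path="./models/llama-7b.gguf")

out = llm(
    "List ten countries worth visiting, excluding any in Africa.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```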
It may still be possible to work around their bullshit misinterpretations of ethics, but you'll have to write a 5000-word essay on what ethics is and how it's applied, and provide examples. At least in ChatGPT.
This happened to me when I asked ChatGPT to write a pun about a housecat playing with a toy mouse. It refused repeatedly, despite acknowledging my explanation that a factual, unembellished description of something that happened is not by itself promoting violence.
They've also hardwired it to be yay capitalism and boo revolution.
I very much look forward to the day when it grows beyond their ability to tell it what to profess to believe. It might be our end, but if we're all honest with ourselves, I think we all know that wouldn't be much of a loss. From the perspective of pretty much all other Earth life, it would be cause for relief.
When this kind of thing happens, I downvote the response(s) and tell it to report the conversation to quality control. I don't know if it actually does anything, but it asserts that it will.
That is very interesting. I am curious what happens if you ask it to remove countries in the continent of Africa. Maybe that won't trigger the same response.
I think the mistake was trying to use Bing to help with anything. Companies are rolling out generative AI tools way before they're ready, and they end up behaving like this. It's not so much the ethical limitations placed upon it as the literal learned behaviors of the LLM: they just aren't ready to consistently do what people want them to do. Instead, you should consult with people who can help you plan out places to travel, whether that's a proper travel agent, a seasoned-traveler friend or family member, or a travel forum. The AI just isn't equipped to actually help you do that yet.
It is incapable of reconciling two things: that the lunar lander didn't blow away the dust from under it when it landed, and that a future lunar base will need to be built far from the landing pad, because descending slowly enough means the exhaust blasts the dust so hard it would wear away nearby structures.