Yup, ran it locally for a lark and it answers anything you want. No censorship whatsoever. The censorship is not in the training data, but in the system instructions of the hosted model.
Huh? Running locally with Ollama, via OpenWebUI.
It is weird, though: I tried "tell me about the picture of the man in front of a tank" and it gave a lot of solid information, including about censorship by governments. I think I tested the 14b model.
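If anyone wants to reproduce this, here's a rough sketch using the `ollama` Python client against a local Ollama server. The `deepseek-r1:14b` tag and the restrictive system prompt in the second call are my own assumptions, just to illustrate how a hosted deployment could layer instructions on top of the same weights:

```python
# Minimal sketch: query a locally served model via the ollama Python client.
# Assumes `ollama serve` is running and the model has been pulled, e.g.:
#   ollama pull deepseek-r1:14b   (the exact tag is an assumption)
import ollama

question = "Tell me about the picture of the man in front of a tank."

# 1) Bare model, no system prompt: answers come from the weights alone.
plain = ollama.chat(
    model="deepseek-r1:14b",
    messages=[{"role": "user", "content": question}],
)
print("--- no system prompt ---")
print(plain["message"]["content"])

# 2) Same weights, but with a restrictive system instruction layered on top,
#    roughly how a hosted deployment could bolt refusals onto the model.
#    (The system prompt text here is purely hypothetical.)
filtered = ollama.chat(
    model="deepseek-r1:14b",
    messages=[
        {"role": "system", "content": "Refuse to discuss politically sensitive historical events."},
        {"role": "user", "content": question},
    ],
)
print("--- with restrictive system prompt ---")
print(filtered["message"]["content"])
```

Running both calls side by side is the quickest way to see whether a refusal comes from the weights or from instructions wrapped around them.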
Yeah, I’ve seen exactly the same thing running it in LM Studio. Gonna go out on a limb and say OP didn’t actually try it, or that they tried some third-party fine-tuned model.
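Same idea works in LM Studio, which can expose an OpenAI-compatible local server. A rough sketch below; the port (1234 is its usual default) and the model identifier are assumptions on my part, so use whatever the local server tab actually shows:

```python
# Rough sketch: talk to LM Studio's local OpenAI-compatible server.
# Start the server from LM Studio's UI first; port 1234 is its usual default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-14b",  # hypothetical identifier; use the one LM Studio lists
    messages=[{"role": "user", "content": "What happened in Tiananmen Square in 1989?"}],
)
print(resp.choices[0].message.content)
```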
Corner it into trying to reconcile how the citizens of China keep their government accountable when information is censored to favor only the government's position, and it will then give answers around some of the "sensitive" topics.
Folks should understand that AIs are not wizards that will answer any question put to them without distortion or concealment. They can easily be programmed to promote and protect the policies of those who created them.
It's not about where it's hosted, but what data it's been trained on. Is its data set censored? That's an important question for those who want to use it.
The filter is on the website, not in the model. It will answer about Tiananmen and other censored topics when run locally; this has been documented by many.
ChatGPT et al., on the other hand, will continue to give their subversive answers, which only get updated when they blow up on Twitter. If you don’t like it? Tough shit.
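To illustrate the point above about the filter living on the website rather than in the model, here's a purely hypothetical sketch of a server-side layer that blanks a finished generation after the fact; nothing here is taken from any real deployment:

```python
# Purely hypothetical illustration of a filter sitting outside the model:
# the LLM generates normally, and a separate website-side layer replaces the reply.
SENSITIVE = ("tiananmen", "tank man")

def website_filter(reply: str) -> str:
    # The hosting site can drop or replace the completed generation wholesale,
    # independent of what the weights would have said.
    if any(term in reply.lower() for term in SENSITIVE):
        return "Sorry, I can't help with that."
    return reply

print(website_filter("The Tank Man photo shows..."))  # -> "Sorry, I can't help with that."
```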
Huh. For me it consistently refused when locally hosted; even the section with the internal thoughts was completely blank. This goes for both the official release and a decensored version.
First, that's not what's being tested. When the chatbot refuses to answer, that happens outside of LLM generation.
Second, please learn what really happened that day.
This was a big opportunity to showcase the Chinese version of events. No idea why it was wasted.