Less positive model
I'm currently running Gemma3. It is really good overall, but one thing that is frustrating is the relentless positivity.
Is there a way to make it more critical?
I'm not looking for it to say "that is a shit idea", but I'd like less of the "that is a great observation" or "you've made a really insightful point" stuff.
If a human talked like that, I'd be suspicious of their motives. Since it is a machine, I don't think it is trying to manipulate me; I think the programming is just set too positive.
It may also be cultural: as a rule, New Zealanders are less emotive in our communication, and the LLM (to me) feels like an overly positive American.
Can't you just give it a system message that says "User prefers a professional conversational style", "User prefers a neutral attitude", "Shut the F*** up!", or similar? You probably have to experiment a little, but usually it works okay.
Edit: I think it is a problem that commercial corps especially are tuning their models to be as 'lick-user-ass' as possible. We need models that are less prone to licking users' asses for tech-lord popularity, and more prone to being 'normal' and usable.
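For what it's worth, here is a minimal sketch of that approach, assuming you're talking to a local OpenAI-compatible endpoint such as the one Ollama exposes (the base URL, API key, and model tag below are assumptions; adjust them for your setup):

```python
# Minimal sketch: steer Gemma3 toward a neutral tone with a system message.
# Assumes a local OpenAI-compatible server (e.g., Ollama's /v1 endpoint);
# the base_url, api_key, and model tag are assumptions, not a fixed recipe.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

SYSTEM_PROMPT = (
    "Respond in a neutral, professional tone. Do not compliment the user "
    "or praise their questions. No flattery, no filler; just answer."
)

response = client.chat.completions.create(
    model="gemma3",  # assumption: whatever tag your server uses
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Critique this plan: rewrite the app in Rust."},
    ],
)
print(response.choices[0].message.content)
```

In my experience the exact wording matters more than you'd expect, so treat the prompt text as a starting point rather than a recipe.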
I have been using this prompt:
It seems to be working well; most of the flowery language is gone and the responses are far more realistic (from my point of view).
I've had success restricting responses to 120 words or less.
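If you want to enforce the limit as well as ask for it, one hedged sketch is to put the word cap in the system prompt and set max_tokens as a hard backstop (the ~160-token figure for ~120 words is a rough rule of thumb, and the endpoint and model tag are again assumptions):

```python
# Sketch: request brevity in the system prompt and back it with a token cap.
# Assumes the same local OpenAI-compatible endpoint as the sketch above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="gemma3",   # assumption: your local model tag
    max_tokens=160,   # rough backstop for ~120 words (~1.3 tokens/word)
    messages=[
        {"role": "system",
         "content": "Answer in 120 words or less. Neutral, professional tone; no flattery."},
        {"role": "user", "content": "Summarise the trade-offs of 4-bit quantisation."},
    ],
)
print(response.choices[0].message.content)
```

Note that max_tokens truncates mid-sentence if the model overruns, so the prompt instruction should do most of the work; the cap is just insurance.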
Lick ass for more tokens; they're always gonna optimize for that.