I use Mixtral 8x7B locally and it's been great. I'm genuinely excited to see DDG offering it, and the service in general. Now I can use this service when I'm not on my network.
What GPU are you using to run it? And what UI are you using to interface with it? (I know of GPT4All and the generic-sounding ui-text-generation program or something.)
...complex almost entirely wholly-hallucinated answers that only have as much bearing on reality as 'some dude who is very talkative and heard about a bunch of stuff second-hand, and who is also high as balls and experiencing a manic episode where they think they know everything'
LOL. Yeah, sometimes answers can be very much "I'm winging it today," but certain prompts, especially for story ideas, can produce very interesting and usable results.
I've always said that if you know a lot about a subject, you can easily spot how AI generally tries to fake it till it makes it.
But if you have no idea about something, the answers you get are certainly better than what your buddy might tell you 😂
But to my point, it comes up with long-form content so fast that you wonder how the hell it actually processed the question that quickly.
Do we think they're going to charge for this once it's out of beta? Even though they've done great work on making it anonymous, I don't see anything about them not using the input/output as data to "better" their service. So perhaps it would remain free?