I just type "Speak to a human" until it relents. Usually takes 3-4 times. Kind of the chatbot equivalent of mashing 0 on telephone IVRs. The only one of its questions that I answer, after it agrees to get a human, is what I need support with, since that gets forwarded to the tech.
If by conversation you mean asking for a word by describing it conceptually because I can't remember it, every day. If you mean telling it about my day and hobbies, never.
I had fun with it a dozen times or so when it was new, but I'm not amused anymore. Last time was about a month ago, when someone told me about using ChatGPT to look up an answer, and I deliberately found a few prompts that made it spill obvious bullshit, so I could send screenshots making the point that LLMs aren't reliable and that asking them factual questions is a bad idea.
This is a crucial point that everybody should make sure their non-techie friends understand. AI is not good at facts. AI is primarily a bullshitter. It's really only useful where facts don't matter, like planning events, finding ways to spend time, creating art, etc.
If you're prepared to fact-check what it gives you, it can still be a pretty useful tool for breaking down unfamiliar things or for brainstorming. And I'm saying that as someone with a very realistic/concerned view of its limitations.
Used it earlier this week as a jumping-off point for troubleshooting a problem I was having with USMT (the User State Migration Tool) in Windows 11.
Maybe 1-3 times a day. I find that the newest version of ChatGPT (4o) typically returns faster, better-quality answers than a search engine, especially for queries that require a bit more conceptualization or are more bespoke (e.g., give me recipes to use up these 3 ingredients), so it has replaced search engines for me in those cases.
Not as much as I did at the beginning, but I mainly chalk that up to learning more about its limitations and getting better at detecting its bullshit. I no longer go to it for design work because it doesn't do it well at the scale I need. Now I mainly use it to refactor already-working code, to remember what a particular kind of feature is called, and to catch random bugs that usually turn out to be typos that are hard to spot visually. Past that, I only use it for code generation a line at a time with Copilot, or sometimes a function at a time if the function is super simple but tedious to type, and even then I only accept the suggestion I was already thinking of typing.
Basically it's become fancy autocomplete, but that's still saved me a tremendous amount of time.
The closest I come to chatting is asking GitHub Copilot to explain syntax when I'm learning a new language. I just needed to contribute a class library to an existing C# API, hadn't done OOP in 15 years, and had never touched .NET.
I forget how the conversation went, but one day a conversation I had with someone about comprehensibility (which was often an issue) compelled me to talk to an AI instead, a talk I remember because the AI did not have the comprehension issues the complaining humans did.
Yeah I’ve run into this a bit. People say it “doesn’t understand” things, but when I ask for a definition of “understand” I usually just get downvotes.
I ask additional questions or provide information from my side to get a better answer, but I'm still doing this to solve a problem or gather knowledge. I guess that counts as a conversation, but not a casual one.
I've never tried to have what I would call a conversation, but I use it as a tool both for fixing/improving writing and for writing basic scripts in AutoHotkey, which it's fairly good at.
Language models are good for removing the emotional work from customer service - either giving bad news in a very detached, professional way, or being polite and professional when what I want is to call someone a fartknocker.