Stack Overflow Just Announced Their Own AI OverflowAI

Let’s highlight the new features and products we announced today from the stage of WeAreDevelopers.

Does it tell you to Google the problem and then downvote you?
Hence recursion, since Google just takes you back, which leads to a stack overflow because there is no exit condition.
Which would be especially messed up if your original question was about recursion.
This bullshit happens too often lmao
"Googles problem, finds post"
"Why are you asking this use Google"
Gee, thanks
"to keep the quality of answers high, we may arbitrarily close questions, regardless of how many upvotes it gets and how helpful it is" - stackoverflow
That sounds so StackOverflow
Future is now old man
Good way to kill your own platform, the whole point is to ask questions to real people
I thought the point was a mental BDSM exercise where you come to others for help and are instead punished for your ignorance.
You're totally not wrong
It really puts their stance on "no AI generated answers" in a different light.
Basically, "no AI generated answers unless we do it".
Well, using AI-generated answers to train their own AI would bring down the quality of answers, and worse quality means less money. Don't you want them to make any money??!!
Stack Overflow is unique as a site, in the sense that its contributions are under a license that allows reuse (Creative Commons Share-Alike) as long as the individual users are properly credited. Does this mean that OverflowAI keeps the credit metadata and knows who wrote each individual part of an answer?
AI doesn't work that way. No one wrote "part of the answer." It's more like each contributor cast a vote on what the next token should be, and it randomly picks one of the top ten voted tokens. (Very, very roughly.)
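The "pick one of the top ten voted tokens" idea above is roughly top-k sampling. A toy sketch of that mechanism, with an entirely made-up vocabulary and made-up scores (a real model would produce these scores from a neural network, not a hand-written dict):

```python
import random

def top_k_sample(token_scores, k=10):
    """Pick the next token at random from the k highest-scoring candidates."""
    # Sort candidate tokens by score, highest first, and keep only the top k.
    top = sorted(token_scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
    tokens = [t for t, _ in top]
    weights = [s for _, s in top]
    # Sample one token, weighted by its score (the "votes" in the analogy).
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical scores over a tiny vocabulary, for illustration only.
scores = {"def": 5.0, "class": 3.0, "import": 2.0, "return": 1.0, "pass": 0.5}
next_token = top_k_sample(scores, k=3)
# next_token is always one of the 3 highest-scoring candidates,
# but which one varies from run to run.
```

This is why no single contributor "wrote" any part of the output: every training example nudges the scores a little, and the sampled token is a weighted draw over the whole pool.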
Then I'm guilty of breaking the license. I have always been stealing code from Stack Overflow. Well, since I'm a senior dev right now I steal only from answers.
It does seem to do that in the feature video. It appears to link to all the answers it pulled from.
Nice choice of logo colors, btw.
I just noticed...
The only answer you ever get is "Closed: Marked as duplicate question."
I use ChatGPT frequently for programming and I've found it to be pretty good.
The key is using its conversational nature, as this gets better results.
Start simple and expand. You can't just ask it to write huge chunks of code.
Yeah, it works well as long as the code is rather simple and occurred fairly often in the training set. But I seldom use it currently (I've got slightly more complex stuff going on). It's good for finding new stuff, though, as it often introduces a library I hadn't known yet. But actual code I'm writing myself (I've tried it often, and the quality just isn't there... and I think it even got worse over the last couple of months, as studies also suggest).
Agreed. I got ChatGPT to convert Python code to JavaScript and got back a code sample with new bugs.
I've found it great for asking documentation questions. It saves me a ton of time having to search through documentation myself. The problem is that when it encounters something it doesn't have information on, it'll just confidently make shit up, and if you're not enough of an expert to recognize when that happens, you can be misled. It still saves me time, but I use it as a recall tool to get me started when I'm learning something new; I'd never use the code it puts out without reading through it line by line. I'm also experienced enough to know when it's wrong and how to refactor its examples. People new to programming could be set down the wrong path by over-relying on GPT to teach them.
The code it gives me generally just throws me into the debug stage, skipping right over the me writing buggy code stage.
Good summary. For some people iterating over existing code is preferred.
For others writing new code (and not maintaining it) feels better.
I've gotten really good results asking ChatGPT for programming help. The problem is that it's wrong like 10% of the time, and when it's wrong it's very confidently incorrect. That wasn't a problem for me because I knew when it was wrong, could course-correct it, and it still saved me time getting to the right solution. But if someone who's still getting started is trying to use ChatGPT to learn, they could easily be misled because they won't know when its output is wrong.
I feel like a better solution is to post a generative-AI community answer on every new question and have folks upvote or downvote it like normal.
I don't think this is it. I wouldn't want newbies to wait for the community to tell them that running sudo rm -Rf /
or some other useless/dangerous command is a bad idea...
No users to answer questions? No problem…
Do we have a term for the combination of enshittification and LLMs?
Maybe add NFTs into the mix too. But don't tell wsb and the GME gang.
I'm not liking the announced changes to search. It sounds like we will be losing lexical search and, in exchange, getting the same technology that lets Google answer questions different from the one we asked.
How many minutes after starting to use OverflowAI until we get something like "As a large language model trained by the Stack Exchange Network, I cannot answer duplicate questions"?
That's when I go back to ChatGPT or Google Bard. They've helped me with problems with less aggravation than SO.
I look forward to the AI trend fizzling out. It's only slightly less silly than the cryptocurrency trend was.
It reminds me of 3D
AI exists because not everyone frequents a low toxicity forum like Lemmy.
This artificial pseudointelligence exists because there's the “gee whiz, that's cool” of a computer talking like a person, and a bunch of hype chasers looking to cash in. Much like cryptocurrency before it, and the dot-com boom before that, there is little substance to it, and most of it will be commercially irrelevant a decade from now.
I understand Google and Microsoft getting into it, as it makes sense as a "better" Google search, but for Stack Overflow it sounds like they've just given up on their current platform.
I get the whole community-resource thing and all that hoorah, but what bothers me the most is some C*O somewhere padding his bonus and CV, waiting for the ship to sink so he can move on to the next thing where he can sing praises to the AI revolution.
Many coding languages, mixed text and code, plain wrong answers (commented as such). What could go wrong?
They can DDoS themselves to show a rise in visits, but it won't help long-term.
Can someone tell me what their angle is? Are users supposed to curate and help train the model for free? Is it just a model trained on Stack Overflow data?
All their data is open, so what edge do they have over the already established competition?
This type of Q&A interface is very popular and stealing traffic away from sites like Google and Stack Overflow. Stack Overflow can train it on their data and has a feature where it links to every answer it pulled from. I think that's a nice feature and like that I can troubleshoot further on my own, as AI can often hallucinate an answer or lose a piece of context I need.
I think SO has had people hallucinating for some time but it wasn't AI driven
They only had to improve the search and keep it a human platform!
Well that explains why they did a 180 on their "no AI" rule, which has the mods in a tizzy.
Who knows, maybe it'll cut back on the toxicity in the sense that you don't have to interact with toxic people ¯\_(ツ)_/¯
Like toxic mods
tell me it's a joke, pls
Hah, good to know that even on programming@programming.dev there are people who agree that stack overflow moderation is too draconian to ask questions in anymore. It's a good resource, though, so an LLM will probably be the answer to make the knowledge base more usable without angering its elder gods.
Probably the same data that ChatGPT or Google Bard has been trained on, which to me makes the distinction moot.
Clearly read the title as Stack Overthrow AI
When you check the website's traffic, it seems a bit late to take such an action.
It seems pretty good, especially the VS Code extension, but people have already implemented many generative AI solutions out there.