We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.
When text editors automatically create templates for boilerplate, that's AI.
I wouldn't be surprised if it's technically true, but it's more like: a coder starts writing out a line of code, the AI autocompletes the rest of the line, and then the coder has to revise and usually edit it. And the amount of code, by character count, that the AI autocompleted is 25% of all new code. Same shit as GitHub's Copilot that came out years ago, nothing special at all.
None of that 25% of AI-generated code got there without being heavily initiated and carefully crafted by humans every single time to ensure it actually works.
It's such a purposeful misrepresentation of labour (even though the coders themselves all want to automate jobs away and exploit the rest of the working class too).
coder starts writing out a line of code, AI autocompletes the rest of the line, and then the coder has to revise and usually edit it. And the amount of code by character count that the AI autocompleted is 25% of all new code.
When you dig past the clickbait articles and find out what he actually said, you're correct. He's jerking himself off about how good his company's internal autocomplete is.
I assume that's what it is as well. I'm guessing there's also a lot of boilerplate stuff and they're counting line counts inflated by pointless comments, and function comment templates that usually have to get fully rewritten.
Lol, I'd bet 90% of that is of the same quality as the code you get when you measure productivity by lines written.
Another 9% is likely stolen.
The final 1% won't even compile, doesn't work right, or needs so much work you'd be better off redoing it.
The only useful result I've had with CS is asking for VERY basic programs that I then have to check the quality of. Besides that, I had ONE question that I knew would be answered in a textbook somewhere, but couldn't get a search hit for. (I think it was something about the most efficient way to insert or sort, or something like that.)
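For what it's worth, if that question was about inserting into an already-sorted sequence (an assumption; the comment doesn't say), the textbook answer is a binary search for the insertion point, which Python ships in the `bisect` module:

```python
import bisect

# bisect.insort finds the insertion index by binary search in O(log n)
# comparisons; the list insert itself is still O(n) element shifts.
items = [1, 3, 4, 7, 9]
bisect.insort(items, 5)
print(items)  # [1, 3, 4, 5, 7, 9]
```

That's exactly the kind of answer that lives in every algorithms textbook but can be hard to phrase as a search query.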
Worked with it a bit at work, and the output was so unreliable I gave up, took the best result it gave me, and hard-coded it so I could have something to show off. Left it as an "in the future..." thing, and last I heard it's still spinning in the weeds.
I often help beginners with their school programming assignments. They're often dumbfounded when I tell them "AI" is useless because they "asked it to implement quicksort and it worked perfectly".
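The quicksort case is telling: it's exactly the kind of textbook exercise a model has seen thousands of times in its training data, so of course it "works perfectly". A minimal version (the standard non-in-place formulation, for reference) is only a few lines:

```python
def quicksort(xs):
    """Textbook quicksort: not in-place, not the fastest, but correct."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    left = [x for x in xs if x < pivot]     # strictly less than pivot
    middle = [x for x in xs if x == pivot]  # equal to pivot
    right = [x for x in xs if x > pivot]    # strictly greater
    return quicksort(left) + middle + quicksort(right)

print(quicksort([3, 6, 1, 8, 2, 9, 4]))  # [1, 2, 3, 4, 6, 8, 9]
```

Reproducing this says nothing about whether a tool can handle a real codebase with real constraints.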
The next batch of software engineers are going to have huge dependency problems.
Riiiight. And I bet he'd tell you that 25% of their servers were powered by cold fusion if it were the newest thing that got investors to throw bags of money at them.
In what kind of workflow? Because if I start typing, my copilot generates 20 lines, and I edit those 20 lines down to 5 that will compile and bear little resemblance to what was generated, I feel like that should count as 0 AI lines, but I have a feeling it counts for more.
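Nobody outside Google knows the actual methodology, but a naive character-attribution metric (a sketch, assuming characters are credited at the moment a suggestion is accepted and never debited when later edits delete them) shows how that 20-lines-edited-down-to-5 workflow could still be counted heavily toward the AI:

```python
def ai_share(events):
    """Naive metric: credit characters when a suggestion is accepted.

    events: list of (source, char_count) pairs, source is 'human' or 'ai'.
    Edits that delete accepted AI text count as *human* typing and never
    reduce the AI's credit -- which is exactly the flaw in question.
    """
    ai_chars = sum(n for src, n in events if src == "ai")
    total_chars = sum(n for _, n in events)
    return ai_chars / total_chars

# Coder types 10 chars, accepts a 600-char (20-line) suggestion, then
# types 150 chars of edits that throw most of the suggestion away.
events = [("human", 10), ("ai", 600), ("human", 150)]
print(f"{ai_share(events):.0%}")  # 79% "AI-generated", per this metric
```

Under an accounting like this, a suggestion that gets almost entirely rewritten still inflates the headline number.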