What do you think about this one?

A Student Used AI to Beat Amazon’s Brutal Technical Interview. He Got an Offer and Someone Tattled to His University

I may be an old man yelling at clouds, but I still think programming skills aren't going anywhere. He seems to be betting his future on his 'predictions'.
The only thing AI will replace is small, standalone scripts/programs. 99% of the stuff companies actually write is highly integrated with existing code and with special (sometimes obsolete) frameworks, open and closed source. The things they throw at you in interviews are generic scripts, because they aren't going to tell you to read through 6k lines of Korn shell scripts to actually understand and extend their code. And they won't have you read and understand millions of lines of code, configs, and makefiles to implement something new in an interview.
He basically cheated against his little brother in Mario Kart and now thinks he can beat Verstappen on a real track. Bro needs a reality check in an actual work environment, especially at a large company.
I recently needed to implement some batch-processing logic in our code to work around some API-level restrictions. The code already pulls from the API in pages and date ranges, but if I specify a date range that's too wide, or a batch that would return too many records, the request gets rejected, so we need to break the range down and run it in smaller batches. I explained the issue to a junior developer, along with what we needed to add to the existing class in our codebase. I followed up with him after a week, and this is what he sent me.
Boilerplate code from ChatGPT that has almost nothing to do with what we discussed. And how can you even hand me supposedly 'working' code without ever testing it? He didn't even clone our repo and run it as-is to understand why we need what we need. AI sure is making programmers dumb.
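For the record, what I asked for is maybe twenty lines. A minimal sketch of the batching idea, assuming a hypothetical fetch_records client (all names here are made up for illustration, not our actual code):

```python
from datetime import date, timedelta

def date_batches(start, end, max_days):
    """Yield (batch_start, batch_end) windows no wider than max_days."""
    cur = start
    while cur <= end:
        batch_end = min(cur + timedelta(days=max_days - 1), end)
        yield cur, batch_end
        cur = batch_end + timedelta(days=1)

def fetch_all(start, end, max_days=30):
    # fetch_records is a stand-in for the existing paged API client,
    # not a real function from our codebase.
    records = []
    for batch_start, batch_end in date_batches(start, end, max_days):
        records.extend(fetch_records(batch_start, batch_end))
    return records
```

The real version would also shrink the window and retry when a batch still trips the record-count limit, but the shape of it is just this.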
I would argue that he is not a programmer
Does it at least actually have something to do with PayPal? And is pandas actually useful? lol
I can understand commenting ASM, and optimized C code can use some comments too, but commenting Python should be a crime: its function calls document themselves, and you'll usually split data across named variables anyway.
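A made-up toy example of what I mean (names are mine, purely for illustration):

```python
# The names already say everything a line-by-line comment would.
def monthly_revenue(orders):
    paid_orders = [order for order in orders if order["status"] == "paid"]
    return sum(order["amount"] for order in paid_orders)
```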
For now. Eventually, I'd expect LLMs to get better at ingesting a massive existing codebase and taking it into account, either planning an approach or spitting out a first iteration of the code. Holding large amounts of a language in memory and adding to it is their whole thing.
Hopefully we can fix the energy usage first.
I have a feeling that the Dyson sphere will end up powering only the AI of the world and nothing else.
They can ingest context and predict contextually relevant symbols, but will they ever do so in a way that is consistently meaningful and accurate? I personally don't think so, because an LLM is not a machine capable of logical reasoning. LLMs hallucinating is just them making bad predictions, and I don't think we're going to fix that regardless of the amount of context we give them. LLMs generating useful code in this context is like asking for perfect meteorological reports with our current understanding of weather systems, in my opinion. It feels like any system capable of doing what you suggest would need to be able to actually generate its own model that matches the task it's performing.
Or not. I dunno, I'm just some idiot software dev on the internet who only has a small amount of domain knowledge and who shouldn't be commenting on shit like this without having had any coffee.