How it started vs. How it's going
Was listening to my go-to podcast during morning walkies with my dog. They brought up an example where some couple was using ChatGPT as a couple's therapist, and what a great idea that was. Talking about how one of the podcasters has more of a friend-like relationship with "their" GPT.
I usually find this podcast quite entertaining, but this just got me depressed.
ChatGPT is made by the same company that stole Scarlett Johansson's voice. The same vein of companies that thinks it's perfectly okay to pirate 81 terabytes of books, despite definitely being able to afford to pay the authors. I don't see a reality where it's ethical, or indicative of good judgement, to trust a product from any of these companies with your information.
AI can be incredibly useful, but you still need someone with the expertise to verify its output.
I took a web dev boot camp. If I were to use AI, I would use it as a tool and not the motherfucking builder! AI gets even basic math equations wrong!
Holy crap, it’s real!
That is the future of AI written code: Broken beyond comprehension.
Ooh is that job security I hear????
This feels like the modern version of those people who gave out the numbers on their credit cards back in the 2000s and would freak out when their bank accounts got drained.
taste of his own medicine
But what site is he talking about?
I hope this is satire 😭
Yes, yes there are weird people out there. That's the whole point of having humans who understand the code: so they can correct it.
ChatGPT, make this code secure against weird people trying to crash and exploit it
beep boop
fixed 3 bugs
added 2 known vulnerabilities
added 3 race conditions
boop beeb
Roger Roger
Eat my SaaS
Ha, you fools still pay for doors and locks? My house is now 100% done with fake locks and doors, they are so much lighter and easier to install.
Wait! Why am I always getting robbed lately? It can't be my fake locks and doors! It has to be weirdos online following what I do.
To be fair, it's both.
Hilarious and true.
Last week some new up-and-coming coder was showing me their tons and tons of sites made with the help of ChatGPT. They all look great on the front end. So I tried to use one. Error. Tried to use another. Error. Mentioned the errors and they brushed it off. I am 99% sure they do not have the coding experience to fix the errors. I politely disconnected from them at that point.
What's worse is when a noncoder asks me, a coder, to look over and fix their AI-generated code. My response is "no, but if you set aside an hour I will teach you how HTML works so you can fix it yourself." Not one of these kids asking AI to code things has ever accepted, which, to me, means they aren't worth my time. Don't let them use you like that. You aren't another tool they can combine with AI to generate things correctly without having to learn things themselves.
100% this. I've gotten to where when people try and rope me into their new million dollar app idea I tell them that there are fantastic resources online to teach yourself to do everything they need. I offer to help them find those resources and even help when they get stuck. I've probably done this dozens of times by now. No bites yet. All those millions wasted...
I've been a professional full stack dev for 15 years and dabbled for years before that - I can absolutely code and know what I'm doing (and have used Cursor, and just deleted most of what it made for me when I let it run).
But my frontends have never looked better.
The fact that “AI” hallucinates so extensively and gratuitously just means that the only way it can benefit software development is as a gaggle of coked-up juniors making a senior incapable of working on their own stuff because they’re constantly in janitorial mode.
Plenty of good programmers use AI extensively while working. Me included.
Mostly as an advanced autocomplete, template builder, or documentation parser.
You obviously need to be good at it so you can see at a glance if the written code is good or if it's bullshit. But if you are good, it can really speed things up without any risk, as you will only copy code that you know is good and discard the bullshit.
Obviously you cannot develop without programming knowledge, but with programming knowledge it's just another tool.
I maintain a strong conviction that if a good programmer uses an LLM in their work, they just add more work for themselves, and if a less-than-good one does it, they add new exciting and difficult-to-find bugs while maintaining false confidence in their code and themselves.
I have seen so much code that looks good on first, second, and third glance but is actually full of shit, and I was only able to find that shit by doing external validation, like talking to the dev or brainstorming ways to test it: the things you categorically cannot do with an unreliable random-word generator.
So no change to how it was before then
Different shit, same smell
Depending on what it is you're trying to make, it can actually be helpful as one of many components to help get your feet wet. The same way modding games can be a path to learning a lot by fiddling with something that's complete, getting suggestions from an LLM that's been trained on a bunch of relevant tutorials can give you enough context to get started. It will definitely hallucinate, and figuring out when it's full of shit is part of the exercise.
It's like mid-way between rote following tutorials, modding, and asking for help in support channels. It isn't as rigid as the available tutorials, and though it's prone to hallucination and not as knowledgeable as support channel regulars, it's also a lot more patient in many cases and doesn't have its own life that it needs to go live.
Decent learning tool if you're ready to check what it's doing step by step, look for inefficiencies and mistakes, and not blindly believe everything it says. Just copying and pasting while learning nothing and assuming it'll work, though? That's not going to go well at all.
It'll just keep getting better at it over time though. The current AI is way better than it was 5 years ago, and in 5 years it'll be way better than now.
My hobby: extrapolating.
Past performance does not guarantee future results
Bonus points if the attackers use ai to script their attacks, too. We can fully automate the SaaS cycle!
That is the real dead Internet theory: everything from production to malicious actors to end users are all ai scripts wasting electricity and hardware resources for the benefit of no human.
That would only happen if we give our AI assistants the power to buy things on our behalf and manage our budgets. They will decide among themselves who needs what, and the money will flow to billionaires' pockets without any human intervention. If it goes far enough, not even rich people would be rich, as trust funds and stock portfolios would operate under AI. If the AI achieves singularity with that level of control, we are all basically in spectator mode.
Someone really should've replied with
My attack was built with Cursor
AI is yet another technology that enables morons to think they can cut out the middleman of programming staff, only to very quickly realise that we're more than just monkeys with typewriters.
Well I think I am a monkey with a typewriter...
Yeah! I have two typewriters!
We're monkeys with COMPUTERS!!!
To be fair, if this guy had hired a dev team, the same thing could have happened.
But then they'd have a dev team who wrote the code and therefore knows how it works.
In this case, the hackers might understand the code better than the "author" because they've been working in it longer.
True, any software can be vulnerable to attack.
But the difference is that a technical team of software developers can mitigate an attack and patch it. This guy has no tech support other than the AI that sold him the faulty code, which likely assumed he did the proper hardening of his environment (he did not).
Openly admitting you programmed anything with AI alone is admitting you haven't taken the basic steps to protect yourself or your customers.
This is satire / trolling for sure.
LLMs aren't really at the point where they can spit out an entire program, including handling deployment, environments, etc. without human intervention.
If this person is 'not technical' they wouldn't have been able to successfully deploy and interconnect all of the pieces needed.
The AI may have been able to spit out snippets, and those snippets may be very useful, but as it stands, it's just not going to be able to write the software, stand up the DB, and deploy all of the services needed with no human supervision/overrides. With human guidance, sure, but without someone holding the AI's hand it just won't happen (remember, this person is 'not technical').
My impression is that with some guidance it can put together a basic skeleton of complex stuff too. But you need a specialist level of knowledge to fix the mistakes that fail at compile time, or worse, the mistakes that compile but don't at all achieve the intended result. To me it has been most useful for getting the correct arguments for argument-heavy libraries like plotly (see the sketch below), remembering how to do stuff in bash, or learning something from scratch like three.js. As soon as you try to do something more complex than it can handle, it confidently starts cycling through the same couple of mistakes over and over. The keywords it spews in those mistakes can sometimes be helpful to direct your search online, though.
So it has the potential to be helpful to a programmer, but it can't yet replace programmers, as the tech bros like to fantasize.
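For what it's worth, this is the kind of "argument-heavy" call I mean: a minimal plotly sketch, using the iris sample data that ships with the library (the column choices here are just for illustration). Recalling this pile of keyword arguments is exactly the part the LLM is good at.

```python
import plotly.express as px

df = px.data.iris()  # sample dataset bundled with plotly

fig = px.scatter(
    df,
    x="sepal_width",
    y="sepal_length",
    color="species",             # color the points by category
    size="petal_length",         # scale marker size by another column
    hover_data=["petal_width"],  # extra column shown in the hover tooltip
    title="Iris measurements",
)
fig.show()
```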
Idk, I've seen some crazy complicated stuff woven together by people who can't code. I've got a friend who has no job and is trying to make a living off coding while, for 15+ years, being totally unable to learn coding. Some of the things they make are surprisingly complex. Though also, and the person mentioned here may do similarly, they don't ONLY use AI. They use GitHub a lot too. They make nearly nothing themselves, but go through GitHub and basically combine large chunks of code others have made with AI-generated code. Somehow they do it well enough to have done things with servers, cryptocurrency, etc., all the while not knowing any coding language.
That reminds me of this comic strip....
Claude Code can make something that works, but it's kinda over-engineered and really struggles to make an elegant solution that maximises code reuse - it's the opposite of DRY.
I'm doing a personal project at the moment and used it for a few days. Made good progress, but it got to the point where it was just a spaghetti mess of jumbled code, and I deleted it and went back to implementing each component one at a time and then wiring them together manually.
My current workflow is basically never let them work on more than one file at a time, and build the app one component at a time, starting at the ground level and then working up, so for example:
Create base classes that things will extend, then create an example data model class, and iterate on that architecture A LOT until it's really elegant.
Then I've been getting it to write me a generator, not the actual code for the models.
Then (level 3) we start with the UI layer, so now we make a UI kit the app will use and reuse for different components.
Then we make a UI component that will be used in a screen. I'm using Flutter as an example, so it would be a stateless widget.
We now write tests for the component
Now we do a screen, and I import each of the components.
It's still very manual, but it's getting better. You are still going to need a human coder, I think forever, but there are two big problems that aren't being addressed, because people are just putting their heads in the sand and saying "nah, can't do it", or, like the clown OP in the post, thinking they can do it.
That logic about how to scaffold and architect an app in a sensible way - USING AI TOOLS - is actually the new skillset. You need to know how to build the app, and then how to efficiently and effectively use the new tools to actually construct it. Then you need to be able to do code review for each change.
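To make the first step concrete, here's a rough sketch of the "base classes first, then an example model" layering. I'm in Flutter/Dart, but the idea reads the same in Python, and every name below is invented for illustration:

```python
from dataclasses import dataclass, asdict

@dataclass
class BaseModel:
    """Ground-level base class that every data model will extend.

    Serialization lives here exactly once, so the model classes the
    generator later spits out stay tiny and uniform.
    """
    def to_json(self) -> dict:
        return asdict(self)

@dataclass
class UserModel(BaseModel):
    """The 'example data model' to iterate on until the shape is elegant."""
    id: int = 0
    name: str = ""

print(UserModel(id=1, name="Ada").to_json())  # -> {'id': 1, 'name': 'Ada'}
```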
</rant>
Mmmmmm no, Claude definitely is. You have to know what to ask it, but I generated an entire dead man's switch daemon written in Go in like an hour with it, just to see if I could.
So you did one simple program.
SaaS involves a suite of tooling and software, not just a program that you build locally.
You need, at a minimum, database deployments (with scaling and redundancy) and cloud software deployments (with scaling and redundancy).
SaaS is a full stack product, not a widget you run on your local machine. You would need to deputize the AI to log into your AWS (sorry, it would need to create your AWS account) and fully provision your cloud infrastructure.
Mmmmmmmmmmmmm no
It's further along than you think. I spoke to someone today about it, and he told me it produced a basic SaaS app for him. He said that it looked surprisingly okay and the basic functionalities actually worked too. He did note that it kept using deprecated code, consistently made a few basic mistakes despite being told how to avoid them, and failed to produce nontrivial functionality.
He did say that it used very common libraries, and we hypothesized that it functioned well because a lot of relevant code could be found on GitHub, and that it might function significantly worse when encountering less popular frameworks.
Still, it's quite impressive, although not surprising, considering it was a matter of time before people would start to feed the feedback of an IDE back into it.
We just built and deployed a fully functional AWS app for our team, entirely written by AI. From the Terraform, to the backing API, to the Angular frontend. All AI. I think AI is further along here than you suspect.
I'm skeptical. You're saying that your team had no hand in the provisioning, and you deputized an AI with AWS keys and just let it run wild?
How? We've been trying to adopt AI for dev work for years now, and every time the next-gen tool or model gets released, it fails spectacularly at basic things. And that's just the technical stuff; I still have no idea how to tell it to implement our use cases, as it simply does not understand the domain.
It is great at building things others have already built and that it could train on, but we don't really have a use case for that.
Might be satire, but I think some "products based on LLMs" (not LLMs alone) would be able to. There are pretty impressive demos out there, but honestly I haven't tried them myself.
An otherwise meh article concluded with "It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience."
Much as we want to point and laugh - this is not some loon's fantasy. This is happening. Some dingus told spicy autocomplete 'make me a database!' and it did. It's surely as exploit-hardened as a wet paper towel, but it functions. Largely as a demonstration of Kernighan's law: debugging is twice as hard as writing the code in the first place.
This tech is borderline miraculous, even if it's primarily celebrated by the dumbest motherfuckers alive. The generation and the debugging will inevitably improve to where the machine is only as bad at this as we are. We will be left with the hard problem of deciding what the software is supposed to do.
It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.
The years of specialized education and experience are not for writing code in and of itself. Anyone with an internet connection can learn to do that in not that long. What takes years to perfect is writing reliable, optimized, secure code, communicating and working efficiently with others, writing code that can be maintained by others long after you leave, knowing the theories behind why code written in a certain way works better than code written in some other way, and knowing the qualitative and quantitative measures to even be able to assess whether one piece of code is "better" than another. Source: self-learned programming, started building stuff on my own, and then went through an actual computer science program. You miss so much nuance and underlying theory when you self-learn, which directly translates to bad code that's a nightmare to maintain.
Finally, the most important thing you can do with the person that has years of specialized education and experience is actually have a conversation with them about their code: ask them to explain in detail how it works and the process they used to write it, then ask followup questions and request further clarification. Trying to get AI to explain itself is a complete shitshow, and while humans do have a propensity to make shit up to cover their own/their coworkers' asses, AI does that even when it makes no sense not to tell the truth, because it doesn't really know what "the truth" is or why other people would want it.
Will AI eventually catch up? Almost certainly, but we're nowhere close to that right now. Currently it's less like an actual professional developer and more like someone who knows just enough to copy paste snippets from Stack Overflow and hack them together into a program that manages to compile.
I think the biggest takeaway with AI programming is not that it can suddenly do just as well as someone with years of specialized education and experience, but that we're going to get a lot more shitty software that looks professional on the surface but is a dumpster fire inside.
Self-learned programming, started building stuff on my own, and then went through an actual computer science program.
Same. Starting with QBASIC, no less, which is an excellent source of terrible practices. At one point I created a code snippet that would perform a division and multiplication to find the remainder, because I'd never heard of modulo. Or functions.
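(In Python terms, that workaround vs. the operator looks like this; the numbers are picked at random:)

```python
a, b = 47, 10

# The QBASIC-era workaround: divide, multiply back, subtract.
remainder = a - (a // b) * b   # 47 - 4 * 10 = 7

# What the modulo operator does in one step.
assert remainder == a % b == 7
```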
Right now, this lets people skip the hair-pulling syntax errors and tell the computer what they think the program should be doing, in plain English. It's not even "compilable pseudocode." It's high-level logic, nearly to the point that logic errors are all that can remain. It desperately needs some non-answer feedback states for when you tell it to "implement MP4 encoding" and expect that to Just Work.
But it's teaching people to write the comments first.
we’re nowhere close to that right now.
The distance from here to "oh shit" is shorter than we'd prefer. This tech works like a joke. "Chain of thought" apparently means telling the robot to act smarter... and it does. Which is almost less silly than Stable Diffusion removing every part of the marble that doesn't look like Hatsune Miku. If it's stupid, but it works... it's still stupid. But it works.
Someone's gonna prompt "Write like Donald Knuth" and the robot's gonna go, "Oh, you wanted good code? Why didn't you say so."
This industry also spends most of its money either changing things that don't need to change (we optimized the right-click menu to remove this item, mostly to fuck with your muscle memory) or avoiding changing things (rather than implementing 2FA, banks have implemented 58372658 distinct algorithms for detecting things that might be fraud).
If you're just talking about enabling small scale innovation you're probably right, but if you're talking about the industry as a whole I think you need to look at what people in industry are actually spending their time on.
it's not code.
Yeah, I've been using it heavily. While someone without technical knowledge will surely let AI build them a highly insecure app, people with more technical knowledge are going to propel things to a level where the less tech-savvy will have fewer and fewer pitfalls to fall into.
For the past two months, I've been leveraging AI to build a CUE system that takes a user desire (e.g. "I want to deploy a system with an app that uses a database and a message queue", expressed as a short JSON) and converts it into a simple configuration file that unpacks into all the Kubernetes manifests required to deploy the system they want to deploy.
I'm trying to be fully shift-left about it. So, even if the user's configuration is as simple as my example, it should still use CUE templating to construct the files needed for a full DevSecOps stack: ingress controller, KEDA, some kind of logging such as an ELK stack, vulnerability scanners, policy agents, etc. The idea is that every stack should at all times be created in a secure state. And extra CUE transformations ensure that you can split the deployment destinations any way you like: local/on-prem, any cloud provider, or any combination thereof.
The idea is that if I need to swap out a component, I just change one override in the config and the incoming component already knows how to connect to everything and do what the previous component was doing because I've already abstracted the component's expected manifest fields using CUE. So, I'd be able to do something like changing my deployment from one cloud to another with a click of a button. Or build up a whole new fully secure stack for a custom purpose within a few minutes.
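(To make the "unpacks into manifests" idea concrete, here's the rough shape of the expansion as a Python sketch; the real system uses CUE templating, and every name below is a simplified placeholder.)

```python
# Sketch only: a tiny config fans out into one manifest per component,
# plus secure-by-default extras that every stack gets automatically.
def expand(config: dict) -> list[dict]:
    manifests = []
    for component in config["components"]:  # e.g. "app", "database", "queue"
        manifests.append({
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": component},
            "spec": {"replicas": config.get("replicas", 1)},
        })
    # The shift-left piece: the stack is born secure, not hardened later.
    manifests.append({"kind": "NetworkPolicy", "metadata": {"name": "default-deny"}})
    return manifests

print(len(expand({"components": ["app", "database", "queue"]})))  # -> 4
```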
The idea is I could use this system to launch my own social media app, since I've been planning the ideal UX for many years. But whether or not that pans out, I can take my CUE system and put a web interface over it to turn it into a mostly automated PaaS. I figure I could undercut most PaaS companies and charge just a few percentage points above cost (using OpenCost to track the expenses). If we get to the point where we have a ton of novices creating apps with AI, I might be in a lucrative position if I have a PaaS that can quickly scale and provide automated secure back ends.
Of course, I intend on open sourcing the CUE once it's developed enough to get things off the ground. I'd really love to make money from my creative ideas on a socialized media app that I create, but I'm less excited about gatekeeping this kind of advancement.
Interested to know if anyone has done this type of project in the past. Definitely wouldn't have been able to move at nearly this speed without AI.
so, MASD(or MDE) then ?
"If you don't have organic intelligence at home, store-bought is fine." - leo (probably)
Is the implication that he made a super insecure program and left the token for his AI thing in the code as well? Or is he actually being hacked because others are coping?
AI writes shitty code that's full of security holes, and Leo here has probably taken zero steps to further secure his code. He broadcast his AI-written software, and it's open season for hackers.
Not just that, but he literally advertised himself as not being technical. That seems to be just asking for an open season.
Potentially both, but you don't really have to ask to be hacked. Just put something on the public internet and automated scanning tools will start checking your service for popular vulnerabilities.
He told them which AI he used to make the entire codebase. I'd bet it's way easier to RE the "make a full SaaS suite" prompt than it is to RE the code itself once it's compiled.
Someone probably poked around with the AI until they found a way to abuse his SaaS
Doesn't really matter. The important bit is he has no idea either. (It's likely the former and he's blaming the weirdos trying to get in)
Reminds me of the days before AI assistants, when people copy-pasted code from forums and then you'd get questions like "I found this code and I know what every line does except this `for( int i = 0; i < 10; i ++)` part. Is this someone using an unsupported expression?"
I'm less knowledgeable than the OOP about this. What does the code you quoted do?
It's a standard formatted for-loop. It's creating the integer variable `i` and setting it to zero. The second part is saying "do this while `i` is less than 10", and the last part is saying what to do after each run of the loop: increment `i` by 1. Under this would be the actual stuff you want to be doing in that loop. Assuming nothing in the rest of the code is manipulating `i`, it'll do this 10 times and then move on.
It’s a for loop. Super basic code structure.
@Moredekai@lemmy.world posted a detailed explanation of what it’s doing, but just to chime in that it’s an extremely basic part of programming. Probably a first week of class if not first day of class thing that would be taught. I haven’t done anything that could be considered programming since 2002 and took my first class as an elective in high school in 2000 but still recognize it.
`for( int i = 0; i < 10; i++ )`

This reads as "assign an integer to the variable `i` and put a 0 in that spot. Do the following code, and once completed add 1 to `i`. Repeat until `i` reaches 10."

`int i = 0` initiates `i`, tells the compiler it's an integer (whole number), and assigns 0 to it all at once.

`i++` can be written a few ways, but they all say "add 1 to `i`".

`i < 10` tells it to stop at 10.

`for` tells it to loop, and starts a block which is what will actually be looping.
Edits: A couple of clarifications
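(And in case a runnable version helps, the equivalent loop in Python; `range(10)` produces 0 through 9:)

```python
# Equivalent of `for (int i = 0; i < 10; i++)`:
for i in range(10):
    print(i)  # prints 0, 1, 2, ... 9, then the loop moves on
```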
“Come try my software! I’m an idiot, so I didn’t write it and have no idea how it works, but you can pay for it.”
to
“🎵How could this happen to meeeeee🎵”
Im gone print this and hang it into office
It appears you may have accidentally a word
ITT: "Haha, yah AI makes shitty insecure code!"
<mad scrabbling in background to review all the code committed in the last year>
Managers hoping genAI will cause the skill requirements (and paycheck demand) of developers to plummet:
Also managers when their workforce is filled with buffoons:
Two days later...
But I thought vibe coding was good actually 😂
Vibe coding is a hilarious term for this too. As if it's not just letting AI write your code.
2 days, LMAO
If I were leojr94, I'd be mad as hell about this impersonator soiling the good name of leojr94 - most users probably don't even notice the underscore.
He should be promoted to management! Specifically head of cyber security! They also love security by obscurity and knowing nothing about what they are doing!
CIO, Peregrine Took.
hahahahahahahahahahahaha
ELI5?
Guy who doesn't know how to write software uses GenAI to make software that he then puts up for sale, and brags about not knowing how to write software.
People buy his software and, intentionally or not, start poking holes in it by using it in ways neither he nor the GenAI anticipated. Guy panics because he has no clue how to fix it.
Man uses AI to make software. Man learns hard way that AI doesn't care about stuff like security.
The increasing use of AI is horrifying. Stop playing Frankenstein! Quit creating thinking beings and using them as slaves.