The US adds "engineer" to everything to sound more prestigious than it is. If you sell your service as an AI prompt writer, you get paid peanuts. If you sell the same service as an AI prompt engineer, the C-suites cream their pants.
As a professional in the field of artificial neural networks, I endorse this meme wholeheartedly and will figuratively slake my thirst for schadenfreude on the tears of this child with joy.
Yes, it is. Mostly because "real engineering" isn't the high bar it's made out to be. From that blog:
Nobody I read in these arguments, not one single person, ever worked as a "real" engineer. At best they had some classical training in the classroom, but we all know that looks nothing like reality. Nobody in this debate had anything more than stereotypes to work with. The difference between the engineering in our heads and in reality has been noticed by others before, most visibly by Glenn Vanderburg. He read books on engineering to figure out the difference. But I wanted to go further.
Software has developed in an area where the cost of failure is relatively low. We might make million dollar mistakes, but it's not likely anybody dies from it. In areas where somebody could die from bad software, techniques like formal verification come into play. Those tend to make everything take 10 times longer, and there's no compelling reason for the industry at large to do that.
If anything, we should lean into this as an advantage. How fast can we make the cycle of change to deployment?
We might make million dollar mistakes, but it's not likely anybody dies from it.
I had a coworker who got a gig writing PDA software for a remote-controlled baseball machine. He was to this day the most incompetent programmer I've ever met personally; his biggest mistake on this project was firing a 120 mph knuckleball (a pitch with no spin so its flight path is incredibly erratic) a foot over a 12-year-old kid's head. This was the only time in my 25-year career that I had to physically restrain someone (the client, in this case) to prevent a fist fight. I replaced my coworker on the project after this and you can bet I took testing a little bit more seriously than he did.
In many cases this is accurate. Programming alone doesn't amount to engineering. Lotta low quality lines of code being churned out these days because standards have dropped.
Build an entire ecosystem, with multiple frontends, apps, databases, admin portals. It needs to work with my industry. Make it run cheap on the cloud. Also make sure it's pretty.
The prompts are getting so large we may need to make some sort of... Structured language to pipe into.. a device that would.. compile it all...
Realist, maybe. Often a pessimist. Never really a class traitor. Besides, I'm more blue collar than white collar, so I've never gotten the luxury of working from home at higher pay. So as far as being the same class... in the sense of rich vs. everyone else, sure.
Nah, that's going to blow, and I was talking about just that several months ago. The internet is going to be completely fucked now. It had a nice little run of golden years from about 1995 through 2012. The decade after that was all downhill, and the last year or so has been a dumpster fire that's still getting bigger.
Yeah, writing prompts is the long-term goal; programming will be obsolete.
Nobody who can write a program in a structured language, taking edge cases into account, will be able to write a prompt for an LLM.
Prompt writers will be the useful professionals, because NO big tech company is trying to make them obsolete by making AI ubiquitous and transparent, aimed at working from natural-language requests by normal users or simply from context clues. /s
Prompt engineering is the griftiest side of the latest AI summer. Look at who is selling the courses: the same people who sold crypto courses, metaverse courses, Amazon dropshipping courses...
But I'll definitely prefer hiring someone who does. Sure, you can code in Vi without plugins, but why? Leave your elitism at home. We have deadlines and money to make.
Edit: The discussions I've had about AI here on Lemmy and Hacker News have seriously made me consider asking whether the candidate uses AI tools as an interview question, with the only correct answer being some variation of "Yes, I do".
Boomer seniors scared of new tools is why Oracle is still around. I don't want any of those on my team.
Thinking AI is an upgrade from pencil to pen gives the impression that you spent zero effort incorporating it into your workflow while still believing you've seen the whole payoff. It feels like watching my dad use Eclipse for 20 years without ever learning anything more complicated than having multiple tabs open.
With that in mind, work on your prompting skills and give it a shot. Here are some things I've had immense success using GPT for:
Refactoring code
Turning code "pure" so it can be unit-testable
Transpiling code between languages
Slapping together frontends and backends in frameworks I'm only somewhat familiar with in days instead of weeks
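To make the "turning code pure" point in the list above concrete, here's a minimal, hypothetical sketch (the function names and the discount scenario are mine, not from the thread): a function that depends on a global and the system clock gets refactored so all inputs are explicit, which makes it trivially unit-testable.

```python
from datetime import date

TAX_RATE = 0.08  # global state the impure version silently depends on

def final_price_impure(price: float) -> float:
    # Hard to test: the result changes with today's date and with TAX_RATE.
    discount = 0.10 if date.today().month == 12 else 0.0
    return price * (1 - discount) * (1 + TAX_RATE)

def final_price(price: float, tax_rate: float, today: date) -> float:
    # Pure: same inputs always give the same output, no hidden state,
    # so any scenario (December sale, zero tax, etc.) can be asserted directly.
    discount = 0.10 if today.month == 12 else 0.0
    return price * (1 - discount) * (1 + tax_rate)

# Unit test needs no mocking, no clock control, no global patching:
assert abs(final_price(100.0, 0.08, date(2024, 12, 1)) - 97.2) < 1e-9
```

This kind of mechanical "hoist the hidden dependencies into parameters" transformation is exactly the sort of simple, easily verified task an LLM tends to do well.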
I know in advance someone will tunnel vision on that last point and say "this is why AI bad", so I will kindly remind you that the alternative is doing the same thing by hand... in weeks instead of days. No, you don't learn significantly more by doing it by hand (in fact, accounting for speed, I'd argue you learn less).
In general, the biggest tips I have for using LLMs are: 1) they're only as smart as you are, so get them to do simple, time-consuming tasks whose output you can easily verify; 2) they forget and hallucinate a lot, so don't give them more than about 100 lines of code per chat session if you require high reliability.
Things I've had immense success using Copilot for (although I cancelled my Copilot subscription last year, I'm going to switch to this when it comes out: https://github.com/carlrobertoh/CodeGPT/pull/333)
Adding tonnes of unit tests
Making helper functions instantly
Basically anything autocomplete does, but on steroids
One thing I'm not getting into in this comment is licensing/morals, because it's not relevant to the OP. If you have any questions or want to debate this, I'll read and reply in the morning.
AI's not bad, it just doesn't save me time. For quick, simple things, I can do it myself faster than the AI. For bigger, more complex tasks, I find myself rigorously checking the AI's code to make sure no new bugs or vulnerabilities are introduced. Instead of reviewing that code, I'd rather just write it myself and have the confidence that there are no glaring issues. Beyond more intelligent autocomplete, I don't really have much of a need for AI when I program.
This is how I use it, and it's a great way for me to speed up. It's a rubber duck for me. I have a fake conversation, it gives me different ideas or approaches to solve a problem. It does amazing with that
The code it spits out is something else, though. Since it's trained on GitHub, its output could be based on someone with two months' experience writing their CS201 assignment or on a seasoned engineer. I've found it faster to get the gist of what it's suggesting and then rewrite it to fit my application.
Not to mention the roughly 50% chance of a "hey, why don't you use this miracle function that does exactly what you need" response, after which you realize the miracle function doesn't exist; it just made it up.
Sure, you can code in Vi without plugins, but why? Leave your elitism at home. We have deadlines and money to make.
Nothing elitist about it. Vim is not a modular tool that I can swap out of my mental model. Before someone says it, I've tried VS Code's vim plugin, and it sucks ass.