  • Was listening to my go-to podcast during morning walkies with my dog. They brought up an example where some couple was using ChatGPT as a couple's therapist, and what a great idea that was, talking about how one of the podcasters has more of a friend-like relationship with "their" GPT.

    I usually find this podcast quite entertaining, but this just got me depressed.

    ChatGPT is made by the same company that stole Scarlett Johansson's voice, in the same vein as companies that think it's perfectly okay to pirate 81 terabytes of books despite definitely being able to afford to pay the authors. I don't see a reality where it's ethical, or indicative of good judgement, to trust a product from any of these companies with personal information.

  • An otherwise meh article concluded with "It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience."

    Much as we want to point and laugh - this is not some loon's fantasy. This is happening. Some dingus told spicy autocomplete 'make me a database!' and it did. It's surely as exploit-hardened as a wet paper towel, but it functions. Largely as a demonstration of Kernighan's law.

    This tech is borderline miraculous, even if it's primarily celebrated by the dumbest motherfuckers alive. The generation and the debugging will inevitably improve to where the machine is only as bad at this as we are. We will be left with the hard problem of deciding what the software is supposed to do.

    • It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.

      The years of specialized education and experience are not for writing code in and of itself; anyone with an internet connection can learn to do that in not that long. What takes years to perfect is writing reliable, optimized, secure code; communicating and working efficiently with others; writing code that can be maintained by others long after you leave; knowing the theories behind why code written a certain way works better than code written another way; and knowing the qualitative and quantitative measures to even be able to assess whether one piece of code is "better" than another. Source: I self-learned programming, started building stuff on my own, and then went through an actual computer science program. You miss so much nuance and underlying theory when you self-learn, which directly translates to bad code that's a nightmare to maintain.

      Finally, the most important thing you can do with a person who has years of specialized education and experience is actually have a conversation with them about their code: ask them to explain in detail how it works and the process they used to write it, then ask follow-up questions and request further clarification. Trying to get AI to explain itself is a complete shitshow, and while humans do have a propensity to make shit up to cover their own (or their coworkers') asses, AI does that even when it makes no sense not to tell the truth, because it doesn't really know what "the truth" is or why other people would want it.

      Will AI eventually catch up? Almost certainly, but we're nowhere close to that right now. Currently it's less like an actual professional developer and more like someone who knows just enough to copy-paste snippets from Stack Overflow and hack them together into a program that manages to compile.

      I think the biggest takeaway with AI programming is not that it can suddenly do just as well as someone with years of specialized education and experience, but that we're going to get a lot more shitty software that looks professional on the surface but is a dumpster fire inside.

      • Self-learned programming, started building stuff on my own, and then went through an actual computer science program.

        Same. Starting with QBASIC, no less, which is an excellent source of terrible practices. At one point I created a code snippet that would perform a division and multiplication to find the remainder, because I'd never heard of modulo. Or functions.
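
        The trick, roughly, as a sketch (in CUE, whose integer builtins div and mod make the comparison easy; values and names are made up):

            a: 47
            b: 5
            // the long way around: subtract off the whole multiples
            remByHand: a - div(a, b)*b // 2
            // what modulo does in one step
            remBuiltin: mod(a, b) // 2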

        Right now, this lets people skip the hair-pulling syntax errors and tell the computer what they think the program should be doing, in plain English. It's not even "compileable pseudocode." It's high-level logic, nearly to the point that logic errors are all that can remain. It desperately needs some non-answer feedback states for when you tell it to "implement MP4 encoding" and expect that to Just Work.

        But it's teaching people to write the comments first.

        we’re nowhere close to that right now.

        The distance from here to "oh shit" is shorter than we'd prefer. This tech works like a joke. "Chain of thought" apparently means telling the robot to act smarter... and it does. Which is almost less silly than Stable Diffusion removing every part of the marble that doesn't look like Hatsune Miku. If it's stupid, but it works... it's still stupid. But it works.

        Someone's gonna prompt "Write like Donald Knuth" and the robot's gonna go, "Oh, you wanted good code? Why didn't you say so."

    • This industry also spends most of its money either changing things that don't need to change (we optimized the right-click menu to remove this item, mostly to fuck your muscle memory) or avoiding changing things (rather than implementing 2FA, banks have implemented 58372658 distinct algorithms for detecting things that might be fraud).

      If you're just talking about enabling small scale innovation you're probably right, but if you're talking about the industry as a whole I think you need to look at what people in industry are actually spending their time on.

      It's not code.

    • Yeah, I've been using it heavily. While someone without technical knowledge will surely allow AI to build a highly insecure app, people with more technical knowledge are going to propel things to a level where the less tech-savvy will have fewer and fewer pitfalls to fall into.

      For the past two months, I've been leveraging AI to build a CUE system that takes a user desire (e.g. "I want to deploy a system with an app that uses a database and a message queue", expressed as a short JSON file) and converts it into a simple configuration file that unpacks into all the Kubernetes manifests required to deploy that system.
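
      Roughly, the shape of it (a sketch only; the field names, values, and image are illustrative, not the real system):

          // Hypothetical shape of the short JSON "user desire".
          #Desire: {
              app:   string
              needs: [...("database" | "queue")]
          }

          desire: #Desire & {
              app:   "myapp"
              needs: ["database", "queue"]
          }

          // The desire unpacks into the Kubernetes manifests it implies.
          manifests: deployment: {
              apiVersion: "apps/v1"
              kind:       "Deployment"
              metadata: name: desire.app
              spec: {
                  replicas: 1
                  selector: matchLabels: app: desire.app
                  template: {
                      metadata: labels: app: desire.app
                      spec: containers: [{
                          name:  desire.app
                          image: "\(desire.app):latest" // placeholder image
                      }]
                  }
              }
          }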

      I'm trying to be fully shift-left about it, so even if the user's configuration is as simple as my example, it should still use CUE templating to construct the files needed for a full DevSecOps stack: Ingress Controller, KEDA, some kind of logging such as an ELK stack, vulnerability scanners, policy agents, etc. The idea is that every stack should, at all times, be created in a secure state. Extra CUE transformations ensure that you can split the deployment destinations any way you like: local/on-prem, any cloud provider, or any combination thereof.
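
      In CUE terms, that's mostly defaults doing the work. A sketch, with illustrative component names (*value marks a CUE default):

          // Every stack carries the secure baseline unless a
          // config value explicitly overrides it.
          #Stack: {
              ingress: *"ingress-nginx" | string
              scaling: *"keda" | string
              logging: *"elk" | string
              scanner: *"trivy" | string
              policy:  *"kyverno" | string
          }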

      The idea is that if I need to swap out a component, I just change one override in the config, and the incoming component already knows how to connect to everything and do what the previous component was doing, because I've already abstracted each component's expected manifest fields using CUE. So I'd be able to do something like moving my deployment from one cloud to another at the click of a button, or building up a whole new, fully secure stack for a custom purpose within a few minutes.
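
      Something like this, as a sketch (the contract fields and component names are made up for illustration):

          // Abstract contract any logging component must satisfy.
          #Logging: {
              endpoint: string
              protocol: *"http" | "syslog"
          }

          // Concrete components fill in the contract.
          components: {
              elk:  #Logging & {endpoint: "http://elasticsearch:9200"}
              loki: #Logging & {endpoint: "http://loki:3100"}
          }

          // Swapping the stack's logging is one override here.
          config: logging: *"elk" | "loki"
          stack: logging: components[config.logging]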

      I could use this system to launch my own social media app, since I've been planning the ideal UX for many years. But whether or not that pans out, I can take my CUE system, put a web interface over it, and turn it into a mostly automated PaaS. I figure I could undercut most PaaS companies and charge just a few percentage points above cost (using OpenCost to track the expenses). If we get to the point where we have a ton of novices creating apps with AI, I might be in a lucrative position with a PaaS that can quickly scale and provide automated, secure back ends.

      Of course, I intend to open-source the CUE once it's developed enough to get things off the ground. I'd really love to make money from my creative ideas on a socialized media app that I create, but I'm less excited about gatekeeping this kind of advancement.

      I'm interested to know if anyone has done this type of project in the past. I definitely wouldn't have been able to move at nearly this speed without AI.

  • ITT: "Haha, yah AI makes shitty insecure code!"

    <mad scrabbling in background to review all the code committed in the last year>

  • If I were leojr94, I’d be mad as hell about this impersonator soiling the good name of leojr94—most users probably don’t even notice the underscore.
