The future of back-end development
  • It may be an opinion, but pointing it out won't make me like Java any more.

  • Opinions on how to deal with duplicate code.
  • If I find myself repeating something more than twice, I just ask "Can this be a function?" If yes, I move it there. If not, I just leave it as it is.

    Life's too short to spend all day rewriting your code.
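
    The rule of three described above can be sketched like this (all names are illustrative, not from any real codebase):

    ```python
    # Before: the same normalization snippet copy-pasted in three places.
    # Once it shows up a third time, it gets extracted into one function.

    def normalize(name: str) -> str:
        """Shared helper extracted after the snippet appeared a third time."""
        return name.strip().lower().replace(" ", "_")

    # Each former call site now uses the helper instead of repeating itself:
    user = normalize("  Ada Lovelace ")   # "ada_lovelace"
    tag = normalize("Back End")           # "back_end"
    slug = normalize("Best Practices")    # "best_practices"
    ```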

  • Linux Best Practices
  • Yes, but also I would hope that if you have the autonomy to install Linux, you also have the autonomy to look up an unknown command before running it with superuser privileges.

  • What examples do you know?
  • If you want a pretty cool example, Le morte d'Arthur was written in prison.

  • The Coming Enshittification of Public Libraries
  • They're definitely among the worst of the worst. It's always surprised me how comparatively sterile their wiki page is. Feels like they've got someone cleaning it up.

  • GPT4All is a free-to-use, locally running, privacy-aware large language model that is a 3GB - 8GB file that you can download and query. No GPU or internet required
  • Three cents for every 1k prompt tokens. You pay another six cents per 1k generated tokens in addition to that.

    At 8k context size, this adds up quickly. Depending on what you send, you can easily be out ~thirty cents per generation.

  • GPT4All is a free-to-use, locally running, privacy-aware large language model that is a 3GB - 8GB file that you can download and query. No GPU or internet required
  • Claude 2 isn't free though, is it?

    Either way, it does depend on what you want to use it for. Claude 2 is very biased towards positivity, and it can be like pulling teeth if you're asking it to generate anything it even remotely disapproves of. In that sense, Claude 1 is the superior option.

  • Some people just can't pace themselves
  • w++ is a programming language now 🤡

  • OpenAI, Google will watermark AI-generated content to hinder deepfakes, misinfo
  • Presumably you watermark all the training data.

    At least, that's my first instinct.

  • Some people just can't pace themselves
  • You can make it as complicated as you want, of course.

    Out of curiosity, what use-case did you find for it? I'm always interested to see how AI is actually applied in real settings.

  • B-bug? What bug?
  • Lazy is right. Spending fifty hours to automate a task that doesn't take even five minutes is commonplace.

    It takes laziness to new, artful heights.

  • Some people just can't pace themselves
  • True! Interfacing is also a lot of work, but I think that starts straying away from AI to "How do we interact with it." And let's be real, plugging into OAI's or Anthropic's API is not that hard.

    Does remind me of a very interesting implementation I saw once though. A VRChat bot powered by GPT 3.5 with TTS that used sentiment classification to display the appropriate emotion for the text generated. You could interact with it directly via talking to it. Very cool. Also very uncanny, truth be told.

    All that is still in the realm of "fucking around" though.
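
    The pipeline described above (generated text → sentiment classification → avatar emotion) could be sketched roughly like this; the classifier and emotion names here are hypothetical stand-ins, not the actual bot's code:

    ```python
    # Toy sketch: pick an avatar emotion to display based on the
    # sentiment of the generated reply. A real implementation would
    # call an actual sentiment model instead of keyword matching.

    EMOTIONS = {"positive": "smile", "negative": "frown", "neutral": "idle"}

    def classify_sentiment(text: str) -> str:
        """Keyword-based stand-in for a real sentiment classifier."""
        lowered = text.lower()
        if any(w in lowered for w in ("great", "love", "happy")):
            return "positive"
        if any(w in lowered for w in ("sad", "hate", "angry")):
            return "negative"
        return "neutral"

    def respond(generated_text: str) -> tuple[str, str]:
        """Pair the model's reply with the emotion the avatar should show."""
        return generated_text, EMOTIONS[classify_sentiment(generated_text)]

    text, emotion = respond("I love meeting new people here!")
    print(emotion)  # prints "smile"
    ```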

  • B-bug? What bug?
  • That's only the first stage. Once you get tired enough you start writing code that not even you can understand the next morning, but which you're loath to change because "it just works".

  • B-bug? What bug?
  • "The bug is fixed, but we inadvertently created two new ones, one of which broke production because it was inexplicably not caught."

  • Some people just can't pace themselves
  • If you want to disabuse yourself of the notion that AI is close to replacing programmers for anything but the most mundane and trivial tasks, try to have GPT 4 generate a novel implementation of moderate complexity and watch it import mystery libraries that do exactly what you want the code to do, but that don't actually exist.

    Yeah, you can do a lot without writing a single line of code. You can certainly interact with the models because others who can have already done the leg work. But someone still has to do it.

  • Some people just can't pace themselves
  • It really is big. From baby's first prompting on a big corpo model, learning how tokens work, to setting up your own environment to run models locally (because hey, not everyone knows how to use git), to soft prompting, to training your own weights.

    Nobody is realistically writing fundamental models unless they work with Google or whatever though.

  • This is not for you (cit.)
  • I read it a long time ago. The format is interesting, novel certainly. I suppose that's the selling point, over the prose.

    To me it seemed like there were many competing "ways" to read it as well. Like a maze, you can go different paths. Do you read it front to back? Niggle through the citations? Thread back through the holes? It's not often you get a book that has this much re-read value.

  • 10 AI Graphs to rule them all
  • The assertion that they cannot be cheap is funny, when Vicuna 13B was trained on all of $300.

    Not $300,000. $300. And that gets you a model that's almost parity with ChatGPT.

  • Phoenix Phoenix @programming.dev
    Posts 0
    Comments 18