
Posts: 0 · Comments: 83 · Joined: 2 yr. ago

  • First, we are providing legal advice to businesses, not individuals, which means that the questions we are dealing with tend to be even more complex and varied.

    Additionally, I am a former professional writer myself (not in English, of course, but in my native language). Yet, even I often find myself using complicated language when dealing with legal issues, because matters tend to be very nuanced. "Dumbing down" something without understanding it very, very well creates a huge risk of getting it wrong.

    There are, of course, people who are good at expressing legal information in a layperson's way, but these people have usually studied their topic very intensively beforehand. If a chatbot explains something in “simple” language, its output usually contains serious errors that are very easy for experts to spot, because the chatbot operates on the basis of stochastic rules and does not understand its subject at all.

  • Up until AI they were the people who were inept and late at adopting new technology, and now they get to feel that they’re ahead

    Exactly. It is also a new technology that requires far fewer skills to use than previous new technologies. The skill lies in critically scrutinizing the output - which in this case means that less lazy people are more reluctant to accept the technology.

    On top of this, AI fans are being talked into believing that their prompting as such is a special “skill”.

  • That's why I find it problematic to argue that we should resist working with LLMs because doing so would train them and enable them to replace us. That would require LLMs to actually be capable of replacing us, which I don't believe they are (except in very limited domains such as professional spam). This type of AI is problematic because its abilities are completely oversold (and because it robs us of our time, wastes a lot of power and pollutes the entire internet with slop), not because it is "smart" in any meaningful way.

  • But if you’re not an expert, it’s more likely that everything will just sound legit.

    Oh, absolutely! In my field, the answers made up by an LLM might sound even more legit than the accurate and well-researched ones written by humans. In legal matters, clumsy language is often the result of the facts being complex and the writer not wanting to make any mistakes. It is much easier to come up with elegant-sounding answers when they don't have to be true, and that is what LLMs are generally good at.

  • And then we went back to “it’s rarely wrong though.”

    I often wonder whether the people who claim that LLMs are "rarely wrong" somehow have access to an entirely different chatbot. The chatbots I tried were hardly ever correct about anything except the most basic questions (to which the answers could be found everywhere on the internet).

    I'm not a programmer myself, but for some reason, I got the chatbot to fail even in that area. I took a perfectly fine JSON file, removed one comma on purpose and then asked the chatbot to fix it. The chatbot came up with a number of things that were supposedly "wrong" with it. Not one word about the missing comma, though - the sketch below shows how trivially an ordinary parser catches exactly that.

    I wonder how many people either never ask the chatbots any tricky questions (with verifiable answers) or, alternatively, never bother to verify the chatbots' output at all.
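
    For illustration, here is a minimal sketch (in Python, using a made-up stand-in snippet rather than the actual file) of how a standard JSON parser points straight at a missing comma - no chatbot required:

    ```python
    import json

    # Hypothetical stand-in for a "perfectly fine JSON file" with one comma removed
    # (the comma between the "name" and "age" entries is missing).
    broken = '{"name": "Alice" "age": 30}'

    try:
        json.loads(broken)
    except json.JSONDecodeError as err:
        # The parser reports the exact position and nature of the syntax error.
        print(f"Invalid JSON at line {err.lineno}, column {err.colno}: {err.msg}")
    ```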

  • FWIW, I work in a field that is mostly related to law and accounting. Unlike with coding, there are no simple "tests" to check whether an AI's answer is correct or not. Of course, you could test an answer in court, but that is not something I would recommend (lol).

    In my experience, chatbots such as Copilot are less than useless in a context like ours. For more complex and unique questions (which make up most of the questions we deal with every day), they simply make up smart-sounding BS (including a lot of nonexistent laws etc.). In the rare cases where a clear answer is already available in the legal commentaries, we want to quote it verbatim from the most reputable source, just to be on the safe side. We don't want an LLM to rephrase it, hide its sources and possibly introduce new errors. We don't need "plausible deniability" regarding plagiarism or anything like this.

    Yet, we are being pushed to "embrace AI" as well; we are being told we need to "learn to prompt" etc. This is frustrating. My biggest fear isn't being replaced by an LLM, or even by someone who is a "prompting genius" or whatever. My biggest fear is being replaced by a person who pretends that the AI's output is smart (rather than filled with potentially hazardous legal errors), because in some workplaces, this is what's expected, apparently.

  • If computers become capable of mass-producing stuff other computers will like, but many humans won't, this might also lead to a quick decline of algorithm-based search engines, social media feeds etc. (as has been discussed here before, of course).

  • I think most cons, scams and cults are capable of damaging vulnerable people's mental health even beyond the most obvious harms. The same is probably happening here, the only difference being that this con is capable of auto-generating its own propaganda/PR.

    I think this was somewhat inevitable. Had these LLMs been fine-tuned to act like the mediocre autocomplete tools they are (rather than like creepy humanoids), nobody would have paid much attention to them, and investors would quickly have started to focus on the high cost of running them.

    This somewhat reminds me of how cryptobros used to claim they were fighting the "legacy financial system", yet they were creating a worse version (almost a parody) of it. This is probably inevitable if you are running an unregulated financial system and are trying to extract as much money from it as possible.

    Likewise, if you have a tool capable of messing with people's minds (to some extent) and want to make a lot of money from it, you are going to end up with something that resembles a cult, an MLM or some similarly toxic group.

  • I think this has happened before. There are accounts of people who completely lost touch with reality after getting involved with certain scammers, cult leaders, self-help gurus, "life coaches", fortune tellers or the like. However, these perpetrators were real people who could only handle a limited number of victims at any given time. Also, they probably had their own very specific methods and strategies, which wouldn't work on everybody, not even on all the people who might have been the most susceptible. ChatGPT, on the other hand, can do this at scale. Also, it was probably trained on all the websites and public utterances of every scammer, self-help author, (wannabe) cult leader, life coach, cryptobro, MLM peddler etc. available, which allows it to generate whatever response works best to keep people "hooked". In my view, this alone is a cause for concern.

  • I think we don't know how many people might be at risk of slipping into such mental health crises under the right circumstances. As a society, we are probably good at protecting most of our fellow human beings from this danger (even if we do so unconsciously). We may not yet know what happens when people regularly experience interactions that follow a different pattern (which might be the case with chatbots).

  • Just guessing, but the reported "90% accuracy" is probably related to questions that could be easily answered from an FAQ list. The rest is probably at least in part about issues where the company itself f*cked up in some way... Nothing wrong with answering from an FAQ in theory, but if all the other people get nicely worded BS answers (for which the company couldn't be held accountable), that is a nightmare from every customer's point of view.

  • At the very least, actual humans have an incentive not to BS you too much, because otherwise they might be held accountable. This might also be the reason why call center support workers sound less than helpful sometimes - they are unable to help you (for various technical or corporate reasons) and feel uneasy about this. A bot is probably going to tell you whatever you want to hear while sounding super polite all the time. If all of it turns out to be wrong... well, then this is your problem to deal with.

  • Almost sounds as if in order to steal intellectual property, they had to go down the "traditional" route of talking to someone, making promises etc. If it turns out that a chatbot isn't the best tool for plagiarizing something, what is it even good for?

  • And there might be new "vulture funds" that deliberately buy failing software companies simply because they hold some copyright that might be exploitable. If there are convincing legal reasons why this likely won't fly, fine. Otherwise I wouldn't rely on the argument that "this is a theoretical possibility, but who would ever do such a thing?"

  • And, after the end of the AI boom, do we really know what wealthy investors are going to do with the money they cannot throw at startups anymore? Can we be sure they won't be using it to fund lawsuits over alleged copyright infringements instead?

  • At the very least, many of them were probably unable to differentiate between "coding problems that have been solved a million times and are therefore in the training data" and "coding problems that are specific to a particular situation". I'm not a software developer myself, but that's my best guess.

  • Even the idea of having to use credits to (maybe?) fix some of these errors seems insulting to me. If something like this had been created by a human, the customer would be eligible for a refund.

    Yet, under Aron Peterson's LinkedIn posts about these video clips, you can find the usual comments about him being "a Luddite", being "in denial" etc.

  • FWIW, due to recent developments, I've found myself increasingly turning to non-search engine sources for reliable web links, such as Wikipedia source lists, blog posts, podcast notes or even Reddit. This almost feels like a return to the early days of the internet, just in reverse and - sadly - with little hope for improvement in the future.

  • I think all of this should be true about almost any other company. However, if OpenAI employees had a reasonably strong belief in the hype surrounding their company and their technology, wouldn't they be holding more shares? After all, the rest of the world is constantly being told that this is the future and that pretty much all of our jobs are at risk because of it.