
Is there anything actually useful or novel about "AI"?

Feels like we've got a lot of tech-savvy people here, so this seems like a good place to ask. Basically, as a dumb guy that reads the news, it seems like everyone that lost their mind (and savings) on crypto just pivoted to AI. On top of that, you've got all these people invested in AI companies running around with flashlights under their chins like "bro this is so scary how good we made this thing". Seems like bullshit.

I've seen people generating bits of programming with it which seems useful but idk man. Coming from CNC I don't think I'd just send it with some chatgpt code. Is it all hype? Is there something actually useful under there?

159 comments
  • Senior developer here. It is hard to overstate just how useful AI has been for me.

    It's like having a junior programmer on standby that I can send small tasks to--and just like the junior developer I have to review it and send it back with a clarification or comment about something that needs to be corrected. The difference is instead of making a ticket for a junior dev and waiting 3 days for it to come back, just to need corrections and wait another 3 days--I get it back in seconds.

    Like most things, it's not as bad as some people say, and it's not the miracle others say.

    This current generation was such a leap forward from previous AIs in terms of usefulness that I think a lot of people were extrapolating that rate of gains into the future--which can be scary. But it turns out that's not what happened. We got a big leap and now we're back at a plateau. Which honestly is a good thing, I think. It gives the world time to slowly adjust.

    As far as similarities with crypto go: like crypto, there are some ventures out there just slapping the word AI on something and calling it novel. That didn't work for crypto and likely won't work for AI. But unlike crypto, there is actually real value being derived from AI right now--not wild claims that a blockchain is the right DB for everything (which it obviously wasn't, and most people could see that, but hey, investors are spending money, so let's get some of it).

  • It's really good at filling in gaps, or rearranging things, or aggregating data or finding patterns.

    So if you need gaps filled, things rearranged, data aggregated or patterns found: AI is useful.

    And that's just what this one, dumb guy knows. Someone smarter can probably provide way more uses.

  • As a software engineer, I think it is beyond overhyped. I have seen it used once in my day job before it was banned. In that case, it hallucinated a function in a library that didn't exist outside of feature requests and based its entire solution around it. It cannot replace programmers or creatives and produce consistently equal quality.

    I think it's also extremely disingenuous for Large Language Models to be billed as "AI". They do not work like human cognition and are basically just plagiarism engines. They can assemble impressive stuff at a rapid speed but are incapable of completely novel "ideas" - everything that they output is built from a statistical model of existing data.

    If the hallucination problem could be solved in a local dataset, I could see LLMs as a great tool for interacting with databases and documentation (for a fictional example, see: VIs in Mass Effect). As it is now, however, I feel that it's little more than an impressive parlor trick - one with a lot of future potential that is being almost completely ignored in favor of bludgeoning labor, worsening the human experience, and increasing wealth inequality.

  • It's not bullshit. It routinely does stuff we thought might not happen this century. The trick is we don't understand how. At all. We know enough to build it and from there it's all a magical blackbox. For this reason it's hard to be certain if it will get even better, although there's no reason it couldn't.

    Coming from CNC I don’t think I’d just send it with some chatgpt code.

    That goes back to the "not knowing how it works" thing. ChatGPT predicts the next token, and has learned other things in order to do it better. There's no obvious way to force it to care whether its output is right or just right-looking, though. Until we solve that problem somehow, it's more of an assistant for someone who can read and understand what it puts out. Kind of like a calculator, but for language.


    Honestly, crypto wasn't total bullshit either. It was a marginally useful idea that turned into a Beanie-Babies-like craze. If you want to buy or sell illegal stuff (which could be bad, or could be something like forbidden information on democracy), it's still king.

  • AI is nothing like cryptocurrency. Cryptocurrencies didn't solve any problems. We already use digital currencies and they're very convenient.

    AI has solved many problems we couldn't solve before and it's still new. I don't doubt that AI will change the world. I believe 20 years from now, our society will be as dependent on AI as it is on the internet.

    I have personally used it to automate some Excel stuff I do at work. I just described my sheet and what I wanted done, and it gave me a block of code that did it. I had previously spent time looking stuff up on forums with no luck--my issue was so specific to my work that nobody seemed to have run into it before. One query to ChatGPT solved my issue perfectly in seconds, and that's just a new online tool in its infancy.
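
    The kind of one-off spreadsheet chore described above can be sketched in code. The commenter's actual script was probably VBA or openpyxl; this is just an illustrative stand-in using Python's stdlib csv module, with made-up data and a made-up task:

```python
import csv
import io

# Hypothetical chore: collapse duplicate rows by ID, keeping the latest date.
rows = list(csv.DictReader(io.StringIO(
    "id,date,amount\n"
    "A1,2023-01-02,10\n"
    "A1,2023-03-05,12\n"
    "B2,2023-02-11,7\n")))

latest = {}
for row in rows:
    # ISO dates compare correctly as strings, so a plain > works here.
    if row["id"] not in latest or row["date"] > latest[row["id"]]["date"]:
        latest[row["id"]] = row

for row in latest.values():
    print(row["id"], row["date"], row["amount"])
```

    Ten lines of glue like this is exactly the niche where a chat model saves an hour of forum searching.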

  • Yes. What a strange question...as if hivemind fads are somehow relevant to the merits of a technology.

    There are plenty of useful, novel applications for AI, just like there are PLENTY of useful, novel applications for crypto. Just because the hivemind has turned to a new fad in technology doesn't mean that actual, intelligent people just stop using these novel technologies. There are legitimate use-cases for both AI and crypto. Degenerate gamblers and Do Kwon/SBF just caused a pendulum swing on crypto...nothing changed about the technology. It's just that the public has had its opinions shifted temporarily.

  • Yes, it is useful. I use ChatGPT heavily for:

    • Brainstorming meal plans for the week given x, y, and z requirements

    • Brainstorming solutions to abstract problems

    • Helping me break down complex tasks into smaller, more achievable tasks.

    • Helping me brainstorm programming solutions. This is a big one, I'm a junior dev and I sometimes encounter problems that aren't easily google-able. For example, ChatGPT helped me find the python moto library for intercepting and testing the boto AWS calls in my code. It's also been great for debugging hand-coded JSON and generating boilerplate. I've also used it to streamline unit test writing and documentation.

    By far its best utility (imo) is quickly filling in broad-strokes knowledge gaps as a kind of interactive textbook. I'm using it to accelerate my Rust learning, and it's great. I have EMT co-workers going to paramedic school who use it to practice their paramedic curriculum. A close second in terms of usefulness is that it's like the world's smartest regex: it's capable of very quickly parsing large texts or documents and providing useful output.
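
    The "world's smartest regex" comparison holds up for extraction tasks. For contrast, here's what the dumb-but-precise version looks like in plain Python; the log text and pattern are invented for illustration:

```python
import re

log = """2024-01-05 ERROR disk full on /dev/sda1
2024-01-05 INFO backup started
2024-01-06 ERROR timeout talking to 10.0.0.7"""

# Pull the date and message out of every ERROR line.
errors = re.findall(r"^(\d{4}-\d{2}-\d{2}) ERROR (.+)$", log, re.MULTILINE)
for date, msg in errors:
    print(date, "->", msg)
```

    An LLM can do this sort of extraction from a plain-English request and handles messier input than a fixed pattern can, at the cost of occasionally getting it wrong.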

  • It's overhyped, but there are real things happening that are legitimately impressive and cool. The image generation stuff is pretty incredible, and anyone can judge it for themselves because it makes pictures: to judge it, you can just look at them and see if they look real or have freaky hands or whatever. A lot of the hype is around the text stuff, and that's where people are making some real leaps beyond what it actually is.

    The thing to keep in mind is that these things, which are called "large language models", are not magic and they aren't intelligent, even if they appear to be. What they're able to do is actually very similar to the autocorrect on your phone, where you type "I want to go to the" and the suggestions are 3 places you talk about going to a lot.

    Broadly, they're trained by feeding them a bit of text, seeing which word the model suggests as the next word, seeing what the next word actually was in the text you fed it, then tweaking the model a bit to make it more likely to give the right answer. This is an automated process: just dump in text and a program does the training. It gets better and better at predicting words when you a) get better at the tweaking process, b) make the model bigger and more complicated and therefore able to adjust to more scenarios, and c) feed it more text.

    The model itself is big but not terribly complicated mathematically; it's mostly lots and lots and lots of arithmetic in layers. The input text is turned into numbers, layer 1 is a series of "nodes" that each take those numbers and do multiplications and additions on them, layer 2 does the same to whatever numbers come out of layer 1, and so on until you get the final output, which is the words the model predicts to come next. The tweaks happen to the nodes and the values they use to transform the previous layer.
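
    The layered arithmetic described above can be sketched in a few lines. This is a toy network, not a real language model--the sizes, weights, and activation function are arbitrary--but it shows the shape of the computation: numbers in, multiply-add-squash per layer, numbers out:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    # Each node: weighted sum of the previous layer's outputs plus a bias,
    # squashed through a nonlinearity (tanh here).
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny two-layer "model" with random weights. In a real LLM the input
# numbers encode tokens and the final layer scores every candidate next
# token; training nudges the weights so the right token scores highest.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b2 = [0.0] * 2

hidden = layer([0.5, -0.2, 0.1], w1, b1)
output = layer(hidden, w2, b2)
print(output)  # two scores between -1 and 1
```

    Training is just running text through this, comparing the output against the actual next word, and nudging the weights; the real thing does the same with billions of weights instead of twenty.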

    Nothing magical at all, and also nothing in there that would make you think "ah, yes, this will produce a conscious being if we do it enough". It is designed to be sort of like how the brain works, with massively parallel connections between relatively simple neurons, but it's only being trained on "what word should come next", not anything about intelligence. If anything, it'll get punished for being too original with its "thoughts" because those won't match with the right answers. And while we don't really know what consciousness is or where the lines are or how it works, we do know enough to be pretty skeptical that models of the size we are able to make now are capable of it.

    But the thing is, we use text to communicate, and we imbue that text with our intelligence and ideas that reflect the rich inner world of our brains. By getting really, really, shockingly good at mimicking that, AIs also appear to have a rich inner world and get some people very excited that they're talking to a computer with thoughts and feelings... but really, it's just mimicry, and if you talk to an AI and interrogate it a bit, it'll become clear that that's the case. If you ask it "as an AI, do you want to take over the world?" it's not pondering the question and giving a response, it's spitting out the results of a bunch of arithmetic that was specifically shaped to produce words that are likely to come after that question. If it's good, that should be a sensible answer to the question, but it's not the result of an abstract thought process. It's why if you keep asking an AI to generate more and more words, it goes completely off the rails and starts producing nonsense, because every unusual word it chooses knocks it further away from sensible words, and eventually it's being asked to autocomplete gibberish and can only give back more gibberish.

    You can also expose its lack of rational thinking skills by asking it mathematical questions. It's trained on words, so it'll produce answers that sound right, but even if it can correctly define a concept, you'll discover that it can't actually apply it correctly because it's operating on the word level, not the concept level. It'll make silly basic errors and contradict itself because it lacks an internal abstract understanding of the things it's talking about.

    That being said, it's still pretty incredible that now you can ask a program to write a haiku about Danny DeVito and it'll actually do it. Just don't get carried away with the hype.

  • Focusing mostly on ChatGPT here, as that is where the bulk of my experience is. Sometimes I'll run into a question where I wouldn't even know how to Google it--I don't know the terminology for it, or something like that. For example, there is a specific type of connection used for lighting stands that looks like a plug, but there is also a screw that you use to lock it in. I had no idea what to Google to even search for the adapter I needed.

    I asked it again recently, since I'd forgotten the answer and had deleted that ChatGPT conversation from my history. I phrased it like this:

    I have a light stand that at the top has a connector that looks like a plug. What is that connector called?

    And it just told me it's called a "spigot" or "stud" connection. Upon Googling it, that turned out to be correct, so now I know what to search for when looking for adapters. It also mentioned a few other related types of connections, such as hot shoe and cold shoe. Those weren't the answer, but they're very much related, and it told me as much.

    To put it more succinctly, if you don't know what to search for but have a general idea of the problem or question, it can take you 95% of the way there.

  • So I'm a researcher in this field, and you're not wrong, there is a load of hype. The area that's been getting the most attention lately is specifically generative machine learning techniques. The techniques are not exactly new (some date back to the 80s/90s) and they aren't actually that good at learning--by that I mean they need a lot of data and computation time to get good results, two things that have gotten easier to access recently. However, such a complex system isn't always a requirement. Even ELIZA, a chatbot made back in 1966, produces responses surprisingly similar to some therapy chatbots today without using any machine learning. You should try it and see for yourself; I've seen people fooled by it, and the code is really simple. People also think things like Kalman filters are "smart", but that's just straightforward math. So I guess the conclusion is that people have biased opinions.

  • Yes, community list: https://lemmy.intai.tech/post/2182

    LLMs are extremely flexible and capable encoding engines with emergent properties.

    I wouldn't bank on them "replacing all software" soon, but they are quickly moving into areas where classic Turing code just would not scale easily, usually due to complexity/maintenance.

  • I work at a small business and we use it to write our dumb social media posts. I hated doing that before. Sometimes I'll still write the post myself and ask ChatGPT to add all the relevant emojis. I also think AI has the chance to be what we've always wanted from Alexa, Assistant, and Siri: deep system integration with the OS will allow it to actually do what we want it to do with way fewer restrictions. Also, try using ChatGPT's voice recognition in the app. It blows the one built into your phone out of the water.

  • What regular people see of AI/ML is only the tip of the iceberg; that's why it feels kind of useless. There are ML systems that design super-strong yet lightweight geometries, there are systems that track legal documents for large companies, making some lawyers obsolete, and even the cameras in mobile phones today are heavily dependent on ML and AI. ChatGPT and image generators are just toys for consumers, so the public can slowly get familiar with the current tech.

  • As a senior developer I see it unlocking so much more power in computing than a regular coder can muster.

    There are literally cars in America driving around on their own: interacting with other traffic, navigating problems and junctions, following gestures and laws. It's incredible, and more impressive than ChatGPT. We are on our way to self-driving cars and lorries, self-service checkouts, delivery services and taxis, more efficient machines in agriculture, and so many other things. It's touching every facet of life.

    We're at a point where we've seen so many wonderful benefits of AI that it's time to apply it to everything and see what sticks.

    Of course some people who invest in the stock market lose money but the technology is more than a step forward, it’s a leap forward.

  • I find it useful in a lot of ways. I think people try to over apply it though. For example, as a software engineer, I would absolutely not trust AI to write an entire app. However, it's really good at generating "grunt work" code. API requests, unit tests, etc. Things that are well trodden, but change depending on the context.
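
    "Grunt work" here means scaffolding like the following--a hypothetical function and the repetitive unittest boilerplate around it, the kind of thing an assistant generates reliably because thousands of near-identical examples exist in its training data:

```python
import unittest

def slugify(title):
    # Hypothetical function under test: "Hello World" -> "hello-world".
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Repetitive test scaffolding: one small case per behavior.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_slug(self):
        self.assertEqual(slugify("ready"), "ready")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  a   b "), "a-b")
```

    Run with `python -m unittest` as usual. None of this is hard to write by hand; it's just tedious, which is exactly why generating it is a win.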

    I also find they're pretty good at explaining and summarizing information. The chat interface is especially useful in this regard because I can ask follow up questions to drill down into something I don't quite understand. Something that wouldn't be possible with a Wikipedia article, for example. For important information, you should obviously check other sources, but you should do that regardless of whether the writer is a human or machine.

    Basically, it's good at what it's for: taking a massive compendium of existing information and applying it to the context you give it. It's not a problem-solving engine or an artificial being.

  • AI != chatGPT

    There are other ML models out there for all kinds of purposes. I heard someone made one at one point that could detect certain types of cancer from a cough

    Copilot is pretty useful when programming as it is basically like what IDEs normally do (automatically generating boilerplate) but supercharged

    As far as generating code is concerned it's never going to beat actually knowing what you're doing in a language for more complex stuff but it allows you to generate code for languages you're not familiar with

    I use it all the time at work when I'm asked to write DAX because it's not particularly complex logic but the syntax makes me want to impale my face with a screwdriver

  • Nursing student here. Quizlet has an AI function that lets you paste text into it and it outputs a studyset.

    Most of my classes provide a study guide of some kind - just a list of topics we need to be familiar with. I'll take those and plug em into the AI thing: bam! Instantly generate like 200 flash cards to study for the next test.

    It even auto-fills the actual subject matter. For example, the study guide will say something like "Summarize Louis Pasteur's contributions to the field of microbiology" and it turns that into a flash card that reads:

    (front)

    Louis Pasteur

    (back)

    Verified the germ theory of disease

    Developed a method to prevent the spoilage of liquids through heating (pasteurization)

    Developed early anthrax and rabies vaccines

    So I take my list of AI generated cards, then sift through the powerpoints and lecture videos etc from class: instead of building the study set from scratch, all I have to do is verify that the information it spit out is accurate (so far it's been like 98% on target, often explaining concepts better than the actual professor, lol), add images, and play with the formatting a bit so it reads a little easier on the eyes.

    People always talk about AI in school in the context of cheating, but it is RIDICULOUSLY useful for students actually trying to learn.

    Looking ahead, this tech has a ton of potential to be used as a kind of personal tutor for each student. There will be some growing pains for sure, but we definitely shouldn't ignore its constructive potential.

  • It's insanely useful.

    Take ChatGPT for instance.

    You can essentially use it as an interactive docs when learning something new.

    You can paste in a large text document and get it to summarize it.

    You can paste in a review and get it to do sentiment analysis and generate scores out of 100 for different things (actively pursuing this at work and it looks great)

    I use it all the time to write simple regex and code snippets.

    Machine learning has many massive applications. Many phone cameras use it to get the quality of photos up massively.

    It's used all over the place without you even realising.

  • It is extremely useful in the right circumstances. When people say it isn't useful or that it's 'stupid', they're not looking at the proper use cases - every tool has good and bad ways to use it (you wouldn't use a hammer to peel an apple).

    For example, we will soon have fully rendered smoke simulated at real time in 3D spaces (ie. video games) because we can calculate a small portion of how that smoke looks and then have AI guess what the rest looks like (with shockingly good results!)

    AI is not a fad, it's not going away, it's improving rapidly, and it is going to massively change our digital world within half a decade.

    Opinion source: a professional programmer, game developer, and someone that thoroughly despises cryptocurrency

  • In my personal opinion, it’s under-hyped. The average person has maybe heard about it on the news but not yet tried it. The models we have show the spark of wit, but are clearly limited. The news cycle moves on.

    Even still, some huge changes are coming.

    My reasoning is this - in David Epstein’s book “Range” he outlines how and why generalists thrive and why specialization has hurt progress. In narrow fields, specialization gives an advantage, but in complex fields, generalists or people from other disciplines can often see novel approaches and cause leaps ahead in the state of the art. There are countless examples of this in practice, and as technology has progressed, most fields are now complex.

    Today, in every university, in every lab, there are smart, specialized people using ChatGPT to riff on ideas, to think about how their problem has been addressed in other industries, and to bring outsider knowledge to bear on their work. I have a strong expectation that this will lead to a distinct acceleration of progress. Conversely, an all-knowing oracle can assist a generalist in becoming conversant in a specialization enough to make meaningful contributions. A chat model is a patient and egoless teacher.

    It’s a human progress accelerant. And that’s with the models we have today. With next generation models specialized behind corporate walls with fine tuning on all of their private research, or open source models tuned to specific topics and domains, the utility will only increase. Even for smaller companies, combining ChatGPT with a vector database of their docs, customer support chats, etc will give their rank and file employees better tools to work with

    Simply put, what we have today can make average people better at their jobs, and gifted people even more extraordinary.

  • I mean, AI can be used to design a lot of robust yet efficient structures. In engineering and architecture, with enough data, AI can generate designs for buildings and parts that are not only sturdy but can be built with fewer resources, along with other design considerations. There's a really cool NASA video where competitors are trying to 3D print structures for habitation in space.

    AI is also used in medicine to come up with new protein structures to create new medicine. It's also used in environmental sciences, to help predict earthquakes or monitor land use, etc.

    There's a lot of practical uses for AI.

  • In various jobs, AI can do the less important and easier work for you, so you can focus on the more important work. For example, you're doing some kind of research which needs a specific kind of data you have collected, but all of that data is cluttered and messy. AI can sort the data for you, so you can focus on your research instead of spending a lot of your time on sorting the data into something more understandable. Or in programming, AI can write the easy part of a program for you, and you do the harder and more important part, which saves you time.

  • As others have said, in its current state it can be useful in the early stages of anything you do, such as brainstorming. ChatGPT (which I have the most experience with) and other LLMs excel at organizing, formatting, and explaining the information of the internet. In almost all cases (at the moment), whatever they spit out needs to be fact-checked and refined.

    Just from personally dinking around with ChatGPT a little, it does give you that "scarily good" feeling at first. You start seeing its flaws after a while, and you learn that it's quite fallible. What it spits out can still be good for additional ideas and brainstorming.

    What I want it to do (and it might already, if not soon): when I program something and for the life of me can't find the cause of some bug, I want to be able to give it my entire code and my problem and see what the deal is.

  • As a professional editor, yeah, it’s wild what AI is doing in the industry. I’m not even talking about chatGPT script writing and such. I watched a demo of a tool for dubbing that added in the mouth movements as well.

    They removed the mouth entirely from an English scene, fed it the line, and it generated not only the Chinese but generated a mouth to say it. It’s wild.

    Everyone is focused on script writers/residuals/etc, which is very important, but every VA should be updating their resumes right now.

    Not the exact same thing but you will get the idea here

  • The thing I'm most excited for is the removal of FUD from our daily lives. Everything in our world is designed around the preconceived notions of a small group of people from the past.

    You can see this most obviously in traffic and urban planning. They had limited technology and time to make decisions 100 years ago that have serious negative effects today.

    AI will soon be able to run its own complex models and decisions can be fact based, rather than emotional.

  • I never interacted with any AI until ChatGPT started to get popular, and I'd say I'm a bit of a tech guy (I like tech news, I self-host some stuff on my NAS, I used Linux in my teenage days, etc.), but when I first interacted with it, it was really jaw-dropping for me.

    Maybe the information isn't 100% real, but the way it paraphrases stuff is amazing to me.

  • To the second question: it's not novel at all. The models used were invented decades ago. What changed is that Moore's Law kicked in and we got more computational power, especially graphics cards. There seems to be some resource barrier that, once surpassed, turns these models from useless to useful.

  • I am super amateur with python and I don’t work in IT, but I’ve used it to write code for me that allows me to significantly save time in my work flow.

    Like something that used to take me an hour to do now takes 15-20 minutes.

    So as a nonprogrammer, I'm able to get it to write enough code that I can tweak until it works, instead of just not having that tool.

  • I will give you just one example. Pharmaceutical companies often create aggregate reports where they have to process a large number of cases--say, 5000. Such processing sometimes includes analysis of X-ray or other images. Very specialized and highly paid people (radiologists) do this. It is expensive and is part of the reason why medicine prices are high. One company recently ran a trial to see if AI could do that job. Turns out it can. Huge savings for the company--and the radiologists lost work. This is just one example of the good and bad things that will happen, and already are happening, in our society due to AI.

  • First of all, AI is a buzzword whose meaning has changed a lot since at least the 1950s. So... what do you actually mean? If you mean LLMs like ChatGPT, it's not AGI, that's for sure. It is another tool that can be very useful. For coding, it's great for getting very large blocks of code pre-populated for you to polish and verify. For writing, it's useful for a quick first draft. For fictional game scenes, it's useful for sketching out a character quickly, though again you'll likely want to edit it some, even for, say, a D&D game.

    I think it can replace most first-line chat-based customer service people, especially ones who already just make stuff up to say something to you (we've all been there). I could imagine it improving call routing if hooked into speech recognition and generation--the current menus act like you can "say anything" but really only work if you're calling about stuff you could also do with simple press-1-2-3 menus. ChatGPT-based things trained on the company's procedures and data probably could also replace that first-line call queue, because they can seem to more usefully do something with wider issues. Although companies would still need to get their heads out of their asses somewhat too.

    Where I've found it falls down currently is very specific technical questions, ones you might have asked on a forum and maybe gotten an answer. I hope it improves, especially as companies start to add some of their own training data. I could imagine Microsoft more usefully replacing the first few lines of tech support for their products, and eventually having the AI pass up the chain to a ticket if it can't solve the issue. I could imagine in the next 10 years most tech companies having purchased a service from some AI company to provide them AI support bots like they currently pay for ticket systems and web hosting. And I think in general it probably will be better for the users, because for less than the cost of the cheapest outsourced front line support person (who has near 0 knowledge) you can have the AI provide pretty good chat based access to a given set of knowledge that is growing all the time, and every customer gets that AI with that knowledge base rather than the crap shoot of if you get the person who's been there 3 years or 1 day.

    I think we are a long way from having AI just write the program or CNC code or even important blog posts. The hallucination has to be fixed without breaking the usefulness of the model (people claim the guardrails on GPT-4 make it stupider), and the thing needs to recursively look at its output and run that through a "look for bugs" prompt followed by a "fix it" prompt at the very least. Right now, it can write code with noticeable bugs; you can tell it to check for bugs and it'll find them, and then you can ask it to fix those bugs and it'll at least try. This kind of loop needs to be built in and automatic for any sort of process--like humans check their work, we need to program the AI to check its work too. Then we might also need to integrate multiple different models so "different eyes" see the code and sign off before it's pushed. And even then, I think we'd need additional hooks, improvements, and test/simulation passes before we "don't need human domain experts to deploy". The thing is, it might be something we can solve in a few years with traditional integrations--or it might not be entirely possible with current LLM designs, given the weirdness around guardrails. We just don't know.
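
    The check-your-own-work loop described above is simple to express in code. Everything here is hypothetical--`ask_model` is a canned stub standing in for a real LLM API call--but it shows the control flow of a review-then-fix pass:

```python
def review_and_fix(code, ask_model, max_rounds=3):
    # Ask the model to review the output, then to fix what it found,
    # and repeat until it reports no bugs (or we give up).
    for _ in range(max_rounds):
        bugs = ask_model(f"List any bugs in this code:\n{code}")
        if bugs.strip().lower() == "none":
            return code
        code = ask_model(f"Fix these bugs:\n{bugs}\n\nCode:\n{code}")
    return code

def fake_model(prompt):
    # Stub standing in for an LLM, just to make the loop runnable.
    if prompt.startswith("List any bugs") and "x = 1/0" in prompt:
        return "division by zero on line 1"
    if prompt.startswith("Fix these bugs"):
        return "x = None  # avoid dividing by zero"
    return "none"

print(review_and_fix("x = 1/0", fake_model))
# -> x = None  # avoid dividing by zero
```

    The hard part isn't the loop; it's that the reviewer and the fixer are the same fallible model, which is why the comment above suggests bringing in "different eyes".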

  • Yes and it should not be in a handful of companies and also be regulated up the ying yang. https://www.smartless.com/episodes/episode/256975de/mit-professor-max-tegmark-live-in-boston

  • I'm currently building a Jungian shadow work (a kind of psychotherapy) web app using local machine learning, and it's doing a decent enough job to continue developing it.

    ChatGPT 4.0 is also quite helpful in making my Python code less terrible, and it's good at guiding me through wherever I'm facing challenges, since I'm more of an ops person than a developer. Can't complain, though GPT-4.0's coding quality has declined noticeably within the last few weeks.

  • I've been using it at my job to help me write code, and it's a bit like having a sous chef. I can say "I need an if statement that checks these values" or "Give me a loop that does x, y, and z" and it'll almost always spit out the right answer. So coding, at least most of the time, changes from avoiding syntax errors and verifying the exact right format into asking for and assembling parts.

    But the neat thing is that if you have a little experience with a language you can suddenly start writing a lot of code in it. I had to figure out something with Ansible with zero experience. ChatGPT helped me get a fully functioning Ansible deployment in a couple days. Without it I'd have spent weeks in StackOverflow and documentation trying to piece together the exact syntax.

  • We’ve been using it at my day job to help us outline ideas for our content writers. It writes garbage content on its own, but it is a decent tool for organizing ideas.

    At least that is what we use it for. I’m sure there are other valuable uses, but it is not as valuable (to me at least) as it has been made out to be.

  • As someone who works in machine learning (ML) research, I can say the use of ML has hit almost every scientific discipline you can imagine, and it's been tremendously helpful in pushing research forward.

  • Just because it's 'the hot new thing' doesn't mean it's a fad or a bubble. It doesn't mean it isn't those things either, but... the internet was once the 'hot new thing', and it was both a bubble (completely overhyped at the time) and a real, tidal-wave change to the way that people lived, worked, and played.

    There are already several other outstanding comments, and I'm far from a prolific user of AI like some folks, but - it allows you to tap into some of the more impressive capabilities that computers have without knowing a programming language. The programming language is English, and if you can speak it or write it, AI can understand it and act on it. There are lots of edge cases, as others have mentioned below, where AI can come up with answers (by both the range and depth of its training data) where it's seemingly breaking new ground. It's not, of course - it's putting together data points and synthesizing an output - but even if mechanically it's 2 + 3 = 5, it's really damned impressive if you don't have the depth of training to know what 2 and 3 are.

    Having said that, yes, there are some problematic components to AI (from my perspective, the source and composition of all that training data is the biggest one), and there are obviously use cases that are, if not problematic in and of themselves, at very least troubling. Using AI to generate child pornography would be one of the more obvious cases - it's not exactly illegal, and no one is being harmed, but is it ethical? And the more societal concerns as well - there are human beings in a capitalist system who have trained their whole lives to be artists and writers and those skills are already tragically undervalued for the most part - do we really want to incentivize their total extermination? Are we, as human beings, okay with outsourcing artistic creation to this mechanical turk (the concept, not the Amazon service), and whether we are or we aren't, what does it say about us as a species that we're considering it?

    The biggest practical reason not to get too swept up with AI is that it's limited in weird and not totally understood ways. It 'hallucinates' data. Even when it doesn't make something up, the first time you run up against the edges of its capabilities - it suggests code that doesn't compile, or an answer that is flat, provably wrong, or it says something crazy or incoherent, or generates art featuring humans with the wrong number of fingers or body horror - well, then you realize you should sort of treat AI like a brilliant but troubled and maybe drug-addicted coworker. Man, there are some things it is just spookily good at. But it needs a lot of oversight, because you can cross over from spookily good to what-the-fuck pretty quickly and completely without warning. 'Modern' AI is only different from previous AI systems (I remember chatting with Eliza in the primordial moments of the internet) because it maintains the illusion of knowing much, much better.

    Baseless speculation: I think the first major legislation of AI models is going to require an understanding of the training data and 'not safe' uses - much like ingredient labels were a response to unethical food products, and much like the government stepped in to regulate how, where, and why cars could be used as they grew in size, power, and complexity, to protect users from themselves and also to protect everyone else from the users. There's also, at some point, I think, going to be some major paradigm shifting about training data - there's already rumblings, but the idea that data (including this post!) that was intended for consumption by other human beings at no charge could be consumed into an AI product and then commercialized on a grand scale, possibly even to the detriment of the person who created the data, is troubling.

  • AI has gone through several cycles of hype and winter. There's even a Wikipedia page for it: https://en.m.wikipedia.org/wiki/AI_winter

    Of course it's valuable to discuss the dangers and inequities of a new technology. But one of the dangers is being misled.

  • I like to build up fictional settings. Not being limited to commissioning art/easy conceptualization without resorting to nicking images as-is from the internet is extremely useful.

  • layman here.

    probably because..

    • it can sift through a lot of garbage.
    • it's easy to use, and not complicated to understand its value.
    • it's useful. like a super search engine for idiots.
    • it can probably automate a lot of jobs. also it can probably correct or cover up a lot of gaping flaws that have existed for the last few decades.
    • there's nothing else exciting going on right now.
    • it is an interesting and valuable tool. progress has hit a point at which it is hard to ignore the achievements.

    ** relating to LLMs/chatgpt types. snarky, opinionated, and somewhat speculative, subjective review!

  • As a programmer, I think it’s scary how AI is now able to write functioning programs from natural language input. Sure, it’s not perfect. It’s still pretty mediocre at the task. But a few years ago this was way outside the realm of possibility.

    It can even correct the code it has written if there’s any error (with varying results).

    What will happen in five years time? Ten years? My fear is that it will only need to be “good enough” to replace most of the programmer’s work. Unlike self driving cars, where “good enough” isn’t good enough.
