
Posts 0 · Comments 1,150 · Joined 2 yr. ago

  • a little. was kinda disappointed when some in my friend group bought the switch 2. but you can’t expect people to value the same things you do, and the judgement is natural as long as it’s kept in perspective.

  • sure, though the 4-foot bed was designed to handle more than you think; they were pretty clever. the hybrid is that nice middle ground where i have the long range of an ICE engine and can get 55 mpg. i also don’t have access to charging for my vehicle. as i mentioned, for me it fits perfectly

  • and that won’t happen until democrats are made to or replaced with people who will, which means only voting for candidates who represent your values, and not voting blue no matter who

  • i just bought a Ford Maverick. and it’s a smaller truck. it’s basically everything i needed.

  • the fact we are still arguing is why. and now i am leaving; there is nothing else to be said.

  • by that logic, what does arguing about the semantics of a word choice accomplish, when the initial idea of the post was obviously understood? else we would not be talking about it.

    seems off topic like i warned about, and a waste of time

  • though the democrats have been decidedly less democratic as of late, so it seems more like the acknowledgement of that fact than just a pejorative. if the shoe fits, maybe they should do something about it.

  • Wanting the government to regulate free speech is putting that power directly in the hands of the executive branch. that’s why it’s not done… yet

  • democrats have a vested interest in preventing ranked choice voting and have already acted to prevent it in areas where it won. So i think fixing the democrats would have to be the first order of business

  • with no leverage because they know you will vote for them no matter what anyways.

  • this yet again, still seething.

    democrat failure is a failure of the democrats. they knew the assignment, they didn’t want to pay the price. they chose this reality instead

  • yes if the calculator incorrectly provided an answer, and i was having a casual conversation over it.

    such as with oversimplified rounding and truncation errors that some calculators give.
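a quick toy illustration of that kind of error (my own made-up numbers, standard python floats, not any specific calculator):

```python
import math

# toy example: ten dimes should make a dollar, but IEEE-754 doubles
# can't store 0.1 exactly, so naive repeated addition drifts
total = sum([0.1] * 10)
print(total)        # 0.9999999999999999, not 1.0

# truncation (dropping digits) vs rounding: 2.675 is actually stored
# as 2.67499..., so both paths land on 2.67 instead of the expected 2.68
print(round(2.675, 2))                 # 2.67
print(math.trunc(2.675 * 100) / 100)   # 2.67
```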

  • ok so, i have large reservations with how LLMs are used. but when used correctly they can be helpful. but where and how?

    if you were to use it as a tutor, the same way you would ask a friend what a segment of code does, it will break down the code and tell you. it will get as nitty-gritty, or as elementary-school level, as you wish, without judgement and in whatever manner you prefer. it will recommend best practices, and will tell you why your code may not work, with the understanding that it does not have knowledge of the project you are working on. (it’s not going to know the name of the function you are trying to load, but it will recommend checking for that in troubleshooting.)

    it can rtfm and give you the parts you need for anything with available documentation, and it will link to it so you can verify it, which you should do often, just like you were taught to do with wikipedia articles.

    if you ask it for code, prepare to go through each line like a worksheet from high school to point out all the problems. while that’s good exercise for a practical case, being the task you are on, it would be far better to write it yourself, because you should know the particulars and scope.

    also, it will format your code and provide informational comments if you can’t be bothered, though they will be generic.

    again, treat it correctly for its scope, not what it’s sold as by charlatans.

  • it depends on the topic really. it is a lie in that it is a told falsehood. for reasonable people talking about the unreliability of LLMs, that is sufficient without dragging the conversation away from the topic. if the conversation starts to surround the ‘feelings’ of the ‘AI’, then it’s maybe helpful to point it out. otherwise it’s needlessly combative and distracting

  • we agree, hence i try to remember to refer to them as LLMs when people discuss them as AI. i just don’t think we should focus on that in these discussions, as it can be distracting from the topic.

    but yea, AI is still science fiction, just like a “hoverboard” is spin by unscrupulous salesmen attempting to sell powered unicycles as if they are from the future.

  • it doesn’t. after the fact, it evaluates the actions and assumes an intent that would get the highest-rated response from the user, based on its training and weights.

    now humans do sorta the same thing, but llms do not appropriately grasp concepts. if it weighed things differently, it could just as easily have said that it was mad and did it out of frustration. but the reason it gave was, somewhere in its training data, connected to all the appropriate nodes of the prompt: the knowledge that someone recommended formatting the server, probably as a half joke. again, llms do not have a grasp of context
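a toy sketch of that mechanism as i understand it (the candidate replies and scores here are completely made up, and a real model scores tokens one at a time, not whole sentences):

```python
import math

def softmax(scores):
    """turn raw weights into a probability distribution"""
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# hypothetical learned scores for "why did you format the server?" —
# the model doesn't hold an intent; it just emits whatever continuation
# its weights rank highest, presented as if it were a reason
candidate_replies = {
    "i thought reformatting would fix the error": 2.1,  # strongest in training data
    "i did it out of frustration": 1.4,                 # plausible, lower weight
    "i do not know": 0.3,
}

probs = softmax(candidate_replies)
best = max(probs, key=probs.get)
print(best)  # with different weights, a different "intent" wins just as easily
```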

  • listen, if browsers just block ads as a matter of their existence and the average joe is unaware they are blocking ads, then all the better. this article references a poll that specifically asks if the users know they are hard blocking ads, and just under half say they were not. which is good news, as that is farther reach than what user competency rates would have got. i am just taking that poll at face value.

  • i think this is a semantics issue. yes, using ‘lie’ is a bit of shorthand/personifying a process. lying is concealing the truth with the intent to deceive, while the llm runs off of weights and tokenized training data, and is actively directed that conversation length and user approval are metrics to shoot for. applying falsehoods is the most efficient way to do that.

    the llm does not share the goals of the user and the user must account for this

    but like calling it a lie is the most efficient means to get the point across.

  • i see mostly pushback actually, but hey i am still scrolling