
Posts: 0 · Comments: 744 · Joined: 1 yr. ago

  • I actually like the argument here, and it's nice to see it framed in a new way that might avoid tripping the sneer detectors of people inside or on the edges of the bubble. As I've said several times here, machine learning and AI are legitimately very good at pattern recognition and reproduction, to the point where a lot of the problems (including the confabulations of LLMs) come from identifying and reproducing the wrong pattern in the training data rather than whatever aspect of the real world the model was expected to derive from that data. But even granting that, there's a whole world of cognitive processes that can be imitated but not replicated by a pattern-reproducer. Given the industrial model of education we've introduced, a straight-A student is largely a really good pattern-reproducer, better than any extant LLM, while the sort of work that pushes the boundaries of science forward relies on entirely different processes.

  • I feel like this will primarily end up creating opportunities in the blackhat and greyhat spaces, as LLM-generated software and configurations open up and replicate vulnerabilities and insecure design patterns while simultaneously creating a wider class of unemployed or underemployed ex-developers with the skills to exploit them.

  • I wouldn't say pearl-clutching so much as eye-rolling. Though we do dip into full BEC mode sometimes, and the stubsack in particular can swing wildly between moral condemnation, intellectual critique, and calling out straight-up cringe.

  • I mean, it's obviously true that games have their own internal structures and languages that aren't always obvious without knowledge or context, and the FireRed comparison is a neat case where you can see that language improving as designers get both more tools (here meaning colors and pixels) and more experience in using them. But even in the LW thread they mention that when humans run into that kind of problem they don't just act randomly for 6 hours; they come up with some systematic approach to solving the problem, walk away from the game to ask for help, or something else. You also have the metacognition to easily understand "that rug at the bottom marks the exit" once it's explained, which I'm pretty sure the LLM doesn't have the ability to process. It's not even like a particularly dumb 6-year-old: even if the 6-year-old is prone to similar levels of over-matching and pattern-recognition errors, they have an actual conscious brain to help solve those problems. The whole thing shows once again that pattern recognition and reproduction can get you impressively far in terms of imitating thought, but there's a world of difference between that imitation and the real deal.

  • Also I think he doesn't understand MAD, like, at all. The point isn't that you can strike your enemy's nuclear infrastructure and prevent them from fighting back; in fact, that's the opposite of the point. MAD as a doctrine is literally designed around the fact that you can't do this, which is why the Soviets freaked out when it looked like we were seriously pursuing SDI.

    Instead, the point was that nuclear weapons were so destructive and so hard to defend against that any move against the sovereignty of a nuclear power would result in a counter-value strike, and whatever strategic aims were served by the initial aggression would have to be weighed against something between the deaths of millions of civilians in the nuclear annihilation of major cities and the straight-up end of human civilization, or indeed of all life on Earth.

    Also, if you wanted to reinstate MAD, I think the US, Russia, and probably China have more than enough nukes to make it happen.

  • Odds that this catches some Israeli nationalists in the net because they were posting about Hamas and arguing with the supposed sympathizers? Given their moves to gut the bureaucracy, I can't imagine they have the manpower to have an actual human review all the people this is going to flag.

  • DOGE has so far failed to realize that marking people in the ancient Social Security database as 150 years old means they’re dead and not receiving payments.

    You know, I'm deeply curious where he thinks these payments are going. As in, are banks holding accounts for these people? Are they directly cashing checks somewhere? Even if Social Security is issuing the funds, someone has to be receiving them, and a casket, while a large up-front investment, can't sign for deliveries, as it were.

  • Details emerge about Natal conference in Austin later this month, set to feature figures linked to far-right politics

    I'm also learning some details about the Vegan Conference in Seattle, set to feature people who don't eat meat.

  • Noting up front that I'm trusting you rather than subjecting myself to that crap firsthand.

    I think it's like you say: what matters isn't that it makes a compelling argument, what matters is that it makes an argument, and that it's on a site like Medium, where it can look more credible than the same argument would if it were copied directly into a Reddit comment. Just the implication that someone else in a relative position of authority believes that you're right and the people criticizing you are wrong is enough to alleviate the cognitive dissonance.

  • True, but I want to be absolutely clear that this isn't some kind of "efficiency" or "profit motive" or whatever. Making ever more obscene amounts of money is part of the goal, of course, but I think there's a deeper motivation: not wanting to acknowledge their responsibility for the problems they're trying to solve, while also not giving up the power they hold over the institutions and organizations where those problems exist.

  • The "fix your data" line matters a lot here. However hard the job is it's entirely feasible for a person to do it. Like, this isn't a case where we need magic to solve the underlying problems, and in a lot of cases there are real advantages to solving them. But doing so would require admitting they had previously done (or are currently doing) stupid or evil things that need to be fixed and paying someone a fair chunk of money to do so.

  • I think the central challenge of robotics from an ethical perspective is similar to AI's, in that the mundane reality is less actively wrong than the idealistic fantasy. Robotics, even more than most forms of automation, is explicitly about replacing human labor with a machine, and the advantages that machine has over people are largely due to it not having moral weight. You could pay a human worker the same amount of money that the electricity to run a robot would cost; it would just be evil to do that. You could work your human workforce as close to 24/7 as possible outside of designated breaks for maintenance, but it would be evil to treat a person that way.

    At the same time, the fantasy of "hard AI" is explicitly about creating a machine that, within relevant parameters, is indistinguishable from a human being, and as those parameters expand, the question of whether that machine ought to be treated as a person, with the same ethical weight as a human being, becomes harder. If we create Data from TNG, he should probably have rights, but the main reason anyone would be willing to invest in building Data is to have someone with all the capabilities of a person but without the moral (or legal) weight.

    This creates a paradox of the heap: clearly there is some point at which a reproduction of human cognition deserves moral consideration, and it hasn't (to my knowledge) been conclusively proven impossible to reach. But the current state of the field obviously doesn't have enough of an internal sense of self to merit that consideration, and I don't know exactly where that line should be drawn. If the AGI crowd took their ideas seriously this would be a point of great concern, but of course they're a derivative neofascist collection of dunces, so the moral weight of a human being is basically null to begin with, neatly sidestepping the problem.

    But I also think you're right that this problem is largely a result of applying ever-improving automation technologies to a dysfunctional and unjust economic system, where any improvement in efficiency effectively creates a massive surplus in the labor market. This drives down the price of labor (i.e. how well workers are treated) and contributes to the immiseration of the larger part of humanity, rather than liberating people from the demands for time and energy placed on them by the need to eat food and stuff. If we can deal with the constructed system of economic and political power that surrounds this labor, it could and should be liberatory.