YourNetworkIsHaunted @awful.systems
‘It became clear to me that people wanted more children than they were having,’ Babu says.
So clearly the best action is to constantly tell everyone how great kids are and that they should totally have them. Because that solves the problem of people wanting kids they don't/can't have. I try to read even our designated sneer fodder in good faith but I can't understand why anyone thinks these people are at all intelligent beyond the "only slightly less than average" level. I thought Good Will Hunting taught everyone the difference between smart and rich, but maybe that was just me.
The continued polytopia mentions just keep reminding me of this take from Dave Karpf. Like, he's not talking about incredibly deep games here (no offense to the people who love them), and in the context of him trying to take up the reins of Presidentissimo or whatever, all the arguing and doubting about his gamer cred is obscuring how weird this is, which is the part worth focusing on. Like, if he was trying to claim he was a Go all-star or something that would be one thing. Even Chess has tradition behind it, even if its actual utility for learning more general strategic thinking is more questionable. But Polytopia and Diablo? Really? If we did start apportioning political power to whoever can execute a basic strategy while clicking as fast as possible I think we'd all be bowing down to God-Emperor Flash or something.
Anyways, even if we put my unearned strategy gaming elitism aside this is such a dumb argument to be having in the first place and I don't know that I can forgive Elon for making it part of the problem.
My gut says that liquidity in this context means "making sure that there are tokens available to purchase for initial buyers" or in other words listing them on the market instead of distributing them at initial purchase price.
Pretty sure there's gonna be a contract somewhere that defines raw intelligence as "the amount of money you make for OpenAI."
If only someone had written a pretty interesting case study in how you can use valid-looking data to prove anything, even the existence of psychic powers. And people have been trying to scientifically justify racism for just about as long as the scientific method has been a thing, while studying psychic powers didn't really pick up until the latter half of the 20th century.
I'm sure I'm not the first to note it, but there is a kind of irony in Scott and the gang using such a clear example of a motte-and-bailey argument (got to find a better phrase for that. Maybe some pithy reference to Patton at Calais to maintain the history theme? Inflatable Tank Defense?) in regards to IQ. When talking among friends they treat IQ tests like they are a strong correlate with innate intelligence, no caveats. As such IQ test scores are a reason to ignore environmental factors and not bother investing in equity-minded interventions. But when someone makes the obviously racist conclusion too visible, the argument shifts to be about how actually the correlations between IQ and environmental factors are obvious and really this supports anti-racism. It's a straightforward form of decontextualization that relies on completely ignoring the entire history and contemporary arguments around IQ to defend a single data point. Of course once everyone agrees with that data point they can go right back to the wildly racist nonsense that they were doing in the first place.
I don't think the problem is that it all sounds the same so much as that a lot of artists feel a tension in deciding where to aim on the spectrum from their own pure artistic vision to the most immediately marketable thing. It's easy to romanticize throwing your art into the void and praying that it resonates with just one other human soul, but if you're trying to be a professional then your ability to get food rests on being able to find enough of an audience that you can get some kind of monetization. It can be a difficult balance to find depending on what kind of art you're driven to produce.
But how are we supposed to grift our absurd ideology into respectability without having a stately English manor to entertain people in??
There's something to be said here about the general disconnect between craft and productivity; craft being the art of doing a thing well and productivity being the act of creating a product efficiently. Craft is innately satisfying, particularly when the task is difficult or finicky. However, those same circumstances are toxic to productivity because working through problems takes time and effort. It requires craftsmanship. But if you cut out the need for craftsmanship by sanding off those finicky bits you can increase productivity massively, at the cost of replacing skilled and satisfied craftsmen with immiserated labor drones. This may be economically valuable in terms of raw GDP but I don't know that it's spiritually or societally sustainable and I honestly suspect that the current reactionary moment is tied to this at least in part. So naturally the moneyed classes are using generative AI to push even farther down the productivity path as though that's going to solve the underlying problem. Like, in my sci-fi version of this story it either ends with apocalyptic revolution or the extermination of the human need to have a soul or whatever. And I'm pretty sure that the a16z crowd would unironically prefer the latter.
I mean, I would put DF pretty high up on the list of software projects that possibly contain some serious paragraph-in-every-textbook-forever type of comp sci advancement.
Paul I am begging you to actually write out a fucking timeline. Apparently woke started in the 80s in universities when the (white) civil rights protestors of the 70s got tenure in the 60s, as an inevitable and predictable extension of political correctness in the 90s. From the title you're obviously going to indulge the conservative fantasy that "wokeness" is a coherent thing rather than a political tool to dismiss calls for action to actually address blatant injustice. But if you're going to bullshit me, at least do it competently and have an internally consistent narrative that allows for the natural passage of time.
The fact that they spell out DNS as "Domain Name Protocol" rather than what the acronym actually stands for (Domain Name System) is baffling and maddening.
Well obviously they can't be the money protocol, whatever that even means. Surely something like FIX would be the closest thing to an actual protocol for money, as opposed to a system.
Even if that was true, they have to understand that other people exist and may take advantage of this, right? Like, even if you believe you and your friends are paragons of moral and intellectual virtue, the same law applies to villains and dumbasses.
I remember, from my misspent youth reading Scott's ramblings, a fair bit of antipathy towards FDA regulations in particular. In what I can only attribute to ignorance of history, they fall prey to the standard libertarian talking points about slowing down drugs that could improve people's lives, never mind the fact that in the absence of those regulations everybody who could hypothetically benefit from psychedelic nootropics or whatever would have been too busy dealing with phocomelia to care.
See, I feel like the one thing that Generative AI has been able to do consistently is to fool even some otherwise-reasonable people into thinking that there's something like a person they're talking to. One of the most toxic impacts that it's had on online discourse and human-computer interactions in general is by introducing ambiguity into whether there's a person on the other end of the line. On one hand, we need to wonder whether other posters on even this forum will Disregard All Previous Instructions. On the other hand, it's a known fact that a lot of these "AI" tools are making heavy use of AGI technologies - A Guy in India. Before the bubble properly picked up my wife got contracted to work for a company that claimed to offer an AI personal assistant. Her job would have literally been to be the customer's remote-working personal assistant. I like to think that her report to the regulators may have been part of what inspired these grifts to look internationally for their exploitable labor. I don't think I need to get into the more recent examples here of all forums.
Obviously yelling at your compiler isn't going to lead to being an asshole to actual people any more than smashing a keyboard or cursing after missing a nail with a hammer. And to be fair most of the posters here (other than the drive-thrus) aren't exactly lacking in class consciousness or human decency or whatever you want to call it, so I'm probably preaching to the choir. But I do think there's a risk that injecting that ambiguity into the incidental relations we have with other people through our technologies (e.g. the chat window with tech support that could be a bot or a real agent depending on the stage of the conversation) is going to degrade the working conditions for a lot of real people, and the best way to avoid that is to set the norm that it's better to be polite to the robot if it's going to pretend to be a person.
See this is why I try to do my reading here at night, because now when I feel sad and angry for the rest of the day it's gonna be like 5 hours tops.
Gamers, for all their faults, have been pretty consistently okay on generative AI, at least in the cases I've seen. It doesn't hurt that nVidia keeps stapling features like this onto its hardware, features that supposedly improve performance but at the cost of breaking things and/or requiring more work from devs who are already being run ragged.
Also, I can almost guarantee that the neural texture stuff they're talking about won't see enough use from developers to actually deliver improvements. Let's do a bunch more work to maybe get some memory savings on some of the highest-end hardware!
Counterpoint: to what extent are hyperkludges actually a unique thing versus an aspect of how technologies and tools are integrated into human context? Like, one of the original examples is the TCP/IP stack, but as anyone who has had to wrangle multiple vendors can attest, a lot of the value in that standardization necessarily comes from the network effects - the fact that it's an accepted standard. The web couldn't function if you had a bespoke protocol stack hand-made to elegantly handle the specific problems of a given application, not just because of the difficulty of building that much software (i.e. network effects on the design and construction side) but because of how unwieldy and impractical it would be to get any of those applications in front of people. The fit of those tools for a given application is secondary to how much more cleanly the entire ecosystem can operate because they are more limited in number.
The OP also talks about how embedded the history of a given problem is in the solution, which feels like the central explanation for this trend. In that sense a hyperkludge isn't so much a unique pattern that some things fall into as a way of indicating a particularly noteworthy whorl in the fractal infinikludge that is all human endeavors.
I've watched a few of those "I taught an AI to play tag" videos from some time back, and while it's interesting to see what kinds of degenerate strategies the computer finds (trying to find a way out of bounds being a consistent favorite after enough iterations), it's always a case of "wow, I screwed up in designing the environment or rewards" and not "dang, look how smart the computer is!"
As always with this nonsense, the problem is that the machine is too dumb to be trusted rather than too smart and powerful. Like, identifying patterns that people would miss is arguably the biggest strength of machine learning in general, but that's not the same as those patterns being meaningful or useful.
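To be concrete about how easy it is to screw up the rewards, here's a minimal toy sketch. It's entirely my own invented example, not taken from any of those videos, and names like `reward` and `greedy_runner_move` are just illustrative: a 1D tag game whose reward forgets to penalize leaving the arena, so the greedy "optimal" play is to run out of bounds.

```python
# Toy sketch of the failure mode described above (my own invented example):
# the runner's reward is just its distance from the chaser, with no penalty
# for leaving the arena, so a purely greedy agent "wins" by sprinting out of
# bounds -- a rational response to a sloppy reward, not a smart computer.

ARENA = range(0, 10)  # legal positions 0..9; anything else is out of bounds


def reward(runner: int, chaser: int) -> float:
    """Naive reward: farther from the chaser is better. No out-of-bounds check."""
    return float(abs(runner - chaser))


def greedy_runner_move(runner: int, chaser: int) -> int:
    """Pick whichever of (left, stay, right) maximizes the naive reward."""
    return max((runner - 1, runner, runner + 1), key=lambda pos: reward(pos, chaser))


runner, chaser = 5, 4
for step in range(8):
    runner = greedy_runner_move(runner, chaser)
    chaser += 1 if chaser < runner else -1  # the chaser plods toward the runner
    status = "in bounds" if runner in ARENA else "OUT OF BOUNDS"
    print(f"step {step}: runner={runner} ({status}) chaser={chaser} reward={reward(runner, chaser)}")

# The fix isn't a smarter agent, it's a better-specified reward, e.g.
#     return float(abs(runner - chaser)) if runner in ARENA else -100.0
```

Within a few steps the runner walks straight off the edge of the map, which is exactly the "wow, I screwed up in designing the environment or rewards" outcome rather than anything resembling insight.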