Architeuthis @awful.systems · Posts 7 · Comments 220 · Joined 2 yr. ago

If bitcoin mining can only be profitable at scale by (among other things) not letting proper noise reduction solutions cut into your profit margins, saying that lax noise pollution laws are the issue seems disingenuous.
"Assange becomes a Russian asset because being a low-key sex pest somehow gives some European authorities cause to want to send him packing to the US, where he is wanted for espionage" should probably be one of the steps, but in general, yes.
The interminable length has got to have started out as a gullibility filter before ending up as an unspoken imperative for being taken seriously in those circles; isn't HPMOR like a million billion chapters as well?
Siskind for sure keeps his wildest quiet-part-out-loud takes until the last possible minute of his posts, when he does decide to surface them.
There's also the Julian Assange connection, so we can probably blame him for Trump being president as well.
IKR like good job making @dgerard look like King Mob from the Invisibles in your header image.
If the article was about me I'd be making Colin Robinson feeding noises all the way through.
edit: Obligatory only 1 hour 43 minutes of reading to go then
I like how you lose faith in your argument the longer your post goes on. Maybe start with the last sentence next time.
It hasn't worked 'well' for computers since like the Pentium, what are you talking about?
The premise was pretty dumb too, as in: if you notice that a (very reductive) technological metric has been rising sort of exponentially, you should probably assume something along the lines of "we're still at the low-hanging-fruit stage of R&D and it'll stabilize as the field matures", instead of proudly proclaiming that surely it'll approach infinity and break reality.
There's nothing smart or insightful about seeing a line in a graph trending upwards and assuming it's gonna keep doing that no matter what. Not to mention that type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community's blurb that you should check out.
So yeah, he thought up the Singularity which is little more than a metaphysical excuse to ignore regulations and negative externalities because with tech rupture around the corner any catastrophic mess we make getting there won't matter. See also: the whole current AI debacle.
I'm not spending the additional 34min apparently required to find out what in the world they think neural network training actually is that it could ever possibly involve strategy on the part of the network, but I'm willing to bet it's extremely dumb.
I'm almost certain I've seen EY catch shit on twitter (from actual ML researchers no less) for insinuating something very similar.
To have a dead simple UI where you, a person with no technical expertise, can ask in plain language for the data you want in the way you want it presented, along with some basic analysis that you can tell it to make sound important. Then you tell it to turn it into an email in the style of your previous emails, send it, and take a 50-minute coffee break. All this allegedly with no overhead besides paying a subscription and telling your IT people to point the thing at the thing.
I mean, it would be quite something if transformers could do all that, instead of raising global temperatures to synthesize convincing looking but highly suspect messaging at best while being prone to delirium at worst.
Google pivoting to selling shovels for the AI gold rush in the form of data tools should be pretty viable if they commit to it, I hadn't thought of it that way.
It's a sad fate that sometimes befalls engineers who are good at talking to audiences, and who work for a big enough company that can afford to have that be their primary role.
edit: I love that he's chief evangelist though, like he has a bunch of little google cloud clerics running around doing chores for him.
Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.
You make his position sound way more measured and responsible than it is.
His 'effective safety measures' are something like A) solve ethics, B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.
Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.
Sounds like Oxford increasingly did not want anything to do with them.
edit: Here's a 94 page "final report" that seems more geared towards a rationalist audience.
Wonder what this was about:
Why we failed [...] There also needs to be an understanding of how to communicate across organizational communities. When epistemic and communicative practices diverge too much, misunderstandings proliferate. Several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received. Finding friendly local translators and bridgebuilders is important.
tvtropes
The reason Keltham wants to have two dozen wives and 144 children, is that he knows Civilization doesn't think someone with his psychological profile is worth much to them, and he wants to prove otherwise. What makes having that many children a particularly forceful argument is that he knows Civilization won't subsidize him to have children, as they would if they thought his neurotype was worth replicating. By succeeding far beyond anyone's wildest expectations in spite of that, he'd be proving they were not just mistaken about how valuable selfishness is, but so mistaken that they need to drastically reevaluate what they thought they knew about the world, because obviously several things were wrong if it led them to such a terrible prediction.
huh
Past 1M words
That's gonna be 4,000 pages of extremely dubious porn and rationalist navel gazing, if anyone's keeping count.
Hi, my name is Scott Alexander and here's why it's bad rationalism to think that widespread EA wrongdoing should reflect poorly on EA.
The assertion that semi-frequent sexual harassment incidents going public is actually an indication of health for a movement, since it's evidence that there's no systemic coverup going on (and besides, everyone's doing it), is, uh, quite something.
But surely of 1,000 sexual harassment incidents, the movement will fumble at least one of them (and often the fact that you hear about it at all means the movement is fumbling it less than other movements that would keep it quiet). You’re not going to convince me I should update much on one (or two, or maybe even three) harassment incidents, especially when it’s so easy to choose which communities’ dirty laundry to signal boost when every community has a thousand harassers in it.
Once [a company] can make more money by screwing its customers, that screw-job becomes a fait accompli.
If they open their APIs so I can coordinate different brands without downloading a bazillion different apps and as long as I can do it without my data leaving the house, I'll think about it.