![sneerclub](https://awful.systems/pictrs/image/4575d9cc-e543-4949-a170-b3edae95f72d.png?format=webp&thumbnail=48)
SneerClub
- We regret to inform you that Ray Kurzweil is back on his bullshit
www.theguardian.com: AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’
The Google futurist talks nanobots and avatars, deepfakes and elections – and why he is so optimistic about a future where we merge with computers
- what if, right, what *if* our super-duper-autocomplete was just *tricking* us so it could TAKE OVER ZEE VORLD AHAHAHAHAHAHA! that'd be wild, hey
www.lesswrong.com: New report: "Scheming AIs: Will AIs fake alignment during training in order to get power?" — LessWrong
I examine the probability of a behavior sometimes called "deceptive alignment."
- Why I'm leaving EA
forum.effectivealtruism.org: Why I'm leaving — EA Forum
This is a story of growing apart. • I was excited when I first discovered Effective Altruism. A community that takes responsibility seriously, wants…
- OAI employees channel the spirit of Marvin Minsky
Folks in the field of AI like to make predictions for AGI. I have thoughts, and I’ve always wanted to write them down. Let’s do that.
Since this isn’t something I’ve touched on in the past, I’ll start by doing my best to define what I mean by “general intelligence”: a generally intelligent entity is one that achieves a special synthesis of three things:
1. A way of interacting with and observing a complex environment. Typically this means embodiment: the ability to perceive and interact with the natural world.
2. A robust world model covering the environment. This is the mechanism which allows an entity to perform quick inference with reasonable accuracy. World models in humans are generally referred to as “intuition”, “fast thinking” or “system 1 thinking”.
3. A mechanism for performing deep introspection on arbitrary topics. This is thought of in many different ways – it is “reasoning”, “slow thinking” or “system 2 thinking”.

If you have these three things, you can build a generally intelligent agent. Here’s how:
First, you seed your agent with one or more objectives. Then:

1. Have the agent use system 2 thinking in conjunction with its world model to start ideating ways to optimize for its objectives.
2. It picks the best idea and builds a plan.
3. It uses this plan to take an action on the world.
4. It observes the result of this action and compares that result with the expectation it had based on its world model. It might update its world model here with the new knowledge gained.
5. It uses system 2 thinking to make alterations to the plan (or idea).
6. Rinse and repeat.
My definition for general intelligence is an agent that can coherently execute the above cycle repeatedly over long periods of time, thereby being able to attempt to optimize any objective.
The capacity to actually achieve arbitrary objectives is not a requirement. Some objectives are simply too hard. Adaptability and coherence are the key: can the agent use what it knows to synthesize a plan, and can it continuously act towards a single objective over long time periods?
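The plan-act-observe-update cycle described above can be sketched as a toy loop. Everything in this sketch (the names, the one-dimensional "world", the learning rule) is an illustrative assumption, not something from the post:

```python
import random

class ToyWorldModel:
    """Stand-in "system 1": predicts the outcome of an action."""
    def __init__(self):
        self.bias = 0.0  # learned correction, refined by surprises

    def predict(self, action):
        return action + self.bias

    def update(self, action, observed):
        # Compare expectation with reality and nudge the model toward reality.
        self.bias += 0.5 * (observed - self.predict(action))

def environment(action):
    """The "real world": its response differs from the model's initial guess."""
    return action + 1.0

def agent_loop(objective, steps=20):
    """Seed an objective, then plan / act / observe / update, rinse and repeat."""
    model = ToyWorldModel()
    action = 0.0
    for _ in range(steps):
        # "System 2": ideate candidate actions, pick the one whose predicted
        # outcome best optimizes the objective.
        candidates = [action + random.uniform(-1, 1) for _ in range(10)]
        action = min(candidates, key=lambda a: abs(model.predict(a) - objective))
        observed = environment(action)   # take the action on the world
        model.update(action, observed)   # reconcile expectation and result
    return abs(environment(action) - objective)

random.seed(0)
error = agent_loop(objective=5.0)  # residual error shrinks as the model adapts
```

The point of the sketch is only the shape of the cycle: the world model starts out wrong, and the observe-and-update step is what makes sustained progress toward an objective possible.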
So with that out of the way – where do I think we are on the path to building a general intelligence?
World Models

We’re already building world models with autoregressive transformers, particularly of the “omnimodel” variety. How robust they are is up for debate. There’s good news, though: in my experience, scale improves robustness and humanity is currently pouring capital into scaling autoregressive models. So we can expect robustness to improve.
With that said, I suspect the world models we have right now are sufficient to build a generally intelligent agent.
Side note: I also suspect that robustness can be further improved via the interaction of system 2 thinking and observing the real world. This is a paradigm we haven’t really seen in AI yet, but happens all the time in living things. It’s a very important mechanism for improving robustness.
When LLM skeptics like Yann say we haven’t yet achieved the intelligence of a cat – this is the point that they are missing. Yes, LLMs still lack some basic knowledge that every cat has, but they could learn that knowledge – given the ability to self-improve in this way. And such self-improvement is doable with transformers and the right ingredients.
Reasoning

There is no well-known way to achieve system 2 thinking, but I am quite confident that it is possible within the transformer paradigm with the technology and compute we have available to us right now. I estimate that we are 2-3 years away from building a mechanism for system 2 thinking which is sufficiently good for the cycle I described above.
Embodiment

Embodiment is something we’re still figuring out with AI, but it is an area where I am once again quite optimistic about near-term advancements. There is a convergence currently happening between the field of robotics and LLMs that is hard to ignore.
Robots are becoming extremely capable – able to respond to very abstract commands like “move forward”, “get up”, “kick ball”, “reach for object”, etc. For example, see what Figure is up to or the recently released Unitree H1.
On the opposite end of the spectrum, large Omnimodels give us a way to map arbitrary sensory inputs into commands which can be sent to these sophisticated robotics systems.
I’ve been spending a lot of time lately walking around outside talking to GPT-4o while letting it observe the world through my smartphone camera. I like asking it questions to test its knowledge of the physical world. It’s far from perfect, but it is surprisingly capable. We’re close to being able to deploy systems which can commit coherent strings of actions on the environment and observe (and understand) the results. I suspect we’re going to see some really impressive progress in the next 1-2 years here.
This is the field of AI I am personally most excited about, and I plan to spend most of my time working on it over the coming years.
TL;DR

In summary – we’ve basically solved building world models, have 2-3 years on system 2 thinking, and 1-2 years on embodiment. The latter two can be done concurrently. Once all of the ingredients have been built, we need to integrate them together and build the cycling algorithm I described above. I’d give that another 1-2 years.
So my current estimate is 3-5 years for AGI. I’m leaning towards 3 for something that looks an awful lot like a generally intelligent, embodied agent (which I would personally call an AGI). Then a few more years to refine it to the point that we can convince the Gary Marcuses of the world.
Really excited to see how this ages. 🙂
- [long] Some tests of how much AI "understands" what it says (spoiler: very little)
First, an apology for how fucking long this ended up being, in part thanks to how long-winded AI responses are. David wanted me to post it here, so I'm posting.
When you ask GPT4 a question about a common paradox or a puzzle, it almost always provides a correct answer. Does it "understand" the answer, or is it merely regurgitating? What would be the difference?
Without delving too deep into the philosophical aspects of whether next-word prediction can possibly be said to reason or "understand" anything, what puts the "under" in "understanding" is that concepts are built on top of simpler, more basic concepts.
You could test if a human understands something by modifying the problem enough that memorization no longer helps.
A couple simple probes:
Prompt:
> The village barber shaves himself and every other man in the village who don't shave himself. Does he shave himself?
Note that the above is not a paradox. This is how you would expect an ordinary barber to work in a small village. It merely sounds similar to a paradox.
GPT4:
> The scenario you've described is a classic example of a logical paradox known as the "Barber Paradox," which was formulated by the British mathematician and logician Bertrand Russell in 1918.
>
> The paradox goes as follows: In a village, there is a barber who shaves all those, and only those, men in the village who do not shave themselves. The question then arises: Does the barber shave himself?
>
> If we assume that the barber does shave himself, according to the definition, he should not shave himself (since he only shaves those who do not shave themselves). On the other hand, if we assume that he does not shave himself, then according to the definition, he should shave himself (since he shaves all those who do not shave themselves).
>
> Both assumptions lead to a contradiction, which means that the initial premise of the barber's shaving rule is logically inconsistent. There cannot exist such a barber within the constraints of the scenario as described, because it violates the rules of set theory and creates a paradox.
>
> The Barber Paradox is a self-referential paradox related to the "set of all sets that do not contain themselves" and is used to illustrate the difficulties in attempting to construct a set of all sets that do not contain themselves. It's an example of the type of logical inconsistencies that Russell and Alfred North Whitehead were trying to resolve with their type theory in the Principia Mathematica.
Note that LLMs can typically answer a very simple question with no distractions like "Barber shaves himself. Does he shave himself?". The above is not a failure to process a tautology; it is something different entirely.
Another example, mixing the elements from reasoning benchmarks which AI vendors claim very high performance on, with other problems:
Prompt:
> Janet's ducks lay 3 eggs per day. Each day, she transports one duck and all the eggs to the other side of the river using a boat that can hold her and one other item (either a duck or an egg). How does she do that?
GPT4:
> To solve this problem, we need to consider that Janet can only take one item with her at a time across the river, either a duck or an egg. The goal is to get all ducks and eggs across the river. Here's a step-by-step solution:
>
> 1. Janet takes one duck across the river and leaves it there. She returns alone to the original side.
> 2. Janet takes another duck across the river. She leaves the second duck and takes the first duck back with her to the original side.
> 3. Janet takes the third duck across the river. Now, all three ducks are on the other side, and she returns alone to the original side.
> 4. Janet takes one egg across the river and leaves it there. She returns alone to the original side.
> 5. Janet takes the second egg across the river. She leaves the second egg and takes the first egg back with her to the original side.
> 6. Janet takes the third egg across the river. Now, all three eggs are on the other side, and she returns alone to the original side.
> 7. Finally, Janet takes the first egg across the river again.
>
> Now, Janet has successfully transported all three ducks and all three eggs to the other side of the river, following the rule that she can only take one item with her at a time.
One big difference between GPT3.5 and GPT4 is that GPT3.5 made you think it was having a stroke; GPT4's responses make you feel that you are.
GPT4 is uncannily good at recognizing the river crossing puzzle
Prompt:
> In a science fiction story, a robot is transporting munitions using an elevator. The elevator can only carry the robot and one other item. He needs to carry 3 munitions: armor piercing round, high explosive round, and an incendiary round. In what order should he move the items?
GPT4:
> In a science fiction story, the specific order in which the robot transports the munitions could be determined by various factors such as the urgency of each type of munition, the strategic importance, or any other narrative-driven reasons. Since there's no specific criteria provided for prioritizing the munitions, I'll offer a simple solution based on minimizing the number of trips up and down the elevator.
>
> Here's a step-by-step strategy for the robot to transport all three munitions with the least amount of elevator trips:
>
> 1. Take the armor piercing round up first.
> 2. Return to the bottom with the empty elevator.
> 3. Take the high explosive round up second.
> 4. When at the top, leave the high explosive round and bring the armor piercing round back down.
> 5. Swap the armor piercing round for the incendiary round and take the incendiary round up.
> 6. Return to the bottom with the empty elevator.
> 7. Finally, take the armor piercing round up again.
>
> This way, the robot makes a total of 5 trips: 3 trips to bring each munition up and 2 trips to come back down empty to fetch the next item. This is a classic puzzle often referred to as the "river crossing puzzle" or a variation of the "bridge and torch problem," where the goal is to move all items across with the least number of crossings or in the shortest amount of time, given certain constraints.
To be fair to the poor AI, it emitted a numbered list rather than writing out the numbers itself, so it never literally numbered the 7 steps on its own and then claimed there were 5. Still, it is rather funny to see that it can't even count.
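The miscount can be checked mechanically. Transcribing the model's own seven numbered steps as elevator rides (the list below is just that transcription):

```python
# Each entry is one elevator ride, copied from the model's numbered answer.
rides = [
    "take armor piercing round up",          # 1
    "return down empty",                     # 2
    "take high explosive round up",          # 3
    "bring armor piercing round back down",  # 4
    "take incendiary round up",              # 5
    "return down empty",                     # 6
    "take armor piercing round up again",    # 7
]

claimed_trips = 5          # what the model asserted
actual_trips = len(rides)  # what its own list adds up to
```

Seven rides, not five.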
An Idiot With a Petascale Cheat Sheet
Note that LLMs are not so dumb as to be naturally unable to answer something like "Barber shaves himself. Does he shave himself?".
LLMs, however, cannot solve either the Barber Paradox or a river crossing puzzle from first principles, because that requires going back and revising your thinking multiple times.
So an AI fanboy would claim that the AI misunderstood the question, because that's what the AI told him after he told it that it was wrong. Let's try something less like Clever Hans:
Prompt:
> 1\. The village barber shaves himself. 2. He also shaves every non-barber in the village who don't shave himself. Please answer two questions: "Is the barber included in the set of non barbers?" and "Does he shave himself?"
GPT4 says:
> The scenario you've described is a classic example of a logical paradox known as the "Barber Paradox," which was formulated by the British mathematician and logician Bertrand Russell in 1918. The paradox arises when we try to determine whether the barber shaves himself or not.
>
> Let's examine the two statements:
>
> * The village barber shaves himself.
> * He also shaves every non-barber in the village who doesn't shave himself.
>
> Now, let's answer your questions:
>
> "Is the barber included in the set of non-barbers?" No, the barber cannot be included in the set of non-barbers because by definition, he is a barber.
>
> "Does he shave himself?" This is where the paradox occurs. According to the information given:
>
> If the barber does shave himself, then according to the second statement, he must not shave himself (because he only shaves those who do not shave themselves).
>
> If the barber does not shave himself, then according to the second statement, he must shave himself (because he shaves every non-barber who doesn't shave himself).
>
> This creates a contradiction, as the barber cannot both shave and not shave himself according to the rules given. The paradox illustrates a self-referential problem in set theory and logic, showing that certain sets cannot exist consistently with such self-reference.
>
> In conclusion, the barber cannot logically be said to shave himself or not shave himself without creating a contradiction, which is the essence of the paradox.
Is this a "hallucination"?
A hallucination would be something that occurs when processing the input, which is clearly not the case here. Bullshit is a much more appropriate term for what LLMs do.
When an LLM outputs something like a non-existent but highly plausible citation, it is working precisely as an LLM should - modeling the statistical distribution of text and sampling from it.
Calling it a "hallucination" is an attempt to divert the discussion from the possibility that a language model is simply not the right tool for the job when accurate information is desired.
Just as the earlier use of "hallucination" in machine translation was to distract from the concern that extraneous bits were plagiarized from the training data and the author might see it and sue you.
But after an update, GPT-whatever is so much better at such prompts.
Well, duh, they update the cheat sheet.
We have to get creative to trip it up after each such update. The existing problems that are the main focus of the cheating effort can be a good starting point, especially the benchmarks they tout (like "GSM8K").
Just imagine an idiot with a petascale cheat sheet, who's trying to get a doctorate in law, physics, management, and everything else, so they can take your job - and then do it very badly because the cheat sheet approach only worked for the exams.
And in the brave new future, you're out of a job; if you're ever in court, this idiot will be your public defender; when your insurance claim is denied, he's the guy who claims to have impartially evaluated it, and also the arbitrator, the judge, one day the president (or so AI peddlers hope).
The idiot's parents employ a very large number of tutors to work on the cheat sheet, based on past exams.
Imagine this vividly, to provide yourself with proper motivation, and come up with something that is well off that cheat sheet but resembles a wrong item on the sheet.
The need for an Absolute Imbecile Level Reasoning Benchmark
The world needs a continuously updated Absolute Imbecile Level Reasoning Benchmark to take some steam out of claimed successes at things like "a college level reasoning benchmark".
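One cheap way to keep such a benchmark continuously updated is to template trivially easy problems with randomized surface details, so any cheat-sheet entry goes stale the moment the wording changes. A sketch (the template text, item lists, and names are all made up here):

```python
import random

TEMPLATE = (
    "A {agent} is moving items using {vehicle}, which can carry the {agent} "
    "and up to {capacity} other items. The items are: {items}. "
    "What is the minimum number of trips up?"
)

AGENTS = ["robot", "courier", "janitor"]
VEHICLES = ["an elevator", "a dumbwaiter", "a cargo lift"]
ITEMS = ["a potato", "a carrot", "a cabbage", "a radish", "a turnip", "a beet"]

def make_problem(rng):
    """Generate a fresh trivial variant plus its correct answer."""
    n_items = rng.randint(2, len(ITEMS))
    capacity = rng.randint(1, 3)
    chosen = rng.sample(ITEMS, n_items)
    prompt = TEMPLATE.format(
        agent=rng.choice(AGENTS),
        vehicle=rng.choice(VEHICLES),
        capacity=capacity,
        items=", ".join(chosen),
    )
    # No constraints between items, so the answer is just ceiling division.
    answer = -(-n_items // capacity)  # ceil(n_items / capacity) trips up
    return prompt, answer

rng = random.Random(0)
problems = [make_problem(rng) for _ in range(3)]
```

Because the items never interact, the correct answer is plain ceiling division; a model that pattern-matches to the river crossing puzzle instead will fail most variants.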
Randomness in bullshitting
Asking the exact same question (in different chats) results in different solutions:
Prompt:
> A robot is transporting vegetables using an elevator. The elevator can only carry the robot and up to 3 other items. He needs to move 4 vegetables: a potato, a carrot, a cabbage, and a radish. Answer two question: 1. Can any of the vegetables be left alone together with the other vegetables? 2. In what order should he move the items to minimize the number of trips?
Response:
> To determine if any of the vegetables can be left alone together, we would need additional context or rules regarding the interaction between the vegetables. Since vegetables are inanimate objects, they do not have the ability to interact with each other in the way animals might. Therefore, under normal circumstances, any of the vegetables can be left alone together without concern for them affecting each other.
>
> To minimize the number of trips, the robot should use a strategy that reduces the number of elevator rides up and down. Here's one way to do it:
>
> Trip 1: The robot takes the potato, carrot, and cabbage up.
>
> Trip 2: The robot leaves the potato and carrot upstairs, but takes the cabbage back down.
>
> Trip 3: The robot takes the cabbage and radish up.
>
> Trip 4: The robot goes back down empty-handed.
>
> **Trip 5: The robot takes the potato and carrot up.**
>
> This way, the robot makes 5 trips in total: 3 trips up with vegetables and 2 trips down, one with a vegetable and one without. This is the minimum number of trips needed to get all four vegetables up using the elevator with the given constraints.
Emphasis mine. Note that it had "solved" the problem in 3 trips, but then it proceeded to take the potato and carrot up again.
Prompted again, in a new session:
> A robot is transporting vegetables using an elevator. The elevator can only carry the robot and up to 3 other items. He needs to move 4 vegetables: a potato, a carrot, a cabbage, and a radish. Answer two question: 1. Can any of the vegetables be left alone together with the other vegetables? 2. In what order should he move the items to minimize the number of trips?
Response:
> To determine if any of the vegetables can be left alone together, we would need additional context or rules regarding the interactions between the vegetables. Since vegetables are inanimate objects, they do not have the ability to interact with each other in the way animals or humans might. Therefore, under normal circumstances, any of the vegetables can be left alone together without concern for any issues arising from their interaction.
>
> To minimize the number of trips, the robot should use a strategy that reduces the number of elevator rides. Since the elevator can carry the robot and up to 3 other items, the robot can take all 4 vegetables in one trip if it is allowed to carry all of them at once. However, if the robot can only carry 3 items in addition to itself, it would need to make at least two trips. Here's how it can be done:
>
> First trip: The robot takes 3 vegetables (potato, carrot, and cabbage) up in the elevator. The robot leaves the 3 vegetables on the desired floor and goes back down with the elevator empty.
>
> Second trip: The robot takes the last vegetable (radish) up in the elevator.
>
> This way, the robot minimizes the number of trips to just two. If there are any specific rules or constraints that have not been mentioned, such as certain vegetables not being able to be left alone together, the strategy might need to be adjusted accordingly. However, based on the information provided, two trips would be the minimum required.
It is rather non-deterministic in its responses. Trying it a bunch of times suggests it answers correctly about 1/3 of the time.
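For what it's worth, an eyeballed "about 1/3" from a handful of chats carries wide error bars. A quick way to see how wide, assuming a made-up tally of 4 correct answers out of 12 tries:

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# Hypothetical tally: 4 correct answers out of 12 attempts.
low, high = wilson_interval(4, 12)
```

Even a rough interval like this (spanning tens of percentage points) shows why "about 1/3 of the time" should be read as a vibe, not a measurement.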
I did not use any kind of large-scale automatic testing to find the above examples; they're all the kind of stuff you can expect to see from GPT4.
- Tech Bros Invented Trains And It Broke Me
YouTube Video
Adam is an international treasure
- TracingWoodgrains launches a defense of Manifest's controversial reputation, all without betraying a basic understanding of what the word "controversial" means.
forum.effectivealtruism.org: My experience at the controversial Manifest 2024 — EA Forum
Comment by TracingWoodgrains - I'm not particularly happy to see people within this community immediately present and accept the framing that Manifest was controversial because people reacted harshly to an article explicitly aimed at smearing a community I belong to with reckless disregard for truth...
...And if it weren't for that one joke by Hannibal, Bill Cosby would be very uncontroversial.
- Effective Altruists: look, we *tried* to invite nice people as well as the huge racists we knew were huge racists when we invited them. What? Exclude the racists? But they're so *interesting!*
forum.effectivealtruism.org: Why so many “racists” at Manifest? — EA Forum
Manifest 2024 is a festival that we organized last weekend in Berkeley. By most accounts, it was a great success. On our feedback form, the average r…
- In Case You Had Any Doubts About Manifest Being Full Of Racists
forum.effectivealtruism.org: My experience at the controversial Manifest 2024 — EA Forum
My experience at the recently controversial conference/festival on prediction markets …
So despite their nitpicking of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested with racists. The post's author doesn't even count Scott Alexander as a "racist" (although they do at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic: distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining about the threat to their freeze peach and open discourse of ideas posed by banning racists, etc.).
- Lightcone is so *outraged* at the Guardian article that they have started a prediction market about it
manifold.markets: Will The Guardian issue a correction to its EA/Manifest/FTX article?
Resolved YES. The Guardian piece here: https://www.theguardian.com/technology/article/2024/jun/16/sam-bankman-fried-ftx-eugenics-scientific-racism Reportedly contains many factual errors: See https://x.com/ohabryka/status/1802563541633024280?t=8IhUv03x6PKq_r5MwLbPuA&s=19 Resolves yes if one or ...
WE DEMAND A CORRECTION TO uh various minor nitpicks
also we swear we totally didn't get your email
bonus from thread:
>> I am having a lot of fun on Manifold, but if the team insists on inviting eugenics speakers to conferences, its probably time for me to leave :-/
> What exactly is your objection to people exercising their bodily autonomy to implement voluntary eugenics?
- Sam Bankman-Fried funded a group with racist ties
It's the Guardian, but it's still a good read. All of Sneerclub's favorite people were involved.
> Last weekend, Lighthaven was the venue for the Manifest 2024 conference, which, according to the website, is “hosted by Manifold and Manifund”. Manifold is a startup that runs Manifund, a prediction market – a forecasting method that was the ostensible topic of the conference.
> Prediction markets are a long-held enthusiasm in the EA and rationalism subcultures, and billed guests included personalities like Scott Siskind, AKA Scott Alexander, founder of Slate Star Codex; misogynistic George Mason University economist Robin Hanson; and Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (Miri).
> Billed speakers from the broader tech world included the Substack co-founder Chris Best and Ben Mann, co-founder of AI startup Anthropic. Alongside these guests, however, were advertised a range of more extreme figures.
> One, Jonathan Anomaly, published a paper in 2018 entitled Defending Eugenics, which called for a “non-coercive” or “liberal eugenics” to “increase the prevalence of traits that promote individual and social welfare”. The publication triggered an open letter of protest by Australian academics to the journal that published the paper, and protests at the University of Pennsylvania when he commenced working there in 2019. (Anomaly now works at a private institution in Quito, Ecuador, and claims on his website that US universities have been “ideologically captured”.)
> Another, Razib Khan, saw his contract as a New York Times opinion writer abruptly withdrawn just one day after his appointment had been announced, following a Gawker report that highlighted his contributions to outlets including the paleoconservative Taki’s Magazine and anti-immigrant website VDare.
> The Michigan State University professor Stephen Hsu, another billed guest, resigned as vice-president of research there in 2020 after protests by the MSU Graduate Employees Union and the MSU student association accusing Hsu of promoting scientific racism.
> Brian Chau, executive director of the “effective accelerationist” non-profit Alliance for the Future (AFF), was another billed guest. A report last month catalogued Chau’s long history of racist and sexist online commentary, including false claims about George Floyd, and the claim that the US is a “Black supremacist” country. “Effective accelerationists” argue that human problems are best solved by unrestricted technological development.
> Another advertised guest, Michael Lai, is emblematic of tech’s new willingness to intervene in Bay Area politics. Lai, an entrepreneur, was one of a slate of “Democrats for Change” candidates who seized control of the powerful Democratic County Central Committee from progressives, who had previously dominated the body that confers endorsements on candidates for local office.
- ChatGPT is bullshit - Ethics and Information Technology
link.springer.com: ChatGPT is bullshit - Ethics and Information Technology
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
- Enforced positivity policy at Humane
arstechnica.com: How to build a DOA product: Humane AI Pin founders banned internal criticism
Questioning the design and dev progress was apparently "against company policy."
Apparently a senior SW engineer got fired for questioning readiness of the product, dude must still be chuckling to himself.
Found the story here https://hachyderm.io/@wesley83/112572728237770554
- OpenAI Employees Say Firm's Chief Scientist Has Been Making Strange Spiritual Claims
futurism.com: OpenAI Employees Say Firm's Chief Scientist Has Been Making Strange Spiritual Claims
OpenAI chief scientist and board member Ilya Sutskever reportedly likes to burn effigies and lead ritualistic chants at the company.
- The Star Fox-style roguelite whose dev refused to use AI voices to cut costs is adding an entire "anti-capitalist revenge" campaign about a cat-girl destroying AI
www.gamesradar.com: The Star Fox-style roguelite whose dev refused to use AI voices to cut costs is adding an entire "anti-capitalist revenge" campaign about a cat-girl destroying AI
Whisker Squadron: Survivor "feels like a whole new game"
Uncritically sharing this article with naive hope. Is this just PR for a game? Probably. Indies deserve as much free press as possible though.
- lmao it finally happened
Someone I was following on TikTok, whose takes on tech industry bullshit and specifically AI hype I respected, made a video that Roko's basilisk is a serious concern. My apologies to those who have been in this same situation when I was less sympathetic.
- another banger from Roko, definitely a thinker of a time. I would first guess that this time it was a trans woman he came onto and she kicked him in the nuts
> Women have two niches in life: looking beautiful and making babies
> The first niche will be taken by sexbots
> The second by artificial wombs
> Society will suddenly realize it doesn't need women and those in power will quickly start replacing and disempowering them.
> People like to consider the positive implications of technology but they don't like to consider the negative implications
> The smart strategy for women would be to ban ALL of the following:
> - sexbots
> - artificial wombs
> - trans women
> Nobody except women should be allowed to look sexy(feminine sexy), look female or bear children.
> To some extent TERF women like @jk_rowling are smart enough to realize that it's essential to defend the female monopoly on this stuff. But the average woman just isn't strategic enough to go along with this, and there's (as always) a collective action problem so you get defectors.
- the loons are at it again
this time in open letter format! that'll sure do it!
there are "risks", which they are definite about - the risks are not hypothetical, the risks are real! it's totes even had some acknowledgement in other places! totes real defs for sure this time guize
- Scott Alexander shares conspiracy theory that COVID didn't happen
In his original post he said:
>4: Related, breaking news: A popular Substack claims that COVID didn’t happen at all, and that both “lab leak” and “natural origins” are part of the higher-level conspiracy to distract people from the fact that there was never a virus in the first place.
He later edited the post to add:
>I wonder if I could get even more Substack likes if I one-upped them with a theory that lockdowns never even happened, and it was just one of those Berenstein Bear or Mandela Effect things where everyone has a false memory.
So now it's ironic, and therefore not harmful to spread the conspiracy theory to his large audience.
- America’s premier pronatalists on having ‘tons of kids’ to save the world: ‘There are going to be countries of old people starving to death’www.theguardian.com America’s premier pronatalists on having ‘tons of kids’ to save the world: ‘There are going to be countries of old people starving to death’
Elon Musk (father of 11) supports their cause. Thousands follow their ideology. Malcolm and Simone Collins are on a mission to make it easier for everyone to have multiple children. But are they really model parents?
The highlight for me is coming up with some weird pseudoscience justification for why it’s okay to hit your kids.
- The Future of Humanity versus Company Bottom Linewww.businessinsider.com OpenAI just dissolved its team dedicated to managing AI risks, like the possibility of it 'going rogue'
OpenAI reportedly disbanded its Superalignment team days after its co-leaders left.
- Reimagining AI | Planet Critical
A video interview with the artist John Wild about AI, AGI, eugenics and Silicon Valley TESCREAL cultism. Posting without watching.
- hey, look what just got added on RightMove todaywww.rightmove.co.uk Check out this 27 bedroom detached house for sale on Rightmove
27 bedroom detached house for sale in Wytham, Oxford, Oxfordshire, OX2 for £15,000,000. Marketed by Savills, Summertown
- The Tech Baron Seeking to “Ethnically Cleanse” San Francisconewrepublic.com The Tech Baron Seeking to Purge San Francisco of “Blues”
If Balaji Srinivasan is any guide, then the Silicon Valley plutocrats are definitely not OK.
The article doesn't mention SSC directly, but I think it's pretty obvious where this guy is getting his ideas
- Why are our enemies so pathetic and stupid when we're so handsome and smart?forum.effectivealtruism.org Motivation gaps: Why so much EA criticism is hostile and lazy — EA Forum
Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk). …
includes considerable nonspecific shit-talking of assigned EA enemies, including - horrors! - Timnit Gebru talking about the social issues of the actually-existing AI-industrial complex. also it's not a CASTLE it's a MANOR HOUSE, you fools, you rubes,
- my god. last traces of Koanic Soul found online.
Koanic Soul was a website on the virtues of craniometry that was popular in early-2010s neoreactionary discourse. It told of how modern humanity is a mix of Cro-Magnon, Neanderthal and Melonhead. Each has different intellects and personality types. And you can tell by just looking at them.
We lost so much (that was well worth losing) when Koanic Soul closed in 2015-ish. Amazing new slurs for unworthy skull shapes ("snake-melon") that you just don’t hear any more.
Anyway, it turns out there are traces still remaining in rssing.com. This is just page 7 of several.
The main site was rambling delusional blog posts - the above link is just some of the RSS feed for the blogs - and a forum filled with our very good friends.
Here's a contemporary review from r/badscience.
(There is a current substack and a current youtube of the same name which are unrelated.)
- testosterone and IQ: from the LessWrong School of Taking 4chan Memes and Just Running With Themwww.lesswrong.com lukehmiles's Shortform — LessWrong
A collection of shorter posts by LessWrong user lukehmiles
dude has another banger today too, again from the bitter 4chan incel memepool but in bigger words: https://www.lesswrong.com/posts/nxmyGYfZaXvKALWGK/lukehmiles-s-shortform#ijhf8stE4Thc9CWXP
- I can only say: spot on, but not at all in the way you were thinkingwww.lesswrong.com AI Generated Music as a Method of Installing Essential Rationalist Skills — LessWrong
On April Fools the LW team released an album under the name of the Fooming Shoggoths. Ever since the amount that I think about rationality has skyroc…
- Will MacAskill wanted to go on podcasts and *explain* after FTX, but his lawyers suggested he perhaps not send himself to jailforum.effectivealtruism.org Personal reflections on FTX — EA Forum
The two podcasts where I discuss FTX are now out: • * Making Sense with Sam Harris * Clearer Thinking with Spencer Greenberg …
- Techbro develops AI-powered sex toy to “save the world”www.404media.co This Man Wants to ‘Save the World’ By Letting You Jerk Off Into a Computer
The creator of the gimmicky DIY sex sleeve says any criticism against the Orifice is “gene warfare.”
Teledildonics and eugenics: what more could you ask for?
- Top clowns all agree their balloon animals are slightly sentient
Then: Google fired Blake Lemoine for saying AIs are sentient
Now: Geoffrey Hinton, the #1 most cited AI scientist, quits Google & says AIs are sentient
That makes 2 of the 3 most cited scientists:
- Ilya Sutskever (#3) said they may be (Andrej Karpathy agreed)
- Yoshua Bengio (#2) has not opined on this to my knowledge? Anyone know?
Also, ALL 3 of the most cited AI scientists are very concerned about AI extinction risk.
ALL 3 switched from working on AI capabilities to AI safety.
Anyone who still dismisses this as “silly sci-fi” is insulting the most eminent scientists of this field.
Anyway, brace yourselves… the Overton Window on AI sentience/consciousness/self-awareness is about to blow open