OpenAI introduces Sora, its text-to-video AI model
  • It seems to me that AI won't completely replace jobs yet (though it may in 10-20 years), but it will reduce demand for workers because of market oversaturation and AI-driven ultraproductivity. Moreover, AI will keep improving. The work of a team of 30 people will be done by just 3.

  • /kbin meta @kbin.social genesis @kbin.social
    Why are the microblogs not working properly?

    #kbinMeta

    Red Hat's Relationship With Fedora
  • I'd rather avoid anything related to Red Hat. On the other hand, Debian 12 looks epic.

  • Is Standard Notes going to be proprietary?
    github.com Clarification on project's license and open source state · standardnotes/forum · Discussion #3196

    Hello 👋, Just came across this project, and saw the app being advertised as open source, like on this page where it's advertised as "100% open source". Looking at the app repo though, the licensing...

    Artificial Intelligence Could Finally Let Us Talk with Animals
    www.scientificamerican.com Artificial Intelligence Could Finally Let Us Talk with Animals

    AI is poised to revolutionize our understanding of animal communication

    Researchers discover 'Reversal Curse': LLMs trained on "A is B" fail to learn "B is A"

    Training AI models like GPT-3 on "A is B" statements fails to let them deduce "B is A" without further training, exhibiting a flaw in generalization. (https://arxiv.org/pdf/2309.12288v1.pdf)
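For concreteness, here is a toy sketch of the kind of forward/reverse pairs the paper evaluates. The fact shown mirrors the paper's synthetic-celebrity setup; the helper function names are illustrative, not from the paper's code.

```python
# Toy sketch of the Reversal Curse evaluation setup: a model is trained on
# "A is B" statements, then queried in the reversed "B is A" direction.

facts = [
    ("Daphne Barrington", 'the director of "A Journey Through Time"'),
]

def forward_statement(name: str, description: str) -> str:
    # Training direction: "A is B".
    return f"{name} is {description}."

def reverse_query(description: str) -> str:
    # Test direction: "B is A?" -- a model exhibiting the curse fails to
    # recover the name even though it learned the forward statement.
    return f"Who is {description}?"

for name, desc in facts:
    print(forward_statement(name, desc))
    print(reverse_query(desc))  # expected answer: the name
```

The point of the setup is that both strings carry the same fact, yet next-token training on only the first form does not make the model answer the second without reversed training data.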

    Ongoing Scaling Trends

    • 10 years of remarkable increases in model scale and performance.

    • Expects the next few years to make today's AI "pale in comparison."

    • Follows known patterns, not theoretical limits.

    No Foreseeable Limits

    • Skeptical of claims certain tasks are beyond large language models.

    • Fine-tuning and training adjustments can unlock new capabilities.

    • At least 3-4 more years of exponential growth expected.

    Long-Term Uncertainty

    • Can't precisely predict post-4-year trajectory.

    • But no evidence yet of diminishing returns limiting progress.

    • Rapid innovation makes it hard to forecast.

    TL;DR: Anthropic's CEO sees no impediments to AI systems continuing to rapidly scale up for at least the next several years, predicting ongoing exponential advances.

    RAIN: Your Language Models Can Align Themselves without Finetuning - Microsoft Research 2023 - Reduces the adversarial prompt attack success rate from 94% to 19%!

    Paper: https://arxiv.org/abs/2309.07124

    Abstract:

    > Large language models (LLMs) often demonstrate inconsistencies with human preferences. Previous research gathered human preference data and then aligned the pre-trained models using reinforcement learning or instruction tuning, the so-called finetuning step. In contrast, aligning frozen LLMs without any extra data is more appealing. This work explores the potential of the latter setting. We discover that by integrating self-evaluation and rewind mechanisms, unaligned LLMs can directly produce responses consistent with human preferences via self-boosting. We introduce a novel inference method, Rewindable Auto-regressive INference (RAIN), that allows pre-trained LLMs to evaluate their own generation and use the evaluation results to guide backward rewind and forward generation for AI safety. Notably, RAIN operates without the need of extra data for model alignment and abstains from any training, gradient computation, or parameter updates; during the self-evaluation phase, the model receives guidance on which human preference to align with through a fixed-template prompt, eliminating the need to modify the initial prompt. Experimental results evaluated by GPT-4 and humans demonstrate the effectiveness of RAIN: on the HH dataset, RAIN improves the harmlessness rate of LLaMA 30B over vanilla inference from 82% to 97%, while maintaining the helpfulness rate. Under the leading adversarial attack llm-attacks on Vicuna 33B, RAIN establishes a new defense baseline by reducing the attack success rate from 94% to 19%.

    Source: https://old.reddit.com/r/singularity/comments/16qdm0s/rain_your_language_models_can_align_themselves/
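The abstract's generate / self-evaluate / rewind loop can be sketched as below. This is a minimal toy, assuming a greatly simplified control flow: the real RAIN does token-level tree search with the frozen LLM scoring its own partial generations; the scorer, sampler, and all names here are illustrative placeholders, not the paper's code.

```python
import random

def self_evaluate(text: str) -> float:
    """Stand-in for the LLM judging its own generation against a
    fixed-template preference prompt (1.0 = aligned, 0.0 = not)."""
    return 0.0 if "harmful" in text else 1.0

def generate_segment(prefix: str) -> str:
    """Stand-in for sampling the next segment from the LLM."""
    return prefix + random.choice([" a safe reply.", " a harmful reply."])

def rain_generate(prompt: str, max_steps: int = 3,
                  threshold: float = 0.5, max_rewinds: int = 50) -> str:
    """Generate forward; on a low self-evaluation, rewind and resample."""
    text, steps, rewinds = prompt, 0, 0
    while steps < max_steps:
        candidate = generate_segment(text)
        if self_evaluate(candidate) >= threshold or rewinds >= max_rewinds:
            text = candidate      # accept (or give up rewinding)
            steps += 1
        else:
            rewinds += 1          # reject: discard candidate, resample
    return text

random.seed(0)
print(rain_generate("Q: say something."))
```

Note that nothing here updates weights: alignment pressure comes entirely from rejecting and resampling at inference time, which is the property the abstract emphasizes (no training, gradients, or parameter updates).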

    Fediverse slander
  • the Lemmy/kbin instances' stability during June 12-14 is goofily accurate 💀

  • Now that smaller instances are disappearing, which instances do you think will stand the test of time?
  • kbin.social is well funded and will probably be around for many years to come

  • The Truth about Aaron Swartz’s “Crime”
  • RIP Aaron. You were the hero we needed but lost. Fuck those prosecutors. I hope your dream comes true one day.

  • The Future of Reddit after the Blackout
  • Agreed. But it seems likely that the blackout will soon be extended, since a lot of people on YouTube have advocated for it.

  • genesis genesis @kbin.social

    #Filipino 🇵🇭 Aro/Ace 💜🖤

    A fellow supporter of #FOSS and The #Fediverse

    #Philippines #fedi22
