Posts: 15 · Comments: 1,712 · Joined: 2 yr. ago

  • Yeah, it doesn't really work. I mean, it has a rough idea that it needs to go east. And I'm surprised that it knows which interstates are in an area, and even a few street names in the cities. I'm really surprised. But I told it to get me from Houston to Montgomery, as in your example. In Houston it just lists random street names that aren't even connected and are in different parts of the city. Then it drives north on the I-45 and somehow ends up in the south on the I-610-E and finally the I-10-E. But then it makes up some shit, somehow drives to New Orleans, then a bit back, and zig-zags its way back onto the I-10. Then come some more instructions I didn't fact-check, and it gets that it needs to go through Mobile and then north on the I-65.

    I've tested ChatGPT in Germany. It also gets which Autobahn connects to the next. It still does occasional zig-zags, and in between it likes to do an entire 50 km (30 mile) loop that ends up 2 cities back where it came from... Then it drives east again and on the second try takes a different exit.

    However: I'm really surprised by the level of spatial awareness. I wouldn't have expected it to come up with mostly correct cardinal directions and interstates that are actually connected and run through the mentioned cities. And it names cities in between.

    I don't think I need to try "phi". Small models have very limited knowledge stored inside them. They're too small to remember many things.

    So, you were right. Consider me impressed. But I don't think there is a real-world application for this unless your car has a teleporter built in to deal with the inconsistencies.

  • Thanks. Yeah I know most of the story/history of Matrix. I'm just now making the decisions for the years to come. And Dendrite has been the announced successor to Synapse for quite some time now... I'm not sure what to make of this. If it's going to happen soon, I'd like to switch now. And not move again and relocate my friends more times than necessary.

    Judging by the graphs on my Netdata, Synapse plus the database are currently eating more resources than I'd like for just chat. Afaik the other projects were meant to address that. But I've never used anything else. And I've always refrained from joining large rooms because people told me that'd put considerable load on the server. If there's a better solution I'm open to try even if it's not the default choice... It just needs to work for my use-case. I don't necessarily need feature-completeness.

    Yeah, about the multiple domains: I meant I have 1 VPS and about 3 domain names for different projects. I have a single email server and one webserver, and they just handle all three domains. Even Prosody (XMPP) has "VirtualHost" directives, and I only need to run it once to provide service on all the different domains. With Matrix this doesn't seem to be the case... I'd need to launch 3 different instances of Synapse simultaneously on that one server and do some trickery with the reverse HTTP proxy. That'd be more expensive and take more time and effort. I don't really care how the identities are handled internally; I can provide them in a format that is supported. And the users are separate anyway. It's just: I'd like to avoid running the same software three times in parallel.
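    For comparison, the Prosody setup amounts to something like this — a minimal sketch of a `prosody.cfg.lua`, with placeholder hostnames standing in for the actual domains:

    ```lua
    -- prosody.cfg.lua (sketch; hostnames are placeholders)
    -- One Prosody process serves all of these domains at once.
    VirtualHost "example.org"

    VirtualHost "example.net"

    VirtualHost "example.com"
    ```

    One process, one config file, three domains — that's the kind of setup Synapse doesn't offer out of the box, since each homeserver is tied to a single server_name.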

  • Out of curiosity: Do you have to deal with that much spam? If so: Is there a specific reason?

    Because I only get some bot joining one of the public rooms and spamming every few months or so. And we deal with that pretty quickly. My own account has been perfectly safe for years... So my experience is different. Might be down to my usage pattern vs. yours?

  • How do I start a new lemmy?

  • Is this an honest question?

    If yes: Read the info here: https://join-lemmy.org/docs/administration/administration.html

    That is the installation guide.

    If you're not that tech-savvy, I recommend using a self-hosting platform like YunoHost or Cosmos.

    You have to at least put some effort in and google it and read the instructions yourself. Everyone is invited to run their own instance of Lemmy, and so are you.

    You'd need a domain and some sort of server. Any VPS will do or some 24/7 online device at home if you can do port forwards on your home internet connection.

    I'd invite you to have a look at it. If you're really interested, feel free to ask follow-up questions.

    Regarding your other question: Yes, you can.

  • Which model(s) did you try? I'm willing to test it later. Downside is, I mainly use smaller LLMs, live in Germany, in an urban region with lots of streets and different Autobahnen and it's kind of a hassle to deal with textual driving instructions anyways. 😆

  • I think they're using Widevine DRM. And with DRM they can enforce whatever arbitrary policies they like. They set special restrictions for Linux. I think Amazon set 480p as the max, Netflix 720p, and YouTube 4K, or something like that. AFAIK it has little to do with technology. It's just a number that the specific company sets in their configuration.

  • Quite a few AI questions have come up in selfhosted over the last few days...

    Here's some more communities I'm subscribed to:

    And a few inactive ones on lemmy.intai.tech

    I'm using koboldcpp and ollama. KoboldCpp is really awesome. In terms of hardware it's an old PC with lots of RAM but no graphics card, so it's quite slow for me. I occasionally rent a cloud GPU instance on runpod.io. Not doing anything fancy, mainly role play, recreational stuff, and I occasionally ask it to give me creative ideas for something, translate something, or re-word or draft an unimportant text / email.

    Have tried coding, summarizing and other stuff, but the performance of current AI isn't enough for my everyday tasks.

  • Ah right, thanks. I had a total blind spot there... Yes, Amazon does its own thing, of course. It's hard to get real help from them. Their whole business model is to automate everything and pay for as little qualified human labor as possible... Good luck to OP in any case. If all else fails, order from eBay; they have almost everything too.

    And... in the end I would still clear this up at some point. And maybe also request a report from SCHUFA. You're entitled to a free report every few years... If identity theft is somehow involved, it might be worth looking into that as well.

  • I think that would work.

    Sometimes it's also a good idea to write an email instead of calling the hotline, because it lands on a different desk or gets forwarded, and is then handled by the right department for billing.

    For that you just have to find one of their email addresses somewhere. And lay out your actual problem properly.

    Independently of that, in my opinion you can also insist on the DSGVO (GDPR); then your request lands on the desk of the data protection officer.

    Unfortunately, I also have no idea what kind of company "Tropischer Wald" is.

  • Yeah, but usually with open-source software you get like 150 GitHub comments complaining and outlining the shady business practices... If there's something to complain about.

    The XZ disaster is an example of something else. There are probably more backdoors in proprietary software that we just don't know about. And they can just keep them hidden away and force the manufacturers to do so. No elaborate social engineering like in the XZ case needed... And no software is safe. They all have bugs, and most of them depend on third-party libraries. That has nothing to do with being open or closed source. If anything, being open gives you more of a chance to catch mischievous behaviour. At least generally speaking. There will be exceptions to this rule.

  • I think in the near future it's mostly the unskilled and office jobs. I think we still have a shortage of skilled IT professionals and people who can do more than web development and simple Python scripts. And we also have a shortage of teachers, kindergarten teachers, people who care for the elderly, doctors, psychologists. And despite AI creeping into all those fields, I still see a career there for quite some time to come. Also, I don't see an AI plumber coming around and fixing your toilet anytime soon. So I'd say handyman is a pretty safe bet.

    But I'd say all the people making career decisions right now had better factor that in. Joining a call center is probably not a sustainable decision anymore. And some simple office or management jobs will become redundant soon. I just think big tech laying off IT professionals is more an artificially inflated bubble bursting than AI now being able to code complex programs or do the job of an engineer.

    It's not really a gamble. We know what AI can do. And there are lists with predictions of which jobs can be automated. We can base our decisions on that; I saw such articles in the newspapers 10 years ago. They're not 100% accurate, but they're a rough guide... For example, we still have a shortage of train operators. And 10 years ago people said driving trains on rails is easy to automate and we shouldn't strive for that career anymore.

    It'll likely get there. But by that time society will have changed substantially. We can watch Star Trek if we're talking about a post-scarcity future where all the hard work is done for us. We'd need universal basic income for that. Or we end up in a dystopia. But I think that's too uncertain to base decisions on.

  • Agree. Came here to say the same. And it's not even far from what we've been doing in the past. When taking text or pictures from other people, we were/are also forced to mention that because of copyright. We could just do the same for AI generated content.

  • Agree. And the voting is stupid most of the time. I also regularly see correct answers to a post with several downvotes. Or (false) urban legends / myths being upvoted to no end... The Lemmy votes are kind of meaningless and have been for quite some time. They just show whether an opinion or article pleases the herd. And sometimes it's not even the article, but just the headline and its phrasing.

  • I don't think you can use Retrieval-Augmented Generation or vector databases for a task like that. At least not if you want to compare whole papers and not just a single statement or fact. And that's what most tools are focused on. As far as I know, the tools concerned with big PDF libraries are meant to retrieve specific information out of the library, relevant to a specific question from the user. If your task is to go through the complete texts, it's not the right tool, because it's made to only pick out chunks of text.

    I'd say you need an LLM with a long context length, like 128k or way more, fit all the texts in and add your question. Or you come up with a clever agent. Make it summarize each paper individually or extract facts, then feed that result back and let it search for contradictions, or do a summary of the summaries.
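    The summarize-then-compare agent could be sketched roughly like this. It's a minimal map-reduce over the papers; `find_contradictions` and the `llm` callable are hypothetical placeholders for whatever model client you actually use:

    ```python
    def find_contradictions(papers: list[str], llm) -> str:
        """Map-reduce over papers: summarize each one, then compare the summaries.

        `llm` is any callable that takes a prompt string and returns the model's
        reply; plug in whatever client you use (ollama, koboldcpp, an API, ...).
        """
        # Map step: condense each paper individually so every text fits in context.
        summaries = [llm(f"Summarize the key claims of this paper:\n\n{p}")
                     for p in papers]
        # Reduce step: compare the condensed claims in a single prompt.
        joined = "\n\n".join(f"Paper {i + 1}: {s}" for i, s in enumerate(summaries))
        return llm("Do any of these summaries contradict each other? "
                   "List each contradiction and the papers involved.\n\n" + joined)
    ```

    The summaries are lossy, of course, so the reduce step can only catch contradictions that survive the map step — which is exactly where the caveat below comes in.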

    (And I'm not sure if AI is up to the task anyways. Doing meta-studies is a really complex task, done by highly skilled professionals of a field. And it takes them months... I don't think current AI's performance is anywhere near that level. It's probably going to make something up instead of outputting anything that's related to reality.)