Likely transformers now (I think SD3 uses transformer-based text encoders, and ViTs are currently among the best model architectures for image classification).
Laissez-faire economics is a foundational component of liberalism (well, classical liberalism anyway, which I assume is what he means when using that word).
I mean, you can be sued for anything, but it will get thrown out. Like, I guess the MPAA could offer a movie for download, then try to sue the first hop they upload a chunk to, but that really doesn't make any sense (because they offered it for download in the first place). Furthermore, the first hop(s) aren't the people that are using the file, and they can't even read it. If people could successfully sue nodes, then ISPs and postal services could be sued for anything that passes through their networks.
I think similar, and arguably more fine-grained, things can be done with Typescript, traditional OOP (interfaces, and maybe the Facade pattern), and perhaps dependency injection.
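A minimal sketch of what I mean by interfaces plus dependency injection in TypeScript (all the names here are made up for illustration, not from any real library):

```typescript
// A consumer depends only on this contract, not on any concrete class.
interface Logger {
  log(message: string): void;
}

// One concrete implementation; swappable without touching consumers.
class MemoryLogger implements Logger {
  lines: string[] = [];
  log(message: string): void {
    this.lines.push(message);
  }
}

// Dependency injection: the service receives its Logger rather than
// constructing one, so tests can hand it a fake implementation.
class ReportService {
  constructor(private logger: Logger) {}

  run(name: string): string {
    this.logger.log(`ran ${name}`);
    return `report:${name}`;
  }
}
```

The fine-grained part is that `ReportService` can only call what `Logger` declares, even if the injected object has more methods.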
Onion-like routing. It takes multiple hops to get to a destination. Each hop can only decrypt the next destination to send the packet to (i.e. peeling off a layer of the onion).
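Roughly, the sender wraps the payload in one encryption layer per hop, and each relay can remove exactly one layer. A toy sketch (XOR stands in for the real layered crypto, e.g. Tor's AES layers — do not use this for anything security-related, and all names are illustrative):

```typescript
// One decrypted layer: the next hop to forward to, plus the still-wrapped rest.
type Layer = { nextHop: string; inner: string };

// Toy "cipher": XOR each character with a repeating key, output as hex.
function xorHex(text: string, key: string): string {
  return [...text]
    .map((ch, i) =>
      (ch.charCodeAt(0) ^ key.charCodeAt(i % key.length))
        .toString(16)
        .padStart(2, "0"),
    )
    .join("");
}

function unxorHex(hex: string, key: string): string {
  return (hex.match(/../g) ?? [])
    .map((b, i) => String.fromCharCode(parseInt(b, 16) ^ key.charCodeAt(i % key.length)))
    .join("");
}

// The sender builds the onion inside-out: the innermost layer holds the payload.
function buildOnion(payload: string, route: { hop: string; key: string }[]): string {
  let onion = payload;
  for (let i = route.length - 1; i >= 0; i--) {
    const nextHop = i + 1 < route.length ? route[i + 1].hop : "DEST";
    onion = xorHex(JSON.stringify({ nextHop, inner: onion }), route[i].key);
  }
  return onion;
}

// A relay peels one layer: it learns only where to forward the packet,
// never the payload (unless it's the last hop) or the full route.
function peel(onion: string, key: string): Layer {
  return JSON.parse(unxorHex(onion, key)) as Layer;
}
```

The key property is in `peel`: a relay holding only its own key gets the next hop and an opaque blob, nothing else.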
I thought the tuning procedures, such as RLHF, kind of mess up the probabilities, so you can't really tell how confident the model is in its output (and I'm not sure how well calibrated those probabilities were in the first place)?
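For what it's worth, the "confidence" people read off is usually just the softmax over the model's output logits, and anything that sharpens the logits inflates the top probability without the model actually knowing more. A toy sketch of that effect (no real model involved, just arithmetic):

```typescript
// Standard softmax: turns logits into a probability distribution.
function softmax(logits: number[]): number[] {
  const m = Math.max(...logits); // subtract max for numerical stability
  const exps = logits.map((x) => Math.exp(x - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Scaling logits up (like sampling at low temperature, or tuning that
// pushes logits apart) concentrates mass on the top token -- one way
// "probability" can stop tracking actual confidence.
function sharpen(logits: number[], factor: number): number[] {
  return logits.map((x) => x * factor);
}
```

So the same ranking over tokens can come with very different "confidence" numbers depending on how the logits were shaped.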
Also, it seems, at a certain point, the more context the models are given, the less accurate the output. A few times, I asked ChatGPT something, and it used its browsing functionality to look it up, and it was still wrong even though the sources were correct. But, when I disabled "browsing" so it would just use its internal model, it was correct.
It doesn't seem there are too many expert services tied to ChatGPT (I'm just using this as an example, because that's the one I use). There's obviously some kind of guardrail system for "safety," there's a search/browsing system (it shows you when it uses this), and there's a Python interpreter. Of course, OpenAI is now very closed, so they may be hiding that it's using expert services (beyond the "experts" in the MoE model they're speculated to be using).
I find Kagi results a little bit better than Google's (for most things). I like that certain categories of results are put in their own sections (listicles, forums) so they're easy to ignore if you want. I like that I can prioritize, deprioritize, block, or pin results from certain domains. I like that I can quickly switch "lenses" to one of the predefined or custom lenses.
I think it's reported that way because traders and other people adjacent to the financial sector are trying to figure out when the Fed is likely to lower rates. I don't really see inflation numbers reported outside financial articles.
For the things you mentioned, the vegan and gluten-free options are processed much more. Beef, for example, is arguably a "whole food."
Gluten-free isn't healthier unless you have specific conditions. Most people can handle gluten fine, and some vegan foods are primarily gluten (such as seitan).
Vegan isn't inherently healthy, especially if you're eating mostly processed foods. A primarily whole-food vegan diet is likely healthier and cheaper than most people's diets though.
They're good for media centers, since they support 4K HDR. Can also use Moonlight to stream games from a PC. GPIO is useful, but I guess the Pi is overpowered for most GPIO use cases at this point.
Hmm, so looks like around 100kB/s. That's about what I remember (100kB/s - 300kB/s).
I've recently been trying out Tribler, and it's much faster than the last time I tried it (I've seen 2MB/s on popular torrents, but around 500kB/s on less popular). Not sure if there are simply more exit nodes with more bandwidth now or if there are more people on the Tribler network seeding.