Never forget the bribery of the criminals serving on the Supreme Court.
It is a shit show from top to bottom.
Endless Sky. The save game is a text file. Save a file on the mobile app (F-Droid) and on the PC (Flatpak), and note the last line; that is the line you must swap to transfer the save file. It is the first game I have played practically on both. The game mechanics differ between the two, and you need to alter your strategy accordingly. On mobile, I travel with a ship set up for boarding pirate vessels and never target enemies directly; all of my guns are automatic turrets. I just use a fast ship and travel with a large group of fighters. It is more of a grind on mobile, but it can be used to build up resources and reserves. The game is much bigger than it first appears to be. You need to either check out a guide or explore very deep into the obscure pockets of the map.
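The swap above can be sketched in a few lines. This is a hypothetical helper, not anything the game ships with: it copies the source save's content but keeps the last line from a save created natively on the destination platform (file names are made up for illustration).

```python
# Hypothetical sketch of the save transfer: take the source save's content,
# but swap in the last line from a native save on the destination platform.
def transfer_save(source_save, destination_save):
    src_lines = open(source_save, encoding="utf-8").read().splitlines()
    dst_lines = open(destination_save, encoding="utf-8").read().splitlines()
    merged = src_lines[:-1] + [dst_lines[-1]]  # keep destination's last line
    with open(destination_save, "w", encoding="utf-8") as f:
        f.write("\n".join(merged) + "\n")
```

Back up both files first; a broken save is a text file, so it is at least easy to inspect and repair by hand.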
I won't touch the proprietary junk. Big tech "free" usually means street corner data whore. I have a dozen FOSS models running offline on my computer though. I also have text to image and text to speech, am working on speech to text, and will probably build my Iron Man suit after that.
These things can't be trusted though. An LLM is just a next-word statistical prediction system combined with a categorization system. There are ways to make an LLM trustworthy, but they involve offline databases and prompting for direct citations; these are different from chat prompt structures.
I just got a Llama 2 70B LLM (offline chat AI) working on my laptop. That is a much larger (smarter) system than I thought was possible on a laptop. It takes every bit of 64GB of RAM, and it is about as fast as AOL Instant Messenger on bad 56k dialup, but it works.
I think I also fixed my problem that stopped me from using text to speech AI. Now I just need to figure out speech to text, get a few billion dollars, and make an iron man suit.
In the best years of YT, before 2017, there was an advanced maker and DIY tech culture revolving around people sharing projects and content just to share it. That died. This all-pro CC thing has an upside to some extent, but it also lobotomized YT. PeerTube might eventually get to the same kind of utility level, but it needs a lot of time and momentum to get there.
I find it interesting how much of a difference things like "not going" versus "no" can create just under the surface. Like we can't really address them in-situ directly, but they do have an impact in many situations.
Just to pick on this example in hyperbolic magnified context, "not going" is like argumentative banter IMO, whereas a simple "no" is concise and respectful.
I grew up in the southeastern USA, where racism and stupidity are common. The tendency is for isolated communities and ostracism. Personally, I try my best to be aware of this so that I can avoid acting this way as much as possible.
Nearly twenty years ago I had a business relationship with a Taiwanese man. We got along fine, and we even had a lot of peripheral interests in common, but the subtle cultural differences made him difficult for me to do business with. So much of business and negotiating is about reading people and subtle context. A lot of that gets lost between the language cracks when stuff like "not going" carries extra contextual impact.
I've lived in Southern California for many years now. Here Spanglish is common. Nothing stands out IMO as something worth mentioning. I'm sure there are instances; I just don't pay much attention to it. I do notice how living in an openly mixed culture makes a gigantic difference in how people tend to lean into prejudice. I haven't been to many other cultural regions, but intuitively, I imagine this is universal: any regional culture that tends to isolate will also display prejudice among the least intelligent members of the group.
I wonder if these compatibility divisions are really something deeper and related to evolutionary forces at play. Like if all complex life displays this same type of social isolationism at various levels that ultimately drives speciation. I don't mean that in any kind of justification for isolationism or prejudice. It is just an observation of the forces that divide and maintain the division, like a social component in addition to geopolitical factors and time.
I loved Dread and Prime 2. I tried playing Super Metroid on switch, but the controls are just too poor to pull off the advanced combination moves with the slow low quality emulation. I'm disappointed that there are not a dozen Metroid titles on the switch. Everything in the Prime series should be ported.
I'm mostly referring to the long hiatus(es) before Dread, and all of the nonsense from developers other than Retro Studios. I understand they were probably in a funky position when it came to writing and coding for a new 3D engine after the Prime series had played out the life of the prior engine. IMO, the entire SDK for Nintendo hardware should account for key franchise titles like Metroid. These games should have storyboards and plans from first light of new hardware. The plans should always include classic titles too. My biggest complaint about Nintendo is the low quality of most titles on the platform. They are too focused on recruiting developers instead of quality games. Sure there are some great games like BotW, TotK, and Dread, but I'm not going to sift through all the junk in their store to try and find anything else worth playing. I got a couple of titles that a lot of people recommended, and hated them, with no recourse, and they cost as much as good games. I would have paid for and played all of the Prime series if it had been ported, but Nintendo totally fails at maintaining their legacy titles effectively. It is this lack of availability now, plus the stupid fumble of letting extra developers with their own forked vision into the franchise, that I am calling a fumbled opportunity.
Yeah but MG is WAY older @ 1987 vs GoW in 2005 and ES in 1994.
Metal Gear Solid was one of the best games on the original PlayStation. I haven't been into consoles since the PS2. Metal Gear Solid was so good compared to anything else at the time that the idea it is only at 60M now seems like a major fumble and lack of management. I guess, like Metroid, it was underdeveloped or given to idiots "with a new vision."
Don't worry. I wanted to make sure someone replied to you as soon as I viewed your message. Sorry I was not able to respond again right away. I am often slow to respond. I am disabled (injured long time). I work on little hobby projects all day. I take a lot of breaks during the day. This is when I use a phone for social stuff like beehaw. I will respond eventually once I take a break and see the message ;)
Hello from Los Angeles!
I'm sure they will eventually try to force IDs because it would be profitable for criminal data-theft ad stalkers. This is all about corrupt money and exploitation. Billionaires are worthless parasites that have no right to exist in a democratic system. Fuck the US fascist oligarchy party.
Fire fox, Fire fox;
Fuck you Google;
We're throwing rocks.
Alpha bet, Alpha bet;
Farming data is,
stalking/theft.
Oobabooga is the main GUI used to interact with models.
https://github.com/oobabooga/text-generation-webui
FYI, you need to find checkpoint models. In the available chat models space, naming can be ambiguous for a few reasons I'm not going to ramble about here. The main source of models is Hugging Face. Start with this model (or get the censored version):
https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML
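As a sketch of how the files on that model card are fetched: Hugging Face serves individual repo files at a predictable `resolve/main` URL pattern, so you can build the direct link yourself. The exact quantized filename below is an assumption; check the "Files" tab on the model card for the real one.

```python
# Build a direct-download URL for one file in a Hugging Face model repo.
# FILENAME is hypothetical; take the real name from the model card's file list.
REPO = "TheBloke/llama2_7b_chat_uncensored-GGML"
FILENAME = "llama2_7b_chat_uncensored.ggmlv3.q4_0.bin"  # assumed name

def model_url(repo, filename):
    # Hugging Face's standard raw-file path: /<repo>/resolve/main/<filename>
    return f"https://huggingface.co/{repo}/resolve/main/{filename}"
```

Paste the resulting URL into wget, curl, or a browser, or use the `huggingface_hub` library if you prefer a managed download with caching.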
First, let's break down the title.
- This is a model based on Meta's Llama 2.
- This is not "FOSS" in the GPL/MIT sense. The model has a license that is quite broad in scope, with the key stipulation that it cannot be used commercially in apps that have more than 700 million users.
- Next, it was quantized by a popular user going by "The Bloke." I have no idea who this is IRL but I imagine this is a pseudonym or corporate alias given how much content is uploaded by this account on HF.
- This model has 7 billion parameters, and is fine-tuned for chat applications.
- This is uncensored, meaning it will respond to most inputs as best it can. It can get NSFW, or talk about almost anything. In practice there are still some minor biases, likely just overarching morality inherent to the datasets used, or it might be coded somewhere obscure.
- Last part of the title is that this is a GGML model. This means it can run on CPU or GPU or a split between the two.
As for the options on the landing page, or "model card":
- you need to get one of the older-style models that have "q(number)" as the quantization type. Do not get the ones that say "qK", as these won't work with the llama.cpp version bundled with Oobabooga.
- look at the guide at the bottom of the model card, where it tells you how much RAM you need for each quantization type. If you have an Nvidia GPU with the CUDA API, enabling GPU layers makes the model run faster, and with quite a bit less system memory than the model card states.
The 7B models are about like having a conversation with your average teenager. Asking technical questions yielded around 50% accuracy in my experience. A 13B model got around 80% accuracy. The 30B WizardLM is around 90-95%. I'm still working on trying to get a 70B running on my computer. A lot of the larger models require compiling tools from source. They won't work directly with Oobabooga.
It's no freaking mystery anywhere. Kids are too damn expensive because just living is too damn expensive. The real fix is massive land reform that absolutely murders the real estate bubble with a nuclear bomb. Regulate the availability of funds directly to the minimum wage. You work, you live a decent life with a good balance. Build dense housing with tight local communities and perfect transportation so we're always in contact with people in our communities. Babies will be popping up like weeds.
An article about one of the poorest European countries is not really relevant. They don't have the same zoning stagnation nonsense that makes housing unaffordable. The stupid incentives that exploded home loan amounts combined with 100 years without zoning reforms are the problem.
Have you seen The Great Gatsby with WizardLM too? That's what always comes up when mine goes too far. I'm working on compiling llama.cpp from source today. I think that's all I need to be able to use some of the other models like Llama2-70B derivatives.
The code for llama.cpp is only an 850 line Python file (not exactly sure how Python=CPP yet, but YOLO I guess; I just started reading the code from a phone last night). This file is where all of the prompt magic happens. I think all of the easy checkpoint model stuff that works in Oobabooga uses llama-cpp-python from pip. That hasn't had any GitHub repo updates in 3 months, so it doesn't work with a lot of newer and larger models. I'm not super proficient with Python. It is one of the things I had hoped to use AI to help me learn better, but I can read and usually modify someone else's code to some extent. It looks like a lot of the functionality (likely) built into the more complex chat systems like Tavern AI is just mixing the chat, notebook, and instruct prompt techniques into one "context injection" (if that term makes any sense).
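The "context injection" idea can be sketched in a few lines. This is a toy illustration, not any particular frontend's actual format: the character card, running chat history, and an instruct-style cue all get concatenated into one prompt string before being handed to the model.

```python
# Toy sketch of context injection: character card + chat history + instruct
# cue, concatenated into a single prompt string. Markers are illustrative.
def build_prompt(character_card, history, user_message):
    lines = [character_card, ""]
    for speaker, text in history:          # history: list of (speaker, text)
        lines.append(f"{speaker}: {text}")
    lines.append(f"USER: {user_message}")
    lines.append("ASSISTANT:")             # cue the model to continue here
    return "\n".join(lines)
```

The notebook and instruct modes in Oobabooga are, as far as I can tell, just different hand-built versions of this same string assembly.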
The most information I have seen someone work with independently offline was using langchain with a 300 page book. So I know at least that much is possible. I have also come across a few examples of people using langchain with up to 3 PDF files at the same time. There is also the MPT model with up to 32k context tokens but it looks like it needs server machine ram in the hundreds of GB to function.
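A toy version of what langchain automates for that 300-page-book case, under the assumption that the basic shape is chunk-then-retrieve: split the long document into chunks, score each chunk against the question, and inject only the best chunk into the prompt so the whole thing fits a small context window. Real systems use embedding similarity; plain word overlap stands in for it here.

```python
# Minimal retrieval sketch: chunk a long text, pick the chunk that best
# matches the question by word overlap, and use only that chunk as context.
def chunk_words(text, size=200):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def best_chunk(chunks, question):
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))
```

This is why the 300-page book works without hundreds of GB of RAM: only a slice of the book is ever inside the context window at once.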
I'm having trouble with distrobox/conda/nvidia on Fedora Workstation. I think I may start over with Nix soon, or I am going to need to look into proxmox, virtualization or go back to an immutable base to ensure I can fall back effectively. I simply can't track down where some dependencies are getting stashed and I only have 6 distrobox containers so far. I'm only barely knowledgeable enough in Linux to manage something like this well enough for it to function. - suggestions welcome
Pee-wee, Mr. Rogers, Sesame Street, Bozo, and Reading Rainbow are some of my fondest memories; when we were all Pee-wees, and he was Herman.
My main reason for playing with offline AI right now is to help me get further into the Computer Science curriculum on my own. (disabled/just curious)
I have seen a few AI chat characters with highly detailed prompts that attempt to keep the LLM boxed into a cosplay character. I would like to try to create fellow students in a learning curriculum. I haven't seen anything like this yet, but maybe someone else here has seen this or has some helpful tips. I would like to prompt a character to not directly use programming knowledge from its base tokens and only use what is available in a LoRA, a large context, or a langchain database. I would like to have the experience of learning alongside someone, to talk out ideas when they have the same amount of information as myself. Like, I could grab all the information for a university lecture posted online and feed it to the AI, watch and read the material myself, and work through the quizzes, questioning anything I do not understand with the answers restricted to my own internal context region.
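A hypothetical starting point for that "fellow student" character: a prompt wrapper that tells the model it knows only the supplied lecture notes and should admit ignorance otherwise. No guarantee a model actually obeys this, but detailed character prompts like the cosplay ones usually begin this way.

```python
# Hypothetical character card: box the model into knowing ONLY the supplied
# notes, so it "learns" at the same pace as the human student.
STUDENT_CARD = (
    "You are a fellow student, not a teacher. You have read ONLY the notes "
    "below and have no other programming knowledge. If the notes do not "
    "cover something, say you have not learned it yet.\n\n"
    "NOTES:\n{notes}\n\nDISCUSSION:\n"
)

def student_prompt(notes):
    return STUDENT_CARD.format(notes=notes)
```

In practice the restriction leaks, since the base tokens are always present; pairing this with a retrieval database of only the course material seems like the more reliable half of the approach.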
WizardLM 30B at 4 bits with the GGML version on Oobabooga runs almost as fast as Llama2 7B on just the GPU. I set it up with 10 threads on the CPU and ~20 layers on the GPU. That leaves plenty of room for a 4096 context with a batch size of 2048. I can even run a 2GB Stable Diffusion model at the same time with my 3080's 16 GB of VRAM.
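That split, expressed as llama-cpp-python settings (the same knobs Oobabooga exposes in its UI; the model filename here is hypothetical):

```python
# CPU/GPU split from the text as llama-cpp-python constructor arguments.
from llama_cpp import Llama

llm = Llama(
    model_path="wizardlm-30b.ggmlv3.q4_0.bin",  # hypothetical filename
    n_threads=10,      # CPU threads
    n_gpu_layers=20,   # layers offloaded to the GPU
    n_ctx=4096,        # context window
    n_batch=2048,      # prompt-processing batch size
)
```

Raising `n_gpu_layers` shifts more of the model into VRAM and speeds things up, until the GPU memory runs out.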
Have you tried any of the larger models? I just ordered 64GB of RAM. I also got Kobold mostly working. I hope to use it to try Falcon 40B. I really want to try a 70B model at 2-4 bit and see how accurate it is.
Fedora workstation. Had been on Silverblue for years, but got a machine with Nvidia and didn't want the extra headaches of SB
I just got Oobabooga running for the first time with Llama-2, and have Automatic1111 and ComfyUI running for images. I am curious about ML too, but I don't know where to start with that one yet.
For the uninitiated, all of these tools are running offline open source (or mostly) models.
As one of the best trained and most battle-hardened units at Moscow’s disposal, Wagner has been pivotal for Russia’s war in Ukraine.
The full story about "Archaeologists: Humans in the Americas Earlier Than Thought." Know the facts. Reveal the bias. Improve the News.
The companies are protesting Canada’s new "link tax" law by pulling news links off their platforms.
In the USA the cultural atmosphere slows to a crawl between Christmas and New Year's. I couldn't care less about the holidays. I am curious whether the slowdown is entirely cultural, or if there is some kind of inherent coupling where we all naturally slow down with the longest winter nights, in places with significantly shorter daylight hours.
I've worked night shifts doing hard manual labor. I'm well aware humans can adapt to any rhythm when required. I'm curious about the effects on people that do not have such rigid lifestyles.
I encountered someone saying, "I have no problems with a person's sexual orientation and choice, I have a problem with anyone being openly sexual or flaunting their sexuality in front of me regardless of their choice of orientation."
I am a card-carrying atheist. I was raised in one of the worst fundamentalist Christian extremist groups and now live in near isolation after abandoning it nearly 10 years ago. All sexuality was bottled in my life and surroundings. This is still my comfort zone. A part of me wants to hold on to a similar ethos as the person I mentioned above, but I'm not very confident it is the right inner philosophical balance either.
I'm partially disabled now, so this is almost completely hypothetical. I am honestly looking to grow in my understanding of personal space and inner morality as it relates to others. Someone enlighten me please. Where does this go, what does it mean to you?
I'm just curious if it is on the table at some point. I only see a small slice of beehaw when I'm logged in but the active participation feels like it is on a downward trend. Like, there appears to be ~700 on here right now. I know numbers aren't everything, but overall engagement is important. I'm on several instances with different accounts. I've been gravitating towards my .world account because it is so active. I get a grouchy or rude reply still from time to time, but it seems like most of the trolls have gone or been removed. That instance seems to be maturing fast and growing some personality all its own. The server seems constantly stressed, but Ruud is holding it together. The moderation seems much more in check now too. That's just my perspective. I'd like to see everyone come together again, but I am just one user.
Like, we don't own triple-A-capable machines at this point. A friend and I played in the era of the original Age of Empires and StarCraft; Worms, and Dune. We were core SNES-PS2 era. We were never the ultra-competitive hotkey speed-run strategy types, we just played for fun.
Anyone out there in your late 30s to early 40s who has managed to reconnect with old friends despite long distances: what are you playing now?
I don't want the super health food tree bark nonsense you give nonbelievers. I'm looking for better than those of any animal infidels. Don't hold back on me now!
Tell me the details, like what makes yours perfect, why, and your cultural influence if any. I mean, rice is totally different in Mexican, Chinese, Indian, Japanese, and Persian food, just to name a few. It is not just the spices or sauces I'm interested in, though those matter too. I am really interested in the grain variety, and specifically how you prep, cook, and absolutely anything you do after. Don't skip the cultural details that you might otherwise presume everyone knows. If you know why some brand or region produces better ingredients, say so. I know it seems simple and mundane, but it really is not. I want to master your rice as you make it in your culture. Please tell me how.
So, how do you do rice?