I was at a small roleplaying convention last week. It was great to meet the others again after about a year and game with them. Unfortunately someone was rather generous with their flu viruses and I got my personal helping, so I'm on sick leave for the second day, but luckily, according to the test, it's just a flu and not the big bad C. On Monday I cobbled together a small template for my sister to build fake computer screens as props for TV shows... All in all a mixed bag of some good stuff and some annoying things...
The link between autism spectrum disorder (ASD) and the body's 'second brain' is more apparent than ever before.
Don't really know what to make of this...
There are a few different things I'd like to mention:
- I don't think that such a thing as a massively defederated instance exists right now. The most blocked instance is blocked by about 11% of instances, followed by two instances at 6%.
- Even if die-hard scene users knew their instances, not every random troll or spammer would.
- This doesn't address the possible legal issues of publicly announcing where someone could find illegal content
- If "small queer instance" refers to beehaw... That's the second largest instance there is as of today according to https://github.com/maltfield/awesome-lemmy-instances
And lastly: if you're new to the fediverse, you maybe shouldn't run your own instance in the first place. Helping reckless people pull reckless stunts is a bad reason to promote a feature.
Currently it isn't, and I don't think that would be the best idea, since it could be misused as a kind of index to find bad instances. The list of defederated instances is available to the public, so the defederating instance could in fact be "advertising" the instances it defederated from ("Look, we don't want this stuff here, but these instances are for [right-wing|transphobes|bots|spammers|porn]")...
Depending on where the instance is hosted or where the admin lives, it might even be illegal to point people to places where they can find certain things.
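Just to illustrate how public this already is: a minimal sketch of reading an instance's blocklist, assuming a Lemmy instance that exposes the /api/v3/federated_instances endpoint (0.18+). The instance name is only a placeholder, and the exact response shape differs a bit between versions, so both variants are handled.

```python
# Minimal sketch: fetch an instance's public defederation list via the Lemmy API.
import requests

INSTANCE = "lemmy.example"  # hypothetical instance domain

resp = requests.get(f"https://{INSTANCE}/api/v3/federated_instances", timeout=10)
resp.raise_for_status()
blocked = resp.json().get("federated_instances", {}).get("blocked", [])

for entry in blocked:
    # Older versions return bare domain strings, newer ones return objects.
    domain = entry if isinstance(entry, str) else entry.get("domain", "<unknown>")
    print(domain)
```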
I've seen the fun of "prints everywhere" in production when a colleague forgot to remove a "Why the fuck do you end up here?" followed by a bunch of variables before committing a hotfix... Customers weren't too amused...
Edit: That was a PHP-driven web shop and the message ended up on top of the checkout page.
I don't know if this community is intended for posts like this; if not, I'm sorry and I'll delete this post ASAP...
So, I play TTRPGs (mostly online) and I'm a big fan of visual aids, so I wanted to create some character images for my character in the new campaign I'm playing in. I don't need perfect consistency, as humans usually change a little over time, and I only needed the character to be recognizable across a couple of images that are usually viewed on their own and not side by side; nothing like the consistency you'd need for a comic book or something similar. So I decided to create a Textual Inversion following this tutorial, and it worked way better than expected. After less than 6 epochs I had a consistency that was enough for my use case, and it didn't start to overfit when I stopped the training around epoch 50.
!Generated image of a character wearing a black hoodie standing in a run-down neighborhood at night
!Generated image of the character wearing a black hoodie standing on a street
!Generated image of the character cosplaying as Iron Man
!Generated image of the character cosplaying as Amos from The Expanse
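For context, here's a minimal sketch of how I then load the trained embedding for generation with the diffusers library; the file name, token and prompt are just placeholders for illustration, not the actual ones from my training.

```python
# Minimal sketch: generate images with a trained Textual Inversion embedding.
# Assumes diffusers >= 0.14 and a Stable Diffusion 1.5 base model; needs a GPU for fp16.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding and bind it to a placeholder token for prompting.
pipe.load_textual_inversion("./learned_embeds.safetensors", token="<my-character>")

image = pipe(
    "a photo of <my-character> wearing a black hoodie, night street, cinematic lighting",
    num_inference_steps=30,
).images[0]
image.save("character.png")
```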
Then my SO, who's playing in the same campaign, asked me to do the same for their character. So we went through the motions and created and filtered the images. A first training attempt had the TI starting to overfit halfway through the second epoch, so I lowered the learning rate by a factor of five and started another round. This time the TI started overfitting somewhere around epoch 8 without reaching consistency before. The generated images alternate between a couple of similar yet distinguishable faces. To my eye the training images seem to have a similar or higher quality than the images I used in the first set. Was I just lucky with my first TI and unlucky with the other two and should simply keep trying, or is there something I should change (like the learning rate, which still seems high to me at 0.0002, judging from other machine learning topics)?
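To make the learning-rate question concrete, here's a minimal sketch of where that number sits in a diffusers-style TI setup: just the optimizer over the token embeddings, not the full training loop. The values only illustrate the factor-of-five reduction mentioned above, and the model name is the usual SD 1.5 base, so treat it as a rough picture rather than my exact training script.

```python
# Minimal sketch: in Textual Inversion only the token-embedding table is trained;
# the UNet and the rest of the text encoder stay frozen.
import torch
from transformers import CLIPTextModel

text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="text_encoder"
)

base_lr = 2e-4            # the 0.0002 mentioned above (illustrative)
lowered_lr = base_lr / 5  # "lowered by a factor of five" -> 4e-5

# The learning rate goes straight into the optimizer over the embedding weights.
optimizer = torch.optim.AdamW(
    text_encoder.get_input_embeddings().parameters(), lr=lowered_lr
)
```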