Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 23 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original Unlike photos, upscaling digital art with a well-trained algorithm will likely have little to no undesirable effect. Why? The drawing originated as a series of brush strokes, fill areas, gradients, etc., which could be represented in a vector format but are instead rendered on a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist-Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.) Suppose I gave you a low-res image of the flag of South Korea 🇰🇷 and asked you to manually upscale it for printing. Knowing that the flag has no small features, so there is no need to guess at detail (an assumption that does not hold for photos), you could redraw it with vector shapes that use the same colors, recreate every stroke and arc in the image, and then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process - not adding detail, just trying to represent the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.
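The sampling-theorem claim above can be checked directly. This is a minimal sketch in 1-D (a band-limited signal instead of an image, NumPy only; the sampling rate, frequencies, and window size are arbitrary choices): samples taken above the Nyquist rate are "upscaled" back to a dense grid with Whittaker-Shannon sinc interpolation, and the reconstruction error stays tiny.

```python
import numpy as np

# A band-limited signal: both components sit below the Nyquist frequency.
def signal(t):
    return np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

fs = 20.0                      # sampling rate; 7 Hz < fs/2 = 10 Hz, so Nyquist holds
n = np.arange(-500, 501)       # generous sample window to keep truncation error small
samples = signal(n / fs)

# Whittaker-Shannon interpolation: rebuild the signal on a 10x denser grid.
t_fine = np.linspace(-1, 1, 401)
recon = np.array([np.sum(samples * np.sinc(fs * t - n)) for t in t_fine])

err = np.max(np.abs(recon - signal(t_fine)))
print(err)  # small: the "upscaled" signal matches the original almost exactly
```

The 2-D analogue is exactly the flag argument: as long as every stroke is wider than the sample spacing allows, the pixel grid already carries all the information the vector original had.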
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 22 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Actually, shaggy mane (Coprinus comatus) is edible.
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 22 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 21 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 21 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
I also tried Upscayl, but that took about 1000x longer and "reinterpreted" the entire picture in an anime style, which made lines thinner, lost detail, etc.
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 20 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
A little voodoo doll version of herself on that spear... Kinky
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 20 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Reference image is the Mandelbrot set zoomed in by a factor of about 1 million, rotated 90° anticlockwise. x = -Im(c) = -0.1318252536 ± 0.0000011001; y = Re(c) = -0.7436447860 ± 0.0000014668
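Coordinates like these feed straight into the standard escape-time iteration z → z² + c. A minimal sketch (plain Python, no plotting; the iteration budget is an arbitrary choice):

```python
# Escape-time test for Mandelbrot set membership: iterate z -> z^2 + c
# and count how long |z| stays within the escape radius of 2.
def escape_time(c, max_iter=1000):
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i        # escaped: c is outside the set
    return max_iter         # never escaped within the budget: likely inside

# Center of the zoom described above (before the 90-degree rotation).
c_center = complex(-0.7436447860, 0.1318252536)
print(escape_time(c_center))
```

Points near this center sit on the boundary of the set, which is why deep zooms there stay visually interesting: neighboring pixels have wildly different escape times.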
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 19 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 19 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
I wonder where the beam would come from. The eyes?
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 18 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Finally, a MorphMoe waifu where I could figure out what the reference was.
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 18 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 17 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 17 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Four, actually, and it's still missing two from the product it's supposed to represent (they could be removable though).
What is that metal instrument on her back that says "Smoking Kills"?
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 16 on Tapas
I tried upscaling with waifu2x (model: upconv_7_anime_style_art_rgb), but it didn't go too well.
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 16 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 15 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 15 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 14 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 14 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
And the hat features a quote from Homer's Iliad:
...Ὕπνῳ καὶ Θανάτῳ διδυμάοσιν.
"...of Sleep and Death, who are twin brothers." This refers to the fraternal relationship of the respective deieties, Hypnos and Thanatos.
The ship says
Πᾶσιν ἡμῖν κατθανεῖν ὀφείλεται
This is Greek for "Death is a debt which every one of us must pay", a quote from Euripides' play Alcestis.
Paint timelapse available!
Artist: Onion-Oni aka TenTh from Random-tan Studio Original post: #Humanization 6 on Tapas (warning: JS-heavy site)
Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). Original
It is obviously pretending to be a historical artifact but then it proudly says "QUARTZ", indicating there's probably just a cheap modern movement inside.
The waifu is nice though, I like the thigh clasp.
They are hard to separate but when you do, they both become half N and half S. No monopoles allowed!
It wouldn't last long, would hurt a lot, and would smell horrible... unless you can fake the fire and lightning effect with fluorescent paint in a UV-lit venue. I don't think LEDs can do this yet if the dress is meant to be comfortable.
The Random-tan Studio "Humanization" pics I've been posting follow a pattern. See if you can spot it.
I'd say owls are sentient (see sidebar) but the spirit is there so owl allow it.
There is just one picture... Show me a screenshot if you see two.
Very creative with the various black protrusions.
Lethal humanoid monsters, weird voice acting (likely not AI, though), and "telephone"-distorted audio (it's not just because I limited the bitrate to 20 kb/s to fit under 10 MiB; the YouTube video is like that). It's an artistic choice, but not a very rare one, so it's likely not directly inspired by H. P. Lovecraft's audiobooks.
I'll show you some superior weaponry. Today's post won't be automated.