Should Hexbear have a more robust robots.txt?
Consider https://arstechnica.com/robots.txt or https://www.nytimes.com/robots.txt and how they block all the stupid AI companies' crawlers from scraping their content for free.
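For anyone who hasn't opened those links: the relevant part is just a list of AI-crawler user-agents, each given a blanket Disallow. Something roughly like this; it's a paraphrased sketch rather than a verbatim copy of either site's file, though GPTBot, CCBot and Google-Extended are the names those crawlers publicly identify themselves with:

```
# Illustrative snippet, modelled on what sites like Ars and the NYT publish
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```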
The robots.txt construct is completely voluntary, and some bots even use it to specifically target the content it asks them to avoid.
In my opinion, anyone relying on this to protect their content has no business publishing anything online.
See: https://en.m.wikipedia.org/wiki/Robots.txt
We will sue them for their unauthorized use of the marketplace of ideas.
Of course it's voluntary, but if entities like OpenAI say they will respect it, then presumably they really will.
Couple of things:
But why presume that the guys trying to scam people by claiming their algorithms are aRtIfIcIaL iNtElLiGeNcE aren't lying about that too?
Eh, will they really? It'd be pretty hard to prove they didn't respect it.
Could it work as a way of formally establishing non-consent in legal terms, though?
It's not about relying on it; it's about changing the behaviour of the web crawlers that do respect it, which, as someone who has adminned a couple of scarily popular sites over the years, is a surprisingly high percentage of them. (Sketch below of what a compliant crawler actually does.)
If someone wants to get around it, they obviously can, but this is true of basically all protective measures ever. Doesn't make them pointless.
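To make that concrete, here's a minimal sketch of the check a compliant crawler performs before every fetch, using Python's standard urllib.robotparser. The site URL and the GPTBot user-agent are placeholders for illustration, not a claim about what any particular bot actually runs:

```python
import urllib.robotparser

# Download and parse the site's robots.txt (placeholder URL for illustration).
robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://hexbear.net/robots.txt")
robots.read()

# A polite crawler asks this question before every request it makes.
page = "https://hexbear.net/post/123456"  # hypothetical page
if robots.can_fetch("GPTBot", page):
    print("Allowed for this user-agent; a compliant crawler would fetch it.")
else:
    print("Disallowed for this user-agent; a compliant crawler skips it.")
```

A crawler that ignores robots.txt simply never runs that check, which is the whole "voluntary" point: the file only moves the behaviour of bots that bother to ask.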