Well, there seems to be a new project called "Poison Fountain" that is feeding corrupted data to AI crawlers, intentionally "poisoning" training sets to disrupt big models.
The twist?
Some of the creators allegedly work inside major AI companies.
Their claim: Regulation’s too slow, so they’re taking matters into their own hands.
Well, now we need to double down on the fact-checking part!
(Don't say I didn't warn you... and weirdly, the page where I found the article about this has since been removed!)