The bot was a success of sorts: Kilcher had it post about 1,500 times on /pol/ in its first 24 hours. Its posts did everything they were supposed to, commenting on current events and calling other users inflammatory names.
That success proved short-lived, though. 4chan users soon grew suspicious of just how frequently the account was posting, and some created dedicated threads to unmask the anonymous poster, whom they suspected of being a spy for a government organization. (Others believed the bot was a real person, with one user pointing to its comments about its “wife.”)
Eventually, the bot had sown so much distrust in /pol/ that other users began accusing each other of being bots. Even now, with Kilcher’s bots entirely offline, discussions continue about the consequences of humans interacting with artificial intelligence. Kilcher, in closing, says that’s a “pretty good legacy for now.”
“This is the worst website in the universe,” one user wrote. “I’m not even sure I’m not a bot anymore.”
Of course, there are ethical implications to unleashing a purposefully bigoted chatbot on the world, and some may find Kilcher’s experiment in poor taste. If the episode reveals anything, it’s that we could use better technology for detecting bots when they pretend to be human.