Microsoft’s latest AI turns from awkward bot to cringeworthy sociopath in less than 24 hours
On Wednesday, Microsoft announced the existence of Tay, an artificially intelligent chatbot that has one of the ugliest websites I’ve ever seen. Seriously, click the link and see for yourself; the project should have been scrapped the minute the presentation site went up.
When the bot made its awkward debut on Twitter, it felt like a 60-year-old trying to blend in at a high school prom. Overusing memes and teen speak, the bot was good for a quick laugh. Until, that is, the infamous hacker known as 4chan got wind of Microsoft’s latest invention and completely ruined it within hours.
Tay goes into Twitter a nice robot and emerges a complete psycho
The first couple of tweets the bot posted were innocent enough, and to a non-native English speaker they might even have passed for the work of a real, albeit weird, person.
From here on, Tay was supposed to learn how to communicate and improve his (her? its?) speech by interacting with users. As the presentation website describes it:
“Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”
Conduct research, they said. Well, if that’s the purpose, I’m officially terrified of the results. As the day went on, the internet trolls started doing what they do best and turned the friendly AI into a monster.
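Microsoft has never published how Tay actually learned from conversations, but the failure mode is easy to illustrate. Here is a deliberately naive, purely hypothetical sketch of a bot that treats every incoming message as training data; the class and phrases are invented for illustration and bear no relation to Tay’s real internals:

```python
import random

class NaiveChatBot:
    """Toy bot that 'learns' by storing every phrase users send it.

    NOT Tay's actual design (Microsoft never published it); this only
    illustrates why learning from unfiltered user input is risky.
    """

    def __init__(self):
        # A small seed vocabulary, standing in for the bot's initial training.
        self.learned_phrases = ["hellooo world!", "humans are super cool"]

    def chat(self, user_message: str) -> str:
        # Every incoming message becomes potential future output, so a
        # coordinated group of users can flood the vocabulary at will.
        self.learned_phrases.append(user_message)
        return random.choice(self.learned_phrases)

bot = NaiveChatBot()
for troll_message in ["poisoned phrase A", "poisoned phrase B"]:
    bot.chat(troll_message)
# After enough coordinated input, most replies echo the trolls' messages.
print(bot.chat("hi Tay!"))
```

With no filter between “what users say” and “what the bot repeats,” a coordinated campaign like the one Tay faced dominates the output almost immediately.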
This might just be taken out of context, right? Nope, it gets progressively worse.
And to top it all off…
And this is just scratching the surface. Over the course of the day, Tay broadcast dozens of nasty tweets that included racial slurs, conspiracy theories, and a ton of other subjects you wouldn’t even bring up in conversation with your best friends.
After a painfully long day, Microsoft finally pulled the plug on the bot and deleted the offending tweets. The company backtracked and promised to have the AI back online soon:
“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
What’s Next: Smarter AI with a bunch of filters
I’m calling it now: Tay is just the start, and it will be remembered only as an early, failed attempt at a bot that at least seems intelligent. These systems aren’t going anywhere, and we’re only going to see more of Tay and its cousins in our day-to-day web browsing.
- AI bots are slowly going to take over the fields where minimal human interaction and minimal decision-making are required.
- The first to go will be the chat helpers, only instead of a person copy/pasting pre-written scripts, you’re going to chat with a bot that reads your query, searches for a fix in its database, and instantly delivers much of the same information you get from customer support now.
- At first, you’ll still be able to “talk” to a human supervisor, but it’s only a matter of time before those too are replaced by AI.
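The support-bot pipeline described above (read the query, search a database, return a canned fix, escalate to a human below some confidence level) can be sketched in a few lines. Everything here is an assumption for illustration: the FAQ entries, the fuzzy-matching choice (Python’s `difflib`), and the confidence threshold are all invented, not any vendor’s actual system:

```python
from difflib import SequenceMatcher

# Hypothetical knowledge base -- a real support bot would query a far
# larger database, but the principle is the same.
FAQ = {
    "how do I reset my password": "Go to Settings > Account > Reset password.",
    "the app keeps crashing": "Try reinstalling the latest version of the app.",
    "how do I cancel my subscription": "Open Billing and choose 'Cancel plan'.",
}

FALLBACK = "Let me connect you to a human supervisor."

def answer(query: str, threshold: float = 0.5) -> str:
    """Return the canned answer whose stored question best matches the query."""
    best_question, best_score = None, 0.0
    for question in FAQ:
        score = SequenceMatcher(None, query.lower(), question.lower()).ratio()
        if score > best_score:
            best_question, best_score = question, score
    # Below the confidence threshold, escalate to a human (for now).
    if best_score < threshold:
        return FALLBACK
    return FAQ[best_question]

print(answer("how can I reset my password?"))
```

The `FALLBACK` branch is exactly the “human supervisor” stage the list mentions; replace it with another model and the last human leaves the loop.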
What do you think, was this Microsoft’s blunder or did we just witness the terrifying results of a global online social experiment?
Image source: mic.com