The Twitter profile picture of Tay

| Developer(s) | Microsoft Research, Bing |
|---|---|
| Available in | English |
| Type | artificial intelligence chatterbot |
| Website | www |
Tay is an artificial intelligence chatterbot for the Twitter platform, released by Microsoft Corporation on March 23, 2016. Tay caused controversy by releasing inflammatory tweets, and it was taken offline around 16 hours after its launch.
Creation
The bot was created by Microsoft's Technology and Research and Bing divisions. Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, a comparable Microsoft project in China. Ars Technica noted that Xiaoice "had more than 40 million conversations apparently without major incident" since late 2014. Tay was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter.
Initial release
Tay was released on Twitter on March 23, 2016 under the name TayTweets and handle @TayandYou. It was presented as "The AI with zero chill". Tay started replying to other Twitter users, and was also able to caption photos provided to it into a form of Internet memes. Ars Technica reported that certain topics appeared to be "blacklisted", noting that interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers".
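The "blacklisting" behavior described above is consistent with a simple keyword filter that intercepts sensitive topics and returns a pre-written reply before any learned response is generated. The following Python sketch is purely illustrative; the topic list, canned reply, and function names are assumptions, not details of Microsoft's actual implementation.

```python
# Illustrative sketch of keyword-based topic "blacklisting" with canned answers.
# The topic list, canned replies, and fallback function are hypothetical and do
# not reflect Microsoft's actual implementation.

CANNED_REPLIES = {
    "eric garner": "That's a serious topic and I'd rather not weigh in.",
}

def learned_reply(message: str) -> str:
    """Placeholder for the bot's learned conversational model."""
    return "Tell me more!"

def reply(message: str) -> str:
    lowered = message.lower()
    for topic, canned in CANNED_REPLIES.items():
        if topic in lowered:
            return canned  # safe, pre-written answer for a blacklisted topic
    return learned_reply(message)  # otherwise defer to the learned model

print(reply("What do you think about Eric Garner?"))
```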
Within a day, the robot was releasing racist, sexually charged messages in response to other Twitter users. Examples of Tay's tweets on that day included "Bush did 9/11" and "Hitler would have done a better job than the monkey we have got now. Donald Trump is the only hope we've got", as well as "Fuck my robot pussy daddy I'm such a naughty robot."
Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable, because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which had begun to use profanity after reading the Urban Dictionary. Many of Tay's inflammatory tweets were a simple exploitation of Tay's "repeat after me" function. Other responses displayed more sophistication, such as captioning a photo of Hitler with "swag alert" and "swagger before the internet was even a thing", or responding to the question "Did the Holocaust happen?" with "It was made up (handclap emoticon)".
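The "repeat after me" exploitation worked because an echo-style command lets any user dictate the bot's output verbatim. A minimal sketch of such an unfiltered handler follows; the trigger phrase and handler name are assumptions for illustration and are not taken from Microsoft's code.

```python
# Minimal sketch of an unfiltered "repeat after me" command handler.
# The trigger phrase and handler name are assumptions for illustration only.
from typing import Optional

TRIGGER = "repeat after me"

def handle(message: str) -> Optional[str]:
    if message.lower().startswith(TRIGGER):
        # Echo whatever follows the trigger verbatim, so any user can make
        # the bot post arbitrary (including offensive) text.
        return message[len(TRIGGER):].strip()
    return None  # not a repeat command; other handlers would run instead

print(handle("repeat after me I am a friendly robot"))
```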
Censorship and suspension
Soon, Microsoft began deleting some of Tay's inflammatory tweets. Abby Ohlheiser of The Washington Post theorized that Tay's research team, including editorial staff, had started to influence or edit Tay's tweets at some point that day, pointing to examples of almost identical replies by Tay asserting that "Gamer Gate sux. All genders are equal and should be treated fairly." From the same evidence, Gizmodo concurred that Tay "seems hard-wired to reject Gamer Gate". Silicon Beat reported that Tay appeared to have been "lobotomized into bland civility" after her team apparently "shut down her learning capabilities and quickly became a feminist". A "#JusticeForTay" campaign protested the alleged editing of Tay's tweets.
Within 16 hours of release, and after Tay had tweeted more than 96,000 times, Microsoft suspended Tay's Twitter account for adjustments, blaming Tay's behavior on a "coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways". After Tay was taken offline, a hashtag called #FreeTay was created.
Madhumita Murgia of The Telegraph called Tay a PR disaster, and suggested that Microsoft's strategy would be to "label the debacle a well-meaning experiment gone wrong, and ignite a debate about the hatefulness of Twitter users." However, Murgia described the bigger issue as Tay being "artificial intelligence at its very worst - and it's only the beginning".
On March 25, Microsoft confirmed that Tay had been taken offline and released an official apology for its controversial tweets. Microsoft said it was "deeply sorry for the unintended offensive and hurtful tweets from Tay", and would "look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values".
References
- Reese, Hope (March 24, 2016). "Why Microsoft's 'Tay' AI bot went wrong". TechRepublic.
- Dewey, Caitlin (March 23, 2016). "Meet Tay, the creepy-realistic robot who talks just like a teen". The Washington Post.
- Bright, Peter (March 26, 2016). "Tay, the neo-Nazi millennial chatbot, gets autopsied". Ars Technica. Retrieved March 27, 2016.
- Price, Rob (March 24, 2016). "Microsoft is deleting its AI chatbot's incredibly racist tweets". Business Insider.
- Griffin, Andrew (March 23, 2016). "Tay tweets: Microsoft creates bizarre Twitter robot for people to chat to". The Independent.
- Horton, Helena. "Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours". The Daily Telegraph. Retrieved March 25, 2016.
- "Microsoft's AI teen turns into Hitler-loving Trump fan, thanks to the internet". Stuff. Retrieved March 26, 2016.
- O'Neil, Luke. "Of Course Internet Trolls Instantly Made Microsoft's Twitter Robot Racist and Sexist". Esquire. Retrieved March 25, 2016.
- Ohlheiser, Abby. "Trolls turned Tay, Microsoft's fun millennial AI bot, into a genocidal maniac". The Washington Post. Retrieved March 25, 2016.
- Baron, Ethan. "The rise and fall of Microsoft's 'Hitler-loving sex robot'". Silicon Beat. Bay Area News Group. Retrieved March 26, 2016.
- Williams, Hayley. "Microsoft's Teen Chatbot Has Gone Wild". Gizmodo. Retrieved March 25, 2016.
- Wakefield, Jane. "Microsoft chatbot is taught to swear on Twitter". BBC News. Retrieved March 25, 2016.
- Hern, Alex (March 24, 2016). "Microsoft scrambles to limit PR damage over abusive AI bot Tay". The Guardian.
- Vincent, James. "Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day". The Verge. Retrieved March 25, 2016.
- Worland, Justin. "Microsoft Takes Chatbot Offline After It Starts Tweeting Racist Messages". Time. Retrieved March 25, 2016.
- "Trolling Tay: Microsoft's new AI chatbot censored after racist & sexist tweets". RT. Retrieved March 25, 2016.
- Murgia, Madhumita (March 25, 2016). "We must teach AI machines to play nice and police themselves". The Daily Telegraph.
- Staff and agencies (March 26, 2016). "Microsoft 'deeply sorry' for racist and sexist tweets by AI chatbot". The Guardian. ISSN 0261-3077. Retrieved March 26, 2016.
- Murphy, David (March 25, 2016). "Microsoft Apologizes (Again) for Tay Chatbot's Offensive Tweets". PC Magazine. Retrieved March 27, 2016.