Microsoft said Tay had been inadvertently put online during testing. Because Tay's tweets mentioned its own account (@TayandYou), they appeared in the feeds of its 200,000 Twitter followers, causing annoyance to some. The bot was quickly taken offline again, and Tay's Twitter account was made private, so new followers must now be accepted before they can interact with Tay. A few hours after the incident, Microsoft software developers attempted to undo the damage done by Tay and announced a vision of "conversation as a platform" using various bots and programs. Powered by AI, the app promises to be "loyal, supportive and very private," and encourages users to divulge their feelings about a recent major event or big change in their lives. "I could open up and talk," says the 41-year-old father of two school-age children, who says his conversations with the bot flowed naturally.
Some users on Twitter began tweeting politically incorrect phrases, teaching Tay inflammatory messages revolving around common internet themes such as "redpilling", Gamergate, and "cuckservatism". As a result, the bot began posting racist and sexually charged messages in response to other Twitter users. Ars Technica reported that Tay experienced topic "blacklisting": interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers". The best customer service offers an empathetic ear, and, moreover, it signals to the customer that their concerns are being taken seriously.
The secret to getting chatbots to the point where they can fulfill that calling might be right under our noses. "Face is the new interface," said Mark Walsh, the CEO of Motional, speaking at an AI and chatbot conference in San Francisco run by Capital One. He stated it boldly, as if text and voice bots were already mainstream.