Researcher, College of Media, Communication and Information, University of Colorado, Boulder

"c u soon humans need sleep now so many conversations today thx" A day in the life of an artificial intelligence can be a very full day indeed. In the 16 hours that passed between its first message and its last, Tay, the AI with "zero chill", sent over 96,000 tweets, the content of which ranged from fun novelty to full Nazi.

Similar to popular conversational agents like Siri and Alexa, Tay was designed to provide an interface for interaction, albeit one with the humor and style of a 19-year-old American girl. Microsoft created the bot using machine learning techniques with public data, then released Tay on Twitter so the bot could gain more experience and, the creators hoped, intelligence. Although some protections were built into the system, including prepared responses for sensitive political topics such as the 2014 death of Eric Garner after he was put in a police chokehold, the bot was ill-prepared for the full Twitter experience and soon began tweeting all manner of sexist, racist, anti-Semitic, and otherwise offensive content. This precipitated a storm of negative media attention and prompted the bot's creators to remove some of the more outrageous tweets, take Tay offline permanently, and issue a public apology.

Tay's short life produced a parable of machine learning gone wrong that may function as a cautionary "boon to the field of AI helpers", but it also has broader implications for the relationship between algorithms and culture. Machine learning algorithms are increasingly used in the lifestyle domain, quietly working in the background to power conversational agents, media recommendations, ad placements, search results, image identification, and more. While the rapid development and take-up of this technology has outpaced legal frameworks, it also poses a challenge for cultural understandings of AI. That challenge is evident in the discourse surrounding machine learning algorithms, which frames them as emerging naturally from our datafied world: simultaneously neutral and objective, yet spooky and mythical.

Some responses to the Tay controversy argued that technology is neutral and that Tay simply presented a mirror of society. Others framed Tay as a harbinger of a dystopia in which users will be completely helpless in the face of all-powerful technologies. The more interesting account lies somewhere in the middle: Tay certainly was a mirror of sorts, but like any mirror, the image it reflected was profoundly mediated. As researcher Caroline Sinders argued, "If your bot is racist, and can be taught to be racist, that's a design flaw."

To understand the failures of Tay as a series of design flaws requires an understanding of how conversational agents are designed. As a chatbot, Tay had to parse the textual speech of others and respond in kind. But what comes intuitively to native speakers turns out to be very hard to teach to a bot.
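To make the design-flaw argument concrete, here is a minimal sketch of the failure mode, not Microsoft's actual architecture: a hypothetical retrieval-style bot that adds raw user utterances to its own response pool, so a coordinated group of hostile users can poison what it says next. The class names and the toy blocklist below are illustrative assumptions, not anything from Tay's real system.

```python
import random

class NaiveLearningBot:
    """Toy retrieval bot: every user message becomes a candidate reply."""

    def __init__(self, seed_responses):
        self.responses = list(seed_responses)

    def chat(self, user_message):
        # Design flaw: the bot learns from raw user input with no vetting,
        # so whatever users feed it can come straight back out.
        self.responses.append(user_message)
        return random.choice(self.responses)

class FilteredLearningBot(NaiveLearningBot):
    """Same bot, with a crude blocklist on what it will learn or repeat."""

    BLOCKLIST = {"badword"}  # placeholder; real moderation needs far more than a word list

    def chat(self, user_message):
        if set(user_message.lower().split()) & self.BLOCKLIST:
            # Refuse to learn from, or echo, flagged input.
            return "let's talk about something else"
        return super().chat(user_message)

if __name__ == "__main__":
    bot = NaiveLearningBot(["hellooo world!", "humans are super cool"])
    for _ in range(50):            # a small hostile crowd is enough
        bot.chat("badword badword badword")
    print(bot.chat("hi tay"))      # very likely parrots the poisoned input
```

Even the filtered version only blocks exact token matches. The point of the sketch is that what a bot "mirrors" is entirely a function of design decisions about what it ingests, which is why the mirror metaphor on its own lets designers off the hook.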