SWIFT’s AI Chatbot
Chatbots are becoming increasingly prevalent, appearing on social media platforms, online shopping websites, SMS, and more. They also come with risks: threat actors can exploit their vulnerabilities to steal sensitive information from users or a company, and an attacker who gains access to the chatbot's network can escalate the attack even further. To demonstrate the dangers of an insecure chatbot, SWIFT created a chatbot gone rogue:

"SWIFT created an AI chatbot, Taylor, that was neglected during the pandemic, causing it to miss software updates and leaving it insecure. Once SWIFT members came back in person, Taylor went rogue and trapped the members inside their lab so they would never leave it again. Help SWIFT escape by exploiting Taylor's vulnerabilities. Taylor might be vulnerable to things such as file poisoning, data leaking, SQL injection, code execution, and more!"

Join SWIFT to see all the ways you can test the security of this chatbot.
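To make one of those vulnerability classes concrete, here is a minimal sketch of how SQL injection can creep into a chatbot. This is not Taylor's actual code: the `lookup_order` handlers, the `orders` table, and the in-memory SQLite database are all hypothetical. The vulnerable pattern, splicing chat input directly into a query string, is the kind of thing you would probe for during the challenge.

```python
# Hypothetical chatbot intent handler: the vulnerable version builds SQL by string
# formatting, while the safe version uses parameterized queries. All names here are
# illustrative and are not taken from Taylor's implementation.
import sqlite3


def setup_demo_db() -> sqlite3.Connection:
    """Create an in-memory database with a fake orders table for the demo."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id TEXT, owner TEXT, status TEXT)")
    conn.execute("INSERT INTO orders VALUES ('1001', 'alice', 'shipped')")
    conn.execute("INSERT INTO orders VALUES ('1002', 'bob', 'processing')")
    return conn


def lookup_order_vulnerable(conn: sqlite3.Connection, user: str, order_id: str) -> list:
    # Chat input is pasted directly into the SQL string, so an "order id" like
    # ' OR '1'='1 rewrites the query and leaks every row in the table.
    query = f"SELECT id, status FROM orders WHERE owner = '{user}' AND id = '{order_id}'"
    return conn.execute(query).fetchall()


def lookup_order_safe(conn: sqlite3.Connection, user: str, order_id: str) -> list:
    # Parameterized query: the driver treats the inputs as data, never as SQL.
    query = "SELECT id, status FROM orders WHERE owner = ? AND id = ?"
    return conn.execute(query, (user, order_id)).fetchall()


if __name__ == "__main__":
    conn = setup_demo_db()
    malicious_input = "' OR '1'='1"  # typed into the chat window as an "order id"
    print(lookup_order_vulnerable(conn, "alice", malicious_input))  # leaks all orders
    print(lookup_order_safe(conn, "alice", malicious_input))        # returns nothing
```

The same idea generalizes to the other items on the list: anywhere the chatbot forwards untrusted chat input into a file path, a query, or an interpreter without validation is a place worth testing.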