When you partner with FUSION 1, together with Microsoft Security and OpenAI we can help identify and neutralize emerging threats swiftly. Discover invaluable insights into top threats and principles for safeguarding AI technologies, ensuring your business continues with ethical solutions for success.
What are the emerging AI threats identified by Microsoft and OpenAI?
Microsoft and OpenAI have focused on emerging AI threats associated with threat actors such as Forest Blizzard, Emerald Sleet, and Crimson Sandstorm. Their research highlights activities like prompt injections, misuse of large language models (LLMs), and various forms of fraud. The analysis indicates that threat actors are leveraging AI as a productivity tool to enhance their offensive capabilities.
How does Microsoft respond to the misuse of AI technologies by threat actors?
When Microsoft detects the misuse of its AI applications by identified malicious threat actors, it takes appropriate actions such as disabling accounts, terminating services, or limiting access to resources. Additionally, Microsoft notifies other AI service providers about detected misuse, enabling them to verify findings and take necessary actions.
What role do LLMs play in the tactics of threat actors?
Threat actors are using LLMs for various purposes, including reconnaissance to gather information on potential victims, enhancing scripting techniques for malware development, and assisting in social engineering efforts. For instance, actors like Emerald Sleet have used LLMs to draft content for spear-phishing campaigns, while others have employed them to understand vulnerabilities and troubleshoot technical issues.