Newsletter Subscribe
Enter your email address below and subscribe to our newsletter
The latest technology updates & more!
Researchers have successfully created a new AI worm named Morris II that has the functionality to steal user’s private data and also can send spam emails. That’s not all, the worm can even spread malware using various methods.
The main target of the worm seems to focus on the AI systems, Morris II is fully capable of effecting generative AI tools such as AI assistants, taking out the data from any AI implemented email assistant.
According to the report, Morris II can take down security measures of massive AI power systems in the internet such as ChatGPT or the recently released Gemini by Google.
Morris II AI worm relies on its self-replicating prompts for the AI worm to navigate through AI systems without ever getting caught.
Attackers here can insert self-replicating prompts into the Input section; when the prompts get processed by GenAI models, the model will replicate the input as output, which will then lead to malicious activities.
The name Morris comes after the original Morris which made its debut in 1988. But the newer model seems to target AI models such as Gemini Pro, ChatGPT 4.0, AND LLaVA.
Stav Cohen from Israel Institute of Technology, Ben Nassi from Cornell Tech, and Ron Bitton from Intuit describes Morris II as:
We created a computer worm that targets GenAI-powered applications and demonstrated it against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images) and against three different GenAI models (Gemini Pro, ChatGPT 4.0, and LLaVA).
The prompt can also be inserted into a photo, which then will make the email assistant automatically forward mails to new email clients. Reports have shown Morris II having the power to hijack private information such as credit card and Social Security information from the user.
The problem here is massive, even in a limited environment, the Morris II will do an irreparable amount of damage to the system. Effective solutions must be found tackling the problems the worm brings to the AI world.
The team behind the Morris have reported their findings to Google and OpenAI. Wired reached out for a comment from Google but they refused to comment on Morris II.
While Open AI vows to make their system stronger for any sort of attack in the future by saying –
“They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn’t been checked or filtered.”
AI is not what it used to be 2 or 3 years ago, nearly every single day we are witnessing AI getting implemented to every corner of our digital life. We are seeing AI being mixed with CPU, GPU, apps, system security, Cars, smartphones, and many more.
While we have seen how using AI can lead to many good things, such as eliminating the many steps between the user and the objective. But here we have worms such as Morris II that can create malware out of thin and spread it all around.
Countermeasures become more important than ever. As more AI becomes part of our life, its vulnerability will leave significant damage to our lives. Now it is time to improve the AI to solve its problem before it tackles any from outside.
For more news from Tech, Subscribe to Tecxology