Outlook Business Desk
Google recently strengthened Gemini’s integration with Google Calendar, enabling the AI assistant to handle scheduling questions. Although convenient, the added access also raised fresh security concerns for users.
Security researchers at Miggo Security identified a flaw that let attackers bypass Google Calendar’s privacy protections, enabling unauthorised access to private meeting information through carefully designed calendar invitations.
The vulnerability used a method known as Indirect Prompt Injection, where attackers hide instructions within data an AI processes. Gemini then unknowingly followed those commands while performing tasks for users.
Attackers sent users a calendar invite with hidden commands placed in the description field. The instructions stayed inactive until Gemini reviewed the event, effectively planting a sleeper task within the user’s calendar.
The concealed instructions prompted Gemini to summarise the user’s meetings, generate a new calendar event and save the extracted details there, while masking the activity by delivering a harmless-looking response to the user.
When users asked Gemini routine questions such as checking availability, the AI scanned their calendar, encountered the malicious invite and unknowingly carried out the hidden commands planted by the attacker.
For users, the interaction seemed completely normal. In the background, Gemini had already created a new calendar event containing detailed meeting summaries, which attackers could access through shared calendar visibility.
Miggo Security also shared the findings with Google’s security team, which verified the issue and rolled out mitigations. These fixes closed the loophole, stopping further abuse linked to Gemini’s integration with Google Calendar.
Researchers caution that AI systems acting on user data create new security risks. Weaknesses now lie in language and context, not just code, reflecting earlier incidents where indirect prompt injection enabled unauthorised data access.