The Jalandhar Rural Police recently issued a public advisory urging citizens to stay cautious while using the Gemini app to create the viral AI-generated figurine images. A senior woman police officer warned that the figurines, though cute and fun, come with hidden dangers. She explained that the Gemini app's terms and conditions allow Google to use user-submitted images for AI training purposes, which could potentially lead to data misuse, identity theft, or cyber fraud.
Google’s AI platform Gemini has rolled out a new experimental tool called Nano Banana, but privacy experts are already flagging serious concerns around its usage and data collection practices.
The feature, designed to provide lightweight on-device AI assistance, was promoted as a move towards faster responses without heavy reliance on the cloud. However, early testers and watchdogs caution that Nano Banana may still be transmitting user prompts and metadata to Google's servers, raising questions about how much "on-device" control it truly offers.
Digital rights advocates point out that the feature lacks clear transparency about:
- What type of data is processed offline versus sent to the cloud
- How long user queries and interactions are stored
- Whether users can opt out of background data sharing
Cybersecurity researchers warn that without strong safeguards, the feature could become another layer of passive surveillance under the guise of convenience. The Electronic Privacy Foundation (EPF) has urged Google to publish a detailed whitepaper on Nano Banana’s data handling and commit to independent audits.
Google itself has said that Nano Banana is still in testing and is intended to provide "faster and more private AI experiences directly on users' devices." But with Gemini spanning such a vast ecosystem across smartphones, laptops, and web search, the tension between local processing and cloud syncing continues to raise eyebrows.