Over Half of Generative AI Inputs Contain Sensitive Data
In a concerning discovery, about 55% of all inputs to generative AI platforms contain sensitive and personally identifiable information (PII). The finding comes from a new report by Menlo Security, which highlights the rapid growth of generative AI and the security challenges it creates for organizations.
The report analyzed employee interactions with generative AI from July to December 2023. It found that over half of the data loss prevention (DLP) events detected in the last 30 days involved attempts to input personally identifiable information.
The report also noted an 80% increase in attempted file uploads to generative AI sites, a trend attributed to many AI platforms adding file upload features in the past six months. Uploads of confidential documents and customer lists made up a significant share of these attempts, with confidential documents alone accounting for 40%.
The ease and speed with which sensitive information can be entered into generative AI platforms pose a considerable risk to data security. The report stresses that organizations need comprehensive security policies covering every AI application in use to effectively mitigate the risk of data exposure.
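To make the DLP concept concrete, here is a minimal sketch of the kind of pattern-based check such a policy might enforce before a prompt reaches a generative AI platform. The patterns, names, and blocking logic below are illustrative assumptions, not Menlo Security's implementation; commercial DLP tools rely on far more sophisticated classifiers.

```python
import re

# Illustrative, assumed patterns only -- a real DLP product uses far
# richer detection (ML classifiers, document fingerprinting, etc.).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this: John Doe, SSN 123-45-6789, john.doe@example.com"
    hits = scan_for_pii(prompt)
    if hits:
        # A DLP gateway would block or redact the request at this point.
        print(f"Blocked: possible PII detected ({', '.join(hits)})")
    else:
        print("Prompt passed the DLP check")
```

A check like this would sit in a gateway or browser extension between employees and AI sites, flagging risky prompts before they leave the organization.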
Pejman Roshan, the Chief Marketing Officer at Menlo Security, stated that although there has been a reduction in attempts to copy and paste sensitive data into AI models in the past year, “the dramatic rise of file uploads poses a new and significant risk.”
These findings serve as a stark reminder of the cybersecurity risks posed by generative AI platforms. As organizations continue to weigh the benefits and challenges of generative AI, the importance of robust security measures to protect sensitive and personally identifiable information cannot be overstated.