The number of workers who use generative AI tools at work that have not been approved by their companies is quite high. In fact, according to a Software AG study from last fall, half of those who use AI do so outside of company controls.
The study, which found that 75% of workers use AI, indicates that 46% of those using AI tools that have not gone through their company's approval process would not stop using them even if they were banned from doing so.
The generative applications workers use are, moreover, far from few, according to a Harmonic report on enterprises and the balance between innovation and information exposure: no fewer than 254 on average, and that is without counting applications accessed through smartphones or mobile apps. This figure is the result of analyzing, over the first quarter of 2025, around 176,000 AI prompts and several thousand file uploads to AI applications made by 8,000 enterprise users.
One of the most worrying findings of this study is the frequency with which workers use their personal accounts to interact with AI platforms. According to Harmonic's research, 45.4% of AI interactions involving sensitive information originated from personal email accounts. Of these, no fewer than 57.9% were Gmail accounts.
This makes it clear that sensitive content is being sent through accounts outside companies' control, with everything that implies for the security and protection of their data. In addition, 21% of the sensitive data collected for the study was sent to the free ChatGPT plan, which can retain prompts and use them for model training.
This means that while companies believe they have brought internal AI use under control through security policies, their employees have found ways around those protections: they want the advantages AI offers, without weighing the security implications of how they access it.
In addition, although many companies have generative AI policies, few have the tools needed to ensure they are followed, especially when their workers' generative AI activity takes place through personal accounts or browser extensions of unknown origin.
As a result, workers use generative applications that have not been approved by the company, through channels and methods equally outside its control, because they face no obstacles in doing so, nor do they want any, regardless of what they use them for (writing summaries, drafting emails, generating content, etc.).
If workers find that the "official" tools their company provides are too rigid, or feel constrained using them, they will turn to others that make their work easier. Besides ChatGPT, these tools include Gemini, Claude, and Perplexity. In many cases they do not do this to defy their companies, but simply to do their job faster.
Companies therefore face an increasingly serious governance problem, one made worse by the fact that most of the apps their employees use have not been evaluated internally. Some even connect to cloud services with unclear data retention policies. Others do not make clear whether or not they comply with the privacy laws of the various regions in which they are available.
Among the most notable figures in Harmonic's study is the fact that 7% of the users it covered accessed AI tools developed in China, such as DeepSeek. In addition, 68.3% of the information uploaded to ChatGPT consisted of image files. It is also clear that workers upload common document types, such as DOCX, PDF, or XLSX, to public models without worrying much about whether they contain company data.
But completely blocking generative AI tools that are not company-approved does not work: workers will find a way to keep using them. Companies therefore need to focus on how their employees behave when using these tools and, more generally, strengthen their acceptable use policies.