Could the last data leaving the servers please turn off the lights?

IT departments have struggled with the challenges of shadow IT for some time. Employees, often keen to bid adieu to mundane tasks or avoid mediocre corporately approved tools, are happy to seek out productivity apps, file-sharing platforms, and messaging tools that work for them. The same applies to IT devices — from smartphones and USB drives to WiFi routers — that they’ll willingly plug into any available socket.

On the one hand, employees are often trying to be efficient or shorten a project’s time to market. On the other hand, there are serious hidden risks, such as non-compliance, the potential for data breaches, and a failure to back up data held on these shadow devices and platforms.

Recently, a new danger has emerged: Shadow AI. If your employees thought Canva would speed up the creation of marketing artwork, Midjourney could turbocharge it. So, of course, they created an account and tried it out, and then gave ChatGPT a spin for good measure.

The employee perspective

As highlighted, this isn’t our first rodeo when it comes to shadow use of tools. IT departments are often perceived as slow and bureaucratic when it comes to approving new software and apps. But this stands in opposition to the statements of C-level executives who extol the benefits of digitalization and the (still pending) death of the fax machine.

So, it’s no wonder that employees take things into their own hands. AI can help automate tasks, create lists, and even write Python scripts for data analysis. Colleagues who don’t use AI can even start to seem a little antiquated, so everyone ends up “giving it a go” to avoid being left out of water cooler chats.

In a survey by Software AG, half of the knowledge workers questioned in the US, UK, and Germany said they use non-company-issued AI tools [1], and 46% would refuse to give them up even if they were banned.

Hidden dangers of Shadow AI: A gamble for enterprises

Use of AI tools has skyrocketed. According to a report by Cyberhaven, the amount of data corporate workers uploaded to AI tools grew by 485% in the year to March 2024, and over 70% of workplace ChatGPT usage takes place through private accounts [2].

Unlike with enterprise versions, the data shared with many of these tools is used for further training. That covers things like HR asking for a translation of an internal email, or your software developers asking for a Python script to parse customer data. While it may seem relatively innocuous to seek help writing code, the mere description of what the code needs to do can reveal proprietary company data, processes, and IT system implementations.
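To make that concrete, here is a deliberately simple sketch of one mitigation: scrubbing obviously sensitive tokens from text before it is pasted into any external AI tool. The function name, customer ID format, and internal domain are purely hypothetical assumptions for illustration; a real deployment would rely on a centrally managed data loss prevention policy rather than an ad-hoc script.

```python
import re

# Hypothetical patterns for data that should never leave the company.
# The ID format and internal domain below are assumptions for illustration;
# a real deployment would maintain these centrally (e.g. in a DLP policy).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "customer_id": re.compile(r"\bCUST-\d{6}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def scrub_prompt(text: str) -> str:
    """Mask obviously sensitive tokens before text is sent to an external AI tool."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"<{label} redacted>", text)
    return text

if __name__ == "__main__":
    prompt = (
        "Write a Python script that parses exports from billing.corp.example.com, "
        "matches rows like CUST-004217 against emails such as jane.doe@client.com, "
        "and flags churn risks."
    )
    # The hostname, customer ID, and email are replaced before anything leaves the network.
    print(scrub_prompt(prompt))
```

Even a lightweight filter like this makes the point: the prompt itself, not just any attached file, is where internal detail tends to leak.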

Some of the key concerns include:

  • Data leakage and leak vectors
  • Privacy violations
  • Compliance and regulatory violations
  • Cybersecurity threats
  • Algorithmic bias and misinformation

Furthermore, many tools leverage the power of AI in the background, which may not be apparent to the average user. Take Grammarly, the writing assistant that blows Word’s spell check out of the water, as an example.

Grammarly uses transformers for error correction, natural language processing, and conversational AI [3]. To be fair, the company also publishes Responsible AI Guidelines covering compliance, fairness, bias, and safety, aligned with human values and principles [4].

However, if you’re not a user based in the UK or EU, you’re automatically enrolled in their Product Improvement and Training program, which uses your content to train their models and improve the product [5]. You can, of course, opt out, but you first need to realize that your content is being used for this purpose.

Repercussions from this laissez-faire approach to AI tools are already being felt. Engineers in a semiconductor division of Samsung pasted proprietary code into ChatGPT, prompting the company to limit prompts to 1,024 bytes [6]. And two New York lawyers were sanctioned after submitting a brief containing six fictitious case citations they had generated with ChatGPT [7].

Strategies for managing Shadow AI

You could, of course, implement an outright ban and block access to such tools. But that only pushes people to use them on their personal devices instead. In fact, Samsung had only recently unblocked access to ChatGPT when the infractions occurred, and you can’t help but wonder whether that step was read as tacit approval to use the tool however employees saw fit.

Thankfully, help is at hand. The International Organization for Standardization has released ISO 42001, a framework for implementing and managing AI systems that functions as a continuous improvement system [8]. It provides safeguards against risks posed by AI features, such as:

  • Oversight of autonomous decision-making, which may require additional explanation and human review.
  • Protective measures around data analysis, insight generation, and machine learning in systems coded by AI rather than by humans.
  • Tracking how an AI tool changes over time as it learns.

Such guidelines help organizations ensure their teams use AI responsibly, understand how it can put reputations at risk, and stay compliant with legal and regulatory requirements. This applies as much to third-party tools that use AI as it does to implementing and deploying on-premises AI tools.

The AI genie has left the building

Like shadow IT before it, AI is already part of your employees’ productivity toolkit, and your data has most likely already been used to train someone else’s AI models. You’ll be hard-pressed to find the genie to put it back in the bottle. The first step is to acknowledge that it is happening and then respond appropriately. Standards such as ISO 42001 and the NIST AI Risk Management Framework [9] are a good starting point. Additionally, rolling out training will help your employees understand the risks and give them guardrails within which they can work appropriately with these new tools.

Increasingly, organizations will sidestep the issue by developing on-premises AI tools, leveraging their power in the privacy of their own servers. While this addresses the problem of data leaks, ethical responsibility must still be built into these tools so that we understand how they work, can detect potential bias, and maintain oversight at every step.
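As a rough sketch of what that can look like in practice, the snippet below assumes an open-weight model served on your own infrastructure behind an OpenAI-compatible chat endpoint. The localhost URL and model name are placeholders (Ollama’s defaults are borrowed purely as an example, not a recommendation); the point is simply that prompts and documents never leave your network.

```python
import json
import urllib.request

# Assumed: a locally hosted model behind an OpenAI-compatible endpoint.
# The URL and model name are placeholders; swap in whatever your own
# on-premises deployment actually exposes.
ENDPOINT = "http://localhost:11434/v1/chat/completions"
MODEL = "llama3"

def ask_local_model(question: str) -> str:
    """Send a chat request to the in-house model; nothing leaves the local network."""
    payload = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": question}],
    }).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarise our Q3 incident reports in three bullet points."))
```

The same governance questions still apply to a setup like this, which is why frameworks such as ISO 42001 matter even when the model runs entirely in-house.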

If it has become clear that on-premises AI is the way forward for your organization, and you need the agility to test possible solutions risk-free, reach out to us here at ape factory. We’d be happy to advise on the best way forward to get you where you need to be.

----

[1] https://newscenter.softwareag.com/en/news-stories/press-releases/2024/1022-half-of-all-employees-use-shadow-ai.html
[2] https://info.cyberhaven.com/hubfs/Content%20PDF/Cyberhaven%20Q2%202024%20AI%20Adoption%20and%20Risk%20Report%20052024.pdf
[3] https://www.grammarly.com/ai
[4] https://www.grammarly.com/business/events-resources/ebook/ethical-innovation
[5] https://support.grammarly.com/hc/en-us/articles/25555503115277-Product-Improvement-and-Training-Control
[6] https://uk.pcmag.com/news/146345/samsung-software-engineers-busted-for-pasting-proprietary-code-into-chatgpt
[7] https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/
[8] https://www.youtube.com/watch?v=O4iKEr5AIi4
[9] https://www.nist.gov/itl/ai-risk-management-framework
