
Shadow AI: What is it? What are the issues?


The emergence of tools like ChatGPT has transformed work practices. While their use promises greater efficiency and real time savings, it also raises a new concern: Shadow AI… Here’s why this practice is alarming and how it can be managed.

In April 2023, Samsung permitted the experimental use of ChatGPT within certain divisions. The goal was to help with document translation and code review, and more generally to boost productivity. Shortly thereafter, however, Samsung banned the generative AI and put several preventive measures in place.

What occurred in the interim? On three separate occasions, employees divulged highly confidential information to ChatGPT…

  • An employee used the chatbot to summarize confidential notes on the performance of an internal project. These notes contained strategies and management decisions.
  • A developer trying to fix programming errors pasted source code from an internal application into ChatGPT. The code included proprietary algorithms and critical technical details.
  • A third employee submitted semiconductor performance test data to ChatGPT, asking for improvement suggestions.

In response to these events, Samsung was compelled to:

  • Prohibit the unsupervised use of external AI tools.
  • Initiate awareness campaigns about the risks linked with such practices.
  • Develop alternative AI solutions to reduce reliance on unsecured general-purpose applications.

Shadow AI

Samsung’s experiment at least had the merit of being official. The real issue lies in the unsanctioned use of AI applications such as ChatGPT, Claude, or Perplexity without the IT department’s consent. This practice is known as Shadow AI, or Ghost AI.

The term derives from Shadow IT, the broader practice of using applications without the IT department’s knowledge.

Shadow AI is not a marginal phenomenon. A 2023 Salesforce study revealed that 18% of French employees were using generative AI at work; in 49% of cases, they did so even though it was explicitly forbidden. More recent figures suggest that nearly 68% of French employees engage in Shadow AI, typically using ChatGPT or Claude to revise internal documents or analyze company performance data. This practice carries real risks.

The motivations

The easy availability of tools like ChatGPT explains the spread of Shadow AI. Employees want to work faster or get help solving complex problems. A marketing manager, for instance, might be tempted to use AI to develop campaigns for confidential products that have yet to launch. Yet many employees are unaware of the security and legal risks of unsupervised AI solutions.

In addition, some employees find the IT department slow to address their needs. They are therefore tempted to turn to tools they see as practical and effective, often simply because they are free.

What are the risks associated with Shadow AI?

Whatever the motivation, Shadow AI exposes organizations to several risks.

Data breaches

When AI use goes unmonitored, employees may inadvertently disclose sensitive information. A recent UK survey found that one in five companies had experienced a data breach following employees’ use of generative AI.

Inconsistencies

Shadow AI can also create inconsistencies in internal processes. When different departments use different tools, results may diverge, complicating decision-making and reducing overall efficiency.

Regulatory non-compliance

Organizations must comply with data-processing regulations such as the GDPR. Uncontrolled use of AI tools can lead to inadvertent violations. Under the GDPR, fines can reach €20 million or 4% of a company’s annual worldwide turnover, whichever is higher: for a company with €1 billion in turnover, that means up to €40 million.

Reputational harm

AI-generated output may fall short of the company’s quality standards, undermining consumer trust. Sports Illustrated, for example, faced a backlash for publishing AI-generated articles.

How to address it?

There are three strategies to confront Shadow AI:

  • Prohibit its use.
  • Support the integration of AI into the work environment.
  • Raise employee awareness.

 

Banning AI outright is not a real solution. “It results in even more concealed AI use,” one manager observes.

Supporting this shift seems more appropriate. The questions to ask: what generative AI needs are employees expressing? Is there a solution the IT department could adopt and supervise?
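To make this concrete, here is a minimal sketch of what such a supervised setup could look like: an internal gateway that redacts obviously sensitive patterns (e-mail addresses, API-key-like strings, long numeric identifiers) from prompts before they leave the company network. The redaction rules and the forward_to_model stub are hypothetical illustrations, not a reference implementation; a real deployment would call the IT department’s approved provider and plug into its logging stack.

import re

# Hypothetical redaction rules: the patterns a security team deems
# sensitive would be defined in the company's usage charter.
REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),                    # e-mail addresses
    (re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"), "[SECRET]"),  # API-key-like strings
    (re.compile(r"\b\d{13,19}\b"), "[NUMBER]"),                             # long numeric identifiers
]

def redact(prompt: str) -> str:
    """Mask sensitive substrings before the prompt leaves the network."""
    for pattern, placeholder in REDACTION_RULES:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

def forward_to_model(prompt: str) -> str:
    """Stub standing in for the IT-approved model endpoint (an assumption)."""
    # A real gateway would call the sanctioned provider here and log the request.
    return f"<model response to: {prompt!r}>"

if __name__ == "__main__":
    raw = "Summarize: contact jane.doe@corp.com, deploy key sk-4f9a8b7c6d5e4f3a2b1c"
    print(forward_to_model(redact(raw)))  # the e-mail and key are masked first

Running the filter server-side, rather than relying on each employee’s discipline, is what turns an awareness charter into an enforceable control.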

Raising employee awareness of the risks of unsupervised AI use is crucial. On that basis, a usage charter that employees are willing to follow can then be established.

In short, the key is to deploy supervised tools, set a clear framework, and above all, keep teams well informed.
