
Shadow AI: What It Is and How to Manage It


Technological innovation has a habit of curling the monkey’s paw: every wish it grants seems to arrive with unintended consequences.

Look no further than artificial intelligence for the latest example. Popular chatbots and AI-powered everything have given rise to shadow AI: the unapproved corporate use of generative AI and other AI software.

Shadow AI, like shadow IT, usually takes the form of an innocuous activity such as asking ChatGPT to summarize an email. But using unapproved software to interact with sensitive information can have far-reaching consequences for data security and compliance.

Savvy IT teams are getting ahead of these risks through better visibility and control of shadow AI—here’s how. 

The Four Main Risks of Shadow AI

Understanding where risk resides in shadow AI is the first step in preventing it. While shadow AI presents unique challenges to data security, it is fundamentally no different from the unapproved cloud app usage that every IT admin knows all too well.

Unsanctioned software usage inherently creates a risk of data loss, because the security team hasn’t had a chance to evaluate the vendor’s controls for preventing it. While mature vendors like Microsoft, Salesforce or ServiceNow maintain extensive documentation on how they protect sensitive information on their platforms, thousands more, especially AI startups, do not.

Similarly, if the security team does not have a tight handle on how and where AI is interacting with data, then companies risk non-compliance with local and international regulations. Legislators are increasingly seeking to put guardrails on how AI stores and uses data, and organizations bear the burden of ensuring the data they are responsible for is handled correctly.

Of course, there are also risks unique to AI. For instance, some artificial intelligence models are trained on user input. If employees share financial records or customer information to get help with analysis or responses, that data may inadvertently leak: unless the company can ensure the AI doesn’t train on it (a tough task with shadow AI), the model may surface the information in an answer to another user.

The challenges businesses face aren’t limited to what goes into AI; the output carries risk as well. Written content and software code from GenAI invite copyright and trademark risk, and prompt injection makes applying that code to a flagship product a danger in itself, since it could introduce a backdoor into the software or otherwise present a security threat.

How to Get Visibility and Control of Shadow AI

Properly managing shadow AI requires a multi-layered approach. Organizations must, of course, take the time to develop an internal acceptable use policy that clearly outlines which AI applications can be used and the process for evaluating new software. The real question is how best to operationalize that policy.

One of the biggest risks posed by AI is losing control of sensitive data, whether regulated data that could bring non-compliance fines or intellectual property whose loss could erode competitive advantage. Organizations with a mature data security posture can mitigate the biggest risks with AI and see quicker returns on investment for AI initiatives. This is why it makes sense to start with a tool like Forcepoint Data Security Posture Management (DSPM), which automatically discovers and classifies sensitive data across the enterprise.

This provides the best foundation for a data security program: a holistic understanding of what sensitive data exists, where it lives throughout the environment, and what risks it is exposed to. It also helps mitigate risks from redundant, outdated and trivial data that multiply the attack surface and increase storage costs. And when it comes to securing data in ChatGPT, Forcepoint DSPM can see data being shared with the chatbot in real time and revoke that data from the application.
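Forcepoint doesn’t publish DSPM’s internals, but the discover-and-classify step can be pictured as a scan that maps content to sensitivity labels. Below is a minimal, hypothetical Python sketch; the PATTERNS table, the classify_file helper and the ./shared-drive path are illustrative assumptions, not the product’s API, and a real DSPM relies on far richer classifiers than regex.

```python
import re
from pathlib import Path

# Hypothetical detectors for illustration only; production tools combine
# ML classifiers, data fingerprints and validators, not bare regex.
PATTERNS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_file(path: Path) -> set[str]:
    """Return the set of sensitive-data labels found in one text file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return set()
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

def discover(root: str) -> dict[str, set[str]]:
    """Walk a directory tree and map each file to the labels it matched."""
    findings = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            labels = classify_file(path)
            if labels:
                findings[str(path)] = labels
    return findings

if __name__ == "__main__":
    # Illustrative location; point at any share or repository to test.
    for file, labels in discover("./shared-drive").items():
        print(f"{file}: {sorted(labels)}")
```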

Forcepoint Data Loss Prevention helps organizations create, manage and enforce policies designed to prevent sensitive information such as PII, PHI, intellectual property and other regulated or proprietary data from being shared with AI. These policies deliver unparalleled control, blocking the copying and pasting of data into GenAI tools to stop risky shadow AI activity in real time.
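The policy engine itself is proprietary, but the copy/paste control described above boils down to inspecting content at the moment of the paste event and blocking on a match. A minimal sketch under that assumption, with made-up pattern names and function signatures (none of this is Forcepoint’s API):

```python
import re

# Illustrative detectors only; a real DLP engine layers curated patterns,
# validators (e.g., Luhn checks) and contextual rules on top of this idea.
BLOCKLIST = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def inspect_paste(text: str) -> list[str]:
    """Return the policy labels that the pasted text violates."""
    return [label for label, rx in BLOCKLIST.items() if rx.search(text)]

def allow_paste_into_genai(text: str) -> bool:
    """Gate a paste event into a GenAI prompt field: block on any match."""
    violations = inspect_paste(text)
    if violations:
        print(f"Paste blocked by DLP policy: {violations}")
        return False
    return True

# Harmless text passes; text containing an SSN is blocked.
assert allow_paste_into_genai("Summarize this email for me") is True
assert allow_paste_into_genai("Customer SSN is 123-45-6789") is False
```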

Lastly, Forcepoint Cloud Access Security Broker (CASB) and Forcepoint Secure Web Gateway (SWG) can be used in tandem to control access to AI applications on the cloud and web, extending Forcepoint DLP data security policies with more granular controls for web and SaaS use. They can limit access based on factors such as a user’s position or team, and direct employees to approved software where functionality overlaps, from Gemini to ChatGPT for example.
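CASB and SWG policies are configured in the product rather than in code, but the allow/redirect/block decision they apply per request can be sketched. In this hypothetical Python model, the ACCESS_POLICY table, team names and domains are all invented for illustration:

```python
# Hypothetical policy table mapping teams to sanctioned AI apps, plus an
# approved substitute to redirect to when a request targets unapproved AI.
ACCESS_POLICY = {
    "engineering": {"chatgpt.com"},  # ChatGPT is sanctioned for this team
    "finance": set(),                # no direct GenAI access for this team
}
APPROVED_ALTERNATIVE = "chatgpt.com"  # illustrative sanctioned app

def route_request(team: str, destination: str) -> str:
    """Decide whether to allow, redirect, or block a request to an AI app."""
    allowed = ACCESS_POLICY.get(team, set())
    if destination in allowed:
        return "allow"
    if APPROVED_ALTERNATIVE in allowed:
        # Overlapping functionality: steer the user to the approved app.
        return f"redirect:{APPROVED_ALTERNATIVE}"
    return "block"

print(route_request("engineering", "chatgpt.com"))        # allow
print(route_request("engineering", "gemini.google.com"))  # redirect:chatgpt.com
print(route_request("finance", "chatgpt.com"))            # block
```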

Download the eBook on Shadow AI to learn how you can manage the risks and help your organization use SaaS AI services safely.
