
Manage Risk With a Comprehensive Gen AI Policy

Pondurance
September 25, 2025

Midmarket organizations are adopting generative artificial intelligence (gen AI) at a fast pace, and with good reason. Gen AI offers the opportunity for innovation, with significant benefits such as greater efficiency, enhanced customer service, streamlined processes, and increased productivity. But with the good comes the bad: gen AI also carries the potential for serious risks, such as inaccuracies, employee misuse, data privacy violations, cybersecurity breaches, and other consequential issues.


To manage the risks, every organization should have a written gen AI policy in place. But only 44% of organizations have completed gen AI policies for their employees, according to Littler’s 2024 AI C-Suite Survey Report. That percentage, however, will likely increase since an additional 44% of respondents either have a gen AI policy in progress or are considering one.


A well-written gen AI policy can act as a road map to guide your entire organization through the legal, ethical, reputational, and security challenges of using gen AI. Every organization will have a different set of circumstances to address within the gen AI policy, but here are a few important topics for your organization to consider.


Governance

Gen AI tools need capable humans to monitor them. The governance section explains who — whether departments, teams, or individuals — will manage the gen AI policy and processes. You’ll need to determine who will monitor usage, oversee regulatory compliance, review data outputs for accuracy, and communicate with employees about gen AI. It’s also important to designate whom employees should contact with questions, requests for permission, and reports of possible compliance violations. The policy should set a fixed review cadence, whether monthly, quarterly, or annually, and explain how policy updates are made.


Over time, gen AI tools will continue to evolve, and as they do, employees will want to use the latest tools on the market. The policy will need to provide employees with a detailed, step-by-step process — the who, what, where, when, and how — for getting approval of new gen AI tools. 


Usage

The usage policy must be clear and concise so that employees understand precisely what they can do with gen AI tools — and, perhaps more importantly, what they can’t. The policy should list each tool that is acceptable to use at your organization and specify its intended purpose and permitted uses. It should identify who can use each tool, by individual name, title, or department, and consider whether the gen AI policy extends to vendors, contract workers, consultants, or other third parties. Detail any prohibited, restricted, or limited uses that apply.
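One practical way to make the approved-tool list enforceable is to keep a machine-readable copy alongside the written policy. The sketch below is a hypothetical Python example; the tool names, departments, and helper function are illustrative placeholders, not part of any specific policy.

```python
# Hypothetical machine-readable approved-tool list that a gen AI policy
# appendix might include. All tool names, departments, and restrictions
# below are illustrative only.

APPROVED_GENAI_TOOLS = {
    "example-chat-assistant": {
        "intended_use": "Drafting internal documents and marketing copy",
        "allowed_departments": {"Marketing", "Sales"},
        "prohibited_inputs": {"PHI", "PII", "trade secrets"},
        "third_party_use_allowed": False,
    },
    "example-code-assistant": {
        "intended_use": "Code suggestions in approved repositories",
        "allowed_departments": {"Engineering"},
        "prohibited_inputs": {"customer data", "credentials"},
        "third_party_use_allowed": False,
    },
}


def is_use_permitted(tool: str, department: str) -> bool:
    """Return True if the named tool is approved for the given department."""
    entry = APPROVED_GENAI_TOOLS.get(tool)
    return entry is not None and department in entry["allowed_departments"]


if __name__ == "__main__":
    print(is_use_permitted("example-chat-assistant", "Marketing"))    # True
    print(is_use_permitted("example-chat-assistant", "Engineering"))  # False
```

A governance team could reference a list like this when reviewing new tool requests or wiring approvals into IT controls, so the written policy and the enforced reality stay in sync.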


Data privacy, security, and integrity are important considerations when using gen AI tools. Employees and other users need to understand the risks involved when gen AI tools have access to protected health information, personally identifiable information, confidential information and trade secrets, and any other sensitive data. Be sure to address the organizational procedures for keeping data private, secure, and reliable.


Employees also need to understand their transparency obligations when using gen AI. A few states now have laws, such as the California AI Transparency Act, that require disclosure when gen AI is used to create content. Your organization’s gen AI policy should explain in detail how employees should disclose the use of gen AI, which helps build trust with customers and co-workers.


Data output

The data output by gen AI can be incorrect, biased, or out of date, or it can contain sensitive information, and employees need to be aware of these possible issues. The gen AI policy needs to clearly reinforce the concept of verification: a human should always verify information generated by gen AI before using it.


Gen AI relies on large amounts of data, which can include sensitive data such as names, addresses, Social Security numbers, medical records, birth dates, and phone numbers. If an employee inputs sensitive data into a gen AI tool, the tool can later output that data, ultimately causing compliance violations. Therefore, the gen AI policy should set strict guidelines for the collection, storage, sharing, and processing of such data. Be sure to explain who in the organization is responsible for the data at each stage and how specific employees are allowed to interact with it.
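To show the kind of guardrail those guidelines can translate into, here is a minimal, hypothetical pre-screen that flags obvious sensitive patterns in a prompt before it reaches a gen AI tool. The regular expressions and function names are illustrative only; a real control should follow your organization’s data classification rules and rely on vetted data loss prevention tooling.

```python
import re

# Hypothetical pre-screen that flags obvious sensitive patterns in a prompt
# before it is sent to a gen AI tool. These patterns are illustrative and far
# from exhaustive; real controls should use your data classification rules.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}


def flag_sensitive_data(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


if __name__ == "__main__":
    prompt = "Summarize the claim for John Doe, SSN 123-45-6789, DOB 01/02/1990."
    findings = flag_sensitive_data(prompt)
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")
    else:
        print("Prompt passed the pre-screen")
```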


Regulatory compliance

Staying in compliance means following all state, federal, and international laws that may apply. Midmarket organizations, and particularly those in regulated industries, need a robust gen AI policy to help employees understand how their actions can violate the law and lead to fines, penalties, and lawsuits. Employees need to know who to contact to report potential gen AI compliance violations.


Training and education

Organizations should provide gen AI user awareness training to all employees. The gen AI policy should provide a training schedule, offer refresher courses for individuals who have experienced problems, and make sure employees know when the training is mandatory. 


Ongoing training can reduce risk as employees learn how to properly use gen AI tools and stay up to date on the evolving risks of the technology. The training should teach employees how to handle sensitive data (or not handle it at all), cover ethics issues such as transparency and bias prevention, and focus on the risks, such as inaccuracies, data privacy violations, and security vulnerabilities.


In the gen AI policy, your organization will also want to schedule training to upskill employees and prepare them for job changes as gen AI takes on mundane tasks and their work becomes more strategic. During upskill training, it’s important for employees to understand that gen AI is meant to help humans do their jobs, not replace them.


Risk management

Midmarket organizations must understand the potential risks of using gen AI tools, including data privacy violations, security breaches, and exploitation by threat actors. Your gen AI policy should incorporate ongoing monitoring and scheduled risk assessments. A risk assessment enables an organization to assess the cyber landscape, identify potential risks and vulnerabilities, and prioritize the actions needed to stay safe while using gen AI.


Pondurance recommends that organizations conduct an annual risk assessment and perform an additional risk assessment when the organization adds a new gen AI tool to the network. 


Conclusion

Every organization should have a well-written, comprehensive gen AI policy in place to guide employees on how to interact with gen AI tools and minimize risk. Consider the topics that are relevant to your midmarket organization and build a gen AI policy to help employees succeed at the initial stages of gen AI innovation and into the future. Check out our comprehensive AI playbook here.
