
Maintaining Regulatory Compliance in the Gen AI World
Generative artificial intelligence (gen AI) uses deep learning models and large datasets to recognize patterns within the data and learn from them. Once trained, a gen AI tool can create new content that resembles its training data, such as text, images, designs, music, and computer code. As many as 91% of midmarket organizations use AI in their business practices, according to the RSM Middle Market AI Survey 2025. That’s a marked increase from the 78% reported in 2024.
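To make the pattern-learning idea concrete, the minimal sketch below asks a small pretrained model to generate new text from a prompt. The Hugging Face transformers library and the GPT-2 model are illustrative choices for this example, not tools named in the survey.

```python
# The core idea behind gen AI tools: a pretrained text-generation model
# produces new content that resembles the data it was trained on.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Our generative AI policy should", max_new_tokens=40)
print(result[0]["generated_text"])
```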
With such rapid adoption of gen AI, midmarket organizations are legitimately concerned about balancing the innovation benefits of gen AI with the potential risks of its use. Data privacy, security, and integrity top the list of concerns and are a primary focus for regulatory compliance. Keeping data safe can help organizations build trust with customers, sustain a good company reputation, and avoid costly penalties from compliance violations.
Organizations can take a few substantial steps toward achieving gen AI compliance by knowing the applicable laws, making a written plan, monitoring and conducting risk assessments, and providing training to all employees.
Know the law
Organizations already must follow cybersecurity rules to stay in compliance, and adding gen AI rules to the mix heightens the risk of violations. Staying in compliance means following all applicable state, federal, and even international laws.
Currently, there’s no overarching federal law that regulates gen AI for U.S. companies. In a January 2025 executive order, President Donald Trump revoked “existing AI policies and directives that act as barriers to American AI innovation.” Then, in the One Big Beautiful Bill, the Trump administration attempted to place a 10-year moratorium on state governments’ ability to enact and enforce AI legislation; however, the Senate removed the moratorium provision before passage. Proponents argued that the moratorium would keep states from passing a mishmash of rules that would discourage AI innovation, while opponents argued that AI technology needs rules in place to ensure fairness and protect citizens from possible misuse.
All 50 states introduced legislation on AI in 2025, and as of July, 38 states have adopted or enacted laws covering a wide range of AI topics. A few examples of new and existing AI legislation for various states include:
Arkansas. The statute sets out who owns AI-generated content and makes clear that the content cannot violate intellectual property rights or copyrights.
California. The law requires a disclosure when election-related advertisements use AI-generated content, and a separate law requires AI services with over 1 million users to disclose AI-generated content.
Colorado. The statute requires developers and deployers of high-risk AI systems to use reasonable care to protect against algorithmic bias and disclose the use of AI to customers.
North Dakota. The legislation expands existing law to prohibit individuals from using an AI robot to stalk or harass another person.
Oregon. The law prohibits a nonhuman entity, including an AI robot, from using the title “nurse.”
Tennessee. The Ensuring Likeness, Voice, and Image Security Act, known as the ELVIS Act, prohibits the use of gen AI to mimic the voice of a songwriter, performer, or celebrity without the individual’s permission.
Utah. The legislation requires a mental health chatbot to disclose to the user that it is gen AI technology, not a human being.
The diverse collection of state gen AI regulations poses a challenge for organizations doing business in multiple states. Organizations should become familiar with the enacted and pending AI laws in every state where they operate to make sure they remain in compliance. In addition, organizations that conduct business globally should comply with the European Union AI Act and other regulatory measures drafted or enacted in countries such as China, Japan, Singapore, India, and Brazil.
Make a written plan
For years, midmarket organizations have implemented incident response plans for cybersecurity. Written plans for gen AI are just as important, particularly for regulated industries like healthcare and financial services. Only 43% of organizations are currently developing ethical AI guidelines aligned with regulations, according to the RSM Middle Market AI Survey 2025. The need for such plans will likely grow as legal battles over gen AI increase in number and more companies adopt gen AI tools.
The provisions needed in a written plan depend on the company size, which gen AI tools are implemented, how the gen AI tools are used, and other company-specific considerations. In general, a few crucial topics to consider in a written plan are:
Proper uses for gen AI. Organizations need to define when employees should use gen AI and when it should be prohibited. The plan’s rules should define who can use gen AI, under what conditions, and for what purposes.
Data privacy. Gen AI tools collect and analyze large amounts of data, including sensitive and confidential information, to train algorithms, make predictions, and improve performance. Organizations need to protect the data from possible employee misuse, threat actor exploits, intellectual property infringement, and data leaks that can lead to consumer privacy violations and cause ethical or reputational damage to the company.
Verification of data. AI tools require human oversight, and employees should regularly verify all information generated by AI tools to confirm accuracy, remove bias, ensure privacy, and protect intellectual property rights. The plan should provide a clear understanding of when corrective action is needed.
AI use disclosure. Some state laws now require that AI-generated output be disclosed, and experts agree. In a 2024 MIT Sloan Management Review survey, 84% of participants either agreed or strongly agreed that organizations should make disclosures about AI use in their consumer products and offerings. The plan should spell out any mandatory AI use disclosure language and requirements. A brief code sketch after this list illustrates how the data privacy and disclosure provisions might be enforced.
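As a minimal sketch of how a written plan’s provisions can translate into practice, the Python example below masks common patterns of sensitive data before a prompt leaves the organization and appends a disclosure notice to generated output. The regex patterns, function names, and disclosure wording are hypothetical placeholders, not language drawn from any statute or from RSM’s guidance.

```python
import re

# Hypothetical guardrails for two written-plan provisions: data privacy
# (redact sensitive data before it reaches a gen AI tool) and AI use
# disclosure (append a notice to AI-generated output).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

DISCLOSURE = "\n\n[Notice: This content was generated with AI assistance.]"

def redact(prompt: str) -> str:
    """Mask known sensitive-data patterns before the prompt is submitted."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

def publish(ai_output: str) -> str:
    """Attach the disclosure language the written plan requires."""
    return ai_output + DISCLOSURE

# Example: the SSN and email are masked before submission; the published
# output carries the disclosure notice.
safe_prompt = redact("Draft a letter to John (john@example.com), SSN 123-45-6789.")
print(safe_prompt)
print(publish("Dear John, ..."))
```

A real deployment would cover far more patterns (names, account numbers, health identifiers) and would sit in a gateway in front of every approved gen AI tool, but the shape of the control is the same.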
Monitor and conduct risk assessments
Midmarket organizations must understand the possible compliance risks of using gen AI technology, including privacy and data security breaches, exposure of confidential information and trade secrets, and exploitation by bad actors. The organization’s cybersecurity team or provider should continuously monitor the environment to protect against threats and should conduct a risk assessment annually and whenever a new gen AI tool is added to the network.
A risk assessment enables an organization to assess the cyber landscape, identify potential risks and vulnerabilities, and prioritize the actions needed to stay safe while using gen AI. The written plan should include provisions for ongoing monitoring and scheduled risk assessments to evaluate and protect the network.
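The monitoring provision can be as simple as an audit trail of every gen AI call. The sketch below, a minimal example rather than a prescribed implementation, uses a Python decorator to log each call to a JSONL file that a later risk assessment can review; the log path, tool name, and stubbed model call are assumptions made for illustration. Note that it records the prompt’s length rather than its content, so sensitive data is not copied into the log.

```python
import functools
import json
import time
from pathlib import Path

AUDIT_LOG = Path("genai_audit.jsonl")  # hypothetical audit-trail location

def audited(tool_name: str):
    """Decorator that records every call to a gen AI tool for later review."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(prompt: str, *args, **kwargs):
            # Log metadata only (length, not content) to keep sensitive
            # prompt text out of the audit trail itself.
            entry = {"ts": time.time(), "tool": tool_name,
                     "prompt_chars": len(prompt), "status": "ok"}
            try:
                return fn(prompt, *args, **kwargs)
            except Exception as exc:
                entry["status"] = f"error: {exc}"
                raise
            finally:
                with AUDIT_LOG.open("a") as f:
                    f.write(json.dumps(entry) + "\n")
        return inner
    return wrap

@audited("summarizer")
def summarize(prompt: str) -> str:
    return "stubbed response"  # stand-in for a real gen AI call

summarize("Summarize Q3 incident reports.")
```

Counting calls per tool and flagging unusual spikes from this log is one straightforward input to the annual risk assessment.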
Provide training
A single employee using a gen AI tool can be the epicenter of a regulatory compliance violation. Employee misuse, such as sharing sensitive customer information with gen AI tools, can create data privacy breaches and confidentiality issues. In addition, threat actors’ use of gen AI tools such as ChatGPT or Microsoft Copilot is making phishing emails harder for employees to detect, primarily because of more convincing language and correct grammar.
To reduce the risk of a compliance violation, midmarket organizations should provide user awareness training to all employees. Currently, 45% of organizations are training employees on compliance requirements to reinforce the safe, ethical, and responsible use of gen AI, according to the RSM Middle Market AI Survey 2025. Ongoing training reduces risk as employees learn how to use gen AI tools properly and stay current on the evolving risks of their use. Organizations should also designate a specific contact to whom employees can report possible gen AI-related compliance issues, and should outline the reporting process.
Conclusion
The fast adoption of gen AI has left midmarket organizations concerned about balancing innovation and the potential risks of gen AI. But knowing the laws, making a written plan, monitoring and conducting risk assessments, and providing training to employees are the foundational steps an organization can take to stay compliant. Check out Doug Howard's latest analysis of the NetDiligence Cyber Claims Study for future predictions.
