Why Every RIA Needs an AI Use Policy
June 2025
The use of generative AI in investment advisory firms isn’t just hypothetical anymore. Whether it’s drafting marketing copy, summarizing research, or helping parse legal documents, AI tools are quietly entering RIA workflows. But the regulatory landscape hasn’t caught up yet, and that gap presents risk.
If you’re a registered investment adviser, especially one using AI tools in your operations, it’s time to create a clear internal policy. Doing so helps demonstrate to regulators that your firm is managing new technologies thoughtfully and in line with fiduciary obligations.
At Moeller Law PLLC, we help RIAs navigate compliance requirements around emerging tech. Below is an overview of the key areas your AI policy should cover, followed by a sample policy template you can adapt to your firm’s needs.
Why a Policy Matters
Under the Advisers Act, RIAs are required to adopt and implement written policies and procedures reasonably designed to prevent violations of the Act and its rules. AI tools raise potential issues around supervision, disclosures, data privacy, recordkeeping, marketing practices, and even conflicts of interest.
Even if you’re just using ChatGPT to draft a client newsletter, regulators want to know:
· How are you supervising that use?
· Is the content accurate, compliant, and not misleading?
· Could sensitive client data be exposed?
· Are you keeping the required records?
Key Considerations for RIAs Using AI
A comprehensive policy should address the following:
Supervision and Compliance: Align with Rule 206(4)-7 by integrating AI use into your compliance framework, including the role of the Chief Compliance Officer.
Disclosures and Marketing: Ensure AI-generated marketing content meets Rule 206(4)-1 requirements and that your firm discloses any material use of AI to clients if applicable.
Data Privacy and Confidentiality: Prohibit entering client PII or proprietary firm data into public AI tools. If AI tools are used internally, outline safeguards and vendor due diligence.
Recordkeeping: Retain records of AI-assisted outputs where they relate to investment advice or firm communications, in line with Rule 204-2.
Contingency Planning: Identify potential failure points if AI tools become unavailable or malfunction, and integrate those into business continuity plans.
Third-Party Oversight: Evaluate any third-party AI vendors for compliance and operational risk.
Litigation and IP Risk: Note potential for copyright issues, trade secret leakage, or liability for hallucinated or false outputs.
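One practical safeguard for the data-privacy item above is an automated screen that blocks obviously sensitive strings before a prompt ever reaches a public AI tool. The sketch below is illustrative only, not a complete data-loss-prevention control: the `screen_prompt` helper and the short regex list are assumptions for this example, and a firm would substitute vetted patterns and its own client identifiers.

```python
import re

# Illustrative patterns for obvious U.S. PII. A real control would use a
# vetted DLP library and firm-specific identifiers, not this short list.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Deliberately broad: for a compliance screen, false positives are
    # preferable to letting an account number through.
    "Account number": re.compile(r"\b\d{8,17}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any PII patterns found in `text`.

    An empty list means the prompt passed this (minimal) screen and may be
    routed to an approved AI tool; any hit means the prompt should be
    blocked and escalated under the firm's policy.
    """
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# Example: a prompt containing an SSN-shaped string is flagged.
hits = screen_prompt("Summarize the plan for client 123-45-6789.")
print(hits)  # → ['SSN']
```

A screen like this would sit in front of any approved AI integration, with hits logged for the CCO's review, which also helps satisfy the recordkeeping and supervision expectations discussed above.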
Need Help Implementing a Policy?
Moeller Law serves as outsourced general counsel to investment advisers, helping firms develop and maintain practical compliance frameworks, including those dealing with AI, cybersecurity, marketing, and more. If your firm is exploring or already using AI tools, we can help assess risk, draft a custom policy, and align your internal processes with SEC expectations.
SAMPLE AI USE POLICY FOR RIAS
[Firm Name]
Policy and Procedure on the Use of Generative Artificial Intelligence (AI)
Effective Date: [Insert Date]
Approved By: [Insert Name or Title]
Applies To: All Employees, Contractors, and Associated Persons
Review Cycle: Annually or As Needed
1. Purpose
This Policy sets forth the parameters under which [Firm Name], a registered investment adviser, permits the use of generative Artificial Intelligence (AI) tools. It addresses legal, operational, and compliance-related considerations, including supervision, disclosures, marketing, data protection, third-party oversight, and contingency planning in accordance with the Investment Advisers Act of 1940 and applicable SEC regulations.
2. Scope
This Policy applies to all Personnel using AI tools for firm-related activities, including but not limited to:
· Meeting transcription and summaries
· Drafting internal and external documents
· Research and compliance support
· Trading, portfolio analysis, and risk assessments
· Client communications and marketing
3. Supervision and Compliance Program Obligations
Pursuant to Section 203(e)(6) and Rule 206(4)-7 of the Advisers Act, the firm must maintain a compliance program reasonably designed to prevent violations of federal securities laws. The use of AI does not diminish these obligations.
· The Chief Compliance Officer (CCO) will oversee all AI-related activities and ensure appropriate employee training and supervision.
· The firm will document employee roles related to AI (development, use, oversight) and maintain attestation records.
· All personnel must complete training on AI usage, limitations, and error handling, with periodic updates and refreshers.
4. Permitted Uses of AI
Subject to the controls in this Policy, AI tools may be used for:
· Internal Use (e.g., drafting emails, policies, research summaries)
· Client Support (e.g., note-taking and meeting summaries, template responses)
· Compliance & Monitoring (e.g., flagging risk terms, AML support)
All outputs must undergo human review prior to dissemination or reliance.
5. Prohibited Uses
The following are expressly prohibited:
· Inputting nonpublic personal information (NPI) or material non-public information (MNPI) into unsecured or unapproved AI systems;
· Relying on AI outputs for investment recommendations without human review and approval;
· Using AI tools to create marketing content that bypasses firm review procedures or creates false/misleading impressions;
· Outsourcing advisory functions to third-party AI providers without formal due diligence and contractual oversight.
6. Third-Party AI Service Provider Oversight
Due diligence is required before engaging any AI-enabled service provider and must include review of the provider’s algorithm design, security protocols, and capacity. Ongoing monitoring must assess vendor performance, data handling, and model integrity.
7. Recordkeeping Requirements
Pursuant to Rule 204-2 under the Advisers Act, all business-related communications or records generated or processed via AI systems must be retained for five years, with the first two years maintained onsite.
8. Marketing and AI-Generated Content
Under Section 206 and Rule 206(4)-1, any marketing materials involving AI-generated performance claims or statements must be reviewed to ensure they are not false or misleading. Disclosures must describe how AI is used, its risks, limitations, and conflicts of interest.
9. Data Privacy and Security
Compliance with Regulation S-P and applicable state privacy laws is required. AI use must include safeguards against unauthorized access and data leakage. No client PII may be used in public AI models.
10. Contingency Planning
AI tools used for critical functions must have backup systems. Business continuity plans must include recovery from AI failure, error handling, and data corruption procedures.
11. Litigation and Liability Risk Mitigation
Maintain documentation of AI models, versioning, and oversight. Review insurance policies for AI-related coverage. Implement complaint handling for AI-related issues. Designate personnel for regulatory responses.
12. Disclosure Requirements
Disclosures must explain AI use in plain language, including associated risks and conflicts of interest. Disclosures in Form ADV and marketing materials should be reviewed at least annually.
13. Training, Monitoring, and Periodic Review
Personnel must undergo mandatory AI training. The firm must periodically test AI tools and review this Policy annually.
Appendix A – Approved AI Tools
Tool | Use Case | Approval Date | Notes
[Insert tool name] | [Insert use case] | [Insert date] | [Insert notes]