How to Write an AI Policy for Your Company (With a Free Template)
Key Points
- Sixty-eight percent of small businesses want to adopt AI but lack governance frameworks, creating risk from data leaks, liability, and brand damage that a clear policy prevents.
- A strong AI policy defines approved tools, sets data handling rules, requires human review of customer-facing output, and establishes incident response procedures—all without blocking innovation.
- The complete template provided covers all critical areas and can be customized in under an hour to protect your company while enabling your team to use AI effectively.
Why You Need an AI Policy (And Why NOW)
A recent Goldman Sachs survey found that 68% of SMBs want to adopt AI but have no governance framework. No policy. No guidelines. No rules.
Here's what that means: your team is using ChatGPT, Claude, Gemini, and other AI tools—some officially, some secretly. They're feeding proprietary data into these systems. They're making decisions based on AI outputs without understanding potential biases. And your company has no way to manage the risk.
This isn't theoretical. Here's what can go wrong:
- A sales rep pastes customer names and contract details into ChatGPT to draft emails. Depending on the account's data settings, that information may be retained or used for model training—either way, it has left your control and cannot be recalled.
- Your customer support team uses ChatGPT to draft responses without verifying accuracy. The AI hallucinates facts. A customer acts on bad information. You're liable.
- An accountant uses an AI tool to prepare tax filings without disclosing to the client that AI was used. The AI makes an error. You face liability and reputation damage.
- A junior marketer generates customer testimonials with AI and posts them as real reviews. You face FTC violations and lawsuits.
Each of these scenarios is happening in SMBs right now. The difference between companies that suffer and companies that don't is a clear AI policy.
An AI policy doesn't stop you from using AI. It clarifies how, when, and where to use it safely.
What an AI Policy Actually Does
A strong AI policy:
- Defines approved AI tools for your organization (what's allowed, what's forbidden)
- Sets data handling rules (what information can/cannot go into AI systems)
- Clarifies responsibility (who reviews AI output before it's used internally or customer-facing)
- Prevents legal risk (data leaks, liability, compliance violations)
- Protects your brand (ensures customers and partners aren't misled)
- Aligns your team (everyone knows the rules and expectations)
- Documents accountability (if something goes wrong, you can trace it back to process failures, not just individual error)
It's not about blocking innovation. It's about managing risk while you innovate.
A Complete AI Policy Template (Copy and Use)
Below is a ready-to-use template. You'll notice it's written in plain language, not legal jargon. That's intentional—your team needs to understand and follow this, not just have a lawyer sign it.
AI TOOLS AND USAGE POLICY
Effective Date: [Date]
Last Reviewed: [Date]
Owner: [Name, Title]
1. PURPOSE
This policy establishes guidelines for the use of artificial intelligence and generative AI tools (ChatGPT, Claude, Gemini, Perplexity, etc.) at [Company Name]. The purpose is to:
- Enable productive use of AI tools to improve efficiency and quality
- Prevent unauthorized data leaks and privacy violations
- Ensure accuracy and compliance in customer-facing output
- Protect company reputation and customer trust
- Clarify responsibility and accountability
This policy applies to: All employees, contractors, and consultants using AI tools for work purposes, whether company-issued or personal.
2. APPROVED AI TOOLS
The following tools are APPROVED for use:
| Tool | Use Case | Restrictions |
|------|----------|--------------|
| OpenAI ChatGPT (Paid) | Content drafting, brainstorming, analysis | No proprietary data, no customer PII |
| Anthropic Claude | Research, summarization, technical writing | No proprietary data, no customer PII |
| Google Gemini (Free) | General research, learning | No proprietary data, no customer PII |
| Perplexity AI | Market research, competitive analysis | No proprietary data, no customer PII |
| [Company-licensed tool] | [Specific use] | [Restrictions] |
Tools that are PROHIBITED:
- Any tool not listed above without prior approval from [Department Head]
- Free/untested tools that require account registration with personal email
- Tools that do not have clear privacy policies
If you need a tool approved: Submit a request to [Contact] with the tool name, use case, and justification.
3. DATA HANDLING RULES: WHAT YOU CAN AND CANNOT INPUT INTO AI SYSTEMS
PROHIBITED (Never paste this into any AI tool):
- Customer names, emails, phone numbers, or addresses
- Customer credit card, bank account, or Social Security numbers
- Employee Social Security numbers, home addresses, or salary information
- Proprietary pricing, margin, or cost structures
- Product roadmaps or unreleased feature details
- Client contract terms or deal structures
- Source code or API keys
- Passwords, authentication tokens, or credentials of any kind
- Patient health information, legal case details, or other regulated data
ALLOWED (With caution):
- De-identified data (remove all identifiable information first)
- General industry information and public knowledge
- Anonymized case studies or examples
- Aggregate statistics or benchmarks
- Published, non-confidential company information
When in doubt: Do not input it. Ask your manager or compliance contact first.
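Teams that want a technical backstop for these rules can pre-scrub text before anyone pastes it into an AI tool. Below is a minimal sketch in Python (standard library only); the patterns and the `scrub` helper are illustrative assumptions, not a complete de-identification solution—regex alone will not catch names, addresses, or contract terms, so human judgment still applies.

```python
import re

# Illustrative patterns only -- real de-identification needs far broader
# coverage (names, addresses, account numbers) plus human review.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace known PII patterns with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("Reach Dana at dana@example.com or 555-867-5309."))
# → Reach Dana at [EMAIL REDACTED] or [PHONE REDACTED].
```

A script like this can be wrapped in a clipboard utility or a Slack bot so the scrubbing step happens automatically, but it supplements the policy rather than replacing it.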
4. REVIEW AND APPROVAL WORKFLOW
Before any AI-generated content is published or sent to customers, it must be reviewed by a human with appropriate expertise.
Customer-Facing Content (must be reviewed and approved):
- Email campaigns or outreach
- Sales proposals or quotes
- Customer support responses
- Marketing copy or social media posts
- Website content or landing pages
- Product documentation or help articles
Internal Content (should be reviewed, but lower stakes):
- Internal emails or memos
- Meeting notes or summaries
- Data analysis or reports
- Draft documents or brainstorms
Review Checklist:
- [ ] Output is factually accurate and I can verify key claims
- [ ] Output matches our brand voice and values
- [ ] No hallucinated facts or citations are present
- [ ] No customer data or confidential information was inadvertently included
- [ ] The tone is appropriate for the audience
- [ ] Pricing, product features, and commitments are correct
If you cannot check all of these boxes, do not send. Revise or escalate.
5. CUSTOMER AND PARTNER DISCLOSURE
When AI is used significantly in producing a customer-facing deliverable, you must disclose this in writing.
Examples that require disclosure:
- An AI tool generated 50%+ of a proposal or estimate
- An AI tool analyzed customer data or provided recommendations
- An AI tool created content that the customer relies on for decision-making
Example disclosure language: "This [proposal/analysis/content] was produced with the assistance of AI tools for research and draft generation. All outputs were reviewed and verified by [Name] for accuracy and appropriateness."
Examples that do NOT require disclosure:
- You used an AI tool to research a topic, then wrote original analysis
- You used AI for brainstorming, but your team created the final work
- You used AI to check grammar or formatting
The test: Would a reasonable customer expect to know AI was involved? If yes, disclose. If no, use judgment.
6. EMPLOYEE TRAINING REQUIREMENT
All employees must complete AI policy training within 30 days of hire and annually thereafter.
Training covers:
- What this policy is and why it exists
- Approved tools and how to request new ones
- Data handling rules and examples
- Review workflows and responsibilities
- Consequences of violations
Training frequency: Annual refresher for all staff, additional sessions for departments that use AI heavily.
7. INCIDENT RESPONSE: WHAT TO DO IF SOMETHING GOES WRONG
If you accidentally paste confidential information into an AI tool or notice a violation, report it immediately.
Steps:
- Stop using the tool immediately
- Notify your manager and [Compliance Contact] within 2 hours
- Do not discuss externally or with customers
- Provide details: What data? Which tool? When? How long was it there?
Company response:
- We assess the severity and risk
- We document the incident
- We notify affected parties if required by law
- We investigate root cause and prevent recurrence
- We do NOT assume bad intent—we focus on improving the process
Consequences:
- Accidental mistakes: No punitive action, focus on retraining
- Repeated violations after training: Discussion and retraining
- Deliberate violations or ignoring policy: Disciplinary action up to and including termination
8. ROLES AND RESPONSIBILITIES
| Role | Responsibility |
|------|----------------|
| All Employees | Follow this policy, review AI output before use, report violations |
| Department Managers | Ensure team understands policy, approve tool requests, spot-check compliance |
| [Compliance/IT Lead] | Manage approved tool list, investigate incidents, provide training |
| Executive Leadership | Review policy annually, approve new tools or major changes |
9. LIMITATIONS OF AI
All employees should understand that AI tools can:
- Hallucinate facts or generate plausible-sounding but false information
- Reproduce biases from training data
- Perform worse on specialized or niche topics
- Fail at reasoning tasks or judgment calls
- Become obsolete or change without notice
Always verify critical information from AI tools against trusted sources before relying on it.
10. POLICY REVIEW AND UPDATES
This policy will be reviewed and updated:
- Annually by [Owner]
- When new AI tools emerge that require guidance
- When legal/compliance requirements change
- When we identify new risks or gaps
Employees and contractors can suggest changes by contacting [Email].
End of Template
How to Customize This for Your Business
The template above is generic. Personalize it:
- Add your company name throughout
- Specify your approved tools. What AI tools does your team actually use? List them. If your team has requested others, add them with restrictions.
- Clarify your data sensitivity. If you're in healthcare or finance, tighten the data handling rules. If you're in lower-risk industries, you might be slightly more permissive.
- Name your point person. Who owns compliance? Who do people report violations to?
- Add company-specific tools. If you use specialized AI software (industry-specific platforms, internal tools), add them to the approved list.
- Adjust approval workflows. If your sales process is different, modify the review workflow accordingly.
Implementation Tips
Don't launch silently. Hold a 30-minute meeting explaining the policy. Answer questions. Make it clear this isn't about restricting work—it's about protecting the company while enabling productivity.
Make it findable. Put the policy somewhere accessible (shared drive, wiki, handbook). Link it in onboarding. Reference it in team meetings.
Give a grace period. You've been using AI without a policy. Give the team 2 weeks to understand the new rules before you enforce them strictly.
Lead by example. If you're the owner or manager, follow the policy visibly. Review AI outputs before you send them. Ask questions. Show the team it matters.
Revisit in 90 days. After three months, check in. Is the policy working? Are people confused by any sections? Has a new tool become critical? Update as needed.
Why This Matters
Legal risk: If a customer is harmed by AI output you generated without proper review or disclosure, you're liable. A policy shows you had reasonable controls in place.
Data security: Employees feeding data into ChatGPT without guidance is one of the top ways SMBs leak customer information. A policy stops this.
Brand trust: If customers discover you used AI to generate testimonials or reviews without disclosure, trust collapses. A policy prevents this.
Team alignment: Without clear rules, different people do different things. A policy ensures consistency.
The Checklist: Getting Your AI Policy Live
- [ ] Copy the template above (or download from [Rotate blog])
- [ ] Customize with your company name, tools, and contacts
- [ ] Have [someone] review for any legal considerations specific to your industry
- [ ] Schedule a 30-minute team meeting to explain and discuss
- [ ] Publish in an accessible location (shared drive, handbook, wiki)
- [ ] Confirm all current employees have read and understand it
- [ ] Add policy review to your annual compliance checklist
- [ ] Revisit in 90 days and update based on feedback
Ready to Get Your Policy in Place?
An AI policy isn't optional anymore. It's table stakes. If you're ready to move beyond ad-hoc AI use and build a sustainable, compliant AI practice, contact Rotate. We help companies build governance frameworks and implement AI safely.