AI Policy: Rules for Using ChatGPT and Copilot in an SME

TL;DR: A corporate AI-policy guide for SMEs — using ChatGPT, Microsoft Copilot, and Claude responsibly, KVKK alignment, and employee rules.
Summary: In SMEs, employees are already using AI tools (ChatGPT, Copilot, Claude, Gemini) on personal accounts without management approval. Uncontrolled use = data-leak risk: customer information, source code, and financial reports flow to the cloud-AI provider's servers. A corporate AI policy turns that uncontrolled use into discipline: which tools are approved, what data can or cannot be entered, audit logging, training, and incident response. Under KVKK, the "adequate technical measures" framework also expects such a policy. This article offers an AI-policy template that's practical at SME scale.
An SME owner wonders on the drive home: "Did our sales manager paste our customer list into ChatGPT?" In most organisations the honest answer is "probably", or "most likely, but we don't know". Employees are already using AI; they just don't know the limits. In a KVKK audit, answering "no" to "do you have an AI policy?" becomes uncomfortable. An AI policy delivers both operational order and legal alignment.
In this article we cover the components of a practical corporate AI policy at SME scale, the rules for employees, and the rollout flow. Target audience: IT managers, HR leads, and decision-makers who want to move from "we're open to AI but rule-less" to a disciplined practice.
The Three Goals of an AI Policy
1. Protect Data Privacy
- Keep customer data out of cloud AI
- Prevent sensitive business information from leaking
- Stay aligned with KVKK
2. Quality and Accuracy
- Verification of AI output
- Stay alert to "hallucination" risk
- Professional accountability
3. Copyright and Ethics
- Usage rights for AI-generated content
- Bias and unfair decisions
- Transparency with customers and partners
Components of a Corporate AI Policy
A mature policy covers these eight components:
1. List of Approved Tools
The AI tools employees may use are spelled out:
| Tool | Status | Usage limit |
|---|---|---|
| ChatGPT (personal account) | ❌ Prohibited | Not for work use |
| ChatGPT Enterprise (corporate) | ✓ Approved | Data not used for training |
| Microsoft 365 Copilot | ✓ Approved | Inside M365 data |
| Claude (Anthropic, enterprise) | ✓ Approved | Under contract |
| Gemini (corporate) | ✓ Approved | Inside Workspace |
| GitHub Copilot | ✓ Approved | For code |
| Perplexity Pro | ⚠️ Conditional | No sensitive data |
| Unknown tool | ❌ Prohibited | Get IT approval first |
2. Data Classification
Which data may go into which tool?
| Data class | Example | AI use |
|---|---|---|
| Public | Website content, blog | Open use |
| Internal | Internal procedures, training | Corporate AI only |
| Confidential | Customer data, contracts | Local AI or contracted enterprise AI only |
| Highly confidential | Financial reports, M&A | No AI at all |
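The classification above can be encoded as a simple lookup that tooling (for example, a pre-prompt gateway) consults before a request leaves the network. A minimal sketch; the class names and tool lists mirror the table, while the function and identifier names are hypothetical:

```python
# Hypothetical mapping of data classes to permitted AI destinations,
# mirroring the classification table above.
ALLOWED_DESTINATIONS = {
    "public": {"copilot", "chatgpt-enterprise", "github-copilot", "local-llm"},
    "internal": {"copilot", "chatgpt-enterprise", "local-llm"},
    "confidential": {"local-llm"},   # or a contracted enterprise AI
    "highly-confidential": set(),    # no AI at all
}

def is_allowed(data_class: str, tool: str) -> bool:
    """Return True if this data class may be sent to this tool."""
    return tool in ALLOWED_DESTINATIONS.get(data_class, set())

print(is_allowed("internal", "copilot"))                 # True
print(is_allowed("confidential", "chatgpt-enterprise"))  # False
```

Unknown data classes default to "not allowed", which matches the policy's "get IT approval first" stance.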
3. Prohibited Behaviour
Hard "don'ts":
- Pasting customer national ID, phone, or address into AI
- Uploading source code to an unknown AI
- Pushing contract text into a personal ChatGPT account
- Using cloud AI for financial-report analysis
- Entering employee personnel data into AI
- Medical / health data (special-category)
- Sharing passwords or API keys
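Several of these "don'ts" can be caught automatically before a prompt ever leaves the machine. A minimal sketch of such a pre-prompt filter; the patterns (an 11-digit Turkish national ID, common API-key prefixes, e-mail addresses) are illustrative heuristics, not a complete DLP solution:

```python
import re

# Illustrative patterns only -- a real DLP filter needs broader coverage.
SENSITIVE_PATTERNS = {
    "national_id": re.compile(r"\b[1-9]\d{10}\b"),  # 11-digit TC Kimlik No
    "api_key":     re.compile(r"\b(sk|ghp|AKIA)[A-Za-z0-9_-]{10,}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of the pattern categories found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

print(flag_sensitive("Summarise Q3 results"))                     # []
print(flag_sensitive("Customer 12345678901 asked for a refund"))  # ['national_id']
```

A non-empty result would block the request, or at least warn the employee before sending.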
4. Approved Use Cases
The "dos" that show employees what's fine:
- Drafting emails (without personal details)
- General research, brainstorming
- Code help (general patterns)
- Meeting summaries (anonymised)
- Translation (of non-personal data)
- Marketing copy
- Training materials
5. Verification Obligation
AI output must be verified:
- AI can be wrong (hallucination)
- Legal / financial contexts need professional review
- Review before anything goes to the customer
- "AI generated it" does not transfer responsibility
6. Transparency
- Customers should know when they're talking to an AI (chatbot)
- An AI-assisted report can carry an "AI-assisted" note where appropriate
- Cite AI assistance in internal documents
7. Record-Keeping
- Audit logs for approved AI tools
- Which employee uses which kind of AI
- Annual usage report
- Available for a KVKK audit
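A lightweight way to meet the record-keeping points is an append-only JSON-lines log, one entry per AI request. A sketch with hypothetical field names; a real deployment would pull these values from the tool's own audit API or a proxy:

```python
import json
from datetime import datetime, timezone

def make_audit_record(employee: str, tool: str, data_class: str) -> dict:
    """Build one audit-log entry for a single AI request (field names are illustrative)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "employee": employee,
        "tool": tool,
        "data_class": data_class,
    }

def append_audit_log(path: str, record: dict) -> None:
    """Append the record as one JSON line; JSONL keeps the log append-only and grep-friendly."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

rec = make_audit_record("a.yilmaz", "copilot", "internal")
print(rec["tool"])  # copilot
```

Aggregating such a log by employee and tool gives you the annual usage report almost for free.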
8. Training and Awareness
- Annual AI usage training
- Covered in new-employee onboarding
- Policy refresh on schedule
- Recognising "phishing-style" AI social engineering
ChatGPT Enterprise vs. Personal
The SME decision: have employees use a corporate ChatGPT, not personal.
Personal ChatGPT
- Your data can be used by OpenAI for training
- No contract in place
- No audit log
- When an employee leaves, the chat history is outside your control
ChatGPT Enterprise
- Your data is not used for training (contractually guaranteed)
- SOC 2 Type II aligned
- SAML SSO, audit log
- Centralised user management
- Contract details documented for KVKK
If ChatGPT is used in an SME, Enterprise is mandatory — personal accounts are prohibited.
Microsoft 365 Copilot — for SME Environments
For SMEs on M365 Business Premium, Copilot is the natural choice.
Why Copilot Works
- It runs inside the M365 data (Word, Excel, Teams, Outlook)
- Data isn't used for training
- Microsoft contractual coverage
- European data residency for KVKK alignment
- AD / Azure AD integration
Copilot Licensing
- Microsoft 365 Copilot: ~USD 30 per user / month (on top of the M365 licence)
- Pricey at SME scale; assess which roles actually need a licence before buying for everyone
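A quick back-of-the-envelope makes the assessment concrete. The list price is the figure above; the user count is illustrative:

```python
def annual_copilot_cost(users: int, usd_per_user_month: float = 30.0) -> float:
    """Annual Copilot add-on cost in USD (list price; excludes the base M365 licence)."""
    return users * usd_per_user_month * 12

print(annual_copilot_cost(20))  # 7200.0
```

At 20 users that is USD 7,200 per year on top of the M365 licences, which is why many SMEs start with a pilot group rather than everyone.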
Common SME Scenarios
- Drafting email in Outlook (from company data)
- Excel data analysis (over company files)
- Word document summarisation
- Teams meeting summary
- PowerPoint slide generation
GitHub Copilot — Developer Teams
If your SME has a development team, Copilot is practical.
Upsides
- Code assistance and completion
- Test generation
- Documentation drafting
- Bug-fix suggestions
Risks
- Code context leaves the machine; suggestions may also reproduce public code Copilot saw in training
- Licence compliance (is the generated code truly original?)
- Secure-coding practices (it can produce known vulnerabilities)
In the SME policy: GitHub Copilot approved, with extra care around critical security code (auth, crypto).
A Hybrid Approach with Local LLMs
A pragmatic SME AI policy:
- Public + internal data: cloud AI (Copilot, ChatGPT Enterprise)
- Confidential data: local LLM (Ollama, LM Studio)
- Highly confidential data: no AI, manual
We've covered the local-LLM side in detail in a previous article.
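Routing confidential prompts to a local model is just an HTTP call to the machine itself. A sketch against Ollama's local REST API (default port 11434), assuming a model such as `llama3` has already been pulled; the function names are ours:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Request body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a confidential prompt to the local model -- nothing leaves the machine."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance):
# print(ask_local_llm("Summarise this contract clause: ..."))
```

Because the endpoint is localhost, the confidential tier of the classification stays inside your own hardware.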
Policy Template — for SMEs
A draft you can actually deploy:
CORPORATE AI USAGE POLICY
1. Purpose
This policy is designed to help employees use AI tools
productively while protecting company assets.
2. Scope
All employees, part-time staff, contractors.
3. Approved AI Tools
- Microsoft 365 Copilot (official licence)
- ChatGPT Enterprise (corporate account)
- GitHub Copilot (developers)
- Local LLM (Ollama)
4. Prohibited Tools / Behaviour
- Using a personal ChatGPT/Claude/Gemini account for work
- Pasting customer data into unknown AI services
- Uploading financial reports or contracts to cloud AI
- Writing passwords / API keys into AI
5. Data Classification
- Highly confidential data does not enter AI
- Confidential data only via local LLM
- Internal data via corporate Copilot
- Public data across all approved tools
6. Verification Obligation
AI output must not be forwarded to customers or
management without verification.
7. Training
1-hour mandatory AI usage training per year.
8. Violations
Policy violations trigger disciplinary review.
HR + IT assess jointly.
9. Refresh
Reviewed every 6 months.
Approved by: [CEO]
Date: [Date]
Version: 1.0
Relationship to KVKK
An AI policy is part of KVKK alignment.
KVKK Touch Points
- Information notice: AI usage disclosed to employees (VERBİS, employment contract)
- Explicit consent: for special-category data (health, biometric)
- Data minimisation: only the minimum necessary data goes into AI
- Retention: AI logs kept for the statutory + reasonable period
- Cross-border transfer: explicit consent if the cloud-AI provider is overseas
- Rights: rights around automated decision-making (KVKK Art. 11)
VERBİS Registration
If a corporate AI is in use, it should be listed under "third-party data processors" in your VERBİS record.
Employee Training Content
Typical contents of a 1-hour annual AI-usage training:
Module 1: Intro to AI (15 min)
- What AI is and how it works
- LLM hallucination
- The difference between cloud and local AI
Module 2: Corporate Policy (15 min)
- Approved tools
- Prohibited behaviour
- Data classification
Module 3: Practical Examples (20 min)
- "Is this prompt safe?" scenarios
- Good and bad usage examples
- Verification discipline
Module 4: Q&A + Quiz (10 min)
- Open discussion
- Short quiz (certificate)
Violation Scenarios and Response
Typical Violations
| Violation | Action |
|---|---|
| Customer national ID entered into personal ChatGPT | Reminder + discipline on repeat |
| Contract text uploaded to AI | Legal counsel + KVKK breach assessment |
| Employee forwarded unverified AI output to a customer and it was wrong | Remediation + training |
| Fake customer review generated via AI | Ethics violation, strict discipline |
Incident-Response Flow
- Violation detected (automated or reported)
- IT + HR joint assessment
- If there's a KVKK risk, fast notification (72 hours)
- Incident report, lessons learned
- Policy refresh (if needed)
What Yamanlar Bilişim Offers
Our AI-policy support areas at SME scale:
- Audit of current AI usage (who's using what)
- Drafting an AI policy
- KVKK alignment integration
- ChatGPT Enterprise / Copilot rollout
- Local-LLM hybrid approach
- Employee training sessions
- Annual policy refresh
- VERBİS record update
Conclusion
A corporate AI policy carries an SME from "open to AI but rule-less" into a measured, controlled, KVKK-aligned practice. An approved-tools list, data classification, prohibited behaviour, a verification obligation, and annual training — these five components form a discipline you can actually apply at SME scale. A policy gets the balance right: employees keep getting value from AI without exposing the company's assets.
Yamanlar Bilişim provides AI-policy drafting, ChatGPT Enterprise / Copilot rollout, and employee-training services sized to your needs — turning your "open but controlled" AI stance into a concrete discipline.
Frequently Asked Questions
Does an SME really need an AI policy?
Your employees are already using AI tools — with or without a policy. Without one: uncontrolled data leakage, KVKK risk, no quality control. With one: that same usage becomes disciplined, logged, bounded. In a KVKK audit, "do you have an AI policy?" will become a standard question.
ChatGPT Enterprise or Copilot?
It depends on what you need:
- If the Microsoft 365 ecosystem is dense: Copilot (inside M365 data, file-based)
- General chat / research needs: ChatGPT Enterprise
- Mixed: you can run both
At SME scale the single-tool choice is usually cost-driven; Copilot simply gets added if M365 is already in place.
Won't a strict prohibited list make employees complain?
Some initial resistance is normal, but frame the policy as "safe usage", not prohibition. The approved tools (Copilot, ChatGPT Enterprise) are available; personal accounts aren't. Employees gain access to corporate tooling, and the company stays safe. Positive communication matters.
How do I enforce the personal-ChatGPT ban?
A complete technical ban is difficult, but several controls help: web-filter blocks for the personal-account domains, awareness training, audits (e.g. M365 access logs), and installing Copilot on work machines while blocking ChatGPT. If an employee uses it on a personal device, contract terms commit them to keeping corporate data out.
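The audit side can start as a simple scan of proxy or firewall logs for the consumer-AI domains. A sketch with an illustrative log format (one "user domain" pair per line); adapt the parsing and the domain list to your real proxy:

```python
# Domains of consumer AI services to watch for; the list is illustrative.
WATCHED_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def flag_personal_ai_use(log_lines: list[str]) -> set[str]:
    """Return users who accessed a watched domain.

    Assumes a simple 'user domain' line format purely for illustration;
    real proxy logs need their own parser.
    """
    flagged = set()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in WATCHED_DOMAINS:
            flagged.add(parts[0])
    return flagged

logs = ["a.yilmaz chatgpt.com", "b.kaya outlook.office.com"]
print(flag_personal_ai_use(logs))  # {'a.yilmaz'}
```

Flagged users get the awareness reminder first; discipline only enters on repeat violations, matching the table above.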
Who holds the copyright on AI output?
A complex, still-evolving area. In the US there is a ruling that purely AI-generated work cannot be copyrighted; in the EU the question is contested. SME practice: treat AI as a tool and have a human editor sign off on the end product. Disclose AI assistance transparently in content you publish (especially professional or academic work). Customer contracts may require an "AI-assisted" disclosure.
If I use a local LLM, do I still need a policy?
A local LLM cuts risk substantially, but it does not replace a policy: employees can still paste data into a personal ChatGPT, prompt-security concerns (social engineering) remain, and AI output still needs verification. A policy is needed in any case — the local LLM is a technical control; the policy is the human control.
Author
Serdar
Yamanlar Bilişim Expert
Writes content on IT infrastructure, cybersecurity, and digital transformation at Yamanlar Bilişim. Get in touch for any questions.
Professional Support
Get help on this topic
Let's design the Enterprise AI and Data Intelligence solution you need together. Our experts get back to you within 1 business day.
support@yamanlarbilisim.com.tr · Response time: 1 business day
Keep Reading
Related Articles

Embeddings and Vector DBs: Refreshing SME Document Search
Embeddings and vector databases — moving SME document search to semantic retrieval, RAG architecture, and an implementation guide.

Excel Automation: Killing Manual Work with Power Automate
Automating Excel workflows with Microsoft Power Automate — practical SME scenarios, connectors, and productivity gains.

Local LLM Deployment: Data-Private AI in an SME
Self-hosted LLM deployment at SME scale — running Ollama, LM Studio, and vLLM for data-privacy-first AI.