A recent McKinsey study revealed that employees are three times more likely to be using generative AI (GenAI) today than their leaders expect. Yet 48% say they would use AI more if they received more training and support from their companies.
That’s where a company AI policy, describing when, where, and how to use AI, comes in. A documented AI policy aligns everyone around a shared vision for working with AI and builds confidence that AI use can be strategic. Just as important, these guardrails ensure responsible usage, lowering data, security, and compliance risks. The more trust you build across your company when implementing AI, the more you’ll benefit: faster adoption of new tools, increased innovation, better risk management, and more.
As Director of IT at Front, I wrote and rolled out Front’s internal AI usage policy. Here, I’m sharing a practical guide to creating your own, which I’ve broken down into three simple phases: discover, organize, and execute.
But first, a bonus template: Get your employees started in a jiffy with AI basics, example use cases, and practice prompts with this AI Handbook.
Phase 1: Discover
Before you put pen to paper (or pixels to screen), you’ll need to define the scope and purpose of your AI policy. While this isn’t an exhaustive list, here are some questions to kickstart the process:
Vision, goals, culture
How does your company define AI and which applications will you cover?
Example: generative AI (OpenAI’s ChatGPT, Anthropic’s Claude), agentic AI (Salesforce’s Agentforce, NVIDIA’s AI Blueprints), large language models (LLMs), etc.
What is your company’s vision about working with AI?
Example thought starters:
We want to boost team efficiency.
We want to be perceived as cutting-edge, industry-leading innovators, so we need to be fully immersed in the latest technologies to build better products and provide better service.
We want to be known as experts in the field to establish credibility in consulting about the latest tech.
What are the goals of your AI policy?
Example thought starters: Ethical use, risk mitigation, productivity, data and security compliance, etc.
How would you like to foster company culture about using tools like AI?
Example thought starters:
We want our employees to freely experiment with AI to accelerate breakthroughs or optimization in operational efficiency.
We want our employees to use AI ethically and responsibly for the safety and confidence of our customers.
We want our employees to embrace AI and drive more innovation in creating better experiences for our customers.
What does successful implementation of your AI policy look like for your org?
Example thought starters:
We want to see a 20% boost in efficiency by the end of the year.
We want to see a 25% adoption rate by the end of the quarter.
If we’re investing X, we want to ensure we get Y back.
Tools, guidelines, measurement
Which AI tools are currently being used within the company, both officially and unofficially? What are the criteria and process for evaluating tools to be approved? How will you weigh the pros and cons?
Example thought starters:
Which GenAI tool will have the most universal benefit across departments?
What is our budget for licensing AI technology?
How secure and reliable is this AI tool?
What are the potential legal and regulatory implications of using AI in your industry?
Example thought starters:
What regional data protection and AI laws apply in the geographies where we do business (e.g. GDPR)?
What industry-specific regulations affect our use of AI (e.g. HIPAA compliance in healthcare)?
What are the guidelines for acceptable use?
Example thought starters:
What types of data are approved versus prohibited?
How should employees be using AI tools on company devices or networks?
What are the recommended steps for data retention and learning feedback loops for using AI tools?
Will you track AI adoption and what are your key performance indicators (KPIs) that indicate leading or lagging progress?
Example thought starters:
A leading indicator could be hours saved after introducing the AI tool.
A lagging indicator could be how many employees within each department have yet to try AI.
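To make these indicators concrete, here’s a minimal sketch of how you might compute adoption KPIs from survey responses. Every department name and figure below is a hypothetical placeholder, not real data:

```python
# Minimal sketch of computing AI-adoption KPIs from survey data.
# All numbers and department names below are hypothetical examples.

dept_survey = {
    # department: (employees surveyed, employees who tried an AI tool,
    #              estimated hours saved per week across the team)
    "Engineering": (40, 32, 25.0),
    "Marketing":   (15, 9, 8.5),
    "Support":     (25, 10, 6.0),
}

def adoption_rate(tried: int, surveyed: int) -> float:
    """Share of surveyed employees who have tried an AI tool."""
    return tried / surveyed

for dept, (surveyed, tried, hours_saved) in dept_survey.items():
    rate = adoption_rate(tried, surveyed)
    not_yet = surveyed - tried  # lagging indicator: employees yet to try AI
    print(f"{dept}: {rate:.0%} adoption, "
          f"{hours_saved} hrs/week saved (leading), "
          f"{not_yet} employees yet to try AI (lagging)")

# Company-wide adoption versus a quarterly target (e.g. 25%)
total_surveyed = sum(s for s, _, _ in dept_survey.values())
total_tried = sum(t for _, t, _ in dept_survey.values())
overall = total_tried / total_surveyed
print(f"Overall adoption: {overall:.0%} (target: 25%)")
```

Even a simple spreadsheet version of this calculation, refreshed each quarter, gives you a trend line to report back to leadership.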
When I went through this process, I first sent out an anonymized, company-wide survey to learn which tools employees were using, which applications they were experimenting with, and how often they used them. That helped me understand which AI tools were popular and which best practices I needed to share in the company policy.
Phase 2: Organize
Now that you know the what and why behind your AI policy, let’s think about the who. Who are your stakeholders? For example, your legal, security, or HR team may all need to weigh in on the process for reporting violations, investigating misuse, or enforcing corrective action plans. Or if you’re running solo, do you have the budget to hire a temporary consultant? We have an AI team at Front, so I tapped their expertise when creating the AI Handbook. They contributed example prompts that help users understand how to get the desired outputs from GenAI.
What is your plan for policy governance and enforcement? Who is responsible for keeping the policy up-to-date with a rapidly evolving technology and communicating current best practices with the rest of the org? If you don’t have a Chief AI Officer or lack the resources to handle everything yourself, consider gathering volunteers to create a committee.
Keeping employees informed can be formalized through ongoing training programs, whether in the form of live presentations, async updates via email, or pre-recorded courses. The goal is to establish a regular training cadence, whether monthly, quarterly, or yearly.
Phase 3: Execute
After you draft your policy (Spoiler: there’s an AI prompt in the handbook that can help you get started), be sure to get it approved by your stakeholders to ensure compliance before you start thinking about how you’ll roll this out to the wider org.
Here’s a quick checklist of what you need to prepare before disseminating your policy:
✅ Establish a resource hub where employees can find the policy and training
✅ Create an internal communication plan for initial rollout and long-term updates
✅ Designate a communication channel for questions, feedback, requests, and concerns
✅ Determine a reporting cadence for transparent progress and motivating adoption
The AI landscape is constantly evolving, so you may need to iterate on your AI policy more frequently than you think (I had to update my policy multiple times in the days leading up to launch). Decide how often to revisit your policy, which can be event-driven (e.g. a new model release), time-based (e.g. once a quarter), or a mix of both.
If implementing your AI policy feels overwhelming, remember you can always take a phased approach to establish guidelines for your org as soon as possible. Even small steps can have far-reaching benefits, like scheduling a lunch-and-learn about AI basics or creating a Slack channel for sharing AI tips and tricks.
My biggest piece of advice is to just get started and not put it off simply because it’s “not perfect” or “not the right time.” Having a preliminary policy out there is better than none!
Take the first step to support employees with some basic AI training. Download our AI Handbook.
Frequently asked questions
How do I convince the AI skeptics?
Educating your staff goes a long way toward making unfamiliar technology more approachable. Build user confidence by helping them understand how AI works, its limitations, and best practices for getting started. Resources like the AI Handbook make a good introduction, along with a recommended list of articles, videos, podcasts, or experts to learn from.
What if AI adoption is slow? How do I get more users to experiment with AI?
A lot of the hesitation stems from fear of new technology, not knowing where to start, or having limited time. The biggest hurdle is often just learning the basics, which is why the AI Handbook is a great starting point.
Another way to encourage experimentation is by identifying the super users in your org. There are often AI enthusiasts in every department testing AI, so following a leading example could instill more confidence in the rest of the team — especially when it’s a use case specific to their role. For example, an engineering use case will vary greatly from a marketing one. Check out these eight example GenAI use cases.
What are some examples of metrics I should be tracking for successful AI implementation?
Daily active users of AI tools
Training completion rates
Time saved per team
Cost savings or ROI
Employee satisfaction with AI support
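If you need to justify the investment in dollars, a simple ROI estimate can tie the metrics above together. Here’s a minimal sketch, where every figure is a hypothetical placeholder to substitute with your own numbers:

```python
# Minimal sketch of a monthly ROI estimate for an AI rollout.
# Every figure here is a hypothetical placeholder -- substitute your own.

def monthly_roi(hours_saved: float, hourly_rate: float,
                license_cost: float) -> float:
    """ROI = (value of time saved - cost) / cost."""
    value = hours_saved * hourly_rate
    return (value - license_cost) / license_cost

# Example: 120 hours saved per month at a $50/hr loaded labor rate,
# against a $3,000/month license bill.
roi = monthly_roi(hours_saved=120, hourly_rate=50, license_cost=3000)
print(f"Estimated monthly ROI: {roi:.0%}")
```

Time saved is usually the easiest input to collect via the same survey you use to track adoption; the loaded hourly rate typically comes from finance or HR.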
Written by Greg Karp-Neufeld