Department or tool-specific rules (using ChatGPT in comms, data handling in HR)
Practice
Experiments, pilots, sandboxing, and reflection—where change really starts
The Community Garden Metaphor
The master plan = Capital "P" Policy
The planting calendar = lowercase "p" policies
Weekly weeding & watering = practice
AI policy isn't like an HR policy that can be "set and forget." It requires ongoing learning and iteration to stay relevant over time.
Common AI Policy Pitfalls
Avoid these frequent missteps when developing your organization's AI policies.
Over-Restrictive Policies
Rules that are too rigid can stifle innovation and prevent teams from exploring AI's full potential.
Copy-Paste Approaches
Generic policies from other organizations rarely fit your unique culture and specific operational needs.
Technical Jargon
Policies filled with complex terms alienate non-technical staff, leading to confusion and non-compliance.
Lack of Implementation Support
A policy without clear guidelines, training, and resources for its practical application is largely ineffective.
Balancing Structure and Flexibility
Structure without flexibility becomes obsolete.
Flexibility without structure becomes chaos.
Why policies need to be adaptable:
The tech is changing rapidly
Your teams are learning in real time
You need guardrails, not rigidity
Lightweight governance examples:
Sunset clauses – "we'll review this tool's use in 90 days"
Pilot-first approaches – "let's test this with one team before org-wide adoption"
Reflect-and-review sessions – build in scheduled time to capture lessons learned after each pilot
Case Example: A nonprofit comms team using AI for content drafts developed interim guidance based on early lessons, which later informed a broader policy.
Organizing for Collaboration & Feedback
AI policy can't live in Legal or IT alone.
The need for cross-functional working groups:
Diverse Representation
Include IT, Legal, Programs, Communications, Learning & Evaluation—not just "AI champions"
Hold Space for Tension
People who can wrestle with both the promise and the messiness of AI
Evolve the Policy
Review experiments, hear feedback, and serve as translators and stewards
Reflection question: Who in your org is positioned to wrestle with both the promise and the messiness of AI?
Building Psychological Safety for Practice
Foster an environment where experimentation and learning thrive without fear of reprisal.
1. Create safe spaces for AI experimentation
Encourage teams to explore AI tools in low-stakes environments, providing clear guidelines and support for testing new approaches.
2. Normalize failure as learning
Reframe 'mistakes' as valuable insights for improvement and adaptation, essential for evolving AI policies and practices.
3. Protect early adopters from criticism
Shield those who bravely experiment with new AI applications from undue blame, promoting a culture of innovation and shared learning.
Prompt — Personal vs. Organizational Policy
Not everyone here may be in a position to write policy—but we can all influence practice.
Prompt for Reflection
1. What is one thing you are doing with AI (or considering)?
2. What would it take to feel more confident that it's aligned with your values?
3. Who would need to know or be involved?
Think about your personal AI practice and how it interacts with your organization's culture, or with the absence of a policy.
Evaluating Our Living Practice
Measuring the effectiveness and evolution of your AI policy framework.
Capital "P" Policy Effectiveness
What to measure:
Alignment with organizational values (annual pulse survey)
Board/leadership confidence in AI governance
Major incident prevention (privacy breaches, ethical concerns)
Staff awareness of core principles
How often: Annually or every two years
Lowercase "p" Policies Effectiveness
What to measure:
Tool adoption rates by department
Time from pilot to department-wide implementation
Number of iterations/updates made
Cross-department policy sharing
How often: Quarterly
Practice Effectiveness
What to measure:
Volume of experiments initiated
"Failure stories" shared and learned from
Time saved/quality improved metrics
Peer-to-peer teaching moments
How often: Monthly or ongoing
Practical Implementation Ideas
Transforming your AI policy into an active, evolving framework.
For Practice
AI Experiment Log: Simple shared doc tracking what's being tried
Coffee & AI Chats: Monthly informal sharing sessions
Quick Wins Wall: Visible celebration of small improvements
Garden: Daily observations (Are the seeds sprouting?)
For lowercase "p" policies
Policy Sprint Reviews: Quarterly gatherings by function/tool
Cross-Pollination Sessions: Departments share what's working
Iteration Tracking: Version control showing policy evolution
Ethical Near-Miss Reporting: Learning from close calls
Garden: Annual planning (What should we plant next year?)
Resource Requirements: Making It Sustainable
Time Investment Reality Check
Monthly Commitment by Phase:
Starting Out: 2-4 hours/month for working group
Building Momentum: 8-10 hours/month including pilots
Full Integration: Built into regular workflow
Human Capital Needs
Internal Champions:
1-2 experimenters per department
Technical "translator" (not necessarily IT)
Process documenter
Culture carrier
External Support (as needed):
Legal review
Quarterly ethics check-ins
Peer learning networks
Remember: Time spent governing AI is recouped through efficiency gains, more effective programs and day-to-day work, and new opportunities for innovation.
Resource Multipliers:
Partner with similar nonprofits
Join learning cohorts
Document everything to avoid duplicate effort
Trade expertise
Start where you are. Waiting for perfect resourcing prevents starting; begin imperfectly and iterate.
Reflection Question: What resources and existing practices do you already have that you could repurpose for AI governance?
Key Takeaways
AI policy is a living practice, not a document
Governance happens at three levels: Capital "P" Policy, lowercase "p" policies, and practice
The most useful AI policy evolves with experimentation, reflection, and collaboration
AI policy isn't just about what's allowed. It's about what kind of organization you want to be and how you want AI to stem from your mission, vision, and values.