Responsible AI from day one: Governance lessons for the sector
AI promises efficiency and scale for non-profits, but without responsible governance, it can quickly undermine trust, compliance, and member safety.
You’ll see practical governance in action: common pitfalls, the safeguards that work, and how to implement policies and human checks that keep AI safe while improving daily operations. You’ll leave with clear, actionable guidance and real examples for implementing AI responsibly in your organisation.
Key Questions We'll Explore:
- What ethical and governance considerations matter most for non-profits using AI?
- What additional governance and policy frameworks are needed to protect data and reduce risk?
- How do we embed human checks in AI workflows and manage risk or bias?
- What emerging risks should organisations be aware of, such as “agentic” AI use by staff?
Key takeaways for attendees
1. AI is already inside most organisations
Even if leadership believes their organisation isn’t using AI, staff are already experimenting with tools like chatbots in their day-to-day work. This “shadow AI” creates informal, unregulated use across teams, meaning the real question for organisations is no longer “Should we adopt AI?” but “How do we govern the AI that’s already being used?”
2. Governance needs to start before AI scales
Many organisations are experimenting with AI before putting clear policies or oversight in place. Without those foundations, projects can create compliance risks, waste time and investment, or stall when issues arise. Starting with clear policies, defined accountability, staff training, and basic risk management helps organisations scale AI safely.
3. Responsible AI doesn’t slow innovation; it enables it
Governance isn’t about stopping teams from experimenting with AI. It’s about putting simple guardrails in place so people can explore new tools with confidence. The organisations seeing the most success start with low-risk use cases, prove value quickly, and scale from there.
4. Organisations are responsible for AI outputs
If an AI system provides incorrect information or advice, responsibility still sits with the organisation using it, not the technology provider. In practice, this means AI should be treated like any other tool representing your organisation, with clear oversight, review processes, and accountability.
5. Governance should be practical, not theoretical
Responsible AI isn’t about waiting for the perfect policy document. It’s about putting practical guardrails in place now. This can include simple steps like:
- an AI charter or policy
- staff training
- risk assessments for new use cases
- clear incident reporting processes
These foundations help organisations experiment with AI while protecting people, data, and trust.
We welcome your feedback on the event, so please take 30 seconds to complete the following survey: