Artificial Intelligence
Keep judgement in the loop
AI can make good processes faster. It can also make bad processes faster, which is less charming.
If your reporting process is already confusing, AI may simply produce confusing reports more efficiently. If your stakeholder communication is vague, AI may generate beautifully vague emails. If your strategy is weak, AI can help you produce a longer version of the weakness.
Before using AI, ask what problem you are actually solving. Is the task repetitive, low-risk and time-consuming? Good candidate. Does it require empathy, judgement, sensitive context, professional expertise or legal responsibility? Use AI carefully, if at all.
The final responsibility stays with the person or organisation using the tool. Not the software. Not the model. Not “the AI said so.” Someone needs to review AI-supported work before it goes to a funder, client, board, volunteer, customer or public audience.
Small organisations do not need huge AI governance departments. They do need a few clear rules: what AI can be used for, what data must never be uploaded, who checks the output, when AI use should be disclosed, and who is accountable if something goes wrong.
AI should support human work, not hide the fact that nobody has made a proper decision.
Protect data and trust
For charities and small businesses, data protection is not optional. Do not paste donor lists, client records, private emails, health information, staff issues, financial details, confidential board papers or sensitive interview notes into public AI tools unless you have a clear, approved, compliant reason and understand where that data goes.
This is not paranoia. It is basic GDPR hygiene.
The same applies to client confidentiality and intellectual property. A business should not upload a client’s confidential strategy document into an AI tool because it wants a quicker summary. A charity should not paste identifiable service-user data into a chatbot to save time on reporting. Convenience does not cancel responsibility.
If the data is personal, confidential or sensitive, stop and check the rules before using AI.
Useful, fast, and not to be trusted unsupervised
AI can be extremely useful in project work. It can help draft, summarise, brainstorm, structure, compare, rephrase, analyse and organise. For small businesses and charities, that matters. Time is short, admin is everywhere, and nobody started a community project because they were passionate about formatting meeting notes.
But AI is not magic. It is not a project manager. It is not a strategist. It is not a lawyer, accountant, data protection officer, fundraiser, therapist or replacement for knowing what you are doing.
AI is best understood as a very fast assistant: useful, tireless, occasionally brilliant, and fully capable of producing nonsense with complete confidence. The aim is not to avoid AI. The aim is to use it without becoming careless.
What AI can help with
Used properly, AI can reduce friction across the project lifecycle.
At the start of a project, it can help brainstorm options, summarise public research, turn rough notes into a project brief, or suggest possible risks. During planning, it can help break a project into tasks, draft a first version of a Work Breakdown Structure, create a stakeholder communication plan, or turn a vague idea into clearer objectives.
During implementation, AI can summarise meeting notes, draft updates, produce checklists, write first versions of reports, reformat content for different audiences, or help create social media posts, newsletters and training materials. It can also support simple data analysis, such as identifying patterns in anonymised survey responses or sales information.
The key phrase is “first version.” AI is good at getting you from blank page to something workable. It is much less reliable as the final authority.
Prompting is delegation
A bad prompt produces generic sludge. A good prompt gives AI enough context to be useful.
Think of prompting as delegating to an eager junior colleague who works very quickly but has no judgement unless you build it into the instruction. You need to explain the role, context, task, format and constraints.
A weak prompt is: “Write a project plan.”
A better prompt is: “Act as a project manager helping a small Irish charity plan a six-week community fundraising event. Create a simple project plan with phases, tasks, owners, risks and weekly milestones. Keep the language plain, assume limited volunteer capacity, and flag any assumptions I need to check.”
The second prompt gives the AI something to work with. The first one just invites it to cosplay competence.
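For teams that script or template their AI use, the role–context–task–format–constraints structure above can be captured as a reusable pattern. This is a hypothetical sketch, not a required tool or a standard API; the function name and field names are invented for illustration:

```python
# Hypothetical helper: assemble a structured prompt from the five
# delegation elements described above (role, context, task, format,
# constraints). Illustrative only; adapt the wording to your own work.

def build_prompt(role, context, task, output_format, constraints):
    """Join the five delegation elements into one prompt string."""
    parts = [
        f"Act as {role}.",
        f"Context: {context}",
        f"Task: {task}",
        f"Format: {output_format}",
        f"Constraints: {constraints}",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    role="a project manager helping a small Irish charity",
    context="a six-week community fundraising event with limited volunteer capacity",
    task="create a simple project plan with phases, tasks, owners, risks and weekly milestones",
    output_format="a plain-language plan grouped by week",
    constraints="flag any assumptions I need to check",
)
print(prompt)
```

The point of the template is not automation for its own sake: filling in each field forces you to do the thinking that the weak prompt skips.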

Verify everything that matters
AI can hallucinate. That means it can invent facts, sources, numbers, examples, legal claims, policies and confident little lies that sound like they arrived wearing a suit.
This is especially dangerous in project work because plausible errors can travel far. A fake statistic can enter a grant proposal. A made-up regulation can influence a decision. A generic risk assessment can miss the one risk that actually matters. A polished report can sound credible while being hollow.
Use a simple rule: if the output affects money, people, law, safety, reputation, strategy or public claims, verify it.
Check sources. Confirm numbers. Test assumptions. Review tone. Make sure the advice fits the actual organisation, not an imaginary one with unlimited staff and a suspiciously obedient stakeholder environment.
AI can help you think. It should not be allowed to think instead of you.
Strategy is not planning
Strategy, planning, implementation and governance are related, but they are not the same thing.
Strategy chooses the direction. Planning works out the route, resources, timeline and responsibilities. Implementation does the work. Governance checks whether the work still makes sense, still has authority, and still deserves support.
Confusing these creates trouble. A detailed plan is not a strategy. A busy organisation is not necessarily a strategic one. A project delivered on time can still be the wrong project. That is the uncomfortable part people often avoid: good execution does not rescue a bad choice. It just delivers the mistake more efficiently.
Choose what not to do
A useful strategy creates focus. It says yes to some things and no to others. This is where many small organisations struggle, especially charities and mission-led businesses. When the cause matters, every opportunity can feel morally important. Every grant looks tempting. Every partnership seems worth exploring. Every service gap feels like something you should fix.
But saying yes to everything is not generosity. It is strategic self-harm.
A charity that chases every available funding stream may end up running projects that do not fit its mission, confuse staff, exhaust volunteers and make the organisation harder to explain. A small business that copies every competitor may lose the thing that made it distinctive. A team that keeps adding initiatives without stopping anything is not becoming more ambitious. It is building a museum of unfinished intentions.
Strategy requires trade-offs. What will you not do? Which customer or beneficiary will you prioritise? Which projects will you pause? Which “nice idea” does not deserve resources right now?
The strategy is often hidden in the no.
Tools to use
AI Task Selector
Use to decide which project tasks are suitable for AI and which require human judgement.
Prompt Checklist
Use to improve the quality of AI outputs by giving clear role, context, task, format and constraints.
Verification Workflow
Use to fact-check sources, numbers, claims, tone, bias and data safety before using AI-generated material.
AI Use Policy Builder
Use to set basic rules for staff and volunteers around confidentiality, GDPR, disclosure and accountability.
“AI is neither artificial nor intelligent.”
— Kate Crawford
Recommended reading & sources
Mollick, E. – Co-Intelligence: Living and Working with AI
A highly accessible guide to working with AI as a collaborator, coach and thinking partner rather than treating it as magic software.
Crawford, K. – Atlas of AI
A critical and eye-opening book on the hidden labour, data, energy use and power structures behind artificial intelligence. Useful for anyone who wants to look past the hype and understand what AI systems actually depend on.
Taylor, P. – AI and the Project Manager
Useful for understanding how AI may change project administration, reporting and decision support while leaving human leadership firmly in the frame.
Houghton, J. – Applying Artificial Intelligence to Close the Accessibility Gap
Especially useful for charities and community organisations thinking about how AI can support accessibility and inclusion.
Data Protection Commission – Guidance for SMEs
Essential reading for Irish organisations that need plain-English data protection guidance before using AI with personal or sensitive information.
Fundraising Regulator – Guidance on Using Artificial Intelligence in Fundraising
Important for charities using AI in donor communication, fundraising content or automated decision support.