Centring AI on human needs, values, and well-being
What is human-centred AI?
Human-centred AI (HCAI) is an approach that intentionally places human needs, values, and well-being at the heart of how artificial intelligence systems are imagined, designed, built, and deployed. It ensures that technology serves people, not the other way around.

Why human-centred AI?
AI doesn’t simply appear out of thin air. Real people imagine, design, regulate, build, and use these systems. Their values and intentions—both the best and the worst—are already shaping what AI becomes and how it touches our lives.
As Fei-Fei Li, Co-founder of Stanford's Institute for Human-Centered AI, notes, "AI has the ability to be a force multiplier of our very best—and very worst—intentions."

Why now?
In a time of rapid technological advancement, we have a critical opportunity and responsibility to be intentional. We need to ensure that these powerful tools amplify our capacity for good, support human agency, and contribute positively to society and the planet.
By thoughtfully designing AI systems from the ground up, we can create tools that people understand, trust, and control. This human-centred focus is particularly crucial for government and other high-impact services, where decisions directly affect people's lives.

What HCAI means in practice
Unlike traditional AI approaches that might focus primarily on technical capabilities, HCAI considers the whole ecosystem of human interaction. It acts as a vital bridge, transforming abstract technical possibilities like machine learning or natural language processing into tangible, real-world value for people.
This transformation requires deliberate design choices focused on:
Making systems legible: Ensuring people can understand what AI is doing and why.
Supporting human judgment: Augmenting human capabilities and decision-making, rather than aiming solely for replacement or full automation.
Creating feedback mechanisms: Allowing for continuous learning and improvement based on human input and real-world outcomes.
Through this deliberate process, we translate technological potential into concrete human benefits like improved decision support, better access to services, reduced administrative burden, and personalised, adaptable experiences.

Our Human-centred AI principles
Our approach to HCAI is guided by three core principles, ensuring the systems we help create are Responsible, Human, and Impactful. These principles provide a framework for reflection and evaluation throughout the entire lifecycle of an AI project.
How Today practices Human-centred AI
Translating principles into practice requires specific methods and a commitment to involving diverse perspectives. We integrate the following practices throughout our work, influencing everything from data strategy to interface design:
Participatory design
We bring diverse stakeholders—including the people who will use or be affected by the system—directly into the design process. This isn't just about consultation; it's about co-creating solutions together to uncover barriers, find new possibilities, and ensure systems address real needs and concerns.
Supporting human agency
We make conscious design decisions to keep humans in control. AI should provide recommendations and support, particularly at critical decision points, rather than automating choices completely. The design must prioritise human interests.
Ethical frameworks
We reference and apply established ethical guidelines (like Australia’s AI Ethics Principles and relevant state-level guidance) as guardrails, helping navigate complex questions around bias, privacy, fairness, and decision-making.
Accessibility and inclusivity
We strive to ensure AI's benefits are available to everyone, regardless of ability, background, or digital literacy. Designing for diverse needs and 'edge cases' isn't just the right thing to do; it often leads to better, more robust solutions for all.

Designing for a brighter future
Ultimately, Human-centred AI is about shaping technology to create net positive outcomes. It means designing systems that are trustworthy, transparent, fair, and privacy-preserving. It requires seeking positive social and environmental outcomes, considering externalities, and aiming for the lowest possible carbon footprint. It involves building secure, reliable systems with mechanisms for accountability, continuous monitoring, and adaptation.
By embedding these principles and practices, we partner with organisations ready to harness AI to contribute to a fairer, smarter, more sustainable, and more human future.

Ready to apply Human-centred AI principles to your next project?
Let's discuss how our approach can help you achieve responsible and impactful results.