Practicing responsible and human-centred AI

With GEORGIE IBARRA from CSIRO's Data61

We're firmly in the era of AI, and Responsible AI has emerged as a critical framework for ensuring that artificial intelligence serves humanity's best interests.

Adam Morris, Founder of Today, sat down with Georgie Ibarra, Principal Designer at CSIRO's Data61 business unit, who is at the forefront of developing human-centred approaches to AI. With over a decade of experience in technology projects and a passionate commitment to ethical AI, Ibarra offers unique insights into how organisations can develop AI systems that not only drive innovation but also prioritise human well-being, societal impact, and fundamental ethical principles.

By Adam Morris

21 Nov 2024


Thanks for taking the time to chat about Responsible AI, Georgie. Really appreciate it.

No worries, Adam. Thank you for having me.

Let's start by discussing your role at CSIRO and how you landed in the world of Responsible AI.

I'm a Principal Designer in the Data61 business unit, currently focusing on the Responsible AI program. I've been with CSIRO for just over 10 years now, in various roles across product and design.

Previously I was leading a capability group of product managers and designers, where I was working on different technology projects. One of those was a program under the government’s Modernisation Fund, which partnered with law enforcement agencies to build a tool that used graph analytics and machine learning to analyse data for criminal investigations.

The tool provided a more visual way of looking at information, and the machine learning was able to surface hidden connections in large volumes of data.
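To give a rough sense of what that kind of graph analysis can look like, here is a minimal sketch using a made-up entity graph and the open-source networkx library. It is purely illustrative, not the tool Georgie describes: it just shows how shared attributes can link entities that never appear together directly.

```python
import networkx as nx

# Hypothetical entity graph for illustration only: people linked via
# shared attributes such as phone numbers and addresses.
G = nx.Graph()
G.add_edge("Person A", "Phone 0400-000-001")
G.add_edge("Person B", "Phone 0400-000-001")    # A and B share a phone number
G.add_edge("Person B", "Address 12 Example St")
G.add_edge("Person C", "Address 12 Example St")  # B and C share an address

# Surface a "hidden connection": the chain of entities joining Person A to Person C.
print(nx.shortest_path(G, "Person A", "Person C"))
# ['Person A', 'Phone 0400-000-001', 'Person B', 'Address 12 Example St', 'Person C']
```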

Because we were working closely with partners in the high-stakes domain of law enforcement, that project ignited my interest in trust in ML and AI. At the same time, the AI Ethics Framework had just been published, so there was a lot of interest from the team in how to apply those principles in practice, and that's stuck with me ever since. For the past year I've been able to put my full focus into responsible AI projects at CSIRO, and I feel pretty lucky that I get to work on something I'm so interested in and passionate about.

Absolutely. And for the benefit of our readers, could you explain what Responsible AI is?

Sure. My colleagues on the Responsible AI program, Jon Whittle (Research Unit Director for Data61), Liming Zhu, Xiwei Xu and Qinghua Lu cover this well in their book on Responsible AI: their definition is that Responsible AI is the practice of developing and using AI systems in a way that benefits everyone—individuals, groups, and society as a whole—while minimising potential downsides. It's all about finding that sweet spot between benefits and risks, because too much focus on the risks can stifle innovation and prevent people from adopting AI.

And it is becoming more and more established as a ‘practice’—the focus of Responsible AI right now is all about moving from high-level ethical principles towards developing concrete processes and practices that organisations can adopt.

Operationalising those principles means providing guidance on how organisations can implement them in practice, including governance structures, risk management processes, and accountability mechanisms. There's a lot of work happening both internationally and domestically to develop standards and regulations for safe and responsible AI, and here in Australia the government has just released a policy for the responsible use of AI in government and a voluntary AI safety standard for industry.

Responsible AI is the practice of developing and using AI systems in a way that benefits everyone—individuals, groups, and society as a whole—while minimising potential downsides.

Georgie Ibarra

What can you tell us about CSIRO’s role in the context of responsible AI?

CSIRO, as Australia's national science agency, plays a key role in researching, developing, and advocating for Responsible AI. It's an ethical duty and the central pillar of our AI work and strategy, as trust and ethical practices are at the heart of everything we do at CSIRO. We bring together, curate, and synthesise information from different sources, ensuring that the guidance we provide is unbiased and not tied to any particular technology; as the national science agency, we do that from a neutral perspective.

But we're also doing a lot of innovation around new science and technology that can foster Responsible AI practices in parallel with the rapidly accelerating pace of AI capability. We've made a big commitment to deepening scientific understanding around responsible AI, and we have research teams dedicated to developing system-level engineering practices and supply chain accountability, as well as tailoring responsible AI practices for both smaller AI models and large foundation model-based systems. We also work closely with the National AI Centre on connecting and applying that research with industry. One of our recent collaborations, just launched, is the AI Impact Navigator, which was co-designed with industry representatives from the Responsible AI Think Tank. The Navigator is designed to help Australian businesses better understand, manage and report the impact and outcomes of their AI systems, and to apply the Voluntary AI Safety Standard to their context. I know Today is working with NAIC on a website curating best practices for implementing the Responsible AI safety standards.

We are, yes. It’s a really relevant resource given the appetite for guidance and support, particularly for Australian businesses. There’s this explosion of demand and a real pace of change here… How do you keep up with it all, being right at the forefront?

Like any organisation, CSIRO is adapting to AI internally, but the pace within the Responsible AI program is something else. There's so much new information, so many new frameworks, and so many new AI technologies emerging almost every day.

In terms of Responsible AI, organisations are feeling overwhelmed by the sheer number of frameworks out there, and it’s a real challenge to sift through all of that and distil it down to actionable steps they need to take. What we have heard from organisations is that they need targeted, practical advice that's tailored to their needs and shows them how to turn principles into action. And that practical guidance needs to go to a more granular level but also align with emerging standards and regulations. It’s a challenging space.

Given that, what advice would you give to organisations starting out on their Responsible AI journey?

Well, it depends on how far along they are with using AI. If they're already using or developing AI systems, they might just need to tweak their existing processes and improve their governance. However, for organisations at earlier stages, it's helpful to create a clear AI policy or strategy that is led from the top and reflects the organisation's values. This means not only figuring out how they want to use AI, but also assessing and monitoring its impact, both internally and externally to the organisation.

At the same time, they should think about their workforce's AI literacy and training needs. This makes sure employees can use AI systems safely and responsibly, and it also addresses potential job losses by upskilling people and integrating AI into existing workflows.

Another important step is setting up governance structures, both at the organisational level and for each AI system. So, defining processes and practices for development, deployment, and ongoing monitoring. As you get closer to the AI system itself, things get more specific and technical.

...for organisations at earlier stages, it's helpful to create a clear AI policy or strategy that is led from the top and reflects the organisation's values. This means not only figuring out how they want to use AI but also assessing and monitoring its impact...

Georgie Ibarra

So when organisations develop strategies and practices around responsible AI, how can they start to build towards the right kind of use cases? I’m thinking about fostering collaboration and creativity—AI products and services that augment human capabilities, supporting human potential.

That's a really important question, and a huge research area. There's a research program here at CSIRO's Data61 called CINTEL (Collaborative Intelligence). It brings together many research disciplines, including social scientists from all different backgrounds, who are studying this area and doing a lot of great research around human and AI collaboration.

In terms of practice, it’s emergent, but there are some principles we can follow. When designing AI systems, we need to carefully consider how much autonomy the system has and whether it's appropriate for the specific situation and the people involved. We should explore how AI can support human workflows, rather than trying to replace or automate them.

This is where we're getting back to alignment with organisational values. If you have a set of values that everybody knows, and everyone understands how to apply them and what they mean for their work, it makes those ethical conversations and decisions at the team level much easier.

If we build strong organisational values into the design process, it empowers teams to have ethical conversations that guide their decisions. Clear values guide discussions about which tasks are best suited for AI augmentation versus human execution, making sure that people's well-being and sense of purpose aren't lost in the process.

Let’s talk about Human Centred AI as a concept within the frame of Responsible AI. What is it, and why does it matter?

I want to first acknowledge that Human Centred AI is an emergent practice, and even the research community hasn't quite reached consensus on an accepted definition. We are currently undertaking a literature review on this and recently surfaced a research paper published last year that questions what it means to frame, design and evaluate HCAI, with a lot of differences emerging across its critical review of a large corpus of peer-reviewed literature.

We’ve been exploring and experimenting with HCAI for a while at CSIRO, both in terms of the AI systems we are designing and developing and in terms of the methodologies that can be applied by others across industry.

I like to think of Human Centred AI as the “people part” of Responsible AI. A lot of effort has gone into establishing AI governance structures and processes, and even though it's widely acknowledged that responsible AI involves significant stakeholder engagement, I haven't seen many concrete methodologies for actually implementing that.

On the other hand, human-centred design is an established practice that provides relevant methodologies to support Responsible AI, but it usually occurs at the product or experience level.

For AI to be truly human-centred, we need to look beyond the constraints of a product perspective and its focus on core users and customers to broaden out to directly and indirectly impacted stakeholders at the community and societal levels. As designers, we need to extend the HCD and design thinking process and adopt a systems thinking approach, utilising methodologies such as service design but tailoring them to an AI context.

There is also an opportunity to extend our interdisciplinary collaboration beyond the borders of technology teams and into other parts of an organisation, to draw on specialist domain expertise in areas such as legal, compliance and risk management.

For AI to be truly human-centred, we need to look beyond the constraints of a product perspective and its focus on core users and customers to broaden out to directly and indirectly impacted stakeholders at the community and societal levels.
As designers, we need to extend the HCD and design thinking process and adopt a systems thinking approach, utilising methodologies such as service design but tailoring them to an AI context.

Georgie Ibarra

How can we make sure that AI systems protect vulnerable and marginalised communities and don't make existing inequalities worse?

First and foremost, AI projects can involve these stakeholders in the design process to bring their lived experience into the consideration of positive and negative consequences and impacts. But this may involve a shift in how AI projects, or even most technology projects, typically function end to end. Even though an HCD process is now accepted and applied at most organisations, it is still constrained to understanding the core users of the technology and doesn't commonly extend its focus to these broader stakeholder groups.

Impact and risk assessments can be useful tools within Responsible AI because they help us think about the wider, long-term impacts, not just the immediate effects of the product. But it's important to actively involve vulnerable and impacted communities in these processes and in the actual design process itself. This means slowing down the typical 'move fast and break things' approach to technology development and making trust-building and community needs a priority. I really like this idea of ‘moving at the speed of trust’.

There's a great talk I just caught from the co-director of Stanford's HAI (Human Centred AI) centre, who is advocating for this type of change; in their research they are exploring methods for integrating the needs and concerns of users, communities and society at large into the design process. We are also exploring this kind of adaptation in our design processes here at CSIRO, both for AI projects we are working on and as material for training courses we are developing.

What about implementing AI systems, data privacy, and ethical concerns around generative AI? How do these things relate to your work?

Generative AI has its own set of challenges, especially when it comes to the input and output of data. Large Language Models are often trained on massive public datasets, which raises concerns about privacy and data leaks, as well as IP and copyright issues. Organisations should have clear policies on using generative AI to prevent employees from accidentally sharing sensitive information.

Fairness and bias in data are still big considerations for more specific AI applications. Organisations need to put practices in place to understand how they're using data, how it's collected and labelled, and whether it is representative of the community the AI system will serve. There are heaps of examples that show what can go wrong when you use biased historical data to train AI, especially in areas like recruitment, loan approvals, and the criminal justice system.
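As one very small illustration of the kind of check Georgie describes, here is a hedged sketch, with made-up numbers and hypothetical group names, that compares the composition of a training dataset against the population an AI system is meant to serve. A real fairness and bias review would go much further than this.

```python
from collections import Counter

# Illustrative sketch only, with invented figures; not a complete fairness audit.
# Group labels of a hypothetical historical training set (e.g. past loan applicants).
training_groups = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50

# Assumed composition of the community the AI system will actually serve.
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

counts = Counter(training_groups)
total = sum(counts.values())

# Flag groups whose share of the training data falls well below their population share.
for group, expected in population_share.items():
    observed = counts[group] / total
    flag = "UNDER-REPRESENTED" if observed < 0.8 * expected else "ok"
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} of population -> {flag}")
```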

Are there any guidelines or frameworks that can help organisations avoid biases in data and training? What practical steps can they take?

The approach will be different depending on whether an organisation is buying AI from a vendor or building it themselves. If they're buying it, it's important to ask vendors the right questions about accountability, data practices, and how they're mitigating bias. If they're developing AI in-house, the Gradient Institute's report on putting principles into practice [PDF] has some great advice on implementing ethical AI principles, including fairness. At Data61 we also have a research team dedicated to Diversity and Inclusion in AI who have created a set of D&I for AI Guidelines, and they are actively collaborating with industry to apply these in context. The challenge for organisations is how to apply general guidance and tailor it to their context, to suit where they're at in their journey.

That's a really good reference. Let’s move onto some tangible examples. Which products or services have you seen that are responsible and might have used a HCAI approach well?

Red Cross Humanitech has some inspiring case studies about community-led innovation, like the Maya Cares chatbot, a conversational guide designed specifically for women experiencing and processing racism. It shows how powerful a long-term, community-driven design process can be: by focusing on building trust and deeply engaging with the affected community, the project has achieved some fantastic outcomes.

UTS Human Technology Institute also have some great project examples of HCAI in action, such as their partnership with Service NSW on the responsible implementation of facial verification systems. They have also published some excellent case studies in their Lighthouse Case Study Series that highlight the importance of a stakeholder engagement process as part of an overall AI Corporate Governance Program, including their report on understanding impacted communities and missing voices.

Fairness and bias in data are still big considerations for more specific AI applications. Organisations need to put practices in place to understand how they're using data, how it's collected and labelled, and whether it is representative of the community the AI system will serve.

Georgie Ibarra

Could you talk to what’s happening in Australian industry in terms of the knowledge and capabilities they’re looking to grow and improve?

There’s a broad spectrum of AI maturity across Australian industry, with large enterprises tending to be on the more mature end of the scale due to having the resources and funding to invest in their AI adoption and capability uplift. There has been a gap in support and guidance for small to medium-sized businesses on their AI adoption journey, which CSIRO and the National AI Centre are working to address.

But across the board, organisations need to invest in upskilling their workforce, both to capture the efficiency and productivity gains that AI is promising and because it's the responsible thing to do for their employees, helping to minimise the effects of this disruption on the labour force overall. A core principle of Human-Centred AI is human and AI collaboration: ensuring that AI technologies are designed to augment human capability, not replace it.

CSIRO estimates that Australian industry will need up to 161,000 new AI and emerging technology workers by 2030. The Next Generation Graduates Program is a nationwide scholarship program aimed at addressing that.

Super interesting… Can you tell us more about that program?

The Next Gen program is funded by the Department of Industry, Science, and Resources and is designed to equip the Australian workforce with the skills needed for emerging technologies like AI, quantum computing, and cybersecurity. The program is geared towards university graduates and PhD students, offering them a mix of skills through a multidisciplinary curriculum that covers ethical and responsible AI, data-centric engineering practices, human-centred AI, as well as the more technical aspects of AI and machine learning.

What's your specific role in the program?

We're putting together and delivering a human-centred AI module for the next cohort of students. This module builds on our previous work and goes beyond just human-centred design for AI. It incorporates broader ethical considerations, collaborative methodologies, and aims to arm students with practices that develop a deeper understanding of how AI can impact society.

We're trying out new approaches like 'implication design,' to encourage discussion and critical thinking about AI's impacts at different levels – individual, community, and societal. The goal is to give future professionals the skills and mindset they need to develop and use emerging technology—and particularly AI—more responsibly.

How wonderful. And to wrap us up, is there a call to action you'd like to share for the work you’re doing at CSIRO?

Right now, we're doing a literature review on human-centred AI, looking at definitions, best practices, and emerging methodologies. We're really keen to connect with practitioners and experts who are working on the human side of AI, whether through HCD, stakeholder engagement, or impact assessments.

We're trying out new approaches like 'implication design,' to encourage discussion and critical thinking about AI's impacts at different levels – individual, community, and societal. The goal is to give future professionals the skills and mindset they need to develop and use emerging technology—and particularly AI—more responsibly.

Georgie Ibarra

If you're a practitioner or expert working at the intersection of AI and human-centred approaches, CSIRO's Next Generation Graduates team would love to hear from you. They're keen to learn about your experiences, the methods you use, and any case studies you have to share, to help inform their research on human-centred AI. Your contributions can make a real difference in shaping the future of ethical and responsible AI.

To find out more or to express your interest in getting involved, please contact Data61-NextGenGrad@csiro.au.

Thanks again for your time and insights, Georgie. This has been a really interesting chat.

Thanks for the opportunity to share my thoughts, Adam. It's been a great discussion.
