People, Skills, Culture and Stakeholder Engagement
Organisations in Australia and around the world are increasingly embracing artificial intelligence (AI) in their operations. To safely unlock the benefits of these systems, organisations should engage with stakeholders and invest in the people, skills and culture that foster safe and responsible AI adoption.
In June 2024, HTI and the Australian Institute of Company Directors published a suite of resources for directors on AI governance: A Director’s Introduction to AI and A Director’s Guide to AI Governance. The eight key elements of AI governance featured in this guidance include:
ensuring that an organisation’s people, skills, values, and culture support the safe and responsible use of AI; and
undertaking stakeholder engagement to understand and manage the impact of AI systems.
From July to November 2024, HTI held several workshops on AI governance with over 200 directors and senior executives. Participants were particularly interested in discussing:
What people, skills and capabilities are needed to support AI governance?
What is the role of culture in ensuring that responsible AI policies are followed? How do I build a culture of responsible AI?
How can we improve stakeholder engagement, particularly with our staff, to identify, explore and successfully deploy AI use cases?
People, Skills and Culture for Effective AI Governance
In November 2024, HTI published its fourth AI governance snapshot. This Snapshot explores the critical role of people, skills, and organisational culture in effective AI governance. Organisations need to invest in closing the gap between responsible AI aspirations and practices, equipping staff and leaders with the necessary capabilities, and fostering a culture of responsible AI through leadership, shared values, and ongoing engagement.
The Snapshot provides actionable insights for directors, executives, and organisations seeking to build the teams, skills, and culture needed to adopt AI responsibly and unlock its benefits.
Stakeholder engagement: Invisible Bystanders
In May 2024, the Human Technology Institute published its innovative qualitative research into the experience of Australian workers of AI and automation. The study revealed that workers are being excluded in discussions around AI development and deployment, leaving organisations without their expert insights and exposed to additional governance risks.
When organisations engaged workers on these issues, the workers provided valuable and nuanced insights into the legal, ethical, and operational issues raised by AI systems. However, because many organisations fail to engage with workers in this way, they lose the benefit of deep worker expertise and insight into opportunities for higher productivity, ethical boundaries, and the broader impact of AI systems on colleagues and customers.
This research sets out a methodology for deep engagement with staff on AI. HTI considers that best practice staff engagement should be:
Inclusive: Capture insights from different people and groups across the organisation
Deliberative: Provide opportunities for honest sharing and deliberation regarding AI uses, benefits, and challenges
Influential: Able to influence policy and decision-making in the organisation
Read Invisible Bystanders: how Australian workers experience the uptake of AI and automation
Disconnected AI: the unmet expectations of consumers and workers
In March 2025, HTI published its Insight Summary - Disconnected AI: the unmet expectations of consumers and workers, which outlines the attitudes, concerns and expectations of consumers and workers towards the increasing use of AI systems. Drawing on quantitative research into consumer expectations and qualitative research into worker experiences, conducted in partnership with Essential Media, the Insight Summary underscores why corporate leaders need to engage with these stakeholders when developing and deploying AI systems.
While Australians recognise the potential benefits of AI, they have significant concerns about its increasing adoption. In short, consumers are worried and workers feel disempowered. There is a disconnect between what they want and what they expect organisations will provide.
To meet the expectations of consumers and workers, corporate leaders should ensure they are providing accountability, transparency, redress mechanisms, deep engagement, and quality training for workers. Effective AI governance, such as by implementing the guardrails in the Voluntary AI Safety Standard, can help organisations address these challenges and meet stakeholder expectations.
Read Insight Summary - Disconnected AI: the unmet expectations of consumers and workers