Indigenous Perspectives on AI Highlight Deep Concerns Over Accountability, Trust, and Power

Global discussions on artificial intelligence often assume that its adoption is inevitable, that technological expansion leads to better outcomes, and that the primary challenge lies in managing associated risks. However, emerging research presents a contrasting perspective.

The Relational Futures project in Australia examines artificial intelligence through the lens of Indigenous sovereignty and governance. Rather than treating AI as a neutral or standalone tool, the project frames it as part of a broader system that shapes relationships between people, institutions, data, and land.

By engaging Aboriginal and Torres Strait Islander communities through surveys and yarning circles, the research captures lived experiences and perspectives that are often absent from mainstream technological discourse.

Limited Trust and Unequal Impacts

Participants in the study expressed significant skepticism toward AI systems, with many indicating they would refuse to engage with such systems altogether. This distrust is not rooted in a rejection of technology itself, but in concerns about how it is deployed.

Experiences with automated decision-making in Australia, particularly the Robodebt scheme, in which an automated system unlawfully raised debts against welfare recipients, have demonstrated how technology can amplify harm when implemented without adequate oversight. Similar concerns are now emerging in sectors such as aged care and the National Disability Insurance Scheme.

AI systems are often introduced under the banner of efficiency. However, participants questioned who truly benefits from this efficiency and at what cost. In practice, automation can make decisions faster while simultaneously making them less transparent and harder to challenge.

Because these systems operate within institutions that already have unequal distributions of power and accountability, their negative impacts tend to fall disproportionately on marginalized communities.

Indigenous Data Sovereignty

A central theme in the research is the concept of Indigenous data sovereignty, which emphasizes the collective rights of Indigenous peoples to control data related to their communities, lands, and resources.

This framework calls for data governance systems that prioritize self-determination, community benefit, and cultural integrity. Participants stressed that data practices must not reproduce harm or deepen marginalization.

Concerns extended beyond issues of privacy. Participants highlighted risks such as environmental costs, the misrepresentation or simplification of Indigenous knowledge, and a lack of transparency in how AI systems are developed and deployed.

There is also unease about AI being used to compensate for underfunded social services, potentially replacing human care with automated systems that lack cultural understanding.

The Limits of Technological Substitution

The project explored speculative ideas, including the concept of an “AI Elder” designed to provide cultural guidance or support community reconnection.

This idea was met with strong resistance. Participants emphasized that Elders are not merely holders of knowledge, but are deeply embedded in relationships, responsibilities, and accountability within their communities.

AI, by contrast, cannot form genuine relationships, cannot be held accountable in the same way, and cannot embody cultural or spiritual connections to land and community. This highlights a fundamental limitation of technology: not all forms of knowledge and authority can be replicated or replaced.

Rethinking Governance and Inclusion

The findings suggest that current approaches to AI governance are insufficient. Technical standards and regulatory frameworks alone cannot address deeper issues of power, responsibility, and harm.

Participants emphasized the need for inclusive design processes that involve Indigenous communities in shaping how AI systems are built, trained, and implemented. Without such involvement, there is a significant risk that existing inequalities will be reinforced through technological systems.

Importantly, the study argues that designing AI systems to meet the needs of the most marginalized is not a niche concern. Rather, it is a critical test of whether these systems are fair, effective, and socially sustainable.

Analysis

This research challenges dominant narratives about artificial intelligence by exposing the social and political assumptions embedded within technological development. The belief that AI is inherently beneficial or neutral is undermined by evidence showing how it can replicate and intensify existing inequalities.

The emphasis on relationality offers a fundamentally different framework for understanding technology. Instead of focusing solely on efficiency and innovation, it foregrounds accountability, care, and long-term responsibility. This shift is particularly significant in contexts where historical injustices and systemic marginalization shape how new technologies are experienced.

The skepticism expressed by Indigenous participants also reflects a broader crisis of trust in automated systems. When decision-making becomes opaque and accountability is diffused, affected communities are left with limited avenues for redress. This erodes confidence not only in technology but in the institutions that deploy it.

Moreover, the rejection of ideas like the “AI Elder” underscores the limits of technological solutionism. It illustrates that certain forms of knowledge, particularly those grounded in lived experience, cultural continuity, and community relationships, cannot be meaningfully replicated by artificial systems.

Ultimately, the findings point toward a critical need to rethink AI governance. Inclusion must go beyond consultation to genuine power sharing in decision-making processes. Without this, AI risks becoming another tool through which existing hierarchies are maintained.

The broader implication is clear: if AI systems cannot be designed to serve and protect those most vulnerable to harm, their legitimacy and effectiveness for society as a whole remain deeply questionable.

With information from Reuters.

Sana Khan
Sana Khan is the News Editor at Modern Diplomacy. She is a political analyst and researcher focusing on global security, foreign policy, and power politics, driven by a passion for evidence-based analysis. Her work explores how strategic and technological shifts shape the international order.