20 October 2025

AI as a catalyst for good governance

Human Sciences Research Council (HSRC)

President Cyril Ramaphosa unveils inscriptions depicting the values of the Constitution of the Republic of South Africa, at the Houses of Parliament in Cape Town. Photo: GovernmentZA, Flickr

South Africa, categorised as a ‘flawed democracy’ in the Economist Intelligence Unit’s Democracy Index, ranked 81st on the 2025 Corruption Perceptions Index with a score of 41 out of 100 (where 100 is clean and 0 is highly corrupt), two points below the global average of 43. According to the International Monetary Fund, corruption discourages investment, limits economic growth, and distorts the composition of government spending, often undermining future growth.

For whistleblowers, exposing the truth about corruption can mean putting their lives at risk. For example, Gauteng health official Babita Deokaran was gunned down outside her Johannesburg home in August 2021 after flagging a R332 million personal protective equipment procurement scam at the provincial Department of Health.

South Africa’s Whistleblower Protection Bill, which aims to address gaps in existing legislation to strengthen whistleblower protections, is due to be introduced to Parliament in late 2025. Considering the widespread integration of AI into modern life, a new HSRC article argues that AI tools could flag corruption early and begin to rebuild public trust.

Using a systematic review of the current literature on the application of AI in combatting corruption, HSRC researchers sought to determine how AI and emerging technologies could be leveraged to help South African organisations govern more effectively, detect fraud, increase transparency, and address corruption, cybercrime, and misinformation.

What is AI?

AI is a branch of computer science focused on building systems capable of performing tasks that typically require human intelligence. One of the primary ways AI operates is through machine learning, where a system mimics human decision-making by identifying patterns in data.

AI audit systems apply these algorithms more systematically and thoroughly than their human counterparts, and without fear or favour. They can scrutinise transactions and identify patterns and anomalies that a human auditor might overlook, enabling the early detection and even prediction of corrupt practices.
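The core idea of spotting anomalies in transaction data can be sketched in a few lines. The sketch below is a hypothetical illustration, not the method used by any system mentioned in this article; the payment figures are invented, and it uses a simple median-based outlier rule rather than a trained machine-learning model:

```python
from statistics import median

def flag_anomalies(amounts, threshold=5.0):
    """Return indices of amounts that deviate far from the median,
    using the median absolute deviation (MAD), which is robust to
    the very outliers we are trying to find."""
    med = median(amounts)
    deviations = [abs(a - med) for a in amounts]
    mad = median(deviations)
    if mad == 0:
        return []
    # 0.6745 scales the MAD so it is comparable to a standard deviation
    return [i for i, d in enumerate(deviations)
            if 0.6745 * d / mad > threshold]

# Hypothetical supplier payments (rands); index 4 is the planted outlier
payments = [1200, 1150, 1300, 1250, 98000, 1180, 1220, 1275, 1190, 1240, 1210]
print(flag_anomalies(payments))  # → [4]
```

Real AI audit systems learn what ‘normal’ looks like from many features at once, but the principle is the same: establish a baseline and surface the transactions that sit far outside it.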

In the recent HSRC article, researchers discussed five key ways in which AI tools are currently being utilised to combat corruption. These applications target different types of corruption and strengthen governance systems in distinct but overlapping ways.

Detecting fraud with AI

AI is increasingly being used to detect fraudulent financial activity in both the public and private sectors by rapidly sifting through millions of transactions, payment records, and other financial data to flag anomalies that would be nearly impossible for a human to spot.

In the United Kingdom’s hospitality industry, machine-learning models now sift through booking and payment records in real time, flagging suspicious patterns such as duplicate card details or improbably high room-rate discounts. Malaysia has deployed a model within its police service that predicts the intent to commit petty corruption among law enforcement officers with 90% accuracy.
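The kinds of rules described for the hospitality example can be made concrete with a small sketch. This is a hypothetical illustration under assumed record fields (`ref`, `guest`, `card`, `discount`); production systems learn such patterns statistically rather than hard-coding them:

```python
from collections import defaultdict

def flag_bookings(bookings, max_discount=0.4):
    """Flag bookings whose payment card appears under more than one
    guest name, or whose room-rate discount is improbably high."""
    guests_by_card = defaultdict(set)
    for b in bookings:
        guests_by_card[b["card"]].add(b["guest"])

    flags = []
    for b in bookings:
        reasons = []
        if len(guests_by_card[b["card"]]) > 1:
            reasons.append("card shared across guest names")
        if b["discount"] > max_discount:
            reasons.append("excessive discount")
        if reasons:
            flags.append((b["ref"], reasons))
    return flags

# Invented booking records for illustration
bookings = [
    {"ref": "B1", "guest": "A. Smith", "card": "4111", "discount": 0.10},
    {"ref": "B2", "guest": "J. Doe",   "card": "4111", "discount": 0.05},
    {"ref": "B3", "guest": "K. Lee",   "card": "5500", "discount": 0.65},
]
print(flag_bookings(bookings))
```

Here B1 and B2 are flagged because they share a card, and B3 for its 65% discount; a learned model would weigh many such signals together instead of applying each rule independently.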

These models are especially effective when they combine financial data with other forms of evidence, such as employee behaviour or audit reports. Privacy concerns remain, however, making a clear legislative toolkit essential: regulations defining what data can be accessed, how it can be used, and who is liable if the system makes an error, together with strong data protection protocols and transparent oversight mechanisms to ensure the technology is not misused. The effectiveness of AI systems in this context will also depend on the quality of their input data and the digital proficiency of auditing staff, including internal auditors and the Auditor-General of South Africa (AGSA).

However, these challenges can be addressed. The effectiveness of these AI models is significantly boosted by continuous training for auditing staff, keeping them abreast of new AI advances. When staff understand how AI works and how to interpret its findings, they can use the tool more effectively, integrating its insights into their investigations.

Enabling smarter governance

According to the South African Cities Network, smart governance is defined as a government’s ability to make better decisions through a combination of information and communication technology-based tools and collaborative governance, thereby achieving its developmental mandates. AI could offer opportunities for smart governance practices. For example, in healthcare, AI can optimise patient scheduling to reduce wait times, streamline the procurement of medicines, and automate administrative tasks to free up medical staff. Furthermore, by creating secure and transparent digital records, AI can help prevent the kind of procurement fraud that has plagued the health sector.

In South Korea and China, AI and emerging technologies have proven useful in reducing corruption, as they have improved efficiencies in the legal workflow and assisted in recovering state assets from corrupt public officials. Closer to home, in 2023 the South African Revenue Service (SARS) integrated AI services, enabling it to recover an additional R210 billion in tax revenue. In this case, AI tools zeroed in on taxpayers who were most likely to settle their debts.

In Canada, AI has enhanced data processing and management while also reducing corruption; however, its use has raised privacy concerns, leading to legal challenges. The validity of AI responses also depends on regional and cultural contexts, as different regions have unique legal frameworks and social norms. An AI model trained on data from one country may make biased or inaccurate predictions in another, hence the need for local AI audit models trained on local data.

Transforming public procurement

AI tools have the potential to reshape public procurement, the process by which government departments or agencies purchase goods, services, and works from private entities. In Italy, for example, machine-learning platforms now handle bid disputes, score tenders, and sift through unstructured documents, making selection faster and fairer. The same technologies enable auditors to identify collusion risks, establish clear performance indicators, and even produce user-satisfaction scores in real time.

The accuracy of these AI-driven findings is directly linked to the quality and depth of the data that the system analyses. High-quality data, which is complete, accurate, and consistently formatted, allows AI to establish a reliable baseline of what ‘normal’ activity looks like. When more varied data sources are integrated (e.g. procurement bids, company director databases, and expenditure reports), AI can cross-reference information and uncover more complex corruption schemes, such as collusive bidding or conflicts of interest, that would otherwise remain hidden.
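The cross-referencing idea can be sketched as a simple join between data sources. All names, tenders, and the rule itself are hypothetical; a real system would match fuzzily across registries rather than on exact strings:

```python
def conflict_flags(bids, directors, officials):
    """Flag bids where a director of the bidding company also appears
    as an official in the department awarding the contract."""
    flags = []
    for bid in bids:
        for person in directors.get(bid["company"], []):
            if person in officials.get(bid["department"], []):
                flags.append((bid["tender"], bid["company"], person))
    return flags

# Invented records: a company-director register and a staff register
directors = {"Acme Ltd": ["T. Nkosi"], "Beta CC": ["S. Pillay"]}
officials = {"Health": ["T. Nkosi", "M. Botha"]}
bids = [
    {"tender": "T-001", "company": "Acme Ltd", "department": "Health"},
    {"tender": "T-002", "company": "Beta CC", "department": "Health"},
]
print(conflict_flags(bids, directors, officials))  # → [('T-001', 'Acme Ltd', 'T. Nkosi')]
```

The point is that neither dataset reveals the conflict on its own; it only surfaces when bids, director registers, and staff records are analysed together.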

Predictive models that analysed the data of 147 Colombian municipalities, including procurement contracts and politicians’ personal information, flagged likely sources of corrupt practices with approximately 84% accuracy. To effectively enhance public oversight and combat corruption, institutions should actively utilise this data and ensure that staff understand the predictions.

Combating cybercrime and disinformation

The INTERPOL Africa Cyberthreat Assessment Report 2025 identified South Africa as a top target for cybercriminals on the continent. Although law enforcement agencies are increasingly adopting AI for crime detection, cybercrime itself is on the rise, and the intersection between economic crime and cybercrime is often poorly understood. The downside of AI as a mitigation against cybercrime is that AI systems can themselves be hijacked, requiring firms to aggressively improve their IT security protocols.

This worldwide adoption of AI for crime detection also raises concerns about privacy and human rights safeguards. Law enforcement agencies will need to invest in better technology, establish stronger data pipelines, provide ongoing staff training, and implement clear policies for the responsible use of AI. One example of AI mitigating cybercrime is in securing pooled healthcare data, which is typically vulnerable to theft, using technologies such as blockchain, where cryptographic processing keeps the data secure.

As with cybercrime, AI may exacerbate the issue of disinformation. For example, cheap, automated chatbots already churn out false stories faster than human fact-checkers can respond. In Kenya’s last elections, foreign-run bots disseminated profit-driven or politically motivated rumours that influenced online debate.

One solution is to allow users to tag suspect posts themselves. In a study involving 645 participants, crowd-sourced digital content labelling encouraged fact-checking. Social platforms now blend crowd input with their own AI filters to flag hate speech and fake news, and researchers expect next-generation ‘multi-modal’ systems that are capable of reading text, images, and video together to enhance accuracy further.
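The blending of crowd flags with an AI filter score can be sketched as a simple decision rule. This is a hypothetical illustration; the thresholds and the idea of averaging the two signals are assumptions, not the method of any platform mentioned above:

```python
def should_flag(crowd_reports, views, model_score,
                report_rate_cut=0.02, score_cut=0.9, blend_cut=0.6):
    """Flag a post when either the crowd-report rate or the model score
    is strong on its own, or when their blended average is high."""
    rate = crowd_reports / views if views else 0.0
    rate_signal = min(rate / report_rate_cut, 1.0)  # normalise to [0, 1]
    blended = 0.5 * rate_signal + 0.5 * model_score
    return rate >= report_rate_cut or model_score >= score_cut or blended >= blend_cut

# Heavily reported post, weak model score: crowd signal alone flags it
print(should_flag(crowd_reports=50, views=1000, model_score=0.2))   # → True
# Barely reported, low model score: passes
print(should_flag(crowd_reports=1, views=1000, model_score=0.1))    # → False
```

Combining the two signals lets each compensate for the other’s blind spots: crowds catch novel falsehoods the model has never seen, while the model covers content too obscure to attract many reports.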

Restoring public trust through AI

The HSRC study examined global experience of how AI has been applied to combat corrupt practices and offered findings to strengthen South African governance and restore public trust. For AI adoption to succeed, it needs a champion at the highest level of government, such as the Presidency. Such leadership would signal a serious commitment to a culture of transparency and drive the development of a national AI anti-corruption strategy. Additionally, instead of fragmented efforts, the government could invest in a centralised, powerful AI-based auditing system accessible across government departments and entities, ensuring a consistent and high standard of oversight. Policy would also need to mandate the data-sharing protocols necessary for such a system to be effective.

Policy must prioritise people and digital skills. The most advanced AI tool is useless if no one can operate it or understand its findings. Therefore, a critical policy initiative must involve upskilling public servants. Investing in AI literacy programmes, particularly for individuals in financial and procurement roles, is crucial. When officials are empowered by technology instead of being intimidated by it, they become active participants in fostering ethical governance.

By creating a policy that combines high-level political will, strategic technological investment, and widespread skills development, South Africa can leverage AI to create more accountable institutions and, in doing so, begin to restore the public’s faith in government.

Research contacts and acknowledgements

This Review article was written by Jessie-Lee Smith and Krish Chetty. It was based on the article ‘AI as a Catalyst for Good Governance: Transforming South Africa’s Fight Against Corruption’, published in Development (2024), issue 67: 50–60. For more information about this work, please contact Krish Chetty at kchetty@hsrc.ac.za.

The research team included Krish Chetty, Petronella Saal, Nothando Ntshayintshayi, Nondumiso Masuku and Tahiya Moosa.

Link: https://link.springer.com/content/pdf/10.1057/s41301-024-00404-8.pdf
