Over 60 Nations, Including Nigeria, Target Deepfake Proliferation
The Nigeria Data Protection Commission has thrown its weight behind an international enforcement framework targeting the proliferation of deepfakes and synthetic media, as global regulators move to close privacy gaps opened by generative artificial intelligence technologies.
The commission said Nigeria has formally joined more than 60 data protection and privacy regulators worldwide in endorsing new international standards designed to combat the growing risks posed by artificial intelligence-generated images and videos.
In a statement issued in Abuja on Wednesday, the commission disclosed that the initiative is coordinated by the Global Privacy Assembly through its International Enforcement Cooperation Working Group. The framework specifically targets the spread of deepfakes and non-consensual digital content, areas where technological advancement has outpaced regulatory safeguards in many jurisdictions.
The Head of Legal, Enforcement and Regulations at the NDPC, Babatunde Bamigboye, signed the statement confirming Nigeria’s participation in what the commission described as one of the most significant international collaborations on AI governance to date. The Global Privacy Assembly, which brings together data protection authorities from across the world, has designated the crackdown on harmful synthetic media as a priority enforcement area.
According to the commission, the joint action follows mounting global concern about privacy risks linked to artificial intelligence tools now capable of generating highly realistic images and videos of identifiable individuals with minimal technical expertise. These technologies, once confined to research laboratories, have become widely accessible through consumer applications and online platforms.
The statement noted that such technologies have increasingly been misused to produce non-consensual imagery, defamatory materials and other harmful content, particularly targeting children and vulnerable persons. The commission warned that the same tools enabling creative expression and innovation are being weaponised to violate personal privacy and dignity.
Deepfake technology, which uses deep learning to create or alter content so that it appears authentic, has evolved rapidly since the term first emerged in 2017. What began as an academic exercise in generative adversarial networks has transformed into a global privacy and security concern affecting individuals, corporations and governments.
The technology typically involves training AI systems on large datasets of images or videos of a target individual, enabling the system to generate new content showing that person saying or doing things they never actually said or did. While early deepfakes were often detectable due to visual artefacts, recent advances have produced synthetic media increasingly indistinguishable from authentic recordings.
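The adversarial training described above can be sketched in miniature. The toy Python below is a hypothetical illustration, not a real deepfake system: "authentic" images are replaced by numbers drawn from a target distribution, the "generator" is a single parameter, and the "discriminator" is a running estimate of what real data looks like. The point is only to show the feedback loop in which the generator drifts toward whatever the discriminator rates as realistic.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # stands in for the distribution of "authentic" data


def real_sample() -> float:
    """Draw one 'authentic' data point (a stand-in for a real image)."""
    return random.gauss(REAL_MEAN, 1.0)


def train(steps: int = 5000, lr: float = 0.01) -> float:
    gen_mean = 0.0  # generator starts far from the real distribution
    est_mean = 0.0  # discriminator's running estimate of real data
    for _ in range(steps):
        # Discriminator "learns" by tracking the mean of real samples.
        est_mean += 0.05 * (real_sample() - est_mean)
        # Generator emits a fake, then nudges itself toward the region
        # the discriminator currently considers realistic.
        fake = random.gauss(gen_mean, 0.5)
        gen_mean += lr if est_mean > fake else -lr
    return gen_mean


trained = train()
print(round(trained, 2))  # drifts toward REAL_MEAN
```

Real systems replace the single parameter with a neural network holding millions of parameters trained on thousands of images of the target, which is why outputs have become so difficult to distinguish from genuine footage.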
The NDPC’s statement highlighted that these capabilities have created unprecedented privacy challenges. Individuals can now be digitally inserted into pornographic content without their knowledge or consent. Corporate executives can be depicted making false statements. Political figures can appear to say things that could influence elections or incite violence.
For Nigeria, which has witnessed the weaponisation of manipulated media in political campaigns and social conflicts, the endorsement of international standards represents a significant step toward coordinated enforcement. The commission noted that the cross-border nature of digital content means domestic regulation alone cannot adequately address the threat.
The NDPC framed its participation as part of broader efforts to encourage responsible use of artificial intelligence while protecting Nigerians’ privacy rights. The commission added that the Minister of Communications, Innovation and Digital Economy, Bosun Tijani, had earlier led the development of a national strategy to guide the country’s adoption of AI technologies.
That strategy, unveiled in 2024, aims to position Nigeria as a competitive player in the global AI economy while establishing guardrails against misuse. The National Artificial Intelligence Strategy outlines plans for investment in research and development, skills training, and regulatory frameworks that balance innovation with public interest protections.
Speaking on the new international standards, the National Commissioner of the NDPC, Vincent Olatunji, said compliance audit mechanisms under the Nigeria Data Protection Act would be used to monitor responsible AI deployment in the country. The Act, signed into law in June 2023, established the NDPC as a full-fledged regulatory agency with powers to enforce data protection principles across both public and private sectors.
Section 5 of the Nigeria Data Protection Act empowers the commission to issue regulations and enforcement orders, conduct investigations, and impose administrative penalties for violations. Olatunji indicated that these powers would be deployed to ensure organisations deploying AI systems comply with the new international framework.
The NDPC statement urged organisations deploying AI systems to introduce strong safeguards, ensure transparency in how the technology is used, and provide clear mechanisms for removing harmful content while complying with data protection laws. These requirements align with the accountability principle embedded in the Nigeria Data Protection Act, which mandates that data controllers demonstrate compliance with data protection principles rather than merely professing adherence.
Specifically, the commission called on organisations to implement technical and organisational measures that prevent their AI systems from being used to generate non-consensual content. This includes proper testing of systems before deployment, ongoing monitoring of outputs, and rapid response mechanisms when harmful content is identified.
Transparency requirements demand that organisations disclose when content has been generated or modified by AI, enabling individuals to distinguish between authentic and synthetic media. The commission noted that such transparency is essential for maintaining trust in digital communications and protecting individuals from manipulation.
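In practice, such disclosure often takes the form of provenance metadata attached to generated files. The sketch below is an assumption about what a minimal labelling step might look like; the field names are an ad-hoc illustrative schema, not any published standard, and a production system would more likely adopt an established provenance framework.

```python
from datetime import datetime, timezone


def label_synthetic(metadata: dict, tool_name: str) -> dict:
    """Attach an AI-generation disclosure to content metadata.

    Field names here are illustrative only; they do not follow any
    regulator-mandated schema.
    """
    labelled = dict(metadata)  # copy so the original record is untouched
    labelled["ai_generated"] = True
    labelled["generation_tool"] = tool_name
    labelled["disclosure"] = "This content was generated or modified by AI."
    labelled["labelled_at"] = datetime.now(timezone.utc).isoformat()
    return labelled


tagged = label_synthetic({"title": "promo-clip"}, "example-model")
print(tagged["ai_generated"])  # True
```

The harder enforcement question is ensuring such labels survive re-encoding and re-uploading across platforms, which is why regulators emphasise organisational processes alongside technical marking.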
The requirement for clear removal mechanisms addresses the practical challenge faced by individuals whose images have been misused. Without accessible procedures for requesting removal, victims of deepfake abuse often find themselves powerless to stop the spread of harmful content across platforms and jurisdictions.
Nigeria’s engagement with AI governance builds on several years of data protection evolution. The journey began with the Nigeria Data Protection Regulation issued by the National Information Technology Development Agency in 2019, which established initial data protection obligations. That regulation was superseded by the Nigeria Data Protection Act 2023, which created a statutory framework with stronger enforcement mechanisms.
The country’s Data Protection Authority, now a full commission, has progressively expanded its focus from traditional data protection concerns to emerging technologies. Participation in the Global Privacy Assembly dates back to 2021, when Nigeria became a member of the international body that brings together data protection authorities from over 140 countries.
The International Enforcement Cooperation Working Group, which coordinates the current initiative, was established by the Global Privacy Assembly to facilitate cross-border enforcement actions and harmonise regulatory approaches. Its deepfake-focused work stream reflects recognition that synthetic media represent one of the most urgent challenges facing privacy regulators globally.
Nigeria joins over 60 nations in endorsing standards that aim to address what experts describe as an enforcement gap in AI governance. While several countries have enacted laws targeting deepfakes, enforcement has often been hampered by jurisdictional limitations and the speed at which content spreads across borders.
The European Union’s Artificial Intelligence Act, which entered into force in 2024, classifies certain AI applications by risk level and imposes transparency obligations on providers of deepfake systems. The United States has seen patchwork regulation at state level, with California, Texas and other states enacting laws targeting specific deepfake harms such as election interference and non-consensual intimate imagery.
China has taken perhaps the most comprehensive approach, with regulations requiring identification and labelling of synthetic content dating back to 2020. Deep synthesis providers must verify user identities and ensure generated content does not violate laws or social morality.
The Global Privacy Assembly’s framework aims to build on these national efforts by establishing common standards for enforcement cooperation. Participating authorities commit to sharing information about cross-border cases, coordinating investigations, and developing consistent approaches to sanctions and remedies.
For Nigerian organisations deploying AI systems, the endorsement of international standards carries practical implications. Companies developing or using generative AI tools must now ensure their practices align not only with domestic law but with expectations set by global regulatory peers.
The NDPC indicated that compliance audit mechanisms would be deployed to monitor responsible AI deployment. Organisations can expect increased scrutiny of their AI systems, particularly those capable of generating images or videos of identifiable individuals. Sectors likely to face early attention include media and entertainment, financial services, and any platform hosting user-generated content.
For ordinary Nigerians, the commission reiterated its commitment to protecting privacy rights while supporting innovation in emerging technologies. The statement emphasised that individuals whose images have been misused through deepfake technology should have accessible channels for seeking redress, whether through the commission’s complaint mechanisms or through platform removal procedures.
The protection of children and vulnerable persons received specific mention in the commission’s statement, reflecting global concern about the targeting of minors through synthetic media. Cases documented internationally include deepfake pornography created using minors’ images scraped from social media, as well as financial scams targeting elderly individuals through AI-generated impersonations of family members.
Despite the international cooperation framework, significant enforcement challenges remain. The speed of AI development continues to outpace regulatory responses, with each advance in generation technology creating new possibilities for misuse. Detection tools, while improving, often struggle to keep pace with generation capabilities.
Jurisdictional questions persist even with enhanced cooperation. Content hosted on servers in one country, created by developers in another, and affecting victims in a third presents complex questions of which authority should take the lead. The Global Privacy Assembly’s framework attempts to address these questions through agreed protocols for case allocation and information sharing.
The NDPC’s statement acknowledged these challenges while affirming Nigeria’s commitment to playing an active role in the international regulatory community. By joining the coordinated action, Nigeria positions itself as a participant in shaping global standards rather than merely responding to them after they have been set by others.
The commission did not specify timelines for implementing the new standards or indicate whether domestic regulations would be amended to reflect the international framework. However, Olatunji’s reference to compliance audit mechanisms suggests that enforcement actions under existing law will increasingly reflect the principles articulated in the global initiative.
