So far, developments in AI have been driven predominantly by private-sector technology actors, but growing interest from African governments has started conversations around “AI strategies” for growth and governance across the continent. The application of AI to a defined problem is rarely neutral. Navigating these complexities calls for a typology of positive and negative AI in the governance context: positive AI is the use of such systems for broad social benefit, while negative AI is their use for social division, suppression, or even violence.
Positive Impacts in the Health Sector
Positive AI applications in Africa have garnered most of the media coverage. Start-ups in Ghana and Nigeria are addressing doctor shortages and the lack of medical access for rural Africans. They have begun to use AI to empower doctors and to leverage growing mobile phone ownership as a vehicle for collecting data, improving administrative efficiency, and expanding treatment coverage. In Kenya and Nigeria, AI-focused start-ups have begun working on agricultural planning, reducing financial transaction costs, and improving public transportation access and efficiency. Education has also been a focus of start-ups like M-Shule and Tuteria, which provide accessible training and learning platforms to support teachers in the classroom. Governments in countries with growing AI ecosystems, such as Ghana, Nigeria, Kenya, and South Africa, have taken a supportive but cautious approach: monetary support for AI research and development and the promotion of STEM education have taken priority over integrating AI within government agencies.
Some Potential Threats
While the positive applications above seek to close gaps in development, AI’s power to augment skills and offset resource deficits can also be harnessed by challengers to the state and by states that seek to suppress political opposition. Deepfakes, artificially generated videos, voice recordings, and data, could be used to inflame existing ethnic and religious divisions and to attack nascent democratic institutions. Imagine, for example, a scenario in which a supporter of Boko Haram fabricates an inflammatory audio recording attributed to government authorities in an effort to stoke religious division. Such tactics may prove difficult to manage during contentious elections in transitioning democracies, especially when amplified through popular social media platforms.
Government Misuse of Artificial Intelligence Technologies
Alongside AI-generated misinformation, governments may also seek to use AI to monitor and further suppress political opposition or marginalized groups. With the help of China, the Zimbabwean government has begun collecting individuals’ facial imagery for use in existing monitoring and facial recognition applications, a program that has human rights advocates worried about potential misuse once the system comes online.
Use of Artificial Intelligence for Warfare
Finally, much like the proliferation of telecommunications technology, negative AI applications may lower the cost of violence for state and non-state actors alike. Cyber-intelligence gathering, automated or augmented small arms, and AI-powered drones could all serve as vehicles for conducting progressively more violent operations at lower cost and with less risk to the aggressor, raising serious ethical issues.
Learn more about Artificial Intelligence at HURU School AI Picodegree.