The overall issue: data science for development
Which structural changes must be considered in the design of public and private policies and strategies, as well as in the processes of knowledge production, diffusion and governance, so that we move effectively towards safer, cleaner and more resilient, cooperative and knowledge societies? In particular, which specific initiatives should be considered in the Global South?
This question drives the work plan associated with “K4P Alliances”, which relies on the hypothesis that the current challenges associated with increasing uncertainties in modern societies require a novel understanding of “Human Agency” and of the dynamics of emerging data ecologies integrated with complex network systems, adequate social norms and collective behaviours, so as to promote global well-being and accelerate the path towards carbon neutrality, avoiding a climate disaster.
This is because geospatial data and advanced data analytics can enable real-time measurement of trends in complex landscapes, including vulnerable urban areas and rural landscapes. This includes the use of high-resolution satellite imagery and data from other sources, including georeferenced mobile phones, combined with advanced data processing systems, artificial intelligence and computer vision approaches integrated with “ground truth” data from several sources[1].
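The pipeline cited in note [1] (the MOSAIKS approach) can be sketched in a few lines: compute cheap, task-agnostic random features from each image tile once, then fit an inexpensive ridge regression against ground-truth labels for whatever outcome is being measured. The sketch below is a minimal, purely illustrative toy — the tiles, labels, dimensions and filter counts are all synthetic assumptions, not part of the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for satellite image tiles: N single-band 8x8 tiles.
# (Real pipelines use large multispectral imagery; this is illustrative only.)
N, H, W = 200, 8, 8
tiles = rng.random((N, H, W))

# Hypothetical "ground truth" labels (e.g. a field-surveyed outcome),
# here constructed to depend on mean tile brightness plus a little noise.
y = 3.0 * tiles.mean(axis=(1, 2)) + 0.02 * rng.standard_normal(N)

# Step 1: task-agnostic random featurization, in the spirit of MOSAIKS:
# project each flattened tile through K fixed random filters and a ReLU.
K = 64
filters = rng.standard_normal((H * W, K))
feats = np.maximum(tiles.reshape(N, -1) @ filters, 0.0)
X = np.concatenate([feats, np.ones((N, 1))], axis=1)  # add an intercept

# Step 2: a cheap ridge regression links the shared features to the labels.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(K + 1), X.T @ y)

# In-sample fit; a real study would report cross-validated performance.
pred = X @ w
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

The design point is that the expensive featurization is computed once and shared across many prediction tasks, so measuring a new outcome over the same territory only costs a fresh, inexpensive regression fit.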
It is becoming well known that the virtuous combination of geospatial data, advances in AI and blockchain can lead to a better-governed digital age that achieves a higher level of common good than would otherwise be possible. Data science and AI have been changing our lives for years, and the revolution they are provoking tends to evolve exponentially. Some sixty years after the first scientific papers on AI were published, we now see numerous knowledge-intensive business services being developed and deployed at a fast pace. And this is not limited to the private sector: the digital transformation of the public sector is also ramping up at unprecedented levels.
Examples include data handling and analysis in public health, land register and sustainable land management for fire prevention, biodiversity management, protection of space assets, data analysis for consumer protection, or accident and disaster prevention, among many other areas of critical relevance in the public domain and public-private interactions. On the one hand, states as political authorities are designing policies and regulations to protect citizens from AI-related harms and risks, whilst on the other hand public administrations are showing a clear interest in using AI-enabled systems and technologies in order to improve their processes, services and policies.
It is in this context that the experience of many initiatives worldwide has shown a critical need to foster research of public interest among AI research communities, in close cooperation with public administrations. In addition, the social character of scientific knowledge is its greatest strength and the greatest reason we can trust it[2].
However, the massified use of AI-enabled innovations is not free of additional questions, because the “power it has to make us act in the ways it predicts, reduces our agency over the future”[3]. In predicting our behaviour, AI systems can end up changing it. Consequently, collective human wisdom needs to be strengthened, so that emerging regulatory frameworks for a decentralized digital age help promote critical approaches to AI, with clear accountability, clarity about boundaries and purpose, and responsibility[4]. This requires rethinking the techno-centric narrative of progress, embracing and harnessing uncertainty, and abandoning the fantasy of control over nature and the illusion of techno-centric dominance of AI-enabled innovations[5].
The issue clearly creates tensions between developers/promoters and human-led policy making, which need to be informed by negotiated trade-offs. Above all, it requires a transdisciplinary approach to collective behaviours[6] and consideration of “human agency” across economics, philosophy, law, science and technology studies, history and sociology, to engage with all the necessary ingredients of an emerging decentralized digital age and AI-enabled innovations.
Understanding knowledge as our common public good will allow citizens to be an integral part of, and key stakeholders in, future developments, and will drive policy-makers to better understand how decentralized digital networks and AI can be used and further developed to make public services more effective and seamless, cutting down on digital bureaucracy and giving citizens back their most precious asset: their time. In addition, it will drive new policy options targeted at enhancing the governance and regulation of decentralized digital networks, including in the public sector, aimed at ensuring high standards of conduct across all areas of public sector practice, promoting public sector effectiveness and delivering better services to users.
The key idea is that decentralized digital networks, together with AI, have the potential to contribute significantly to solving long-standing issues in the public sector, such as large unmanageable caseloads, administrative burdens, delays in service delivery and language barriers, through automated working processes as well as improved decision-making and service quality. For this vision to become a reality, the associated risks and challenges must be better understood, so that the secure and successful implementation and application of AI can be assured at large. Ultimately, reliance on decentralized digital networks and AI developments in design, production and even management must be combined with an unshakeable commitment to uphold the transparency and accountability standards in the public sector that ultimately sustain our democratic institutions.
These challenges and associated risks may be mitigated in implementations that use the practices, methods and tools generated by a new trend in research and innovation, that of “Responsible AI”, which underscores principles such as fairness, transparency and explainability, human-centeredness, privacy and security.
[1] See, for example, the work at CEGA at UC Berkeley, at https://cega.berkeley.edu/research/mosaiks-a-generalizable-and-accessible-approach-to-machine-learning-with-global-satellite-imagery/. In particular, Rolf et al. (2021), “A generalizable and accessible approach to machine learning with global satellite imagery”, Nature Communications 12:4392, https://doi.org/10.1038/s41467-021-24638-z.
[2] Naomi Oreskes (2019), Why Trust Science?, Princeton University Press.
[3] Helga Nowotny (2021), In AI We Trust: Power, Illusion and Control of Predictive Algorithms, Polity Press.
[4] Thelisson, E., Morin, J.-H., Rochel, J. (2019), “AI Governance: digital responsibility as a building block”, 2 DELPHI 167.
[5] Karamjit S. Gill (2022), book review, “Nowotny 2021: In AI We Trust”, AI & Society.
[6] See, for example, Bak-Coleman et al. (2021), “Stewardship of global collective behavior”, PNAS, June 21, 2021.