We’ve all heard how almost any organization, of any size, anywhere, can leverage artificial intelligence (AI) to increase productivity and revenue. Teachers can modify course materials to meet students’ learning needs, insurers can increase capacity and reduce fraudulent claims, utilities can predict equipment downtime and prevent outages, and consumer packaged goods (CPG) companies can predict what their customers will want to buy next.
The applications of AI to business are endless, limited only by the fact that bad data used to train AI can lead to algorithmic bias and trust issues. In the past, we’ve discussed the dangers of AI data bias and the way forward to create responsible and trustworthy AI. Poor data and clunky deep learning processes are two causes for concern. They can make AI, in the words of Elon Musk, “much more dangerous than nuclear weapons.” That observation matters, especially since data is now widely recognized as a strategic asset and governments are among the biggest owners of data. The relationship between data, AI, and governments is a delicate one – with the potential to bring about revolutionary change or lead to unwanted outcomes.
Governments have rich and diverse data repositories. They have data on industrial production, natural resources, biodiversity, GHG emissions, space exploration, citizens’ health, the use of languages and ethnicity, movements of people, employment, education, housing, trade, investments, markets, patents, transportation networks, law and order, poverty levels… the list goes on. Governments spend billions of dollars building smart cities. They sign measures to mitigate global warming. They are responsible for the fight against pandemics and the well-being of their citizens. Governments are empowered to solve the most complex problems facing humanity.
Whether taxpayer dollars are spent wisely depends largely on the data available, its quality, and how machine learning, deep learning systems, and neural networks will use it. These form the foundation of AI.
The bright future of AI in government
The future of AI in government is bright. Technology enables the public sector to improve the efficiency with which governments deliver projects. Private tech companies are making huge strides in AI and in how it is applied to industry and the good of society. AI funding reached record levels in the second quarter of 2021, when more than 550 AI startups worldwide raised over $20 billion in investments. Collaboration between the private sector and governments will be essential to make smart and precise links between national needs, resources, and events. Governments can use the frameworks, software libraries, tools, models, hardware, testbeds, and skills that technology companies have to process their data and transform public administration.
This area of collaboration between the public and private sectors is rich in opportunities. Right now, according to a 2020 survey commissioned by Microsoft, only 4% of European public sector organizations have evolved their use of AI enough to transform their operations. This figure, plus or minus a few percentage points, is likely to hold true for the entire planet. But as more and more senior public sector leaders sponsor and dedicate budgets to AI programs, that number will improve dramatically. Expect these same leaders to seek professional assistance in prioritizing areas of AI application and encouraging collaboration with the private sector.
There are several examples of governments using AI successfully to make an impact on society. The Las Vegas health department used AI to extract information from millions of tweets to identify restaurants to inspect, replacing its old system that operated on a rotating basis. By better targeting which restaurants to inspect, the health department reduced food poisoning incidents by an estimated 9,000 and related hospitalizations by 500. In San Diego, a chatbot called Coptivity helps law enforcement officers access criminal information in seconds, a task that would take dispatchers up to 30 minutes (for example, running a license plate number). At Singapore’s National Cancer Center, AI helps improve health services by accurately detecting gastric cancer.
Related article: Responsible AI Focuses on Microsoft’s Data Science & Law Forum
Without trust, the benefits of AI won’t matter
Despite the benefits, not everything is smooth for AI and public-private partnerships. Earning the trust of society is the biggest hurdle before we can witness the rise of AI-powered governments. No society will trust systems that do not enshrine its ethical and moral values or that violate the fair and transparent use of data.
Therefore, to ensure that AI can increase efficiency, reduce risk, improve citizen experience, evolve and transform services, promote equality, and leverage unbiased data-driven decisions, governments should focus on defining the ethical boundaries within which the AI systems of their private sector partners operate. It means identifying the right data, improving data quality, eliminating data bias, and engaging civil society leaders to define and monitor ethical practices and ensure transparent solutions.
By implication, the foundation of any AI system must ensure that it is:
- Reliable: AI systems and their decisions must be explainable, designed in such a way that they can stop when their probabilistic outcomes do not conform to deterministic ethical principles.
- Collaborative: When faced with uncertainties or decisions contrary to ethical practices, AI should leave the decision-making to humans. Additionally, AI systems must be designed to provide users with the ability to determine how much of their data can be used and when it can be used.
- Sustainable: AI systems must be energy efficient and must not operate at the cost of environmental degradation.
- Scalable: AI systems must be able to make decisions in real time, using any data points deemed appropriate for decision-making.
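The “collaborative” principle above – leaving uncertain or ethically fraught decisions to humans – is often implemented as a simple confidence gate. The sketch below is illustrative only; the threshold value, names, and structure are assumptions for this example, not something prescribed by any of the governments or vendors mentioned here:

```python
from dataclasses import dataclass

# Illustrative cutoff: predictions below this confidence are
# routed to a human reviewer instead of being acted on automatically.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Decision:
    label: str          # the system's proposed outcome
    confidence: float   # probability the model assigned to that outcome
    needs_human: bool   # True when the system defers to a person

def decide(label: str, confidence: float) -> Decision:
    """Return the model's decision, flagging it for human review
    when confidence falls below the threshold (the 'collaborative'
    principle: uncertain cases are left to humans)."""
    return Decision(label, confidence, needs_human=confidence < CONFIDENCE_THRESHOLD)

# A high-confidence prediction proceeds; a low-confidence one is escalated.
auto = decide("approve_claim", 0.97)
review = decide("deny_claim", 0.62)
print(auto.needs_human)    # False
print(review.needs_human)  # True
```

In practice the gate would also log every deferred case, so civil society auditors can verify how often, and on which kinds of decisions, the system hands control back to people.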
The world of AI and its links to governments is evolving at a breakneck pace. After years of little regulation, governments are now moving quickly to define, regulate, and operationalize AI.
Related article: IBM, Microsoft Sign ‘Rome Appeal For Ethics In AI’: What Happens Next?
Emerging partnerships, principles
In early August, the world witnessed a document released by the Chinese government titled The implementation plan for building a government under the rule of law (2021-2025) (English translation here). It laid down the ground rules for integrating the Chinese state with digital technologies to provide public services. The regulations will force large tech companies to share their data with the government. The government will then use AI to scrutinize decisions related to public life (legislation, law enforcement, etc.). The world will be watching these developments closely. Norms for how governments partner with private organizations for data and AI technologies will be established quickly.
The methods and principles of public-private engagement will largely depend on the socio-political environment of nations. But one thing is certain: The path to a public-private partnership for AI may vary, but every government will start the journey sooner rather than later.
Kalyan Kumar (KK) is Global CTO & Head – Ecosystems at HCL Technologies. He is actively involved in product and technology strategy, the strategic partner ecosystem, startup incubation, Open Innovation / Open Source, and the Enterprise Technology Office, and supports the company’s organic and inorganic initiatives.