Optimizing the Value of AI Solutions for the Public Sector

Without a doubt, 2023 has shaped up to be generative AI’s breakout year. Less than 12 months after the introduction of generative AI large language models such as ChatGPT and PaLM, image generators like DALL-E, Midjourney, and Stable Diffusion, and code generation tools like OpenAI Codex and GitHub Copilot, organizations across every industry, including government, are beginning to leverage generative AI regularly to increase creativity and productivity.

Earlier this month, I had the opportunity to lead a roundtable discussion at the Public Sector Network (PSN) 2023 Government Innovation Show in Washington, DC. There, I met with IT leaders from multiple lines of business and agencies across the US federal government to discuss optimizing the value of AI in the public sector. I’ll highlight some key insights and takeaways from my conversations in the paragraphs that follow.

Predictably, the roundtable participants were guardedly optimistic about the potential for generative AI to accelerate their agencies’ missions. At the same time, most of the public servants I spoke with remained cautious about generative AI’s current limitations and underscored the need to ensure that models are used responsibly and ethically. As expected, most had experimented on their own with large language models (LLMs) and image generators. However, none of the government leaders had deployed generative AI solutions into production, nor did they have plans to do so in the coming months, despite numerous applicable use cases within the federal government.

The underlying reason? The perceived benefits—improved citizen service through chatbots and voice assistants, increased operational efficiency through automation of repetitive, high-volume tasks, and faster policymaking through synthesis of large amounts of data—are still outweighed by concerns about bias perpetuation, misinformation, fairness, transparency, accountability, security, and potential job displacement. And while agencies view embracing AI as a strategic imperative that will enable them to accelerate the mission, they also face the challenge of finding readily available talent and resources to build AI solutions.

Top operational problems in the public sector

Realizing the full potential of AI in the public sector requires tackling several operational problems that hinder government innovation and efficiency. Some of the primary operational problems highlighted at the PSN Government Innovation event include:

Civil Government: A major challenge facing civil government is an inefficient and cumbersome procurement process. The lack of clear guidelines and the need for strict compliance with regulations make procurement complex and time-consuming. AI-based procurement tools that use natural language processing to process RFIs, RFPs, and RFQs, and text classification to automate tasks such as supplier evaluation, contract analysis, and spend management, can shorten procurement cycles while improving transparency and efficiency.
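
As a concrete illustration of the text-classification piece, the sketch below trains a simple model to route incoming procurement documents by type. It uses scikit-learn, and the handful of labeled snippets and the RFI/RFP/RFQ labels are invented for demonstration only; a real deployment would train on an agency’s own document history.

```python
# Minimal sketch: routing procurement documents with a text classifier.
# The tiny labeled dataset is purely illustrative; a real system would
# train on an agency's own historical RFIs, RFPs, and RFQs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

documents = [
    "Request for information on cloud hosting capabilities and past performance",
    "Request for proposal: detailed technical approach and firm fixed price required",
    "Request for quotation for 500 ruggedized laptops, delivery within 90 days",
    "Seeking market research and vendor capability statements for data analytics",
    "Submit proposals including staffing plan, milestones, and cost breakdown",
    "Provide unit pricing and lead times for the attached parts list",
]
labels = ["RFI", "RFP", "RFQ", "RFI", "RFP", "RFQ"]

# TF-IDF features feed a simple linear classifier -- enough to demonstrate
# automated triage of incoming procurement documents.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(documents, labels)

incoming = "Please quote pricing and delivery schedule for network switches"
print(model.predict([incoming])[0])  # likely "RFQ" with this toy data
```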

Defense and Intelligence Communities: The defense and intelligence communities face significant cybersecurity threats, with malicious actors continually trying to penetrate their systems. AI-enabled threat intelligence can help prevent cyberattacks, identify threats, and provide early warning so that the necessary precautions can be taken. Innovations in AI-enabled data management also enable secure data sharing across these organizations and with partners, optimizing data analysis and intelligence collaboration. By analyzing huge volumes of data in real time, including network traffic, log files, security events, and endpoint data, AI systems can detect patterns and anomalies, helping to identify both known and emerging threats.
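
To make the anomaly-detection idea concrete, here is a minimal sketch that trains an unsupervised model on baseline traffic features and flags outliers. It uses scikit-learn’s IsolationForest, and the synthetic features (bytes transferred, connections per minute, failed logins) are stand-ins for the real network, log, and endpoint telemetry described above.

```python
# Minimal sketch: flagging anomalous network activity with an unsupervised model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Baseline traffic: [bytes transferred, connections per minute, failed logins]
normal_traffic = rng.normal(loc=[50_000, 30, 1], scale=[10_000, 5, 1], size=(1_000, 3))

# Train on traffic assumed to be mostly benign; contamination is the expected
# fraction of anomalies in the training data.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new events: an exfiltration-like burst and an ordinary session.
new_events = np.array([
    [900_000, 300, 40],  # unusually large transfer, many connections, failed logins
    [48_000, 28, 0],     # looks like routine traffic
])
print(detector.predict(new_events))  # -1 flags an anomaly, 1 means inlier
```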

State, Local, and Education: One of the most significant challenges faced by state and local governments and educational institutions is the growing demand for social services. AI can optimize citizen-centric service delivery by predicting demand and customizing service delivery, reducing costs and improving outcomes. Academic institutions can leverage AI tools to track student performance and deliver personalized interventions that improve student outcomes. AI/ML models can process large volumes of structured and unstructured data, such as academic records, learning management system activity, attendance and participation data, library usage and resource access, social and demographic information, and survey and feedback responses, to provide insights and recommendations that improve outcomes and student retention rates.
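
As a sketch of what such a model might look like, the example below fits a classifier on a toy table of student features and scores a hypothetical new student’s retention risk. The feature names and values are invented for illustration only; a real model would draw on the academic, attendance, LMS, and survey data described above.

```python
# Minimal sketch: identifying students at risk of not being retained so that
# advisors can intervene early.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Toy dataset: one row per student.
data = pd.DataFrame({
    "gpa":             [3.6, 2.1, 3.9, 1.8, 2.9, 3.2, 2.4, 3.8],
    "attendance_rate": [0.95, 0.60, 0.98, 0.55, 0.80, 0.90, 0.65, 0.97],
    "lms_logins_week": [12, 3, 15, 2, 7, 10, 4, 14],
    "library_visits":  [4, 0, 6, 1, 2, 3, 1, 5],
    "retained":        [1, 0, 1, 0, 1, 1, 0, 1],  # 1 = returned next term
})

X, y = data.drop(columns="retained"), data["retained"]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score a hypothetical new student; a low retention probability suggests a
# candidate for personalized outreach.
new_student = pd.DataFrame({
    "gpa": [2.3], "attendance_rate": [0.62], "lms_logins_week": [3], "library_visits": [1],
})
print(model.predict_proba(new_student)[0, 1])
```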

My final question to the roundtable was, “What should government agencies do to optimize the value of AI today while balancing the inherent risks and limitations facing them?” Our government leaders had several suggestions:

  1. Start small. Limit access and capabilities initially. Start with narrow, low-risk use cases. Slowly expand capabilities as benefits are proven and risks addressed.
  2. Improve dataset quality. Ensure you can trust your data by using only diverse, high-quality training data that represents different demographics and viewpoints. Make sure to audit data regularly.
  3. Develop mitigation strategies. Have plans to address issues like harmful content generation, data abuse, and algorithmic bias. Disable models if serious problems occur.
  4. Identify operational problems AI can solve. Identify and prioritize use cases by their value to the organization, expected impact, and feasibility.
  5. Establish clear AI ethics principles and policies. Form an ethics review board to oversee AI projects and ensure they align with ethical values. Update policies as needed when new challenges emerge.
  6. Implement rigorous testing. Thoroughly test generative AI models for errors, bias, and safety issues before deployment. Continuously monitor models post-launch.
  7. Increase AI model explainability. Employ techniques like LIME to better understand model behavior and make key decisions interpretable (see the sketch after this list).
  8. Collaborate across sectors. Partner with academia, industry, and civil society to develop best practices. Learn from each other’s experiences.
  9. Enhance AI expertise within government. Hire technical talent. Provide training on AI ethics, governance, and risk mitigation.
  10. Communicate transparently with the public. Share progress updates and involve citizens in AI policymaking. Build public trust through education on AI.
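
To illustrate point 7, here is a minimal sketch of using LIME’s tabular explainer to surface which features drove an individual prediction. It assumes the third-party lime package is installed (pip install lime), and the benefits-eligibility scenario, feature names, and decision rule are invented for demonstration.

```python
# Minimal sketch: explaining a single model decision with LIME.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(seed=0)
feature_names = ["household_income", "household_size", "months_unemployed"]

# Synthetic training data and a simple eligibility rule for the model to learn.
X = np.column_stack([
    rng.uniform(10_000, 90_000, 500),  # household_income
    rng.integers(1, 7, 500),           # household_size
    rng.integers(0, 24, 500),          # months_unemployed
])
y = ((X[:, 0] < 40_000) & (X[:, 2] > 3)).astype(int)  # 1 = eligible

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["ineligible", "eligible"],
    mode="classification",
)

# Explain one applicant's prediction: which features pushed the decision?
applicant = np.array([32_000, 4, 8])
explanation = explainer.explain_instance(applicant, model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Local explanations like this make it easier to document why a model recommended a given outcome, which supports the transparency and accountability goals in points 5 and 10.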

The Year Ahead

The next 12 months hold tremendous potential for the public sector with generative AI. As the technology continues to advance rapidly, government agencies have an opportunity to harness it to transform how they operate and serve citizens.

Learn more about how Cloudera can help you on your AI journey: Enterprise AI | Cloudera. Trust your data. Trust your enterprise AI.

Steve DeVoir
Managing Director Industry Solutions - Public Sector
