As innovation in artificial intelligence (AI) outpaces news cycles and grabs public attention, a framework for its responsible and ethical development and use has become increasingly critical to ensuring that this unprecedented technology wave reaches its full potential as a positive contribution to economic and societal progress.
The European Union has already been working to enact laws around responsible AI; I shared my thoughts on those initiatives nearly two years ago. At the time, I described the AI Act, as it is known, as "an objective and measured approach to innovation and societal considerations." Today, leaders of technology businesses and the United States government are coming together to map out a unified vision for responsible AI.
The power of generative AI
OpenAI’s release of ChatGPT captured the imagination of technology innovators, business leaders and the public last year, and consumer interest in and understanding of the capabilities of generative AI exploded. However, as artificial intelligence goes mainstream, including as a political issue, and given humans’ propensity to experiment with and test systems, the risks of misinformation, privacy harms, cybersecurity threats and fraud can quickly become an afterthought.
In an early effort to address these potential challenges and ensure responsible AI innovation that protects Americans’ rights and safety, the White House has announced new actions to promote responsible AI.
In a fact sheet released by the White House last week, the Biden-Harris administration outlined three actions to “promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety.” These include:
- New investments to power responsible American AI R&D.
- Public assessments of existing generative AI systems.
- Policies to ensure the U.S. Government is leading by example in mitigating AI risks and harnessing AI opportunities.
Regarding new investments, the National Science Foundation’s $140 million in funding to launch seven new National AI Research Institutes pales in comparison to what has been raised by private companies.
While directionally correct, the U.S. government’s investment in AI broadly is microscopic compared to other countries’ government investments, namely China’s, which began in 2017. An immediate opportunity exists to amplify the impact of investment through academic partnerships for workforce development and research. The government should fund AI centers alongside academic and corporate institutions already at the forefront of AI research and development, driving innovation and creating new opportunities for businesses with the power of AI.
Collaborations between AI centers and top academic institutions, such as MIT’s Schwarzman College and Northeastern’s Institute for Experiential AI, help to bridge the gap between theory and practical application by bringing together experts from academia, industry and government to collaborate on cutting-edge research and development projects with real-world applications. By partnering with major enterprises, these centers can help companies better integrate AI into their operations, improving efficiency, reducing costs and delivering better consumer outcomes.
Additionally, these centers help to educate the next generation of AI experts by providing students with access to state-of-the-art technology, hands-on experience with real-world projects and mentorship from industry leaders. By taking a proactive and collaborative approach to AI, the U.S. government can help shape a future in which AI enhances, rather than replaces, human work. As a result, all members of society can benefit from the opportunities created by this powerful technology.
Model assessment is critical to ensuring that AI models are accurate, reliable and free of bias, all of which is essential for successful deployment in real-world applications. For example, imagine an urban planning use case in which generative AI is trained on data from redlined cities with historically underrepresented poor populations: the model will simply reproduce those discriminatory patterns. The same goes for bias in lending, as more financial institutions use AI algorithms to make lending decisions.
If these algorithms are trained on data that discriminates against certain demographic groups, they may unfairly deny loans to those groups, leading to economic and social disparities. These are just a few examples of bias in AI, but the issue must stay top of mind regardless of how quickly new AI technologies and techniques are developed and deployed.
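To make the lending example concrete, here is a minimal sketch of one common bias screen, the "four-fifths" (disparate impact) rule, applied to hypothetical approval outcomes. The data and function names are illustrative assumptions, not any lender's real process, and a production fairness audit would go far beyond this single metric.

```python
# Hypothetical example: screening lending decisions for disparate impact
# using the "four-fifths rule". Decisions are 1 = approve, 0 = deny.
# All data below is illustrative, not drawn from any real lender.

def approval_rate(decisions):
    """Fraction of applications approved."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher group's.
    By convention, values below 0.8 are a red flag for potential bias."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    lo, hi = min(rate_a, rate_b), max(rate_a, rate_b)
    return lo / hi if hi > 0 else 1.0

# Illustrative outcomes for two demographic groups of applicants.
group_a = [1, 1, 0, 1, 1, 1, 0, 1]  # 6 of 8 approved -> rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3 of 8 approved -> rate 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Potential disparate impact: examine the model and its training data.")
```

A check like this is cheap to run on every model release; the harder work, as the administration's assessment initiative implies, is tracing a failing ratio back to biased training data and correcting it.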
To combat bias in AI, the administration has announced a new opportunity for model assessment at the DEF CON 31 AI Village, a forum for researchers, practitioners and enthusiasts to come together and explore the latest advances in artificial intelligence and machine learning. The model assessment is a collaborative initiative with some of the key players in the space, including Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI and Stability AI, leveraging a platform offered by Scale AI.
In addition, it will measure how the models align with the principles and practices outlined in the Biden-Harris administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. This is a positive development: the administration is directly engaging with enterprises and capitalizing on the expertise of technical leaders at what have effectively become the corporate AI labs.
With respect to the third action regarding policies to ensure the U.S. government is leading by example in mitigating AI risks and harnessing AI opportunities, the Office of Management and Budget is to draft policy guidance on the use of AI systems by the U.S. government for public comment. Again, no timeline or details for these policies have been given, but an executive order on racial equity issued earlier this year is expected to be at the forefront.
The executive order includes a provision directing government agencies to use AI and automated systems in a manner that advances equity. For these policies to have a meaningful impact, they must include incentives and repercussions; they cannot merely be optional guidance. For example, NIST standards for security are effective requirements for deployment by most governmental bodies. Failure to adhere to them is, at minimum, incredibly embarrassing for the individuals involved and grounds for personnel action in some parts of the government. Governmental AI policies, as part of NIST or otherwise, must be comparable to be effective.
Additionally, the cost of adhering to such regulations must not be an obstacle to startup-driven innovation. For instance, what can be achieved in a framework in which the cost of regulatory compliance scales with the size of the business? Finally, as the government becomes a significant buyer of AI platforms and tools, it is paramount that its policies become the guiding principle for building such tools. Make adherence to this guidance a literal, or even effective, requirement for purchase (e.g., the FedRAMP security standard), and these policies can move the needle.
As generative AI systems become more powerful and widespread, it is essential for all stakeholders — including founders, operators, investors, technologists, consumers and regulators — to be thoughtful and intentional in pursuing and engaging with these technologies. While generative AI, and AI more broadly, has the potential to revolutionize industries and create new opportunities, it also poses significant challenges, particularly around issues of bias, privacy and ethical considerations.
Therefore, all stakeholders must prioritize transparency, accountability and collaboration to ensure that AI is developed and used responsibly and beneficially. This means investing in ethical AI research and development, engaging with diverse perspectives and communities, and establishing clear guidelines and regulations for developing and deploying these technologies.