The U.S. Department of Commerce announced the release of new guidance and software to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems.
The Department’s National Institute of Standards and Technology (NIST) released three final guidance documents, first issued in April for public comment, as well as a draft guidance document from the U.S. AI Safety Institute that is intended to help mitigate risks. NIST is also releasing a software package designed to measure how adversarial attacks can degrade the performance of an AI system. In addition, Commerce’s U.S. Patent and Trademark Office (USPTO) issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including AI.
“Under President Biden and Vice President Harris’ leadership, we at the Commerce Department have been working tirelessly to implement the historic Executive Order on AI and have made significant progress in the nine months since we were tasked with these critical responsibilities,” said U.S. Secretary of Commerce Gina Raimondo. “AI is the defining technology of our generation, so we are running fast to keep pace and help ensure the safe development and deployment of AI. Today’s announcements demonstrate our commitment to giving AI developers, deployers, and users the tools they need to safely harness the potential of AI, while minimizing its associated risks. We’ve made great progress, but have a lot of work ahead. We will keep up the momentum to safeguard America’s role as the global leader in AI.”
NIST’s releases cover varied aspects of AI technology. Two were made public today for the first time. One is the initial public draft of a guidance document from the U.S. AI Safety Institute, intended to help AI developers evaluate and mitigate the risks stemming from generative AI and dual-use foundation models — AI systems that can be used for either beneficial or harmful purposes. The other is a testing platform designed to help AI system users and developers measure how certain types of attacks can degrade the performance of an AI system. Of the three remaining releases, two are guidance documents designed to help manage the risks of generative AI — the technology that enables many chatbots as well as text-based image and video creation tools — and serve as companion resources to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF). The third proposes a plan for U.S. stakeholders to work with others around the globe on AI standards.
“For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation.”
USPTO’s guidance update will assist USPTO personnel and stakeholders in determining the subject matter eligibility of AI inventions under patent law (35 U.S.C. § 101). This latest update builds on previous guidance by bringing further clarity and consistency to how the USPTO and applicants should evaluate subject matter eligibility of claims in patent applications and patents involving inventions related to AI technology. The guidance update also introduces three new examples of how to apply this guidance across a wide range of technologies.
“The USPTO remains committed to fostering and protecting innovation in critical and emerging technologies, including AI,” said Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO. “We look forward to hearing public feedback on this guidance update, which will provide further clarity on evaluating subject matter eligibility of AI inventions while incentivizing innovations needed to solve world and community problems.”
The soon-to-be-published report from Commerce’s National Telecommunications and Information Administration (NTIA) will review the risks and benefits of dual-use foundation models whose model weights are widely available (i.e., “open-weight models”), and will develop policy recommendations for maximizing those benefits while mitigating the risks. Open-weight models allow developers to build upon and adapt previous work, broadening the availability of AI tools to small companies, researchers, nonprofits, and individuals.