WASHINGTON, D.C. – U.S. Senator Gary Peters (D-MI), Chairman of the Homeland Security and Governmental Affairs Committee, released a report examining how hedge funds use artificial intelligence (AI) to inform trading decisions and the potential impacts that use could have on market stability.

The report finds that the development of AI technology is outpacing the work of regulators, who have only recently begun to examine how AI is used in their respective industries and how current regulations may apply to its use. The report also finds a lack of baseline standards to ensure these systems do not produce unintended risks. As more hedge funds use AI for increasingly diverse and more advanced purposes, the report finds that associated risks within the financial services sector could grow without sufficient baseline requirements and safeguards. The investigation identifies reforms needed to close current gaps in protocols and to update existing requirements so that hedge funds adopt evolving technologies safely. While the report focuses on the use of AI by hedge funds, the risks it identifies represent concerns applicable across the financial services sector.
“As hedge funds and the financial sector at large increasingly use AI to inform trading decisions, it is critical that there are safeguards in place to ensure the technology is being used in a way that minimizes potential risks to individuals and to market stability itself,” said Senator Peters. “My report and recommendations will help encourage responsible development, use, and oversight of AI across the financial industry by identifying needed reforms to establish a cohesive regulatory framework.”
READ THE FULL REPORT: “AI in the Real-World: Hedge Funds’ Use of AI in Trading”
The report’s key findings include:
- Hedge funds use different terms to name and define their AI-based systems: Hedge funds use AI to inform aspects of trading decisions such as pattern identification and portfolio construction. They use a variety of terms to name these systems, such as expert systems, algorithmic systems, and optimizers. While these systems all fall under the definition of AI set out by Congress in the National Defense Authorization Act for Fiscal Year 2019, several companies do not consider these technologies to be AI.
- Hedge funds do not have uniform requirements for, or a shared understanding of, when human review is necessary in trading decisions: Hedge funds that Majority Committee staff spoke to incorporate human review during the development and deployment of AI systems. However, variations in how and when humans are involved during these processes raise questions regarding accuracy, efficacy, and safety.
- Existing and proposed regulations concerning AI in the financial sector fail to classify technologies based on their associated risk levels: As a result, the public lacks clarity on the degree and scope of risks related to the use of AI and machine learning strategies in the financial sector.
- Regulators have begun to examine regulations for potential gaps in authority, but have not sufficiently clarified how current regulations apply to hedge funds’ use of AI in trading decisions: It is unclear how the existing regulatory framework for hedge funds applies to the growing use of sophisticated AI systems. While regulators have recently begun examining current regulations for potential gaps and identifying areas for reform to better accommodate the expansion of advanced AI uses, they have only done so in a select few instances.
The report’s key recommendations include:
- Create common definitions for hedge funds’ systems that utilize AI: The U.S. Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) should establish guidelines and standards for how hedge funds name and refer to trading systems that utilize AI. These regulators should also require hedge funds to identify systems that fall under the FY 2019 NDAA definition of “artificial intelligence.”
- Create AI operational baselines and establish a system for accountability in AI deployment: Regulators should create operational baselines that address how hedge funds test and review the AI systems they use to inform trading decisions. They should also require best practices and version control frameworks for algorithms and AI technologies, so that companies manage and track changes to deployed technologies.
- Require internal risk assessments that identify levels of risk for various use cases: Regulators should develop a risk assessment framework, adhering to the principles in the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, to address the risks AI technologies pose to internal operations and to broader financial market security.
- Clarify the authority of current regulations: Regulators should continue to examine potential gaps in regulations and propose rules to address the unique concerns posed by AI and AI-related technologies. These reviews should examine risks both to investors and to the broader financial market.
In his role as Chairman of the Homeland Security and Governmental Affairs Committee, Peters has held several hearings on the increasing role of AI, including the use of AI in federal acquisition and government operations, as well as broader uses of and risks associated with AI. Peters has also led bipartisan legislation to encourage responsible development, use, and oversight of AI. Chairman Peters’ Advancing American AI Act, AI Training Act, and AI Scholarship for Service Act each became law in the 117th Congress. This Congress, Peters has introduced the AI Leadership to Enable Accountable Deployment (AI LEAD) Act, the AI Leadership Training Act, the Transparent Automated Governance (TAG) Act, and the PREPARED for AI Act. Together, these laws and bipartisan bills encourage the responsible development and deployment of AI by the federal government.
###