Contributed by Moses & Singer LLP
Aside from sector-specific regulatory schemes, the treatment of artificial intelligence continues to evolve under the distinctive requirements of general areas of US law, including:
Healthcare
Financial Services
Aerospace and Defence
Executive Office of the President
2019 Executive Order
Former President Trump issued an Executive Order on Maintaining American Leadership in AI in February 2019. This Executive Order was aimed at promoting the research, development and deployment of AI by directing federal agencies to focus their R&D efforts on AI, directing government agencies to enhance access to federal data for AI R&D purposes, directing the National Institute of Standards and Technology to spearhead the development of standards for AI, and developing regulatory practices to remove barriers to AI development.
2020 Executive Order
Former President Trump then issued an Executive Order in December 2020, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which expanded on the 2019 Executive Order in a manner intended to increase public trust in AI. This Executive Order set out principles for federal agencies to follow when developing and using AI, including:
Agency Policies
See 5.3 Regulatory Objectives.
The main national security concerns are keeping the USA a leader in the development and use of AI, retaining military superiority and preventing foreign countries from using AI to misuse data collected from US citizens.
In order to address these national security considerations, the NSCAI (see 5.1 Key Regulatory Agencies and 5.3 Regulatory Objectives) proposed that the USA increase export controls on EUV and ArF lithography equipment destined for China, grant the Treasury the authority to mandate CFIUS filings for non-controlling investments in AI from China, Russia and other competitor nations, and work with allies to incorporate similar protections against competitors.
Another consideration is protecting against data collection by foreign AI. For example, former President Trump banned WeChat and TikTok in 2020; President Biden overturned those bans in 2021 and instead signed orders requiring the Department of Commerce to launch national security reviews of any apps that have links to foreign adversaries.
The National AI Initiative Act of 2020 (NAIIA) became law on 1 January 2021. NAIIA is designed to promote US leadership in the research and development of AI by creating a programme to be co-ordinated across the federal government.
Some US states have enacted general legislation covering AI. For instance, in 2021 Alabama and Illinois enacted legislation establishing task forces to review and advise their respective governments on the use and development of AI.
Mississippi enacted legislation in 2021 that requires primary school curricula to include the study of robotics, AI and machine learning.
Colorado enacted legislation in 2021 that prohibits insurers from using AI to discriminate based on race, colour, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.
Some states have enacted legislation specific to autonomous vehicles – see 9.3 Autonomous Vehicles.
Recent federal legislative proposals have aimed at increasing transparency in AI and at reducing any potential disparate discriminatory impact resulting from the use of AI.
The United States House of Representatives passed the America Creating Opportunities for Manufacturing, Pre-Eminence in Technology, and Economic Strength Act of 2022 on 4 February 2022; it directs the National Institute of Standards and Technology (NIST) to study bias in AI and requires guidance from the NIST on how to reduce disparate impacts from AI.
The proposed Algorithmic Accountability Act of 2022, which has been introduced in the Senate, would require certain companies that use algorithms to report to the FTC and to conduct assessments of the impact of those algorithms. Additionally, White House officials have indicated that a Bill of Rights relating to AI may be in the works.
Proposed legislation has been introduced in several states including California, Massachusetts, Michigan, New Jersey, New York, Virginia, Vermont, and Washington, aimed at improving transparency and reducing disparate impact in the use of AI.
The United States Department of Commerce, acting pursuant to the NAIIA and through NIST and the National Artificial Intelligence Advisory Committee (NAIAC), has been tasked with developing a voluntary risk management framework for trustworthy AI systems and with advising the President and other federal agencies on key issues concerning AI.
The Federal Trade Commission (FTC), acting pursuant to Section 5 of the FTC Act as well as the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA), seeks to investigate the use of biased algorithms and to create compliance standards for companies to follow.
The United States Food and Drug Administration (FDA) is responsible for regulating medical devices in the USA. AI companies developing digital health products should recognise how recent regulatory changes may affect them and that the FDA is engaging industry to further refine its oversight approach. The FTC has issued recent guidance in the area of AI and ML, and through its enforcement actions and press releases has made clear its view that AI may pose issues that run afoul of the FTC Act’s prohibition against unfair and deceptive trade practices.
The National Security Commission on Artificial Intelligence (NSCAI) and the Government Accountability Office (GAO) advise the government to take certain actions at the domestic level to protect the privacy and civil rights of US citizens in the government’s deployment of AI.
What follows are examples of different definitions used for AI.
The NAIIA defines AI as, “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to – (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action”.
The United States Food and Drug Administration has broadly defined artificial intelligence as the science and engineering of making intelligent machines, especially intelligent computer programs (McCarthy, 2007), and has recognised that AI can use different techniques, including models based on statistical analysis of data, expert systems that primarily rely on "if-then" statements, and machine learning.
Machine learning is an artificial intelligence technique that can be used to design and train software algorithms to learn from and act on data. Software developers can use machine learning to create an algorithm that is "locked" (so that its function does not change) or "adaptive" (so that its behaviour can change over time based on new data). Real-world examples of artificial intelligence and machine learning technologies include image-analysis tools, speech recognition and recommendation systems.
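By way of a purely illustrative sketch (the class names and the toy threshold-based "model" below are hypothetical and are not drawn from FDA guidance), the following Python fragment contrasts a locked algorithm, whose behaviour is fixed once development is complete, with an adaptive algorithm, whose decision boundary can drift as it processes new data after deployment:

    # Hypothetical illustration of "locked" vs "adaptive" algorithms.
    class LockedClassifier:
        """The decision threshold is fixed at development time and never changes."""
        def __init__(self, threshold: float):
            self.threshold = threshold

        def predict(self, value: float) -> bool:
            return value >= self.threshold


    class AdaptiveClassifier:
        """The decision threshold is re-estimated from observations made after deployment."""
        def __init__(self, threshold: float):
            self.threshold = threshold
            self._seen = 0

        def predict(self, value: float) -> bool:
            return value >= self.threshold

        def update(self, value: float) -> None:
            # Maintain a running average of observed values, so the decision
            # boundary drifts with real-world data: the behaviour change that
            # oversight of adaptive algorithms is concerned with.
            self._seen += 1
            self.threshold += (value - self.threshold) / self._seen


    locked = LockedClassifier(threshold=0.5)
    adaptive = AdaptiveClassifier(threshold=0.5)
    for reading in [0.2, 0.9, 0.7, 0.4]:
        locked.predict(reading)    # behaviour never changes
        adaptive.predict(reading)
        adaptive.update(reading)   # behaviour changes over time
    print(locked.threshold, round(adaptive.threshold, 3))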
NIST focuses on cultivating trust in the design, development, use and governance of AI technologies and systems. NIST is developing a voluntary risk management framework (AI RMF) for companies to use to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services and systems. In developing the AI RMF, the NIST created an initial draft of the AI RMF and asked for comments on the draft through 29 April 2022. The NIST will use the comments to work towards a final AI RMF.
The NAIAC is focused on advising the President and the government on topics related to the NAIIA, including the progress of its implementation and the current state of the USA's competitiveness in AI.
The FTC issued a memo on 19 April 2021 titled “Aiming for truth, fairness, and equity in your company’s use of AI”, which laid out the FTC’s focus and the harms it is seeking to prevent. The FTC stated that the following should be used as guidelines when using AI:
The U.S. Food and Drug Administration (FDA) issued the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan from the Center for Devices and Radiological Health’s Digital Health Center of Excellence. Traditionally, the FDA reviews medical devices through an appropriate pre-market pathway, such as pre-market clearance (510(k)), De Novo classification, or pre-market approval. The FDA may also review and clear modifications to medical devices, including software as a medical device, depending on the significance of the modification and the risk it poses to patients.
NSCAI’s role in the development of AI is to present recommendations on how the USA can improve its approach to AI, based on national security considerations.
The matter is not relevant in this jurisdiction.
The matter is not relevant in this jurisdiction.
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) created a technical report, ISO/IEC TR 24028:2020, Information technology – Artificial intelligence – Overview of trustworthiness in artificial intelligence, to analyse factors that can affect the trustworthiness of AI. The technical report surveys current approaches to supporting trustworthiness in technical systems and discusses their potential application to AI. Further, it identifies gaps where those approaches do not fully address AI and considers how to address them in future standards work.
The Institute of Electrical and Electronics Engineers Standards Association, through its Artificial Intelligence Systems Committee, creates standards in order to prioritise ethical considerations when developing and using AI.
While the benefits of AI in healthcare are great, regulators need to consider protecting patients from defective diagnoses, preventing unacceptable uses of personal data and eliminating bias built into algorithms. AI-based products create additional privacy challenges, especially when de-identified data is used to try to address potential bias issues.
As more data is added to the AI systems, the potential to create identifiable data also increases, especially as the increased sophistication of AI systems has made it easier to create data linkages where such links did not previously exist.
The use and development of AI in healthcare poses unique challenges to companies that have ongoing obligations to safeguard protected health information, personally identifiable information and other sensitive information.
AI’s processes often require enormous amounts of data. As a result, the use of AI will frequently implicate the Health Insurance Portability and Accountability Act (HIPAA) and state-level privacy and security laws and regulations, and such data may need to be de-identified. AI could itself be used to automate the removal of personally identifiable information from recordings of patients in medical procedures, as illustrated in the sketch below.
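As a hedged, greatly simplified illustration only (the patterns below are hypothetical, rule-based rather than AI-driven, and fall far short of HIPAA's Safe Harbor de-identification requirements), an automated redaction step of the kind referred to above might be sketched in Python as follows:

    import re

    # Hypothetical, simplified redaction pass; real de-identification under HIPAA
    # (for example, the Safe Harbor method's 18 identifier categories) requires far more.
    PATTERNS = {
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    }

    def redact(text: str) -> str:
        # Replace each matched identifier with a bracketed label.
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(redact("Patient seen 3/14/2022, callback 212-555-0143, SSN 078-05-1120."))
    # -> "Patient seen [DATE], callback [PHONE], SSN [SSN]."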
Currently, there is no regulatory scheme that governs transparency. However, as mentioned above, the FTC has made clear that it will use its authority under Section 5 of the FTC Act to prevent deceptive business practices (see 5.1 Key Regulatory Agencies).
While there is no regulatory scheme at the moment, California passed SB 1001 in 2018, effective 1 July 2019, which made it “unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election”. However, a safe harbour exists if there is a disclosure that a bot is being used.
Although California is the only state to pass such a law, it may be indicative of the types of regulations that will follow from other states or the federal government.
Generally, the USA does not have the level of regulatory rules that European countries have. However, concerns regarding automated decision-making in AI-based facial recognition have grown over the years and have led legislatures at all levels to restrict or ban the use of facial recognition systems.
Illinois enacted its Biometric Information Privacy Act (BIPA) in 2008, which prohibits the collection and storage of biometric information without notice and consent. Biometric information includes retina scans, iris scans, fingerprints, palm prints, voice recognition, facial geometry recognition, DNA recognition, gait recognition and even scent recognition. Negligent violations of the BIPA carry a USD1,000 penalty, while wilful violations carry a USD5,000 penalty. Further, in 2019, Illinois enacted the Artificial Intelligence Video Interview Act, which requires employers to disclose to candidates if AI will or may be used to analyse the candidate’s interview, to explain how the AI will be used and to obtain the candidate’s consent.
In 2019, California’s AB 1215 placed a three-year moratorium on law enforcement’s use of any biometric information collected by an officer camera. The cities of Berkeley, CA and San Francisco, CA banned all government use of facial recognition technology, although San Francisco established an approval process for any future uses. In July 2021, New York enacted a state-level two-year moratorium on the use of facial recognition in schools.
These bans were adopted due to concerns relating to privacy and concerns relating to the inaccuracy of the automated decision-making of the AI-based facial recognition technology.
See 8.2 Facial Recognition and Biometrics for discussion of AI and HIPAA.
Liability may relate to the following:
Although the healthcare sector is heavily regulated, no regulations target the use of AI in healthcare settings. Several countries and organisations, including the USA, have proposed regulations addressing the use of AI in healthcare, but no regulations have been adopted.
In the USA, the FDA recently published a discussion paper for a proposed regulatory framework for modifications to AI/machine learning-based software as a medical device (SaMD). It is based upon practices from current FDA pre-market programs, including the 510(k), De Novo and pre-market approval (PMA) pathways. It utilises risk categorisation principles from the International Medical Device Regulators Forum (IMDRF), along with the FDA benefit-risk framework, risk management principles in the software modifications guidance, and the total product life cycle (TPLC) approach from the FDA Digital Health Pre-Cert program. The FDA also released an action plan for the regulation of AI-based SaMD that reaffirmed its commitment to encourage the development of AI best practices.
The United States Department of Health and Human Services (HHS) has also announced its strategy for the regulation of AI applied in healthcare settings.
Please see the information in 1.1 General Legal Background Framework.
Two competing bills proposing legislation regarding autonomous vehicles (AVs) were introduced in the United States House of Representatives and the United States Senate in 2017. The proposed legislation was not enacted and the United States Congress, as of this writing, has not enacted legislation regarding AVs.
In the USA, 38 states and the District of Columbia (a separate jurisdiction) have enacted legislation or executive orders relating to AVs. Those in some states allow for studies or authorise funding; others allow for testing of AVs with an operator in the car; others allow for testing of AVs without an operator; others allow for full deployment of an AV with an operator; and others allow for full deployment of an AV without an operator.
California’s laws address the testing of fully driverless AVs and the deployment of AVs, and regulate manufacturers. In order for a manufacturer to test a fully driverless AV, the manufacturer must certify that its autonomous vehicles comply with all applicable Federal Motor Vehicle Safety Standards and must provide proof of the ability to meet up to USD5 million in potential damages. When testing AVs, manufacturers must report to the California DMV, within ten days, any collision that causes damage, injury or death. California also requires manufacturers to have a plan that details how law enforcement and first responders should interact with an AV. California is one of the first states to place restrictions on a manufacturer’s collection and use of personal information through the AV. California also requires that AVs meet current industry cybersecurity best practices.
It is noteworthy that the California Department of Motor Vehicles ruled that Tesla’s “full self-driving” (FSD) beta requires human intervention, and thus is not subject to California’s AV laws.
Arizona’s laws are less stringent than those of other states and do not distinguish between the testing of AVs and the operation of AVs on public roads. Arizona law allows AVs performing commercial services to be operated fully autonomously, without a driver, if the operator has an interaction plan in place that complies with the protocol established by the Arizona Department of Public Safety and certifies to the Department of Transportation that the vehicle meets certain standards and is insured. The AV must also achieve a “minimal risk condition” if the automated driving system fails, with minimal risk being defined as “a condition to which a human driver or an automated driving system may bring a vehicle in order to reduce the risk of a crash when a given trip cannot or should not be completed”. A licensed driver may operate an AV so long as the individual can resume driving or respond to any request from the AV to intervene.
Proposed legislation is pending in New Jersey and would require, among other things: manufacturers to have proof of insurance in the amount of USD5 million or more; and an employee, contractor or other person designated by the manufacturer to operate the vehicle, in order to allow testing of AVs on public roads.
Because AVs are relatively new, there is not an extensive body of case law regarding liability when an AV causes an injury. Further, most lawsuits in the USA have been civil suits that settled before any court reached judgment.
The first criminal case was filed against a driver of an AV in January 2022 in California. The defendant was charged with manslaughter after his Tesla, while on Autopilot (Tesla’s AV feature), collided with another vehicle, resulting in the death of two individuals. This criminal case could be the first indicator of how courts will treat the issue of liability.
The uses of AI in an employment/hiring context include:
In 2021, the United States Patent and Trademark Office issued a ruling refusing to permit an AI “machine” to be identified as an inventor under patent law. A civil action was then filed in the US Federal District Court in Virginia, where the court agreed with the Patent Office and ruled that the US Patent Act’s use of the term “individual” is limited to a human person and that, therefore, only humans can be inventors. That decision is now on appeal before the specialised appellate court, the United States Court of Appeals for the Federal Circuit.
A similar result has been reached under the US Copyright Act: US federal courts have ruled that only human beings can be authors of copyrighted works. It remains an open question whether a corporation can, via nonhuman means, author a work in which it owns a copyright. The question of authorship remains primarily a question of the allocation of rights between the operator of the software and the authors or other participants in the creation of the software that is used to create an output. However, the fact that a piece of software creates an output does not automatically mean that the output is protected by copyright.
The scope of discovery in the USA is broader than in Europe. This results in a large volume of material being produced, including in digital form. AI is used to address the tremendous volume of documentation as part of the process to determine the relevant documents and to determine the documents that are responsive to discovery requests.
AI is also used to establish patterns and guide in the organisation of evidence.
A US Board of Directors is equivalent to a European Supervisory Board, and company executives are equivalent to a European Management Board.
Board of Director activities include:
Looking forward, it is possible to make a number of predictions, as follows.
The Chrysler Building
405 Lexington Avenue
New York, New York 10174
USA
+1 212 554 7800
Wtanenbaum@mosessinger.com
www.mosessinger.com