Constitutional Rights
The United States Constitution does not explicitly include a right to privacy. The Bill of Rights does, however, protect certain aspects of privacy. For example, the First Amendment protects the privacy of beliefs, the Third Amendment protects the privacy of the home by prohibiting the quartering of soldiers in private homes without the owner’s consent, the Fourth Amendment prohibits unreasonable searches and seizures, and the Fifth Amendment creates rights relevant to both criminal and civil legal proceedings. Moreover, the Ninth Amendment provides that the enumeration of certain rights in the Bill of Rights shall not be construed to deny the existence of other rights. Some commentators interpret the Ninth Amendment as affirming the existence of rights beyond those expressly protected by the Bill of Rights. Finally, certain court decisions indicate that the right to privacy, especially in marital relations, is part of the liberty interest protected by the 14th Amendment.
Sector-Specific Data Protection Legislation
There is currently no single, all-encompassing federal legislation covering privacy and the protection of personal information generally in the USA. Instead, legislation at the federal level primarily protects data in specific sectors, such as healthcare, education, communications and financial services or, in the case of online data collection, that of children. Examples of such laws include:
The sectoral approach adopted by US federal law to address privacy and data protection means that each state may enact its own laws governing privacy and data protection. As a result, privacy requirements differ from state to state, and cover different areas. Where a federal statute covers a specific topic, it may pre-empt a similar state law on the same topic.
The Federal Trade Commission
The Federal Trade Commission (FTC) is an independent US law enforcement agency charged with protecting consumers and enhancing competition across broad sectors of the economy. The FTC’s primary legal authority comes from Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive practices” in the marketplace. The FTC has taken the view that “unfair or deceptive practices” include, for example, a company’s failure to adhere to its own published privacy notice and the company’s failure to provide an adequate level of security for the personal information it holds, as well as the use of deceptive marketing practices. If a company violates an FTC order, the FTC can seek civil monetary penalties for the violations. The FTC can also obtain civil monetary penalties for violations of certain privacy statutes and rules. This broad authority allows the FTC to address a wide array of practices affecting consumers, including those that emerge with the development of new technologies and business models.
The FTC
The FTC, an independent US federal law enforcement agency charged with protecting consumers, has become the primary privacy and security enforcement agency in the USA. The FTC’s primary legal authority comes from Section 5 of the Federal Trade Commission Act, under which the FTC’s jurisdiction is limited to challenging privacy violations by organisations whose information practices are considered “deceptive” or “unfair.” For example, when a company claims that it will safeguard the personal information of its customers, the FTC may bring an enforcement action to ensure that the company lives up to that promise. On this basis, the FTC has brought legal actions against organisations that have violated consumers’ privacy rights, misled them by failing to maintain security for sensitive consumer information, or caused substantial consumer injury.
In addition to its authority to take action against deceptive or unfair trade practices, the FTC has the authority to enforce several sector-specific laws, which include the CAN-SPAM Act, COPPA, the FCRA, and the TCFAPA, among others. Since the FTC’s enforcement actions nearly always result in settlement agreements with companies, the contents of those agreements are used by companies looking for guidance in developing privacy practices.
The FTC may start an investigation on its own based on publicly available information, at the request of another agency, or based on complaints from consumers or competitors.
Other Agencies
Other agencies at the federal and state-level, as well as state consumer protection regulators (usually the state Attorneys General), may also exercise regulatory authority in relation to privacy. At the federal level, examples include the Office of the Comptroller of the Currency, the Department of Health and Human Services, the Federal Communications Commission, the Securities and Exchange Commission, the Consumer Financial Protection Bureau and the Department of Commerce.
State Attorneys General
State Attorneys General have the power to bring enforcement actions based on unfair or deceptive trade practices. These powers typically derive from state laws prohibiting “unfair or deceptive acts and practices” and authorising the state Attorney General to initiate enforcement actions.
Recent privacy events have seen increased co-operation and co-ordination in enforcement among state Attorneys General, whereby multiple states jointly pursue actions against companies that experience data breaches or face other privacy allegations. Co-ordinated actions among state Attorneys General often exact greater penalties from companies than a single enforcement authority would typically obtain. In recent years, Attorneys General in states such as California, Connecticut and Maryland have formally created units charged with the oversight of privacy, and the State of New York has created a unit to oversee the internet and technology.
California Privacy Protection Agency
In November 2020, voters in the State of California approved Proposition 24, also known as the California Privacy Rights Act of 2020 (CPRA). The CPRA added new privacy protections to the existing California Consumer Privacy Act of 2018 (CCPA) and created a new agency, the California Privacy Protection Agency (CPPA), to implement and enforce the CCPA and the CPRA. The CPPA may bring enforcement actions related to the CCPA or the CPRA before an administrative law judge. The California Attorney General retains civil enforcement authority over the CCPA and the CPRA.
Adjudication
The FTC determines in an adjudicative proceeding whether a practice violates the law. As mentioned previously, pursuant to Section 5(b) of the FTC Act, the FTC may challenge “unfair or deceptive” acts or practices. When the FTC has “reason to believe” that a violation of the law has occurred, the FTC may issue a complaint setting forth its charges. If the respondent elects to settle the charges, it may sign a consent agreement (without admitting liability), consent to entry of a final order, and waive all right to judicial review. If the FTC accepts the proposed consent agreement, it places the order on the record for thirty days of public comment (or for such other period as the FTC may specify) before determining whether to make the order final.
If instead the respondent elects to contest the charges, the complaint is adjudicated before an administrative law judge (ALJ) in a trial-type proceeding conducted under the FTC’s Rules of Practice. A “complaint counsel,” who is a staff member from the relevant bureau or a regional office, conducts the prosecution of the matter. Upon conclusion of the hearing, the ALJ issues an “initial decision” setting forth their findings of fact and conclusions of law, and recommending either the entry of an order to cease and desist, or the dismissal of the complaint. Either complaint counsel or respondent, or both, may appeal the initial decision.
Upon appeal of an initial decision, the FTC receives briefs, holds oral argument, and thereafter issues its own final decision and order. The FTC’s final decision is appealable by any respondent against which an order is issued. The respondent may file a petition for review with any US Court of Appeals within whose jurisdiction the respondent resides or carries on business or where the challenged practice was used. If the Court of Appeals affirms the FTC’s order, the Court enters its own order of enforcement. The party losing in the court of appeals may seek review by the Supreme Court.
Enforcement
An FTC order generally becomes final (ie, binding on the respondent) sixty days after it is served, unless the order is stayed by the FTC or by a reviewing court. Divestiture orders become final after all judicial review is complete (or if no review is sought, after the time for seeking review has expired). If a respondent violates a final order, it is liable for a civil penalty for each violation. The penalty is assessed by a federal district court in a suit brought to enforce the FTC’s order.
Where the FTC has determined in a litigated administrative adjudicatory proceeding that a practice is unfair or deceptive, and has issued a final cease and desist order, the FTC may obtain civil penalties from non-respondents who thereafter violate the standards articulated by the FTC. To accomplish this, the FTC must show that the violator had “actual knowledge that such act or practice is unfair or deceptive and is unlawful” under Section 5(a)(1) of the FTC Act. To prove “actual knowledge,” the FTC typically shows that it provided the violator with a copy of the FTC’s determination about the act or practice in question, or a “synopsis” of that determination.
APEC’s CBPR System
The USA participates in the Asia-Pacific Economic Cooperation (APEC) Cross-Border Privacy Rules (CBPR) system. At this stage, around twenty US companies are certified under the CBPR system and are therefore required to implement privacy policies and practices that are consistent with the CBPR programme requirements. The objective of the CBPR system is to bridge national privacy laws within APEC and reduce barriers to the flow of information. Certified businesses also demonstrate their commitment to consumer privacy through this system.
Transfers from the EEA: the Privacy Shield and SCCs
Data transfers from the European Economic Area (EEA) to countries outside the EEA may only take place if the destination country offers an “adequate” level of data protection, which generally means a level equivalent to that of the EU General Data Protection Regulation (GDPR). The European Commission has determined that several countries ensure an adequate level of protection due to their domestic law or the international commitments they have entered into. Pursuant to EU data protection law, the USA is not considered to offer an “adequate” level of data protection in relation to transfers of data. However, the EU Commission considered data transfers to US organisations that were certified under the EU-US Privacy Shield framework to be adequate. The EU Commission’s adequacy decision on the Privacy Shield framework was adopted on 12 July 2016, and the Privacy Shield framework became operational on 1 August 2016.
On 16 July 2020, the Court of Justice of the European Union (CJEU) issued a ruling invalidating the EU-US Privacy Shield framework and setting out stricter criteria for using other safeguards such as standard contractual clauses (SCCs) or binding corporate rules (BCR). In particular, the CJEU pointed to the far-reaching possibilities of surveillance that exist under US national security laws. The CJEU identified Section 702 of the Foreign Intelligence Surveillance Act (FISA), Executive Order 12333 and Presidential Policy Directive 28 (PPD-28), which allow US intelligence agencies to collect data on foreign nationals, as inconsistent with rights guaranteed in the Charter of Fundamental Rights of the European Union.
On 4 June 2021, the European Commission issued an updated set of SCCs for data transfers from controllers or processors located in the EEA (or otherwise subject to the GDPR) to controllers or processors established outside the EEA (and not subject to the GDPR). Since then, there have been decisions, such as from the Austrian or French data protection authorities in relation to Google Analytics, which have invalidated certain data transfers from the EEA to the USA due to concerns surrounding the potential accessibility of the data by intelligence services.
There are a number of non-governmental organisations (NGOs) in the USA that are focused on privacy and data protection issues.
The USA and the EU have a fundamentally different approach to privacy law. Generally, the EU member states view privacy as a fundamental human right and freedom. In particular, Article 8 of the EU Charter of Fundamental Rights proclaims that “everyone has the right to the protection of personal data concerning him or her”, and also that “everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.” In addition, even before the GDPR was adopted, the European approach was to use a comprehensive or omnibus approach to data protection law, where an overarching law covers the collection of all information and data relating to all EU data subjects.
By contrast, the US Constitution contains no express right to privacy. Moreover, rather than create an overarching privacy regulation, the USA has enacted various privacy laws as and when a need for them arises, based on a sectoral approach. As discussed in 1.1 Laws, there are a number of laws covering specific sectors: for example, health information is regulated under HIPAA, financial information is regulated under the GLBA and FCRA, and marketing can be regulated under the TCPA or CAN-SPAM regulations.
Moreover, information relating to an individual is typically referred to as “personally identifiable information” (PII) or “personal information”, in contrast to the concept of “personal data” found in the European framework. Under US law, the scope of PII or personal information is not uniform, as the information protected varies across statutes and states. In particular, certain types of data may be protected for a given purpose under a specific framework, but not for another. Personal data, in the context of the GDPR, covers a much wider range of information than PII. In other words, all PII is personal data but not all personal data is PII.
Unsurprisingly, terminology and concepts introduced in the GDPR, such as “processor,” “controller,” “data subject,” or “sensitive personal data”, are generally not applicable in the USA, among (many) other differences.
A number of key developments have taken place in the past 12 months affecting US businesses.
Schrems II
On 16 July 2020, the CJEU issued its decision in Case C-311/18, Data Protection Commissioner v Facebook Ireland and Maximillian Schrems (also known as the “Schrems II case”) invalidating the EU-US Privacy Shield framework as an approved data transfer mechanism. In addition, the CJEU ruled that international data transfers from the EEA to a country outside the EEA may continue to be based on other approved transfer mechanisms, such as the EU standard contractual clauses (SCCs), as long as additional safeguards are in place. SCCs contractually bind third parties that receive the personal data of EEA residents in order to provide privacy protections that are consistent with the GDPR. Binding corporate rules (BCRs) have a similar function and are used by multinational companies to transfer personal data from the EEA to their affiliates located outside of the EEA.
On 4 June 2021, the European Commission issued its Implementing Decision on standard contractual clauses (revised SCCs) for the transfer of personal data from the EEA to “third countries”, such as the USA. This new set of SCCs repeals and replaces the prior sets of SCCs (which were published in 2001, 2004 and 2010) and is intended to address the requirements of the Schrems II case. Since 27 September 2021, it has no longer been possible to conclude contracts incorporating the prior sets of SCCs: all contracts entered into since that date should incorporate the revised SCCs.
In addition, controllers and processors can continue, until 27 December 2022, to rely on the prior sets of SCCs for contracts that were concluded before 27 September 2021, as long as the processing operations that are the subject matter of the contract remain unchanged. Such contracts currently in effect must be updated with the revised SCCs by 27 December 2022.
Brexit
Brexit has also impacted data transfers to the USA, since, in addition to the GDPR, organisations now need to consider the UK General Data Protection Regulation (UK GDPR), as tailored by the UK Data Protection Act 2018. For instance, due to Brexit, the revised SCCs published by the EU Commission, mentioned above, do not directly apply in the UK, since the UK is no longer an EU member state.
On 2 February 2022, the Secretary of State laid before Parliament an international data transfer agreement (IDTA), an international data transfer addendum to the European Commission’s standard contractual clauses for international data transfers (Addendum) and a document setting out transitional provisions. If no objections are raised, they will come into force on 21 March 2022, and exporters will be able to use the IDTA or the Addendum as a transfer tool to comply with Article 46 of the UK GDPR when making restricted transfers.
State Legislation
California Consumer Privacy Act and Privacy Rights Act
At the state level, the California Consumer Privacy Act of 2018 (CCPA) came into effect on 1 January 2020 as one of the most comprehensive privacy laws in the USA. The CCPA established new rights for California residents, additional protections for children’s data and rules around the sale of personal information. The CCPA also included the right for California residents to opt out of the sale of their personal information and the right to non-discrimination in terms of price and services when a consumer exercises a privacy right under the CCPA. The Office of the Attorney General issued regulations in June 2020 clarifying how to implement the CCPA’s requirements. The regulations address topics such as how businesses should provide notice to individuals, verify the identity of persons making requests, and handle requests for the exercise of privacy rights (eg, the right to know, right to access, right to delete and right to opt out). The regulations went into effect on 14 August 2020. Additional amendments to the regulations went into effect on 15 March 2021.
In November 2020, voters in the State of California approved Proposition 24, also known as the California Privacy Rights Act (CPRA), which goes into effect on 1 January 2023. The CPRA amends and expands the existing CCPA. In particular, the CPRA created the California Privacy Protection Agency (CPPA), which has the authority to bring an administrative enforcement action against businesses that violate the CCPA or the CPRA. The Attorney General retains enforcement authority over the CCPA and the CPRA. Changes introduced in the CPRA include:
Virginia Consumer Data Protection Act
On 2 March 2021, the Virginia Consumer Data Protection Act (VCDPA) was signed into law; it takes effect on 1 January 2023. This made Virginia the second state to enact a consumer privacy and data security law, following in the footsteps of California. The VCDPA applies to businesses that conduct business in Virginia, or produce products or services that target Virginia residents, and that (i) during a calendar year, control or process personal data of at least 100,000 “consumers”; or (ii) control or process personal data of at least 25,000 “consumers” and derive over 50% of gross revenue from the sale of personal data. “Consumer” is defined as a natural person who is a resident of Virginia, acting only in an individual or household context. The definition explicitly excludes individuals acting in a commercial or employment context.
The VCDPA grants Virginia consumers the rights to access, correct, delete and know their personal data, and to opt out of its sale and its processing for targeted advertising purposes, similar to the CCPA and CPRA. However, the VCDPA is not a replica of the CPRA; instead, it takes inspiration from the GDPR in a few key areas. For example, it requires covered organisations to perform Data Protection Assessments (not to be confused with Data Protection Addendums), which resemble the GDPR’s Data Protection Impact Assessments (DPIAs), and it adopts terminology similar to that used in the GDPR (ie, “controller” and “processor”). The Attorney General may initiate enforcement actions and seek civil penalties of up to USD7,500 per violation of the VCDPA. There is no private right of action for consumers under the VCDPA.
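The VCDPA’s numeric applicability thresholds can be expressed as a simple two-prong check. The sketch below is purely illustrative: the function name and parameters are hypothetical, it assumes the business already conducts business in Virginia or targets Virginia residents, and it does not model the statute’s definitional nuances or exemptions.

```python
def vcdpa_thresholds_met(consumers_processed: int,
                         revenue_share_from_data_sales: float) -> bool:
    """Illustrative check of the VCDPA's numeric applicability thresholds.

    Assumes the business already conducts business in Virginia or targets
    Virginia residents; statutory exemptions are not modelled.
    """
    # Prong (i): personal data of at least 100,000 consumers in a calendar year.
    if consumers_processed >= 100_000:
        return True
    # Prong (ii): at least 25,000 consumers AND over 50% of gross revenue
    # derived from the sale of personal data.
    return consumers_processed >= 25_000 and revenue_share_from_data_sales > 0.5
```

On this reading, a business processing the personal data of 30,000 Virginia consumers would fall within scope only if more than half of its gross revenue came from selling personal data.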
Colorado Privacy Act
Finally, Colorado became the third state to pass a comprehensive privacy law. The Colorado Privacy Act (the CoPA) was enacted on 8 July 2021 and is set to take effect on 1 July 2023. CoPA applies to legal entities that conduct business or produce commercial products or services that are intentionally targeted to Colorado residents and that either:
Similar to the VCDPA, CoPA’s definition of consumer does not include individuals acting in commercial or employment contexts. CoPA also uses similar terminology to the GDPR (ie, “controller” and “processor”).
More states are expected to enact privacy legislation in the near future: the Indiana Senate unanimously voted to pass SB 358 on 1 February 2022, which mirrors the VCDPA. The bill now moves to the Indiana House. If passed by the House and signed into law, it will go into effect on 1 January 2025.
Significant pending changes, hot topics and issues on the horizon over the next 12 months include the following.
Following the Schrems II decision, SCCs remain a valid EU-US data transfer mechanism but require companies to self-assess the recipient country’s level of protection and adopt supplementary measures where the third country does not provide “a sufficient level” of safeguards in accordance with EU data protection law. In practice, this means that companies can no longer rely on SCCs alone. Over the course of the past few months, European data protection authorities and courts have examined data transfers and have declared them invalid due to the lack of sufficient safeguards, particularly to address the risk of access by intelligence and surveillance agencies. The topic of data transfers is expected to remain a hot topic.
In the 3 November 2020 general election, California voters overwhelmingly voted in favour of Proposition 24 for the enactment of the California Privacy Rights Act (CPRA). The CPRA’s inclusion on the ballot came due to efforts from “Californians for Consumer Privacy”, the same organisation whose privacy ballot initiative in 2018 prompted the California legislature to enact the CCPA. The CPRA provides additional rights to consumers and expands the CCPA’s opt-out rights to include new types of information sharing. The CPRA also established a new state privacy authority, the California Privacy Protection Agency, which is the first dedicated privacy agency in the USA. Its role is to issue rules and enforce the CCPA and the CPRA. The CPRA is set to take effect on 1 January 2023; however, it has a lookback period to 1 January 2022, meaning that it will apply to personal information collected from January 2022 onwards, with some exceptions (such as the rights to access and correct).
Several state legislatures are in the process of approving privacy bills, following in California’s footsteps. As mentioned in 1.7 Key Developments, 2023 will be the year when Virginia’s and Colorado’s new privacy laws enter into effect, and more states are expected to adopt their own privacy laws. In turn, the enactment of various state privacy laws is likely to increase pressure to enact a comprehensive US federal privacy law, as organisations grapple with complying with the various state laws, each imposing slightly different requirements.
As mentioned in 1.1 Laws, there is no federal legislation protecting personal information generally across the country. Rather, there are many laws at the federal level protecting personal information in specific sectors, and in addition, the various privacy laws enacted at state level must be taken into account.
The State of California has traditionally taken a leadership role in the USA in relation to cybersecurity and the protection of the personal information of California residents. For example, California was one of the first states in the nation to provide an express right of privacy in the California Constitution, giving each citizen an “inalienable right” to pursue and obtain “privacy.” California also was the first US state to enact, in 2002, a data breach notification law requiring organisations to notify all impacted individuals “in the most expedient time possible and without unreasonable delay, consistent with the legitimate needs of law enforcement”, whenever information relating to a California resident may have been compromised. The CCPA is the first state-level omnibus privacy law imposing broad obligations on businesses to provide state residents with transparency and control over their personal information.
Territorial Scope
Organisations established in other jurisdictions may be subject to both federal and state privacy laws if they collect, store, transmit, process or share personal information of US residents.
Principles
The FTC has issued various guidance documents addressing principles such as transparency, lawfulness of processing, purpose limitation, data minimisation, proportionality, retention and recommending privacy-by-design practices. The FTC staff has also issued guidance on online behavioural advertising, emphasising core principles such as giving meaningful disclosure and choice to consumers, limiting data retention and obtaining consent where information is intended to be used in a manner that differs from the disclosures made when the data were collected.
Privacy Policy
Certain states have enacted laws requiring the publication of a privacy policy. The first state law in the nation to require commercial websites and online services to post a privacy policy, the California Online Privacy Protection Act (CalOPPA) went into effect in 2004. CalOPPA was later amended in 2013 to require certain disclosures regarding tracking of online visits.
CalOPPA applies to any person or company whose website or online service collects personal information from California consumers. It requires the website to feature a conspicuous privacy policy stating exactly what information is collected and with whom it is shared. Sectoral laws may impose certain requirements. For example, financial institutions covered by the Gramm-Leach-Bliley Act must tell their customers about their information-sharing practices and explain to customers their right to “opt out” if they do not wish their information shared with certain third parties.
Individual Rights
There is no general right of access, rectification, deletion, objection or restriction recognised across the country for all types of personal information. Instead, the existence of these rights depends on each specific statute (there is no common general approach across the country). For example, COPPA provides that parents have a right to review and delete the personal information relating to their children. Pursuant to HIPAA, individuals are entitled to request copies of medical information held by a health services provider. Pursuant to the FCRA, individuals may receive a copy of their credit report maintained by a reporting agency. In relation to state law, the CCPA grants California residents several rights in relation to personal information held by a business relating to that resident, such as the right of access, right of deletion, right to restrict processing, right to data portability, etc.
Registration Requirements
Some states (such as California and Vermont) require data brokers to register with the state Attorney General. For example, California’s data broker law applies to “data brokers” which are defined as businesses that knowingly collect and sell to third parties the personal information of consumers with whom the businesses do not have direct relationships. Data brokers must also pay an annual registration fee. Any data broker that fails to register may be subject to a civil penalty of USD100 for each day it remains unregistered, as well as other penalties, fees, and costs.
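The accrual of the daily penalty under California’s data broker registration law can be sketched as follows. This is a minimal illustration only: the function name is hypothetical, and the other penalties, fees and costs that the statute permits are not modelled.

```python
def ca_data_broker_penalty_usd(days_unregistered: int) -> int:
    """Illustrative accrual of the civil penalty under California's data
    broker registration law: USD100 for each day a data broker that is
    required to register remains unregistered. Other penalties, fees and
    costs that may also apply are not modelled."""
    return max(0, days_unregistered) * 100
```

For example, a broker that remains unregistered for 90 days would face USD9,000 in daily penalties alone, before any additional fees or costs.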
Data Protection Officer
There are no specific requirements to appoint a formal privacy officer or data protection officer in the USA. However, certain regulated entities (eg, those covered by statutes such as HIPAA or the GLBA) are required to comply with certain privacy and security obligations. Some states may also require the formal appointment of an employee to maintain the organisation’s information security programme. New York’s SHIELD Act, which became effective on 21 March 2020, identifies the required components of a data security programme that, if implemented, are deemed to satisfy the reasonableness standard under New York law – the designation of “one or more employees to coordinate the security program” is expressly listed as a “reasonable administrative safeguard.” In any case, appointing a chief privacy officer and a chief information security officer is a best practice that is common among larger organisations and increasingly also among mid-sized ones.
International Transfers
The USA does not have restrictions on the transfer of personal information to other countries.
Data Security and Data Breaches
Certain federal and state laws impose obligations to ensure the security of personal information. The FTC has stated that a company’s security measures must be reasonable. In addition, some federal and state laws establish breach notification requirements. State statutes require the reporting of data breaches to a state agency or Attorney General under certain circumstances.
In light of recent cyber-attacks, certain states have begun enacting laws providing a liability exemption for companies that adopt industry-recognised cybersecurity frameworks such as the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework and the Center for Internet Security’s (CIS) Critical Security Controls. These laws are intended to provide incentives for companies to follow nationally recognised cybersecurity standards, by granting a “safe harbour” against certain state tort law claims in their states in the event of a data breach. Both Utah (March 2021) and Connecticut (July 2021) adopted such cybersecurity safe harbour statutes for businesses impacted by a data breach, following in the footsteps of Ohio, which enacted such legislation in 2018.
In the USA, certain statutes (such as the GLBA and the FCRA) impose additional requirements for sensitive information.
Financial Information
The GLBA regulates the collection, safekeeping, and use of private financial information by financial institutions. For example, according to the GLBA’s Safeguards Rule, if an entity meets the definition of a financial institution, it must adopt measures to protect the customer data in its possession. Financial institutions are required to notify customers of their data practices and privacy policies, prevent the disclosure of personal information to third parties and establish appropriate safeguards to secure personal information.
Where the personal information of customers is impacted by a security breach, financial institutions must notify the relevant regulators and the customers involved. There are a number of regulators that can enforce consumer privacy rights under the GLBA including the Federal Reserve, the Federal Deposit Insurance Corporation, the Office of the Comptroller of the Currency, the Securities and Exchange Commission, the Consumer Financial Protection Bureau and the FTC (for non-bank financial institutions).
Health Information
For organisations operating in the healthcare industry, the Department of Health and Human Services (HHS) enforces compliance with HIPAA and HITECH. HIPAA applies to a range of organisations, such as those that administer health plans, healthcare clearing houses, healthcare providers, service providers that require access to protected health information (PHI) and providers of employee medical insurance. In order to safeguard electronically stored health information, HIPAA requires that organisations enter into business associate agreements with vendors who will require access to PHI. Such agreements restrict the vendors’ use and disclosure of the PHI except as set out in the agreement, and require the vendors to maintain the confidentiality and integrity of the data. HIPAA’s Breach Notification Rule requires any data breaches to be reported to the HHS and imposes civil and criminal penalties on organisations that fail to adequately protect PHI with appropriate information security standards. In addition, HIPAA’s Security Rule requires organisations to maintain appropriate administrative, physical and technical measures to protect the confidentiality, integrity and security of electronic PHI.
Communications Data
Communications data is governed by a number of federal laws such as:
Children’s and Students’ Information
Information relating to children is protected by the Children’s Online Privacy Protection Act (COPPA), which imposes requirements on operators of websites or online services directed to children under the age of 13, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under the age of 13. Among other requirements, operators of websites or online services must post a complete privacy policy online, notify parents directly about their information collection practices, and get verifiable parental consent before collecting personal information from their children or sharing it with others. The FTC is responsible for the enforcement of COPPA.
The Family Educational Rights and Privacy Act (FERPA) is a federal law that protects the privacy of student education records. The law applies to all schools that receive funds under an applicable programme of the US Department of Education. It gives parents or eligible students more control over their educational records, and it prohibits educational institutions from disclosing “personally identifiable information in education records” without the written consent of an eligible student, or if the student is a minor, the student’s parents.
Video Viewing Information
The Video Privacy Protection Act (VPPA), passed by Congress in 1988, is intended to prevent a “video tape service provider” from “knowingly” disclosing an individual’s “personally identifiable information” (PII) to third parties where that individual “requested or obtained […] video materials,” such as “pre-recorded video cassette tapes or similar audio visual materials.” When passing the law, Congress had in mind rental providers of visual materials such as VHS tapes. While the text of the VPPA may appear outdated today, the VPPA has been at the centre of a number of high-profile lawsuits in recent years because its broad language has been applied to digital video materials, such as online video-streaming services. The VPPA creates a private right of action and allows a court to award statutory damages of not less than USD2,500 per violation.
Credit and Consumer Reports
Credit and consumer reports are governed by the FCRA, as amended by the Fair and Accurate Credit Transactions Act 2003, which promotes accuracy, fairness and privacy of the information contained in consumer credit reports and aims to protect consumers from identity theft. The law regulates the way credit reporting agencies can collect, access, use and share the data they collect in individuals’ consumer reports. For example, the FCRA grants consumers the right to request and access all the information a reporting agency holds about them. Enforcement of the FCRA is shared between the FTC and federal banking regulators.
Online Behavioural Advertising
The FTC staff has issued guidance on online behavioural advertising, emphasising the following principles to protect consumer privacy interests.
However, the FTC has not indicated that opt-in consent for the use of non-sensitive information is necessary in behavioural advertising.
The CAN-SPAM Act, a law that sets the rules for commercial email, requires commercial messages to contain a method for recipients to opt out or unsubscribe from such communications without incurring any costs. Despite its name, the CAN-SPAM Act does not apply just to bulk email. It covers all commercial messages, which the law defines as “any electronic mail message the primary purpose of which is the commercial advertisement or promotion of a commercial product or service”, including email that promotes content on commercial websites. The law makes no exception for business-to-business email. That means all email – for example, a message to former customers announcing a new product line – must comply with the law. However, emails that are informational, transactional or relationship-oriented are exempt from CAN-SPAM.
There are federal and state laws that apply to telemarketing communications that vary in the restrictions imposed including restricted calling times, do-not-call registers, opt-out requests, mandatory disclosures and a prohibition on using auto-diallers or pre-recorded messages.
The FTC’s Telemarketing Sales Rule established a national do-not-call register that prevents most unsolicited telemarketing calls, although there are some exceptions. Political parties, charities, debt collectors, health care providers and organisations issuing informational (rather than sales) messages are still permitted to call telephone numbers that feature on the register. Where an individual has an established business relationship with an organisation, the register will not prevent unsolicited calls for a period of 18 months after the relationship has ended.
Under the TCPA, an individual’s express written consent must be obtained before certain marketing texts may be sent to a mobile phone, which includes messages sent using an auto-dialler.
The TCPA and CAN-SPAM Act apply to both business-to-consumer and business-to-business electronic direct marketing.
The FTC, FCC and the state Attorneys General are active enforcers in this area.
California’s Shine the Light Act requires businesses that disclose personal information to third parties for those third parties’ direct marketing purposes to provide notice and access to certain information.
Federal Legislation
Broadly, in the USA, employee monitoring is legal and mostly unregulated. As an employer-provided computer system is the property of the employer, the employer may listen to, watch, and read employees’ workplace communications and, in some cases, personal messages. While the Fourth Amendment of the US Constitution protects the privacy rights of federal, state and local government employees, this protection does not extend to employees in the private sector.
Digital privacy is covered by the Electronic Communications Privacy Act (ECPA), which protects against the interception (in transit) of digital and electronic communications. It also includes the Stored Communications Act (SCA), which, as the name suggests, covers the disclosure of stored communications and records. The ECPA permits employers to monitor the verbal and written communications of their employees, provided there is a legitimate business reason for such monitoring or the employer has obtained the employee’s consent. The SCA has a broader exception, allowing employers to access stored information outside of the ordinary course of business.
State Legislation
There are some state laws that regulate the monitoring of employee communications. In Connecticut, employees must receive written notice of the monitoring and the monitoring methods that will be used. In California, Florida, Louisiana and South Carolina, there is a state constitutional right to privacy, which makes employee monitoring difficult for employers.
On a state level, only Connecticut and Delaware require that employers notify employees about monitoring of email or internet beforehand.
Video Surveillance
The National Labor Relations Board has stated that video surveillance introduced in the workplace is a condition of employment and, as such, should be agreed with trade unions and be subject to collective bargaining unless previously agreed. The National Labor Relations Board recommends that the roll-out of any surveillance or monitoring programme always be subjected to the scrutiny of trade unions. Monitoring employees in relation to trade union activities raises significant issues.
Whistle-Blowing
US employees are protected from retaliation by their employers if they make a protected disclosure under the Whistleblower Protection Act. For federal employees, disclosures are usually made to an Inspector General using a confidential hotline. The Inspector General may not disclose the identity of the disclosing employee unless it is unavoidable or mandated by a court order. For non-federal employers, it is recommended that hotlines allow for anonymous reporting. Further, the Sarbanes-Oxley Act 2002 introduced a requirement for publicly traded companies to implement a mechanism for employees to make anonymous reports of financial irregularities.
FTC Enforcement
The FTC is active in regulating data security and privacy issues. For example, on 1 February 2021, the FTC announced it had finalised a settlement with Zoom, over allegations that the company misled consumers about the level of security it provided in its software for online meetings. The FTC order also requires the company to implement a comprehensive security programme, review any software updates for security flaws prior to release and ensure the updates will not hamper third-party security features. The company must also obtain biennial assessments of its security programme by an independent third party, which the FTC has authority to approve, and notify the FTC if it experiences a data breach.
The possible enforcement penalties available to the FTC include injunctions and damages, although the FTC places greater reliance on consent decrees, under which the FTC monitors the organisation for further violations, each of which may incur financial penalties. On 14 December 2020, the FTC announced that it had issued orders to nine social media and video streaming companies, requesting information on how these companies collect, use and present personal information, their advertising and user engagement practices and how their practices affect children and teens.
To date, the largest fine imposed by the FTC for violating consumers’ privacy is the USD5 billion penalty imposed on Facebook on 24 July 2019 for violations of an earlier FTC order. It is one of the largest penalties ever assessed by the US government for any violation. As part of the new FTC order, the company is required to conduct a privacy review of every new or modified product, service, or practice before it is implemented, and document its decisions about user privacy.
On 4 September 2019, the FTC imposed a total fine of USD170 million on YouTube to settle allegations by the FTC and the New York Attorney General that the video sharing service illegally collected personal information from children without their parents’ consent. Specifically, the settlement included a fine of USD136 million to the FTC and USD34 million to New York for allegedly violating the Children’s Online Privacy Protection Act (COPPA) Rule. The USD136 million penalty is currently the largest amount imposed in a COPPA case.
Enforcement by Other Regulators
The FTC is not the only regulator actively enforcing privacy. On 6 August 2020, the Office of the Comptroller of the Currency imposed a USD80 million civil money penalty against Capital One, a large national bank. It was alleged that the bank failed to establish effective risk assessment processes before migrating information technology operations to a public cloud environment and that it failed to correct the resulting issues in a timely manner.
On 22 July 2020, the New York Department of Financial Services (NYDFS) announced that it had filed administrative charges against First American, an insurance company, pursuant to the NYDFS Cybersecurity Regulation, marking the agency’s first enforcement action since the rules went into effect in March 2017. The NYDFS alleges that the insurer failed to fix a vulnerability on its public-facing website, resulting in the exposure of millions of documents containing consumers’ sensitive personal information, including bank account numbers, mortgage and tax records, social security numbers, wire transaction receipts and drivers’ licence images.
On 24 November 2020, a multistate coalition of Attorneys General announced that Home Depot, a large home improvement retailer, agreed to pay USD17.5 million and implement a series of data security practices in response to a data breach the company experienced in 2014, with the USD17.5 million payment to be divided among the 46 participating states and the District of Columbia.
On 16 August 2021, the US Securities and Exchange Commission announced that Pearson plc, a London-based public company that provides educational publishing and other services to schools and universities, agreed to pay USD1 million to settle charges that it misled investors about a 2018 cyber-incident involving the theft of student data and administrator credentials.
According to the SEC, the company made a reference in its semi-annual report filed in July 2019 (Form 6-K) to a data privacy incident as a hypothetical risk, when the cyber-incident had in fact already occurred. The SEC further alleged that the company subsequently indicated in a media statement that the breach may have included dates of birth and email addresses when such records were in fact known to have been stolen, and that the company did not patch a critical vulnerability for six months after being notified of it. The SEC also held that the company’s disclosure controls and procedures were not designed to ensure that those responsible for making disclosure determinations were informed of certain information about the circumstances surrounding the incident.
This settlement highlights once again the importance of carefully assessing the materiality of a cyber-attack and the importance of providing adequate and accurate disclosures in company filings. In June 2021, the SEC held that another company, First American Financial Corporation, made inaccurate disclosures regarding a cybersecurity incident reflecting inadequate disclosure controls and procedures. These cases show the increased focus on cybersecurity issues and the importance of disclosure controls and procedures to timely escalate cyber-incidents and support any cybersecurity response plans.
Private Litigation
In addition to enforcement from regulatory entities, individuals may bring private rights of action and class actions for privacy and security violations that relate to credit reporting, marketing, electronic communications and call recording, under the respective legislation. Pursuant to the CCPA (California), individuals may bring a private right of action to claim statutory damages where their unencrypted personal information was not adequately protected by an organisation. It is possible that there will be increased class actions in this area.
Employees may also bring a private right of action under the common law, where previous cases have established a precedent regarding the invasion of their privacy by their employer’s workplace monitoring. Employees would need to demonstrate that there was an expectation of privacy in relation to the specific information that has been monitored by an employer.
The Fourth Amendment of the US Constitution protects the privacy of a person and possessions from unreasonable searches and seizures by federal or state law enforcement authorities. This right is triggered where an individual has a reasonable expectation of privacy.
The Fourth Amendment provides safeguards to individuals during searches and detentions, and prevents unlawfully seized items from being used as evidence in criminal cases. The degree of protection available in a particular case depends on the nature of the detention or arrest, the characteristics of the place searched, and the circumstances under which the search takes place.
The reasonableness standard generally requires a warrant supported by probable cause. The search and seizure must also be conducted reasonably.
When law enforcement officers violate an individual’s constitutional rights under the Fourth Amendment, and a search or seizure is deemed unlawful, any evidence derived from that search or seizure will almost certainly be kept out of any criminal case against the person whose rights were violated.
The Foreign Intelligence Surveillance Act
The Foreign Intelligence Surveillance Act (FISA) permits the US government to access personal data for national security purposes. Pursuant to FISA, the government can obtain information, facilities or technical assistance from a broad range of entities. National Security Letters (NSLs) offer an additional investigative tool for limited types of entities. The Foreign Intelligence Surveillance Court (FISC), a federal court staffed by independent, life-tenured judges, approves and oversees FISA activities.
FISA was originally intended to govern surveillance activities targeting individuals inside the USA. In 2008, however, Section 702 was enacted to authorise the acquisition of foreign intelligence information about non-US persons located outside the USA (a non-US person is anyone who is not a US citizen or permanent US resident).
Section 702 operates differently to the “traditional” FISA provisions, which require the government to obtain orders on an individualised basis and demonstrate probable cause in each case. However, pursuant to Section 702, the government does not have to specify which non-US persons will be targeted.
Under Section 702, the Attorney General (AG) and Director of National Intelligence (DNI) submit written certifications to the FISC that jointly authorise surveillance activities for up to a year. A Section 702 certification must be based on specific criteria determined annually by the AG and DNI, pending review and approval by the FISC, and must be accompanied by (and the FISC must approve) targeting procedures defining how the government determines which specific persons’ communications may be acquired.
In practice, the government sends the providers “selectors” (account identifiers such as telephone numbers or email addresses) that are associated with specific “targets” (the individuals or legal entities of foreign intelligence interest). Thus, in the terminology of Section 702, people (eg, non-US persons reasonably believed to be located outside the USA) are “targeted”, whereas “selectors” (eg, email addresses or telephone numbers) are “tasked”. The targeting procedures approved by the FISC are binding on the government and must specify how a “selector” may be “tasked” to acquire the type of foreign intelligence specified in the certification.
Executive Order 12333
Originally issued in 1981, Executive Order 12333 on US Intelligence Activities (EO 12333) was enacted to, among other things, “enhance human and technical collection techniques [of the US government], especially those undertaken abroad, and the acquisition of significant foreign intelligence, as well as the detection and countering of international terrorist activities and espionage conducted by foreign powers.”
In broad terms, EO 12333 provides the foundational authority by which US intelligence agencies collect foreign “signals intelligence” information, being information collected from communications and other data passed or accessible by radio, wire and other electromagnetic means. Unlike FISA’s Section 702, EO 12333 does not authorise the US government to require any company or person to disclose data.
In a press release issued in 2013, the NSA indicated that “Executive Order 12333 is the foundational authority by which NSA collects, retains, analyzes, and disseminates foreign signals intelligence information. The principal application of this authority is the collection of communications by foreign persons that occur wholly outside the United States. To the extent a person located outside the United States communicates with someone inside the United States or someone inside the United States communicates with a person located outside the United States those communications could also be collected. Collection pursuant to EO 12333 is conducted through various means around the globe, largely from outside the United States, which is not otherwise regulated by FISA. Intelligence activities conducted under this authority are carried out in accordance with minimization procedures established by the Secretary of Defense and approved by the Attorney General.”
The NSA further indicated that this process will often involve the collection of communications metadata: “For instance, the collection of overseas communications metadata associated with telephone calls - such as the telephone numbers, and time and duration of calls - allows NSA to map communications between terrorists and their associates. This strategy helps ensure that NSA’s collection of communications content is more precisely focused on only those targets necessary to respond to identified foreign intelligence requirements.”
Similar to FISA’s Section 702, EO 12333 requires procedures to minimise how an agency collects, retains or disseminates US person information. These procedures must be approved by the Attorney General and can be found in documents such as United States Signals Intelligence Directive SP0018 (USSID 18).
Presidential Policy Directive 28
Presidential Policy Directive 28 (PPD-28), a presidential directive in effect since 2014, sets certain binding requirements for SIGINT (ie, signals intelligence) activities. As a formal presidential directive, it has the force of law within the executive branch, and compliance is mandatory.
PPD-28 declares that “all persons should be treated with dignity and respect regardless of their nationality or wherever they might reside, and all persons have legitimate privacy interests in the handling of their personal information.” The order recognises that the same protections and safeguards applicable to Americans (ie, requiring that surveillance take place only for defined and legitimate purposes), also apply to citizens of foreign countries. In particular, PPD-28 delimits the use of SIGINT collected in bulk to detecting and countering six types of threat: (i) espionage and other threats from foreign powers; (ii) terrorism; (iii) threats from weapons of mass destruction; (iv) cybersecurity threats; (v) threats to US or allied forces; and (vi) transnational criminal threats, including illicit finance and sanctions evasion related to the other purposes named in this section.
PPD-28 further provides that “In no event may signals intelligence collected in bulk be used for the purpose of suppressing or burdening criticism or dissent; disadvantaging persons based on their ethnicity, race, gender, sexual orientation, or religion; affording a competitive advantage to U.S. companies and U.S. business sectors commercially; or achieving any purpose other than those identified in this section.” It also requires each intelligence agency to adopt new policies and procedures allowing the retention or dissemination of personal information, regardless of nationality, only if retention or dissemination of “comparable information concerning U.S. persons would be permitted.”
The CLOUD Act
The US Clarifying Lawful Overseas Use of Data Act (CLOUD Act) was passed in 2018, mooting the then pending US Supreme Court case, United States v Microsoft (Ireland), in which Microsoft challenged a warrant from the US government requiring it to produce emails that were electronically stored on servers located in Ireland.
The CLOUD Act amended an existing US law, the Stored Communications Act (SCA), to allow US law enforcement, through a warrant, subpoena or court order, to access communications data stored electronically outside the USA, as long as the information sought is relevant and material to an ongoing criminal investigation.
The CLOUD Act explicitly states that it applies to providers of an electronic communication service or remote computing service who hold or store data or other information “pertaining to a customer or subscriber”, “regardless of whether such communication, record, or other information is located within or outside of the United States.” Accordingly, even if data is stored outside the USA, the US government would still be able to seek access to such data located outside the US, as long as the service provider is subject to the jurisdiction of the USA. These powers apply to any provider of an electronic communication service or remote computing service who is subject to US jurisdiction.
CLOUD Act Agreements
In addition, the CLOUD Act enables the US government to enter into executive agreements with foreign countries, whereby countries that enter into such agreements may request data directly from companies based in the other country. In this respect, the CLOUD Act supplements rather than eliminates mutual legal assistance treaties (MLATs), which remain another method by which evidence in criminal cases is made available to authorities from other countries.
The first known country to have entered into an executive agreement pursuant to the CLOUD Act is the UK: on 3 October 2019, the USA and the UK signed a bilateral agreement under the CLOUD Act, allowing law enforcement bodies from both countries direct access to electronic data stored by companies in the other country. The agreement allows law enforcement, when armed with appropriate court authorisation, to go directly to companies based in the other country to access electronic data, rather than having to go through the other country’s government (which can take years).
“Quashing” CLOUD Act Warrants
In addition, the CLOUD Act provides for a procedure for service providers to file a motion to “quash” (ie, annul) or modify a CLOUD Act warrant, in limited circumstances and subject to several conditions: the CLOUD Act provides that “a provider of electronic communication service to the public or remote computing service, including a foreign electronic communication service or remote computing service, that is being required to disclose” the contents of a communication “may file a motion to modify or quash the legal process where the provider reasonably believes:
a. that the customer or subscriber is not a United States person and does not reside in the United States; and
b. that the required disclosure would create a material risk that the provider would violate the laws of a qualifying foreign government.”
According to the US Department of Justice, a “request to issue a warrant must be submitted to an independent judge for approval. The judge cannot authorize the warrant unless he or she finds that the government has established by a sworn affidavit that “probable cause” exists that a specific crime has occurred or is occurring and that the place to be searched, such as an email account, contains evidence of that specific crime. Further, the warrant must describe with particularity the data to be searched and seized; fishing expeditions to see if evidence exists are not permitted.”
The Safe Harbour arrangement for data transfers between the EU and the USA was introduced back in 2000. In the Schrems I case in 2015, the Court of Justice of the European Union (CJEU) invalidated the arrangement based on concerns around government access and inadequate protection of the personal data of EU citizens. Following that decision, the EU and the USA negotiated the Privacy Shield framework in 2016. The Privacy Shield framework was developed to provide a more robust interoperability mechanism to manage transfers of EU personal data between the USA and EU. The agreement was welcomed on both sides of the Atlantic, with the then Vice-President of the European Commission stating that “businesses, especially the smallest ones, have the legal certainty they need to develop their activities across the Atlantic.” Likewise, the then chairwoman of the US Federal Trade Commission (FTC) stated that it was essential to ensure that “consumer privacy is protected on both sides of the Atlantic”.
However, as mentioned in 1.4 Multilateral and Subnational Issues and 1.7 Key Developments, the CJEU invalidated, on 16 July 2020, the Privacy Shield framework based on similar concerns as for the Safe Harbour arrangement. As a result of that decision, the Privacy Shield framework is no longer a valid mechanism to comply with EU data protection requirements when transferring personal data from the EU to the USA.
Likewise, on 8 September 2020 the Federal Data Protection and Information Commissioner (FDPIC) of Switzerland issued an opinion concluding that the Swiss-US Privacy Shield framework does not provide an adequate level of protection for data transfers from Switzerland to the USA pursuant to Switzerland’s Federal Act on Data Protection (FADP).
On 10 August 2020, US Secretary of Commerce Wilbur Ross and the European Commissioner for Justice Didier Reynders issued a joint statement noting that “the U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case”. The Department of Commerce has further indicated that it will continue to administer the Privacy Shield programme while those discussions proceed.
The fact that the topic of international data transfers between the EU and the USA is again being revisited shows the ongoing tension regarding US government access and the protection of personal information under US law, leaving any replacement transfer mechanisms equally vulnerable to a legal challenge before the CJEU.
Furthermore, when the US government issues a warrant to an organisation for access to personal data, a secrecy order may compel that organisation to keep the warrant confidential from the target. Some commentators have expressed concerns that such secrecy is becoming routine and creates difficulties for organisations that are unable to inform customers that their data has been accessed.
There are no restrictions on international data transfers of personal information under US law. However, data transfer restrictions introduced by other jurisdictions, such as those pursuant to EU law, restrict the transfer of personal data relating to EU residents into countries such as the USA that are not deemed to offer an “adequate” level of protection. In order to remedy this situation, companies have to commit to EU principles by entering into arrangements such as binding corporate rules and standard contractual clauses (SCCs) to facilitate the data transfer, and implement supplementary safeguards. On 12 November 2020, the European Commission released a draft set of new SCCs for personal data transferred from the EU to a third country. The consultation on the new draft SCCs closed on 10 December 2020, and the final version of the SCCs is expected to be issued in 2021.
As mentioned in 1.4 Multilateral and Subnational Issues, 1.7 Key Developments and 3.4 Key Privacy Issues, Conflicts and Public Debates, both the EU-US and the Swiss-US Privacy Shield frameworks are no longer a valid transfer mechanism for data transfers to the USA. However, the use of binding corporate rules and the EU Commission’s SCCs remain valid mechanisms, as long as supplemental safeguards are implemented to protect the data.
The USA participates in the Asia-Pacific Economic Cooperation Cross-Border Privacy Rules system. The system allows US companies to demonstrate their compliance with internationally recognised data privacy protections and the framework has been recognised in a number of trade agreements between Canada, Mexico and the USA.
US law does not require any government notifications or approvals in order to transfer personal information internationally.
There are no data localisation requirements under US federal law. However, the transfer of sensitive personal information belonging to US citizens is an emerging issue in the USA. In 2020, the National Security and Personal Data Protection Act was put forward to the US Congress (although it did not pass). The Act sought to address the growing concerns of sensitive personal information being transferred, through social media platforms (ie, TikTok) or for data storage purposes, to countries where the information is accessible to intelligence services. In addition, the Department of Commerce announced, on 18 September 2020, prohibitions on transactions relating to mobile applications WeChat and TikTok to “safeguard the national security of the United States”, further alleging that these apps “collect vast swaths of data from users, including network activity, location data, and browsing and search histories.”
Certain public procurement contracts impose domestic data storage as a requirement. For example, in Google’s agreement with the City of Los Angeles, Section 1.7 provides that “Google agrees to store and process Customer’s email and Google Message Discovery (GMD) data only in the continental United States. As soon as it shall become commercially feasible, Google shall store and process all other Customer Data, from any other Google Apps applications, only in the continental United States.”
There is no law that generally requires software code, algorithms or similar technical detail to be shared with the US government. This does not mean, however, that organisations have never been requested to share such information, for example on the grounds of national security. Such requirements may also exist in certain public procurement contracts.
A mutual legal assistance treaty (MLAT) is the most common method by which foreign enforcement agencies request assistance with cross-border issues and access to information located within another jurisdiction. However, organisations are not compelled to comply with such requests; the MLAT simply provides a formal mechanism for processing information requests.
In 2018, the USA introduced the CLOUD Act as an alternative request mechanism to streamline requests and data exchange between jurisdictions. Under the CLOUD Act, service providers under US jurisdiction may be prevented from disclosing communications to foreign governments unless there is a CLOUD Act agreement in place. However, these executive agreements only lift the blocking statute (the Stored Communications Act) and permit companies to comply with foreign government requests; companies are not required to comply with such requests.
The Stored Communications Act (SCA) operates as a “blocking statute” as it prohibits service providers in the USA from disclosing communications to a foreign government (subject to limited exceptions that do not apply to foreign government requests) unless there is a CLOUD Act agreement in place. The SCA will apply where the information sought by the foreign government relates to the communications of one of its own nationals, even where it relates to the investigation of criminal behaviour. Furthermore, the SCA prevents disclosure of such data even where the foreign government is subject to an order under its own national laws to obtain the required information.
Artificial Intelligence
There are no specific laws in the USA regarding Artificial Intelligence (AI). In 2019, Executive Order No 13,859 was issued, which acknowledged that the US government must facilitate AI research and development and introduced “The American AI Initiative”, which is guided by five core principles.
Technical standards are to be developed by the National Institute of Standards and Technology (NIST) to support the development of reliable AI systems.
Connected TVs
California was the first state in the USA to regulate the collection and use of voice data through connected televisions (ie, smart TVs). Section 22948.20 of the Business & Professions Code provides that a “person or entity shall not provide the operation of a voice recognition feature within this state without prominently informing, during the initial setup or installation of a connected television, either the user or the person designated by the user to perform the initial setup or installation of the connected television.” In short, this section requires manufacturers to provide notice of voice-control features during the initial set up of a connected television. Sections 22948.20 (b) and (c) also restrict the sale or use of voice data for advertising purposes.
Internet of Things (IoT)
California is also the first state in the nation to enact a cybersecurity law for connected devices, with Senate Bill 327 signed into law in October 2019. This law, also known as the “Internet of Things (IoT) Law”, requires device manufacturers to consider and implement security features across all functionality stages of connected devices. Notably, the IoT Law does not appear to be limited to consumer devices: any device that connects to the Internet, regardless of the type of information processed, appears to be covered by this law. Even though California may have taken the first step to address the topic of device security, it seems that other states in the USA may not be far behind (eg, Oregon has already adopted a similar law).
The Oregon law specifies requirements for “reasonable security features” that are similar to those in the California law. Similar to California’s law, the law in Oregon also provides that a reasonable security feature may consist of “compliance with requirements of federal law or federal regulations that apply to security measures for connected devices”. Oregon’s law does, however, include some notable differences from California’s law. Under Oregon’s HB 2395, a “connected device” is restricted to a device that “is used primarily for personal, family or household purposes”, thereby excluding from its scope devices used or sold for business-to-business purposes. In addition, Oregon’s law applies to a narrower range of entities. In Oregon, a “manufacturer” is defined as “a person that makes a connected device and sells or offers to sell the connected device in this state”. In comparison, California’s law defines manufacturers to include any entity that “contracts with another person to manufacture” the connected device on the person’s behalf.
In December 2020, a federal law, the IoT Cybersecurity Improvement Act of 2020, was signed into law, requiring the NIST to develop and publish standards and guidelines addressing the development, management, configuration and patching of IoT devices for use by federal agencies.
Biometrics and Facial Recognition
In the USA, there is no single federal law that regulates the collection and use of biometric data, although state-specific laws are in place. The State of Illinois introduced the Biometric Information Privacy Act (BIPA) in 2008, which regulates how private entities can collect, use and share biometric information and biometric identifiers, and imposes certain security requirements to protect this data. In particular, the Illinois Supreme Court held in Rosenbach v Six Flags Entertainment Corp (2019) that the BIPA does not require persons whose fingerprints or other biometric identifiers are stored without compliance with the law to prove anything more before being able to sue for the statutory damages prescribed by the statute.
The State of Texas introduced a statute similar to the BIPA in 2009, which prevents the collection of biometric identifiers for a commercial purpose unless prior consent has been obtained from the individual. Furthermore, the State of Washington introduced biometric data privacy provisions in 2017 that prevent the use of biometric data without providing prior notice to individuals, obtaining their consent and implementing a mechanism to prevent commercial use of the data. Only Illinois’ BIPA currently provides a private right of action.
More recently, three federal legislative proposals were introduced in 2020 regarding the use of biometric and facial recognition technology: the Ethical Use of Facial Recognition Act, the Facial Recognition and Biometric Technology Moratorium Act of 2020 and the National Biometric Information Privacy Act of 2020. This reflects the increased importance of this area. At the state level, for instance, on 6 January 2021, a bipartisan group of New York state lawmakers introduced Assembly Bill 27, known as the Biometric Privacy Act or BPA, the latest version of proposed privacy legislation that would allow consumers to sue companies for improperly using or retaining their biometric data.
Chatbots
On 1 July 2019, California’s Bolstering Online Transparency Act (BOT Act) came into effect as a reaction to growing concerns that, as technology improves, bots are getting increasingly better at influencing consumers and voters. The BOT Act defines a bot as an “automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” The BOT Act makes it “unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity […] in order to incentivise a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.” There is no liability, however, if the person discloses its use of a bot in a manner that is “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts.”
While the BOT Act does not provide details as to the specific form of disclosure, the legislative history of the BOT Act points to the FTC’s “.com Disclosures: How to Make Effective Disclosures in Digital Advertising”. Under the FTC’s guidance, scrolling and pop-up disclosures are discouraged and affirmative statements should be provided (ie, a statement such as “I am a bot”, as opposed to “I am not a person”). The BOT Act only applies to bots that interact with California residents, but there is currently no indication that the law is limited to California businesses only.
US law does not require organisations to establish protocols for digital governance or fair data practice review boards or committees to address the risks of emerging or disruptive digital technologies. However, organisations can establish such protocols or bodies on a voluntary basis. Microsoft has implemented a technology and corporate responsibility team to provide guidance on ethical business practices, privacy and cybersecurity, and a separate internal board to navigate the issues raised by the use of AI. Other companies, such as Walmart and AIG, have implemented board-level technology committees that are responsible for monitoring technological trends and developments in cybersecurity in order to manage and oversee developing disruptive digital technologies.
Data breaches – in 2020, some of the world’s largest businesses experienced data breaches in connection with the spread of COVID-19 and remote work. The US government also suffered a major data breach. Unsurprisingly, some of these data breaches have resulted in class actions or shareholder derivative litigation. There have also been several settlements resolving data breach cases from previous years.
The TCPA – 2020 also brought significant litigation under the Telephone Consumer Protection Act, highlighting an important division among circuit courts of appeal.
The CCPA – see 1.8 Significant Pending Changes, Hot Topics and Issues.
The BIPA – 2020 was another year of active litigation under the Illinois Biometric Privacy Act (BIPA), which recognises a private right of action. COVID-19 also resulted in new types of BIPA litigation around health screening and remote work.
Other relevant cases relate to COPPA infringements and child privacy cases, among other areas.
The acquirer’s due diligence investigation should at least consider the following (this is not intended to be an exhaustive list).
If any pre-close sharing of data is taking place, then a data transfer agreement will need to be put in place. Typically, this will cover the acquiring company’s obligations around the handling of such data – eg, requiring the acquiring company to:
There is no express US law that mandates disclosure of an organisation’s cybersecurity risk profile or experience. However, the Securities and Exchange Commission (SEC) has issued guidance stating that publicly traded companies should give consideration to cybersecurity risks and incidents when preparing the disclosures required under the Securities Act of 1933 and the Securities Exchange Act, including the periodic and current reports under the Exchange Act. The purpose of these statements is to disclose timely, comprehensive and accurate information regarding a company that investors would consider as part of an investment decision. On this basis, the SEC considers cybersecurity issues relevant to such statements. Furthermore, the SEC considers that it may be necessary to disclose cybersecurity issues as a significant factor where they result in a speculative or high-risk investment. Companies must evaluate their cybersecurity risks based on prior security incidents, the severity and frequency of those incidents, and all other available information.
There are no other significant issues in US data protection practice not already addressed in this article.
2650 Birch Street
Suite 100
Palo Alto, CA 94306
USA
+1 650 313 2361
Paul.Lanois@fieldfisher.com www.fieldfisher.com
Introduction
The year 2021 has again been a momentous one, with the COVID-19 pandemic further accelerating the shift towards an increased reliance on digital platforms and services, which in turn raises new challenges for data privacy and security as innovative data usages emerge. Likewise, privacy and cybersecurity remain a priority for legislators and regulators.
International Transfers of Data
The topic of international transfers of data was already a significant area of development back in 2020, and it remains a hot topic today. As strange as it may seem, one of the biggest privacy developments for international organisations based in the USA comes from outside the country. On 16 July 2020, Europe’s highest court, the Court of Justice of the European Union (the CJEU), issued its decision in Case C-311/18 (more commonly known as the Schrems II case), in which the CJEU invalidated the EU-US Privacy Shield framework as a valid mechanism for the transfer of data from the European Union to the USA. This was primarily due to concerns around the scope of the US government’s surveillance and law enforcement powers (eg, under FISA 702 orders) and the related insufficient redress mechanisms provided to EU residents. Shortly thereafter, the Swiss Federal Data Protection and Information Commissioner followed suit and declared that the Swiss-US Privacy Shield does not provide for an adequate level of protection when transferring data from Switzerland to the USA.
However, while the CJEU invalidated the EU-US Privacy Shield, it also confirmed that international data transfers from the European Union to a country outside the European Union remain permitted based on other approved transfer mechanisms, such as the European Standard Contractual Clauses (SCCs), subject to certain requirements. Specifically, the CJEU held that such transfer mechanisms remain valid, as long as data exporters conduct a case-by-case risk assessment of the applicable data protection laws in the destination country, and, where necessary, put in place “supplementary measures” to ensure the data remains protected to an “essentially equivalent” standard as that provided under European law. In the absence of an essential equivalence, data transfers outside the European Union would not be valid.
On 18 June 2021, the European Data Protection Board (EDPB) adopted its much-awaited final recommendations on data transfers pursuant to the GDPR, following a public consultation on a first draft issued in November 2020. These recommendations include a six-step approach that organisations should follow, including carrying out a transfer impact assessment. On 27 June 2021, the European Commission adopted new SCCs for organisations to implement for transfers of personal data to countries outside the European Union.
The story does not end there. On 13 January 2022, the Austrian data protection authority found that the use of Google Analytics by an Austrian website was a violation of the GDPR because data would be sent to the USA (ie, to Google’s servers in the USA). Google Analytics is a service used by numerous websites to track website traffic and activity, and facilitate search engine optimisation. According to the Austrian regulator, the use of Google Analytics raises concerns, since the data was not sufficiently protected against potential access by US intelligence agencies. Likewise, the European Data Protection Supervisor (EDPS) issued a decision stating that the European Parliament’s COVID-19 testing website was also in breach of the GDPR by using cookies from Stripe and Google Analytics.
In addition, the French data protection authority (CNIL) announced on 10 February 2022 that it has issued a decision against a website operator due to its use of Google Analytics, requiring it “to bring this processing into compliance with the GDPR, if necessary by ceasing to use the Google Analytics functionality (under the current conditions) or by using a tool that does not involve a transfer outside the EU.”
The topic of compliance with the GDPR goes beyond the scope of this section, but from a US perspective, it is clear that the topic of cross-border data transfers originating from the European Economic Area (EEA) will be one of the top concerns for organisations located in the USA. The new SCCs alone are not sufficient to enable a valid transfer of data under the GDPR since these “just” form the basis of a contract between two companies whose terms are not binding on governmental authorities in countries outside the EEA. On this basis, a transfer impact assessment will have to be performed and based on its results, an assessment will have to be made as to whether the implementation of technical, organisational and/or contractual supplementary measures is needed to address any identified shortcomings.
Perhaps the solution may come in the form of a revamped Privacy Shield framework: the EU and US governments have been negotiating a replacement for the Privacy Shield framework since it was invalidated by the Schrems II decision.
The Push towards a US Federal Privacy Legislation
Within the USA, there has recently been an increased legislative interest in and public support for a federal-level data privacy law. Such a federal law would provide baseline data protections for all Americans, aligning with the requirements of the GDPR, streamlining a patchwork of state laws each with their own differing requirements (which sometimes conflict with each other) and providing greater consumer trust in how data is being handled.
Unfortunately, the COVID-19 pandemic appears to have taken most of the legislative focus and made it more challenging to overcome partisan polarisation. For example, two privacy bills were introduced in November 2019 – the Consumer Online Privacy Rights Act (COPRA) from Democrats and the United States Consumer Data Privacy Act (USCDPA) from Republicans – although neither proposal ultimately passed. In May 2020, two bills were introduced, again one from each side of the aisle, aiming to provide privacy protections for COVID-19 contact tracing. Then in 2021, the Setting an American Framework to Ensure Data Access, Transparency, and Accountability (SAFE DATA) Act, as well as the Information Transparency & Personal Data Control Act, were proposed. The fact that federal privacy legislation is regularly put forward suggests growing support for federal legislation offering greater privacy protections.
If new privacy legislation is on the horizon, its focus is likely to be on limiting the data collection activities of companies and strengthening enforcement actions in the event of a privacy violation. The US Federal Trade Commission (FTC) has so far been limited in how it pursues and penalises data protection violations. While the FTC has recently levied significant fines – such as a USD5 billion penalty levied against a social media company (one of the largest penalties ever assessed by the US government for any violation) or a total fine of USD170 million imposed on a video sharing service – commentators have suggested that more enforcement powers are needed to protect consumers. In this respect, the California Privacy Rights Act (CPRA) appears to have taken a similar approach, establishing the California Privacy Protection Agency (Agency) as an “independent watchdog” whose mission is both to “vigorously enforce” the CPRA and “ensure that businesses and consumers are well‐informed about their rights and obligations.” The CPRA amends and extends the California Consumer Privacy Act of 2018 (CCPA), and has vested the Agency with the full administrative power, authority and jurisdiction to implement and enforce both the CCPA and the CPRA.
An issue that may arise with federal privacy legislation is whether such a law would displace state law. One of the concerns of privacy advocates is that a federal law may override existing state laws, such as the CPRA, which may result in privacy protections being reduced or watered down. Another possibility, however, is that federal privacy legislation may still enable individual states to craft their own privacy legislation. For example, states could choose to create their own state privacy laws that differ from the federal legislation, potentially resulting in a scenario in which 50 different states have 50 different versions of the same law, with each state enforcing its own state privacy law rather than the federal privacy law.
The Rise of State Privacy Laws
In the absence of a comprehensive federal privacy framework, states all over the nation are slowly stepping in with their own solutions. The first comprehensive state privacy law, the CCPA, came into effect on 1 January 2020. The CCPA established new rights for California residents, additional protections for children’s data and rules around the sale of personal information. The CCPA also includes the right for California residents to opt out of the sale of their personal information and the right to non-discrimination in terms of price and services when a consumer exercises a privacy right under the CCPA. An updated and extended version of the CCPA – the CPRA (discussed above) – will enter into effect in 2023, bringing additional privacy protections.
Other states have already followed California’s example. On 2 March 2021, the Virginia Consumer Data Protection Act (VCDPA) was signed into law and becomes effective on 1 January 2023. This made Virginia the second state to enact a consumer privacy and data security law. The Colorado Privacy Act (CoPA) was enacted on 8 July 2021 and is set to take effect on 1 July 2023.
Several other states, such as Minnesota, New York, North Dakota, Oklahoma and Washington, have data protection bills currently under consideration by their respective state legislatures. In addition, new technologies such as connected devices, facial recognition technology, biometrics and algorithms have prompted state legislators to propose additional rules to protect consumers.
Please refer to 1.7 Key Developments (State Legislation) in the US Law & Practice chapter of this guide for further discussion of the rise of state-level data privacy legislation.
A Greater Focus on Cybersecurity
On 16 August 2021, the US Securities and Exchange Commission announced that Pearson plc, a London-based public company that provides educational publishing and other services to schools and universities, agreed to pay USD1 million to settle charges that it misled investors about a 2018 cyber-incident involving the theft of student data and administrator credentials.
According to the SEC, the company made a reference in its semi-annual report filed in July 2019 (Form 6-K) to a data privacy incident as a hypothetical risk, when the cyber-incident had in fact already occurred. The SEC further alleged that the company subsequently indicated in a media statement that the breach may have included dates of birth and email addresses when such records were in fact known to have been stolen, and that the company did not patch a critical vulnerability for six months after being notified of it. The SEC also held that the company’s disclosure controls and procedures were not designed to ensure that those responsible for making disclosure determinations were informed of certain information about the circumstances surrounding the incident.
This settlement highlights once again the importance of carefully assessing the materiality of a cyber-attack and the importance of providing adequate and accurate disclosures in company filings. In June 2021, the SEC held that another company, First American Financial Corporation, made inaccurate disclosures regarding a cybersecurity incident reflecting inadequate disclosure controls and procedures. These cases show the increased focus on cybersecurity issues and the importance of disclosure controls and procedures to timely escalate cyber-incidents and support any cybersecurity response plans.
In parallel, certain states have recently begun enacting laws providing a liability exemption for companies that adopt industry-recognised cybersecurity frameworks such as the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework and the Center for Internet Security’s (CIS) Critical Security Controls. These laws are intended to provide incentives for companies to follow nationally recognised cybersecurity standards, by granting a “safe harbour” against certain state tort law claims in the event of a data breach. Both Utah (March 2021) and Connecticut (July 2021) adopted such cybersecurity safe harbour statutes for businesses impacted by a data breach, following in the footsteps of Ohio, which enacted such legislation in 2018.
More Scrutiny and Litigation to Come
Under current law (the CCPA), California residents have a limited ability to directly file an action against a business for violating their privacy rights (a private right of action) when there is a data breach. In most other situations where a person believes that a business has violated their privacy rights, that person’s only recourse is for the Attorney General’s office to decide to take further action. The new CPRA, which will supersede the CCPA and become operative on 1 January 2023, will expand the current CCPA private right of action by authorising consumers to bring lawsuits arising from data breaches involving additional categories of personal information.
Businesses are also coming under more intense scrutiny from regulators. For example, the Federal Trade Commission (FTC) announced that it has initiated an antitrust case against Facebook. The FTC announced, in December 2020, an inquiry into data collection practices at nine social media and video streaming companies (Facebook, WhatsApp, Snap, Twitter, YouTube, ByteDance, Twitch, Reddit and Discord). The FTC indicated that it is seeking information specifically related to:
In a public statement issued by FTC Commissioners Chopra, Slaughter and Wilson, the FTC indicated that it is concerned with the increased surveillance of individuals and monetisation of data, and seeks to “lift the hood on the social media and video streaming firms to carefully study their engines.” The FTC indicated that it is particularly interested in ascertaining the full scale and scope of social media and video streaming companies’ data collection, including how many users these companies have, how active the users are, what the companies know about them, how they got that information, and what steps the companies take to continue to engage users.