The AI governance landscape is shifting — fast. From US Senate bills and executive orders to emerging guidelines and best practices, regulatory entities across the globe are making moves to shape the way organizations handle AI and privacy.

With Nymity Research, you can access all the latest global regulatory information, vetted by more than 20 in-house legal experts.

There are 200+ references on AI in Nymity Research. Check out a few of our favorites, which we have made freely available below:

FTC Highlights Scope of Enforcement Powers (Jan 2024)

>>> Read the full reference in Nymity Research & Alerts: FTC Highlights Scope of Enforcement Powers

Model-as-a-service AI companies that develop and host models for third-party businesses will be subject to FTC enforcement for confidentiality and privacy violations where their AI models unlawfully obtain and process customer personal data, where they fail to deliver on commitments made to customers (e.g., disclosing customers’ sensitive health data to advertising companies despite a privacy policy promising otherwise), or where their AI models are used for deceptive practices that may violate antitrust and consumer protection laws.

FTC Banning AI-Based Technology After Lack of Consumer Safeguards (Jan 2024)

>>> Read the full reference in Nymity Research & Alerts: FTC Banning AI-Based Technology After Lack of Consumer Safeguards

A retailer deployed AI-based facial recognition technologies (“FRTs”) in its stores without implementing safeguards to detect false positive matches on consumers who were believed to have committed criminal activities, and failed to verify the accuracy of its FRTs.

The company is prohibited from using FRTs for five years, must delete (and notify its third parties to delete) any images and videos of consumers collected by its FRTs, and must notify consumers when their biometric information will be enrolled in a database used to operate a biometric security or surveillance system.

US Senate Proposes Bill to Boost Accountability and Transparency Requirements (Nov 2023)

>>> Read the full reference in Nymity Research & Alerts: US Senate Proposes Bill to Boost Accountability and Transparency Requirements

If passed, the Artificial Intelligence Research, Innovation, and Accountability Act will require organizations to submit a transparency report detailing the design and safety plans of high-impact AI systems (i.e., AI systems that make decisions that could affect individuals’ opportunities, such as their access to housing), provide a transparency notice informing users that they are interacting with AI-generated content, and conduct and submit a risk management assessment report on critical-impact AI systems (e.g., describing the structure and capabilities of the AI system and how the organization assesses AI risks).

DPA Hamburg’s Checklist on the Use of LLM-Based Chatbots (Nov 2023)

>>> Read the full reference in Nymity Research & Alerts: DPA Hamburg’s Checklist on the Use of LLM-Based Chatbots

The checklist provides general guidance for companies that use large language model (LLM)-based chatbots and other generative AI tools.

Notable practices include documenting internal guidelines and training employees on permissible uses of such tools, providing employees with professional accounts (rather than having them create their own personal accounts), ensuring that no personal data is transmitted to the chatbot where the AI provider is permitted to use the data for its own purposes, and avoiding entries that relate to specific individuals (including entries that may allow conclusions to be drawn about them).

Automated Decisions: California CPPA Drafts Regulatory Framework for Businesses (Nov 2023)

>>> Read the full reference in Nymity Research & Alerts: Automated Decisions: California CPPA Drafts Regulatory Framework for Businesses

Businesses that use automated decision-making technologies (ADMT) shall provide consumers with a Pre-use Notice disclosing the business’s use of ADMT, consumers’ right to opt out of the ADMT (e.g., where it is used to make decisions that produce legal effects), and consumers’ ability to access information about the business’s use of ADMT (e.g., the ADMT’s processing parameters).

Businesses using ADMT to profile a minor must obtain verifiable consent from the minor’s parent or guardian, and shall inform the parent of the right to opt out.

Employee Data: Global Privacy Assembly Highlights AI Risks (Nov 2023) 

>>> Read the full reference in Nymity Research & Alerts: Employee Data: Global Privacy Assembly Highlights AI Risks

In light of the risks associated with using AI systems for employment purposes (e.g., processing employees’ personal data), the Global Privacy Assembly suggests that organizations design AI systems that are human-centric, incorporate data protection principles and privacy-by-design elements into AI systems, establish a legal basis for processing employees’ personal data, disclose to employees how AI systems produce automated decisions about them, and allow employees to exercise their right to request human intervention when an automated decision is not in their favor.

Best Practices: Global Privacy Assembly Regulatory Visions on Generative AI (Nov 2023)

>>> Read the full reference in Nymity Research & Alerts: Best Practices: Global Privacy Assembly Regulatory Visions on Generative AI

Due to growing concerns about the unregulated use of generative AI, the GPA emphasizes implementing basic data protection principles throughout the generative AI system lifecycle (e.g., establishing a legal basis for processing personal data and practicing data minimization), conducting DPIAs to identify data risks throughout the AI lifecycle, implementing security safeguards to defend against attacks aimed at vulnerable generative AI systems, and disclosing to individuals the purposes for which generative AI is used.

US Executive Order Aims to Address Safety, Security, and Trustworthiness (Oct 2023)

>>> Read the full reference in Nymity Research & Alerts: US Executive Order Aims to Address Safety, Security, and Trustworthiness

The Executive Order calls on Congress to pass bipartisan data privacy legislation to protect all Americans (especially children) from the harms of AI; directs companies developing foundation models that pose a serious risk to national interests to notify the federal government when training the model and to share the results of red-team safety tests before going public (NIST will set rigorous red-team standards); aims to accelerate the development and use of privacy-preserving technologies; seeks to develop best practices that mitigate harms and maximize the benefits of AI for workers; and advances the responsible use of AI in healthcare.