
### Controlling AI Tools in the EU: Regulations for GPT-4 and ChatGPT by OpenAI

Models deemed to pose a “systemic risk,” such as GPT-4, would be subject to more rules than less powerful models.

In what is widely regarded as the world’s first comprehensive set of artificial intelligence regulations, the European Union has provisionally agreed to impose restrictions on how advanced AI models such as the one behind ChatGPT may operate.

According to an EU document reported by Bloomberg, developers of general-purpose AI systems, powerful models with a wide range of potential applications, must meet basic transparency requirements unless they provide the software freely as open source.

These standards include:

  • Maintaining an acceptable use policy
  • Keeping up-to-date information on how their models were trained
  • Providing a detailed summary of the data used to train their models
  • Having a policy for complying with EU copyright law

The report indicates that models deemed to pose a “systemic risk” would be subject to additional rules. The EU would assess this risk based on the computing power used to train the model, with the threshold set at more than 10 septillion (10^25) floating-point operations.

Experts suggest that only OpenAI’s GPT-4 currently clears this bar. The EU’s executive arm could also designate other models based on factors such as the size of the training dataset, whether the model has at least 10,000 registered business users in the EU, or the number of registered end-users, among other potential metrics.
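To put the threshold in perspective, here is a minimal, illustrative Python sketch (not part of the EU text) that applies the widely used 6 × parameters × training-tokens rule of thumb for estimating a dense transformer’s training compute and compares the result with the 10^25 FLOP presumption; the model sizes and token counts below are hypothetical assumptions, not figures from the report.

```python
# Illustrative only: rough training-compute estimate vs. the AI Act's
# 10^25 FLOP "systemic risk" presumption, using the common
# 6 * parameters * training-tokens rule of thumb for dense transformers.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # 10 septillion floating-point operations


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total compute used to train a dense transformer."""
    return 6.0 * parameters * training_tokens


# Hypothetical figures for illustration; neither value comes from the EU report.
hypothetical_models = {
    "mid-size open model": (7e9, 2e12),      # 7B parameters, 2T training tokens
    "frontier-scale model": (1.5e12, 1e13),  # 1.5T parameters, 10T training tokens
}

for name, (params, tokens) in hypothetical_models.items():
    flops = estimated_training_flops(params, tokens)
    flagged = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} training FLOPs -> systemic-risk presumption: {flagged}")
```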

The European move to regulate AI tools such as ChatGPT marks one of the earliest attempts globally to rein in the cutting-edge technology.

While the European Commission works on developing more harmonized and lasting controls, these models are expected to adhere to a code of conduct. Providers that do not sign the code would have to prove to the Commission that they comply with the AI Act. The exemption for open-source models does not extend to those posing systemic risks.

Moreover, these models would be required to:

  • Disclose their energy consumption
  • Conduct adversarial or red-teaming testing internally or externally
  • Identify and mitigate potential systemic risks and report any incidents
  • Ensure appropriate security measures are in place
  • Report the data used to fine-tune their models and describe their system architecture
  • Conform to any more energy-efficient standards that may be developed

The preliminary agreement still requires approval from the 27 EU member states and the European Parliament. France and Germany have recently voiced concerns that stringent regulation of general-purpose AI models could disadvantage European competitors such as Germany’s Aleph Alpha and France’s Mistral AI.

Mistral is unlikely to be immediately bound by the general-purpose AI controls, since the company is still in the research and development phase, Spain’s Secretary of State, Carme Artigas, said in an announcement early Saturday.
