Post by account_disabled on Feb 15, 2024 5:02:23 GMT -5
They are also trained on large amounts of data, which raises privacy and copyright concerns. Yet GPAI models (such as GPT-4) power many of our AI applications and form the cornerstone of the AI industry, so you can't regulate them too strictly either. Currently, EU legislators have begun to propose modest transparency and testing requirements for GPAI models. But this is a bare minimum; it does not address content moderation, energy consumption, and other issues typical of GPAI models. That said, a model only provides capabilities: even the most powerful models, those posing “systemic risk”, need applications to have real-world impact.
Conversely, models that do not pose “systemic risk” may still be applied in unsafe or dangerous ways. So as long as the rules around “prohibited” and “high-risk” systems are in effect, the current GPAI model rules are probably fine for now and can be strengthened gradually over time.

3. Responsibility remains unclear

Who is responsible when AI goes wrong? The EU Artificial Intelligence Act identifies two key roles: “providers” and “deployers”. There are other roles, such as “importer” and “distributor”, but let's focus on the first two. A “provider” is a person who develops an AI system and places it on the market.
By default, “providers” are responsible for ensuring that their AI systems comply with the Act. “Deployers” are users of an AI system. Generally speaking, “deployers” are not responsible for the AI system unless they make a “substantial modification” to it (in which case the deployer is treated as a new provider). But what qualifies as a “substantial modification”? It's not clear yet. This is especially tricky when it comes to fine-tuning: when a deployer trains a model on their own data, for example, does that count as a “substantial modification”? If so, the deployer becomes responsible for the fine-tuned model.